Let’s look into the future with an antiparticle telescope

I have this belief that antiparticles such as positrons are actually particles traveling back in time. There are a lot of reasons for this, such as the fact that their time evolution contains a minus sign in front of the t.
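
To sketch the standard way this idea gets made precise (the Feynman–Stückelberg interpretation, which is roughly what I have in mind): a plane-wave solution evolves as exp(−iEt/ħ), so a negative-energy solution with E = −|E| evolves as exp(+i|E|t/ħ) = exp(−i|E|(−t)/ħ), which is exactly a positive-energy wave running along −t, i.e. backward in time.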

However there are not a lot of antiparticles in this universe dominated by normal matter (for reasons I will not get into right now). I think this is one of the possible arguments for why we only see into the past and not the future – there is only the most tenuous influence from the future, and nothing humans can perceive. Plus we are constructed from particles evolving forward in time, and everything around us is just normal matter, also traveling forward.

You can also see this as a vast amount of information flowing from the past to the future but only a few bits traveling the other direction.

One can think of particles and antiparticles as being like corkscrews in time, the difference being whether they have left- or right-handed rotation. This is also like a clock turning clockwise or anticlockwise. For example, if a gamma ray turns into an electron–positron pair, one can also picture that as a positron traveling back in time, colliding with the gamma ray, and bouncing forward in time, phase reversed, as an electron.

Anyway, I think it might be interesting to test this by building a space-based telescope that can create a 2D image from the arrival of antiparticles, and another, coincident one creating an image from the arrival of particles.

The question is whether these two images will actually show more or less the same view of the universe, or whether the antiparticle-generated image will show a more evolved state of the cosmos. It's an interesting experiment to propose, and I hope that someone has the influence to make such a project happen.

(Also if it’s happening and I don’t know about it, let me know!)

Mathematics struggles to describe physics

TLDR mathematical representations of physical things are not the things themselves – they are insufficiently abstract, and may introduce nonsense which needs to be “discovered” and fixed later.
 
It occurs to me (and is often done in various ways) that one should be able to write all physical laws as an abstract function that defines the "physical system" in question, which, when passed through a (possibly non-linear) abstract functional (a function of a function, or "operator"), gives a result equal to zero.
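
For example (just to illustrate the shape I mean, nothing new here), the free-particle Schrödinger equation already has this form: define the operator F = iħ ∂/∂t + (ħ²/2m)∇², and the law is simply F[Ψ] = 0. The point is that F and Ψ can be treated as abstract objects; writing them out in a particular coordinate basis is a separate, later step.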
 
Then later we can argue about what basis is best to use for a particular application to represent the function and functional, knowing that choosing bases and origins may introduce fake degrees of freedom, where we then have to say the answer is such and such times some generator of a bunch of group invariance nonsense.
 
For example, in quantum physics we might begin by saying a single particle has the wave function Ψ. As soon as we say it's Ψ(x,y,z,t) we are already on a loser, because we have fixed four bases and four origins which have to be "unfixed" later. And for a start that means it's not Lorentz invariant.
 
But even before then we are assuming non-physical things by saying Ψ is a complex number. Actually it should be normalized over the interval of interest in order to give correct probability values – so it lives in a projective space; there should also be no absolute phase; and if gauge invariance applies, then we shouldn't be fixing local phase either, by assuming the 12 o'clock phase position at one location is the same as at every other – especially in the context of space-time curvature.
 
The above discussion shows that the usual assumptions about Ψ introduce at least two, and possibly an infinite number of, spurious mathematical degrees of freedom into the representation of reality.
 
General relativity, while a wonder of beauty, is also terrible in that it only fixes the second derivative of the metric, and the Ricci tensor is a reduction of the Riemann curvature tensor, so any solution that represents a particular space-time is just one of an infinite family of equivalent solutions which satisfy the same equations and describe the same physics, even if you stick to one coordinate system.
 
If the math hadn’t introduced non-physical degrees of freedom then Higgs wouldn’t have had to discover/introduce the Higgs field and boson because it would have already been present in the solutions.
 
I could jokingly claim that the history of physics is a history of people not realizing they are assuming extra degrees of freedom in their equations, and making great discoveries about physics later, which are in fact discoveries about math.

European privacy laws make no sense for companies that use machine learning

The new European GDPR personal privacy data laws allow users to ask any company to delete all their personal data and to provide a copy on demand. Non-compliance leads to harsh penalties.

Those laws don't make any sense (in that it is impossible to comply with them) for companies that are developing any kind of machine learning / neural networks / artificial intelligence that learns global models of any kind from attributes gathered from multiple users. This is why:

Lawyers expect that personal data is localized and understandable. But increasingly we are aggregating personal data into all kinds of computer models about users where that data becomes diffuse and incomprehensible.

Just think of it as someone asking you to forget they ever existed and to roll yourself back to whatever you would have been like if you had never had any contact with them, and also they want an exhaustive list of the personal neural mental data you are currently holding on them in a form that they can understand.

It’s important for users to know that, as technology is progressing, their data is being utilized in ways that cannot be undone, and that a request for the stored data is becoming impossible to fulfill. However lawyers and regulators should also understand that aggregating personal data in machine learning algorithms can be an effective form of anonymization.

How to remove all Adobe software from your Mac

Adobe likes to take over your computer, especially if you have installed a number of products, or enrolled in Creative Cloud. There will be many Adobe processes running all the time and various ones running at startup or login. Adobe is well known for creating buggy products with security vulnerabilities, like Flash, and for running many processes that bog down your machine. I just wanted to be rid of them altogether. Here's what worked for me on a MacBook with macOS 10.13.4.

The goal is to get the following results at the terminal command line:

  • find ~/Library | grep -i adobe returns no results
  • ps aux | grep -i adobe returns only the ps command itself
  • find /Applications | grep -i adobe returns only other applications that reference Adobe in some passive way (in my case Xcode has some Flash related libraries)
  • and most importantly, launchctl list | grep -i adobe returns no results

The first thing that I personally had to do was to pay over $100 to terminate my Creative Cloud contract with Adobe. Hopefully you don’t have to do that.

To begin this journey, ensure that no applications are running except for a Finder window and a Terminal window, and maybe this blog entry copied to a text file (not PDF), or printed.

How to Make a Random Orthonormal Matrix

To initialize neural networks it's often desirable to generate a set of vectors which span the space. In the case of a square weight matrix this means that we want a random orthonormal basis.

The code below generates such a random basis by concatenating random Householder transforms.


import numpy
import random
import math

def make_orthonormal_matrix(n):
	"""
	Makes a square matrix which is orthonormal by concatenating
	random Householder transformations
	"""
	A = numpy.identity(n)
	d = numpy.zeros(n)
	d[n-1] = random.choice([-1.0, 1.0])
	for k in range(n-2, -1, -1):
		# generate random Householder transformation
		x = numpy.random.randn(n-k)
		s = math.sqrt((x**2).sum()) # norm(x)
		sign = math.copysign(1.0, x[0])
		s *= sign
		d[k] = -sign
		x[0] += s
		beta = s * x[0]
		# apply the transformation
		y = numpy.dot(x,A[k:n,:]) / beta
		A[k:n,:] -= numpy.outer(x,y)
	# change sign of rows
	A *= d.reshape(n,1)
	return A

n = 100
A = make_orthonormal_matrix(n)

# test matrix
maxdot = 0.0
maxlen = 0.0
# check that every row has unit length and that all pairs of rows are orthogonal
for i in range(n):
	maxlen = max(math.fabs(math.sqrt((A[i,:]**2).sum())-1.0), maxlen)
	for j in range(i+1,n):
		maxdot = max(math.fabs(numpy.dot(A[i,:],A[j,:])), maxdot)
print("max dot product = %g" % maxdot)
print("max vector length error = %g" % maxlen)

Another way to do this is to do a QR decomposition of a random Gaussian matrix. However the code above avoids calculating the R matrix.

Postscript:

I did some timing tests and it seems like the QR method is 3 times faster in Python 3:

import numpy
from scipy.linalg import qr

n = 4
H = numpy.random.randn(n, n)
Q, R = qr(H)
print(Q)
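
One caveat (just a side note, not part of the timing test): the Q from a plain QR decomposition is not quite uniformly (Haar) distributed over the orthogonal group, because the factorization convention biases the signs of the columns. If that matters for your application, the usual fix is to flip each column of Q by the sign of the corresponding diagonal entry of R, roughly as sketched below:

import numpy
from scipy.linalg import qr

n = 4
H = numpy.random.randn(n, n)
Q, R = qr(H)
# multiply each column of Q by the sign of the matching diagonal entry of R,
# which removes the sign bias and makes Q uniformly (Haar) distributed
Q *= numpy.sign(numpy.diag(R))
print(Q)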

Mr Average Does Not Exist

Let’s say that we make measurements of a large group of people. Such measurements might include height, weight, IQ, blood pressure, credit score, hair length, preference, personality traits, etc. You can imagine obtaining a mass of data about people like this where each measurement is taken to lie on a continuous scale. Typically the distribution of the population along each one of these measurements will be a bell curve. Most people have average height for example. The interesting fact is that the more measurements you take, the less likely it is that you will find anyone who is simultaneously average along all the dimensions that you consider. All of us are abnormal if you consider enough personal attributes.

This brings us to the shell property of high dimensional spaces.

Let's consider a normal (Gaussian) distribution in D dimensions. In 1D it is obvious that all the probability bulk is in the middle, near zero. In 2D the peak is also in the middle. One might imagine that this would continue to hold for any number of dimensions, but it is false. The shell property of high-dimensional spaces says that the probability mass of a D-dimensional Gaussian with D >> 3 is almost all concentrated in a thin shell at a distance of about sqrt(D) from the origin, and the larger the value of D, the thinner (relative to its radius) that shell becomes. This is because the volume available at radius r grows like r^(D-1), which vastly outweighs the volume near the origin, so with large D there is essentially zero probability that a point will end up near the center: Mr Average does not exist.
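
Here is a quick numerical check of the shell property (just a sketch; the sample count and the choice of dimensions are arbitrary):

import numpy

# sample points from a standard D-dimensional Gaussian and look at how far
# they land from the origin; for large D the radii concentrate near sqrt(D)
for D in (2, 10, 100, 1000):
	x = numpy.random.randn(10000, D)
	r = numpy.sqrt((x**2).sum(axis=1))
	print("D=%4d  mean radius %7.2f  sqrt(D) %7.2f  relative spread %.3f" % (D, r.mean(), numpy.sqrt(D), r.std() / r.mean()))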

Deep Neural Nets for Micro-controllers

At the moment I'm writing an integer-based library to bring neural networks to micro-controllers, intended to support ARM and AVR devices. The idea here is that even though we might think of neural networks as the domain of supercomputers, for small-scale robots we can do a lot of interesting things with smaller neural networks. For example, a four-layer convolutional neural network with about 18,000 parameters can process a 32×32 video frame at 8 frames per second on the ATmega328, according to code that I implemented last year.

For small networks, there can be some on-line learning, which might be useful to learn control systems with a few inputs and outputs, connecting for example IMU axes or simple sensors to servos or motors, trained with deep reinforcement learning. This is the scenario that I’m experimenting with and trying to enable for small, low power, and cheap interactive robots and toys.

For more complex processing, where insufficient RAM is available to store the weights, a fixed network can be stored in ROM, built from weights that have been trained off-line using Python code.
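
As a rough sketch of what that off-line step might look like (the function names, the int8 format, and the symmetric scaling here are my own assumptions for illustration, not the library's actual export format): quantize the trained float weights to 8-bit integers and emit them as a C array that can live in flash/ROM (e.g. PROGMEM on AVR).

import numpy

def quantize_to_int8(w):
	# symmetric quantization: scale so the largest magnitude maps to 127
	scale = 127.0 / numpy.abs(w).max()
	q = numpy.round(w * scale).astype(numpy.int8)
	return q, scale

def emit_c_array(name, q):
	# print an int8_t array that a micro-controller build can place in ROM
	values = ", ".join(str(int(v)) for v in q.flatten())
	print("const int8_t %s[%d] = { %s };" % (name, q.size, values))

w = numpy.random.randn(8, 8)  # stand-in for weights trained off-line
q, scale = quantize_to_int8(w)
emit_c_array("layer0_weights", q)
print("// on the device, recover w as q / %f" % scale)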

Nixie display module is now available

Nixie Module

I have worked hard to bring a production version of my Nixie display controller to market. You can now actually order these units from my Etsy store here.

Features

  • High quality gold plated surface mount PCB
  • Four digit Nixie display; product includes tubes.
  • RGB LED back-lighting on each tube independently programmable to generate multiple colors
  • The colon indicator can also be turned on and off
  • Modules can be stacked next to each other for more digits
  • Runs from 9-12V, with on-board 180V power supply
  • Easily controlled by a serial line from an Arduino or any micro-controller or laptop to display any digits
  • The board can also function as a stand-alone voltmeter
  • Based on the familiar ATMega328
  • Comes pre-programmed with open source display software
  • Easily customized via the ISP port using standard tools
  • Most spare micro-controller pins are accessible at the connector
  • Based on plug-in IN-4 Nixies which are easily replaced
  • Schematics and code are available for easy hacking

Nixie display module

I recently finished my design for a Nixie display module. It has four digits that can be controlled from a serial link, or alternatively it can act as a voltmeter. The colon and backlighting can also be controlled to give different effects and colors. It's useful for a variety of maker projects, and I am about to manufacture a quantity of them, so let me know if you'd like to be an early adopter.

Sign up here to keep up to date.

High quality streamable free-viewpoint video

Microsoft recently presented the paper "High quality streamable free-viewpoint video" at SIGGRAPH. In this work, they capture live 3D views of actors on a stage using multiple cameras, and use computer vision to construct detailed texture-mapped mesh models which are then compressed for live viewing. On the viewer you have the freedom to move around the model in 3D.

I contributed to this project for a year or so when I was employed at Microsoft, working on 3D reconstruction from multiple infra-red camera views, so it was nice to get an acknowledgment. Some of this work was inspired by our earlier work at Microsoft Research which I co-presented at SIGGRAPH in 2004.

It's very nice to see how far they have progressed with this project and to see the possible links that it can have with the HoloLens mixed reality system.