This web site is currently very out of date. At the moment my only up-to-date research work and CG art is on Instagram.

TL;DR: Mathematical representations of physical things are not the things themselves – they are insufficiently abstract, and may introduce nonsense which needs to be “discovered” and fixed later.

It occurs to me (and this is often done in various ways) that one should be able to write any physical law as an abstract function defining the “physical system” in question which, when passed through a (possibly non-linear) abstract functional (a function of a function, or “operator”), yields a result equal to zero.
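To pick one standard (and, as argued below, already basis-laden) instance of this shape, the time-dependent Schrödinger equation can be written as an operator applied to the system description, set to zero:

```latex
\mathcal{F}[\Psi] \;\equiv\; \left( i\hbar\,\frac{\partial}{\partial t} - \hat{H} \right)\Psi \;=\; 0
```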

Then, later, we can argue about which basis is best for a particular application when representing the function and the functional, knowing that choosing bases and origins may introduce fake degrees of freedom – so that we then have to say the answer is such and such up to some generator of a bunch of group-invariance nonsense.

For example, in quantum physics we might begin by saying a single particle has the wave function Ψ. As soon as we say it’s Ψ(x,y,z,t) we are already on a loser, because we have fixed four bases and four origins which have to be “unfixed” later. For a start, that means it’s not Lorentz invariant.

But even before that we are assuming non-physical things by saying Ψ is a complex number. In fact it should be normalized over the interval of interest in order to give correct probability values – so it lives in a projective space. There should also be no absolute phase; and if gauge invariance applies, then we shouldn’t be fixing local phase either, by assuming the 12 o’clock phase position at one location is the same as at every other – especially in the context of space-time curvature.
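Written out in the usual conventions (a sketch of the standard textbook forms, nothing specific to this argument), those two redundancies are the ray equivalence and the local phase freedom of electromagnetic gauge theory:

```latex
\Psi \;\sim\; \lambda\,\Psi, \quad \lambda \in \mathbb{C}\setminus\{0\}
\qquad \text{(states are rays in projective Hilbert space)}

\Psi(x) \;\to\; e^{i\theta(x)}\,\Psi(x), \qquad
A_\mu \;\to\; A_\mu - \frac{\hbar}{q}\,\partial_\mu\theta(x)
\qquad \text{(local gauge freedom)}
```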

The above discussion shows that the usual assumptions about Ψ introduce at least two, and possibly an infinite number of, spurious mathematical degrees of freedom into the representation of reality.

General relativity, while a wonder of beauty, is also terrible in that it only fixes the second derivative of the metric, and the Ricci tensor is a reduction of the Riemann curvature tensor; so any solution that represents a particular space-time is just one of an infinite family of equivalent solutions which satisfy the same equations and describe the same physics, even if you stick to one coordinate system.

If the math hadn’t introduced non-physical degrees of freedom, then Higgs wouldn’t have had to discover/introduce the Higgs field and boson, because it would already have been present in the solutions.

I could jokingly claim that the history of physics is a history of people not realizing they were assuming extra degrees of freedom in their equations, and later making great discoveries about physics which are in fact discoveries about math.

The new European GDPR personal data privacy laws allow users to ask any company to delete all their personal data and to provide a copy of it on demand. Non-compliance leads to harsh penalties.

Those laws don’t make any sense (in that compliance is impossible) for companies developing any kind of machine learning / neural networks / artificial intelligence that learns global models from attributes gathered across multiple users. Here is why:

Lawyers expect personal data to be localized and understandable. But increasingly we are aggregating personal data into all kinds of computer models about users, where that data becomes diffuse and incomprehensible.

Just think of it as someone asking you to forget they ever existed and to roll yourself back to whatever you would have been like if you had never had any contact with them, and also they want an exhaustive list of the personal neural mental data you are currently holding on them in a form that they can understand.

It’s important for users to know that, as technology progresses, their data is being utilized in ways that cannot be undone, and that requests for the stored data are becoming impossible to fulfill. However, lawyers and regulators should also understand that aggregating personal data in machine learning algorithms can be an effective form of anonymization.

Adobe likes to take over your computer, especially if you have installed a number of its products or enrolled in Creative Cloud. There will be many Adobe processes running all the time, and various ones running at startup or login. Adobe is well known for creating buggy products with security vulnerabilities, like Flash, and for running many processes that bog down your machine. I just wanted to be rid of them altogether. Here’s what worked for me on a MacBook running macOS 10.13.4.

The goal is to get the following results at the terminal command line: **find ~/Library | grep -i adobe** returns no results; **ps aux | grep -i adobe** returns only the **ps** command itself; **find /Applications | grep -i adobe** returns only other applications that reference Adobe in some passive way (in my case Xcode has some Flash-related libraries); and most importantly, **launchctl list | grep -i adobe** returns no results.

The first thing that I personally had to do was to pay over $100 to terminate my Creative Cloud contract with Adobe. Hopefully you don’t have to do that.

To begin this journey, ensure that no applications are running except for a Finder window and a Terminal window, and maybe this blog entry copied to a text file (not PDF), or printed.

To initialize neural networks it’s often desirable to generate a set of vectors which span the space. In the case of a square weights matrix this means that we want a random orthonormal basis.

The code below generates such a random basis by concatenating random Householder transforms.

```python
import numpy
import random
import math

def make_orthonormal_matrix(n):
    """
    Makes a square matrix which is orthonormal
    by concatenating random Householder transformations
    """
    A = numpy.identity(n)
    d = numpy.zeros(n)
    d[n-1] = random.choice([-1.0, 1.0])
    for k in range(n-2, -1, -1):
        # generate random Householder transformation
        x = numpy.random.randn(n-k)
        s = math.sqrt((x**2).sum())  # norm(x)
        sign = math.copysign(1.0, x[0])
        s *= sign
        d[k] = -sign
        x[0] += s
        beta = s * x[0]
        # apply the transformation
        y = numpy.dot(x, A[k:n, :]) / beta
        A[k:n, :] -= numpy.outer(x, y)
    # change sign of rows
    A *= d.reshape(n, 1)
    return A

n = 100
A = make_orthonormal_matrix(n)

# test matrix
maxdot = 0.0
maxlen = 0.0
for i in range(n-1):
    maxlen = max(math.fabs(math.sqrt((A[i, :]**2).sum()) - 1.0), maxlen)
    for j in range(i+1, n):
        maxdot = max(math.fabs(numpy.dot(A[i, :], A[j, :])), maxdot)
print("max dot product = %g" % maxdot)
print("max vector length error = %g" % maxlen)
```

Another way to do this is to perform a QR decomposition of a random Gaussian matrix. However, the code above avoids calculating the R matrix.

Postscript:

I did some timing tests, and it seems the QR method is about 3 times faster in Python 3:

```python
import numpy
from scipy.linalg import qr

n = 4
H = numpy.random.randn(n, n)
Q, R = qr(H)
print(Q)
```
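One caveat worth knowing about the QR approach: the Q of a plain QR decomposition is orthonormal but not uniformly (Haar) distributed, because the factorization’s sign convention biases it. Fixing the signs of R’s diagonal restores uniformity. A minimal sketch (using `numpy.linalg.qr` rather than SciPy’s; `random_orthonormal_qr` is a name invented here for illustration):

```python
import numpy

def random_orthonormal_qr(n):
    # QR of a Gaussian matrix yields an orthonormal Q, but plain QR is not
    # uniformly distributed over the orthogonal group; multiplying each
    # column by the sign of the matching diagonal entry of R fixes this.
    H = numpy.random.randn(n, n)
    Q, R = numpy.linalg.qr(H)
    return Q * numpy.sign(numpy.diag(R))  # flip columns where diag(R) < 0

A = random_orthonormal_qr(5)
print(numpy.allclose(A.T @ A, numpy.eye(5)))
```

The column flips preserve orthonormality exactly, so the verification loop from the Householder version above passes unchanged.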

Let’s say that we make measurements of a large group of people. Such measurements might include height, weight, IQ, blood pressure, credit score, hair length, preference, personality traits, etc. You can imagine obtaining a mass of data about people like this where each measurement is taken to lie on a continuous scale. Typically the distribution of the population along each one of these measurements will be a bell curve. Most people have average height for example. The interesting fact is that the more measurements you take, the less likely it is that you will find anyone who is simultaneously average along all the dimensions that you consider. All of us are abnormal if you consider enough personal attributes.

This brings us to the shell property of high dimensional spaces.

Let’s consider a normal (Gaussian) distribution in D dimensions. In 1D it is obvious that all the probability bulk is in the middle, near zero. In 2D the peak is also in the middle. One might imagine that this would continue to hold for any number of dimensions, but it is false. The shell property of high dimensional spaces shows that the probability mass of a D-dimensional Gaussian distribution where D>>3 is all concentrated in a thin shell at a distance of sqrt(D) away from the origin, and the larger the value of D, the thinner that shell becomes. This is because the volume of the shell grows exponentially with D compared with the volume around the origin, and so with large D there is essentially zero probability that a point will end up near the center: Mr Average does not exist.
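The shell property is easy to check empirically. This quick sketch (not part of the original post) draws standard Gaussian samples and shows their radii clustering ever more tightly around sqrt(D) as D grows:

```python
import numpy

rng = numpy.random.default_rng(0)
for D in (1, 10, 100, 1000):
    # radii of 10,000 samples from a D-dimensional standard Gaussian
    r = numpy.linalg.norm(rng.standard_normal((10000, D)), axis=1)
    print("D=%4d  mean radius=%8.3f  sqrt(D)=%8.3f  relative spread=%.3f"
          % (D, r.mean(), numpy.sqrt(D), r.std() / r.mean()))
```

The relative spread shrinks roughly like 1/sqrt(2D), which is the precise sense in which the shell becomes thin.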

At the moment I’m writing an integer-based library to bring neural networks to micro-controllers. This is intended to support the ARM and AVR devices. The idea here is that even though we might think of neural networks as the domain of super computers, for small scale robots we can do a lot of interesting things with smaller neural networks. For example a four layer convolutional neural network with about 18,000 parameters can process a 32×32 video frame at 8 frames per second on the ATmega328, according to code that I implemented last year.

For small networks, there can be some on-line learning, which might be useful to learn control systems with a few inputs and outputs, connecting for example IMU axes or simple sensors to servos or motors, trained with deep reinforcement learning. This is the scenario that I’m experimenting with and trying to enable for small, low power, and cheap interactive robots and toys.

For more complex processing, where insufficient RAM is available to store weights, a fixed network can be stored in ROM, built from weights trained off-line using Python code.
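To illustrate one step of such an off-line pipeline (a hypothetical sketch, not the author’s actual library), float weights can be quantized to int8 values plus a single float scale before being emitted as a ROM table:

```python
import numpy

def quantize_weights(W, bits=8):
    # Map a float weight matrix to signed integers plus one scale factor,
    # as one might do before emitting a C array for flash/ROM storage.
    # (Hypothetical helper, for illustration only.)
    qmax = 2**(bits - 1) - 1
    scale = numpy.abs(W).max() / qmax
    Wq = numpy.round(W / scale).astype(numpy.int8)
    return Wq, scale

W = numpy.random.randn(4, 3).astype(numpy.float32)
Wq, scale = quantize_weights(W)
# reconstruction error is bounded by half the quantization step
print(numpy.abs(Wq * scale - W).max() <= 0.5 * scale + 1e-6)
```

On the micro-controller side only the int8 table and the scale need to be stored; the dot products then run entirely in integer arithmetic, with the scale applied once per output.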

I have worked hard to bring a production version of my Nixie display controller to market. You can now order these units from my Etsy store here.

- High quality gold plated surface mount PCB
- Four digit Nixie display; product includes tubes.
- RGB LED back-lighting on each tube independently programmable to generate multiple colors
- The colon indicator can also be turned on and off
- Modules can be stacked next to each other for more digits
- Runs from 9-12V, with on-board 180V power supply
- Easily controlled over a serial line from an Arduino, or from any micro-controller or laptop, to display any digits
- The board can also function as a stand-alone voltmeter
- Based on the familiar ATMega328
- Comes pre-programmed with open source display software
- Easily customized via the ISP port using standard tools
- Most spare micro-controller pins are accessible at the connector
- Based on plug-in IN-4 Nixies which are easily replaced
- Schematics and code are available for easy hacking

I recently finished my design for a Nixie display module. This has four digits that can be controlled from a serial link, or alternatively it can act as a voltmeter. Also the colon and backlighting can be controlled to give different effects and colors. It’s useful for a variety of maker projects, and I am about to manufacture a quantity of them, so let me know if you’d like to be an early adopter.

Sign up here to keep up to date.

Microsoft recently presented the paper “High quality streamable free-viewpoint video” at SIGGRAPH. In this presentation, they capture live 3D views of actors on a stage using multiple cameras, and use computer vision to construct detailed texture-mapped mesh models which are then compressed for live viewing. As a viewer you have the freedom to move around the model in 3D.

I contributed to this project for a year or so when I was employed at Microsoft, working on 3D reconstruction from multiple infra-red camera views, so it was nice to get an acknowledgment. Some of this work was inspired by our earlier work at Microsoft Research which I co-presented at SIGGRAPH in 2004.

It’s very nice to see how far they have progressed with this project, and to see the possible links it can have with the HoloLens augmented reality system.