European privacy laws make no sense for companies that use machine learning

The European Union's new GDPR data privacy rules allow users to ask any company to delete all of their personal data and to provide a copy of it on demand. Non-compliance leads to harsh penalties.

Those laws don't make sense (in that full compliance is impossible) for companies developing any kind of machine learning / neural networks / artificial intelligence that learns global models from attributes gathered across multiple users. This is why:

Lawyers expect personal data to be localized and understandable. But increasingly we are aggregating personal data into all kinds of computer models about users, where that data becomes diffuse and incomprehensible.

Just think of it as someone asking you to forget they ever existed and to roll yourself back to whatever you would have been like had you never had any contact with them, while also demanding an exhaustive list of the personal neural mental data you are currently holding on them, in a form that they can understand.

It's important for users to know that, as technology progresses, their data is being used in ways that cannot be undone, and that a request for the stored data is becoming impossible to fulfill. However, lawyers and regulators should also understand that aggregating personal data in machine learning algorithms can be an effective form of anonymization.

How to remove all Adobe software from your Mac

Adobe likes to take over your computer, especially if you have installed a number of its products or enrolled in Creative Cloud. There will be many Adobe processes running all the time, with various ones launched at startup or login. Adobe is well known for creating buggy products with security vulnerabilities, like Flash, and for running many processes that bog down your machine. I just wanted to be rid of them altogether. Here's what worked for me on a MacBook running macOS 10.13.4.

The goal is to get the following results at the terminal command line:

- find ~/Library | grep -i adobe returns no results;
- ps aux | grep -i adobe returns only the ps command itself;
- find /Applications | grep -i adobe returns only other applications that reference Adobe in some passive way (in my case Xcode has some Flash-related libraries);
- and most importantly, launchctl list | grep -i adobe returns no results.

The first thing that I personally had to do was to pay over $100 to terminate my Creative Cloud contract with Adobe. Hopefully you don’t have to do that.

To begin this journey, ensure that no applications are running except for a Finder window and a Terminal window, and maybe this blog entry copied to a text file (not a PDF), or printed.

Continue reading

Deep Neural Nets for Micro-controllers

At the moment I'm writing an integer-based library to bring neural networks to micro-controllers, intended to support ARM and AVR devices. The idea is that even though we might think of neural networks as the domain of supercomputers, for small-scale robots we can do a lot of interesting things with smaller networks. For example, a four-layer convolutional neural network with about 18,000 parameters can process a 32×32 video frame at 8 frames per second on the ATmega328, according to code that I implemented last year.
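To give a flavour of what integer-based inference looks like (this is an illustrative sketch, not the library itself; the function name and the shift-based re-quantization are my own assumptions), a fully connected layer can be evaluated with 8-bit weights and activations and a 32-bit accumulator, which maps well onto both ARM and AVR:

```cpp
#include <stdint.h>

// Illustrative fixed-point dense layer: int8 inputs and weights, int32 accumulator.
// 'shift' rescales the accumulator back into int8 range (a crude re-quantization).
void dense_int8(const int8_t* input, int in_count,
                const int8_t* weights,        // out_count rows of in_count weights
                const int8_t* bias,
                int8_t* output, int out_count,
                uint8_t shift) {
    for (int o = 0; o < out_count; ++o) {
        const int8_t* w = weights + (int32_t)o * in_count;
        int32_t acc = 0;
        for (int i = 0; i < in_count; ++i) {
            acc += (int32_t)w[i] * (int32_t)input[i];   // 8-bit multiply, 32-bit accumulate
        }
        acc = (acc >> shift) + bias[o];                 // rescale, then add bias
        if (acc < 0) acc = 0;                           // rectified linear activation
        if (acc > 127) acc = 127;                       // saturate to int8 range
        output[o] = (int8_t)acc;
    }
}
```

Keeping the accumulator in 32 bits avoids overflow for layers with a few hundred inputs, and the final shift-and-saturate step keeps activations in the int8 range for the next layer.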

For small networks, there can be some on-line learning, which might be useful for learning control systems with a few inputs and outputs, connecting, for example, IMU axes or simple sensors to servos or motors, trained with deep reinforcement learning. This is the scenario that I'm experimenting with and trying to enable for small, low-power, and cheap interactive robots and toys.

For more complex processing, where insufficient RAM is available to store the weights, a fixed network can be stored in ROM, built from weights that have been trained off-line using Python code.
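On an AVR part like the ATmega328, with only 2 KB of RAM, that typically means placing the weight tables in flash with the PROGMEM facility and reading them out as needed. A minimal sketch of how that might look (the array name and values below are made-up placeholders, not real trained weights):

```cpp
#include <avr/pgmspace.h>
#include <stdint.h>

// Placeholder weight table exported from an off-line Python training run and
// compiled into flash (ROM) rather than RAM. The values here are made up.
const int8_t layer1_weights[] PROGMEM = {
    12, -7, 33, 0, -128, 55, 4, -19,   // ...a real layer has thousands of entries
};

// Fetch a single weight from flash as it is needed in the multiply-accumulate loop.
static inline int8_t weight_at(uint16_t index) {
    return (int8_t)pgm_read_byte(&layer1_weights[index]);
}
```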

High quality streamable free-viewpoint video

Microsoft recently presented the paper “High quality streamable free-viewpoint video” at SIGGRAPH. In this work they capture live 3D views of actors on a stage using multiple cameras and use computer vision to construct detailed texture-mapped mesh models, which are then compressed for live viewing. On the viewing side you have the freedom to move around the model in 3D.

I contributed to this project for a year or so when I was employed at Microsoft, working on 3D reconstruction from multiple infra-red camera views, so it was nice to get an acknowledgment. Some of this work was inspired by our earlier work at Microsoft Research, which I co-presented at SIGGRAPH in 2004.

It's very nice to see how far they have progressed with this project and to see the possible links it can have with the HoloLens augmented reality system.


Energy pooling in neural networks for digit recognition

Having trained a two-layer neural network to recognize handwritten digits with reasonable accuracy, as described in my previous blog post, I wanted to see what would happen if neurons were forced to pool the outputs of pairs of rectified units according to a fixed weight schedule.

I created a network that is almost a three-layer network: the outputs of pairs of first-layer rectified units are combined additively before being passed to the second fully connected layer. This means that the first layer maps the 28×28 input to a 50-unit hidden layer of rectified linear units, pairs of these units are then averaged to reduce the count to 25, and the second fully connected layer reduces this down to 10. Finally the softmax classifier is applied.
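As an illustration of the pooling step alone (this is not the code from my library; the function name and the use of std::vector are just for the sketch), averaging fixed pairs of rectified activations looks like this in C++:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Fixed "energy pooling": average adjacent pairs of rectified activations,
// halving the number of units (50 -> 25 in the network described above).
std::vector<float> pool_pairs(const std::vector<float>& hidden) {
    assert(hidden.size() % 2 == 0);
    std::vector<float> pooled(hidden.size() / 2);
    for (std::size_t i = 0; i < pooled.size(); ++i) {
        pooled[i] = 0.5f * (hidden[2 * i] + hidden[2 * i + 1]);
    }
    return pooled;
}
```

Because the pooling weights are fixed at 0.5, backpropagation through this step simply passes half of each pooled unit's gradient to both members of its pair; there is nothing to learn in this layer.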
Continue reading

Training two-layer neural nets on MNIST digits

In my last blog post I talked about trying out my code for training neural nets on a simple one-layer network which consists of a single weight layer and a softmax output. In this post I share results for training a fully connected two-layer network.

In this network, the input goes from 28×28 image pixels down to 50 hidden units. Then there is a rectified linear activation function. The second layer goes from the 50 hidden units down to 10 units, and finally there is the softmax output stage for classification.
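For concreteness, here is a rough C++ sketch of the forward pass for such a network. It is illustrative only, not my library's actual API; the row-major weight layout, the function names, and the helper routines are assumptions made for the example:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Dense layer: y = W * x + b, with W stored row-major (one row per output unit).
std::vector<float> dense(const std::vector<float>& x,
                         const std::vector<float>& W,
                         const std::vector<float>& b) {
    std::vector<float> y(b);
    for (std::size_t r = 0; r < b.size(); ++r)
        for (std::size_t c = 0; c < x.size(); ++c)
            y[r] += W[r * x.size() + c] * x[c];
    return y;
}

void relu(std::vector<float>& v) {
    for (float& a : v) a = std::max(0.0f, a);
}

void softmax(std::vector<float>& v) {
    float m = *std::max_element(v.begin(), v.end());
    float sum = 0.0f;
    for (float& a : v) { a = std::exp(a - m); sum += a; }
    for (float& a : v) a /= sum;
}

// 784 inputs -> 50 hidden units (ReLU) -> 10 outputs -> softmax
std::vector<float> forward(const std::vector<float>& pixels,
                           const std::vector<float>& W1, const std::vector<float>& b1,
                           const std::vector<float>& W2, const std::vector<float>& b2) {
    std::vector<float> h = dense(pixels, W1, b1);   // 50 hidden activations
    relu(h);
    std::vector<float> out = dense(h, W2, b2);      // 10 class scores
    softmax(out);
    return out;
}
```

The predicted class is then simply the index of the largest softmax output.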

When I train this network on the MNIST handwriting dataset I get a test error rate of 2.89%, which is pretty good and actually lower than some of the comparable results quoted on the MNIST web site. It is interesting to inspect the patterns of the weights for the first layer below (here I organized the weights for the 50 hidden units as a 10×5 matrix):

[Image: first-layer weight patterns for the 50 hidden units, arranged as a 10×5 grid]
Continue reading

Training neural nets on MNIST digits

Recently I have been experimenting with a C++ deep learning library that I have written, testing it out on the MNIST handwritten digits data set. In this dataset there are 60,000 training images and 10,000 test images, each of size 28×28 pixels. I have been trying to reproduce some of the error rates that Yann LeCun reports on the MNIST site. The digits are written in many different styles and some of them are quite hard to classify, so it makes a good test for neural net learning.
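As a starting point, the MNIST files use the simple IDX format: a few big-endian 32-bit header fields followed by raw pixel bytes. A minimal C++ loader for the image files might look like the sketch below (the function names and error handling are my own; only the file format itself is taken from the MNIST site):

```cpp
#include <cstdint>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Read a big-endian 32-bit integer from the IDX header.
static uint32_t read_be32(std::ifstream& f) {
    unsigned char b[4];
    f.read(reinterpret_cast<char*>(b), 4);
    return (uint32_t(b[0]) << 24) | (uint32_t(b[1]) << 16) |
           (uint32_t(b[2]) << 8)  |  uint32_t(b[3]);
}

// Load an MNIST image file (e.g. "train-images-idx3-ubyte") as one byte per pixel.
std::vector<std::vector<uint8_t>> load_mnist_images(const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    if (!f) throw std::runtime_error("cannot open " + path);
    uint32_t magic = read_be32(f);   // 2051 for image files
    uint32_t count = read_be32(f);   // 60000 for training, 10000 for test
    uint32_t rows  = read_be32(f);   // 28
    uint32_t cols  = read_be32(f);   // 28
    if (magic != 2051) throw std::runtime_error("not an MNIST image file");
    std::vector<std::vector<uint8_t>> images(count, std::vector<uint8_t>(rows * cols));
    for (auto& img : images)
        f.read(reinterpret_cast<char*>(img.data()), img.size());
    return images;
}
```

The label files have the same structure, with magic number 2049 and one byte per label.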

[Image: examples of MNIST digits]
Continue reading

Two-wheeled rolling robot

I am passionate about machine learning, intelligence, and robotics. I have a number of robot projects on the go. I wanted to build a platform that would allow me to do a lot of complex experiments on sensor fusion and creating intelligent emergent behaviors. I needed to make a robot that has quite a number of sensor inputs, but not so many that it would overload the processing capability to do anything useful. I decided to make a simple two-wheeled robotic platform that has a lot of flexibility and load it up with appropriate sensors.

One of the aspects of my robotics philosophy is that information from simple sensors can be highly informative and that current robot designs jump too quickly to complex high bandwidth data sources and they then do a marginal job of interpreting the information from those sources in software. I am inspired by insects and other small creatures that seem to have small numbers of sensors, for example eyes with only a few photoreceptors, but still have very complex adaptive behaviors which are often leagues beyond what we can do with today’s machines. Part of this is due to the efficiency with which they extract every little bit of useful information out of the sensory data, including correlations we would never think of. I am interested in applying experience gained from machine learning in order to extract from sensors information that could not easily be determined by using hand coded algorithms.

My rolling robot has two wheels, and these have wheel encoders to give feedback on position or wheel rotation speed. It also has an infra-red range finder that can indicate the… Continue reading

How to read old EPROMs with the Arduino

I'm struggling with a health issue at the moment, so I'm doing some small projects to stay sane…

I've been helping a friend fix old pinball games, which typically make use of 8-bit micros like the 6800 or 6502. Often we want to know what's on these old ROM chips, which even some modern device readers can't easily scan. I built a shield for the Arduino that can read them and list the contents over the serial link. The only components other than the Arduino Uno and a prototyping shield were a couple of 74HCT573s and a 24-pin socket.
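For what it's worth, the general shape of the dumping sketch is below. This is a hypothetical reconstruction in Arduino C++: the pin numbers, bus arrangement, ROM size, and the handling of the EPROM's chip-enable and output-enable lines all depend on how the shield is actually wired.

```cpp
// Hypothetical EPROM dumper sketch. Two 74HCT573 latches hold the low and high
// address bytes, so the same eight Arduino pins can set all address lines and
// then be switched to inputs to read the data bus. Pin numbers, ROM size, and
// the EPROM's chip-enable / output-enable wiring are placeholders.

const int BUS_PINS[8]  = {2, 3, 4, 5, 6, 7, 8, 9};  // shared address-out / data-in bus
const int LATCH_LOW    = 10;                        // latch-enable for A0..A7
const int LATCH_HIGH   = 11;                        // latch-enable for A8 and up
const unsigned int ROM_SIZE = 4096;                 // e.g. a 2732 EPROM

void busWrite(byte value) {
  for (int i = 0; i < 8; ++i) {
    pinMode(BUS_PINS[i], OUTPUT);
    digitalWrite(BUS_PINS[i], (value >> i) & 1);
  }
}

byte busRead() {
  byte value = 0;
  for (int i = 0; i < 8; ++i) {
    pinMode(BUS_PINS[i], INPUT);
    if (digitalRead(BUS_PINS[i])) value |= (1 << i);
  }
  return value;
}

void setAddress(unsigned int addr) {
  busWrite(addr & 0xFF);            // low address byte into the first latch
  digitalWrite(LATCH_LOW, HIGH);    // 74HCT573 is transparent while LE is high,
  digitalWrite(LATCH_LOW, LOW);     // and holds the value when LE goes low
  busWrite((addr >> 8) & 0xFF);     // high address byte into the second latch
  digitalWrite(LATCH_HIGH, HIGH);
  digitalWrite(LATCH_HIGH, LOW);
}

void setup() {
  Serial.begin(115200);
  pinMode(LATCH_LOW, OUTPUT);
  pinMode(LATCH_HIGH, OUTPUT);
  for (unsigned int addr = 0; addr < ROM_SIZE; ++addr) {
    setAddress(addr);
    delayMicroseconds(5);           // allow for the EPROM's access time
    Serial.println(busRead(), HEX); // one byte per line over the serial link
  }
}

void loop() {}
```

The dump can then be captured on the PC side with the serial monitor or a terminal program and compared against a known-good ROM image.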

[Image: EPROM reader shield]
Continue reading

Man vs AI: Ethics and the Future of Machines

Recent strides in artificial intelligence from big-name players such as Google, Facebook, and Baidu, as well as increasingly successful heterogeneous systems like IBM's Watson, have provoked fear and excitement amongst the intelligentsia in equal measure. Public figures such as Stephen Hawking are concerned, and not surprisingly the popular press is excited to cover it. Recently, Elon Musk has become worried that AI might eventually spell doom for the human race. He donated $10 million to fund the Future of Life Institute, whose stated goal is to ensure AI remains beneficial and does not threaten our wellbeing. An open letter by this organization, titled “Research Priorities for Robust and Beneficial Artificial Intelligence,” was signed by hundreds of research leaders. The influential futurist Ray Kurzweil has popularized the idea of the technological singularity, where intelligent systems surpass human capabilities and leave us marginalized at best.
Continue reading