I have worked hard to bring a production version of my Nixie display controller to market. You can now actually order these units from the Impressive Machines web site here.
- High quality gold plated surface mount PCB
- Four-digit Nixie display; tubes included
- RGB LED back-lighting on each tube independently programmable to generate multiple colors
- The colon indicator can also be turned on and off
- Modules can be stacked next to each other for more digits
- Runs from 9-12V, with on-board 180V power supply
- Easily controlled over a serial line from an Arduino, any other micro-controller, or a laptop to display any digits
- The board can also function as a stand-alone voltmeter
- Based on the familiar ATMega328
- Comes pre-programmed with open source display software
- Easily customized via the ISP port using standard tools
- Most spare micro-controller pins are accessible at the connector
- Based on plug-in IN-4 Nixies which are easily replaced
- Schematics and code are available for easy hacking
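Since the module is driven over a serial line, a host-side sketch is easy to write. The command format below is purely hypothetical (the real protocol is defined by the module's open source firmware, so check the published code); this just illustrates the shape of a host program:

```python
# Hypothetical example of building a command for the Nixie module.
# The "D" + digits + colon-flag ASCII format is invented for illustration;
# the actual protocol is defined by the module's firmware.

def make_digit_command(digits, colon=False):
    """Build a hypothetical ASCII command to show four digits."""
    if len(digits) != 4 or not digits.isdigit():
        raise ValueError("expected exactly four digits")
    colon_flag = "1" if colon else "0"
    return "D" + digits + colon_flag + "\n"

# With pyserial, the command could then be sent like:
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 9600)
#   port.write(make_digit_command("1234", colon=True).encode("ascii"))
```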
I recently finished my design for a Nixie display module. It has four digits that can be controlled over a serial link, or alternatively it can act as a voltmeter. The colon and backlighting can also be controlled to give different effects and colors. It's useful for a variety of maker projects and I am about to manufacture a quantity of them, so let me know if you'd like to be an early adopter.
Sign up here to keep up to date.
Microsoft recently presented the paper “High quality streamable free-viewpoint video” at SIGGRAPH. In this work, they capture live 3D views of actors on a stage using multiple cameras and use computer vision to construct detailed texture-mapped mesh models, which are then compressed for live viewing. The viewer has the freedom to move around the model in 3D.
I contributed to this project for a year or so when I was employed at Microsoft, working on 3D reconstruction from multiple infra-red camera views, so it was nice to get an acknowledgment. Some of this work was inspired by our earlier work at Microsoft Research which I co-presented at SIGGRAPH in 2004.
It’s very nice to see how far they have progressed with this project and to see the possible links it can have with the HoloLens augmented reality system.
When training neural networks it is a good idea to have a training set which has examples that are randomly ordered. We want to ensure that any sequence of training set examples, long or short, has statistics that are representative of the whole. During training we will be adjusting weights, often by using stochastic gradient descent, and so we ideally would like the source statistics to remain stationary.
During on-line training, such as with a robot, or when people learn, adjacent training examples are highly correlated. Visual scenes have temporal coherence, and people spend long stretches at specific tasks, such as playing a card game, where their visual input, over perhaps hours, is not representative of the general statistics of natural scenes. During on-line training we would therefore expect a neural network's weights to become artificially biased by highly correlated consecutive training examples, so that the network would not be as effective at tasks requiring balanced knowledge of the whole training set.
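For off-line training, the decorrelation described above is easy to get by reshuffling the dataset at the start of every epoch. A minimal NumPy sketch (function name and interface are my own, for illustration):

```python
import numpy as np

def shuffled_epochs(inputs, targets, num_epochs, seed=0):
    """Yield (inputs, targets) in a fresh random order each epoch,
    so consecutive minibatches see approximately stationary statistics."""
    rng = np.random.default_rng(seed)
    n = len(inputs)
    for _ in range(num_epochs):
        order = rng.permutation(n)          # new permutation every epoch
        yield inputs[order], targets[order] # pairing is preserved
```

Minibatches are then sliced off each yielded pair; the same permutation is applied to inputs and targets so examples stay matched to their labels.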
If you are training neural networks or experimenting with natural image statistics, or even just making art, then you may want a database of natural images.
I generated an image patch database that contains 500,000 28×28 or 64×64 sized monochrome patches that were randomly sampled from 5000 representative natural images, including a mix of landscape, city, and indoor photos. I am offering them here for download from Dropbox. There are two files:
image_patches_28x28_500k_nofaces.dat (334MB compressed)
image_patches_64x64_500k_nofaces.dat (1.66GB compressed)
The first file contains 28×28 pixel patches and the second one contains 64×64 patches. The patches were sampled from a corpus of personal photographs at many different locations, uniformly in log scale. A concerted effort was made to avoid images with faces, so that these could be used as a non-face class for face detector training. A few faces have nonetheless slipped through, but their frequency is less than one in one thousand.
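If the files decompress to a flat array of raw 8-bit grayscale pixels, one patch after another, they can be loaded with a few lines of NumPy. That layout is an assumption on my part (check the download page for the actual format); the sketch below shows the general approach:

```python
import numpy as np

def load_patches(path, patch_size=28):
    """Load a patch database, ASSUMING the file is a flat array of uint8
    grayscale pixels stored one patch_size x patch_size patch after another.
    The actual on-disk format may differ -- verify before relying on this."""
    raw = np.fromfile(path, dtype=np.uint8)
    n = raw.size // (patch_size * patch_size)
    return raw[: n * patch_size * patch_size].reshape(n, patch_size, patch_size)
```

For the 64×64 file you would pass `patch_size=64`; the returned array has shape `(num_patches, patch_size, patch_size)`.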
Having trained a two layer neural network to recognize handwritten digits with reasonable accuracy, as described in my previous blog post, I wanted to see what would happen if neurons were forced to pool the outputs of pairs of rectified units according to a fixed weight schedule.
I created a network which is almost a three layer network where the output of pairs of the first layer rectified units are combined additively before being passed to the second fully connected layer. This means that the first layer has a 28×28 input and a 50 unit output (hidden layer) with rectified linear units, and then pairs of these units are averaged to reduce the neuron count to 25, and then the second fully connected layer reduces this down to 10. Finally the softmax classifier is applied.
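The fixed pairwise pooling step is simple to express; a minimal NumPy sketch of just that operation (the function name is mine, and the pairing schedule shown here, adjacent units 0&1, 2&3, and so on, is one plausible fixed schedule):

```python
import numpy as np

def pool_pairs(hidden):
    """Average fixed pairs of rectified hidden units: (batch, 50) -> (batch, 25).
    The pairing is a fixed schedule (units 0&1, 2&3, ...), not learned."""
    batch, n = hidden.shape
    assert n % 2 == 0, "need an even number of hidden units"
    return hidden.reshape(batch, n // 2, 2).mean(axis=2)
```

The 25 pooled outputs then feed the second fully connected layer in place of the original 50 units.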
In my last blog post I talked about trying out my code for training neural nets on a simple one-layer network which consists of a single weight layer and a softmax output. In this post I share results for training a fully connected two-layer network.
In this network, the input goes from 28×28 image pixels down to 50 hidden units. Then there is a rectified linear activation function. The second layer goes from the 50 hidden units down to 10 units, and finally there is the softmax output stage for classification.
When I train this network on the MNIST handwriting dataset I get a test error rate of 2.89%, which is pretty good and lower than some comparable results quoted on the MNIST web site. It is interesting to inspect the patterns of the weights for the first layer below (here I organized the weights for the 50 hidden units as a 10×5 matrix):
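The forward pass of the two-layer architecture described above is short enough to sketch in full. This is not the author's C++ implementation, just an equivalent NumPy version of the same 784 → 50 (ReLU) → 10 → softmax structure:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(x, W1, b1, W2, b2):
    """Two-layer net: (batch, 784) -> 50 hidden ReLU units -> 10 -> softmax."""
    h = relu(x @ W1 + b1)        # hidden activations, shape (batch, 50)
    return softmax(h @ W2 + b2)  # class probabilities, shape (batch, 10)
```

For the weight visualization, each column of `W1` is a 784-vector that can be reshaped to a 28×28 image and the 50 images tiled into the 10×5 grid mentioned above.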
Recently I have been experimenting with a C++ deep learning library that I have written, testing it out on the MNIST handwritten digits dataset. This dataset contains 60,000 training images and 10,000 test images, each 28×28 pixels. I have been trying to reproduce some of the error rates that Yann LeCun reports on the MNIST site. The digits are written in many different styles and some of them are quite hard to classify, so it makes a good test for neural net learning.
When reading sensor data from IMU chips it is always an issue that the gain and offset of the readings are not known and vary from chip to chip. I have written a short Python script which uses a least squares fit to calibrate these devices. All you need to do is capture a set of XYZ readings while moving the device through different orientations, and put the readings in a text file. You can get this script from my github.
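The script on github is the authoritative version; as a rough sketch of how such a fit can work, one common approach (which may differ from the script's actual method) is an axis-aligned ellipsoid fit. Assuming the true field has unit magnitude, every calibrated reading should satisfy Σ gᵢ²(xᵢ − oᵢ)² = 1, which linearizes to Ax² + By² + Cz² + Dx + Ey + Fz = 1 and can be solved by ordinary least squares:

```python
import numpy as np

def fit_gain_offset(readings):
    """Fit per-axis gain and offset from readings taken in many orientations,
    assuming a unit-magnitude field and axes aligned with the sensor frame.
    Solves A x^2 + B y^2 + C z^2 + D x + E y + F z = 1 by least squares,
    then recovers offset o_i = -D/(2A) (etc.) and gain g_i = sqrt(A/S)."""
    x, y, z = readings[:, 0], readings[:, 1], readings[:, 2]
    M = np.column_stack([x * x, y * y, z * z, x, y, z])
    p, *_ = np.linalg.lstsq(M, np.ones(len(readings)), rcond=None)
    A, B, C, D, E, F = p
    offset = np.array([-D / (2 * A), -E / (2 * B), -F / (2 * C)])
    # Completing the square: S rescales the quadratic coefficients to gains.
    S = 1 + D * D / (4 * A) + E * E / (4 * B) + F * F / (4 * C)
    gain = np.sqrt(np.array([A, B, C]) / S)
    return gain, offset

# A calibrated reading is then (raw - offset) * gain, which should have unit norm.
```

This handles per-axis gain and offset but not cross-axis misalignment, which would need a full ellipsoid fit with cross terms.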