I have this belief that antiparticles such as positrons are actually particles traveling back in time. There are several reasons for this, such as the fact that their time evolution contains a minus sign in front of the t.
However, there are not many antiparticles in this universe dominated by normal matter (for reasons I will not get into right now). I think this is one possible argument for why we only see into the past and not the future – there is only the most tenuous influence from the future, and not something humans can perceive. Besides, we are constructed from particles evolving forward in time, and everything around us is just normal matter, also traveling forward.
You can also see this as a vast amount of information flowing from the past to the future, but only a few bits traveling in the other direction.
One can think of particles and antiparticles as being like corkscrews in time, the difference being whether they have left- or right-handed rotation. This is also like a clock turning clockwise or anticlockwise. For example, if a gamma ray turns into an electron–positron pair, one can also picture that as a positron traveling back in time, colliding with a gamma ray, and bouncing forward in time, phase reversed, as an electron.
Anyway, I think it might be interesting to test this by building a space-based telescope that can create a 2D image from the arrival of antiparticles, alongside a coincident one creating an image from the arrival of particles.
The question is whether these two images will show more or less the same view of the universe, or whether the antiparticle-generated image will show a more evolved state of the cosmos. An interesting experiment to propose. I hope that someone has the influence to make such a project happen.
(Also if it’s happening and I don’t know about it, let me know!)
When training neural networks it is a good idea to have a training set which has examples that are randomly ordered. We want to ensure that any sequence of training set examples, long or short, has statistics that are representative of the whole. During training we will be adjusting weights, often by using stochastic gradient descent, and so we ideally would like the source statistics to remain stationary.
During on-line training, such as with a robot, or when people learn, adjacent training examples are highly correlated. Visual scenes have temporal coherence, and people spend a long time at specific tasks, such as playing a card game, where their visual input, over perhaps hours, is not representative of the general statistics of natural scenes. During on-line training we would expect that a neural net's weights would become artificially biased by highly correlated consecutive training examples, so that the network would not be as effective at tasks requiring balanced knowledge of the whole training set.
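The bias described above can be sketched with a toy experiment (the two-task dataset and learning rate here are made up for illustration): a single weight trained by SGD ends up pulled toward the most recently seen task when examples arrive in correlated blocks, but settles near the best compromise when the same examples are shuffled.

```python
import random

# Toy illustration (made-up data): fit y = w * x with SGD.
# "Task A" examples have slope 1; "task B" examples have slope 3.
data = [(x / 10.0, 1.0 * x / 10.0) for x in range(1, 11)]   # task A
data += [(x / 10.0, 3.0 * x / 10.0) for x in range(1, 11)]  # task B

def sgd(examples, lr=0.5, epochs=20):
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            w += lr * (y - w * x) * x  # gradient step on squared error
    return w

# Blocked order: all of task A, then all of task B (a correlated stream,
# like on-line experience). The final weight is pulled toward task B.
w_blocked = sgd(data)

# Shuffled order: the stream's statistics stay stationary, so the weight
# settles near the best single compromise (slope 2) for the mixed data.
shuffled = data[:]
random.Random(0).shuffle(shuffled)
w_shuffled = sgd(shuffled)

print(w_blocked, w_shuffled)
```

The blocked pass finishes close to task B's slope while the shuffled pass stays near the compromise value; the same argument applies, weight by weight, to a full network.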
Recent strides in artificial intelligence from big-name players such as Google, Facebook, and Baidu, as well as increasingly successful heterogeneous systems like IBM’s Watson, have provoked fear and excitement amongst the intelligentsia in equal measure. Public figures, such as Stephen Hawking, are concerned, and not surprisingly the popular press is excited to cover it. Recently, Elon Musk has become worried that AI might eventually spell doom for the human race. He donated $10 million to fund the Future of Life Institute, whose stated goal is to ensure AI remains beneficial and does not threaten our wellbeing. An open letter by this organization, titled “Research Priorities for Robust and Beneficial Artificial Intelligence,” was signed by hundreds of research leaders. The influential futurist Ray Kurzweil has popularized the idea of the technological singularity, where intelligent systems surpass human capabilities and leave us marginalized at best.
This post is about fitting a curve through a set of points. This is called regression; it is also the classic machine learning problem of generalizing from a discrete training data set. We have a set of points that are observations at specific places, and we want to make a system that predicts what the likely observations should be at all places within the domain of interest. We use the given observations (a training set) to train a model, and then when we get more observations (a test set) we can evaluate how much error there is in our Continue reading
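The setup in this excerpt can be sketched in a few lines (the numbers below are invented for illustration; the post's own data and model are not shown here): fit a line to a training set by ordinary least squares, then score it on held-out test points.

```python
# Minimal regression sketch (hypothetical data, roughly y = 2x + noise):
# train on one set of observations, measure error on another.

def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def mse(xs, ys, a, b):
    """Mean squared prediction error on a set of observations."""
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_x = [0.0, 1.0, 2.0, 3.0]
train_y = [0.1, 1.9, 4.1, 5.9]   # observations at specific places
test_x = [0.5, 2.5]
test_y = [1.0, 5.0]              # new observations to evaluate on

a, b = fit_line(train_x, train_y)
print(a, b, mse(test_x, test_y, a, b))
```

A low test-set error tells us the fitted model generalizes to places it was not trained on, which is the whole point of the exercise.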
I wonder about this.
Watching that video makes me think that the person who made it is probably… getting old.
I don’t think that young people worry that the world is changing too fast. After all, what makes the author think that the rules of business suddenly changed 15 years ago? Maybe that was when they started in the business and now they feel they don’t understand things Continue reading
Many people are familiar with the maxim from chaos theory that the fluttering of a butterfly in one part of the world can lead to a hurricane in another. Causes and effects are ubiquitous. Mystics tell us that one of the basic revelations is that everything is connected, there is no you and me, and we are one with the cosmos. Let us shine the spotlight of science onto this concept and play a little.
As we go about our day doing ordinary things, we think in terms of tasks that we need to do and the objects and tools that will be useful. We talk to other people who seem distinct Continue reading
A formal system in mathematics is a system which contains a set of axioms (unquestionable statements of truth), and then adds to these a set of production rules by which new true statements can be generated. The process of production can go on forever allowing a never-ending list of truths to be derived.
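As a toy illustration (an invented system, not one drawn from mathematics): take the single axiom "A" and two production rules, and enumerate derived theorems breadth-first — the list of truths can go on forever.

```python
from collections import deque

# A made-up formal system: one axiom and two production rules
# that each turn a known theorem into a new string.
AXIOMS = {"A"}
RULES = [lambda s: s + "B",    # rule 1: append "B"
         lambda s: "A" + s]    # rule 2: prepend "A"

def theorems(limit):
    """Enumerate derivable strings breadth-first, up to `limit` of them."""
    seen, queue, out = set(AXIOMS), deque(AXIOMS), []
    while queue and len(out) < limit:
        s = queue.popleft()
        out.append(s)          # s is a "true statement" of the system
        for rule in RULES:
            t = rule(s)
            if t not in seen:  # production can repeat; keep each once
                seen.add(t)
                queue.append(t)
    return out

print(theorems(5))
```

The enumeration never terminates on its own; `limit` just cuts off an endless stream of derivable statements.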
A famous result called Gödel’s incompleteness theorem shows that in such a system, there are some true statements that are nevertheless un-provable (undecidable) using the rules of the system. It shows that a self-consistent Continue reading
The hypothesis that all ravens are black is logically equivalent to the statement that all non-black things are non-ravens, and this is supported by the observation of a white shoe.
This is a paraphrasing of a famous paradox due to Hempel. There is a lot of fairly impenetrable discussion on the Wikipedia page about this paradox, some of which I believe to be incorrect, and so I include a readable resolution based on a Bayesian perspective here, and I also relate this issue to our ability to know the truth Continue reading
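One common Bayesian framing, sketched here with illustrative numbers (this is the standard textbook treatment, not necessarily the exact resolution the full post gives): if you sample a random non-black object and find it is not a raven, that does confirm "all ravens are black" — but only by a vanishingly small amount, because non-black things vastly outnumber ravens.

```python
from fractions import Fraction

# Hypothetical counts for illustration only.
M = 10**9  # non-black things that are not ravens
k = 10     # non-black ravens that would exist if the hypothesis were false

# Sampling a random NON-BLACK object and finding it is a non-raven:
#   P(non-raven | non-black, hypothesis true)  = 1
#   P(non-raven | non-black, hypothesis false) = M / (M + k)
# so the likelihood ratio (Bayes factor) in favor of the hypothesis is:
bayes_factor = Fraction(M + k, M)

print(float(bayes_factor))  # barely above 1: confirmation, but negligible
```

The white shoe is evidence, but evidence so weak that intuition rightly ignores it; and under a different sampling scheme (picking a random object rather than a random non-black one) it confirms nothing at all.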