Can we scan, store and simulate the human brain, and with it bring to life all our memories, experiences, emotions and personality? This is a big philosophical question, but for the moment I will assume it is a problem with only technical challenges, not ones that relate to mystical aspects of being.
The human brain contains around 86 billion neurons. If each one has at most 100,000 inputs, then we could store information about the sources of all those inputs, as well as about the way each neuron processes the action potentials arriving over those connections. We could assign a routing ID number to each connection and additionally store the delay, strength and type of synapse in perhaps 16 bytes per synaptic input.
Essentially I'm suggesting that sufficient descriptive properties could be stored in around 1-2 megabytes per neuron to enable a numerical simulation to be run. If this is the case, then the whole brain structure could be stored in roughly 140,000 terabytes. Some compression is probably possible, and not all neurons have so many inputs. This is getting into the realm of possibility: Google is almost certainly sitting on well over 1,000 terabytes of data between YouTube and its search index.
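To make the arithmetic concrete, here is a back-of-envelope sketch in Python. The 16-byte record layout is my own illustrative guess at how a routing ID, delay, strength and synapse type might be packed (8 bytes for the source-neuron ID, since 4 bytes can only address about 4.3 billion neurons); the neuron count uses the commonly cited ~86 billion estimate:

```python
import struct

# Hypothetical 16-byte synapse record (field sizes are assumptions, not
# established science): source-neuron ID (8 bytes), delay (2), strength (2),
# synapse type (2), plus 2 spare/padding bytes.
SYNAPSE_FORMAT = "<QHHH2x"
BYTES_PER_SYNAPSE = struct.calcsize(SYNAPSE_FORMAT)  # 16 bytes

NEURONS = 86e9        # ~86 billion neurons, a commonly cited estimate
MAX_INPUTS = 100_000  # upper bound on synaptic inputs per neuron

per_neuron = MAX_INPUTS * BYTES_PER_SYNAPSE  # bytes of synapse data per neuron
total_bytes = NEURONS * per_neuron           # whole-brain connectivity store

print(per_neuron / 1e6, "MB per neuron")   # 1.6 MB
print(total_bytes / 1e12, "TB in total")   # ~137,600 TB
```

The per-neuron figure lands in the 1-2 MB range suggested above, and the total comes out around 1.4 × 10^17 bytes.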
Scanning the brain to obtain the information in the first place is the hard part.
However, one problem with this kind of view is that it fails to take into account the need for full embodiment to produce the experience of being alive. We are more than a brain in a vat, but instead a complex biological system of body, brain and environment.
But potentially, if we can simulate the complexity of the nervous system, simulating the physics of joints, muscles and limbs will be small fry. We would also have to simulate the endocrine system, the gut and the internal organs to make the experience realistic, because we are so thoroughly an integrated system, experiencing emotions through feedback from internal and external bodily sensations.
That said, simulating internal organs like the liver should be relatively easy: they lack the complex network microstructure of the nervous system and can, for these purposes, be treated as bulk units.
Another question is whether we would also have to simulate the glial cells of the brain. I suspect this is important. When neurons change their connectivity they grow new axonal connections, and this process is not well understood at present: it involves complex chemical guidance factors, possibly with the help of glial cells, steering the growing axons to their targets.
In general the problem is really quite hard, but most of the difficulty comes from four things:
1. Insufficient computer memory storage.
2. Insufficient compute power for simulation.
3. Insufficient knowledge of the exact properties of different types of synapse and the functioning of individual neurons and their glial cell neighbors.
4. Inability to scan a whole living brain to get the actual data.
I think that problems 1 and 2 will become tractable fairly quickly (within the next 20 years). Problem 3 is hard, but it is a finite body of knowledge that will probably be understood eventually (maybe within the next 100 years).
Regarding problem 4: while I'm no expert on the state of the art in NMR and other non-invasive scanning techniques, I imagine this will be the biggest roadblock for a long time to come, unless some kind of nanotechnology can be used to pull out the needed information. Not only must the scan be extremely detailed, it must also be incredibly fast to recover such a vast amount of information in a reasonable length of time.
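A rough illustration of why speed matters: even granting a scanner with sufficient resolution, the data volume alone implies a punishing sustained readout rate. The figures below are order-of-magnitude assumptions based on the storage estimate earlier:

```python
# Order-of-magnitude scan-bandwidth estimate (illustrative assumptions only).
TOTAL_BYTES = 1e17                   # ~100,000 TB, roughly the whole-brain estimate
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000 s

rate_year = TOTAL_BYTES / SECONDS_PER_YEAR  # readout rate if the scan takes a year
rate_day = TOTAL_BYTES / (24 * 3600)        # readout rate if the scan takes a day

print(rate_year / 1e9, "GB/s sustained for a full year")  # ~3.2 GB/s
print(rate_day / 1e12, "TB/s sustained for a full day")   # ~1.2 TB/s
```

So even a year-long scan would have to stream gigabytes per second continuously, and anything resembling a practical session pushes into terabytes per second, far beyond any imaging modality today.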
It will be interesting to watch scientific progress in this kind of direction.