- And it's also saddled with compression that ranges from nearly-lossless, all the way down to a JPEG that's been reopened and then resaved 50x at the 0 quality setting. The bonus is that you have little control over what type of compression is going to be used. Writing things to permanent memory also requires anywhere from one to a dozen refreshes.
- There's also no ECC support.
- The compression used on stored information does tend to become increasingly lossy as time progresses.
- Random access of information is also spotty, due to poor indexing functionality.
- It's very susceptible to radiation damage.
- Irreversible damage occurs within minutes of a power interruption. (Not just data loss - permanent damage.)
- Low G-force tolerance.
- No upgradability. (With current technology.)
- No user manual.
+ High capacity for a few specific types of parallel computing.
If it was a computer, I'd want a refund.
I know you are making a joke, but most of your arguments are a really big fail.
My opinion:
ECC is not needed when the redundancy is stored as part of the information.
The indexing is also part of the stored information.
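To make the idea that redundancy and the index live inside the stored information itself a bit more concrete, here is a minimal toy sketch (plain Python; the names `remember`/`recall`/`corrupt` are my own invention, not any real API, and this is not a model of actual neurons): every item is stored several times, lookup is by content rather than via a separate index table, and damage is repaired by majority vote over the surviving copies.

```python
import random
from collections import Counter

class RedundantStore:
    """Toy content-addressable store: redundancy and the 'index' are part of the data itself."""

    def __init__(self, copies=5):
        self.copies = copies
        self.items = []  # duplicated (key, value) pairs; no separate ECC or index structure

    def remember(self, key, value):
        # Store several copies; the lookup key is simply part of the stored content.
        self.items.extend((key, value) for _ in range(self.copies))

    def corrupt(self, fraction, garbage="???"):
        # Simulate damage (radiation, power loss, ...) by overwriting a fraction of the copies.
        for i in random.sample(range(len(self.items)), int(fraction * len(self.items))):
            self.items[i] = (self.items[i][0], garbage)

    def recall(self, key):
        # Look up by content, then majority-vote over the surviving copies.
        votes = Counter(value for k, value in self.items if k == key)
        return votes.most_common(1)[0][0] if votes else None

store = RedundantStore()
store.remember("how to walk", "put one foot in front of the other")
store.corrupt(0.3)
print(store.recall("how to walk"))  # usually still the original value
```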
While indexing, the brain also removes any data that is useless: it culls noise. The reason it needs to do this noise culling up front is that noise has a very negative effect on indexing; just ask people to use their imagination while looking at a random pattern of dots, and they will "find" things that are not really there. (A toy sketch of culling before indexing follows the list below.) During creativity, an index window of the internal mind works over time, and the results of that index window are determined by:
- The neurotransmitters present.
- Signals from the senses.
- The internal virtual senses: groups of neurons that act as if they were senses but in reality create virtual sensor data. This gives us the ability to think and to simulate environmental situations that did not happen. It is looking ahead: predicting, based on previously stored information, and changing the variables a bit to simulate a different outcome.
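The noise-culling point can be shown with an equally small toy (again my own illustration, with made-up thresholds): inputs that do not stand out from the background never reach the index at all, so random-dot noise cannot pollute later lookups.

```python
# Toy "cull before you index" sketch: only salient observations ever reach the index.
background_level = 0.5   # assumed ambient noise level (arbitrary units)
threshold = 3.0          # how far above background a signal must be to matter

observations = [
    ("random dot", 0.6), ("random dot", 0.4), ("random dot", 0.7),
    ("face-like pattern", 4.2), ("loud bang", 5.1),
]

index = {}
for label, strength in observations:
    if strength - background_level >= threshold:   # noise is culled up front
        index[label] = index.get(label, 0) + 1     # only salient data gets indexed

print(index)   # {'face-like pattern': 1, 'loud bang': 1}
```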
The data compression of the brain is inversely proportional to the repetition rate of the data: the more often a piece of information is presented, the more accurately it is stored.
The side effect is that the more accurately the information is stored, the higher its index value becomes, meaning that information attains a higher priority.
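Roughly, the repetition/accuracy/priority relation sketched above could look like this as a toy model (the saturating curve is an arbitrary assumption on my part, not a result from neuroscience):

```python
import math

class Memory:
    def __init__(self):
        self.repetitions = {}   # how often each item has been presented

    def present(self, item):
        self.repetitions[item] = self.repetitions.get(item, 0) + 1

    def fidelity(self, item):
        # More repetitions -> less aggressive compression -> more accurate storage.
        # The saturating curve below is an arbitrary assumption.
        return 1.0 - math.exp(-0.5 * self.repetitions.get(item, 0))

    def priority(self, item):
        # Higher fidelity -> higher index value -> higher retrieval priority.
        return self.fidelity(item)

m = Memory()
for _ in range(20):
    m.present("your own phone number")
m.present("a number seen once")

print(round(m.priority("your own phone number"), 3))  # ~1.0
print(round(m.priority("a number seen once"), 3))     # ~0.393
```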
Because of this large amount of redundancy, the brain is self-healing. By presenting it with the same data that was previously stored but damaged for whatever reason (for example radiation), the data can be corrected (examples: learning to walk again, learning to speak again, learning to do a certain task again). RAID 5-style storage, for example, could be quite similar to how data is stored in the brain, although not completely: storing bits and storing information are two very different things.
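The RAID 5 comparison is easy to make concrete: one parity block per stripe lets you rebuild any single lost block by XOR-ing the survivors, much like re-deriving damaged information from what remains. A minimal sketch, assuming equally sized blocks:

```python
from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR of equally sized blocks, like a RAID 5 parity calculation.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"walk", b"talk", b"task"]   # three data blocks of equal size
parity = xor_blocks(data)            # parity block stored alongside the data

# One block is damaged or lost (say, block 1):
surviving = [data[0], data[2], parity]
rebuilt = xor_blocks(surviving)
print(rebuilt)                       # b'talk' - recovered from the redundancy
```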
The brain simply works at a radically different level of information storage technology compared to the modern hardware/software combo.
But in the near future it will become very easy to implement typical brain functions.
P.S.:
Anyone who is interested should read up on how GPUs do polygon culling, and on what the GPU and eDRAM can do in the Xbox 360. It would be a nice project for advanced AI with FPGA logic or multi-core processing. The brain is not a classical database system, and that is why most AI implementations do not and will not work. The magic is in how the GPU processes data to reduce the information to only what is really needed for the viewport.
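For a feel of what that culling looks like, here is a 2D toy (nothing like a real GPU pipeline or the Xbox 360's eDRAM, just the principle): discard everything that cannot affect the viewport before doing any expensive work on it.

```python
# Toy 2D viewport culling: keep only the objects that can actually be seen,
# the way a GPU culls polygons outside the view frustum before shading them.
viewport = (0, 0, 100, 100)   # (xmin, ymin, xmax, ymax)

objects = [
    {"name": "tree",     "x": 50,  "y": 40},
    {"name": "house",    "x": 300, "y": 20},   # far outside the viewport
    {"name": "player",   "x": 10,  "y": 90},
    {"name": "mountain", "x": -80, "y": 500},  # also outside
]

def visible(obj, vp):
    xmin, ymin, xmax, ymax = vp
    return xmin <= obj["x"] <= xmax and ymin <= obj["y"] <= ymax

drawn = [o["name"] for o in objects if visible(o, viewport)]
print(drawn)   # ['tree', 'player'] - the rest was culled before any further processing
```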