Distributed representation and learning

In some curious medical cases, patients with physical trauma to the head not only fail to connect with their loved ones when confronted with them, but even claim that these very loved ones are impostors merely disguised as them! Bizarre as such situations are, they may shed light on the exact mechanisms of neural learning. Clearly, the patient recognizes the person: some neurons encoding the visual patterns corresponding to the loved one's features (such as face and clothes) do fire. Yet since the patient reports this dissociation despite being able to recognize the person, it must mean that not all of the neurons that would normally fire upon meeting this loved one (including the neurons encoding the emotional representations the patient holds for this person) fired at the moment of the encounter.
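To make this partial-firing idea concrete, here is a minimal, purely illustrative sketch (the feature names and numbers are invented for this example, not taken from any clinical model): the loved one is encoded as a pattern of activity across many units, and in the impostor scenario the visual units fire while the affective units stay silent, yielding a pattern that overlaps strongly, but not completely, with the normal one.

```python
from math import sqrt

# Hypothetical distributed representation of "a loved one": each unit
# encodes one feature, and recognition is graded rather than all-or-nothing.
FEATURES = ["face", "clothes", "voice", "gait", "emotional_warmth", "familiarity"]

normal   = [1, 1, 1, 1, 1, 1]  # full pattern on an ordinary encounter
impostor = [1, 1, 1, 1, 0, 0]  # visual units fire, affective units stay silent

def overlap(a, b):
    """Cosine similarity: the fraction of the full pattern reproduced."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# High but not 1.0: the person is recognized, yet "something is missing".
print(round(overlap(normal, impostor), 3))  # 0.816
```

The point of the sketch is only that a distributed code can degrade gracefully: losing a subset of units weakens the pattern without erasing it, which is consistent with a patient who recognizes a face while the usual feeling of familiarity fails to arrive.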

These sorts of distributed representations may well give our brains their versatility in extrapolating patterns from very little data, as we observe ourselves capable of doing. Modern neural networks, for example, still require hundreds (if not thousands) of images before they can reliably predict whether they are looking at a bus or a toaster. My three-year-old niece, on the other hand, can match this accuracy with about three to five pictures of buses and toasters each. Even more fascinating, the neural networks behind the software on your computer can, at times, draw gigawatts of power across the data centers that run them. My niece needs only about 12 watts. She will get what she needs from a few biscuits, or perhaps a small piece of cake that she carefully sneaks away from the kitchen.