Created: September 4, 2015 / Updated: November 12, 2017 / Status: in progress / 5 min read (~838 words)
- Compare humans to current computer architecture
- Video card
- Audio card
- Power supply
- Software vs hardware
- Are the voice spectrograms we generate when we talk actually produced within the brain, or are they different?
The goal of this study is to learn how we, as humans, are nothing more than machines ourselves. I will approach this by comparing ourselves with a computer, one of the most advanced machines we have developed as a species so far. Both the hardware and the software will be analyzed in order to draw parallels between us and machines.
- Long term storage <-> Hard drive
- Short term storage <-> Memory
- Processing <-> CPU/Processor
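The pairings in the list above can be written down as a simple lookup table. This is just an illustrative representation of the analogy, not anything computational in itself:

```python
# The human <-> computer hardware analogy from the list above,
# expressed as a dictionary for illustration.
HUMAN_TO_COMPUTER = {
    "long term storage": "hard drive",
    "short term storage": "memory (RAM)",
    "processing": "CPU/processor",
}

for human, computer in HUMAN_TO_COMPUTER.items():
    print(f"{human:18} <-> {computer}")
```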
- A sequential task is split into small subtasks that can be anticipated and prepared for execution
- The brain coordinates the execution of these subtasks
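The two points above can be pictured as a sequential task broken into small, prepared subtasks that a coordinator (standing in for the brain) dispatches in order. A minimal sketch, where the task and its subtask names are hypothetical:

```python
# Hypothetical example: a sequential task ("drink water") split into
# small subtasks that are anticipated and prepared ahead of time.
subtasks = [
    "locate glass",
    "grasp glass",
    "raise glass to mouth",
    "swallow",
]

def coordinate(steps):
    """Dispatch each prepared subtask in sequence, the way the brain
    coordinates the small motions that make up one action."""
    executed = []
    for step in steps:
        executed.append(step)  # stand-in for actually performing the step
    return executed

print(coordinate(subtasks))
```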
When we think, it is likely that we are multitasking in the same fashion a computer does, that is, through small, interleaved iterations on several things at once.
One question AGI researchers and neuroscientists have asked themselves is how it would be possible to capture the voice that is within our head (the inner voice). One thing we know is that we can externalize this voice, and that this is done naturally when we talk to someone else. Given this process, it seems reasonable to expect that thought could be externalized without having to vocalize it. One way this might be accomplished is by starting at the source of the voice signal, namely the control of the mouth muscles, the tongue, and the vocal cords.
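Since the outline above asks about voice spectrograms, here is a minimal sketch of how one is computed from a signal: slice the signal into short frames and take the magnitude of each frame's Fourier transform. NumPy is assumed, and the input is a synthetic 440 Hz tone standing in for an actual voice recording:

```python
import numpy as np

# Synthetic "voice" signal: one second of a 440 Hz sine wave.
sample_rate = 8000
t = np.arange(0, 1.0, 1 / sample_rate)
signal = np.sin(2 * np.pi * 440 * t)

# Short-time Fourier transform: overlapping frames, FFT per frame.
frame_len = 256
hop = 128
frames = [signal[i:i + frame_len]
          for i in range(0, len(signal) - frame_len + 1, hop)]
spectrogram = np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, n_bins)

# The loudest frequency bin of a frame should sit near 440 Hz.
peak_bin = spectrogram[0].argmax()
peak_hz = peak_bin * sample_rate / frame_len
print(peak_hz)
```

A real voice would show many frequency bands shifting over time rather than a single steady peak, but the computation is the same.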