Liz Beson, Staff Illustrator

When you ask which is the better thinking machine, a brain or a computer, it stands to reason you’ll get different answers depending on whom you ask. For a biologist, the answer is obvious. The brain is a miracle of evolution, with over 100 billion neurons and another hundred trillion connections working in parallel to sustain our bodies and manage our thoughts.

Ask an engineer, and he might nod respectfully toward his biological counterpart before suggesting a different answer. Sure, a human can tell a cat from a dog and a captcha from a Rorschach blot, but no matter how big your brain, it’s never going to get a shuttle to the moon or find that elusive billionth digit of pi.

In any case, they’ll probably agree it’s a bit of an apples-to-oranges question. Computers are procedural where the brain is massively parallel. A brain can change its own structure where a solid-state chip cannot; a chip can be improved, customized, and expanded, where slicing into a brain simply lobotomizes it. They’re built in different ways, and they do different things.

But researchers in Amsterdam and the Cornell DARPA laboratories are challenging those differences. It’s a bit pointless to try to make a brain like a computer, but why not make a computer like a brain? Neural computing has been around for many years, after all, and algorithms based on the networks of the brain are applied to everything from economics to chemistry simulations. Perhaps the same concepts can be applied to hardware.
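The basic idea behind such brain-inspired algorithms can be sketched in a few lines: an artificial “neuron” takes a weighted sum of its inputs and fires if the sum crosses a threshold, and learning amounts to nudging those weights. The toy below is a hypothetical illustration of that principle only, not the design of any chip discussed here. It trains a single neuron to recognize the logical AND of two inputs from examples rather than explicit rules.

```python
# A minimal artificial "neuron": a weighted sum passed through a
# threshold, trained with the classic perceptron rule. This is a
# hypothetical toy sketch of the general idea, not TrueNorth's design.

def step(x):
    # Fire (1) only if the weighted sum is positive
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of ((x1, x2), target) pairs
    w = [0.0, 0.0]   # one weight per input, like a synapse strength
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Strengthen or weaken each "synapse" in proportion to the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND from examples rather than explicit rules
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
```

After training, the neuron reproduces the AND truth table; stacking many such units in parallel layers is what gives full neural networks their power.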

The problem is tricky, since neural networks gain a tremendous amount of complexity from their innate parallelism, and can consume extraordinary amounts of power when simulated on traditional chips. The TrueNorth chip, created by a partnership between Cornell, DARPA, and Israel’s Technion Institute, attempts to overcome these issues by mimicking the brain’s ability to store memory in the same structures used for computing.

By designing the very silicon of the chip to mimic the synapses of the brain, they’ve managed to fit a million “neurons” on a single chip, and consume only a tenth of a watt while doing so. The unique neural chip can identify images with the accuracy of a supercomputer, for a fraction of the real estate and power.

Meanwhile, researchers at the University of Amsterdam are taking a more software-oriented approach, building their networks from clouds of statistical functions rather than the conducting channels of a silicon wafer. Their goal is to capture the context of our higher language functions, allowing translation software to determine not only the equivalents of the words it receives, but their grammar and function as well.

But for all the sophistication of these computers, Skynet still seems mercifully far away – the power required to run a computer with the capacity of a human brain would exceed the combined electrical use of New York and San Francisco. Still, the improved image and language abilities of neural parallelism might soon make computers far more capable of slipping through the nets we use to detect them. You might not have to battle terminators on your lawn, but you may have to sift them out of your inbox.

Copeland is a member of the class of 2015.


