Deep Learning, Evolution, and Why Intelligence is not (just) Recognition

14 minute read

There has been a lot of talk recently about Deep Learning. Namely, how its recent resurgence is kicking ass across the board on machine learning benchmarks left and right. At the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a group from the University of Toronto, led by Geoffrey Hinton, built a neural net which beat the nearest competitor by roughly ten percentage points in classification error rate (misrecognizing an image, so saying a picture that is in fact of a magnifying glass is a photo of scissors). This essentially brought neural networks back into play for the field of computer vision, where they now dominate what most of us consider to be the coolest new technology. Facebook’s face recognition? Neural network. Google’s self-driving cars? Controlled by neural nets. The tempting logical jump to make here is: artificial intelligence? Let’s just use a neural net!

What Are Neural Nets in the Context of Computers?

Before I go on, let me give a high-level description of what exactly we nerds mean when we say “neural network.” Unfortunately for those of us trained in biology or neuroscience, it’s not as straightforward as it sounds. Also unfortunately for those of us trained in computer science, it’s not as easy as it seems on the surface. Really only statisticians win here.

Neural nets are, in some ways, similar to the biological network of our brain circuitry. On one end, you get some input. Then, it goes through multiple hidden layers of processing, resulting in some output, or set of outputs — just like how when I get surprised, I both jump and shriek a little. The difference is the way in which these hidden layers are connected and influence each other. Each inner layer is made of nodes (“neurons”) which transform the input in some way, and then pass that transformed stuff to the next layer until it looks like reasonable output.
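
To make that concrete, here is a minimal sketch of that flow in Python with NumPy. The layer sizes, the activation function, and the “jump/shriek” labels are all my own invented choices for illustration, not anything prescribed above:

```python
import numpy as np

def relu(x):
    # A common non-linear transformation: keep positives, zero out negatives.
    return np.maximum(0, x)

# Hypothetical layer sizes: 4 inputs -> 5 hidden -> 3 hidden -> 2 outputs.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(5, 3))
W3 = rng.normal(size=(3, 2))

x = rng.normal(size=4)   # the input ("I get surprised")
h1 = relu(x @ W1)        # first hidden layer transforms it
h2 = relu(h1 @ W2)       # second hidden layer transforms that
output = h2 @ W3         # two outputs ("jump" and "shriek")
```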

The trick is that the weights with which each layer influences the next, and even which nodes within each layer influence the next, change over time based on feedback. That gradual change in weights which slowly produces better and better output is what is canonically referred to as learning. The rate at which these weights change, the number of layers used, and the number of nodes per layer are all parameters of this algorithm, and they are arguably the main areas of research currently being conducted on neural nets.
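
As a sketch of what that feedback looks like in practice (this is plain gradient descent on a single linear layer, my simplification of the full story), the `learning_rate` below is exactly the “rate at which these weights change”:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 2))      # weights connecting one layer to the next
learning_rate = 0.01             # how quickly the weights change per feedback step

x = rng.normal(size=4)           # an input example
target = np.array([1.0, 0.0])    # the output we wanted

prediction = x @ W
error = prediction - target      # the feedback signal
gradient = np.outer(x, error)    # how each weight contributed to the error
W -= learning_rate * gradient    # nudge weights toward better output next time
```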

Why Neural Nets Could Solve Everything

This multi-layer network is important because it is a composition of non-linear transformations. At least in image recognition, the idea is that multiple levels of representation correspond to multiple layers of abstraction about a scene. But this jumps beyond just seeing a shovel and labeling it as such (or calling a spade a spade, if you will). It is tempting to believe that with enough layers of abstraction, this network could form a sophisticated conceptual hierarchy, not dissimilar from our own classifications of the world. Without difficulty, we can imagine SuperVision, our ILSVRC winner in 2012, being plugged into a neural network trained to perfectly pronounce things it reads, and suddenly we have a machine which both recognizes objects in an image and talks about them, from one algorithm.
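
The word “non-linear” is doing real work in that claim. A quick sketch of why: if every layer were purely linear, the whole stack would collapse into a single layer, and no hierarchy of abstraction could form at all.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(3, 3))
W2 = rng.normal(size=(3, 3))
x = rng.normal(size=3)

# Two stacked *linear* layers are exactly equivalent to one linear layer:
assert np.allclose((x @ W1) @ W2, x @ (W1 @ W2))

# A non-linearity in between breaks that collapse, so each layer can
# represent a genuinely new level of abstraction about the input.
deep = np.maximum(0, x @ W1) @ W2
```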

Some say we can go deeper.

If each task can eventually be solved perfectly with a neural network, and we plug all those little modules together, we see an intelligent machine. Better yet, one giant neural network (say, 100 billion nodes and a whole lot of compute) that manages to learn extremely complex tasks based on intricate reward and punishment systems, like how we brush our teeth every night because otherwise, at some unknown future point, with some unknown probability, we get a cavity. We cannot think of this net like the ones that exist currently, which are trained to do one specific task; this net is, itself, a learner. It not only learns how to perform tasks, but which tasks it needs to learn in order to achieve its goals.
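
That kind of delayed reward and punishment has a textbook formalization in reinforcement learning. A minimal tabular Q-learning update, on an entirely invented toy problem, looks something like this:

```python
import numpy as np

n_states, n_actions = 5, 2           # invented toy problem
Q = np.zeros((n_states, n_actions))  # learned value of each action in each state
alpha, gamma = 0.1, 0.99             # learning rate; discount on future reward

def q_update(state, action, reward, next_state):
    # Today's action is valued partly by the (discounted) best outcome we
    # expect later, which is how an unknown future cavity can shape
    # tonight's tooth-brushing.
    best_future = Q[next_state].max()
    Q[state, action] += alpha * (reward + gamma * best_future - Q[state, action])

q_update(state=0, action=1, reward=-1.0, next_state=3)  # e.g., skipped brushing
```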

Neural Nets vs. Human Behavior

The problem lies in the deterministic nature of these networks [Major disclaimer: technically neural nets are trained with some randomness, but in such a way that my following statements are true enough to retain their purpose]. If a thousand neural nets of the same complexity are trained on the same data, they will answer questions about new input in the same way. There is some variation, surely, based on the way the weights between layers are initialized (randomly? heuristically? all set to 0?), but generally the same learning rate means the same results, and that is not how humans work. If you show my brother a picture of a train, he will probably start talking about the mechanics of trains. Which is cool, but if you ask me, not nearly as cool as all the people with suitcases getting on and off, going about their day. And that is partly because, over our lifetimes of experience, we have become attracted to different things.
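
In code, the determinism claim is roughly this: fix the initialization (the seed) and the training data, and you have fixed the learner you end up with. A toy sketch:

```python
import numpy as np

def train_tiny_net(seed, data):
    # Stand-in for a full training run: all randomness flows from the seed.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(4, 2))   # randomly? heuristically? all set to 0?
    for x, target in data:
        error = x @ W - target
        W -= 0.01 * np.outer(x, error)
    return W

data = [(np.ones(4), np.array([1.0, 0.0]))] * 10
# Same seed, same data, same settings: bit-for-bit the same "brain".
assert np.allclose(train_tiny_net(42, data), train_tiny_net(42, data))
```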

But how much of that attraction is predetermined? Maybe I was just born to be more interested in people than my brother was. Or maybe I got along with more people as a child, because I was more outgoing, because I got along with more people, because I… talked sooner? Didn’t argue as much? Have more motor control over tiny facial muscles? It is not the origin of this preference, but the fact that it may exist, which is the key here. To say people are born with different inclinations is certainly to say that at least some part of our behavior is predetermined — or at least pre-boosted to be more likely in some people than others. And the notion that my preference may in fact have been a butterfly effect of learned behavior sort of flies in the face of our current concepts of nature vs. nurture.

Where this connects back to deep learning is this compounded learning effect. Could a sufficiently complex neural network, with a sufficient amount of training data (like, say, a lifetime of experience), pass as a human? Tomaso Poggio of MIT has coined the term “super Turing Test,” which refers to the idea of a machine which mimics human behavior — vision, language, motion, creativity, everything — so perfectly that it is indistinguishable from the real deal.

To harken back to the biological influences of our beautiful selves, let’s consider how this might work in human evolution. DNA is information in an intensely compact form. Storing information in genes is expensive. In the midst of all that important “let’s add another lung to the left side” and “a spinal cord would look nice here” there is only so much space to encode behavior to increase fitness. Yet, there are some things which we should probably know from day 1, like maybe don’t wander off in the dark, or touch that giant hairy thing with fangs. So, if we think of genes as biological “nodes” in our neural net, some of the influences (weights) might be pre-initialized in dramatic ways — our “stay away from cliff” node starts with a weight of +10,000. This still leaves room for behaviors that aren’t so important for fitness to not be “hardwired” in, and to in fact be learned on the fly. Suddenly, there’s potential for a whole realm of human behaviors that our body (and genes) never really cared about, like, say, texting etiquette. This is the evolution/survival basis of the argument that neural nets could in fact be a perfect model for human behavior.
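
As a sketch of that pre-initialization idea (every number and node name below is invented for illustration): “hardwiring” just means starting a few weights so far from zero that a lifetime of feedback would struggle to override them, while everything else starts near zero and gets learned on the fly.

```python
import numpy as np

rng = np.random.default_rng(3)

# Most weights start near zero: behaviors to be learned on the fly,
# like texting etiquette.
weights = rng.normal(scale=0.01, size=4)

# A few are pre-initialized dramatically by the "genes":
STAY_AWAY_FROM_CLIFF = 0          # invented node index for the example
weights[STAY_AWAY_FROM_CLIFF] = 10_000.0

# Day-1 behavior is dominated by the innate weight; learning still tunes the rest.
```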

But deep learning networks, even the most complex that exist, are still only descriptive.

Mistakes and Fundamental Differences in Creative and Descriptive Thought

Let’s talk about an element of what makes human intelligence unique. The fact that people often act illogically or inconsistently with their (stated) primary goals leads to creative experiences. Granted, most people are trying to maximize multiple objectives — financial security, personal happiness, physical comfort are all pretty common — so calculating the specific “value” of an action may not be something humans can consciously do. It is possible, of course, that at every given point the brain is calculating the exact expected value of every action without our knowledge, and those complex calculations make seemingly illogical behavior look attractive. Or, just like current models of machine learning, our brain balances the “explore/exploit” tradeoff (which is exactly what it sounds like: do something that you know is good all the time, or take a risk doing something new that could be better), and this is the origin of our irrationality.
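
The explore/exploit tradeoff has an almost embarrassingly simple canonical implementation, epsilon-greedy (the values below are invented): with small probability, do something random instead of the thing you currently believe is best.

```python
import numpy as np

rng = np.random.default_rng(4)
epsilon = 0.1   # the fraction of the time we take a risk

def choose(estimated_values):
    if rng.random() < epsilon:
        # Explore: try something new that *could* be better.
        return int(rng.integers(len(estimated_values)))
    # Exploit: do the thing you already know is good.
    return int(np.argmax(estimated_values))

action = choose(np.array([0.2, 0.9, 0.4]))
```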

The fact that people don’t always behave reasonably (at least to other people) is indisputable, but whether or not that is a part of the definition of human intelligence is debatable. Worse yet, there is no way to prove experimentally that irrational actions do or do not hinder personal progress! No rigorously scientific, sufficiently complex scenario could be designed, with subjects and controls, to viably prove that behavior incongruous with the pursuit of goals in fact hinders the achievement of those goals in any meaningful way (quality, speed of accomplishment, etc.).

Consider, for example, the anecdotal success stories of tech giants of which we are all so fond. Certainly, dropping out of university to pursue company growth is the exception to the rule in terms of chugging along towards maximum productivity, yet it turned out pretty well for some of our favorite entrepreneurs. One could argue these individuals are just more inclined to “explore” rather than “exploit.” A psychologist would be quick to point out that their engagement in high-risk behavior is significantly above that of your average Joe. And I feel comfortable asserting that these major success stories have inspired hordes of other young, charismatic tech nerds to explore a whole lot more than they otherwise would have. Startup fever plagues the young tech scene, for better or worse. Most start-ups fail, or get gobbled up by larger companies. The high-risk, high-success individuals inspire followers, yet no ultimate strategy has been established. This is at least partly because those whose companies fail still manage to find success along the way. The process of starting a company leads to large human networks, strong technical skills, and a plethora of business smarts that make people highly employable. Even the “failures” end up with success.

So, does it matter that people make “mistakes?”

Maybe. Certainly not no. But also probably (?) not yes. I can safely say that my stint abroad — perhaps a career “mistake” — has afforded me numerous advantages over my peers, such as contacts in tech scenes around the world, a broader global perspective to draw on to solve problems, and more empathy for unfamiliar cultures. As a result, I think in a fundamentally different way than I did, and than my peers do. But again, there is no way to empirically test whether or not this experience will eventually lead to an advantage for me.

So, it is difficult to tell whether a deeply complex, intelligent neural net would have trouble acting in ways that may only ever, eventually, result in marginal payoffs. I personally consider that kind of behavior to be integral to human behavior and intelligence, despite the controversy (unprovable, as noted above) over whether it is ultimately counterproductive.

The second big issue with calling neural nets a panacea for the AI community is creativity. Creative, generative thought is fundamentally different from descriptive, analytic, and deterministic behavior. Every piece of art ever made by a computer has lines of code behind it that are static, and can only ever produce a finite number of artistic permutations on an idea. It is not until another programmer comes in and writes something new that the computer can do anything differently.

For the young’uns out there who took AP or IB foreign language exams in high school, you’ll remember the oral section in which students are shown a strip of pictures and have to make up a story about what is going on. Mine was a cartoonish depiction of a girl getting onto a train (what is it with trains in this article?), then talking with somebody, then the two of them getting off at a different stop. Now ignoring the fact that I accidentally talked about this poor girl and boy in a war (seriously, gare and guerre aren’t that hard to confuse as a monolingual sophomore!) I made up a pretty good story. And everybody else in my class came up with a different one. The boy was her brother, boyfriend, teacher, train conductor, total stranger, whatever! The simple strip of pictures implies an infinite story-space.

This is the area where I believe computers will struggle. The crux of the problem for the artificially intelligent machine is that imagination is hard. Neuroscientists do not like to study it because it is difficult to quantify. Psychologists do not like to study it because it does not result in observable behavior — or at least not directly. Computer scientists do not study it because they’re computer scientists, and actually that is a pretty good excuse. We have no idea how to build or even measure what makes humans unique without so much as a semblance of understanding of our infinite imaginations.

…So? Honestly, most realistic applications of artificial intelligence do not want human-like machines. We call for inferential intelligence (“What is that?” instead of “What is that spikey pink thing in the fruit bowl that wasn’t there before?”) and an idiot-savant-like genius in specific fields (“I don’t know what’s wrong with my car, just fix it!”), but as soon as our machines have agency, we have to deal with them. Emotions are hard for humans, and the struggle of free will is even harder. Most subfields of AI do not strive for consciousness and creativity. Those are for humans. Those are sacred.

But still, machine learning needs to be socially accepted in order to fully progress. Until it is widely embraced by an informed public, it will suffer from being misconstrued and misunderstood. By framing the intelligence of machines with a human lens, we make computers relatable. Perhaps when we foster a friendly relationship with computers, we will see their full capabilities.