As a new PhD student, starting at a new university, in a new city, in a new country, I’ve found myself introducing my work to many new people lately. I have worked in, and now (remember, I just started) study, machine learning for social and emotional interactions in robots. I’ve experimented with many ways to introduce this topic: “emotional intelligence, but for robots,” “social robotics,” “machine learning for robotic emotions.” Try as I might, I always get some flavor of this response:
Oh, so you’re the one destroying the future? To which I roll my eyes. Hard.
That being said, I get why people have that response. I think about technology all day, every day. I’ve worked in industry, and researched in academia. Although I am young, I am one of relatively few people in the world who specializes in social interactions of robots (although it’s a quickly growing field). Not everybody’s life revolves day-in, day-out around the future of technology, so of course I do not expect everybody to think as much about these topics as I do.
This is why I so, so often find media coverage of emerging technology anywhere from disappointingly misleading, to downright enraging.
Although there are many examples of such articles, I write this as a specific response to a 2014 YouTube video, CGP Grey’s “Humans Need Not Apply,” which I’ve had the pleasure of viewing a number of times, and which was recently used by somebody to justify their criticism of my entire career. After watching this video, it’s no wonder the average intelligent human I come across is nervous about the future.
In this piece I hope to explain both how to spot fear-mongering tactics in media coverage of technology, and why the “robot revolution” is, in fact, nothing to fear.
What I am NOT saying is “don’t think about the future.” Absolutely DO think about the future. Often! And, don’t be scared of it. Let me tell you why you should be excited!
This will come to you in three parts:
Act I: The Video
Act II: Fear Mongering and Deception
Act III: The Progress of Humanity
Act I: The Video
Grey begins with a fairly standard historical introduction of automation. The basic arguments one uses to assuage the fears of grandma at Thanksgiving, or crazy uncle Pete, who did the 60s a little too hard:
“Replacing human labor with mechanical muscles frees people to specialize, and that leaves everyone better off, even those still doing physical labor.” With this, I whole-heartedly agree. Because we look at history, and it’s always been true. “Luddite” became a word in the English vocabulary because radical English textile workers felt their jobs were threatened by automation… and now we have more fashion designers, specialty embroiderers, and fabric choices than ever before. Historically, new technologies haven’t gotten rid of jobs, but created whole new ones.
However, quickly after making this point, Grey states his thesis of the video: “This time, it’s different.”
Grey starts by introducing Baxter with an annoyingly optimistic reading on its capabilities.
“Baxter can learn what you want him to do by watching you do it, and he costs less than the annual average salary of a human worker… he can do whatever work is within the reach of his arms.” What Grey fails to mention here are the extreme limitations within which Baxter can learn. One-shot learning is very much still not-a-thing in mechanical robotics, and robots even understanding where and how they can move their appendages is an active area of research… that doesn’t exactly look sexy (well, not to me). Path-planning for how to move a robotic arm from point A to point B is not a solved problem. Humans, on the other hand, can actually do what Grey shows in the video: watch a human perform an object manipulation, and repeat, instantly, infinitely, with many micro-differences in orientations, sizes, and shapes of the object. Baxter, straight up, can’t do that.
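To make the path-planning point concrete: even the toy, two-dimensional version of “get the arm from point A to point B” is a search problem, not a primitive operation. Here is a minimal A* sketch over a grid (my own illustrative example, with nothing to do with Baxter’s actual software); a real manipulator plans through a continuous, high-dimensional joint space with obstacles, which is dramatically harder.

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 2-D occupancy grid (1 = obstacle).
    Returns the shortest path length, or None if no collision-free path exists.
    Toy stand-in: real arm planners search joint space, not a flat grid."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible heuristic for 4-connected moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]   # (cost + heuristic, cost so far, cell)
    best = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ncost = cost + 1
                if ncost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ncost
                    heapq.heappush(frontier, (ncost + h((nr, nc)), ncost, (nr, nc)))
    return None  # the goal is unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # → 6 (detour around the obstacle row)
```

Even here, a wall of obstacles forces a detour the planner must discover by search; scale the state space up to seven joint angles plus a cluttered workspace and you see why this remains an active research area.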
Additionally, Baxter lacks a key element of human workers: the ability to interact socially. There are entire fields of research dedicated to the many ways in which social ability improves the quality of interactions between humans and machines, which Baxter does not and likely will not ever have. This includes the ability to read the emotion of the teacher, which is an invaluable skill that human workers have in abundance. “Are they angry at me?” → “Did I just mess up in a way that is serious, and I should try to fix through iteration?”; “Are they afraid?” → “Did I just do something potentially dangerous, that I should avoid again at all costs?” While Baxter can watch and attempt to imitate, the word “learn” implies a level of ability that robots, right now, simply do not have.
My issue here is that Grey repeatedly pounds home the point that robots are now, when, in reality, with these two problems above plus many others which I fail to mention, I’d say that optimistically, robots are, maybe, 25 years from now. Rodney Brooks, a pioneer of robotics and AI (and, incidentally, founder of Rethink Robotics, the company that makes Baxter), happens to think it’s much further.
Grey also brings up how incredibly cheap robotic labor is:
“His hourly cost is pennies of electricity, while his meat-based competition costs minimum wage.” Indeed, this is true. Once Baxter is up and running (built, installed, and trained by a human, thankyouverymuch).
Or is it? For reference, the federal minimum wage in the United States is $7.25/hr. Ignoring the fact that this is reprehensibly and appallingly low, this gives us a base salary of just over $15k/year for a minimum wage worker. Baxter’s monumental startup cost, on the other hand, is $22k, which includes only a 1-year warranty of manual fixes and software upgrades (you can get an extended 3-year warranty for an additional fee). While I find Rethink Robotics to be a great, honest, and awesome company, they’re still a company, and they exist to make money. So after that first year, software upgrades cost money. Tesla, for example, charges $9000 for a software update, and that’s for a consumer product (consumer pricing is always lower than what a corporate deal would run, because companies have more money than people).
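The arithmetic here is worth making explicit. The sketch below runs the numbers from this paragraph; the recurring support cost is a made-up placeholder, since real service contracts and repair bills vary by vendor.

```python
# Back-of-the-envelope cost comparison using this article's figures.
MIN_WAGE = 7.25            # US federal minimum wage, dollars per hour
HOURS_PER_YEAR = 40 * 52   # full-time, no overtime

worker_annual = MIN_WAGE * HOURS_PER_YEAR
print(f"minimum-wage worker: ${worker_annual:,.0f}/year")  # $15,080/year

baxter_upfront = 22_000    # purchase price, first-year warranty included
# Placeholder recurring cost (warranty extension, repairs, software upgrades).
# This figure is invented purely to show that "pennies of electricity"
# is not the whole story.
baxter_yearly_support = 3_000

def baxter_total(years):
    """Cumulative Baxter cost: upfront price plus support after year one."""
    return baxter_upfront + baxter_yearly_support * max(0, years - 1)

for years in (1, 2, 3):
    print(f"year {years}: baxter ${baxter_total(years):,} "
          f"vs worker ${worker_annual * years:,.0f}")
```

Even with a generous placeholder support cost, the robot only pulls ahead of a single minimum-wage salary somewhere in its second or third year, and that assumes zero downtime and zero catastrophic repairs.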
And why do you think an extended warranty costs extra? It’s because robots break. And high-use and high-precision robots break often. And you don’t know how to fix them, so they cost money to fix. A lot of money, because a highly-skilled human has to come in and fix them.
Bottom line: we are nowhere near automating even “teachable” low-skill jobs. And when it looks like we are, look closer: the cost for robots is often much, much higher than you think.
Grey goes on to target hospitality workers and describe how robots are “coming for them,” by using two target examples: self-checkout machines, and barista-bots.
The first example I find absolutely laughable; the work self-checkout machines now do used to be done by human cashiers, yet somehow we haven’t seen huge unemployment in the grocery-worker industry. I literally don’t even know how to dignify this example with a response; nobody liked bagging groceries, grocery clerks and baggers still exist, and by far the biggest challenge to those working in grocery stores has been their human managers trying to screw them over.
So, let’s talk about baristas, I guess.
First of all, when was the last time you saw a robot barista? It was probably memorable, because there are hardly any of them. If you want to count automated coffee machines, your instances go way up, sure, but we also have free water fountains literally legally required everywhere. So let’s focus on fancy beverages: they’re hard to make, and the reason they haven’t taken over is because they are not cost effective. Coffee shops generally operate under two models: make a lot of it, cheaply, and quickly, or make a little of it, extremely carefully, and charge a boatload. For the former, it’s cheaper to pay Bobby-the-part-time-student to make your latte than it is to buy a $30k robot when you already spent $50k for a franchise license. For the latter, obviously a robot could never have the same touch as Alphonso, your reigning in-house latte-making champion of Florence for five years running.
Grey then employs what I can only describe as an annoyingly misleading tactic of fear-mongering and says “this robot is actually a giant network of robots that remembers who you are and how you like your coffee.” Almost everything we interact with on a daily basis is a network of machines (hot tip, you can sub out the word machine for robot any time you like! It doesn’t always work in both directions, though, so be careful). Reading this in an internet browser, you’re probably being served information from thousands of different machines in locations around the world. I will save my rant about “the Cloud” for another time, but emphasizing that one machine is actually many machines paints an incorrect picture that everywhere we interact with a machine, there are dozens more just lurking in the background. In reality, nearly every interface for humans is built with a complete pipeline in mind, and can be treated as a single unit.
One analogy that I actually simply adore in this video is Grey’s Horse comparison. He asks viewers to imagine being a horse before the automobile revolution. “Surely, there will still be jobs,” one horse says to another, and then goes on to point out how horses have “become obsolete,” and “have no work to do.”
He then points out that a rule of “Better technology makes more better jobs for horses” sounds silly, but when we replace “horses” with “humans,” suddenly people think it sounds about right.
Well, yes, actually, we do and absolutely should expect that to be true. Because we, as humans, optimize society, culture, economics, and life to be human-centric… not horse-centric. The reason “better technology makes more better jobs for humans,” can be thought of as true is because humans control technology.
And as for the horses… horseback riding is now a leisure activity, and the vast majority of horses now have a substantially higher quality of life than they did when they were working animals. Similarly, human working and living conditions have only gotten better as humans have been required to do less hard labor and made more money for less dying.
He conveniently ignores this while he points out “horses have been on the decline ever since,” and to that I say: indeed. Because humans are in charge of breeding horses, and horses are tools for humans. Similarly, computers, machines, and robots are tools for humans. Humans are in control of technology, and always will be.
About four years ago, I made a bet with my dad that on my 30th birthday, we wouldn’t be able to get into a self-driving car and have it take us to a fancy seafood restaurant. We get into that car, I pay for dinner. If any human plays a role in our navigation and transportation there, he pays. I still feel confident about my bet, and I still have a few years to go. Which is why I laughed out loud when I heard Grey say, four years ago…
“Self-driving cars aren’t the future. They’re here and they work.” Let me explain. “Here,” to me, would mean companies are actually using self-driving cars to automate-out jobs. Despite startups and attempts, this is still not true. So… they’re not “here,” and they especially weren’t “here,” four years ago.
But, I agree, as Grey points out, self-driving vehicles are substantially — one might even say alarmingly — safer than human drivers, so they will, and more importantly absolutely should eventually replace human drivers. See the end of this article.
One solution Grey glosses over is “pushing 100 million additional people through higher education,” but as it turns out, you don’t need to complete higher education to have a high-skilled job. The vision of education-as-a-must-for-good-work is, as far as I can tell, distinctly American. Many other countries commonly institute apprenticeships for students as young as 15 (France and Germany come to mind), and in my personal experience, no other country looks down so much on highly skilled trade work, such as plumbing or electrical work — both of which there are currently zero attempts (that I’m aware of) to automate using robotics. Grey, don’t be so educationally elitist.
One fun-unknown-fact that machine learning scientists love to bring up is the task of “discovery.” “Discovery,” basically means sifting through mountains and mountains of documents for your field (law, if you’re a lawyer; medicine, if you’re a doctor) and finding correlations, anomalies, relevant articles, and other patterns from dense, obfuscated information. Doctors used to have to spend hundreds of hours sifting through research that wasn’t relevant to find one line that might be worthwhile for a particular patient’s case, and for the most part, that was time wasted.
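For a feel of what automating “discovery” actually means, here is a toy relevance-ranking sketch using plain term-frequency/inverse-document-frequency scoring. Real legal and medical discovery systems are far more sophisticated, and the documents below are invented for illustration.

```python
from collections import Counter
import math

def tokenize(text):
    """Lowercase and strip simple punctuation. Toy tokenizer."""
    return [w.strip(".,;:!?").lower() for w in text.split()]

def tfidf_rank(docs, query):
    """Rank document indices by a simple TF-IDF score against the query.
    A minimal stand-in for the 'sift mountains of documents' task."""
    n = len(docs)
    tokenized = [tokenize(d) for d in docs]

    # Document frequency: how many documents each term appears in.
    df = Counter()
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1

    def score(toks):
        counts = Counter(toks)
        total = 0.0
        for term in tokenize(query):
            if df[term]:
                tf = counts[term] / len(toks)       # term frequency
                idf = math.log(n / df[term])        # rarity bonus
                total += tf * idf
        return total

    return sorted(range(n), key=lambda i: score(tokenized[i]), reverse=True)

docs = [
    "billing records for the fiscal year",
    "patient responded well to the trial lung treatment",
    "lunch menu for the cafeteria",
]
print(tfidf_rank(docs, "lung treatment trial"))  # most relevant index first
```

The machine does the tedious scan; the doctor or lawyer still reads, judges, and acts on the handful of documents that surface.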
So, I’m deeply confused by Grey’s attempt to make it sound like a bad thing that humans no longer have to do that. Highly skilled, highly trained people get to spend more time doing the thing that they are highly skilled and trained for… I truly fail to see the detriment to humanity here. It’s not like those people are losing their jobs — discovery is only one small part of being a highly-skilled worker, and often the worst, most tedious part at that.
On the topic of doctors, he says
“Doctor bots keep track of everything worldwide, and make correlations that would be impossible to find otherwise.” Uhh… yes? Here, he brings up a doctor-bot that “gives guidance on lung-cancer treatments.” The key word here is guidance. If I were a doctor charged with saving a person’s life, for fuck’s sake I would absolutely want every possible tool available to me to save that person’s life. And, as the doctor, I would retain responsibility for that life. It’s not as if we are handing over our well-beings to robots and telling the doctors to shut up and get out. Quite the opposite, we are empowering doctors to make better decisions for their patients. How, exactly, is it a bad thing that doctors are getting better tools to — let me say it again — SAVE HUMAN LIVES?
And why does he think this will lead to doctors becoming obsolete?
“Not all doctors will go away, but when doctor bots are comparable to humans and as close as your phone, the need for general doctors will be less.” Yes, so for once maybe it’s a good thing every western nation has a shortage of doctors, and developing nations have an extreme shortage. Maybe we’ll fill that, and more people will be able to get reliable healthcare.
But this is not exactly a new issue. Every time I visit the doctor, it’s already because technology hasn’t given me the answer. Almost nobody skips googling their symptoms and trying to self-diagnose or self-treat first. More accurate home diagnoses will simply allow humans to self-diagnose and self-treat more safely.
The next angle to assess is the idea that computers might begin to produce art or other creative works which rival human ability, and to that I simply say, perhaps! As a member of multiple artistic programming communities, I wholeheartedly advocate for the use of new technologies, including the deep learning techniques which Grey seems to find so “terrifying,” to produce new types of art. Because machines and machine learning are now just more tools for artistic expression.
Additionally, art is produced by humans, for humans. Art that is not valuable in some way to humans is worthless, and therefore I just absolutely cannot see a future in which artists are displaced because machines are practicing their craft “better” than them. And further, it is important to remember that art that might seem to have been produced strictly by a machine holding a paintbrush or writing music was actually produced by a human: the programmer.
The argument that computers will replace art and creativity seems in this video to be succinctly summed up by…
“People used to think playing chess was a uniquely human thing to do… right up until the point that computers beat the best of us.” (Quick aside, chess is a remarkably misleading comparison to make in a section entitled “creative bots,” because chess is one of the easiest things to get a computer to do well — it has extremely precise and specific rules, known strategies, and requires highly-branching simulations of permutations of these known rules… quite, quite unlike art. But still, even if we lend him this comparison…)
Yet people still play chess for fun. Just as people will continue to paint, draw, compose, write, and create, for their passion. And while “we can’t have a poem and painting-based economy,” art produced by humans will always be valuable at a bare minimum because it was produced by humans. Because it’s art, and it has the value we place on it.
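The branching-factor aside can be quantified: the number of positions a chess engine must consider grows as roughly b^d for branching factor b and search depth d, and that is exactly the kind of fully specified, scoreable search problem computers excel at. The figures below are standard ballpark estimates, not measurements.

```python
# Rough game-tree size: branching factor b raised to search depth d (in plies).
def tree_size(b, d):
    """Positions in a full-width game tree of branching factor b, depth d."""
    return b ** d

CHESS_BRANCHING = 35  # commonly cited average legal moves per chess position

# Six plies (three full moves) ahead: about 1.8 billion positions.
print(f"chess, 6 plies: {tree_size(CHESS_BRANCHING, 6):,} positions")

# Chess is searchable because the move set is enumerable and the outcome is
# scoreable. "Paint something moving" has neither an enumerable move set nor
# a scoring function to search against, which is the aside's point.
```

Pruning and evaluation heuristics cut that count down enormously, but the key is that they *can*: the rules hand the machine a well-defined space to prune. Art hands it nothing of the sort.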
This video creates a false sense of urgency about an economic and social chaos that isn’t here… and isn’t coming. The hype around computers — and in particular, that one special word, robots — is overblown at best, and utterly unfounded at worst. And it is also, unfortunately, ubiquitous.
I get it, not everybody is as intimately aware of the abundance of problems that machine learning, robotics, and AI researchers deal with as I am. From robotic arm movements, manufacturing techniques, camera exposure, voice production, conversational interfaces, safety measures, movement and navigation, weather conditions, task-specific knowledge, contextualization, action learning, or any number of the other hundreds of active research areas, the obstacles to machines even getting close to wholesale replacing human labor are vast and complicated. But when you hear authorities go on a rant like this, it can make all those problems seem… solved. They’re really not.
So, what was the purpose of this video?
“This video isn’t about how automation is bad, rather, how automation is inevitable.” Dude, give me a break. I just explained all the ways automation isn’t inevitable, and I didn’t even get into my own field (social and emotional robotics — if you think movement is hard, try psychology).
But am I just being defensive? Do I just “want to reject it,” as Grey suggests? Is my interpretation that this surely must exist exclusively to scare people into being Luddites unfounded?
Let’s go ahead and reach back to high school literature class brains and do some tone analysis.
Act II: Fear Mongering and Deception
Not-Fun Fact: 3 out of the last 10 people I told I work in robotics said “oh so like terminator?”
When people tell you they’re not trying to scare you, they’re probably trying to scare you. This video de-personalizes humans at nearly every turn, referring to people as the “meat-based” counterparts to machines — simultaneously de-humanizing the human, and putting our silicon-based counterparts directly in our seats. While referring to humans as “meat” is cheeky within the field (my personal favorite term is actually “squishy”), doing so outside, to the average media consumer, naturally evokes images of being consumed, eaten, devoured. Especially coupled with Grey’s ascription of human-y he-series pronouns to Baxter, calling humans “meat” in a video that is clearly designed to scare people away from robots purposely puts people in a defensive position, in which they must defend themselves from slaughter.
Think that sounds dramatic? Take a look at the violent imagery used throughout the video. On Jeopardy, Watson doesn’t simply out-perform or even “beat” the competition, but “crushes” humans. Much more violent, much more dramatic. When performing sheer volumes of tasks, automated machines don’t simply out-produce or work longer and more consistently, but “destroy” human performance. He describes their current role in the economy as “terrifying.” Make no mistake, these are deliberate attempts to get humans to imagine heartless, tyrannical overlords that are purposefully and intentionally causing real, physical harm to humans. Again, conveniently ignoring the fact that machines are produced by humans, for humans.
And then there’s the entire over-arching horse analogy. Not only does Grey again compare humans to animals, but to tools, subtly implying that humans will become the obsolete tools of computers, instead of viewing computers and machines as what they are: yet another tool created to advance human progress. The thing about this analogy is it’s so easy to switch a few things around, and suddenly it looks like a much less scary and more accurate picture of reality: instead of the human being the horse, the computer is the automobile. With the automobile, horse labor became obsolete, as did a whole host of jobs associated with caring for and working with horses, which ultimately enabled humans to have more time working in other fields (sometimes literally), or simply have more leisure time. The automobile enabled whole new hosts of job categories (like that whole transportation thing that we managed to live without for the first 3000 years of modern economic systems). The automobile was a tool of human progress, and there is no reason why computers, again, built by humans for humans, will not follow a similar path.
There are a number of times in which he makes it clear that humans and computers are in direct competition:
“Mechanical minds will push humans out of the economy” “Robots are already beating humans…” But as I’ve said many times, this is a false equivalence. Humans and computers are simply not in competition. Computers are, always have been, and always will be, designed to have purposes that are ultimately useful to humans. Humans are useful to other humans by virtue of the fact that we are social beings that can never be replaced by machines. Just as there was no competition between humans and horses to pull carriages, computers will automate-away tedious and laborious tasks that humans don’t want to or simply cannot do. Computers will do jobs that, sure, perhaps humans could do, but would really rather not. And, truth be told, I just don’t see a problem with that.
Another tactic used in this video is showing misleading, irrelevant, or confusing footage and images throughout intense and aforementioned scary voiceovers. Showing footage of Atlas, the bipedal Boston Dynamics robot, for example, is simply ridiculously misleading. Yes, they have a very impressive demo reel. But Atlas also fell over just after a demo. Robots can be described as finicky, temperamental, incredibly frustrating, or just fucking hard. And every time somebody shows you an incredibly impressive demo — particularly of the big, scary Boston Dynamics bots — just remember it likely took at least one hundred takes to get something remotely usable, that is probably also heavily edited. This isn’t to say the field isn’t advancing, it’s to say showing a video and proclaiming it as the current standard when it absolutely is not and will not be for the foreseeable future for known and easily googleable reasons is plain irresponsible.
In a similar vein, showing a deep learning neural net and some math equations and implying that can solve any problem that a human worker can solve is straight-up ridiculous. When the majority of people don’t understand deep learning or how it works, and many feel a vague anxiety or fear around mathematics, and humans in general feel uncomfortable with the unknown, it is fear-mongering, plain and simple, to quickly show people a math equation, tell them it’s evil, and move on. To do it twice, with the same image, is just lazy video editing.
So, with this apocalyptic tone, it’s no wonder Grey sets us up at the end, saying
“I know this is a lot to take in, and you might want to reject it, but…” Well, that’s just a frustrating tactic. Of course I want to reject it… you’ve spent 13 minutes telling me in no subtle way that life as I know it is completely over, and exclusively computers are to blame. Anybody who isn’t thinking critically and one-by-one picking apart these arguments will of course be convinced, because people respond emotionally to emotional words.
These tactics are unfortunately not unique to this video. They are used heavily throughout media coverage of emerging and advancing technologies, and it is important that the research community is extremely careful to spot and call them out, as well as avoid unintentionally using them ourselves (alright, alright! I’ll stop calling humans “squishy robots”). As for the average media consumer, I encourage each and every single person to think critically about not only the content, but the tone of what you consume. Some questions you can ask yourself to spot fear-mongering tactics are:
How am I feeling while listening to this? Do I feel threatened, or relaxed?
What sort of concerns do I have about the topic being presented?
Who is the author of this piece, and what are their affiliations?

Now, as for the particular topic of this video, let me explain why a heavily computerized workforce is actually an incredibly exciting opportunity and amazing advancement for humanity.
Act III: The Progress of Humanity
Over-Abundance Isn’t A Bad Thing
There are a number of proposals that do anticipate widespread automation and consequential unemployment (which, to be clear, I do not believe will happen).
Firstly, there exists a future in which humans live in such abundance that we straight-up don’t need to work. Finland recently ended a universal basic income (UBI) trial after two years, with inconclusive — not negative! — results. Many celebrities espouse the benefits of UBI, and specific townships which have tried it have experienced positive results. Hamilton, Canada, for example, is currently participating in a 3-year UBI experiment, with citizens proclaiming:
“Basic income has given me freedom to live with some dignity with a little extra money to buy the essentials in life” And for those worried about the economic stagnation of such situations:
“Some experiments have even found that basic income increases entrepreneurship, which would ultimately lead to more employment down the road. The truth is that most people want to contribute to society. If we can provide them with basic financial security, they’ll find a way to do it.” Even all that aside, economies don’t necessarily need to grow to be healthy, and reinforcing the notion that they do perpetuates what is, in my opinion, a dangerously ruthless and narrow instantiation of capitalism.
So, in the best case scenario, we, as a society, get our heads out of our own asses and understand that maybe not every single human needs to work for us to all coexist peacefully and comfortably, sharing this big blue rock we know and love. I understand that sounds like a socialist utopia, so let’s just go through all the amazing concrete things that actually are happening now, thanks to the robotics revolution.
Improving Quality of Life For Neglected Populations
Persons with disabilities (including the veteran in the photo above) arguably have the most to gain from a techno-revolution. While most developed nations are better about treating persons with non-standard abilities as real humans, the U.S. is pretty trash at it. From robotic walkers to aid in lower-limb rehabilitation, to applications that help healthcare providers monitor depression, to literally any of the hundreds of robots and devices designed to deliver and improve care to underserved populations, the robotic revolution is an unqualified positive for many whose lives, up until this point, we just haven’t valued like we should.
Social robots provide companionship to depressed seniors, who, when surveyed, report significantly lower loneliness, which in turn can help the elderly live longer with robotic companions. This isn’t taking jobs away from elderly-care workers; this is filling a niche which we, as a society, have currently left wide open. It’s not as if robots will suddenly replace grandkids visiting grandma at the home, or even at her house — it’s that we’re just shitty to our elderly people. Robots, especially sweet, cuddly, friendly ones, can straight-up improve the quality of life for this oft-neglected and significant portion of our population.
Additionally, advancements in technology can lead to overall better care for persons who need significant monitoring. Many emerging systems are starting to come on the market to help under-staffed, over-worked nursing-home workers with monitoring their patients, which leads to improvements in nursing home care.
Robots and new technologies also help non-neurotypical individuals navigate the world. Rosalind Picard, who coined the term and started the field of Affective Computing, designed glasses to help children with autism understand the emotions of those around them. This field also helped create a seizure-detecting bracelet to alert caregivers when individuals experience convulsive seizures. Neither of these would be possible without serious advances in machine learning and robotic technology.
Saving Human Lives
On transportation, Grey says:
“These jobs are over.” Well, some of them, yes. And as a result, 32,000 people won’t be killed in car accidents each year. Hundreds of thousands will be spared the injury and pain of being in, or having a loved one be involved in, car accidents. Families will not be made bankrupt by the financial burden of having to pay for chronic injuries as a result of car accidents.
Food will be delivered faster, and prices will go down. Less will be wasted by rotting in transit. More people will get more nutritious food.
And besides, as I’ve stated above, technological advancement will eventually yield entirely self-driving systems, but most companies are currently investigating hybrid models that will still require human intervention.
But ultimately, are transportation industry jobs more important than human lives?
As for the fear that bots will make doctors obsolete, well, I cannot stress enough how untrue that is.
There is currently a shortage of doctors that disproportionately affects developing nations. With self-diagnosing tools and automated recommendations, more people, across more of the world, will have access to the accurate, reliable healthcare they simply do not have right now. These are not jobs being taken away but, again, an unfulfilled niche that we, as a global society, have left empty. This is exclusively providing more to people who have less: not taking and redistributing resources, but simply allowing people with less to have more.
Not to mention all the other great social-justicy things about robotic decision making, including the potential to reduce human biases in police and medical work, and consequently alleviate racial and class divides, which I won’t elaborate on here because that’s a complex topic I honestly need to give more thought to before I espouse its virtues.
Ultimately, what I find so frustrating is the way so much media coverage simply glosses over how many human lives will and are being saved by emerging technologies. And even though some jobs are going to start being done by computers (again, still hasn’t happened, it’s not “here,” yet), economic growth is not more important than human lives.
Say it with me now: economic growth is not more important than human lives.
There are so many more exciting applications and implications of new technologies that I didn’t even start to cover here. Augmented and virtual reality video games, personalized security systems, decreasing food and shelter prices, bias-detection… just about every aspect of human life does have the potential to be impacted by computers. The limit really is the human imagination, and the way humans choose to put these applications into practice. Humans absolutely should be thinking about the future, and how we can use technology to make positive impacts in our local, personal, disenfranchised, developing, underserved, and global communities.
While I’ve made it clear I don’t think the robot revolution is here, it is coming — albeit slowly — and that’s a good thing. Remember, technology has always provided humans with more: more time, more money, more food, more connection, more buying power, more comfort, more, more, more. Not only do we not have to be scared of computers “coming for our jobs,” but we should be actively brainstorming how we can use computers to make our lives easier. Indeed, how could I make a robot do my job? And, as it turns out, if your job involves any amount of moving around, being in close proximity to other humans, social interaction including conversation, contextualized knowledge, or creative interpretation, sorry, but you’re probably gonna have to keep working at that job a while longer.
And if anybody comes up with a way to automate cognitive emotional robotics research, let me know.