Deciding that there are things that computers will never do is bad for you

Lorenzo Wood
Jul 20, 2018 · 12 min read

TL;DR — As the buzz about the threat to jobs from automation grows, I hear people claiming that computers will never work collaboratively, be creative or have intuition or empathy. Therefore, we are told, focus on these human skills and your employability is safe from the machines.

This is bad for you. It’s bad for the individual, because it creates a false sense of security. It’s bad for humanity as a whole, because it discourages investment in things that may be of great benefit.

At a philosophical level, the claim may be true: computers will not be able to “feel” emotions any time soon. At a practical level — the level that matters for jobs — it is possible to see how computers will provide many of the same benefits to people that these human qualities do. This is harder to achieve than today’s more obvious uses of automation, but in some cases the results may be better.

Focusing on those skills is not wrong; nor is it sufficient.

Daniel Susskind, co-author of The Future of the Professions: How Technology Will Transform the Work of Human Experts, described a common feature of the many interviews with professionals — lawyers, accountants and so on — that he conducted as part of the research for the book. Yes, everyone agreed, technology (and particularly “artificial intelligence”) would have a major impact on their respective industries. Yes, they also agreed, it would have a huge impact on their companies: many jobs would have to change greatly, and some, undoubtedly, would no longer exist. Except, of course, the respondent’s own job, which obviously could not be done by a computer.

It seems likely that not everyone was correct on that last point.

Is it surprising they thought like this? Not really. There’s plenty to read about “illusory superiority”, the cognitive bias that makes us consider ourselves better than others.

“Sheeple” on xkcd by Randall Munroe

There’s also plenty of precedent in business. Kodak’s decline through not embracing the world’s move from film to digital imaging quickly enough is often contrasted with the meteoric rise of Instagram. Blockbuster famously passed up the opportunity to buy Netflix.

From Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation, McKinsey Global Institute, December 2017, p. 3

In recent years, the availability of huge amounts of data and of specialised processing power has made some decades-old machine learning techniques much more useful (a familiar pattern: relational calculus was a theoretical curiosity until query optimisers gave relational databases practical performance), and now much is written about the “fourth industrial revolution” and the threat to jobs from automation.

These are complex matters and there is a broad spread of predictions. Historians point out that there have been big shifts of jobs out of (e.g.) agriculture and manufacturing, but these changes happened slowly enough for demand in other sectors to grow. Boosters say that this time it’s different, and that technology even poses an existential threat to humanity. More seasoned forecasters point to the lack of investment and infrastructure as a tempering force.

In all of the hubbub, increasingly I hear remarks along the lines of “computers will never work collaboratively”, “computers will never be creative”, “computers will never have intuition or empathy.”

This is bad for you. It’s bad for the individual, because it creates a false sense of security. It’s bad for humanity as a whole, because it discourages investment in things that may be of great benefit.

Practicality, not philosophy

At this point, you may be rolling up your sleeves to launch into an argument that empathy and creativity are human qualities by definition, so this is a non-argument.

The OED defines empathy as “the ability to understand and share the feelings of another”. Sharing someone’s feelings, presumably, means substantially feeling the same way yourself. Do I think a machine can feel, for example, excluded or marginalised? To answer in the affirmative, we’d be off into the territory of artificial consciousness, rights for machines etc. I don’t think any of that is impossible but I suspect it’s a very long way off and, more importantly, I don’t think it’s relevant.

People saying “computers can never have empathy” are usually not making a philosophical point. They’re suggesting that there are safe areas where human work can be the only solution. They’re making a practical point. And they’re wrong.

The illusion of working collaboratively

To collaborate is to “work jointly on an activity or project”.

Collaboration is a skill. It requires particular behaviours, not all of which are obvious or natural. It requires an organisation that values collaboration and enables it (if you work in professional services, one of the things you need is the ability to find the right people and get them to work on your project with a minimum of red tape). And when it works well, it can be very rewarding.

This all sounds very human. And in stark contrast to tools. We don’t talk about “collaborating” with Excel (maybe “fighting with” it). We “use” tools like Excel.

I suggest that this asymmetry is a side effect of the immaturity of the technology. Tools don’t have agency; we wield them. People have tried to create anthropomorphic mechanical collaborators, without much success.

Satirical rendition of Microsoft’s “Clippy” assistant in Microsoft Office c.1997

However, when we give our tools agency, there is more opportunity to work in a way that feels like collaboration. When Adobe Fellow David Nuescheler demonstrated Adobe’s Sensei creative assistance technology, he presented it as a way to reduce the dreary mechanical parts of the creative process. Yet, being able to make suggestions about translating from sketch to screen, or being able to re-work something with a different output in mind, seem like the kinds of things that happen in collaborations, particularly with someone in a junior position.

Working with a team of peers in the same space can be stimulating and rewarding. More and more collaboration happens remotely, and across time zones: when you are already using technology to collaborate, is it unreasonable to imagine some of the members of the team being machines?

Physical presence is a valuable part of collaboration. If you have a robotic kitchen, will it be more fun and rewarding for you to treat it as an appliance (as Moley shows in its film) or to join in and share the cooking task?

Collaborative working is definitely a good skill for people to develop. It will increasingly be useful as a way of working with machines alongside working with people.

The illusion of creativity

Creativity is often presented as a uniquely human alchemical process, in which people’s individual perspectives and experience allow them to connect the dots. Art is as much (more?) about the story of its creation and the significance accorded to it as it is about the artefact itself.

Here is a conceptual process for producing good, original ideas: draw on sources of inspiration; generate many candidate ideas; evaluate them, rejecting most; refine, test and rework the survivors.

Clearly, people do not (always) consciously follow a process like this, but the pieces are there: people are inspired; they have ideas, reject some, shape them, perhaps test them on other people or do sketches or studies or experiments to refine them.

The human way to do this is to be thoughtful in generating ideas. For example, people rarely write by putting random words one after the other to see whether an interesting piece of writing emerges. It is human to exercise judgement in evaluation, to look at ideas with fresh eyes or from a different perspective, to make connections. These abilities resist explanation. “Expert systems” of the 1980s, which attempted to codify how talented people worked, were not very successful, because those people’s understanding of how they did what they did was not very good.

Computers can generate and evaluate millions of ideas in a short time. This approach is already popular where there is a well-defined, deterministic evaluation method — computer-aided drug discovery and the design of strong, efficient building structures, for example. It has been used to generate novel designs for electronic circuits. Latterly, it has been applied to the visual arts (the image below is from a selection of computer-created images used in a study in which viewers were asked to identify which had been created by people; the computer scored 53%). It has also been applied to generating new photographic imagery in a particular style — for example, researchers at NVIDIA demonstrated the use of “generative adversarial networks” (GANs) to create images that look like photographs of celebrities but feature no real people.

Artworks created by Creative Adversarial Networks (CAN). Courtesy of the Art and Artificial Intelligence Laboratory, Rutgers University, published in Art World
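To make the generate-and-evaluate idea concrete, here is a toy sketch in Python (the target string, alphabet and scoring function are all invented for illustration): random mutation plus a deterministic evaluator is enough to “discover” a design, with no understanding anywhere in the loop. Real systems swap the string for circuit layouts or molecular structures, and the scorer for a simulator or a trained discriminator; the loop is the same.

```python
# Toy "creativity" as generate-and-evaluate: mutate candidates at
# random and keep whichever the evaluator prefers.
import random
import string

TARGET = "STRONG LIGHT TRUSS"  # stand-in for a design brief (invented)
ALPHABET = string.ascii_uppercase + " "

def score(candidate: str) -> int:
    # Well-defined, deterministic evaluation: characters in place.
    return sum(a == b for a, b in zip(candidate, TARGET))

best = "".join(random.choice(ALPHABET) for _ in TARGET)
while score(best) < len(TARGET):
    # Generate a variant by mutating one position at random...
    i = random.randrange(len(TARGET))
    variant = best[:i] + random.choice(ALPHABET) + best[i + 1:]
    # ...and keep it only if the evaluator does not prefer the original.
    if score(variant) >= score(best):
        best = variant

print(best)  # converges on the brief, one blind mutation at a time
```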

There are well-worn criticisms of this sort of creativity: the machines do not understand what they are doing; they are not ingenious or flexible, relying on how they have been programmed to do a task (for further reading, try this essay by cognitive scientist Margaret Boden).

While we are probably a long way from creating the artificial general intelligence (AGI) that rivals people for its flexibility, it seems as though we can pick any domain we like and make machines that produce work as creative as we need it to be.

For example: armed with knowledge about a person, can we craft an image, a headline and some copy that make that person more likely to buy a product than a generic version would? How much more likely? The delta gives us a budget: we can’t afford to have people do that for each customer, but if we can do it with machines at a low enough cost, it’s worth it.
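As a back-of-envelope sketch of that budget arithmetic (every number below is invented for illustration):

```python
# Hypothetical figures: what is a personalisation uplift worth?
customers = 1_000_000
margin_per_sale = 40.00    # assumed profit per purchase
generic_rate = 0.010       # assumed conversion with generic creative
personalised_rate = 0.013  # assumed conversion with tailored creative

extra_profit = customers * (personalised_rate - generic_rate) * margin_per_sale
budget_per_customer = extra_profit / customers

print(f"Uplift worth £{extra_profit:,.0f} in total")
print(f"Budget of £{budget_per_customer:.2f} per customer")
# Pence per customer: far too little to pay a person for each one,
# but possibly plenty for a machine.
```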

Could a machine create a brand new business? If we give it the canvas to describe one (instead of the pixels in an image, the parameters of a detailed market simulation) and the tools to discriminate (the results of running such a simulation), why not? We can give a machine the ability to run small experiments directly and at scale, to inform its work. The value of successful businesses is self-evident: surely a big enough prize.

The illusion of empathy

In the mid-1960s, MIT computer scientist Joseph Weizenbaum wrote ELIZA, which he described in his 1966 paper “ELIZA — A Computer Program for the Study of Natural Language Communication Between Man and Machine”. It is a program we would recognise today as a “chatbot”. It conversed in the manner of a psychotherapist, producing transcripts like this. He created it to explore the ways in which a computer could create the illusion of understanding. ELIZA’s completely illusory understanding relied very heavily on assumptions made by the person conversing with it.

In his paper, Weizenbaum notes that, even though he had to work within “the usual constraints dictated by the need to be economical in the use of computer time and storage space”, “some subjects have been very hard to convince that ELIZA (with its present script) is not human.”
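To see how little machinery the illusion needs, here is a minimal ELIZA-style sketch in Python. The three rules and the reflection table are invented for illustration, not taken from Weizenbaum’s script, which had many more rules, ranked by keyword.

```python
# A minimal ELIZA-style exchange: keyword rules plus pronoun reflection.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # catch-all when no keyword matches
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower().rstrip(".!?"))
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```

All of the apparent understanding in that exchange is supplied by the human reading the reply.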

Modern computers are comparatively free from the constraints of MIT’s IBM 7094 in 1965. Even an iPhone is roughly a million times faster.

Stills from Toyota Concept-愛i | Concept Movie, Toyota, 2017. The title is pronounced “Concept-i”; 愛 is the Japanese for “love”, pronounced “ai”

The image above is from a conceptual study: Toyota’s Concept-愛i from 2017, by William Chergosky and colleagues. The film is well worth its nine minutes.

The story covers maybe forty years in the life of a family. We first meet Noah as a small boy, playing in his family’s Toyota. When Noah gets his driving licence, we meet the intelligence embodied in the car, which he names Yui. Yui was there when Noah was out with his friends. It was there when he was dating the girl he would marry, and as their child grew up.

Towards the end, Noah is having a lovely moment with his young daughter as he drops her off for school. We cut to a few years later, and his daughter — now exhibiting typical teenage behaviour — leaves him feeling rather deflated. Though he doesn’t say so, the car infers this emotion, empathises (“you’re a great dad”), and offers up the music which was playing that previous moment many years before. The car correctly predicts that this will trigger positive memories for Noah and improve his mood. It also predicts that showing the daughter a recording of his reaction will trigger positive memories for her (“he’s not that bad really”) and clearly talks to her in a different way than it talks to her father (“yeah… as dads go”).

Does it exhibit empathy? Does it behave in an empathetic way? Absolutely. Does it feel emotions? No suggestion of that; nor any need.

Concept-愛i is just that: a concept. And it looks really hard to do, on at least two levels. One is the design and engineering challenge: if the machine misunderstands what is being asked of it, or acts inappropriately, trust may be immediately lost, so reliability is paramount. The other (bigger?) challenge is commercial. Toyota talks about this as a 2030 vision. Will Toyota have the commitment to make something this good? Amazon’s Alexa is a toy by comparison, but it is doing very nicely for Amazon even so.

Making a machine behave empathetically in a way that is useful is hard. Is there enough value to make the effort? If everyone had a friend they could chat to, who would be available any time, who would listen and empathise with their emotional ups and downs, would that be a good thing? Would people be happier, healthier, wealthier? And if that friend is a machine, does it matter?

Protectionism is unnecessary

Doom-mongers claim that AI will take over the world and render the majority of humans useless. Machine learning entrepreneur Sean Gourley suggests that a gap at the bottom will remain where the cost of human labour is less than the cost of the electricity required for a machine to do the same job. Faced with such a prospect, it’s natural to take a stand for the uniquely human qualities that will provide us with purpose.

We may assert that machines do not feel emotion or exhibit real creativity. When we take that assertion and extrapolate that tasks reliant on these qualities are the sole domain of people, we limit our ambition. The only constraint on automating any given result — including illusions of creativity and empathy — to any level of quality we desire appears to be the effort we are prepared to put into it, which is driven by the value to us of that automation.

This is why these assertions are a bad idea.

If you are making them to convince yourself that the work you do is immune to automation, you are merely failing to plan for the moment that someone decides the marginal cost of automating what you do (which might be very low, if some adjacent task has already been automated) makes it worthwhile. That might not happen tomorrow, to be sure; but a contributor to rapid growth in technology is that most problems need be solved only once. For example: how complicated is it to write a program to detect faces in images? In Python, you can do it in 24 lines — because hundreds of problems have already been solved by others.
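For illustration, here is roughly what those few lines look like using OpenCV’s bundled, pre-trained detector (the filenames are hypothetical). Almost all the hard work was done by whoever built and trained the cascade; the remaining problem needed solving only once.

```python
# A minimal sketch of face detection with OpenCV's bundled Haar cascade.
import cv2

# Load a pre-trained face detector shipped with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")  # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces and draw a rectangle around each one.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5):
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", image)
```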

If you are making them to rule out some properties of machines, you are lobbying for a worse society. Would you rather have mindless retargeting of a few human-created banners (ghastly for users, but it makes money) than individually crafted experiences that are both pleasant and effective? Because machines can’t be creative? Would you rather your elderly relative’s visits from a carer be the high points of an otherwise dreary, solitary life, than have them feel looked after all the time? Because machines can’t have empathy?

Eric Posner and Glen Weyl, in their book Radical Markets, introduce the term “collective intelligence”, in preference to AI, to reinforce the fact that automation is already a collaboration between humans and machines, albeit an asymmetrical one (their book is largely about addressing asymmetries; they begin their chapter Data as Labor with a transcript of an imaginary chat in which Facebook asks a user, Jayla, about two of her friends and pays her for her time and insight).

Human protectionism is unhelpful. Happily, it is also unnecessary. We can, and should, expect automation to grow in capability and sophistication. We should expect — demand! — that it be as collaborative, creative and empathetic as possible, because that will make it more fun and rewarding for people to work with, and of greater benefit to them.

People are not the enemy; they are the point.


Lorenzo Wood

I like making impossible things work, and helping others do the same