Will robots come for your job?
(This article also appears on LinkedIn)
I recently read Human + Machine: Reimagining Work in the Age of AI by Daugherty and Wilson. It is one of those books that got me thinking - and not because of the “next crazy thing AI will shortly be able to do”.
I like it a lot, but not for entirely obvious reasons. It is not as easy a read as, say, Pedro Domingos’ The Master Algorithm or all those “ten jobs you didn’t know AI will totally replace soon” lists. That is not due to difficult tech jargon or complicated writing. The book is, however, filled with lists and descriptions, which often makes it a somewhat dry read. Sure, there is an anecdote for almost every list item, but that still does not make it a good bedtime read.
What I do like it for, however, is that it has changed my view on certain points. It also provided me with lots of interesting food for thought - and that is something I always highly value in a book.
But let me start by describing what the book is about, either sparing you the drier parts or maybe making you want to dive all in yourself: the choice is of course yours.
Robot + Human makes… a better human and a better robot!
The authors make a major point right from the beginning: machines will not replace humans. Well, at least not in the foreseeable future. Instead, they proclaim there will be a renaissance of human labor. They base this on the simple fact that there are tasks that humans excel at and other tasks where machines do.
The book illustrates this as a spectrum: the human part on the left and the machine part on the right. Between them lies the „missing middle“, a core idea of the book: the space where humans and machines work together. This seems like a trivial point, but hear me out: it is indeed important. My assumption had always been that the right part (the „machine“ part) is constantly creeping towards the left, leaving more and more workers without a job. This would be accelerated by the fact that machine learning systems are now coming for the white-collar jobs that employees have been told for decades were the safe space with regard to automation - and if that assumption is false, there would be nowhere else to go (red-collar jobs maybe?).
Daugherty and Wilson’s point, though, is that this is not necessarily true. Sure, corporations could take the standard approach and reduce their workforce, since fewer people can now perform the same task. But they could also do something a lot smarter: increase their output and throughput.
What does that mean, you ask? I liked an example from the book very much, so I’m using it here: imagine a customer complaint department. A couple of years ago, it switched from snail mail and telephone to online forms on the company’s website. Human employees instantly answer the simpler questions or delegate complex ones to the relevant departments.
This is a perfect process to be automated by a machine learning system. After a training phase, it could answer standard queries and delegate the rest. The company could now lay off a significant part of the workforce in this department.
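As a minimal sketch of what that automated triage step could look like (all topics, answers and thresholds here are invented for illustration - a real system would use a trained text classifier, not keyword counting):

```python
# Hypothetical complaint-triage step: answer standard queries automatically,
# escalate everything else to a human. Keyword hits stand in for the
# confidence score a real ML classifier would produce.

STANDARD_ANSWERS = {
    "refund": "Refunds are processed within 14 days of approval.",
    "baggage": "Each passenger may check one bag up to 23 kg.",
}

def triage(message: str, threshold: int = 1):
    """Return ('auto-reply', answer) or ('escalate', note) for a message."""
    text = message.lower()
    # Score each known topic by keyword occurrences (proxy for model confidence).
    scores = {topic: text.count(topic) for topic in STANDARD_ANSWERS}
    topic, score = max(scores.items(), key=lambda kv: kv[1])
    if score >= threshold:
        return ("auto-reply", STANDARD_ANSWERS[topic])
    return ("escalate", "Forwarded to the relevant department.")

print(triage("Where is my refund for flight KL123?"))
print(triage("Your gate agent was rude and I want to complain."))
```

The point of the sketch is the routing logic, not the scoring: the machine handles the high-confidence standard cases, and everything below the threshold stays with a human.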
The alternative? Open more channels. Instead of sticking to web-based forms (and admit it: we all hate those. I’m looking at you, mobile phone companies!), they could open Facebook Messenger accounts, email addresses, WeChat apps, Instagram accounts and many more. This would significantly improve customer service and increase data intake.
But isn’t that only a theory?
Why did I know that you would ask that? Here's a practical example: I travel to China a lot with the Dutch carrier KLM. A couple of months ago, I had a question about a flight that was somewhat trivial, but to some extent important to me.
While looking for the dreaded customer service hotline number on Facebook, I saw that little „Messenger“ button active on their account. I figured what the heck, let’s give it a try. And I was stunned: I entered into a very nice and somewhat funny conversation that immediately answered my question. Turns out, KLM employs chatbots to help their employees respond on Facebook. If you want (and I highly recommend it), check out KLM’s story here. An interesting side note from the book is that these systems might become the face of your organization to customers, instead of real humans (think Alexa, Siri). The implications of this for UX, processes, marketing, etc. are tremendous, and the book only scratches the surface of them.
Since then I’m a total KLM fan. If any of you guys read this, this is totally awesome and I forgive you for Paris-Charles de Gaulle airport. Well. At least a little bit.
To a certain extent, you can already see this approach in today’s process automation systems. In one of my former jobs, I was responsible for a system that processes incoming digital and paper-based invoices. It was thus able to decrease the number of employees necessary. Sadly, systems like these are usually only used to decrease costs (i.e. employees on your payroll) instead of as a way to improve your company. What I DID learn is that you always need to think about processes first, before implementing any kind of technology. Another point on which I agree with the book.
So this is the first part where my mind was changed. AI is not coming to either destroy us all or bring us some kind of utopia. Look at it in a more realistic way, from a business perspective. Instead of using Artificial Intelligence to reduce your workforce, use it to make dull jobs way more interesting and to increase output and throughput! This goes even further: senior employees who are approaching retirement are able to train AI systems. This
- gives them a very important and meaningful task in their last years with the company and
- reduces the need to re-fill their roles, easing the impact of the demographic change companies have to deal with.
Daugherty and Wilson also introduce a framework called MELDS, short for Mindset, Experimentation, Leadership, Data, Skills. They argue that these are the key areas for making a company’s AI initiative successful. I agree with that, with „data“ and „skills“ being the more obvious parts. I think, however, that companies will not change in one big strategy initiative and install MELDS from the top.
This is one of the big issues I have with the book. In my personal experience, „pockets“ will instead emerge throughout the company. These pockets use a certain kind of machine learning and are successful at it. RPA (Robotic Process Automation) is a good example of this. Though not quite Artificial Intelligence, it employs somewhat the same principles. RPA can be used to reduce the workforce through an automated process. What you see with these kinds of systems is that once you have a few working examples, people see the possibilities and start implementing them in other places. An organization must provide the necessary technical means. Then let people carefully implement certain technologies for very specific processes. The basic principle: they need to clearly prove how this either decreases costs or raises output and throughput.
A role must explicitly state what happens when a person turns on the PC in the morning
The roles the book describes made me cringe at first. I often see that the first thing people do in any kind of new organization is define new roles. These roles are often very ambiguous, and it is not clear what the people actually do all day. A good friend of mine has the following approach to new roles: ask yourself whether you can describe what a person specifically does when she comes into the office and turns on her PC. What programs does she open and how does she start her workday? If you can describe this, your roles are solid and it makes sense to implement them.
With this in mind, the roles in the book seemed a little far-fetched. Ethics compliance manager? Explainability strategist? They don’t pass my friend’s „what do they do when they open their laptop in the morning“ test. But then I stumbled upon this article by Wired and this one by Technology Review. The articles argue that algorithmic bias is becoming a problem. And indeed, once you think about it, a machine learning system tries to find a signal in a lot of noise. If that signal is based on biased data, you get a biased signal. Hence, there are companies being founded right now that make sure your algorithms react accordingly, comprehensibly - and maybe even ethically. Check out O’Neil Risk Consulting & Algorithmic Auditing for a working example. This made me think that I might have been too skeptical about the roles described in the book.
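The „biased data gives a biased signal“ point is easy to demonstrate. A toy illustration (the scenario and all data are synthetic, invented for this sketch): a model „trained“ on biased historical hiring decisions faithfully reproduces the bias, because the bias is the strongest pattern in the data.

```python
# Synthetic example: historical hiring data in which group B was rarely
# hired even when qualified, purely due to past bias.
# Rows are (group, qualified, hired).
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

def train(rows):
    """'Learn' a hire rate per group - the dominant pattern in this data."""
    outcomes = {}
    for group, _, hired in rows:
        outcomes.setdefault(group, []).append(hired)
    return {g: sum(h) / len(h) for g, h in outcomes.items()}

model = train(history)
# The 'model' now scores candidates purely by group membership:
# group A looks hireable, group B does not. The bias became the signal.
print(model)
```

Nothing in the training step is malicious; it simply finds the signal it was given - which is exactly why roles auditing these systems start to make sense.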
While I was driving my car today, I was toying around with the ACC (Adaptive Cruise Control) system. It uses a radar at the front of the car to detect a decelerating or accelerating car in front of me and adjusts my speed accordingly. I always switch on the “system view” feature so I can instantly see whether the system sees the car in front of me or not. Sometimes it detects a car pulling into my lane very late, sometimes it loses the one in front of me. I always wonder “why” and wish I could see what the sensor actually sees - I only get a symbol once the car is fully detected. It would help a lot to be able to see what happens before the system decides “well, yes, that is indeed a car in front of me!”. Today, though, I also thought about the “Explainer” role: I wish my car would explain much better what it was seeing and how it was deciding what to do. This would raise confidence in the algorithm and also help me use the system better by letting me predict how it will behave in certain situations.
Regarding the explainer role, this means we won’t only need people to explain algorithms better. We will also need to incorporate this thinking into the whole user experience. And you can actually see the start of this in certain software: Netflix displays “because you watched...” before it suggests certain other movies.
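The UX pattern itself is simple: make every recommendation carry the reason it was made, instead of emitting bare suggestions. A minimal sketch (the titles and the similarity table are invented; a real recommender would derive similarity from viewing data):

```python
# Hypothetical 'explainable recommendation' pattern: each suggestion is
# paired with the evidence that produced it.
SIMILAR = {
    "Blade Runner": ["Ghost in the Shell", "Ex Machina"],
    "The Office": ["Parks and Recreation"],
}

def recommend(watched):
    """Yield (suggestion, explanation) pairs instead of bare titles."""
    seen = set(watched)
    for title in watched:
        for similar in SIMILAR.get(title, []):
            if similar not in seen:
                yield similar, f"Because you watched {title}"
                seen.add(similar)

for title, reason in recommend(["Blade Runner"]):
    print(f"{title} - {reason}")
```

The design choice is that the explanation travels with the result through the whole pipeline, so the UI never has to reconstruct “why” after the fact.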
Google, as another example, doesn’t do that. I’ve wondered for a long time why I get certain results and their corresponding ads. I understand that this is one of the cores of their business model. But I would actually like to help it correct a specific finding: show me how you came to this conclusion, then let me correct it by giving you feedback on your path of “thinking”.
These two insights changed my mind regarding the roles section of the book: we actually do see these kinds of jobs emerge. I still think they are a bit far-fetched; if I remember correctly, the authors themselves state that they are somewhat ambiguous and might not be implemented or named exactly like that. But it does make sense to think about whom we need to safely implement AI systems in companies, and what roles might be helpful or even necessary to do so.
The book goes a lot deeper into roles, skills and the psychology behind certain ideas. For example, it argues that employees need a certain amount of control over algorithms to make their work meaningful and to increase trust in these systems. It is also filled to the roof with links to statistics, articles, studies, examples and other facts. The authors also work through several departments of companies (supply chain, back office, R&D, etc.) and present lots of actual case studies where AI has been successfully implemented. Don’t put too much stock in these, though - it’s a little difficult to evaluate how many of these stories are marketing stunts and whether they are still in operation. They still give you many good ideas on where to look in your own workplace.
One more takeaway is that you need to start reading about, thinking about and working with machine learning techniques, for two reasons:
1. As the theory goes, jobs will not be replaced, but they WILL change. You, as an employee, will need specific skills - not in developing AI systems, but in using them - so start developing those skills in yourself. The fact that you’re reading this article is a hint that you’ve already understood this principle :)
2. As an employer or a member of management in your company, you should also start to develop these skills - if not in yourself, then at least in your employees. Again, real ML developers are hard to come by. But your workforce should be readied for the coming change, and it’s your responsibility to prepare them. To use a quote from the book: think of AI more as an investment in skills than in technology.
What the book doesn’t give you - despite its claim - is a concrete plan for how to change your boring 1990s-style company into a lightning-fast, agile, AI-powered, 21st-century one (please, read the sarcasm in this!). It is more a call to action. I like the MELDS framework, but a “start here” approach would have been useful: what are the actual first steps to get me moving in the right direction? I still highly recommend reading it - if you’ve got the stamina to push through :)