Thursday, April 19, 2018

On Artificial Intelligence

I have long struggled with what artificial intelligence actually is. I often have problems with concepts that don’t seem to fit my frame of reference, and I don’t know why it happens with some concepts and not with others. In general I like ambiguity. Yet whereas game theory never gave me an issue, spaces with more than three dimensions did; and whereas I love the obscure and uncertain elements of economics, certain parts of the subject always defeated me. There seems to be a stop switch inside my brain, almost random in when it trips but rather final once it has.

Today, I think I managed to overcome the switch, and in the end it was as simple as reading a Wikipedia article. Last month I read through a long and rambling survey in The Economist about artificial intelligence in business, and I hated it. It seemed little more than a list of computing applications, without any real structure and with far less of the deduction and conclusion that Economist surveys usually include.

I was frustrated, and I wondered if my blockage might be one of definition. The survey made some attempt at one, but I then found that its articles did not really fit the definition used. So I tried Wikipedia, and a quote there worked wonders for me. It was a quip, usually attributed to Larry Tesler – artificial intelligence is whatever hasn’t been done yet.

I found this quip to be a beautiful definition, and it succeeded in removing my blockage. Artificial intelligence is any technological advance using computers that closes the gap between what a machine can achieve and what a human can. Actually, it is even simpler: it is any technological advance using computers at all, because the qualifying clause is moot. Since humans can use machines, machines can’t overtake us except in things like raw processing speed, so any advance that really is an advance will close the gap – though never to zero.

The insight solved many things for me. First, it helped me understand why AI is so old: the term was coined back in 1956, at the Dartmouth workshop. A good friend of mine joined Shell with me in 1982 and was assigned to a unit working on AI. I didn’t really understand what that was, but it sounded exciting. Now I realize that sounding exciting is rather the point. The concept is so generic that it was never really invented, merely coined, in the same way that we always practiced customer relationship management even before someone coined the term CRM. So something like voice recognition has moved progressively from science fiction to dream to goal to flawed reality to mainstream application, and only at that last step does it cease to be classified as AI.

So why the big fuss now? Because several enabling advances have arrived at the same time, making more things feasible: it is a period of intense computing innovation. If I were being cynical, I might note that banks and analysts are especially tuned in to innovation just now, since there is no recession to fight and there is a dearth of other star investment opportunities. The situation is highly reminiscent of the internet boom of 1999-2000, when the same mix of genuine advance and eager capital produced the same fuss.

I actually had relevant roles at Shell during the internet boom, first as head of strategy for European retail, and then as e-commerce head for the technical business – of all the jobs I held, the one I was least qualified for, in every conceivable dimension: I had no technical qualification, no relevant network, and barely understood what the internet was.

Still, at least I was able to learn some lessons from that crazy time, including that few others really knew what the internet was either, and that those who did didn’t know what to do with it. In the early days the field attracted misfits – computer people who did not like the discipline and commercial constraints of IT departments. These people, from inside Shell and outside it, came through my office in a line looking for funding. They had flow charts and utopian dreams. What they usually did not have was any commercial or practical sense.

It was fun trying to find a path forward in this situation. Some ideas seemed to have merit but would be expensive or complex, while others had little upside. So I tried to work out what our core strengths as a business were, and to look for ideas that leveraged them. I found few.

I even had one idea myself, one not involving any flow charts, and I regret that I didn’t follow it up with more energy, because it might just have worked. In retail, I wondered if we could use selected stations as local delivery points, recognising that home delivery of large or valuable goods was not ideal, and that Shell had the core strengths of convenient sites, staffed for long hours, with car parking. Delivery remains a challenge for Amazon and others, and we might have struck a large deal with them that survived and even changed the landscape.

I also have plenty of experience of failed initiatives from this era. We were rapidly growing our shop business at the time, and were learning about supply chains, layouts, and the vagaries of demand and supply. Some consultant sold us the idea of a global system tracking all goods from our point-of-sale systems, so we could be more efficient and squeeze our suppliers. It was maybe twenty years too early: even at a national level, our data quality was far too low and our scale far too small to benefit from such an investment. We were arrogant enough to think we could predict next week’s demand for ice cream better than a local retailer could.

We also thought at the time that our loyalty schemes and card-processing systems gave us a competitive advantage through the customer data they gathered. This one was only ten years too early: back then we had no real idea how to use the data profitably, whereas now doing so is not just possible but core and necessary for businesses like Shell’s.

That was 1999, and we all know what happened to that bubble: 95% of the ideas came to nothing and stock prices ballooned then collapsed, but in the end Amazon and a few others were left not only standing but on their way to becoming the titans they remain today.

Looking back, that wasn’t the only time I ventured near the forefront of AI. In 1979, I was in a team that may have created the first route-finder tool for the UK. Nowadays Waze and its competitors are everywhere, but in 1979 this could have been described as AI. We created a database of junctions and links, and an algorithm to work out the cheapest route between any two places. To get from North London to South London, the algorithm considered millions of candidate routes, including some via Scotland. It had the key elements: rapid computing power and a large data set. Nowadays such tools add the extra dimension of variable travel times based on up-to-the-minute data, but at the time even our simple algorithm needed all the computing power we could muster.
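The classic way to find a cheapest route through a network of junctions and links is Dijkstra’s shortest-path search. The sketch below is not the 1979 code, just a minimal modern illustration in Python; the toy network and its names are invented.

```python
import heapq

def cheapest_route(links, start, goal):
    """Dijkstra's shortest-path search over a graph of junctions.

    links maps each junction to a list of (neighbour, cost) pairs.
    Returns (total_cost, route) or (None, []) if goal is unreachable.
    """
    queue = [(0, start, [start])]   # (cost so far, junction, route taken)
    visited = set()
    while queue:
        cost, junction, route = heapq.heappop(queue)
        if junction == goal:
            return cost, route
        if junction in visited:
            continue
        visited.add(junction)
        for neighbour, link_cost in links.get(junction, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbour, route + [neighbour]))
    return None, []

# Invented toy network: the search happily considers the detour via
# "Scotland" but never expands it, because cheaper routes win first.
links = {
    "North London": [("South London", 12), ("Scotland", 400)],
    "Scotland": [("South London", 410), ("North London", 400)],
    "South London": [],
}
print(cheapest_route(links, "North London", "South London"))
# -> (12, ['North London', 'South London'])
```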

Will this current excitement about AI end in tears as well? The last bubble arose from the internet; this one arises from the step changes in computing power and data availability coming onstream. Reading The Economist survey, I am sure that most of its stories will end in failure once again. If I had to pick winners, I would stick with the same ones as last time, because they have existing scale and know-how and, critically, data. And there will be some fabulous advances buried among the debris. I would also try to apply my own lessons. The critical success factors are usually in business rather than technology – is there a way of making money, and are the key strengths relevant and in place? Within the technology, look for the weakest link – often something about incentives, or data quality, or boundaries like the last mile or battery life.

And now that I know what AI is, at least well enough for my own head, I won’t be so mentally repelled. It turns out to be what I was doing nearly forty years ago without knowing it, and in the same way, what seems amazing now will seem routine in ten years’ time. I think I also understand better why Hawking and others consider AI a global threat. I don’t think it is anything about mad machines taking over, or aliens, or science fiction. It is simply about speed. Algorithms can model anything a human can devise, but at warp speed. So, for example, leaving nuclear weapon strategy and implementation in the hands of computers is completely feasible, but very dangerous, because things can spiral out of control faster than humans can easily stop them. So it is all about controls: how humans intervene to put tripwires into computer executions. With some of the current batch of humans in positions of influence, I can see what Hawking was getting at.
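As a toy illustration of that tripwire idea – everything here is invented for the example – a guard that lets routine actions run unattended but refuses high-impact ones until a human says yes:

```python
def execute_with_tripwire(action, impact, threshold, confirm):
    """Run an automated action, but trip to a human above a threshold.

    action:    zero-argument function performing the automated step
    impact:    estimated consequence score of this execution
    threshold: above this, a human must approve before anything runs
    confirm:   callable that asks the human and returns True or False
    """
    if impact > threshold and not confirm(f"Impact {impact} exceeds {threshold} - proceed?"):
        return "halted by human tripwire"
    return action()

# The routine case runs unattended; the dangerous case waits for a human.
print(execute_with_tripwire(lambda: "rerouted tanker", impact=2,
                            threshold=5, confirm=lambda msg: False))
print(execute_with_tripwire(lambda: "launched response", impact=9,
                            threshold=5, confirm=lambda msg: False))
```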
