How much longer will AI fever last?

 
Unless you are a hermit, you cannot escape the fact that artificial intelligence (AI) is now all the rage. Everyone is talking about it: you can hear it discussed ad nauseam on radio and television, and countless articles mention it in the print media and on the internet. People from all spheres of life, from Pauper to Prince and Pope to Prime Minister, have been making various pronouncements on it. Putting their money where their mouths are, governments and large corporations have been pouring R&D funds into it.

In the UK, as part of its Industrial Strategy, the government has declared ‘Artificial intelligence and data’ as one of its Grand Challenges and has pledged to put the country at the forefront of the AI and data revolution. Last May, in a speech at Jodrell Bank, the Prime Minister set her government the mission to “use data, artificial intelligence and innovation to transform the prevention, early diagnosis and treatment of diseases like cancer, diabetes, heart disease and dementia by 2030.”

Other European countries have been no less enthusiastic about AI. For example, Finland has a plan, together with Estonia and Sweden, to be Europe’s No.1 ‘laboratory’ for AI trials. In a strategy report last year, the Finnish government estimated that approximately 1 million of its citizens would eventually need to update their AI skills. The process of training them has already started with an initiative that has taught one per cent of Finns, with no programming knowledge required, to understand AI and the opportunities it offers.

So what is AI and why is there now such a fuss about it? After all, it has been around for a long time. In fact, the term “artificial intelligence” itself was coined by John McCarthy over sixty years ago, at a summer workshop at Dartmouth College, Hanover, New Hampshire, where he was working as an assistant professor. AI is now commonly accepted as the branch of computer science concerned with making computers and other machines perform tasks that typically require human intelligence.

AI has come in waves. In the early days, researchers such as Simon and Newell worked on computer programs for universal problem solving. Given the computing technology available at the time, those general solvers could address only toy problems. Meanwhile, other researchers, most notably Rosenblatt, tried to build computers modelled on the human brain; again, success was limited. In the late 1960s, the nascent area of neural computing suffered a serious setback when Minsky and Papert pointed out a fundamental limitation of Rosenblatt’s perceptron: a single-layer network cannot learn functions, such as the exclusive-OR, that are not linearly separable.

From the mid-1960s onwards came another AI wave. We saw the advent of software for solving problems that were difficult and real but narrow in scope, for example medical diagnosis and mineral prospecting. The computer programs developed, known as expert systems, contained codified human expertise and used that knowledge to reach the required solution. We also saw the birth of fuzzy logic for handling problems characterised by imprecision, and of nature-inspired techniques such as genetic and evolutionary algorithms for solving complex optimisation problems.

Interest in neural networks and expert systems resurged in the 1980s. The Fifth Generation Computer Systems initiative by the Ministry of International Trade and Industry in Japan triggered other funded research programmes here and elsewhere that focused on information technology and intelligent computing. For example, the UK government had its five-year Alvey programme of collaborative research in information technology. Europe-wide, over the period 1985-1998, there were five consecutive European Strategic Programmes on Research in Information Technology (ESPRIT).

It is not certain what started the current AI fever or when exactly it began. Whereas previous waves were associated with step changes in computer technology (the births of the digital computer, the minicomputer and the microcomputer), there is no single phenomenon to which today’s renewed intense interest in AI can be attributed. Instead, several factors have contributed to it: faster and cheaper hardware, the availability of cloud services, autonomous cars, more powerful machine learning paradigms, and the spectacular successes of software such as IBM’s Watson and Google DeepMind’s AlphaGo.

So when will the fever subside? To find the answer, I was tempted to apply a theory by a UC Berkeley professor about the duration of research fashions. Cynically, he posited that they generally last 7-8 years, or roughly two US election cycles. However, as election cycles differ around the globe and, moreover, the precise beginning of the current AI fever is unknown, that theory cannot predict the end with certainty. On the other hand, without having to be a great futurist, I can confidently say that this AI fever will pass when it is time for another fad to take over.

References:

  1. https://www.theguardian.com/us-news/2017/dec/16/san-francisco-homeless-robot

  2. https://www.telegraph.co.uk/news/2018/09/06/prince-charles-warns-crazy-ai-world-part-human-part-machine/

  3. https://www.weforum.org/agenda/2018/01/the-pope-calls-for-humanity-in-the-face-of-workplace-automation/

  4. https://www.gov.uk/government/speeches/pm-speech-on-science-and-modern-industrial-strategy-21-may-2018

  5. https://www.politico.eu/article/finland-one-percent-ai-artificial-intelligence-courses-learning-training/

  6. http://www.birmingham.ac.uk/schools/mechanical-engineering/index.aspx

About the Author

Duc Pham is the Chance Professor of Engineering and Director of Research in the Department of Mechanical Engineering at the University of Birmingham.

Featured in CMM

 