Robot apocalypse … Will artificial intelligence help robots take over the world, or transform humanity for the better?

June 15, 2017

It took 13 years to sequence the human genome for the first time, achieved back in 2003. Now it takes 1 hour. For $99 the genetics profiling company 23andMe will analyse your DNA, to tell you where you come from and hint at where you are likely to go.

The age of big data has, apparently, finally arrived (90% of all the data in the world has been created within the past two years, says IBM). Which means that ubiquitous, scarily powerful artificial intelligence can’t be too far behind. The first is the raw material for the second — and the numbers are staggering.

That data explosion, driven largely by smartphones and other connected devices, has coincided with such dramatic advances in storage and computing power that this sea of bits and bytes can be used, for the first time, to “train” AI systems on a large scale.

Whether it is a program that tags people in photos, recommends films or interprets facial expressions in a video job interview, any AI requires vast amounts of data to get to the point where it becomes useful. The more data it crunches, the smarter it gets at separating the signal from the noise. A system for driverless cars that has seen 30m images of humans will be far better at detecting you at a zebra crossing than one that has seen 1,000. That principle can be applied to virtually everything.
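To make that concrete, here is a minimal sketch of the principle, not drawn from the article: it uses scikit-learn on synthetic data rather than real pedestrian images, and the task, feature counts and sample sizes are all invented for illustration. The point is simply that the same learning algorithm, given more labelled examples, tends to classify unseen cases more accurately.

```python
# A minimal sketch (not from the article) of the "more data helps" principle:
# the same model, trained on more labelled examples, usually separates signal
# from noise better on unseen data. Synthetic data stands in for real images.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# One noisy binary task, a toy stand-in for "pedestrian present" vs "not".
X, y = make_classification(n_samples=120_000, n_features=50, n_informative=10,
                           flip_y=0.05, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=20_000,
                                                  random_state=0)

for n_train in (1_000, 100_000):          # small vs large training set
    model = LogisticRegression(max_iter=1_000)
    model.fit(X_pool[:n_train], y_pool[:n_train])
    acc = model.score(X_test, y_test)
    print(f"trained on {n_train:>7,} examples -> test accuracy {acc:.3f}")
```

Scaled up from thousands of synthetic rows to tens of millions of real images, that gap is what separates a demo from a system you would trust at a zebra crossing.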

Andrew Ng, former chief scientist at Baidu (China’s answer to Google) and a professor at Stanford University, recently gave a speech in which he dubbed AI “the new electricity”. He said: “By replacing steam-powered machines with those using electricity, we transformed transportation, manufacturing, agriculture, healthcare and so on. Now, AI is poised to start an equally large transformation on many industries.”

Not everyone is celebrating the impending revolution. Elon Musk is its most vociferous detractor. The Tesla and SpaceX billionaire has said the rise of machines smarter than us may pose humanity’s “biggest existential threat”, warning of an army of Terminator-style robots that could exterminate us pesky humans. “With artificial intelligence, we are summoning the demon,” he said.

Even discounting a robot apocalypse, the digitisation of our lives, and the use of that information to train algorithms, raises thorny issues, from hastening the onslaught of automation to eroding privacy.

The revolution starts in the internet’s plumbing: data centres, processors and connections. Until recently — the past few years — it was simply too expensive to store the amount of data necessary to train AI systems, and computing power was not up to the task.

Those bottlenecks are fast disappearing. Pure Storage, the sponsor of Snoop Dogg’s performance on the pier, has created flash storage hardware that can handle 125 times the workload of traditional hard disks at dramatically faster speeds. Moore’s law, the observation that the number of transistors on a chip doubles roughly every two years, continues to hold. What do such advances mean, in practical terms? Researchers can now sequence a human genome in one hour. The first genome took 13 years to crack.
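As a back-of-envelope check on those figures (my arithmetic, not the article’s), here is what a two-year doubling compounds to since the first genome was finished in 2003, and what the 13-years-to-one-hour claim amounts to as a raw speedup factor:

```python
# Back-of-envelope arithmetic for the figures above (a check, not a claim from
# the article): the compounding of a two-year doubling since 2003, and the
# 13-years-to-1-hour sequencing claim expressed as a raw speedup.
HOURS_PER_YEAR = 365 * 24

doublings_since_2003 = (2017 - 2003) / 2          # 7 doublings
moores_law_factor = 2 ** doublings_since_2003     # roughly 128x

genome_speedup = 13 * HOURS_PER_YEAR / 1          # 13 years vs 1 hour

print(f"Moore's-law factor since 2003: ~{moores_law_factor:.0f}x")
print(f"Genome sequencing speedup:     ~{genome_speedup:,.0f}x")
```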

The next step is using AI to pick out from that genome the faintest indications of disease that the best doctor in the world could not detect, and treat them before they become a serious health problem. The same principle can be applied across virtually every industry, from helping self-driving cars avoid collisions to suggesting repairs on a 747’s turbine before a part breaks. Par Botes, an executive at Pure, said: “We’re doing things that make things better for humanity. It’s pretty cool.”

Predictably, calling yourself an “AI” company has become a bit like adding “dotcom” to your name in 2000. “It’s another hype cycle,” said Babak Hodjat, the founder of AI company Sentient Technologies.

Hodjat has been through a few himself. He was part of the team that developed the technology that became Siri, the voice recognition system bought by Apple. He said the emergence of super-cheap computing and storage was “tantamount” to the rise of a new generation of AI, one that promises to deliver where previous iterations fell short.

But it takes a lot to make AI truly useful. These systems go beyond simple algorithms, which are step-by-step guides, not unlike cooking recipes. AI consists of algorithms that evolve, or “learn”, as more data is thrown at them.
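As an illustration of that distinction (this example is mine, not the article’s or Sentient’s, and the words and thresholds are invented), a fixed “recipe” might look like the first function below, while a learning version keeps adjusting its internal scores as labelled examples arrive:

```python
# A hedged illustration of the distinction above: a "recipe" follows fixed
# steps written once by a human, while a learning algorithm adjusts its own
# parameters from data. Names and thresholds are invented for the example.

def recipe_spam_filter(message: str) -> bool:
    # Step-by-step rule; it never changes, no matter what it sees.
    return "free money" in message.lower()

class LearningSpamFilter:
    """Keeps per-word scores and updates them from every labelled example."""
    def __init__(self):
        self.scores = {}          # word -> running spam score

    def learn(self, message: str, is_spam: bool) -> None:
        for word in message.lower().split():
            self.scores[word] = self.scores.get(word, 0) + (1 if is_spam else -1)

    def predict(self, message: str) -> bool:
        total = sum(self.scores.get(w, 0) for w in message.lower().split())
        return total > 0

filter_ = LearningSpamFilter()
filter_.learn("claim your free prize now", True)
filter_.learn("lunch at noon?", False)
print(filter_.predict("free prize inside"))   # True: learned, not hard-coded
```

The second version is crude, but it captures the point: its behaviour comes from the data it has seen, not from steps a programmer wrote out in advance.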

For example, Sentient last year rolled out an AI-based financial trading system. It is unlike quantitative hedge funds, which employ highly paid PhDs to design algorithms that they constantly tweak.

Sentient’s AI in effect wrote its own code. The program took months to develop its first “trader”, which manages the personal cash of some of its managers and venture capital backers. Why so long? Because before settling on a format that would best meet the desired criteria, the AI cycled through an unthinkable number of permutations: 10 to the power of 58. “That’s bigger than any number you could imagine,” said Hodjat. Sentient has turned its AI on other problems, from detecting sepsis (a leading cause of in-hospital deaths) before it becomes acute, to making tastier basil by tweaking its growing conditions in a food lab.
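The article does not say how Sentient’s system searched a space of 10 to the power of 58 candidates, and its code is proprietary; but evolutionary search of the general shape sketched below is one common way a program can, in effect, write its own rules, by keeping the fittest candidates and mutating them. Everything here, including the toy fitness function standing in for a real back-test, is invented for illustration:

```python
# A generic evolutionary-search sketch, not Sentient's system: explore a huge
# space of candidate rule settings by scoring candidates, keeping the fittest
# and mutating them. The fitness function is a toy stand-in for a back-test.
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]      # stand-in for "ideal" rule settings

def fitness(candidate):
    # Toy score: how many settings match the target (a real system would
    # evaluate the candidate against historical market data instead).
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    return [1 - bit if random.random() < rate else bit for bit in candidate]

# Start from random candidates, then repeatedly keep the best and mutate them.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print(best, fitness(best))    # converges towards the target settings
```

A real system would score each candidate against market data and run for far more generations, but the loop is the same: vary, score, select, repeat.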

Hodjat added: “The idea with AI is that you task it with a problem, and it comes back with something you weren’t expecting and it’s constantly evolving. But that’s scary to a lot of people. All you need is a naive AI trading operator to lose $10m in a day for people to shut this thing down and say we’re not going to use it any more.”

Such concerns are the tip of the iceberg. The Electronic Privacy Information Center (Epic), a Washington non-profit research organisation, filed a complaint last month with the Federal Trade Commission against a company called Universal Tennis.

The company, Epic said, uses a “secret algorithm” to rank hundreds of thousands of amateur and aspiring tennis players, including children, on a scale from 1 to 16. The system creates its ratings based on millions of match results.

Epic said the scores “define the status of young athletes in all tennis-related activity” and “impact opportunities for scholarship, education and employment”. They may also, it argued, “in the future provide the basis for ‘social scoring’ and government rating of citizens”.

It is not hard to imagine that, as AI grows smarter, its decisions will have far greater impact than simply meaning you miss out on a scholarship due to your weak second serve. Which is what scares Musk and the rest of the AI doom-mongers. Because AI systems evolve and change, it is unclear how well even their creators understand what makes them tick. Hodjat said: “The turn-off is that it is always going to be seen as a black box approach.”

Eric Burgener of research firm IDC expects this debate increasingly to be played out in the courts. “AI clearly raises issues that fall outside the boundaries of any existing laws,” he said. “There will be more lawsuits to force the courts to start defining things.”
