Ambient tracking, Bounty Programs, Solar Highways, 5D Printing, Digital Frailty, Nanobot Nurses, and Thinkables? And The Big 9

April 15, 2019

This month marks the 30th anniversary of Tim Berners-Lee’s original proposal for a World Wide Web, which his supervisor called “vague, but exciting.” We’ve come a long way in those last hyperconnected 30 years.

So … Ambient tracking, Bounty Programs, Solar Highways, 5D Printing, Digital Frailty, Nanobot Nurses, and Thinkables … are just some of the more exotic examples from the 315+ technology and science trends in the latest report from futurist Amy Webb’s team at The Future Today Institute.

Some key takeaways include the following:

Privacy is dead

One persistent theme is surveillance. Whether it’s how hard we press on our mobile phone screens, our faces as we cross an intersection, our genetic matches with distant relatives, our conversations in the kitchen or even the associations we keep, we are now being continually monitored. Just by virtue of being alive in 2019, you are generating data—both intentionally and unwittingly—that is mined, refined, productized and monetized. We no longer have an expectation of total privacy. At least not like we’ve known it before. Companies that rely on our data have new challenges ahead: how to store the vast quantities of data we’re generating, how to safeguard it, how to ensure new datasets aren’t encoded with bias, and how to anonymize it before sharing it with third parties.
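
The report frames these as open challenges rather than prescribing techniques. As one small, concrete illustration of the “anonymize before sharing” problem, the Python sketch below shows keyed-hash pseudonymization of a single record—a weaker guarantee than true anonymization; the field names, salt handling, and example data are all hypothetical, not drawn from the report.

```python
# Minimal sketch: pseudonymizing a record before sharing it with a third party.
# All field names and the salt-handling scheme are hypothetical.
import hashlib
import hmac

SECRET_SALT = b"keep-this-key-out-of-the-shared-dataset"  # stored separately, rotated

def pseudonymize(value):
    """Replace a direct identifier with a keyed hash so records remain
    linkable across the shared dataset without exposing the raw value."""
    return hmac.new(SECRET_SALT, str(value).encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_for_sharing(record):
    shared = dict(record)
    # Drop direct identifiers the recipient does not need at all.
    for column in ("name", "email", "street_address"):
        shared.pop(column, None)
    # Keyed-hash the identifier the recipient needs for joins.
    shared["user_id"] = pseudonymize(record["user_id"])
    return shared

print(prepare_for_sharing({
    "user_id": 42,
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "street_address": "12 St James's Square",
    "screen_pressure_avg": 0.37,  # the kind of behavioral signal described above
}))
```

Real anonymization is harder than this: quasi-identifiers like age, location and habits can still re-identify people, which is exactly why the report treats it as an unsolved challenge rather than a checklist item.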

Voice SEO

By the end of 2020, about half of your interactions with computers will happen by voice. Whether you’re talking to a smart speaker, your car’s dashboard, or your mobile digital assistant, you’ll soon talk more often than you type. As content creators venture into spoken interfaces, publishers and other companies will soon focus more on voice search optimization (VSO). The emergence of VSO affects scores of industries: advertising, hospitality and tourism, finance and banking, retail, news and entertainment, education and more. This means opportunity: there’s an entire VSO ecosystem waiting to be born, and first movers are likely to reap huge windfalls. But it also signals disruption for those working on the business side of search.
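
The report doesn’t name specific tactics, but one already-deployed example of VSO is schema.org “speakable” structured data, which tells voice assistants which parts of a page are suitable to read aloud. The Python sketch below simply emits that JSON-LD for a hypothetical article; the headline, URL and CSS selectors are made-up placeholders.

```python
# Minimal sketch: emitting schema.org "speakable" JSON-LD for a hypothetical
# article page, so voice assistants know which sections to read aloud.
import json

def speakable_jsonld(headline, url, css_selectors):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "url": url,
        "speakable": {
            "@type": "SpeakableSpecification",
            "cssSelector": css_selectors,
        },
    }, indent=2)

print(speakable_jsonld(
    headline="Voice search is changing how readers find publishers",
    url="https://example.com/voice-search",    # placeholder URL
    css_selectors=["#headline", "#summary"],   # placeholder selectors
))
```

Markup like this is only one piece of VSO; ranking for spoken queries also depends on conversational phrasing and concise, structured answers.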

Personal data records 

We will start to see the emergence of “Personal Data Records,” or PDRs. A PDR is a single unifying ledger that includes all of the data we create as a result of our digital usage (think internet and mobile phones), but it will also include other sources of information: our school and work histories (diplomas, previous and current employers); our legal records (marriages, divorces, arrests); our financial records (home mortgages, credit scores, loans, taxes); travel (countries visited, visas); dating history (online apps); health (electronic health records, genetic screening results, exercise habits); and shopping history (online retailers, in-store coupon use). AIs, created by the Big Nine, will both learn from your personal data record and use it to automatically make decisions and provide you with a host of services. Your PDR will be heritable—a comprehensive record passed down to and used by your children. Ideally, you will be the owner of your PDR, it will be fully interoperable between systems, and the Big Nine will simply act as custodians. We are at the beginning of our transition from email and social media logins and passwords to PDRs.
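
Nothing like a PDR exists yet, so any schema is speculative. Purely to make the idea concrete, the Python sketch below mirrors the categories listed above in a single record type, with an owner and named custodians; every class and field name here is hypothetical.

```python
# A speculative sketch of a Personal Data Record (PDR), mirroring the categories
# described above. All names are hypothetical; no such standard exists today.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PersonalDataRecord:
    owner_id: str                                                  # ideally, you own it
    custodians: List[str] = field(default_factory=list)           # e.g. one of the Big Nine
    education_and_work: List[Dict] = field(default_factory=list)  # diplomas, employers
    legal: List[Dict] = field(default_factory=list)               # marriages, divorces, arrests
    financial: List[Dict] = field(default_factory=list)           # mortgages, credit, taxes
    travel: List[Dict] = field(default_factory=list)              # countries visited, visas
    health: List[Dict] = field(default_factory=list)              # EHRs, genetics, exercise
    shopping: List[Dict] = field(default_factory=list)            # retailers, coupon use
    heirs: List[str] = field(default_factory=list)                # PDRs are heritable

pdr = PersonalDataRecord(owner_id="person-123", custodians=["example-custodian"])
pdr.travel.append({"country": "JP", "visa": "tourist", "year": 2019})
print(pdr.owner_id, pdr.custodians, len(pdr.travel))
```

The interesting design questions are less about the schema than about control: whether PDRs are truly interoperable between systems, and whether the Big Nine act as owners or merely custodians.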

Rising China

China is pushing ahead in many different fields. It has launched a space race with ambitions not just to return humans to the moon, but to build indoor farms and livable spaces on the lunar surface. It is making bold advancements in genomic editing, in humans as well as in livestock and produce. Through its various state initiatives, China is building infrastructure and next-generation internet networks across Southeast Asia and Latin America. It is setting the global pace for air quality, carbon emissions and waste reduction. China’s electric vehicle market dwarfs that of every other country in the world. All of that comes in addition to China’s significant investments and advancements in artificial intelligence. Don’t be tempted to monitor each of these trend areas in isolation—you’ll miss the connections that signal something much bigger is afoot. No other country’s government is racing towards the future with as much force and velocity as China. This means big shifts in the balance of geopolitical power in the years ahead.

Rethinking laws

Lawmakers around the world are not prepared to deal with the new challenges that arise from emerging science and technology. In 2019, we are sure to see proposals for new regulatory frameworks. However, these new rules, regulations and policies won’t be modeled to understand their broader, next-order implications, or whether they can be enforced as technology and science continue to evolve.

Tech fusion

The report has also focused on consolidation across the big tech giants, news and entertainment media, robotics, home automation and biotech for the past few years, and it anticipates more of the same for the near future. Consolidation tends to mean a concentration of resources, which often acts as a driver of acceleration. This will help those working in some areas of tech and science—and it’s good for business. But consolidation tends to draw the attention of regulators, especially in the US and EU.

The Big 9 

There are nine big tech companies—six American, and three Chinese—that are overwhelmingly responsible for the future of artificial intelligence. They are the G-MAFIA in the US: Google, Amazon, Microsoft, Apple, IBM and Facebook. In China it’s the BAT: Baidu, Alibaba and Tencent. Just nine companies are primarily responsible for the overwhelming majority of research, funding, government involvement and consumer-grade applications. University researchers and labs rely on these companies for data, tools and funding. The Big Nine are also responsible for mergers and acquisitions, funding AI startups, and supporting the next generation of developers. Businesses in the West will soon have to choose AI frameworks and cloud providers—likely Google, Amazon or Microsoft—a decision that will be extremely difficult to reverse in the future. Read on.

Amy Webb’s great new book The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity is well worth a read. She calls it “a call-to-arms about the broken nature of artificial intelligence, and the powerful corporations that are turning the human-machine relationship on its head.”

In the new book Amy describes how just nine giant tech companies in the USA and China are behind the vast majority of advancements in AI worldwide. She explores three possible futures, ranging from optimistic to apocalyptic, that could result from the actions we take–or don’t take–to control the development of AI and shape its global impact. In this excerpt, she puts forth a series of tough ethical questions that the humans building AI systems should use to guide their work.

Inc. Magazine published an excerpt from the book, in which Webb writes: “The rules, or the algorithm, by which every culture, society, and nation lives, and has ever lived, were always created by just a few people. Democracy, communism, socialism, religion, veganism, nativism, colonialism–these are constructs we’ve developed throughout history to help guide our decisions. Even in the best cases, they aren’t future-proof. Technological, social, and economic forces always intervene and cause us to adapt.”

The Ten Commandments make up an algorithm intended to create a better society for humans alive more than 5,000 years ago. One of the commandments is to take a full day of rest a week and not to do any work at all that day. In modern times, most people don’t work the exact same days or hours from week to week, so it would be impossible not to break the rule. As a result, people who follow the Ten Commandments as a guiding principle are flexible in their interpretation, given the realities of longer workdays, soccer practice, and email. Adapting is fine–it works really well for us, and for our societies, allowing us to stay on track. Agreeing on a basic set of guidelines allows us to optimize for ourselves.

There would be no way to create a set of commandments for AI. We couldn’t write out all of the rules to correctly optimize for humanity, and that’s because while thinking machines may be fast and powerful, they lack flexibility. There isn’t an easy way to simulate exceptions, or to try and think through every single contingency in advance. Whatever rules might get written, there would always be a circumstance in the future in which some people might want to interpret the rules differently, or to ignore them completely, or to create amendments in order to manage an unforeseen circumstance.

Knowing that we cannot possibly write a set of strict commandments to follow, should we, instead, focus our attention on the humans building the systems? These people – AI’s tribes – should be asking themselves uncomfortable questions, beginning with:

  • What is our motivation for AI? Is it aligned with the best long-term interests of humanity?
  • What are our own biases? What ideas, experiences, and values have we failed to include in our tribe? Whom have we overlooked?
  • Have we included people unlike ourselves for the purpose of making the future of AI better–or have we simply included diversity on our team to meet certain quotas?
  • How can we ensure that our behavior is inclusive?
  • How are the technological, economic, and social implications of AI understood by those involved in its creation?
  • What fundamental rights should we have to interrogate the data sets, algorithms, and processes being used to make decisions on our behalf?
  • Who gets to define the value of human life? Against what is that value being weighed?
  • When and why do those in AI’s tribes feel that it’s their responsibility to address social implications of AI?
  • Does the leadership of our organization and our AI tribes reflect many different kinds of people?
  • What role do those commercializing AI play in addressing the social implications of AI?
  • Should we continue to compare AI to human thinking, or is it better for us to categorize it as something different?
  • Is it OK to build AI that recognizes and responds to human emotion?
  • Is it OK to make AI systems capable of mimicking human emotion, especially if it’s learning from us in real time?
  • What is the acceptable point at which we’re all OK with AI evolving without humans directly in the loop?
  • Under what circumstances could an AI simulate and experience common human emotions? What about pain, loss, and loneliness? Are we OK causing that suffering?
  • Are we developing AI to seek a deeper understanding of ourselves? Can we use AI to help humanity live a more examined life?

There are nine big tech companies–six American and three Chinese–that are overwhelmingly responsible for the future of artificial intelligence. In the U.S., they are Google, Microsoft, Amazon, Facebook, IBM, and Apple (“G-MAFIA”). In China, it’s the BAT: Baidu, Alibaba, and Tencent.

The G-MAFIA has started to address the problem of guiding principles through various research and study groups. Within Microsoft is a team called FATE–for Fairness, Accountability, Transparency, and Ethics in AI. In the wake of the Cambridge Analytica scandal, Facebook launched an ethics team that was developing software to make sure that its AI systems avoided bias. (Notably, Facebook did not go so far as to create an ethics board focused on AI.) DeepMind created an ethics and society team. IBM publishes regularly about ethics and AI. In the wake of a scandal at Baidu–the search engine prioritized misleading medical claims from a military-run hospital, where a treatment resulted in the death of a 21-year-old student–Baidu CEO Robin Li admitted that employees had made compromises for the sake of Baidu’s earnings growth and promised to focus on ethics in the future.

The Big 9 produces ethics studies and white papers, convenes experts to discuss ethics, and hosts panels about ethics–but that effort is not intertwined enough with the day-to-day operations of the various teams working on AI.

The Big 9’s AI systems are increasingly accessing our real-world data to build products that show commercial value. The development cycles are quickening to keep pace with investors’ expectations. We’ve been willing–if unwitting–participants in a future that’s being created hastily and without first answering all those questions. The more AI systems advance and the more of everyday life gets automated, the less control we actually have over the decisions being made about and for us.

Also check out FTI’s foresight tools
