The phrase “artificial intelligence” in pop culture often conjures up dystopian images such as the sentient computer HAL 9000 from the 1968 film “2001: A Space Odyssey,” which killed people to preserve itself; or the cyborg assassin with a metal endoskeleton in director James Cameron’s “The Terminator.” In recent years, our fascination with the potential of AI has taken a more starry-eyed turn, as shown in the 2013 sci-fi drama “Her,” in which the main character falls in love with a virtual assistant.

In reality, artificial intelligence (AI) technology is quickly permeating every aspect of our lives. From Amazon’s voice-activated Alexa to writing technology that helps managers craft job postings, AI is in our hearts, homes and workplaces. And it’s only going to become a bigger part of our lives: Experts call the rise of AI the driving force behind the fourth industrial revolution.

The A.I. Age | This 12-month series of stories explores the social and economic questions arising from the fast-spreading uses of artificial intelligence. The series is funded with the help of the Harvard-MIT Ethics and Governance of AI Initiative. Seattle Times editors and reporters operate independently of our funders and maintain editorial control over the coverage.

On a recent afternoon at the NVIDIA robotics research lab in Seattle’s University District, researchers used a simulated kitchen to test robots’ ability to perform simple tasks such as grabbing objects. A 5-foot-7-inch-tall white robot, basically a spindly arm fitted with a claw of the sort customarily found in an arcade vending machine, glided around the kitchen on its two Segway wheels.

Following the command of a research scientist sitting at a nearby computer, the robot grabbed a Cheez-It box on the counter and extended its limb to gently place the snacks inside a cabinet.

“What’s deceptive is that what’s simple to us in the kitchen is challenging for a robot,” said University of Washington Computer Science and Engineering Professor Dieter Fox, who also serves as the lab’s senior director of robotics research. The Silicon Valley-based technology company opened the robotics lab last fall to harness the UW’s talent in a sector where Seattle plays a central role.

Still, paranoia around the capabilities of AI technology persists. When we recently asked readers what they want to know about AI, many focused on the negative. We received questions such as, “How will we know when we aren’t in control?” and “When will it be against the law to make, produce and distribute since it is a patent danger to humans?” Others voiced concerns about the potential for AI to displace workers. Such worry was reflected in a recent Northeastern University and Gallup survey that found 71 percent of Americans feared the surge in AI would cause more job loss than gain.


We’re here to clear up confusion, and highlight the pros and cons of AI technology. Over the coming months, The Seattle Times will explore the social and economic effects of AI by examining regulation of the technology, privacy concerns and the changing landscape of labor in the AI Age. This piece will explain some of the terms you’ll hear, and look at examples of AI around us.

But first, let’s go over the basics.

Robots entered the cultural imagination about 100 years ago in the Czech play R.U.R., in which artificial people wipe out humans. A seminal moment for AI came a few decades later, when the British mathematician Alan Turing proposed the first test to measure a computer’s intelligence in his 1950 paper, Computing Machinery and Intelligence.

The Turing test assesses whether a machine has developed a humanlike level of awareness: one human and one machine respondent answer questions for five minutes from a human judge placed in another room. If the judge mistakes the machine for a human more than 30% of the time, the computer is deemed to have artificial intelligence, a bar many researchers believe has yet to be cleared.

Computer scientist John McCarthy coined the phrase “artificial intelligence” in 1956, later describing it as the “science and engineering of making intelligent machines, especially intelligent computer programs.” In the decades since, AI has continually reached new milestones.

In 1997, IBM’s Deep Blue computer defeated reigning world chess champion Garry Kasparov in a six-game match, a first that demonstrated the calculating power of computers. This July, Facebook and Carnegie Mellon University announced they’d created an AI program called Pluribus that defeated five top players in the popular poker game Texas Hold ’em, a result the researchers said suggests AI has reached a level of strategic reasoning that surpasses humans. Carnegie Mellon computer science professor Tuomas Sandholm said the research behind Pluribus could be applied to real-world scenarios, including optimizing strategies in investment banking.

Common terms

As AI becomes more widespread, you’ll hear a lot of new terms, such as deep learning, deepfakes, algorithms and natural language processing. Here’s a guide to some of those concepts and how you may unknowingly cross paths with them.


Algorithms

Algorithms are mathematical formulas that amount to a set of processing instructions — akin to a recipe — that aim to solve a specific problem. In some AI systems, algorithms are designed to allow the program to learn on its own. For instance, if a robot followed a recipe to mix together flour, eggs and milk, then placed the ingredients in a preheated oven to bake a cake, it might learn over several attempts — a data set — that too much flour would make the cake dry.
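To make the recipe analogy concrete, here is a toy Python sketch, with invented quantities and a made-up dryness rule, of an algorithm that adjusts its recipe after each attempt:

```python
# A toy sketch of the cake analogy: an "algorithm" that follows a recipe
# and adjusts the flour after each attempt based on feedback.
# All quantities and the dryness rule are invented for illustration.

def bake(flour_grams):
    """Pretend oracle: cakes with too much flour come out dry."""
    return "dry" if flour_grams > 300 else "moist"

flour = 400  # the first attempt uses too much flour
for attempt in range(1, 6):
    result = bake(flour)
    print(f"Attempt {attempt}: {flour}g of flour -> {result}")
    if result == "dry":
        flour -= 50   # learn from the outcome: use less flour next time
    else:
        break         # the recipe is good enough; stop adjusting
```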

Machine learning

Ever wonder how Netflix, out of thousands of titles, recommends a few TV shows that match your tastes? Or how an ad for the Wayfair rug you’ve been eyeing popped up on your Facebook feed? It’s not magic; it’s an approach called machine learning that finds patterns in large amounts of data. Platforms such as Facebook make recommendations by collecting information about users, including their browsing history, age and online purchasing habits, to make inferences about future choices or preferences.
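For a rough sense of how that pattern-finding works, here is a deliberately tiny Python sketch that recommends a genre by comparing one viewer’s made-up history with other viewers’; the data and method are invented for illustration, not how Netflix or Facebook actually do it:

```python
# Minimal sketch of pattern-based recommendation (invented data):
# suggest a genre a viewer hasn't watched yet, based on what a similar viewer liked.
import numpy as np

genres = ["drama", "sci-fi", "comedy", "documentary"]
# Rows are viewers, columns are hours watched of each genre (made up).
history = np.array([
    [5, 0, 2, 1],   # viewer 0, the one we recommend for
    [4, 1, 2, 0],   # viewer 1 (similar tastes to viewer 0)
    [0, 5, 0, 4],   # viewer 2 (very different tastes)
])

def cosine(a, b):
    """How similar two viewing histories are, from 0 to 1."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = history[0]
similarities = [cosine(target, history[i]) for i in (1, 2)]
neighbor = history[[1, 2][int(np.argmax(similarities))]]  # most similar viewer
unwatched = [i for i, hours in enumerate(target) if hours == 0]
best = max(unwatched, key=lambda i: neighbor[i])
print("Recommend:", genres[best])  # -> "sci-fi", because the similar viewer watched some
```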

Machine learning comes in three forms: supervised, unsupervised and reinforcement learning. Most practical applications of AI rely on supervised learning, in which humans label the examples, such as annotated images, that are fed into the software as training data. For instance, autonomous vacuums learn to clean without running into objects from algorithms trained on labeled images of rooms.
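Here is a minimal supervised-learning sketch in Python, assuming the scikit-learn library is installed; the “images” are stand-in pairs of numbers that a human has labeled as obstacle or clear floor:

```python
# Minimal supervised-learning sketch (requires scikit-learn).
# The "images" here are just hypothetical two-number feature vectors
# that a human has labeled as "obstacle" (1) or "clear floor" (0).
from sklearn.linear_model import LogisticRegression

X_train = [[0.9, 0.8], [0.8, 0.7], [0.1, 0.2], [0.2, 0.1]]  # labeled examples
y_train = [1, 1, 0, 0]                                       # human-provided labels

model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[0.85, 0.75], [0.15, 0.05]]))  # -> [1 0], learned from the labels
```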

Unsupervised learning is when a system goes on a fishing expedition to find underlying patterns in information without a specific goal in mind. When people purchase mattresses online, for example, customers’ information is pooled into a large database and unsupervised learning finds patterns that predict future purchasing habits. Based on the actions of other customers who bought mattresses, the algorithm could determine that a shopper is likely to buy a bed frame next.
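And here is a minimal unsupervised-learning sketch, again assuming scikit-learn, that groups made-up customer purchase records without any human labels:

```python
# Minimal unsupervised-learning sketch (requires scikit-learn).
# No labels: KMeans simply groups hypothetical customers by spending pattern.
from sklearn.cluster import KMeans
import numpy as np

# Columns: dollars spent on mattresses, dollars spent on bed frames (made-up data).
purchases = np.array([
    [900, 0], [850, 40], [880, 10],   # bought a mattress, little else yet
    [20, 500], [0, 450], [30, 480],   # mostly buying frames and accessories
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(purchases)
print(clusters)  # two groups emerge without anyone labeling the customers
```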

In reinforcement learning, a system teaches itself through trial and error. That is how the AI bot Pluribus beat five top poker players in Texas Hold ’em: it improved its results by learning which bets were likely to win more money, and analyzed its hands afterward to determine whether alternatives would have yielded better results.
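A toy sketch of that trial-and-error idea, with invented payoff odds; this is not how Pluribus actually plays poker:

```python
# Toy reinforcement-learning sketch: trial and error over two possible "bets."
# The payoff probabilities are invented; this is not Pluribus's actual algorithm.
import random

random.seed(0)
payout_probability = {"small_bet": 0.3, "big_bet": 0.6}  # hidden from the learner
value = {"small_bet": 0.0, "big_bet": 0.0}               # learned estimates
counts = {"small_bet": 0, "big_bet": 0}

for trial in range(2000):
    # Mostly exploit the best-known bet, but sometimes explore the other one.
    if random.random() < 0.1:
        action = random.choice(list(value))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < payout_probability[action] else 0.0
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # running average

print(value)  # the estimate for "big_bet" ends up higher, so it gets chosen more often
```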

Deep learning

Just as human brains learn to recognize various people through knowledge and experience, self-driving cars can be trained to recognize pedestrians and objects on roads through an AI subset called deep learning. Relying on numerous layers of algorithms to sift through and process large amounts of data, deep learning uses webs of computational models called “neural networks” that are designed to mimic the human brain. The more driving experience the car has, the more likely it is to recognize humans in their various colors, shapes and sizes. This type of deep learning is also at work when Facebook suggests name tags on images uploaded to its platform. That technology, called facial recognition, analyzes a person’s face by measuring the distances between facial features and uses algorithms to find a match.
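For a small taste of the idea, here is a tiny neural network built with scikit-learn that learns a simple pattern from examples; a real pedestrian detector applies the same principle at vastly larger scale:

```python
# Tiny neural-network sketch (requires scikit-learn): layers of simple units
# learn a pattern from examples. Here the pattern is "XOR," which a single
# straight-line rule cannot capture but a hidden layer can.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # output is 1 only when exactly one input is on

net = MLPClassifier(hidden_layer_sizes=(16,), activation="tanh",
                    solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict(X))  # should recover [0 1 1 0] after training on the examples
```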

Natural language processing

Have you wondered how Siri understands that you need the directions to Sammy’s house and not the sandwich store? Or how a transcript of a voice mail is sent to your email inbox when you’ve missed a call? Thank natural language processing (NLP). NLP technology uses machine-learning algorithms that tag parts of speech and the relationships between words to analyze the meaning in text and audio. Gmail’s Smart Compose feature, unveiled last year, takes NLP a step further by offering users suggestions to complete a sentence in the body of an email.
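Here is a bare-bones sketch of the next-word idea behind features like Smart Compose, built on a made-up scrap of email text; real systems rely on far larger language models:

```python
# Minimal sketch of next-word suggestion, the idea behind features like
# Smart Compose: count which word tends to follow which in past text, then
# suggest the most common follower. Real systems use far larger models.
from collections import Counter, defaultdict

past_emails = "thanks for the update . see you next week . thanks for the update and the notes"
followers = defaultdict(Counter)
words = past_emails.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def suggest(word):
    """Return the word most often seen after `word`, if any."""
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else None

print(suggest("thanks"))  # -> "for"
print(suggest("the"))     # -> "update" (followed "the" twice, vs. "notes" once)
```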

Deepfake technology

Another new application of AI, deepfake technology, uses deep learning models to manipulate photos and videos to create realistic images of people doing or saying something they never did, sometimes for nefarious reasons. Watch out for deepfake videos that depict fabricated political speech in the upcoming election cycle!


Region is an AI hub

The presence of the two tech giants Amazon and Microsoft has made the Puget Sound area a veritable hub for artificial intelligence. Combine those with the University of Washington, and “we have a strong flow of really talented scientists and engineers who’ve grown to call Seattle home,” said Eric Boyd, who oversees development of Microsoft’s AI tools.

Microsoft’s cloud-computing service for artificial intelligence, Azure AI, allows developers and data scientists to easily add capabilities such as translation and image recognition to their projects. For office workers, AI tools in Microsoft 365 offer suggestions to complete projects in Word, PowerPoint and Excel. AI-powered speech recognition in the company’s translation and transcription service, Translator, is also used in lectures to provide on-screen captions for students who are deaf or hard of hearing.

One way Amazon is transforming the retail sector is through its checkout-free shopping at Amazon Go. Amazon Go uses computer vision, multiple sensors and deep learning — the same technologies found in autonomous cars — to automatically detect when a customer takes items from shelves and walks out of the store. An AI-powered feature on the Amazon app called StyleSnap allows shoppers to customize their wardrobes by uploading a photo or screenshot of a style they admire. The StyleSnap option uses computer vision and deep learning to classify the clothes and offer recommendations for similar items found on the site that match the shopper’s desired look.

Meanwhile, the Amazon virtual assistant Alexa uses natural language processing and machine-learning algorithms to enable more than 90,000 tasks, such as listening to music on command, calling friends and family, booking hotel rooms and controlling light switches.


“We have been working with advanced technologies like machine learning for decades, but we are only in the beginning stages of understanding the possibilities and how invention can improve lives for good by helping solve the big (and small) problems we all face every day,” Amazon’s Vice President of the Core AI team, Pat Bajari, said in an email.

Still scared?

Meanwhile, AI technology is transforming the retail, automotive and hospitality industries worldwide. Automated check-ins and robots are becoming more commonplace in hotels, and retailers such as Walmart have rolled out autonomous floor cleaners and smart conveyor belts to keep up with competitors such as Kroger and Whole Foods.

Concerned about job security, hotel workers around the nation have created contract language that grants employees some say in the deployment of technology. In recent months, dockworkers have protested and launched petitions to thwart projected job losses after the Port of Los Angeles’ decision to deploy automated vehicles.


Some AI experts argue it is unrealistic to expect all manual laborers to become data analysts. Society should redirect people who lack higher education to jobs that draw on the uniquely human trait of empathy, said Oren Etzioni, CEO of the Seattle-based Allen Institute for Artificial Intelligence. As a step to finding new careers for people displaced by advancements in artificial intelligence and automation, Etzioni has suggested wage increases for caregivers and the creation of government programs to train a new generation of caretakers.

“You don’t want a robot taking care of your baby; an ailing elder needs to be loved, to be listened to, fed, and sung to. This is one job category that people are — and will continue to be — best at,” Etzioni wrote in a Wired article.

Predictions are all over the map about whether technology will usher in enough new work to offset the jobs lost to artificial intelligence in the coming years. Manufacturing will be hit the hardest, according to a recent Oxford Economics report that found 20 million manufacturing jobs will be lost by 2030. On a more positive note, a 2018 report from the World Economic Forum, a nonprofit whose members include about 1,000 of the world’s top companies, predicts that while automation will displace 75 million jobs, it will also generate 133 million new roles across all industries within the next three years if workers receive continuous retraining.


What’s the future of AI?

There’s already a backlash against the application of some forms of AI technology. In recent months, several cities and states have banned or considered a moratorium on facial-recognition technology after reports of its potential for misuse. A recent Washington Post investigation, for instance, showed Immigration and Customs Enforcement and Federal Bureau of Investigation agents used the software to scan millions of Americans’ driver’s licenses without their consent in order to track down undocumented immigrants and identify suspects.

A growing number of AI experts and politicians agree that the advancements in AI have outpaced government regulation. Technology companies such as Microsoft, followed by Amazon, have urged Congress to issue federal guidelines on the use of facial-recognition technology.

Meanwhile, some politicians seek to expand government support of AI. U.S. Sens. Martin Heinrich (D-N.M.), Rob Portman (R-Ohio) and Brian Schatz (D-Hawaii) introduced legislation in May to fund AI development over the next 10 years. Backers say the bipartisan Artificial Intelligence Initiative Act seeks to prepare an AI workforce and deploy ethically responsible AI tools for private, government and academic use.

Although the next stop remains unknown, the AI train isn’t stopping anytime soon. As World Economic Forum Founder and Executive Chairman Klaus Schwab aptly summarized the Fourth Industrial Revolution, “There has never been a time of greater promise or greater peril.”