The Terror Queue

These moderators help keep Google and YouTube free of violent extremism — and now some of them have PTSD

Casey Newton is a contributing editor who has been writing about tech for over 10 years. He founded Platformer, a newsletter about Big Tech and democracy.

Google and YouTube approach content moderation the same way all of the other tech giants do: paying a handful of other companies to do most of the work. One of those companies, Accenture, operates Google’s largest content moderation site in the United States: an office in Austin, Texas, where content moderators work around the clock cleaning up YouTube.

Peter is one of hundreds of moderators at the Austin site. YouTube sorts the work for him and his colleagues into various queues, which the company says allows moderators to build expertise around its policies. There’s a copyright queue, a hate and harassment queue, and an “adult” queue for porn.

Peter works what is known internally as the “VE queue,” which stands for violent extremism. It is some of the grimmest work to be done at Alphabet. And like all content moderation jobs that involve daily exposure to violence and abuse, it has had serious and long-lasting consequences for the people doing the work.

In the past year, Peter has seen one of his co-workers collapse at work in distress, so burdened by the videos he had seen that he took two months of unpaid leave. Another co-worker, wracked with anxiety and depression caused by the job, neglected his diet so badly that he had to be hospitalized for an acute vitamin deficiency.

Peter, who has done this job for nearly two years, worries about the toll that the job is taking on his mental health. His family has repeatedly urged him to quit. But he worries that he will not be able to find another job that pays as well as this one does: $18.50 an hour, or about $37,000 a year.

Since he began working in the violent extremism queue, Peter noted, he has lost hair and gained weight. His temper is shorter. When he drives by the building where he works, even on his off days, a vein begins to throb in his chest.

“Every day you watch someone beheading someone, or someone shooting his girlfriend,” Peter tells me. “After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?”

Like many of his co-workers working in the VE queue in Austin, Peter is an immigrant. Accenture recruited dozens of Arabic speakers like him, many of whom grew up in the Middle East. The company depends on his language skills — he speaks seven — to accurately identify hate speech and terrorist propaganda and remove it from YouTube.

Several workers I spoke with are hoping to become citizens, a feat that has only grown more difficult under the Trump administration. They worry about speaking out — to a manager, to a journalist — for fear it will complicate their immigration efforts. (For this reason, I agreed to use pseudonyms for most of the workers in this story.)

More than that, though, Peter and other moderators in Austin told me they wanted to live like the full-time Google employees who sometimes visit their office. A higher wage, better health benefits, and more caring managers would alleviate the burdens of the job, they told me.

“We see the people coming from there, how they are, how they are acting more free,” Peter tells me.

For most of this year, I thought the same thing Peter did. Bring the moderators in house, pay them as you would pay a police officer or firefighter, and perhaps you could reduce the mental health toll of constant exposure to graphic violence.

Then I met a woman who had worked as a content moderator for Google itself. She earned a good salary, nearing the six-figure mark. There were excellent health benefits and other perks. But none of these privileges would ultimately prevent the disturbing content she saw each day from harming her.

After a year of removing terrorism and child abuse from Google’s services, she suffered from anxiety and frequent panic attacks. She had trouble interacting with children without crying. A psychiatrist diagnosed her with post-traumatic stress disorder.

She still struggles with it today.

Daisy Soderberg-Rivkin was working as a paralegal in 2015 when she spotted a listing online for an open position at Google. The job was content moderation — although, like many jobs in content moderation, it was described using an opaque euphemism: in this case, “legal removals associate.”

Daisy had grown up with Google services, and as she began to think about working there, her mind turned to the company’s famous perks: its cafes and micro kitchens, free massages and dry cleaning. The job that she ultimately applied for was based at Google’s headquarters in Mountain View, California — the team would later be transferred to a satellite office in nearby Sunnyvale — and it was a full-time position with benefits. It paid $75,000 a year, plus a grant of Google stock that took the total closer to $90,000.

No way I’ll get this job, she thought to herself. She applied anyway.

The listing said associates would process legal requests to remove links from Google search due to copyright violations, defamation, and other inappropriate content. It stated that associates would also have to review some links containing child abuse imagery. “But I remember very clearly in parentheses it said, ‘this kind of content would be limited to one to two hours per week,’” Daisy says.

Removing disturbing content from Google’s services requires the collaboration of several teams within the company. For the most part, videos reported for terrorist content or child exploitation are reviewed by contractors like the ones in Austin. (Google refers to employees hired by third-party firms as “vendors,” but I found that the employees universally describe themselves as contractors, and I use that word throughout this story.) But Google also hires full-time employees to process legal requests from government entities — and, when required, remove images, videos, and links from web search.

Daisy was surprised when, a few months after she applied, a recruiter called her back. Over eight rounds of interviews, Googlers sold her on the positive impact that her work would have. You’re going to help support free speech online, she remembers them telling her. You’re going to make the internet a safer place.

“It felt like you were putting on a cape, working at Google, getting your free kombucha, sleeping in nap pods,” she says. “But every once in a while, you’d have to see some disturbing content. Really, how bad could it be?”

She called her mom and said she was taking the job. She was 23 years old.

Daisy, who had no previous history of mental health issues, didn’t consider the potential effect the new job might have on her psyche. Neither, it seems, did Google. During her orientation, the company did not offer any training for what workers in this field now call “resilience” — developing emotional tools to cope with a high volume of graphic and disturbing text, images, and video.

Daisy was assigned to review legal requests for content removals that originated in France, because she speaks fluent French. Eventually, she would become the company’s program lead for terrorism in the French market. Each day, she would open her queue, sort through the reports, and determine whether Google was obligated — either by law or by Google’s terms of service — to take down a link.

To her surprise, the queue began to overflow with violence. On November 13th, 2015, terrorists who had pledged their loyalty to ISIS killed 130 people and injured 413 more in Paris and its suburb of Saint-Denis, with the majority dying in a mass shooting during a concert at the Bataclan.

“Your entire day is looking at bodies on the floor of a theater,” she says. “Your neurons are just not working the way they usually would. It slows everything down.”

In July 2016, terrorists connected to ISIS drove a cargo truck into a crowd of people celebrating Bastille Day in the French city of Nice, killing 86 people and wounding 458 more. Links to graphic photos and videos began to pile up. Managers pressured Daisy to process an ever-higher number of requests, she says. We need to kill this backlog, they said. If she didn’t, she worried that she would get a bad review.

Daisy tried to work faster but found it to be a struggle.

“All you see are the numbers going up in your queue,” she says.

In February, I wrote about the lives of Facebook moderators in the United States, focused on a site in Phoenix where workers complained of low pay, dire working conditions, and long-lasting mental health problems from policing the social network. In June, I wrote a follow-up report about a Facebook site in Tampa, Florida, where a moderator had died after suffering a massive heart attack on the job.

By then, I had received messages from employees of other big social platforms explaining that these issues affected their companies as well. Beginning this summer, I sought out people who had worked as moderators for Google or YouTube to compare their experiences with those I had previously written about. Over the past five months, I interviewed 18 current and former Google employees and contractors about their working conditions and the job’s effects on their mental health.

With its large number of internet services, some of which have attracted user bases with more than a billion people, Google requires an army of moderators. Much of the content submitted for review is benign and even tedious: cleansing spam from Google’s ad platform, for example, or removing fraudulent listings from Google Maps. But disturbing content can be found nearly everywhere Google allows users to upload it. In October, the company reported that, in the past year, it had removed 160,000 pieces of content for containing violent extremism from Blogger, Google Photos, and Google Drive alone — about 438 per day.

Even on YouTube, much of the content reviewed by moderators is benign. When no videos are reported in their queues, moderators often sit idle. One Finnish-language moderator told me she had gone two months at her job with nothing at all to do during the day. At most, she might be asked to review a few videos and comments over an eight-hour span. She spent most of her workday browsing the internet, she told me, before quitting last month out of boredom.

Other moderators’ experiences varied widely based on their locations, their assignments, and the relative empathy of their managers. Several of them told me they mostly enjoy their work, either because they find the task of removing violent and disturbing videos from Google search and YouTube rewarding or because the assigned tasks are simple and allow them ample time during the day to watch videos or relax.

“Overall, employees feel that this is a very easy job and not something to be complaining about,” a moderator for YouTube in India, who makes about $850 a month, told me in an email. “We usually spend our wellness [time] playing games like musical chairs, dumb charades, Pictionary, et cetera. We have fun!”

“Fun” was not a word anyone I spoke with used to describe the work of moderating terrorist content. Instead, they spoke of muscle cramps, stress eating, and — amid the rising rents in Austin — creeping poverty. They talked of managers who denied them break time, fired them on flimsy pretexts, and changed their shifts without warning.

The workers most deeply affected by the violence expressed a growing anxiety about the side effects of witnessing dozens or more murder scenes per day.

“If I said it didn’t affect me, it’s a complete lie,” says Tariq, who has worked in the Austin violent extremism queue for more than 18 months. “What you see every day ... it shapes you.”

When he leaves his job in Austin, Peter tries to unwind. Over time, this has become more difficult. The action movies he once enjoyed no longer seem fictional to him. Every gunshot, every death, he experiences as if it might be real.

“Even if I know that ... this is not true,” Peter says.

Some of his co-workers cope by using drugs — mostly weed. Since Google first hired Accenture to spin up the VE queue in Texas, Peter has watched his colleagues become steadily more withdrawn.

“At the beginning, you’d see everybody saying, ‘Hi, how are you?’” Peter remembers. “Everybody was friendly. They’d go around checking in. Now nobody is even wanting to talk to the others.”

He joined the project in 2017, the year it began. At the time, YouTube had come under significant pressure to clean up the platform. Journalists and academics who investigated the service had found a large volume of videos containing hate speech, harassment, misinformation about mass shootings and other tragedies, and content harmful to children. (Many of those videos had been found on YouTube Kids, an app the company had developed in an effort to steer children toward safer material.)

In response, YouTube CEO Susan Wojcicki announced that the company would expand its global workforce of moderators to 10,000, which it did. A fraction of those — Google wouldn’t tell me how many — were hired in the United States, with the largest concentration in Austin.

Contract content moderators are cheap, making just a little over minimum wage in the United States. By contrast, full-time employees who work on content moderation for Google search could make $90,000 or more after being promoted, not including bonuses and stock grants. Temporary workers, contractors, and vendors — the workers who Googlers refer to internally as TVCs — now make up 54 percent of the company’s workforce.

Kristie Canegallo, Google’s vice president of trust and safety, oversees its thousands of moderators. She told me that relying on firms like Accenture helps Google adjust staffing levels more efficiently. If the company is developing a new tool to help catch bad videos, it might need more moderators initially to help train the system. But afterward, those moderators are no longer needed.

“Contracting with vendor companies really does help us have flexibility to adjust to changing demands,” says Canegallo, who joined Google in 2018 after serving as a deputy chief of staff to President Barack Obama.

Like the sites run by other big players in the industry, Accenture’s Austin office is modeled on a call center. (Unlike Facebook, Google declined to let me visit any of its sites.) Employees sit in a dedicated space known as the production floor, where they work in shifts to process reports. The work is critical to enabling YouTube’s existence: many countries have passed laws requiring the company to remove videos containing terrorist material, some within as little as 24 hours of a report being received.

Daisy found the terrorist material disturbing, but she was even more unsettled by what Google calls child sexual abuse imagery (CSAI). The job listing had promised she would only be reviewing content related to child abuse for an hour or two a week. But in practice, it was a much bigger part of the job.

It’s illegal to view CSAI in most cases, so Google set up what the moderators called a “war room” where they could review requests related to child exploitation without the risk that other co-workers would inadvertently see the material. Initially, the company set up a rotation. Daisy might work CSAI for three weeks, then have six weeks of her regular job. But chronic understaffing, combined with high turnover among moderators, meant that she had to review child exploitation cases most weeks, she says.

“We started to realize that essentially, we were not a priority for the company,” Daisy says of Google. “We would ask for things and they would say, ‘Look, we just don’t have the budget.’ They would say the word ‘budget’ a lot.”

(Google reported more than $110 billion in revenue in 2017.)

A year into the job, Daisy’s then-boyfriend pointed out to her that her personality had begun to change. You’re very jumpy, he said. You talk in your sleep. Sometimes you’re screaming. Her nightmares were getting worse. And she was always, always tired.

A roommate came up behind her once and gently poked her, and she instinctively spun around and hit him. “My reflex was, ‘This person is here to hurt me,’” she says. “I was just associating everything with things that I had seen.”

One day, Daisy was walking around San Francisco with her friends when she spotted a group of preschool-age children. A caregiver had asked them to hold on to a rope so that they would not stray from the group.

“I kind of blinked once, and suddenly I just had a flash of some of the images I had seen,” Daisy says. “Children being tied up, children being raped at that age — three years old. I saw the rope, and I pictured some of the content I saw with children and ropes. And suddenly I stopped, and I was blinking a lot, and my friend had to make sure I was okay. I had to sit down for a second, and I just exploded crying.”

It was the first panic attack she had ever had.

In the following weeks, Daisy retreated from her friends and roommates. She didn’t want to talk with them too much about her work for fear of burdening them with the knowledge she now had about the world. Her job was to remove this content from the internet. To share it with others felt like a betrayal of her mission.

Google kept a counselor on staff, but she was made available to the legal removals team at irregular intervals, and her schedule quickly filled up. Daisy found the counselor warm and sympathetic, but it was hard to get time with her. “They would send you an email saying, ‘She’s coming this day,’ and you would have to sign up very quickly because it would fill up almost immediately. Because everyone was feeling these effects.”

When she did successfully make an appointment, the counselor suggested that Daisy begin seeing a private therapist.

Meanwhile, Daisy grew more irritable. She asked the people in her life not to touch her. When one friend invited her to her three-year-old’s birthday party, Daisy went but left after a short while. Every time she looked at the children, she imagined someone hurting them.

As her mental health declined, Daisy struggled to keep up with the demands that were placed on her. More and more, she cried at work — sometimes in the bathroom, sometimes in front of the building. Other times, she fell asleep at her desk.

Toward the end of that first year, her manager asked to have a conversation. They met inside a conference room, and the manager expressed his concerns. You’re not getting through your queue fast enough, he said. We need you to step up your productivity game.

She was tired when he said that, because she was always tired, and something about those words — “productivity game” — enraged her. “I just snapped,” Daisy says.

“How on earth do you want me to step up my productivity game?” she told her manager. “Do you know what my brain looks like right now? Do you understand what we’re looking at? We’re not machines. We’re humans. We have emotions, and those emotions are deeply scarred by looking at children being raped all the time, and people getting their heads chopped off.”

Sometimes, when she thought about her job, she would imagine walking down a dark alley, surrounded by the worst of everything she saw. It was as if all of the violence and abuse had taken a physical form and assaulted her.

“All the evil of humanity, just raining in on you,” she says. “That’s what it felt like — like there was no escape. And then someone told you, ‘Well, you got to get back in there. Just keep on doing it.’”

A few days later, Daisy told her manager that she intended to take paid medical leave to address the psychological trauma of the past year — one of several on her team who had taken leave as a result of emotional trauma suffered on the job. She thought she might be gone a few weeks, maybe four.

She would not return to Google for six months.

The killings were coming in faster than the Austin office could handle. Even with hundreds of moderators working around the clock in shifts, Accenture struggled to keep up with the incoming videos of brutality. The violent extremism queue is dominated by videos of Middle Eastern origin, and the company has recruited dozens of Arabic speakers since 2017 to review them.

Many of the workers are recent immigrants who had previously been working as security guards and delivery drivers and heard about the job from a friend.

“When we migrated to the USA, our college degrees were not recognized,” says Michael, who worked at the site for almost two years. “So we just started doing anything. We needed to start working and making money.”

Workers I spoke to were initially grateful for the chance to work for a large technology company like Google. (While the contractors technically work for Accenture, Google blurs the boundaries in several ways. Among other things, the contractors are given google.com email addresses.)

“I was finally working in an office,” Peter says. “I thought about all the opportunities. I thought about a career.”

But until orientation, the actual nature of the work in the violent extremism queue remained opaque. “I didn’t have an idea what it was,” Peter says, “because they won’t tell you.”

Accenture instructs moderators to process 120 videos per day in five hours, according to the workers I spoke with, with two hours per day of paid “wellness” time and a one-hour unpaid lunch. (Wojcicki promised to reduce their burden to four hours last year, but it never happened. Accenture denies setting any productivity quotas for workers.) Wellness time is set aside for workers to decompress from the rigors of the job — by taking a walk outside, talking to an on-site counselor, or playing games with co-workers. “At the beginning, they were really good,” Michael says. “If you see something bad, take a break. Close your screen and just go.”

Google offers its contractors dramatically more downtime than Facebook, which asks its moderators to make do with two 15-minute breaks, a 30-minute lunch, and just nine minutes per day of wellness time. (Facebook says that with training and coaching, its moderators are viewing content roughly six hours a day.)

“We continually review, benchmark and invest in our wellness programs to create a supportive workplace environment,” Accenture told me in a statement. “Our people in Austin have unrestricted access to wellness support, which includes proactive and on-demand counseling that is backed by a strong employee assistance program, and they are encouraged to raise wellness concerns through these programs.”

But if two hours of wellness time per day is the ideal, in Austin, it is not the norm. Four workers told me they were routinely denied break time when the VE queue got particularly busy. Beginning around six months ago, they also had to start giving up break time to hit their “utilization” score, which is a measurement of the time actively spent moderating videos during the day. Tracking software installed on their computers records each minute of video they watch, with a target of five hours. But other critical work tasks, such as checking email or participating in team meetings, don’t count toward that goal, forcing employees to regularly eat into their break time to make up for the loss.
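The arithmetic behind that squeeze is simple. As a rough sketch only, using the figures the workers cited (a five-hour video target, two hours of wellness time, a one-hour unpaid lunch) plus an assumed eight-hour shift on site, the calculation below shows how every hour of meetings or email comes directly out of break time:

```python
# Rough sketch of the "utilization" squeeze described by workers in Austin.
# Figures from their accounts: a 5-hour daily target of actively watched video,
# 2 hours of paid wellness time, a 1-hour unpaid lunch. The 8-hour shift length
# is an assumption made here for illustration.

SHIFT_HOURS = 8.0          # assumed total time on site
LUNCH_HOURS = 1.0          # unpaid, fixed
VIDEO_TARGET_HOURS = 5.0   # watched-video time the tracking software must record
WELLNESS_HOURS = 2.0       # decompression time moderators are promised

def remaining_wellness(non_queue_hours: float) -> float:
    """Wellness time left once the video target is met.

    non_queue_hours: email, team meetings, and other tasks that the tracking
    software does not count toward the five-hour video target.
    """
    slack = SHIFT_HOURS - LUNCH_HOURS - VIDEO_TARGET_HOURS  # nominally 2 hours
    return max(0.0, min(WELLNESS_HOURS, slack - non_queue_hours))

for meetings_and_email in (0.0, 0.5, 1.0, 2.0):
    left = remaining_wellness(meetings_and_email)
    print(f"{meetings_and_email:.1f}h of non-queue work -> {left:.1f}h of wellness time left")
```

Under those assumptions, a single hour of non-queue work cuts the promised two hours of wellness time in half, and two hours erases it entirely.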

The false promise of extended break time in Austin is consistent with the overall picture workers have painted for me at content moderation sites around the world. When new sites are spun up, managers rally new employees around their noble mission: to make the internet safe for everyone to use. Initially, the contractors are granted freedoms that full-time employees at Google, Facebook, and elsewhere take for granted: the freedom to go to the bathroom without asking for permission, the freedom to eat food at their desk, the freedom to schedule a vacation.

As the months wear on, vendors like Accenture and Cognizant begin to claw back these freedoms, often with little explanation. In Austin, eating at your desk was banned. Some managers began asking employees why they were spending so long in the bathroom. (They had been gone perhaps six or seven minutes.) Workers had initially been allowed to bring personal cellphones to their desks, but they lost that freedom as well, apparently over privacy concerns.

The cellphone ban has created a particular kind of dark comedy in the Austin office. Certain Accenture services require employees to log in using two-factor authentication, with expiring codes sent to workers’ phones. Since Accenture banned phones on the production floor, employees now have to race to the lockers where their phones are kept, then race back to their desks to enter the code before it expires. Accenture also banned pens and paper at employee desks, so employees who worry they’ll forget their code have to quickly scribble it on their hands before locking their phones back up and making the run back to their desk. Workers are now frequently seen sprinting through the office with a series of digits scrawled messily on their palms.

Two employees at the Austin site told me they had been denied vacation requests based on the volume of terrorism videos in the queue. Others were transferred to different shifts with little or no explanation. And YouTube’s moderators have not received raises in two years, even as Austin’s rents are among the fastest-rising in the country. (Accenture says the vast majority of its workers receive annual raises.) Peter told me that he spends 50 percent of his monthly income on rent, with most of the rest going to other bills. Life in Austin is getting more expensive, he says, but his wages have not kept pace.

“They treat us very bad,” Michael says. “There’s so many ways to abuse you if you’re not doing what they like.”

When she went on leave from Google, Daisy began working with a psychiatrist and a therapist. She was diagnosed with post-traumatic stress disorder and chronic anxiety, and she began taking antidepressants.

In therapy, Daisy learned that the declining productivity that frustrated her managers was not her fault. Her therapist had worked with other former content moderators and explained that people respond differently to repeated exposure to disturbing images. Some overeat and gain weight. Some exercise compulsively. Some, like Daisy, experience exhaustion and fatigue.

“It sounds to me like this is not a you problem, this is a them problem,” Daisy’s therapist told her, she recalls. “They are in charge of this. They created this job. They should be able to … put resources into making this job, which is never going to be easy — but at least minimize these effects as much as possible.”

The therapist suggested that Daisy get a dog. She adopted a border collie / Australian shepherd mix from the SPCA and named her Stella after finding herself calling after the dog in a Brando-esque bellow. They took a course together in which Stella trained to become an emotional support animal, alert to the signs of Daisy’s panic attacks and adept at putting her at ease.

Daisy began taking Stella to UCSF Benioff Children’s Hospital to visit sick children. Over time, she found that she became able to interact with children again without triggering a panic attack. “Seeing a child pet my dog had a profound influence on how I moved forward with my relationship with kids,” she says.

She is grateful that, unlike a contractor, she could take time to get help while still being paid. “I had those months to think about my choices, and to think about ways out, without having to deal with unemployment or having to deal with how am I going to pay rent,” she says.

Half a year after leaving Google, Daisy returned to her job. To her dismay, she found that little about her managers’ approach had changed.

“They did check up on me,” she says. “They said, ‘How are things going? How are you feeling? We’ll start you off slowly.’ But the end game was still the same, which was to get you up to your [target] productivity again.”

A week after returning, she decided to apply to graduate school. She was accepted to the Fletcher School of Law and Diplomacy at Tufts University, and earlier this year, she earned a master’s degree. Today, she is a policy fellow at the R Street Institute, a nonpartisan think tank. She focuses on children and technology, drawing on her time at Google to brief lawmakers about child privacy, child exploitation, and content moderation.

“I’m going to use all this to fuel my desire to make a change,” Daisy says.

In Austin, as Accenture put into place various new restrictions on the workplace, some began joking to one another that they were being experimented on. “You’re just a rat,” Peter says. “They try new things on you.”

For a small group of contractors, this is true literally. Earlier this year, Google presented a paper at the Conference on Human Computation and Crowdsourcing. The paper, “Testing Stylistic Interventions to Reduce Emotional Impact of Content Moderation Workers,” described two experiments the company had conducted with its content moderators. In one, the company set all videos to display in grayscale — disturbing content in black and white, instead of color. In the other, it blurred content by default.

Researchers were interested in whether transforming videos and images can lessen the emotional impact they have on moderators.
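Neither intervention is technically exotic. As a minimal illustration (not Google’s actual tooling, whose internals the paper does not describe), the same two transformations can be reproduced on a single extracted video frame with the open-source OpenCV library:

```python
# Minimal sketch of the two interventions described in the paper: grayscale
# and default blurring, applied to one video frame with OpenCV.
# Illustration only; this is not Google's moderation tooling.
import cv2

frame = cv2.imread("frame.jpg")  # hypothetical path to a single extracted frame

# Intervention 1: strip color so the reviewer sees the scene in black and white.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Intervention 2: blur the frame by default; the reviewer opts in to full detail.
blurred = cv2.GaussianBlur(frame, (51, 51), 0)  # large kernel = heavy blur

cv2.imwrite("frame_gray.jpg", gray)
cv2.imwrite("frame_blurred.jpg", blurred)
```

The open question for the researchers was not whether such transformations are feasible, but whether viewing the drained or softened version measurably reduces the emotional toll.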

“Part of our responsibility and our commitment to all of our team members who are looking at this content is to be getting them the best support possible to be doing their job,” Canegallo told me. Whatever Google learns about improving conditions for its workers, it will share with the industry, she said.

The grayscale tool was made available to 76 moderators, who had opted in to the study. Moderators spent two weeks looking at the regular, colorized queue and then answered a questionnaire about their mood. They spent the next two weeks looking at a grayscale queue and then took the questionnaire again.

The study found that presenting videos in grayscale led reviewers to report significantly improved moods — for that week, at least.

It is just as notable what the company is not testing: limiting the amount of disturbing content individual moderators can be exposed to over a lifetime, offering paid medical leave to contractors who develop PTSD, and supporting former employees who continue to struggle with long-term mental health issues after leaving the job.

Instead, Google is doing what tech companies often do: attempting to apply tech solutions to the problem. The company is building machine learning systems that executives hope will someday handle the bulk of the work. In the meantime, Google researchers have suggested future studies that examine the emotional impact on moderators of changing the color of blood to green, other “artistic transformations” of content, and more selective blurring — of faces, for example. (Facebook has already implemented grayscale and face-blurring options for its moderators, along with an option to mute the sound in videos by default.)

But companies have known for years now that employees are seeking medical leave to deal with job-related trauma. It is striking that a company with resources as vast as Google is just now beginning to dabble in these minor, technology-based interventions, years after employees began to report diagnoses of PTSD to their managers.

We are now two years into a great expansion of the content moderation industry. As governments around the world make more demands of tech companies to police their services, tens of thousands of people have signed up for the job. The need for moderators appears to be expanding even as some vendors are reevaluating their ability to do the work. In October, Cognizant announced that it would exit the business over the next year.

At the same time, we still lack a basic understanding of how the most difficult aspects of this work — removing graphic and disturbing content — affect the people doing it. We know that a subset of people who work in YouTube’s violent extremism queue and similar roles around the world will develop PTSD and related conditions on the job. We don’t know what a safe level of exposure might be.

Tech company executives tend to describe this issue to me as a recruiting problem. In their view, there are workers who are resilient in the face of unending violence and abuse, and those who are not.

But in my conversations this year with more than 100 moderators at companies of all sizes, it seems clear that content moderator safety is not a binary issue. Some workers develop early symptoms of PTSD during their first few weeks on the job. Others develop them after doing the work for years.

You never know when you’re going to see the thing you can’t unsee until you see it.

Ultimately, I can’t say it any more clearly than Google’s own researchers: “There is ... an increasing awareness and recognition that beyond mere unpleasantness, long-term or extensive viewing of such disturbing content can incur significant health consequences for those engaged in such tasks.”

And yet, at Google, as at Facebook, workers are discouraged from even discussing those consequences. Warnings from managers that they can easily be replaced, coupled with the nondisclosure agreements they are forced to sign upon taking the job, continue to keep their work obscured from view.

And as some of them sink into anxiety and depression, the care they receive will differ dramatically depending on whether they work as full-fledged employees or as contractors. A relative few, like Daisy, will be able to take months of paid medical leave. Others, like one person I spoke with in Austin, will keep working until they are hospitalized.

Still, the fact remains: no matter how well you are paid or how good the benefits are, being a content moderator can change you forever.

Recently, an employee of one of the big tech companies explained to me the concept of “toxic torts” — laws that allow people to sue employers and homebuilders that expose them to unhealthy levels of a dangerous chemical. These laws are possible because we have a scientific understanding of how certain chemicals affect the body. We know that exposure to lead-based paint, for example, can cause brain damage, especially in children. We know that exposure to asbestos can cause lung cancer. And so we establish safe levels of exposure and attempt to hold employers and homebuilders to them.

Perhaps we will never be able to determine a safe level of exposure to disturbing content with the same degree of precision. But it seems notable that none of the tech giants, which employ tens of thousands of people to do this work, are even trying.

If that is to change, it will be because of some combination of collective worker action, class action lawsuits, and public pressure. Google employees are leading the industry in advocating for the rights of their contractor colleagues, and I hope that work continues.

Two years removed from her time at Google, Daisy still grapples with the after-effects of the work that she did there. She still has occasional panic attacks and takes antidepressants to stabilize her mood.

At the same time, she told me that she is grateful for the fact she was able to take paid medical leave to begin addressing the effects of the job. She counts herself as one of the lucky ones.

“We need as many people as we can doing this work,” Daisy says. “But we also need to change the overall system and the overall structure of how this work is being done. How we support these people. How we give them tools and resources to deal with these things. Or else, these problems are only going to get worse.”