One day in July 2008, a fifty-three-year-old employee at France Telecom wrote a letter to his trade union representative. The worker was a satellite technician, but the company had assigned him to a new job in a call center. He hated the work. It made him feel “like a mechanical puppet.” He had pleaded with managers to move him to another role, but they refused. He wanted his union rep to know that he could not take this job anymore. After sending the letter, he walked to a train station and threw himself under a train.

This man was one of dozens of France Telecom workers who committed suicide between 2008 and 2014 because of work-related stress. Lest there be any doubt, some left notes saying they were killing themselves specifically and solely because of work. The suicides happened in waves, sparking an outcry in Europe and leading to the resignation of the company’s CEO.

This wasn’t happening by accident. Most of the suicide victims were fifty-something engineers and technicians. France Telecom wanted to get rid of them, but they were civil servants, and the law said they could not be fired. So the company devised a plan: it would make these people so miserable that they would quit. One strategy involved assigning employees to work in call centers. They were subjected to intense surveillance and forced to recite scripts, like “talking robots,” one worker wrote in a suicide note. Professionals accustomed to working with freedom and autonomy were now punished for minor tardiness and had to ask permission to use the bathroom, according to the British Medical Journal.

The tactic was to demean the workers and dehumanize them—to deprive them of their human qualities. To treat them like robots. They were tethered to machines and expected to behave like machines themselves. To be sure, suicide was an extreme response. Why didn’t they just quit? Why do some people crumble under this kind of treatment while others carry on? Each victim might offer a different explanation. More interesting, however, is what their suicides reveal about the nature of the work they were doing. Precisely that kind of work and those kinds of working conditions are becoming more common. Think of people relegated to gig-economy jobs. Or people who fulfill orders in Amazon warehouses, racing around picking boxes off shelves, too busy to take bathroom breaks. Or the thousands of people living lives of quiet desperation in call centers—monitored, measured, and managed by machines. Some call center workers rarely interact with an actual human manager, and when they do, it is usually only to be rebuked because the performance-monitoring software has “reported” them for doing something wrong.

Even for ordinary white-collar workers, the modern workplace abounds with dehumanizing policies and practices, some trivial, some more profound. In my quest to understand the epidemic of worker unhappiness, I’ve come across stressors like dwindling paychecks, job insecurity, and constant, unrelenting change. But this fourth and final factor of unhappiness in the workplace—dehumanization—might be the most dangerous of them all.

Much of the dehumanization is driven by technology. Twenty or thirty years ago, when computer technology first entered the workplace, the idea was just to give people tools that let them get their work done more efficiently. We had personal computers running word processors and spreadsheets to help us complete tasks that once required hours of painstaking effort. Back then, we used tech. Today, it feels like tech is using us.

Computers have become unfathomably more powerful, pervasive, and intelligent. Technology connects the supply chain to the sales department to the accountants in the finance office. Tech tracks the humans who work in customer service and support—and in some cases just handles customer support on its own, without any humans needed. Tech tells telemarketers if they’re hitting their quotas and warns them if they’re falling short. Tech decides which people should be hired and which should be fired. The company itself can come to feel like a kind of computer, a big thrumming electronic machine that we humans get plugged into.

Hoping to save money, companies now automate every aspect of their organization, from sales and marketing to customer support. They are even automating HR, a department that actually has the word “human” in its name. Ask a question about how to sign up for health benefits, and there’s a good chance you’ll be talking to a chatbot. Send out your résumé when you’re job-hunting, and it may be screened by a software program, not a human being. To get to an interview with a human, you first need to impress the software. As companies rely on ever-smarter “applicant tracking systems,” job hunters keep figuring out new “hacks” to beat the filters. Some even deploy an AI arsenal of their own to combat the one used by companies. A program called VMock uses machine learning and artificial intelligence to scan your résumé and tell you how to make it better. VMock tells you which words to use and which to avoid, and even recommends fonts and formatting. By 2025 more than a billion people will have interacted with an AI assistant, Wired reported in June 2018.

How do corporate executives measure employee morale? That, too, can be done by machine. Instead of walking around and talking to people, managers today use apps like TinyPulse to survey workers about their happiness—apparently unaware that impersonal electronic surveys might be part of what makes workers unhappy. When I was working at a start-up, we were surveyed like this relentlessly. Once, we were asked what the company could do to make us happier. “More surveys,” I wrote back to the machine.

Even the most basic thing we do—talking to each other—has become increasingly mediated by technology, thanks not only to email and text messaging but to newer platforms like Slack and HipChat. Have you ever seen two people at work sitting side by side, or facing one another, communicating via text messages rather than actually talking? Apparently, a lot of us now prefer this. The problem is that the electronic tools that purport to connect us can also make us feel isolated and disconnected from one another—“alone together,” as MIT sociologist Sherry Turkle puts it.

Those are the small things, the “lite” version of dehumanization. The more extreme version is playing out in workplaces like Amazon shipping centers. There, much of the work is done by robots, but the company still needs humans for some tasks. The twist is that Amazon expects those humans to behave as much like robots as possible. They get few breaks and must constantly race to hit quotas. They perform repetitive tasks and are monitored by software that scores their performance and dispenses penalties for infractions. “The result is a work environment that is profoundly dehumanizing,” according to a report by the Institute for Local Self-Reliance, a nonprofit advocacy group.

For white-collar workers, taking a job at Amazon means agreeing to be plugged into what one former employee described to the New York Times as a “continual performance improvement algorithm,” a vast invisible machine that monitors employees, measures their performance, and doles out data-driven punishment. Remarkably, a lot of Amazon professionals go along with this. They subsume their identities into the system and become one with the algorithm. They even call themselves “Amabots.”

An Automaton Class

Uber manages its three million drivers almost entirely with software. Why not? The company makes no secret of the fact that it hopes one day (as soon as possible) to get rid of human drivers altogether and replace them with self-driving cars. For now, the ride-sharing company treats human drivers as poorly as it can and keeps them at arm’s length. Software becomes a barrier between worker and employer. To the driver, what is Uber? Where is it located? What does it look like? Uber is a black box. Uber is an app on a smartphone screen.

Drivers rarely talk to actual human managers at Uber, except when being recruited, and sometimes not even then. They answer to a software “boss” that tracks their performance and deactivates them if their score falls below a certain point. Software entrepreneur David Heinemeier Hansson says Uber drivers and other gig-economy workers represent a new caste of people—an automaton class, who are “treated as literal cogs in transportation and delivery machines.” The machine—the software—is the essence of the company, not the humans. The humans are ancillary to the machine. We are meat puppets, tethered to an algorithm.

Companies first embraced the idea of using software to manage workers because it saved money. Now they’ve discovered a secondary benefit: software can creep into employees’ psyches and exploit their vulnerabilities. Uber uses software to manipulate its drivers psychologically, borrowing tricks from addictive video games. The company employs hundreds of social scientists who devise behavioral science techniques that push drivers to work longer shifts.

This represents a new twist on Taylorism and the notion of management science. Software-driven psychological manipulation started in the gig economy, but it may soon come for the rest of us. “Pulling psychological levers may eventually become the reigning approach to managing the American worker,” the New York Times reports. Companies already use psychological tricks on consumers, hoping to get them to buy products.

Sixty years ago, the psychologist Erich Fromm warned in The Sane Society that the combination of capitalism and automation could create deep psychological harm, leading to widespread alienation, depression, and a kind of cultural insanity. “In the next fifty or a hundred years…automatons will make machines which act like men and produce men who act like machines. The danger of the past was that men became slaves. The danger of the future is that men may become robots.”

During the early days of the personal computer and then the dawn of the Internet, a lot of people believed the growing use of technology would be good for workers. Technology would empower us and give us more autonomy and freedom. It could democratize the workplace and give rank-and-file workers a greater voice in how the company was run.

But some started to worry—including some who had invented the new ways of working. In the 1990s Thomas Davenport, now a business professor at Babson College, helped create something called business process reengineering. This was a strategy for using computer technology to restructure organizations. It was supposed to be a good thing, but when corporations embraced “reengineering,” they just used it as an excuse to fire lots of people. Davenport, who was seen as the father of reengineering, was appalled. He decried the mass firings as “mindless bloodshed” carried out by managers who “treated the people inside companies as if they were just so many bits and bytes.” He called it “the fad that forgot people,” and said he regretted his involvement.

Things only got worse. In 2005, another Babson professor, James Hoopes, warned that technology “can be used not only to liberate human beings but to control them,” and fretted that as managers relied more on technology, employees were being dehumanized. In 2018 I wrote to Hoopes and asked him where things stood now. “My worst fears are becoming realized,” he replied. “But now I’d say it goes beyond the dehumanized employee to the dehumanized customer as well.”

When I visited Hoopes at his office on the Babson campus outside Boston, he said he had become disappointed in how companies have used information technology to automate customer service, forcing customers to interact with computer systems. They’re also using technology to track how customers use products and to gather information about them. But the biggest letdown stems from how technology gets turned against employees. “I would like to see information technology that was put at the service of employees, as a way to make work better, rather than information technology being put at the service of management to make work more efficient,” he told me.

There is a lot of research about the harm caused by dehumanization in the workplace. For one thing, it leads to bullying and harassment. It may contribute to mental illnesses such as depression, anxiety, and stress-related disorders. It can cause “pervasive feelings of sadness and anger,” and “leave its victims feeling degraded, invalidated, or demoralized,” according to a 2014 study by Kalina Christoff, a psychology professor at the University of British Columbia in Vancouver. Another study found that dehumanized workers feel shame and guilt, while also demonstrating diminished cognition.

There’s one aspect of modern work that is especially harmful, and that’s the ever-increasing use of electronic surveillance. Privacy laws make it illegal for the state to spy on us. (Supposedly, anyway.) But employers are bound by no such restriction. You’re an employee at will. They can snoop on you as much as they want. And they do—more and more each year.

At Work in the Panopticon

In the eighteenth century, British philosopher Jeremy Bentham designed an ingenious prison in which a single guard could control a large number of inmates. It was a circular building where a guard sat in a central tower and prisoners were placed in cells around the periphery. A viewing mechanism enabled the guard to watch any prisoner at any time. Since prisoners could not tell when they were being watched, they would have to assume that they were always being watched. Therefore they would behave. Instead of needing an army of prison guards, you could exploit the psychology of the inmates and get them to control themselves. Bentham called this the panopticon, from Greek roots meaning, roughly, “to see all.”

The idea didn’t really fly as a way to build prisons, but it is often used as a metaphor about power and control in modern society, most notably by French philosopher Michel Foucault. Researchers who study workplace surveillance often cite Foucault’s work when they discuss the “panoptic effect” that surveillance exerts on employees.

Today electronic surveillance at work has become nearly ubiquitous and is enabled by an array of powerful tools. “Electronic performance monitoring” systems track punctuality, break time, idle time—pretty much everything you do at work. Employers snoop through our email, most often by using algorithms to scan for keywords but sometimes by having actual humans read through the messages. In 2007, an American Management Association survey found that 40 percent of companies had human beings reading through employee email. Keep that in mind the next time you feel the urge to fire off a message bashing your CEO to a work buddy.

Companies monitor our social media activity, too. Some even spy on us through the cameras in our computers. They listen to and record our phone calls, and they track our location with ID badges, wristbands, and mobile phones. A Wisconsin company called Three Square Market has put RFID implants into employees’ hands so they can swipe into the building with a wave of the hand. Some companies collect biometric information about workers, like their voiceprints, iris scans, and fingerprints. A common application is requiring fingerprints for “time and attendance” systems, making employees prove when they clock in and clock out.

In Illinois, dozens of employers, including InterContinental Hotels, are facing lawsuits from employees whose fingerprints were gathered and who claim the practice violated the state’s Biometric Information Privacy Act. Companies also use voice biometrics with customers; financial firms like Vanguard use voiceprints to authenticate account holders. Nuance Communications, which sells voice biometric technology, claims to have collected voiceprints from three hundred million people, who perform more than five billion authentications annually. In addition to gathering biometrics, companies feel free to peer into our brains, subjecting workers to personality testing and figuring out how to push their buttons. According to the Wall Street Journal, some organizations, like SPS Companies, a steel processor, now use AI-based tools to evaluate employee surveys and figure out how people really feel about work—as opposed to what they say.

Companies claim surveillance is necessary, arguing that it boosts productivity and prevents theft. But many plunge ahead merely because the allure of new technology becomes impossible to resist. Those companies don’t bother with a cost-benefit analysis; they “monitor their employees simply because they can,” one study said. The same study argues that whatever benefits employers might gain from surveillance may be outweighed by the harm caused to workers.

The damage is significant. Surveillance creates a toxic and demoralizing environment, a digital sweatshop filled with stress, anxiety, depression, fatigue, anger, and even loss of identity. When the National Association of Working Women surveyed female call center workers in the 1980s, the women frequently described their feelings about surveillance using images of rape or sexual abuse. Another study, a survey done at AT&T comparing monitored with unmonitored clerical workers performing similar tasks, found that monitored workers reported significantly more physical ailments, like stiff necks, sore wrists, and numb fingers, as well as “racing or pounding heart” and acid indigestion.

Surveillance “can have a profound effect on employees’ sense of dignity, their sense of freedom, and their sense of autonomy,” Jennifer Stoddart, the privacy commissioner of Canada, warned in a 2006 speech. “The working world of the future could be a very scary place if we don’t hold the line on increasingly pervasive monitoring.”

Now, as I write this twelve years later, we are surveilled in ways Stoddart could not have fathomed. Much of the surveillance technology comes from Silicon Valley. Tech companies are also among the most aggressive users of surveillance, tracking emails, chats, instant messages, and website visits. “It’s horrifying how much they know,” a former Facebook employee, who was fired for leaking information to a reporter, told the Guardian. Facebook employs a team of secret police, known internally as “rat-catchers,” who hunt down workers suspected of leaking confidential information. “If anyone steps out of line, they’ll squash you like a bug,” the fired employee said.

Apple reportedly plants moles throughout the organization to spy on workers—employees call them the “Apple Gestapo.” Google and Amazon encourage employees to snitch on co-workers. Amazon even provides a software tool to make snitching easier. Workday, a Silicon Valley software maker, includes a similar snitching tool in its bundle of HR programs, which is used by more than two thousand companies.

Many tech companies run a modern-day version of the old-fashioned sales boiler room—vast call centers where hundreds of workers, usually recent college graduates, bang away on phones, calling dozens of people every day. Basically, they’re telemarketers. For some, the work can be soul-destroying, especially because of the relentless monitoring and surveillance.

Six months after graduating from a big California university with a humanities degree, a woman I’ll call Athena got hired by Yelp, the online review website. She went in feeling excited to be working at a cool tech company with video games and a beer garden in a hip part of San Francisco, but almost immediately became disillusioned.

Her every move was monitored by software. Her calls were recorded. “I hated having to repeat the same task every two minutes for eight hours a day. It was dehumanizing. I became depressed. I would come home and go to sleep at eight o’clock. I started to dread going to work.” She lasted about a month, just long enough to get her first (bad) performance review. “It was an incredible disappointment, a terrible experience,” she says.

Take a Seat. The Robot Will Be Right with You.

The next time you look for a job, your first interview might not be with a human being—but with an AI-powered software system.

Instead of talking to a recruiter from the HR department, you sit in front of a computer, or even your smartphone, and answer questions that pop up on the screen. You use your device’s camera to record your responses on video. You might also be asked to solve a puzzle or play a game. The whole thing takes about ten minutes. When you’re finished, artificial intelligence algorithms zip through your video, sizing up how well you speak, which words you use, and even your tiniest facial expressions. Are you smiling? Blinking? Raising your eyebrows a lot? If the robot recruiter deems you worthy of actual human attention, you will be passed on to the next round of interviews. Fail to impress the software, and you’ll receive a nice thank-you note.

This sounds like science fiction, perhaps even like the Voight-Kampff test from Blade Runner, the one used to identify replicants by asking them questions. But this stuff is happening today. A Utah tech company called HireVue provides this service to more than a hundred companies, including Unilever and Hilton Worldwide.

Companies like the AI system because it lets them look at far more job candidates—ten times as many as they might see using the old-fashioned in-person approach, HireVue claims. They can also zip through a lot of candidates in less time. At Hilton, the average “time to hire” went from six weeks to five days after the hotel chain started using HireVue’s AI interview system. Another driver is diversity: computers, the argument goes, don’t carry the unconscious bias that humans bring to the table. HireVue claims the software does a better job of picking job candidates than a human recruiter does.

HireVue has been in business since 2004. At first the company provided a service that let companies interview job candidates by recorded video. That saved companies money because they didn’t have to send recruiters to college campuses to conduct first-round interviews. It also meant companies could look at a lot more candidates from more campuses. “It lets them really open up the aperture much wider,” HireVue CEO Kevin Parker says.

But there was still a bottleneck. Human recruiters still had to look at all the videos, and there was only so much they could do. Sure, they could fast-forward through videos and make decisions. But to scale up even more, “We started asking, how can we use technology to take the place of what humans are doing?” Parker says.

HireVue assembled a team of data scientists and industrial and organizational psychologists, who took existing science on things like “facial action units” and encoded it into software. Two years ago HireVue began offering this service to its customers.

HireVue has more than seven hundred clients, including Nike, Intel, Honeywell, and Delta Air Lines. Only about a hundred are using the AI-powered assessment service, though Parker says that part of the business is growing rapidly. So far, HireVue’s AI system has evaluated more than a half million videos.

One HireVue client, a big bank, reviews a thousand videos each day. HireVue’s business is reaching a scale that previously would have been unimaginable. In the company’s first twelve years it recorded a total of four million videos. Today HireVue records that many in a single year. And this is just the beginning. A decade from now this stuff will be routine and commonplace, Parker says.

This brave new world means job seekers must learn a new set of skills. Consultants are already springing up to teach students how to impress their robot overlords. “The big challenge of the talk-to-the-box interview, or the AI interview, is that you can’t get any feedback on whether what you’re saying is interesting to the interviewer or not,” says Derek Walker, course director at Finito Education, a London consultancy that offers career training and has started coaching new college graduates on how to do well in AI-based interviews.

All of the nonverbal give-and-take of a human-to-human interview goes away. For most people that can be really disconcerting. “This is completely alien to us as human beings,” Walker says. “We’re used to going back and forth, building rapport, and you can’t do that with a box. So it’s difficult to feel comfortable. Certain people find this a very nerve-wracking and disturbing experience.”

It’s so disconcerting that some good job candidates might get passed over because they don’t perform well, he says. The trick is to practice. Walker works mostly with recent college graduates, helping them learn to feel comfortable in front of the camera.

Walker has spent thirty years in recruiting, including stints at Merrill Lynch, Barclays, and Saïd Business School at Oxford, and for most of that time the field changed very little. AI-based interviewing “is the first real major innovation for quite a long time,” he says.

Today AI-based job candidate assessments are a novelty, but in a decade or two they could be routine. There’s a scary implication to this. HireVue’s system can work up a rich profile of a job candidate, even evaluating personality by asking questions that measure traits like empathy. On top of that, HireVue recently acquired MindX, which uses psychometric games and puzzles to gauge someone’s cognitive abilities, estimating IQ and other measures of reasoning. In theory the system can infer things that you are not even aware of. It might know more about you than you know yourself.

This raises issues about the kind of information being gathered and who has control over that information. In 2017 and 2018 Facebook came under fire after revealing that companies like Cambridge Analytica in Britain had used online puzzles and quizzes to glean psychographic information about millions of Facebook users, and then employed that insight to manipulate people with targeted political ads.

They were using stupid little Facebook quizzes. Imagine how much more information you reveal about yourself in a job interview. HireVue’s robot recruiting system is building a database of deep, rich psychographic information on millions of people. Moreover, the data is not anonymous. Your psychographic blueprint is connected to all of your personal information—name, address, email, phone number, work history, education. And they have you on video. Everything you say in an interview can follow you around for the rest of your life. If the AI determines that you’re “not competitive” or “too independent,” or have only “average intelligence,” will this rule you out of certain jobs? If you slip and use the word damn, will the system mark you as vulgar?

Parker says HireVue collects a lot of information, “but we safeguard it. We’re very careful about that.” HireVue stores the information but doesn’t own it. The information is owned by the clients who pay HireVue for its service. “We never use it for any other purpose other than to help someone get an interview and a job,” Parker says.

Fair enough. But the potential exists for this data to be compiled, sifted, sold, shared, stolen, and used in ways that we can’t imagine. Will people even realize, when they sit down to apply for that bank teller job, what they are actually giving up? Even if they are aware, what choice will they have? Will they not apply for the job? Giving up privacy could become the price people pay to enter the workforce. Getting hired could mean letting Skynet delve into the deepest recesses of your psyche and figure out your IQ, your personality type, and all your quirks and foibles. A complete psychographic profile of you exists—the blueprint of your brain, every inch of your wiring—and you have no control over it. Apply for another job, and the system adds to your profile. Over the years, your profile becomes richer and more granular. Can you imagine what that information would be worth to a political party, or certain government agencies? And what they might do with it?

The whole process of recruiting and evaluating and hiring used to be haphazard and half-assed, with information strewn across different systems and kept in paper files. It was messy, but that messiness of the analog world was basically what we called privacy. Soon thousands of companies will be gathering profiles on millions of people. Anyone who gets that data can figure out what makes those people tick. We worry a lot, and rightly so, about humans being replaced by machines and about jobs being killed by automation. We should also worry about the humans who will have to work alongside artificial intelligence.

The machines determine who gets hired and sometimes (as at Uber) who gets fired. What will this do to humans, as a species? In the journey from analog to digital work, we are being pushed into bargains that we may not fully comprehend. In our quest to gain efficiency, to boost productivity, to do more with less, we may give up something much greater in exchange.

Your Next Boss May Be a Computer

Ray Dalio is the founder of Bridgewater Associates, the world’s largest hedge fund. He’s one of the richest people in the world. And if Dalio has his way, your next boss might be a computer. For several years Dalio has been trying to develop an AI-powered “automated management system” that could render human managers, with their gut instincts and “World’s Greatest Boss” mugs, obsolete. The system is based on concepts and processes—known internally as the “Principles”—that Dalio uses to run Bridgewater. It’s “like trying to make Ray’s brain into a computer,” an insider told the Wall Street Journal. The project is being led by computer scientist David Ferrucci, who helped create IBM’s Watson artificial intelligence system, and has had various code names, including the Book of the Future, the One Thing, and the Principles Operating System, or PriOS.

Hedge funds like Bridgewater already rely on AI systems to make stock trading decisions. Teaching machines to make business decisions seems like the next logical step. In 2017, Dalio told Business Insider he expected to have a “thorough version” of PriOS running at Bridgewater by 2020. He compares PriOS to a GPS navigation system. Just as your GPS tells you where to make the next turn, PriOS will tell managers which decisions to make, like when to hire or fire someone, or even when to make a certain phone call, Vanity Fair reported in 2017. Dalio wants to share his invention with the world and told Bloomberg in 2017 that tech companies were eager to get their hands on it.

Using software to manage a company may not be so far-fetched. Even top executives might be replaceable, say researchers at Silicon Valley’s Institute for the Future, who a few years ago coded up software called iCEO that could do the job of a big-company CEO. Devin Fidler, one of the researchers, wrote about the project for the Harvard Business Review in 2015 and warned, ominously, “It will not be possible to hide in the C-suite much longer.”

Whether this will be good or bad depends a lot on who programs the AI and sets its parameters. Dalio wants to replicate in software the brutal, combative culture that he created at Bridgewater. For most of us that would be a nightmare. Even in the sharp-elbowed world of hedge funds, the cult-like Bridgewater has earned a reputation for shocking levels of nastiness. “A cauldron of fear and intimidation” is how one former employee described the hedge fund in a complaint to a state labor board.

Dalio forces all Bridgewater employees to undergo psychometric testing. (He loves testing. When his kids were little, he had them psychometrically tested as well, to get “a road map for how they would develop over the years.”) Security guards roam the halls. There are cameras everywhere. All meetings are recorded. People carry iPads with an app that lets them constantly critique each other using “dots”—live, on the fly, during every meeting—while an algorithm gathers up all the dots from all the meetings and generates a profile of each person’s personality, which is used to assign them to particular jobs. Employees are encouraged to criticize each other and rat each other out. Some people are badgered to tears. “If there was a hell this would be it,” says one Glassdoor commenter, adding that “it’s a cult, basically,” and a “human being experiment.”

Working for the real-life Dalio sounds awful enough. But the AI-replicated version of him could be even worse. Imagine putting this clusterfuck of a culture into an AI-powered computer, then giving that computer to the nitwits you work for today and letting them run wild with it, and you have an idea of the kind of future that Ray Dalio wants the rest of us to inhabit.

It’s amazing to me that anyone takes Dalio seriously in the first place. But the guy has a net worth of nearly $18 billion—he once earned $4 billion in a single year—and when you’re worth that kind of coin, people listen to you. In 2017, Dalio published a memoir, Principles, which lays out his philosophy about life and work. The book has been a massive bestseller.

In the book Dalio embraces the metaphor of man as a machine—but unlike, say, the psychologist Erich Fromm, who envisioned men behaving like machines and considered this a nightmarish abomination, Dalio thinks it’s great. “Think of yourself as a machine within a machine,” he intones. If you’re managing people, imagine that you’re operating a machine, trying to get the best outcome, he advises.

Principles grew out of a hundred-page manifesto, also called “Principles,” that is given to all Bridgewater employees and contains 227 principles for becoming a better person. New York magazine once said the manifesto read as if “Ayn Rand and Deepak Chopra had collaborated on a line of fortune cookies.”

The book’s title also nods at the notion of being “principled,” which is not something you usually associate with hedge fund managers. The tome is nearly six hundred pages long, and it’s only the first of a two-volume set Dalio plans. Clearly this man has a lot on his mind. To get a sense of how Dalio sees himself, consider that on page 2 of Principles, by way of explaining why he wrote the book, Dalio says he wishes that Einstein and Churchill and Leonardo da Vinci had written down their “principles,” too.

Dalio is a hero in a certain circle of finance and consulting professionals, but it’s too early to say how ideas like his will play out in the wider world. Even if he fails to create an AI-powered boss, someone else will probably figure out how to manage people with computers.

The potential downsides are obvious—and in some cases, they are already happening. Consider the case of Ibrahim Diallo, a thirty-one-year-old software programmer in Los Angeles who lost his job when his employer’s software system went haywire. The software determined that Diallo had been terminated. His manager knew this wasn’t the case. But “the machine” kept shutting down the ID badge Diallo used to enter the building. It also prevented him from logging into his computer. Diallo’s manager escalated the situation to a director. The director sent an email to HR—and received back a computer-generated reply saying Diallo was not a valid employee. Security guards escorted Diallo out of the building. “At first I was laughing. It was confusing and funny at the same time,” Diallo told me.

It took three weeks to sort things out. Diallo returned to work but left the company a few months later. He says the experience taught him something: “Automation can be an asset, but there needs to be a way for humans to take over if the machine makes a mistake.” The incident also revealed something about the human tendency to defer to machine intelligence and to invest computers with authority. We believe they are smarter than we are. Have you ever followed the directions on your Waze navigation app even when the route seems to make no sense? If so, you know how this works. The good news is that usually Waze is right. Even if it’s not, the worst that happens is you go down a wrong road and get delayed a little bit. In Diallo’s case, a guy lost his paycheck for a few weeks. But the stakes may get higher as we rely on AI to run our workplaces. And by most estimates AI is going to play an ever-bigger role in our lives.

Sales of artificial intelligence software will grow from $8 billion in 2016 to $52 billion in 2021, according to IDC, a research firm. Sales of robotic systems will more than triple, from $65 billion in 2016 to just under $200 billion in 2021. By 2030 robots may wipe out eight hundred million jobs, roughly one-fifth of all jobs worldwide, according to McKinsey. By 2050, robots may replace one-third of American working-age men, Brookings Institution vice president Darrell West claims in his 2018 book The Future of Work: Robots, AI, and Automation.

Companies love robots and AI-powered management systems. They don’t have accidents. They don’t call in sick and don’t have messy personal lives. They also don’t collect paychecks or demand health insurance and 401(k) plans. Someday, investors could create companies that might not need any flesh-and-blood humans at all. The workers would be robots, and the managers would be artificial intelligence software code. Futurists call these “autonomous organizations.” For investors, this sounds like a dream scenario—except that the next step might be that even the investors will not need to be human. In 2016, a team of computer scientists in Hong Kong launched a hedge fund run completely by AI. “If we all die, it would keep trading,” founder Ben Goertzel told Wired.

How can humans keep up? The answer so far has been for humans to try to be more like machines. We need a Plan B.