Now is the time to figure out the ethical rights of robots in the workplace

By 2025, robots and machines driven by artificial intelligence are predicted to perform half of all productive functions in the workplace. What remains unclear is whether those robots will have any worker rights.

Companies across many industries already have robots in their workforce. DHL uses autonomous robots by Fetch Robotics to help fulfillment center and warehouse employees, while Toyota, Google and Panasonic are among the companies that use Fetch’s mobile manipulator technology in research efforts.

Humans have already shown hostility toward robots, tipping over police robots or knocking down delivery bots in hopes of reclaiming a feeling of security or superiority. Incidents of violence against machines are nothing new; man has been at odds with machines for decades. We kick the car when it will not start, shove the vending machine when it does not dispense, and bang on the sides of the printer when it does not produce a copy. What is new is that it is only a matter of time before these automated creatures "feel" this hostility, or the need to retaliate. And if we grant robots rights as quasi-citizens, will they be charged with assault and battery, and held legally responsible under criminal or civil law for the harm they may cause? Or should a robot's programmer be held jointly responsible?

These acts of hostility and violence currently have no legal consequence: machines have no protected legal rights. But as robots gain more advanced artificial intelligence that empowers them to think and act like humans, legal standards will need to change.

Several studies show that we are spending considerable time worrying about what robots will do to humans. According to the World Economic Forum's future of jobs study, "if managed wisely, [machine integration] could lead to a new age of good work, good jobs and improved quality of life for all, but if managed poorly, pose the risk of widening skills gaps, greater inequality and broader polarization." And according to a survey by Pew Research Center, "Americans express broad concerns over the fairness and effectiveness of computer programs making important decisions in people's lives."

Few are considering this trend from the perspective of the rights of our automated coworkers. What legal standing should the robot in the cubicle next to you have from a labor, employment, civil rights or criminal law perspective, and as a citizen? There are still far more questions than answers.

Can a robot be programmed to be racist? Can a robot sexually harass a co-worker, human or non-human? Will humans discriminate against the machines? Will workplace violence or intolerance be tolerated against robots? Will robots at some point be able to feel pain, stress, anxiety, a lack of inclusiveness or overt discrimination? How might it affect their productivity? How might it affect the culture of the company? Will robots react to human violence or incivility in turn? Could a robot discriminate against another robot? Should robots be compensated for their work? How and when? Are they eligible for vacation or medical benefits? What if a robot causes harm to a coworker or customer? Who’s responsible? Will robots be protected by unions? If a robot “invents” something in the workplace, or improves a product or service of the company, who or what will own the intellectual property?

As the retail and technology giant Amazon has come to learn, integrating machine learning may unintentionally cause big problems and expose a company to unexpected ridicule or even litigation. Amazon was developing a recruiting engine programmed to help the company find top talent and make hiring more efficient. But testing of the secret system revealed that it discriminated against female candidates, exposing the company to bad press.

The AI system was not programmed to be biased against women; in fact, many thought it might actually help remove human biases altogether. So what happened? The system was trained to screen applications by observing patterns in old resumes, most of which had been submitted by men, and eventually the machine taught itself to screen out female applicants. Amazon scrapped the technology while it was still a work in progress, but the company has surely learned a valuable lesson, and it will not be the last to do so.
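The mechanism is worth making concrete. Below is a minimal, hypothetical sketch, in no way Amazon's actual system, of how a screener trained on skewed historical decisions can learn a gender proxy: the synthetic data and the token "women's" (as in "women's chess club") are illustrative assumptions, and the scorer is a simple log-odds model chosen for brevity.

```python
# Hypothetical illustration of learned bias; the data below is invented.
from collections import Counter
import math

# Synthetic history of (resume keywords, hired?). Because past hires skewed
# male, the token "women's" happens to appear mostly among rejections.
history = [
    ({"python", "sql", "captain"}, True),
    ({"java", "lead", "captain"}, True),
    ({"python", "women's", "captain"}, False),
    ({"sql", "women's", "lead"}, False),
    ({"java", "python"}, True),
    ({"women's", "java"}, False),
]

def token_weights(data, smoothing=1.0):
    """Smoothed log-odds per token: positive means associated with past hires."""
    hired, rejected = Counter(), Counter()
    for words, label in data:
        (hired if label else rejected).update(words)
    vocab = set(hired) | set(rejected)
    return {w: math.log((hired[w] + smoothing) / (rejected[w] + smoothing))
            for w in vocab}

def score(resume_words, weights):
    """Sum the learned weights of a resume's tokens."""
    return sum(weights.get(w, 0.0) for w in resume_words)

weights = token_weights(history)
# Two resumes with identical qualifications; one mentions a "women's" activity.
a = score({"python", "sql", "captain"}, weights)
b = score({"python", "sql", "captain", "women's"}, weights)
assert b < a  # the learned proxy silently penalizes the second resume
```

No one wrote a rule penalizing women; the penalty emerges entirely from the correlation baked into the historical labels, which is why such bias is easy to ship and hard to spot.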

Robots are predicted to serve in more meaningful roles within business and society than the basic manufacturing jobs, online customer service and order fulfillment they already handle. We may even begin to see AI-driven robots sitting in the seats of boards of directors at Fortune 500 companies or on a judge’s bench. An AI-based astronaut assistant, CIMON, is currently working on the International Space Station. In 2017, a European Parliament legal affairs committee recommended a resolution that envisions a special legal status of “electronic persons” to protect the most sophisticated automated robots, a political idea that remains controversial among politicians, scientists, business people and legal experts.

Considering these issues is an important step employers and employees must take if our society hopes to achieve the singularity and symbiosis of machine integration in the workplace. In the next five to ten years, we will need an entirely new set of labor and employment laws affecting the EEOC, unions, workplace violence, intellectual property, termination, downsizing and family leave, not to mention a new set of social and workplace norms and best practices.

Many may remember the hitchhiking robot that made headlines on its journey across the United States, only to be destroyed in Philadelphia two weeks in (the robot did successfully travel across Canada before its ultimate demise). Some of this abuse stems from tomfoolery, some from pure vandalism, and some may stem from fear. Regardless of the reasoning, the abuse raises concerns about humans' ethics toward technology. As Peter W. Singer, a political scientist and the author of "Wired for War", has said, "The rationale for robot 'rights' is not a question for 2076, it's already a question for now."

Andrew Sherman is a partner in the Washington, D.C., office of Seyfarth Shaw and the author of 26 books on business growth, capital formation and intellectual property. He is an expert on corporate governance and has served as general counsel to the Entrepreneurs’ Organization (EO) since 1987.
