Politicians at all levels, federal and state, are becoming more aware of the dangers of algorithms that hire, fire, and manage employees.
A new California law designed to prevent overwork in the warehouse industry does not name any specific company. Its target, however, is clear: Amazon.com Inc., which critics accuse of using technology to place unreasonable demands on workers.
Assemblywoman Lorena Gonzalez introduced AB 701, which restricts monitoring systems that interfere with basic worker rights such as rest intervals, restroom breaks, and safety. The law is an early test of whether governments can regulate human-resources software, which is projected to play a growing role in determining who is hired and fired, how much workers are paid, and how hard they work.
“This is just the beginning of our work to regulate Amazon & its algorithms that put profits over workers’ safety,” Gonzalez, a San Diego Democrat, tweeted earlier this year. The legislation, signed by California Governor Gavin Newsom in September, goes into force on Jan. 1.
Regulators are perpetually playing catch-up, particularly with the tech industry. Computer scientists are skeptical that laws can effectively govern machines predicted to become smarter than humans and even capable of deceiving them. Building artificial intelligence systems that do what they’re supposed to do remains a challenge; ensuring they have no unintended consequences is considerably harder.
Many of the functions historically performed by human managers have been delegated to machines at Amazon. At its massive fulfillment centers, software decides how many products a facility can handle, where each product should go, how many people are needed for a given shift, and which truck is best positioned to deliver an order to a consumer on time. Delivery drivers are constantly monitored by algorithms and cameras to ensure they deliver a set number of packages per shift, place them correctly, and follow traffic laws.
The company says automation is necessary to run its vast operations and that the technology is mostly working as intended. But no algorithm is perfect, and even a small margin of error at a company of Amazon’s scale can cause lasting damage.
An investigation into algorithmic management over the past year documented the experiences of a gig delivery driver mistakenly fired by a machine, an aspiring doctor paralyzed when a harried Amazon delivery driver plowed into his car, and warehouse workers who felt like disposable cogs in a machine. Workers often described the algorithms as merciless taskmasters, depicting a harsh environment with high turnover, injury rates above the industry average, and pressure to meet unrealistic productivity targets.
Other companies are widely expected to adopt features of Amazon’s management automation in the coming years.
Machines are already sifting through job applications, determining work schedules, and even predicting which employees are about to leave.
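Attrition-prediction tools of the kind described above typically score employees on signals such as tenure and recent absences. A minimal sketch of how such scoring might work, assuming a simple logistic model; every feature name, weight, and threshold here is invented for illustration and is not drawn from any real HR product:

```python
# Toy illustration of algorithmic attrition scoring.
# All feature names and weights are hypothetical, not taken
# from any real product.
import math

WEIGHTS = {"months_tenure": -0.05, "absences_90d": 0.4, "missed_quota_weeks": 0.3}
BIAS = -1.0

def attrition_risk(employee: dict) -> float:
    """Return a 0-1 risk score via logistic-regression-style scoring."""
    z = BIAS + sum(WEIGHTS[k] * employee.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

new_hire = {"months_tenure": 2, "absences_90d": 4, "missed_quota_weeks": 3}
veteran = {"months_tenure": 60, "absences_90d": 0, "missed_quota_weeks": 0}

print(attrition_risk(new_hire) > attrition_risk(veteran))  # True
```

Real systems train such weights on historical data rather than hand-setting them, which is exactly where critics say biased or incorrect data can creep in.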
Amazon has yet to sell its worker-monitoring software to other businesses, and some industry observers predict it will never do so. However, Amazon’s cloud division has recently begun offering tools that automate real-world jobs, including some developed for the company’s e-commerce and logistics operations. Amazon Web Services unveiled a suite of tools last year to monitor equipment and industrial lines, with the goal of supplementing or even replacing human workers. Amazon Connect call-center software is used by companies as diverse as Capital One, Labcorp, and GE Appliances. It uses artificial intelligence tools to make human agents more efficient and to completely automate some customer interactions.
The increasing ubiquity of algorithms has led to calls for laws requiring firms to be more transparent about how their software affects people.
Senator Chris Coons, a Delaware Democrat, introduced the Algorithmic Fairness Act in December. It would require the Federal Trade Commission to establish standards ensuring that algorithms are used equitably and that people affected by their judgments are notified and have the opportunity to correct errors.
“Artificial intelligence brings real benefits to society and opens exciting possibilities. However, it also comes with risks,” Coons said after unveiling the proposed legislation. “Companies are increasingly using algorithms to make decisions about who gets a job or a promotion, who gets into a certain school or who gets a loan. If these decisions are being made by artificial intelligence that is using unfair, biased or incorrect data, it has an outsized impact on people’s lives.”
The proposal has so far stalled in a divided Washington grappling with more pressing issues such as the pandemic and voting rights.
The California law has a narrower scope. It requires warehouses with at least 100 employees to disclose performance quotas to workers and prohibits workloads that prevent employees from taking legally mandated meal and rest breaks. The law aims to give workers some remedy if quotas violate safety laws, and it empowers the state labor commissioner to review workers’ compensation records at sites with high injury rates and issue sanctions if the injuries stem from excessive workloads.
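The compliance logic the law implies, that a quota must leave room for mandated breaks, can be sketched in a few lines. The break rules below are a simplification of general California requirements (a 30-minute meal break per roughly five hours worked, a 10-minute rest break per four), and the quota and rate numbers are hypothetical:

```python
# Sketch: does a productivity quota leave time for legally required breaks?
# Break rules are a simplification of California requirements;
# quota and rate figures are hypothetical.

def quota_is_feasible(shift_hours: float, quota_units: int,
                      max_units_per_hour: float) -> bool:
    meal_hours = int(shift_hours // 5) * 0.5        # 30-min meal per 5 hours
    rest_hours = int(shift_hours // 4) * (10 / 60)  # 10-min rest per 4 hours
    working_hours = shift_hours - meal_hours - rest_hours
    return quota_units <= working_hours * max_units_per_hour

print(quota_is_feasible(10, 800, 100))  # True: quota fits around breaks
print(quota_is_feasible(10, 950, 100))  # False: quota crowds out breaks
```

The second quota fails only because meeting it would require working through break time, which is precisely the situation AB 701 is meant to prohibit.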
Industry groups opposed the bill, arguing it would encourage employee lawsuits and impose unneeded costs on businesses.
An Amazon representative said the company will update its productivity-tracking process at its 32 California fulfillment sites, but declined to say what changes will be made to comply with the law. He also said Amazon does not impose workload quotas, instead using productivity indicators to identify employees who need help meeting their goals.
Algorithms and artificial intelligence have long been the subject of controversy among computer scientists. According to Roman Yampolskiy, an AI safety expert at the University of Louisville, the technology is such a black box that it’s difficult to tell if it’s creating potentially dangerous scenarios. He doubts that the California law will have the desired result.
“We don’t always know why machines make certain decisions because it’s a complex web of neural networks,” Yampolskiy said. “Any time you put in writing what you are trying to accomplish, smart people will find a loophole to work around it. Legislation will be gamed by algorithms and their owners.”