Bob Casey talks ‘robot bosses’ and the need to regulate AI

Pennsylvania’s senior senator discusses the challenges posed by artificial intelligence and who should be responsible for regulating it.

The term “robot bosses” might bring to mind images of humanoid, droid-like machines out of a science fiction film, but in today’s reality, where artificial intelligence is an increasingly hot topic among politicos, businesses and regulators, the term means something quite different.

U.S. Sen. Bob Casey wants to limit the use of artificial intelligence as a means to make employment-related decisions in the workplace, a practice that he says threatens to add further strain to the relationship between workers and their bosses. 

“I think it’s – for a lot of people – disturbing that bots are firing workers without any recourse,” Casey told City & State. “There’s nothing that constrains a big company from abusing these automated decision systems. We’ve got to make sure that we’re getting – if not ahead of the technology – at least trying to keep up with technology in the workplace. In that context, we’re pretty far behind.”

In the interview below, Casey discusses the need to protect workers against automated decision systems, the threats posed by artificial intelligence and how state and federal lawmakers can work together to regulate AI.

The following interview has been edited and condensed for length and clarity.

What initially prompted you to introduce the No Robot Bosses Act? How did this bill come to be?

The reality of the workplace is that technology is now taking the place of human interaction. You’re hearing story after story of employees either being under surveillance by their employers or, in other cases, having employment decisions made or greatly influenced by some kind of automated system.

Especially in this one area of our society, we’ve got to make sure that we put constraints on these systems and have guardrails to protect workers. The bill, among other things, would prohibit employers from relying exclusively on these automated decision systems when they’re making employment-related decisions. It would require that these systems are tested and validated. It would also make sure that the U.S. Department of Labor is doing more by establishing the so-called “Technology and Worker Protection Division,” which would regulate the use of these automated decision systems.

In essence, what we need is more of an effort to protect the worker from arbitrary decisions made by a machine or a bot or something automated, as opposed to by the people who should be responsible for those decisions: the employer interacting with an employee in the workplace.

If the proper checks aren’t in place to monitor the use of automated decision systems, what dangers do these systems present if they continue to go unregulated?

You’re going to have employment decisions or adverse employment outcomes for workers across the country that I’m not sure anyone has imagined before or had to experience. It’s difficult enough in workplaces where employers often have, in my judgment, too much power. The diminution of unions over time has made it very difficult for workers to begin with. This adds another layer of burden and, potentially, another layer of terribly adverse consequences for workers.

You had also mentioned AI used for surveillance or productivity monitoring. Does that concern you? Is that something that also might need to be addressed legislatively?

We have a whole separate bill on that. It’s the so-called Stop Spying Bosses Act. That bill is to deal with that particular problem, where the employer is using technology to monitor the activities of the employee in a manner that is frankly abusive. In an already disproportionate power dynamic, it just makes the power that the employer has over the employee even more onerous for the employee.

Are there other areas in the artificial intelligence space that you think Congress really needs to address?

There’s no question that the most significant challenge – in addition to the employment context – would be national security. Just imagine what a terrorist or an autocrat or another actor – whether it’s a state actor, a nation or even an individual or small group of individuals – could do if they’re able to use AI to invent a speech by an individual that creates all kinds of instability… It can be used in so many ways that are pernicious and evil that it has huge implications for national security.

It also has huge implications for economic security, even beyond the workplace. So much of human activity will be automated, and we’ve got to wrestle with that and figure out how you harness the power of AI for good. We know that there are all kinds of applications that will have the potential, at least, to be very positive for human health and human activity – even to help us solve overarching problems we couldn’t solve before. The challenge for us is, while we’re utilizing the benefits of AI to advance the best interests of human beings, what are we doing to constrain the truly evil and destabilizing manifestations of its applications?

Several colleagues of yours in the Senate, including Sens. Klobuchar and Coons, have introduced a bill to ban the use of false, AI-generated content that seeks to influence federal elections. What do you make of the potential for AI to influence U.S. elections, given the amount of misinformation we've seen in recent years?

It will have an impact on U.S. elections. It will have an impact on elections all over the world. The question is: How significant will the impact be? But we’re already seeing it; you’re already seeing content generated in scenarios where candidates or public officials are presented as saying something they’ve never said before. It’s already happened. So the question is, just from a purely elections point of view, how do we constrain that?

The outcomes of that over time – in one particular election, in one community or even a statewide election – will have an impact. The question is, over multiple election cycles, could local, state and even national elections be impacted by it? That’s when you see how corrosive it can be. I think that the efforts of colleagues of mine on both sides of the aisle are pointed in the same direction, which is: How do we better manage this while the technology – so far, at least – has run far ahead of any kind of legislative approach?

Whether it’s AI, data privacy or regulating social media, it seems like there are a lot of efforts at both the state and federal levels in this regard. Is this something that the federal government should ultimately be regulating?

I think the federal government has to regulate it. I think we forgo that at our peril as a nation. When I say peril I mean national security and so much else. It has to be a national effort because the nature of the technology isn't governed by state boundaries or state borders. 

But at the same time… states are the laboratories of democracy. In other words, you can have an approach in a state that you might want to replicate nationally – or it might deal with a problem that the federal government doesn’t get to. It can also be simply a question of timing. I’d love to be able to tell you that the United States Senate and House are going to have significant AI legislation enacted into law by the end of this calendar year, but I think if I told you that, it’d be misleading.

I’m hoping we can get it done in the early part of 2024, or as soon as possible. In the interim, if you’re a governor or a state legislature and you think you can begin to tackle this problem on a statewide basis, I’d say that can only be positive. You can’t always wait for the ultimate and comprehensive national solution. Sometimes it has to be a little bit patchwork. Some issues obviously lend themselves to a national approach, but I’d encourage states to take it on.