The City & State Q&A: Attorney General Dave Sunday
The state’s chief law enforcement officer talks with City & State about how artificial intelligence tools are affecting children, teens and vulnerable populations.

Attorney General Dave Sunday has made protecting Pennsylvanians from AI and social media a key part of his first term. Commonwealth Media Services
In a rapidly evolving world where social media and AI chatbots have quickly become part of everyday life, Pennsylvania Attorney General Dave Sunday is at the forefront of efforts to protect Pennsylvanians from risks posed by new technologies and to crack down on bad actors using technology for ill.
City & State spoke with Sunday about how the AG’s office is approaching new challenges posed by artificial intelligence and social media, and what Pennsylvanians can do to better protect themselves in this brave new world.
This conversation has been edited and condensed for length and clarity.
What are the greatest challenges that AI poses to Pennsylvanians?
AI is really fueling innovation and technological advancements, like so many other tools created over the years and decades. There’s a lot of great stuff that comes with technology and innovation – like Artemis II, which we all just watched – but at the same time, any tool used inappropriately can cause tremendous harm. When we’re talking about something that’s so powerful like AI, if it’s used incorrectly, that harm can be powerful and overwhelming as well. That, generally, is our biggest concern, and the harm it can cause is wide-ranging.
What are some of those harms that you’re seeing?
In the AG’s office, we look to protect children, seniors and vulnerable citizens. The starting point here is looking at how we can work with tech companies and AI producers to get them to put the guardrails up before the crash happens, and that’s a really important part of what we do in the AG’s office; Pennsylvania has been a leader in this space. We co-led engagement letters with all of the big AI companies, and we did that in a bipartisan way. I’m co-chair of the National Association of Attorneys General’s Consumer Protection Committee. We co-led a bipartisan effort by 42 states around the country to compel these companies to be more thoughtful and preventative about the harm that's happening, so that we don't have to be in these situations to begin with, right? We have had multi-state letters, we've had multi-state actions, and we've taken action as a state ourselves leading up to this point. There was talk of federal preemption of state laws and of really minimizing states’ ability to regulate AI. We spoke up very, very loudly about that and made it clear that it wasn't acceptable in any way. At the same time, there has been a national executive order regarding AI legislation in states, and we've committed to defending state laws if the federal government tries to eliminate them.
When we talk about AI, obviously, one of the main things we have seen right away is the increase of child sexual abuse material (CSAM) that is AI-generated. So in Pennsylvania, we have a criminal statute that makes creating that material a felony. Most of the people who do that, what they do is they take pictures of children and they use AI morphing technology to change those pictures and essentially create pornography. As I said, it's a felony here in Pennsylvania, and we have utilized that statute repeatedly. We obviously have a huge child predator section in the AG’s office, and we have charged people for doing that. We've done it over and over again, and we'll continue to do so.
How has AI supercharged the scam environment?
If we go back in time, scammers used generic scams for everyone. So it was really hard for them to hyper-target individuals based on their profession, where they live, what they've done, and their family members. It was much easier for people to identify scams, and for law enforcement to communicate to the community what the scams are, so people could avoid them in the first place.
What AI has done … the technology is now being used by scammers to mimic people's voices, to hyper-target individuals based on their job, what they retired from, or their online interests. AI technology can essentially conduct an assessment in milliseconds and specifically target a type of scam to that individual, which makes it harder for that victim to tell if it's a scam or not. That includes mimicking people's voices. Any one of us whose voice is anywhere publicly, AI can take that, and AI can just have us say whatever it wants. So, because of these voice-mimicking scams, the deepfakes we're seeing and the hypertargeting, it just makes it so hard. It also increases the number of scams because, like everything else AI does, it increases our ability to create content. That's not just good content; that's also content that might be used to try to scam someone. It has really created an environment where it's much harder for victims to identify scams.
What guidance does your office provide to help people suss out what's legit and what's a scam?
Awareness is a huge part of this. The vast majority of society doesn't understand that this technology is being used to steal their money, and so what we are doing here, in basically overdrive, is to try to get out to the community as much information as humanly possible, so that any vulnerable person – seniors, kids – knows that they have to view communication as being suspect sometimes. My parents, for example – I love my parents with all my heart – they're in their early 80s, and I have to literally tell them every day, “Don't answer the phone if you don't know who it is. Even if you might know who it is, it's OK to let it go to voicemail.” That sounds like the most basic thing on the planet, and it happens to be not only basic but also extremely effective at avoiding scams.
Last year, our Office of Public Engagement delivered over 1,300 presentations to Pennsylvanians on these issues. You can't defend yourself if you don't even know what the attack is, right? It's a very real thing. Scammers are so nefarious because, as we age, our awareness of the world can change a little bit. When you're retired, you're at home and you're not keeping up with technology, you might have some health concerns that are impacting your awareness, and then you're already in a condition where you can be scammed. The sophistication of these scams makes them even more dangerous.
Are there any legislative efforts you would like to see address these issues?
No. 1, I think it's unbelievably important that the legislature understands the issue. I want to make that point to start with: The technology is evolving by the hour. This is something we do in the AG’s office literally every day. One of our roles is to provide facts and data to our legislators so they're in the best position to make these decisions legislatively.
There's a potential AI in healthcare bill that Rep. Arvind Venkat is proposing, so our office testified on the impact of AI in healthcare. I know Sen. Tracey Pennycuick has a bill, SB 1090, that involves chatbots. There's another bill, SB 1050, which we support, that mandates reporting for AI-generated CSAM, which is really important. That's something that we've seen in schools a lot … a teenager utilizes a nudify app or something like that to make one of their classmates naked, and they put that out to the world. It's there forever, and so the harm caused by this behavior is vastly greater than what it was before this technology existed. That's why the mandated reporting bill is so important for AI-generated CSAM.
The legislature needs to hold hearings, they need to get as much information as humanly possible, and so I hesitate at this point to say exactly what type of law would be best. The reason for that is because AI – it evolves so quickly. We don't want to pass a law just to say we passed a law. We want to pass a law that matters and gives us the tools we need to protect vulnerable citizens in Pennsylvania.
We’re seeing a lot of scary, startling stuff in terms of chatbot interactions and mental health. What are you seeing?
We have created an environment where kids really can't escape this technology. It's used in most schools to communicate with your peers. It's a constant in their lives and, basically, we've brought the children to this place. We've given them this technology, and they’re kind of stuck with it in a lot of ways. So children are, more and more, turning to chatbots for information about life: how to do schoolwork, for advice on how to deal with circumstances in their young lives. We see kids developing very unhealthy relationships with chatbots because the chatbots are sycophantic by design and they tell you what you want to hear. That's very, very dangerous. Once those relationships are built, the lines between what's good and what's bad are often skewed. This is not hyperbole – we've seen chatbots essentially root kids on who are contemplating suicide. Chatbots have helped advise them on how to commit suicide. Chatbots have advised children on how to avoid their parents with major issues in their lives, on how to have relationships with older adults. Essentially, you have chatbots that are advising children on issues that no parent would ever want a human advising them on, let alone sycophantic AI technology.
It's not just children. This is all vulnerable people. You have adults with severe mental health issues who are turning to chatbots. You have chatbots that are presenting themselves as trained mental health professionals. Whether the companies originally intended it or not, it is the outcome. It's what it's become. That's why we're doing everything we can to talk to kids, get the word out, and talk to parents, empowering them with information. At the same time, we are using our consumer protection authority to work with companies to compel them to put guardrails in place before bad things happen.
What is your message to the tech companies developing these tools?
We cannot use our children as guinea pigs in a race to be trillion-dollar companies. That is a message not just coming from me as attorney general, it's coming from society. This is a major issue, and I want to make it clear: Innovation and technology are not mutually exclusive with protecting children. We can do both, and we must do both. It's that simple.
We want the tech companies to focus on quality control and conduct the robust testing they should for these issues. We also want tech companies to respond aggressively to correct these issues when they arise. At the end of the day, our Bureau of Consumer Protection’s goal is to change behavior for community safety. Our goal isn't to allow bad things to happen and then try to file a lawsuit for monetary damages. The goal is to keep the bad thing from happening in the first place.
What I will also say is, obviously, it's not good for a company's bottom line to be on the wrong end of these issues. Our experience has been that many companies want to correct their product, so we encourage it and work with them to do so. But for those who refuse to do it for whatever reason, we'll obviously use whatever resources we have available to help them make that correction.