In a new special episode of the Washington AI Network Podcast, host Tammy Haddad moderates a conversation with two of the nation’s leading figures in election safety and security about how they are safeguarding against the threats posed by artificial intelligence (AI) in the 2024 election. The episode features:
- Jen Easterly, Director of the Cybersecurity and Infrastructure Security Agency (CISA)
- Georgia Secretary of State Brad Raffensperger
- Kyle Kondik, managing editor of Larry Sabato’s Crystal Ball
The conversation was recorded in front of a live audience on Friday, April 5 at the 25th anniversary celebration of the University of Virginia Center for Politics and Sabato’s Crystal Ball.
This special episode is a collaboration of the Washington AI Network podcast and the Center’s Politics Is Everything podcast, co-hosted by Kondik and Carah Ong Whaley.
Highlights of the conversation are below. Transcript here.
Jen Easterly on ballot hacking and multi-factor authentication:
“Generative AI is something that will make spear phishing emails more tailored to an individual…there’s not some exotic new steps that are taken. It’s all of the technological, the physical, the procedural controls that state and local election officials have already put in place, and that’s why I am not as freaked out as everybody seems to be about generative AI.”
Jen Easterly on the importance of dedicated election workers and of AI companies protecting against threats:
“At the end of the day, these [election workers] are not getting a lot of money, and are doing it because it’s the right thing to do because they take an oath. And when you go to the polls, take the time to thank your folks who are working there, because they really are the front lines of defending democracy. I think that’s incredibly important to recognize.
And then just back to the AI piece, I think it’s also important to talk about the generative AI companies…we really need to rely on the [generative AI companies] to ensure that they’re authenticating content that’s being delivered with our capabilities and labeled in a way that you can tell it was generated by AI and so you can’t have this fake content being manipulated without voters knowing. I think that’s really important…It’s left us in a world where the internet is full of malware, software is full of vulnerabilities, and social media is full of disinformation.
And that’s why we’re very focused at CISA on working with generative AI companies to ensure that they’re building capabilities that put security at the very top of priorities, because these capabilities are more powerful. More unpredictable, moving faster than any technology we’ve ever had, and frankly, we need to ensure that what is being created is safe and secure for use, not just for elections, but for all of our critical infrastructure that we rely on.”
Jen Easterly on the benefits of AI in combating cyber security threats:
“From a cyber security perspective, we think that there will be benefits in using AI to detect cyber threats faster, so that will be a good thing. But I continue to focus more on the threats and the risks because, quite frankly, these are the most powerful technologies of our century, maybe of all time. And as opposed to the most powerful technologies and weapons of the last century – nuclear weapons that were built and safeguarded by governments – these are all built by the private sector largely who are incentivized to create profit for their shareholders. So the incentives are different in terms of how quickly these things are moving and how secure they are.
And as we know, there’s never been technology regulation on anything. Again, that’s why – malware, software, the internet. And there are about 34 states who have 50 separate bills around AI and putting guardrails around them…I think we all really need to be focused on what we can do from a security perspective, even as AI, of course, will do amazing things to cure diseases and to provide education and tutors and all of this. But at the end of the day, our job is to really ensure that we are securing these very powerful capabilities from misuse.”
Jen Easterly on AI regulation:
“We need to ensure that we have the right guardrails to protect these capabilities from being abused by nefarious actors. But that doesn’t mean that we can, that it’ll prevent innovation. There is such a thing as responsible innovation. And there is no country more innovative in the world than the United States of America. And nobody should bet against America when we’re talking about innovation. But at the end of the day, we cannot, as we have for technology for 40 years, just allow these capabilities to be built without any security.”
Jen Easterly on AI ‘hallucinations’ and nefarious actors:
“So you bring up a really great point: if you have garbage in, you will likely have garbage out. And so these capabilities do something called ‘hallucinate,’ which they can – you can ask them a question and they can give you information. That sounds very accurate, and it can be completely wrong. I like to call them frequently wrong, never in doubt. And there’s a real worry about these hallucinations when you’re asking questions…We really need to focus on how we can put rules in place to prevent nefarious actors from using this, particularly in deceptive ways.”
Kyle Kondik on the potential for AI to become ‘partisanized’:
“Sometimes what happens when a new issue emerges, and this is true throughout history, is that it takes some time for it to get partisanized. And I wonder if maybe we’ll get to a point where AI does get more partisanized, which probably would be a bad thing, but maybe now is the time to strike because it is a new issue and maybe the different sides haven’t quite figured out where their sides’ position on it may be.”
Brad Raffensperger on the importance of ballot chain of custody and accountability:
“Then we check with the ballot marking device that there were actually 1,000 ballots that were printed at this location. Then we actually look at the ballot box. You’ve got a thousand pieces of paper, whatever that number is, that all those numbers are lining up. And then we see what the county says they received. Oh, they got a thousand ballots. Check it off. Then we do a 100 percent hand recount if we have to. But typically we’ll run those through the scanners again, and we’re using a 95 percent risk limit, which sounds like there’s a 5 percent plus or minus error. No, anyone that has done statistics, it’s actually very – it’s less than 1 percent difference delta based on the sample size you’ve actually picked. And so I say all that is you’re cued to actually do a 100 percent hand recount.”
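The “95 percent risk limit” Raffensperger describes corresponds to a risk-limiting audit: the 5 percent is not a plus-or-minus error on the count but the maximum chance the audit confirms a wrong outcome, and the number of ballots examined depends on the margin, as he notes. A minimal sketch of one common approach (a BRAVO-style sequential test, shown here purely as an illustration with hypothetical numbers, not Georgia’s exact procedure):

```python
import random

def bravo_sample_size(reported_winner_share, ballots, risk_limit=0.05):
    """Sequentially sample ballots; stop once the evidence that the
    reported winner truly won exceeds 1/risk_limit (BRAVO-style test).
    Returns the number of ballots examined, or None if the sample is
    exhausted and the audit escalates to a full hand count."""
    threshold = 1.0 / risk_limit          # e.g. 20 for a 5% risk limit
    s = reported_winner_share             # reported share, e.g. 0.60
    t = 1.0                               # likelihood-ratio test statistic
    for n, ballot_for_winner in enumerate(ballots, start=1):
        # Each sampled winner ballot multiplies the evidence by s/0.5,
        # each loser ballot by (1-s)/0.5 (null: true share <= 0.5).
        t *= (s / 0.5) if ballot_for_winner else ((1 - s) / 0.5)
        if t >= threshold:
            return n                      # outcome confirmed after n ballots
    return None                           # escalate to a 100% hand recount

random.seed(1)
# Hypothetical contest: 10,000 ballots, winner reported at 60%.
pool = [random.random() < 0.60 for _ in range(10_000)]
n = bravo_sample_size(0.60, pool)
```

With a comfortable margin the audit typically stops after only a few hundred ballots; a near-tie drives the required sample toward the full count, which is the “cued to actually do a 100 percent hand recount” case in the quote.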
Brad Raffensperger on encouraging citizens to become poll workers:
“We actually encourage people that are activists on both sides. You don’t trust the process. Okay, we’re going to give you poll worker training. We put that into state law. So you have poll worker training, but you’re just an observer. Actually do the really hard work, show up for that 16-hour day and go through there and actually become a poll manager. But you’re going to see all the checks and fail safes that we have in place…This is what we’re doing at all 2,500 precincts in Georgia.”
Jen Easterly on evaluating disinformation and misinformation online:
“Everybody should be a critical thinker in terms of what they see. And if it doesn’t sound right, don’t retweet it, don’t amplify it, ask questions around it. And again, to Brad’s point, if you have any questions about the security or integrity of elections, be a part of that process and go talk to those folks who are the authoritative sources and the subject matter experts, so you can help us protect our democracy.”