What We’re Thinking
Integrity Institute members and staff are leading voices in the integrity field and bring years of technical expertise to tackling these problems. These posts represent their individual thoughts, analysis, and ideas about contemporary issues in the integrity space.
Election Deepfakes: What To Do About Political Media That Doesn’t Mean What You Think It Means
Integrity Institute members Eric Davis, Diane Chang, Lucia Gamboa, Amari Cowan, Swapneel Mehta, Nichole Sessego, and David Evan Harris submitted comments to the FEC in October 2023, requesting that it address the anticipated onslaught of deepfakes in 2024 US campaign advertising.
Reflections from 2023 DEF CON
Integrity Institute visiting fellows Rebecca Thein, Theodora Skeadas, and Sarah Amos share their reflections from attending 2023 DEF CON alongside Institute members.
Technology Companies Must Make Platforms Safer for Women in Politics
Integrity Institute visiting fellow Theodora Skeadas co-authored this piece, which first appeared in Tech Policy Press.
How Much Has Social Media Affected Polarization?
Following the first published research from the 2020 Facebook and Instagram Election Study (FIES), Institute fellow Tom Cunningham argues that social media has probably not made a large contribution to US polarization, and shows how estimates from the FIES can be extrapolated to other effects of interest, specifically the aggregate impact of social media on the US over the last 20 years.
Comment to PCAST on Generative AI
In May 2023, the President’s Council of Advisors on Science and Technology (PCAST) launched a working group on generative artificial intelligence (AI) and invited public input. Integrity Institute visiting fellows Theodora Skeadas, David Evan Harris, and Arushi Saxena, together with Institute members Diane Chang and Sabhanaz Rashid Diya, organized and submitted comments to PCAST.
Comment on EU AI Act
In April 2023, select Institute members addressed three main topics in their comments on the draft EU AI Act: examples of real-life harms that have resulted from the use of AI systems; the categories of AI systems classified as “high risk”; and a draft methodology for auditing AI systems.
Unleashing the Potential of Generative AI in Integrity, Trust & Safety Work: Opportunities, Challenges, and Solutions
Select Institute members share their thoughts on the potential of generative AI in integrity, trust & safety work.
Why AI May Make Integrity Jobs Harder
Select Institute members share their thoughts on why AI may make integrity jobs harder.