What We’re Thinking
Integrity Institute members and staff are leading voices in the integrity field and bring years of technical expertise to tackling these problems. These posts represent their individual thoughts, analysis, and ideas about contemporary issues in the integrity space.
Child Safety Online
The Integrity Institute outlines the best practices we advocate for child safety across all digital platforms.
Pixels and Protocols: A Journey from Gaming Nostalgia to Digital Responsibility
This is what draws me most to integrity work: its focus on protecting people not just after harm has occurred, but also by building systems and policies designed to prevent harm in the first place and address its root causes.
Integrity Talks Series: How Platforms Engage Governments
This is the first of what we hope will be many conversations with Integrity Institute members and friends, aimed at demystifying some of the integrity topics most often cited as confusing or frustrating. Our goal here is “real talk” about issues that are frequently misunderstood, dramatized, or simply not discussed in a way that is useful for someone who hasn’t worked on integrity issues day in and day out.
When AI Systems Fail: The Toll on the Vulnerable Amidst Global Crisis
Reactive measures to address biased AI features and the spread of misinformation on social media platforms are not enough, says Nadah Feteih, an Employee Fellow with the Institute for Rebooting Social Media at the Berkman Klein Center and a Tech Policy Fellow with the Goldman School of Public Policy at UC Berkeley.
How Generative AI Makes Content Moderation Both Harder and Easier
Content moderation was already an extremely difficult and thankless job. With generative AI potentially increasing the quantity, quality, and personalization of adversarial content, is it now borderline impossible for social media platforms to moderate content?