With elections expected to take place in more than 50 countries in 2024, the risk of misinformation is top of mind.
OpenAI, the developer of the AI chatbot ChatGPT and the image generator DALL-E, has announced new measures to prevent abuse and misinformation ahead of this year's major elections.
In a January 15 blog post, the firm announced that it is collaborating with the National Association of Secretaries of State (NASS), the oldest non-partisan professional organization for public officials in the US, to prevent the use of ChatGPT for misinformation ahead of the US Presidential Election in November.
"Lessons from this work will inform our approach in other countries and regions," the firm added.
Fighting Deepfakes with Cryptographic Watermarking
To combat deepfakes, OpenAI also said it will implement the Coalition for Content Provenance and Authenticity's (C2PA) digital credentials for images generated by DALL-E 3, the latest version of its AI-powered image generator.
C2PA is a project of the Joint Development Foundation, a Washington-based non-profit that aims to tackle misinformation and manipulation in the digital age by implementing cryptographic content provenance standards. Its main initiatives are the Content Authenticity Initiative (CAI) and Project Origin.
Several major companies, including Adobe, X and The New York Times – which recently sued OpenAI and Microsoft for copyright infringement – are members of the coalition and actively support the development of the standard.
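The core idea behind cryptographic content credentials is that the generator binds a signed manifest (which tool produced the image, and a hash of the image bytes) to the content, so anyone can later check both that the image is unmodified and that the manifest is authentic. The sketch below is a deliberately simplified illustration of that pattern using Python's standard library; it does not reproduce the real C2PA manifest format, and the key, field names and `dall-e-3` label are illustrative assumptions only.

```python
import hashlib
import hmac
import json

# Toy illustration of cryptographic content provenance: a generator signs a
# manifest over the image bytes, and a verifier checks both hash and signature.
# NOT the real C2PA format; real systems use asymmetric keys and certificates.
SECRET_KEY = b"demo-signing-key"  # hypothetical key for this sketch

def attach_credentials(image_bytes: bytes, generator: str) -> dict:
    """Build a signed provenance manifest for the given image bytes."""
    manifest = {
        "generator": generator,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the image matches the manifest and the signature is valid."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    if unsigned.get("sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False  # image was modified after the manifest was issued
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"\x89PNG...fake image bytes"
creds = attach_credentials(image, "dall-e-3")
print(verify_credentials(image, creds))           # True: untouched image
print(verify_credentials(image + b"edit", creds)) # False: tampered image
```

Note that this scheme only proves an image is unchanged since signing; detecting stripped or never-attached credentials is exactly why OpenAI is also building a separate provenance classifier.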
Finally, OpenAI said it is experimenting with a provenance classifier, a new tool for detecting images generated by DALL-E.
"Our internal testing has shown promising early results, even where images have been subject to common types of modifications. We plan to soon make it available to our first group of testers – including journalists, platforms, and researchers – for feedback."
Google DeepMind has developed a similar tool, SynthID, for digitally watermarking AI-generated images and audio. Meta is also experimenting with a similar watermarking tool for its image generator, although Mark Zuckerberg's company has shared little information about it.
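Invisible watermarking works by perturbing pixel values too slightly for a viewer to notice while leaving a pattern a detector can recover. The toy sketch below shows the general principle via least-significant-bit embedding; SynthID's actual technique is a learned, far more robust scheme whose details are not public, so nothing here reflects its real implementation.

```python
# Toy least-significant-bit (LSB) watermark: write a known bit pattern into
# the lowest bit of successive pixel values, changing each pixel by at most 1.
# Purely illustrative; production watermarks survive crops, compression, etc.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary demo bit pattern

def embed(pixels: list[int], bits: list[int]) -> list[int]:
    """Write each watermark bit into the LSB of successive pixel values."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract(pixels: list[int], n: int) -> list[int]:
    """Read the watermark back from the first n pixels' LSBs."""
    return [p & 1 for p in pixels[:n]]

original = [200, 131, 54, 77, 12, 240, 99, 63]
marked = embed(original, WATERMARK)
print(extract(marked, len(WATERMARK)))  # recovers [1, 0, 1, 1, 0, 0, 1, 0]
```

The fragility of naive LSB schemes (a single re-encode destroys the pattern) is why DeepMind and Meta are investing in watermarks designed to survive common image transformations.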
A Move in the Right Direction
Speaking to Infosecurity, Alon Yamin, co-founder and CEO of AI-based text analysis platform Copyleaks, welcomed OpenAI's commitment against misinformation but warned it could be challenging to enforce.
"Going into this election year, considered one of the biggest in recent history, and not just in America but worldwide, there's a lot of concern about how AI could be misused for political campaigns, etc., and that concern is completely justified. So, to see OpenAI taking initial steps to curb potential AI abuse is encouraging. But as we have witnessed with social media over the years, these actions can be difficult to enforce due to the vast size of a user base," he said.
In the UK, where the next general election must be held between mid-2024 and January 2025, the Information Commissioner's Office (ICO) launched a consultation series on generative AI on January 15.
The first chapter is open until March 1.
