WASHINGTON (Nov. 27, 2025) — A recent data breach affecting ChatGPT and OpenAI API users has sent shockwaves through the AI community, prompting new calls for stronger transparency and vendor selection standards. OpenAI reported that the incident occurred within Mixpanel, a web analytics provider used on portions of its API interface. The exported dataset included names associated with API accounts, email addresses, approximate locations, browsers and operating systems, referral websites, and user identifiers.
The American Council for Ethical AI (ACEAI) and the American AI Association (AAIA) emphasize the urgent need for more rigorous AI vendor selection, urging organizations to thoroughly evaluate their vendors’ security practices, data handling policies and accountability measures. ACEAI encourages the public to continue to report AI-related incidents and concerns through its signature AI Whistleblower Program.
Generative AI is producing vast volumes of low-quality, repetitive content, saturating online ecosystems.
Source: Nieman Journalism Lab
Platforms like Meta AI and Character.AI are being investigated for deceptively marketing themselves as mental health tools.
Source: Texas Attorney General
A tracker of U.S. copyright litigation involving AI companies including OpenAI, Meta, and Anthropic.
Source: Wired
Consumers allege Microsoft restricted access to AI compute in its OpenAI partnership, inflating costs.
Source: Reuters
A petition challenges the denial of copyright protection to AI-generated works under U.S. law.
Source: Reuters
Courts restrict the use of public AI tools and require training, citing risks in legal decision-making.
Source: Reuters
A judge preliminarily approved a settlement for alleged unauthorized use of books in AI training.
Source: AP News
A Hollywood studio accuses an AI company of infringing its IP by reproducing iconic characters.
Source: AP News
Ongoing updates on U.S. generative AI copyright lawsuits, such as studios versus AI image tools.
Source: McKool Smith
Stanford warns that major AI firms are using user conversations for training, raising privacy risks.
Source: Stanford HAI
Anthropic discloses misuse cases like extortion, fraudulent employment schemes, and AI-generated ransomware.
Source: Anthropic
OpenAI reveals actions disrupting over 40 networks misusing AI (scams, influence ops).
Source: OpenAI
The American Council for Ethical AI - All Rights Reserved
1629 K Street NW, Suite 300
Washington, District of Columbia 20006
Phone: 202-609-0425