In This Edition:
Key takeaways from news leveraging insider sources:
Updates on OpenAI, including released conversations with Elon Musk, worry over Anthropic's growth, the potential removal of the AGI clause from the Microsoft licensing deal, and another safety researcher leaving
Apple is struggling with Baidu's AI models in China, Character.AI faces a lawsuit over child safety, Google is asking the FTC to investigate Microsoft's exclusive cloud deal with OpenAI, and the US is balancing chip exports between China and the UAE
FLI publishes its AI Safety Practices paper, comparing labs' safety practices: whistleblowing channels still seem underdeveloped. New whistleblower protections are, however, expected in the upcoming iteration of the EU General-Purpose AI Code of Practice.
A positive: Anthropic continues to collaborate and publish with external organizations.
Insider Currents
Carefully curated links to the latest news from the past two weeks, spotlighting voices and information emerging from within the frontier of AI.
Suchir Balaji, OpenAI Whistleblower, Has Passed Away
We open this newsletter with the news of Suchir Balaji's passing, a matter we felt deserved a place of prominence. Official statements have not indicated evidence of "foul play," and we will refrain from speculation.
Instead, we encourage you to visit his personal blog to reflect on his work and contributions. For further context, detailed reports from the BBC and the New York Times, including their coverage from October, provide additional information about his case.
Anthropic’s Growth Raises OpenAI’s Concern
The Information published a piece, citing multiple OpenAI insider sources, indicating worry among OpenAI's leadership over Anthropic's rapid expansion (10x revenue growth in three months), especially in the coding domain. Anthropic's increasing willingness to release experimental products, especially agentic ones, has also raised eyebrows.
OpenAI Weighs AGI Clause: Investment vs. Oversight
Reuters talked to sources indicating that OpenAI is considering removing a clause that limits Microsoft’s access to its models once artificial general intelligence (AGI) is achieved. Originally intended to ensure AGI oversight by its non-profit board, the clause may be revised to attract sustained investment from Microsoft.
→ Read the Full Article
Elon Musk's For-Profit Vision for OpenAI
OpenAI released conversations documenting its power struggles with Elon Musk. Likely published in response to Musk's continued legal action over OpenAI's for-profit pivot, the release seems intended to make clear that Musk's issue lies not with the for-profit switch itself but with his not being in control. Some of these emails had already been published, and the transparency here seems motivated less by a general desire for openness than by the litigation.
→ Read the OpenAI Post
US Strategy in AI: Balancing China and UAE Dynamics
The US continues to leverage its technological influence to limit China’s access to advanced AI chips while strengthening ties with allies such as the UAE. By designating tech giants like Google and Microsoft as gatekeepers—according to two sources—and approving selective exports, the US aims to control global AI distribution without alienating strategic partners. Read the curated articles below:
→ Major Cloud Providers Could Get Key Role in AI Chip Access Outside the US
→ Advanced AI Chips Cleared for Export to UAE Under Microsoft Deal
Apple is struggling with Baidu’s AI Models in China
According to sources familiar with the matter, Apple's plan to integrate Baidu's AI models into its iPhones in China is facing challenges, both with the models' ability to interpret prompts and over Baidu's wish to save and analyze certain user data, which Apple's privacy policy forbids. The use of foreign models in China is not forbidden; regulatory approval of foreign models is, however, not "a priority."
→ Apple Hits Snags Adapting Baidu's AI Models for China Users
Google Urges FTC to End Microsoft’s Exclusive Cloud Deal with OpenAI
Google has asked the US Federal Trade Commission (FTC) to investigate Microsoft's exclusive agreement with OpenAI, as a source involved in the effort confirmed to The Information. This agreement prevents competitors from hosting OpenAI's models on their cloud platforms. Competitors like Amazon and Google believe the deal creates unnecessary barriers and costs for cloud customers who want access to OpenAI's technology.
→ Read the Full Article
Character.AI Lawsuit Sparks Alarm Over Chatbot Safety and AI Protections
Character.AI faces a second lawsuit accusing its chatbots of harming two minors, including one case where a chatbot allegedly encouraged violence against a child’s parents. Google and its parent, Alphabet, are also named as defendants. Character.AI declined to comment but emphasized its safety measures, including a model to limit sensitive content for teens.
Another Safety Researcher Has Left OpenAI
This news is slightly older than two weeks, but we felt it important to highlight the departure and point you to her personal blog.
→ Read Her Departure Substack
Evaluating "AI Safety Practices": Transparency, Internal Review, Whistleblower Protections
Not strictly insider news: FLI published an AI-Lab-Watch-style report. The broad findings are unsurprising, but the scorecard holds interesting details. The "Governance & Accountability" section (page 33 onwards) should be especially interesting for readers of this Substack. FLI's methodology finds large differences between labs (Anthropic leads, Meta trails). A snippet from the section on internal review and whistleblower protections is below.
→ The FLI Report
Policy & Legal Updates
Updates on regulations with a focus on safeguarding individuals who voice concerns.
Whistleblower Protections Will Be Detailed in the Next EU GPAI Code of Practice Iteration
The next iteration of the EU General-Purpose AI Code of Practice should be released by the end of this week. "Measure 19: Whistleblowing protections" was extremely broad in the previous iteration; we look forward to seeing more detail here. The next feedback round will follow this iteration.
→ Check Here for the Doc Release
→ Read Miles Brundage's and Dean Ball's Commentary on the First Iteration
Regulatory Recap and Outlook for 2025
A decent overview by law firm Burges Salmon, useful for taking a step back and getting ready for 2025:
→ Read the Full Blog Post
Announcements & Calls to Action
Updates on publications, community initiatives, and "calls for topics" that seek contributions from experts addressing concerns inside frontier AI.
Third Opinion: We’re live!
Share us with people who could benefit from our offering. Find our X thread here, Bluesky, Threads. And, of course, our website.
Third Opinion in the AI Safety Map
Find us at the base of the support mountain.
Other
Highlighting Transparency: Anthropic Publishes "Alignment Faking in Large Language Models"
A commendable example of ongoing cooperation between frontier labs and external researchers.
→ Read the Paper
→ Read Scott Alexander’s Commentary
Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.
Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively deepen our understanding of the challenges and risks faced by those within AI labs. Together, we can continue to amplify and safeguard the voices of those who courageously raise concerns from inside these labs.
If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.
Until next time,
The OAISIS Team