In This Edition:
Key takeaways from news leveraging insider sources:
OpenAI-related News: As part of OpenAI’s for-profit transition and its ever-larger fundraising needs, Microsoft and OpenAI are clashing over equity, IP rights, and the exclusivity of IP and cloud usage. Their definition of AGI (a model that can generate more than USD 100 billion in profits) has also been leaked - achieving this would, if the clause remains, end the partnership between OpenAI and Microsoft. This comes as OpenAI’s GPT-5 project (Orion) faces delays and higher costs, Microsoft plans to integrate non-OpenAI models into its Copilot products, and Alec Radford, a key OpenAI figure, has left to pursue independent research.
Global AI Development: ByteDance plans a $7 billion investment in Nvidia chips for its AI initiatives outside China. Contractors evaluating Google’s Gemini are being asked to rate model responses outside their expertise and are using Anthropic’s Claude in their assessments. Israel’s use of AI in military operations raises serious questions about accuracy and wartime ethics. Meanwhile, the EU’s draft GPAI Code of Practice is moving towards stronger whistleblower protections.
OAISIS Volunteer Call: OAISIS seeks AI-savvy social media volunteers to amplify news and discussions that involve AI professionals addressing critical public concerns.
Insider Currents
Carefully curated links to news from the past two weeks, spotlighting voices and information emerging from within the frontier of AI.
Suchir Balaji: GoFundMe and Details on Police Report
The mother of the tragically deceased OpenAI whistleblower has taken to X.com to allege an insufficient police investigation and evidence conflicting with the stated cause of Suchir Balaji’s death. She has also set up a GoFundMe to finance the investigation. Business Insider had previously spoken to her and summarized police statements, including that CCTV captured no one besides the deceased entering or leaving Suchir Balaji’s flat.
→ GoFundMe Campaign to Support Investigation
→ Business Insider Piece on Police Statements and Interview with Mother
OpenAI & Microsoft Negotiation Dance…
In its reasoning for the for-profit shift, OpenAI itself makes no secret of needing to raise far more money (“Hundreds of billions of dollars”), with Microsoft being a likely major source of that funding.
The Information reports the following points of contention in negotiations, sourced from a person close to Altman:
1. “Microsoft’s equity stake in the for-profit entity;
2. whether Microsoft will continue to be OpenAI’s exclusive cloud provider;
3. how long Microsoft will maintain rights to use OpenAI’s intellectual property in its products as it pleases;
4. and whether Microsoft will continue to take 20% of OpenAI’s revenue”.
…as AGI Definition is ‘Leaked’…
We already touched on the “rights to use OpenAI’s intellectual property” in last week’s article about the removal of the AGI clause from the partners’ agreement - the clause that bars Microsoft from using OpenAI’s post-AGI models and allows OpenAI to end the partnership agreement once AGI is achieved. We now know how that point in time is defined: AGI is achieved “when OpenAI [has the] “capability” to generate the maximum total profits to which its earliest investors, including Microsoft, are entitled, according to documents OpenAI distributed to investors. Those profits total about $100 billion”. That might still take a *very* long time - removing the clause, of course, would be another “goodie” for Microsoft.
…and Microsoft Turns to Other Model Providers
In the meantime, according to insider sources, Microsoft is planning to allow more model providers into its Copilot products, with OpenAI supplying only the frontier models. We believe that’s called building a “BATNA” (a best alternative to a negotiated agreement).
Also, how does all of the above square with DeepSeek-V3 reportedly being trained for roughly USD 6 million?
→ Microsoft and OpenAI Wrangle Over Terms of Their Blockbuster Partnership
→ Microsoft Works to Add Non-OpenAI Models into 365 Copilot Products
→ Critical Words on OpenAI’s Funding Needs by John Gruber
→ Leaked Google Memo “We Have No Moat”
ByteDance Plans to Spend $7 Billion on Nvidia Chips Next Year
ByteDance, the Chinese parent company of TikTok, plans to invest up to $7 billion in 2025 to acquire Nvidia chips outside of China, according to a source involved in the initiative. To work around export bans, the company is currently in talks with data center operators about securing access to Nvidia’s Blackwell chips. ByteDance, which runs China’s leading consumer AI chatbot, has stated that its spending on AI will exceed that of other Chinese tech giants, as reported by The Information.
Next Great Leap in AI is Behind Schedule and Crazy Expensive
Something of an open secret: OpenAI’s GPT-5, code-named Orion, faces delays and rising costs, as reported by The Wall Street Journal based on numerous insider accounts. Despite significant investment, its training runs have not met expectations. According to sources close to the project, the team conducted at least two major runs that processed vast amounts of data; each time, new challenges arose and the model did not deliver the desired results.
→ Read The Wall Street Journal Article Here
Google Contractor Concerns About Evaluating AI Responses Beyond Their Expertise; Contractors Use Anthropic’s Claude to Improve Gemini
According to internal communications, Google contractors rating model outputs have expressed concerns about a new policy that no longer allows them to skip AI responses they feel fall outside their areas of expertise. This could heighten the risk of Gemini generating incorrect information, particularly on sensitive topics such as healthcare. Google responded that contractor input does not “directly impact algorithms” but is one of many data points.
Internal communications obtained by TechCrunch also reveal that contractors assessing Google’s Gemini compare its outputs against Anthropic’s Claude. They noted that Claude applies stricter safety measures, more often declining to respond to unsafe prompts.
Read TechCrunch Reports:
→ Google is Using Anthropic’s Claude to Improve its Gemini AI
→ Google’s Gemini is Forcing Contractors to Rate AI Responses Outside Their Expertise
Israel’s Use of AI for War Raises Concerns Amongst IDF’s Top Commanders
The Washington Post investigates how the elite Israeli intelligence unit 8200 has been using AI-targeting systems in the Gaza war. Numerous sources, including current and former military officers, question the accuracy of AI-derived intelligence and raise concerns about how AI advocacy within the military has triggered a cultural shift — from one that prized individual reasoning to one that prioritizes technological prowess.
Israel’s use of AI targeting systems was first reported by +972 Magazine in April 2024, citing multiple sources from within Israel’s intelligence unit.
→ Read The Washington Post Article
Another Senior Researcher is Leaving OpenAI
Alec Radford, one of OpenAI’s most senior employees, is leaving the company to pursue research independently.
→ Read The Information’s report on OpenAI’s latest departure here
Policy & Legal Updates
Updates on regulations with a focus on safeguarding individuals who voice concerns.
Whistleblower Protections in EU GPAI Code of Practice - Second Draft Published:
The second draft of the EU Code of Practice has been published. Section 19, Whistleblower Protections, has been detailed further, including guarantees of protection against retaliation. The working groups will provide further feedback in the week of 13 January. We are happy to share that we are advising experts across multiple working groups on the next iteration of feedback for Section 19.
→ Second Iteration on EU GPAI CoP
Announcements & Call to Action
Updates on publications, community initiatives, and “call for topics” that seek contributions from experts addressing concerns inside Frontier AI.
Join Us as a Social Media Volunteer:
Run our socials with us! Share relevant news around our niche, supporting concerned insiders at the frontier of AI. Ideal volunteers will have a background in AI or AI journalism, a strong grasp of AI concepts, and a keen interest in discussions that involve AI professionals addressing critical public concerns. Experience in growing social media accounts, especially within the AI or tech sectors, is preferred. A key aspect of this role is the ability to create engaging, on-brand content that connects with our audience. You’ll manage platforms like X, Bluesky, and Threads (LinkedIn coming soon). Contact us: collaborate@oais.is
Other Substacks that featured us:
→ Zvi: Don’t Worry About the Vase
Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.
Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively deepen our understanding of the challenges and risks faced by those within AI labs. Together, we can continue to amplify and safeguard the voices of those who courageously address these challenges from inside the labs.
If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.
Thanks to Indah Harahap and Zi-Ting Law for their contributions to this newsletter!
Until next time,
The OAISIS Team