Inside AI #4: Amazon, Epoch AI, Meta Scrambling, Big Tech & the IDF, Suchir Balaji's Family Raises More Questions
Edition 4
In This Edition:
Key takeaways:
Recent News:
Amazon Whistleblower Allegations: Evading Antitrust.
Epoch’s FrontierMath Benchmark & OpenAI’s Involvement - Insider Voices.
Stargate Artificial Intelligence Project to Exclusively Serve OpenAI.
Meta Scrambles After Chinese AI Equals Its Own, Upending Silicon Valley.
Tech Companies’ Military Collaboration with Israel Post-October 7.
Before Google’s $2.7 Billion Deal With AI Startup, a Stark Warning on Safety.
Family Questions Circumstances Surrounding OpenAI Whistleblower’s Death.
Not sure what to wear to your next Bay Area House Party? Get our merchandise now, at cost to us or with a markup, shipped to your home. Click here to order and become a conversation starter around the need to support insiders at the frontier of AI!
Insider Currents
Carefully curated links from the past two weeks spotlighting voices and information emerging from within the frontier of AI.
Amazon Whistleblower Allegations: Evading Antitrust
A whistleblower has filed complaints with federal regulators alleging that Amazon structured its $380 million deal with robotics startup Covariant AI to deliberately evade antitrust scrutiny. While the August deal was publicly presented as a licensing agreement accompanied by a few key hires, the whistleblower claims it was effectively an acquisition in disguise.
Key points:
Amazon hired three founders and 25% of Covariant’s staff while claiming to only purchase a non-exclusive license for their robot AI software.
Deal documents show restrictions on Covariant’s ability to sell licenses to others, allegedly turning it into a “zombie” company.
The company's CEO was recorded saying Amazon structured the deal this way because a direct acquisition “would have had a transaction blocked.”
The case highlights a broader trend of tech giants using creative deal structures to avoid antitrust review when acquiring AI capabilities.
This follows growing scrutiny of similar arrangements, including Microsoft’s $650 million licensing deal with Inflection AI and Amazon's earlier deal with Adept, as regulators worry about tech giants stifling AI innovation through strategic acquisitions.
→ Read the Washington Post Article
Epoch’s FrontierMath Benchmark & OpenAI’s Involvement - Insider Voices
OpenAI’s latest AI model, o3, achieved an impressive 25.2% on FrontierMath, a new mathematical benchmark by Epoch AI—far surpassing previous scores, which were below 2%. However, controversy arose when Epoch AI revealed that OpenAI had secretly funded the benchmark’s development and had access to both the problems and their solutions, despite the test being presented as an independent evaluation tool.
OpenAI insists it did not use this data to train o3, but critics argue that merely having access to the problems could have helped optimize performance. Epoch AI has acknowledged it should have been more transparent about OpenAI’s involvement, especially to the mathematicians who developed the problems. To address concerns, Epoch says it is now creating a separate ‘holdout’ set of questions whose answers have not been, and will not be, shared with OpenAI. The situation highlights the tension between the need for independent AI evaluation metrics and the influence of major AI companies in creating and using them.
→ LessWrong Conversation including comments from Epoch Founder
→ Epoch’s Blog Post on the Topic
→ Fortune Article
Stargate Artificial Intelligence Project to Exclusively Serve OpenAI
Stargate, the high-profile new AI infrastructure project backed by OpenAI and Tokyo-based SoftBank, aims to invest up to $500 billion over four years. The project was unveiled at the White House, where Donald Trump called it “a resounding declaration of confidence in America.” Despite the grand announcement, sources familiar with the matter indicate the initiative is still in its early stages and “has not yet secured the funding it requires, will receive no government financing and will only serve OpenAI once completed.” One of the people added, “The intent is not to become a data centre provider for the world, it’s for OpenAI.” Microsoft, a key OpenAI investor, will offer technical support but will not contribute capital to the project, as reported by the Financial Times.
Stargate will operate through two main divisions: OpenAI will lead an operational unit responsible for building and managing data centers, while SoftBank will handle capital-raising efforts, a person familiar with the project said. This initiative reflects OpenAI CEO Sam Altman’s strategy to expand the company’s access to data and computing power, which he sees as a critical bottleneck. Construction of the first facility is already underway in Abilene, Texas, with Oracle committing approximately $7 billion for computing chips.
→ Read The Report by the Financial Times
Meta Scrambles After Chinese AI Equals Its Own, Upending Silicon Valley
Meta and other major AI companies are closely analyzing DeepSeek, which we assume all our readers are familiar with. DeepSeek claims to rival leading American AI models while operating at a fraction of the cost, an assertion that has drawn skepticism from various researchers, according to The Information.
Internally, Meta executives, including AI infrastructure director Mathew Oldham, have expressed concerns that the next version of Meta’s flagship AI, Llama, may not perform as well as DeepSeek, say two employees familiar with the discussions. The Information also added that “researchers at OpenAI, Meta and other top developers have been scrutinizing the DeepSeek model to see what they could learn from it, including how it manages to run more cheaply and efficiently than some American-made models.” In response, Meta has established four ‘war rooms’ to analyze DeepSeek and extract insights to enhance Llama, the employees said:
Two teams are focused on reducing training and operating costs.
One team is investigating DeepSeek’s training data sources.
One team is examining its architecture, particularly its mixture-of-experts approach, in which different specialized components of the model handle specific tasks.
While leading U.S. researchers privately acknowledge DeepSeek’s strong performance, some suspect High-Flyer, the hedge fund behind DeepSeek, may have taken shortcuts. One theory, endorsed by OpenAI, is that DeepSeek employed distillation, a technique in which a model is trained on the outputs of existing models, such as OpenAI’s GPT-4 or o1, or Meta’s Llama, to accelerate development.
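For readers unfamiliar with the term, the core idea of distillation can be sketched in a few lines: a “student” model is trained to mimic a “teacher” model by minimizing the divergence between their (often temperature-softened) output distributions. The sketch below is purely illustrative, not any lab’s actual pipeline; the function names and values are ours.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into a probability distribution.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened outputs and the student's.
    # During distillation, the student's weights are updated to minimize this,
    # i.e. to reproduce the teacher's behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher incurs zero loss; a mismatched one does not.
teacher = [2.0, 1.0, 0.1]
matched = distillation_loss(teacher, teacher)          # 0.0
mismatched = distillation_loss(teacher, [0.1, 1.0, 2.0])
print(matched, mismatched > 0)
```

In a real training run the student would be a large neural network and the loss would be backpropagated over vast numbers of teacher-generated outputs; the controversy is over whose outputs those were.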
→ Read the Full Article by The Information
Tech Companies’ Military Collaboration with Israel Post-October 7
Recent investigations have uncovered extensive collaboration between major tech companies and the Israeli military following the events of October 7.
According to documents obtained by The Washington Post, Google employees have provided Israel’s military with access to the company’s latest AI technology since the early weeks of the Israel-Gaza war. Internal records reveal that Google has directly supported Israel’s Defense Ministry and the Israel Defense Forces (IDF), despite publicly distancing itself from the country's national security operations. This follows employee protests against a cloud computing contract with the Israeli government, known as Project Nimbus.
“Thanks to the Nimbus public cloud, phenomenal things are happening during the fighting, these things play a significant part in the victory — I will not elaborate,” said Gaby Portnoy, Director General of the Israeli government’s National Cyber Directorate, in a statement reported by The Washington Post.
A separate report by The Verge revealed that Google competed with Amazon to secure the Israeli military’s AI services contract. Internal documents obtained by The Washington Post show that an employee warned Google needed to respond quickly to Israel’s military requests or risk losing the contract to Amazon.
Additionally, +972 Magazine reported that leaked commercial records from Israel’s Defense Ministry and files from Microsoft’s Israeli subsidiary show Microsoft has significantly expanded its role in Israel’s military infrastructure. Since the escalation in Gaza, sales of Microsoft’s cloud and AI services to the Israeli army have surged.
Prior to these revelations, The Verge reported in April that Google fired 28 employees for participating in sit-in protests at two of its offices. An internal memo warned, “If you’re one of the few who are tempted to think we’re going to overlook conduct that violates our policies, think again.”
→ Read: Google Rushed to Sell AI Tools to Israel’s Military After Hamas Attack
→ Read: Leaked Documents Expose Deep Ties Between Israeli Army and Microsoft
→ Read: Google Reportedly Worked Directly with Israel’s Military on AI Tools
→ Read: Google Fires 28 Employees After Sit-in Protest Over Israel Cloud Contract
Before Google’s $2.7 Billion Deal With AI Startup, a Stark Warning on Safety
This topic was also covered in our first edition; The Information has since published a deep dive into the story. Character.AI’s chatbots and star co-founders helped secure a lucrative licensing deal with Google last August. Inside the company, however, employees had begun to voice concerns about how its products were affecting children, and both Google and Apple raised alarms over the app’s safety, prompting stricter content filters.
Despite these concerns, Google went ahead with the $2.7 billion deal to license Character’s technology and bring back co-founders Noam Shazeer and Daniel De Freitas to work on AI at Google. According to four former employees, staff at Character also worried about the app’s impact on users’ mental health, particularly among teenagers. Employees raised these issues with executives before the Google deal, with three former staff members confirming the discussions took place during companywide meetings last year. One employee also said staff had asked about measures to prevent children from bypassing age restrictions by entering false birthdates, as reported by The Information.
→ Read The Deep Dive Report by The Information Here
Family Questions Circumstances Surrounding OpenAI Whistleblower’s Death
Following our previous coverage of the OpenAI whistleblower case, newly released photos show the scene where Suchir Balaji was found dead. His family disputes the conclusion that he took his own life and has presented additional evidence. Content Warning: Some images may be graphic.
→ Watch an Interview with Suchir’s mother, Poornima Ramarao, here.
→ See Collin Rugg’s summary post on X
Announcements & Call to Action
Updates on publications, community initiatives, and “call for topics” that seek contributions from experts addressing concerns inside Frontier AI.
OAISIS Merchandise
Not sure what to wear to your next Bay Area House Party? Get our merchandise, at cost to us or with a markup (your choice of collection). Start a conversation around the need to support insiders at the frontier of AI.
OAISIS in the EU AI Roundtable – Monday, February 2
On Monday, February 2, OAISIS will present its feedback on the Whistleblower sections of the second draft of the EU GPAI Code of Practice to the Chairs of Working Group 4 at the EU AI Roundtable.
Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.
Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively deepen our understanding of the challenges and risks faced by those within AI labs. Together, we can continue to amplify and safeguard the voices of insiders who courageously raise these concerns.
If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.
Until next time,
The OAISIS Team