Inside AI #6: AISI Existence Threatened, Google Stumbling, SEC AI Whistleblowing Programme
Edition 6
In This Edition:
Key Takeaways:
AI Safety Institute’s existence is threatened by mass layoffs, Google is stumbling over its own feet, OmniGPT breach, DeepSeek considering capital raise
OpenAI is (somewhat) transparent, publishes updated malicious use report
The SEC has extended the remit of its unit dedicated to cyber-crime, whose tips fall under the SEC whistleblowing programme, to cover select AI topics. Reach out to us if you would like guidance on what this might mean for you.
- A piece (Link) has been published on the importance of building out whistleblower protections in AI - give it a read.
Insider Currents
Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from within AI labs.
Before diving into insider reports, an interesting regulatory update: The SEC has announced the creation of the Cyber and Emerging Technologies Unit (CETU) to focus on combatting cyber-related misconduct and protect retail investors from bad actors in the emerging technologies space. Acting SEC Chairman Mark T. Uyeda said:
The unit will not only protect investors but will also facilitate capital formation and market efficiency by clearing the way for innovation to grow. It will root out those seeking to misuse innovation to harm investors and diminish confidence in new technologies.
Why are we covering this so prominently? You might be aware that the SEC offers a Whistleblowing Programme, awarding whistleblowers 10% to 30% of sanctions collected due to their tips. One of the seven areas of fraud and misconduct covered by the CETU is AI:
“Fraud committed using emerging technologies, such as artificial intelligence (AI) and machine learning.“
In this context, the SEC had previously charged QZ Asset Management for allegedly falsely claiming that it would use its proprietary AI-based technology to generate extraordinary weekly returns while promising “100% protection for client funds”.
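For a rough sense of the award range mentioned above, here is a minimal sketch (an illustration only: the $5 million sanction figure and the helper function are hypothetical, and actual awards depend on SEC determinations) of how the programme’s 10% to 30% range translates into dollar amounts:

```python
# Hypothetical illustration of the SEC whistleblower award range
# (10% to 30% of monetary sanctions collected). The sanction amount
# below is an assumption for illustration only.
AWARD_MIN, AWARD_MAX = 0.10, 0.30

def award_range(sanctions_collected: float) -> tuple[float, float]:
    """Return the (minimum, maximum) potential award for a given sanction amount."""
    return sanctions_collected * AWARD_MIN, sanctions_collected * AWARD_MAX

low, high = award_range(5_000_000)  # assumed $5M in collected sanctions
print(f"Potential award: ${low:,.0f} to ${high:,.0f}")  # $500,000 to $1,500,000
```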
If you believe you have knowledge of fraud against, or the misleading of, public or private investors relating to AI, reach out to us and we may be able to point you to the right advisor to help you evaluate your case.
Potential Mass Layoffs at NIST Threaten U.S. AI Safety Institute
Sources told Axios that all probationary employees at the National Institute of Standards and Technology (NIST) face imminent termination, with the agency preparing to cut 497 positions.
The cuts would significantly impact the U.S. AI Safety Institute (AISI), established in 2023 to develop testing, evaluations, and guidelines for "trustworthy" AI. According to anonymous sources cited by Bloomberg, some employees received verbal termination notices last week. These layoffs align with the Department of Government Efficiency (DOGE) panel’s broader push to reduce federal staffing.
Industry experts warn the cuts could weaken U.S. AI competitiveness despite the administration’s recent $500 billion investment in AI infrastructure. “It feels almost like a Trojan horse,” said Jason Corso, a professor at the University of Michigan. Eric Gastfriend of Americans for Responsible Innovation emphasized that probationary employees include much of the AI talent crucial for evaluating models like China’s DeepSeek.
→ Roundup by “The Hill”
Google's AI Ambitions Slowed by Internal Team Conflicts
According to four sources with knowledge of the situation, Google’s AI product, NotebookLM, faced strong internal resistance before launch. Workspace employees argued it conflicted with their plans for existing apps, with some even pushing to shut it down.
This tension highlights broader organizational struggles slowing Google’s AI efforts. Employees across divisions report friction between Google DeepMind (which prioritizes rapid AI model development) and Google Cloud (which focuses on developing reliable enterprise products). This caused clashes over the development of AI Studio, with employees concerned it wasn't improving fast enough to compete with OpenAI and Anthropic, according to three people involved.
In another example, CEO Sundar Pichai personally intervened to scale back on the development of an AI assistant called "Pixie" for Pixel phones to avoid competing with the company's Gemini assistant, according to a person with direct knowledge of the instructions. As a result, the product launched with fewer features than originally planned.
Despite Google's efforts to streamline AI development—merging Google Brain and DeepMind in 2023, shifting Gemini from Search to DeepMind, and recently moving AI Studio from Cloud to DeepMind—the company still lags behind OpenAI in consumer adoption. ChatGPT receives ten times more web traffic than Google’s Gemini.
DeepSeek Weighs First Fundraising Round Amid Rapid Growth
DeepSeek is considering its first external funding as its viral chatbot strains company resources, according to people with direct knowledge of internal discussions. Alibaba and Chinese state funds have expressed interest, though CEO Liang Wenfeng remains cautious about outside capital.
Two sources familiar with Liang’s September 2024 U.S. travel plans said he met with researchers, including OpenAI employees, to “stay up to date”. This news has fuelled online speculation.
Meanwhile, DeepSeek’s mobile app reached 30 million daily users within a month, but the company faces infrastructure challenges and potential U.S. regulatory hurdles—issues that investments from Chinese government-linked entities are unlikely to ease.
→ Read the full article
OmniGPT Allegedly Breached, Exposing Millions of User Conversations
Threat actors on Breach Forums claim to have stolen over 34 million user conversations and 30,000 emails and phone numbers from AI aggregator OmniGPT.
According to researchers at Hackread.com, the leaked data includes API keys, credentials, billing information, and links to sensitive documents stored on OmniGPT's servers. The breach is especially concerning as users often share personal matters with AI chatbots, which security expert Andrew Bolster likens to "artificial agony aunts".
OmniGPT, which provides access to multiple AI models including ChatGPT and Claude, has not commented on the alleged breach yet.
→ Read the full article
OpenAI Investor Details Financial Cost of Leaving Company
At a staff meeting, Thrive Capital's Josh Kushner and Vince Hankes explicitly calculated the wealth employees would forfeit by departing OpenAI, according to a person familiar with the presentation.
The investors claimed a typical OpenAI researcher could earn $30 million at a $600 billion valuation, versus just $4 million for someone with 1% equity in a startup that exits at $1 billion. Their unusual presentation comes as former CTO Mira Murati has poached over half a dozen key staffers for her new venture Thinking Machines Lab, adding pressure to the already fierce AI talent wars.
OpenAI Creating Some Transparency on Malicious Use
OpenAI's February 2025 report reveals how they banned accounts using their models for malicious purposes, including a China-based surveillance operation tracking protests, another China-linked campaign planting anti-US articles in Latin American media, North Korean employment scams, Iranian influence networks, Cambodia-based romance scams, and a Ghana election influence operation.
The report emphasizes how AI companies can identify connections between seemingly unrelated malicious activities, demonstrating the value of industry collaboration against threat actors using AI for espionage, fraud, and influence operations.
However, the report did not disclose the total number of malicious-use instances detected, their severity, or how confident OpenAI is in its ability to detect malicious use.
→ Read the report
Announcements & Calls to Action
Updates on publications, community initiatives, and “call for topics” that seek contributions from experts addressing concerns inside Frontier AI.
If you’re looking for a volunteering role in Growth & Marketing or know of someone who might be keen - let us know.
Stay tuned for the launch of our “Contact Hub” later next month.
Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.
Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively deepen our understanding of the challenges and risks faced by those within AI labs. Together, we can continue to amplify and safeguard the voices of the insiders who courageously raise these concerns.
If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.
Until next time,
The OAISIS Team