Inside AI #7: New AI Whistleblower Law in California, Meta Leaker Terminations & Whistleblower Allegations
Edition 7
In This Edition:
Key takeaways:
Whistleblower Policy Updates: SB 53 Whistleblower Protection Bill introduced into the California Senate - see our feedback and recommendations here.
At the same time, the 3rd edition of the EU AI Act Code of Practice has been published. Stay tuned for our feedback.
DeepSeek’s Industry Impact: Insider sources reveal DeepSeek has transformed China's AI landscape: tech giants ByteDance, Alibaba, Tencent, and Baidu are integrating its models, while a black market has emerged for Nvidia gaming chips, which are being modified to run AI models at lower cost in order to bypass export restrictions.
Corporate Turbulence: The newsletter covers Meta's termination of ~20 employees for leaking confidential information, Microsoft's struggle for AI independence from OpenAI amid executive tensions, and a Meta whistleblower's allegation that the company previously considered sharing user data with Chinese authorities.
Insider Currents
Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from within AI labs.
California Senator introduces AI Safety Bill to Protect Whistleblowers
Below is news coverage. For our thoughts on SB 53, see our feedback post.
Sen. Scott Wiener (D-San Francisco) has reintroduced Senate Bill 53 (SB 53), a critical piece of legislation aimed at strengthening AI safety and whistleblower protections after a similar bill was vetoed last year. The bill would protect AI lab employees from retaliation if they disclose risks or irresponsible development practices while also requiring companies to establish anonymous reporting mechanisms.
The legislation defines “critical risk” as foreseeable AI-related harms that could cause 100+ deaths or over $1 billion in damages. While Wiener acknowledges the bill may evolve, he stated: “I’m closely monitoring the work of the Governor’s AI Working Group, as well as developments in the AI field for changes that warrant a legislative response.” He also warns that California’s leadership in AI regulation is crucial, especially as federal protections weaken. However, opposition from tech companies and some lawmakers remains strong, with critics arguing the bill could stifle innovation and drive developers out of the state.
Additionally, SB 53 proposes the creation of CalCompute, a shared compute cluster to ensure equitable access to computational resources.
→ Read the Full Article
→ Read the Senate Bill 53 Here
Facebook (now Meta) Considered Sharing User Data with China, Alleges Former Global Policy Director
Sarah Wynn-Williams, Facebook’s (now Meta’s) Global Policy Director from 2011 to 2017, alleges that the company considered sharing user data with Chinese authorities in order to enter the Chinese market. While this revelation is not directly related to AI development, the main focus of this newsletter, it still raises critical concerns about Big Tech ethics and whistleblower protections.
Internal documents, provided by Wynn-Williams to the SEC and seen by The Washington Post, suggest Meta was willing to allow government oversight of content moderation and suppression of dissent in exchange for regulatory approval. Meta also explored appointing a "chief editor" with powers to remove content or shut down the platform during times of "social unrest". Even more alarmingly, Meta reportedly considered giving Chinese authorities access to user data—including that of Hong Kong residents.
According to the complaint, Meta executives also faced pressure from Chinese officials to store local user data within China, which Wynn-Williams alleges would have facilitated covert access by the Chinese Communist Party.
In 2014, Mark Zuckerberg allegedly assembled a dedicated “China team” to develop a version of Meta’s services that complied with Chinese regulations. This initiative, code-named Project Aldrin, reportedly involved commitments to:
Remove content deemed “dangerous for China,” including terror threats.
Strengthen cooperation with Chinese embassies and consulates worldwide.
These efforts, however, ultimately collapsed. By 2019, as the Trump administration intensified its trade war with China, Meta quietly abandoned its ambitions.
→ Read The Washington Post’s Deep Dive
DeepSeek 1: DeepSeek Spurs Black Market Demand for Nvidia Gaming Chips in China
DeepSeek’s ability to develop advanced AI models at an exceptionally low cost sent shockwaves through the market, wiping $750 billion from Nvidia’s valuation in January. Despite the sell-off, Nvidia CEO Jensen Huang remained optimistic, believing DeepSeek’s rise would boost demand for its chips in China, according to a source familiar with the matter.
However, this demand has materialised in unexpected ways, fuelling a thriving underground market for Nvidia’s gaming GPUs as tech companies discover they can run DeepSeek’s models at a fraction of the cost. Smugglers in Asia are increasingly supplying these chips to Chinese startups, universities, and state enterprises seeking to bypass U.S. export restrictions.
Researchers at Beijing’s Tsinghua University report that some engineers have managed to run DeepSeek’s full-powered models on a single Nvidia gaming chip. Meanwhile, other Chinese companies are using servers with eight modified gaming GPUs instead of standard AI chips, according to two smugglers and an engineer involved in the setups.
Nvidia’s layered distribution network makes tracking its shipments difficult: the company sells gaming chips through authorized distributors, who often resell them to other dealers, further complicating oversight, The Information reported.
→ Read the Article by The Information
DeepSeek 2: Chinese Tech Giants Embrace DeepSeek's AI Technology
China's leading tech companies—ByteDance, Alibaba, Tencent, and Baidu—are integrating DeepSeek's AI models alongside their own technologies following the startup's rapid success. According to ByteDance employees, the company is conducting internal tests to incorporate DeepSeek into its Doubao chatbot, triggering intense debate on company forums between those advocating adoption and others concerned about undermining in-house AI development.
Tencent's adoption decision came directly from CEO Pony Ma and President Martin Lau, according to insiders familiar with the matter, reflecting the company’s pragmatic approach of prioritising competitive consumer offerings over exclusive model ownership. Meanwhile, Tencent Cloud has been favouring DeepSeek-powered solutions over its own Hunyuan models when selling to enterprise customers.
ByteDance CEO Liang Rubo acknowledged at a recent all-hands meeting that the company's AI goals may not have been "ambitious or focused enough," while Baidu CEO Robin Li cited DeepSeek's influence on Baidu's decision to open-source its upcoming Ernie 4.5 models.
→ Read the Article by The Information
Microsoft’s AI Chief, Mustafa Suleyman, Wants Independence From OpenAI
Microsoft is struggling to reduce its reliance on OpenAI, despite investing over $13 billion in the AI startup. Mustafa Suleyman, Microsoft’s AI chief, has expressed frustration over OpenAI’s lack of transparency, particularly regarding the chain-of-thought reasoning used in its latest model, o1. According to sources familiar with the matter, this frustration led to a tense exchange between Suleyman and OpenAI executives, including then-CTO Mira Murati.
In an effort to gain more autonomy, Microsoft has been developing its own AI models, internally referred to as MAI. These models reportedly perform “nearly as well” as leading systems from OpenAI and Anthropic on key benchmarks, according to someone involved in the effort, as reported by The Information.
“Microsoft is also considering making its MAI models commercially available on its Azure cloud computing service, which would bring its AI lab into even more direct competition with OpenAI and other AI firms,” wrote The Information in another article.
However, Microsoft’s progress has been slow, hindered by technical challenges, shifting strategies, and key employee departures. Interviews with six current and former Microsoft employees suggest that some company veterans disagreed with Suleyman over his management style and technical direction.
Skepticism about Microsoft’s AI strategy is growing. Venture capitalist Nathan Benaich, who invests in AI startups, remarked: “From the outside, it’s still not clear what they have to show for the creation of this unit under Suleyman a year later. They need to make Copilot a serious competitor to ChatGPT, but I’m not sure how they aim to do that, or what their strategy is beyond chasing what OpenAI has already done.”
Meanwhile, OpenAI continues to extend its lead, recently launching o3 and GPT-4.5—making it even harder for Microsoft to replace OpenAI’s models.
→ Read the Article by The Information
→ Read The Information’s Brief Report on “Microsoft Trains New In-House AI Models in Copilot”
Google Reports Scale of Complaints About AI Deepfake Terrorism Content to Australian Regulator
Google disclosed to Australia's eSafety Commission that, over roughly 12 months, it received 258 reports of its AI model Gemini generating deepfake terrorist content and 86 reports alleging it produced child exploitation material. The regulator called the disclosure a "world-first insight" into AI misuse, one that raises growing concerns about the effectiveness of safeguards at leading AI labs, Reuters reported.
While Google prohibits such content, the regulator noted that the company did not disclose how many reports were verified. Google uses hash-matching to detect and remove AI-generated child abuse imagery but has not implemented the same system for terrorist or extremist content, according to regulators.
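For readers unfamiliar with the technique, hash-matching flags content by comparing a digest of each file against a database of digests of known illegal material. The sketch below is a minimal, hypothetical illustration using exact cryptographic hashes; real deployments (e.g. PhotoDNA-style systems) use perceptual hashes that survive resizing and re-encoding, and the database contents here are placeholders, not real data.

```python
import hashlib

# Hypothetical stand-in for a database of digests of known illegal material.
# Real systems receive these hash lists from clearinghouses, not raw content.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"placeholder-known-bad-bytes").hexdigest(),
}

def is_known_bad(content: bytes) -> bool:
    """Return True if the content's digest matches a known-bad entry."""
    return hashlib.sha256(content).hexdigest() in KNOWN_BAD_HASHES

print(is_known_bad(b"placeholder-known-bad-bytes"))  # True: digest matches
print(is_known_bad(b"benign-content"))               # False: no match
```

Note the limitation this sketch makes visible: exact hashing only catches previously catalogued files, which is one reason such systems are harder to extend to newly generated terrorist or extremist content.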
Meanwhile, platforms like X (formerly Twitter) and Telegram have been fined for inadequate safety reporting. X lost an appeal over a A$610,500 ($382,000) fine but plans to challenge it again, while Telegram also intends to appeal.
Meta Fires 20 Employees for Leaking Information
According to a statement from Meta spokesperson Dave Arnold to The Verge, Meta has terminated approximately 20 employees for sharing confidential information externally, with more terminations expected. This crackdown follows recent leaks about unannounced products and internal meetings, including a recent all-hands meeting led by CEO Mark Zuckerberg.
CTO Andrew Bosworth previously told staff they were “making progress on catching people”—comments that were themselves leaked. The firings come amid declining employee morale following controversial changes to content moderation, elimination of DEI programs, and targeted layoffs.
→ Read the Full Article
Announcements & Call to Action
Updates on publications, community initiatives, and calls for topics seeking contributions from experts addressing concerns inside Frontier AI.
Growth Volunteers
Looking to support our outreach and growth efforts? Reach out at collaborate@oais.is
Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.
Your feedback is crucial to our mission. Please share any thoughts, questions, or suggestions for future topics so that we can collaboratively deepen our understanding of the challenges and risks faced by those inside AI labs. Together, we can continue to amplify and safeguard the voices of insiders who courageously raise these concerns.
If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.
Until next time,
The OAISIS Team