Inside AI #8: Meta Streisand Effect, Whistleblowing Culture Matters, SB 53 Passes First Hurdle
Edition 8
In This Edition:
Key Takeaways:
This Week in AI: Key Updates
Whistleblower Streisand Effect at Meta
OpenAI continues shifting infrastructure away from Microsoft
Siri faces ongoing challenges
DeepMind undergoes further reorganisation
Alex Lawsen writes about the critical need to embed whistleblowing policies into labs’ cultures—we agree and share some thoughts.
SB 53 passes its first regulatory hurdle, and we’ve contributed feedback on non-retaliation commitments for the EU AI Act CoP.
Insider Currents
Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from within AI labs.
Legal Action Against Facebook Whistleblower Causes Streisand Effect
Following our coverage of Sarah Wynn-Williams in the 7th edition, the former Meta executive turned whistleblower is now facing legal restrictions after publishing Careless People, a book that details the company’s internal operations.
Meta responded quickly. A day after publication, the company invoked an arbitration clause, resulting in an emergency order barring Wynn-Williams from making “disparaging” statements about Meta. We recall that these “non-disparagement” clauses in Silicon Valley contracts formed the basis of the right-to-warn campaign by multiple former and current AI insiders last year. Back in June, former OpenAI insiders filed a complaint with the SEC, claiming these non-disparagement clauses violated their ability to exercise their rights as whistleblowers. Wynn-Williams likewise filed whistleblower complaints with the SEC and DoJ, alleging that Meta misled investors over its dealings with China. Such filings are important for claiming whistleblower protection status.
This order has also prevented her from speaking with lawmakers in the U.S., U.K., and EU, her legal team stated in arbitration filings obtained by CNN. According to legal documents filed by members of the U.S. Congress, U.K. Parliament, and European Parliament, they had requested to speak with Wynn-Williams about issues raised in her book, including Meta’s interactions with the Chinese government and its potential impact on teenage mental health.
Meta’s spokesperson has denied interfering with her legal rights while dismissing the book’s claims as “false accusations” and “outdated”. Whether despite or because of the legal action, interest in Careless People has grown, with readers expressing heightened curiosity. “Now I’m 500× more interested in this book,” one reader remarked, as reported by Vulture.
Before she was barred from discussing her criticism of the company, Wynn-Williams reflected on Zuckerberg’s leadership and strategic direction in an interview with NPR, stating, “He sort of looks at the world as if it's a board game, like a game of Risk. It’s about occupying every territory, building an empire—those are his concerns, not the real-world impact of what that means.”
→ Read the Article by Vulture
→ Listen to the NPR Interview with Sarah Wynn-Williams
OpenAI’s Strategic Leap: First Dedicated Data Storage Centre to Gain Control Over Its AI Data Ecosystem
According to individuals familiar with the discussions, cited by The Information, OpenAI is weighing a major shift in its infrastructure strategy, exploring the development of its first dedicated data storage centre, worth billions of dollars. Such a deal would instantly position OpenAI as one of the largest storage clients globally and demonstrate its aim to gain greater control over data that is critical to developing artificial intelligence, while potentially reducing costs associated with cloud providers like Microsoft, Oracle, and CoreWeave.
OpenAI is making a significant strategic move through its Stargate initiative to become “one of the biggest data centre customers in the world based on a belief that the companies with the most computing power will win the AI race”, wrote The Information. The company plans to triple its data centre capacity in 2025, consuming nearly 2 gigawatts of power. OpenAI leaders have also told researchers to expect 8x more computing power for AI model training by the end of 2025 compared to 2024, enhancing the speed and efficiency of AI development.
To support this expansion, OpenAI is evaluating bids from leading storage technology firms, including Pure Storage, Vast Data, DDN, MinIO, and Weka. It aims to secure approximately $10 billion in funding by the end of the month to support the project’s infrastructure development.
→ Read the Full Article by The Information
Inside Apple’s Siri AI Struggles
Apple’s Siri division is grappling with delays in its AI roadmap, leading to internal frustration and management shake-ups. During a recent all-hands meeting, Robby Walker, Senior Director at Apple, acknowledged the uncertainty surrounding the launch of Siri’s new AI-powered enhancements, according to sources familiar with the matter. These individuals, who spoke to Bloomberg on the condition of anonymity due to the private nature of the discussion, said Walker described the situation as “ugly and embarrassing.” However, he praised the team’s efforts, calling the features they developed “incredibly impressive”, and recognised the burnout and disappointment among team members.
Apple’s planned Siri upgrades, initially introduced at the Worldwide Developers Conference (WWDC) in June, are designed to improve the assistant’s capabilities by integrating personal user data, enhancing app control, and analyzing on-screen content. However, executives have reportedly had to walk back some of the AI promises made at WWDC, highlighting tensions between Apple’s Siri unit and the company’s marketing teams. According to a report from The Verge, the marketing team sought to emphasize features such as Siri’s ability to understand personal context and take actions based on on-screen content—despite these capabilities not being ready for deployment. In response, Apple has since removed promotional materials showcasing unreleased AI-powered Siri functions.
→ Read the Full Article by Bloomberg
→ Read the Article by The Verge
Google DeepMind’s Expanding Role Amid Ongoing Reorganisation

According to internal documents acquired by The Information, Google DeepMind, the company’s AI division, is undergoing constant restructuring as it works to catch up to competitors like OpenAI. Led by Demis Hassabis, DeepMind has absorbed several new teams, including oversight of Google’s Gemini chatbot and related developer tools, as well as AI research teams from across Google. Hassabis currently leads approximately 5,600 employees, a significant increase from about 2,500 staff members two years prior. Despite this growth, DeepMind remains substantially smaller than Google Cloud, comprising only about one-tenth of that division’s workforce.
Hassabis now has 12 direct reports, including key leaders such as Sissie Hsiao, who manages the 1,800-person team behind Gemini, and David Thacker, whose Applied AI team serves as the primary liaison for external collaborations. Thacker’s promotion to report directly to Hassabis reflects a change in the product structure. In April, DeepMind formed a “product impact” group led by Vice President Chandu Thota to bridge DeepMind’s Gemini model teams with product teams across Google. However, some employees felt this new group created an unnecessary layer, according to several sources who interacted with Thota’s team.
The repeated organisational restructurings indicate that Hassabis is actively exploring optimal strategies to turn DeepMind’s technology into products, wrote The Information. His approach involves two primary methods: integrating AI into Google's existing product lineup and developing AI-specific products such as the Gemini chatbot.
→ Read the Article by The Information
Policy & Legal Updates
Updates on regulations with a focus on safeguarding individuals who voice concerns.
SB 53 Has Passed its First Hurdle
Senator Scott Wiener’s new AI bill, focused on establishing CalCompute and, most relevant for us and our readers, on whistleblower protections concerning critical risks, has been approved by its assigned California Senate panel.
This means the bill can proceed through the California legislative process. We wrote our commentary on the bill here. In a nutshell, we think it’s a great step in the right direction, although we would like to see some improvements. Given its current focus on critical risks, we would also expect bipartisan support to be feasible.
EU AI Act CoP Feedback Closes at the End of March
We were happy to get the chance to take a leading role in shaping civil society organizations’ feedback on the non-retaliation commitment in the CoP. We remain hopeful that the section will be strengthened. If you are interested in our feedback, please reach out to us directly at hello@oais.is.
Announcements & Call to Action
Updates on publications, community initiatives, and “call for topics” that seek contributions from experts addressing concerns inside Frontier AI.
An Internal Whistleblowing Mailbox Isn’t Enough — Culture Matters
In an essay published this week, Alex Lawsen highlights a challenge inside frontier AI companies: the presence of whistleblowing systems on paper versus their actual accessibility and psychological usability in practice.
Prompted by a conversation with an Anthropic employee who was unaware of their company’s anonymous reporting channel, Lawsen reflects on a deeper issue: that whistleblowing systems, while technically in place, often fail to be psychologically accessible or culturally normalised. In high-stakes environments like AI labs, raising a concern should feel like a routine procedural step, not a personal or moral risk.
To address this, Lawsen outlines a minimum policy standard:
Clear, accessible procedures that all employees know how to use, even under pressure.
A normalised reporting process that feels procedural, not accusatory.
Credible and protected channels that both insiders and outsiders can trust—with real structural barriers to retaliation.
The piece ends with a call for AI labs to not only implement robust internal mechanisms but to actively foster cultures where raising concerns is expected, not exceptional. As Lawsen puts it: “Policies mean very little if staff do not trust that they will be adhered to.”
Our Short Take:
He’s absolutely right. Unfortunately, many companies have a distorted perception of what internal whistleblowing functions entail. They often fear excessive bureaucracy, operational slowdowns, and a decline in team cohesion when these channels are actively used. As a result, they under-promote whistleblowing hotlines and fail to integrate them meaningfully into their workplace culture.
However, evidence shows the opposite is true:
A survey of over 1,000 companies with internal whistleblowing channels revealed these systems are remarkably effective, not only in their core purpose of detecting and deterring misconduct but also in:
Improving company processes
Increasing employee satisfaction by fostering a ‘speak up’ culture
Enhancing the company’s reputation
(Note: This survey was co-authored by Whistleblower Netzwerk e.V., the non-profit that hosts OAISIS)
The Question Arises:
How Can Companies Be Encouraged to Build Effective Internal Whistleblowing Channels?
Proven Approach: Strong External Whistleblower Channels Drive Internal Improvements
Most employees prefer to report concerns internally rather than going directly to a regulator. However, if external whistleblower channels are significantly more reliable and accessible than internal ones, employees will naturally choose that route.
Our experience shows that when external whistleblower mailboxes are well-established and widely recognised, companies respond by improving their internal systems to ensure reports reach them first. SB 53, mentioned earlier, could be a step in this direction, though its narrow focus on critical risks means it is unlikely to lower reporting barriers across the board.
Possible Explanations for Weak Internal Channels:
Speculation I: Education of Lab Leadership
Lab leaders may lack awareness of, or confidence in, the benefits of internal whistleblowing channels, or the knowledge to implement them effectively. Additionally, lab leadership may not fully understand how to manage these channels day-to-day, balancing efficiency with minimal overhead. However, this is merely speculation, and we would welcome insights from our readers.
Speculation II: Good Intentions, Poor Policy Design
Lab leadership may recognise the importance of whistleblowing channels, but ineffective policy design or a lack of outreach could be limiting their impact. To our knowledge, none of the leading AI labs have publicly shared their internal whistleblowing policies or survey results on whether employees understand these policies; both would drive improvement.
Direct Message:
Consider the evidence above. As someone who has advocated for whistleblower protections, you have an opportunity to lead by example.
We’ll also be conducting an insider survey soon to identify current gaps. Stay tuned.
Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.
Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively deepen our understanding of the challenges and risks faced by those within AI labs. Together, we can continue to amplify and safeguard the voices of insiders who courageously raise these concerns.
If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.
Until next time,
The OAISIS Team