In This Edition:
Key takeaways:
We have provided feedback on the Whistleblower sections of the second draft of the EU GPAI Code of Practice to the Chairs of Working Group 4.
Top priorities include encouraging both internal and external reporting channels for time-sensitive risks, extending protection globally beyond EU borders, implementing external audits of whistleblower policy implementation, and establishing joint industry-wide ombudsperson services to support potential whistleblowers.
Find our brief feedback here [Link] and the detailed version, including reasoning, here [Link].
We want to thank all those involved in this effort, especially our non-profit partner, Whistleblower Netzwerk e.V.
Recent News: OpenAI faces delays in launching computer-using agents due to prompt injection concerns, while Anthropic seeks $2 billion in funding at a $60 billion valuation. Revealed court filings show Meta’s obsession with catching up to GPT-4. A legal battle between Elon Musk and OpenAI heads to court in January 2025. Sam Altman addressed the 2023 board conflict and AI safety in a Bloomberg interview, and the family of the tragically deceased OpenAI whistleblower Suchir Balaji is raising crypto funds to investigate his death. Shakeel Hashim highlighted challenges in AI journalism. Miles Brundage has proposed creating a Global Association of AI Ombudspeople (GAAIO) to independently verify AI companies' claims.
We’d love your feedback on where we can improve - via the poll below or by replying to this email. Thank you!
Insider Currents
Carefully curated links to the latest news from the past two weeks, spotlighting voices and information emerging from within frontier AI.
Why OpenAI is Taking So Long to Launch Agents
The Information reports that “prompt injection,” where LLMs are tricked into following instructions planted by malicious actors, has slowed OpenAI's release of computer-using agents. OpenAI employees told The Information why prompt injection should be taken seriously (long story short: with agents, users have less control over what information a model is ingesting) and expressed surprise at Anthropic’s decision to release its experimental computer-use feature in October 2024.
→ Read the Full Article
Anthropic in Talks To Raise Funding at $60 Billion Valuation
According to The Information, citing a person with direct knowledge of the matter, Anthropic is in discussions to secure $2 billion in new funding, potentially boosting its valuation to $60 billion. Lightspeed Venture Partners, which has previously backed AI competitors including Mistral, Stability AI, and Cartesia, is set to lead this funding round. “The fundraise would more than triple the startup’s valuation from around a year ago, which valued it at $18 billion, followed by Amazon committing $4 billion to Anthropic and developing a supercomputing cluster for the startup to develop its technology,” added The Information. The Amazon partnership also includes Anthropic using Amazon’s cloud infrastructure and Amazon distributing Anthropic’s AI technology through its marketplace under a revenue-sharing agreement.
→ Read the Report by The Information
Meta Execs Obsessed Over Beating OpenAI’s GPT-4 Internally, Using Copyrighted Data, Court Filings Reveal
Internal messages revealed in the Kadrey v. Meta lawsuit show Meta executives, particularly VP of Generative AI Ahmad Al-Dahle, were intensely focused on beating GPT-4 with Llama 3, while dismissing open competitor Mistral as “peanuts.” The documents expose discussions about using the LibGen dataset, which contains copyrighted materials from major educational publishers, with executives questioning whether “stupid reasons” were preventing the use of certain datasets. Most notably, the messages suggest Meta's leadership was “very aggressive” in obtaining training data, with the plaintiffs alleging that Mark Zuckerberg approved the use of copyrighted materials in the race to make Llama 3 competitive with leading closed models.
→ Read the Full Article
Sam Altman’s Interview with Bloomberg
Sam Altman gets his say in this friendly Bloomberg interview. Interesting bits:
On ‘The Board Drama’:
Altman admits that he doesn't consider himself someone with strong emotional intelligence (EQ) and says he felt deceived by the board, particularly in its decision to appoint Emmett Shear. Reflecting on the situation, he acknowledged not fully informing the board about the scale of the ChatGPT launch. He also addressed prior conflicts with individual board members, noting that he had previously attempted to remove some of them before his firing, and claims that his temporary role as General Partner of the OpenAI Startup Fund was due to speed considerations - not an attempt to be sneaky.
On Safety:
Altman shares that the current safety setup is also confusing internally, with three separate bodies overseeing different aspects:
“an internal-only safety advisory group [SAG] that does technical studies of systems and presents a view. We have an SSC [safety and security committee], which is part of the board. We have the DSB with Microsoft. And so you have an internal thing, a board thing and a Microsoft joint board. We are trying to figure out how to streamline that.”
On managing risks, Altman further states that he hasn’t updated his assessment of the risks. In the long term, he says, “I can simultaneously think that these risks are real and also believe that the only way to appropriately address them is to ship the product and learn.”
Elon Musk vs. OpenAI: What To Expect from the Showdown in 2025
The ongoing legal battle between Elon Musk and OpenAI, covered in our past editions, is entering a critical phase in early 2025. According to Business Insider, Judge Yvonne Gonzalez Rogers is set to begin hearing arguments on January 14 regarding Musk’s motion to halt OpenAI’s transition to a for-profit entity. Musk, who launched competing AI venture xAI in 2023, claims his early contributions, including seed capital and recruitment of key AI scientists, were predicated on OpenAI’s commitment to nonprofit status and prioritizing AI safety. OpenAI counters that Musk’s legal maneuvers reflect competitive interests rather than genuine concern for the organization’s mission, stating he “should be competing in the marketplace rather than the courtroom.” Adding weight to these concerns, former OpenAI employees have warned that the restructuring would leave the nonprofit with a reduced role in ensuring public safety.
→ Read the Article by Business Insider
Suchir Balaji’s Family Raises Crypto Funds for Investigation
The mother of Suchir Balaji, a former OpenAI employee turned whistleblower, is raising funds for an independent investigation into his death. Should you wish to contribute, a link is available in the article.
→ Read the Article
Shakeel Hashim on AI Journalism
In a recent interview with AXRP (the AI X-risk Research Podcast), Shakeel Hashim, who works at the Tarbell Center for AI Journalism and writes the newsletter Transformer, highlights three key issues: “resource constraints facing AI journalism, the disconnect between journalists’ and AI researchers’ views on transformative AI, and efforts to improve the state of AI journalism,” as summarized by AXRP here.
Policy & Legal Updates
Updates on regulations with a focus on safeguarding individuals who voice concerns.
Miles Brundage Proposes an Independent Ombudspeople Organization to Verify Claims of AI Labs:
Former OpenAI policy researcher Miles Brundage proposes creating a Global Association of AI Ombudspeople (GAAIO) that would independently verify claims made by companies and countries about their AI systems. Like nuclear facility inspectors, these ombudspeople would get employee-level access to organizations on a voluntary basis to assess specific claims through document review and testing. While this human approach could work now, Brundage notes that automated verification will be needed as AI advances.
→ Read Miles’ Substack Post
Announcements & Call to Action
Updates on publications, community initiatives, and calls for topics seeking contributions from experts addressing concerns inside frontier AI.
Our Feedback on the EU GPAI Code of Practice (CoP):
We propose several major expansions to whistleblower protections in the EU GPAI CoP, including encouraging both internal and external reporting channels for time-sensitive risks, extending protection globally beyond EU borders, and establishing joint industry-wide ombudsperson services to support potential whistleblowers. These changes would allow whistleblowers to act more quickly when faced with critical AI risks and ensure protection regardless of their location. Additionally, the proposal calls for standardized evaluation of whistleblowing policies through independent audits and emphasizes the importance of explicit communication about the impact of whistleblower reports, addressing a key concern that whistleblowers often lack visibility into how their reports are handled.
→ Read Our Detailed Feedback Here
Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.
Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively deepen our understanding of the challenges and risks faced by those within AI labs. Together, we can continue to amplify and safeguard the voices of insiders who courageously address the issues they encounter.
If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.
Until next time,
The OAISIS Team