Experts urge funding for safe AI development at California conference

May 27, 2025 | California State Assembly


This article was created by AI to summarize key points discussed at the meeting. AI makes mistakes, so for full details and context, please refer to the video of the full meeting.

The Assembly Privacy and Consumer Protection Committee convened on May 27, 2025, to discuss critical issues surrounding artificial intelligence (AI) and its implications for safety, innovation, and national security. The meeting featured a range of expert testimonies, highlighting both concerns and optimism regarding the future of AI development.

The session opened with a discussion on the competitive landscape of AI, particularly in relation to advancements made by Chinese companies. Committee members emphasized the need for the United States to ensure its AI technologies are not only competitive but also safe for public use. Concerns were raised about potential disruptions to economies and democracies if AI development is not approached with caution.

Mary Pellerin, a representative from the national labs, expressed her hope for the future of AI, citing her work on designing safe AI systems. She advocated for increased funding and incentives for both private companies and academic institutions to focus on safety in AI development. Pellerin argued that while private-sector innovation is important, it should not overshadow the critical role of public safety missions.

The conversation shifted to the importance of collaboration between national labs and academic institutions. Pellerin noted that partnerships with universities like UC Davis and UC Berkeley are essential for building a pipeline of talent necessary for advancing AI safety. She highlighted the need for more resources and personnel dedicated to this mission.

Dr. Yoshua Bengio, a prominent AI researcher, echoed these sentiments, stressing that the development of advanced AI should not be left solely to the private sector. He drew parallels between AI and nuclear technology, suggesting that oversight is crucial to ensure safety and ethical standards are maintained.

The meeting also featured Dr. Kevin Esvelt from the MIT Media Lab, who identified himself as a "techno-optimist." He shared his belief that AI and biotechnology could lead to transformative benefits for society. Esvelt noted that those who are confident in the potential of these technologies are often the ones advocating for responsible policy measures to guide their development.

As the meeting concluded, committee members reflected on the importance of fostering a collaborative environment that prioritizes safety and ethical considerations in AI development. The discussions underscored a collective commitment to ensuring that advancements in technology serve the public good while addressing the challenges posed by rapid innovation. The committee plans to continue exploring these themes in future sessions, emphasizing the need for ongoing dialogue and action in the evolving landscape of AI.

View full meeting

This article is based on a recent meeting—watch the full video and explore the complete transcript for deeper insights into the discussion.
