Connecticut officials on Feb. 14 proposed state-backed investment, data and testing programs to help the state capture AI-driven business growth while guarding against harm, as members of the General Law Committee pressed for clearer rules on privacy, testing and accountability.
The administration offered a three-part plan: allow state agencies to post curated datasets for AI training; create an artificial intelligence regulatory sandbox to test new services under modified licensing; and set up an artificial intelligence and quantum technology investment fund at Connecticut Innovations to back local companies. In a wide-ranging Q&A, lawmakers pushed the administration to clarify testing expectations, buyer protections and when the state would step in to require fixes to biased systems.
What officials proposed and why it matters
Dan O’Keefe, commissioner of the Department of Economic and Community Development (DECD), framed the proposal as a way to keep companies and jobs in Connecticut by giving startup founders access to training data, limited regulatory relief while they test new services, and investment capital. He told the committee the fund would be managed by Connecticut Innovations and that agencies would be authorized to publish data for model training while protecting privacy. O’Keefe also described the sandbox as a way for applicants to test innovative products under limited licensure or other regulatory flexibility before full deployment.
Lawmakers and witnesses pressed on three practical issues: what tests or metrics would distinguish lawful from biased outcomes; how small businesses could get access to models and test data when developers claim proprietary protections; and whether states can coordinate so that a patchwork of conflicting rules does not deter investment. Rep. LaMarr and Sen. Maroney asked about model transparency and the potential for private litigation under existing consumer-protection law. O’Keefe said he had asked the attorney general to consider an opinion on whether existing consumer-protection and anti-discrimination statutes already apply to AI-based decisions.
Industry witnesses, including Brianna January of Chamber of Progress and academic panelists later in the hearing, generally supported the investment and sandbox ideas but asked for stronger privacy guardrails. They also sought safe-harbor language that would let companies test and fix their systems without immediate exposure to lawsuits while preserving consumer remedies for discrimination. Several lawmakers urged steps to expand access to computing capacity and workforce training; O’Keefe said DECD is pursuing options, including requests for information (RFIs), to expand compute access for researchers and startups.
What’s next
Committee members signaled they want more detail before approving broad statutory changes, asking for explicit testing standards, a clearer definition of “high-risk” uses and a plan for cross-state coordination. Several witnesses suggested a state AI institute or public testing lab that could certify models against agreed benchmarks; O’Keefe welcomed the idea and pointed to the Connecticut AI Alliance and university partners as potential collaborators.
Lawmakers praised the proposal’s ambition but repeatedly said the devil is in the details. Connecticut is trying to balance competitiveness against consumer protection by coupling investment with rules, and the committee asked the administration to return with specific testing standards, privacy safeguards and workforce support before the bill moves to final form.