
AI can assist with many tasks, from editing writing and presentations to creating schedules and analyzing data. ChatGPT is shown on a computer screen here.
San Jose, California – Artificial intelligence is no longer a distant experiment here; it is becoming a routine part of how the city functions. From crafting speeches to helping secure multimillion-dollar grants, AI tools like ChatGPT are increasingly embedded in the daily operations of city government. Under Mayor Matt Mahan's leadership, the city has invested in 89 ChatGPT licenses and aims to train 15% of its nearly 7,000 workers to use the tools by next year.
The city’s ambitions are wide-ranging. Officials envision AI assisting with everything from routing buses and responding to pothole complaints to analyzing footage from vehicle-tracking surveillance cameras. One official has already used a customized chatbot to help draft a successful $12 million grant proposal for electric vehicle infrastructure, bypassing some of the long nights and weekend hours that typically go into such work.
The move reflects a broader push in the Bay Area to integrate artificial intelligence into public service. Neighboring San Francisco recently announced a plan to roll out Microsoft’s Copilot to tens of thousands of city workers. In both cities, officials say safeguards are in place to prevent misuse, and that the goal is to enhance—not replace—human judgment.
Still, the rapid adoption of generative AI in local government raises critical questions about transparency, accuracy, and unintended consequences. San Jose has not reported any major mishaps with its pilot projects, but other cities haven't been so lucky. In Fresno, a school official resigned after relying on an AI-generated document riddled with errors. And in Washington, D.C., a federal commission chaired by Robert F. Kennedy Jr. published an error-filled paper partially written by ChatGPT.
Even in cities taking a more measured approach, there are signs of caution. In Stockton, a proof-of-concept trial of an AI system that could help residents book public spaces or check pool availability was abandoned over cost concerns. Market research firm Gartner has predicted that more than 40% of such “agentic AI” initiatives will be canceled by 2027, citing unclear value and poor risk controls.
San Jose’s experiment reflects both the promise and the perils of putting AI at the center of government bureaucracy. The tools may streamline tedious tasks and free up staff for higher-level work, but they also carry real risks—especially when deployed without sufficient oversight or public understanding.
For now, Mayor Mahan remains optimistic, casting AI as a way to make city services faster, smarter, and more responsive. But as automation continues its march through every layer of public life, the question remains: Is this happening too quickly for the systems and people it’s meant to serve?