Building a Practice of Responsible AI

AI can advance equitable and empathetic government service delivery—when it’s deployed thoughtfully

At Code for America, we’ve spent the past 15 years using human-centered technology to improve public services and make government work better for everyone. At the center of our work is the desire to make government work more efficiently—that’s where technology can be extremely helpful—while advancing equity and ensuring everyone’s voice is represented as we move forward. 

With any new technology, the goal of government adoption is lasting change. That means starting small and testing the right opportunities, building use cases to learn from along the way. As governments and civic tech partners begin to experiment with artificial intelligence (AI), it’s critical that we do so responsibly.

What is responsible AI?

Imagine you are a caseworker facing an overflowing digital inbox every day because a system inefficiency has created a bottleneck. Before you can move to the next step of reviewing a case, you must manually scan each uploaded document to tag and classify it. You know most clients get stuck in this queue, but your review still needs to be thorough and accurate. In this scenario, AI might do the work of scanning documents, protecting personal identifying information, and classifying them, while humans stay in the loop for oversight. Delivering critical benefits to clients faster and more efficiently can make all the difference in social services: it may help a family see a much-needed doctor, or pay for childcare so the parents can go to work.

That’s a responsible use of AI.
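
To make that pattern concrete, here’s a minimal sketch of the kind of human-in-the-loop triage described above. The classifier, labels, and confidence threshold are stand-ins for illustration, not a production system; the point is the shape of the workflow: redact personal information first, classify, and route anything uncertain to a person.

```python
import re
from dataclasses import dataclass

# Mask common PII before any text is sent to a classifier.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
]

def redact_pii(text: str) -> str:
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

@dataclass
class Classification:
    label: str        # e.g., "pay_stub", "utility_bill" (illustrative)
    confidence: float

def classify(text: str) -> Classification:
    # Stand-in for a model call; a real system would use a trained
    # classifier or an LLM. Only the interface matters here.
    if "gross pay" in text.lower():
        return Classification("pay_stub", 0.96)
    return Classification("unknown", 0.30)

def triage(document_text: str, review_queue: list) -> str:
    safe_text = redact_pii(document_text)
    result = classify(safe_text)
    if result.confidence < 0.85:        # illustrative threshold
        review_queue.append(safe_text)  # a human stays in the loop
        return "needs_human_review"
    return result.label
```

With this stub, a pay stub containing a Social Security number comes back labeled with the number masked, while anything the classifier can’t confidently label lands in the review queue for staff.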

By our definition, responsible AI is human-centered. But when we’re talking about technology, what exactly does that mean? To start, it means asking some important questions before we even begin to engage with AI: Is AI the right tool? Is it doable? Does it benefit people? Grounding AI experimentation in human-centered practices like these helps us confirm that a given use serves a real human need and enhances people’s experience. It also helps us anticipate risks and potential harms.

AI can do harm if it isn’t stewarded responsibly. For example, if an AI application collects and uses someone’s data without their consent, it can lead to a host of ethical and legal issues. Imagine a benefits eligibility form that uses AI to analyze user data to determine eligibility for various social services. If the AI collects sensitive personal information without explicitly informing users or obtaining their consent, it violates their privacy rights. Unauthorized use of personal information can erode trust, invite misuse, and cause significant harm to the individuals involved. We can avoid adverse outcomes like these by focusing on a core set of principles.

Interested in how you might use AI in government? Learn more about our AI workshops.

Our principles of responsible AI

When we begin any new engagement around the use of AI in government, we’re dedicated to following a core set of principles that center ethics, safety, and impact. We believe these values are crucial for the successful and equitable integration of AI into government systems and processes. 

  • Deploying AI must be done with a human-centered approach. This means we dedicate ourselves to understanding and meeting user needs. Rooted in our principles of human-centered service design and delivery, we can use AI to tailor government services to the specific needs and preferences of the people we serve and deliver measurable, lasting improvements in access to benefits.
  • The use of AI must be ethical and equitable. When considering the use of AI in a new way, the approach must ensure the benefits of the technology are accessible to all.
  • There has to be a deep sense of responsibility. Transparency comes first: AI should augment human judgment, identify and mitigate potential harms, and safeguard both privacy and broader societal impact.
  • Prioritizing safety is paramount. We must hold a strong commitment to protecting the right to privacy and data (similar to our ethical data use policy) and, following the principle of do no harm, ensure applications have minimal capacity for harm.
  • AI is innovative—the process of implementing it should be, too. We can embrace innovation and experimentation through an iterative and responsive approach, and leverage real-time user data for continuous feedback loops.
  • The effects should be lasting. Whenever we work with government, we want to make sure the shifts set agencies up for long-term change. AI adoption is no different: it will require governments to grow their capacity, perhaps hire for new roles, and make process changes. We want governments to feel equipped to handle new technologies, their associated regulatory and statutory frameworks, and all the supporting data that can inform how they adapt their strategies over time.

Responsible AI at work

There are places we can already see responsible AI at work, grounded in our AI principles. In our work supporting people applying for benefits in California, we used AI to classify submitted text in the live chat support function and manage message queues for our client support staff. We configured a chatbot to use a large language model to classify client queries written in English, Spanish, or Mandarin, identifying the queries that could be successfully handled by an automated response. We didn’t generate answers; instead, we sent templated messages to clients and helped staff focus on the clients whose cases needed more support. We chose this approach to prioritize client safety, since there is always a fallback option to reach client support staff. We also ensured transparency by clearly indicating that these messages were automated.
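
A simplified sketch of that routing pattern might look like the following. The category labels, templated replies, and llm_classify helper are hypothetical stand-ins (this post doesn’t specify the actual model or categories); what matters is that automated replies are clearly marked and that every unclassified or failed message falls back to a human.

```python
# Hypothetical categories and canned replies for common questions.
TEMPLATED_REPLIES = {
    "document_upload_help": "Here's how to upload your documents: ...",
    "application_status": "You can check your application status here: ...",
}

AUTOMATION_NOTICE = (
    "This is an automated message. A member of our support team "
    "is available if you need more help."
)

def llm_classify(message: str) -> str:
    """Hypothetical call to a large language model that returns one
    category label for a query written in English, Spanish, or Mandarin."""
    raise NotImplementedError  # swap in a real model client here

def route_message(message: str, staff_queue: list) -> str:
    try:
        label = llm_classify(message)
    except Exception:
        label = None  # if classification fails for any reason, use a human
    reply = TEMPLATED_REPLIES.get(label)
    if reply is None:
        staff_queue.append(message)  # fallback: client support staff
        return "queued_for_staff"
    # Transparency: every automated reply is labeled as automated.
    return f"{reply}\n\n{AUTOMATION_NOTICE}"
```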

In another use case, large language models can identify complex relationships between words and phrases, allowing for more flexible and accurate classification than many traditional rule-based or machine learning approaches. These models can be “instructed” (with natural language) to extract specific features from documents. We developed a proof of concept for an automatic record clearance process that uses AI to extract key data from legal documents and place it in a structured format, an approach that could reduce caseworker load and clear records more quickly. We based this proof of concept on user needs uncovered by our human-centered approach, and because these flexible classifications can be more forgiving of nicknames and misspellings, they help ensure that more people can have their records cleared automatically.
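
As a rough illustration of what “instructing” a model to extract structured data can look like, here is a sketch with a canned response standing in for a real LLM call. The prompt, field names, and sample values are assumptions for illustration, not the actual proof of concept.

```python
import json

# Hypothetical natural-language instruction given to the model.
EXTRACTION_PROMPT = """Extract the following fields from the court document
below and return them as JSON:
- defendant_name (as written, even if a nickname or misspelling)
- case_number
- offense_code
- disposition_date (YYYY-MM-DD)

Document:
{document}
"""

def call_model(prompt: str) -> str:
    # Placeholder for an LLM API call. A canned response stands in so the
    # sketch runs end to end; a real model client would go here.
    return json.dumps({
        "defendant_name": "Johnny Q. Public",  # model tolerates nicknames
        "case_number": "CR-2019-00123",
        "offense_code": "HS 11357",
        "disposition_date": "2019-06-14",
    })

def extract_record_fields(document_text: str) -> dict:
    raw = call_model(EXTRACTION_PROMPT.format(document=document_text))
    fields = json.loads(raw)
    # Structured output feeds an eligibility check; a human reviews any
    # record with missing fields before anything is actually cleared.
    required = {"defendant_name", "case_number",
                "offense_code", "disposition_date"}
    fields["needs_human_review"] = not required.issubset(fields)
    return fields
```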

The potential of AI

At Code for America, we envision a future where interacting with government services is as easy as other modern digital experiences. Government can and should be transparent and tailored to client needs, making interactions with programs and services smooth and trustworthy. We believe AI has the potential to help us get there: supporting people as they make decisions, analyzing and interpreting data more quickly than we could alone, and augmenting workflows to make room for more meaningful human interaction. In this future, meaningful human oversight and the right protections are in place, and AI serves as a trusted team member, unlocking benefits for those who need them most.

Code for America’s AI Studio is a new initiative focused on preparing government for working with new technologies in human-centered ways. Interested in partnering with us? Contact us at AIpartnerships@codeforamerica.org
