State governments navigate challenges of regulating artificial intelligence

Alexandra Reeve Givens President & CEO at Center for Democracy & Technology | Official website



Following the introduction of ChatGPT in 2022, artificial intelligence (AI) has become a significant topic for state legislatures and governors across the United States. With Congress yet to act, many states have moved to regulate AI use within government operations through executive orders (EOs). Thirteen states, including Alabama, California, and Maryland, along with Washington, D.C., have issued EOs focusing on AI's role in state governance.

These EOs reveal several trends. First, there is no consistent definition of AI among the states. Maryland and Massachusetts are the only states to use a federal definition, drawn from the National Artificial Intelligence Initiative Act of 2020. Second, most state EOs recognize potential risks associated with AI usage in public services but aim to balance these with efficiency benefits. California's EO, for instance, emphasizes equitable service delivery while acknowledging these risks.

Civil rights protection is another focal point. While most EOs include civil rights concepts, only Washington, Oregon, and Maryland explicitly prioritize them. Maryland's EO outlines principles like "fairness and equity" to guide AI use by state agencies.

Pilot projects are commonly suggested as initial steps for integrating AI into government functions. Specific goals for these projects are not always outlined, though Alabama and California do specify objectives such as improving residents' experiences with government services.

AI governance is prioritized through task forces designed to guide each state's approach to AI implementation. These task forces vary in composition but generally aim to provide recommendations on AI deployment strategies.

Washington's EO stands out by defining "high-risk" generative AI systems and drawing from federal guidance such as the Biden Administration's Blueprint for an AI Bill of Rights. It also emphasizes protecting marginalized communities affected by AI in public services.

California's EO directs agencies to ensure ethical outcomes for marginalized communities when using AI and mandates transparency through inventories of high-risk uses.

Pennsylvania's EO focuses on not overburdening users or agencies while promoting transparency about generative AI use and engaging communities for feedback.

Governors can advance responsible public-sector AI use by aligning definitions across states, setting clear adoption priorities, implementing risk-management practices, promoting transparency through inventories of AI uses, ensuring pilot projects meet safeguards, forming comprehensive task forces with senior members from various sectors, and incorporating community-engagement requirements.

With rapid advancements in AI technology and increasing pressure for governmental adoption of such systems, executive orders serve as vital tools for governors aiming to establish responsible practices within their jurisdictions during this pivotal period.
