CDT, in collaboration with Data & Society (D&S), has submitted comments in response to the U.S. AI Safety Institute's (AISI) request for feedback on its draft guidance for mitigating the risks associated with the misuse of foundation models. The guidance focuses on how developers can assess and reduce risks such as generating child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII), enabling the development of chemical, biological, radiological, or nuclear (CBRN) weapons, and facilitating cyberattacks and deception.
"Our comments emphasize that the guidance does not adequately consider the sociotechnical context in which foundation models are deployed and may be misused," CDT stated. They urged AISI to broaden its scope to include risks related to bias and discrimination, highlighting that misuse of models in this way can cause significant harm, particularly to marginalized and vulnerable communities. "We also encourage AISI to recognize that the issues currently highlighted in the guidance are not independent of concerns around bias and discrimination," they added. As an example, CDT pointed out that the generation of NCII is tied to existing patterns of gender-based online violence, making it crucial for developers to account for gender bias when managing this type of misuse.
CDT further recommended that AISI offer more explicit guidance on involving subject matter experts, such as social scientists, public health experts, and advocacy groups, in identifying, assessing, and mitigating misuse risks. They suggested that foundation model developers consult these experts during risk assessments, include them in red-teaming exercises, and involve them in interpreting the results. According to CDT, this approach would lead to more comprehensive and contextually aware risk management.
Lastly, CDT addressed the inherent uncertainty in evaluating and mitigating misuse risks, urging AISI to clarify how developers can establish reasonable risk tolerances. They emphasized the importance of making release decisions based on those tolerances and of communicating the decision-making process transparently to stakeholders across the AI supply chain. "Guidance on determining and communicating risk tolerance is especially crucial for open-weight models," they noted.
The full comments can be read here.