Jason Oxman, President and Chief Executive Officer, Information Technology Industry Council
Today, global tech trade association ITI responded to the National Telecommunications and Information Administration’s (NTIA) request for information related to NTIA’s assignments in U.S. President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.
In its comments, ITI recommended that NTIA adopt a risk-based approach to its policy recommendations, account for the gradient of openness when defining open foundation models, recognize risk management as a shared responsibility across the AI value chain, and engage key international allies and partners in discussions on the risks and benefits of open foundation models.
According to ITI, "There is growing evidence about the benefits of open foundation models. This includes enabling competition, catalyzing innovation, and ensuring transparency. Additionally, widely available model weights can help to democratize access to and use of AI systems, allowing a greater number of users to contribute to AI development processes." The association continued, "Widely available model weights are able to promote innovation, allowing actors across the AI value chain to create new economic opportunities in diverse fields such as marketing, communications, medicine, education, and employee training. Developers and deployers can customize their models depending on the specific use case."
ITI emphasized the importance of safe and responsible AI development and deployment grounded in trust, transparency, ethics, and collaboration among government, industry, civil society, and academia. INCITS, an affiliate division of ITI, serves as the U.S. technical advisory group to the international standards body that recently published a standard for managing the risks and opportunities of AI. This management system standard supports improved quality, security, traceability, transparency, and reliability of AI applications.
Earlier this year, ITI unveiled a new guide for global policymakers, Authenticating AI-Generated Content: Exploring Risks, Techniques & Policy Recommendations, which addresses the pressing need to authenticate AI-generated content, including content produced by chatbots and by image and audio generators.