Image: Adobe/Grandbrothers

The National Institute of Standards and Technology established the U.S. AI Safety Institute Consortium on Feb. 7 to identify guidelines and standards for AI measurement and policy. U.S. AI companies and companies that do business in the U.S. will be affected by those guidelines and standards, and may have the opportunity to provide input on them.

What is the U.S. AI Safety Institute consortium?

The U.S. AI Safety Institute Consortium is a joint public and private sector research group and data-sharing space for “AI creators and users, academics, government and industry researchers, and civil society organizations,” according to NIST. Organizations could apply to become members between Nov. 2, 2023, and Jan. 15, 2024. Out of more than 600 interested organizations, NIST selected 200 companies and organizations to become members.
Participating organizations include Apple, Anthropic, Cisco, Hewlett Packard Enterprise, Hugging Face, Microsoft, Meta, NVIDIA, OpenAI, Salesforce and other companies, academic institutions and research organizations. Those members will work on projects including:

- Developing new guidelines, tools, methods, protocols and best practices to contribute to industry standards for developing and deploying safe, secure and trustworthy AI.
- Developing guidance and benchmarks for identifying and evaluating AI capabilities, especially those capabilities that could cause harm.
- Developing approaches to incorporate secure development practices for generative AI.
- Developing methods and practices for successfully red-teaming artificial intelligence.
- Developing ways to authenticate AI-generated digital content.
- Defining and encouraging AI workforce skills.

“Responsible AI offers enormous potential for humanity, businesses and public services, and Cisco firmly believes that a holistic, simplified approach will help the U.S. safely realize the full benefits of AI,” said Nicole Isaac, vice president, global public policy at Cisco, in a statement to NIST.

SEE: What are the differences between AI and machine learning? (TechRepublic Premium)
“Working together across industry, government and civil society is essential if we are to develop common standards around safe and trustworthy AI,” said Nick Clegg, president of global affairs at Meta, in a statement to NIST. “We’re enthusiastic about being part of this consortium and working closely with the AI Safety Institute.”

An interesting omission from the list of U.S. AI Safety Institute members is the Future of Life Institute, an international nonprofit with funders including Elon Musk, established to prevent AI from contributing to “extreme large-scale risks” such as global war.

The creation of the AI Safety Institute and its place in the federal government
The U.S. AI Safety Institute was created as part of the efforts set in motion by President Joe Biden’s October 2023 executive order on the safe, secure and trustworthy development of AI. The U.S. AI Safety Institute falls under the jurisdiction of the Department of Commerce. Elizabeth Kelly is the institute’s inaugural director, and Elham Tabassi is its chief technology officer.

Who is working on AI safety?

In the U.S., AI safety and regulation at the government level are handled by NIST and, now, the U.S. AI Safety Institute under NIST. The major AI companies in the U.S. have worked with the federal government on encouraging AI safety and capabilities that help the AI industry build the economy.

Academic institutions working on AI safety include Stanford University and the University of Maryland, among others.

A group of international cybersecurity agencies created the Guidelines for Secure AI System Development in November 2023 to address AI safety early in the development cycle.