U.K. Government Introduces AI Self-Assessment Tool

The U.K. government has launched a free self-assessment tool to help businesses responsibly manage their use of artificial intelligence.

The questionnaire is designed for any organisation that develops, provides, or uses AI-based services as part of its standard operations, though it is aimed primarily at smaller companies and start-ups. The results will tell decision-makers the strengths and weaknesses of their AI management systems.

How to use AI Management Essentials

Now available, the self-assessment is one of three parts of the “AI Management Essentials” (AIME) tool. The other two parts are a rating system that provides an overview of how well the business manages its AI and a set of action points and recommendations for organisations to consider. Neither has been released yet.

AIME is based on the ISO/IEC 42001 standard, the NIST AI Risk Management Framework, and the E.U. AI Act. Self-assessment questions cover how the company uses AI, how it manages the associated risks, and how transparent it is about this with stakeholders.

SEE: Delaying AI’s Rollout in the U.K. by Five Years Could Cost the Economy £150+ Billion, Microsoft Report Finds

“The tool is not designed to evaluate AI products or services themselves, but rather to evaluate the organisational processes that are in place to enable the responsible development and use of these products,” according to the Department for Science, Innovation and Technology report.

When completing the self-assessment, organisations should gather input from employees with both technical and broader business knowledge, such as a CTO or software engineer and an HR business manager.

The government wants to include the self-assessment in its procurement policy and frameworks to embed assurance in the private sector. It would also like to make the tool available to public-sector buyers to help them make more informed decisions about AI.

On Nov. 6, the government opened a consultation inviting businesses to provide feedback on the self-assessment, and the results will be used to refine it. The rating and recommendation parts of the AIME tool will be released after the consultation closes on Jan. 29, 2025.

Self-assessment is one of many planned government initiatives for AI assurance

In a paper published this week, the government said that AIME will be one of many resources available on the “AI Assurance Platform” it seeks to develop. These resources will help businesses conduct impact assessments or review AI data for bias.
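The paper doesn’t detail how these resources will work. As a rough, hypothetical illustration of the kind of check that “reviewing AI data for bias” implies, the sketch below compares the rate of positive outcomes across demographic groups in a dataset; all names and data are illustrative and are not drawn from the government’s tooling.

    # Hypothetical sketch of a simple dataset bias check: compare the share
    # of positive outcomes across demographic groups (demographic parity).
    from collections import defaultdict

    def positive_rates(records, group_key="group", label_key="label"):
        """Return the share of positive labels within each group."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for rec in records:
            g = rec[group_key]
            totals[g] += 1
            positives[g] += int(rec[label_key] == 1)
        return {g: positives[g] / totals[g] for g in totals}

    # Toy data: a wide gap between group rates would flag the data for review.
    data = [
        {"group": "A", "label": 1}, {"group": "A", "label": 1},
        {"group": "A", "label": 0}, {"group": "B", "label": 1},
        {"group": "B", "label": 0}, {"group": "B", "label": 0},
    ]
    print(positive_rates(data))  # {'A': 0.667, 'B': 0.333} (approximately)

A large gap between group rates is the sort of signal an assurance tool might surface for human review before a model is trained on the data.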

The government is also creating a Terminology Tool for Responsible AI to define and standardise key AI assurance terms to improve communication and cross-border trade, particularly with the U.S.

“Over time, we will create a set of accessible tools to enable baseline good practice for the responsible development and deployment of AI,” the authors wrote.

The government says that the U.K.’s AI assurance market, a sector that currently comprises 524 firms providing tools for developing or using AI safely, will grow the economy by more than £6.5 billion over the next decade. This growth can be partly attributed to boosting public trust in the technology.

The report adds that the government will partner with the AI Safety Institute — launched by former Prime Minister Rishi Sunak at the AI Safety Summit in November 2023 — to advance AI assurance in the country. It will also allocate funding to expand the Systemic Safety Grants programme, which currently has up to £200,000 available for initiatives that develop the AI assurance ecosystem.

AI safety commitments to become legally binding in the next year

Meanwhile, at the Financial Times’ Future of AI Summit on Wednesday, Peter Kyle, the U.K.’s tech secretary, pledged to make the voluntary agreement on AI safety testing legally binding by implementing the AI Bill within the next year.

November’s AI Safety Summit saw AI companies — including OpenAI, Google DeepMind, and Anthropic — voluntarily agree to allow governments to test the safety of their latest AI models before their public release. Kyle was first reported to have shared his plans to put these voluntary agreements on a statutory footing with executives from prominent AI companies at a meeting in July.

SEE: OpenAI and Anthropic Sign Deals With U.S. AI Safety Institute, Handing Over Frontier Models For Testing

He also said that the AI Bill will focus on the large ChatGPT-style foundation models created by a handful of companies and turn the AI Safety Institute from a DSIT directorate into an “arm’s length government body.” Kyle reiterated these points at this week’s Summit, according to the FT, highlighting that he wants to give the Institute “the independence to act fully in the interests of British citizens”.

In addition, he pledged to invest in advanced computing power to support the development of frontier AI models in the U.K., responding to criticism over the government scrapping £800 million of funding for an Edinburgh University supercomputer in August.

SEE: UK Government Announces £32m for AI Projects After Scrapping Funding for Supercomputers

Kyle stated that while the government can’t invest £100 billion alone, it will partner with private investors to secure the necessary funding for future initiatives.

A year in AI safety legislation for the U.K.

A raft of agreements and guidance committing the U.K. to developing and using AI responsibly has been published over the last year.

On Oct. 30, 2023, the Group of Seven countries, including the U.K., created a voluntary AI code of conduct comprising 11 principles that “promote safe, secure and trustworthy AI worldwide.”

The AI Safety Summit, which saw 28 countries commit to ensuring safe and responsible development and deployment, was kicked off just a couple of days later. Later in November, the U.K.’s National Cyber Security Centre, the U.S.’s Cybersecurity and Infrastructure Security Agency, and international agencies from 16 other countries released guidelines on how to ensure security during the development of new AI models.

SEE: UK AI Safety Summit: Global Powers Make ‘Landmark’ Pledge to AI Safety

In March, the G7 nations signed another agreement committing to exploring how AI can improve public services and boost economic growth. The agreement also covered the joint development of an AI toolkit to ensure the models used are safe and trustworthy. The following month, the then-Conservative government agreed to work with the U.S. in developing tests for advanced AI models by signing a Memorandum of Understanding.

In May, the government released Inspect, a free, open-source testing platform that evaluates the safety of new AI models by assessing their core knowledge, ability to reason, and autonomous capabilities. It also co-hosted another AI Safety Summit in Seoul, which involved the U.K. agreeing to collaborate with global nations on AI safety measures and announcing up to £8.5 million in grants for research into protecting society from its risks.
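Inspect is distributed as an open-source Python framework. Below is a minimal sketch of how an evaluation can be defined with it, assuming the inspect_ai package and following its published quick-start API; the question, task name, and model identifier are illustrative.

    # Minimal Inspect evaluation sketch, assuming the open-source
    # inspect_ai package (pip install inspect-ai).
    from inspect_ai import Task, task, eval
    from inspect_ai.dataset import Sample
    from inspect_ai.scorer import match
    from inspect_ai.solver import generate

    @task
    def core_knowledge():
        # A toy "core knowledge" probe: one question with a known answer.
        return Task(
            dataset=[Sample(input="What is the capital of France?", target="Paris")],
            solver=generate(),  # have the model answer the question directly
            scorer=match(),     # score by matching the model output to the target
        )

    # Run the task against a model of your choice (identifier is illustrative):
    # eval(core_knowledge(), model="openai/gpt-4o-mini")

In practice, safety evaluations of the kind described here would swap in much larger datasets and scorers suited to reasoning or autonomy tests, but the task structure stays the same.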

Then, in September, the U.K. signed the world’s first international treaty on AI alongside the E.U., the U.S., and seven other countries, committing them to adopting or maintaining measures that ensure the use of AI is consistent with human rights, democracy, and the law.

And it’s not over yet: alongside the AIME tool and report, the government has announced a new AI safety partnership with Singapore through a Memorandum of Cooperation. The U.K. will also be represented at the first meeting of international AI Safety Institutes in San Francisco later this month.

AI Safety Institute Chair Ian Hogarth said, “An effective approach to AI safety requires global collaboration. That’s why we’re putting such an emphasis on the International Network of AI Safety Institutes, while also strengthening our own research partnerships.”

However, the U.S. has moved further away from AI collaboration with its recent directive limiting the sharing of AI technologies and mandating protections against foreign access to AI resources.
