UK Government Unveils Artificial Intelligence Self-Assessment Tool

The UK Government has launched a free self-assessment tool to help businesses manage their use of artificial intelligence responsibly.

The questionnaire is intended for any organization that develops, provides, or uses services that rely on AI as part of its standard operations, but it is primarily aimed at small companies and start-ups. The results show decision-makers the strengths and weaknesses of their AI management systems.

How to use AI Management Essentials

The self-assessment, which is available now, is one of three parts of the AI Management Essentials (AIME) tool. The other two parts are a rating system that provides insight into how well a company is managing its AI and a set of actions and recommendations for organizations to consider. Neither has been released yet.

AIME is based on the ISO/IEC 42001 standard, the NIST AI Risk Management Framework, and the EU AI Act. The self-assessment questions cover how the company uses AI, manages its risks, and is transparent about this with stakeholders.
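The rating logic behind AIME has not been published, but a questionnaire-driven assessment of this kind typically maps yes/no answers in each area to a coarse strength-or-weakness rating. The Python sketch below is purely illustrative: the questions, areas, and thresholds are invented for demonstration and are not taken from AIME.

```python
# Illustrative sketch only -- the real AIME rating system has not been
# released; these questions and thresholds are invented to show the
# general shape of a questionnaire-based self-assessment.

# Each question belongs to one of the three areas the self-assessment
# covers: how AI is used, how risks are managed, and transparency.
QUESTIONS = {
    "usage": [
        "Do you maintain an inventory of the AI systems you develop or use?",
        "Is there a named owner responsible for each AI system?",
    ],
    "risk": [
        "Do you assess risks before deploying a new AI system?",
        "Do you have a process for reporting and handling AI incidents?",
    ],
    "transparency": [
        "Do you tell affected stakeholders when AI informs a decision?",
        "Can stakeholders contest or appeal AI-assisted decisions?",
    ],
}

def rate(answers: dict[str, list[bool]]) -> dict[str, str]:
    """Turn yes/no answers per area into a coarse strength/weakness rating."""
    ratings = {}
    for area, questions in QUESTIONS.items():
        score = sum(answers[area]) / len(questions)
        ratings[area] = ("strength" if score >= 0.75
                         else "adequate" if score >= 0.5
                         else "weakness")
    return ratings

# Example: strong on usage, weaker on transparency.
print(rate({
    "usage": [True, True],
    "risk": [True, False],
    "transparency": [False, False],
}))
# {'usage': 'strength', 'risk': 'adequate', 'transparency': 'weakness'}
```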

SEE: Delaying the UK’s adoption of artificial intelligence by five years could cost the economy more than £150 billion, Microsoft report says

“The tool is not intended to evaluate AI products or services themselves, but rather to evaluate the organizational processes in place to ensure the responsible development and use of these products,” according to a report from the Department for Science, Innovation and Technology (DSIT).

Completing the self-assessment requires input from employees with both technical and broad business knowledge, such as a chief technology officer or software engineer on one side and an HR manager on the other.

The government wants to incorporate the self-assessment into its procurement policies and frameworks to embed assurance in the private sector. It would also like to make the tool available to public sector buyers to help them make more informed decisions about AI.

On 6 November, the government opened a consultation inviting businesses to give feedback on the self-assessment, and the results will be used to refine it. The rating and recommendation parts of the AIME tool will be released after the consultation closes on 29 January 2025.

The self-assessment is one of many planned government initiatives for AI assurance.

In a paper published this week, the government said AIME will be one of many resources available on the “AI Assurance Platform” it aims to develop. The platform will help businesses conduct impact assessments or review AI data for bias.
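The paper does not specify how such bias checks would work. As one illustration of the kind of check a platform like this might support, a basic fairness audit can compare favorable-outcome rates across protected groups in a dataset. In this minimal Python sketch, the records, group names, and the 0.10 threshold mentioned in the comment are all invented for demonstration.

```python
# Minimal sketch of one kind of bias check: demographic parity difference
# on a labeled dataset. The data below is invented for illustration.
from collections import defaultdict

# Each record: (protected group, favorable outcome?)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, favorable = defaultdict(int), defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    favorable[group] += outcome  # True counts as 1

rates = {group: favorable[group] / totals[group] for group in totals}
# Demographic parity difference: gap between best- and worst-treated groups.
gap = max(rates.values()) - min(rates.values())

print(rates)                     # {'group_a': 0.75, 'group_b': 0.25}
print(f"parity gap: {gap:.2f}")  # 0.50 -- well above a common 0.10 threshold
```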

The government is also creating a Terminology Tool for Responsible AI to define and standardize key AI safety terms to improve communication and cross-border trade, especially with the US.

“Over time, we will build a set of accessible tools that enable basic best practices for responsible AI development and deployment,” the authors write.

The government says the UK’s AI assurance market, a sector that provides tools for developing or using AI safely and currently comprises 524 firms, will grow the economy by more than £6.5 billion over the next decade. Part of this growth can be attributed to increased public trust in the technology.

The report adds that the government will partner with the AI Safety Institute, launched by former Prime Minister Rishi Sunak at the AI Safety Summit in November 2023, to promote AI assurance in the country. It will also allocate funding to expand the Systemic AI Safety Grants programme, which currently offers up to £200,000 to initiatives that develop the AI assurance ecosystem.

Legally binding AI safety law coming next year

Meanwhile, Peter Kyle, the UK’s technology secretary, pledged to make the voluntary agreement on AI safety testing legally binding by introducing an AI bill next year, speaking at the Financial Times’ Future of AI Summit on Wednesday.

At the November AI Safety Summit, AI companies including OpenAI, Google DeepMind, and Anthropic voluntarily agreed to allow governments to test the safety of their latest AI models before their public release. Kyle was first reported to have told executives of prominent AI companies about his plans to legislate the voluntary agreements at a meeting in July.

SEE: OpenAI and Anthropic sign deals with the US AI Safety Institute, sharing frontier models for testing

He also said the AI bill will focus on the large ChatGPT-style foundation models created by a handful of companies, and will turn the AI Safety Institute from a DSIT directorate into an “independent government body.” Kyle reiterated these points at this week’s summit, according to the FT, stressing that he wants to give the Institute “the independence to act entirely in the interests of British citizens.”

He also pledged to invest in advanced computing power to support the development of frontier AI models in the UK, responding to criticism over the government’s decision in August to shelve £800 million in funding for a University of Edinburgh supercomputer.

SEE: UK government announces £32m funding for artificial intelligence projects after supercomputing funding withdrawn

Kyle said that while the Government could not invest £100 billion on its own, it would work with private investors to secure the necessary funding for future initiatives.

A year of AI safety legislation in the UK

Over the past year, a raft of legislation and agreements has been drawn up committing the UK to developing and using AI responsibly.

On 30 October 2023, the G7 countries, including the UK, created a voluntary code of conduct for AI comprising 11 principles that “promote safe, secure and trustworthy AI around the world.”

Just a couple of days later, the AI Safety Summit kicked off, with 28 countries committing to the safe and responsible development and deployment of AI. Later in November, the UK’s National Cyber Security Centre, the US’s Cybersecurity and Infrastructure Security Agency, and international agencies from 16 other countries released guidelines on how to ensure security during the development of new AI models.

SEE: UK AI Safety Summit: World powers make ‘landmark’ pledge to keep AI safe

In March, the G7 countries signed another agreement pledging to explore how AI can improve government services and boost economic growth. The agreement also covered the joint development of AI tools to ensure the models used are safe and trustworthy. The following month, the then Conservative government agreed to work with the US on developing tests for advanced AI models by signing a Memorandum of Understanding.

In May, the government published Inspect, a free, open-source testing platform that evaluates the safety of new AI models by assessing their core knowledge, ability to reason, and autonomous capabilities. It also co-hosted another AI safety summit in Seoul, at which the UK agreed to collaborate with other nations on AI safety measures and announced grants of up to £8.5 million for research into protecting society from AI risks.
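Inspect evaluations are defined in Python as tasks that pair a dataset of samples with a solver and a scorer. The following minimal sketch assumes the current inspect_ai package layout (Task, Sample, generate, match); the toy question is a placeholder, and older releases of the package named the solver argument plan rather than solver.

```python
# A minimal Inspect task: one toy sample, scored by matching the target.
# Sketch assuming the current inspect_ai API; the question is a placeholder.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def basic_knowledge():
    return Task(
        # Dataset: a list of Samples (input prompt plus expected target).
        dataset=[
            Sample(
                input="What is the capital of France? Answer in one word.",
                target="Paris",
            )
        ],
        # Solver: simply ask the model under test for a completion.
        solver=generate(),
        # Scorer: check whether the target appears in the model's answer.
        scorer=match(),
    )
```

A task like this would typically be run from the command line, for example `inspect eval basic_knowledge.py --model openai/gpt-4o` (model name illustrative), with results inspected in the package’s log viewer.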

Then, in September, Britain signed the world’s first international treaty on AI, alongside the EU, the US, and seven other countries, committing signatories to adopt or maintain measures ensuring that the use of AI is consistent with human rights, democracy, and the rule of law.

And the work does not end there: alongside the AIME tool and report, the government announced a new AI safety partnership with Singapore through a Memorandum of Cooperation. The UK will also be represented at the first meeting of the international network of AI Safety Institutes in San Francisco later this month.

AI Safety Institute Chairman Ian Hogarth said: “An effective approach to AI safety requires global collaboration. This is why we are placing such emphasis on the International Network of AI Safety Institutes, as well as strengthening our own research partnerships.”

However, the United States has recently moved away from cooperation on AI, issuing a directive that limits the sharing of AI technologies and mandates protections against foreign access to AI resources.