It’s all about trust

By Clare Naden

Artificial intelligence (AI) has the potential to aid progress in everything from the medical sphere to saving our planet, yet as the technology becomes ever more complex, questions of trust arise. Increased regulation has helped to rebuild this trust, but grey areas remain. How can we ensure AI is trustworthy without impeding its progress?

Close up view of 52 Facebook notifications on a smart phone.

Using our personal data without authorization to spam us with products to buy is one thing, but quite another is when it is used in an attempt to manipulate politics. This was best demonstrated in the Cambridge Analytica affair, where millions of Facebook profiles of US voters were harvested to build a software system that could target them with personalized political advertising. The dangers of this were well recognized by the US consumer regulator, which slammed Facebook with a USD 5 billion fine, but trust in how organizations use our data was rattled, to say the least. The scandal also exposed the power, and dangers, of badly used artificial intelligence (AI).

But AI is here to stay. Used well, it can help to improve our lives and solve some of the world’s toughest issues. It enables humans and machines to work collaboratively, with the potential to enhance the capabilities of humans and technology beyond what we can even imagine. For organizations, this can mean increased productivity, reduced costs, improved speed to market and better customer relations, amongst other things. This is reflected in a Forbes Insights survey titled “On Your Marks: Business Leaders Prepare For Arms Race In Artificial Intelligence”, which revealed that 99 % of executives in technical positions said their organizations were going to increase AI spending in the coming year.

The technology is developing at lightning speed, raising as many questions about safety and security as it promises benefits. If the point is to outperform humans on decisions and estimations such as predicting disease outbreaks or steering trains, how can we be sure we remain in control?

In AI we trust?

Leading industry experts believe that ensuring trustworthiness from the outset is essential to the widespread adoption of this technology. With this in mind, ISO and the International Electrotechnical Commission (IEC) set up joint technical committee ISO/IEC JTC 1, Information technology, subcommittee SC 42, Artificial intelligence, to serve as a focal point for AI standardization. Among its many mandates, the group of experts is investigating different approaches to establish trust in AI systems.

Convenor of the trustworthiness working group within SC 42, Dr David Filip, research fellow at the ADAPT Centre in Trinity College Dublin, a dynamic research institute for digital technology, sums it up: “When software began ‘eating the world’, trustworthiness of software started coming to the forefront. Now that AI is eating the software, it is no big surprise that AI needs to be trustworthy.”

“However,” he notes, “my impression is that people fear AI for the wrong reasons. They fear doomsday caused by some malicious artificial entity… A far bigger issue, I feel, is that the lack of transparency will allow a deep-learning system to make a decision that should be checked by a human but isn’t.”

Naturally, the level of harm depends on the way in which AI is used. A poorly designed tool that recommends music or restaurants to users will obviously cause less harm than an algorithm that helps to diagnose cancer. There is also the danger of using data to manipulate outcomes, such as in the Cambridge Analytica case.

Threats to trustworthiness

Fully automatic bottling plant in operation.

According to the Organisation for Economic Co-operation and Development (OECD), an intergovernmental organization dedicated to furthering economic progress and world trade, the malicious use of AI is expected to increase as the technology becomes less expensive and more accessible [1]. Malicious use, personal data leaks and cybersecurity breaches are among the key threats to the trustworthiness of AI systems.

A self-driving car involved in an accident, for example, could be hacked and the information relevant to liability tampered with. A system that aggregates patient data and uses it to recommend treatments or make diagnoses could suffer errors or bugs that result in disastrous outcomes.

Other risks include the effects of data or algorithmic bias, a phenomenon that occurs when an algorithm produces results that are systematically compromised due to erroneous assumptions in the machine-learning process. When shaped by racist, prejudiced or otherwise subjective data or behaviour, this can have a profound effect on everything, from what you see in your social media feed to the profiling of criminals in policing systems or the processing of immigration claims.

AI systems that require access to personal information also pose risks to privacy. In healthcare, for example, AI has the potential to help advance new treatments by drawing on patient data and medical records. But this also creates the possibility that the data will be misused. Privacy laws reduce that risk, yet they also constrain the technology. If AI systems are designed to be robust, secure and transparent from the outset, the risk of misuse shrinks, their potential can flourish and we can fully reap the benefits.

What is being done

Woman holding her smartphone and printing on a 3D printer.

The industry is very aware of the need for trustworthiness, and many technologies have been developed, and are steadily evolving, to support it. One is differential privacy, which introduces carefully calibrated randomness into aggregated data to reduce the risk of re-identifying any individual while preserving the usefulness of the overall results. Others include cryptographic tools such as homomorphic encryption and multiparty computation, which allow machine-learning algorithms to analyse data while it remains encrypted, and thus secure, and trusted execution environments, which protect and verify the execution of legitimate software.
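
To make the first of those techniques more concrete, the short Python sketch below adds Laplace noise to a simple counting query, which is the basic idea behind differential privacy. It is a minimal illustration only: the function name, the privacy parameter epsilon and the synthetic survey data are assumptions made for this example, not part of any product or standard mentioned in this article.

```python
# Minimal sketch of differential privacy via the Laplace mechanism.
# Illustrative only; names, epsilon value and data are hypothetical.
import numpy as np

def private_count(responses, epsilon=1.0):
    """Return a noisy count of positive responses.

    A counting query has sensitivity 1 (adding or removing one person
    changes the result by at most 1), so Laplace noise with scale
    1/epsilon masks any single individual's contribution while keeping
    the aggregate approximately correct.
    """
    true_count = int(np.sum(responses))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Synthetic example: 1000 survey answers, roughly 30 % of them positive.
rng = np.random.default_rng(seed=42)
responses = rng.random(1000) < 0.3
print("exact count:  ", int(responses.sum()))
print("private count:", round(private_count(responses, epsilon=0.5), 1))
```

The smaller the epsilon, the larger the noise and the stronger the privacy guarantee, at the cost of a less precise aggregate; choosing that trade-off is exactly the kind of design decision trustworthiness guidance aims to make explicit.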

The European Union (EU) formed a High-Level Expert Group on Artificial Intelligence (AI HLEG) to support the implementation of Europe’s strategy on artificial intelligence, which includes ethical, legal and social dimensions. Earlier this year, it published its Policy and Investment Recommendations for Trustworthy Artificial Intelligence, which set out the group’s vision for a regulatory and financial framework for trustworthy AI.

On an international scale, the Partnership on AI to Benefit People and Society is dedicated to advancing the public understanding of AI and formulating best practices for future technologies. Bringing together diverse global voices, it works to “address such areas as fairness and inclusivity, explanation and transparency, security and privacy, values and ethics, collaboration between people and AI systems, interoperability of systems, and of the trustworthiness, reliability, containment, safety and robustness of the technology”, thus providing support opportunities for AI researchers and other key stakeholders.

“We are a co-founder of the Partnership on AI,” says Olivier Colas, Senior Director International Standards at Microsoft, who also plays an active role in SC 42, “and we’ve forged industry partnerships with both Amazon and Facebook to make AI more accessible to everyone.” He asserts that “as AI systems become more mainstream, we as a society have a shared responsibility to create trusted AI systems and need to work together to reach a consensus about what principles and values should govern AI development and use. The engineering practices that can be codified in International Standards should support these principles and values”. Microsoft, he says, has set up an internal advisory committee to help ensure its products adhere to these principles and takes part in industry-wide discussions on international standardization.

The standards factor

Engineer works a robotic arm from a tablet.

Standards, then, are the key. Dr Filip explains why: “We can never guarantee user trust, but with standardization we can analyse all the aspects of trustworthiness, such as transparency, robustness, resilience, privacy, security and so on, and recommend best practices that make AI systems behave in the intended and beneficial way.”

Standards help build partnerships between industry and policy makers by fostering a common language and shared solutions that address regulatory concerns, such as privacy, as well as the technology needed to support them, without stifling innovation. Colas believes standards will play an important role in codifying engineering best practice to support how AI is developed and used. They will also complement emerging policies, laws and regulations around AI.

“International Standards have been successfully used to codify risk assessment and risk management for decades. The ISO/IEC 27000 series on information security management is a great example of such an approach for cybersecurity and privacy,” he says. It helps organizations manage the security of their assets, such as financial information, intellectual property, employee details or information entrusted by third parties. “What’s more, AI is a complex technology,” observes Colas. “Standards for AI should provide tools for transparency and a common language; then they can define the risks, with ways to manage them.”

The time is now

Rear view of humanoid robot with screen on torso displaying directions to ice cream.

The ISO/IEC JTC 1/SC 42 work programme outlines several topics for AI, many of which are currently under development in its working group WG 3, Trustworthiness. Projects include a number of normative documents directly aimed at helping stakeholders in the AI industry build trust into their systems. One example is future technical report ISO/IEC TR 24028, Information technology – Artificial intelligence (AI) – Overview of trustworthiness in artificial intelligence, which analyses the factors that may contribute to the erosion of trust in AI systems and details possible ways of improving it. The document covers all stakeholders and examines AI vulnerabilities such as threats to security and privacy, unpredictability, system hardware faults and much more.

SC 42 takes a horizontal approach by working closely with as many people as possible across industry, government and related technical committees, so as to build on what already exists rather than duplicating it. This includes ISO/TC 262, Risk management, whose standard ISO 31000 on risk management serves as a basis for the development of ISO/IEC 23894, Information technology – Artificial intelligence – Risk management. The new guidelines will help organizations better assess typical risks and threats to their AI systems and effectively integrate risk management for AI into their processes.

The standard will be joined by other important technical reports on the assessment of the robustness of neural networks (ISO/IEC TR 24029-1) and the bias in AI systems and AI-aided decision making (ISO/IEC TR 24027). All of these will complement the future ISO/IEC TR 24368, designed to tackle the ethical and societal concerns thrown up by AI (see article To ethicize or not to ethicize…).

Early consideration of trustworthiness in standardization is essential for ensuring the successful role of artificial intelligence in society. “Humans need trust to survive in every sense,” remarks Dr Filip. This includes trust in technology and infrastructure to be safe and reliable. “We rely on our politicians to put laws and systems in place that protect us, and we rely on the good of humans around us to function in everyday society. Now, we need to be able to trust software and digital technology in all its forms. Standards provide us with a way of achieving that.”

  1. OECD, Artificial Intelligence in Society. Paris: OECD Publishing, 2019.