Eighty years ago, in 1942, the acclaimed science fiction writer Isaac Asimov anticipated the potential risks of autonomous, near-sentient robots and proposed his Three Laws of Robotics to limit their danger to humanity. In Asimov's fictional ethical code, a robot may never harm a human (first law), must always obey humans unless doing so conflicts with the first law (second law), and must protect its own existence as long as doing so does not conflict with the first two laws (third law).
Asimov understood that technological advancement was inevitable; the problem he wanted to avoid was the unintended consequences of poorly designed technology. By proposing his Three Laws of Robotics, he was in effect forcing machines to operate under a moral code of conduct. Extend that logic to AI, and it amounts to asking the architects of game-changing AI applications to take greater moral responsibility for their work from the drawing board onward, not after a poorly thought-out or potentially dangerous AI algorithm has been released and caused irreparable damage to our way of life.
“AI Ethics can be defined as a socio-technical lens on the design and impact of AI solutions on our societies, involving the application of principles and techniques to ensure responsible development and use of AI technologies,” according to Ria Cheruvu, Lead Architect - AI Ethics, Intel. AI Ethics can cover various domains, she says, including Transparency; Sustainability; Fairness and Bias; and Security, Safety, and Privacy.
The importance of ethical AI extends beyond the technical or organizational standpoint to individual and societal perspectives on some serious topics: algorithmic bias that causes harm and discrimination to certain populations; transparency and trust concerns with AI systems, which can severely impact consumer confidence and brand reputation; technical failures and vulnerability to attacks, which are particularly critical to assess for high-risk scenarios and sensitive data types such as medical records; the climate impact of AI systems, such as their carbon footprint; and emerging copyright and licensing concerns for AI technologies, among other things.
Mariagrazia Squicciarini, Chief of Executive Office and Director AI, UNESCO, explains this further. “AI has an advantage that is much more pervasive – so it’s not only one sector, it can help all sectors. And what you ultimately need is skilled human resources, good computational ability and access to data,” she says.
Ethical AI also means framing systems so that people can withhold their data when they do not want to share it, know what their data is used for, and consent to that use. Ethical AI can level the playing field, allowing developing countries to leapfrog ahead and benefit from technological progress alongside advanced economies, according to Squicciarini.
“At the levels of organizations and businesses, Ethical AI can be approached from defining principles and guidelines to ensure responsible development practices are followed within an organization. These processes and objectives are known as ethical governance,” says Intel’s Ria Cheruvu.
She further explains the need for multi-disciplinary teams to drive ethical compliance and governance within an organization, ensuring that the development and deployment of internal and external AI technologies does not harm the stakeholders they influence or create additional ethical issues. “At the technical level, toolkits, frameworks, and methodologies can play a key role in accelerating the identification of issues with AI systems early on (e.g., transparency mechanisms) through qualitative and quantitative metrics, as well as helping mitigate certain types of issues (e.g., bias). Technical tools are not a one-size-fits-all solution for AI Ethics – consequently, a key next step identified in the Responsible and Ethical AI space is to strike the right balance between tooling and societal aspects,” says Cheruvu.
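To make the idea of quantitative fairness metrics concrete, here is a minimal sketch of one widely used measure, the demographic parity difference: the gap in positive-prediction rates between two groups. The function names and the data below are illustrative assumptions, not drawn from any specific toolkit mentioned in the article.

```python
# Minimal sketch of one quantitative fairness metric:
# demographic parity difference. All names and data here
# are illustrative, not from any particular toolkit.

def selection_rate(predictions):
    """Fraction of positive (1) predictions in a group."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates between two groups.
    0.0 means both groups are selected at the same rate."""
    return abs(selection_rate(preds_group_a) - selection_rate(preds_group_b))

# Hypothetical binary model outputs for two demographic groups:
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 0.25

gap = demographic_parity_difference(group_a, group_b)
print(f"demographic parity difference: {gap:.3f}")  # 0.375
```

A metric like this is only a starting point: as Cheruvu notes, such tools surface issues early but cannot by themselves decide what counts as fair for a given use case.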
A critical challenge in standardizing ethical AI concepts today is identifying the right definitions and scope of an AI system in relation to stakeholder personas and the AI lifecycle, according to Ria Cheruvu of Intel.
“These challenges of developing ethical AI standards also open up important opportunities, including reaching alignment at national and international levels towards how AI systems should be implemented, guidelines, and guardrails towards the system. A great example of this is determining categorizations of ethical AI risk that can apply to different sets of AI systems for varying use cases, enabling implementation of ethical AI guardrails,” argues Cheruvu.
UNESCO’s Squicciarini echoes Cheruvu’s sentiment: translating AI principles into government guidelines remains a challenge around the world.
“Let's say by the time you have to move from the principles to the practice, you have to actually start by serving what already exists. The challenge that AI has is that it touches so many parts of governments. And we know that governments very often work in silos, not because they are nasty, but simply because it is very complicated to bring everybody on board,” says Squicciarini, noting that coordinating inside governments is not easy because of the coordination costs involved.
But the good news is that a lot of work is being done around the world, with many government initiatives to regulate AI, according to Squicciarini. “I think we shouldn't start from scratch, we shouldn't reinvent the wheel. We should check what works and share the good practices from one part of the world to another,” she emphasizes.
Intel’s Ria Cheruvu is pushing the scope of ethical AI beyond businesses and their bottom lines. “My team within the Network and Edge engineering group at Intel creates technical toolkits and frameworks in the AI Observability and Robustness spaces to help our users better comprehend AI model outcomes, as we calibrate our tooling capabilities with customers. We see great value in collaborations across industry, academic, government, and other organizations to drive alignment towards the use of AI, and the right balance of technology that we can introduce to help reach ethical AI,” explains Cheruvu.