INTERVIEWS
8 June 2021 | Categories: Interviews


With the rise of AI across all facets of society, ethics is proving to be the new frontier of technology. Public awareness, press scrutiny, and upcoming regulations are compelling organisations and the data science community to embed ethical principles in AI initiatives. Olivier Penel, SAS Data & Analytics Strategic Advisor, speaks to Ryan Noik about the responsible use of AI, and unpacks the principles involved in ensuring it.

RN: How do you define Responsible Artificial Intelligence, and why is it so important to distinguish it from AI in general?

OP: Well, to start with, I don’t think there is such a thing as “Responsible AI” (RAI) per se, in the same way that there is no such thing as a responsible calculator. When I use a calculator for my tax return and inadvertently declare less revenue than I should, can we blame the calculator for this? Can we say that it did not behave responsibly? No, it was my use of the calculator that was to blame, which is why I think we should talk about the Responsible Use of AI rather than Responsible AI. Now, to be fair, the difference between my calculator and a typical AI system is that with my calculator, I was the one providing clear instructions in terms of the numbers entered and the operations selected. I did not ask the calculator to learn from previous calculations and to predict a result.

When we talk about “Responsible AI”, what we refer to is the way we’re using AI technologies, and the adherence to certain principles that relate to the greater good, the protection of individuals and their fundamental rights, and generally the trustworthiness of the AI application.

RN: What are the principles of the responsible use of AI?

OP: There are several frameworks that have been developed for RAI over the last few years, but I think it boils down to eight principles:

1). Being human-centred

This means providing meaningful interactions with users, for instance using natural language so that even non-specialists can make sense of the AI system. It is also important to have human oversight of the AI system, with clear processes in place to review the impact of AI or to appeal the decisions it makes.
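To make the oversight part concrete, here is a minimal human-in-the-loop sketch (my illustration, not something from the interview): predictions the model is unsure about are routed to a human reviewer instead of being acted on automatically. The confidence threshold and the scikit-learn-style predict_proba() interface are assumptions.

```python
CONFIDENCE_THRESHOLD = 0.9  # hypothetical value; tuned per use case and risk level

def decide(model, features):
    """Act on confident predictions automatically; refer the rest to a human.

    Assumes a scikit-learn-style classifier exposing predict_proba().
    """
    proba = model.predict_proba([features])[0]
    label, confidence = int(proba.argmax()), float(proba.max())
    if confidence < CONFIDENCE_THRESHOLD:
        # Low confidence: a human reviews (and can appeal or override) the call.
        return {"decision": "refer_to_human", "model_suggestion": label}
    return {"decision": label, "confidence": confidence}
```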

2). Being accountable

This requires a governance framework with clear roles and responsibilities, providing clarity about who is responsible for the predictions and decisions made with the AI system. Accountability is about putting the necessary safeguards in place, as well as giving guiding principles to everyone involved. Providing them with adequate training is also critical.

3). Being fair and impartial

The AI system should not discriminate between categories of people. This should be achieved through proactive monitoring of bias in the data used to train models, in the model’s outputs across different groups of people, and in the fairness of the decisions made with predictive models. Of course, all data sets are biased, but this can be detected and mitigated. Ultimately, the data should be representative of the population to which the model is going to be applied.
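To illustrate what monitoring model outputs across groups can look like (a sketch of mine, not a SAS method), one common check compares each group’s positive-outcome rate to that of the best-off group and flags ratios below the widely cited four-fifths threshold. The column names and data here are hypothetical.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Ratio of each group's positive-outcome rate to the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical scored data: one row per applicant, 'approved' is the model's decision.
scored = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   0,   0,   1,   1,   1,   1,   1],
})

ratios = disparate_impact(scored, "group", "approved")
print(ratios[ratios < 0.8])  # groups falling below the four-fifths rule
```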

4). Being transparent and explainable

For example, you should be able to answer questions such as:

  • What data was used to train a model?
  • How well are the inner workings, attributes, and correlations of the model known and documented?
  • What variables positively or negatively influenced a specific prediction? (One way to approach this is sketched after the list.)
  • What rules/logic were used along with the prediction to drive a specific decision?
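By way of illustration only (the interview names no particular tooling), permutation importance is one standard way to see which variables drive a model: shuffle one feature at a time and measure how much held-out performance drops. Per-prediction attribution methods such as SHAP or LIME answer the question for a specific prediction; the sketch below, using scikit-learn on synthetic data, shows the simpler global variant.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real scoring dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does randomly shuffling each feature hurt held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature_{i}: {mean_drop:+.3f}")
```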

5). Being robust and reliable

The AI system must produce consistent and reliable outputs. The deployment of AI models should also be automated to avoid error-prone manual activities. Finally, the models in production must be proactively monitored, not only for accuracy but also for fairness, with processes in place to retrain, rebuild, or replace a model when needed.
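As a sketch of what such monitoring might look like (the thresholds and interface are my assumptions, not from the interview), a deployed model can be checked on recent labelled data for both overall accuracy and accuracy gaps across groups, returning a flag that triggers retraining or review.

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Hypothetical thresholds; real values are set per use case and risk appetite.
ACCURACY_FLOOR = 0.85
MAX_GROUP_GAP = 0.05

def needs_attention(model, X_recent, y_recent, groups) -> bool:
    """True if overall accuracy has degraded, or accuracy varies too much
    across groups (a fairness signal). Expects NumPy arrays; `groups` holds
    a per-row group label."""
    preds = model.predict(X_recent)
    if accuracy_score(y_recent, preds) < ACCURACY_FLOOR:
        return True  # accuracy drift: retrain, rebuild, or replace
    per_group = [
        accuracy_score(y_recent[groups == g], preds[groups == g])
        for g in np.unique(groups)
    ]
    return max(per_group) - min(per_group) > MAX_GROUP_GAP
```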

6). Being safe and secure

The AI systems must be protected from potential risks that may cause physical or digital harm, including the possibility of cyberattacks.

7). Being compliant

The AI systems must be compliant with key regulations, particularly privacy laws such as the POPI Act and the GDPR. There must be an option for individuals to opt in or out of sharing their data, and the company must not use customer data beyond its intended and stated purpose.

Being compliant is not only about regulatory requirements. There is also a pressing demand from stakeholders to do the right thing, to follow certain rules, and to protect the brand.

8). Being ethical

The AI systems must be ethical. Ethics is a concept that might have very different meanings depending on where you live, your culture, and your values.

For AI to be ethical, it should therefore have the ability to comply with a specific code of ethics, whatever that may be. Such a code typically covers areas like human rights, societal well-being, and sustainability.

RN: How do we address bias, and implement responsible AI principles?

OP: Bias speaks to the impartiality of the decisions being made and is something that must be considered across the end-to-end process, from data to decisions. To mitigate the risk of bias, the owners of AI applications must be proactive in selecting training data sets that are truly representative of the population at which the AI system is aimed.

If the appropriate data sets are not available, there are techniques for generating synthetic data to make the training data more representative.
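One widely used example of such a technique (the interview doesn’t name a specific one) is SMOTE, which synthesises new minority-class rows by interpolating between existing ones; the same oversampling idea can be applied to under-represented groups. A minimal sketch using the imbalanced-learn library on synthetic data:

```python
from collections import Counter

from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn
from sklearn.datasets import make_classification

# Hypothetical training set with a 9:1 class imbalance.
X, y = make_classification(
    n_samples=1000, n_features=8, weights=[0.9, 0.1], random_state=0
)
print("before:", Counter(y))

# SMOTE interpolates between existing minority-class examples to create
# synthetic ones, rebalancing the training data.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after:", Counter(y_res))
```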

Bias can then be monitored throughout the lifecycle, for example by checking whether the model behaves consistently across different groups of people and what impact its decisions have had on them.

But this level of scrutiny and oversight can only be achieved through a proactive governance framework; it cannot be left as an afterthought.

RN: Can you elaborate on how ethics is the new frontier for technology? Is that being driven by particular factors?

OP: Much of this comes down to a ‘can’ versus ‘should’ discussion. Thanks to technology innovations, virtually anything has become possible for AI to do. The key thing here is what should be done as opposed to what can be done. Obviously, there is a regulatory framework that mandates certain rules about what can or cannot be done, but the technology is moving faster than society, and what we see today is a tension between what the technology makes possible and the ethical implications. A lot of questions and concerns are being raised, and companies are under pressure to keep control of the technology they use and to act responsibly for the greater good.

The reality is that we are all learning as we go along, the impacts that AI systems can have are not all known or felt yet, and mistakes are inevitable.

RN: How can RAI be implemented and then tracked to ensure it is consistently performed?

OP: The key thing when it comes to RAI is to put the human back into the equation. There is a difference between automated decision-making and aiding the decision-making process. Companies must therefore structure the use, deployment, and implementation of AI technology with a people-centric approach in mind.

It is also critical to avoid handling RAI as an afterthought and to embed RAI principles into your AI initiatives from the outset. Injecting those principles into a deployed AI application is much more complicated and costly, and would probably require a complete re-design and re-build of the underlying models. So better to get things right from the start!

RN: To what extent is the onus on leadership to ensure responsible AI is being implemented?

OP: Everybody has a role to play in RAI, but it can’t be down to the people in the trenches to see the “big picture” and to do what’s right, which is often more difficult, lengthy and costly. Therefore, there must be a mandate coming from the top. Business and public leaders are ultimately accountable to ensure that AI technologies are used responsibly.

The main responsibility of leadership is to contextualise RAI principles and to translate them into actionable guidelines. What’s acceptable in one context may not be acceptable in another, depending on the industry, country, use case, culture, and values of the organisation. It is all well and good to say a business must be fair and impartial, but how does that translate into actionable guidelines for teams to implement? Safeguards, standards, and best practices should be carefully defined so that all involved know what is expected of them.

But the responsibility of the leadership doesn’t stop there. They should also ensure that there is the appropriate oversight, a governance framework, and processes to monitor and mitigate the risks.

RN: Are there any factors that would impede the deployment of responsible AI – and any significant business benefits from it as well?

OP: There are many obstacles to deploying RAI, but none of them are unmanageable. One is the diversity of skills needed to anticipate the potential implications of AI applications and to mitigate their risks. It’s not just about data science; considerations from other domains, such as sociology, economics, and psychology, should be included as well.

Another potential obstacle is the pressure to deploy AI systems quickly to gain a competitive advantage. In this AI race, shortcuts can be taken and risks ignored in order to shorten the time to market. This is a big mistake: ultimately, RAI is about building trust with employees, partners, customers, and stakeholders. Without trust, there is no adoption, and without adoption, there is no value delivered.

RN: Do you think AI will be able to run autonomously or will human oversight always be a critical factor to ensure biases aren’t just being amplified?

OP: It is not a matter of whether AI will be able to run autonomously, but whether it should.

Coming back to the first principle of RAI, “human-centricity”: the success of AI applications will come from the partnership between humans and algorithms. The appropriate level of human oversight greatly depends on the use case. Personalising ads on a website doesn’t require much, but supporting decision-making in the healthcare or justice domains is far more critical because of the potential impact on individuals, and will require a different combination of analytical insight and human input.

AI can bring tremendous value to people, to the environment, and to society at large, but it cannot go unchecked. Ultimately, AI should serve our needs and humans should be part of the equation.
