27 February 2024 | Categories: news



By Josefin Rosén, Trustworthy AI specialist at SAS’ Data Ethics Practice

The adoption of artificial intelligence (AI) into a business is not just about leveraging advanced technologies to improve efficiencies; it is also about nurturing trust, ethics, and responsibility in digital systems. Of course, with any AI implementation, its impact and risk must be considered. This is where Trustworthy AI becomes an organisational imperative.

At a foundational level, AI should have a high impact in terms of creating efficiencies, but a low risk of creating harm. So, what is Trustworthy AI in this sense? As the name suggests, it is AI that you can trust: AI that is safe, fair, and ethical, and does not harm anyone or anything. 'Trustworthy' is itself a term fraught with misinterpretation, but it comes down to what is right and fair for us as people. Ultimately, it is that which reflects our values.

This means that Trustworthy AI must be developed and used according to ethical principles. Globally, leaders of the G7 countries have reportedly formed a strong alliance 'toward fundamental values such as democracy and human rights', and towards achieving interoperability in AI governance frameworks across global markets. But at a local level, what is acceptable in one society might not work in another. For example, Trustworthy AI will look and feel different in the US or Europe compared to how it is adopted by countries across Africa, because people's expectations, as well as cultural norms and state laws and regulations, differ so much between jurisdictions.

Guiding principles

At SAS we follow six guiding principles for Responsible Innovation – human-centricity, transparency, accountability, privacy and security, inclusivity, and robustness. These must be reflected in everything we do, in our people, our processes and in our AI platform. Yes, we are a software company, but we are also an advisor to guide customers on best practices. Our job is to make sure our principles are reflected throughout the AI lifecycle, from development to deployment.

The essence of Trustworthy AI therefore lies in its capacity to align with our core values, broader societal norms, and ethical principles. In addition to our guiding principles, we have adopted a comprehensive approach including a collaborative governance model based on four quadrants. These include:

·       Oversight perspective – an inter-disciplinary executive committee that guides us through AI ethical dilemmas, from sales opportunities to procurement decisions,

·       Controls – monitoring regulatory activities around the world and establishing AI-specific risk management methods for ourselves,

·       Platform – integrating Trustworthy AI into our solutions and providing capabilities that enable customers to develop and deploy responsible AI applications, and finally,

·       Culture – using training and coaching to build a culture of well-meaning people that is consistent around the world when it comes to Trustworthy AI.

Do no harm

In a world where AI's impact is permeating everything we do, the need for AI systems that do no harm and respect our diverse societal expectations is critical.

Trustworthy AI has become important in several sectors, including financial services, healthcare, and law enforcement. These affect all people, regardless of their socio-economic status. Should we deny a person credit, or take the word of an AI model when it comes to diagnosing someone's health? These are life-altering choices, so the data and the AI behind them must be trusted. Companies will therefore start differentiating themselves based on how well they adopt Trustworthy AI.

Diverse teams

Beyond our principles and quadrants, we place a strong emphasis on diversity within our teams. The varied perspectives and experiences within our teams are crucial for developing AI applications that are fair, unbiased, and reflective of the global community we serve. Our commitment extends to carefully evaluating the data that fuels these systems, ensuring it is representative, devoid of sensitive biases, and used in a manner that upholds our ethical standards.
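To make this concrete, the short Python sketch below shows one way a team might check whether training data is representative before a model is built. It compares each group's share of the data against a reference share; the column name "group", the reference proportions, and the 5% tolerance are illustrative assumptions for this article, not a prescribed SAS method.

# A minimal sketch of a representativeness check on training data, assuming a
# pandas DataFrame with a hypothetical demographic column named "group".
import pandas as pd

def check_representation(df: pd.DataFrame,
                         group_col: str,
                         reference: dict,
                         tolerance: float = 0.05) -> dict:
    """Report groups whose share of the data strays from a reference share."""
    observed = df[group_col].value_counts(normalize=True)
    gaps = {}
    for group, expected_share in reference.items():
        gap = observed.get(group, 0.0) - expected_share
        if abs(gap) > tolerance:
            gaps[group] = round(gap, 3)
    return gaps  # an empty result means every group is within tolerance

# Illustrative usage with made-up figures for the expected population shares.
df = pd.DataFrame({"group": ["A"] * 700 + ["B"] * 200 + ["C"] * 100})
print(check_representation(df, "group", {"A": 0.5, "B": 0.3, "C": 0.2}))

A check like this does not prove a dataset is unbiased, but it flags obvious gaps early, before they are baked into a model.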

Operationalising Trustworthy AI involves the integration of technology, people, and processes. It is about embedding ethical considerations into every stage of the AI lifecycle, from ideation to deployment. This comprehensive approach demands continuous learning, governance, and the willingness to adapt to emerging challenges. It also necessitates transparency and interpretability in the AI models being used to allow stakeholders to understand and trust the decisions made by these systems.
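As an illustration of what interpretability can look like in practice, the Python sketch below uses scikit-learn's permutation importance to show which features a trained model actually relies on. The dataset and model are illustrative only, and permutation importance is just one of several techniques a team might choose.

# A minimal sketch of model interpretability via permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")

Surfacing this kind of ranking alongside a model's predictions gives stakeholders a concrete way to question, and ultimately trust, the decisions the system makes.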

Continuous learning

The journey towards Trustworthy AI is complex and ongoing. It requires a proactive stance on governance, monitoring, and the integration of human oversight to ensure AI systems remain aligned with ethical standards.
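As one example of what such monitoring might involve, the Python sketch below computes a population stability index (PSI) to flag when the data a model sees in production drifts away from the data it was trained on. The synthetic data, bin count, and the commonly used 0.2 alert threshold are illustrative assumptions, not a prescribed standard.

# A minimal sketch of drift monitoring using the population stability index (PSI).
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline sample and a recent sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) in sparsely populated bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)   # a feature as seen at training time
live = rng.normal(0.5, 1, 10_000)     # the same feature in production
psi = population_stability_index(baseline, live)
print(f"PSI = {psi:.3f} -> {'investigate' if psi > 0.2 else 'stable'}")

When a check like this crosses its threshold, a human should review the model and its data before automated decisions continue unchecked.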

Our continued dedication to the adoption of Trustworthy AI is driven by the belief that technology, when guided by ethical principles and human values, can be a force for good.
