20 July 2021 | Categories: news


By Robin Fisher, Senior Area Vice President, Salesforce Emerging Markets 


COVID-19 continues to accelerate organisations’ plans to digitally transform, automate and leverage AI-powered technologies to help businesses build resilience and a competitive edge. Organisations are enhancing the customer experience by solving problems faster and more efficiently, facilitating remote working, and empowering employees to take on the more strategic roles that the digital economy demands.

In a post-pandemic, success-from-anywhere world, the surge in demand for AI technologies means companies must hold themselves to higher standards to ensure the development of responsible technology. The costs of creating, selling, and implementing technology without a holistic understanding of its implications are far too great to ignore.

To build and deploy AI with confidence, we must focus on inclusion and ethical intent, transparently explaining the impact and rationale of an AI system’s actions and recommendations. As ethical and responsible technology becomes an organisational imperative, here are three ways organisations can earn trust: ensuring accountability, transparency, and fairness.

Scaling ethical AI practice

In the enterprise, ethics in AI means creating and sustaining a culture of critical thinking among employees. It’s not feasible to ask a single group to effectively take sole responsibility for identifying ethical risks during development. Instead, ethics-by-design requires a multitude of diverse perspectives from different cultures, ethnicities, gender identities, and areas of expertise.

Ultimately, every employee has to have a sense of responsibility to everyone in the company and to their customer base. Cultivating an ethics-by-design mindset requires systematic engagement, with all employees serving as advisors to product and data science teams on practical ways to identify and address ethical issues associated with their projects.

Understanding the nature and degree of bias

Although there is much potential for AI to make a positive impact on businesses and society, we must also be mindful of how these technologies can be problematic, particularly in relation to reinforcing biases. It is one thing to build AI in a lab, but it is another to accurately predict how it will perform in the real world.

Throughout the product lifecycle, questions of accountability should be top-of-mind. Teams also need to understand the nature and degree of bias associated with the datasets they are using and the models trained on those datasets, as well as their own biases. It’s essential that ethical AI teams facilitate questions about how to make AI models more explainable, transparent, or auditable. Establishing well-defined, externally validated methods and practices for supporting decision-making will ensure clarity for everyone involved.
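
To make “nature and degree of bias” concrete, here is a minimal sketch of one common measurement: the demographic parity difference, i.e. the gap in positive-prediction rates between groups. The predictions, group labels, and function names below are hypothetical placeholders; a real audit would apply several metrics and a fuller toolkit.

```python
# A minimal sketch of one way to quantify bias: the demographic parity
# difference, i.e. the gap in positive-outcome rates between groups.
# All data and names below are hypothetical placeholders.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the share of positive predictions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across groups."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs (1 = approved) and group membership.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(positive_rate_by_group(preds, groups))         # {'A': 0.8, 'B': 0.4}
print(demographic_parity_difference(preds, groups))  # 0.4 -- a gap worth auditing
```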

Whereas developers provide AI platforms, AI users effectively own and are responsible for their data. And while developers can supply customers with training and resources to help identify bias and mitigate harm, algorithms that are inadequately retrained or left unchecked can perpetuate harmful stereotypes. This is why it is important that organisations give customers and users the right tools to use these technologies safely and responsibly, and to identify and address problems when they arise. With appropriate guidance and training, customers will better understand the impact of their data handling.

Applying best practice through transparency

Gaining feedback about how teams collect data can help avoid the unintended consequences of algorithms, both in the lab and in future real-world scenarios. Providing as much transparency as possible around how an AI model has been built gives the end user a better sense of the safeguards in place to minimise bias. This can be done, for example, by publishing model cards that describe the intended use and users, performance metrics, and any other ethical considerations. This helps build trust not just among prospective and existing customers, but also among regulators and wider society.
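
As a concrete illustration, a model card can be published as a small structured document alongside the model itself. The sketch below is a hypothetical, minimal example; the field names mirror the considerations listed above, and every value is an illustrative placeholder rather than output from any real system.

```python
# A minimal, hypothetical model card serialised to JSON so it can be
# published alongside the model. Field names mirror the considerations
# discussed above; all values are illustrative placeholders.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    intended_users: list
    performance_metrics: dict
    ethical_considerations: list = field(default_factory=list)

card = ModelCard(
    model_name="lead-scoring-v2",
    intended_use="Rank inbound sales leads; not for credit or hiring decisions.",
    intended_users=["sales operations teams"],
    performance_metrics={"auc": 0.87, "auc_by_region_min": 0.81},
    ethical_considerations=[
        "Trained on 2019-2021 CRM data; may under-represent newer markets.",
        "Region-level performance gaps reviewed quarterly.",
    ],
)

print(json.dumps(asdict(card), indent=2))  # the publishable model card
```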

Ultimately, to trust AI, relevant audiences need to understand why it makes certain recommendations or predictions. AI users approach these technologies with different levels of knowledge and expertise. Data scientists or statisticians, for instance, will want to see all the factors used in a model. By contrast, sales reps without a background in data science or statistics might be overwhelmed by that level of detail. To inspire confidence and avoid confusion, teams need to understand how to communicate these explanations appropriately for different users.
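
One way to picture this is to render the same explanation at two levels of detail: the full factor list a data scientist might expect, and a short plain-language summary for a sales rep. The sketch below assumes a hypothetical set of per-feature contributions to one prediction; the feature names and weights are invented for illustration, not output from any particular model.

```python
# A hypothetical set of factor contributions for one prediction,
# rendered at two levels of detail for different audiences.
contributions = {
    "recent engagement score": 0.42,
    "company size": 0.21,
    "industry match": 0.13,
    "days since last contact": -0.09,
    "website visits": 0.05,
}

def full_report(contribs):
    """Every factor and weight, for data scientists and auditors."""
    ranked = sorted(contribs.items(), key=lambda kv: -abs(kv[1]))
    return "\n".join(f"{name:>26}: {w:+.2f}" for name, w in ranked)

def plain_summary(contribs, top_n=3):
    """Top drivers only, in plain language, for non-specialist users."""
    top = sorted(contribs.items(), key=lambda kv: -abs(kv[1]))[:top_n]
    parts = [f"{name} ({'raises' if w > 0 else 'lowers'} the score)"
             for name, w in top]
    return "Main factors: " + "; ".join(parts) + "."

print(full_report(contributions))   # detailed view for specialists
print(plain_summary(contributions)) # concise view for everyone else
```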
