The Ethics Of Artificial Intelligence
Dr. Adam Bujak is an expert in intelligent automation, business process transformation, and strategic management. He heads the Intelligent Automation Practice within Capgemini's Business Services, helping multinational clients embrace the future of an augmented workforce in the front, middle, and back office.
Addressing ethical issues surrounding AI will always be a work in progress and will need to develop as AI itself evolves.
Imagine you've applied for a job or for a loan, and you're told you're unsuccessful. You're curious as to why, and so you use GDPR legislation to request access to the information the company holds on you. You obtain your data and, at the same time, discover that the decision was made using artificial intelligence (AI) algorithms that screened out your application for no obvious reason.
Or imagine this. You discover that AI is being used for surveillance purposes at your place of work and also that your employer is collecting and processing data relating to your health history using AI algorithms. In neither case has your consent been sought or obtained.
In any of these scenarios, it would be understandable if you felt pretty aggrieved. At the very least, it would damage your relationship with the organization employing the AI; at worst, it might move you to consider taking legal action and going public with your story.
In short, while organizations are increasingly taking advantage of the benefits of AI, they must simultaneously be mindful of the consequences of their behavior. A recent study by the Capgemini Research Institute has found that consumers, employees, and citizens will reward organizations that proactively show that their AI systems are ethical, fair, and transparent. The study, "Why addressing ethical questions in AI will benefit organizations," surveyed over 1,500 executives from large organizations across 10 countries and over 4,400 consumers across six countries.
Developing a Plan of Action
The main findings of the study are perhaps obvious: that ethical concerns are deemed important by pretty much everyone who is served
by or employed by organizations; that regulation is deemed desirable; and that companies are rewarded or punished according to the degree to which they are perceived to use AI ethically.
However, the study goes beyond an analysis of people's attitudes to outline a course of action for organizations seeking to develop an ethical AI strategy. The recommended approach embraces all key stakeholders:
• For CxOs, business leaders and those with a remit for trust and ethics: establish a strategy and code of conduct for ethical AI; develop policies that define acceptable practices for the workforce and AI applications; create ethics governance structures and ensure accountability for AI systems; and build diverse teams to ensure sensitivity towards the full spectrum of ethical issues.
• For the customer- and employee-facing teams, such as HR, marketing, communications and customer service: ensure ethical usage of AI applications; educate and inform users to build trust in AI systems; empower users with more control and the ability to seek recourse; and proactively communicate on AI issues internally and externally to build trust.
• For AI, data and IT leaders and their teams: seek to make AI systems as transparent and understandable as possible, so as to gain users' trust; practice good data management, and mitigate potential biases in data; continuously monitor for precision and accuracy; and use technology tools to build ethics in AI (a sketch of what such monitoring might look like follows this list).
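To make that last recommendation a little more concrete, here is a minimal, hypothetical sketch in Python of what a bias audit and an accuracy-drift check might look like. Everything in it is an illustrative assumption (the function names, the grouping of applicants, the 5% drift tolerance); it is not Capgemini's methodology, nor anything prescribed by the study.

    from collections import defaultdict

    def selection_rates(decisions):
        """Selection rate per group: the share of applicants approved in each group.

        `decisions` is a list of (group, approved) pairs, e.g. ("A", True).
        """
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def demographic_parity_gap(decisions):
        """Largest difference in selection rate between any two groups.

        A large gap is a signal that the screening model may treat groups
        unequally and warrants investigation; it is not proof of unfairness.
        """
        rates = selection_rates(decisions)
        return max(rates.values()) - min(rates.values())

    def accuracy_has_drifted(baseline, recent, tolerance=0.05):
        """Flag the model for review if accuracy has degraded beyond a tolerance."""
        return (baseline - recent) > tolerance

    if __name__ == "__main__":
        # Hypothetical loan-screening decisions: (applicant group, approved?)
        decisions = [("A", True), ("A", True), ("A", False),
                     ("B", False), ("B", False), ("B", True)]
        print(selection_rates(decisions))         # {'A': 0.67, 'B': 0.33} approx.
        print(demographic_parity_gap(decisions))  # approx. 0.33 -- worth a look
        print(accuracy_has_drifted(0.91, 0.84))   # True -- beyond the 5% tolerance

In a production setting, checks like these would run continuously against live decisions, feed dashboards and alerts, and sit alongside richer fairness metrics. The point is simply that "monitor for bias and accuracy" can be reduced to small, auditable pieces of code.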
The Need for Continuity
Perhaps the key take-away from the study is that the structured, planned approach to AI that I have summarized above can achieve two important aims. First, it will earn people's trust and loyalty, and achieve greater market share. And second, it will avert significant risks from a compliance, privacy, security, and reputational perspective.
Of course, whatever organizations do won't and can't be a one-time fix. Addressing ethical issues surrounding AI will need to develop as AI itself evolves: it will always be a work in progress, or, to use that business cliché, a journey. And the sooner that journey begins, the better.