Driven by AI hype, organisations are quickly building and deploying solutions without much consideration of their ethical impact. Gartner expects an “AI cause and effect” cycle to occur over the next three to five years.
Trustworthiness is a prerequisite for people and societies to develop, deploy and use AI systems. Trust in the development, deployment and use of intelligent systems concerns not only AI’s inherent properties, but also the qualities of the socio-technical processes that AI creates or manages. A trustworthy approach is key to driving revenue and profitability: it provides the foundation upon which all those affected by AI can trust that its design, development and use are lawful, ethical and robust. In the world of AI, people want to work with organisations they can trust with their personal data and that will use algorithms to help rather than manipulate them.
Trustworthy AI can improve individual flourishing and collective wellbeing by generating prosperity and value creation. It can contribute to achieving a fair society by helping to improve people’s health and well-being in ways that foster equality. It is therefore imperative that AI ethics focuses on the ethical issues raised by the development, deployment and use of AI. AI systems should follow human-centric design principles and leave meaningful opportunity for human choice. This means securing human oversight over AI work processes. The use of AI should never lead to people being deceived or unjustifiably impaired in their freedom of choice. In the world of AI, this means that the organisation and employees who build an AI solution should put themselves in the shoes of the person who will use it.
The quality of the data sets used is paramount to the performance of AI systems. Data sets used by AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models. Feeding malicious data into an AI system may change its behaviour, particularly with self-learning systems. The processes and data sets used must be tested and documented at each step: planning, training, testing and deployment. Identifiable and discriminatory bias should be removed in the collection phase where possible. The way in which AI systems are developed (e.g. the programming of their algorithms) may also suffer from unfair bias. This can be counteracted by putting in place oversight processes to analyse and address the system’s purpose, constraints, requirements and decisions in a clear and transparent manner.
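One simple screening step during the collection phase can be sketched as follows: compare positive-outcome rates across groups in the historical data and flag large disparities before training. The “four-fifths” threshold, the group and outcome fields, and the records shown are illustrative assumptions; a real fairness audit would go considerably further.

```python
from collections import defaultdict

def disparate_impact_ratio(records, group_key, outcome_key):
    """Compute the positive-outcome rate per group and return the ratio of
    the lowest rate to the highest, plus the per-group rates themselves."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        if row[outcome_key]:
            positives[row[group_key]] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical historic loan decisions intended as training data.
data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

ratio, rates = disparate_impact_ratio(data, "group", "approved")
print(rates)        # per-group approval rates in the raw data
if ratio < 0.8:     # the four-fifths rule, a common screening heuristic
    print(f"Possible historic bias: impact ratio {ratio:.2f} is below 0.8")
```

A check like this documents the state of the data at collection time, which supports the testing-and-documentation requirement described above.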
Explicability is crucial for building and maintaining users’ trust in AI systems. This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions, to the extent possible, explainable to those directly and indirectly affected. Consumers and businesses will require transparency of AI solutions as a “right to explanation” of how an AI-derived decision was made, together with proof that it was made in an unbiased way. New analysis tools will be developed for traceability and auditability, making it possible to quickly understand how an AI system reached a decision.
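A traceability tool of this kind can be as simple as permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy degrades, revealing which inputs actually drove its decisions. The sketch below is a minimal, model-agnostic version; the toy model and the feature names are assumptions made purely for illustration.

```python
import random

def permutation_importance(model, rows, labels, feature_names, seed=0):
    """Score each feature by shuffling its column and measuring the
    resulting drop in accuracy; larger drops mean greater reliance."""
    rng = random.Random(seed)

    def accuracy(data):
        correct = sum(model(r) == y for r, y in zip(data, labels))
        return correct / len(labels)

    baseline = accuracy(rows)
    scores = {}
    for i, name in enumerate(feature_names):
        column = [r[i] for r in rows]
        rng.shuffle(column)
        perturbed = [r[:i] + (v,) + r[i + 1:] for r, v in zip(rows, column)]
        scores[name] = baseline - accuracy(perturbed)
    return scores

# Toy decision model (an assumption for illustration): it decides purely
# on the first feature and ignores the second entirely.
rows = [(x, y) for x in (0, 1) for y in (0, 1)] * 5
labels = [r[0] for r in rows]
model = lambda r: r[0]

scores = permutation_importance(model, rows, labels, ["income", "zip_code"])
print(scores)  # "income" should score above 0; "zip_code" exactly 0
```

Because the ignored feature scores zero, an auditor can see at a glance that the model’s decisions do not depend on it, which is exactly the kind of explanation the “right to explanation” demands.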
As AI becomes a part of everything (software, hardware, consumer devices) and autonomously communicates with other AI, new policies and governance will emerge to protect consumers, citizens and businesses from unethical AI usage and the new patterns of abuse it enables. Increasingly, we will see new AI solutions emerge that are explicitly designed to undermine other AI solutions in order to protect consumer and employee privacy.
As a result of increased governance policies and greater ethical awareness, AI applications will take longer to build. That is because organisations will proactively make sure that their data is correct and integrated properly, and that they have the right types of data and datasets.