PROLOGUE
What is good for the goose may not be good for the gander. Ethical responsibility, as we have seen with recent issues in social media, does not necessarily have a right or a wrong; it is based on personal viewpoints. Having AI make such judgements will be a constant struggle.
As we morph from the cyber-physical connected processes of Industry 4.0 to the more human-centric perspective of the next iteration of the industrial revolution, Industry and Services 5.0, synthetic automation using AI will need to incorporate ethical, moral and other considerations of responsibility.
While reputable institutions devise case studies on moral dilemmas about what is ethical, especially in the case of imminent accidents during autonomous driving, we at Numorpho Cybernetic Systems (NUMO) believe that an explainable approach to the mechanisms behind AI training is a better path to actionable intelligence.
WHY EXPLAINABILITY MATTERS
AI has the power to automate decisions, and those decisions have business impacts, both positive and negative. Just as it is important to understand how hiring decision-makers in an organization reach their conclusions, it is important to understand how AI makes decisions. Many organizations want to leverage AI but are not comfortable letting a model make impactful decisions because they do not yet trust it. Explainability helps here, as it provides insight into how models make decisions.
There is also a growing recognition that the responsibility, safety and fairness of AI systems are critical in business settings. Wide-ranging incoming legislation mandates that businesses:
- provide explainability reports,
- apply stress tests,
- and ensure humans remain “in the loop”.
This matches well with the needs of the evolving Industry and Services 5.0: a human-centric, sustainable, and resilient solutioning process driven by empathy and human goodness, with the goal of moving the focus from shareholders to stakeholders. Appropriate solutions and their applications have the potential to mitigate human discomfort as well as catapult those solutions into actionable and sustainable systems for the future.
BACKGROUND
For the last fifty years, Artificial Intelligence (AI) has been one of the most fascinating fields in Computer Science. Unfortunately, there is still a lack of understanding about what Artificial Intelligence really is. This is partly because the field has been shrouded in mystery due to the absence of a rigorous mathematical theory, and partly because certain philosophical problems intimately related to AI, such as the nature of intelligence itself and the relation between mind and body, remain unsolved.
AI is as useful as it is ubiquitous, but AI systems are only as intelligent, rational, thoughtful and unbiased as their creators. Safe, ethical and effective AI will be vital to our future. This is an important issue because AI systems increasingly make decisions that affect people’s lives, and we need to understand why they make the decisions they do.
EXPLAINABILITY
An increasingly pertinent aspect of Artificial Intelligence (AI) is how neural networks are trained and how their results are utilized. We also need to understand the ramifications of AI’s progression and how it will interact with humans through cooperation, collaboration, and even competition (coopetition?). Terms like responsible, ethical, fair, explainable and interpretable need to be explored as use cases from an AI point of view as well. In each use case, both societal and technical aspects shape who might be affected by AI systems and how.
‘Explainable AI (xAI)’ techniques have been in the news recently following concerns from many people about the decisions made by AI-based systems. Explainability helps data scientists, auditors, and business decision makers to ensure that AI systems can reasonably justify their decisions and how they reach their conclusions. This also ensures compliance with company policies, industry standards, and government regulations. A data scientist should be able to explain to the stakeholder how they achieved certain levels of accuracy and what influenced this outcome. Likewise, in order to comply with the company’s policies, an auditor needs a tool that validates the model, and a business decision maker needs to be able to provide a transparent model in order to gain trust.
XAI is a subset of AI ethics, and is part of a broader movement towards increased transparency and accountability in machine learning. It focuses on making AI systems explainable by exposing their inner workings, and explaining how they reach certain conclusions. By being more open and transparent, it builds trust by dispelling fears about AI systems being biased or discriminatory. The goal of XAI is to be open about how AI systems make decisions, so that humans can understand, trust and even challenge them. This is particularly important as AI systems become more widespread and integrated into our daily lives.
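One common XAI technique is to probe a black-box model from the outside and measure how much each input feature drives its decisions. The sketch below is a minimal, illustrative permutation-importance example: the model, its weights, and the loan-approval scenario are all hypothetical stand-ins, not any specific production system.

```python
import random

# Hypothetical "black-box" model: approves when a weighted score exceeds 0.5.
# The explainer never reads WEIGHTS; it only calls predict().
WEIGHTS = [0.7, 0.2, 0.1]  # income, credit_history, age (illustrative)

def predict(row):
    score = sum(w * x for w, x in zip(WEIGHTS, row))
    return 1 if score > 0.5 else 0

def accuracy(rows, labels):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, n_features, seed=0):
    """Importance of a feature = accuracy drop when that feature is shuffled."""
    rng = random.Random(seed)
    baseline = accuracy(rows, labels)
    importances = []
    for j in range(n_features):
        column = [r[j] for r in rows]
        rng.shuffle(column)  # break the feature's link to the outcome
        perturbed = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, column)]
        importances.append(baseline - accuracy(perturbed, labels))
    return importances

# Synthetic applicants: [income, credit_history, age], scaled to 0..1.
rng = random.Random(42)
X = [[rng.random() for _ in range(3)] for _ in range(500)]
y = [predict(row) for row in X]  # labels agree with the model for clarity

for name, imp in zip(["income", "credit_history", "age"],
                     permutation_importance(X, y, 3)):
    print(f"{name}: {imp:.3f}")
```

Because income carries the largest hidden weight, shuffling it degrades accuracy the most, so it surfaces as the most important feature. This is the kind of evidence a data scientist can hand to an auditor or a business decision maker without exposing the model’s internals.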
CHATGPT’S TAKE
Explainable artificial intelligence (xAI) refers to the practice of making the decision-making processes and reasoning behind AI systems transparent and understandable to humans. This is important for a number of reasons.
- First, explainability can help to build trust in AI systems, as it allows people to understand how and why the system is making certain decisions. This is particularly important in situations where the decisions made by AI systems have significant consequences, such as in autonomous vehicles or healthcare.
- Second, explainability is necessary for ensuring the ethical, responsible, and fair use of AI. For example, if an AI system is making decisions that disproportionately affect certain groups of people, explainability can help to identify and address any potential biases in the system.
- Finally, explainability is required by some regulations, such as the European Union’s General Data Protection Regulation, which mandate that companies must be able to explain the decisions made by AI systems.
Overall, explainable AI is an important aspect of the development and deployment of AI systems, as it helps to ensure their responsible and ethical use.
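The second point above, identifying decisions that disproportionately affect certain groups, can be checked with a simple demographic-parity audit: compare the rate of positive decisions across groups. The sketch below uses a made-up audit log; the group labels and decisions are hypothetical.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions per group -- a basic demographic-parity check."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit log: 1 = approved, 0 = denied, one group label per case.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
disparity = max(rates.values()) - min(rates.values())
print(rates, f"disparity={disparity:.2f}")
```

A large gap between groups does not by itself prove bias, but it flags exactly the situation where an explainability report should justify why the outcomes differ.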
OUR PERSPECTIVE
As we progress in our endeavor of building our ecosystem of connected dots – integrating people, processes and platforms to create a harmonious melange of interacting heterogeneous systems – there is a need to make them interoperable and trusted.
“Interoperability is the ability of independent systems to exchange meaningful information and initiate actions from each other, in order to operate together for mutual benefit.” As we pursue ubiquitous connectivity, especially using AI, there will be a need to explain the outcomes – including consequences that may not be good.
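In practice, exchanging meaningful information between independent systems starts with a shared message contract that both sides can validate. The sketch below is illustrative only: the field names, the sensor scenario, and the minimal schema check are assumptions, not a description of any NUMO system.

```python
import json

# Hypothetical minimal message contract agreed by producer and consumer.
SCHEMA = {"device_id": str, "temperature_c": float, "timestamp": int}

def encode(message: dict) -> str:
    """Producer side: serialize to the agreed wire format (JSON)."""
    return json.dumps(message)

def decode(payload: str) -> dict:
    """Consumer side: parse and validate against the shared contract
    before initiating any action on the received data."""
    message = json.loads(payload)
    for field, expected in SCHEMA.items():
        if field not in message:
            raise ValueError(f"missing field: {field}")
        if not isinstance(message[field], expected):
            raise TypeError(f"{field} must be {expected.__name__}")
    return message

wire = encode({"device_id": "pump-7", "temperature_c": 21.5, "timestamp": 1700000000})
received = decode(wire)
print(received["device_id"], received["temperature_c"])
```

Validating at the boundary is what turns raw connectivity into interoperability: a consumer that rejects malformed messages can safely initiate actions from data produced by a system it does not control.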
This will require a strong basis for training our AI constructs. We plan to use a multi-modal schema (thinking like an engineer and adding redundancies) and a strong physmatic model to scientifically and mathematically ascertain the results.
SUMMARY
It is important to ensure that AI systems are interoperable and can exchange meaningful information with other systems in order to operate effectively and achieve mutual benefit. Interoperability is particularly important in the context of connected systems, as it allows different systems to interact and work together seamlessly. Ensuring interoperability can also help to build trust in AI systems, as it allows people to understand how the systems operate and how they can be used in different contexts.
One way to improve interoperability and build trust in AI systems is to use a multi-modal schema and a strong physmatic model to train the systems. A multi-modal schema involves using multiple approaches or methods to train the AI system, which can help to improve its performance and reliability. A strong physmatic model, on the other hand, involves using scientific and mathematical principles to understand and predict the behavior of the AI system. By using both of these approaches together, it may be possible to develop AI systems that are more reliable, trustworthy, and able to operate effectively in different contexts.
NITIN UCHIL – Founder, CEO and Technical Evangelist
nitin.uchil@numorpho.com