What is good for the goose may not necessarily be good for the gander. Ethical responsibility, as we have seen with recent issues in social media, does not always have a clear right or wrong; it is shaped by personal viewpoints. Having AI make such judgements will be a constant struggle.
As we morph from the connected processes of Industry 4.0 to the more human-centric perspective of its next iteration, Industry and Services 5.0, automation using AI will need to incorporate ethical, moral, and other considerations of responsibility.
While reputable institutions devise case studies on moral dilemmas, such as what is ethical in the case of an imminent accident during autonomous driving, we at Numorpho Cybernetic Systems (NUMO) believe that an explainable view of the mechanisms underlying AI is a better approach to devising actionable intelligence.
Why Explainability Matters
AI has the power to automate decisions, and those decisions have business impacts, both positive and negative. Just as with human decision-makers in an organization, it is important to understand how AI reaches its decisions. Many organizations want to leverage AI but are not comfortable letting a model make high-impact decisions because they do not yet trust it. Explainability helps bridge this gap by providing insight into how models make decisions.
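To make the idea concrete, here is a minimal, self-contained Python sketch of one common explainability pattern: an additive model that returns not just a decision but each feature's contribution to it, so a human reviewer can see why the model decided as it did. The feature names, weights, and threshold are purely illustrative assumptions, not part of any NUMO system.

```python
# Illustrative additive scorer: every feature's contribution is reported
# alongside the decision, giving a simple, auditable explanation.
# All names, weights, and the threshold below are hypothetical.

WEIGHTS = {"years_experience": 0.5, "skill_match": 1.0, "referral": 0.3}
THRESHOLD = 2.0

def explain_decision(candidate: dict) -> dict:
    # Contribution of each feature = weight * value (a simple additive attribution,
    # the intuition behind richer methods such as SHAP).
    contributions = {f: WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "decision": "advance" if score >= THRESHOLD else "review",
        "score": round(score, 2),
        "contributions": {f: round(c, 2) for f, c in contributions.items()},
    }

result = explain_decision({"years_experience": 3, "skill_match": 0.8, "referral": 1})
print(result["decision"], result["contributions"])
```

The point of the sketch is the shape of the output: a decision plus a per-feature breakdown. That breakdown is what lets stakeholders audit, contest, or come to trust an automated decision, which is precisely the gap explainability is meant to close.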
NI+IN UCHIL Founder, CEO & Technical