Nobody knows why an AI model gives the exact answer it gives or why it makes a particular decision. That’s a problem for organizations in high-impact regulated industries like financial services and healthcare, for both legal and ethical reasons. Learn more about how UMNAI is applying Neuro-symbolic #AI in this interview with Angelo Dalli by Jennifer Schenker on The Innovator. #UMNAI #HybridIntelligence
#AI large language models (LLMs) lack transparency. Nobody, including the people who create them, knows why an LLM gives the exact answer it gives or why it makes a particular decision. That’s a problem for organizations in high-impact regulated industries like #financialservices or #healthcare, for both legal and ethical reasons. The issue is not limited to LLMs: most AI models are built on opaque statistics that cannot easily be understood. “You can’t trust what you can’t control, and you can’t control something you don’t understand,” says Angelo Dalli, CTO and Chief Scientist of UMNAI, a UK-based startup. UMNAI is trying to tackle this issue by marrying #neuralnetworks and #LLMs with #neurosymbolicAI, which represents knowledge through logic, reasoning, and an understanding of cause and effect rather than purely statistical predictions and associations, and derives conclusions using rule-based systems and logical inference. How does that work when applied to real-world business applications? Sign up for a free four-week trial of The Innovator to find out. https://lnkd.in/es7nvUzV
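For readers who want a concrete sense of the idea, here is a minimal, illustrative Python sketch of a neuro-symbolic decision layer. It is not UMNAI’s implementation; the loan-approval scenario, the rules, and all names in it are hypothetical. It simply shows how a statistical score (standing in for a neural network or LLM) can be combined with explicit, auditable rules and logical inference so that every decision carries a human-readable explanation trail.

```python
# Toy neuro-symbolic decision layer (hypothetical loan-approval scenario;
# illustrative only, not UMNAI's implementation). A statistical score stands in
# for the neural/LLM component; explicit rules provide a traceable decision.

from dataclasses import dataclass, field


@dataclass
class Decision:
    approved: bool
    score: float
    fired_rules: list = field(default_factory=list)  # human-readable explanation trail


def statistical_score(applicant: dict) -> float:
    """Stand-in for an opaque statistical model: returns a score in [0, 1]."""
    # Illustrative weighting only; a real system would use a trained model.
    income_factor = min(applicant["income"] / 100_000, 1.0)
    debt_factor = 1.0 - min(applicant["debt_ratio"], 1.0)
    return 0.6 * income_factor + 0.4 * debt_factor


def symbolic_layer(applicant: dict, score: float) -> Decision:
    """Rule-based inference over the statistical score, recording why each rule fired."""
    decision = Decision(approved=score >= 0.5, score=score)
    # Hard constraints expressed as explicit, auditable rules.
    if applicant["age"] < 18:
        decision.approved = False
        decision.fired_rules.append("REJECT: applicant under 18 (legal requirement)")
    if applicant["debt_ratio"] > 0.8:
        decision.approved = False
        decision.fired_rules.append("REJECT: debt ratio above 0.8 (policy rule)")
    if not decision.fired_rules:
        decision.fired_rules.append(f"DEFAULT: decided by score threshold (score={score:.2f})")
    return decision


if __name__ == "__main__":
    applicant = {"age": 35, "income": 72_000, "debt_ratio": 0.35}
    result = symbolic_layer(applicant, statistical_score(applicant))
    print(result.approved, result.score, result.fired_rules)
```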