Earlier this week, I had the pleasure of speaking at the NYSE tech summit alongside Oracle CIO Jae Sook Evans, Talkdesk CIO Jeff Haslem, and C3 AI President Ed Abbo. We talked about the topic du jour: the impact of generative AI on business productivity.
Here is what I learned from our discussion, as well as from listening to the attendees.
😢 The curious case of missing business productivity
Everyone has moved on from the earliest use cases of "writing better emails". While those use cases felt magical, they have not delivered real business value.
However, leaders do see opportunity in service operations such as IT, HR, and Finance, and in GTM operations. This is why we built the Plugin Library, which lists hundreds of AI agent use cases to drive business productivity forward. https://lnkd.in/ghtWm4UZ
💩Poor data quality
All the leaders I spoke to shared that data quality remains a big impediment to deriving value from AI. Businesses sit on mountains of data that has largely gone uncared for.
Data can be missing, inaccurate, or outdated, or its permissions can be overly broad. In many cases, there are conflicting sources of data (e.g., revenue data in CRMs, but also in emails and internal documents).
However, all is not lost here. Every company has pockets of high-value data. Operational runbooks tend to be clean, be they IT support manuals, HR policies, or GTM collateral. Business data residing in CRM, HRIS, and ERP systems is often clean as well. As a result, C-suite leaders are connecting generative AI tools to these pockets of clean data while they invest in isolating or fixing other data issues.
At Moveworks, our customers use Knowledge Studio, which helps them improve data quality for employee service manuals. The tool uses generative AI to identify gaps in content (missing data), recommend new content or updates to existing content, and finally turn ticket notes into high-quality, grounded content. https://lnkd.in/gMf4j7jY
🤥 Hallucinations - grounding is not enough!
The initial promise that grounding would prevent hallucinations has faded. Despite grounded generation, end users continue to be misled by a variety of hallucinations - from made-up data to missing citations to confused classifications (such as contractors vs. contingent workers). It turns out that simply prompting an LLM to be truthful does not make it truthful!
We saw this firsthand last year and built many techniques into our Copilot to counter the problem. We invested heavily in citation presence and entity linking across a variety of business objects, such as people and sources. We also built a fact-checking model that inspects LLM-generated summaries for factuality against their citations.
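To make the fact-checking idea concrete, here is a minimal toy sketch of the general technique - checking each sentence of a generated summary for support in the cited source text and flagging unsupported claims. This is my own illustration using simple lexical overlap, not Moveworks' actual model; a production fact checker would use a trained entailment (NLI) model rather than word matching.

```python
# Toy fact checker: flag summary sentences whose content words are
# poorly covered by the cited source text. Illustrative only - a real
# system would use a trained entailment model, not lexical overlap.
import re

def tokens(text):
    """Lowercased alphanumeric tokens of a string, as a set."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def unsupported_sentences(summary, citations, threshold=0.5):
    """Return summary sentences with low lexical support in citations."""
    source_vocab = set().union(*(tokens(c) for c in citations))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        words = tokens(sentence)
        if not words:
            continue
        coverage = len(words & source_vocab) / len(words)
        if coverage < threshold:
            flagged.append(sentence)
    return flagged

# Hypothetical example data for illustration:
citations = ["Contractors are paid hourly and managed via the VMS.",
             "Contingent workers convert after 12 months."]
summary = ("Contractors are paid hourly. "
           "All contractors receive stock options.")
print(unsupported_sentences(summary, citations))
# -> ['All contractors receive stock options.'] (the hallucinated claim)
```

The first sentence is fully supported by the citations and passes; the second introduces claims found nowhere in the sources, so it gets flagged for review before the summary reaches the end user.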
All in all, it was a stellar and insightful event. I came away inspired and enlightened. Thank you Lynn Martin and Chuck Adkins!