At NESA, AI is a critical part of our strategy to revolutionize virtual care. Our AI solutions are grounded in empirical research that deepens our understanding of AI and how we can leverage it to improve clinical workflows and patient safety. A recent research collaboration led by Jon Walbrin, a NESA AI team member, is a great example:
Very pleased to see our new paper published in iScience: https://lnkd.in/dv_him2F Great work by Jorge Almeida (proactionlab.fpce.uc.pt), Nikita Sossounov, Morteza Mahdiani and Igor Vaz. We show that a neural network trained simultaneously on image and text inputs (CLIP-ViT) captures detailed, multi-faceted human knowledge about everyday objects. More specifically:

- Human knowledge about everyday objects can be broken down into a set of distinct dimensions that describe an object's visual appearance, how it is grasped and used, and other contextual information (e.g. that it belongs in a kitchen).
- CLIP-ViT does a good job of predicting these object dimensions, despite never being explicitly trained to do so (a sketch of this prediction approach follows below).
- CLIP and similar methods that leverage large image and text datasets are a promising tool for approximating and understanding complex human object knowledge.
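For readers curious what this kind of analysis looks like in practice, here is a minimal sketch, not the paper's code: extract CLIP-ViT image embeddings and fit a regularized linear regression to predict a human-rated object dimension. The model checkpoint, the synthetic images, and the placeholder ratings are all illustrative assumptions.

```python
# A minimal sketch (not the paper's pipeline): predict a human-rated object
# dimension from CLIP-ViT image embeddings with ridge regression.
# The checkpoint, images, and ratings below are illustrative assumptions.
import numpy as np
import torch
from PIL import Image
from sklearn.linear_model import RidgeCV
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(images):
    """Return one CLIP-ViT image embedding per input image."""
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats.numpy()

# Placeholder data: in practice these would be photos of everyday objects
# and human ratings on one dimension (e.g. "belongs in a kitchen").
rng = np.random.default_rng(0)
images = [
    Image.fromarray(rng.integers(0, 256, (224, 224, 3), dtype=np.uint8))
    for _ in range(12)
]
kitchen_ratings = rng.random(12)

# Fit a ridge regression from embeddings to ratings.
X = embed_images(images)
reg = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, kitchen_ratings)
print(f"in-sample R^2: {reg.score(X, kitchen_ratings):.2f}")
```

Note that a real analysis would evaluate predictions on held-out objects via cross-validation rather than in-sample fit, and would use actual behavioral ratings; this sketch only shows the shape of the pipeline.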