Sense about Science’s Post


💻 Technology is advancing rapidly, and our ability to scrutinise it – judging the safety and efficacy of new systems – needs to keep pace.

From the moment an AI tool is developed to the moment it is implemented in public life, there are key ‘handover’ points at which information risks being lost – especially when the handover occurs between people with different levels of experience or expertise (e.g., from developers to clinicians).

The new Responsible Handover framework empowers people to understand the origins, capabilities and limitations of new technology so that it can be applied safely and effectively, strengthening the flow of information from developers to end users and ensuring that AI is used as intended. See below for more information 👇

• Image shows the Responsible Handover framework in biomedical settings. Tools are handed over to groups with varying levels of experience and expertise, and at each handover certain information risks being lost:

• Statisticians (who build the AI tool, have technical expertise and a full understanding of its working limits) hand over to researchers, who use the tool in new contexts but may lack technical expertise and can check that the code runs without being able to test it rigorously. Information at risk: the data used for training, the tool’s assumptions and how it was tested.

• Researchers hand over to app developers, who understand consumer needs and have technical expertise. Information at risk: the reasoning behind any adaptations from the original model.

• App developers hand over to clinicians, who understand potential patient harms but have limited data science experience. Information at risk: user-skill requirements, the acceptable error rate and how to monitor performance.
