Imagine a passenger riding in a semi-autonomous (L3) vehicle for the first time. Excited by the novelty and ease. Nervous about whether it works as advertised, and whether (and how) they'll know to take control when required.
One element of control transfer that warrants attention is keeping the human in the loop. We know the risks when a user is out of the loop with (semi-)autonomous systems: more mistakes, some potentially fatal.
Existing research can guide us toward safer systems, but in practice we don't always see it applied. See much of Tesla's news coverage: some people use the technology irresponsibly; others follow the provided guidance yet still end up in trouble during takeover because they find themselves unprepared. Both groups deserve to be kept safe.
Enter generative AI. Suddenly, it's much easier to tailor the information we provide to passengers in these vehicles for countless scenarios. To personalize it. To keep the operators in the loop. It's equally possible to overwhelm them, or deepen over-reliance on the automated system. I fear that we'll throw technology at the problem preemptively, without doing due diligence on its implications.
We need robust human factors research on this topic, and we need it now. I'm starting to see publications pop up, but the literature is still thin.
If you're in this space, what research have you seen or are you working on? What worries you most, and what excites you most?
Two helpful references:
- SAE Levels of automation for vehicles: https://lnkd.in/eJ-VMPAP
- A seminal paper on the topic titled, "A model for types and levels of human interaction with automation": https://lnkd.in/ekDXpTrE
#GenerativeAI #AutonomousVehicles #UserExperience #FutureOfTransportation #HumanFactors