Don't believe the hype, and don't believe the panic either. Generative large language models are becoming very powerful indeed, but in their current architectures and operating modes they will never be anything more than (extremely useful) imperfect philosophical zombies. It's not about the tools - it's about the users and the use cases. “We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this headlong rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about ‘too powerful AI’,” Emily Bender, the paper's lead author, tweeted. “Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).” #ai #aiethics #trustworthyai #llm #generativeai
The letter is somewhat ridiculous, but I think it's good that this side of the spectrum is represented. We do need more alignment and AI safety research going forward; perhaps this will promote funding for it.
I love the phrase “Stochastic Parrots”. A great paper and future band name 🙂
Interesting read, thanks Daniel
I agree; I posted about this yesterday. The letter allows fear to stall the progress of one of the greatest technologies known to humanity. We need to regulate it more effectively, not stop it altogether; stopping it will accomplish nothing. I'm resharing for visibility. My relative, Michael Arbib, is literally the person who connected brain theory to natural language processing, and Norbert Wiener was his mentor at MIT.