Who does it better? https://lnkd.in/epbMhXxX It's time for models to go multimodal, and it's really cool that open artefacts like Idefics and Obelics already exist, so any company can build features similar to what OpenAI just announced: https://lnkd.in/eWCGJ3Fn
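For anyone who wants to build on those artefacts, here is a minimal sketch of querying an Idefics checkpoint through the transformers library (the checkpoint name, example image URL and generation settings below are illustrative assumptions; the model card has a fuller example):

```python
import torch
from transformers import IdeficsForVisionText2Text, AutoProcessor

# Assumes the instruction-tuned 9B Idefics checkpoint and a recent transformers release.
checkpoint = "HuggingFaceM4/idefics-9b-instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16
).to(device)

# Prompts interleave free text and images (URLs or PIL images) in a single list.
prompts = [
    [
        "User: What is shown in this image?",
        "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
        "<end_of_utterance>",
        "\nAssistant:",
    ]
]

inputs = processor(prompts, add_end_of_utterance_token=False, return_tensors="pt").to(device)
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```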
It’s amazing how the combined pressure of DeepMind’s looming Gemini launch and the ingenuity of the Hugging Face open-source community is driving OpenAI to accelerate new capability launches. Where open source has the real potential to break new ground is in models that run locally on edge devices, which requires orders of magnitude more efficiency in compute and cost. The findings from ‘Textbooks Are All You Need’, in which researchers trained a 1.3-billion-parameter model in four days with performance comparable to GPT-3.5 on coding benchmarks, offer a tantalising glimpse of what will soon be possible running locally on iPhone and Android devices. This also has major implications for the cost-per-token model: if LLMs run locally instead of querying the cloud for every prompt, that is a significant cost saving.
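To put a rough number on that cost argument, here is a purely back-of-envelope sketch where every figure is a hypothetical assumption, not real pricing or usage data:

```python
# Purely illustrative back-of-envelope: cloud API cost vs. on-device inference.
# All numbers below are hypothetical assumptions for the sake of the argument.
price_per_1k_tokens = 0.002      # assumed cloud price in USD per 1,000 tokens
tokens_per_prompt = 1_000        # assumed average prompt + response length
prompts_per_user_per_day = 50    # assumed usage per user
users = 1_000_000                # assumed user base

daily_cloud_cost = (
    users * prompts_per_user_per_day * tokens_per_prompt / 1_000 * price_per_1k_tokens
)
print(f"Hypothetical daily cloud bill: ${daily_cloud_cost:,.0f}")
# With an on-device model the marginal cost per prompt is roughly zero
# (compute and battery aside), so this line item largely disappears
# in exchange for a one-time model download.
```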
*Where did I put my glasses now?* Does it also work like that?
Clem Delangue 🤗 you are so far ahead of this, and my 🙏 that you are able to self-finance your growth without selling to a big tech player. The market is clamoring for this.
I saw the other video and understand why you chose this example. But wouldn't that prompt work even without an image?
Always open-source 🔥
Tried it. Works well.
Whichever model brings this risk down to almost zero will be the better one. Quoting the stated limitations: "The model can produce factually incorrect texts, hallucinate facts (with or without an image) and will struggle with small details in images. While the model will tend to refuse answering questionable user requests, it can produce problematic outputs (including racist, stereotypical, and disrespectful texts), in particular when prompted to do so. We encourage users to read our findings from evaluating the model for potential biases in the model card."