📙 How does it feel to speak with an LLM-based AI? The just-announced Google Gemini model uses a 32k-token context, which compares to a book of 40-50 pages. Imagine you've read a book of that size and can discuss its contents; that is what the AI does. 📊 Interestingly, context size first showed success at a mere 2,048 tokens, used in GPT-3. After intriguing attempts to raise it to a million, brute-force scaling turned out not to be an effective solution; specialized attention mechanisms were applied instead, keeping the effective window at a more reasonable size. #llm #gemini
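The "32k tokens ≈ 40-50 pages" claim above is easy to sanity-check with a back-of-envelope conversion. This is a minimal sketch; the conversion factors (≈0.75 English words per token, ≈500 words per printed page) are rough assumptions of mine, not figures from the post.

```python
# Rough estimate: how many book pages fit in a 32k-token context window?
# The ratios below are hypothetical averages for English prose.

def tokens_to_pages(tokens: int,
                    words_per_token: float = 0.75,
                    words_per_page: int = 500) -> float:
    """Convert a token budget into an approximate printed-page count."""
    return tokens * words_per_token / words_per_page

pages = tokens_to_pages(32_000)
print(f"~{pages:.0f} pages")  # ~48 pages, consistent with "40-50 pages"
```

With these assumptions the 32k-token window lands at roughly 48 pages, squarely inside the post's 40-50 page estimate.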
Today, we launched Gemini – Google’s largest and most capable AI model. It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across, and combine different types of information including text, code, audio, image and video. Built with responsibility and safety at the core, Gemini has the most comprehensive safety evaluations of any Google AI model to date, including for bias and toxicity. We’ve conducted new research into potential risk areas, applied Google Research’s best-in-class adversarial testing techniques to help identify critical safety issues in advance of Gemini’s deployment, and are working with a diverse group of external experts and partners to stress-test our models across a range of issues. In this new era, Gemini will help people be more creative, learn more, and advance science itself. This is a major milestone. Congrats to all the teams that made today happen. Read more about Gemini and its availability here: https://lnkd.in/e-ZWDdDS #GoogleDeepMind #GeminiAI #AI
AGI Researcher – VARANKIN
10mo
⚠ Attention, please! Facebook (the social network) is full of fake advertisements with titles resembling "Gemini AI". The text includes a link to an attacker's page that prompts you to download a RAR file with a "model", which is actually a virus installer targeting Chromium-based web browsers.