🤖 Build a custom serverless assistant in Langtail's UI with Qdrant. We live in a world where going from "One problem I have is that when I am talking about the OpenAI API with an LLM, it keeps using the old API, which is very annoying." to an AI assistant that solves this takes only a couple of simple steps. Daniel Melo provides detailed instructions, code, and configurations on how he built his #RAG assistant with Langtail and Qdrant. 👉 Check it out https://lnkd.in/dUk8cmFD
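The fix Daniel describes is retrieval-augmented generation: embed up-to-date docs, retrieve the closest ones for a question, and put them into the prompt so the model stops answering from stale training data. Below is a minimal, library-free sketch of that retrieve-then-prompt step; the toy 3-d vectors and snippet texts are made up for illustration, and the actual assistant uses Langtail and Qdrant rather than this in-memory loop.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "knowledge base": (snippet, pre-computed embedding) pairs.
docs = [
    ("client.chat.completions.create(...) is the current call", [0.9, 0.1, 0.0]),
    ("openai.ChatCompletion.create(...) is deprecated",         [0.8, 0.2, 0.1]),
    ("Unrelated snippet about billing",                         [0.0, 0.1, 0.9]),
]

def retrieve(query_vec, k=2):
    """Return the k snippets whose embeddings are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_vec):
    """Ground the model in retrieved context instead of stale training data."""
    context = "\n".join(retrieve(query_vec))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How do I call the chat API?", [1.0, 0.0, 0.0])
```

The retrieved snippets crowd out the model's outdated memory of the old API, which is exactly the behavior the post is after.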
Qdrant
Software Development
Berlin, Berlin · 27,764 followers
Massive-Scale Vector Database
About
Powering the next generation of AI applications with advanced, high-performance vector similarity search technology. The Qdrant engine is an open-source vector search database. It deploys as an API service providing search for the nearest high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and much more. Make the most of your unstructured data!
- Website
- https://qdrant.tech
- Industry
- Software Development
- Company size
- 51–200 employees
- Headquarters
- Berlin, Berlin
- Type
- Privately held
- Founded
- 2021
- Specialties
- Deep Tech, Search Engine, Open-Source, Vector Search, Rust, Vector Search Engine, Vector Similarity, Artificial Intelligence and Machine Learning
Locations
- Primary
- Berlin, Berlin 10115, DE
Updates
-
👋 Atita Arora at Big Data Conference Europe 2024 in Vilnius! Our Solution Architect, Atita Arora, has been invited to speak at: ✅ Panel Discussion | Beyond the Hype: Realistic Expectations of AI ✅ Impact of Vector Search: Unraveling Purpose-built vs. Traditional Databases for Gen AI Applications She'll highlight the impact of vector search on #GenAI applications, comparing purpose-built databases with traditional databases incorporating vector capabilities. 😊 Come meet her; she's Qdrant's go-to expert and is excited to share her insights with you! 👉 Conference https://lnkd.in/dhET9Gr
-
Qdrant reposted this
#GenerativeAI in Action. Listen to this podcast generated with NotebookLM, based on our new blog post about ColPali (https://lnkd.in/dk4-qWjU). The two AI avatars in the podcast have a lively conversation about the new approach and about information retrieval in general. "It's fascinating to think about how this could impact the future of search and information retrieval as a whole. This feels like a stepping stone to something even bigger, a whole new way of interacting with information. And remember, this is just the beginning. As semantic search continues to evolve, we can expect even more transformative changes in how we access and understand information. So here's a final thought to leave you with: as search becomes more intelligent and personalized, how might that change the way we learn, work, and interact with the world around us?" 🤖 Even though there are still some small mistakes, I'm pretty impressed. It took just around 5 minutes to generate. Listen in. ⬇
-
⁉️ How to build and manage complex GenAI workflows, Part I. Kameshwara Pavan Kumar Mantha shares how to set up a robust, highly explainable, and reliable pipeline when working on #GenAI applications. In the first article of the series, we get a detailed how-to on setting up and running MLflow, Ollama, and Qdrant, all connected by LlamaIndex. 👉 Check out Pavan's recommendations https://lnkd.in/dGhQ4kKh
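The "explainable and reliable" part of that stack comes from tracing: every pipeline step records what went in, what came out, and how long it took. The idea can be mimicked without any of those libraries; the sketch below wraps each step in a decorator that appends a span to a trace log, which is roughly what MLflow tracing does for a LlamaIndex pipeline. The step names and stub functions here are illustrative, not Pavan's code.

```python
import functools
import time

TRACE = []  # collected spans, in call order

def traced(step_name):
    """Decorator: record each call's step name, duration, and output."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            out = fn(*args, **kwargs)
            TRACE.append({
                "step": step_name,
                "seconds": time.perf_counter() - start,
                "output": out,
            })
            return out
        return wrapper
    return deco

@traced("retrieve")
def retrieve(query):
    return ["doc-1", "doc-2"]          # stand-in for a Qdrant lookup

@traced("generate")
def generate(query, docs):
    return f"answer({query}, ctx={len(docs)})"  # stand-in for the LLM call

answer = generate("what is rrf?", retrieve("what is rrf?"))
```

After one query you can inspect `TRACE` to see exactly which step produced which intermediate result, which is the debugging workflow the article's observability setup enables.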
-
🧬 ReAct agent-based assistant for molecular visualization and analysis. Adria Cabello Blanque developed an assistant to provide intuitive guidance for users interacting with the molecular visualization system PyMOL (Schrödinger). Using LangGraph (LangChain), FastAPI, and Qdrant, he made real-time search of PyMOL documentation possible, along with executing commands based on users' natural language instructions. 👉 Read here how drastically simplifying the use of complex tools works: https://lnkd.in/dNSEqqfS
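The ReAct pattern behind such an assistant alternates model output, tool calls, and observations until the model emits a final answer. Here is a dependency-free toy sketch of that loop, with a scripted stand-in for the LLM and stubbed tools; in the real assistant the search tool would query a Qdrant index of PyMOL docs, and all names below (tool names, the canned replies) are hypothetical.

```python
import re

# Tool registry: what the agent is allowed to do. Both tools are stubs here.
TOOLS = {
    "search_docs": lambda q: f"docs about {q}",
    "run_command": lambda c: f"executed {c}",
}

def scripted_llm(prompt):
    """Stand-in for the LLM: emits a ReAct-style action, then a final answer."""
    if "Observation:" not in prompt:
        return "Action: search_docs[color protein]"
    return "Final Answer: use `color red, protein`"

def react_loop(question, llm, max_steps=5):
    """Run the ReAct cycle: act, observe, append, repeat until a final answer."""
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        reply = llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        m = re.match(r"Action: (\w+)\[(.*)\]", reply)
        tool, arg = m.group(1), m.group(2)
        observation = TOOLS[tool](arg)          # execute the requested tool
        prompt += f"\n{reply}\nObservation: {observation}"
    return None

answer = react_loop("How do I color a protein red?", scripted_llm)
```

LangGraph replaces this hand-rolled while-loop with an explicit state graph, but the control flow (model chooses a tool, runtime runs it, observation is fed back) is the same.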
-
🔥 Free Bootcamp on Applied Agentic GenAI ❓ When: November 15th & 16th, 2024 ❓ Who: Our distinguished ambassador, Kameshwara Pavan Kumar Mantha, and the CEO of Antz, Sashank Pappu ❓ What: Getting an advanced conceptual understanding of Multimodal RAG and Agentic RAG-based solutions ❓ Where: https://lnkd.in/g-mvmxgb
-
Qdrant reposted this
What an incredible weekend at #GenAISummit! The connections and insights were truly inspiring! Though it was our first time meeting many users, it’s hard to think of them as “strangers” when they’ve entrusted Qdrant with some of their most critical projects and innovations. We were also excited to meet the ML team from Johnson & Johnson and hear how they succeeded with Qdrant! A big shoutout as well to our amazing friends who stopped by the booth! - Jaakko Timonen from Softlandia Ltd. - Arijit Bandyopadhyay from Intel Corporation - Daniel Svonava from Superlinked - Chenhe Gu from Dify - Daniel Gallego Vico from Zylon by PrivateGPT - Mike Chrabaszcz from AI Makerspace Dmytro Spodarets thank you for hosting the panel on #vectorsearch. We enjoyed hearing the perspectives of Charles Xie and Jobi George on the future of #vectordatabases! Thank you to everyone for your engagement, feedback, and trust. We’ll be following up in the coming days to continue these valuable conversations. Here’s to a week filled with fresh ideas and continued innovation! #Qdrant #AI #GenAI #MachineLearning #TechCommunity
-
-
Is Vision All You Need? 👀 Text chunking methods in RAG are resource-demanding and often result in the loss of significant visual context. But what if you could skip chunking entirely? In his latest blog, Olli-Pekka Heinisuo shares how VLMs are revolutionizing RAG. By indexing entire document pages as images, strategies like ColPali can now capture both text and visual context faster, with no need for chunking. 👉 Read the full blog: https://lnkd.in/d_SF3Mxw 🖥️ Try out the V-RAG demo on Softlandia Ltd.'s GitHub: https://lnkd.in/dzqF-PiW
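ColPali-style retrieval scores a page by late interaction: each query-token embedding is compared against every image-patch embedding of the page, the best match per token is kept, and the per-token maxima are summed (the "MaxSim" score). A toy numeric sketch of that scoring, using made-up 2-d vectors instead of real model outputs:

```python
def maxsim_score(query_tokens, page_patches):
    """Late-interaction score: for each query token embedding, take its best
    dot product against any patch embedding on the page, then sum over tokens."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return sum(max(dot(q, p) for p in page_patches) for q in query_tokens)

# Toy embeddings: page A has patches matching both query directions,
# page B matches neither well.
query  = [[1.0, 0.0], [0.0, 1.0]]
page_a = [[0.9, 0.1], [0.1, 0.9]]
page_b = [[0.1, 0.1], [0.2, 0.0]]

scores = {name: maxsim_score(query, patches)
          for name, patches in [("a", page_a), ("b", page_b)]}
best = max(scores, key=scores.get)
```

Because every patch keeps its own vector, layout, figures, and tables contribute to the score directly; this is why whole-page image indexing can skip the text-chunking step the post criticizes.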
-
Qdrant reposted this
Most people just play Mario Kart ... ML Engineers? We make Qdrant do it for us instead 😅 Today I bring you the most convoluted image search application ever implemented by a human being: Qdrant Kart ----- 💠 What? I know, I know. It's not efficient, it's not the way to do things, I could have used a simple CNN finetune, etc. But I bet you've never seen an image search application that consists of playing Mario Kart 64 ... Do I have your attention? 👀 Nice, so let's take a look at the three steps the project is divided into 👇 ----- 💠 Data collection Possibly the funniest data collection ever. It is about ... playing Mario Kart! 🎮 At a fixed time interval (e.g. 200 ms) we take a screenshot of the game and record the joystick buttons being pressed at that moment (e.g. B was pressed, A was not pressed, turning right 30 degrees, etc.). 💠 Inserting embeddings in Qdrant Next, I created a Qdrant Docker container and used ResNet50 to generate embeddings for all the images gathered in the previous step. Additionally, I included the joystick button information as part of the payload for each embedding. 💠 Qdrant plays Mario Kart I'm going to simplify all the work (99% of the project) that I've invested in configuring gym-mupen64plus, making the input plugin work, etc. 😅 Essentially, the Qdrant agent takes the current frame, converts it into an embedding using ResNet50 (or any other CNN architecture), and searches Qdrant for the top 5 nearest neighbors. It then averages the buttons that were being pressed in those 5 examples to determine the next move, which is sent back to the emulator. That's, in a nutshell, the behavior of the agent loop. You can see it in action in the video below 👀 #mlops #machinelearning #datascience ----- 💡 Follow me for relevant content on production ML, MLOps and Generative AI
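The agent loop in the last step can be sketched in a few lines of plain Python: find the recorded frames most similar to the current one and average their controls. Toy 2-d vectors stand in for the ResNet50 embeddings, a list of tuples stands in for the Qdrant collection and its payloads, and the control names are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Recorded (frame embedding, controls) pairs gathered while playing.
# "steer" is an analog axis in [-1, 1]; "a" is the accelerate button (0/1).
memory = [
    ([1.0, 0.0], {"steer":  0.6, "a": 1}),
    ([0.9, 0.1], {"steer":  0.4, "a": 1}),
    ([0.0, 1.0], {"steer": -0.8, "a": 0}),
]

def next_move(frame_vec, k=2):
    """Top-k nearest recorded frames, then average their controls.
    Analog axes are averaged; the button is rounded to pressed/not pressed."""
    top = sorted(memory, key=lambda m: cosine(frame_vec, m[0]), reverse=True)[:k]
    avg = lambda key: sum(controls[key] for _, controls in top) / len(top)
    return {"steer": avg("steer"), "a": round(avg("a"))}

move = next_move([1.0, 0.05])  # a frame close to the first two recordings
```

In the real project the `sorted(...)` line is a single Qdrant nearest-neighbor search and the controls ride along as payload; the averaging step is the same.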
-
RAG, but with Vision Language Models! 📄👀 It's no secret that traditional RAG systems can miss out on important visual context. By integrating vision models, we’re transforming retrieval to deliver more accurate, context-aware results. That’s what Kameshwara Pavan explores with a dual-stream RAG setup using Vision Language Models (VLMs). 🔧 Key Tools & Strategies: ▪ Dual Processing: PDFs are split into text (extracted with pypdf) and images (pdf2image). ▪ Qdrant Multi-Vector Storage: Stores both text and image embeddings, using CLIP for visuals and MiniLM for text. ▪ Smart Retrieval: Uses Reciprocal Rank Fusion (RRF) to fetch the most relevant content, analyzing visuals with OpenAI’s GPT-4o Vision model. Build a RAG setup that gets the full picture — literally: https://lnkd.in/gPNHKBVh
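Reciprocal Rank Fusion, the retrieval strategy named above, merges the text-ranked and image-ranked result lists by giving each document a score of 1/(k + rank) per list it appears in, with k = 60 as the commonly used constant. A small self-contained sketch with made-up page IDs:

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank(d)),
    with rank starting at 1; documents absent from a list contribute nothing."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

text_hits  = ["page3", "page1", "page7"]   # e.g. from the MiniLM text stream
image_hits = ["page1", "page9", "page3"]   # e.g. from the CLIP image stream
fused = rrf([text_hits, image_hits])
```

Pages ranked well in both streams ("page1", "page3") float to the top even though neither stream put them in the same order, which is exactly why RRF works well for fusing heterogeneous retrievers without score calibration.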