⏰ Final reminder! Tomorrow is your chance to learn how to build a RAG application in just 5 minutes! 🔍 What you’ll learn: ▪ Fast development with Bootstrap RAG ▪ Testing and evaluation strategies ▪ Key packages for efficient RAG development Register now: https://lnkd.in/eEmJ2hB2
Qdrant
Software Development
Berlin, Berlin · 27,202 followers
Massive-Scale Vector Database
About
Powering the next generation of AI applications with advanced, high-performance vector similarity search technology. Qdrant is an open-source vector search database. It deploys as an API service providing search over high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and much more. Make the most of your unstructured data!
- Website: https://qdrant.tech
- Industry: Software Development
- Company size: 11–50 employees
- Headquarters: Berlin, Berlin
- Type: Privately held
- Founded: 2021
- Specialties: Deep Tech, Search Engine, Open-Source, Vector Search, Rust, Vector Search Engine, Vector Similarity, Artificial Intelligence, and Machine Learning
Locations
- Primary: Berlin, Berlin 10115, DE
Employees at Qdrant
Updates
-
💡 Using ColPali and Binary Quantization for efficient document retrieval! In our latest video, Sabrina A. explains ColPali’s architecture and demonstrates a practical example of how combining it with Binary Quantization can enhance retrieval efficiency. 🎥 Watch the full video on YouTube: https://lnkd.in/gxWgYUyv 🖥 See the code: https://lnkd.in/g3kjVytW
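The idea behind Binary Quantization can be sketched in a few lines: each float dimension of an embedding is reduced to a single bit (its sign), and candidates are compared by Hamming distance over those bits. This is an illustrative toy in plain Python, not Qdrant's actual implementation; the vectors and document names are made up.

```python
# Toy sketch of Binary Quantization: keep only the sign bit of each
# dimension, then rank candidates by Hamming distance to the query.
def binarize(vec):
    return [1 if x > 0 else 0 for x in vec]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

query = [0.3, -1.2, 0.8, -0.1]
docs = {
    "doc_a": [0.5, -0.9, 1.1, -0.4],   # same sign pattern as the query
    "doc_b": [-0.7, 1.0, -0.2, 0.6],   # opposite sign pattern
}

q_bits = binarize(query)
ranked = sorted(docs, key=lambda d: hamming(q_bits, binarize(docs[d])))
print(ranked[0])  # doc_a
```

In practice the binary index is used for a fast first pass, and the original float vectors rescore the top candidates, which is the combination the video demonstrates.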
-
T-minus 24 hours until our next webinar! Thierry Damiba will be talking with Kameshwara Pavan Kumar Mantha about building GenAI at warp speed with Qdrant and Bootstrap RAG.
Lead Software Engineer - AI, LLM @ OpenText | Pursuing PhD in Generative AI (LLM) | Ambassador @Qdrant
🚀 Excited to announce that I will be speaking at a webinar "5 Minute RAG: Learn How to Build GenAI at Warp Speed" along with Thierry Damiba. If you're looking to accelerate your GenAI journey and build solutions that scale with speed and precision, this is the session for you. 🚀 📅 Date: 29-Oct-2024 Register at: https://lnkd.in/gz2au6Pe #GenAI #LLM #RAG
-
Qdrant reposted this
Modern Sparse Neural Retrieval from theory to practice: a comprehensive overview comparing the different approaches used in modern sparse neural retrieval and their evolution, from DeepCT, DeepImpact, TILDEv2, COIL, and SPARTA to SPLADE++. "In areas where keyword matching is crucial but traditional approaches are insufficient for initial retrieval, semantic matching adds significant value. Dense retrievers tend to return many false positives, while sparse neural retrieval helps narrow them down and can be a valuable option for scaling, especially when working with large datasets." Article by Evgeniya Sukhodolskaya https://lnkd.in/ewD4S9aA
-
🔏 Qdrant Cloud users can't imagine running Qdrant without authentication enabled. The API key ensures that no unauthorized person can access the data you keep in your collections without knowing the secret. ℹ️ But did you know that basic authentication can also be enabled in the open-source version? The API key can be specified in the configuration file or by setting the QDRANT__SERVICE__API_KEY environment variable. Check out what else we suggest regarding security: 📋 https://buff.ly/3BZMEiY
Security - Qdrant
qdrant.tech
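Once the API key is set, clients authenticate by sending it with every request in the api-key header. A minimal sketch with the standard library, assuming a local instance on the default port; the request is only constructed here, not sent:

```python
import os
import urllib.request

# Read the same variable the server was configured with;
# "my-secret-key" is a placeholder fallback for illustration.
api_key = os.environ.get("QDRANT__SERVICE__API_KEY", "my-secret-key")

# Qdrant expects the key in the "api-key" HTTP header.
req = urllib.request.Request(
    "http://localhost:6333/collections",
    headers={"api-key": api_key},
)
```

The official clients take the same key as an api_key argument instead, so you rarely build the header by hand.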
-
📜 Looking for something timeproof like #BM25 but with semantic understanding? Sparse Neural Retrieval might be what you need. Discover how it's different, how it works, and how to use the latest model, SPLADE++, in Qdrant.
Why Sparse Neural Retrieval?
✅ Great for fields like medicine, law, and e-commerce, where both exact keyword matching and nuanced understanding matter.
✅ Helps cut down on false positives that dense models might introduce, refining initial retrieval accuracy.
✅ Works alongside traditional retrieval systems, bringing added value without extra complexity.
👉 Read the article by Evgeniya Sukhodolskaya https://lnkd.in/dfUnwJMU
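The scoring behind sparse neural retrieval is easy to picture: a model like SPLADE++ turns text into (term_id, weight) pairs, and two sparse vectors are scored by a dot product over the term ids they share. A toy sketch with made-up ids and weights (not Qdrant's API):

```python
# Sparse vectors as {term_id: weight}; the score is a dot product
# over the indices present in both the query and the document.
def sparse_dot(query, doc):
    return sum(w * doc[i] for i, w in query.items() if i in doc)

query = {101: 1.4, 512: 0.7}      # expanded query terms with weights
doc1 = {101: 0.9, 777: 0.3}       # shares term 101 with the query
doc2 = {512: 0.2, 900: 1.1}       # shares term 512, but weakly

scores = {
    "doc1": sparse_dot(query, doc1),   # 1.4 * 0.9 = 1.26
    "doc2": sparse_dot(query, doc2),   # 0.7 * 0.2 = 0.14
}
```

Because most weights are zero, this scoring runs over an inverted index just like BM25, which is why sparse neural retrieval slots into keyword-style infrastructure while still carrying learned semantics.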
-
Sprinklr needed a scalable solution to handle the vast data from customer interactions across 30+ digital channels. 🔍 Why Qdrant? After evaluating several options, they chose Qdrant for its easy integration, flexibility, and cost-effective speed. 🚀 The Results: ➡ 20ms P99 latency for searches on 1M+ vectors (ideal for real-time tasks like live chat). ➡ High throughput of 250 RPS under heavy query loads. ➡ Superior write performance, with indexing time for 1M vectors at less than 10% of Elasticsearch's. 📖 Wanna see how they did it? Read the full case study! 👇 https://buff.ly/4eVtNnD
-
👂#VectorWeekly: Share which embedding models work in your production. #Vector #databases are agnostic to embedding models and data types: you can put in whatever you want (at least if they support multivectors, sparse vectors, and dense vectors, as Qdrant does). However, even the best vector database can only retrieve the needed data if the model fits the production use case. There are a few ways to narrow your options before diving into experiments.
1️⃣ You can check public benchmarks like MTEB and BEIR. Yet there's a high risk that a current leader model is overfitted to the benchmark and will disappoint you in production.
2️⃣ Another approach is asking experts who've built something that actually works in production. We have you, a fantastic community of professionals around Qdrant, so we can cheat and use this approach :) Recently, during our monthly Discord Hangout, community members building production systems around textual-data retrieval recommended nomic-embed-text embeddings, which seem to outperform OpenAI's text-embedding-3-small in their use cases; snowflake-arctic-embed-l was also proposed as a decent option.
❓We'd like to hear everyone's opinion: Which embedding models are you using in production (around #search)? Is it snowflake-arctic-embed, nomic-embed-text, e5, OpenAI's text-embedding-3, jina-embeddings, BAAI/bge, or any other model for textual retrieval? Perhaps you have recommendations for other modalities?
P.S. Just two days ago, Cohere released Embed 3, a multimodal embedding model (for text and images). According to them, it works on complex reports, including graphs and charts, e-commerce product catalogues, design files, and templates. They claim it drastically outperforms CLIP, so it might be worth a try!
-
Qdrant reposted this
Martin Fowler just announced the new volume of the Technology Radar, an opinionated guide to today's technology landscape by Thoughtworks. https://lnkd.in/dbChQJMB "Again there's a lot we've learned about using GenAI here, reflected in the explosion of AI-adjacent tools, which we've found helpful in building useful AI-based systems." https://lnkd.in/dS2KfyjG The Qdrant vector similarity search engine was upgraded from Assess to Trial. "Our teams have used open-source embeddings like MiniLM-v6 and BGE for multiple product knowledge bases. We use Qdrant as an enterprise vector store with multi-tenancy to store vector embeddings as separate collections, isolating each product's knowledge base in storage. User access policies are managed in the application layer." https://lnkd.in/dkw_HQNi By the way, since version 1.9.0, granular Access Control Policies are handled on the server side using JWT tokens. https://lnkd.in/dB-MsXqT 🔐
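The server-side access control mentioned above works with standard JWTs signed with the instance's API key. A hedged sketch of how such a token could be built with only the standard library (HS256); the "access": "r" claim and the key value here are illustrative, so check the Qdrant security docs for the exact claim names your version supports:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, api_key: str) -> str:
    # Standard HS256 JWT: header.payload.signature, signed with the API key.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(api_key.encode(), f"{header}.{body}".encode(),
                   hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

# Illustrative read-only claim; the signing key must match the server's API key.
token = make_jwt({"access": "r"}, "my-instance-api-key")
```

The resulting token is then sent instead of the raw API key, so the secret itself never leaves the issuer while per-token permissions are enforced by the server.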
-