Just announced 👇 We're spending an hour with the co-founders of @useblacksmith, a fast-growing startup backed by @ycombinator. The topic? Outgrowing Postgres, and what to do when your application database can no longer handle your analytics workloads. Join the discussion here:
-
Postgres is the most popular database in the world, but for real-time analytics you'll quickly run into its limitations. Join us on September 10th for a conversation with Blacksmith, a YC-backed startup that's shaking up the world of Continuous Integration. They'll share their analytics journey from Postgres to alternatives, including Tinybird. In this session, you'll learn:
- What matters most when building user-facing analytics
- How analytics differs from typical relational workloads
- When they could tell their Postgres instance was no longer sufficient
- How to choose between scaling Postgres and migrating analytics to another solution
- How to choose between building it yourself and going serverless
-
🎉 Exciting News! For Onehouse and those rooting for the open data lakehouse 🎉
We are happy to announce our $35M Series B round of funding, led by Craft Ventures. The new funding adds more fuel to the Onehouse rocketship, accelerating how we redefine the cutting edge of open-source data lakehouse technology and bring our product to as many customers as possible.
🚢 Alongside the funding, we are also bringing two new products to life:
1️⃣ Onehouse LakeView, a free tool for the Apache Hudi OSS community to monitor lakehouse data tables and identify inefficiencies.
2️⃣ Onehouse Table Optimizer, a managed service that optimizes tables for 10x gains in data ingestion/ETL and query performance - for Apache Hudi, and for Apache Iceberg/Delta Lake via Apache XTable (Incubating).
We thank our investors, early customers, partners, and the Onehouse team for their unwavering support and dedication. Together, we are building the future of data!
✍️ Check out this blog from our founder/CEO Vinoth Chandar to learn more! https://lnkd.in/gWjqiK3r
#dataengineering #bigdata #data #datawarehouse #datalake #datalakehouse #datamanagement #streamprocessing #datascience #machinelearning #opensource #startups #cloud
-
Ready to take your data strategy to the next level? 🚀 HexaCluster's got your back every step of the way! Whether you're tackling database migrations, diving deep into PostgreSQL, or ready to unleash the power of machine learning with MLOps, our team is here to bring your vision to life. 💪 Let's work together to turn your data dreams into reality! ✨ Visit us at: https://hexacluster.ai/ #postgresql #databasemigrations #mlops #opensourceleaders #postgres #24X7support #machinelearning #ml #hexacluster #postgrescontributors
-
🚀 New on Medium: Surprising Ways People Are Using PostgreSQL PostgreSQL is more than just a relational database—it’s a powerhouse that’s popping up in unexpected places. From serving as a Kubernetes datastore to powering feature flag tools like Unleash, Postgres is proving its versatility across the tech stack. Check out my latest Medium post where I dive into some of the most unique and unconventional use cases for Postgres. Whether you’re a seasoned dev or just curious about new tech, you might be surprised by what this open-source giant can do. 👉 https://lnkd.in/eSV8JP-Y
-
I've been digging into Mage, a workflow orchestration tool, for the last few days as part of my learning journey with DataTalksClub in the Data Engineering Zoomcamp 2024 cohort.
👉 Parameterized execution: using runtime and block variables
📌 Testing other connectors like MySQL or MongoDB
📍 Deploying Mage to GCP
I'm building a parameterized workflow that extracts Airbnb information for a selected city and collects it into MongoDB (a minimal sketch of the steps is below). Then I'll be able to run a containerized Flask API to explore some data relationships.
🛠 Load data from Opendatasoft using GET requests
♻ Transform the JSON data and remove unused keys
📥 Load the transformed data into MongoDB Atlas and index it
📂 Save the data in GCP
❔ Query the data through a REST API
And all of it can be done in a very intuitive way thanks to Mage's features! Let's keep working on it!!!
#dezoomcamp #dataengineering #dataorchestration #mage #nosql
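For reference, here is a minimal Python sketch of the extract-transform-load steps described above, outside of Mage itself. The Opendatasoft dataset id, the field names, and the Atlas connection string are placeholders for illustration, not the actual pipeline configuration.

```python
import requests
from pymongo import MongoClient

# Hypothetical dataset id and parameters -- the real pipeline is parameterized by city.
API_URL = "https://public.opendatasoft.com/api/records/1.0/search/"
params = {"dataset": "airbnb-listings", "rows": 100, "refine.city": "Barcelona"}

# 1. Load data from Opendatasoft with a GET request.
resp = requests.get(API_URL, params=params, timeout=30)
resp.raise_for_status()
records = resp.json().get("records", [])

# 2. Transform the JSON: keep only the fields we care about, drop the rest.
KEEP = {"name", "city", "room_type", "price"}
docs = [
    {"record_id": r.get("recordid"),
     **{k: v for k, v in r.get("fields", {}).items() if k in KEEP}}
    for r in records
]

# 3. Load into MongoDB Atlas and index by city (placeholder connection string).
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
collection = client["airbnb"]["listings"]
if docs:
    collection.insert_many(docs)
    collection.create_index("city")
```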
-
DE Zoomcamp Week 1 Achievements 🚀
1. Successfully ingested data into PostgreSQL using a robust pipeline.
2. Executed powerful SQL queries on the freshly loaded data.
3. Embarked on the Terraform journey, unraveling infrastructure magic!
Excited for the upcoming challenges and learning in Week 2! #DEZoomcamp #DataEngineering #SQL #Terraform
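As an illustration of the ingestion step, here is a minimal chunked-load sketch with pandas and SQLAlchemy; the connection string, CSV file, and table name are placeholders rather than the exact Zoomcamp setup.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string and file -- swap in your own database and dataset.
engine = create_engine("postgresql://user:password@localhost:5432/zoomcamp")

# Stream the CSV in chunks so large files don't need to fit in memory.
reader = pd.read_csv("trips.csv", iterator=True, chunksize=100_000)

for i, chunk in enumerate(reader):
    # The first chunk creates/replaces the table; later chunks append to it.
    chunk.to_sql("trips", engine, if_exists="replace" if i == 0 else "append", index=False)
    print(f"inserted chunk {i} ({len(chunk)} rows)")
```

Once the table is loaded, the SQL queries from step 2 can run directly against it, e.g. `SELECT count(*) FROM trips;`.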
-
🚀 Excited to share my Medium article on "Building an Elasticsearch-Powered Question Answering System: Setup and Pipeline Construction"! 🛠️🔍 Dive into the world of open-source Q&A models, leveraging Elasticsearch and Haystack. In this article, we explore the setup process and the construction of a powerful pipeline. 🌐💡
Read the full article on Medium: https://lnkd.in/d_U-HMQ4
Explore the code on GitHub: https://lnkd.in/dwJT49FB
For more insights, visit my blog: https://lnkd.in/dzZ-eg8c
#Elasticsearch #QuestionAnswering #HaystackLibrary #OpenSource #MachineLearning #PLMErrorAnalysis 🔍🤖✨
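For readers who want the shape of such a pipeline before reading the article, here is a minimal sketch assuming Haystack 1.x and an Elasticsearch instance on localhost; the index name, model, and example document are illustrative and not taken from the article's code.

```python
from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

# Assumes Elasticsearch is reachable on localhost:9200 (index name is illustrative).
document_store = ElasticsearchDocumentStore(host="localhost", index="qa_documents")
document_store.write_documents([
    {"content": "Elasticsearch is a distributed search and analytics engine built on Lucene."},
])

# Sparse retriever to fetch candidate passages, extractive reader to pull out answers.
retriever = BM25Retriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")

pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)
result = pipeline.run(
    query="What is Elasticsearch built on?",
    params={"Retriever": {"top_k": 5}, "Reader": {"top_k": 3}},
)
for answer in result["answers"]:
    print(answer.answer, answer.score)
```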
-
Meet the StarTree All-Stars: Ken Krugler 🌟
Through his data consulting company, Scale Unlimited, Ken helps companies around the world design and develop solutions for big data processing and search-based analytics problems using Apache Pinot, Flink, and other technologies. Ken is a member of The Apache Software Foundation and an active contributor to the Pinot community. Learn more about him:
Q: What is your favorite movie quote?
A: "Thus have we made the world... thus have I made it."
Q: What's your favorite Pinot feature and why?
A: Batch generation of pre-indexed segments with metadata push. We can efficiently build segments from historical data using a Flink workflow, store the results in S3 or HDFS, and efficiently update tables.
Q: What is a community project or contribution you're proud of?
A: The talk I gave about comparing Pinot and Elasticsearch. (Editor's Note: You can find it here: https://lnkd.in/dj43iJt7)
Q: What's some advice for developers looking to make an impact in the community?
A: Lurk for a while to get a sense for how the community works, then start cherry-picking issues that you feel comfortable working on. Documentation fixes & updates will provide lots of karma :)
#StarTree #Community #ApachePinot
-
Excited to share this insightful article on how Elasticsearch works! 🚀 Whether you're new to Elasticsearch or a seasoned user, this breakdown of its architecture and core components by Arton Demaku provides valuable insights into its distributed nature, data storage, querying capabilities, and more. Check it out and level up your Elasticsearch knowledge! #Elasticsearch #DataAnalytics #TechInsights https://lnkd.in/gZMcSjqb
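To make the data storage and querying part concrete, here is a tiny example using the official Elasticsearch Python client (8.x assumed); the index name and document are made up for illustration.

```python
from elasticsearch import Elasticsearch

# Assumes a local, unsecured Elasticsearch node; adjust URL/auth for real clusters.
es = Elasticsearch("http://localhost:9200")

# Index a document; Elasticsearch stores it in one of the index's shards.
es.index(index="articles", id="1", document={
    "title": "How Elasticsearch Works",
    "body": "Documents live in shards that are distributed and replicated across nodes.",
})
es.indices.refresh(index="articles")  # make the new document visible to searches

# Full-text query: results come back ranked by relevance score.
resp = es.search(index="articles", query={"match": {"body": "shards"}})
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```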
-
🚀Big news! Unstructured now integrates with Couchbase! With the new destination connector in the `unstructured-ingest` library, you can easily ingest data from 20+ sources, preprocess 25+ unstructured file types using the Unstructured Serverless API, chunk, embed, and seamlessly upload RAG-ready documents into Couchbase Capella. Learn more in the docs: https://lnkd.in/eRa5B_EK
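The linked docs cover the actual `unstructured-ingest` connector; as a rough illustration of the same flow, here is a hand-rolled sketch that partitions and chunks a file with the open-source `unstructured` library and upserts the chunks into Couchbase with the Python SDK. The endpoint, credentials, bucket, and file are placeholders, the embedding step is omitted for brevity, and this is not the connector's own API.

```python
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from unstructured.chunking.title import chunk_by_title
from unstructured.partition.auto import partition

# 1. Partition an unstructured file into elements, then chunk them for RAG.
elements = partition(filename="report.pdf")  # placeholder file
chunks = chunk_by_title(elements)

# 2. Connect to Couchbase (Capella endpoint and credentials are placeholders).
auth = PasswordAuthenticator("username", "password")
cluster = Cluster("couchbases://cb.example.cloud.couchbase.com", ClusterOptions(auth))
collection = cluster.bucket("rag").default_collection()

# 3. Upsert each chunk as its own document, keeping the element metadata.
for i, chunk in enumerate(chunks):
    collection.upsert(f"chunk::{i}", {
        "text": chunk.text,
        "metadata": chunk.metadata.to_dict(),
    })
```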