Crunchy Data

Software Development

Charleston, South Carolina 5,525 followers

The Trusted Open Source Enterprise PostgreSQL Leader

About us

Crunchy Data is the industry leader in enterprise PostgreSQL support and open source solutions. Crunchy Data was founded in 2012 with the mission of bringing the power and efficiency of open source PostgreSQL to security-conscious organizations while eliminating expensive proprietary software costs. Since then, Crunchy Data has leveraged its expertise in managing large-scale, mission-critical systems to provide a suite of products and services, including:

* Building secure, mission-critical PostgreSQL deployments
* Architecting on-demand, secure database provisioning solutions on any cloud infrastructure
* Eliminating support inefficiencies to provide customers with guaranteed access to highly trained engineers
* Helping enterprises adopt large-scale, open source solutions safely and at scale

Crunchy Data is committed to hiring and investing in the best talent available to provide unsurpassed PostgreSQL expertise to your enterprise.

Industry: Software Development
Company size: 51-200 employees
Headquarters: Charleston, South Carolina
Type: Privately Held
Founded: 2012
Specialties: PostgreSQL, Security, Kubernetes, Containers, Geospatial, PostGIS, and Cloud

Updates

  • Crunchy Data reposted this

    View profile for Utku Azman

    Director, Postgres and Database R&D

    When was the last time you truly tested the product your team is building? Last week, last month, last year? At Crunchy Data, we embrace “dogfooding”: using our own products as our customers would. While working on the new Spatial Analytics in Crunchy Bridge, I decided to put it to the test on a personal project: mapping Greece’s 15,000 km coastline to identify the best spots for water sports based on 10 years of hourly weather data and precise coastal geometry. As a result, I now have a list of hidden gems to visit with my surf gear next year, based on the time of year and time of day :). More immediately useful, though, are the insights I gained into geospatial workload challenges: where our product excels and where it can improve. Combined with feedback from other dogfooding users, like Paul Ramsey's vehicle routing with Overture data example, and real customer input, these insights guide us to solve real-world problems, unlike many teams that start with solutions and later search for problems.

    Here are some highlights from my experience. Let me know if these resonate:

    * Easily accessing geospatial data: Support for various geospatial file formats made it easy to load things like coastal geometries and water quality data from various sources (e.g. shapefile, GeoParquet).

    * Integrating cloud-native data: By creating foreign tables that pointed to online weather data, I bypassed bulky downloads and storage, selecting only the data I needed, joining it with local geospatial fact tables, and writing the relevant data directly to S3 in compressed Parquet format.

    * Query performance: Query pushdown to DuckDB improved the processing speed of some PostGIS functions by 100%, though I had to keep certain considerations in mind to get the best speedup (e.g. avoiding coordinate projections within the “push-downable” queries).

    * Seamless compatibility: I worked with both local PostgreSQL tables and foreign tables pointing to Parquet files in S3. This gave me the best of both worlds: the convenience of keeping compressed data in object storage and querying it with a powerful vectorized engine, while making full use of PostGIS capabilities such as spatial indexes. I was also able to visualize my data in QGIS from both types of tables with its built-in Postgres integration.

    Honestly, without Crunchy Bridge for Analytics (CBA), I don’t think I could have taken my fun little project to this scale. I would have had to spend a lot more time accessing, loading, and transforming data, and spend much more money storing large data files and keeping computational resources running. Thankfully, the benefits I’ve seen in my tests have applied to real-world workloads for our customers as well, who say that CBA helped them streamline complex data pipelines, make full use of their existing Postgres know-how and application logic, and avoid complex integrations. https://lnkd.in/dxGXytVp

    #postgres, #postgis, #crunchydata, #geospatial, #duckdb
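    To make the foreign-table workflow described above concrete, here is a rough SQL sketch of the pattern: a foreign table over Parquet files in S3 joined against a local PostGIS table. All table names, columns, the bucket path, and the foreign server name are hypothetical, and the exact server and OPTIONS used by Crunchy Bridge for Analytics may differ; treat this as an illustration of the shape of the queries, not the product's exact syntax.

    -- Hypothetical foreign table over hourly weather observations stored as Parquet in S3.
    -- The server name and OPTIONS keys here are illustrative assumptions.
    CREATE FOREIGN TABLE weather_hourly (
        observed_at timestamptz,
        obs_geom    geometry(Point, 4326),
        wind_speed  double precision
    )
    SERVER analytics_server
    OPTIONS (path 's3://example-bucket/weather/*.parquet');

    -- Local PostGIS table holding coastal geometry (loaded from shapefile/GeoParquet),
    -- with a spatial index so local lookups stay fast.
    CREATE TABLE coastline (
        id   bigint PRIMARY KEY,
        name text,
        geom geometry(LineString, 4326)
    );
    CREATE INDEX ON coastline USING gist (geom);

    -- Join remote observations to nearby coastline segments (0.1 degrees is roughly
    -- 11 km at the equator). Keeping the join free of coordinate re-projection helps
    -- query pushdown, per the considerations mentioned in the post.
    SELECT c.name, avg(w.wind_speed) AS avg_wind
    FROM coastline c
    JOIN weather_hourly w ON ST_DWithin(c.geom, w.obs_geom, 0.1)
    GROUP BY c.name;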

  • Crunchy Data reposted this

    View profile for Brian Pace

    Data Architect at Crunchy Data

    Migrating data between database platforms? Need to ensure your logical replication is perfectly in sync? Look no further than pgCompare! Check out our latest release, featuring new support for DB2 and enhanced functionality for case-sensitive column and table names. #DataMigration #pgCompare #DatabaseTools https://lnkd.in/eyrU6ajC

    GitHub - CrunchyData/pgCompare: pgCompare – a straightforward utility crafted to simplify the data comparison process, providing a robust solution for comparing data across various database platforms.

    github.com

  • View organization page for Crunchy Data

    Some call them buckets and some call them bins; they mean the same thing. Either way, buckets or bins are used in reporting and dashboards. Binning is taking a continuous value (like date or time) and grouping values into a discrete value like a day, week, month, quarter, or year. Binning is used for all types of continuous values, like integers and money, but for this post we’ll focus on dates. Christopher Winslett breaks down some of the most helpful Postgres functions for buckets and bins in this post. https://lnkd.in/gHgfUKSk
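    As a quick illustration of date binning with built-in Postgres functions (the orders table and its columns below are hypothetical):

    -- Bin each order into its calendar month and count orders per bin.
    SELECT date_trunc('month', ordered_at) AS month_bin,
           count(*)                        AS orders
    FROM orders
    GROUP BY month_bin
    ORDER BY month_bin;

    -- width_bucket() does the same for numeric values: ten equal-width
    -- bins for order totals between 0 and 500.
    SELECT width_bucket(total, 0, 500, 10) AS price_bin,
           count(*)                        AS orders
    FROM orders
    GROUP BY price_bin
    ORDER BY price_bin;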

  • View organization page for Crunchy Data

    We're excited to be sponsoring PostgreSQL Conference Europe this week in Athens, Greece! 🏛️ 🇬🇷 Make sure you catch the talks by our team:

    * Karen Jex on Wednesday at 3pm - Crunchy Postgres for Kubernetes: Your Virtual DBA
    * Louise Grandjonc on Thursday at 11:50am - A Deep Dive into Postgres Statistics

    We also have several folks at our sponsor table, including Roberto Mello and Jean-Paul Argudo, who are happy to talk all things Postgres with you or demo the leading approach to running Postgres natively in the cloud: Crunchy Postgres for Kubernetes.

  • View organization page for Crunchy Data

    One of the things that makes #Postgres so awesome for software development is the incredibly useful system of constraints. Constraints are a way to tell Postgres which kinds of data can be inserted into tables, columns, or rows. As an application developer, you're going to build this logic into your application as well, and that’s great. However, adding this logic to your database protects your data long-term from bad data, null statements, or application code that isn't working quite right and doesn't conform to your data requirements. Constraints are also great for catching outliers and things you didn’t account for in application code but that you know need to be caught before an insert statement.

    Postgres supports a variety of types of constraints that are worth understanding:

    * Primary/unique key constraints
    * Foreign key constraints
    * Not null constraints
    * Check constraints

    Primary key constraints are declared as the main unique identifier for your table:

    CREATE TABLE users (
        id serial PRIMARY KEY,
        name text,
        email text
    );

    CREATE TABLE rooms (
        id serial PRIMARY KEY,
        number text
    );

    Foreign key constraints are helpful when creating references between tables. For example, if we were to define a reservations table, it'd depend on the primary keys from users and rooms:

    CREATE TABLE reservations (
        user_id int references users(id),
        room_id int references rooms(id),
        start_time timestamp,
        end_time timestamp,
        event_title text
    );

    But you can also go further, for example with check constraints. Check constraints verify that some logic holds true before inserting the data. Here's a basic example that ensures the reservation start_time is before the end_time:

    ALTER TABLE public.reservations
        ADD CONSTRAINT start_before_end CHECK (start_time < end_time);

    A database without constraints is likely to be a database with data quality issues. You probably have primary keys in place, but take it a step further with other constraints to have your data quality in the best shape possible. If you want to get more familiar with constraints, check out our in-browser tutorial: https://lnkd.in/gfn7WcTE
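    The list above also mentions not-null and unique constraints, which the examples don't show. A minimal sketch against the same users table (the constraint name is just illustrative):

    -- Require an email for every user and keep addresses unique across the table.
    ALTER TABLE users
        ALTER COLUMN email SET NOT NULL,
        ADD CONSTRAINT users_email_unique UNIQUE (email);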

  • View organization page for Crunchy Data

    🗞️ We just released our schedule for #PostGIS day 2024! We have an exciting lineup of talks about #OpenSource solutions for #Geospatial. PostGIS Day is the day after #gisday, on November 21st. The event is free, online on Zoom, and runs the whole day, from afternoon in Europe to afternoon in Pacific time. Plan to join us!

    View profile for Paul Ramsey

    Executive Geospatial Engineer at Crunchy Data

    We have an amazing agenda booked for 2024 PostGIS Day. Check it out and sign up to (virtually) attend! https://lnkd.in/gJXaxUsP


Funding

Crunchy Data: 1 total round
Last round: Series A