Crunchy Data

The Trusted Open Source Enterprise PostgreSQL Leader

About us

Crunchy Data is the industry leader in enterprise PostgreSQL support and open source solutions. Crunchy Data was founded in 2012 with the mission of bringing the power and efficiency of open source PostgreSQL to security-conscious organizations and eliminating expensive proprietary software costs. Since then, Crunchy Data has leveraged its expertise in managing large-scale, mission-critical systems to provide a suite of products and services, including:

* Building secure, mission-critical PostgreSQL deployments
* Architecting on-demand, secure database provisioning solutions on any cloud infrastructure
* Eliminating support inefficiencies to provide customers with guaranteed access to highly trained engineers
* Helping enterprises adopt large-scale, open source solutions safely and at scale

Crunchy Data is committed to hiring and investing in the best talent available to provide unsurpassed PostgreSQL expertise to your enterprise.

Industry
Software Development
Company size
51-200 employees
Headquarters
Charleston, South Carolina
Type
Privately Held
Founded
2012
Specialties
PostgreSQL, Security, Kubernetes, Containers, Geospatial, PostGIS, and Cloud


Updates

Most queries against a database are short-lived. Whether you're inserting a new record or querying for a list of upcoming tasks for a user, you're not typically aggregating millions of records or sending back thousands of rows to the end user. A typical short-lived query in Postgres can easily complete in a few milliseconds or less. But lying in wait is a query that can bring everything grinding to a crawl. Queries that run too long often create cascading effects, and they most commonly take one of four forms:

* An intensive BI/reporting query that scans a lot of records and performs some aggregation
* A database migration that inadvertently updates a few too many records
* A miswritten query that wasn't intended to be a reporting query, but is now joining a few million records
* A runaway recursive query

Each of these queries is likely to scan a lot of records and shuffle the cache within your database. It may even spill from memory to disk while sorting data, or, worse, hold locks so new data can't be written. Enter your key defense for keeping your PostgreSQL database safe from these disaster scenarios: the statement_timeout configuration parameter. You can set this value at the database, user, or session level, which makes it easy to have a sane default while intentionally overriding it for long-running queries (a sketch of each level follows). If you haven't already set this on your Postgres database, what are you waiting for?
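
A minimal sketch of setting statement_timeout at each level; the database, role, and timeout values here are hypothetical, not recommendations:

    -- Database-level default for every new connection to app_db.
    ALTER DATABASE app_db SET statement_timeout = '30s';

    -- Role-level override for a role expected to run heavier queries.
    ALTER ROLE reporting_user SET statement_timeout = '15min';

    -- Session-level override, e.g. for a one-off migration.
    SET statement_timeout = '1h';
    -- Restore the session to its default afterwards.
    RESET statement_timeout;

A value of 0 disables the timeout, and a statement that exceeds its limit is canceled with an error rather than left running.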

What's new in Postgres 17? Today let's look at performance. Postgres 17 was released just last week, and we added support for it at Crunchy Data the same day it was released. New Postgres major versions often promise "better performance" just by upgrading, and with any benchmark the question is what the real-world impact is. We benchmarked RC1 against our actual production code base for Crunchy Bridge, and the results were impressive: up to a 30% improvement with no change to any code on our side.

So that's a big change, but what is the actual change behind this performance? It applies to B-tree indexes. The B-tree is Postgres' overwhelmingly most common and best-optimized index, used for lookups on a table's primary key or secondary indexes, and undoubtedly powering all kinds of applications all over the world, many of which we interact with on a daily basis. During a lookup, a B-tree is scanned, with Postgres descending through its hierarchy from the root until it finds a target value on one of its leaf pages. Previously, multi-value lookups like id IN (1, 2, 3) or id = ANY (ARRAY[1, 2, 3]) required that process to be repeated once for each of the requested values. Although not perfectly efficient, this wasn't a huge problem because B-tree lookups are very fast.

As of a Postgres 17 enhancement to nbtree's ScalarArrayOp execution, that's no longer the case. A scan with multiple scalar inputs now considers all of those inputs as it traverses the B-tree, and where multiple values land on the same leaf page, they're retrieved together to avoid repeated traversals. B-tree lookups were generally fast before, but because many frameworks commonly generate IN (1, 2, 3) queries, you may still see a significant performance gain like ours, up to 30%, just by upgrading.
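
A sketch of the lookup shapes this applies to, using a hypothetical tasks table:

    -- The B-tree index on the primary key backs these lookups.
    CREATE TABLE tasks (id bigint PRIMARY KEY, payload text);

    -- Both forms are scalar-array lookups against the same index.
    SELECT * FROM tasks WHERE id IN (1, 2, 3);
    SELECT * FROM tasks WHERE id = ANY (ARRAY[1, 2, 3]);

    -- Comparing plans before and after an upgrade can show fewer index
    -- page reads on 17 for the same IN list, since values that fall on
    -- one leaf page now share a single descent.
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tasks WHERE id IN (1, 2, 3);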

Postgres added some extra security around the public schema in Postgres 15. While it's good for security, this can cause a few hiccups when adding new database users. In Crunchy Postgres for Kubernetes, we’ve just added the ability to automatically create user schemas. This makes things easier on your users while still maintaining a good Postgres security posture. Here’s a before-and-after video of a new user trying to create a table (a plain-SQL sketch of the behavior follows):

- First, the user is created and then immediately denied CREATE TABLE access.
- After we enable AutoCreateUserSchema, the user has a new schema, ‘elephant’, to work with.
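
For context, a plain-SQL sketch of the Postgres 15 behavior and of the schema setup the operator automates; the role and schema names are hypothetical:

    -- Postgres 15 revoked CREATE on the public schema from ordinary roles.
    CREATE ROLE elephant LOGIN;
    -- Connected as elephant:
    --   CREATE TABLE t (id int);  -- ERROR: permission denied for schema public

    -- The plain-SQL equivalent of what AutoCreateUserSchema sets up:
    CREATE SCHEMA elephant AUTHORIZATION elephant;
    -- Because the default search_path is "$user", public, the role's own
    -- schema is now used first:
    --   CREATE TABLE t (id int);  -- created as elephant.t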

Funding

Crunchy Data: 1 total round
Last round: Series A
See more info on Crunchbase