News from the PostgreSQL Planet
One of the often-cited advantages of the PostgreSQL project is its resiliency, especially in the presence of rogue actors: being a distributed community, it is hard to target any individual, group, or entity and disrupt the community as a whole.
The PostGIS Team is pleased to release the first alpha of the upcoming PostGIS 3.2.0 release.
Below are the dates of Postgres major version releases and when they first became available on RDS and Aurora.
Useful for gaining a leg up in your office AWS Managed Postgres Major Version Release Date betting pool.
One of the major revelations for almost every new user to Postgres is that there’s no technical advantage to specifying columns as varchar(n) compared to just using unbounded text. Not only is the text type provided as a convenience (it’s not in the SQL standard), but using it carries no performance penalty compared to constrained character types like char and varchar. From the Postgres docs on character types (and note that character varying is the same thing as varchar):
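To make this concrete, here is a hypothetical pair of table definitions (names are illustrative) that behave identically in Postgres, storage- and performance-wise:

```sql
-- Illustrative example: these two columns perform identically in Postgres.
CREATE TABLE users_varchar (
    name varchar(120)                       -- limit enforced by the type modifier
);

CREATE TABLE users_text (
    name text CHECK (length(name) <= 120)   -- same limit, expressed as a constraint
);
```

One practical upside of the text form: the limit lives in a named constraint that can be dropped or replaced later, independently of the column's type.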
Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 15 – Revoke PUBLIC CREATE from public schema, now owned by pg_database_owner.
pgBackRest supports the JSON output format, and this can be useful for automating some information analysis.
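As a sketch of what such automation can look like, the snippet below parses a small sample modeled on the output of `pgbackrest info --output=json` (the exact fields may differ between pgBackRest versions, so treat the sample shape as illustrative):

```python
import json

# Illustrative sample modeled on `pgbackrest info --output=json`;
# field names and layout are assumptions, not an exact transcript.
sample = '''
[{"name": "main",
  "status": {"code": 0, "message": "ok"},
  "backup": [{"type": "full", "label": "20211001-120000F"},
             {"type": "incr", "label": "20211001-120000F_20211002-180000I"}]}]
'''

stanzas = json.loads(sample)
for stanza in stanzas:
    ok = stanza["status"]["code"] == 0
    print(f'{stanza["name"]}: ok={ok}, backups={len(stanza["backup"])}')
```

The same pattern (load the JSON, check `status.code`, count backups) is what a monitoring script would do against the real command's output.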
Time-series data is everywhere, and it drives decision-making in every industry. Time-series data collectively represents how a system, process, or behavior changes over time.
After a few months of research and experimentation with running a heavily DB-dependent Go app, we’ve arrived at the conclusion that sqlc is the figurative Correct Answer when it comes to using Postgres (and probably other databases too) in Go code beyond trivial uses. Let me walk you through how we got there.
First, let’s take a broad tour of popular options in Go’s ecosystem:
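Before the tour, sqlc's own model is worth sketching: you write annotated SQL and the tool generates type-safe Go functions from it. A minimal, hypothetical query file (table and query names are illustrative):

```sql
-- name: GetAuthor :one
SELECT id, name, bio FROM authors
WHERE id = $1;

-- name: ListAuthors :many
SELECT id, name, bio FROM authors
ORDER BY name;
```

From annotations like these, sqlc generates Go methods (e.g. a `GetAuthor(ctx, id)` returning a typed struct), so the SQL stays SQL and the Go side stays compile-time checked.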
Over the years, many of our PostgreSQL clients have asked whether it is better to create indexes before or after importing data. Should you disable indexes when bulk loading data, or keep them enabled? This is an important question for people involved in data warehousing and large-scale data ingestion. So let’s dig in and figure it out:
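The pattern under discussion can be sketched against a hypothetical table (names and the file path are illustrative): load the data first, then build the index in one pass, rather than maintaining the index row by row during the load.

```sql
CREATE TABLE measurements (tick timestamptz, value numeric);

-- Bulk load with no indexes in place
COPY measurements FROM '/path/to/data.csv' WITH (FORMAT csv);

-- One sorted index build at the end
CREATE INDEX ON measurements (tick);
```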
If you use Postgres and Python together you are almost certainly familiar with psycopg2. Daniele Varrazzo has been the maintainer of the psycopg project for many years. In 2020 Daniele started working full-time on creating psycopg3, the successor to psycopg2. Recently, the Beta 1 release of psycopg3 was made available on PyPI. This post highlights two pieces of happy news about psycopg3:
Here is a quick test I did after reading:
The PostGIS development team is pleased to provide bug-fix and performance-enhancement releases 3.1.4 and 3.0.4 for the 3.1 and 3.0 stable branches.
3.1.4: This release supports PostgreSQL 9.6–14.
Recently I was tasked with familiarizing myself with the Foreign Data Wrapper (FDW) interface API to build a new FDW capable of vertical / columnar sharding, meaning an FDW that collects column data from multiple sources and combines it into a single query result. I will document and blog about the vertical sharding in later posts. In this post, I would like to share some key findings about the parts of the FDW interface related to foreign scans.
As a PostgreSQL Support Engineer, one common scenario we experience is a slow system on a reasonably powerful machine. In these cases, we often see that max_connections is set to 10,000 or more, sometimes even as high as 30,000. While we will advise that max_connections is too high and needs to be lowered, the usual response is, “Well, most of those connections are idle, so they shouldn’t affect performance.” This statement is not true, as an idle connection is not weightless.
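A common remedy is to cap max_connections at a modest value and put a connection pooler such as PgBouncer in front of the database. A hypothetical sketch (the numbers are illustrative and depend entirely on your hardware and workload):

```ini
# postgresql.conf -- illustrative value, not a recommendation
max_connections = 200

# pgbouncer.ini -- funnel many client connections onto few server ones
[pgbouncer]
pool_mode = transaction
default_pool_size = 20
```

With transaction pooling, thousands of application connections can share a small, mostly busy set of server backends instead of holding thousands of idle ones open.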