News from Planet PostgreSQL
pgSCV 0.6.0 has been released, with two new features and minor fixes and improvements.
pgSCV is a Prometheus exporter and monitoring agent for PostgreSQL environments. The project's goal is to provide a single tool (exporter) for collecting metrics about PostgreSQL and related services.
In my opinion, one of the features most sorely missing from Postgres has always been the ability to ignore nulls in analytic functions. Other databases support an IGNORE NULLS clause, which makes the task much easier. Let's start by creating a simple, two-column table and populating it with data:

create table tbl (
    id bigint primary key,
    a  int
);

insert into tbl (id, a)
select s,
       case when random() < .5 then s end
from generate_series(0, 100) s;
The problem is to generate a column (lnn, for "last not null") containing the last non-null value of a, ordered by id:
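One well-known workaround for the missing IGNORE NULLS clause (a sketch, not necessarily the approach the article itself takes) partitions the rows into groups using a running count of non-null values:

```sql
-- count(a) counts only non-null values, so every non-null row
-- starts a new group; within a group there is at most one
-- non-null value, which max(a) then recovers as "last not null".
-- Uses the tbl(id, a) table created above.
select id, a,
       max(a) over (partition by grp) as lnn
from (
    select id, a,
           count(a) over (order by id) as grp
    from tbl
) s
order by id;
```

Rows with leading nulls (before any non-null value appears) fall into group 0 and correctly get a NULL lnn.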
How I implemented a kind of Oracle-to-PostgreSQL backup.
PostgreSQL contains some hidden gems which have been around for many years and silently help to speed up your queries. They optimize your SQL statements in a clever and totally transparent way. One of those hidden gems is the ability to synchronize sequential scans. This feature has actually been around for 15+ years, but has gone mostly unnoticed by end users. However, if you are running data warehouses and analytical workloads, you might have already used synchronized sequential scans without knowing it.
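The behavior is governed by a configuration parameter, so you can check or toggle it yourself; a minimal sketch:

```sql
-- synchronize_seqscans is on by default
SHOW synchronize_seqscans;

-- disable it for the current session, e.g. when you need the
-- (otherwise unspecified) row order of a plain seq scan to be repeatable
SET synchronize_seqscans = off;
```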
In many open source communities it is difficult to determine who counts as a contributor. Some projects take an exclusive view, requiring a direct contribution before someone is considered a contributor. Looking at this holistically, we find that a project succeeds only when there is a mutual connection between the hands-on team and those who support it. Without that connection, PostgreSQL would be just a fever dream of academic pursuit.
In the recent Postgres Community Survey by Timescale, nearly 50% of users reported that they started using Postgres within the last 5 years, including almost 20% within just the last 2 years. With the pandemic and subsequent lockdowns wiping out so many local user groups and in-person conferences, being able to get help and network with other Postgres users online has never been more important.
© Laurenz Albe 2021
When analyzing customers' problems, I have seen the mess you can get into if you disable autocommit in your interactive client, so I'd like to bring this topic to a wider audience. What is autocommit?
In PostgreSQL, as in any other ACID-compliant database, each statement runs in a transaction: if the statement causes an error, PostgreSQL undoes all of its effects.
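In psql, autocommit is on by default, so each statement is committed as its own transaction. Turning it off means the first statement implicitly opens a transaction that stays open, holding locks, until you end it yourself. A minimal illustration (the accounts table is hypothetical):

```sql
-- with autocommit on (the default), each statement commits immediately
\set AUTOCOMMIT off

-- from now on, the first statement implicitly opens a transaction...
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- hypothetical table

-- ...which keeps its locks until you explicitly finish it
COMMIT;   -- or ROLLBACK;
```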
PostgreSQL relies on the concept of template databases to create new databases.
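Concretely, CREATE DATABASE clones template1 unless you name another template explicitly; a short sketch (the database names are hypothetical):

```sql
-- the default: clone template1, including any objects you added to it
CREATE DATABASE appdb;

-- clone the pristine template0 instead, e.g. to choose a different
-- encoding or collation than template1 was created with
CREATE DATABASE appdb_c
    TEMPLATE template0
    ENCODING 'UTF8'
    LC_COLLATE 'C'
    LC_CTYPE 'C';
```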
Postgres is a convenient platform to manage and manipulate date and timestamp information, and included in that is the ability to manage time zones. Postgres can be compiled to use pre-installed time zone definitions, or --with-system-tzdata can be used to specify an external time zone definition directory.
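A brief sketch of the time zone handling those definitions enable:

```sql
-- the session time zone is resolved against the loaded tzdata definitions
SET timezone = 'Europe/Vienna';

-- timestamptz values are stored in UTC and rendered in the session zone
SELECT now();

-- convert explicitly with AT TIME ZONE
SELECT now() AT TIME ZONE 'UTC';

-- list zone names available from the time zone definition directory
SELECT name FROM pg_timezone_names LIMIT 5;
```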
We spend time day in, day out, answering the questions that matter and coming up with solutions that make the most sense. However, sometimes a question comes up that is just so darn…interesting that even if there are sensible solutions or workarounds, it still seems like a challenge just to take the request literally. Thus was born this blog series, Devious SQL.
A week ago I announced pgSCV, a new metrics exporter for PostgreSQL. Afterwards, several people asked me about dashboards: it would be nice to ship ready-made dashboards for pgSCV. Making dashboards was already on my todo list, so the request didn't surprise me.
PostgreSQL has gained two features that help in getting information about the memory usage of sessions. First, as of this commit, there is a new system view that reports the memory usage of a session:
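The view in question is presumably pg_backend_memory_contexts (added in PostgreSQL 14), which exposes the memory contexts of the current backend; a quick look:

```sql
-- memory contexts of your own backend, largest first
SELECT name, parent, total_bytes, used_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;
```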
This post examines a common database design decision: the choice between the BIGINT and INT data types. You may already know that BIGINT uses twice the storage on disk (8 bytes per value) compared to INT (4 bytes per value). Knowing this, a common decision is to use INT wherever possible, resorting to BIGINT only when it is obvious that the column will store values greater than 2,147,483,647 (the maximum of INT).
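You can verify the per-value sizes directly with pg_column_size; note, though, that alignment padding in multi-column rows can erase part of the apparent savings:

```sql
-- 4 bytes vs. 8 bytes per value
SELECT pg_column_size(1::int)    AS int_bytes,
       pg_column_size(1::bigint) AS bigint_bytes;

-- the INT range tops out at 2^31 - 1
SELECT (2^31 - 1)::int AS int_max;   -- 2147483647
```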