News from Planet PostgreSQL
Contributing to open-source projects can be intimidating – and PostgreSQL is no exception. As a Postgres contributor, I share my hard-earned tips to help you make your first contribution (or contribute more).
The article is published on the Timescale blog:
A “materialized view” is a database object that stores the result of a precalculated database query and makes it easy to refresh that result as needed. Materialized views are an integral feature of pretty much all advanced database systems. Naturally, PostgreSQL also provides support for materialized views, and offers the end user a powerful tool to handle more time-consuming requests.
The main questions are now: What are the pitfalls, and how can you make use of materialized views in the first place? Let’s dive in and find out.
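The basic workflow is straightforward: precompute a query into a materialized view, then refresh it when the underlying data changes. A minimal sketch — the table and view names here are illustrative, not from the article:

```sql
-- Precompute an expensive aggregation into a materialized view.
-- "sales" and "sales_summary" are hypothetical names.
CREATE MATERIALIZED VIEW sales_summary AS
SELECT product_id, sum(amount) AS total_amount
FROM sales
GROUP BY product_id;

-- Reads are now a cheap scan of the stored result:
SELECT * FROM sales_summary WHERE product_id = 42;

-- Recompute the stored result when the underlying data has changed:
REFRESH MATERIALIZED VIEW sales_summary;
```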
Having worked with databases for 32 years, I have always lived in a world of locks, sharing, and complexity, but sometimes, I look out the window and wonder, "Why does everything have to be so hard?" After a while, I just give up figuring it out and get back to work.
On a particular project, Materialized Views (MVs) were being used quite extensively in the applications that were being migrated from Oracle to PostgreSQL. One thing I noticed was missing was a VACUUM ANALYZE on these MVs after they were created or refreshed. It is extremely important to integrate VACUUM ANALYZE commands into this process. Here is the reasoning behind it.
1. ANALYZE – updates the statistics for the MV so that the PG Optimizer can make the best decision possible for access path determination for SQL statements running against these MVs.
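In practice, that means following every refresh with an explicit VACUUM ANALYZE. A short sketch, using a hypothetical view name:

```sql
-- Refresh the materialized view, then update its statistics so the
-- planner can choose good access paths. "sales_summary" is illustrative.
REFRESH MATERIALIZED VIEW sales_summary;
VACUUM ANALYZE sales_summary;
```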
Assume that jb is a column of type jsonb; then
raw_size = pg_column_size(jb::text::jsonb) -- the size of the jsonb value in memory
compressed_size = pg_column_size(jb) -- the stored size of the jsonb value (equal to raw_size if not TOASTed and not compressed)
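Put together as a single query — assuming a hypothetical table t with a jsonb column jb — the two sizes can be compared side by side:

```sql
-- Compare in-memory and stored sizes of a jsonb column.
-- "t" and "jb" are hypothetical names.
SELECT pg_column_size(jb::text::jsonb) AS raw_size,
       pg_column_size(jb)              AS compressed_size
FROM t;
```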
© Laurenz Albe 2021
For many people, “cross join” is something to be afraid of. They remember the time when they forgot the join condition and the DBA was angry, because the query hogged the CPU and filled the disk. However, there are valid use cases for cross joins which I want to explore in this article.
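One legitimate use case is deliberately generating all combinations of two small sets — for example, every (size, color) pair. A sketch with made-up values:

```sql
-- An intentional cross join: build every combination of two small sets.
SELECT s.size, c.color
FROM (VALUES ('S'), ('M'), ('L')) AS s(size)
CROSS JOIN (VALUES ('red'), ('blue')) AS c(color);
-- 3 × 2 = 6 rows
```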
Part of the release engineering process at Swarm64 is to run performance benchmarks that assure us that new versions of Swarm64 DA and PG Nitrous improve PostgreSQL performance and do not regress it.
I mentioned the problem of safety systems causing safety failures last year. There is a corollary to that, related to planned versus unplanned database downtime, that I want to mention.
Databases are critical parts of most information technology infrastructure, so downtime is a big concern. There are a few cases where you know your Postgres database will need to be down, if only briefly:
Relational databases started as simple data containers with a relational structure. Over the decades, SQL matured into a language that allows complex processing inside relational databases. The typical life cycle is that once a client-side feature becomes well understood and established, it often moves into the relational database for efficiency and consistency among applications.
A couple of weeks ago, Jobin and I did a short presentation during Percona Live Online bearing a similar title as the one for this post: “PostgreSQL HA With Patroni: Looking at Failure Scenarios and How the Cluster Recovers From Them”.
pgSCV 0.6.0 has been released, with two new features and minor fixes and improvements.
pgSCV is a Prometheus exporter and monitoring agent for PostgreSQL environments. The project's goal is to provide a single tool (exporter) for collecting metrics about PostgreSQL and related services.
In my opinion, one of the features most sorely missing from Postgres has always been the ability to ignore nulls in analytic functions. Other databases support an IGNORE NULLS clause, which makes the task much easier. Let's start by creating a simple, two-column table and populating it with data:

create table tbl (
    id bigint primary key,
    a  int
);

insert into tbl (id, a)
select s, case when random() < .5 then s end
from generate_series(0, 100) s;
The problem is to generate a column (lnn – last not null) with the last non-null value ordered by id:
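Since PostgreSQL has no IGNORE NULLS clause, one common workaround — a sketch, not necessarily the article's own solution — is to first build group numbers that increment at every non-null value, then take the group-wise maximum:

```sql
-- count(a) counts only non-null values, so "grp" increments exactly
-- at each row where a is non-null; within each group there is a
-- single non-null value, which max() then carries to every row.
select id, a,
       max(a) over (partition by grp) as lnn
from (
    select id, a,
           count(a) over (order by id) as grp
    from tbl
) sub
order by id;
```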
How I implemented a kind of Oracle-to-PostgreSQL backup.
PostgreSQL contains some hidden gems which have been around for many years and help to silently speed up your queries. They optimize your SQL statements in a clever and totally transparent way. One of those hidden gems is the ability to synchronize sequential scans. Actually, this feature has been around for 15+ years, but has gone mostly unnoticed by many end-users. However, if you are running data warehouses and analytical workloads, you might have already used synchronized seq scans without actually knowing it.
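The feature is governed by a configuration parameter, so you can check whether it is active (it is on by default):

```sql
-- Synchronized sequential scans are controlled by this GUC, on by default.
SHOW synchronize_seqscans;

-- It can be disabled per session, e.g. if a test needs a
-- deterministic scan order:
SET synchronize_seqscans = off;
```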
In many open-source communities it is difficult to determine who counts as a contributor. Some projects take an exclusive view, requiring a direct contribution before someone is considered a contributor. Looking at this holistically, we find that a project succeeds only when there is a mutual connection between the hands-on team and those who support it. Without that connection, PostgreSQL would just be a fever dream of academic pursuit.