News from the PostgreSQL Planet
To avoid having concurrent transactions interfere with each other, SQL engines implement isolation as a feature. This property corresponds to the letter I in the well-known ACID acronym, the other properties being Atomicity, Consistency, and Durability.
Isolation happens to be configurable, with different levels that correspond to different behaviors when executing concurrent transactions.
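As a quick illustration, the isolation level can be chosen per transaction with standard SQL (the `accounts` table here is hypothetical):

```sql
-- Show the session's default isolation level
SHOW default_transaction_isolation;

-- Run a transaction at a stricter level
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE id = 1;  -- hypothetical table
COMMIT;

-- REPEATABLE READ and READ COMMITTED are the other commonly used levels
BEGIN ISOLATION LEVEL REPEATABLE READ;
COMMIT;
```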
As hardware and software evolve, the bottlenecks in a database system also shift: many old problems disappear and new kinds of problems pop up.

Old Limitations
There were days when CPU and memory were the limitation. More than a decade back, servers with 4 cores were "high end," and as a DBA my biggest worry was managing the available resources. For an old DBA like me, Oracle's attempt to pool CPU and memory from multiple host machines for a single database with its RAC architecture was a remarkable effort to solve this.
The right way to get the current PostgreSQL configuration is by means of pg_settings.

Take advantage of pg_settings when dealing with your configuration
I often see messages on PostgreSQL-related mailing lists where the configuration is inspected in a Unix-style way. For example, imagine you have been asked to provide your autovacuum configuration in order to see if there's something wrong with it; one approach I often see is copying and pasting the following:
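Rather than grepping `postgresql.conf`, the catalog view reports the values the server is actually running with. A sketch (the view and its columns are standard PostgreSQL):

```sql
-- Current autovacuum-related settings, with their origin
SELECT name, setting, unit, source
FROM pg_settings
WHERE name LIKE 'autovacuum%'
ORDER BY name;
```

The `source` column tells you whether each value comes from a configuration file, an `ALTER SYSTEM` command, or the built-in default, which a grep over the config file cannot.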
One of the worst performance hits a Postgres query can take is having to perform a sort or hash operation on disk. When these space-intensive operations require more memory than is available, Postgres uses disk space instead. Disk space is much slower to read and write than RAM, so this generally takes significantly longer.
The best solution to this problem is to avoid having to perform the operation entirely, for example by adding a judicious index.
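A minimal way to check whether a sort is spilling to disk, assuming an illustrative `big_table` and `some_column` (not from the original post):

```sql
-- EXPLAIN ANALYZE reports "Sort Method: external merge  Disk: ..."
-- when work_mem is too small for the sort to fit in memory
SET work_mem = '64kB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table ORDER BY some_column;

-- Raising work_mem (here, for the session) can move the sort back into RAM
SET work_mem = '256MB';

-- ...or better, add an index so no sort is needed at all:
CREATE INDEX ON big_table (some_column);
```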
When migrating from MS SQL to PostgreSQL, one of the first things people notice is that in MS SQL, object names such as tables and columns all appear in uppercase. While that is possible on the PostgreSQL side as well, it is not really that common. The question therefore is: How can we rename all those things to lowercase, easily and fast?
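One possible approach (a sketch, not necessarily the method from the original article; try it on a copy of the database first) is to generate the `ALTER` statements from the catalogs:

```sql
-- Rename every mixed-case table in the public schema to lowercase.
-- %I (quote_ident) keeps the original name intact; lower() is the target.
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT tablename
             FROM pg_tables
             WHERE schemaname = 'public'
               AND tablename <> lower(tablename)
    LOOP
        EXECUTE format('ALTER TABLE public.%I RENAME TO %I',
                       r.tablename, lower(r.tablename));
    END LOOP;
END $$;
```

Columns can be handled the same way by looping over `information_schema.columns` and issuing `ALTER TABLE ... RENAME COLUMN`.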
The End Point development team has completed a major application migration from one stack to another. Many years ago, the vendor maintaining the old stack abandoned support and development. This led to a stack evolution riddled with independent custom changes and new features in the following years.
\pset format csv
select i, sin(i) from generate_series(0, 6.3, 0.5) g(i) \g |gnuplot -p -e "set datafile separator ','; set key autotitle columnhead; plot '-' with boxes"
(ASCII-art plot of sin(x) as rendered by gnuplot)
You may have noticed how fast dropping a column is in PostgreSQL, haven't you?

Why Dropping a Column Does Not Reclaim Disk Space (or better, why is it so fast?)
Simple answer: because PostgreSQL knows how to do its job well!
Let’s create a dummy table to test this behavior against:
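Something along these lines works (table and column names are illustrative, not the original article's):

```sql
-- A table with some bulk in a droppable column
CREATE TABLE dummy AS
SELECT i AS id, md5(i::text) AS payload
FROM generate_series(1, 1000000) i;

-- Dropping the column is nearly instant: PostgreSQL only marks it as
-- dropped in the catalog instead of rewriting the table on disk
ALTER TABLE dummy DROP COLUMN payload;

-- The dropped column is still visible in pg_attribute:
SELECT attname, attisdropped
FROM pg_attribute
WHERE attrelid = 'dummy'::regclass AND attnum > 0;

-- The space is reclaimed only by a later table rewrite, e.g.:
-- VACUUM FULL dummy;
```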
PostgreSQL 10 introduced native partitioning and more recent versions have continued to improve upon this feature. However, many people set up partition sets before native partitioning was available and would greatly benefit from migrating to it. This article will cover how to migrate a partition set using the old method of triggers/inheritance/constraints to a partition set using the native features found in PostgreSQL 11+.
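For reference, the declarative syntax looks like this (table names are illustrative):

```sql
-- Native (declarative) partitioning, available since PostgreSQL 10
CREATE TABLE measurements (
    logdate date NOT NULL,
    reading numeric
) PARTITION BY RANGE (logdate);

CREATE TABLE measurements_2020_01 PARTITION OF measurements
    FOR VALUES FROM ('2020-01-01') TO ('2020-02-01');

-- An existing table with matching structure can be attached, which is
-- the heart of migrating away from trigger/inheritance partitioning:
-- ALTER TABLE measurements ATTACH PARTITION old_child
--     FOR VALUES FROM ('2019-12-01') TO ('2020-01-01');
```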
Mark Wong: PDXPUG February 2020 Meetup: DBlink and SQL/MED and FDW, oh my! External Data Access tricks
Meeting: February 20, 2020, 6pm-8pm
PSU Business Accelerator
2828 SW Corbett Ave · Portland, OR
Parking is open after 5pm.
Speakers: Gabrielle Roth & Michelle Franz
In February, long-term PDXPUGers Michelle Franz & Gabrielle Roth return to discuss DBLINK and FDW, two methods of accessing data that’s outside your database. We’ll briefly mention SQL/MED on the way, and tell a couple of bad (Dad) jokes.
Surprisingly, ever since parallel query was introduced in PostgreSQL 9.6, it has been impossible to know which backend a parallel worker is related to. As Guillaume pointed out, this makes it quite difficult to build simple tools that can sample the wait events of all processes involved in a query. A simple solution to that problem is to export the lock group leader information, available in the backend, at the SQL level:
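For context, this information was later exposed in PostgreSQL 13 as `pg_stat_activity.leader_pid`; on such a version, the wait events of a leader and its workers can be sampled together like this (the PID is a placeholder):

```sql
-- Leader backend plus its parallel workers, with their wait events
-- (PostgreSQL 13+; 12345 is a hypothetical leader PID)
SELECT pid, leader_pid, backend_type, wait_event_type, wait_event
FROM pg_stat_activity
WHERE pid = 12345
   OR leader_pid = 12345
ORDER BY pid;
```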
check_pgbackrest is designed to monitor pgBackRest backups, relying on the status information given by the info command.
The main features are: