Collection of Newsfeeds
Locks are an essential part of a database system. PostgreSQL provides various kinds of locks, such as table locks, row locks, page locks, transaction locks, advisory locks, and so on. Some of these locks are taken automatically by the database system during normal operation, while others are acquired manually on PostgreSQL objects through SQL commands. This blog explores the locks in PostgreSQL.
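As a quick illustration of the two manual flavors mentioned above (the table name is hypothetical), a table-level lock can be taken explicitly inside a transaction, and advisory locks are available through built-in functions:

```sql
BEGIN;
-- explicit table-level lock, held until the end of the transaction
LOCK TABLE accounts IN SHARE MODE;
COMMIT;

-- session-level advisory lock on an application-chosen key
SELECT pg_advisory_lock(42);
SELECT pg_advisory_unlock(42);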
Registration is now open for PostgreSQL Conference Germany 2020 in Stuttgart, Germany, on May 15th.
The Call for Papers is already closed; we are working with the last speakers to confirm their talks and will publish the full schedule soon.
There are still a few "EARLYBIRD" tickets available until the end of February.
See you in Stuttgart!
If you are using Azure PostgreSQL and have upgraded your client-side libpq to version 12 (which can happen automatically, for example, if you use the PostgreSQL apt repositories), you may see connection attempts fail with symptoms like:
PostgreSQL 9.6 added parallel execution of sequential scans, joins, and aggregates.
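A quick way to see this feature in action (the table name is hypothetical) is to raise the per-gather worker limit and inspect the plan:

```sql
SET max_parallel_workers_per_gather = 4;
EXPLAIN SELECT count(*) FROM big_table;
-- for a sufficiently large table, the plan should show a Gather node
-- with a Parallel Seq Scan underneath
```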
What happens when you execute a transaction in PostgreSQL?
The block will be fetched into shared buffers and modified there, and after commit the dirty buffer will eventually be written back to disk. But how does that happen?
The answer is here.
PostgreSQL is an ORDBMS. Let's ignore the O for the time being (we will have a separate post on the O alone). As an RDBMS, PostgreSQL has to support the ACID properties (we will have a separate post to discuss the ACID properties), but for now,
The PostgreSQL caching system has always been a bit of a mystery to many people, and many have asked me during consulting or training sessions: how can I figure out what the PostgreSQL I/O cache really contains? What is in shared buffers, and how can one find out? This post answers these questions as we dive into the PostgreSQL cache.
Creating a simple sample database
Before we can inspect shared buffers, we have to create a little database. Without data, the things we are going to do are not too useful:
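A minimal sketch of such a sample table (the name and row count are illustrative, not necessarily those used in the post); the contents of shared buffers can then be inspected with the pg_buffercache extension:

```sql
-- illustrative sample data
CREATE TABLE t_test AS
    SELECT id FROM generate_series(1, 1000000) AS id;

-- pg_buffercache exposes one row per shared buffer
CREATE EXTENSION pg_buffercache;

-- which relations occupy the most buffers?
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```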
Warning: The information provided here is based on unsupported development version of PostgreSQL 13.
DROP DATABASE drops a database. It removes the catalog entries for the database and deletes the directory containing the data. It can only be executed by the database owner. It cannot be executed while you are connected to the target database.
From PostgreSQL v13 onwards, you have the option to drop a database forcefully even if other sessions are connected to it.
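In v13 this is spelled with the FORCE option (the database name here is hypothetical):

```sql
-- terminates any remaining connections to mydb, then drops it
DROP DATABASE mydb WITH (FORCE);
```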
Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 13 – Add %x to default PROMPT1 and PROMPT2 in psql
To avoid having concurrent transactions interfere with each other, SQL engines implement isolation as a feature. This property corresponds to the I letter in the well known ACID acronym, the other properties being Atomicity, Consistency and Durability.
Isolation happens to be configurable, with different levels that correspond to different behaviors when executing concurrent transactions.
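For example, the isolation level can be chosen per transaction, or set as a session default (the table name is hypothetical):

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- all reads in this transaction see one stable snapshot
SELECT balance FROM accounts WHERE id = 1;
COMMIT;

-- or make a level the default for the session
SET default_transaction_isolation = 'serializable';
```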
As hardware and software evolve, the bottlenecks in a database system also shift. Many old problems might disappear, and new types of problems pop up.

Old Limitations
There were days when CPU and memory were the limitations. More than a decade back, servers with 4 cores were "high end", and as a DBA, my biggest worry was managing the available resources. For an old DBA like me, Oracle's attempt to pool CPU and memory from multiple host machines for a single database using the RAC architecture was a great attempt to solve this.
Media failure is one of the crucial things that a database administrator should be aware of. Media failure is nothing but a physical problem in reading from or writing to files on the storage medium.
A typical example of media failure is a disk head crash, which causes the loss of all files on a disk drive. All files associated with a database are vulnerable to a disk crash, including data files, WAL files, and control files.
The right way to get the current PostgreSQL configuration is by means of pg_settings.

Take advantage of pg_settings when dealing with your configuration
I often see messages on PostgreSQL-related mailing lists where the configuration is obtained by a Unix-style approach. For example, imagine you have been asked to provide your autovacuum configuration in order to see if there's something wrong with it; one approach I often see is copying and pasting the following:
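Instead of grepping postgresql.conf, the pg_settings catalog view gives the values actually in effect, along with where each one was set:

```sql
SELECT name, setting, source
FROM pg_settings
WHERE name LIKE 'autovacuum%'
ORDER BY name;
```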
One of the worst performance hits a Postgres query can take is having to perform a sort or hash operation on disk. When these space-intensive operations require more memory than is available, Postgres uses disk space instead. Disk space is much slower to read and write than RAM, so this generally takes significantly longer.
The best solution to this problem is to avoid having to perform the operation entirely, for example by adding a judicious index.
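A sketch of how one might spot and address an on-disk sort (the table, column, and memory size are hypothetical):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM events ORDER BY created_at;
-- look for "Sort Method: external merge  Disk: ..." in the output

-- raising work_mem for the session may let the sort happen in memory
SET work_mem = '256MB';

-- better still, a matching index can avoid the sort entirely
CREATE INDEX CONCURRENTLY events_created_at_idx ON events (created_at);
```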