News from the PostgreSQL Planet
PostgreSQL 13 development is coming along nicely: Postgres 13 Beta 3 was released on August 13, 2020, following the Beta 1 and Beta 2 releases in May and June. One of the features that has caught my interest in Postgres 13 is the B-Tree deduplication effort. B-Tree indexes are the default indexing method in Postgres and are likely the most-used indexes in production environments, so any improvement to this part of the database is likely to have wide-reaching benefits.
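To get a feel for what deduplication buys you, here is a minimal sketch (the table and index names are my own, not from the release notes): PostgreSQL 13 deduplicates B-Tree entries by default, and the deduplicate_items storage parameter lets you opt a single index out for comparison.

    -- Sketch: compare a deduplicated index (the PG13 default) against one
    -- with deduplication disabled, on a low-cardinality column.
    CREATE TABLE orders (id bigint, status text);
    INSERT INTO orders
    SELECT g, (ARRAY['new','paid','shipped'])[1 + g % 3]
    FROM generate_series(1, 1000000) AS g;

    CREATE INDEX orders_status_idx ON orders (status);
    CREATE INDEX orders_status_nodedup_idx ON orders (status)
        WITH (deduplicate_items = off);

    SELECT pg_size_pretty(pg_relation_size('orders_status_idx'))         AS deduplicated,
           pg_size_pretty(pg_relation_size('orders_status_nodedup_idx')) AS plain;

On a low-cardinality column like this one, the deduplicated index should come out considerably smaller.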
Our latest release of the Citus extension to Postgres is Citus 9.4. If you’re not yet familiar, Citus transforms Postgres into a distributed database, distributing your data and your SQL queries across multiple nodes. This post is essentially the Citus 9.4 release notes.
If you’re ready to get started with Citus, it’s easy to download the Citus 9.4 open source packages.
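If you haven't seen Citus before, the core workflow is a single function call. A minimal sketch, in which the events table and its columns are hypothetical:

    -- Sketch: turn an ordinary Postgres table into a distributed table.
    CREATE EXTENSION citus;

    CREATE TABLE events (
        user_id  bigint,
        event_at timestamptz,
        payload  jsonb
    );

    -- Shard the table across the worker nodes by user_id; regular SQL
    -- against "events" is then routed and parallelized by Citus.
    SELECT create_distributed_table('events', 'user_id');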
Many of you rely on databases to return correct results for your SQL queries, however complex those queries might be. And you probably place your trust in them with no questions asked, since you know relational databases are built on proven mathematical foundations, and since there is no practical way to manually verify your SQL query output anyway.
A little contribution to spreading the PostgreSQL word! Hey there! I'm using PostgreSQL!
A few weeks ago I replaced my old mobile phone, and so I had to reinstall all my applications, including something I personally hate: WhatsApp.
While checking the configuration of the application, correctly and automatically cloned from my old phone, I came across the default status that WhatsApp sets for you:
pgBackRest is a well-known, powerful backup and restore tool that offers a lot of possibilities.
While pg_basebackup is commonly used to set up the initial database copy for streaming replication, it can be interesting to reuse a previous database backup (e.g. one taken with pgBackRest) to perform this initial copy.
This post updates one of my older posts, using PostgreSQL 13 and the latest pgBackRest version.
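In outline, the approach looks like this. A hedged sketch only: the stanza name demo is an assumption, and primary_conninfo still has to be configured separately before the standby can stream.

    # Restore the latest pgBackRest backup as the standby's initial copy,
    # instead of streaming a fresh one with pg_basebackup.
    pgbackrest --stanza=demo --type=standby restore

    # --type=standby creates standby.signal (PostgreSQL 12+); once
    # primary_conninfo points at the primary, start the standby
    # (assuming PGDATA is set).
    pg_ctl start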
In a production database environment, backups play an essential role. A database server can fail for any number of reasons: hardware failure, software malfunction, or simple user error. Whatever the reason, when a live database goes down, a backup is essential for repairing and recovering it.
In a database system, the data is stored in binary files, and every database vendor offers some kind of tooling with which those files can be backed up. The PostgreSQL database server likewise provides a comprehensive set of backup tools.
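To give a flavor of that tooling, here is a quick sketch in which the database names and paths are placeholders: pg_dump takes a logical backup of a single database, while pg_basebackup copies the whole cluster at the file level.

    # Logical backup of one database in the custom format, plus restore.
    pg_dump -Fc -f mydb.dump mydb
    pg_restore -d mydb_restored mydb.dump

    # Physical, file-level backup of the entire cluster.
    pg_basebackup -D /backups/base -Ft -z -P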
It would be very easy if I drove the same car regularly, but because of my family size and travels, I don't have that luxury. Some cars I drive have smart keys, some mechanical keys. Some have the gas tank door on the driver's side, others on the passenger side. They steer differently, have different acceleration capabilities, even different service requirements. I have gotten used to switching cars, but I still get confused at the pump, since I have to remember which side the gas tank door is on.
Is it Postgre, PostGreSQL, Postgres, or PostgreSQL? We have all seen a couple of wrong ways to spell “PostgreSQL”. The question therefore is: how can one find data even if there are typos? PostgreSQL offers various solutions to this problem; depending on what kind of search you need, you can choose between several methods.
Before we get started, we need to create some sample data:
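Something along the following lines will do; this is a sketch, so the exact sample data in the original post may differ:

    -- Sample data: a handful of (mis)spellings to search against.
    CREATE TABLE t_spelling (name text);
    INSERT INTO t_spelling VALUES
        ('PostgreSQL'), ('Postgres'), ('PostGreSQL'), ('Postgre');

    -- pg_trgm ships with Postgres and makes typo-tolerant search easy.
    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    SELECT name, similarity(name, 'Postgrsql') AS sml
    FROM t_spelling
    WHERE name % 'Postgrsql'   -- % means "similar enough" (trigram match)
    ORDER BY sml DESC;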
If you're an application developer, analyst, data scientist, or anyone who's had to figure out how to work with relational databases, chances are you're familiar with indexes. At least to the extent that you know they somehow help speed up your queries. (That's where I'd left my understanding of indexes for a good amount of time).
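If you want to see that speed-up with your own eyes, a two-minute sketch (the table and data here are made up) is to compare query plans before and after creating an index:

    -- Hypothetical table; the plans noted below are typical, not guaranteed.
    CREATE TABLE users (id bigint, email text);
    INSERT INTO users
    SELECT g, 'user' || g || '@example.com'
    FROM generate_series(1, 100000) AS g;

    EXPLAIN SELECT * FROM users WHERE email = 'user42@example.com';
    -- Typically: Seq Scan on users (reads every row)

    CREATE INDEX users_email_idx ON users (email);
    ANALYZE users;

    EXPLAIN SELECT * FROM users WHERE email = 'user42@example.com';
    -- Typically: Index Scan using users_email_idx (walks the B-Tree)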
Having worked in open source for decades, where every success and failure is public knowledge, I have always wondered how proprietary development is done, particularly for databases. I have gotten some glimpses into that world from former employees, but this Y Combinator thread is the most extensive view of Oracle development I have ever seen.
Every modern database system compresses its data at some level. The obvious reason for this feature is to reduce the size of the database, especially in today's world where data is growing exponentially. The less obvious reason is to improve query performance: smaller data means fewer data pages to scan, which means less disk I/O and faster data access. In any case, decompression must be fast enough not to hamper query performance, if not improve it.
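Postgres, for instance, transparently compresses large values through its TOAST mechanism. A quick sketch that makes the effect visible (the table name is made up, and the exact stored size will vary):

    -- A highly repetitive ~300 kB value is stored compressed, so its
    -- on-disk size is far smaller than its logical length.
    CREATE TABLE toast_demo (payload text);
    INSERT INTO toast_demo SELECT repeat('abc', 100000);

    SELECT octet_length(payload)   AS logical_bytes,
           pg_column_size(payload) AS stored_bytes
    FROM toast_demo;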