Collection of News Feeds
Paul Ramsey: PostGIS Performance: Improve Bounding Boxes with Decompose and Subdivide
In the third installment of the PostGIS Performance series, I wanted to talk about performance around bounding boxes.
Geometry data is different from most column types you find in a relational database. The objects in a geometry column can differ wildly in how much of the data domain they cover and in how much physical space they take up on disk.
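The kind of technique the post explores can be sketched as follows. Table and column names here are hypothetical; `ST_Subdivide` is the real PostGIS function, and 255 is an illustrative vertex cap:

```sql
-- Replace each large polygon with subdivided pieces so every row carries a
-- much tighter bounding box; a GiST index over the pieces then yields far
-- fewer false positives that need an exact geometry check.
CREATE TABLE admin_subdivided AS
SELECT id, ST_Subdivide(geom, 255) AS geom   -- at most 255 vertices per piece
FROM admin_boundaries;

CREATE INDEX ON admin_subdivided USING gist (geom);
```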
Ian Barwick: PgPedia Week, 2025-10-26
Daniel Vérité: Producing UUIDs Version 7 disguised as Version 4 (or 8)
Ian Barwick: PgPedia Week, 2025-10-19
Due to an unfortunate recent visitation by the Virus of the Decade (so far), I have a backlog of these which I'm trying to work through, so on the off chance anyone is waiting with bated breath for the newest editions, my apologies. Normal service will be resumed as soon as humanly possible.
Henrietta Dombrovskaya: October PUG Recording
Almost a month late, but I hope you enjoy it!
Chris Travers: NUMA, Linux, and PostgreSQL before libnuma Support
This series covers the specifics of running PostgreSQL on large systems with many processors. My experience is that people often spend months learning the basics when confronted with the problem. This series tries to clear up these difficulties by providing clear background on the topics in question. The hope is that future generations of database engineers and administrators won't have to spend months figuring things out through trial and error.
Deepak Mahto: PostgreSQL Partition Pruning: The Role of Function Volatility
In one of our earlier blogs, we explored how improper volatility settings in PL/pgSQL functions — namely using IMMUTABLE, STABLE, or VOLATILE — can lead to unexpected behavior and performance issues during migrations.
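A minimal sketch of the effect, with hypothetical table and function names: only when a function's declared volatility lets the planner fold it to a constant can partition pruning happen at plan time.

```sql
-- A range-partitioned table (names are illustrative):
CREATE TABLE events (id bigint, created_at date) PARTITION BY RANGE (created_at);
CREATE TABLE events_2025 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');

-- Declared IMMUTABLE, this call can be evaluated at plan time, so the
-- planner can prune partitions; declared VOLATILE, every partition must
-- be scanned because the value is only known at execution.
CREATE FUNCTION cutoff_date() RETURNS date
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT DATE '2025-06-01' $$;

EXPLAIN SELECT * FROM events WHERE created_at >= cutoff_date();
```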
Hans-Juergen Schoenig: Counting Customers in PostgreSQL
As a database consulting company, we are often faced with analytics- and reporting-related tasks which seem easy on the surface but are in reality not that trivial. The list of those seemingly simple things is longer than one might think, especially in the area of reporting.
Mayur B.: ALTER Egos: Me, Myself, and Cursor
I pushed the most boring change imaginable: add an index. Our CI/CD pipeline is textbook: spin up a fresh DB, run every migration file in a single transaction, sequentially. If anything hiccups, the whole thing rolls back and the change never hits main. Foolproof autotests.
Enter The Drama Queen:
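A hypothetical sketch of that migration step as a psql script (file names are made up): everything runs inside one transaction, so any error rolls the whole lot back, which is the same effect as `psql --single-transaction`.

```sql
BEGIN;
\i migrations/0001_create_tables.sql
\i migrations/0002_add_index.sql
COMMIT;
```

One classic way such a step can hiccup: `CREATE INDEX CONCURRENTLY` refuses to run inside a transaction block, while plain `CREATE INDEX` is perfectly happy there.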
Mankirat Singh: .abi-compliance-history file in PostgreSQL source?
Hubert 'depesz' Lubaczewski: Do you really need tsvector column?
Josef Machytka: PostgreSQL 18 enables data‑checksums by default
As I explained in my talk at PostgreSQL Conference Europe 2025, data corruption can be silently present in any PostgreSQL database and will remain undetected until we physically read the corrupted data. There can be many reasons why data blocks in tables or other objects become damaged; even modern storage hardware is far from infallible. Binary backups made with the pg_basebackup tool – a very common backup strategy in PostgreSQL environments – leave these problems hidden, because they do not check the data but copy whole data files as they are.
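Checking the setting is a one-liner; on PostgreSQL 18 it now reports "on" by default for newly initialized clusters:

```sql
SHOW data_checksums;
```

For clusters initialized without checksums, the bundled `pg_checksums` utility can enable them offline (`pg_checksums --enable`), and `pg_checksums --check` verifies existing checksums.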
Radim Marek: Beyond Start and End: PostgreSQL Range Types
One of the most read articles at boringSQL is Time to Better Know The Time in PostgreSQL, where we dived into the complexities of storing and handling time operations in PostgreSQL. While that article introduced the range data types, there's much more to them, and not only for handling time ranges.
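A small taste of what ranges buy you, with hypothetical table and column names: an exclusion constraint that makes overlapping bookings impossible at the database level (the `btree_gist` extension is needed to mix the equality on `room` with the range overlap).

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE booking (
    room   int,
    during daterange,
    EXCLUDE USING gist (room WITH =, during WITH &&)
);

INSERT INTO booking VALUES (1, '[2025-11-01,2025-11-05)');

-- The && operator tests overlap directly:
SELECT '[2025-11-01,2025-11-05)'::daterange
    && '[2025-11-04,2025-11-08)'::daterange;   -- true
```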
Cornelia Biacsics: Contributions for week 44, 2025
PostgreSQL received attention through the following contributions at Data Stack Conf 2025 on Oct 29, 2025:
Speakers
- Radoslav Stanoev
- Pavlo Golub
- Lætitia Avrot
- Valeria Bogatyreva
- Devrim Gündüz
PostgreSQL Booth Staff
- Devrim Gündüz
- Pavlo Golub
Gabriele Quaresima spoke at Cloud Native Bergen on Tuesday, October 28, 2025.
Dave Stokes: Migration From MySQL To PostgreSQL In Five Steps Using DBeaver
I wrote a post in my MySQL blog on migrating from MySQL to PostgreSQL using DBeaver. You can pass it along to your acquaintances who want to get off the Dolphin and on the Elephant.
Not only will DBeaver move your tables and data, but you can compare them afterwards. In the post, I outline the process in five steps. DBeaver will let you do it in four.
Antony Pegg: Meeting High Availability Requirements in Non-Distributed PostgreSQL Deployments
High availability in PostgreSQL doesn't always require a globally distributed architecture. Sometimes you need reliable failover and replication within a single datacentre or region. pgEdge Enterprise Postgres handles this scenario with a production-ready PostgreSQL distribution that includes the tools you need for high availability out of the box.
Tomas Vondra: Don't give Postgres too much memory
From time to time I get to investigate issues with some sort of a batch process. It's getting more and more common that such processes use very high memory limits (maintenance_work_mem and work_mem). I suppose some DBAs follow the logic that "more is better", not realizing it can hurt performance quite a bit.
Let me demonstrate this using an example I ran across while testing a fix for parallel builds of GIN indexes. The bug is not particularly interesting or complex, but it required a fairly high value for maintenance_work_mem (the initial report used 20GB).
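One way to avoid a blanket high default is to scope the large limit to the single statement that needs it. A sketch, with illustrative values and object names (`SET LOCAL` lasts only until the end of the transaction):

```sql
BEGIN;
SET LOCAL maintenance_work_mem = '2GB';   -- applies to this transaction only
CREATE INDEX docs_fts_idx ON docs USING gin (to_tsvector('english', body));
COMMIT;
```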
Umair Shahid: What Are “Dirty Pages” in PostgreSQL?
PostgreSQL stores data in fixed‑size blocks (pages), normally 8 KB. When a client updates or inserts data, PostgreSQL does not immediately write those changes to disk. Instead, it loads the affected page into shared memory (shared buffers), makes the modification there, and marks the page as dirty. A “dirty page” means the version of that page in memory is newer than the on‑disk copy.
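You can actually see dirty pages in shared buffers with the `pg_buffercache` extension, which ships with PostgreSQL:

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Count buffers whose in-memory copy is newer than the on-disk page:
SELECT count(*) AS dirty_buffers
FROM pg_buffercache
WHERE isdirty;
```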
Nikolay Samokhvalov: #PostgresMarathon 2-011: Prepared statements and partitioned tables — the paradox, part 3
In #PostgresMarathon 2-009 and #PostgresMarathon 2-010, we explored why execution 6 causes a lock explosion when building a generic plan for partitioned tables — the planner must lock all 52 relations because it can't prune without parameter values.
Today we'll test what actually happens with different plan_cache_mode settings.
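For readers following along, the three `plan_cache_mode` values are PostgreSQL's own parameter settings; the prepared statement below is an illustrative sketch with a made-up table:

```sql
SET plan_cache_mode = auto;                -- default: heuristic choice
SET plan_cache_mode = force_custom_plan;   -- always replan with parameter values
SET plan_cache_mode = force_generic_plan;  -- always use the cached generic plan

PREPARE q(int) AS SELECT * FROM orders WHERE region_id = $1;
EXECUTE q(42);  -- with force_custom_plan, the planner sees 42 and can prune
```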

