News from the PostgreSQL Planet
Henrietta Dombrovskaya: October PUG Recording
Almost a month late, but I hope you enjoy it!
Chris Travers: NUMA, Linux, and PostgreSQL before libnuma Support
This series covers the specifics of running PostgreSQL on large systems with many processors. In my experience, people often spend months learning the basics when first confronted with the problem. This series tries to clear up these difficulties by providing a solid background on the topics in question. The hope is that future generations of database engineers and administrators don’t have to spend months figuring things out through trial and error.
Deepak Mahto: PostgreSQL Partition Pruning: The Role of Function Volatility
In one of our earlier blog posts, we explored how choosing the wrong volatility setting for PL/pgSQL functions (IMMUTABLE, STABLE, or VOLATILE) can lead to unexpected behavior and performance issues during migrations.
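The effect is easiest to see with EXPLAIN on a partitioned table. Below is a minimal, hypothetical sketch, not taken from the post (table and function names are invented, and a plain SQL function stands in for PL/pgSQL): a VOLATILE wrapper keeps the planner from pruning partitions, while STABLE allows pruning at execution time.

```sql
-- Hypothetical example: a date-partitioned table queried through a function.
CREATE TABLE sales (sale_date date, amount numeric) PARTITION BY RANGE (sale_date);
CREATE TABLE sales_2024 PARTITION OF sales FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
CREATE TABLE sales_2025 PARTITION OF sales FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');

-- Declared VOLATILE, the function cannot be treated as a constant, so the plan
-- scans every partition. Declared STABLE, the planner can prune partitions at
-- execution time (IMMUTABLE would even allow plan-time pruning, where valid).
CREATE FUNCTION start_of_year() RETURNS date
    LANGUAGE sql VOLATILE        -- change to STABLE and compare the EXPLAIN output
    AS $$ SELECT date_trunc('year', now())::date $$;

EXPLAIN SELECT sum(amount) FROM sales WHERE sale_date >= start_of_year();
```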
Hans-Juergen Schoenig: Counting Customers in PostgreSQL
As a database consulting company, we are often faced with analytics and reporting tasks which seem easy on the surface but are in reality not that trivial. The list of those seemingly simple things is longer than one might think, especially in the area of reporting.
Mayur B.: ALTER Egos: Me, Myself, and Cursor
I pushed the most boring change imaginable: add an index. Our CI/CD pipeline is textbook ==> spin up a fresh DB, run every migration file sequentially in a single transaction. If anything hiccups, the whole thing rolls back and the change never hits main. Foolproof autotests.
Enter The Drama Queen:
Mankirat Singh: .abi-compliance-history file in PostgreSQL source?
Hubert 'depesz' Lubaczewski: Do you really need tsvector column?
Josef Machytka: PostgreSQL 18 enables data-checksums by default
As I explained in my talk at PostgreSQL Conference Europe 2025, data corruption can be silently present in any PostgreSQL database and will remain undetected until we physically read the corrupted data. There can be many reasons why data blocks in tables or other objects get damaged; even modern storage hardware is far from infallible. Binary backups taken with the pg_basebackup tool, a very common backup strategy in PostgreSQL environments, leave these problems hidden, because they do not check the data but copy whole data files as they are.
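Whether such damage can be caught at read time depends on the cluster having been initialized with data checksums, which PostgreSQL 18 now does by default. A generic way to check the current state and any failures already detected, not taken from the talk and assuming PostgreSQL 12 or later:

```sql
-- Is the cluster running with data checksums? ('on' / 'off')
SHOW data_checksums;

-- Checksum failures detected so far while reading data, per database
-- (these columns exist since PostgreSQL 12).
SELECT datname, checksum_failures, checksum_last_failure
FROM pg_stat_database
WHERE datname IS NOT NULL;
```

For offline verification, the pg_checksums utility shipped with PostgreSQL can scan a cleanly shut-down cluster and report damaged blocks.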