Collection of newsfeeds
Shaun Thomas: Using Patroni to Build a Highly Available Postgres Cluster—Part 1: etcd
The last PG Phriday article focused on the architecture of a Patroni cluster—the how and why of the design. This time around, it’s all about actually building one. I’ve often heard that operating Postgres can be intimidating, and Patroni is on a level above that. Well, I won’t argue on the second count, but I can try to at least ease some of the pain. To avoid an overwhelming deluge of twenty pages of instructions, I’ve split this article into a series of three along these lines:
Andreas Scherbaum: PostgreSQL Berlin March 2026 Meetup
warda bibi: How PostgreSQL Scans Your Data
To understand how PostgreSQL scans data, we first need to understand how PostgreSQL stores it.
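As a small taste of what that means in practice, here is a minimal sketch (table name and data are hypothetical) of the two most basic scan strategies the planner chooses between:

    -- Hypothetical table, just to have something to scan.
    CREATE TABLE readings (id bigint PRIMARY KEY, payload text);
    INSERT INTO readings
    SELECT g, md5(g::text) FROM generate_series(1, 100000) g;
    ANALYZE readings;

    -- Reading every row: the planner picks a sequential scan
    -- over the heap pages.
    EXPLAIN SELECT * FROM readings;

    -- Fetching a single row by key: an index scan touches only
    -- a handful of pages.
    EXPLAIN SELECT * FROM readings WHERE id = 42;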
Zhang Chen: Inside the Kernel: The Complete Path to PostgreSQL Delete Recovery — From FPW to Data Resurrection
Zhang Chen: Expert-Level PostgreSQL Deleted Data Recovery in Just 5 Steps — No Kernel Knowledge Required
Robert Haas: pg_plan_advice: Plan Stability and User Planner Control for PostgreSQL?
I'm proposing a very ambitious patch set for PostgreSQL 19. Only time will tell whether it ends up in the release, but I can't resist using this space to give you a short demonstration of what it can do. The patch set introduces three new contrib modules, currently called pg_plan_advice, pg_collect_advice, and pg_stash_advice.
Jan Kristof Nidzwetzki: pg_plan_alternatives: Tracing PostgreSQL’s Query Plan Alternatives using eBPF
PostgreSQL uses a cost-based optimizer (CBO) to determine the best execution plan for a given query. The optimizer considers multiple alternative plans during the planning phase. Using the EXPLAIN command, a user can only inspect the chosen plan, but not the alternatives that were considered. To address this gap, I developed pg_plan_alternatives, a tool that uses eBPF to instrument the PostgreSQL optimizer and trace all alternative plans and their costs that were considered during the planning phase.
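Without such a tool, the closest stock PostgreSQL gets is nudging the planner away from its favorite path with the enable_* settings and re-running EXPLAIN; a sketch (table name hypothetical):

    -- Setup (hypothetical table):
    CREATE TABLE readings (id bigint PRIMARY KEY, payload text);
    INSERT INTO readings
    SELECT g, md5(g::text) FROM generate_series(1, 100000) g;
    ANALYZE readings;

    -- EXPLAIN shows only the winning plan:
    EXPLAIN SELECT * FROM readings WHERE id < 1000;

    -- Penalizing the chosen strategy makes the planner reveal the
    -- costing of an alternative (enable_* GUCs discourage a path,
    -- they don't forbid it):
    SET enable_indexscan = off;
    SET enable_bitmapscan = off;
    EXPLAIN SELECT * FROM readings WHERE id < 1000;
    RESET enable_indexscan;
    RESET enable_bitmapscan;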
Lætitia AVROT: Mostly Dead is Slightly Alive: Killing Zombie Sessions
Muhammad Aqeel: pg_semantic_cache in Production: Tags, Eviction, Monitoring, and Python Integration
Part 2 of the Semantic Caching in PostgreSQL series that’ll take you from a working demo to a production-ready system.
Laurenz Albe: INSERT ... ON CONFLICT ... DO SELECT: a new feature in PostgreSQL v19
PostgreSQL has supported the (non-standard) ON CONFLICT clause for the INSERT statement since version 9.5. In v19, commit 88327092ff added ON CONFLICT ... DO SELECT. A good opportunity to review the benefits of ON CONFLICT and to see how the new variant DO SELECT can be useful!
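Going by that description, usage might look like the sketch below. This is an assumption based on the commit summary, so treat the v19 documentation as authoritative; the table and values are made up:

    -- Hypothetical table for the example.
    CREATE TABLE users (
        id    bigint GENERATED ALWAYS AS IDENTITY,
        email text PRIMARY KEY
    );

    -- Insert if absent; on a duplicate key, return the existing
    -- row via RETURNING (DO NOTHING would return nothing for it).
    INSERT INTO users (email) VALUES ('alice@example.org')
    ON CONFLICT (email) DO SELECT
    RETURNING id, email;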
Cornelia Biacsics: Contributions for week 8, 2026
Prague PostgreSQL Meetup met on Monday, February 23 for the February Edition, organized by Gulcin Yildirim Jelinek & Mayur B.
Speakers:
Gilles Darold: pgdsat version 2.0
Floor Drees: Developer U: Exercising Cohesion and Technical Skill in PostgreSQL
Vibhor Kumar: Open Source, Open Nerves
Last year at the CIO Summit Mumbai, I had the opportunity to participate in a leadership roundtable with CIOs across banking, fintech, telecom, manufacturing, and digital enterprises.
The session was not a product showcase.
Shaun Thomas: How Patroni Brings High Availability to Postgres
Let’s face it, there are a multitude of High Availability tools for managing Postgres clusters. This landscape evolved over a period of decades to reach its current state, and there’s a lot of confusion in the community as a result.
Radim Marek: PostgreSQL Statistics: Why queries run slow
Every query starts with a plan. Every slow query probably starts with a bad one. And more often than not, the statistics are to blame. But how does it really work? PostgreSQL doesn't run the query to find out — it estimates the cost. It reads pre-computed data from pg_class and pg_statistic and does the maths to figure out the cheapest path to your data.
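Those pre-computed inputs are plain catalog data you can inspect yourself; a minimal sketch with a hypothetical table:

    -- Hypothetical table:
    CREATE TABLE readings (id bigint PRIMARY KEY, ts timestamptz);
    INSERT INTO readings SELECT g, now() FROM generate_series(1, 50000) g;
    ANALYZE readings;  -- statistics are gathered by (auto)ANALYZE

    -- Row and page counts the planner starts from:
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'readings';

    -- Per-column statistics, via the readable view over pg_statistic:
    SELECT attname, n_distinct, null_frac, most_common_vals
    FROM pg_stats
    WHERE tablename = 'readings';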
Alastair Turner: A responsible role for AI in Open Source projects?
AI-driven pressure on open source maintainers, reviewers, and even contributors has been very much in the news lately. Nobody needs another set of edited highlights on the theme from me. For a Postgres-specific view, and insight on how low-quality AI outputs affect contributors, Tomas Vondra published a great post on his blog recently [https://vondra.me/posts/the-ai-inversion/], which referenced an interesting talk by Robert Haas [https://www.pgevents.ca/events/pgconfdev2025/schedule/session/254-committer-review-an-exercise-in-paranoia/] at PGConf.dev in Montreal last year.
Tomas Vondra: The real cost of random I/O
The random_page_cost parameter was introduced ~25 years ago, and from the very beginning it has been set to 4.0 by default. Storage has changed a lot since then, and so has the Postgres code. It’s likely the default no longer quite matches reality. But what value should you use instead? Flash storage is much better at handling random I/O, so maybe you should reduce the default? Some places go as far as recommending setting it to 1.0, the same as seq_page_cost. Is this intuition right?
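Whatever the answer turns out to be, changing the setting itself is simple; the 1.1 below is just a commonly suggested value for SSDs, not the post's conclusion:

    SHOW random_page_cost;                     -- 4 by default

    -- Cluster-wide, picked up on reload:
    ALTER SYSTEM SET random_page_cost = 1.1;
    SELECT pg_reload_conf();

    -- Or scoped to one tablespace or one session:
    ALTER TABLESPACE fast_ssd SET (random_page_cost = 1.1);  -- hypothetical tablespace
    SET random_page_cost = 1.1;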
Paul Ramsey: Postgres JSONB Columns and TOAST: A Performance Guide
Postgres has a big range of user-facing features that work across many different use cases — with complex abstraction under the hood.
Working with APIs and arrays in the jsonb type has become increasingly popular, and storing pieces of application data as jsonb is now a common design pattern.
But why shred a JSON object into rows and columns and then rehydrate it later to send it back to the client?
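The pattern in question looks something like this hypothetical sketch; pg_column_size() reports the stored, post-TOAST size of a value:

    -- Store the whole API payload as one jsonb document.
    CREATE TABLE api_events (id bigint PRIMARY KEY, doc jsonb);

    INSERT INTO api_events
    VALUES (1, '{"user": "alice", "tags": ["a", "b"]}');

    -- Pull fields out on the way back to the client, no shredding:
    SELECT doc->>'user' AS username,
           jsonb_array_length(doc->'tags') AS ntags
    FROM api_events;

    -- Large documents get compressed and/or moved out of line by
    -- TOAST; pg_column_size() shows what that costs per row:
    SELECT id, pg_column_size(doc) FROM api_events;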

