Collection of newsfeeds
David Wheeler: pg_clickhouse 0.2.0
In response to a generous corpus of real-world user feedback, we’ve been hard at work the past week adding a slew of updates to pg_clickhouse, the query interface for ClickHouse from Postgres. As usual, we focused on improving pushdown, especially for various date and time, array, and regular expression functions.
Cornelia Biacsics: Contributions for week 14, 2026
The Toulouse PostgreSQL User Group met on April 7, 2026, organized by:
- Geoffrey Coulaud
- Xavier SIMON
- Jean-Christophe Arnu
Speakers:
Richard Yen: Understanding PostgreSQL Wait Events
One of the most useful debugging tools in modern PostgreSQL is the wait event system. When a query slows down or a database becomes CPU bound, a natural question is: “What are sessions actually waiting on?” Postgres exposes this information through the pg_stat_activity view via two columns:
- wait_event_type
- wait_event

These fields reveal what the backend process is blocked on at a given moment. Among the different wait types, one category tends to cause confusion:
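The two columns above can be inspected directly. As a minimal sketch (not from the original post), this query lists every backend that is currently reporting a wait:

```sql
-- Show what each active backend is waiting on right now.
-- wait_event_type groups waits into categories (LWLock, IO, Client, ...);
-- wait_event names the specific wait point within that category.
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       left(query, 60) AS query_start
FROM pg_stat_activity
WHERE wait_event IS NOT NULL;
```

A NULL wait_event means the backend is on-CPU (or idle), not blocked, which is why the filter excludes those rows.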
Jeremy Schneider: Zero autovacuum_cost_delay, Write Storms, and You
A few days ago, Shaun Thomas published an article over on the pgEdge blog called Checkpoints, Write Storms, and You. Sadly, a lot of corporate blogs don't have comment functionality anymore.
Ruohang Feng: 504 Extensions: Expand the PostgreSQL Landscape
Lukas Fittl: Waiting for Postgres 19: Reduced timing overhead for EXPLAIN ANALYZE with RDTSC
Shaun Thomas: Checkpoints, Write Storms, and You
Every database has to reconcile two uncomfortable truths: memory is fast but volatile, and disk is slow but durable. Postgres handles this tension through its Write-Ahead Log (WAL), which records every change before it happens. But the WAL can't grow forever. At some point, Postgres needs to flush all those accumulated dirty pages to disk and declare a clean starting point. That process is called a checkpoint, and when it goes wrong, it can bring throughput to its knees.
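Checkpoint pacing is controlled by a handful of settings, and recent Postgres versions expose checkpoint activity in a statistics view. The snippet below is an illustrative sketch, not taken from the post (pg_stat_checkpointer is available from PostgreSQL 17 onward):

```sql
-- The main knobs that decide how often checkpoints run and how
-- aggressively dirty pages are flushed:
SHOW checkpoint_timeout;             -- time-based trigger
SHOW max_wal_size;                   -- WAL-volume-based trigger
SHOW checkpoint_completion_target;   -- spreads writes across the interval

-- PostgreSQL 17+: how many checkpoints were timed vs. requested
-- (frequent "requested" checkpoints suggest max_wal_size is too small):
SELECT num_timed, num_requested, buffers_written
FROM pg_stat_checkpointer;
```

A high ratio of requested to timed checkpoints is a common sign that WAL is filling faster than the timeout fires, which is exactly the write-storm scenario the article discusses.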
Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 19 – new pg_get_*_ddl() functions
warda bibi: The 1 GB Limit That Breaks pg_prewarm at Scale
Recently, we encountered a production incident where PostgreSQL 16.8 became unstable, preventing the application from establishing database connections. The same behavior was independently reproduced in a separate test environment, ruling out infrastructure and configuration issues. Further investigation identified the pg_prewarm extension as the source of the problem.
Jim Mlodgenski: pgcollection 2.0: Integer Keys, Range Deletes, and Oracle Parity
In my first post about pgcollection, I introduced the collection type to address the challenge of migrating Oracle associative arrays keyed by strings to PostgreSQL. For integer-keyed associative arrays, I noted that native PostgreSQL arrays work well enough for simple cases. That holds true until the keys are sparse.
Consider this Oracle pattern:
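The excerpt cuts off before the example. A typical sparse-keyed associative array in Oracle looks something like this (an illustrative sketch, not the author's actual snippet):

```sql
-- Oracle PL/SQL: an associative array indexed by PLS_INTEGER.
-- Keys need not be contiguous; only assigned entries consume memory.
-- A naive port to a native PostgreSQL array would materialize every
-- slot up to the largest key, which is where sparse keys hurt.
DECLARE
  TYPE amount_map IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  amounts amount_map;
BEGIN
  amounts(10)      := 100.00;
  amounts(5000)    := 250.50;
  amounts(9999999) := 42.00;   -- sparse: only three entries exist
END;
```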
Cornelia Biacsics: Contributions for week 13, 2026
The Prague PostgreSQL Meetup met on March 30, 2026, organized by Gulcin Yildirim Jelinek and Mayur B.
Speakers:
- Radim Marek
- Mayur B.
Community Blog Posts:
- Pat Wright about Nordic Pg Day 2026
Community Videos:
- Pavlo Golub about SCALE 23x
Ahsan Hadi: Using the pgEdge MCP Server with a Distributed PostgreSQL Cluster
I recently wrapped up my blog series covering the exciting new features in PostgreSQL 18 — from Asynchronous I/O and Skip Scan to the powerful RETURNING clause enhancements.
Laurenz Albe: Schemas in PostgreSQL and Oracle: what is the difference?
© Laurenz Albe 2026
Lætitia AVROT: pg_column_size(): What you see is not what you get
David Wheeler: pg_clickhouse 0.1.10
Hi, it’s me, back again with another update to pg_clickhouse, the query interface for ClickHouse from Postgres. This release, v0.1.10, maintains binary compatibility with earlier versions but ships a number of significant improvements that increase compatibility of Postgres features with ClickHouse. Highlights include:
Radim Marek: Don't let your AI touch production
Not so long ago, the biggest threat to production databases was the developer who claimed it worked on their machine. If you've attended my sessions, you know this is a topic I'm particularly sensitive to.
These days, AI agents are writing your SQL. The models are getting incredibly good at producing plausible code. It looks right, it feels right, and often it passes a cursory glance. But "plausible" isn't a performance metric, and it doesn't care about your execution plan or locking strategy.
Richard Yen: WAL as a Data Distribution Layer
Every so often, I talk to someone working in data analytics who wants access to production data, or at least a snapshot of it. Sometimes, they tell me about their ETL setup, which takes hours to refresh and can be brittle, with a lot of monitoring around it. For them, it works, but it sometimes gets me wondering if they need all that plumbing to get a snapshot of their live dataset. Back at Turnitin, I set up a way to get people access to production data without having to snapshot nightly, and I thought maybe I should share it with people here.

