News from Planet PostgreSQL
Oleg Bartunov: Unpublished interview
Interview with Oleg Bartunov
“Making Postgres available in multiple languages was not my goal—I was just working on my actual task.”
Tomas Vondra: Don't give Postgres too much memory (even on busy systems)
A couple of weeks ago I posted about how setting maintenance_work_mem too high may make things slower, which can be surprising, as the intuition is that more memory makes things faster. I got an e-mail about that post asking if the conclusion would change on a busy system. That’s a really good question, so let’s look at it.
To paraphrase the message I got, it went something like this:
Umair Shahid: PostgreSQL Column Limits
If you’ve ever had a deployment fail with “tables can have at most 1600 columns”, you already know this isn’t an academic limit. It shows up at the worst time: during a release, during a migration, or right when a customer escalation is already in flight.
But here’s the more common reality: most teams never hit 1,600 columns; they hit the consequences of wide tables first:
Mayur B.: PostgreSQL Santa’s Naughty Query List: How to Earn a Spot on the Nice Query List?
Santa doesn’t judge your SQL by intent. Santa judges it by execution plans, logical I/O, CPU utilization, temp usage, and response time.
This is a practical conversion guide: common “naughty” query patterns and the simplest ways to turn each into a “nice list” version that is faster, more predictable, and less likely to ruin your on-call holidays.
Hans-Juergen Schoenig: PostgreSQL Performance: Latency in the Cloud and On Premise
PostgreSQL is highly suitable for powering critical applications in all industries. While PostgreSQL offers good performance, there are issues that not many users are aware of, but which play a key role when it comes to efficiency and speed in general. Most people understand that more CPUs, better storage, more RAM, and the like will speed things up. But what about something that is equally important?
We are of course talking about “latency”.
Radim Marek: Instant database clones with PostgreSQL 18
Have you ever watched a long-running migration script, wondering if it's about to wreck your data? Or wished you could "just" spin up a fresh copy of a database for each test run? Or wanted reproducible snapshots to reset between runs of your test suite (and yes, because you are reading boringSQL, needed to reset the learning environment)?
When your database is a few megabytes, pg_dump and restore work fine. But what happens when you're dealing with hundreds of megabytes or gigabytes - or more? Suddenly "just make a copy" becomes a burden.
Cornelia Biacsics: Contributions for week 52, 2025
- Pavlo Golub gave a talk at the WaW Tech conference in Warsaw on Dec 16, 2025.
- Hyderabad PostgreSQL UserGroup Meetup on Dec 19, 2025, organised by Hari Kiran.
Speakers:
Floor Drees: PostgreSQL Contributor Story: Mario Gonzalez
Devrim GÜNDÜZ: What happened?
Last month the PostgreSQL RPM repos were broken for Rocky Linux and AlmaLinux 9 and 10 users due to an OpenSSL update that Red Hat pushed to versions 10.1 and 9.7, which broke backward compatibility. Actually, I broke the repos.
Pavel Stehule: fresh dll of orafce and plpgsql_check for PostgreSQL 17 and PostgreSQL 18
I compiled and uploaded zip files with the latest orafce and plpgsql_check for PostgreSQL 17 and PostgreSQL 18, built with Microsoft Visual C 2022.
Setup:
Mayur B.: The OOM-Killer Summoning Ritual: “Just Increase work_mem”
You’ve probably seen the incident pattern:
- Postgres backends start disappearing.
- dmesg / journalctl -k shows the kernel OOM killer reaping postgres.
- Someone spots “out of memory” and reflexively recommends: “Increase work_mem.”
That recommendation is frequently backwards for OS OOM kills.
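To see why, it helps to run the numbers: work_mem is a per-sort/per-hash limit that applies per plan node and per backend, so raising it multiplies across the whole server. A minimal back-of-the-envelope sketch (the figures below are hypothetical, not taken from the post):

```python
# Back-of-the-envelope worst case: work_mem applies per sort/hash node,
# per backend, so server-wide memory exposure multiplies quickly.
def worst_case_mb(backends: int, work_mem_mb: int, nodes_per_query: int) -> int:
    """Upper bound on memory (MB) if every backend maxes out every node."""
    return backends * work_mem_mb * nodes_per_query

# Hypothetical: 200 active backends, work_mem bumped to 64MB,
# 4 sort/hash nodes per query:
print(worst_case_mb(200, 64, 4))  # 51200 MB - easily past a 64 GB box
```

Real queries rarely hit the full limit on every node at once, but the multiplication is why a "small" work_mem bump can summon the OOM killer on a busy system.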
Dave Page: Code Signing fun and games for pgAdmin
Ahsan Hadi: pgEdge Support for Large Object Logical Replication
Dave Page: RAG Servers vs MCP Servers: Choosing the Right Approach for AI-Powered Database Access
As AI capabilities continue to evolve and integrate more deeply into our applications, we’re faced with interesting architectural decisions about how to expose our data to large language models (LLMs). Two approaches that have gained significant traction are Retrieval Augmented Generation (RAG) servers (such as pgEdge RAG Server) and Model Context Protocol (MCP) servers (such as pgEdge Natural Language Agent).
Pavlo Golub: Dev Container for pgrx PostgreSQL Extensions: Lessons Learned
I like reproducible development. I also like short feedback loops. Combining both for pgrx was… educational. 🙂 In this post, I share the mistakes, the small pains, and the fixes I used to get a working VS Code dev container for a Rust project that builds PostgreSQL extensions with pgrx. If you’re writing extensions or using pgrx in a team, this will save you a few grey hairs.
TL;DR:
David Wheeler: 🐏 Taming PostgreSQL GUC “extra” Data
New post up on the ClickHouse blog:
I wanted to avoid re-parsing the key/value pairs from the pg_clickhouse.session_settings GUC on every query by pre-parsing them on assignment and storing the result in a separate variable. It took a few tries to land on a workable and correct solution, as the GUC API requires quite specific memory allocation for “extra” data to work properly.
Jan Wieremjewicz: Enhancing PostgreSQL OIDC with pg_oidc_validator
With PostgreSQL 18 introducing built-in OAuth 2.0 and OpenID Connect (OIDC) authentication, tools like pg_oidc_validator have become an essential part of the ecosystem by enabling server-side verification of OIDC tokens directly inside PostgreSQL. If you’re new to the topic, make sure to read our earlier posts explaining the underlying concepts and the need for external validators:
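For orientation, PostgreSQL 18's built-in OAuth support is wired up in two places: a pg_hba.conf line selecting the oauth method, and a server parameter naming the validator library. A rough sketch (the issuer URL, scope, and network range are placeholders, not from the post):

```
# pg_hba.conf: authenticate clients via an OAuth/OIDC bearer token
host  all  all  0.0.0.0/0  oauth  issuer="https://example.com/realm"  scope="openid"

# postgresql.conf: delegate server-side token verification to a validator module
oauth_validator_libraries = 'pg_oidc_validator'
```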
semab tariq: The Road to Deploy a Production-Grade, Highly Available System with Open-Source Tools
Everyone wants high availability, and that’s completely understandable. When an app goes down, users get frustrated, business stops, and pressure builds.
But here’s the challenge: high availability often feels like a big monster. Many people think, “If I need to set up high availability, I must master every tool involved.” And there’s another common belief too: “Open-source tools are not enough for real HA, so I must buy paid tools.”
Stefan Fercot: pgBackRest preview - simplifying manual expiration of oldest backups
A useful new feature was introduced on 11 December 2025: Allow expiration of the oldest full backup regardless of current retention. Details are available in commit bf2b276.