News from Planet PostgreSQL
Getting a single random row, or a few rows, from a table (for example, to obtain representative sample data) is a frequent need. The most common way to do this in PostgreSQL is using ORDER BY random(), like:
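The excerpt cuts off before the example; a minimal sketch of the pattern it refers to, using a hypothetical `users` table:

```sql
-- Pick one random row. Note that this forces a scan and sort of the
-- whole table, which is why it performs poorly on large tables.
SELECT * FROM users ORDER BY random() LIMIT 1;
```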
Postgres has always run well on commodity hardware, and works well on Raspberry Pi and consumer devices too. However, we have always lagged in optimizing Postgres on very large hardware, mostly because our community has limited access to such hardware. In recent years, Postgres support companies have given the community access to large hardware and run performance tests themselves, which has allowed the community to enhance Postgres for such hardware.
PostgreSQL has an extension to jsonpath: the ** operator, which explores arbitrary depth, finding your values everywhere. At the same time, there is a lax mode, defined by the standard, providing a “relaxed” way of working with JSON. In lax mode, accessors automatically unwrap arrays, missing keys don’t trigger errors, and so on. In short, it appears that the ** operator and lax mode weren’t designed to work together :)
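A small sketch of the interaction the post alludes to, with made-up sample data:

```sql
-- The ** accessor visits values at every nesting level and then
-- applies the .price member accessor to each of them.
SELECT jsonb_path_query(
  '{"a": {"price": 1}, "b": [{"price": 2}]}',
  '$.**.price'
);
-- In the default lax mode, arrays are also unwrapped automatically,
-- so the same value can be matched more than once; prefixing the
-- path with 'strict' tends to give more predictable results here.
```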
What happens to tables that are not logged into the WAL when physical replication is in place? To WAL or not to WAL? When unlogged becomes logged… Creating and populating a database to test
First of all, let’s create a clean database just to keep the test environment separated from other databases:
Now let’s create and populate three tables (one temporary, one unlogged and one normal):
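The excerpt stops before the statements; a sketch of what such a setup could look like (database and table names are hypothetical):

```sql
-- A separate database keeps the test isolated.
CREATE DATABASE wal_test;
\c wal_test

-- One temporary, one unlogged, and one normal table.
CREATE TEMPORARY TABLE t_temp     (id int, val text);
CREATE UNLOGGED TABLE  t_unlogged (id int, val text);
CREATE TABLE           t_normal   (id int, val text);

-- Populate with some generated rows.
INSERT INTO t_temp     SELECT g, 'row ' || g FROM generate_series(1, 1000) g;
INSERT INTO t_unlogged SELECT g, 'row ' || g FROM generate_series(1, 1000) g;
INSERT INTO t_normal   SELECT g, 'row ' || g FROM generate_series(1, 1000) g;
```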
Analysing data within PostGIS is just one side of the coin. What about publishing our datasets as maps that various clients can consume and profit from? Let’s have a look at how this can be accomplished with GeoServer. GeoServer acts in this regard as a full-fledged open source mapping server, supporting several OGC-compliant services, such as the Web Map Service (WMS) or the Web Feature Service (WFS). OGC stands for Open Geospatial Consortium, a community working on improving access to spatial information by developing standards in this area.
So, this happened:
Basically a GeoDjango user posted some workarounds to some poor performance in spatial queries, and the original query was truly awful and the workaround not a lot better, so I snarked, and the GeoDjango maintainer reacted in kind.
Sometimes a guy just wants to be a prick on the internet, you know? But still, I did raise the red flag of snarkiness, so it seems right and proper to pay the fine.
Whether you are starting a new development project, launching an application modernization effort, or engaging in digital transformation, chances are you are evaluating Kubernetes. If you selected Kubernetes, chances are you will ultimately need a database.
Citus is an extension to Postgres that lets you distribute your application’s workload across multiple nodes. Whether you are using Citus open source or using Citus as part of a managed Postgres service in the cloud, one of the first things you do when you start using Citus is to distribute your tables.
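Distributing a table is done with Citus’s `create_distributed_table()` function; a minimal sketch, with a hypothetical `events` table sharded on `tenant_id`:

```sql
CREATE TABLE events (
    tenant_id bigint,
    id        bigint,
    payload   jsonb
);

-- Shard the table across the worker nodes by the chosen
-- distribution column.
SELECT create_distributed_table('events', 'tenant_id');
```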
I already covered JSONB type casting of single values. However, if you are extracting multiple values from JSONB, that casting method can be cumbersome, and it also has limitations. Fortunately, there is another way: a way to retrieve multiple JSONB or JSON values and cast everything in one place.
Here is a simple table taken from my previous blog post:
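The table from the original post is not shown in this excerpt; as a stand-in, one built-in way to extract and cast several values at once is `jsonb_to_record()`, sketched here with made-up data:

```sql
-- Each key is pulled out and cast according to the column
-- definition list, all in one place.
SELECT r.*
FROM jsonb_to_record('{"name": "widget", "price": 9.99, "qty": 3}')
     AS r(name text, price numeric, qty int);
```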
In PostgreSQL, every database connection is a server-side process. This makes PostgreSQL a robust multi-process rather than a multi-threaded solution. However, occasionally people want to terminate database connections. Maybe something has gone wrong, maybe some kind of query is taking too long, or maybe there is a maintenance window approaching.
In this blog you will learn how to terminate queries and database connections in PostgreSQL.
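The two relevant built-ins are `pg_cancel_backend()`, which cancels a backend’s current query, and `pg_terminate_backend()`, which closes the whole connection. A sketch (the five-minute threshold is an arbitrary example):

```sql
-- Cancel every query that has been running for more than five
-- minutes, excluding our own session.
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes'
  AND pid <> pg_backend_pid();
```

Swapping in `pg_terminate_backend(pid)` would drop the offending connections entirely rather than just cancelling their queries.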
Logging and debugging help to monitor and identify issues or problems occurring in your program. Sometimes we need to log debug information to figure out problems during software development and testing. However, if debugging is enabled, a large number of debug messages are generated, which are hard to read. Proper logging and debugging configuration is therefore important.
There are a number of ways to retrieve debug information from Pgpool-II. In this post, I will describe the various ways for logging and debugging Pgpool-II.
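As one illustration, recent Pgpool-II releases use PostgreSQL-style logging parameters in `pgpool.conf`; a hedged sketch (parameter availability depends on your Pgpool-II version, so check its documentation):

```
# pgpool.conf — raise verbosity for troubleshooting
log_min_messages  = debug1
log_destination   = 'stderr'
logging_collector = on
log_directory     = '/var/log/pgpool'   # hypothetical path
```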
The world changes. The ARM architecture is breaking into new areas of computing. Only a decade ago, your mobile phone, router, or some other specialized device could be ARM-based, while your desktop and server were typically x86-based. Nowadays, your new MacBook is ARM-based, and your EC2 instance could be ARM as well.
pg_dump and inserts: pg_dump supports a few useful options to export data as a list of INSERTs.
pg_dump(1) is the default tool for doing backups of a PostgreSQL database.
I often get asked how to produce a more portable output of a database dump, with “portable” meaning truly loadable into another PostgreSQL version or even a different database.
In fact, pg_dump defaults to using COPY for bulk-loading data:
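The excerpt ends before the example; a sketch of the options involved (database and table names are hypothetical):

```shell
# Default format uses COPY: fast, but PostgreSQL-specific.
pg_dump -t mytable mydb > dump_copy.sql

# --inserts emits one INSERT per row; --column-inserts also lists
# the column names, which is the most portable (and slowest) form.
pg_dump --inserts        mydb > dump_inserts.sql
pg_dump --column-inserts mydb > dump_col_inserts.sql
```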