News from the PostgreSQL Planet
Recently we have covered “count” quite extensively on this blog. We discussed optimizing count(*) and also talked about “max(id) – min(id)”, which is of course a bad way to count rows in any relational database (not just in PostgreSQL). Today I want to focus your attention on a different kind of problem and its solution: suppose you want to grant a user access to a certain piece of data only X times. How can one implement that safely?
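One safe building block for such a per-user quota is an atomic conditional UPDATE; a minimal sketch, assuming a hypothetical quota table:

```sql
-- Hypothetical schema: each user may access the resource at most X times.
CREATE TABLE quota (
    user_id   bigint PRIMARY KEY,
    remaining integer NOT NULL CHECK (remaining >= 0)
);

-- Atomically consume one access. The row lock taken by UPDATE prevents
-- two concurrent sessions from both succeeding on the last remaining unit.
UPDATE quota
   SET remaining = remaining - 1
 WHERE user_id = 42
   AND remaining > 0
RETURNING remaining;
-- zero rows returned -> quota exhausted, deny access
```

Because the UPDATE locks the row, concurrent sessions serialize on it: the loser of the race sees zero affected rows and must deny the request.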
I am planning to virtually attend and present at the Percona Live Online conference tomorrow, May 19. It starts at 10am, Eastern USA time, and spans 24 hours, so it covers every time zone. I am speaking at noon, Eastern USA time.
Attendance is free, so you might want to check it out. I saw some interesting topics on the program. I am also curious to experience a 24-hour virtual conference, though I am unlikely to remain awake that long.
pgBackRest is a well-known, powerful backup and restore tool. Version 2.26 was released on April 20, 2020, and new features have been developed since then.
Today, let’s have a look at one of them: adding the backup/expire running status to the info command.
Peter Gagarinov: The new version of PgMex brings support for MATLAB 2020a and PostgreSQL 12, along with performance improvements.
We are happy to announce the new release of PgMex 1.2.0!
When we talk about database roles, most people immediately think of login roles, which allow people to log in. However, another user management feature is the ability to create non-login roles, formerly called groups.
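As a sketch of how such non-login group roles are typically used (all role and schema names here are illustrative):

```sql
-- A non-login role acts as a group: it cannot log in itself,
-- it only collects privileges.
CREATE ROLE readonly NOLOGIN;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;

-- Login roles then receive those privileges through membership:
CREATE ROLE alice LOGIN PASSWORD 'secret';
GRANT readonly TO alice;
```

Granting privileges to the group once, instead of to each login role individually, keeps permission management in one place.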
The first new feature is an option, --skip-columns-like. For now, it can be used only when browsing CSV or TSV documents. When this option is used, the specified columns (matched by a substring of the column name) are not displayed. This can be useful when viewing PostGIS data from psql: much of the time, raw GEO data makes no sense to display, and if you have a wide table with lots of columns, it can be tedious to write out the column list every time. Example
PostgreSQL is an open-source RDBMS that runs on many platforms, including Linux (all recent distributions), Windows, FreeBSD, OpenBSD, NetBSD, Mac OS X, AIX, HP/UX, IRIX, Solaris, Tru64 Unix, and UnixWare. There are many discussions about how to build Postgres and extensions from source code in a Linux-like environment, but sometimes a developer may want to quickly set up a Windows environment to check a feature for cross-platform support.
Many organizations are prioritizing projects to tighten security around their applications and services after the slew of breaches that made headlines over the past few years. The use of SSL/TLS has proliferated, and it remains an important component of any software deployment. Unsurprisingly, this is true for databases as well, and the PostgreSQL community continues to augment the already-reliable security of the world’s most powerful open-source database.
Cary Huang: Benefits of an External Key Management System Over an Internal One, and How It Could Help Secure PostgreSQL
Data and user security have always been important considerations for small to large enterprises during the deployment of their database or application servers. PostgreSQL today has rich support for many network-level and user-level security features. These include TLS to secure database connections, internal user authentication, integration with external user authentication services such as RADIUS, LDAP, and GSSAPI, TLS-certificate-based user authentication, and more.
I have completed the draft version of the Postgres 13 release notes, containing 181 items. The release notes will be continually updated until the final release, which is expected to be in September or October of this year. Beta testing will start in the next few weeks.
© Laurenz Albe 2020
A frequently asked question in this big data world is whether it is better to store binary data inside or outside of a PostgreSQL database. Also, since PostgreSQL has two ways of storing binary data, which one is better?
I decided to benchmark the available options to have some data points next time somebody asks me, and I thought this might be interesting for others as well.
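The two storage approaches referred to above can be sketched like this (table and file names are hypothetical):

```sql
-- Option 1: a bytea column -- the binary data lives in the table itself
-- (large values are moved to TOAST storage automatically).
CREATE TABLE docs_bytea (
    id      bigint PRIMARY KEY,
    payload bytea
);

-- Option 2: a large object -- the table stores only an OID that
-- references chunks in pg_largeobject.
CREATE TABLE docs_lo (
    id      bigint PRIMARY KEY,
    payload oid
);
INSERT INTO docs_lo VALUES (1, lo_import('/tmp/file.bin'));
```

bytea values are read and written like any other column value, while large objects offer a streaming, seekable read/write API, which is one reason the performance characteristics of the two differ.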
Whenever you are dealing with a lot of data, it helps to cache it, which Postgres does using shared_buffers. However, one risk of caching is that a large query reading many pages might evict frequently-accessed data from the cache; this is called cache wipe. To avoid this, Postgres limits the number of shared buffers used by operations that are expected to read a lot of data.
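To see what currently occupies the cache, you can query the pg_buffercache extension; a sketch (requires appropriate privileges):

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Top 10 relations by number of cached buffers in the current database
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```

Running this before and after a large sequential scan makes the limited-buffer behavior visible: the scanned table occupies only a small slice of shared_buffers.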
Database performance is truly important. However, when looking at performance, people tend to consider only the speed of SQL statements and forget the big picture. The questions now are: What is this big picture I am talking about? What, if not the SQL statements, can make a real difference? More often than not, the SQL part is already taken care of. What people forget is latency. That’s right: network latency.
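To see why round trips matter, consider a link with 1 millisecond of round-trip time: 10,000 single-row INSERT statements pay at least 10 seconds in network latency alone, no matter how fast each statement executes on the server. Batching rows into one statement cuts the number of round trips (the table name is illustrative):

```sql
-- Three statements, three network round trips:
INSERT INTO t (x) VALUES (1);
INSERT INTO t (x) VALUES (2);
INSERT INTO t (x) VALUES (3);

-- One statement, one round trip:
INSERT INTO t (x) VALUES (1), (2), (3);
```

The same reasoning applies to reads: fetching rows one by one in a loop pays the round-trip cost per row, while a single set-oriented query pays it once.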
You might have seen autovacuum running and noticed that it sometimes performs freeze operations on transaction IDs (which are 32 bits wide) and multixacts (used for multi-session row locking).
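The freeze horizon can be observed directly; for example, this shows the age (in transactions) of the oldest unfrozen transaction ID in each database:

```sql
-- autovacuum forces a freeze before this age reaches
-- autovacuum_freeze_max_age (200 million by default)
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```

Watching this age drop after a freeze run is an easy way to confirm that autovacuum's anti-wraparound work is keeping up.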