News from the PostgreSQL Planet
Postgres has made great strides in adding features to match proprietary databases, and it has many complex features that other databases don't have. However, that doesn't mean it is the best fit for every organization. There are still reasons not to use Postgres:
HOT updates are not a new feature. They were introduced by commit 282d2a03dd in 2007 and first appeared in PostgreSQL 8.3.
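A quick way to see whether a workload actually benefits from HOT is the cumulative statistics system; a minimal sketch against the standard pg_stat_user_tables view, comparing HOT and total updates per table:

    -- HOT vs. regular updates per table; hot_pct close to 100 means
    -- most updates avoided creating new index entries
    SELECT relname,
           n_tup_upd,
           n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd DESC;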
PostgreSQL builds catalog and other caches for each connection to store queried data locally, which speeds up query processing. When a process performs a DDL operation that changes the catalog, the other running processes must be notified that some of their cached data is invalid. This notification mechanism is called the SI (shared invalidation) mechanism.
Database applications are initially written using the simplest queries possible. During testing and in production, some application tasks might show unacceptable performance. This is where re-architecting happens, and where simple queries and schema layouts can become complex. They might become complex because that is required to accomplish the task, or because of limitations in how the database software handles certain queries. Database and tooling upgrades can require further complex additions.
I tested Fedora 33 on my old Lenovo T510 (upgraded to 16GB RAM). It looks pretty good. Interactive processes stay quick under high load (which was sometimes a problem in older releases). It is very nice and comfortable.
Some time ago I stumbled on this interesting question on Stack Overflow (as stated in the title). I could not find any satisfactory answer at the time, so I came up with this solution. I think the problem is interesting enough to make it into a blog post.
The following table shows the expected results. The x column contains the original data, whereas mdn_x contains the median computed over the current row and up to 3 preceding rows.
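Since PostgreSQL does not allow ordered-set aggregates such as percentile_cont to be used as window functions, one possible workaround is a LATERAL subquery over the frame. This is only a sketch, assuming a table t(id int, x numeric), and not necessarily the solution from the post:

    SELECT t.id, t.x, m.mdn_x
    FROM t
    CROSS JOIN LATERAL (
        -- median of the current row plus up to 3 preceding rows (by id)
        SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY w.x) AS mdn_x
        FROM (SELECT prev.x
              FROM t AS prev
              WHERE prev.id <= t.id
              ORDER BY prev.id DESC
              LIMIT 4) AS w
    ) AS m
    ORDER BY t.id;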
My introduction to databases and PostgreSQL came through web application development and statistical analysis. I learned just enough SQL to get the queries to return the right answers.
Indexes are a core feature of every database management system (DBMS). Indexes have a long history in PostgreSQL, which offers quite a rich set of index features: B-tree, Hash, GIN, GiST, and BRIN indexes. Because development around indexes is still ongoing, PostgreSQL 13 provides some further enhancements.
We can divide the overall improvements into two categories:
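One headline item in that release is B-tree deduplication, which stores repeated key values only once on index pages. A minimal sketch of its effect; the table and index names here are made up for illustration:

    CREATE TABLE dedup_demo (val int);
    INSERT INTO dedup_demo SELECT i % 10 FROM generate_series(1, 1000000) AS i;

    -- deduplicate_items defaults to on in v13; set explicitly to compare
    CREATE INDEX dedup_on  ON dedup_demo (val) WITH (deduplicate_items = on);
    CREATE INDEX dedup_off ON dedup_demo (val) WITH (deduplicate_items = off);

    SELECT pg_size_pretty(pg_relation_size('dedup_on'))  AS with_dedup,
           pg_size_pretty(pg_relation_size('dedup_off')) AS without_dedup;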
I'm sure everybody is aware you can have PostgreSQL fetch data live across the internet (or locally, of course) as part of your queries and use the data directly. In fact, there is a large number of drivers available for accessing different kinds of data. But in the simplest case, we can just use the file_fdw wrapper included in the standard PostgreSQL packages, together with everybody's HTTP Swiss Army knife, curl.
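As a sketch of that simplest case (the URL and column list below are placeholders), file_fdw's program option pipes the output of a shell command, here curl, straight into a foreign table:

    CREATE EXTENSION IF NOT EXISTS file_fdw;
    CREATE SERVER IF NOT EXISTS file_server FOREIGN DATA WRAPPER file_fdw;

    -- curl fetches the CSV afresh on every scan of the foreign table
    CREATE FOREIGN TABLE remote_csv (
        id   int,
        name text
    ) SERVER file_server
      OPTIONS (program 'curl -s https://example.com/data.csv',
               format 'csv', header 'true');

    SELECT * FROM remote_csv LIMIT 5;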
It is very common to see customer cases where a sudden increase in disk space usage is caused by a large number of WAL segments filling up the WAL directory (pg_wal). Panicking customers generally ask, “Why isn't PostgreSQL deleting them?”. The most common reasons we used to find were:
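Before the reasons themselves, two queries that usually narrow the cause down quickly; a sketch, assuming PostgreSQL 10 or later, since a failing archive_command and a stale replication slot are both classic ways to keep pg_wal growing:

    -- is WAL archiving stuck?
    SELECT last_archived_wal, last_failed_wal, failed_count
    FROM pg_stat_archiver;

    -- is an inactive replication slot retaining WAL?
    SELECT slot_name, active,
           pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(),
                                          restart_lsn)) AS retained_wal
    FROM pg_replication_slots;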
SQL and PostgreSQL are perfect tools to analyze data. However, they can also be used to create sample data which has to possess certain statistical properties. One thing many people need quite often is a normal distribution. The main question therefore is: How can one create this kind of sample data?

Tablefunc: Creating normal distributions
The first thing you have to do is to enable the tablefunc extension, which is actually quite simple to do:
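A sketch of that step, plus the normal_rand(count, mean, stddev) function the extension provides; the parameter values below are arbitrary examples:

    CREATE EXTENSION IF NOT EXISTS tablefunc;

    -- draw 10 values from a normal distribution with mean 5 and stddev 1
    SELECT * FROM normal_rand(10, 5.0, 1.0);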
tl;dr -- If you are not actively monitoring your PostgreSQL clusters, and alerting on anomalous conditions, you are "flying blind". Don't do that!