A collection of news feeds
Sometimes you may need to manage large objects, i.e. CLOB, BLOB and BFILE, using PostgreSQL. There are two ways to deal with large objects in PostgreSQL: one is to use existing data types, namely bytea for binary large objects and text for character-based large objects; the other is to use pg_largeobject. This blog will explain how to use pg_largeobject.
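As a minimal sketch of the pg_largeobject route (illustrative, not taken from the post), the server-side functions lo_from_bytea, lo_get and lo_unlink in core PostgreSQL create, read and remove a large object:

```sql
-- Create a large object from a bytea value; 0 lets the server pick the OID.
SELECT lo_from_bytea(0, '\xdeadbeef'::bytea) AS loid;
-- Read the contents back (substitute the OID returned above for 16409):
SELECT lo_get(16409);
-- Remove the large object when it is no longer needed:
SELECT lo_unlink(16409);
```

Behind the scenes, the data itself is stored in chunks in the pg_largeobject system catalog, which is what distinguishes this approach from a plain bytea column.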
Chris Travers: How TimescaleDB Solves Common PostgreSQL Problems in Database Operations With Data Retention Management
In my career, I have frequently worked for companies with large amounts of time-partitioned data, as a software engineer focusing on our PostgreSQL databases, so dealing with database outages was my job. One recurring source of outages has been partition management for data retention.
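To illustrate the kind of retention management TimescaleDB automates (the table name is my invention, not from the post): old partitions, called chunks, can be dropped in a single call, or removed continuously by a background policy:

```sql
-- Drop all chunks of the hypertable "conditions" older than 90 days:
SELECT drop_chunks('conditions', older_than => INTERVAL '90 days');

-- Or register a retention policy so this happens automatically:
SELECT add_retention_policy('conditions', drop_after => INTERVAL '90 days');
```

Dropping a chunk is a metadata operation, which avoids the long DELETEs and resulting bloat that often cause exactly the outages described above.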
You might have already seen our previous articles supporting Oracle to PostgreSQL and SQL Server to PostgreSQL migrations. Our previous articles have provided solutions to some of the complexities of Oracle to PostgreSQL migrations. In this article, we are going to discuss the migration of synonyms from Oracle to PostgreSQL.
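PostgreSQL has no CREATE SYNONYM statement, so synonyms must be emulated. Two common workarounds (illustrative schema and table names, not necessarily the article's approach) are views and the search_path:

```sql
-- Emulate an Oracle synonym with a view in the consuming schema:
CREATE VIEW app.employees AS SELECT * FROM hr.employees;

-- Or let unqualified names resolve across schemas via the search path:
SET search_path = app, hr, public;
```

The view approach is explicit and per-object; the search_path approach is closer to Oracle's public synonyms but affects all name resolution in the session.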
Attempts to compress PostgreSQL WAL at different levels have been around since the beginning. Some built-in support (wal_compression) has existed since 2016, and almost all backup tools compress WAL before shipping it to the backup repository. But it is time to take another look at built-in wal_compression, because PostgreSQL 15 has much more to offer.
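Concretely, PostgreSQL 15 extends wal_compression beyond the on/off (pglz) setting to the lz4 and zstd algorithms, provided the server was built with the corresponding libraries. The parameter can be changed without a restart:

```sql
ALTER SYSTEM SET wal_compression = 'zstd';
SELECT pg_reload_conf();  -- wal_compression only needs a reload, not a restart
SHOW wal_compression;
```

This compresses full-page images written to WAL, trading some CPU for a smaller WAL volume.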
This article will contain spoilers both on how I solved 2022 Day 14's challenge "Regolith Reservoir" using SQL, as well as general ideas on how to approach the problem. I recommend trying to solve it yourself first, using your favorite language. This article is delayed from the actual puzzle's release. Also note that my solutions are seldom going to be the "best" solutions - they were solved as quickly as possible, and these articles will show my first solutions, with some minor reformatting and cleaning up.
© Laurenz Albe 2022
I have been faced with a request for an unusual unique constraint that puzzled me for a while. Since the solution I came up with is a nice showcase for range data types, I’ll share it with you. Also, it allows me to rant some about NULL, which is a temptation I find hard to resist.

The problem: an unusual unique constraint
We have a table like this:

CREATE TABLE tab (
    a integer NOT NULL,
    b integer
);
Now we want to make sure that the combination of a and b is unique in the following way:
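The excerpt cuts off before the exact rules, but since the post promises range types, requirements like this can often be enforced with an exclusion constraint over ranges. An illustrative sketch (my guess at the direction, not necessarily the article's solution), where a NULL b is treated as matching any b by mapping it to an unbounded range:

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;  -- needed for "a WITH =" on integer

ALTER TABLE tab ADD CONSTRAINT tab_a_b_excl
    EXCLUDE USING gist (
        a WITH =,
        (CASE WHEN b IS NULL THEN int4range(NULL, NULL)
              ELSE int4range(b, b, '[]') END) WITH &&
    );
```

A plain UNIQUE (a, b) would not work here, because two rows with the same a and a NULL b do not compare equal, so they would both be accepted.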
Welcome to the 5th installment of the #PGSQLPhriday blogging series. I am thrilled to be this month's host! The topic posts should be published by Friday February 3rd.
On January 17, Chicago PUG started the new year by welcoming Ryan Booz, who presented Successful Database DevOps with PostgreSQL. Although I already thanked Ryan, I want to thank him again – the presentation was phenomenal! We had an extended discussion after his talk, and even the next day, people in my office kept talking about it! I should mention that there was a poker tournament in the room adjacent to the one where we had our meetup. A couple of people migrated to our room to listen to the presentation!
A simple way to make the PostgreSQL command line interface more attractive!
Authentication: Verify that the user is who they claim to be. This can be through a password, trust, or some other federated login (like Kerberos).
Authorization: As an authenticated user, what am I permitted to do within the system?
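In PostgreSQL, authentication is configured in pg_hba.conf, while authorization is expressed in SQL through roles and privileges. A small illustrative example (role and database names are hypothetical):

```sql
-- Create a login role and grant it read-only access:
CREATE ROLE reporting LOGIN PASSWORD 'secret';
GRANT CONNECT ON DATABASE appdb TO reporting;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO reporting;
```

Whether the "reporting" user may connect at all, and how its password is checked, is decided by the matching pg_hba.conf line; what it may then do is decided by these grants.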
Before we get started, let's establish a few terms:
What is a database job scheduler? Essentially, a job scheduler is a process that kicks off in-database functions and procedures at specified times and runs them independently of user sessions. The benefits of having a scheduler built into the database are obvious: no dependencies, no inherent security leaks, fits in your existing high availability plan, and takes part in your data recovery plan, too.
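As one illustration of such a scheduler (the post may discuss a different tool), the pg_cron extension schedules SQL commands with cron syntax; the table name below is hypothetical:

```sql
CREATE EXTENSION pg_cron;

-- Run a cleanup every night at 03:00:
SELECT cron.schedule('nightly-cleanup', '0 3 * * *',
    $$DELETE FROM audit_log WHERE created < now() - interval '30 days'$$);

-- Remove the job again:
SELECT cron.unschedule('nightly-cleanup');
```

Because the job runs inside the database, it inherits the database's authentication, high-availability and backup story, exactly the benefits listed above.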
Postgres is one of the most widely used databases and supports a number of operating systems. When you are writing code for PostgreSQL, it’s easy to test your changes locally, but it can be cumbersome to test them on all operating systems. You may often encounter failures across platforms, and it can get confusing to move forward while debugging. To make the dev/test process easier, you can use the Postgres CI.
After looking at the implications of the replication factor for a YugabyteDB cluster, this blogpost looks at how a table "materialises" in a YugabyteDB cluster.
The creation of a table in YugabyteDB YSQL first of all adds the table metadata to the PostgreSQL catalog, exactly as it does in PostgreSQL. This metadata is stored on the master.
This article will contain spoilers both on how I solved 2022 Day 13's challenge "Distress Signal" using SQL, as well as general ideas on how to approach the problem. I recommend trying to solve it yourself first, using your favorite language. This article is delayed from the actual puzzle's release. Also note that my solutions are seldom going to be the "best" solutions - they were solved as quickly as possible, and these articles will show my first solutions, with some minor reformatting and cleaning up.
There’s a particular anti-pattern in database design that PostgreSQL handles… not very well.
For example, let’s say you are building something like Twitch. (The real Twitch doesn’t work this way! At least, not as far as I know!) So, you have streams, and you have users, and users watch streams. So, let’s do a schema!
CREATE TABLE stream (stream_id bigint PRIMARY KEY);
CREATE TABLE "user" (user_id bigint PRIMARY KEY);
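Presumably the schema then continues with a join table recording who is watching what (the table and column layout below are my guess, not from the original):

```sql
CREATE TABLE stream_viewer (
    stream_id bigint REFERENCES stream,
    user_id   bigint REFERENCES "user",
    PRIMARY KEY (stream_id, user_id)
);
```

A table like this, with very high insert/delete churn as viewers join and leave streams, is where PostgreSQL's MVCC dead-tuple and vacuum behavior tends to hurt.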