
PgBouncer – a lightweight connection pooler for PostgreSQL

11/01/2023


In this article, you will find out whether:

  • it is worth implementing connection management in PostgreSQL;
  • PgBouncer is the only legitimate connection pooler;
  • the “session” mode is always slower than the “transaction” mode;
  • it is better to allow 100 or 10 connections to the database.

In my work with clients, I often hear that tools for managing connections in PostgreSQL have become obsolete. This claim rests on the fact that modern software usually ships with such a solution already built in.

In theory, this seems true. If a properly operating connection pooler is present on the application side, there is no need to add one on the database side.

Why “in theory”, and what does “properly operating” mean?

It sometimes happens that such applications are deployed not on virtual machines but enclosed in containers. During the build and deployment phase we can set the maximum number of connections the application will establish. Efficient connection management is embedded inside the container, so we have everything we need.

Let’s imagine a scenario:

  • pod configuration: 10 fixed connections, a maximum of 50 connections;
  • the system assumes the application scales up to 10 pods;
  • we have to allow at least 10 × 50 = 500 connections to the database, plus reserves and extra connections, so we set 600 (it is common to size the number of connections from the number of CPU threads at a ratio of 3-4 connections per thread, so 600/4 = 150 – we should provide the machine with this many threads);
  • half a year (or any other time) after deployment, a decision is made: the database is working very well, but the application is the bottleneck, so we double the number of possible pods.
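The sizing arithmetic above can be sketched as a quick back-of-the-envelope calculation (the reserve of 100 and the 4-connections-per-thread ratio are the scenario's assumptions, not hard limits):

```python
# Back-of-the-envelope sizing for the pod scenario above.
# The reserve value and the connections-per-thread ratio are
# rule-of-thumb assumptions taken from the scenario.

pods = 10
max_conn_per_pod = 50
reserve = 100                       # headroom for admin/extra connections

max_connections = pods * max_conn_per_pod + reserve          # 600
conn_per_cpu_thread = 4             # common ratio: 3-4 connections/thread
cpu_threads_needed = max_connections // conn_per_cpu_thread  # 150

# Doubling the pods doubles the direct-connection requirement:
doubled = 2 * pods * max_conn_per_pod + reserve

print(max_connections, cpu_threads_needed, doubled)
```

Doubling the pods pushes the direct-connection requirement past 1000, which is exactly the moment the questions below become pressing.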

Should we now double the number of allowed connections and processors? Or perhaps reconfigure all pods so that they consume fewer connections?

Without delving into the ins and outs of PostgreSQL (because that is not the objective of this article), I am going to answer “it depends”:

  • we do not always need as many as 150 CPU threads to handle 600 connections; sometimes 96 threads are enough, and increasing the number of connections further would negatively impact system performance – it all depends on the nature of the traffic;
  • a situation may occur where we have enough CPU power, but the real issue is the lack of available RAM;
  • another situation can occur where the problem is the higher number of locks in the database caused by the higher number of concurrent transactions;
  • poor-quality code querying the database, or a suboptimal configuration of the connection pooler built into the application, could also be the problem.

I performed simple tests aiming to check:

  • how the number of connections influences database performance;
  • what effect we can achieve by using PgBouncer.

The tests are not complicated and do not aim to realistically measure server performance, but only to indicate the general potential of this technology. The raw results may even look unsatisfying in terms of performance. The main objective is to check whether connection management offers any room for improvement.

Each test was run 3 to 5 times to eliminate gross errors; the result shown is the one closest to the average. The last tests (with 4000 connections) were carried out to compare memory usage for a high number of connections in two scenarios – one with connection management and one without it.

Machine configuration:

  • 4 CPU
  • 8GB RAM
  • 40 GB SSD
  • CentOS 7
  • PostgreSQL 14 (port 5432)
  • PgBouncer (port 6432)

The tests were conducted using the pgbench tool and verified with psql and pg_top.

First test:

  • number of clients: 1
  • time: 5 seconds
  • script: TPC-B
|                         | PostgreSQL | PgBouncer (transaction) |
|-------------------------|------------|-------------------------|
| tps                     | 170        | 122.5                   |
| latency average         | 5.88 ms    | 8.16 ms                 |
| initial connection time | 15.95 ms   | 4.17 ms                 |

Second test:

  • number of clients: 10
  • time: 5 seconds
  • script: TPC-B
|                         | PostgreSQL | PgBouncer (transaction) |
|-------------------------|------------|-------------------------|
| tps                     | 246.76     | 182.59                  |
| latency average         | 40.52 ms   | 54.76 ms                |
| initial connection time | 171.12 ms  | 30.01 ms                |

Third test:

  • number of clients: 100
  • time: 5 seconds
  • script: TPC-B
|                         | PostgreSQL | PgBouncer (transaction) |
|-------------------------|------------|-------------------------|
| tps                     | 168.89     | 163.72                  |
| latency average         | 592.07 ms  | 610.78 ms               |
| initial connection time | 1722.09 ms | 299.56 ms               |

Fourth test:

  • number of clients: 500
  • time: 5 seconds
  • script: TPC-B
|                         | PostgreSQL | PgBouncer (transaction) |
|-------------------------|------------|-------------------------|
| tps                     | 0          | 139.4                   |
| latency average         | n/a        | 3586.89 ms              |
| initial connection time | n/a        | 1718.43 ms              |

In the test above, it is clearly visible that with this number of direct connections to the database, the waiting time was so high that pgbench did not manage to complete a single operation within the 5-second test. The next test therefore specifies a fixed number of operations per client instead of a duration.

Fifth test:

  • number of clients: 500
  • number of operations: 50
  • script: TPC-B
|                              | PostgreSQL | PgBouncer (transaction) |
|------------------------------|------------|-------------------------|
| tps                          | 73.86      | 134.17                  |
| latency average              | 6769.80 ms | 3726.68 ms              |
| initial connection time      | 8187.13 ms | 1734.29 ms              |
| memory usage                 | 2113 MB    | 1253 MB                 |
| max. connections to database | 500        | 20                      |

From this test onwards, it is worth paying attention to memory usage. I used the memory figure reported by pg_top – it includes the consumption of the database, the connection pooler, and the testing tool.

Sixth test:

  • number of clients: 4000
  • number of transactions: 5
  • script: TPC-B
|                                 | PostgreSQL   | PgBouncer (transaction) | PgBouncer (session) | PgBouncer (session) |
|---------------------------------|--------------|-------------------------|---------------------|---------------------|
| tps                             | 11.38        | 38.06                   | 53.4                | 48.41               |
| latency average                 | 351472.98 ms | 105073.70 ms            | 74908.722 ms        | 82613.25 ms         |
| initial connection time         | 68986.58 ms  | 15270.04 ms             | 14686.64 ms         | 15489.31 ms         |
| memory usage                    | 7702 MB      | 1511 MB                 | n/a                 | n/a                 |
| max. connections to database    | 4000         | 100 / 10                | 10 / 4              | 100                 |
| max. number of locks in database| 414 K        | 1.8 K                   | n/a                 | n/a                 |

The PgBouncer test was carried out with query forwarding in 2 modes:

  1. transaction: all queries within one transaction are routed to the same database connection, but different transactions from the same client connection can go to different database connections;
  2. session: all queries within one client connection are forwarded to the same database connection.
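A minimal PgBouncer configuration switching between these two modes might look like the sketch below; the database name, paths and pool sizes are illustrative, not necessarily the ones used in the tests:

```ini
; pgbouncer.ini – minimal sketch, illustrative values only
[databases]
; clients connect to PgBouncer on 6432, which forwards to PostgreSQL on 5432
testdb = host=127.0.0.1 port=5432 dbname=testdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

; pool_mode = session   -> one server connection per client connection
; pool_mode = transaction -> server connection released after each transaction
pool_mode = transaction

; how many clients may connect, and how many server connections
; a pool may open – clients beyond that wait in the queue
max_client_conn = 4000
default_pool_size = 100
```

Note that in transaction mode, session-level features (prepared statements, advisory locks, temporary tables) do not work reliably, which is one reason session mode still has its place.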

In this case, the session-forwarding variant with the lowest number of connections to the database turned out to be the fastest.

Seventh test:

  • number of clients: 4000
  • number of transactions: 5
  • script: Select
|                                 | PostgreSQL   | PgBouncer (transaction) | PgBouncer (session) |
|---------------------------------|--------------|-------------------------|---------------------|
| tps                             | 92.89        | 3416.25                 | 2834.39             |
| latency average                 | 43063.11 ms  | 1170.87 ms              | 1411.24 ms          |
| initial connection time         | 118704.05 ms | 14082.64 ms             | 13324.65 ms         |
| memory usage                    | 7724 MB      | 1358 MB                 | 1250 MB             |
| max. connections to database    | 4000         | 100 / 10                | 100 / 10            |
| max. number of locks in database| 0            | 0                       | 0                   |

This test was conducted in 2 variants, differing in the connection-forwarding mode and in the number of final connections to the database.

Summary

PostgreSQL spawns a separate operating-system process to handle each connection. Like any other solution, this has its pros and cons; the main downside is the increased resource consumption compared to a thread-based model. PgBouncer, on the other hand, is a single lightweight process that multiplexes all client connections.

Differences in the resource usage

-> Memory usage

The tests show a huge difference in memory consumption. Comparing the memory usage for 500 connections – 2113 MB (PostgreSQL) vs 1253 MB (PgBouncer) – with the usage at 4000 connections – 7702 MB (PostgreSQL) vs 1511 MB (PgBouncer) – we can estimate that the database itself needs megabytes of memory per connection, while PgBouncer needs only kilobytes.
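From those four measurements we can make a rough per-connection estimate. This is a crude delta over the 500 → 4000 connection range and ignores the testing tool's own footprint, so treat the numbers as order-of-magnitude figures:

```python
# Rough per-connection memory estimate from the article's measurements.
# Uses the growth between the 500- and 4000-connection tests; the
# pgbench footprint is ignored, so these are order-of-magnitude values.

conn_delta = 4000 - 500                    # extra connections handled

pg_delta_mb = 7702 - 2113                  # PostgreSQL memory growth (MB)
bouncer_delta_mb = 1511 - 1253             # PgBouncer memory growth (MB)

pg_per_conn_mb = pg_delta_mb / conn_delta                   # ~1.6 MB/conn
bouncer_per_conn_kb = bouncer_delta_mb / conn_delta * 1024  # ~75 KB/conn

print(round(pg_per_conn_mb, 2), round(bouncer_per_conn_kb))
```

So a direct PostgreSQL connection costs roughly twenty times more memory than a pooled one in this setup, which is consistent with the process-per-connection model described above.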

-> Locks in database

PgBouncer was configured to handle at most 100 simultaneous connections to the database, so it executed 100 queries at a time while the rest waited in the queue.

This approach resulted in 230 times fewer locks in the database (up to 1.8 thousand) compared to handling 4000 connections simultaneously (up to 414 thousand).

Thanks to this, 3.5 times higher throughput was achieved in the test in which each transaction consists of 1 SELECT, 3 UPDATEs and 1 INSERT.

-> CPU usage

In a situation where we have 4000 connections and 4 CPU threads, CPU time has to be divided between them. This case is perfectly illustrated by test number 7 – a series of queries that are not demanding for the database but require quick switching between the handled connections.

In the case of vanilla PostgreSQL, switching the CPU to the next connection means a full process context switch, while in the case of PgBouncer it is merely a switch within a single process's memory.

-> Connection time

The total connection initialization time is significantly higher in the scenario without a connection pooler. This is because starting another operating-system process takes much longer than accepting a new client connection in PgBouncer's single process.

-> The number of transactions per second

All of the above differences are directly reflected in the performance of the database, expressed as TPS – the number of transactions per second.

The tests show that for a smaller number of connections (in this case up to 100, but for other systems this value may be different) the database achieves better results without additional connection management. As the number of simultaneous connections increases, the benefits of managing them also increase.

This effect is most visible in the scenario with a series of simple SELECT queries, where there are no locks in the database – using PgBouncer alone resulted in a 36-fold increase in performance.

The purpose of this article has not been to prove that PgBouncer is the only legitimate choice. The best solution is to use such a connection management that is suitable for a given situation.

Sometimes it may be PgBouncer, otherwise HikariCP or another tool. There are also systems where many applications use the database and the only way to manage connections is the one at the database.

We have to remember that the database's better test results are not due to the connection pooler doing some of the work faster. Rather, with connection management enabled, the database received the traffic spread over time instead of all at once.

Additionally, the presented tests used a very simple connection-management configuration: no matter how many connections arrive, queue them all and execute them in batches of 100 or 10 transactions. With more advanced tuning, the configuration possibilities grow significantly.

The above results should not be treated as an oracle, but as a guide on how to plan the tests on your own system.

Applied scripts: pgbench's built-in TPC-B-like workload (the default) and the simple select-only workload (pgbench -S).