

Deep dive into postgres stats: pg_stat_database

Everything you always wanted to know about Postgres stats

In this post we continue our discussion of postgres stats. This time we'll focus on pg_stat_database. As the postgres documentation says, this view contains one row for each database in the cluster, showing database-wide statistics. Postgres may serve several databases within a single instance, so this view contains stats for all of them.
You can find a full description of the view's columns in the official documentation, so here I will focus on the types of problems it helps us solve:
  • Cache hit ratio.
  • Commit ratio.
  • Database anomalies.
  • Load distribution.
Cache hit ratio. Everybody knows that reading from memory is faster than reading from disk, so it's better when our data fit in memory. When clients request data, postgres first checks shared memory, and if the relevant data aren't there it has to read them from disk, which makes queries slower. Using pg_stat_database we can track this and estimate the cache hit ratio. The view records the number of blocks read from the shared buffer cache and from disk – blks_hit and blks_read – and a simple calculation gives the desired value:

SELECT
  datname, round(100 * blks_hit::numeric / (blks_hit + blks_read), 2) as cache_hit_ratio
FROM pg_stat_database WHERE (blks_hit + blks_read) > 0;
or a summary across all databases:

SELECT 
  round(100 * sum(blks_hit) / sum(blks_hit + blks_read), 3) as cache_hit_ratio
FROM pg_stat_database;

The sweet spot here is values close to 100 – it means that almost all the necessary data were read from shared buffers. Values near 90 show that postgres reads from disk from time to time, and values below 80 indicate an insufficient amount of shared buffers or physical RAM: the data required by the most frequently called queries don't fit into memory, and postgres has to read them from disk. That is still tolerable if the data are in the OS page cache; if they aren't, it's a bad scenario. The basic remedy is to increase the amount of shared buffers – a good starting point is 25% of the available RAM. When all databases are able to fit in RAM, a good starting point is to allocate 80% of all available RAM.
Note that when postgres has just been restarted and is actively filling the buffer cache, it shows a low cache hit ratio, and this is normal behaviour.
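
If you decide to raise shared_buffers, a minimal sketch of the tuning step looks like this – the '4GB' value is only an illustration, pick a size based on your RAM and workload:

-- check the current setting
SHOW shared_buffers;

-- example value only; shared_buffers takes effect only after a server restart
ALTER SYSTEM SET shared_buffers = '4GB';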

Commit ratio. Commit ratio is similar to cache hit ratio, but it shows the share of successful operations. As is well known, changes made by a transaction may be either committed or aborted (rolled back). A rollback that isn't properly handled by the application can be considered an error, and single queries that fail outside of transactions are also counted as rollbacks. So, in general, the commit ratio lets us estimate the amount of errors in a particular database. It is calculated from the xact_commit and xact_rollback values, with queries like the ones above. Here is an example which shows per-database results:

SELECT
  datname, round(100 * xact_commit::numeric / (xact_commit + xact_rollback), 2) as commit_ratio
FROM pg_stat_database WHERE (xact_commit + xact_rollback) > 0;

Result values vary between 0 and 100. Values closer to 100 mean that your database has very few errors. When the commit ratio is below 90, a good idea is to configure proper logging, check the logs for errors, build a list of the most frequent ones and eliminate them step by step.
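
On the logging side, a minimal sketch is to make sure failed statements actually reach the log and reload the configuration (log_min_error_statement already defaults to 'error', so this is mostly a check that it hasn't been raised):

-- record statements that fail with an error in the server log
ALTER SYSTEM SET log_min_error_statement = 'error';
SELECT pg_reload_conf();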

Database anomalies. Another useful application of pg_stat_database is anomaly detection. Anomalies are unwanted events occurring in databases, and this stats view provides information about rollbacks, recovery conflicts, deadlocks and temporary files. All of these events are unwanted, and if there are too many of them, you should pay attention and try to eliminate their sources.

As mentioned above, rollbacks aren't only aborted transactions – they also include failed queries, so do check the logs to understand what caused an error. Another type of anomaly is recovery conflicts – situations when queries running on standbys are cancelled. From the user's perspective this looks like a failed query, and when it fails, additional details are written to the postgres logs. Another good place to start an investigation is the pg_stat_database_conflicts view – there may be various reasons for conflicts to occur, and this view lets us understand the exact cause.
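
For example, a quick look at the per-database conflict counters shows which kind of conflict dominates:

SELECT
  datname, confl_tablespace, confl_lock, confl_snapshot,
  confl_bufferpin, confl_deadlock
FROM pg_stat_database_conflicts;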

Another issue is deadlocks – when two or more transactions have locked resources and are trying to obtain resources already locked by the other transactions. In a normal situation xact A locks resource A and xact B locks resource B. In a deadlock situation xact A also tries to lock resource B and is put on hold, since xact B already holds that lock, while xact B in turn waits for a resource held by xact A. This way the transactions lock each other's resources. When a deadlock occurs, postgres cancels one of the involved transactions and removes it from the waiting queue so that the other xacts can continue their work.

As a rule, deadlocks warn you about bad application design, and the general recommendation is to fix the application logic which causes them. A good starting point is to enable the log_lock_waits option in postgresql.conf, reload the configuration and check the logs from time to time. When a deadlock occurs, extra information about the conflicting queries will be logged.
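
A minimal sketch of both steps – watching the per-database deadlock counter and turning on lock-wait logging:

-- cumulative number of deadlocks detected in each database
SELECT datname, deadlocks FROM pg_stat_database;

-- log details about lock waits longer than deadlock_timeout
ALTER SYSTEM SET log_lock_waits = on;
SELECT pg_reload_conf();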

The last anomaly type is temporary files – when postgres tries to sort or hash a huge amount of data and has insufficient memory, it spills data from memory into temporary files located on disk. As we know, disks are slower than memory, so queries that use temp files potentially become slower. It's good practice to track temp files and the problematic queries. The log_temp_files option helps detect queries which create temp files. The next step is to rewrite the problematic queries, but if that is very hard or impossible, an alternative is to increase work_mem – this is quite a flexible setting, and postgres allows changing it per-session, per-user or for the whole server. In the latter case you need to be careful, since a large work_mem for the whole server may cause OOM (out-of-memory) conditions and harm postgres.
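
To see how much temp file activity each database generates and to log the queries that create temp files, something like this works (the log_temp_files threshold is in kilobytes, 0 means log every file):

-- cumulative count and total size of temporary files per database
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database;

-- log every temporary file created
ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();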

Load distribution. To finish, a few words on load distribution. This metric isn't as important as the ones mentioned above, but sometimes it is useful when you need to understand the kind of workload in your database. Is your workload write- or read-intensive? The following metrics answer that question: tup_returned, tup_fetched, tup_inserted, tup_updated, tup_deleted. They show how many rows were read or changed by queries. With your favourite monitoring system, which can graph these numbers, you can quickly spot load spikes or collapses and react appropriately.
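
A simple way to eyeball the read/write split is to group these counters – a rough sketch:

SELECT
  datname,
  tup_returned + tup_fetched AS rows_read,
  tup_inserted + tup_updated + tup_deleted AS rows_written
FROM pg_stat_database;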

To summarise, the pg_stat_database view contains general information about databases and the events that occurred in them, so tracking the numbers from this view is a good starting point for assessing whether everything is fine with your databases. Of course, this view alone is insufficient, and additional resources like other views and the logs should be used. In my next post I will continue to explain other stats views and how to use them efficiently.
