
PostgreSQL Query Store with pg_stat_statements: A DBA’s Guide to Performance Tracking

Introduction

When tuning PostgreSQL databases, the hardest part isn’t writing the fix — it’s knowing what to fix. Without visibility into which queries consume the most resources, optimization becomes guesswork.

That’s where pg_stat_statements comes in. It acts as PostgreSQL’s Query Store, recording detailed statistics about query executions. With it, DBAs and developers can measure performance, identify bottlenecks, and validate optimizations in real time.

In this blog, we’ll cover:

  • What pg_stat_statements is and why it matters

  • How to enable and configure it

  • Key columns and metrics explained

  • Example queries to analyze performance

  • A real-world before/after optimization demo

  • Best practices for DBAs

  • Integration with monitoring tools


What is pg_stat_statements?

pg_stat_statements is a PostgreSQL extension that tracks statistics for all executed SQL statements. Instead of storing every single query instance, it normalizes queries (replaces constants with placeholders) so that logically identical queries are grouped together.

This makes it easier to see which query patterns are expensive overall rather than drowning in duplicates.
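As a sketch of what normalization looks like in practice (the orders table here is the one used in the examples later in this post):

```sql
-- Two executions that differ only in their literal values...
SELECT * FROM orders WHERE customer_id = 123;
SELECT * FROM orders WHERE customer_id = 456;

-- ...show up as a single normalized entry in pg_stat_statements:
--   query: SELECT * FROM orders WHERE customer_id = $1
--   calls: 2
```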

Benefits of pg_stat_statements:

  • Tracks frequency (calls) of queries

  • Measures execution time (total, mean, min, max)

  • Shows rows returned/affected

  • Reports I/O usage (shared buffers hit vs read)

  • Normalizes queries for better grouping

  • Persists across restarts (if configured)

Effectively, it provides query-level insights at the database engine level without needing to instrument applications.


Step 1: Enabling pg_stat_statements

  1. Edit postgresql.conf

shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000          # number of distinct statements to track
pg_stat_statements.track = all          # also track statements inside functions (default: top)
pg_stat_statements.save = on            # persist stats across restarts
pg_stat_statements.track_utility = on   # include utility commands like CREATE/DROP
pg_stat_statements.track_planning = off # set to on if you want planning times
  2. Restart PostgreSQL

sudo systemctl restart postgresql
  3. Enable the extension in your database:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

✅ The query store is now ready.
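As a quick sanity check (assumes you have privileges to read the view):

```sql
-- Confirm the extension is installed in this database
SELECT extname, extversion
FROM pg_extension
WHERE extname = 'pg_stat_statements';

-- Confirm the view is collecting data
SELECT count(*) AS tracked_statements
FROM pg_stat_statements;
```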


Step 2: Understanding Key Columns

Querying pg_stat_statements gives you many useful columns. Here are the most important ones (column names shown are for PostgreSQL 13 and later; older releases use total_time and mean_time instead of total_exec_time and mean_exec_time):

Column            | Meaning
------------------|-----------------------------------------------------------
query             | Normalized SQL statement text
calls             | Number of times this statement was executed
total_exec_time   | Total execution time across all calls (ms)
mean_exec_time    | Average execution time per call (ms)
min_exec_time     | Minimum execution time (ms)
max_exec_time     | Maximum execution time (ms)
rows              | Total number of rows returned/affected across all calls
shared_blks_hit   | Shared buffer hits (blocks found in memory)
shared_blks_read  | Shared buffer reads (blocks fetched from disk)
blk_read_time     | Time spent reading blocks (ms; requires track_io_timing)
blk_write_time    | Time spent writing blocks (ms; requires track_io_timing)
Step 3: Exploring Query Statistics

Most Expensive Queries (by total time)

SELECT
    query,
    calls,
    total_exec_time AS total_time,
    mean_exec_time AS avg_time,
    rows,
    (100 * total_exec_time / sum(total_exec_time) OVER ())::numeric(5,2) AS pct_of_load
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

Example Output (Before Optimization):

Query                          | Calls | Total Time (ms) | Avg Time (ms) | Rows | % Load
-------------------------------|-------|-----------------|---------------|------|-------
SELECT * FROM orders WHERE ... | 15k   | 92000           | 6.1           | 1.2M | 45.3%
UPDATE orders SET ...          | 2k    | 46000           | 23.0          | 2k   | 22.7%
SELECT * FROM customers ...    | 30k   | 30000           | 1.0           | 50k  | 14.8%

Most Frequently Called Queries

SELECT query, calls, mean_exec_time
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;

Detecting Slow Queries

SELECT query, calls, mean_exec_time
FROM pg_stat_statements
WHERE mean_exec_time > 50
ORDER BY mean_exec_time DESC
LIMIT 5;

I/O-Heavy Queries (High Disk Reads)

SELECT
    query,
    shared_blks_hit,
    shared_blks_read,
    (100.0 * shared_blks_read
        / NULLIF(shared_blks_hit + shared_blks_read, 0))::numeric(5,2) AS pct_read
FROM pg_stat_statements
ORDER BY shared_blks_read DESC
LIMIT 5;
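A related aggregate worth watching is the overall cache hit ratio across all tracked statements; as a rough rule of thumb, OLTP workloads that drop well below ~99% deserve a closer look:

```sql
-- Overall share of block requests served from shared buffers
SELECT round(
         100.0 * sum(shared_blks_hit)
         / NULLIF(sum(shared_blks_hit) + sum(shared_blks_read), 0), 2
       ) AS cache_hit_pct
FROM pg_stat_statements;
```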

Step 4: Real-World Optimization Demo

Query Before Indexing:

SELECT * FROM orders WHERE customer_id = 123;

Stats from pg_stat_statements:

  • Calls: 5000

  • Avg Time: 12.4 ms

  • Rows Returned: ~50

Optimization:

CREATE INDEX idx_orders_customer ON orders(customer_id);

Query After Indexing:

  • Calls: 5000

  • Avg Time: 0.7 ms

  • Rows Returned: ~50

Average execution time improved roughly 17x (12.4 ms down to 0.7 ms), validated directly from pg_stat_statements.
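The same improvement can be cross-checked with EXPLAIN, which should now report an index scan instead of a sequential scan (table and index names are from the demo above):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 123;
-- Before: Seq Scan on orders ...
-- After:  Index Scan using idx_orders_customer on orders ...
```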


Step 5: Resetting Statistics

To reset all stats (useful for measuring specific workloads):

SELECT pg_stat_statements_reset();
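On PostgreSQL 12 and later, pg_stat_statements_reset() also accepts userid, dbid, and queryid arguments (0 meaning "any"), so you can discard a single entry without losing the rest. A sketch, reusing the orders query from the demo:

```sql
-- Reset statistics for one statement pattern only
SELECT pg_stat_statements_reset(0, 0, queryid)
FROM pg_stat_statements
WHERE query LIKE 'SELECT * FROM orders%';
```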

Step 6: Best Practices

  • ✅ Reset stats regularly (daily, weekly, or per workload test)

  • ✅ Set pg_stat_statements.max high enough (10k–50k) to capture workload variety

  • ✅ Enable pg_stat_statements.track_planning if query planning time matters

  • ✅ Combine with pgBadger for reports and Prometheus + Grafana for dashboards

  • ✅ Watch I/O-heavy queries (high disk reads); they often indicate missing indexes


Step 7: Integration with Monitoring Tools

  • pgAdmin: Displays pg_stat_statements output in a GUI for DBAs.

  • pgBadger: Generates HTML reports combining logs and query stats.

  • Prometheus + Grafana: Visualizes query performance over time using exporters.

This turns pg_stat_statements into a continuous performance monitoring system.


Conclusion

pg_stat_statements is one of PostgreSQL’s most powerful performance tools.
It allows you to:

  • Identify expensive queries

  • Detect workload patterns

  • Measure optimization improvements

  • Monitor database health continuously

Every PostgreSQL DBA should enable this extension early in their environment.


About Author

SQL Server DBA, Trainer, Blogger, Author
