The HikariPool.getConnection() method is failing because the pool cannot acquire a new connection from the database within the configured connectionTimeout limit. This often points to a bottleneck either in the database itself or in the application’s ability to manage its connections.

Common Causes and Fixes

1. Database is Overloaded or Unresponsive

Diagnosis: Check your database’s CPU, memory, and I/O utilization. Look for long-running queries or a high number of active connections on the database side.

  • PostgreSQL: SELECT pid, datname, usename, query, state, wait_event_type, wait_event FROM pg_stat_activity WHERE state != 'idle';
  • MySQL: SHOW FULL PROCESSLIST;
  • SQL Server: SELECT * FROM sys.dm_exec_requests WHERE session_id > 50 ORDER BY start_time ASC;

Fix:

  • Optimize Queries: Identify and optimize slow queries using database-specific profiling tools.
  • Increase Database Resources: Scale up CPU, RAM, or I/O for your database server.
  • Tune Database Parameters: Adjust parameters like max_connections (PostgreSQL/MySQL) or max server memory (SQL Server) if the database is hitting its own limits. For PostgreSQL, consider increasing shared_buffers and work_mem.

Why it works: This directly addresses the root cause if the database is simply too busy to accept new connections or respond to HikariCP’s requests in time.

2. HikariCP maximumPoolSize is Too Low

Diagnosis: Examine HikariCP’s metrics. If totalConnections consistently sits at maximumPoolSize, idleConnections is near zero, and threadsAwaitingConnection is high, the pool is saturated.

  • JMX Metrics: Monitor HikariPool.activeConnections, HikariPool.idleConnections, HikariPool.totalConnections, and HikariPool.threadsAwaitingConnection.

Fix: Increase spring.datasource.hikari.maximum-pool-size (or the maximumPoolSize property on HikariConfig outside Spring Boot) to a value that accommodates your application’s peak concurrent request load. HikariCP’s own pool-sizing guidance suggests connections = (core_count * 2) + effective_spindle_count as a starting point for many workloads, but adjust based on observed load and database capacity. For example, if your database can handle 50 connections and this application is its main client, you might set maximum-pool-size=40 and leave headroom for other consumers.
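In Spring Boot this is a one-line change in application.properties; the value below is illustrative, not a recommendation:

```properties
# Cap the pool at 40 connections (example value; size against measured load
# and the database's own max_connections limit)
spring.datasource.hikari.maximum-pool-size=40
```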

Why it works: A larger pool allows more concurrent requests to hold connections simultaneously, preventing the pool from exhausting its available connections during peak times.

3. HikariCP connectionTimeout is Too Low

Diagnosis: The error message itself points to a timeout. If your database is generally responsive but occasionally sees brief latency spikes, the default connectionTimeout of 30000 ms (30 seconds) may be too aggressive.

Fix: Increase spring.datasource.hikari.connection-timeout to a higher value, such as 60000 (60 seconds). Keep it below any client-facing request timeout so callers see a clear pool-timeout error rather than an opaque upstream one, and remember that a longer timeout only tolerates transient delays; if the pool is genuinely saturated, requests will simply wait longer before failing.
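As a properties fragment (the 60-second value is illustrative):

```properties
# Wait up to 60 s for a free connection before throwing (default is 30000 ms)
spring.datasource.hikari.connection-timeout=60000
```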

Why it works: This gives HikariCP more time to wait for a connection to become available from the database, tolerating transient network or database delays.

4. Application is Holding Connections for Too Long

Diagnosis: Monitor HikariPool.activeConnections. If this number stays high for extended periods while HikariPool.idleConnections is often zero, your application is likely slow to release connections. Common causes are long-running transactions, blocking calls made while a connection is checked out, missing try-finally or try-with-resources cleanup so connections are never returned, and inefficient data fetching.

Fix:

  • Shorten Transactions: Ensure database transactions are as short as possible. Commit or rollback promptly.
  • Asynchronous Operations: Move long-running, non-database-bound operations outside the scope of connection acquisition.
  • Proper try-finally / try-with-resources: Ensure Connection, Statement, and ResultSet are always closed, preferably using Java’s try-with-resources statement. Example:

```java
try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement("SELECT * FROM my_table")) {
    // ... execute query and process results ...
} catch (SQLException e) {
    // handle exception
}
```

Why it works: Releasing connections promptly allows them to be returned to the pool and reused by other threads, reducing the overall demand on the pool.
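To make the “release before slow work” point concrete, here is a minimal runnable sketch. FakeConnection is a hypothetical stand-in for java.sql.Connection so the example runs without a database; in real code the resource would come from the HikariCP DataSource.

```java
import java.util.ArrayList;
import java.util.List;

public class PromptRelease {

    // Hypothetical stand-in for a pooled JDBC connection; real code would use
    // java.sql.Connection obtained from the HikariCP DataSource.
    static class FakeConnection implements AutoCloseable {
        boolean closed = false;
        List<String> query() { return List.of("row1", "row2"); }
        @Override public void close() { closed = true; }
    }

    // Hold the connection only for the database call, then release it
    // (returning it to the pool) before any slow post-processing.
    static List<String> fetchAndProcess(FakeConnection conn) {
        List<String> rows;
        try (FakeConnection c = conn) {
            rows = new ArrayList<>(c.query());   // database work only
        }
        rows.replaceAll(String::toUpperCase);    // slow work after close()
        return rows;
    }

    public static void main(String[] args) {
        FakeConnection conn = new FakeConnection();
        System.out.println(fetchAndProcess(conn)); // prints [ROW1, ROW2]
        System.out.println(conn.closed);           // prints true
    }
}
```

Copying the result set out of the try block is the key design choice: the connection goes back to the pool as soon as the database work is done, so other threads are not starved while post-processing runs.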

5. Network Issues Between Application and Database

Diagnosis: Use network diagnostic tools like ping, traceroute (or tracert on Windows), and mtr from your application server to the database server. Check for packet loss, high latency, or intermittent connectivity drops. Also, review firewall logs on both the application and database sides for dropped connections or blocked ports (default is 5432 for PostgreSQL, 3306 for MySQL, 1433 for SQL Server).

Fix:

  • Resolve Network Problems: Work with your network team to address latency, packet loss, or firewall misconfigurations.
  • Ensure Correct Ports: Verify that the database port is open and accessible.
  • Consider leakDetectionThreshold: While not a direct fix, setting spring.datasource.hikari.leak-detection-threshold to a value like 30000 (30 seconds) makes HikariCP log a warning, with a stack trace, for any connection held longer than the threshold. This helps distinguish application-side leaks from genuine network problems.

Why it works: Reliable network connectivity is fundamental for the application to communicate with the database and for HikariCP to manage its connections.

6. Database Server Not Starting or Crashing

Diagnosis: Check the database server’s logs for startup errors, out-of-memory errors, or crash reports. Verify the database process is running.

Fix: Troubleshoot and resolve the underlying issues preventing the database from starting or causing it to crash. This might involve increasing server resources, fixing configuration errors, or addressing application-induced database corruption.

Why it works: If the database isn’t running, HikariCP cannot possibly establish new connections.

7. Incorrect JDBC Driver or Configuration

Diagnosis: Ensure you are using the correct JDBC driver for your database version and that the jdbcUrl is correctly formatted, including the database hostname, port, and database name. Check for typos in the URL.

Fix:

  • Update Driver: Use the latest stable version of the JDBC driver for your database.
  • Validate jdbcUrl: Double-check the jdbcUrl format. For example, a common PostgreSQL URL is jdbc:postgresql://your_db_host:5432/your_db_name. A MySQL URL might be jdbc:mysql://your_db_host:3306/your_db_name?serverTimezone=UTC.
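In Spring Boot, the URL lives in application.properties; the placeholders below mirror those in the example URLs above, and the driver class is normally inferred from the URL:

```properties
# PostgreSQL example; substitute your host, port, database name, and credentials
spring.datasource.url=jdbc:postgresql://your_db_host:5432/your_db_name
spring.datasource.username=your_db_user
spring.datasource.password=your_db_password
```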

Why it works: An incorrect jdbcUrl or driver can prevent the database from being reachable or accepting connections, leading to timeouts.

Once connections are being acquired reliably, any remaining failures typically surface further down the request path, during statement preparation or query execution, and usually point to schema or data problems rather than the pool.

Want structured learning?

Take the full Spring Boot course →