Grafana’s TimescaleDB support is fundamentally its PostgreSQL data source with a few clever optimizations for time-series data.

Here’s a TimescaleDB instance running, ingesting data from a simulated IoT sensor:

-- Create a table for sensor readings
CREATE TABLE sensor_readings (
    time        TIMESTAMPTZ NOT NULL,
    device_id   INT NOT NULL,
    temperature DOUBLE PRECISION,
    humidity    DOUBLE PRECISION
);

-- Create a hypertable from the table, partitioning by time
SELECT create_hypertable('sensor_readings', 'time');

-- Insert some sample data
INSERT INTO sensor_readings (time, device_id, temperature, humidity)
VALUES
(NOW() - INTERVAL '1 hour', 1, 22.5, 45.2),
(NOW() - INTERVAL '59 minutes', 1, 22.6, 45.1),
(NOW() - INTERVAL '1 hour', 2, 21.0, 50.5),
(NOW() - INTERVAL '58 minutes', 2, 21.1, 50.3);

Now, let’s see how Grafana queries this. In Grafana, you’d add a new PostgreSQL data source and configure the connection details as usual. The critical part is the "TimescaleDB" toggle in the data source options: enabling it tells Grafana that TimescaleDB-specific functions such as time_bucket are available, so the query builder can use them for more efficient queries.

When you write a query in Grafana’s query editor, it is plain SQL; Grafana only expands its macros before sending the query to the database. For instance, to get the per-minute average temperature for each device over the dashboard’s time range, you might write:

SELECT
  time_bucket('1 minute', time) AS time,
  device_id,
  avg(temperature) AS avg_temperature
FROM sensor_readings
WHERE
  $__timeFilter(time)
GROUP BY 1, 2
ORDER BY 1, 2;

Grafana’s $__timeFilter(time) macro is crucial. It expands to a time BETWEEN <start_time> AND <end_time> condition based on the time range selected in your Grafana dashboard. This makes your queries dynamic and responsive to user interaction.
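For a dashboard window covering the last hour, the query Grafana actually sends looks roughly like this (the timestamps are illustrative):

SELECT
  time_bucket('1 minute', time) AS time,
  device_id,
  avg(temperature) AS avg_temperature
FROM sensor_readings
WHERE
  -- expanded from $__timeFilter(time)
  time BETWEEN '2024-01-01T11:00:00Z' AND '2024-01-01T12:00:00Z'
GROUP BY 1, 2
ORDER BY 1, 2;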

The "hypertable" concept is TimescaleDB’s core advantage. Instead of querying a single massive table, TimescaleDB partitions your data into smaller "chunks" based on the time column. When you query, TimescaleDB intelligently queries only the relevant chunks, drastically speeding up reads. The create_hypertable command shown earlier is what sets this up.
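You can inspect the chunks backing a hypertable, and tune how much time each new chunk covers, with TimescaleDB’s helper functions (the 1-day interval below is just an example value):

-- List the chunks that currently back the hypertable
SELECT show_chunks('sensor_readings');

-- Change the time span covered by each newly created chunk
SELECT set_chunk_time_interval('sensor_readings', INTERVAL '1 day');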

The true power comes when you leverage TimescaleDB’s analytical functions. For example, to calculate a 5-minute rolling average of temperature for each device:

SELECT
  time,
  device_id,
  avg(temperature) OVER (
    PARTITION BY device_id
    ORDER BY time
    -- 4 minutes back plus the current row: five samples at a 1-minute cadence
    RANGE BETWEEN INTERVAL '4 minutes' PRECEDING AND CURRENT ROW
  ) AS rolling_avg_temperature
FROM sensor_readings
WHERE
  $__timeFilter(time);

This query uses PostgreSQL’s window functions, which TimescaleDB handles efficiently. Grafana can visualize this directly, showing smoothed trends.

The "TimescaleDB" setting in Grafana isn’t just a toggle; it lets Grafana recognize and use TimescaleDB-specific functions like time_bucket, and first() or last() combined with time_bucket for downsampling. If you have a very high-frequency dataset, downsampling it for long-term storage and querying can markedly improve performance.
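As a sketch of that downsampling pattern, first() and last() pick the earliest and latest reading within each bucket (the 5-minute bucket size is arbitrary):

SELECT
  time_bucket('5 minutes', time) AS bucket,
  device_id,
  first(temperature, time) AS first_temp,  -- earliest reading in the bucket
  last(temperature, time)  AS last_temp    -- latest reading in the bucket
FROM sensor_readings
WHERE
  $__timeFilter(time)
GROUP BY bucket, device_id
ORDER BY bucket, device_id;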

For example, to create a summary table that aggregates 1-hour averages from the raw sensor_readings hypertable:

CREATE MATERIALIZED VIEW sensor_readings_1h_summary WITH (timescaledb.continuous) AS
SELECT
  time_bucket('1 hour', time) AS bucket,
  device_id,
  avg(temperature) AS avg_temp,
  avg(humidity) AS avg_humidity
FROM sensor_readings
GROUP BY bucket, device_id;

-- Schedule automatic refreshes of the continuous aggregate
SELECT add_continuous_aggregate_policy('sensor_readings_1h_summary',
  start_offset      => INTERVAL '3 hours',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');

Grafana can then query this materialized view, which is updated automatically by TimescaleDB. This is a common pattern for reducing query load on raw data.
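In Grafana, a panel would then point at the view instead of the raw hypertable; something like the following, noting that the time column is named bucket in this view:

SELECT
  bucket AS time,
  device_id,
  avg_temp
FROM sensor_readings_1h_summary
WHERE
  $__timeFilter(bucket)
ORDER BY 1, 2;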

The most surprising thing is how seamlessly TimescaleDB’s unique features, like hypertables and continuous aggregates, map onto standard SQL and PostgreSQL window functions, making them directly usable and performant within Grafana’s familiar query interface. You don’t need to learn a new query language; you just need to understand how to structure your TimescaleDB schema and leverage its extensions for optimal performance.

The next concept to explore is using TimescaleDB’s data retention policies to automatically prune old data.
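As a preview, a retention policy is a single call; TimescaleDB then drops whole chunks past the cutoff in a background job (the 90-day interval is just an example):

-- Automatically drop chunks older than 90 days
SELECT add_retention_policy('sensor_readings', INTERVAL '90 days');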
