Designing Database Schemas for Time-Series Data
Published: April 5, 2026 | Category: Architecture
As the world becomes more instrumented—from IoT sensors to high-frequency trading—the need to store and query time-series data has exploded. Unlike traditional relational data, time-series data is characterized by its append-only nature and by the critical importance of the timestamp as the primary axis of access.
The Golden Rule: Use UTC
The most important rule in time-series schema design is to always store timestamps in UTC. Never store local time in your primary data records: local time depends on a UTC offset, daylight saving transitions, and political changes to timezone rules. Storing a Unix timestamp (an integer count of seconds since the epoch) or an ISO 8601 UTC string keeps your data unambiguous and easily sortable across global regions.
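As a small sketch of this rule in PostgreSQL (the `events` table and its columns are illustrative names, not part of this article's schema), the TIMESTAMPTZ type normalizes every value to UTC on write, regardless of the client's local timezone:

```sql
-- Illustrative table: TIMESTAMPTZ stores the instant normalized to UTC;
-- values are only rendered in the session timezone on read.
CREATE TABLE events (
    occurred_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    payload     TEXT
);

-- A local time with an explicit offset is stored as the same UTC instant:
INSERT INTO events (occurred_at, payload)
VALUES ('2026-04-05 09:30:00-05', 'sensor boot');

-- Read it back explicitly as UTC for unambiguous comparison and sorting:
SELECT occurred_at AT TIME ZONE 'UTC' AS utc_time, payload
FROM events;
```

The same guarantee does not hold for plain TIMESTAMP (without time zone), which silently discards the offset, which is why the schemas later in this article use TIMESTAMP WITH TIME ZONE.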
Relational vs. Purpose-Built
While PostgreSQL and MySQL can handle time-series data using partitioning and indexing, high-volume applications often turn to purpose-built databases like InfluxDB, TimescaleDB, or Amazon Timestream.
- Compression: Time-series databases use specialized compression (like Delta-of-Delta encoding) to reduce storage by up to 90%.
- Retention Policies: Automatic deletion or downsampling of old data (e.g., keeping per-second data for a day, but per-minute data for a year).
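One way to express the retention-plus-downsampling pattern above, sketched here with TimescaleDB (this assumes `sensor_data` has already been converted to a hypertable; the view name and intervals are illustrative):

```sql
-- Downsample raw rows into a per-minute continuous aggregate...
CREATE MATERIALIZED VIEW sensor_data_1m
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 minute', time) AS bucket,
       sensor_id,
       avg(value) AS avg_value
FROM sensor_data
GROUP BY bucket, sensor_id;

-- ...then drop raw per-second data after a day,
-- while keeping the per-minute rollup for a year.
SELECT add_retention_policy('sensor_data', INTERVAL '1 day');
SELECT add_retention_policy('sensor_data_1m', INTERVAL '1 year');
```

InfluxDB and Amazon Timestream offer equivalent retention mechanisms under different names; the design idea—cheap rollups outlive expensive raw data—is the same.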
Indexing for Time
In a time-series table, the timestamp should almost always be part of the primary index. In PostgreSQL, BRIN (Block Range Index) indexes are particularly effective for time-ordered data: because rows are appended in roughly timestamp order, a BRIN index summarizes whole block ranges instead of individual rows, making it dramatically smaller than a B-tree index while remaining fast for range queries. Note that this advantage depends on the physical row order staying correlated with the timestamp.
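Creating a BRIN index is a one-liner; this sketch targets the `sensor_data` narrow table defined under Common Schema Patterns below (the index name and query are illustrative):

```sql
-- A BRIN index on the time column: tiny on disk, effective because
-- append-only inserts keep physical order correlated with time.
CREATE INDEX sensor_data_time_brin
    ON sensor_data USING BRIN (time);

-- A typical range scan that the BRIN index accelerates:
SELECT sensor_id, avg(value)
FROM sensor_data
WHERE time >= now() - INTERVAL '1 hour'
GROUP BY sensor_id;
```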
Common Schema Patterns
-- The 'Narrow' Table (Highest flexibility)
CREATE TABLE sensor_data (
    time      TIMESTAMP WITH TIME ZONE NOT NULL,
    sensor_id INT,
    value     DOUBLE PRECISION
);
-- The 'Wide' Table (Optimized for specific dashboards)
CREATE TABLE weather_metrics (
    time        TIMESTAMP WITH TIME ZONE NOT NULL,
    temperature DOUBLE PRECISION,
    humidity    DOUBLE PRECISION,
    pressure    DOUBLE PRECISION
);
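To illustrate the trade-off between the two patterns (the query shapes below are illustrative): the narrow table answers "any metric for any sensor" with a single schema, while the wide table lets a dashboard fetch all related metrics for a timestamp in one row:

```sql
-- Narrow: one row per (time, sensor, value); flexible, filtered by id.
SELECT time, value
FROM sensor_data
WHERE sensor_id = 42
  AND time >= now() - INTERVAL '1 day'
ORDER BY time;

-- Wide: one row per timestamp carries every metric the dashboard needs.
SELECT date_trunc('hour', time) AS hour,
       avg(temperature) AS avg_temp,
       avg(humidity)    AS avg_humidity,
       avg(pressure)    AS avg_pressure
FROM weather_metrics
WHERE time >= now() - INTERVAL '7 days'
GROUP BY hour
ORDER BY hour;
```

Adding a new metric to the narrow table needs no schema change, whereas the wide table requires an ALTER TABLE—but rewards you with simpler, single-scan dashboard queries.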
Conclusion
Good time-series design requires thinking about the lifecycle of the data. By choosing the right storage engine, enforcing UTC, and implementing smart retention policies, you can build systems that scale to billions of events while maintaining sub-second query performance.