Timestamp Precision Guide
Seconds vs milliseconds vs microseconds vs nanoseconds. Storage requirements, language support, database behavior, and conversion gotchas.
By Michael Lip · Published April 11, 2026 · zovo.one
One of the most common timestamp bugs is mixing up seconds and milliseconds. JavaScript returns milliseconds, Python returns seconds, and Go can return nanoseconds. This guide covers every precision level, showing storage requirements, overflow dates, and exactly which APIs and databases use which precision.
Precision Level Comparison
| Precision | Unit | Digits (2026) | Example Value | Resolution | Storage (int) | 32-bit Overflow | 64-bit Overflow |
|---|---|---|---|---|---|---|---|
| Seconds | 1s | 10 digits | 1744329600 | 1 second | 4 or 8 bytes | 2038-01-19 | 292B years |
| Milliseconds | 1ms | 13 digits | 1744329600000 | 0.001 seconds | 8 bytes | N/A (too large) | 292M years |
| Microseconds | 1µs | 16 digits | 1744329600000000 | 0.000001 seconds | 8 bytes | N/A | 292K years |
| Nanoseconds | 1ns | 19 digits | 1744329600000000000 | 0.000000001 seconds | 8 bytes | N/A | Year 2262 |
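The digit-count heuristic in the table above can be sketched as a small Python helper. This is illustrative code, not a library API — the function names and digit thresholds are assumptions, and the heuristic only works for present-day timestamps (very old or far-future values will be misclassified):

```python
from datetime import datetime, timezone

def guess_precision(ts: int) -> str:
    """Guess a Unix timestamp's precision from its digit count.

    Heuristic only: assumes a present-day timestamp, where seconds
    are 10 digits, ms 13, µs 16, and ns 19.
    """
    digits = len(str(abs(ts)))
    if digits <= 11:
        return "seconds"
    if digits <= 14:
        return "milliseconds"
    if digits <= 17:
        return "microseconds"
    return "nanoseconds"

def to_datetime(ts: int) -> datetime:
    """Normalize a timestamp of any guessed precision to a UTC datetime."""
    divisor = {"seconds": 1, "milliseconds": 10**3,
               "microseconds": 10**6, "nanoseconds": 10**9}[guess_precision(ts)]
    return datetime.fromtimestamp(ts / divisor, tz=timezone.utc)

print(guess_precision(1744329600))        # seconds
print(guess_precision(1744329600000))     # milliseconds
print(to_datetime(1744329600000000000))   # 2025-04-11 00:00:00+00:00
```

Note that the thresholds are deliberately loose (≤11, ≤14, ≤17) so the heuristic keeps working for a few more centuries at each precision level.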
Language & Database Precision Defaults
| Language / DB | Default Precision | API Call | Higher Precision Available | Gotcha |
|---|---|---|---|---|
| JavaScript | Milliseconds | Date.now() | performance.now() (microseconds) | Number.MAX_SAFE_INTEGER limits to year 285,616 |
| Python | Seconds (float) | time.time() | time.time_ns() (nanoseconds) | Float resolution is ~0.5 µs at current epoch values; sub-microsecond precision is lost |
| Java | Milliseconds | System.currentTimeMillis() | Instant.now() (nanoseconds) | Date class only supports milliseconds |
| Go | Seconds | time.Now().Unix() | time.Now().UnixNano() (nanoseconds) | UnixNano overflows in year 2262 |
| Rust | Seconds + nanos | SystemTime::now() | Duration has nanos field | Duration stores (secs, nanos) as separate fields |
| C/C++ | Seconds | time(NULL) | clock_gettime(CLOCK_REALTIME) (ns) | 32-bit time_t overflows in 2038 |
| Ruby | Seconds | Time.now.to_i | Time.now.to_f (microseconds via float) | to_f loses precision beyond microseconds |
| PHP | Seconds | time() | microtime(true) (microseconds) | microtime returns float, precision loss possible |
| PostgreSQL | Microseconds | TIMESTAMP | None (microseconds is the maximum) | Microsecond precision, not nanosecond |
| MySQL | Seconds | DATETIME | DATETIME(6) (microseconds) | Must specify (6) for sub-second precision |
| SQLite | Text / Real / Int | strftime('%s','now') | No native timestamp type | Stores as text, real, or integer — your choice |
| MongoDB | Milliseconds | new Date() | ISODate has ms precision only | Cannot store microsecond or nanosecond timestamps |
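As a quick illustration of the defaults in the table, Python exposes both clocks side by side: a float-seconds reading (the default) and an integer-nanoseconds reading (Python 3.7+). A minimal sketch comparing them:

```python
import time

s = time.time()       # default: float seconds since the epoch
ns = time.time_ns()   # integer nanoseconds since the epoch (3.7+)

# Both read the same epoch clock, so converting nanoseconds back
# to seconds lands within a second of the float reading.
assert abs(ns / 1e9 - s) < 1.0

# The float clock can only resolve ~0.5 µs at 2026-era epoch values,
# so sub-microsecond detail survives only in the integer clock.
print(f"{s:.6f} s  vs  {ns} ns")
```

The same pattern applies across the table: prefer the integer high-precision API (time_ns, UnixNano, Instant.now) when sub-second accuracy matters, and treat any float-based clock as microsecond-accurate at best.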
Methodology
Data sources:
- Stack Overflow API — Queried api.stackexchange.com/2.3/search?intitle=timestamp+precision on April 11, 2026, returning questions sorted by votes
- Language documentation — Precision defaults verified against official documentation for Python 3.12, Node.js 22, Java 21, Go 1.22, Rust 1.77, Ruby 3.3, PHP 8.3
- Database documentation — Storage sizes and precision limits from PostgreSQL 17, MySQL 8.4, MongoDB 8.0, and SQLite 3.45 documentation
- Conversion formulas — Seconds to milliseconds: multiply by 1,000. To microseconds: multiply by 1,000,000. To nanoseconds: multiply by 1,000,000,000.
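The conversion formulas in the last bullet are exact integer multiplications going up in precision; going down, integer division truncates any sub-second component, which is a common source of silent data loss. A minimal sketch (constant names are illustrative):

```python
MS_PER_S = 1_000
US_PER_S = 1_000_000
NS_PER_S = 1_000_000_000

ts_s = 1744329600            # 10 digits: seconds
ts_ms = ts_s * MS_PER_S      # 13 digits: milliseconds
ts_us = ts_s * US_PER_S      # 16 digits: microseconds
ts_ns = ts_s * NS_PER_S      # 19 digits: nanoseconds

# Downward conversion with // truncates toward zero: 999 ms of
# sub-second detail is silently discarded here.
assert (ts_ms + 999) // MS_PER_S == ts_s

print(ts_s, ts_ms, ts_us, ts_ns)
```

Because Python integers are arbitrary-precision, none of these multiplications can overflow; in languages with fixed-width integers, the 64-bit overflow column in the first table applies.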
Frequently Asked Questions
What is the difference between Unix timestamp in seconds vs milliseconds?
A Unix timestamp in seconds counts seconds since January 1, 1970 UTC (currently ~1.78 billion). A millisecond timestamp is 1000x larger (~1.78 trillion). To identify which you have: 10 digits means seconds, 13 digits means milliseconds, 16 digits means microseconds, 19 digits means nanoseconds. JavaScript's Date.now() returns milliseconds, while Python's time.time() returns seconds with decimal precision.
When will the Unix timestamp overflow?
The 32-bit signed Unix timestamp overflows on January 19, 2038 at 03:14:07 UTC (the "Y2K38 problem"). A 64-bit signed integer storing seconds won't overflow for 292 billion years. For nanosecond precision in 64-bit, overflow occurs in the year 2262. Most modern systems use 64-bit timestamps, but embedded systems and legacy code may still use 32-bit.
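Both overflow dates quoted above can be reproduced directly: the 32-bit limit is 2^31 − 1 seconds, and the 64-bit nanosecond limit is 2^63 − 1 nanoseconds. A quick check in Python:

```python
from datetime import datetime, timezone

# Largest value of a 32-bit signed seconds counter (the Y2K38 limit).
y2k38 = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
print(y2k38.isoformat())   # 2038-01-19T03:14:07+00:00

# Largest value of a 64-bit signed nanosecond counter, truncated to
# whole seconds before conversion.
ns_limit = datetime.fromtimestamp((2**63 - 1) // 10**9, tz=timezone.utc)
print(ns_limit.year)       # 2262
```

The 292-billion-year figure for 64-bit seconds follows from the same arithmetic (2^63 − 1 seconds ≈ 2.92 × 10^11 years) but is beyond the range of Python's datetime type, which stops at year 9999.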
Which programming languages use millisecond timestamps by default?
JavaScript (Date.now()), Java (System.currentTimeMillis()), and Dart use milliseconds by default. Python, Ruby, Go, and C use seconds by default. This inconsistency is a major source of bugs when converting between languages — always check whether your API returns seconds or milliseconds.
How much storage does each timestamp precision require?
Second-precision timestamps need 4 bytes (32-bit) or 8 bytes (64-bit). Millisecond, microsecond, and nanosecond precision all require 8 bytes as 64-bit integers. PostgreSQL's TIMESTAMP stores microsecond precision in 8 bytes. MySQL's DATETIME uses 5 bytes at second precision, or 8 bytes with DATETIME(6) for microseconds.
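The integer sizes quoted above can be verified with Python's struct module, where "i" is a C 32-bit int and "q" a 64-bit long long:

```python
import struct

# A 32-bit signed int: 4 bytes, enough for second precision until 2038.
assert struct.calcsize("i") == 4

# A 64-bit signed int: 8 bytes, covers ms/µs/ns precision.
assert struct.calcsize("q") == 8

# A current millisecond timestamp no longer fits in 32 bits at all:
try:
    struct.pack("i", 1744329600000)
except struct.error as exc:
    print("overflow:", exc)
```

This is also why the 32-bit overflow column reads "N/A" for millisecond and finer precisions: present-day values already exceed the 32-bit range, so such timestamps are stored as 64-bit integers from the start.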
What are common timestamp precision bugs?
Most common bugs: 1) Treating a millisecond timestamp as seconds, resulting in dates in the year 50,000+, 2) Truncating microseconds when storing in a database with lower precision, 3) Floating-point precision loss in JavaScript beyond 2^53, 4) Comparing timestamps with different precisions without normalization, 5) ActiveRecord truncating timestamp precision below database support.
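Bugs 1 and 4 share a fix: normalize both values to a single precision before comparing. A sketch normalizing to milliseconds — the helper name and the 1e11 threshold are illustrative (1e11 is ~year 5138 interpreted as seconds but only 1973 interpreted as milliseconds, so it cleanly separates the two):

```python
def to_millis(ts: float) -> int:
    """Normalize a seconds- or milliseconds-precision timestamp to ms.

    Heuristic: values below 1e11 are assumed to be seconds.
    """
    return int(ts * 1000) if ts < 1e11 else int(ts)

py_ts = 1744329600.5      # Python time.time(): float seconds
js_ts = 1744329600500     # JavaScript Date.now(): integer milliseconds

# Compared naively these differ by a factor of ~1000;
# normalized, they are the same instant.
assert to_millis(py_ts) == to_millis(js_ts)
```

For nanosecond sources (Go's UnixNano, Python's time_ns) the same idea extends with additional thresholds, or with an explicit precision tag carried alongside the value, which avoids the heuristic entirely.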
Free to use under CC BY 4.0 license. Cite this page when sharing.