Original Research

Timestamp Precision Guide

Seconds vs milliseconds vs microseconds vs nanoseconds. Storage requirements, language support, database behavior, and conversion gotchas.

By Michael Lip · Published April 11, 2026 · zovo.one


One of the most common timestamp bugs is mixing up seconds and milliseconds. JavaScript returns milliseconds, Python returns seconds, and Go can return nanoseconds. This guide covers every precision level, showing storage requirements, overflow dates, and exactly which APIs and databases use which precision.
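Python's standard library alone exposes several of these precisions, which makes the mismatch easy to demonstrate. A minimal sketch:

```python
import time

t_s = time.time()         # float seconds, e.g. 1744329600.123456
t_ns = time.time_ns()     # integer nanoseconds
t_ms = t_ns // 1_000_000  # derived milliseconds (what Date.now() returns in JavaScript)

# Digit counts reveal the precision for present-day timestamps:
print(len(str(int(t_s))), len(str(t_ms)), len(str(t_ns)))  # 10 13 19
```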

Precision Level Comparison

| Precision | Unit | Digits (2026) | Example Value | Resolution | Storage (int) | 32-bit Overflow | 64-bit Overflow |
|---|---|---|---|---|---|---|---|
| Seconds | 1 s | 10 | 1744329600 | 1 second | 4 or 8 bytes | 2038-01-19 | 292B years |
| Milliseconds | 1 ms | 13 | 1744329600000 | 0.001 seconds | 8 bytes | N/A (too large) | 292M years |
| Microseconds | 1 µs | 16 | 1744329600000000 | 0.000001 seconds | 8 bytes | N/A | 292K years |
| Nanoseconds | 1 ns | 19 | 1744329600000000000 | 0.000000001 seconds | 8 bytes | N/A | Year 2262 |
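The digit counts above suggest a quick heuristic for classifying a timestamp of unknown precision. A sketch in Python (the function name is ours, and the thresholds assume present-day values, so timestamps before ~2001 or in the far future will be misclassified):

```python
def detect_precision(ts: int) -> str:
    """Guess a Unix timestamp's precision from its digit count.

    Heuristic only: assumes a present-day epoch value (10-digit seconds).
    """
    digits = len(str(abs(int(ts))))
    if digits <= 10:
        return "seconds"
    if digits <= 13:
        return "milliseconds"
    if digits <= 16:
        return "microseconds"
    return "nanoseconds"

print(detect_precision(1744329600))           # seconds
print(detect_precision(1744329600000))        # milliseconds
print(detect_precision(1744329600000000000))  # nanoseconds
```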

Language & Database Precision Defaults

| Language / DB | Default Precision | API Call | Higher Precision Available | Gotcha |
|---|---|---|---|---|
| JavaScript | Milliseconds | Date.now() | performance.now() (sub-millisecond float) | Number.MAX_SAFE_INTEGER limits to year ~285,616 |
| Python | Seconds (float) | time.time() | time.time_ns() (nanoseconds) | float64 resolution at current epoch values is ~0.5 µs; sub-microsecond precision is lost |
| Java | Milliseconds | System.currentTimeMillis() | Instant.now() (nanoseconds) | Date class only supports milliseconds |
| Go | Seconds | time.Now().Unix() | time.Now().UnixNano() (nanoseconds) | UnixNano overflows in year 2262 |
| Rust | Seconds + nanos | SystemTime::now() | Duration has a nanos field | Duration stores (secs, nanos) as separate fields |
| C/C++ | Seconds | time(NULL) | clock_gettime(CLOCK_REALTIME) (ns) | 32-bit time_t overflows in 2038 |
| Ruby | Seconds | Time.now.to_i | Time.now.to_f (microseconds via float) | to_f loses precision beyond microseconds |
| PHP | Seconds | time() | microtime(true) (microseconds) | microtime returns a float; precision loss possible |
| PostgreSQL | Microseconds | TIMESTAMP | 8 bytes, microsecond resolution | Microsecond precision, not nanosecond |
| MySQL | Seconds | DATETIME | DATETIME(6) (microseconds) | Must specify (6) for sub-second precision |
| SQLite | Text / Real / Int | strftime('%s','now') | No native timestamp type | Stores as text, real, or integer, your choice |
| MongoDB | Milliseconds | new Date() | ISODate has ms precision only | Cannot store microsecond or nanosecond timestamps |
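Because stores differ (PostgreSQL keeps microseconds, MongoDB only milliseconds), persisting a nanosecond timestamp usually means truncating it first. A minimal sketch, with a hypothetical helper name:

```python
def truncate_ns(ts_ns: int, target: str) -> int:
    """Truncate a nanosecond Unix timestamp to a coarser precision.

    Integer floor division deliberately discards the extra digits rather
    than rounding, matching what most databases do on insert.
    """
    divisors = {"s": 1_000_000_000, "ms": 1_000_000, "us": 1_000}
    return ts_ns // divisors[target]

ns = 1744329600123456789
print(truncate_ns(ns, "us"))  # 1744329600123456  (fits PostgreSQL TIMESTAMP)
print(truncate_ns(ns, "ms"))  # 1744329600123     (fits MongoDB ISODate)
print(truncate_ns(ns, "s"))   # 1744329600
```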



Frequently Asked Questions

What is the difference between Unix timestamp in seconds vs milliseconds?

A Unix timestamp in seconds counts seconds since January 1, 1970 UTC (currently ~1.78 billion). A millisecond timestamp is 1000x larger (~1.78 trillion). To identify which you have: 10 digits means seconds, 13 digits means milliseconds, 16 digits means microseconds, 19 digits means nanoseconds. JavaScript's Date.now() returns milliseconds, while Python's time.time() returns seconds with decimal precision.
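Crossing that boundary is a factor-of-1,000 conversion. A sketch using an example instant (2025-04-11 UTC):

```python
from datetime import datetime, timezone

py_seconds = 1744329600.5           # Python time.time() style (float seconds)
js_millis = int(py_seconds * 1000)  # JavaScript Date.now() style (int millis)
print(js_millis)                    # 1744329600500

# Going the other way: JS millis back to an aware datetime
dt = datetime.fromtimestamp(js_millis / 1000, tz=timezone.utc)
print(dt.isoformat())               # 2025-04-11T00:00:00.500000+00:00
```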

When will the Unix timestamp overflow?

The 32-bit signed Unix timestamp overflows on January 19, 2038 at 03:14:07 UTC (the "Y2K38 problem"). A 64-bit signed integer storing seconds won't overflow for 292 billion years. For nanosecond precision in 64-bit, overflow occurs in the year 2262. Most modern systems use 64-bit timestamps, but embedded systems and legacy code may still use 32-bit.
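The exact rollover moment is easy to verify from the 32-bit integer limits:

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # largest 32-bit signed value: 2147483647
print(datetime.fromtimestamp(INT32_MAX, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# One second later a signed 32-bit counter wraps to -2**31:
print(datetime.fromtimestamp(-2**31, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```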

Which programming languages use millisecond timestamps by default?

JavaScript (Date.now()), Java (System.currentTimeMillis()), and Dart use milliseconds by default. Python, Ruby, Go, and C use seconds by default. This inconsistency is a major source of bugs when converting between languages — always check whether your API returns seconds or milliseconds.

How much storage does each timestamp precision require?

Second-precision timestamps need 4 bytes (32-bit) or 8 bytes (64-bit). Millisecond, microsecond, and nanosecond precision all require 8 bytes as 64-bit integers. PostgreSQL's TIMESTAMP stores microsecond precision in 8 bytes. MySQL's DATETIME uses 5 bytes at second precision, or 8 bytes with DATETIME(6) for microseconds.
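The byte counts can be checked directly with Python's struct module (a sketch; the little-endian format is an arbitrary choice):

```python
import struct

ts = 1744329600  # example second-precision timestamp
packed32 = struct.pack("<i", ts)              # 32-bit signed: works until 2038
packed64 = struct.pack("<q", ts * 1_000_000)  # 64-bit: room for microseconds
print(len(packed32), len(packed64))           # 4 8

# A millisecond value no longer fits in 32 bits:
# struct.pack("<i", ts * 1000) raises struct.error
```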

What are common timestamp precision bugs?

Most common bugs: 1) Treating a millisecond timestamp as seconds, resulting in dates in the year 50,000+, 2) Truncating microseconds when storing in a database with lower precision, 3) Floating-point precision loss in JavaScript beyond 2^53, 4) Comparing timestamps with different precisions without normalization, 5) ActiveRecord truncating timestamp precision below database support.
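Bug 4 is avoidable by normalizing before any comparison. A sketch with a hypothetical helper, using the digit-count heuristic (an assumption that only holds for present-day values):

```python
def to_millis(ts: int) -> int:
    """Normalize a timestamp of unknown precision to milliseconds.

    Digit-count heuristic: 10 digits = s, 13 = ms, 16 = us, 19 = ns.
    """
    digits = len(str(abs(ts)))
    if digits <= 10:
        return ts * 1000
    if digits <= 13:
        return ts
    if digits <= 16:
        return ts // 1000
    return ts // 1_000_000

# A JS millisecond value and a Go nanosecond value for the same instant
# compare equal only after normalization:
assert to_millis(1744329600123) == to_millis(1744329600123000000)
```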

Free to use under CC BY 4.0 license. Cite this page when sharing.