What is epoch time?
Epoch time (also called Unix time or POSIX time) is the number of time units elapsed since 1970-01-01T00:00:00Z, not counting leap seconds. The default unit is seconds, but modern systems express the same instant in milliseconds (JavaScript), microseconds (PostgreSQL TIMESTAMP), or nanoseconds (Go time.UnixNano, Linux clock_gettime). The instant is the same; only the unit changes.
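A minimal sketch of the idea, using the milestone instant 1,700,000,000 seconds (the variable names are ours):

```ts
// The instant 2023-11-14T22:13:20Z expressed in each common unit.
// All four values name the same moment; only the unit changes.
const seconds = 1_700_000_000;              // 10 digits
const millis  = 1_700_000_000_000;          // 13 digits
const micros  = 1_700_000_000_000_000n;     // 16 digits
const nanos   = 1_700_000_000_000_000_000n; // 19 digits (exceeds 2^53, so BigInt)

console.log(new Date(millis).toISOString()); // 2023-11-14T22:13:20.000Z
```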
Why does my epoch have 13 / 16 / 19 digits?
A 10-digit value is seconds, 13 digits is milliseconds, 16 is microseconds, and 19 is nanoseconds (true for any timestamp between September 2001 and the year 2286). The converter auto-detects the precision from the digit count and shows the equivalent in every other unit. All four digit counts next roll over at the same instant, 2286-11-20, so the heuristic is reliable for well over two centuries.
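A sketch of that heuristic (the function name is ours, not part of any library):

```ts
// Guess the unit of an epoch value from its digit count. Valid for
// timestamps between September 2001 and November 2286.
function detectUnit(epoch: string): "s" | "ms" | "us" | "ns" | "unknown" {
  const digits = epoch.replace(/^-/, "").length;
  if (digits <= 10) return "s";
  if (digits <= 13) return "ms";
  if (digits <= 16) return "us";
  if (digits <= 19) return "ns";
  return "unknown";
}

console.log(detectUnit("1700000000"));          // "s"
console.log(detectUnit("1700000000000000000")); // "ns"
```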
What is the difference between microseconds (μs) and nanoseconds (ns)?
A microsecond (μs) is one millionth of a second; a nanosecond (ns) is one billionth — 1 μs = 1,000 ns. Microseconds are the standard precision for SQL TIMESTAMP columns (PostgreSQL, MySQL DATETIME(6)). Nanoseconds are used by high-resolution OS clocks, Go time, and tracing systems like OpenTelemetry.
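Since 16- and 19-digit values can exceed JavaScript's 2^53 - 1 safe-integer limit, converting between the two units is safest with BigInt; a small sketch with an invented timestamp:

```ts
// 1 microsecond = 1,000 nanoseconds, so the μs -> ns conversion is
// exact, and dividing back is exact whenever the remainder is zero.
const micros = 1_700_000_000_123_456n;  // 16-digit μs timestamp
const nanos  = micros * 1_000n;         // append three zeros

console.log(nanos);                     // 1700000000123456000n
console.log(nanos / 1_000n === micros); // true
```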
When do I need nanosecond precision?
Three common cases: distributed-systems tracing (Jaeger, Zipkin, OpenTelemetry use nanosecond start/end times to compute span durations); Go programs that call time.Now().UnixNano(); and Linux/eBPF tooling reading CLOCK_MONOTONIC values. For most application logging, milliseconds are sufficient and easier to read.
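For the tracing case, a span duration is just a subtraction of two nanosecond instants; a hedged sketch with invented values:

```ts
// Computing a span duration from nanosecond start/end instants,
// as a tracing backend would. BigInt keeps 19-digit values exact.
const startNs = 1_700_000_000_000_000_000n;
const endNs   = 1_700_000_000_004_250_500n;

const durationNs = endNs - startNs;                // 4250500n
const durationMs = Number(durationNs) / 1_000_000; // 4.2505 (safe: small value)

console.log(`${durationNs} ns = ${durationMs} ms`);
```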
Can I lose precision converting nanoseconds to milliseconds?
Yes — going from ns to ms is integer division by 1,000,000, which discards the sub-millisecond remainder. This converter performs the math with BigInt so the conversion itself does not overflow JavaScript number range, but the truncation is permanent. To preserve the original instant, store the highest-precision value you have and convert on the way out.
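A sketch of that truncation (BigInt division discards the remainder, mirroring what the converter does):

```ts
// BigInt division truncates toward zero: the 456,789 ns below the
// millisecond are discarded and cannot be recovered from `ms`.
const ns = 1_700_000_000_123_456_789n;
const ms = ns / 1_000_000n;           // 1700000000123n
const roundTripped = ms * 1_000_000n; // 1700000000123000000n

console.log(ns - roundTripped);       // 456789n, the lost remainder
```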
What is the maximum 32-bit Unix timestamp (Y2038)?
A signed 32-bit integer rolls over at 2,147,483,647 seconds, i.e. 2038-01-19T03:14:07Z. After that, any system still using a 32-bit time_t (older C code, embedded firmware, some database columns) wraps to a negative number representing December 1901. 64-bit systems push the limit far out: int64 seconds last about 292 billion years, milliseconds about 292 million years, and microseconds about 292,000 years. The one nearby 64-bit limit is int64 nanoseconds, which overflows in the year 2262.
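Both limits are easy to verify with BigInt arithmetic; a small check:

```ts
// Overflow instants for 32-bit seconds and 64-bit nanoseconds.
const INT32_MAX = 2n ** 31n - 1n; // 2147483647
const INT64_MAX = 2n ** 63n - 1n; // 9223372036854775807

// 32-bit seconds: the Y2038 rollover.
console.log(new Date(Number(INT32_MAX) * 1000).toISOString());
// 2038-01-19T03:14:07.000Z

// 64-bit nanoseconds: truncate to ms so Date can render it.
console.log(new Date(Number(INT64_MAX / 1_000_000n)).toISOString());
// 2262-04-11T23:47:16.854Z
```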
Why does PostgreSQL use microseconds?
PostgreSQL stores TIMESTAMP / TIMESTAMPTZ as a 64-bit integer count of microseconds since 2000-01-01 (configurable to floating-point at compile time, but rare). Microsecond resolution gives six fractional digits on the seconds value, plenty for application logging, without the range limit a nanosecond count would hit: an int64 of nanoseconds spans only about ±292 years. EXTRACT(epoch FROM ts) returns seconds with fractional precision; multiply by 1,000,000 to get the underlying integer.
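If you read the raw microsecond integer out of PostgreSQL instead, remember its epoch is 2000-01-01, not 1970; a hedged sketch of the offset math (the function name is ours):

```ts
// PostgreSQL counts microseconds from 2000-01-01T00:00:00Z. The fixed
// offset between the two epochs is 946,684,800 seconds.
const PG_EPOCH_OFFSET_US = 946_684_800_000_000n;

function pgMicrosToUnixMicros(pgMicros: bigint): bigint {
  return pgMicros + PG_EPOCH_OFFSET_US;
}

console.log(pgMicrosToUnixMicros(700_000_000_000_000n));
// 1646684800000000n, a 16-digit Unix microsecond timestamp
```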
How do I generate epoch time in different languages?
JavaScript: Date.now() (ms).
Python: time.time() (float seconds), time.time_ns() (int ns).
Go: time.Now().Unix() (s), .UnixMilli(), .UnixMicro(), .UnixNano().
Ruby: Time.now.to_i / .to_f / .strftime("%s%N").
Java: System.currentTimeMillis() / Instant.now().toEpochMilli().
Rust: SystemTime::now().duration_since(UNIX_EPOCH).
PostgreSQL: EXTRACT(epoch FROM now()).
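JavaScript has no built-in nanosecond wall clock, so the finer units are usually derived from the millisecond reading; a sketch (the sub-millisecond digits are necessarily zeros):

```ts
// All four precisions from one reading of JavaScript's ms-resolution
// wall clock; the μs and ns values are zero-padded, not measured.
const nowMs = Date.now();

console.log("s :", Math.floor(nowMs / 1_000));
console.log("ms:", nowMs);
console.log("us:", BigInt(nowMs) * 1_000n);
console.log("ns:", BigInt(nowMs) * 1_000_000n);
```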