Free Unix Timestamp Converter — Epoch ↔ Human Date
This is a free online timestamp converter that translates between Unix timestamps and human-readable dates in both directions, without ever sending your input to a server. Paste an epoch integer and get it back as ISO 8601, UTC, your local timezone, and a relative “3 minutes ago” phrase. Pick a date in the calendar and get Unix seconds, Unix milliseconds, and an ISO 8601 string — ready to paste into a cookie, a JWT, a database query, or a log filter.
A live ticker at the top shows the current Unix time so you can grab “now” with one click. The tool auto-detects whether you pasted seconds (10 digits today) or milliseconds (13 digits today) by digit count, and you can override that with the Seconds / Milliseconds toggle if the value is ambiguous. On the reverse side, flip the Local / UTC toggle to control how the date picker is interpreted — the same clock-time resolves to two different Unix timestamps depending on the timezone, and getting this wrong is the single most common source of “off by one hour” bugs.
Unix timestamp, in one paragraph
A Unix timestamp is the number of seconds that have passed since 00:00:00 UTC on 1 January 1970 — a moment known as the Unix epoch. It’s a single integer that pins down any point in time, independent of timezone or calendar. Because it’s just an integer, it’s trivial to store in a database column, subtract to compute durations, compare for sorting, and transmit over the wire. Almost every operating system, filesystem, API, database, logging pipeline, and authentication token uses Unix timestamps as the canonical “when did this happen” value.
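Because the epoch value is just an integer, it maps directly onto JavaScript's built-in Date APIs; a minimal sketch (variable names are illustrative):

```javascript
// The Unix epoch is simply timestamp 0:
console.log(new Date(0).toISOString());   // "1970-01-01T00:00:00.000Z"

// "Now" as an integer, in the two common precisions:
const nowMs = Date.now();                 // milliseconds since the epoch
const nowSec = Math.floor(nowMs / 1000);  // classic Unix seconds
```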
Seconds or milliseconds?
Both formats point to the same instant — they just differ in precision. Pick the one your target system expects:
- Unix seconds (10 digits in the current era): the classic Unix convention. Standard in the HTTP `Set-Cookie` `Expires` attribute, JWT `exp`/`iat`/`nbf` claims, OAuth tokens, `crontab`, most Linux tooling, and PostgreSQL’s `to_timestamp()`.
- Unix milliseconds (13 digits in the current era): the JavaScript and Java default, because `Date.now()` and `System.currentTimeMillis()` both return milliseconds. You’ll also see it in Kafka timestamps, Elasticsearch’s `@timestamp`, and most log aggregators.
If you’re unsure which one a value is, count the digits: 10 → seconds, 13 → milliseconds, 16 → microseconds, 19 → nanoseconds. This tool’s Auto unit toggle does exactly that by default.
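The digit-count heuristic fits in a few lines of JavaScript; `detectUnit` is a hypothetical name for illustration, not the tool’s actual code:

```javascript
// Guess the unit of an epoch value by counting its digits.
function detectUnit(value) {
  const digits = String(value).replace(/^-/, "").length; // ignore a leading sign
  if (digits <= 10) return "seconds";
  if (digits <= 13) return "milliseconds";
  if (digits <= 16) return "microseconds";
  return "nanoseconds";
}

detectUnit(1713799800);       // "seconds" (10 digits)
detectUnit(1713799800123);    // "milliseconds" (13 digits)
```

The heuristic only holds for values in the current era, which is why an explicit Seconds / Milliseconds override still matters for small or ambiguous inputs.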
How to use the timestamp converter
Converting a Unix timestamp to a human date
- Paste the integer into the Unix timestamp → Human date field (or click Use now to load the current epoch).
- Leave the unit on Auto unless you want to force seconds or milliseconds.
- Copy any of the four output rows — ISO 8601 (UTC), UTC, Local, or Relative — with one click.
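The four output rows correspond closely to standard JavaScript APIs; a sketch of how each row could be produced, assuming the input is in seconds:

```javascript
const ts = 1713799800;            // example input: epoch seconds
const d = new Date(ts * 1000);    // the Date constructor wants milliseconds

console.log(d.toISOString());     // "2024-04-22T15:30:00.000Z" (ISO 8601, UTC)
console.log(d.toUTCString());     // UTC row
console.log(d.toLocaleString());  // Local row, in the runtime's timezone

// Relative row via Intl.RelativeTimeFormat:
const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
const diffMin = Math.round((d.getTime() - Date.now()) / 60000);
console.log(rtf.format(diffMin, "minute")); // e.g. "3 minutes ago"
```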
Converting a date to a Unix timestamp
- Pick a date and time in the Human date → Unix timestamp picker (or click Use now).
- Choose how to interpret the clock-time: Local (your browser’s timezone) or UTC.
- Copy Unix seconds, Unix milliseconds, or the ISO 8601 string — whichever your target system expects.
Timezones and daylight saving time
Unix timestamps are always in UTC — an epoch value has no timezone baked into it. Confusion enters the picture when you convert in either direction:
- When converting Unix → human, the Local row uses your browser’s configured timezone (detected via `Intl.DateTimeFormat().resolvedOptions().timeZone`, shown in parentheses on the label) and correctly applies whatever DST offset was in effect at that specific moment — not the current one.
- When converting human → Unix, the Local / UTC toggle controls how the clock-time you picked is interpreted. “2026-03-14 02:30” names a different Unix timestamp depending on whether you mean that wall-clock time in New York, in London, or in UTC — and on DST transition days it can even name a time that doesn’t exist or exists twice.
When you’re filing a bug report, writing a migration, or pasting a value into someone else’s system, prefer ISO 8601 with an explicit offset (for example 2026-04-22T15:30:00Z) — it’s the only format that’s completely unambiguous across teams and timezones.
The Year 2038 problem
Older systems store Unix timestamps as signed 32-bit integers, which overflow at 03:14:07 UTC on 19 January 2038 — the largest value a signed 32-bit integer can hold is 2,147,483,647 seconds. One second later, the counter wraps to a large negative number and the date jumps back to December 1901. Modern 64-bit operating systems, mainstream programming languages, and every recent database use wider types (a 64-bit time_t on Linux, BIGINT in SQL) and won’t overflow for roughly another 292 billion years. Legacy embedded hardware, old C code, and some file formats still need migration — if you’re auditing a system, test with timestamps above 2,147,483,647 to flush out the bugs.
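The wraparound is easy to reproduce in JavaScript, because the bitwise operators truncate to signed 32-bit integers:

```javascript
const max32 = 2147483647;                 // 2^31 - 1, the last safe second
console.log(new Date(max32 * 1000).toISOString());
// "2038-01-19T03:14:07.000Z"

// "| 0" forces the result into a signed 32-bit integer, so adding
// one more second wraps to the most negative value:
const wrapped = (max32 + 1) | 0;          // -2147483648
console.log(new Date(wrapped * 1000).toISOString());
// "1901-12-13T20:45:52.000Z"
```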
Common use cases
- Reading logs — convert a raw epoch in a server log or a `journalctl` entry into a human date.
- Writing database queries — paste a human date and grab the Unix integer for a `WHERE created_at > …` filter.
- Debugging JWTs — check whether a token’s `exp` or `iat` claim is in the future or the past.
- Setting cookies — compute the exact seconds-since-epoch for a `Set-Cookie` expiry.
- Comparing events across timezones — convert two local times to UTC seconds and subtract for the real gap.
- Seeding test data — paste arbitrary epoch values into fixtures to simulate historical events.
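For the JWT case, the check is a single comparison once you remember that `exp` is in Unix seconds while JavaScript clocks are in milliseconds; `isExpired` is a hypothetical helper, not part of any JWT library:

```javascript
// exp claims are Unix seconds; Date.now() is milliseconds, so scale first.
function isExpired(expSeconds, nowMs = Date.now()) {
  return expSeconds * 1000 < nowMs;
}

// A 2024 exp checked against a 2026 clock is long expired:
isExpired(1713799800, Date.parse("2026-04-22T15:30:00Z"));   // true
```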
Privacy and offline use
All conversion happens locally in your browser with plain JavaScript Date objects — no value is ever sent to a server, logged, or cached. Once the page has loaded, you can disconnect from the internet and the tool keeps working. That matters when you’re converting timestamps from a confidential log file or a production database and don’t want the values leaving your machine.