What Is a Unix Timestamp?
A Unix timestamp (also called Unix time, POSIX time, or Epoch time) is a system for describing a point in time as a single integer: the number of seconds that have elapsed since 00:00:00 UTC on Thursday, January 1, 1970. That specific moment is known as the Unix epoch.
For example, the timestamp 1741824000 represents March 13, 2025 at 00:00:00 UTC. The timestamp increases by exactly 1 with every second that passes. This simplicity — a single integer representing any moment in time — is what makes Unix timestamps so powerful and so ubiquitous in software engineering.
Unix timestamps are used everywhere: database records store them for created_at and updated_at fields, JWT authentication tokens have an exp (expiration) field expressed as a Unix timestamp, log files use them for precise event ordering, and distributed systems rely on them for coordinating time-sensitive operations across servers in different geographic regions.
Why Was 1970 Chosen as the Epoch?
The choice of January 1, 1970 was essentially arbitrary — a pragmatic decision made by the early Unix developers at Bell Labs in the late 1960s and early 1970s. The Unix operating system was being developed during this period, and a reference point had to be chosen.
Earlier versions of Unix actually used January 1, 1971 as the epoch, with time counted in 1/60th-second intervals. This was later revised to the 1970 epoch with second-level precision, which offered a better balance between range and resolution on the 32-bit hardware of the era.
The year 2000 (Y2K) concerns inspired retrospective interest in this choice, but by then Unix time was so deeply embedded in operating systems, protocols, and software that any change was unthinkable. The Unix epoch of January 1, 1970 is now one of the most consequential arbitrary decisions in the history of computing.
What the Timestamp Represents
A Unix timestamp counts elapsed seconds since the epoch, not counting leap seconds. This is a subtle but important point. UTC periodically inserts leap seconds to stay synchronized with Earth's rotation. Unix time ignores these — it assumes every day has exactly 86,400 seconds (60 seconds × 60 minutes × 24 hours).
This means Unix time is not a true count of elapsed seconds. Since 1972, 27 leap seconds have been inserted into UTC, so a clock that counted every elapsed SI second would now read 27 seconds ahead of Unix time. Around each leap second, the Unix clock effectively repeats (or smears) a second to stay aligned with UTC clock readings. For most applications this is irrelevant, but for high-precision scientific or financial systems, it matters.
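Because every day is treated as exactly 86,400 seconds, whole-day arithmetic on timestamps reduces to plain integer division. A minimal Python sketch:

```python
from datetime import datetime, timezone

# Unix time treats every day as exactly 86_400 seconds, so splitting a
# timestamp into whole days and seconds-into-day is integer arithmetic.
ts = 1741824000
days_since_epoch = ts // 86400   # whole days since 1970-01-01
seconds_into_day = ts % 86400    # 0 means midnight UTC

print(days_since_epoch)  # 20160
print(seconds_into_day)  # 0

# Library conversions rest on the same assumption:
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())    # 2025-03-13T00:00:00+00:00
```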
A standard Unix timestamp is always expressed in whole seconds. However, many modern systems use millisecond timestamps — particularly JavaScript, which uses milliseconds by default. A millisecond timestamp is simply the Unix timestamp multiplied by 1000. For example, 1741824000000 is the millisecond-precision version of the timestamp above.
Some systems go further, using microsecond (×1,000,000) or nanosecond (×1,000,000,000) timestamps. The Linux kernel's clock_gettime() function can provide nanosecond resolution, and databases like PostgreSQL support microsecond precision.
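Converting between these precisions is just scaling by powers of 1,000. A small Python sketch (the helper names here are illustrative, not a standard API):

```python
import time

def seconds_to_millis(ts: int) -> int:
    """Seconds-precision Unix timestamp -> milliseconds."""
    return ts * 1000

def millis_to_seconds(ms: int) -> int:
    """Milliseconds -> whole seconds (sub-second precision is truncated)."""
    return ms // 1000

print(seconds_to_millis(1741824000))     # 1741824000000
print(millis_to_seconds(1741824000123))  # 1741824000

# Python exposes several resolutions directly:
print(time.time())     # float seconds, with sub-second precision
print(time.time_ns())  # integer nanoseconds
```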
Converting Unix Timestamps in Every Major Language
JavaScript / TypeScript
JavaScript's Date object uses milliseconds, not seconds:
// Get current Unix timestamp in milliseconds
const msTimestamp = Date.now(); // e.g. 1741824000000
// Get current Unix timestamp in seconds
const secTimestamp = Math.floor(Date.now() / 1000); // e.g. 1741824000
// Convert a Unix timestamp (seconds) to a Date object
const date = new Date(1741824000 * 1000);
console.log(date.toISOString()); // "2025-03-13T00:00:00.000Z"
// Convert a Date to a Unix timestamp (seconds)
const ts = Math.floor(new Date('2025-03-13').getTime() / 1000);
Python
import time
from datetime import datetime, timezone
# Get current Unix timestamp (float, includes sub-second precision)
ts = time.time() # e.g. 1741824000.123456
# Get integer Unix timestamp
ts_int = int(time.time())
# Convert timestamp to datetime (UTC)
dt = datetime.fromtimestamp(1741824000, tz=timezone.utc)
print(dt) # 2025-03-13 00:00:00+00:00
# Convert datetime to Unix timestamp
dt = datetime(2025, 3, 13, 0, 0, 0, tzinfo=timezone.utc)
ts = int(dt.timestamp()) # 1741824000
PHP
<?php
// Get current Unix timestamp
$ts = time(); // integer seconds
// Convert timestamp to formatted date (UTC)
echo date('Y-m-d H:i:s', 1741824000); // 2025-03-13 00:00:00
// Convert date string to Unix timestamp
$ts = strtotime('2025-03-13 00:00:00 UTC'); // 1741824000
// Using DateTime objects (recommended)
$dt = new DateTime('@1741824000');
echo $dt->format('Y-m-d H:i:s'); // 2025-03-13 00:00:00
?>
Java
import java.time.Instant;
// Get current Unix timestamp in seconds
long epochSeconds = Instant.now().getEpochSecond();
// Get current Unix timestamp in milliseconds
long epochMillis = Instant.now().toEpochMilli();
// Convert Unix timestamp to Instant
Instant instant = Instant.ofEpochSecond(1741824000L);
System.out.println(instant); // 2025-03-13T00:00:00Z
// Convert Instant back to Unix timestamp
long ts = instant.getEpochSecond(); // 1741824000
SQL (PostgreSQL)
-- Convert Unix timestamp to timestamp with time zone
SELECT TO_TIMESTAMP(1741824000);
-- Result: 2025-03-13 00:00:00+00
-- Convert timestamp to Unix timestamp
SELECT EXTRACT(EPOCH FROM TIMESTAMP WITH TIME ZONE '2025-03-13 00:00:00 UTC')::BIGINT;
-- Result: 1741824000
-- Get current Unix timestamp
SELECT EXTRACT(EPOCH FROM NOW())::BIGINT;
SQL (MySQL)
-- Convert Unix timestamp to datetime (returned in the session time zone)
SELECT FROM_UNIXTIME(1741824000);
-- Result: 2025-03-13 00:00:00 (with the session time zone set to UTC)
-- Convert datetime to Unix timestamp (interpreted in the session time zone)
SELECT UNIX_TIMESTAMP('2025-03-13 00:00:00');
-- Result: 1741824000 (with the session time zone set to UTC)
-- Current timestamp
SELECT UNIX_TIMESTAMP();
The Year 2038 Problem
Many older systems and embedded devices store Unix timestamps as a signed 32-bit integer. A signed 32-bit integer can hold values from −2,147,483,648 to 2,147,483,647. The maximum value, 2,147,483,647, corresponds to January 19, 2038 at 03:14:07 UTC.
At that exact moment, 32-bit systems will overflow. The timestamp will flip from the maximum positive value to the minimum negative value (−2,147,483,648), which corresponds to December 13, 1901. Systems that have not been updated to use 64-bit timestamps could interpret future dates as being in 1901, causing catastrophic failures in date calculations, certificate expiry checks, scheduled events, and database queries.
This is the Year 2038 Problem (sometimes called Y2K38 or Y2038). Unlike Y2K, which was largely remediated before the deadline arrived, much of the remediation for 2038 still lies ahead. Most modern 64-bit operating systems and databases already use 64-bit timestamps (which won't overflow for approximately 292 billion years), but billions of embedded devices, legacy industrial controllers, and 32-bit systems remain at risk.
If you are writing any software today that will still be running in 2038, always use 64-bit integers for Unix timestamps. In practice, this means using int64_t in C/C++, long in Java, bigint in PostgreSQL, and avoiding 32-bit timestamp columns in any database schema.
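The rollover itself is easy to demonstrate. This Python sketch reinterprets the bit pattern of a 32-bit counter to simulate the overflow:

```python
import struct
from datetime import datetime, timezone

# The largest value a signed 32-bit integer can hold, and the moment it maps to:
max32 = 2**31 - 1  # 2_147_483_647
rollover = datetime.fromtimestamp(max32, tz=timezone.utc)
print(rollover)    # 2038-01-19 03:14:07+00:00

# One second later, a 32-bit counter wraps. Packing the incremented value as
# unsigned and unpacking it as signed reproduces the wraparound:
wrapped = struct.unpack('<i', struct.pack('<I', (max32 + 1) & 0xFFFFFFFF))[0]
print(wrapped)     # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # 1901-12-13 20:45:52+00:00
```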
Negative Timestamps for Pre-1970 Dates
Unix timestamps can be negative — a negative value simply represents a moment before the Unix epoch. For example:
- -1 = December 31, 1969 at 23:59:59 UTC
- -86400 = December 31, 1969 at 00:00:00 UTC
- -2208988800 = January 1, 1900 at 00:00:00 UTC
Most modern programming languages and databases handle negative timestamps correctly. However, some older implementations and poorly written code assume timestamps are always non-negative, leading to bugs when processing historical data. If you're working with dates before 1970 — for example, digitizing historical records or building genealogy software — verify that your date libraries handle negative timestamps correctly.
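In Python, for instance, both directions of pre-1970 conversion work with timezone-aware datetimes:

```python
from datetime import datetime, timezone

# A negative timestamp is simply a moment before the epoch.
print(datetime.fromtimestamp(-86400, tz=timezone.utc))
# 1969-12-31 00:00:00+00:00

# Converting a pre-1970 date back to a timestamp:
dt = datetime(1900, 1, 1, tzinfo=timezone.utc)
print(int(dt.timestamp()))  # -2208988800
```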
Use Cases in Software Development
Unix timestamps are the default choice for time storage in software because they are:
- Timezone-agnostic: A Unix timestamp always refers to a specific UTC moment. There is no ambiguity about timezone or DST.
- Sortable: Because they are plain integers, timestamps sort correctly with any standard sort algorithm.
- Comparable: Calculating the difference between two events is a simple subtraction.
- Compact: A 64-bit integer takes 8 bytes to store, versus 20+ bytes for an ISO 8601 string.
- Universal: Every programming language, database, and operating system natively understands Unix timestamps.
Common use cases include: storing created_at/updated_at fields in databases, JWT token expiry (exp claim), rate limiting (allow N requests per second window), cache TTL (expire at timestamp X), session expiry, audit log ordering, and distributed tracing (correlating events across microservices).
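As one concrete example, cache or session expiry reduces to a single integer comparison. A minimal sketch (the function and field names are illustrative, not a standard API):

```python
import time
from typing import Optional

def make_entry(value, ttl_seconds: int) -> dict:
    """Store a value together with the Unix timestamp at which it expires."""
    return {"value": value, "expires_at": int(time.time()) + ttl_seconds}

def is_expired(entry: dict, now: Optional[int] = None) -> bool:
    """Expiry is a plain integer comparison -- no time zone handling needed."""
    if now is None:
        now = int(time.time())
    return now >= entry["expires_at"]

entry = make_entry("cached result", ttl_seconds=60)
print(is_expired(entry))                               # False: just created
print(is_expired(entry, now=entry["expires_at"] + 1))  # True: past expiry
```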
Frequently Asked Questions
What is a Unix timestamp?
A Unix timestamp is a single integer representing a moment in time as the number of seconds that have elapsed since January 1, 1970, 00:00:00 UTC (the Unix epoch). For example, the timestamp 1741824000 represents March 13, 2025 at midnight UTC.
Why does Unix time start from 1970?
January 1, 1970, was chosen as the Unix epoch by early Unix developers at Bell Labs as a pragmatic, arbitrary reference point. The Unix operating system was being developed in the late 1960s and early 1970s, and 1970 provided a convenient, recent starting date for the 32-bit hardware of the era.
What is the Year 2038 problem?
Systems that store Unix timestamps as signed 32-bit integers will overflow on January 19, 2038, at 03:14:07 UTC. The timestamp will flip from the maximum positive value to a negative number corresponding to December 1901, potentially causing catastrophic date calculation failures. Using 64-bit integers eliminates this problem.
How do I get the current Unix timestamp in JavaScript?
Use Math.floor(Date.now() / 1000) for seconds or Date.now() for milliseconds. JavaScript's Date object natively uses milliseconds, so divide by 1000 and floor the result to get the standard seconds-based Unix timestamp.
How do I get the current Unix timestamp in Python?
Use int(time.time()) from the time module for an integer timestamp, or time.time() for a float with sub-second precision. For timezone-aware conversion, use datetime.now(timezone.utc).timestamp().
Can Unix timestamps be negative?
Yes. A negative Unix timestamp represents a moment before January 1, 1970. For example, -86400 represents December 31, 1969, at midnight UTC. Most modern programming languages handle negative timestamps correctly, though some older implementations may not.
What is the difference between Unix timestamp in seconds and milliseconds?
A standard Unix timestamp counts seconds since the epoch. A millisecond timestamp is the same value multiplied by 1000, providing sub-second precision. JavaScript uses milliseconds by default, while most other languages and databases use seconds. Always verify which unit a system expects to avoid off-by-1000x errors.
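When the unit is unknown, the magnitude gives it away: present-day timestamps are about 10 digits in seconds and 13 in milliseconds. A rough normalization heuristic in Python (the threshold is an assumption that only holds for contemporary dates):

```python
def normalize_to_seconds(ts: int) -> int:
    """Heuristically convert a timestamp of unknown unit to seconds.

    Anything above 10**12 is assumed to be milliseconds, since 10**12
    seconds would correspond to a date tens of thousands of years away.
    """
    if ts > 10**12:
        return ts // 1000
    return ts

print(normalize_to_seconds(1741824000000))  # 1741824000 (was milliseconds)
print(normalize_to_seconds(1741824000))     # 1741824000 (already seconds)
```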
Sources
- The Open Group: POSIX.1-2017 — Seconds Since the Epoch
- RFC 3339: Date and Time on the Internet: Timestamps
- The Year 2038 Problem — NIST Cybersecurity Framework reference
- MDN Web Docs: Date.now() and Date object reference