The Short Answer
Use UUIDs when
- IDs are generated client-side or in distributed systems
- You expose IDs in public URLs and want to prevent enumeration
- You need to merge databases without renumbering
- Sharding is on the roadmap
- Offline-first or mobile clients create records pre-sync
Use Auto-Increment when
- Single-master OLTP database, no sharding planned
- IDs are internal, never exposed to users
- Index size and write throughput are critical
- You need human-readable, ordered IDs for support
- Storage cost matters at billion-row scale
Index Performance
Almost every modern relational database keeps the primary key in a B-tree. MySQL/InnoDB and SQL Server go further and cluster the table itself by that key; PostgreSQL stores rows in a heap but still maintains a B-tree index on the primary key. Either way, the B-tree performs best when newly inserted keys land near each other in key space, so that one or two leaf pages stay hot in cache and writes append rather than fragment.
An auto-increment integer is the perfect citizen: every new row goes to the rightmost leaf. Cache hit rate is near 100%, page splits are rare, the index stays compact.
A random UUID v4 is the worst possible citizen. Every insert lands in a random page, evicting hot pages, causing page splits, and bloating the index. Benchmarks consistently show 30–50% lower insert throughput on UUID v4 keys versus bigserial keys at the multi-million-row scale, with index size 2–3× larger on disk.
UUID v7 fixes the worst of this. It encodes a millisecond timestamp in the first 48 bits, so inserts land in (mostly) monotonic order. Postgres benchmarks from 2024 onward show v7 inserts within 5–10% of bigserial throughput while keeping all the distributed-generation benefits.
Privacy & Security
A URL like /orders/47 tells any visitor exactly two things: there are at least 47 orders in your system, and order 46 probably exists. This is the "ID enumeration" problem and it has caused dozens of public breaches — most famously the AT&T iPad email leak in 2010, which scraped 114,000 emails by incrementing an ICCID parameter.
UUIDs in URLs (/orders/3f6a4b1c-...-d4e7) make enumeration computationally infeasible — UUID v4 has 122 bits of entropy. They also leak no count signal. A competitor scraping your invoice numbers can't tell whether you ship 100 or 100,000 orders a day.
Important: UUIDs are not authorization. They are obscurity. Always check that the requesting user owns the record. Treating UUIDs as "capabilities" only works if they were generated as such (cryptographically random, never logged) — and it's usually clearer to use a real auth check.
Distributed Generation
An auto-increment column requires the database to assign the value at insert time. That works fine for a single primary, becomes painful with multiple writers, and breaks down completely for offline-first apps or microservices that need to assign an ID before talking to a database.
UUIDs are generated client-side from entropy alone. No round trip, no coordination. A mobile app can create a draft record, sync it later, and the ID is stable from the moment of creation — meaning logs, analytics, and foreign keys all line up without backfilling.
// Browser, Node 19+, Deno, Bun
const id = crypto.randomUUID();
// "3f6a4b1c-9c2e-4d8a-b9f1-72e3a5b6d4e7"

-- Postgres: generate at insert time
CREATE TABLE orders (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  ...
);
-- Or supply your own from the app
INSERT INTO orders (id, ...) VALUES ($1, ...);

Storage Cost
An auto-increment 32-bit integer is 4 bytes. A bigserial is 8 bytes. A UUID is 16 bytes natively, or 36 bytes if you store it as CHAR(36) (which you should not, but MySQL users sometimes do).
| Type | Bytes | Notes |
|---|---|---|
| int / serial | 4 | Max ~2.1B values |
| bigint / bigserial | 8 | Effectively unlimited |
| Postgres uuid | 16 | Native binary type |
| MySQL BINARY(16) | 16 | Manual UNHEX/HEX |
| MySQL CHAR(36) | 36+ | Avoid — wasteful |
At 1 billion rows, UUID primary keys cost roughly 8 GB more than bigserial. Every secondary index that includes the PK pays the cost again. Not catastrophic on modern hardware, but real.
URL Aesthetics
/orders/47 is undeniably nicer than /orders/3f6a4b1c-9c2e-4d8a-b9f1-72e3a5b6d4e7. But user-facing identifiers don't have to be the database primary key. Many apps maintain both:
- Internal: UUID primary key for distribution and security
- External: short slug or human-readable invoice number (INV-2026-00147) for support
If you want short URL-safe IDs, look at NanoID (21 characters, ~126 bits) or base62-encoded UUIDs. These give you most of the URL benefit without the dashes-and-hex visual noise.
UUID v4 vs v7
UUID v4 is fully random. Pros: maximum entropy, leaks zero information about the generator. Cons: terrible for B-tree indexes, as covered above.
UUID v7 (RFC 9562, finalized in 2024) embeds a millisecond Unix timestamp in the first 48 bits, then 74 bits of randomness. The result is sortable by creation time and friendly to B-tree inserts, while still being unguessable across machines.
// v4: full random
import { v4 as uuidv4 } from 'uuid';
uuidv4();
// "9c8b1f2a-3d4e-4f5a-9b8c-1d2e3f4a5b6c"
// v7: time-ordered
import { v7 as uuidv7 } from 'uuid';
uuidv7();
// "01931a2b-4c5d-7e6f-8a9b-0c1d2e3f4a5b"
// ^^^^^^^^^^^^^^^ first 48 bits = ms timestamp
// Verify ordering:
const ids = Array.from({ length: 5 }, () => uuidv7());
console.log([...ids].sort().join('\n') === ids.join('\n'));
// true (the uuid package keeps v7 monotonic even within one millisecond)

Native v7 support is in PostgreSQL 18 (uuidv7() function), the JavaScript uuid package since v9, Python uuid-utils, Go google/uuid v1.6+, and most other major libraries. If you're starting a new project in 2026, default to v7.
Migration Strategy
Adding UUIDs to an existing sequential-ID table is straightforward but requires planning. The common pattern:
-- 1. Add the new column, nullable, with default
ALTER TABLE orders
ADD COLUMN public_id uuid DEFAULT gen_random_uuid();
-- 2. Backfill existing rows (already filled by DEFAULT,
-- but for migrating from another column do it in batches)
UPDATE orders SET public_id = gen_random_uuid()
WHERE public_id IS NULL;
-- 3. Make it NOT NULL and add a unique index
ALTER TABLE orders
ALTER COLUMN public_id SET NOT NULL;
CREATE UNIQUE INDEX orders_public_id_key ON orders(public_id);
-- 4. Update foreign keys gradually
-- (add new uuid FK column, dual-write, switch reads, drop old)
-- 5. Switch URLs to public_id, keep numeric id internal

Don't try to swap the primary key in one step on a live table — you'll lock writes for the duration of the rewrite. Use the dual-column approach, migrate readers and FKs incrementally, and only drop the old PK when nothing references it.