The Short Answer
A good slug is short, lowercase, hyphenated, descriptive, and stable. A bad slug uses query parameters, mixed case, underscores, padding words, or changes every time the title gets edited. Three examples of each:
Good slugs
- /blog/url-slug-best-practices
- /products/wireless-headphones
- /guides/typescript-generics
Bad slugs
- /blog/post?id=4827&cat=12
- /Products/Wireless_Headphones_FINAL_v2
- /guides/the-complete-and-utter-guide-to-using-typescript-generics-in-2026-for-beginners
Why Slugs Matter for SEO
Google has confirmed since 2008 that the words in your URL are a small ranking signal. The signal itself is modest — content quality dominates — but the slug also drives the second, larger effect: click-through rate from the search results page.
Studies of SERP behavior consistently show that users scan the URL strip before they click. A clean URL like /url-slug-best-practices tells the user exactly what they will get. A URL like /p?id=4827 communicates nothing and gets clicked roughly 25 to 40 percent less often for the same position. Over thousands of impressions per week the lift compounds.
The slug also affects how a page is shared. Slack, iMessage, X, LinkedIn, and Discord all preview the URL alongside the title card. A descriptive slug doubles as a fallback title when the OpenGraph image fails to load.
The Anatomy of a Perfect Slug
Six rules cover ninety-five percent of cases. Internalize these and you can stop second-guessing every new page.
| Rule | Why it matters |
|---|---|
| Lowercase only | URLs are case-sensitive on most servers; mixed case creates duplicate content risk. |
| Hyphens between words | Google treats hyphens as spaces; underscores join words into one token. |
| 3 to 5 keywords | Enough context for ranking, short enough to fully render in SERP and social cards. |
| 50 to 80 characters | Long slugs get truncated in SERPs and look spammy when shared. |
| Drop most stop words | Words like "the", "a", "of" rarely add ranking value and waste characters. |
| Strip punctuation | Apostrophes, quotes, commas, and parentheses force percent-encoding and ugly URLs. |
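The six rules above can be sketched as a validator. The regex and length cap here are a straightforward encoding of the table, not an official specification:

```javascript
// Hypothetical validator for the rules above: lowercase, hyphen-separated
// tokens of letters and digits only, within the 80-character cap.
function isValidSlug(slug) {
  const wellFormed = /^[a-z0-9]+(-[a-z0-9]+)*$/.test(slug);
  return wellFormed && slug.length <= 80;
}

isValidSlug('url-slug-best-practices');      // true
isValidSlug('Wireless_Headphones_FINAL_v2'); // false: mixed case, underscores
```

The pattern also rejects leading, trailing, and doubled hyphens, which are the most common slug-generator bugs in the wild.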
Hyphens vs Underscores
The single most common mistake on legacy CMSs is the underscore. Matt Cutts, then Google's head of webspam, addressed this directly in 2005 and again in 2011: Google treats a hyphen as a word separator and an underscore as a word joiner. That means red_running_shoes is indexed as the single token red_running_shoes, while red-running-shoes is indexed as three searchable words.
Modern Google has gotten smarter about parsing underscores, but the original rule still holds in 2026: hyphens are unambiguous, underscores are not. The cost of switching is zero and the upside is real. Always use hyphens.
A related question: hyphen or em-dash? Always the ASCII hyphen-minus (U+002D). En-dashes get percent-encoded into %E2%80%93 and em-dashes into %E2%80%94, which destroys readability.
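The difference is easy to verify in any JavaScript console:

```javascript
// Percent-encoding of the three dash characters in a URL path segment.
// Only the ASCII hyphen-minus survives untouched.
console.log(encodeURIComponent('-'));      // "-"
console.log(encodeURIComponent('\u2013')); // "%E2%80%93" (en dash)
console.log(encodeURIComponent('\u2014')); // "%E2%80%94" (em dash)
```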
Should Slugs Include Stop Words?
Stop words are short function words that carry little semantic weight: the, a, an, of, in, on, at, for, and, or. They are the first thing many slug generators strip out. The conventional advice — drop them all — is too aggressive.
| Context | Advice | Example |
|---|---|---|
| Ranking-critical pages | Drop stop words | /best-running-shoes |
| Brand or quote phrases | Keep stop words | /the-art-of-war |
| Recipes / food sites | Keep when readability matters | /cake-with-chocolate-ganache |
| E-commerce categories | Drop aggressively | /mens-leather-jackets |
The rule of thumb: if removing the stop word changes the meaning or breaks a recognizable phrase, keep it. Otherwise drop it.
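A sketch of how the rule of thumb might look in code. The stop list and the phrase keep-list are illustrative assumptions, not a canonical set:

```javascript
// Sketch of selective stop-word removal. Slugs matching a known phrase
// are left intact; otherwise stop words are filtered out token by token.
const STOP_WORDS = new Set(['the', 'a', 'an', 'of', 'in', 'on', 'at', 'for', 'and', 'or']);
const KEEP_PHRASES = ['the-art-of-war']; // recognizable phrases to preserve whole

function stripStopWords(slug) {
  if (KEEP_PHRASES.includes(slug)) return slug; // meaning depends on the stop words
  return slug
    .split('-')
    .filter((word) => !STOP_WORDS.has(word))
    .join('-');
}

stripStopWords('the-best-running-shoes-of-2026'); // "best-running-shoes-2026"
stripStopWords('the-art-of-war');                 // "the-art-of-war" (kept whole)
```

In a real CMS the keep-list would grow by editorial review rather than being hardcoded.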
Trailing Slashes
/blog/post/ and /blog/post are technically two different URLs. Google may index them as duplicates, splitting your link equity. The fix is binary: pick one convention site-wide and 301 the other to it.
Most static site generators (Next.js, Hugo, Astro) and most CDNs (Vercel, Cloudflare, Netlify) make this configurable. The choice itself does not matter for SEO; only consistency does. Common web convention leans toward no trailing slash for content pages and a trailing slash for directory-style URLs, but a single rule applied everywhere beats any specific choice.
Verify with curl: curl -I https://yoursite.com/page/ should return either a 200 or a 301 to the canonical form. A 200 on both forms is the bug.
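In Next.js, for example, the convention can be pinned in one line of config. The `trailingSlash` option is real; the comment describes its documented behavior, but verify against your framework version:

```javascript
// next.config.js: pick the no-trailing-slash convention site-wide.
// false is also Next.js's default; requests for /blog/post/ are
// permanently redirected to /blog/post.
module.exports = {
  trailingSlash: false,
};
```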
Handling Slug Changes Without Breaking SEO
Sometimes you must change a slug. The title was wrong. The product line was renamed. The URL contained a brand name you no longer use. The single rule is: never let an old URL 404. Every slug change ships with a 301 redirect from the old slug to the new one, kept in place forever.
A 301 (permanent) redirect passes essentially all link equity to the new URL. A 302 (temporary) does not — Google treats the old URL as still authoritative. For slug changes you almost always want 301.
Watch for chains: if /a → /b → /c, collapse them to /a → /c and /b → /c. Each hop adds latency and slightly degrades the signal. Most CDNs let you express this as a routing rule.
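Collapsing chains can be automated when your redirects live in a map. A minimal sketch, assuming a plain object of source-to-target paths:

```javascript
// Flatten a redirect map so every source points at its final destination,
// removing chains like /a -> /b -> /c. The `seen` set guards against cycles.
function flattenRedirects(map) {
  const flat = {};
  for (const source of Object.keys(map)) {
    let target = map[source];
    const seen = new Set([source]);
    // Follow the chain until the target is no longer itself redirected.
    while (map[target] !== undefined && !seen.has(target)) {
      seen.add(target);
      target = map[target];
    }
    flat[source] = target;
  }
  return flat;
}

flattenRedirects({ '/a': '/b', '/b': '/c' });
// => { '/a': '/c', '/b': '/c' }
```

Run this once at deploy time and ship the flattened map to your CDN, so no visitor ever pays for more than one hop.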
Audit tip: Crawl your sitemap with Screaming Frog or Sitebulb every quarter and look for redirect chains, 404s, and pages with mixed-case URLs. These are the slug bugs that cost ranking silently.
International / Unicode Slugs
Should the URL for a Spanish article about jamón ibérico be /jamon-iberico or /jamón-ibérico? Both are technically valid: RFC 3987 allows IRIs (Internationalized Resource Identifiers) with full UTF-8 characters, and modern browsers display them correctly.
In practice, transliteration (stripping accents and converting to ASCII) wins for three reasons. First, when copy-pasted into a non-Unicode-aware system, accented URLs break. Second, social platforms still occasionally percent-encode them into unreadable strings like /jam%C3%B3n-ib%C3%A9rico. Third, some analytics tools treat the encoded and decoded forms as separate pages.
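The transliteration itself is small in JavaScript: NFKD decomposition splits an accented letter into its base letter plus a combining mark, and a regex strips the marks:

```javascript
// NFKD decomposes accented letters into base letter + combining mark
// (U+0300-U+036F); stripping the marks leaves plain ASCII.
const transliterated = 'jamón ibérico'
  .normalize('NFKD')
  .replace(/[\u0300-\u036f]/g, '');

console.log(transliterated); // "jamon iberico"
```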
The exception is non-Latin scripts (Japanese, Arabic, Cyrillic). For those audiences, native-script slugs perform better in CTR studies because they look correct to the reader. The percent-encoding ugliness is unavoidable but most users never see it.
Programmatic Slug Generation
Every CMS needs a slug function. Most ship with one that misses an edge case — non-Latin diacritics, ampersands, multiple consecutive spaces, or trailing punctuation. Here is a tight implementation that handles the cases most generators miss:
```javascript
// Production-grade slug generator
function slugify(input) {
  return input
    .normalize('NFKD')               // separate accents from base letters
    .replace(/[\u0300-\u036f]/g, '') // strip combining diacritics
    .toLowerCase()
    .trim()
    .replace(/&/g, ' and ')          // ampersand to readable word
    .replace(/[^a-z0-9\s-]/g, '')    // keep only letters, digits, space, hyphen
    .replace(/\s+/g, '-')            // collapse whitespace into a single hyphen
    .replace(/-+/g, '-')             // collapse repeated hyphens
    .replace(/^-+|-+$/g, '')         // trim leading/trailing hyphens
    .slice(0, 80);                   // hard cap on length
}

slugify("Café Au Lait — A Beginner's Guide!");
// => "cafe-au-lait-a-beginners-guide"

slugify('What is REST? An API Primer (2026)');
// => "what-is-rest-an-api-primer-2026"
```

The two non-obvious pieces are normalize('NFKD') and the diacritic strip. NFKD decomposes characters like é into a base e plus a combining accent, after which the regex strips the accent. Without this step, [^a-z0-9\s-] would also strip the base letter, producing caf-au-lait.
In production, also enforce uniqueness. If two posts produce the same slug, append a numeric suffix (-2, -3) only at insert time, never silently rewrite a slug after publish.
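One way to enforce insert-time uniqueness; the `existing` set here stands in for whatever lookup your datastore provides:

```javascript
// Sketch of insert-time uniqueness: append -2, -3, ... only when the
// base slug is already taken. `existing` is a stand-in for a DB query.
function uniqueSlug(base, existing) {
  if (!existing.has(base)) return base;
  let n = 2;
  while (existing.has(`${base}-${n}`)) n += 1;
  return `${base}-${n}`;
}

const taken = new Set(['hello-world', 'hello-world-2']);
uniqueSlug('hello-world', taken); // => "hello-world-3"
uniqueSlug('fresh-title', taken); // => "fresh-title"
```

Running this only at insert time is the point: a published slug is a public contract, and rewriting it later means a redirect, not a new slug.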