Next.js — SEO Optimization

Augustine Joseph
6 min read · Jan 11, 2025


Best practices for SEO performance in Next.js applications.

Metadata

Metadata shown when inspecting a webpage in the browser

1. Static Metadata

Static metadata is defined with a metadata object exported from a layout.tsx or page.tsx file. This object contains information that remains constant across all page renders, so it is suitable for information that doesn’t change based on runtime data.


import type { Metadata } from "next";

export const metadata: Metadata = {
  title: {
    default: "My Blog",
    template: "%s - My Blog",
  },

  description: "Read the articles!",

  robots: {
    index: true,
    follow: true,
  },

  openGraph: {
    type: "website",
    url: "https://www.myblogposts.com",
    title: "My Awesome Blog",
    description: "Come and read my awesome articles!",
    images: [
      {
        url: "https://www.myblogposts.com/images/og-image.jpg",
        width: 1200,
        height: 630,
        alt: "My Blog - Latest Articles",
      },
    ],
    siteName: "My Blog",
  },

  twitter: {
    card: "summary_large_image",
    site: "@myblogposts",
    title: "My Blog",
    description: "Come and read my awesome articles!",
    images: [
      {
        url: "https://www.myblogposts.com/images/twitter-card.jpg",
        alt: "Twitter Card Image",
      },
    ],
  },

  keywords: ["awesome blog", "articles", "technology", "web development"],

  alternates: {
    canonical: "https://www.myblogposts.com",
  },
};

title

Use: Defines the default title of the webpage.
Use Case: Displayed in the browser tab and in search engine results; helps improve SEO.

  • default: The default title when no specific page title is provided.
  • template: A customizable format to append page-specific titles (e.g., blog post title).
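
For example, a child page that only sets its own title is rendered through the parent template above. The app/about/page.tsx route below is just an illustrative assumption:

// app/about/page.tsx -- illustrative child page; the route itself is an assumption
import type { Metadata } from "next";

export const metadata: Metadata = {
  // Resolved through the parent layout's template as "About - My Blog"
  title: "About",
};

export default function AboutPage() {
  return <h1>About</h1>;
}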

description

Use: Provides a short description of the page.
This content is used by search engines and social media platforms (like Twitter and Facebook) in previews.

robots

Use: Controls how search engine bots interact with your page.
Use Case: Tells search engines whether to index or follow links on the page.

  • index: Whether search engines should index the page.
  • follow: Whether search engines should follow links on the page.

openGraph

Use: Controls how your content appears when shared on social media (Facebook, LinkedIn, etc.).
Use Case: Enhances the display of your website when shared on social platforms by specifying rich content like images, descriptions, and URLs.

  • type: Defines the type of content (website, article, etc.).
  • url: The canonical URL of the page.
  • title, description: The preview content for social media.
  • images: An array of images used in the social media preview.
  • siteName: The name of your site displayed on social media.

twitter

Use: Defines how your page will appear when shared on Twitter.
Use Case: Provides control over how your content is displayed as a Twitter Card.

  • card: Type of Twitter Card to use (summary, summary_large_image).
  • site: Your Twitter handle.
  • title, description: Content for the Twitter Card.
  • images: Image URLs for Twitter Card.

keywords

Use: Provides a list of keywords relevant to the content.
Use Case: Helps improve SEO by targeting specific search terms.

alternates

Use: Defines alternate URLs for different versions of your content.
Use Case: Helps with SEO by specifying canonical links to avoid duplicate content issues.

2. Dynamic Metadata

The generateMetadata function receives the current route’s params and search params, along with the resolved metadata from parent segments, and returns the metadata for the page.

import { Metadata } from "next";

// Assumed shapes for the route props and the post payload returned by dummyjson
interface BlogPostPageProps {
  params: { postId: string };
}

interface BlogPost {
  id: number;
  title: string;
  body: string;
}

export async function generateMetadata({
  params: { postId },
}: BlogPostPageProps): Promise<Metadata> {
  const response = await fetch(`https://dummyjson.com/posts/${postId}`);
  const post: BlogPost = await response.json();

  const baseUrl = process.env.NEXT_PUBLIC_SITE_URL || "http://localhost:3000";
  const imageUrl = process.env.NEXT_PUBLIC_IMAGE_URL || `${baseUrl}/images`;

  return {
    title: post.title,
    description: post.body,

    keywords: [
      ...post.title.split(" "),
      "blog",
      "articles",
      "web development",
      "technology",
    ],

    openGraph: {
      type: "article",
      url: `${baseUrl}/posts/${postId}`,
      title: post.title,
      description: post.body,
      images: [
        {
          url: `${imageUrl}/og-image-${postId}.png`,
          width: 1200,
          height: 630,
          alt: `Open Graph image for ${post.title}`,
        },
      ],
      siteName: "My Awesome Blog",
    },

    twitter: {
      card: "summary_large_image",
      site: "@myawesomeblog",
      title: post.title,
      description: post.body,
      images: [
        {
          url: `${imageUrl}/twitter-card-${postId}.jpg`,
          alt: `Twitter Card image for ${post.title}`,
        },
      ],
    },

    alternates: {
      canonical: `${baseUrl}/posts/${postId}`,
    },
  };
}

The post’s title, description, images, and keywords are dynamically generated based on the content. Each piece of content receives uniquely tailored metadata.

Caching

Building a Next.js project without generateStaticParams

Here, the dynamic content is rendered on demand using Node.js, and the page is generated on each request by fetching data directly from the API. This ensures the page always has the latest content, but it comes at a performance cost, since the data must be fetched on every request.

Building a Next.js project with generateStaticParams

Here, the dynamic content is rendered statically at build time for each post using generateStaticParams. The static pages are pre-rendered and cached, improving performance. The pages can be incrementally re-generated when the content changes, ensuring that the latest content is served to users without rebuilding the entire site.
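
For reference, a minimal generateStaticParams for this blog could look like the sketch below; it assumes the same dummyjson posts API used in generateMetadata and that the route lives at app/posts/[postId]/page.tsx:

// app/posts/[postId]/page.tsx -- sketch only, assuming the dummyjson API used above
export async function generateStaticParams() {
  const response = await fetch("https://dummyjson.com/posts");
  const data: { posts: { id: number }[] } = await response.json();

  // One params object per post; Next.js pre-renders a static page for each at build time
  return data.posts.map((post) => ({
    postId: String(post.id),
  }));
}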

When managing a website with thousands of blog posts, using generateStaticParams to pre-render every post can become inefficient. This approach increases build time, deployment size, and puts unnecessary load on both the server and CDN. To address this, we can selectively generate static pages for only the most popular or trending posts, while utilizing Incremental Static Regeneration (ISR) and Server-Side Rendering (SSR) for less frequently accessed content.

For posts that are less popular or frequently changing, use ISR to regenerate the page in the background when it’s requested. This ensures content remains fresh without re-building the entire site.
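
One way to express that split, sketched under the same assumptions (with “popular” crudely approximated by the first 20 posts from the API), is to pre-render only a small set of posts and let the rest be generated on demand with a revalidation window:

// app/posts/[postId]/page.tsx -- sketch only; the "popular posts" selection is an assumption
// Regenerate a cached post page in the background at most once per hour (ISR)
export const revalidate = 3600;

// Posts that were not pre-rendered are generated on their first request, then cached
export const dynamicParams = true;

export async function generateStaticParams() {
  const response = await fetch("https://dummyjson.com/posts?limit=20");
  const data: { posts: { id: number }[] } = await response.json();

  return data.posts.map((post) => ({ postId: String(post.id) }));
}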

404 Not Found Page

Not Found page.

A well-designed 404 page can enhance SEO performance in the following ways:

  • Improves user experience: Guides visitors back to valuable content, keeping them on the site longer.
  • Reduces bounce rates: Minimizes the chances of a “hard bounce,” which can negatively affect SEO rankings.
  • Signals site quality: Search engines may interpret a helpful 404 page as an indication of a well-maintained site.
  • Boosts engagement: A 404 page with relevant links or a search bar helps users find what they’re looking for, increasing user interaction.
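
In the App Router, the 404 page is defined in app/not-found.tsx. A minimal sketch along those lines (the linked routes are assumptions) could be:

// app/not-found.tsx -- minimal sketch of a helpful 404 page; linked routes are assumptions
import Link from "next/link";

export default function NotFound() {
  return (
    <main>
      <h1>Page not found</h1>
      <p>The page you are looking for does not exist or has been moved.</p>
      {/* Point visitors back to valuable content instead of a dead end */}
      <Link href="/">Go back home</Link>
      <Link href="/posts">Browse all posts</Link>
    </main>
  );
}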

Dynamic Sitemap

Dynamic sitemap code and sitemap.xml file

A sitemap.ts file helps search engines crawl and index the website more effectively, improving SEO performance.

Next.js Documentation
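
Below is a sketch of such a dynamic app/sitemap.ts, assuming the same dummyjson posts API and a hypothetical /about page; the API exposes no updated-at field, so lastModified simply falls back to the build time:

// app/sitemap.ts -- sketch only; routes and update times are assumptions
import type { MetadataRoute } from "next";

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const baseUrl = process.env.NEXT_PUBLIC_SITE_URL || "http://localhost:3000";

  const response = await fetch("https://dummyjson.com/posts");
  const data: { posts: { id: number }[] } = await response.json();

  // One entry per blog post; dummyjson has no updated-at field, so use the build time
  const postEntries: MetadataRoute.Sitemap = data.posts.map((post) => ({
    url: `${baseUrl}/posts/${post.id}`,
    lastModified: new Date(),
    changeFrequency: "weekly",
    priority: 0.7,
  }));

  // Static pages plus the dynamically generated post entries
  return [
    { url: baseUrl, lastModified: new Date(), changeFrequency: "daily", priority: 1 },
    { url: `${baseUrl}/about`, lastModified: new Date(), changeFrequency: "monthly", priority: 0.5 },
    ...postEntries,
  ];
}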

1. Efficient URL Discovery

A sitemap provides search engines with a comprehensive list of URLs on your site. By dynamically fetching content (such as blog posts or product pages) from an external source or database, the sitemap ensures that all relevant pages are discoverable by search engines.

2. Updated Content Indexing

Including the lastModified field for each URL in the sitemap allows search engines to know when content was last updated. This helps ensure that updated pages or newly added content are indexed promptly, improving their chances of appearing in search results faster.

3. Comprehensive Site Structure

A well-structured sitemap includes all important pages, such as blog posts, category pages, and static pages (like “About” or “Contact”). This comprehensive representation of the site structure gives search engines a clear map of how your content is organized, ensuring that no important page is overlooked.

4. Improved Crawl Budget

Search engines allocate a “crawl budget,” which determines how often and how deeply they crawl a website. A well-structured, accurate sitemap helps search engines use that budget efficiently, ensuring that high-priority pages are crawled more frequently and thoroughly.

Robots file

Next.js Documentation

Robots.txt file.

The robots.txt file is used to control how search engine crawlers access and index the website.

  • Control Crawling: Specifies which pages or sections of your site should not be crawled by search engine bots.
  • Prevent Indexing: Helps prevent search engines from indexing specific pages, such as admin pages or duplicate content.
  • Improve Crawl Budget: Directs crawlers to focus on important pages, ensuring they efficiently use the crawl budget.
  • Avoid Overload: Prevents bots from overloading your server by restricting access to non-essential resources.
  • Manage Subdirectories: Directs crawlers to avoid certain subdirectories, like staging or testing environments, to keep them from being indexed.
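
Next.js can generate this file from an app/robots.ts route. A minimal sketch, with the disallowed paths as purely hypothetical examples, could look like this:

// app/robots.ts -- sketch only; the disallowed paths are hypothetical examples
import type { MetadataRoute } from "next";

export default function robots(): MetadataRoute.Robots {
  const baseUrl = process.env.NEXT_PUBLIC_SITE_URL || "http://localhost:3000";

  return {
    rules: [
      {
        userAgent: "*",
        allow: "/",
        // Keep crawlers out of admin and staging areas
        disallow: ["/admin/", "/staging/"],
      },
    ],
    // Point crawlers at the dynamically generated sitemap
    sitemap: `${baseUrl}/sitemap.xml`,
  };
}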
