Introduction
So here's the thing. I really don't want to have to care about SEO. It's all very nebulous, and it attracts so many snake-oil salespeople. SEO websites are the worst. And yet, if you want people to see the stuff that you build, SEO remains super important.
Happily, we don't need to become SEO experts. A few key optimizations can play a big role in our search engine results!
I was intrigued by a recent blog post about how the Ghost team moved their blog to Gatsby. The move had a profound impact on their SEO.
In that article, the author explains how adding an XML sitemap (among other factors) helped them achieve remarkable organic traffic gains. Today, this tutorial will walk you through how to generate a sitemap for your own Gatsby blog.
Watch the video
Prefer your lessons in video format? Watch for free on egghead.
What is an XML Sitemap?
An XML sitemap is a raw document designed to help machines learn about the structure of a website. It looks something like this (simplified for illustration):
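```xml
<!-- A simplified example; a real sitemap lists every page on the site -->
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2021-08-30</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/contact/</loc>
  </url>
</urlset>
```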

This is different from the "sitemap" sometimes linked to in the footers of websites. No human is meant to look at this document, and it shouldn't be linked to from your pages. It exists exclusively for Googlebot and its cousins.
Leveraging the ecosystem
Whenever I run into a new problem when working on a Gatsby project, my first instinct is always to check and see if a solution has been created by the community. A quick search reveals gatsby-plugin-sitemap, an officially-maintained plugin that solves this exact problem! 🎉
Let's install it, either using yarn or npm:
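```bash
# Using yarn:
yarn add gatsby-plugin-sitemap

# Or, using npm:
npm install gatsby-plugin-sitemap
```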
Next, we can add it to our `gatsby-config.js`:
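A minimal sketch (the plugin relies on `siteUrl` being set in your site metadata, so it can build absolute URLs):

```js
// gatsby-config.js
module.exports = {
  siteMetadata: {
    // Used by the plugin to build absolute URLs
    siteUrl: 'http://www.joshwcomeau.com',
  },
  plugins: [
    // ...your other plugins
    'gatsby-plugin-sitemap',
  ],
};
```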
Whenever we build our site, this plugin will generate a `sitemap.xml` file, alongside all the other files that Gatsby builds.
Critically, this plugin only runs when building for production. This means that you won't be able to test it when running in development mode. Let's build, and spin up a static server with `serve`:
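Something like this (depending on your version of `serve`, you may need to pass `-l 5000` to get port 5000):

```bash
# Build the production bundle into ./public
npx gatsby build

# Serve the static output
npx serve public
```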
You should now be able to open `localhost:5000/sitemap.xml`, and see a ~~beautiful~~ ugly XML document.
Excluding certain paths
Unless you're extremely lucky, it's likely that this sitemap isn't quite right.
One of the biggest reasons to add a sitemap is to tell Google which pages not to worry about. For example, my blog had the following pages specified in the original version of my sitemap:
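An illustrative excerpt (the real file wraps these entries in the usual `<urlset>` boilerplate):

```xml
<url>
  <loc>http://www.joshwcomeau.com/admin/</loc>
</url>
<url>
  <loc>http://www.joshwcomeau.com/confirmed/</loc>
</url>
```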
`admin` is an authenticated route I use for viewing stats about the website, and `confirmed` is shown when users join my newsletter. Neither of these pages makes sense to include in search results.
Happily, we can customize the plugin to pass an array of paths to exclude:
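A sketch, using the option name from the version of the plugin this article was written against (newer major versions rename it to `excludes`):

```js
// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-plugin-sitemap',
      options: {
        exclude: ['/admin', '/confirmed'],
      },
    },
  ],
};
```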
Advanced customizations
When reading the Google sitemap recommendations, I found this bit of information:
List only canonical URLs in your sitemaps. If you have two versions of a page, list only the (Google-selected) canonical in the sitemap.
A "canonical" URL is the "true home" for a specific entity. If you have multiple URLs that contain the same content, you need to mark one as "canonical" for search engines to use.
If you don't do this, Google will penalize you for the duplicate content, hurting your search result rankings 😬
On my blog, post URLs are in the following format: `/:category/:slug`. This presents a problem, since posts can belong to multiple categories. For example, the post that you're reading right now can be reached through both of these URLs:
- `/gatsby/seo-friendly-sitemap/`
- `/seo/seo-friendly-sitemap/`
The posts on my blog are all written using MDX. In the frontmatter for the posts, I have data that looks like this:
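Something along these lines (the title here is illustrative):

```mdx
---
title: Generating SEO-Friendly Sitemaps with Gatsby
categories: ['gatsby', 'seo']
---
```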
Categories are listed in priority order, so the first category should always form the canonical URL.
The challenge is clear: I need to fetch the categories from my MDX frontmatter and use them to filter the pages generated in the sitemap. Delightfully, this is an option with the plugin!
Querying data with GraphQL
Inside our `gatsby-config.js`, we can write a GraphQL query to pull whatever data we need:
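Here's a close approximation of the plugin's default:

```js
// gatsby-config.js
{
  resolve: 'gatsby-plugin-sitemap',
  options: {
    query: `
      {
        site {
          siteMetadata {
            siteUrl
          }
        }
        allSitePage {
          nodes {
            path
          }
        }
      }
    `,
  },
},
```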
By default, the plugin uses a query like this, but we can overwrite it. Here it fetches the `siteUrl`, which in my case is http://www.joshwcomeau.com, and then it fetches the path for every page node (eg. `/gatsby/seo-friendly-sitemap`). It stitches those two strings together for every page it finds, and produces a sitemap.
In order to filter out non-canonical results, we first need to expose the right data to GraphQL!
`allSitePage` is an index of every page created, either by putting a React component in `src/pages`, or using the `createPage` API. In my case, I'm generating all articles/tutorials programmatically with `createPage`.
Here's what a typical `createPage` call looks like, inside `gatsby-node.js`:
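A sketch (the template path and context values are illustrative):

```js
// gatsby-node.js
const path = require('path');

exports.createPages = async ({ actions }) => {
  actions.createPage({
    // The URL this page will live at
    path: '/gatsby/seo-friendly-sitemap/',
    // The React component used to render it
    component: path.resolve('./src/templates/Article.js'),
    // Data made available to the component via props
    context: {
      slug: 'seo-friendly-sitemap',
    },
  });
};
```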
If you're building a blog with Markdown or MDX, you're probably already using this to generate your pages. You provide it a `path` to live at, a `component` to mount, and some contextual data that the component might need. Anything passed to `context` becomes available to the `component` via props.
Happily, it turns out that `context` also gets exposed to GraphQL!
I added a new piece of data to `context`:
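A sketch of the change, inside the loop that creates each category/post page:

```js
actions.createPage({
  path: `/${currentCategory}/${slug}/`,
  component: path.resolve('./src/templates/Article.js'),
  context: {
    slug,
    // Only the page for the first-listed category is canonical
    isCanonical: currentCategory === canonicalCategory,
  },
});
```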
The `currentCategory` and `canonicalCategory` variables were already available to me, since I was iterating through all my data and using it to create these pages.
With this data added, I could update the GraphQL query passed to `query`, in my `gatsby-config.js`:
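The only change from the default is asking for the page context alongside the path:

```js
query: `
  {
    site {
      siteMetadata {
        siteUrl
      }
    }
    allSitePage {
      nodes {
        path
        context {
          isCanonical
        }
      }
    }
  }
`,
```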
Filtering pages
We've now exposed each page's "canonical status" to GraphQL, and written it into the query that `gatsby-plugin-sitemap` will use. The final piece of this puzzle: overwriting the default "serializer" to specify what should be done with this queried data.
Here's what that looks like:
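A sketch (pages created from `src/pages` don't have the flag at all, so we only drop pages explicitly marked as non-canonical):

```js
// gatsby-config.js
{
  resolve: 'gatsby-plugin-sitemap',
  options: {
    query: `
      {
        site {
          siteMetadata {
            siteUrl
          }
        }
        allSitePage {
          nodes {
            path
            context {
              isCanonical
            }
          }
        }
      }
    `,
    serialize: ({ site, allSitePage }) =>
      allSitePage.nodes
        // Keep everything except pages explicitly marked
        // as non-canonical duplicates
        .filter((node) => node.context?.isCanonical !== false)
        .map((node) => ({
          url: site.siteMetadata.siteUrl + node.path,
          changefreq: 'daily',
          priority: 0.7,
        })),
  },
},
```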
`serialize` is a function that transforms the data from the `query` into an array of "sitemappy" objects. The items we return will be used as the raw data to generate the sitemap.
Now that we've specified it in GraphQL, we can access `node.context.isCanonical` to filter out duplicate pages.
By using the `query` and `serialize` escape hatches built into `gatsby-plugin-sitemap`, we gain far greater control over the generated sitemap. We can also fine-tune some page-specific options!
Page-specific options
When generating the XML sitemap, you may have noticed a couple additional fields being shown:
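Something like this (the values match the serializer above):

```xml
<url>
  <loc>http://www.joshwcomeau.com/gatsby/seo-friendly-sitemap/</loc>
  <changefreq>daily</changefreq>
  <priority>0.7</priority>
</url>
```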
In fact, there are a handful of options that can be used to tweak each page for optimal results.
changefreq
`changefreq` is a measure of how often your page changes. From the Sitemaps protocol:
This value provides general information to search engines and may not correlate exactly to how often they crawl the page. Valid values are:
- always
- hourly
- daily
- weekly
- monthly
- yearly
- never

The value "always" should be used to describe documents that change each time they are accessed. The value "never" should be used to describe archived URLs.
For a blog, I feel like `daily` fits most use cases pretty well.
priority
`priority` is a relative measure of a page's importance. You can use this to signal to the crawler which pages it should care about, and which aren't so important. There are 11 values available to you: `0.0` through `1.0`.
On this blog, I'm using it to rank article pages like this one above "index" pages like the latest content page.
lastmod
Finally, we can add a date-time stamp to indicate when the page was last modified.
I'm honestly not sure how valuable this is, since presumably Googlebot is smart enough to detect when a page's content has changed, but correctly following a specification can't hurt!
Even more customizations
If you feel like you're limited by the options presented by this plugin, the folks at Ghost created their own advanced sitemap plugin. It uses XSL templating for a much prettier output! Because it's a newer and less battle-tested plugin, I opted to stick with the standard one for my blog, but it could be a powerful option for folks with advanced use cases!
Submitting your sitemap
Once your sitemap has been generated, and your site's been deployed, you'll need to let Google know that it exists!
There are a number of ways to do this. I opted to submit mine via the Google Search Console tool, though other options are outlined in their documentation.
Last Updated: August 30th, 2021