
Rural Local SEO: A Marketing Package Strong on Education

Posted by MiriamEllis

Can your marketing agency make a profit working with low-budget clients in rural areas?

Could you be overlooking a source of referrals, publicity, and professional satisfaction if you’re mainly focused on landing larger clients in urban locales? Clients in the least-populated areas need to capture every customer they can get to stay viable: locals, new neighbors, and passers-through alike. Basic Local SEO can go a long way toward helping with this, and even if package offerings aren’t your agency’s typical approach, a simple product that emphasizes education could be exactly what’s called for.

Today, I’d like to help you explore your opportunities of serving rural and very small town clients. I’ve pulled together a sample spreadsheet and a ton of other resources that I hope will empower you to develop a bare-bones but high-quality local search marketing package that will work for most and could significantly benefit your agency in some remarkable ways.

Everything in moderation

The fundamental linchpin of the rural client/agency relationship is that the needs of these businesses are exceedingly moderate. The competitive bar is set so low in a small-town-and-country setting that, with few exceptions, clients can make a strong local showing with a pared-down marketing plan.

Let’s be honest — many businesses in this scenario can squeak by on a website design package from some giant web hosting agency. A few minutes spent with Google’s non-urban local packs attest to this. But I’m personally dissatisfied by independent businesses ending up being treated like numbers because it’s so antithetical to the way they operate. The local hardware store doesn’t put you on hold for 45 minutes to answer a question. The local farm stand doesn’t route you overseas to buy heirloom tomatoes. Few small town institutions stay in business for 150 years by overpromising and under-delivering.

Let’s assume that many rural clients will have some kind of website. If they don’t, you can recommend some sort of freebie or cheapie solution. It will be enough to get them placed somewhere in Google’s results, but if they never move beyond this, they could miss out on conversions they need to stay in business.

I’ve come to believe that the small-to-medium local marketing agency is the best fit for the small-to-medium rural brand because of shared work ethics and a similar way of doing business. But both entities need to survive monetarily and that means playing a very smart game with a budget on both sides.

It’s a question of organizing an agency offering that delivers maximum value with a modest investment of your time and the client’s money.

Constructing a square deal

When you take on a substantial client in a large town or city, you pull out all the stops. You dive deeply into auditing the business, its market, its assets. You look at everything from technical errors to creative strengths before beginning to build a strategy or implement campaigns, and there may be many months or years of work ahead for you with these clients. This is all entirely appropriate for big, lucrative contracts.

For your rural roster, prepare to scale way back. Here is your working plan:

1. Schedule your first 15-minute phone call with the client

Avoid the whole issue of having to lollygag around waiting for a busy small business owner to fill out a form. Schedule an appointment and have the client be at their place of business, in front of a computer, at the time of the call. Confirm the following ultra-basic data about the client:

  • Name
  • Address
  • Phone
  • URL
  • Business model (single location brick-and-mortar, SAB, etc.)
  • Category
  • Are there any other businesses at this address?
  • Main products/services offered
  • If SAB, list of cities served
  • Most obvious search phrase they want to rank for
  • Year established and year they first took the business online
  • Have they ever been aware of a penalty on their website or had Google tell them they were removing a listing?
  • Finally, have the client (who is in front of their computer at their place of business) search for the search term that’s the most obviously important and read off to you the names and URLs of the businesses ranking in the local pack and on the first page of the organic results.

And that’s it. If you pay yourself $100/hr, this quick session yields a charge of $25.

2. Make a one-time investment in writing a bare-bones guide to Local SEO

Spend less than one working day putting together a .pdf file or Google doc written in the least-technical language containing the following:

  • Your briefest, clearest definition of what local SEO is and how it brings customers to local businesses. Inspiration here.
  • An overview of 3 key business models: brick & mortar, SAB, and home-based so the client can easily identify which of these models is theirs.
  • A complete copy of the Guidelines for representing your business on Google with a link in it to the live guidelines.
  • Foolproof instructions for creating a Google account and creating and claiming a GMB listing. Show the process step-by-step so that anyone can understand it. Inspiration here.
  • A list of top general industry citation platforms with links to the forms for getting listed on them. Inspiration here and if the client can hit at least a few of these, they will be off to a good start.
  • An overview of the role of review acquisition and response, with a few simple tips for earning reviews and a list of the top general industry review platforms. Inspiration here and here.
  • An overview of the role of building offline relationships to earn a few online linktations. Inspiration here.
  • Links to the Google My Business forum and the main Google support platforms including their phone number (844.491.9665), Facebook, Twitter, and online chat. Tell the client this is where to go if they encounter a problem with their Google listing in the future.
  • Links to major independent business associations as a support vehicle for small and rural businesses like AMIBA, ILSR, and Small Business Saturday. Inspiration here.
  • Your agency’s complete contact information so that the business can remember who you are and engage you for further consulting down the road, if ever necessary.

If you pay yourself $100 an hour, investing in creating this guide will cost you less than $1,000. That’s a modest amount that you can quickly earn back from clients. Hopefully, the inspirational links I’ve included will give you a big head start. Avoid covering anything trendy (like some brand-new Google feature) so that the only time you should have to update the guide in the near future will be if Google makes some major changes to their guidelines or dashboard.

Deliver this asset to every rural client as their basic training in the bare essentials of local marketing.

3. Create a competitive audit spreadsheet once and fill it out ad infinitum

What you want here is something that lets you swiftly fill in the blanks.

For the competitive audit, you’ll be stacking up your client’s metrics against the metrics of the business they told you was ranking at the top of the local pack when they searched from their location. You can come up with your own metrics, or you can make a copy of this template I’ve created for you and add to it/subtract from it as you like.

Make a copy of the ultra-basic competitive local audit template — you can do so right here.

You’ll notice that my sample sheet does not delve deeply into some of the more technical or creative areas you might explore for clients in tougher markets. With few exceptions, rural clients just don’t need that level of insight to compete.

Give yourself 45 focused minutes filling in the data in the spreadsheet. You’ve now invested 1 hour of time with the client. So let’s give that a value of $100.

4. Transfer the findings of your audit into a custom report

Here’s another one-time investment. Spend no more than one workday creating a .pdf or Google Docs template that takes the fields of your audit and presents them in a readable format for the client. I’m going to leave exact formatting up to you, but here are the sections I would recommend structuring the report around:

  • A side-by-side comparison of the client vs. competitor metrics, bucketed by topic (Website, GMB, Reputation, Links, Citations, etc)
  • A very basic explanation of what those metrics mean
  • A clear recommendation of what the client should do to improve their metrics

For example, your section on reputation might look like this:

The beauty of this is that, once you have the template, all you have to do is fill it out and then spend an hour making intelligent observations based on your findings.

Constructing the template should take you less than one workday; so, a one-time investment of less than $1,000 if you are paying yourself $100/hr.

Transferring the findings of your audit from the spreadsheet to the report for each client should take about 1 hour. So, we’re now up to two total hours of effort for a unique client.

5. Excelling at value

So, you’ve now had a 15-minute conversation with a client, given them an introductory guide to the basics of local search marketing, and delivered a customized report filled with your observations and their to-dos. Many agencies might call it a day and leave the client to interpret the report on their own.

But you won’t do that, because you don’t want to waste an incredible opportunity to build a firm relationship with a business. Instead, spend one more hour on the phone with the owner, going over the report with them page by page and allowing a few minutes for any of their questions. This is where you have the chance to deliver exceptional value to the client, telling them exactly what you think will be most helpful for them to know in a true teaching moment.

At the end of this, you will have become a memorable ally, someone they trust, and someone to whom they will have confidence in referring their colleagues, family members, and neighbors.

You’ve made an overall investment of less than $2,000 to create your rural/small town marketing program.

Packaging up the guide, the report and the 1:1 phone consulting, you have a base price of $300 for the product if you pay yourself $100/hour.

However, I’m going to suggest that, based on the level of local SEO expertise you bring to the scenario, you create a price point somewhere between $300–$500 for the package. If you are still relatively green at local SEO, $300 could be a fair price for three hours of consulting. If you’re an industry adept, scale it up a bit, because you bring a rare level of insight to every client interaction, even if you’re sticking to the absolute basics. Sell several of these packages in a week, and they will start to add up to a good monthly revenue stream.

As a marketer, I’ve generally shied away from packages, because whenever you dig deeply into a client’s scenario, nuances end up requiring so much custom research and communication. But for the very smallest clients in these least-competitive markets, packages can hit the spot.

Considerable benefits for your agency

The client is going to walk away from the relationship with a good deal … and likely a lot to do. If they follow your recommendations, it will typically be just what they needed to establish themselves on the web to the extent that neighbors and travelers can easily find them and choose them for transactions. Good job!

But you’re going to walk away with some amazing benefits, too, some of which you might not have considered before. To wit:

1. Relationships and the ripple effect

A client you’ve treated very well on the phone is a client who is likely to remember you for future needs and recommend you. I’ve had businesses send me lovely gifts on top of my consulting fee because I’ve taken the time to really listen and answer questions. SEO agencies are always looking for ways to build authentic relationships. Don’t overlook the small client as a centroid of referrals throughout a tight-knit community and beyond it to their urban colleagues, friends, and family.

2. Big data for insights and bragging rights

If your package becomes popular, a ton of data is going to start passing through your hands. The more of these audits you do, the more time you’re spending actively observing Google’s handling of the localized SERPs. Imagine the blog posts your agency can begin publishing by anonymizing and aggregating this data, pulling insights of value to our industry. There is no end to the potential for you to grow your knowledge.

Apart from case studies, think of the way this package can both build up your proud client roster and serve as a source of client reviews. The friendly relationship you’ve built with that 1:1 time can now become a font of very positive portfolio content and testimonials for you to publish on your website.

3. Agency pride from helping rebuild rural America

Have you noticed the recent spate of hit TV shows that hinge on rebuilding dilapidated American towns? Industry consolidation is most often cited as the root of rural collapse, with small farmers and independent businesses no longer able to create a tax base to support basic community needs like hospitals, fire departments, and schools. Few of us rejoice at the idea of Main Streets — long-cherished hallmarks not just of Americana but of shared American identity — becoming ghost towns.

But if you look for it, you can see signs of brilliant small entrepreneurs uniting to buck this trend. Check out initiatives like Locavesting and Localstake. There’s a reason to hope in small farming co-ops, the Main Street movement, and individuals like these who can re-envision a crumbling building as an independent country store, a B&B, or a job training center with Internet access.

It can be a source of professional satisfaction for your marketing agency if you offer these brave and hard-working business owners a good deal and the necessary education they need to present themselves sufficiently on the web. I live in a rural area, and I know just how much a little, solid advice can help. I feel extra good if I know I’m contributing to America’s rural comeback story.

Promoting your rural local SEO package

Once you’ve got your guide and templates created, what next? Here are some simple tips:

  • Create a terrific landing page on your website specifically for this package and call it out on your homepage as well. Wherever appropriate, build internal links to it.
  • Promote on social media.
  • Blog about why you’ve created the package, aligning your agency as an ally to the rebuilding of rural communities.
  • If, like me, you live in a rural area, consider presenting at local community events that will put you in front of small business owners.
  • Don’t overlook old school media like community message boards at the local post office, or even fliers tacked to electric poles.
  • If you’re a city slicker, consider how far you’d have to travel to get to the nearest rural community to participate in events.
  • Advertising both off and online in rural papers can be quite economical. There are also place of worship print bulletins, local school papers, and other publications that welcome sponsors. Give it a try.
  • And, of course, ask happy clients to refer you, telling them what it means to your business. You might even develop a referral program.

The truth is that your agency may not be able to live on rural clients alone. You may still be targeting the bulk of your campaigns towards urban enterprises, because just a few highly competitive clients can bring welcome security to your bank account.

But maybe this is a good day to start looking beyond the fast food franchise, the NY attorney and the LA dermatology group. The more one reads about rural entrepreneurs, the more one tends to empathize with them, and empathy is the best foundation I know of for building rewarding business relationships.


Vue.js And SEO: How To Optimize Reactive Websites For Search Engines And Bots

Paolo Mioni


Reactive JavaScript Frameworks (such as React, Vue.js, and Angular) are all the rage lately, and it’s no wonder that they are being used in more and more websites and applications due to their flexibility, modularity, and ease of automated testing.

These frameworks allow one to achieve new, previously-unthinkable things on a website or app, but how do they perform in terms of SEO? Do the pages that have been created with these frameworks get indexed by Google? Since with these frameworks all — or most — of the page rendering gets done in JavaScript (and the HTML that gets downloaded by bots is mostly empty), it seems that they’re a no-go if you want your websites to be indexed in search engines or even parsed by bots in general.

In this article, I will talk mostly about Vue.js, since it is the framework I’ve used most, and with which I have direct experiences in terms of indexing by the search engines on major projects, but I can assume that most of what I will cover is valid for other frameworks, too.


Some Background On The Problem

How Indexing Works

For your website to be indexed by Google, it needs to be crawled by Googlebot (an automated indexing software that visits your website and saves the contents of pages to its index) following links within each page. Googlebot also looks for special Sitemap XML files in websites to find pages that might not be linked correctly from your public site and to receive extra information on how often the pages in the website change and when they have last changed.
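For reference, a minimal Sitemap XML file follows the sitemaps.org schema; the URL and dates below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want Googlebot to discover -->
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2019-05-20</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>
```

The `<lastmod>` and `<changefreq>` fields are how the bot receives the change-frequency hints mentioned above.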

A Little Bit Of History

Until a few years ago (before 2009), Google used to index the content of a website’s HTML — excluding all the content created by JavaScript. It was common SEO knowledge that important links and content should not be written by JavaScript since it would not get indexed by Google, and it might cause a penalty for the website because Google might consider it “fake content” as if the website’s owner was trying to show users something different from what was shown to the search engines and trying to fool the latter.

It was very common practice by scammers to put a lot of SEO-friendly content in the HTML and hide it in JavaScript, for example. Google has always warned against this practice:

“Serving Googlebot different content than a normal user would see is considered cloaking, and would be against our Webmaster Guidelines.”

You could get penalized for this. In some cases, you could be penalized for serving different content to different user agents on the server side, but also for switching content via JavaScript after the page has loaded. I think this shows us that Google has been indexing websites executing JavaScript for a long time — at least for the sake of comparing the final HTML of the website (after JavaScript execution) and the raw HTML it was parsing for its indexes. But Googlebot did not execute JavaScript all the time, and Google was not using the JavaScript-generated content for indexing purposes.

Then, given the increased usage of AJAX to deliver dynamic content on websites, Google proposed an “AJAX crawling scheme” to help users index AJAX-based websites. It was very complicated; it basically required the website to produce a rendering of pages with AJAX content included. When requested by Google, the server would provide a version of the page with all (or most) of the content that would have been generated dynamically by JavaScript included in the HTML page — pre-rendered as an HTML Snapshot of the content. This process of having a server-side solution deliver content that was (for all other purposes) meant to be generated client-side, implied that those wanting to have a site that heavily relied on JavaScript indexed in Google had to go through a lot of technical hassles.

For example, if the content read by AJAX came from an external web service, it was necessary to duplicate the same web service calls server-side, and to produce, server-side, the same HTML that would have been produced client-side by JavaScript — or at least a very similar one. This was very complicated because, before the advent of Node.js, it required to at least partially duplicate the same rendering logic in two different programming languages: JavaScript for the frontend, and PHP, Java, Python, Ruby, and so on, on the backend. This is called “server-side rendering”, and it could lead to maintenance hell: if you made important changes to how you were rendering content in the frontend you had to duplicate those changes on the backend.

The only alternative to avoid duplicating the logic was to parse your own site with a browser executing JavaScript and save the end results to your server and serve those to Googlebot. This is sort of similar to what is now called “pre-rendering”.

Google (with its AJAX crawling scheme) also guaranteed that you would avoid penalties due to the fact that in this case you were serving different content to Googlebot and to the user. However, since 2015, Google has deprecated that practice with an official blog post that told website managers the following:

“Today, as long as you’re not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.”

What this told us was not that Googlebot had suddenly acquired the capability of executing JavaScript when indexing web pages, since we know that it had done so for a very long time (at least to check for fake content and scams). Instead, it told us that the result of JavaScript execution would be indexed and used in SERPs.

This seems to imply that we don’t have to worry about providing Google with server-side rendered HTML anymore. However, given that we see all sorts of tools for server-side rendering and pre-rendering made available for JavaScript frameworks, this seems not to be the case. Also, when dealing with SEO agencies on big projects, pre-rendering seems to be considered mandatory. How come?

How Does Google Actually Index Pages Created With Front-End Frameworks?

The Experiment

In order to see what Google actually indexes in websites that have been created with a front-end framework, I built a little experiment. It does not cover all use cases, but it is at least a means to find out more about Google’s behavior. I built a small website with Vue.js and had different parts of text rendered differently.

The website’s contents are taken from the description of the book Infinite Jest by David Foster Wallace in the Infinite Jest Wiki (thanks guys!). There are a couple of introductory texts for the whole book, and a list of characters with their individual biography:

  • Some text in the static HTML, outside of the Vue.js main container;
  • Some text is rendered immediately by Vue.js because it is contained in variables which are already present in the application’s code: they are defined in the component’s data object;
  • Some text is rendered by Vue.js from the data object, but with a delay of 300ms;
  • The character bios come from a set of REST APIs, which I’ve built on purpose using Sandbox. Since I was assuming that Google would execute the website’s code and stop after some time to take a snapshot of the current state of the page, I set each web service to respond with an incremental delay, the first with 0ms, the second with 300ms, the third with 600ms, and so on up to 2700ms.

Each character bio is shortened and contains a link to a sub-page, which is available only through Vue.js (URLs are generated by Vue.js using the history API), but not server-side (if you call the URL of the page directly, you get no response from the server), to check if those got indexed too. I assumed that these would not get indexed, since they are not proper links which render server-side, and there’s no way that Google can direct users to those links directly. But I just wanted to check.

I published this little test site to my Github Pages and requested indexing — take a look.

The Results

The results of the experiment (concerning the homepage) are the following:

  • The contents which are already present in the static HTML get indexed by Google (which is rather obvious);
  • The contents which are generated by Vue in real-time always get indexed by Google;
  • The contents which are generated by Vue, but rendered after 300ms get indexed as well;
  • The contents which come from the web service, with some delay, might get indexed, but not always. I’ve checked Google’s indexing of the page in different moments, and the content which was inserted last (after a couple of seconds) sometimes got indexed, sometimes it didn’t. The content that gets rendered pretty quickly does get indexed most of the time, even if it comes from an asynchronous call to an external web service. This depends on Google having a render budget for each page and site, which depends on its internal algorithms, and it might vary wildly depending on the ranking of your site and the current state of Googlebot’s rendering queue. So you cannot rely on content coming from external web services to get indexed;
  • The subpages (as they are not accessible as a direct link) do not get indexed as expected.

What does this experiment tell us? Basically, that Google does index dynamically generated content, even if it comes from an external web service, but it is not guaranteed that content will be indexed if it “arrives too late”. I have had similar experiences with other real, production websites besides this experiment.
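This cutoff behavior can be modeled as a simple sketch (entirely illustrative: Google’s real render budget is unknown and varies per site; the 1000ms figure here is invented):

```javascript
// Toy model of a render budget: content that finishes rendering after the
// snapshot cutoff never reaches the index. The delays mirror the experiment's
// content types and web services (0ms, 300ms, ... 2700ms).
function simulateIndexing(pageContents, renderBudgetMs) {
  return pageContents
    .filter(content => content.delayMs <= renderBudgetMs)
    .map(content => content.name);
}

const pageContents = [
  { name: 'static HTML', delayMs: 0 },
  { name: 'Vue data object', delayMs: 0 },
  { name: 'delayed variable', delayMs: 300 },
  { name: 'slowest web service', delayMs: 2700 }
];

console.log(simulateIndexing(pageContents, 1000));
// [ 'static HTML', 'Vue data object', 'delayed variable' ]
```

Because the real budget fluctuates with ranking and Googlebot’s rendering queue, the slowest items sit right at the boundary, which matches the “sometimes indexed, sometimes not” result above.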

Competitive SEO

Okay, so the content gets indexed, but what this experiment doesn’t tell us is: will the content be ranked competitively? Will Google prefer a website with static content to a dynamically-generated website? This is not an easy question to answer.

From my experience, I can tell that dynamically-generated content can rank in the top positions of the SERPS. I’ve worked on the website for a new model of a major car company, launching a new website with a new third-level domain. The site was fully generated with Vue.js — with very little content in the static HTML besides <title> tags and meta descriptions.

The site started ranking for minor searches in the first few days after publication, and the text snippets in the SERPs reported words coming directly from the dynamic content.

Within three months it was ranking first for most searches related to that car model — which was relatively easy since it was hosted on an official domain belonging to the car’s manufacturer, and the domain was heavily linked from reputable websites.

But given the fact that we had had to face strong opposition from the SEO company that was in charge of the project, I think that the result was still remarkable.

Due to the tight deadlines and lack of time given for the project, we were going to publish the site without pre-rendering.

Animated Text

What Google does not index is heavily-animated text. The site of one of the companies I work with, Rabbit Hole Consulting, contains lots of text animations, which are performed while the user scrolls, and require the text to be split into several chunks across different tags.

The main texts in the website’s home page are not meant for search engine indexing since they are not optimized for SEO. They are not made of tech-speak and do not use keywords: they are only meant to accompany the user on a conceptual journey about the company. The text gets inserted dynamically when the user enters the various sections of the home page.

(Image source: Rabbit Hole Consulting)

None of the texts in these sections of the website gets indexed by Google. In order to get Google to show something meaningful in the SERPs, we added some static text in the footer below the contact form, and this content does show as part of the page’s content in SERPs.

The text in the footer gets indexed and shown in SERPs, even though it is not immediately visible to the users unless they scroll to the bottom of the page and click on the “Questions” button to open the contact form. This confirms my opinion that content does get indexed even if it is not shown immediately to the user, as long as it is rendered soon to the HTML — as opposed to being rendered on-demand or after a long delay.

What About Pre-Rendering?

So, why all the fuss about pre-rendering — be it done server-side or at project compilation time? Is it really necessary? Although some frameworks, like Nuxt, make it much easier to perform, it is still no picnic, so the choice whether to set it up or not is not a light one.

I think it is not compulsory. It is certainly a requirement if a lot of the content you want to get indexed by Google comes from an external web service and is not immediately available at rendering time, and might — in some unfortunate cases — not be available at all due to, for example, web service downtime. If, during Googlebot’s visits, some of your content arrives too slowly, then it might not be indexed. If Googlebot indexes your page exactly at a moment in which you are performing maintenance on your web services, it might not index any dynamic content at all.

Furthermore, I have no proof of ranking differences between static content and dynamically generated content. That might require another experiment. I think it is very likely that, if content comes from an external web service and does not load immediately, it might impact Google’s perception of your site’s performance, which is a very important ranking factor.
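With Nuxt, for instance, build-time pre-rendering mostly comes down to listing the routes to generate (a sketch of a `nuxt.config.js`; the route names are placeholders, not from any real project):

```javascript
// nuxt.config.js (sketch): `nuxt generate` renders each listed route to a
// static HTML file at build time, so bots receive full markup even if the
// backing web services are slow or down when the crawler visits.
module.exports = {
  generate: {
    routes: [
      '/',
      '/characters/1', // dynamic routes must be listed explicitly
      '/characters/2'
    ]
  }
};
```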

Recommended reading: How Mobile Web Design Affects Local Search (And What To Do About It)

Other Considerations


Up until recently, Googlebot used a fairly old version of Chromium (the open-source project on which Google Chrome is based), namely version 41. This meant that some recent JavaScript or CSS features could not be rendered by Google correctly (e.g. IntersectionObserver, ES6 syntax, and so on).

Google has recently announced that it is now running the latest version of Chromium (74, at the time of writing) in Googlebot, and that the version will be updated regularly. The fact that Google was running Chromium 41 might have had big implications for sites which decided to disregard compatibility with IE11 and other old browsers.

You can see a comparison of Chromium 41 and Chromium 74’s support for features here. However, if your site was already polyfilling missing features to stay compatible with older browsers, there should have been no problem.

Always use polyfills since you never know which browser misses support for features that you think are commonplace. For example, Safari did not support a major and very useful new feature like IntersectionObserver until version 12.1, which came out in March 2019.
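A hedged sketch of that defensive approach (the function names are mine, not from any library): detect the feature, and fall back to rendering everything immediately so content stays visible, and indexable, on engines without `IntersectionObserver`.

```javascript
// `env` stands in for the browser's `window`, so the logic is testable
// anywhere. On engines without IntersectionObserver (such as the old
// Chromium 41 Googlebot), reveal all sections up front instead of lazily.
function hasIntersectionObserver(env) {
  return typeof env.IntersectionObserver === 'function';
}

function prepareSections(env, sections) {
  if (!hasIntersectionObserver(env)) {
    // Fallback: make every section visible so nothing depends on scrolling
    return sections.map(section => Object.assign({}, section, { visible: true }));
  }
  // With native (or polyfilled) support, sections can be revealed on scroll
  return sections;
}

const oldBot = {}; // no IntersectionObserver available
console.log(prepareSections(oldBot, [{ id: 'intro', visible: false }]));
// [ { id: 'intro', visible: true } ]
```

In a real site you would load the actual polyfill instead of the fallback branch, but the principle is the same: never let a missing feature silently hide content.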

JavaScript Errors

If you rely on Googlebot executing your JavaScript to render vital content, then major JavaScript errors which could prevent the content from rendering must be avoided at all costs. While bots might parse and index HTML which is not perfectly valid (although it is always preferable to have valid HTML on any site!), if there is a JavaScript error that prevents the loading of some content, then there is no way Google will index that content.

In any case, if you rely on JavaScript to render vital content to your end users, then it is likely that you already have extensive unit tests to check for blocking errors of any kind. Keep in mind, however, that JavaScript errors can arise from unpredictable scenarios, for example, improper handling of errors in API responses.

It is better to have some real-time error-checking software in place (such as Sentry or LogRocket) which will alert you of any edge-case errors you might not pick up during unit or manual testing. This adds to the complexity of relying on JavaScript for SEO content.

Other Search Engines

Other search engines do not handle dynamic content as well as Google. Bing does not seem to index dynamic content at all, and neither do DuckDuckGo or Baidu. Presumably, those search engines lack the resources and computing power that Google has in spades.

Parsing a page with a headless browser and executing JavaScript for a couple of seconds to parse the rendered content is certainly more resource-heavy than just reading plain HTML. Or maybe these search engines have made the choice not to scan dynamic content for some other reasons. Whatever the cause of this, if your project needs to support any of those search engines, you need to set up pre-rendering.

Note: To get more information on other search engines’ rendering capabilities, you can check this article by Bartosz Góralewicz. It is a bit old, but in my experience, it is still valid.

Other Bots

Remember that your site will be visited by other bots as well. The most important examples are Twitter, Facebook, and other social media bots that need to fetch meta information about your pages in order to show a preview of your page when it is linked by their users. These bots will not index dynamic content, and will only show the meta information that they find in the static HTML. This leads us to the next consideration.


If your site is a so-called “One Page website”, and all the relevant content is located in one main HTML file, you will have no problem getting that content indexed by Google. However, if you need Google to index and show any secondary page on the website, you will still need to create static HTML for each of those pages, even if you rely on your JavaScript framework to check the current URL and provide the relevant content to put in that page. My advice, in this case, is to create server-side (or static) pages that at least provide the correct title tag and meta description/information.

Conclusions

The conclusions I’ve come to while researching this article are the following:

  1. If you only target Google, it is not mandatory to use pre-rendering to have your site fully indexed, however:
  2. You should not rely on third-party web services for content that needs to be indexed, especially if they don’t reply quickly.
  3. The content you insert into your HTML immediately via Vue.js rendering does get indexed, but you shouldn’t use animated text or text that gets inserted in the DOM after user actions like scrolling, etc.
  4. Make sure you test for JavaScript errors, as they could result in entire pages/sections not being indexed, or your site not being indexed at all.
  5. If your site has multiple pages, you still need to have some logic to create pages that, while relying on the same front-end rendering system as the home page, can be indexed by Google as individual URLs.
  6. If you need to have different description and preview images for social media between different pages, you will need to address this too, either server-side or by compiling static pages for each URL.
  7. If you need your site to perform on search engines other than Google, you will definitely need pre-rendering of some sort.

Acknowledgements: Many thanks to Sigrid Holzner of SEO Bavaria / Rabbit Hole Consulting for her review of this article.


Switching From WordPress To Hugo


Christopher Kirk-Nielsen


When WordPress 5 was released, I was excited about making use of the Gutenberg editor to create custom blocks, as posts on my personal blog had a couple of features I could turn into a block, making it easier to set up my content. It was definitely a cool thing to have, yet it still felt quite bloated.

Around the same time, I started reading more and more about static site generators and the JAMstack (this article by Chris Ferdinandi convinced me). With personal side projects, you can kind of dismiss a wide variety of issues, but as a professional, you have to ensure you output the best quality possible. Performance, security and accessibility become the first things to think about. You can definitely optimize WordPress to be pretty fast, but faster than a static site on a CDN that doesn’t need to query the database nor generate your page every time? Not so easy.

I thought that I could put this into practice with a personal project of mine to learn and then be able to use this for professional projects, and maybe some of you would like to know how, too. In this article, I will go over how I made the transition from WordPress to a specific static site generator named Hugo.

Hugo is built in Go, a pretty fast and easy-to-use language once you get used to the syntax, which I will explain. Everything compiles locally, so you can preview your site right on your computer. The project will then be saved to a private repository. Additionally, I will walk you through how to host it on Netlify, and save your images with Git LFS (Large File Storage). Finally, we’ll have a look at how to set up a content management system to add posts and images (similar to the WordPress backend) with Netlify CMS.

Note that all of this is absolutely free, which is pretty amazing if you ask me (although you’ll have to pay extra if you use up all your LFS storage or if your site traffic is intense). Also, I am writing this from a Bitbucket user point of view, running on a Mac. Some steps might be slightly different but you should be able to follow along, no matter what setup you use.

You’ll need to be somewhat comfortable with HTML, CSS, JS, Git and the command terminal. Having a few notions with templating languages such as Liquid could be useful as well, but we will review Hugo’s templates to get you started. I will, nonetheless, provide as many details as possible!

I know it sounds like a lot, and before I started looking into this, it was for me, too. I will try to make this transition as smooth as possible for you by breaking down the steps. It’s not very difficult to find all the resources, but there was a bit of guesswork involved on my part, going from one documentation to the next.

  1. Exporting The Content From WordPress
  2. Preparing Your Blog Design
  3. Setting Up A New Repository
  4. Activating Git LFS (Optional)
  5. Creating The Site On Netlify
  6. Preparing For Netlify Large Media (Optional)
  7. Setting Up Hugo On Your Computer
  8. Creating Your Custom Theme
  9. Notes On The Hugo Syntax
  10. Content And Data
  11. Deploying On Netlify
  12. Setting Up A Custom Domain
  13. Editing Content On Netlify CMS

Note: If you have trouble with any of these steps, please let me know in the comments and I’ll try to help. Keep in mind, though, that this guide is meant for a simple, static blog that doesn’t have a dozen widgets or comments (you can set those up later), not a company site or personal portfolio. You could undoubtedly adapt it for those, but for the sake of simplicity, I’ll stick to a simple, static blog.


Before we do anything, let’s create a project folder where everything from our tools to our local repository is going to reside. I’ll call it “WP2Hugo” (feel free to call it anything you want).

This tutorial will make use of a few command line tools such as npm and Git. If you don’t have them already, install those on your machine:

With these installed, let’s get started!

1. Exporting The Content From WordPress

First off, we’ll need to export your content from WordPress: posts, pages, and uploads. There are a few tools available that Hugo mentions, but personally, only one of them worked for me: blog2md. It works by running a JavaScript file with Node.js in your command terminal. It takes the XML files exported by WordPress and outputs Markdown files with the right structure, converting your HTML to Markdown and adding what is called the Front Matter, which is a way to format metadata at the start of each file.

Go to your WordPress admin, and open the Tools menu, Export submenu. You can export what you want from there. I’ll refer to the exported file as YOUR-WP-EXPORT.xml.

WordPress export tool (Large preview)

You can select exactly what data you want to export from your WordPress blog.

Inside our WP2Hugo folder, I recommend creating a new folder named blog2md in which you’ll place the files from the blog2md tool, as well as your XML export from WordPress (YOUR-WP-EXPORT.xml). Also, create a new folder in there called out where your Markdown posts will go. Then, open up your command terminal, and navigate with the cd command to your newly created “blog2md” folder (or type cd with a space and drag the folder into the terminal).

You can now run the following commands to get your posts:

npm install
node index.js w YOUR-WP-EXPORT.xml out

Look into the /WP2Hugo/blog2md/out directory to check whether all of your posts (and potential pages) are there. If so, you might notice there’s something about comments in the documentation: I had a comment-free blog so I didn’t need them to be carried through but Hugo does offer several options for comments. If you had any comments on WordPress, you can export them for later re-implementation with a specialized service like Disqus.

If you’re familiar enough with JS, you can tweak the index.js file to change how your post files will come out by editing the wordpressImport function. You may want to capture the featured image, remove the permalink, change the date format, or set the type (if you have posts and pages). You’ll have to adapt it to your needs, but know that the loop (posts.forEach(function(post){ ... })) runs through all the posts from the export, so you can check for the XML content of each post in that loop and customize your Front Matter.

Additionally, if you need to update URLs contained in your posts (in my case, I wanted to make image links relative instead of absolute) or the date formatting, this is a good time to do so, but don’t lose sleep over it. Many text editors offer bulk editing so you can plug in a regular expression and make the changes you want across your files. Also, you can run the blog2md script as many times as needed, as it will overwrite any previously existing files in the output folder.

Once you have your exported Markdown files, your content is ready. The next step is to get your WordPress theme ready to work in Hugo.

2. Preparing Your Blog Design

My blog had a typical layout with a header, a navigation bar, content and sidebar, and a footer, and was quite simple to set up. Instead of copying pieces of my WordPress theme, I rebuilt it all from scratch to ensure there were no superfluous styles or useless markup. This is a good time to implement new CSS techniques (pssst… Grid is pretty awesome!) and set up a more consistent naming strategy (something like CSS Wizardry’s guidelines). You can do what you want, but remember we’re trying to optimize our blog, so it’s good to review what you had and decide if it’s still worth keeping.

Start by breaking down your blog into parts so you can clearly see what goes where. This will help you structure your markup and your styles. By the way, Hugo has the built-in ability to compile Sass to CSS, so feel free to break up those styles into smaller files as much as you want!

A blog layout with a banner up top, with a menu below it. The main area has a large section for content and a smaller side area for secondary content. At the bottom is a footer with a copyright note and links to the author’s Twitter page and their email.

A very simple blog layout. (Large preview)

When I say simple, I mean really simple.

Alternatively, you can completely bypass this step for now, and style your blog as you go when your Hugo site is set up. I had the basic markup in place and preferred an iterative approach to styles. It’s also a good way to see what works and what doesn’t.

3. Setting Up A New Repository

Now that that is out of the way, we need to set up a repository. I’m going to assume you will want to create a new repository for this, which is a great opportunity to use Git LFS (Large File Storage). The reason I advise doing this now is that implementing Git LFS when you already have hundreds of images is not as smooth. I’ve done it, but it was a headache you’re likely to want to avoid. This will also provide other benefits down the road with Netlify.

While I’ll be doing all this via Bitbucket and their proprietary Git GUI, Sourcetree, you can absolutely do this with GitHub and GitLab and their own desktop tools. You can also do it directly in the command terminal, but I like to automate and simplify the process as much as I can, reducing the risk of making silly mistakes.

When you’ve created your new repository on the Git platform of your choice, create an empty folder inside your local project folder (WP2Hugo), e.g. hugorepo, then open up your command terminal or Git GUI tool and initialize your local Git repository; then, link it to the remote repository (you can usually find the exact command to use on the newly created remote repository).

I’d recommend creating a dev (or stage) branch so that your main branch is strictly used for production deployments. It’ll also ensure that new builds are generated only when you’re done with a potential series of changes. Creating a branch can be done locally or on your repository’s remote webpage.

A guide to the various steps to get to the 'New branch' form on repositories. GitHub requires the user to click the active branch and type a new name in the input field. GitLab requires the user to click a 'plus' menu that reveals a dropdown menu with a 'New branch' link to a page with the form. Bitbucket requires the user to click the 'plus' in the general menu to slide out options and to click the 'Create a branch' link to access a new page with the form.

How to create a new branch on GitHub, GitLab and Bitbucket. (Large preview)

GitHub makes it easy to create a branch by clicking the branch switcher and typing a new name. On GitLab, you need to open the “Plus” dropdown to access the option. Bitbucket requires you to open the “Plus” menu on the left to open the slide-out menu and click “Create a branch” in the “Get to work” section.

4. Activating Git LFS (Optional)

Git Large File Storage (LFS) is a Git feature that allows you to save large files in a more efficient way, such as Photoshop documents, ZIP archives and, in our case, images. Since images may need versioning but are not exactly code, it makes sense to store them differently from regular text files. It works by storing the image on a remote server; the file in your repository is then a text file which contains a pointer to that remote resource.

Alas, it’s not an option you just click to enable. You must set up your repository to activate LFS and this requires some work locally. With Git installed, you need to install a Git-LFS extension:

git lfs install

If, like me, that command didn’t work for you, try the Homebrew alternative (for macOS or Linux):

brew install git-lfs

Once that’s done, you’ll have to specify which files to track in your repository. I will host all of the images I uploaded in WordPress’s /upload folder in an identically-named folder on my Hugo setup, except that this folder will be inside a /static folder (which will resolve to the root once compiled). Decide on your folder structure, and track your files inside:

git lfs track "static/uploads/*"

This will track any file inside the /static/uploads folder. You can also use the following:

git lfs track "*.jpg"

This will track any and all JPG files in your repository. You can mix and match to only track JPGs in a certain folder, for example.

With that in place, you can commit your LFS configuration files to your repository and push that to your remote repository. The next time you locally commit a file that matches the LFS tracking configuration, it will be “converted” to an LFS resource. If working on a development branch, merge this commit into your main branch.

Let’s now take a look at Netlify.

5. Creating The Site On Netlify

At this point, your repository is set up, so you can go ahead and create an account on Netlify. You can even log in with your GitHub, GitLab or Bitbucket account if you like. Once on the dashboard, click the “New site from Git” button in the top right-hand corner, and create your new Netlify site.

Note: You can leave all the options at their default values for now.

The form displayed on Netlify when a user creates a new website, with build options left to their default, empty values.

Netlify’s new site creation page. (Large preview)

Select your Git provider: this will open a pop-up window to authenticate you. When that is done, the window will close and you’ll see a list of repositories on that Git provider you have access to. Select your freshly created repo and continue. You’ll be asked a few things, most of which you can just leave by default as all the options are editable later on.

For now, in the Site Settings, click “Change site name” and name your site anything you want — I’ll go with chris-smashing-hugo-blog. We will now be able to access the site via a beautiful 404 page!

6. Preparing For Netlify Large Media (Optional)

If you set up Git LFS and plan on using Netlify, you’ll want to follow these steps. It’s a bit more convoluted but definitely worth it: it’ll enable you to set query strings on image URLs that will be automatically transformed.

Let’s say you have a link to portrait.jpg which is an image that’s 900×1600 pixels. With Netlify Large Media, you can call the file portrait.jpg?nf_resize=fit&w=420, which will proportionally scale it. If you define both w and h, and set nf_resize=smartcrop, it’ll resize by cropping to focus on the point of interest of the image (as determined by a fancy algorithm, a.k.a. robot brain magic!). I find this to be a great way to have thumbnails like the ones WordPress generates, without needing several files for an image on my repository.

If this sounds appealing to you, let’s set it up!

The first step is installing Netlify’s command-line interface (CLI) via npm:

npm install netlify-cli -g

If it worked, running the command netlify should result in info about the tool.

You’ll then need to make sure you are in your local repository folder (that I named “hugorepo” earlier), and execute:

netlify login

Authorize the token. Next, we’ll have to install the Netlify Large Media plugin. Run:

netlify plugins:install netlify-lm-plugin
netlify lm:install

The resulting message ends with a command line that you must copy (it should look like /Users/YOURNAME/.netlify/helper/ on a Mac); run it. Note that Keychain might ask you for your machine’s administrator password on macOS.

The next step is to link Netlify:

netlify link

You can provide your site name here (I provided the chris-smashing-hugo-blog name I gave it earlier). With this in place, you just need to set up the Large Media feature by executing the following:

netlify lm:setup

Commit these new changes to your local repository, and push them to the remote development branch. I had a few errors with Sourcetree and Keychain along the lines of git "credential-netlify" is not a git command. If that’s your case, try to manually push with these commands:

git add -A
git commit -m "Set up Netlify Large media"
git push

If that didn’t work, you might need to install Netlify credential Helper. Here’s how to do it with Homebrew:

brew tap netlify/git-credential-netlify
brew install git-credential-netlify

Try pushing your commit through now (either with your GUI or command terminal): it should work!

Note: If you change your Netlify password, run netlify logout and netlify login again.

You might ask: “All this, and we still haven’t even initialized our Hugo build?” Yes, I know, it took a while but all the preparations for the transition are done. We can now get our Hugo blog set up!

7. Setting Up Hugo On Your Computer

You’ll first need to install Hugo on your computer with any of the provided options. I’ll be using Homebrew but Windows users can use Scoop or Chocolatey, or download a package directly.

brew install hugo

You’ll then need to create a new Hugo site, but Hugo won’t like setting it up in a non-empty folder. First option: you can create it in a new folder and move its contents to the local repository folder:

hugo new site your_temporary_folder

Second option: you can force it to install in your local repository with a flag, just make sure you’re running that in the right folder:

hugo new site . --force

You now have a Hugo site, which you can spin up with this command:

hugo server

You’ll get a local preview on localhost. Sadly, you have no content and no theme of your own. Not to worry, we’ll get that set up really soon!

Let’s first have a look at the configuration file (config.toml in my case): let’s set up the blog’s name and base URL (this must match the URL on your Netlify dashboard):

title = "Chris’ Smashing Hugo Blog"
baseURL = ""

This link will be overwritten while you develop locally, so you shouldn’t run into 404 errors.

Let’s give Hugo our exported articles in Markdown format. They should be sitting in the /WP2Hugo/blog2md/out folder from the first step. In the Hugo folder (a.k.a. the local repository directory), access the content folder and create a subfolder named posts. Place your Markdown files in there, and then let’s get a theme set up.

8. Creating Your Custom Theme

For this step, I recommend downloading the Saito boilerplate, which is a theme with all the partials you’ll need to get started (and no styles) — a very useful starting point. You could, of course, look at this collection of ready-made themes for Hugo if you want to skip over this part of the process. It’s all up to you!

From the local repository folder, clone the theme into themes/saito:

git submodule add themes/saito  

You can rename this folder to anything you want, such as cool-theme. You’ll have to tell your Hugo configuration which theme you want to use by editing your config.toml/yaml/json file. Edit the theme value to saito, or cool-theme, or whatever your theme’s folder name is. Your preview should now show your blog’s title along with a copyright line. It’s a start, right?

Open the theme’s layout/partials/home.html file and edit it to display your content, limited to the first five items of type posts (inside the content/posts/ folder), with range, first and where:

<div class="container">
{{ range first 5 (where .Paginator.Pages "Type" "posts") }}
    <article class="post post--{{ .Params.class }}">
        <h2 class="post__title">{{ .Title }}</h2>
        <section class="post__content">
            {{ .Content }}
        </section>
    </article>
{{ end }}
</div>

Your content is now visible, in the most basic of ways. It’s time to make it yours — let’s dive in!

Templating With Hugo

You can first read the Introduction to Hugo templating if you like, but I’ll try to go over a few essentials that will help you understand the basics.

All operations in Hugo are defined inside delimiters: double curly braces (e.g. {{ .Title }}), which should feel familiar if you’ve done a bit of templating before. If you haven’t, think of it as a way to execute operations or inject values at a specific point in your markup. Block operations end with the {{ end }} tag, shortcodes being the exception.

Themes have a layout folder which contains the pieces of the layout. The _default folder will be Hugo’s starting point, baseof.html being (you guessed it!) the base of your layout. It will call each component, called “partials” (more on this on Hugo’s documentation about Partial Template), similar to how you would use include in PHP, which you may have already seen in your WordPress theme. Partials can call other partials — just don’t make it an infinite loop.

You can call a partial with the {{ partial "file.html" . }} syntax. The partial keyword is pretty straightforward, but the other two parts might need explaining. You might expect to have to write partials/file.html, but since all partials must be in the “partials” folder, Hugo can find that folder just fine. Of course, you can create subfolders inside the “partials” folder if you need more organization.

You may have noticed a stray dot: this is the context you’re passing to your partial. If you had a menu partial, and a list of links and labels, you could pass that list into the partial so that it can only access that list, and nothing else. I’ll talk more about this elusive dot in the next section.
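Since Hugo templates are Go templates under the hood, the idea of passing a narrowed context can be sketched with nothing but Go’s standard `text/template` package (the template names and data keys below are made up for illustration; Hugo’s `partial` works the same way in spirit):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render executes a root template that hands only .Links to the
// nested "menu" template, so inside "menu" the dot is just that
// list — the rest of the data (e.g. "Title") is invisible there.
func render() string {
	root := template.Must(template.New("root").Parse(
		`{{ template "menu" .Links }}`))
	template.Must(root.New("menu").Parse(
		`{{ range . }}[{{ . }}]{{ end }}`))

	var buf bytes.Buffer
	if err := root.Execute(&buf, map[string]any{
		"Links": []string{"Home", "About"},
		"Title": "My Blog", // not reachable from inside "menu"
	}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() { fmt.Print(render()) }
// prints: [Home][About]
```

The nested template never sees `.Title`, which is exactly the kind of isolation you get when you pass a scoped value instead of the full dot to a Hugo partial.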

Your baseof.html file is a shell that calls all the various partials needed to render your blog layout. It should have minimal HTML and lots of partials:

<!DOCTYPE html>
<html lang="{{ .Site.LanguageCode }}">
    <head>
        <title>{{ block "title" . }}{{ .Site.Title }}{{ end }}</title>
        {{ partial "head.html" . }}
    </head>
    <body>
        {{ partial "header.html" . }}
        {{ partial "nav.html" . }}
        <main>
            {{ block "main" . }}{{ end }}
            {{ partial "sidebar.html" . }}
        </main>
        {{ partial "footer.html" . }}
    </body>
</html>
The {{ block "main" . }}{{ end }} line is different because it is a block that is defined with a template based on the content of the current page (homepage, single post page, etc.) with {{ define "main" }}.
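This block/define override mechanism comes straight from Go’s template package, so it can be demonstrated standalone (the template names here are illustrative, not Hugo’s actual file names):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render shows how {{ block }} declares a named slot with default
// content, and how a later {{ define }} (in Hugo: a page template)
// overrides that slot when the base template is executed.
func render() string {
	base := template.Must(template.New("base").Parse(
		`<main>{{ block "main" . }}no content{{ end }}</main>`))
	// A "page" providing its own main block:
	template.Must(base.Parse(`{{ define "main" }}Post: {{ . }}{{ end }}`))

	var buf bytes.Buffer
	if err := base.Execute(&buf, "Hello, Hugo"); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() { fmt.Print(render()) }
// prints: <main>Post: Hello, Hugo</main>
```

Without the second `Parse` call, the block would fall back to its default body (`no content`), which is how Hugo renders pages that don’t define their own main block.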


In your theme, create a folder named assets in which we will place a css folder. It will contain our SCSS files, or a trusty ol’ CSS file. Now, there should be a css.html file in the partials folder (which gets called by head.html). To convert Sass/SCSS to CSS, and minify the stylesheet, we would use this series of functions (using the Hugo Pipes syntax instead of wrapping the functions around each other):

{{ $style := resources.Get "css/style.scss" | toCSS | minify | fingerprint }}

As a bonus — since I struggled to find a straight answer — if you want to use Autoprefixer, Hugo also implements PostCSS. You can add an extra pipe function between toCSS and minify on the first line, like so:

{{ $style := resources.Get "css/style.scss" | toCSS | postCSS | minify | fingerprint }}

Create a “postcss.config.js” file at the root of your Hugo blog, and pass in the options, such as:

module.exports = {
    plugins: {
        autoprefixer: {
            browsers: [
                "> 1%",
                "last 2 versions"
            ]
        }
    }
};
And presto! From Sass to prefixed, minified CSS. The “fingerprint” pipe function is to make sure the filename is unique, like style.c66e6096bdc14c2d3a737cff95b85ad89c99b9d1.min.css. If you change the stylesheet, the fingerprint changes, so the filename is different, and thus, you get an effective cache busting solution.

9. Notes On The Hugo Syntax

I want to make sure you understand “the Dot”, which is how Hugo scopes variables (or in my own words, provides a contextual reference) that you will be using in your templates.

The Dot And Scoping

The Dot is like a top-level variable that you can use in any template or shortcode, but its value is scoped to its context. The Dot’s value in a top-level template like baseof.html is different from the value inside loop blocks or with blocks.

Let’s say this is in our template in our head.html partial:

{{ with .Site.Title }}{{ . }}{{ end }}

Even though we are running this in the main scope, the Dot’s value changes based on context, which is .Site.Title in this case. So, to print the value, you only need to write . instead of re-typing the variable name. This confused me at first, but you get used to it really quickly, and it helps reduce redundancy since you only name the variable once. If something doesn’t work, it’s usually because you’re trying to call a top-level variable inside a scoped block.

So how do you use the top-level scope inside a scoped block? Well, let’s say you want to check for one value but use another. You can use $ which will always be the top-level scope:

{{ with .Site.Params.InfoEnglish }}{{ $.Site.Params.DescriptionEnglish }}{{ end }}

Inside our condition, the scope is .Site.Params.InfoEnglish but we can still access values outside of it with $, where intuitively using .Site.Params.DescriptionEnglish would not work because it would attempt to resolve to .Site.Params.InfoEnglish.Site.Params.DescriptionEnglish, throwing an error.
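Because this re-scoping behaviour is inherited from Go’s template engine, a tiny standard-library example shows it in action (the data keys are invented for this sketch):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render demonstrates dot re-scoping: inside `with`, the dot becomes
// the tested value, while $ still points at the top-level data.
func render() string {
	t := template.Must(template.New("dot").Parse(
		`{{ with .Info }}dot={{ . }} root={{ $.Description }}{{ end }}`))
	var buf bytes.Buffer
	if err := t.Execute(&buf, map[string]string{
		"Info":        "hello",
		"Description": "a demo site",
	}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() { fmt.Print(render()) }
// prints: dot=hello root=a demo site
```

Writing `.Description` instead of `$.Description` inside the `with` block would fail, because the dot there is the string "hello", not the top-level map.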

Custom Variables

You can assign variables by using the following syntax:

{{ $customvar := "custom value" }}

The variable name must start with $ and the assignment operator must be := if it’s the first time it’s being assigned, = otherwise like so:

{{ $customvar = "updated value" }}
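Both forms can be verified with plain Go templates, which use the exact same `:=`/`=` distinction:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render declares $v with := and then updates it with =,
// exactly as Hugo's (Go-based) templates do.
func render() string {
	t := template.Must(template.New("vars").Parse(
		`{{ $v := "custom value" }}{{ $v = "updated value" }}{{ $v }}`))
	var buf bytes.Buffer
	if err := t.Execute(&buf, nil); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() { fmt.Print(render()) }
// prints: updated value
```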

The problem you might run into is that such a value won’t carry over outside of the scope it was assigned in, which brings me to my next point.

Scratch

The Scratch functionality allows you to assign values that are available in all contexts. Say you have a list of movies in a movies.json file:

[
    {
        "name": "The Room",
        "rating": 4
    },
    {
        "name": "Back to the Future",
        "rating": 10
    },
    {
        "name": "The Artist",
        "rating": 7
    }
]

Now, you want to iterate over the file’s contents and store your favorite one to use later. This is where Scratch comes into play:

{{ .Scratch.Set "favouriteMovie" "None" }}{{ /* Optional, just to show you the difference in syntax based on the scope */ }}

{{ range .Site.Data.movies }}
        {{ if ge .rating 10 }}
            {{ /* We must use .Scratch prefixed with a $, because the scope is .Site.Data.movies, at the current index of the loop */ }}
            {{ $.Scratch.Set "favouriteMovie" .name }}
        {{ end }}
{{ end }}
My favourite movie is {{ .Scratch.Get "favouriteMovie" }}
<!-- Expected output => My favourite movie is Back to the Future -->

With Scratch, we can extract a value from inside the loop and use it anywhere. As your theme gets more and more complex, you will probably find yourself reaching for Scratch.

Note: This is merely an example as this loop can be optimized to output this result without Scratch, but this should give you a better understanding of how it works.
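To see why a shared store escapes the loop’s scope, the Scratch idea can be imitated in plain Go templates with two toy helper functions backed by a map (scratchSet/scratchGet are invented for this sketch; they are not Hugo’s actual API, which is `.Scratch.Set`/`.Scratch.Get`):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render iterates over the movies, stores the top-rated name in a
// Go map shared by the helper functions, and reads it back after
// the loop — mimicking what Hugo's Scratch does.
func render() string {
	scratch := map[string]any{}
	funcs := template.FuncMap{
		"scratchSet": func(k string, v any) string { scratch[k] = v; return "" },
		"scratchGet": func(k string) any { return scratch[k] },
	}
	src := `{{ range .Movies }}{{ if ge .rating 10 }}{{ scratchSet "fav" .name }}{{ end }}{{ end }}` +
		`My favourite movie is {{ scratchGet "fav" }}`
	t := template.Must(template.New("scratch").Funcs(funcs).Parse(src))

	data := map[string]any{"Movies": []map[string]any{
		{"name": "The Room", "rating": 4},
		{"name": "Back to the Future", "rating": 10},
		{"name": "The Artist", "rating": 7},
	}}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() { fmt.Print(render()) }
// prints: My favourite movie is Back to the Future
```

The map lives outside the template entirely, which is why the value set inside the range block is still readable after it, just like Scratch.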

Conditionals

The syntax for conditionals is a bit different from what you’d expect coming from JavaScript or PHP. They are, in essence, functions which take two arguments (parentheses are optional if you pass the values directly):

{{ if eq .Site.LanguageCode "en-us" }}Welcome!{{ end }}

There are several of these functions:

  • eq checks for equality
  • ne checks for inequality
  • gt checks for greater than
  • ge checks for greater than or equal to
  • lt checks for lesser than
  • le checks for lesser than or equal to

Note: You can learn all about the functions Hugo offers in the Hugo Functions Quick Reference.
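These prefix comparison functions are inherited directly from Go’s template language, so a minimal standard-library example exercises them (the `.Rating` field is made up for this sketch):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render uses the prefix comparison function ge, which Hugo
// inherits from Go's template language.
func render() string {
	t := template.Must(template.New("cmp").Parse(
		`{{ if ge .Rating 7 }}recommended{{ else }}skip it{{ end }}`))
	var buf bytes.Buffer
	if err := t.Execute(&buf, map[string]int{"Rating": 10}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() { fmt.Print(render()) }
// prints: recommended
```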

Whitespace

If you’re as picky about the output as I am, you might notice some undesired blank lines. This is because Hugo will parse your markup as is, leaving blank lines around conditionals that were not met, for example.

Let’s say we have this hypothetical partial:

{{ if eq .Site.LanguageCode "en-us" }}
<p>Welcome to my blog!</p>
{{ end }}
<img src="/uploads/portrait.jpg" alt="Blog Author">

If the site’s language code is not en-us, this will be the HTML output (note the three empty lines before the image tag):

<img src="/uploads/portrait.jpg" alt="Blog Author">

Hugo provides a syntax to address this with a hyphen beside the curly braces on the inside of the delimiter. {{- will trim the whitespace before the braces, and -}} will trim the whitespace after the braces. You can use either or both at the same time, but just make sure there is a space between the hyphen and the operation inside of the delimiter.

As such, if your template contains the following:

{{- if eq .Site.LanguageCode "en-us" -}}
<p>Welcome to my blog!</p>
{{- end -}}
<img src="/uploads/portrait.jpg" alt="Blog Author">

…then the markup will result in this (with no empty lines):

<img src="/uploads/portrait.jpg" alt="Blog Author">

This can be helpful for other situations like elements with display: inline-block that should not have whitespace between them. Conversely, if you want to make sure each element is on its own line in the markup (e.g. in a {{ range }} loop), you’ll have to carefully place your hyphens to avoid “greedy” whitespace trimming.

The example above would output the following if the site’s language code matches “en-us” (no more line breaks between the p and img tags):

<p>Welcome to my blog!</p><img src="/uploads/portrait.jpg" alt="Blog Author">
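The trimming behavior can be verified with plain Go text/template, which uses the same {{- and -}} markers. This is a minimal sketch, with the language code passed in directly rather than read from a site config:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderTrim demonstrates the {{- and -}} delimiters: they strip the
// whitespace (including newlines) adjacent to the template action.
func renderTrim(langCode string) string {
	const text = "{{- if eq . \"en-us\" -}}\n" +
		"<p>Welcome to my blog!</p>\n" +
		"{{- end -}}\n" +
		"<img src=\"/uploads/portrait.jpg\" alt=\"Blog Author\">"
	var sb strings.Builder
	if err := template.Must(template.New("page").Parse(text)).Execute(&sb, langCode); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	fmt.Println(renderTrim("en-us"))
	// => <p>Welcome to my blog!</p><img src="/uploads/portrait.jpg" alt="Blog Author">
	fmt.Println(renderTrim("fr-fr"))
	// => <img src="/uploads/portrait.jpg" alt="Blog Author">
}
```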

10. Content And Data

Your content is stored as Markdown files, but you can use HTML, too. Hugo will render it properly when building your site.

Your homepage will call the _default/list.html layout, which might look like this:

{{ define "main" }}
    {{ partial "list.html" . }}
{{ end }}

The main block calls the list.html partial with the context of ., a.k.a. the top level. The list.html partial may look like this:

{{ define "main" }}
<ol class="articles">
    {{ range .Paginator.Pages }}
                <a href="{{ .URL }}">
                    <h2>{{ .Title }}</h2>
                    <img src="{{ .Params.featuredimage }}" alt="">
                    <time datetime="{{ .Date.Format "2006-01-02" }}">
                        {{ .Date.Format "January 2 2006" }}
    {{ end }}
{{ partial "pagination.html" . }}
{{ end }}

Now we have a basic list of our articles, which you can style as you wish! The number of articles per page is defined in the configuration file, with paginate = 5 (in TOML).

You might be as utterly confused as I was by the date formatting in Hugo. The way each time unit is mapped to a number (first month, second day, third hour, etc.) made a lot more sense to me once I saw the visual explanation below from the Go language documentation, which is kind of weird, but kind of smart, too!

 Jan 2 15:04:05 2006 MST
=> 1 2  3  4  5    6  -7

Now all that’s left to do is to display your post on a single page. You can edit the post.html partial to customize your article’s layout:

<article>
    <h1>{{ .Title }}</h1>
    <p>
        Posted on <time datetime="{{ .Date.Format "2006-01-02" }}">{{ .Date.Format "2006. 1. 2" }}</time>
    </p>
    {{ .Content }}
</article>

And that’s how you display your content!

If you’d like to customize the URLs, update your configuration file by adding a [permalinks] option (TOML), which in this case will build each post’s URL from its filename:

    [permalinks]
        posts = ":filename/"

If you want to generate an RSS feed of your content (because RSS is awesome), add the following in your site configuration file (Saito’s default template will display the appropriate tags in head.html if these options are detected):

rssLimit = 10

[outputFormats]
    [outputFormats.RSS]
        mediatype = "application/rss"
        baseName = "feed"

But what if you have some sort of content outside of a post? That’s where data templates come in: you can create JSON files and extract their data to create your menu or an element in your sidebar. YAML and TOML are also options, but they're less readable with complex data (e.g. nested objects). You could, of course, set this in your site’s configuration file, but to me that is a bit less easy to navigate and less forgiving.

Let’s create a list of “cool sites” that you may want to show in your sidebar — with a link and a label for each site as an array in JSON:

    "coolsites": [
        { "link": "", "label": "Smashing Magazine" },
        { "link": "", "label": "Hugo" },
        { "link": "", "label": "Netlify" }

You can save this file in your repository root, or your theme root, inside a data folder, such as /data/coolsites.json. Then, in your sidebar.html partial, you can iterate over it with range using .Site.Data.coolsites:

<h3>Cool Sites:</h3>
<ul>
{{ range .Site.Data.coolsites.coolsites }}
    <li><a href="{{ .link }}">{{ .label }}</a></li>
{{ end }}
</ul>

This is very useful for any kind of custom data you want to iterate over. I used it to create a Google Fonts list for my theme, the categories posts can be in, the authors (with bio, avatar, and homepage link), and which menus to show and in which order. You can really do a lot with this, and it is pretty straightforward.

A final thought on data and such: anything you put in your Hugo /static folder will be available on the root (/) on the live build. The same goes for the theme folder.

11. Deploying On Netlify

So you’re done, or maybe you just want to see what kind of magic Netlify operates? Sounds good to me, as long as your local Hugo server doesn’t return an error.

Commit your changes and push them to your remote development branch (dev). Head over to Netlify next, and access your site’s settings. You will see an option for “Build & deploy”. We’re going to need to change a couple of things here.

  1. First, in the “Build settings” section, make sure “Build command” is set to hugo and that “Publish directory” is set to public (the default, which it’s recommended you keep in your Hugo config file);
  2. Next, in the “Deploy contexts” section, set “Production branch” to your main branch in your repository. I also suggest setting “Branch deploys” to “Deploy only the production branch”;
  3. Finally, in the “Environment variables” section, edit the variables and click “New variable”. We’re going to set the Hugo environment to 0.53 with the following pair: set key to HUGO_VERSION and value to 0.53.

Now head on over to your remote repository and merge your development branch into your main branch: this will be the hook that will deploy your updated blog (this can be customized but the default is reasonable to me).

Back to your Netlify dashboard, your site’s “Production deploys” should have some new activity. If everything went right, this should process and resolve to a “Published” label. Clicking the deploy item will open an overview with a log of the operations. Up top, you will see “Preview deploy”. Go on, click it — you deserve it. It’s alive!

12. Setting Up A Custom Domain

Maybe the default Netlify URL isn’t to your taste, and you already own a custom domain. I get it. Let’s change that!

Head over to your domain registrar and go to your domain’s DNS settings. Here, you’ll have to create a new entry: you can either set an ALIAS/CNAME record that points to your Netlify subdomain, or set an A record that points your domain to Netlify’s load balancer IP.

You can find the latest information on Netlify’s documentation on custom domains. The load balancer IP will be in the DNS settings section, under “Manual DNS configuration for root and www custom domains”.

When that’s done, head over to your site’s dashboard on Netlify and click “Domain settings”, where you’ll see “Add custom domain”. Enter your domain name to verify it.

You can also manage your domains via your dashboard in the Domains tab. The interface feels less confusing on this page, but maybe it will help make more sense of your DNS settings as it did for me.

Note: Netlify can also handle everything for you if you want to buy a domain through them. It’s easier but it’s an extra cost.

After you’ve set up your custom domain, in “Domain settings”, scroll down to the “HTTPS” section and enable the SSL/TLS certificate. It might take a few minutes but it will grant you a free certificate: your domain now runs on HTTPS.

13. Editing Content On Netlify CMS

If you want to edit your articles, upload images and change your blog settings like you’d do on WordPress’ back-end interface, you can use Netlify CMS which has a pretty good tutorial available. It’s a single file that will handle everything for you (and it is generator-agnostic: it will work with Jekyll, Eleventy, and so on).

You just need to upload two files in a folder:

  • the CMS (a single HTML file);
  • a config file (a YAML file).

The latter will hold all the settings of your particular site.

Go to your Hugo root’s /static folder and create a new folder whose name becomes the path where you’ll access the CMS (I will call mine admin). Inside this admin folder, create an index.html file by copying the markup provided by Netlify CMS:

<!doctype html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Content Manager</title>
</head>
<body>
    <!-- Include the script that builds the page and powers Netlify CMS -->
    <script src="https://unpkg.com/netlify-cms@^2.0.0/dist/netlify-cms.js"></script>
</body>
</html>

The other file you’ll need to create is the configuration file: config.yml. It will allow you to define your site’s settings (name, URL, etc.) so that you can set up what your posts’ front matter should contain, as well as how your data files (if any) should be editable. It’s a bit more complex to set up, but that doesn’t mean it isn’t easy.

If you’re using GitHub or GitLab, start your config.yml file with:

backend:
    name: git-gateway
    branch: dev # Branch to update (optional; defaults to master)

If you’re using Bitbucket, it’s a bit different:

backend:
    name: bitbucket
    repo: your-username/your-hugorepo
    branch: dev # Branch to update (optional; defaults to master)

Then, for our uploads, we’ll have to tell the CMS where to store them:

media_folder: "static/images/uploads" # Media files will be stored in the repo under static/images/uploads
public_folder: "/images/uploads" # The src attribute for uploaded media will begin with /images/uploads

When you create a new post, the CMS will generate the slug for the filename which you can customize with three options:

    encoding: "ascii" # You can also use "unicode" for non-Latin
    clean_accents: true # Removes diacritics from characters like é or å
    sanitize_replacement: "-" # Replace unsafe characters with this string

Finally, you’ll need to define how the data in your posts is structured. I will also define how the data file coolsites is structured — just in case I want to add another site to the list. These are set with the collections object which will definitely be the most verbose one, along with a nice handful of options you can read more about here.

    - name: "articles" # Used in routes, e.g., /admin/collections/blog
        label: "Articles" # Used in the Netlify CMS user interface
        folder: "content/posts" # The path to the folder where the posts are stored, usually content/posts for Hugo
        create: true # Allow users to create new documents in this collection
        slug: "{{slug}}" # Filename template, e.g.,
        fields: # The fields for each document, usually in front matter
            - {label: "Title", name: "title", widget: "string", required: true}
            - {label: "Draft", name: "draft", widget: "boolean", default: true }
            - {label: "Type", name: "type", widget: "hidden", default: "post" }
            - {label: "Publish Date", name: "date", widget: "date", format: "YYYY-MM-DD"}
            - {label: "Featured Image", name: "featuredimage", widget: "image"}
            - {label: "Author", name: "author", widget: "string"}
            - {label: "Body", name: "body", widget: "markdown"}
    - name: 'coolsites'
            label: 'Cool Sites'
            file: 'data/coolsites.json'
            description: 'Website to check out'
                - name: coolsites
                    label: Sites
                    label_singular: 'Site'
                    widget: list
                        - { label: 'Site URL', name: 'link', widget: 'string', hint: 'https://…' }
                        - { label: 'Site Name', name: 'label', widget: 'string' }

Note: You can read more about how to configure individual fields in the Netlify CMS Widgets documentation which goes over each type of widget and how to use them — especially useful for date formats.


The last thing we need to do is to ensure only authorized users can access the backend! Using your Git provider’s authentication is an easy way to go about this.

Head over to your Netlify site and click the “Settings” tab. Then go to “Access control” which is the last link in the menu on the left side. Here, you can configure OAuth to run via GitHub, GitLab or Bitbucket by providing a key and a secret value defined for your user account (not in the repository). You’ll want to use the same Git provider as the one your repo is saved on.


Go to your “Settings” page on GitHub (click your avatar to reveal the menu), and access “Developer Settings”. Click “Register a new application” and provide the required values:

  • a name, such as “Netlify CMS for my super blog”;
  • a homepage URL, the link to your Netlify site;
  • a description, if you feel like it;
  • the application callback URL, which must be “”.

Save, and you’ll see your Client ID and Client Secret. Provide them to Netlify’s Access Control.


Click your avatar to access the Settings page, and click “Applications” in the “User Settings” menu on the left. You’ll see a form to add a new application. Provide the following information:

  • a name, such as “Netlify CMS for my super blog”;
  • a redirect URI, which must be “”;
  • the scopes that should be checked are:
    • api
    • read_user
    • read_repository
    • write_repository
    • read_registry

Saving your application will give you your Application ID and Secret, that you can now enter on Netlify’s Access Control.


Head over to your user account settings (click your avatar, then “Bitbucket settings”). Under “Access Management”, click “OAuth”. In the “OAuth consumers” section, click “Add consumer”. You can leave most things at their default values except for these:

  • a name, such as “Netlify CMS for my super blog”;
  • a callback URL, which must be “”;
  • the permissions that should be checked are:
    • Account: Email, Read, Write
    • Repositories: Read, Write, Admin
    • Pull Requests: Read, Write
    • Webhooks: Read and write

After saving, you can access your key and secret, which you can then provide back on Netlify’s Access Control.

After providing the tokens, go to Netlify, and find the Site Settings. Head to “Identity” and enable the feature. You can now add an External Provider: select your Git provider and click on “Enable”.

In case you need additional details, Netlify CMS has an authentication guide you can read.

You can now access your Netlify site’s backend and edit content. Every edit is a commit on your repo, in the branch specified in your configuration file. If you kept your main branch as the target for Netlify CMS, each time you save, it will run a new build. More convenient, but not as clean with “in-between states”.

Having it save on a dev branch allows you to have finer control on when you want to run a new build. This is especially important if your blog has a lot of content and requires a longer build time. Either way will work; it’s just a matter of how you want to run your blog.

Also, please note that Git LFS is something you installed locally, so images uploaded via Netlify CMS will be committed as regular files. If you pull your remote branch locally, the images should be converted to LFS, which you can then commit and push to your remote branch. Also note that Netlify CMS does not currently support LFS, so images will not be displayed in the CMS, but they will show up in your final build.

Recommended reading: Static Site Generators Reviewed: Jekyll, Middleman, Roots, Hugo


What a ride! In this tutorial, you’ve learned how to export your WordPress posts to Markdown files, create a new repository, set up Git LFS, host a site on Netlify, generate a Hugo site, create your own theme, and edit the content with Netlify CMS. Not too bad!

What’s next? Well, you could experiment with your Hugo setup and read more about the various tools Hugo offers — there are many that I didn’t cover for the sake of brevity.

Explore! Have fun! Blog!


Smashing Editorial
(dm, yk, il)
Reblogged 5 hours ago from

Why has paid search been pigeonholed to the bottom of the funnel?

To grow your total business impact from SEM you’ll need to invest in upper funnel tactics to drive more consumers into your funnel.

Please visit Search Engine Land for the full article.

Reblogged 6 hours ago from

Reimagining the marketing funnel for the disrupted customer experience

To optimize for conversions, create reports based on segments, optimize landing pages and don’t assume visitors will follow a predetermined route.

Please visit Search Engine Land for the full article.

Reblogged 16 hours ago from

8 ways to boost your social media conversion rate

Social consumers have more purchasing power than ever before.

According to recent social media statistics, Instagram and Facebook have become two of the top channels for folks looking to research and buy products online. Consumers have become accustomed to the concept of social selling, which is good news for budding businesses and big brands alike.

That doesn’t mean social sales are a foregone conclusion, though.

In fact, the number one challenge of brands in 2019 is aligning their social strategy with their business goals.

If you’re struggling for an ROI or want to generate more customers from social media, you need to look at your social conversion strategy.

In this guide, we cover the specific tactics and tools necessary to boost your social media conversion rate. These tips can help create more social customers no matter what stage they’re currently at within your marketing funnel.

With that, let’s dive in!

1. Make your landing pages seamless and mobile-friendly

With mobile ecommerce on the rise, the need for brands to appeal to customers on-the-go is a no-brainer.

A brand’s ability to win over buyers via social boils down to creating a seamless experience. Think about how users navigate Instagram, swiping and tapping as they move from Point A to Point B. Your social landing pages should follow the same principles – easy to navigate with minimal interruption.

For starters, your social landing pages shouldn’t be hidden to customers. Check out how RageOn promotes their storefront in their Instagram bio. Pretty simple, right?

RageOn's bio leads directly to a mobile-optimized landing page

Upon clicking, it’s clear that their landing page is optimized with their social media conversion rate in mind. Swipeable and scrollable, with large, bright buttons to boot, the page shows that mobile shoppers obviously aren’t being treated as an afterthought.

RageOn's mobile landing page is optimized for social users

Here’s another awesome example from Bose, with an intuitive, interactive landing page tailored for customers on-the-go.

Bose's product pages are highly interactive and engaging, encouraging visitors to spend more time on-site to eventually convert

Whether you’re promoting offers on Instagram, Facebook or anywhere in-between, having a mobile-friendly landing page is a game-changer. Not only can you appeal specifically to mobile consumers, but you can also better assess the behavior of your social shoppers.

If you’re not sure if your landing pages are up to snuff, a quick mobile test via Google can give you some peace of mind.

Google's mobile-friendly test can give you peace of mind regarding the usability status of your landing pages

To further improve your social media conversion strategy, you can continuously split test your social landing pages and optimize them over time. Tools such as Optimizely allow you to A/B test elements such as imagery, copy and link placement to maximize conversions.

Optimizely allows brands to test their social landing pages to increase their conversion rate

And if you need a better idea of what a fine-tuned social landing page looks like, you can check out some of Unbounce‘s mobile-friendly landing page templates for inspiration.

2. Get more eyes on your promotions via video

Product-related videos go hand in hand with a higher social media conversion rate.

Shown to increase conversions and time spent on any given page, video-centric posts and ads are all the rage among brands for good reason. Videos do double duty: they show off your products in action and catch the eyes of social customers.

It’s no secret that social ads centered around video traditionally perform well. Here’s an example from BigCommerce, whose recent Facebook ad campaign tripled their free trial conversions through social video.

BigCommerce tripled their free trial conversion rate through Facebook ads

The same rings true on Instagram where Stories ads are killing it right now. This campaign from Nuxe scored 6x ROI with simple, stop-motion video.

Nuxe's Instagram stories ad saw a 6x return on their social ad spend

Remember: video ads and content don’t need to be massive, big-budget productions. Anything you can do to catch your customers’ eyes and get them to stop scrolling is a plus. Video does exactly that.

Integrating video in any shape or form is crucial for your organic content, too. Whether it’s mini-commercials or creative product displays, video should be central to your social media conversion strategy.

3. Include compelling calls to action

Sometimes increasing your social media conversion rate means making small tweaks to your profiles and captions.

Asking for followers to check out your recent promotions is totally fair game, granted you’re tactful about it.

In other words, you can’t just scream “BUY OUR STUFF” and expect much traction. Instead, make a point to subtly encourage engagement with your calls to action.

For example, J. Crew invites followers to shop their Instagram feed in their bio. Straightforward, but effective.

J Crew's "shop our feed" in their bio serves as an actionable call-to-action

J. Crew’s feed isn’t shy about promoting products. However, the brand makes a point to put a bit of personality behind their promotions with captions that sound like they were written by a human versus a bot. Either way, they point directly to their product page without being pushy about it.

Bear in mind that there’s some debate going on right now over how explicit brands should be about promoting offers, though. As noted by Rand Fishkin and a recent study by Agora Pulse, some marketers are speculating that Instagram might be penalizing posts that use variations of the phrase “link in bio.”

Whether the phrase is being actively penalized by the Instagram algorithm or it just isn’t a successful strategy, the takeaway here is that brands should experiment with captions and calls to action to encourage engagement and find what works. This includes questions, tag-a-friend posts and other less “salesy” messages to your followers.

4. Split test your social posts

Just like any sort of marketing metric, analyzing your social media conversion rate means looking at your data.

For example, do you know what types of content score the most engagement, clicks and traffic to your website? By understanding your posts by the numbers, you can adjust your content strategy to align with your social media conversion strategy.

And honestly? No brand is going to get it “right” from day one. Upping your conversion rate means playing the long game of analyzing and optimizing.

The good news is that tools such as Sprout can help speed up the process so you don’t have to endlessly experiment. For example, Sprout’s social analytics can help you understand what your top-performing posts are across all networks. Sprout’s reports include everything from your best hashtags to behavioral trends among your followers.

Sprout's engagement report can clue you in on the data behind your social media conversion rate

Based on these numbers, you can better determine which content is driving engagement, and when.

Through Sprout's engagement report, you can define which posts are performing best

Speaking of “when,” features such as ViralPost enable you to schedule individual posts based on when they’ll receive maximum engagement. The more eyes on your promos, the more potential customers that can make a purchase.

Viralpost is an invaluable tool for optimizing your post's engagement for conversions

By regularly looking at your analytics, you can split test your organic campaigns to figure out which posts are most poised for engagement.

If you’re interested in running a paid campaign, Facebook and Instagram actually allow you to split test your ads automatically. In short, you can run two versions of the same promo simultaneously and identify the winner based on performance. Here’s a snapshot from Facebook themselves.

Facebook's ad split testing allows you to hone in on which version of your ad will convert more customers once you scale

5. Be consistent with your branding

This is a subtle tip, but definitely worth mentioning for the sake of your conversions.

As prospects and leads move through your funnel, there shouldn’t be any second-guessing where they’re coming from.

Creative elements such as your tone, imagery and color scheme need to be consistent as your customers approach the point of purchase. Although this might not seem like a make-or-break moment, pulling a creative bait-and-switch on your followers can be disorienting.

For example, check out this promotional post from Halo Top. Note the playful tone and brand creatives.

Notice also how their bio link is up-to-date with their latest promotion. So far so good, right?

Halo top's bio link reflects their current Instagram promotion

When we click through, we’re brought to a landing page consistent with those same creatives and messaging.

Halo top has consistent branding based on their Instagram promotions

See how that works?

As a result, it’s important to double-check the creative elements of your promotions before you roll them out. Doing so could be the difference between someone converting from a campaign or bouncing out of your funnel altogether.

6. Let your user-generated content serve as social proof to shoppers

As noted in our guide to user-generated content, customer photos are pure gold when it comes to conversions.

Serving as social proof and providing a much-needed sense of authenticity, user-generated content is second to none for driving social sales. Making user-generated content a cornerstone of your marketing strategy is low-hanging fruit for boosting your social media conversions. Check out how Keds gives their followers a shout-out.

Increasing your social media conversion rate doesn’t mean keeping those creatives confined to social media. For example, Keds features their satisfied customers on-site as a shoppable lookbook to encourage even more purchases.

Keds features a lookbook of user-generated content on-site to increase their social media conversions

Oh, and they also use their Instagram presence to convert customers through their email list.

Keds uses user-generated content from Instagram to drive their email marketing campaigns

See how much mileage you can get out of just a few user-submitted photos? Curating user-generated content is an expectation for modern brands and likewise a brilliant way to encourage sales.

7. Use social listening to stay on top of buying trends

No industry or customer base is totally static.

We mentioned earlier that scoring customers is about experimenting and evolving. This means keeping up with industry trends as well as the wants and needs of your customers.

Sprout’s social listening features make it a cinch to understand exactly what people want and expect from your brand. This ensures that your social presence never grows stagnant.

Through social listening, you can adapt your social strategy to keep up with latest trends

From what customers are saying about you to trending topics worth mentioning in your sales-related posts, these insights can directly influence your social sales strategy.

8. Track your social analytics and conversions

Last but not least, you can’t assess your social media conversion rate if you aren’t actually tracking conversions.

There are a few ways to do this, by the way. For starters, make a point to watch your social traffic in Google Analytics. You can set explicit social conversion goals as highlighted in our guide to scoring a better social media ROI.

Brands should track social media conversions in Google Analytics to better determine their social ROI


Additionally, conversion tracking in Sprout Social allows you to generate trackable links so you can measure the conversion rates of specific campaigns as you roll them out.

Sprout enables users to track social media conversions through URL tracking for your campaigns

With reporting and analytics, you can tie specific goals and outcomes to campaigns to understand what’s converting and what’s not.

And with that, we wrap up our guide!

How are you improving your social media conversion rate?

We’ve said it before and we’ll say it again: social sales don’t happen by accident.

Having a social media conversion strategy is essential for any brand that wants to generate customers from their social channels. The tips above and tools such as Sprout Social can help you do so by the numbers rather than treat social selling as a guessing game.

We want to hear from you, though. What are you doing to monitor your social media conversions? Notice any big difference between social customers and buyers from other channels? Let us know in the comments below!

This post 8 ways to boost your social media conversion rate originally appeared on Sprout Social.

Reblogged 19 hours ago from

SEL Daily Brief May 22, 2019

Please visit Search Engine Land for the full article.

Reblogged 21 hours ago from

vs. saga shows the importance of SEO (and the bad side of a Google update)

When an algorithm update slams one of your biggest rivals, that’s when you should step on the gas.

Please visit Search Engine Land for the full article.

Reblogged 1 day ago from