
Survey finds Google Home users do more, have ‘far higher’ satisfaction than Alexa owners

Alexa devices dominate, but higher Google Home NPS scores suggest it will have better word of mouth and could gain market share.

Please visit Search Engine Land for the full article.

Reblogged 1 day ago from feeds.searchengineland.com

Amazon to introduce video ads in mobile-app search results — report

The buy reportedly requires a minimum $35,000 budget.

Please visit Search Engine Land for the full article.

Reblogged 1 day ago from feeds.searchengineland.com

Google launches News Initiative subscriptions lab for publishers

The company also announced new fact-checking tools and other efforts to stop the spread of ‘misinformation.’

Please visit Search Engine Land for the full article.

Reblogged 1 day ago from feeds.searchengineland.com

Google’s March 2019 core quality update: Stories of recovery

Here’s an early look at a nutrition site, a small e-commerce site and a larger informational site after the March update. Each made changes based on clues from Google’s Quality Raters’ Guidelines; all saw gains in traffic after the update.

Please visit Search Engine Land for the full article.

Reblogged 1 day ago from feeds.searchengineland.com

Robots.txt best practice guide + examples

The robots.txt file is an often overlooked and sometimes forgotten part of a website and SEO.

Nonetheless, a robots.txt file is an important part of any SEO’s toolset, whether you are just starting out in the industry or you are a seasoned SEO veteran.

What is a robots.txt file?

A robots.txt file can be used for a variety of things, from letting search engines know where to locate your site’s sitemap, to telling them which pages to crawl and not crawl, to managing your site’s crawl budget.

You might be asking yourself, “Wait a minute, what is crawl budget?” Crawl budget is the amount of crawling and indexing resource Google allocates to your site’s pages. As big as Google is, it still has only a limited amount of resources available to crawl and index your site’s content.

If your site only has a few hundred URLs then Google should be able to easily crawl and index your site’s pages.

However, if your site is big (an ecommerce site, for example) with thousands of pages and lots of auto-generated URLs, then Google might not crawl all of those pages and you will miss out on lots of potential traffic and visibility.

This is where prioritizing what, when and how much to crawl becomes important.

Google has stated that “having many low-value-add URLs can negatively affect a site’s crawling and indexing.” This is where a robots.txt file can help with the factors affecting your site’s crawl budget.

You can use the file to help manage your site’s crawl budget by making sure that search engines spend their time on your site as efficiently as possible (especially if you have a large site), crawling only the important pages and not wasting time on pages such as login, signup or thank-you pages.
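
For example, a minimal sketch of a crawl-budget-friendly robots.txt might look like this (the paths below are hypothetical; adjust them to match where these pages actually live on your own site):

User-agent: *
# Hypothetical low-value paths; adjust to match your own site
Disallow: /login/
Disallow: /signup/
Disallow: /thank-you/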

Why do you need robots.txt?

Before a robot such as Googlebot or Bingbot crawls a webpage, it will first check to see whether a robots.txt file exists and, if one does, it will usually follow and respect the directives found within that file.

A robots.txt file can be a powerful tool in any SEO’s arsenal, as it’s a great way to control how search engine crawlers/bots access certain areas of your site. Keep in mind that you need to be sure you understand how the robots.txt file works, or you may find yourself accidentally disallowing Googlebot or another bot from crawling your entire site and keeping it out of the search results altogether!

When done properly, you can control things such as:

  1. Blocking access to entire sections of your site (dev and staging environments, etc.)
  2. Keeping your site’s internal search results pages from being crawled, indexed or showing up in search results
  3. Specifying the location of your sitemap or sitemaps
  4. Optimizing crawl budget by blocking access to low-value pages (login, thank-you, shopping cart pages, etc.)
  5. Preventing certain files on your website (images, PDFs, etc.) from being indexed

Robots.txt Examples

Below are a few examples of how you can use the robots.txt file on your own site.

Allowing all web crawlers/robots access to all of your site’s content:

User-agent: *
Disallow:

Blocking all web crawlers/bots from all of your site’s content:

User-agent: *
Disallow: /

You can see how easy it is to make a mistake when creating your site’s robots.txt, as the only difference between allowing all crawlers and blocking your entire site from being seen is a single forward slash in the disallow directive (Disallow: /).

Blocking a specific web crawler/bot (in this case, Googlebot) from your entire site:

User-agent: Googlebot
Disallow: /

Blocking all web crawlers/bots from a specific page on your site:

User-agent: *
Disallow: /thankyou.html

Excluding all robots from parts of the server:

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/
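
You can also use the file to declare the location of your sitemap and to keep crawlers away from a particular file type, such as PDFs. Below is a minimal sketch; the sitemap URL is a hypothetical placeholder, and the * and $ wildcards are honored by major crawlers such as Googlebot and Bingbot but are not part of the original robots.txt standard:

User-agent: *
# Hypothetical rule: block crawling of all PDF files
Disallow: /*.pdf$

# Hypothetical sitemap location; replace with your site's actual sitemap URL
Sitemap: https://www.yoursite.com/sitemap.xml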

Here is an example of what the robots.txt file on theverge.com looks like; you can view the live file at www.theverge.com/robots.txt

In it, you can see that The Verge uses its robots.txt file to specifically call out Google’s news bot, “Googlebot-News,” and keep it from crawling certain directories on the site.

It’s important to remember that if you want to make sure a bot doesn’t crawl certain pages or directories on your site, you call out those pages and/or directories in “Disallow” declarations in your robots.txt file, as in the examples above.

You can review how Google handles the robots.txt file in its robots.txt specifications guide. Google currently enforces a maximum robots.txt file size of 500KB, so it’s important to be mindful of the size of your site’s robots.txt file.

How to create a robots.txt file

Creating a robots.txt file for your site is a fairly simple process, but it’s also easy to make a mistake. Don’t let that discourage you from creating or modifying a robots file for your site. This article from Google walks you through the robots.txt file creation process and should help you get comfortable creating your very own robots.txt file.

Once you are comfortable with creating or modifying your site’s robots file, Google has another great article that explains how to test your site’s robots.txt file to see if it is set up correctly.
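
If you prefer to script a quick check of your own, here is a minimal sketch using Python’s built-in urllib.robotparser module (the domain and URLs are hypothetical placeholders); it simply tests whether a given user-agent is allowed to fetch specific URLs under your live robots.txt rules:

from urllib.robotparser import RobotFileParser

# Hypothetical site; swap in your own root domain
robots_url = "https://www.yoursite.com/robots.txt"

parser = RobotFileParser()
parser.set_url(robots_url)
parser.read()  # fetches and parses the live robots.txt file

# Check a few URLs against the rules for a specific user-agent
for url in [
    "https://www.yoursite.com/",
    "https://www.yoursite.com/thankyou.html",
]:
    allowed = parser.can_fetch("Googlebot", url)
    print(f"{url} -> {'allowed' if allowed else 'blocked'}")

Note that this only reflects how the Python standard library interprets your rules; Google’s own testing tool remains the better check for how Googlebot specifically will treat them.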

Checking if you have a robots.txt file

If you are new to the robots.txt file or are not sure whether your site even has one, you can do a quick check. Simply go to your site’s root domain and add /robots.txt to the end of the URL. Example: www.yoursite.com/robots.txt

If nothing shows up, then you do not have a robots.txt file for your site. Now would be the perfect time to jump in and test out creating one.

Best Practices:

  1. Make sure all important pages are crawlable, and that content that won’t provide any real value if found in search is blocked
  2. Don’t block your site’s JavaScript and CSS files
  3. Always do a quick check of your file to make sure nothing has changed by accident
  4. Use proper capitalization for directory, subdirectory and file names
  5. Place the robots.txt file in your website’s root directory for it to be found
  6. The robots.txt file is case sensitive; the file must be named “robots.txt” (no other variations)
  7. Don’t use the robots.txt file to hide private user information, as it will still be visible
  8. Add your sitemap’s location to your robots.txt file (as shown in the examples above)
  9. Make sure that you are not blocking any content or sections of your website that you want crawled

Things to keep in mind:

If you have a subdomain or multiple subdomains on your site, then you will need a robots.txt file on each subdomain as well as on the main root domain. This would look something like store.yoursite.com/robots.txt and yoursite.com/robots.txt.

As mentioned in the best practices section above, it’s important to remember not to use the robots.txt file to prevent sensitive data, such as private user information, from being crawled and appearing in the search results.

The reason for this is that other pages might link to that information, and a direct link will bypass the robots.txt rules, so that content may still get indexed. If you need to block your pages from actually being indexed in the search results, you should use a different method, such as password protection or a noindex meta tag (for example, <meta name="robots" content="noindex">) on those pages. Google cannot log in to a password-protected site/page, so it will not be able to crawl or index those pages.

Conclusion

While you might be a little nervous if you have never worked with a robots.txt file before, rest assured it is fairly simple to use and set up. Once you get comfortable with the ins and outs of the robots file, you’ll be able to enhance your site’s SEO as well as help your site’s visitors and search engine bots.

By setting up your robots.txt file the right way, you will be helping search engine bots spend their crawl budgets wisely and ensuring that they aren’t wasting time and resources crawling pages that don’t need to be crawled. This will help them organize and display your site’s content in the SERPs in the best way possible, which in turn means you’ll have more visibility.

Keep in mind that it doesn’t necessarily take a whole lot of time and effort to set up your robots.txt file. For the most part, it’s a one-time setup that you can then fine-tune with little tweaks and changes to help better sculpt your site’s crawling.

I hope the practices, tips and suggestions described in this article give you the confidence to go out and create or tweak your site’s robots.txt file, and help guide you smoothly through the process.

Michael McManus is Earned Media (SEO) Practice Lead at iProspect.

The post Robots.txt best practice guide + examples appeared first on Search Engine Watch.

Reblogged 1 day ago from searchenginewatch.com

What Is Marketing Management? A 1-Minute Rundown

As marketers, we all want to climb the career ladder as quickly as possible. But if you really want to become a marketing manager one day, you first need to learn what the role demands.

Below, we’ll go over what exactly marketing management is, a marketing manager’s average salary, and the education and skills you need to become one. Read on to learn more about marketing management.

Marketing Manager Salary

According to over 43,500 salaries submitted to Glassdoor, the average salary for a marketing manager working in the United States is $81,078.

Required Education

Most companies require all of their marketing professionals to at least have a bachelor’s degree, but since the marketing industry adapts so quickly, companies don’t necessarily require their marketers or marketing managers to have specialized degrees in specific fields.

However, there is one requirement needed to become a marketing manager — a three-to-five year track record of consistently performing to your potential and achieving your goals. If you want to become a marketing manager one day, your performance as an individual contributor matters more than anything.

For instance, if you aspire to manage your company’s blog team, you need to prove that you’ve been able to consistently write quality content and meet your manager’s expectations.

Required Skills

As an individual contributor, your hard skills are crucial for crushing your job, but as a manager, your soft skills are most important — the majority of your time is spent leading initiatives, gauging and handling your colleagues’ emotions, and figuring out how to simultaneously serve your team’s and employees’ needs.

So, in order to succeed as a marketing manager, you need to be humble, empathetic, adaptable, rewarding, transparent, a great communicator, and have subject matter expertise.


Reblogged 1 day ago from blog.hubspot.com

Let Customer Reviews Drive Your Conversions

Live Webinar: Thursday, April 4, at 1:00 PM ET (10:00 AM PT)

Please visit Search Engine Land for the full article.

Reblogged 1 day ago from feeds.searchengineland.com

Google’s neural matching versus RankBrain: How Google uses each in search

Neural matching helps Google better relate words to searches, while RankBrain helps Google better relate pages to concepts.

Please visit Search Engine Land for the full article.

Reblogged 2 days ago from feeds.searchengineland.com

SEO Is a Means to an End: How Do You Prove Your Value to Clients?

Posted by KameronJenkins

“Prove it” is pretty much the name of the game at this point.

As SEOs, we invest so much effort into finding opportunities for our clients, executing strategies, and on the best days, getting the results we set out to achieve.

That’s why it feels so deflating (not to mention mind-boggling) when, after all those increases in rankings, traffic, and conversions our work produced, our clients still aren’t satisfied.

Where’s the disconnect?

The value of SEO in today’s search landscape

You don’t have to convince SEOs that their work is valuable. We know full well how our work benefits our clients’ websites.

  1. Our attention on crawling and indexing ensures that search engine bots crawl all our clients’ important pages, that they’re not wasting time on any unimportant pages, and that only the important, valuable pages are in the index.
  2. Because we understand how Googlebot and other crawlers work, we’re cognizant of how to ensure that search engines understand our pages as they’re intended to be understood, as well as able to eliminate any barriers to that understanding (ex: adding appropriate structured data, diagnosing JavaScript issues, etc.)
  3. We spend our time improving speed, ensuring appropriate language targeting, looking into UX issues, ensuring accessibility, and more because we know the high price that Google places on the searcher experience.
  4. We research the words and phrases that our clients’ ideal customers use to search for solutions to their problems and help create content that satisfies those needs. In turn, Google rewards our clients with high rankings that capture clicks. Over time, this can lower our clients’ customer acquisition costs.
  5. Time spent earning links for our clients builds the authority they need to gain trust and perform well in search results.

There are so many other SEO activities that drive real, measurable impact for our clients, even in a search landscape that is more crowded and getting fewer clicks than ever before. Despite those results, we’ll still fall short if we fail to connect the dots for our clients.

Rankings, traffic, conversions… what’s missing?

What’s a keyword ranking worth without clicks?

What’s organic traffic worth without conversions?

What are conversions worth without booking/signing the lead?

Rankings, traffic, and conversions are all critical SEO metrics to track if you want to prove the success of your efforts, but they are all means to an end.

At the end of the day, what your client truly cares about is their return on investment (ROI). In other words, if they can’t mentally make the connection between your SEO results and their revenue, then the client might not keep you around for long.

From searcher to customer: I made this diagram for a past client to help demonstrate how they get revenue from SEO.

But how can you do that?

10 tips for attaching value to organic success

If you want to help your clients get a clearer picture of the real value of your efforts, try some of the following methods.

1. Know what constitutes a conversion

What’s the main action your client wants people to take on their website? This is usually something like a form fill, a phone call, or an on-site purchase (e-commerce). Knowing how your client uses their website to make money is key.

2. Ask your clients what their highest value jobs are

Know what types of jobs/purchases your client is prioritizing so you can prioritize them too. It’s common for clients to want to balance their “cash flow” jobs (usually lower value but higher volume) with their “big time” jobs (higher value but lower volume). You can pay special attention to performance and conversions on these pages.

3. Know your client’s close rate

How many of the leads your campaigns generate end up becoming customers? This will help you assign values to goals (tip #6).

4. Know your client’s average customer value

This can get tricky if your client offers different services that all have different values, but you can combine average customer value with close rate to come up with a monetary value to attach to goals (tip #6).

5. Set up goals in Google Analytics

Once you know what constitutes a conversion on your client’s website (tip #1), you can set up a goal in Google Analytics. If you’re not sure how to do this, read up on Google’s documentation.

6. Assign goal values

Knowing that the organic channel led to a conversion is great, but knowing the estimated value of that conversion is even better! For example, if you know that your client closes 10% of the leads that come through contact forms, and the average value of their customers is $500, you could assign a value of $50 per goal completion.

7. Consider having an Organic-only view in Google Analytics

For the purpose of clarity, it could be valuable to set up an additional Google Analytics view just for your client’s organic traffic. That way, when you’re looking at your goal report, you know you’re checking organic conversions and value only.

8. Calculate how much you would have had to pay for that traffic in Google Ads

I like to use the Keywords Everywhere plugin when viewing Google Search Console performance reports because it adds a cost per click (CPC) column next to your clicks column. This screenshot is from a personal blog website that I admittedly don’t do much with, hence the scant metrics, but you can see how easy this makes it to calculate how much you would have had to pay for the clicks you got your client for “free” (organically).
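
If you prefer to run the numbers outside of a spreadsheet, here is a minimal sketch in Python of the same clicks-times-CPC estimate; the queries, click counts and CPC values below are made up purely for illustration:

# Hypothetical Search Console export: query -> (organic clicks, avg. CPC in dollars)
organic_performance = {
    "robots.txt best practices": (320, 4.10),
    "what is crawl budget": (210, 3.25),
    "seo reporting tips": (145, 6.80),
}

# Estimate what the same clicks would have cost as paid traffic
estimated_paid_value = sum(clicks * cpc for clicks, cpc in organic_performance.values())

print(f"Estimated equivalent Google Ads spend: ${estimated_paid_value:,.2f}")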

9. Use Multi-Channel Funnels

Organic has value beyond last-click! Even when it’s not the channel your client’s customer came through, organic may have assisted in that conversion. Go to Google Analytics > Conversions > Multi-Channel Funnels.

10. Bring all your data together

How you communicate all this data is just as important as the data itself. Use smart visualizations and helpful explanations to drive home the impact your work had on your client’s bottom line.


For all the possibilities we have for proving our value, doing so can be difficult and time-consuming. Additional factors can complicate this even further, such as:

  • Client is using multiple methods for customer acquisition, each with its own platform, metrics, and reporting
  • Client has low SEO maturity
  • Client is somewhat disorganized and doesn’t have a good grasp of things like average customer value or close rate

The challenges can seem endless, but there are ways to make this easier. I’ll be co-hosting a webinar on March 28th that focuses on this very topic. If you’re looking for ways to not only add value as an SEO but also prove it, check it out:

Save my spot!

And let’s not forget, we’re in this together! If you have any tips for showing your value to your SEO clients, share them in the comments below.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 2 days ago from feedproxy.google.com

Release Notes: Marketo adds collaboration features for email, makes AccountAI tool available to all users

Users who have purchased the “Secure Domains for Tracking Limits” feature will be able to display HTTPS tracking links in emails as part of the latest updates.

The post Release Notes: Marketo adds collaboration features for email, makes AccountAI tool available to all users appeared first on…

Please visit Marketing Land for the full article.

Reblogged 2 days ago from feeds.marketingland.com