
Google may be testing shareable search results snippets

Google is at it again, testing a new feature in the core search results.

Please visit Search Engine Land for the full article.

Reblogged 2 seconds ago from feeds.searchengineland.com

MozCon 2019: The Top Takeaways From Day One

Posted by KameronJenkins

Rand, Russ, Ruth, Rob, and Ross. Dana and Darren. Shannon and Sarah. We didn’t mean to (we swear we didn’t), but the first day of MozCon was littered with alliteration, takeaways, and oodles of insights from our speakers. Topics ranged from local SEO to link building to Google tools, and there was no shortage of “Aha!” moments. And while the content was diverse, one theme was clear: search is constantly changing.

Ready? Let’s make like Roger in his SERP submarine and dive right in!


Sarah’s welcome

Our fearless leader took the stage to ready our attendees for their deep sea dive over the next three days. Our guiding theme to help set the tone? The deep sea of data that we find ourselves immersed in every day.

People are searching more than ever before on more types of devices than ever before… we truly are living in the golden age of search. As Sarah explained though, not all search is created equal. Because Google wants to answer searchers’ questions as quickly as possible, they’ve moved from being the gateway to information to being the destination for information in many cases. SEOs need to be able to work smarter and identify the best opportunities in this new landscape.


Rand Fishkin — Web Search 2019: The Essential Data Marketers Need

Next up was Rand of SparkToro who dropped a ton of data about the state of search in 2019.

To set the stage, Rand gave us a quick review of the evolution of media: “This new thing is going to kill this old thing!” has been the theme of panicked marketers for decades. TV was supposed to kill radio. Computers were supposed to kill TV. Mobile was supposed to kill desktop. Voice search was supposed to kill text search. But as Rand showed us, these new technologies often don’t kill the old ones — they just take up all our free time. We need to make sure we’re not turning away from mediums just because they’re “old” and, instead, make sure our investments follow real behavior.

Rand’s deck was also chock-full of data from Jumpshot about how much traffic Google is really sending to websites these days, how much of that comes from paid search, and how that’s changed over the years.

In 2019, Google sent ~20% fewer organic clicks via browser searches than in 2016.

In 2016, there were 26 organic clicks for every paid click. In 2019, that ratio is 11:1.

Google still owns the lion’s share of the search market and still sends a significant amount of traffic to websites, but in light of this data, SEOs should be thinking about how their brands can benefit even without the click.

And finally, Rand left us with some wisdom from the world of social — getting engagement on social media can get you the type of attention it takes to earn quality links and mentions in a way that’s much easier than manual, cold outreach.


Ruth Burr Reedy — Human > Machine > Human: Understanding Human-Readable Quality Signals and Their Machine-Readable Equivalents

It’s 2019. And though we all thought by this year we’d have flying cars and robots to do our bidding, machine learning has come a very long way. Almost frustratingly so — the push and pull of making decisions for searchers versus search engines is an ever-present SEO conundrum.

Ruth argued that in our pursuit of an audience, we can’t get too caught up in the middleman (Google), and in our pursuit of Google, we can’t forget the end user.

Optimizing for humans-only is inefficient. Those who do are likely missing out on a massive opportunity. Optimizing for search engines-only is reactive. Those who do will likely fall behind.

She also left us with the very best kind of homework… homework that’ll make us all better SEOs and marketers!

  • Read the Quality Rater Guidelines
  • Ask what your site is currently benefiting from that Google might eliminate or change in the future
  • Write better (clearer, simpler) content
  • Examine your SERPs with the goal of understanding search intent so you can meet it
  • Lean on subject matter experts to make your brand more trustworthy
  • Conduct a reputation audit — what’s on the internet about your company that people can find?

And last, but certainly not least, stop fighting about this stuff. It’s boring.

Thank you, Ruth!


Dana DiTomaso — Improved Reporting & Analytics Within Google Tools

Freshly fueled with cinnamon buns and glowing with the energy of a thousand jolts of caffeine, we were ready to dive back into it — this time with Dana from Kick Point.

This year was a continuation of Dana’s talk on goal charters. If you haven’t checked that out yet or you need a refresher, you can view it here!

Dana emphasized the importance of data hygiene. Messy analytics, missing tracking codes, poorly labeled events… we’ve all been there. Dana is a big advocate of documenting every component of your analytics.

She also blew us away with a ton of great insight on making our reports accessible — from getting rid of jargon and using the client’s language to using colors that are compatible with printing.

And just when we thought it couldn’t get any more actionable, Dana dropped some free Google Data Studio resources on us! You can check them out here.

(Also, close your tabs!)


Rob Bucci — Local Market Analytics: The Challenges and Opportunities

The first thing you need to know is that Rob finally did it — he finally got a cat.

Very bold of Rob to assume he would have our collective attention after dropping something adorable like that on us. Luckily, we were all able to regroup and focus on his talk — how there are challenges aplenty in the local search landscape, but there are even more opportunities if you overcome them.

Rob came equipped with a ton of stats about localized SERPs that have massive implications for rank tracking.

  • 73 percent of the 1.2 million SERPs he analyzed contained some kind of localized feature.
  • 25 percent of the sites he was tracking had some degree of variability between markets.
  • 85 percent was the maximum variability he saw across zip codes in a single market.

That’s right… rankings can vary by zip code, even for queries you don’t automatically associate with local intent. Whether you’re a national brand without physical storefronts or you’re a single-location retail store, localization has a huge impact on how you show up to your audience.

With this in mind, Rob announced a huge initiative that Moz has been working on… Local Market Analytics — complete with local search volume! Eep! See how you perform on hyper-local SERPs with precision and ease — whether you’re an online or location-based business.

It launched today as an invitation-only limited release. Want an invite? Request it here!



Ross Simmonds — Keywords Aren’t Enough: How to Uncover Content Ideas Worth Chasing

Ross Simmonds was up next, and he dug into how you might be creating content wrong if you’re building it strictly around keyword research.

The methodology we marketers need to remember is Research – Rethink – Remix.

Research:

  • Find the channel your audience spends time on. What performs well? How can you serve this audience?

Rethink:

  • Find the content that your audience wants most. What topics resonate? What stories connect?

Remix:

  • Measure how your audience responds to the content. Can this be remixed further? How can we remix at scale?

If you use this method and you still aren’t sure if you should pursue a content opportunity, ask yourself the following questions:

  • Will it give us a positive ROI?
  • Does it fall within our circle of competence?
  • Does the benefit outweigh the cost of creation?
  • Will it give us shares and links and engagement?

Thanks, Ross, for such an actionable session!


Shannon McGuirk — How to Supercharge Link Building with a Digital PR Newsroom

Shannon of Aira Digital took the floor with real-life examples of how her team does link building at scale with what she calls the “digital PR newsroom.”

The truth is, most of us are still link building like it’s 1948 with “planned editorial” content. When we do this, we’re missing out on a ton of opportunity (about 66%!) that can come from reactive editorial and planned reactive editorial.

Shannon encouraged us to try tactics that have worked for her team such as:

  • Having morning scrum meetings to go over trending topics and find reactive opportunities
  • Staffing your team with both storytellers and story makers
  • Holding quarterly reviews to see which content types performed best and using that to inform future work

Her talk was so good that she even changed Cyrus’s mind about link building!

For free resources on how you can set up your own digital PR newsroom, visit: aira.net/mozcon19.


Darren Shaw — From Zero to Local Ranking Hero

Next up, Darren of Whitespark chronicled his 8-month-long journey to growing a client’s local footprint.

Here’s what he learned and encouraged us to implement in response:

  • Track from multiple zip codes around the city
  • Make sure your citations are indexed
  • The service area section in GMB won’t help you rank in those areas. It’s for display purposes only
  • Invest in a Google reviews strategy
  • The first few links earned really have a positive impact, but it reaches a point of diminishing returns
  • Any individual strategy will probably hit a point of diminishing returns
  • A full website is better than a single-page GMB website when it comes to local rankings

As SEOs, we’d all do well to remember that it’s not one specific activity, but the aggregate, that will move the needle!


Russ Jones — Esse Quam Videri: When Faking it is Harder than Making It

Rounding out day one of MozCon was our very own Russ Jones on Esse Quam Videri — “To be, rather than to seem.”

By Russ’s own admission, he’s a pretty good liar, and so too are many SEOs. In a poll Russ ran on Twitter, he found that 64 percent of SEOs state that they have promoted sites they believe are not the best answer to the query. We can be so “rank-centric” that we engage in tactics that make our websites look like we care about the users, when in reality, what we really care about is that Google sees it.

Russ encouraged SEOs to help guide the businesses we work for to “be real companies” rather than trying to look like real companies purely for SEO benefit.

Thanks to Russ for reminding us to stop sacrificing the long run for the short run!


Phew — what a day!

And it ain’t over yet! There are two more days to make the most of MozCon, connect with fellow attendees, and pick the brains of our speakers.

In the meantime, tell me in the comments below — if you had to pick just one thing, what was your favorite part about day one?

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 3 hours ago from feedproxy.google.com

How retailers can avoid the 9 biggest pitfalls of Facebook advertising

Facebook advertising success boils down to three areas: pixel and campaign strategy, audience targeting, and measurement.

Please visit Marketing Land for the full article.

Reblogged 3 hours ago from feeds.marketingland.com

Data scraping tools for marketers who don’t know code

Here are some free software options to extract data from small to medium data sets to help you get the job done.

Please visit Search Engine Land for the full article.

Reblogged 4 hours ago from feeds.searchengineland.com

Everything You Need To Know About CSS Margins

Rachel Andrew

One of the first things most of us learned when we learned CSS was the details of the various parts of a box in CSS, described as The CSS Box Model. One of the elements in the Box Model is the margin, a transparent area around a box which will push other elements away from the box contents. The margin-top, margin-right, margin-bottom and margin-left properties were described right back in CSS1, along with the shorthand margin for setting all four properties at once.

A margin seems to be a fairly uncomplicated thing; however, in this article, we will take a look at some of the things which trip people up with regard to using margins. In particular, we will be looking at how margins interact with each other, and how margin collapsing actually works.

The CSS Box Model

As with all articles about parts of the CSS Box Model, we should define what we mean by that, and how the model has been clarified through versions of CSS. The Box Model refers to how the various parts of a box — the content, padding, border, and margin — are laid out and interact with each other. In CSS1, the Box Model was detailed with the ASCII art diagram shown in the image below.

Depiction of the CSS Box Model in CSS1

The four margin properties for each side of the box and the margin shorthand were all defined in CSS1.

The CSS2.1 specification has an illustration to demonstrate the Box Model and also defines terms we still use to describe the various boxes. The specification describes the content box, padding box, border box, and margin box, each being defined by the edges of the content, padding, border, and margin respectively.

Depiction of the CSS Box Model in CSS2

There is now a Level 3 Box Model specification as a Working Draft. This specification refers back to CSS2 for the definitions of the Box Model and margins, therefore it is the CSS2 definition we will be using for the majority of this article.

Margin Collapsing

The CSS1 specification, as it defined margins, also defined that vertical margins collapse. This collapsing behavior has been the source of margin-related frustration ever since. Margin collapsing makes sense if you consider that in those early days, CSS was being used as a document formatting language. Margin collapsing means that when a heading with a bottom margin is followed by a paragraph with a top margin, you do not get a huge gap between those items.

When margins collapse, they will combine so that the space between the two elements becomes the larger of the two margins. The smaller margin essentially ends up inside the larger one.

Margins collapse in the following situations:

  • Adjacent siblings
  • Completely empty boxes
  • A parent element and its first or last child element

Let’s take a look at each of these scenarios in turn, before looking at the things which prevent margins from collapsing in these scenarios.

Adjacent Siblings

My initial description of margin collapsing is a demonstration of how the margins between adjacent siblings collapse. Other than in the situations mentioned below, if you have two elements displaying one after the other in normal flow, the bottom margin of the first element will collapse with the top margin of the following element.

In the CodePen example below, there are three div elements. The first has a top and bottom margin of 50 pixels. The second has a top and bottom margin of 20 pixels. The third has a top and bottom margin of 3em. The margin between the first two elements is 50 pixels, as the smaller top margin is combined with the larger bottom margin. The margin between the second two elements is 3em, as 3em is larger than the 20 pixels on the bottom of the second element.

See the Pen [Margins: adjacent siblings](https://codepen.io/rachelandrew/pen/OevMPo) by Rachel Andrew.

Completely Empty Boxes

If a box is empty, then its top and bottom margins may collapse with each other. In the following CodePen example, the element with a class of empty has a top and bottom margin of 50 pixels; however, the space between the first and third items is not 100 pixels, but 50. This is due to the two margins collapsing. Adding anything to that box (even padding) will cause the top and bottom margins to be used and not collapse.

See the Pen [Margins: empty boxes](https://codepen.io/rachelandrew/pen/JQLGMr) by Rachel Andrew.

Parent And First Or Last Child Element

This is the margin collapsing scenario which catches people out most often, as it does not seem particularly intuitive. In the following CodePen, I have a div with a class of wrapper, and I have given that div an outline in red so that you can see where it is. The three child elements all have a margin of 50 pixels. However, the first and last items are flush with the edges of the wrapper; there is not a 50-pixel margin between the element and the wrapper.

See the Pen [Margins: margin on first and last child](https://codepen.io/rachelandrew/pen/BgrKGp) by Rachel Andrew.

This is because the margin on the child collapses with any margin on the parent, thus ending up outside the parent. You can see this if you inspect the first child using DevTools. The highlighted yellow area is the margin.

DevTools showing the child’s margin (highlighted yellow) ending up outside the parent

Only Block Margins Collapse

The last example also highlights something about margin collapsing. In CSS2, only vertical margins are specified to collapse — that is the top and bottom margins on an element if you are in a horizontal writing mode. So the left and right margins above are not collapsing and ending up outside the wrapper.

Note: It is worth remembering that margins only collapse in the block direction, such as between paragraphs.

Things Which Prevent Margin Collapsing

Margins never collapse if an item has absolute positioning, or is floated. However, assuming you have run into one of the places where margins collapse outlined above, how can you stop those margins collapsing?

The first thing that stops margins collapsing is having something between the elements in question.

For example, a box completely empty of content will not collapse its top and bottom margins if it has a border or padding applied. In the example below, I have added 1px of padding to the box. There is now a 50-pixel margin above and below the box.

See the Pen [Margins: empty boxes with padding do not collapse](https://codepen.io/rachelandrew/pen/gNeMpg) by Rachel Andrew.

This has logic behind it: if the box is completely empty with no border or padding, it is essentially invisible. It might be an empty paragraph element thrown into the markup by your CMS. If your CMS was adding redundant paragraph elements, you probably wouldn’t want them to cause large gaps between the other paragraphs due to their margins being honored. Add anything to the box, and you will get those gaps.

Similar behavior can be seen with margins on first or last children which collapse through the parent. If we add a border to the parent, the margins on the children stay inside.

See the Pen [Margins: margin on first and last child doesn’t collapse if the parent has a border](https://codepen.io/rachelandrew/pen/vqRKKX) by Rachel Andrew.

Once again, there is some logic to the behavior. If you have wrapping elements for semantic purposes that do not display visually, you probably don’t want them to introduce big gaps in the display. This made a lot of sense when the web was mostly text. It is less useful as behavior when we are using elements to lay out a design.

Creating a Block Formatting Context

A new Block Formatting Context (BFC) will also prevent margin collapsing through the containing element. If we look again at the example of the first and last child, ending up with their margins outside of the wrapper, and give the wrapper display: flow-root, thus creating a new BFC, the margins stay inside.

See the Pen [Margins: a new Block Formatting Context contains margins](https://codepen.io/rachelandrew/pen/VJXjEp) by Rachel Andrew.

To find out more about display: flow-root, read my article “Understanding CSS Layout And The Block Formatting Context”. Changing the value of the overflow property to auto will have the same effect, as this also creates a new BFC, although it may also create scrollbars that you didn’t want in some scenarios.
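
As a minimal sketch, using the wrapper class from the earlier examples, either declaration contains the child margins:

.wrapper {
  display: flow-root; /* creates a new BFC; child margins stay inside */
}

/* or, with the possible side effect of unwanted scrollbars: */
.wrapper {
  overflow: auto;
}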

Flex And Grid Containers

Flex and Grid containers establish Flex and Grid formatting contexts for their children, so they have different behavior to block layout. One of those differences is that margins do not collapse:

“A flex container establishes a new flex formatting context for its contents. This is the same as establishing a block formatting context, except that flex layout is used instead of block layout. For example, floats do not intrude into the flex container, and the flex container’s margins do not collapse with the margins of its contents.”

Flexbox Level 1

If we take the example above and make the wrapper into a flex container, displaying the items with flex-direction: column, you can see that the margins are now contained by the wrapper. Additionally, margins between adjacent flex items do not collapse with each other, so we end up with 100 pixels between flex items, the total of the 50 pixels on the top and bottom of the items.

See the Pen [Margins: margins on flex items do not collapse](https://codepen.io/rachelandrew/pen/mZxreL) by Rachel Andrew.

Margin Strategies For Your Site

Due to margin collapsing, it is a good idea to come up with a consistent way of dealing with margins in your site. The simplest thing to do is to only define margins on the top or bottom of elements. In that way, you should not run into margin collapsing issues too often as the side with a margin will always be adjacent to a side without a margin.
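
As a quick sketch of that convention (the selectors here are just an illustration):

h1, h2, p, ul {
  margin: 0 0 1.5rem 0; /* space below each element only; no top margins to collide */
}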

Note: Harry Roberts has an excellent post detailing the reasons why setting margins only in one direction is a good idea, and not just due to solving collapsing margin issues.

This solution doesn’t solve the issues you might run into with margins on children collapsing through their parent. That particular issue tends to be less common, and knowing why it is happening can help you come up with a solution. An ideal solution is to give components which require it display: flow-root; as a fallback for older browsers, you could use overflow to create a BFC, turn the parent into a flex container, or even introduce a single pixel of padding. Don’t forget that you can use feature queries to detect support for display: flow-root so only old browsers get a less optimal fix.
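
A short sketch of that feature-query approach, assuming a placeholder .component class:

.component {
  overflow: auto; /* fallback: creates a BFC in older browsers, may add scrollbars */
}

@supports (display: flow-root) {
  .component {
    overflow: visible;  /* undo the fallback */
    display: flow-root; /* a BFC without the scrollbar side effect */
  }
}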

Most of the time, I find that knowing why margins collapse (or didn’t) is the key thing. You can then figure out on a case-by-case basis how to deal with it. Whatever you choose, make sure to share that information with your team. Quite often margin collapsing is a bit mysterious, so the reason for doing things to counter it may be non-obvious! A comment in your code goes a long way to help — you could even link to this article and help to share the margin collapsing knowledge.

I thought that I would round up this article with a few other margin-related pieces of information.

Percentage Margins

When you use a percentage in CSS, it has to be a percentage of something. Margins (and padding) set using percentages will always be a percentage of the inline size (width in a horizontal writing mode) of the parent. This means that you will have equal-sized margins all the way around the element when using percentages.

In the CodePen example below, I have a wrapper which is 200 pixels wide; inside is a box which has a 10% margin. The margin is 20 pixels on all sides, that being 10% of 200.

See the Pen [Margins: percentage margins](https://codepen.io/rachelandrew/pen/orqzrP) by Rachel Andrew.

Margins In A Flow-Relative World

We have been talking about vertical margins throughout this article; however, modern CSS tends to think about things in a flow-relative rather than a physical way. Therefore, when we talk about vertical margins, we really are talking about margins in the block dimension. Those margins will be top and bottom if we are in a horizontal writing mode, but would be right and left in a vertical writing mode written left to right.

Once you are working with logical, flow-relative directions, it becomes easier to talk about block start and block end rather than top and bottom. To make this easier, CSS has introduced the Logical Properties and Values specification. This maps flow-relative properties onto the physical ones.

For margins, this gives us the following mappings (if we are working in English or any other horizontal writing mode with a left-to-right text direction).

  • margin-top = margin-block-start
  • margin-right = margin-inline-end
  • margin-bottom = margin-block-end
  • margin-left = margin-inline-start

We also have two new shorthands which allow us to set both block margins at once, or both inline margins (a small sketch follows the list below).

  • margin-block
  • margin-inline
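
For example, a sketch of the shorthands in use (the .box class and values are arbitrary):

.box {
  margin-block: 2em 1em; /* block-start, then block-end */
  margin-inline: auto;   /* inline-start and inline-end */
}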

In the next CodePen example, I have used these flow-relative keywords and then changed the writing mode of the box, so you can see how the margins follow the text direction rather than being tied to the physical top, right, bottom, and left.

See the Pen [Margins: flow relative margins](https://codepen.io/rachelandrew/pen/BgrQRj) by Rachel Andrew.

You can read more about logical properties and values on MDN or in my article “Understanding Logical Properties And Values” here on Smashing Magazine.

To Wrap Up

You now know most of what there is to know about margins! In short:

  • Margin collapsing is a thing. Understanding why it happens and when it doesn’t will help you solve any problems it may cause.
  • Setting margins in one direction only solves many margin-related headaches.
  • As with anything in CSS, share with your team the decisions you make, and comment your code.
  • Thinking about block and inline dimensions rather than the physical top, right, bottom and left will help you as the web moves towards being writing mode agnostic.
Reblogged 5 hours ago from www.smashingmagazine.com

The Ultimate Guide To Building Scalable Web Scrapers With Scrapy

Daniel Ni

Web scraping is a way to grab data from websites without needing access to APIs or the website’s database. You only need access to the site’s data — as long as your browser can access the data, you will be able to scrape it.

Realistically, most of the time you could just go through a website manually and grab the data ‘by hand’ using copy and paste, but in a lot of cases that would take you many hours of manual work, which could end up costing you a lot more than the data is worth, especially if you’ve hired someone to do the task for you. Why hire someone to work at 1–2 minutes per query when you can get a program to perform a query automatically every few seconds?

For example, let’s say that you wish to compile a list of the Oscar winners for best picture, along with their director, starring actors, release date, and run time. Using Google, you can see there are several sites that will list these movies by name, and maybe some additional information, but generally you’ll have to follow through with links to capture all the information you want.

Obviously, it would be impractical and time-consuming to go through every link from 1927 through to today and manually try to find the information through each page. With web scraping, we just need to find a website with pages that have all this information, and then point our program in the right direction with the right instructions.

In this tutorial, we will use Wikipedia as our website as it contains all the information we need and then use Scrapy on Python as a tool to scrape our information.

A few caveats before we begin:

Data scraping involves increasing the server load for the site that you’re scraping, which means a higher cost for the companies hosting the site and a lower quality experience for other users of that site. The quality of the server that is running the website, the amount of data you’re trying to obtain, and the rate at which you’re sending requests to the server will moderate the effect you have on the server. Keeping this in mind, we need to make sure that we stick to a few rules.

Most sites also have a file called robots.txt in their main directory. This file sets out the rules for which parts of the site the owners do not want scrapers to access. A website’s Terms & Conditions page will usually let you know what their policy on data scraping is. For example, IMDB’s conditions page has the following clause:

Robots and Screen Scraping: You may not use data mining, robots, screen scraping, or similar data gathering and extraction tools on this site, except with our express-written consent as noted below.

Before we try to obtain a website’s data, we should always check the website’s terms and robots.txt to make sure we are allowed to obtain the data. When building our scrapers, we also need to make sure that we do not overwhelm a server with requests that it can’t handle.

Luckily, many websites recognize the need for users to obtain data, and they make the data available through APIs. If these are available, it’s usually a much easier experience to obtain data through the API than through scraping.

Wikipedia allows data scraping, as long as the bots aren’t going ‘way too fast’, as specified in their robots.txt. They also provide downloadable datasets so people can process the data on their own machines. If we go too fast, the servers will automatically block our IP, so we’ll implement timers in order to keep within their rules.
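
As a hedged sketch of how you might check this programmatically (the user agent string here is made up), Python’s standard library can parse robots.txt for you:

from urllib.robotparser import RobotFileParser

# Parse Wikipedia's robots.txt, then ask whether our bot may fetch a page
robots = RobotFileParser()
robots.set_url("https://en.wikipedia.org/robots.txt")
robots.read()

url = "https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture"
print(robots.can_fetch("my-oscars-bot", url))  # True if allowed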

Getting Started, Installing Relevant Libraries Using Pip

First of all, to start off, let’s install Scrapy.

Windows

Install the latest version of Python from https://www.python.org/downloads/windows/

Note: Windows users will also need Microsoft Visual C++ 14.0, which you can grab from “Microsoft Visual C++ Build Tools” over here.

You’ll also want to make sure you have the latest version of pip.

In cmd.exe, type in:

python -m pip install --upgrade pip

pip install pypiwin32

pip install scrapy

This will install Scrapy and all the dependencies automatically.

Linux

First you’ll want to install all the dependencies:

In Terminal, enter:

sudo apt-get install python3 python3-dev python-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev

Once that’s all installed, just type in:

pip install --upgrade pip

To make sure pip is updated, and then:

pip install scrapy

And it’s all done.

Mac

First you’ll need to make sure you have a c-compiler on your system. In Terminal, enter:

xcode-select --install

After that, install homebrew from https://brew.sh/.

Update your PATH variable so that homebrew packages are used before system packages:

echo "export PATH=/usr/local/bin:/usr/local/sbin:$PATH" >> ~/.bashrc

source ~/.bashrc

Install Python:

brew install python

And then make sure everything is updated:

brew update; brew upgrade python

After that’s done, just install Scrapy using pip:

pip install Scrapy


Overview Of Scrapy, How The Pieces Fit Together, Parsers, Spiders, Etc

You will be writing a script called a ‘Spider’ for Scrapy to run, but don’t worry, Scrapy spiders aren’t scary at all despite their name. The only similarity Scrapy spiders and real spiders have is that they like to crawl on the web.

Inside the spider is a class that you define that tells Scrapy what to do: for example, where to start crawling, the types of requests it makes, how to follow links on pages, and how it parses data. You can even add custom functions to process data before outputting it back into a file.

Writing Your First Spider, Write A Simple Spider To Allow For Hands-on Learning

To start our first spider, we need to first create a Scrapy project. To do this, enter this into your command line:

scrapy startproject oscars

This will create a folder with your project.

We’ll start with a basic spider. The following code is to be entered into a python script. Open a new python script in /oscars/spiders and name it oscars_spider.py

We’ll import Scrapy.

import scrapy

We then start defining our Spider class. First, we set the name and then the domains that the spider is allowed to scrape. Finally, we tell the spider where to start scraping from.

class OscarsSpider(scrapy.Spider):
   name = "oscars"
   allowed_domains = ["en.wikipedia.org"]
   start_urls = ['https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture']

Next, we need a function which will capture the information that we want. For now, we’ll just grab the page title. We use CSS to find the tag which carries the title text, and then we extract it. Finally, we return the information back to Scrapy to be logged or written to a file.

def parse(self, response):
   data = {}
   data['title'] = response.css('title::text').extract()
   yield data

Now save the code in /oscars/spiders/oscars_spider.py

To run this spider, simply go to your command line and type:

scrapy crawl oscars

You should see an output like this:

2019-05-02 14:39:31 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: oscars)
...
2019-05-02 14:39:32 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://en.wikipedia.org/robots.txt> (referer: None)
2019-05-02 14:39:34 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture> (referer: None)
2019-05-02 14:39:34 [scrapy.core.scraper] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture>
{'title': ['Academy Award for Best Picture - Wikipedia']}
2019-05-02 14:39:34 [scrapy.core.engine] INFO: Closing spider (finished)
2019-05-02 14:39:34 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 589,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 74517,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 5, 2, 7, 39, 34, 264319),
 'item_scraped_count': 1,
 'log_count/DEBUG': 3,
 'log_count/INFO': 9,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2019, 5, 2, 7, 39, 31, 431535)}
2019-05-02 14:39:34 [scrapy.core.engine] INFO: Spider closed (finished)

Congratulations, you’ve built your first basic Scrapy scraper!

Full code:

import scrapy

class OscarsSpider(scrapy.Spider):
   name = "oscars"
   allowed_domains = ["en.wikipedia.org"]
   start_urls = ["https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture"]

   def parse(self, response):
       data = {}
       data['title'] = response.css('title::text').extract()
       yield data

Obviously, we want it to do a little bit more, so let’s look into how to use Scrapy to parse data.

First, let’s get familiar with the Scrapy shell. The Scrapy shell can help you test your code to make sure that Scrapy is grabbing the data you want.

To access the shell, enter this into your command line:

scrapy shell "https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture"

This will basically open the page that you’ve directed it to and it will let you run single lines of code. For example, you can view the raw HTML of the page by typing in:

print(response.text)

Or open the page in your default browser by typing in:

view(response)

Our goal here is to find the code that contains the information that we want. For now, let’s try to grab the movie title names only.

The easiest way to find the code we need is by opening the page in our browser and inspecting the code. In this example, I am using Chrome DevTools. Just right-click on any movie title and select ‘inspect’:

Chrome DevTools window. (Large preview)

As you can see, the Oscar winners have a yellow background while the nominees have a plain background. There’s also a link to the article about the movie title, and the links for movies end in film). Now that we know this, we can use a CSS selector to grab the data. In the Scrapy shell, type in:

response.css(r"tr[style='background:#FAEB86'] a[href*='film)']").extract()

As you can see, you now have a list of all the Oscar Best Picture Winners!

> response.css(r"tr[style='background:#FAEB86'] a[href*='film']").extract()
['<a href="/wiki/Wings_(1927_film)" title="Wings (1927 film)">Wings</a>', 
...
 '<a href="/wiki/Green_Book_(film)" title="Green Book (film)">Green Book</a>', '<a href="/wiki/Jim_Burke_(film_producer)" title="Jim Burke (film producer)">Jim Burke</a>']

Going back to our main goal, we want a list of the Oscar winners for best picture, along with their director, starring actors, release date, and run time. To do this, we need Scrapy to grab data from each of those movie pages.

We’ll have to rewrite a few things and add a new function, but don’t worry, it’s pretty straightforward.

We’ll start by initiating the scraper the same way as before.

import scrapy, time

class OscarsSpider(scrapy.Spider):
   name = "oscars"
   allowed_domains = ["en.wikipedia.org"]
   start_urls = ["https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture"]

But this time, two things will change. First, we’ll import time along with scrapy because we want to create a timer to restrict how fast the bot scrapes. Also, when we parse the pages the first time, we want to only get a list of the links to each title, so we can grab information off those pages instead.

def parse(self, response):
   for href in response.css(r"tr[style='background:#FAEB86'] a[href*='film)']::attr(href)").extract():
       url = response.urljoin(href)
       print(url)
       req = scrapy.Request(url, callback=self.parse_titles)
       time.sleep(5)
       yield req

Here we make a loop to look for every link on the page that ends in film) and has the yellow background, turn each relative link into a full URL with response.urljoin(), and send a request for each URL to the parse_titles function. We also slip in a timer so that it only requests pages every 5 seconds. Remember, we can use the Scrapy shell to test our response.css fields to make sure we’re getting the correct data!
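
As a side note, Scrapy also has a built-in way to throttle requests that you could use instead of time.sleep(); a sketch of the relevant line in the generated project’s settings.py (the value mirrors the 5 seconds above):

# settings.py: let Scrapy itself wait between consecutive requests
DOWNLOAD_DELAY = 5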

def parse_titles(self, response):
   # Build a dictionary of the fields we want from each film's page
   data = {}
   data['title'] = response.css(r"h1[id='firstHeading'] i::text").extract()
   data['director'] = response.css(r"tr:contains('Directed by') a[href*='/wiki/']::text").extract()
   data['starring'] = response.css(r"tr:contains('Starring') a[href*='/wiki/']::text").extract()
   data['releasedate'] = response.css(r"tr:contains('Release date') li::text").extract()
   data['runtime'] = response.css(r"tr:contains('Running time') td::text").extract()
   yield data

The real work gets done in our parse_titles function, where we create a dictionary called data and then fill each key with the information we want. Again, all these selectors were found using Chrome DevTools as demonstrated before and then tested with the Scrapy shell.

The final line returns the data dictionary back to Scrapy to store.

Complete code:

import scrapy, time

class OscarsSpider(scrapy.Spider):
   name = "oscars"
   allowed_domains = ["en.wikipedia.org"]
   start_urls = ["https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture"]

   def parse(self, response):
       for href in response.css(r"tr[style='background:#FAEB86'] a[href*='film)']::attr(href)").extract():
           url = response.urljoin(href)
           print(url)
           req = scrapy.Request(url, callback=self.parse_titles)
           time.sleep(5)
           yield req

   def parse_titles(self, response):
       # Build a dictionary of the fields we want from each film's page
       data = {}
       data['title'] = response.css(r"h1[id='firstHeading'] i::text").extract()
       data['director'] = response.css(r"tr:contains('Directed by') a[href*='/wiki/']::text").extract()
       data['starring'] = response.css(r"tr:contains('Starring') a[href*='/wiki/']::text").extract()
       data['releasedate'] = response.css(r"tr:contains('Release date') li::text").extract()
       data['runtime'] = response.css(r"tr:contains('Running time') td::text").extract()
       yield data

Sometimes we will want to use proxies as websites will try to block our attempts at scraping.

To do this, we only need to change a few things. Using our example, in our def parse(), we need to change it to the following:

def parse(self, response):
   for href in response.css(r"tr[style='background:#FAEB86'] a[href*='film)']::attr(href)").extract():
       url = response.urljoin(href)
       print(url)
       req = scrapy.Request(url, callback=self.parse_titles)
       req.meta['proxy'] = "http://yourproxy.com:80"
       yield req

This will route the requests through your proxy server.

Deployment And Logging, Show How To Actually Manage A Spider In Production

Now it is time to run our spider. To make Scrapy start scraping and then output to a CSV file, enter the following into your command prompt:

scrapy crawl oscars -o oscars.csv

You will see a large output, and after a couple of minutes, it will complete and you will have a CSV file sitting in your project folder.

Compiling Results, Show How To Use The Results Compiled In The Previous Steps

When you open the CSV file, you will see all the information we wanted (sorted out by columns with headings). It’s really that simple.

Oscar-winning movies list and information. (Large preview)
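
If you want to work with the results beyond a spreadsheet, here is a minimal sketch of reading the file back in Python (the column names come from the data dictionary keys defined in the spider):

import csv

# Read the scraped CSV and print a couple of fields per film
with open('oscars.csv', newline='', encoding='utf-8') as f:
    for row in csv.DictReader(f):
        print(row['title'], row['runtime'])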

With data scraping, we can obtain almost any custom dataset that we want, as long as the information is publicly available. What you want to do with this data is up to you. This skill is extremely useful for doing market research, keeping information on a website updated, and many other things.

It’s fairly easy to set up your own web scraper to obtain custom datasets on your own; however, always remember that there might be other ways to obtain the data that you need. Businesses invest a lot into providing the data that you want, so it’s only fair that we respect their terms and conditions.

Additional Resources For Learning More About Scrapy And Web Scraping In General

Reblogged 5 hours ago from www.smashingmagazine.com

Google says shortnames bug resolved, local listings restored

It’s unclear yet whether the missing reviews issue has also been fixed.

Please visit Search Engine Land for the full article.

Reblogged 8 hours ago from feeds.searchengineland.com

Micro-Influencer Marketing: A Comprehensive Guide

Has a celebrity ever convinced you to buy something?

It’s okay if the answer is yes — we’ve all been there. In fact, just recently, a famous dog helped convince me to purchase a GoPro camera. For a creature who can’t speak, he’s a pretty effective marketer.

Loki the Wolfdog’s Instagram post is a successful example of influencer marketing, which involves developing relationships with influential personalities to promote your brand to the influencer’s audience. Loki the Wolfdog has over 1 million Instagram followers whom GoPro may not otherwise have been able to reach with posts on its own profile.

A newer concept known as micro-influencer marketing recently joined the social media scene. It’s the same concept as influencer marketing, but on a smaller scale: Brands partner with individuals with smaller followings on social media to promote products with authentic, visual posts instead of sponsored ads.

In this blog post, we’ll tell you everything you need to know about micro-influencers, including what brands are using them successfully and how you can connect with these individuals to promote your brand.

Micro-influencers are social media users, unlike typical celebrities, experts, or public figures: they’re individuals who work or specialize in a particular vertical and frequently share social media content about their interests. Unlike traditional “influencers,” micro-influencers have a more modest number of followers — typically in the thousands or tens of thousands — but they boast hyper-engaged audiences.

For example, a yoga influencer might boast millions of followers and operate several yoga studios. A yoga micro-influencer might have only a few thousand followers and post instructional videos on Instagram for their fans to try at home, but their average post receives a healthy amount of engagement relative to the size of their follower base.

Influencer vs. Micro-Influencer

Influencer marketing is when organizations partner with top content creators — people with thousands or even millions of followers — to promote their products or services to the content creator’s audience. When brands partner with influencers, companies are able to leverage the established trust amongst the influencer’s audience. Consumers are more likely to buy from someone they know and trust, so influencers are extremely effective when it comes to strategies like word-of-mouth marketing or increasing social proof. Brands will often pay influencers to either post content featuring their products or sponsor their events, capturing the influencers’ large reach. 

An excellent example of influencer marketing is the partnership between Diageo, parent company of Scotch whisky brands Lagavulin and Oban, and actor Nick Offerman. Best known for his role as a gruff, hyper-masculine government official on Parks and Recreation, Offerman appears in a 45-minute parody video drinking a Lagavulin single malt whisky next to a traditional holiday Yule log. The video went viral and won multiple awards, rocketing an older brand into cultural relevance.

Micro-influencers, on the other hand, have a more moderate backing — compared to influencers, micro-influencers usually have fewer than 100,000 followers. However, the rate of audience engagement on content peaks around 1,000 followers, making a partnership with a micro-influencer incredibly valuable to companies looking to increase brand awareness. Micro-influencers generate a ton of content that appeals to their audiences and become well-established in their area of interest. Over 82% of surveyed consumers said they were likely to buy something a micro-influencer recommended. Companies can partner with micro-influencers to write a post about a product offer, publish a review, or share the product with their social communities.

The Value of Micro-Influencers

Using micro-influencers may seem counterintuitive. Why would you seek out someone with a smaller following to promote your brand?

There are several reasons to believe micro-influencers might get better results for your brand. 

Micro-influencers have better engagement rates.

Markerly studied Instagram engagement and found a surprising trend: As an influencer’s number of followers increases, their number of likes and comments from followers decreases.

In its analysis, Markerly determined the following:

  • Instagram users with fewer than 1,000 followers generated likes 8% of the time
  • Users with 1,000-10,000 followers earned likes at a 4% rate
  • Users with 10,000-100,000 followers achieved a 2.4% like rate
  • Users with 1-10 million followers earned likes only 1.7% of the time

Check out Markerly’s graphical breakdown of how likes and comments decline as followers increase:

Source: Markerly

Markerly recommends brands pursue micro-influencers with Instagram followings in the 1,000-10,000 range. With micro-influencers, brands can achieve higher engagement rates among a large enough audience. In a recent study, Experticity found micro-influencers have 22.2X more conversations than typical Instagram users — largely because they’re passionate and knowledgeable about their particular interest area.

Micro-influencers have more targeted audiences.

Markerly also notes that micro-influencers have more targeted follower bases than influencers with follower numbers in the hundreds of thousands and millions.

Think about it: If a clothing brand partnered with a celebrity with millions of followers on Instagram, the celebrity could reach their huge pool, but a large portion of them might not be interested in fashion. Instead, if the clothing brand connected with 100 fashion bloggers with 1,000 followers apiece, it would be able to connect to a smaller but far more targeted and engaged audience.

Markerly CEO and co-founder Sarah Ware told Digiday that partnering with the Kardashian and Jenner sisters to promote a weight-loss tea on Instagram led to a significant number of conversions. However, Ware also noted that working with 30-40 micro-influencers achieved a higher conversion rate than when the celebrities were promoting the tea. In fact, 82% of customers surveyed by Experticity said they would be very likely to follow a recommendation from a micro-influencer.

Micro-influencers are more affordable.

Micro-influencers are typically more affordable than celebrities or profiles with millions of followers. Celebrities sometimes charge up to $75,000 for a single Instagram post promoting a product. In contrast, 97% of micro-influencers on Instagram charge less than $500 for a promotional post. Granted, brands usually work with more than one micro-influencer to maximize reach, but even 100 micro-influencers would cost less than a single celebrity on Instagram at these rates.

For micro-influencers with smaller followings, brands may even be able to compensate them in the form of free products. According to Digiday, La Croix Sparkling Water (more on them below) sent a micro-influencer vouchers for free products instead.

Micro-influencers are more authentic.

Micro-influencers are real people, so their Instagram content is real, too. Instagram users with a few thousand followers likely post their own content, reply to comments, and behave more authentically than a brand or a celebrity with a social media manager might. If a micro-influencer engages with a promotional post on Instagram, their followers might be more inclined to click to learn more about the brand they’re posting about.

It’s also worth noting that Instagram recently changed its algorithm to mirror Facebook’s. Now, posts from profiles users follow and interact with are shown first in Instagram feeds, and authentic, quality content is prioritized over promoted content from big brands. This might make micro-influencer content more visible than content from celebrities if the algorithm determines users might be more interested in it.

One note: If you were wondering why we’re only mentioning Instagram in this blog post, it’s because micro-influencer marketing has taken off primarily on that platform. Because Instagram is so visual, it’s easy for micro-influencers to post photos of products and brand experiences instead of writing a promotional tweet or Facebook post. That’s not to say that micro-influencer marketing can’t be done on other social media platforms, but Instagram’s Explore tab helps users find and interact with micro-influencer content easily.

You’ll see what we mean when we dive into different micro-influencer strategies brands are using successfully below.

4 Brands Using Micro-Influencers Successfully

1) La Croix Sparkling Water

La Croix Sparkling Water started tapping into micro-influencers to promote its brand in a competitive marketplace. It relies primarily on social media marketing to get discovered, especially by millennials.

La Croix identifies micro-influencers on Instagram and asks them to share product awareness posts on Instagram. It finds micro-influencers by searching branded hashtags, such as #LiveLaCroix, and when users tag the brand on Instagram. It specifically targets profiles with lower follower counts to maintain a feeling of authentic “realness” that appeals to millennial Instagram users. Then, La Croix reaches out to them with product vouchers or other offers to post pictures with the sparkling water.

If you check out La Croix’s Instagram page, you’ll see it features a lot of content posted by micro-influencers, such as this photo below:

 

Lending a hydrating, helping hand. ☺️(📸:@charleyraee)

A photo posted by LaCroix Sparkling Water (@lacroixwater) on Feb 4, 2017 at 1:49pm PST

By tapping into smaller, more targeted networks of micro-influencers, La Croix cultivates a social media presence that’s authentic and fun, and ensures its product is in front of the eyes of similar users. If you have a physical product that looks great on camera (like an eye-popping can of La Croix), try engaging with micro-influencers by sending free product for Instagram promotions.

2) Kimpton Hotels

Boutique hotel chain Kimpton uses Instagram takeovers to connect with micro-influencers. These consist of micro-influencers creating original content for the brand’s Instagram and posting the content as themselves. Takeovers connect new audiences with the brand and help generate new followers, more engagement, and eventually, new potential guests at Kimpton Hotels.

Curalate Marketing Director Brendan Lowry wrote about taking over some of Kimpton’s Instagram accounts and posting photos of his own, like this one:

The caption links easily to his personal Instagram, which links back to the Kimpton account, helping his more than 27,000 followers find and interact with the hotel’s content.

Try an Instagram takeover by a micro-influencer to provide behind-the-scenes or unique looks at a brand or product. It’s more creative to feature photos taken by different people, and it directs Instagram traffic between the brand’s and the photographer’s accounts for mutually beneficial results — namely, more engagement and more followers.

3) Stitch Fix

Personal shopping website Stitch Fix invites micro-influencers to contribute content that the brand then promotes on Instagram.

In the post below, Stitch Fix’s Instagram bio linked to a post featuring a Q&A with a fashion blogger micro-influencer about how she dresses for her body type:

The micro-influencer also shared the image, mentioned Stitch Fix, and shared the blog post link on her personal Instagram profile.

This micro-influencer strategy works because it drives traffic to a brand’s blog and Instagram profile. Try reaching out to micro-influencers and offer to publish their content and cross-promote it on social media to generate engagement from their followers and readers.

4) Hawaiian Department of Tourism

Hawaii’s Department of Tourism tapped into the power of micro-influencers for its #LetHawaiiHappen Instagram campaign. It partnered with Instagram users who are travel bloggers or Hawaiian natives to share content promoting events and destinations so visitors and Hawaiians would be interested in traveling to check them out.

Hawaii’s Department of Tourism connected with photographer Rick Poon to showcase his visit to Hawaii and attract his audience to come visit.

After the campaign, 65% of people who saw the posts said they wanted to visit Hawaii (talk about effective). If you want to attract new followers and Instagram engagement, try reaching out to micro-influencers to promote an event or a location that their followers might want to check out.

Think Small

Are you on board with micro-influencers? Before you answer, consider the following.

There are a few downsides to this strategy. Notably, micro-influencer marketing works well on Instagram with visual products, such as a bright can of sparkling water or an eye-catching outfit. This might not be the best strategy for promoting complicated software or other technology. But remember, you can be creative. As long as you can find a micro-influencer to share a compelling Instagram post, you might be able to generate much more engagement.

Additionally, working with several micro-influencers takes a lot of effort: brands have to reach out to them on Instagram and manage several different relationships. However, we think the payoff is worth it for authentic and engaging Instagram posts.

Keep an eye on Instagram users who tag your brand or use a branded hashtag; they might just be your next big promoters. And if you want to learn more about influencer marketing or Instagram content promotion, read our guides on those topics next.

Reblogged 9 hours ago from blog.hubspot.com

How to use Shopping campaigns to increase enterprise leads for B2B

Metric Theory’s SEM campaign for RecycleAway delivered highly-targeted leads and increased revenue by 54%, earning the agency top honors at this year’s Search Engine Land Awards.

Please visit Search Engine Land for the full article.

Reblogged 12 hours ago from feeds.searchengineland.com

The 9 most overlooked benefits of social media

We may be a bit biased when it comes to talking up the benefits of social media, but fortunately, you don’t just have to take our word for it.

Food for thought: 59% of marketers are actively using social to support their lead generation and business goals. Meanwhile, it’s hard to find a brand that isn’t active on social media in some way, shape or form.

However, there are still plenty of critics who don’t see the benefits of social media from a business perspective.

Among the general public, social media often gets a bad rap for being a time-sink. More to the point for businesses, measuring ROI from social media can be difficult compared with more straightforward marketing channels (think: PPC, email marketing and so on).

So you may be asking yourself, "What are the benefits of social media?"

Well, we have an answer. Actually, we have nine of ’em.


Social media deserves your attention now more than ever.

Although it may not immediately result in a flood of cash or that "viral" moment you're hoping for, there's a lot that social can do for your brand in both the short and the long term.

Here’s our breakdown of the social media benefits that often fly under the radar for modern businesses.

1. The ability to uncover industry trends in real time

Simply put, social media is a potential goldmine of business intelligence.

How so? For starters, think about the transparent nature of social media: we're able to see unfiltered, real-time conversations between consumers and brands.

If you want to know what a brand is doing well or likewise what customers are complaining about, it’s all out there in the open.

And of course, your target audience's social activity and shared content can clue you in on industry trends. For example, Instagram hashtags such as #summerootd or #festivalfashion can surface everything from relevant influencers to the fashion trends currently all the rage.

[Image: spotting trends via hashtags]
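To make this concrete, here's a minimal Python sketch of hashtag trend-spotting. It assumes you've already collected a list of post captions from wherever you monitor mentions; the captions below are invented for illustration.

```python
import re
from collections import Counter

def top_hashtags(captions, n=5):
    """Tally hashtags across post captions and return the most common."""
    tags = []
    for caption in captions:
        # A hashtag here is "#" plus word characters; lowercase to merge variants.
        tags.extend(tag.lower() for tag in re.findall(r"#\w+", caption))
    return Counter(tags).most_common(n)

# Hypothetical captions collected from posts in your niche
captions = [
    "Ready for the weekend #summerOOTD #festivalfashion",
    "Packing light this year #festivalfashion",
    "Trying something new today #summerOOTD",
]
print(top_hashtags(captions))
# [('#summerootd', 2), ('#festivalfashion', 2)]
```

A dedicated tool does far more (volume over time, related terms, influencer surfacing), but the core signal is the same: which tags keep showing up.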

And if you want to tap into those trends via social conversations, look no further than social listening. For example, features such as trend reporting in Sprout Social help you home in on what customers discuss when they talk about your brand, including the key terms and hashtags associated with your business.

[Image: Twitter trends by topic and hashtag for each account]

However, these mentions represent more than just chatter.

For example, social conversations can clue you in on everything from which products your customers love to areas where your company might be falling short. A flurry of praise or customer complaints can spur you to take action based on follower feedback. In turn, brands can come up with real-time solutions and products that their customers will buzz about.

2. More comprehensive competitive analysis

Perhaps one of the biggest social media marketing benefits is the ability to spy on your competition.

What are they currently promoting? What sort of ads are they running? How is your content strategy different from theirs?

These answers don’t have to be question marks. By conducting social competitive analysis, you can uncover opportunities to step into a new lane in terms of content or advertising.

For example, maybe you realize that your competitors are crushing it with Facebook ads but their Instagram presence is totally lacking. In turn, you might explore influencer marketing or user-generated content campaigns for the sake of standing out from the crowd.

Looking at your competitors’ social performance is a cinch with tools like Sprout. Our competitor and sentiment analysis reports allow you to look head-to-head to monitor growth and engagement to ensure that you aren’t falling behind.

[Image: Sprout report showing the Instagram hashtags your competitors are using]
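Definitions vary, but one common engagement metric is interactions per follower. Here's a toy Python comparison, with invented numbers, just to show the shape of the head-to-head math a competitor report automates:

```python
def engagement_rate(likes, comments, followers):
    """One common definition: interactions per follower, as a percentage."""
    return 100 * (likes + comments) / followers

# Invented numbers: a smaller account can out-engage a bigger one
ours = engagement_rate(likes=1_200, comments=85, followers=40_000)
theirs = engagement_rate(likes=3_400, comments=310, followers=150_000)
print(f"us: {ours:.2f}%  them: {theirs:.2f}%")  # us: 3.21%  them: 2.47%
```

A gap like that, computed across every post and profile, is exactly the kind of thing these reports surface automatically.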

Through this analysis, you can also discover which pieces of your own content are scoring the most engagement. Understanding your top-performing content is likewise key to understanding how to break through the noise in your industry.

[Image: social media competitor content performance report]

3. Provide better customer service

According to the 2018 Sprout Social Index, nearly half of all consumers have already taken to social media to ask questions and raise concerns.

Unlike awkward phone calls or lengthy emails, social customer service is quick and to the point. Providing it means having meaningful, back-and-forth conversations with your customers, oftentimes in public view.

In other words, prospects, customers and competitors alike can see how you interact with your buyers. Putting positive interactions front-and-center is a huge plus for any business.

Whether it’s listening to feedback or addressing specific concerns, an ongoing advantage of social media is that it’s the perfect place to provide speedy service and let customers know that you’re there to lend a helping hand.

And of course, social media also represents a prime channel to gather customer feedback. Responding to questions and concerns signals that you’re invested in serving your audience.

Want to know if you’re providing stellar customer service? Sprout can help with that. Sprout’s suite of social listening tools includes sentiment which tracks your brand health, ensuring that your social mentions remain on the positive side. An influx of complaints or questions could signal big-picture problems with your customer success strategy.

[Image: social media sentiment analysis report]
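Real sentiment tools rely on trained language models, but a toy keyword tally illustrates the underlying idea: classify each mention, then watch the positive-to-negative ratio over time. Everything below (the word lists, the sample mentions) is made up for illustration.

```python
import re

# Toy word lists; production systems use trained models, not keyword sets.
POSITIVE = {"love", "great", "awesome", "thanks"}
NEGATIVE = {"broken", "worst", "refund", "disappointed"}

def sentiment(mention):
    """Crudely score a mention by counting positive vs. negative words."""
    words = set(re.findall(r"[a-z']+", mention.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

mentions = [
    "Love this product, thanks for the quick reply!",
    "Second order arrived broken. Worst experience yet.",
]
for m in mentions:
    print(f"{sentiment(m)}: {m}")
```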

4. Curate customer content and stories in a snap

Customer photos and success stories go hand in hand with higher engagement and conversion rates.

And there’s no better place to gather both than social media.

For example, monitoring your mentions and tags can help you uncover positive customer interactions that you can share with the rest of your followers. Many major brands and retailers regularly curate customer photos to use throughout their marketing campaigns.

This is exactly why having a branded hashtag is so important. By encouraging your customers to tag their content, you can uncover shareable posts that your followers will love while also making a connection with your customers.

So many brands center their social presence on customer experiences, and it's easy to see why. Social proof in the form of customer content not only shows that you have satisfied customers but also lends your brand a much-needed sense of authenticity.

5. Positioning power over your competition

This might seem like a no-brainer but it’s worth mentioning.

Simply having an active presence on a social channel represents positioning power for your business.

Think about it. Let’s say your closest competitor has an Instagram account that’s booming with customer photos, Stories and sleek snapshots showing off their product.

On the flip side, you have an Instagram profile that’s gathering cobwebs.

Not a good look, right? Consistently publishing on channels relevant to your business signals that you’re active and open to new customers.

Oh, and don’t forget the literal positioning power of social media in search engines when someone looks up your brand. Your Facebook or Instagram could very well be your business’ first impression on a customer versus your website. This again speaks to the importance of maintaining an active presence.

6. Build backlinks and a better search engine presence

The SEO impact of social media has been hotly debated for years.

That said, the relatively recent concept of "linkless backlinks" (unlinked brand mentions that search engines may treat as endorsement signals) suggests a correlation between social media and search performance.

In short, shares and click-throughs via social represent positive search signals to Google. If nothing else, social can be a sizable traffic source, provided you're tracking your social traffic in Google Analytics.
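One practical note on that attribution: Google Analytics can only credit social traffic cleanly if the links you share carry UTM parameters. Here's a minimal Python sketch of tagging links before sharing them; the URL and campaign name are placeholders.

```python
from urllib.parse import urlencode

def utm_link(url, source, medium="social", campaign=None):
    """Append standard UTM parameters so analytics can attribute the visit."""
    params = {"utm_source": source, "utm_medium": medium}
    if campaign:
        params["utm_campaign"] = campaign
    separator = "&" if "?" in url else "?"
    return url + separator + urlencode(params)

# Placeholder URL; tag the same article differently for each network
article = "https://example.com/blog/new-feature"
print(utm_link(article, source="twitter", campaign="launch"))
# https://example.com/blog/new-feature?utm_source=twitter&utm_medium=social&utm_campaign=launch
```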

Don’t neglect social media as a content distribution channel. A popular piece of content that scores hundreds of likes and shares can drive serious referral traffic to your site, especially when you optimize your social scheduling with something like Sprout’s patented ViralPost technology that ensures you’re utilizing optimal send times.

[Image: ViralPost identifying the best times to post on social media]

7. Appeal to younger, social-savvy customers

To say that social media has transformed the traditional advertising landscape would be the ultimate understatement.

Thinking of social media as a hip, "young" advertising channel might seem a bit clichéd, especially since it's capable of reaching customers of all ages.

That said, research reinforces that Gen Z is shaking up marketing by responding less and less to traditional advertising. Given that younger customers are growing up alongside social media, brands will need to adapt beyond in-your-face commercials and ads.

And right now Instagram is the place to be to reach those younger customers. According to recent social media demographics, the vast majority of users on Instagram are under the age of 30.

[Image: Instagram user demographics]

As younger consumers hop from platform to platform, brands should likewise expect to experiment with new ad channels. The popularity of user-generated content, Stories and influencers highlights the sort of authentic, experience-focused advertising that continues to appeal to the younger crowd.

8. Humanize your brand

Businesses today rightfully want to show off their human side.

And so many brands have benefited from dropping the "suit and tie" vibe in favor of getting personal with customers on social media.

MoonPie is one of the most popular examples, adopting a snarky, meme-heavy social presence that feels like it’s run by a teenager. Although this sort of approach to social isn’t for every brand, it’s definitely not what most customers would expect and therefore drives engagement.

Some brands manage to humanize themselves through philanthropy and activism. In a day and age where half of consumers want to see brands take a stand on social and political issues, brands like Ben & Jerry’s do a brilliant job of highlighting their human side.

Simply showing off your colleagues and coworkers is an easy way to put a face to your brand, much like Sprout does with our #TeamSprout series.

9. More top-of-the-funnel leads

Lastly, one of the key benefits of social media marketing is more leads, plain and simple.

Given that billions of people are already active on social media, it's a safe bet that your audience is there too. Whether through paid ads or content promotion, you can reel in more top-of-the-funnel leads by raising awareness of your brand.

In fact, 77% of Pinterest users discover new brands and make purchases based on what they see on the platform. A social presence that introduces people to what you're selling represents yet another way to score more sales. Even if these leads don't buy directly through social, the added awareness can set them on the path to becoming full-fledged buyers down the line.

[Image: top-of-funnel leads from social media]

And with that, we wrap up our list!

What are the biggest benefits of social media for your business?

A social media presence has become an expectation for brands rather than an exception to the rule.

Even so, businesses shouldn't latch onto social media "just because."

Instead, they should assess the potential benefits for themselves against specific, actionable goals. Although the impact of each benefit above varies from brand to brand, there's no denying the business implications of having a social presence.

Of course, we still want to hear from you! What do you see as the benefits of social media for business? Let us know in the comments below!

This post The 9 most overlooked benefits of social media originally appeared on Sprout Social.

Reblogged 16 hours ago from feedproxy.google.com