
5 Habits of the Lucky Email Marketer

Sometimes you have to make your own luck, especially when it comes to email marketing. There’s no better time than right now to adopt the email marketing best practices that will propel your campaigns to success. And while at the end of the day, you only have so much control over what happens once you…

The post 5 Habits of the Lucky Email Marketer appeared first on Benchmarkemail.

Reblogged 1 year ago from www.benchmarkemail.com

Has automation actually changed the way you do your job?; Wednesday’s daily brief

Plus, Google goes organic with its hotel search listings

Please visit Search Engine Land for the full article.

Reblogged 1 year ago from feeds.searchengineland.com

Key trends in PPC, reporting and analytics in 2021 and beyond

With COVID shifting consumer behavior in 2020, use these trends from speakers at SMX Report to find equilibrium in 2021.

Please visit Search Engine Land for the full article.

Reblogged 1 year ago from feeds.searchengineland.com

Social media demographics to inform your brand’s strategy in 2021

If you’re looking for the latest social media demographics for 2021, you’ve hit the jackpot.

We often talk about the importance of building a social strategy that’s driven by data. Demographics are no different.

Because the social space is constantly evolving with more networks available to marketers than ever before, you need to spend your time and budget wisely.

And while you may think you know which networks matter most based on your target audience, 2021’s numbers might surprise you.

Social media demographics: The numbers you need to know

From putting together personas to conducting market research, demographic data is an essential starting point for figuring out which networks and campaigns deserve your attention.

Want to know which apps are up-and-coming? Curious about new opportunities to cross-post your content? We’ve got you covered.

Below we’ve put together a list of must-know social media demographics for 2021 (and beyond). These numbers are based on the latest research and social media statistics available at the time of writing.

Plus, for each network, we’ve also highlighted key strategic takeaways to guide your marketing strategy moving forward.

Facebook demographics and usage

Takeaways from 2021’s Facebook demographics

  • Despite ongoing controversies, “adpocalypses” and ever-emerging competition, Facebook remains the most-used and engaged-with social platform.
  • The influx of boomers to Facebook highlights why it’s such a prime place to run ads, particularly among older demographics with more money to spend.
  • The notion that the younger crowd has totally ditched Facebook isn’t quite true (but as highlighted by some of our additional social media demographics below, it’s definitely not the younger crowd’s #1 platform).
  • Due to Facebook’s sheer size and engagement rate, it makes sense to advertise there in some way, shape or form (even if that means cross-posting).
  • In fact, Facebook’s ad revenue and general usage have been up in the midst of COVID-19. Again, the platform remains many people’s “home base” when it comes to social media.
[Image: Facebook age demographics]

Instagram demographics and usage

Takeaways from 2021’s Instagram demographics

  • Instagram’s steady growth is well-documented, cementing it as the second largest network after Facebook.
  • The fact that Facebook and Instagram share the same ad platform presents many cross-promotional opportunities for brands.
  • The introduction of Reels highlights Instagram’s intention to keep competing with TikTok, while new social ad platforms and formats signal opportunities with higher-earning demographics.
  • Instagram is facing fierce competition for younger users. In fact, TikTok (29%) recently surpassed Instagram (25%) as U.S. teens’ preferred social platform (both behind Snapchat).
  • Recent Instagram stats highlight the value of influencers and Instagram content for motivating shopping.
[Image: Instagram user growth]

Twitter demographics and usage

Takeaways based on Twitter demographics

  • Twitter’s usage, user-base and growth have remained fairly consistent year-to-year.
  • The platform’s relatively straightforward, shorter-term interactions signal it as a place to gather quick news or conduct customer service.
  • Twitter’s status as a place to discuss events and gather breaking news makes it a prime place to share content and drive discussions, but advertising is still tricky.
  • 80% of tweets come from 10% of the platform’s most active accounts, signaling the prominence of influencers and “power users” on Twitter.
  • Notably, 42% of Twitter users are degree-holders (compared to 31% of Americans), which highlights the platform’s skew toward higher earners (depending on your industry).
[Image: Twitter user data 2021]

LinkedIn demographics and usage

Takeaways based on LinkedIn demographics

  • Conventional wisdom might say that LinkedIn exclusively caters to an older audience, but millennials make up about one-fourth of the platform.
  • A higher-educated, higher-earning B2B demographic makes LinkedIn a potential goldmine for ads.
  • Marketing on LinkedIn is apples and oranges versus most other networks due to its older, primarily B2B audience.
  • According to LinkedIn itself, it’s the top-rated social network for lead generation, making it a great source for B2B marketers looking to find targeted and motivated audiences for their campaigns.
  • Over 70% of LinkedIn users live outside of the U.S., presenting opportunities for businesses and brands looking to grow their international audience.
[Image: LinkedIn membership distribution]

Pinterest demographics and usage

Takeaways based on Pinterest demographics

  • Pinterest’s predominantly female userbase highlights one of the most significant splits in social media demographics when it comes to gender.
  • Recently, the platform has seen a spike in usage among Gen Z and millennials (with the number of users under 25 growing twice as fast as users 25+ in Q2 2020).
  • The platform’s core userbase continues to be dedicated and isn’t going anywhere. This is driven home by Pinterest’s increased ad spend and organic usage.
  • Pinterest is perhaps the most product-focused of any social network: there are massive ad opportunities given the platform’s high-earning base.
  • Although TikTok, Snapchat and Instagram get the most attention when it comes to the younger social crowd, Pinterest still has a place among younger users.
[Image: 2021 social media demographics for Pinterest]

TikTok demographics and usage

Takeaways based on TikTok demographics

  • TikTok’s 100+ million monthly active users prove that the platform is here to stay (hint: don’t let anyone tell you TikTok is a trend).
  • The platform’s user-base is absolutely dedicated, with the average user spending a staggering 21.5 hours per month in 2020 (compared to 12.8 hours in 2019).
  • A younger user-base presents challenges in terms of ad-targeting, although that perhaps explains the boom of influencer marketing.
  • TikTok is notably seeing growing use among adults, which raises the question of whether the platform will continue to serve younger users or change course.
  • Expect emerging ad features in 2021 (and beyond) as the network grows.
[Image: TikTok demographics]

Snapchat demographics and usage

Takeaways based on Snapchat demographics

  • Although Snapchat may not be the most talked about trending network anecdotally, the platform experienced significant growth in 2020.
  • To that point, Snapchat remained U.S. teens’ network of choice over TikTok and Instagram in 2020, with 34% of teenage users describing it as their preferred platform.
  • Even if you aren’t active on Snapchat, it’s a prime place for trendspotting among younger users.
  • Similar to TikTok, Snapchat also has a highly engaged user base, opening the app 30 times per day on average.
  • Beyond the U.S., Snapchat experienced a staggering 100% YoY growth in daily active users in India.
[Image: Favorite social networks of teen audiences]

YouTube demographics and usage

Takeaways based on YouTube demographics

  • YouTube’s popularity among younger users highlights the ongoing, long-term shift toward video content.
  • Steady growth in ad revenue likewise signals not only the platform’s growth as a social network but also as a competitor of streaming services like Netflix.
  • Still, only 18% of YouTube users claim to use YouTube to discover brands and products. Brands still have a lot of work to do on the platform, finding a balance between entertainment and advertising.
  • Given that 62% of YouTube’s users log into the platform daily, it’s safe to say it is the go-to video network for the Internet at large.
  • Marketers note that YouTube is still a sort of land of opportunity advertising-wise, viewing it as a place to put their ad dollars in the future versus the likes of Facebook.
[Image: Social media demographics on YouTube]

Which social media demographics are you using?

Keep in mind that this data is generalized across millions of users and serves as a starting point for brands looking to prioritize their social platforms.

If you’re looking to expand your reach, we recommend looking into the demographic data from your own social presence to see how it compares to the averages above.

Whether these numbers confirm what you already suspected or serve as eye-openers, make sure to bookmark them as you iron out your social strategy for 2021.

And speaking of which, make sure to check out our free social media templates to keep you organized every step of the way.

This post Social media demographics to inform your brand’s strategy in 2021 originally appeared on Sprout Social.

Reblogged 1 year ago from feedproxy.google.com

How The Internet Happened: From Netscape to the iPhone

Brian McCullough, who runs the Internet History Podcast, also wrote a book named How The Internet Happened: From Netscape to the iPhone, which did a fantastic job of capturing the ethos of the early web and telling the backstory of so many people & projects behind its evolution.

I think the quote which best captures the magic of the early web is:

Jim Clark came from the world of machines and hardware, where development schedules were measured in years—even decades—and where “doing a startup” meant factories, manufacturing, inventory, shipping schedules and the like. But the Mosaic team had stumbled upon something simpler. They had discovered that you could dream up a product, code it, release it to the ether and change the world overnight. Thanks to the Internet, users could download your product, give you feedback on it, and you could release an update, all in the same day. In the web world, development schedules could be measured in weeks.

The closing idea in that quote, that development schedules could be measured in weeks, really captures the magic of the Internet & what pulled so many people toward the early web.

The current web – dominated by never-ending feeds & a variety of closed silos – is a big shift from the early days of web comics & other underground cool stuff people created & shared because they thought it was neat.

Many established players missed the actual direction of the web by trying to create something more akin to the web of today before the infrastructure could support it. Many of the “big things” driving web adoption relied heavily on chance – combined with a lot of hard work & a willingness to be responsive to feedback & data.

  • Even when Marc Andreessen moved to the valley he thought he was late and he had “missed the whole thing,” but he saw the relentless growth of the web & decided making another web browser was the play that made sense at the time.
  • Tim Berners-Lee was dismayed when Andreessen’s web browser enabled embedded image support in web documents.
  • Early Amazon review features were originally for editorial content from Amazon itself. Bezos originally wanted to launch a broad-based Amazon like it is today, but realized it would be too capital intensive & focused on books at the start so he could sell a known commodity with a long tail. Amazon was initially built off leveraging 2 book distributors (Ingram and Baker & Taylor) & R. R. Bowker’s Books In Print catalog. They also did clever hacks to meet minimum order requirements, like padding orders with out-of-stock books, so they could order only what customers had actually purchased.
  • eBay began as an /aw/ subfolder on the eBay domain name, which was hosted on a residential internet connection. Pierre Omidyar coded the auction service over Labor Day weekend in 1995. The domain had other sections focused on topics like ebola. It was switched from AuctionWeb to a standalone site only after the ISP started charging for a business line. It had no formal PayPal integration or anything like that; rather, when listings started to charge a commission, merchants would mail physical checks in to pay for the platform’s share of their sales. Beanie Babies also helped skyrocket platform usage.
  • The reason AOL carpet bombed the United States with CDs – at their peak half of all CDs produced were AOL CDs – was that their initial response rate was around 10%, a crazy number for untargeted direct mail.
  • Priceline was lucky to have survived the bubble as their idea was to spread broadly across other categories beyond travel & they were losing about $30 per airline ticket sold.
  • The broader web bubble left behind valuable infrastructure like unused fiber to fuel continued growth long after the bubble popped. The dot com bubble was possible in part because there was a secular bull market in bonds stemming back to the early 1980s & falling debt service payments increased financial leverage and company valuations.
  • TED members hissed at Bill Gross when he unveiled GoTo.com, which ranked “search” results based on advertiser bids.
  • Excite turned down offering the Google founders $1.6 million for the PageRank technology in part because, as Excite CEO George Bell recalled, Larry Page insisted “If we come to work for Excite, you need to rip out all the Excite technology and replace it with [our] search. And, ultimately, that’s—in my recollection—where the deal fell apart.”
  • Steve Jobs initially disliked the multi-touch technology that mobile would rely on, one of the early iPhone prototypes had the iPod clickwheel, and Apple was against offering an app store in any form. Steve Jobs so loathed his interactions with the record labels that he did not want to build a phone & first licensed iTunes to Motorola, where they made the horrible ROKR phone. He only ended up building a phone after Cingular / AT&T begged him to.
  • Wikipedia was originally launched as a backup feeder site that was to feed into Nupedia.
  • Even after Facebook had strong traction, Mark Zuckerberg kept working on other projects like a file sharing service. Facebook’s news feed was publicly hated, judging by the complaints, but it almost instantly led to a doubling of usage of the site, so they never dumped it. After spreading from college to college, Facebook struggled to expand to other audiences, & opening registration up to all was a Hail Mary move to see if it would rekindle growth instead of selling to Yahoo! for a billion dollars.

The book offers a lot of color on many important web-related companies.

And many companies which were only briefly mentioned also ran into the same sort of lucky breaks the above companies did. PayPal was heavily reliant on eBay for initial distribution, but even that was something they initially tried to block until it became so obvious they stopped fighting it:

“At some point I sort of quit trying to stop the EBay users and mostly focused on figuring out how to not lose money,” Levchin recalls. … In the late 2000s, almost a decade after it first went public, PayPal was drifting toward obsolescence and consistently alienating the small businesses that paid it to handle their online checkout. Much of the company’s code was being written offshore to cut costs, and the best programmers and designers had fled the company. … PayPal’s conversion rate is lights-out: Eighty-nine percent of the time a customer gets to its checkout page, he makes the purchase. For other online credit and debit card transactions, that number sits at about 50 percent.

Here is a podcast interview of Brian McCullough by Chris Dixon.

How The Internet Happened: From Netscape to the iPhone is a great book well worth a read for anyone interested in the web.

Reblogged 1 year ago from feedproxy.google.com

Keyword Not Provided, But it Just Clicks

When SEO Was Easy

When I got started on the web over 15 years ago I created an overly broad & shallow website that had little chance of making money because it was utterly undifferentiated and crappy. In spite of my best (worst?) efforts while being a complete newbie, sometimes I would go to the mailbox and see a check for a couple hundred or a couple thousand dollars come in. My old roommate & I went to Coachella & when the trip was over I returned to a bunch of mail to catch up on & realized I had made way more while not working than what I spent on that trip.

What was the secret to a total newbie making decent income by accident?

Horrible spelling.

Back then search engines were not as sophisticated with their spelling correction features & I was one of 3 or 4 people in the search index that misspelled the name of an online casino the same way many searchers did.

The high-minded excuse for why I did not scale that would be claiming I knew it was a temporary trick that was somehow beneath me. The more accurate reason would be thinking it was in part a lucky fluke rather than thinking in systems. If I were clever at the time I would have created the misspeller’s guide to online gambling, though I think I was just so excited to make anything from the web that I perhaps lacked the ambition & foresight to scale things back then.

In the decade that followed I had a number of other lucky breaks like that. One time, one of the original internet bubble companies that managed to stick around put up a sitewide footer link targeting the concept that one of my sites made decent money from. This was just before the great recession, before Panda existed. The concept they targeted had 3 or 4 ways to describe it, 2 of which were very profitable, & if they had targeted either of the most profitable versions with that page, the targeting would have sort of carried over to both. They would have outranked me if they had targeted the correct version, but they didn’t, so their mistargeting was a huge win for me.

Search Gets Complex

Search today is much more complex. In the years since those easy-n-cheesy wins, Google has rolled out many updates which aim to feature sought-after destination sites while diminishing the sites which rely on “one simple trick” to rank.

Arguably the quality of the search results has improved significantly as search has become more powerful, more feature rich & has layered in more relevancy signals.

Many quality small web publishers have gone away due to some combination of increased competition, algorithmic shifts & uncertainty, and reduced monetization as more ad spend was redirected toward Google & Facebook. But the impact as felt by any given publisher is not the impact as felt by the ecosystem as a whole. Many terrible websites have also gone away, while some formerly obscure though higher-quality sites rose to prominence.

There was the Vince update in 2009, which boosted the rankings of many branded websites.

Then in 2011 there was Panda as an extension of Vince, which tanked the rankings of many sites that published hundreds of thousands or millions of thin content pages while boosting the rankings of trusted branded destinations.

Then there was Penguin, a penalty that hit many websites which had heavily manipulated or otherwise aggressive-appearing link profiles. Google felt there was a lot of noise in the link graph, which was their justification for Penguin.

There were updates which lowered the rankings of many exact match domains. And then increased ad load in the search results along with the other above ranking shifts further lowered the ability to rank keyword-driven domain names. If your domain is generically descriptive then there is a limit to how differentiated & memorable you can make it if you are targeting the core market the keywords are aligned with.

There is a reason eBay is more popular than auction.com, Google is more popular than search.com, Yahoo is more popular than portal.com & Amazon is more popular than store.com or shop.com. When the winner-take-most dynamic of many online markets is coupled with the move away from classic relevancy signals, the economics shift to where it makes a lot more sense to carry the heavy overhead of establishing a strong brand.

Branded and navigational search queries could be used in the relevancy algorithm stack to confirm the quality of a site & verify (or dispute) the veracity of other signals.

Historically relevant algo shortcuts become less appealing as they become less relevant to the current ecosystem & even less aligned with the future trends of the market. Add in negative incentives for pushing on a string (penalties on top of wasting the capital outlay) and a more holistic approach certainly makes sense.

Modeling Web Users & Modeling Language

PageRank was an attempt to model the random surfer.
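
The random surfer model is concrete enough to sketch in a few lines: rank is the stationary share of visits from a surfer who mostly follows links & occasionally teleports to a random page. Below is the textbook power-iteration form in Python, not Google’s production implementation; the tiny link graph is made up for illustration:

```python
def pagerank(links, d=0.85, iterations=50):
    """Power-iteration PageRank: the stationary distribution of a surfer
    who follows a random outlink with probability d and otherwise jumps
    to a random page. links maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        nxt = {p: (1.0 - d) / n for p in pages}
        for p, outlinks in links.items():
            share = pr[p] / len(outlinks) if outlinks else 0.0
            for q in outlinks:
                nxt[q] += d * share
        pr = nxt
    return pr

# Tiny 3-page web: everything links to "c", so "c" accumulates the most rank.
print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```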

When Google is pervasively monitoring most users across the web they can shift to directly measuring their behaviors instead of using indirect signals.

Years ago Bill Slawski wrote about the long click, opening with a quote from Steven Levy’s In the Plex: How Google Thinks, Works, and Shapes our Lives:

“On the most basic level, Google could see how satisfied users were. To paraphrase Tolstoy, happy users were all the same. The best sign of their happiness was the “Long Click” — This occurred when someone went to a search result, ideally the top one, and did not return. That meant Google had successfully fulfilled the query.”

Of course, there’s a patent for that. In Modifying search result ranking based on implicit user feedback they state:

user reactions to particular search results or search result lists may be gauged, so that results on which users often click will receive a higher ranking. The general assumption under such an approach is that searching users are often the best judges of relevance, so that if they select a particular search result, it is likely to be relevant, or at least more relevant than the presented alternatives.

If you are a known brand you are more likely to get clicked on than a random unknown entity in the same market.

And if you are something people are specifically seeking out, they are likely to stay on your website for an extended period of time.

One aspect of the subject matter described in this specification can be embodied in a computer-implemented method that includes determining a measure of relevance for a document result within a context of a search query for which the document result is returned, the determining being based on a first number in relation to a second number, the first number corresponding to longer views of the document result, and the second number corresponding to at least shorter views of the document result; and outputting the measure of relevance to a ranking engine for ranking of search results, including the document result, for a new search corresponding to the search query. The first number can include a number of the longer views of the document result, the second number can include a total number of views of the document result, and the determining can include dividing the number of longer views by the total number of views.
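
Read literally, the patent’s core computation is a simple ratio. Here is a minimal sketch of it in Python; the 60-second cutoff separating “longer” from “shorter” views is an assumption for illustration, since the patent does not pin down a threshold:

```python
def long_click_relevance(view_durations, long_view_threshold=60.0):
    """The patent's measure: number of longer views of a document result
    divided by its total number of views, within the context of one query.

    view_durations: dwell times in seconds for one document. The 60-second
    threshold is an illustrative assumption; the patent specifies no cutoff."""
    total_views = len(view_durations)
    if total_views == 0:
        return 0.0
    longer_views = sum(1 for d in view_durations if d >= long_view_threshold)
    return longer_views / total_views

# Most searchers stayed a while on this result -> relatively high relevance.
print(long_click_relevance([120.0, 45.0, 300.0, 8.0, 90.0]))  # 0.6
```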

Attempts to manipulate such data may not work.

safeguards against spammers (users who generate fraudulent clicks in an attempt to boost certain search results) can be taken to help ensure that the user selection data is meaningful, even when very little data is available for a given (rare) query. These safeguards can include employing a user model that describes how a user should behave over time, and if a user doesn’t conform to this model, their click data can be disregarded. The safeguards can be designed to accomplish two main objectives: (1) ensure democracy in the votes (e.g., one single vote per cookie and/or IP for a given query-URL pair), and (2) entirely remove the information coming from cookies or IP addresses that do not look natural in their browsing behavior (e.g., abnormal distribution of click positions, click durations, clicks_per_minute/hour/day, etc.). Suspicious clicks can be removed, and the click signals for queries that appear to be spammed need not be used (e.g., queries for which the clicks feature a distribution of user agents, cookie ages, etc. that do not look normal).
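
A toy version of those safeguards might look like the sketch below: keep one vote per cookie/IP for a given query-URL pair & discard users whose click rate violates a crude stand-in for the patent’s user model. The field names and the 100-clicks-per-hour cutoff are invented for illustration:

```python
def filter_click_votes(clicks, max_clicks_per_hour=100):
    """clicks: list of dicts with keys 'cookie', 'ip', 'query', 'url' and
    'clicks_per_hour'. Returns deduplicated votes, dropping users whose
    behavior violates a (hypothetical) user model."""
    votes = {}
    for c in clicks:
        # Remove users whose browsing behavior does not look natural; this
        # crude rate check stands in for the patent's richer user model.
        if c["clicks_per_hour"] > max_clicks_per_hour:
            continue
        # "Democracy in the votes": one vote per cookie/IP per query-URL pair.
        key = (c["cookie"], c["ip"], c["query"], c["url"])
        votes[key] = c
    return list(votes.values())
```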

And just like Google can make a matrix of documents & queries, they could also choose to put more weight on search accounts associated with topical expert users based on their historical click patterns.

Moreover, the weighting can be adjusted based on the determined type of the user both in terms of how click duration is translated into good clicks versus not-so-good clicks, and in terms of how much weight to give to the good clicks from a particular user group versus another user group. Some user’s implicit feedback may be more valuable than other users due to the details of a user’s review process. For example, a user that almost always clicks on the highest ranked result can have his good clicks assigned lower weights than a user who more often clicks results lower in the ranking first (since the second user is likely more discriminating in his assessment of what constitutes a good result). In addition, a user can be classified based on his or her query stream. Users that issue many queries on (or related to) a given topic T (e.g., queries related to law) can be presumed to have a high degree of expertise with respect to the given topic T, and their click data can be weighted accordingly for other queries by them on (or related to) the given topic T.
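
In the same spirit, the weighting described above might be sketched like this; every threshold and multiplier here is an invented placeholder rather than a known Google value:

```python
def weighted_click_score(clicks):
    """clicks: list of dicts with 'dwell' (seconds), 'user_topic_queries'
    (the user's past query count on this topic) and 'always_clicks_top'
    (True for users who reflexively click the #1 result)."""
    score = 0.0
    for c in clicks:
        good = 1.0 if c["dwell"] >= 60 else 0.2    # long vs. short click
        if c["always_clicks_top"]:
            good *= 0.5                            # less discriminating user
        if c["user_topic_queries"] >= 50:
            good *= 2.0                            # presumed topical expert
        score += good
    return score

print(weighted_click_score([
    {"dwell": 240, "user_topic_queries": 120, "always_clicks_top": False},
    {"dwell": 5, "user_topic_queries": 0, "always_clicks_top": True},
]))  # 2.0 + 0.1 = 2.1
```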

Google was using click data to drive their search rankings as far back as 2009. David Naylor was perhaps the first person who publicly spotted this. Google was ranking Australian websites for [tennis court hire] in the UK & Ireland, in part because that is where most of the click signal came from. That phrase was most widely searched for in Australia. In the years since Google has done a better job of geographically isolating clicks to prevent things like the problem David Naylor noticed, where almost all search results in one geographic region came from a different country.

Whenever SEOs mention using click data to search engineers, the search engineers quickly respond that while they might consider any signal, clicks would be a noisy signal. But if a signal has noise, an engineer would work around it by finding ways to filter the noise out or by combining multiple signals. To this day Google states they are still working to filter noise from the link graph: “We continued to protect the value of authoritative and relevant links as an important ranking signal for Search.”

The site with millions of inbound links, few intentional visits & visitors who quickly click the back button (due to a heavy ad load, poor user experience, low-quality content, shallow content, outdated content, or some other bait-n-switch approach)…that’s an outlier. Preventing those sorts of sites from ranking well would be another way of protecting the value of authoritative & relevant links.

Best Practices Vary Across Time & By Market + Category

Along the way, concurrent with the above sorts of updates, Google also improved their spelling auto-correct features, auto-completed search queries for many years through a feature called Google Instant (though they later undid forced query auto-completion while retaining automated search suggestions), and then rolled out a few other algorithms that further allowed them to model language & user behavior.

Today it would be much harder to get paid above median wages explicitly for sucking at basic spelling or scaling some other individual shortcut to the moon, like pouring millions of low quality articles into a (formerly!) trusted domain.

Nearly a decade after Panda, eHow’s rankings still haven’t recovered.

Back when I got started with SEO, the phrase “Indian SEO company” was associated with cut-rate work where people were buying exclusively based on price. Sort of like: “I’ve got a $500 budget for link building, but cannot under any circumstance invest more than $5 in any individual link.” Part of how my wife met me was she hired a hack SEO from San Diego who outsourced all the work to India and marked the price up about 100-fold while claiming it was all done in the United States. He created reciprocal links pages that got her site penalized & it didn’t rank until after she took her reciprocal links page down.

With that sort of behavior widespread (a hack US firm teaching people working in an emerging market poor practices), it likely meant many SEO “best practices” learned in an emerging market (particularly where the web was also underdeveloped) would be more inclined to be spammy. Considering how far ahead many Western markets were on the early Internet, how India has so many languages & how most web usage in India is on mobile devices where it is hard for users to create links, it only makes sense that Google would want to place more weight on end user data in such a market.

If you set your computer location to India Bing’s search box lists 9 different languages to choose from.

The above is not to state anything derogatory about any emerging market, but rather that various signals are stronger in some markets than others. And competition is stronger in some markets than others.

Search engines can only rank what exists.

“In a lot of Eastern European – but not just Eastern European markets – I think it is an issue for the majority of the [bream? muffled] countries, for the Arabic-speaking world, there just isn’t enough content as compared to the percentage of the Internet population that those regions represent. I don’t have up to date data, I know that a couple years ago we looked at Arabic for example and then the disparity was enormous. So if I’m not mistaken the Arabic speaking population of the world is maybe 5 to 6%, maybe more, correct me if I am wrong. But very definitely the amount of Arabic content in our index is several orders below that. So that means we do not have enough Arabic content to give to our Arabic users even if we wanted to. And you can exploit that amazingly easily, and if you create a bit of content in Arabic, whatever it looks like, we’re gonna go, you know, we don’t have anything else to serve this, and it ends up being horrible. And people will say, you know, this works. I keyword stuffed the hell out of this page, bought some links, and there it is: number one. There is nothing else to show, so yeah, you’re number one. The moment somebody actually goes out and creates high quality content that’s there for the long haul, you’ll be out and that will be number one.” – Andrey Lipattsev, Search Quality Senior Strategist at Google Ireland, on Mar 23, 2016

Impacting the Economics of Publishing

Now search engines can certainly influence the economics of various types of media. At one point some otherwise credible media outlets were pitching the Demand Media IPO narrative that Demand Media was the publisher of the future & what other media outlets would look like. Years later, after heavily squeezing the partner network & promoting programmatic advertising that reduces CPMs by the day, Google is funding partnerships with multiple news publishers like McClatchy & Gatehouse to try to revive the news dead zones even Facebook is struggling with.

“Facebook Inc. has been looking to boost its local-news offerings since a 2017 survey showed most of its users were clamoring for more. It has run into a problem: There simply isn’t enough local news in vast swaths of the country. … more than one in five newspapers have closed in the past decade and a half, leaving half the counties in the nation with just one newspaper, and 200 counties with no newspaper at all.”

As mainstream newspapers continue laying off journalists, Facebook’s news efforts are likely to continue failing unless they include direct economic incentives, as Google’s programmatic ad push broke the banner ad:

“Thanks to the convoluted machinery of Internet advertising, the advertising world went from being about content publishers and advertising context—The Times unilaterally declaring, via its ‘rate card’, that ads in the Times Style section cost $30 per thousand impressions—to the users themselves and the data that targets them—Zappo’s saying it wants to show this specific shoe ad to this specific user (or type of user), regardless of publisher context. Flipping the script from a historically publisher-controlled mediascape to an advertiser (and advertiser intermediary) controlled one was really Google’s doing. Facebook merely rode the now-cresting wave, borrowing outside media’s content via its own users’ sharing, while undermining media’s ability to monetize via Facebook’s own user-data-centric advertising machinery. Conventional media lost both distribution and monetization at once, a mortal blow.”

Google is offering news publishers audience development & business development tools.

Heavy Investment in Emerging Markets Quickly Evolves the Markets

As the web grows rapidly in India, a thousand flowers will bloom. In 5 years the competition in India & other emerging markets will be much tougher as those markets mature. Media is much cheaper to produce in India than it is in the United States. Labor costs are lower & they never had the economic albatross that is the ACA adversely impact their economy. At some point the level of investment & increased competition will mean early techniques stop having as much efficacy. Chinese companies are aggressively investing in India.

“If you break India into a pyramid, the top 100 million (urban) consumers who think and behave more like Americans are well-served,” says Amit Jangir, who leads India investments at 01VC, a Chinese venture capital firm based in Shanghai. The early stage venture firm has invested in micro-lending firms FlashCash and SmartCoin based in India. The new target is the next 200 million to 600 million consumers, who do not have a go-to entertainment, payment or ecommerce platform yet— and there is gonna be a unicorn in each of these verticals, says Jangir, adding that it will not be as easy for a player to win this market considering the diversity and low ticket sizes.

RankBrain

RankBrain appears to be based on using user clickpaths on head keywords to help bleed rankings across into related searches which are performed less frequently. No Googler has stated this specifically, but it is how Google would be able to use models of searcher behavior to refine search results for keywords which are rarely searched for.

In a recent interview in Scientific American a Google engineer stated: “By design, search engines have learned to associate short queries with the targets of those searches by tracking pages that are visited as a result of the query, making the results returned both faster and more accurate than they otherwise would have been.”
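
Nobody outside Google knows RankBrain’s internals, but the bleed-over idea described above can be sketched as borrowing click-validated results from the nearest head query in an embedding space. Everything below (the toy vectors, the example queries, the similarity threshold) is invented for illustration:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical head queries: an embedding plus click-validated results.
head_queries = {
    "cheap flights": ([0.9, 0.1, 0.0], ["kayak.com", "skyscanner.com"]),
    "credit cards": ([0.0, 0.8, 0.3], ["nerdwallet.com", "bankrate.com"]),
}

def results_for_rare_query(query_vec, min_similarity=0.8):
    """Borrow results from the most similar head query, if close enough."""
    best_name, best_sim = None, 0.0
    for name, (vec, _) in head_queries.items():
        sim = cosine(query_vec, vec)
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_sim >= min_similarity:
        return head_queries[best_name][1]
    return []  # fall back to conventional retrieval

# A rare query like "inexpensive airfare" might embed near "cheap flights".
print(results_for_rare_query([0.85, 0.15, 0.05]))
```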

Now a person might go out and try to search for something a bunch of times, or pay other people to search for a topic and click a specific listing, but some of the related Google patents on using click data (which keep getting updated) mention how they can discount or turn off the signal if there is an unnatural spike of traffic on a specific keyword, or if there is an unnatural spike of traffic heading to a particular website or web page.

And, since Google is tracking the behavior of end users on their own website, anomalous behavior is easier to spot there than it is across the broader web, where signals are more indirect. Google can also take advantage of the wide distribution of Chrome & Android, where users are regularly logged into Google & pervasively tracked, to place more weight on users with credit card data on file, a long account history with regular search behavior, heavy Gmail usage, etc.

Plus there is a huge gap between the cost of traffic & the ability to monetize it. You might have to pay someone a dime or a quarter to search for something, & there is no guarantee it will work on a sustainable basis even if you paid hundreds or thousands of people to do it. Any of those experimental searches will have no lasting value unless they influence rank, and even if they do influence rankings it might only last temporarily. If you bought a bunch of traffic into something genuine Google searchers didn’t like, then even if it started to rank better temporarily, the rankings would quickly fall back as the real end user searchers disliked the site relative to other sites which already rank.

This is part of the reason why so many SEO blogs mention brand, brand, brand. If people are specifically looking for you in volume & Google can see that thousands or millions of people specifically want to access your site then that can impact how you rank elsewhere.

Even looking at something inside the search results for a while (dwell time) or quickly skipping over it to have a deeper scroll depth can be a ranking signal. Some Google patents mention how they can use mouse pointer location on desktop or scroll data from the viewport on mobile devices as a quality signal.

Neural Matching

Last year Danny Sullivan mentioned how Google rolled out neural matching to better understand the intent behind a search query.

Sullivan’s tweets captured what the neural matching technology intends to do. Google also stated:

we’ve now reached the point where neural networks can help us take a major leap forward from understanding words to understanding concepts. Neural embeddings, an approach developed in the field of neural networks, allow us to transform words to fuzzier representations of the underlying concepts, and then match the concepts in the query with the concepts in the document. We call this technique neural matching.
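
A crude approximation of that concept-matching idea: represent a query and a document as averaged word vectors & compare the resulting “concept” vectors, so texts sharing no literal words can still match. The tiny 2-D embeddings below are fabricated; real systems learn vectors with hundreds of dimensions:

```python
import math

# Fabricated 2-D "embeddings" for illustration only.
embeddings = {
    "change": [0.9, 0.2], "tv": [0.1, 0.9], "brightness": [0.8, 0.3],
    "adjust": [0.85, 0.25], "television": [0.15, 0.85], "backlight": [0.75, 0.35],
}

def concept_vector(words):
    """Average the word vectors into a fuzzy 'concept' representation."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)] if vecs else [0.0, 0.0]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# No shared words, yet the concept vectors land nearly on top of each other.
q = concept_vector(["change", "tv", "brightness"])
d = concept_vector(["adjust", "television", "backlight"])
print(round(cosine(q, d), 3))  # close to 1.0
```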

To help people understand the difference between neural matching & RankBrain, Google told SEL: “RankBrain helps Google better relate pages to concepts. Neural matching helps Google better relate words to searches.”

There are a couple research papers on neural matching.

The first one was titled A Deep Relevance Matching Model for Ad-hoc Retrieval. It mentioned using Word2vec, & here are a few quotes from the research paper:

  • “Successful relevance matching requires proper handling of the exact matching signals, query term importance, and diverse matching requirements.”
  • “the interaction-focused model, which first builds local level interactions (i.e., local matching signals) between two pieces of text, and then uses deep neural networks to learn hierarchical interaction patterns for matching.”
  • “according to the diverse matching requirement, relevance matching is not position related since it could happen in any position in a long document.”
  • “Most NLP tasks concern semantic matching, i.e., identifying the semantic meaning and inferring the semantic relations between two pieces of text, while the ad-hoc retrieval task is mainly about relevance matching, i.e., identifying whether a document is relevant to a given query.”
  • “Since the ad-hoc retrieval task is fundamentally a ranking problem, we employ a pairwise ranking loss such as hinge loss to train our deep relevance matching model.”

The paper mentions how semantic matching falls down when compared against relevancy matching because:

  • semantic matching relies on similarity matching signals (some words or phrases with the same meaning might be semantically distant), compositional meanings (matching sentences more than meaning) & a global matching requirement (comparing things in their entirety instead of looking at the best matching part of a longer document); whereas,
  • relevance matching can put significant weight on exact matching signals (weighting an exact match higher than a near match), adjust weighting on query term importance (one word or phrase in a search query might have a far higher discrimination value & deserve far more weight than the next) & leverage diverse matching requirements (allowing relevancy matching to happen in any part of a longer document); the sketch below this list makes the contrast concrete
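
That contrast can be made concrete with a toy scorer exhibiting the three relevance-matching properties above: exact matches outweigh near matches, rarer query terms count for more (an IDF-style weight standing in for term importance), and a match anywhere in the document counts. This is a hand-rolled illustration, not the DRMM architecture itself, which learns these interactions with a neural network:

```python
import math

# Hypothetical near-match table; DRMM instead measures closeness with
# Word2vec-style embedding similarities.
SYNONYMS = {"car": {"auto", "vehicle"}, "cheap": {"inexpensive", "budget"}}

def relevance_score(query_terms, doc_terms, doc_freq, n_docs):
    """Toy relevance matching: exact > near match, IDF-weighted terms,
    position-independent matching anywhere in the document."""
    doc_set = set(doc_terms)
    score = 0.0
    for term in query_terms:
        idf = math.log(n_docs / (1 + doc_freq.get(term, 0)))  # term importance
        if term in doc_set:                        # exact matching signal
            score += 1.0 * idf
        elif SYNONYMS.get(term, set()) & doc_set:  # weaker near-match signal
            score += 0.4 * idf
    return score

doc = "our budget auto repair shop fixes any vehicle fast".split()
# "repair" matches exactly and is rare, so it dominates the score (~6.1).
print(relevance_score(["cheap", "car", "repair"], doc,
                      {"cheap": 50, "car": 80, "repair": 20}, n_docs=1000))
```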

Here are a couple images from the above research paper.

The second research paper is Deep Relevance Ranking Using Enhanced Document-Query Interactions, which notes:

“interaction-based models are less efficient, since one cannot index a document representation independently of the query. This is less important, though, when relevancy ranking methods rerank the top documents returned by a conventional IR engine, which is the scenario we consider here.”

That same sort of re-ranking concept is being better understood across the industry. There are ranking signals that earn some base level ranking, and then results get re-ranked based on other factors like how well a result matches the user intent.

Here are a couple images from the above research paper.

For those who hate the idea of reading research papers or patent applications, Martinibuster also wrote about the technology here. About the only part of his post I would debate is this one:

“Does this mean publishers should use more synonyms? Adding synonyms has always seemed to me to be a variation of keyword spamming. I have always considered it a naive suggestion. The purpose of Google understanding synonyms is simply to understand the context and meaning of a page. Communicating clearly and consistently is, in my opinion, more important than spamming a page with keywords and synonyms.”

I think one should always consider user experience over other factors, however a person could still use variations throughout the copy & pick up a bit more traffic without coming across as spammy. Danny Sullivan mentioned the super synonym concept was impacting 30% of search queries, so there are still a lot which may only be available to those who use a specific phrase on their page.

Martinibuster also wrote another blog post tying more research papers & patents to the above. You could probably spend a month reading all the related patents & research papers.

The above sort of language modeling & end user click feedback complement links-based ranking signals in a way that makes it much harder to luck one’s way into any form of success by being a terrible speller or just bombing away at link manipulation without much concern toward any other aspect of the user experience or the market you operate in.

Pre-penalized Shortcuts

Google was even issued a patent for predicting site quality based upon the N-grams used on a site, comparing those against the N-grams used on other established sites where quality has already been scored via other methods: “The phrase model can be used to predict a site quality score for a new site; in particular, this can be done in the absence of other information. The goal is to predict a score that is comparable to the baseline site quality scores of the previously-scored sites.”
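
The phrase-model idea can be sketched as a similarity-weighted lookup: represent a new site by its n-gram counts & predict a quality score from previously scored sites with overlapping n-grams. All data and weighting below are fabricated, since the patent does not disclose the production model’s features:

```python
from collections import Counter

def ngrams(text, n=3):
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def predict_site_quality(new_text, scored_sites):
    """Weight each previously scored site's baseline quality score by its
    n-gram overlap with the new site (a crude stand-in for the patent's
    phrase model)."""
    new_grams = ngrams(new_text)
    weighted, total = 0.0, 0.0
    for text, score in scored_sites:
        overlap = sum((new_grams & ngrams(text)).values())  # shared n-grams
        weighted += overlap * score
        total += overlap
    return weighted / total if total else None  # None: no evidence either way

# Fabricated baseline: spun/PLR boilerplate scored low, original prose high.
scored = [
    ("click here to buy cheap products online best price guaranteed", 0.2),
    ("our lab tested twelve air purifiers over three months of daily use", 0.9),
]
print(predict_site_quality(
    "buy cheap products online best price for cheap products online", scored))
```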

Have you considered using a PLR package to generate the shell of your site’s content? Good luck with that as some sites trying that shortcut might be pre-penalized from birth.

Navigating the Maze

When I started in SEO, one of my friends had a dad who was vastly smarter than I am. He advised me that Google engineers were smarter, had more capital, more exposure, more data, etc etc etc … and thus SEO was ultimately going to be a malinvestment.

Back then he was at least partially wrong because influencing search was so easy.

But in the current market, 16 years later, we are near the inflection point where he would finally be right.

At some point the shortcuts stop working & it makes sense to try a different approach.

The flip side of all the above changes is that as the algorithms have become more complex, they have gone from being a headwind to people ignorant about SEO to being a tailwind for those who do not focus excessively on SEO in isolation.

If one is a dominant voice in a particular market, if they break industry news, if they have key exclusives, if they spot & name the industry trends, if their site becomes a must-read & is what amounts to a habit … then they perhaps become viewed as an entity. Entity-related signals then help them; the same signals that work against people who merely lucked into a bit of success become a tailwind rather than a headwind.

If your work defines your industry, then any efforts to model entities, user behavior or the language of your industry are going to boost your work on a relative basis.

This requires sites to publish frequently enough to be a habit, or publish highly differentiated content which is strong enough that it is worth the wait.

Those which publish frequently without being particularly differentiated are almost guaranteed to eventually walk into a penalty of some sort. And each additional person who reads marginal, undifferentiated content (particularly if it has an ad-heavy layout) brings that site one visitor closer to eventually getting whacked. Success becomes self-regulating. Any short-term success becomes self-defeating if one has a highly opportunistic short-term focus.

Those who write content that only they could write are more likely to have sustained success.

Reblogged 1 year ago from feedproxy.google.com

The Fractured Web

Anyone can argue about the intent of a particular action & the outcome that is derived from it. But when the outcome is known and does not change, and it flows from a source of power, at some point the intent is inferred.

Or, put another way, if a powerful entity (government, corporation, other organization) disliked an outcome which appeared to benefit them in the short term at great lasting cost to others, they could spend resources to adjust the system.

If they don’t spend those resources (or, rather, spend them on lobbying rather than improving the ecosystem) then there is no desired change. The outcome is as desired. Change is unwanted.

News is a stock vs flow market where the flow of recent events drives most of the traffic to articles. News that is more than a couple days old is no longer news. A news site which stops publishing news stops being a habit & quickly loses relevancy. Algorithmically, an abandoned archive of old news articles doesn’t look much different than eHow, in spite of having a much higher cost structure.

According to SEMrush’s traffic rank, ampproject.org gets more monthly visits than Yahoo.com.

That actually understates the prevalence of AMP because AMP is generally designed for mobile AND not all AMP-formatted content is displayed on ampproject.org.

Part of how AMP was able to get widespread adoption was because in the news vertical the organic search result set was displaced by an AMP block. If you were a news site either you were so differentiated that readers would scroll past the AMP block in the search results to look for you specifically, or you adopted AMP, or you were doomed.

Some news organizations like The Guardian have a team of about a dozen people reformatting their content for the duplicative & proprietary AMP format. That’s wasteful, but necessary: “In theory, adoption of AMP is voluntary. In reality, publishers that don’t want to see their search traffic evaporate have little choice. New data from publisher analytics firm Chartbeat shows just how much leverage Google has over publishers thanks to its dominant search engine.”

It seems more than a bit backward that low margin publishers are doing duplicative work to distance themselves from their own readers while improving the profit margins of monopolies. But it is what it is. And that no doubt drew the ire of many publishers across the EU.

And now there are AMP Stories to eat up even more visual real estate.

If you spent a bunch of money to create a highly differentiated piece of content, why would you prefer that high spend flagship content appear on a third party website rather than your own?

Google & Facebook have done such a fantastic job of eating the entire pie that some are celebrating Amazon as a prospective savior to the publishing industry. That view – IMHO – is rather suspect.

Where any of the tech monopolies dominate they cram down on partners. The New York Times acquired The Wirecutter in Q4 of 2016. In Q1 of 2017 Amazon adjusted their affiliate fee schedule.

Amazon generally treats consumers well, but they have been much harder on business partners: tough pricing negotiations, counterfeit protections, forced ad buying to have a high enough product rank to be able to rank organically, ad displacement of their organic search results below the fold (even for branded search queries), learning from suppliers & cutting out the partners, private label products patterned after top sellers, in some cases running pop-over ads for the private label products on product-level pages where brands already spent money to drive traffic to the page, etc.

They’ve made things tougher for their partners in a way that mirrors the impact Facebook & Google have had on online publishers:

“Boyce’s experience on Amazon largely echoed what happens in the offline world: competitors entered the market, pushing down prices and making it harder to make a profit. So Boyce adapted. He stopped selling basketball hoops and developed his own line of foosball tables, air hockey tables, bocce ball sets and exercise equipment. The best way to make a decent profit on Amazon was to sell something no one else had and create your own brand. … Amazon also started selling bocce ball sets that cost $15 less than Boyce’s. He says his products are higher quality, but Amazon gives prominent page space to its generic version and wins the cost-conscious shopper.”

Google claims they have no idea how happy content publishers are with the trade-off between themselves & the search engine, but every quarter Alphabet publishes the share of ad spend occurring on owned & operated sites versus the share spent across the broader publisher network. And in almost every quarter for over a decade straight, that ratio has grown worse for publishers.

The aggregate numbers for news publishers are even worse than that ratio suggests, as Google is ramping up ads in video games quite hard. They’ve partnered with Unity & promptly took away the ability to block ads from appearing in video games using the googleadsenseformobileapps.com exclusion (hello flat thumb misclicks, my name is budget & I am gone!).

They will also track video game player behavior & alter game play to maximize revenues based on machine learning tied to surveillance of the user’s account: “We’re bringing a new approach to monetization that combines ads and in-app purchases in one automated solution. Available today, new smart segmentation features in Google AdMob use machine learning to segment your players based on their likelihood to spend on in-app purchases. Ad units with smart segmentation will show ads only to users who are predicted not to spend on in-app purchases. Players who are predicted to spend will see no ads, and can simply continue playing.”

And how does the growth of ampproject.org square against the following wisdom?

Literally only yesterday did Google begin supporting instant loading of self-hosted AMP pages.

China has a different set of tech leaders than the United States: Baidu, Alibaba, Tencent (BAT) instead of Facebook, Amazon, Apple, Netflix, Google (FAANG). Chinese tech companies may have won their domestic markets in part based on superior technology or better knowledge of the local culture, though those same companies have largely gone nowhere fast in most foreign markets. A big part of winning was governmental assistance in putting a foot on the scales.

Part of the US-China trade war is about who controls the virtual “seas” upon which value flows:

it can easily be argued that the last 60 years were above all the era of the container-ship (with container-ships getting ever bigger). But will the coming decades still be the age of the container-ship? Possibly not, for the simple reason that things that have value increasingly no longer travel by ship, but instead by fiberoptic cables! … you could almost argue that ZTE and Huawei have been the “East India Company” of the current imperial cycle. Unsurprisingly, it is these very companies, charged with laying out the “new roads” along which “tomorrow’s value” will flow, that find themselves at the center of the US backlash. … if the symbol of British domination was the steamship, and the symbol of American strength was the Boeing 747, it seems increasingly clear that the question of the future will be whether tomorrow’s telecom switches and routers are produced by Huawei or Cisco. … US attempts to take down Huawei and ZTE can be seen as the existing empire’s attempt to prevent the ascent of a new imperial power. With this in mind, I could go a step further and suggest that perhaps the Huawei crisis is this century’s version of Suez crisis. No wonder markets have been falling ever since the arrest of the Huawei CFO. In time, the Suez Crisis was brought to a halt by US threats to destroy the value of sterling. Could we now witness the same for the US dollar?

China maintains Huawei is an employee-owned company. But that proposition is suspect. Broadly stealing technology is vital to the growth of the Chinese economy & they have no incentive to stop unless their leading companies pay a direct cost. Meanwhile, China is investigating Ericsson over licensing technology.

Amazon will soon discontinue selling physical retail products in China: “Amazon shoppers in China will no longer be able to buy goods from third-party merchants in the country, but they still will be able to order from the United States, Britain, Germany and Japan via the firm’s global store. Amazon expects to close fulfillment centers and wind down support for domestic-selling merchants in China in the next 90 days.”

India has taken notice of the success of Chinese tech companies & thus began to promote “national champion” company policies. That, in turn, has also meant some of the Chinese-styled laws requiring localized data, antitrust inquiries, foreign ownership restrictions, requirements for platforms to not sell their own goods, promoting limits on data encryption, etc.

The secretary of India’s Telecommunications Department, Aruna Sundararajan, last week told a gathering of Indian startups in a closed-door meeting in the tech hub of Bangalore that the government will introduce a “national champion” policy “very soon” to encourage the rise of Indian companies, according to a person familiar with the matter. She said Indian policy makers had noted the success of China’s internet giants, Alibaba Group Holding Ltd. and Tencent Holdings Ltd. … Tensions began rising last year, when New Delhi decided to create a clearer set of rules for e-commerce and convened a group of local players to solicit suggestions. Amazon and Flipkart, even though they make up more than half the market, weren’t invited, according to people familiar with the matter.

Amazon vowed to invest $5 billion in India & they have done some remarkable work on logistics there. Walmart acquired Flipkart for $16 billion.

Other emerging markets also have many local ecommerce leaders like Jumia, MercadoLibre, OLX, Gumtree, Takealot, Konga, Kilimall, BidOrBuy, Tokopedia, Bukalapak, Shopee, Lazada. If you live in the US you may have never heard of *any* of those companies. And if you live in an emerging market you may have never interacted with Amazon or eBay.

It makes sense that ecommerce leadership would be more localized since it requires moving things in the physical economy, dealing with local currencies, managing inventory, shipping goods, etc. whereas information flows are just bits floating on a fiber optic cable.

If the Internet is primarily seen as a communications platform it is easy for people in some emerging markets to think Facebook is the Internet. Free communication with friends and family members is a compelling offer & as the cost of data drops web usage increases.

At the same time, the web is incredibly deflationary. Every hour spent on a free form of entertainment is an hour not spent consuming something else.

Add technological disruption to the wealth polarization that happened in the wake of the Great Recession, then combine that with algorithms that promote extremist views, & the result is clearly increasing conflict.

If you are a parent and you think your child has no shot at a brighter future than your own life, it is easy to be full of rage.

Empathy can radicalize otherwise normal people by giving them a more polarized view of the world:

Starting around 2000, the line starts to slide. More students say it’s not their problem to help people in trouble, not their job to see the world from someone else’s perspective. By 2009, on all the standard measures, Konrath found, young people on average measure 40 percent less empathetic than my own generation … The new rule for empathy seems to be: reserve it, not for your “enemies,” but for the people you believe are hurt, or you have decided need it the most. Empathy, but just for your own team. And empathizing with the other team? That’s practically a taboo.

A complete lack of empathy could allow a psychopath to commit extreme crimes while feeling no guilt, shame or remorse. Extreme empathy can have the same sort of outcome:

“Sometimes we commit atrocities not out of a failure of empathy but rather as a direct consequence of successful, even overly successful, empathy. … They emphasized that students would learn both sides, and the atrocities committed by one side or the other were always put into context. Students learned this curriculum, but follow-up studies showed that this new generation was more polarized than the one before. … [Empathy] can be good when it leads to good action, but it can have downsides. For example, if you want the victims to say ‘thank you.’ You may even want to keep the people you help in that position of inferior victim because it can sustain your feeling of being a hero.” – Fritz Breithaupt

News feeds will be read. Villages will be razed. Lynch mobs will become commonplace.

Many people will end up murdered by algorithmically generated empathy.

As technology increases absentee ownership & financial leverage, a society led by morally agnostic algorithms is not going to become more egalitarian.

When politicians throw fuel on the fire it only gets worse:

It’s particularly odd that the government is demanding “accountability and responsibility” from a phone app when some ruling party politicians are busy spreading divisive fake news. How can the government ask WhatsApp to control mobs when those convicted of lynching Muslims have been greeted, garlanded and fed sweets by some of the most progressive and cosmopolitan members of Modi’s council of ministers?

Mark Zuckerberg won’t get caught downstream from platform blowback as he spends $20 million a year on his security.

The web is a mirror: engagement-based algorithms reinforce our perceptions & identities.

And every important story has at least 2 sides!

Some may “learn” vaccines don’t work. Others may learn the vaccines their own children took did not work, failing to protect them from the measles & Medieval diseases spread by people who absorbed antivax content via Facebook & Google.

Passion drives engagement, which drives algorithmic distribution: “There’s an asymmetry of passion at work. Which is to say, there’s very little counter-content to surface because it simply doesn’t occur to regular people (or, in this case, actual medical experts) that there’s a need to produce counter-content.”

As the costs of “free” become harder to hide, social media companies which currently sell emerging markets as their next big growth area will take on embedded regulatory compliance costs that exceed any sort of prospective revenue they could hope to generate there.

The Pinterest S1 shows almost all their growth is in emerging markets, yet almost all their revenue is inside the United States.

As governments around the world see the real-world costs of the foreign tech companies & view some of them as piggy banks, eventually the likes of Facebook or Google will pull out of a variety of markets they no longer feel are worth serving, much as Google did with search in mainland China after discovering pervasive hacking of activist Gmail accounts.

Lower friction & lower cost information markets will face more junk fees, hurdles & even some legitimate regulations. Information markets will start to behave more like physical goods markets.

The tech companies presume they will be able to use satellites, drones & balloons to beam in Internet while avoiding messy local issues tied to real world infrastructure, but when a local wealthy player is betting against them they’ll probably end up losing those markets: “One of the biggest cheerleaders for the new rules was Reliance Jio, a fast-growing mobile phone company controlled by Mukesh Ambani, India’s richest industrialist. Mr. Ambani, an ally of Mr. Modi, has made no secret of his plans to turn Reliance Jio into an all-purpose information service that offers streaming video and music, messaging, money transfer, online shopping, and home broadband services.”

Publishers do not have “their mojo back” because the tech companies have been so good to them, but rather because the tech companies have been so aggressive that they’ve earned enough blowback to push publishers toward opting out of future deals, which will eventually lead more people back to the trusted brands of yesterday.

Publishers feeling guilty about taking advertorial money from the tech companies to spread their propaganda will offset its publication with opinion pieces pointing in the other direction: “This is a lobbying campaign in which buying the good opinion of news brands is clearly important. If it was about reaching a target audience, there are plenty of metrics to suggest his words would reach further – at no cost – on Facebook. Similarly, Google is upping its presence in a less obvious manner via assorted media initiatives on both sides of the Atlantic. Its more direct approach to funding journalism seems to have the desired effect of making all media organisations (and indeed many academic institutions) touched by its money slightly less questioning and critical of its motives.”

When Facebook goes down direct visits to leading news brand sites go up.

When Google penalizes a no-name me-too site almost nobody realizes it is missing. But if a big publisher opts out of the ecosystem people will notice.

The reliance on the tech platforms is largely a mirage. If enough key players were to opt out at the same time people would quickly reorient their information consumption habits.

If the platforms can change their focus overnight then why can’t publishers band together & choose to dump them?

In Europe there is GDPR, which aimed to protect user privacy but ultimately acted as a tax on innovation by local startups while subsidizing the big online ad networks. Europe also has Article 11 & Article 13, which passed in spite of Google’s best efforts on the scaremongering anti-SERP tests, lobbying & propaganda fronts: “Google has sparked criticism by encouraging news publishers participating in its Digital News Initiative to lobby against proposed changes to EU copyright law at a time when the beleaguered sector is increasingly turning to the search giant for help.”

Remember the Eric Schmidt comment about how brands are how you sort out (the non-YouTube portion of) the cesspool? As it turns out, he was allegedly wrong as Google claims they have been fighting for the little guy the whole time:

Article 11 could change that principle and require online services to strike commercial deals with publishers to show hyperlinks and short snippets of news. This means that search engines, news aggregators, apps, and platforms would have to put commercial licences in place, and make decisions about which content to include on the basis of those licensing agreements and which to leave out. Effectively, companies like Google will be put in the position of picking winners and losers. … Why are large influential companies constraining how new and small publishers operate? … The proposed rules will undoubtedly hurt diversity of voices, with large publishers setting business models for the whole industry. This will not benefit all equally. … We believe the information we show should be based on quality, not on payment.

Facebook claims there is a local news problem: “Facebook Inc. has been looking to boost its local-news offerings since a 2017 survey showed most of its users were clamoring for more. It has run into a problem: There simply isn’t enough local news in vast swaths of the country. … more than one in five newspapers have closed in the past decade and a half, leaving half the counties in the nation with just one newspaper, and 200 counties with no newspaper at all.”

Google is so for the little guy that for their local news experiments they’ve partnered with a private equity backed newspaper roll up firm & another newspaper chain which did overpriced acquisitions & is trying to act like a PE firm (trying to not get eaten by the PE firm).

Do the stock charts of those newspaper chains look in any way healthy?

Do they give off the scent of firms that understood the impact of digital & rode it to new heights?

If you want good market-based outcomes, why not partner with journalists directly versus operating through PE chop shops?

If Patch is profitable & Google were a neutral ranking system based on quality, couldn’t Google partner with journalists directly?

Throwing a few dollars at a PE firm in some nebulous partnership sure beats the sort of regulations coming out of the EU. And the EU’s regulations (and prior link tax attempts) come on top of the three multi-billion-Euro fines the European Union has levied against Alphabet over shopping search, Android & AdSense.

Google was also fined in Russia over Android bundling. The fine was tiny, but after consumers gained a search engine choice screen (much like Google pushed for in Europe on Microsoft years ago) Yandex’s share of mobile search grew quickly.

The UK recently published a white paper on online harms. In some ways it is a regulation just like the tech companies might offer to participants in their ecosystems:

Companies will have to fulfil their new legal duties or face the consequences and “will still need to be compliant with the overarching duty of care even where a specific code does not exist, for example assessing and responding to the risk associated with emerging harms or technology”.

If web publishers should monitor inbound links to look for anything suspicious, then the big platforms sure as hell have the resources & profit margins to monitor behavior on their own websites.

Australia passed the Sharing of Abhorrent Violent Material bill which requires platforms to expeditiously remove violent videos & notify the Australian police about them.

There are other layers of fracturing going on in the web as well.

Programmatic advertising shifted revenue from publishers to adtech companies & the largest ad sellers. Ad blockers further lower the ad revenues of many publishers. If you routinely use an ad blocker, try surfing the web for a while without one & you will notice overlay “welcome” AdSense ads on sites as you browse the web – the very type of ad Google was allegedly against when promoting AMP.

Tracking protection in browsers & ad blocking features built directly into browsers leave publishers more uncertain. And who even knows who visited an AMP page hosted on a third party server, particularly when things like GDPR are mixed in? Those who lack first party data may end up having to make large acquisitions to stay relevant.

Voice search & personal assistants are now ad channels.

App stores are removing VPNs in China, removing TikTok in India, and keeping apps that track women in Saudi Arabia. App stores are centralized chokepoints for governments. Every centralized service is at risk of censorship. Web browsers from key state-connected players can also censor messages spread by developers on platforms like GitHub.

Microsoft’s newest Edge web browser is based on Chromium, the source of Google Chrome. While Mozilla Firefox gets most of its revenue from a search deal with Google, Google has still gone out of its way to use its services both to promote Chrome with pop-overs AND to break competing web browsers:

“All of this is stuff you’re allowed to do to compete, of course. But we were still a search partner, so we’d say ‘hey what gives?’ And every time, they’d say, ‘oops. That was accidental. We’ll fix it in the next push in 2 weeks.’ Over and over. Oops. Another accident. We’ll fix it soon. We want the same things. We’re on the same team. There were dozens of oopses. Hundreds maybe?” – former Firefox VP Jonathan Nightingale

As phone sales fall & app downloads stall, a hardware company like Apple is pushing hard into services while quietly raking in utterly fantastic ad revenues from search & ads in its app store.

Part of the reason people are downloading fewer apps is so many apps require registration as soon as they are opened, or only let a user engage with them for seconds before pushing aggressive upsells. And then many apps which were formerly one-off purchases are becoming subscription plays. As traffic acquisition costs have jumped, many apps must engage in sleight of hand behaviors (free but not really, we are collecting data totally unrelated to the purpose of our app & oops we sold your data, etc.) in order to get the numbers to back out. This in turn causes app stores to slow down app reviews.

Apple acquired the news subscription service Texture & turned it into Apple News Plus. Not only is Apple keeping half the subscription revenues, but soon the service will only work for people using Apple devices, leaving nearly 100,000 other subscribers out in the cold: “if you’re part of the 30% who used Texture to get your favorite magazines digitally on Android or Windows devices, you will soon be out of luck. Only Apple iOS devices will be able to access the 300 magazines available from publishers. At the time of the sale in March 2018 to Apple, Texture had about 240,000 subscribers.”

Apple is also going to spend over a half-billion dollars exclusively licensing independently developed games:

Several people involved in the project’s development say Apple is spending several million dollars each on most of the more than 100 games that have been selected to launch on Arcade, with its total budget likely to exceed $500m. The games service is expected to launch later this year. … Apple is offering developers an extra incentive if they agree for their game to only be available on Arcade, withholding their release on Google’s Play app store for Android smartphones or other subscription gaming bundles such as Microsoft’s Xbox game pass.

Verizon wants to launch a video game streaming service. It will probably be almost as successful as their Go90 OTT service was. Microsoft is pushing to make Xbox games work on Android devices. Amazon is developing a game streaming service to complement Twitch.

The hosts on Twitch, some of whom sign up exclusively with the platform in order to gain access to its moneymaking tools, are rewarded for their ability to make a connection with viewers as much as they are for their gaming prowess. Viewers who pay $4.99 a month for a basic subscription — the money is split evenly between the streamers and Twitch — are looking for immediacy and intimacy. While some hosts at YouTube Gaming offer a similar experience, they have struggled to build audiences as large, and as dedicated, as those on Twitch. … While YouTube has made millionaires out of the creators of popular videos through its advertising program, Twitch’s hosts make money primarily from subscribers and one-off donations or tips. YouTube Gaming has made it possible for viewers to support hosts this way, but paying audiences haven’t materialized at the scale they have on Twitch.

Google, having a bit of Twitch envy, is also launching a video game streaming service which will be deeply integrated into YouTube: “With Stadia, YouTube watchers can press “Play now” at the end of a video, and be brought into the game within 5 seconds. The service provides “instant access” via button or link, just like any other piece of content on the web.”

Google will also launch their own game studio making exclusive games for their platform.

When consoles don’t use discs or cartridges so they can sell a subscription access to their software library it is hard to be a game retailer! GameStop’s stock has been performing like an ICO. And these sorts of announcements from the tech companies have been hitting stock prices for companies like Nintendo & Sony: “There is no doubt this service makes life even more difficult for established platforms,” Amir Anvarzadeh, a market strategist at Asymmetric Advisors Pte, said in a note to clients. “Google will help further fragment the gaming market which is already coming under pressure by big games which have adopted the mobile gaming business model of giving the titles away for free in hope of generating in-game content sales.”

The big tech companies which promoted everything in adjacent markets being free are now erecting paywalls for themselves, balkanizing the web by paying for exclusives to drive their bundled subscriptions.

How many paid movie streaming services will the web have by the end of next year? 20? 50? Does anybody know?

Disney alone will operate Disney+, ESPN+ as well as Hulu.

And then the tech companies are not only licensing exclusives to drive their subscription-based services, but we’re going to see more exclusionary policies like YouTube not working on Amazon Echo, Netflix dumping support for Apple’s Airplay, or Amazon refusing to sell devices like Chromecast or Apple TV.

The good news in a fractured web is a broader publishing industry that contains many micro markets will have many opportunities embedded in it. A Facebook pivot away from games toward news, or a pivot away from news toward video won’t kill third party publishers who have a more diverse traffic profile and more direct revenues. And a regional law blocking porn or gambling websites might lead to an increase in demand for VPNs or free to play points-based games with paid upgrades. Even the rise of metered paywalls will lead to people using more web browsers & more VPNs. Each fracture (good or bad) will create more market edges & ultimately more opportunities. Chinese enforcement of their gambling laws created a real estate boom in Manila.

So long as there are 4 or 5 game stores, 4 or 5 movie streaming sites, etc. … they have to compete on merit or use money to try to buy exclusives. Either way is better than the old monopoly strategy of take it or leave it ultimatums.

The publisher wins because there is a competitive bid. There won’t be an arbitrary 30% tax on everything. So long as there is competition from the open web there will be means to bypass the junk fees & the most successful companies that do so might create their own stores with a lower rate: “Mr. Schachter estimates that Apple and Google could see a hit of about 14% to pretax earnings if they reduced their own app commissions to match Epic’s take.”

As the big media companies & big tech companies race to create subscription products they’ll spend many billions on exclusives. And they will be training consumers that there’s nothing wrong with paying for content. This will eventually lead to hundreds of thousands or even millions of successful niche publications which have incentives better aligned than all the issues the ad supported web has faced.

Added: Facebook pushing privacy & groups is both an attempt to thwart regulation risk while also making their services more relevant to a web that fractures away from a monolithic thing into more niche communities.

One way of looking at Facebook in this moment is as an unstoppable behemoth that bends reality to its will, no matter the consequences. (This is how many journalists tend to see it.) Another way of looking at the company is from the perspective of its fundamental weakness — as a slave to ever-shifting consumer behavior. (This is how employees are more likely to look at it.) … Zuckerberg’s vision for a new Facebook is perhaps best represented by a coming redesign of the flagship app and desktop site that will emphasize events and groups, at the expense of the News Feed. Collectively, the design changes will push people toward smaller group conversations and real-world meetups — and away from public posts.

Reblogged 1 year ago from feedproxy.google.com

Everything you need to know about audience targeting without relying on third-party cookies

30-second summary:

  • Following the passage of landmark consumer privacy laws, Google announced its intention to phase out third-party cookies by 2022
  • Businesses that rely on these cookies for granular consumer data are now forced to rethink their strategies for accurate audience targeting
  • Some businesses are turning to publisher walled gardens, while others are leaning more into contextual advertising
  • Coegi’s Sean Cotton explores the challenges and opportunities marketers face in the absence of third-party cookies, as well as viable alternatives they can use to keep audience targeting on point

Following the passage of landmark consumer privacy laws, Google officially announced its intention to phase out third-party cookies on Chrome browsers by next year. This is certainly a victory for the conscious consumer wary of selling data to advertisers, but it’s also one that might leave businesses scrambling when the cookie jar disappears. But these businesses should be more excited than alarmed. While the death of third-party cookies is an obstacle, it’s also an opportunity: As alternatives to third-party cookies emerge, advertisers might find themselves with better audience targeting and acquisition methods.

Third-party cookies haven’t always been perfect right out of the oven, and their quality was largely dependent on factors such as the data provider’s methodologies, the latency and recency of that data, and any related acquisition costs. Although occasionally stale, these prebuilt segments allowed advertisers to scale their audiences quickly. The forthcoming phaseout will put pressure on marketers to rethink their strategies for accurately targeting audiences.

What are the alternatives to third-party cookies?

Publisher walled gardens (in which publishers trade free content for first-party data) are a solid starting point for advertisers seeking alternatives to third-party cookies. These audiences won’t come cheap, but it will be possible to find publishers with audiences that strongly align with your own customer base. And because these sources of data are generally authenticated, they’re also an accurate source of modeling data to use as you construct your own user databases.

Many purchases these days begin with online research, so savvy marketers are also exploring contextual advertising as a third-party cookie alternative. Mapping out the sales funnel for your product or service will help you identify opportunities for targeted advertising as your audience performs research, but it’s important to be precise at the same time. Be sure to use negative search terms and semantic recognition to prevent your brand or product from appearing in potentially embarrassing or unsafe placements. (Just consider the word “shot,” which in this day and age could relate to anything from COVID-19 or health and wellness to debates surrounding the Second Amendment.)
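
To make that kind of negative-term precision concrete, here is a minimal Python sketch. The term lists, safe-context words, and sample headlines are all hypothetical, and plain keyword matching stands in for the semantic recognition a production contextual platform would actually use:

```python
# Minimal sketch of brand-safety filtering with negative terms.
# Term lists and page text below are hypothetical placeholders.

NEGATIVE_TERMS = {"shooting", "lawsuit", "outbreak"}       # always block
AMBIGUOUS_TERMS = {"shot": {"vaccine", "flu", "booster"}}  # block unless safe context appears

def is_brand_safe(page_text: str) -> bool:
    """Return False when the page contains blocked terms, or ambiguous terms without safe context."""
    words = set(page_text.lower().split())
    if words & NEGATIVE_TERMS:
        return False
    for term, safe_context in AMBIGUOUS_TERMS.items():
        # An ambiguous term passes only when safe-context words appear alongside it.
        if term in words and not (words & safe_context):
            return False
    return True

print(is_brand_safe("Where to get a flu shot this fall"))    # True
print(is_brand_safe("Warning shot fired in trade dispute"))  # False
```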

There’s still time for a smooth transition away from your dependency on cookies, but you shouldn’t wait much longer to get started. As you explore new ways to get your message out to precise audiences, these strategies are a great place to start:

1. Lean on second-party data

Second-party data (such as the kind provided on publisher walled gardens) can offer accurate audience targeting for advertisers in a hurry to replace third-party cookies. This type of data can inform people- or account-based marketing strategies, helping you identify individuals in a specific industry or those with a certain relevant job title. Similarly, integrating second-party data with your broader digital marketing strategy can create use cases for lookalike modeling or provide a strong foundation for sequential messaging.

Because second-party data will come at a potentially high cost, however, try to partner with publishers and providers for the long term to keep rates as low as possible. As an added benefit, this will give you time to experiment and use various types of data in different ways.
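
To illustrate the lookalike modeling mentioned above, the sketch below scores candidate users by cosine similarity to the centroid of a seed audience built from second-party data. The feature vectors, user IDs, and similarity threshold are hypothetical; real platforms rely on far richer features and models:

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def centroid(vectors):
    # Per-dimension mean of the seed audience's vectors.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

# Seed audience from second-party data (hypothetical engagement features).
seed = [[0.9, 0.1, 0.4], [0.8, 0.2, 0.5], [0.7, 0.1, 0.6]]
candidates = {"user_a": [0.85, 0.15, 0.5], "user_b": [0.1, 0.9, 0.2]}

center = centroid(seed)
scores = {uid: cosine(vec, center) for uid, vec in candidates.items()}

# Keep candidates above a similarity threshold as the lookalike segment.
lookalikes = [uid for uid, score in scores.items() if score > 0.95]
print(lookalikes)  # ['user_a']
```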

2. Implement mobile ad ID (or MAID) targeting

MAID targeting is based on an anonymous identifier associated with a user’s mobile device operating system. MAIDs have long been the go-to for in-app targeting because they’re privacy-compliant and serve as a great way to segment audiences based on behaviors and interests. Everyone expected MAID usage to grow as mobile and in-app usage accelerated. In the U.S., for instance, mobile users spend just over an hour more on those devices than on their computers each day, and 87 percent of their smartphone time is spent in-app. But the death of third-party cookies will certainly accelerate the usage of these audiences across channels even more.

One of the most powerful insights offered by MAIDs is the ability to track a user’s location data. If a device is frequenting an NFL stadium, for example, you can infer that the user is a football fan, which allows a host of other inferences to form. You can also enrich MAIDs with offline deterministic data, allowing you to construct a more complete picture of the user, their demographic information, and their relevant interests.
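
A minimal sketch of that kind of inference might look like the following, where repeat visits to mapped venue categories place a device into an interest segment. The MAIDs, venue-to-interest mapping, and visit threshold are all hypothetical:

```python
from collections import Counter

# (MAID, venue category) pairs derived from location pings (hypothetical).
pings = [
    ("maid-123", "nfl_stadium"),
    ("maid-123", "nfl_stadium"),
    ("maid-123", "sports_bar"),
    ("maid-456", "garden_center"),
]

VENUE_TO_INTEREST = {
    "nfl_stadium": "football_fans",
    "sports_bar": "football_fans",
    "garden_center": "home_and_garden",
}

MIN_VISITS = 2  # require repeat visits before inferring an interest

visits = Counter((maid, VENUE_TO_INTEREST[venue]) for maid, venue in pings)

segments = {}
for (maid, interest), count in visits.items():
    if count >= MIN_VISITS:
        segments.setdefault(maid, set()).add(interest)

print(segments)  # {'maid-123': {'football_fans'}}
```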

Note that recent changes to Apple’s iOS 14 platform might limit this type of targeting on the company’s devices. Besides this, it’s also important to verify the precision and accuracy of the provider giving you location data.

3. Build custom models and indexes

Algorithmic targeting or lookalike modeling got a bad rap from advertisers who worried that modeled audiences would broaden targeting too far. But as the quality of your audience input increases, the quality of your modeling output increases as well. In other words, the concerns are justified only if you’re modeling audiences on already-modeled data.

On the other hand, models can be an excellent source of additional insight if you’re using deterministic data. This information comes from all kinds of sources, including social media platforms, questionnaires and surveys, and e-commerce sites that have information on user purchase history. In short, it’s data you can trust — meaning it can inform the creation of accurate audience segments and models that capture real customer intent. With deterministic data at the helm, you can create your own models and indexes to aid in your targeting efforts.

First-party data from customers and active social media followers generally provides the best source for models. Be aware of outliers when it comes to audience insights, though; signals should be strong enough to imply the target audience’s actual behavior.
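
One common way to express such an index is as the ratio of a trait’s rate in your audience to its rate in a baseline population, scaled so that 100 means parity. A minimal sketch with hypothetical figures:

```python
def audience_index(audience_rate: float, baseline_rate: float) -> int:
    # 100 = parity with the baseline; 200 = twice the baseline rate.
    return round(100 * audience_rate / baseline_rate)

# 30% of your customers follow cooking content vs. 12% of the general
# population: an index of 250, a strong signal worth targeting.
print(audience_index(0.30, 0.12))  # 250

# 5% vs. 10%: an index of 50, an under-represented trait (or an outlier
# to treat with caution, per the note above).
print(audience_index(0.05, 0.10))  # 50
```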

4. Use Unified ID solutions

The death of third-party cookies doesn’t mean the death of all your strategies, and you can expect to see a variety of sophisticated solutions emerge in the coming years that offer audience segmentation with increased control for advertisers and enhanced privacy protections for consumers. In fact, some companies are already working collaboratively to create Unified ID solutions that modernize audience targeting and measurement.

The solutions they’re creating aim to collect user information (such as email addresses) in exchange for free content. Those addresses will then be assigned encrypted IDs that are transmitted along the bid stream to advertisers. If publishers widely adopt unified identity products, they’ll provide an excellent alternative to an overreliance on walled gardens.
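
In the spirit of those proposals (UID2-style solutions hash normalized email addresses), here is a minimal Python sketch of turning an email into a pseudonymous token. It illustrates the general idea only; the normalization rules and encoding are assumptions, not any vendor’s exact specification:

```python
import base64
import hashlib

def email_to_unified_id(email: str) -> str:
    # Normalize so common aliases map to the same person: lowercase,
    # trim whitespace, and drop "+tag" suffixes in the local part.
    local, _, domain = email.strip().lower().partition("@")
    local = local.split("+", 1)[0]
    normalized = f"{local}@{domain}"

    # Hash the normalized address; the digest, not the raw email,
    # is what would travel along the bid stream.
    digest = hashlib.sha256(normalized.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# Both aliases resolve to the same pseudonymous ID.
print(email_to_unified_id("Jane.Doe+news@example.com"))
print(email_to_unified_id("jane.doe@example.com"))
```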

However, one of the biggest hurdles for a unified ID solution will be scalability: It will likely not be a solution that can stand on its own for some time.

The death of third-party cookies will absolutely shake up the advertising world, but that’s probably a good thing. Cookies were never designed to be the backbone of digital advertising, and their disappearance makes room for alternatives to third-party cookies that actually deliver a better experience for advertisers and the audiences they’re looking to target. As advertisers gain more granular control over who hears their messaging (and when) and customer data is ensconced behind modern encryption and privacy protection tools, it’s not hard to argue that everyone wins when we put away the cookie jar.

Sean Cotton is the president and co-founder of Coegi.

The post Everything you need to know about audience targeting without relying on third-party cookies appeared first on Search Engine Watch.

Reblogged 1 year ago from www.searchenginewatch.com

Is Google moving towards greater search equity?

30-second summary:

  • Search equity allows for your average business owner to compete on the SERP without being impeded by a lack of SEO-knowledge
  • A more equitable SERP is a necessity for Google from a business and overall web-health perspective
  • Google is pushing for equity on the SERP to a far greater extent
  • The democratization of the SERP represents an enormous paradigm shift that brings certain SEO skills to the fore

What would happen if instead of having to jump through hoops to rank your new website, you were given a seat at the SERP straight away? Presumably, that would cause all sorts of havoc for SEO professionals. What if I told you that there’s a strong push at Google to do just that? I call it “search equity”. It’s Google trying to remove optimization barriers so site owners (aka business owners) can focus on creating great content and reap the benefits of it. 

It’s a move that I think Google is pushing hard for and has already taken steps towards. 

What is search equity?

Search equity is the ability for a site to be able to compete at some level of significance on the SERP without being impeded by technical structures. It is the ability for a site to rank its content solely because that content is worthy of being consumed by the searcher. 

As such, search equity would mean that sites with limited resources can compete on the SERP. It means they would not need to have an overly complex understanding of SEO on a technical level and from a content structure perspective (think things like page structure and other technical SEO aspects). 

Search equity gives a business owner the ability to be visible on the SERP and in many ways helps to preserve the overall health of the web.

It’s a spectrum. At any given time, there could be more or less search equity within the Google ecosystem. It’s not an all-or-nothing equation, and it’s not even possible to have total search equity. What matters is that Google is trying to create as much of it as it reasonably can.

Why is search equity necessary?

The idea of search equity being highly desirable to your average site is self-evident, but it also just makes a lot of sense. What do I mean by that?

Business owners are content experts. They are experts on the subject matter that is related to their business. They are the ones who should be creating content around the topics associated with their business. Not SEOs, not content marketers, and not some content agency. 

There’s a problem with this, however. That problem is the incentive. Content creation is hard and time-consuming so there has to be a reward for the efforts. Also, there needs to be a way to address the various technicalities that go into SEO, but that’s for later. This is where the current model falls into trouble. 

What happens when a business decides to dedicate the time and resources to create content? What happens when they are now faced with things like optimizing their page structure, internally linking, external linking, title-tags, canonical tags, keyword cannibalization, or whatever else floats your SEO boat? 

Do you see the problem?

SEO, as it’s often thought of, discourages the very people you want creating content from actually creating it. Business owners don’t know anything about tags and links and structure. They know about running their businesses and creating content around that expertise.

This is a real problem for Google. It means there is a lot of potential content out there that the current incentive structure doesn’t allow for.

If you think the notion of a content-generation gap is fantastical, it’s not. For starters, Google has often indicated such a gap exists in non-English-speaking markets. Further, Google has an entire “Question Hub” to provide answers for when the “content just isn’t there”.

What makes this notion seem like a contradiction and a hard pill to swallow is that there is an overabundance of content and a lack of it at the same time. That’s because a vast amount of the content being produced simply lacks substance. I’m not even referring to spam and the 25 billion+ pages of it that Google finds each day. The content bloat we experience is due to the overabundance of low-to-medium-quality content. When was the last time you felt there was just so much really high-quality content on the web? Exactly.

There is no gap in the quantitative amount of content on the web, but there is in its quality. If Google’s main SEO talking point is any indicator, the quality-content gap might well be significant. That’s not to say that such content doesn’t exist, but it may not exist in healthy quantities.

To fully capitalize on the content creation resources it needs to maintain a healthy web, Google needs, and has moved towards, search equity.

But not all of Google’s drive towards search equity is purely altruistic—there’s also a business interest. This isn’t necessarily a bad thing. In fact, in this case, it’s quite healthy. In any event, understanding how search equity aligns with Google’s business interests is an important part of understanding the impending urgency of a more equitable SERP. 

Why Google My Business demands search equity

The prominence of Google My Business and of the local SERP, in general, has risen in recent years. No longer is local SEO relegated to the loser’s table at SEO conferences. Rather, local SEO has come front and center in many ways. 

Part of this is due to the growing importance of having a GMB profile. Local SEO isn’t getting more attention because of some internal shift in SEO; it’s because it’s becoming more important for businesses to have a GMB listing set up.

With the plethora of options and abilities that GMB offers (think Reserve with Google or Product Carousels) having a listing has become a way for a business to showcase itself.

Look no further than GMB itself, which advocates setting up a profile as a way to “stand out”.

Here too, Google runs into the very same problem I mentioned earlier: incentives. If Google My Business isn’t just about “managing your listing” but is also about standing out and marketing yourself, then the environment on the SERP has to be equitable.

In other words, what would happen to GMB adoption if business owners felt that in order for them to compete on all fronts they had to jump through all sorts of hoops and/or spend a ton of money hiring an SEO on a continuing basis?

Clearly, Google is trying to grow the relevancy of GMB not just in terms of the number of businesses adopting it but in how involved the platform is in the everyday functioning of the business. This incentivizes the business to create a listing, add images, and create Google Posts. What’s lacking, however, is content. 

When it comes to the content local sites create, they have to play by the rules of every other site. There is no branded query driving users to their product carousel, Google Posts, or online menu. If Google wants businesses to feel they can thrive with GMB, that success has to be across the board. This means sites have to have success within the traditional organic results for a slew of keywords (not just branded local searches).

You can’t have the truly successful adoption of GMB if it doesn’t incorporate the business’ site itself. If a business feels that Google is making it excessively difficult to perform in one area, it will not fully adopt the other area. Meaning, if Google makes it difficult for a business to rank content, that business will not be willing to fully commit to GMB in the way that Google so desires. Businesses have to feel that Google has their backs, that Google is not an impediment before they’ll consider GMB a place to showcase themselves. It’s just common sense.  

If GMB is to continue to thrive and grow in unprecedented ways, then Google needs to make sure businesses feel that the entire Google ecosystem is a place where they can thrive. 

Search equity is the only way this can happen.

How Google has already been moving towards search equity

Truth be told, Google has been heading towards greater search equity for a while. I would argue that this movement began back in 2015 when RankBrain entered the scene. As Google has introduced other machine learning properties as well as natural language processing, the move towards greater search equity has accelerated.

To put it simply, as Google can better understand content per se, it inherently no longer needs to rely on secondary signals to the extent it used to. This is why the debate about the importance of links and specific ranking factors has grown like a wildfire in a dry forest. 

Take headers or title-tags. Whereas at one point in time you might have had to worry about the specific keyword you put into your titles and headers, that’s not exactly the case today. Aligning your title-tags to user intent and being topically focused is more significant than a specific keyword per se (one could even argue, that while still important, the overall ranking significance of the title-tag has diminished as Google takes a broader look at a page’s content).

This is really the idea of taking a more “topical” approach than a keyword-specific approach to a page’s content (an idea that has come to the forefront of SEO in recent years). That’s search equity right there. What happens when you don’t have to rummage through a tool to find the exact keyword you need? What happens when you don’t need to place that exact keyword here, there, and everywhere in order for Google to understand your page? 

What happens is businesses can write naturally and, by default (so long as the content is good), create something that Google can more or less assimilate.

In other words, the flip side of Google’s often discussed “breakthroughs” in better understanding content is “search equity”. If Google can better understand a page’s main content without having to rely as much on peripheral elements, that inherently translates into a more equitable environment on the SERP.    

You don’t need to look any further than Google’s mantra of “write naturally for users” to see what I’m referring to. Google’s improved ability to comprehend content, via elements such as BERT and the like, allows for site/business owners to write naturally for users, as previous “impediments” that demanded a specific understanding of SEO have to an extent been removed.  

An even stronger push towards increased search equity

Advocating that Google is headed towards increased search equity by pointing to an almost ethereal element (the search engine’s ability to more naturally understand content) is a bit abstract. There are clearer and more concrete cases of Google’s ever-increasing push towards search equity.

Passages ranking and the clear move towards a more equitable SERP

Passage ranking is the absolute perfect example of Google’s desire for a more equitable search environment. In fact, when discussing Passage ranking, Google’s John Mueller had this to say:

“In general, with a lot of these changes, one thing I would caution from is trying to jump on the train of trying to optimize for these things because a lot of the changes that we make like these are essentially changes that we make because we notice that web pages are kind of messy and unstructured.

And it’s not so much that these messy and unstructured web pages suddenly have an advantage over clean and structured pages.

It’s more, well… we can understand these messy pages more or less the same as we can understand clean pages.”

Does that not sound exactly like the concept of search equity as I have presented it here? Passage ranking further equalizes the playing field. It enables Google to understand content where the page structure is not well optimized. In real terms, it offers an opportunity to content creators who don’t understand the value of strong structure from an SEO perspective, i.e., a business owner. 

Simply, Passage Ranking is a clear and direct move towards creating a more equitable SERP.  

Discover feed could lead to more equity

This is a tricky one. On the one hand, there is a tremendous danger to the average site with auto-generated feeds, such as Google Discover. It’s easy to conceive of a person’s feed being dominated by large news publishers, YouTube, and other high authority websites. That would leave little room for the average business owner and their content. 

However, let’s take a step back here and focus on the nature of the beast and not the specific content possibly being shown. What you have with Google Discover (and personally this sort of custom feed is where things are headed in many ways), is content delivery without the ability to influence placement via direct SEO. In other words, unlike the SERP, there is far less direct influence over what you can do to optimize a specific page for Discover. There is no keyword that a user implements in Discover, so there are far fewer things SEOs can do to tilt a page in a certain and very specific direction. 

Rather, Google Discover relies on the overall relevance of the page to a user’s interests as well as the site’s general authority around the topic at hand. It’s far more a content strategy-focused endeavor that hinges on the production of highly relevant and authoritative content in the context of a site’s overall identity than it is about traditional SEO. 

Discover, as such, is inherently a far more equitable construct than the SERP itself. Does that mean that it is in actuality a more equitable environment? That all depends on how Google goes about weighing the various considerations that go into showing content in Discover. Still, as a framework, the feed is of a more equitable nature regardless. 

CMSes and their role in search equity

There’s been an interesting development in the role of CMSes for SEO, to which I have a front-row seat (as I work for Wix as their SEO liaison). CMSes, like Wix and Shopify in particular, have put a heavy emphasis on evolving their SEO offering. 

As a result, and I can tell you this first-hand as I’m often a direct participant in these conversations, Google seems to be taking a more outright welcoming approach to the closed-CMSes. The reason is that as the CMSes have evolved for SEO, they offer the ability to create an equitable experience on the SERP. 

Just look at what John Mueller had to say as part of a conversation around businesses using Wix:

[Embedded tweet from John Mueller]

The evolution of some of the closed CMSes is in many ways the missing piece to Google’s search equity puzzle. If a platform like Wix or Shopify provides the defaults and out-of-the-box solutions that remove the impediments associated with the more technical side of SEO then the SERP is far closer to search equity than ever. 

This is reflected by John’s statement in the next tweet from the thread I presented just above: 

Having platforms out there that take care of the user from a technical standpoint puts businesses in the position to be able to rank content. This is search equity. 

If you combine what’s happening with the CMSes with Google’s advances around Passages and the like, you have one massive step forward for search equity.

This creates an environment where the average person can use a platform that handles many of the SEO issues and then rely on Google’s ability to parse unstructured content. That’s a tremendous amount of equity hitting the SERP at one time. 

What greater search equity means for SEO

When you look back at what we’ve discussed so far, search equity is a far-reaching construct. It touches on everything from the algorithm to the CMSes supporting the web. More than that, it’s an enormous shift in the paradigm that is Google search. In a way, it’s revolutionary and has the potential to fundamentally change the search marketing landscape. I don’t mean that hyperbolically either, and I’m not generally an alarmist.

No, I’m not saying SEO is dead. No, I’m not saying technical SEO is dead (not by a long shot). What I am saying is a more even playing field for those who can’t invest heavily in traditional SEO is a major change to the SERP and potentially for our industry. 

Bringing SEO strategy into focus

The evolution of search equity might mean that it is (and will be) easier for business owners to create content that ranks. It does not mean that these businesses will have any idea of what to target or how to construct the most advantageous SEO content strategy.

In fact, I speculate that most businesses will end up trying to target extremely competitive spaces. They will try to target top-level keywords without focusing on the elements that differentiate themselves and without creating an “organic niche” for themselves. 

The point is, search equity only makes understanding SEO at the strategic level more important than ever. Understanding the algorithm and the overall direction and “inertia” that Google is trending towards will be an extremely valuable commodity.  

The business owners who will benefit from search equity will need our help to give their content efforts direction. 

(By the way, this is not to say that ensuring these sites adhere to SEO best practices should or will fall to the wayside. Although, I do think this does widen the gap in what it means to do SEO for different kinds of sites). 

Emphasis on the site as a whole (not the page)

As mentioned, search equity takes the focus off the “page” and the explicit optimization of it and places it onto the content itself. The spotlight being moved onto content per se creates a new operating framework. Namely, the importance of the site from a holistic point of view versus the significance of a per-page outlook on SEO. 

The various pages of content on a site do not exist in isolation of each other. They’re all intricately related. Imagine a site that had pages that talked about buying car insurance and other pages on how to make chicken soup with no clear connection between the two topics. From a per-page perspective, each page could offer wonderful content, be intricately optimized, and therefore be expected to rank. However, if you step back, the lack of topical focus brings with it a slew of issues.

Search equity is synonymous with an explicit focus on the substance of a page’s content. You cannot have search equity without Google being better able to understand, and subsequently value, the content on a page. It implies an increased valuation of page content as page content (as opposed to page structure, for example).

With an increased focus on the content itself, ancillary factors play, at times, a diminished role, and the site itself comes into larger focus. Once that happens, the overall purpose, identity, focus, and health of that site become more important as well.

Great content that is out of context relative to the other content on the site is less relevant. Just think about a user who hits the page from the SERP. They finish reading a blog post only to see a carousel of related articles that are entirely irrelevant to them. Who is that good for? Or imagine the internal links in such a piece of content, how relevant would they be? How strong is the content if it intrinsically can’t have any internal links, as internal links can often help support the user’s content acquisition? 

The effectiveness of a webpage’s content does not exist in a vacuum. It gains strength and relevancy from the pages that support it. If Google is taking a more direct look at content, the pages that support a given piece of content must also come into focus. 

The advancements towards greater search equity require us to take a more holistic view of a website. Search equity and the direct content focus that Google has taken mean that the relevancy of the entire site comes into focus.

This means we need to perhaps shift our attention from the role of individual pages to consider the site’s efficacy overall. This might mean a revamping of our SEO strategies and priorities and directly speaks to the importance of having a well-thought-out SEO outlook (as I mentioned earlier).   

It’s a good thing

At the end of the day, a web that removes impediments to the creation of strong content is a good thing. Might it change the SEO landscape as time goes on? Certainly. A more equitable SERP will most likely have a major impact on SEO over time. Does that mean we shouldn’t embrace it? No. Does that mean SEO is dead? Of course not. Does it mean we shouldn’t be concerned with best SEO practices to the same extent? Clearly, doing so would be a terrible idea. 

What it does mean is that we may need to change our outlook on SEO a bit and understand where we have true value to certain types of sites. 

Search equity is a good thing.

Mordy Oberstein is Liaison to the SEO Community at Wix. Mordy can be found on Twitter @MordyOberstein.

The post Is Google moving towards greater search equity? appeared first on Search Engine Watch.

Reblogged 1 year ago from www.searchenginewatch.com

How to Create a Comprehensive How to Guide [+Examples]

The irony doesn’t escape me that I’m currently writing a “How to” guide on … “How to” guides.

Fortunately, I’ve had my fair share of experiences writing How to guides for HubSpot over the years — some of my favorites include How to Give a Persuasive Presentation, How to Develop a Content Strategy: A Start-to-Finish Guide, and How to Write a Request for Proposal.

How to Guides are incredibly valuable opportunities to reach new audiences with useful, high-quality content. Plus, for both B2B and B2C businesses, How to Guides are often necessary components of a healthy lead generation strategy.

For instance, consider how many people search “How to [fill in the blank]” on Google each day:

[Image: someone searching “how to” on Google]

These search queries demonstrate one of the primary reasons people turn to the internet — to learn how to do something.

If your business can reach those users with informative, relevant answers to their questions (related to your own products or services), those readers will begin to see your brand as an authority on the topic. Additionally, they’ll appreciate the value you’ve provided them.

Down the road, those same readers you first attracted with a How to Guide could become customers and loyal brand advocates who spread the word about your products or services to friends and family.

Suffice to say: How to Guides matter.

Here, we’ll explore the right structure to use when making a How to Guide and how to write a comprehensive How to Guide. We’ll also take a look at some impressive examples of How to Guides for inspiration. Let’s dive in.

How to Make a How to Guide

1. Conduct research to ensure your guide is the most comprehensive piece on the topic.

People read How to Guides to learn how to do things. And even if you know very well how to do something, it’s critical you conduct research to ensure you’re writing content that can help both the beginner and the expert who’s searching for your post.

Additionally, to rank on the SERPs, you’ll want to conduct keyword research and competitive research to ensure your How to Guide is the most comprehensive post on the subject.

For instance, let’s say you’re writing a blog post, “How to Make an Omelette.” Upon conducting research, you find Simply Recipes’ post is at the top of Google.

Diving into the post, you’ll see Simply Recipes has covered sections including “French Versus American Omelettes”, “The Best Pan for Making Omelettes”, and even “Ideas for Omelette Fillings”.

If you want to create your own How to Guide on omelettes, then, you’ll want to cover all (if not more) of the sections Simply Recipes has covered in its post.

Additionally, you should use Ahrefs or another keyword research tool to explore similar keywords or queries people ask when searching for topics like “Omelette”. This can help you create a well-rounded piece that will answer all your readers’ questions, and help you rank on Google.

Even if you know a topic incredibly well, research isn’t a step you should skip. In fact, knowing a topic well can make it more difficult to write a How to Guide on it, since the material feels like second nature to you. For that reason, you’ll want to rely on your research to ensure you’re including all relevant information.

2. Understand your target audience’s concerns and challenges.

For this step, you’ll want to use online community forums like Quora or internal data to identify all the various concerns or challenges your target audience might have that your How to Guide can answer.

If you’re writing “How to Create a Content Marketing Strategy”, for instance, you could start by looking at responses to “What is content marketing?” on Quora. These user-generated responses can help you identify common themes, misconceptions, or confusion around content marketing.

Next, you might reach out to your marketing research team to identify common pain points or questions they’ve seen in surveys or focus groups regarding “content marketing”. For instance, you might find that most of your audience says content marketing is a priority for them — but they don’t know how to do it on a budget.

Conducting qualitative research like this arms you with the information necessary to ensure your How to Guide answers all relevant concerns on a given topic.

3. Structure your steps in the correct order for your reader, and when possible, use screenshots.

Your readers will bounce from your page if it’s too difficult for them to quickly find the answer to their question, so you want to deliver all relevant information as quickly as possible — and in the right order.

Many readers will use your How to Guide as a list of instructions. For instance, if you’re writing, “How to Take a Screenshot on a Mac“, you’ll want to write down each specific action necessary to take a screenshot. When possible, images, screenshots, or videos can also help take your content to the next level.

For less tactical, more ambiguous topics, you should still list your tips for easy readability.

4. Tell the reader why it matters.

To write a high-quality How to Guide, it’s important you start by asking yourself: Why do my readers need, or want, to know this?

Understanding the high-level purpose behind a topic can encourage you to write with empathy. Additionally, it will help you create content that accurately meets your reader’s expectations and needs.

For instance, when writing “How to Create a Facebook Group for Your Business“, I took some time to identify that readers might search this topic if a) they are seeking out new ways to connect with customers or want to create a stronger sense of brand community, or b) they want to raise awareness about their products or services.

As a result, I wrote:

“A group is a good idea if you’re interested in connecting your customers or leads to one another, you want to facilitate a sense of community surrounding your brand, or you’re hoping to showcase your brand as a thought leader in the industry. However, a group is not a good idea if you want to use it to raise awareness about your products or services, or simply use it to post company announcements.”

In the example above, you can see I addressed a few different segments of readers with distinct purposes, helping each reader determine whether this How to Guide would help them meet their own goals.

Ultimately, understanding the purpose behind your How to Guide is critical for ensuring you cover all the relevant components and angles of the topic at hand.

How to Write a How to Guide

Once you're ready to start writing your How to Guide, you might wonder whether your tone or writing style should differ from that of other types of posts.

In short: Yes, it should.

When people search “How to …” they’re often in a rush to find the information they need, which means it’s critical you write in short, concise sentences to provide an answer quickly.

Additionally, How to Guides need to offer tactical, actionable advice on a topic so readers can begin implementing the steps immediately.

There's a world of difference between readers who search "what is an RFP" and those who search "how to write an RFP". While the former group is looking for a definition of RFPs, and maybe an example or two, the latter group likely already has a fair understanding of RFPs and needs to create one ASAP.

If you're writing a How to Guide, there are a few best practices to keep in mind:

  • Use verbs when writing out steps. For instance, you’ll want to say, ‘Write a company background’, rather than ‘Your RFP should start with a brief background on your company.’
  • Use numbered lists, headers, and bullet points to break up the text and make your content as easy to skim as possible.
  • Use both screenshots and written text, so readers who can't load an image on their screen, or who can't tell what the image is meant to show, can still follow along.
  • Link out to other relevant blog posts, pillar pages, or ebooks so readers can find follow-up information on certain topics mentioned in your How to Guide.
  • Provide examples to show your readers what you mean.
  • Write with empathy, acknowledging that learning or refining a new skill can be frustrating.

How to Guides often attract a wide range of readers, all with varying levels of expertise.

"How to Create a YouTube Channel," for instance, likely attracts YouTube beginners who simply want to create a channel so they can watch and comment on friends' videos, but it probably also attracts professional marketers who need to create a channel that attracts and converts leads for their business.

With such a diverse audience, it’s critical you write clearly, but not condescendingly, to ensure you retain readers regardless of skill level or background experience.

To see what this looks like in practice, let's walk through some examples of How to Guides.

How to Guide Examples

1. The Recipe How to Guide


McCormick's How to Guide for its french toast recipe is neatly organized so readers can quickly determine a) how long the recipe will take, b) which ingredients they'll need, and c) the instructions for cooking.

If a reader already knows the ingredients necessary for french toast, she can click "Instructions" to get started right away. Alternatively, if she prefers watching to reading, she can click "Watch How-to Video". This offers good variety in how readers prefer to consume How to materials.

Takeaway: When you’re structuring your own How to Guide, consider how you can best organize it so readers can jump straight to what they need. For instance, perhaps you put the most important information at the top, or include a jump link so readers can determine what they need to read — and what they can skip. 

2. The B2B How to Guide

Atlassian's "How to write the perfect 90 day plan" provides important context around the 90-day plan, including answers to "What is a 90 day plan?" and "What should be included in a 90 day plan?" The piece is well-researched and empathetically written.

Best of all, the guide provides a downloadable 90-day plan PDF, so readers can immediately put Atlassian's plan to use with their own teams.

Takeaway: Consider what ebooks, PDFs, charts, Canva designs, or Google Sheets templates you can create internally and offer for readers to download and use. Readers will appreciate the option to immediately apply what they've learned.

3. The B2C How to Guide

This "How to Become a Freelancer" guide from FlexJobs does a good job of providing relevant links and data to create a comprehensive overview of what freelancing is.

Additionally, the post uses action verbs to inspire the reader — for instance, under “How to Start a Freelance Business”, you’ll see tips such as “Do Your Homework”, “Create a Brand”, and “Plan Ahead”. The language used in this post goes a long way towards encouraging readers to get started immediately. 

Takeaway: Use action verbs and concise language to keep a reader engaged. When possible, start with a verb instead of a noun when listing out steps. 

4. The Product-related How to Guide

"How to Find Data in Google Sheets with VLOOKUP" isn't necessarily the most interesting topic, but How-to Geek keeps the content engaging with empathetic statements like, "VLOOKUP might sound confusing, but it's pretty simple once you understand how it works."

Additionally, How-to Geek includes useful, original images to demonstrate each tip they’re describing. These images don’t have to be state-of-the-art visuals created by a professional design team, either — as this post proves, a few simple screenshots go a long way towards helping the reader understand a complex topic. 
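To make that concrete, here is the kind of formula such a guide walks readers through (this example is mine, with a hypothetical column layout; it isn't pulled from the How-to Geek post). Suppose column A of a sheet lists product names, column B lists prices, and cell D2 holds the name of the product you want to look up. In Google Sheets, you would write:

=VLOOKUP(D2, A2:B100, 2, FALSE)

This searches the first column of the range A2:B100 for the value in D2, returns the matching value from the second column of that range (the price), and the FALSE argument requires an exact match. Pairing a formula like this with a screenshot of the sheet is exactly what makes a step easy to follow.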

Takeaway: When possible, create your own visuals/screenshots to walk readers through each step-by-step instruction.

5. The Lifestyle How to Guide


I recently saw a post on Medium titled "11 Ways to Quickly Stop Stress in Your Life". I clicked it expecting a few quick, easy tips for stopping stress, but instead found myself engrossed in the first section of the post, "The Effects of Stress in Your Life".

While I previously mentioned the importance of starting with a quick answer to the searcher's How-to question, there are exceptions to that rule. In this case, it's important readers understand why they should stop stress before learning how. This Medium writer did a good job of choosing a structure that keeps readers engaged throughout.

Takeaway: Play around with structure. Consider what your readers need to know for the rest of the post to matter to them. For instance, you might start with sections like "What is XYZ?" and "Why XYZ Matters" before diving into "How to Do XYZ". This way, your readers are fully invested in finding out how these tips can improve their lives in some small (or big) way.


Reblogged 1 year ago from blog.hubspot.com