There are plenty of reasons to love SEO. I certainly do and have since I started my SEO journey in 2007.
But every job has unique challenges – aspects that can be frustrating, difficult, tedious or even downright painful.
That’s why, earlier this week in the Search Engine Land newsletter, I asked readers: What is your least favorite part of SEO?
Well, we have a winner. Or loser?
It’s link building. More than 20% of respondents said link building was their least favorite part of SEO.
Let’s dig into the results.
Link building and outreach. It’s time-consuming. It’s tedious. And success is never guaranteed. These are just a few of the biggest complaints about trying to build quality links that we saw from Search Engine Land readers:
Google. Yes, Google. There were a range of complaints. A few were specific to Google Business Profiles:
But our readers shared other Google-related complaints, ranging from algorithm update timing to GA 4:
Proving the worth of SEO. Have you had to convince your organization that SEO is a smart investment? The answer should be as simple as, “Have you heard of this thing called Google?” Well, now you can point them to this article: Why SEO is a great investment, not just a cost.
Defending the value of SEO shouldn’t be such a struggle anymore. It’s 2022. Yet here we are:
More least favorite parts of SEO. Finally, a few random answers. These didn’t fit into any of our other buckets, but they are all valid reasons to call something out as a least favorite part of SEO:
(Note: Google’s John Mueller asked the same question on Twitter, where you can read even more answers.)
Why we care. It’s good to share our frustrations with our peers. Clearly, many of you are experiencing some similar pain points in the SEO world. Just remember, it’s completely normal to not like parts of your job all the time. That could go for certain tasks, projects, clients or co-workers. And if you love link building? We salute you!
The post Link building: the least favorite part of SEO appeared first on Search Engine Land.
Reblogged 3 days ago from searchengineland.com

Want to make your own landing page report in Google Analytics 4, but not sure how? Then this article is for you.
Read on to learn how you can make your own landing page report in GA4 in a few simple steps.
Step 1: Start from a similar template. In this case, the Pages and Screens report in GA4. Then click Customize Report in the upper right corner.
Step 2: Click in to edit the included dimensions, then click “Add dimension.” Scroll down the list until you find Landing Page. Select it.
Step 3: Click the three dots on the right side of the Landing Page dimension name from the list of included dimensions. Choose Set as default. Then hit apply in the bottom right corner.
Step 4: Landing page is now the default report dimension. While you’re here, you can also add or remove metrics and change/hide the chart types if you’d like. I’ve selected a line chart and removed the second chart by hitting the eye icon to hide it.
Step 5: Save as a new report. This is important! Don’t save changes to the current report, because we started from the Pages and Screens report; otherwise, you’ll no longer have it. Title the new report Landing Page.
Step 6: You’ll need to add the report to a collection. Go to the library section – you’ll find this icon while in the Reports section of GA4 at the bottom of the left-side navigation.
Step 7: Choose which collection to add this new report to. It likely makes the most sense to add it to the Life cycle collection, into the Engagement topic, right next to the Pages and Screens report. Click Edit collection for where you want to put it.
Step 8: Scroll to the bottom of the list of reports on the right to find your new Landing Page report, and then drag and drop it into the topic section you want on the left column.
Step 9: Click save on the bottom of the screen. This time, choose to “Save changes to current collection” so that the Landing Pages report is added to your nav collections that are already published/visible.
Step 10: Check out your shiny new Landing Page report in the left-side nav.
That’s it! Now you know how to build a custom report. Go wild! Add all those reports you’re yearning for that don’t yet exist in GA4.
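If you ever want to pull the same data programmatically, the GA4 Data API’s runReport method accepts the same landingPage dimension used in the custom report above. Below is a sketch of the request body built as a plain Python dict, so no client library is needed to see its shape. The property ID, metric choice and date range are placeholder assumptions – adjust them to your own setup.

```python
import json

# Sketch of a GA4 Data API runReport request body mirroring the custom
# Landing Page report built in the steps above. The property ID and
# date range are placeholders -- substitute your own values.
def landing_page_report_request(property_id: str) -> dict:
    return {
        "property": f"properties/{property_id}",
        "dimensions": [{"name": "landingPage"}],  # the dimension set as default in Step 3
        "metrics": [{"name": "sessions"}],        # add or remove metrics as in Step 4
        "dateRanges": [{"startDate": "28daysAgo", "endDate": "today"}],
    }

body = landing_page_report_request("123456789")
print(json.dumps(body, indent=2))
```

You would POST this body to the runReport endpoint for your property (authentication not shown).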
The post How to make a GA4 landing page report in 10 easy steps appeared first on Search Engine Land.
Five years ago, if you were to ask a marketer about their security strategy, the likely response would have been sheer confusion. “Bots, proxies, data-center traffic? That’s for the security team to worry about.” In 2022, however, you’d be hard-pressed to find a marketing leader who hasn’t deployed a marketing security strategy. Today, most marketers view fake, automated and malicious traffic as a strategic threat to their operation, compromising efficiency and hurting their bottom line.
Recent data released by CHEQ across a pool of over 12,000 of its customers revealed that 27% of all website traffic is fake, consisting of botnets, data centers, automation tools, scrapers, crawlers, proxies, click farms and fraudsters. The scale of the “Fake Web” is massive, and marketers are seeing it everywhere. Just this past Super Bowl, 17 billion ad views came from bots and fake users. On Black Friday, a third of online shoppers weren’t real. Affiliate marketers are losing $1.4 billion a year to fraud. Elon Musk recently highlighted concerns over bots overrunning social media and Spotify is reportedly suffering from its own bot problem. Wherever marketers look, the Fake Web is there, and it’s affecting their campaigns, funnels, data and revenue.
Perhaps one of the most visible issues for marketers, especially those running paid user acquisition, is Click Fraud. Bots, click farms and even competitors are draining their ad budgets and severely damaging campaign efficiency. Many advertisers suffer from thousands and even tens of thousands of fake clicks every month, amounting to a massive waste of spend. But it’s not just the wasted spend, it’s also budgets that could have otherwise gone to real paying customers who would have generated actual revenue. In fact, recent data shows that $42 billion is lost each year in revenue opportunities because of this issue.
Many paid marketers use smart campaigns or audiences to group together users who have either previously shown interest in their products or services or share attributes with users who have. This is helpful for expanding the addressable market and reaching new potential buyers. At this point, it might not come as a surprise that bots and fake users can stand in the way of successfully executing this practice as well. When audiences become polluted with malicious human users or invalid bot traffic, marketers end up accidentally retargeting and optimizing toward fake traffic. If marketing security measures are not put in place, the cycle can continue until audiences are overtaken by bots and bear no resemblance to a group of human users with the ability and intention to convert. If clean audience segments are a priority, then, for many marketers, marketing security is as well.
Every marketer can relate to the frustration of illegitimate-looking inbound leads. Sometimes it’s a fake account or a bogus email address. Sometimes the information looks legitimate, but when you research the lead you can’t find the company or individual. Whatever the case, nothing causes more tension between sales and marketing than bogus leads that waste the sales team’s time and never convert. In fact, poor traffic quality is one of the biggest drivers of marketing security adoption today, as teams look to eliminate illegitimate form fills and submissions and prevent them from polluting the sales pipeline.
Beyond the monetary waste, budget inefficiency, polluted audiences and fake leads, there is one issue that stands above them all, which is perhaps the biggest driver of marketing security adoption – and that issue is data quality. Think about it – organizations spend so much energy, time, effort, resources and money on data management and consumption – expensive BI, analytics and reporting tools, teams of analysts, CDPs and DMPs. All of this so that they can drive better tactical decisions around landing page optimization, audiences and targeting, as well as strategic decisions around budget and channel planning, growth planning and revenue forecasting. When an average of 27% of traffic-in-funnel is fake, all that data is skewed and those decisions are severely compromised. Adding a layer of visibility to detect bots and fake users and gain transparency over their funnels is becoming an integral part of the modern-day marketer’s role.
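To make the data-quality point concrete, here is an illustrative calculation of how fake traffic skews a basic metric. Only the 27% figure comes from the data cited above; the session and conversion counts are invented for the example:

```python
# Illustrative numbers: 10,000 reported sessions, 27% of them fake
# (the average cited above), and 150 conversions -- all necessarily
# from real humans, since bots don't buy.
sessions = 10_000
fake_share = 0.27
conversions = 150

reported_cvr = conversions / sessions            # what the dashboard shows
human_sessions = sessions * (1 - fake_share)     # the real audience
true_cvr = conversions / human_sessions          # how humans actually convert

print(f"Reported conversion rate: {reported_cvr:.2%}")
print(f"True (human) conversion rate: {true_cvr:.2%}")
```

With these numbers, the dashboard reports a 1.50% conversion rate, while real visitors actually convert at about 2.05% – every downstream decision based on the reported figure inherits that distortion.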
Marketers want to eliminate these threats to their operation, but above all, they want to drive better budget efficiency, better leads and higher revenue, and that’s the ultimate goal of marketing security. Eliminating these inefficiencies drives a healthy, clean and transparent funnel that delivers better results. For these reasons, asking a marketer “what’s your security strategy?” in 2022 is quickly becoming an almost banal question, as marketing security becomes an industry standard.
This article was written by Daniel Avital, chief strategy officer and global head of marketing at CHEQ.
The post You still don’t have marketing security? appeared first on Search Engine Land.
Google has updated the title it uses for real time analytics in Universal Analytics 3 to read “in the last 5 minutes.” Previously, this section was titled “right now,” but with Google Analytics 4 rolling out, Google wanted to make the title more specific to what both are actually displaying.
In the last 5 minutes. Google’s title for the real time metrics was updated to say “in the last 5 minutes” to more accurately describe what UA3’s real time metrics actually displayed. Here is a screenshot of the new title:
Previously, it looked more like this, saying “right now” – which is not really right now, but the last 5 minutes.
Why the change. We believe Google made this change to help communicate why UA3 real time metrics are different from GA4 real time metrics. UA3 real time metrics are based on the last five minutes, whereas GA4 real time metrics are based on the last 30 minutes.
Why we care. When you see this change, don’t worry, you are not alone – we are all seeing this title change. But rest assured, the metrics in real time Universal Analytics 3 have not changed. Google is just making it crystal clear that UA3 is measuring the past 5 minutes and GA4 is measuring the past 30 minutes.
Keep in mind, UA3 and GA4 also measure traffic differently – so even if they both looked at the past 5 minutes, they would show different numbers.
The post Google Universal Analytics real time metrics now titled “in the last 5 minutes” appeared first on Search Engine Land.
A new video page indexing report is coming to Google Search Console in the near future, Dikla Cohen, a Web Ecosystem Consultant at Google, announced at Google I/O today. The new report shows you a summary of all the video pages Google found while crawling and indexing your site.
Video page indexing report. The video page indexing report will be found in Google Search Console, under the “Index” tab, under “video pages.” At the time of writing, this feature does not appear to be live yet – but it should be coming soon.
This report shows you a summary of all the video pages Google found while crawling and indexing your site. It will help you:
What it looks like. Here are screenshots from the presentation:
Why we care. Video is an important element of many websites, and these reports will help you understand how your videos perform in Google Search. Google Search Console’s new video indexing report can help you find indexing issues with your videos and debug them.
Check back to find out when this report goes live.
The post Google Search Console to release new video page indexing report appeared first on Search Engine Land.
How your customers find you can vary significantly. It may be based on their interests, needs or pain points.
Some people may already know exactly what they need and search for it on Google. Others may be just starting the research process. Still others may know what they need and are comparing options to identify the best source to purchase from.
In this stage of your SEO research and planning, you’ll want to identify:
Your goal will be to map your target personas, buying stages and keywords for each persona and buying stage.
You can start by using customer service data or information from your Google Analytics demographic details. With this information, you can start creating target personas.
Below is an example of possible target personas for a real estate company.
Once you have your personas and ideas of who they are, what they need, and what they are looking for, you’ll want to map out the possible steps they’ll take in their buying journey.
Finally, you can add the possible keywords they’ll search for and map them to the journey.
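One lightweight way to hold this mapping is a nested persona → buying stage → keywords structure. The personas and keywords below are placeholders for illustration (loosely echoing the real estate example), not recommendations:

```python
# Placeholder persona/stage/keyword map for a hypothetical real estate
# company, mirroring the persona -> buying stage -> keywords mapping
# described above.
keyword_map = {
    "first-time buyer": {
        "awareness": ["how much house can I afford"],
        "research": ["best neighborhoods in austin"],
        "decision": ["homes for sale 78704"],
    },
    "property investor": {
        "awareness": ["rental yield explained"],
        "research": ["austin rental market trends"],
        "decision": ["duplex for sale austin"],
    },
}

def keywords_for(persona: str, stage: str) -> list:
    """Look up the keywords mapped to a persona at a given buying stage."""
    return keyword_map.get(persona, {}).get(stage, [])

print(keywords_for("first-time buyer", "decision"))
```

A structure like this makes it easy to spot empty cells – a persona/stage combination with no keywords is a content gap.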
The goal of this phase is to identify all of the possible ways you can be found and to make sure you have content optimized on your website targeting these buying phases and keywords.
You’ll start by identifying primary, root phrases. As you progress, you can go deeper into long-tail terms or semantically related keywords.
This will allow you to identify gaps and opportunities that were missed during your initial baseline and competitive research. Some of these keywords won’t be uncovered unless you truly understand your audience and their needs and pain points.
This stage will complete your research phase and give you all the information to create your content strategy and focus your on-page SEO priorities.
With your comprehensive keyword research, the next step is to look at the existing content of your site and see if it’s optimized properly.
Before creating a content calendar or editorial strategy, it’s ideal to audit your existing content. By reviewing your existing pages, you can decide which pages need to be removed, consolidated or optimized.
Some of the elements you can look for include:
To perform a content audit, you’ll need to export all of your pages from your CMS or use an SEO audit tool, such as Screaming Frog or Semrush Site Audit, to get a list of your site’s existing pages.
Consolidate all of this data into a content audit spreadsheet. Your spreadsheet could look something like this:
Once you have collected all of the data, go through the URLs and label the pages:
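Once the spreadsheet data is in hand, the labeling pass can be partly automated. This sketch assigns the remove/consolidate/optimize/keep labels using arbitrary example thresholds (10 monthly sessions, 300 words) that you would tune to your own site; the URLs are hypothetical:

```python
# Sketch of labeling content-audit rows. The thresholds are arbitrary
# examples -- tune them to your own traffic and content norms.
def label_page(monthly_sessions: int, word_count: int, has_duplicate_target: bool) -> str:
    if has_duplicate_target:
        return "consolidate"   # several pages chasing the same keyword
    if monthly_sessions < 10 and word_count < 300:
        return "remove"        # thin page with no traffic
    if monthly_sessions < 10:
        return "optimize"      # substantial content that isn't ranking
    return "keep"

rows = [
    ("/blog/old-post", 2, 150, False),
    ("/guides/seo-basics", 4, 2200, False),
    ("/blog/seo-tips", 500, 1200, True),
]
for url, sessions, words, dup in rows:
    print(url, "->", label_page(sessions, words, dup))
```

The output labels the thin, trafficless page for removal, the long unranked guide for optimization, and the cannibalizing page for consolidation.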
How to optimize, revamp or consolidate pages
Once you have all of your pages labeled, it’s time to optimize your content. Some pages may be performing well but could be refreshed to help them perform even better. Others may be performing poorly and need to be optimized to rank.
Typically, this process will involve two steps:
Select the primary and secondary keywords for each page
The best way to gather this data is to use Google Search Console for pages that are ranking, or your keyword database for pages that are not.
To gather data from Google Search Console, click on Performance > Search Results report:
You can click on a page to see the keywords that it’s ranking for and the clicks, impressions and average position for each:
This will help you identify target keywords for each page, which you can add to your spreadsheet.
For each page, add the target primary and secondary keywords you will use when performing the necessary content updates.
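As a rough sketch, here is one heuristic for turning a Search Console export into primary and secondary keyword picks: take the query with the most clicks as the primary, then order the rest by impressions to surface untapped demand. The rows and the heuristic itself are illustrative assumptions, not an official method:

```python
# Hypothetical Search Console rows for one page: (query, clicks, impressions).
gsc_rows = [
    ("link building tips", 120, 4000),
    ("how to build backlinks", 40, 6500),
    ("link building", 15, 9000),
]

def pick_keywords(rows):
    """Primary = query with most clicks; secondaries = the rest,
    ordered by impressions (a proxy for untapped demand)."""
    primary = max(rows, key=lambda r: r[1])[0]
    secondaries = [q for q, _, _ in sorted(
        (r for r in rows if r[0] != primary), key=lambda r: -r[2])]
    return primary, secondaries

primary, secondaries = pick_keywords(gsc_rows)
print(primary)
print(secondaries)
```

With the rows above, “link building tips” wins the primary slot on clicks, and the high-impression, low-click queries become secondary targets worth optimizing for.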
When optimizing pages, you need to make sure that you are preserving or adding the correct on-page SEO elements. Let’s review these:
Primary keyword optimization
The primary keyword should appear in the:
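A small script can sanity-check primary keyword placement across many pages. The elements checked below (title tag, H1, first paragraph) are common placement targets rather than an exhaustive list, and the naive regex parsing assumes simple, well-formed HTML – a real audit would use a proper parser:

```python
import re

# Minimal on-page check: does the primary keyword appear in the title,
# the H1, and the first paragraph? Assumes simple, well-formed HTML.
def check_primary_keyword(html: str, keyword: str) -> dict:
    kw = keyword.lower()
    def grab(tag):
        m = re.search(rf"<{tag}[^>]*>(.*?)</{tag}>", html, re.I | re.S)
        return (m.group(1) if m else "").lower()
    return {
        "title": kw in grab("title"),
        "h1": kw in grab("h1"),
        "first_paragraph": kw in grab("p"),
    }

page = ("<title>Link Building Guide</title>"
        "<h1>Link building basics</h1>"
        "<p>Link building is the practice of earning links.</p>")
print(check_primary_keyword(page, "link building"))
```

Any False in the result flags an element to revisit during the content update.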
Adding any secondary keywords
All related secondary keywords should be incorporated naturally into the article. Add each related keyword in an H2 heading. The focus keyword for each section should appear both in the H2 heading and in the paragraph that follows it.
Q&A is an easy way to expand upon your articles by finding related questions. Take the primary keyword, and search for it on Google. Use the questions in the “People also ask” box as section headers:
The section header with the question will be an H2. In the next section, you should answer the question as quickly and succinctly as possible. Don’t re-state the question; instead, immediately provide the answer.
If the question was “How do you get featured in snippets?”, then the first sentence should answer the question: “To get into featured snippets, you need to ask questions and answer them using paragraphs, lists, and quick answers.”
Use bullet points! Google loves listing answers with bullet points, so where possible, answer the question and immediately add a list with bullet points:
Content formatting
Use proper formatting to make the content easy for people to read quickly. Here are a few suggestions for formatting your content:
Internal links
Add 2-3 internal links to other relevant pages on the site. Keep your anchor text short. Then, find at least 3-5 relevant pages on your site and link from them to your target pages. Every page of your site should receive links from as many other relevant site pages as possible.
External links
Add 2-3 external links to relevant pages. Good external links serve a strong purpose: they create a natural link map and connect your site to authoritative sources. Google will give more weight to a page that has good external links.
If the article is thin, you can add new content to expand on key points.
When there are several short pages or articles that are all ranking for the same keyword, it might be ideal to consolidate these articles into one longer, more comprehensive piece.
When consolidating articles, keep in mind:
Once you have created and labeled your spreadsheet and added target primary and secondary keywords, the final stage is to prioritize and assign your optimizations based on traffic or keyword importance.
If you have pages targeting important keywords that are not ranking well, move those to the top of the priority list.
If there are pages that have a lot of traffic and could be performing better, these should also be prioritized.
At the end of this stage, you should have a comprehensive keyword list that you will have mapped to existing pages or labeled to be created.
During the early stage, you want to be mindful of identifying persona, content and keyword gaps. If you don’t have content targeting some of your keywords, you’ll be missing opportunities to reach your target audience.
Most sites will have a degree of cannibalization as the SEO and content plans go through different teams and stages.
Before spending significant resources on producing new content, first, identify and maximize the content you already have, and then “mind the gap” by creating a content plan that targets all keywords that haven’t been optimized.
The post An SEO guide to audience research and content analysis appeared first on Search Engine Land.
Google is expanding Google multisearch – a search feature announced several weeks ago that lets you search by image and text at the same time – to support near me types of queries. This will let you find local businesses in Google Maps and Google Search.
What is Google multisearch. Google multisearch lets you use your phone’s camera to search by an image, powered by Google Lens, and then add a text query on top of the image search. Google will then use both the image and the text query to show you visual search results.
What is near me multisearch. The near me aspect lets you narrow those image-and-text queries to local results – you can still search for products or anything else via your camera, but Google will surface nearby places that offer them. So if you want to find a restaurant that serves a specific dish, you can do so.
What multisearch near me looks like. Here is a screenshot followed by a GIF from the Google I/O keynote:
MUM not yet in multisearch. Google made a comment in its blog post saying “this is made possible by our latest advancements in artificial intelligence, which is making it easier to understand the world around you in more natural and intuitive ways. We’re also exploring ways in which this feature might be enhanced by MUM – our latest AI model in Search – to improve results for all the questions you could imagine asking.”
I asked Google if Google multisearch currently uses MUM and Google said no. For more on where Google uses MUM see our story on how Google uses artificial intelligence in search.
Available in US/English. Multisearch is live now and should be available as a “beta feature in English in the U.S.,” Google said. The near me flavor, however, is not going live until later this year.
Why we care. As Google releases new ways for consumers to search, your customers may access your content on your website in new ways as well. How consumers access your content, be it desktop search, mobile search, voice search, image search and now multisearch – may matter to you in terms of how likely that customer might convert, where the searcher is in their buying cycle and more. This is now even more important for local businesses.
The post Google multisearch to gain near me support appeared first on Search Engine Land.
How you set up the sign up process for your email list can have a big effect on how engaged your subscribers will be later on. It might seem counterintuitive, but if you let people subscribe without any sort of confirmation, you can end up with a less engaged and less profitable list.
This is why one of the proven best practices of email marketing is to use “double opt-in.” In this post, we’ll cover:
Double opt-in does require some extra work, both for you and your subscribers. But if you want better engagement, higher deliverability, and more sales from the emails you send, it’s usually the best way to go.
Double opt-in, also called “confirmed opt-in,” is a method of subscribing to an email newsletter where subscribers have to confirm two times (hence the “double” opt-in) that they want to receive emails from you.
It is used to screen out invalid addresses and improve the overall engagement levels of a list. Once your list is set up to use double opt-in, no one can accuse you of sending spam – neither competitors nor email services.
The double opt-in subscription confirmation process consists of two steps:
Only then can the user receive email newsletters. Double opt-ins should be implemented in channels where consent to receive email is usually not explicitly given. Otherwise, you can get a lot of spam complaints and get blacklisted.
Here’s an example of a confirmed opt-in sign up form from AWeber customer The Disney Food Blog.
Step 1: Someone fills out the sign up form on your website:
Step 2: They see a “thank you” page with instructions about how to confirm their address:
Step 3: They go to their email inbox and find the confirmation email.
Step 4: They click the confirmation button or link in the confirmation email.
Step 5: They are brought to the final confirmation page.
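Under the hood, the confirmation link in step 4 typically carries a token tied to the subscriber’s address. Email services handle this for you, but as a minimal sketch, here is one way to generate and verify such a token with Python’s standard library (the secret, domain and URL format are placeholders):

```python
import hashlib
import hmac

# Placeholder secret -- in practice, a long random value kept server-side.
SECRET = b"replace-with-a-long-random-secret"

def confirmation_token(email: str) -> str:
    """Derive a token from the address so the link can't be forged."""
    return hmac.new(SECRET, email.lower().encode(), hashlib.sha256).hexdigest()

def confirmation_link(email: str) -> str:
    # Hypothetical URL format for the email's confirmation button.
    return f"https://example.com/confirm?email={email}&token={confirmation_token(email)}"

def verify(email: str, token: str) -> bool:
    # Constant-time comparison so tokens can't be guessed byte by byte.
    return hmac.compare_digest(confirmation_token(email), token)

token = confirmation_token("reader@example.com")
print(verify("reader@example.com", token))    # confirms the right address
print(verify("attacker@example.com", token))  # rejects anyone else
```

Because the token is derived from the address and a server-side secret, only someone with access to that inbox can complete the confirmation.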
Single opt-in is when a subscriber enters their email address into a form, clicks submit, and is automatically subscribed to a list. No confirmation email is sent. It is the most popular way to get new subscribers, but it results in a lower-quality list.
Among the submitted emails, you may find nonexistent addresses, typos, or other people’s addresses entered without their owners’ consent.
Sometimes single opt-in is used covertly. Users leave their mailing addresses on the website but don’t realize that they will be receiving emails in the future. Here are a few ways this can happen:
Double opt-in requires customers to confirm their email addresses twice. It’s not enough for users to leave an email on the website. They have to find the confirmation email you’ve sent and click on the link in it to confirm the subscription.
| | Single opt-in | Double opt-in |
| --- | --- | --- |
| Convenience for the subscriber | The user doesn’t need to look for a confirmation email in their mailbox to subscribe. | Some people dislike the extra step confirmed opt-in requires: they already took the time to enter their address, and now they have to confirm it. |
| Email list growth | The mailing list grows faster because anyone who submits their email address is automatically subscribed. | All of your contacts are included in the email database, but you can only mail the subscribers who have confirmed their addresses. Double opt-in can slow list growth down a bit, but not much. |
| Email list engagement | Quantity doesn’t always mean quality. Single opt-in lists tend to have lower open and click-through rates, so even if a single opt-in list is slightly larger, a double opt-in list will get more clicks and opens. | Double opt-in lists get more engagement – more opens, clicks and sales. Don’t focus solely on the size of your list: what really matters is how engaged your subscribers are. Smaller lists can often drive better results than larger, disengaged lists. |
| Getting into spam | Inactive or mistyped addresses or spam traps can be added to the email list, which harms the sender’s reputation. | Only confirmed addresses end up in the email base, so your list is “cleaner.” As a result, your email deliverability rates will be higher. |
| Email service providers | Email services might get suspicious about your list-building practices. | Even if they suspect you of sending spam, double (aka “confirmed”) opt-in will be one of the strongest arguments that you do everything by the rules and collect email addresses legally. |
Some companies don’t understand when to change from single opt-in to double opt-in. You should switch to double opt-in:
The practice of double opt-in is a proven email marketing best practice. Some of its advantages are:
If you want to build a high-quality, engaged list of subscribers, then set up double opt-in. That way, you only keep people interested in your emails, reducing the risk to your reputation.
This is the first step in communicating with the customer. The potential subscriber fills out the sign up form and clicks the “Subscribe” button.
Here’s an example of a sign up form from AWeber customer The Buffalo Zoo.
There are two ways to do this:
The email should contain a button or link for subscribers to confirm their email address. Keep your confirmation email simple. Remember the goal is to get your new subscriber to confirm their email address.
The Buffalo Zoo uses AWeber’s default confirmation email.
A thank you page is where your subscribers land after completing the sign up form. Your thank you page can be used to set subscriber expectations for how often you’ll email and what you’ll send. It should also encourage them to confirm their email address.
Here’s the thank you page from the Buffalo Zoo:
You can also send a final “you are now officially subscribed” email. This is what the zoo does. Here’s what their final confirmation email looks like:
There are some mistakes that email marketers often make when setting up double opt-in for the first time.
Give a brief explanation of why double opt-in is necessary. A short sentence such as “We want to protect your data from theft by third parties” is enough.
Even if you follow all the best practices for email marketing, there is still a chance some of your confirmation emails can end up in the spam folder. So remind people to check their spam folders if they don’t see your confirmation email within 5-10 minutes of subscribing.
Your email marketing service will provide you with a default confirmation email. That’s an okay start, but try to do better and customize your confirmation email. You don’t have to redesign the whole email, but at least try to add your company’s logo, change the colors to reflect your brand, and edit the words in the confirmation email so your subscribers feel like you’re welcoming them yourself.
Start addressing the user by name from the first email you send them – especially at this crucial first step, when they confirm their email address.
Double opt-in is a subscription in two steps: a person leaves their email in the sign up form on the website and then secures consent using a link from the confirmation email. This two-step confirmation process reduces the number of spam complaints and sending errors, as users confirm their interest in the mailing.
Again, to set up confirmed opt-in you need:
Double opt-in also helps avoid penalties for processing personal data without a person’s consent. If you follow the best practices we’ve outlined here, you can have all the benefits of a high-quality list and not slow your list growth down. Some email senders get 96% of their new subscribers to confirm – even with confirmed opt-in!
What are you using for your email lists – double opt-in or single opt-in? Have you ever considered switching from one to another? Leave a comment below and tell us what you think.
The post Best Practices for Double Opt-in appeared first on AWeber.
A sleek new My Ad Center experience was announced today at Google’s annual I/O event. It provides users with a handful of options to control the messages being served across selected Google properties.
Google users will be able to dictate:
These personalization options can be accessed from within the new My Ad Center experience or directly within the ad itself.
Privacy has been a core issue over the last few years. While Google has offered a variety of solutions, they have often been hard for the average user to find.
If adopted by consumers, the My Ad Center solution should help feed Google’s ad-serving intelligence while making the user experience better on Google properties including YouTube, Discover and Search. With third-party data going away, the ability to follow brands will provide critical feedback directly to Google.
Here’s everything we know about My Ad Center from Google I/O:
Follow brands and topics. All Google users will now have the ability to choose the brands and topics they want to see. This is much different from the Topics targeting within the Privacy Sandbox now being tested, as the inputs are dictated directly by the user.
An example provided by Google was that a user interested in a hybrid car may choose this as a topic they’d like to see and would be served ads related to that particular topic. This can also work with specific brands that users enjoy.
The key difference is that the user would be directly providing Google with the inputs to help drive targeted ads.
Personalization and data source controls. The My Ad Center location will be the go-to source for users looking to limit any/all personalization including age, relationship status, education and demographic data. Users can also limit or opt-out of sensitive ad topics (e.g., gambling, alcohol, dating, weight loss, and pregnancy & parenting) within My Ad Center.
The last personalization element found in My Ad Center is control over the data sources used. Google users will be able to choose which data sources can be used to personalize ads and which should be used across some Google properties (e.g., personalized search, YouTube recommendations). Those inputs come in the form of wanting more or fewer ads from a topic or brand.
Expanded controls within ads. While My Ad Center is nice, let’s be honest, sometimes people just want to make changes immediately when they are served an ad. Those folks are in luck with expanded controls within ads.
Google users will have the ability to make changes or get targeting clarity directly within the ad itself. The new expanded controls will allow users to like, block or report an ad, as well as tune the targeting if they’d like to see more or less of the brand or topic shown.
However, the biggest change for advertisers may be the transparency features included directly within the ad controls. The “About this ad” feature is being replaced with new transparency features that should make it clearer why users are seeing a given ad.
The expanded controls will include transparency features that show who paid for the ad (using Advertiser Identity Verification) and the account categories used to show the specific ad.
In the past, users could see “Why this ad” information that displayed matching criteria. The ability to see who paid for the ad, however, is new and important.
Not for the Google Display Network, Gmail or Search Partners (yet). When My Ad Center launches, the only supported products will be Google Search, YouTube and Google Discover. Upon launch, there will be a second ad settings page, separate from My Ad Center, for sites that partner with off-Google ads (i.e., the Google Display Network).
The topic or brand preferences entered into My Ad Center won’t initially be passed to this second ad settings page. That said, if ad personalization is shut off entirely within My Ad Center, that will shut off personalization across all Google-owned and non-Google-owned properties.
Why we care: My Ad Center looks to be Google’s best effort yet at privacy control. Users will be able to see why ads are being served, both from the center and from within the ads themselves. Most importantly, users will get clarity into who is paying for the ad being served. If you are an advertiser trying to hide your information and fly under the radar, look elsewhere; your days are numbered on Google properties.
The ability to follow brands and topics is a unique feature that could benefit advertisers down the road. Instead of relying solely on Privacy Sandbox signals like Topics, this may eventually give Google first-party, user-inputted signals for targeting. However, the success of this option is tied to adoption: if Google users don’t take the time to provide feedback in My Ad Center, the value to users (and advertisers) won’t exist.
The post Google’s My Ad Center lets users control their ad experience, follow brands appeared first on Search Engine Land.
Google said it will soon incorporate a new signal into image ranking. Google is also introducing a new type of schema in an attempt to help make its image search results more racially diverse and inclusive.
Google will use the MST Scale to rank images. Google said it will be adjusting how it ranks images using the Monk Skin Tone (MST) Scale, a 10-shade scale.
The MST scale was created with the help of Dr. Ellis Monk, a Harvard professor and sociologist. Google said the MST Scale is being incorporated into Images search, as well as other image products (e.g., Google Photos). And Google plans to expand it more broadly in the coming months.
Inclusive schema. Google said that creators, brands and publishers can use a new type of schema – inclusive schema – to label their content with attributes like skin tone, hair color and hair texture. Using this schema will help Google better understand what appears within the images.
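Google has not yet published the full vocabulary for this markup, so exact property names are unknown. As a rough sketch of what labeling an image with structured data could look like, here is a hypothetical JSON-LD snippet; the `skinTone`, `hairColor` and `hairTexture` properties (and the MST value format) are illustrative assumptions, not confirmed schema.org or Google-documented properties:

```html
<!-- Hypothetical sketch only: the three labeling properties below are
     illustrative assumptions, not a confirmed vocabulary. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "contentUrl": "https://example.com/images/protective-hairstyle.jpg",
  "name": "Protective hairstyle portrait",
  "skinTone": "Monk 08",
  "hairColor": "black",
  "hairTexture": "coily"
}
</script>
```

Whatever the final property names turn out to be, the pattern should be familiar to anyone already publishing `ImageObject` or product markup: attributes embedded in JSON-LD that describe what the image actually depicts.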
Content labels coming soon. Google also noted that it wants to create a more representative search experience. As part of that, Google plans to develop a “standardized way to label web content.”
A continuation of image search changes. Google’s push toward image equity began in October 2021, when Google told Bloomberg it had updated its algorithms to show more skin tones for a variety of queries, ranging from [beautiful skin] to [professional hairstyles] to [happy family].
Now this effort is being pushed out more widely.
Why we care. Google is pushing to be more inclusive of skin tones in Images and adjusting its ranking algorithm to do so. So if you’re publishing diverse imagery, using this schema will help Google better understand the details within your image content, giving you a higher chance of being found in Google Images.
You can read the full announcement about how Google plans to improve skin tone representation here.
The post Google reveals new image ranking signal, inclusive schema appeared first on Search Engine Land.