
The Link Building Webslog

Posted by rjonesx.

This is not the link building article you — or really anyone — were probably hoping for. It isn’t a step-by-step guide to getting the best backlinks, it isn’t some list of hot tips or new opportunities, and it isn’t the announcement of some great tool. What it is, unashamedly, is a window into the brutal slog that is outreach-based link building. 

What can you expect?

1. YELLING IN CAPSLOCK.

2. Some tips and tricks.

3. Weeping and gnashing of teeth


(Image courtesy Some Ecards)

All kidding aside, one of the few aphorisms I’ve come to believe is that sharing how we do things as SEOs is almost never a problem, because 99% of people don’t have the follow-through and resources to make it happen. I would love to be proven wrong by the readers on Moz.

My goal here is to give a realistic understanding of the monotonous slog that is white-hat, outreach-based link building. I happen to think that link building is a perfect counterexample to the “Pareto Principle”. Unlike the Pareto Principle, which states that 80% of the effect comes from 20% of the cause, I find that unless you put in 60-80% of the effort, you won’t see more than 20% of the potential effect. The payoff comes when you have outworked your competitors, and I promise you they are putting in more than 20%.

(Pareto principle chart courtesy Quotiss)

The goal of this “Webslog” is to document the weeks and months that go into a link building campaign, at least as far as how I go about the process.

(Photo courtesy Aaron Burden)

Also, look at that gorgeous fountain pen. I frickin’ love fountain pens.

I will try to update this document every week or so with progress reports, my motivation level, the tips and tricks I’ve employed over the last few days, and the headaches, wins, and losses. By the end of this, I hope to have produced something along the lines of a link building journal. It won’t be a blueprint for link building success, but hopefully it will mark on the map of your link building journey the things to avoid, the best ways to get through certain jams, and the stretches where you’re just going to have to tough it out.


Journal Entry Day One

Day one is almost always the best day. It’s a preparation day. It’s the day you buy the gym membership, purchase a veritable ton of whey protein and protein shaker bottles, weigh yourself — in all reality you accomplish nothing, but feel like you have done so much. Day one is important because it can provide momentum and clear a path to success, but it also presents the problem of motivation being incredibly disproportionate to success. It’s likely that your first day will be the most discordant with respect to motivation and results. 

Rand does a great job explaining the relationship between ROI and effort.

However, I think the third component here is motivation. While it does largely track the chart Rand provides, I think there are some notable differences, the first of which is that, in the first few days, your motivation will be high despite not having any results. Your motivation will probably dip very quickly and become parallel with the remainder of the “effort” line on the graph, but you get the point.

(Photo courtesy Drew Beamer)

It’s essential to keep your motivation up over the course of the “slog”, and the trick is to disconnect your motivation from your ROI and attach it instead to attainable goals which lead to ROI. It’s a terribly difficult thing to do. 

Alright, so, Day One prep.

Project description

For this project, I’ll be employing a unique form of broken link building (Part 2). If you’ve seen any of my link building presentations in the last 2-3 years, you may have caught a glimpse of some of the techniques in the process. Nevertheless, the particular link building method isn’t important for the sake of this project. All that matters for our discussion is that the method is:

  1. Outreach Based (requires contacting other webmasters).
  2. Neutral with regard to Black/White hat (it could be done either way).
  3. Requires Prospecting.
  4. Ultimately brings Return on Investment through either advertising or an exit.

In addition, I won’t be using any aliases in this project. For once, I’m building something respectable enough that I don’t mind my name being associated with it. I do still need to be careful (avoid negative SEO, for example) as this is a YMYL industry (health related). The site is already in existence, but with almost no links.

So, what are the returns on investment (or effort) that I’ll be tracking and, importantly, won’t be tracking?

(Image courtesy financereference.com)

1. Emails sent to links placed relative to:

  • Subject line
  • Pitch email
  • Target broken link

2. Contact forms filled to links placed:

  • Subject line
  • Pitch email
  • Target broken link

3. Anchor text used in links placed

4. Not tracking:

  • Deliverability
  • Open rate
  • Reply rate
  • Domain Authority of source

I know #4 will sound like a cardinal sin to many of the professional link builders reading this, but I’m really just not interested in bothering a recipient who chooses to overlook the email. I’m confident that the speed at which emails are sent won’t impact deliverability, so tracking the other statistics just seems like continuing to ring the doorbell at someone’s house until they’re forced to answer. Sure, it might work, but it might also get you reported.

Preparation

There are a couple of steps I take every time I begin a project like this.

1. Set up email, obviously. I typically set up [email protected], [email protected], [email protected], [email protected], and a catch-all. I don’t use Google. It just seems, well, wrong. I have had success with Zoho before, although honestly I just need the email, so I often go with a cPanel host and then add the MX records to Cloudflare.

2. Set up a phone number for voicemail. I like Grasshopper, personally. This is not to improve rankings (although I do put the number on the site); it’s to improve conversion rates. Email messages with a real phone number and a real email address from a real person, where the promoted domain matches the domain in the email, just seem to do better when your project is truly above-board.

3. Set up SPF and DKIM records for better deliverability.

4. Set up a number of Google Sheets that will help with some of the prospecting and mail sending.

5. Set up my emailer. I know this is vague, but one of the things I try to do is create stumbling blocks to cheating. There are some awesome tools out there (Pitchbox, BuzzStream, LinkProspector, and more), but I find each makes it very tempting to take shortcuts. I want to make sure I pull the trigger personally on every email that goes out. Efficient, no. Effective, not really. Safe, yeah.
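For what it’s worth, the SPF and DKIM setup in step 3 usually comes down to a couple of DNS TXT records along these lines (the hostnames, selector, and key below are placeholders; the real values come from your mail host):

```
yourdomain.com.                     TXT  "v=spf1 include:your-mail-host.com ~all"
selector._domainkey.yourdomain.com. TXT  "v=DKIM1; k=rsa; p=<public-key-from-your-mail-host>"
```

If you host email on cPanel and DNS on Cloudflare as described above, these records go in the Cloudflare zone alongside the MX records.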

Honestly, this is about as much as I can do in one day. I look forward to updating this regularly; follow @moz or @rjonesx on Twitter to get notified when the journal is updated.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Reblogged 1 year ago from feedproxy.google.com

ABM: Back to basics

The current business environment underlines the importance of targeting the right accounts at the right time, and in the right way.

Please visit Marketing Land for the full article.

Reblogged 1 year ago from feeds.marketingland.com

The Five Types of Utility in Marketing

How do prospective consumers spend their money? What matters to them when they make decisions about how much to spend, where to spend it, and which company earns their business?

This is the role of sales and marketing teams in your organization: Designing and deploying consumer campaigns to showcase the unique value proposition of your product or service so you stand out from the competition.

The challenge? It’s not easy.

Customer preferences are constantly evolving in response to both external market forces and internal financial constraints. As a result, the reasons around how, when, and why consumers spend money are never static — companies must find ways to understand and articulate the value of service or product offerings in a way that both captures consumer interest and convinces them to convert.

Here, the concept of utility-based marketing is markedly useful. In this piece, we’ll explore the basics of utility in marketing, why it matters, and then dive into five common types of utility in marketing.

What is utility in marketing?

Put simply? Value.

While in a non-economic context the term “utility” typically means “usefulness”, the marketing-driven definition speaks to the specific value realized by consumers when they spend on products or services. Understanding utility in marketing can help companies both better predict spending habits and design campaigns that capture consumer interest.

Why Marketing Utility Matters

Historically, marketing efforts have focused on making an impression. It makes sense — if consumers notice and remember your print, email, or television ad campaign, you’re better positioned to capture their spending when they see your brand again in-store or while shopping online.

The problem? With so many companies now competing for consumer interest both online and in-person, market saturation is a significant concern. Even more worrisome? As noted by a New York Times article, “people hate ads.” Oversaturated and overwhelmed by ads across desktops, mobile devices, and in-person, prospective buyers are now tuning out enterprise efforts to impress.

Instead, they’re looking for utility. This is the goal of utility-driven marketing: To offer consumers functional and useful products or services that provide a specific benefit or can be repurposed to serve multiple functions.

When done well, utility marketing can create stronger bonds between customers and companies and drive increased brand loyalty over time. It’s a slow-burn process rather than a quick-spend one, and it serves a different purpose — connecting customers with brands based on value, not volume.

The Five Types of Utility in Marketing

Despite our definition, the notion of “utility” in marketing remains fairly nebulous. That’s because trying to identify the exact value offered by your products or services to a specific customer segment, and how best to communicate this value effectively, is no easy task.

As a result, utility in marketing is often broken down into different types, each of which can help inform better ad building and more effective sales outcomes. Depending on how specific or generalized your marketing approach is, it’s possible to identify anything from one massive utility model to hundreds of smaller utility types for each consumer segment.

To streamline your audience targeting and campaign creation process, we’ll dig into five common types of utility in marketing.

1. Utility of Time

This is the “when” component of utility: Is your product available when customers want it? Will it arrive quickly and without complication? Consumers want to spend as little time as possible waiting for products to arrive in-stock or at their homes — as a result, utility of time is critical to capture consumer conversion on-demand.

Time utility also accounts for seasonal changes in purchasing habits; for example, sales of boots and gloves spike in the winter, while ice cream sees greater demand during the summer.

Some products are staples and are therefore time-resistant — such as groceries — but still need to be in-stock and delivered on-time. As a result, time-based marketing efforts are inherently tied to inventory and delivery systems to ensure outcomes meet consumer expectations.

2. Utility of Place

Place utility refers to the ability of consumers to get what they want, where they want it. Often applied to brick-and-mortar stores, utility of place is paramount for customers looking for familiar items that are easy to obtain.

In a world now driven by digital marketing efforts, place offers a competitive edge if companies can showcase their capacity to keep specific items in-stock at all times. And as improved logistics chains shorten the time between order and delivery, it’s possible for ecommerce operators to leverage place utility as a market differentiator.

3. Utility of Possession

Possession utility speaks to the actual act of product possession — such as consumers driving a new car off the lot or having furniture delivered to their home. It also highlights the connection between possession and purpose.

Consider plastic storage bins. While they might be sold in the “kitchen” section of an online or brick-and-mortar store, consumers are free to repurpose the items as they see fit once they take possession, increasing their overall utility.

4. Utility of Form

While some companies offer lower prices by shifting the responsibility of assembly to the consumer (e.g. that new dresser that you bought and had delivered, but still need to assemble on your own time), finished forms are often more valuable to customers.

Consider complex products such as vehicles or electronic devices — by highlighting the finished form of these items, companies can reduce potential purchasing barriers by making it clear that consumers will receive feature-complete products without the added complexity of self-assembly.

5. Utility of Information

Information utility is a new addition to this list, but in a world where competition for even basic goods now happens on a global scale, information can make the difference between successful sales and failed conversion efforts. Information utility speaks to any data that helps consumers make buying decisions. This includes product details on ecommerce pages, targeted marketing campaigns, and well-trained call center and in-store agents capable of answering customer questions.

Simply, the right information at the right time improves market utility and increases the chance of sales conversion.

Creating Customer Value

The ultimate goal of any marketing strategy is to create customer value. While not every campaign requires the complete implementation of all five utility types to improve conversion and customer satisfaction, general knowledge paves the way for implementation to deliver value at scale.

Reblogged 1 year ago from blog.hubspot.com

A Practical Introduction To Dependency Injection

The concept of Dependency Injection is, at its core, fundamentally simple. It is, however, commonly presented alongside the more theoretical concepts of Inversion of Control, Dependency Inversion, the SOLID Principles, and so forth. To make it as easy as possible for you to get started using Dependency Injection and begin reaping its benefits, this article will remain very much on the practical side of the story, depicting examples that show precisely the benefits of its use, in a manner chiefly divorced from the associated theory. We’ll spend only a little time discussing the academic concepts that surround dependency injection here, for the bulk of that explanation will be reserved for the second article of this series. Indeed, entire books can be and have been written that provide a more in-depth and rigorous treatment of the concepts.

Here, we’ll start with a simple explanation, move to a few more real-world examples, and then discuss some background information. Another article (to follow this one) will discuss how Dependency Injection fits into the overall ecosystem of applying best-practice architectural patterns.

A Simple Explanation

“Dependency Injection” is an overly complex term for an extremely simple concept. At this point, some wise and reasonable questions would be: “how do you define ‘dependency’?”, “what does it mean for a dependency to be ‘injected’?”, “can you inject dependencies in different ways?”, and “why is this useful?” You might not believe that a term such as “Dependency Injection” can be explained in two code snippets and a couple of words, but it can.

The simplest way to explain the concept is to show you.

This, for example, is not dependency injection:

import { Engine } from './Engine';

class Car {
    private engine: Engine;

    public constructor () {
        this.engine = new Engine();
    }

    public startEngine(): void {
        this.engine.fireCylinders();
    }
}

But this is dependency injection:

import { Engine } from './Engine';

class Car {
    private engine: Engine;

    public constructor (engine: Engine) {
        this.engine = engine;
    }

    public startEngine(): void {
        this.engine.fireCylinders();
    }
}

Done. That’s it. Cool. The End.

What changed? Rather than allow the Car class to instantiate Engine (as it did in the first example), in the second example, Car had an instance of Engine passed in — or injected in — from some higher level of control to its constructor. That’s it. At its core, this is all dependency injection is — the act of injecting (passing) a dependency into another class or function. Anything else involving the notion of dependency injection is simply a variation on this fundamental and simple concept. Put trivially, dependency injection is a technique whereby an object receives other objects it depends on, called dependencies, rather than creating them itself.

In general, to define what a “dependency” is, if some class A uses the functionality of a class B, then B is a dependency for A, or, in other words, A has a dependency on B. Of course, this isn’t limited to classes and holds for functions too. In this case, the class Car has a dependency on the Engine class, or Engine is a dependency of Car. Dependencies are simply variables, just like most things in programming.

Dependency Injection is widely used to support many use cases, but perhaps the most obvious is enabling easier testing. In the first example, we can’t easily mock out engine because the Car class instantiates it; the real Engine is always used. In the second example, we control which Engine is used, which means that in a test we can subclass Engine and override its methods.

For example, if we wanted to see what Car.startEngine() does if engine.fireCylinders() throws an error, we could simply create a FakeEngine class, have it extend the Engine class, and then override fireCylinders to make it throw an error. In the test, we can inject that FakeEngine object into the constructor for Car. Since FakeEngine is an Engine by implication of inheritance, the TypeScript type system is satisfied.
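To make that concrete, here is a minimal, self-contained sketch of that test setup. The string return value and the "misfire" error are illustrative additions, not part of the original Car class:

```typescript
class Engine {
    public fireCylinders(): string {
        return "cylinders firing";
    }
}

class Car {
    private engine: Engine;

    public constructor (engine: Engine) {
        this.engine = engine;
    }

    public startEngine(): string {
        return this.engine.fireCylinders();
    }
}

// A test double: an Engine whose fireCylinders always fails,
// letting us observe how Car behaves when the engine throws.
class FakeEngine extends Engine {
    public fireCylinders(): string {
        throw new Error("misfire");
    }
}

// In production code, inject the real Engine; in a test, inject the fake.
const realCar = new Car(new Engine());
console.log(realCar.startEngine()); // "cylinders firing"

const testCar = new Car(new FakeEngine());
try {
    testCar.startEngine();
} catch (e) {
    console.log((e as Error).message); // "misfire"
}
```

Because FakeEngine is an Engine by inheritance, the type system accepts it anywhere an Engine is expected, which is exactly what makes the substitution possible.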

I want to make it very, very clear that what you see above is the core notion of dependency injection. A Car, by itself, is not smart enough to know what engine it needs. Only the engineers that construct the car understand the requirements for its engines and wheels. Thus, it makes sense that the people who construct the car provide the specific engine required, rather than letting a Car itself pick whichever engine it wants to use.

I use the word “construct” specifically because you construct the car by calling the constructor, which is the place dependencies are injected. If the car also created its own tires in addition to the engine, how do we know that the tires being used are safe to be spun at the max RPM the engine can output? For all these reasons and more, it should make sense, perhaps intuitively, that Car should have nothing to do with deciding what Engine and what Wheels it uses. They should be provided from some higher level of control.

In the latter example depicting dependency injection in action, if you imagine Engine to be an abstract class rather than a concrete one, this should make even more sense — the car knows it needs an engine and it knows the engine has to have some basic functionality, but how that engine is managed and what the specific implementation of it is is reserved for being decided and provided by the piece of code that creates (constructs) the car.

A Real-World Example

We’re going to look at a few more practical examples that hopefully help to explain, again intuitively, why dependency injection is useful. Hopefully, by not harping on the theoretical and instead moving straight into applicable concepts, you can more fully see the benefits that dependency injection provides, and the difficulties of life without it. We’ll revert to a slightly more “academic” treatment of the topic later.

We’ll start by constructing our application normally, in a manner highly coupled, without utilizing dependency injection or abstractions, so that we come to see the downsides of this approach and the difficulty it adds to testing. Along the way, we’ll gradually refactor until we rectify all of the issues.

To begin, suppose you’ve been tasked with building two classes — an email provider and a class for a data access layer that needs to be used by some UserService. We’ll start with data access, but both are easily defined:

// UserRepository.ts

import { dbDriver } from 'pg-driver';

export class UserRepository {
    public async addUser(user: User): Promise<void> {
        // ... dbDriver.save(...)
    }

    public async findUserById(id: string): Promise<User> {
        // ... dbDriver.query(...)
    }

    public async existsByEmail(email: string): Promise<boolean> {
        // ... dbDriver.save(...)
    }
}

Note: The name “Repository” here comes from the “Repository Pattern”, a method of decoupling your database from your business logic. You can learn more about the Repository Pattern, but for the purposes of this article, you can simply consider it to be some class that encapsulates away your database so that, to business logic, your data storage system is treated as merely an in-memory collection. Explaining the Repository Pattern fully is outside the purview of this article.

This is how we normally expect things to work, and dbDriver is hardcoded within the file.

In your UserService, you’d import the class, instantiate it, and start using it:

import { UserRepository } from './UserRepository';

class UserService {
    private readonly userRepository: UserRepository;

    public constructor () {
        // Not dependency injection.
        this.userRepository = new UserRepository();
    }

    public async registerUser(dto: IRegisterUserDto): Promise<void> {
        // User object & validation
        const user = User.fromDto(dto);

        if (await this.userRepository.existsByEmail(dto.email))
            return Promise.reject(new DuplicateEmailError());

        // Database persistence
        await this.userRepository.addUser(user);

        // Send a welcome email
        // ...
    }

    public async findUserById(id: string): Promise<User> {
        // No need for await here, the promise will be unwrapped by the caller.
        return this.userRepository.findUserById(id);
    }
}

Once again, all remains normal.

A brief aside: A DTO is a Data Transfer Object — an object that acts as a property bag to define a standardized data shape as it moves between two external systems or two layers of an application. You can learn more about DTOs from Martin Fowler’s article on the topic. In this case, IRegisterUserDto defines a contract for what the shape of data should be as it comes up from the client. I only have it contain two properties — id and email. You might think it’s peculiar that the DTO we expect from the client to create a new user contains the user’s ID even though we haven’t created a user yet. The ID is a UUID, and I allow the client to generate it for a variety of reasons that are outside the scope of this article. Additionally, the findUserById function should map the User object to a response DTO, but I neglected that for brevity. Finally, in the real world, I wouldn’t have a User domain model contain a fromDto method — that’s not good for domain purity. Once again, its purpose here is brevity.

Next, you want to handle the sending of emails. Once again, as normal, you can simply create an email provider class and import it into your UserService.

// SendGridEmailProvider.ts

import { sendMail } from 'sendgrid';

export class SendGridEmailProvider {
    public async sendWelcomeEmail(to: string): Promise<void> {
        // ... await sendMail(...);
    }
}

Within UserService:

import { UserRepository } from './UserRepository';
import { SendGridEmailProvider } from './SendGridEmailProvider';

class UserService {
    private readonly userRepository: UserRepository;
    private readonly sendGridEmailProvider: SendGridEmailProvider;

    public constructor () {
        // Still not doing dependency injection.
        this.userRepository = new UserRepository();
        this.sendGridEmailProvider = new SendGridEmailProvider();
    }

    public async registerUser(dto: IRegisterUserDto): Promise<void> {
        // User object & validation
        const user = User.fromDto(dto);

        if (await this.userRepository.existsByEmail(dto.email))
            return Promise.reject(new DuplicateEmailError());

        // Database persistence
        await this.userRepository.addUser(user);

        // Send welcome email
        await this.sendGridEmailProvider.sendWelcomeEmail(user.email);
    }

    public async findUserById(id: string): Promise<User> {
        return this.userRepository.findUserById(id);
    }
}

We now have a fully working class, and in a world where we don’t care about testability or writing clean code by any manner of the definition at all, and in a world where technical debt is non-existent and pesky program managers don’t set deadlines, this is perfectly fine. Unfortunately, that’s not a world we have the benefit of living in.

What happens when we decide we need to migrate away from SendGrid for emails and use MailChimp instead? Similarly, what happens when we want to unit test our methods — are we going to use the real database in the tests? Worse, are we actually going to send real emails to potentially real email addresses and pay for it, too?

In the traditional JavaScript ecosystem, unit testing classes under this configuration is fraught with complexity and over-engineering. People bring in entire libraries simply to provide stubbing functionality, which adds layers of indirection and, even worse, can directly couple the tests to the implementation of the system under test, when, in reality, tests should never know how the real system works (this is known as black-box testing). We’ll work to mitigate these issues as we discuss what the actual responsibility of UserService is and apply new techniques of dependency injection.

Consider, for a moment, what a UserService does. The whole point of the existence of UserService is to execute specific use cases involving users — registering them, reading them, updating them, etc. It’s a best practice for classes and functions to have only one responsibility (SRP — the Single Responsibility Principle), and the responsibility of UserService is to handle user-related operations. Why, then, is UserService responsible for controlling the lifetime of UserRepository and SendGridEmailProvider in this example?

Imagine if we had some other class used by UserService which opened a long-running connection. Should UserService be responsible for disposing of that connection too? Of course not. All of these dependencies have a lifetime associated with them — they could be singletons, they could be transient and scoped to a specific HTTP Request, etc. The controlling of these lifetimes is well outside the purview of UserService. So, to solve these issues, we’ll inject all of the dependencies in, just like we saw before.

import { UserRepository } from './UserRepository';
import { SendGridEmailProvider } from './SendGridEmailProvider';

class UserService {
    private readonly userRepository: UserRepository;
    private readonly sendGridEmailProvider: SendGridEmailProvider;

    public constructor (
        userRepository: UserRepository,
        sendGridEmailProvider: SendGridEmailProvider
    ) {
        // Yay! Dependencies are injected.
        this.userRepository = userRepository;
        this.sendGridEmailProvider = sendGridEmailProvider;
    }

    public async registerUser(dto: IRegisterUserDto): Promise<void> {
        // User object & validation
        const user = User.fromDto(dto);

        if (await this.userRepository.existsByEmail(dto.email))
            return Promise.reject(new DuplicateEmailError());

        // Database persistence
        await this.userRepository.addUser(user);

        // Send welcome email
        await this.sendGridEmailProvider.sendWelcomeEmail(user.email);
    }

    public async findUserById(id: string): Promise<User> {
        return this.userRepository.findUserById(id);
    }
}

Great! Now UserService receives pre-instantiated objects, and whichever piece of code calls and creates a new UserService is the piece of code in charge of controlling the lifetime of the dependencies. We’ve inverted control away from UserService and up to a higher level. If I only wanted to show how we could inject dependencies through the constructor so as to explain the basic tenet of dependency injection, I could stop here. There are still some problems from a design perspective, however, which, when rectified, will make our use of dependency injection all the more powerful.
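That higher level of control is often called the "composition root": the application's entry point, which constructs the dependencies, controls their lifetimes, and wires everything together. A minimal sketch of the idea, using simplified stand-ins rather than the full classes above:

```typescript
// Simplified stand-ins for the classes defined earlier, for illustration only.
class UserRepository {
    public async addUser(): Promise<void> { /* ... dbDriver.save(...) */ }
}

class SendGridEmailProvider {
    public async sendWelcomeEmail(_to: string): Promise<void> { /* ... */ }
}

class UserService {
    public constructor (
        private readonly userRepository: UserRepository,
        private readonly sendGridEmailProvider: SendGridEmailProvider
    ) {}

    public describe(): string {
        return `${this.userRepository.constructor.name} + ${this.sendGridEmailProvider.constructor.name}`;
    }
}

// The composition root: the only place that knows how to build the object graph.
// If UserRepository someday needs a connection pool, only this code changes.
const userService = new UserService(
    new UserRepository(),
    new SendGridEmailProvider()
);

console.log(userService.describe()); // "UserRepository + SendGridEmailProvider"
```

In a real application this wiring lives in (or near) the entry point, or is delegated to a dependency injection container.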

Firstly, why does UserService know that we’re using SendGrid for emails? Secondly, both dependencies are on concrete classes — the concrete UserRepository and the concrete SendGridEmailProvider. This relationship is too rigid — we’re stuck having to pass in some object that is a UserRepository and is a SendGridEmailProvider.

This isn’t great because we want UserService to be completely agnostic to the implementation of its dependencies. By having UserService be blind in that manner, we can swap out the implementations without affecting the service at all — this means, if we decide to migrate away from SendGrid and use MailChimp instead, we can do so. It also means if we want to fake out the email provider for tests, we can do that too.

What would be useful is if we could define some public interface and require that incoming dependencies abide by that interface, while still having UserService be agnostic to implementation details. Put another way, we need UserService to depend only on an abstraction of its dependencies, and not on its actual concrete dependencies. We can do that through, well, interfaces.

Start by defining an interface for the UserRepository and implement it:

// UserRepository.ts

import { dbDriver } from 'pg-driver';

export interface IUserRepository {
    addUser(user: User): Promise<void>;
    findUserById(id: string): Promise<User>;
    existsByEmail(email: string): Promise<boolean>;
}

export class UserRepository implements IUserRepository {
    public async addUser(user: User): Promise<void> {
        // ... dbDriver.save(...)
    }

    public async findUserById(id: string): Promise<User> {
        // ... dbDriver.query(...)
    }

    public async existsByEmail(email: string): Promise<boolean> {
        // ... dbDriver.save(...)
    }
}

And define one for the email provider, also implementing it:

// IEmailProvider.ts
export interface IEmailProvider {
    sendWelcomeEmail(to: string): Promise<void>;
}

// SendGridEmailProvider.ts
import { sendMail } from 'sendgrid';
import { IEmailProvider } from './IEmailProvider';

export class SendGridEmailProvider implements IEmailProvider {
    public async sendWelcomeEmail(to: string): Promise<void> {
        // ... await sendMail(...);
    }
}

Note: This is the Adapter Pattern from the Gang of Four Design Patterns.

Now, our UserService can depend on the interfaces rather than the concrete implementations of the dependencies:

import { IUserRepository } from './UserRepository';
import { IEmailProvider } from './IEmailProvider';

class UserService {
    private readonly userRepository: IUserRepository;
    private readonly emailProvider: IEmailProvider;

    public constructor (
        userRepository: IUserRepository,
        emailProvider: IEmailProvider
    ) {
        // Double yay! Injecting dependencies and coding against interfaces.
        this.userRepository = userRepository;
        this.emailProvider = emailProvider;
    }

    public async registerUser(dto: IRegisterUserDto): Promise<void> {
        // User object & validation
        const user = User.fromDto(dto);

        if (await this.userRepository.existsByEmail(dto.email))
            return Promise.reject(new DuplicateEmailError());

        // Database persistence
        await this.userRepository.addUser(user);

        // Send welcome email
        await this.emailProvider.sendWelcomeEmail(user.email);
    }

    public async findUserById(id: string): Promise<User> {
        return this.userRepository.findUserById(id);
    }
}

If interfaces are new to you, this might look very, very complex. Indeed, the concept of building loosely coupled software might be new to you too. Think about wall receptacles. You can plug any device into any receptacle so long as the plug fits the outlet. That’s loose coupling in action. Your toaster is not hard-wired into the wall, because if it were, and you decided to upgrade your toaster, you’d be out of luck. Instead, outlets are used, and the outlet defines the interface.

Similarly, when you plug an electronic device into your wall receptacle, you’re not concerned with the voltage potential, the max current draw, the AC frequency, and so on; you just care whether the plug fits the outlet. You could have an electrician come in and change all the wires behind that outlet, and you’d have no problem plugging in your toaster, so long as the outlet doesn’t change. Further, your electricity source could be switched to come from the city or your own solar panels, and once again, you don’t care as long as you can still plug into that outlet.

The interface is the outlet, providing “plug-and-play” functionality. In this example, the wiring in the wall and the electricity source is akin to the dependencies and your toaster is akin to the UserService (it has a dependency on the electricity) — the electricity source can change and the toaster still works fine and need not be touched, because the outlet, acting as the interface, defines the standard means for both to communicate. In fact, you could say that the outlet acts as an “abstraction” of the wall wiring, the circuit breakers, the electrical source, etc.

It is a common and well-regarded principle of software design, for the reasons above, to code against interfaces (abstractions) and not implementations, which is what we’ve done here. In doing so, we’re given the freedom to swap out implementations as we please, for those implementations are hidden behind the interface (just like wall wiring is hidden behind the outlet), and so the business logic that uses the dependency never has to change so long as the interface never changes. Remember, UserService only needs to know what functionality is offered by its dependencies, not how that functionality is supported behind the scenes. That’s why using interfaces works.

These two simple changes, utilizing interfaces and injecting dependencies, make all the difference in the world when it comes to building loosely coupled software, and they solve all of the problems we ran into above.

If we decide tomorrow that we want to rely on Mailchimp for emails, we simply create a new Mailchimp class that honors the IEmailProvider interface and inject it in instead of SendGrid. The actual UserService class never has to change even though we’ve just made a ginormous change to our system by switching to a new email provider. The beauty of these patterns is that UserService remains blissfully unaware of how the dependencies it uses work behind the scenes. The interface serves as the architectural boundary between both components, keeping them appropriately decoupled.
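
To make that concrete, here is what such a MailChimp adapter might look like. This is only a sketch: the interface is repeated so the snippet is self-contained, and the actual MailChimp API call is elided behind a placeholder comment, just like the SendGrid one above.

```typescript
// The email provider interface from earlier, repeated for self-containment.
interface IEmailProvider {
    sendWelcomeEmail(to: string): Promise<void>;
}

// Hypothetical MailChimp adapter. Because it honors IEmailProvider,
// it can be injected into UserService in place of SendGridEmailProvider
// without touching UserService at all.
class MailChimpEmailProvider implements IEmailProvider {
    public async sendWelcomeEmail(to: string): Promise<void> {
        // ... call the MailChimp client here, elided as in the examples above.
    }
}
```

The only code that changes is the place where the concrete class is instantiated and injected; every dependent component keeps working against the same interface.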

Additionally, when it comes to testing, we can create fakes that abide by the interfaces and inject them instead. Here, you can see a fake repository and a fake email provider.

// Both fakes:
class FakeUserRepository implements IUserRepository {
    private readonly users: User[] = [];

    public async addUser(user: User): Promise<void> {
        this.users.push(user);
    }

    public async findUserById(id: string): Promise<User> {
        const userOrNone = this.users.find(u => u.id === id);

        return userOrNone
            ? Promise.resolve(userOrNone)
            : Promise.reject(new NotFoundError());
    }

    public async existsByEmail(email: string): Promise<boolean> {
        return Boolean(this.users.find(u => u.email === email));
    }

    public getPersistedUserCount = () => this.users.length;
}

class FakeEmailProvider implements IEmailProvider {
    private readonly emailRecipients: string[] = [];

    public async sendWelcomeEmail(to: string): Promise<void> {
        this.emailRecipients.push(to);
    }

    public wasEmailSentToRecipient = (recipient: string) =>
        Boolean(this.emailRecipients.find(r => r === recipient));
}

Notice that both fakes implement the same interfaces that UserService expects its dependencies to honor. Now, we can pass these fakes into UserService instead of the real classes and UserService will be none the wiser; it’ll use them just as if they were the real deal. It can do that because it knows that all of the methods and properties it wants to use on its dependencies do indeed exist and are indeed accessible (because they implement the interfaces), which is all UserService needs to know (i.e., not how the dependencies work).

We’ll inject these two during tests, and it’ll make the testing process so much easier and so much more straightforward than what you might be used to when dealing with over-the-top mocking and stubbing libraries, working with Jest’s own internal tooling, or trying to monkey-patch.

Here are actual tests using the fakes:

// Fakes
let fakeUserRepository: FakeUserRepository;
let fakeEmailProvider: FakeEmailProvider;

// SUT
let userService: UserService;

// We want to clean out the internal arrays of both fakes 
// before each test.
beforeEach(() => {
    fakeUserRepository = new FakeUserRepository();
    fakeEmailProvider = new FakeEmailProvider();

    userService = new UserService(fakeUserRepository, fakeEmailProvider);
});

// A factory to easily create DTOs.
// Here, we can optionally override the defaults
// thanks to the built-in Partial utility type of TypeScript.
function createSeedRegisterUserDto(opts?: Partial<IRegisterUserDto>): IRegisterUserDto {
    return {
        id: 'someId',
        email: '[email protected]',
        ...opts
    };
}

test('should correctly persist a user and send an email', async () => {
    // Arrange
    const dto = createSeedRegisterUserDto();

    // Act
    await userService.registerUser(dto);

    // Assert
    const expectedUser = User.fromDto(dto);
    const persistedUser = await fakeUserRepository.findUserById(dto.id);

    const wasEmailSent = fakeEmailProvider.wasEmailSentToRecipient(dto.email);

    expect(persistedUser).toEqual(expectedUser);
    expect(wasEmailSent).toBe(true);
});

test('should reject with a DuplicateEmailError if an email already exists', async () => {
    // Arrange
    const existingEmail = '[email protected]';
    const dto = createSeedRegisterUserDto({ email: existingEmail });
    const existingUser = User.fromDto(dto);

    await fakeUserRepository.addUser(existingUser);

    // Act, Assert
    await expect(userService.registerUser(dto))
        .rejects.toBeInstanceOf(DuplicateEmailError);

    expect(fakeUserRepository.getPersistedUserCount()).toBe(1);
});

test('should correctly return a user', async () => {
    // Arrange
    const user = User.fromDto(createSeedRegisterUserDto());
    await fakeUserRepository.addUser(user);

    // Act
    const receivedUser = await userService.findUserById(user.id);

    // Assert
    expect(receivedUser).toEqual(user);
});

You’ll notice a few things here. The hand-written fakes are very simple: there’s no complexity from mocking frameworks, which only serve to obfuscate. Everything is hand-rolled, which means there is no magic in the codebase. Asynchronous behavior is faked to match the interfaces. I use async/await in the tests even though all the faked behavior is synchronous, because it more closely matches how I’d expect the operations to work in the real world, and because it lets me run this same test suite against real implementations in addition to the fakes, so handling asynchrony appropriately is required. In fact, in real life, I would most likely not even bother mocking the database, and would instead use a local DB in a Docker container until there were so many tests that I had to mock it away for performance. I could then run the in-memory DB tests after every single change, and reserve the real local DB tests for right before committing changes and for the build server in the CI/CD pipeline.

In the first test, in the “arrange” section, we simply create the DTO. In the “act” section, we call the system under test and execute its behavior. Things get slightly more complex when making assertions. Remember, at this point in the test, we don’t even know if the user was saved correctly. So, we define what we expect a persisted user to look like, and then we call the fake Repository and ask it for a user with the ID we expect. If the UserService didn’t persist the user correctly, this will throw a NotFoundError and the test will fail, otherwise, it will give us back the user. Next, we call the fake email provider and ask it if it recorded sending an email to that user. Finally, we make the assertions with Jest and that concludes the test. It’s expressive and reads just like how the system is actually working. There’s no indirection from mocking libraries and there’s no coupling to the implementation of the UserService.

In the second test, we create an existing user and add it to the repository, then we try to call the service again using a DTO that has already been used to create and persist a user, and we expect that to fail. We also assert that no new data was added to the repository.

For the third test, the “arrange” section now consists of creating a user and persisting it to the fake Repository. Then, we call the SUT, and finally, check if the user that comes back is the one we saved in the repo earlier.

These examples are relatively simple, but when things get more complex, being able to rely on dependency injection and interfaces in this manner keeps your code clean and makes writing tests a joy.

A brief aside on testing: In general, you don’t need to mock out every dependency that the code uses. Many people, erroneously, claim that a “unit” in a “unit test” is one function or one class. That could not be more incorrect. The “unit” is defined as the “unit of functionality” or the “unit of behavior”, not one function or class. So if a unit of behavior uses 5 different classes, you don’t need to mock out all those classes unless they reach outside of the boundary of the module. In this case, I mocked the database and I mocked the email provider because I have no choice. If I don’t want to use a real database and I don’t want to send an email, I have to mock them out. But if I had a bunch more classes that didn’t do anything across the network, I would not mock them because they’re implementation details of the unit of behavior. I could also decide against mocking the database and emails and spin up a real local database and a real SMTP server, both in Docker containers. On the first point, I have no problem using a real database and still calling it a unit test so long as it’s not too slow. Generally, I’d use the real DB first until it became too slow and I had to mock, as discussed above. But, no matter what you do, you have to be pragmatic — sending welcome emails is not a mission-critical operation, thus we don’t need to go that far in terms of SMTP servers in Docker containers. Whenever I do mock, I would be very unlikely to use a mocking framework or try to assert on the number of times called or parameters passed except in very rare cases, because that would couple tests to the implementation of the system under test, and they should be agnostic to those details.

Performing Dependency Injection Without Classes And Constructors

So far, throughout the article, we’ve worked exclusively with classes and injected the dependencies through the constructor. If you’re taking a functional approach to development and wish not to use classes, one can still obtain the benefits of dependency injection using function arguments. For example, our UserService class above could be refactored into:

function makeUserService(
    userRepository: IUserRepository,
    emailProvider: IEmailProvider
): IUserService {
    return {
        registerUser: async dto => {
            // ...
        },

        findUserById: id => userRepository.findUserById(id)
    }
}

It’s a factory that receives the dependencies and constructs the service object. We can also inject dependencies into Higher Order Functions. A typical example would be creating an Express Middleware function that gets a UserRepository and an ILogger injected:

function authProvider(userRepository: IUserRepository, logger: ILogger) {
    return async (req: Request, res: Response, next: NextFunction) => {
        // ...
        // Has access to userRepository, logger, req, res, and next.
    }
}

In the first example, I didn’t define the type of dto and id because if we define an interface called IUserService containing the method signatures for the service, then the TS Compiler will infer the types automatically. Similarly, had I defined a function signature for the Express Middleware to be the return type of authProvider, I wouldn’t have had to declare the argument types there either.
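
To illustrate that inference, here is a hypothetical IUserService interface together with a toy in-memory factory. Because the factory declares IUserService as its return type, the compiler contextually types `dto` and `id` with no annotations needed; the DTO and User shapes here are simplified assumptions for the sake of a self-contained sketch.

```typescript
interface IRegisterUserDto { id: string; email: string; }
interface User { id: string; email: string; }

interface IUserService {
    registerUser(dto: IRegisterUserDto): Promise<void>;
    findUserById(id: string): Promise<User>;
}

// A toy in-memory implementation, purely to demonstrate the inference.
function makeInMemoryUserService(): IUserService {
    const users: User[] = [];
    return {
        // `dto` is inferred as IRegisterUserDto from IUserService.
        registerUser: async dto => {
            users.push({ id: dto.id, email: dto.email });
        },
        // `id` is inferred as string.
        findUserById: async id => {
            const user = users.find(u => u.id === id);
            if (!user) throw new Error('Not found');
            return user;
        }
    };
}
```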

If we considered the email provider and the repository to be functional too, and if we injected their specific dependencies as well instead of hard coding them, the root of the application could look like this:

import { sendMail } from 'sendgrid';

async function main() {
    const app = express();

    const dbConnection = await connectToDatabase();

    // Change emailProvider to makeMailChimpEmailProvider whenever we want
    // with no changes made to dependent code.
    const userRepository = makeUserRepository(dbConnection);
    const emailProvider = makeSendGridEmailProvider(sendMail);
    const logger = makeLogger();

    const userService = makeUserService(userRepository, emailProvider);

    // Put this into another file. It’s a controller action.
    app.post('/login', async (req, res) => {
        await userService.registerUser(req.body as IRegisterUserDto);
        return res.send();
    });

    // Put this into another file. It’s a controller action.
    app.delete(
        '/me',
        authProvider(userRepository, logger),
        (req, res) => { ... }
    );
}

Notice that we fetch the dependencies that we need, like a database connection or third-party library functions, and then we utilize factories to make our first-party dependencies using the third-party ones. We then pass them into the dependent code. Since everything is coded against abstractions, I can swap out either userRepository or emailProvider to be any different function or class with any implementation I want (that still implements the interface correctly) and UserService will just use it with no changes needed, which, once again, is because UserService cares about nothing but the public interface of the dependencies, not how the dependencies work.
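
One of those functional factories might be sketched like this. The `SendMailFn` signature is an assumption made for illustration, not SendGrid’s real API; the point is only that the third-party function itself becomes an injected dependency.

```typescript
// Assumed shape of the injected third-party send function (illustrative only).
type SendMailFn = (to: string, subject: string, body: string) => Promise<void>;

interface IEmailProvider {
    sendWelcomeEmail(to: string): Promise<void>;
}

// The third-party function is itself injected, so tests can pass
// a fake recorder instead of a real network call.
function makeSendGridEmailProvider(sendMail: SendMailFn): IEmailProvider {
    return {
        sendWelcomeEmail: to =>
            sendMail(to, 'Welcome!', 'Thanks for signing up.')
    };
}
```

Because the provider depends only on the injected function, swapping it out for a different email vendor means writing one new factory; nothing downstream changes.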

As a disclaimer, I want to point out a few things. As stated earlier, this demo was optimized for showing how dependency injection makes life easier, and thus it wasn’t optimized in terms of system-design best practices insofar as the patterns surrounding how Repositories and DTOs should technically be used. In real life, one has to deal with managing transactions across repositories, and the DTO should generally not be passed into service methods, but rather mapped in the controller to allow the presentation layer to evolve separately from the application layer. The userService.findUserById method here also neglects to map the User domain object to a DTO, which it should do in real life. None of this affects the DI implementation, though; I simply wanted to keep the focus on the benefits of DI itself, not Repository design, Unit of Work management, or DTOs. Finally, although this may look a little like the NestJS framework in terms of the manner of doing things, it’s not, and I actively discourage people from using NestJS for reasons outside the scope of this article.

A Brief Theoretical Overview

All applications are made up of collaborating components, and the manner in which those collaborators collaborate and are managed will decide how much the application will resist refactoring, resist change, and resist testing. Dependency injection mixed with coding against interfaces is a primary method (among others) of reducing the coupling of collaborators within systems, and making them easily swappable. This is the hallmark of a highly cohesive and loosely coupled design.

The individual components that make up applications in non-trivial systems must be decoupled if we want the system to be maintainable, and the way we achieve that level of decoupling, as stated above, is by depending upon abstractions, in this case, interfaces, rather than concrete implementations, and utilizing dependency injection. Doing so provides loose coupling and gives us the freedom of swapping out implementations without needing to make any changes on the side of the dependent component/collaborator and solves the problem that dependent code has no business managing the lifetime of its dependencies and shouldn’t know how to create them or dispose of them.

Despite the simplicity of what we’ve seen thus far, there’s a lot more complexity that surrounds dependency injection.

Injection of dependencies can come in many forms. Constructor Injection is what we have been using here, since dependencies are injected through a constructor. There also exist Setter Injection and Interface Injection. In the case of the former, the dependent component exposes a setter method which is used to inject the dependency; that is, it could expose a method like setUserRepository(userRepository: UserRepository). In the latter case, we define interfaces through which to perform the injection, but I’ll omit the explanation here for brevity, since we’ll discuss it in more depth in the second article of this series.
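
A minimal sketch of Setter Injection, using a hypothetical ILogger dependency:

```typescript
interface ILogger {
    log(message: string): void;
}

// Setter Injection: the dependency arrives through a setter method
// after construction, rather than through the constructor.
class ReportGenerator {
    private logger?: ILogger;

    public setLogger(logger: ILogger): void {
        this.logger = logger;
    }

    public generate(): string {
        // Optional chaining: the logger may not have been injected yet,
        // which is one of the drawbacks of this technique.
        this.logger?.log('Generating report');
        return 'report';
    }
}
```

Note the drawback this sketch exposes: the component can be used before its dependency is supplied, something Constructor Injection makes impossible.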

Because wiring up dependencies manually can be difficult, various IoC Frameworks and Containers exist. These containers store your dependencies and resolve the correct ones at runtime, often through Reflection in languages like C# or Java, exposing various configuration options for dependency lifetime. Despite the benefits that IoC Containers provide, there are cases to be made for moving away from them, and only resolving dependencies manually. To hear more about this, see Greg Young’s 8 Lines of Code talk.

Additionally, DI Frameworks and IoC Containers can provide too many options, and many rely on decorators or attributes to perform techniques such as setter or field injection. I look down on this kind of approach because, if you think about it intuitively, the point of dependency injection is to achieve loose coupling, but if you begin to sprinkle IoC Container-specific decorators all over your business logic, while you may have achieved decoupling from the dependency, you’ve inadvertently coupled yourself to the IoC Container. IoC Containers like Awilix solve this problem since they remain divorced from your application’s business logic.

Conclusion

This article served to depict only a very practical example of dependency injection in use and mostly neglected the theoretical attributes. I did it this way in order to make it easier to understand what dependency injection is at its core in a manner divorced from the rest of the complexity that people usually associate with the concept.

In the second article of this series, we’ll take a much, much more in-depth look, including at:

  • The differences between Dependency Injection, Dependency Inversion, and Inversion of Control;
  • Dependency Injection anti-patterns;
  • IoC Container anti-patterns;
  • The role of IoC Containers;
  • The different types of dependency lifetimes;
  • How IoC Containers are designed;
  • Dependency Injection with React;
  • Advanced testing scenarios;
  • And more.

Stay tuned!

Reblogged 1 year ago from smashingmagazine.com

Google algorithm updates 2020 in review: core updates, passage indexing and page experience

Despite the pandemic, Google was busy working on changes to its search ranking algorithm — here is a summary review of those changes through 2020.

Please visit Search Engine Land for the full article.

Reblogged 1 year ago from feeds.searchengineland.com

SEO Trends 2021: What every marketer should know

30-second summary:

  • SEO is a major supportive strategy to digital marketing for businesses.
  • Impact of AI and machine learning (ML) is undeniable for the future of SEO.
  • User Experience is taking prominence for a majority of search engines out there.
  • Local SEO is important for businesses to get hits from local audiences.
  • The Expertise, Authority, and Trustworthiness aspect is defined by Google as a ranking factor.
  • Structured Data can help you deliver rich results for users on SERPs.
  • There are more data-backed trends to discover, read on to find out what those are!

Digital marketing has emerged as the most powerful form of marketing in the current era. With more than 4.66 billion internet users as per the 2020 report by Statista, there is undoubtedly immense potential for businesses to capitalize through online marketing. SEO thus becomes a means to cash in on this opportunity, as the majority of the world’s internet traffic is generated through search engines. More importantly, Google is the present world leader in search engines, followed by Bing, Yahoo, and Baidu. In light of this information, let’s take a quick look at the SEO trends for 2021 that you must know.

According to a recent report by Safari Digital, the number-one result on Google’s SERP (search engine results page) enjoys a CTR (click-through rate) of 34.36%. Furthermore, 61% of marketers consider SEO to be the key to online success.

Moreover, 82% of people who implemented SEO-based strategies found it to be effective, while businesses allocate on average 41% of their marketing strategy budget to invest in SEO.

Top 11 SEO Trends 2021

1. AI to show greater impact

BERT (Bidirectional Encoder Representations from Transformers) has been around for a couple of years now, and it seems we are only now beginning to see the potency this technology holds for the future. Developed and published in 2018 by Google, it is a neural network-based technique for NLP (natural language processing) pre-training. In simpler terms, it helps Google decipher the context of words in a search query.

Furthermore, 37% of businesses and organizations are already employing AI as per Data Prot’s recent report. With the AI industry projected to earn around $118 billion by 2025, there is no doubt that the technology will leave a lasting impact on SEO.

Through AI powering your business, you would not only be able to create more powerful content but also augment your keyword research, maximize link building opportunities, and optimize all digital platforms.

There are tremendous tools like Keyword Tool and Twinword which use AI to speed up your keyword research.

Along with that, Wordsmith, Articoolo, and WordAI are amongst the best tools that help you create content using artificial intelligence.

2. Build and improve on UX

User experience is the highlight of any purchased product or hired service. Furthermore, according to a recent study by Small Biz Genius, 88% of online shoppers don’t return after having a bad user experience. In fact, 70% of online businesses fail due to bad user experiences, which is why UX testing is crucial. This also means that providing an attractive and efficient UI (user interface) is also important. Some major qualities to create the best user experience include:

  • Incredibly fast loading times
  • Encouraging users to explore seamlessly
  • Providing easy to navigate interface
  • Adequate use of white space, fonts, and high-quality images
  • Visual aid to guide users and their experiences (a quick video to showcase step by step guide)
  • User-friendly URLs and sitemap
  • Streamline website design so that it helps users find specific functions
  • Terrific user dashboards for registered members & much more

3. Get more from local SEO

When it comes to digital marketing, local SEO has become really important, and local results are some of the most relevant for users who are looking for solutions they can acquire. This has allowed businesses to take advantage of and benefit from local SEO. According to Chatmeter, “near me” and “tonight/today” searches increased by 900% in the past two years.

In fact, mobile search for “open + now + near me” has grown by 200%. And to top it all off, around 46% of all searches on Google are local.

Here are some tips to start mastering your local SEO:

  • Create a Google My Business Account
  • Optimize for mobile devices and voice search (more on this later)
  • Cash-in on local keywords
  • Take advantage of online business directories
  • Develop content based on local events, news stories, and location-specific places

4. Implement the E.A.T concept

For SEO, EAT stands for Expertise, Authoritativeness, and Trustworthiness. These factors are also mentioned in Google’s Search Quality Evaluator Guidelines.

Google mentions that they use third-party Search Quality Raters that are spread out all over the world and highly trained to provide feedback to help them understand which changes make Search more useful. The sub-section 3.2 of the general guidelines under section 3.0 entitled ‘Overall Page Quality Rating’ is where you will find EAT. Here are some tips for you to utilize this concept:

  • Produce documents with a professional approach with regards to your industry, market, trade, or niche
  • Get links from high authority domains
  • Hire experts and deliver content with facts that are relevant and checked for accuracy
  • Keep content up to date
  • Get reviews
  • Flash your credentials
  • Get a Wikipedia page
  • Demonstrate work that showcases your expertise to Google

5. Learn about structured data

Structured data is becoming increasingly vital, as it allows your content to be better understood by search engines. Google offers a Structured Data Testing Tool that you can use to familiarize yourself with the concept and start applying structured data to your website and landing pages.

This helps to deliver rich results from your website or landing pages to appear on SERPs.

A rich result undoubtedly gets more limelight, and hence more attention from users, which means CTRs also increase tremendously.

Having rich results from your domain made available on the user’s screen leaves a lasting impact and also helps you build authority in the eyes of your niche market’s audience.

6. Obligation: Mobile-friendliness

According to mobile marketing statistics published by Web FX, 52.2% of all website traffic is generated from mobile phones, while 61% of consumers are most likely to purchase from mobile-friendly sites.

Furthermore, it has been observed that over 96% of people use Google when they search on mobile.

I think these stats speak for themselves, and with over 3.5 billion smartphone users around the world, it is obvious that you need to make your website friendly for mobile devices. Here are some tips to get you started:

  • Prioritize website speed
  • Make website responsive
  • Keep web designs simple
  • Use Google’s Mobile-Friendly Testing tool for further recommendations

7. Play with long-tail keywords

Neil Patel offers you some valid reasons to use long-tail keywords, and he mentions that they account for more than 70% of all web searches.

They can help you outrank the competition, plus they are how people actually make use of search on the internet. Long-tail keywords offer context to your content, and also support better conversion rates with an average long-tail keyword having a 36% conversion rate. Let’s see an example provided by SEMRush.

Here are some tips for finding long-tail keywords:

  • Use Google suggestions and variations as a source for long-tail keywords
  • Google related searches are also a good source
  • Go for questions, and especially optimize your content for how-to questions
  • You can also mine your analytics and your search query reports for further insights
  • Browse for topics on eHow, Wikipedia, and various Q&A sites

8. Search intent takes prominence

According to a study conducted by Ahrefs, optimizing content for search intent led to a 677% increase in organic traffic for one of their core landing pages in just six months. Search intent is the reason behind a person’s query: what they want to learn, find, or purchase.

While Google intends to provide the most relevant results for user queries, we already know that BERT and other NLP technologies are growing and improving over time. The future of search is therefore going to be highly influenced by user intent, or search intent.

9. Terrific quality of content

Content has always been king, whether written, animated, or in audio-video format. Interactive content and infographics are incredible traffic magnets. According to a post by OptinMonster, 91% of B2B marketers use content marketing to reach customers, while 86% of B2C marketers consider content marketing their key strategy.

Content marketing is focused on the creation and distribution of consistent, relevant, and valuable content for audiences. Here are some tips for you to follow:

  • Stay original and refrain from plagiarism
  • Deliver solutions to main points
  • Make content reader-friendly and digestible for users
  • Include visual aids to support the context of your content
  • Provide actionable tips to help resolve user queries
  • Keep your headlines strong and your facts accurate from credible sources
  • Never shy away from updating your already published content to keep it fresh

10. Video marketing strategy

Videos are probably the most popular media consumed by online users today. According to video marketing statistics by WordStream, YouTube has over a billion users, and 82% of Twitter users watch video content on Twitter. YouTube’s 30 million daily active users watch more than 1 billion videos every day. That is how big the video market is.

To start with your own video marketing strategy, here are some bits of advice:

  • Create tutorial or demo videos centered on user intent
  • Optimize videos with SEO content
  • Include powerful CTAs
  • Keep videos mostly informative, with only a light sales pitch
  • Make the opening count, since the first few seconds decide whether viewers stay
  • End on a high note

11. Voice search optimization

DBS Interactive reports that 27% of the online population uses voice search on mobile.

Around 111.8 million people in the US, over a third of the nation's total population, use voice assistants monthly, and more than half of all smartphone users already engage with voice search technology.

To jump on the bandwagon would be the right decision, and here is how you can get started too:

  • Target longer, conversational keywords, since voice queries read like natural speech
  • Expect full questions, for example, "What are SEO trends for the upcoming year?"


  • Local listings and searches take precedence
  • Optimize for rich answers and make use of structured data guidelines
  • Include FAQs for your product pages and blogs
  • Make use of conversational language for content
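One concrete way to "optimize for rich answers" is to publish FAQ structured data (schema.org JSON-LD) alongside your FAQ content. As a minimal sketch, here is how that markup can be generated with Python's standard library; the question and answer text are hypothetical placeholders:

```python
import json

def build_faq_jsonld(faqs):
    """Build FAQPage structured data (schema.org JSON-LD) from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }

# Hypothetical FAQ content for a blog post
markup = build_faq_jsonld([
    ("What are SEO trends for the upcoming year?",
     "Search intent, quality content, video, and voice search optimization."),
])

# Embed the result on the page inside a <script type="application/ld+json"> tag
print(json.dumps(markup, indent=2))
```

The generated JSON goes inside a `<script type="application/ld+json">` tag on the relevant page, which is how search engines pick up FAQ rich results.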

Conclusion

Every business in the world with an online presence is chasing better SEO tactics to win more traffic and generate more sales.

I hope the aforementioned tips and guidelines will help you get things sorted for your SEO department before you dive into the next year.

If you have a question about the topic, feel free to leave it in the comment section below. All the best for your future endeavors, and cheers!

Amanda Jerelyn currently works as a Marketing Manager at Dissertation Assistance, a perfect place for students to buy academic writing services from expert dissertation writers UK.  During her free time, she likes to practice mindful yoga to keep herself fit and healthy.

The post SEO Trends 2021: What every marketer should know appeared first on Search Engine Watch.

Reblogged 1 year ago from www.searchenginewatch.com

How NLP and AI are revolutionizing SEO-friendly content [Five tools to help you]

30-second summary:

  • Natural language processing (NLP) is one factor you’ll need to account for as you do SEO on your website.
  • If your content is optimized for NLP, you can expect it to rise to the top of the search rankings and stay there for some time.
  • As AI and NLP keep evolving, we may also eventually see machines doing a lot of other SEO-related work, like inserting H1 and image alt tags into HTML code, building backlinks via guest posts, and doing email outreach to other AI-powered content editors.
  • While it seems far-fetched right now, it’s exciting to see how SEO, NLP and AI will evolve together.
  • Writer.com’s Co-founder and CEO, May Habib discusses in-depth about SEO content and shares top tools to help you through the content creation process.

Modern websites are at the mercy of algorithms, which dictate the content shown in the search results for specific keywords. These algorithms are getting smarter by the day thanks to machine learning, a branch of artificial intelligence (AI).

If you want your site to rank in search results, you need to know how these algorithms work. They change frequently, so if you continually re-work your SEO to account for these changes, you’ll be in a good position to dominate the rankings. 

Natural language processing (NLP) is one factor you’ll need to account for as you do SEO on your website. If your content is optimized for NLP, you can expect it to rise to the top of the search rankings and stay there for some time.

The evolving role of NLP and AI in content creation & SEO

Before we trace how NLP and AI have increased in influence over content creation and SEO processes, we need to understand what NLP is and how it works. NLP has three main tasks: recognizing text, understanding text, and generating text.

  • Recognition: Computers think only in terms of numbers, not text. This means that any NLP solution needs to convert text into numbers so computers can understand them.
  • Understanding: Once the text has been converted into numbers, algorithms can then perform statistical analysis to discover the words or topics that appear together most frequently. 
  • Generation: The NLP machine can use its findings to ask questions or suggest topics around which a writer can create content. Some of the more advanced machines are already starting to put together content briefs. 
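The "recognition" step above, turning text into numbers, can be illustrated with a toy bag-of-words encoding using only the Python standard library. This is a deliberately crude sketch; real NLP systems use far richer representations such as word embeddings:

```python
from collections import Counter
import re

def bag_of_words(text):
    """Convert text to a word -> count mapping: a crude 'text as numbers' encoding."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

def vectorize(text, vocabulary):
    """Represent the text as a list of counts over a fixed vocabulary."""
    counts = bag_of_words(text)
    return [counts[word] for word in vocabulary]

vocab = ["search", "intent", "content"]
print(vectorize("Search intent drives content; content wins.", vocab))  # [1, 1, 2]
```

Once text is reduced to vectors of numbers like this, the "understanding" step (statistical analysis of which words co-occur) becomes ordinary arithmetic.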

With the help of NLP and artificial intelligence (AI), writers should soon be able to generate content in less time as they will only need to put together keywords and central ideas, then let the machine take care of the rest. However, while an AI is a lot smarter than the proverbial thousand monkeys banging away on a thousand typewriters, it will take some time before we’ll see AI- and NLP-generated content that’s actually readable.

As AI and NLP keep evolving, we may also eventually see machines doing a lot of other SEO-related work, like inserting H1 and image alt tags into HTML code, building backlinks via guest posts, and doing email outreach to other AI-powered content editors. While it seems far-fetched right now, it’s exciting to see how SEO, NLP, and AI will evolve together.

Major impact from Google BERT update

In late 2019, Google announced the launch of its Bidirectional Encoder Representations from Transformers (BERT) algorithm.  BERT helps computers understand human language using a method that mimics human language processing. 

According to Google, the BERT algorithm understands contexts and nuances of words in search strings and matches those searches with results closer to the user’s intent. Google uses BERT to generate the featured snippets for practically all relevant searches. 

One example Google gave was the search query “2019 brazil traveler to usa need a visa”. The old algorithm would return search results for U.S. citizens who are planning to go to Brazil. BERT, on the other hand, churns out results for Brazilian citizens who are going to the U.S. The key difference between the two algorithms is that BERT recognizes the nuance that the word “to” adds to the search term, which the old algorithm failed to capture. 

Source: Google

Instead of looking at individual keywords, BERT looks at the search string as a whole, which gives it a better sense of user intent than ever before. Users are becoming more specific with the questions they ask and are asking more new questions, and BERT breaks down these questions and generates search results that are more relevant to users.

This is great news for search engine users, but what does it mean for SEO practitioners? While it doesn’t exactly throw long-standing SEO principles out the window, you might have to adjust to accommodate the new algorithm’s intricacies and create more content containing long-tail (longer and more specific) keywords. Let’s move on to the next section to learn more about creating BERT-optimized content.

Developing SEO-friendly content for improved Google

When we perform SEO on our content, we need to consider Google’s intentions in introducing BERT and giving NLP a larger role in determining search rankings. Google uses previous search results for the same keywords to improve its results, but according to the company, 15% of all search queries have never been searched before. The implication here is that Google needs to decipher these new questions by reconstructing them in a way it understands.

With this in mind, your SEO should factor in the criteria below: 

Core understanding of search intent

While keywords still play an important role in Google searches, BERT also pays close attention to user intent, which just means a user’s desired end goal for performing a search. We may classify user intent into four categories:

  • Navigational: The user goes to Google to get to a specific website. Instead of using the address bar, they run a Google search then click on the website link that appears in the search results. It’s possible that these users know where they want to go but have forgotten the exact URL for the page.
  • Informational: The user has a specific question or just wants to know more about a topic. The intention here is to become more knowledgeable or to get the correct answer for their question. 
  • Commercial: The user might not know what they want at the moment, so they’re just looking around for options. They may or may not make a purchase right away.
  • Transactional: The user is ready and willing to make a purchase and is using Google to find the exact product they want.
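The four categories above can be approximated with a keyword heuristic. The cue words below are hypothetical examples chosen for illustration, and a toy rule-based classifier like this is nowhere near what trained intent models do, but it makes the buckets concrete:

```python
def classify_intent(query):
    """Roughly bucket a query into one of the four intent categories using cue words.

    Toy heuristic for illustration only; production systems use trained models.
    """
    q = query.lower()
    if any(cue in q for cue in ("buy", "order", "coupon", "price")):
        return "transactional"   # ready to purchase
    if any(cue in q for cue in ("best", "review", "vs", "compare")):
        return "commercial"      # weighing options
    if any(cue in q for cue in ("login", "homepage", ".com")):
        return "navigational"    # trying to reach a specific site
    return "informational"       # questions and everything else

print(classify_intent("buy running shoes online"))  # transactional
print(classify_intent("how does tf-idf work"))      # informational
```

Mapping your target keywords into these buckets, even roughly, tells you whether a page should answer a question, compare options, or close a sale.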

Unlike old search algorithms, the new Google algorithm captures user intent better because it considers the whole context of the search terms, which may include prepositions such as “of”, “in”, “for”, and “to”, or interrogative words such as “when”, “where”, “what”, “why”, and “how”. Your SEO strategy should produce content that:

  • Answers a user’s question or addresses a need right away
  • Provides value to the reader
  • Is comprehensive and focused 

You might need to conduct more research about ranking sites for your keyword and check out what kind of content gets into the top results. It’s also a good idea to look at the related searches that Google suggests at the bottom of the results page. These will give you a better idea of user intent and help you draw an SEO strategy that addresses these needs.

Term frequency-inverse document frequency

You might not have heard of the term “Term Frequency-Inverse Document Frequency” (TF-IDF) before, but you’ll be hearing more about it now that Google is starting to use it to determine relevant search results. TF-IDF rises according to the frequency of a search term in a document but decreases by the number of documents that also have it. This means that very common words, such as articles and interrogative words, rank very low. 

TF-IDF is calculated by multiplying the following metrics:

  • Term frequency: This may be a raw count of instances of a keyword, the raw count adjusted for document length, or the raw count divided by the count of the most frequent word in the document.
  • Inverse document frequency: This is calculated by dividing the total number of documents by the number of documents that contain the keyword, then taking the logarithm of the result. If a word appears in nearly every document, its IDF approaches 0; the rarer the word, the higher its IDF.

When we multiply the metrics above, we get the TF-IDF score of a keyword in a document. The higher the TF-IDF score, the more relevant the keyword is for that specific page. As an end-user, you may use TF-IDF to extract the most relevant keywords for a piece of content. 
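That multiplication can be sketched in a few lines of Python, using a raw count for term frequency and the standard log-based inverse document frequency (one of several common TF-IDF variants):

```python
import math

def tf_idf(term, document, corpus):
    """Score a term's relevance to one document relative to a corpus.

    Each document is a list of lowercase tokens.
    """
    tf = document.count(term)                     # raw term frequency
    docs_with_term = sum(1 for doc in corpus if term in doc)
    if docs_with_term == 0:
        return 0.0
    idf = math.log(len(corpus) / docs_with_term)  # inverse document frequency
    return tf * idf

corpus = [
    ["seo", "content", "strategy"],
    ["seo", "link", "building"],
    ["seo", "tf", "idf", "tf"],
]
print(tf_idf("seo", corpus[2], corpus))      # 0.0: "seo" appears in every document
print(tf_idf("tf", corpus[2], corpus) > 0)   # True: "tf" is rare in the corpus
```

Note how a word that appears in every document scores 0 no matter how often it is repeated, which is exactly why TF-IDF beats simple word counts at surfacing a page's distinctive keywords.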

Google also uses TF-IDF scores in its NLP engine. Since the metric gauges the relevance of a keyword to the rest of the document, it’s more reliable than simple word counts and helps the search engine avoid showing irrelevant or spammy results.

Sentiment importance

Consumer opinions about brands are everywhere on the internet. If you can find a way to aggregate and analyze these sentiments for your brand, you’ll have some powerful data about overall feelings about your business at your fingertips. 

This process is called sentiment analysis, and it uses AI to help you understand the overall emotional tone of the things your customers say about you. It involves three key activities:

  • Knowing where your customers express their opinions about your brand, which might include social media, review sites such as Yelp or the Better Business Bureau, forums, feedback left on your site, and reviews on ecommerce sites such as Amazon.
  • Utilizing AI and NLP to pull data from these sites in massive quantities, instead of gathering a random sample consisting of just a few comments from each platform. This gives you a clearer overall picture of customer sentiment.
  • Analyzing data and assigning positive or negative values to customer sentiments, based on tone and choice of words.
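The third activity, assigning positive or negative values based on word choice, can be illustrated with a toy lexicon-based scorer. The lexicon below is a hypothetical handful of words; real sentiment analysis uses trained NLP models and vastly larger vocabularies:

```python
# Toy lexicon for illustration; real systems learn these weights from data.
SENTIMENT_LEXICON = {
    "great": 1, "love": 1, "helpful": 1, "fast": 1,
    "slow": -1, "broken": -1, "terrible": -1, "refund": -1,
}

def sentiment_score(comment):
    """Sum word values: > 0 reads positive, < 0 negative, 0 neutral."""
    words = comment.lower().split()
    return sum(SENTIMENT_LEXICON.get(word, 0) for word in words)

reviews = ["Great support and fast shipping", "terrible app always broken"]
print([sentiment_score(r) for r in reviews])  # [2, -2]
```

Aggregating scores like these across thousands of comments is what turns scattered opinions into the "overall emotional tone" the article describes.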

Crafting an SEO strategy that places importance on customer sentiment addresses common complaints and pain points. We’ve found that dealing with issues head-on, instead of skirting them or denying them, increases a brand’s credibility and improves its image among consumers.

Salience and category

If you want to better understand how natural language processing works, you may start by getting familiar with the concept of salience. 

In a nutshell, salience is concerned with measuring how much of a piece of content is concerned with a specific topic or entity. Entities are things, people, places, or concepts, which may be represented by nouns or names. Google measures salience as it tries to draw relationships between the different entities present in an article. Think of it as Google asking what the page is all about and whether it is a good source of information about a specific search term.

Let’s use a real-life example. Imagine you run a Google search to learn how to create great Instagram content during the holidays. You click on an article that claims to be a guide to exactly that, but soon discover it contains one short paragraph on the topic and ten paragraphs about new Instagram features. 

While the article itself mentions both Instagram and the holidays, it isn’t very relevant to the intent of the search, which is to learn how to document the holidays on Instagram. These are the types of search results Google wanted to avoid when it was rolling out BERT. Instead of trying to game the system to get your content to the top of the search results, you need to consider salience as you produce your online content. 
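A very rough proxy for salience is the share of a document that actually discusses an entity, for instance the fraction of sentences mentioning it. Google's real salience model is far more sophisticated, but this hypothetical sketch captures the intuition from the Instagram example above:

```python
import re

def salience_proxy(text, entity):
    """Crude salience estimate: the share of sentences mentioning the entity.

    Illustrates the idea that sustained coverage, not a single mention, matters.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    mentions = sum(1 for s in sentences if entity.lower() in s.lower())
    return mentions / len(sentences)

article = (
    "Instagram added new filters. The filters are fun. "
    "Holiday posts get engagement. Instagram Stories grew again."
)
print(salience_proxy(article, "instagram"))  # 0.5: 2 of 4 sentences
print(salience_proxy(article, "holiday"))    # 0.25: 1 of 4 sentences
```

By this measure, the hypothetical "holiday Instagram guide" with one on-topic paragraph out of eleven would score low for the holiday entity, matching what the reader experienced.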

Five tools that can help you develop SEO-friendly content

Given all the changes that Google has made to its search algorithm, how will you ensure that your content remains SEO-friendly? We’ve gathered five of the most useful tools that will help you create content that ranks high and satisfies user intent.

1. Frase

Frase (frase.io) claims to help SEO specialists create content that is aligned with user intent easily. It streamlines the SEO and content creation processes by offering a comprehensive solution that combines keyword research, content research, content briefs, content creation, and optimization. 


Frase Content, its content creation platform, suggests useful topics, statistics, and news based on the keywords you enter. If you’re working with a team, the Content Briefs feature tells your writers precisely what you need them to produce, reducing the need for revisions and freeing up their time for more projects. 

2. Writer

Writer (writer.com) realizes that we all write for different reasons, and when you sign up, it asks you a few questions about what you intend to use it for. For example, you might be interested in improving your own work, creating a style guide, promoting inclusive language, or unifying your brand voice. 


Writer’s text editor has a built-in grammar checker and gives you useful real-time suggestions on tone, style, and inclusiveness. Writer also offers a reporting tool that tracks your writers’ progress over a given period on metrics such as spelling, inclusivity, and writing style.

3. SurferSEO

Surfer (surferseo.com) makes heavy use of data to help you create content that ranks. It analyzes over 500 ranking factors such as text length, responsive web design, keyword density, and referring domains and points out common factors from top pages to give you a better idea of what works for a specific keyword. 


You can see Surfer’s analysis at work when you use its web-based text editor. You will see a dashboard that tracks what the app calls the “content score”. It also gives you useful keyword suggestions.

4. Alli AI

Alli AI (alliAI.com) offers you a quick, painless way to perform SEO on existing content. All you need to do is add a single code snippet to your site, review Alli’s code and recommendations, and approve the changes; Alli then implements them in minutes.


Alli also identifies the easiest links to build, and if you prefer to do things manually, the tool shows you link building and outreach opportunities. If you’re struggling to keep up with all of Google’s algorithm changes, Alli claims it can automatically adjust your site’s SEO strategy.

5. Can I Rank?

Can I Rank (canirank.com) compares your site content to other sites in its niche and gives you useful suggestions for growing your site and improving your search rankings. Its user interface is easy to understand and the suggestions are presented as tasks, including the estimated amount of time you will need to spend on them. 


What we like about Can I Rank? is that everything is in plain English, from the menu to the suggestions it gives you. This makes it friendly to those who aren’t technical experts. It also presents data in graph form, which makes it easier to justify SEO-related decisions.

Bottom line

Google changes its search algorithms quite a bit, and getting your page to rank is a constant challenge. Because its latest update, BERT, is heavily influenced by AI and NLP, it makes sense to use SEO tools based on the same technologies.

These tools – such as Frase, Writer, SurferSEO, AlliAI, and Can I Rank? – help you create content that ranks. Some of them check for grammar and SEO usability in real-time, while others crawl through your site and your competitors’ sites and come up with content suggestions. Trying out these tools is the only way for you to know which one(s) work best for you. Stick with it, and you’ll stay ahead of the game and create content that performs well for years to come!

May Habib is Co-founder and CEO at Writer.com.

The post How NLP and AI are revolutionizing SEO-friendly content [Five tools to help you] appeared first on Search Engine Watch.

Reblogged 1 year ago from www.searchenginewatch.com

Half of U.S. adults don't know that Facebook does not do original news reporting

No, Facebook does not do its own original news reporting. 

Social media is increasingly a primary source of news for U.S. adults. According to a new survey from the Pew Research Center, however, almost half of U.S. adults don’t realize that Facebook is merely disseminating news — not reporting it. 

That’s right, a large portion of the adults in the United States either actively believe that Facebook — the company itself — reports original news stories, or aren’t sure whether it does or does not. So finds the online survey of 2,021 U.S. adults, released Tuesday, which details how unfamiliar most Americans actually are with the media landscape.


Reblogged 1 year ago from feeds.mashable.com

This was the year TV figured out technology. Finally.

If it weren’t for technology, 2020 would truly have been the end of the world.

We’ve had plagues before. We’ve had calls for social justice and government reform. We’ve had elections, overthrown tyrants. But we’ve never had it all at once, with a population of over seven billion and endless news and communication at our fingertips.

TV and movies have no choice now but to incorporate technology and social media into characters’ lives and worlds, but even decades into the 21st-century this can sometimes feel forced, stilted, or inauthentic. You get a sense that the people making these stories aren’t necessarily engaging with whatever device or platform they’ve written into it, which can alienate the viewer or create something entirely unbelievable.


Reblogged 1 year ago from feeds.mashable.com

Biased language models can result from internet training data

The controversy around AI researcher Timnit Gebru’s exit from Google, and what biased language models may mean for the search industry.

Please visit Search Engine Land for the full article.

Reblogged 1 year ago from feeds.searchengineland.com