SMX Overtime: The new realities of local search

SEO expert Damian Rollison offers insights into the sites that matter for local search and ranking besides Google My Business, and why you should put some time into managing all of them.

Please visit Search Engine Land for the full article.

Reblogged 2 days ago from feeds.searchengineland.com

Google fined another $1.7 billion by EU for ‘abusive’ AdSense publisher contracts

Google has now been fined a cumulative total of roughly $9.3 billion and there’s one more potential case lurking.

Please visit Search Engine Land for the full article.

Reblogged 2 days ago from feeds.searchengineland.com

Three Reasons Most Content Marketing Fails, and What to Do About Them

Instead of flooding the Internet with content that’s merely meh, avoid these three content marketing mistakes to ensure that you are strategically producing content that breaks through and wows your audience. Read the full article at MarketingProfs

Reblogged 2 days ago from www.marketingprofs.com

Google Search Console adds new actions to sitemap report

Google is proving that it is continuing to port old features from the old Search Console to the new one.

Please visit Search Engine Land for the full article.

Reblogged 3 days ago from feeds.searchengineland.com

I Used The Web For A Day On Internet Explorer 8

Chris Ashton

2019-03-19T12:00:08+01:00
2019-03-20T14:35:19+00:00

This article is part of a series in which I attempt to use the web under various constraints, representing a given demographic of user. I hope to raise the profile of difficulties faced by real people, which are avoidable if we design and develop in a way that is sympathetic to their needs.

Last time, I navigated the web for a day using a screen reader. This time, I spent the day using Internet Explorer 8, which was released ten years ago today, on March 19th, 2009.

Who In The World Uses IE8?

Before we start, a disclaimer: I am not about to tell you that you need to start supporting IE8.

There’s every reason not to support IE8. Microsoft officially stopped supporting IE8, IE9 and IE10 over three years ago, and Microsoft executives are even telling you to stop using Internet Explorer 11.

But as much as we developers hope for it to go away, it just. Won’t. Die. IE8 continues to show up in browser stats, especially outside of the bubble of the Western world.

Browser stats have to be taken with a pinch of salt, but current estimates for IE8 usage worldwide are around 0.3% to 0.4% of the desktop market share. The lower end of the estimate comes from W3Counter:

From a peak of almost 30% at the end of 2010, W3Counter now believes IE8 accounts for 0.3% of global usage. (Large preview)

The higher estimate comes from StatCounter (the same data feed used by the “Can I use” usage table). It estimates global IE8 desktop browser proportion to be around 0.37%.


Graph of IE8 usage vs other browsers
Worldwide usage of IE8 is at 0.37% according to StatCounter. (Large preview)

I suspected we might see higher IE8 usage in certain geographical regions, so I drilled into the data by continent.

IE8 Usage By Region

Here is the per-continent IE8 desktop proportion (data from February 2018 to January 2019):

1. Oceania 0.09%
2. Europe 0.25%
3. South America 0.30%
4. North America 0.35%
5. Africa 0.48%
6. Asia 0.50%

Someone in Asia is more than five times as likely to be using IE8 as someone in Oceania.

I looked more closely into the Asian stats, noting the proportion of IE8 usage for each country. There’s a very clear top six countries for IE8 usage, after which the figures drop down to be comparable with the world average:

1. Iran 3.99%
2. China 1.99%
3. North Korea 1.38%
4. Turkmenistan 1.31%
5. Afghanistan 1.27%
6. Cambodia 1.05%
7. Yemen 0.63%
8. Taiwan 0.62%
9. Pakistan 0.57%
10. Bangladesh 0.54%

This data is summarized in the map below:


Graph showing IE8 breakdown in Asia
Iran, Turkmenistan and Afghanistan in the Middle East, and China, North Korea & Cambodia in the Far East stand out for their IE8 usage. (Large preview)

Incredibly, IE8 makes up around 4% of desktop users in Iran — forty times the proportion of IE8 users in Oceania.

Next, I looked at the country stats for Africa, as it had around the same overall IE8 usage as Asia. There was a clear winner (Eritrea), followed by a number of countries above or around the 1% usage mark:

1. Eritrea 3.24%
2. Botswana 1.37%
3. Sudan & South Sudan 1.33%
4. Niger 1.29%
5. Mozambique 1.19%
6. Mauritania 1.18%
7. Guinea 1.12%
8. Democratic Republic of the Congo 1.07%
9. Zambia 0.94%

This is summarized in the map below:


Graph showing IE8 breakdown in Africa
Eritrea stands out for its IE8 usage (3.24%). A number of other countries also have >1% usage. (Large preview)

Whereas the countries in Asia that have higher-than-normal IE8 usage are roughly batched together geographically, there doesn’t appear to be a pattern in Africa. The only pattern I can see (unless it’s a coincidence) is that a number of the countries with the highest IE8 usage famously censor internet access, and therefore probably don’t encourage or allow updating to more secure browsers.

If your site is aimed at a purely Western audience, you’re unlikely to care much about IE8. If, however, you have a burgeoning Asian or African market — and particularly if you care about users in China, Iran or Eritrea — you might very well care about your website’s IE8 experience. Yes — even in 2019!

Who’s Still Using IE?

So, who are these people? Do they really walk among us?!

Whoever they are, you can bet they’re not using an old browser just to annoy you. Nobody deliberately chooses a worse browsing experience.

Someone might be using an old browser for any of the following reasons:

  • Lack of awareness
    They simply aren’t aware that they’re using outdated technology.
  • Lack of education
    They don’t know the upgrade options and alternative browsers open to them.
  • Lack of planning
    Dismissing upgrade prompts because they’re busy, but not having the foresight to upgrade during quieter periods.
  • Aversion to change
    The last time they upgraded their software, they had to learn a new UI. “If it ain’t broke, don’t fix it.”
  • Aversion to risk
    The last time they upgraded, their machine slowed to a crawl, or they lost their favorite feature.
  • Software limitation
    Their OS is too old to let them upgrade, or their admin privileges may be locked down.
  • Hardware limitation
    Newer browsers are generally more demanding of your hard disk space, memory and CPU.
  • Network limitation
A capped data allowance or slow connection means they don’t want to download 75MB of software.
  • Legal limitation
    They might be on a corporate machine that only condones the use of one specific browser.

Is it really such a surprise that there are still people around the world who are clinging to IE8?

I decided to put myself in the shoes of one of these anonymous souls, and browse the web for a day using IE8. You can play along at home! Download an “IE8 on Windows 7” Virtual Machine from the Microsoft website, then run it in a virtualizer like VirtualBox.

IE8 VM: Off To A Bad Start

I booted up my IE8 VM, clicked on the Internet Explorer program in anticipation, and this is what I saw:


Screenshot of default homepage of IE8 not loading
The first thing I saw was a 404. Great. (Large preview)

Hmm, okay. Looks like the default web page pulled up by IE8 no longer exists. Well, that figures. Microsoft has officially stopped supporting IE8, so why should it make sure the IE8 landing page still works?

I decided to switch to the most widely used site in the world.

Google


Screenshot of Google.com
The Google homepage renders fine in IE8. (Large preview)

It’s a simple site, therefore difficult to get wrong — but to be fair, it’s looking great! I tried searching for something:


Screenshot of Google search results for Impractical Jokers
Those who have read my previous articles may notice a recurring theme here. (Large preview)

The search worked fine, though the layout looks a bit different to what I’m used to. Then I remembered — I’d seen the same search result layout when I used the Internet for a day with JavaScript turned off.

For reference, here is how the search results look in a modern browser with JavaScript enabled:


Screenshot of Google Chrome search results for Impractical Jokers
Cleaner layout, extra images and meta information, Netflix/Twitter integration. (Large preview)

So, it looks like IE8 gets the no-JS version of Google search. I don’t think this was necessarily a deliberate design decision — it could just be that the JavaScript errored out:


Screenshot of Google search error “Object doesn’t support this property or method”
The page tried and failed to run JavaScript. (Large preview)

Still, the end result is fine by me — I got my search results, which is all I wanted.

I clicked through to watch a YouTube video.

YouTube


Screenshot of buggy YouTube video page
Funky logo, no images for related videos, and unsurprisingly, no video. (Large preview)

There’s quite a lot broken about this page. All to do with little quirks in IE.

The logo, for instance, is zoomed in and cropped. This is down to IE8 not supporting SVG, and what we’re actually seeing is the fallback option provided by YouTube. They’ve applied a background-image CSS property so that in the event of no SVG support, you’ll get an attempt at displaying the logo. Only they seem to have not set the background-size properly, so it’s a little too far zoomed in.


Screenshot of YouTube logo in IE8 and Developer Tools inspecting it
YouTube set a background-image on the logo span, which pulls in a sprite. (Large preview)
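
A sketch of the kind of fallback in play (the class name and asset values here are hypothetical, not YouTube’s actual CSS). Note that background-size is itself a CSS3 property which IE8 doesn’t support, so a legacy fallback really needs a pre-scaled image:

.logo {
  /* PNG sprite fallback for browsers with no SVG support */
  background-image: url('logo-sprite.png');
  background-position: 0 -48px;  /* pick the right frame of the sprite */
  /* background-size is CSS3 (IE9+); IE8 silently ignores it, so the
     sprite renders at its natural size, hence the zoomed-in logo */
  background-size: 120px 96px;
}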

For reference, here is the same page in Chrome (see how Chrome renders an SVG instead):


Screenshot of Chrome DevTools inspecting YouTube logo
(Large preview)

And what about that Autoplay toggle? It’s rendered like a weird looking checkbox:


Screenshot of Autoplay toggle
Looks like IE8 defaults to a checkbox under the hood. (Large preview)

This appears to be down to use of a custom element (a paper-toggle-button, which is a Material Design element), which IE doesn’t understand:


Screenshot of Autoplay toggle markup
paper-toggle-button is a custom element. (The screenshot is from Chrome DevTools, alongside how the Autoplay toggle SHOULD render.) (Large preview)

I’m not surprised this hasn’t rendered properly; IE8 doesn’t even cope with the basic semantic markup we use these days. Try using an <aside> or <main> and it will basically render them as divs, ignoring any styling you apply to them.

To enable HTML5 markup, you have to explicitly tell the browser these elements exist. They can then be styled as normal:

<!--[if lt IE 9]>
   <script>
      document.createElement('header');
      document.createElement('nav');
      document.createElement('section');
      document.createElement('article');
      document.createElement('aside');
      document.createElement('footer');
   </script>
<![endif]-->

That is wrapped in an IE conditional, by the way. <!--[if lt IE 9]> is an HTML comment to most browsers, and therefore gets skipped, but in IE it is a conditional which only passes “if less than IE 9”, in which case the DOM nodes within it are rendered and executed.

So, the video page was a fail. Visiting YouTube.com directly didn’t fare much better:


Screenshot of IE8 YouTube homepage: “Your web browser is no longer supported”
At least I had a visible error message this time! (Large preview)

Undeterred, I ignored the warning and tried searching for a video within YouTube’s search bar.


Screenshot of Google “Sorry, your computer may be sending automated queries. We can't process your request”
Computer says no. (Large preview)

IE8 traffic is clearly suspicious enough that YouTube didn’t trust that I’m a real user, and decided not to process my search request!

Signing Up To Gmail

If I’m going to spend the day on IE8, I’m going to need an email address. So I went about trying to set up a new one.

First of all, I tried Gmail.


Screenshot of Gmail homepage
The text isn’t going to pass color contrast standards! (Large preview)

There’s something a bit off about the image and text here. I think it’s down to the fact that IE8 doesn’t support media queries — so it’s trying to show me a mobile image on desktop.

One way you can get around this is to use Sass to generate two stylesheets; one for modern browsers, and one for legacy IE. You can get IE-friendly, mobile-first CSS (see tutorial by Jake Archibald) by using a mixin for your media queries. The mixin “flattens” your legacy IE CSS to treat IE as though it’s always a specific predefined width (e.g. 65em), giving only the relevant CSS for that width. In this case, I’d have seen the correct background-image for my assumed screen size and had a better experience.
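
As a rough sketch of that mixin approach (the names here are illustrative, not Jake’s exact code), you compile the same source twice, once per audience:

// Compile once with $fix-mqs unset (modern build) and once with
// $fix-mqs: 65em to produce the flattened legacy-IE stylesheet.
$fix-mqs: false !default;

@mixin respond-min($width) {
  @if $fix-mqs {
    // Legacy build: IE8 gets no media queries, so only emit the rules
    // that would apply at the assumed fixed viewport width.
    @if $fix-mqs >= $width {
      @content;
    }
  } @else {
    // Modern build: wrap the rules in a real media query.
    @media screen and (min-width: $width) {
      @content;
    }
  }
}

// Usage: the legacy build gets the desktop image unconditionally.
.hero {
  background-image: url('hero-small.jpg');
  @include respond-min(40em) {
    background-image: url('hero-large.jpg');
  }
}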

Anyway, it didn’t stop me clicking ‘Create an Account’. There were a few differences between how it looked in IE8 and a modern browser:


Screenshot comparing Gmail signup screen on Chrome and IE8
IE8 is missing the tight layout, and there’s an overlap of text, but otherwise still works. (Large preview)

Whilst promising at first sight, the form was quite buggy to fill in. The ‘label’ doesn’t get out of the way when you start filling in the fields, so your input text is obscured:


Screenshot of buggy labels
The labels overlapped the text I was writing. (Large preview)

The markup for this label is actually a <div>, and some clever JS moves the text out of the way when the input is focussed. The JS doesn’t succeed on IE8, so the text stays stubbornly in place.


Screenshot of gmail form markup
The ‘label’ is a div which is overlaid on form input using CSS. (Large preview)

After filling in all my details, I hit “Next”, and waited. Nothing happened.

Then I noticed the little yellow warning symbol at the bottom left of my IE window. I clicked it and saw that it was complaining about a JS error:


Screenshot of Gmail error
I got reasonably far, but then the Next button didn’t work. (Large preview)

I gave up on Gmail and turned to MSN.

Signing Up To Hotmail

I was beginning to worry that email might be off-limits for a ten-year-old browser. But when I went to Hotmail, the signup form looked OK — so far so good:


Screenshot of signup page for MSN
The signup page looked fine. Guess we’d have more luck with a Microsoft product! (Large preview)

Then I noticed a CAPTCHA. I thought, “There’s no way I’ll get through this…”


Screenshot of captcha verification of signup state
I could see and complete the CAPTCHA. (Large preview)

To my surprise, the CAPTCHA worked!

The only quirky thing on the form was some slightly buggy label positioning, but the signup was otherwise seamless:


Screenshot of first name label, surname label, and then two empty input fields, no clear visual hierarchy
The label positions were a bit off, but I guessed my last name followed by my first name would be fine. (Large preview)

Does that screenshot look OK to you? Can you spot the deliberate mistake?

The leftmost input should have been my first name, not my surname. When I came back and checked this page later, I clicked on the “First name” label and it applied focus to the leftmost input, which is how I could have checked I was filling in the correct box in the first place. This shows the importance of accessible markup — even without CSS and visual association, I could determine exactly which input box applied to which label (albeit the second time around!).

Anyhow, I was able to complete the sign-up process and was redirected to the MSN homepage, which rendered great.


Screenshot of MSN homepage looking good
If any site is going to work in IE8, it will be the Microsoft homepage. (Large preview)

I could even read articles and forget that I was using IE8:


Screenshot of MSN article
The article works fine. No dodgy sidebars or borked images. (Large preview)

With my email registered, I was ready to go and check out the rest of the Internet!

Facebook

I visited the Facebook site and was immediately redirected to the mobile site:


Screenshot of Facebook mobile
“You are using a browser that is not supported by Facebook, so we have redirected you to a simpler version to give you the best experience.” (Large preview)

This is a clever fallback tactic: Facebook needs to support a large global audience on low-end mobile devices anyway, so it has to provide a basic version of Facebook regardless. Why not offer that same baseline of experience to older desktop browsers?

I tried signing up and was able to make an account. Great! But when I logged into that account, I was treated with suspicion — just like when I searched for things on YouTube — and was faced with a CAPTCHA.

Only this time, it wasn’t so easy.


Screenshot of CAPTCHA message, but CAPTCHA image failing to load
“Please enter the code below”. Yeah, right. (Large preview)

I tried requesting new codes and refreshing the page several times, but the CAPTCHA image never loaded, so I was effectively locked out of my account.

Oh well. Let’s try some more social media.

Twitter

I visited the Twitter site and had exactly the same mobile redirect experience.


Screenshot of mobile view for Twitter
Twitter treats IE8 as a mobile browser, like Facebook does. (Large preview)

But I couldn’t even get as far as registering an account this time:


Screenshot of Twitter registration screen
Your browser is no longer supported. To sign up, please update it. You can still log in to your existing user accounts. (Large preview)

Oddly, Twitter is happy for you to log in, but not for you to register in the first place. I’m not sure why — perhaps it has a similar CAPTCHA scenario on its sign-up pages which won’t work on older browsers. Either way, I’m not going to be able to make a new account.

I felt awkward about logging in with my existing Twitter account. Call me paranoid, but vulnerabilities like the CFR Watering Hole Attack of 2013 — where the mere act of visiting a specific URL in IE8 would install malware to your machine — had me nervous that I might compromise my account.

But, in the interests of education, I persevered (with a temporary new password):


Screenshot of Twitter feed
Successfully logged in. I can see tweets! (Large preview)

I could also tweet, albeit using the very basic <textarea>:


Screenshot of me writing a tweet, lamenting about the lack of emojis in the IE8 twitter view
You only miss them when they’re gone. (Large preview)

In conclusion, Twitter is basically fine in IE8 — as long as you have an account already!

I’m done with social media for the day. Let’s go check out some news.

BBC News


Screenshot of BBC homepage with “Security Warning” browser popup
The BBC appears to be loading a mixture of HTTPS and HTTP assets. (Large preview)

The news homepage looks very basic and clunky but basically works — albeit with mixed content security warnings.

Take a look at the logo. As we’ve already seen on YouTube, IE8 doesn’t support SVG, so we require a PNG fallback.

The BBC uses the <image> fallback technique to render a PNG on IE:


Screenshot of IE8 BBC News logo with devtools open
IE8 finds the base64 image inside the SVG and renders it. (Large preview)

…and to ignore the PNG when SVG is available:


Screenshot of Chrome BBC News logo with devtools open
The image part is ignored and the svg is rendered nicely. (Large preview)

This technique exploits the fact that older browsers used to obey both <image> and <img> tags, and so will ignore the unknown <svg> tag and render the fallback, whereas modern browsers ignore rendering <image> when inside an SVG. Chris Coyier explains the technique in more detail.
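
A simplified sketch of the technique (file names hypothetical):

<svg width="100" height="40">
  <!-- Modern browsers understand <svg>, render the vector referenced
       by xlink:href, and never render the nested fallback -->
  <!-- Legacy browsers don't know <svg>, treat <image> as a synonym
       for <img>, and so load the PNG in the src attribute -->
  <image xlink:href="logo.svg" src="logo-fallback.png" width="100" height="40"></image>
</svg>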

I tried viewing an article:


Screenshot of a BBC article, which displays fine but has a warning message at the top
This site is optimised for modern browsers, and does not fully support your browser. (Large preview)

It’s readable. I can see the headline, the navigation, the featured image. But the rest of the article images are missing:


Screenshot of BBC article with references to images that are not displaying
(Large preview)

This is to be expected, and is due to the BBC lazy-loading images. IE8 not being a ‘supported browser’ means it does not get the JavaScript that enables lazy-loading, thus the images never load at all.
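
A typical lazy-loading setup looks something like the sketch below (class and attribute names are illustrative, not the BBC’s actual code). Since the enhancing script is never served to IE8, the placeholder is all it ever sees; a <noscript> fallback per image would have mitigated this:

<img class="lazy" src="placeholder.gif" data-src="photo-full.jpg" alt="Article photo">

<script>
  // Served only to 'supported' browsers: promote the real URL into src.
  // (A real implementation would defer this until the image nears the
  // viewport, rather than swapping everything immediately.)
  var lazyImages = document.querySelectorAll('img.lazy');
  for (var i = 0; i < lazyImages.length; i++) {
    lazyImages[i].src = lazyImages[i].getAttribute('data-src');
  }
</script>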

Out of interest, I thought I’d see what happens if I try to access the BBC iPlayer:


Screenshot of BBC iPlayer - just a black screen
…not a lot. (Large preview)

And that got me wondering about another streaming service.

Netflix

I was half expecting an empty white page when I loaded up Netflix in IE8. I was pleasantly surprised when I actually saw a decent landing page:


Screenshot of Netflix homepage
“Join free for a month” call to action, over a composite image of popular titles. (Large preview)

I compared this with the modern Chrome version:


Screenshot of Netflix homepage
“Watch free for 30 days” call to action, over a composite image of popular titles. (Large preview)

There’s a slightly different call to action (button text) — probably down to multivariate testing rather than what browser I’m on.

What’s different about the render is the centralized text and the semi-transparent black overlay.

The lack of centralized text is because of Netflix’s use of Flexbox for aligning items:


Netflix Flexbox aligning items
Netflix uses the Flexbox property justify-content: center to align its text. (Large preview)

A text-align: center on this class would probably fix the centering for IE8 (and indeed all old browsers). For maximum browser support, you can follow a CSS fallbacks approach with old ‘safe’ CSS, and then tighten up layouts with more modern CSS for browsers that support it.
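
As a sketch (the class name is hypothetical, not Netflix’s actual code), the old, safe declaration comes first, and browsers that understand Flexbox simply override it; old browsers ignore the properties they don’t recognize:

.title-wrapper {
  text-align: center;       /* understood by every browser, IE8 included */
  display: flex;            /* unknown to old browsers, safely ignored */
  justify-content: center;  /* modern centering for flex-aware browsers */
}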

The lack of background is due to use of rgba(), which is not supported in IE8 and below.


A background of rgba(0,0,0,.5) is meaningless to older browsers
A background of rgba(0,0,0,.5) is meaningless to older browsers. (Large preview)

Traditionally it’s good to provide CSS fallbacks like so, which show a black background for old browsers but show semi-transparent background for modern browsers:

background: rgb(0, 0, 0); /* IE8 fallback */
background: rgba(0, 0, 0, 0.8);

This is a very IE-specific fix, however; basically every other browser supports rgba. Moreover, in this case, you’d lose the fancy Netflix tiles altogether, so it would be better to have no background filter at all! The surefire way of ensuring cross-browser support would be to bake the filter into the background image itself. Simple but effective.

Anyway, so far, so good — IE8 actually rendered the homepage pretty well! Am I actually going to be watching Breaking Bad on IE8 today?

My already tentative optimism was immediately shot down when I viewed the sign-in page:


Screenshot comparing sign-in page for Netflix on Chrome and IE8. IE version colors are all over the place, and it's hard to read the text
Can you guess which side is IE8 and which is Chrome? (Large preview)

Still, I was able to sign in, and saw a pared-back dashboard (no fancy auto-expanding trailers):


Screenshot of Netflix dashboard for logged in user
Each programme had a simple hover state with play icon and title. (Large preview)

I clicked on a programme with vague anticipation, but of course, only saw a black screen.

Amazon

Ok, social media and video are out. All that’s left is to go shopping.

I checked out Amazon, and was blown away — it’s almost indistinguishable from the experience you’d get inside a modern browser:


Screenshot of Amazon homepage
The Amazon homepage looks almost as good on IE8 as it does on any other browser. (Large preview)

I’ve been drawn in by a good homepage before. So let’s click on a product page and see if this is just a fluke.


Screenshot of Amazon product page for Ferrero Rocher chocolates
The product page also looks fantastic (and makes me hungry). (Large preview)

No! The product page looked good too!

Amazon wasn’t the only site that surprised me in its backwards compatibility. Wikipedia looked great, as did the Gov.UK government website. It’s not easy to have a site that doesn’t look like an utter car crash in IE8. Most of my experiences were decidedly less polished…!


Screenshot of sky.com on IE8, layout is all over the place and text is hard to read when placed over images
It is difficult to read or navigate sky.com on IE8. (Large preview)

But a deprecated warning notice or funky layout wasn’t the worst thing I saw today.

Utterly Broken Sites

Some sites were so broken that I couldn’t even connect to them!


Screenshot: Internet Explorer cannot display the webpage
No dice when accessing GitHub. (Large preview)

I wondered if it might be a temporary VM network issue, but it happened every time I refreshed the page, even when coming back to the same site later in the day.

This happened on a few different sites throughout the day, and I eventually concluded that this never affected sites on HTTP — only on HTTPS (but not all HTTPS sites). So, what was the problem?

Using Wireshark to analyze the network traffic, I tried connecting to GitHub again. We can see that the connection failed to establish because of a fatal error, “Description: Protocol Version.”


Screenshot of Wireshark output
TLSv1 Alert (Level: Fatal, Description: Protocol Version) (Large preview)

Looking at the default settings in IE8, only TLS 1.0 is enabled — but GitHub dropped support for TLSv1 and TLSv1.1 in February 2018.


Screenshot of settings panel
Default advanced settings for IE8: TLS 1.0 is checked, TLS 1.1 and 1.2 are unchecked. (Large preview)

I checked the boxes for TLS 1.1 and TLS 1.2, reloaded the page and — voilà! — I was able to view GitHub!


Screenshot of GitHub homepage, with a “no longer supports Internet Explorer” message on it
It doesn’t look pretty, but at least I can now see it! (Large preview)

Many thanks to my extremely talented friend Aidan Fewster for helping me debug that issue.

I’m all for backwards compatibility, but this presents an interesting dilemma. According to the PCI Security Standards Council, TLS 1.0 is insecure and should no longer be used. But by forcing TLS 1.1 or higher, some users will invariably be locked out (and not all are likely to be tech-savvy enough to enable TLS 1.2 in their advanced settings).

By allowing older, insecure standards and enabling users to continue to connect to our sites, we’re not helping them — we’re hurting them, by not giving them a reason to move to safer technologies. So how far should you go in supporting older browsers?

How Can I Begin To Support Older Browsers?

When some people think of “supporting older browsers”, they might be thinking of those proprietary old hacks for IE, like that time the BBC had to do some incredibly gnarly things to support iframed content in IE7.

Or they may be thinking of making things work in the Internet Explorer “quirks mode”; an IE-specific mode of operation which renders things very differently to the standards.

But “supporting older browsers” is very different to “hacking it for IE”. I don’t advocate the latter, but we should pragmatically try to do the former. The mantra I try to live by as a web developer is this:

“Optimize for the majority, make an effort for the minority, and never sacrifice security.”

I’m going to move away from the world of IE8 now and talk about general, sustainable solutions for legacy browser support.

There are two broad strategies for supporting older browsers, both beginning with P:

  1. Polyfilling
    Strive for feature parity for all by filling in the missing browser functionality.
  2. Progressive Enhancement
    Start from a core experience, then use feature detection to layer on functionality.

These strategies are not mutually exclusive; they can be used in tandem. There are a number of implementation decisions to make in either approach, each with its own nuances, which I’ll cover in more detail below.

Polyfilling

For some websites or web pages, JavaScript is very important for functionality and you simply want to deliver working JavaScript to as many browsers as possible.

There are a number of ways to do this, but first, a history lesson.

A Brief History Of ECMAScript

ECMAScript is a standard, and JavaScript is an implementation of that standard. That means that ES5 is “ECMAScript version 5”, and ES6 is “ECMAScript version 6”. Confusingly, ES2015 is the same as ES6.

ES6 was the popularized name of that version prior to its release, but ES2015 is the official name, and subsequent ECMAScript versions are all associated with their release year.

Note: This is all helpfully explained by Brandon Morelli in a great blog post that explains the full history of JavaScript versions.

At the time of writing, the latest standard is ES2018 (ES9). Most modern browsers support at least ES2015. Almost every browser supports ES5.

Technically IE8 isn’t ES5. It isn’t even ES4 (which doesn’t exist — the project was abandoned). IE8 uses the Microsoft implementation of ECMAScript 3, called JScript. IE8 does have some ES5 support but was released a few months before ES5 standards were published, and so has a mismatch of support.

Transpiling vs Polyfilling

You can write ES5 JavaScript and it will run in almost every ancient browser:

var foo = function () {
  return 'this is ES5!';
};

You can also continue to write all of your JavaScript like that — to enable backwards compatibility forever. But you’d be missing out on new features and syntactic sugar that has become available in the evolving versions of JavaScript, allowing you to write things like:

const foo = () => {
  return 'this is ES6!';
};

Try running that JavaScript in an older browser and it will error. We need to transpile the code into an earlier version of JavaScript that the browser will understand (i.e. convert our ES6 code into ES5, using automated tooling).

Now let’s say our code uses a standard ES5 method, such as Array.indexOf. Most browsers have a native implementation of this and will work fine, but IE8 will break. Remember IE8 was released a few months before ES5 standards were published, and so has a mismatch of support? One example of that is the indexOf function, which has been implemented for String but not for Array.

If we try to run the Array.indexOf method in IE8, it will fail. But if we’re already writing in ES5, what else can we do?

We can polyfill the behavior of the missing method. Developers traditionally polyfill each feature that they need by copying and pasting code, or by pulling in external third-party polyfill libraries. Many JavaScript features have good polyfill implementations on their respective Mozilla MDN page, but it’s worth pointing out that there are multiple ways you can polyfill the same feature.

For example, to ensure you can use the Array.indexOf method in IE8, you would copy and paste a polyfill like this:

if (!Array.prototype.indexOf) {
  Array.prototype.indexOf = (function (Object, max, min) {
    // big chunk of code that replicates the behaviour in JavaScript goes here!
    // for full implementation, visit:
    // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/indexof#Polyfill
  })(Object, Math.max, Math.min);
}

So long as you call the polyfill before you pull in any of your own JS, and provided you don’t use any ES5 JavaScript feature other than Array.indexOf, your page would work in IE8.

Polyfills can be used to plug all sorts of missing functionality. For example, there are polyfills for enabling CSS3 selectors such as :last-child (unsupported in IE8) or the placeholder attribute (unsupported in IE9).

Polyfills vary in size and effectiveness and sometimes have dependencies on external libraries such as jQuery.

You may also hear of “shims” rather than “polyfills”. Don’t get too hung up on the naming — people use the two terms interchangeably. But technically speaking, a shim is code that intercepts an API call and provides a layer of abstraction. A polyfill is a type of shim, in the browser. It specifically uses JavaScript to retrofit new HTML/CSS/JS features in older browsers.

Summary of the “manually importing polyfills” strategy:

  • ✅ Complete control over choice of polyfills;
  • ✅ Suitable for basic websites;
  • ⚠️ Without additional tooling, you’re forced to write in native ES5 JavaScript;
  • ⚠️ Difficult to micromanage all of your polyfills;
  • ⚠️ Out of the box, all your users will get the polyfills, whether they need them or not.

Babel Polyfill

I’ve talked about transpiling ES6 code down to ES5. You do this using a transpiler, the most popular of which is Babel.

Babel is configurable via a .babelrc file in the root of your project. In it, you point to various Babel plugins and presets. There’s typically one for each syntax transform and browser polyfill you’ll need.

Micromanaging these and keeping them in sync with your browser support list can be a pain, so the standard setup nowadays is to delegate that micromanagement to the @babel/preset-env module. With this setup, you simply give Babel a list of browser versions you want to support, and it does the hard work for you:

{
  "presets": [
    [
      "@babel/preset-env",
      {
        "useBuiltIns": "usage",
        "targets": {
          "chrome": "58",
          "ie": "11"
        }
      }
    ]
  ]
}

The useBuiltIns configuration option of @babel/preset-env is where the magic happens, in combination with an import "@babel/polyfill" (another module) in the entry point of your application.

  • When omitted, useBuiltIns does nothing. The entirety of @babel/polyfill is included with your app, which is pretty heavy.
  • When set to "entry", it converts the @babel/polyfill import into multiple, smaller imports, importing the minimum polyfills required to polyfill the targeted browsers you’ve listed in your .babelrc (in this example, Chrome 58 and IE 11).
  • Setting to "usage" takes this one step further by doing code analysis and only importing polyfills for features that are actually being used. It’s classed as “experimental” but errs on the side of “polyfill too much” rather than “too little”. In any case, I don’t see how it could create a bigger bundle than "entry" or false, so it’s a good option to choose (and is the way we’re going at the BBC).

Using Babel, you can transpile and polyfill your JavaScript prior to deploying to production, and target support in a specific minimum baseline of browsers. NB, another popular tool is TypeScript, which has its own transpiler that transpiles to ES3, in theory supporting IE8 out of the box.
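
For example, a minimal tsconfig.json along these lines asks the TypeScript compiler for ES3-compatible output. Note that, as with Babel’s syntax transforms, changing the output target rewrites syntax but doesn’t polyfill missing runtime methods for you:

{
  "compilerOptions": {
    "target": "ES3"
  }
}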

Summary of using @babel/preset-env for polyfilling:

  • ✅ Delegate micromanagement of polyfills to a tool;
  • ✅ Automated tool helps prevent inclusion of polyfills you don’t need;
  • ✅ Scales to larger, complex sites;
  • ⚠️ Out of the box, all your users will get the polyfills, whether they need them or not;
  • ⚠️ Difficult to keep sight of exactly what’s being pulled into your application bundle.

Lazy Loading Polyfills With Webpack And Dynamic Imports

It is possible to leverage the new import() proposal to feature-detect and dynamically download polyfills prior to initializing your application. It looks something like this in practice:

import app from './app.js';

const polyfills = [];

if (!window.fetch) {
  polyfills.push(import(/* webpackChunkName: "polyfill-fetch" */ 'whatwg-fetch'));
}

Promise.all(polyfills)
  .then(app)
  .catch((error) => {
    console.error('Failed fetching polyfills', error);
  });

This example code is shamelessly copied from the very good article, “Lazy Loading Polyfills With Webpack And Dynamic Imports” that delves into the technique in more detail.

Summary:

  • ✅ Doesn’t bloat modern browsers with unnecessary polyfills;
  • ⚠️ Requires manually managing each polyfill.

polyfill.io

polyfill.io is polyfilling as a service, built by the Financial Times. It works by your page making a single script request to polyfill.io, optionally listing the specific features you need to polyfill. Their server then analyzes the user agent string and populates the script accordingly. This saves you from having to manually provide your own polyfill solutions.

Here is the JavaScript that polyfill.io returns for a request made from IE8:


Screenshot of response from polyfill.io service for IE8
Lots of JS code to polyfill standard ES5 methods in IE8. (Large preview)

Here’s the same polyfill.io request, but where the request came from modern Chrome:


Screenshot of response from polyfill.io service for Chrome - no polyfill was required
No JS code, just a JS comment. (Large preview)

All that’s required from your site is a single script call.
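
Something like this (the endpoint and feature list are illustrative; check the polyfill.io documentation for current usage):

<script src="https://polyfill.io/v3/polyfill.min.js?features=Array.prototype.indexOf,fetch"></script>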

Summary:

  • ✅ Ease of inclusion into your web app;
  • ✅ Delegates responsibility of polyfill knowledge to a third party;
  • ⚠️ On the flipside, you’re now reliant on a third-party service;
  • ⚠️ Makes a blocking <script> call, even for modern browsers that don’t need any polyfills.

Progressive Enhancement

Polyfilling is an incredibly useful technique for supporting older browsers, but it can bloat web pages and is limited in scope.

The progressive enhancement technique, on the other hand, is a great way of guaranteeing a basic experience for all browsers, whilst retaining full functionality for your users on modern browsers. It should be achievable on most sites.

The principle is this: start from a baseline of HTML (and, optionally, styling), and “progressively enhance” the page with JavaScript functionality. The benefit is that if the browser is a legacy one, or if the JavaScript is broken at any point in its delivery, your site should still be functional.

The term “progressive enhancement” is often used interchangeably with “unobtrusive JavaScript”. They do mean essentially the same thing, but the latter takes it a little further in that you shouldn’t litter your HTML with lots of attributes, IDs and classes that are only used by your JavaScript.

Cut-The-Mustard

The BBC technique of “cutting the mustard” (CTM) is a tried and tested implementation of progressive enhancement. The principle is that you write a solid baseline experience of HTML, and before downloading any enhancing JavaScript, you check for a minimum level of support. The original implementation checked for the presence of standard HTML5 features:

if ('querySelector' in document
  && 'localStorage' in window
  && 'addEventListener' in window) {
    // Enhance for HTML5 browsers
}

As new features come out and older browsers become increasingly antiquated, our cuts-the-mustard baseline will change. For instance, new JavaScript syntax such as ES6 arrow functions would mean that an inline CTM check containing that syntax fails to even parse in legacy browsers (not even safely executing and failing the CTM check), and so may have unexpected side-effects such as breaking other third-party JavaScript (e.g. Google Analytics).
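
For example, consider this sketch of a CTM check containing untranspiled ES6:

<script>
  // In an ES5-only browser, this entire script block fails at PARSE time,
  // so the feature check never even runs: the arrow function below throws
  // a SyntaxError before any line of this block executes.
  if ('fetch' in window) {
    var enhance = () => console.log('enhanced experience');
    enhance();
  }
</script>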

To avoid even attempting to parse untranspiled, modern JS, we can apply this “modern take” on the CTM technique, taken from @snugug’s blog, in which we take advantage of the fact that older browsers don’t understand the type="module" declaration and will safely skip over it. In contrast, modern browsers will ignore <script nomodule> declarations.

<script type="module" src="./mustard.js"></script>
<script nomodule src="./no-mustard.js"></script>

<!-- Can be done inline too -->

<script type="module">
  import mustard from './mustard.js';
</script>

<script nomodule type="text/javascript">
  console.log('No Mustard!');
</script>

This approach is a good one, provided you’re happy treating ES6 browsers as your new minimum baseline for functionality (~92% of global browsers at the time of writing).

However, just as the world of JavaScript is evolving, so is the world of CSS. Now that we have Grid, Flexbox, CSS variables and the like (each with a varying efficacy of fallback), there’s no telling what combination of CSS support an old browser might have that might lead to a mishmash of “modern” and “legacy” styling, the result of which looks broken. Therefore, sites are increasingly choosing to CTM their styling, so now HTML is the core baseline, and both CSS and JS are treated as enhancements.

The JavaScript-based CTM techniques we’ve seen so far have a couple of downsides if you use the presence of JavaScript to apply CSS in any way:

  1. Inline JavaScript is blocking. Browsers must download, parse and execute your JavaScript before you get any styling. Therefore, users may see a flash of unstyled text.
  2. Some users may have modern browsers, but choose to disable JavaScript. A JavaScript-based CTM prevents them from getting a styled site even when they’re perfectly capable of getting it.

The ‘ultimate’ approach is to use CSS media queries as your cuts-the-mustard litmus test. This “CSSCTM” technique is actively in use on sites such as Springer Nature.

<head>
  <!-- CSS-based cuts-the-mustard -->
  <!-- IMPORTANT: the JS depends on having this rule somewhere in the CSS: `body { clear: both }` -->
  <link rel="stylesheet" href="mq-test.css"
      media="only screen and (min-resolution: 0.1dpcm),
      only screen and (-webkit-min-device-pixel-ratio:0)
      and (min-color-index:0)">
</head>
<body>
  <!-- content here... -->
  <script>
    (function () { // wrap in an IIFE to prevent global scope pollution
      function isSupported () {
        var val = '';
        if (window.getComputedStyle) {
          val = window.getComputedStyle(document.body, null).getPropertyValue('clear');
        } else if (document.body.currentStyle) {
          val = document.body.currentStyle.clear;
        }
        if (val === 'both') { // references the `body { clear: both; }` in the CSS
          return true;
        }
        return false;
      }
      if (isSupported()) {
        // Load or run JavaScript for supported browsers here.
      }
    })();
  </script>
</body>
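
For completeness, here’s a sketch of what the companion stylesheet presumably contains; the only part the inline script actually depends on is the body { clear: both } rule called out in the comment above:

/* mq-test.css: only applied by browsers that pass the media query test */
body {
  clear: both; /* the inline script sniffs for this computed value */
}

/* ...the rest of the site's 'enhanced' styles would follow here... */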

This approach is quite brittle — accidentally overriding the clear property on your body selector would ‘break’ your site — but it does offer the best performance. This particular implementation uses media queries that are only supported in at least IE 9, iOS 7 and Android 4.4, which is quite a sensible modern baseline.

“Cuts the mustard”, in all its various guises, accomplishes two main principles:

  1. Widespread user support;
  2. Efficiently applied dev effort.

It’s simply not possible for sites to accommodate every single browser / operating system / network connection / user configuration combination. Techniques such as cuts-the-mustard help to rationalize browsers into C-grade and A-grade browsers, according to the Graded Browser Support model by Yahoo!.

Cuts-The-Mustard: An Anti-Pattern?

There is an argument that applying a global, binary decision of “core” vs “advanced” is not the best possible experience for our users. It provides sanity to an otherwise daunting technical problem, but what if a browser supports 90% of the features in your global CTM test, and this specific page doesn’t even make use of the 10% of the features it fails on? In this case, the user would get the core experience, since the CTM check would have failed. But we could have given them the full experience.

And what about cases where the given page does make use of a feature the browser doesn’t support? Well, in the move towards componentization, we could have a feature-specific fallback (or error boundary), rather than a page-level fallback.

We do this every day in our web development. Think of pulling in a web font; different browsers have different levels of font support. What do we do? We provide a few font file variations and let the browser decide which to download:

@font-face {
  font-family: FontName;
  src: url('path/filename.eot');
  src: url('path/filename.eot?#iefix') format('embedded-opentype'),
    url('path/filename.woff2') format('woff2'),
    url('path/filename.woff') format('woff'),
    url('path/filename.ttf') format('truetype');
}

We have a similar fallback with HTML5 video. Modern browsers will choose which video format they want to use, whereas legacy browsers that don’t understand what a <video> element is will simply render the fallback text:

<video width="400" controls>
  <source src="mov_bbb.mp4" type="video/mp4">
  <source src="mov_bbb.ogg" type="video/ogg">
  Your browser does not support HTML5 video.
</video>

The nesting approach we saw earlier used by the BBC for PNG fallbacks for SVG is the basis for the <picture> responsive image element. Modern browsers will render the best fitting image based on the media attribute supplied, whereas legacy browsers that don’t understand what a <picture> element is will render the <img> fallback.

<picture>
  <source media="(min-width: 650px)" srcset="img_pink_flowers.jpg">
  <source media="(min-width: 465px)" srcset="img_white_flower.jpg">
  <img src="img_orange_flowers.jpg" alt="Flowers" style="width:auto;">
</picture>

The HTML spec has carefully evolved over the years to provide a basic fallback mechanism for all browsers, whilst allowing features and optimisations for the modern browsers that understand them.

We could apply a similar principle to our JavaScript code. Imagine a Feature like so, where the foo method contains some complex JS:

class Feature {
  browserSupported() {
    return ('querySelector' in document); // internal cuts-the-mustard goes here
  }

  foo() {
    // etc
  }
}

export default new Feature();

Before calling foo, we check if the Feature is supported in this browser by calling its browserSupported method. If it’s not supported, we don’t even attempt to call the code that would otherwise have errored our page.

import Feature from './feature';

if (Feature.browserSupported()) {
  Feature.foo();
}

This technique means we can avoid pulling in polyfills and just go with what’s natively supported by each individual browser, gracefully degrading individual features if unsupported.

Note that in the example above, I’m assuming the code gets transpiled to ES5 so that the syntax is understood by all browsers, but I’m not assuming that any of the code is polyfilled. If we wanted to avoid transpiling the code, we could apply the same principle but using the type="module" take on cuts-the-mustard, but it comes with the caveat that it already has a minimum ES6 browser requirement, so is only likely to start being a good solution in a couple of years:

<script type="module">
  import Feature from './feature.js';
  if (Feature.browserSupported()) {
    Feature.foo();
  }
</script>

We’ve covered HTML, and we’ve covered JavaScript. We can apply localized fallbacks in CSS too; there’s a @supports keyword in CSS, which allows you to conditionally apply CSS based on the presence or absence of support for a CSS feature. However, it is ironically caveated with the fact that it is not universally supported. It just needs careful application; there’s a great Mozilla blog post on how to use feature queries in CSS.
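
Here’s a sketch of such a localized CSS fallback (selectors are hypothetical): the float-based layout applies everywhere, and grid-capable browsers opt in to the enhancement. Conveniently, browsers too old to understand @supports skip the whole block, which is exactly what we want here:

/* Safe, float-based layout that every browser understands */
.gallery-item {
  float: left;
  width: 33.33%;
}

/* Browsers that understand both @supports and grid get the enhancement */
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
  }
  .gallery-item {
    float: none; /* undo the fallback inside the grid */
    width: auto;
  }
}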

In an ideal world, we shouldn’t need a global cuts-the-mustard check. Instead, each individual HTML, JS or CSS feature should be self-contained and have its own error boundaries. In a world of web components, shadow DOM and custom elements, I expect we’ll see more of a shift to this sort of approach. But it does make it much more difficult to predict and to test your site as a whole, and there may be unintended side-effects if, say, the styling of one component affects the layout of another.

Two Main Backwards Compatibility Strategies

A summary of polyfilling as a strategy:

  • ✅ Can deliver client-side JS functionality to most users.
  • ✅ Can be easier to code when delegating the problem of backwards-compatibility to a polyfill.
  • ⚠️ Depending on implementation, could be detrimental to performance for users who don’t need polyfills.
  • ⚠️ Depending on complexity of application and age of browser, may require lots of polyfills, and therefore run very poorly. We risk shipping megabytes of polyfills to the very browsers least prepared to accept it.

A summary of progressive enhancement as a strategy:

  • ✅ Traditional CTM makes it easy to segment your code, and to manually test.
  • ✅ Guaranteed baseline of experience for all users.
  • ⚠️ Might unnecessarily deliver the core experience to users who could handle the advanced experience.
  • ⚠️ Not well suited to sites that require client-side JS for functionality.
  • ⚠️ Sometimes difficult to balance a robust progressive enhancement strategy with a performant first render. There’s a risk of over-prioritizing the ‘core’ experience to the detriment of the 90% of your users who get the ‘full’ experience (e.g. providing small images for noJS and then replacing with high-res images on lazy-load means we’ve wasted a lot of download capacity on assets that are never even viewed).

Conclusion

IE8 was once a cutting edge browser. (No, seriously.) The same could be said for Chrome and Firefox today.

If today’s websites are totally unusable in IE8, the websites of ten years’ time are likely to be about as unusable in today’s modern browsers, despite being built upon the open technologies of HTML, CSS, and JavaScript.

Stop and think about that for a moment. Isn’t it a bit scary? (That said, if you can’t abandon browsers after ten years and after the company who built it has deprecated it, when can you?)

IE8 is today’s scapegoat. Tomorrow it’ll be IE9, next year it’ll be Safari, a year later it might be Chrome. You can swap IE8 out for ‘old browser of choice’. The point is, there will always be some divide between what browsers developers build for, and what browsers people are using. We should stop scoffing at that and start investing in robust, inclusive engineering solutions. The side effects of these strategies tend to pay dividends in terms of accessibility, performance and network resilience, so there’s a bigger picture at play here.

We tend not to think about screen reader numbers. We simply take it for granted that it’s morally right to do our best to support users who have no other way of consuming our content, through no fault of their own. The same principle applies to people using older browsers.

We’ve covered some high-level strategies for building robust sites that should continue to work, to some degree, across a broad spectrum of legacy and modern browsers.

Once again, a disclaimer: don’t hack things for IE. That would be missing the point. But be mindful that all sorts of people use all sorts of browsers for all sorts of reasons, and that there are some solid engineering approaches we can take to make the web accessible for everyone.

Optimize for the majority, make an effort for the minority, and never sacrifice security.

Reblogged 3 days ago from www.smashingmagazine.com

How To Build An Endless Runner Game In Virtual Reality (Part 3)

Alvin Wan

2019-03-20T13:00:35+01:00
2019-03-20T14:35:19+00:00

And so our journey continues. In this final part of my series on how to build an endless runner VR game, I’ll show you how you can synchronize the game state between two devices, moving you one step closer to building a multiplayer game. I’ll specifically introduce MirrorVR, which is responsible for handling the mediating server in client-to-client communication.

Note: This game can be played with or without a VR headset. You can view a demo of the final product at ergo-3.glitch.me.

To get started, you will need the following.

  • Internet access (specifically to glitch.com);
  • A Glitch project completed from part 2 of this tutorial. You can start from the part 2 finished product by navigating to https://glitch.com/edit/#!/ergo-2 and clicking “Remix to edit”;
  • A virtual reality headset (optional, recommended). (I use Google Cardboard, which is offered at $15 apiece.)

Step 1: Display Score

The game as-is functions at a bare minimum, where the player is given a challenge: avoid the obstacles. However, outside of object collisions, the game does not provide feedback to the player regarding progress in the game. To remedy this, you will implement the score display in this step. The score will be a large text object placed in our virtual reality world, as opposed to an interface glued to the user’s field of view.

In virtual reality generally, the user interface is best integrated into the world rather than stuck to the user’s head.

Score display (Large preview)

Start by adding the object to index.html. Add a text mixin, which will be reused for other text elements:

<a-assets>
  ...
  <a-mixin id="text" text="
     font:exo2bold;
     anchor:center;
     align:center;"></a-mixin>
  ...
</a-assets>

Next, add a text element to the platform, right before the player:

<!-- Score -->
<a-text id="score" value="" mixin="text" height="40" width="40" position="0 1.2 -3" opacity="0.75"></a-text>

<!-- Player -->
...

This adds a text entity to the virtual reality scene. The text is not currently visible, because its value is set to empty. However, you will now populate the text entity dynamically, using JavaScript. Navigate to assets/ergo.js. After the collisions section, add a score section, and define a number of global variables:

  • score: the current game score.
  • countedTrees: IDs of all trees that are included in the score. (This is because collision tests may trigger multiple times for the same tree.)
  • scoreDisplay: reference to the DOM object, corresponding to a text object in the virtual reality world.
/*********
 * SCORE *
 *********/

var score;
var countedTrees;
var scoreDisplay;

Next, define a setup function to initialize our global variables. In the same vein, define a teardown function.

...
var scoreDisplay;

function setupScore() {
  score = 0;
  countedTrees = new Set();
  scoreDisplay = document.getElementById('score');
}

function teardownScore() {
  scoreDisplay.setAttribute('value', '');
}

In the Game section, update gameOver, startGame, and window.onload to include score setup and teardown.

/********
 * GAME *
 ********/

function gameOver() {
    ...
    teardownScore();
}

function startGame() {
    ...
    setupScore();
    addTreesRandomlyLoop();
}

window.onload = function() {
    setupScore();
    ...
}

Define a function that increments the score for a particular tree. This function will check against countedTrees to ensure that the tree is not double counted.

function addScoreForTree(tree_id) {
  if (countedTrees.has(tree_id)) return;
  score += 1;
  countedTrees.add(tree_id);
}

Additionally, add a utility to update the score display using the global variable.

function updateScoreDisplay() {
  scoreDisplay.setAttribute('value', score);
}

Update the collision testing accordingly in order to invoke this score-incrementing function whenever an obstacle has passed the player. Still in assets/ergo.js, navigate to the collisions section. Add the following check and update.

AFRAME.registerComponent('player', {
  tick: function() {
    document.querySelectorAll('.tree').forEach(function(tree) {
      ...
      if (position.z > POSITION_Z_LINE_END) {
        addScoreForTree(tree_id);
        updateScoreDisplay();
      }
    })
  }
})

Finally, update the score display as soon as the game starts. Navigate to the Game section, and add updateScoreDisplay(); to startGame:

function startGame() {
  ...
  setupScore();
  updateScoreDisplay();
  ...
}

Ensure that assets/ergo.js and index.html match the corresponding source code files. Then, navigate to your preview. You should see the following:

Score display (Large preview)

This concludes the score display. Next, we will add proper start and Game Over menus, so that the player can replay the game as desired.

Step 2: Add Start Menu

Now that the user can keep track of the progress, you will add finishing touches to complete the game experience. In this step, you will add a Start menu and a Game Over menu, letting the user start and restart games.

Let’s begin with the Start menu where the player clicks a “Start” button to begin the game. For the second half of this step, you will add a Game Over menu, with a “Restart” button:

Start and game over menus (Large preview)

Navigate to index.html in your editor. Then, find the Mixins section. Here, append the title mixin, which defines styles for particularly large text. We use the same font as before, align text to the center, and define a size appropriate for the type of text. (Note below that anchor is where a text object is anchored to its position.)

<a-assets>
   ...
   <a-mixin id="title" text="
       font:exo2bold;
       height:40;
       width:40;
       opacity:0.75;
       anchor:center;
       align:center;"></a-mixin>
</a-assets>

Next, add a second mixin for secondary headings. This text is slightly smaller but is otherwise identical to the title.

<a-assets>
   ...
   <a-mixin id="heading" text="
       font:exo2bold;
       height:10;
       width:10;
       opacity:0.75;
       anchor:center;
       align:center;"></a-mixin>
</a-assets>

For the third and final mixin, define properties for descriptive text — even smaller than secondary headings.

<a-assets>
   ...
   <a-mixin id="copy" text="
       font:exo2bold;
       height:5;
       width:5;
       opacity:0.75;
       anchor:center;
       align:center;"></a-mixin>
</a-assets>

With all text styles defined, you will now define the in-world text objects. Add a new Menus section beneath the Score section, with an empty container for the Start menu:

<!-- Score -->
...

<!-- Menus -->
<a-entity id="menu-container">
  <a-entity id="start-menu" position="0 1.1 -3">
  </a-entity>
</a-entity>

Inside the start menu container, define the title and a container for all non-title text:

...
  <a-entity id="start-menu" ...>
    <a-entity id="start-copy" position="0 1 0">
    </a-entity>
    <a-text value="ERGO" mixin="title"></a-text>
  </a-entity>
</a-entity>

Inside the container for non-title text, add instructions for playing the game:

<a-entity id="start-copy"...>
  <a-text value="Turn left and right to move your player, and avoid the trees!" mixin="copy"></a-text>
</a-entity>

To complete the Start menu, add a button that reads “Start”:

<a-entity id="start-copy"...>
  ...
  <a-text value="Start" position="0 0.75 0" mixin="heading"></a-text>
  <a-box id="start-button" position="0 0.65 -0.05" width="1.5" height="0.6" depth="0.1"></a-box>
</a-entity>

Double-check that your Start menu HTML code matches the following:

<!-- Menus -->
<a-entity id="menu-container">
  <a-entity id="start-menu" position="0 1.1 -3">
    <a-entity id="start-copy" position="0 1 0">
      <a-text value="Turn left and right to move your player, and avoid the trees!" mixin="copy"></a-text>
      <a-text value="Start" position="0 0.75 0" mixin="heading"></a-text>
      <a-box id="start-button" position="0 0.65 -0.05" width="1.5" height="0.6" depth="0.1"></a-box>
    </a-entity>
    <a-text value="ERGO" mixin="title"></a-text>
  </a-entity>
</a-entity>

Navigate to your preview, and you will see the following Start menu:


Start menu (Large preview)

Still in the Menus section (directly beneath the start menu), add the game-over menu using the same mixins:

<!-- Menus -->
<a-entity id="menu-container">
  ...
  <a-entity id="game-over" position="0 1.1 -3">
    <a-text value="?" mixin="heading" id="game-score" position="0 1.7 0"></a-text>
    <a-text value="Score" mixin="copy" position="0 1.2 0"></a-text>
    <a-entity id="game-over-copy">
      <a-text value="Restart" mixin="heading" position="0 0.7 0"></a-text>
      <a-box id="restart-button" position="0 0.6 -0.05" width="2" height="0.6" depth="0.1"></a-box>
    </a-entity>
    <a-text value="Game Over" mixin="title"></a-text>
  </a-entity>
</a-entity>

Navigate to your JavaScript file, assets/ergo.js. Create a new Menus section before the Game section. Additionally, define three empty functions: setupAllMenus, hideAllMenus, and showGameOverMenu.

/********
 * MENU *
 ********/

function setupAllMenus() {
}

function hideAllMenus() {
}

function showGameOverMenu() {
}

/********
 * GAME *
 ********/

Next, update the Game section in three places. In gameOver, show the Game Over menu:

function gameOver() {
    ...
    showGameOverMenu();
}

In startGame, hide all menus:

function startGame() {
    ...
    hideAllMenus();
}

Next, in window.onload, remove the direct invocation of startGame and instead call setupAllMenus. Update your listener to match the following:

window.onload = function() {
  setupAllMenus();
  setupScore();
  setupTrees();
}

Navigate back to the Menu section. Save references to various DOM objects:

/********
 * MENU *
 ********/

var menuStart;
var menuGameOver;
var menuContainer;
var isGameRunning = false;
var startButton;
var restartButton;

function setupAllMenus() {
  menuStart     = document.getElementById('start-menu');
  menuGameOver  = document.getElementById('game-over');
  menuContainer = document.getElementById('menu-container');
  startButton   = document.getElementById('start-button');
  restartButton = document.getElementById('restart-button');
}

Next, bind both the “Start” and “Restart” buttons to startGame:

function setupAllMenus() {
  ...
  startButton.addEventListener('click', startGame);
  restartButton.addEventListener('click', startGame);
}

Define showStartMenu and invoke it from setupAllMenus:

function setupAllMenus() {
  ...
  showStartMenu();
}

function hideAllMenus() {
}

function showGameOverMenu() {
}

function showStartMenu() {
}

To populate the three empty functions, you will need a few helper functions. Define the following two functions, each of which accepts a DOM element representing an A-Frame VR entity and shows or hides it. Place both functions above setupAllMenus:

...
var restartButton;

function hideEntity(el) {
  el.setAttribute('visible', false);
}

function showEntity(el) {
  el.setAttribute('visible', true);
}

function setupAllMenus() {
...

First, populate hideAllMenus. Hiding the shared container removes both menus from sight at once (A-Frame’s visible attribute cascades to child entities); removing the clickable class disables clicks on both buttons:

function hideAllMenus() {
  hideEntity(menuContainer);
  startButton.classList.remove('clickable');
  restartButton.classList.remove('clickable');
}
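
The clickable class matters because the scene’s cursor raycasts only against elements that carry it. If your cursor was configured that way in an earlier step, the camera markup looks roughly like this sketch (not necessarily the tutorial’s exact markup):

<a-camera>
  <!-- The cursor only intersects entities matching .clickable -->
  <a-cursor raycaster="objects: .clickable"></a-cursor>
</a-camera>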

Second, populate showGameOverMenu. Here, restore the container for both menus, as well as the Game Over menu and the ‘Restart’ button’s click listener. However, remove the ‘Start’ button’s click listener, and hide the ‘Start’ menu.

function showGameOverMenu() {
  showEntity(menuContainer);
  hideEntity(menuStart);
  showEntity(menuGameOver);
  startButton.classList.remove('clickable');
  restartButton.classList.add('clickable');
}

Third, populate showStartMenu. Here, reverse all of the changes that showGameOverMenu made:

function showStartMenu() {
  showEntity(menuContainer);
  hideEntity(menuGameOver);
  showEntity(menuStart);
  startButton.classList.add('clickable');
  restartButton.classList.remove('clickable');
}

Double-check that your code matches the corresponding source files. Then, navigate to your preview, and you will observe the following behavior:

Start and Game Over menus (Large preview)

This concludes the Start and Game Over menus.

Congratulations! You now have a fully functioning game with a proper start and proper end. However, we have one more step left in this tutorial: We need to synchronize the game state between different player devices. This will move us one step closer towards multiplayer games.

Step 3: Synchronizing Game State With MirrorVR

In a previous tutorial, you learned how to send real-time information across sockets, to facilitate one-way communication between a server and a client. In this step, you will build on top of a fully-fledged product of that tutorial, MirrorVR, which handles the mediating server in client-to-client communication.

Note: You can learn more about MirrorVR here.

Navigate to index.html. Here, we will load MirrorVR and add a component to the camera, indicating that it should mirror a mobile device’s view where applicable. Import the socket.io dependency and MirrorVR 0.2.2:

<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.1.1/socket.io.js"></script>
<script src="https://cdn.jsdelivr.net/gh/alvinwan/mirrorvr@0.2.2/dist/mirrorvr.min.js"></script>

Next, add a component, camera-listener, to the camera:

<a-camera camera-listener ...>

Navigate to assets/ergo.js. In this step, the mobile device will send commands, and the desktop device will only mirror the mobile device.

To facilitate this, you need a utility to distinguish between desktop and mobile devices. At the end of your file, add a mobileCheck function after shuffle:

/**
 * Checks for mobile and tablet platforms.
 */
function mobileCheck() {
  var check = false;
  (function(a){if(/(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino|android|ipad|playbook|silk/i.test(a)||/1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-/i.test(a.substr(0,4))) check = true;})(navigator.userAgent||navigator.vendor||window.opera);
  return check;
};
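
The regex above is battle-tested but unwieldy. If you only need a coarse heuristic, a much shorter check would also work; note this is an alternative sketch, not what the tutorial ships:

/**
 * Coarse alternative: matches the most common mobile user-agent tokens.
 */
function simpleMobileCheck() {
  return /Mobi|Android|iPhone|iPad/i.test(navigator.userAgent);
}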

First, we will synchronize the game start. In startGame in the Game section, add a mirrorVR notification at the end:

function startGame() {
  ...
  if (mobileCheck()) {
    mirrorVR.notify('startGame', {})
  }
}

The mobile client now sends notifications about a game starting. You will now implement the desktop’s response.

In the window load listener, invoke a setupMirrorVR function:

window.onload = function() {
  ...
  setupMirrorVR();
}

Define a new section above the Game section for the MirrorVR setup:

/************
 * MirrorVR *
 ************/

function setupMirrorVR() {
  mirrorVR.init();
}

Next, pass a configuration object to the mirrorVR initialization function. Specifically, we will define the handler for game-start notifications. We will additionally specify a room ID; this ensures that everyone loading your application is immediately synchronized.

function setupMirrorVR() {
  mirrorVR.init({
    roomId: 'ergo',
    state: {
      startGame: {
        onNotify: function(data) {
          hideAllMenus();
          setupScore();
          updateScoreDisplay();
        }
      },
    }
  });
}

Repeat the same synchronization process for Game Over. In gameOver in the Game section, add a check for mobile devices and send a notification accordingly:

function gameOver() {
  ...
  if (mobileCheck()) {
    mirrorVR.notify('gameOver', {});
  }
}

Navigate to the MirrorVR section and update the configuration object with a gameOver listener. (On the desktop, mobileCheck() returns false, so invoking gameOver() from this handler does not re-send the notification.)

function setupMirrorVR() {
  mirrorVR.init({
    state: {
      startGame: {...
      },
      gameOver: {
        onNotify: function(data) {
          gameOver();
        }
      },
    }
  }) 
}

Next, repeat the same synchronization process for the addition of trees. Navigate to addTreesRandomly in the Trees section. Keep track of which lanes receive new trees. Then, directly before the return statement, send a notification accordingly:

function addTreesRandomly(...) {
  ...
  var numberOfTreesAdded ...
  var position_indices = [];
  trees.forEach(function (tree) {
    if (...) {
      ...
      position_indices.push(tree.position_index);
    }
  });

  if (mobileCheck()) {
    mirrorVR.notify('addTrees', position_indices);
  }
  return ...
}

Navigate to the MirrorVR section, and update the configuration object passed to mirrorVR.init with a new listener for trees:

function setupMirrorVR() {
  mirrorVR.init({
    state: {
      ...
      gameOver: {...
      },
      addTrees: {
        onNotify: function(position_indices) {
          position_indices.forEach(addTreeTo)
        }
      },
    }
  }) 
}

Finally, we synchronize the game score. In updateScoreDisplay from the Score section, send a notification when applicable:

function updateScoreDisplay() {
  ...
  if (mobileCheck()) {
    mirrorVR.notify('score', score);
  }
}

Update the mirrorVR initialization for the last time, with a listener for score changes:

function setupMirrorVR() {
  mirrorVR.init({
    state: {
      ...
      addTrees: {...
      },
      score: {
        onNotify: function(data) {
          score = data;
          updateScoreDisplay();
        }
      }
    }
  });
}
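
Because the snippets above build up mirrorVR.init piecemeal, it may help to see the fully assembled call in one place. This sketch simply collects the roomId and the four handlers defined in this step:

function setupMirrorVR() {
  mirrorVR.init({
    roomId: 'ergo',
    state: {
      startGame: {
        onNotify: function(data) {
          // Mirror the game start: hide menus and reset the score.
          hideAllMenus();
          setupScore();
          updateScoreDisplay();
        }
      },
      gameOver: {
        onNotify: function(data) {
          gameOver();  // mobileCheck() is false on desktop, so no re-notify
        }
      },
      addTrees: {
        onNotify: function(position_indices) {
          position_indices.forEach(addTreeTo);  // spawn the mirrored trees
        }
      },
      score: {
        onNotify: function(data) {
          score = data;
          updateScoreDisplay();
        }
      }
    }
  });
}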

Double-check that your code matches the appropriate source code files for this step. Then, navigate to your desktop preview. Additionally, open up the same URL on your mobile device. As soon as your mobile device loads the webpage, your desktop should immediately start mirroring the mobile device’s game.

Here is a demo. Notice that the desktop cursor is not moving, indicating the mobile device is controlling the desktop preview.

Final result of the endless runner game with MirrorVR game state synchronization (Large preview)

This concludes the MirrorVR integration.

This third step introduced basic game-state synchronization; to make it more robust, you could add more sanity checks and more points of synchronization.

Conclusion

In this tutorial, you added finishing touches to your endless runner game and implemented real-time synchronization of a desktop client with a mobile client, effectively mirroring the mobile device’s screen on your desktop. This concludes the series on building an endless runner game in virtual reality. Along with A-Frame VR techniques, you’ve picked up 3D modeling, client-to-client communication, and other widely applicable concepts.

Next steps can include:

  • More Advanced Modeling
    This means more realistic 3D models, potentially created in third-party software and imported. For example, MagicaVoxel makes creating voxel art simple, and Blender is a complete 3D modeling solution.
  • More Complexity
    More complex games, such as a real-time strategy game, could leverage a third-party engine for increased efficiency. This may mean sidestepping A-Frame and WebVR entirely, instead publishing a compiled Unity3D game.

Other avenues include multiplayer support and richer graphics. With the conclusion of this tutorial series, you now have a framework to explore further.

Reblogged 3 days ago from www.smashingmagazine.com

How SEO can (and should) inform social media marketing

Search engine optimization and social media marketing are two major points of focus for most digital marketers these days. They have co-existed for ages, yet they are seldom used in conjunction.

This is unfortunate because both of these industries have grown and matured quite a bit over the years side by side, and they have a lot in common: both SEO and social media marketing aim at reaching and connecting with your potential customer.

What’s more, both SEO and social media have access to a huge amount of marketing data you’ve been collecting for years.

So why not use this data to grow and inform both of these channels?

Let’s see how SEO and social media can inform and direct each other for you to better understand and serve your customers.

How SEO can inform social media marketing

1. Use keyword research tools to create more effective social media content

Keyword research is not just for SEO (i.e. organic rankings). It’s important to remember that keywords are user-driven, meaning they are generated based on actual user behavior – what real human beings are typing into Google’s search box.

Crafting an effective social media update is not much different from creating a successful article: You need to rely on your audience’s interests, struggles and questions to come up with something that would catch their attention.

That’s why using keyword research tools for both is such a good idea. But it’s not really only about keyword matching any more. It’s about finding topics and questions that make your target audience tick.

Serpstat provides more context to traditional keyword research lists by breaking them up into topics and subtopics for you to implement into your content and social media strategy. Run Serpstat’s clustering tool and share the results with your social media manager. These results let you find closely related queries, so you can see which categories and topics should be covered in your editorial calendar:

TextOptimizer is another innovative keyword research and optimization tool that can help your social media team create a better-informed social media editorial strategy. It takes your core keyword (whatever your business focuses on) and uses semantic analysis to extract related and neighboring topics for you to include in your social media updates:

You can run TextOptimizer, export the results to a PDF file, and send them to your social media manager to implement in your brand’s social media sharing strategy.

Another useful feature inside TextOptimizer is the ability to find popular questions around your topic. Questions make great social media updates:

  • Questions are engaging: Human beings have a natural “answering” instinct when they see a question, so it’s a great way to generate more interactions and engagement.
  • Questions inspire more creative and engaging social media content ideas, like polls, surveys, case studies and interviews.

To find popular questions, use TextOptimizer’s “Topic Ideas” tab:

2. Use personalization & analytics tools for audience research

User interaction with your site offers a great deal of insight into what your audience is mostly looking for and what engages them best. Who are my readers? How do my site users interact with my content section? Which of the calls-to-action seem to work better for my articles? Which should be the primary CTA for my social media referrals? Should I use this CTA on my social media landing page too?

All of those questions can be answered by on-site analytics technology your SEO team is using to refine their search marketing strategy, and this is also the type of insight your social media team may find helpful in creating and promoting their social media marketing assets effectively.

Google Analytics (or any of its possible alternatives) collects data on your site users’ age, gender and interests:

Finteza is an advanced (and free) analytics platform that gives you an in-depth insight into how your landing pages and conversion channels perform and how readers interact with your articles:

Use Finteza’s funnel reports to better understand how to engage your customers and which of your calls-to-action should be implemented on your social media landing pages:

Convert is another interesting solution allowing you to refine and personalize your marketing based on your customers’ location. It lets you easily create on-site personalizations and A/B test them to come up with the best-performing one. Apart from increasing conversions, this also provides more insight into what works for your customers, which can be further translated into your social media strategy. Additionally, this is a great way to test your landing pages before promoting them on social media.

How social media can inform SEO strategy

1. Use social media listening for keyword research and content ideation

With voice search on the rise, natural language research is more and more important for SEO. It’s no longer enough to match text search keywords to rank on top of Google. SEOs need to know how people speak when searching for answers and products.

Spoken words are much less standardized than written text. A fundamental difference between spoken queries and written queries is that speech is spontaneous while writing is more carefully planned.

Spoken queries are thus much longer and much harder to predict than typed queries which makes it harder for search engine professionals to optimize content.

Social listening and analysis can help SEOs understand the new mobile- and voice-driven search behavior better. Seeing keywords in real-time context makes it easier for search marketers to identify natural language queries. Since many people adopt a more casual tone on social media and post more spontaneously, compared to thinking up a search engine query, SEOs can get insights into how keywords may appear when they are used in voice search queries.

Sprout Social’s social media listening tools provide a great way for search marketers to aggregate and analyze real-time keyword context and optimize content for voice search. Furthermore, you can use the monitoring features of the Smart Inbox to make all those keyword mentions searchable and easier to organize. You can tag each mention in the Inbox to categorize them, such as by action required or by content types that fit each message.

Each social mention you tag will go straight to your team’s task list for your content manager to put into your content plan:

Sprout also supports search operators that make natural language queries much easier to target and refine:

2. Use social media ads to test content performance

Facebook’s advertising platform is not only a great way to build leads and sales, but also the fastest way to test any content asset and see how it may perform in the long run. If you are thinking about investing in a solid long-term content asset, use Facebook advertising to boost a smaller asset on the same topic. It is a good idea to create several campaigns using varied audience settings to build your marketing personas and better understand who will respond best to your content.

Note that you will be able to retarget the engaged audience from these preliminary campaigns and advertise your major asset to them when it’s live. This way, you are building an engaged audience before publishing your main content asset, ensuring it generates initial awareness quickly and spreads further on its own.

3. Use social media to discuss your content plans with your audience

Since natural language optimization is becoming more and more important, why not discuss your content plans with your audience in an actual conversation? Some ideas include:

  • Create a live event (a live video interview or a Twitter chat) and use the transcript to publish a natural-language-optimized article on your site. This organically engages many of your social media followers by inviting instant feedback.
  • Simply ask (and maybe tag people you want to hear from). Twitter comments can be quite valuable when it comes to collecting feedback and opinions.
  • Set up a poll and ask your followers which of your content angles they would find most interesting:

Are you using SEO and social media in conjunction for a more informed, better targeted marketing strategy? Please share your tips in the comments!

This post How SEO can (and should) inform social media marketing originally appeared on Sprout Social.

Reblogged 3 days ago from feedproxy.google.com

Myspace accidentally lost 14 million artists' music

Myspace may have lost over 14 million artists’ music. The struggling social media platform issued a statement regarding media lost during a server migration. According to the statement, reported by TechCrunch, anyone who uploaded music to the platform between 2003 and 2015 hasn’t been able to access their songs. Myspace users are skeptical and outraged about the situation. Read more…

Reblogged 3 days ago from feeds.mashable.com

Social media causes youth depression, survey says

According to a survey by the American Psychological Association, cases of mental health disorders rose sharply among kids and young adults starting around 2011. Read more…

Reblogged 3 days ago from feeds.mashable.com

Bing Ads Editor releases updates to make Google Ads imports faster, easier

The enhancements focus on reliability, speed and parity.

Please visit Search Engine Land for the full article.

Reblogged 3 days ago from feeds.searchengineland.com
