How to Write for the Web—a New Approach for Increased Engagement – Whiteboard Friday

Posted by Dan-Petrovic

We tend to put a lot of effort into writing great content these days. But what’s the point of all that hard work if hardly anybody actually reads it through to the end?

In this week’s Whiteboard Friday, Dan Petrovic illustrates a new approach to writing for the web to increase reader engagement, and offers some tools and tips to help along the way.


Video Transcription

G’day, Moz fans, Dan Petrovic from DEJAN here. Today we’re talking about how to write for the web.

How much of an article will people actually read?

This year we did an interesting study involving 500 people. We asked them how they read online. We found that the proportion of people who actually read everything word for word is 16%. Amazingly, this is exactly the same percentage that Nielsen came up with in 1997. It’s been nearly two decades, and we still haven’t learned how to write for the web.

I don’t know about you guys, but I find this to be a huge opportunity, something we can do with our blogs and our content to change and improve how we write in order to provide a better user experience and better performance for our content. Essentially, four out of five people who visit your page will not actually read everything you wrote. The question you have to ask yourself is: why am I even writing if people are not reading?

I went a little bit further with my study, and I asked those same people: Why is it that you don’t read? How is it that there are such low numbers for the people who actually read? The answer was, “Well, I just skip stuff.” “I don’t have time for reading.” “I mainly scan,” or, “I read everything.” That was 80 out of 500 people. The rest said, “I just read the headline and move on,” which was amazing to hear.

Further study showed that people are after quick answers. They don’t want to be on a page too long. They sometimes lose interest halfway through reading the piece of content. They find the bad design to be a deterrent. They find the subject matter to be too complex or poorly written. Sometimes they feel that the writing lacks credibility and trust.

I thought, okay, there’s a bunch of people who don’t like to read a lot, and there’s a bunch of people who do like to read a lot. How do I write for the web to satisfy both ends?

Here was my dilemma. If I write less, the effort required to read my content is very low. That satisfies a lot of people, but it doesn’t provide the depth of content that some people expect, and it doesn’t allow me to go into storytelling, which is often very powerful. If I write more, the effort will be very high. Some people will be very satisfied, but a lot of people will just bounce off. It will, however, provide depth of content and enable storytelling.

Actually, I ended up finding out something I didn’t know about, which was how journalists write. This is a very old practice called “inverted pyramid.”

The rules are, you start off with a primary piece of information. You give answers straight up. Right after that you go into the secondary, supporting information that elaborates on any claims made in the first two paragraphs. Right after that we go into the deep content.

I thought about this, and I realized why it was written in such a way: because people used to read printed newspapers. They would read the most important thing first, and if they dropped off at that point, it wasn’t so bad, because they already knew from the first paragraph what had happened. The deep content is for those who have time.

But guess what? We write for the web now. So what happens is we have all this technology to change things and to embed things. We don’t really have to wait for our users to go all the way to the bottom to read deep information. I thought, “How can I take this deep information and make it available right here and right there to give those interested extra elaboration on a concept while they’re reading something?”

This is when I decided I’ll dive deeper into the whole thing. Here’s my list. This is what I promised myself to do. I will minimize interruption for my readers. I will give them quick answers straight in the first paragraph. I will support easy scanning of my content. I will support trust by providing citations and references. I will provide in-depth content to those who want to see it. I will enable interactivity, personalization, and contextual relevance to the piece of content people want to retrieve in that particular time.

I took one of my big articles and I did a scroll test on it. This was the cutoff point where people read everything. At this point it drops to 95%, 80%, 85%. You keep losing audience as your article grows in size. Eventually only about 20% of the people who visit your page make it to the bottom of your article.

My first step was to jump on the Hemingway App, a very good online tool where you paste in your content and it highlights all the unnecessary words and constructions so you can take them out, because they don’t need to be there. I did that. I slimmed down my article, but it still wasn’t going to do the trick.

Enter the hypotext!

This is where I came up with the idea of hypotext. I created a little plugin for WordPress that lets people go through my article and click on a particular piece, kind of like a link.

Instead of going to a new website, which does interrupt their reading experience, a block of text opens within the paragraph of text they’re reading and gives them that information. They can click if they like, or if they don’t want to look up this information, they don’t have to. It’s kind of like links, but injected right in the context of what they’re currently reading.
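The actual DEJAN plugin is a WordPress (PHP) implementation and its internals aren't shown in the talk, so purely as an illustration, here is a sketch of the core idea in Python: a shortcode whose hidden text is injected inline next to its trigger, instead of linking away. The `[hypotext]` tag name and the HTML markup are assumptions, not the plugin's real syntax.

```python
import re

# Hypothetical shortcode: [hypotext text="extra detail"]visible label[/hypotext]
HYPOTEXT = re.compile(r'\[hypotext text="([^"]*)"\](.*?)\[/hypotext\]', re.S)

def render_hypotext(content: str) -> str:
    """Replace each shortcode with a clickable trigger plus a hidden block
    that a small piece of JS/CSS can toggle open in place."""
    def repl(match: re.Match) -> str:
        hidden, label = match.group(1), match.group(2)
        return (f'<span class="hypo-trigger">{label}</span>'
                f'<span class="hypo-body" hidden>{hidden}</span>')
    return HYPOTEXT.sub(repl, content)

print(render_hypotext(
    'We ran a [hypotext text="500 participants, online panel"]study[/hypotext] this year.'))
```

The reader sees only the short version; clicking a trigger reveals the hidden span without a page load, which is what preserves the reading flow.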

This was a nerve-wracking exercise for me. I did 500 revisions of this article until I got it right. What used to be a 5,000-word article turned into a 400-word article, which can then be expanded to its original 5,000-word form. People said, “That’s great. You have a nice hypothesis, nice theory, but does this really work?”

So I decided to put everything I did to the test. An old article, which takes about 29 minutes to read, was attracting people to the page, but they were spending 6 minutes on average—which is great, but not enough. I wanted people to spend way more time. If I put the effort into writing, I wanted them to digest that content properly. The bounce rate was quite high, meaning people were quite tired of my content and just wanted to move on without exploring anything else on my website.

Test Results

After implementing the compressed version of my original article, giving them a choice of what they will read and when, I expanded the average time on page to 12 minutes, which is extraordinary. My bounce rate was reduced to 60%, which meant that people kept browsing for more of my content.

We did a test with a content page, and the results were like this:

Basically, the engagement metrics on the new page were significantly higher than on the old when implemented in this way.

On a commercial landing page, we had a situation like this:

We only had a small increase in engagement. It was about 6%. Still very happy with the results. But what really, really surprised me was on my commercial landing page—where I want people to actually convert and submit an inquiry—the difference was huge.

It was about a 120% increase in the inquiries in comparison to the control group when I implemented this type of information. I removed the clutter and I enabled people to focus on making the inquiry.

I want you all to think about how you write for the web, what is a good web reading experience, and how content on the web should be, because I think it’s time to align how we write and how we read on the web. Thank you.

Video transcription by Speechpad.com

A few notes:

There are a few things to note here. First, for an example of an implementation of hypotext, take a look at this post on user behavior data.

Next, keep in mind that Google does devalue hidden content, taking issue with its usability. You can read more about this on the DEJAN blog, where there are further tips on the dangers of hidden content and how you can combat them.

One solution is to reverse how hypotext works in an article. Rather than defaulting to the shorter piece, you can start by showing the full text and offer a “5-minute-read” link (example here) for those inclined to skim or not interested in the deep content.

Share your thoughts in the comments below, and thanks for listening!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Mike Deets - Living

Have an incredible day!

Mike

http://blog.deetslist.com

Source link

How to Beat Writer’s Block

Writer’s block is a particularly nasty affliction and can make you feel pretty damn terrible, especially when your time is billed by the hour.

The truth is that while we are now required to create more content than ever before, even the very best authors suffer from writer’s block. It just… happens.

I’ll own up and declare that I recently had writer’s block and hadn’t posted here on State of Digital for a six month period. No matter how busy I was, I always used to be able to thrash out a blog post in a spare hour on a train or in a café. Yet, here I was struggling for six months with writer’s block! Admittedly I’m a busy person (but aren’t we all?), but as a PR person you would think that an 800-word blog post would be a piece of cake.

There is a lot written on the subject of writer’s block and I’m not going to regurgitate the countless articles about how to tackle it. In the interests of originality I thought I would share my own personal recommendations for tackling writer’s block.

Put aside time to write good content

If you are a busy person, then pay a visit to your favourite place, your home or a café  – wherever it is that you find you are able to focus the most – and set about planning your masterpiece.  My problem was mainly due to a lack of time, which meant I struggled to sit down for longer than five minutes.  I often tell my team to lock themselves away when they have a feature or a deadline. Some might be writing a 3,000-word report and choose to work from home.

Quality time in a non-distracting environment is important.


Old School Problems

Discover the ‘real’ story

So many people dive straight into writing without thinking or planning the narrative.  Spend time, and I mean serious time, working out what the story is so that you can explain it to a friend or a colleague. Don’t plan out the full piece yet. Instead write a thirty second ‘pitch’.

Test the idea. Chat it through. Refine it and test it again.  Then you can move on to the more formal planning stage.

Does your post pass the pub test?

Peer review is important, but why wait until an article is written to test your subject? Would your post pass for an interesting conversation down the pub? Or would it be taken seriously at an industry conference?  I often find the most argumentative person I know, put my ideas through their paces, and then make sure that I respond to each and every point they raise. If you can’t think of a strong enough rebuttal to combat the criticism, then perhaps the premise of the story or feature needs to change.

Think about the wider context or narrative – and what you can add to it

Your story doesn’t exist in isolation. The chances are that unless you are very lucky, there are other stories already published on a similar theme.  Often people suffer writer’s block because when they look at what else is already published on the subject, they feel that it has already been covered.   A good writer will turn this problem on its head.

For example, why is the existing conventional wisdom wrong? What have others missed? What questions do other articles raise that need answering? Inspiration can be found in other people’s work without plagiarising.

Start at the beginning and spend time crafting your opening

Now that you have tested the idea you are ready to put pen to paper (or fingers to keyboard).

Before writing the full synopsis, focus on the opening paragraph. If you get the opening right then the subsequent 800 words will write themselves, more or less.

The opening paragraph is very important in feature writing and so is the first paragraph of a press release.

The most basic tip I can give for writing a blog post or a feature is to set up the challenge faced, or the opportunity or the big question that is going to be answered, in the first paragraph.

This does not apply for press releases, which need to be written in news-style.

Is your story topical or newsworthy?

It is hard to say whether having a nose for newsworthy content is taught or caught.

Something is newsworthy if it is interesting, topical, notable, noteworthy, important, significant, historic, remarkable or sensational.  One man’s news is another man’s fish and chip paper, and something that might be of interest to the regional media will not automatically be so for the trade or national media, so it is important to know your audience.

There is plenty of content out there on identifying what is and isn’t newsworthy. I’m not going to go over it here – all I will say is that a good news story will include Who, What, Where, When and Why at the very beginning.

If you don’t have the five Ws covered then you will struggle to write the article, and writer’s block will no doubt follow.

Plan a full synopsis

Even with a press release that should take no longer than 30 minutes to draft, I will always write the first paragraph and then plan a synopsis. It makes life so much easier.

What are the key messages of the piece? What are the target keywords?  Will someone be quoted in the article? If so, what would they say? Where do people go for more information?

With feature synopses it is always a good idea to try to answer a series of questions in response to your introductory paragraph.

Don’t be afraid to go back to the drawing board and start again

Sometimes the biggest barrier to writing something is that the content just isn’t topical, isn’t newsworthy, or simply isn’t working within the piece. If the content for the piece isn’t interesting, then start again.

Starting again is only possible if you are in control of your workload and have some influence over the direction of the article.

Everyone has been a junior within an organisation and been landed with a brief that is so utterly devoid of any value that writing interesting copy for it is nigh on impossible.  If this is your situation then I feel your pain. We have all been there.  I could give you countless examples of the 1,000 word features that I have written on wing mirrors, spark plugs and other assorted detritus.

Unfortunately if you can’t renegotiate the brief then you are on a hiding to nothing.

Suck it up. Then move on.

Don’t just think about copy

Why struggle to write long form content when a paragraph and a photo might suffice?  Images and embedded content might tell the story much more succinctly than you can ever do in text.

I know what the SEO handbook says but sometimes, when it comes to high quality, engaging content, it is better just to ignore it and do what works.

So there you have it. Those were my thoughts on how to tackle writer’s block.  I’ve tried to keep my recommendations practical. If you have any other tips then let us know in the comments.

Post from James Crawford


Click-Through Rate Isn't Everything: 8 Ways to Improve Your Online Display Ads

Posted by rMaynes1

You are exposed to an average of 362 online display ads a day. How close are you to buying anything when you see those ads?

Online display ads have been around for over 20 years. They’re nothing new. But over the past 2 decades, the content, format, and messaging of display ads have changed dramatically—because they have had to!

The click-through rate of that first banner ad in 1994 was 44%. CTRs have steadily declined, and were sitting at around 0.1% in 2012 for standard display ads (video and rich media excluded), according to DoubleClick. Advertisers had to do something to ensure that their ads were seen, and engaged with—ads had to be a useful resource, and not an annoying nuisance.
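For reference, click-through rate is simply clicks divided by impressions; the decline described above is from roughly 44 clicks per 100 impressions in 1994 to about 1 per 1,000 in 2012. A trivial sketch:

```python
def ctr_percent(clicks: int, impressions: int) -> float:
    """Click-through rate as a percentage: clicks / impressions * 100."""
    return clicks / impressions * 100

# The 1994 banner: roughly 44 clicks per 100 impressions.
print(ctr_percent(44, 100))   # 44.0
# A standard display ad circa 2012: about 1 click per 1,000 impressions.
print(ctr_percent(1, 1000))   # about 0.1
```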

It’s important, however, that the focus is not firmly fixed on CTRs. Yes, online display ads have largely been considered a tool for direct response advertising, but more recently, advertisers are understanding the importance of reaching the right person, in the right mindset, with an ad that can be seen. This ad may not be clicked on, but does that mean it wasn’t noticed and remembered? Advertisers are increasingly opting to pay for performance as opposed to clicks and/or impressions. Advertisers want their ad to drive action that leads to purchase—and that isn’t always in the form of a click.

Mediative recently conducted and released a research study that looks at how display ads can drive purchase behaviour. If someone is browsing the web and sees an ad, can it influence a purchase decision? Are searchers more responsive to display ads at different stages in the buying cycle? What actions do people take after seeing an ad that captures their interest? Ultimately, Mediative wanted to know how indicative of purchase behaviour a click on an ad was, and if clicks on display ads even matter anymore when it comes to driving purchase behaviour and measuring campaign success. The results from an online survey are quite interesting.

1. The ability of online display ads to influence people increases as they come closer to a purchase decision.

In fact, display ads are 39% more likely to influence web users when they are researching a potential purchase versus when they have no intent to buy.

Advertiser action item #1:

Have different ad creatives with different messaging that will appeal to the researcher and the purchaser of your product or service separately. Combined with targeted impressions, advertisers are more likely to reach and engage their target audience when they are most receptive to the particular messaging in the ad.

Here are a few examples of Dell display ads and different creatives that have been used:

This creative is focusing on particular features of the product that might appeal more to researchers.

This ad injects the notion of “limited time” to get a deal, which might cause people who are on the fence to act faster—but it doesn’t mention pricing or discounts.

These creatives introduce price discounts and special offers which will appeal to those in the market to buy.

2. The relevancy of ads cannot be overstated.

40% of people took an action (clicked the ad, contacted the advertiser, searched online for more information, etc.) from seeing an ad because it was relevant to a need or want, or relevant to something they were doing at the time.

Advertiser action item #2:

Use audience data or lookalike modeling in display campaigns to ensure ads will be targeted to searchers who have a higher likelihood of being interested in the product or service. Retargeting ads to people based on their past activity or searches is valuable at this stage, as potential customers can be reached all over the web while they comparison shop.

An established Canadian charitable organization ran an awareness campaign in Q2 2015 using retargeting, first- and third-party data lookalike modeling, and contextual targeting to help drive existing and new users to their website. The goal was to drive donations while reducing the effective cost per action (eCPA) of the campaign. This combination helped drive granularity in the targeting, enabling the most efficient spending possible. The result was an eCPA of $76 against a goal of $600—the campaign beat its target by 689%.
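The arithmetic behind that eCPA figure, using the numbers quoted above ("% improvement" here meaning how much higher the goal was than the achieved cost, which is one common way to read the 689% claim):

```python
def ecpa_improvement_percent(goal: float, achieved: float) -> float:
    """How far under the eCPA goal the campaign landed, expressed as a
    percentage of the achieved cost: (goal - achieved) / achieved * 100."""
    return (goal - achieved) / achieved * 100

# $600 goal vs. $76 achieved: the goal was ~689% higher than the result,
# i.e. the campaign beat its target roughly 7.9x over.
print(round(ecpa_improvement_percent(600, 76)))  # 689
```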

3. Clicks on ads are not the only actions taken after seeing ads.

53% of people said they were likely to search online for the product featured in the ad (the same as those who said they would click on the ad). Searching for more information online is just as likely as clicking the ad after it captures attention, just not as quickly as a click (74% would click on the ad immediately or within an hour, 52% would search online immediately or within an hour).

Advertiser action item #3:

It is critical not to measure the success of a display campaign by clicks alone. Advertisers can get caught up in CTRs, but it’s important to remember that ads will drive other behaviours in people, not just a click. Website visits, search metrics, etc. must all be taken into consideration.

A leading manufacturer of PCs, laptops, tablets, and accessories wanted to increase sales in Q2 of 2014, with full transparency on the performance and delivery of the campaign. The campaign was run against specific custom audience data focusing on people of technological, educational, and business interest, and was optimized using various tactics. The result? The campaign achieved a post-view ROI revenue (revenue from target audiences who were presented with ad impressions, yet did not necessarily click through at that time) that was 30x the amount of post-click revenue.

4. Clicks on ads are not the only actions that lead to purchase.

33% of respondents reported making a purchase as a direct result of seeing an ad online. Of those, 61% clicked and 44% searched (multiple selections were allowed), which led to a purchase.

Advertiser action item #4:

Revise the metrics you measure. Tracking “post-view conversions” accounts for the fact that people may see an ad but act later: the ad triggers an action, whether a search, a visit, or a purchase, that doesn’t happen immediately and isn’t directly attributable to a click.
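As a sketch of what post-view conversion counting might look like in practice: the event format, field names, and 30-day attribution window below are illustrative assumptions, not Mediative's methodology.

```python
from datetime import datetime, timedelta

def count_post_view_conversions(impressions, conversions, clicks,
                                window=timedelta(days=30)):
    """Count conversions that followed an ad impression within `window`
    but were NOT preceded by a click on the ad (hypothetical model)."""
    count = 0
    for conv_time in conversions:
        saw_ad = any(imp <= conv_time <= imp + window for imp in impressions)
        clicked_first = any(clk <= conv_time for clk in clicks)
        if saw_ad and not clicked_first:
            count += 1
    return count

imps = [datetime(2015, 6, 1)]
convs = [datetime(2015, 6, 10),   # within 30 days of the impression
         datetime(2015, 8, 1)]    # outside the window, not counted
print(count_post_view_conversions(imps, convs, clicks=[]))  # 1
```

A real measurement pipeline would join impression and conversion logs per user; the point is simply that the metric credits the ad view, not a click.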

5. The age of the target audience can impact when ads are most likely to influence them in the buying cycle.

  • Overall, 18–25 year olds are most likely to be influenced by online advertising.
  • At the beginning of the buying cycle, younger adults aged 18–34 are likely to notice and be influenced by ads much more than people aged over 35.
  • At the later stages of the buying cycle, older adults aged 26–54 are 12% more likely than 18–25 year olds to have made a purchase as a result of seeing an ad.

Advertiser action item #5:

If your target audience is older, multiple exposures of an ad might be necessary in order to increase the likelihood of capturing their attention. Integrated campaigns could be more effective, where offline campaigns run in parallel with online campaigns to maximize message exposure.

6. Gender influences how much of an impact display ads have.

More women took an online action that led to a purchase in the last 30 days, whereas more men took an offline action that led to a purchase.

  • 76% more women than men visited an advertiser’s website without clicking on the ad.
  • 47% more women than men searched online for more information about the advertiser, product, or service.
  • 43% more men than women visited the advertiser’s location.
  • 33% more men than women contacted the advertiser.

Advertiser action item #6:

Ensure you know as much about your target audience as possible. What is their age, their average income? What sites do they like to visit? What are their interests? The more you know about who you are trying to reach, the more likely you will be to reach them at the right times when they will be most responsive to your advertising messages.

7. Income influences how much of an impact display ads have.

  • Web users who earned over $100k a year were 35% more likely to be influenced by an ad when exposed to something they hadn’t even thought about than those making under $50k a year.
  • When ready to buy, people who earned under $20K were 12.5% more likely to be influenced by ads than those making over $100K.

Advertiser action item #7:

Lower earners (students, part-time workers, etc.) are more influenced by ads when ready to buy, so will likely engage more with ads offering discounts. Consider income differences when you are trying to reach people at different stages in the buying cycle.

8. Discounts don’t influence people if they are not relevant.

We were surprised that the results of the survey indicated that discounts or promotions in ads did not have more of an impact on people—but it’s likely that the ads with coupons were irrelevant to the searcher’s needs or wants, therefore would have no impact. We asked people what their reasons were behind taking action after seeing an online ad. 40% of respondents took an action from seeing an ad for a more purchase-related reason than simply being interested—they took the action because the ad was relevant to a need or want, or relevant to something they were doing at the time.

Advertiser action item #8:

Use discounts strategically. Utilizing data in campaigns can ensure ads reach people with a high intent to buy and a high likelihood of being interested in your product or service. Turn interest into desire with coupons and/or discounts—it will have more of an impact if directly tied to something the searcher is already considering.

In conclusion, to be successful, advertisers need to ensure their ads provide value to web users: to be noticed, remembered, and engaged with, relevancy is key. Serving relevant ads tied to a searcher’s current need or want is far more likely to capture attention than a “one-size-fits-all” approach.

Advertisers will be rewarded for their attention to personalization with more interaction with ads and a higher likelihood of a purchase. Analyzing lower funnel metrics, such as post-view conversions, rather than simply concentrating on the CTR will allow advertisers to have a far better understanding of how their ads are performing, and the potential number of consumers that have been influenced.

Rebecca Maynes, Manager of Content Marketing and Research with Mediative, was the major contributor on this whitepaper. The full research study is available for free download at Mediative.com.


Are we really better off becoming publishers?

At some point over the next few months, Conde Nast is set to launch its own global ecommerce business, cementing the trend that in luxury fashion the lines between being a publisher and being a retailer are increasingly blurred. The same can be seen at the other end of the spectrum, with luxury retailers like Mr Porter and Net-a-Porter opting for a ‘content first’ approach on their sites and publishing their own fashion magazines, available both digitally and in print.

Retailers are becoming publishers and publishers are becoming retailers. Is this a good thing, symptomatic of the consumer-centric shift, or are we left with strategic mediocrity where nobody is really great at anything?

It’s hardly interesting that brands invest in content; what I’m talking about here is retail brands where content is the focus. For example, Mr Porter leads with content, and drives users to broader lifestyle pieces rather than direct conversion- or product-focused pages.

Mr Porter’s content is not purely fashion-, brand- or trend-focused like that of other luxury fashion retailers; their content strategy is about encouraging readers to buy into a certain lifestyle, with their products and promotions a backdrop to the editorial.

For other designer fashion retailers like Flannels or The Outnet, while visually they don’t look hugely different, their homepage edits drive users to more traditional product pages.


And while they still have strong editorial offerings, it’s clear that the primary focus is conversion.

So is being a publisher and leading with great content really the best SEO strategy?

Firstly, I should probably call out the obvious: I don’t have analytics data, and I recognise that this strategy may (or may not) drive additional value from a CRM, branding, sentiment, trust or awareness perspective. Not everything is about how well a site performs in search engines or how successful the content is at earning links. But who cares; it’s fun to have a look nonetheless.

Something I immediately noticed is that when content pages are the ones with the most authority, you’re likely to find editorial content ranking for transactional terms. For example, Mr Porter ranks 5th for the 50k-monthly-search-volume query ‘Tom Ford’. Good job; only, the page that is ranking is this long-form interview with him, whereas it’d probably be more lucrative for this optimised product page to be ranking for that query; and this page ranks for Paul Smith rather than the product page.

This happens fairly often; this article tends to rank for most suit and tuxedo queries. And if we think about intent, when someone explicitly queries ‘dinner suits’ or ‘dinner jackets’ they’re most likely in a purchase frame of mind, looking to compare various styles, and in this instance product pages would be better surfaced.

This article ranks for people looking for classic parka jackets and this article ranks for gentlemen interested in stone chinos.


I could be completely wrong and actually, these long form, content heavy article pages build trust and encourage users to click through to product pages to convert, but I doubt it. TL;DR, wanna buy shit.

From my perspective, product pages are what the user is really looking for (or used to?) when searching these kind of queries.

In terms of overall organic performance, it’s not obvious that content first results in better results.


Farfetch are the market leaders (well done, Matt!), and unlike Mr Porter, out of their 14,201 UK ranking pages within the top 100 that Linkdex Visibility picked up, 94% are product or category pages. Although editorial is a huge focus for them, they still lead with product rather than editorial; that is, they are an ecommerce site first and foremost and use content to supplement that, rather than the other way round.

Another assumed benefit of investing hugely in editorial content is increased social shares, on the assumption that people are inherently more interested in content than in products. But is this actually true? I think it depends. If you’re selling wheelie bins then sure, you’re likely to get more social traction from your articles of cute kittens dressed as wheelie bins than from your product pages. But for luxury retail, is the product page not interesting, aspirational and shareable in itself?

For Net-a-Porter, their product pages get significantly more shares than their magazine edits.

To date, the top performing product pages (comparing the same number of product pages and magazine edits) have generated 498% more Facebook shares, with Eres, Louboutin, Gucci and Givenchy being the favoured brands. And this makes sense: if you’re about to drop a grand on a pair of brogues, then the least you can do is brag about it.
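As a reminder of how an "X% more" comparison like that 498% figure is computed (the share counts below are made up purely for illustration):

```python
def percent_more(a: float, b: float) -> float:
    """How much bigger a is than b, as a percentage of b."""
    return (a - b) * 100 / b

# Hypothetical totals: 2,990 product-page shares vs. 500 magazine shares.
print(percent_more(2990, 500))  # 498.0
```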

Again though, this might be an unfair comparison: The Edit really isn’t optimised for social sharing. There are no share buttons (aside from email) and no CTA to encourage readers to share the content.


So what about links? Is it good at generating those? Well, of all referring domains, the magazine accounts for 4% (excluding individual issues), and 2% of homepage anchor text specifically references ‘The Edit’ or ‘Magazine’.


Likewise for Mr Porter’s top 10,000 pages: 6% of referring domains point to journal content and 23% to specific product pages. This surprised me, given the assumption that great content is the way to earn links. Then again, this is niche-specific: for fashion, naturally earned links tend to come from product reviews.

I know I’m short of multiple perspectives here, and although on the face of it it seems to make sense for a luxury fashion brand to position itself first and foremost as a fashion publisher, I’m reluctant to agree that it lives up to the hype.

 

 

Post from Kirsty Hulse

Mike Deets - Living

 

 

 

Have an incredible day!

 

Mike

http://blog.deetslist.com

Source link

Why All SEOs Should Unblock JavaScript & CSS… And Why Google Cares

Posted by jenstar

If you’re a webmaster, you probably received one of those infamous “Googlebot cannot access CSS and JS files on example.com” warning letters that Google sent out to seemingly every SEO and webmaster. This was a brand new alert from Google, although we have been hearing from the search engine about the need to ensure all resources are unblocked—including both JavaScript and CSS.

There was definite confusion around these letters, supported by some of the reporting in Google Search Console. Here’s what you need to know about Google’s desire to see these resources unblocked and how you can easily unblock them to take advantage of the associated ranking boosts.

Why does Google care?

One of the biggest complaints about the warning emails lay in the fact that many felt there was no reason for Google to see these files. This was especially true because it was flagging files that, traditionally, webmasters blocked—such as files within the WordPress admin area and WordPress plugin folders.

The warning letter in question, which many received from Google, definitely raised plenty of questions and concerns.

Of course, whenever Google does anything that could devalue rankings, the SEO industry tends to freak out. And the confusing message in the warning didn’t help the situation.

Why Google needs it

Google needs to render these files for a couple of key reasons. The most visible and well known is the mobile-friendly algorithm. Google needs to be able to render the page completely, including the JavaScript and CSS, to ensure that the page is mobile-friendly and to apply both the mobile-friendly tag in the search results and the associated ranking boost for mobile search results. Unblocking these resources was one of the things that Google was publicly recommending to webmasters to get the mobile-friendly boost for those pages.

However, there are other parts of the algorithm that rely on using it, as well. The page layout algorithm, the algorithm that looks at where content is placed on the page in relation to the advertisements, is one such example. If Google determines a webpage is mostly ads above the fold, with the actual content below the fold, it can devalue the rankings for those pages. But with the wizardry of CSS, webmasters can easily make it appear that the content is front and center, while the ads are the most visible part of the page above the fold.

And while it’s an old-school trick and not very effective, people still use CSS and JavaScript to hide things like keyword stuffing and links, including, in the case of a hacked site, hiding them from the actual website owner. By crawling the CSS and JavaScript, Googlebot can determine whether they are being used spammily.

Google also has hundreds of other signals in their search algo, and it is very likely that a few of those use data garnered from CSS and JavaScript in some fashion as well. And as Google changes things, there is always the possibility that Google will use it for future signals, as well.

Why now?

While many SEOs had their first introduction to the perils of blocking JavaScript and CSS when they received the email from Google, Matt Cutts was actually talking about it three-and-a-half years ago in a Google Webmaster Help video.

Then, last year, Google made a significant change to their webmaster guidelines by adding it to their technical guidelines:

Disallowing crawling of Javascript or CSS files in your site’s robots.txt directly harms how well our algorithms render and index your content and can result in suboptimal rankings.

It still got very little attention at the time, especially since most people believed they weren’t blocking anything.

However, one major issue was that some popular SEO WordPress plugins were blocking some JavaScript and CSS. Since most WordPress users weren’t aware this was happening, it came as a surprise to learn that they were, in fact, blocking resources.

It also began showing up in a new “Blocked Resources” section of Google Search Console in the month preceding the mobile-friendly algo launch.

How many sites were affected?

In usual Google fashion, they didn’t give specific numbers about how many webmasters received these blocked-resources warnings. But Gary Illyes from Google did confirm that they went out to 18.7% as many recipients as the mobile-friendly warnings sent earlier this year.

Finding blocked resources

The email that Google sent to webmasters alerting them to the issue of blocked CSS and JavaScript was confusing. It left many webmasters unsure of what exactly was being blocked and what was blocking it, particularly because they were receiving warnings for JavaScript and CSS hosted on other third-party sites.

If you received one of the warning letters, the suggestion for how to find blocked resources was to use the Fetch tool in Google Search Console. While this might be fine for checking the homepage, for sites with more than a handful of pages, this can get tedious quite quickly. Luckily, there’s an easier way than Google’s suggested method.

There’s a full walkthrough here, but for those familiar with Google Search Console: you’ll find a section called “Blocked Resources” under “Google Index,” which tells you which JavaScript and CSS files are blocked and which pages they’re found on.

You also should make sure that you check for blocked resources after any major redesign or when launching a new site, as it isn’t entirely clear if Google is still actively sending out these emails to alert webmasters of the problem.

Homepage

There’s been some concern about those who use specialized scripts on internal pages and don’t necessarily want to unblock them for security reasons. John Mueller from Google said that they are looking primarily at the homepage—both desktop and mobile—to see what JavaScript and CSS are blocked.

So at least for now, while it is certainly a best practice to unblock CSS and JavaScript from all pages, at the very least you want to make it a priority for the homepage, ensuring nothing on that page is blocked. After that, you can work your way through other pages, paying special attention to pages that have unique JavaScript or CSS.

Indexing of Javascript & CSS

Another reason many sites give for not wanting to unblock their CSS and JavaScript is that they don’t want those files indexed by Google. But neither is a file type that Google will index, according to its long list of supported file types for indexation.

All variations

It is also worth remembering to check both the www and the non-www versions for blocked resources in Google Search Console. This is often overlooked by webmasters who tend to only look at the version they prefer to use for the site.

Also, because the blocked resources data shown in Search Console is based on when Googlebot last crawled each page, you could find additional blocked resources when checking them both. This is especially true for sites that are older or updated less frequently, and thus not crawled daily (as a more popular site would be).

Likewise, if you have both a mobile version and a desktop version, you’ll want to ensure that both are not blocking any resources. It’s especially important for the mobile version, since it impacts whether each page gets the mobile-friendly tag and ranking boost in the mobile search results.

And if you serve different pages based on language and location, you’ll want to check each of those as well. Don’t just check the “main” version and assume it’s all good across the entire site. It’s not uncommon to discover surprises in other variations of the same site. At the very least, check the homepage for each language and location.

WordPress and blocking Javascript & CSS

If you use one of the “SEO for WordPress”-type plugins for a WordPress-based site, chances are you’re blocking Javascript and CSS due to that plugin. It used to be one of the “out-of-the-box” default settings for some to block everything in the /wp-admin/ folder.

When the mobile-friendly algo came into play, because those admin pages were not being individually indexed, the majority of WordPress users left that robots block intact. But this new Google warning does require that all WordPress-related JavaScript and CSS be unblocked, and Google will show it as an error if you block the JavaScript and CSS.

Yoast, creator of the popular Yoast SEO plugin (formerly WordPress SEO), also recommends unblocking all the JavaScript and CSS in WordPress, including the /wp-admin/ folder.
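Putting that recommendation into practice, a robots.txt along these lines (a sketch; adapt the paths to your install, and note the admin-ajax.php line is a common but optional addition) keeps /wp-admin/ pages out of the crawl while still letting Googlebot fetch the assets inside it:

```text
User-Agent: Googlebot
Disallow: /wp-admin/
# Re-allow the assets Googlebot needs to render pages:
Allow: /wp-admin/*.js
Allow: /wp-admin/*.css
Allow: /wp-admin/admin-ajax.php
```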

Third-party resources

One of the ironies of this was that Google was flagging third-party JavaScript, meaning JavaScript hosted on a third-party site that was called from each webpage. And yes, this includes Google’s own Google AdSense JavaScript.

Initially, Google suggested that website owners contact those third-party sites to ask them to unblock the JavaScript being used, so that Googlebot could crawl it. However, not many webmasters were doing this; they felt it wasn’t their job, especially when they had no control over what a third-party site blocks from crawling.

Google later said that they were not concerned about third-party resources because of that lack of control webmasters have. So while it might come up on the blocked resources list, they are truly looking for URLs for both JavaScript and CSS that the website owner can control through their own robots.txt.

John Mueller revealed more recently that they were planning to reach out to some of the more frequently cited third-party sites in order to see if they could unblock the JavaScript. While we don’t know which sites they intend to contact, it was something they planned to do; I suspect they’ll successfully see some of them unblocked. Again, while this isn’t so much a webmaster problem, it’ll be nice to have some of those sites no longer flagged in the reports.

How to unblock your JavaScript and CSS

For most users, it’s just a case of checking the robots.txt and ensuring you’re allowing all JavaScript and CSS files to be crawled. For Yoast SEO users, you can edit your robots.txt file directly in the admin area of WordPress.

Gary Illyes from Google also shared some detailed robots.txt changes on Stack Overflow. You can add these directives to your robots.txt file in order to allow Googlebot to crawl all Javascript and CSS.

To be doubly sure you’re unblocking all JavaScript and CSS, you can add the following to your robots.txt file, provided you don’t have any directories being blocked in it already:

User-Agent: Googlebot
# Wildcard patterns match .js and .css files anywhere on the site
Allow: /*.js
Allow: /*.css

If you have a more specialized robots.txt file, where you’re blocking entire directories, it can be a bit more complicated.

In these cases, you also need to allow the *.js and *.css files within each of the directories you have blocked.

For example:

User-Agent: Googlebot
# Block the directory itself...
Disallow: /deep/
# ...but re-allow the JavaScript and CSS inside it
Allow: /deep/*.js
Allow: /deep/*.css

Repeat this for each directory you are blocking in robots.txt.
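For example, a robots.txt that blocks two hypothetical directories while still exposing their JavaScript and CSS would look something like this (directory names are illustrative only):

```text
User-Agent: Googlebot
Disallow: /assets/private/
Allow: /assets/private/*.js
Allow: /assets/private/*.css
Disallow: /deep/
Allow: /deep/*.js
Allow: /deep/*.css
```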

This allows Googlebot to crawl those files, while disallowing other crawlers (if you’ve blocked them). However, chances are good that the bots you’re most concerned about keeping away from your JavaScript and CSS files aren’t the ones that honor robots.txt anyway.

You can change the User-Agent to *, which would allow all crawlers to crawl it. Bing does have its own version of the mobile-friendly algo, which requires crawling of JavaScript and CSS, although they haven’t sent out warnings about it.
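To sanity-check rules like these before deploying them, it can help to model how Google-style matching works: `*` matches any run of characters, a trailing `$` anchors the end, and the longest matching rule wins. This is a rough sketch, not Google’s actual parser (note that Python’s built-in `urllib.robotparser` does not support wildcards, which is why a small custom matcher is used here):

```python
import re

def to_regex(pattern: str) -> str:
    """Convert a robots.txt path pattern to an anchored regex:
    '*' matches any run of characters, a trailing '$' pins the end."""
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"
    return "^" + regex

def googlebot_allowed(path: str, rules) -> bool:
    """Google-style matching sketch: the longest matching pattern wins,
    and on a length tie an Allow beats a Disallow. If no rule matches,
    the path is allowed by default."""
    best_len, allowed = -1, True
    for allow, pattern in rules:
        if re.match(to_regex(pattern), path):
            if len(pattern) > best_len or (len(pattern) == best_len and allow):
                best_len, allowed = len(pattern), allow
    return allowed

rules = [(False, "/deep/"), (True, "/deep/*.js"), (True, "/deep/*.css")]
print(googlebot_allowed("/deep/app.js", rules))     # True: Allow /deep/*.js is more specific
print(googlebot_allowed("/deep/page.html", rules))  # False: only Disallow /deep/ matches
```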

Bottom line

If you want to rank as well as you possibly can, unblocking JavaScript and CSS is one of the easiest SEO changes you can make to your site. This is especially important for those with a significant amount of mobile traffic, since the mobile ranking algorithm does require they both be unblocked to get that mobile-friendly ranking boost.

Yes, you can continue blocking Googlebot from crawling either of them, but your rankings will suffer if you do. And in a world where every position gained counts, it doesn’t make sense to sacrifice rankings in order to keep those files private.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Apps Really Are Transforming The Future of Search

Back in May, Google AdWords announced that more Google searches are now being performed on mobile devices than on desktop computers — this statement was reaffirmed by Amit Singhal earlier this month.

Following the initial announcement, Google launched their Micro Moments campaign, which entreated agencies to consider “winning the moment,” for their client’s customers. Moreover, the campaign put forward the extremely bold idea that, “the purchase funnel is dead.” So what, exactly, is the connection between the two?

Mobile Searches Are Up, But Mobile Web Browsing is On the Decline

While the amount of time spent on mobile devices continues to grow, mobile activity itself has become increasingly unbalanced. A recently updated annual study by Yahoo & Flurry found that browser-based activity only accounts for about 10% of mobile usage, while users spend the remaining 90% of their time using apps.

While this probably won’t come as news to most of you, recent comments from Simon Khalaf, Yahoo’s Senior VP of Publishing, indicate an interesting philosophical shift in how the digital industry is responding to the emergent data on mobile behavior.

“In the world of mobile, SEO, SEM and blue links are gone because people are not using browsers. In the mobile space, users and their communities have become the distribution channel for content themselves [through sharing].”

Interesting indeed.

Finding vs. Deciding: The Parameters of the Search Engine Discourse

The disparaging description of search engines as merely being lists of “10 blue links” is an age-old slight against Google, first conceived as a positioning statement to set Bing’s “Decision Engine” apart at the time of its launch. Nonetheless, there now seems to be plenty of strong indicators confirming that web browser usage is on the decline, regardless of the device or user demographic. As Greg Sterling aptly pointed out on Marketing Land, the implication is that this represents a threat to search.

Similarly, writing for Search Engine Watch, YuYu Chen asked if this could ultimately be a threat to SEO as well. It’s a fair question, especially considering that a threat to Google’s primary source of revenue — i.e. paid search — could pose a serious threat to organic search marketing strategies too.

So Who Cares If Mobile Browser Usage Is Down?

But don’t freak out just yet — this isn’t Mobilegeddon. Just as Bing’s Decision Engine didn’t kill Google’s Search Engine, SEO isn’t going to be dead anytime soon. In fact, at SMX, ComScore’s Eli Goodman presented data that projected a very bright future for publishers on the mobile web.

In his presentation, Goodman explained that iOS was driving growth on mobile searches in general and that mobile web properties were much more successful than mobile apps when it came to reaching unique visitors. The primary factor behind mobile publishers’ ongoing success is that the linking infrastructure between mobile properties is simply easier to execute, more powerful, and visible to users.  

At least on the surface, the threat that apps pose to search usage doesn’t actually seem to exist. So what should we make of Google’s Micro Moments thesis that “the purchase funnel is dead?”

BIA/Kelsey’s Michael Boland recently published an excellent video that fleshes out the discussion. In it, discussing the Future of Search, Boland envisions an app-based future that “doesn’t compel a search engine, as we know it.” Or, put another way, doesn’t compel the “10 blue links” model of web-based engagement.

The video neatly illustrates the current state of the broken purchase-funnel experience. Using the example of organizing a night out at a local event, it shows how a user must flip back and forth between all manner of different app- and browser-based experiences to achieve their ultimate goal.

The video is a must-watch. Boland makes a compelling observation that is simultaneously the complement and the counter to Goodman’s argument that the linking infrastructure of the mobile web is the primary factor in its success (over apps). Boland quickly shows how the mobile web is essentially disjointed across many different properties and therefore does not generate the comparably fluid, on-demand consumer experiences available in apps. The customer purchase funnel doesn’t get any more fragmented than this!

Screencap from Mike Boland’s video, “The Next Era of Search.”

Moreover, Boland takes the argument in a radical new direction, highlighting how companies like Verve and One Button are trying to solve the problem of inter-app linking to restore the broken links in the chain. He says that Google is also trying to improve the deep linking functionality between the mobile web and apps through full app indexing, and the links between apps and apps through Google’s Now On Tap. However, at the end of the video Boland suggests that the biggest driver of app indexing (or at least the standards around the visibility of the internal pages of an app) will likely come from Apple’s new Force Touch mechanism on their latest 6S devices.

The Existential Threat to Search Engines

Amit Singhal confirmed the progress on Google’s app indexing efforts earlier this month at Recode’s Mobile conference, stating that they had already indexed over 100 billion pages from within native apps.

In the same article, Recode’s Mark Bergen goes on to talk about investor insecurity in regards to Google’s mobile revenue numbers and, echoing Boland’s thoughts, the emergence of Apple’s own in-app search efforts.

“Google has several compelling reasons to prove it can crack in-app search — both for investors, skittish about how it will grow revenue on mobile, and app developers, who need evidence that surrendering their pages to Google is outweighed by the benefits of landing inside search. From the new figures today, it seems plenty of developers are willing to make the trade-off.”

But Google’s app indexing efforts are in direct competition with Apple, which is also working on ways to deliver information to iPhone users within and between apps.

At face value, the announcement that mobile searches had exceeded desktop, combined with the Micro Moments campaign launch, made it look a bit like Google was simply trying to compete for mobile advertising spend and win some of the dollars invested in Facebook ads.

Much of Boland’s report hits upon the same notes that are often heard in the most cynical of marketing camps. The theory is that Mobilegeddon was designed for Google to make the mobile web more attractive, and paid search is the solution for those unable to design their sites to rank well on mobile. But that theory doesn’t hold much water, mainly because the sites that don’t work well on mobile aren’t going to work well on paid search anyway.

So, in light of Singhal’s announcement that 100 billion app pages were indexed, we see the flipside of the Micro Moments campaign revealed, foreshadowing an even greater existential threat to Google’s core advertising business from mobile apps.

By declaring, “the purchase funnel is dead,” Google was not expounding yet another, “SEO is Dead,” marketing theory to win big brand PPC budgets — rather, it represented the true call of the canary in the coalmine for the link-based marketplace. Time lost by mobile browsers to web apps is just the metaphorical tip of the iceberg. In-app searches pose an emerging threat to Google’s future (search being their primary source of revenue), and what’s at stake is not simply a matter of time lost, but the erosion of the fundamental, structural foundations of the organization of information on the web.

Ships May Rise With the Tide, But the Sea Level is Rising; Leaving Only Peak Screen Estate

Point is, any threat to Google is also a threat to every business that relies on Google as their front door. Seasoned SEOs who have battled both penguins and pandas know that social media marketing never ended up being the audience development powerhouse that the wild west of search once was.

If “10 blue links” aren’t going to be the front door or mediator between consumers and products anymore, then the future becomes increasingly unclear — where will the lion’s share of digital marketing take place?

What we’ve already seen is that screen estate on mobile search leads to a winner-takes-all distribution, one even more pronounced than on desktop (which follows the same pattern). This means we’re going to see less and less value in ranking anywhere below the first page of results. It’s safe to assume that the only worthwhile spots will be the top ad positions and anything just below the OneBox results.

Programmatic ad networks are probably rubbing their hands with glee right now as, post-search, they would seem to have the major marketplaces sewn up (content publishers and social audiences).

Peak Web Index Meets Peak Screen Estate

With less screen estate on mobile devices, Google may have a captive market at the top of search results, but the overall utility of its software is greatly diminished. It also means that where the information archive was once effectively a free-for-all campaign map to be augmented with paid search, we will begin to see an even more heavily curated set of results. What’s clear is that the brands with the biggest PPC budgets and programmatic capabilities will quickly rise to the top.

In many ways, this has already happened. The semi-predictive nature of Google Instant was seemingly proof that the search engine officially knew everything. But it’s all an illusion! Google’s web crawler may have a trillion pages in its sights, but, as Mike Grehan has said countless times for years, the pages that make up the ranking web represent a mere fraction of the entire database. Talking to Aaron Wall on SEOBook in 2009, Grehan said, “Google is saying they now have seen a trillion URLs and have no certainty that they’ll ever be able to crawl them in a timely fashion, maybe we’ve reached the zenith of the crawl.” In other words, it’s not that Google “knows everything”; rather, it has a better idea of what is worth searching and what isn’t. In a post-Panda and post-Penguin world, Google’s search results are simply a corpus of the web, rather than an index of the entire thing.

That means Google already more closely resembles an app than it does a browser. In fact, this has been the case for a number of years. One could argue that this resemblance might account for the decline in mobile browser usage itself, although making that case would probably be a losing battle.

What Does All This Mean for SEO and SEM?

For starters, it means the fundamental nature of the link graph has already changed, is currently still mutating, and will continue to transform significantly over the next 5-10 years.

Yes, Goodman’s argument that the mobile web isn’t going anywhere, and that it offers the strongest strategic advantage through its current ability to link assets together, may have some value. But my impression is that all the signs point to inter-app linking restructuring and disintermediating the entire knowledge economy, as reflected in the web and as interpreted by search engines.

Hot button marketing and advertising industry issues such as “Mobilegeddon” and “Ad Blocking” have made it clear that the mobile web has reached the point of stagnation. Currently, all of the innovation is being focused on the internal visibility of apps, the upshot of which will inevitably be inter-app linking. And we all know what the impact of links has been on search engines in the last 15 years.

The parallel vision and insight that emerges from between the lines in both Goodman and Boland’s reports is this: regardless of how you do it, the ability to link assets together is far and away the biggest driver of success on the mobile platform. Moreover, the less friction involved in the creation of those links, the better the chance that you will, in fact, succeed.

What Time Lost in Browsers to Apps Means to Digital Marketers

The good news is that whatever linking theory you subscribe to (i.e., mobile web or inter-app), the evidence suggests that in order to succeed on mobile, you just need more pages indexed in search. However, if you can afford to think like a media buyer and triangulate your brand across a plethora of intent signals, it’s probably not a bad idea. In the short term, there really do seem to be some huge, winner-takes-all mobile marketing opportunities for those who can fully translate the desktop consumer experience onto the mobile platform.

The bad news is that even if you want to get out ahead, long-term project planning is tricky because the app experience itself is fragmenting.

Apps are no longer just for mobile phones — they’re now popping up in all sorts of interesting places, including TVs, game consoles, thermostats, bathroom scales, electric toothbrushes, sneakers and dog collars. There are even apps within apps, like Slack, Trello, and Gmail. And they’ll soon spread to self-driving cars, drones, fridges, smart clothes and smart mirrors.

Informationally oriented linking created concepts such as hubs and authorities; Instant Answers and the Knowledge Graph did a good job of translating the search experience from desktop to mobile. But those concepts are reaching their limits in a purely app-based information ecosystem. In-app linking will likely lead to a more transaction-based information architecture across the entire web. The web’s role as an information archive is going to lose ground to the concept of the web as an intent map, and the direction of user experience will change from pull to push.

With that in mind, I have to agree with Boland’s postulation that Force Touch will be a deciding factor in the immediate future of search (that is to say, the shortest of the longer-term view).

Apple’s Force Touch arguably creates as big an incentive for developers to expose the internal pages of apps, rethink conversion points within the consumer journey, and find ways to shorten it, as visibility does in Google Search.

These kinds of open data points represent a return to the original organic strategies of desktop search marketing, adapted for a mobile-centric user experience. Exposing the internal pages of an app won’t just assist search indexing; it provides an opportunity to rethink conversion points across a user decision journey that spans multiple apps, and to find ways to shorten it. That exposure alone is going to make it easier for app developers and digital marketers to identify common audience intersection points between apps.

I wish I could say this in plainer English, but for me, all of this points to the app-storification of search engines, native intent-based interoperability, and programmatically personalized interaction design.

The Problem is People Are Changing Faster than Technology

Nonetheless, the rookie marketing move at this moment would be to assume the issue is a purely technology- and device-related one. In reality, the problem is people. Luckily, we know where all of that lost time is spent and where the lost links are found, just as Google does. Simply put, it’s all in apps.

In an interview with BrightEdge back in 2012, Mike Grehan brought the entire app fragmentation vs user behavior debate down to earth a bit with a hefty dose of common sense:

“It is more about the convergence. I’m not really saying that you either do search or you do social. I don’t believe that anybody actually wakes up in the morning and says, “I think I’ll go and do some searching.” Everybody’s task oriented. Sure, we’ve gone to places like Google, Yahoo, Microsoft or Bing to look for answers, and it’s been a little bit of a solitary thing…

But it’s becoming much more of a group event more than anything else, and there’s a lot more research that goes on. People tend to tap into their own networks first of all, and these are the kind of networks where people feel confident because they are friends and the information is much more verifiable…

[Look at] network theory, the way we come together as human beings. It’s about how the end user is changing, how we have a trans-media phenomenon. We consume media in an entirely different way than the way that we used to. So, marketing is changing not so much because of the technology or the devices that people are connected with. It’s changing because the consumer is changing and we have to adapt to that greater need from the consumer.”

Takeaways to Think About

To summarize, the marketing problem is this:

  • People’s technology habits are changing from desktop-focused to mobile-centric.
  • The dual trends of search indexes and web publishers consolidating properties into apps are eroding link diversity, the primary digital design factor in organizing information.
  • What this means is that the purchasing funnel isn’t dead, per se. Rather, it’s incredibly fragmented across increasingly complex technological conduits, which is likely to increase the cost of marketing.
  • And even if you have all the budget in the world, the real deciding factor will be imagination, as even Google can’t fully connect the dots.

So, now that you have all of the necessary information, how are you going to connect it all together?

Post from Jonathan Allen


Are Your Analytics Telling the Right Story?

Posted by Bill.Sebald

A process can easily become a habit. A habit may not change without awareness or intervention.

Before it becomes a habit, a process should be adjusted to change along with new goals, constant learning, experimentation, and so on.

Considering your time in analytics, are you engaging in a process, or in an outdated habit?

That’s a real question that digital marketing practitioners should ask themselves. Inherently, marketers tend to be buried with work, reusing templates to speed up results. But many agencies lean on those templates a little too much, in my opinion.

Templates should never be written in stone.

If your company is pumping out canned reports, you’re not alone. I handle business development for our company and regularly ask prospects to explain or share the reports they’ve received in the past. Sometimes what I see is truly discouraging: outdated, wasteful reporting is often the very reason businesses search for new SEO vendors.

Look—I’m all for scalability. It’s a huge help. But some things can’t be scaled and still be successful, especially in today’s SEO climate—or, frankly, marketing in general. Much of what was scalable in SEO prior to 2011 is now penalty-bait. Today’s analytics tools and platforms can slice and dice data faster than anything Ron Popeil ever sold, but the human element will always be necessary if you want your marketing to dominate.

Find the stories to tell

I like to tell stories. I’m real fun in the pub. What I’ve always loved about marketing is the challenge to not only find a story, but have that story change something for the better. I like adding my layer based on real data and experimenting.

Analytics work is all about finding the story. It’s detective work. It’s equal parts Sherlock Holmes, Batman, and Indiana Jones. If you’re lucky, the story jumps out with very little digging. More likely, though, you’ll be going on some expeditions. It’s common to start with a hunch or a random click through the reports, but you need to always be looking for the story.

A great place to start is through client conversations. We schedule at least one monthly call with our clients, where it’s truly a discussion session. We get conversations going to pull intel out of the key stakeholders. Case in point: Recently, we discovered through an open discussion that one of our clients had great success with an earlier email campaign targeted to business owners. There was specific information customers positively responded to, which was helpful in recent content development on their website. It’s amazing what you can learn by asking questions and simply listening to responses.

We should be true consultants, not report monkeys. Dive into the discussions started and enjoy the ride. I guarantee you’ll take note of a few ripe areas to review next time you log into your Google Analytics account.

An impromptu survey says it’s a time issue

Most SEO engagements are designed around a block of purchased hours. Hopefully the client understands they’re not only buying your time to complete SEO tasks, but also your expertise and analysis. If someone on your team were to say, “I don’t have time to do analysis because all my tasks used up their budget this month,” then you really need to question the value of the chosen tasks. Were they picked based on front-loaded analysis, or were they simply tasks pulled out of guesswork?

A few weeks ago I pushed a quick SurveyMonkey survey out on Twitter and LinkedIn. Thanks to a few retweets, 94 people responded (please consider the following results more directional than scientific—I’m well aware it’s a shallow survey pool). I asked two questions:

  1. If you work in-house or have clients, how often do you log into your clients’ analytics? (Multiple choices ranged from several times a day to a few times a month).
  2. Do you, or do you not, get enough time in Analytics to interpret the data?

The responses:

[Survey results chart]

While some do make a habit of logging into analytics once or more a day, more do not. Is it required to check under the hood every day? Personally, I believe it is—but your answer may vary on that one. If something went south overnight, I want to be aware of it before my client tells me. After all, that’s one of the things I’m paid for. I like the idea of being proactive, not reactive.

More notable is that most respondents didn’t feel they get enough time in analytics. That should absolutely change.

There was also a field for respondents to elaborate on their selections. There were several comments that jumped out at me:

“In house, day to day tasks and random projects prevent me from taking the deep dives in analytics that I feel are valuable.”

“It’s challenging to keep up with the changes and enhancements made in Google Analytics in particular, amongst other responsibilities and initiatives.”

“Too many things are on my plate for me to spend the time I know I should be spending in Google Analytics.”

“Finding the actionable info in Analytics always takes more time than expected—never enough time to crunch the numbers!”

“I log in to ‘spot check’ things but rarely do I get to delve into the data for long enough to suss out the issues and opportunities presented by the data.”

These results suggest that many marketers are not spending enough time with analytics—possibly not because they don’t see the value, but simply because they don’t have the time. Jim Rohn’s line, “Either you run the day, or the day runs you,” is apropos here: you must make time. You need to get on top of all the people filling your plate. It’s not easy, but it needs to be done.

Get on top of those filling your plate. Kind of like professional crowd surfing.

Helpful resources

Dashboards are fantastic, but I rarely see them set up in analytics platforms. One of the best ways to get a quick glimpse of your key metrics is with a dashboard, and all good analytics platforms provide the ability to build custom ones. Get into work, grab a coffee, fire up the computer, click your dashboard bookmark. (I recommend that order!) Google Analytics, which most of us probably use, provides some decent dashboard options, though they’re limited compared to enterprise analytics platforms.

However, this basic dashboard is the minimum you should review in analytics. We’ll get deeper soon.

Building these widgets is quite easy (I recently created a tutorial on my site). There are also websites that provide dashboards you can import into Google Analytics. Dashboard Junkie is a fun one. Here are some others from Econsultancy and Google themselves.

It’s not just analytics platforms that offer dashboards. There are several other vendors in the SEO space that port in analytics data and mesh with their own data—from Moz Analytics to SearchMetrics to Conductor to many, many others.

SEMrush has a unique data set that marketers should routinely review. While the traffic data in your analytics will be truer, if you’re targeting specific pages you may be interested in monitoring keyword rank counts:

Are backlinks a target? Maybe you’d find Cognitive SEO’s dashboard valuable:

[Cognitive SEO dashboard screenshot]

RankRanger is another SaaS we use. It’s become way more than just our daily rank tracking software. The data you can port in creates excellent snapshots and graphs, and strong dashboards:

[Rank Ranger dashboard screenshot]

It also offers other graphing functionality to make pretty useful views:

While some of the bigger platforms, like SearchMetrics and Conductor, make it easier to get a lot of information within one login, I’m still finding myself logging into several programs to get the most useful data possible. C’est la vie.

Analytics is your vehicle to identifying problems and opportunity

Remember, dashboards are simply the “quick and dirty” window into your site. They help spotlight drastic changes, and make your website’s general traction more visible. Certainly valuable for when your CMO corners you by the Keurig machine. It’s a state of the union, but doesn’t focus on subsections that may need attention.

Agencies and consultants tend to create SEO reports for their clients as a standard practice, though sometimes these reports become extremely boilerplate. Boilerplate reports essentially force you to look under the same rocks month after month. How can you get a bigger view of the world if you never leave your comfortable neighborhood? A new routine needs to be created by generating new reports and correlations, finding trends that were hidden, and using all the tools at your disposal (from Analytics to link tools to competitive tools).

Your analytics app is not a toy—it’s the lifeblood of your website.

Deeper dives with Google Analytics

Grouped pages lookup

A quick way to look at chunks of the site is by identifying a footprint in the URL and searching with that. Go to Behavior > Site Content > All Pages or Landing Pages. Then, in the search bar right below the graph, search for the footprint. For example, say www.mystoreisdabomb.com/blog/2015/ is a real URL; if you want to see everything in the blog, enter */blog/ into the search bar. This is especially useful for taking the temperature of an eCommerce category.
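Outside of GA, the same grouping is easy to reproduce on an exported page list. A minimal sketch—the page paths and footprint below are made up for illustration:

```python
import re

# Toy list of page paths, standing in for a GA "All Pages" export.
pages = [
    "/blog/2015/new-arrivals",
    "/blog/2014/holiday-gift-guide",
    "/products/barware/shakers",
    "/about-us",
]

def grouped_pages(paths, footprint):
    """Keep only the paths containing the footprint, much as GA's
    in-report search does when you enter something like /blog/."""
    pattern = re.compile(re.escape(footprint))
    return [p for p in paths if pattern.search(p)]

blog_pages = grouped_pages(pages, "/blog/")
```

The same filter works for any section footprint, such as an eCommerce category path.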

Segment sessions with conversions/transactions

So often in SEO we spend our time analyzing what’s not working or posing as a barrier. This report helps us take a look at what is performing (by leads or sales generated) and the customer behavior, channels, and demographic information that goes along with that. Then we can identify opportunities to make use of our success and improve our overall inbound strategy.

Below is a deeper dive into the conversions “Lead Generation” segment, although these same reports can just as aptly be applied to transactions. Ultimately, there are a lot of ways to slice and dice the analysis, so you’ll have to know what makes sense for your client, but here are three different reports from this segment that provided useful insights that will enhance our strategy.

  • Conversions
    One of the easiest and most valuable segments. Directions: under any report, go to Add a Segment > Sessions with Conversions > Apply.
  • Demographics – age, gender, location
    For example, our client is based in Pennsylvania, but is receiving almost as many request form submissions from Texas and New York, and has a high ratio of request form submissions to visitors for both of these other states. Given our client’s industry, this gives us ideas on how to market to these individuals and additional information the Texans may need given the long distance.
  • Mobile – overview, device type, landing pages
    For this client, we see more confirmation of the so-called “micro-moment”: our mobile users spend less time on the site, view fewer pages per visit, have a higher bounce rate, and are more likely to be new users (less brand affinity). This would indicate that the site is mobile-optimized and performing as expected. From here, I would next go into mobile traffic segments to find pages that aren’t receiving a lot of mobile traffic but are similar to those that are, and find ways to drive traffic to those pages as well.
  • Acquisition
    Here we’re looking at how the inbound channels stack up for driving conversions. Organic and paid channels are neck and neck, although referral and social are unexpected wins (social, we’re glad to have finally proven your ability to make money!). We’ll now dig deeper into the referring sites and social channels to see where the opportunities lie.
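The channel comparison above can be sketched offline too. This toy example uses hypothetical session records (not the GA API) to compute conversion rate by channel:

```python
from collections import defaultdict

# Hypothetical session records, standing in for exported analytics data.
sessions = [
    {"channel": "organic", "converted": True},
    {"channel": "organic", "converted": False},
    {"channel": "paid", "converted": True},
    {"channel": "referral", "converted": True},
    {"channel": "referral", "converted": False},
    {"channel": "referral", "converted": False},
]

def conversion_rate_by_channel(records):
    """Tally sessions and conversions per channel, then divide."""
    totals = defaultdict(int)
    wins = defaultdict(int)
    for r in records:
        totals[r["channel"]] += 1
        if r["converted"]:
            wins[r["channel"]] += 1
    return {ch: wins[ch] / totals[ch] for ch in totals}

rates = conversion_rate_by_channel(sessions)
```

Swapping `converted` for a transaction flag gives the same view for eCommerce data.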

Assisted conversions

There’s more to the story than last click. In Analytics, go to Conversions > Multi-Channel Funnels > Assisted conversions. Many clients have difficulty understanding the concept of attribution. This report seems to provide the best introduction to the world of attribution. Last click isn’t going to be replaced anytime soon, but we can start to educate and optimize for other parts of the funnel.

True stories from analytics detective work

Granted, this is not a post about favorite reports. But this is a post about why digging through analytics can open up huge opportunities. So, it’s real-life example time from Greenlane’s own experience!

Story 1: The Forgotten Links

The client is a big fashion brand. They’ve been a popular brick-and-mortar retail destination since the early ’80s, but only went online in 1996. This is the type of company that earns links through its brand ambassadors and trendy styles. SEO wasn’t the mainstream channel it is today, so it’s likely they’d made some serious architecture changes since the ’90s, right?

For this company, analytics data can only be traced back about seven years. We thought, “Let’s take a look at what drove traffic in their early years. Let’s see if there were any trends that drove volume and sales where they may be slipping today. If they had authority then, and are slipping now, it might be easier to recoup that authority versus building from scratch.”

The good news—this brand had essentially maintained the authority they launched with, as there were no real noticeable gaps between search data then and search data today. But in the digging, we uncovered a gem. We found a lot of URLs that used to draw traffic but are not in their site tree today. After digging further, we found a redesign had occurred in the late ’90s. SEO wasn’t factored in, creating a ton of 404s. These 404s weren’t even being charted in Google Webmaster Tools, yet they are still being linked to today from external sites (remember, GWT is still quite directional in terms of the data it provides). Better yet, we pulled links from OSE and Majestic and saw that thousands of forgotten links existed.

This is an easy campaign—create a 301 redirect matrix for those dead pages and bring those old backlinks to life.
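A redirect matrix like that is usually just an old-to-new URL mapping turned into server rules. A minimal sketch—the URLs are invented, and the output is Apache mod_alias syntax (your server may use nginx or another format):

```python
# Hypothetical old-to-new URL mapping recovered from link data.
redirect_matrix = {
    "/old/spring-collection.html": "/collections/spring",
    "/old/about.htm": "/about-us",
}

def to_htaccess(matrix):
    """Emit one Apache 'Redirect 301' line per dead page."""
    return "\n".join(
        f"Redirect 301 {old} {new}" for old, new in sorted(matrix.items())
    )

rules = to_htaccess(redirect_matrix)
```

Generating the rules from a spreadsheet keeps the matrix reviewable by the client before anything goes live.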

But we kept wondering what pages were out there before the days where analytics was implemented. Using the Wayback Machine, we found that even more redesigns had occurred in the first few years of the site’s life. We didn’t have data for these pages, so we had to get creative. Using Screaming Frog, we crawled the Wayback Machine to pull out URLs we didn’t know existed. We fed them into the link tools, and sure enough, there were links there, too.
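We used Screaming Frog for the crawl, but the idea can be sketched with the URL lists the Wayback Machine exposes. A minimal sketch—the domain and URLs below are made up, and the text stands in for the kind of flat URL list the archive’s CDX API can return:

```python
# Sample lines in the flat format the Wayback Machine's CDX API can
# return (one archived URL per line; this sample data is made up).
cdx_response = """\
http://www.example-fashion.com/lookbook98.html
http://www.example-fashion.com/catalog/dresses.html
http://www.example-fashion.com/lookbook98.html
"""

def forgotten_urls(cdx_text, known_urls):
    """Deduplicate archived URLs and keep only those missing from
    the current site tree; these are 301-redirect candidates."""
    archived = {line.strip() for line in cdx_text.splitlines() if line.strip()}
    return sorted(archived - set(known_urls))

missing = forgotten_urls(
    cdx_response,
    ["http://www.example-fashion.com/catalog/dresses.html"],
)
```

Feeding `missing` into your link tools then shows which forgotten pages still hold backlinks.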

Story 2: To “View All” or Not To “View All”

Most eCommerce sites have pagination issues. It’s a given. A seasoned SEO knows immediately to look for these issues, and uses rel=”next” and rel=”prev” to help Google understand the relationships between pages. But does Google always behave the way we think it should? Golly, no!

Example 2 is a company that sells barware online. They have a lot of products, and tend to show only “page 1” of a given category. Yet, the analytics showed instances where Google preferred to show the view all page. These were long “view all” pages, which, after comparing to the “page 1” pages, showed a much lower bounce rate and higher conversions. Google seemed to prefer them in several cases anyway, so a quick change to default to “view all” started showing very positive returns in three months.

Story 3: Selling What Analytics Says to Sell

I have to change some details of this story because of NDAs, but once upon a time there was a jewelry company that sold artisan products. They were fond of creating certain kinds of keepsakes based on what sold well in their retail stores. Online, though, they weren’t performing very well selling these same products. The website was fairly new and hadn’t quite earned the footing they thought their brand should have, but that wasn’t the terminal answer we wanted to give them. Instead, we wanted to focus on areas they could compete with, while building up the entire site and turning their offline brand into an online brand.

Conversion rates, search metrics, and even PPC data showed a small but consistent win on a niche product that didn’t perform nearly as well in the brick-and-mortar stores. It wasn’t a target for us or the CEO. Yet online, there was obvious interest. Not only that, with low effort, this series of products was poised to score big in natural search due to low competition. The estimated search volume (per Google Keyword Planner) wasn’t extraordinary by any stretch, but it led to traffic that spent considerable dollars on these products. So much so, in fact, that this product became a focus point of the website. Sometimes, mining through rocks can uncover gold (jewelry pun intended).

Conclusion

My biggest hope is that your takeaway after reading this piece is a candid look at your role as an SEO or digital marketer. You’re a person with a “unique set of skills,” being called upon to perform works of brilliance. Being busy does create pressure; that pressure can sometimes force you to look for shortcuts or “phone it in.” If you really want to find the purest joy in what you’ve chosen as a career, I believe it’s from the stories embedded within the data. Go get ’em, Sherlock!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


What is Programmatic Marketing, Buying and Advertising

Unless you have been hiding under a rock for the past year or so, you will no doubt have heard programmatic marketing or advertising being bandied around by agencies, in blog posts, at conferences, and by those who want more efficient spend in the advertising arena. The IAB estimates that by 2018 programmatic will have grown from 28% of marketing spend in 2013 to over 80%. It’s time to jump on board and think about where and how you’ll be spending your digital budget.


Programmatic buying will make up $14.88Bn of the approximately $58.6Bn digital advertising pie this…


This post will give you a high-level, whistle-stop tour of what programmatic is, why it is being projected as a “game changer” for digital advertising budgets, and how it can benefit you.

What is Programmatic?

Simply put, programmatic is a way to target the types of audience you wish to show your advertising to, encompassing segments across demographics such as age, gender, and social standing, down to geographic targeting of certain areas of the country. As with paid search, you can also limit ads to times of day and frequency, and choose which publishers you want your ads to show on, so you are only paying for highly effective ads, delivered to the right people at the right time. It’s a change from traditional ad buying, where a buyer agrees to run a certain number of ads with a publisher and is locked into the contract.

Programmatic media buying, marketing and advertising is the algorithmic purchase and sale of advertising space in real time. During this process, software is used to automate the buying, placement, and optimisation of media inventory via a bidding system. Automating the process means that it can be done in real time and doesn’t rely on the human touch, manual insertions and manual trading.

Programmatic media buying allows the owner/brand to tailor a specific message and creative to the right person, at the right time, in the right context, using the brand’s audience insight about the kind of customers it wants to reach. This methodology should deliver far more precision and personalisation of messaging and media, resulting in more efficiently targeted campaigns, and less of the “spray and pray” approach to digital advertising, which is less targeted and based on sheer volume (mainly of impressions).
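As a toy illustration of that targeting logic, here’s a minimal sketch; the campaign rules, field names, and ad request are all invented for the example:

```python
# Hypothetical campaign targeting rules.
campaign = {
    "age_range": (25, 44),
    "genders": {"f", "m"},
    "regions": {"London", "Manchester"},
}

def matches_targeting(request, rules):
    """Decide whether an incoming impression fits the campaign's
    audience definition (age band, gender, geography)."""
    lo, hi = rules["age_range"]
    return (
        lo <= request["age"] <= hi
        and request["gender"] in rules["genders"]
        and request["region"] in rules["regions"]
    )

# A hypothetical ad request built from audience data.
request = {"age": 31, "gender": "f", "region": "London"}
eligible = matches_targeting(request, campaign)
```

Real platforms layer in frequency caps, dayparting, and publisher lists, but the core decision is this kind of rule match.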
Programmatic stats 2015 (image: http://www.forbes.com/sites/steveolenski/2015/09/18/the-current-state-of-programmatic-advertising/)

How is it different?

Again, keeping this high-level (you will hear lots about SaaS, DSPs, DMPs, etc., which I will not go into here): programmatic is about driving efficiencies in spend and resource. Traditionally, ad buyers manually purchase digital ad space (inventory) for their clients—the human touch again. Programmatic makes the buying, placement, and optimisation process more efficient because it’s done by computers and algorithms, removing some of the mundane work of dealing with different tagging requests and insertion orders, and it should cut down time to market too.

What about Programmatic RTB (real-time bidding)?

Programmatic RTB is different from PPC, as it’s done for display advertising only. Again, it’s an automated way of transacting media, bought and sold via technology platforms in real time. The process of RTB is set: sellers of the ad space (publishers) make the space available to buyers (brands/advertisers), who then bid for that ad space. The real-time part comes into play when brands/advertisers set different variables such as price, the audience segment profiles (types of people) they are trying to reach with their message, and the overall network reach.

But how does it work in real life?

When a webpage that has space for an advert is being loaded (and there aren’t many that don’t serve ads of some kind, eh?), information that’s been gathered about both the user/visitor (their audience segment based on web behaviour, etc.) and the context of the site or webpage it’s being loaded into is sent to an ad exchange. The space gets auctioned off to the highest bidder and their ad is placed in the slot; all of this is done in milliseconds, with no apparent detriment to page load times or user experience.
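The auction step can be sketched in a few lines. Many exchanges historically ran second-price auctions, where the highest bidder wins but pays roughly the runner-up’s bid; the bidder names and amounts here are invented:

```python
# Hypothetical bids arriving at an ad exchange for one impression (in $ CPM).
bids = {"brand_a": 2.10, "brand_b": 3.40, "brand_c": 1.75}

def run_auction(bid_map):
    """Second-price auction: highest bidder wins the impression,
    but the clearing price is the second-highest bid."""
    ranked = sorted(bid_map.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

winner, price = run_auction(bids)
```

In a real exchange this decision happens in milliseconds, once per ad slot per page load.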

How do I know if an advert is programmatic or “normal display”?

Well, you kind of don’t! Not 100%, anyway. Next time you see an advert on a webpage, take a second or two to look at it and ask: is this a targeted advert, served programmatically based on you, your segment, and your likelihood of being an ideal person to click on it? Or is it just inventory purchased for spray-and-pray display, where volume (spreading the message as far and wide as possible) is expected to drive the traffic or conversions?

Is programmatic the future of all advertising?

Speaking as someone who works client-side: isn’t it right that we would want our advertising and campaigns marketed to the right type of person, at the right time for them, and in the context and place where they digest their media? Then my answer is yes!

Programmatic is moving quickly into the realms of TV, radio, and out-of-home (OOH) advertising. It’s not quite as established there as it is in display advertising just yet, but all of the background data that supports those media outlets is already there; it just needs to be used and “welcomed” by publishers. It certainly will be, very soon!

I still believe there is a place for the “old inventory” way of buying display ad space, which could be used for broader brand awareness, for instance by a brand that offers lots of different products to different audiences. For me, though, I will be focusing my planning on programmatic!

Post from Russell O’Sullivan


How the search teenager faces the incomprehension of good old SMBs

Although 15 years old, Search remains in its infancy: it is bound to evolve and become even more pivotal in our lives as consumers and marketers. In the last decade it has already drastically mutated, influenced by changes in consumer behaviour and the intrinsic nature of the web. This evolution has accelerated in recent years, and is expected to further increase its pace with the rise of game changers like mainstream social search and new devices such as smartphones, wearables, and connected TV.

In many respects, it is not that surprising to see conflicting trends in such a maturing marketing channel. After all, which teenager does not come with contradictions? On the one hand, search advertising is the driving force behind the growth of digital advertising spend (52% of all UK digital ad spend, according to the latest IAB report), a success built upon its accessibility to advertisers of all sizes and backgrounds. On the other hand, no matter how long the tail of advertisers is, and for how long brands have been practicing the discipline, there is still a lot of education to be conducted with small and medium businesses.

As industry observers, it is easy to assume that our own online savviness is pervasive across all layers of the economy. It is thus insightful to ground ourselves in the reality of decision makers. This is why Latitude White and Bing Ads commissioned research to poll British businesses on their sentiment and behaviours towards digital, and online advertising in particular. The full report surfaces rampant knowledge and competency gaps, tied to long-lasting misconceptions, which are still impairing the adoption of search marketing amongst the businesses that would probably benefit most from its flexibility, accountability, and affordability. Four stuck with me:

1. Overconfidence through ignorance. 88% of surveyed SMEs are convinced that they know how effective their marketing activity is (a lot better than John Wanamaker), but only a small majority (55%) have web analytics in place to measure campaign effectiveness.

2. Build it and they will come, won’t they? 60% of SMBs with a website do not promote it. Imagine opening a shop in a back alley with no signage and expecting customers to miraculously flock to your door. But as 75% consider offline recommendations the single most important way their business is discovered, their reluctance to invest in digital is somewhat explained. That mindset obviously overlooks the impact of online reputation and the role search plays throughout the purchase process (credential checks, price comparison, reviews…). Even though 83% feel it is critical or important to appear on the first page of search engines, they struggle to reconcile that with investment.

3. The sirens of the free internet. When asked which digital tactic performs best for their business, SEO and social both top the league. Debatable yet plausible, until you dig into the reasons for such a plebiscite amongst these small businesses. When 57% of the companies rank organic social activity as the cheapest tactic, it is because they assume a return on zero investment, ignoring the time, effort, and opportunity costs. 54% spend two or more hours per week on SEO (only 21% outsource it), and 55% spend two or more hours per week on organic social (only 6% outsource it), time which, once converted into man-hours, would outweigh their advertising budget.

4. The perceived mountain to climb of PPC. The smaller the company, the more likely it is that a lack of competency and apprehension about outsourcing will exacerbate the barriers to entry. Even PPC, whose pay-per-click auction model makes it highly affordable, is deemed out of reach by the smallest businesses: 65% of businesses with 10 or fewer employees disagree that PPC offers a good return on investment, while 68% of businesses with 11 or more employees agree that it does.
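The man-hour point in misconception 3 is easy to make concrete. A worked sketch with purely illustrative numbers (the hourly cost and ad budget are assumptions, not figures from the survey):

```python
# Illustrative numbers only: 2 hours/week each on SEO and organic
# social, at an assumed £25/hour internal cost, versus an assumed
# £300/month advertising budget.
hours_per_week = 2 + 2          # SEO + organic social
hourly_cost = 25                # £ per man-hour (assumed)
weeks_per_month = 4

hidden_monthly_cost = hours_per_week * hourly_cost * weeks_per_month
ad_budget = 300                 # £ per month (assumed)

costs_more_than_ads = hidden_monthly_cost > ad_budget
```

Under these assumptions the "free" tactics cost £400 a month in staff time, more than the ad budget they were meant to avoid.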

There is much more to this research than these four misconceptions, for instance the impact of company size, industry, and appetite to grow on digital adoption. However, the overwhelming takeaway reinforces the need for key industry players – search engines, tool providers, agencies, and sites like this one – to continue their educational efforts. “Educate,” from the Latin “e-ducere,” means to lead out – out of ignorance, out of infancy. When our discipline and its core principles are embraced by all, then, and only then, will the entire British economy benefit fully from this affordable, scalable, and accountable marketing lever. Search will finally have reached adulthood.

Post from Cedric Chambaz


Will Google Bring Back Google Authorship?

Posted by MarkTraphagen

Recently, Google Webmaster Trends analyst Gary Illyes surprised many of us with a remark he made during his keynote Q&A with Danny Sullivan at SMX East in New York City. Illyes said that he recommended webmasters not remove the rel=author tag from their site content.

Google had used rel=author as part of its Google Authorship feature that (potentially) displayed a special author rich snippet in search results for content using the tag. Google ended support of this feature in August 2014.

The phrase that made everyone sit up and say, “Did he just say that?” was this: “…because it is possible Google might make use of [rel=author] again in the future.”

Even though Google’s John Mueller made the same recommendation after he announced that Google was no longer making use of Google Authorship in search (to be precise, Mueller said leaving the tag in place “did no harm”), Illyes’s statement seemed to shock many because Google has said nothing about Google Authorship or the rel=author tag since they said they stopped supporting it.

In a subsequent Twitter exchange I had with Gary Illyes, he explained that if enough users are implementing something, Google might consider using it. I asked him if that meant specifically if more people started using rel=author again, that Google might make use of it again. Illyes replied, “That would be safe to say.”

Before I provide my commentary on what all this means, and whether we should expect to see a resumption of Google Authorship in Google Search, let me provide a brief overview of Authorship for anyone who may not be familiar with it. If you already understand Google Authorship, feel free to skip down to the Will Google Bring Back Authorship? section.

A brief history of Google Authorship

Google Authorship was a feature that showed in Google Search results for about three years (from July 2011 until August 2014). It allowed authors and publishers to tag their content, linking it to an author’s Google+ profile, in order to provide a more-certain identification of the content author for Google.

In return, Google said they might display an authorship rich snippet for content so tagged in search results. The authorship rich snippet varied in form over the three years Authorship was in use, but generally it consisted of the author’s profile photo next to the result and his or her byline name under the title. For part of the run of Authorship, one could click on an author byline in search to see results showing related content from that author.

Google Authorship began with an official blog post in June of 2011 where Othar Hansson announced that Google would begin supporting the rel=author tag, but with no specifics on how they might use it.

Then in a July 2011 video, Hansson and Matt Cutts explained that Google+ would be the hub for author identification, and that Google might start showing a special Authorship rich snippet result for properly tagged content.

Those rich snippets slowly began appearing for more and more authors using rel=author over the next several months. During the three years of the program, Google experimented with many different configurations of the rich snippet, and also which authors and content would get it in response to various search queries.

Interest in Google Authorship from the SEO and online marketing communities was spurred even more by its possible connection to Google’s Agent Rank patent, first revealed by Bill Slawski. In this patent, Google described a system by which particular “agents” or “entities” could be identified, scored by their level of authority, and that score then be used as a search ranking factor.

Since one of the types of agents identified in the patent was a content author, the patent rapidly became known as “author rank” in the SEO community. The connection with Authorship in particular, though, came from Cutts and Hansson stating in the above-mentioned Authorship video that Google might someday use Authorship as a search ranking factor.

Speculation about so-called Author Rank, and whether or not it was “on” as a ranking factor, continued throughout the life of the Authorship program. Throughout that period, however, Cutts continued to refer to it as something Google might do in the future. (You can find my own take on why I believed Authorship was never used as a direct ranking factor here.)

The first hint that Google might be drawing back from Authorship came at Pubcon Las Vegas in October 2013 when Matt Cutts, in his keynote “State of Search” address, revealed that at some point in the near future Google would be cutting back on the amount of Authorship rich snippets shown by “around 15%.” Cutts said that in experiments, Google found that reducing Authorship rich snippets by that much “improved the quality of those results.”

Sure enough, in early December of that year, Moz’s Peter Meyers detected a rapid decline over several days in the number of Authorship rich snippets in search results, as measured by his Mozcast Features tool.

Around that same time Google implemented what I called “two-class Authorship,” a first class of authors who continued to get the full rich snippet, and a second class who now got only a byline (no author photo).

Finally, in August 2014, this author was contacted directly by John Mueller, offering to share some information under an NDA embargo until the information was made public. In my call with Mueller, he told me that he was letting me know 24 hours in advance that Google Authorship was going to be discontinued. He added that he was making this call as a courtesy to me since I had become the primary non-Google source of information about Authorship.

With that information, Eric Enge and I were able to compose an in-depth article on Authorship and its demise for Search Engine Land that went live within two minutes of John Mueller’s own public announcement on Google+. In our article linked above, Eric and I give our takes on the reasons behind the death of Authorship and the possible future of author authority on Google.

Will Google bring back Authorship?

From the day Authorship was “killed” in August 2014, we heard no more about it from Google—until Gary Illyes’s remarks at SMX East. So do Gary’s remarks mean we should expect to see a return of Google Authorship to search results?

I don’t think so, at least not in any form similar to what we saw before.

Let me explain why.

1. Illyes made no promise. Far too often people take statements about what Google “could” or “might” do from spokespersons like Gary Illyes, Matt Cutts, and John Mueller and translate “could/might” to “will.” That is unfair to those spokespeople, and an abuse of what they are saying. Just because something is spoken of as a possibility, it does not follow that a promise is being made.

2. It ain’t broke so…. So if there are no actual plans by Google to restore Google Authorship, why would Illyes make a point of stating publicly that authors and publishers should continue to use the rel=author tag? I think a primary reason may be that once Google gets any set of people to begin using any kind of schema, they’d rather have it remain in place. Anything that helps better organize the information on web pages is good for a search engine, whether or not that particular information is “in play” at present.

In the case of rel=author, I think it still may be useful to Google to be able to have confidence about content connected with certain authors. When Authorship ended, many people asked me if I was going to remove the tags from my content. My response: why would I? Having them there doesn’t hurt anything. More important, as an author trying to build my personal brand reputation online, why wouldn’t I want to give Google every possible hint about the content with which I should be identified?
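For anyone keeping (or adding) the tags, this is a minimal sketch of how rel=author was historically implemented; the Google+ profile ID shown is a placeholder, not a real account:

```html
<!-- Option 1: link the visible byline to the author's Google+ profile,
     with rel="author" marking it as an authorship link. -->
<a href="https://plus.google.com/112345678901234567890" rel="author">Mark Traphagen</a>

<!-- Option 2: a single page-level link element in the document <head>. -->
<link rel="author" href="https://plus.google.com/112345678901234567890">
```

During the Authorship program, verification also required the reciprocal step: the Google+ profile had to list the publishing site under its “Contributor to” section, closing the two-way loop between content and author.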

3. The reasons why Authorship was killed still remain. As with any change in Google search, we’ll probably never know all the reasons behind it, but the public reasons stated by John Mueller centered around Google’s commitment to a “mobile first” user experience strategy. Mobile first is a recognition that search is more and more a mobile experience. Recently, Google announced that more searches are now done on mobile than on desktop. That trend will likely never reverse.

In response, we’ve seen Google continually moving toward simpler, cleaner, less-cluttered design in all its products, including search. Even their recent logo redesign was motivated by the requirements of the small screen. According to Mueller, Authorship snippets were too much clutter for a mobile world, with not enough user benefit to justify their continuation.

In our Search Engine Land article, Eric Enge and I speculated that another reason Google may have ended the Authorship experiment was relatively poor adoption of the tagging, low participation in Google+ (which was being used as the “anchor” on Google’s side for author identification), and incorrect implementation of the tags by many who did try to use them.

On the latter point, Enge conducted a study of major publishers, which showed that even among those who bothered to implement the authorship tagging, the majority was doing it wrong. That was true even among high-tech and SEO publications!
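To illustrate the kind of mistakes that study surfaced, here is a hedged sketch of a broken implementation next to a working one; the URLs and profile IDs are hypothetical:

```html
<!-- Broken: rel=author pointing at an on-site bio page instead of a
     Google+ profile could never complete the two-way verification. -->
<a href="https://example.com/about/jane-doe" rel="author">Jane Doe</a>

<!-- Working: the link targets the author's Google+ profile, which in turn
     listed example.com under "Contributor to" to close the loop. -->
<a href="https://plus.google.com/109876543210987654321" rel="author">Jane Doe</a>
```

When either half of that loop was missing or pointed at the wrong place, Google had no reliable way to confirm the author, which is exactly the noisy signal described below.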

All that points to a messy and lopsided signal, not the kind of signal a search engine wants. At the end of the day, Google couldn’t guarantee that a result showing an Authorship rich snippet was really any better than the surrounding results, so why give it such a prominent highlight?

Despite Gary Illyes saying that if more sites used rel=author Google might begin using it again, I don’t see that doing so would change any of the conditions stated above. Therefore, I believe that any future use of rel=author by Google, if it ever occurs, will look nothing like the Authorship program we knew and loved.

So is there any future for author authority in search?

To this question, I answer a resounding “Yes!”

Every indication I’ve had from Googlers, both publicly and privately, is that author authority continues to be of interest to them, even if they have no sound way to implement it yet.

So how would Google go about assessing author identity and authority in a world where authors and publishers will never mass-tag everything accurately?

The answer: the Knowledge Graph, entity search, and machine learning.

The very first attempts at search engines were mostly human-curated. For example, the original Yahoo search was fed by a group of editors who attempted to classify every web page they came across. But as the World Wide Web took off and started growing exponentially, it was quickly obvious that such attempts couldn’t scale. Hyperlinks between web pages as a means of assessing both the subject matter and relative authority of web pages proved to be a better solution. Search at the scale of the web was born.

Remember that Google’s actual mission statement is to “organize the world’s information.” Over time, Google realized that just knowing about web pages was not enough. The real world is organized by relationships between entities—persons, places, things, concepts—and Google needed a way to learn the relationships between those things, also at scale.

The Knowledge Graph is the repository of what Google is learning, and machine learning is the engine that helps them do that learning at scale. At a simple level, search engine machine learning is the development of an algorithm that learns on its own as a result of feedback mechanisms. Google is applying this technology to the acquisition of and linking together of entities and their relationships at scale.

It’s my contention that this process will be the next evolutionary step that will eventually enable Google to identify authors who matter on a given topic with their actual content, evaluate the relative authority of that content in the perceptions of readers, and use that as a search ranking factor.

In fact, Matt Cutts seemed to hint at a Knowledge Graph-based approach in a June 2013 video about the future of authorship where he talked about how Google was moving away from dependence on keywords, from “strings to things,” figuring out how to discover the “real-world people” behind web content and “their relationships” to improve search results.

Notice that nothing in a machine learning process is dependent upon humans doing anything other than what they already do on the web.

The project is already underway. Take a moment right now and ask Google, “Who is Mark Traphagen?” If you are in the US or most English-speaking countries, you’ll probably see this at the top of the results:

That’s a Knowledge Panel result from Google’s Knowledge Graph. It reveals a couple of things:

1. Google has a high confidence that I’m likely the droids, er, the “Mark Traphagen” you’re looking for. There are a few other Mark Traphagens in the world who potentially show up in Google Search, but Google sees that the vast majority of searchers who search for “Mark Traphagen” are looking for a result about me. Thanks, everybody!

2. Google has high confidence that the Mark Traphagen you’re looking for is the guy who writes for Search Engine Land, so that site’s bio for me is likely a good instant answer to your lifelong quest to find the Real Mark Traphagen (a quest some compare to the search for the Holy Grail).

If Google can continue to do that at scale, then they can lick a problem like assessing author authority for search rankings without any help from us, thank you very much.

How does all this fit with Gary Illyes’s recommendation? I think that while Google knows it ultimately has to depend on machine learning to carry off such projects at scale, any help we can give the machine along the way is appreciated. Back in the Google Authorship I days, some of us (myself included) believed that one of the real purposes for the Authorship project was to enlist our help in training the machine learning algorithm. It may be that rel=author is still useful for that.

What might Authorship look like in the future?

Allow me to speculate a bit.

I don’t expect we’ll ever again see the mass implementation of author rich snippets we saw before, where almost anyone could get highlighted just for having used the tagging on their content and having a Google+ profile. As I stated above, I think Google saw that doing that was a non-useful skewing of the results, as more people were probably clicking on those rich snippets without necessarily getting a better piece of content on the other end.

Instead, I would expect that Google would see the most value in identifying the top few authors for any given topic, and boosting them. This would be similar to their behavior with major brands in search. We often see major, well-known brands dominating the top results for commercial queries because user behavior data tells Google that’s what people want to see. In a similar way, people might be happy to be led directly to authors they already know and trust. They really don’t care about anyone else, no matter how dashing their profile image might be.

Furthermore, for reasons also stated above, I don’t expect that we’ll see a return to the full rich snippets of the glory days of Authorship I. Instead, the boost to top authors might simply be algorithmic; that is, other factors being equal, their content would get a little ranking boost for queries where they are relevant to the topic and the searcher.

It’s also possible that such author’s content could be featured in a highlighted box, similar to how we see local search results or Google News results now.

But notice what I said above: “…when [the authors] are relevant to the topic and the searcher.” That latter part is important, because I believe it is likely that personalization will come into play here as well. It makes sense that boosting or highlighting a particular author has the most value when my search behavior shows that author already has value to me.

We already see this at work with Google+ posts in personalized (logged in) search. When I search for something that AJ Kohn has posted on Google+ while I’m logged in to my Google account, Google will elevate that result to my first page of results and even give it a good old-fashioned Authorship rich snippet! Google has high confidence that’s a result I might want to see because AJ is in my circles, and my interactions with him and his content show that he is probably very relevant and useful to me. Good guess, Google, you’re right!

It is now obvious that Google knows they have to expand beyond Google+ in entity identification and assessment. If Google+ had taken off and become a real rival to Facebook, Google’s job might have been a lot easier. But in the end, building machine learning algorithms that sniff out our “who’s who” and “who matters to whom” may be an even better, if vastly more difficult, solution.

So to sum up, I do expect that at some point in the future, author authority will become a factor in how Google assesses and ranks search results. However, I think that boost will be a “rich get richer” benefit for only the top, most reputable, most trusted authors in each topic. Finally, I think the output will be more subtle and personalized than we saw during the first attempt at Authorship in search.

How to prepare for Authorship II

Since it is unlikely that Authorship II, the future implementation of author identity and authority in search, will be anything like Authorship I, is there anything you can be doing to increase the odds that Authorship II will benefit you and your content? I think there are several things.

1. Set a goal of being the 10X content creator in your niche. Part of the Gospel According to Rand Fishkin these days is that “good, unique” content is not good enough anymore. In order to stand out and get the real benefits of content, you’ve got to be producing and publishing content that is ten times better than anything currently on page one of Google for your topic. That means it’s time to sacrifice quantity (churning out posts like a blogging machine) for quality (publishing only that which kicks butt and makes readers stand up, take notice, and share, recommend and link).

2. Publishers need to become 10X publishers. If you run a publishing site that accepts user-generated content, you’ve got to raise your standards. Accepting any article from any writer just to fill space on your pages won’t cut it.

3. Build and encourage your tribe. If you are authoring truly great, useful stuff, sooner or later you will start to attract some fans. Work hard to identify those fans, to draw them into a community around your work, and to reward and encourage them any way you can. Become insanely accessible to those people. They are the ones who will begin to transmit the signals that will say to Google, “This person matters!”

4. Work as hard offline as you do online. Maybe harder. More and more as I talk with other authors who have been working hard at building their personal brands and tribes, I’m hearing that their offline activities seem to be driving tremendous benefit that flows over into online. I’m talking about speaking at conferences and events, being available for interviews, being prominent in your participation in the organizations and communities around your topic, and dozens of other such opportunities.

BONUS: Doing all four of those recommendations will reap rewards for you in the here and now, whether or not Google ever implements any kind of “author rank.”

The natural power of the fact that people trust other people long before they will trust faceless brands continues, in my opinion, to be one of the least understood and underutilized methodologies in online marketing. Those who work hard to build real author authority in their topic areas will reap the rewards as Google begins to seek them out in the days to come.

