ALL the winners of the UK Search Awards 2015!

Last Thursday the UK Search Awards were handed out in London. In an evening full of food, drinks and networking, 27 different awards were presented. You can find all the lucky winners below, but first, some tweets and pictures from the evening!


[Photos from the evening: Verve Search, the opening drinks, winners Brainlabs, Kerboo, MediaVision and Matt Roberts, and the dinner]

Tweets about #searchawards

The Winners

Congratulations to ALL the winners of the UK Search Awards 2015!

BEST USE OF SEARCH – RETAIL

Periscopix & Toys ‘R’ Us – Results R Us

BEST USE OF SEARCH – FINANCE

MoneySuperMarket.com – Car Insurance PPC Efficiency Through Data Integration

BEST USE OF SEARCH – TRAVEL / LEISURE

Verve Search – Idioms of the World for HotelClub

BEST USE OF SEARCH – GAMING

Blueclaw & bwin.party – Leading the team to Three iGaming Victories

BEST USE OF SEARCH – THIRD SECTOR

GLL & BlueGlass – ‘Making SEO Better’

BEST LOCAL CAMPAIGN

Brainlabs – Deliveroo

BEST LOW BUDGET CAMPAIGN

Found & Randstad Finance & Professionals – Randstad Not The Ordinary 9-5

BEST USE OF PR IN A SEARCH CAMPAIGN

Verve Search – Idioms of the World for HotelClub

BEST USE OF SOCIAL MEDIA IN A SEARCH CAMPAIGN

AccorHotels & NetBooster – A Tale of Three Cities

BEST INTEGRATED CAMPAIGN

MoneySuperMarket.com – #EpicStrut – Digital Integration with ATL Brand Activity

BEST PPC CAMPAIGN

Brainlabs – Real Time Bidding – Alpharooms

BEST SEO CAMPAIGN

Dixons Carphone & Greenlight – Cooking up an SEO Storm.

BEST USE OF CONTENT MARKETING

Latitude and London & Country Mortgages – Game of Loans

INNOVATION – CAMPAIGN

Starcom Mediavest Group – Its S6 and You Know It Samsung Hijacks

INNOVATION – SOFTWARE

Majestic

BEST PPC MANAGEMENT SOFTWARE SUITE

DoubleClick Search

BEST SEO SOFTWARE SUITE

Searchmetrics Suite

BEST SEARCH SOFTWARE TOOL

Kerboo – Data intelligence for search marketers and brand managers

BEST IN-HOUSE TEAM

MoneySuperMarket.com Digital Team

BEST SMALL PPC AGENCY

The Media Image

BEST LARGE PPC AGENCY

Brainlabs

BEST SMALL SEO AGENCY

Verve Search

BEST LARGE SEO AGENCY

Stickyeyes

BEST SMALL INTEGRATED SEARCH AGENCY

MediaVision

BEST LARGE INTEGRATED SEARCH AGENCY

MediaCom

YOUNG SEARCH PROFESSIONAL OF THE YEAR

Mike Litson – Greyheart Media

UK SEARCH PERSONALITY OF THE YEAR

Matt Roberts – Linkdex

Post from Bas van den Beld


It's Here! The MozCon Local 2016 Agenda

Posted by EricaMcGillivray

*drumroll* The MozCon Local 2016 agenda is here! For all your local marketing and SEO needs, we’re pleased to present a fabulous lineup of speakers and topics for your enjoyment. MozCon Local is Thursday and Friday, February 18–19, 2016, in Seattle. On Thursday, our friends at LocalU will present a half-day of intensive workshops, and on Friday we’ll be having an entire day of keynote-style conference fun. (You do need to purchase the workshop ticket separately from the conference ticket.)

If you’ve just remembered that you need to purchase your ticket, do so now:

Buy your MozCon Local 2016 ticket!

Otherwise, let’s dig into that agenda!

MozCon Local 2016


Thursday workshops

12:00–12:30pm
Registration


12:30–12:35pm
Introduction and Housekeeping


12:35–12:55pm
The State of Local Search with David Mihm

Already one of the most complex areas in all of search marketing, local has never been more fragmented than it is today. Following a brief summary of the Local Search Ranking Factors, David will give you his perspective on which strategies and tactics are worth paying attention to, and which ones are simply “nice to have.”

David Mihm is one of the world’s leading practitioners of local search engine marketing. He has created and promoted search-friendly websites for clients of all sizes since the early 2000s. David co-founded GetListed.org, which he sold to Moz in November 2012.


12:55–1:35pm
Local Search Processes with Aaron Weiche, Darren Shaw, Mike Ramsey, and Paula Keller


Panel discussion and Q&A on the best processes to use in marketing local businesses online.


1:35–2:35pm
How to do Competitive Analysis for Local Search with Aaron Weiche, Darren Shaw, David Mihm, Ed Reese, Mary Bowling, Mike Ramsey

Each panelist will demonstrate their methods and the tools they use to audit a specific area of the online presence of a single local business. The end result will be a complete picture of how a thorough competitive analysis for a local business can be done.


2:35–2:50pm
Break


During this time period, each attendee will choose any three 30-minute workshops to attend. Some workshops are offered in all time slots, while others are only offered at specific times. Present your challenges, discuss solutions, and get your burning questions answered in these small groups.

LocalU Workshops

2:50–3:20pm

  • Tracking and Conversions with Ed Reese
  • Solving Problems at Google My Business with Willys DeVoll and Mary Bowling
  • Ask Me Anything About Local Search with David Mihm
  • Local Targeting of Paid Advertising with Paula Keller
  • Using Reviews to Build Your Business with Aaron Weiche
  • Local Links with Mike Ramsey
  • Citations: Everything You Need to Know with Darren Shaw

3:20–3:50pm

  • Tracking and Conversions with Ed Reese
  • Solving Problems at Google My Business with Willys DeVoll and Mary Bowling
  • Ask Me Anything About Local Search with David Mihm
  • Local Targeting of Paid Advertising with Paula Keller
  • Using Reviews to Build Your Business with Aaron Weiche
  • Agency Issues with Mike Ramsey
  • Local Links with Darren Shaw

3:50–4:20pm

  • Tracking and Conversions with Ed Reese
  • Solving Problems at Google My Business with Willys DeVoll and Mary Bowling
  • Ask Me Anything About Local Search with David Mihm
  • Local Targeting of Paid Advertising with Paula Keller
  • Using Reviews to Build Your Business with Aaron Weiche
  • Local Links with Mike Ramsey
  • Citations: Everything You Need to Know with Darren Shaw

4:20–5:00pm
Live Site Reviews

The group will come back together for live site reviews!


5:00–6:00pm
Happy Hour!


Friday conference

Mary Bowling talks to the local crowd

8:00–9:00am

Breakfast


9:00–9:05am
Welcome to MozCon Local 2016! with David Mihm

David Mihm is one of the world’s leading practitioners of Local search engine marketing. He has created and promoted search-friendly websites for clients of all sizes since the early 2000s. David co-founded GetListed.org, which he sold to Moz in November 2012.


9:05–9:35am
Feeding the Beast: Local Content for RankBrain with Mary Bowling

We now know searcher behavior and continual testing via machine learning indeed affects Google rankings and algorithm refinements. Learn how to create local content to satisfy both Google and our human visitors.

Mary Bowling’s been in SEO since 2003 and has specialized in local SEO since 2006. When she’s not writing about, teaching, consulting, and doing internet marketing, you’ll find her rafting, biking, and skiing/snowboarding in the mountains and deserts of Colorado and Utah.


9:35–10:05am
Local Links: Tests, Tools, and Tactics with Mike Ramsey

Going beyond the map pack, links can bring you qualified traffic, organic rankings, penalties, or filters. Mike will walk through lessons, examples, and ideas for you to utilize to your heart’s content.

Mike Ramsey is the president of Nifty Marketing and a founding faculty member of Local University. He is a lover of search and social with a heavy focus in local marketing and enjoys the chess game of entrepreneurship and business management. Mike loves to travel and loves his home state of Idaho.


10:05–10:35am
Citation Investigation! with Darren Shaw

Darren investigates how citations travel across the web and shares new insights into how to better utilize the local search ecosystem for your brands.

Darren Shaw is the president and founder of Whitespark, a company that builds software and provides services to help businesses with local search. He’s widely regarded in the local SEO community as an innovator, one whose years of experience working with massive local data sets have given him uncommon insights into the inner workings of the world of citation-building and local search marketing. Darren has been working on the web for over 16 years and loves everything about local SEO.


10:35–10:55am
AM Break


10:55–11:20am
Technical Site Audits for Local SEO with Lindsay Wassell

Onsite SEO success lies in the technical details, but extensive SEO audits can be too expensive and impractical. Lindsay shows you the most important onsite elements for local search optimization and outlines an efficient path for improved performance.

Lindsay Wassell’s been herding bots and wrangling SERPs since 2001. She has a zeal for helping small businesses grow with improved digital presence. Lindsay is the CEO and founder of Keyphraseology.


11:20–11:45am
Optimizing and Hacking Email for Mobile with Justine Jordan

Email may be an old dog, but it has learned some new mobile tricks. From device-a-palooza and preview text to tables and triggers, Justine will break down the subscriber experience so you (and your audience) get the most from your next campaign.

In addition to being an email critic, cat lover, and explain-a-holic, Justine Jordan also heads up marketing for Litmus, an email testing and analytics platform. She’s strangely passionate about email, hates being called a spammer, and still codes like it’s 1999.


11:45am–12:10pm
Understanding App-Web Convergence and the Impending App Tsunami with Emily Grossman

People no longer distinguish between app and web content; both compete for the same space in local search results. Learn how to keep your local brand presence afloat as apps and deep links flood into the top of search results.

Emily Grossman is a Mobile Marketing Specialist at MobileMoxie, and she has been working with mobile apps since the early days of the app stores in 2010. She specializes in app search marketing, with a focus on strategic deep linking, app indexing, app launch strategy, and app store optimization (ASO).


12:10–12:35pm
Building Customer Love and Loyalty in a Mobile World with Robi Ganguly

How the best companies in the world relate to customers, create a personal touch, and foster customer loyalty at scale.

Robi Ganguly is the co-founder and CEO of Apptentive, the easiest way for every company to communicate with their mobile app customers. A native Seattleite, Robi enjoys building relationships, running, reading, and cooking.


12:35–1:35pm
Lunch



1:35–2:05pm
The Past, Present, and Future of Local Listings with Luther Lowe and Willys DeVoll

Two of the biggest kids on the local search block, Google and Yelp, share their views on the changing world of local listings, their place in the broader world of local search, and what you can do to keep up, in this Q&A moderated by David Mihm.

Luther Lowe is VP of Public Policy at Yelp.

Willys DeVoll is the content strategist for Google My Business, and he spends his time designing and writing online content to help business owners enhance their presence online. He’s also a major proponent of broccoli and gorillas.


2:05–2:35pm
Fake It Til You Make It: Brand Building for Local Businesses with Paula Keller

Explore real-world examples of how your local business can establish a brand that both customers and Google will recognize and reward.

As Director of Account Management at Search Influence, Paula Keller strategizes with businesses on improving their search, social, and online ads results, and she works to scale those tactics for her team’s 800+ local business clients. Paula views online marketing the same way she views cooking (her favorite way to spend her free time): trends come and go, but classic tactics are always the foundation of success!


2:35–3:05pm
Your Marketing Team is Larger Than You Think with Dana DiTomaso

Imagine doing such a great job with your branding that you become a part of your customer’s life. They trust your brand as part of their community. This magic doesn’t happen by dictating the corporate voice from a head office, but from empowering your locations to build customer community.

Whether at a conference, on the radio, or in a meeting, Dana DiTomaso likes to impart wisdom to help you turn a lot of marketing bullshit into real strategies to grow your business. After 10+ years, she’s (almost) seen it all. It’s true, Dana will meet with you and teach you the ways of the digital world, but she is also a fan of the random fact. Kick Point often celebrates “Watershed Wednesday” because of Dana’s diverse work and education background. In her spare time, Dana drinks tea and yells at the Hamilton Tiger-Cats.


3:05–3:25pm
PM Break


3:25–3:55pm
Mo’ Listings, Mo’ Problems: Managing Enterprise-Level Local Search with Cori Shirk

Listings are everyone’s favorite local search task…not. Cori takes you through how to tackle them at large scale, keep up, and not burn out.

Cori Shirk is a member of the SEO team at Seer Interactive, where she specializes in managing enterprise local search accounts and guiding strategy across all of Seer’s local search clients. When she’s not sitting in front of a computer, you can usually find her out at a concert enjoying a local craft beer.


3:55–4:10pm
The Enterprise Perspective on Local Search with Matthew Moore

Learn how the person responsible for local visibility across a portfolio of nearly 1,000 locations tackles this space on a daily basis. Matthew from Sears Home Services shares his experiences and advice in this Q&A moderated by David Mihm.

Matthew Moore is Senior Director, Marketing Analytics at Sears Holdings Corporation.


4:10–4:40pm
How to Approach Social Media Like Big Brands with Adria Saracino

Facebook, Twitter, LinkedIn, Instagram, Pinterest, YouTube, Snapchat, Periscope…the seemingly never-ending world of social media can leave even the most seasoned marketer flailing among too many tasks and not enough results. Adria will help you cut through the noise and share actionable secrets that big brands use to succeed with social media.

Adria Saracino is a digital strategist whose marketing experience spans mid-stage startups, agency life, and speaking engagements at conferences like SearchLove and Lavacon. When not marketing things, you can see her cooking elaborate meals and posting them on her Instagram, @emeraldpalate.


4:40–5:10pm
Analytics for Local Marketers: The Big Picture and the Right Details with Rand Fishkin

Are your marketing efforts taking your organization where it needs to go, or are they just boosting your vanity metrics? Rand explains how to avoid being misled by the wrong metrics and how to focus on the ones that will keep you moving forward. Learn how to determine what to measure, as well as how to tie it to objectives with clear, concise, and useful data points.

Rand Fishkin uses the ludicrous title “Wizard of Moz.” He’s the founder and former CEO of Moz, co-author of a pair of books on SEO, and co-founder of Inbound.org.


6:00–10:00pm
MozCon Local Networking Afterparty, location TBA

Join your fellow attendees and Moz and LocalU staff for a networking party after the conference. Light appetizers and drinks included. See you there!

Buy your MozCon Local 2016 ticket!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Optimising Website URLs for SEO and Usability

In recent months there have been a few occasions where I’ve had to emphasise the importance of clear, well-structured URLs for websites. As any SEO knows, having the right keywords in the actual URL of a webpage will make that page more relevant, and will help the page rank better in search results. Yet not everyone is entirely aware of this.

I believe the value of optimised URLs goes beyond simple relevance, though. To me, a properly structured page URL carries a range of benefits that go beyond SEO. Website URLs are also a user experience aspect, aid in site maintenance and content management, and benefit a company’s offline marketing in very real ways.

When we think of a website’s structure, we often think of a tree-like relationship between pages, as per this popular graphic from Moz:

However, it’s not always evident how this should relate to each page’s URL. For me, a good URL structure is one that conveys clear meaning and intent, describes the page’s place within the overall site structure, and offers clear navigational options.

That means a page’s URL needs to be hierarchical; there needs to be a clear parent-child relationship in the URL, identifying the page’s relationship with content that sits ‘above’ it in the site’s reversed tree, and opening logical options for URLs of deeper pages.

Hierarchical URLs

Let’s explain this by means of an example, one that I like to use in my lectures and workshops: a website that sells safety boots for construction and warehouse workers.

As a fan of Caterpillar boots, let’s take one of their popular safety boots, the Holton SB, as an example product, and clarify what would make for a great hierarchical URL for this product page.

Main Category URLs

First of all, we’ll need to rely on our keyword research to see what types of keywords people use to search for this sort of safety boot. A quick search using Google’s Keyword Planner reveals that ‘safety boots’ is popular, but not as popular as ‘work boots’. However, ‘work boots’ has a strong seasonal element, whereas ‘safety boots’ doesn’t, which leads me to suspect they might not necessarily mean the same thing. And indeed, ‘work boots’ are primarily intended for outdoor use, whereas ‘safety boots’ is more generic and can refer to both indoor and outdoor work boots.

There’s also a strong geographic aspect to the popularity of these keywords, with ‘work boots’ the preferred keyword in the USA.

Which keyword do we want to use? Let’s keep it simple and stick to ‘safety boot’ for now, avoiding any issues around cultural differences and semantic variances.

So our top-level category will be ‘Safety Boots’. This gives us a pretty straightforward category page URL:

http://www.website.com/safety-boots/

Next we’ll want to have a think about what kind of subcategories we want to identify. Again, we’ll need to rely on keyword research to ensure what we choose as logical subcategories aligns with people’s search behaviour.

Subcategory URLs

Our keyword research shows that there’s a strong brand element to searches for safety boots, with users often looking for specific manufacturers like Dr Martens and Caterpillar. The popularity of brand keywords is significantly higher than searches for specific attributes, such as ‘steel toes’ or ‘non-slip soles’.

So our first subcategories will be boot brands:

http://www.website.com/safety-boots/caterpillar/

Now with two levels of hierarchy, we have created a site structure where theoretically any given product could be accessed within three clicks from the homepage. For me that’s an ideal scenario, but it does leave us with a dilemma: do we add further subcategorisation to enable category pages for feature-specific attributes, or do we rely on a different type of product filtering to enable users to find what they’re looking for?

Sub-Subcategory URLs

There’s no one-size-fits-all solution; it really depends on your specific situation, requirements, constraints, etc. Basically, you’ll have two choices: add one level of categorisation in a clear URL hierarchy, or rely on URL parameters to filter product lists down more narrowly:

http://www.website.com/safety-boots/caterpillar/steel-toe/

or

http://www.website.com/safety-boots/caterpillar/?attr=steeltoe

Either one works, though the former option – sub-subcategorisation in hierarchical URLs – throws up a secondary issue: that of product URLs containing categorisation elements. More on that below.
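To make the structural difference concrete, here’s a minimal Python sketch (my own illustration, not from the original article) that parses both URL styles: in the hierarchical version the attribute is simply another path segment, while in the parameterised version the path stops at the brand level and the attribute lives in the query string.

from urllib.parse import urlparse, parse_qs

hierarchical = "http://www.website.com/safety-boots/caterpillar/steel-toe/"
parameterised = "http://www.website.com/safety-boots/caterpillar/?attr=steeltoe"

for url in (hierarchical, parameterised):
    parts = urlparse(url)
    # Path segments carry the category hierarchy; the query string carries filters
    segments = [s for s in parts.path.split("/") if s]
    print(segments, parse_qs(parts.query))

# ['safety-boots', 'caterpillar', 'steel-toe'] {}
# ['safety-boots', 'caterpillar'] {'attr': ['steeltoe']}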

Product URLs

Again, in an ideal scenario, your product pages are the children of your categories and subcategories, so should follow that hierarchical structure:

http://www.website.com/safety-boots/caterpillar/holton-sb-boot.html

As far as URLs go, this one is about as perfect as it can get for the purpose of SEO as well as for usability. Just by looking at that URL, you already know what kind of page you’ll be visiting. There’s no ambiguity; the URL is completely self-evident and descriptive.

In terms of SEO, the ranking benefit of such a URL should be obvious. It contains two of the most relevant keywords that people search for, and has a clear categorisation hierarchy that allows Google to understand what it is looking at. Even before the page is crawled, Google has several relevance values to associate with the page’s content.

As per above, we could try to make the page URL even more search engine friendly by adding a third layer of categorisation, based on specific product attributes that people search for, such as ‘steel toes’ and ‘waterproof’.

However, when we add a third level of categorisation based on product attributes, we end up with a single product that can easily belong to multiple categories. We thus risk creating duplicate product versions:

http://www.website.com/safety-boots/caterpillar/steel-toed/holton-sb-boot.html
http://www.website.com/safety-boots/caterpillar/waterproof/holton-sb-boot.html

We would then be forced to either canonicalise on a single product URL (which we really don’t want to do, as it kills the URL’s relevance for the non-canonical attribute) or revert to root URLs for products instead:

http://www.website.com/caterpillar-holton-sb-safety-boot.html

This is not ideal, as we also lose a lot of the hierarchical URL’s keyword value.

URL Parameters Are Okay

If we instead use parameters in subcategory URLs to filter product lists on attributes, we can still use the full SEO-friendly hierarchical product URL containing the main category and subcategory. The subcategory’s URL with parameters would show a filtered list, and we can ensure Google can crawl and index this URL, thus still conveying some measure of keyword value:

http://www.website.com/safety-boots/caterpillar/?attr=steeltoe

I also have a preference for using URL parameters for attribute-based filtering, as often we don’t necessarily want Google to index these filtered product lists. Many parameters, like pricing and size, are useful for website visitors to narrow product lists, but have very limited SEO value and can in fact cause crawl optimisation issues on your website.

By using parameters you enable easier crawl control – you can simply add those parameters that have limited SEO value (and which you don’t want crawled) to your robots.txt file, while you keep those you do want crawled unaffected.

User-agent: Googlebot

Disallow: /*attr=price*

In such a case, even when your website allows users to select multiple attributes to filter products by (i.e. faceted navigation), you still have full control over which pages Google can crawl and index, thus reducing crawl waste and preventing index issues.
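To sanity-check that kind of crawl control, here’s a short Python sketch (my own illustration, not part of the original article) that roughly approximates how a wildcard Disallow rule like the one above matches URLs. It deliberately assumes Google-style ‘*’ wildcard behaviour and is a simplification rather than a reproduction of Googlebot’s actual robots.txt parser.

import re

def is_blocked(url_path, disallow_patterns):
    # Rough approximation of Google-style robots.txt matching:
    # '*' in a Disallow rule matches any sequence of characters.
    for pattern in disallow_patterns:
        regex = re.escape(pattern).replace(r"\*", ".*")
        if re.match(regex, url_path):
            return True
    return False

# The Disallow rule from the example above
disallow = ["/*attr=price*"]

paths = [
    "/safety-boots/caterpillar/",                # plain subcategory page
    "/safety-boots/caterpillar/?attr=steeltoe",  # attribute filter we do want crawled
    "/safety-boots/caterpillar/?attr=price-low", # price filter we want blocked
]

for path in paths:
    print(path, "-> blocked" if is_blocked(path, disallow) else "-> crawlable")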

Site Structure and Information Architecture

All of the above makes one thing really clear: you need to think about your website’s content long before you start coding a single line of HTML. It’s absolutely crucial to understand how your website’s content needs to be structured and made part of a coherent, meaningful architecture that has a place for all your existing content, but also allows for natural growth and expansion of your website.

Failing to plan your site structure is likely to lead to all kinds of issues – not just for SEO – as there’s a very real risk your website will not be set up properly to weather all the challenges that it may face.

This is why I’m such a big proponent of applying information architecture best practices to website design. Unfortunately this is often an entirely overlooked or, at best, hastily skimmed aspect of a website’s design brief. And yet it has such a crucial role to play in a website’s success.

One of the best works on information architecture as it relates to websites is O’Reilly’s so-called Polar Bear book: Information Architecture for the Web. The updated 4th edition has just been released, with an expanded remit to include all forms of digital design:


O’Reilly’s Information Architecture book, 4th edition

I can’t recommend this book strongly enough. If you care about websites and how they’re used and made successful, this is truly a must-read book.

On average I find myself mentioning this book several times a year in workshops and conference talks, and it still surprises me that so few industry professionals have read any of the book’s four editions, or any text on information architecture in general. To me, this is crucial knowledge to have, as it encourages us to think about website structures in ways that allow us to maximise utility and enable scalable growth.

Key Lessons

Often we as SEOs do not have the luxury to get involved in website projects early on and give our input on how the site is designed. Unfortunately, that means often we have to work with less than ideal structures, and find ourselves applying fixes to issues about content design and hierarchy that could so easily have been prevented.

If and when we do have the opportunity to get stuck in the early phases of a new website project, our most important input revolves around the design of the site’s information architecture, and – by extension – the hierarchical URLs that should emerge from that architecture. Our job as SEOs is made so much easier if a website is created with IA best practices in mind, and we should not allow web designers & developers to get away with shoddy implementations.

Clear, human-readable URLs that conform to a well thought-out website architecture are not optional features – they’re an essential ingredient for all successful websites.

Post from Barry Adams


Persona Research in Under 5 Minutes

Posted by CraigBradford

Well-researched personas can be a useful tool for marketers, but doing the research correctly takes time. What if you don’t have that extra time? Using a mix of Followerwonk, Twitter, and the Alchemy language API, it’s possible to do top-level persona research very quickly. I’ve built a Python script that can help you answer two important questions about your target audience:

  1. What are the most common domains that my audience visits and spends time on? (Where should I be trying to get mentions/links/PR?)
  2. What topics are they interested in or reading about on those sites? (What content should I potentially create for these people?)

You can get the script on Github: Twitter persona research

Once the script runs, the output is two CSV files. One is a list of the most commonly-shared domains by the group, the other is a list of the topics that the audience is interested in.

A quick introduction to Watson and the Alchemy API

The Alchemy API has been around a while, and they were recently acquired by the IBM Watson group. The language tool has 15 functions. I’ve used it in the past for language detection, sentiment analysis, and topic analysis. For this personas tool, I’ve used the Concepts feature. You can upload a block of text or ask it to fetch a URL for analysis. The output is then a list of concepts that are relevant to the page. For example, if I put the Distilled homepage into the tool, the concepts are:

Notice there are some strange things like Arianna Huffington listed, but running this tool over thousands of URLs and counting the occurrences takes care of any strange results. This highlights one of the interesting features of the tool: Alchemy isn’t just doing a keyword extraction task. Arianna Huffington isn’t mentioned anywhere on the Distilled homepage.

Alchemy has found the mention of Huffington Post and expanded on that concept. Notice that neither search engine optimization nor Internet marketing is mentioned on the homepage, yet they have been listed as the two most relevant concepts. Pretty clever. The Alchemy site sums it up nicely:

“AlchemyAPI employs sophisticated text analysis techniques to concept tag documents in a manner similar to how humans would identify concepts. The concept tagging API is capable of making high-level abstractions by understanding how concepts relate, and can identify concepts that aren’t necessarily directly referenced in the text.”

My thinking for this script is simple: If I get a list of all the links that certain people share and pass the URLs through the Alchemy tool, I should be able to extract the main concepts that the audience is interested in.

To use an example, let’s assume I want to know what topics the SEO community is interested in and what sites are most important in that community. My process is this:

  1. Find people that mention “SEO” in their Twitter bio using Followerwonk
  2. Get a sample of their most recent tweets using the Twitter API
  3. Pull out the most common domains that those people share
  4. Use the Alchemy Concepts API to summarize what the pages they share are about
  5. Output all of the above to a spreadsheet
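As a rough illustration of what steps 3 and 5 boil down to, here’s a short Python sketch of my own (not the script from GitHub, which also handles the Twitter fetching and the Alchemy concept calls). It takes the text of tweets you’ve already collected, counts the domains that were shared, and writes the counts to a CSV. In practice you’d also want to resolve Twitter’s t.co short links to their final destination before counting, which this sketch skips.

import csv
import re
from collections import Counter
from urllib.parse import urlparse

URL_PATTERN = re.compile(r"https?://\S+")

def count_shared_domains(tweet_texts):
    # Pull every URL out of the tweet texts and tally the domains
    domains = Counter()
    for text in tweet_texts:
        for url in URL_PATTERN.findall(text):
            domain = urlparse(url).netloc.lower()
            if domain:
                domains[domain] += 1
    return domains

def write_domain_counts(domains, filename="domains.csv"):
    # One row per domain, most frequently shared first
    with open(filename, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["domain", "count"])
        for domain, count in domains.most_common():
            writer.writerow([domain, count])

# Example usage with a couple of dummy tweets
tweets = [
    "Great read on local SEO https://moz.com/blog/some-post",
    "New tool announced http://searchengineland.com/some-article",
]
write_domain_counts(count_shared_domains(tweets))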

Follow the steps below. Sorry, but the instructions below are for Mac only; the script will work for PCs, but I’m not sure of the terminal set up.

How to use the script

Step 1 – Finding people interested in SEO

Searching Followerwonk is the only manual part of the process. I might build it into the script in the future, but honestly, it’s too easy to just download the usernames from the interface.

Go into the “Search Bios” tab and enter the job title in quotes. In this case, that’s “SEO.” More common jobs will return a lot of results; I recommend setting some filters to avoid bots. For example, you might want to only include accounts with a certain number of followers, or accounts with less than a reasonable number of tweets. You can download these users in a CSV as shown in the bottom-right of the image below:

Everything else can be done automatically using the script.

Step 2 – Downloading the script from GitHub

Download the script from Github here: Twitter API using Python. Use the Download Zip link on the right hand side as shown below:

Step 3 – Sign up for Twitter and Alchemy API keys:

It’s easy to sign up using the links below:

  • Get a Twitter API key
  • Get a free API key for Alchemy

Once you have the API keys, you need to install a couple of extra requirements for the script to work.

The easiest way to do that is to download Pip here: https://bootstrap.pypa.io/get-pip.py — save the page as “get-pip.py”. Create a folder on your desktop and save the Git download and the “get-pip.py” file in it. You then need to open your terminal and navigate into that folder. You can read my previous post on how to use the command line here: The Beginner’s Guide to the Command Line.

The steps below should get you there:

Open up the terminal and type:

"cd Desktop/"

"cd [foldername]"

You should now be in the folder with the get-pip.py file and the folder you downloaded from Github. Go back to the terminal and type:

"sudo python get-pip.py"

"sudo pip install -r requirements.txt"

Create two more files:

  1. usernames.txt – This is where you will add all of the Twitter handles you want to research
  2. api_keys.py – The file with your API keys for Alchemy and Twitter

In the api_keys file, paste the following and add the respective details:

watson_api_key = "[INSERT ALCHEMY KEY]"

twitter_ckey = "[INSERT TWITTER CKEY]"

twitter_csecret = "[INSERT CSECRET]"

twitter_atoken = "[INSERT TOKEN]"

twitter_asecret = "[INSERT ASECRET]"

Save and close the file.

Step 4 – Run the script

At this stage you should:

  1. Have a usernames.txt file with the Twitter handles you want to research
  2. Have downloaded the script from Github
  3. Have a file named api_keys.py with your details for Alchemy and Twitter
  4. Have installed Pip and the requirements file

The main code of the script can be found in the “get_tweets.py” file.

To run the script, go into your terminal, navigate to the folder that you saved the script to (you should still be in the correct directory if you followed the steps above. Use “pwd” to print the directory you’re in). Once you are in the folder, run the script by going to the terminal and typing: “python get_tweets.py”. Depending on the number of usernames you entered, it should take a couple of minutes to run. I recommend starting with one or two to check that everything is working.

Once the script finishes running, it will have created two csv files in the folder you created:

  1. “domain + timestamp” – This includes all the domains that people tweeted and the count of each
  2. “concepts + timestamp” – This includes all the concepts that were extracted from the links that were shared

I did this process using “SEO” as the search term in Followerwonk. I used 50 or so profiles, which created the following results:

Top 30 domains shared:

Top 40 concepts

For the most part, I think the domains and topics are representative of the SEO community. The output above seems obvious to us, but try it for a topic that you’re not familiar with and it’s really helpful. The bigger the sample size, the better the results should be, but this is restricted by the API limitations.

Although it looks like a lot of steps, once you have this set up, it’s very easy to repeat — all you need to change is the usernames file. Using this tool can get you some top-level persona information in a very short amount of time.

Give it a try and let me know what you think.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


How to Use Hosted Blog Platforms for SEO & Content Distribution

Posted by randfish

Where do you host your content? Is it on your own site, or on third-party platforms like Medium and LinkedIn? If you’re not yet thinking about the ramifications of using hosted blog platforms for your content versus your own site, now’s your chance to start. In this week’s Whiteboard Friday, Rand explores the boons and pitfalls of using outside websites to distribute and share your content.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat a little bit about blog platforms, places like Medium, Svbtle — that’s Svbtle with a V instead of a U — Tumblr, LinkedIn, places where essentially you’ve got a hosted blog platform, a hosted content platform. It’s someone else’s network. You don’t have to set up your own website, but at the same time you are contributing content to their site.

This has become really popular, I think. Look, Medium and LinkedIn are really the two big ones where a lot of folks are contributing these days. LinkedIn very B2B focused, Medium very startup, and new media as well as new creative-focused.

So I think, because of the rise of these things, we’re seeing a lot of people ask themselves, “Should I create my own content platform? Do I need to build a WordPress hosted subfolder on my website? Or can I just use Medium because it has all these advantages, right?” Well, let me try and answer those questions for you today.

So, what do hosted platforms enable?

Well, it’s really simple to sign up and start creating on them. You plug in your name, email, a password. You don’t have to set up DNS. You don’t have to set up hosting. You can start publishing right away. That’s really easy and convenient.

It also means that, for a lot of marketers, they don’t have to involve their engineering or their web development teams. That’s pretty awesome, too.

There are also built in networks on a lot of these places, Medium in particular, but Svbtle as well. Tumblr quite obviously has a very, very big network. So as a result, you’ve got this ability to gain followers or subscribers to your content, someone that can say like, “Oh, I want to follow @randfish on Medium.” I haven’t published on Medium, but for some reason I seem to have thousands of followers there.

So I think this creates this idea like, “Hey, I could reach a lot more people that I wouldn’t necessarily be able to reach on my own platform, because it’s not like these people are all subscribed to my blog already, but they are signed up for Medium or LinkedIn, which has hundreds of millions of worldwide users.”

There’s also an SEO benefit here. You inherit domain authority. On Medium and on LinkedIn in particular, these can be really powerful. Medium is a domain authority 80. LinkedIn is a domain authority of 99, which is no surprise. Pretty much every website on the planet links to their LinkedIn page. So you can imagine that these pages have the potential to do really well in Google’s rankings, and you don’t necessarily have to point a lot of links at them in order for them to rank very well. We’ve seen this. Medium has been doing quite well in the rankings. LinkedIn articles are doing quite well in their niches.

This is a little different, a subtle but important difference for Svbtle itself, for Tumblr, and for WordPress. These are on subdomains. So it would be, yes, there are lots of people who are using WordPress, although that’s very customizable. But you could imagine that if I got randstshirts.wordpress.com or randstshirts.tumblr.com or randstshirt.svbtle.com, that doesn’t have the same ranking ability. That subdomain means that Google considers it separately from the main domain. So you’re not going to inherit the ranking benefit on those. It’s really Medium and LinkedIn where that happens. To be honest, Google+ as well, we’ve seen them ranking like a Medium or a LinkedIn too.

You also have this benefit of email digests and subscriptions, which can help grow your content’s reach. For those of you who aren’t subscribed to Medium, they send out a daily digest to all of the folks who are signed up. So if you are someone who is contributing Medium content, you can often expect that your subscribers through Medium may be getting your stuff through an email digest. It may even get broadcast to a much broader group, to people who aren’t following you but are following them. If they’ve “hearted” your content on Medium, they’ll see it. So you get all these network effects through email digests and email subscriptions too.

So what’s the downside?

This is pretty awesome. To me, these are compelling reasons to potentially consider using these. But before we get too far ahead of ourselves, let’s talk about the downside as well. To my mind, these downsides prevent me from wanting to encourage certain types of views. I’ll talk about my best advice and my tactical advice for using these in a sec.

Links authority and ranking signals that are accrued. We recognize that you put a post on Medium, a lot of times posts there do very well. They get a lot of traction, a lot of attention. They make it into news feeds. Other sites link to them. Other pages around the web link to them. It’s great. Lots of social shares, lots of engagement. That is terrific.

Guess what? Those benefits accrue only to Medium.com. So every time you publish something there and it gets lots of links and ranking signals and engagement and social and all these wonderful things, that helps Medium.com rank better in the future. It doesn’t help yoursite.com rank better in the future.

You might say, “But Rand, I’ve got a link here, and that link points right back to my site.” Yes, wonderful. You now have the equivalent of one link from Medium. Good for you. It’s not a bad thing. But this is nowhere near the kind of help that you would get if this piece of content had been hosted on your site to begin with. If this is hosted over here, all these links point in there, and all those ranking benefits accrue to your site and page.

In some ways, from an SEO perspective, especially if you’re trying to build up that SEO flywheel of growing domain authority and growing links and being able to rank for more competitive stuff, if you’re trying to build that flywheel, you’d almost say, “Hey, you know what, I’d take half the links and ranking signals if it were on my own site. That would still be worth more to me than more on Medium.”

Okay. But that being said, there are all the distribution advantages, so maybe we’re still at a wash here.

Also on these blogging platforms, these hosted platforms, there’s no ownership of or ability to influence the UI and UX. That is a tough one too. So one of the wonderful things about blogging is — and we’ve seen this over the years many times at Moz. People come to Moz to read the content, they remember Moz, and they have a positive association and they say, “Yeah, you know, Moz made me feel like they were authorities, like they knew what they were talking about. So now I want to go check out Moz Local, their product, or Moz Analytics, or Open Site Explorer, or whatever it is.”

That’s great. But if you are on Medium or if you are on Svbtle or if you are on WordPress — well, WordPress is more customizable — but if you’re on Google+, the experience is, “Oh, I had a really good experience with Medium.” That’s very, very different. They will not remember who you are and how you made them feel, at least certainly not to the extent that they would if you owned and controlled that UI and UX.

So you’re really reducing brandability and any messaging opportunities that you might have had there. That’s dramatically, dramatically reduced. I think that’s very, very tough for a lot of folks.

Next up — and this speaks to the UI and UX elements — but it’s impossible to add or to customize calls to action, which really inhibits using your blog as part of your funnel. Essentially, I can’t say, “Hey, you know what I’d like to do? I’d like to add a button right below here, below all my blog posts that says, ‘Hey, sign up to try our product for free,’ or, ‘Get on our new mailing list,’ or, ‘Subscribe to this particular piece of content.’ Or I want to put something in the sidebar, or I’d like to have it in the header. Or I want to have it as a drop over when someone scrolls halfway down the page.” You can’t do any of those things. That sort of messaging is controlled by the platform. You’re not allowed to add custom code here, and thus your ability to impact your funnel with your blog or with your content platform on these sites is severely limited. You can add a link, and yes, people can still follow you on these networks, but that is definitely not the same.

There’s also, frustratingly, for a lot of paid marketers and a lot of marketers who know that they can do this, you can’t put a retargeting pixel on Svbtle or on Medium. Actually, you may be able to on Svbtle now. I’m not sure if you can. But Medium for sure, LinkedIn for sure, Google+, you can’t say, “Hey, all the people who come to my posts on Medium, I’d like to retarget them and remarket to them as they go around the web later, and I’ll follow them around the Internet like a lost puppy dog.” Well, too bad, not possible. You can’t place that pixel. No custom code, that’s out.

The last thing, and I think one of the most salient points, is there have been many, many platforms like this over the years. Many people use the example of GeoCities where a lot of people hosted their content and then it went away. In the early days of the web, it was very big, and a few years ago it fell apart.

It’s not just that, though. The uncertain future could mean that in some time frame, in the months or years to come, Medium, or Svbtle, or LinkedIn, or Google+ could become more like Facebook, where instead of 100% of the people seeing the content that they subscribe to, maybe they only see 10% or the Facebook averages today, which are under 1%. So this means that you don’t really know what might happen to your content in the future in terms of its potential visibility to the audience there. If that’s the sole place you’re building up your audience, that is a high amount of risk depending on what happens as the platform evolves.

This is true for all social platforms. It’s not just true for these hosted blog content platforms. Many folks have talked about how Twitter in the future may not show 100% of the content there. I don’t know how real that is or whether it’s just a rumor, but it’s one of those things to consider and keep in mind.

My best advice:

So my best advice here is, use platforms like these for reaching their audiences. I think it can be great to say, “Hey, 1 out of every 10 or 20 posts I want to put something up on Medium, or I want to test it on Google+, or I want to test it on LinkedIn because I think that those audiences have a lot of affinity with what I’m doing. I want to be able to reach out to them. I want to see how those perform. Maybe I want to contribute there once a month or once a quarter.” Great. Wonderful. That can be a fine way to draw distribution there.

I think it’s great for building connections. If you know that there are people on those networks who have big, powerful followings and they’re very engaged there, I think using those networks like you would use a Twitter or a Facebook or like you already use LinkedIn to try and build up those connections makes total sense.

Amplifying the reach of existing content or messages. If you have a great piece of content or a really exciting message, something exciting you want to share and you’ve already put some content around that on your own site and now you’re trying to find other channels to amplify, well, you might want to think about treating Medium just like you would treat a post on Twitter or a post on Facebook or a post on LinkedIn. You could instead create a whole piece of content around that, sort of like you would with a guest post, and use it to amplify that reach.

I think guest post-style contribution, in general, is a great way to think about these networks. So you might imagine saying, “Hey, I’d love to contribute to YouMoz,” which is Moz’s own guest blogging platform. That could be wonderful, but you would never make that your home. You wouldn’t host all your content there. Likewise you might contribute to Forbes or Business Insider or to The Next Web or any of these sites. But you wouldn’t say that’s where all my content is going to be placed. It’s one chance to get in front of that audience.

Last one, I think it’s great to try and use these for SERP domination. So if you say, “Hey, I own one or two of the top listings of the first page of results in Google for this particular keyword, term, or phrase. I want to use Medium and LinkedIn, and I’m going to write two separate pieces targeting similar keywords or those same keywords and see if I can’t own 4 slots or 5 slots out of the top 10.” That’s a great use of these types of platforms, just like it is with guest posting.

Don’t try to use these for…

Don’t try to use these as your content’s primary or, God forbid, only home on the web. Like I said, uncertain future, inability to target, inability of using the funnel, just too many limitations for what I think modern marketers need to do.

I don’t think it is wise, either, to put content on there that’s what I’d call your money keywords, essentially stuff that is very close to the conversion funnel, where you know people are going to search for these things, and then when they find this content, they’re very likely to make their next step a sign-up, a conversion. I would urge you to keep that on your site, because you can’t own the experience. I think it’s much wiser if you say, “Hey, let’s look way up in the funnel when people are just getting associated with us, or when we’re trying to bring in press and PR, or we’re trying to bring in broad awareness.” I think those are better uses.

I think it’s also very unwise to make these types of platforms the home of your big content pieces, big content pieces meaning like unique research or giant visuals or interactive content. You probably won’t even be able to host interactive content at most of these.

If you have content that you know is very likely to drive known, high-quality links, you’ve already got your outreach list, you’re pretty sure that those people are going to link to you, please put that content on your own site because you’ll get the maximum ranking benefits in that fashion. Then you could potentially put another piece of content, repurpose a little bit of the information or whatever it is that you’ve put together that’s wonderful in terms of big content as another piece that you separately broadcast and amplify to these audiences.

What I’m really saying is treat these guys — Medium, Svbtle, LinkedIn, Tumblr, and Google+ — treat them like these guys, like you use Facebook, Twitter, Instagram, YouTube, and guest hosts in general. It’s a place to put a little bit of content to reach a new audience. It’s a way to amplify a message you already have. It’s not the home of content. I think that’s really what I urge for modern marketers today.

All right, everyone. Look forward to the comments, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Introducing the State of Digital Academy!

Starting today, State of Digital will be transforming the training industry. We are launching a revolutionary new training programme: The State of Digital Academy.

The Digital Marketing world is changing at high speed. A lot of marketers are trying to stay up-to-date by reading articles and books, monitoring social media, attending events and conferences as well as signing up for training courses.

The biggest problem is that many marketers just don’t know where to get the best help and advice to develop themselves further and become the marketer that they want to be.

Yes, they can find training sessions; yes, they can go to events and conferences. But all of these have one major issue: they are too generic. Courses and events are designed for the market, not for individual circumstances.

As a result, many businesses struggle to get their teams trained to the level required. The ‘one size fits all’ approach that training courses offer does not meet the requirements that most businesses have. They only touch on certain elements, so the marketer on the course doesn’t get what they need out of it, and the course is often not worth the money it costs.

State of Digital has launched a revolutionary new training programme: The State of Digital Academy. This will not only guarantee a better quality of education, it will make sure the content and training provided is worth every penny. How? By customising the training session to the learner’s needs. Training is bespoke to the individual or company booking the course.

Our idea originated from the KoozAcademy, which was launched by Koozai back in 2013. Have a read about their initiative here and you will see why we at State of Digital want to take it one step further and make something similar available to anyone. We loved the concept so much we wanted to build on it, and luckily Koozai let us!

Using high-level trainers with hands-on experience in the industry, State of Digital offers customised training to agencies, brands and in-house teams. Trainers will come to the offices of the trainees. They will spend time getting them completely up-to-date with the topics they actually need to gain knowledge on, and the course will be fully customised to the group sitting it.


State of Digital will work with brands, in-house teams and agencies to understand where the knowledge gaps are within the businesses. We will conduct an in-depth assessment of the business before matching the business with the most appropriate trainers for them. The State of Digital team will then offer a completely customised training programme tailored to the needs of the business.

Find out more…

Post from Bas van den Beld


Using Snapchat for Business

Let’s face it – Snapchat is everywhere. The app’s explosive growth continues, with Facebook and Twitter looking on in awe.

The message-sharing app has 100 million daily active users, with 8,796 photos shared every second. A whopping 760 million snaps are posted every day on the app, while Snapchat Stories content is viewed 500 million times per day. Not bad for an app which has only been kicking around since September 2011, and according to a recent article on The Verge, ‘snaps and videos in the app are now being viewed over 6 billion times a day!’ Phew!

Many people – and, in my opinion, many businesses – are missing a massive trick when it comes to how the app can be used and the benefits of its Snapchat Stories feature. I’ve already blogged on how West Midlands Police have caught onto the idea of using Snapchat Stories to engage with their youth audience…

So how can businesses use Snapchat?

Here’s the heads up on key ways to integrate Snapchat into your marketing activities:

1: Create behind the scenes content

Due to its very nature, 10-second images and Snapchat Stories are only available for 24 hours. Start out on Snapchat by creating behind-the-scenes content which is only available via your business’s Snapchat account. With Snapchat Stories you can tell a particular story with a series of snaps – these can be still images or short bursts of video, 10 seconds apiece. Perhaps a Meet the Team Snapchat Story, or a look at a new product or service your business is launching.

A great example of a brand getting on board with Snapchat was Whole Hearted Clothing, who decided to give its followers a sneak peek behind the scenes at their Autumn/Fall clothing range. Snapchat has proved especially useful for them in building up some extra PR and offering rewards to their loyal customers. Clothing brand American Eagle has also pushed out content to its customers via Snapchat, giving them a sneak peek at new clothing lines.

2: Discounts, Promos and Giveaways

A great way to start building your following on Snapchat is to use your established social media accounts, such as Twitter and Facebook, to tease curious viewers across to your Snapchat account – with one of your snaps potentially offering a discount code, hidden so that the viewer only gets it once they watch the complete Snapchat Story.

New York-based frozen yogurt chain 16 Handles was one of the first to test a promotion on Snapchat. 16 Handles created an account, ‘Love16Handles’, and encouraged users to participate by visiting select locations at certain times and snapping photos of their friends enjoying their frozen yogurt.

Once users snapped, they’d receive a special coupon code for 16% off, 50% off, or 100% off, which they then had just 10 seconds to show the cashier.


I recently created a Snapchat Story in which I gave away a copy of my book to one lucky Snapchat viewer. Each person who viewed the snap within the 24 hour period was entered into the competition and a winner picked at random. By scrolling up on the Snapchat Story I could see everyone who viewed the snap and make a note of their username. I put them all in a hat and picked a winner. I then created another Snapchat story in which I announced the winner.

At the Coachella festival 2014, Heineken used the app as a way to engage with its audience by giving them clues about surprise gigs via Snapchat. Fans connected to the brand’s username HeinekenSnapWho. Viewers who correctly guessed the artist were given early heads up confirmation of the act scheduled for the Heineken House stage.

Engaging followers in one-to-one conversations and rewarding them with exclusive content put Heineken in the premier league of smart Snapchat users.

[Image: Heineken’s Snapchat campaign at Coachella]

3: Create Clever Content

It’s still early days for Snapchat being utilised by companies and organisations, so before the platform gets saturated it’s really beneficial to think of clever ways to create content on it. Put some real time and thought into content concepts you think might engage Snapchat users. I’ve recently begun to create my own Online Reputation Snapchat Stories. My aim is to build a meaningful Story around protecting your online reputation and managing your digital tattoo, e.g. ‘My Top 3 Online Reputation Tips on how to create positive content online’ or ‘Why your privacy settings are important on social media’.

A great UK example of clever Snapchat use that’s right on target for its audience was a campaign by Co-operative Electrical. They decided to use Snapchat to reach students with a piece of content offering £30 off the purchase of a laptop. To claim the coupon, the Snapchat user just had to add Co-operative Electrical as a Snapchat friend.

[Image: Co-operative Electrical’s Snapchat laptop offer]

4: Advice and Tips

Snapchat provides an excellent way to share quick snippets of advice and tips about your business, products or brand via a Snapchat Story. As each snap runs for up to 10 seconds, you can add more than one to extend the message. Once one snap finishes, the next will play, giving you a sequential campaign. The key with Snapchat Stories is to make the content fun and relevant.

It’s still early days with Snapchat, but some of the results have been interesting for the businesses and brands that have decided to test the water with this new medium. It does present its own challenges – not least that creating engaging content which only lasts for 24 hours is risky, and may or may not yield a return. But I’d recommend experimenting with the app and trying different types of content, in both image and video format. If you can get it to stick, this app – or apps like it – could become integral to future marketing campaigns.

Post from Wayne Denner


Source link

30+ Important Takeaways from Google's Search Quality Rater's Guidelines

Posted by jenstar

For many SEOs, a glimpse at Google’s Search Quality Rater’s Guidelines is akin to looking into Google’s ranking algorithm. While they don’t give the secret sauce to rank number one on Google, they do offer some incredible insight into what Google views as quality – and not-so-quality – and the types of pages they want to serve at the top of their search results.

Last week, Google made the unprecedented move of releasing the entire Search Quality Rater’s Guidelines, following an analysis of a leaked copy obtained by The SEM Post. While Google released a condensed version of the guidelines in 2013, until last week it had never released the full guidelines that search quality raters receive.

First, it’s worth noting that quality raters themselves have no bearing on the rankings of the sites they rate. So quality raters could assign a low score to a website, but that low rating would not be reflected at all in the actual live Google search results.

Instead, Google uses the quality raters for experiments, assessing the quality of the search results when they run these experiments. The guidelines themselves are what Google feels searchers are looking for and want to find when they do a Google search. The type of sites that rate highest are the sites and pages they want to rank well. So while it isn’t directly search algorithm-related, it shows what they want their algos to rank the best.

The document itself weighs in at 160 pages, with hundreds of examples of search results and pages with detailed explanations of why each specific example is either good, bad, or somewhere in between. Here’s what’s most important for SEOs and webmasters to know in these newly-released guidelines.

Your Money or Your Life Pages (aka YMYL)

SEOs were first introduced to the concept of Your Money or Your Life pages last year in a leaked copy of the guidelines. These are the types of pages that Google holds to the highest standards because they’re the types of pages that can greatly impact a person’s life.

While anyone can make a webpage about a medical condition or offer advice about things such as retirement planning or child support, Google wants to ensure that these types of pages that impact a searcher’s money or life are as high-quality as possible.

In other words, if low-quality pages in these areas could “potentially negatively impact users’ happiness, health, or wealth,” Google does not want those pages to rank well.

If you have any web pages or websites that deal in these market areas, Google will hold your site to a higher standard than it would a hockey team fan page or a page of rice cooker recipes.

It is also worth noting that Google does consider any website that has a shopping component, such as an online store, as a type of site that also falls under YMYL for ratings. Therefore, ensuring the sales process is secure would be another thing raters would consider.

If a rater wouldn’t feel comfortable ordering from the site or submitting personal information to it, then it wouldn’t rate well. And if a rater feels this way, it’s very likely visitors would feel the same too — meaning you should take steps to fix it.

Market areas for YMYL

Google details five areas that fall into this YMYL category. If your website falls within one of these areas, or you have web pages within a site that do, you’ll want to take extra care that you’re supporting this content with things like references, expert opinions, and helpful supplementary or additional content.

  • Shopping or financial transaction pages
    This doesn’t apply merely to sites where you might pay bills online, do online banking, or transfer money. Any online store that accepts orders and payment information will fall under this as well.
  • Financial information pages
    There are a ton of low-quality websites that fall under this umbrella of financial information pages. Google considers these types of pages to be in the areas of “investments, taxes, retirement planning, home purchase, paying for college, buying insurance, etc.”
  • Medical information pages
    Google considers these types of pages to go well beyond standard medical conditions and pharmaceuticals; they also cover things such as nutrition and very niche health sites for sufferers of specific diseases or conditions — the types of sites that are often set up by people suffering from the medical condition themselves.
  • Legal pages
    We’ve seen a ton of legal-related sites pop up from webmasters who are looking to cash in on AdSense or affiliate revenue. But Google considers all types of legal information pages as falling under YMYL, including things such as immigration, child custody, divorce, and even creating a will.
  • All-encompassing “Other”
    Then, of course, there are a ton of other types of pages and sites that can fall under YMYL that aren’t necessarily in any of the above categories. These are still things where having the wrong information can negatively impact the searcher’s happiness, health, or wealth. For example, Google considers topics such as child adoption and car safety information as falling under this as well.

Google makes frequent reference to YMYL pages within the quality guidelines and repeatedly stresses the importance of holding these types of sites to a higher bar than others.

Expertise / Authoritativeness / Trustworthiness, aka E-A-T

Expertise / Authoritativeness / Trustworthiness — shortened to E-A-T — refers to what many think of as a website’s overall value. Is the site lacking in expertise? Does it lack authoritativeness? Does it lack trustworthiness? These are all things that raters are asked to consider when it comes to the overall quality of the website or web page, particularly for ones that fall into the YMYL category.

This is also a good rule of thumb for SEO in general. You want to make sure that your website has a great amount of expertise, whether it’s coming from you or contributors. You also want to show people why you have that expertise. Is it experience, relevant education, or other qualities that give the writer of each page that stamp of expertise? Be sure to show and include it.

Authoritativeness is similar, but from the website perspective. Google wants websites that have high authority on the topic. This can come from the expertise of the writers, or even the quality of the community if it’s something like a forum.

When it comes to trustworthiness, again Google wants raters to decide: Is this a site you feel you can trust? Or is it somewhat sketchy, leaving you with trouble believing what the website is trying to tell you?

Why you need E-A-T

This also comes down to something that goes well beyond just the quality raters and how they view E-A-T. It’s something that you should consider for your site even if these quality raters didn’t exist.

Every website should make a point of either showing how their site has a high E-A-T value or figure out what it is they can do to increase it. Does it mean bringing contributors on board? Or do you merely need to update things like author bios and “About Me” pages? What can you do to show that you have the E-A-T that not only quality raters are looking for, but also just the general visitors to your site?

If it is forums, can your posters show their credentials on publicly-visible profile pages, with additional profile fields for anything specific to the market area? This can really help to show expertise, and your contributors to the forums will appreciate being showcased as an expert, too.

This comes back to the whole concept of quality content. When a searcher lands on your page and they can easily tell that it’s created by someone (or a company) with high E-A-T, this not only tells that searcher that this is great authoritative content, but they’re also that much more likely to recommend or share it with others. It gives them the confidence that they’re sharing trustworthy and accurate information in their social circles.

Fortunately for webmasters, Google does discuss how someone can be an authority with less formal expertise; they’re not looking for degrees or other formal education for someone to be considered an expert. Things like great, detailed reviews, experiences shared on forums or blogs, and even life experience are all things that Google takes into account when considering whether someone’s an authority.

Supplementary content

Supplementary content is where many webmasters are still struggling. Sometimes it’s not easy to add supplementary content, like sidebar tips, into something like your standard WordPress blog for those who are not tech-savvy.

However, supplementary content doesn’t have to require technical know-how. It can comprise things such as similar articles. There are plenty of plug-ins that allow users to add suggested content and can be used to provide helpful supplementary content. Just remember: the key word here is helpful. Things like those suggested-article ad networks, particularly when they lead to Zergnet-style landing pages, are not usually considered helpful.

Think about the additional supporting content that can be added to each page. Images, related articles, sidebar content, or anything else that could be seen as helpful to the visitor of the page is all considered supplementary content.

If you are questioning whether something on the page counts as supplementary content, look at the page — anything that isn’t either the main article or advertising can be considered supplementary content. Yes, this includes strong navigation, too.

Page design

By now you’d think this is a no-brainer, but there are still some atrocious page designs out there with horrible user experiences. But this goes much further than how easy the website is to use.

Google wants raters to consider the focus of the pages. Ideally, the main content of the page, such as the main article, should be “front and center” and the highlight of the page. Don’t make your user scroll down to see the article. Don’t have a ton of ads above the fold that push the content lower. And don’t try to disguise your ad content. These are all things that will affect the rating.

They do include a caveat: Ugly does not equal bad. There are some ugly websites out there that are still user-friendly and meet visitors’ needs; Google even includes some of them as examples of pages with positive ratings.

More on advertising & E-A-T

Google isn’t just looking for ads that are placed above the fold and in a position where one would expect the article to begin. They examine some other aspects as well that can impact the user experience.

Are you somehow trying to blend your advertising too much with the content of the page? This can be an issue. In Google’s words, they say that ads can be present for any visitors that may want to interact with them. But the ads should also be something that can be ignored for those who aren’t interested in the ads.

They also want there to be a clear separation between advertising and the content. This doesn’t mean you must slap a big “ads” label on them, or anything along those lines. But there should be a distinction to differentiate the ads from the main content. Most websites do this, but many try to blur the lines between ads and content to incite accidental clicks from those who don’t realize they’re actually ads.

All about the website

There are still a ton of websites out there that lack basic information about the site itself. Do you have an “About” page? Do you have a “Contact Us” page so that visitors can contact you? If you are selling a service or a product, do you have a customer service page?

If your site falls into the YMYL category, Google considers this information imperative. But if your site isn’t a YMYL page, Google suggests that just a simple email address is fine, or you can use something like a contact form.

Always make sure there’s a way for a visitor to find a little bit more about you or your site, if they’re so inclined. But be sure to go above and beyond this if it’s a YMYL site.

Reputation

For websites to get the highest possible rating, Google is looking at reputation as well. They ask the raters to consider the reputation of the site or author, and also ask them to do reputation research.

They direct the raters to look at Wikipedia and “other informational sources” as places to start doing reputation research when it comes to more formal topics. So if you’re giving medical advice or financial advice, for example, make sure that you have your online reputation listed in places that would be easy to find. If you don’t have a Wikipedia page, consider professional membership sites or similar sites to showcase your background and professional reputation.

Google also considers that there are some topics where this kind of professional reputation isn’t available. In these cases, they say that the rater can look at things such as “popularity, user engagement, and user reviews” to discover reputation within the community or market area. This can often be represented simply by a site that is highly popular, with plenty of comments or online references.

What makes a page low-quality?

On the other end of the spectrum, we have pages that Google considers low-quality. And as you can imagine, a lot of what makes a page low-quality should be obvious to many in the SEO industry. But as we know, webmasters aren’t necessarily thinking from the perspective of a user when gauging the quality of their sites, or they’re looking to take advantage of shortcuts.

5 clues

Google does give us insight into exactly what they consider low-quality, in the form of five things raters should look for. Any one of these will usually result in the lowest ratings.

  1. The quality of the main content is low.
    This shouldn’t be too surprising. Whether it’s spun content or just poorly-written content, low-quality content means a low rating. Useless content is useless.
  2. There is an unsatisfying amount of main content for the purpose of the page.
    This doesn’t mean that short content cannot be considered great-quality content. But if your three-sentence article needs a few more paragraphs to fully explain what the title of that article implies or promises, then you need to rethink that content and perhaps expand it. Thin content is not your SEO friend.
  3. The author of the page or website doesn’t have enough expertise for the topic of the page, and/or the website is not trustworthy or authoritative enough for the topic. In other words, the page/website is lacking E-A-T.
    Again, Google wants to know that the person has authority on the subject. If the site isn’t displaying the characteristics of E-A-T, it can be considered low-quality.
  4. The website has a negative reputation.
    This is where reputation research comes back into play. Ensure you have a great online reputation for your website (or your personal name, if you’re writing under your own name). That said, don’t be overly concerned about it if you have a couple of negative reviews; almost every business does. But if you have overwhelmingly negative reviews, it will be an issue when it comes to how the quality raters see and rate your site.
  5. The supplementary content is distracting or unhelpful for the purpose of the page.
    Again, don’t hit your visitors over the head with all ads, especially if they’re things like autoplay video ads or super flashy animated ads. Google wants the raters to be able to ignore ads on the page if they don’t need them. And again, don’t disguise your ads as content.

Sneaky redirects

If you include links to affiliate programs on your site, be aware that Google does consider these to be “sneaky redirects” in the Quality Rater’s Guidelines. While there isn’t necessarily anything bad about one affiliate link on the page, bombarding visitors with those affiliate links can impact the perceived quality of the page.

The raters are also looking for other types of redirects. These include the ones we usually see used as doorway pages, where you’re redirected through multiple URLs before you end up at the final landing page — a page which usually has absolutely nothing to do with the original link you clicked.

Spammy main content

There’s a wide variety of things that Google is asking the raters to look at when it comes to quality of the main content of the page. Some are flags for what Google considers to be the lowest quality — things that are typically associated with spam. A lot of things are unsurprising, such as auto-generated main content and gibberish. But Google wants their raters to consider other things that signal low quality, in their eyes.

Keyword stuffing

While we generally associate keyword stuffing with content so heavy with keywords that it comes across as almost unreadable, Google also considers it keyword stuffing when the overuse of those keywords seems only a little bit annoying. So for those of you that think you’re being very clever about inserting a few extra keywords in your content, definitely consider it from an outsider’s point of view.
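If you want a rough way to sanity-check your own copy, a crude density count can help you see it the way an outsider would. This is just a heuristic sketch of my own — Google publishes no density threshold, and the function below is illustrative, not anything from the guidelines.

```python
import re

# A rough self-check, not an official metric: how much of the copy is the
# target phrase? High numbers usually read as forced to an outside visitor.
def keyword_density(text: str, phrase: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    occurrences = text.lower().count(phrase.lower())
    return 100.0 * occurrences * len(phrase.split()) / max(len(words), 1)

copy = ("Our car insurance quotes make car insurance easy. Compare car "
        "insurance today and save on car insurance with our car insurance tool.")
print(f"'car insurance' makes up {keyword_density(copy, 'car insurance'):.1f}% of the words")
```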

Copied content

This shouldn’t come as a surprise, but many people feel that unless someone is doing a direct comparison, they can get away with stealing or “borrowing” content. Whether you’re copying or scraping the content, Google asks the raters to look specifically at whether the content adds value or not. They also instruct them on how to find stolen content using Google searches and the Wayback Machine.

Abandoned

We still come across sites where the forum is filled with spam, where there’s no moderation on blog comments (so they’re brimming with auto-approved pharmaceutical spam), or where they’ve been hacked. Even if the content seems great, this still signals an untrustworthy site. If the site owner doesn’t care enough to prevent it, why should a visitor care enough to consider it worthy?

Scam sites

Whether a site is trying to solicit extensive personal information, is a known scam, or is a phishing page, these are all signs of a lowest-quality page. Also included are pages with suspicious download links. If you’re offering a download, make sure it comes across as legitimate as possible, or use a third-party verified service for offering downloads.

Mobile-friendly

If you haven’t taken one of the many hints from Google to make your site mobile friendly, know that this will hurt the perceived quality of your site. In fact, Google tells their raters to rate any page that is not mobile-friendly (a page that becomes unusable on a mobile device) at the lowest rating.

In this latest version of the quality guidelines, all ratings are now being done on a mobile device. Google has been telling us over and over for the last couple of years that mobile is where it’s at, and many countries have more mobile traffic than desktop. So, if you still haven’t made your site mobile-friendly, this should tell you emphatically that it needs to be a priority.

If you have an app, raters are also looking at things like app installs and in-app content in the search results.

Know & Know Simple Queries

Google added a new concept to their quality guidelines this year. It comes down to what they consider “Know Queries” and “Know Simple Queries.” Why is this important? Because Know Simple Queries are the driving force behind featured snippets, something many webmasters are coveting right now.

Know Simple

Know Simple Queries are the types of searches that could be answered in either one to two sentences or in a short list. These are the types of answers that can be featured quite easily in a featured snippet and contain most of the necessary information.

These are also queries where there’s usually a single accepted answer that most people would agree on. These are not controversial questions or types of questions where there are two very different opinions on the answer. These include things such as how tall or how old a particular person is – questions with a clear answer.

These also include implied queries. These are the types of searches where, even though it’s not in the form of a question, there’s clearly a question being asked. For example, someone searching for “Daniel Radcliffe’s height” is really asking “How tall is Daniel Radcliffe?”

If you’re looking for featured snippets, these are the types of questions you want to answer with your webpages and content. And while the first paragraph may only be 1–2 sentences long as a quick answer, you can definitely expand on it in subsequent paragraphs, particularly for those who are concerned about the length of content on the page.

Know Queries

The Know Queries are all the rest of the queries that would be too complex or have too many possible answers. For example, searches related to stock recommendations or a politician wouldn’t have a featured snippet because it’s not clear exactly what the searchers are looking for. “Barack Obama” would be a Know Query, while “Barack Obama’s age” would be a Know Simple Query.

Many controversial topics are considered to be Know Queries, because there are two or more very different opinions on the topic that usually can’t be answered in those 1–2 sentences.

The number of keywords in the search doesn’t necessarily determine whether it is a Know Query or a Know Simple Query. Many long-tail searches would still be considered Know Queries.

Needs Met

Needs Met is another new section in the Quality Rater’s Guidelines. It looks at how well a search result meets the intent behind the searcher’s query. This is where sites that are trying to rank for queries they don’t have supporting content for will have a hard time, since those landing pages won’t meet what searchers are actually looking for.

Ratings for this range from “Fully Meets” to “Fails to Meet.”

The most important thing to know is that any site that is not mobile-friendly will get “Fails to Meet.” Again, if your site is not mobile-friendly, you need to make this an immediate priority.

Getting “Highly Meets”

Essentially, your page needs to be able to answer whatever the search query is. This means that the searcher can find all the information they were looking for from their search query on your page, without having to visit other pages or websites for the answer. This is why it’s so crucial to make sure that your titles and keywords match your content, and that your content is of high enough quality to fully answer whatever searchers are looking for when your page surfaces in the SERPs.

Local Packs & “Fully Meets”

If your site is coming up in a local 3-pack, as long as those results in the 3-pack match what the query was, they can be awarded “Fully Meets.” The same applies when it’s a local business knowledge panel — again, provided that it matches whatever the search query is. This is where local businesses that spam Google My Business will run into problems.

Product pages

If you have a quality product page and it matches the search query, this page can earn “Highly Meets.” It can be for both more general queries — the type that might lead to a page on the business website that lists all the products for that product type (such as a listing page for backpacks) — or for a specific product (such as a specific backpack).

Featured snippets

Raters also look at featured snippets, gauging how well those snippets answer the question. We’ve all seen instances where a featured snippet seems quite odd compared to what the search query is, so Google seems to be testing how well their algorithm is choosing those snippets.

“Slightly Meets” and “Fails to Meet”

Google wants the raters to look at things like whether the content is outdated, or is far too broad or specific to what the page is primarily about. Also included is content that’s created without any expertise or has other signals that make it low-quality and untrustworthy.

Dated & updated content

There’s been a recent trend lately where webmasters change the dates on some of their content to make it appear more recent than it really is, even if they don’t change anything on the page. In contrast, others add updated dates to their content when they do a refresh or check, even when the publish date remains the same. Google now takes this into account and asks raters to check the Wayback Machine if there are any questions about the content date.

Heavy monetization

Often, YMYL sites run with heavy monetization. This is one of the things that Google asks the raters to look for, particularly if it’s distracting from the main content. If your page is YMYL, then you’ll want to balance the monetization with usability.

Overall

First and foremost, the biggest takeaway from the guidelines is to make your site mobile-friendly (if it’s not already). Without being mobile-friendly, you’re already missing out on the mobile-friendly ranking boost, which means your site will get pushed down further in the results when someone searches on a mobile device. Clearly, Google is also looking at mobile-friendliness as a sign of quality. You might have fabulous, high-quality content, but Google sees those non-mobile-friendly pages as low-quality.

Having confirmation about how Google looks at queries when it comes to featured snippets means that SEOs can take more advantage of getting those featured snippets. Gary Illyes from Google has said that you need to make sure that you’re answering the question if you want featured snippets. This is clearly what’s at the heart of Know Simple Queries. Make sure that you’re answering the question for any search query you hope to get a featured snippet on.

Take a look at your supplementary content on the page and how it supports your main content. Adding related articles and linking to articles found on your own site is a simple way to provide additional value for the visitor — not to mention the fact that it will often keep them on your site longer. Think usefulness for your visitors.

And while looking at that supplementary content, make sure you’re not going overboard with advertising, especially on sites that are YMYL. It can sometimes be hard to find that balance between monetization and user experience, but this is where looking closely at your monetization efforts and figuring out what’s actually making money can really pay off. It’s not uncommon to find that some ad units generate pennies a month and are really not worth cluttering up the page for fifty cents of monthly revenue.

Make sure you provide enough information for a visitor (or a quality rater) to answer simple questions about your site. Is the author reputable? Does the site have authority? Should people consider the site trustworthy? And don’t forget to include things like a simple contact form. Your site should reflect E-A-T: Expertise, Authoritativeness, and Trustworthiness.

Bottom line: Make sure you present the highest-quality content from highly reputable sources. The higher the perceived value of your site, the higher the quality ratings will be. While this doesn’t translate directly into higher rankings, doing well with regards to these guidelines can translate into the type of content Google wants to serve higher in the search results.



Source link

How to Seek Out Content Ideas with Low SEO Competition

When I conduct keyword research for clients, a lot of focus is given to product/service keywords, a.k.a. ‘money’ keywords – i.e. the search terms that’ll likely convert visitors into enquiries and sales (e.g. “car insurance quote”). However another angle is to look at informational/content-driven keywords, such as ‘how to’ terms, which may not necessarily convert into a sale directly, but may draw in potential future customers as well as links and social shares (“how to get proof of no claims bonus”).

Recently I’ve given recommendations to a few clients based on what they could be blogging about or creating informational resources around, which could rank highly of their own accord – especially those that have a ‘high search volume:low competition’ ratio. In this post I share my process as well as a case study where it’s worked pretty well…

Keyword Research 101

When I started writing this post, I almost jumped straight into the next section (how to assess low competition for a keyword), but I realised that some readers may be new to or inexperienced with keyword research, so I didn’t want to gloss over the subject…

The Google AdWords Keyword Planner is a good way to determine whether keywords have search volume or not – i.e. whether or not people actually type them into Google. Obviously there’s little point chasing a keyword that has little-to-no search volume (unless it’s extremely niche, it’s a potential grower in the future, or one enquiry is worth big bucks), so it’s important to make sure that you’re honing in on the right keywords for your clients’ industries. It’s also important to consider semantics and synonyms – e.g. do people type in “car insurance,” “vehicle insurance,” or “auto insurance”? It may be more obvious in some cases than others, but it’s always worth taking the time to double-check – I can’t tell you the number of times someone has told me that they’re adamant that a keyword is popular, only for the data to prove otherwise.

If you’re new to keyword research and you’d like to learn more, I recommend this resource by Moz.

Assessing Low SEO Competition Keywords

Once you have your list of content-driven keywords, a quick and easy way to assess their competition from an SEO point of view is to do a Google search for them:


Using the aforementioned example (“how to get proof of no claims bonus”), you can see that there are over a million results in Google’s index related to the topic. However, some of these could be incidental – e.g. a page that happens to have the words “proof,” “claims,” “bonus,” etc. scattered around it, completely out of context with one another.

A good way to look at the realistic SEO competition is to do an ‘allintitle’ search for the keyword. This limits the results to pages that contain the words of the search term in their page title. Given that the page title is considered to be one of the most important elements of SEO, it’s fair to say that if a page has the words in the title, they are a serious contender in terms of the SEO side of things.

[Image: allintitle search results for the example keyword]
So for the same keyword, we’ve gone from 1m+ results… down to just 6.

Now it’s not foolproof, as there are many other factors related to SEO success, but it’s fair to suggest that if someone writes a post all about how you can get proof of no claims bonus, with those words in the title, it stands a very good chance of ranking on page 1 for the keyword. I stress: it’s not foolproof, but wouldn’t you rather write a blog post that has a good chance of ranking on page 1 than one that has no hope of ranking at all…?

If you’re drilling through a long list of keywords, you can use KEI (Keyword Effectiveness Index) to help distinguish the better opportunities at a glance. KEI is a ratio of search volume to competition – the higher the volume and the lower the competition, the higher the KEI score. So if you have a list of dozens of opportunities to scan through, you might find that a keyword with modest search volume but virtually zero competition is a more encouraging target than one with more searches but far more competition – even though raw search volume can sometimes be the lead incentive in deciding whether to fight for a keyword or not.
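For what it’s worth, the KEI figure quoted in the case study below (volume 30, 7 competing pages, KEI 128.57) is consistent with the common formulation KEI = volume² ÷ competing pages. Here’s a minimal sketch of scoring a keyword list that way, assuming you’ve already pulled volumes from the Keyword Planner and noted the allintitle competing-page counts by hand; apart from that one keyword, the entries and numbers below are purely illustrative.

```python
# Minimal sketch: rank keyword opportunities by KEI = volume^2 / competing pages.
# Volumes come from the Keyword Planner; competing-page counts are the result
# counts you noted down from manual allintitle: searches. All figures are
# illustrative except "agile testing tools list", which uses the article's numbers.

def kei(volume: int, competing_pages: int) -> float:
    """Keyword Effectiveness Index: rewards high volume and low competition."""
    return volume ** 2 / max(competing_pages, 1)  # guard against zero competitors

keywords = [
    ("agile testing tools list", 30, 7),
    ("how to write test cases in software testing with example", 90, 40),
    ("test case management best practices", 20, 3),
]

for kw, volume, competitors in sorted(keywords, key=lambda k: kei(k[1], k[2]), reverse=True):
    print(f"{kw:<60} volume={volume:>4} competitors={competitors:>3} KEI={kei(volume, competitors):7.2f}")
```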

Case Study – An Immediate Page 1 Ranking

I’ve done this for a few clients recently – and it’s worked especially well for TestLodge, a test case management software provider who I started working with very recently. After doing a big keyword research project covering their market to kick things off, I used the allintitle method and KEI to find all the content-driven keywords with global monthly search volume data that also had fewer than 10 competing webpages. We ended up with a list of c. 200 keywords. Here’s a snippet of some of the best (in terms of search volume):

[Image: snippet of the best content-driven keyword opportunities for TestLodge]
Now 200 keywords does not necessarily mean 200 different blog post ideas, as some keywords would inevitably overlap (e.g. “how to write test cases in software testing with example” and “how to write test cases in software testing with sample” are essentially the same thing, but with a different synonym at the end). But there were at least a few dozen ideas there – they aim to write a blog post a week, so it should keep them busy for a while…! Not only that, but they had been struggling for ideas for future posts, and now they have a list of ideas that are SEO-focused to boot.

They tested the water by writing a post titled “Agile Testing Tools List” (search volume: 30; competing pages: 7; KEI: 128.57). Within only a day or two, it started ranking on the bottom of page 1 of Google UK:

[Image: TestLodge’s new post ranking on page 1 of Google UK]
Obviously higher would be better, but it’s not bad for starters. It shows proof of concept to the client, who are now 100% on board with the idea and want to approach it full steam ahead. We’re also early on in our working arrangement, so hopefully the ranking will improve as we conduct more SEO work in the coming months.

Regarding the above case study, it’s fair to say that freshness could also be a factor in its early success – but the client is serious about SEO, so it’s our hope that the ranking is here to stay and that other keywords will follow suit. Even if it only holds temporarily, it’s a good sign that the content has the potential to rank highly pretty quickly.

Other Considerations

If you’re doing this kind of research and analysis, here are a few other tips and/or considerations to bear in mind:

  • Even if you decide to go after one keyword from a list based on its KEI score, there’s no reason why you can’t try to go for multiple keywords in one post. The title can reflect one keyword, while the body of the post can reflect others related to it. Take the “how to write test cases in software testing with example” example above – if you also include the words “example” and “sample” in the post naturally (rather than just the former), you stand more of a chance of ranking for both keywords.
  • I always recommend tweaking the title. Make sure that it contains the words of the keyword, but don’t hesitate to make it more fun, more eye-catching, more clickable. For example, the post didn’t have to just be called “Agile Testing Tools List” – it could be “The Ultimate List of Agile Testing Tools” or “An Agile Testing Tools List for the Pros” instead. This is especially important if all the competition is simply mimicking the keyword in the page title – be the result that stands out.
  • Lastly, with anything keyword research-related, it can be all too easy to get carried away by the data. Just because something has high search volume and low competition, it doesn’t necessarily mean that it’s a topic worth writing about for you. So always keep in mind the psychology of the searcher in relation to the keyword… Is what you’re offering relevant to the searcher? Is it important to them? Are they a likely potential customer, whether now or in the future?

If you find any really good examples, I’d love to hear about them – feel free to leave an example in the comments below or tweet me instead. Have fun!


Post from Steve Morgan


Source link

RankBrain Unleashed

Posted by gfiorelli1

Disclaimer: Much of what you’re about to read is based on personal opinion. A thorough reflection about RankBrain, to be sure, but still personal — it doesn’t claim to be correct, and certainly not “definitive,” but has the aim to make you ponder the evolution of Google.

Introduction

Whenever Google announces something as important as a new algorithm, I always try to hold off on writing about it immediately, to let the dust settle, digest the news and the posts that talk about it, investigate, and then, finally, draw conclusions.

I did so in the case of Hummingbird. I do it now for RankBrain.

In the case of RankBrain, this is even more correct, because — let’s be honest — we know next to nothing about how RankBrain works. The only things that Google has said publicly are in the video Bloomberg published and the few things unnamed Googlers told Danny Sullivan for his article, FAQ: All About The New Google RankBrain Algorithm.

Dissecting the sources

As I said before, the only direct source we have is the video interview published on Bloomberg.

So, let’s dissect what Jack Clark, the Bloomberg reporter, said in that video, and what Greg Corrado — senior research scientist at Google and one of the founding members and co-technical lead of Google’s large-scale deep neural networks project — told Clark.

RankBrain is already worldwide.

I wanted to say this first: If you’re wondering whether or not RankBrain is already affecting the SERPs in your country, now you know — it is.

RankBrain is Artificial Intelligence.

Does this mean that RankBrain is our first evidence of Google as the Star Trek computer? No, it does not.

It’s true that many Googlers — like Peter Norvig, Corinna Cortes, Mehryar Mohri, Yoram Singer, Thomas Dean, Jeff Dean and many others — have been investigating and working on machine/deep learning and AI for a number of years (since 2001, as you can see when scrolling down this page). It’s equally true that much of the Google work on language, speech, translation, and visual processing relies on machine learning and AI. However, we should consider the topic of ANI (Artificial Narrow Intelligence), which Tim Urban of Wait But Why describes as: “Machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing.”

Considering how Google is still buggy, we could have some fun and call it HANI (Hopefully Artificial Narrow Intelligence).

All jokes aside, Google clearly intends for its search engine to be an ANI in the (near) future.

RankBrain is a learning system.

With the term “learning system,” Greg Corrado surely means “machine learning system.”

Machine learning is not new to Google. We SEOs discovered how Google uses machine learning when Panda rolled out in 2011.

Panda, in fact, is a machine learning-based algorithm able to learn through iterations what a “quality website” is — or isn’t.

In order to train itself, it needs a dataset and yes/no factors. The result is an algorithm that is eventually able to achieve its objective.

Iterations, then, are meant to provide the machine with a constant learning process, in order to refine and optimize the algorithm.
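As a toy illustration of that kind of learning system (and emphatically not Google’s actual Panda code), here is a sketch in which a handful of hand-labelled yes/no quality judgements, plus a few hypothetical page signals, are fed to a simple classifier that can then score a page it has never seen.

```python
# Toy illustration of a supervised learning system: a dataset of pages with
# hand-made yes/no quality labels and a few hypothetical numeric signals.
# The model iteratively fits a rule that separates the two classes.
from sklearn.linear_model import LogisticRegression

# Each row: [ads_above_fold, has_author_bio, is_thin_content, spelling_errors_per_100_words]
X = [
    [5, 0, 1, 4.0],   # ad-heavy, anonymous, thin        -> judged low quality
    [1, 1, 0, 0.2],   # clean layout, clear author, deep -> judged high quality
    [4, 0, 1, 2.5],
    [0, 1, 0, 0.1],
    [3, 0, 0, 1.8],
    [1, 1, 0, 0.4],
]
y = [0, 1, 0, 1, 0, 1]  # the human yes/no quality judgements (the training dataset)

model = LogisticRegression().fit(X, y)

unseen_page = [[2, 1, 0, 0.5]]
print("Estimated probability of 'quality':", round(model.predict_proba(unseen_page)[0][1], 2))
```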

Hundreds of people are working on it, and on building computers that can think by themselves.

Uhhhh… (Sorry, I couldn’t resist.)

RankBrain is a machine learning system, but — from what Greg Corrado said in the video — we can infer that in the future, it will probably be a deep learning one.

We do not know when this transition will happen (if ever), but assuming it does, then RankBrain won’t need any input — it will only need a dataset, over which it will apply its learning process in order to generate and then refine its algorithm.

Rand Fishkin has visualized, in a very simple but correct way, what a deep learning process is.

Remember — and I repeat this so there’s no misunderstanding — RankBrain is not (yet) a deep learning system, because it still needs inputs in order to work. So… how does it work?

It interprets languages and interprets queries.

Paraphrasing the Bloomberg interview, Greg Corrado gave this information about how RankBrain works:

It works when people make ambiguous searches or use colloquial terms, trying to solve a classic breakdown computers have because they don’t understand those queries or never saw them before.

We can consider RankBrain to be the first 100% post-Hummingbird algorithm developed by Google.

Even if we had some new algorithms rolling out after the Hummingbird release (e.g. Quality Update), those were based on pre-Hummingbird algos and/or were serving a very different phase of search (the Filter/Clustering and Ranking ones, specifically).

[Image: the phases of the search process. Credit: Enrico Altavilla]

RankBrain seems to be a needed “patch” to the general Hummingbird update. In fact, we should remember that Hummingbird itself was meant to help Google understand “verbose queries.”

However, as Danny Sullivan wrote in the above mentioned FAQ article at Search Engine Land, RankBrain is not a sort of Hummingbird v.2, but rather a new algorithm that “optimizes” the Hummingbird work.

If you look at the image above while reading Greg Corrado’s words, we can say with a high degree of correctness that RankBrain acts in between the “Understanding” and the “Retrieving” phases of the overall search process.

Evidently, the too-ambiguous queries and the ones based on colloquialisms were too hard for Hummingbird to understand — so much so, in fact, that Google needed to create RankBrain.

RankBrain, like Hummingbird, generalizes and rewrites those kinds of queries, trying to match the intent behind them.

In order to understand a never-before-seen or unclear query, RankBrain uses vectors, which are — to quote the Bloomberg article — “vast amounts of written language embedded into mathematical entities,” and it tries to see if those vectors may have a meaning in relation to the query it’s trying to answer.
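To make the “vectors” idea concrete, here is a highly simplified sketch (toy, hand-made numbers, nothing like Google’s real embeddings, which are learned from vast text corpora): words are mapped to numeric vectors, a never-before-seen query is averaged into one vector, and candidate documents are ranked by how close their vectors sit to it.

```python
import numpy as np

# Toy word vectors: each word becomes a point in a small numeric space.
# Real systems learn these embeddings from enormous amounts of text.
embeddings = {
    "cheap":  np.array([0.9, 0.1, 0.0]),
    "budget": np.array([0.8, 0.2, 0.1]),
    "hotel":  np.array([0.1, 0.9, 0.2]),
    "stay":   np.array([0.2, 0.8, 0.3]),
    "rome":   np.array([0.0, 0.3, 0.9]),
    "italy":  np.array([0.1, 0.2, 0.8]),
}

def text_vector(text):
    vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

query = "budget stay italy"   # imagine this exact phrasing has never been seen before
documents = ["cheap hotel rome", "hotel italy", "rome"]

for doc in sorted(documents, key=lambda d: cosine(text_vector(query), text_vector(d)), reverse=True):
    print(f"{doc:<18} similarity = {cosine(text_vector(query), text_vector(doc)):.3f}")
```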

Vectors, though, don’t seem to be a completely new feature in the general Hummingbird algorithm. We have evidence of a very similar thing in 2013 via Matt Cutts himself, as you can see from the Twitter conversation below:

At that time, Google was still a ways from being perfect.

Upon discovering web documents that may answer the query, RankBrain retrieves them and lets them proceed, following the steps of the search phase until those documents are presented in a visible SERP.

It is within this context that we must accept the definition of RankBrain as a “ranking factor,” because in regards to the specific set of queries treated by RankBrain, this is substantially the truth.

In other words, the more RankBrain considers a web document to be a potentially correct answer to an unknown or not understandable query, the higher that document will rank in the corresponding SERP — while still taking into account the other applicable ranking factors.

Of course, it will be the choice of the searcher that ultimately informs Google as to what the answer to that unclear or unknown query is.

As a final note, necessary in order to head off the claims I saw when Hummingbird rolled out: No, your site did not lose visibility because of a mysterious RankBrain penalty.

Dismantling the RankBrain gears

Kristine Schachinger, a wonderful SEO geek whom I hold in deep esteem, relates RankBrain to Knowledge Graph and Entity Search in this article on Search Engine Land. However — while I’m in agreement that RankBrain is a patch of Hummingbird and that Hummingbird is not yet the “semantic search” Google announced — our opinions do differ on a few points.

I do not consider Hummingbird and Knowledge Graph to be the same thing. They surely share the same mission (moving from strings to things), and Hummingbird uses some of the technology behind Knowledge Graph, but still — they are two separate things.

This is, IMHO, a common misunderstanding SEOs have. So much so, in fact, that I even tend to not consider the Featured Snippets (aka the answers boxes) part of Knowledge Graph itself, as is commonly believed.

Therefore, if Hummingbird is not the same as Knowledge Graph, then we should think of entities not only as named entities (people, concepts like “love,” planets, landmarks, brands), but also as search entities, which are quite different altogether.

Search entities, as described by Bill Slawski, are as follows:

  • A query a searcher submits
  • Documents responsive to the query
  • The search session during which the searcher submits the query
  • The time at which the query is submitted
  • Advertisements presented in response to the query
  • Anchor text in a link in a document
  • The domain associated with a document

The relationships between these search entities can create a “probability score,” which may determine if a web document is shown in a determined SERP or not.

We cannot exclude the fact that RankBrain utilizes search entities in order to find the most probable and correct answers to a never-before-seen query, then uses the probability score as a qualitative metric in order to offer reasonable, substantive SERPs to the querying user.

The biggest advancement with RankBrain, though, is in how it deals with the quantity of content it analyzes in order to create the vectors. It seems bigger than the classic “link anchor text and surrounding text” that we always considered when discussing, for instance, how the Link Graph works.

There is a patent filed by Google that lists one of the AI experts mentioned by Greg Corrado — Thomas Strohmann — as an author.

That patent, very well explained (again) by Bill Slawski in this post on Gofishdigital.com, describes a process through which Google can discover potential meanings for non-understandable queries.

The patent attributes huge importance to context and “concepts,” which is likely why RankBrain uses vectors (again, “vast amounts of written language embedded into mathematical entities”): those vectors secure a higher probability of understanding context and detecting already-known concepts, and thus a higher probability of positively matching the unknown concepts it’s trying to understand in the query.

Speculating about RankBrain

As the section title says, now I enter in the most speculative part of this post.

What I wrote before, though it may also be considered speculation, has the distinct possibility of being true. What I am going to write now may or may not be true, so please, take it with a grain of salt.

DeepMind and Google Search

In 2014, Google acquired DeepMind, a company specializing in learning systems. I cannot help but think that some of its technology, and the evolutions of that technology, are used by Google to improve its search algorithm — hence the machine learning process of RankBrain.

This article, published last June on technologyreview.com, explains in detail how not having a correctly formatted database is the biggest obstacle to a correct machine and deep learning process. Without it, the neural computing behind machine and deep learning cannot work.

In the case of language, then, having “vast amounts of written language” is not enough if there’s no context, especially if n-grams aren’t used within the search so that the machine can understand it.

However, Karl Moritz Hermann and some of his DeepMind colleagues described in this paper how they were able to discover the kind of annotations they were looking for in classic “news highlights,” which are independent from the main news body.

Allow me to quote the Technology Review article in explaining their experiment:

Hermann and co anonymize the dataset by replacing the actors in sentences with a generic description. An example of some original text from the Daily Mail is this: “The BBC producer allegedly struck by Jeremy Clarkson will not press charges against the “Top Gear” host, his lawyer said Friday. Clarkson, who hosted one of the most-watched television shows in the world, was dropped by the BBC Wednesday after an internal investigation by the British broadcaster found he had subjected producer Oisin Tymon “to an unprovoked physical and verbal attack.”

An anonymized version of this text would be the following:

The ent381 producer allegedly struck by ent212 will not press charges against the “ent153” host, his lawyer said friday. ent212, who hosted one of the most – watched television shows in the world, was dropped by the ent381 wednesday after an internal investigation by the ent180 broadcaster found he had subjected producer ent193 “to an unprovoked physical and verbal attack.”

In this way it is possible to convert the following Cloze-type query to identify X from “Producer X will not press charges against Jeremy Clarkson, his lawyer says” to “Producer X will not press charges against ent212, his lawyer says.”

And the required answer changes from “Oisin Tymon” to “ent212.”

Put that way, the anonymized actor can only be identified with some kind of understanding of the grammatical links and causal relationships between the entities in the story.

Using the Daily Mail, Hermann was able to provide a large, useful dataset to the DeepMind deep learning machine, and thus train it. After the training, the computer was able to correctly answer up to 60% of the questions asked.
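Here is a toy sketch of the anonymisation step described above, using the same Clarkson example; the entity ids and the naive string replacement are illustrative assumptions, not the paper’s actual pipeline.

```python
# Toy sketch: replace named entities with opaque ids, then build a cloze-style
# question by masking one entity so the model must recover it from context.
# Entity list and ids are illustrative, not taken from the paper's code.
text = ('The BBC producer allegedly struck by Jeremy Clarkson will not press '
        'charges against the "Top Gear" host, his lawyer said Friday.')

entities = {"BBC": "ent381", "Jeremy Clarkson": "ent212",
            "Top Gear": "ent153", "Oisin Tymon": "ent193"}

def anonymise(sentence: str) -> str:
    for name, ent_id in entities.items():
        sentence = sentence.replace(name, ent_id)
    return sentence

print(anonymise(text))

# The cloze query and its (anonymised) answer:
question = anonymise("Producer X will not press charges against Jeremy Clarkson, his lawyer says")
answer = entities["Oisin Tymon"]
print(question, "->", answer)
```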

Not a great percentage, we might be thinking. Besides, not all documents on the web are presented with the kind of highlights the Daily Mail or CNN sites have.

However, let me speculate: What are the search index and the Knowledge Graph if not a giant, annotated database? Would it be possible for Google to train its neural machine learning computing systems using the same technology DeepMind used with the Daily Mail-based database?

And what if Google were experimenting and using the Quantum Computer it shares with NASA and USRA for these kinds of machine learning tasks?

Or… What if Google were using all the computers in all of its data centers as one unique neural computing system?

I know, science fiction, but…

Ray Kurzweil’s vision

Ray Kurzweil is usually known for the “futurist” facets of his credentials. It’s easy for us to forget that he’s been working at Google since 2012, personally hired by Larry Page “to bring natural language understanding to Google.” Natural language understanding is essential both for RankBrain and for Hummingbird to work properly.

In an interview with The Guardian last year, Ray Kurzweil said:

When you write an article you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.

The DeepMind technology I cited above seems to be going in that direction, even though it’s still a non-mature technology.

The biggest problem, though, is not really being able to read billions of documents, because Google is already doing that (go read the EULA of Gmail, for instance). The biggest problem is understanding the implicit meaning within the words, so that Google may properly answer users’ questions, or even anticipate the answers before the questions are asked.

We know that Google is hard at work to achieve this, because Kurzweil himself told us so in the same interview:

“We are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.”

The vectors used by RankBrain may be our first glimpse of the technology Google will end up using for understanding all context, which is fundamental for giving a meaning to language.

How can we optimize for RankBrain?

I’m sure you’re asking this question.

My answer? This is a useless question, because RankBrain targets non-understandable queries and those using colloquialisms. Therefore, just as it’s not very useful to create specific pages for every single long-tail keyword, it’s even less useful to try targeting the queries RankBrain targets.

What we should do is insist on optimizing our content using semantic SEO practices, in order to help Google understand the context of our content and the meaning behind the concepts and entities we are writing about.
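One widely used practice consistent with that advice (though not spelled out in this post) is publishing schema.org structured data, so the entities and concepts a page is about are stated explicitly instead of being left for the crawler to infer. A minimal sketch follows; the names, dates and values are placeholders.

```python
import json

# Minimal sketch of Article markup as JSON-LD. Everything except the schema.org
# vocabulary itself is a placeholder to be replaced with your page's real data.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "RankBrain Unleashed",
    "author": {"@type": "Person", "name": "Example Author"},
    "publisher": {"@type": "Organization", "name": "Example Publisher"},
    "about": ["RankBrain", "machine learning", "semantic search"],
    "datePublished": "2015-01-01",
}

print('<script type="application/ld+json">')
print(json.dumps(article_markup, indent=2))
print("</script>")
```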

What we should do is treat the factors of personalized search as priorities, because search entities are strictly related to personalization. Branding, from this perspective, is surely a strategy that may correlate positively with how RankBrain and Hummingbird interpret and classify web documents and their content.

RankBrain, then, may not mean that much for our daily SEO activities, but it is offering us a glimpse of the future to come.



Source link