This session looks at on-the-page and off-the-page factors that influence web search, to understand what remains useful, what no longer works, and what new signals are growing in importance.
Moderator: Kristjan Mar Hauksson (KH), Director of Internet Marketing, Nordic eMarketing
Q&A Moderator: Will Critchlow (WC), Co-Founder & Director, Distilled
Speakers:
- Gary Beal (GB), MD, Vanguard Online Media (Didn’t make it because of #ashcloud)
- Mikkel deMib Svendsen (MS), Creative Director, deMib.com
- Rand Fishkin (RF), CEO & Co-Founder, SEOmoz
- Rob Kerry (RK), Head of Search, Ayima
(The below are my notes from the “SEO Ranking Factors in 2010” session at SMX Advanced. They have been posted during the session and are still to be tidied. I have bolded what I consider to be the best tips. – more SMX coverage here)
Rand Fishkin
Saving his best stuff for Seattle ;(
- Search for the Random Surfer vs Reasonable Surfer patent
- PageRank is the Random surfer model which treats all links from a page as equal. Reasonable surfer updates this to add user and feature data to put greater emphasis on:
- Link position matters. The higher up the page, the more weight is passed
- Font size
- Summary: the links on a page that people are more likely to click may be considerably more valuable
- Twitter Data influencing search?
- SEOmoz correlation work in October 2009 was not compelling, but Twitter's deals with Google and Bing came at the end of October, so there is likely to be some correlation today
- Today tweets appear to be influencing QDF (Query Deserves Freshness)
- Questions over whether tweets are treated as links, but assumes they will be, simply because that is where the bulk of the fresh links are now occurring
- Facebook Social Graph
- Does it have enough adoption yet to be useful?
- Facebook had 50,000 sites with Like buttons on them (end of April), but that is only 0.06% of domains
- Suspects that data is not flowing from Facebook to Google. Bing, on the other hand, does have a relationship, so we are likely to see the effect of Facebook there first.
- Will Facebook build own engine? Not until there is wider adoption of sites.
- On Page Correlation Data
- Keyword or brand at the start of the Title? Strong correlation between keyword position and rank, so compelling evidence for putting the keyword first
- H1 tags – very limited correlation between rank and using H1 tags (however first words in page important)
- Alt attribute – slightly better correlated! (not that strong but good for image search so do it)
- KW density – no correlation
- Latent Dirichlet Allocation (LDA) – similar to LSI (Latent Semantic Indexing). Looks very promising as a ranking factor. (Use the Wonder Wheel to get relevant keywords to include in the page)
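Full LDA is well beyond a snippet, but the underlying intuition — a page looks more topically relevant when it uses terms related to the query — can be sketched with plain bag-of-words cosine similarity. This is a simplified stand-in for illustration, not LDA itself and certainly not Google's model:

```python
import math
from collections import Counter

def topic_similarity(query: str, page_text: str) -> float:
    """Cosine similarity between simple bag-of-words term vectors."""
    q, p = Counter(query.lower().split()), Counter(page_text.lower().split())
    dot = sum(q[t] * p[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in p.values()))
    return dot / norm if norm else 0.0

on_topic = "oil spill cleanup crews work to contain the gulf oil spill"
off_topic = "recipe for chocolate cake with vanilla frosting"
print(topic_similarity("oil spill", on_topic))   # clearly higher
print(topic_similarity("oil spill", off_topic))  # 0.0 -- no shared terms
```

This is also why the Wonder Wheel tip helps: adding genuinely related terms to a page raises this kind of topical-overlap score without repeating the keyword itself.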
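The Random Surfer vs Reasonable Surfer contrast from the start of Rand's talk can be sketched as a toy model. The feature weights below are invented purely for illustration — the patent does not publish its features or coefficients:

```python
def random_surfer(links):
    """Classic PageRank assumption: every outlink gets an equal share."""
    return {l["url"]: 1 / len(links) for l in links}

def reasonable_surfer(links):
    """Weight each link by click-likelihood features, then normalise."""
    def score(l):
        s = 2.0 if l["above_fold"] else 1.0  # position on the page
        s *= l["font_size"] / 12             # prominence of the anchor text
        return s
    total = sum(score(l) for l in links)
    return {l["url"]: score(l) / total for l in links}

links = [
    {"url": "/main-nav", "above_fold": True, "font_size": 18},
    {"url": "/footer-legal", "above_fold": False, "font_size": 10},
]
print(random_surfer(links))      # equal shares
print(reasonable_surfer(links))  # /main-nav gets the larger share
```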
Rob Kerry
January and May SERPS updates
- 301s passing less link juice. If changing domain, you build authority from scratch. Need to build links to the new domain, ideally before the move; otherwise you have to go to every single linking site, or consider using the canonical tag instead of 301s, as it now works cross-domain
- Tolerance towards spam links. Now easier to rank for the home page; internal pages are harder to rank. Niche websites have done well. Losers are those who don't have enough non-keyword anchors
May
- Believes the changes are over-exaggerated. Authority sites seem to be losing out, with niche sites doing better. More effort required to make pages unique. Biggest losers are search-results-style and category pages
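The cross-domain canonical option Rob mentions comes down to a single tag in the `<head>` of each old page, pointing at the page's new home (domains here are hypothetical):

```html
<!-- On the old domain's page, declaring the new domain as canonical -->
<link rel="canonical" href="http://www.new-domain.example/page.html" />
```

Unlike a 301, the old page still resolves and can be crawled, which is part of why it is harder to abuse.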
Mikkel deMib Svendsen
Get rid of the Crap on your site (how to avoid losing ranks)
- Code Junkyards – lazy programmers syndrome
- Malware – caused by hackers (often ex-employees). Need to check source code regularly. Will have a drastic effect on click-through to your site due to the “This site may harm your computer” message in the SERPS
- Check software updates
- Check Secunia.com to identify vulnerabilities
- Sanitise all user data – check that no unexpected user input is being accepted
- Check using automated tools, but don’t rely on them (use link tools to find strange anchor text)
- Small is Beautiful
- Site Speed (now officially a factor but MdM believes that indirectly has always been important)
- Quality of Code – all his testing has shown it doesn’t matter if the code doesn’t validate (even Google doesn’t validate). However:
- Clean up code
- If 90% of your page is code, clean it up
- Get rid of the Microsoft .NET ViewState (see picture!) – there are better solutions today
- Javascript should be placed in external files (just one or two files, not 5 or 10!)
- Keep all CSS in one external file (don’t use inline CSS if using classes)
- Remove empty containers (e.g. divs or tables)
- Remove all comment tags when publishing
- Minify or Obfuscate code (remove white spaces and line breaks – will render faster)
- Remove all unnecessary meta tags (e.g. revisit-after); all the DC (Dublin Core) tags can go too
- Use content delivery network
- Compress objects and HTTP responses – use compression wherever possible (usually Gzip)
- Clean up code
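Several of the points above (strip comments, collapse whitespace, compress the response) can be sketched in a few lines of Python. Purely illustrative: in practice minification is done by build tools and Gzip by the web server (e.g. Apache's mod_deflate):

```python
import gzip
import re

page = """<html>
    <head>   <title>Example</title>   </head>
    <body>
        <!-- comments like this ship to every visitor -->
        <p>Hello    world</p>
    </body>
</html>"""

# Crude minification: drop HTML comments, collapse runs of whitespace.
minified = re.sub(r"<!--.*?-->", "", page, flags=re.S)
minified = re.sub(r"\s+", " ", minified).strip()

# Gzip the response body, as the web server would before sending it.
body = gzip.compress(minified.encode("utf-8"))

print(len(page), len(minified))  # the minified page is smaller
```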
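MdM's point about sanitising user data boils down to never echoing raw user input back into a page. A minimal Python sketch using the standard library's `html.escape` (`render_comment` is a hypothetical helper, not from the talk):

```python
import html

def render_comment(user_input: str) -> str:
    """Escape user-supplied text so injected markup renders as inert text."""
    return "<p>" + html.escape(user_input) + "</p>"

# The <script> tag comes out as harmless &lt;script&gt; text.
print(render_comment('<script>alert("owned")</script>'))
```

Escaping on output is the safety net; the "check that no unexpected input is accepted" advice above is the validation step that belongs in front of it.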
Q&A
- May Day Update Highlights? RF: Unusual update in that white hat sites have taken a hit (8-9% drop in traffic, almost all in the long tail). Happened very close to the change in look and feel – could there be a correlation? RK: Some may have lost, but others have done better. Winners are niche sites with very unique content. MdM: Lots of small changes. The change of interface could have caused the drop in long-tail traffic. Thinks the new look is good for AdWords. Increased chances of refining searches rather than clicking through to page 2.
- Mixing US and UK results on UK SERPS? RK: Strange results are less of a problem now, but still expect international results in local SERPS
- Strength of LDA? RF: Difficult to see correlations, but more interesting is the concept that content will be more relevant if additional related terms appear on the page. No real data.
- QDF? RF: If Google sees search volume spikes and content spikes including keywords then they will bias results to the fresh content (e.g. oil spill SERPs will not correlate with standard ranking factors). Very important for those in news. MdM: not so dramatic in non-English searches. RK: Often see US results in QDF results in UK.
- 301 Findings? RF: Hasn’t seen evidence to agree with RK but trusts him! RK: Canonicals are not a perfect solution. The perfect solution is to get the links redirected. Thinks Google may be taking the view that canonicals are better than 301s as they are less easy to abuse. Thinks getting 95% of links changed is practical! MdM: Concerns about canonicals – they can go wrong. RF: Google is over-respecting the canonical, e.g. big sites redirecting every page to the home page. Seems to work too well, so maybe there will be a pull-back from this.
- Top Tips? RF: Thinks it is insane that Twitter content is not being saved – webmasters should capture it. RK: If you value the short tail, don’t change your brand. Focus on the home page. MdM: Very hard to predict, especially the future. The value of dynamic content is not to be underestimated (e.g. UGC or scraping!) – the more dynamic your site, the better.
- Diversifying anchor text from external links? RK: Use a wide array of terms to both home and deep pages – include things you don’t want to rank for and a high ratio of brand links. Mess it up a bit. May Day: less value being placed on internal linking?
- Canonicals for internal? RK: Use rel=canonical rather than 301. RF: The significant concerns over 301s are quite new to Rand. Is it a bigger problem with more aggressive SEO? MdM: Just avoid moving. It’s why it’s so important to make your architecture as flexible as possible.
- How important is it for the content to be the same on two pages that are canonicalised? RF: Google seems to respect it.