2011 Year of the Panda

As 2011 comes to an end, I thought I would do a recap of probably the biggest series of events to happen in the SEO world in 2011: the Panda updates. The sheer scope of Panda pretty much guaranteed that everyone operating a web site felt its effects, probably in a negative way. It also resulted in the worst words ever: Pandified and Pandification.

Google Panda Make Me Sad

It's About Rankings, Stupid

Google's algo is real-time and is constantly making calculations based on its link graph of the web. This includes the hallowed PageRank as well as the hundreds of other factors that determine day-to-day standings on the results pages.

Panda is a separate algo that operates independently of the main algo and is designed specifically to filter certain types of sites out of the search results. Unlike the main algo, Panda does not run in real-time. Instead, it is manually pushed out, the results are evaluated, and changes are made before the next manual push.

Panda Timeline

February 24 - Panda 1.0

This was the one that started it all. Originally code-named “Farmer,” it affected 12% of search results in a single update. The primary targets of this update and those that followed were content farms, scraper sites, and any page that Google determined to be low quality.

Biggest Losers – wisegeek.com, ezinearticles.com, suite101.com, hubpages.com

Biggest Winners – Youtube, ebay.com, facebook.com, instructables.com

April 11 – Panda 2.0

The original update goes live in all English-speaking countries. Incorporated user data from the Chrome site-blocking extension and the block button in search results. Widened the net to longer-tail results.

May 10 – Panda 2.1

Minor update.

June 16 – Panda 2.2

Improved scraper detection. Matt Cutts states that there were no manual exceptions made for sites that may have […]

Google To Place Usage Limits on Maps API


Google Maps New York City Hotels
Last week Google announced that they would soon be placing usage limits on the Google Maps API beginning on January 1, 2012. This move is primarily targeted at larger sites in the travel and real estate industries that have come to rely on Google Maps as an integral part of their services.

To keep using Maps for free, your map loads must not exceed 25,000 per day, and map loads using the custom map feature must not exceed 2,500 per day. Google estimates that the new policy will affect only 0.35% of users and insists that the new pricing is necessary to continue Maps development.

Google has offered three different solutions for those exceeding the API limit. The first is simply to reduce your usage. If you can't do that, you can either pay $4 per 1,000 map loads over the limit or purchase a Maps Premier license, which will set you back at least $10,000 per year but covers up to 100,000 map loads per day. Pricing also depends on usage, so if you are hotels.com, you will probably be paying a lot more. On the bright side, it looks like Premier members will have access to substantially higher resolution Street View imagery and larger static maps than those now available.
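As a rough, back-of-the-envelope illustration (my numbers, not Google's): a site serving 40,000 standard map loads a day would be 15,000 loads over the free limit, which works out to about $60 a day at the $4 per 1,000 rate, or roughly $21,900 over a full year. At that kind of volume, the $10,000-per-year Premier license with its 100,000 loads per day allowance starts to look like the obvious choice.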

It really was only a matter of time before Maps adopted a pay-to-play strategy for big enterprise users, as Google Apps and Google Analytics have done. I think we will soon see a similar strategy adopted for other […]

Why you should use rel=”me” and rel=”author”

Use rel="author" and rel="me" to take credit for your work

 

If you haven’t implemented the rel=”author” and rel=”me” attributes on your blog or site, stop what you are doing and go do it now. Not only is it important that you start building your own personal ranking with Google, but you are also making sure that you receive credit for posts or articles that you author and, more importantly, making sure others do not get credit for your work.

The author and me attributes are microformats: special HTML code that identifies certain types of data on a web page. Why is this important? Well, it keeps third parties, like search engines and APIs, from having to guess the purpose of content on a web page, which lets them output more accurate and unified results. There are tons of these attributes for all kinds of data, and Google is always incorporating them in new ways. I plan on doing a post that examines microformats in greater detail at some point in the future.

By using rel=”me”, sites can link one page about a person to another page about the same person. By consolidating your social identities, you are letting search engines know that these profiles represent the same person.
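To make this concrete, here is a rough sketch of the markup (the URLs and names below are placeholders I made up, not real profiles). On a post, you point at your author page with rel=”author”; on the author page, you cross-link the profiles that represent you with rel=”me”:

<!-- On a blog post: credit the author page (placeholder URL) -->
<a href="http://example.com/about-joe" rel="author">Joe Author</a>

<!-- On the author page: tie your other identities together (placeholder URLs) -->
<a href="http://twitter.com/joeauthor" rel="me">My Twitter profile</a>
<a href="https://plus.google.com/123456789" rel="me">My Google+ profile</a>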

Facebook only includes the me attribute if you have adjusted your privacy settings to allow everyone to see your website. I can't really think of a reason not to do this if you have added your URL to this section.

 

 Facebook usage of the rel="me" attribute
You have […]

Google Toolbar Page Rank Should Die

Yet another Toolbar PageRank update, just a month after the June 27th update. According to SEO Roundtable, there was a bug with the initial toolbar push, and Google uncharacteristically rolled out another update to correct the error.

Even though everyone knows that Toolbar PageRank is not an entirely accurate gauge of a page's actual PageRank, it is still the only indicator available to the non-technical, non-SEO masses. As such, it has a pronounced effect on the perceived value of a web page or site, and of a link from that page or site. Even if you have a ton of rankings and traffic, a low toolbar PR can be a signal to others that Google doesn't like something you have done.

My main issue with the toolbar PR is that it has created an industry of link brokering that continues to negatively impact search results. Google has been adamant about punishing those who buy and sell links yet they continue to support the device that makes it easy. In two seconds, Google could eliminate the practice of buying and selling links but they don’t.

I say let the PR toolbar die. Google should let third parties, like SEOmoz, assign page rankings based on factors outside of the Google algo. I seem to remember a few other search engines, Boing and Yoohoo or something.

What do you think about the toolbar PR? Should it die a horrible death?

Ominous robots.txt Email from Google

I received this strange email from Google yesterday telling me that they needed to crawl my image folders or my Product Search and Product Ad results might be affected.

Hello,

Thank you for participating in Google Product Search. It has come to our attention that a robots.txt file is preventing us from crawling some or all of the images on your site. In order for us to access and display the images you provide in your product listings, we’d like you to modify your robots.txt file to allow user-agent ‘googlebot’ to crawl your site.

Failure for Google to access your images may affect the visibility of your items on Google Product Search and Product Ad results.

To ensure the ‘googlebot’ is not being blocked, please add the following two lines of text to the end of your robots.txt file:

User-agent: googlebot

Disallow:

 

For more information on robots.txt files, please visit http://www.robotstxt.org. If you have any questions, please contact your webmaster directly.

Sincerely,

The Google Product Search Team

© 2011 Google Inc. 1600 Amphitheatre Parkway, Mountain View, CA 94043

You have received this mandatory email service announcement to update you about important changes to your Google Product Search product or account

I find this very strange considering I give them a syndicated feed, and they do not need to crawl my images, just the directory with the XML feed. What exactly is the deal? Are they just fishing for more images? Does this have something to do with the fact that Bing now has 30% market share?
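For what it's worth, if you wanted to comply without throwing your image folders open to every crawler out there, something along these lines should do it (the /images/ path is just a stand-in for whatever folder you actually block). The more specific googlebot group overrides the wildcard group for Google's crawler, and the empty Disallow line means it may fetch everything:

User-agent: googlebot
Disallow:

User-agent: *
Disallow: /images/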

So far no Googler has […]

The Google Bing Sting

February is off to a great start in the world of search. Yesterday, Google went public via Danny Sullivan at Search Engine Land with their very compelling claim that Bing is copying their search results by using data collected from the Bing toolbar and the Suggested Sites feature. To summarize, Google thought that Bing was copying their search results so they set up a honeypot by purposely ranking sites on queries that no one would search for and then waiting to see if the results were copied over to Bing. And, what do you know, they were.

I expect we will see, at the very least, an update to IE8's privacy policy and definitely some sort of tactic by Google to counteract this sort of thing. Do you think this is an acceptable business tactic? Bing continues to be a loss for MS, so is this an act of desperation? And with long-tail Google results being overrun by large spam networks, why would Bing even bother copying them?

Xmas Bling on Google

Today, I was searching for Xmas carol lyrics (don't ask) when I noticed Xmas lights appearing next to the ads in the sidebar on Google. Anyone know how long this has been in effect? I never, ever search for Xmas crap, so it is possible Google has done this for years and it is just news to me.

Google Christmas Bling

Google Street View as Performance Art

Just when I thought that I had seen all of the cool Easter Eggs from Google Street View, those crazy Googlers give us some more blog bait. Artists Robin Hewlett and Ben Kinsley teamed up with the Google Street View team and residents of Pittsburgh’s North Side to put on an avant-garde performance for the street view cameras. If you have ever wondered what it would be like to drive through a marching band or a marathon, here you go:
View Larger Map

You can check out some behind the scenes video and pics from the full cast at http://www.streetwithaview.com.


Google Insights Bookmarklet

If you are in search marketing or search engine optimization, then you know what a wealth of data Google's Insights for Search can provide. I probably use Insights at least a dozen times a day, and this morning I came across the Google Insights Bookmarklet. It is a clever little bit of javascript that works like this: copy the bookmarklet onto your Firefox toolbar, and after you have entered a search on Google or Yahoo, you simply click it and it opens the Insights page with your query already populated.
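I have not picked the actual bookmarklet apart, but the idea is simple enough that a minimal sketch might look something like this (the Insights URL format is from memory and may not be exact; Google passes the query as q and Yahoo as p):

javascript:(function(){
// Grab the query parameter from the current results URL (q= on Google, p= on Yahoo)
var m = location.search.match(/[?&](?:q|p)=([^&]*)/);
// Open Insights for Search pre-populated with the same query
if (m) window.open('http://www.google.com/insights/search/#q=' + m[1]);
})();

Collapsed onto one line with the comments removed, that can be saved as the URL of a toolbar bookmark.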

This keeps me from having to open a new tab, go to the Insights page, and copy and paste my query, all of which takes about 4 seconds. Multiply that by, say, 10 times a day x 5 days a week x 50 weeks a year, and I have effectively saved myself 167 minutes a year that I can use to find other ways to save time.

Come on Google Base, Give Me a Break!

I cannot express in words the frustration I have felt recently while trying to influence the results of Google Base. These feelings reached a boiling point last Friday when I ran a product search for a key term that one of my clients is adamant about ranking highly on. Here is a pic of the results:

base results

As you can see, one site dominates the listings. This occurred sometime Thursday night and as of this Monday the results are still the same. Traditionally Google Base results are fairly spread out amongst competitors provided the proper key term is in the title and description. This is the first instance where I have seen this behavior on a highly competitive term.

Now, why could this be occurring? I can really only think of a few reasons:

1) Google Base is temporarily insane and will eventually correct itself.

2) Google Base is undergoing some sort of change and this is simply a side effect.

3) This competitor’s feed has some sort of mojo that no other competitor has.

I think that it may be a combination of all three. Interestingly, this morning I noticed that the initial teaser results for the term do not match the first page of product listings:

base teaser results