Web Design Industry Blog


Browser Statistics 2011: Market Share and the Effect on Web Design

Published on June 30, 2011
Tags: Usability, Web Design London

If you were asked to guess which web browser was the most popular, which one would you choose? It probably comes as no surprise that Internet Explorer is still the most widely used web browser, although it isn’t as popular as it used to be: in 2004, it had around 95% of the entire market to itself. Now, all of the different versions of IE put together add up to around 44% of the market, meaning that while Internet Explorer is still dominant, it is not the force it once was.

But why is it so important to know this? Why should we be bothered about which web browsers people choose to view the web? Simply, web browser trends matter because they have an impact on web designers and developers. As web developers in London, we spend a lot of time making sure our sites are compatible with a whole range of browsers, so users won’t experience any display or other problems while they are viewing the sites. This can often be a time consuming task, especially as there are now so many web browsers on the market and people are often reluctant to upgrade their browser of choice.

This means that knowledge about which browsers are the most popular can be extremely useful when it comes to ensuring site compatibility. Data from StatCounter shows that during the first half of June 2011, the combined versions of IE had 43.72% of the web browser market. It also shows that Firefox had 28.57%, Chrome had 20.26%, Safari had 5.09%, Opera had 1.74% and ‘others’ had 0.62% of the web market.

You can’t make too many pronouncements based on combined data, which ignores the fact that each web browser has multiple versions that also need to be accounted for, but a couple of things stand out. One is that, as mentioned above, while Internet Explorer still has the most users, it has nowhere near the market dominance it once did. The other noticeable point concerns Google Chrome: it was only released around three years ago, but with a fifth of the market it is doing extremely well. It will be interesting to look at the figures again in a year’s time and see how much it has grown by then.

There are also some interesting things we can glean from the more specific StatCounter data on individual web browsers. For example, despite the fact that Microsoft has long since moved on from IE6, it still accounts for 3.77% of web browsers in the world. This is especially interesting when you consider that the market share of Safari 5.0 – Apple’s browser, mainly used on Macs – is only just above IE6 at 3.84%. It shows that even though Apple’s influence has grown hugely in recent years, it is still very hard to shake the influence of Microsoft.

As you might expect, the most widely used individual browser version in the world is an IE browser – in this case IE8, which has a 27.98% market share. This is interesting given that IE9 has already been released (it is currently on 5.89%, below IE7 on 6.07%). We can expect use of IE9 to increase in the coming weeks and months, but this is a good illustration of the fact that just because new technology is available, it doesn’t necessarily mean that all web users will adopt it.

Several reasons can be given for this. One is, simply, that people are often resistant to change until it’s absolutely necessary. Another reason is that some browsers, such as IE6, are associated with particular operating systems and so a lot of people won’t upgrade their browser until they upgrade their operating system. This can make the job of web designers and developers a bit harder, but there are some things to bear in mind that can help the situation.

  • Different audiences use different browsers. It helps to know which browsers the visitors to your own website are using: if they are overwhelmingly IE users or Firefox users, you can make allowances for this. For example, a website that focuses on technical topics is more likely to have readers who use browsers such as Chrome and Firefox.

  • Forewarned is forearmed. Statistics such as the ones given above are useful when it comes to working out what to do with your own websites as it helps to identify current trends. Data compiled over a period of time can also help you to make projections about how browser use might change (such as the massive growth of Chrome in a relatively short space of time), which can be very helpful.
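As an illustration of the first point, a short script can tally browser families from the user-agent strings in a web server’s access log. This is only a sketch – the matching rules below are simplified assumptions, not a complete parser:

```python
import re
from collections import Counter

# Very rough user-agent patterns (illustrative, not exhaustive).
# Order matters: Chrome's UA string also contains "Safari".
BROWSER_PATTERNS = [
    ("IE", re.compile(r"MSIE \d")),
    ("Chrome", re.compile(r"Chrome/\d")),
    ("Safari", re.compile(r"Safari/\d")),
    ("Firefox", re.compile(r"Firefox/\d")),
    ("Opera", re.compile(r"Opera")),
]

def classify(user_agent):
    """Return a browser family name for a raw user-agent string."""
    for name, pattern in BROWSER_PATTERNS:
        if pattern.search(user_agent):
            return name
    return "Other"

def tally(user_agents):
    """Count browser families across a list of user-agent strings."""
    return Counter(classify(ua) for ua in user_agents)
```

Note that the Chrome pattern is deliberately checked before Safari: Chrome’s user-agent string contains the word “Safari”, so the reverse order would mislabel every Chrome visitor.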

Overall, while it is hard to make any concrete predictions about web browser use, browser makers are becoming increasingly aware that people don’t always upgrade just because a new version is available. As a result, they are increasingly running campaigns to persuade people to upgrade, and these seem to be catching on. The situation is unlikely to be resolved any time soon, but as Google starts to wind down its support for old browsers and other developments take shape, things seem quite a bit more hopeful than they might once have done.

By Chelsey Evans



Why Should I Have the Latest Web Browser Version?

Published on June 24, 2011
Tags: Usability, Internet Security

We wrote a few months ago about the way in which outdated web browsers and other technology can cause problems not only for web designers, but for users as well. After all, if there is a major discrepancy between the latest web trends and what a user’s browser can handle, there is highly likely to be some impact on performance and display. One of the biggest browser culprits here is IE6 (Internet Explorer 6.0), which is still widely used, despite the fact that Microsoft has been urging people for some time to upgrade to newer software.

Now, though, it looks like web users are going to have to take note, as Google recently announced that it intends to phase out its support for older browsers. These include IE7, Firefox 3.5, Safari 3 and all of their predecessors. This means that people using those browsers to access anything from Gmail to Google Docs and Sites will begin to notice issues with performance from 1 August 2011 – and eventually, Google will withdraw its support for these browsers altogether. Research from StatCounter suggests that 17% of web users will need to upgrade their browser as a result of Google’s announcement.

So why has Google made this choice? There are a couple of reasons. One is that newer browsers tend to be more secure, more efficient and generally offer better performance. Another is that if web designers and coders are going to make use of all the latest technology, it helps if everyone is using a browser that can support HTML5. This is something that should ultimately benefit everyone and, considering how long Microsoft has been waging a campaign to persuade people to ditch its own IE6, relatively drastic action such as this appears to be the sensible option.

More than this, Google has said that this programme of phasing out support for old browsers will continue. This means that when, for instance, a new version of Internet Explorer makes an appearance, support for the third oldest version of IE will gradually be withdrawn to encourage people to make the upgrade to the new one (or, as Google would most likely prefer, switch to Chrome instead).

This also has the added benefit of making things easier for Google, as it means they won’t have to carry out lengthy compatibility tests with older browsers before releasing new sites, features and updates. The development is also significant because Google is a market leader in many areas of the web, so its withdrawal of support for older browsers makes it likely that other corporations and web designers will soon follow suit, especially as the new rules take effect and (hopefully) users start to move to newer technology.

The developments also add to an increasing array of actions undertaken by internet giants to modernise the web. As well as campaigns by Microsoft to reduce IE6 use and an extensive effort by Mozilla to get Firefox users to upgrade from version 3.5, other changes are afoot in the online world. A new version of the Internet Protocol – IPv6, which provides a vastly larger pool of IP addresses – is being rolled out. ICANN, the internet’s domain name regulator, has just announced its plans to massively extend the domains on offer (adding more choice to the current system of .com, .co.uk and so on).

All of this is very interesting, and the fact that these developments are happening around the same time suggests that, in some ways, the internet has exceeded whatever expectations people previously had for it. One of the main reasons IPv6 is being introduced is that the world is running out of IPv4 addresses, even though those addresses were previously thought to be plentiful enough to keep us going for quite some time. The world’s appetite for the internet seems to have been underestimated.

It also shows that new developments are being made in online technology all the time, but that developing the technology in the first place is only half the problem. Once the ability to do something exists, there is a second wave of activity while internet companies attempt to persuade users that it is a good idea. A delay between the invention and the adoption of ideas is practically inevitable.

Perhaps, though, Google has just touched on part of the solution. If people are going to be persuaded to adopt newer browsers, and conventional approaches don’t work, then the option of sticking with older browsers has to be taken away. Change has to happen, or else the internet will stagnate, leading to problems not just in compatibility and design, but also in security – something it is more important than ever to be aware of. So, it might seem drastic and some web users might not be thrilled about it, but ultimately Google’s decision to reduce its support for older browsers is, in the long run, for the best.

If you haven’t updated your browser to the latest version, it is well worth doing so. Do bear in mind that upgrading is free, will make your computer more secure, will make the web sites you view more usable, and the browser will generally run faster and more smoothly – even on an old computer!

And if you’d like to see how your website looks on older browsers, take a look at: Spoon.net

By Chelsey Evans



Schema.org, Microdata Formats, Rich Snippets and Better SEO

Published on June 10, 2011
Tags: SEO, Usability, Web Design London

They may spend most of their time as rivals, but Google has recently joined forces with Bing and Yahoo! to create schema.org. This is a new website that aims to improve the quality of the internet through the creation of a more robust data mark-up system for webpages. Until now, all of the search giants have had their own systems for this, which has often made it tricky for web designers and webmasters to decide on an exact mark-up schema, but it is hoped that having a shared system will not only make these decisions easier, but also improve search results.

Schema.org uses a microdata format that will be familiar to most webmasters who have previously marked up webpages for rich snippets. For those unfamiliar with this, rich snippets are those pieces of information that help to identify what your site is about and provide a large amount of information in a short space (such as an item that comes up on a search engine with not just the page title, but a picture, description, reviews and other data). Using the microdata format across the whole of schema.org is designed to make the process of marking up rich snippets more consistent.

One of the main benefits of schema.org is that it uses a shared vocabulary across all participating search engines, so there is less chance of confusion over double meanings or unsupported jargon. It also helps sites to be identified more easily and, therefore, means that websites benefit from the categories they choose. For example, under the Schema vocabulary, a restaurant would be – in the broadest category – a ‘Thing’. This would allow the web designer to include a name, description, URL and an image in the mark-up information.

The restaurant, however, is not just a ‘Thing’. It is also a ‘Place’, a ‘LocalBusiness’ and a ‘Restaurant’. All of these add extra detail to the mark-up information so it can be located in different – yet still relevant – categories. There are lots of other common categories that can be used for different webpages, depending on their content. For instance, if you were writing a webpage about a celebrity, they would fall into the ‘Person’ category, while a charity could be a ‘Place’, a ‘LocalBusiness’ and an ‘Organization’. When it comes to creative works there are options for ‘Book’, ‘Recipe’, ‘TVSeries’ and more.
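To illustrate, here is a minimal sketch of what such mark-up might look like using the microdata syntax. The restaurant name, address and URL are invented for the example:

```html
<!-- The itemtype names the most specific category; a Restaurant is
     also a Thing, a Place and a LocalBusiness in the Schema hierarchy. -->
<div itemscope itemtype="http://schema.org/Restaurant">
  <h1 itemprop="name">The Example Kitchen</h1>
  <img itemprop="image" src="kitchen.jpg" alt="The Example Kitchen" />
  <p itemprop="description">Seasonal British cooking in the heart of London.</p>
  <a itemprop="url" href="http://www.example.com/">www.example.com</a>
  <p itemprop="address">1 Sample Street, London</p>
</div>
```

Because ‘Restaurant’ sits beneath ‘Thing’, ‘Place’ and ‘LocalBusiness’ in the hierarchy, the one itemtype declaration places the page in all of those categories at once.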

Generally speaking, the more categories you are able to select when marking up a webpage, and the more information you can provide for each of your selected categories, the better: it gives the search engines a richer range of data to utilise when they are looking for relevant results. Schema.org also lets you view sample HTML for many of the categories, so you can work out which ones are relevant to your site and how you might need to adapt your pages to fit.

Once you have selected your categories, filled in all of the information you can and completed your coding, schema.org recommends that you test your webpage mark-up using a testing tool (such as Google’s Rich Snippets Testing Tool) so you can make sure it is all working properly and will have the intended effect. One thing to bear in mind is that when you use schema.org, you should only mark up the visible parts of your webpages – the bits that your readers will see – and not hidden page elements.

Schema also works to take the ambiguity out of other parts of websites: as well as offering a common vocabulary for webmasters, there are standard formats for dates. This is important when you consider that different countries use different date formats, so that while 10/6/11 in the UK would undoubtedly be read as the 10th June 2011, in the US it might be interpreted as the 6th October 2011. The standard (ISO 8601) format used by Schema means that the inputted information is unambiguous and therefore easier for machines to understand. Something similar can be done with times.

This can be useful if, for example, you were promoting an exhibition or a concert that was taking place at a particular time and date. You would obviously want it to be very clear when the event was taking place and for the information you give to be understood by any machine – and therefore web user – that picked it up. You can do something similar with, for instance, recipes or anything else that might take place over a specific period by using the Schema format to specify how long something will take (such as a recipe that needs cooking for an hour or an event that lasts for four hours).
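As a sketch, an event listing might mark up its date and duration along these lines. The event details below are invented, and the machine-readable values use the ISO 8601 format, in which ‘PT4H’ means a period of four hours:

```html
<div itemscope itemtype="http://schema.org/Event">
  <span itemprop="name">Summer Exhibition</span>
  <!-- datetime is machine-readable; the element text is what visitors see -->
  <time itemprop="startDate" datetime="2011-07-15T19:00">7pm, 15th July 2011</time>
  <!-- ISO 8601 duration: PT4H = four hours -->
  <meta itemprop="duration" content="PT4H" />
</div>
```

A recipe page could use the same duration format, for instance ‘PT1H’ for something that needs cooking for an hour.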

There is also a meta tag option, which can be used for web content that you can’t mark-up in the normal way due to how it is displayed on the webpage. For instance, if you have a product review on your site and the information is displayed through a five-star graphic, then you could use a Schema meta tag that includes details of the graphic so it is incorporated into your mark-up of the rest of the page. You can also include link data to third party sites to make it clearer to search engines the sort of information you have described on your page (such as linking to an encyclopaedia reference that contains further details).
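For instance, a five-star review graphic might be accompanied by mark-up along these lines – again a hypothetical sketch, not a copy of any particular site:

```html
<div itemscope itemtype="http://schema.org/Review">
  <span itemprop="itemReviewed">Acme Widget</span>
  <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
    <img src="four-stars.png" alt="Four star rating" />
    <!-- The meta tags expose the rating to search engines even though
         visitors only see the graphic above -->
    <meta itemprop="ratingValue" content="4" />
    <meta itemprop="bestRating" content="5" />
  </div>
</div>
```

The visible graphic stays exactly as designed, while the meta tags carry the values a machine needs.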

Overall, schema.org helps to standardise the process of marking up webpages by introducing a common format for Google, Yahoo! and Bing. This has the effect of making life easier for web designers and other staff in charge of the process as they will be able to be much more specific in their coding, rather than trying to come up with a solution that works for all the different search engines.

While Schema is not specifically designed to improve web ranking, including rich snippets in your mark-up can help your search results to display more prominently, which is always a good thing. Plus, as the features of the site develop and more options are included in it, it makes sense to make use of it now so you will continue to benefit further down the line as new developments are made and new features added.

By Chelsey Evans



Unique Content: Why Site Text is THE Most Important of SEO Tips

Published on May 12, 2011
Tags: SEO, Usability

We’ve been saying it for as long as we can remember: nothing is more important than the content on your web site. You can build links galore and undertake all manner of search engine strategies, but without good-quality content, any gains you make in search engine positions will be short-lived.

The issue of content came into focus again in 2010 and is set to intensify throughout 2011. Of course, to begin with your site must have content, but what constitutes ‘content’ as far as the search engines are concerned? There are two factors you should consider. Content should be:

  1. Relevant
  2. Unique


If you run a web site that sells cleaning products, why would you then include links to your favourite personal sites, or just have images of the products with no text? We’ve come across all types of sites in the past, with clients wondering why they don’t rank well – and these are just examples of things that have really happened!

First, a page without text is practically blank to the search engines (the same, to a degree, can still be said of Flash-generated pages). To the search engines, an image is just a picture. Certainly, some very clever image recognition software might be able to work out what the image is about, but even if it were being used, that is not going to rank you very well. Much better to lay the ‘food’ out in front of the search engines in a way that’s easy for them to see and easy for them to understand. And that means using text – plain and simple.

Whilst there is no minimum length for the text on a web page, bear in mind that the more text you add, the more you can create a relevant page of information using both your keywords and synonyms of your keywords. If your page has just 10 words, creating relevance would be hard work. Our recommendation is that each page you want the search engines to find should have a minimum of 300 or so words.
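That rule of thumb is easy to check automatically. The sketch below counts the words in a page’s visible text and flags pages that fall short – the 300-word threshold is just the guideline above, not a hard limit, and in practice you would strip the HTML tags out first:

```python
import re

MIN_WORDS = 300  # the rough guideline discussed above

def word_count(text):
    """Count the words in a block of visible page text."""
    return len(re.findall(r"[A-Za-z0-9'-]+", text))

def is_thin(text, minimum=MIN_WORDS):
    """True if the page text falls below the recommended word count."""
    return word_count(text) < minimum
```

Running `is_thin` over each page of a site would give a quick list of candidates for expansion.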

And that includes the home page. True, if you’re a large, established corporation you don’t need to worry too much about text on your home page, as you’ll be able to build relevance in other ways, but the majority of sites aren’t at that level. That means you’re competing in a crowded arena, and with your home page being the main page the search engines use to understand the relevance of your whole site, if you don’t have text on this page then you are missing the best opportunity you have. We have worked with many clients who want to keep the home page clean and pretty, but then wonder why optimisation efforts are slow to bear results – or fail to altogether.

So, don’t be shy – work that content. But don’t just write for the search engines. Getting the site visible on the search engines is one thing, but don’t forget that when real people reach the site you’ll want them to do something – and that means having content that sells. If you’re not an expert at writing site content, consider hiring one: that little bit of extra expense could easily make the difference between having a lot of visitors and no sales, and having a lot of visitors and a lot of sales. It also justifies the return on investment of your search engine optimisation activities. Make sure, though, that any content writer you employ really does create unique content (copy and paste a sentence into Google with quote marks around it, and this will quickly show whether it’s unique), as there are some less scrupulous ‘copywriters’ out there.

Another top tip is to use short, easy-to-read sentences. This is for two reasons. First, the search engines’ indexers will find it easier and faster to understand the relevance, and that may mean faster position improvements. Second, in the global world we live in, where automated translation is frequently used (Google Translate being just one example), shorter sentences translate better. A potential customer who doesn’t speak your site’s language as their first language might run it through an automated translation engine, and the shorter the sentences, the better it will read for them – and the more chance you have of making that all-important sale.

Try to keep each page focussed on one or two topics at most, and create as many pages as you need to cover the relevant parts of your business. Don’t cram the 10 services you offer onto one page; rather, create 10 separate pages, each discussing one service in more detail. That way, the search engines will be able to build individual relevance for each service you offer.

Finally, keep in mind the relevance of the whole site. As mentioned at the top of this article, don’t start adding information that has no relevance to what your site is about. Focus on creating two levels of relevance. The first level is the overall site: what is it about, who is it aimed at, and what do you want visitors to do when they get there? The second level is the individual page: what is it about, who is it aimed at, and does it fit into the overall site’s scope? If the answer to whether it fits into the overall scope is no, then remove the page.


Google defines duplicate content as:

… substantive blocks of content within or across domains that either completely match other content or are appreciably similar.

Hopefully you’ll know this already (but in case you don’t): duplicate content causes ranking problems. More recently, though, a Google employee released a little more information on what Google could consider to be ‘appreciably similar’.

In a recent webmaster query response, the employee responded to a question about duplicate content. In the response, the employee went on to state that although the wording of the site in question was not exactly a duplicate of another site, there were strong similarities. The two phrases in question were:

“The Prince serenaded Leighton Meester during his concert at New York City’s Madison Square Garden on Tuesday night (Jan. 18).”


“Leighton Meester gets serenaded by the legendary Prince during his sold-out concert at New York City’s Madison Square Garden on Tuesday night (January 18).”

The Google employee noted the similar phrases “serenaded”, “New York City’s Madison Square Garden”, and “Tuesday night (Jan[uary] 18)”.

Whilst the response discusses other similarities, such as linking to images from the same third party site not related to either page in question, this does raise a question about how Google’s semantics engine is working.

If, indeed, Google can see the phrases above as being ‘appreciably similar’ then it becomes all the more important not simply to take another’s content and adjust it, but to re-write completely. That is, creating your own content with your own words.
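We don’t know how Google’s semantics engine actually works, but a crude way to see why two such sentences register as similar is to compare their overlapping word sequences (‘shingles’). The sketch below is purely illustrative and bears no relation to Google’s real algorithms:

```python
import re

def shingles(text, n=3):
    """Return the set of n-word sequences appearing in a piece of text."""
    words = re.findall(r"[a-z0-9'’]+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity of the n-word shingle sets of two texts (0 to 1)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not (sa or sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

Feeding the two Prince sentences above into `similarity` gives a clearly non-zero score, because shared runs such as “new york city’s” and “on tuesday night” appear in both, even though the sentences as a whole are worded differently.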

And that is ultimately what the search engines want: content that is unique and meaningful to a user. If two sites have effectively the same content, just presented with slightly different wording, the search engine is going to make a judgement on which to rank well and which is piggy-backing off the original version. If two sites have entirely unique content, even if on the same topic, then each will be judged on its respective merits.

We should also take note of Google’s use of ‘blocks of content’ in the definition of duplicate content. You might feel it’s okay to reuse a few sentences or paragraphs here and there from another web site to boost your own content (we won’t go into the copyright issues that can create!), but bear in mind that the search engines’ semantics engines are probably now sufficiently refined to detect much more than you might think they can.

This came into focus again with a recent blog posting by Matt Cutts of Google, in which he discussed webspam – the junk that appears in web results. Matt highlighted that in 2010 there were two algorithm releases to reduce the effectiveness of sites with ‘shallow or low quality content’. But the same article noted that this is something of a work in progress and that further updates will appear – particularly in 2011, with:

“…one change that primarily affects sites that copy others’ content and sites with low levels of original content”

As we now know, this major change has so far come in the form of the Google Panda / Farmer update of March and April 2011. So, if you haven’t reviewed your site content for uniqueness, relevance and depth recently, now would be a very good time to do so, so that whenever the next algorithm changes come along you can rest assured that your site will pass with flying colours.

In closing, Google recently released a blog posting that gives a lot more guidance on content and structure preparation for your site. If you haven't seen it already, we strongly recommend reading More guidance on building high quality sites.

By Chelsey Evans



Why web designers should watch out for a woman in a gorilla suit

Published on December 17, 2008
Tags: Usability

Most good web designers already know the principles of good design, usability and which areas of a page will get the most attention. But how many give consideration to the woman in the gorilla suit?

For those of you who have watched Brainiac on the Discovery Channel, or who can go back further to the study by Daniel Simons of the University of Illinois and Christopher Chabris of Harvard University, you’ll probably already know what I’m talking about. For everyone else, here’s the quick recap:

A study was conducted in which subjects watched a short video of two groups of people passing a basketball around. The subjects were told to count the number of passes made between the groups. During the video, a woman wearing a full gorilla suit walks through the scene, stops in the middle, faces the camera, beats her chest and then walks off. You’d think that’s something you couldn’t miss, right? Wrong. 50% of the subjects didn’t see her at all, because her presence wasn’t expected and their focus of attention was elsewhere (counting the basketball passes). Arien Mack and Irvin Rock coined the term ‘inattentional blindness’ for this phenomenon.

Wikipedia defines inattentional blindness as follows:

'... humans have a limited capacity for attention which thus limits the amount of information processed at any particular time. Any otherwise salient feature within the visual field will not be observed if not processed by attention.'

What's the relevance to web design? Hopefully, that's already becoming clear. When a site visitor reaches a web site they may well have an expectation of what they will find. It might be an expectation with respect to where they find the navigation on the page, or where they expect to find some particular information or product. Their expectation will likely be the focus of their attention. And if that expectation isn't met then they could well just move on to the next site - not ideal if your web site is meant to be generating you income.

Anyone involved in the design of a web site therefore needs to take care. Creating something 'out of the ordinary' in design terms might look wonderfully contemporary but if your visitor is expecting to find a left-aligned menu bar and instead the navigation is a small box in the right hand corner of the screen (OK - I'm being extreme to make the point) then that may be an instant block to them considering the site further. Why? Because their focus of attention is expecting the menu to be on the left hand side and if it's not then they simply may not see it anywhere else.

Let’s face it – how many times have you visited a web site and been unable to find something, even when it was practically right in front of your eyes? I know I have, and I spend 12 hours a day on the Internet!

Likewise, overloading a page with content, graphics and links without consideration for visitors’ thought processes will have a negative effect. Suppose a visitor is focused on finding ‘bed socks’, and your site has lots of text, graphics and links – one of which states ‘Keeping your feet warm at night’ and takes the visitor to a page on bed socks. How many mental steps does the visitor need to go through to find that link? First, they have to move their attention away from ‘bed socks’ to the problem: ‘Why do I want bed socks? Because I’ve got cold feet in bed.’ If they don’t manage that shift – and because their focus of attention is honed in on ‘bed socks’ – they may not even see your link. If they do make that shift to the problem, they then need to think about the solution: ‘How can I keep my feet warm at night?’ Then, finally, they can search the page, might find the link, and click on it. That’s a long process, and they might just give up trying. This particular problem might easily have been solved with a link saying ‘Bed socks to keep your feet warm at night’.

I’m not for one moment saying that we should go out and create cloned web sites that are all the same. Simply that we need to give due consideration to our visitors’ focus of attention; that we must design sites to be as simple and straightforward as possible, with information laid out and presented in such a way that it is clear, consistent, unambiguous and able to match well with that focus.

By Chelsey Evans





Disclaimer: The contents of these articles are provided for information only and do not constitute advice. We are not liable for any actions that you might take as a result of reading this information, and always recommend that you speak to a qualified professional if in doubt.

Reproduction: These articles are © Copyright Ampheon. All rights are reserved by the copyright owners. Permission is granted to freely reproduce the articles provided that a hyperlink with a do follow is included linking back to this article page.