Web Design Industry Blog

Five Current Trends in Web Design

Published on August 26, 2011
Tags: Usability, Web Design London

When the world of the web designer moves so fast, it can seem odd to think that the profession is not actually that long out of its infancy. Like so many other areas of technology, though – and particularly those that relate to the online world – web design is an industry that is constantly evolving.

Arguably, it is also becoming increasingly important. In a challenging market where the vast majority of businesses have their own websites, the web designer’s task of creating eye-catching, exciting sites that combine style with functionality and stand out from the crowd can be hard to get exactly right. Whether involved in corporate web design, designing sites for individuals or another related aspect of the job, hardly a day goes by when there isn’t something new to take into account.

Despite the regular changes, however, there are a few current trends that jump out in the world of web design. These are trends that are having a big impact on the way we work, as well as on the experience of web users who view the websites created by designers. Read on to find out more about five important trends in web design.

Web Design for Smartphones and Other Devices

A few years ago, websites were designed almost exclusively for personal computers and laptops, and Internet Explorer was by far the dominant web browser. This may have created some limitations in terms of design as it meant there were only certain technologies that could be utilised, but it also meant that web designers could largely guarantee that a site they created would display and run as it was supposed to on the vast majority of computers.

Now, however, the landscape has changed. As well as a massive proliferation in the use of smartphones, tablet computers and other devices to access the web, there is an increasing array of web browsers out there, and the market is much more diverse than it was. On the one hand, this is great for web designers who want to make the most of the latest technology and exploit opportunities that simply weren’t practical before. On the other, it raises certain challenges, such as the need to tweak sites and apps for different devices and browsers so that they run properly and the user experience remains seamless, no matter how a person chooses to view the web.
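
By way of illustration, here is a minimal sketch in TypeScript of one way a site might adapt itself to the device viewing it, assuming a modern browser; the 600px breakpoint and the mobile-layout class name are purely illustrative.

```typescript
// A minimal sketch of one way to adapt a page to the device viewing it.
// The 600px breakpoint and the "mobile-layout" class name are illustrative.
const smallScreen = window.matchMedia("(max-width: 600px)");

function applyLayout(isSmall: boolean): void {
  // Toggle a class that the stylesheet can use to enlarge text,
  // stack columns and otherwise rework the layout for small screens.
  document.documentElement.classList.toggle("mobile-layout", isSmall);
}

// Apply once on load, then again whenever the viewport crosses the breakpoint,
// for example when a tablet is rotated or a desktop window is resized.
applyLayout(smallScreen.matches);
smallScreen.addEventListener("change", (event) => applyLayout(event.matches));
```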

Web Design for Touchscreens

If you regularly commute, or have sat people-watching in a coffee bar, it won’t have escaped your notice that five years ago everyone was all thumbs, whereas now they’re all fingers. The proliferation of people swiping, pressing and tapping away at their touch-sensitive screens has changed the way we interact with the outside world.

Although it may seem like a relatively minor shift, this has important consequences for web design. When a person is viewing a website on a computer with a mouse or track-pad, for instance, small on-screen buttons are easy enough to click. Fingers, however, are somewhat less accurate than a mouse pointer, so allowances need to be made in the usability and design of websites. Bigger buttons with larger spacing, clearer links, and alterations to how scrolling works are just a handful of the issues this raises.
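
To give a flavour of the sort of allowance involved, the TypeScript sketch below enlarges tap targets when a touch-first device is detected. The selector and the sizes – around 44 pixels is a commonly quoted minimum for a comfortable tap target – are illustrative assumptions rather than hard rules.

```typescript
// A minimal sketch: give links and buttons more generous hit areas when the
// visitor's primary input is a finger rather than a mouse pointer.
// The selector and the sizes below are illustrative, not hard rules.
const touchFirst = window.matchMedia("(pointer: coarse)").matches;

if (touchFirst) {
  document.querySelectorAll<HTMLElement>("a, button").forEach((target) => {
    // Around 44px is a commonly quoted minimum for a comfortable tap target.
    target.style.minHeight = "44px";
    target.style.minWidth = "44px";
    target.style.padding = "12px 16px";
    target.style.margin = "8px";
  });
}
```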

Move Away from Flash

There are several reasons web designers and others are moving away from using Flash. Apple’s well-publicised ‘iHate Flash’ stance is the predominant driver: with so many iDevices now in use, the usability problems created when Flash content simply won’t run are hard to ignore. Another reason is that search engines don’t really like Flash either, so if a site is built with it, that can have a detrimental effect on the website’s search engine positions.

Of course, Flash still has its place, but the increasing diversity in the industry means that other players such as HTML5, CSS3 and even JavaScript are opening up new design possibilities.
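
As a small example of the sort of thing HTML5 and JavaScript now handle comfortably, the TypeScript sketch below draws a simple animated banner on a canvas element – the kind of effect that might once have called for Flash. It assumes the page contains a canvas element with the id ‘banner’.

```typescript
// A minimal sketch: a simple animated banner drawn with the HTML5 canvas API,
// the kind of effect that might once have called for Flash. Assumes the page
// contains <canvas id="banner" width="400" height="100"></canvas>.
const canvas = document.getElementById("banner") as HTMLCanvasElement;
const context = canvas.getContext("2d");

let x = 0;

function drawFrame(): void {
  if (!context) {
    return; // 2D canvas rendering not available
  }
  context.clearRect(0, 0, canvas.width, canvas.height);
  context.fillStyle = "#336699";
  context.fillRect(x, 30, 40, 40);   // a square sliding across the banner
  x = (x + 2) % canvas.width;        // wrap around at the right-hand edge
  requestAnimationFrame(drawFrame);  // throttled automatically in background tabs
}

requestAnimationFrame(drawFrame);
```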

Quick Response Barcodes

Over the past few months, you might have noticed a growing trend for square barcodes to be used on TV shows, in magazines and even on business cards. The idea is that you download an app onto your smartphone and then use it to scan the barcode, also known as a QR (or Quick Response) code. The app translates the code into a website address, contact information or other details, which then open up on your phone.

For instance, if you were to put one of these barcodes on your regular website, it could act as a gateway to your mobile site or to a special mobile offer that could be used in-store. This might in turn broaden how people view your site, making the information easier to access than ever before. It also links into the trend for web design to focus increasingly on mobile sites: as more and more people use smartphones to access the web, and reach sites in a growing number of ways (such as these quick response barcodes), mobile sites are becoming every bit as important as ‘regular’ sites.
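
As a rough sketch of that gateway idea, the TypeScript below offers small-screen visitors who land on the main site – perhaps via a QR code – a link to a separate mobile version. The address m.example.com and the banner wording are purely illustrative assumptions.

```typescript
// A minimal sketch: offer small-screen visitors who land on the main site a
// link to a dedicated mobile version. The address m.example.com and the
// banner wording are illustrative assumptions.
const onSmallScreen = window.matchMedia("(max-width: 600px)").matches;

if (onSmallScreen) {
  const mobileUrl = "https://m.example.com" + window.location.pathname;

  const banner = document.createElement("p");
  const link = document.createElement("a");
  link.href = mobileUrl;
  link.textContent = "View the mobile version of this site";
  banner.appendChild(link);

  // Offer the choice rather than redirecting automatically, so visitors who
  // prefer the full site can stay where they are.
  document.body.prepend(banner);
}
```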

Google Preview

If you use Google – and you probably do – you will no doubt have noticed some additions to the site of late. One of these is the ability to see previews of websites before you click through to them. When you type in your search term and the results come up, you can now hover your mouse over a link (or touch it, on a touchscreen device) and see a thumbnail image of the website. This creates a new challenge for web designers: making sure the thumbnail preview looks as good as the full site, as people increasingly use the feature to decide whether to click through to the site.

By Chelsey Evans


Desktop PCs - the End of an Era?

Published on August 19, 2011
Tags: Internet Communication, Mobile Application Development

In the grand scheme of things, it wasn’t that long ago that the personal computer made its revolutionary entrance into the world of technology. After all, it was only in 1981 that IBM launched its first PC, the IBM 5150, which was at the cutting edge of technology at the time.

The PC managed to stay at the leading edge for some time afterwards and is still massively popular today. Judging by sales of Microsoft’s Windows 7 software, for instance, PC sales for the second quarter of 2011 stood at around 75 million units – an impressive figure.

However, look a little beneath the surface and the strong sales figures aren’t quite as sturdy as they first seem. In the second quarter of 2010, PC sales stood at around 80 million units, so there was a drop of roughly five million units – around six per cent – between last year and this year.

It is possible to give several reasons for this. One is that the recent global recession and the resulting stuttering recovery, coupled with higher inflation and lower disposable incomes for the people who might previously have bought PCs, have led to people tightening their belts rather than splashing out on new technology.

This, though, is not the only explanation. Changes have been occurring in the market for a few years now, as new innovations come through and people acquire new devices that fill the space the PC once occupied. Millions of people now own smartphones with internet access, for instance, as well as other internet-capable devices such as laptops, games consoles and tablet computers.

The tablet computer is an interesting one, especially as it leads us on to one of the technology giants of the moment: Apple. It was recently reported that Apple has more cash in hand than the US government and, when you look at how well its sales are going – as well as the growing breadth of products it has on offer – it isn’t hard to see why.

For instance, even while the PC market was down 17.5% in Europe at the start of 2011, the market for Apple Macs was up by 10%. In Asia, Mac sales were up by 69.4%. This happened largely because more businesses and governments, as well as home users, are starting to use Macs in place of the traditional PC.

Apple is also the dominant force in the tablet market. Even if you combine all of the Android tablets – including the Samsung Galaxy Tab, the Asus Eee Pad and the Motorola Xoom – the Apple iPad is still outselling them by a ratio of 24:1. This certainly suggests that things are starting to shift away from the traditional ‘big players’ in home computing and moving in Apple’s favour.

There are several things that help to explain Apple’s increasing dominance of the computing market. One is that it has a hugely impressive brand image, which means its product launches are guaranteed to attract a large amount of attention. Another is that it has many more developers at its disposal than most other companies, meaning that Apple users are much more likely to benefit from state-of-the-art apps and other developments.

All of this shows that even though changes are clearly afoot, the world of personal computing is still massive – and growing. Around 400 million personal computers are expected to be sold in 2011. Growing markets in developing countries are contributing to this, as is increased take-up of internet use.

Naturally, this raises several challenges for web designers and computer programmers, among others. For example, an increasing array of devices means there is an increasing array of factors to take into account when working in web design or coding. While this is undoubtedly a challenge, it also arguably provides more scope for the innovation we have heard so much about over the past few years, with increasing diversity in the type of devices that people are using to access the internet even as certain firms (Apple, Google) remain dominant.

It also raises interesting questions for consumers – the people who buy these products and are gradually moving away from PCs in favour of laptops and tablet computers. In particular, it raises the question of cost versus value: Apple products, for instance, aren’t necessarily the cheapest to buy, and in some cases other manufacturers might offer better products (depending on your view, of course), yet it seems that expense isn’t as big an issue for people as you might expect.

With the market still evolving, it is hard to predict exactly what will become of the PC over the next few years, but it’s sure to be very interesting to watch. The impact of Windows 8, whenever it is released, might offer some indication of what’s going on – or at least of Microsoft’s response to what’s going on – but for now it seems that rather than simply sticking to the trusty old PC, people are increasingly looking for diversity, innovation, image and quality in the products they buy. It doesn’t seem like that’s going to change any time soon.

By Chelsey Evans


Internet Cookies and the EU Privacy and Communications Directive

Published on August 12, 2011
Tags: Web Site Law, Internet Security, Internet Communication

There has long been tension between the need to protect consumers’ privacy on the web and businesses’ desire to grow their online operations in any way they can. One of the things that has led to the most heated debate is internet cookies. There is growing concern among some consumers, for instance, that they are effectively being stalked on the internet; you can see it in the way a product you looked at on one website suddenly appears in adverts on subsequent websites you visit.

This is the result of internet cookies and, while some cookies are relatively harmless and can in fact be useful (remembering your preferences and login details, for example), some are not so welcome. As a result, the European Union introduced a piece of legislation called the Privacy and Communications Directive. The aim of the Directive was to put more guidance in place so that websites know how much information they can collect on their visitors without having to ask their permission.

The Directive is also sometimes known as the ‘cookie law’ and it was due to be implemented by governments by May 2011. At the time of writing, hardly any of them had done so. Only the governments of the UK, Denmark and Estonia had taken any steps to bring the Privacy and Communications Directive into law, and Denmark has since put its draft laws on the back burner.

In the UK, things are quite a bit better, with fairly comprehensive guidelines having been issued – but firms still have a year to comply with the new rules. This means that the ‘third-party cookies’ thought to be causing a lot of the problems faced by consumers can still often be found, and tailored advertising online still abounds.

Here’s how it works. Say, for instance, that you look on a website for a new power tool. You don’t buy it, but the internet cookies register that you looked at the product and were interested in it. You leave the website and spend some more time browsing, and then notice that something keeps happening: adverts for the power tool you were looking at earlier – and perhaps for similar products – keep popping up on the websites you visit. The aim for businesses, of course, is to try to persuade you to click on one of those adverts and make a purchase. The concern for web users, naturally, is the extent of the information companies are apparently able to collect on them.

This is what the EU Directive is supposed to help solve, by dividing internet cookies into two groups: those that are ‘strictly necessary’ for services to operate, and those that aren’t, which would require users to give their consent before they could be used. As you might expect, many people working in the European marketing industry do not like the Directive, as it muddies what they are and aren’t allowed to do.

One thing that has caused confusion is what the Directive actually requires websites and businesses to do: are they supposed to actively alert users whenever a cookie is placed on their machine, or is it enough simply to make them aware of the security options within their browser, leaving it up to the user to alter their settings if they so wish? Part of the issue is that the EU’s definition of ‘strictly necessary’ is very narrow, to the point where a cookie that remembers what language you typically view websites in would be likely to fall outside the ‘strictly necessary’ category.
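
To make the distinction concrete, the TypeScript sketch below only writes a non-essential cookie – such as that language preference – once the visitor has explicitly opted in. The cookie names and the consent flag are illustrative assumptions, not a statement of what the Directive formally requires.

```typescript
// A minimal sketch: only write a non-essential cookie once the visitor has
// explicitly opted in. Cookie names and the consent flag are illustrative
// assumptions, not a statement of what the Directive formally requires.
function hasConsent(): boolean {
  // The consent flag itself is arguably 'strictly necessary': without it the
  // site cannot remember whether the visitor said yes.
  return document.cookie.split("; ").includes("cookie_consent=granted");
}

function setOptionalCookie(name: string, value: string, days: number): void {
  if (!hasConsent()) {
    return; // no recorded consent, so the non-essential cookie is never set
  }
  const expires = new Date(Date.now() + days * 24 * 60 * 60 * 1000);
  document.cookie =
    name + "=" + encodeURIComponent(value) +
    "; expires=" + expires.toUTCString() + "; path=/";
}

// Example: remember a preferred language only after the visitor has opted in.
setOptionalCookie("preferred_language", "en-GB", 365);
```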

This makes it harder to comply with the law. If the requirement of the Directive is that notification must be given of all cookies outside the ‘strictly necessary’ group, this could potentially lead to a high volume of pop-up alerts asking users to give their permission to continue. That leads to another problem: a lot of browsers block pop-ups as a matter of course and, even if they don’t, the vast majority of web users loathe them.

However, there is still the problem of users being concerned about their online privacy. There’s also the issue of how the Directive, if fully implemented, would affect businesses: many rely on cookies to work out the extent of their return on investments and believe that tailored advertising actually enhances the user experience. All of this means that companies are now faced with trying to explain to customers the value of using third party cookies.

Even more confusing is the fact that different EU governments are interpreting the Directive in different ways, so while some countries propose that web users should actively give their consent to individual cookies, others are much more general. Perhaps, then, one thing is clear: while a stab at a coordinated effort has been made to reassure web users that their privacy is protected, more action and more coordination are still needed to make sure there is a workable policy that won’t harm ecommerce in the process. With 27 EU countries all needing to work together, it seems as though this is an issue that’s set to run for a good while yet.

By Chelsey Evans


Google+ and the Internet Pseudonym Debate

Published on August 4, 2011
Tags: Internet Communication

You will no doubt be aware by now of the massive influence of social networking sites on the lives of millions of people: Facebook has somewhere in the region of half a billion users; Twitter has around 200 million; the recently-launched Google+ clocked up 20 million users in a little over three weeks.

This is clearly big business, and one that looks certain to stick around for a good long while yet, but there is a growing debate over the issue of names. More specifically, why do so many social networks insist that you register using your real name?

This issue was perhaps best highlighted by the recent news that Google+ has been closing down the accounts of people who registered under a pseudonym. Many people received an email from Google informing them that their account was being suspended because it didn’t comply with the network’s guidelines. The reason given by Google was that, in order to keep the social network running efficiently, people need to be able to search for others by their real names so that they can find and add them with minimal fuss.

This is similar to policies employed by other networking sites such as LinkedIn and Facebook; one of Facebook’s initial achievements was seen to be its success in getting people to use their real names when they signed up. All of this, however, has provoked an angry reaction among some users, especially those who have had their accounts suspended by Google+ for using fake names.

One reason for this disgruntlement is that a lot of people put a great deal of effort into their online persona. Many are well known by the persona they have created for themselves, such as on forums and networking sites that don’t always require a real name (such as Twitter). Some suggest that this helps them to draw more distinct boundaries between their ‘real’ and ‘online’ lives, separating what they do in real life from anything that happens online.

This, naturally, works well for some people, and there are those who have made careers out of being something of a mystery online (case in point: the famous blogger Belle de Jour). One concern, however, is the belief among some researchers and other professionals that if people have fake identities to hide behind while online, they are more likely to be disruptive and cause trouble than if they were forced to register using their real names.

Theoretically, this is because using your real name on social networking sites means that the things you write are more obviously attributable to you; in a world where increasing numbers of employers are investigating their potential employees’ internet presence before hiring them, it certainly makes sense to keep anything under your real name professional and entirely above board. Using a pseudonym, however, supposedly makes some people more likely to act up because they feel they have a barrier to hide behind.

Using an online moniker apparently makes some people feel fearless and liberated, as though they can say whatever they want. There is undoubtedly some truth in this – you only have to visit an online message board and scroll through the comment threads for a couple of minutes before you find evidence of what’s known as ‘trolling’: nasty comments that people would never make in real life.

There is even a famous theory to explain this phenomenon of internet anger: Godwin’s Law, or Godwin’s Law of Nazi Analogies. It states that the longer an internet ‘debate’ goes on, the closer the probability of someone making a comparison involving the Nazis or Hitler comes to 1 – in other words, to near-certainty. The same could probably be said of many topics; after all, if a conversation thread goes on long enough, it could conceivably touch on any topic imaginable.

But Godwin’s point certainly stands. Most people can probably imagine, or will have seen, a situation just like this on internet message boards, most likely involving people hiding behind online personas – effectively concealing their identities while making comments that would generally be censured in ordinary life.

This isn’t a debate with a clear conclusion. There is an argument for people using their real names on social networking sites, linked to the old adage that if you’ve got nothing to hide, you’ve got nothing to worry about: if you aren’t planning to do anything suspect online, why should it bother you to use your real name? There is also, though, a good argument for allowing people to use whatever pseudonym they want. People have a right to privacy, and the vast majority are perfectly innocent and just want to have a good time; why should a few internet trolls ruin that for everyone else?

One thing seems certain: the debate over whether Google+ was right to suspend accounts is going to continue. It also feeds into the issue of how much information people should post online when it is so easily accessible by so many people – friends, family, employers, spammers and others – but that is a topic for another day. For now, it seems safe to say that social networks aren’t going anywhere any time soon, but neither is the tension over how separate we should keep our online lives from the lives we live in reality.

By Chelsey Evans




Disclaimer: The contents of these articles are provided for information only and do not constitute advice. We are not liable for any actions that you might take as a result of reading this information, and always recommend that you speak to a qualified professional if in doubt.

Reproduction: These articles are © Copyright Ampheon. All rights are reserved by the copyright owners. Permission is granted to reproduce the articles freely, provided that a ‘dofollow’ hyperlink back to this article page is included.