

10 Apr

The Cloud Innovation Chasm

In reading the book Made to Stick by Chip and Dan Heath I learned about an experiment done in 1990 by psychology PhD student Elizabeth Newton, who demonstrated that knowledge can be a curse. In the experiment, people were asked to tap out a well-known song (something iconic like Happy Birthday) and have someone else identify the song merely from the rhythm of the taps. The results were very interesting: the tappers predicted a 50% success rate, but the listeners were only successful 2.5% of the time – that’s one in forty. Of particular interest was the fact that the people with the song were convinced the listeners must be stupid, or not trying hard, because the song was so obvious to them. Of course, they had a frame of reference – the song was, after all, in their head.

I feel that where we are with the cloud at the moment is a bit like that. First, the cloud service providers: they know how cool their technology and various systems are, but they have difficulty conveying what is possible to the business community in a way that lets the business community really understand what it means for them beyond saving a few dollars. The business community ends up focussing on factors like Capex vs Opex, TCO, security, disaster recovery etc. – factors that simply provide a framework for decision making around alternatives for doing exactly the same stuff they have always done. In this light the focus of the cloud falls heavily on cost management and other secondary (albeit important) issues.

The business community, having been led to see the cloud as merely an alternative venue for hosting their hardware (or in some cases software) solutions, is trapped in a sucker’s choice: do it in house, or do it in the cloud (or in a bureau etc.). But what the business community really wants to know (or should want to know) is how to do things differently. How do they profoundly change the experience their customers have? How do they provision their staff with information that can help them pre-emptively deal with problems? How do they make their business remarkable? This is where the cloud offers exciting new potential, and the vendors and the customers are still not talking the same language.

Part of this is a lack of awareness of the problem, part of it is a failure to truly see “The Cloud” for what I believe it to be – THE CLOUD, not the Amazon cloud, the Google Cloud, the Microsoft Cloud, the Salesforce Cloud, the “My Private Cloud” cloud. This lack of unified thinking leads to limited thinking and stifles opportunities.

I was excited to read this week that the IEEE has announced its intention to develop a cloud interoperability standard. Hopefully this will get people thinking in a more unified way, but the building blocks are already there. Any cloud offering worth its salt provides some form of API that enables interoperability, so there is no need to wait.

A very small example by way of illustration: after a recent rollout of a new system to tens of thousands of people, a couple of customers reported an issue they were experiencing. The system touched on a number of different cloud-based systems, and by viewing these as part of one system, I was able to pre-emptively find 140 other people who had either experienced this problem or were going to experience it. A personalised message was then sent from the system to each of these, and they were astonished that we were able to deal with their problem proactively before they had reported it. Most of them never even knew there had been a problem. Looking at this particular instance from any single vantage point, we would only have been able to deal with these people as they inevitably hit the issue. The patterns were only evident when taking a holistic, pattern-based view of their particulars.
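The holistic view described here can be sketched in a few lines: treat each cloud system as a source of users matching a symptom signature, take the union of the matches, then subtract those who have already reported the issue. All the system names and data below are hypothetical stand-ins, not anything from the actual rollout.

```python
# Hypothetical sketch: each cloud system is queried for users matching
# a "symptom signature"; names and data are illustrative only.

def find_affected_users(systems, signature, reported):
    """Union the matches from every system, then drop the users who
    have already reported the issue themselves."""
    affected = set()
    for fetch_matches in systems:
        affected |= fetch_matches(signature)
    return affected - reported

# Toy callables standing in for three cloud systems' query APIs
crm = lambda sig: {"u1", "u2", "u7"}
billing = lambda sig: {"u2", "u3"}
support = lambda sig: {"u7", "u9"}

proactive = find_affected_users([crm, billing, support],
                                signature="error-42",
                                reported={"u1"})
```

No single system’s result set reveals the full picture; only the union across all of them does, which is the point of the anecdote.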

The business community and the technology community need to see across the chasm that divides them – their own expertise makes them assume things they shouldn’t assume. When business people learn what is really possible by synergising in the cloud, and when cloud providers learn just how significant a 10% reduction in debtor days or stock turns is for a business, or what a 5% increase in customer referrals means, then and only then will we start seeing what the cloud can do to help us effect real change.

26 Mar

Security begins at home

I recently remembered a situation a year or so ago where I was called by a friend to assist with a hard disk crash that had cost them all of their client data and rendered the machine inoperative. This was a business based out of a home office. I won’t go into the nature of their business to avoid embarrassing them, except to say that if their clients had got wind of how close their data came to being lost, it might have been bad for this small business operator. Really bad.

“Was there a backup?” I hear you ask. No there was no backup. Fortunately for the person involved and their clientele I was able to recover the hard drive. While I was doing this, I asked them if they had considered putting the data in the cloud and their response was ironic to say the least: “Oh I wouldn’t do that, the data is too important, I would be worried about the security.” And this said while I was working on the computer placed in front of an exposed window.

It got me thinking: how secure are our businesses when we expose our systems through our homes so badly?

Here are some things to think about when home meets office:

  1. What would happen if your home computer were stolen?
  2. Is your home computer connected to your office using automated VPN scripts? If so, is the machine adequately password-protected?
  3. Do you have automated email clients that allow access to your email without you having to log in to view or respond to it?
  4. Do you have intellectual property on your home computer that would be damaging if it fell into the wrong hands? Think source code, client lists, product plans, minutes of meetings etc.
  5. Do you have children who use the computer and could share access with friends or inadvertently install malware?
  6. Are external people able to use your computer – babysitters, friends of family etc.?
  7. Do you discuss work with your family members? If so, how sure are you that they are not sharing your news or company secrets with friends, or posting comments on Facebook or Twitter?
  8. Do you have company backups at home that could fall into the wrong hands?
  9. Does your computer connect automatically to key systems like ERP, CRM, project management, source code version control, databases etc.?

We tend to take a lot of care about our work environments, but it pays to be vigilant about the worst case when business meets the home environment.

17 Jan

Evolving IT Thinking from Independence to Interdependence

Stephen R. Covey, in his book The Seven Habits of Highly Effective People, points out that human beings evolve from dependence to independence and ultimately, if they are to fulfil their potential, they evolve beyond independence to a state of interdependence.

In the first of these states, the state of dependence, the human being needs the assistance of someone who knows how to provide food, clothing, accommodation, sanitation etc. Eventually, the child outgrows these basic needs and assumes a position of independence, ready to take on the world, posturing in a range of ways to demonstrate how well he or she can stand on their own two feet. Many of us fail to progress meaningfully beyond this stage, but in order to fully realise our potential, we must let go of our ego and realise that we can do much better if we allow others to help us – trusting that others who know better in some aspect will deliver their share of an agreement to work with us, yielding through synergy a greater outcome than we could possibly achieve on our own.

The evolution of business systems and IT infrastructure for a corporation follows a similar pattern. The company starts off dependent on the advice and software offered by some small (or large) consultancy, with its computers serviced or looked after by someone else. Companies eventually grow, or grow in confidence, and decide to take these matters in house and look after their own computer systems. This is the stage where they express their independence. By bringing it all in house – hosting their own equipment, designing their own network architecture – they demonstrate their self-confidence in being able to stand on their own (if virtual) two feet.

Of course, even in this stage, with something as complex as a computer system, there is always going to be an aspect of interdependence – nobody is going to design their own CPUs, switches, cables, firewalls, communications protocols, power source, operating systems etc. But at some fundamental level there is a view that systems management is going to be done independently of all others. This is done in the name of competition, security, hubris, special requirements or just “because we can”.

Those that manage to evolve beyond this view take a look outside their self-imposed walls and ask “what if” questions based on a fresh perspective that encompasses the views and potential of others. This is in essence what interdependence allows.  Here are some examples that have come about as a result of interdependent thinking:

  • The electricity grid – delivered as a result of thinking about the interconnectivity and the common need of all
  • The humble telephone provides interconnectivity in ways previously rarely dreamed about. It is inspired by the desire to interoperate.
  • The browser as the ultimate form of polymorphism – the killer app: simple in its approach, almost universal in its versatility
  • XML as the common lingua franca allowing all systems to communicate with all systems – the same data being shared and used in completely different ways
  • Platform as a Service – the leverage afforded by being able to apply improvements to thousands of clients at once makes possible the impossible, and delivers systems in unprecedented timeframes

These are just a small number of examples, but putting them together yields layer upon layer of opportunity to improve the world for everyone by simply thinking interdependently. For example

  • The Google Maps project was significantly enhanced when one of the regional development teams provided tools so that others could extend the maps globally. Soon enough, the public responded magnificently and a global resource was born.
  • Wikipedia has 3.5 million articles in English, not to mention all the other languages covered – all as a result of thinking interdependently.
  • The Petrucci Library offers a free collection of over 80,000 scores of classical and related sheet music.

Those are public domain projects, but what about commercial synergies, such as

  • Freight companies offering automatic tracking of packages while in transit
  • Single sign on being used to streamline a customer’s relationship with a range of seemingly unrelated entities
  • Customer subscriptions to forum posts and cases logged in support systems with automated notifications via SMS, Email, Twitter or Facebook

When companies start thinking from a point of view of interdependence, all sorts of doors open up to ways in which they can improve their customers’ experience, the quality of their products, the working conditions of their staff, the reputation of their brand. Hardly a day goes by when I don’t see an example of how a company could improve the world by thinking interdependently rather than independently. And it doesn’t require exposing private information to the wrong people. It just requires thinking differently.

13 Jan

The Privacy Membrane

People keep going on about Privacy when it comes to the cloud. Privacy: it is like a religion. “We must preserve privacy in everything we do”. If you think about this general view for a moment, it becomes clear very quickly that it is a superficial view without much substance. Managing privacy is about ensuring:

  • That we can get access to “stuff” we need, want and have a right to get access to, when we want or need it;
  • That we can prevent others from getting access to “stuff” we don’t want them to get when they have no right to it (or we have the right to prevent them from getting it)
  • That we can disseminate “stuff” (to which we have a right) to people (or systems) when we want to;
  • That we can prevent others from exposing us to “stuff” we don’t want to receive and we have a right to avoid.

An Internet search of ‘privacy taxonomy’ yields a lot of academic material on this topic, but I thought it would be worth conveying a few key points to get people thinking about the fact that information privacy is not just some black and white concept that applies without thought across the board.

The Privacy Membrane

The diagram to the right highlights that there are different types of information, and shows how this fact applies to the privacy debate. Clearly, from an information producer’s or custodian’s perspective, there is some information we want to keep to ourselves and some we want to share with the world. Likewise, from a consumer’s perspective, there is some information we really don’t want to receive, while other information we prize highly.

In the diagram, information flows in two directions – from us and to us. Some flows are desirable (green), while some flows are undesirable (red).

The diagram implies domestic and commercial use, but this is indicative only and can be applied in all permutations – domestic to domestic, domestic to commercial, commercial to commercial and commercial to domestic.

One interesting implication of the diagram is that the nature of information (in terms of its privacy) differs between the view of the entity with the information and the view of the potential recipient.

So let us examine each of these in turn. In the description given below, the term possession is used generically to imply either ownership or custodianship, and should be considered in the widest possible terms. Each type is examined in the order of the diagram starting at the top left, going down then across.

Type 1: Information we possess and don’t want others to possess.
This type of information is information for which we consider there is some sort of negative ramification for us if others gain possession of it. This can range from personally embarrassing or damning through to commercially damaging. Examples include:

  • An employee interviewing for another job;
  • Trade negotiations or terms;
  • Customer information such as credit card or phone numbers, health records, trading history, information garnered under legal professional privilege;
  • Nefarious or embarrassing activities such as infidelity, crime or doing something against the will of a parent, spouse or employer;
  • Details about a planned surprise.

In all of these cases there is some reason why the possessor would not want others to gain access to the information. Note that in some cases the information’s privacy value is temporary; in others, not so much. The value of the information to others is not a factor, except to the extent that, from the perspective of the possessor, it would be damaging for the information to leak across the privacy membrane.

Sometimes the damage is associated with the information itself – a villain gets hold of a credit card – and other times it is not the information per se, but rather the fact that some information, any information, has leaked, which causes a loss of trust in the custodian. For example, if a bank, accountant or broker were to release details of customer balances, the results would be devastating – not necessarily for the customer, but certainly for the bank. An example of this happened this week, with Vodafone customer information including names, numbers and credit card details being exposed on public websites, over which some employees have lost their jobs.

The degree of risk of information leaking in this category depends on many factors, including:

  • The perceived damage to the possessor of losing the particular information
  • The perceived damage to the possessor of losing information in general – this is very much dependent on the nature of the possessor. For example, a child leaking information is likely to suffer little damage compared to a major cloud provider.
  • The perceived value to a potential recipient of the information, the number of potential recipients, and whether the information is single use or has value to many people.

Type 2: Information we possess we want others to access

Once again there is a wide variety of information in this category and the perceived value to the (initial) possessor varies as well. Examples of information in this category include:

  • News possessed by a journalist or publication;
  • Details about an upcoming social occasion;
  • A new product announcement;
  • A limited-time offer, with or without steak knives;
  • Results of personal or corporate achievements to be shared for glory.

In these cases, the value again depends on the context and the timing. A news story is of value to a journalist if it is timely and, better still, uniquely obtained. Once it has been published by others, its value is often greatly diminished. Note the value to the possessor is in some ways independent of the value to the targeted recipient(s), but in some ways depends greatly on how it is perceived by them. A wedding invitation carries great value in many cases, while a “spam” email usually carries negative value.

Type 3: Information others possess we would like to possess (whether or not we have a right to it)

This is where privacy takes on a different nuance – how to gain possession of information. Information may be public, such as the weather, a currency exchange rate or a share price; or it might be private – either pertaining to us (information we have a right to, such as a bank balance or a medical or academic test result, or information we don’t have a right to, such as employer discussions about our future or details about who voted for us or gave us a positive review after a presentation) or pertaining to someone else.

Type 4: Information others possess we do not want to possess (not now, perhaps not ever)

Once again there is a wide variety of information in this category, ranging from spam, where someone else wants us to possess the information, to clutter we deem irrelevant – the noise around us that distracts. As with the other types of information, sometimes it is a question of context – we may want to possess the information at another time, and increasingly software systems, especially those driven by cloud technologies, are giving people intelligent context based on digital body language, trends, historic decisions and actions etc.
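The four types above can be captured in a tiny classification sketch. Encoding each flow as a (we-possess, flow-desired) pair is my own framing for illustration, not something from a formal privacy taxonomy.

```python
# Minimal sketch encoding the four types of the privacy membrane.
# The (we_possess, flow_desired) pairing is an assumed framing.

def membrane_type(we_possess: bool, flow_desired: bool) -> int:
    """Map an information flow to Types 1-4 described above."""
    if we_possess:
        return 2 if flow_desired else 1   # share it / guard it
    return 3 if flow_desired else 4       # seek it / block it

credit_cards = membrane_type(True, False)   # Type 1: guard it
announcement = membrane_type(True, True)    # Type 2: share it
share_price = membrane_type(False, True)    # Type 3: seek it
spam = membrane_type(False, False)          # Type 4: block it
```

The point of the encoding is that the same piece of information can land in different quadrants depending on who holds it and who wants it.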

————————————————————

The management of the privacy membrane is where software vendors and IT service providers earn their money. Allowing access to information, sometimes mission-critical details, while preventing others from accessing that same information is at the core of what the IT industry is all about.

The cloud will increasingly facilitate the crystallisation of these differences and provide us with an increasingly sharp focus on what matters to us. A greater understanding of the management of the privacy membrane – letting things through when necessary, preventing things from either direction when required, and transforming them, anonymising them, or merging them together from disparate sources as appropriate – will allow for a user experience that will seriously change the nature of the privacy debate in the years immediately ahead.

Hopefully we can get past the religious view that blindly follows the mantra “all data is private and privacy must be preserved in all cases” to a view that protects information as and when appropriate (without compromising security), transforms information as and when appropriate (without compromising security or accuracy), supplies information as and when appropriate, and thereby delivers some serious value to all this “stuff” in our possession.

3 Dec

A brief response to those who criticize Cloud Vendors for warning about Private Cloud

There are many who scoff at people like Vogels and Benioff for making statements like “Beware the false cloud”. Focusing on the naming aspect rather than the conceptual aspect misses the point Benioff, Vogels et al. are making.

I agree, who cares what you call it – but this is not their point. Their key point in saying to be wary of private cloud is that with it you forego all of the real benefits: abstraction, information leverage, true scalability in both directions, and the expertise leveraged from multi-tenancy security platforms.

The term ‘cloud’ was adopted to demonstrate the fundamental differences gained by abstracting the hardware out of the picture.

DIY is DIY, and some things should not be tried at home, unless you REALLY have a phenomenal driver of life-and-death importance to segregate and isolate.
Until you have really experienced these cloud benefits it is difficult to understand this fundamental difference, and why it is seen as such an important distinction.

21 Nov

Property Rights to Information in the Cloud – A Cloud based view on the Coase Theorem

When I studied economics in the early 1980s, we learned of the Coase Theorem, which always fascinated me. The theorem is attributed to Ronald Coase, who has since earned the Nobel Prize in Economics (1991).

It occurred to me recently that the Coase Theorem may have some fascinating implications for the property rights of information stored in the Cloud.

The Coase Theorem, as I recall it, goes like this: regardless of who owns resources initially, given clearly defined property rights and zero transaction costs, resources will always end up allocated most efficiently.

This makes for some really interesting discussions about the Internet and property rights to information. The theorem is particularly relevant for two reasons. Firstly, in the Internet world, transaction costs asymptotically approach zero, meaning that the costs of transferring or asserting ownership of information are infinitesimally small, and getting lower all the time. Secondly, property rights are subject to a whole range of debates around privacy, rights to share, rights to mail, sovereignty and rights to access. So if property rights can be defined, the best allocation of resources, according to the theorem, can be ascertained.

For the first time, we have a situation where the theorem can be tested on a massive scale, transaction costs now being so low as to have been unimaginable when the theorem was first postulated. Economists are famous for proposing academic models, but here we have one that can actually play out in real life, where the focus is on the property rights, not the transaction costs.

So what does this imply? More research will be required on this, I am sure, but some interesting trends are already emerging. We are seeing some stupendous valuations placed on the holders of the information we have. Facebook stands out as a particularly interesting case study because of the ownership debates and the sheer scale of data being pushed through that platform. Google is interesting because it can figure out what we are interested in and match that to marketers.

What does this say about the valuation of our personal data? Will a greater understanding of the Coase Theorem as it applies to Web 2.0 put a value on our personal data? Our spending patterns? There are already small examples of people receiving money for their data, their opinions, their search history, their web trails. There are also plenty of examples where people are paid in the form of free software in exchange for the right to deliver advertising.

One thing is certain – we should not be giving up our rights to our data without fully understanding how valuable it is. The Coase Theorem suggests that there is more value to it than would appear on the surface, and a little care should be exercised in the way we manage this intangible property.

I will have to think further on this.

16 Nov

Misplaced concerns about privacy in the Cloud?

Here’s a thought: imagine needing a solution for processing diverse vendor bills or handwritten documents digitally with 100% accuracy. Imagine these come in continuously but with no idea of frequency. Obviously if you can provide some sort of API then others can hook into your system directly, but what if you are dealing with consumers who won’t use a computer? With Amazon’s Mechanical Turk you can programmatically assign these tasks to the public in a bidding system where you set the price of the request. You can make three independent requests for someone to enter the data into your database, compare the three results, and only if all three match do you consider the record processed. If one of them doesn’t match the other two, you would go out with a new request and keep doing so until you get three that match. Anyone whose entry did not match would be marked with a demerit, and once they earn enough demerits you would block them from accepting future tasks. They would also be incentivised to do well because it would affect their public rating.
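The triple-entry scheme above can be sketched as a small consensus check. This is a hedged illustration only, not real Mechanical Turk API usage: `submissions` simply stands in for the transcriptions independent workers return, and the demerit bookkeeping is invented for the example.

```python
import collections

def consensus(submissions, required=3):
    """Return the transcription once `required` identical copies
    exist; otherwise None, meaning: post another request and retry."""
    value, count = collections.Counter(submissions).most_common(1)[0]
    return value if count >= required else None

def demerit_workers(submissions, accepted):
    """Indices of workers whose entry disagrees with the accepted value."""
    return [i for i, s in enumerate(submissions) if s != accepted]

batch = ["ACME Pty Ltd $142.50", "ACME Pty Ltd $142.50",
         "ACME Pty Ltd $142.50", "ACME Pty Ltd $142.00"]
result = consensus(batch)                   # three matching copies: accepted
flagged = demerit_workers(batch, result)    # the odd one out gets a demerit
```

With fewer than three matching copies, `consensus` returns None and the task would simply go back out to the marketplace.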

The cloud enables all sorts of variations of this model. It provides a means to connect low-paid service providers with companies who need tasks completed quickly and efficiently at very low cost. In essence it is similar to the microcredit schemes initiated by the Grameen Bank in Bangladesh and others, in the sense that it opens up avenues of empowerment – but it potentially opens up opportunities for corporates to benefit as well. Incidentally, the founder of the Grameen Bank, Muhammad Yunus, won the Nobel Peace Prize for his work.

For many businesses this is a frightening, nightmarish scenario – the encapsulation of the very things that prevent them from considering the cloud. And in many cases, it is simply not an option. But it makes for an interesting thought experiment: how far can we go, in the interest of efficiency, in opening our systems up to micro-outsourcing arrangements like this?

I suspect that over time scenarios like this will become more acceptable. Today, though, I can’t see many people signing off on an implementation like this. If it were me, I would be looking to SOA models and trying to get suppliers into a B2B relationship. Years ago EDI would have been the way – if you wanted to be a supplier to one of the big department stores, you needed to hook into their systems. But this is a digression – the example postulated was about non-technical integrations.

But it raises the question of why we are so focused on privacy concerns in the cloud to the exclusion of the benefits. Sure, the above example opens a Pandora’s box of privacy concerns and would be almost universally rejected, but what about the normal, everyday uses of the cloud? For most scenarios, the lengths the major cloud service providers go to to ensure data is accessible only by those who should see it should allay any fears – after all, the big cloud providers typically have a lot more to lose if they leak corporate data.

It is not the cloud vendors we should be fearful of, it is the way we choose to use their services; it is the way we choose to run our companies, it is the way we choose to view the world in which we live.

30 Sep

EE Times article published

An article of mine has been published in EE Times magazine discussing the acquisition of Morfik by Altium. EE Times is a magazine for the electronic engineering industry, so the article is written partly from that angle, but I think it gives people a good sense of Altium’s strategic direction.

30 Sep

Where next for Salesforce Chatter? My two cents…

Salesforce has released an internal collaboration tool that beautifully leverages the power of the cloud. When I first saw Chatter I was excited by the ability of people to subscribe to objects – more on that later. What surprised me was how many people rave about how good it is. Mostly, from what I can see, people seem to use it for person-to-person communication, and for this it has some interesting possibilities.

For example, sales teams are able to share tips or presentations they have given in some vertical industry so that other sales reps facing a challenge in that industry can learn from their experience. “Selling to an ENTJ? No problem, here is my approach.”

When Marc Benioff recently recalled the question that drove him to start Salesforce.com (“Why aren’t more enterprise applications like Amazon?”), and how that question has now evolved into “Why aren’t more enterprise applications like Facebook?”, I realised something important about all these new ways of collaborating – Ning, Facebook, Twitter, Yammer, WordPress and many others. They all allow people to publish a range of information about various facets of their lives, and they allow people to subscribe to those facets. Take Facebook as an example. People choose to post information about their social lives in the form of photos taken at parties, upcoming events, social news etc. People choose to participate in games where they grow virtual pets or plants and collaborate. People choose to support various causes. As publishers, each of us chooses to display all sorts of things in the hope that someone will find them interesting or valuable. As subscribers, we each choose to set up an antenna to learn what a particular person is saying, or what is happening relating to a particular favourite topic – perhaps a musician, perhaps a company.

This notion of publishing and subscribing is core to Chatter, but to see it as merely a closed-circuit means for human beings to publish and subscribe to information is to miss the mark somewhat, because Chatter also allows business objects to participate in the free flow of information. Human beings can choose to subscribe to certain objects.

So let’s say you are a sales manager managing a team of five field sales executives. You want to know how they are progressing on a number of important opportunities, perhaps a dozen in all. You can subscribe to those opportunities directly: a feed is provided to you, and these opportunities will place information into your feed to let you know something has changed about them – perhaps the close date, or the probability of closure, or the amount of the opportunity. This puts you very close to the action so you know what is going on.

This is all available today, and it is not limited to opportunities – what about an important, complex case your team is working on for a VIP customer? You can subscribe to the case to receive information about its status. Even custom objects can participate in this world.

This is all well and good but there are some really compelling things that Chatter requires to make it truly successful in my opinion: metadata based subscription and non-event subscription.

“Metawhat?”, I hear you say. Metadata is data that describes the shape of your data – let me give a few practical examples. Currently, Chatter requires you to subscribe to specific objects, for example Opportunity number 123456. You look at all your data and choose which records interest you. But how much more powerful would it be if you could automatically subscribe to objects based on preconfigured parameters? Here are some illustrative examples:

  • You want to automatically subscribe to every opportunity worth more than $50,000 owned by someone who reports to you.
  • You want to automatically subscribe to every opportunity for a customer who has never purchased anything from you.
  • You want to automatically subscribe to any cases logged by any VIP customer with platinum support where the renewal contract is due within three months.
  • You want to automatically subscribe to opportunities where the amount is more than one standard deviation above last month’s average closed won price, is managed by one of your team members, and the customer is a new logo.
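
No such rule engine exists in Chatter today – that is the point of this wish list – but the idea is easy to sketch. Here is a minimal illustration in Python of the first rule above, with entirely hypothetical field names:

```python
# A sketch of metadata-based auto-subscription (hypothetical field names;
# Chatter offers no such API today -- this only illustrates the idea).

def should_subscribe(opportunity, my_reports):
    """Rule: every opportunity worth more than $50,000 owned by a direct report."""
    return opportunity["amount"] > 50_000 and opportunity["owner"] in my_reports

opportunities = [
    {"id": "006A", "amount": 75_000, "owner": "alice"},
    {"id": "006B", "amount": 20_000, "owner": "alice"},  # too small
    {"id": "006C", "amount": 90_000, "owner": "carol"},  # not a direct report
]

# Instead of hand-picking records, the rule builds your feed for you.
feed = [o["id"] for o in opportunities if should_subscribe(o, {"alice", "bob"})]
print(feed)  # ['006A']
```

The rule runs against the metadata (the fields), not against any particular record, so a brand-new opportunity that matches would land in your feed with no action on your part.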

Another important offering that I feel would take Chatter to a whole new level is allowing objects to chatter non-events. Imagine being able to ask an urgent case for an important customer to let you know if it hasn’t been touched for six hours, or a strategic opportunity to tell you it has not been updated for more than two days.
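
Again, nothing like this exists in Chatter today, but the staleness check itself is simple. A sketch, with hypothetical record fields:

```python
# A sketch of non-event detection: flag records whose last_touched timestamp
# is older than some maximum age -- the "non-event" Chatter could report.
from datetime import datetime, timedelta

def stale_records(records, max_age, now):
    """Return ids of records untouched for longer than max_age."""
    return [r["id"] for r in records if now - r["last_touched"] > max_age]

now = datetime(2010, 9, 17, 18, 0)
cases = [
    {"id": "500X", "last_touched": now - timedelta(hours=7)},  # stale
    {"id": "500Y", "last_touched": now - timedelta(hours=2)},  # fresh
]
print(stale_records(cases, timedelta(hours=6), now))  # ['500X']
```

A scheduled job running a check like this could post "case 500X has not been touched for six hours" into the subscriber's feed – silence becomes a signal.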

These changes would make Salesforce Chatter far more effective than it already is. Without them it is just another corporate collaboration tool.

In my next post I plan to talk about the next evolution of these concepts, something my company, Altium, is busy working towards – the Facebook of Devices, the Internet of Things, if you will. This is where we take the publisher–subscriber model to an entirely new level: devices intelligently collaborating with other devices.

17
Sep

Amazon Web Services Part 2 – Scaling Services Provided

Here is the second of three posts on Amazon’s Web Services. The first post looked at the key foundational services, and the next will talk about how Altium is leveraging Amazon’s offerings. In this post, as promised, I write about some of the ways Amazon takes elasticity to the extreme through a range of services aimed squarely at scalability.

At its most fundamental level, Amazon’s Web Services are aimed at elasticity – this is reflected in the technology as well as the pricing. The pricing is a pay-as-you-go model – by the hour, by the storage used and/or by the bandwidth consumed. Many of the services even reflect this elasticity in their names.

So what tools does Amazon provide to scale up and down? (Remember, scalability isn’t just about being able to scale up – it is about being able to scale in either direction: start small and go big, start big and go small, or start big, grow, then shrink again. Or vice versa.)

The Elastic Compute Cloud (EC2) enables you to create a machine image that you can turn on or off whenever you need it, via an API or a management console. So a program can turn on a computer when it needs one.

EC2 also allows you to define parameters on a computer so that it effectively clones itself, or kills off clones, based on demand. For example, if a request-count threshold is hit, or the time taken to respond to requests climbs too high, a new machine instance can start up automatically. And once it is no longer needed, it can simply shut down again.

Imagine a business that provides sports statistics to a subscriber base of sporting tragics. Normally four servers are enough to meet the demand of the company’s subscriber base. But once every four years, come the Olympics, everybody wants to be a sporting expert, and perhaps 400 servers are required for a period of about six weeks. Amazon’s Auto Scaling feature allows for exactly this.
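
The decision at the heart of this is just arithmetic. Here is a sketch of the scaling logic (the capacity figures are invented for illustration, not real Auto Scaling parameters):

```python
# A sketch of the scale-out/scale-in decision: run enough instances to cover
# demand, clamped between a floor and a ceiling (all numbers illustrative).
import math

def instances_needed(requests_per_sec, capacity_per_instance, minimum=1, maximum=400):
    """How many instances does the current request rate call for?"""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(minimum, min(needed, maximum))

print(instances_needed(800, 250))     # normal traffic: 4 servers
print(instances_needed(99_000, 250))  # Olympics traffic: 396 servers
```

Auto Scaling evaluates rules like this continuously, so the fleet grows for the six Olympic weeks and quietly shrinks back to four servers afterwards – and you pay only for the hours each instance actually ran.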

This is just the simple stuff – there are more sophisticated models provided, including Elastic Load Balancing, Elastic MapReduce, Simple Queue Service and Simple Notification Service. Let’s take a look at these:

  • Elastic Load Balancing allows you to automatically distribute incoming traffic across multiple EC2 instances. It automatically detects failed machines and bypasses them so that no requests are sent to oblivion. The load balancer can also handle things such as ensuring that a specific user session stays on one instance.
  • Elastic MapReduce allows large compute assignments to be split into small units so the work can be shared across multiple EC2 instances, offering potentially massive parallelism. If there is a task to crunch huge amounts of data for analysis, simulation or artificial intelligence, Elastic MapReduce can manage the splitting up and the pulling back together of the project components. Any failed tasks are rerun, and failed instances are automatically shut down.
  • Simple Queue Service (SQS) provides a means for hosting messages travelling between computers – an important part of any significant scalable architecture. Messages are posted by one computer without thought for, or knowledge of, which machine is going to pick the information up and process it. Once received, a message is locked to prevent any other computer from reading it, so it is guaranteed that only one computer will process it. Messages can remain unread for up to 14 days. Queues can be shared or kept private, and access can be restricted by IP or time. This means systems can be designed as separate, loosely coupled components – even different vendors can be responsible for each component. Different systems can work together in a safe and reliable way.
  • Simple Notification Service (SNS). Whereas SQS is asynchronous – each post to the queue can be made without thought about when it will be picked up at the other end – SNS is designed to be handled at the other end immediately. Messages are pushed out using HTTP requests, emails or other protocols, which means it can be used to build instantaneous feedback communities across a range of different architectures. So long as they support standard web integration protocols, the systems will talk to each other. SNS can be used to send automatic notifications by SMS, email or API call when some event has taken place – or, if written correctly, when some expected event has not taken place.
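
To make the SQS idea concrete without an AWS account, here is a toy stand-in using Python’s thread-safe queue. Real SQS adds visibility timeouts, 14-day retention and access policies, but the decoupling pattern is the same:

```python
# A toy simulation of the SQS pattern: a producer posts messages without
# knowing which worker will process them, and each message is delivered
# to exactly one consumer. (Not the real SQS API -- just the shape of it.)
import queue

work = queue.Queue()

def producer(q, jobs):
    for job in jobs:
        q.put(job)  # post and forget -- no knowledge of the consumer

def consumer(q):
    handled = []
    while not q.empty():
        handled.append(q.get())  # each message goes to exactly one reader
        q.task_done()
    return handled

producer(work, ["resize-image-1", "resize-image-2"])
done = consumer(work)
print(done)  # ['resize-image-1', 'resize-image-2']
```

Because neither side knows anything about the other beyond the queue, either component can be rewritten, rehosted or even supplied by a different vendor without the other noticing – which is exactly the loose coupling described above.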

I will endeavour to provide example applications over time for each of these scenarios, but for now I hope this has given you a sense of how highly scalable systems can be built using Amazon’s Web Services.