As someone who has taken an enterprise to the Cloud globally, I understand just how much of an impact the Cloud can have on a business. I have been vocal in arguing that Cloud opens up all sorts of possibilities and is not just about cost mitigation and scalability.
Businesses looking to learn what Cloud Computing can bring are faced with a plethora of suppliers purporting to offer Cloud. Many of the potentially transformational benefits can be lost in the confusion of conflicting ideas, and in a sense that it all just sounds like things they have heard before.
Real Cloud is hard to fake. The key thing is that the products, services and technologies offered by a vendor enable an enterprise to focus on the business imperatives driving it, without having to worry whether the infrastructure will be there when needed. It is always a tough question: does a company invest in expensive infrastructure just in case it is successful beyond expectations? Does it allow a huge opportunity to slip through its fingers simply because of a conservative approach to investing in infrastructure? Both are risks that all businesses have traditionally had to face.
At least, that is how it is without using Cloud. Cloud approaches mean that businesses can effectively forge ahead knowing that the infrastructure will cater for whatever is required. Imagine starting a fishing business and not having to worry about how big a boat and net you should buy, relying on being able to start with modest equipment and elastically expand the ship and net at sea if you happen to come across a huge school of fish.
The resulting freedom from encumbrance has the potential to change how businesses approach strategic planning, innovation, and the related areas of risk management and process streamlining. More agile methodologies ensue, facilitating experimentation and allowing change to happen more naturally, leading inevitably to a focus on business goals rather than on potential impediments such as not having enough infrastructure.
Cloud facilitates this change in thinking, but it has yet to overcome the concerns around privacy, security, and data sovereignty. Despite the advocates who argue that the benefits outweigh the risks, the fact remains that some businesses stand to lose more than they can gain if their data is exposed. In some cases there are legislative impediments – PCI compliance, health records and national sovereignty rules, to name a few – that render the potential gains seemingly academic. Concerns around heavy dependence on a single Cloud provider have further limited uptake.
But Rackspace has largely addressed these concerns by open-sourcing the Cloud. Working with NASA, Rackspace gave birth to what is now the fastest-growing Open Source project in history – OpenStack. The OpenStack Foundation now has more than 8,600 contributing developers, and OpenStack has been adopted by IBM, Dell, HP, NTT, Red Hat, Canonical and more than a hundred other companies. Rackspace has very publicly gone “all-in” on OpenStack and is the largest contributor to the code base. Rackspace’s bet is that Fanatical Support will be the key differentiator that enables the company to excel.
As a result of OpenStack, businesses have the freedom to build an infrastructure platform using a combination of public multi-tenanted Cloud infrastructure, dedicated hosted solutions, and private cloud facilities that are on their own premises if necessary. The technical barriers between each of these topologies are being eliminated, making for one platform that truly allows businesses to have the freedom from worrying about their infrastructure as they focus on driving their business forward.
The freedom to choose a mixture of topologies, suppliers and service levels really allows businesses to focus on what they do, not how they do it. Adding Fanatical Support to that freedom allows Cloud computing to fully realise its potential. And that excites me.
Oh, and for those who want to understand more about my role at Rackspace, I have come on board as the Director of Technology and Product – Asia Pacific. My functions include promoting how Cloud computing concepts can help businesses achieve their goals, expounding on the concepts of the Open Cloud, as well as helping ensure new Rackspace products and services are ready for the market in the Asia Pacific region.
I welcome the opportunity to talk about my journey to the Cloud and how thinking Cloud and related topics such as Big Data, the Internet of Things and Social Media can change our approach to business.
As one of the early Cloud adopters, and someone who has worked hard to promote what Cloud can bring to businesses, I was looking to join a vendor where I could leverage Cloud concepts to truly make a difference in the world. The more I looked, the more Rackspace seemed the right place to be.
The first thing that stood out to me was the company values. I was excited to see that the company places importance on the following values:
- Treating Rackers like family and friends
- Passion for all we do
- Commitment to Greatness
- Full Disclosure and Transparency
- Results First – substance over flash
- And, of course, Fanatical Support in all we do.
These formed a picture for me of an organisation striving to really make a difference. The word that stood out for me was “Greatness” – something I personally believe in very strongly. Companies that are committed to Greatness are alive, vibrant and focused on growth.
Rackspace is best known for its Fanatical Support, and I have to admit that before I experienced it, I thought it was just marketing hype. I was first exposed to it when Altium acquired a company and brought in a new head of IT who had experienced Rackspace’s Fanatical Support. His face was radiant as he described how Rackspace knew about problems on his servers before he did. I was still pretty sceptical, but impressed with the positioning. I thought that if a company could pull this off, it would make them really successful. I have always believed in providing phenomenal support, so I was impressed, but only in an intellectual way.
Then I joined the company. And what I found inside shocked me – here was a company that had inculcated the very idea of going above and beyond into the core of the company’s being. I went to the London office for my induction programme – five days of aligning new Rackers (Rackspace employees are called Rackers) to the fundamental principles that drive the business. There are over 1000 staff in the London office and I must have been approached half a dozen times by people asking me, “you seem lost – what can I do to help you?” This was no fake offer – each time this happened I was helped all the way to my objective, and the people always seemed eager to help.
Everything the company does drives this fanatical support. The company uses Net Promoter Score to measure the likelihood customers will refer others. But even the induction programme had us rookies being asked how likely we would be to recommend each of the presenters to our colleagues or friends. The presenters, we learned, were vying for a coveted internal trophy. I have never seen such engaging and creative presentations, all designed to prepare us to be able to be effective in the Rackspace culture.
The company’s mission is to be recognised as one of the world’s greatest service companies. And it shows.
I have mixed feelings about Amazon’s latest outage, which was caused by a cut in power. The outage was reported quickly and transparently. The information provided after the fault showed a beautifully designed system that would deal with any power-loss eventuality.
After reviewing the information provided, I am left a little bewildered, wondering why such a beautifully designed system wasn’t put to the ultimate test. I mean, how hard can it be to rig a real production test that cuts the main power supply?
If you believe in your systems – and you must believe in your systems when you are providing Infrastructure as a Service – you should be prepared to run a real live test that exercises every aspect of the stack. In the case of a power failure test, anything short of actually cutting the power in multiple stages, testing each line of defence in turn, is not a real test.
The lesson applies to all IT, indeed to all aspects of business really – that’s what market research is for. But back to IT. If a business isn’t doing real failover and disaster recovery testing that goes beyond ticking the boxes to actually carrying out conceivable scenarios, who are they trying to kid?
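To make the idea of a staged test concrete, here is a minimal sketch of a failover drill in Python. It models each line of power defence as a layer, cuts them one at a time, and records which backup actually picked up the load. The class and layer names (`PowerStack`, "utility", "generator") are purely illustrative – this is the shape of the drill, not anyone's real infrastructure tooling.

```python
class Layer:
    """One line of defence in the power chain, e.g. utility feed or a generator."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.active = False

class PowerStack:
    """An ordered chain of power layers; the first healthy one carries the load."""
    def __init__(self, layers):
        self.layers = layers
        self.layers[0].active = True

    def cut(self, name):
        """Simulate losing one layer and failing over to the next healthy one.
        Returns the name of the layer that took over, or None on total loss."""
        for i, layer in enumerate(self.layers):
            if layer.name == name:
                layer.healthy = False
                layer.active = False
                for backup in self.layers[i + 1:]:
                    if backup.healthy:
                        backup.active = True
                        return backup.name
                return None
        raise KeyError(name)

def staged_drill(stack):
    """Cut each layer in order, recording which backup picked up the load.
    A real drill would assert on each result before cutting the next stage."""
    results = []
    for layer in list(stack.layers)[:-1]:
        results.append((layer.name, stack.cut(layer.name)))
    return results

stack = PowerStack([Layer("utility"), Layer("generator"), Layer("secondary-generator")])
print(staged_drill(stack))
```

The point the sketch makes is the one Amazon's report makes painfully clear: the second stage (the mis-configured breaker) is exactly the kind of fault that only cutting the power for real would have found.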
Many years ago I set up a Novell network for a small business client and implemented a backup regime. One drive, let’s say E:, held programs and the other, F:, held data. The system backed up the F: drive every day and ignored the E: drive. After all, there was no need to back up the programs, and disk space was expensive at the time.
After a year I arranged to go to the site and do a backup audit, and discovered that the person in charge of IT had swapped the drive letters around because he thought it made more sense. We had a year of backups of the program directories, and no data backups at all.
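The audit that caught the problem can be reduced to a simple principle: don't check that a backup job ran, check that the backup actually contains the data you care about. Here is a minimal sketch of that check in Python; the marker-file idea and all names are hypothetical, not a description of any real backup product.

```python
import os

def backup_covers_data(backup_root, expected_markers):
    """Return True only if every expected data file is present somewhere
    under the backup tree. Checking for known data files would have caught
    the swapped-drive problem: a backup full of program directories would
    fail this check immediately."""
    found = set()
    for _dirpath, _dirnames, filenames in os.walk(backup_root):
        for name in filenames:
            if name in expected_markers:
                found.add(name)
    return found == set(expected_markers)
```

Run against last night's restore target with a handful of known data files ("this month's ledger", "the customer database"), a check like this turns a box-ticking exercise into an actual test.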
Here is the text from Amazon’s outage report:
At approximately 8:44PM PDT, there was a cable fault in the high voltage Utility power distribution system. Two Utility substations that feed the impacted Availability Zone went offline, causing the entire Availability Zone to fail over to generator power. All EC2 instances and EBS volumes successfully transferred to back-up generator power. At 8:53PM PDT, one of the generators overheated and powered off because of a defective cooling fan. At this point, the EC2 instances and EBS volumes supported by this generator failed over to their secondary back-up power (which is provided by a completely separate power distribution circuit complete with additional generator capacity). Unfortunately, one of the breakers on this particular back-up power distribution circuit was incorrectly configured to open at too low a power threshold and opened when the load transferred to this circuit. After this circuit breaker opened at 8:57PM PDT, the affected instances and volumes were left without primary, back-up, or secondary back-up power. Those customers with affected instances or volumes that were running in multi-Availability Zone configurations avoided meaningful disruption to their applications; however, those affected who were only running in this Availability Zone, had to wait until the power was restored to be fully functional.
Nice system in theory. I love what Amazon is doing, and I am impressed with how they handle these situations.
They say that what doesn’t kill you makes you stronger – here’s hoping we all learn something from this.
I have given many talks in recent years about my experiences in pioneering cloud applications. I have spoken at events ranging from C-level round tables to professional seminars to large vendor events with more than 5,000 people attending. I have had press conferences with fifty or so journalists. One thing that I find interesting is that the nature of the questions I get asked has changed over that time as an increasing number of people are becoming cloud-savvy.
In the early days, almost every question was about security, privacy and data sovereignty. More recently the questions have been more technical in nature – how to implement, how to handle change management, legal issues around service level agreements, and so on.
So it came as a bit of a surprise to speak last week to a room of 150 people almost completely new to cloud at an event held at Google’s Sydney offices. The questions were quite mixed, but they all had one thing in common: the audience hadn’t realised that Cloud computing is different from the way they currently do things.
After years of doing interesting things with the various technologies on offer, it is easy to become complacent about just how radical a difference Cloud computing can make to a business prepared to see it as an opportunity to make real change. So the opportunity to share some basics with this audience was exciting and fresh. They thought Google Apps would bring them a different mail server. I showed them how it was fundamentally different from in-house approaches: not just an outsourced mail server but an opportunity to collaborate and work untethered.
The freedom to innovate, the freedom to explore, the freedom to dream.
I found it really exciting to see them starting out on this journey that has changed so much.
One of the beautiful things about Salesforce is the ability to create or modify an object’s structure with defined relationships, permissions, application contexts, business rules and page layouts.
Think about it for a second: how many frameworks do you know of that enable you to modify the data schema and automatically set:
- Relationships between objects;
- Cardinality rules (definitions of how objects relate to each other in terms of how many of one can be related to how many of another);
- Business rules (which fields are mandatory, which fields are dependent, default values, which fields are read-only or even visible for certain users, which fields must be unique);
- Referential Integrity rules (which records will be deleted when a parent is deleted);
- A User Interface, even one that can differ for each user profile;
- Application context (which objects belong together to form a sub-application);
- Access to reports; and
- A Notification engine that can share changes with subscribers or record owners, or handle task assignments.
And all with a point-and-click interface – no programming required (unless you want to), and all with defaults to allow you to get the job done quickly. Very quickly.
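To make a couple of the items in that list concrete, here is a toy sketch in plain Python of what the platform sets up declaratively: a mandatory-field business rule and cascade delete for referential integrity. The class names and the Account/Contact pairing are purely illustrative; in Salesforce you get this behaviour with clicks, not code – which is exactly the point.

```python
class Record:
    """A toy record enforcing one declarative business rule: required fields."""
    required = ("name",)

    def __init__(self, **fields):
        for field in self.required:
            if not fields.get(field):
                raise ValueError(f"{field} is mandatory")
        self.fields = fields

class Registry:
    """A toy object store tracking parent-child relationships."""
    def __init__(self):
        self.links = {}  # parent key -> list of child keys

    def relate(self, parent, child):
        """Record a one-to-many relationship between a parent and a child."""
        self.links.setdefault(parent, []).append(child)

    def delete(self, parent):
        """Referential integrity: deleting a parent removes its children too."""
        return self.links.pop(parent, [])
```

Each of those few lines of code corresponds to a checkbox or picklist in the Salesforce setup screens, and the platform layers the page layouts, profiles, reports and notifications on top for free.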
This article is the first in a series of articles looking at changes/improvements I would like to see happen. You will find them categorised under the category “Things I Want to See”, and also filed under specific vendors where appropriate.
An increasing number of people intuitively understand the limitations of traditional peer-to-peer document sharing, where multiple copies of a document exist – at least one on each client machine. You know the drill: you attach a document to an email, the recipient opens the attachment, edits it, saves it, and then attaches the new version to a reply. Before long there are multiple copies of the document, and it can be difficult to know how it evolved. With several people involved, it can even be difficult to know which version is the current one. There may not be a single latest version at all, as two people may edit two different earlier versions at once. Stitching these all back into a master document is not easy.
A lot of tools have been developed to simplify the potentially complex task of managing all these document versions. But the cloud provides a simpler way, by fundamentally having only one document, in one location. Instead of linking people to people, you link people to documents, and the problem elegantly goes away:
[Diagram: Traditional Document Sharing vs Cloud-based Document Sharing]
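The difference can be sketched in a few lines of Python: email-style sharing hands each person an independent copy, which immediately diverges; cloud-style sharing hands each person a reference to the same single document, so there is only ever one current version. The document contents here are, of course, made up.

```python
import copy

# Traditional sharing: each recipient gets an independent copy of the document.
master = {"title": "Proposal", "body": "draft"}
alice_copy = copy.deepcopy(master)   # attachment sent to Alice
bob_copy = copy.deepcopy(master)     # attachment sent to Bob
alice_copy["body"] = "Alice's edits"
bob_copy["body"] = "Bob's edits"
# Three versions now exist (master, Alice's, Bob's) and none is "the" latest.

# Cloud sharing: everyone holds a reference to the one document.
shared = {"title": "Proposal", "body": "draft"}
alice_view = shared
bob_view = shared
alice_view["body"] = "Alice's edits"
# Bob immediately sees the current version, because there is only one:
assert bob_view["body"] == "Alice's edits"
```

Real cloud document services add concurrency control and revision history on top, but the core simplification is exactly this: one document, many references, no stitching.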