Why I joined Rackspace Part II – the Products and the Strategy
As someone who has taken an enterprise to the Cloud globally, I understand just how much of an impact the Cloud can have on a business. I have been vocal in arguing that Cloud opens up all sorts of possibilities and is not just about cost mitigation and scalability.
Businesses looking to learn more about what Cloud Computing can bring are faced with a plethora of suppliers purporting to have Cloud. Many of the potentially transformational benefits can be lost in the confusion of conflicting ideas, and a sense that it all just sounds like stuff they have heard before.
Real Cloud is hard to fake. The key thing is that the products, services and technologies offered by a vendor enable an enterprise to focus on the business imperatives driving it, without having to worry whether the infrastructure will be there when needed. It is always a tough question: does a company invest in expensive infrastructure just in case it is successful beyond expectations? Does it allow a huge opportunity to slip through its fingers simply because of a conservative approach to investing in infrastructure? Both are risks that businesses have traditionally had to face.
At least, that is how it is without using Cloud. Cloud approaches mean that businesses can effectively forge ahead knowing that the infrastructure will cater for whatever is required. Imagine starting a fishing business and not having to worry about how big a boat and net you should buy, relying on being able to start with modest equipment and elastically expand the ship and net at sea if you happen to come across a huge school of fish.
The freedom from encumbrance that results has the potential to change businesses’ approach to strategic planning, to innovation, and to the related areas of risk management and process streamlining. More agile methodologies ensue that facilitate experimentation and allow changes to happen more naturally, leading inevitably to a focus on business goals rather than on potential impediments such as not having enough infrastructure.
Cloud facilitates this change in thinking, but it has yet to overcome the concerns around privacy, security, and data sovereignty. Despite all the advocates who have effectively said that the benefits outweigh the risks, the fact remains that some businesses stand to lose more than they can gain if their data is exposed. In some cases there are legislative impediments (PCI compliance, health records and national sovereignty rules, to name a few) that render the potential gains seemingly academic. Concerns around heavy dependency on a single Cloud provider have further limited the uptake.
But Rackspace has largely addressed these concerns by open-sourcing the Cloud. By working with NASA, Rackspace has given birth to what is now the fastest-growing Open Source project in history – OpenStack. The OpenStack Foundation now has more than 8,600 contributing developers, and OpenStack has been adopted by IBM, Dell, HP, NTT, Red Hat, Canonical and more than a hundred other companies. Rackspace has very publicly gone “all-in” on OpenStack and is the largest contributor to the code base. Rackspace’s bet is that Fanatical Support will be the key differentiator that enables the company to excel.
As a result of OpenStack, businesses have the freedom to build an infrastructure platform using a combination of public multi-tenanted Cloud infrastructure, dedicated hosted solutions, and private cloud facilities that are on their own premises if necessary. The technical barriers between each of these topologies are being eliminated, making for one platform that truly allows businesses to have the freedom from worrying about their infrastructure as they focus on driving their business forward.
The freedom to choose a mixture of topologies, suppliers and service levels really allows businesses to focus on what they do, not how they do it. Adding Fanatical Support to that freedom allows Cloud computing to fully realise its potential. And that excites me.
---
Oh, and for those who want to understand more about my role at Rackspace, I have come on board as the Director of Technology and Product – Asia Pacific. My functions include promoting how Cloud computing concepts can help businesses achieve their goals, expounding on the concepts of the Open Cloud, as well as helping ensure new Rackspace products and services are ready for the market in the Asia Pacific region.
I welcome the opportunity to talk about my journey to the Cloud and how thinking Cloud and related topics such as Big Data, the Internet of Things and Social Media can change our approach to business.
Why I joined Rackspace Part I – the Company and its Values
As one of the early Cloud adopters, and someone who has worked hard to promote what Cloud can bring to businesses, I was looking to join a vendor where I could leverage Cloud concepts to truly make a difference in the world. The more I looked, the more Rackspace seemed the right place to be.
The first thing that stood out to me was the company values. I was excited to see that the company places importance on the following values:
- Treating Rackers like family and friends
- Passion for all we do
- Commitment to Greatness
- Full Disclosure and Transparency
- Results First – substance over flash
- And, of course, Fanatical Support in all we do.
These formed a picture for me of an organisation that was striving to really make a difference. The word that stood out for me was the word “Greatness”. This is something that I personally believe in very strongly. Companies that are committed to Greatness are alive, vibrant and focused on growth.
Rackspace is best known for its Fanatical Support and I have to admit before I experienced it I thought it was just marketing hype. I was first exposed to it when Altium acquired a company and brought in a new head of IT who had experienced Rackspace’s Fanatical Support. His face was radiant as he described the fact that Rackspace knew about problems on his servers before he did. I was still pretty sceptical, but impressed with the positioning. I thought if a company can pull this off, it would make them really successful. I have always believed in providing phenomenal support, so I was impressed, but only in an intellectual way.
Then I joined the company. And what I found inside shocked me – here was a company that had inculcated the very idea of going above and beyond into the core of its being. I went to the London office for my induction programme – five days of aligning new Rackers (Rackspace employees are called Rackers) to the fundamental principles that drive the business. There are over 1,000 staff in the London office, and I must have been approached half a dozen times by people asking me, “You seem lost – what can I do to help you?” This was no fake offer – each time it happened I was helped all the way to my objective, and the people always seemed eager to help.
Everything the company does drives this fanatical support. The company uses Net Promoter Score to measure the likelihood that customers will refer others. Even on the induction programme, we rookies were asked how likely we would be to recommend each of the presenters to our colleagues or friends. The presenters, we learned, were vying for a coveted internal trophy. I have never seen such engaging and creative presentations, all designed to prepare us to be effective in the Rackspace culture.
The company’s mission is to be recognised as one of the world’s greatest service companies. And it shows.
Microsoft Acquiring Yammer Is Good News for All
Today’s announcement that Microsoft has acquired Yammer has the feel of something very exciting – and I would like to share my initial thoughts on what this might mean.
Yammer provide an enterprise collaboration platform based upon publisher-subscriber principles, but constrained to a domain context. If you don’t have a matching email address you don’t get to participate. From the Yammer website:
Yammer brings the power of social networking to the enterprise in a private and secure environment. Yammer is as easy to use as great consumer software like Facebook and Twitter, but is designed for company collaboration, file sharing, knowledge exchange and team efficiency.
That Microsoft has decided to acquire Yammer shows great insight, and a willingness to think creatively about tackling the new world of social media. Microsoft will be able to leverage Yammer’s platform in many areas of the business, so it is somewhat of a surprise to learn that they have positioned it as part of the Office family. Sure, Yammer could make various Office products much more powerful, particularly when paired with the Office 365 offerings, but I am concerned that Microsoft may be looking to productise it as just another tool alongside the Office suite, when Yammer has the potential to make a big impact throughout much of the Microsoft product line.
So here’s a quick overview of how I initially think Microsoft products could benefit from Yammer:
- Excel, Word and PowerPoint could all gain major collaboration benefits:
- commentary from various people,
- tracking changes with comments in Office 365,
- suggestions for further amendments, with the ability to apply them,
- branched versions,
- seeking approval,
- requesting clarification on a paragraph, slide, or formula,
- requesting artwork for insertion.
- Microsoft Project could gain some qualitative aspects – look at Assembla or Pivotal Tracker for some of the interesting developments in the application of social media principles to project management.
- Outlook could integrate streams from multiple sources including email and Yammer, but also from other social media streams – perhaps Twitter, Facebook and Chatter – to the extent corporate policies allow.
- Dynamics would benefit – discussions around non-payment of invoices and doubtful debtors, stock levels, product return rates and supplier feedback would be a good starting point. Beyond that, there is plenty of scope for linking Yammer to the actual business objects and enabling people to subscribe to invoices, customers, picking slips and so on – for example, sending a notification to a subscriber when an invoice over a certain amount is paid, or when its payment deadline passes (see the sketch after this list).
- Sharepoint would also benefit. The full extent to which these two tools can synergise requires some deeper thought, but at the surface, the collaborative nature of each appears complementary.
- Even SQL Server and Visual Studio could provide hooks that enable the database or an application to feed easily into a Yammer stream, or respond to a Yammer feed.
- Microsoft’s acquisition of Skype will fit nicely into this view as well, with a tightly integrated communication platform that runs from asynchronous emails and notifications, through live discussions, to video.
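To make the Dynamics idea above concrete, here is a minimal sketch of posting such a notification through Yammer’s REST API. The messages endpoint is Yammer’s v1 API; the invoice fields, the threshold and the token are invented for illustration.

```python
import requests

YAMMER_TOKEN = "YOUR_OAUTH_TOKEN"   # placeholder OAuth bearer token

def notify_invoice_paid(invoice_id: str, amount: float, group_id: int) -> None:
    """Post to the subscribers' Yammer group when a large invoice is paid."""
    if amount < 10_000:             # hypothetical notification threshold
        return
    resp = requests.post(
        "https://www.yammer.com/api/v1/messages.json",
        headers={"Authorization": f"Bearer {YAMMER_TOKEN}"},
        data={
            "body": f"Invoice {invoice_id} for ${amount:,.2f} has been paid.",
            "group_id": group_id,   # the group subscribed to invoice events
        },
    )
    resp.raise_for_status()
```

In practice the trigger would come from the financial system’s own event or workflow mechanism; the sketch only shows the final hand-off into the social stream.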
I am also encouraged because this will raise the profile of Social Media to the mainstream. Instead of being seen as something for the Salesforce evangelists and their like, Social Media will become more of an everyday business tool as a result of this acquisition.
And that can only be a good thing.
Here’s hoping Microsoft are thinking strategically about this rather than seeing it as just a new feature set to add to the Office product line.
Theoretical Disaster Recovery doesn’t cut it.
I have mixed feelings about Amazon’s latest outage, which was caused by a cut in power. The outage was reported quickly and transparently, and the information provided after the fault showed a beautifully designed system that would deal with any power-loss eventuality.
In theory.
After reviewing the information provided, I am left a little bewildered, wondering why such a beautifully designed system wasn’t put to the ultimate test. I mean, how hard can it be to rig a real production test that cuts the main power supply?
If you believe in your systems – and you must believe in your systems when you are providing Infrastructure as a Service – you should be prepared to run a real live test that exercises every aspect of the stack. In the case of a power failure test, anything short of actually cutting the power, in multiple stages that test each line of defence, is not a real test.
The lesson applies to all of IT – indeed to all aspects of business; that is what market research is for. But back to IT. If a business isn’t doing real failover and disaster recovery testing that goes beyond ticking the boxes to actually playing out conceivable failure scenarios, who are they trying to kid?
Many years ago I set up a Novell network for a small-business client and implemented a backup regime. One drive, let’s say E:, held the programs and the other, F:, held the data. The system took a backup of the F: drive every day and ignored the E: drive. After all, there was no need to back up the programs, and disk space was expensive at the time.
After a year I arranged to go to the site and do a backup audit, and discovered that the person in charge of IT had swapped the drive letters around because he thought it made more sense. We had a year of backups of the program directories, and no data backups at all.
Here is the text from Amazon’s outage report:
At approximately 8:44PM PDT, there was a cable fault in the high voltage Utility power distribution system. Two Utility substations that feed the impacted Availability Zone went offline, causing the entire Availability Zone to fail over to generator power. All EC2 instances and EBS volumes successfully transferred to back-up generator power. At 8:53PM PDT, one of the generators overheated and powered off because of a defective cooling fan. At this point, the EC2 instances and EBS volumes supported by this generator failed over to their secondary back-up power (which is provided by a completely separate power distribution circuit complete with additional generator capacity). Unfortunately, one of the breakers on this particular back-up power distribution circuit was incorrectly configured to open at too low a power threshold and opened when the load transferred to this circuit. After this circuit breaker opened at 8:57PM PDT, the affected instances and volumes were left without primary, back-up, or secondary back-up power. Those customers with affected instances or volumes that were running in multi-Availability Zone configurations avoided meaningful disruption to their applications; however, those affected who were only running in this Availability Zone, had to wait until the power was restored to be fully functional.
Nice system in theory. I love what Amazon is doing, and I am impressed with how they handle these situations.
They say that what doesn’t kill you makes you stronger – here’s hoping we all learn something from this.
Transitioning to the Cloud
Today I am sharing the talk I gave at the CeBIT Cloud 2012 conference in Sydney. Entitled “Transitioning to the Cloud”, the presentation covers three areas:
- If you want to transition your business to the cloud you need to Think Cloud – Cloud, as I see it, is as much a state of mind as a technology, and you need to embrace this thinking to really make full use of its potential;
- Some examples from my personal experience of using some of the large Cloud providers’ offerings, and why they are more than what they superficially appear to be; and
- Some tips on adopting Cloud (previously covered in an earlier post)
My Top 7 Tips for Going to the Cloud
A lot of people ask me for advice on the most important things to consider when moving a business into the Cloud. So here are some of the things that I think business people need to consider when thinking about going to the Cloud:
1. Make sure you know how to get your data out again
Often people think hard about how they are going to put their data into the Cloud. If they are using Software as a Service – Salesforce, NetSuite, Intacct, Clarizen, or Google Apps for that matter – they will be thinking about how to get their data into a shape that can go into the system. The documentation for these systems makes clear reference to how to prepare and then import the customer’s data, and there are usually consultants who can assist with this process. Typically this process is well planned, but often little thought is given to how exactly you go about extracting the data again in a way that is of value to you going forward. Often, lip service is paid to the issue by asking questions like “Can I get a backup of my data?”, and a reassuring yes is provided to the now comforted prospective customer. It is one thing to be told it can be done, but you need to check that the data is actually in a format that is useful to you. And if the system is mission critical, the data needs to be not just useful but readily convertible for immediate use.
Some of the things I have done to ensure that my data is safe include writing programs that automatically read the data for updates every fifteen minutes and write them into a relational database hosted separately, even replicated both in-house and in the Cloud. All customisations are programmatically managed so that the relational database copy always reflects the structure in the live system. I did this with Salesforce, where more than 300 custom objects had been created. Another example is to write a program that knows how to extract all the data from a system, such as an accounting system, using the API provided. Until you have tangibly proven that you can get your data into a format you can actually use, having access to a copy of it is meaningless.
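As a minimal sketch of that polling approach – assuming the third-party simple_salesforce Python library, a local SQLite store, and a hypothetical sync of the standard Account object:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

from simple_salesforce import Salesforce  # assumed third-party library

# Credentials and the object being synced are placeholders.
sf = Salesforce(username="user@example.com",
                password="password",
                security_token="token")

conn = sqlite3.connect("offsite_copy.db")
conn.execute("""CREATE TABLE IF NOT EXISTS account
                (id TEXT PRIMARY KEY, name TEXT, modified TEXT)""")

def sync_accounts():
    """Pull records changed in the last fifteen minutes and upsert them locally."""
    since = (datetime.now(timezone.utc)
             - timedelta(minutes=15)).strftime("%Y-%m-%dT%H:%M:%SZ")
    soql = ("SELECT Id, Name, LastModifiedDate FROM Account "
            f"WHERE LastModifiedDate > {since}")  # SOQL datetimes are unquoted
    for rec in sf.query_all(soql)["records"]:
        conn.execute("INSERT OR REPLACE INTO account VALUES (?, ?, ?)",
                     (rec["Id"], rec["Name"], rec["LastModifiedDate"]))
    conn.commit()

sync_accounts()  # schedule every fifteen minutes via cron or similar
```

The real thing also reads the metadata so that schema changes flow through automatically; the sketch shows only the recurring data pull.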
Even without programming, many systems provide some means of extracting your data. For example, Salesforce provide a once-per-week CSV export you can download. If you don’t have an alternative mechanism, it is worth setting up a routine, with someone responsible for it, to take this data and copy it somewhere safe.
Online databases such as Amazon RDS or SimpleDB can be accessed easily enough through OLEDB connections or similar, or copies of the backups can be stored locally in a format that can be opened by alternative data stores.
No matter how you do it, the principle is important: you should have a fully tested means of accessing your data offline. The more mission critical the data, the more real-time the recoverability needs to be.
2. Think Differently
Steve Jobs’ passing reminded everyone of Apple’s Think Different campaign, but seriously, you need to think differently when it comes to the Cloud in order to leverage it successfully. It truly is different to anything we have seen, and if you are only seeing it as a cost mitigator or a means of outsourcing infrastructure, you are missing a lot of (pardon the pun) blue sky behind the Cloud. Social networking, crowdsourcing, ubiquity of device and location, Metcalfe’s law in general, scalability, the ability to fail fast and loosely coupled web services are all factors of the Cloud that lend themselves to being different.
One example is the way Salesforce lets you leverage the power of Twitter and Facebook. You record people’s Twitter and Facebook details against their records, and if they tweet or post something with a given hashtag, the system is watching: it can automatically create a case, assign it to a support officer who can find a solution, link the solution, and automatically tweet back a response with a link.
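A rough sketch of that pipeline, using Twitter’s v2 recent-search endpoint and the third-party simple_salesforce library; the hashtag, credentials and field mapping are invented for illustration, and the assignment and reply steps are left to workflow rules:

```python
import requests
from simple_salesforce import Salesforce  # assumed third-party library

BEARER = "YOUR_TWITTER_BEARER_TOKEN"      # placeholder credential
sf = Salesforce(username="user@example.com",
                password="password",
                security_token="token")

# Find recent tweets carrying the (hypothetical) support hashtag.
resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    params={"query": "#acmesupport", "tweet.fields": "author_id"},
    headers={"Authorization": f"Bearer {BEARER}"},
)
resp.raise_for_status()

for tweet in resp.json().get("data", []):
    # One support case per matching tweet; assignment to an officer and the
    # automated reply would hang off workflow rules in the live system.
    sf.Case.create({
        "Subject": f"Tweet from user {tweet['author_id']}",
        "Description": tweet["text"],
        "Origin": "Twitter",  # assumes "Twitter" is a configured case origin
    })
```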
Another example is the way captchas are being used to get the masses to perform optical character recognition on historical documents that are too poor for a machine to read. The system uses a known control word to determine whether you are human, and poses a second word whose reading is not known. The results are compared against those entered by others who have received the same word – a high correlation between results from different users indicates what the text is likely to be.
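The correlation step is easy to picture in code – a minimal sketch of majority voting over independent readings of the same unknown word, with an assumed agreement threshold:

```python
from collections import Counter

def consensus(transcriptions, threshold=0.75):
    """Return the agreed reading of an unknown word, or None if the
    independent answers do not correlate strongly enough."""
    if not transcriptions:
        return None
    word, votes = Counter(transcriptions).most_common(1)[0]
    return word if votes / len(transcriptions) >= threshold else None

# Five users solved the same captcha; four readings agree.
print(consensus(["lighthouse", "lighthouse", "lighthouse",
                 "lighthovse", "lighthouse"]))   # -> lighthouse
```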
A third example comes from my own use of the Amazon EC2 platform to test some ideas for a new database design that enabled end users to change the structure of the database without programming, rather like the way Salesforce allows end users to define custom objects. The test was in two parts. The first, which was easy to test, was whether it could handle more than a billion records. The second, a little more difficult, was whether it could handle one thousand simultaneous users on cheap virtual hardware. For this I needed a simulation that ran across eleven machines. Traditionally I would have had to acquire those eleven machines and set them up – an expensive and time-consuming exercise. Using Amazon EC2, I was able to set up the machines from scratch in thirty minutes, run my tests in three hours, and then analyse the results. Total cost? Less than five dollars.
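At the time this went through Amazon’s early tooling; purely as a flavour of how little ceremony is involved, here is a sketch using today’s boto3 SDK, with a placeholder AMI assumed to have the test harness baked in:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Launch eleven identical load-generator machines in a single call.
fleet = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder AMI with the test harness installed
    InstanceType="t3.micro",
    MinCount=11,
    MaxCount=11,
)
ids = [i["InstanceId"] for i in fleet["Instances"]]
print("launched:", ids)

# ... run the simulation, gather the results ...

# Terminate everything so the meter stops running.
ec2.terminate_instances(InstanceIds=ids)
```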
There are plenty of ways the Cloud can transform how you do business if you allow it. Get your sales team to focus on the harder sells while the Cloud is engineered around a marketing-automation experience that handles all the low-hanging fruit. The Cloud itself, if you configure it correctly, will tell you where the low-hanging fruit are.
3. Make sure your systems’ interactions are atomic
One of the opportunities of Cloud-based systems is that you can build compelling processes out of tools from a number of vendors’ systems working together – linking your CRM to your financials, or your website to marketing automation and analytics, for example. While these may seem obvious examples, the point here is that when multiple systems are involved, we need to think about how to prevent a situation where only part of a process succeeds. This is a much more common problem when different types of systems are talking to each other. So make sure you are not telling the customer that their request for information has been placed in a queue unless you know for sure that the request has been placed in a queue.
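A minimal sketch of that discipline, assuming Amazon SQS via the boto3 SDK (the queue URL is a placeholder): the customer is told the request is queued only after the queue has confirmed receipt.

```python
import boto3

sqs = boto3.client("sqs", region_name="ap-southeast-2")
QUEUE_URL = "https://sqs.ap-southeast-2.amazonaws.com/123456789012/requests"  # placeholder

def submit_request(payload: str) -> str:
    """Enqueue the customer's request and report success only once confirmed."""
    try:
        resp = sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=payload)
    except Exception:
        # The enqueue failed: never tell the customer otherwise.
        return "Sorry, we could not accept your request. Please try again."
    # send_message returned a MessageId, so the queue really does hold it.
    return f"Your request has been queued (ref {resp['MessageId']})."
```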
4. Start with Upside, not Downside
When I first started looking at Cloud concepts about six years ago, I was looking with the eyes of a sceptic and asking the question “What can’t I do if I adopt this approach?” Taking this view, I found plenty of things I didn’t think I could do, and the thinking led me to see restrictions and obstacles. Once I started to ask the contrary question – “What can I do if I adopt this approach?” – I started to see all sorts of opportunities emerge. I understand from Salesforce that I was possibly the first person in the world to see their CRM product as a business platform rather than a CRM product. This led to building all sorts of systems within Salesforce, including purchase requisitioning, customer software licensing, and electronic production management with automated QA built in and tested on the finished manufactured products (with the results of the tests stored against each product and displayed to the end user when he or she finally purchased the product and plugged it into a computer). Other systems included Human Resources systems with annual leave management, individual development plans and hierarchical cost management for each line manager, who could also see things like who had the most leave accrued in the team.
Thinking of what is possible also leads to being able to try things experimentally with a “fail-fast” attitude. The eleven-machine test described above is a case in point. Being able to put ideas into practice quickly makes all sorts of innovative approaches viable that might otherwise be ignored or sidestepped as pipe dreams.
With traditional approaches, a startup may need to architect the business for its first generation of clients. As the numbers grow, a different architecture may be required, or investment may be needed in infrastructure just in case growth occurs. One of the risks for any business that grows too quickly is running out of liquid cash. All this can be very limiting in an entrepreneur’s thinking, with a real chance that the fear of succeeding too quickly causes them to underperform. The Cloud often allows an architecture to scale far further than traditional approaches, with the ability to consume infrastructure and related services as required – scaling rapidly up and then, if necessary, rapidly back down again. Traditional models require risky investments; Cloud models are far more flexible. And that allows for more optimistic thinking.
5. Check what API options are available
Most mainstream cloud vendors, whether they offer Software as a Service, Infrastructure as a Service or Platform as a Service, will have some sort of API that enables you to read and write data, change metadata, set permissions and so on. This is important if you want to truly leverage the power that is available to you. For example, you can use Amazon’s Simple Notification Service and Simple Queue Service to provide asynchronous connections between systems – say, to notify managers when a VIP customer’s representative has mentioned your company in a tweet. Having a rich API in your bag of tricks enables you to innovate with freedom, seeing the Cloud as one Cloud rather than a disparate set of products offered by a host of different vendors.
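For instance, a sketch of the tweet-alert idea using the boto3 SDK to publish to an SNS topic that managers subscribe to (the topic ARN is a placeholder):

```python
import boto3

sns = boto3.client("sns", region_name="ap-southeast-2")
TOPIC_ARN = "arn:aws:sns:ap-southeast-2:123456789012:vip-mentions"  # placeholder

def alert_managers(customer: str, tweet_url: str) -> None:
    """Fan the alert out to every manager subscribed to the topic
    (email, SMS, or an SQS queue feeding another system)."""
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject=f"VIP mention: {customer}",
        Message=f"{customer} mentioned us on Twitter: {tweet_url}",
    )
```

The point is less the specific service than the pattern: with a rich API, any event in one system can become a trigger in another.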
6. Seek to understand the inner workings of the vendors’ various risk mitigation strategies
This is something I was guilty of in the early days. I used to say, “These guys know better, so you can trust them to make sure your data is safe.” Recent events have made me a little more open-eyed about the inner workings. If you are not sure how your data is being backed up, ask. Imagine having to satisfy your auditor about the safety of your data. Imagine having to satisfy your customer that their data is safe, secure and reliably stored. If you don’t know yourself what steps are being taken to guarantee the preservation of the data, you won’t be able to tell them, and you will come across poorly.
I have written an earlier post about an Australian ISP that collapsed after an attack took out the server holding all of their clients’ websites. They had no offsite backup. Recently Salesforce, one of the most respected companies in the space, had two outages that lost the customer data on the affected sandboxes, which were down for several days. Amazon had a well-publicised outage earlier in the year that brought into question the way their system handled mass failure: separate zones, designed to remain up when others failed, went down simply due to the overload caused by the failure of one. These failures – at least the Salesforce and Amazon ones cited – have resulted in those companies making changes, but an astute customer robustly challenging the methods may well have picked the weaknesses up before a major problem occurred.
7. Remember, it’s your data, and the buck still stops with you
I wrote a post at the time of the major Amazon outage that was picked up by CIO Magazine. Several companies hosting their data on Amazon Web Services were posting during the outage as if they were innocent bystanders observing the fallout. The reality is that if your services are down, it is your responsibility, no matter how you host them. Imagine an airline that lost an aircraft saying, “Oops, luckily we outsourced the maintenance on that plane or else it would have looked really bad for us, LOL!” I don’t think so.
Remember, it is your data and you are entitled to it, and you are responsible for its availability and its security.
Some Musings after a Talk to Cloud First-timers
I have given many talks in recent years about my experiences in pioneering cloud applications. I have spoken at events ranging from C-level round tables to professional seminars to large vendor events with more than 5,000 people attending. I have had press conferences with fifty or so journalists. One thing that I find interesting is that the nature of the questions I get asked has changed over that time as an increasing number of people are becoming cloud-savvy.
In the early days, almost every question was about security, privacy and data sovereignty. More recently the questions have been more technical in nature – how to implement, how to handle change management, legal issues around service level agreements, and so on.
So it came as a bit of a surprise to speak last week to a room of 150 people almost completely new to cloud at an event held at Google’s Sydney offices. The questions were quite mixed, but they all had one thing in common: the audience hadn’t realised that Cloud computing is different from the way they currently do things.
After years of doing interesting things with the various technologies on offer, it is easy to become complacent about just how radical a difference Cloud computing can make to a business prepared to see it as an opportunity for real change. So the opportunity to share some basics with this audience was exciting and fresh. They thought Google Apps would bring them a different mail server. I showed them how it is fundamentally different from in-house approaches: not just an outsourced mail server, but an opportunity to collaborate and move around untethered.
The freedom to innovate, the freedom to explore, the freedom to dream.
I found it really exciting to see them starting out on this journey that has changed so much.
CIOs, Systems Designers: Users Have to Have More Say…
Long gone are the days when software implementers could foist arcane or cumbersome software onto users. While some businesses still develop specific vertical products for all sorts of business purposes, the reality is that a vast number of systems can be replaced by generic tools that feel natural and extend the utility of the typical user in ways that are almost impossible to foresee without witnessing crowd action. Synergies emerge when a system is ubiquitously adopted across specialisations and functions. Perhaps people can react more quickly to emerging trends, perhaps knowledge is more easily accessed, perhaps the customer experience is so greatly enhanced that customers evangelise and become disciples.
One thing we have learned from the emergence of social media tools is that building applications inside or around frameworks like Facebook, Chatter and Twitter has remarkable spin-offs that are difficult to predict.
Things I Want to See – 2. Salesforce Page Layouts with Multiple Related Lists per Object
One of the beautiful things about Salesforce is the ability to create or modify an object’s structure with defined relationships, permissions, application contexts, business rules and page layouts.
Think about it for a second: how many frameworks do you know of that enable you to modify the data schema and automatically set:
- Relationships between objects;
- Indexes;
- Cardinality rules (definitions of how objects relate to each other in terms of how many of one can be related to how many of another);
- Business rules (what fields are mandatory, what fields are dependent, default values, what fields are read-only or even visible for certain users, which fields must be unique);
- Referential Integrity rules (which records will be deleted when a parent is deleted);
- A User Interface, even one that can be different for each user profile;
- Application context (which objects belong together to form a sub-application);
- Access to reports; and
- A Notification engine that can share changes with subscribers or record owners, or handle task assignments.
And all with a point-and-click interface – no programming required (unless you want to), and all with defaults that allow you to get the job done quickly. Very quickly.
Things I Want to See – 1. True Cloud-based Email
This article is the first in a series of articles looking at changes/improvements I would like to see happen. You will find them categorised under the category “Things I Want to See”, and also filed under specific vendors where appropriate.
An increasing number of people are coming to understand intuitively the difference between traditional peer-to-peer document sharing and cloud-based sharing. In the traditional mode, multiple instances of a document exist – at least one on each client machine. You know the drill: you attach a document to an email, the recipient opens the attachment, edits it, saves it and then attaches the saved new version to a new email and sends it back. Before long there are multiple copies of the document and it can be difficult to know how the document evolved. With several people involved, it can even be difficult to know which version of the document is the current one. There may not even be one single latest version, as two people may edit two different earlier versions at once. Stitching these all back into a master document is not easy.
A lot of tools have been developed to simplify the potentially incredibly complex task of managing all these document versions. But the cloud provides a simpler way, by fundamentally only having one document location. So instead of linking people to people, you link people to documents and the problem elegantly goes away:
[Diagrams: Traditional Document Sharing versus Cloud-based Document Sharing – people linked to each other passing document copies around, versus people linked to a single shared document.]