Transitioning to the Cloud
Today I am presenting the same talk I gave at the CeBIT Cloud 2012 conference in Sydney. Entitled “Transitioning to the Cloud”, the presentation covers three areas:
- If you want to transition your business to the cloud you need to Think Cloud – Cloud, as I see it, is as much a state of mind as a technology, and you need to embrace this thinking to make full use of its potential;
- Some examples from my personal experience of using some of the large Cloud providers’ offerings, and why they are more than what they superficially appear to be; and
- Some tips on adopting the Cloud (covered in an earlier post)
My Top 7 Tips for Going to the Cloud
A lot of people ask me for advice on the most important things to consider when moving a business into the Cloud. So here are some of the things that I think business people need to keep in mind when going to the Cloud:
1. Make sure you know how to get your data out again
Often people think about how they are going to put their data into the Cloud. If they are using Software as a Service – Salesforce, Netsuite, Intacct, Clarizen, or Google Apps for that matter – they will be thinking about how to get their data into a shape that can go into the system. The documentation for these systems makes clear reference to how to prepare and then import the customer’s data, and there are usually consultants who can assist with the process. Typically this process is well planned, but often little thought is given to how exactly you go about extracting the data again in a way that is of value to you going forward. Often, lip service is paid to the issue by asking questions like “can I get a backup of my data?”, and a reassuring yes is provided to the now comforted prospective customer. It is one thing to be told it can be done, but you need to check that the data actually comes out in a format that is useful to you. And if the system is mission critical, it needs to be not just useful but readily convertible for immediate use.
Some of the things I have done to ensure that my data is safe include writing programs that automatically read the data for updates every fifteen minutes and write them into a relational database hosted separately, replicated both in house and in the Cloud. All customisations are programmatically managed so that the relational database copy always reflects the structure of the live system. For example, I did this from Salesforce, where more than 300 custom objects had been created. Another example is to write a program that knows how to extract all the data from a system, such as an accounting system, using the API provided. Until you have tangibly proven that you can get your data into a format you can actually use, having access to a copy of it is meaningless.
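To make that concrete, here is a minimal sketch of the kind of incremental extraction job described above, assuming the simple-salesforce Python library, a local SQLite store and a single standard object; a real job would walk every object, including custom ones, and be driven by the live metadata.

```python
# A minimal sketch (not the author's actual program): poll Salesforce for
# recently modified Accounts and mirror them into a local SQLite database.
# Assumes the simple-salesforce library and valid credentials; a real job
# would cover every object, including custom ones, and run on a schedule
# (e.g. every fifteen minutes).
import sqlite3
from datetime import datetime, timedelta, timezone

from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...", security_token="...")

db = sqlite3.connect("offsite_copy.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS account (id TEXT PRIMARY KEY, name TEXT, modified TEXT)"
)

# Pull anything touched within the polling interval.
since = (datetime.now(timezone.utc) - timedelta(minutes=15)).strftime("%Y-%m-%dT%H:%M:%SZ")
result = sf.query(
    f"SELECT Id, Name, LastModifiedDate FROM Account WHERE LastModifiedDate > {since}"
)

for rec in result["records"]:
    db.execute(
        "INSERT OR REPLACE INTO account (id, name, modified) VALUES (?, ?, ?)",
        (rec["Id"], rec["Name"], rec["LastModifiedDate"]),
    )
db.commit()
```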
Even without programming, many systems provide some means of extracting your data. For example, Salesforce provides a weekly CSV export you can download. If you don’t have an alternative mechanism, it is worth setting up a routine, with someone responsible for it, to take this data and copy it somewhere safe.
Online databases such as Amazon RDS or SimpleDB can be accessed easily enough through OLE DB connections or similar, or copies of the backups can be stored locally in a format that can be opened by alternative data stores.
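As a hedged sketch of that second route, here is how a table in an Amazon RDS MySQL instance might be dumped to a local CSV using a standard Python connector rather than OLE DB; the host, credentials, database and table names are all placeholders.

```python
# A minimal sketch: copy a table from an Amazon RDS (MySQL) instance into a
# local CSV file using a standard Python connector. Hostname, credentials,
# database and table names are all placeholders.
import csv

import pymysql

conn = pymysql.connect(
    host="mydb.abc123xyz.ap-southeast-2.rds.amazonaws.com",  # placeholder endpoint
    user="report_user",
    password="...",
    database="crm",
)

with conn.cursor() as cur, open("customers_backup.csv", "w", newline="") as out:
    cur.execute("SELECT * FROM customers")
    writer = csv.writer(out)
    writer.writerow([col[0] for col in cur.description])  # column headers
    writer.writerows(cur.fetchall())

conn.close()
```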
No matter how you do it, the principle here is important: you should have a fully tested means of accessing your data offline. The more mission critical the data, the more real-time the recoverability needs to be.
2. Think Differently
Steve Jobs’ passing reminded everyone of the Apple Think Different campaign, but seriously, you need to think differently when it comes to the Cloud in order to leverage it successfully. It truly is different to anything we have seen, and if you are only seeing it as a cost mitigator or a means of outsourcing infrastructure, you are missing a lot of (pardon the pun) blue sky behind the Cloud. Social networking, crowdsourcing, ubiquity of device and location, Metcalfe’s law in general, scalability, the ability to fail fast, and loosely coupled web services are all factors of the Cloud that lend themselves to being different.
One example is the way Salesforce enables you to leverage the power of Twitter and Facebook: you can record people’s Twitter and Facebook details against their record, and if they tweet or post something with a given hashtag, the system is watching and can automatically create a case for them, assign it to a support officer who can find a solution, link the solution, and have the system automatically tweet them a response with a link.
Another example is the way captchas are being used to get the masses to perform optical character recognition on historical documents that are too poor for a machine to read. The system uses a known control word to determine whether you are human or not and poses a second one that is not known. The results are compared against the results entered by others who have received the same word – a high correlation between results from different users indicates what the text is likely to be.
A third example comes from my own use of the Amazon EC2 platform to test some ideas concerning a new database design that enabled end users to change the structure of the database without programming, rather like the way Salesforce allows end users to create custom objects. The test was in two parts. The first, which was easy to test, was whether it could handle more than a billion records. The second, a little more difficult, was whether it could handle one thousand simultaneous users on cheap virtual hardware. For this test I needed a simulation that ran across eleven machines. Traditionally I would have needed to acquire those eleven machines and set them up – an expensive and time consuming exercise. Using Amazon EC2, I was able to set up the machines from scratch in thirty minutes, run my tests in three hours, and then analyse the results. Total cost? Less than five dollars.
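For a sense of how little ceremony this takes, here is a minimal sketch using boto3, the current AWS SDK for Python (not the tooling I used at the time); the AMI ID, key pair and instance type are placeholders.

```python
# A minimal sketch: launch eleven identical test machines on EC2 and tear
# them down afterwards. The AMI ID, key pair and instance type are
# placeholders; the AMI is assumed to have the test harness baked in.
import boto3

ec2 = boto3.resource("ec2", region_name="ap-southeast-2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",          # cheap virtual hardware is the point
    MinCount=11,
    MaxCount=11,
    KeyName="load-test-key",          # placeholder key pair
)

# ... run the simulation against the fleet here ...

# Terminate the fleet so the meter stops running.
for instance in instances:
    instance.terminate()
```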
There are plenty of ways the Cloud can transform how you do business if you allow it. Get your sales team to focus on the harder sells while the Cloud, engineered around a marketing automation experience, nurtures all the low hanging fruit. The Cloud itself, if you configure it correctly, will tell you where the low hanging fruit are.
3. Make sure your systems’ interactions are atomic
One of the consequences of having Cloud-based systems is that you can build compelling processes out of tools from a number of vendors working together – linking your CRM to your financials, or your website to marketing automation and analytics, for example. While these may seem obvious examples, the point being made here is that when multiple systems are involved we need to think about how to prevent a situation where only part of a process succeeds. This is a much more common problem when different types of systems are talking to each other. So make sure you are not telling the customer that their request for information has been placed in a queue unless you know for sure that the request really has been placed in a queue.
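A minimal sketch of that last point, using Amazon SQS via boto3 purely as a stand-in for whatever queue your systems use (the queue URL is a placeholder): only make the promise once the enqueue has actually succeeded.

```python
# A minimal sketch: only tell the customer their request has been queued once
# the queue has actually accepted it. Uses Amazon SQS via boto3 as an example;
# the queue URL is a placeholder.
import boto3
from botocore.exceptions import ClientError

sqs = boto3.client("sqs", region_name="ap-southeast-2")
QUEUE_URL = "https://sqs.ap-southeast-2.amazonaws.com/123456789012/info-requests"

def submit_request(customer_name: str, payload: str) -> str:
    try:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=payload)
    except ClientError:
        # The enqueue failed, so do not pretend otherwise.
        return "Sorry, we could not take your request right now. Please try again."
    # Only now is it safe to make the promise.
    return f"Thanks {customer_name}, your request has been queued."
```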
4. Start with Upside, not Downside
When I first started looking at Cloud concepts about six years ago I was looking with the eyes of a sceptic, and I was asking the question “What can’t I do if I adopt this approach?” By taking this kind of view I found there were plenty of things I didn’t think I could do, and this thinking led me to see restrictions and obstacles. Once I started to ask myself the rather contrary question “What can I do if I adopt this approach?”, I started to see all sorts of opportunities emerge. I understand from Salesforce that I was possibly the first person in the world to see their CRM product as a business platform rather than just a CRM product. This led to building all sorts of systems within Salesforce, including purchase requisitioning, customer software licensing, and electronic production management systems with automated QA built in and tested on the finished manufactured products (with the results of the tests stored against each product and displayed to the end user when he or she finally purchased the product and plugged it into a computer). Other systems included Human Resources systems with annual leave management, individual development plans and hierarchical cost management for each line manager, who could also see things like who had the most leave accrued in the team.
Thinking of what is possible also leads to being able to try things experimentally with a “fail-fast” attitude. The eleven-machine test described above is a case in point. Being able to put ideas into practice quickly makes viable all sorts of innovative approaches that might otherwise be ignored or sidestepped as pipe dreams.
In traditional approaches, a startup may need to architect the business for the first generation of clients. As the numbers grow, a different architecture may be required, or investment may be needed in infrastructure just in case growth occurs. One of the risks of any business that grows too quickly is running out of liquid cash. All this can be very limiting in an entrepreneur’s thinking, with a real chance that the fear of succeeding too quickly causes them to underperform. The Cloud often allows an architecture to scale far further than traditional approaches, with the ability to consume infrastructure and related services as required, scaling rapidly up, and then if necessary, scaling rapidly back down again. Traditional models require risky investments; Cloud models are far more flexible. And this allows for more optimistic thinking.
5. Check what API options are available
Most mainstream cloud vendors, whether they offer Software as a Service, Infrastructure as a Service or a Platform as a Service, will have some sort of API that enables you to read and write data, change metadata, set permissions and so on. This is important if you want to truly leverage the power that is available to you. For example, you can use Amazon’s Simple Notification Service and Simple Queue Service to provide asynchronous connections between systems – say, to notify managers when a VIP customer representative has mentioned your company in a tweet. Having a rich API in your bag of tricks enables you to innovate with freedom, seeing the Cloud as one Cloud rather than a set of disparate products offered by a host of different vendors.
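As a hedged sketch of that example: a small watcher, having detected the tweet by whatever means, could publish an alert to an SNS topic that the managers subscribe to. The topic ARN is a placeholder.

```python
# A minimal sketch: publish an alert to an Amazon SNS topic when a VIP
# customer mentions the company in a tweet. Detecting the tweet is out of
# scope here; the topic ARN is a placeholder, and managers would subscribe
# to the topic by email, SMS or a queue.
import boto3

sns = boto3.client("sns", region_name="ap-southeast-2")
TOPIC_ARN = "arn:aws:sns:ap-southeast-2:123456789012:vip-mentions"  # placeholder

def alert_managers(vip_name: str, tweet_text: str) -> None:
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject=f"VIP mention: {vip_name}",
        Message=tweet_text,
    )
```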
6. Seek to understand the inner workings of the vendors’ various risk mitigation strategies
This is something I was guilty of in the early days. I used to say “these guys know better, so you can trust them to make sure your data is safe”. Recent events have made me a little more open-eyed about the inner workings. If you are not sure how your data is being backed up, ask. Imagine having to satisfy your auditor about the safety of your data. Imagine having to satisfy your customers that their data is safe, secure and reliably stored. If you don’t know yourself what steps are being taken to guarantee the preservation of the data, you won’t be able to tell them, and you will come across poorly.
I have written an earlier post about an Australian ISP that collapsed after an attack took out the server holding all of their clients’ websites. They had no offsite backup. Recently Salesforce, one of the most respected companies in the industry, had two outages on sandboxes that caused the loss of customer data on those sandboxes, and the affected environments were down for several days. Amazon had a well publicised outage earlier in the year that brought into question the way their system handled mass failure: separate zones, designed to remain up when others failed, went down simply due to the overload caused by the failure of one. These failures, or at least the Salesforce and Amazon ones cited, have resulted in those companies making changes, but an astute customer robustly challenging the methods might well have picked the weaknesses up before a major problem occurred.
7. Remember, it’s your data, and the buck still stops with you
I wrote a post at the time of the major Amazon outage that was picked up by CIO Magazine. Several companies hosting their data on Amazon Web Services were posting during the outage as if they were innocent bystanders observing the fallout. The reality is that if your services are down it is your responsibility, no matter how you host them. Imagine an airline losing an aircraft and saying “oops, luckily we outsourced the maintenance on that plane or else it would have looked really bad for us LOL!”. I don’t think so.
Remember, it is your data, you are entitled to it, and you are responsible for its availability and its security.
CIOs, Systems Designers: Users Have to Have More Say…
Long gone are the days when software implementers could foist arcane or cumbersome software onto users. While some businesses still develop specific vertical products for all sorts of business purposes, the reality is that a vast number of systems can be replaced by generic tools that feel natural and extend the utility of the typical user in ways that are almost impossible to foresee without witnessing crowd action. Synergies will emerge when a system is ubiquitously adopted across specialisations and across functions. Perhaps people will be able to react more quickly to emerging trends, perhaps knowledge will be more easily accessed, perhaps the customer experience will be so greatly enhanced that customers evangelise and become disciples.
One thing we have learned from the emergence of social media tools is that building applications inside or around frameworks like Facebook, Chatter and Twitter has remarkable spin-offs that are difficult to predict.
Things I Want to See – 2. Salesforce Page Layouts with Multiple Related Lists per Object
One of the beautiful things about Salesforce is the ability to create or modify an object’s structure with defined relationships, permissions, application contexts, business rules and page layouts.
Think about it for a second: how many frameworks do you know of that enable you to modify the data schema and automatically set:
- Relationships between objects;
- Indexes;
- Cardinality rules (definitions of how objects relate to each other in terms of how many of one can be related to how many of another);
- Business rules (what fields are mandatory, what fields are dependent, default values, what fields are read only or even visible for certain users, which fields must be unique);
- Referential Integrity rules (which records will be deleted when a parent is deleted);
- A User Interface, even one that can be different for each user profile;
- Application context (which objects belong together to form a sub-application);
- Access to reports; and
- A Notification engine that can share changes with subscribers or record owners, or handle task assignments.
And all with a point-and-click interface – no programming required (unless you want to), and all with defaults that allow you to get the job done quickly. Very quickly.
Focus on the Vision, not the Means
“Knowledge is a single point, but the ignorant have multiplied it.”
(Baha’u’llah: Seven Valleys and Four Valleys, Page 25)
When we don’t really understand something, we see division, we see dichotomy. We see the things that differentiate and we home in on them, creating opportunities by exploiting these differences, and in so doing we limit our thinking, our judgement, our potential. We become experts and protect that expertise by making it difficult for others to gain the knowledge we have. Knowledge is power; having more knowledge than others gives us an advantage.
It usually takes one visionary person to challenge the basic assumptions that lead to these differences, and when that happens, entirely new vistas open to us, empowering those who were shut out by providing access to the knowledge or exposing the differences as being false divisions, false barriers to entry.
Computers are like this. In the very early days, only people trained in the arcane would be able to (or want to) access a computer. A computer operator had to be able to read punched paper tape, write in binary, then assembler, then Fortran. Screens and keyboards made computers more accessible, and then graphical user interfaces hid much of the complexity.
Programmers have been able to work with increasingly high abstractions, but still we haven’t really been able to get away from the need to be able to program, or to purchase tools that hide this from us – tools that automatically do backups, convert file formats, transfer data, dial the phone, send communiqués or whatever.
This seems to be changing very quickly – increasingly it is becoming possible for people to choose to configure existing systems rather than being forced to find a programmatic solution.
What is interesting here is the trap this represents for some people on both sides of the fence – those that understand how to program and those that don’t. Clearly the people who focus on the end objective, rather than the means of getting there, will adapt as technology becomes increasingly available to non-programmers. These outcome-oriented people have a distinct advantage.
Those who only see the barriers will continue to use old methods. End users will remain in fear of the unknown, while programmers will continue to look for programmatic solutions, even when both are presented with tools that can get the job done without code.
Cloud Computing makes it easier to facilitate the kind of advances described here, advances that empower end users to achieve change without programmers. This is because Platforms and Software delivered as a Service typically mean there is only one version of the platform or software in use by everyone – it is literally impossible for anyone to get left behind. Vendors can afford to cater to the lowest common denominator because it is worth their while. Salesforce.com is a great example of this.
The bottom line: those who focus on the technology will be left behind, stuck in a world where we were slaves to technobabble. Those who focus on what they want to do will realise the rules have changed and will be astonished at just how far they can take their vision without breaking a sweat.
Where next for Salesforce Chatter? My two cents…
Salesforce has released an internal collaboration tool that beautifully leverages the power of the cloud. When I first saw Chatter I was excited by the ability of people to subscribe to objects – more on that later. What surprised me was how many people rave about how good it is. Mostly, from what I can see, people seem to use it for person-to-person communication, and for this it has some interesting possibilities.
For example, sales teams are able to share tips or presentations they have done in some vertical industry so that other sales reps facing a challenge in that industry can learn from their experience. “Selling to an ENTJ? No problem, here is my approach.”
Marc Benioff recently spoke about the question that drove him to start Salesforce.com – why aren’t enterprise applications more like Amazon? – and how that question has now evolved into: why aren’t enterprise applications more like Facebook? Hearing this, I realised something important about all these new ways of collaborating – Ning, Facebook, Twitter, Yammer, WordPress and many others. They all allow people to publish a range of information about various facets of their lives, and they allow people to subscribe to those facets. Take Facebook as an example. People choose to post information about their social lives in the form of photos taken at parties, upcoming events, social news and so on. People choose to participate in various games where they grow virtual pets or plants and collaborate. People choose to support various causes. As publishers, each of us chooses to display all sorts of things in the hope that someone will find them interesting or valuable. As subscribers, we each choose to set up an antenna to learn what a particular person is saying, or what is happening relating to a particular favourite topic, perhaps a musician, perhaps a company.
This notion of publishing and subscribing is core to Chatter, but to see it as merely a closed-circuit means of publishing and subscribing to information from human beings is to miss the mark somewhat, because Chatter also allows business objects to participate in the free flow of information. Human beings can choose to subscribe to certain objects.
So let’s say you are a sales manager managing a team of five field sales executives. You want to know how they are progressing on a number of important opportunities, perhaps a dozen in all. You can subscribe to those opportunities directly: a feed is provided to you, and those opportunities will place information into your feed to let you know something has changed about them – perhaps the close date has moved, or the probability of closure, or the amount of the opportunity has been modified. This puts you very close to the action, so you know what is going on.
This is all available today, and it is not limited to opportunities – what about an important, complex case your team is working on for a VIP customer? You can subscribe to the case to receive information about its status. Even custom objects can participate in this world.
This is all well and good, but in my opinion there are two capabilities Chatter needs to make it truly compelling: metadata-based subscription and non-event subscription.
“Metawhat??”, I hear you say? Metadata is data that describes the shape of your data – let me give a few practical examples. Currently, Chatter requires you to subscribe to specific records, for example Opportunity number 123456. You look at all your data and you choose which records will interest you. But how much more powerful would it be if you could automatically subscribe to records based on preconfigured parameters? Here are some illustrative examples (a sketch of how the first might be scripted today follows the list):
- You want to automatically subscribe to every opportunity worth more than $50000 owned by someone who reports to you.
- You want to automatically subscribe to every opportunity for a customer who has never purchased anything from you.
- You want to automatically subscribe to any cases logged by any VIP customer with platinum support where the renewal contract is due within three months.
- You want to automatically subscribe to opportunities where the amount is more than one standard deviation above last month’s average closed won price, is managed by one of your team members, and the customer is a new logo.
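None of this exists as a native Chatter feature, but a crude approximation of the first rule can already be scripted against the API. Here is a minimal, hedged sketch assuming the simple-salesforce Python library and the EntitySubscription object that stores Chatter follows; the manager’s user ID and the $50,000 threshold are placeholders.

```python
# A minimal sketch: auto-follow every open opportunity worth more than
# $50,000 owned by someone who reports to a given manager. Assumes the
# simple-salesforce library and the EntitySubscription object that stores
# Chatter follows; the manager's user ID is a placeholder.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...", security_token="...")
MANAGER_ID = "005000000000001AAA"  # placeholder user ID

big_deals = sf.query(
    "SELECT Id FROM Opportunity "
    "WHERE Amount > 50000 AND IsClosed = false "
    f"AND Owner.ManagerId = '{MANAGER_ID}'"
)

for opp in big_deals["records"]:
    # Creating an EntitySubscription record is how a follow is recorded.
    sf.EntitySubscription.create({
        "ParentId": opp["Id"],
        "SubscriberId": MANAGER_ID,
    })
```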
Another offering that I feel would take Chatter to a whole new level is for objects to be able to chatter about non-events. Imagine being able to ask an urgent case for an important customer to let you know if it hasn’t been touched for six hours, or a strategic opportunity to tell you it has not been updated for more than two days.
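Until the platform supports this natively, a scheduled script can approximate it. The sketch below assumes the simple-salesforce library, a six-hour threshold and a nudge posted as an ordinary Chatter feed item; these are illustrative choices rather than anything Salesforce prescribes.

```python
# A minimal sketch: find open high-priority cases that have not been touched
# for six hours and nudge their followers with a Chatter post. Assumes the
# simple-salesforce library; intended to run on a schedule.
from datetime import datetime, timedelta, timezone

from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...", security_token="...")

cutoff = (datetime.now(timezone.utc) - timedelta(hours=6)).strftime("%Y-%m-%dT%H:%M:%SZ")
stale_cases = sf.query(
    "SELECT Id, CaseNumber FROM Case "
    "WHERE Priority = 'High' AND IsClosed = false "
    f"AND LastModifiedDate < {cutoff}"
)

for case in stale_cases["records"]:
    # Post a feed item on the case so everyone following it sees the nudge.
    sf.FeedItem.create({
        "ParentId": case["Id"],
        "Body": f"Case {case['CaseNumber']} has not been touched for over six hours.",
    })
```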
These changes would make Salesforce Chatter very much more effective than it already is. Without them it is just another corporate collaboration tool.
In my next post I plan to talk about the next evolution of these concepts, something my company, Altium, is busy working towards – the Facebook of Devices, the Internet of Things, if you will. This is where we take the publisher subscriber model to an entire new level – devices intelligently collaborating with other devices.
What excited me about Salesforce when I first saw it
In thinking about writing for this blog, I was musing about the journey I have been on with using the cloud and was thinking about my early days with Salesforce.
When I first used Salesforce, there were two things that stood out for me as being incredibly powerful about the product and enabled me to see it as a potential business platform. These were the ability to define custom objects, with clearly defined relationships between them, and a customisable user interface – a user interface that was highly structured, yet flexible enough to drag and drop fields and sections around the page. Bear in mind this was nearly five years ago, and a lot of progress has been made since then.
I figured that with these features I would be able to use Salesforce as a complete business platform, but I ran into the first of many of the limits Salesforce has imposed on use of the platform. I will be writing more about these limits in a detailed post at a later point, but for now I want to talk about a limit that made me realise Salesforce really didn’t understand just how much potential they had to break the shackles of their CRM roots: you couldn’t have more than 25 custom tabs. I told them that 25 was nothing, and they asked me how many I could possibly want – 40? 50? I sent them a reply listing more than 90 specific tabs, which really got their attention.
Now I have more than 300 custom objects in our installation of Salesforce being used in all sorts of interesting areas. I intend to share some of these use cases over the coming weeks along with other stories using other cloud applications and platforms.
For now, I just want to plant a seed that the initial limits born out of the need to protect all clients in a multi-tenanted architecture place unnecessary restraints on people’s perceptions of what is possible. I intend to talk more about this concept as well.
The idea of runtime metacustomisation is such a powerful concept though. Born out of the idea that a single platform can be leveraged by not hundreds, not thousands, but by millions of users, it really excites me to think about how much leverage can be gained by abstracting more of the development into the background. Just like what happened when we went from DOS to Windows, and suddenly all of the work required to write drivers to support different monitors, printers and input devices became a thing of the past.
Plenty has excited me since, but it is hard to recapture that initial excitement when you know you are onto something special.