
Posts from the ‘Service Vendors’ Category

2 Jun

Focus on the Vision, not the Means

“Knowledge is a single point, but the ignorant have multiplied it.”
(Baha’u’llah: Seven Valleys and Four Valleys, Page 25)

When we don’t really understand something, we see division, we see dichotomy. We see the things that differentiate and we home in on them, creating opportunities by exploiting these differences, and in so doing we limit our thinking, our judgement, our potential. We become experts and protect that expertise by making it difficult for others to gain the knowledge we have. Knowledge is power; having more knowledge than others gives us an advantage.

It usually takes one visionary person to challenge the basic assumptions that lead to these differences, and when that happens, entirely new vistas open to us, empowering those who were shut out by providing access to the knowledge or exposing the differences as being false divisions, false barriers to entry.

Computers are like this. In the very early days, only people trained in the arcane would be able to (or want to) access a computer. A computer operator had to be able to read punched tape and write in binary, then assembler, then Fortran. Screens and keyboards made computers more accessible, and then graphical user interfaces hid much of the complexity.

Programmers have been able to work at increasingly high levels of abstraction, but still we haven’t really been able to get away from the need to program, or to purchase tools that hide the programming from us – tools that automatically do backups, convert file formats, transfer data, dial the phone, send communiqués or whatever.

This seems to be changing very quickly – increasingly it is becoming possible for people to choose to configure existing systems rather than being forced to find a programmatic solution.

What is interesting here is the trap this represents for people on both sides of the fence – those who understand how to program and those who don’t. Clearly the people who focus on the end objective, rather than the means of getting there, will adapt as technology becomes increasingly available to non-programmers. These outcome-oriented people have a distinct advantage.

Those who only see the barriers will continue to use old methods. End users will remain in fear of the unknown, while programmers will continue to look for programmatic solutions, even when both are presented with tools that can get the job done without code.

Cloud Computing makes it easier to facilitate the kind of advances described here – advances that empower end users to achieve change without programmers. This is because Platforms and Software delivered as a Service typically mean there is only one version of the platform or software in use by everyone, so it is practically impossible for anyone to get left behind. Vendors can afford to serve the lowest common denominator because it is worth their while. Salesforce.com is a great example of this.

The bottom line: those who focus on the technology will be left behind, stuck in a world where we were slaves to technobabble. Those who focus on what they want to do will realise the rules have changed, and will be astonished at just how far they can take their vision without breaking a sweat.

22 Apr

CIOs Moving to the Cloud: The buck still stops with you

Amazon Web Services has been going through a much-publicised outage which, by all appearances, has lasted more than 12 hours. A range of services including Hootsuite, Reddit, Heroku, Foursquare, Quora and others have faced major disruptions.

What is interesting is how the affected companies have positioned these outages: many have said EC2 is great, but it is having a bit of a problem at the moment. These providers appear to be taking the view: “Whew, glad we outsourced our stuff so it is clear this is not OUR fault, and we can point to another vendor to prove it wasn’t us – just imagine if we had done this on our own servers and this happened, we would have been much more at fault!”

Wrong.

Moving systems to the cloud does not remove the responsibility to mitigate mission-critical outages. If a business has a use case that cannot tolerate downtime, then that business needs to architect its solution in a way that prevents downtime. Cost tradeoffs are always an issue, but if something goes wrong and the cost of that failure is too high, then perhaps the service isn’t really feasible.

Imagine an airline cutting costs on safety in order to offer a cheap service… it doesn’t bear thinking about. Now imagine the airline outsourced its safety inspections to a third party and then washed its hands of responsibility in the event of a “downtime”. No one would buy that.

The whole point of the cloud is that it frees your thinking from dependence on any one provider. Even if you stick with an Amazon-only solution – or a Microsoft, Google, Salesforce or Rackspace one – you still need to architect things in a way that allows you to accept the consequences of any flaw, no matter how it is caused.

After all, you are the service provider to your customer base – how you decide to deliver that is up to you.

A lot of people are learning a very hard lesson at the moment – there are good ways and bad ways of doing things. For some, a 12-hour outage is hardly a problem, but for others it can ruin lives.

3 Dec

A brief response to those who criticize Cloud Vendors for warning about Private Cloud

There are many who scoff at people like Vogels and Benioff for making statements like ‘Beware the False Cloud’. Focusing on the naming aspect rather than the conceptual aspect is missing the point Benioff, Vogels et al are making.

I agree that it doesn’t matter much what you call it, but that is not their point. Their key point in warning against private cloud is that you forgo all of the real benefits: abstraction, information leverage, true scalability in both directions, and the expertise embodied in multi-tenant security platforms.

The term ‘cloud’ was adopted to demonstrate the fundamental differences gained by abstracting the hardware out of the picture.

DIY is DIY, and some things should not be tried at home – unless you REALLY have a phenomenal driver, of life-and-death importance, to segregate and isolate.

Until you have really experienced these cloud benefits it is difficult to understand this fundamental difference, and why it is seen as such an important distinction.

30 Sep

Where next for Salesforce Chatter? My two cents…

Salesforce has released an internal collaboration tool that beautifully leverages the power of the cloud. When I first saw Chatter I was excited by the ability of people to subscribe to objects – more on that later. What surprised me was how many people rave about how good it is. Mostly, from what I can see, people seem to use it for person-to-person communication, and for this it has some interesting possibilities.

For example, sales teams are able to share tips or presentations they have given in some vertical industry so that other sales reps facing a challenge in that industry can learn from their experience. “Selling to an ENTJ? No problem, here is my approach.”

When Marc Benioff recently recalled the question that drove him to start Salesforce.com (“Why aren’t more enterprise applications like Amazon?”), and how that question has now evolved into “Why aren’t more enterprise applications like Facebook?”, I realised something important about all these new ways of collaborating – Ning, Facebook, Twitter, Yammer, WordPress and many others. They all allow people to publish a range of information across various facets of their lives, and they allow people to subscribe to those facets.

Take Facebook as an example. People choose to post information about their social lives in the form of photos taken at parties, upcoming events, social news and so on. People choose to participate in games where they grow virtual pets or plants and collaborate. People choose to support various causes. As publishers, each of us chooses to display all sorts of things in the hope that someone will find them interesting or valuable. As subscribers, we each choose to set up an antenna to learn what a particular person is saying, or what is happening around a favourite topic – perhaps a musician, perhaps a company.

This notion of publishing and subscribing is core to Chatter, but to see it as merely a closed-circuit means of publishing and subscribing to information from human beings is to miss the mark somewhat, because Chatter also allows business objects to participate in the free flow of information. Human beings can choose to subscribe to certain objects.

So let’s say you are a sales manager managing a team of 5 field sales executives. You want to know how they are progressing on a number of important opportunities, perhaps a dozen in all. You can subscribe to those opportunities directly: a feed is provided to you, and these opportunities will place information into your feed to let you know something about them has changed – perhaps the close date, the probability of closure, or the amount of the opportunity. This puts you very close to the action so you know what is going on.

This is all available today, and it is not limited to opportunities – what about an important, complex case your team is working on for a VIP customer? You can subscribe to the case to receive information about its status. Even custom objects can participate in this world.

This is all well and good, but in my opinion there are two capabilities Chatter needs to become truly successful: metadata-based subscription and non-event subscription.

“Metawhat??”, I hear you say. Metadata is data that describes the shape of your data – let me give a few practical examples. Currently, Chatter requires you to subscribe to specific records, for example Opportunity number 123456. You look at all your data and choose which records interest you. But how much more powerful would it be if you could automatically subscribe to objects based on preconfigured parameters? Here are some illustrative examples (a rough sketch of how one of these might be approximated today follows the list):

  • You want to automatically subscribe to every opportunity worth more than $50,000 owned by someone who reports to you.
  • You want to automatically subscribe to every opportunity for a customer who has never purchased anything from you.
  • You want to automatically subscribe to any cases logged by any VIP customer with platinum support where the renewal contract is due within three months.
  • You want to automatically subscribe to opportunities where the amount is more than one standard deviation above last month’s average closed won price, is managed by one of your team members, and the customer is a new logo.
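Chatter offers nothing like this today, but here is a rough sketch of how the first rule might be approximated with a scheduled script. It uses the Python simple-salesforce library and EntitySubscription, the object Salesforce uses to record a Chatter follow; the credentials, ids and query below are hypothetical placeholders, not a supported feature:

```python
from simple_salesforce import Salesforce

# Hypothetical credentials and user id.
sf = Salesforce(username="me@example.com", password="...", security_token="...")
my_user_id = "005000000000001"

# Rule: every opportunity worth more than $50,000 owned by someone who reports to me.
rows = sf.query(
    "SELECT Id FROM Opportunity "
    f"WHERE Amount > 50000 AND Owner.ManagerId = '{my_user_id}'"
)
for rec in rows["records"]:
    # Creating an EntitySubscription follows the record, so its changes appear
    # in my Chatter feed. (A real job would skip records already followed.)
    sf.EntitySubscription.create({"ParentId": rec["Id"], "SubscriberId": my_user_id})
```

Run on a schedule, a handful of scripts like this could cover all four rules above – but having the platform do it declaratively is the real prize.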

Another important offering that I feel would take Chatter to a whole new level is allowing objects to chatter about non-events. Imagine being able to ask an urgent case for an important customer to let you know if it hasn’t been touched for six hours… or a strategic opportunity to tell you it hasn’t been updated for more than two days.
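Again, nothing like this exists in Chatter yet, but a scheduled job can approximate a non-event watcher. A minimal sketch, assuming simple-salesforce and placeholder credentials; FeedItem is the standard object behind Chatter posts:

```python
from datetime import datetime, timedelta, timezone
from simple_salesforce import Salesforce

# Hypothetical credentials.
sf = Salesforce(username="me@example.com", password="...", security_token="...")

# Find open high-priority cases untouched for six hours.
cutoff = (datetime.now(timezone.utc) - timedelta(hours=6)).strftime("%Y-%m-%dT%H:%M:%SZ")
stale = sf.query(
    "SELECT Id, CaseNumber FROM Case "
    f"WHERE Priority = 'High' AND IsClosed = false AND LastModifiedDate < {cutoff}"
)
for case in stale["records"]:
    # A FeedItem posted to the record shows up for everyone following it.
    sf.FeedItem.create({
        "ParentId": case["Id"],
        "Body": f"Case {case['CaseNumber']} has not been touched for six hours.",
    })
```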

These changes would make Salesforce Chatter far more effective than it already is. Without them, it is just another corporate collaboration tool.

In my next post I plan to talk about the next evolution of these concepts, something my company, Altium, is busy working towards – the Facebook of Devices, the Internet of Things, if you will. This is where we take the publisher-subscriber model to an entirely new level: devices intelligently collaborating with other devices.

17 Sep

Amazon Web Services Part 2 – Scaling Services Provided

Here is the second of three posts on Amazon’s Web Services. The first post provided a look at the key foundational services, and the next will talk about how Altium is leveraging Amazon’s offerings. In this post, as promised, I will look at some of the ways Amazon takes elasticity to the extreme, through a range of services aimed squarely at scalability.

At its most fundamental level, Amazon’s Web Services are aimed at elasticity – this is reflected in the technology as well as the pricing. The pricing is a pay-as-you-go model: by the hour, by the storage used and/or by the bandwidth consumed. Many of the services even reflect this elasticity in their names.

So what tools does Amazon provide to scale up and down? (Remember, scalability isn’t just about being able to scale up – it is about being able to scale in either direction: start small and go big, start big and go small, or start big, grow, then shrink again. Or vice versa.)

The Elastic Compute Cloud (EC2) enables you to create a machine image that you can turn on or off whenever you need it. This can be done via an API or via a management console, so a program can turn on a computer whenever it needs one.
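As a minimal sketch of what “a program turning on a computer” looks like, using boto3, the current AWS SDK for Python (the AMI id, instance type and region are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Start a machine from an image whenever the program decides it needs one.
resp = ec2.run_instances(
    ImageId="ami-12345678",  # placeholder machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# ... use the machine ...

# Turn it off again so you stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])
```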

EC2 also allows you to define parameters on a computer so that it effectively clones itself, or kills off clones, based on demand. For example, if a request-count threshold is hit, or responses are taking too long, a new machine instance can start up automatically. And once it is no longer needed, it can simply shut down again.

Imagine a business that provides sports statistics to a subscriber base of sporting tragics. Normally, four servers are enough to meet the demand of the company’s subscriber base. But once every four years, come the Olympics, everybody wants to be a sporting expert, and perhaps 400 servers are required for a period of about six weeks. Amazon’s Auto Scaling feature allows for this, as the sketch below illustrates.
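Here is a rough sketch of what that configuration might look like with boto3; the group and launch template names are hypothetical, and the CPU target is just one possible scaling trigger:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep between 4 (normal demand) and 400 (Olympics) servers.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="sports-stats-asg",  # hypothetical
    LaunchTemplate={"LaunchTemplateName": "sports-stats-template",  # hypothetical
                    "Version": "$Latest"},
    MinSize=4,
    MaxSize=400,
    DesiredCapacity=4,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Add instances when average CPU rises above 60%, remove them when it falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="sports-stats-asg",
    PolicyName="scale-on-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```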

This is just the simple stuff – there are more sophisticated models provided, including Elastic Load Balancing, Elastic MapReduce, the Simple Queue Service and the Simple Notification Service. Let’s take a look at these:

  • Elastic Load Balancing allows you to automatically distribute incoming traffic across multiple EC2 instances. It automatically detects failed machines and bypasses them so that no requests are sent to oblivion. The load balancer can also handle things such as ensuring a specific user’s session stays on one instance.
  • Elastic MapReduce allows large compute assignments to be split into small units so the work can be shared across multiple EC2 instances, offering potentially massive parallelism. If there is a task to handle huge amounts of data for analysis, simulation or artificial intelligence, Elastic MapReduce can manage the splitting of the project components and the pulling of them back together. Any failed tasks are rerun, and failed instances are automatically shut down.
  • Simple Queue Service (SQS) provides a means for hosting messages travelling between computers. This is an important part of any significant scalable architecture. Messages are posted by one computer without thought for, or knowledge of, what machine is going to pick up the information and process it. Once received, the message is locked to prevent any other computer from trying to read it, so it is guaranteed that only one computer will process it. Messages can remain in an unread state for up to 14 days. Queues can be shared or kept private, and they can be restricted by IP or time. This means that systems can be designed as separate components, each loosely coupled from the others – even different vendors can be responsible for different components. Different systems can work together in a safe and reliable way (see the sketch after this list).
  • Simple Notification Service (SNS). Whereas SQS is asynchronous, i.e. each post to the queue can be made without thought about when it will be picked up at the other end, SNS is designed to be handled at the other end immediately. Messages are pushed out using HTTP requests, emails or other protocols, which means it can be used to build instantaneous feedback communities across a range of different architectures. So long as they support standard web integration protocols, the systems will talk to each other. SNS can be used to send automatic notifications by SMS, email or API call when some event has taken place – or, if written correctly, when some expected event has not taken place.
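To make the SQS model concrete, here is a minimal sketch of a producer and a consumer using boto3; the queue name and message body are illustrative:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Producer: post a message with no knowledge of who will process it.
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer: while a message is being handled it is hidden from other readers,
# so only one machine processes it.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                           WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("processing", msg["Body"])
    # Deleting the message acknowledges success; if the consumer crashes first,
    # the message becomes visible again and another worker retries it.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```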

I will endeavour to provide example applications over time for each of these scenarios, but for now I hope this has given a sense of how highly scalable systems can be built using Amazon’s Web Services.

3 Sep

Amazon Web Services is looking the goods

I have to admit I am a fan of the work Amazon has done putting together what is now a compelling collection of infrastructure services. Together these facilities provide a fantastic vehicle for hosting highly scalable and reliable systems. And the pricing model – where you pay for what you use, with prices constantly being reviewed – is very enticing. Elasticity is taken to an entirely new level: machines can be purchased by the hour for as little as eleven cents, and some of the services charge in micro cents – more on that later. This is the first of three posts examining Amazon Web Services. This post introduces the key concepts, the next will talk about some of the scaling techniques provided, and the final post will focus on how Altium is currently leveraging Amazon’s offerings.

The key services Amazon offer include storage, compute power, and a range of auxiliary services designed to enhance these. Here is a very brief overview:

  • Amazon S3 (Simple Storage Service) provides storage facilities. You can store files of all kinds, including video, audio, software and anything else; you pay for the storage and the bandwidth used to access them, and costs are very low. Altium uses S3 to store many things, including training videos and software builds. When Altium releases a new build, tens of thousands of customers need to be able to get the 1.8GB file very quickly, and S3 works well for that. Storage reliability comes in two levels – the highest provides a 99.999999999% (eleven nines) probability of an object not being lost.
  • Amazon CloudFront provides an edge caching facility – files stored in S3 are distributed to nodes around the world so that people can access them quickly. There is also an option to deliver files via streaming.
  • Amazon EC2 (Elastic Compute Cloud) provides access to virtual computers you can buy by the hour. The computers come in a number of hardware configurations, ranging from low-end single-processor machines through to big boxes with lots of processors and 68GB of RAM, and in Linux and Windows flavours. You can also get machines preconfigured with certain software packages, with the price of those (if they are chargeable) built into the rental – for example, a machine with MS SQL Server in flavours ranging from free to Enterprise. Machines can be imaged so that you can take them offline or replicate them very quickly. EC2 instances can be tied to Elastic Block Store (EBS) volumes for persistent disk storage, with snapshots kept in S3.
  • Amazon RDS is a special case of EC2 that comes with an embedded MySQL database and a range of value-added facilities, including automatic backups and the ability to achieve failover replication into an alternative hardware partition in case the main server goes down.
  • Amazon SimpleDB is a lightning-fast, schema-less, string-based database consisting of items with many named attribute–value pairs. You can, for instance, store a customer record with values for Name, Address, Phone and so on, but you can store anything you like. If you want to store Favorite Color for one customer, you can. You can even store multiple values for a single attribute (a brief sketch follows this list).
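A brief sketch of SimpleDB’s attribute–value model, using boto3’s sdb client; the domain name and data are illustrative:

```python
import boto3

sdb = boto3.client("sdb", region_name="us-east-1")
sdb.create_domain(DomainName="customers")

# Schema-less: an item is just a bag of named string values.
sdb.put_attributes(
    DomainName="customers",
    ItemName="cust-001",
    Attributes=[
        {"Name": "Name", "Value": "Acme Pty Ltd", "Replace": True},
        {"Name": "Phone", "Value": "+61 2 5550 0000", "Replace": True},
        # An attribute only this customer has, with multiple values:
        {"Name": "FavoriteColor", "Value": "green"},
        {"Name": "FavoriteColor", "Value": "blue"},
    ],
)

print(sdb.get_attributes(DomainName="customers", ItemName="cust-001")["Attributes"])
```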

There are a range of other facilities, but I will discuss these in a later post where I talk about facilities to help you achieve elastic scaling (up and down).

24 Aug

What excited me about Salesforce when I first saw it

In thinking about writing for this blog, I was musing about the journey I have been on with the cloud, and my mind went back to my early days with Salesforce.

When I first used Salesforce, two things stood out for me as being incredibly powerful about the product and enabled me to see it as a potential business platform: the ability to define custom objects, with clearly defined relationships between them, and a customisable user interface – one that was highly structured, yet flexible enough to let you drag and drop fields and sections around the page. Bear in mind this was nearly five years ago, and a lot of progress has been made since then.

I figured that with these features I would be able to use Salesforce as a complete business platform, but I ran into the first of many limits Salesforce has imposed on use of the platform. I will be writing about these limits in more detail at a later point, but for now I want to talk about a limit that made me realise Salesforce really didn’t understand just how much potential they had to break the shackles of their CRM roots: you couldn’t have more than 25 custom tabs. I told them that 25 was nothing and they asked me how many I could possibly want – 40? 50? I sent them a reply listing more than 90 specific tabs, which really got their attention.

Now I have more than 300 custom objects in our installation of Salesforce being used in all sorts of interesting areas. I intend to share some of these use cases over the coming weeks along with other stories using other cloud applications and platforms.

For now, I just want to plant a seed: the initial limits, born of the need to protect all clients in a multi-tenanted architecture, place unnecessary restraints on people’s perceptions of what is possible. I intend to talk more about this concept as well.

The idea of runtime metacustomisation is such a powerful concept, though. Born out of the idea that a single platform can be leveraged by not hundreds, not thousands, but millions of users, it really excites me to think how much leverage can be gained by abstracting more of the development into the background – just like what happened when we went from DOS to Windows, when suddenly all the work required to write drivers for different monitors, printers and input devices became a thing of the past.

Plenty has excited me since, but it is hard to recapture that initial excitement when you know you are onto something special.