
30 Sep

EE Times article published

An article of mine has been published in EE Times magazine discussing the acquisition of Morfik by Altium. EE Times is a magazine for the electronic engineering industry, so the article approaches the news partly from that angle, but I think it gives people a good sense of Altium’s strategic direction.

30 Sep

Where next for Salesforce Chatter? My two cents…

Salesforce has released an internal collaboration platform that beautifully leverages the power of the cloud. When I first saw Chatter I was excited by the ability of people to subscribe to objects – more on that later. What surprised me was how many people rave about how good it is. Mostly, from what I can see, people seem to use it for person-to-person communication, and for this it has some interesting possibilities.

For example, sales teams are able to share tips or presentations they have given in some vertical industry so that other sales reps facing a challenge in that industry can learn from their experience. “Selling to an ENTJ? No problem, here is my approach.”

When Marc Benioff recently recalled the question that drove him to start Salesforce.com (why aren’t more enterprise applications more like Amazon?), and how that question has now evolved into why aren’t more enterprise applications more like Facebook?, I realised something important about all these new ways of collaborating – Ning, Facebook, Twitter, Yammer, WordPress and many others. They all allow people to publish a range of information about various facets of their lives, and they allow other people to subscribe to those facets. Take Facebook as an example. People choose to post information about their social lives in the form of photos taken at parties, upcoming events, social news and so on. People choose to participate in various games where they grow virtual pets or plants and collaborate. People choose to support various causes. As publishers, each of us chooses to display all sorts of things in the hope that someone will find them interesting or valuable. As subscribers, we each choose to set up an antenna to learn what a particular person is saying, or what is happening around a particular favourite topic – perhaps a musician, perhaps a company.

This notion of publishing and subscribing is core to Chatter, but to see it as merely a closed-circuit means of publishing and subscribing to information from human beings is to miss the mark somewhat, because Chatter also allows business objects to participate in the free flow of information. Human beings can choose to subscribe to certain objects.

So let’s say you are a sales manager with a team of five field sales executives. You want to know how they are progressing on a number of important opportunities, perhaps a dozen in all. You can subscribe to those opportunities directly – a feed is provided to you, and the opportunities will place information into that feed to let you know something has changed about them: perhaps the close date has moved, or the probability of closure, or the amount of the opportunity. This puts you very close to the action so you know what is going on.

This is all available today, and it is not limited to opportunities – what about an important, complex case your team is working on for a VIP customer? You can subscribe to the case to receive information about its status. Even custom objects can participate in this world.

This is all well and good, but in my opinion there are two capabilities Chatter needs to become truly successful: metadata-based subscription and non-event subscription.

“Meta-what?”, I hear you say. Metadata is data that describes the shape of your data – let me give a few practical examples. Currently, Chatter requires you to subscribe to specific objects, for example Opportunity number 123456. You look at all your data and you choose which records will interest you. But how much more powerful would it be if you could automatically subscribe to objects based on preconfigured parameters? Here are some illustrative examples:

  • You want to automatically subscribe to every opportunity worth more than $50,000 owned by someone who reports to you (a rough sketch of this particular rule follows the list).
  • You want to automatically subscribe to every opportunity for a customer who has never purchased anything from you.
  • You want to automatically subscribe to any cases logged by any VIP customer with platinum support where the renewal contract is due within three months.
  • You want to automatically subscribe to opportunities where the amount is more than one standard deviation above last month’s average closed-won price, the opportunity is managed by one of your team members, and the customer is a new logo.
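
As promised, here is a rough sketch of how the first rule might be approximated today with a scheduled script. It is a sketch only: it assumes the simple-salesforce Python library, placeholder credentials and Ids, and the standard EntitySubscription object that underpins Chatter’s follow feature.

```python
# Illustrative sketch only: auto-subscribe a manager to big opportunities
# owned by their direct reports, using the simple-salesforce library.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com",   # placeholder credentials
                password="password",
                security_token="token")

MANAGER_ID = "005xx000001Sv6AAAS"  # hypothetical Id of the subscribing manager

# Find opportunities worth more than $50,000 owned by the manager's direct reports.
opps = sf.query(
    "SELECT Id FROM Opportunity "
    "WHERE Amount > 50000 AND Owner.ManagerId = '{}'".format(MANAGER_ID)
)

# Subscribe the manager to each matching record via the EntitySubscription
# object that backs Chatter's "follow" feature.
for record in opps["records"]:
    sf.EntitySubscription.create({
        "ParentId": record["Id"],
        "SubscriberId": MANAGER_ID,
    })
```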

Another important offering that I feel would take Chatter to a whole new level is allowing objects to chatter non-events. Imagine being able to ask an urgent case for an important customer to let you know if it hasn’t been touched for six hours, or a strategic opportunity to tell you it has not been updated for more than two days.
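
Again purely as a sketch (same simple-salesforce assumption, illustrative filters), a scheduled job could approximate non-event chatter by looking for urgent cases untouched for six hours and posting a note to each case’s feed:

```python
# Illustrative sketch: chatter a warning on urgent cases untouched for six hours.
from datetime import datetime, timedelta, timezone
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="password",
                security_token="token")            # placeholder credentials

cutoff = (datetime.now(timezone.utc) - timedelta(hours=6)).strftime("%Y-%m-%dT%H:%M:%SZ")

stale = sf.query(
    "SELECT Id, CaseNumber FROM Case "
    "WHERE Priority = 'High' AND IsClosed = false "
    "AND LastModifiedDate < " + cutoff              # SOQL datetime literals are unquoted
)

# Post to each case's Chatter feed so its followers are alerted to the silence.
for record in stale["records"]:
    sf.FeedItem.create({
        "ParentId": record["Id"],
        "Body": "Case %s has had no activity for over six hours." % record["CaseNumber"],
    })
```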

These changes would make Salesforce Chatter far more effective than it already is. Without them, it is just another corporate collaboration tool.

In my next post I plan to talk about the next evolution of these concepts, something my company, Altium, is busy working towards – the Facebook of Devices, the Internet of Things, if you will. This is where we take the publisher-subscriber model to an entirely new level – devices intelligently collaborating with other devices.

17 Sep

Amazon Web Services Part 2 – Scaling Services Provided

Here is the second of three posts on Amazon Web Services. The first post provided a look at the key foundational services, and the next post will talk about how Altium is leveraging Amazon’s offerings. Meanwhile, in this post, as promised, I will look at some of the ways Amazon takes elasticity to the extreme through a range of services aimed squarely at scalability.

At its most fundamental level, Amazon Web Services is aimed at elasticity – this is reflected in the technology as well as the pricing. The pricing is a pay-as-you-go model – by the hour, by the storage used and/or by the bandwidth consumed. Many of the services even reflect this elasticity in their names.

So what tools does Amazon provide to let you scale up and down? (Remember, scalability isn’t just about being able to scale up – it is about being able to scale in either direction: start small and go big, start big and shrink, or start big, grow and then shrink again.)

The Elastic Compute Cloud (EC2) enables you to create a machine image that you can turn on or off whenever you need it. This can be done via an API or via a management console, so a program can turn on a computer if it needs one.
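
To give a sense of how little code that takes, here is a minimal sketch using the boto3 Python SDK; the region and instance ID are placeholders.

```python
# Minimal sketch: start and stop an existing EC2 instance from code.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region
INSTANCE_ID = "i-0123456789abcdef0"                   # placeholder instance ID

ec2.start_instances(InstanceIds=[INSTANCE_ID])   # bring the machine up when needed
# ... do the work that needed the extra capacity ...
ec2.stop_instances(InstanceIds=[INSTANCE_ID])    # and shut it down again afterwards
```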

EC2 also allows you to define parameters so that a machine effectively clones itself, or kills off clones, based on demand. For example, if a request-count threshold is hit, or responses start taking too long, a new machine instance can start up automatically. And once it is no longer needed, it can simply shut down again.

Imagine a business that provides sports statistics to a subscriber base of sporting tragics. Normally four servers are enough to meet the demand from the company’s subscriber base. But once every four years, come the Olympics, everybody wants to be a sporting expert, and perhaps 400 servers are required for a period of about six weeks. Amazon’s Auto Scaling feature allows for this.
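
Here is a rough sketch of what that kind of demand-driven scaling could look like using boto3; the group name, launch configuration and CPU target are purely illustrative.

```python
# Illustrative sketch: let the fleet grow from 4 to 400 instances on demand.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # placeholder region

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="sports-stats-web",        # hypothetical group name
    LaunchConfigurationName="sports-stats-web-lc",  # assumed to exist already
    MinSize=4,                                      # normal, between-Olympics capacity
    MaxSize=400,                                    # Olympic-fortnight ceiling
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Scale out and back in automatically, tracking average CPU across the group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="sports-stats-web",
    PolicyName="track-cpu-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```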

This is just the simple stuff – there are more sophisticated facilities on offer, including Elastic Load Balancing, Elastic MapReduce, the Simple Queue Service and the Simple Notification Service. Let’s take a look at these:

  • Elastic Load Balancing allows you to automatically distribute incoming traffic across multiple EC2 instances. It automatically detects failed machines and bypasses them so that no requests are sent to oblivion. The load balancer can also handle things such as ensuring a specific user’s session stays on one instance (sticky sessions).
  • Elastic MapReduce allows large compute assignments to be split into small units so the work can be shared across multiple EC2 instances, offering potentially massive parallelism. If there is a task to handle huge amounts of data for analysis, simulation or artificial intelligence, Elastic MapReduce can manage the splitting and the pulling back together of the project components. Any failed tasks are rerun, and failed instances are automatically shut down.
  • Simple Queue Service (SQS) provides a means of hosting messages travelling between computers, an important part of any significant scalable architecture (a short sketch follows this list). Messages are posted by one computer without thought for, or knowledge of, which machine is going to pick up the information and process it. Once received, a message is locked to prevent any other computer from trying to read it, so it is guaranteed that only one computer will process it. Messages can remain in an unread state for up to 14 days. Queues can be shared or kept private, and they can be restricted by IP or time. This means that systems can be designed as separate components, each loosely coupled from the others – different vendors can even be responsible for different components. Different systems can work together in a safe and reliable way.
  • Simple Notification Service (SNS). Whereas SQS is asynchronous, i.e. each post to the queue can be made without thought about when it will be picked up by the other end, SNS is designed to be handled at the other end immediately. Messages are pushed out to be handled using HTTP requests, emails or other protocols, which means it can be used to build instantaneous feedback communities using a range of different architectures. So long as they support standard web integration approaches, the systems will talk to each other. SNS can be used to send automatic notifications by SMS, email or API call when some event has taken place, or, if written correctly, when some expected event has not taken place.
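
Here is the minimal SQS sketch promised above, using boto3 with an invented queue name: one side posts work without knowing who will pick it up, the other side pulls, processes and deletes it.

```python
# Minimal sketch of loosely coupled producer/consumer messaging over SQS.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")                # placeholder region
queue_url = sqs.create_queue(QueueName="work-items")["QueueUrl"]  # hypothetical queue

# Producer: post a message without knowing which machine will process it.
sqs.send_message(QueueUrl=queue_url, MessageBody="resize image 42")

# Consumer: receive a message; while it is being processed it is hidden
# from other consumers, and deleting it confirms the work is done.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in response.get("Messages", []):
    print("processing:", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```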

I will endeavour to provide example applications over time for each of these scenarios, but for now I hope this has given a sense of how highly scalable systems can be built using Amazon Web Services.

3 Sep

Amazon Web Services is looking the goods

I have to admit I am a fan of the work Amazon has done putting together what is now a compelling collection of infrastructure services. Together these facilities provide a fantastic vehicle for hosting highly scalable and reliable systems. And the pricing model, where you pay for what you use and prices are constantly being reviewed, is very enticing. Elasticity is taken to an entirely new level – machines can be rented by the hour for as little as eleven cents. Some of the services charge in micro-cents – more on that later. This is the first of three posts examining Amazon Web Services. This post introduces the key concepts, the next post will talk about some of the scaling techniques provided, and the final post will focus on how Altium is currently leveraging Amazon’s offerings.

The key services Amazon offers include storage, compute power and a range of auxiliary services designed to enhance these. Here is a very brief overview:

  • Amazon S3 (Simple Storage Service) provides storage facilities (a short sketch follows this list). You can store files of all kinds, including video, audio, software and anything else, and you pay for the storage and the bandwidth used to access them. Costs are very low. Altium uses S3 to store many things, including training videos and software builds. When Altium releases a new build, tens of thousands of customers need to be able to get the 1.8GB file very quickly, and S3 works well for that. Storage reliability comes in two levels – the highest provides a 99.999999999% (11 nines) probability of a file not being lost.
  • Amazon CloudFront provides an edge caching facility – files stored in S3 are distributed to nodes around the world so that people can access them quickly. There is also an option to deliver files via streaming.
  • Amazon EC2 (Elastic Compute Cloud) provides access to virtual computers you can rent by the hour. The computers come in a number of hardware configurations, ranging from low-end single-processor machines through to big boxes with lots of processors and 68GB of RAM. Machines come in Linux and Windows flavours. You can also get machines preconfigured with certain software packages, and the price of those (if they are chargeable) is built into the rental – for example, you can get a machine with MS SQL Server in editions ranging from free to Enterprise. Machines can be imaged so that you can take them offline or replicate them very quickly. EC2 instances can also be tied to Elastic Block Storage, which provides persistent disk volumes.
  • Amazon RDS is a special case of EC2 that comes with an embedded MySQL database and a range of value-added facilities, including automatic backups and failover replication to an alternative hardware partition in case the main server goes down.
  • Amazon SimpleDB is a lightning-fast, schema-less, string-based database consisting of items with many named attribute-value pairs. You can, for instance, store a customer record with values for Name, Address, Phone and so on, but you can store anything you like. If you want to store Favorite Color for just one customer, you can. You can even store multiple values for the one attribute.
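
To give a flavour of how little code is involved, here is a minimal sketch of storing a build in S3 and handing out a time-limited download link with the boto3 Python SDK; the bucket and key names are placeholders.

```python
# Minimal sketch: store a build in S3 and hand out a time-limited download link.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")   # placeholder region

BUCKET = "example-builds"          # hypothetical bucket name
KEY = "releases/build-1234.zip"    # hypothetical object key

# Upload the file; you pay only for the storage used and the bandwidth consumed.
s3.upload_file("build-1234.zip", BUCKET, KEY)

# Generate a pre-signed URL so customers can download the build for the next hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": KEY},
    ExpiresIn=3600,
)
print(url)
```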

There are a range of other facilities, but I will discuss these in a later post where I talk about facilities to help you achieve elastic scaling (up and down).

2 Sep

What Fast Internet to the Home will Mean

In the recent Australian election campaign, one of the key campaign focal points was the policy concerning a high-capacity broadband backbone for the majority of the country. There are many aspects of this that are interesting.

The fact that it is an election issue is interesting in itself – it means there is now recognition that Australia is falling behind other developed nations with regard to the Internet, and that keeping pace with this particular form of infrastructure is important.

Then there is the fact that there are such divergent views on how it should be implemented. One party wants to run optic fibre to 93% of homes, delivering at least 100Mbps and possibly 1Gbps, at an estimated cost of $43 billion. The other party proposes a network based on a major rollout of wireless towers, delivering 12Mbps to 97% of homes at a cost of a little over $6 billion.

And the other thing I found interesting about all of this is that no-one seemed to have much imagination about what would be done with all that bandwidth. I have no problem with spending on a major infrastructure scheme – one that brings back memories of the Snowy Mountains Hydro-electric Scheme in the scale of its cost – but I question why people are proposing such major projects when they don’t seem to be able to come up with illustrative examples beyond entertainment and medicine. Apparently we are all going to have our lives saved.

I noticed some people asking why we would need 1Gbps, and my response is to remind them that when Bill Gates and Paul Allen were discussing how much RAM a computer could ever possibly need, they thought 640KB would be ample.

There are plenty of things that 1Gbps bandwidth could allow, and I will attempt to explore some of them, but first there is one other variable that I haven’t seen mentioned in the media: the data consumption this fast connectivity will encourage. How are consumers going to be charged? If the speed encourages people to pull down heaps of stuff, or upload huge amounts of data, costs will blow out unless the price of data comes down considerably.

One of our staff based in Japan has 1Gbps to his home and an upload limit of 50GB per day, no download limit. I wish.

Now, what kind of things could you do if you had that much bandwidth?

The obvious ones, like entertainment, games and movies, don’t really bear mentioning, except to note that once the time taken to download a file of a given size drops below some threshold, it ceases to be a barrier to use. That threshold will differ from person to person.

Torrents will become incredibly convenient – files will be downloaded more often and therefore available from more locations, all of them high speed. I wonder what that will do for the movie, music and software industries.

One of the big changes that will come about is the interconnectivity via the Internet of all manner of devices. The speed and ubiquity of the net will increase the use of these devices in the so-called Internet of Things. Devices will maintain state in the cloud and check in to report their current situation as well as see if there is something the device should know about. Agent software will interact with this information and perhaps make changes to the values for other devices.

Here is an example: a car’s GPS sends information about where it is to the cloud – coordinates that enable some agent software to determine velocity, direction and so on. The agent software knows where home is and can determine that the car is heading to the house. As the car passes some predetermined proximity boundary, the agent can check the current state of the home air conditioner or heater and, based on the ambient temperature and a preconfigured preferred temperature, turn the air conditioner on.
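
The agent logic in that example is simple enough to sketch. Assuming the car reports latitude and longitude to the cloud, something like the following could decide when to act – the coordinates, threshold and the aircon object’s API are all invented for illustration.

```python
# Illustrative sketch of the proximity-triggered agent described above.
import math

HOME = (-33.8688, 151.2093)        # placeholder home coordinates (Sydney)
PROXIMITY_KM = 5.0                 # act when the car is within 5 km of home
PREFERRED_TEMP_C = 22.0

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(h))

def on_gps_update(car_position, ambient_temp_c, aircon):
    """Called each time the car publishes a new position to the cloud."""
    if (distance_km(car_position, HOME) < PROXIMITY_KM
            and ambient_temp_c > PREFERRED_TEMP_C
            and not aircon.is_on):
        aircon.turn_on(target_c=PREFERRED_TEMP_C)   # hypothetical device API
```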

Opportunities to exploit this information will grow as the bandwidth to support them grows. So with the GPS example, the data could be sent back to a central agent that can determine how many cars are not moving quickly enough in one location, and tell the GPS that there is a traffic problem up ahead. Or the data could be anonymised and fed to a government town planning service that can use the data to determine future bridges, tunnels, toll gates etc.

Metcalfe’s law states that the value of a communications network is proportional to the square of the number of connected nodes. When devices are connected like this, the potential is unimaginable.

Like Facebook or Twitter, where individuals decide what they wish to publish and other individuals decide what they wish to subscribe to, the idea of devices publishing a range of data for other permitted devices to subscribe to, either directly or through some intermediate agent software, is very compelling. Printers can publish their toner levels and how many drum cartridges they have, while some device designed to dispatch a printer repair agent keeps an eye on things. Not only can devices publish information about their current status, they can also publish a range of metadata – information that they seek out from elsewhere on the internet and republish. For example, while a printer may publish information about its toner levels, it may also run a Google-style search on its own model number and publish the results. This would be useful in the case of factory recalls, announcements of new firmware, or perhaps sales of peripherals, accessories and consumables.
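
As a hedged illustration of that device publish/subscribe model, here is what a printer publishing its toner level, and an agent subscribing to it, might look like over MQTT using the paho-mqtt Python library (version 2.x); the broker address and topic names are invented.

```python
# Illustrative sketch: a device publishes its status, an agent subscribes to it.
# The two halves would normally run on different machines.
import json
import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

BROKER = "broker.example.com"              # hypothetical MQTT broker
TOPIC = "devices/printer/PRN-042/status"   # hypothetical topic for one printer

# Publisher side: the printer reports its consumable levels.
publish.single(TOPIC,
               json.dumps({"toner_pct": 12, "drum_cartridges": 1}),
               hostname=BROKER)

# Subscriber side: an agent watches every printer's feed and reacts to low toner.
def on_message(client, userdata, message):
    status = json.loads(message.payload)
    if status["toner_pct"] < 15:
        print("Order consumables for", message.topic)

agent = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)   # paho-mqtt 2.x constructor
agent.on_message = on_message
agent.connect(BROKER)
agent.subscribe("devices/printer/+/status")             # wildcard: all printers
agent.loop_forever()
```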

To bastardize a quote from the movie Field of Dreams, build it and they will come.
