
Australian Government’s New Third Cloud Computing Policy Shows Technical Leadership

The Australian Government has just released a third version of its Cloud Computing Policy. The new policy has been published online as part of a broader collection of documents associated with the Cloud Computing Policy, which provide additional context.

The document is a significant step forward from previous years. Previously, departments needed only to demonstrate that they had considered using Cloud before implementing any new systems. The new policy is far more “Cloud-friendly”: it is now described as “cloud first” and states that “agencies must adopt cloud where it is fit for purpose, provides adequate protection of data and delivers value for money” (emphasis on ‘must’ in the original document).

The government makes it clear this is the desired direction: “… agencies have made limited progress in adopting cloud. A significant opportunity exists for agencies to increase their use of cloud services through the Australian Government Cloud Computing Policy.”

“We are committed to leading by example, demonstrating the benefits of investing in and using cloud services”, the foreword goes on to say. Reflecting this, the stated policy goal is to “reduce the cost of government ICT by eliminating duplication and fragmentation and will lead by example in using cloud services to reduce costs, lift productivity and develop better services”.

Whereas previously government agencies were asked merely to consider Cloud first, the new policy states they are “required to use Cloud services for new ICT services and when replacing any existing ICT services” whenever those services are fit for purpose, offer best value, and manage risk adequately.

The policy encourages departments to consider cross-entity cloud facilities. Public cloud facilities are recommended for hosting public-facing websites, while private, public, community and hybrid clouds are recommended for operational systems.

The government sees its role as a technical leader in the wider marketplace: “There is also an important flow-on effect to the broader economy. Combined with states and territories, government expenditure on ICT makes up approximately 30 per cent of the domestic ICT market. Improved adoption of cloud services by the government sends an important signal to the private sector. If government agencies were perceived to be treating cloud services as risky, this could reduce the adoption in the economy more generally.”

I think this is very encouraging for Australian uptake of Cloud – not just in a restricted public-cloud way, but in a full-spectrum use of Cloud technologies. After all, if the Government is mandating Cloud, what excuses do other businesses have?


Cloud vs On-Premise – A False Dichotomy

I often hear people talking about Cloud Computing as an opposite to On-Premise. This is based on an incorrect assumption that one of the key characteristics of Cloud is that it is delivered from off-premise.


The above diagram shows the perception people have – that there is a choice to be made: choosing Cloud means sacrificing on-premise, and choosing on-premise means sacrificing Cloud.


The reality is more like the above – for any workload we can choose whether to adopt Cloud or use more traditional approaches, and we can, separately, choose whether to run these workloads on-premise or elsewhere. This is not an all-or-nothing proposition.

Speaking of all-or-nothing propositions, I really like what Gartner’s Lydia Leong had to say about bimodal IT. Her message is that we should not try to compromise our IT practices – use best-practice traditional approaches for traditional workloads, and best-practice “Cloudy” approaches for new workloads based on the New Style of IT. One of the biggest causes of difficulty in adopting new approaches to IT delivery is creating an anemic set of practices that can’t do anything well.

Go All-in when you can, and recognize that, despite what the public-cloud-only vendors would have you believe, the location of the infrastructure is a completely independent decision from whether you choose to Go Cloud.


DevOps: the Solution to Disintermediation

Around 1986/87, I approached a motorcycle courier and asked him, “What impact, if any, has the fax machine had on your business?” His reply surprised me: “It has been the best thing that ever happened – people now expect instant delivery, but the fax is not a valid original.”

At first glance, I thought this was counter-intuitive – I expected the fax machine to kill the couriers, or to have no impact – I certainly didn’t expect it to have the positive impact he described. Suddenly people knew they could get their hands on a facsimile of a document instantaneously, and so they came to expect instant gratification, instant delivery. And so, they would call on couriers rather than use the postal service to ensure they got the documents quickly. The fax machine changed the paradigm.

Similarly, the proliferation of SaaS (Software as a Service) offerings like Salesforce, Yammer, Marketo, GMail, Clarizen, Zoho, Zuora and others has changed end users’ expectations of what is an acceptable timeframe for the delivery of new systems or modifications. Marketing, Finance, HR and other teams now expect new functionality to be available in days, weeks or months rather than quarters and years. They are making new demands of their IT teams that are, in the eyes of traditional CIOs, unconscionable, impossible, reckless. This leads to frustration and tension, and a strong desire to bypass the IT department – a process that has become known as disintermediation: cutting out the middle man.

This process is so well established that Gartner stated in early 2012 that by 2017 marketing departments would be spending more on IT systems than CIOs. The trouble is that marketers are good at marketing, not IT, and so they will end up creating siloed “crapplications” rather than extensions of a single source of truth. There will be a plethora of disjoint, standalone copies of customer databases, product catalogs, charts of accounts and the like distributed through the enterprise.

CIOs can well look at this and feel frustrated and concerned – how do they convey the risk being taken by bypassing architectural design in order to get instant gratification? How do they get back in control of the systems agenda? How do they learn to respond to the accelerated expectations of business departments that cannot afford to stand by and watch their business be eroded by far more responsive newcomers?

The answer, I believe, largely lies in moving to a DevOps model, where the dichotomy between software development and operations/infrastructure provisioning disappears, and hardware and any required software are provisioned on demand through scripting and automation. The DevOps approach leads to continuous deployment – in some cases up to 75 deployments per day. This model aligns with agile development techniques like Scrum, and supports the idea of disposable “cattle” hardware provisioning so synonymous with Cloud and Big Data.

Critics may argue that such approaches are risky, even reckless – that new systems and changes to hardware environments must go through rigorous testing before seeing the light of day. But the traditional batch-release model has its own consequences for change-management complexity, and thus risk: keeping development, staging and production environments in sync is difficult when major changes are bundled together and released in one large batch.

It is perhaps paradoxical that systems developed using DevOps principles are delivered faster and at lower risk. This is due to the combination of:

  1. Being able to continuously deploy updates using automated deployment systems like Jenkins;
  2. Provisioning scripted environments using the likes of Chef, Puppet, SaltStack or Ansible, which guarantee all the environments (development, staging and production) are materially the same;
  3. Autonomous failover and recovery systems that know how to self-repair or self-replace parts of the environment in real time;
  4. Agile software practices that are designed to iteratively deal with intrinsic and extrinsic problems, scaling needs, and are capable of responding to changes in direction;
  5. Systems being designed with testing as a first foundation rather than an afterthought; and
  6. Systems being designed from the ground up to cope with fallible, unreliable infrastructure.
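Point 2 above – scripted, convergent environments – can be illustrated with a toy desired-state sketch in Python. This is a conceptual illustration only, not how Chef or Ansible actually work internally (real tools shell out to package and service managers); all names here are hypothetical:

```python
# Toy illustration of desired-state provisioning: one declarative spec
# is applied to every environment, so dev, staging and production all
# converge on materially identical configurations.

DESIRED_STATE = {
    "packages": {"nginx", "openjdk-11"},
    "services": {"nginx": "running"},
}

def apply_state(env):
    """Idempotently converge an environment dict toward DESIRED_STATE."""
    changes = []
    for pkg in sorted(DESIRED_STATE["packages"] - env.setdefault("packages", set())):
        env["packages"].add(pkg)          # a real tool would call apt/yum here
        changes.append(f"install {pkg}")
    for svc, state in DESIRED_STATE["services"].items():
        if env.setdefault("services", {}).get(svc) != state:
            env["services"][svc] = state  # a real tool would call systemctl here
            changes.append(f"{svc} -> {state}")
    return changes

staging = {"packages": {"nginx"}}         # an environment that has drifted
print(apply_state(staging))   # first run converges it: ['install openjdk-11', 'nginx -> running']
print(apply_state(staging))   # second run is a no-op: [] (idempotent)
```

The key property is idempotency: running the script again changes nothing, which is what makes it safe to apply the same spec to every environment, continuously.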

CIOs often want to wait “until Cloud matures” before they move workloads or change practices. Such a wait-and-see approach can doom them to fail, or at least consign them to a much more difficult road into the new world. Marketers and other functional heads are not going to wait, and once the systems become disparate and disconnected as a result of their going it alone, it will become increasingly difficult for CIOs to introduce DevOps processes and re-establish themselves as the credible go-to resource for any form of IT-related system.

Unfortunately, too many CIOs have been convinced by Cloud vendors who claim that the only true Cloud is a public multi-tenanted one. I myself felt that way in the early days. The reality is that the location of the system is a secondary matter: Cloud and DevOps principles apply regardless of whether the systems are on premise or off premise. Too many CIOs see their biggest challenge as deciding whether to run IT on premise or in the Cloud, but this is a false dichotomy. There are two independent choices: one between on-premise and off-premise, and the other between Cloud and not Cloud, DevOps and not DevOps.

Chief Marketing Officers, Financial Officers, HR Officers are not going to wait for their visions to be implemented by IT teams encumbered by old practices – nor should they. They need to be able to rely on their CIOs to deliver what they need when they need it. DevOps and Cloud technologies will empower the IT departments to be enablers leading the organisation forward into the new world instead of roadblocks preventing the business from moving ahead.


Data Generation is Growing – Start Storing Everything

The world is generating more data than ever before.

In 2013 IBM reported that 90% of the world’s data had been generated in the last two years, and this trend is continuing. So what is causing this explosion of data?

Several major factors have contributed. Chief among these is the changing way we interact with computers. The first generation of data capture involved rigorously prepared data being hard-wired physically into the machine by engineers. The second generation involved professionally trained computer operators who would feed data into the machines at our request; software was designed to enforce constraints so the machine knew how to process the data. The third generation saw everyone getting access to enter their own data – Web 2.0, the interactive Web: people were given the freedom to capture whatever they wanted, and the amount of data captured exploded. Now we are in a fourth generation of data capture – the Internet of Things and machine-to-machine communication. Gartner has projected that there will be 26 billion connected devices by 2020, and the number of sensors will be measured in the trillions.

The price of storing all the data we are generating has fallen dramatically. The first hard disk drive, IBM’s RAMAC 305, was launched in 1956 at a cost of $50,000 – $435,000 in today’s dollars – and stored 5 megabytes. To put that in context, it would cost $350 billion today to store 4 terabytes of data using that technology, and the drives would take up a floor area 2.5 times that of Singapore. Today, of course, a 4-terabyte drive can be purchased for little more than a hundred dollars and fits in the palm of a hand.
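The arithmetic behind that comparison is easy to check – a back-of-the-envelope sketch using the figures quoted above:

```python
# Back-of-the-envelope check of the RAMAC comparison above.
ramac_capacity_mb = 5
ramac_cost_today = 435_000          # 1956's $50,000 in today's dollars

target_tb = 4
target_mb = target_tb * 1_000_000   # 4 TB expressed in megabytes

drives_needed = target_mb / ramac_capacity_mb
total_cost = drives_needed * ramac_cost_today

print(f"{drives_needed:,.0f} RAMAC drives, ${total_cost / 1e9:.0f} billion")
# → 800,000 drives at $348 billion, i.e. roughly the $350 billion stated above
```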

With storage costs largely resolved, the biggest challenge remained data processing and analysis. There are two issues here. Firstly, traditional database systems are designed to run on one machine; larger databases imply scaling up to bigger machines, but the amount of data now available has outstripped our capacity to process it using traditional methods on one computer – the computers are just not powerful enough. Secondly, data has become more structurally complex, and traditional database designs, which rely on the structure of the data being predefined during the design phase, no longer cope with the flexibility required when people and systems evolve to use data in unpredictable ways.

Until the rise of next-generation database systems like MongoDB, these limitations resulted in a lot of data being thrown away: what’s the point of storing stuff if you cannot make sense of it? MongoDB has helped change all that. It is inherently designed to work across many computers, enabling it to handle vastly larger amounts of data. Furthermore, the structure of the data does not have to be defined in advance – MongoDB allows for the storage of anything, and patterns and sense can still be gleaned regardless of the structure.

For example, in a database of customers, we may have access to store information about each customer that is highly specific to them as individuals – their pets, their hobbies and special interests, places they have visited, books they have read, companies where they have worked. We may not know what information we can glean, but with MongoDB we can store it today and make sense of it in the future.
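The flexibility described here can be sketched with plain Python dicts standing in for MongoDB documents – in real code each append would be a `collection.insert_one(...)` call via pymongo, but the schemaless idea is the same. All customer data below is invented for illustration:

```python
# Schemaless storage, sketched with plain Python dicts. No structure is
# declared in advance: each "document" carries whatever fields we happen
# to know about that customer.
customers = []

customers.append({"name": "Ana", "pets": ["beagle"], "hobbies": ["sailing"]})
customers.append({"name": "Ben", "employers": ["Acme"], "books_read": 212})
customers.append({"name": "Cy",  "places_visited": ["Nara", "Quito"]})

# Sense can still be gleaned later, without a predefined schema:
sailors = [c["name"] for c in customers if "sailing" in c.get("hobbies", [])]
print(sailors)  # → ['Ana']
```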

Given that tools now exist to handle less structured and very large datasets, and that storage costs have fallen so dramatically, it stands to reason to start storing everything. Businesses of the past were valued on their brand awareness; businesses of the future will increasingly be valued on how well they can use the data at their disposal to understand each customer, improve their efficiency and responsiveness, and make the best decisions.

Storing data today for use tomorrow makes a great deal of business sense. Once data has been collected, questions about how to use the data will naturally ensue. Without the data, people won’t even think about questions they could be asking, things like:

  • What would be the optimum price for this product?
  • How soon should we follow up a customer after they have purchased item X?
  • What products act as good loss leaders?
  • What are the signs that indicate a customer will churn?
  • When a customer churns, who are the people most at risk of following them?
  • What impact does the weather have on purchasing patterns?

MongoDB is well suited to storing this data. It has the flexibility to cope with unstructured and structured data, can scale to many petabytes with full replication, and has a great deal of support for analytics. Even if there is no current interest in analysing data, a business case for analytical tools will be much easier to make in the future if there is a large reservoir of data to draw on, rather than just an idea to grow from scratch.

The businesses that win in the future will be those that know how to harvest all the data available to them. The sooner they start storing data and practising how to glean the most from it, the sooner they can learn to pre-empt customers’ needs and the marketplace, cope with the veritable deluge in a highly responsive manner, and leave the less-informed competition behind.


Cloud Computing: It’s not just about Price

I fly a lot. When you do something a lot, little frictions start to mount up and can become a hassle.

Last week I had a small trip from Sydney to Melbourne and I was booked one way on one of the low cost airlines that offer a price that includes only the seat – everything else is an optional extra. On the way back I flew an airline that offers an all-inclusive price.

Since it was a short trip, I only took carry-on luggage with me. When I went through the scanners I was told that I had a pair of scissors in my bag and I had to surrender them. The scissors were expensive ones so I opted to go back and check in my carry-on bag. When I got to the counter I was asked to pay more than the price of the ticket to check my bag in.

I have to admit I got a bit upset at this and decided I had to throw the scissors away – they were good, but not worth the baggage charge. Frustrated, I tossed the scissors and went back through the scanners, only to find that my shaving cream was now being rejected due to an ill-fitting top. Rather impatiently, I made it very clear the shaving cream was not that important to me.

Then I walked off towards my gate lounge and realized I had lost my wallet somewhere in all of this mess. So I had to go back to the security area, then eventually to the check in area, again with my bags, and learned that my wallet had been taken to the gate lounge.

So after passing through the security screen for the third time I finally went to my gate lounge and picked up my wallet. Over the next five minutes, the gate was changed three times, in one case we were swapped with another flight.

When I finally arrived in Melbourne, there was an attendant (mis)managing the taxis: fifteen taxi spots back to back, with each arriving taxi always sent to the first spot. The person at spot one was always served quickly, while the poor people back in spots ten, eleven and twelve waited forever. More frustration.

All of these issues were minor in the bigger scheme of things, but they added up to a really bad experience; by the time I reached my destination I was furious. Contrast that with my return trip, where I am a high-profile customer: I swiped my card, went to the lounge, had breakfast and boarded my flight without incident. And had there been an incident, the airline would have assigned someone to take care of it, without charge.

I recognize that both these models are valid – pay a little and hope all goes well, or pay a premium for the peace of mind that you are in good hands – but too often the prices of these product offerings are compared as if they were the same thing.

Cloud hosting is like this. Some providers offer a self-service model designed to be cost-efficient without any bells and whistles, and if you run into difficulties you are pretty much on your own, or pushed into some sort of exception-management process. Others charge a bit more, but include Service Level Agreements and support offerings factored into the price, so that when you are in a spot of bother, someone will be there for you.


The Internet of Things: Interconnectedness is the key

I was at an Internet of Things event a couple of weeks ago and, listening to the examples, it was clear there is too much focus on connecting devices, and not enough on interconnecting them.

Connecting devices implies building devices that are designed specifically to work within a closed ecosystem, to report back to some central hub that manages the relationship with the purpose-built device. Interconnected devices are designed in such a way that they can learn to collaborate with devices they were never designed to work with and react to events of interest to them.

So what will this look like? For one possible scenario, let’s start with the ubiquitous “smart fridge” example and expand it to look at the way we buy our food. There has been talk for years about how fridges will tell us about their contents – how old items are, whether anything has been reserved for a special meal, what is on the shopping list – even the idea of placing automatic orders with food suppliers. But what if we still want to be involved in the physical purchasing process? How will the Internet of Things, with interconnected devices, work in that scenario? Here is a chain of steps involved:

  1. Assuming our fridge is the central point for our shopping list, and we want to physically do the shopping ourselves, we can tap the fridge with our phones and the shopping list will be transferred to the phone.
  2. The fridge or our phone can tell us how busy the nearby supermarkets currently are, and based on regular shopping patterns, how many people will likely be there at certain times in the immediate future. Sensors in the checkout will let us know what the average time is for people to be cleared. Any specials that we regularly buy will be listed for us to help make the decision about which store to visit.
  3. We go to the supermarket and the first thing that happens is the supermarket re-orders our shopping list in accordance with the layout of the store.
  4. The phone notifies our family members that we are at the supermarket so they can modify our shopping list.
  5. We get a shopping trolley, which immediately introduces itself to our phone. It checks our preferences on the phone as to whether we want its assistance, and whether it is allowed to record our shopping experience for our own use, or to assist the store with store planning.
  6. As we walk around the store, the phone or the trolley alerts us to the fact that we are near one of the items on our shopping list.
  7. If we have allowed it, the trolley can make recommendations of related products and compatible recipes based on our shopping list, with current costs, and offer to place the additional products into the shopping list on the phone – and even into our shopping-list template stored in the fridge, if we want.
  8. As we make our way to the checkout, the trolley checks its contents against what is on our shopping list and alerts us to anything missing. Clever incentives might also be offered at this time based on the current purchase.
  9. As soon as the trolley is told by the cash register that the goods have been paid for, it will clear its memory, first uploading any pertinent information we have allowed.
  10. Independently of the shopping experience and the identifiability of the shopper and their habits, the store will be able to record the movements of the trolley through the store – how fast it moved and where it stopped – to identify interest and analyse product placement.
  11. Once we get home, we stock the cupboard and the fridge, both of which update our shopping list.
  12. As soon as we put the empty wrapper in the trash, the trash can will read the wrapper and add the item to a provisional entry in the shopping list, unless we have explicitly pre-authorised that product for future purchase.
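A scenario like the one above depends on devices reacting to events rather than being hard-wired to each other. A minimal publish/subscribe sketch in Python illustrates the idea; all device and topic names are hypothetical:

```python
# Minimal publish/subscribe bus: interconnected devices subscribe to
# events by topic, not to each other, so a trolley built years after
# the fridge can still react to the fridge's events.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

bus = EventBus()
log = []

# The phone and the trolley each react to the fridge's shopping list
# without the fridge knowing either of them exists.
bus.subscribe("shopping_list.updated", lambda items: log.append(("phone", items)))
bus.subscribe("shopping_list.updated", lambda items: log.append(("trolley", items)))

bus.publish("shopping_list.updated", ["milk", "eggs"])
print(log)  # → [('phone', ['milk', 'eggs']), ('trolley', ['milk', 'eggs'])]
```

The point of the design is that the publisher needs no knowledge of its subscribers, which is exactly what lets devices collaborate with devices they were never designed to work with.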

Another example would be linking an airline live schedule to your alarm clock and taxi booking, to give you extra sleep in the morning if the flight is delayed. Or having your car notify the home that it looks like it is heading home and to have the air conditioner check whether it should turn on.

While we focus only on pre-ordaining the way devices should work during their design, we limit their ability to improve our lives. By building devices that are capable of being interconnected with other devices in ways that can be exploited at run time, we open up a world of possibilities we haven’t begun to imagine.


Preparing for the Big Data Revolution

It is no accident that we have recently seen a surge in the amount of interest in big data. Businesses are faced with unprecedented opportunities to understand their customers, achieve efficiencies and predict future trends thanks to the convergence of a number of technologies.

Businesses need to take every opportunity to store everything they can. Lost data represents lost opportunities to understand customer behaviour and interests, drivers for efficiency and industry trends.

A perfect storm
Data storage costs have fallen dramatically. For instance, in 1956 IBM released the first hard disk drive, the RAMAC 305. It allowed the user to store five megabytes of data at a cost of $50,000 – that’s around $435,000 in today’s dollars. In comparison, a four-terabyte drive today can fit in your hand and costs around $180. If you were to build the four-terabyte drive using 1956 technology, it would cost $350 billion and would take up a floor area of 1,600 km² – 2.5 times the area of Singapore. Also, 10-megabyte personal hard drives were advertised circa 1981 for $3,398 – that’s $11,000 today, or $4.4 billion for four terabytes.

Gordon Moore’s prediction in 1965 that processing capacity doubles approximately every two years has proved astoundingly accurate. Yet the amount of data we can generate has far outstripped even this exponential growth rate. Data capture has evolved from requiring specialised engineers, then specialised clerical staff, to the point where the interactive web allowed people to capture their own data. While this was a revolutionary step forward in the amount of data we had at our disposal, it pales before the most recent step: the ‘Internet of Things’, which has opened the door for machines to automatically capture huge amounts of data. The result is a veritable explosion of data, far outstripping Moore’s Law – the load became too much for our computers, so we simply threw a lot away or stopped looking for new data to store.

With the price of storage decreasing sharply, we can afford to capture far more data, and it has become increasingly important to find new ways to process all the data being stored at the petabyte scale. A number of technologies have emerged to do this.

Pets versus cattle
Traditionally computer servers were all-important – they were treated like pets. Each server was named and maintained with great attention to ensure that everything was performing as expected. After all, when a server failed, bad things would happen. Under the new model, servers are more like cattle: they are expendable, easily replaced. Parallel processing technologies have superseded monolithic approaches and allow us to take advantage of using many low-cost machines rather than increasingly more powerful central servers.

Hadoop is one project that has emerged to handle very large data sets using the cattle approach. Hadoop uses a ‘divide and conquer’ approach, which enables extremely large workloads to be distributed across multiple computers, with the results brought back together for aggregation once each intermediate step has been performed. To illustrate: imagine having a deck of cards and being asked to locate the Jack of Diamonds. Under a traditional approach you have to search through the cards one by one until you locate it. With Hadoop, you can effectively give one card each to 52 people, or four cards each to 13 people, and ask who has the Jack of Diamonds. It is much faster and much simpler when complex processes can be broken into manageable steps.
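The card analogy can be sketched as a toy map/reduce in a few lines of Python – a conceptual sketch of the divide-and-conquer pattern only, not Hadoop’s actual API:

```python
# The card analogy as a tiny map/reduce: deal the deck to 13 "workers",
# each scans only its own hand, then the partial results are combined.
from functools import reduce

deck = [(rank, suit) for suit in "SHDC" for rank in "A23456789TJQK"]

def deal(cards, n_workers=13):
    """Split the deck into n_workers hands (the 'divide' step)."""
    return [cards[i::n_workers] for i in range(n_workers)]

def map_phase(hand, wanted):
    """Each worker searches only its own hand."""
    return [card for card in hand if card == wanted]

def reduce_phase(partials):
    """Bring the partial results back together for aggregation."""
    return reduce(lambda acc, part: acc + part, partials, [])

hands = deal(deck)
found = reduce_phase(map_phase(hand, ("J", "D")) for hand in hands)
print(found)  # → [('J', 'D')]
```

Each worker touches only 4 cards instead of up to 52, which is the whole point: the search parallelises cleanly because no hand depends on any other.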

NoSQL, originally intended to mean “not only SQL”, is a collection of database technologies designed to handle large volumes of data – typically with less structure required than in a relational database like SQL Server or MySQL. Databases like this are designed to scale out across multiple machines, whereas traditional relational databases are more suited to scaling up on single, bigger servers. NoSQL databases can handle semi-structured data – for example, capturing multiple values of one type, or unusual values for a single person – where a traditional database’s structure is typically more rigid. NoSQL databases are great for handling large workloads, but they are typically not designed to handle atomic transactions; relational SQL databases are better suited to workloads where you must guarantee that all changes are made to the database at the same time, or none are.

Network science
Network science studies the way relationships between nodes develop and behave in complex networks. Network concepts apply in many scenarios; examples include computer networks, telecommunications networks, airports or social networks. Given a randomly growing network, some nodes emerge as the most significant and, like gravity, continue to attract additional connections from new nodes. For example, some airports develop into significant hubs while others are left behind. As an airport grows, with more connections and flights, there are increasingly compelling reasons why new airlines will decide to fly to that airport. Likewise, in social networks, some people are far more influential either due to the number of associations they develop or because of the effectiveness of their communication skills or powers of persuasion.
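The “rich get richer” growth described above can be simulated with a minimal preferential-attachment sketch; the network size and random seed below are arbitrary assumptions for illustration:

```python
# Preferential attachment in a few lines: each new node links to an
# existing node with probability proportional to that node's degree,
# so early well-connected nodes snowball into hubs ("airports").
import random
from collections import Counter

def grow_network(n_nodes, seed=42):
    random.seed(seed)
    degree = Counter({0: 1, 1: 1})    # start from a single linked pair
    endpoints = [0, 1]                # every edge contributes both endpoints
    for new in range(2, n_nodes):
        # Choosing a random endpoint is degree-proportional: a node with
        # degree k appears k times in the endpoints list.
        target = random.choice(endpoints)
        degree[new] += 1
        degree[target] += 1
        endpoints += [new, target]
    return degree

deg = grow_network(1000)
top5 = [node for node, _ in deg.most_common(5)]
print(top5)  # the biggest hubs are overwhelmingly early nodes
```

Run it a few times with different seeds and the same pattern emerges: a handful of early nodes accumulate a large share of the connections while most nodes stay at degree one, mirroring the airport and social-network examples above.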

Big data can help us to identify the important nodes in any contextual network. Games console companies have identified the most popular children in the playground and given them a free console on the basis that they will have a lot of influence over their friends. Epidemiologists can identify significant factors in the spread of diseases by looking at the significant nodes and then take steps to prevent further contamination or plan for contagion. Similarly, marketers can use the same approaches to figure out what is more likely to ‘go viral’.

Big data assists businesses to gain a better understanding of customers, treating each customer as an individual – the so-called marketing segment of one. Understanding what moves customers can build strong brand loyalty and evoke an emotional response that can be very powerful. Imagine an airline that recognises that a particular passenger travels from A to B every Monday and returns every Thursday. If that passenger instead plans to stay in B for two weeks, imagine how much loyalty could be generated by offering them a free flight over the weekend to C, a discounted flight for their spouse from A to C, and a discounted hire car and room for the weekend away together.

Digital body language and buying habits can lead online retailers to make astute decisions about what products to offer customers. Target was able to identify pregnant customers very early from their shopping patterns: customers buying certain combinations of cosmetics, magazines and clothes would go on to buy certain maternity products months later.

Big data can be used to drive efficiencies in a business. The freight company UPS, for example, was able to save almost 32 million litres of fuel and shave 147 million km off the distance its trucks travelled in 2011 by placing sensors throughout the trucks. As a side benefit, it learned that the short life of its truck batteries was due to drivers leaving the headlights on.

By analysing customer relationships, T-Mobile was able to mitigate the risk of a domino effect when one customer decided to leave its service. It did this by identifying the customers who were most closely related digitally to the person churning and making a very attractive offer to those people, preventing the churn from spreading. Further, by analysing people’s billing, call dropout rates and public comments, it was able to act in advance, reducing churn by 50% in a quarter.

CERN conducts physics experiments at the Large Hadron Collider, sending beams of 3.5 trillion electron volts in each direction around an underground ring; the resulting particle collisions provide an understanding of the basic building blocks of matter. The existence of the Higgs boson was confirmed by analysing the data generated by smashing the particles together: 15,000 servers are used to analyse the roughly one petabyte of data generated per second, of which about 20 gigabytes is actually stored. This is orchestrated using cloud techniques built on OpenStack, designed and supported by Rackspace.

We have reached a point where it is now better to start storing everything today so that we have a business case for analytical tools tomorrow. Once we start getting used to the idea that everything is available to us, we will find new ways to think about how we leverage our information. The businesses that succeed in the future will be those that constantly look for ways to mine the information they have gleaned.

[This article has been slightly modified from an article I wrote that was previously published in Technology Decisions magazine.]


CIOs: Focusing on Obstacles will Limit your Success with Cloud Computing

No matter where you stand on the new Cloud technologies, there is no escaping the fact that Cloud Computing has everyone’s attention. Some business executives see it as an opportunity to financially restructure their IT expenditure. Others focus only on the risks they perceive in placing their data and systems in the hands of an external third party. Still others see it as a means to focus on their core business and new business ideas without having to worry about whether the computing infrastructure will be able to cope.

While IT teams must ensure that systems are safe and data is secure, it is ironic that by focusing too much on security and availability, many CIOs are exposing themselves, and their employers, to a far greater risk – the risk of missing the opportunities presented by new technologies emerging from Cloud Computing.

The CIOs who will provide the greatest value to their employer will be those who approach Cloud by asking themselves “what can we now achieve that was previously inconceivable?”

While there are many distracting arguments about what constitutes Cloud, the key characteristic that differentiates it from more traditional approaches is that Cloud provides the freedom to be remarkable – the freedom for a business to focus on what it does best without constraints imposed by infrastructure.

Traditional approaches to IT see the acquisition of dedicated equipment on a project basis, with each new system requiring new equipment and administration. This leads to ever-increasing IT complexity, with the IT department working to prevent things from getting out of hand. In many cases this has led to a perception that the IT department is the problem, and many IT budgets are shifting to marketing as a result. Under a Cloud model, IT should evolve into a reservoir from which new equipment is instantly sourced – a platform underpinning whatever the business or the marketplace throws at it, scaling up and down to meet changing demands. While traditional approaches add complexity, Cloud provides the freedom to focus on the business imperatives.

IT leaders who embrace Cloud computing as an enabler will not be seen as roadblocks by the marketing or sales departments, especially when they adopt Open Cloud approaches such as OpenStack, which overcome vendor lock-in and allow data to be hosted on-premise, off-premise, or a mix of the two.

Cloud computing enables businesses to take advantage of the relationships their customers have with each other. For the cost of a coffee, businesses can experiment with new technologies by renting computers for the few hours it might take to trial an idea. They can continuously update their web presence in response to constantly changing patterns of behaviour. They can forge ahead with an initiative knowing that if it succeeds beyond their expectations, the platform can grow to accommodate it, and then shrink when the job is done. They can scale while maintaining a specialised relationship with each individual client. They can identify trends and make predictions based on analysing unprecedented amounts of data. Their employees can collaborate, find information and respond to events and customer demands with far greater agility than ever before. Those who truly adopt this approach understand that the Cloud is independent of issues such as on-premise or off-premise – provisioning can include a mixture of both, and even bare-metal machines can be incorporated into a Cloud-oriented approach to provisioning.

CEOs need to understand that the opportunities to stand out have never been greater. They can help their businesses capitalise on them by making it clear to their CIOs that it is no longer enough just to ensure that systems are operating and data is safe. Cloud computing levels the playing field like never before, and CEOs need to put their CIOs on notice that they must be the first to come up with the next wave of innovation – or there will be more at risk than their jobs.


A Review of Bruce McCabe’s Skinjob

It’s not every day you get to read a book written by one of your professional associates; it’s rare when the book happens to be a gripping yarn that has you wanting to tell everyone about it. Skinjob by Dr. Bruce McCabe is such a book.

I first met Bruce in 2006 when he came to interview me about my journey in taking Altium to the Cloud. At the time the things we were doing at Altium with what later became known as ‘Cloud Computing’ were pretty revolutionary. Some senior people at Salesforce told me that some of my emails about how I was using their system sent shock waves through the entire organisation and were instrumental in the development of . One person has even described me, rather embarrassingly I think, as the “Father of”. Anyway, I digress. Bruce was researching some of the emerging technologies for his consulting firm and had visited San Francisco to see several companies, including Salesforce. He was interested in meeting people who were pushing the envelope, and he was referred to me. Here he was in America looking for innovators globally, and he was referred to someone who lived almost on his doorstep.

Since those days, Bruce and I have maintained a good professional relationship, and he mentioned to me over a coffee that he was writing a work of fiction – a thriller set in the immediate future. I didn’t think much of it at the time – I have seen lots of people talk about writing books – but there was a glint in his eye and he seemed serious enough. Fast forward almost a year, and I have just read his first novel, Skinjob. I was seriously impressed.

Once I got past the initial thoughts of “I know this author”, and settled into the book, I was absorbed. I had intended to finish it on a plane next week, but I simply couldn’t put it down.

Without giving anything away, the book grabs you on multiple levels. Firstly, there is the whodunnit mystery, laced with enough threat to the main characters to qualify as a thriller. At this level the book is entertaining, with a complex plot that is managed and presented well, each dimension of the story doled out at a steady pace that beautifully balances the need to know what’s happening with the desire for more. The astute reader is given enough hints to solve the mystery. Few readers will spot the giveaways, though, and I am certain no-one will predict the complete plot line.

Delve one level deeper and we are presented with a number of predictions about how technology will impact on our world in the next five to ten years. McCabe has thoughtfully woven a number of technical advances into our daily lives, focusing on how they relate to things that are very close to all of us: intimate entertainment, policing and religion. I like the way he illustrates how data will be collected and analysed, and how computing power will be made available for the public ‘good’. In particular, the iterative analysis of unstructured data is well thought through. I don’t want to say too much about this because it would give part of the story away. Suffice it to say the book makes a great case study on how Big Data and Cloud Computing will impact on our lives.

At a third level we are challenged with questions of morality and ethics about our behaviour and our rights to privacy. When people see new opportunities to make money or achieve other self-interested goals, they will often overlook their moral compass and push on towards what can ultimately be a very slippery slope. McCabe does a great job of raising issues about how technology will impact on the moral decisions we make at the social and individual level, without being at all judgemental.

I think the author is destined for big things, and I am not the only one: Bruce McCabe appears to have piqued the interest of the literary agent who made Harry Potter author J.K. Rowling famous.

I thoroughly enjoyed reading Skinjob. I think others will too.





New news site by Delimiter aims to lift the quality of journalism in Australian IT

I have always been a fan of independent operators, and I like the freelancing model that allows journalists to pursue ideas, pitch them and then write stories that provide insights. The new digital era has brought with it many challenges to traditional journalism: anyone can now publish content cheaply, the popularity of an article is directly proportional to the number of cat pictures it contains, attention spans are shorter than ever, and publishers compete to appeal to the lowest common denominator in a vicious cycle that continues to find new levels of inanity.

I was particularly excited to learn of Delimiter’s decision to go against this trend by developing a new site that plans to deliver one significant article each week, probing what the editor considers the biggest issue of the week in Australian IT. The site, delimiter2, requires a subscription of $9.95 per month, which I think is a no-brainer, especially in the context of it supporting a small business fighting against the trend towards mediocrity that feeds the downward spiral of journalism.

My only real concern about the site is the ambitious requirement to write one high-quality, analytical article every week. I hope the pace can be sustained – perhaps guest articles could be considered.

Nevertheless, it is a worthy vision, and I urge all Australian IT professionals, and anyone with an interest in Australian IT, to subscribe.
