
Recent Articles

14 Apr

The Internet of Things: Interconnectedness is the key

I was at an Internet of Things event a couple of weeks ago, and listening to the examples it was clear that there is too much focus on connecting devices and not enough on interconnecting them.

Connecting devices implies building devices designed specifically to work within a closed ecosystem, reporting back to some central hub that manages the relationship with each purpose-built device. Interconnected devices, by contrast, are designed so that they can learn to collaborate with devices they were never intended to work with and react to events of interest to them.

So what will this look like? For one possible scenario, let's start with the ubiquitous "smart fridge" example and expand it to look at the way we buy our food. There has been talk for years about fridges telling us about their contents: how old items are, whether anything has been reserved for a special meal, what is on the shopping list and so on, even to the point of placing automatic orders with food suppliers. But what if we still want to be involved in the physical purchasing process? How will the Internet of Things, with interconnected devices, work in that scenario? Here's a chain of steps involved:

  1. Assuming our fridge is the central point for our shopping list, and we want to physically do the shopping ourselves, we can tap the fridge with our phones and the shopping list will be transferred to the phone.
  2. The fridge or our phone can tell us how busy the nearby supermarkets currently are, and based on regular shopping patterns, how many people will likely be there at certain times in the immediate future. Sensors in the checkout will let us know what the average time is for people to be cleared. Any specials that we regularly buy will be listed for us to help make the decision about which store to visit.
  3. We go to the supermarket and the first thing that happens is the supermarket re-orders our shopping list in accordance with the layout of the store.
  4. The phone notifies our family members that we are at the supermarket so they can add to or modify our shopping list.
  5. We get a shopping trolley, which immediately introduces itself to our phone. It checks our preferences on the phone to see whether we want its assistance, whether it is allowed to record our shopping experience for our own use, and whether it may help the store with store planning (a rough sketch of this kind of handshake follows the list).
  6. As we walk around the store, the phone or the trolley alerts us to the fact that we are near one of the items on our shopping list.
  7. If we have allowed it, the trolley can recommend related products and compatible recipes, with current costs, based on our shopping list, and offer to add the extra products to the shopping list on the phone and even to our shopping list template stored in the fridge if we want.
  8. As we make our way to the checkout, the trolley checks its contents against what is on our shopping list and alerts us to anything missing. Clever incentives might also be offered at this time based on the current purchase.
  9. As soon as the trolley is told by the cash register that the goods have been paid for, it will clear its memory, first uploading any pertinent information we have allowed.
  10. Independently of the shopping experience, and without identifying the shopper or their habits, the store will be able to record the movements of the trolley through the store: how fast it moved and where it stopped, to gauge interest and analyse product placement.
  11. Once we get home, we stock the cupboard and the fridge, both of which update our shopping list.
  12. As soon as we put the empty wrapper in the trash, the trash can will read the wrapper and add the item as a provisional entry in the shopping list, unless we have explicitly pre-authorised that product for future purchase.
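
To make the handshake in step 5 concrete, here is a minimal, hypothetical sketch of devices interconnecting over a shared event bus. The bus, device classes, topic names and preferences are all invented for illustration; real devices would negotiate over whatever discovery and messaging protocol they happen to share. The point is simply that the phone and the trolley were never designed for each other, yet each can react to the other's events.

```python
# A minimal, hypothetical sketch of interconnected devices sharing an event
# bus. The devices, topics and preferences are invented for illustration.
from collections import defaultdict
from typing import Callable, DefaultDict, Dict, List


class EventBus:
    """Lets a device react to events from devices it was never designed for."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable[[Dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: Dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)


class Phone:
    """Holds the shopper's preferences and the list copied from the fridge."""

    def __init__(self, bus: EventBus) -> None:
        self.preferences = {"allow_assistance": True, "allow_recording": False}
        self.shopping_list = ["milk", "eggs", "coffee"]
        bus.subscribe("trolley.introduced", self.on_trolley_introduced)

    def on_trolley_introduced(self, event: Dict) -> None:
        # The phone, not the trolley, decides how much assistance is welcome.
        print(f"Trolley {event['trolley_id']} introduced itself.")
        print(f"Assistance allowed: {self.preferences['allow_assistance']}")
        print(f"Recording allowed:  {self.preferences['allow_recording']}")


class Trolley:
    """Announces itself to whatever phone happens to be nearby."""

    def __init__(self, bus: EventBus, trolley_id: str) -> None:
        self.bus = bus
        self.trolley_id = trolley_id

    def introduce(self) -> None:
        self.bus.publish("trolley.introduced", {"trolley_id": self.trolley_id})


if __name__ == "__main__":
    bus = EventBus()
    phone = Phone(bus)
    Trolley(bus, "trolley-42").introduce()
```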

Another example would be linking an airline's live schedule to your alarm clock and taxi booking, to give you extra sleep in the morning if the flight is delayed. Or having your car notify the house that it appears to be heading home, so the air conditioner can check whether it should turn on.

As long as we focus only on pre-ordaining the way devices should work during their design, we limit their ability to improve our lives. By building devices that are capable of being interconnected with other devices in ways that can be exploited at run time, we open up a world of possibilities we haven't begun to imagine.

19 Mar

Preparing for the Big Data Revolution

It is no accident that we have recently seen a surge in the amount of interest in big data. Businesses are faced with unprecedented opportunities to understand their customers, achieve efficiencies and predict future trends thanks to the convergence of a number of technologies.

Businesses need to take every opportunity to store everything they can. Lost data represents lost opportunities to understand customer behaviour and interests, drivers for efficiency and industry trends.

A perfect storm
Data storage costs have fallen dramatically. For instance, in 1956 IBM released the first hard disk drive, the IBM 350, as part of its 305 RAMAC system. It allowed the user to store five megabytes of data at a cost of $50,000 – that's around $435,000 in today's dollars. In comparison, a four-terabyte drive today can fit in your hand and costs around $180. If you were to build the four-terabyte drive using 1956 technology, it would cost $350 billion and would take up a floor area of 1,600 km² – 2.5 times the area of Singapore. Also, 10-megabyte personal hard drives were advertised circa 1981 for $3,398 – that's $11,000 today, or $4.4 billion for four terabytes.
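
For anyone who wants to check the arithmetic, here is the rough calculation behind those comparisons, using the round figures quoted above and decimal megabytes and terabytes for simplicity:

```python
# Rough arithmetic behind the storage-cost comparison above, using the round
# figures quoted in the text (decimal megabytes/terabytes for simplicity).
MB_PER_TB = 1_000_000

cost_1956_per_mb = 435_000 / 5             # ~$87,000 per MB in today's dollars
cost_today_per_mb = 180 / (4 * MB_PER_TB)  # ~$0.000045 per MB for a $180, 4 TB drive

four_tb_at_1956_prices = cost_1956_per_mb * 4 * MB_PER_TB

print(f"1956:  ${cost_1956_per_mb:,.0f} per megabyte")
print(f"Today: ${cost_today_per_mb:.6f} per megabyte")
print(f"4 TB at 1956 prices: ${four_tb_at_1956_prices / 1e9:,.0f} billion")  # ~$348 billion
```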

Gordon Moore’s prediction in 1965 that processing capacity doubles approximately every two years has proved astoundingly accurate. Yet the amount of data we can generate has far outstripped even this exponential growth rate. Data capture has evolved from requiring specialised engineers, then specialised clerical staff, to the point where the interactive web allowed people to capture their own data. While this was a revolutionary step forward in the amount of data we had at our disposal, it pales before the most recent step: the ‘Internet of Things’, which has opened the door for machines to automatically capture huge amounts of data, resulting in a veritable explosion of data, way outstripping Moore’s Law. The result: the data load became too much for our computers, so we simply threw a lot away or stopped looking for new data to store.

With the price of storage decreasing sharply, we can afford to capture far more data, and it has become increasingly important to find new ways to process all the data being stored at the petabyte scale. A number of technologies have emerged to do this.

Pets versus cattle
Traditionally computer servers were all-important – they were treated like pets. Each server was named and maintained with great attention to ensure that everything was performing as expected. After all, when a server failed, bad things would happen. Under the new model, servers are more like cattle: they are expendable and easily replaced. Parallel processing technologies have superseded monolithic approaches and allow us to take advantage of many low-cost machines rather than ever more powerful central servers.

Hadoop is one project that has emerged to handle very large data sets using the cattle approach. Hadoop uses a ‘divide and conquer’ approach, which enables extremely large workloads to be distributed across multiple computers, with the results brought back together for aggregation once each intermediate step has been performed. To illustrate Hadoop: imagine having a deck of cards and someone asks you to locate the Jack of Diamonds. Under a traditional approach you have to search through the cards until you locate the card. With Hadoop, you can effectively give one card each to 52 people, or four cards each to 13 people, and ask who has the Jack of Diamonds. Much faster and much simpler when complex processes can be broken into manageable steps.
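
The card-deck analogy can be mimicked in a few lines of plain Python. This is an illustration of the divide-and-conquer idea, not actual Hadoop code: each "worker" searches only its own small hand, and the partial results are brought back together at the end.

```python
# A toy illustration of the divide-and-conquer idea behind Hadoop, in plain
# Python: each "worker" searches its own hand of cards in parallel, and the
# results are aggregated once every hand has been searched.
from multiprocessing import Pool

RANKS = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
SUITS = ["Hearts", "Diamonds", "Clubs", "Spades"]
DECK = [f"{rank} of {suit}" for suit in SUITS for rank in RANKS]


def search_hand(hand):
    """The 'map' step: each worker reports any matches found in its own hand."""
    return [card for card in hand if card == "J of Diamonds"]


if __name__ == "__main__":
    hands = [DECK[i::13] for i in range(13)]  # deal four cards to each of 13 workers
    with Pool(processes=4) as pool:
        partial_results = pool.map(search_hand, hands)
    # The 'reduce' step: bring the partial results back together.
    found = [card for hand_result in partial_results for card in hand_result]
    print(found)  # ['J of Diamonds']
```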

NoSQL, which was intended to mean “not only SQL”, is a collection of database technologies designed to handle large volumes of data – typically with less structure required than in a typical relational database like SQL Server or MySQL. Databases like this are designed to scale out to multiple machines, whereas traditional relational databases are more suited to scaling up on single bigger servers. NoSQL databases can handle semi-structured data; for example, if you need to capture multiple values of one type or obscure values for one person. In a traditional database, the structure of the database is typically more rigid. NoSQL databases are great for handling large workloads but they are typically not designed to handle atomic transactions: relational SQL databases are better designed for workloads where you have to guarantee that all changes are made to the database at the same time, or no changes are made.
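
As an illustration of what "semi-structured" means in practice, here is a hypothetical pair of customer records of the kind a document-oriented NoSQL store will happily accept. Each record carries different fields, and repeated values, without any schema change; in a rigid relational schema the same data would force extra tables or a migration.

```python
# Hypothetical semi-structured customer records of the kind a document-oriented
# NoSQL store accepts as-is: different fields and repeated values per record.
import json

customers = [
    {
        "name": "Alice",
        "emails": ["alice@example.com", "a.smith@example.org"],  # multiple values of one type
        "loyalty_tier": "gold",
    },
    {
        "name": "Bob",
        "emails": ["bob@example.com"],
        "preferred_contact_hours": "after 6pm",  # a field only Bob happens to have
    },
]

print(json.dumps(customers, indent=2))
```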

Network science
Network science studies the way relationships between nodes develop and behave in complex networks. Network concepts apply in many scenarios; examples include computer networks, telecommunications networks, airports or social networks. Given a randomly growing network, some nodes emerge as the most significant and, like gravity, continue to attract additional connections from new nodes. For example, some airports develop into significant hubs while others are left behind. As an airport grows, with more connections and flights, there are increasingly compelling reasons why new airlines will decide to fly to that airport. Likewise, in social networks, some people are far more influential either due to the number of associations they develop or because of the effectiveness of their communication skills or powers of persuasion.

Big data can help us to identify the important nodes in any contextual network. Games console companies have identified the most popular children in the playground and given them a free console on the basis that they will have a lot of influence over their friends. Epidemiologists can identify significant factors in the spread of diseases by looking at the significant nodes and then take steps to prevent further contamination or plan for contagion. Similarly, marketers can use the same approaches to figure out what is more likely to ‘go viral’.
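
The "rich get richer" growth that produces those hubs is easy to simulate. Below is a minimal sketch of preferential attachment in plain Python; the network size is arbitrary, and a real analysis would use a library such as NetworkX, but a handful of heavily connected hubs emerges all the same.

```python
# A minimal sketch of preferential attachment ("the rich get richer") in plain
# Python: each new node connects to an existing node with probability
# proportional to how many connections that node already has, so hubs emerge.
import random
from collections import Counter

random.seed(42)

edges = [(0, 1)]                 # start with two connected nodes
degrees = Counter({0: 1, 1: 1})

for new_node in range(2, 1000):
    # Pick an endpoint of a random existing edge: nodes with more edges are
    # proportionally more likely to be chosen, which is the "gravity" effect.
    target = random.choice(random.choice(edges))
    edges.append((new_node, target))
    degrees[new_node] += 1
    degrees[target] += 1

print("Most connected nodes (hubs):", degrees.most_common(5))
```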

Benefits
Big data helps businesses gain a better understanding of customers, treating each customer as an individual – the so-called marketing segment of one. Understanding what moves customers can build strong brand loyalty and evoke an emotional response that can be very powerful. Imagine an airline that recognises that a particular passenger travels from A to B every Monday and returns every Thursday. If that passenger plans to stay in B for two weeks, imagine how much loyalty could be generated by offering them a free flight over the weekend to C, a discounted flight for their spouse from A to C, and a discounted hire car and room for the weekend away together.

Digital body language and buying habits can enable online retailers to make astute decisions about which products to offer customers. Target was able to identify pregnant customers very early from their shopping patterns: customers buying certain combinations of cosmetics, magazines and clothes would go on to buy certain maternity products months later.

Big data can be used to drive efficiencies in a business. The freight company UPS, for example, was able to save almost 32 million litres of fuel and shave 147 million km off the distance its trucks travelled in 2011 by placing sensors throughout the trucks. As a side benefit, it learned that the short battery life of its trucks was due to drivers leaving the headlights on.

By analysing customer relationships, T-Mobile was able to mitigate the risk of a domino effect when one customer decided to leave its service. It did this by identifying the customers who were most closely connected digitally to the person churning and making a very attractive offer to those people, preventing the churn from spreading. Further, by analysing customers' billing, call drop-out rates and public comments, it was able to act in advance and reduce churn by 50% in a quarter.

CERN conducts physics experiments at the Large Hadron Collider, sending beams of particles at 3.5 trillion electron volts in each direction around an underground ring; the resulting collisions provide an understanding of the basic building blocks of matter. The existence of the Higgs boson was confirmed by analysing the data generated by smashing these particles together. Some 15,000 servers are used to analyse the roughly one petabyte of data generated per second, of which around 20 gigabytes is actually stored. This is orchestrated using cloud techniques built on OpenStack and designed and supported by Rackspace.

Conclusion
We have reached a point where it is now better to start storing everything today so that we have a business case for analytical tools tomorrow. Once we start getting used to the idea that everything is available to us, we will find new ways to think about how we leverage our information. The businesses that succeed in the future will be those that constantly look for ways to mine the information they have gleaned.

[This article has been slightly modified from an article I wrote that was previously published in Technology Decisions magazine.]

24 Feb

CIOs: Focusing on Obstacles will Limit your Success with Cloud Computing

No matter where you stand on the new Cloud technologies, there is no escaping the fact that Cloud Computing has everyone's attention. For some business executives it is seen as an opportunity to financially restructure their IT expenditure. Others focus only on the risks they perceive in placing their data and systems in the hands of an external third party. Still others see it as providing the means to focus on their core business and new business ideas without having to worry about whether the computer infrastructure will be able to cope.

While IT teams must ensure that systems are safe and data is secure, it is ironic that by focusing too much on security and availability, many CIOs are exposing themselves, and their employers, to a far greater risk – the risk of missing the opportunities presented by new technologies emerging from Cloud Computing.

The CIOs who will provide the greatest value to their employer will be those who approach Cloud by asking themselves “what can we now achieve that was previously inconceivable?”

While there are many distracting arguments about what constitutes Cloud, the key characteristic that differentiates it from more traditional approaches is that Cloud provides the freedom to be remarkable – the freedom for a business to focus on what it does best without constraints imposed by infrastructure.

Traditional approaches to IT see the acquisition of dedicated equipment on a project basis, with each new system requiring new equipment and administration. This leads to ever-increasing IT complexity, with the IT department working to prevent things from getting out of hand. In many cases this has led to a perception that the IT department is the problem, and many IT budgets are shifting to marketing as a result. Under a Cloud model, IT should evolve into a reservoir from which new equipment is instantly sourced, and a platform underpinning whatever the business or the marketplace throws at it, scaling up and down to meet changing demands. While traditional approaches add complexity, Cloud provides the freedom to focus on the business imperatives.

IT leaders who embrace Cloud computing as an enabler will not be seen as roadblocks by the marketing or sales departments, especially when they adopt Open Cloud approaches such as OpenStack that overcome vendor lock-in and allow data to be hosted on-premise, off-premise, or a mix of the two.

Cloud computing enables businesses to take advantage of the relationships their customers have with each other. For the cost of a coffee, businesses are able to experiment with new technologies by renting computers for the few hours it might take to trial an idea. They can continuously update their web presence in response to constantly changing patterns of behaviour. They can forge ahead with an initiative knowing that if it succeeds beyond their expectations, the platform can grow to accommodate it, and then shrink when the job is done. They can scale while maintaining a specialized relationship with each individual client. They can identify trends and make predictions based on analysing unprecedented amounts of data. Their employees can collaborate, find information and respond to events and customer demands with far greater agility than ever before. Those who truly adopt this approach understand that the Cloud is independent of issues such as on-premise versus off-premise: provisioning can include a mixture of both, and even bare-metal machines can be incorporated into a Cloud-oriented approach to provisioning.
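
As a concrete illustration of renting a computer for the few hours it takes to trial an idea, here is a minimal sketch using the OpenStack SDK for Python. The cloud name, image, flavour and network are placeholders that would come from your own environment, and error handling is omitted.

```python
# A minimal sketch of renting a server for a short experiment with the
# OpenStack SDK. The cloud name, image, flavor and network are placeholders;
# a real environment defines them (and the credentials) in clouds.yaml.
import openstack

conn = openstack.connect(cloud="my-cloud")

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Spin up a machine just for the experiment...
server = conn.compute.create_server(
    name="trial-idea-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"Server {server.name} is {server.status}")

# ...run the trial, then hand the infrastructure back when the job is done.
conn.compute.delete_server(server)
```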

CEOs need to understand that the opportunities to stand out have never been greater. They can help their businesses capitalize on them by making it clear to their CIOs that it is no longer enough just to ensure that systems are operating and data is safe. Cloud computing opens up opportunities for a level playing field like never before, and CEOs need to put their CIOs on notice that they need to be first to come up with the next wave of innovation, or there will be more at risk than just their jobs.

2 Sep

A Review of Bruce McCabe’s Skinjob

It’s not every day you get to read a book written by one of your professional associates; it’s rare when the book happens to be a gripping yarn that has you wanting to tell everyone about it. Skinjob by Dr. Bruce McCabe is such a book.

I first met Bruce in 2006 when he came to interview me about my journey in taking Altium to the Cloud. At the time, the things we were doing at Altium with what later became known as 'Cloud Computing' were pretty revolutionary. Some senior people at Salesforce told me that some of my emails about how I was using their system sent shock waves through the entire organisation and were instrumental in the development of Force.com. One person has even described me, rather embarrassingly I think, as the "Father of Force.com". Anyway, I digress. Bruce was researching some of the new technologies emerging for his consulting firm and had visited San Francisco to see several companies, including Salesforce. He was interested in meeting people who were pushing the envelope and he was referred to me. Here he was in America looking for innovators globally, and he was referred to someone who lived almost on his doorstep.

Since those days, Bruce and I have maintained a good professional relationship, and he mentioned to me over a coffee that he was writing a work of fiction – a thriller set in the immediate future. I didn't think much of it at the time; I have seen lots of people talk about writing books. But there was a glint in his eye and he seemed serious enough. Fast forward almost a year, and I have just read his first novel, Skinjob. I was seriously impressed.

Once I got past the initial thought of "I know this author" and settled into the book, I was absorbed. I had intended to finish it on a plane next week, but I simply couldn't put it down.

Without giving anything away, the book grabs you on multiple levels. First there is the whodunnit mystery, laced with enough threat to the main characters to consider it a thriller. At this level the book is entertaining, with a complex plot that is managed and presented well, each dimension of the story doled out at a steady pace that beautifully balances the need to know what's happening with the desire for more. The astute reader is given enough hints to solve the mystery. Few readers will see the giveaways, though, and I am certain no-one will predict the complete plot line.

Delve one level deeper and we are presented with a number of different predictions about how technology will impact on our world in the next five to ten years. McCabe has thoughtfully woven a number of technical advances into our daily lives, focusing on how they relate to things that are very close to all of us, examining intimate entertainment, policing and religion. I like the way in which he illustrates how data will be collected and analysed, and how computing power will be made available for the public 'good'. In particular, the iterative analysis of unstructured data is well thought through. I don't want to say too much about this because it would give part of the story away. Suffice it to say the book makes a great case study on how Big Data and Cloud Computing will impact on our lives.

At a third level we are challenged with questions of morality and ethics about our behaviour and our rights to privacy. When people see new opportunities to make money or achieve other self-interested goals, they will often overlook their moral compass and push on towards what can ultimately be a very slippery slope. McCabe does a great job of raising issues about how technology will impact on the moral decisions we make at the social and individual level, without being at all judgemental.

I think the author is destined for big things. And I am not the only one: Bruce McCabe appears to have piqued the interest of the literary agent who made Harry Potter's J.K. Rowling famous.

I thoroughly enjoyed reading Skinjob. I think others will too.

 

22 Jul

New news site by Delimiter aims to lift the quality of journalism in Australian IT

I have always been a fan of independent operators and I like the freelancing model that allows journalists to pursue ideas, pitch them and then write stories that provide insights. The new digital era has brought with it many challenges to traditional journalism, including the fact that anyone can now publish content cheaply, the popularity of an article is directly proportional to the number of cat pictures, attention spans are shorter than ever, and publishers compete to appeal to the lowest common denominator in a vicious cycle that continues to find new levels of inanity.

I was particularly excited to learn of Delimiter's decision to go against this trend by developing a new site that plans to deliver one significant article each week, probing into what the editor considers the biggest issue of the week in Australian IT. The site, delimiter2, requires a subscription of $9.95 per month, which I think is a no-brainer, especially in the context of supporting a small business fighting against the trend towards mediocrity that aids the downward spiral of journalism.

My only real concern about the site is the ambitious nature of the commitment to write one high-quality, analytical article every week. I hope it can be kept up – perhaps guest articles may be considered.

Nevertheless, it is a worthy vision and I urge all Australian IT professionals, and anyone with an interest in Australian IT, to subscribe.

10 Apr

Why I joined Rackspace Part II – the Products and the Strategy

As someone who has taken an enterprise to the Cloud globally, I understand just how much of an impact the Cloud can have on a business. I have been vocal in pushing the fact that Cloud can open all sorts of possibilities and is not just about cost mitigation and scalability.

Businesses looking to learn more about what Cloud Computing can bring are faced with a plethora of suppliers purporting to have Cloud. Many of the potentially transformational benefits can be lost in the confusion of conflicting ideas, and a sense that it all just sounds like stuff they have heard before.

Real Cloud is hard to fake. The key thing is that the products, services and technologies offered by a vendor enable an enterprise to focus on the business imperatives driving it without having to worry whether the infrastructure will be there when it is needed. It is always a tough question: does a company invest in expensive infrastructure just in case it is successful beyond expectations? Does it allow a huge opportunity to slip through its fingers simply because of a conservative approach to investing in infrastructure? Both are risks that businesses have traditionally had to face.

At least, that is how it is without using Cloud. Cloud approaches mean that businesses can effectively forge ahead knowing that the infrastructure will cater for whatever is required. Imagine starting a fishing business and not having to worry about how big a boat and net you should buy, relying on being able to start with modest equipment and elastically expand the ship and net at sea if you happen to come across a huge school of fish.

The resulting freedom from encumbrance has the potential to change how businesses approach strategic planning, innovation and the related areas of risk management and process streamlining. More agile methodologies ensue that facilitate experimentation and allow change to happen more naturally, leading inevitably to a focus on business goals rather than on potential impediments such as not having enough infrastructure.

Cloud facilitates this change in thinking, but it has failed to overcome the concerns around privacy, security and data sovereignty. Despite all the advocates who have effectively said that the benefits outweigh the risks, the fact remains that some businesses stand to lose more than they can gain if their data is exposed. In some cases there are legislative impediments (PCI compliance, health records and national sovereignty rules, to name a few) that render the potential gains seemingly academic. Concerns about heavy dependency on a single Cloud provider have further limited the uptake.

But Rackspace has largely addressed these concerns by open-sourcing the Cloud. By working with NASA, Rackspace gave birth to what is now the fastest-growing Open Source project in history – OpenStack. The OpenStack Foundation now has more than 8,600 contributing developers and has been adopted by IBM, Dell, HP, NTT, Red Hat, Canonical and more than a hundred other companies. Rackspace has very publicly gone "all-in" on OpenStack and is the largest contributor to the code base. Rackspace's approach is that Fanatical Support will be the key differentiator that enables the company to excel.

As a result of OpenStack, businesses have the freedom to build an infrastructure platform using a combination of public multi-tenanted Cloud infrastructure, dedicated hosted solutions, and private cloud facilities that sit on their own premises if necessary. The technical barriers between each of these topologies are being eliminated, making for one platform that truly frees businesses from worrying about their infrastructure as they focus on driving their business forward.

The freedom to choose a mixture of topologies, suppliers and service levels really allows businesses to focus on what they do, not how they do it. Adding Fanatical Support to that freedom allows Cloud computing to fully realise its potential. And that excites me.

—-

Oh, and for those who want to understand more about my role at Rackspace, I have come on board as the Director of Technology and Product – Asia Pacific. My functions include promoting how Cloud computing concepts can help businesses achieve their goals, expounding on the concepts of the Open Cloud, as well as helping ensure new Rackspace products and services are ready for the market in the Asia Pacific region.

I welcome the opportunity to talk about my journey to the Cloud and how thinking Cloud and related topics such as Big Data, the Internet of Things and Social Media can change our approach to business.

9 Apr

Why I joined Rackspace Part I – the Company and its Values.

As one of the early Cloud adopters, and someone who has worked hard to promote what Cloud can bring to businesses, I was looking to join a vendor where I could leverage Cloud concepts to truly make a difference in the world. The more I looked, the more Rackspace seemed the right place to be.

The first thing that stood out to me was the company values. I was excited to see that the company places importance on the following values:

  • Treating Rackers like family and friends
  • Passion for all we do
  • Commitment to Greatness
  • Full Disclosure and Transparency
  • Results First – substance over flash
  • And, of course, Fanatical Support in all we do.

These formed a picture for me of an organisation that was striving to really make a difference. The word that stood out for me was the word “Greatness”. This is something that I personally believe in very strongly. Companies that are committed to Greatness are alive, vibrant and focused on growth.

Rackspace is best known for its Fanatical Support and I have to admit that before I experienced it I thought it was just marketing hype. I was first exposed to it when Altium acquired a company and brought in a new head of IT who had experienced Rackspace's Fanatical Support. His face was radiant as he described how Rackspace knew about problems on his servers before he did. I was still pretty sceptical, but impressed with the positioning. I thought that if a company could pull this off, it would make them really successful. I have always believed in providing phenomenal support, so I was impressed, but only in an intellectual way.

Then I joined the company. And what I found inside shocked me – here was a company that had inculcated the very idea of going above and beyond into the core of its being. I went to the London office for my induction programme – five days of aligning new Rackers (Rackspace employees are called Rackers) to the fundamental principles that drive the business. There are over 1,000 staff in the London office and I must have been approached half a dozen times by people asking me, "You seem lost – what can I do to help you?" This was no fake offer – each time this happened I was helped all the way to my objective, and the people always seemed eager to help.

Everything the company does drives this fanatical support. The company uses Net Promoter Score to measure the likelihood that customers will refer others. Even the induction programme had us rookies being asked how likely we would be to recommend each of the presenters to our colleagues or friends. The presenters, we learned, were vying for a coveted internal trophy. I have never seen such engaging and creative presentations, all designed to prepare us to be effective in the Rackspace culture.

The company’s mission is to be recognised as one of the world’s greatest service companies. And it shows.

2 Nov

Techs and Non-Techs: Society’s Left Brain and Right Brain

Our progress as an ever-advancing civilization is being held back by the way we approach the education of information technology. We have created a false dichotomy: we have those who come out of the education system understanding technology but not the way the real world works, and those who learn some aspect of the business world but have no idea how technology is applied to their domain. It seems the more powerful the software developer, the less grounded they are in the real world, and the same is probably true of those who are strong in some vertical business function.

Over time, this one-sidedness is mitigated by experience and exposure, but that is not the same as having a fundamental understanding of what goes on over the fence. It is like having two separate brain hemispheres – one focused on how stuff can be built, and another focused on what needs to happen. The left brain (the software developers) knows how to mix ingredients and build something, but it takes the right brain to see how things need to be used.

The trouble is, without some means of conveying their expertise, a lot is lost in translation. Non-techs are unaware of what is possible, or have no idea whether something is technically risky or feasible. Technologists know lots of cool tech stuff but have no idea how some gem can be applied to the real world.

Technology is so pervasive, so fundamental to the way we now live, that we need to rethink our education strategy or miss out on generations of possibilities. If you think we are doing just fine, why has it taken us 30+ years to apply social media principles to our computing, with publisher-subscriber models only now beginning to permeate our IT systems in natural, human-facing ways? These new modes of operation are natural; what we have been doing previously is not. Hence our historic fear of IT, our frustration with information overload, our expensive overruns and our ridiculously high rates of project failure.

I was talking to some software department heads at one of Australia's leading universities recently and I asked them when they thought we should begin teaching HTML and CSS to our students. Their response: grade two – that's seven-year-olds. With this kind of fundamental understanding of the building blocks of web pages, these students will be much better prepared to build an understanding of what is possible.

On the same topic, why are we not teaching secondary students the fundamentals of object-oriented programming? I was rather shocked to learn that in the State of Victoria, Australia, there are only 14 secondary teachers who are qualified computer scientists.

Society will benefit greatly when the two hemispheres are able to communicate more effectively. Current workarounds like product managers and business analysts are a necessary glue, but how much more effective will we be if there is a more fundamental understanding of what is going on in the other half of the brain? Imagine constructing buildings where the builder and architect have only a vague understanding of what the building's purpose might be, or where a prospective customer has no sense of the cost of adding a room after the walls have gone up.

I believe we need to start teaching the fundamentals of IT as part of our primary and secondary education, and carry that through to all the university vertical domains so that computer technology is an intrinsic part of the education of every discipline. Likewise, we need to be introducing Applied Computer Science subjects into the CompSci and InfoSys courses on offer so that graduates learn things like the application of Big Data, publisher-subscriber models, marketing automation, the cost of downtime, basic risk and so on, and are able to apply them to real-world problems.

We need to cultivate a society where both sides can make meaningful contributions to the other's discipline by seeing through the other's perspective. Only then will we begin to recognise our full potential.

 

26 Jun

Microsoft Acquiring Yammer Is Good News for All

Today’s announcement that Microsoft has acquired Yammer has the feel of something very exciting – and I would like to share my initial thoughts on what this might mean.

Yammer provides an enterprise collaboration platform based upon publisher-subscriber principles, but constrained to a domain context: if you don't have a matching email address, you don't get to participate. From the Yammer website:

Yammer brings the power of social networking to the enterprise in a private and secure environment. Yammer is as easy to use as great consumer software like Facebook and Twitter, but is designed for company collaboration, file sharing, knowledge exchange and team efficiency.

That Microsoft has decided to acquire Yammer shows great insight, and a willingness to think creatively about tackling the new world of social media. Microsoft will be able to leverage Yammer's platform in many areas of the business, so it is somewhat of a surprise to learn that they have positioned it as part of the Office family. Sure, Yammer could make various Office products much more powerful, particularly when paired with the Office 365 offerings, but I can see it benefiting many other areas of the business as well. In other words, I am concerned that Microsoft may be looking to productise it alongside other tools in the Office suite, when Yammer has the potential to make a big impact throughout much of the Microsoft product line.

So here’s a quick overview of how I initially think Microsoft products could benefit from Yammer:

  • Excel, Word and Powerpoint could all gain major collaboration benefits:
    • commentary from various people,
    • tracking changes with comments in Office 365,
    • suggestions for further amendments, and the ability to apply them,
    • branched versions,
    • seeking approval,
    • requesting clarification on a paragraph, slide, or formula,
    • requesting artwork for insertion
  • Microsoft Project could gain some qualitative aspects – look at Assembla or Pivotal Tracker for some of the interesting developments in the application of social media principles to project management.
  • Outlook could integrate streams from multiple sources, including email and Yammer, and potentially other social media streams such as Twitter, Facebook and Chatter, to the extent corporate policies allow.
  • Dynamics would benefit – discussions around non-payment of invoices, doubtful debtors, stock levels, product return rates and supplier feedback would be a good starting point. Beyond that, there are many areas where subscription to business objects would provide a great deal of control: there is plenty of scope for linking Yammer to the actual business objects and enabling people to subscribe to invoices, customers, picking slips and so on. For example, a notification could be sent to a subscriber when an invoice over a certain amount is paid, or when its payment deadline passes (a minimal sketch of this idea follows the list).
  • Sharepoint would also benefit. The full extent to which these two tools can synergise requires some deeper thought, but on the surface the collaborative nature of each appears complementary.
  • Even SQL Server and Visual Studio could provide hooks that enable the database or an application to feed easily into a Yammer stream, or respond to a Yammer feed.
  • Microsoft's acquisition of Skype will fit nicely into this view as well, with a tightly integrated communication platform that runs from asynchronous emails and notifications, to live discussions, through to video.
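
To make the business-object subscription idea from the Dynamics bullet concrete, here is a minimal, hypothetical sketch. The feed client, group name and event handlers are invented for illustration; this is not the actual Yammer or Dynamics API, just the shape of the publisher-subscriber pattern being described.

```python
# A hypothetical sketch of subscribing to business objects, as described in the
# Dynamics bullet above. The feed client, group and thresholds are invented;
# this is not the actual Yammer or Dynamics API.
from dataclasses import dataclass
from datetime import date


@dataclass
class Invoice:
    number: str
    amount: float
    due_date: date
    paid: bool = False


class FeedClient:
    """Stand-in for an enterprise social feed (something Yammer-like)."""

    def post(self, group: str, message: str) -> None:
        print(f"[{group}] {message}")


def on_invoice_paid(invoice: Invoice, feed: FeedClient) -> None:
    # Notify subscribers only for invoices over a certain amount, per the text.
    if invoice.amount > 10_000:
        feed.post("finance", f"Invoice {invoice.number} for ${invoice.amount:,.2f} has been paid.")


def on_deadline_passed(invoice: Invoice, today: date, feed: FeedClient) -> None:
    if not invoice.paid and today > invoice.due_date:
        feed.post("finance", f"Invoice {invoice.number} is overdue (was due {invoice.due_date}).")


if __name__ == "__main__":
    feed = FeedClient()
    inv = Invoice("INV-1042", 25_000.00, date(2012, 6, 30))
    on_deadline_passed(inv, date(2012, 7, 10), feed)
    inv.paid = True
    on_invoice_paid(inv, feed)
```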

I am also encouraged by this because it will raise the profile of Social Media in the mainstream. Instead of being seen as something for the Salesforce evangelists and their like, Social Media will become more of an everyday business tool as a result of this acquisition.

And that can only be a good thing.

Here's hoping Microsoft are thinking strategically about this, rather than treating it as just a new feature set to add to the Office product line.


18 Jun

Theoretical Disaster Recovery doesn’t cut it.

I have mixed feelings about Amazon's latest outage, which was caused by a cut in power. The outage was reported quickly and transparently, and the information provided after the fault showed a beautifully designed system that would deal with any power-loss eventuality.

In theory.

After reviewing the information provided, I am left a little bewildered, wondering why such a beautifully designed system wasn't put to the ultimate test. I mean, how hard can it be to rig a real production test that cuts the main power supply?

If you believe in your systems, and you must believe in your systems when you are providing Infrastructure As A Service, you should be prepared to run a real live test that tests every aspect of the stack. In the case of a power failure test, anything short of actually cutting the power in multiple stages that tests each line of defense is not a real test.

The lesson applies to all IT, indeed to all aspects of business really – that’s what market research is for. But back to IT. If a business isn’t doing real failover and disaster recovery testing that goes beyond ticking the boxes to actually carrying out conceivable scenarios, who are they trying to kid?

Many years ago I set up a Novell network for a small business client and implemented a backup regime. One drive, let's say E:, held programs and the other, F:, carried data. The system took a backup of the F: drive every day and ignored the E: drive. After all, there was no need to back up the programs, and disk space was expensive at the time.

After a year I arranged to go to the site and do a backup audit, and discovered that the person in charge of IT had swapped the drive letters around because he thought it made more sense. We had a year of backups of the program directories, and no data backups at all.

Here is the text from Amazon’s outage report:

At approximately 8:44PM PDT, there was a cable fault in the high voltage Utility power distribution system. Two Utility substations that feed the impacted Availability Zone went offline, causing the entire Availability Zone to fail over to generator power. All EC2 instances and EBS volumes successfully transferred to back-up generator power. At 8:53PM PDT, one of the generators overheated and powered off because of a defective cooling fan. At this point, the EC2 instances and EBS volumes supported by this generator failed over to their secondary back-up power (which is provided by a completely separate power distribution circuit complete with additional generator capacity). Unfortunately, one of the breakers on this particular back-up power distribution circuit was incorrectly configured to open at too low a power threshold and opened when the load transferred to this circuit. After this circuit breaker opened at 8:57PM PDT, the affected instances and volumes were left without primary, back-up, or secondary back-up power. Those customers with affected instances or volumes that were running in multi-Availability Zone configurations avoided meaningful disruption to their applications; however, those affected who were only running in this Availability Zone, had to wait until the power was restored to be fully functional.

Nice system in theory. I love what Amazon is doing, and I am impressed with how they handle these situations.
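
The chain of events in the report is clear enough to mimic in a few lines. Here is a toy simulation, with every load and threshold invented, of how a single mis-set breaker threshold defeats an otherwise sound set of defences, which is exactly the kind of thing a real, live test would have caught:

```python
# A toy simulation of the failover chain described in Amazon's report. All
# capacities, loads and thresholds are invented; the point is only to show how
# one mis-configured breaker threshold can defeat several layers of defence.

LOAD_KW = 800  # load of the affected Availability Zone (invented figure)


def utility_power(load_kw: float) -> bool:
    return False  # the cable fault takes the utility feed offline


def primary_generator(load_kw: float) -> bool:
    return False  # the generator overheats and powers off


def secondary_circuit(load_kw: float, breaker_threshold_kw: float) -> bool:
    # The breaker opens (power is lost) if the load exceeds its threshold.
    return load_kw <= breaker_threshold_kw


def power_available(breaker_threshold_kw: float) -> bool:
    layers = (
        utility_power(LOAD_KW),
        primary_generator(LOAD_KW),
        secondary_circuit(LOAD_KW, breaker_threshold_kw),
    )
    return any(layers)


# As configured: the threshold was set far too low, so every layer fails.
print(power_available(breaker_threshold_kw=500))   # False -> instances lose power
# As designed: with a sane threshold, the last line of defence holds.
print(power_available(breaker_threshold_kw=1500))  # True
```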

They say that what doesn’t kill you makes you stronger – here’s hoping we all learn something from this.
