
Posts from the ‘Social Media’ Category


New news site by Delimiter aims to lift the quality of journalism in Australian IT

I have always been a fan of independent operators and I like the freelancing model that allows journalists to pursue ideas, pitch them and then write stories that provide insights. The new digital era has brought many challenges to traditional journalism: anyone can now publish content cheaply, the popularity of an article is directly proportional to the number of cat pictures it contains, attention spans are shorter than ever, and publishers compete to appeal to the lowest common denominator in a vicious cycle that continues to find new levels of inanity.

I was particularly excited to learn of Delimiter’s decision to go against this trend by developing a new site that plans to deliver one significant article each week, probing into what the editor considers the biggest issue of the week in Australian IT. The site, delimiter2, requires a subscription of $9.95 per month, which I think is a no-brainer, especially as it supports a small business fighting the trend towards mediocrity that feeds the downward spiral of journalism.

My only real concern about the site is the ambitious requirement to produce one high-quality analytical article every week. I hope the pace can be sustained – perhaps guest articles could be considered.

Nevertheless, it is a worthy vision and I urge all Australian IT professionals, and anyone with an interest in Australian IT, to subscribe.


Techs and Non-Techs: Society’s Left Brain and Right Brain

Our progress as an ever advancing civilization is being held back by the way we approach the education of information technology. We have created a false dichotomy: we have those who come out of the education system understanding technology but not the way the real world works, and we have those who learn some aspect of the business world, but have no idea how technology is applied to their domain. It seems the more powerful the software developer, the less grounded they are in the real world, and the same is probably true for those who are strong in some vertical business functional area.

Over time, this one-sidedness is mitigated by experience and exposure, but it is not the same as having a fundamental understanding of what goes on over the fence. It is like having two separate brain hemispheres – one focused on how stuff can be built, the other focused on what needs to happen. The left brain (the software developers) knows how to mix ingredients and build something, but it takes the right brain (the business specialists) to see how things need to be used.

The trouble is, without some means of conveying their expertise, a lot is lost in translation. Non-techs are unaware of what is possible, or have no idea whether something is technically risky or feasible. Technologists know lots of cool tech stuff but have no idea how some gem can be applied to the real world.

Technology is so pervasive, so fundamental to the way we now live, that we need to rethink our education strategy or miss out on generations of possibilities. If you think we are doing just fine, why has it taken us 30+ years to apply social media principles to our computing, with publisher-subscriber models only beginning to permeate our IT systems in natural human-facing ways? These new modes of operation are natural; what we have been doing previously is not. Hence our historic fear of IT, our frustration with information overload, our expensive overruns and our ridiculously high rates of project failure.

I was talking to some software department heads at one of Australia’s leading universities recently and I asked them when they thought we should begin teaching HTML and CSS to our students. Their response: grade two – that’s seven year olds. With this kind of fundamental understanding of the building blocks in web pages, these students will be much better prepared to build an understanding of what is possible.

On the same topic, why are we not teaching secondary students the fundamentals of object-oriented programming? I was rather shocked to learn that in the State of Victoria, Australia, there are only 14 secondary teachers who are qualified computer scientists.

Society will benefit greatly when the two hemispheres are able to communicate more effectively. Current workarounds like product managers and business analysts are a necessary glue, but how much more effective will we be if there is a more fundamental understanding of what is going on in the other half of the brain? Imagine constructing buildings where the builder and architect have only a vague understanding of what the building’s purpose might be, or a prospective customer has no sense of the cost of adding a room after the walls have gone up.

I believe we need to start teaching the fundamentals of IT as part of our primary and secondary education, and carry that through to all the university vertical domains so that computer technology is an intrinsic part of the education of every discipline. Likewise, we need to introduce Applied Computer Science subjects into the CompSci and InfoSys courses on offer so that graduates learn things like the application of Big Data, publisher-subscriber models, marketing automation, the cost of downtime and basic risk, and are able to apply them to real-world problems.

We need to cultivate a society where both sides can make meaningful contributions to the other’s discipline by seeing through the other’s perspective. Only then will we begin to recognise our full potential.



Microsoft Acquiring Yammer Is Good News for All

Today’s announcement that Microsoft has acquired Yammer has the feel of something very exciting – and I would like to share my initial thoughts on what this might mean.

Yammer provides an enterprise collaboration platform based upon publisher-subscriber principles, but constrained to a domain context. If you don’t have a matching email address you don’t get to participate. From the Yammer website:

Yammer brings the power of social networking to the enterprise in a private and secure environment. Yammer is as easy to use as great consumer software like Facebook and Twitter, but is designed for company collaboration, file sharing, knowledge exchange and team efficiency.

That Microsoft has decided to acquire Yammer shows great insight, and a willingness to think creatively about tackling the new world of social media. Microsoft will be able to leverage Yammer’s platform in many areas of the business, so it is somewhat of a surprise to learn that they have positioned it as part of the Office family. Sure, Yammer could make various Office products much more powerful, particularly when paired with the Office 365 offerings, but I am concerned that Microsoft may be looking to productise it as just another tool in the Office suite, when it has the potential to make a big impact throughout much of the Microsoft product line.

So here’s a quick overview of how I initially think Microsoft products could benefit from Yammer:

  • Excel, Word and Powerpoint could all gain major collaboration benefits:
    • commentary from various people,
    • tracking changes with comments in Office 365,
    • suggestions for further amendments, with the ability to apply them,
    • branched versions,
    • seeking approval,
    • requesting clarification on a paragraph, slide, or formula,
    • requesting artwork for insertion
  • Microsoft Project could gain some qualitative aspects – look at Assembla or Pivotal Tracker for some of the interesting developments in the application of social media principles to project management.
  • Outlook could integrate streams from multiple sources including Email and Yammer, but then also from other social media streams, perhaps Twitter, Facebook and Chatter for example, to the extent corporate policies allow
  • Dynamics would benefit – discussions around non-payment of invoices and doubtful debtors, stock levels, product return rates and supplier feedback would be a good starting point. Beyond that, there is plenty of scope for linking Yammer to the actual business objects and enabling people to subscribe to invoices, customers, picking slips etc. For example, send a notification to a subscriber when an invoice over a certain amount is paid, or its payment deadline passes.
  • Sharepoint would also benefit. The full extent to which these two tools can synergise requires some deeper thought, but at the surface, the collaborative nature of each appears complementary.
  • Even SQL Server and Visual Studio could provide hooks that enable the database or an application to feed easily into a Yammer stream, or respond to a Yammer feed.
  • Microsoft’s acquisition of Skype will fit nicely into this view as well, with a tightly integrated communication platform that runs from asynchronous emails and notifications, through live discussions, to video.
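The business-object subscription idea raised in the Dynamics point above can be illustrated with a minimal in-process publish-subscribe hub. This is only a sketch: the class, topic names and event fields are all invented for illustration, not part of any Yammer or Dynamics API.

```python
from collections import defaultdict

class BusinessObjectFeed:
    """Minimal publish-subscribe hub: people subscribe to business
    objects (invoices, customers, picking slips) and receive event
    notifications when something happens to them."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver the event to everyone subscribed to this topic.
        for callback in self._subscribers[topic]:
            callback(event)

# Usage: notify a subscriber when an invoice over $10,000 is paid.
feed = BusinessObjectFeed()
received = []
feed.subscribe("invoice.paid",
               lambda e: received.append(e) if e["amount"] > 10_000 else None)
feed.publish("invoice.paid", {"invoice": "INV-042", "amount": 25_000})
```

A real implementation would sit behind a message broker rather than in one process, but the subscribe/publish shape is the same.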

I am also encouraged because this will raise the profile of Social Media into the mainstream. Instead of being seen as something for the Salesforce evangelists and their like, Social Media will become more of an everyday business tool as a result of this acquisition.

And that can only be a good thing.

Here’s hoping Microsoft are thinking strategically about this rather than just a new feature set to add to the Office product line.



Transitioning to the Cloud

Today I am presenting for you the same talk I gave at the CeBIT Cloud 2012 conference in Sydney. Entitled, “Transitioning to the Cloud”, the presentation covers three areas:

  1. If you want to transition your business to the cloud you need to Think Cloud – Cloud, as I see it, is as much a state of mind and you need to embrace this thinking to really make full use of its potential;
  2. Some examples from my personal experience of using some of the large Cloud providers’ offerings, and why they are more than what they superficially appear to be; and
  3. Some tips on adopting Cloud (previously covered in an earlier post).

One of the key messages is that when 19th Century industrialists were offered utility power from a central provider for the first time, all they could think of was that their lathes would keep turning – they didn’t see electricity as a solution to all sorts of unimagined potentialities (such as lighting). The parallel question to ask is: what are we missing out on today, what hidden potentialities exist in the Cloud that we haven’t yet figured out?


Information Security – A New Frontier?

Traditionally, companies have focused their security efforts on protecting internally managed, internally generated information from reaching unintended audiences. This includes unpublished financial performance, sensitive employee and customer data, intellectual property, current tenders in progress, business strategic plans and so forth.

At the same time, public information specialists have ensured that the company’s public face, its brand and reputation, are protected and enhanced. Never the twain shall meet.

IT, working in a vacuum, has increasingly espoused the philosophy of control and containment. The common wisdom is to manage what you can control, work within your sphere of influence – because things that happen outside your control are just that: outside your control.

The rise of social networking is changing this, but company IT departments are slow in recognizing this shift. Here is an illustration.

I was at a luncheon on Information Security hosted by PricewaterhouseCoopers recently and was taken by a comment from one of the presenters. He said he was encouraged that in previous years we were talking about IT Security, now we were talking about Information Security, and he hoped that in future years we would have moved on to talking about Information Risk.

This started me thinking, so I posed a question to the panel: We talk a lot about protecting endogenous, or internally generated, information, but what about exogenous, externally generated information? This is the stuff that happens in the public domain – customers, the media, even employees to some extent talk about the company and its products in the public arena. This information pertains to our company, its products and services, but it is generated externally. I made the comment that in the past we could control this exogenous information to some extent, but today, with Twitter, Pinterest, Facebook, YouTube, blogs etc, the public has a lot of leverage. I asked them for their thoughts on security over exogenous information in this new world.

Their response? They told me that companies need to think long and hard about allowing staff to access Facebook and Twitter at work.

It seems to me that PR and marketing people are a LONG way ahead of IT people when it comes to this type of information security. Blocking staff access to social media at work is like holding up an insect screen to stop a tsunami.

It is past time that IT managers broadened the scope of their security thinking and engaged with other areas of the business to form a coherent plan designed for the modern era.


Social Media Facilitates Broader Change

Thanks to Cloud concepts, we are discovering that using computers in interconnected ways is much more powerful, because it is much closer to the way people have always worked.

Social media facilitates interactions between people with common interests who would otherwise not be able to find each other, amplifying each others’ messages and providing opportunities to synergize, collaborate, share, compare etc.

When we started talking about the semantic web, quite a few years ago now, I don’t think we quite envisaged that the ontology would evolve naturally out of interactions between people voting with their feet. Yes, the academic community continues to move inexorably towards intelligently classified and centrally categorized data, but the real momentum is coming from dynamic ontological discoveries, crowdsourced and crowdwitnessed. People are gravitating to concepts on Twitter through hashtags and lists, coming together in groups on LinkedIn, Facebook and Google+, and sharing video channels on YouTube and applications on all sorts of platforms.

What has been accomplished with these tools is phenomenal, yet I have a sense that we are only just beginning to get a glimpse of the possibilities inherent in this phenomenon, possibilities largely based on publish-subscribe metaphors.


My Top 7 Tips for Going to the Cloud

A lot of people ask me for advice on the most important things to consider when moving a business into the Cloud. So here are some of the things I think business people need to consider:

1. Make sure you know how to get your data out again

Often people think about how they are going to put their data into the Cloud – if they are using Software as a Service, like Salesforce, NetSuite, Intacct or Clarizen, or Google Apps for that matter, they will be thinking about how to get their data into a shape that can go into the system. The documentation for these systems makes clear reference to how to prepare and then import the customer’s data, and there are usually consultants who can assist with this process. Typically this process is well planned, but little thought is given to how exactly you go about extracting the data again in a way that is of value to you going forward. Often, lip service is paid to the issue by asking questions like “can I get a backup of my data?”, and a reassuring yes is provided to the now comforted prospective customer. But it is one thing to be told it can be done; you need to check that the data is actually in a format that is useful to you. And if the system is mission critical, it needs to be not just useful but readily convertible for immediate use.

Some of the things I have done to ensure that my data is safe include writing programs that automatically read the data for updates every fifteen minutes and write them into a relational database hosted separately, replicated both in house and in the Cloud. All customisations are programmatically managed so that the relational database copy always reflects the structure in the live system. I did this from Salesforce, for example, where there were more than 300 custom objects. Another approach is to write a program that knows how to extract all the data from a system, such as an accounting system, using the API provided. Until you have tangibly proven that you can get your data into a format you can actually use, having access to a copy of it is meaningless.
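The replication loop described above can be sketched in a few lines. This is a simplified illustration, not the program I actually wrote: SQLite stands in for the separately hosted relational database, the table schema is invented, and `fetch_updates` is a stub where a real job would call the vendor’s API on a fifteen-minute schedule.

```python
import sqlite3

def sync_updates(conn, fetch_updates):
    """Upsert rows pulled from a cloud system into a local relational
    replica. fetch_updates is any callable returning (id, name,
    updated_at) tuples; in production it would call the vendor's API,
    and this function would be scheduled every fifteen minutes."""
    conn.execute("""CREATE TABLE IF NOT EXISTS accounts
                    (id TEXT PRIMARY KEY, name TEXT, updated_at TEXT)""")
    conn.executemany(
        """INSERT INTO accounts VALUES (?, ?, ?)
           ON CONFLICT(id) DO UPDATE SET
           name = excluded.name, updated_at = excluded.updated_at""",
        fetch_updates())
    conn.commit()

# Usage with stand-in fetchers: a second sync updates the existing row.
conn = sqlite3.connect(":memory:")
sync_updates(conn, lambda: [("001", "Acme", "2012-01-01")])
sync_updates(conn, lambda: [("001", "Acme Corp", "2012-02-01")])
```

The point of the exercise is the shape of the loop: every record that changes upstream lands in a database you control, in a schema you can query immediately.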

Even without programming, many systems provide some means of extracting your data. For example, Salesforce provides a once-per-week CSV file you can download. If you don’t have an alternative means, it is worth setting up a routine with someone responsible for taking this data and copying it.

Online databases such as Amazon RDS or SimpleDB can be accessed easily enough through OLEDB connections or similar, or copies of the backups can be stored locally in a format that can be opened by alternative data stores.

No matter how you do it, the principle is important: you should have a fully tested means of accessing your data offline. The more mission critical the data, the more real-time the recoverability needs to be.

2. Think Differently

Steve Jobs’ passing reminded everyone of Apple’s Think Different campaign, but seriously, you need to think differently when it comes to the Cloud in order to leverage it successfully. It truly is different to anything we have seen, and if you are only seeing it as a cost mitigator or a means of outsourcing infrastructure, you are missing a lot of (pardon the pun) blue sky behind the Cloud. Social networking, crowdsourcing, ubiquity of device and location, Metcalfe’s law in general, scalability, the ability to fail fast and loosely coupled web services are all aspects of the Cloud that lend themselves to being different.

One example is the way Salesforce enables you to leverage the power of Twitter and Facebook: record people’s Twitter and Facebook details against their record, and if they tweet or post something with a given hashtag, the system is watching and can automatically create a case, assign it to a support officer who can find a solution, link the solution, and automatically tweet them a response with a link.
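The watch-and-respond flow can be sketched like this. To be clear, none of these names come from Salesforce’s actual API; the stream, hashtag and `create_case` callable are all stand-ins for the real integration.

```python
def route_tweets_to_cases(tweets, hashtag, create_case):
    """Scan a stream of tweets and open a support case for each one
    carrying the watched hashtag. create_case stands in for the CRM
    call that would create the case and kick off assignment."""
    cases = []
    for tweet in tweets:
        if hashtag in tweet["text"]:
            cases.append(create_case(owner=tweet["user"], text=tweet["text"]))
    return cases

# Usage with stand-in data: only the hashtagged tweet becomes a case.
stream = [{"user": "@alice", "text": "printer jams again #acmehelp"},
          {"user": "@bob", "text": "great weather today"}]
opened = route_tweets_to_cases(stream, "#acmehelp",
                               lambda owner, text: {"owner": owner, "text": text})
```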

Another example is the way captchas are being used to get the masses to perform optical character recognition on historical documents that are too poor for a machine to read. The system uses a known control word to determine whether you are human or not and poses a second one that is not known. The results are compared against the results entered by others who have received the same word – a high correlation between results from different users indicates what the text is likely to be.
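The consensus step in that crowdsourced OCR scheme can be shown in miniature. This is my own simplified sketch of the idea, with an assumed 75% agreement threshold rather than whatever the real systems use.

```python
from collections import Counter

def consensus(transcriptions, threshold=0.75):
    """Crowdsourced OCR: accept a word for an unknown image once a
    high enough fraction of independent users agree on it; otherwise
    keep collecting answers (return None)."""
    if not transcriptions:
        return None
    word, count = Counter(transcriptions).most_common(1)[0]
    return word if count / len(transcriptions) >= threshold else None

# Four users transcribed the same unreadable scan; three of four agree.
word = consensus(["their", "their", "their", "thier"])  # -> "their"
```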

A third example comes from my own testing on the Amazon EC2 platform of some ideas for a new database design that enabled end users to change the structure of the database without programming, somewhat like the way Salesforce allows end users to define custom objects. The test was in two parts: the first, which was easy to test, was whether it could handle more than a billion records; the second, a little more difficult, was whether it could handle one thousand simultaneous users on cheap virtual hardware. For this I needed a simulation that ran across eleven machines. Traditionally I would need to acquire these eleven machines and set them up – an expensive and time-consuming exercise. Using Amazon EC2, I was able to set up the machines from scratch in thirty minutes, run my tests in three hours, and then analyse the results. Total cost? Less than five dollars.

There are plenty of ways the Cloud can transform how you do business if you allow it. Get your sales team to focus on the harder sells while the Cloud is engineered around a Marketing Automation experience that handles all the low-hanging fruit. The Cloud itself, if you configure it correctly, will tell you where the low-hanging fruit is.

3. Make sure your systems interactions are atomic

One of the attractions of Cloud-based systems is that you can build compelling processes out of tools from a number of vendors working together – linking your CRM to your financials, or your website to marketing automation and analytics, for example. While these may seem obvious examples, the point is that when multiple systems are involved we need to think about how to prevent a situation where only part of a process succeeds. This is a much more common problem when different types of systems are talking to each other. So make sure you are not telling the customer that their request for information has been placed in a queue unless you know for sure that it has.
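The queue example above reduces to a simple rule: confirm only after the side effect has succeeded. A minimal sketch, using Python’s in-process queue as a stand-in for whatever messaging service the real integration uses:

```python
import queue

def submit_request(q, request):
    """Only tell the customer their request is queued after the
    enqueue has actually succeeded; on failure, surface an honest
    error instead of a false confirmation."""
    try:
        q.put_nowait(request)
    except queue.Full:
        return "Sorry, we could not accept your request. Please retry."
    return "Your request has been placed in a queue."

# Usage: a tiny queue so the second submission fails visibly.
q = queue.Queue(maxsize=1)
first = submit_request(q, {"email": "a@example.com"})
second = submit_request(q, {"email": "b@example.com"})  # queue is full
```

The same discipline applies across vendors: the acknowledgement to the user is the last step, never the first.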

4. Start with Upside, not Downside

When I first started looking at Cloud concepts about six years ago, I looked with the eyes of a sceptic and asked “What can’t I do if I adopt this approach?” Taking this view, I found plenty of things I didn’t think I could do, and this thinking led me to see restrictions and obstacles. Once I started to ask the contrary question – “What can I do if I adopt this approach?” – I started to see all sorts of opportunities emerge. I understand from Salesforce that I was possibly the first person in the world to see their CRM product as a business platform rather than a CRM product. This led to building all sorts of systems within Salesforce, including purchase requisitioning, customer software licensing, and electronic production management with automated QA built in and tested on the finished manufactured products (with the results of the tests stored against each product and displayed to the end user when he or she finally purchased the product and plugged it into a computer). Other systems included Human Resources systems with annual leave management, individual development plans and hierarchical cost management for each line manager, who could also see things like who had the most leave accrued in the team.

Thinking of what is possible also leads to being able to try things experimentally with a “fail-fast” attitude – the eleven-machine simulation described above is one case in point. Being able to put ideas into practice quickly makes all sorts of innovative approaches viable that might otherwise be ignored or sidestepped as pipe dreams.

In traditional approaches, a startup may need to architect the business for its first generation of clients. As the numbers grow, a different architecture may be required, or investment in infrastructure may be needed just in case growth occurs. One of the risks of any business that grows too quickly is running out of liquid cash. All this can be very limiting in an entrepreneur’s thinking, with a real chance that the fear of succeeding too quickly causes them to underperform. The Cloud often allows an architecture to scale far further than traditional approaches, with the ability to consume infrastructure and related services as required, scaling rapidly up and then, if necessary, rapidly back down again. Traditional models require risky investments; Cloud models are far more flexible. And that allows for more optimistic thinking.

5. Check what API options are available

Most mainstream cloud vendors, whether offering Software as a Service, Infrastructure as a Service or Platform as a Service, will have some sort of API that enables you to read and write data, change metadata, set permissions etc. This is important if you want to truly leverage the power available to you. For example, you can use Amazon’s Simple Notification Service and Simple Queue Service to provide asynchronous connections between systems, and notify managers when a VIP customer representative has mentioned your company in a tweet. Having a rich API in your bag of tricks enables you to innovate with freedom, seeing the Cloud as one Cloud rather than disparate products offered by a host of different vendors.
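The VIP-mention example can be sketched as a small decision function with the publishing call injected, so the same logic could hand off to a notification service (such as SNS’s publish call) or to a test double. The tweet fields, handle names and company mention below are all invented for illustration.

```python
def notify_on_vip_mention(tweet, vip_handles, publish):
    """Publish an alert when a VIP customer representative mentions
    the company in a tweet. publish is injected so this logic stays
    independent of any particular notification API."""
    if tweet["user"] in vip_handles and "@ourcompany" in tweet["text"]:
        return publish(f"VIP mention by {tweet['user']}: {tweet['text']}")
    return None  # not a VIP, or no mention: stay quiet

# Usage with a list standing in for the notification service:
sent = []
notify_on_vip_mention({"user": "@vip_ceo", "text": "loving @ourcompany"},
                      {"@vip_ceo"}, sent.append)
```

Keeping the decision separate from the delivery mechanism is what lets the Cloud feel like “one Cloud”: the same rule can fan out to email, SMS or a queue by swapping the injected call.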

6. Seek to understand the inner workings of the vendors’ various risk mitigation strategies

This is something I was guilty of in the early days. I used to say “these guys know better, so you can trust them to make sure your data is safe”. Recent events have made me a little more open-eyed about the inner workings. If you are not sure how your data is being backed up, ask. Imagine having to satisfy your auditor about the safety of your data, or having to satisfy your customer that their data is safe, secure and reliably stored. If you don’t know yourself what steps are being taken to guarantee the preservation of the data, you won’t be able to tell them, and you will come across poorly.

I have written an earlier post about an Australian ISP that collapsed after an attack took out the server holding all of their clients’ websites. They had no offsite backup. Recently Salesforce, one of the most respected companies in the space, had two outages on Sandboxes that lost the customer data on those sandboxes, leaving them down for several days. Amazon had a well-publicised outage earlier in the year that brought into question the way their system handled mass failure: separate zones, designed to remain up when others failed, went down simply due to the overload caused by the failure of one. These failures, or at least the Salesforce and Amazon ones cited, have resulted in those companies making changes, but an astute customer robustly challenging the methods might well have picked them up before a major problem occurred.

7. Remember, it’s your data, and the buck still stops with you

I wrote a post at the time of the major Amazon outage that was picked up by CIO Magazine. Several companies hosting their data on Amazon Web Services were posting during the outage as if they were innocent bystanders observing the fallout. The reality is that if your services are down, it is your responsibility no matter how you host them. Imagine an airline losing an aircraft and saying “oops, luckily we outsourced the maintenance on that plane or else it would have looked really bad for us LOL!”. I don’t think so.

Remember, it is your data: you are entitled to it, and you are responsible for its availability and its security.


CIOs are Responsible for the Central Nervous System of the Enterprise

The enterprise – any enterprise – can be likened to the human body, and the CIO is the architect, builder and custodian of its central nervous system.

A central nervous system provides an efficient means by which the body ensures that the brain’s instructions are followed by the periphery to the letter. It also ensures that any information received anywhere by the body is fed back to the brain in a coordinated way. This means the body can act in advance as an early warning system, or act suddenly to prevent additional trauma. When there is a problem in the central nervous system, instructions become lost or garbled, resulting in instructions that are poorly followed or ignored, as well as signals that are meaningless, irrelevant or obfuscating. The results of a poorly functioning central nervous system can be catastrophic for the body concerned.

In the enterprise, it is no different. A utopian, perfectly functioning nervous system means that the head of the enterprise is able to quickly find out exactly what is happening, without prejudice or favour, and to act predictively and astutely with confidence. When such a system exists, any instructions issued are carried out faithfully as intended.

Of course, reality never comes close to utopian perfection. Nevertheless, it is the task of the CIO to provide systems capable of coming as close to the ideal as possible.

In a well-functioning organisation, the central nervous system will ensure that relevant, timely and accurate information is accessible when and where the users require it. This will include reports, alarms and other notifications, and easy access to historic records and explanatory memoranda. It will include pre-emptive action based on predictions of future behaviour: for example, warnings to account executives that past behaviour on an account suggests the likelihood of future cash receipts is poor, notification that the behaviour of a prospect indicates they are ready for personal contact, or notification to a Support Manager that a Case identified as urgent by a strategic customer is not getting the required degree of attention.

Autonomic systems – i.e. systems that should just take care of themselves (in the human body this would include the heart beating, temperature regulation etc) – will report their successes in a non-intrusive way so that someone can easily see a record of what has happened in the past (say for audit purposes), but will report failures in a way that is compelling: for example, backup drives filling up or security system power outages. (When a plane is about to stall, the pilot receives feedback in the form of a violently shaking joystick or steering column – this is compelling feedback.)
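That quiet-success, loud-failure pattern can be sketched in a few lines. This is an illustrative shape only; the check names and the injected alert callable (standing in for a pager, SMS gateway, or whatever “compelling” channel the organisation uses) are invented.

```python
import logging

def report(check_name, ok, detail, alert):
    """Autonomic-system reporting: successes go quietly to the audit
    log, while failures trigger a compelling alert via the injected
    alert callable."""
    if ok:
        # Non-intrusive: a record exists for audit, nobody is paged.
        logging.info("%s OK: %s", check_name, detail)
        return "logged"
    # Compelling: the failure is pushed at someone, not filed away.
    alert(f"FAILURE in {check_name}: {detail}")
    return "alerted"

# Usage: a healthy check is merely logged; a failing one pages someone.
pages = []
r1 = report("backup-drive", True, "42% used", pages.append)
r2 = report("backup-drive", False, "disk full", pages.append)
```

The design point is the asymmetry: both outcomes leave a trace, but only the failure interrupts a human.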

A healthy central nervous system allows proper circulation, so all areas of the body get the nourishment they need; any area that is not well exercised or fed properly will atrophy. The same is true of enterprise information systems. Areas of the business that are not often accessed – infrequently run reports, ad-hoc batch programs run occasionally, lists of serial numbers for programs or equipment acquired, warranty documents, or perhaps a system that checks the accuracy of current employee phone numbers – will be forgotten if there is no easy way to access them.

In a modern business, the rise of social media has changed the way that a central nervous system works. The axons that connect various components and people together are much more likely to take the form of Twitter subscriptions or Chatter group membership.

The role of the CIO in this changing world does not change in the sense that he or she is still responsible for ensuring the flow of signals is unfettered. However, the CIO must become more of a town planner, facilitating means of connectivity, and less of a bus network operator providing scheduled services that move people and information between predetermined connection points.

Build a strong central nervous system and the business can be agile, responsive, run efficiently and avoid pain points.


CIOs, Systems Designers: Users Have to Have More Say…

Long gone are the days when software implementers could foist arcane or cumbersome software onto users. While some businesses still develop specific vertical products for all sorts of business purposes, the reality is that a vast number of systems can be replaced by generic tools that feel natural and extend the utility of the typical user in ways that are almost impossible to foresee without witnessing crowd action. Synergies emerge when a system is ubiquitously adopted across specialisations and functions. Perhaps people will react more quickly to emerging trends, perhaps knowledge will be more easily accessed, perhaps the customer experience will be so greatly enhanced that customers evangelise and become disciples.

One thing we have learned from the emergence of social media tools is that building applications inside or around frameworks like Facebook, Chatter and Twitter has remarkable spin-offs that are difficult to predict.
