
Posts from the ‘Innovation’ Category

3 Dec

A brief response to those who criticize Cloud Vendors for warning about Private Cloud

There are many who scoff at people like Vogels and Benioff for making statements like ‘Beware the False Cloud’. Focusing on the naming aspect rather than the conceptual aspect misses the point that Benioff, Vogels et al are making.

I agree, who cares what you call it, but that is not their point. Their key point in warning about private cloud is that you forgo all of the real benefits: abstraction, information leverage, true scalability in both directions, and the expertise embedded in multi-tenant security platforms.

The term ‘cloud’ was adopted to capture the fundamental shift that comes from abstracting the hardware out of the picture.

DIY is DIY, and some things should not be tried at home, unless you REALLY have a phenomenal driver of life-and-death importance that forces you to segregate and isolate.
Until you have really experienced these cloud benefits it is difficult to understand this fundamental difference, and why it is seen as such an important distinction.

21 Nov

Property Rights to Information in the Cloud – A Cloud based view on the Coase Theorem

When I studied economics in the early 1980s, we learned of the Coase Theorem, which always fascinated me. The theorem is attributed to Ronald Coase, who went on to win the Nobel Prize in Economics (1991).

It occurred to me recently that the Coase Theorem may have some fascinating implications for the property rights of information stored in the Cloud.

The Coase Theorem, as I recall it, goes like this: regardless of who owns resources initially, given clearly defined property rights and zero transaction costs, resources will always be allocated most efficiently at the end of the day.

This makes for some really interesting discussions about the Internet and property rights to information. The theorem is particularly relevant here for two reasons. Firstly, in the internet world, transaction costs asymptotically approach zero: the cost of transferring or asserting ownership of information is infinitesimally small, and getting lower all the time. Secondly, property rights are subject to a whole range of debates around privacy, rights to share, rights to mail, sovereignty and rights to access. So if property rights can be defined, the best allocation of resources, according to the theorem, can be ascertained.

For the first time, we have a situation where the theorem can be tested on a massive scale, because transaction costs are now lower than could have been imagined when the theorem was first postulated. Economists are famous for proposing academic models, but here we have one that can actually play out in real life, where the focus is on the property rights rather than the transaction costs.

So what does this imply? More research will be required, I am sure, but already some interesting trends are emerging. We are seeing some stupendous valuations placed on the companies that hold our information. Facebook stands out as a particularly interesting case study because of the ownership debates and the sheer scale of data being pushed through that platform. Google is interesting because it can figure out what we are interested in and match that to marketers.

What does this say about the valuation of our personal data? Will a greater understanding of the Coase Theorem as it applies to Web 2.0 put a value on our personal data? Our spending patterns? There are already small examples of people receiving money for their data, their opinions, their search history, their web trails. And there are plenty of examples where people are paid in the form of free software in exchange for the right to deliver advertising.

One thing is certain – we should not be giving up our rights to our data without fully understanding how valuable it is. The Coase Theorem suggests that there is more value to it than appears on the surface, and a little care should be exercised in the way we manage this intangible property.

I will have to think further on this.

16 Nov

Misplaced concerns about privacy in the Cloud?

Here’s a thought: imagine needing a solution for processing diverse vendor bills or handwritten documents digitally with 100% accuracy. Imagine these come in continuously but with no idea of frequency. Obviously, if you can provide some sort of API then others can hook into your system directly, but what if you are dealing with consumers who won’t use a computer? With Amazon’s Mechanical Turk you can programmatically assign these tasks to the public in a bidding system where you set the price of the request. You can make three independent requests for someone to enter the data into your database, compare the three results, and only if all three match do you consider the record processed. If one of them doesn’t match the other two, you go out with a new request and keep doing so until you get three that match. Anyone whose entry did not match would be marked with a demerit, and workers who earn enough demerits would be blocked from accepting future tasks. They would also be incentivised to do well because it would affect their public rating.
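To make the workflow concrete, here is a minimal sketch of that three-way matching loop. The submit_task and fetch_results helpers stand in for whatever crowdsourcing API you would actually call; they, the demerit handling and the threshold of three are illustrative assumptions, not a real integration.

```python
from collections import Counter

REQUIRED_MATCHES = 3  # three independent transcriptions must agree

def process_document(doc_id, submit_task, fetch_results):
    """Keep requesting transcriptions of one document until three agree.

    submit_task and fetch_results are hypothetical wrappers around whatever
    crowdsourcing service is used; fetch_results is assumed to block until a
    worker has responded and to return (worker_id, transcribed_text).
    """
    results = []  # (worker_id, transcription) pairs collected so far
    while True:
        worker_id, text = fetch_results(submit_task(doc_id))
        results.append((worker_id, text))

        # Count how many workers produced each distinct transcription.
        tally = Counter(t for _, t in results)
        consensus, votes = tally.most_common(1)[0]
        if votes >= REQUIRED_MATCHES:
            # Anyone whose entry disagreed with the consensus earns a demerit.
            demerits = [w for w, t in results if t != consensus]
            return consensus, demerits
```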

The cloud enables all sorts of variations of this model. It provides a means to connect low-paid service providers with companies that need tasks completed quickly and efficiently at very low cost. In essence it is similar to the microcredit schemes initiated by the Grameen Bank in Bangladesh and others, in the sense that it opens up avenues of empowerment, but it potentially opens up opportunities for corporates to benefit as well. Incidentally, the founder of the Grameen Bank, Muhammad Yunus, won the Nobel Peace Prize for his work.

For many businesses this scenario is a nightmare – the encapsulation of the very things that prevent them from considering the cloud at all. And in many cases it is simply not an option. But it makes for an interesting thought experiment: how far can we go, in the interest of efficiency, in opening our systems up to micro-outsourcing arrangements like this?

I suspect that over time scenarios like this will become more acceptable. Today, though, I can’t see many people signing off on an implementation like this. If it were me, I would be looking to SOA models and trying to get suppliers into a B2B relationship. Years ago EDI would have been the way: if you wanted to be a supplier to one of the big department stores, you needed to hook into their systems. But this is a digression – the example postulated was about non-technical integrations.

But it begs the question of why we are so focused on privacy concerns in the cloud to the exclusion of the benefits. Sure, the example above opens a Pandora’s box of privacy concerns and would be almost universally rejected, but what about the normal, regular uses of the cloud? For most scenarios, the lengths the major cloud service providers go to in ensuring data is accessible only by those who should see it ought to allay any fears – after all, the big cloud providers typically have a lot more to lose if they leak corporate data.

It is not the cloud vendors we should be fearful of; it is the way we choose to use their services, the way we choose to run our companies, and the way we choose to view the world in which we live.

30 Sep

EE Times article published

An article of mine has been published in EE Times discussing the acquisition of Morfik by Altium. EE Times is a magazine for the electronic engineering industry, so the article in part approaches the story from that angle, but I think it gives people a good sense of Altium’s strategic direction.

2 Sep

What Fast Internet to the Home will Mean

In the recent Australian election campaign, one of the key campaign focal points was the policy concerning a high-capacity broadband backbone for the majority of the country. There are many aspects of this that are interesting.

The fact that it is an election issue is interesting in itself – it means there is now recognition that Australia is falling behind other developed nations with regard to the Internet, and that keeping pace with this particular form of infrastructure is important.

Then there is the fact that there are such divergent views as to how this should be implemented. One party wants to run optic fibre to 93% of homes and deliver at least 100Mbps, possibly 1Gbps, at an estimated cost of $43 billion. The other party proposes a network based on a major rollout of wireless towers delivering 12Mbps to 97% of homes at a cost of a little over $6 billion.

And the other thing I found interesting about all of this is that no-one seemed to have any idea what would be done with all that bandwidth. I have no problem with spending on a major infrastructure scheme, one that brings back memories of the Snowy Mountains Hydroelectric Scheme in the scale of its cost, but I question why people propose such major projects when they can’t come up with illustrative examples other than entertainment and medicine. Apparently we are all going to have our lives saved.

I noticed some people asking why we would ever need 1Gbps, and my response is to remind them of the (possibly apocryphal) story that when Bill Gates and Paul Allen were discussing how much RAM a computer could ever possibly need, they thought 640KB would be ample.

There are plenty of things that 1Gbps of bandwidth could allow, and I will attempt to explore some of them. But first there is one other variable that I haven’t seen mentioned in the media: the bandwidth consumption that this fast connectivity will allow. How are consumers going to be charged? If the speed encourages people to pull down heaps of stuff, or upload huge amounts of data, costs will blow out unless the data price comes down considerably.

One of our staff based in Japan has 1Gbps to his home and an upload limit of 50GB per day, no download limit. I wish.

Now, what kind of things could you do if you had that much bandwidth?

The obvious ones, like entertainment – games, movies – don’t really bear mentioning, except to note that once the time taken to download a file of a given size drops below some threshold, it ceases to be a barrier to use. That threshold will differ from person to person.

Torrents will become incredibly convenient – files will be downloaded more often and therefore available from more locations, all of them high speed. I wonder what that will do for the movie, music and software industries.

One of the big changes that will come about is the interconnectivity via the Internet of all manner of devices. The speed and ubiquity of the net will increase the use of these devices in the so-called Internet of Things. Devices will maintain state in the cloud and check in to report their current situation as well as see if there is something the device should know about. Agent software will interact with this information and perhaps make changes to the values for other devices.

Here is an example: a car’s GPS sends its position to the cloud – coordinates that enable some agent software to determine velocity, direction and so on. The agent software knows where home is and can determine that the car is heading to the house. As the car passes some predetermined proximity boundary, the agent can check the current state of the home air conditioner or heater and, based on the ambient temperature and a preconfigured preferred temperature, turn the air conditioner on.
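A minimal sketch of what such an agent might look like, assuming a hypothetical aircon controller object and purely illustrative coordinates and thresholds:

```python
import math

HOME = (-33.8688, 151.2093)   # illustrative home coordinates
PROXIMITY_KM = 5.0            # boundary at which the agent starts acting
PREFERRED_TEMP_C = 22.0       # preconfigured preferred temperature

def distance_km(a, b):
    """Approximate great-circle distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def on_position_report(car_position, ambient_temp_c, aircon):
    """Called each time the car's GPS reports a position to the cloud.

    aircon stands in for whatever home-automation interface controls the
    unit; is_on() and turn_on() are assumed methods, for illustration only.
    """
    if distance_km(car_position, HOME) <= PROXIMITY_KM:
        if ambient_temp_c > PREFERRED_TEMP_C and not aircon.is_on():
            aircon.turn_on(target_c=PREFERRED_TEMP_C)
```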

Opportunities to exploit this information will grow as the bandwidth to support them grows. So with the GPS example, the data could be sent back to a central agent that determines how many cars are not moving quickly enough in one location and tells the GPS that there is a traffic problem up ahead. Or the data could be anonymised and fed to a government town-planning service that uses it to plan future bridges, tunnels, toll gates and so on.
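The central-agent variant could be as simple as aggregating anonymised speed reports per road segment; the congestion threshold and minimum sample size below are illustrative assumptions:

```python
from collections import defaultdict
from statistics import mean

CONGESTION_SPEED_KMH = 20   # illustrative "not moving quickly enough" threshold
MIN_SAMPLES = 10            # need reports from enough cars before flagging

def congested_segments(reports):
    """reports is an iterable of (road_segment_id, speed_kmh) tuples,
    already anonymised so no individual car is identifiable."""
    speeds = defaultdict(list)
    for segment, speed in reports:
        speeds[segment].append(speed)

    return [segment
            for segment, samples in speeds.items()
            if len(samples) >= MIN_SAMPLES and mean(samples) < CONGESTION_SPEED_KMH]
```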

Metcalfe’s law states that the value of a communications network is proportional to the square of the number of connected nodes – n nodes can form roughly n(n-1)/2 pairwise connections, which grows as n². When devices are connected like this, the potential is unimaginable.

Like Facebook or Twitter, where individuals decide what they wish to publish and other individuals decide what they wish to subscribe to, the idea of devices publishing a range of data for other permitted devices to subscribe to, either directly or through some intermediate agent software, is very compelling. Printers can publish their toner levels and how many drum cartridges they have, while some device designed to dispatch a printer repair agent keeps an eye on things. Not only can devices publish information about their current status, they can also publish a range of metadata – information that they seek out from elsewhere on the internet and republish. For example, while a printer may publish information about its toner levels, it may also run a Google-style search on its own model number and publish the results. This would be useful in the case of factory recalls, announcements of new firmware, or perhaps sales of peripherals, accessories and consumables.
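As a sketch of that publish/subscribe idea, here is a toy in-process bus with a printer publishing its status and an agent subscribing to it. The topic name, model number and thresholds are made up for illustration; a real deployment would use a cloud message broker rather than an in-memory dictionary, but the shape of the interaction is the same.

```python
import json
from collections import defaultdict

# Toy in-process publish/subscribe bus standing in for a cloud message broker.
_subscribers = defaultdict(list)

def subscribe(topic, handler):
    _subscribers[topic].append(handler)

def publish(topic, payload):
    message = json.loads(json.dumps(payload))   # deliver a detached copy
    for handler in _subscribers[topic]:
        handler(message)

def printer_heartbeat():
    """The printer publishes its own status plus metadata it has looked up."""
    publish("printers/office-3/status", {
        "model": "LX-4500",          # hypothetical model number
        "toner_percent": 7,
        "spare_drum_cartridges": 1,
        "recall_notices": [],        # e.g. results of a search on its own model
    })

def dispatch_if_needed(status):
    """An agent watching for printers that need a visit from a repair agent."""
    if status["toner_percent"] < 10 or status["recall_notices"]:
        print("dispatch technician for", status["model"])

subscribe("printers/office-3/status", dispatch_if_needed)
printer_heartbeat()   # triggers a dispatch message because toner is at 7%
```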

To bastardize a quote from the movie Field of Dreams, build it and they will come.
