Okay, maybe it’s petty and I’m just tooting my own horn, but I found this old article I wrote for Upstream CIO’s October issue (written in July ’05). In rereading the article today, I surprised myself with how aware I was of the forthcoming Cloud & SOA convergence.

Service-Oriented Architecture (SOA) is gaining ground on the heels of the success of the Extensible Markup Language (XML) as a technology-neutral means of representing data and moving it between applications. However, SOA plays a more important role than merely serving as a moniker for applications that communicate using XML: SOA is the software world’s entry into the utility computing model.

For CIOs, the utility computing model has come to be synonymous with the ability to provision computing resources on demand, thus maximizing those resources based upon the needs of the organization and its customers. In some cases, the utility computing model has begun to incorporate metering, which allows IT organizations to charge usage back to individual departments or projects, enabling the CIO to demonstrate the need for current resources and to plan properly for capital expenditures on new ones.

Moving to a utility computing model helps organizations consolidate computing resources, which lowers the cost of owning and managing those resources. However, because of today’s inefficient software designs, most computing resources are dedicated to a single function, such as Web applications or enterprise resource planning. Such dedicated allocation means those resources cannot be re-provisioned for other tasks when demand hits. Instead, IT departments are forced to acquire enough computing resources to satisfy the peak load of each function, and organizations end up overspending on computing resources to compensate for a limitation imposed by the software. Thus, today’s application models undermine consolidation efforts and limit the cost savings to the organization.

Enter SOA, which offers a software architecture that maps more closely to the utility computing models that have already been adopted, allowing better allocation of resources and greater provisioning controls. SOA enables organizations to develop and, more importantly, deploy their software in a loosely coupled fashion. This means the software is designed as a set of black boxes that have well-defined inputs and outputs, such that they can be linked together in a process flow with ease.
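
As a rough sketch of that black-box idea, the following Java fragment defines two services purely by their inputs and outputs and chains them into a small process flow. The service names, types, and logic are hypothetical illustrations, not taken from any particular product:

```java
// OrderProcess.java -- a self-contained, illustrative sketch

// Each "black box" is known only by its inputs and outputs.
interface CreditCheckService {
    // Input: a customer id; output: that customer's approved credit limit.
    double checkCredit(String customerId);
}

interface OrderService {
    // Input: a customer id and an amount; output: an order confirmation id.
    String placeOrder(String customerId, double amount);
}

// A process flow links the boxes together without caring how, or where,
// each service is actually implemented or deployed.
public final class OrderProcess {
    private final CreditCheckService credit;
    private final OrderService orders;

    public OrderProcess(CreditCheckService credit, OrderService orders) {
        this.credit = credit;
        this.orders = orders;
    }

    public String submit(String customerId, double amount) {
        if (credit.checkCredit(customerId) < amount) {
            throw new IllegalStateException("credit limit exceeded");
        }
        return orders.placeOrder(customerId, amount);
    }
}
```

Because the flow depends only on the contracts, either service can be re-implemented, moved, or re-provisioned without touching the process that uses it.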

SOA operates on the concept of service providers and consumers who have agreed upon service contracts. These contracts are based on well-defined messages passed between the provider and the consumer, but at an abstract level they are no different from the agreements you have with any other utility, such as the phone or electric company. Indeed, supporting these contracts requires the same service level agreements that would be expected of mission-critical utilities.

While there is no concrete definition of SOA today, the generally agreed-upon attributes of SOA (see the sketch after this list) include being:

  1. Based on technology-neutral standards, such as XML and Web Services;
  2. Self-describing through metadata files created using the Web Services Description Language (WSDL);
  3. Discoverable through a URL mechanism, which means they use common Web mechanics for implementation;
  4. Stateless, which means they can easily be reused since they are not reliant upon specific resources being dedicated to them while they are in use.
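
As a minimal sketch of attributes 2 through 4, the fragment below uses the standard JAX-WS API (assuming a JAX-WS runtime is available): the service is stateless, its WSDL metadata is generated from the annotated class, and it is reachable at an ordinary URL. The class name, address, and placeholder logic are illustrative:

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A stateless service: no fields and no session state, so any available
// instance of the service can handle any request.
@WebService
public class QuoteService {
    public double quote(String symbol) {
        return symbol.length() * 10.0; // placeholder logic
    }
}

class QuoteServicePublisher {
    public static void main(String[] args) {
        // Publishing exposes the service at the given URL; the generated
        // WSDL contract is typically served at http://localhost:8080/quote?wsdl
        Endpoint.publish("http://localhost:8080/quote", new QuoteService());
    }
}
```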

These attributes lead to the creation of software that can be provisioned and allocated to a resource on demand, keeping in line with the tenets of utility computing. Moreover, the concept of providing a service means that computing power can be made available like other utility-based services, such as telecommunications and electricity, which have the ability to direct more or less service based on need.

Hence, this change in architecture leads to a fundamental change not only in the way software is designed, but also in the way it is deployed and managed. For example, once a service is developed, it needs to be deployed into an infrastructure that is operationally managed. This means the service may require access controls, redundancy, quality of service, service level agreements and root cause analysis when it is not available. These are the same guarantees one would expect of any utility-based service.

In keeping with this model, SOA maps perfectly onto existing utility models: IT can charge infrastructure usage back through its existing utility metering, adding charges for the number of times a service is invoked. Additionally, IT has more control over the resources used by these services, so a single service will not require a significant allocation of a particular resource the way today’s application servers or SAP servers do.
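
As a hypothetical sketch of that per-invocation chargeback, a simple meter can count calls by consumer and service. In practice this bookkeeping would live in an interception layer rather than in application code, and all names here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Counts service invocations per consuming department or project so that
// usage can be charged back alongside existing utility metering.
public final class ServiceMeter {
    private final Map<String, AtomicLong> calls = new ConcurrentHashMap<>();

    // Record one invocation of a named service by a named consumer
    // (e.g. a department code taken from the request context).
    public void record(String serviceName, String consumer) {
        calls.computeIfAbsent(consumer + "/" + serviceName, k -> new AtomicLong())
             .incrementAndGet();
    }

    // The accumulated counts feed the departmental chargeback report.
    public Map<String, AtomicLong> usage() {
        return calls;
    }
}
```

An interception layer would call record() on every request; the usage() map then supplies the per-department invocation counts for the monthly bill.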

Thus, the organization can deploy services where there are available resources that fit the demand for that service. Moreover, this change will not impact existing applications if they use the model of “find and bind” that has become a cornerstone of SOA. That is, users can look up the location of a service in a registry and dynamically bind to and use that service without any prior knowledge of its existence.
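
A hedged sketch of “find and bind” using the JAX-WS dynamic client API is shown below. The lookupWsdlInRegistry method stands in for a real registry query (for example, UDDI) and is hypothetical, as are the namespace, service name, and port interface, which in practice would usually be generated from the WSDL:

```java
import java.net.URL;
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public final class FindAndBindClient {
    public static void main(String[] args) throws Exception {
        // "Find": ask the registry where the service currently lives.
        URL wsdl = lookupWsdlInRegistry("QuoteService");

        // "Bind": build a client from the WSDL contract at runtime, with no
        // compile-time knowledge of the service's location.
        QName serviceName = new QName("http://example.com/quotes", "QuoteServiceService");
        Service service = Service.create(wsdl, serviceName);
        QuotePortType port = service.getPort(QuotePortType.class);

        System.out.println(port.quote("XYZ"));
    }

    // Placeholder for the registry lookup; a real implementation would query
    // a service registry rather than return a fixed address.
    private static URL lookupWsdlInRegistry(String logicalName) throws Exception {
        return new URL("http://localhost:8080/quote?wsdl");
    }
}

// The port interface mirrors the service contract.
@WebService
interface QuotePortType {
    double quote(String symbol);
}
```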

This movement toward SOA requires a fundamental shift in the way that IT operates within most organizations today regarding software. Today, most IT organizations follow a system integrator model of delivering software, which means they take existing off-the-shelf applications and/or programming tools and deliver an automated solution to a business problem, similar to the service provided by Accenture or BearingPoint.

The move to utility computing using SOA requires the IT organization to operate more like the electric or phone company within its own organization. The IT department may still do development work, but it will mostly be responsible for creating new services; eventually, business tools will create these services. Thus, IT becomes the organization responsible for the infrastructure that provisions resources for these services based on demand, with the goal of no interruption in service.

This is a major change for many IT organizations that are managing silos of applications today. If one application that supports fifty users is unavailable, IT can expect five to ten calls to the help desk. However, if a service that supports fifty applications is unresponsive, the help desk can expect to receive ten times that many calls. Needless to say, the impact of this paradigm change to the organization and the IT resources is significant from both a cost and resource perspective.

The utility computing model can save organizations millions of dollars each year through hardware and human resource consolidation. By adopting SOA, organizations can incorporate a software model that easily maps onto the utility computing model, such that resources do not need to be dedicated to particular applications, but instead can be provisioned based on demand within the organization. This approach requires the organization to plan for changes in how the IT department operates and the types of skills required to build and support an effective utility computing model.
