Pay Per Sip Internet Applications: Changing IT Economics With On-Demand Distributed Computing

Thursday Dec 18th 2003 by Kieran Taylor

A growing number of vendors are advocating an On-Demand or Utility Computing model. Is this something you should consider?

Imagine that, when architecting your dream house, you had to size and purchase an electric generator for your home based on the size of your family, the number of appliances you own, and how often you use them. Oversize and you overpay. Undersize and you miss your favorite TV show when your daughter needs to dry her hair.

It sounds impossible, but traditional application deployment models are no different. They force architects to estimate CPU and memory utilization for each application, forecast peak demand, and, based on complex capacity planning, over-provision servers and software licenses -- typically at significant capital and operational expense. As a recent IBM study shows (see figure), this "guesstimation" is a model of inefficiency. Capacity planning is a daunting task within the confines of a LAN, and it becomes significantly more difficult when applications are Internet-enabled for use by an unpredictable global user base.

[Figure. Source: IBM Scorpion White Paper: Simplifying the Corporate IT Infrastructure, 2002]

A growing number of vendors are advocating an On-Demand or Utility Computing model. In this approach, enterprises pay only for the computing cycles consumed instead of paying for infrastructure built to weather periods of peak demand. This economic model is well understood in other industries and is now penetrating IT. But as is true with any new technology, understanding when and how this architecture can be leveraged is a challenge.

In theory, on-demand computing offers the promise of providing computing cycles on a pay-per-use basis, much like telephone, electric or gas utilities. This ability to scale "on demand" sounds great, but it doesn't guarantee application performance. This is especially true for the growing number of Internet applications being deployed daily. Because many on-demand or utility computing architectures are centralized, they still force all requests to a central point for processing, where bottlenecks and Internet conditions threaten user experience. To support Web-based applications and avoid the Internet bottlenecks inherent with "silo" serving, companies are increasingly utilizing a "distributed computing" model in an on-demand fashion.

With the distributed computing model, applications not only scale on demand, they also avoid the Internet's inherent bottlenecks, because application processing occurs across hundreds or thousands of servers. All processing happens close to the requesting users, enhancing the performance and reliability of the end-user experience. With very few adjustments in application development and design, developers and architects can move applications into production without large up-front expenditures on costly infrastructure. Sony Ericsson Mobile Communications, for example, uses an on-demand distributed computing solution to deliver its dealer locator application, driving online visitors worldwide to its dealers. Relying on this model not only for its dealer locator but for many of its Internet applications, Sony Ericsson has offloaded nearly 100% of application server processing, reduced its server infrastructure by 65%, and increased the performance of its global dealer locator by over 400%.

Many enterprises running Java 2 Enterprise Edition (J2EE) can adopt on-demand distributed computing with few, if any, changes to their applications. By using the existing set of services available in J2EE application server containers, businesses can designate what processing occurs at the "edge" and what is handled at the enterprise origin. In general terms, this means moving the J2EE "Web Container" application components -- JSPs, Servlets, tag libraries, and JavaBeans -- to a tier of edge servers. These distributed servers field all application requests, process the Web Container components, and communicate with back-end systems as needed. Requests to back-end systems are handled via industry-standard protocols such as HTTP, SOAP, Java RMI (Remote Method Invocation), and Java Database Connectivity (JDBC).
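The edge/origin split described above can be sketched in plain Java. This is a hypothetical illustration, not any vendor's actual API: `EdgeDealerLocator` stands in for an edge-side Servlet, `originLookup` stands in for a back-end call (over HTTP, SOAP, RMI, or JDBC), and the cache shows why repeat requests can be answered entirely at the edge.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class EdgeTierSketch {

    /** Stand-in for a back-end data call (in practice HTTP, SOAP, RMI, or JDBC). */
    static String originLookup(String zip) {
        return "Dealers near " + zip;
    }

    /** Hypothetical edge-side handler: presentation work runs at the edge,
     *  and the origin is contacted only when the data is not already cached. */
    static class EdgeDealerLocator {
        private final Map<String, String> cache = new HashMap<>();
        private final Function<String, String> origin;
        int originCalls = 0;

        EdgeDealerLocator(Function<String, String> origin) {
            this.origin = origin;
        }

        String handleRequest(String zip) {
            // The JSP/Servlet-style rendering happens here, at the edge...
            String data = cache.computeIfAbsent(zip, z -> {
                originCalls++;   // ...and only cache misses reach the origin
                return origin.apply(z);
            });
            return "<html><body>" + data + "</body></html>";
        }
    }

    public static void main(String[] args) {
        EdgeDealerLocator edge = new EdgeDealerLocator(EdgeTierSketch::originLookup);
        System.out.println(edge.handleRequest("90210"));
        edge.handleRequest("90210"); // second request is served entirely at the edge
        System.out.println("origin calls: " + edge.originCalls); // prints: origin calls: 1
    }
}
```

The point of the sketch is the division of labor: the HTML rendering (Web Container work) never leaves the edge tier, and the origin sees only the data requests that the edge cannot satisfy itself.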

Transforming IT Economics

The net result of this architectural model is that enterprises can fundamentally change the way applications are developed and deployed. In the past, the traditional fixed asset application deployment model has had several deleterious effects on enterprises.

One effect that is not immediately appreciated is stifled innovation. The fixed-asset approach mandates that each new Internet application have a proven business case and ROI. In contrast, with the on-demand distributed computing approach, enterprises gain the ability to experiment: they now have "apps on tap" and, with that, the freedom to innovate and differentiate themselves from competitors. Applications can be brought to market more quickly and, if necessary, discontinued without risking any capital outlay.

The capital savings can be significant, as the expense required to deploy an Internet application that can weather periods of peak demand is substantial -- especially when one considers that on any given day, nearly 90% of an enterprise's computing resources may sit idle. Take, for example, an online promotion by Logitech to give away 20,000 cordless keyboards in one day. With five weeks to develop and deploy an infrastructure that would ensure 100 percent uptime for a well-advertised event, Logitech needed a solution that could scale on demand. Uncertain of the magnitude of traffic the promotion would generate, Logitech found that purchasing more servers would have been costly, and sizing that purchase was nearly impossible. Using an on-demand model, Logitech instead extended its contest application across a distributed server infrastructure, avoiding the significant cost of upgrading its own infrastructure to support the event. The distributed infrastructure enabled Logitech to scale to 72 million page views, with 55,000 requests per second at peak, resulting in a marketing victory.

The savings are not limited to capital expenditures. As previously mentioned, moving applications from pilot to production using traditional architectures involves time-consuming capacity planning: performance testing, debugging, tuning, and finally provisioning the CPU, memory, and software licenses needed for peak demand. Unlike Web servers, which are easily deployed as a "farm," clustering application servers is complex and costly. By reducing the amount of back-end infrastructure and the need to cluster, ownership costs can be greatly reduced.


On-demand technologies deliver on the promise of utility computing, but it is important that enterprises realize that for Internet-based applications, scaling on-demand in a centralized location solves only part of the problem. On-demand distributed computing overcomes the problems inherent in "silo" serving solutions. By locating computing resources close to the users requesting them, performance and reliability, as well as scale, are assured. Given the proliferation of Web Services, this distributed model is increasingly the choice of enterprises eager to innovate without risking huge capital or ownership costs.

Kieran Taylor is director of product management for Akamai Technologies, Inc.
