You have a successful client-server application environment—either an in-house environment with diverse and widely-networked users, or a hosted environment with a broad base of users all over the Internet. And now you face the transition that confronts all applications that are both "client-server" and "successful": growth that takes them into the tyranny of numbers inherent in any heavily used client-server system. Plot your application environment's traffic as more and more users come on board, start gathering metrics on your database connections and on the increasing number of lock-outs and time-outs—and watch your environment slow to a crawl.
Typically, the transition to an enterprise environment is top-down: A company invests heavily in a new, multi-tiered application environment, ramping up new distributed applications and putting legacy apps in wrappers so they'll run in the new environment. This is fine, if you have the financial muscle to buy that kind of environment ready-to-run. Without that muscle, you can't make the transition top-down; you have to go bottom-up, creating the enterprise a piece at a time, with whatever resources are available.
Whichever direction it goes, however, it's an inevitability that can't be negotiated away: The scalability and performance of a client-server architecture will fall off when certain activity thresholds are surpassed. The architecture and technology to adapt—distributed applications architecture and the frameworks that can enable it—are now well understood and widely deployed. The challenge now becomes, how do you implement a distributed applications architecture and, at the same time, maintain the client-server environment it must supplant?
This opens up a long list of new questions: How do you go about growing a dynamic architecture around a static one? If you have a two-tier system and grow a third, where do you put it, if you're not shutting down the other two? How do you change the routes in and out of your databases without shutting them down for repairs? How do you do all of this with little or no interruption of service to your users?
There are answers to each of these very big questions, but (as you might guess) they can grow very lengthy and detailed. Included below is a set of over-arching design principles intended to be abstract (applicable with most enterprise technologies), amenable to a staggered and controlled transition, and budget-friendly.
Before Anything Else, Build a Bridge
Job #1 in growing a distributed architecture around the client/server architecture is preserving existing functionality. In most situations, that means dealing with a client-server environment with business logic deeply embedded either in the user interface or the database (say, in stored procedures—in theory a no-no, but, hey, this is the real world). You have to leave this arrangement in place and functioning while you introduce a third layer, and make the transition gradually, rather than all at once.
How do you insert a business layer into such a configuration, while preserving the current functionality of the two existing layers? In a rich application system, this will probably need to happen on an application-by-application basis. (Your company might require that this transition happen in some other way: customer-by-customer, for instance, or region-by-region.) Whichever way you go transitionally, the job is the same: build a bridge between the presentation layer and the data layer that crosses the business layer.
The idea is to allow the client side direct access to the server side, passing right through the emerging business logic layer, and to redirect applications away from the bridge and into the business layer when they are ready to be transitioned. With this method, you can bring applications into the enterprise environment at whatever rate is reasonable, as you have the resources to add business components and to remove old components from one or both of the other layers. The functionality transfer to the middle layer can proceed at a pace that makes sense for you, resource-wise. The trick to making this work is building the bridge well to begin with: temporary infrastructure that allows you to re-route client communication rapidly.
Figure 1 tells this story. Several applications in a client-server paradigm are brought across into a multi-layer paradigm, one application at a time, as business logic is extracted from the user interface and placed in a growing layer of its own. The original client/server environment is left functional for as long as it's needed, with client-server calls and data passing over the business layer for those applications not using it. Nothing needs to change for any particular application until that app is ready to make the transition, and then it's just a flip of a switch to do so.
How do you build that bridge? The down and dirty specifics depend on your development platform, but the general idea is this: Database calls from your application layer no longer go directly to a database. Instead, your applications send those calls transparently to a manager; and on the database side, calls are coming not from applications themselves, but from proxies that are also managed transparently. On an application-by-application basis, these managers will either hand off calls directly (in other words, to each other) or pass them into the business logic.
On one end of the bridge, then, is a manager routing database calls either directly to the database or into the business logic layer; on the other end is a manager passing data directly back to clients, or into the business logic layer. These managers are where you put your conditional logic to determine—app-by-app, client-by-client, or however you've chosen to stage the transition—whether the calling client is "converted" or not.
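To make this concrete, here is a minimal sketch of such a manager. All of the names (`BridgeManager`, `DataLayer`, `BusinessLayer`, and so on) are illustrative stand-ins, not part of any particular platform: the point is simply that each end of the bridge is a lookup table of conversion flags plus one conditional.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a bridge-end manager: it routes each database call
// either straight across the bridge or into the business logic layer,
// depending on whether the calling application has been "converted."
public class BridgeManager {

    // Per-application conversion flags; flipping one entry is the
    // "flip of a switch" that moves an app onto the business layer.
    private final Map<String, Boolean> converted = new ConcurrentHashMap<>();

    private final DataLayer directRoute;       // straight to the database
    private final BusinessLayer businessRoute; // the new middle tier

    public BridgeManager(DataLayer direct, BusinessLayer business) {
        this.directRoute = direct;
        this.businessRoute = business;
    }

    public void markConverted(String appId) {
        converted.put(appId, true);
    }

    public Result handle(String appId, DatabaseCall call) {
        if (converted.getOrDefault(appId, false)) {
            return businessRoute.process(call); // new path: through business logic
        }
        return directRoute.execute(call);       // old path: across the bridge
    }

    // Minimal stand-ins so the sketch is self-contained.
    public interface DataLayer     { Result execute(DatabaseCall call); }
    public interface BusinessLayer { Result process(DatabaseCall call); }
    public record DatabaseCall(String sql) {}
    public record Result(String payload) {}
}
```

Because the flag lives in the manager rather than in the application, an individual app needs no code change at cut-over time; you flip its entry and the next call takes the new route.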
Variations on a Theme
Depending on where the bottlenecks are in your existing system—or where you must avoid bottlenecks in your emerging system—you may find it advantageous to locate your manager function somewhere besides the middle layer itself. For example, it may be that you want to grow your business logic layer from scratch physically as well as logically. In other words, you want to add new physical servers to your system that house the business logic as you grow. It would make sense, then, to place your manager module on the underside of your presentation layer (see Figure 2), because the decision to route outbound traffic from the user interface is then not just a logical choice but a physical one. If you need business layer logic for a particular client, you're headed for a different physical server than you would be if you were passing straight through to a database. The decision of what you need and where you can find it are necessarily co-located, and that means they need to be nestled up near the client.
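The presentation-side version of the manager might look like the sketch below, which turns the logical "converted or not" decision into a physical one by returning a different server endpoint per application. The class name, URI schemes, and hostnames are all invented for illustration.

```java
import java.net.URI;
import java.util.Map;

// Hypothetical sketch: a presentation-side router that makes the routing
// decision a physical one, sending converted apps to the new business-logic
// server while everyone else still talks straight to the database host.
public class EndpointRouter {

    // Illustrative endpoints; in practice these would come from configuration.
    private final URI databaseServer = URI.create("db://db-host:1521");
    private final URI businessServer = URI.create("app://applogic-host:1099");

    private final Map<String, Boolean> converted;

    public EndpointRouter(Map<String, Boolean> converted) {
        this.converted = converted;
    }

    public URI endpointFor(String appId) {
        return converted.getOrDefault(appId, false) ? businessServer : databaseServer;
    }
}
```

Sitting just under the user interface, a router like this keeps the "what do I need, and where do I find it" decisions in one place, nestled up near the client as described above.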
Similarly, data access management might be better handled at the top of the data layer, rather than on the underside of the business layer. The infrastructural example above (having the business layer on separate servers) is one scenario that might recommend this; another would be an environment where heavy data-layer traffic is mitigated by threading, so that handing off to the business logic layer rather than directly to the client has load-balancing implications.
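A data-layer-side manager in that threading scenario might be sketched as follows. This is an assumption-laden illustration, not a prescription: the names are invented, and the business-layer hand-off is reduced to a simple post-processing function so the threading point stays visible.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// Hypothetical sketch: a manager at the top of the data layer that hands
// query results either straight back toward the client or through the
// business layer, doing the hand-off on its own worker pool so the routing
// decision doesn't tie up the database's threads.
public class ResultDispatcher {

    private final ExecutorService workers = Executors.newFixedThreadPool(4);
    private final Function<String, String> businessPostProcess; // e.g. apply rules

    public ResultDispatcher(Function<String, String> businessPostProcess) {
        this.businessPostProcess = businessPostProcess;
    }

    // Converted traffic detours through the business layer; legacy traffic
    // passes straight back, exactly as it did under client-server.
    public CompletableFuture<String> dispatch(String rawResult, boolean converted) {
        return CompletableFuture.supplyAsync(
            () -> converted ? businessPostProcess.apply(rawResult) : rawResult,
            workers);
    }

    public void shutdown() {
        workers.shutdown();
    }
}
```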
It might even be advisable, depending on your physical infrastructure and server configuration, to make your two traffic management layers independent layers in themselves, creating a five-layer system. This approach may add expense, but it could simplify traffic issues.