Book Review: Enterprise Application Architecture

Tuesday Nov 19th 2002 by Sam Huggill

Enterprise Application Architecture: the best book for anyone creating distributed, enterprise applications.

We have covered a lot of material in this book, so in this chapter I will try to pull all of the different topics we have discussed into a single vision of the overall system. We'll start by taking another look at the physical architecture that will host our applications. Now that you have actually worked through the coding of some objects with both data centric and user centric halves, I think that the server farm designs we looked at earlier may make a little more sense. While we look at the physical architecture, we will view the computers in the server farm more as software elements (SQL Server, MTS, IIS) rather than as hardware elements (CPUs, disks, and memory).

What we'll do in this chapter is walk through each step that must be taken to deploy an application across our enterprise. Before we begin, let's take a minute to define the outcome we expect to achieve with this deployment. We need to consider several issues before we can develop and deliver an application to our users. Of course the first thing we need to do is to handle the development of the application itself. What that means to us, using ECDOs, is that we need to:

  1. Determine the application's functionality
  2. Acquire the base objects necessary to deliver that functionality
  3. Extend the base objects with type definitions if necessary
  4. Connect the objects
  5. Add the business rules
  6. Handle deployment issues

I guess if you think about it, this list shows that we have really taken the huge leap into the world of distributed architecture development. Most of this book, until now, has really been about handling steps 2 through 5 in a professional manner. I guess I always took it for granted that if you picked up a book on Enterprise Application Architecture then you are probably a master at determining the functionality for an application. Anyway, what we have to do now is to take a look at exactly what it means to deploy a distributed application across our architecture.

The Application Methodology

Before we go on to talk about deployment specifics I want to take the time to outline the life-cycle of developing applications for our enterprise.

Development

Developing applications with data objects is really a lot of fun when you get used to it. Of course as you have seen, the actual code we need to write when we build an ECDO is challenging to say the least. But, once you have built one, you can use it again and again. During the development stages, we spend a good amount of time searching through the ECDOs that we have already built. Most applications really seem to make use of the same set of base ECDOs:

  • The Person object, extended into one or more different types of people: Doctor, Patient, Employee, Engineer, etc.
  • The Organization object. Extensions for the object include things like Company, Incorporation, and Government Agency. As a hint about how you might think of an Organization object, I've found that I could use the Organization object to represent a family, or a collection of Person objects.
  • The Address object (obvious)
  • The Telephone Number object (obvious)
  • The Email Address object (obvious)

Of course there are other ones that we use quite frequently, like the Cash and Time objects, but you get the idea. The first thing we do is to actively look for ways that we can reuse the objects we have already built. The key here is to be creative. You need to learn to think about, say, the Person object as a real person. Remember that you can always add an OTD and a Property Bag to the base object and turn the object into anything that a real person can be. The same goes for the Organization or any other 4-Dimensional Data Object that you have taken the time to build.
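Since the book's objects are COM components written in VB, the following Python sketch is only an illustration of the extend-and-reuse idea: a base object is given an Object Type Definition (OTD) and a Property Bag to become a new kind of thing. All class, method, and property names here are hypothetical stand-ins, not the book's actual API:

```python
class PersonBase:
    """A minimal stand-in for a base Person ECDO."""

    # Base property set shared by every kind of person.
    base_properties = ("Name", "BirthDate")

    def __init__(self):
        self.properties = {name: None for name in self.base_properties}
        self.property_bag = {}       # holds the extended properties
        self.type_definition = None  # the OTD: which kind of person this is

    def extend(self, type_name, extra_properties):
        """Apply an Object Type Definition plus a Property Bag."""
        self.type_definition = type_name
        for prop in extra_properties:
            self.property_bag[prop] = None


# Turn the generic Person into a Doctor without touching the base class.
doctor = PersonBase()
doctor.extend("Doctor", ["LicenseNumber", "Specialty"])
print(doctor.type_definition)       # Doctor
print(sorted(doctor.property_bag))  # ['LicenseNumber', 'Specialty']
```

The point of the pattern is that the base object never changes: a Patient, an Employee, or an Engineer is the same `PersonBase` with a different OTD and bag.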

Only after we have exhausted the reuse possibilities do we even think about creating any new ECDOs. Usually we find that we do need to make a few new ones to handle some special kind of information for an application. At this point, we STOP. We take time to THINK about the FUTURE, and we work hard to design the base property set for any new ECDOs - so that we might be able to reuse them in the future.

The next step in this process is to work through the relationships between the objects. When we are working with ECDOs, that means that we need to think about Connector objects. In just about every application, we do need to build at least a few Connector objects. But as we saw in the chapters on Connectors and Veneers, it is possible to package common relationships, like Person-Address-Telephone Number-Email Address, into something we call a component. This frees us up from having to develop the same kind of Connector objects over and over again. Think about how you might design an Organization object. You know, one that manages things like companies, agencies, and even families. Now try to think about even one organization you know that doesn't have at least one person associated with it.

Anyway, organizations tend to have people. They also usually have contact information like an Address, a Telephone Number, and maybe even an Email Address. This means that we need to think about these things and consider the possibility that we might design something like an Organization component. Again, this frees us up from having to work through the relationship problem over and over again.
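The Organization component idea might look something like this sketch (Python is used for brevity; `Connector` and `OrganizationComponent` are invented names for the example, not the book's actual classes):

```python
class Connector:
    """Illustrative Connector: relates a parent object to child objects."""

    def __init__(self, parent, child_kind):
        self.parent = parent
        self.child_kind = child_kind
        self.children = []

    def attach(self, child):
        self.children.append(child)


class OrganizationComponent:
    """Packages the Organization's common relationships once, so each
    new application doesn't rebuild the same Connectors."""

    def __init__(self, name):
        self.name = name
        # The relationships almost every organization needs.
        self.people = Connector(self, "Person")
        self.addresses = Connector(self, "Address")
        self.phones = Connector(self, "TelephoneNumber")
        self.emails = Connector(self, "EmailAddress")


acme = OrganizationComponent("Acme")
acme.people.attach({"Name": "Jane Doe"})
acme.addresses.attach({"Street": "1 Main St"})
print(len(acme.people.children))  # 1
```

Once a component like this exists, the Person-Address-Telephone-Email wiring comes for free in every application that uses an Organization.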

Notice what you just did! You were thinking about different kinds of organizations and their relationships to people.

You did not think about a table in the database.

That is what we do during development:

  • We think about real things.
  • Then we build objects to model those things.
  • Once we have the objects, we think about the real world connections between those things.
  • Then we build objects to model those connections.
  • The last step in this process is to work through the business rules that our organization uses to manage the connections we have made possible with our data objects and their associated connectors.
  • We take these rules and package them into something we call a Veneer.

This is where we do most of the work in application development. We write specialized business rules into a tiny VB project that we call a Veneer.
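A Veneer, as a thin layer of business rules that delegates all data handling to the objects beneath it, might be sketched like this (an illustrative Python analogue of the VB project described; the rule and class names are invented for the example):

```python
class Veneer:
    """A thin application layer: only business rules live here.
    The data objects it calls are assumed to already exist."""

    def __init__(self, person_store):
        self.person_store = person_store  # any object with a save() method

    def hire_employee(self, person):
        # Business rule: every employee must have a name on record.
        if not person.get("Name"):
            raise ValueError("An employee must have a name")
        # The Veneer delegates all data handling to the data object.
        return self.person_store.save(person)


class FakePersonStore:
    """Stands in for a reusable Person data object."""

    def __init__(self):
        self.saved = []

    def save(self, person):
        self.saved.append(person)
        return True


veneer = Veneer(FakePersonStore())
print(veneer.hire_employee({"Name": "Pat"}))  # True
```

Because the rule lives in the Veneer rather than in the Person object, the Person object stays reusable by applications with different rules.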

The Location of Development

During development, we usually copy all of the DLLs for the objects we need onto a local workstation. During very early development phases, we might even use a local SQL Server rather than our common development box. We don't normally use MTS at this time. We try to keep things as simple as possible. This allows us to focus on the real problem we are left with when we are developing an application using ECDOs - the business rules in the veneer.

Notice that I said that we might use a local SQL Server rather than our common development box. I want you to think about all of the work we did passing Connect strings between all of the different spheres in our system. You didn't write one piece of code that didn't require a Connect string. I am sorry if this was troubling at the time, but let's think about what it means to us now. We have objects that are capable of creating their own tables, views, and stored procedures. This means that we can drop a few of these guys on our desktop, create a disposable local database, and hack away at the business problems. This gives us the same kind of interactive development process at the application level that makes VB so powerful at the user interface level.
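Here is a rough Python analogue of a self-installing data object, using SQLite's in-memory mode to stand in for the disposable local database (the real ECDOs emit SQL Server DDL and take a Connect string; the class and table names here are assumptions made for illustration):

```python
import sqlite3


class SelfInstallingDataObject:
    """Sketch of a data object that builds its own table from a
    connect string, so a developer can hack against a disposable
    local database."""

    TABLE_DDL = """CREATE TABLE IF NOT EXISTS Person (
                       ID INTEGER PRIMARY KEY, Name TEXT)"""

    def __init__(self, connect_string):
        # ':memory:' acts as the disposable local database.
        self.conn = sqlite3.connect(connect_string)
        self.conn.execute(self.TABLE_DDL)  # the object installs itself

    def insert(self, name):
        cur = self.conn.execute(
            "INSERT INTO Person (Name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid


dao = SelfInstallingDataObject(":memory:")
print(dao.insert("Alice"))  # 1
```

Pointing the same object at the common development box is then just a matter of handing it a different connect string.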

Once a developer has finished what I am going to call understanding the problem (hacking away), we generally take the small step of pointing the ECDOs at the common development SQL Server box. We also take any new ECDOs or Connectors that the developer has created and perform a standard set of tests against them. We use the Object Factory to make the ECDOs, so we don't normally have problems in this area, but these objects are reusable, so we do test them before we move them into the common development MTS/IIS servers. Once the objects are in the common development area, they become a part of the process we talked about earlier. In other words, we try to find reuse possibilities for all of the ECDOs.

After we have moved the data store and the objects to the common development platform, the developer continues to work through the business rules. This step also gives the developer the opportunity to see exactly how the ECDOs are performing across the common development platform, which is really just three machines that we use to model a server farm. This allows us to catch any problems that might occur in this area before we get into the staging platform.

Staging

By the time an application has moved into staging, most of the bugs, including any that might arise from operating across a distributed platform, have been worked out. Our staging platform is a perfect, but scaled down, replica of our production system. You can think of it as a 'piece of pie' rather than as the whole pie, using the vision of the concentric ring server farm from Chapter 3.

Normally, the move of an 'application' from development into the staging arena is not all that difficult. That is because we reuse ECDOs. This means that often most of the data objects we need for an application have already been installed on the staging platform. This is a blessing. It means that at least a portion of each application has been tested under staging conditions. The things we do have to install on the staging machines for new applications are usually just a few Connector objects across the three inner spheres, a Veneer that only exists on the User Centric sphere, and the interface on the IIS servers on the Presentation sphere.

The next thing we do may seem a little strange, but immediately after we deploy the ASP interface on the staging IIS machines and perform some basic tests, we bring the customers into the development process. We usually give one developer authority for an application at this point, and that developer works with the users in real time. This means that every day we ask the users to "play" with the application deployed across the staging platform. During this time, we impress upon the users that this application is theirs, and that now is the time for them to suggest any changes or modifications, or point out any problems. Thanks to our web-based deployment technique, we have the ability to make changes to the interface and have those changes on the users' desks in minutes. Of course, I like things to be consistent, so we don't actually update the staging system in real time. Instead, we try to get a list of comments from the user test base each day. Then we take that list, and work out the issues on that list on the development platform. Each night, we take any changes we have made and tested on the development platform and implement those changes on the staging platform.

As I said earlier, our staging platform has been purposefully designed to model the production system. In other words, it has the full set of capabilities of the production system, including the correct positioning of the Cluster Servers. This gives us an opportunity to do things that we can't do in the development system, like perform fail-over test scenarios.

In many ways, I consider our staging system a choke point for our development process. Although our core object team is purposefully small, the objects are made available to a wide range of developers with a wide range of talents. The staging platform gives every one of those developers a model to follow and a set of standards to meet. If the application they developed cannot exist on the staging platform, then it will not be moved into production until it can. This is not as ominous as it sounds; in actual practice, the move from development to staging is one place where all the developers get to "meet", and a lot of learning takes place here.

The staging system is not just a one-way street from development to production. We also perform a nightly replication of data from the production system back to the staging system. This keeps the staging servers synchronized a day behind the production system in terms of the data stored in the system. That allows us to work not only with the type of information stored in the production system, but also with about the same volume of information. This gives us the opportunity to do things like test the effects of adding indexes to the database, etc. It also gives us a safe place we can use to replicate any problems that the users of the production systems may have been experiencing. If we are experiencing a problem in production, we replicate the problem on the staging machines and "have a conversation" with the users using the staging machines as our medium. Once we have nailed down the bug, we hack away at it on the development system and fix it. Then we move the fix back to staging and test it. Only when we, and the users, have signed off on the problem does the fix get placed into production.

You should notice that the rollout process from development towards staging may be long and drawn out, but that is by design. The staging step in the process is intended to provide a tough hurdle on the way to deployment on the production system.

Production

The deployment of an application on the production system really doesn't involve anything more than installing each of the new pieces required on the proper servers and making sure that the correct connect strings are being passed to the correct data objects. By this time, the tables, views, and stored procedures that the new data objects will create in the system have been tested at least twice. The new data objects, the Connector objects, the business rules in the Veneer, and the user interface have undergone a thorough testing on the staging platform. Once the pieces have been installed on the correct servers, all we need to do to deploy an application is to give the correct user base the permission to access the application using the EntApp object and the Person object extended as an ITCustomer.

In order to deploy our objects across our enterprise we need to examine the infrastructure that we have designed for our enterprise system. What follows is a quick review of how we divided the system up in the earlier chapters.

Processing Divisions

One of the very first things we talked about in this book was how to divide the system by isolating particular processes that are going on. We further saw how we could sub-divide the processing into major and minor processes.

Major Processing Divisions

In Chapter 2, we determined that there were two types of major process isolation:

OLTP - On Line Transaction Processing

OLTP is designed for real-time transaction processing. We expect the OLTP portion of the system to respond immediately to all requests, updates, etc. This system is optimized (at the database level) for speed first and then ease of use when dealing with single items (one employee, one customer, one purchase, etc.). Over time, we have gradually reached the point where we have driven most of the redundancy out of our well-designed database systems. As an industry we have learned to design and optimize our systems around a unit of work we call a transaction. The most optimal transaction uses the smallest possible data set that will get a particular task accomplished. The Data objects we learned to design and build in this book are optimized for OLTP.
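The 'smallest possible data set' principle can be illustrated with a minimal single-row transaction (SQLite stands in for SQL Server here, and the schema is invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (ID INTEGER PRIMARY KEY, Salary REAL)")
conn.execute("INSERT INTO Employee VALUES (1, 50000.0)")
conn.commit()

# An OLTP-style unit of work: touch exactly one row, commit immediately.
with conn:  # the context manager wraps the statement in one transaction
    conn.execute(
        "UPDATE Employee SET Salary = Salary * 1.05 WHERE ID = ?", (1,))

salary = conn.execute(
    "SELECT Salary FROM Employee WHERE ID = 1").fetchone()[0]
print(round(salary))  # 52500
```

The narrower the data set a transaction touches, the shorter it holds locks, and the better the system scales under concurrent load.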

OLAP - On Line Analytical Processing

The OLAP or DSS system is not designed for real-time transaction processing; it is really something of a data warehouse. It is optimized in a manner that makes it easy to get answers, even to complex enterprise-wide questions, in a very short time. We expect the OLAP portion of the system to provide us with easy access to answers for difficult (maybe formerly impossible) questions that span multiple items (enterprise wide: all the employees, all the customers in France, etc.). Although we took the time to realize that we do need to separate the OLAP data storage activities from the OLTP data storage activities, we did not get a chance to learn to code the special Data objects that handle the OLAP portion of our system.

Minor Processing Divisions

The next thing we discovered was that in addition to the two major processing divisions in every system, we could also identify four minor processing divisions.

Data Storage Processes

These processes are responsible for the physical storage and retrieval of data from some persistent source. If we design our system correctly, this may be one of the few places where the system is actually performing reads and writes to an actual physical disk.

Data Manipulation Processes

Data Manipulation Processes are designed to know where an organization's data is stored and know the steps that must be taken to retrieve, remove, or change that data. These processes do not typically perform tasks like disk I/O themselves. They usually invoke other processes, most of them on another physical machine, to handle the physical writes and reads. The Data Manipulation processes are designed to wrap around the core of data available to our organization and give the programmers, and the users, access to the entirety of that data as though the data existed on a single machine. In reality, that data may exist on a centralized SQL Server machine, but it may also exist in legacy machines like mainframes, tired old UNIX systems, in flat files, and maybe even somewhere out on the Internet.

Data/Business Rule Integration Processes

While Data Manipulation processes know about the organization's data, the Data/Business Rule Integration processes know about the organization's talent. Their job is to pull together data and business rules and, by combining the two, increase the value of both.

Presentation Processes

The Presentation processes' job is to present or receive data to or from the end user. We found that these processes are really most like the Data Storage processes we looked at above. The real difference between the Data Storage and Presentation sets of processes is where we write the end result of the process. While the Data Storage processes primarily write to the system's disks, the Presentation processes primarily write to the end users' workstation screens, files, and printers. But, both sets of processes are concerned with changing fleeting binary data transmissions into something of a more persistent nature.

So splitting the enterprise according to processing resulted in this:

Insert Image 1

Physical Divisions - Tiers

We found that it was possible to deploy the available physical resources in a manner that mirrored the processing divisions. We split the servers into the following physical tiers:

  • The Data tier - Handles the Data Storage processes
  • The Data Centric tier - Handles the Data Manipulation processes
  • The User Centric tier - Handles the Data/Business Rule Integration processes
  • The Presentation tier - Handles the Presentation processes

Although tier divisions are not a new concept in distributed architecture, we found that when we combined this divisional strategy with our processing divisions, we could focus the processing power of a large number of presentation and object broker/transaction processing servers towards a very much smaller number of database servers at the center of our system. In other words, it made more sense to imagine that the servers existed as a series of concentric rings:

Insert Image 2

During this time, we came to realize that at the core of distributed architecture is the very simple idea that we can employ more than one machine to handle the processing load for our enterprise.

Logical Divisions - Spheres

At this time, we also realized that in order to distribute the processing load across more than one machine, the applications we developed needed to be a little different from what we might have grown accustomed to in the past. We found that we needed to learn to think of our applications as the set of related processes we talked about earlier rather than as a single monolithic block of code. The reasoning for this was simple. It doesn't matter if we have 10, or 100, or even 1000 servers available to execute our code. If that code exists as a monolith, then we can only run that block of code on one server at a time. On the other hand, if we break the monolith down into several sets of complementary logical processes, we can run each of the different processes on different machines simultaneously.

This meant that it was possible to have both our physical and logical system set up in a complementary manner. When we paired this idea with the server farm designs we looked at earlier, a new vision of our enterprise emerged. This combining of physical and logical assets in a parallel fashion had the curious effect of allowing us to envision our system as a series of spheres, where each one of the spheres represented a minor processing division:

Insert Image 3

Just like we did with the servers in the physical system above, we placed the smallest sphere, the Data sphere, in our logical system at the center of the enterprise. The reason we did this was because we knew that the power of distributed architecture lay in its ability to share the processing load across as many machines as possible. We intuitively understood that the distribution of the processing load across a system did not mean that we had to distribute the data we stored in that system. We learned to position our available resources, both hardware and software, in a fashion that focused the energy of the system towards the core of data at the center of the system.

 

Once we had positioned our Data sphere at the center of our enterprise, we set out to construct a system that would allow us to share the smallest possible number of database servers at the core of our enterprise with the largest number of users. We found that we could envision the different spheres (Data sphere, Data Centric sphere, User Centric sphere, and Presentation sphere) as a series of nested spheres. Although I didn't take the time to make it too clear as we were working through the chapters, we started solving our programming problems at the center of this system, at the Data sphere.

During the time that we were learning to develop ECDOs, we were very careful to craft each data object as three distinct sets of processes. We designed one set of processes, the stored procedures, to be executed on the Data sphere. We crafted another set of processes, those in the DC Object, to be executed on the Data Centric sphere. And we designed a third set of processes, those in the UC Object, to be executed on the User Centric sphere. The end result of this careful design and execution is that we have, in a very real way, moved the data store out to the third sphere in our system. Remember that while we were learning to build these data objects, we often called them something like a 'proxy for a table' or a 'proxy for a view'.
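The three-way split might be sketched like this, with plain SQL standing in for the stored procedures and two small Python classes standing in for the DC and UC halves (all names here are illustrative, not the book's generated code):

```python
import sqlite3

# --- Data sphere: the "stored procedure" layer (plain SQL here) ------
DDL = "CREATE TABLE Person (ID INTEGER PRIMARY KEY, Name TEXT)"
FETCH_SQL = "SELECT ID, Name FROM Person WHERE ID = ?"


class DataCentricPerson:
    """DC half: knows how to reach the data store."""

    def __init__(self, conn):
        self.conn = conn

    def fetch(self, person_id):
        return self.conn.execute(FETCH_SQL, (person_id,)).fetchone()


class UserCentricPerson:
    """UC half: holds state near the user; delegates I/O to the DC half."""

    def __init__(self, dc):
        self.dc = dc
        self.name = None

    def load(self, person_id):
        row = self.dc.fetch(person_id)
        self.name = row[1] if row else None
        return self.name


conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute("INSERT INTO Person VALUES (7, 'Grace')")
uc = UserCentricPerson(DataCentricPerson(conn))
print(uc.load(7))  # Grace
```

In the real system the DC and UC halves run on different machines, so each fetch crosses a sphere boundary rather than a method call.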

 

ECDOs are distributed data objects that inherently spread out the load normally forced upon the database servers across as many servers as we have available in the Data Centric and User Centric spheres.

Functional Divisions

Although the physical processing improvements our distributed objects enable are great, we did something far more important with the ECDO when we effectively moved the database to the User Centric sphere in the system. We purified our application design and development practices. While we were learning how to build distributed data objects, we didn't allow business rules to creep into our data object designs. We eliminated the business rules from all of our data handling processes. We took on the elimination of business rules from our objects with such a fervor that we didn't even begin to consider the business rules in our applications until very late in our journey.

When we did finally begin to think about business rules, we ensured that they wouldn't infiltrate our data objects by designing a couple of special objects to handle business rules. We called these special objects Connector objects and Veneers. We learned that it was possible to distill the business rules from a set of requirements and handle all of these rules just using data objects, Connector objects, and Veneers.

This showed us that it was entirely possible to think about building database applications without really having to think about databases. This means that we can divide our development efforts into two separate challenges. The first is to create reusable data objects that can be shared amongst different applications. The second is to take the reusable data objects and to incorporate them into unique applications using Connector objects and Veneers:

Insert Image 4

What we can garner from the last paragraph is that when we start to think about deployment issues for our enterprise, we really need to think about deployment in two phases:

  • The first phase involves distributing the reusable data objects across the spheres in our system.
  • Once these data objects are available throughout the enterprise, then we can begin to think about the deployment issues on the level of the individual applications.

In the next section of this chapter, we are going to consider the deployment of data objects across the spheres of the system. In order to handle these deployment issues, we need to think about SQL Server at the Data sphere, Microsoft Transaction Server managing transactions at the Data Centric sphere, Microsoft Transaction Server operating as an object broker at the User Centric sphere, and then Internet Information Server on the Presentation sphere.

This book is about Enterprise Architecture. We are using a 4 Sphere Architectural Model; that means that we should be able to define the tasks required to meet our objective in terms of a series of sub-tasks that must be handled on each sphere. Over the next few pages, we will cover the things that must be accomplished on each sphere in order to deploy an application. In this chapter, we will be more concerned with the operational tasks rather than the programming tasks. I am sure that by now, you either know how to handle the programming or know where to look up the information you need. What we need to cover is the actual hands-on activity that needs to take place on each sphere in order to deploy an application.

I would like to approach the deployment issue the same way we worked through the development issues we talked about throughout this book. In other words, I would like to start at the innermost sphere of the system and work out from the database to the user. This will allow us to work through deployment issues for each sphere, so that we will arrive at the last chapter of this book, at the Presentation sphere, with only some ASP concepts to finish.

The Data Sphere - SQL Server

All of the objects we have worked with in this book rely upon a relational database as the data core.

In our installation, we use SQL Server 7.0, but anything we have done in this book could be accomplished using SQL Server 6.5.

Of course, it is also possible to use exactly the same techniques with other relational databases: Oracle, Informix, etc. If you use one of the other databases, you might have to make some minor changes to some of the stored procedures. Microsoft's Transact-SQL is not universal by any means, but any changes that you do need to make are likely to be rather trivial syntax changes rather than major conceptual overhauls.

As a suggestion, if you are currently using SQL 6.5 and you have the option of migrating to SQL 7.0, by all means do it. I am not going to go through a feature-by-feature comparison; you can do that for yourself. But I will tell you that SQL Server 7.0 has varchar fields that can hold 8,000 characters, compared to the 255-character varchars that SQL Server 6.5 offers. This means that in all but the rarest cases, we can virtually eliminate the handling of binary large objects (BLOBs) and text fields from our list of programming chores. This in and of itself may be enough to cause most shops to switch from 6.5 to 7.0, but there are other, performance-based, reasons to make the switch. During our switch from 6.5 to 7.0, I took the time to run some simple comparison tests between the two versions. The outcome of the tests was that, out of the box, SQL 7.0 showed about a 300% improvement in throughput.

Comparing SQL Server 6.5 and 7.0

The tests consisted of a simple driver application that ran a series of database actions against the server. To remove any client or networking bottlenecks, I used multiple client machines to drive the tests against both SQL Server versions. When I ran the tests against the 6.5 version using the default configuration, I was limited to approximately 125,000 database actions per hour. It didn't seem to matter how many client machines I used to drive the test; the 125,000 number stood. Curiously, this limitation was not due to the CPUs or I/O capabilities of the machine. The resource monitor showed the CPU utilization steady at around 10%. The limitation seemed to be due to some internal SQL Server factors. Anyway, I ran the same test against the SQL 7.0 version, again using the default configuration. The results were impressive. The 7.0 version allowed 400,000 database actions per hour. The 7.0 version also made better use of the available resources. I didn't need to use any additional client machines because the CPU utilization with 7.0 was around 60%. This test was performed using objects that hit SQL Server directly, without any of the pooling capabilities of MTS.

After I performed this test, I devised another test that was designed to measure the effect MTS had on the overall system. I added another machine to the mix. This machine only had MTS installed on it. Next I moved the DLLs from the client machines to the MTS machine. Then I created a package for the DLLs and informed the client applications that they were to use the DLLs in MTS instead of a local copy of the DLL. Finally, I ran exactly the same test as above against the MTS/SQL pair. The results were astounding! First, the number of database actions that were completed per hour went from 400,000 to 500,000. This alone would have been significant, but it was not what surprised me. The amazing thing was that the CPU utilization for the machine that hosted SQL Server 7.0 went down to 12% for 500,000 actions, compared to 60% for 400,000 actions. Of course, the processing power had to come from somewhere. It did. The MTS machine's CPU utilization was between 40 and 50%! This was a tribute to Microsoft's tight integration between MTS and SQL. It also confirmed the ability of the connection pooling resources of MTS.
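The shape of such a driver is simple: run actions in a loop for a fixed interval and convert the count into an hourly rate. Here is a hedged Python sketch (the original drivers were VB clients issuing real SQL calls, so the stand-in action and any numbers this produces are not comparable to the figures quoted above):

```python
import time


def run_driver(action, duration_seconds):
    """Run `action` repeatedly for roughly `duration_seconds`,
    then return the measured rate in actions per hour."""
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_seconds:
        action()
        count += 1
    elapsed = time.perf_counter() - start
    return count / elapsed * 3600


# A stand-in "database action"; in the real test this was a SQL call.
rate = run_driver(lambda: sum(range(100)), 0.1)
print(rate > 0)  # True
```

Running the same driver from several client machines at once is what rules out the client itself as the bottleneck.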

The Data sphere of our system consists of three types of database objects: tables, views, and stored procedures. Of course our objects are designed with the capability of constructing all of the database objects that they need to function, so by deploying our data objects on the Data Centric sphere, provided we have set up a database for them, we can leave the handling of the Data sphere to our objects. Once again, the time and effort we put into building high quality data objects has paid off.

The Data Centric Sphere - MTS

Microsoft Transaction Server (MTS) handles the middle spheres of our system. Although Microsoft named this application Transaction Server, this name really doesn't convey all of the things that we can do with it.

I'm only going to give a brief overview of MTS here. If you want to learn more, and I recommend that you do, I suggest you try Professional MTS and MSMQ with VB and ASP, also published by Wrox Press.

MTS provides a stable run-time environment for our objects. It serves as an object broker and can manage a number of instances of a particular object. Unfortunately, the current version of MTS doesn't offer us the level of administrative control that some other middle-ware managers, like TOP END, do. We cannot, for instance, tell MTS to keep X number of objects of a particular type instantiated in a pool and available for service. As I understand it, this type of functionality will be available in the next release of MTS. But, as far as I am concerned, this lack of administrative tuning capabilities is not that much of a disadvantage. It has been my experience that MTS really does a good job of managing the number of available objects without any interference from the likes of me. In other words, while other products like TOP END do give the administrator a higher level of control, MTS uses a sophisticated optimization algorithm to perform that task without any intervention. Although I cannot prove it, I suspect that the algorithm may be more capable than most administrators would be under the same circumstances.

As we learned by examining the results of the testing above, one of the other things MTS offers is superb resource pooling capabilities. The increase in performance we saw was completely due to MTS's ability to manage several pools of resources including database connections, processes, and threads. Again, MTS doesn't offer as rich an administrative interface as some other middle-ware applications do in this regard. We cannot really do much to tune or otherwise affect the pooling of available resources. MTS does this without interference via some internal logic. And once again, I really think that this may be a good thing. The things that people tend to view as complicated, like real-time optimization of resources, are tasks that are often better handled by automation.

If you remember the fairly complicated server farm designs we looked at way back in Chapter 3, you may have thought that it would be incredibly difficult to manage objects spread out over all of those machines. Well, MTS makes it very easy to deploy, test, and re-deploy objects across a multiple machine platform. MTS can be easily programmed to replicate components across a battery of machines. It has been my experience that this administrative facility may be more important to overall system performance than the ability to manage the number of instances of an object on a particular machine. The way we administer MTS is to macro manage the system. In other words, if we need more processing power, we add another machine or two to the pool of available servers. Then, as we covered above, we can allow MTS to micro manage the pool of resources (and object instances) on each server.

Another advantage MTS offers is the ability to provide two additional levels of security to our system. We can write code in the objects that allows just certain groups of users to gain access to certain properties and methods. This security scheme is called programmatic security. This allows us to create objects that can actually transform their interface to bend to meet security concerns. In other words, we may need everyone in the company to be able to gain access to some Cash object, but through programmatic security we can ensure that only persons in group X can void a cash transaction. There is another type of security we can employ with MTS. It is called declarative security. This type of security really allows (or denies) certain individuals access to a particular object under the control of MTS.
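
As a rough sketch of the programmatic side, the following shows how a data centric method might use the MTS ObjectContext to restrict one operation to a particular role. The method name, error number, and the "Supervisors" role name are my own illustrative assumptions, not something defined earlier in the book; the IsCallerInRole call does require that the component is running under MTS with roles defined for the package:

```vb
' Requires a reference to the Microsoft Transaction Server Type Library.
' Hypothetical sketch: only callers in the "Supervisors" role
' (an assumed role name) may void a cash transaction.
Public Sub VoidCashTransaction(ByVal lngTransactionID As Long)
    Dim objContext As ObjectContext
    Set objContext = GetObjectContext()

    If Not objContext.IsCallerInRole("Supervisors") Then
        objContext.SetAbort
        Err.Raise vbObjectError + 1001, , "Caller not permitted to void"
    End If

    ' ... perform the void against the data store here ...

    objContext.SetComplete
End Sub
```

Declarative security, by contrast, needs no code at all: access to the whole component is granted or denied through role assignments in the MTS Explorer.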

Last but not least, we can use MTS to, surprise surprise, manage transactions. This is one of the most important functions of this server.

MTS and Transactions

You should remember that in all our Data Centric object's process code we used a Context object variable to manage transactions. The calls to SetComplete and SetAbort informed the MTS environment whether our transaction was successful or not and allowed MTS to deactivate the object. At this point I'd briefly like to draw your attention to a property of our objects which you may not have noticed.
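
For reference, the pattern looks something like this minimal sketch. The procedure name and the body are illustrative placeholders, but the SetComplete/SetAbort calls against the ObjectContext are the mechanism the text describes:

```vb
' Requires a reference to the Microsoft Transaction Server Type Library.
' A minimal sketch of the transaction pattern used by our
' Data Centric methods (names are illustrative).
Public Sub ObjectInsert(ByVal strName As String)
    Dim objContext As ObjectContext
    Set objContext = GetObjectContext()

    On Error GoTo ErrorHandler
    ' ... perform the database work here ...

    objContext.SetComplete   ' vote to commit, deactivate the object
    Exit Sub

ErrorHandler:
    objContext.SetAbort      ' vote to roll back, deactivate the object
End Sub
```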

Visual Basic 6.0 introduced a new class property that you could set in the Properties dialog, just like any other property. This new property was MTSTransactionMode:

Insert Image 5

As you can see there are five potential settings for this property, which determines how MTS handles an instance of this class.

NotAnMTSObject

Specifies that the object will not be involved with MTS. This is the default option.

NoTransactions

The class will not support transactions. An object context will still be created, but it will not participate in any transactions. This value is useful when you are just using MTS as a central repository for a component class.

RequiresTransactions

This value mandates that the object must run within a transaction. When a new instance of the object is created, if the client already has a transaction running then the object's context will inherit the existing transaction. If the client does not have a transaction associated with it, MTS will automatically create a new transaction for the object.

UsesTransactions

This choice indicates that should a transaction be available when the object is created then it will use the existing transaction. If no transaction exists then the object will still run, but without any transactional support.

RequiresNewTransaction

This indicates that the object will execute within its own transaction. When a new object is created with this setting, a new transaction is created regardless of whether the client already has a transaction.

 

If you look at the source code for all our classes in DCPerson.dll, you'll find that the MTSTransactionMode property has been set to 2 - RequiresTransaction.

In the section that follows we will work through a procedure you can use to create MTS packages for the Data Centric objects.

Data Centric Package Creation on MTS

The steps that we must perform to create the MTS packages for the Data Centric objects and the installation procedures we must follow do not change even if you need to install both the Data Centric and User Centric halves of the objects on the same machine. Over the next few pages, we will work through the steps that are required to create a Data Centric MTS package for the Person object. You can repeat exactly the same process for any other Data Centric objects.

  1. Copy the DCPerson.dll to a directory on the Data Centric MTS machine. We use a directory called Components on each MTS machine to hold the DLLs for the objects that are installed on that machine. It makes it easier to find and work with the components as the need arises. In addition, we also use a simple directory structure under the Components directory that separates each object:
  2. Insert Image 6

  3. Start Microsoft Transaction Server Explorer. This MTS administrative application is delivered as a snap-in for Microsoft Management Console (MMC).
  4. This set of steps assumes that you have accepted the default setup parameters for NT Option Pack 4. They also assume that you are working directly on the target machine that hosts the Data Centric objects.

  5. Navigate through the Console tree view to the Packages Installed node under the My Computer node. As long as you are working on the target machine, you can always use the My Computer node to perform any administrative functions that need to be performed. The MTS snap-in is just like any standard Explorer interface:
  6. Insert Image 7

  7. Select the Packages Installed node on the treeview. Then right-click your mouse. You should see a pop-up menu. Select New and then click on the Package option. This will start MTS's Package Wizard. The wizard will walk you through the steps that are necessary to create a new MTS package:
  8. Insert Image 8

  9. In this case, we do not have a pre-built package for your Data Centric Person object, so we are going to create an empty package. All you need to do is click on the Create an empty package button on the form. That will bring up the next step of the Package Wizard:
  10. Insert Image 9

  11. Type in the name for the new empty package. In this case we would type in the word DCPerson. Once you have typed in the name for the package, press the Next button:
  12. Insert Image 10

  13. The next screen is used to set the package identity. In my enterprise, I have a single MTS account that we use. This allows me to control the permissions using a single account. You probably don't have a particular MTS account set up in your domain, so for the time being, just accept the Interactive user (the currently logged on user) option and press the Finish button. At this point we have successfully created an empty MTS package named DCPerson.
  14. Insert Image 11

  15. Now that we have the package, we need to add at least one component to it. In our installation, we use a single package for each data object half. We have found that this offers us the greatest amount of flexibility and promotes the reuse of the data objects. What this means in real terms is that we only need to add one DLL to the newly created DCPerson package. In order to do that, highlight the Components node under the DCPerson node as shown in the image below. Right-click your mouse and select the New | Component option from the pop-up menu:
  16. Insert Image 12

  17. At this point, the MTS explorer invokes a new wizard: the Components Wizard. As it is likely that this machine won't be the one we developed the object on, we need to select the Install new component(s) button.
  18. This brings up the Install Components screen of the wizard. Here we need to press the Add files button and navigate to the Person directory under the Components directory on the machine. Once we have located the DCPerson.dll file, all we need to do is to press the Open button on the open file dialog box:
  19. Insert Image 13

  20. After you have selected the DCPerson.dll you will be returned to the same screen as above, but now it will display each of the class files that make up the Person component as shown. Notice that MTS considers each class under the main object as a separate component:
  21. Insert Image 14

    Another thing to notice is that MTS goes through the trouble of testing the object to determine what interfaces it exposes that MTS can tie into. In this case you will notice that MTS found the MTX (Context) object that we implemented when we developed the Data Centric halves of our objects. Press the Finish button, and the MTS explorer should look something like the image below:

    Insert Image 15

    The User Centric Sphere - MTS

    If you are installing the User Centric objects on the same machine as the Data Centric objects, then just follow the same set of instructions that we covered earlier, this time for the User Centric components. Of course in this case we would create a second package called UCPerson in addition to the DCPerson package, but everything else is exactly the same. If you are using a different machine for the User Centric sphere, then we need to add a couple of extra steps. As you might imagine, we need to instruct the User Centric MTS as to the location of the Data Centric objects. We do this by installing them as remote components.

  22. The first thing we need to do in order to create a remote component package on the User Centric MTS machine is to introduce the User Centric machine to the Data Centric machines. We do that by adding remote computers to the User Centric computer's Computers folder. To do this, just right-click on the Computers node and select the New | Computer option from the pop-up menu:
  23. Insert Image 16

  24. This will bring up the Add Computer dialog box. Navigate over to the computer that contains the Data Centric objects and press OK. The other Data Centric computer will be added to the list of remote computers for this machine, and then we can browse the packages available on that computer and install them as remote components on this computer.
  25. Once the computers have been introduced, you can search through the available components on the remote computer and add those to the remote computer folder of the machine you are presently working on. In our case, we need to add the DCPerson object as a remote component on our User Centric MTS machine.
  26. Right-click on the Remote Components folder under My Computer and select New | Remote Components. In the dialog box that appears, select the remote computer and package that contain the components you want to invoke remotely:
  27. Insert Image 17

  28. From the Available Components list, select the component that you want to invoke remotely, and click the down arrow (Add). This adds the component, and the computer on which it resides, to the Components to configure box. If you click the Details checkbox, the Remote Computer, Package, and Path for DLLs are displayed. Add all the DCPerson components:
  29. Insert Image 18

    When this process is finished, you will find that you have a new directory on the User Centric MTS machine. The DCPerson directory will be added under Program Files\MTS\Install\Remote. This is where the User Centric machine stores all of the information it needs to communicate with the Data Centric machine:

    Insert Image 19

    The Veneer

    Now that we have installed the local components and instructed the User Centric computer about where to find the remote components, we can install our Veneer component. Remember that this is where all of the business rules for our application are stored. To do this, simply follow the same instructions we used for the Data Centric component above. The only difference in this case is that we add the Veneer DLL to a Veneer package rather than a data object. Otherwise, the procedure is exactly the same. Although it isn't necessary, I usually take the approach of adding all of the data objects a Veneer requires before I add the Veneer.

    At this minute, you should have something similar to this image. It doesn't matter that right now you may have SQL Server and MTS running on a single server. Even if you had to install the data store, DC objects, UC objects, and the Veneer on a single machine, you can always separate them later. Right now you should have, at least logically, a system that looks something like this:

    Insert Image 20

    The next sphere we need to consider is the Presentation sphere. For our purposes, we will consider the Presentation sphere to be the machine, or machines, that are hosting IIS. We will take a look at what we need to do for the client machines a little bit later.

    The Presentation Sphere - IIS

    As important as the SQL Server and MTS servers are to the enterprise, the star of the system may very well be Internet Information Server (IIS). It serves as a bastion host in the demilitarized zone:

    Insert Image 21

    It can deliver files, web pages, and entire applications with a high level of security and speed. It is also possible, although I wouldn't recommend it, to use IIS as the primary dispatcher in a software-based load-balancing scheme. (I covered all these details in Chapter 2 so you may want to quickly flip back for a refresher.)

    IIS is capable of performing the duties of a bastion host because it can make use of the full range of security facilities available to Windows NT. Remember from Chapter 2 that a bastion host stands guard over the second screening router in a belt and suspenders system. This type of security is used to keep intruders outside of the protected network area. IIS also offers the ability to protect information that is being transmitted outside of the protected network area. It does this by rendering the information unusable via an encryption scheme called Secure Sockets Layer (SSL). The newest version of SSL (3.0) allows IIS to verify any user before the user is allowed to log onto the server.

    Using ASP, IIS is also responsible for dynamically creating the application interface that an enterprise delivers to its clients. IIS can also be used to provide a level of transaction control to our applications.

    In an Enterprise Caliber installation we use a combination of MTS and SQL to handle this function.

    In my enterprise, I use a directory structure like the one shown here to house our web-based applications. This structure allows us to use IIS in combination with our user object (Person object) to provide our first and second levels of security:

    Insert Image 22

    It works as follows. Each user in the domain(s) is given permission to access the IIS root directory; remember that IIS is really just using standard NT security. So, if an individual that is not an authorized user attempts to gain access to this directory, Windows NT will not grant that person access to the objects (files) in that directory.

    This means that only persons with a valid account on the domain(s) will be allowed to begin the next step of the security process. For the purposes of this discussion, consider that there is only one file, an Active Server Page, in this directory. This page provides the second level of security by determining the authorized user's Logon ID. Once it has this information it retrieves a list of applications that this authorized user is allowed to access.

    It uses this list of applications to dynamically construct a menu of applications that the particular user has permission to access. If the user is requesting information from a browser, then the user is presented with a single "menu" page that contains links for the applications that user is allowed to access. If the user is accessing the enterprise through the client-side Launch Pad application (we will learn how to build this application later), the page is formatted in a manner that can be used by Launch Pad to dynamically construct its pull-down menus.
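
    The menu page itself can be sketched in ASP roughly as follows. The table name, field names, and DSN here are illustrative assumptions on my part, not the book's actual schema; the technique shown, reading LOGON_USER from the server variables and emitting one link per permitted application, is what the text describes:

```asp
<%
' Hypothetical sketch of the "menu" page. The Applications table,
' its field names, and the "Enterprise" DSN are assumptions.
Dim strUser, objConn, objRS
strUser = Request.ServerVariables("LOGON_USER")  ' the NT Logon ID

Set objConn = Server.CreateObject("ADODB.Connection")
objConn.Open "DSN=Enterprise"
Set objRS = objConn.Execute( _
    "SELECT AppName, AppURL FROM Applications " & _
    "WHERE UserLogon = '" & strUser & "'")

' Emit one link for each application this user may access
Do While Not objRS.EOF
    Response.Write "<a href=""" & objRS("AppURL") & """>" & _
                   objRS("AppName") & "</a><br>"
    objRS.MoveNext
Loop
objRS.Close
objConn.Close
%>
```

    Because the page relies on LOGON_USER, it only works when anonymous access is disabled and the user has already been authenticated by NT.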

    In either case, the end result of these first two levels of security is a list of applications and their associated HTML addresses. The following section will give you step-by-step instructions for creating an application directory under IIS, providing permissions to that directory, and installing an application (a group of Active Server Pages stored in a single directory intended to perform a particular purpose).

    Setting up the Presentation Sphere

  30. Create a directory to contain the Contact-Information Application. It doesn't really matter where you place this directory. It can be on the same machine as IIS, a good idea for performance reasons, or it can be on another machine if you like. It is also not a requirement that the directory exist under the virtual root directory for the IIS server. In fact, for security reasons, it may be better to create this directory in a different location, perhaps even on another drive. I wanted to make those things clear to avoid any misconceptions before we began, but for this example we will create the Contact-Information Application directory on the IIS machine under the virtual root directory:
  31. Insert Image 23

  32. Start the Internet Service Manager. This IIS administrative application is delivered as a snap-in for Microsoft Management Console (MMC).
  33. Selecting this option will bring up the Microsoft Management Console just like the MTS option did earlier. This time, the MMC will transform itself into an administrative tool for IIS instead of an administrative tool for MTS. Keep this functionality in mind. It will come in handy when we take a look at the Launch Pad application below.

    Notice that when we open up MMC to use IIS, it contains the MTS snap-in as well. Anyway, from this point, navigate through the treeview until you find the Default Web Site node on the Internet Information Server branch of the tree. It should look something like the image below:

    Insert Image 24

  34. Create a new Virtual Directory. Right-click on the Default Web Site node and select the New | Virtual Directory option from the pop-up menu. This will invoke the New Virtual Directory Wizard. Enter the name for the virtual directory in the text box as shown:
  35. Insert Image 25

  36. Map the virtual directory to the physical directory we created earlier. You can either type in the path of the directory or press the Browse button to navigate over to the actual physical directory:
  37. Insert Image 26

  38. Set the Permissions for the Virtual Directory. The wizard will display another dialog box that will allow us to specify the permissions for the directory. Select the options as shown here and press Finish to continue:
  39. Insert Image 27

    Once you have pressed the Finish button, you will be returned to the Management Console for IIS. Notice that there is a new node under the Default Web Site. This node is where we will install our application. The next major phase in our installation concerns setting the IIS security for the Virtual Directory.

    Insert Image 28

  40. Configure the IIS settings for the Virtual Directory. Navigate through the treeview shown above and highlight the CIA node, or Virtual Directory. Once this directory has been selected, right-click your mouse and select Properties from the pop-up menu. From the Properties dialog box, select the options for the Virtual Directory as shown in the image. Then switch to the Directory Security tab:
  41. Insert Image 29

  42. Setting the Directory Security. From the Directory Security tab, press the Edit button in the Anonymous Access and Authentication Control frame set. This will bring up another modal dialog:
  43. Insert Image 30

    We use this box to set the authentication methods. The best way to handle authentication is to select only the Windows NT Challenge/Response option. However, if you need to use outside Internet Service Providers (ISPs) you might have to select Basic Authentication as well. If you do this, you will also need to use Secure Sockets Layer to encrypt the data if you want to ensure a high degree of system security. Under no circumstances should you check the Allow Anonymous Access option for a directory that contains a web-based application:

    This completes the work we need to do to configure the IIS security. Notice that IIS Virtual Directory security is not really a fine-grained mesh the way that Windows NT security is. This is not a problem because IIS conforms to the Windows NT security model. That means that in addition to the broad security swath that we can control with IIS, we can provide additional security by utilizing the NT model to control the objects (files) in the physical directory that this Virtual Directory maps to.

  44. Setting up a group of authorized users to manage permissions for the physical directory using Windows NT. NT security gives us the ability to create groups of users that have access to a particular resource or set of resources. This eases our administrative tasks. In this case we will create a local users group on the IIS machine that contains a list of users that have permission to use the CIA application. Then when we need to work with the permissions for the physical directory we can work with a single group of users at one time rather than working with the individual users.

    Select the New Local Group option from the User Manager for Domains console:

  45. Insert Image 31

    This will bring up a dialog box that we can use to create a new local users group. All we need to do here is to type in a group name (and optionally a description). In this case, we are creating a group named CIA. The sole purpose for this grouping of users is to identify those individuals that will have permission to access the Contact-Information Application:

    Insert Image 32

    Next, we need to add the users to this group. We can specify which users to add as individuals or as other groups that contain individuals. For instance, maybe you have a group called Accounting that contains all of the users in your organization that work in the accounting department. You can choose to add all the users in Accounting to the CIA group as individuals, or you could just add the entire Accounting group.

    The next image shows the Add Users and Groups dialog that is presented when you press the Add button on the New Local Group dialog. Our only goal here is to identify all of the users that we want to access our CIA application and make sure those users end up in the lower listview on the Add Users and Groups dialog. Don't worry too much about getting this exactly right the first time. You can always go back and modify the list of users as the need arises:

    Insert Image 33

  46. Setting the permissions on the physical directory; this task is handled outside of the MMC. All we need to use to set the permissions on the directory is Windows Explorer. You are probably familiar with this technique, but just to be thorough, navigate to the physical CIA directory and right-click your mouse. Select Properties from the pop-up menu.

    Select the Security tab from the Properties dialog and then press the Permissions command button. This will bring up the next dialog, which will let us set permissions for the directory using standard NT security:

Insert Image 34

This will be an easy job for us because we have taken the time to set up a users group for our application. All we will need to do in the Directory Permissions dialog is to specify which user group we want to give access to. From this point on, we can control user access to the application via access to that group. Press the Add button to add users or groups.

The next dialog should look familiar. It is the same one we used earlier to add the members to the CIA users group. In this case, we will use the Add Users and Groups dialog to give the CIA group Full Control over the physical directory:

Insert Image 35

We could specify less control over the directory to be safer, but for our purposes here Full Control is appropriate. Make sure that you also give Full Control to the system administrator and to the developers that are responsible for this directory. If not, they will not be able to place the ASP files in the directory.

 

That is really all there is to it. To recap, we now have a physical directory that can only be accessed by the members of a single users group, CIA. We have given this group full control over the CIA directory.

 

I would recommend that you give your users the lowest possible access level. This example is designed so that the developers are also in the CIA users group. In real life I would have created two groups, CIA Users and CIA Developers (or Administrators), with different permissions.

That physical directory is mapped to an IIS Virtual Directory. We used the IIS administrative tool to set the security on this directory to require Challenge/Response authentication (and possibly Basic Authentication if all else fails) and have set the broad swathe of security around that directory to allow the users to execute scripts in the directory. All we have to do now is to copy the ASP files that make up the application into the physical CIA directory on the IIS machine.

At this point, we have really handled all four spheres we have talked about throughout this book. We have the tables, views, and stored procedures on the Data sphere, the Data Centric objects on the Data Centric sphere, the User Centric objects and Veneers on the User Centric sphere, and we have created a location on the Presentation sphere to hold the user interface files. Now I want to introduce you to a new sphere: the Client sphere.

If you go back to Chapter 3 where we talked about the server farm, you will see that as we moved away from the database we were able to add more and more machines to each sphere. Well, the Client sphere is no different. Depending upon the number of machines that you have used to create your server farm, you should be able to handle thousands of client machines in the Client sphere.

As I hinted at earlier, although our web applications can be run in an IE 4.0 or higher browser, there are quite a few reasons for using another vehicle to launch these applications.

Perceptions

Even though the web really just gives us another way to handle our job of safeguarding and managing our user's information for them, many people see the web as something inherently evil and untrustworthy. It is still a little difficult to convince some people that the work we do with a web application is the same as the work they do with a standard client-server interface. I think they have the perception that all of our customers are out having a good time surfing the web rather than doing any serious work.

The web-based applications we deliver are designed to put an end to this silliness. We understand that we still need to battle a lot of misconceptions, so we make an all-out effort to remove as much webbiness from our applications as possible. We build interfaces that resemble standard client-server applications in every way. Other than a couple of quirky HTML leftovers, like the ability to select everything on a page, it is nearly impossible to tell the difference between our web-based applications and a similarly designed VB interface. The only thing that resembles a web page in our applications is the browser. And, because we are nothing if not thorough, we even found a way to hide that from the naysayers. We use a wrapper for the IE browser that allows us to use its functionality without having to deal with all of its standard user interface capabilities. This wrapper is called the Launch Pad application, and we'll see how to build it shortly.

Corral the User

There are really a couple of very sound reasons to direct our users to our applications through a specialized browser besides the sentiments of the anti-web group. The most important is that standard browsers give the users a little too much control over their next action. Let me explain: while it is fine for Wally the Web Surfer to change his mind in mid-stream and go to another web site, this kind of indecisiveness is not alright for Eddie the Employee. In a web-based application, it is important for the user to control the direction of the application from within the page. If we expect the user to press one of two or three command buttons and the user decides to type in the address of the sports page instead, or to press the back button 6 or 7 times, we have no way of stopping him. Of course everything in our installation is designed to ensure that our data retains all of its integrity and that it is safe, but we have failed to create an application that helps (guides) the user through the process he is trying to accomplish.

This means we build a client-side application that is really nothing more than a wrapper for Microsoft Internet Explorer. Of course it is not necessary to use this application to deploy web-based applications, but it does offer several advantages. It allows us to remove all external browser-based controls from the user. This means that we can force the user to use the controls provided on the page to work with the information in the application. This gives us a tighter rein on the user's activities while they are working within the application. This makes it easier for us to control the processing order within an application. It also allows us to remove the address text box from the user's view. This measure gives an additional layer of security. It means that rather than allowing the user to navigate at will throughout our application server (IIS), we can use Launch Pad to provide them with a standard pull-down menu of options that appears identical to any standard VB application.
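
The core of the idea can be sketched in a few lines. Assuming a form named frmBrowser that hosts a WebBrowser control named brwMain (control names here are my own placeholders), the form simply navigates to a page chosen by our menu code, and because the form exposes no address bar, toolbar, or context menu, the page's own controls are the user's only way forward:

```vb
' Requires a reference to Microsoft Internet Controls (shdocvw.dll).
' brwMain is a WebBrowser control drawn on frmBrowser;
' strDispatcher holds the address of the IIS server.
Public Sub ShowApplicationPage(ByVal strPage As String)
    ' Navigate to the requested ASP; the user sees only the page,
    ' not any of the browser's own navigation controls.
    brwMain.Navigate strDispatcher & strPage
End Sub
```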

Security

The pull-down menu that is used in Launch Pad is populated at run time. In other words, as soon as a user opens up the Launch Pad application, we perform a query that returns a collection of applications that this particular user is allowed to access. The first level of security begins here. If the user is not an authorized user for the domain or a trusted domain, the query will not be executed. Assuming that the user is authorized and the query does execute, we take the resulting collection and use it to populate the pull-down menus in Launch Pad. That means that we can control user access to each application from a centrally managed database application that works exactly the same way all of the other security measures in Windows NT are managed.

We treat each application as an object and then we either allow or deny access on a user-by-user basis. This is the second stage of security. Notice that the levels of security tighten rather than lessen as the user is allowed in closer to the enterprise's data. Remember that we learned techniques earlier that allowed us to exercise control at the user/property and user/method level. Remember too, that this control is handled in exactly the same way that all other Windows NT security is handled. In other words, we treat the properties and methods as objects and then grant control of those objects on a user-by-user basis.

The Launch Pad Application

Launch Pad gives us a framework upon which to deliver a top-notch help system. In my installations, Launch Pad is the preferred medium for delivering applications. It provides structure and helps the users to learn to work with the application from within the page. However, if the user base is sophisticated enough, or has had enough time with the application, we also offer the option of accessing the application directly through the browser. This is a boon for those individuals that are on the road and do not have their own computers with them.

Building The Launch Pad Application

The Launch Pad application that we'll be building here is really a cut-down version of the one I usually distribute. This version has all the core functionality, but I normally offer some additional tricks such as Help and Reporting features.

For the most part, this application is very simple. The only real trick is the coding (or maybe I should say the concept) required to communicate between an ASP and a host application written in VB.

Open up a new Standard EXE project in VB. Add one MDI form, two standard forms, and one module to the project. Name the project LaunchPad.vbp and rename the forms and module as indicated. Set frmBrowser as an MDIChild. When you have finished, the Project Explorer in the VB IDE should look something like this:

Insert Image 36
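Since the screenshot is not reproduced here, the Project Explorer tree should look roughly like this (the form and module names are inferred from the code that follows, so treat this as a sketch rather than the exact image):

LaunchPad (LaunchPad.vbp)
  Forms
    frmMain      (MDI form)
    frmSplash    (standard form)
    frmBrowser   (standard form, MDIChild = True)
  Modules
    General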

Let's work through the code we need to add to the General module.

General Module Code

This code is really quite simple. All we are doing is creating five Public variables that will be used throughout the life of the application. The MainForm and UserLogon variables are self-explanatory. strDispatcher is used to hold the address of the ASP files on the IIS server so that we don't need to hard-code this into our application. We'll be using the registry to set this variable in the Splash form. The remaining variables are string arrays that will be used to dynamically create the pull-down menus for the application:

Option Explicit

Public strDispatcher As String
Public MainForm As frmMain
Public UserLogon As String
Public ApplicationAddresses() As String
Public CurrentApplication() As String

The only other thing in this module is the Sub Main routine. It is self-explanatory, but what it does is very important. As we will learn in the next section, we use the Splash form to initialize this application.

Sub Main()
  frmSplash.Show
End Sub

You'll also need to specify Sub Main as the Startup Object in the Project Properties dialog.

The Splash Form

As I said earlier, the Splash form is really responsible for initializing the Launch Pad application. The way it does this is probably the most important concept in this section of the book. The Splash form contains a web browser that sends a single parameter and retrieves a single page from the IIS machine. The parameter it passes is the user's NT Logon ID, and the page that is returned contains a list of applications that the user has permission to access.

All you need to do to create a browser is to draw a browser control on any form. How easy is that? The control you need to add to your project in order to perform this miracle is called Microsoft Internet Controls. Microsoft has changed the name of this control several times, but so far the SHDOCVW.dll has remained more or less constant:

Insert Image 37

Add a timer control, a web browser control, and a picture box to the form:

Insert Image 38

The timer is called tmrTimer and has an Interval of 5; the browser is called brwWebBrowser and the picture box picSplash.

 

The picture box has been added to give you a place to put something nice for the user to look at while the application is configured. Of course, it is also possible, and maybe preferable, to eliminate this picture box control and allow the web browser to be visible. Then you could specify what the splash screen would look like from the server by changing the looks of the ASP page that is used for the splash screen. We use a picture box because it is immediate and, as an organization, we have decided upon a specific look for all our applications.

Remember that this screen calls an ASP page with a single parameter: the currently logged-in user's NT Logon ID. In order to accomplish this task, the Splash form must learn the name of the authorized user. It does that by using a single API call, GetUserName. We covered techniques you can use to make API calls in Chapter 14. Remember that the first step was to create a declaration statement:

Option Explicit

Private Declare Function GetUserName Lib "advapi32.dll" _
  Alias "GetUserNameA" _
  (ByVal lpbuffer As String, nSize As Long) As Long

The next bit of code we have to write is the Form_Load routine. In this routine we determine the name of the user using our API call. If we find a valid user name then we proceed; if not, we just end the application. If we have found a user Logon ID, we append that value to the address of the page designed to return a list of applications for this user. Then we load that page into the browser using the brwWebBrowser.Navigate strAddress line.

 

Notice that we have used a registry entry to store the name of the IIS dispatcher machine. This is to allow us to change the name or IP address of the dispatcher server if the need ever arises. Of course, if we followed the principles laid out in Chapter 3, our dispatcher will be a redundant machine that will never go down.

Private Sub Form_Load()
Dim strAddress As String
Dim i As Long
Dim strBuffer As String
Dim lngSize As Long

strDispatcher = GetSetting(App.Title, "Settings", _
  "Dispatcher", "BTVS")

strBuffer = Space$(255)
lngSize = Len(strBuffer)
Call GetUserName(strBuffer, lngSize)

If lngSize > 0 Then
  ' Place the user's logon into the local memory variable
  UserLogon = Left$(strBuffer, lngSize - 1)
Else
  ' Every user must have and use a valid Windows NT User Logon
  ' If not, give 'em the boot
  End
End If

strAddress = "http://" & strDispatcher & _
  "/UserLogonScreen.asp?UserLogon=" & UserLogon

On Error Resume Next
If Len(strAddress) > 0 Then
  tmrTimer.Enabled = True
  brwWebBrowser.Navigate strAddress
Else
  End
End If
End Sub
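Note that Form_Load only ever reads the Dispatcher value back with GetSetting; nothing in Launch Pad itself writes it. As a minimal sketch, the value could be seeded with a one-off SaveSetting call, run from a small setup utility or the Immediate window ("BTVS" here is just the default machine name used in the code above, not a requirement):

' Seed the registry value that Form_Load reads back with GetSetting.
' VB stores this under HKEY_CURRENT_USER\Software\
'   VB and VBA Program Settings\<App.Title>\Settings\Dispatcher
SaveSetting App.Title, "Settings", "Dispatcher", "BTVS"

An installer could equally write the same key directly; the only requirement is that it lands where GetSetting expects to find it.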

You should notice that before we called for the page using the brwWebBrowser.Navigate strAddress line, we enabled the timer we drew on the form earlier. Let's take a look at the code in the Timer event to see why. This is the code that really does all of the work. As usual, read through the code; we will go over it in detail below:

Private Sub tmrTimer_Timer()
Dim i As Integer
Dim lngCount As Long
Dim ThisObject As Object
Dim strMenu As String
Dim aryMenu() As String

On Error Resume Next
If brwWebBrowser.Busy = False Then
  tmrTimer.Enabled = False
  Me.Caption = brwWebBrowser.LocationName
  For Each ThisObject In brwWebBrowser.Document.All
    ' The text boxes on the response page are named with the
    ' seven-character "Address" prefix
    If Left$(ThisObject.id, 7) = "Address" Then
      lngCount = lngCount + 1
      ReDim Preserve aryMenu(lngCount)
      strMenu = Right$(ThisObject.id, Len(ThisObject.id) - 7)
      aryMenu(lngCount - 1) = strMenu
    End If
  Next ThisObject

  ReDim ApplicationAddresses(lngCount)
  ReDim CurrentApplication(lngCount)

  With frmMain
    For i = 0 To lngCount - 1
      If i = .mnuApplicationOption.Count Then
        Load .mnuApplicationOption(i)
      End If
      .mnuApplicationOption(i).Caption = aryMenu(i)
      ApplicationAddresses(i) = "http://" & strDispatcher & _
        "/Applications/" & aryMenu(i) & _
        "/MainMenu.asp"
      CurrentApplication(i) = aryMenu(i)
    Next
    .Show
  End With
  Unload Me
Else
  Me.Caption = "Performing Security Check for ..."
End If
End Sub

The first real work in this procedure is done in the following block of code:

For Each ThisObject In brwWebBrowser.Document.All
  If Left$(ThisObject.id, 7) = "Address" Then
    lngCount = lngCount + 1
    ReDim Preserve aryMenu(lngCount)
    strMenu = Right$(ThisObject.id, Len(ThisObject.id) - 7)
    aryMenu(lngCount - 1) = strMenu
  End If
Next ThisObject

What is happening here is that we are examining the ASP file that has been delivered in response to the request we made during the Form_Load. Remember, we asked for a list of applications that the authorized user has permission to access. The way it works is as follows. Every tag in an HTML document is considered an object, and all of those objects are stored in the Document.All collection. That means that we can iterate through each object in the document in just the same way we would iterate through any other collection. We have formatted the ASP response page by creating a text box for each application. This text box is named by combining the name of the application with the prefix "Address". That means that as we iterate through the collection we can identify the objects that contain the names of applications that the authorized user has permission to access. As we iterate through the collection, we increase the size of the aryMenu string array each time we find a new application. Then we extract the name of the application from the name of the text box and add it to the array.
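To make the naming convention concrete, here is a sketch of the kind of HTML the ASP response page might render (the application names are invented for illustration; the real page is built in the next chapter):

<!-- One text box per permitted application; the id combines the -->
<!-- "Address" prefix with the name of the application -->
<input type="text" id="AddressTimeSheets">
<input type="text" id="AddressContacts">

The loop then picks out the ids that begin with the "Address" prefix and strips those seven characters, leaving the application names ("TimeSheets" and "Contacts" in this sketch) to be loaded into aryMenu.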

After we have finished iterating through the document's All collection, we re-dimension the two arrays we declared earlier so that they can hold the necessary information for each application:

ReDim ApplicationAddresses(lngCount)
ReDim CurrentApplication(lngCount)

The next block of code uses the information we collected from the document to build the menu options for the user. In case you are unfamiliar with the VB Load method: it is used to add new controls to a control array at run time. In this case, we are adding a menu option to the Application menu for each application that the authorized user has permission to access:

With frmMain
  For i = 0 To lngCount - 1
    If i = .mnuApplicationOption.Count Then
      Load .mnuApplicationOption(i)
    End If

In addition to adding the menu option, we also build a string that represents a valid address for each menu option. Then we stuff that string into the appropriate array. This has the effect of synchronizing the menu option indexes with the indexes of the array containing the addresses. In other words, if I select the menu option with the Index 3 from the Application menu, the address for the application I selected would be stored as the fourth element of the ApplicationAddresses array (the arrays are zero-based, so the 4th element has the Index 3). The same thing holds true for the CurrentApplication array.

    .mnuApplicationOption(i).Caption = aryMenu(i)
    ApplicationAddresses(i) = "http://" & strDispatcher & _
      "/Applications/" & aryMenu(i) & _
      "/MainMenu.asp"
    CurrentApplication(i) = aryMenu(i)
  Next
  .Show
End With

We'll take a look at what the ASP will be doing in the next chapter.

The only other routine in this form is a tiny block of code that just responds to the NavigateComplete2 event for the browser control. When the navigation is complete, it simply sets the caption on the form equal to the name of the ASP document.

Private Sub brwWebBrowser_NavigateComplete2 _
  (ByVal pDisp As Object, URL As Variant)
On Error Resume Next
Me.Caption = brwWebBrowser.LocationName
End Sub

At this point in time, we have configured the Launch Pad application for an individual authorized user. It has the menu options that can point this user to the applications that the user has permission to access. That means that all the user has to do to work with a web application is to select it from the Applications menu option in Launch Pad. Let's take a look at the code in the MDI form that causes this to happen.

The Main MDI Form

The MDI form for Launch Pad is really quite simple. It has three main menu options:

  • File - really just used to give us a place to put the Exit command
  • Applications - used to allow the user to navigate to a particular application
  • Window - contains all of the standard window management commands. It is especially useful because it allows users to flip between different applications, reports, etc.

Insert Image 39

These three menus also contain the following sub-items:

Insert Image 40

As you can see, the Applications menu contains one sub-menu that is a control array. This is the array that we loaded from the Splash form.

The first real work gets done in the Form_Load routine. All that we are doing here is retrieving the settings for the size of the MDI form, exactly as the user left them when they closed the application the last time. Just in case this is the first time the program was run and there are no last known values stored in the registry, we provide some reasonable defaults as the last parameter:

Private Sub MDIForm_Load()
Me.Left = GetSetting(App.Title, "Settings", _
  "MainLeft", 1000)
Me.Top = GetSetting(App.Title, "Settings", _
  "MainTop", 1000)
Me.Width = GetSetting(App.Title, "Settings", _
  "MainWidth", 6500)
Me.Height = GetSetting(App.Title, "Settings", _
  "MainHeight", 6500)
End Sub

The flip side of the Form_Load is the Form_Unload subroutine. What we are doing here is either creating or updating the registry values that represent the size state of the application when the user shuts down the application:

Private Sub MDIForm_Unload(Cancel As Integer)
If Me.WindowState <> vbMinimized Then
  SaveSetting App.Title, "Settings", _
    "MainLeft", Me.Left
  SaveSetting App.Title, "Settings", _
    "MainTop", Me.Top
  SaveSetting App.Title, "Settings", _
    "MainWidth", Me.Width
  SaveSetting App.Title, "Settings", _
    "MainHeight", Me.Height
End If
End Sub

Most of the work we need to do in this application is handled in the menu event handlers.

In the mnuApplicationOption menu item control array, we accept an Index that tells us what option the user selected. Then we use that value to retrieve an appropriate address from the ApplicationAddresses array. For housekeeping purposes, we set the value of the browser's CurrentApplication property to the value of the current application as indicated by the Index. The last thing we do before we display this instance of the Browser form is to navigate to the address that the user selected:

Private Sub mnuApplicationOption_Click(Index As Integer)
Dim frmAppBrowser As New frmBrowser

With frmAppBrowser
  .CurrentApplication = CurrentApplication(Index)
  .DocumentAddress = ApplicationAddresses(Index)
  .brwWebBrowser.Navigate .DocumentAddress
  .Show
End With
End Sub

The following window handling subroutines are self-explanatory:

Private Sub mnuWindowTileVertical_Click()
Me.Arrange vbTileVertical
End Sub

Private Sub mnuWindowTileHorizontal_Click()
Me.Arrange vbTileHorizontal
End Sub

Private Sub mnuWindowCascade_Click()
Me.Arrange vbCascade
End Sub

Finally, this is used to close the Launch Pad application:

Private Sub mnuFileExit_Click()
' Unload the form
Unload Me
End Sub

The Browser Form

The next form we will look at is the frmBrowser form. This form is incredibly simple. It is really not much more than an MDI child form with a browser control placed upon it.

We've already added the browser control component to the project, so just draw a browser on the form. It doesn't matter what it looks like; we have to handle the sizing of this control in a dynamic fashion:

Insert Image 41

As always, we use the declaration section of the module to declare the private variables that we will use to store the property values. In this case we need two property storage variables: one for the DocumentAddress property and another for the CurrentApplication property. I have also added another variable, mblnDontNavigateNow, that is used to indicate that the browser control is currently busy:

Option Explicit

Private mblnDontNavigateNow As Boolean
Private mstrDocumentAddress As String
Private mstrCurrentApplication As String

As usual, after the declaration section we need to provide the necessary property handlers. In this case that is given by the following code:

Public Property Get DocumentAddress() As String
DocumentAddress = mstrDocumentAddress
End Property

Public Property Let DocumentAddress _
  (strDocumentAddress As String)
mstrDocumentAddress = Trim(strDocumentAddress)
End Property

Public Property Get CurrentApplication() As String
CurrentApplication = mstrCurrentApplication
End Property

Public Property Let CurrentApplication _
  (strCurrentApplication As String)
mstrCurrentApplication = strCurrentApplication
End Property

The Form_Load for this form is simple: all we do is show the form and call the Form_Resize routine:

Private Sub Form_Load()
On Error Resume Next
Me.Show
Form_Resize
End Sub

This subroutine just updates the caption of the form to reflect its contents:

Private Sub brwWebBrowser_DownloadComplete()
On Error Resume Next
Me.Caption = brwWebBrowser.Document.All("Title")
End Sub

The Form_Resize routine is also amazingly simple. All we need to do here is to make sure that the browser is essentially the same size as the form it rests on:

Private Sub Form_Resize()
On Error Resume Next
brwWebBrowser.Width = Me.ScaleWidth - 100
brwWebBrowser.Height = Me.ScaleHeight
End Sub

That is really all of the work we have to do in order to create an application that provides a controlled environment for our applications' ASPs.

Summary

All we have really achieved in this chapter is to stand back a little and take a look at all the pieces from a different perspective. We started the book off by examining the infrastructure that we would be using to house our data objects, and then we spent a long time focusing on the particulars of how to build the objects. Finally, in this chapter we have been able to combine the two and look at how we can deploy our carefully constructed data objects across the enterprise's architecture.

We looked at our physical architecture from a more software-oriented approach, so that we saw the Data sphere as SQL Server, the Data and User Centric spheres as MTS, and the Presentation sphere as IIS. Finally, we talked about limiting the client's actions by constructing a typical desktop interface, but with embedded components. There's only one thing left to do now, and that's take a look at the ASP going on behind the scenes.
