This article is the second in a series. Part one offers a higher level view of some of the topics explained in more detail here.
Table of Contents
Do Not Use More Power Than You Actually Need
Do Not Use GET to Send Sensitive Data!
Never Trust Incoming Data - Details
Do Not Rely on the Client to Keep Important Data
Do Not Store Sensitive Stuff in the *SP Page Itself
Beware of Extensions
Keep an Eye on HTML Comments Left in Production Code
Check the Wizard-generated Or Sample Code
Language and Technology Specifics
Declarative vs. Programmatic
Distributed Systems and Firewalls
PKI Is Not a Silver Bullet
If certain pieces of functionality require authentication, plan to use it as early as possible instead of continuing to use anonymous access - be it to a Web server, a directory or as a guest-like account for the operating system. Using authenticated access to resources may require a different syntax, and may expose authentication, authorization, and impersonation issues that would otherwise stay hidden until later.
Also, using anonymous access to resources means that the code responsible for authentication/authorization is not actually used. If it's not used, it cannot be (unit-) tested. If it cannot be tested, bugs cannot be discovered until later.
Certainly, the amount of security put into the development stage must be reasonable. For instance, enforcing complex and unique passwords might be a nuisance for the developers while they are writing the code. Such restrictions can be added later.
If deciding what authentication mechanism to use is not easy, you can find a brief overview at
Just don't ask me how many times I had to say "Don't use the 'sa' account to access a SQL Server."
This section used to be about administrative accounts being (ab)used, and their use for common tasks continues to be the most frequent abuse of power. SQL servers are only part of the picture. Running code as an administrator or suid root is equally inappropriate - unless really needed, of course.
Why is the administrative login not good, even in a secure environment without any sensitive data? Because it prevents application isolation, accurate testing and proper accountability - and the first two directly impact the development work.
Using admin accounts is very appealing at first sight. The developer doesn't have to bother with access restrictions and can focus on the functionality. You've already guessed it: the problem has just been spelled out. With admin accounts, there is no access restriction. The code can do anything, anytime. That is, until the release date comes closer or the code moves into a pre-production environment where accounts and permissions are managed properly, and then things start to break. Tables that used to be readable or writable no longer are, because specific access rights have not been assigned; ACLs are applied, and various run-time errors occur. In a distributed application, even identifying the root cause can be challenging. All this adds debugging time at a moment when no one wants it.
There is another operational danger posed by using admin accounts. Because access is not confined to a specific application, you may inadvertently overwrite something else. When I was working on a project with SQL Server, I personally came very close to deleting someone else's tables because I was using the "sa" account from the management console. On that particular server there were several databases for different phases of the same project. They looked very similar, at least as far as the table names went, so a slip of the mouse to the next database in the list, followed by a Select All and Delete, almost made me the first lynched individual in the company's history. I still live because of the confirmation message.
The lesson: Use application-specific accounts, with rights identified as early as possible. Yes, it is likely the access rights will have to be refined over time, but unless you start making use of them, how will you find out?
This is an old one but still very valid.
When sensitive data is to be passed to the server, do not send it as a parameter in the query string, as in:
This is not appropriate because, like any other HTTP request, this will be logged in the logs of the Web server as well as in whatever proxies might be on the way. (And even if you haven't configured any proxy, transparent proxies can still be in the way.) The above request will get logged in cleartext, similar to:
2000-03-22 00:26:40 - W3SVC1 GET /process_card.asp cardnumber=1234567890123456 200 0 623 360 570 80 HTTP/1.1 Mozilla/4.0+(compatible;+MSIE+5.01;+Windows+NT) - -
Also, the entire URL may be stored by the browser in its history, potentially exposing the sensitive information to someone else using the same machine later.
SSL wouldn't help in this case, because it only protects the request in transit. Once it arrives at the destination, the request will be happily decrypted and logged in cleartext. An apparent rebuttal is that the server must be trusted anyway. Yes, it must, but this trust implies that private customer data is handled sensibly, and there is no reason for the server to store such sensitive information in cleartext. In some credit card authorization schemes, the application server is only an intermediary, and once the payment is authorized, the Web application does not store the credit card number (or at least not all of its digits).
The POST method uses the HTTP body to pass information. This is good in our case, because the HTTP body is not logged. Note, however, that by itself POST doesn't offer enough protection. The data's confidentiality and integrity are still at risk because the information is still sent in cleartext (or quasi-cleartext, as in Base64 encoding), so the use of encryption is a must for sensitive information.
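The difference can be illustrated with a short sketch (the host name and page are made up; any HTTP library behaves the same way):

```python
from urllib.parse import urlencode, urlsplit

card = "1234567890123456"

# GET: the parameters become part of the URL itself, so the Web
# server (and any proxy on the way) writes them to its access log.
get_url = ("https://shop.example.com/process_card.asp?"
           + urlencode({"cardnumber": card}))
print(urlsplit(get_url).query)  # cardnumber=1234567890123456 -- logged

# POST: the same data travels in the HTTP body, which is not
# logged; the access log only shows "POST /process_card.asp".
post_url = "https://shop.example.com/process_card.asp"
body = urlencode({"cardnumber": card}).encode()
```

Remember that POST alone still leaves the body in cleartext on the wire; it keeps the data out of the logs, but only encryption plus sensible server-side handling protects it end to end.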
It's worth saying it again and in more detail. What can constitute incoming data for a Web application?
The HTTP request itself. The URL, the method, the cookie if any, and the HTTP headers. Think what could happen if the URL were different - say, if any field passed by the client were changed or if the actual URL requested another page. Could the client see the session of another user? What if the parameters are not consistent with each other? Does the server application handle this case or does it fail, possibly with revealing error messages?
Data fields (e.g., form fields). There is so much that can go wrong with user-supplied data that you'll find this point repeated in the sections below. Such fields can overflow buffers. (If you are not sure why this is dangerous, see the section on C/C++.) If appended to a SQL statement, they can execute code on the SQL server. For a detailed explanation, see Rain Forest Puppy's article in Phrack 54.
This is a more specific case of the previous section, but worth pointing out. If you work on the server side of the application, never assume that what you sent to the browser got back unchanged. A case in point is relying on hidden form fields to maintain sensitive data between requests. For example, a shopping cart might send the item price or a discount rate as a hidden field in the form, so that when the customer submits the form, the price/discount will be submitted as well although this particular field has not been displayed to the user.
A malicious user can save the Web page locally, change the hidden field to whatever he wants, and then submit it or simply use a scripted tool to post fake orders.
Detailed information and an analysis of real-world commercial products that have this problem are found in "Form Tampering Vulnerabilities in Several Web-Based Shopping Cart Applications," issued by ISS on Feb 1, 2000 and available online at
A funny (but innocent) example of this flaw is on the Websites of two well-known security companies. They offer a number of whitepapers for download, but conditional upon filling in a form with personal details. A quick look in the HTML source shows that the form uses a hidden field to store the page where the visitor is redirected after filling in the form. Thus, a simple copy & paste into the URL bar will bypass the information collection stage.
The previous version of the whitepaper talked about a possible workaround by having the data hashed on the server prior to sending it to the client. As some readers pointed out, the description was incomplete. Indeed, simply hashing the data is not enough since, once the algorithm is identified, the client can modify the data and rehash it. Salting the data with a random value can solve this, but this means storing session-specific data on the server. Well, if session data is used anyway, it's actually more convenient to store all fields in session variables. Storing just the salt and the hash value uses less memory than a regular form, but the CPU overhead incurred by the two hashing operations means the performance suffers more in this scenario.
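The salted-hash workaround described above can be sketched with a keyed hash (HMAC); the key and field values here are of course made up:

```python
import hashlib
import hmac

# server-side secret, never sent to the client;
# if it leaks, the whole scheme is broken
SERVER_KEY = b"per-session random salt"

def protect(price: str) -> str:
    # the value goes in one hidden field, the keyed hash in another
    return hmac.new(SERVER_KEY, price.encode(), hashlib.sha256).hexdigest()

def verify(price: str, mac: str) -> bool:
    # recompute on submission and compare in constant time
    return hmac.compare_digest(protect(price), mac)

mac = protect("19.99")
print(verify("19.99", mac))  # True -- untouched form
print(verify("0.01", mac))   # False -- tampered price is rejected
```

Note the extra hashing work and key management this requires; as the text concludes, simply keeping the values in server-side session state is usually the better trade.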
The moral of the story is that if you suspect the client might change the data, simply don't rely on it at all. Store whatever you need on the server side.
The paper referred to in the previous version of this whitepaper is still worth mentioning because it illustrates another mistake: relying on the HTTP Referer field. See the link below.
http://www.webtechniques.com/archives/1998/09/webm/ for details
(*SP stands for ASP or JSP.)
Most of the time, this "sensitive stuff" would be usernames/passwords for accessing various resources (membership directories, database connection strings). Such credentials can be entered there manually or automatically put in by various wizards or Design Time Controls.
A legitimate question is why this would be a concern, since the *SP is processed on the server and only its results are sent to the client. There are a number of reasons. From the security standpoint, the past (including the recent past) has seen a number of holes in Web servers that allowed the source of an *SP page to be displayed instead of executed. For example, two (old and) very well known IIS bugs caused the ASP source to be displayed by appending a dot or the string ::$DATA to a URL ending in .asp (http://<site>/anypage.asp. or http://<site>/anypage.asp::$DATA). More recently, the "Translate: f" bug allowed the same outcome.
Similarly, two recent bugs have affected BEA WebLogic and IBM WebSphere.
A different issue but with the same outcome was reported about Allaire's JRun JSP engine http://www.allaire.com/handlers/index.cfm?ID=16290&Method=Full
Another reason for not hardcoding credentials in the page itself relates to development practices. Such information should be stored in a centralized place, preferably in a resource to which access can be audited.
An often-seen practice is to distinguish included files by using an .inc extension on the server side. However, this opens a security risk when that extension is not registered to be processed by the server: an attacker who knows (or guesses) the file name can request it directly, instead of the including page, and the file will be served back in all its glory, possibly revealing sensitive information.
This is a no-brainer. Of course, be sensible: Not all comments are bad, only those embedded in the HTML or client script and which may contain private information. (An example is a connection string that was once part of the server-side script, but was then commented out. In time, through inadvertent editing, it can reach the client script and thus be transmitted to the browser.) The comments are not dangerous per se, but can reveal information.
Server error messages can be revealing, disclosing information that is otherwise well protected under normal conditions. What can an error reveal? A number of things, such as:
Physical paths. If an included file is not found, the server may reply with an error stating "include file: c:\inetpub\wwwroot\common.asp not found." The physical path is not a danger by itself, but can reveal information that might be used further in other attacks, or can simply give away information about the infrastructure, such as in the case when UNC paths are used.
Platform architecture. For instance, an ODBC error message may reveal the database server used. Or the message can contain the exact version of the OS or the CGI/scripting engine, thus helping a malicious party to tune an attack. For a skillful attacker, even indirect information is helpful. If a particular piece of software is older, it may indicate that the server is not properly maintained, and thus other vulnerabilities are likely to be found out as well. Having detailed information about a platform can also serve in social engineering attacks, especially in large organizations.
The solution is to carefully review the error-related configuration of the server, as well as how errors are handled throughout the application. For instance, under IIS you can choose between "Send detailed ASP error message to client" and a generic error (the setting is under a Website's Home Directory/Configuration/App Debugging). The first option is the default, although it is the less secure of the two.
It would also be better to work with the QA team, which systematically goes through the Website anyway. If they find a detailed error message, it can be logged as an issue and followed up accordingly.
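The log-the-details, show-a-generic-page pattern looks roughly like this (the function names are illustrative, not from any particular framework):

```python
import logging
import traceback

log = logging.getLogger("app")

def handle_request(process):
    try:
        return process()
    except Exception:
        # full details (paths, driver errors, versions) stay in the
        # server-side log, where only administrators can read them
        log.error("request failed:\n%s", traceback.format_exc())
        # the client sees a deliberately uninformative page
        return "500 - An internal error occurred."

def broken_page():
    raise IOError(r"include file c:\inetpub\wwwroot\common.asp not found")

print(handle_request(broken_page))  # 500 - An internal error occurred.
```

The point is the asymmetry: the administrator gets everything, the anonymous visitor gets nothing useful for an attack.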
This is a more complex issue. After going through the introductory pages below, the reader is encouraged to read the materials available at the following links (more at the end of the section).
CERT® Advisory CA-2000-02 Malicious HTML Tags Embedded in Client Web Requests at
Not a very straightforward name, but a significant problem which can occur with sites that allow the user to input some data and later display it. Typical examples are registration information, bulletin board messages or product descriptions. In the context of this discussion, the "user" is the more or less anonymous user who visits the site, and not the site administrator that changes the content.
Why is this a problem?
Because it breaches trust. A user visiting a site has a level of trust in the content that comes from the server. Usually, this means the user expects the Website will not perform malicious actions against the client, and it will not attempt to mislead the user to reveal personal information.
With sites that accept user-provided data that is later used to build dynamic pages, an entire can of worms is opened. The Web content is no longer authored by the site's creators only; it also comes from what other (potentially anonymous) users have put in. The risk lies in the number of ways the user-input fields can be manipulated to include more than simple text, such as scripts or links to other sites. Taking the script example, the code would be executed on the client machine because it is indistinguishable from the genuine code written by the site developers: everything comes in the same HTML stream to the browser.
Quick example: Let's take a site that allows users to input the user's name through a form, and the value entered is later displayed. For brevity, we'll use the same form for both inputting the string and displaying it. The source for the form is
<html>
<%
if request.form("yourname") <> "" then
    Response.Write("Hello " + request.form("yourname"))
else
%>
<form method="POST">
<input type="text" name="yourname">
<input type="submit" value="submit">
</form>
<%
end if
%>
</html>
Enter Bad Guy who, instead of typing his name, types a small script in the input field, such as:

<script>alert("gotcha")</script>
When later the variable containing the name is displayed as part of a Web page, the visitor will get the script as if it were part of the legitimate site, and the script will be executed on the browser. Feel free to check for yourself, and then view the HTML source of the response Web page.
In our case, the script only consisted of a message box being displayed, but the author could be more creative. Such a scenario becomes very dangerous when a Website accepts content from one user and displays it to others as well. (The code above is rather usable for "self-hacking.") Typical examples are Web-based message boards or community sites. The injected script could perform unwanted actions on the client or send information to external sites.
Again, the fundamental issue here is that the trust the user put into the site is broken: the Web page that gets sent to the visitor contains not only trusted content from the authors but also untrusted content which, equally important, cannot be identified by the browser as being so.
There are other ways to inject script, such as within an HTML tag.
<a href=" [event]='bad script here' "> click me </a>
The script can even be hosted on another Web server (anonymous hosting companies or previously compromised servers being an excellent choice). In this case, the malicious string would contain a link to the real script. The example below illustrates yet another way of submitting malicious content: via cookies.
If the dynamic content comes from a cookie (example taken from the Microsoft advisory),
<% Response.Write("<BODY BGCOLOR=\"" + Request.Cookies("UserColor") + "\">"); %>
the cookie can be trivially manipulated on the client side to:
Cookie: %22+onload%3D%27window%2Elocation%3D%22http%3A%2F%2Fwww%2Eevilsite%2Ecom%22%3B%27
which would lead to
<body BGCOLOR="" onload= 'window.location="http://www.evilsite.com";'">
redirecting the user to another site.
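You can verify the decoding yourself; unescaping the cookie value above (Python shown, but any URL decoder will do):

```python
from urllib.parse import unquote_plus

# the URL-escaped cookie value from the example above
cookie = ("%22+onload%3D%27window%2Elocation%3D"
          "%22http%3A%2F%2Fwww%2Eevilsite%2Ecom%22%3B%27")

# unquote_plus turns %XX sequences back into characters
# and '+' back into spaces
print(unquote_plus(cookie))
# " onload='window.location="http://www.evilsite.com";'
```

The first %22 closes the BGCOLOR attribute early, and the rest becomes a live onload handler once the server pastes the value into the page.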
There are other ways to inject the script. Please refer to the two hyperlinks at the beginning of the section.
What to do? There are a number of ways of dealing with this issue. The core idea is to encode the user-input information in such a way that it will be displayed the same as the user input, but stored and transmitted in a form that will prevent the vulnerability from being exploited.
The solution is offered by what is called HTML encoding, a technique used when transmitting special characters in HTML code. In HTML, the characters < and >, for instance, have a special meaning: they signal the boundaries of a tag. But what if we want a Web page to contain those characters? The workaround is to use special character sequences that will be stored as such but displayed as the intended character (similar to \t, \n in the C world). The character < is HTML-encoded as &lt; and the > sign is encoded as &gt;.
This is classic HTML knowledge for a Web developer, but how is it used here? The information input by the user is HTML-encoded by the server and stored as such. For instance, the Server object in IIS exposes a method called, precisely, HTMLEncode, which takes a regular string as input and produces an output string with the special HTML characters replaced by the associated escape sequences. At display time, the HTML-encoded string is sent to the browser, which interprets the character sequences and displays the characters accordingly. What this means is that if the Bad User types in <script>, the server will encode it to &lt;script&gt;, and when the Well-behaved User gets a page with this field, the WBU will see <script> displayed as plain text (and may get alerted if he has read this document :-)), but the HTML source of the page will contain the escape sequences and not the <script> tag itself. What does this accomplish? It prevents the browser from interpreting the string as a tag.
URLs can be exploited as well, which is why they should be encoded with the appropriate method, Server.URLEncode.
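Python's standard library offers equivalents of both methods; a quick sketch (the injected string is a made-up example):

```python
import html
from urllib.parse import quote

bad = '<script>alert("gotcha")</script>'  # what the Bad User typed

# Server.HTMLEncode equivalent: the special characters become
# escape sequences, so the browser displays them instead of
# interpreting them as a tag
encoded = html.escape(bad, quote=True)
print(encoded)  # &lt;script&gt;alert(&quot;gotcha&quot;)&lt;/script&gt;

# Server.URLEncode equivalent, for values embedded in URLs
print(quote('" onload=\'...\'', safe=""))
```

Whatever the platform, the rule is the same: encode on output, at the moment user-supplied data is merged into HTML or a URL.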
In practice, there is more to discuss on this. There isn't a magic bullet, and the various options available are discussed more extensively at the links below. Perhaps one more thing to note is that protecting against this vulnerability requires code reviews.
Understanding Malicious Content Mitigation for Web Developers
HOWTO: Prevent Cross-Site Scripting Security Issues
Apache Cross Site Scripting Info
Java Web Server
Q253119 HOWTO: Review ASP Code for CSSI Vulnerability
Q253120 HOWTO: Review Visual InterDev Generated Code for CSSI Vulnerability
Q253121 HOWTO: Review MTS/ASP Code for CSSI Vulnerability
Wizards - when available - are nice and handy for learning new things. But when it comes to security, check what they do behind the scenes, i.e., what the generated code is. You may well find hardcoded credentials for accessing resources such as a database or a directory. Not only is this bad from the security standpoint, but from the development one as well: If the credentials change (for instance, when moving the code into a production environment), the functionality will break.
Same story with code copied and pasted from samples, even samples designed to "improve security." Even if the author intended them as such, that doesn't necessarily mean they are. Learn from samples, but don't trust them until you have at least understood and analyzed them.
The biggest problem with C is also the most frequent application-level attack: C's inability to detect and prevent improper memory access, with the direct result of allowing buffer overflows. A great deal of material has been written on this topic, but it is as valid as it was 20 years ago. The main reason is that prevention of buffer overflows is not done by the language itself; it is left to the programmer to implement, so in the real world it is rarely done - until, of course, the day someone finds out, and the vendor scrambles to fix it while the users hope the fix will arrive before an attacker exploits it.
Two excellent papers on buffer overflows are available at
A related problem and the subject of recent debates is the format string attack, in which a combination of sloppy programming and lack of input data validation leads to the same fatal result. Read more in Tim Newsham's paper at http://www.guardent.com/docs/FormatString.PDF, and the thread on Bugtraq.
Preventing these issues can only be done by reviewing the code for insecure practices. The trained eye can be aided by automated tools that scan for known unsafe constructs. Examples of such tools are:
Recently there have been efforts to patch the system libraries so that buffer overflows are not effectual. Having a more robust run-time environment is certainly a good thing. However, relying on it instead of preventing the root cause would not be that wise.
One of the reasons Java is so popular is its intrinsic security mechanism. Assuming the virtual machine is implemented correctly, malicious language constructs are not possible in Java. For instance, buffer overflows, use of uninitialized variables, invalid opcodes and similar vulnerabilities that plague other languages/platforms are stopped by the JVM. (This assumes a JVM that works correctly. As with any piece of software, there have been bugs in JVM implementations allowing exploits that got around these mechanisms.)
Important note, however: The above paragraph doesn't assert it is impossible to write a malicious Java application. You certainly can, but it usually involves a non-Java factor for the attack to be successful. For instance, if someone installs a Java application without giving consideration to whether it should be trusted, then the application would be able to do pretty much anything it pleases (unless sandboxed, but that's not the default configuration).
Having said that, a Java application is still prone to fewer security problems than its C/C++ counterpart. Also, having built-in features that enable the use of security policies and digital signatures is a definite plus for a language.
The core mechanisms supplied in Java are based on the codesource and identity of the code signer, but not on the identity under which the code is run. Filling in this need is the emerging Java Authentication and Authorization Service, which adds user-role security to the existing code-centric approach. See the link below for details.
Books published on Java security are many, but check Scott Oaks' Java Security (see http://www.oreilly.com/catalog/javasec/, where you can download the code and the errata) and Li Gong's Inside Java 2 Platform Security (ISBN 0201310007).
Sun has published a document on "Security Code Guidelines" available online at
A useful resource is David A. Wheeler's briefing on Java Security. You can find it at
David Wheeler also authored the "Secure Programming for Linux and Unix HOWTO" document referenced a few times in this document.
If you are debugging security-related problems and want to go beyond just the exceptions, try running the application with the -Djava.security.debug flag. To see what options are available, run java -Djava.security.debug=help.
You will be able to see the results of CheckPermission calls, loading and granting of policies, a dump of all relevant domains and other info.
How to write secure CGI scripts is best described in dedicated FAQs, so this section will simply point to the appropriate places.
Lincoln Stein's World Wide Web Security FAQ
Paul Phillips' page on CGI Security
Speaking of CGI, using a CGI scanner would be a useful additional check. Have a look at RFP's Whisker Scanner, available at
The entire site is a must-see resource for Web security.
In the Web world, Perl is often used for CGI scripts (the administration uses are not covered by this document), so the previous section is also a good read. For Perl specifics, please refer to the following documents.
Gunther Birznieks' "CGI/Perl Taint Mode FAQ"
The perlsec Documentation Pages
http://www.perl.com/CPAN-local/doc/manual/html/pod/perlsec.html (or look in your distribution).
This is not a section about Unix security (how could it be just a section when there are books? :-) but merely a pointer into how to write more secure software running under Unix. Certainly, many of the language-specific sections above are very applicable to Unix. For issues specific to the platform itself, however, please check the following resources.
Secure Programming for Linux and Unix HOWTO
Secure UNIX Programming FAQ
Writing Safe Setuid Programs
How to find security holes
How to Write Secure Code
http://www.shmoo.com/securecode/ (provides links to other documents)
XML security is becoming essential both for B2B and for signing and protecting XML-based forms, which hopefully will one day replace both paper and HTML forms. When you talk business, integrity and identity checks are a must for electronic transactions. Setting a standard for digitally signing XML documents and communications is the goal of the joint IETF/W3C xmldsig working group. See the links below for more details.
Until the standard comes out, there are a number of vendor-specific solutions you can choose from. Some products the author is aware of are those from:
Most serious Web applications are complex enough that componentizing them is a must. Whether it's with COM or EJB, this adds a layer of complexity to the (security) architecture.
For the security architect, it raises a few specific issues, such as how authentication, authorization and impersonation/delegation of credentials work in a distributed environment. Please also see the section on distributed architectures and firewalls.
COM security also is a topic big enough for a book, and there is one. It's written by the man to ask about COM security, Keith Brown from Developmentor. Be sure to check his page
for details on his brand-new book, Programming Windows Security, and also for cool info and utilities to explore the COM world.
To find out how IIS and MTS/COM+ work together to impersonate a client, read the following resources:
and the backgrounder at
This last resource has useful tips on the difference between DCOMCNFG and OleView when it comes to setting component security.
The EJB specs encourage a separation of duties between the Bean Provider (who is not really concerned with security), the Application Assembler (who assigns security roles to the interfaces) and the Deployer (who maps security principals to the roles identified by the Assembler).
A presentation of the EJB security model is found in chapter 9 of Sun's "Designing Enterprise Applications with J2EE" whitepaper. The document used to be available at ftp://ftp.java.sun.com/pub/jbp/aspoiduw/jbp-1_0_1-doc.pdf, but at the time of writing (mid-September) the directory was empty. If you know of an alternative location, please let me know. It is a good read, as it focuses more on concepts and less on actual implementations.
A useful resource I found and one that goes more into the real world issues is the recent book Building Java Enterprise Systems with J2EE (ISBN: 0672317958). See chapters 24-28.
Details on how the EJB security specs are actually implemented by various vendors can be found in their product documentation, often online. Here are some links to such documentation.
Gemstone (not Gemstone's site, though - please let me know if you find the direct link)
Declarative security takes place when the access control is set from outside the application, usually through an administrative interface. Programmatic security is the case in which the logic in the code checks the credentials and associated rights. In most cases, Web applications will be a mixture of these two methods of enforcing security controls.
When it's available, declarative security is quite useful: File permissions, MTS or database server roles are all examples of this type. They are easy to administer, and require no code changes or an understanding of the code for regular operational tasks. Of course, knowing how to apply and integrate them into the whole picture requires a thorough understanding, but once the pieces are in place, daily tasks (such as user and group management) can be delegated to a different team.
Declarative security is good to protect resources between different groups of users (i.e., with different rights). However, when you want a greater granularity, you'll have to use programmatic security. For instance, to distinguish between two users from the same group, permissions and roles are not enough. When you do Web banking, the Web server can allow anonymous access to some pages and enforce authentication to others, but once the users authenticate, it's the code's task to prevent one user from accessing another's account.
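The banking distinction can be sketched in a few lines (account numbers and names are invented):

```python
# owner of each account -- in reality this comes from the database
ACCOUNT_OWNERS = {"1001": "alice", "1002": "bob"}

def view_account(authenticated_user, account_id):
    # declarative security (the Web server's authentication) already
    # established who the user is; only application code can decide
    # *which* records that user may see
    if ACCOUNT_OWNERS.get(account_id) != authenticated_user:
        raise PermissionError("access denied")
    return "balance for account " + account_id

print(view_account("alice", "1001"))  # allowed: her own account
```

Both alice and bob pass the declarative check equally; the per-record decision has to live in code.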
Programmatic security can also help when you need better granularity of controls than what declarative can offer. For instance, with MTS components, you can enforce security on a per-interface level. If you want to have different permissions for some methods within the same interface, however, you'll have to resort to calling ObjectContext's IsCallerInRole method or use COM+. It's the same story when you want to know more about the security context in which the execution takes place and to distinguish between the original and the direct caller. COM+ is better at delegation and impersonation so, in this context, make sure you know whether the application will run under IIS 4.0 or IIS 5.0.
There is no hard and fast rule for when to choose each of the two approaches. The key is to understand where each fits and how you can use it better for your purposes.
Any serious Web application is inevitably spread across multiple machines. Since these machines communicate, the protocol used and the infrastructure layout are significant security-wise.
Unlike a common development environment, where all machines sit on the same LAN and can communicate freely, a production environment often has the Web server isolated in a DMZ, with the rest of the servers (database, directory, application) hosted on the internal network. Firewalls/routers protect both the DMZ from the open Internet and the internal LAN from the DMZ, which means the second firewall must be configured to allow the traffic required between the piece running on the Web server and the rest of the servers in the LAN. Or, to put it from another perspective, the application must be designed so the servers can talk through the existing firewalls.
Outside the mainframe world (with which the author is not familiar), the major protocols used for inter-machine communication are DCOM and RMI.
Microsoft's DCOM is built on top of RPC. We won't discuss here what DCOM is or how to use it. For the security architect, DCOM poses some problems because it is not a firewall-friendly protocol. You can learn the details from a paper on exactly this topic; please see "Using Distributed COM with Firewalls" at
At the end of the day, you will still have to open ports that administrators are not comfortable with. Allowing incoming traffic to the RPC port mapper (port 135) is not without risks: using freely available RPC endpoint mapper tools, an attacker can learn a number of things about the server being queried.
COM Internet Services (aka DCOM over HTTP) does not make all security administrators comfortable. Tunneling a protocol through another implies a degree of covertness (read: makes it harder to monitor intrusions) and may not work with the existing proxy infrastructure.
If you use DCOM, please see the section on COM/DCOM for additional information.
RMI has several methods of connecting across the firewall: directly, tunneled through HTTP, or relayed through a CGI script. The first is generally less appropriate in a secure environment. You can read about the pros and cons of the others in the RMI, Servlets and Object Serialization FAQ available at
and Rudolf Schreiner's paper on CORBA Firewalls available at
The same Website hosts other documents of interest on CORBA security.
SOAP is the industry's response to the clash between traditional distributed protocols and Internet security requirements. The result of a group effort by multiple vendors, the protocol was designed to be firewall-friendly. It works over HTTP by adding a few specific headers (such as SOAPMethodName) and by transporting the bulk of the data in the HTTP body (with a content type of text/xml or text/xml-SOAP; I've found both in different docs). Alternatively, requests can use a SOAP-specific HTTP verb, M-POST. Either way, firewalls that inspect the content can filter the SOAP traffic. The ability to use SSL is also a Good Thing, as some environments require encrypted traffic between a DMZ and the internal LAN.
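To make the firewall-friendliness concrete, here is roughly what such a request looks like on the wire. The endpoint, namespace, and method name are made up for illustration; the envelope namespace shown is the early SOAP one, and details vary between SOAP versions:

```
POST /StockQuoteService HTTP/1.1
Host: www.example.org
Content-Type: text/xml
SOAPMethodName: urn:example-namespace#GetLastTradePrice

<SOAP-ENV:Envelope xmlns:SOAP-ENV="urn:schemas-xmlsoap-org:soap.v1">
  <SOAP-ENV:Body>
    <m:GetLastTradePrice xmlns:m="urn:example-namespace">
      <symbol>ACME</symbol>
    </m:GetLastTradePrice>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```

Because the method being invoked is visible in a plain HTTP header and the payload is readable XML, a content-inspecting firewall can allow or deny individual calls — something it cannot do with an opaque binary protocol like raw DCOM.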
To find out more, read "Simple Object Access Protocol (SOAP) and Firewalls" available at
SOAP 1.0 specifications
DevelopMentor's SOAP FAQ
For the past few years, each year has been touted as the Year of the PKI. PKI is indeed a very cool technology and can do a lot of things, but only if understood and implemented properly.
A common mistake in the Web world is to decide to use certificate authentication when there is no PKI in place and no plans to implement certificate management. Because certificates are easy to generate, they may give the wrong impression there's nothing more to worry about. You generate the certificate, install it in the browser and, behold, you have certificate authentication. However, checking a certificate's validity or managing certificates is not necessarily a trivial task.
An excellent introduction (and more) to PKI is Understanding the Public-Key Infrastructure (by Carlisle Adams & Steve Lloyd, ISBN: 157870166X). Ellison and Schneier's "Ten Risks of PKI" whitepaper is also a good read. See http://www.counterpane.com/pki-risks.html
Also, make sure you understand the default policies in the different products involved and whether you can customize them enough for your needs.
The Real World is not necessarily fair and trustworthy, and this applies to security software as well. Once in a while, you will find products with larger-than-life claims: "revolutionary breakthroughs", the cure for all your security concerns, the secret software that gives you 100% security forever without requiring you to do or understand anything, etc. You get the idea. Why these claims are bad and how to spot such cases is the subject of a dedicated FAQ (the last version is not very recent, but it's still a very good read)
and of numerous commentaries in Crypto-Gram
Web applications (among others) use random values for a number of purposes, most often for session or user IDs. Network-savvy readers will also know that a similar mechanism is employed by the TCP protocol for its sequence numbers.
Why is it important that a session ID be truly random? From the server's standpoint, the session ID is what distinguishes one client from another. This information must be shared with the client, of course, either as part of the query string in the URL or sent in a cookie. A malicious client may substitute a different ID there in order to impersonate someone else who happens to have a session at the same time. If the session IDs of the other connected users can be guessed, the malicious client will probably succeed. This is where the algorithm used to generate the session IDs becomes essential: if they are generated in a truly random fashion (and there are degrees of randomness here), the malicious client has a much harder time.
Writing a random number generator is not trivial. The first hurdle is that anything generated algorithmically is not random in nature; it can only appear random. This is why such algorithmic solutions are called pseudo-random number generators (PRNGs for short). Most modern languages have a built-in PRNG, but it always produces the same cyclic sequence of values; by choosing a seed, the user only changes the place in that sequence from which values are retrieved. For our purposes, this is not secure enough.
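That predictability is easy to demonstrate: two java.util.Random instances created with the same seed produce exactly the same "random" sequence, which is precisely what makes seed-based session IDs guessable.

```java
import java.util.Random;

public class PredictablePrng {

    // Returns the first n values drawn from a Random seeded with the given seed.
    public static int[] firstValues(long seed, int n) {
        Random rng = new Random(seed);
        int[] out = new int[n];
        for (int i = 0; i < n; i++) {
            out[i] = rng.nextInt();
        }
        return out;
    }

    public static void main(String[] args) {
        // An attacker who recovers (or guesses) the seed can replay
        // the exact sequence the server is producing.
        int[] server = firstValues(42L, 5);
        int[] attacker = firstValues(42L, 5);
        System.out.println(java.util.Arrays.equals(server, attacker)); // prints "true"
    }
}
```

Seeding with the current time, a common habit, barely helps: the attacker only has to search a small window of plausible timestamps.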
In order to overcome the inherent predictability of PRNG algorithms, some external input is needed. For instance, Java's SecureRandom class seeds itself by creating a very large number of threads and measuring their timing and other parameters, but this incurs a high load on the CPU, which is not what we want on a Web server.
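Despite the seeding cost, SecureRandom is still the sensible choice in Java; the cost can be paid once by reusing a single instance. A minimal sketch of generating an unguessable session ID (the class and ID format are my own, not from any framework):

```java
import java.security.SecureRandom;

public class SessionIds {

    // Seeding a SecureRandom is expensive, so create it once at startup
    // and reuse it rather than constructing one per request.
    private static final SecureRandom RNG = new SecureRandom();

    // Returns a 128-bit session ID rendered as 32 hex characters.
    public static String newSessionId() {
        byte[] bytes = new byte[16];
        RNG.nextBytes(bytes);
        StringBuilder sb = new StringBuilder(32);
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(newSessionId());
    }
}
```

With 128 bits of cryptographic-quality randomness, guessing another user's live session ID is computationally hopeless, which is the property the section is after.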
An interesting solution (which also provides true randomness) is offered by
where atmospheric noise is used to generate, well, random numbers. The site also provides more information about randomness.
If you suspect you're having security problems with your code, check the logs. They may save you a lot of time and cries of "What on earth is happening?" Of course, to make use of them you must have logging turned on (and set to as much detail as is reasonable). If you don't find any relevant entries at all, check whether that particular information is enabled for logging (full logging is often not the default configuration, for performance reasons), and also whether the application logs it at all. Once in a while you will encounter services that simply do not have enough logging capabilities.
Which brings us to the second half of this section: in order to help yourself and others debug your application, create meaningful logs. This is part of the larger issue of providing real help for the user. A mere "Access denied" error doesn't tell much. An "Access denied for user X while attempting to update the resource Y" is much better.
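The difference is a few minutes of coding. A sketch (the helper class and message format are made up for illustration):

```java
public class AccessMessages {

    // A bare "Access denied" forces whoever reads the log to guess;
    // naming the user, the action and the resource does not.
    public static String denied(String user, String action, String resource) {
        return "Access denied for user '" + user + "' while attempting to "
             + action + " the resource '" + resource + "'";
    }

    public static void main(String[] args) {
        System.out.println(denied("jsmith", "update", "/accounts/1234"));
        // prints: Access denied for user 'jsmith' while attempting to update the resource '/accounts/1234'
    }
}
```

One caveat worth keeping in mind: the detailed message belongs in the server-side log; the error page shown to an anonymous visitor should stay terse, so as not to hand reconnaissance data to an attacker.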
SSL is often misunderstood ("We use SSL, therefore we are secure") or over-trusted. SSL is in fact a common term for several protocols (SSLv2, SSLv3 and the TLSv1 standard), of which SSLv2 should not be used anymore. An interesting recent survey regarding the level of security provided by SSL has been published by Eric Murray at http://www.meer.net/~ericm/papers/ssl_servers.html
Another possible oversight is to leave SSL disabled until the day of going live. The major problem here (apart from some testing issues) is that HTTPS imposes a much higher load on the CPU than plain HTTP. A site that supports 500 concurrent users may find that, once SSL is enabled, the number goes down dramatically. One way of addressing this without multiplying the machines is to use an SSL accelerator, a device that offloads the cryptographic work from the server's CPU. There are several products out there offering this. If you want to know which to choose and on what criteria, check the following document; it's an excellent introduction to the real world of SSL benchmarking.
George Guninski is a researcher who has uncovered numerous vulnerabilities, especially in client-side and cross-site scripting. His home page is
David LeBlanc's "Writing Secure Code" Columns
SecurityPortal is the world's foremost on-line resource and services provider for companies and individuals concerned about protecting their information systems and networks.
The Focal Point for Security on the Net (tm)