Instance-Based Learning: A Java Implementation

Thursday Oct 31st 2002 by Sione Palu

Instance-Based Learning may help you with solutions to many of your future problems... you just don't know it yet.


Instance-Based Learning (IBL) classifies a new instance (the target) by generalizing from the stored training examples. Training examples are processed only when a new instance arrives. Instance-Based Learning methods are sometimes called Lazy Learning because they delay processing until a new instance must be classified. Each time a new query instance is encountered, its relationship to the previously stored examples is examined in order to assign a target function value to the new instance.

To sum up the first paragraph: search for the best match, a similar match, or a close match, but not necessarily an exact match. For example, the popular industry draughting software for architecture and engineering, called AutoCAD, contains a standard design library plus the user's previously stored designs. If a new project requires some machinery parts to be designed, the engineer specifies some design parameters (attributes) to the AutoCAD software. It will retrieve similar instances of any past design previously stored in the database. If a previous design instance matches exactly, it retrieves that one; otherwise, it retrieves the closest design match. Many similar instances can be retrieved, and the best attributes of each design can be combined and used to design a completely new one.


Instance-Based Learning (IBL) algorithms consist of simply storing the presented training examples (data). When a new instance is encountered, a set of similar, related instances is retrieved from memory and used to classify the query instance (target function).

The following are the most common IBL methods:

  • k-Nearest Neighbour
  • Locally Weighted Regression
  • Radial Basis Function

IBL approaches can construct a different approximation of the target function for each distinct query instance that is to be classified. Some techniques only construct local approximation of the target function that applies in the neighbourhood of the new query instance and never construct an approximation designed to perform well over the entire instance space. This has a significant advantage when the target function is very complex, but can still be described by a collection of less complex local approximations.

k-Nearest Neighbour

The k-Nearest Neighbour algorithm is the most basic of all Instance-Based Learning (IBL) methods. The algorithm assumes all instances correspond to points in the n-dimensional space Rn. The nearest neighbours of an instance are defined in terms of standard Euclidean geometry (distances between points in n-dimensional space). More precisely, let an arbitrary instance x be described by the feature attribute list < a1(x), a2(x), a3(x), ..., an(x) >, where ar(x) denotes the value of the rth attribute of instance x. The distance between two instances xi and xj is given by Equation 1. This is the general form for calculating distance in n-dimensional space.

Equation 1

    d(xi, xj) = sqrt( Σ (r = 1 to n) [ ar(xi) - ar(xj) ]² )

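Equation 1 translates directly into code. The following is a minimal Java sketch of the n-dimensional Euclidean distance; the class and method names are mine and are not part of the Selection Engine library discussed later.

```java
/**
 * A minimal sketch of Equation 1: the Euclidean distance between two
 * instances described by n numeric attributes. Names are illustrative
 * only, not part of the Selection Engine API.
 */
public class EuclideanDistance {

    /** d(xi, xj) = sqrt( sum over r of (ar(xi) - ar(xj))^2 ) */
    public static double distance(double[] xi, double[] xj) {
        if (xi.length != xj.length) {
            throw new IllegalArgumentException(
                "instances must have the same number of attributes");
        }
        double sumOfSquares = 0.0;
        for (int r = 0; r < xi.length; r++) {
            double diff = xi[r] - xj[r];
            sumOfSquares += diff * diff;
        }
        return Math.sqrt(sumOfSquares);
    }

    public static void main(String[] args) {
        // Points A = (1, 6) and B = (4, 3) from Figure 1; prints sqrt(18)
        System.out.println(distance(new double[]{1, 6},
                                    new double[]{4, 3}));
    }
}
```

The same method works unchanged for any number of dimensions, which is the point of Equation 1.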
In nearest-neighbour learning, the target function may be either discrete-valued or real-valued. Consider learning discrete-valued target functions of the form f :Rn->V, where V = {v1, v2, v3, ..., vs} is a finite set and Rn is real n-dimensional space. The k-Nearest Neighbour algorithm for approximating a discrete-valued target function is given in Algorithm 1 below:

Algorithm 1     (k-Nearest Neighbour algorithm for discrete-valued function)

Training Algorithm:
  • For each training example <x, f(x)>, add the example to the list training_examples

Classification Algorithm:
  • Given a query instance xq to be classified,
    • Step 1: Let x1, x2, ..., xk denote the k instances from training_examples that are nearest to xq

    • Step 2: Return

          f^(xq) = argmax over v ∈ V of  Σ (i = 1 to k) δ( v, f(xi) )

      where δ(a, b) = 1 if a = b and δ(a, b) = 0 otherwise ( argmax means the value of v that maximizes the sum ).

The algorithm for continuous-valued target functions is the same as Algorithm 1 shown above, except that Step 2 is replaced with the following expression:

    f^(xq) = ( f(x1) + f(x2) + ... + f(xk) ) / k
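The discrete-valued version of Algorithm 1 can be sketched in Java as follows. All class and method names here are mine, not the Selection Engine's; training simply stores each example, and classification takes a majority vote among the k nearest stored examples (the continuous-valued variant would instead return the mean of their target values).

```java
import java.util.*;

/**
 * A sketch of Algorithm 1: k-Nearest Neighbour for a discrete-valued
 * target function. Illustrative names only.
 */
public class KNearestNeighbour {
    private final List<double[]> examples = new ArrayList<>();
    private final List<String> labels = new ArrayList<>();

    /** Training: simply store the example <x, f(x)>. */
    public void train(double[] x, String fx) {
        examples.add(x);
        labels.add(fx);
    }

    /** Classification: majority vote among the k examples nearest to xq. */
    public String classify(double[] xq, int k) {
        // Step 1: order the training examples by distance to xq
        Integer[] idx = new Integer[examples.size()];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(
                i -> distance(examples.get(i), xq)));
        // Step 2: argmax over labels of the k nearest examples
        Map<String, Integer> votes = new HashMap<>();
        for (int i = 0; i < k && i < idx.length; i++) {
            votes.merge(labels.get(idx[i]), 1, Integer::sum);
        }
        return Collections.max(votes.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    private static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int r = 0; r < a.length; r++) sum += (a[r] - b[r]) * (a[r] - b[r]);
        return Math.sqrt(sum);
    }
}
```

Sorting all examples for every query is the naive approach; as discussed later, indexing the training examples is the usual way to reduce this classification-time cost.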

For example, to find the Euclidean distance in 2-dimensional space (n=2), take two points from the plot in Figure 1 below, say point A and point B. The axes (x-axis and y-axis) have arbitrary units.

    Figure 1

The coordinates are A = (1, 6) and B = (4, 3). Now use Equation 1 to calculate the distance between point A and point B, that is d(A,B), shown below:

    d(A,B) = sqrt( (1 - 4)² + (6 - 3)² )
           = sqrt( 9 + 9 )
           = sqrt( 18 )
           = 4.24 (rounded to two decimal places)

    It is easy to calculate distance in 2-dimensional space because we all have been taught how to plot graphs in high school Mathematics. In n-dimensional space, the calculation is extended to the nth axis. Even though we cannot visualize dimensions in more than 3D (that is x, y, and z), calculation of distances between points in n-dimensional space is easy using Equation 1.

The dimension is the number of attributes. Suppose you have a database of people. David, Daniel, Patrick, and John are people's names, all stored in this database as instances. Some attributes from the database could be < weight, height, test_scores, and age >, so the instances are the records (rows) of the people table and the attributes are its columns.

Table 1

    name      weight (kg)   height (cm)   test_scores (out of 100%)   age (years)
    David     108.4         177           78                          31
    Daniel    ...           ...           ...                         ...
    Patrick   ...           ...           ...                         ...
    John      104           198           55                          38

Now, we have a Euclidean geometry of 4-dimensional space (n=4), because we have four attributes < weight, height, test_scores, and age >; note that the attribute name is omitted here because it is symbolic (non-numeric). We can find the Euclidean distance between the instance David and the instance John by using Equation 1. The coordinates in our 4D space are David = (108.4, 177, 78, 31) and John = (104, 198, 55, 38), with the calculation shown below:

    d(David, John) = sqrt( (108.4 - 104)² + (177 - 198)² + (78 - 55)² + (31 - 38)² )
                   = sqrt( 19.36 + 441 + 529 + 49 )
                   = sqrt( 1038.36 )
                   = 32.2236 (rounded to 4 decimal places)

The following 2D plot of Figure 2 shows a query point xq amongst the instances {A, B, C, and D} of the training examples. When k=1, we have a 1-Nearest Neighbour, shown by the blue circle centered at point xq. Point C is within the radius of the blue circle; therefore, C is the 1-Nearest Neighbour. The magenta circle centered at point xq represents k=2, that is, 2-Nearest Neighbours. The 2-Nearest Neighbours of the query point xq are points B and C.

    Figure 2

    The concept of similarities or closeness has no obvious analogy in the world of SQL search. Suppose that a target instance called Mark, who is not stored in our hypothetical database from Table 1, has the following attributes: < weight=96, height=187, test_scores=91, and age=34 >. If you try to do an SQL search for the closest person who has attributes that match Mark's attributes, you will definitely get a NULL return. The SQL statement is shown in Listing 1.

    Listing 1
   SELECT *
     FROM people
     WHERE weight = 96
       AND height = 187
       AND test_scores = 91
       AND age = 34

In an SQL search, you either get NULL or are overwhelmed with a tremendous number of matches (say, in a typical utility company's customer database) that you have to sift through and sort. SQL is generally a hit-or-miss scenario with no in-between (that is, either true or false). This dilemma of boolean (bivalent) logic is addressed by instance-based learning methods such as k-Nearest Neighbour, and also by Fuzzy Logic (where a hit or miss is specified with a membership or certainty level). In the paradigm of Instance-Based Learning and Classification, such as k-Nearest Neighbour, there is always a neighbour as long as there are training examples (data).

    The k-Nearest Neighbour algorithm is refined to weight the contribution of each of the k neighbours according to their distance to the query point xq, giving greater weight to closer neighbours. The weighting factor is shown by Equation 2:

Equation 2

    wi = 1 / d(xq, xi)²

The Distance-Weighted Nearest Neighbour algorithm is the same as Algorithm 1, but now Step 2 is multiplied by the weighting factor wi. The Selection Engine (SE) does not calculate the weighting factor using Equation 2; instead, weights are arbitrary values assigned by the user. Since SE is extensible, I have described this variation of the k-Nearest Neighbour algorithm in case you want to write a class that implements distance weighting.
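Although SE leaves weights to the user, the inverse-square weighting of Equation 2 is easy to implement. Here is a minimal sketch; the names are mine, not part of the Selection Engine.

```java
/**
 * A sketch of the distance weighting of Equation 2:
 * w_i = 1 / d(xq, xi)^2, so closer neighbours get larger weights.
 * Illustrative names only, not part of the Selection Engine.
 */
public class DistanceWeight {

    public static double weight(double[] xq, double[] xi) {
        double d = distance(xq, xi);
        // If xq coincides exactly with a training example, a common
        // convention is to use that example's value directly; here we
        // simply guard against division by zero.
        if (d == 0.0) return Double.POSITIVE_INFINITY;
        return 1.0 / (d * d);
    }

    private static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int r = 0; r < a.length; r++) sum += (a[r] - b[r]) * (a[r] - b[r]);
        return Math.sqrt(sum);
    }
}
```

A distance-weighted classifier would multiply each neighbour's vote (or target value) by this factor before taking the argmax or the average.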

    The other two IBL methods, the Locally Weighted Regression and Radial Basis Functions, are not discussed in this article because they are not implemented in the Selection Engine (refer to the resources for cited publications if you are interested).

    Case-Based Reasoning (CBR)

    Instance-based methods such as k-Nearest Neighbour and locally weighted regression share three key properties. First, they are lazy learning methods in that they defer decisions on how to generalize beyond the training data until a new query instance is observed. Second, they classify new query instances by analyzing similar instances while ignoring instances that are very different from the query. Third, they represent instances as real-valued points in an n-dimensional Euclidean space.

    Case-based reasoning is a learning paradigm based on the first two of these principles, but not the third. CBR is an artificial intelligence methodology that uses specific encapsulated prior experiences as a basis for reasoning about similar new situations. CBR systems rely on various knowledge containers, such as the case-base of prior experiences and similarity criteria for comparing situations and retrieving the most relevant cases. Explicit or implicit changes in the reasoning environment, task focus, and user base may influence the fit of the current knowledge state to the task context, which can affect the quality and efficiency of reasoning results. Over time, the knowledge containers may need to be updated to maintain or improve performance in response to changes in task or environment.

    A case-based reasoner learns after each problem-solving session by retaining relevant information from a problem just solved, making the new experience available for future problem solving. Crucial steps in a CBR process include finding a good match to a new problem, adapting a previous solution to successfully solve the new problem, and deciding how to index and store a new case for later effective retrieval.

    • Development: Case Based Reasoning was originally stimulated by research into how people remember information and how they are in turn reminded of information. It was observed that people commonly solve problems by remembering how they solved similar problems in the past. Case based reasoning was developed as a methodology that judges a new problem by comparing it to already classified cases. The strength of a CBR engine lies in the fact that it does not require the exact criteria of its new problem case, its target case, to be matched.

      The target case is specified by a set of attributes and values that uniquely define that case. A CBR retrieval engine retrieves the most similar case or cases from the case base and adapts the retrieved solutions to fit the specifics of the target case. Hence to create a CBR system, you only need records of cases of previously solved problems. It is not necessary to know how these problems were solved in the first place.

    • Context for CBR: Case-based reasoning has proven itself to be a methodology suited to solving problems where it is difficult or impossible to elicit first-principle rules from which solutions may be created. In industry, these problem areas have very often been handled by one or two human experts whose experience with similar problems has made them best qualified to provide new solutions. It effectively means that industries in which certain problems occur are dependent on the knowledge collated by a small number of individuals and are vulnerable should these human experts leave.

    CBR expert systems reason by memory rather than rules. The driving force behind case-based methods in AI is the machine learning community, and CBR is regarded as a subfield of machine learning. CBR learning occurs as a natural by-product of problem solving. It is a waste of effort to use similarity-based matching such as CBR if you know exactly what you are looking for, such as an item's "key"—a part's part number, a person's social security number, an order's order number, and so on. Similarity-based matching is meant to be used when you don't know what options you have or when you want to have your items ranked.

    Selection Engine (CBR Java Implementation)

    Selection Engine (SE) is an open source project written in Java for CBR. This library is not yet documented (Java Docs) by the author but the author has included some sample code and tutorial notes with the distribution. The capabilities of SE are listed below:

    • does standard data filtering
    • computes similarity of instances
    • computes percent similarity of instances
    • handles weightings
    • sorts data by percent similarity
    • handles searches on subsets of attributes
    • generic enough to deal with any arbitrary data sent to it
    • easily integrated into larger applications
    • works with Java and is stateless
    • generic data loading and display managers

    By default, the Selection Engine (SE) reads the data, metadata, and search query from text files and sends its output to stdout; this should make it usable by anyone. Anyone using the engine is encouraged to extend or customize it so that it reads from a database, or to develop Swing applications and applets.

    The main Java class in the SE is named SimilarityEngine; it computes the similarity of a collection of items (training examples) to a specified target (query instance). The SimilarityEngine's only method is computeSimilarity(), shown in Listing 2; it takes instances of the Items, SimilarityCriteria, and SimilarityWeights objects as its arguments. This is the main computation method of the Selection Engine, and it is overloaded: the four-argument version adds a DataSetStatistics object. The Item object has a collection of Traits, and the Items collection has a link to the metadata collection TraitDescriptors.

    Items is a collection of Item objects, which is basically a standard collection. An Item's attributes are defined at run time rather than compile time. Each Item contains a Traits collection, which holds several Trait objects; a Trait object is just a "name/value" pair. TraitDescriptors represents the metadata and is a collection of TraitDescriptor objects. Each Item object will have one TraitDescriptor. A TraitDescriptor is also just a "name/value" pair. The SE recognizes primitive datatypes such as integers, floating-point numbers, strings, and boolean values.
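The name/value structure just described can be pictured with a small sketch. This is only an illustration of the idea; it is not the Selection Engine's actual source, and all names below are mine.

```java
import java.util.*;

/**
 * An illustration of the name/value structure described above --
 * NOT the Selection Engine's source. A Trait is a name/value pair;
 * an Item holds a collection of Traits whose names are defined at
 * run time rather than compile time.
 */
public class TraitSketch {

    static class Trait {
        final String name;
        final String value;
        Trait(String name, String value) {
            this.name = name;
            this.value = value;
        }
    }

    static class Item {
        // Preserves insertion order of the run-time-defined attributes
        private final Map<String, Trait> traits = new LinkedHashMap<>();
        void add(Trait t) { traits.put(t.name, t); }
        String valueOf(String name) {
            Trait t = traits.get(name);
            return (t == null) ? null : t.value;
        }
    }

    public static void main(String[] args) {
        Item david = new Item();
        david.add(new Trait("weight", "108.4"));
        david.add(new Trait("height", "177"));
        System.out.println(david.valueOf("height")); // prints 177
    }
}
```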

    Training examples (items) are first run through a FilterEngine object. The FilterEngine performs hit-or-miss (SQL-type) filtering, which eliminates items you do not want to include in the similarity classification. For example, with the people database of Table 1, if you do not want to include any person shorter than 180 cm, the FilterEngine can eliminate those training examples before the rest are passed to the SimilarityEngine for classification.

    Listing 2 (computeSimilarity() method)

      public SimilarItems computeSimilarity( Items items,
                          SimilarityCriteria criteria,
                          SimilarityWeights weights,
                          DataSetStatistics statistics ) {
         //--- This is the item that we want to compare ourselves to
         //---   to see how similar we are
         SimilarityCriterionScores targetValues = getTargetValues(
                                   items.getTraitDescriptors(),
                                   criteria, weights,
                                   statistics );
         float maxDistance = getMaxDistance( criteria, weights );
         //--- Create a similarity descriptor for each item
         SimilarItems similarItems = new SimilarItems();
         Iterator itemList = items.iterator();
         while (itemList.hasNext()) {
            Item item = (Item) itemList.next();
            SimilarityDescription descriptor = new SimilarityDescription();
            descriptor.setItem( item );
            SimilarityCriterionScores itemValues = normalizeValues(
                                      item, criteria, weights,
                                      statistics );
            float distance = computeDistance( targetValues, itemValues );
            float percentDifference = (distance / maxDistance);
            float percentSimilarity = (1 - percentDifference);
            descriptor.setPercentSimilarity( percentSimilarity );
            similarItems.add( descriptor );
         }
         //--- Now that we know how similar everyone is, let's go and
         //---   rank them so that the caller has an easy way to
         //---   sort the items and make it easier to select the
         //---   k-best matches
         //--- All done.
         return similarItems;
      }

    The computeSimilarity() method of the SimilarityEngine class also requires arguments of SimilarityCriteria and SimilarityWeights objects. SimilarityCriteria contains a collection of SimilarityCriterion objects. A SimilarityCriterion object describes a simple relationship made up of:

    • an attribute
    • a value
    • an operator (3 types)
      1. ~ means "around"   (Use with numbers)
      2. % means "prefer"   (Use with strings and booleans)
      3. !% means "try to avoid"   (Use with strings and booleans)

    Examples of using the operators, drawn from the hypothetical database of training examples in Table 1:

    • "age ~ 37" (literally means around 37 years of age)
    • "name % 'Daniel'" (literally means prefer Daniel)
    • "name !% 'Daniel'" (literally means avoid or to exclude Daniel)

    The Selection Engine is case-insensitive with attribute names. The filter operators used by FilterEngine are ( =, !=, <, >, <=, and >= ), which perform hit-or-miss (boolean-type) filtering. For example, "age < 30" will match only ages under 30.

    For numeric attributes, the SimilarityEngine recognizes two special values, [MAX_VAL] and [MIN_VAL]. These are relative values rather than absolute values. The SimilarityEngine translates relative numbers into absolutes by determining the max and min values for each of the item's attributes.
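Resolving a relative value is easy to sketch: scan one attribute across all items to find its range, then substitute. The class and method names below are my own illustration, not the Selection Engine's actual code.

```java
/**
 * An illustrative sketch (not the Selection Engine's source) of how a
 * relative value such as [MAX_VAL] or [MIN_VAL] can be translated into
 * an absolute number by scanning one attribute across all items.
 */
public class RelativeValues {

    public static double resolve(String token, double[] attributeValues) {
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
        for (double v : attributeValues) {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        if ("[MAX_VAL]".equals(token)) return max;
        if ("[MIN_VAL]".equals(token)) return min;
        return Double.parseDouble(token); // already an absolute value
    }
}
```

For example, with prices of {999, 1500, 650} in the catalog, "[MIN_VAL]" resolves to 650 and "[MAX_VAL]" to 1500.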

    Listing 2 shows that the overloaded method computeSimilarity() is passed a third parameter, SimilarityWeights, which is a collection of SimilarityWeight objects. SimilarityWeight is also a "name/value" pair, where name is the name of an attribute and value is its weight. A weight can be any integer value; the default weight of every attribute is 1. Weights affect only the similarity operators ( ~, %, !% ), not the filter operators ( =, !=, <, >, <=, and >= ).

    The Selection Engine's similarity computation procedure is as follows:

    • Compute the maximum distance of the nth attribute.
    • Calculate the Euclidean distance between the query instance xq and the k-Nearest Neighbour of the Training Examples (items).
    • Divide the Euclidean distance by the maximum distance (a normalization process).
    • Subtract the normalized value from 1 (percentage similarities between the query instance xq and the k-Nearest Neighbour of the Training Examples, that is Items).
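The last two steps above reduce to a couple of lines; here they are as a sketch with illustrative names (not the SE source), mirroring the arithmetic in Listing 2.

```java
/**
 * A sketch of the final steps of the Selection Engine's similarity
 * procedure: normalize the Euclidean distance by the maximum possible
 * distance, then subtract from 1 to get a percent similarity.
 * Illustrative names only.
 */
public class PercentSimilarity {

    public static float percentSimilarity(float distance, float maxDistance) {
        float percentDifference = distance / maxDistance; // normalization
        return 1 - percentDifference;                     // similarity
    }
}
```

An item at distance 0 from the query scores 1 (100% similar); an item at the maximum possible distance scores 0.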

    Listing 3 is a test class, called SelectionEngineTest, which comes with the Selection Engine's distribution.

    Listing 3

    import net.sourceforge.selectionengine.*;

    /**
     * This is the MAIN CONTROLLING class. It calls everything else.
     * @author baylor wetzel (Selection Engine)
     */
    public class SelectionEngineTest {
      public static void main( String[] args ) {
        SelectionEngineTest demo = new SelectionEngineTest();
        demo.start();
      } //--- main

      /**
       * This is the main routine. It takes items that matched a
       * given target and displays the results to stdout. It's good
       * for running tests.
       */
      public void start( ) {
         try {
           MatchingItemsManager matchingItemsManager =
                        new MatchingItemsManager();
           DisplayManager displayManager = new DisplayManager();
           //--- NOTE: the right-hand sides of the following
           //---   assignments were truncated in the original listing;
           //---   the accessor names used here are assumptions, so
           //---   check the Selection Engine distribution for the
           //---   exact calls.
           TraitDescriptors traitDescriptors =
                        matchingItemsManager.getTraitDescriptors();
           FilterCriteria filterCriteria =
                        matchingItemsManager.getFilterCriteria();
           Items items = matchingItemsManager.getItems();
           Items filteredItems =
                        matchingItemsManager.getFilteredItems();
           SimilarItems similarItems =
                        matchingItemsManager.getSimilarItems();
           SimilarityCriteria sc =
                        matchingItemsManager.getSimilarityCriteria();
           SimilarityWeights sw =
                        matchingItemsManager.getSimilarityWeights();
           displayManager.displayTraitDescriptors( traitDescriptors );
           displayManager.displayItems( traitDescriptors, items );
           displayManager.displayFilterCriteria( filterCriteria );
           displayManager.displayItems( traitDescriptors,
                                        filteredItems );
           displayManager.displaySimilarityCriteria( sc );
           displayManager.displaySimilarityWeights( sw );
           displayManager.displaySimilarItems( traitDescriptors,
                                               similarItems );
           System.out.println( "" );
         } catch( Exception e ) {
           Log.log( "Error: " + e );
         }
      } //--- start
    }  //--- SelectionEngineTest

    PC-Shopping Example

    The example that comes with the distribution of the Selection Engine is a small application/applet about buying a computer from an online store. You (the buyer) specify your target preference (query point xq) for the computer that is closest or similar to the attributes you want. The online store has a database with different types of computers (training examples). The results for the target query (the specification of the computer to buy) are ranked according to their nearest neighbours: the closer the Euclidean distance to an instance of a training example, the higher the rank (the higher the similarity). The Selection Engine's query format is pipe-delimited and uses c and w to represent constraints and weights. The query ( xq ) looks like the format below.

    c | Vendor | %  | Alienware
    c | Vendor | !% | HP
    c | Vendor | != | Dell
    w | Vendor | 1
    c | Price  | ~  | [MIN_VAL]
    c | Price  | <= | 1000
    w | Price  | 1
    c | HD     | ~ | [MAX_VAL]
    w | HD     | 4
    c | DVD    | % | TRUE
    w | DVD    | 1
    c | cpu_benchmark | ~ | [MAX_VAL]
    w | cpu_benchmark | 5
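This pipe-delimited format is straightforward to parse. The following sketch is my own illustration of parsing one line of it, not the Selection Engine's actual query reader: constraint lines look like "c | attribute | operator | value" and weight lines like "w | attribute | weight".

```java
/**
 * A sketch of parsing one line of the pipe-delimited query format
 * shown above. This is an illustration only, not the Selection
 * Engine's query-reading code.
 */
public class QueryLine {
    public final String kind;      // "c" for constraint, "w" for weight
    public final String attribute;
    public final String operator;  // null for weight lines
    public final String value;     // the constraint value or the weight

    public QueryLine(String line) {
        // '|' is a regex metacharacter, so it must be escaped
        String[] parts = line.split("\\|");
        for (int i = 0; i < parts.length; i++) parts[i] = parts[i].trim();
        kind = parts[0];
        attribute = parts[1];
        if (kind.equals("c")) {
            operator = parts[2];
            value = parts[3];
        } else {
            operator = null;
            value = parts[2];
        }
    }
}
```

Parsing "c | Price  | <= | 1000" yields kind "c", attribute "Price", operator "<=", and value "1000".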

    The Selection Engine's distribution contains a number of text files: the PC data file (training examples) for the online store and the buyer's query file, which follows the format displayed above.

    If you run the test application shown in Listing 3 for SelectionEngineTest, you need to point the file paths for the PC data ("PCdata.txt") and the query ("PCquery.txt") to their respective locations, something like "C:/WINDOWS/MyProject/PCdata.txt", depending on where you put these text files. SelectionEngineTest is a non-GUI application; its output goes to the console. You can set the paths in the following class variables, respectively:

    ItemManager Class:
         private static final String DEFAULT_FILE_NAME = "C:/WINDOWS/MyProject/PCdata.txt";
    QueryReader Class:
         private static final String DEFAULT_FILE_NAME = "C:/WINDOWS/MyProject/PCquery.txt";

    There is a PCShoppingAsst GUI application that comes with the distribution (can be downloaded—refer to the resources). Running this application will give the results specified in the "PCquery.txt" file as shown below in Table 2.

    Table 2


    The classifications of the closest or most similar items to your perfect computer are ranked from highest to lowest, that is, from the closest neighbours to the more distant ones. The 1-Nearest Neighbour has Similarity = 44% : < maker=compaq, model=deskpro, cpu=650, ram=64, ..., 3.5"=yes >. The 3-Nearest Neighbours are the first three items on the list in Table 2.

    Real-World Applications

    Here are some of the real-world applications of Instance-Based Learning Classification:

    • Law: Most work in artificial intelligence and law has concentrated on modelling the type of reasoning done by trial lawyers. In fact, most lawyers' work involves planning: for example, wills and trusts, real estate deals, and business mergers and acquisitions. Certain planning issues, such as the use of underspecified or "open-textured" rules, are illustrated clearly in this domain.

      During the 1990s, the University of Massachusetts worked on a series of projects in the area of artificial intelligence and law, emphasizing particularly algorithmic and control issues. One of the software programs from these projects is called HYPO.

      HYPO illustrates a use of cases that takes a sequence of facts as input. The target problem (query) is to determine whether the facts satisfy certain legal requirements (in this case, whether they constitute a trade secrets violation). HYPO generates arguments based on previous cases (Training Examples). User input and cases are both represented in simplified form, using a standard "legal caseframe" to hold the important facts of the case. Legal case frames also include such information as the date of the decision, the court deciding the case, and the official citation. Each of the cases in HYPO's case base can be retrieved using a fixed set of indices (termed "dimensions"); dimensions are also used to compare cases. When the user inputs a legal situation, expressed in the legal case-frame language, HYPO uses this representation to calculate the applicability of dimensions to the situation, and then uses these values to index into relevant cases. The system then organizes the retrieved cases into a subset lattice based on the dimensions each retrieved case shares with the current situation and uses the lattice in constructing arguments for both plaintiff and defendant. Arguments are constructed by reciting the dimensions shared with favorable cases and the differences from unfavorable cases (nonshared dimensions or differences in the values of shared dimensions).

    • Medicine: CBR has been used in medical diagnosis software over the past decade; one such system diagnoses diseases such as heart failure. As input, it uses a patient's symptoms and produces a causal network of possible internal states that could lead to those symptoms. When a new case arises, the software tries to find cases of patients with similar, but not necessarily identical, symptoms. If the new case matches, the CBR system adapts the retrieved diagnosis by considering differences in symptoms between the old and new cases. The CBR system stores explanations of how to make use of the solution in its cases. Medical CBR systems have also been integrated with other knowledge engineering disciplines, such as rule-based systems. There has been significant progress in medical CBR toward diagnostic systems that can effectively learn from experience.

    • E-commerce: The number of electronic catalogs has grown rapidly during the past few years. Most of these catalogs use standard databases for storing and retrieving product information. Using ordinary databases for product catalogs, however, has the major drawback that it is often very difficult to find the products desired: very often, the database does not return a matching product at all, or it returns many products that have to be examined manually. To overcome this problem, Case-Based Reasoning (CBR) techniques have been adopted for commercial electronic catalogs as an approach to requirement-oriented retrieval of products. CBR incorporates product knowledge into the database by means of a similarity measure.

      Many commercial electronic vendors offer product information over the Internet and most of their customers use this service frequently. The majority of these catalogs employ standard database approaches for storage and retrieval of product information. Whereas today, databases are well understood and many off-the-shelf solutions exist, we claim that they can only be used efficiently by advanced users—users who are already very familiar with the contents of the database. This is especially a problem for electronic catalogs meant to be used directly by end consumers or engineers who, in many cases, are not experts in the family of products they are looking for, or at least do not have a good overview over all the products and their specialties inside the catalog. Imagine an electronics engineer looking for a device to integrate into her new circuit that pushes the limits of the most advanced devices from her catalog. How can this person select the device best fitting her needs if she is no expert in the area of devices she is looking for? A standard database offers no support for her.

    • Design Re-use: The design of electronic circuits is a discipline in which two contrasting tendencies can be observed: On the one hand, modern circuit designs get ever more complex and difficult to handle by electrical engineers. On the other hand, the global competition requires a continuous reduction of development times. The correctness and reliability of the designs should, of course, not suffer from shorter development cycles.

      These requirements have become so dominant that they cannot be met anymore without extensive utilization of design reuse. It is getting vitally important for an electrical engineer to re-use old designs (or parts of them) and not to re-design a new application from scratch. Re-using designs from the past requires that the engineer has enough experience and knowledge about existing designs, to be able to find candidates that are suitable for reuse in his specific new situation. Searching databases of existing designs can be an extremely time-consuming task because there are currently no intelligent tools to support the engineer in deciding whether a given design from a database can be easily adapted to meet the specification of his new application. Because of this, until now the most effective way of designing is designer re-use.

      CBR techniques use a similarity-based approach that relies on suitable heuristics and makes decisions in a way very similar to the decision-making process of a human designer. CBR is employed to suggest old designs that are re-usable for a given new task.

    • Help Desk: CBR has been applied successfully to help desk automation by offering World Wide Web access to a case-based help desk, exploring the use of case-based reasoning to create an "intelligent" help desk system that learns. NASA has developed a CBR help desk for its data users.

      Many organizations, particularly in computer hardware and software areas, provide extensive customer support via telephone "help desks." This assistance depends primarily on human expertise to respond to user questions. In many cases, the help desk staff must answer the same questions repeatedly. Such support is expensive and requires a large technical staff. Due to budget limitations, many organizations are trying to serve an increasing user population while either maintaining or reducing the size of their help desk staff. A CBR system fits perfectly for help desk applications because many requests for help are reoccurring or variations of previous requests (similar cases).

      Currently, most CBR systems for help desk applications are designed to provide in-house staff support. Typically, these tools are used by help desk staff to personally respond to requests for assistance—a predominantly manual system with the case base providing decision support for help desk staff. More recently, however, attention has focused on designing tools that allow users requiring assistance to access the case base themselves—that is, help desk automation.


    The disadvantages of Instance-Based Learning (IBL) and Classification are as follows:

    • The cost of classifying new instances can be high. This is due to the fact that nearly all computation takes place at classification time rather than when the training examples are first encountered. Therefore, the techniques for efficiently indexing training examples are a significant practical issue in reducing the computation required at query time.
    • Many instance-based approaches, especially nearest-neighbour, typically consider all attributes of the instances when attempting to retrieve similar training examples from memory. If the target concept depends on only a few of the many available attributes, the instances that are truly most similar may well be a large Euclidean distance apart.
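
    Both drawbacks can be seen in a minimal k-Nearest Neighbour sketch. This is an illustrative toy implementation (the class name and data are hypothetical, not taken from any library mentioned in this article): every distance computation is deferred to query time, and every attribute, relevant or not, contributes equally to the Euclidean distance.

```java
import java.util.Arrays;
import java.util.Comparator;

// Minimal k-NN sketch: all work happens at classification time,
// and all attributes weigh equally in the distance measure.
public class SimpleKnn {

    // Euclidean distance over ALL attributes; irrelevant attributes
    // dilute the influence of the truly predictive ones.
    static double distance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Classify the query by majority vote among the k nearest
    // stored training examples (labels are 0 or 1).
    static int classify(double[][] examples, int[] labels, double[] query, int k) {
        Integer[] idx = new Integer[examples.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sorting every stored example against the query is the
        // per-query cost the bullet above warns about.
        Arrays.sort(idx, Comparator.comparingDouble(i -> distance(examples[i], query)));
        int votes = 0;
        for (int i = 0; i < k; i++) votes += (labels[idx[i]] == 1) ? 1 : -1;
        return votes > 0 ? 1 : 0;
    }

    public static void main(String[] args) {
        double[][] examples = { {1, 1}, {1, 2}, {6, 5}, {7, 6} };
        int[] labels = { 0, 0, 1, 1 };
        System.out.println(classify(examples, labels, new double[] {2, 1}, 3)); // prints 0
        System.out.println(classify(examples, labels, new double[] {6, 6}, 3)); // prints 1
    }
}
```

    Note that nothing is "learned" when the examples are stored; indexing structures (such as kd-trees) exist precisely to cut down the per-query sorting cost shown here.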

    IBL with other Computational Intelligence Techniques

    IBL methods have been combined with traditional Artificial Intelligence techniques such as symbolic Rule-based systems and Agent-based systems. Soft-Computing techniques such as Fuzzy Logic, Rough Sets, Bayesian methods, Neural Networks, and Immune Modeling have also been incorporated into IBL and CBR for statistical and adaptive classification.

    Adoption of Java in CBR

    The adoption of Java for CBR software development is increasing. A number of commercial help desk products on the market are written in Java. The DoD (Department of Defense) and Navy have a large number of CBR tools written in Java, and APIs for these CBR tools can be downloaded, provided you sign an agreement covering their use. Internal academic research projects at universities around the world are using Java; you can tell that from the vast number of free publications available online that describe Java as the preferred language.

    Current Successful CBR Systems Online

    This link: is an online property-leasing system that uses CBR. The screen contains seven fields to complete, which means the system works in a 7-dimensional attribute space. Notice from this website that, after you press the submit button, the search result ranks items as close as possible to your preferred house (property), the target query. The returned results rank properties according to the target attributes you specified. If the system were using SQL, you would not be satisfied: it could return NULL, or it could return a list of qualifying houses only where exact matches occurred. But hang on! You did not ask for a list of houses; you were asking for a house or houses similar (close) to your target preference.
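
    The ranking behaviour just described can be sketched in a few lines of Java. This is not the site's actual code; it is a hypothetical illustration assuming each field has been reduced to a normalised numeric attribute, with only three of the seven attributes shown.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// CBR-style retrieval sketch: instead of filtering with exact
// SQL-style predicates, rank every stored case by its distance
// to the target query, so a "closest match" is always returned.
public class PropertyMatcher {

    // Each case is a vector of numeric attributes (e.g. weekly rent
    // in hundreds, bedrooms, bathrooms), pre-scaled to be comparable.
    static double distance(double[] target, double[] candidate) {
        double sum = 0.0;
        for (int i = 0; i < target.length; i++) {
            double d = target[i] - candidate[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Return the indices of the stored cases, best match first.
    // Unlike an exact query, the result list is never empty.
    static List<Integer> rank(double[][] cases, double[] target) {
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < cases.length; i++) order.add(i);
        order.sort(Comparator.comparingDouble(i -> distance(target, cases[i])));
        return order;
    }

    public static void main(String[] args) {
        double[][] cases = {
            {12, 3, 1},   // property 0
            {8, 2, 1},    // property 1
            {20, 4, 2},   // property 2
        };
        double[] target = {9, 2, 1};  // what the user asked for
        System.out.println(rank(cases, target)); // prints [1, 0, 2]
    }
}
```

    Even when no property matches exactly, the user still gets a ranked list led by the nearest candidate, which is exactly the behaviour the SQL comparison above fails to deliver.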

    Java software architects and project managers preparing an RFP (request for proposal) for future projects should consider CBR where it is applicable. As has been mentioned, do not use IBL methods and CBR if you know exactly what you are looking for. CBR is a discipline in its own right, and the references in the resources will take you to the CBR community. There are many tutorials on the subject; the CBR website also points to many online publications (academic and industrial research) and enterprise architectures (J2EE) incorporating IBL methods.

    IBL Implementation in the Java Datamining API: (JSR-73)

    Prior to writing this article, I attempted to contact the Expert Group responsible for drafting the Java Datamining API (JSR-73). Since IBL is a branch of Machine Learning (or, more generally, Artificial Intelligence), on which Datamining is built, I wanted to know whether the upcoming "javax.datamining" API implements any IBL methods. I assume the Expert Group members are busy, because it took a while for them to respond to my query. They have recently responded, saying yes, it does: the upcoming javax.datamining API contains a number of classes for IBL (Instance-Based Learning), including the k-Nearest Neighbour algorithm.

    Case-Based Reasoning is a specialized field of IBL. The fact that the upcoming javax.datamining API implements some IBL methods does not mean you can build a CBR application with this API alone. You still have to custom-develop some libraries to supplement the javax.datamining API, or use a free-source CBR library.

    There are implementations of IBL methods in the free-source Machine Learning project called WEKA, which stands for Waikato Environment for Knowledge Analysis. WEKA is an ongoing project of the Computer Science Department of the University of Waikato, New Zealand: If you want to familiarise yourself with the concepts of datamining, look at WEKA, because the upcoming javax.datamining package of JSR-73 will be almost exactly the same, or may contain even more Machine Learning methods than WEKA.

    Resources

    1. Applying Case-Based Reasoning: Techniques for Enterprise Systems by Ian D. Watson, pub: Morgan Kaufmann
    2. Machine Learning by Tom M. Mitchell, pub: McGraw-Hill
    3. Soft Computing in Case Based Reasoning by Sankar K. Pal, pub: Springer-Verlag
    4. Developing Industrial Case-Based Reasoning Applications: The INRECA Methodology by R. Bergmann, pub: Springer-Verlag
    5. Learning and Soft Computing: Support Vector Machines, Neural Networks, and Fuzzy Logic Models by Vojislav Kecman, pub: MIT Press
    6. Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence by J.S.R. Jang, C.T. Sun, and E. Mizutani, pub: Prentice Hall
    7. Case-Based Reasoning Technology by G. Goose, pub: Springer-Verlag
    8. Issues and Applications of Case-Based Reasoning in Design by Mary L. Maher, pub: Lawrence Erlbaum Associates
    About the author

    Sione Palu has developed software for Publishing Systems, Imaging, Web Applications, and Symbolic and Computer Algebra Systems for secondary education. Palu graduated from the University of Auckland, New Zealand, with a science degree (B.Sc.) in mathematics and computing. His interests involve the application of Java and Mathematics in the fields of mathematical modelling and simulations, symbolic AI and soft-computing, numerical analysis, image processing, wavelets, digital signal processing, control systems, and computational finance.
