ICPR 2008

Invited Talk 8


Classifier Ensembles: 

Facts, Fiction, Faults, and Future


By Ludmila I. Kuncheva (UK)

Reviewed by Vasant Manohar (USA)

Why do we choose to use classifier ensembles in our learning applications? Is it because: (a) we are inclined towards complicating entities beyond necessity; (b) it is very difficult to design and train a single sophisticated classifier; or (c) we believe that a combination of multiple classifiers always learns a more expressive concept? Have we truly made progress in classification methods or are many of the implied advances illusory? In an absorbing keynote talk, Ludmila Kuncheva (School of Computer Science, Bangor University, Wales, UK) gave her perspective on these questions.

The central idea of classifier ensembles is to combine the outputs of several diverse classifiers in an attempt to reach a more accurate decision than that of a carefully designed individual classifier. Though the number of publications on classifier ensembles and associated techniques has grown tremendously in the past few years, there is a surprisingly wide disparity among experts in the community as to the current level of scientific understanding of ensemble-type multi-classifier systems: is it mature or still lacking?
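The combination step described above can be sketched as a plain majority vote over label predictions; this is a minimal illustration, not Kuncheva's specific method, and the three classifiers' predictions below are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine label predictions from several classifiers by majority vote.

    predictions: list of per-classifier label lists, all the same length.
    Returns one combined label per sample.
    """
    combined = []
    for labels in zip(*predictions):  # labels for one sample, across classifiers
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Hypothetical predictions from three classifiers on five samples:
clf_a = [0, 1, 1, 0, 1]
clf_b = [0, 1, 0, 0, 1]
clf_c = [1, 1, 1, 0, 0]

print(majority_vote([clf_a, clf_b, clf_c]))  # [0, 1, 1, 0, 1]
```

Note how the ensemble recovers the majority label even when each individual classifier errs on some sample, which is precisely the intuition behind combining diverse classifiers.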

It has been shown in the literature that an ideal ensemble consists of highly accurate classifiers that disagree with one another as much as possible. Kuncheva advocates measuring diversity explicitly and using it in the process of building an ensemble. The message conveyed is that there is still room for heuristics in classifier combination, and diversity might well be one of the directions worth further exploration.

In pointing out the typical mistakes we end up making, Kuncheva alluded to the alarming amount of varied jargon used for the same concepts across the pattern recognition, data mining, machine learning, and statistics communities. Such disagreement shows that we make little effort to keep ourselves well informed and, as a consequence, end up re-inventing the wheel many times over.

In her concluding remarks, Kuncheva expressed her view that we have acquired quite a lot of unstructured insight into multi-classifier combination methods. There have been a significant number of experimental studies and a few exciting theories on different ensemble-building and combination methods. So, the answer to the question, “Have we truly made progress?” is yes and no. Yes, we know a lot, and no, we don’t yet have an all-explaining theory. Of course, it’s this quest that makes research challenging and entertaining.