Which machine learning classifier to choose, in general? [closed]

Suppose I'm working on some classification problem. (Fraud detection and comment spam are two problems I'm working on right now, but I'm curious about any classification task in general.)

How do I know which classifier I should use?

  1. Decision tree
  2. SVM
  3. Bayesian
  4. Neural network
  5. K-nearest neighbors
  6. Q-learning
  7. Genetic algorithm
  8. Markov decision processes
  9. Convolutional neural networks
  10. Linear regression or logistic regression
  11. Boosting, bagging, ensembling
  12. Random hill climbing or simulated annealing
  13. ...

In which cases is one of these the "natural" first choice, and what are the principles for choosing that one?

Examples of the type of answers I'm looking for (from Manning et al.'s Introduction to Information Retrieval book):

a. If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes).

I'm guessing this is because a higher-bias classifier will have lower variance, which is good because of the small amount of data.

b. If you have a ton of data, then the classifier doesn't really matter so much, so you should probably just choose a classifier with good scalability.

  1. What are other guidelines? Even answers like "if you'll have to explain your model to some upper management person, then maybe you should use a decision tree, since the decision rules are fairly transparent" are good. I care less about implementation/library issues, though.

  2. Also, for a somewhat separate question, besides standard Bayesian classifiers, are there 'standard state-of-the-art' methods for comment spam detection (as opposed to email spam)?


Solution 1:


First of all, you need to identify your problem. It depends upon what kind of data you have and what your desired task is.

If you are predicting a category:

  • You have labeled data
    • Follow a classification approach and its algorithms
  • You don't have labeled data
    • Go for a clustering approach

If you are predicting a quantity:

  • Go for a regression approach

Otherwise:

  • You can go for a dimensionality reduction approach

There are different algorithms within each approach mentioned above. The choice of a particular algorithm depends upon the size of the dataset.

Source: http://scikit-learn.org/stable/tutorial/machine_learning_map/
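The flowchart logic above can be sketched as a small dispatch function. This is only an illustrative sketch, not part of the scikit-learn map itself: the function name `pick_estimator` and the particular estimators chosen for each branch (logistic regression, k-means, linear regression) are my own assumptions; the real map picks among many algorithms within each approach.

```python
# Hypothetical sketch of the decision flow above: pick an approach based on
# whether you are predicting a category or a quantity, and whether labels exist.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LinearRegression, LogisticRegression


def pick_estimator(task, has_labels):
    """Map (task, has_labels) to one representative scikit-learn estimator."""
    if task == "category":
        # Labeled data -> classification; unlabeled -> clustering.
        return LogisticRegression(max_iter=1000) if has_labels else KMeans(n_clusters=2, n_init=10)
    if task == "quantity":
        return LinearRegression()
    raise ValueError("consider a dimensionality reduction approach instead")


# Toy labeled dataset -> the chart sends us to classification.
X, y = make_classification(n_samples=200, random_state=0)
clf = pick_estimator("category", has_labels=True)
clf.fit(X, y)
print(type(clf).__name__)
```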

Solution 2:

Model selection using cross validation may be what you need.

Cross validation

What you do is simply split your dataset into k non-overlapping subsets (folds), train a model on k-1 folds, and measure its performance on the fold you left out. You do this for each fold in turn (leave the 1st fold out, then the 2nd, ..., then the kth, training on the remaining folds). When you're done, you compute the mean performance across all folds (and perhaps also the variance/standard deviation of the performance).

The choice of k depends on how much time you have. Usual values are 3, 5, 10, or even N, where N is the size of your dataset (that last one is the same as leave-one-out cross validation). I prefer 5 or 10.
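The procedure above, written out by hand as a minimal sketch (scikit-learn's `cross_val_score` does the same thing in one call; the dataset and the KNN classifier here are just placeholders):

```python
# Plain k-fold cross validation: train on k-1 folds, score on the held-out fold,
# repeat for every fold, then average.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
kf = KFold(n_splits=5, shuffle=True, random_state=0)

scores = []
for train_idx, test_idx in kf.split(X):
    model = KNeighborsClassifier(n_neighbors=5)
    model.fit(X[train_idx], y[train_idx])                 # train on k-1 folds
    scores.append(model.score(X[test_idx], y[test_idx]))  # test on the left-out fold

print(f"mean={np.mean(scores):.3f} std={np.std(scores):.3f}")
```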

Model selection

Let's say you have 5 methods (ANN, SVM, KNN, etc) and 10 parameter combinations for each method (depending on the method). You simply have to run cross validation for each method and parameter combination (5 * 10 = 50) and select the best model, method and parameters. Then you re-train with the best method and parameters on all your data and you have your final model.
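As a sketch of that selection loop (with only two methods and a handful of parameter settings standing in for the "5 methods x 10 combinations" above; the dataset is an arbitrary toy choice):

```python
# Model selection: cross-validate every (method, parameters) candidate,
# keep the best, then refit the winner on all the data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

candidates = [KNeighborsClassifier(n_neighbors=k) for k in (1, 5, 15)]
candidates += [SVC(C=c) for c in (0.1, 1.0, 10.0)]

best_model, best_score = None, -np.inf
for model in candidates:
    score = cross_val_score(model, X, y, cv=5).mean()
    if score > best_score:
        best_model, best_score = model, score

best_model.fit(X, y)  # final refit on all the data
print(type(best_model).__name__, round(best_score, 3))
```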

There are some more things to say. If, for example, you use a lot of methods and parameter combinations for each, it's very likely you will overfit. In cases like these, you have to use nested cross validation.

Nested cross validation

In nested cross validation, you perform cross validation on the model selection algorithm.

Again, you first split your data into k folds. For each fold in turn, you hold it out as test data and run model selection (the procedure explained above) on the remaining k-1 folds. This gives you k selected models, one per outer fold. You then score each model on its held-out fold; averaging those scores gives you an honest estimate of how well the whole selection procedure performs, because no test fold was ever used to pick a model. Finally, you rerun model selection on all of your data and train a new model with the winning method and parameters. That's your final model.
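In scikit-learn, nesting falls out naturally from composing `GridSearchCV` (the inner selection loop) with `cross_val_score` (the outer evaluation loop). A minimal sketch, with an SVM and a small C grid as arbitrary stand-ins:

```python
# Nested cross validation: the outer loop evaluates the whole model-selection
# procedure; the inner loop (GridSearchCV) does the selecting.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)  # inner: model selection
outer_scores = cross_val_score(inner, X, y, cv=5)       # outer: honest evaluation

print(f"estimated performance: {outer_scores.mean():.3f}")

inner.fit(X, y)  # final model: rerun selection on all the data and refit
print(inner.best_params_)
```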

Of course, there are many variations of these methods and other things I didn't mention. If you need more information about these look for some publications about these topics.

Solution 3:

The OpenCV book has two great pages on this (462-463). Searching the Amazon preview for the word "discriminative" (probably Google Books too) will let you see the pages in question. These two pages are the greatest gem I have found in this book.

In short:

  • Boosting - often effective when a large amount of training data is available.

  • Random trees - often very effective and can also perform regression.

  • K-nearest neighbors - simplest thing you can do, often effective but slow and requires lots of memory.

  • Neural networks - slow to train but very fast to run; still a top performer for letter recognition.

  • SVM - among the best with limited data; it loses to boosting or random trees only when large datasets are available.
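These rules of thumb are easy to sanity-check empirically. The sketch below compares KNN, an SVM, and a boosted ensemble with 5-fold cross validation; the dataset and the specific estimators are my own arbitrary choices, and one toy dataset proves nothing about the general claims, it only shows the mechanics of comparing.

```python
# Compare three of the classifier families above on one small dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

models = {
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "svm": make_pipeline(StandardScaler(), SVC()),
    "boosting": GradientBoostingClassifier(random_state=0),
}

results = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, score in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```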

Solution 4:

Things you might consider in choosing which algorithm to use would include:

  1. Do you need to train incrementally (as opposed to batched)?

    If you need to update your classifier with new data frequently (or you have tons of data), you'll probably want to use Bayesian methods. Standard neural net and SVM training typically works on the whole training set in one go.

  2. Is your data composed of categorical only, or numeric only, or both?

    I think Bayesian methods work best with categorical/binomial data. Classification trees predict categories, not numerical values; for numbers you'd need regression trees.

  3. Do you or your audience need to understand how the classifier works?

    Use Bayesian or decision trees, since these can be easily explained to most people. Neural networks and SVM are "black boxes" in the sense that you can't really see how they are classifying data.

  4. How much classification speed do you need?

    SVMs are fast when it comes to classifying, since they only need to determine which side of the "line" your data is on. Decision trees can be slow, especially when they're complex (e.g. lots of branches).

  5. Complexity.

    Neural nets and SVMs can handle complex non-linear classification.
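Point 1 above (incremental training) is worth a concrete sketch. In scikit-learn, `MultinomialNB` supports it via `partial_fit`, so the model can absorb new batches, say, fresh spam reports, without retraining from scratch. The data here is synthetic and the labeling rule is a made-up toy:

```python
# Incremental (out-of-core) training with a Bayesian classifier:
# partial_fit updates the model one batch at a time.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)
clf = MultinomialNB()
classes = np.array([0, 1])  # all classes must be declared on the first call

for batch in range(3):                        # pretend data arrives in 3 batches
    X = rng.integers(0, 5, size=(100, 20))    # e.g. word counts per document
    y = (X[:, 0] > 2).astype(int)             # toy labeling rule
    clf.partial_fit(X, y, classes=classes)    # update, don't retrain

pred = clf.predict(np.array([[4] + [0] * 19]))
print(pred)
```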