{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to select a classifier\n", "\n", "This document will guide you through the process of selecting a classifier for your problem.\n", "\n", "Note that there is no established, scientifically proven rule-set for selecting a classifier to solve a general multi-label classification problem. Succesful approaches often come from mixing intuitions about which classifiers are worth considering, decomposition in to subproblems, and experimental model selection.\n", "\n", "There are two things you need to consider before choosing a classifier:\n", "\n", "- performance, i.e. generalization quality, how well will the model understand the relationship between features and labels, note that there for different use cases you might want to measure the quality using different measures, we'll talk about the measures in a moment \n", "- efficiency, i.e. how fast the classifier will perform, does it scale, is it usable in your problem based on number of labels, samples or label combinations\n", "\n", "There are two ways to make the choice:\n", "- intuition based on asymptotic performance and results from empirical studies\n", "- data-driven model selection using cross-validated parameter search\n", "\n", "Let's load up a data set to see have some thing to work on first." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from skmultilearn.dataset import load_dataset" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "emotions:train - does not exists downloading\n", "Downloaded emotions-train\n", "emotions:test - does not exists downloading\n", "Downloaded emotions-test\n" ] } ], "source": [ "X_train, y_train, feature_names, label_names = load_dataset('emotions', 'train')\n", "X_test, y_test, _, _ =load_dataset('emotions', 'test')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Usually classifier's performance depends on three elements:\n", "\n", "- number of samples\n", "- number of labels\n", "- number of unique label classes\n", "- number of features\n", "\n", "We can obtain the first two from the shape of our output space matrices:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "((391, 6), (202, 6))" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_train.shape, y_test.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use numpy and the list of rows with non-zero values in output matrices to get the number of unique label combinations." 
] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "((26,), (21,))" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "np.unique(y_train.rows).shape, np.unique(y_test.rows).shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Number of features can be found in the shape of the input matrix:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "72" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X_train.shape[1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Intutions\n", "\n", "### Generalization quality measures\n", "\n", "\n", "There are several ways to measure a classifier's generalization quality:\n", "\n", "- [Hamming loss](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.hamming_loss.html#sklearn.metrics.hamming_loss) measures how well the classifier predicts each of the labels, averaged over samples, then over labels \n", "- [accuracy score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html#sklearn.metrics.accuracy_score) measures how well the classifier predicts label combinations, averaged over samples\n", "- [jaccard similarity](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.jaccard_similarity_score.html#sklearn.metrics.jaccard_similarity_score) measures the proportion of predicted labels for a sample to its correct assignment, averaged over samples\n", "- [precision](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn.metrics.precision_score) measures how many samples with ,\n", "- [recall](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score) measures how many samples , \n", "- [F1 score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score) measures a weighted average of precision and recall, where both have the same impact on the score \n", "\n", "These measures are conveniently provided by sklearn:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from skmultilearn.adapt import MLkNN\n", "classifier = MLkNN(k=3)\n", "prediction = classifier.fit(X_train, y_train).predict(X_test)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.2953795379537954" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import sklearn.metrics as metrics\n", "\n", "metrics.hamming_loss(y_test, prediction)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Performance\n", "\n", "Scikit-multilearn provides 11 classifiers that allow a strong variety of classification scenarios through label partitioning and ensemble classification, let's look at the important factors influencing performance. $ g(x) $ denotes the performance of the base classifier in some of the classifiers.\n", "\n", "
\n", "
[BRkNNaClassifier](api/skmultilearn.adapt.brknn.html#skmultilearn.adapt.brknn.BRkNNaClassifier), [BRkNNbClassifier](api/skmultilearn.adapt.brknn.html#skmultilearn.adapt.brknn.BRkNNbClassifier)
\n", "
\n", "\n", "**Parameter estimation needed**: Yes, 1 parameter\n", " \n", "**Complexity**: ``O(n_{labels} * n_{samples} * n_{features} * k)``\n", "\n", "BRkNN classifiers train a k Nearest Neighbor per label and use infer label assignment in one of the two variants.\n", "\n", "**Strong sides**: \n", "- takes some label relations into account while estimating single-label classifers\n", "- works when distance between samples is a good predictor for label assignment. Often used in biosciences. \n", "\n", "**Weak sides**: \n", "- trains a classifier per label\n", "- less suitable for large label space\n", "- requires parameter estimation.\n", "
\n", "
, "\n", "#### [MLkNN](api/skmultilearn.adapt.mlknn.html#multilabel-k-nearest-neighbours)\n", "\n",
"**Parameter estimation needed**: Yes, 2 parameters\n", "\n", "**Complexity**: ``O((n_{samples} * n_{features} + n_{labels}) * k)``\n", "\n", "MLkNN uses k Nearest Neighbors to find the training examples closest to a test sample, then uses Bayesian inference to select the assigned labels. An example of its use appears in the measures section above.\n", "\n", "**Strong sides**:\n", "- estimates one multi-class subclassifier\n", "- works when distance between samples is a good predictor for label assignment\n", "- often used in biosciences\n", "\n", "**Weak sides**:\n", "- requires parameter estimation\n"
, "\n", "#### [MLARAM](api/skmultilearn.adapt.mlaram.html)\n", "\n",
"**Parameter estimation needed**: Yes, 2 parameters\n", "\n", "**Complexity**: ``O(n_{samples})``\n", "\n", "An ART classifier which clusters learned prototypes into large clusters to improve performance.\n", "\n", "**Strong sides**:\n", "- linear in the number of samples, scales well\n", "\n", "**Weak sides**:\n", "- requires parameter estimation\n", "- ART techniques have had generalization limits in the past\n"
, "\n", "#### [BinaryRelevance](api/skmultilearn.problem_transform.br.html#skmultilearn.problem_transform.BinaryRelevance)\n", "\n",
"**Parameter estimation needed**: Only for base classifier\n", "\n", "**Complexity**: ``O(n_{labels} * base_single_class_classifier_complexity)``\n", "\n", "Transforms a multi-label classification problem with L labels into L separate single-label binary classification problems.\n", "\n", "**Strong sides**:\n", "- estimates single-label classifiers\n", "- can generalize beyond available label combinations\n", "\n", "**Weak sides**:\n", "- not suitable for a large number of labels\n", "- ignores label relations\n"
, "\n", "#### [ClassifierChain](api/skmultilearn.problem_transform.cc.html#skmultilearn.problem_transform.ClassifierChain)\n", "\n",
"**Parameter estimation needed**: Yes, 1 + parameters for base classifier\n", "\n", "**Complexity**: ``O(n_{labels} * base_single_class_classifier_complexity)``\n", "\n", "Transforms a multi-label problem into a chain of L binary classification problems, where each classifier in the chain also receives the predictions of the preceding classifiers as extra input features.\n", "\n", "**Strong sides**:\n", "- estimates single-label classifiers\n", "- can generalize beyond available label combinations\n", "- takes label relations into account\n", "\n", "**Weak sides**:\n", "- not suitable for a large number of labels\n", "- quality strongly depends on the label ordering in the chain\n"
, "\n", "#### [LabelPowerset](api/skmultilearn.problem_transform.lp.html#skmultilearn.problem_transform.LabelPowerset)\n", "\n",
"**Parameter estimation needed**: Only for base classifier\n", "\n", "**Complexity**: ``O(base_multi_class_classifier_complexity(n_classes = n_label_combinations))``\n", "\n", "Transforms a multi-label problem into a multi-class problem where each label combination is a separate class, and uses a multi-class classifier to solve the problem.\n", "\n", "**Strong sides**:\n", "- estimates label dependencies with only one classifier\n", "- often the best solution for subset accuracy if the training data contains all relevant label combinations\n", "\n", "**Weak sides**:\n", "- requires all label combinations predictable by the classifier to be present in the training data\n", "- very prone to underfitting with large label spaces\n"
, "\n", "#### [RakelD](api/skmultilearn.ensemble.rakeld.html#skmultilearn.ensemble.RakelD)\n", "\n",
"**Parameter estimation needed**: Yes, 1 + base classifier's parameters\n", "\n", "**Complexity**: ``O(n_{partitions} * base_multi_class_classifier_complexity(n_classes = n_label_combinations_per_partition))``\n", "\n", "Randomly partitions the label space and trains a Label Powerset classifier per partition with a base multi-class classifier.\n", "\n", "**Strong sides**:\n", "\n", "- may use fewer classifiers than Binary Relevance and still generalize label relations while not underfitting like LabelPowerset\n", "\n", "**Weak sides**:\n", "\n", "- a random label space division is unlikely to be optimal\n"
, "\n", "#### [RakelO](api/skmultilearn.ensemble.rakeld.html#skmultilearn.ensemble.RakelO)\n", "\n",
"**Parameter estimation needed**: Yes, 2 + base classifier's parameters\n", "\n", "**Complexity**: ``O(n_{partitions} * base_multi_class_classifier_complexity(n_classes = n_label_combinations_per_cluster))``\n", "\n", "Randomly draws label subspaces (possibly overlapping) and trains a Label Powerset classifier per subspace with a base multi-class classifier; labels are assigned based on voting.\n", "\n", "**Strong sides**:\n", "\n", "- may provide better results with overlapping models\n", "\n", "**Weak sides**:\n", "\n", "- requires a large number of classifiers to yield an improvement, which does not scale well\n", "- random subspaces may not be optimal\n"
, "\n", "#### [LabelSpacePartitioningClassifier](api/skmultilearn.ensemble.partition.html#skmultilearn.ensemble.LabelSpacePartitioningClassifier)\n", "\n",
"**Parameter estimation needed**: Only base classifier\n", "\n", "**Complexity**: ``O(n_{partitions} * base_classifier_complexity(n_classes = n_label_combinations_per_partition))``\n", "\n", "Divides the label space into disjoint partitions using a provided clusterer and trains a provided multi-label base classifier per partition. A full usage example appears in the data-driven model selection section below.\n", "\n", "**Strong sides**:\n", "\n", "- accommodates different types of problems\n", "- infers when to divide into subproblems, deciding when to use fewer classifiers than Binary Relevance\n", "- scalable to data sets with large numbers of labels\n", "- generalizes label relations well while not underfitting like LabelPowerset\n", "- does not require parameter estimation\n", "\n", "**Weak sides**:\n", "\n", "- requires label relationships present in the training data to be representative of the problem\n", "- partitioning may prevent certain label combinations from being correctly classified, depending on the base classifier\n"
, "\n", "#### [MajorityVotingClassifier](api/skmultilearn.ensemble.voting.html#skmultilearn.ensemble.MajorityVotingClassifier)\n", "\n",
"**Parameter estimation needed**: Only base classifier\n", "\n", "**Complexity**: ``O(n_{clusters} * base_classifier_complexity(n_classes = n_label_combinations_per_cluster))``\n", "\n", "Uses clustering methods to divide the label space into (possibly overlapping) subspaces, trains a provided multi-label base classifier per subspace, and assigns labels based on voting.\n", "\n", "**Strong sides**:\n", "\n", "- accommodates different types of problems\n", "- infers when to divide into subproblems, deciding when to use fewer classifiers than Binary Relevance\n", "- scalable to data sets with large numbers of labels\n", "- generalizes label relations well while not underfitting like LabelPowerset\n", "- does not require parameter estimation\n", "\n", "**Weak sides**:\n", "\n", "- requires label relationships present in the training data to be representative of the problem\n"
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data-driven model selection\n", "\n", "Scikit-multilearn allows estimating parameters to select best models for multi-label classification using scikit-learn's model selection [GridSearchCV API](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html).\n", "In the simplest version it can look for the best parameter of a scikit-multilearn's classifier, which we'll show on the example case of estimating parameters for MLkNN, and in the more complicated cases of problem transformation methods it can estimate both the method's hyper parameters and the base classifiers parameter.\n", "\n", "### Estimating hyper-parameter k for MLkNN\n", "\n", "In the case of estimating the hyperparameter of a multi-label classifier, we first import the relevant classifier and\n", "scikit-learn's GridSearchCV class. Then we define the values of parameters we want to evaluate. We are interested in which\n", "combination of `k` - the number of neighbours, `s` - the smoothing parameter works best. We also need to select a measure\n", "which we want to optimize - we've chosen the F1 macro score.\n", "\n", "After selecting the parameters we intialize and _run the cross validation grid search and print the best hyper parameters." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "({'k': 1, 's': 0.5}, 0.45223607257008969)\n" ] } ], "source": [ "from skmultilearn.adapt import MLkNN\n", "from sklearn.model_selection import GridSearchCV\n", "\n", "parameters = {'k': range(1,3), 's': [0.5, 0.7, 1.0]}\n", "\n", "clf = GridSearchCV(MLkNN(), parameters, scoring='f1_macro')\n", "clf.fit(X_train, y_train)\n", "\n", "print (clf.best_params_, clf.best_score_)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These values can be then used directly with the classifier.\n", "\n", "### Estimating hyper-parameter k for embedded classifiers\n", "\n", "In problem transformation classifiers we often need to estimate not only a hyper parameter, but also the parameter of the base classifier, and also - maybe even the problem transformation method. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "### Estimating hyper-parameters for embedded classifiers\n", "\n", "In problem transformation classifiers we often need to estimate not only a hyper parameter, but also the parameters of the base classifier, and perhaps even the problem transformation method itself. Let's take a look at this on a three-layer construction of an ensemble of problem transformation classifiers using label space partitioning. The parameters include:\n", "\n", "- ``classifier``: the classifier for transforming the multi-label classification problem to single-label classification; we will decide between Label Powerset and Classifier Chains\n", "- ``classifier__classifier``: the base classifier for the transformation strategy; we will use random forests here\n", "- ``classifier__classifier__n_estimators``: the number of trees to be used in the forest, passed to the random forest object\n", "- ``clusterer``: a label space partitioning class; we will decide between two approaches provided by the NetworkX library\n" ] },
{ "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "({'classifier__classifier__n_estimators': 50, 'classifier__classifier': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',\n", " max_depth=None, max_features='auto', max_leaf_nodes=None,\n", " min_impurity_decrease=0.0, min_impurity_split=None,\n", " min_samples_leaf=1, min_samples_split=2,\n", " min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=1,\n", " oob_score=False, random_state=None, verbose=0,\n", " warm_start=False), 'classifier': LabelPowerset(classifier=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',\n", " max_depth=None, max_features='auto', max_leaf_nodes=None,\n", " min_impurity_decrease=0.0, min_impurity_split=None,\n", " min_samples_leaf=1, min_samples_split=2,\n", " min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=1,\n", " oob_score=False, random_state=None, verbose=0,\n", " warm_start=False),\n", " require_dense=[True, True]), 'clusterer': <skmultilearn.cluster.networkx.NetworkXLabelGraphClusterer object>}, 0.59569187181557981)\n" ] } ], "source": [ "from skmultilearn.problem_transform import ClassifierChain, LabelPowerset\n", "from sklearn.model_selection import GridSearchCV\n", "from sklearn.ensemble import RandomForestClassifier\n", "from skmultilearn.cluster import NetworkXLabelGraphClusterer\n", "from skmultilearn.cluster import LabelCooccurrenceGraphBuilder\n", "from skmultilearn.ensemble import LabelSpacePartitioningClassifier\n", "\n", "parameters = {\n", "    'classifier': [LabelPowerset(), ClassifierChain()],\n", "    'classifier__classifier': [RandomForestClassifier()],\n", "    'classifier__classifier__n_estimators': [10, 20, 50],\n", "    'clusterer': [\n", "        NetworkXLabelGraphClusterer(LabelCooccurrenceGraphBuilder(weighted=True, include_self_edges=False), 'louvain'),\n", "        NetworkXLabelGraphClusterer(LabelCooccurrenceGraphBuilder(weighted=True, include_self_edges=False), 'lpa')\n", "    ]\n", "}\n", "\n", "clf = GridSearchCV(LabelSpacePartitioningClassifier(), parameters, scoring='f1_macro')\n", "clf.fit(X_train, y_train)\n", "\n", "print (clf.best_params_, clf.best_score_)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.14" } }, "nbformat": 4, "nbformat_minor": 2 }