Multi-label ARAM

class skmultilearn.adapt.MLARAM(vigilance=0.9, threshold=0.02, neurons=None)[source]

HARAM: A Hierarchical ARAM Neural Network for Large-Scale Text Classification

This method increases classification speed by adding an extra ART layer that clusters the learned prototypes into large groups. The activation of all prototypes can then be replaced by the activation of only a small fraction of them, leading to a significant reduction in classification time.

Parameters:
  • vigilance (float (default is 0.9)) – vigilance parameter of adaptive resonance theory networks; controls how large a hyperbox can be. A value of 1 produces tiny hyperboxes (no compression), while 0 lets a single hyperbox cover the whole input range. It is normally set between 0.8 and 0.999 and is dataset dependent. Vigilance governs the creation of prototypes and therefore the training of the network; see the sketch after this list.
  • threshold (float (default is 0.02)) – controls how many prototypes participate in the prediction; it can be changed for the testing phase.
  • neurons (list) – the neurons in the network
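
Since vigilance determines how many prototypes are created, its effect can be inspected directly after fitting. A minimal sketch on synthetic data, assuming the fitted model exposes its learned prototypes through the neurons attribute named above:

import numpy as np
from skmultilearn.adapt import MLARAM

rng = np.random.RandomState(0)
X = rng.rand(100, 20)                       # 100 samples, 20 features
y = (rng.rand(100, 5) > 0.7).astype(int)    # 5 binary labels
y[y.sum(axis=1) == 0, 0] = 1                # ensure every sample has at least one label

for vigilance in (0.8, 0.9, 0.99):
    model = MLARAM(vigilance=vigilance).fit(X, y)
    # Higher vigilance -> smaller hyperboxes -> more prototypes (less compression)
    print(vigilance, len(model.neurons))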

References

The published work is cited below:

@INPROCEEDINGS{7395756,
    author={F. Benites and E. Sapozhnikova},
    booktitle={2015 IEEE International Conference on Data Mining Workshop (ICDMW)},
    title={HARAM: A Hierarchical ARAM Neural Network for Large-Scale Text Classification},
    year={2015},
    volume={},
    number={},
    pages={847-854},
    doi={10.1109/ICDMW.2015.14},
    ISSN={2375-9259},
    month={Nov},
}

Examples

Here is an example using a threshold of 0.05 and a vigilance of 0.95:

from skmultilearn.adapt import MLARAM

classifier = MLARAM(threshold=0.05, vigilance=0.95)
classifier.fit(X_train, y_train)
prediction = classifier.predict(X_test)
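
To make the snippet runnable end to end, scikit-multilearn's built-in loader can supply train/test splits (it downloads the 'emotions' benchmark on first use); evaluating with subset accuracy, as a sketch:

from skmultilearn.adapt import MLARAM
from skmultilearn.dataset import load_dataset
from sklearn.metrics import accuracy_score

X_train, y_train, _, _ = load_dataset('emotions', 'train')
X_test, y_test, _, _ = load_dataset('emotions', 'test')

classifier = MLARAM(threshold=0.05, vigilance=0.95)
classifier.fit(X_train, y_train)
prediction = classifier.predict(X_test)

# Subset accuracy: a sample counts as correct only if its full label set matches
print(accuracy_score(y_test, prediction))
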
fit(X, y)[source]

Fit classifier with training data

Parameters:
  • X (numpy.ndarray or scipy.sparse) – input features, can be a dense or sparse matrix of size (n_samples, n_features)
  • y (numpy.ndarray or scipy.sparse {0,1}) – binary indicator matrix with label assignments.
Returns:

fitted instance of self

Return type:

skmultilearn.MLARAMfast.MLARAM

get_params(deep=True)

Get parameters of this classifier and its sub-objects

Introspects the classifier so that meta-estimators such as cross-validation and grid search can read its parameters.

Parameters:
  • deep (bool) – if True, parameters of nested sub-objects are also introspected and appended to the output dictionary.
Returns:

out – dictionary of all parameters and their values. If deep=True the dictionary also holds the parameters of the sub-objects.

Return type:

dict
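
Because the parameter interface follows the scikit-learn convention, the hyperparameters can be tuned with scikit-learn's search tools. A sketch reusing X_train and y_train from the example above; the grid values are illustrative:

from sklearn.model_selection import GridSearchCV
from skmultilearn.adapt import MLARAM

grid = GridSearchCV(
    MLARAM(),
    {'vigilance': [0.85, 0.9, 0.95], 'threshold': [0.02, 0.05]},
    scoring='accuracy',  # subset accuracy for multi-label indicator targets
)
grid.fit(X_train, y_train)
print(grid.best_params_)
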
predict(X)[source]

Predict labels for X

Parameters:
  • X (numpy.ndarray or scipy.sparse.csc_matrix) – input features of shape (n_samples, n_features)
Returns:

binary indicator matrix with label assignments, of shape (n_samples, n_labels)

Return type:

scipy.sparse matrix of int
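
Since the documented return type is sparse, converting to a dense array is a common first step when inspecting predictions; the csr_matrix wrapper below is a defensive choice that also tolerates a dense return value:

import scipy.sparse as sp

prediction = classifier.predict(X_test)
# Dense view of the first five rows of the binary indicator matrix
print(sp.csr_matrix(prediction).toarray()[:5])
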
predict_proba(X)[source]

Predict probabilities of label assignments for X

Parameters:
  • X (numpy.ndarray or scipy.sparse.csc_matrix) – input features of shape (n_samples, n_features)
Returns:

matrix with label assignment probabilities of shape (n_samples, n_labels)

Return type:

array of arrays of float
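
The probabilities make it possible to apply a custom decision cut-off instead of the built-in threshold parameter. A sketch, treating the output as a dense (n_samples, n_labels) array; the 0.5 cut-off is an illustrative value:

import numpy as np

probabilities = np.asarray(classifier.predict_proba(X_test))
# A label is assigned wherever its probability exceeds the custom cut-off
custom_prediction = (probabilities > 0.5).astype(int)
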
reset()[source]

Resets the labels and neurons
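
Calling reset() makes the same instance reusable for training from scratch, for example when re-fitting on updated data; a short sketch:

classifier.reset()                  # discard previously learned labels and neurons
classifier.fit(X_train, y_train)    # train again from a clean state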

score(X, y, sample_weight=None)

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires the entire label set of each sample to be predicted correctly.

Parameters:
  • X (array-like, shape = (n_samples, n_features)) – Test samples.
  • y (array-like, shape = (n_samples) or (n_samples, n_outputs)) – True labels for X.
  • sample_weight (array-like, shape = [n_samples], optional) – Sample weights.
Returns:

score – Mean accuracy of self.predict(X) wrt. y.

Return type:

float
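
The harshness of subset accuracy is easiest to see next to a per-label metric; a sketch applying scikit-learn's metrics to the predictions from the earlier example:

from sklearn.metrics import accuracy_score, hamming_loss

# Subset accuracy: the whole label set of a sample must be predicted exactly
print(accuracy_score(y_test, prediction))
# Hamming loss: fraction of individual label assignments that are wrong
print(hamming_loss(y_test, prediction))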

set_params(**parameters)

Propagate parameters to sub-objects

Set parameters in the form returned by get_params, following the scikit-learn estimator convention.
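
A short usage sketch; per the scikit-learn convention the call also returns the estimator itself:

classifier = MLARAM()
classifier.set_params(vigilance=0.9, threshold=0.02)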