Multilabel ARAM¶

class
skmultilearn.adapt.
MLARAM
(vigilance=0.9, threshold=0.02, neurons=None)[source]¶ HARAM: A Hierarchical ARAM Neural Network for Large-Scale Text Classification
This method aims at increasing the classification speed by adding an extra ART layer for clustering learned prototypes into large clusters. In this case the activation of all prototypes can be replaced by the activation of a small fraction of them, leading to a significant reduction of the classification time.
Parameters:  vigilance (float (default is 0.9)) – parameter for adaptive resonance theory networks; controls how large a hyperbox can be. A value of 1 yields small hyperboxes (no compression), while 0 lets a hyperbox span the full range. Normally set between 0.8 and 0.999; the best value is dataset-dependent. It governs prototype creation, and therefore the training of the network.
 threshold (float (default is 0.02)) – controls how many prototypes participate in the prediction; can be changed for the testing phase.
 neurons (list) – the neurons in the network
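To give intuition for the threshold parameter, here is a simplified sketch of prototype selection: keep only the prototypes whose activation is within a relative gap of the strongest one. This is an illustrative reading of "controls how many prototypes participate", not the library's exact internal rule; the function name and selection rule below are assumptions for illustration.

```python
import numpy as np

def select_prototypes(activations, threshold=0.02):
    """Illustrative sketch (NOT MLARAM's exact rule): keep prototypes
    whose activation is within `threshold` (relative) of the maximum."""
    activations = np.asarray(activations, dtype=float)
    best = activations.max()
    # Relative gap to the best-activated prototype.
    keep = (best - activations) / best <= threshold
    return np.flatnonzero(keep)

# With a 2% threshold only the near-maximal prototypes participate.
acts = [0.99, 0.98, 0.60, 0.30]
print(select_prototypes(acts, threshold=0.02))  # → [0 1]
```

A larger threshold lets more prototypes vote, trading speed for a smoother prediction; a threshold of 0 keeps only the single best-matching prototype.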
References
Published work available here.
@INPROCEEDINGS{7395756, author={F. Benites and E. Sapozhnikova}, booktitle={2015 IEEE International Conference on Data Mining Workshop (ICDMW)}, title={HARAM: A Hierarchical ARAM Neural Network for Large-Scale Text Classification}, year={2015}, volume={}, number={}, pages={847-854}, doi={10.1109/ICDMW.2015.14}, ISSN={2375-9259}, month={Nov}, }
Examples
Here’s an example code with a 5% threshold and vigilance of 0.95:
from skmultilearn.adapt import MLARAM

classifier = MLARAM(threshold=0.05, vigilance=0.95)
classifier.fit(X_train, y_train)
prediction = classifier.predict(X_test)

fit
(X, y)[source]¶ Fit classifier with training data
Parameters:  X (numpy.ndarray or scipy.sparse) – input features, can be a dense or sparse matrix of size (n_samples, n_features)
 y (numpy.ndarray or scipy.sparse {0,1}) – binary indicator matrix with label assignments.
Returns: fitted instance of self
Return type: skmultilearn.MLARAMfast.MLARAM
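The input conventions for fit can be illustrated without the library itself: a dense feature matrix and a sparse {0,1} indicator matrix for the labels. The data values below are made up for illustration.

```python
import numpy as np
import scipy.sparse as sp

# Dense feature matrix: 4 samples, 3 features.
X_train = np.array([[0.1, 0.9, 0.0],
                    [0.8, 0.1, 0.2],
                    [0.0, 0.7, 0.9],
                    [0.9, 0.0, 0.1]])

# Binary indicator matrix: 4 samples, 2 labels, entries in {0, 1}.
y_train = sp.lil_matrix(np.array([[1, 0],
                                  [0, 1],
                                  [1, 1],
                                  [0, 1]]))

print(X_train.shape, y_train.shape)  # → (4, 3) (4, 2)

# With scikit-multilearn installed, both dense and sparse matrices
# can be passed directly:
#   from skmultilearn.adapt import MLARAM
#   classifier = MLARAM().fit(X_train, y_train)
```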

get_params
(deep=True)¶ Get parameters of this estimator and its sub-objects
Introspection of the classifier for search models like cross-validation and grid search.
Parameters: deep (bool) – if True, the parameters of sub-objects are also introspected and appended to the output dictionary.
Returns: out – dictionary of all parameters and their values. If deep=True, the dictionary also holds the parameters of the parameters.
Return type: dict
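The get_params contract can be sketched with a minimal stand-in estimator; TinyEstimator below is a hypothetical class, not part of scikit-multilearn, showing the flat parameter dictionary that grid search and cross-validation rely on.

```python
class TinyEstimator:
    """Minimal sklearn-style estimator illustrating the get_params
    contract used by grid search and cross-validation."""
    def __init__(self, vigilance=0.9, threshold=0.02):
        self.vigilance = vigilance
        self.threshold = threshold

    def get_params(self, deep=True):
        # Report constructor arguments as a flat dictionary.
        return {"vigilance": self.vigilance, "threshold": self.threshold}

est = TinyEstimator(vigilance=0.95)
print(est.get_params())  # → {'vigilance': 0.95, 'threshold': 0.02}
```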

predict
(X)[source]¶ Predict labels for X
Parameters: X (numpy.ndarray or scipy.sparse.csc_matrix) – input features of shape (n_samples, n_features)
Returns: binary indicator matrix with label assignments with shape (n_samples, n_labels)
Return type: scipy.sparse of int
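The sparse indicator returned by predict can be densified or unpacked per sample; the matrix below is a hand-written stand-in for an actual prediction, shown only to demonstrate the output format.

```python
import numpy as np
import scipy.sparse as sp

# Stand-in for what predict(X) returns: a sparse {0,1} indicator
# matrix of shape (n_samples, n_labels) with integer entries.
prediction = sp.csr_matrix(np.array([[1, 0, 1],
                                     [0, 1, 0]], dtype=int))

# Densify for inspection, or pull per-sample label indices directly.
print(prediction.toarray())
labels_per_sample = [row.nonzero()[1].tolist() for row in prediction]
print(labels_per_sample)  # → [[0, 2], [1]]
```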

predict_proba
(X)[source]¶ Predict probabilities of label assignments for X
Parameters: X (numpy.ndarray or scipy.sparse.csc_matrix) – input features of shape (n_samples, n_features)
Returns: matrix with label assignment probabilities of shape (n_samples, n_labels)
Return type: array of arrays of float
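A common downstream step (not part of MLARAM itself) is to binarize these probabilities with a cutoff to obtain label assignments; the probability values below are made up for illustration.

```python
import numpy as np

# Stand-in for predict_proba(X) output: per-label assignment
# probabilities of shape (n_samples, n_labels).
proba = np.array([[0.91, 0.12, 0.55],
                  [0.05, 0.80, 0.49]])

# Binarize with a 0.5 cutoff to get a {0,1} indicator matrix.
assignments = (proba >= 0.5).astype(int)
print(assignments)  # → [[1 0 1]
                    #    [0 1 0]]
```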

score
(X, y, sample_weight=None)¶ Returns the mean accuracy on the given test data and labels.
In multilabel classification, this is the subset accuracy, which is a harsh metric since it requires, for each sample, that the entire label set be predicted correctly.
Parameters:  X (array-like, shape = (n_samples, n_features)) – Test samples.
 y (array-like, shape = (n_samples) or (n_samples, n_outputs)) – True labels for X.
 sample_weight (array-like, shape = [n_samples], optional) – Sample weights.
Returns: score – Mean accuracy of self.predict(X) wrt. y.
Return type: float
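Subset accuracy itself is simple to compute; this sketch (a hypothetical helper, matching the definition scikit-learn's multilabel accuracy_score uses) makes the "harsh metric" point concrete.

```python
import numpy as np

def subset_accuracy(y_true, y_pred):
    """Subset accuracy: a sample counts as correct only if its
    ENTIRE label set is predicted exactly."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(np.all(y_true == y_pred, axis=1)))

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
# Sample 2 misses one label, so the whole sample counts as wrong:
# 2 of 3 exact matches.
print(subset_accuracy(y_true, y_pred))
```

Note how a single wrong label zeroes out an otherwise perfect sample, which is why subset accuracy is usually much lower than per-label (Hamming) accuracy.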