text.interpret
text.interpret is the module that implements custom interpretation classes for different NLP tasks, by inheriting from the core interpretation classes.
class TextClassificationInterpretation
[source][test]
Provides an interpretation of classification based on input sensitivity. For the moment this is designed for AWD-LSTM only, because Transformer already has its own attention model.
intrinsic_attention
[source][test]
intrinsic_attention(text:str, class_id:int=None)
Calculate the intrinsic attention of the input w.r.t. an output class_id, or w.r.t. the class predicted by the model if class_id is None. For reference, see the Sequential Jacobian section of https://www.cs.toronto.edu/~graves/preprint.pdf
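The idea can be sketched in a few lines of plain PyTorch (an illustrative toy, not fastai's actual implementation; emb, clf and ids are made-up placeholders): embed the tokens, backpropagate the chosen class score to the embeddings, and read off each token's gradient norm as its attention weight.

import torch
import torch.nn as nn

# Toy stand-ins for a real model: a vocab-100 embedding and a linear
# classifier over a 5-token sequence. Only the gradient trick matters here.
emb = nn.Embedding(100, 16)
clf = nn.Sequential(nn.Flatten(), nn.Linear(5 * 16, 2))
ids = torch.tensor([[1, 5, 9, 2, 7]])      # one sequence of 5 token ids

x = emb(ids)                                # shape (1, 5, 16)
x.retain_grad()                             # keep gradients on a non-leaf tensor
logits = clf(x)
class_id = logits.argmax(dim=-1).item()     # default: the model's own prediction
logits[0, class_id].backward()

attn = x.grad.norm(dim=-1).squeeze(0)       # one sensitivity score per token
attn = attn / attn.max()                    # normalize to [0, 1] for display
print(attn)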
html_intrinsic_attention
[source][test]
html_intrinsic_attention(text:str, class_id:int=None, **kwargs) → str
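Judging from the signature, this is the string-returning counterpart of show_intrinsic_attention below: it builds the color-highlighted text as an HTML string rather than rendering it, which is convenient for embedding in reports. A usage sketch, assuming txt_ci is the TextClassificationInterpretation built in the example at the bottom of this page:

html = txt_ci.html_intrinsic_attention("I really like this movie, it is amazing!")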
show_intrinsic_attention
[source][test]
show_intrinsic_attention(text:str, class_id:int=None, **kwargs)
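For example, once a TextClassificationInterpretation object exists (txt_ci in the walkthrough below), extra keyword arguments such as a matplotlib colormap are passed through **kwargs to control the highlighting:

import matplotlib.cm as cm

txt_ci.show_intrinsic_attention("I really like this movie, it is amazing!", cmap=cm.Purples)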
show_top_losses
[source][test]
show_top_losses(k:int, max_len:int=70)
Create a tabulation showing the first k texts in top_losses along with their prediction, actual, loss, and probability of the actual class. max_len is the maximum number of tokens displayed.
Let’s show how TextClassificationInterpretation can be used once we train a text classification model.
from fastai.text import *

imdb = untar_data(URLs.IMDB_SAMPLE)  # download the IMDB sample dataset

data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text')
               .split_by_rand_pct()
               .label_for_lm()
               .databunch())
data_lm.save()
data_lm.show_batch()
learn = language_model_learner(data_lm, AWD_LSTM)  # fine-tune a language model first
learn.fit_one_cycle(2, 1e-2)
learn.save('mini_train_lm')
learn.save_encoder('mini_train_encoder')
data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text', vocab=data_lm.vocab)
                .split_by_rand_pct()
                .label_from_df(cols='label')
                .databunch(bs=42))

learn = text_classifier_learner(data_clas, AWD_LSTM)
learn.load_encoder('mini_train_encoder')  # reuse the language model's encoder
learn.fit_one_cycle(2, slice(1e-3,1e-2))
learn.save('mini_train_clas')
txt_ci = TextClassificationInterpretation.from_learner(learn)
txt_ci.show_intrinsic_attention("I really like this movie, it is amazing!")
xxbos i really like this movie , it is amazing !
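Finally, show_top_losses tabulates the validation texts the classifier got most wrong, here the top 5:

txt_ci.show_top_losses(5)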