text.interpret

    text.interpret is the module that implements custom Interpretation classes for different NLP tasks by inheriting from the base Interpretation class.

    class TextClassificationInterpretation[source][test]

    Provides an interpretation of classification based on input sensitivity. For the moment this is designed for AWD-LSTM only, since the Transformer already has its own attention model.

    intrinsic_attention[source][test]

    Calculate the intrinsic attention of the input w.r.t. an output class_id, or the classification given by the model if class_id is None. For reference, see the Sequential Jacobian section of https://www.cs.toronto.edu/~graves/preprint.pdf
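    As a rough illustration of the idea (a minimal sketch, not the library's implementation), intrinsic attention can be obtained by back-propagating one class score to the input embeddings and taking the per-token gradient magnitude. In the sketch below, model, emb_layer, and the assumption that the model accepts embeddings directly and returns a 1-D tensor of class scores are all hypothetical:

        import torch

        def intrinsic_attention_sketch(model, emb_layer, token_ids, class_id=None):
            "Per-token sensitivity of one class score w.r.t. the input embeddings."
            # detach so the embeddings become a leaf tensor whose gradient we can read
            emb = emb_layer(token_ids).detach().requires_grad_(True)
            scores = model(emb)                # assumed: model takes embeddings, returns class scores
            if class_id is None:
                class_id = scores.argmax()     # default to the model's own prediction
            scores[class_id].backward()        # one row of the sequential Jacobian
            attn = emb.grad.abs().sum(dim=-1)  # collapse the embedding dimension
            return attn / attn.max()           # normalise to [0, 1] for display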

      html_intrinsic_attention[source][test]


      show_intrinsic_attention[source][test]

      show_intrinsic_attention(text:str, class_id:int=None, **kwargs)


      show_top_losses[source][test]

      Create a tabulation showing the first k texts in top_losses along with their prediction, actual, loss, and probability of the actual class. max_len is the maximum number of tokens displayed.
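      For instance, once an interpretation object has been built from a trained classifier (see the workflow below), the five highest-loss texts could be tabulated like this; the name ci is hypothetical:

        ci.show_top_losses(5)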

      Let’s show how it can be used once we have trained a text classification model.

        from fastai.text import *
        # assumes the IMDB sample dataset, which ships the texts.csv used below
        imdb = untar_data(URLs.IMDB_SAMPLE)

        data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text')
                   .split_by_rand_pct()
                   .label_for_lm()
                   .databunch())
        data_lm.save()

        data_lm.show_batch()
        # a language-model learner has to exist at this point; it is not created
        # in this excerpt, so the standard fastai v1 call is assumed here
        learn = language_model_learner(data_lm, AWD_LSTM)
        learn.fit_one_cycle(2, 1e-2)
        learn.save('mini_train_lm')
        learn.save_encoder('mini_train_encoder')
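      The classification DataBunch data_clas used in the next cell is not built in this excerpt; a minimal sketch, assuming texts.csv also has a label column and reusing the language model's vocabulary, could be:

        data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text', vocab=data_lm.vocab)
                     .split_by_rand_pct()
                     .label_from_df(cols='label')
                     .databunch(bs=42))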
        learn = text_classifier_learner(data_clas, AWD_LSTM)
        learn.load_encoder('mini_train_encoder')
        learn.fit_one_cycle(2, slice(1e-3,1e-2))
        learn.save('mini_train_clas')

      interpret
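      The sentence shown below is the output of show_intrinsic_attention on the trained classifier (the colour highlighting of the tokens is lost in this plain-text rendering). A minimal sketch of the call, assuming the usual from_learner constructor:

        ci = TextClassificationInterpretation.from_learner(learn)
        ci.show_intrinsic_attention("i really like this movie , it is amazing !")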

        xxbos i really like this movie , it is amazing !

