Tabular training

    How to use the tabular application in fastai

    To illustrate the tabular application, we will use the example of the Adult dataset where we have to predict if a person is earning more or less than $50k per year using some general data.

    We can download a sample of this dataset with the usual command:

    from fastai.tabular.all import *

    path = untar_data(URLs.ADULT_SAMPLE)
    path.ls()

    (#3) [Path('/home/ml1/.fastai/data/adult_sample/models'),Path('/home/ml1/.fastai/data/adult_sample/export.pkl'),Path('/home/ml1/.fastai/data/adult_sample/adult.csv')]

    Then we can have a look at how the data is structured:

    df = pd.read_csv(path/'adult.csv')
    df.head()

    Some of the columns are continuous (like age) and we will treat them as float numbers we can feed our model directly. Others are categorical (like workclass or education) and we will convert them to a unique index that we will feed to embedding layers. We can specify our categorical and continuous column names, as well as the name of the dependent variable in TabularDataLoaders factory methods:

    dls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names="salary",
        cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race'],
        cont_names = ['age', 'fnlwgt', 'education-num'],
        procs = [Categorify, FillMissing, Normalize])

    The last part is the list of pre-processors we apply to our data:

    • Categorify is going to take every categorical variable and make a map from integer to unique categories, then replace the values by the corresponding index.
    • FillMissing will fill the missing values in the continuous variables with the median of existing values (you can choose a specific value if you prefer).
    • Normalize will normalize the continuous variables (subtract the mean and divide by the standard deviation).
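    To see what those processors do, here is a minimal sketch on a toy DataFrame (df_demo and its values are invented for illustration):

    import pandas as pd
    from fastai.tabular.all import TabularPandas, Categorify, FillMissing, Normalize

    # A tiny frame with one categorical and one continuous column
    df_demo = pd.DataFrame({'color':  ['red', 'blue', None, 'red'],
                            'size':   [1.0, None, 3.0, 5.0],
                            'target': [0, 1, 0, 1]})
    to_demo = TabularPandas(df_demo, procs=[Categorify, FillMissing, Normalize],
                            cat_names=['color'], cont_names=['size'], y_names='target')
    # 'color' is now an integer index (0 is reserved for #na#), the missing
    # 'size' was filled with the median and flagged in a new 'size_na' column,
    # and 'size' was normalized
    to_demo.items.head()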

    To further expose what's going on below the surface, let's rewrite this utilizing fastai's TabularPandas class. We will need to make one adjustment, which is defining how we want to split our data. By default the factory method above used a random 80/20 split, so we will do the same:

    splits = RandomSplitter(valid_pct=0.2)(range_of(df))

    to = TabularPandas(df, procs=[Categorify, FillMissing, Normalize],
                       cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race'],
                       cont_names = ['age', 'fnlwgt', 'education-num'],
                       y_names='salary',
                       splits=splits)

    Once built, our data is completely preprocessed; the first two rows of the encoded independent variables look like this:

    to.xs.iloc[:2]

           workclass  education  marital-status  occupation  relationship  race  education-num_na       age    fnlwgt  education-num
    15780          2         16               1           5             2     5                 1  0.984037  2.210372      -0.033692
    17442          5         12               5           8             2     5                 1 -1.509555 -0.319624      -0.425324

    Now we can build our DataLoaders again:

    dls = to.dataloaders(bs=64)
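
    If you want to inspect what these DataLoaders yield, a tabular batch is a tuple of categorical, continuous and target tensors (a quick sketch; the exact shapes depend on your columns and batch size):

    # With bs=64, 6 categorical columns plus the education-num_na flag added by
    # FillMissing, and 3 continuous columns, expect roughly (64, 7), (64, 3), (64, 1)
    cats, conts, ys = dls.one_batch()
    cats.shape, conts.shape, ys.shape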

    The show_batch method works just like in every other application:

    dls.show_batch()

    We can define a model using the tabular_learner method. When we define our model, fastai will try to infer the loss function based on our y_names earlier.

    Note: Sometimes with tabular data, your y's may be encoded (such as 0 and 1). In such a case you should explicitly pass y_block = CategoryBlock in your constructor so fastai won't presume you are doing regression.
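
    As a sketch, if salary had already been encoded as 0/1, the factory call above would become:

    # y_block is only needed when the target is already numeric; it tells fastai
    # this is classification, not regression
    dls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names="salary",
        cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race'],
        cont_names = ['age', 'fnlwgt', 'education-num'],
        procs = [Categorify, FillMissing, Normalize],
        y_block = CategoryBlock())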

    learn = tabular_learner(dls, metrics=accuracy)
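
    You can check which loss function was picked (a quick sanity check, not required):

    # With a categorical target this should be a cross-entropy style loss
    learn.loss_func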

    And we can train that model with the fit_one_cycle method (the fine_tune method won’t be useful here since we don’t have a pretrained model).

    learn.fit_one_cycle(1)

    epoch  train_loss  valid_loss  accuracy  time
    0      0.369360    0.348096    0.840756  00:05

    We can then have a look at some predictions:

    learn.show_results()

    row, clas, probs = learn.predict(df.iloc[0])
    row.show()

       workclass  education   marital-status      occupation  relationship  race   education-num_na  age   fnlwgt         education-num  salary
    0  Private    Assoc-acdm  Married-civ-spouse  #na#        Wife          White  False             49.0  101319.997881  12.0           >=50k

    clas, probs

    (tensor(1), tensor([0.4995, 0.5005]))

    To get predictions on a new dataframe, you can use the test_dl method of the DataLoaders. That dataframe does not need to have the dependent variable in its columns.

    test_df = df.copy()
    test_df.drop(['salary'], axis=1, inplace=True)
    dl = learn.dls.test_dl(test_df)

    Then Learner.get_preds will give you the predictions:

    learn.get_preds(dl=dl)

    (tensor([[0.4995, 0.5005],
             [0.4882, 0.5118],
             [0.9824, 0.0176],
             ...,
             [0.5324, 0.4676],
             [0.7628, 0.2372],
             [0.5934, 0.4066]]), None)

    Note: Since a machine learning model can't magically understand categories it was never trained on, the data should reflect this. If there are different missing values in your test data you should address this before training.
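
    One way to spot categories that appear only in the test data, as a sketch (unseen values get mapped to the #na# index at inference time):

    # Compare category sets between the training and test frames
    for col in ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']:
        unseen = set(test_df[col].dropna().unique()) - set(df[col].dropna().unique())
        if unseen: print(col, unseen)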

    As mentioned earlier, TabularPandas is a powerful and easy preprocessing tool for tabular data. Integration with libraries such as Random Forests and XGBoost requires only one extra step, which the .dataloaders call did for us. Let's look at our to again. Its values are stored in a DataFrame-like object, where we can extract the cats, conts, xs and ys if we want to:
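
    For example (a sketch of those accessors):

    # Each accessor returns the processed values as a pandas object
    to.cats.head()   # encoded categorical columns
    to.conts.head()  # normalized continuous columns
    to.xs.head()     # all independent variables together
    to.ys.head()     # the dependent variable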

    Now that everything is encoded, you can then send this off to XGBoost or Random Forests by extracting the train and validation sets and their values:
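
    A minimal sketch of that extraction, using the train and valid attributes of our TabularPandas object:

    # numpy-ready features and targets for each split
    X_train, y_train = to.train.xs, to.train.ys.values.ravel()
    X_valid, y_valid = to.valid.xs, to.valid.ys.values.ravel()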

    And now we can directly send this in!
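
    For instance, fitting a scikit-learn random forest on those values (a sketch; the hyperparameters here are arbitrary):

    from sklearn.ensemble import RandomForestClassifier

    # Train on the fastai-processed features and score on the validation split
    rf = RandomForestClassifier(n_estimators=100)
    rf.fit(X_train, y_train)
    rf.score(X_valid, y_valid)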