6.1. Pipelines and composite estimators
Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:
- Convenience and encapsulation: You only have to call fit and predict once on your data to fit a whole sequence of estimators.
- Joint parameter selection: You can grid search over parameters of all estimators in the pipeline at once.
- Safety: Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
All estimators in a pipeline, except the last one, must be transformers (i.e. must have a transform method). The last estimator may be any type (transformer, classifier, etc.).
6.1.1.1.1. Construction
The Pipeline is built using a list of (key, value) pairs, where the key is a string containing the name you want to give this step and value is an estimator object:
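For example, the pipe object used throughout the rest of this section chains a PCA step named 'reduce_dim' with an SVC step named 'clf'; a minimal construction looks like this:

>>> from sklearn.pipeline import Pipeline
>>> from sklearn.svm import SVC
>>> from sklearn.decomposition import PCA
>>> estimators = [('reduce_dim', PCA()), ('clf', SVC())]
>>> pipe = Pipeline(estimators)
>>> pipe
Pipeline(steps=[('reduce_dim', PCA()), ('clf', SVC())])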
The utility function make_pipeline is a shorthand for constructing pipelines; it takes a variable number of estimators and returns a pipeline, filling in the names automatically:
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.naive_bayes import MultinomialNB
>>> from sklearn.preprocessing import Binarizer
>>> make_pipeline(Binarizer(), MultinomialNB())
Pipeline(steps=[('binarizer', Binarizer()), ('multinomialnb', MultinomialNB())])
6.1.1.1.2. Accessing steps
The estimators of a pipeline are stored as a list in the steps attribute, but can also be accessed by indexing the Pipeline itself, with an integer index or a step name:
>>> pipe.steps[0]
('reduce_dim', PCA())
>>> pipe[0]
PCA()
>>> pipe['reduce_dim']
PCA()
Pipeline's named_steps attribute allows accessing steps by name with tab completion in interactive environments:
>>> pipe.named_steps.reduce_dim is pipe['reduce_dim']
True
A sub-pipeline can also be extracted using the slicing notation commonly used for Python sequences such as lists or strings (although only a step of 1 is permitted). This is convenient for performing only some of the transformations (or their inverse):
>>> pipe[:1]
Pipeline(steps=[('reduce_dim', PCA())])
>>> pipe[-1:]
Pipeline(steps=[('clf', SVC())])
6.1.1.1.3. Nested parameters
Parameters of the estimators in the pipeline can be accessed using the <estimator>__<parameter> syntax:
>>> pipe.set_params(clf__C=10)
Pipeline(steps=[('reduce_dim', PCA()), ('clf', SVC(C=10))])
This is particularly important for doing grid searches:
>>> from sklearn.model_selection import GridSearchCV
>>> param_grid = dict(reduce_dim__n_components=[2, 5, 10],
...                   clf__C=[0.1, 10, 100])
>>> grid_search = GridSearchCV(pipe, param_grid=param_grid)
Individual steps may also be replaced as parameters, and non-final steps may be ignored by setting them to 'passthrough':
>>> from sklearn.linear_model import LogisticRegression
>>> param_grid = dict(reduce_dim=['passthrough', PCA(5), PCA(10)],
...                   clf=[SVC(), LogisticRegression()],
...                   clf__C=[0.1, 10, 100])
>>> grid_search = GridSearchCV(pipe, param_grid=param_grid)
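Such a grid search is then fitted like any other estimator, refitting the whole pipeline for each candidate combination under cross-validation. A minimal sketch, assuming the digits dataset that also appears later in this section:

>>> from sklearn.datasets import load_digits
>>> X_digits, y_digits = load_digits(return_X_y=True)
>>> grid_search.fit(X_digits, y_digits)   # evaluates every parameter combination by cross-validation
GridSearchCV(...)
>>> grid_search.best_estimator_           # the pipeline refitted with the best combination found
Pipeline(...)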
The estimators of the pipeline can be retrieved by index:
>>> pipe[0]
PCA()
or by name:
>>> pipe['reduce_dim']
PCA()
Calling fit on the pipeline is the same as calling fit on each estimator in turn, transforming the input and passing it on to the next step. The pipeline has all the methods that the last estimator in the pipeline has, i.e. if the last estimator is a classifier, the Pipeline can be used as a classifier. If the last estimator is a transformer, again, so is the pipeline.
Fitting transformers may be computationally expensive. With its memory parameter set, Pipeline will cache each transformer after calling fit. This feature is used to avoid recomputing the fitted transformers within a pipeline if the parameters and input data are identical. A typical example is the case of a grid search in which the transformers can be fitted only once and reused for each configuration.
The parameter memory is needed in order to cache the transformers. memory can be either a string containing the directory where to cache the transformers or a joblib.Memory object:
>>> from tempfile import mkdtemp
>>> from shutil import rmtree
>>> from sklearn.decomposition import PCA
>>> from sklearn.svm import SVC
>>> from sklearn.pipeline import Pipeline
>>> estimators = [('reduce_dim', PCA()), ('clf', SVC())]
>>> cachedir = mkdtemp()
>>> pipe = Pipeline(estimators, memory=cachedir)
>>> pipe
Pipeline(memory=...,
         steps=[('reduce_dim', PCA()), ('clf', SVC())])
>>> # Clear the cache directory when you don't need it anymore
>>> rmtree(cachedir)
Warning
Side effect of caching transformers
Using a Pipeline without caching enabled, it is possible to inspect the original transformer instance directly, for example:
>>> from sklearn.datasets import load_digits
>>> X_digits, y_digits = load_digits(return_X_y=True)
>>> pca1 = PCA()
>>> svm1 = SVC()
>>> pipe = Pipeline([('reduce_dim', pca1), ('clf', svm1)])
>>> pipe.fit(X_digits, y_digits)
Pipeline(steps=[('reduce_dim', PCA()), ('clf', SVC())])
>>> # The pca instance can be inspected directly
>>> print(pca1.components_)
[[-1.77484909e-19 ... 4.07058917e-18]]
Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. In the following example, accessing the PCA instance pca2 will raise an AttributeError since pca2 will be an unfitted transformer. Instead, use the attribute named_steps to inspect estimators within the pipeline:
>>> cachedir = mkdtemp()
>>> pca2 = PCA()
>>> svm2 = SVC()
>>> cached_pipe = Pipeline([('reduce_dim', pca2), ('clf', svm2)],
...                        memory=cachedir)
>>> cached_pipe.fit(X_digits, y_digits)
Pipeline(memory=...,
         steps=[('reduce_dim', PCA()), ('clf', SVC())])
>>> print(cached_pipe.named_steps['reduce_dim'].components_)
[[-1.77484909e-19 ... 4.07058917e-18]]
>>> # Remove the cache directory
>>> rmtree(cachedir)
TransformedTargetRegressor transforms the targets y before fitting a regression model. The predictions are mapped back to the original space via an inverse transform. It takes as an argument the regressor that will be used for prediction, and the transformer that will be applied to the target variable:
>>> import numpy as np
>>> from sklearn.datasets import load_boston
>>> from sklearn.compose import TransformedTargetRegressor
>>> from sklearn.preprocessing import QuantileTransformer
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.model_selection import train_test_split
>>> X, y = load_boston(return_X_y=True)
>>> transformer = QuantileTransformer(output_distribution='normal')
>>> regressor = LinearRegression()
>>> regr = TransformedTargetRegressor(regressor=regressor,
...                                   transformer=transformer)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
>>> regr.fit(X_train, y_train)
TransformedTargetRegressor(...)
>>> print('R2 score: {0:.2f}'.format(regr.score(X_test, y_test)))
R2 score: 0.67
>>> raw_target_regr = LinearRegression().fit(X_train, y_train)
>>> print('R2 score: {0:.2f}'.format(raw_target_regr.score(X_test, y_test)))
R2 score: 0.64
For simple transformations, instead of a Transformer object, a pair of functions can be passed, defining the transformation and its inverse mapping:
>>> def func(x):
...     return np.log(x)
>>> def inverse_func(x):
...     return np.exp(x)
Subsequently, the object is created as:
>>> regr = TransformedTargetRegressor(regressor=regressor,
...                                   func=func,
...                                   inverse_func=inverse_func)
>>> regr.fit(X_train, y_train)
TransformedTargetRegressor(...)
>>> print('R2 score: {0:.2f}'.format(regr.score(X_test, y_test)))
R2 score: 0.65
By default, the provided functions are checked at each fit to be the inverse of each other. However, it is possible to bypass this checking by setting check_inverse to False:
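Reusing the regressor and the func/inverse_func pair defined above, a minimal sketch:

>>> regr = TransformedTargetRegressor(regressor=regressor,
...                                   func=func,
...                                   inverse_func=inverse_func,
...                                   check_inverse=False)   # skip the inverse consistency check
>>> regr.fit(X_train, y_train)
TransformedTargetRegressor(...)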
Note
The transformation can be triggered by setting either transformer or the pair of functions func and inverse_func. However, setting both options will raise an error.
FeatureUnion combines several transformer objects into a new transformer that combines their output. A FeatureUnion takes a list of transformer objects. During fitting, each of these is fit to the data independently. The transformers are applied in parallel, and the feature matrices they output are concatenated side-by-side into a larger matrix.
When you want to apply different transformations to each field of the data, see the related class sklearn.compose.ColumnTransformer.
FeatureUnion serves the same purposes as Pipeline: convenience and joint parameter estimation and validation.
(A FeatureUnion has no way of checking whether two transformers might produce identical features. It only produces a union when the feature sets are disjoint, and making sure they are is the caller's responsibility.)
A FeatureUnion is built using a list of (key, value) pairs, where the key is the name you want to give to a given transformation (an arbitrary string; it only serves as an identifier) and value is an estimator object:
>>> from sklearn.pipeline import FeatureUnion
>>> from sklearn.decomposition import PCA
>>> from sklearn.decomposition import KernelPCA
>>> estimators = [('linear_pca', PCA()), ('kernel_pca', KernelPCA())]
>>> combined = FeatureUnion(estimators)
>>> combined
FeatureUnion(transformer_list=[('linear_pca', PCA()),
                               ('kernel_pca', KernelPCA())])
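To make the side-by-side concatenation concrete, here is a small sketch with hypothetical random data and explicit component counts (so that the output width is predictable):

>>> import numpy as np
>>> X_demo = np.random.RandomState(0).rand(6, 5)              # hypothetical data, 6 samples x 5 features
>>> union = FeatureUnion([('linear_pca', PCA(n_components=2)),
...                       ('kernel_pca', KernelPCA(n_components=3))])
>>> union.fit_transform(X_demo).shape                         # 2 PCA columns + 3 KernelPCA columns
(6, 5)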
Like pipelines, feature unions have a shorthand constructor called make_union that does not require explicit naming of the components.
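A minimal sketch; the names are derived automatically from the lowercased class names:

>>> from sklearn.pipeline import make_union
>>> union = make_union(PCA(), KernelPCA())
>>> [name for name, _ in union.transformer_list]   # names generated from the class names
['pca', 'kernelpca']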
Like Pipeline, individual steps may be replaced using set_params, and ignored by setting them to 'drop':
>>> combined.set_params(kernel_pca='drop')
FeatureUnion(transformer_list=[('linear_pca', PCA()),
                               ('kernel_pca', 'drop')])
Warning
The compose.ColumnTransformer class is experimental and the API is subject to change.
Many datasets contain features of different types, say text, floats, and dates, where each type of feature requires separate preprocessing or feature extraction steps. Often it is easiest to preprocess data before applying scikit-learn methods, for example using pandas. Processing your data before passing it to scikit-learn might be problematic for one of the following reasons:
- Incorporating statistics from test data into the preprocessors makes cross-validation scores unreliable (known as data leakage), for example in the case of scalers or imputing missing values.
- You may want to include the parameters of the preprocessors in a parameter search.
The ColumnTransformer helps perform different transformations for different columns of the data, within a Pipeline that is safe from data leakage and that can be parametrized. ColumnTransformer works on arrays, sparse matrices, and pandas DataFrames.
To each column, a different transformation can be applied, such as preprocessing or a specific feature extraction method:
>>> import pandas as pd
>>> X = pd.DataFrame(
...     {'city': ['London', 'London', 'Paris', 'Sallisaw'],
...      'title': ["His Last Bow", "How Watson Learned the Trick",
...                "A Moveable Feast", "The Grapes of Wrath"],
...      'expert_rating': [5, 3, 4, 5],
...      'user_rating': [4, 5, 4, 3]})
For this data, we might want to encode the 'city' column as a categorical variable using preprocessing.OneHotEncoder but apply a feature_extraction.text.CountVectorizer to the 'title' column. As we might use multiple feature extraction methods on the same column, we give each transformer a unique name, say 'city_category' and 'title_bow'. By default, the remaining rating columns are ignored (remainder='drop'):
>>> from sklearn.compose import ColumnTransformer
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> from sklearn.preprocessing import OneHotEncoder
>>> column_trans = ColumnTransformer(
...     [('city_category', OneHotEncoder(dtype='int'), ['city']),
...      ('title_bow', CountVectorizer(), 'title')],
...     remainder='drop')
>>> column_trans.fit(X)
ColumnTransformer(transformers=[('city_category', OneHotEncoder(dtype='int'),
                                 ['city']),
                                ('title_bow', CountVectorizer(), 'title')])
>>> column_trans.get_feature_names()
['city_category__x0_London', 'city_category__x0_Paris', 'city_category__x0_Sallisaw',
 'title_bow__bow', 'title_bow__feast', 'title_bow__grapes', 'title_bow__his',
 'title_bow__how', 'title_bow__last', 'title_bow__learned', 'title_bow__moveable',
 'title_bow__of', 'title_bow__the', 'title_bow__trick', 'title_bow__watson',
 'title_bow__wrath']
>>> column_trans.transform(X).toarray()
array([[1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0],
       [1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0],
       [0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1]]...)
In the above example, the CountVectorizer expects a 1D array as input and therefore the column was specified as a string ('title'). However, preprocessing.OneHotEncoder, like most other transformers, expects 2D data; therefore in that case you need to specify the column as a list of strings (['city']).
Apart from a scalar or a single item list, the column selection can be specified as a list of multiple items, an integer array, a slice, a boolean mask, or with a make_column_selector. The make_column_selector is used to select columns based on data type or column name:
>>> import numpy as np
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.compose import make_column_selector
>>> ct = ColumnTransformer([
...       ('scale', StandardScaler(),
...        make_column_selector(dtype_include=np.number)),
...       ('onehot',
...        OneHotEncoder(),
...        make_column_selector(pattern='city', dtype_include=object))])
>>> ct.fit_transform(X)
array([[ 0.904...,  0.      ,  1.      ,  0.      ,  0.      ],
       [-1.507...,  1.414...,  1.      ,  0.      ,  0.      ],
       [-0.301...,  0.      ,  0.      ,  1.      ,  0.      ],
       [ 0.904..., -1.414...,  0.      ,  0.      ,  1.      ]])
Strings can reference columns if the input is a DataFrame; integers are always interpreted as positional columns.
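For instance, the two rating columns of X above can also be selected by position; a small sketch reusing the ColumnTransformer and StandardScaler imports from the previous examples:

>>> ct_positional = ColumnTransformer(
...     [('scale', StandardScaler(), [2, 3])])   # columns 2 and 3 are expert_rating and user_rating
>>> ct_positional.fit_transform(X).shape         # remaining columns are dropped by default
(4, 2)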
We can keep the remaining rating columns by setting remainder='passthrough'. The values are appended to the end of the transformation:
>>> column_trans = ColumnTransformer(
...     [('city_category', OneHotEncoder(dtype='int'), ['city']),
...      ('title_bow', CountVectorizer(), 'title')],
...     remainder='passthrough')
>>> column_trans.fit_transform(X)
array([[1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 5, 4],
       [1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 3, 5],
       [0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 4, 4],
       [0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 5, 3]]...)
The remainder parameter can be set to an estimator to transform the remaining rating columns. The transformed values are appended to the end of the transformation:
>>> from sklearn.preprocessing import MinMaxScaler
>>> column_trans = ColumnTransformer(
...     [('city_category', OneHotEncoder(), ['city']),
...      ('title_bow', CountVectorizer(), 'title')],
...     remainder=MinMaxScaler())
>>> column_trans.fit_transform(X)[:, -2:]
array([[1. , 0.5],
       [0. , 1. ],
       [0.5, 0.5],
       [1. , 0. ]])
The make_column_transformer function is available to more easily create a ColumnTransformer object. Specifically, the names will be given automatically. The equivalent for the above example would be:
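A sketch of the equivalent construction; the generated names come from the lowercased class names:

>>> from sklearn.compose import make_column_transformer
>>> column_trans = make_column_transformer(
...     (OneHotEncoder(), ['city']),
...     (CountVectorizer(), 'title'),
...     remainder=MinMaxScaler())
>>> [name for name, _, _ in column_trans.transformers]   # names generated automatically
['onehotencoder', 'countvectorizer']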