1 Learning Objectives
2 Introduction to tidymodels
3 Himalayan Climbing Expeditions Data
4 The Core tidymodels Packages
4.1 rsample: General Resampling Infrastructure
4.2 recipes: Preprocessing Tools to Create Design Matrices
4.3 parsnip: A Common API to Modeling and Analysis Functions
4.4 workflows: Modeling Workflows
4.5 dials: Tools for Creating Tuning Parameter Values
4.6 tune: Tidy Tuning Tools
4.7 broom: Convert Statistical Objects into Tidy Tibbles
4.8 yardstick: Tidy Characterizations of Model Performance
5 Additions to the tidymodels Ecosystem
This workshop introduces tidymodels, a unified framework for modeling and machine learning in R using tidy data principles. You will get to know tools that facilitate every step of your machine learning workflow, from resampling and preprocessing, through model building and tuning, to performance evaluation.

More specifically, after this lecture you will
- know your way around the tidymodels ecosystem and hopefully realize the value of a unified modeling framework,
- be able to implement a complete modeling workflow in R.

tidymodels
"The tidymodels framework is a collection of packages for modeling and machine learning using tidyverse principles." ~ tidymodels.org

Official tidymodels Hex Sticker
Julia Silge - Software Engineer @ RStudio
Max Kuhn - Software Engineer @ RStudio
Whenever possible, the software should be able to protect users from committing mistakes. Software should make it easy for users to do the right thing. ~ Kuhn/Silge (2021)
Loosely speaking, you can think of tidymodels as the R counterpart of the scikit-learn package in the context of Python.
tidymodels core packages:

- rsample: general methods for resampling
- recipes: unified interface to data preprocessing
- parsnip: unified interface to modeling
- workflows: combine model blueprints and preprocessing recipes
- dials: create tuning parameters
- tune: hyperparameter tuning
- broom: tidy model outputs
- yardstick: model evaluation
install.packages("tidymodels")library(tidymodels)
-- Attaching packages ----------------------------- tidymodels 0.1.4 --v broom 0.7.9 v recipes 0.1.17v dials 0.0.10 v rsample 0.1.0 v dplyr 1.0.7 v tibble 3.1.4 v ggplot2 3.3.5 v tidyr 1.1.4 v infer 1.0.0 v tune 0.1.6 v modeldata 0.1.1 v workflows 0.2.3 v parsnip 0.1.7 v workflowsets 0.1.0 v purrr 0.3.4 v yardstick 0.0.8 -- Conflicts ------------------------------- tidymodels_conflicts() --x purrr::discard() masks scales::discard()x dplyr::filter() masks stats::filter()x dplyr::lag() masks stats::lag()x recipes::step() masks stats::step()* Use suppressPackageStartupMessages() to eliminate package startup messages
Explain:
- Some tidymodels functions mask functions from base R's stats package (e.g., recipes::step() masks stats::step()).
- tidymodels v0.1.4: a relatively new package ecosystem, so it is not unlikely that some of the features or function interfaces will change slightly in the future.

Remember, modeling is one of the main steps in our day-to-day data science workflow. And this is precisely where tidymodels fits in!

Source: Kuhn/Silge (2021), ch. 1.3
In order to illustrate the features of the tidymodels ecosystem, we use the Himalayan Climbing Expeditions data set from the tidytuesday project.

# install.packages("tidytuesdayR")
tt_data <- tidytuesdayR::tt_load(2020, week = 39)

> --- Compiling #TidyTuesday Information for 2020-09-22 ----
> --- There are 3 files available ---
> --- Starting Download ---
>
> Downloading file 1 of 3: `peaks.csv`
> Downloading file 2 of 3: `members.csv`
> Downloading file 3 of 3: `expeditions.csv`
>
> --- Download complete ---
The data set contains a large record of data spanning the 1905-2019 period about Himalayan peaks, expeditions, and their individual members.

Task: Predict the likelihood of an expedition coming to a lethal end (i.e. a binary classification task).
tt_data$members %>% skimr::skim()

> Output on next slide

We use the skimr package to get a high-level view of the data and the most important descriptives.

> -- Data Summary ------------------------
>                            Values
> Name                       Piped data
> Number of rows             76519
> Number of columns          21
> _______________________
> Column type frequency:
>   character                10
>   logical                  6
>   numeric                  5
> ________________________
> Group variables            None
> -- Variable type: character ---------------------------------------------------------------------------
> # A tibble: 10 x 8
>    skim_variable   n_missing complete_rate   min   max empty n_unique whitespace
>  * <chr>               <int>         <dbl> <int> <int> <int>    <int>      <int>
>  1 expedition_id           0        1          9     9     0    10350          0
>  2 member_id               0        1         12    12     0    76518          0
>  3 peak_id                 0        1          4     4     0      391          0
>  4 peak_name              15        1.00       4    25     0      390          0
>  5 season                  0        1          6     7     0        5          0
>  6 sex                     2        1.00       1     1     0        2          0
>  7 citizenship            10        1.00       2    23     0      212          0
>  8 expedition_role        21        1.00       4    25     0      524          0
>  9 death_cause         75413        0.0145     3    27     0       12          0
> 10 injury_type         74807        0.0224     3    27     0       11          0
> -- Variable type: logical -----------------------------------------------------------------------------
> # A tibble: 6 x 5
>   skim_variable n_missing complete_rate    mean count
> * <chr>             <int>         <dbl>   <dbl> <chr>
> 1 hired                 0             1 0.206   FAL: 60788, TRU: 15731
> 2 success               0             1 0.382   FAL: 47320, TRU: 29199
> 3 solo                  0             1 0.00158 FAL: 76398, TRU: 121
> 4 oxygen_used           0             1 0.238   FAL: 58286, TRU: 18233
> 5 died                  0             1 0.0145  FAL: 75413, TRU: 1106
> 6 injured               0             1 0.0224  FAL: 74806, TRU: 1713
> -- Variable type: numeric -----------------------------------------------------------------------------
> # A tibble: 5 x 11
>   skim_variable        n_missing complete_rate  mean    sd    p0   p25   p50   p75  p100 hist
> * <chr>                    <int>         <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
> 1 year                         0        1      2000. 14.8   1905  1991  2004  2012  2019 ▁▁▁▃▇
> 2 age                       3497        0.954   37.3 10.4      7    29    36    44    85 ▁▇▅▁▁
> 3 highpoint_metres         21833        0.715  7471. 1040.  3800  6700  7400  8400  8850 ▁▁▆▃▇
> 4 death_height_metres      75451        0.0140 6593. 1308.   400  5800  6600  7550  8830 ▁▁▂▇▆
> 5 injury_height_metres     75510        0.0132 7050. 1214.   400  6200  7100  8000  8880 ▁▁▂▇▇
Pt. 1:
Pt. 2:
Pt. 3: logical:
- hired: natives (around 20% of the expedition members)
- success: roughly 38% of the expedition members reached their goal
- death_cause and injury_type are only recorded for the few members who actually died or were injured
Pt. 4: numeric:
- year: expeditions took place more and more often in the two recent decades
- age: most climbers I would expect to be between 20-40, with few very old climbers (85), and some super young (7?!)
- age and highpoint_metres have a lot of missings!

Usually, you would do a lot more EDA right now:
climbers_df <- tt_data$members %>%
  select(member_id, peak_name, season, year, sex, age, citizenship,
         expedition_role, hired, solo, oxygen_used, success, died) %>%
  filter((!is.na(sex) & !is.na(citizenship) & !is.na(peak_name) & !is.na(expedition_role)) == T) %>%
  mutate(across(where(~ is.character(.) | is.logical(.)), as.factor))

climbers_df

> # A tibble: 76,471 x 13
>    member_id    peak_name  season  year sex     age citizenship
>    <fct>        <fct>      <fct>  <dbl> <fct> <dbl> <fct>
>  1 AMAD78301-01 Ama Dablam Autumn  1978 M        40 France
>  2 AMAD78301-02 Ama Dablam Autumn  1978 M        41 France
>  3 AMAD78301-03 Ama Dablam Autumn  1978 M        27 France
>  4 AMAD78301-04 Ama Dablam Autumn  1978 M        40 France
>  5 AMAD78301-05 Ama Dablam Autumn  1978 M        34 France
>  6 AMAD78301-06 Ama Dablam Autumn  1978 M        25 France
>  7 AMAD78301-07 Ama Dablam Autumn  1978 M        41 France
>  8 AMAD78301-08 Ama Dablam Autumn  1978 M        29 France
>  9 AMAD79101-03 Ama Dablam Spring  1979 M        35 USA
> 10 AMAD79101-04 Ama Dablam Spring  1979 M        37 W Germany
> # ... with 76,461 more rows, and 6 more variables:
> #   expedition_role <fct>, hired <fct>, solo <fct>,
> #   oxygen_used <fct>, success <fct>, died <fct>
Note: After the removal of missing values in the sex, citizenship, peak_name and expedition_role predictors, the data set shrinks from 76,519 to 76,471 observations.
rsample: Resampling Infrastructure

rsample provides methods for data partitioning (i.e. splitting the data into training and test set) and resampling (i.e. drawing repeated samples from the training set to obtain the sampling distributions).
Data Partitioning: First, let's divide our data into a training and test set via initial_split(). The resulting rsplit object indexes the original data points according to their data set membership.

set.seed(2021)
climbers_split <- initial_split(climbers_df, prop = 0.8, strata = died)
climbers_split

> <Analysis/Assess/Total>
> <61176/15295/76471>
rsample: Resampling Infrastructure

To extract the training and test data, we can use the training() and testing() functions.

train_set <- training(climbers_split)
train_set

> # A tibble: 61,176 x 13
>    member_id    peak_name  season  year sex     age citizenship
>    <fct>        <fct>      <fct>  <dbl> <fct> <dbl> <fct>
>  1 AMAD78301-01 Ama Dablam Autumn  1978 M        40 France
>  2 AMAD78301-02 Ama Dablam Autumn  1978 M        41 France
>  3 AMAD78301-04 Ama Dablam Autumn  1978 M        40 France
>  4 AMAD78301-06 Ama Dablam Autumn  1978 M        25 France
>  5 AMAD78301-08 Ama Dablam Autumn  1978 M        29 France
>  6 AMAD79101-03 Ama Dablam Spring  1979 M        35 USA
>  7 AMAD79101-04 Ama Dablam Spring  1979 M        37 W Germany
>  8 AMAD79101-05 Ama Dablam Spring  1979 M        23 USA
>  9 AMAD79101-01 Ama Dablam Spring  1979 M        44 USA
> 10 AMAD79101-06 Ama Dablam Spring  1979 M        25 USA
> # ... with 61,166 more rows, and 6 more variables:
> #   expedition_role <fct>, hired <fct>, solo <fct>,
> #   oxygen_used <fct>, success <fct>, died <fct>
test_set <- testing(climbers_split)
test_set

> # A tibble: 15,295 x 13
>    member_id    peak_name  season  year sex     age citizenship
>    <fct>        <fct>      <fct>  <dbl> <fct> <dbl> <fct>
>  1 AMAD78301-03 Ama Dablam Autumn  1978 M        27 France
>  2 AMAD78301-05 Ama Dablam Autumn  1978 M        34 France
>  3 AMAD78301-07 Ama Dablam Autumn  1978 M        41 France
>  4 AMAD79101-10 Ama Dablam Spring  1979 M        30 USA
>  5 AMAD79101-15 Ama Dablam Spring  1979 M        29 USA
>  6 AMAD79101-18 Ama Dablam Spring  1979 M        23 Nepal
>  7 AMAD79301-03 Ama Dablam Autumn  1979 F        33 France
>  8 AMAD79301-13 Ama Dablam Autumn  1979 M        31 France
>  9 AMAD79301-14 Ama Dablam Autumn  1979 M        28 France
> 10 AMAD79301-22 Ama Dablam Autumn  1979 M        31 France
> # ... with 15,285 more rows, and 6 more variables:
> #   expedition_role <fct>, hired <fct>, solo <fct>,
> #   oxygen_used <fct>, success <fct>, died <fct>
rsample: Resampling Infrastructure

Resampling: Training predictive models which involve hyperparameters requires a three-way data split:
Validation Split: Partition the initial train_set into a smaller training set as well as a validation set using validation_split().

Resampling: Use a resampling approach, such as cross-validation (CV) or the bootstrap, to create resamples from our initial training set.

A resample is the outcome of a resampling method, e.g., a fold resulting from k-fold cross-validation or a bootstrapped and an out-of-bag sample resulting from the bootstrap.
Data sets:
CV vs. train-test-split:
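To make the first option concrete, here is a minimal sketch (assuming the train_set object created above) that carves a single validation set out of the training data:

set.seed(2021)
climbers_val <- validation_split(train_set, prop = 0.8, strata = died)  # 80% analysis / 20% validation
climbers_val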
rsample: Resampling Infrastructure

Here we implement a 10-fold CV approach using the vfold_cv() function. It returns a tibble containing the indexes of 10 separate splits.

set.seed(2021)
climbers_folds <- train_set %>%
  vfold_cv(v = 10, repeats = 1, strata = died)

climbers_folds

> #  10-fold cross-validation using stratification
> # A tibble: 10 x 2
>    splits               id
>    <list>               <chr>
>  1 <split [55058/6118]> Fold01
>  2 <split [55058/6118]> Fold02
>  3 <split [55058/6118]> Fold03
>  4 <split [55058/6118]> Fold04
>  5 <split [55058/6118]> Fold05
>  6 <split [55058/6118]> Fold06
>  7 <split [55059/6117]> Fold07
>  8 <split [55059/6117]> Fold08
>  9 <split [55059/6117]> Fold09
> 10 <split [55059/6117]> Fold10
rsample: Resampling Infrastructure

To extract the training and validation data, we can again use training() and testing().

climbers_folds %>% purrr::pluck("splits", 1) %>% training()

> # A tibble: 55,058 x 13
>    member_id    peak_name  season  year sex     age citizenship
>    <fct>        <fct>      <fct>  <dbl> <fct> <dbl> <fct>
>  1 AMAD78301-01 Ama Dablam Autumn  1978 M        40 France
>  2 AMAD78301-02 Ama Dablam Autumn  1978 M        41 France
>  3 AMAD78301-04 Ama Dablam Autumn  1978 M        40 France
>  4 AMAD78301-06 Ama Dablam Autumn  1978 M        25 France
>  5 AMAD78301-08 Ama Dablam Autumn  1978 M        29 France
>  6 AMAD79101-04 Ama Dablam Spring  1979 M        37 W Germany
>  7 AMAD79101-05 Ama Dablam Spring  1979 M        23 USA
>  8 AMAD79101-01 Ama Dablam Spring  1979 M        44 USA
>  9 AMAD79101-06 Ama Dablam Spring  1979 M        25 USA
> 10 AMAD79101-08 Ama Dablam Spring  1979 M        32 USA
> # ... with 55,048 more rows, and 6 more variables:
> #   expedition_role <fct>, hired <fct>, solo <fct>,
> #   oxygen_used <fct>, success <fct>, died <fct>
climbers_folds %>% purrr::pluck("splits", 1) %>% testing()
> # A tibble: 6,118 x 13> member_id peak_name season year sex age citizenship> <fct> <fct> <fct> <dbl> <fct> <dbl> <fct> > 1 AMAD79101-03 Ama Dablam Spring 1979 M 35 USA > 2 AMAD79101-07 Ama Dablam Spring 1979 M 28 USA > 3 AMAD79301-09 Ama Dablam Autumn 1979 M 25 France > 4 AMAD79301-26 Ama Dablam Autumn 1979 M NA Nepal > 5 AMAD79301-24 Ama Dablam Autumn 1979 M NA Nepal > 6 AMAD79302-03 Ama Dablam Autumn 1979 M 27 New Zealand> 7 AMAD79303-05 Ama Dablam Autumn 1979 M 36 Austria > 8 AMAD80302-05 Ama Dablam Autumn 1980 M 24 Japan > 9 AMAD81102-01 Ama Dablam Spring 1981 M 21 Australia > 10 AMAD81301-05 Ama Dablam Autumn 1981 M 28 USA > # ... with 6,108 more rows, and 6 more variables:> # expedition_role <fct>, hired <fct>, solo <fct>,> # oxygen_used <fct>, success <fct>, died <fct>
In practice, you will rarely call training() and testing() on individual resamples yourself, as you let higher-level functions access the individual resamples during hyperparameter tuning.

rsample: Resampling Infrastructure

Alternative resampling approaches: In addition to k-fold CV, rsample enables various alternative resampling schemes for producing a more robust estimate of model performance.

For repeats > 1, vfold_cv() repeats the CV approach to reduce the standard error of the estimate at the cost of higher computational demand (k * R folds).
set.seed(2021)
train_set %>% vfold_cv(v = 10, repeats = 2, strata = died)
bootstraps() conducts sampling with replacement whereby model performance is estimated based on the "out-of-bag" observations.

set.seed(2021)
train_set %>% bootstraps(times = 25, strata = died)
mc_cv() lies somewhere in between k-fold CV and the bootstrap since it enables partly overlapping assessment sets by generating each resample anew.

set.seed(2021)
train_set %>% mc_cv(prop = 0.9, times = 25, strata = died)
For temporally correlated data, rsample provides a suitable partitioning and resampling infrastructure as well. For example, use initial_time_split() to conduct a non-random early-late split and rolling_origin() or the slide_*() methods to generate time-series resamples.
Source: Stack Exchange
Note: Find more information about the resampling approaches implemented in rsample in Kuhn/Silge (2021), ch. 10.
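A minimal sketch of the time-series workflow, assuming a hypothetical time-ordered data frame ts_df (all argument values are illustrative):

ts_split <- initial_time_split(ts_df, prop = 0.8)  # first 80% of rows as training data
ts_folds <- rolling_origin(
  training(ts_split),
  initial = 1000,     # size of the first analysis set
  assess = 100,       # size of each assessment set
  cumulative = TRUE   # let the analysis set grow over time
)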
recipes: Preprocessing Tools

"In statistics, a design matrix (also known as regressor matrix or model matrix) is a matrix of values of explanatory variables of a set of objects, often denoted by X. Each row represents an individual object, with the successive columns corresponding to the variables and their specific values for that object." ~ Wikipedia
Every model in R requires a design matrix as input. Intuitively, we can think of a design matrix as a tidy data frame (with one observation per row and one predictor per column) which can be directly processed by the model function.
Oftentimes, however, data frames or matrices that we apply to a model function do not come in the required format (e.g., they still contain categorical variables that must first be converted into numeric columns).

Note: Some functions internally convert a data frame to a numeric design matrix (e.g., lm() automatically creates dummy variables from unordered factors and polynomial contrasts from ordered factors).
Most R functions create the design matrix automatically from a given data frame according to the formula that is provided in the function call.
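To see what such a design matrix looks like, here is a small sketch using base R's model.matrix() (the formula is purely illustrative):

# the factor season is expanded into dummy columns, age is passed through as-is
model.matrix(~ season + age, data = head(climbers_df))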
recipes: Preprocessing Tools

The recipes package provides functions for defining a blueprint for data preprocessing (aka feature engineering). Each recipe is constructed by chaining multiple preprocessing steps.
First, create a recipe object from your data using the recipe() function and the two arguments:

- formula: A formula to declare variable roles, i.e. everything on the left-hand side (LHS) of the ~ is declared as outcome and everything on the right-hand side (RHS) as predictor.
- data: The data to which the feature engineering steps are later applied. The data set is only used to catalogue the variables and their respective types (which is why you generally provide the training set).

mod_recipe <- recipe(formula = died ~ ., data = train_set)
mod_recipe

> Recipe
>
> Inputs:
>
>       role #variables
>    outcome          1
>  predictor         12
recipes: Preprocessing Tools

Second, we add new preprocessing steps to the recipe (using the family of step_*() functions):

Use update_role() to assign a new custom role to a predictor. As member_id simply enumerates our observations, it is assigned the "id" role and hence not considered in any downstream modeling task.

mod_recipe <- mod_recipe %>% update_role(member_id, new_role = "id")
mod_recipe

> Recipe
>
> Inputs:
>
>       role #variables
>         id          1
>    outcome          1
>  predictor         11
Note: Change the role of a predictor to keep it in the data without it being used during model fitting. Usually, step_*() functions do not change the role of a predictor. However, each step_*() function contains a role argument to explicitly specify the role of a newly generated predictor.

Via the new_role argument, you can set any custom role name.

recipes: Preprocessing Tools

Second, we add new preprocessing steps to the recipe (using the family of step_*() functions):
Use step_impute_median() to impute NA values by the median predictor value. Since roughly 3,500 missing values are inherent to age, we use median imputation to retain those observations.

mod_recipe <- mod_recipe %>% step_impute_median(age)
mod_recipe

> Recipe
>
> Inputs:
>
>       role #variables
>         id          1
>    outcome          1
>  predictor         11
>
> Operations:
>
> Median Imputation for age
recipes: Preprocessing Tools

Second, we add new preprocessing steps to the recipe (using the family of step_*() functions):

Use step_normalize() to scale numerical data to zero mean and unit standard deviation (which is required for scale-sensitive classifiers).

mod_recipe <- mod_recipe %>% step_normalize(all_numeric_predictors())
mod_recipe

> Recipe
>
> Inputs:
>
>       role #variables
>         id          1
>    outcome          1
>  predictor         11
>
> Operations:
>
> Median Imputation for age
> Centering and scaling for all_numeric_predictors()
Note: Variables can be selected by referring either to their name, their data type, their role (as specified by the recipe), or by using the select() helpers from dplyr (e.g., contains(), starts_with()).
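As a quick sketch of these selection styles (each line is an alternative, illustrative specification):

recipe(died ~ ., data = train_set) %>% step_normalize(age, year)                 # by name
recipe(died ~ ., data = train_set) %>% step_normalize(all_numeric_predictors())  # by type and role
recipe(died ~ ., data = train_set) %>% step_normalize(contains("age"))           # by dplyr helper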
recipes: Preprocessing Tools

Second, we add new preprocessing steps to the recipe (using the family of step_*() functions):

Use step_other() to lump together rarely occurring factor levels. peak_name, citizenship and expedition_role all have several hundred factor levels and hence a high risk of being near-zero variance predictors. All factor levels with a relative frequency below 5% are pooled into "other".

mod_recipe <- mod_recipe %>% step_other(peak_name, citizenship, expedition_role, threshold = 0.05)
mod_recipe

> Recipe
>
> Inputs:
>
>       role #variables
>         id          1
>    outcome          1
>  predictor         11
>
> Operations:
>
> Median Imputation for age
> Centering and scaling for all_numeric_predictors()
> Collapsing factor levels for peak_name, citizenship, expedit...

Note: You should always take care of the order of your steps. For example, you should first lump together factor levels and then create dummies.
recipes: Preprocessing Tools

Second, we add new preprocessing steps to the recipe (using the family of step_*() functions):

Use step_dummy() to dummy-encode categorical predictors.

mod_recipe <- mod_recipe %>% step_dummy(all_predictors(), -all_numeric(), one_hot = F)
mod_recipe

> Recipe
>
> Inputs:
>
>       role #variables
>         id          1
>    outcome          1
>  predictor         11
>
> Operations:
>
> Median Imputation for age
> Centering and scaling for all_numeric_predictors()
> Collapsing factor levels for peak_name, citizenship, expedit...
> Dummy variables from all_predictors(), -all_numeric()

Note: Use one_hot = T in case you want to retain all C factor levels instead of just C - 1.

The same ordering logic applies to the normalize step, which should follow the median-impute step.
recipes: Preprocessing Tools

Second, we add new preprocessing steps to the recipe (using the family of step_*() functions):

Use step_upsample() from the themis package to tackle class imbalance. In particular, we will oversample data points from the minority class (died == 1) until it amounts to 20% of the majority class (i.e. a class distribution of 1:5).

mod_recipe <- mod_recipe %>% themis::step_upsample(died, over_ratio = 0.2, seed = 2021, skip = T)
mod_recipe

> Recipe
>
> Inputs:
>
>       role #variables
>         id          1
>    outcome          1
>  predictor         11
>
> Operations:
>
> Median Imputation for age
> Centering and scaling for all_numeric_predictors()
> Collapsing factor levels for peak_name, citizenship, expedit...
> Dummy variables from all_predictors(), -all_numeric()
> Up-sampling based on died
Note: Each step_*() function contains a skip argument which is usually FALSE by default. Yet, for certain preprocessing steps (e.g., under- or oversampling) we set it to TRUE in order to not apply the step to the test set and hence retain the test set's original properties.
Up to this point, you have not performed any actual preprocessing or transformation of your data - you have only sketched a blueprint of what R is supposed to do with your data.

The difference between instantly executing a command and declaring it, in case it is prospectively needed, relates to two important programming paradigms:

- Eager evaluation: expressions are evaluated the moment they are encountered (what you have seen in R so far).
- Lazy evaluation: evaluation of an expression is delayed until its value is actually needed (the declarative approach followed by tidymodels).

recipes: Preprocessing Tools

Third, prep() fits the recipe to the data set specified in your initial recipe() call in order to estimate the unknown quantities required for preprocessing (e.g., medians or pooled factor levels).

mod_recipe_prepped <- prep(mod_recipe, retain = T)
mod_recipe_prepped

> Recipe
>
> Inputs:
>
>       role #variables
>         id          1
>    outcome          1
>  predictor         11
>
> Training data contained 61176 data points and 2767 incomplete rows.
>
> Operations:
>
> Median Imputation for age [trained]
> Centering and scaling for year, age [trained]
> Collapsing factor levels for peak_name, citizenship, expedit... [trained]
> Dummy variables from peak_name, season, sex, citizenship, expe... [trained]
> Up-sampling based on died [trained]
Fitted steps are flagged as [trained]; the output also shows the results of the selectors.

By applying prep() to the final recipe, we fit the recipe only to the training set (as specified in the recipe() function above). Thus, we prevent the issue of data leakage!
The data leakage issue:
Source: deeplearning.ai.
Note: This blog post and podcast episode are also great resources for getting an intuitive understanding of data leakage.
Data leakage examples:
Rule-of-thumb: Do not train your model on information which was never actually available at prediction time (i.e. test set observations).
Practical example from fastbook ch. 9:
A real-life business intelligence project at IBM where potential customers for certain products were identified, among other things, based on keywords found on their websites. This turned out to be leakage since the website content used for training had been sampled at the point in time where the potential customer has already become a customer, and where the website contained traces of the IBM products purchased, such as the word 'Websphere' (e.g., in a press release about the purchase or a specific product feature the client uses). -> often induced by data collection, aggregation and preparation procedures
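A minimal sketch of the leakage pitfall in our setting (the age_std variable is purely illustrative):

# DON'T: estimating the mean/sd on the full data leaks test set information
climbers_df %>%
  mutate(age_std = (age - mean(age, na.rm = TRUE)) / sd(age, na.rm = TRUE))

# DO: estimate all preprocessing statistics on the training set only,
# then apply them unchanged to the test set
prep(mod_recipe, training = train_set) %>%
  bake(new_data = test_set)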
recipes: Preprocessing Tools

Fourth, we can finally apply the fitted recipe to our data and perform the feature engineering steps.

bake(mod_recipe_prepped, new_data = NULL)

> # A tibble: 72,369 x 24
>    member_id      year    age died  peak_name_Cho.Oyu
>    <fct>         <dbl>  <dbl> <fct>             <dbl>
>  1 NILS15301-01  0.989 -0.616 FALSE                 0
>  2 LHOT02101-16  0.110 -1.40  FALSE                 0
>  3 MAKA12105-02  0.786 -1.70  FALSE                 0
>  4 KAG162101-01 -2.60   1.95  FALSE                 0
>  5 EVER17157-14  1.12  -1.60  FALSE                 0
>  6 AMAD08343-01  0.516  0.567 FALSE                 0
>  7 SAIP92101-05 -0.567 -0.715 FALSE                 0
>  8 EVER18122-03  1.19   0.370 FALSE                 0
>  9 LHOT08106-03  0.516 -0.813 FALSE                 0
> 10 RATH64301-15 -2.46  -0.813 FALSE                 0
> # ... with 72,359 more rows, and 19 more variables:
> #   peak_name_Everest <dbl>, peak_name_Manaslu <dbl>,
> #   peak_name_other <dbl>, season_Spring <dbl>,
> #   season_Summer <dbl>, season_Winter <dbl>, sex_M <dbl>,
> #   citizenship_Japan <dbl>, citizenship_Nepal <dbl>,
> #   citizenship_UK <dbl>, citizenship_USA <dbl>,
> #   citizenship_other <dbl>, ...
Note: Set new_data = NULL to apply the recipe to the data set provided to recipe(), i.e. the training set. Set new_data = test_set instead if it should be applied to the test set.
Note:
- Check how the pooling into the other category worked.
- new_data = test_set does not re-estimate the quantities (e.g., mean, median) since they are drawn from the recipe that was estimated on the training data.

recipes: Preprocessing Tools

Benefits of using recipes:
- Preprocessing steps are no longer tied to a single model formula (y ~ x) and can be recycled across model candidates.
- The whole preprocessing pipeline lives in one object instead of being scattered across your R script.

recipes: Preprocessing Tools

Altogether, the recipes package offers a variety of built-in preprocessing steps:
> [1] "step_arrange" "step_bagimpute" "step_bin2factor" "step_BoxCox" > [5] "step_bs" "step_center" "step_classdist" "step_corr" > [9] "step_count" "step_cut" "step_date" "step_depth" > [13] "step_discretize" "step_downsample" "step_dummy" "step_dummy_multi_choice"> [17] "step_factor2string" "step_filter" "step_geodist" "step_harmonic" > [21] "step_holiday" "step_hyperbolic" "step_ica" "step_impute_bag" > [25] "step_impute_knn" "step_impute_linear" "step_impute_lower" "step_impute_mean" > [29] "step_impute_median" "step_impute_mode" "step_impute_roll" "step_indicate_na" > [33] "step_integer" "step_interact" "step_intercept" "step_inverse" > [37] "step_invlogit" "step_isomap" "step_knnimpute" "step_kpca"
In addition, you may also include checks in your pipeline to test for a specific condition of your variables:
> [1] "check_class" "check_cols" "check_missing" > [4] "check_name" "check_new_values" "check_range" > [7] "check_type"
Note: Learn more about the capabilities of recipes in Kuhn/Silge (2021), ch. 8, alongside recommended preprocessing operations for each model type.
- step_date: converts a date into factor variables, e.g., day of the week or month
- step_holiday: creates a dummy for a national holiday
- step_corr: removes variables that have large absolute correlations with other variables
- step_normalize: applies the z-transformation to predictors
- step_mutate: to engineer new variables (analogous to dplyr)
- step_interact: to create interaction effects
- step_log: create the log of a given variable (either outcome or predictor)
- step_indicate_na: create an own predictor for missing values

-> Basically, all transformation steps you would do using dplyr before modeling, respectively all steps that the model formula would enforce, can be embedded as a recipe step within your modeling workflow.
parsnip: A Unified Modeling API

Different models, different packages

The R ecosystem offers a plethora of different packages for implementing machine learning models: stats::lm, stats::glm, MASS::lda, class::knn, glmnet::glmnet, rpart::rpart, randomForest::randomForest, gbm::gbm, e1071::svm, etc.

It is very likely that you will struggle with the varying naming conventions, function interfaces and syntactical intricacies of each package.
Same models, different packages

The same issue persists if you try to implement one and the same model using alternative packages (e.g., predict() functions with slightly differing naming conventions).

parsnip: A Unified Modeling API

parsnip provides a unified interface and syntax to modeling which facilitates your overall modeling workflow. The goals of parsnip are twofold:
1. Harmonize argument names across packages (e.g., ntree, num.trees and num_trees all become trees, and k becomes neighbors).
2. Provide intuitive, descriptive parameter names (e.g., neighbors instead of k, penalty instead of lambda).

parsnip: A Unified Modeling API

A parsnip model specification consists of three individual components:

- Type: the type of model (e.g., logistic_reg(), rand_forest()).
- Mode: the mode of prediction, i.e. regression or classification.
- Engine: the computational engine in R, which usually corresponds to a certain modeling function (lm, glm), package (e.g., rpart, glmnet, randomForest) or computing framework (e.g., Stan, sparklyr).

Note: Check all models and engines supported by parsnip on the tidymodels website or using the RStudio Addin.
parsnip: A Unified Modeling API

Logistic classifier:

log_cls <- logistic_reg() %>%
  set_engine("glm") %>%
  set_mode("classification")

# equivalent: logistic_reg(mode = "classification", engine = "glm")

log_cls

> Logistic Regression Model Specification (classification)
>
> Computational engine: glm
parsnip: A Unified Modeling API

Regularized logistic classifier:

lasso_cls <- logistic_reg() %>%
  set_args(penalty = 0.1, mixture = 1) %>%
  set_mode("classification") %>%
  set_engine("glmnet", family = "binomial")

lasso_cls

> Logistic Regression Model Specification (classification)
>
> Main Arguments:
>   penalty = 0.1
>   mixture = 1
>
> Engine-Specific Arguments:
>   family = binomial
>
> Computational engine: glmnet

Note: parsnip distinguishes between model arguments and engine arguments. The former reflect hyperparameters that are frequently used across various model packages (i.e. engines) whereas the latter are usually engine-specific. Model arguments are harmonized across modeling packages whereas engine arguments are not.
parsnip: A Unified Modeling API

Decision tree classifier:

dt_cls <- decision_tree() %>%
  set_args(cost_complexity = 0.01, tree_depth = 30, min_n = 20) %>%
  set_mode("classification") %>%
  set_engine("rpart")

dt_cls

> Decision Tree Model Specification (classification)
>
> Main Arguments:
>   cost_complexity = 0.01
>   tree_depth = 30
>   min_n = 20
>
> Computational engine: rpart

Note: If not explicitly specified, parsnip adopts the model's default parameters (i.e. function arguments) defined by the underlying engine (here rpart).
parsnip: A Unified Modeling API

Tree bagging classifier:

rand_forest() %>%
  set_args(trees = 1000, mtry = .cols()) %>%
  set_mode("classification") %>%
  set_engine("randomForest")

> Random Forest Model Specification (classification)
>
> Main Arguments:
>   mtry = .cols()
>   trees = 1000
>
> Computational engine: randomForest

Note: Use data set characteristics as placeholder arguments which reflect the number of predictors in your data set. .preds() and .cols() capture the number of predictors in your data prior to and subsequent to preprocessing (e.g., one-hot encoding), respectively.
parsnip: A Unified Modeling API

Random forest classifier:

rand_forest() %>%
  set_args(trees = 1000, mtry = floor(sqrt(.cols()))) %>%
  set_mode("classification") %>%
  set_engine("randomForest")

> Random Forest Model Specification (classification)
>
> Main Arguments:
>   mtry = floor(sqrt(.cols()))
>   trees = 1000
>
> Computational engine: randomForest

Note: Generally, the square root of the number of available predictors is a good starting point for mtry. From there on, you could double or halve the number of predictors sampled at each split.
parsnip: A Unified Modeling API

k-nearest-neighbor classifier:

nearest_neighbor() %>%
  set_args(neighbors = 5, dist_power = 2) %>%
  set_mode("classification") %>%
  set_engine("kknn")

> K-Nearest Neighbor Model Specification (classification)
>
> Main Arguments:
>   neighbors = 5
>   dist_power = 2
>
> Computational engine: kknn
parsnip: A Unified Modeling API

SVM classifier:

svm_rbf() %>%
  set_args(cost = tune(), rbf_sigma = tune()) %>%
  set_mode("classification") %>%
  set_engine("kernlab")

> Radial Basis Function Support Vector Machine Specification (classification)
>
> Main Arguments:
>   cost = tune()
>   rbf_sigma = tune()
>
> Computational engine: kernlab

Note: Use the tune() placeholder as a model argument when the parameter is supposed to be specified later on in the workflow (e.g., during hyperparameter tuning).
parsnip: A Unified Modeling API

Finally, it is time to train our specified model! Since some modeling functions require a formula (e.g., lm()) as input and others a vector, a matrix (e.g., glmnet()) or a data frame, parsnip offers two modes for fitting.

dt_cls_fit <- dt_cls %>%
  fit(formula = died ~ ., data = train_set)
dt_cls_fit

dt_cls_fit <- dt_cls %>%
  fit_xy(x = train_set %>% select(-died), y = train_set$died)
dt_cls_fit

dt_cls_fit$spec %>% translate()

> Decision Tree Model Specification (classification)
>
> Main Arguments:
>   cost_complexity = 0.01
>   tree_depth = 30
>   min_n = 20
>
> Computational engine: rpart
>
> Model fit template:
> rpart::rpart(formula = missing_arg(), data = missing_arg(), weights = missing_arg(),
>     cp = 0.01, maxdepth = 30, minsplit = min_rows(20, data))

⚠️ Notice that we did not apply any of our predefined preprocessing steps yet! ⚠️

Note: Only the formula notation automatically creates dummies whereas fit_xy() takes the data as-is.

Use translate() to investigate how parsnip translates the specification into the underlying computational engine.
: A Unified Modeling APIAfter fitting the model, we can eventually predict the response in the test data.
dt_cls_fit %>% predict(new_data = test_set, type = "prob") %>% glimpse
> Rows: 15,295> Columns: 2> $ .pred_FALSE <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ~> $ .pred_TRUE <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ~
tidymodels prediction rules:

- Predictions are returned as a tibble (no need to extract predictions from an object).
- Column names are predictable (.pred, .pred_class, .pred_lower/.pred_upper, etc., depending on the prediction type).
- The number of predicted rows equals the number of rows in new_data (and is in the same order).

parsnip: A Unified Modeling API

Thanks to these rules, we can directly combine the predictions with the test_set.
test_set %>%
  dplyr::bind_cols(predict(dt_cls_fit, new_data = ., type = "prob"))

> # A tibble: 15,295 x 15
>    member_id    peak_name  season  year sex     age citizenship
>    <fct>        <fct>      <fct>  <dbl> <fct> <dbl> <fct>
>  1 AMAD78301-03 Ama Dablam Autumn  1978 M        27 France
>  2 AMAD78301-05 Ama Dablam Autumn  1978 M        34 France
>  3 AMAD78301-07 Ama Dablam Autumn  1978 M        41 France
>  4 AMAD79101-10 Ama Dablam Spring  1979 M        30 USA
>  5 AMAD79101-15 Ama Dablam Spring  1979 M        29 USA
>  6 AMAD79101-18 Ama Dablam Spring  1979 M        23 Nepal
>  7 AMAD79301-03 Ama Dablam Autumn  1979 F        33 France
>  8 AMAD79301-13 Ama Dablam Autumn  1979 M        31 France
>  9 AMAD79301-14 Ama Dablam Autumn  1979 M        28 France
> 10 AMAD79301-22 Ama Dablam Autumn  1979 M        31 France
> # ... with 15,285 more rows, and 8 more variables:
> #   expedition_role <fct>, hired <fct>, solo <fct>,
> #   oxygen_used <fct>, success <fct>, died <fct>,
> #   .pred_FALSE <dbl>, .pred_TRUE <dbl>
workflows: Modeling Workflows

workflows introduces a workflow object which bundles the preprocessing recipe and model specification to reduce code clutter. At the same time, it acts as a single entry point to your modeling pipeline.

cls_wf <- workflow(preprocessor = mod_recipe, spec = log_cls)
cls_wf

> == Workflow ====================================================
> Preprocessor: Recipe
> Model: logistic_reg()
>
> -- Preprocessor ------------------------------------------------
> 5 Recipe Steps
>
> * step_impute_median()
> * step_normalize()
> * step_other()
> * step_dummy()
> * step_upsample()
>
> -- Model -------------------------------------------------------
> Logistic Regression Model Specification (classification)
>
> Computational engine: glm
Note: You may even bundle (and later explore) various different combinations of preprocessing recipes and model specifications using the workflowsets package.

Instead of a recipe, add_formula() could be used (but you can only ever use one of the two).

In the future, workflows should also be able to encapsulate post-processing steps (e.g., modifying the probability cutoff for binary classification, calibrating probabilities, or determining equivocal zones).
: Modeling WorkflowsWhen calling fit()
on a workflow
object, tidymodels
performs the following steps for us:
recipe
object to the training set and produces the in-sample estimates (prep()
).bake()
).fit()
/fit_xy()
).cls_wf_fit <- cls_wf %>% fit(train_set)cls_wf_fit
> Output on next slide
Note that we no longer need to call prep() and bake() ourselves.

workflows: Modeling Workflows

> == Workflow [trained] =========================================================================
> Preprocessor: Recipe
> Model: logistic_reg()
>
> -- Preprocessor -------------------------------------------------------------------------------
> 5 Recipe Steps
> * step_medianimpute()  * step_dummy()
> * step_normalize()     * step_smote()
> * step_other()
>
> -- Model --------------------------------------------------------------------------------------
> Call: stats::glm(formula = ..y ~ ., family = stats::binomial, data = data)
>
> Coefficients:
>       (Intercept)               year                age
>          -2.76543           -0.44060            0.05288
> peak_name_Cho.Oyu  peak_name_Everest  peak_name_Manaslu
>           0.09953            1.19280            1.41512
>   peak_name_other      season_Spring                ...
>           1.31090            0.02033                ...
>
> Degrees of Freedom: 96469 Total (i.e. Null); 96447 Residual
> Null Deviance: 127600
> Residual Deviance: 110100  AIC: 110200

(Output abbreviated.)
workflows: Modeling Workflows

Again, after having fitted the workflow, we can proceed to predicting the response in the test data. When calling predict() on a workflow object, tidymodels performs the following steps for us:

1. It applies the fitted recipe to preprocess the test set (bake()).
2. It applies the trained model to the preprocessed test set to generate predictions (predict()).

cls_wf_fit %>% predict(new_data = test_set, type = "prob") %>% glimpse

> Rows: 15,295
> Columns: 2
> $ .pred_FALSE <dbl> 0.8887624, 0.8877419, 0.8867132, 0.9784852~
> $ .pred_TRUE  <dbl> 0.11123760, 0.11225812, 0.11328682, 0.0215~
Note: Call extract_fit_engine() or extract_recipe() to extract the fitted model or the estimated recipe object from the workflow.
Step 1: Split data into training and test set using rsample.

set.seed(2021)
climbers_split <- initial_split(climbers_df, prop = 0.8, strata = died)
train_set <- training(climbers_split)
test_set <- testing(climbers_split)
Step 2: Define the relevant preprocessing steps using recipes.

rec <- recipe(formula = died ~ ., data = train_set) %>%
  update_role(member_id, new_role = "id") %>%
  step_impute_median(age) %>%
  step_normalize(all_numeric_predictors()) %>%
  step_other(peak_name, citizenship, expedition_role, threshold = 0.05) %>%
  step_dummy(all_predictors(), -all_numeric(), one_hot = F) %>%
  themis::step_upsample(died, over_ratio = 0.2, seed = 2021, skip = T)
Step 3: Specify the desired machine learning model using parsnip.

log_cls <- logistic_reg() %>%
  set_engine("glm") %>%
  set_mode("classification")
Step 4: Bring everything together using workflows.

cls_wf <- workflow() %>%
  add_recipe(rec) %>%
  add_model(log_cls)
Step 5: Train the workflow (i.e. recipe plus model) and use it for prediction.

cls_wf_fit <- cls_wf %>% fit(train_set)

cls_wf_fit %>%
  predict(new_data = test_set, type = "prob") %>%
  glimpse

> Rows: 15,295
> Columns: 2
> $ .pred_FALSE <dbl> 0.8887624, 0.8877419, 0.8867132, 0.9784852~
> $ .pred_TRUE  <dbl> 0.11123760, 0.11225812, 0.11328682, 0.0215~
dials: Creating Hyperparameter Values

Most machine learning models require the user to predefine so-called hyperparameters (or tuning parameters) prior to model fitting. For example:

- Regularized regression: penalty, mixture
- Naive Bayes: Laplace
- k-nearest neighbors: neighbors, weight_func, dist_power
- Decision trees: cost_complexity, tree_depth, min_n
- Support vector machines: kernel, cost, degree, scale_factor
- Bagged trees: trees, min_n
- Random forests: trees, mtry, min_n
- Boosted trees: trees, mtry, min_n, tree_depth, learn_rate
Note: This list is not exhaustive! Depending on the underlying engine, an even broader set of hyperparameters can be specified. Use args() to inspect all hyperparameters (i.e. function arguments) available in a parsnip object.
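For example, a one-liner to inspect the tunable arguments of a parsnip model function:

args(rand_forest)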
dials: Creating Hyperparameter Values

dials streamlines the handling of hyperparameters. It provides functions for specifying parameter sequences and grids, introduces parameters objects that can be processed by the parsnip package, and ensures consistent parameter names.
In the context of a regularized regression, penalty and mixture are the two central hyperparameters. dials comes with a predefined parameters object for both.

mixture()

> Proportion of Lasso Penalty (quantitative)
> Range: [0, 1]

penalty()

> Amount of Regularization (quantitative)
> Transformer: log-10
> Range (transformed scale): [-10, 0]

penalty(range = c(-10, 10))

> Amount of Regularization (quantitative)
> Transformer: log-10
> Range (transformed scale): [-10, 10]
In practice, you will find that hyperparameters are often defined on the log instead of the linear scale, as for example seen on the previous slide:

- mixture(): 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0
- penalty(): 1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1e0

Considerations for using the log scale:
If you have identified a promising parameter subspace, you may eventually narrow it down by further restricting the search space of your hyperparameter grid.
dials: Creating Hyperparameter Values

There are various helper functions to query and specify the parameters objects.

penalty() %>% range_get()

> $lower
> [1] 1e-10
>
> $upper
> [1] 1

penalty(range = c(-10, 10)) %>% range_get()

> $lower
> [1] 1e-10
>
> $upper
> [1] 1e+10

penalty() %>% value_sample(n = 5)

> [1] 2.133703e-02 1.394881e-08 1.609020e-06 1.069317e-09
> [5] 8.170827e-01

penalty() %>% value_seq(n = 5, original = F)

> [1] -10.0  -7.5  -5.0  -2.5   0.0

penalty() %>% value_set(seq(-10, 0, by = 2)) %>% value_seq(n = 5, original = F)

> [1] -10  -8  -6  -4  -2

Note: The same helper functions can be applied to qualitative hyperparameters, such as weight_func() in nearest_neighbor().
dials: Creating Hyperparameter Values

There are special cases where the concrete hyperparameter values depend on your data set, e.g., the mtry argument (number of randomly sampled predictors at each split) in parsnip::rand_forest().

mtry()

> # Randomly Selected Predictors (quantitative)
> Range: [1, ?]

Therefore, we must finalize() the hyperparameter setup based on the training set.

finalize(mtry(), x = train_set %>% select(-died))

> # Randomly Selected Predictors (quantitative)
> Range: [1, 12]
dials: Creating Hyperparameter Values

Finally, dials renders the systematic querying and evaluation of multiple hyperparameters possible. There are various alternative search algorithms for finding the optimal hyperparameter combination.

Grid search: Identify the optimal hyperparameter combination from a predefined set of parameter values.

grid_regular(
  mixture(),
  penalty(),
  levels = c(5, 5)
) %>% glimpse

> Rows: 25
> Columns: 2
> $ mixture <dbl> 0.00, 0.25, 0.50, 0.75, 1.00, 0.00, 0.25, 0.50~
> $ penalty <dbl> 1.000000e-10, 1.000000e-10, 1.000000e-10, 1.00~

Source: Bergstra/Bengio (2012)

Random search: Identify the optimal hyperparameter combination by sampling from a predefined range of parameter values.

grid_random(
  mixture(),
  penalty(),
  size = 25
) %>% glimpse

> Rows: 25
> Columns: 2
> $ mixture <dbl> 0.92267752, 0.69740576, 0.67183790, 0.38385219~
> $ penalty <dbl> 9.274587e-08, 5.580510e-05, 2.833908e-09, 5.78~

Source: Bergstra/Bengio (2012)

Adaptive search: Identify the optimal hyperparameter candidates by adapting the search procedure based on already evaluated values.
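One instance of adaptive search is Bayesian optimization via tune_bayes() from the tune package. A minimal sketch, assuming a workflow whose model contains tune() placeholders (like the regularized logistic workflow built below) and the climbers_folds resamples:

cls_wf_bayes <- tune_bayes(
  cls_wf,
  resamples = climbers_folds,
  initial = 5,                   # initial results used to seed the Gaussian process
  iter = 10,                     # number of adaptive search iterations
  metrics = metric_set(roc_auc)
)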
tune: Tidy Tuning Tools

The tune package unites the previous steps in the context of hyperparameter tuning, with the tune_grid() function being the primary modeling workhorse.

tune_grid(
  object,
  preprocessor,
  resamples,
  grid = 10,
  metrics = NULL,
  control = control_grid()
)

- object: either a workflow or a model object
- preprocessor: an additional preprocessing recipe or formula expression (only required in case a model object is provided)
- resamples: a resamples object (e.g., our climbers_folds)
- grid: the number of candidate hyperparameter combinations to be tried (defaults to 10 draws from a Latin hypercube) respectively a predefined parameter grid
- metrics: a set of performance metrics (defaults to RMSE and R2 for regression and AUC and accuracy for classification tasks) computed for each resample (customize via yardstick::metric_set())
- control: additional options to control the tuning process (e.g., save_pred = T to retain the predictions for each fold or verbose = T to print the log)

Note: Retaining the predictions for each fold can impose a heavy burden on your machine's memory, which may become unwieldy if your data set and/or the number of resamples is large.
Step 1: Split data into training and test set using rsample.

As in the previous example.

Step 2: Create resamples of the training set for hyperparameter tuning using rsample.

set.seed(2021)
climbers_folds <- training(climbers_split) %>%
  vfold_cv(v = 10, repeats = 1, strata = died)

Step 3: Define the relevant preprocessing steps using recipes.

As in the previous example.
Step 4: Specify the desired machine learning model using parsnip. Indicate which hyperparameters are to be optimized by using the tune() placeholder.

reg_log_cls <- logistic_reg() %>%
  set_args(penalty = tune(), mixture = tune()) %>%
  set_mode("classification") %>%
  set_engine("glmnet", family = "binomial")

Note: Using the tune() placeholder, we could even tune hyperparameters that are part of our modeling recipes (e.g., over_ratio in themis::step_upsample() or threshold in step_other()).
Step 5: Bring everything together using workflows.

cls_wf <- workflow() %>%
  add_recipe(rec) %>%
  add_model(reg_log_cls)

Step 6: Create a grid of hyperparameter candidates for performing a grid search.

param_grid <- grid_regular(
  penalty(),
  mixture(),
  levels = c(10, 10)
)
param_grid %>% glimpse

> Rows: 100
> Columns: 2
> $ penalty <dbl> 1.000000e-10, 1.291550e-09, 1.668101e-08, 2.15~
> $ mixture <dbl> 0.0000000, 0.0000000, 0.0000000, 0.0000000, 0.~
Step 7: Perform hyperparameter tuning using tune.

tune_grid() iterates over all 10 folds included in climbers_folds and evaluates all 100 candidate pairs for mixture() and penalty() in param_grid, resulting in 1,000 model fits.

start <- Sys.time()
cls_wf_fit <- tune_grid(
  cls_wf,
  climbers_folds,
  grid = param_grid,
  metrics = metric_set(roc_auc, accuracy, sens, spec),
  control = control_grid(save_pred = T, verbose = T)
)
Sys.time() - start

> Time difference of 8.985842 mins

Note: If we are not concerned with hyperparameter tuning per se, but simply want to train a model without hyperparameters and obtain an unbiased performance estimate, we can refer to fit_resamples(), which works almost identically to tune_grid() (except for the grid argument).
Problem: Depending on your hardware, your data set size, your model and the amount of hyperparameters to be optimized, the search process can take several minutes, hours or even days.

Solution: tune is equipped with distributed computing capabilities (which stem from an integration of the foreach package). The tuning process allows for models to be trained independently of each other along multiple dimensions:
Step 7a: Check the number of available CPU cores.

all_cores <- parallel::detectCores(logical = F)
all_cores

> [1] 6

Step 7b: Create a cluster of workers, i.e. R sessions running in parallel. In the background, tune divides your data (e.g., resamples) and distributes it across the available clusters.

comp_cluster <- parallel::makeCluster(all_cores - 2)
comp_cluster

> Socket cluster with 4 nodes on host 'localhost'

Tip: It is generally a good idea not to use all available cores.

Step 7c: Register a backend for parallel computing (here the doParallel package). The backend handles the parallelization (e.g., load balancing ensures that cores are not "underemployed").

doParallel::registerDoParallel(comp_cluster)
Note: There are various backend packages whose names start with do; they vary in the way they enable parallel processing.

Step 7d: Perform hyperparameter tuning in parallel using tune.
start <- Sys.time()
cls_wf_fit <- tune_grid(
  cls_wf,
  climbers_folds,
  grid = param_grid,
  metrics = metric_set(roc_auc, accuracy, sens, spec),
  control = control_grid(
    save_pred = T,
    verbose = T,
    allow_par = T,
    parallel_over = "resamples",
    pkgs = c('themis')
  )
)
Sys.time() - start

> Time difference of 3.706945 mins
Note: By default, tidymodels copies only its core packages to all concurrently running R sessions. If you leverage additional packages (e.g., themis) as part of your modeling pipeline, they must be provided in the tuning controls.
tune: Tidy Tuning Tools

tune_grid() updates your initial climbers_folds object by adding additional columns (.metrics and .notes, .predictions and others, depending on your controls).

cls_wf_fit

> # Tuning results
> # 10-fold cross-validation using stratification
> # A tibble: 10 x 5
>    splits               id     .metrics   .notes   .predictions
>    <list>               <chr>  <list>     <list>   <list>
>  1 <split [55059/6118]> Fold01 <tibble [~ <tibble~ <tibble [611~
>  2 <split [55059/6118]> Fold02 <tibble [~ <tibble~ <tibble [611~
>  3 <split [55059/6118]> Fold03 <tibble [~ <tibble~ <tibble [611~
>  4 <split [55059/6118]> Fold04 <tibble [~ <tibble~ <tibble [611~
>  5 <split [55059/6118]> Fold05 <tibble [~ <tibble~ <tibble [611~
>  6 <split [55059/6118]> Fold06 <tibble [~ <tibble~ <tibble [611~
>  7 <split [55059/6118]> Fold07 <tibble [~ <tibble~ <tibble [611~
>  8 <split [55060/6117]> Fold08 <tibble [~ <tibble~ <tibble [611~
>  9 <split [55060/6117]> Fold09 <tibble [~ <tibble~ <tibble [611~
> 10 <split [55060/6117]> Fold10 <tibble [~ <tibble~ <tibble [611~
Now, there are several neat things we can do with our fitted climbers_folds data frame. Let's have a look at some convenience functions provided by the tune package.

Note: The columns tidymodels creates have the "." prefix in order to not override initial columns.

tune: Tidy Tuning Tools

Extract performance metrics summarized across all resamples. Use summarize = F to obtain the unaggregated metrics for each resample.

cls_wf_fit %>% collect_metrics(summarize = T)
> # A tibble: 400 x 8
>          penalty mixture .metric  .estimator  mean     n std_err
>            <dbl>   <dbl> <chr>    <chr>      <dbl> <int>   <dbl>
>  1 0.0000000001        0 accuracy binary     0.847    10 0.00135
>  2 0.0000000001        0 roc_auc  binary     0.713    10 0.00811
>  3 0.0000000001        0 sens     binary     0.853    10 0.00146
>  4 0.0000000001        0 spec     binary     0.428    10 0.0160
>  5 0.00000000129       0 accuracy binary     0.847    10 0.00135
>  6 0.00000000129       0 roc_auc  binary     0.713    10 0.00811
>  7 0.00000000129       0 sens     binary     0.853    10 0.00146
>  8 0.00000000129       0 spec     binary     0.428    10 0.0160
>  9 0.0000000167        0 accuracy binary     0.847    10 0.00135
> 10 0.0000000167        0 roc_auc  binary     0.713    10 0.00811
> # ... with 390 more rows, and 1 more variable: .config <chr>
Filter for the n best performing candidate pairs.

cls_wf_fit %>% show_best(metric = "roc_auc", n = 3)

> # A tibble: 3 x 8
>    penalty mixture .metric .estimator  mean     n std_err .config
>      <dbl>   <dbl> <chr>   <chr>      <dbl> <int>   <dbl> <chr>
> 1 0.00599    0.111 roc_auc binary     0.714    10 0.00825 Prepro~
> 2 0.000464   1     roc_auc binary     0.714    10 0.00862 Prepro~
> 3 0.000464   0.889 roc_auc binary     0.714    10 0.00863 Prepro~
Extract the overall best performing candidate pair. Use select_by_one_std_err(metric = "roc_auc") to obtain the best candidate pair which still satisfies the 1-SE rule.

cls_wf_fit %>% select_best(metric = "roc_auc")

> # A tibble: 1 x 3
>   penalty mixture .config
>     <dbl>   <dbl> <chr>
> 1 0.00599   0.111 Preprocessor1_Model018
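For completeness, a sketch of the 1-SE alternative (sorting by descending penalty so that the most regularized, i.e. simplest, model within one standard error of the best is preferred):

cls_wf_fit %>% select_by_one_std_err(desc(penalty), metric = "roc_auc")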
Extract the validation set predictions for each fold (only applicable if save_pred = T in the controls).

cls_wf_fit %>%
  collect_predictions(
    summarize = F,
    parameters = select_best(cls_wf_fit, metric = "roc_auc")
  )

> # A tibble: 61,177 x 9
>    id     .pred_FALSE .pred_TRUE  .row penalty mixture
>    <chr>        <dbl>      <dbl> <int>   <dbl>   <dbl>
>  1 Fold01       0.920     0.0795    16 0.00599   0.111
>  2 Fold01       0.922     0.0778    18 0.00599   0.111
>  3 Fold01       0.923     0.0769    19 0.00599   0.111
>  4 Fold01       0.799     0.201     24 0.00599   0.111
>  5 Fold01       0.647     0.353     37 0.00599   0.111
>  6 Fold01       0.766     0.234     55 0.00599   0.111
>  7 Fold01       0.793     0.207     56 0.00599   0.111
>  8 Fold01       0.802     0.198     76 0.00599   0.111
>  9 Fold01       0.958     0.0416    88 0.00599   0.111
> 10 Fold01       0.950     0.0504    90 0.00599   0.111
> # ... with 61,167 more rows, and 3 more variables:
> #   .pred_class <fct>, died <fct>, .config <chr>
Collect Metrics:
tune: Tidy Tuning Tools

autoplot(cls_wf_fit)
Step 8: Finalize the workflow object.

cls_wf_final <- cls_wf %>%
  finalize_workflow(select_best(cls_wf_fit, metric = "roc_auc"))
cls_wf_final

> Output on next slide
Note: Had we not combined our model specification and preprocessing recipe in a workflow object, we could alternatively use finalize_model() or finalize_recipe().
> == Workflow ===================================================================================
> Preprocessor: Recipe
> Model: logistic_reg()
>
> -- Preprocessor -------------------------------------------------------------------------------
> 5 Recipe Steps
> * step_medianimpute()  * step_dummy()
> * step_normalize()     * step_smote()
> * step_other()
>
> -- Model --------------------------------------------------------------------------------------
> Logistic Regression Model Specification (classification)
>
> Main Arguments:
>   penalty = 0.00599484250318942
>   mixture = 0.111111111111111
>
> Engine-Specific Arguments:
>   family = binomial
>
> Computational engine: glmnet
Step 9: Perform the final fit by training the model on the whole training data and predict the unseen observations from the test data.

cls_wf_final %>%
  fit(data = train_set) %>%
  predict(new_data = test_set, type = "prob")
Shortcut: The previous step can be abbreviated using last_fit(). Conveniently, it also computes performance metrics along the way.
cls_wf_last_fit <- cls_wf_final %>%
  last_fit(
    split = climbers_split,
    metrics = metric_set(roc_auc, accuracy, sens, spec)
  )

cls_wf_last_fit
> # Resampling results
> # Manual resampling
> # A tibble: 1 x 6
>   splits     id       .metrics  .notes   .predictions  .workflow
>   <list>     <chr>    <list>    <list>   <list>        <list>
> 1 <split [6~ train/t~ <tibble ~ <tibble~ <tibble [15,~ <workflo~
tune: Tidy Tuning Tools

We have now successfully tuned a single machine learning model! 🤗 🤩 Eventually, however, we would like multiple models to compete on a given task and choose the winner. 🥇
Practical Tips for Model Selection (Kuhn/Johnson (2013), p. 79):
Note: For a comprehensive overview of interpretable ML, check out Molnar (2021).
broom: Tidy Model Outputs

broom provides three useful functions for converting model objects (e.g., lm, glm, rpart) into tidy tibbles:
- tidy(): produces a tidy output of model components (e.g., coefficients, weights, clusters)
- glance(): produces a tidy output of model summaries (e.g., goodness-of-fit, F-statistics)
- augment(): adds additional information about observations (e.g., fitted values, residuals)

Note the difference to tidyr: these functions tidy model objects, whereas tidyr is all about tidying and transforming data frames.
: Tidy Model OutputsIn order to illustrate the convenience of the three broom
functions, let us first extract our optimal model.
reg_log_cls_fit <- cls_wf_last_fit %>% extract_fit_parsnip()
tidy() produces a tidy output of model components (e.g., coefficients, weights, clusters).
tidy(reg_log_cls_fit) %>% glimpse()
> Rows: 23
> Columns: 3
> $ term     <chr> "(Intercept)", "year", "age", "peak_name_Cho.~
> $ estimate <dbl> -2.71947017, -0.38858862, 0.00000000, -0.1235~
> $ penalty  <dbl> 0.005994843, 0.005994843, 0.005994843, 0.0059~
glance() produces a tidy output of model diagnostics (e.g., goodness-of-fit, F-statistics).
glance(reg_log_cls_fit) %>% glimpse()
> Rows: 1
> Columns: 3
> $ nulldev <dbl> 65211.72
> $ npasses <int> 729
> $ nobs    <int> 72369
augment() adds additional information about observations (e.g., fitted values, residuals). Unfortunately, augment() is not supported for glmnet models (check the available methods).
Note: Depending on the class of the model object you are providing to tidy(), it offers several advanced features, such as returning odds ratios for logit models (exponentiate = TRUE) or confidence intervals (conf.int = TRUE).
- tidy(): useful for creating visualizations or preparing model tables for a paper
- glance():
yardstick: Tidy Model Performance

Similar to broom, yardstick's endeavor is to enable model evaluation using tidy data principles. It provides various performance metrics for both classification and regression problems.
The metrics fall into three families:

- Class metrics: take two fct columns (truth and estimate) as input.
- Class probability metrics: take one fct column (truth) and one/multiple dbl columns (estimate) as input.
- Numeric metrics: take two dbl columns (truth and estimate) as input (see the sketch below).

Note: Find all available metrics grouped by their type on tidymodels.org or learn more about the yardstick package, e.g., about features for multi-class learning problems, in Kuhn/Silge (2021).
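Since our case study is a classification problem, numeric metrics do not show up elsewhere in this workshop; a minimal sketch with made-up regression predictions:

# hypothetical truth/estimate pairs for a regression task
reg_preds <- tibble(
  truth    = c(3.1, 4.8, 5.2, 6.0),
  estimate = c(2.9, 5.0, 5.5, 5.7)
)

reg_preds %>% rmse(truth = truth, estimate = estimate)
reg_preds %>% rsq(truth = truth, estimate = estimate)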
yardstick: Tidy Model Performance

Class Metrics
collect_predictions(cls_wf_last_fit) %>% conf_mat(died, estimate = .pred_class)
>           Truth
> Prediction FALSE  TRUE
>      FALSE 14919   223
>      TRUE    138    15
collect_predictions(cls_wf_last_fit) %>% sens(died, estimate = .pred_class)
> # A tibble: 1 x 3
>   .metric .estimator .estimate
>   <chr>   <chr>          <dbl>
> 1 sens    binary         0.991
collect_predictions(cls_wf_last_fit) %>% spec(died, estimate = .pred_class)
> # A tibble: 1 x 3
>   .metric .estimator .estimate
>   <chr>   <chr>          <dbl>
> 1 spec    binary        0.0630
collect_predictions(cls_wf_last_fit) %>% accuracy(died, estimate = .pred_class)
> # A tibble: 1 x 3
>   .metric  .estimator .estimate
>   <chr>    <chr>          <dbl>
> 1 accuracy binary         0.976
metrics <- metric_set(accuracy, sens, spec)

collect_predictions(cls_wf_last_fit) %>%
  metrics(died, estimate = .pred_class)
> # A tibble: 3 x 3
>   .metric  .estimator .estimate
>   <chr>    <chr>          <dbl>
> 1 accuracy binary        0.976
> 2 sens     binary        0.991
> 3 spec     binary        0.0630
- yardstick has implemented generalizations of the original measures
- using metric_set() resembles tune_grid(), where we specified the metrics that we want to compute during our resampling approach

yardstick: Tidy Model Performance

Class Probability Metrics
collect_predictions(cls_wf_last_fit) %>% roc_curve(died, .pred_TRUE, event_level = "second")
> # A tibble: 5,609 x 3
>    .threshold specificity sensitivity
>         <dbl>       <dbl>       <dbl>
>  1  -Inf        0                   1
>  2    0.0115    0                   1
>  3    0.0117    0.0000664           1
>  4    0.0118    0.000133            1
>  5    0.0127    0.000199            1
>  6    0.0128    0.000266            1
>  7    0.0131    0.000332            1
>  8    0.0138    0.000398            1
>  9    0.0144    0.000465            1
> 10    0.0148    0.000531            1
> # ... with 5,599 more rows
collect_predictions(cls_wf_last_fit) %>% roc_auc(died, .pred_TRUE, event_level = "second")
> # A tibble: 1 x 3
>   .metric .estimator .estimate
>   <chr>   <chr>          <dbl>
> 1 roc_auc binary         0.704
Note: By default, yardstick views the first factor level as the positive class. If your outcome is one-hot encoded (e.g., 0/1 or FALSE/TRUE) and the event of interest relates to the second factor level, you have to make the event_level explicit.
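Alternatively (a sketch, not from the original slides), you can relevel the outcome once so that the event of interest becomes the first factor level and the yardstick default applies:

# make TRUE the first level, so yardstick treats it as the event of interest
collect_predictions(cls_wf_last_fit) %>%
  mutate(died = forcats::fct_relevel(died, "TRUE")) %>%
  roc_auc(died, .pred_TRUE)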
The individual functions can be easily applied to our tuning results as well.
collect_predictions(cls_wf_fit) %>%
  group_by(id) %>%
  roc_curve(died, .pred_TRUE, event_level = "second") %>%
  autoplot()
- class probability metrics: we provide the probability column for the event of interest
- autoplot() works for yardstick outputs as well

[Plot: multiple ROC curves, one per resampling fold]

Additions to the tidymodels Ecosystem

Similar to the tidyverse ecosystem, there is already a promising supply of complementary packages that further improve the capabilities of tidymodels, e.g.:
- textrecipes: Extra recipes for text processing
- baguette: Efficient model functions for bagging
- stacks: Tidy model stacking
- probably: Tools for post-processing class probability estimates
- infer: Statistical inference and hypothesis testing using tidy data principles
- finetune: Implementation of additional search algorithms
- usemodels: Boilerplate code for tidymodels analyses
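For example, usemodels can scaffold a complete tuning pipeline (a sketch; use_glmnet() is one of several templates shipped with the package):

# prints boilerplate recipe, model specification, workflow and
# tuning code for a glmnet model to the console
usemodels::use_glmnet(died ~ ., data = train_set)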
[Meme: 🤔 right now vs. 🤓 after having mastered tidymodels]
Kuhn, M./Silge, J. (2021): Tidy Modeling with R. URL: https://www.tmwr.org (work in progress).
Learn section of tidymodels.org.
TidyTuesday contributions by Julia Silge.
tidymodels artworks and illustrations are provided by Allison Horst.