XGBoost Categorical Variables: Dummification vs encoding
XGBoost only deals with numeric columns.

If you have a feature [a,b,b,c] which describes a categorical variable (i.e. no numeric relationship), then using LabelEncoder you will simply have this:

array([0, 1, 1, 2])

XGBoost will wrongly interpret this feature as having a numeric relationship! LabelEncoder just maps each string ('a','b','c') to an integer, nothing more.
Proper way
Using OneHotEncoder you will eventually get to this:

array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])

This is the proper representation of a categorical variable for XGBoost or any other machine learning tool.
Pandas' get_dummies is a nice tool for creating dummy variables, and it is easier to use, in my opinion.

Method #2 in the question above will not represent the data properly.
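To make the contrast concrete, here is a small sketch using only pandas (for this input, pandas' factorize assigns the same integer codes as sklearn's LabelEncoder):

```python
import pandas as pd

# The feature from the answer above.
feature = ["a", "b", "b", "c"]

# Integer encoding: imposes a spurious ordering a < b < c.
codes, levels = pd.factorize(feature)
print(codes)  # [0 1 1 2]

# One-hot / dummy encoding: one 0/1 column per level, no false ordering.
dummies = pd.get_dummies(feature)
print(dummies.values.astype(float))
```

The one-hot output matches the OneHotEncoder array shown above, one column per level.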
I want to answer this question not just in terms of XGBoost but in terms of any problem dealing with categorical data. While "dummification" creates a very sparse setup, especially if you have multiple categorical columns with different levels, label encoding is often biased, as the mathematical representation does not reflect any real relationship between the levels.

For binary classification problems, a powerful yet underexplored approach, heavily leveraged in traditional credit scoring models, is to use Weight of Evidence (WoE) to replace the categorical levels. Basically, every categorical level is replaced by ln(proportion of goods / proportion of bads) for that level.

You can read more about it here.

Python library here.

This method allows you to capture the "levels" under one column and avoid the sparsity or induction of bias that would occur through dummifying or encoding.

Hope this helps!
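The replacement can be sketched as follows (the data, column names, and helper function are illustrative; real WoE implementations also smooth levels with zero goods or zero bads to avoid division by zero):

```python
import numpy as np
import pandas as pd

# Toy data: one categorical feature and a binary target (0 = good, 1 = bad).
df = pd.DataFrame({
    "grade": ["A", "A", "B", "B", "B", "C", "C", "C"],
    "bad":   [0,   1,   0,   0,   1,   1,   1,   0],
})

def woe_encode(df, col, target):
    """Replace each level of `col` with ln(dist. of goods / dist. of bads)."""
    goods = (df[target] == 0).groupby(df[col]).sum()
    bads = (df[target] == 1).groupby(df[col]).sum()
    woe = np.log((goods / goods.sum()) / (bads / bads.sum()))
    return df[col].map(woe)

df["grade_woe"] = woe_encode(df, "grade", "bad")
print(df)
```

Each level collapses to a single numeric column whose value reflects its relationship to the target, so no sparsity is introduced.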
Nov 23, 2020
XGBoost has since version 1.3.0 added experimental support for categorical features. From the docs:
1.8.7 Categorical Data
Other than users performing encoding, XGBoost has experimental support for categorical data using gpu_hist and gpu_predictor. No special operation needs to be done on input test data since the information about categories is encoded into the model during training.
https://buildmedia.readthedocs.org/media/pdf/xgboost/latest/xgboost.pdf
In the DMatrix section the docs also say:
enable_categorical (boolean, optional) – New in version 1.3.0.
Experimental support of specializing for categorical features. Do not set to True unless you are interested in development. Currently it’s only available for gpu_hist tree method with 1 vs rest (one hot) categorical split. Also, JSON serialization format, gpu_predictor and pandas input are required.
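As a minimal sketch of what this looks like in practice (the column names and data are made up, and the xgboost calls are left as comments since they require version 1.3.0+ and a GPU, per the constraints quoted above):

```python
import pandas as pd

# Illustrative DataFrame with one categorical and one numeric feature.
df = pd.DataFrame({
    "color": ["red", "green", "green", "blue"],
    "size": [1.0, 2.0, 2.0, 3.0],
    "label": [0, 1, 1, 0],
})

# Mark categorical columns with pandas' category dtype -- this is how
# enable_categorical knows which columns to specialize.
df["color"] = df["color"].astype("category")

# With xgboost >= 1.3.0, training would then look roughly like:
#
#   import xgboost as xgb
#   dtrain = xgb.DMatrix(df[["color", "size"]], label=df["label"],
#                        enable_categorical=True)
#   booster = xgb.train({"tree_method": "gpu_hist",
#                        "predictor": "gpu_predictor"}, dtrain)
```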
Here is a code example of adding one-hot encoded columns to a pandas DataFrame with categorical columns (assumes df already exists):

import pandas as pd

ONE_HOT_COLS = ["categorical_col1", "categorical_col2", "categorical_col3"]

print("Starting DF shape: %d, %d" % df.shape)

for col in ONE_HOT_COLS:
    s = df[col].unique()

    # Create a one-hot DataFrame with one row for each unique value
    one_hot_df = pd.get_dummies(s, prefix=col)
    one_hot_df[col] = s

    print("Adding one-hot values for %s (the column has %d unique values)" % (col, len(s)))
    pre_len = len(df)

    # Merge the one-hot columns back onto the original rows
    df = df.merge(one_hot_df, on=[col], how="left")

    # A left merge on unique keys must not change the row count
    assert len(df) == pre_len

print(df.shape)