Categorical predictor variables are very common. Those who've taken a Statistics course covering linear (or logistic) regression know that including a categorical predictor in a regression model requires the following steps:

- Convert the categorical variable that has *m* categories into *m* binary dummy variables
- Include only *m-1* of the dummy variables as predictors in the regression model (the dropped category is called the *reference category*)

For example, for a color variable X = {red, yellow, green}, we create three dummies:

*D_red* = 1 if the value is 'red' and 0 otherwise

*D_yellow* = 1 if the value is 'yellow' and 0 otherwise

*D_green* = 1 if the value is 'green' and 0 otherwise

In the regression model we might have: Y = b0 + b1 D_red + b2 D_yellow + error

[Note: mathematically, it does not matter which dummy you drop: the regression coefficients b1, b2 now compare against the left-out category.]
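The *m-1* encoding above can be sketched with pandas (a hypothetical DataFrame with a `color` column is assumed; `drop_first=True` drops the reference category):

```python
import pandas as pd

# Hypothetical data: a color variable with m = 3 categories
df = pd.DataFrame({"color": ["red", "yellow", "green", "red"]})

# drop_first=True keeps m-1 = 2 dummies; the first category in sorted
# order ("green" here) is dropped and becomes the reference category
dummies = pd.get_dummies(df["color"], prefix="D", drop_first=True)
print(list(dummies.columns))  # ['D_red', 'D_yellow']
```

The regression is then fit on the two remaining dummy columns, matching the model Y = b0 + b1 D_red + b2 D_yellow + error.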

When you move to data mining algorithms such as k-NN or trees, the procedure is different: **we include all *m* dummies as predictors when *m>2*, but in the case *m=2*, we use a single dummy**. Dropping a dummy (when *m>2*) will distort the distance measure, leading to incorrect distances.

Here's an example, based on X = {red, yellow, green}:

#### **Case 1: *m=3* (use 3 dummies)**

Here are 3 records, their category (color), and their dummy values on (*D_red, D_yellow, D_green*):

| Record | Color  | D_red | D_yellow | D_green |
|--------|--------|-------|----------|---------|
| #1     | red    | 1     | 0        | 0       |
| #2     | yellow | 0     | 1        | 0       |
| #3     | green  | 0     | 0        | 1       |

The distance between each pair of records (in terms of color) should be identical, since all three records are different from each other. Suppose we use squared Euclidean distance. The distance between each pair of records will then be equal to 2. For example:

Distance(#1, #2) = (1-0)^2 + (0-1)^2 + (0-0)^2 = 2.

If we drop one dummy, then the three distances will no longer be identical! For example, if we drop *D_green*:

Distance(#1, #2) = 1 + 1 = 2

Distance(#1, #3) = 1

Distance(#2, #3) = 1
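The distortion can be checked with a short sketch (plain Python, squared Euclidean distance; record numbering follows the example above):

```python
# One-hot encoded colors: (D_red, D_yellow, D_green)
records = {
    1: [1, 0, 0],  # red
    2: [0, 1, 0],  # yellow
    3: [0, 0, 1],  # green
}

def sq_dist(a, b):
    """Squared Euclidean distance between two dummy vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

pairs = [(1, 2), (1, 3), (2, 3)]

# With all 3 dummies, every pair is equally far apart
print([sq_dist(records[i], records[j]) for i, j in pairs])  # [2, 2, 2]

# Dropping D_green (keeping only the first two dummies) distorts the distances
dropped = {k: v[:2] for k, v in records.items()}
print([sq_dist(dropped[i], dropped[j]) for i, j in pairs])  # [2, 1, 1]
```

With the full encoding every pair of distinct colors is equidistant, as it should be; after dropping *D_green*, pairs involving green look artificially closer.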

#### **Case 2: *m=2* (use single dummy)**

The above problem doesn't happen with *m=2*. Suppose we have only {red, green}, and use a single dummy. The distance between a pair of records will be 0 if the records are the same color, or 1 if they are different.

Why not use 2 dummies? If we use two dummies, we are doubling the weight of this variable without adding any information. For example, comparing the red and green records using *D_red* and *D_green* would give Distance(#1, #3) = 1 + 1 = 2.

So we end up with distances of 0 or 2 instead of distances of 0 or 1.
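A minimal sketch of the *m=2* comparison (hypothetical red/green records, squared Euclidean distance):

```python
# m = 2 case: {red, green}
single = {"red": [1], "green": [0]}          # one dummy: D_red
double = {"red": [1, 0], "green": [0, 1]}    # two dummies: D_red, D_green

def sq_dist(a, b):
    """Squared Euclidean distance between two dummy vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Single dummy: different colors are distance 1 apart
print(sq_dist(single["red"], single["green"]))  # 1

# Two dummies: same comparison, but the distance is doubled to 2
print(sq_dist(double["red"], double["green"]))  # 2
```

The second dummy carries no extra information; it only rescales the variable's contribution to the distance.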

#### **Bottom line**

In data mining methods other than regression models (e.g., k-NN, trees, k-means clustering), we use *m* dummies for a categorical variable with *m* categories - this is called one-hot encoding. But if *m=2*, we use a single dummy.
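If scikit-learn is available, its `OneHotEncoder` implements this rule directly via `drop='if_binary'`: all *m* dummies are kept unless the variable is binary, in which case a single dummy is used (the data below is made up for illustration):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# m = 3 categories: all 3 dummies are kept
X3 = np.array([["red"], ["yellow"], ["green"], ["red"]])
enc3 = OneHotEncoder(drop="if_binary")
print(enc3.fit_transform(X3).toarray().shape)  # (4, 3)

# m = 2 categories: 'if_binary' drops one level, keeping a single dummy
X2 = np.array([["red"], ["green"], ["red"]])
enc2 = OneHotEncoder(drop="if_binary")
print(enc2.fit_transform(X2).toarray().shape)  # (3, 1)
```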