
# Machine Learning Engineer Nanodegree

## Project: Creating Customer Segments

Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.

## Getting Started

In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) across diverse product categories, in order to uncover internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.

The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded from the analysis, with the focus instead on the six product categories recorded for customers.

Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.

In [267]:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames

# Import supplementary visualizations code visuals.py
import visuals as vs

# Pretty display for notebooks
%matplotlib inline

# Load the wholesale customers dataset
try:
    data = pd.read_csv("customers.csv")
    data.drop(['Region', 'Channel'], axis = 1, inplace = True)
    print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
    print "Dataset could not be loaded. Is the dataset missing?"

Wholesale customers dataset has 440 samples with 6 features each.


## Data Exploration

In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.

Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.

In [268]:
# Display a description of the dataset
display(data.describe())

| | Fresh | Milk | Grocery | Frozen | Detergents_Paper | Delicatessen |
| --- | --- | --- | --- | --- | --- | --- |
| count | 440.000000 | 440.000000 | 440.000000 | 440.000000 | 440.000000 | 440.000000 |
| mean | 12000.297727 | 5796.265909 | 7951.277273 | 3071.931818 | 2881.493182 | 1524.870455 |
| std | 12647.328865 | 7380.377175 | 9503.162829 | 4854.673333 | 4767.854448 | 2820.105937 |
| min | 3.000000 | 55.000000 | 3.000000 | 25.000000 | 3.000000 | 3.000000 |
| 25% | 3127.750000 | 1533.000000 | 2153.000000 | 742.250000 | 256.750000 | 408.250000 |
| 50% | 8504.000000 | 3627.000000 | 4755.500000 | 1526.000000 | 816.500000 | 965.500000 |
| 75% | 16933.750000 | 7190.250000 | 10655.750000 | 3554.250000 | 3922.000000 | 1820.250000 |
| max | 112151.000000 | 73498.000000 | 92780.000000 | 60869.000000 | 40827.000000 | 47943.000000 |

### Implementation: Selecting Samples

To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.

In [269]:
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [95, 176, 200]

# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)

Chosen samples of wholesale customers dataset:

| | Fresh | Milk | Grocery | Frozen | Detergents_Paper | Delicatessen |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 3 | 2920 | 6252 | 440 | 223 | 709 |
| 1 | 45640 | 6958 | 6536 | 7368 | 1532 | 230 |
| 2 | 3067 | 13240 | 23127 | 3941 | 9959 | 731 |

### Question 1

Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant.

1. I think the first customer is likely to be a small corner shop, given that they order very few fresh ingredients (3 vs a mean of 12000.3) while they order a proportionately higher amount of Milk (2920 vs a mean of 5796.27) and Grocery type foods (6252 vs a mean of 7951.28) than other types of goods. Corner shops typically have a small frozen section (440 vs a mean of 3071.93) and household items area (223 vs a mean of 2881.49), but can sometimes have a large delicatessen area (709 vs a mean of 1524.87) depending on the demographic makeup of the area the shop is in. Looking at the heatmap of normalized expenditures below, this sample purchases far fewer goods overall than other customers, but seems to purchase proportionately more Milk, Grocery, and Delicatessen goods than other types of goods.

2. The second customer is more likely to be a big market-like shop, with a huge fresh produce section (likely organic produce given the cost, with an expenditure of 45640 compared to a mean of 12000.3) and smaller, roughly equal areas for milk (6958 vs a mean of 5796.27), normal groceries (6536 vs a mean of 7951.28), and frozen foods (7368 vs a mean of 3071.93). I would assume a place like this would also have a bigger delicatessen area, though that appears not to be the case (230 vs a mean of 1524.87). Looking at the heatmap of normalized expenditures below, this sample purchases significantly more fresh produce and substantially more frozen goods than other customers, but is closer to the mean for both Milk and Grocery products, and has below-average consumption of Detergents_Paper and Delicatessen products.

3. The final customer strikes me as a typical supermarket with a modest fresh produce area (3067 vs a mean of 12000.3), but a shop where you're more likely to go for typical grocery items like bread, cereals, tins of food, etc. (23127 vs a mean of 7951.28). These kinds of stores typically have a substantial dairy selection (13240 vs a mean of 5796.27) and frozen area (3941 vs a mean of 3071.93), and also have aisles dedicated solely to household items like detergent and paper towels (9959 vs a mean of 2881.49). You're less likely to find a delicatessen area in a store like this (731 vs a mean of 1524.87), though there would appear to be a small part of the store that caters to customers interested in those types of goods. Looking at the heatmap of normalized expenditures below, this sample purchases significantly more Milk, Grocery, and Detergents_Paper products, slightly more frozen goods and slightly fewer delicatessen products, and a below-average amount of fresh produce compared with other customers.
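For reference, the heatmap in the next cell standardizes each sample's spending in each category against the full dataset, i.e. it plots the z-score

$$z = \frac{x - \mu}{\sigma},$$

where $\mu$ and $\sigma$ are the feature's mean and (population) standard deviation, so a value of +1 means the sample spends one standard deviation more than the average customer in that category.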
In [270]:
import seaborn as sns

sns.heatmap((samples-data.mean())/data.std(ddof=0), annot=True, cbar=False, square=True)

Out[270]:
<matplotlib.axes._subplots.AxesSubplot at 0x10fb86cd0>

### Implementation: Feature Relevance

One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.

In the code block below, you will need to implement the following:

• Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.
• Use sklearn.model_selection.train_test_split to split the dataset into training and testing sets.
• Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.
• Import a decision tree regressor, set a random_state, and fit the learner to the training data.
• Report the prediction score of the testing set using the regressor's score function.
In [271]:
print new_data

     Fresh   Milk  Frozen  Detergents_Paper  Delicatessen
0    12669   9656     214              2674          1338
1     7057   9810    1762              3293          1776
2     6353   8808    2405              3516          7844
3    13265   1196    6404               507          1788
4    22615   5410    3915              1777          5185
5     9413   8259     666              1795          1451
6    12126   3199     480              3140           545
7     7579   4956    1669              3321          2566
8     5963   3648     425              1716           750
9     6006  11093    1159              7425          2098
10    3366   5403    4400              5977          1744
11   13146   1124    1420               549           497
12   31714  12319     287              3881          2931
13   21217   6208    3095              6707           602
14   24653   9465     294              5058          2168
15   10253   1114     397               964           412
16    1020   8816     134              4508          1080
17    5876   6157     839               370          4478
18   18601   6327    2205              2767          3181
19    7780   2495     669              2518           501
20   17546   4519    1066              2259          2124
21    5567    871    3383               375           569
22   31276   1917    9408              2381          4334
23   26373  36423    5154              4337         16523
24   22647   9776    2915              4482          5778
25   16165   4230     201              4003            57
26    9898    961    3151               242           833
27   14276    803     485               100           518
28    4113  20484    1158              8604          5206
29   43088   2100    1200              1107           823
..     ...    ...     ...               ...           ...
410   6633   2096    1389              1860          1892
411   2126   3289    1535               235          4365
412     97   3605      98              2970            62
413   4983   4859   17866               912          2435
414   5969   1990    5679              1135           290
415   7842   6046    1691              3540          1874
416   4389  10940     848              6728           993
417   5065   5499     364              3485          1063
418    660   8494     133              6740           776
419   8861   3783     633              1580          1521
420   4456   5266      25              6818          1393
421  17063   4847    1031              3415          1784
422  26400   1377     830               948          1218
423  17565   3686    1059              1803           668
424  16980   2884     874              3213           249
425  11243   2408   15348               108          1886
426  13134   9347    3141              5079          1894
427  31012  16687   15082               439          1163
428   3047   5970    2198               850           317
429   8607   1750      47                84          2501
430   3097   4230     575               241          2080
431   8533   5506   13486              1377          1498
432  21117   1162     269              1328           395
433   1982   3218    1541               356          1449
434  16731   3922     688              2371           838
435  29703  12051   13135               182          2204
436  39228   1431    4510                93          2346
437  14531  15488     437             14841          1867
438  10290   1981    1038               168          2125
439   2787   1698      65               477            52

[440 rows x 5 columns]

In [272]:
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.copy()
new_data.drop(['Grocery'], axis=1, inplace = True)

# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = train_test_split(new_data, data['Grocery'], test_size = 0.25, random_state = 1)

# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeRegressor(random_state = 1)
regressor.fit(X_train, y_train)

# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
print score

0.795768311576


### Question 2

Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.
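For reference, the score returned by a scikit-learn regressor's score function is the coefficient of determination,

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2},$$

which equals 1 for a perfect fit, 0 for a model that does no better than always predicting the mean of the target, and is negative when the model fits worse than that baseline.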

I attempted to predict the 'Grocery' feature. The reported prediction score was 0.7958, which means the model was able to predict its value reasonably well from the remaining features. This suggests that 'Grocery' may not be a necessary feature for identifying customers' spending habits, since customers' grocery purchasing behaviour can be predicted from the other features with a reasonable degree of accuracy.

In [273]:
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

def calculate_r_2_for_feature(data, feature):
    # Drop the chosen feature, fit a decision tree on the remaining features,
    # and score how well the tree predicts the dropped feature
    new_data = data.drop(feature, axis=1)

    X_train, X_test, y_train, y_test = train_test_split(new_data, data[feature], test_size=0.25)

    regressor = DecisionTreeRegressor()
    regressor.fit(X_train, y_train)

    score = regressor.score(X_test, y_test)
    return score

def r_2_mean(data, feature, runs=200):
    # Average the R^2 score over several random train/test splits
    return np.array([calculate_r_2_for_feature(data, feature)
                     for _ in range(runs)]).mean().round(4)

print "{0:17} {1}".format("Fresh: ", r_2_mean(data, 'Fresh'))
print "{0:17} {1}".format("Milk: ", r_2_mean(data, 'Milk'))
print "{0:17} {1}".format("Grocery: ", r_2_mean(data, 'Grocery'))
print "{0:17} {1}".format("Frozen: ", r_2_mean(data, 'Frozen'))
print "{0:17} {1}".format("Detergents_Paper: ", r_2_mean(data, 'Detergents_Paper'))
print "{0:17} {1}".format("Delicatessen: ", r_2_mean(data, 'Delicatessen'))

Fresh:            -0.7802
Milk:             0.1073
Grocery:          0.6782
Frozen:           -1.2441
Detergents_Paper:  0.685
Delicatessen:     -3.1109


### Visualize Feature Distributions

To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.

In [274]:
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');

In [275]:
corr = data.corr()
with sns.axes_style("white"):
cmap='RdBu', fmt='+.3f')


### Question 3

Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint: Is the data normally distributed? Where do most of the data points lie?

It appears that Grocery and Detergents_Paper have the strongest correlation of any pair. There also appears to be some correlation between Detergents_Paper and Milk, and between Grocery and Milk. This confirms my suspicion above that Grocery is correlated with other features, which allows its value to be predicted with some degree of accuracy. All of the distributions appear to be skewed to the right, with most points clustered near the origin and a smaller number of large values extending the tail to the right. The shapes of the distributions of Detergents_Paper, Grocery, and Milk are all quite similar.

## Data Preprocessing

In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often a critical step in ensuring that the results you obtain from your analysis are significant and meaningful.

### Implementation: Feature Scaling

If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling, particularly for financial data. One way to achieve this scaling is the Box-Cox transformation, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases is applying the natural logarithm.
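For illustration only (this is not part of the project template), a Box-Cox transform can be computed with SciPy; the sketch below fits the power parameter for a single feature, assuming the data DataFrame loaded earlier:

```python
# Illustrative sketch: Box-Cox power transform of one feature with SciPy.
# boxcox requires strictly positive values, which holds for this dataset.
from scipy.stats import boxcox

fresh_boxcox, fresh_lambda = boxcox(data['Fresh'])
print "Estimated Box-Cox lambda for 'Fresh': {}".format(fresh_lambda)
```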

In the code block below, you will need to implement the following:

• Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.
• Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.
In [276]:
# TODO: Scale the data using the natural logarithm
log_data = np.log(data.copy())

# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)

# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');


### Observation

After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).

Run the code below to see how the sample data has changed after having the natural logarithm applied to it.

In [277]:
# Display the log-transformed sample data
display(log_samples)

| | Fresh | Milk | Grocery | Frozen | Detergents_Paper | Delicatessen |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 1.098612 | 7.979339 | 8.740657 | 6.086775 | 5.407172 | 6.563856 |
| 1 | 10.728540 | 8.847647 | 8.785081 | 8.904902 | 7.334329 | 5.438079 |
| 2 | 8.028455 | 9.490998 | 10.048756 | 8.279190 | 9.206232 | 6.594413 |

### Implementation: Outlier Detection

Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take these data points into consideration. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identifying outliers: an outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
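Concretely, with $IQR = Q_3 - Q_1$, a value $x$ of a feature is flagged as an outlier when

$$x < Q_1 - 1.5 \times IQR \quad \text{or} \quad x > Q_3 + 1.5 \times IQR.$$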

In the code block below, you will need to implement the following:

• Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
• Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
• Assign the calculation of an outlier step for the given feature to step.
• Optionally remove data points from the dataset by adding indices to the outliers list.

NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable good_data.

In [278]:
from collections import Counter

# For each feature find the data points with extreme high or low values
for feature in log_data.keys():

# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature], 25)

# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature], 75)

# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = (Q3 - Q1) * 1.5

# Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])

# OPTIONAL: Select the indices for data points you wish to remove
outliers  = []

# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)

Data points considered outliers for the feature 'Fresh':

Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
65 4.442651 9.950323 10.732651 3.583519 10.095388 7.260523
66 2.197225 7.335634 8.911530 5.164786 8.151333 3.295837
81 5.389072 9.163249 9.575192 5.645447 8.964184 5.049856
95 1.098612 7.979339 8.740657 6.086775 5.407172 6.563856
96 3.135494 7.869402 9.001839 4.976734 8.262043 5.379897
128 4.941642 9.087834 8.248791 4.955827 6.967909 1.098612
171 5.298317 10.160530 9.894245 6.478510 9.079434 8.740337
193 5.192957 8.156223 9.917982 6.865891 8.633731 6.501290
218 2.890372 8.923191 9.629380 7.158514 8.475746 8.759669
304 5.081404 8.917311 10.117510 6.424869 9.374413 7.787382
305 5.493061 9.468001 9.088399 6.683361 8.271037 5.351858
338 1.098612 5.808142 8.856661 9.655090 2.708050 6.309918
353 4.762174 8.742574 9.961898 5.429346 9.069007 7.013016
355 5.247024 6.588926 7.606885 5.501258 5.214936 4.844187
357 3.610918 7.150701 10.011086 4.919981 8.816853 4.700480
412 4.574711 8.190077 9.425452 4.584967 7.996317 4.127134
Data points considered outliers for the feature 'Milk':

Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
86 10.039983 11.205013 10.377047 6.894670 9.906981 6.805723
98 6.220590 4.718499 6.656727 6.796824 4.025352 4.882802
154 6.432940 4.007333 4.919981 4.317488 1.945910 2.079442
356 10.029503 4.897840 5.384495 8.057377 2.197225 6.306275
Data points considered outliers for the feature 'Grocery':

Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
75 9.923192 7.036148 1.098612 8.390949 1.098612 6.882437
154 6.432940 4.007333 4.919981 4.317488 1.945910 2.079442
Data points considered outliers for the feature 'Frozen':

Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
38 8.431853 9.663261 9.723703 3.496508 8.847360 6.070738
57 8.597297 9.203618 9.257892 3.637586 8.932213 7.156177
65 4.442651 9.950323 10.732651 3.583519 10.095388 7.260523
145 10.000569 9.034080 10.457143 3.737670 9.440738 8.396155
175 7.759187 8.967632 9.382106 3.951244 8.341887 7.436617
264 6.978214 9.177714 9.645041 4.110874 8.696176 7.142827
325 10.395650 9.728181 9.519735 11.016479 7.148346 8.632128
420 8.402007 8.569026 9.490015 3.218876 8.827321 7.239215
429 9.060331 7.467371 8.183118 3.850148 4.430817 7.824446
439 7.932721 7.437206 7.828038 4.174387 6.167516 3.951244
Data points considered outliers for the feature 'Detergents_Paper':

Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
75 9.923192 7.036148 1.098612 8.390949 1.098612 6.882437
161 9.428190 6.291569 5.645447 6.995766 1.098612 7.711101
Data points considered outliers for the feature 'Delicatessen':

Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
66 2.197225 7.335634 8.911530 5.164786 8.151333 3.295837
109 7.248504 9.724899 10.274568 6.511745 6.728629 1.098612
128 4.941642 9.087834 8.248791 4.955827 6.967909 1.098612
137 8.034955 8.997147 9.021840 6.493754 6.580639 3.583519
142 10.519646 8.875147 9.018332 8.004700 2.995732 1.098612
154 6.432940 4.007333 4.919981 4.317488 1.945910 2.079442
183 10.514529 10.690808 9.911952 10.505999 5.476464 10.777768
184 5.789960 6.822197 8.457443 4.304065 5.811141 2.397895
187 7.798933 8.987447 9.192075 8.743372 8.148735 1.098612
203 6.368187 6.529419 7.703459 6.150603 6.860664 2.890372
233 6.871091 8.513988 8.106515 6.842683 6.013715 1.945910
285 10.602965 6.461468 8.188689 6.948897 6.077642 2.890372
289 10.663966 5.655992 6.154858 7.235619 3.465736 3.091042
343 7.431892 8.848509 10.177932 7.283448 9.646593 3.610918

### Question 4

Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.

Index 75 is considered an outlier for both the Grocery and Detergents_Paper features; 154 is an outlier for Milk, Grocery, and Delicatessen; 65 for Fresh and Frozen; 66 for Fresh and Delicatessen; and 128 for Fresh and Delicatessen. I think these data points should be removed from the dataset because they are outliers for more than one feature, and may therefore reduce the predictive capability of our model if it is trained on these noisy data points. Interestingly, one of the data points I selected for the sample, 95, turned out to be an outlier due to the amount of Fresh produce the customer purchased.
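As a cross-check, the indices flagged for more than one feature can also be collected programmatically. The following is a minimal sketch (not part of the template) that reuses the same Tukey step on log_data:

```python
# Illustrative sketch: count how many features flag each data point,
# then keep the indices flagged as an outlier for more than one feature.
from collections import Counter

outlier_counts = Counter()
for feature in log_data.keys():
    Q1 = np.percentile(log_data[feature], 25)
    Q3 = np.percentile(log_data[feature], 75)
    step = 1.5 * (Q3 - Q1)
    flagged = log_data[~((log_data[feature] >= Q1 - step) &
                         (log_data[feature] <= Q3 + step))].index
    outlier_counts.update(flagged)

multi_feature_outliers = sorted(idx for idx, count in outlier_counts.items() if count > 1)
print multi_feature_outliers  # per the tables above: [65, 66, 75, 128, 154]
```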

## Feature Transformation

In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.

### Implementation: PCA

Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.

In the code block below, you will need to implement the following:

• Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
• Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
In [279]:
from sklearn.decomposition import PCA
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=6)
pca.fit(good_data)

# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)

# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)
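As a quick sanity check (not part of the template), the explained variance ratios can also be printed directly from the fitted PCA object:

```python
# Fraction of the total variance captured by each principal component,
# and the cumulative total across components.
print pca.explained_variance_ratio_
print np.cumsum(pca.explained_variance_ratio_)
```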