Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission, if necessary. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation for your project. Note that some sections of implementation are optional, and will be marked with 'Optional' in the header.

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.


Step 0: Load The Data

In [1]:
# Load pickled data
import pickle
import tensorflow as tf

# TODO: Fill this in based on where you saved the training and testing data

training_file = "./data/train.p"
testing_file = "./data/test.p"

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X, y = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 2D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height) representing the the original width and height the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES

Complete the basic data summary below.

In [2]:
### Replace each question mark with the appropriate value.
import numpy as np

# TODO: Number of training examples
n_train = X.shape[0]

# TODO: Number of testing examples.
n_test = X_test.shape[0]

# TODO: What's the shape of an traffic sign image?
image_shape = (X.shape[1], X.shape[2])

# TODO: How many unique classes/labels there are in the dataset.
n_classes = np.unique(y).shape[0]

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Number of training examples = 39209
Number of testing examples = 12630
Image data shape = (32, 32)
Number of classes = 43

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.

In [3]:
### Data exploration visualization goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import seaborn as sns
# Visualizations will be shown in the notebook.
%matplotlib inline
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-3-f6b207e38acb> in <module>()
      2 ### Feel free to use as many code cells as needed.
      3 import matplotlib.pyplot as plt
----> 4 import seaborn as sns
      5 # Visualizations will be shown in the notebook.
      6 get_ipython().magic('matplotlib inline')

ImportError: No module named 'seaborn'
In [4]:
plt.figure(figsize=(15,5))
sns.countplot(y)
plt.title("Distribution of Training Labels")
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-4-d1728f55cca2> in <module>()
      1 plt.figure(figsize=(15,5))
----> 2 sns.countplot(y)
      3 plt.title("Distribution of Training Labels")

NameError: name 'sns' is not defined
In [5]:
%matplotlib inline
plt.imshow(X[500])
Out[5]:
<matplotlib.image.AxesImage at 0x128ed4f98>

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture
  • Play around preprocessing techniques (normalization, rgb to grayscale, etc)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.

NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

Implementation

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.

In [6]:
### Preprocess the data here.
### Feel free to use as many code cells as needed.
import cv2

def convert_to_y(images):
    #images_y = np.array([cv2.cvtColor(img, cv2.COLOR_BGR2YUV) for img in images])
    images_y = images
    return images_y

#X_train_y = convert_to_y(X_train)
#X_test_y = convert_to_y(X_test)
#train_mean = np.mean(X_train_y)
#X_train_pre = X_train_y - train_mean
#X_test_pre = X_test_y - train_mean
#plt.imshow(X_train_y[500,:,:,:])
In [7]:
#plt.imshow(X_train_pre[500,:,:,:])

Question 1

Describe how you preprocessed the data. Why did you choose that technique?

Answer:

I preprocessed the data by converting the images to YUV and extracting the Y channel. I then subtracting the mean training pixel value from the images. I choose these techniques based on the "Traffic Sign Recognition with Multi-Scale Convolutional Networks" as these seemed to work well as preprocessing steps. Also, I new it would be important to normalize the data to help gradient descent perform better. Since pixels are already bounded in a range of 0-255 this isn't as necessary, but I figured it could only help.

In [8]:
### Generate data additional data (OPTIONAL!)
### and split the data into training/validation/testing sets here.
### Feel free to use as many code cells as needed.

def rescale(img):
    scale_factor = np.random.uniform(.9, 1.1)
    if scale_factor > 1:
        inter_type = cv2.INTER_LINEAR
    else:
        inter_type = cv2.INTER_AREA
    return cv2.resize(img,None,fx=scale_factor, fy=scale_factor, interpolation = inter_type)

def rotate(img):
    rows,cols,_ = img.shape
    rotate_factor = np.random.uniform(-15.0, 15.0)
    M = cv2.getRotationMatrix2D((cols/2,rows/2),rotate_factor,1)
    return cv2.warpAffine(img,M,(cols,rows))

def translate(img):
    rows,cols,_ = img.shape
    x_translate_factor = np.random.uniform(-2.0, 2.0)
    y_translate_factor = np.random.uniform(-2.0, 2.0)
    M = np.float32([[1,0,x_translate_factor],[0,1,y_translate_factor]])
    return cv2.warpAffine(img,M,(cols,rows))

def jitter_data(data, labels, scale=5):
    assert data.shape[0] == labels.shape[0]
    shape = data.shape
    jitter_data = np.zeros((shape[0]*scale,shape[1],shape[2],shape[3]))
    jitter_labels = np.zeros(shape[0]*scale)
    for n in range(scale):
        start_index = n * shape[0]
        for i in range(shape[0]):
            img = rescale(data[i,:,:,:])
            img = rotate(img)
            img = translate(img)
            row, col, _ = img.shape
            if row > 32:
                inter_type = cv2.INTER_LINEAR
            else:
                inter_type = cv2.INTER_AREA
            img = cv2.resize(img,(32, 32), interpolation = inter_type)
            jitter_data[start_index+i,:,:,:] = img
            jitter_labels[start_index+i] = labels[i]
    return jitter_data, jitter_labels


def dense_to_one_hot(y, nb_classes=None):
    y = np.array(y, dtype='int')
    if not nb_classes:
        nb_classes = np.max(y)+1
    Y = np.zeros((len(y), nb_classes))
    for i in range(len(y)):
        Y[i, y[i]] = 1.
    return Y


class DataSet(object):

    def __init__(self,
               X,
               y):

        self.X = X
        self.y = y

        self.pointer = 0
        self.dataset_length = len(y)


    def next_batch(self, size):
        next_indices = np.arange(self.pointer, self.pointer + size) % self.dataset_length
        self.pointer += size
        self.pointer = self.pointer % self.dataset_length

        return self.X[next_indices], self.y[next_indices]
    
    
    def length(self):
        return self.dataset_length
In [9]:
#X_train_jitter, y_train_jitter = jitter_data(X_train_pre, y_train, scale=1)
In [10]:
#X_train_jitter.shape
In [11]:
#X_train_all = np.concatenate((X_train_jitter, X_train_pre), axis=0)
#y_train_all = np.append(y_train_jitter, y_train)
In [12]:
#print(X_train_all.shape[0])
In [13]:
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.33, random_state=0)
X_train = X_train.astype('float32')
X_val = X_val.astype('float32')
X_train = X_train / 255 - 0.5
X_val = X_val / 255 - 0.5
y_train = dense_to_one_hot(y_train)
y_val = dense_to_one_hot(y_val)
print(X_train.shape)
(26270, 32, 32, 3)
In [14]:
train_ds = DataSet(X_train, y_train)
val_ds = DataSet(X_val, y_val)

Question 2

Describe how you set up the training, validation and testing data for your model. Optional: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?

**Answer:**

I left the testing set as was given to us and split the training data into training and validation. To do this I used scikit learn's train test split functionality using 30% for the validation set and 70% for the training set.

I generated data in the same way as, "Traffic Sign Recognition with Multi-Scale Convolutional Networks." I created 5 extra versions of each image by slightly perturbing the scale, rotation, and translation of the image. I generated the data because adding some distortion to our training data will help our model be more robust.

In [20]:
### Define your architecture here.
### Feel free to use as many code cells as needed.
from tensorflow.contrib.layers import flatten

def conv2d(input, weight_tuple, strides_list):
    # Filter (weights and bias)
    depth = weight_tuple[-1]
    F_W = tf.Variable(tf.truncated_normal(weight_tuple))
    F_b = tf.Variable(tf.zeros(depth))
    strides = strides_list
    padding = 'VALID'
    return tf.nn.relu(tf.nn.conv2d(input, F_W, strides, padding) + F_b)

def max_pool(input, ksize=[1,3,3,1], strides=[1,2,2,1]):
    padding = 'VALID'
    return tf.nn.max_pool(input, ksize, strides, padding)

def alexnet_layer(input, weight_tuple, strides_list):
    conv = conv2d(input, weight_tuple, strides_list)
    max_layer = max_pool(conv)
    return tf.nn.local_response_normalization(max_layer)

def fully_connected(fc, fc_shape):
    units = fc_shape[-1]
    W = tf.Variable(tf.truncated_normal(shape=(fc_shape)))
    b = tf.Variable(tf.zeros(units))
    fc_mult = tf.matmul(fc, W) + b
    return tf.nn.tanh(fc_mult)

def alexnet(x):
    network = alexnet_layer(x, (4,4,1,14), [1,4,4,1])
    network = alexnet_layer(network, (2,2,14,38), [1,1,1,1])
    network = conv2d(network, (2,2,38,57), [1,1,1,1])
    network = conv2d(network, (2,2,57,57), [1,1,1,1])
    network = alexnet_layer(network, (2,2,57,38), [1,1,1,1])

    network = flatten(network)
    network_shape = (network.get_shape().as_list()[-1], 4096)
    network = fully_connected(network, network_shape)
    network = tf.nn.dropout(network, 0.5)
    network = fully_connected(network, (4096, 4096))
    network = tf.nn.dropout(network, 0.5)
    
    W = tf.Variable(tf.truncated_normal(shape=(4096, 43)))
    b = tf.Variable(tf.zeros(43))
    return tf.matmul(network, W) + b

def LeNet(x):
    x = conv2d(x, (5,5,3,6), [1,1,1,1])
    x = max_pool(x, [1,2,2,1], [1,2,2,1])
    
    x = conv2d(x, (2,2,6,16), [1,2,2,1])
    x = max_pool(x, [1,2,2,1], [1,2,2,1])
    
    # Flatten
    fc1 = flatten(x)
    # (5 * 5 * 16, 120)
    fc1_shape = (fc1.get_shape().as_list()[-1], 120)
    
    fc1_W = tf.Variable(tf.truncated_normal(shape=(fc1_shape)))
    fc1_b = tf.Variable(tf.zeros(120))
    fc1 = tf.matmul(fc1, fc1_W) + fc1_b
    fc1 = tf.nn.relu(fc1)
    
    fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 43)))
    fc2_b = tf.Variable(tf.zeros(43))
    return tf.matmul(fc1, fc2_W) + fc2_b

def simple_net(x):
    F_W = tf.Variable(tf.truncated_normal((3,3,3,32)))
    F_b = tf.Variable(tf.zeros(32))
    conv_layer = tf.nn.relu(tf.nn.conv2d(x, F_W, [1,1,1,1], "VALID") + F_b)
    conv_layer = conv2d(x, (3,3,3,32), [1,1,1,1])
    fc1 = flatten(conv_layer)
    fc1_shape = (fc1.get_shape().as_list()[-1], 128)
    
    fc1_W = tf.Variable(tf.truncated_normal(shape=(fc1_shape)))
    fc1_b = tf.Variable(tf.zeros(128))
    fc1 = tf.matmul(fc1, fc1_W) + fc1_b
    fc1 = tf.nn.relu(fc1)
    
    fc2_W = tf.Variable(tf.truncated_normal(shape=(128, 43)))
    fc2_b = tf.Variable(tf.zeros(43))
    return tf.matmul(fc1, fc2_W) + fc2_b

Question 3

What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow from the classroom.

Answer:

In [21]:
### Train your model here.
### Feel free to use as many code cells as needed.

x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.float32, (None, 43))
fc2 = simple_net(x)

loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(fc2, y))
opt = tf.train.AdamOptimizer()
train_op = opt.minimize(loss_op)
correct_prediction = tf.equal(tf.argmax(fc2, 1), tf.argmax(y, 1))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
In [22]:
def eval_data(dataset, BATCH_SIZE):
    """
    Given a dataset as input returns the loss and accuracy.
    """
    # If dataset.num_examples is not divisible by BATCH_SIZE
    # the remainder will be discarded.
    # Ex: If BATCH_SIZE is 64 and training set has 55000 examples
    # steps_per_epoch = 55000 // 64 = 859
    # num_examples = 859 * 64 = 54976
    #
    # So in that case we go over 54976 examples instead of 55000.
    steps_per_epoch = dataset.length() // BATCH_SIZE
    num_examples = steps_per_epoch * BATCH_SIZE
    total_acc, total_loss = 0, 0
    sess = tf.get_default_session()
    for step in range(steps_per_epoch):
        batch_x, batch_y = dataset.next_batch(BATCH_SIZE)
        loss, acc = sess.run([loss_op, accuracy_op], feed_dict={x: batch_x, y: batch_y})
        total_acc += (acc * batch_x.shape[0])
        total_loss += (loss * batch_x.shape[0])
    return total_loss/num_examples, total_acc/num_examples
In [23]:
EPOCHS = 100
BATCH_SIZE = 128
saver = tf.train.Saver()

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    sess.run(tf.initialize_all_variables())
    steps_per_epoch = train_ds.length() // BATCH_SIZE
    num_examples = steps_per_epoch * BATCH_SIZE

    # Train model
    for i in range(EPOCHS):
        for step in range(steps_per_epoch):
            batch_x, batch_y = train_ds.next_batch(BATCH_SIZE)
            loss = sess.run(train_op, feed_dict={x: batch_x, y: batch_y})

        val_loss, val_acc = eval_data(val_ds, BATCH_SIZE)
        print("EPOCH {} ...".format(i+1))
        print("Validation loss = {:.3f}".format(val_loss))
        print("Validation accuracy = {:.3f}".format(val_acc))
        print()
EPOCH 1 ...
Validation loss = 13.227
Validation accuracy = 0.072

EPOCH 2 ...
Validation loss = 4.869
Validation accuracy = 0.058

---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-23-12b6aa324322> in <module>()
     12         for step in range(steps_per_epoch):
     13             batch_x, batch_y = train_ds.next_batch(BATCH_SIZE)
---> 14             loss = sess.run(train_op, feed_dict={x: batch_x, y: batch_y})
     15 
     16         val_loss, val_acc = eval_data(val_ds, BATCH_SIZE)

/Users/tylerfolkman/anaconda/envs/carnd/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    715     try:
    716       result = self._run(None, fetches, feed_dict, options_ptr,
--> 717                          run_metadata_ptr)
    718       if run_metadata:
    719         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/Users/tylerfolkman/anaconda/envs/carnd/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    913     if final_fetches or final_targets:
    914       results = self._do_run(handle, final_targets, final_fetches,
--> 915                              feed_dict_string, options, run_metadata)
    916     else:
    917       results = []

/Users/tylerfolkman/anaconda/envs/carnd/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
    963     if handle is None:
    964       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
--> 965                            target_list, options, run_metadata)
    966     else:
    967       return self._do_call(_prun_fn, self._session, handle, feed_dict,

/Users/tylerfolkman/anaconda/envs/carnd/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
    970   def _do_call(self, fn, *args):
    971     try:
--> 972       return fn(*args)
    973     except errors.OpError as e:
    974       message = compat.as_text(e.message)

/Users/tylerfolkman/anaconda/envs/carnd/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
    952         return tf_session.TF_Run(session, options,
    953                                  feed_dict, fetch_list, target_list,
--> 954                                  status, run_metadata)
    955 
    956     def _prun_fn(session, handle, feed_dict, fetch_list):

KeyboardInterrupt: 
In [ ]:
y_train.shape
In [ ]:
from keras.models import Sequential
from keras.layers import Dense, Input, Activation, Conv2D, Flatten

X_train = X_train.astype('float32')
X_val = X_val.astype('float32')
X_train = X_train / 255 - 0.5
X_val = X_val / 255 - 0.5
y_train = dense_to_one_hot(y_train, 43)
y_val = dense_to_one_hot(y_val)

model = Sequential()
model.add(Conv2D(32, 3, 3, input_shape=(32, 32, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(43, activation='softmax'))

model.summary()
# TODO: Compile and train the model here.
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(X_train, y_train,
                    batch_size=128, nb_epoch=20,
                    verbose=1, validation_data=(X_val, y_val))

Question 4

How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)

Answer:

Question 5

What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem.

Answer:


Step 3: Test a Model on New Images

Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Implementation

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.

In [ ]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.

Question 6

Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.

Answer:

In [ ]:
### Run the predictions here.
### Feel free to use as many code cells as needed.

Question 7

Is your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate.

NOTE: You could check the accuracy manually by using signnames.csv (same directory). This file has a mapping from the class id (0-42) to the corresponding sign name. So, you could take the class id the model outputs, lookup the name in signnames.csv and see if it matches the sign from the image.

Answer:

In [ ]:
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.

Question 8

Use the model's softmax probabilities to visualize the certainty of its predictions, tf.nn.top_k could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.

Take this numpy array as an example:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.

Answer:

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In [ ]: