This project is part of the Deep Learning Foundations Nanodegree taught by Udacity. The source code for running this project is available in my repository on GitHub.

In this project, I’ll build a neural network and use it to predict daily bike rental ridership.

Step 1: Load and prepare the data

A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights.

data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()

Expected outcome:
Load and prepare the data

Step 2: Checking out the data

This dataset has the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. The number of riders is split between casual and registered, and summed up in the cnt column.

Below is a plot showing the number of bike riders over the first 10 days or so in the data set (some days don't have exactly 24 entries, so it isn't exactly 10 days). I can plot this slice as follows:

rides[:24*10].plot(x='dteday', y='cnt')

You can see the hourly rentals here. The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Besides the ride counts, the data also includes information about temperature, humidity, and wind speed, all of which likely affect the number of riders.

Checking out the data

Step 3: Binary dummy variables

Here there are some categorical variables, such as season, weather, and month. To include these in the model, I'll need to make binary dummy variables. This is simple to do with pandas' get_dummies().

dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']

# Create binary dummy columns for each categorical field and append them to the data
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)

# Drop the original categorical columns, plus fields that won't be used as features
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr']

data = rides.drop(fields_to_drop, axis=1)
data.head()
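
As a quick illustration of what get_dummies produces (a toy example, not part of the project data), each distinct value of a categorical column becomes its own binary column:

import pandas as pd

toy = pd.DataFrame({'season': [1, 2, 3, 1]})

# One indicator column per season value; each row is flagged in the column matching its season
print(pd.get_dummies(toy['season'], prefix='season'))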

Expected outcome:
Dummy variables

Step 4: Scaling target variables

To make training the network easier, I’ll standardize each of the continuous variables, that is, I’ll shift and scale the variables such that they have zero mean and a standard deviation of 1.

The scaling factors are saved so I can go backwards when I use the network for predictions.

quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']

# Store the scaling factors so the values can be converted back later
scaled_features = {}

for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - mean)/std
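
For reference, undoing this scaling later is just the inverse operation. A minimal sketch using the saved factors (the same idea is applied when plotting the test predictions in Step 11):

mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt'] * std + mean  # back to the original ride counts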

Step 5: Splitting the data into training, testing, and validation sets

I’ll save the data for the last approximately 21 days to use as a test set after I’ve trained the network. I’ll use this set to make predictions and compare them with the actual number of riders.

# Hold out approximately the last 21 days as the test set
test_data = data[-21*24:]
data = data[:-21*24]

# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]

I’ll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, I’ll train on historical data, then try to predict on future data (the validation set).

# Hold out the last 60 days of the remaining data as the validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
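
A quick sanity check of the split (a sketch; the exact row counts depend on the dataset):

# The validation set should hold the last 60 days (60 * 24 hourly records)
print(val_features.shape[0] == 60 * 24)
print(train_features.shape[0] + val_features.shape[0] == features.shape[0])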

Step 6: Build the neural network

Below I'll build my network, implementing both the forward pass and the backpropagation. Here I set the hyperparameters:

  • the learning rate
  • the number of hidden units
  • the number of training passes

The network has two layers, a hidden layer and an output layer. The hidden layer uses the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input to the node, that is, the activation function is f(x) = x. I work through each layer of the network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons in the next layer. This process is called forward propagation.

I use the weights to propagate signals forward from the input to the output layers in the neural network. I also use the weights to propagate errors backwards from the output into the network, in order to update the weights. This is called backpropagation.

Hint: I need the derivative of the output activation function (f(x) = x) for the backpropagation implementation. This function is equivalent to the line y = x, whose slope is 1 everywhere, so the derivative of f(x) is just 1.
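
As a small aside (not part of the project code), both activation functions and their derivatives can be written in a couple of lines:

import numpy as np

sigmoid = lambda x: 1 / (1 + np.exp(-x))                  # hidden layer activation
sigmoid_prime = lambda x: sigmoid(x) * (1 - sigmoid(x))   # derivative of the sigmoid

identity = lambda x: x                                    # output layer activation, f(x) = x
identity_prime = lambda x: np.ones_like(x)                # slope of y = x is 1 everywhere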

Below, I execute these tasks:

  • Implement the sigmoid function to use as the activation function, and set self.activation_function in __init__ to that sigmoid function
  • Implement the forward pass in the train method
  • Implement the backpropagation algorithm in the train method, including calculating the output error
  • Implement the forward pass in the run method.

import numpy as np

class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.input_nodes))

        self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, (self.output_nodes, self.hidden_nodes))
        self.lr = learning_rate
        # Sigmoid activation for the hidden layer (sigmoid is defined below, at module level)
        self.activation_function = sigmoid
                    
    def train(self, inputs_list, targets_list):
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T
        
        ### Forward pass ###
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)  # signals into the hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)      # signals from the hidden layer

        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # signals into the final output layer
        final_outputs = final_inputs  # output activation is f(x) = x, so the signal passes through unchanged

        ### Backward pass ###
        output_errors = targets - final_outputs  # difference between desired target and actual output

        # Errors propagated back to the hidden layer (the output activation derivative is 1)
        hidden_errors = output_errors * self.weights_hidden_to_output * 1.0
        # Hidden layer gradients, using the sigmoid derivative h * (1 - h)
        hidden_grad = np.dot(hidden_errors.T * hidden_outputs * (1 - hidden_outputs), inputs.T)

        # Gradient for the hidden-to-output weights
        grad_out = np.dot(hidden_outputs, output_errors * 1.0)

        # Gradient descent step on both weight matrices
        self.weights_hidden_to_output += self.lr * grad_out.T / inputs.shape[1]
        self.weights_input_to_hidden += self.lr * hidden_grad / inputs.shape[1]
        
    def run(self, inputs_list):
        inputs = np.array(inputs_list, ndmin=2).T
        
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)  # signals into the hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)      # signals from the hidden layer

        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # signals into the final output layer
        final_outputs = final_inputs  # linear output activation, f(x) = x
        return final_outputs


def MSE(y, Y):
    return np.mean((y - Y)**2)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))
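
Before wiring the class into the full training loop, a tiny smoke test (toy numbers, not project data) can confirm that the shapes line up and that a training step moves the prediction toward the target:

import numpy as np

np.random.seed(42)
tiny_net = NeuralNetwork(3, 2, 1, 0.5)  # 3 inputs, 2 hidden nodes, 1 output, learning rate 0.5

record = [0.5, -1.0, 1.0]  # one input record with 3 features
target = [0.4]             # one (scaled) target value

before = tiny_net.run(record)
tiny_net.train(record, target)
after = tiny_net.run(record)

print(before, after)  # the prediction should move toward 0.4 after the update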

Step 7: Training the neural network

Here I'll set the hyperparameters for the network. The strategy is to find hyperparameters such that the error on the training set is low without overfitting to the data.

If I train the network too long or use too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

I'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. For each training pass, I grab a random sample of the data instead of using the whole data set. I use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently.
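
In code, that sampling is a single numpy call. A minimal sketch of one SGD training pass (it assumes the feature/target variables from the splitting step and a NeuralNetwork instance called network, created in the training code below):

import numpy as np

# Grab 128 random training records instead of iterating over the whole training set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.loc[batch].values, train_targets.loc[batch]['cnt']):
    network.train(record, target)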

Step 8: Choose the number of epochs

This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. I need to choose enough epochs to train the network well, but not so many that it overfits.

Step 9: Choose the learning rate

This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1.

Hint: The lower the learning rate, the smaller the steps in the weight updates and the longer it takes for the neural network to converge.
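
As a toy illustration (made-up numbers, not project data), the learning rate simply scales the size of each weight update:

import numpy as np

weights = np.array([0.5, -0.3])
gradient = np.array([0.2, -0.1])

for lr in (1.0, 0.1, 0.01):
    print(lr, weights + lr * gradient)  # smaller learning rate -> smaller step per update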

Step 10: Choose the number of hidden nodes

Up to a point, the more hidden nodes there are, the more accurate the model's predictions will be. The losses dictionary can be used as a metric of the network's performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction the learning can take. The trick is to find the right balance in the number of hidden units.

import sys
import matplotlib.pyplot as plt

epochs = 1000
learning_rate = 0.1
hidden_nodes = 10
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}

for e in range(epochs):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values, train_targets.loc[batch]['cnt']):
        network.train(record, target)
    
    train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
        
    sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    
    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()

Results obtained for the training data set and for the validation data set:

Training loss and validation loss

Step 11: Check out your predictions

Here I use the test data to see how well the network is modeling the data.

fig, ax = plt.subplots(figsize=(8,4))

mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions[0]))
ax.legend()

dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)

Expected outcome:

Data predictions

Running the project

The files needed to run this project, as well as the datasets with bike rental and weather information, are available on my GitHub. There you will also find additional information on preparing the environment to run it.

Listening: Babymetal – Karate