Neural Network Example in R: Cube Root

In this article I will provide a detailed, step-by-step guide to implementing a neural network example in R.

Neural networks have varied applications such as character recognition, stock market prediction, information security, loan applications and self-driving cars. [1] In this article I have taken a simple example to help you get started, quickly implement a neural net and use it for prediction.

I have created a dataset with two columns, x and y, where:

  • x is a number between 1 and 1000
  • y is the cube root of x

I used Excel to quickly create the dataset of 500 rows and saved it to a csv file. x is generated using RANDBETWEEN(1,1000) in Excel.

[Figure: Neural Network Example in R - Cuberoot - Dataset Creation]
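If you prefer to stay inside R, the same kind of dataset can be generated without Excel. This is my own sketch; the seed and file name are my choices, not part of the original workflow:

```r
# Generate the cube-root dataset in R instead of Excel
set.seed(1)                                 # reproducibility (my choice)
x <- sample(1:1000, 500, replace = TRUE)    # like RANDBETWEEN(1,1000)
y <- x^(1/3)                                # cube root of x
dataset <- data.frame(x = x, y = y)
write.csv(dataset, "dataset_cuberoot.csv", row.names = FALSE)
```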

Once the data is ready we can start with the implementation part.

1. Launch R

2. We will be using the “neuralnet” package, so it has to be loaded.

library(neuralnet)

3. Read the csv file with our data and load it into the data frame mydata_unscaled.

mydata_unscaled <- read.csv(file="c:/Subhodeep/R/dataset_cuberoot.csv", header=TRUE, sep=",")

You can quickly check the data by typing:

mydata_unscaled

[Figure: Neural Network Example in R - Cuberoot - List in R]

4. Now the data has to be normalized between 0 and 1.

First we define the function “f” below, then apply “f” to the two columns of mydata_unscaled using lapply and assign the result to mydata.

There are many articles on the importance of data normalization. “In theory, it’s not necessary to normalize numeric x-data (also called independent data). However, practice has shown that when numeric x-data values are normalized, neural network training is often more efficient, which leads to a better predictor.” – James McCaffrey [2] Research has shown that “input data normalization with certain criteria, prior to a training process, is crucial to obtain good results as well as to fasten significantly the calculations.” [3]

f <- function(x) (x - min(x)) / (max(x) - min(x))   # min-max normalization to [0,1]
mydata <- as.data.frame(lapply(mydata_unscaled[1:2], f))
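To see what f does, here is a quick sanity check on a toy vector (my own illustration, not part of the original steps): the smallest value maps to 0, the largest to 1, and everything else lands proportionally in between.

```r
# Min-max normalization demo on a small vector
f <- function(x) (x - min(x)) / (max(x) - min(x))
v <- c(1, 250, 1000)
f(v)   # 0.0000000 0.2492492 1.0000000
```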

5. Next we divide the data into a training set and a test set: 70% of the rows go to the training set and the remaining 30% to the test set. The training set is used to train the model, and the test set will be used to judge whether the model is any good, by comparing predicted versus actual values.

index <- sample(nrow(mydata), round(0.7*nrow(mydata)))  # random 70% of row indices
train.mydata <- mydata[index,]
test.mydata <- mydata[-index,]
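The same split idiom on a tiny toy data frame makes the mechanics easier to see; the seed and the toy data here are my additions for reproducibility:

```r
# Illustrative 70/30 split, same idiom as above
set.seed(42)
toy <- data.frame(x = 1:10, y = (1:10)^(1/3))
idx <- sample(nrow(toy), round(0.7 * nrow(toy)))
train.toy <- toy[idx, ]    # 7 rows for training
test.toy  <- toy[-idx, ]   # 3 rows for testing
```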

6. Next we create a formula to be used by the neural network.

fml <- y~x

7. Now we are in a position to create the neural network. In this R neural network example, we will create a network with 1 input, 1 hidden layer of 10 neurons and 1 output, trained with neuralnet’s default resilient backpropagation (rprop+) algorithm.

nn.mydata <- neuralnet(fml,train.mydata,hidden=10)

This is what the net looks like when plotted with plot(nn.mydata):

[Figure: Neural Network Example in R - Plot Neuralnet]

8. Next apply the model to the test set.

pred.mydata <- compute(nn.mydata, test.mydata[, 1, drop = FALSE])  # pass x as a one-column data frame

9. Next I combine the predicted output and the actual output into the variable myop.

myop <- cbind(pred.mydata$net.result, test.mydata$y)

[Figure: Neural Network Example in R - Predicted versus Actual Output]

From a visual inspection it is clear that the output is quite close to the desired values. However, since the data is normalized, it is difficult to map the normalized numbers back to their real-world counterparts. So the next step is to de-normalize the data.
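Beyond eyeballing, a simple numeric check is the mean squared error between actual and predicted values. The helper below is my own addition; on the real output you would call it as mse(test.mydata$y, pred.mydata$net.result), while the values shown here are made up for illustration:

```r
# Mean squared error between actual and predicted values
mse <- function(actual, predicted) mean((actual - predicted)^2)

# Toy illustration with made-up normalized values
mse(c(0.1, 0.5, 0.9), c(0.12, 0.48, 0.91))   # 3e-04
```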

10. I create a denormalization function “g” and apply it to the model output.

g <- function(x,y) (x*(max(y)-min(y)) + min(y))   # inverse of the min-max normalization
res <- cbind(mydata_unscaled$x[-index], mydata_unscaled$y[-index], g(pred.mydata$net.result, mydata_unscaled$y))
colnames(res) <- c("x", "y_actual", "y_predicted")
res

[Figure: Neural Network Example in R - Denormalized Predicted versus Actual Output]

This brings us to the end of the R implementation of a neural network example for a simple cube root function. I hope you found this neural network example in R useful. Please feel free to share your comments.

References

[1] https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/Applications/index.html

[2] https://visualstudiomagazine.com/articles/2014/01/01/how-to-standardize-data-for-neural-networks.aspx

[3] Importance of input data normalization for the application of neural networks to complex industrial problems


Subhodeep Mukhopadhyay

I am a Management Consultant in the Education Sector. In my previous corporate career, I have worked in Banking, Private Equity and Software industry. I am an MBA in Finance/ Computer Engineer and enjoy doing equity research and financial analysis in my free time.
