Building a simple neural network using Keras and Tensorflow

Author

Prof. Eric A. Suess

Update: The original code has been updated to use the tidymodels initial_split() function rather than the original index-based method, which used setdiff() and may now run into a conflict between base R and the tidyverse.

Thank you

A big thank you to Leon Jessen for posting his code on github.

Building a simple neural network using Keras and Tensorflow

I have forked his project on github and put his code into an R Notebook so we can run it in class.

Motivation

The following is a minimal example for building your first simple artificial neural network using Keras and TensorFlow for R.

TensorFlow for R by RStudio lives here.

Getting started - Install Keras and TensorFlow for R

You can install the Keras for R package from CRAN as follows:

# install.packages("keras")

TensorFlow is the default backend engine. TensorFlow and Keras can be installed as follows:

# library(keras)
# install_keras()

Naturally, we will also need the tidyverse.

# Install from CRAN
# install.packages("tidyverse")

# Or the development version from GitHub
# install.packages("devtools")
# devtools::install_github("hadley/tidyverse")

Once installed, we simply load the libraries.

library("keras")
suppressMessages(library("tidyverse"))

Artificial Neural Network Using the Iris Data Set

Right, let’s get to it!

Data

The famous (Fisher’s or Anderson’s) iris data set contains a total of 150 observations of 4 input features (Sepal.Length, Sepal.Width, Petal.Length and Petal.Width) and 3 output classes (setosa, versicolor and virginica), with 50 observations in each class. The distributions of the feature values look like so:

iris_tib <- as_tibble(iris)
iris_tib
# A tibble: 150 × 5
   Sepal.Length Sepal.Width Petal.Length Petal.Width Species
          <dbl>       <dbl>        <dbl>       <dbl> <fct>  
 1          5.1         3.5          1.4         0.2 setosa 
 2          4.9         3            1.4         0.2 setosa 
 3          4.7         3.2          1.3         0.2 setosa 
 4          4.6         3.1          1.5         0.2 setosa 
 5          5           3.6          1.4         0.2 setosa 
 6          5.4         3.9          1.7         0.4 setosa 
 7          4.6         3.4          1.4         0.3 setosa 
 8          5           3.4          1.5         0.2 setosa 
 9          4.4         2.9          1.4         0.2 setosa 
10          4.9         3.1          1.5         0.1 setosa 
# ℹ 140 more rows
iris_tib %>% pivot_longer(names_to = "feature", values_to = "value", -Species) %>%
  ggplot(aes(x = feature, y = value, fill = Species)) +
  geom_violin(alpha = 0.5, scale = "width") +
  theme_bw()

Our aim is to connect the 4 input features to the correct output class using an artificial neural network. For this task, we have chosen the following simple architecture with one input layer with 4 neurons (one for each feature), one hidden layer with 4 neurons and one output layer with 3 neurons (one for each class), all fully connected.

architecture_visualisation.png

Our artificial neural network will have a total of 35 parameters: 4 weights for each of the 4 input neurons connected to the hidden layer, plus 4 for the hidden-layer bias neuron, and 3 weights for each of the 4 hidden neurons connected to the output layer, plus 3 for the output-layer bias neuron, i.e. \(4 \times 4 + 4 + 4 \times 3 + 3 = 35\)
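
The same count can be verified with a couple of lines of R (a minimal sketch; the layer sizes simply restate the 4-4-3 architecture described above):

```r
# Weights and biases for a fully connected 4-4-3 network
n_input  <- 4
n_hidden <- 4
n_output <- 3

n_params <- (n_input * n_hidden + n_hidden) +   # input -> hidden weights + biases
            (n_hidden * n_output + n_output)    # hidden -> output weights + biases
n_params
# [1] 35
```

This matches the "Total params: 35" reported by Keras in the model summary further down.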

Prepare data

We start by lightly wrangling the iris data set: renaming and scaling the features and converting the factor labels to numeric.

set.seed(265509)
nn_dat <- iris_tib %>%
  mutate(sepal_length = scale(Sepal.Length),
         sepal_width  = scale(Sepal.Width),
         petal_length = scale(Petal.Length),
         petal_width  = scale(Petal.Width),          
         class_label  = as.numeric(Species) - 1) %>% 
    select(sepal_length, sepal_width, petal_length, petal_width, class_label)

nn_dat %>% head()
# A tibble: 6 × 5
  sepal_length[,1] sepal_width[,1] petal_length[,1] petal_width[,1] class_label
             <dbl>           <dbl>            <dbl>           <dbl>       <dbl>
1           -0.898          1.02              -1.34           -1.31           0
2           -1.14          -0.132             -1.34           -1.31           0
3           -1.38           0.327             -1.39           -1.31           0
4           -1.50           0.0979            -1.28           -1.31           0
5           -1.02           1.25              -1.34           -1.31           0
6           -0.535          1.93              -1.17           -1.05           0

Then, we split the iris data into a training and a test data set, setting aside 20% of the data for testing.

library(tidymodels)
── Attaching packages ────────────────────────────────────── tidymodels 1.3.0 ──
✔ broom        1.0.7          ✔ rsample      1.2.1.9000
✔ dials        1.4.0.9000     ✔ tune         1.3.0.9000
✔ infer        1.0.7          ✔ workflows    1.2.0.9000
✔ modeldata    1.4.0          ✔ workflowsets 1.1.0     
✔ parsnip      1.3.0.9000     ✔ yardstick    1.3.2     
✔ recipes      1.1.1.9000     
── Conflicts ───────────────────────────────────────── tidymodels_conflicts() ──
✖ scales::discard()        masks purrr::discard()
✖ dplyr::filter()          masks stats::filter()
✖ recipes::fixed()         masks stringr::fixed()
✖ yardstick::get_weights() masks keras::get_weights()
✖ dplyr::lag()             masks stats::lag()
✖ yardstick::spec()        masks readr::spec()
✖ recipes::step()          masks stats::step()
set.seed(364)
n <- nrow(nn_dat)
n
[1] 150
iris_parts <- nn_dat %>%
  initial_split(prop = 0.8)

train <- iris_parts %>%
  training()

test <- iris_parts %>%
  testing()

list(train, test) %>%
  map_int(nrow)
[1] 120  30
n_total_samples <- nrow(nn_dat)

n_train_samples <- nrow(train)

n_test_samples <- nrow(test)

Create training and test data

Note that the functions in the keras package expect the data to be in a matrix, not a tibble, so as.matrix() is added at the end of each pipeline.

x_train <- train %>% select(-class_label) %>% as.matrix()
y_train <- train %>% select(class_label) %>% as.matrix() %>% to_categorical()

x_test <- test %>% select(-class_label) %>% as.matrix()
y_test <- test %>% select(class_label) %>% as.matrix() %>% to_categorical() 

dim(y_train)
[1] 120   3
dim(y_test)
[1] 30  3
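
For intuition, to_categorical() one-hot encodes the integer class labels (0, 1, 2) into the 3-column indicator matrices whose dimensions are printed above. A base-R sketch of the same idea (the helper name one_hot is my own, not part of keras):

```r
# One-hot encode integer labels 0, 1, 2, ... into an n x k indicator matrix,
# mirroring what keras::to_categorical() produces
one_hot <- function(labels, n_classes = max(labels) + 1) {
  # row i of the identity matrix is the one-hot vector for class i - 1
  diag(n_classes)[labels + 1, , drop = FALSE]
}

one_hot(c(0, 1, 2, 1))
```

Each row has a single 1 in the column of its class, which is exactly the target format that categorical_crossentropy expects.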

Set Architecture

With the data in place, we now set the architecture of our neural network.

model <- keras_model_sequential()
model %>% 
  layer_dense(units = 4, activation = 'relu', input_shape = 4) %>% 
  layer_dense(units = 3, activation = 'softmax')
model %>% summary
Model: "sequential"
________________________________________________________________________________
 Layer (type)                       Output Shape                    Param #     
================================================================================
 dense_1 (Dense)                    (None, 4)                       20          
 dense (Dense)                      (None, 3)                       15          
================================================================================
Total params: 35
Trainable params: 35
Non-trainable params: 0
________________________________________________________________________________

Next, the architecture set in the model needs to be compiled.

model %>% compile(
  loss      = 'categorical_crossentropy',
  optimizer = optimizer_rmsprop(),
  metrics   = c('accuracy')
)
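
As a reminder of what the loss measures: for a single observation, categorical cross-entropy is \(-\sum_k y_k \log(\hat{p}_k)\), where \(y\) is the one-hot target and \(\hat{p}\) the softmax output. A minimal base-R sketch (the probability values here are made up for illustration):

```r
# Categorical cross-entropy for one observation
y_true <- c(0, 1, 0)        # one-hot target: class 2
y_pred <- c(0.2, 0.7, 0.1)  # softmax output of the network

loss <- -sum(y_true * log(y_pred))
loss
# [1] 0.3566749
```

Only the probability assigned to the true class matters, so the loss here reduces to -log(0.7); confident correct predictions drive the loss toward 0.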

Train the Artificial Neural Network

Lastly we fit the model and save the training progress in the history object.

Try changing validation_split from 0 to 0.2 to see the validation loss reported during training.

history <- model %>% fit(
  x = x_train, y = y_train,
  epochs = 200,
  batch_size = 20,
  validation_split = 0.2
)
Epoch 1/200
5/5 - 1s - loss: 0.9507 - accuracy: 0.6458 - val_loss: 1.2004 - val_accuracy: 0.5417 - 959ms/epoch - 192ms/step
Epoch 2/200
5/5 - 0s - loss: 0.9358 - accuracy: 0.6354 - val_loss: 1.1958 - val_accuracy: 0.5833 - 43ms/epoch - 9ms/step
Epoch 3/200
5/5 - 0s - loss: 0.9255 - accuracy: 0.6354 - val_loss: 1.1924 - val_accuracy: 0.5833 - 28ms/epoch - 6ms/step
... (epochs 4 through 198 omitted; training loss falls and accuracy rises steadily) ...
Epoch 199/200
5/5 - 0s - loss: 0.3664 - accuracy: 0.9271 - val_loss: 0.5623 - val_accuracy: 0.6667 - 28ms/epoch - 6ms/step
Epoch 200/200
5/5 - 0s - loss: 0.3646 - accuracy: 0.9271 - val_loss: 0.5600 - val_accuracy: 0.6667 - 29ms/epoch - 6ms/step
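
The "5/5" at the start of each log line is the number of batches per epoch, and it follows from the fit arguments. With validation_split = 0.2, Keras holds out the last 20% of the 120 training rows for validation, leaving the rest for gradient updates (a quick arithmetic check, assuming Keras's take-the-last-fraction split):

```r
# Why the training log shows 5 steps per epoch
n_train    <- 120   # rows in the training set
val_frac   <- 0.2   # validation_split passed to fit()
batch_size <- 20    # batch_size passed to fit()

n_fit   <- n_train - floor(val_frac * n_train)  # rows actually used for updates
n_steps <- ceiling(n_fit / batch_size)          # batches per epoch
c(n_fit, n_steps)
# [1] 96  5
```

96 rows at a batch size of 20 gives 4 full batches plus one partial one, hence 5 steps.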
plot(history) +
  ggtitle("Training a neural network based classifier on the iris data set") +
  theme_bw()

Evaluate Network Performance

The final performance can be obtained like so.

perf <- model %>% evaluate(x_test, y_test)
1/1 - 0s - loss: 0.4254 - accuracy: 0.9000 - 17ms/epoch - 17ms/step
print(perf)
     loss  accuracy 
0.4253841 0.9000000 

For the next plot, the predicted and true values need to be in vectors. Note that the true values need to be unlisted before being converted to a numeric vector.

classes <- iris %>% pull(Species) %>% unique()
y_pred  <- model %>% predict(x_test) %>% k_argmax() %>% as.vector()
1/1 - 0s - 47ms/epoch - 47ms/step
y_true  <- test %>% select(class_label) %>% unlist() %>% as.numeric()

tibble(y_true = classes[y_true + 1], y_pred = classes[y_pred + 1],
       Correct = ifelse(y_true == y_pred, "Yes", "No") %>% factor) %>% 
  ggplot(aes(x = y_true, y = y_pred, colour = Correct)) +
  geom_jitter() +
  theme_bw() +
  ggtitle(label = "Classification Performance of Artificial Neural Network",
          subtitle = str_c("Accuracy = ",round(perf[2],3)*100,"%")) +
  xlab(label = "True iris class") +
  ylab(label = "Predicted iris class")

library(gmodels)

CrossTable(y_pred, y_true,
           prop.chisq = FALSE, prop.t = FALSE, prop.r = FALSE,
           dnn = c('predicted', 'actual'))

 
   Cell Contents
|-------------------------|
|                       N |
|           N / Col Total |
|-------------------------|

 
Total Observations in Table:  30 

 
             | actual 
   predicted |         0 |         1 |         2 | Row Total | 
-------------|-----------|-----------|-----------|-----------|
           0 |         9 |         0 |         0 |         9 | 
             |     0.900 |     0.000 |     0.000 |           | 
-------------|-----------|-----------|-----------|-----------|
           1 |         1 |         8 |         1 |        10 | 
             |     0.100 |     0.889 |     0.091 |           | 
-------------|-----------|-----------|-----------|-----------|
           2 |         0 |         1 |        10 |        11 | 
             |     0.000 |     0.111 |     0.909 |           | 
-------------|-----------|-----------|-----------|-----------|
Column Total |        10 |         9 |        11 |        30 | 
             |     0.333 |     0.300 |     0.367 |           | 
-------------|-----------|-----------|-----------|-----------|
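
The test accuracy can be recovered directly from the confusion matrix above: the diagonal holds the correctly classified observations. A base-R sketch using the counts hand-copied from the CrossTable output (not recomputed from the model):

```r
# Confusion matrix counts from the CrossTable output
# (rows = predicted class, columns = actual class)
conf_mat <- matrix(c(9, 0,  0,
                     1, 8,  1,
                     0, 1, 10),
                   nrow = 3, byrow = TRUE)

accuracy <- sum(diag(conf_mat)) / sum(conf_mat)
accuracy
# [1] 0.9
```

27 of the 30 test observations land on the diagonal, matching the accuracy reported by evaluate().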

 

Conclusion

I hope this illustrated just how easy it is to get started building artificial neural networks using Keras and TensorFlow in R. With relative ease, we created a 3-class predictor with an accuracy of 90% on the held-out test set. This was a basic minimal example; the network can be expanded to create deep learning networks, and the entire TensorFlow API is also available.

Enjoy and Happy Learning!

Leon

Thanks again Leon, this was awesome!!!