---
title: "Review Stat. 654"
author: "Prof. Eric A. Suess"
format: revealjs
---

## Review

Today we will do some review of the class, and I will suggest some next steps for learning about Neural Networks and TensorFlow/Keras.

## Review

We have been using the Hold-out Method, which means we have been splitting our data into

- Training data
- Validation data
- Test data

We have also been using k-fold cross-validation.

## Review

Working with the new data types we have seen is a bit challenging, but tensors give us a way to store many different kinds of data.

Pictures are tensors. Words can be converted to numbers using an Autoencoder.

## Review

We have covered all of the basic neural network designs.

- Sequential
- Feed-forward Neural Networks
- Convolutional Neural Networks
- Recurrent Layers
  - LSTM
- Generative Neural Networks
  - GANs
- Reinforcement Learning

## Review

We have discussed some of the newer applications related to neural networks.

- Stable Diffusion
- Attention
- Large Language Models

## Review

The basic idea of fitting Neural Networks uses

- Gradient Descent
- Backpropagation

## Review

We have discussed the idea of Transfer Learning and Pre-Trained Models.

We have also discussed saving a fitted neural network and using it later or elsewhere.

## Open Source software

We have been using Open Source software. You are now aware of some of the difficulties of working with and using R and TensorFlow/Keras. Things change quickly, and you may run into problems.

But when things are working, they work great! Hopefully you agree.

## Our Book

- We have been using [Deep Learning with R](https://www.manning.com/books/deep-learning-with-r).
- If you become interested in learning Python, there is the other book from the same author, [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python), with his [book GitHub](https://github.com/fchollet/deep-learning-with-python-notebooks).
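## Example: Hold-out and k-fold splits

The splitting ideas from the review can be sketched in a few lines. This is a minimal illustration in plain Python rather than the R/Keras code we used in class; the helper names `holdout_split` and `kfold_indices` are my own, not from any library.

```python
import random

def holdout_split(data, train_frac=0.6, val_frac=0.2, seed=42):
    """Shuffle the data, then split into training / validation / test sets."""
    data = list(data)
    random.Random(seed).shuffle(data)
    n = len(data)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]        # the remaining examples
    return train, val, test

def kfold_indices(n, k):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    idx = list(range(n))
    fold_size = n // k
    for i in range(k):
        start = i * fold_size
        stop = (i + 1) * fold_size if i < k - 1 else n
        val_idx = idx[start:stop]              # this fold validates
        train_idx = idx[:start] + idx[stop:]   # the rest trains
        yield train_idx, val_idx

train, val, test = holdout_split(range(100))
print(len(train), len(val), len(test))   # 60 20 20
```

In practice libraries such as scikit-learn or the `validation_split` argument in Keras handle this for you; the point is just that every example lands in exactly one split.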
## Google

- [TensorFlow](https://www.tensorflow.org/) [Tutorials](https://www.tensorflow.org/tutorials/)
- [CoLab](https://colab.research.google.com)

## Alternatives

- [R for torch](https://torch.mlverse.org/)
- [PyTorch](https://pytorch.org/)
- [MXNet](https://mxnet.apache.org/)
- [PaddlePaddle](https://github.com/PaddlePaddle/Paddle)

## Hugging Face

Open source models.

- [Hugging Face](https://huggingface.co/)
- [Learn](https://huggingface.co/learn): [NLP](https://huggingface.co/learn/nlp-course/chapter1/1), [RL](https://huggingface.co/learn/deep-rl-course/unit0/introduction)

## LLMs

- [groq.com](https://groq.com/)
- [Command-R+](https://www.command-r.com/), [LLMU](https://docs.cohere.com/docs/llmu)
- [OpenAI](https://openai.com/)

## Read Chapter 14 Conclusions

**How to think about deep learning**

"The most surprising thing about deep learning is how simple it is. Ten years ago, no one expected that we would achieve such amazing results on machine-perception problems by using simple parametric models trained with gradient descent. Now, it turns out that all you need is sufficiently large parametric models trained with gradient descent on sufficiently many examples. As Feynman once said about the universe, 'It's not complicated, it's just a lot of it.'"
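## Example: Gradient Descent

The quote's point, that parametric models trained with gradient descent are all you need, can be seen in miniature. Here is a sketch in plain Python (not the R/Keras code from class) that fits the two-parameter model $y = wx + b$; for a single layer, backpropagation reduces to the two partial derivatives written out below, and `fit()` in Keras runs the same loop at much larger scale.

```python
# Fit y = w*x + b to data by gradient descent on mean-squared-error loss.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]           # generated by y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
n = len(xs)
for _ in range(2000):
    # dL/dw and dL/db, derived by hand (backpropagation for one layer)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w                     # step downhill
    b -= lr * grad_b

print(round(w, 2), round(b, 2))          # converges near w=2, b=1
```

The learning rate and step count here were picked by hand for this toy data; optimizers such as RMSprop or Adam automate those choices for real networks.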