Install Ollama and ellmer
Today we will install [Ollama](https://ollama.com/) to run LLMs locally, and the R package [ellmer](https://ellmer.tidyverse.org/index.html) to access those models from RStudio.
Ollama
To install Ollama, run the one command provided on the [Ollama](https://ollama.com/) website. The command is run in the Terminal, the tab to the right of Console in RStudio.
curl -fsSL https://ollama.com/install.sh | sh
ellmer
Install the R package ellmer (available on CRAN) and load it:
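install.packages("ellmer")
library(ellmer)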
Next we will download an [Ollama model](https://ollama.com/search) using a function from the R package [ollamar](https://hauselin.github.io/ollama-r/). Install it with install.packages("ollamar") if you don't have it yet, then load it.
library(ollamar)
Attaching package: 'ollamar'
The following object is masked from 'package:ellmer':
chat
The following object is masked from 'package:stats':
embed
The following object is masked from 'package:methods':
show
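The messages above tell you that ollamar and ellmer both export a chat() function (ollamar also masks stats::embed() and methods::show()). To avoid ambiguity, call ollamar functions with the ollamar:: prefix, as below.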
ollamar::pull("llama3.2")
<httr2_response>
POST http://127.0.0.1:11434/api/pull
Status: 200 OK
Content-Type: application/x-ndjson
Body: In memory (1006 bytes)
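To confirm the download from R, ollamar's list_models() should now include llama3.2 (a quick check; the exact output format depends on your ollamar version):
ollamar::list_models()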
Alternatively, you can run the following Ollama command in the Terminal to download the llama3.2 model and chat with it there.
ollama run llama3.2
To see your downloaded models, run the following in the Terminal:
ollama list
Run ellmer's [chat_ollama()](https://ellmer.tidyverse.org/reference/chat_ollama.html) in a Quarto notebook or in the Console.
chat <- chat_ollama(model = "llama3.2")
chat$chat("Tell me three jokes about Snoopy")Here are three Snoopy-themed jokes for you:
1. Why did Snoopy go to the doctor?
Because he was feeling a little ruff!
2. Why did Charlie Brown take his dog Snoopy on the badminton court?
Because Snoopy could always be depended upon to serve up trouble!
3. What did Snoopy say when Charlie Brown asked him if he wanted whipped cream?
"Snoopy loves you, but this isn't my top flake!"
shinychat
We will build a simple chat app with shinychat. Install the shinychat package and load it along with shiny (needed for shinyApp() and observeEvent()).
library(shinychat)
library(shiny)
# UI: a fluid page containing a single chat component
ui <- bslib::page_fluid(
  chat_ui("chat")
)

server <- function(input, output, session) {
  # One chat session per app session, backed by the local llama3.2 model
  chat <- chat_ollama(model = "llama3.2")

  # When the user submits a message, stream the model's reply into the UI
  observeEvent(input$chat_user_input, {
    stream <- chat$stream_async(input$chat_user_input)
    chat_append("chat", stream)
  })
}

shinyApp(ui, server)
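Run the app and type a message: chat$stream_async() returns the reply as it is generated, and chat_append() consumes that stream, so the response appears in the UI incrementally rather than all at once.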