Ollama
This page documents the use of Ollama models as text completion models. Many popular models available on Ollama are chat completion models; if that is what you need, see the Ollama chat model page instead.
Ollama allows you to run open-source large language models, such as Llama 3, locally.
Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.
This example goes over how to use LangChain to interact with a Llama 2 7B instance running locally via Ollama. For a complete list of supported models and model variants, see the Ollama model library.
Setup
Follow these instructions to set up and run a local Ollama instance.
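Once the local server is running, pull the model you plan to use (for example, by running ollama pull llama2 in a terminal) so it is available to the examples below. As a quick sanity check, you can confirm the server is reachable before wiring it into LangChain; this short sketch assumes the default port of 11434:

// Sanity check: a running Ollama server answers on its root endpoint
// with a short status message.
const response = await fetch("http://localhost:11434/");
console.log(await response.text()); // e.g. "Ollama is running"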
Usage
- npm: npm install @langchain/community
- Yarn: yarn add @langchain/community
- pnpm: pnpm add @langchain/community
import { Ollama } from "@langchain/community/llms/ollama";
const ollama = new Ollama({
  baseUrl: "http://localhost:11434", // Default value
  model: "llama2", // Default value
});
const stream = await ollama.stream(
  `Translate "I love programming" into German.`
);
const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}
console.log(chunks.join(""));
/*
I'm glad to help! "I love programming" can be translated to German as "Ich liebe Programmieren."
It's important to note that the translation of "I love" in German is "ich liebe," which is a more formal and polite way of saying "I love." In informal situations, people might use "mag ich" or "möchte ich" instead.
Additionally, the word "Programmieren" is the correct term for "programming" in German. It's a combination of two words: "Programm" and "-ieren," which means "to do something." So, the full translation of "I love programming" would be "Ich liebe Programmieren.
*/
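If you don't need token-by-token streaming, the same instance can also be called with invoke, which waits for the full completion and returns it as a single string:

// Non-streaming alternative: invoke() returns the entire completion at once.
const completion = await ollama.invoke(
  `Translate "I love programming" into German.`
);

console.log(completion);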
API Reference:
- Ollama from @langchain/community/llms/ollama
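Like other LangChain LLMs, the Ollama wrapper can also be composed with other runnables. The sketch below is illustrative rather than part of the original example: it assumes the @langchain/core package is installed and simply pipes a prompt template into the model.

import { PromptTemplate } from "@langchain/core/prompts";
import { Ollama } from "@langchain/community/llms/ollama";

// Illustrative chain: a prompt template piped into the locally running model.
const translationPrompt = PromptTemplate.fromTemplate(
  `Translate "{text}" into {language}.`
);

const llm = new Ollama({ model: "llama2" });
const chain = translationPrompt.pipe(llm);

const result = await chain.invoke({
  text: "I love programming",
  language: "German",
});

console.log(result);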
Multimodal models
Ollama supports open-source multimodal models such as LLaVA in version 0.1.15 and up. You can bind base64-encoded image data to a multimodal-capable model to use as context, like this:
import { Ollama } from "@langchain/community/llms/ollama";
import * as fs from "node:fs/promises";
const imageData = await fs.readFile("./hotdog.jpg");
const model = new Ollama({
  model: "llava",
  baseUrl: "http://127.0.0.1:11434",
}).bind({
  images: [imageData.toString("base64")],
});
const res = await model.invoke("What's in this image?");
console.log({ res });
/*
  {
    res: ' The image displays a hot dog sitting on top of a bun, which is placed directly on the table. The hot dog has a striped pattern on it and looks ready to be eaten.'
  }
*/
API Reference:
- Ollama from @langchain/community/llms/ollama
Related
- LLM conceptual guide
- LLM how-to guides