<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ollama on PAUL'S BLOG</title><link>https://paulyu.dev/tags/ollama/</link><description>Recent content in Ollama on PAUL'S BLOG</description><generator>Hugo</generator><language>en</language><lastBuildDate>Wed, 08 Oct 2025 13:00:00 +0000</lastBuildDate><atom:link href="https://paulyu.dev/tags/ollama/index.xml" rel="self" type="application/rss+xml"/><item><title>Local LLMs: Running Ollama and Open WebUI in Docker on Ubuntu</title><link>https://paulyu.dev/article/ollama-and-openwebui-containers/</link><pubDate>Wed, 08 Oct 2025 13:00:00 +0000</pubDate><guid>https://paulyu.dev/article/ollama-and-openwebui-containers/</guid><description>&lt;p&gt;&lt;a href="https://docs.ollama.com/"&gt;Ollama&lt;/a&gt; is a popular tool for running large language models (LLMs) locally on your machine. It provides a simple interface for interacting with a variety of models, and once a model has been downloaded it runs entirely offline. &lt;a href="https://docs.openwebui.com"&gt;Open WebUI&lt;/a&gt; is a self-hosted web interface that lets you interact with those models from your browser.&lt;/p&gt;
&lt;p&gt;I am working on an Ubuntu 24.04.3 LTS Desktop machine with hardware capable of running models locally, so I prefer to run Ollama as a local service rather than confining it to a container. That way, Ollama can take full advantage of my machine&amp;rsquo;s resources, especially the GPU.&lt;/p&gt;</description></item></channel></rss>