<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Ethan Hagiwara</title><link>https://ethan-hgwr.github.io/</link><description>Recent content on Ethan Hagiwara</description><generator>Hugo -- 0.128.0</generator><language>en-us</language><lastBuildDate>Sat, 01 Feb 2025 12:11:25 +0100</lastBuildDate><atom:link href="https://ethan-hgwr.github.io/index.xml" rel="self" type="application/rss+xml"/><item><title>How to run Deepseek R1 distilled (or others) locally with Open WebUI, Ollama and Docker compose 🐋</title><link>https://ethan-hgwr.github.io/blog/deepseek/</link><pubDate>Sat, 01 Feb 2025 12:11:25 +0100</pubDate><guid>https://ethan-hgwr.github.io/blog/deepseek/</guid><description>Introduction If you&amp;rsquo;ve been online these last few days, you&amp;rsquo;ve probably heard about China&amp;rsquo;s new LLM, Deepseek.
Often depicted as a ChatGPT killer, Deepseek sent shockwaves through the internet, not only for its performance and cost but also because the model is open-source. This means anyone can run the model on their own computer.
⚠️ Important Due to hardware limitations, the Deepseek R1 model you&amp;rsquo;ll run locally after following this tutorial will be &amp;ldquo;dumber&amp;rdquo; than the one on Deepseek&amp;rsquo;s website.</description></item></channel></rss>