<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ai on ijustr.com</title><link>https://www.ijustr.com/categories/ai/</link><description>Recent content in Ai on ijustr.com</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sun, 14 Dec 2025 19:00:00 +0200</lastBuildDate><atom:link href="https://www.ijustr.com/categories/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Self-Hosted AI Search: Running Perplexica with Ollama on Fedora 43</title><link>https://www.ijustr.com/self-hosted-ai-search-running-perplexica-with-ollama-on-fedora-43/</link><pubDate>Sun, 14 Dec 2025 19:00:00 +0200</pubDate><guid>https://www.ijustr.com/self-hosted-ai-search-running-perplexica-with-ollama-on-fedora-43/</guid><description>&lt;p&gt;I recently upgraded my desktop with a &lt;strong&gt;GeForce RTX 5060 Ti 16GiB&lt;/strong&gt;,
and naturally, my first instinct was to put that VRAM to work by setting up
local AI search.&lt;/p&gt;
&lt;p&gt;I settled on &lt;strong&gt;Perplexica&lt;/strong&gt;, an open-source AI-powered search engine,
backed by &lt;strong&gt;Ollama&lt;/strong&gt; for inference. Since my daily driver is &lt;strong&gt;Fedora 43&lt;/strong&gt;,
I wanted to do this using &lt;strong&gt;Podman Rootless Quadlets&lt;/strong&gt; rather than Docker Compose.&lt;/p&gt;
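&lt;p&gt;As a rough sketch of what that looks like (the file path, image tag, port, and
volume name here are illustrative assumptions, not the final setup from the guide),
a rootless Ollama Quadlet with NVIDIA CDI GPU access could live at
&lt;code&gt;~/.config/containers/systemd/ollama.container&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[Unit]
Description=Ollama inference server

[Container]
Image=docker.io/ollama/ollama:latest
PublishPort=11434:11434
Volume=ollama-data:/root/.ollama
# Expose the NVIDIA GPU via CDI (requires nvidia-container-toolkit
# and a generated CDI spec)
AddDevice=nvidia.com/gpu=all

[Install]
WantedBy=default.target
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After a &lt;code&gt;systemctl --user daemon-reload&lt;/code&gt;, Quadlet generates an
&lt;code&gt;ollama.service&lt;/code&gt; unit you can start and enable like any other user service.&lt;/p&gt;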
&lt;p&gt;Here&amp;rsquo;s my guide to orchestrating Perplexica and Ollama with systemd and
NVIDIA CDI on Fedora.&lt;/p&gt;</description></item></channel></rss>