I recently upgraded my desktop with a GeForce RTX 5060 Ti 16GiB, and naturally, my first instinct was to put that VRAM to work by setting up local AI search.
I settled on Perplexica, an open-source AI-powered search engine, backed by Ollama for inference. Since my daily driver is Fedora 43, I wanted to do this with Podman rootless Quadlets rather than Docker Compose.
Here’s my guide on how to orchestrate Perplexica and Ollama using systemd and NVIDIA CDI on Fedora.
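As a taste of what the setup involves, here is a minimal sketch of a rootless Quadlet unit for Ollama with GPU access via NVIDIA CDI. The image tag and port are assumptions based on Ollama's defaults; the full guide covers the details.

```ini
# ~/.config/containers/systemd/ollama.container
# Minimal sketch — adjust image tag, port, and volume to taste.
[Unit]
Description=Ollama inference server

[Container]
Image=docker.io/ollama/ollama:latest
# Expose the GPU through the NVIDIA CDI spec (requires nvidia-container-toolkit)
AddDevice=nvidia.com/gpu=all
# Ollama's default API port
PublishPort=11434:11434
# Persist downloaded models across restarts
Volume=ollama-models:/root/.ollama

[Install]
WantedBy=default.target
```

After placing the file, `systemctl --user daemon-reload && systemctl --user start ollama` brings the service up without root privileges.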