Intro
Large language models (LLMs) are impressive, capable of generating human-quality text, translating languages, and answering questions. However, there’s a catch: they are limited to the data they were trained on. This means their knowledge can be outdated and they may struggle with topics that emerged after their training.
But what if we could give our LLMs access to the vast ocean of information available on the web?
Pairing a local LLM with web search in this way would allow it to:
- Access up-to-date information: Get answers based on the latest news, trends, and developments.
- Expand its knowledge base: Learn about new topics and concepts not covered in its original training data.
- Provide more comprehensive answers: Offer richer and more informative responses by drawing on external sources.
Adding Real-Time Information
That's where tools like Searxng come in. Searxng is a privacy-focused metasearch engine that aggregates results from multiple search engines, giving us a broad view of the web from a single query.
By integrating Searxng with a local LLM served by Ollama and fronted by Open Web UI, we can effectively bridge the knowledge gap. Here's how we can achieve this:
- Have Ollama and Open Web UI running. See the two previous blog posts in this series for more details, but I'll provide a `docker-compose.yaml` file here for your convenience:
```yaml
version: '3.8'

services:
  # From previous blog post. Here for your convenience ;)
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - 11434:11434
    volumes:
      - /home/ghilston/config/ollama:/root/.ollama
    tty: true
    # If you have an Nvidia GPU, define this section, otherwise remove it to use
    # your CPU
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    restart: unless-stopped

  ollama-webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    container_name: ollama-webui
    volumes:
      - /home/ghilston/config/ollama/ollama-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - 8062:8080
    environment:
      # Points the web UI at the Ollama container's API
      - 'OLLAMA_API_BASE_URL=http://ollama:11434/api'
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

  searxng:
    container_name: searxng
    image: docker.io/searxng/searxng:latest
    ports:
      - 8214:8080
    # References the configuration files you created in the previous step
    volumes:
      - ./searxng:/etc/searxng
    environment:
      - SEARXNG_BASE_URL=https://${SEARXNG_HOSTNAME:-localhost}/
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    restart: unless-stopped
```
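The `searxng` service above mounts a local `./searxng` directory that holds its configuration. If you haven't created those files yet, a minimal `./searxng/settings.yml` along the lines of the sketch below should be enough to get started. The values here are assumptions to adapt to your setup; the important bit is allowing the `json` output format, since Open Web UI consumes Searxng's results as JSON:

```yaml
# ./searxng/settings.yml -- a minimal sketch; values here are placeholders
use_default_settings: true

server:
  # Replace with a long random string of your own
  secret_key: "replace-me-with-a-long-random-string"

search:
  # Open Web UI asks Searxng for results as JSON, so "json" must be an allowed format
  formats:
    - html
    - json
```

With `use_default_settings: true`, anything not listed here falls back to Searxng's defaults.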
- In Open Web UI's `Admin Panel`, we can find a `Web Search` tab. Here, we'll ensure the `Enable Web Search` toggle is enabled, and we'll point `Searxng Query URL` to the machine and port where we're running this service, for example `http://192.168.1.5:8214/search?q=<query>`.
- Now that everything is set up, in our next Open Web UI chat we can click the `+` sign, which is the `More` button, and check the `Web Search` toggle. This lets the chat leverage Searxng to perform a relevant web search: under the hood, multiple search engines are queried and a list of relevant websites and snippets is returned to the LLM. The LLM then analyzes the retrieved information, extracts key insights, and incorporates them into its response. A rough sketch of this flow follows the list.
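To make the "under the hood" part concrete, here's a rough Python sketch of what happens when the `Web Search` toggle is on: the query goes to Searxng's JSON API, and the top results are handed to Ollama as context. The IP, ports, and model name are assumptions based on the compose file above, and Open Web UI's real pipeline is more involved than this, but the overall shape is the same. It's also a handy way to sanity-check that your `Searxng Query URL` actually returns results:

```python
# Rough sketch of the web-search flow -- hostnames, ports, and model name are assumptions
import requests

SEARXNG_URL = "http://192.168.1.5:8214/search"        # host port from the compose file above
OLLAMA_URL = "http://192.168.1.5:11434/api/generate"  # Ollama's generate endpoint
MODEL = "llama3"                                       # any model you've pulled with Ollama

question = "What happened in AI news this week?"

# 1. Ask Searxng for results as JSON (requires "json" in search.formats, see settings.yml above)
search = requests.get(SEARXNG_URL, params={"q": question, "format": "json"}, timeout=30)
results = search.json().get("results", [])[:5]

# 2. Build a small context block from the top results
context = "\n\n".join(
    f"{r.get('title', '')}\n{r.get('content', '')}\n{r.get('url', '')}" for r in results
)

# 3. Ask the local LLM to answer using that context
prompt = (
    "Answer the question using the web results below.\n\n"
    f"{context}\n\n"
    f"Question: {question}"
)
response = requests.post(
    OLLAMA_URL, json={"model": MODEL, "prompt": prompt, "stream": False}, timeout=120
)
print(response.json()["response"])
```

If the first request comes back empty or errors out, double-check that the `json` format is enabled in `settings.yml` and that the port matches your compose file.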
Congratulations! You can now interact with a local LLM and have it search the internet on your behalf :)