Run k8sgpt with Local LLM

In some cases, you may need an on-premise LLM (Large Language Model) deployment instead of OpenAI: to leverage existing GPUs, to ensure full data ownership, or to use your own fine-tuned models.

Now a new tutorial is here! In this tutorial, @panpan0000 shows how to set up an open source LLM such as LLaMA locally, and how to point k8sgpt at the on-premise LLM to get its "explanations".
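As a quick preview, the gist of the setup looks like the sketch below. It assumes an OpenAI-compatible server (for example, LocalAI) is already serving a LLaMA-family model at `http://localhost:8080/v1`; the model name `llama` is a placeholder for whatever name your server exposes:

```bash
# A minimal sketch, assuming an OpenAI-compatible endpoint is already
# running locally. The URL and model name below are placeholders.

# Register the local endpoint as k8sgpt's "localai" backend:
k8sgpt auth add --backend localai --model llama --baseurl http://localhost:8080/v1

# Analyze the cluster and fetch explanations from the local LLM:
k8sgpt analyze --explain --backend localai
```

The tutorial walks through these steps in full detail, including how to stand up the local model server itself.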

You can find the tutorial here.