K8sGPT CLI Configuration
Learn how to configure the K8sGPT CLI for your specific needs
Configuration File
K8sGPT uses a YAML configuration file to manage its settings. By default, it looks for a file named k8sgpt.yaml in your home directory, but you can specify a different location using the --config flag.
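For example, to run an analysis against a configuration file stored somewhere other than the default location, pass the path explicitly. This is a minimal sketch; the path is only a placeholder:

k8sgpt analyze --config /path/to/k8sgpt.yaml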
Basic Configuration
Here's a basic configuration example:
apiVersion: v1
kind: K8sGPT
metadata:
  name: k8sgpt
spec:
  backend: openai
  model: gpt-3.5-turbo
  baseURL: https://api.openai.com/v1
  secretRef:
    name: k8sgpt-secret
  analysis:
    interval: 5m
    namespace: k8sgpt

Configuration Options
Backend
Specify the AI backend to use. Supported options (see the example after this list):
- openai
- azureopenai
- localai
- anthropic
- cohere
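The backend can also be registered from the command line with the auth subcommand. A minimal sketch, assuming the --backend, --model, and --password flags behave this way in your K8sGPT version (the API key value is a placeholder):

k8sgpt auth add --backend openai --model gpt-3.5-turbo --password your-api-key

Running k8sgpt auth list afterwards shows which backends are currently configured.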
Model
The specific model to use with the selected backend. Examples (a sample configuration follows the list):
- gpt-3.5-turbo (OpenAI)
- gpt-4 (OpenAI)
- claude-2 (Anthropic)
- command (Cohere)
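To switch models, change the backend and model fields together, since a model name is only valid for its own backend. A sketch using the same schema as the basic configuration above (the secret name is a placeholder):

spec:
  backend: anthropic
  model: claude-2
  secretRef:
    name: k8sgpt-secret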
Analysis Settings
- interval: How frequently K8sGPT performs analysis (e.g., "30s", "5m", "1h")
- namespace: The namespace where K8sGPT will run its analysis
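For a one-off run scoped to a single namespace, the same restriction can be applied on the command line. A sketch, assuming your K8sGPT version supports the --namespace flag on analyze (kube-system is only an example target):

k8sgpt analyze --namespace kube-system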
Secret Management
Store your API keys securely using Kubernetes secrets:
kubectl create secret generic k8sgpt-secret \
  --from-literal=api-key=your-api-key
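To confirm the secret exists without printing the key itself, a standard kubectl query is enough; the namespace below matches the one used in the basic configuration above:

kubectl get secret k8sgpt-secret -n k8sgpt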
Advanced Configuration
Filters
Configure which resources to analyze and which to ignore:
spec:
  filters:
    - Pod
    - Service
    - Deployment
    - Ingress
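Filters can also be inspected and adjusted from the command line. A sketch, assuming your K8sGPT version ships the filters subcommand with list and add:

k8sgpt filters list
k8sgpt filters add Service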
Output Format
Customize the output format of analysis results:
- json
- yaml
- table (default)
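To request a specific format for a single run, the format can usually be passed on the command line as well. A sketch, assuming your K8sGPT version supports the --output flag on analyze:

k8sgpt analyze --output json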
Cache Settings
Configure caching behavior for analysis results:
spec:
  noCache: false  # Enable/disable caching
  cache:
    ttl: 1h       # Cache time-to-live
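Caching can also be bypassed for a single run rather than disabled in the configuration file. A sketch, assuming your K8sGPT version supports the --no-cache flag on analyze:

k8sgpt analyze --no-cache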