
Operator Configuration

Learn how to configure the K8sGPT Operator for your Kubernetes cluster

Configuration File

K8sGPT uses a YAML configuration file to manage its settings. By default, it looks for a file named k8sgpt.yaml in your home directory, but you can specify a different location using the --config flag.
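
For orientation, a standalone config file might look like the sketch below. The exact keys (`ai.providers`, `password`, `defaultprovider`) can vary between K8sGPT releases, so treat the field names here as assumptions and verify against the output of `k8sgpt auth list` for your version:

```yaml
# Hypothetical k8sgpt.yaml sketch -- field names are assumptions
# and may differ between K8sGPT releases.
ai:
  providers:
    - name: openai          # backend name
      model: gpt-3.5-turbo  # model to use with this backend
      password: <api-key>   # API key for the backend
  defaultprovider: openai   # backend used when none is selected explicitly
```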

Basic Configuration

  • Backend

    Specify the AI backend to use (e.g., OpenAI, Anthropic)

  • Model

    The specific model to use with the selected backend

  • API Key

    Your API key for the selected backend
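
Taken together, these three settings map onto the `ai` block of the K8sGPT custom resource. A minimal sketch, mirroring the complete example further down (names such as `k8sgpt-sample` and the secret key are placeholders, not required values):

```yaml
apiVersion: k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample            # placeholder instance name
  namespace: k8sgpt
spec:
  ai:
    backend: openai              # AI backend
    model: gpt-3.5-turbo         # model for that backend
    secret:                      # Secret holding the API key
      name: k8sgpt-sample-secret # placeholder Secret name
      key: openai-api-key        # placeholder key within the Secret
```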

Advanced Configuration

  • Filters

    Configure which resources to analyze and which to ignore

  • Output Format

    Customize the output format (JSON, YAML, etc.)

  • Cache Settings

    Configure caching behavior for analysis results

  • Analysis Interval

    Configure how frequently K8sGPT analyzes your cluster. Specify the interval in a format parsable by Go's time.ParseDuration function (e.g., "30s", "1m", "2h"). If not specified, the default interval is 30 seconds.
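
As a sketch, the advanced options above might be combined as follows. The filter values (`Pod`, `Service`) and the 5-minute interval are illustrative choices, not defaults; note that the complete example below shows the interval under both `analysisConfig` and `analysis`, so check which field your operator version honors:

```yaml
spec:
  noCache: false        # keep result caching enabled
  filters:              # analyze only these resource kinds (illustrative)
    - Pod
    - Service
  analysis:
    interval: 5m        # any Go time.ParseDuration value, e.g. 30s, 1m, 2h
```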

Complete Configuration Example

Here's a complete example of all available configuration options in the K8sGPT CRD:

apiVersion: k8sgpt.ai/v1alpha1  # API version
kind: K8sGPT                   # Resource kind
metadata:
  name: <k8sgpt-instance-name> # Name of your K8sGPT instance
  namespace: <namespace>        # Namespace where the instance is created
spec:
  version: <k8sgpt-version>     # Version of K8sGPT (optional)
  repository: <image-repository> # Image repository for K8sGPT (optional, default: ghcr.io/k8sgpt-ai/k8sgpt)
  imagePullPolicy: <image-pull-policy> # Image pull policy for the K8sGPT container (optional, default: Always)
  imagePullSecrets:             # Secrets for pulling images (optional)
    - name: <secret-name>
  resources:                   # Resource requests and limits for the K8sGPT deployment (optional)
    requests:
      cpu: <cpu-request>
      memory: <memory-request>
    limits:
      cpu: <cpu-limit>
      memory: <memory-limit>
  analysisConfig:
    interval: <interval>       # Interval between analysis runs (e.g. "30s","5m", "1h")
  noCache: <boolean>            # Disable caching (optional)
  customAnalyzers:             # Define custom analyzers (optional)
    - name: <analyzer-name>
      connection:
        url: <analyzer-url>
        port: <analyzer-port>
  filters:                     # Filters for analysis (optional)
    - <filter-expression>
  extraOptions:                 # Extra options (optional)
    backstage:
      enabled: <boolean>
    serviceAccountIRSA: <arn>
  sink:                        # Webhook for notifications (optional)
    type: <webhook-type>        # (e.g., slack, mattermost)
    endpoint: <webhook-endpoint>
    channel: <channel-name>
    username: <username>
    icon_url: <icon-url>
    secret:
      name: <secret-name>
      key: <secret-key>
  analysis:                    # Analysis configuration (optional)
    interval: <interval>        # Interval between analysis runs (e.g. "5m", "1h")
    namespace: <namespace>      # Namespace to run analysis in (optional, default: k8sgpt)
  ai:                          # AI configuration (required)
    autoRemediation:           # Automatic remediation settings
      enabled: <boolean>        # Enable/disable auto-remediation
      riskThreshold: <percentage> # Risk threshold (e.g., "90")
      resources:                # Resource types for auto-remediation
        - Pod
        - Service
        - Deployment
        - Ingress
    backend: <ai-backend>       # AI backend (e.g., openai, azureopenai, localai, etc.)
    backOff:                   # Retry backoff settings (optional)
      enabled: <boolean>
      maxRetries: <integer>
    baseUrl: <base-url>         # Base URL for the AI API (optional)
    region: <region>            # Region for the AI service (optional)
    model: <ai-model>          # AI model to use (e.g., gpt-3.5-turbo)
    engine: <ai-engine>        # AI engine to use (optional)
    secret:                    # Secret containing API keys (required)
      name: <secret-name>
      key: <secret-key>
    enabled: <boolean>          # Enable/disable AI analysis (optional, default true)
    anonymized: <boolean>       # Anonymize data sent to AI (optional, default true)
    language: <language-code>     # Language for analysis (optional, default english)
    proxyEndpoint: <proxy-url>  # Proxy endpoint for AI API (optional)
    providerId: <provider-id>   # Provider ID for AI API (optional)
    maxTokens: <integer>        # Maximum number of tokens for AI responses (optional, default 2048)
    topk: <integer>             # Top K value for AI responses (optional, default 50)
  remoteCache:                 # Remote cache configuration (optional)
    credentials:
      name: <secret-name>
    gcs:                   # Google Cloud Storage backend
      bucketName: <bucket-name>
      region: <region>
      projectId: <project-id>
    s3:                    # AWS S3 backend
      bucketName: <bucket-name>
      region: <region>
    azure:                 # Azure Blob Storage backend
      storageAccount: <storage-account>
      containerName: <container-name>
    interplex:             # Interplex backend
      endpoint: <endpoint>
  integrations:                # Integrations with other tools (optional)
    trivy:                     # Trivy vulnerability scanning integration
      enabled: <boolean>
      skipInstall: <boolean>
      namespace: <namespace>
  nodeSelector:                # Node selector for the K8sGPT pod (optional)
    <label-key>: <label-value>
  targetNamespace: <namespace>   # Target namespace for analysis (optional)
  kubeconfig:                  # Kubeconfig secret for accessing the cluster (optional)
    name: <secret-name>
    key: <secret-key>
status:                        # Observed status of the K8sGPT operator (read-only)
  # ... (status information)