OpenCode Overview

What is OpenCode

OpenCode is an open-source AI coding assistant built on VS Code, with the following features:

  • Open Source & Free: Fully open source, self-hostable
  • Flexible Models: Supports multiple LLM backends
  • Local First: Protects code privacy
  • Highly Customizable: Extensible and customizable

Core Features

  • Code Completion: Real-time code suggestions (single-line/multi-line)
  • Code Chat: Q&A on your code
  • Code Generation: Generate code from natural-language descriptions
  • Code Explanation: Explain the functionality of selected code
  • Refactoring Suggestions: Provide refactoring and optimization suggestions
  • Bug Fixing: Intelligently fix code errors

I. OpenCode Installation and Configuration

System Requirements

Operating System:
  - Windows 10/11
  - macOS 11+
  - Linux (Ubuntu 20.04+, CentOS 8+)

Hardware Requirements:
  - CPU: 4 cores+
  - RAM: 8GB+
  - Storage: 2GB available space

Software Dependencies:
  - Node.js 16+ (if building from source)
  - Git

Installation Steps

Method 1: Installing a Prebuilt Release

# 1. Download OpenCode
# Visit https://github.com/opencode-org/opencode/releases
# Download the installer for your platform

# Linux/macOS
wget https://github.com/opencode-org/opencode/releases/latest/download/opencode-linux-x64.tar.gz
tar -xzf opencode-linux-x64.tar.gz
cd opencode
./opencode

# macOS (using Homebrew Cask)
brew install --cask opencode

# Windows
# Download opencode-setup.exe and run the installer

# 2. Launch OpenCode
# The welcome page will appear on first launch

Method 2: Building from Source

# 1. Clone repository
git clone https://github.com/opencode-org/opencode.git
cd opencode

# 2. Install dependencies
npm install

# 3. Build project
npm run build

# 4. Run
npm run electron

Basic Configuration

settings.json Configuration

{
  // OpenCode Configuration
  "opencode.modelProvider": "openai",
  "opencode.apiKey": "your-api-key",
  "opencode.model": "gpt-4",
  "opencode.temperature": 0.3,
  "opencode.maxTokens": 2048,

  // Code Completion Configuration
  "opencode.autocomplete.enabled": true,
  "opencode.autocomplete.mode": "inline",
  "opencode.autocomplete.debounce": 150,

  // Code Chat Configuration
  "opencode.chat.enabled": true,
  "opencode.chat.contextLength": 10,

  // Privacy Configuration
  "opencode.telemetry.enabled": false,
  "opencode.anonymousUsageId": false,

  // Shortcut Configuration
  "opencode.suggestInline": "Ctrl+Enter",
  "opencode.openChat": "Ctrl+Shift+C",
  "opencode.explainCode": "Ctrl+Shift+E"
}

II. What OpenCode Can Do

1. Intelligent Code Completion

# Single-line completion
def calculate_discount(price, discount_rate):
    return price * (1 - discount_

# OpenCode auto-completes:
# rate)

# Multi-line completion
# Input comment:
# Create a function to calculate the nth term of the Fibonacci sequence

# OpenCode generates complete function:
def fibonacci(n):
    """
    Calculate the nth term of the Fibonacci sequence

    Args:
        n (int): The term number to calculate

    Returns:
        int: The value of the nth term
    """
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)
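The recursive version above recomputes subproblems and runs in exponential time. A simple iterative variant (a hand-written sketch, not actual OpenCode output) returns the same values in O(n):

```python
def fibonacci_iter(n: int) -> int:
    """Iterative Fibonacci: same results as the recursive version,
    but linear time instead of exponential."""
    if n <= 0:
        return 0
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```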

2. Code Explanation

# Select complex code, OpenCode provides explanation

def merge_sort(arr):
    if len(arr) <= 1:
        return arr

    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    return merge(left, right)

# OpenCode explanation:
# This is an implementation of the merge sort algorithm.
# 1. Base case: If array length <= 1, return directly
# 2. Divide and conquer: Split the array in half, sort recursively
# 3. Merge: Merge two sorted arrays
# Time Complexity: O(n log n)
# Space Complexity: O(n)
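The snippet calls a merge() helper that is not shown. A minimal self-contained sketch of how merge_sort and that helper might fit together:

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One of the two lists is exhausted; append the remainder of the other
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))
```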

3. Code Refactoring

# Original code
def process_user(user):
    if user is not None:
        if user.get("name") is not None:
            if user.get("age") is not None and user["age"] >= 18:
                return True
    return False

# OpenCode refactoring suggestion:
def process_user(user):
    """
    Check if user is a valid adult

    Args:
        user (dict): User info dictionary

    Returns:
        bool: Whether the user is valid
    """
    if not user:
        return False

    name = user.get("name")
    age = user.get("age")

    return bool(name) and age is not None and age >= 18

4. Bug Detection and Fixing

# Buggy code
def divide_numbers(a, b):
    result = a / b
    return result

# Call
print(divide_numbers(10, 0))  # ZeroDivisionError

# OpenCode detects and fixes:
def divide_numbers(a, b):
    """
    Safely divide two numbers

    Args:
        a (float): Dividend
        b (float): Divisor

    Returns:
        float: Division result

    Raises:
        ValueError: When divisor is 0
    """
    if b == 0:
        raise ValueError("Divisor cannot be zero")

    return a / b
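A short usage sketch showing how a caller might handle the ValueError raised by the fixed function (the function body is repeated here so the example stands alone):

```python
def divide_numbers(a, b):
    """Safely divide two numbers; raises ValueError on a zero divisor."""
    if b == 0:
        raise ValueError("Divisor cannot be zero")
    return a / b

try:
    print(divide_numbers(10, 0))
except ValueError as err:
    # The caller decides how to recover instead of crashing
    print(f"Error: {err}")
```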

5. Unit Test Generation

# Original function
def add(a, b):
    return a + b

# OpenCode generates unit tests:
import unittest

class TestAdd(unittest.TestCase):
    def test_add_positive_numbers(self):
        """Test adding positive numbers"""
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        """Test adding negative numbers"""
        self.assertEqual(add(-2, -3), -5)

    def test_add_mixed_numbers(self):
        """Test adding positive and negative numbers"""
        self.assertEqual(add(2, -3), -1)

    def test_add_zero(self):
        """Test adding zero"""
        self.assertEqual(add(5, 0), 5)

    def test_add_floats(self):
        """Test adding floats"""
        self.assertAlmostEqual(add(2.5, 3.1), 5.6, places=1)

if __name__ == '__main__':
    unittest.main()

III. Introduction to oh-my-opencode

What is oh-my-opencode

oh-my-opencode is an enhanced configuration management tool for OpenCode, providing:

  • Preset Configurations: Optimized for different scenarios
  • Model Switching: Quickly switch between different LLMs
  • Shortcut Bindings: Predefined shortcuts
  • Theme Customization: Customize appearance

Installing oh-my-opencode

# 1. Clone repository
git clone https://github.com/opencode-org/oh-my-opencode.git ~/.oh-my-opencode

# 2. Install
cd ~/.oh-my-opencode
./install.sh

# 3. Restart OpenCode

Preset Configurations

# List available presets
oh-my-opencode list

# Apply preset
oh-my-opencode apply python-developer
oh-my-opencode apply go-developer
oh-my-opencode apply fullstack-developer

# Create custom preset
oh-my-opencode create my-preset --template python-developer

IV. Chinese (Domestic) LLM Integration

Supported Chinese LLMs

  • GLM-4 (Zhipu AI): Strong Chinese understanding; suited to Chinese-language development
  • Minimax M2.5 (MiniMax): Strong coding ability; suited to code generation
  • Kimi K2.5 (Moonshot): Long context; suited to document understanding
  • Qwen3.5 (Alibaba Cloud): Multilingual ability; suited to full-stack development

1. GLM-4 Configuration

{
  "opencode.modelProvider": "custom",
  "opencode.apiEndpoint": "https://open.bigmodel.cn/api/paas/v4/chat/completions",
  "opencode.apiKey": "your-glm-api-key",
  "opencode.model": "glm-4",
  "opencode.temperature": 0.3,
  "opencode.maxTokens": 4096,
  "opencode.contextLength": 8192,

  // Request Header Configuration
  "opencode.customHeaders": {
    "Authorization": "Bearer your-glm-api-key"
  },

  // Response Format
  "opencode.responseFormat": {
    "type": "text"
  }
}

2. Minimax M2.5 Configuration

{
  "opencode.modelProvider": "custom",
  "opencode.apiEndpoint": "https://api.minimax.chat/v1/text/chatcompletion_v2",
  "opencode.apiKey": "your-minimax-api-key",
  "opencode.model": "abab5.5-chat",
  "opencode.temperature": 0.7,
  "opencode.maxTokens": 2048,

  // Minimax Specific Parameters
  "opencode.customParams": {
    "group": "chat_group_name",
    "role_setting": "You are a professional programming assistant",
    "tokens_to_generate": 2048,
    "temperature": 0.7
  }
}

3. Kimi K2.5 Configuration

{
  "opencode.modelProvider": "custom",
  "opencode.apiEndpoint": "https://api.moonshot.cn/v1/chat/completions",
  "opencode.apiKey": "your-kimi-api-key",
  "opencode.model": "moonshot-v1-8k",
  "opencode.temperature": 0.3,
  "opencode.maxTokens": 8192,

  // Kimi Long Context Configuration
  "opencode.contextWindow": 32000,
  "opencode.includeContext": "full"
}

4. Qwen3.5 Configuration

{
  "opencode.modelProvider": "custom",
  "opencode.apiEndpoint": "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions",
  "opencode.apiKey": "your-qwen-api-key",
  "opencode.model": "qwen-plus",
  "opencode.temperature": 0.5,
  "opencode.maxTokens": 6000,

  // Qwen Specific Parameters
  "opencode.customParams": {
    "top_p": 0.8,
    "enable_search": false
  }
}

Multi-Model Configuration Switching

# Create model configuration files
~/.opencode/models/
├── glm-4.json
├── minimax-m2.5.json
├── kimi-k2.5.json
└── qwen3.5.json

# Quickly switch models
opencode model switch glm-4
opencode model switch minimax-m2.5
opencode model switch kimi-k2.5
opencode model switch qwen3.5

# Set default model
opencode model default qwen3.5
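The contents of the per-model files are not specified here; a hypothetical example of what ~/.opencode/models/glm-4.json might contain, reusing the keys from the GLM-4 settings shown earlier (the exact schema is an assumption):

```json
{
  "modelProvider": "custom",
  "apiEndpoint": "https://open.bigmodel.cn/api/paas/v4/chat/completions",
  "apiKey": "your-glm-api-key",
  "model": "glm-4",
  "temperature": 0.3,
  "maxTokens": 4096
}
```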

V. Best Practices

1. Prompt Engineering

Characteristics of Good Prompts

A good prompt should:
1. Clear Goal - Clearly state what to do
2. Provide Context - Give relevant background information
3. Specify Format - State the expected output format
4. Give Examples - Provide examples of desired results
5. Set Constraints - Define clear limitations

Example Prompts

# Bad Prompt
Write a sorting function

# Good Prompt
Please write a quick sort function in Python with the following requirements:
1. Input: List of integers
2. Output: Sorted list
3. Handle edge cases: empty list, single-element list
4. Include complete docstrings
5. Add type annotations
6. Include unit test examples

Please follow the Google Python Style Guide
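For comparison, one plausible function the "good" prompt could produce (a hand-written sketch, not actual OpenCode output):

```python
from typing import List

def quick_sort(arr: List[int]) -> List[int]:
    """Sort a list of integers using quick sort.

    Args:
        arr: List of integers; may be empty or single-element.

    Returns:
        A new sorted list; the input list is not modified.
    """
    if len(arr) <= 1:  # Edge cases: empty or single-element list
        return list(arr)
    pivot = arr[len(arr) // 2]
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)
```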

2. Context Management

# Provide sufficient context information

# Bad example - Lacks context
# Help me optimize this function
def process(data):
    result = []
    for item in data:
        result.append(item * 2)
    return result

# Good example - Provides context
# Optimize the following function, requirements:
# Scenario: Processing large amounts of sensor data (millions)
# Current performance: Processing 100k items takes 5 seconds
# Goal: Process 100k items in <1 second
# Constraint: Data order must be preserved
# Language: Python
def process(data):
    result = []
    for item in data:
        result.append(item * 2)
    return result
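Under the stated constraints (order preserved, pure Python), one optimization an assistant might suggest is replacing the append loop with a list comprehension:

```python
def process(data):
    # Same output and order as the loop version, but avoids repeated
    # method lookups and incremental list resizing.
    return [item * 2 for item in data]
```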

3. Security Best Practices

Security Practices:
  API Key Management:
    - Store API keys in environment variables
    - Rotate keys regularly
    - Do not commit to code repositories

  Code Review:
    - AI-generated code requires human review
    - Pay special attention to security vulnerabilities
    - Test edge cases

  Sensitive Information:
    - Do not let AI access sensitive code
    - Use anonymized data
    - Configure ignore rules
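A minimal sketch of the environment-variable advice in Python, reading the key at startup rather than hard-coding it (OPENCODE_API_KEY matches the variable name used in the FAQ):

```python
import os

def load_api_key() -> str:
    # Read the key from the environment so it never lands in the repository
    key = os.environ.get("OPENCODE_API_KEY")
    if not key:
        raise RuntimeError("OPENCODE_API_KEY is not set")
    return key
```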

4. Performance Optimization

{
  // Performance Optimization Configuration
  "opencode.cache.enabled": true,
  "opencode.cache.size": "1GB",
  "opencode.cache.ttl": 86400,

  // Request Optimization
  "opencode.batchRequests": true,
  "opencode.requestTimeout": 30000,
  "opencode.retryAttempts": 3,

  // Response Cache
  "opencode.responseCache.enabled": true,
  "opencode.responseCache.maxSize": 500
}

5. Team Collaboration

Team Collaboration:
  Configuration Sharing:
    - Create team configuration files
    - Unify model selection
    - Unify prompt templates

  Version Control:
    # .opencoderc
    {
      "model": "qwen3.5",
      "temperature": 0.3,
      "presets": {
        "python": "python-dev-preset",
        "go": "go-dev-preset"
      }
    }

  Knowledge Base:
    - Maintain team prompt library
    - Share best practices
    - Regular training

VI. Frequently Asked Questions

Q1: How to Configure API Keys

# Method 1: Environment Variables (Recommended)
export OPENCODE_API_KEY="your-api-key"
export OPENCODE_API_ENDPOINT="https://api.example.com"

# Method 2: Configuration File
~/.opencode/config.json
{
  "apiKey": "your-api-key",
  "apiEndpoint": "https://api.example.com"
}

# Method 3: OpenCode Settings UI
# Settings > OpenCode > API Key

Q2: How to Improve Generation Quality

{
  // Configuration to Improve Generation Quality
  "opencode.temperature": 0.3,        // Reduce randomness
  "opencode.topP": 0.9,               // Control diversity
  "opencode.frequencyPenalty": 0.5,   // Reduce repetition
  "opencode.presencePenalty": 0.3,    // Encourage new topics

  // Increase Context
  "opencode.contextLines": 50,        // Increase number of surrounding lines
  "opencode.includeFileHeader": true,  // Include file header
  "opencode.includeImports": true     // Include imports
}

Q3: Offline Usage Solutions

Offline Solutions:
  Local Models:
    - Use Ollama to run local models
    - Supports Llama, CodeLlama, etc.
    - Configure local API endpoint

  Configuration:
    opencode.apiEndpoint: "http://localhost:11434"
    opencode.model: "codellama:13b"

Summary

As an open-source AI coding assistant, OpenCode has the following advantages:

  1. Open Source & Free: No subscription fees
  2. Flexible Models: Supports multiple LLM backends
  3. Chinese Models: Integrated with GLM, Minimax, Kimi, and Qwen
  4. Customizable: Highly configurable
  5. Privacy Protection: Local deployment options

Combined with oh-my-opencode and Chinese LLMs, OpenCode provides an efficient AI-assisted development environment.