When collaborating with LLMs like Claude, GPT, or Gemini for coding, have you encountered these challenges:
- Copy-pasting code fragments feels scattered, leaving AI without understanding the overall architecture?
- Manually organizing file lists is tedious, often missing key dependencies?
- Token limits force constant trade-offs, making it hard to provide complete context?
- Generated responses often “miss the mark” because AI lacks a global project perspective?
If these frustrations sound familiar, LLM Context Copy is the solution you’ve been waiting for.
🎯 What is LLM Context Copy?
LLM Context Copy is a VS Code extension specifically optimized for Large Language Models. Its core mission is simple: help you efficiently copy project context to AI, enabling AI to truly understand your codebase.
Unlike traditional “copy file content” approaches, LLM Context Copy provides a complete context management solution:
- Intelligent File Selection: Dependency-based file recommendations
- Token Budget Management: Precise calculation and control of token consumption
- Multiple Optimization Strategies: Content compression while preserving semantics
- Multi-format Output: Support for Markdown, JSON, Plain Text, and TOON formats
✨ Core Features Deep Dive
1. Native TreeView File Selector
This is LLM Context Copy’s most intuitive feature. It’s not just a simple file list, but a complete interactive file tree:
```
📁 src/
  ☑ 📁 components/
    ☑ 📄 Button.tsx
    ☑ 📄 Card.tsx
    ☐ 📄 Modal.tsx
  ☑ 📁 utils/
    ☑ 📄 helpers.ts
    ☑ 📄 api.ts
☐ 📁 tests/
```
Why does this matter?
Traditional file selection methods (like command-line glob patterns) have two problems:
- Not intuitive, requiring complex syntax memorization
- Easy to miss or accidentally select files
TreeView provides (see the sketch after this list):
- Visual directory structure: See project layout at a glance
- Checkbox multi-select: Precise control over which files are included
- Folder cascade selection: Selecting a folder auto-selects all sub-files
- Real-time statistics: Shows selected file count, total size, estimated tokens
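Under the hood, this kind of checkbox tree maps naturally onto VS Code's TreeView API. Here is a minimal sketch of how such a provider could be built; the class names and view id are illustrative assumptions, not the extension's actual code:

```typescript
import * as vscode from "vscode";

// Hypothetical tree item that carries a checkbox.
class FileNode extends vscode.TreeItem {
  constructor(public readonly uri: vscode.Uri, isDirectory: boolean) {
    super(
      uri,
      isDirectory
        ? vscode.TreeItemCollapsibleState.Collapsed
        : vscode.TreeItemCollapsibleState.None
    );
    // TreeItemCheckboxState (VS Code 1.80+) renders the ☑/☐ box.
    this.checkboxState = vscode.TreeItemCheckboxState.Unchecked;
  }
}

class FileTreeProvider implements vscode.TreeDataProvider<FileNode> {
  getTreeItem(node: FileNode): vscode.TreeItem {
    return node;
  }

  // Lists directory entries on demand; assumes an open workspace folder.
  async getChildren(node?: FileNode): Promise<FileNode[]> {
    const dir = node?.uri ?? vscode.workspace.workspaceFolders![0].uri;
    const entries = await vscode.workspace.fs.readDirectory(dir);
    return entries.map(
      ([name, type]) =>
        new FileNode(
          vscode.Uri.joinPath(dir, name),
          type === vscode.FileType.Directory
        )
    );
  }
}

// Registration; the view id is illustrative.
const view = vscode.window.createTreeView("llmContextCopy.files", {
  treeDataProvider: new FileTreeProvider(),
});
// Checkbox changes arrive here, e.g. to recompute token statistics.
view.onDidChangeCheckboxState((e) => console.log(e.items));
```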
2. Smart Recommendation System
This is LLM Context Copy’s “killer feature”. It automatically analyzes your code dependencies and recommends related files:
How it works:
- Dependency Graph Construction: Scans import/require statements to build a file dependency graph
- Active File Tracking: Detects the currently edited file
- Recent File Records: Tracks recently opened files
- Relevance Scoring: Combines these signals into a relevance score for each file
Scoring Weight Configuration (a scoring sketch follows the table):
| Factor | Default Weight | Description |
|---|---|---|
| Active Editor File | 30 | Currently edited file has highest weight |
| Recently Opened Files | 20 | Recently visited files |
| Dependency Relationships | 25 | Files that depend on, or are depended on by, the current file |
| File Type Priority | 15 | TypeScript/JavaScript files prioritized |
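Using the default weights above, the scoring step could look roughly like this; the `FileSignals` shape and helper names are assumptions for illustration, not the extension's actual code:

```typescript
// Hypothetical signals gathered for each candidate file.
interface FileSignals {
  isActiveEditorFile: boolean;
  recentlyOpened: boolean;
  sharesDependencyEdge: boolean; // imports or is imported by the active file
  isPriorityFileType: boolean;   // e.g. .ts / .js
}

// Default weights from the table above.
const WEIGHTS = { active: 30, recent: 20, dependency: 25, fileType: 15 };

function relevanceScore(s: FileSignals): number {
  let score = 0;
  if (s.isActiveEditorFile) score += WEIGHTS.active;
  if (s.recentlyOpened) score += WEIGHTS.recent;
  if (s.sharesDependencyEdge) score += WEIGHTS.dependency;
  if (s.isPriorityFileType) score += WEIGHTS.fileType;
  return score;
}

// Recommend the top-N highest-scoring files.
function recommend(files: Map<string, FileSignals>, topN = 10): string[] {
  return [...files.entries()]
    .sort(([, a], [, b]) => relevanceScore(b) - relevanceScore(a))
    .slice(0, topN)
    .map(([path]) => path);
}
```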
3. Token Budget Management
Token limits are the core constraint when collaborating with LLMs. LLM Context Copy provides a complete token management solution:
Precise Token Counting:
Uses real tokenizers (such as tiktoken) for exact counts, not simple character estimation (see the sketch after this list). This means:
- For GPT-4: Uses the cl100k_base encoding
- For Claude: Uses a corresponding encoding scheme
- Estimation error is kept within 5%
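As an illustration, here is what exact counting looks like with the `js-tiktoken` package; that particular library is an assumption for this sketch, since the extension may bundle a different tokenizer:

```typescript
import { getEncoding } from "js-tiktoken";

// cl100k_base is the encoding used by GPT-4 and GPT-3.5-turbo.
const enc = getEncoding("cl100k_base");

// Returns an exact token count, not a character-based guess.
function countTokens(text: string): number {
  return enc.encode(text).length;
}

console.log(countTokens("export const Button = () => {};"));
```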
Budget Allocation Strategy:
```
Total Token Budget: 128,000
├── System Prompt: ~2,000
├── User Question Reserve: ~10,000
├── Output Reserve: ~20,000
└── Available for Context: ~96,000
```
When a selection exceeds the budget, the extension will (see the sketch after this list):
- Sort files by relevance
- Prioritize high-relevance files
- Apply compression strategies to the remaining files
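A greedy version of that over-budget handling might look like the following sketch; the types, names, and sample data are illustrative, not the extension's actual implementation:

```typescript
interface CandidateFile {
  path: string;
  tokens: number;    // precomputed with the tokenizer
  relevance: number; // score from the recommendation system
}

// Greedily fit high-relevance files into the context budget; leftovers
// become candidates for the compression strategies described next.
function fitToBudget(files: CandidateFile[], budget: number) {
  const sorted = [...files].sort((a, b) => b.relevance - a.relevance);
  const included: CandidateFile[] = [];
  const overflow: CandidateFile[] = [];
  let used = 0;
  for (const f of sorted) {
    if (used + f.tokens <= budget) {
      included.push(f);
      used += f.tokens;
    } else {
      overflow.push(f);
    }
  }
  return { included, overflow, used };
}

// Example with the ~96,000-token context budget from above.
const candidates: CandidateFile[] = [
  { path: "src/Button.tsx", tokens: 1_200, relevance: 55 },
  { path: "src/utils/helpers.ts", tokens: 800, relevance: 40 },
];
const { included, overflow } = fitToBudget(candidates, 96_000);
```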
4. Multiple Optimization Strategies
LLM Context Copy offers six optimization strategies that significantly reduce token consumption while preserving semantics:
| Strategy | Effect | Use Case |
|---|---|---|
| Remove Empty Lines | 5-10% reduction | Code with many empty lines |
| Remove Comments | 15-30% reduction | When the AI doesn't need to understand comment intent |
| Minify Whitespace | 10-15% reduction | Code with deep indentation |
| Truncate Long Files | Varies | Large data files or configs |
| Deduplicate Code | 20-40% reduction | Code with repetitive patterns |
| Prioritize Important Files | Optimized sorting | When token budget is tight |
Real-world Effect Comparison (a pipeline sketch follows):
```
Original File: 15,000 tokens
├── After removing empty lines: 13,500 tokens (-10%)
├── After removing comments: 9,450 tokens (-30%)
├── After minifying whitespace: 8,000 tokens (-15%)
└── Final output: 8,000 tokens (-47% overall)
```
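As a rough illustration of how such strategies chain, here is a simplified pipeline. It is regex-based and far cruder than a real implementation, which would need to be language-aware to avoid mangling string literals:

```typescript
type Strategy = (code: string) => string;

// Crude regex-based approximations of three strategies.
const removeEmptyLines: Strategy = (code) =>
  code.split("\n").filter((line) => line.trim() !== "").join("\n");

const removeComments: Strategy = (code) =>
  code.replace(/\/\*[\s\S]*?\*\//g, "").replace(/^\s*\/\/.*$/gm, "");

const minifyWhitespace: Strategy = (code) =>
  code.replace(/[ \t]+/g, " ");

// Strategies compose left to right, mirroring the comparison above; note
// that stripping comments can leave blank lines, so a real pipeline would
// re-run empty-line removal afterwards.
function compress(code: string, strategies: Strategy[]): string {
  return strategies.reduce((acc, strategy) => strategy(acc), code);
}

const source = `// a comment\nconst x = 1;\n\nconst y = 2;`;
console.log(
  compress(source, [removeEmptyLines, removeComments, minifyWhitespace])
);
```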
5. Multi-format Output
Supports four output formats for different use cases:
**Markdown Format (Recommended)**:

````markdown
# Project Context

## Directory Structure
📁 src/
  📄 Button.tsx

## File: src/Button.tsx
```typescript
export const Button = () => { ... }
```
````

**JSON Format**:

```json
{
  "files": [
    {
      "path": "src/Button.tsx",
      "content": "export const Button..."
    }
  ]
}
```
**Plain Text**: For quick reference
**TOON Format**: Token-Oriented Object Notation, designed for maximum compression
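To make the format layer concrete, here is a hedged sketch of what a pluggable formatter interface might look like; the names are illustrative, not the extension's actual `formatters/` API:

```typescript
interface ContextFile {
  path: string;
  content: string;
}

// One formatter per output format, behind a shared interface.
interface ContextFormatter {
  format(files: ContextFile[]): string;
}

const markdownFormatter: ContextFormatter = {
  format: (files) =>
    files
      .map((f) => `## File: ${f.path}\n\n\`\`\`\n${f.content}\n\`\`\``)
      .join("\n\n"),
};

const jsonFormatter: ContextFormatter = {
  format: (files) => JSON.stringify({ files }, null, 2),
};

const formatters: Record<string, ContextFormatter> = {
  markdown: markdownFormatter,
  json: jsonFormatter,
};

// Pick a format at copy time.
const output = formatters["markdown"].format([
  { path: "src/Button.tsx", content: "export const Button = () => {};" },
]);
```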
🛠️ Technical Architecture Analysis
As an open-source project, LLM Context Copy has an architecture worth studying:
Core Modules
```
src/
├── commands/      # VS Code command registration
├── compression/   # Compression engine
├── di/            # Dependency injection container
├── formatters/    # Output formatters
├── intelligence/  # Smart recommendation system
├── performance/   # Performance optimization
├── services/      # Core services
└── tree/          # TreeView provider
```
Dependency Injection Design
The project uses the dependency injection pattern, which brings:
- Decoupled modules that are easy to test
- Clear service lifecycle management
- Straightforward extension with new features
```typescript
// Service registration example
container.registerSingleton(ITokenCounter, TokenCounter);
container.registerSingleton(IFileWatcher, FileWatcher);
container.registerSingleton(IContextManager, ContextManager);
```
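Since TypeScript interfaces don't exist at runtime, containers like this usually key registrations on tokens (strings or symbols). Here is a minimal sketch of how such a container could work; the token and stub class are illustrative, not the project's actual implementation:

```typescript
type Factory<T> = () => T;

// Minimal DI container sketch: tokens map to lazily created singletons.
class Container {
  private factories = new Map<symbol, Factory<unknown>>();
  private instances = new Map<symbol, unknown>();

  registerSingleton<T>(token: symbol, factory: Factory<T>): void {
    this.factories.set(token, factory);
  }

  resolve<T>(token: symbol): T {
    if (!this.instances.has(token)) {
      const factory = this.factories.get(token);
      if (!factory) throw new Error(`No registration for ${String(token)}`);
      this.instances.set(token, factory());
    }
    return this.instances.get(token) as T;
  }
}

// Stub service and token for the example.
class TokenCounter {
  count(text: string): number {
    return Math.ceil(text.length / 4); // placeholder heuristic
  }
}
const ITokenCounter = Symbol("ITokenCounter");

const container = new Container();
container.registerSingleton(ITokenCounter, () => new TokenCounter());
const counter = container.resolve<TokenCounter>(ITokenCounter);
```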
Performance Optimization Strategies
- Web Worker: Token calculation in background thread, doesn’t block UI
- Virtualized TreeView: Large directories loaded on-demand, memory-friendly
- Incremental Updates: Only update changed parts when files change
- Caching Mechanism: Dependency analysis results are cached to avoid recalculation (sketched below)
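The caching idea in particular is simple to sketch: memoize per-file dependency analysis and invalidate entries on change events. Again, the names here are illustrative, not the extension's actual code:

```typescript
import * as vscode from "vscode";

// Cache of per-file import lists, invalidated when the file changes.
const dependencyCache = new Map<string, string[]>();

function getDependencies(uri: vscode.Uri, source: string): string[] {
  const key = uri.toString();
  const cached = dependencyCache.get(key);
  if (cached) return cached;

  // Crude import scan; a real implementation would use an AST.
  const deps = [...source.matchAll(/from\s+["']([^"']+)["']/g)].map(
    (m) => m[1]
  );
  dependencyCache.set(key, deps);
  return deps;
}

// Incremental updates: only the changed file's entry is dropped.
const watcher = vscode.workspace.createFileSystemWatcher("**/*.{ts,tsx,js}");
watcher.onDidChange((uri) => dependencyCache.delete(uri.toString()));
watcher.onDidDelete((uri) => dependencyCache.delete(uri.toString()));
```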
💡 Use Cases
Case 1: New Feature Development
You’re developing a new feature and need AI to help write code:
- Open LLM Context Copy
- Select relevant components, utility functions, type definitions
- Click “Smart Recommend” to auto-add dependency files
- Copy context, paste to Claude/GPT
- Describe your requirements, get precise code suggestions
Case 2: Bug Debugging
Encountered a tricky bug, need AI to help analyze:
- Select the problematic file
- Use “Remove Comments” strategy to reduce noise
- Include related test files
- Let AI analyze possible root causes
Case 3: Code Review
Have AI help review code quality:
- Select modules to review
- Include related configuration files
- Use Markdown format output
- Let AI provide improvement suggestions
Case 4: Documentation Generation
Generate documentation for code:
- Select files needing documentation
- Keep comments (don’t use remove comments strategy)
- Let AI generate API docs or README
📊 Comparison with Other Tools
| Feature | LLM Context Copy | Manual Copy | Other Extensions |
|---|---|---|---|
| File Selection | TreeView + Checkboxes | Manually open each file | Simple list |
| Token Calculation | Precise calculation | None | Rough estimation |
| Smart Recommendation | ✅ | ❌ | ❌ |
| Optimization Strategies | 6 types | None | 1-2 types |
| Output Formats | 4 types | 1 type | 1-2 types |
| Open Source | ✅ | N/A | Partially open source |
🚀 Quick Start
- Search "LLM Context Copy" in the VS Code Extension Marketplace
- Click Install
- Open the Command Palette with `Ctrl/Cmd + Shift + P` and type "Open Context Copy"
- Select files, click copy
Or visit directly:
- VS Code Marketplace: bitfarer.llm-context-copy
- GitHub: bitfarer/llm-context-copy
🔮 Future Roadmap
LLM Context Copy is continuously evolving, with planned features including:
- AI Chat Integration: Direct conversation with AI in VS Code
- Multi-model Support: Optimized output formats for different LLMs
- Team Collaboration: Shared context templates
- Semantic Compression: AST-based intelligent compression
📝 Conclusion
LLM Context Copy solves a real pain point: how to efficiently pass project context to LLMs.
It’s not just a simple “copy file content” tool, but a complete context management solution. Through intelligent file selection, token budget management, and multiple optimization strategies, it enables AI assistants to truly understand your codebase, providing more precise and valuable assistance.
If you frequently collaborate with LLMs like Claude, GPT, or Gemini for coding, LLM Context Copy is definitely worth trying.
Open Source: github.com/bitfarer/llm-context-copy
Marketplace: marketplace.visualstudio.com/items?itemName=bitfarer.llm-context-copy