---
title: "Slime Rl Training — Provides guidance for LLM post-training with RL using slime, a Megatron+SGLang framework"
sidebar_label: "Slime Rl Training"
description: "Provides guidance for LLM post-training with RL using slime, a Megatron+SGLang framework"
---
{/* This page is auto-generated from the skill's SKILL.md by website/scripts/generate-skill-docs.py. Edit the source SKILL.md, not this page. */}
# Slime RL Training
Provides guidance for LLM post-training with RL using slime, a Megatron+SGLang framework. Use when training GLM models, implementing custom data generation workflows, or needing tight Megatron-LM integration for RL scaling.
## Skill metadata
| | |
|---|---|
| Source | Optional — install with `hermes skills install official/mlops/slime` |
| Path | `optional-skills/mlops/slime` |
| Version | `1.0.0` |
| Author | Orchestra Research |
| License | MIT |
| Dependencies | `sglang-router>=0.2.3`, `ray`, `torch>=2.0.0`, `transformers>=4.40.0` |
| Tags | `Reinforcement Learning`, `Megatron-LM`, `SGLang`, `GRPO`, `Post-Training`, `GLM` |
## Reference: full SKILL.md
:::info
The following is the complete skill definition that Hermes loads when this skill is triggered. This is what the agent sees as instructions when the skill is active.
:::
# slime: LLM Post-Training Framework for RL Scaling
slime is an LLM post-training framework from Tsinghua's THUDM team, powering GLM-4.5, GLM-4.6, and GLM-4.7. It connects Megatron-LM for training with SGLang for high-throughput rollout generation.
## When to Use slime
**Choose slime when you need:**
- Megatron-LM native training with SGLang inference
- Custom data generation workflows with flexible data buffers
- Training GLM, Qwen3, DeepSeek V3, or Llama 3 models
- Research-grade framework with production backing (Z.ai)
**Consider alternatives when:**
- You need enterprise-grade stability features → use **miles**
- You want flexible backend swapping → use **verl**
- You need PyTorch-native abstractions → use **torchforge**
## Key Features
- **Training**: Megatron-LM with full parallelism support (TP, PP, DP, SP)
- **Rollout**: SGLang-based high-throughput generation with router
- **Data Buffer**: Flexible prompt management and sample storage
- **Models**: GLM-4.x, Qwen3, DeepSeek V3/R1, Llama 3
## Architecture Overview
<!-- ascii-guard-ignore -->
```
┌─────────────────────────────────────────────────────────┐
│ Data Buffer │
│ - Prompt initialization and management │
│ - Custom data generation and filtering │
│ - Rollout sample storage │
└─────────────┬───────────────────────────┬───────────────┘
│ │
┌─────────────▼───────────┐ ┌─────────────▼───────────────┐
│ Training (Megatron-LM) │ │ Rollout (SGLang + Router) │
│ - Actor model training │ │ - Response generation │
│ - Critic (optional) │ │ - Reward/verifier output │
│ - Weight sync to rollout│ │ - Multi-turn support │
└─────────────────────────┘ └─────────────────────────────┘
```
<!-- ascii-guard-ignore-end -->
## Installation
```bash
# Recommended: Docker
docker pull slimerl/slime:latest
docker run --rm --gpus all --ipc=host --shm-size=16g \
  -it slimerl/slime:latest /bin/bash

# Inside container
cd /root/slime && pip install -e . --no-deps
```
### From Source
```bash
git clone https://github.com/THUDM/slime.git
cd slime
pip install -r requirements.txt
pip install -e .
```
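A quick way to confirm the editable install succeeded is to import the package from Python. This is only a sanity check and assumes the top-level package is importable as `slime` (matching the repository layout):
```python
# Sanity check after `pip install -e .`.
# Assumes the top-level package is importable as `slime`; adjust if your layout differs.
import slime
print("slime imported from:", slime.__file__)
```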
## Quick Start: GRPO Training
```bash
# Source model configuration
source scripts/models/qwen3-4B.sh
# Launch training
python train.py \
  --actor-num-nodes 1 \
  --actor-num-gpus-per-node 4 \
  --rollout-num-gpus 4 \
  --advantage-estimator grpo \
  --use-kl-loss --kl-loss-coef 0.001 \
  --rollout-batch-size 32 \
  --n-samples-per-prompt 8 \
  --global-batch-size 256 \
  --num-rollout 3000 \
  --prompt-data /path/to/data.jsonl \
  ${MODEL_ARGS[@]} ${CKPT_ARGS[@]}
```
---
## Workflow 1: Standard GRPO Training
Use this workflow for training reasoning models with group-relative advantages.
### Prerequisites Checklist
- [ ] Docker environment or Megatron-LM + SGLang installed
- [ ] Model checkpoint (HuggingFace or Megatron format)
- [ ] Training data in JSONL format
### Step 1: Prepare Data
```jsonl
# data.jsonl format
{"prompt": "What is 2 + 2?", "label": "4"}
{"prompt": "Solve: 3x = 12", "label": "x = 4"}
```
Or with chat format:
```json
{
  "prompt": [
    {"role": "system", "content": "You are a math tutor."},
    {"role": "user", "content": "What is 15 + 27?"}
  ],
  "label": "42"
}
```
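Each line of the file is one standalone JSON object. If it helps, a short script like the following writes data in this layout; the file name and example prompts are placeholders, not part of slime:
```python
import json

# Illustrative only: write prompt/label pairs in the JSONL layout shown above.
examples = [
    {"prompt": "What is 2 + 2?", "label": "4"},
    {"prompt": "Solve: 3x = 12", "label": "x = 4"},
]
with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```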
### Step 2: Configure Model
Choose a pre-configured model script:
```bash
# List available models
ls scripts/models/
# glm4-9B.sh, qwen3-4B.sh, qwen3-30B-A3B.sh, deepseek-v3.sh, llama3-8B.sh, ...
# Source your model
source scripts/models/qwen3-4B.sh
```
### Step 3: Launch Training
```bash
python train.py \
  --actor-num-nodes 1 \
  --actor-num-gpus-per-node 8 \
  --rollout-num-gpus 8 \
  --advantage-estimator grpo \
  --use-kl-loss \
  --kl-loss-coef 0.001 \
  --prompt-data /path/to/train.jsonl \
  --input-key prompt \
  --label-key label \
  --apply-chat-template \
  --rollout-batch-size 32 \
  --n-samples-per-prompt 8 \
  --global-batch-size 256 \
  --num-rollout 3000 \
  --save-interval 100 \
  --eval-interval 50 \
  ${MODEL_ARGS[@]}
```
### Step 4: Monitor Training
- [ ] Check TensorBoard: `tensorboard --logdir outputs/`
- [ ] Verify reward curves are increasing
- [ ] Monitor GPU utilization across nodes
---
## Workflow 2: Asynchronous Training
Use async mode for higher throughput by overlapping rollout and training.
### When to Use Async
- Large models with long generation times
- High GPU idle time in synchronous mode
- Sufficient memory for buffering
### Launch Async Training
```bash
python train_async.py \
  --actor-num-nodes 1 \
  --actor-num-gpus-per-node 8 \
  --rollout-num-gpus 8 \
  --advantage-estimator grpo \
  --async-buffer-size 4 \
  --prompt-data /path/to/train.jsonl \
  ${MODEL_ARGS[@]}
```
### Async-Specific Parameters
```bash
--async-buffer-size 4 # Number of rollouts to buffer
--update-weights-interval 2 # Sync weights every N rollouts
```
---
## Workflow 3: Multi-Turn Agentic Training
Use this workflow for training agents with tool use or multi-step reasoning.
### Prerequisites
- [ ] Custom generate function for multi-turn logic
- [ ] Tool/environment interface
### Step 1: Define Custom Generate Function
```python
# custom_generate.py
async def custom_generate(args, samples, evaluation=False):
    """Multi-turn generation with tool calling."""
    for sample in samples:
        conversation = sample.prompt
        for turn in range(args.max_turns):
            # Generate response
            response = await generate_single(conversation)
            # Check for tool call
            tool_call = extract_tool_call(response)
            if tool_call:
                tool_result = execute_tool(tool_call)
                conversation.append({"role": "assistant", "content": response})
                conversation.append({"role": "tool", "content": tool_result})
            else:
                break
        sample.response = response
        sample.reward = compute_reward(sample)
    return samples
```
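The helpers used above (`generate_single`, `extract_tool_call`, `execute_tool`, `compute_reward`) are left to you. As one hedged sketch, a tag-based `extract_tool_call` could look like the following, assuming the chat template wraps calls in `<tool_call>...</tool_call>` (adapt the tag and payload format to your template):
```python
import json
import re

# Illustrative parser: find a <tool_call>{...}</tool_call> block in the model output
# and return the parsed JSON payload, or None if no tool call is present.
_TOOL_CALL_RE = re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL)

def extract_tool_call(response: str):
    match = _TOOL_CALL_RE.search(response)
    if match is None:
        return None
    try:
        return json.loads(match.group(1))
    except json.JSONDecodeError:
        return None
```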
### Step 2: Launch with Custom Function
```bash
python train.py \
  --custom-generate-function-path custom_generate.py \
  --max-turns 5 \
  --prompt-data /path/to/agent_data.jsonl \
  ${MODEL_ARGS[@]}
```
See `examples/search-r1/` for a complete multi-turn search example.
---
## Configuration Reference
### Three Argument Categories
slime uses three types of arguments:
**1. Megatron Arguments** (passed directly):
```bash
--tensor-model-parallel-size 2
--pipeline-model-parallel-size 1
--num-layers 32
--hidden-size 4096
```
**2. SGLang Arguments** (prefixed with `--sglang-`):
```bash
--sglang-mem-fraction-static 0.8
--sglang-context-length 8192
--sglang-log-level INFO
```
**3. slime Arguments**:
```bash
# Resource allocation
--actor-num-nodes 1
--actor-num-gpus-per-node 8
--rollout-num-gpus 8
--colocate # Share GPUs between training/inference
# Data
--prompt-data /path/to/data.jsonl
--input-key prompt
--label-key label
# Training loop
--num-rollout 3000
--rollout-batch-size 32
--n-samples-per-prompt 8
--global-batch-size 256
# Algorithm
--advantage-estimator grpo # or: gspo, ppo, reinforce_plus_plus
--use-kl-loss
--kl-loss-coef 0.001
```
### Key Constraints
```
rollout_batch_size × n_samples_per_prompt = global_batch_size × num_steps_per_rollout
```
Example: 32 × 8 = 256 × 1
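Put differently, the number of optimizer steps per rollout is the total number of generated samples divided by the global batch size. A quick pre-launch check (plain Python, not part of slime; the variable names just mirror the CLI flags):
```python
# Sanity-check the batch-size constraint for a planned run.
rollout_batch_size = 32
n_samples_per_prompt = 8
global_batch_size = 256

total_samples = rollout_batch_size * n_samples_per_prompt  # 256 samples generated per rollout
assert total_samples % global_batch_size == 0, "global batch size must divide the rollout sample count"
num_steps_per_rollout = total_samples // global_batch_size
print(num_steps_per_rollout)  # 1 optimizer step per rollout in this configuration
```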
---
## Data Buffer System
slime's data buffer enables flexible data management:
### Basic Data Source
```python
class RolloutDataSource:
    def get_samples(self, num_samples):
        """Fetch prompts from dataset."""
        return self.dataset.sample(num_samples)

    def add_samples(self, samples):
        """Called after generation (no-op by default)."""
        pass
```
### Buffered Data Source (Off-Policy)
```python
class RolloutDataSourceWithBuffer(RolloutDataSource):
    def __init__(self):
        self.buffer = []

    def add_samples(self, samples):
        """Store generated samples for reuse."""
        self.buffer.extend(samples)

    def buffer_filter(self, args, buffer, num_samples):
        """Custom selection logic (prioritized, stratified, etc.)."""
        return select_best(buffer, num_samples)
```
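`select_best` above is a placeholder. As a hedged illustration, a `buffer_filter` that keeps the highest-reward samples and leaves the rest buffered might look like this (it assumes each sample carries a numeric `reward` attribute set during generation):
```python
# Illustrative buffer filter: return the top-reward samples, keep the remainder buffered.
def top_reward_filter(args, buffer, num_samples):
    ranked = sorted(buffer, key=lambda s: s.reward, reverse=True)
    selected, remainder = ranked[:num_samples], ranked[num_samples:]
    buffer[:] = remainder  # unselected samples stay available for later rollouts
    return selected
```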
---
## Common Issues and Solutions
### Issue: SGLang Engine Crash
**Symptoms**: Inference engine dies mid-training
**Solutions**:
```bash
# Enable fault tolerance
--use-fault-tolerance
# Increase memory allocation
--sglang-mem-fraction-static 0.85
# Reduce batch size
--rollout-batch-size 16
```
### Issue: Weight Sync Timeout
**Symptoms**: Training hangs after rollout
**Solutions**:
```bash
# Increase sync interval
--update-weights-interval 5
# Use colocated mode (no network transfer)
--colocate
```
### Issue: OOM During Training
**Symptoms**: CUDA OOM in backward pass
**Solutions**:
```bash
# Enable gradient checkpointing
--recompute-activations
# Reduce micro-batch size
--micro-batch-size 1
# Enable sequence parallelism
--sequence-parallel
```
### Issue: Slow Data Loading
**Symptoms**: GPU idle during data fetch
**Solutions**:
```bash
# Increase data workers
--num-data-workers 4
# Use streaming dataset
--streaming-data
```
---
## Supported Models
| Model Family | Configurations |
|--------------|----------------|
| GLM | GLM-4.5, GLM-4.6, GLM-4.7, GLM-Z1-9B |
| Qwen | Qwen3 (4B, 8B, 30B-A3B), Qwen3-MoE, Qwen2.5 |
| DeepSeek | V3, V3.1, R1 |
| Llama | Llama 3 (8B, 70B) |
| Others | Kimi K2, Moonlight-16B |
Each model has pre-configured scripts in `scripts/models/`.
---
## Advanced Topics
### Co-location Mode
Share GPUs between training and inference to reduce the total number of GPUs required:
```bash
python train.py \
  --colocate \
  --actor-num-gpus-per-node 8 \
  --sglang-mem-fraction-static 0.4 \
  ${MODEL_ARGS[@]}
```
### Custom Reward Model
```python
# custom_rm.py
class CustomRewardModel:
    def __init__(self, model_path):
        self.model = load_model(model_path)

    def compute_reward(self, prompts, responses):
        inputs = self.tokenize(prompts, responses)
        scores = self.model(inputs)
        return scores.tolist()
```
```bash
--custom-rm-path custom_rm.py
```
### Multi-Task Evaluation
```bash
--eval-prompt-data aime /path/to/aime.jsonl \
--eval-prompt-data gsm8k /path/to/gsm8k.jsonl \
--n-samples-per-eval-prompt 16
```
---
## Resources
- **Documentation**: https://thudm.github.io/slime/
- **GitHub**: https://github.com/THUDM/slime
- **Blog**: https://lmsys.org/blog/2025-07-09-slime/
- **Examples**: See the `examples/` directory for 14+ worked examples