# Provider Configuration
Guide to using Instructor with different LLM providers.
## Anthropic Claude
```python
import instructor
from anthropic import Anthropic

# Basic setup
client = instructor.from_anthropic(Anthropic())

# With an explicit API key
client = instructor.from_anthropic(
    Anthropic(api_key="your-api-key")
)

# Recommended mode
client = instructor.from_anthropic(
    Anthropic(),
    mode=instructor.Mode.ANTHROPIC_TOOLS,
)

# Usage
result = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "..."}],
    response_model=YourModel,
)
```
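Every call above passes a `response_model`; Instructor builds on Pydantic, so that argument is an ordinary `BaseModel` subclass. A minimal sketch of such a model (the `UserInfo` class and sample data here are illustrative, not from Instructor itself):

```python
from pydantic import BaseModel


class UserInfo(BaseModel):
    """Schema the LLM's output is validated against."""
    name: str
    age: int


# Instructor parses the model's JSON/tool-call output through this schema,
# so `result` in the provider examples is a typed Pydantic instance.
# Validation alone can be exercised without any API call:
parsed = UserInfo.model_validate({"name": "Ada", "age": 36})
print(parsed.name, parsed.age)  # Ada 36
```

If the model's output does not match the schema, Instructor retries or raises a validation error, depending on configuration.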
## OpenAI
```python
import instructor
from openai import OpenAI

client = instructor.from_openai(OpenAI())

result = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=YourModel,
    messages=[{"role": "user", "content": "..."}],
)
```
## Local Models (Ollama)
```python
import instructor
from openai import OpenAI

# Ollama exposes an OpenAI-compatible endpoint, so the OpenAI client
# works with a local base_url; the api_key is a required placeholder.
client = instructor.from_openai(
    OpenAI(
        base_url="http://localhost:11434/v1",
        api_key="ollama",
    ),
    mode=instructor.Mode.JSON,
)

result = client.chat.completions.create(
    model="llama3.1",
    response_model=YourModel,
    messages=[...],
)
```
## Modes
- `Mode.ANTHROPIC_TOOLS`: Recommended for Claude
- `Mode.TOOLS`: OpenAI function calling
- `Mode.JSON`: Fallback for providers without native tool/function calling