Mirror of https://github.com/NousResearch/hermes-agent.git (synced 2026-04-28 06:51:16 +08:00)
* feat(plugins): google_meet — bundled plugin for join+transcribe Meet calls

  v1 ships transcribe-only. Spawns headless Chromium via Playwright, joins an explicit https://meet.google.com/ URL, enables live captions, and scrapes them into a transcript file the agent can read across turns. The agent then has the meeting content in context and can do followup work (send recap, file issues, schedule followups) with its regular tools.

  Surface:
  - Tools: meet_join, meet_status, meet_transcript, meet_leave, meet_say (meet_say is a v1 stub — returns not-implemented; v2 will wire realtime duplex audio via OpenAI Realtime / Gemini Live + BlackHole / PulseAudio null-sink).
  - CLI: hermes meet setup | auth | join | status | transcript | stop
  - Lifecycle: on_session_end auto-leaves any still-running bot.

  Safety:
  - URL regex rejects anything that isn't https://meet.google.com/...
  - No calendar scanning, no auto-dial, no auto-consent announcement.
  - Single active meeting per install; a second meet_join leaves the first.
  - Platform-gated to Linux + macOS (Windows audio routing for v2 untested).
  - Opt-in: standalone plugin; the user must add 'google_meet' to plugins.enabled in config.yaml.

  Zero core changes. The plugin uses the existing register_tool / register_cli_command / register_hook surfaces. 21 new unit tests cover the URL safety gate, transcript dedup + status round-trip, process-manager refusals/start/stop paths, tool-handler JSON shape under each branch, session-end cleanup, and platform-gated register().

* feat(plugins/google_meet): v2 realtime audio + v3 remote node host

  v2 — agent speaks in-meeting:
  - audio_bridge.py: PulseAudio null-sink (Linux) + BlackHole probe (macOS). On Linux we load pactl module-null-sink + module-virtual-source and track module ids for teardown; Chrome gets a PULSE_SOURCE=<virt src> env so its fake mic reads what we write to the sink. macOS just probes BlackHole 2ch and returns its device name — the plugin refuses to switch the user's default audio input (that would surprise them).
  - realtime/openai_client.py: sync WebSocket client for the OpenAI Realtime API. RealtimeSession.speak(text) sends conversation.item.create + response.create, accumulates response.audio.delta PCM bytes, and appends them to a file. RealtimeSpeaker runs a JSONL-queue loop consuming meet_say calls. 'websockets' is an optional dep imported lazily.
  - meet_bot.py: when HERMES_MEET_MODE=realtime, provisions AudioBridge, starts RealtimeSession + a speaker thread, spawns paplay to pump PCM into the null-sink, then cleans everything up on SIGTERM. If any realtime setup step fails, falls back cleanly to transcribe mode with an error flagged in status.json.
  - process_manager.enqueue_say(): writes a JSONL line to say_queue.jsonl; refuses when there is no active meeting or the active meeting is transcribe-only.
  - tools.meet_say: real implementation; requires active mode='realtime'. meet_join adds a mode='transcribe'|'realtime' param.

  v3 — remote node host:
  - node/protocol.py: JSON envelope (type, id, token, payload) + validate.
  - node/registry.py: $HERMES_HOME/workspace/meetings/nodes.json, with resolve() auto-selecting the sole registered node when name is None.
  - node/server.py: NodeServer — websockets.serve, bearer-token auth, dispatches start_bot/stop/status/transcript/say/ping onto the local process_manager. Token auto-generated + persisted on first run.
  - node/client.py: NodeClient — short-lived sync WS per RPC, raises RuntimeError on error envelopes, clean API matching the server.
  - node/cli.py: 'hermes meet node {run,list,approve,remove,status,ping}' subtree; wired into the main meet CLI by cli.py so 'hermes meet node' Just Works.
  - tools.py: every meet_* tool accepts node='<name>'|'auto'; when set, routes through NodeClient to the remote bot instead of running locally. An unknown node yields a clear 'no registered meet node matches ...' error.
  - cli.py: 'hermes meet join --node my-mac --mode realtime' and 'hermes meet say "..." --node my-mac' route to the node; 'hermes meet node approve <name> <url> <token>' registers one.

  Tests: 21 v1 tests updated (meet_say is no longer a stub; the active record now carries mode). 20 new audio_bridge + realtime tests. 42 new node tests (protocol/registry/server/client/cli). 17 new v1/v2/v3 integration tests at the plugin level covering enqueue_say edge cases, env var passthrough, mode validation, node routing (known/unknown/auto/ambiguous), and argparse wiring for `hermes meet say` + `hermes meet node` + the --mode/--node flags. Total: 100 plugin tests + 58 plugin-system tests = 158 passing.

  E2E verified on Linux with a fresh HERMES_HOME: the plugin loads, 5 tools register, the on_session_end hook wires, the 'hermes meet' CLI tree wires including the node subtree, NodeRegistry round-trips, meet_join routes correctly to NodeClient under node='my-mac' with mode='realtime', enqueue_say accepts realtime / rejects transcribe, and argparse parses every new flag cleanly. Zero changes to core. All new code lives under plugins/google_meet/.

* feat(plugins/google_meet): auto-install, admission detect, mac PCM pump, barge-in, richer status

  Ready-for-live-test follow-up on PR #16364. Five additions that matter for the first live run on a real Meet, in priority order:

  1. hermes meet install [--realtime] [--yes]: pip install playwright websockets + python -m playwright install chromium. --realtime installs platform audio deps (pulseaudio-utils on Linux via sudo apt, blackhole-2ch + ffmpeg on macOS via brew). Prompts before sudo/brew unless --yes. Refuses on Windows. Refuses to auto-flip the macOS default input — the user still selects BlackHole in System Settings (deliberate; surprise audio rerouting is worse than a manual step).
  2. Admission detection: _detect_admission(page) — Leave button visible OR caption region attached OR participants list present → we're in-call. _detect_denied(page) — 'You can't join this video call' / 'You were removed' / 'No one responded to your request' → bail out. HERMES_MEET_LOBBY_TIMEOUT (default 300s) caps how long we sit in the lobby before giving up. in_call stays False until admitted. Status surfaces leaveReason: duration_expired | lobby_timeout | denied | page_closed.
  3. macOS PCM pump: ffmpeg reads speaker.pcm (24kHz s16le mono) and writes to the BlackHole AVFoundation output via -f audiotoolbox -audio_device_index <N>. _mac_audio_device_index() probes ffmpeg -f avfoundation -list_devices true to resolve 'BlackHole 2ch' → a numeric index; falls back to index 0 on probe failure. The Linux paplay pump is unchanged.
  4. Richer status dict: _BotState now tracks realtime, realtimeReady, realtimeDevice, audioBytesOut, lastAudioOutAt, lastBargeInAt, joinAttemptedAt, leaveReason. RealtimeSession.audio_bytes_out / last_audio_out_at counters fold into the status file once a second so meet_status() can show the agent's voice activity in near-real-time.
  5. Barge-in: RealtimeSession.cancel_response() sends type='response.cancel' over the same WS (lock-guarded so it's safe to call from the caption thread while speak() is reading frames). Handles response.cancelled as a terminal frame type. _looks_like_human_speaker() gates triggers so the bot's own name, 'You', 'Unknown', and blanks don't self-cancel. Called from the caption drain loop: when a new caption arrives attributed to a real participant while rt.session exists, we fire cancel_response() and stamp lastBargeInAt.

  Tests: 20 new unit tests across _BotState telemetry, barge-in gating, admission/denied probe error handling, cancel_response with and without a connected WS, and `hermes meet install` CLI wiring (flag parsing + end-to-end subprocess.run verification + a Linux-already-installed fast path). Total 171 passing across all google_meet test files + the plugin-system regression suite.

  E2E verified on Linux: the plugin loads, all 5 tools register, `hermes meet install --realtime --yes` parses, a fresh bot's status.json has every new telemetry key, cancel_response on a disconnected session returns False without raising, and the barge-in helper gates the bot's own name correctly.

  Still out of scope (for a future PR, not blocking the live test): mic → Realtime duplex (the agent listening to meeting audio via WebRTC), node-host TLS/pairing UX, Windows audio, Meet create + Twilio.

  Docs updated: SKILL.md now lists the installer subcommand, lobby timeout, barge-in caveat, and the full status-dict reference table. README.md quick-start uses hermes meet install.
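
The node protocol the v3 commit describes is a JSON envelope of `(type, id, token, payload)` plus a `validate` step. Below is a hedged Python sketch of what that envelope and its validation could look like; the field names come from the commit message, but `make_envelope`, `REQUIRED_FIELDS`, and the exact checks are assumptions, not the plugin's actual `node/protocol.py`:

```python
import json
import uuid

# Hypothetical reconstruction of the (type, id, token, payload) envelope.
# Field names are from the commit message; validation details are assumed.
REQUIRED_FIELDS = ("type", "id", "token", "payload")
KNOWN_TYPES = {"start_bot", "stop", "status", "transcript", "say", "ping"}

def make_envelope(msg_type: str, token: str, payload: dict) -> dict:
    """Build a request envelope with a fresh message id."""
    return {"type": msg_type, "id": uuid.uuid4().hex,
            "token": token, "payload": payload}

def validate(raw: str) -> dict:
    """Parse a JSON envelope and check its shape; raise ValueError on any problem."""
    env = json.loads(raw)
    for field in REQUIRED_FIELDS:
        if field not in env:
            raise ValueError(f"missing field: {field}")
    if env["type"] not in KNOWN_TYPES:
        raise ValueError(f"unknown type: {env['type']}")
    return env
```

A server along these lines would call `validate` on every incoming frame and compare `env["token"]` against its persisted bearer token before dispatching on `env["type"]`.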
149 lines
6.9 KiB
Markdown
---
name: google_meet
description: Join a Google Meet call, transcribe live captions, optionally speak in realtime, and do the followup work afterwards. Use when the user asks the agent to sit in on a meeting, take notes, summarize, respond in-call, or action items from it.
version: 0.2.0
platforms:
  - linux
  - macos
metadata:
  hermes:
    tags: [meetings, google-meet, transcription, realtime-voice]
---

# google_meet

## When to use

The user says any of:

- "join my Meet at <url>"
- "take notes on this meeting"
- "summarize the meeting and send followups"
- "sit in on my standup"
- "be a bot in this call and speak up when X"

## Two modes

| Mode | What the bot does |
|---|---|
| `transcribe` (default) | Joins, enables captions, scrapes a transcript. Listen-only. |
| `realtime` | Same as transcribe PLUS speaks into the meeting via OpenAI Realtime. The agent calls `meet_say(text)` and the bot's voice comes out of the call. |

Pick `realtime` only when the user actually wants the agent to speak. It costs real money (OpenAI Realtime is pay-per-audio-minute) and requires a virtual audio device set up on the machine running the bot.

## Two locations

| Location | When |
|---|---|
| Local (default) | Gateway machine runs the Playwright bot directly. |
| Remote node (`node="<name>"`) | Bot runs on a different machine that has a signed-in Chrome and (for realtime) a configured audio bridge. Useful when the gateway runs on a headless Linux box but the user's real signed-in Chrome lives on their Mac. |

## Prerequisites the user must handle once

Easiest path — run the built-in installer:

```bash
hermes plugins enable google_meet
hermes meet install              # pip deps + Chromium (transcribe only)
hermes meet install --realtime   # + pulseaudio-utils / brew blackhole+ffmpeg
hermes meet auth                 # optional; skips guest-lobby wait
hermes meet setup                # preflight checks
```

`hermes meet install --realtime` prompts before running `sudo apt-get` (Linux)
or `brew install` (macOS). Pass `--yes` to skip the prompt. It will NOT touch
your macOS default-input setting — you have to select BlackHole 2ch in
System Settings yourself before starting a realtime meeting.

Or do it manually:

```bash
pip install playwright websockets && python -m playwright install chromium

# For realtime mode, additionally:
#   Linux: sudo apt install pulseaudio-utils
#   macOS: brew install blackhole-2ch ffmpeg
#          then System Settings → Sound → Input → BlackHole 2ch
# Then set OPENAI_API_KEY or HERMES_MEET_REALTIME_KEY in ~/.hermes/.env
```

For a remote node:

```bash
# on the user's Mac (where Chrome is signed in):
pip install playwright websockets && python -m playwright install chromium
hermes plugins enable google_meet
hermes meet node run --display-name my-mac   # persistent server
# copy the printed token

# on the gateway:
hermes meet node approve my-mac ws://<mac-ip>:18789 <token>
hermes meet node ping my-mac                 # confirm reachable
```

Run `hermes meet setup` to preflight local prereqs.

## Flow

1. **Join** — call `meet_join(url=..., mode=..., node=...)`. Returns immediately.
2. **Announce yourself** — no auto-consent. Say (in whatever channel the user is watching): "A Hermes agent bot is in this call taking notes."
3. **Poll** — `meet_status()` for liveness, `meet_transcript(last=20)` for recent captions. Don't re-read the whole transcript every turn.
4. **Speak (realtime only)** — `meet_say(text="...")` queues text for TTS. The speech lags by ~2s. Don't spam it.
5. **Leave** — `meet_leave()` when done, or set `duration="30m"` on `meet_join` for auto-leave.
6. **Follow up** — read `meet_transcript()` in full, summarize, and use regular tools to send the recap, file issues, schedule followups.
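
The steps above can be sketched as a loop. This is an illustration only: the `meet` wrapper object and `poll_interval` are hypothetical stand-ins for however the agent runtime actually dispatches the `meet_*` tools, and the stop condition (a non-null `leaveReason`) is an assumption based on the status-dict semantics:

```python
import time

# Hypothetical sketch of the join/poll/leave flow. `meet` stands in for the
# agent's tool dispatcher; `announce` posts to whatever channel the user watches.
def run_meeting(meet, url, announce, poll_interval=15):
    meet.meet_join(url=url, mode="transcribe", duration="30m")    # 1. join
    announce("A Hermes agent bot is in this call taking notes.")  # 2. announce
    while True:                                                   # 3. poll
        status = meet.meet_status()
        if status.get("leaveReason"):        # bot already left (timeout, duration, ...)
            break
        meet.meet_transcript(last=20)        # recent captions for this turn
        time.sleep(poll_interval)
    meet.meet_leave()                        # 5. explicit leave; harmless if gone
    return meet.meet_transcript()            # 6. full transcript for followup work
```

The point of the shape: poll `meet_status()` cheaply, read only the tail of the transcript while in-call, and pull the full transcript exactly once at the end.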

## Tool reference

| Tool | Parameters | Use |
|---|---|---|
| `meet_join` | `url`, `mode?`, `guest_name?`, `duration?`, `headed?`, `node?` | Start bot |
| `meet_status` | `node?` | Liveness + progress |
| `meet_transcript` | `last?`, `node?` | Read captions |
| `meet_leave` | `node?` | Close bot |
| `meet_say` | `text`, `node?` | Speak in realtime meeting |

`node?` on all tools: pass a registered node name (or `"auto"` for the sole node) to operate a remote bot instead of a local one. Omit for local.

## Important limits

- Captions are only as good as Google Meet's live captions: English-biased and lossy on overlapping speakers.
- Guest mode sits in the lobby until a host admits the bot. Warn the user; `hermes meet auth` avoids this.
- **Lobby timeout**: if the host doesn't admit the bot within 5 minutes (configurable via the `HERMES_MEET_LOBBY_TIMEOUT` env var), the bot leaves and `meet_status` reports `leaveReason: "lobby_timeout"`.
- **One active meeting per install per location.** A second `meet_join` leaves the first.
- **Windows not supported.**
- Realtime mode needs a virtual audio device. If the audio bridge setup fails, the bot falls back to transcribe mode and flags it in `meet_status().error`.
- `meet_say` requires `mode='realtime'` on the originating `meet_join`. Calling it against a transcribe-mode meeting returns a clear error.
- **Barge-in is best-effort.** When a caption arrives attributed to a real participant while the bot is generating audio, the bot sends `response.cancel` to OpenAI Realtime. Captions take ~500ms to show up, so the bot will talk over the first second or so of a human interruption.

## Status dict reference

`meet_status()` returns (subset shown; there are more keys):

| Key | Meaning |
|---|---|
| `inCall` | Past the lobby. False while waiting for admission. |
| `lobbyWaiting` | Clicked "Ask to join", waiting on host. |
| `joinAttemptedAt` / `joinedAt` | Timestamps for lobby-click and actual admission. |
| `captioning` | Caption observer is installed. |
| `transcriptLines` / `lastCaptionAt` | Transcript progress. |
| `realtime` / `realtimeReady` | Realtime mode provisioned / WS connected. |
| `realtimeDevice` | Audio device name the bot is feeding (e.g. `hermes_meet_src`). |
| `audioBytesOut` / `lastAudioOutAt` | How much PCM the OpenAI session has produced. |
| `lastBargeInAt` | Timestamp of the most recent `response.cancel` sent. |
| `leaveReason` | `duration_expired`, `lobby_timeout`, `denied`, `page_closed`, or null. |
| `error` | Last error (soft — bot may still be running). |
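
For orientation, here is an illustrative snapshot of a healthy realtime-mode status. The key names come from the table above; every value (and the ISO-8601 timestamp format) is made up for the example:

```python
# Illustrative meet_status() snapshot -- values are invented, and the exact
# timestamp format is an assumption; only the key names come from the docs.
example_status = {
    "inCall": True,
    "lobbyWaiting": False,
    "joinAttemptedAt": "2026-04-28T10:00:02Z",
    "joinedAt": "2026-04-28T10:00:41Z",
    "captioning": True,
    "transcriptLines": 118,
    "lastCaptionAt": "2026-04-28T10:17:09Z",
    "realtime": True,
    "realtimeReady": True,
    "realtimeDevice": "hermes_meet_src",
    "audioBytesOut": 1843200,        # ~38s of 24kHz s16le mono PCM
    "lastAudioOutAt": "2026-04-28T10:16:55Z",
    "lastBargeInAt": None,
    "leaveReason": None,             # still in the call
    "error": None,
}
```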

## Transcript location

Local:

```
$HERMES_HOME/workspace/meetings/<meeting-id>/transcript.txt
```

Remote node: transcript lives on the node host's disk. Use `meet_transcript(node=...)` to read it over RPC.

## Safety

- URL regex: only `https://meet.google.com/...` URLs pass.
- No calendar scanning. No auto-dial.
- Remote nodes use bearer-token auth; tokens are generated on the node (32 hex chars, persisted in `$HERMES_HOME/workspace/meetings/node_token.json`) and must be copied to the gateway via `hermes meet node approve`.
- `meet_say` text is rate-limited by the OpenAI Realtime session; spam-protection is the bot's problem, not yours, but still — don't queue hundreds of lines.
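
A URL gate along the lines the first bullet describes can be written as one anchored regex. The plugin's actual pattern may differ; this sketch just shows the shape of the check (scheme pinned to `https`, host pinned to `meet.google.com`, anything else rejected):

```python
import re

# Sketch of the URL safety gate; the character class for the path is an
# assumption -- the real plugin's regex may be stricter or looser.
MEET_URL = re.compile(r"^https://meet\.google\.com/[A-Za-z0-9?&=/_-]+$")

def is_allowed_meet_url(url: str) -> bool:
    """True only for explicit https://meet.google.com/... URLs."""
    return bool(MEET_URL.match(url.strip()))
```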