mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-04-28 15:01:34 +08:00
* feat(plugins): google_meet — bundled plugin to join and transcribe Meet calls

  v1 ships transcribe-only. Spawns headless Chromium via Playwright, joins an
  explicit https://meet.google.com/ URL, enables live captions, and scrapes
  them into a transcript file the agent can read across turns. With the
  meeting content in context, the agent can do follow-up work (send a recap,
  file issues, schedule follow-ups) with its regular tools.

  Surface:
  - Tools: meet_join, meet_status, meet_transcript, meet_leave, meet_say
    (meet_say is a v1 stub — returns not-implemented; v2 will wire realtime
    duplex audio via OpenAI Realtime / Gemini Live + BlackHole / PulseAudio
    null-sink.)
  - CLI: hermes meet setup | auth | join | status | transcript | stop
  - Lifecycle: on_session_end auto-leaves any still-running bot.

  Safety:
  - URL regex rejects anything that isn't https://meet.google.com/...
  - No calendar scanning, no auto-dial, no auto-consent announcement.
  - Single active meeting per install; a second meet_join leaves the first.
  - Platform-gated to Linux + macOS (Windows audio routing for v2 untested).
  - Opt-in: standalone plugin; the user must add 'google_meet' to
    plugins.enabled in config.yaml.

  Zero core changes. The plugin uses the existing register_tool /
  register_cli_command / register_hook surfaces. 21 new unit tests cover the
  URL safety gate, transcript dedup + status round-trip, process-manager
  refusal/start/stop paths, tool-handler JSON shape under each branch,
  session-end cleanup, and the platform-gated register().

* feat(plugins/google_meet): v2 realtime audio + v3 remote node host

  v2 — the agent speaks in-meeting:
  - audio_bridge.py: PulseAudio null-sink (Linux) + BlackHole probe (macOS).
    On Linux we load pactl module-null-sink + module-virtual-source and track
    the module ids for teardown; Chrome gets PULSE_SOURCE=<virt src> in its
    env so its fake mic reads whatever we write to the sink.
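The URL safety gate named above could look roughly like the sketch below. The pattern and function name are illustrative assumptions, not the plugin's actual code; the only requirement stated in the commit is that anything other than an explicit https://meet.google.com/ URL is rejected.

```python
import re

# Hypothetical sketch of the v1 URL gate: accept only explicit
# https://meet.google.com/<code> URLs and reject everything else.
_MEET_URL_RE = re.compile(r"^https://meet\.google\.com/[A-Za-z0-9\-]+$")


def is_safe_meet_url(url: str) -> bool:
    """Return True only for explicit Google Meet URLs."""
    return bool(_MEET_URL_RE.match(url.strip()))
```

Anchoring the pattern at both ends is what stops lookalike hosts (e.g. a path segment that merely contains meet.google.com) from slipping through.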
  - On macOS we just probe for BlackHole 2ch and return its device name — the
    plugin refuses to switch the user's default audio input (that would
    surprise them).
  - realtime/openai_client.py: sync WebSocket client for the OpenAI Realtime
    API. RealtimeSession.speak(text) sends conversation.item.create +
    response.create, accumulates response.audio.delta PCM bytes, and appends
    them to a file. RealtimeSpeaker runs a JSONL-queue loop consuming
    meet_say calls. 'websockets' is an optional dep imported lazily.
  - meet_bot.py: when HERMES_MEET_MODE=realtime, provisions AudioBridge,
    starts RealtimeSession + a speaker thread, spawns paplay to pump PCM into
    the null-sink, then cleans everything up on SIGTERM. If any realtime
    setup step fails, it falls back cleanly to transcribe mode with an error
    flagged in status.json.
  - process_manager.enqueue_say(): writes a JSONL line to say_queue.jsonl;
    refuses when there is no active meeting or the active meeting is
    transcribe-only.
  - tools.meet_say: real implementation; requires active mode='realtime'.
    meet_join adds a mode='transcribe'|'realtime' param.

  v3 — remote node host:
  - node/protocol.py: JSON envelope (type, id, token, payload) + validate.
  - node/registry.py: $HERMES_HOME/workspace/meetings/nodes.json, with
    resolve() auto-selecting the sole registered node when name is None.
  - node/server.py: NodeServer — websockets.serve, bearer-token auth,
    dispatches start_bot/stop/status/transcript/say/ping onto the local
    process_manager. The token is auto-generated + persisted on first run.
  - node/client.py: NodeClient — a short-lived sync WS per RPC, raises
    RuntimeError on error envelopes, clean API matching the server.
  - node/cli.py: 'hermes meet node {run,list,approve,remove,status,ping}'
    subtree; wired into the main meet CLI by cli.py so 'hermes meet node'
    Just Works.
  - tools.py: every meet_* tool accepts node='<name>'|'auto'; when set, it
    routes through NodeClient to the remote bot instead of running locally.
    An unknown node gets a clear 'no registered meet node matches ...' error.
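The (type, id, token, payload) envelope plus validate described for node/protocol.py can be sketched as follows. This is an assumed minimal shape under the commit's description, not the actual node/protocol.py; the helper names and error messages are hypothetical.

```python
import json
import uuid

# Hypothetical sketch of the node RPC envelope: every frame carries
# type, id, token, and payload; validate() rejects anything else.
REQUIRED_KEYS = {"type", "id", "token", "payload"}


def make_envelope(msg_type: str, token: str, payload: dict) -> str:
    """Serialize one RPC frame as a JSON envelope."""
    return json.dumps({
        "type": msg_type,
        "id": str(uuid.uuid4()),
        "token": token,
        "payload": payload,
    })


def validate(raw: str, expected_token: str) -> dict:
    """Parse and check an envelope; raise ValueError on any problem."""
    try:
        env = json.loads(raw)
    except ValueError as exc:
        raise ValueError("envelope is not valid JSON") from exc
    if not isinstance(env, dict) or not REQUIRED_KEYS.issubset(env):
        raise ValueError("envelope missing required keys")
    if env["token"] != expected_token:
        raise ValueError("bad bearer token")
    return env
```

Putting the bearer token inside every envelope (rather than only in a handshake) matches the short-lived one-WS-per-RPC client the commit describes: each connection is independently authenticated.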
  - cli.py: 'hermes meet join --node my-mac --mode realtime' and
    'hermes meet say "..." --node my-mac' route to the node;
    'hermes meet node approve <name> <url> <token>' registers one.

  Tests: 21 v1 tests updated (meet_say is no longer a stub; the active
  record now carries mode). 20 new audio_bridge + realtime tests. 42 new
  node tests (protocol/registry/server/client/cli). 17 new v1/v2/v3
  integration tests at the plugin level covering enqueue_say edge cases,
  env-var passthrough, mode validation, node routing
  (known/unknown/auto/ambiguous), and argparse wiring for `hermes meet say`
  + `hermes meet node` + the --mode/--node flags.
  Total: 100 plugin tests + 58 plugin-system tests = 158 passing.

  E2E verified on Linux with a fresh HERMES_HOME: the plugin loads, 5 tools
  register, the on_session_end hook wires, the 'hermes meet' CLI tree wires
  including the node subtree, NodeRegistry round-trips, meet_join routes
  correctly to NodeClient under node='my-mac' with mode='realtime',
  enqueue_say accepts realtime and rejects transcribe, and argparse parses
  every new flag cleanly. Zero changes to core. All new code lives under
  plugins/google_meet/.

* feat(plugins/google_meet): auto-install, admission detect, mac PCM pump, barge-in, richer status

  Ready-for-live-test follow-up on PR #16364. Five additions that matter for
  the first live run on a real Meet, in priority order:

  1. hermes meet install [--realtime] [--yes]
     pip installs playwright + websockets and runs
     python -m playwright install chromium. --realtime installs the platform
     audio deps (pulseaudio-utils on Linux via sudo apt, blackhole-2ch +
     ffmpeg on macOS via brew). Prompts before sudo/brew unless --yes.
     Refuses on Windows. Refuses to auto-flip the macOS default input — the
     user still selects BlackHole in System Settings (deliberate; surprise
     audio rerouting is worse than a manual step).

  2. Admission detection
     _detect_admission(page): Leave button visible OR caption region
     attached OR participants list present → we're in-call.
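The lobby wait around the admission and denial probes could be structured like this. The probe names follow the commit message, but the loop itself is an assumed sketch (the real code polls a Playwright page; here the probes are plain callables so the control flow stands alone):

```python
import time


def wait_for_admission(detect_admission, detect_denied,
                       lobby_timeout: float = 300.0,
                       poll_interval: float = 0.1) -> str:
    """Poll the admission/denial probes until one fires or the lobby
    timeout (HERMES_MEET_LOBBY_TIMEOUT, default 300s) expires.

    Returns 'admitted', 'denied', or 'lobby_timeout'; the latter two
    map onto the status dict's leaveReason values.
    """
    deadline = time.monotonic() + lobby_timeout
    while time.monotonic() < deadline:
        if detect_denied():
            return "denied"
        if detect_admission():
            return "admitted"
        time.sleep(poll_interval)
    return "lobby_timeout"
```

Checking the denial probe before the admission probe means an explicit rejection short-circuits immediately rather than sitting out the full lobby timeout.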
     _detect_denied(page): "You can't join this video call" / "You were
     removed" / "No one responded to your request" → bail out.
     HERMES_MEET_LOBBY_TIMEOUT (default 300s) caps how long we sit in the
     lobby before giving up. in_call stays False until admitted. Status
     surfaces leaveReason: duration_expired | lobby_timeout | denied |
     page_closed.

  3. macOS PCM pump
     ffmpeg reads speaker.pcm (24kHz s16le mono) and writes it to the
     BlackHole output via -f audiotoolbox -audio_device_index <N>.
     _mac_audio_device_index() probes ffmpeg -f avfoundation
     -list_devices true to resolve 'BlackHole 2ch' to its numeric index,
     falling back to index 0 on probe failure. The Linux paplay pump is
     unchanged.

  4. Richer status dict
     _BotState now tracks realtime, realtimeReady, realtimeDevice,
     audioBytesOut, lastAudioOutAt, lastBargeInAt, joinAttemptedAt, and
     leaveReason. The RealtimeSession.audio_bytes_out / last_audio_out_at
     counters fold into the status file once a second so meet_status() can
     show the agent's voice activity in near-real-time.

  5. Barge-in
     RealtimeSession.cancel_response() sends type='response.cancel' over the
     same WS (lock-guarded so it's safe to call from the caption thread
     while speak() is reading frames), and response.cancelled is handled as
     a terminal frame type. _looks_like_human_speaker() gates triggers so
     the bot's own name, 'You', 'Unknown', and blanks don't self-cancel.
     Called from the caption drain loop: when a new caption arrives
     attributed to a real participant while rt.session exists, we fire
     cancel_response() and stamp lastBargeInAt.

  Tests: 20 new unit tests across _BotState telemetry, barge-in gating,
  admission/denied probe error handling, cancel_response with and without a
  connected WS, and `hermes meet install` CLI wiring (flag parsing +
  end-to-end subprocess.run verification + the Linux-already-installed fast
  path). Total: 171 passing across all google_meet test files + the
  plugin-system regression suite.
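The barge-in speaker gate can be approximated as below. The exact string set and the bot-name default are assumptions drawn from the commit text, not the plugin's actual _looks_like_human_speaker():

```python
def looks_like_human_speaker(name: str, bot_name: str = "Hermes") -> bool:
    """Gate barge-in triggers: ignore captions attributed to the bot
    itself, to Meet's placeholder speakers ('You', 'Unknown'), or to
    nobody at all, so the bot never cancels its own speech."""
    cleaned = (name or "").strip()
    if not cleaned:
        return False
    return cleaned.lower() not in {bot_name.lower(), "you", "unknown"}
```

Case-insensitive comparison here is a deliberate hedge: caption attribution strings are UI text and their capitalization is not guaranteed stable across Meet updates.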
  E2E verified on Linux: the plugin loads, all 5 tools register,
  `hermes meet install --realtime --yes` parses, a fresh bot's status.json
  has every new telemetry key, cancel_response on a disconnected session
  returns False without raising, and the barge-in helper gates the bot's
  own name correctly.

  Still out of scope (for a future PR, not blocking the live test): mic →
  Realtime duplex (the agent listening to meeting audio via WebRTC),
  node-host TLS/pairing UX, Windows audio, and Meet create + Twilio.

  Docs updated: SKILL.md now lists the installer subcommand, the lobby
  timeout, the barge-in caveat, and the full status-dict reference table.
  The README.md quick-start uses hermes meet install.
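The macOS device probe in item 3 reduces to parsing ffmpeg's -list_devices output for the BlackHole entry. A sketch of the parsing half, with the listing format assumed from typical ffmpeg avfoundation output (the real _mac_audio_device_index() also runs the subprocess, which is omitted here):

```python
import re


def parse_audio_device_index(listing: str,
                             device_name: str = "BlackHole 2ch") -> int:
    """Scan an ffmpeg avfoundation device listing for `device_name`
    and return its numeric index; fall back to index 0 when the probe
    output does not contain it (mirroring the commit's fallback)."""
    for line in listing.splitlines():
        # Lines look like: "[AVFoundation ...] [1] BlackHole 2ch"
        m = re.search(r"\[(\d+)\]\s+(.+?)\s*$", line)
        if m and m.group(2) == device_name:
            return int(m.group(1))
    return 0
```

Matching the trailing `[N] Name` pair rather than splitting on whitespace keeps device names containing spaces (like "BlackHole 2ch") intact.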
333 lines
12 KiB
Python
"""OpenAI Realtime API WebSocket client + file-queue speaker.
|
|
|
|
This module is the "output" side of the v2 voice bridge: it takes text,
|
|
sends it to the OpenAI Realtime API, receives audio deltas back, and
|
|
appends the PCM bytes to a file. A separate consumer (the audio
|
|
bridge) streams that file into Chrome's fake microphone.
|
|
|
|
Designed for simplicity: a single synchronous WebSocket connection per
|
|
speaker, per session. The ``websockets`` package is imported lazily so
|
|
that importing this module never fails just because the optional dep
|
|
is missing.
|
|
"""

from __future__ import annotations

import base64
import json
import time
import uuid
from pathlib import Path
from typing import Any, Callable, Optional


REALTIME_URL = "wss://api.openai.com/v1/realtime"

def _require_websockets():
    """Import ``websockets.sync.client.connect`` or raise with a hint."""
    try:
        from websockets.sync.client import connect as _connect  # type: ignore
    except ImportError as exc:  # pragma: no cover - exercised via test
        raise RuntimeError(
            "websockets package is required for OpenAI Realtime; "
            "install with: pip install websockets"
        ) from exc
    return _connect

class RealtimeSession:
    """Minimal sync client for the OpenAI Realtime WebSocket API.

    Usage:
        sess = RealtimeSession(api_key=..., audio_sink_path=Path("out.pcm"))
        sess.connect()
        sess.speak("Hello team.")
        sess.close()

    Thread safety: ``speak`` and ``cancel_response`` may be called from
    different threads; a lock serializes WebSocket writes.
    """

    def __init__(
        self,
        api_key: str,
        model: str = "gpt-realtime",
        voice: str = "alloy",
        instructions: str = "",
        audio_sink_path: Optional[Path] = None,
        sample_rate: int = 24000,
    ) -> None:
        import threading as _threading

        self.api_key = api_key
        self.model = model
        self.voice = voice
        self.instructions = instructions
        self.audio_sink_path = Path(audio_sink_path) if audio_sink_path else None
        self.sample_rate = sample_rate
        self._ws: Any = None
        self._send_lock = _threading.Lock()
        self._last_response_id: Optional[str] = None
        # Public counters for status reporting.
        self.audio_bytes_out: int = 0
        self.last_audio_out_at: Optional[float] = None

    # ── lifecycle ─────────────────────────────────────────────────────────

    def connect(self) -> None:
        """Open the WS and send session.update with voice + instructions."""
        connect = _require_websockets()
        url = f"{REALTIME_URL}?model={self.model}"
        headers = [
            ("Authorization", f"Bearer {self.api_key}"),
            ("OpenAI-Beta", "realtime=v1"),
        ]
        # websockets.sync.client.connect accepts either additional_headers=
        # (newer) or extra_headers= depending on the version; try the newer
        # name first and fall back.
        try:
            self._ws = connect(url, additional_headers=headers)
        except TypeError:
            self._ws = connect(url, extra_headers=headers)

        self._send_json(
            {
                "type": "session.update",
                "session": {
                    "voice": self.voice,
                    "instructions": self.instructions,
                    "modalities": ["audio", "text"],
                    "output_audio_format": "pcm16",
                    "input_audio_format": "pcm16",
                },
            }
        )

    def close(self) -> None:
        if self._ws is not None:
            try:
                self._ws.close()
            except Exception:
                pass
            self._ws = None

    # ── speaking ──────────────────────────────────────────────────────────

    def speak(self, text: str, timeout: float = 30.0) -> dict:
        """Send ``text`` and accumulate the audio response.

        Audio deltas are base64-decoded and appended to
        ``audio_sink_path`` (opened 'ab' and closed per call, so a
        separate streaming reader can consume whatever is there).
        """
        if self._ws is None:
            raise RuntimeError("RealtimeSession.connect() must be called first")

        start = time.monotonic()

        self._send_json(
            {
                "type": "conversation.item.create",
                "item": {
                    "type": "message",
                    "role": "user",
                    "content": [{"type": "input_text", "text": text}],
                },
            }
        )
        self._send_json(
            {
                "type": "response.create",
                "response": {"modalities": ["audio"]},
            }
        )

        bytes_written = 0
        sink_fp = None
        if self.audio_sink_path is not None:
            self.audio_sink_path.parent.mkdir(parents=True, exist_ok=True)
            sink_fp = open(self.audio_sink_path, "ab")

        try:
            while True:
                remaining = timeout - (time.monotonic() - start)
                if remaining <= 0:
                    raise TimeoutError(
                        f"realtime response did not complete within {timeout}s"
                    )
                raw = self._recv(timeout=remaining)
                if raw is None:
                    # Connection closed by peer.
                    break
                try:
                    frame = json.loads(raw) if isinstance(raw, (str, bytes, bytearray)) else raw
                except (TypeError, ValueError):
                    continue
                if not isinstance(frame, dict):
                    continue
                ftype = frame.get("type")
                if ftype == "response.audio.delta":
                    b64 = frame.get("delta") or frame.get("audio") or ""
                    if b64 and sink_fp is not None:
                        try:
                            chunk = base64.b64decode(b64)
                        except (ValueError, TypeError):
                            chunk = b""
                        if chunk:
                            sink_fp.write(chunk)
                            sink_fp.flush()
                            bytes_written += len(chunk)
                            self.audio_bytes_out += len(chunk)
                            self.last_audio_out_at = time.time()
                elif ftype == "response.created":
                    rid = (frame.get("response") or {}).get("id")
                    if rid:
                        self._last_response_id = rid
                elif ftype in ("response.done", "response.completed", "response.cancelled"):
                    break
                elif ftype == "error":
                    err = frame.get("error") or frame
                    raise RuntimeError(f"realtime error: {err}")
                # All other frames (response.output_item.*,
                # response.audio_transcript.delta, rate_limits.updated, ...)
                # are ignored for v2.
        finally:
            if sink_fp is not None:
                sink_fp.close()

        duration_ms = (time.monotonic() - start) * 1000.0
        return {
            "ok": True,
            "bytes_written": bytes_written,
            "duration_ms": duration_ms,
        }

    def cancel_response(self) -> bool:
        """Interrupt the in-flight response (barge-in).

        Sends ``response.cancel`` on the current WebSocket so the model
        stops generating audio immediately. Safe to call at any time;
        returns True if a cancel was actually sent, False when there is
        nothing to cancel or the socket isn't open.
        """
        if self._ws is None:
            return False
        try:
            self._send_json({"type": "response.cancel"})
            return True
        except Exception:
            return False

    # ── ws plumbing ───────────────────────────────────────────────────────

    def _send_json(self, payload: dict) -> None:
        assert self._ws is not None
        with self._send_lock:
            self._ws.send(json.dumps(payload))

    def _recv(self, timeout: Optional[float] = None):
        assert self._ws is not None
        try:
            if timeout is None:
                return self._ws.recv()
            return self._ws.recv(timeout=timeout)
        except TypeError:
            # Older websockets may not accept the timeout kwarg.
            return self._ws.recv()


class RealtimeSpeaker:
    """File-based JSONL queue wrapper around :class:`RealtimeSession`.

    Each line in ``queue_path`` is a JSON object of the form
    ``{"id": "<uuid>", "text": "..."}``. Processed lines are appended
    to ``processed_path`` (if set) and then removed from the queue;
    if ``processed_path`` is ``None``, processed lines are simply
    dropped.
    """

    def __init__(
        self,
        session: RealtimeSession,
        queue_path: Path,
        processed_path: Optional[Path] = None,
    ) -> None:
        self.session = session
        self.queue_path = Path(queue_path)
        self.processed_path = Path(processed_path) if processed_path else None

    # ── helpers ──────────────────────────────────────────────────────────

    def _read_queue(self) -> list[dict]:
        if not self.queue_path.exists():
            return []
        out: list[dict] = []
        for line in self.queue_path.read_text().splitlines():
            line = line.strip()
            if not line:
                continue
            try:
                entry = json.loads(line)
            except ValueError:
                continue
            if not isinstance(entry, dict):
                continue
            if "id" not in entry:
                entry["id"] = str(uuid.uuid4())
            out.append(entry)
        return out

    def _rewrite_queue(self, remaining: list[dict]) -> None:
        if not remaining:
            # Keep the file but empty — consumers may be watching for
            # new writes via mtime, and delete-then-recreate is a race.
            self.queue_path.write_text("")
            return
        self.queue_path.write_text(
            "\n".join(json.dumps(e) for e in remaining) + "\n"
        )

    def _append_processed(self, entry: dict, result: dict) -> None:
        if self.processed_path is None:
            return
        self.processed_path.parent.mkdir(parents=True, exist_ok=True)
        record = {"id": entry.get("id"), "text": entry.get("text", ""), "result": result}
        with open(self.processed_path, "a") as fp:
            fp.write(json.dumps(record) + "\n")

    # ── main loop ────────────────────────────────────────────────────────

    def run_until_stopped(
        self,
        stop_fn: Callable[[], bool],
        poll_interval: float = 0.5,
    ) -> None:
        while not stop_fn():
            entries = self._read_queue()
            if not entries:
                time.sleep(poll_interval)
                continue
            # Process one entry at a time; re-check the queue file after
            # each speak() call because new entries may have arrived.
            head = entries[0]
            text = (head.get("text") or "").strip()
            if text:
                try:
                    result = self.session.speak(text)
                except Exception as exc:
                    result = {"ok": False, "error": str(exc)}
            else:
                result = {"ok": True, "bytes_written": 0, "duration_ms": 0.0}
            self._append_processed(head, result)

            # Re-read the queue from disk in case it was appended to
            # while we were speaking, then drop the head.
            latest = self._read_queue()
            if latest and latest[0].get("id") == head.get("id"):
                self._rewrite_queue(latest[1:])
            else:
                # Fallback: drop-by-id anywhere in the queue.
                self._rewrite_queue(
                    [e for e in latest if e.get("id") != head.get("id")]
                )