
Berth Remote Agent

Architecture, protocol, deployment, and security reference for the Berth agent system.

Overview

The Berth Remote Agent is a persistent Rust binary that runs on Linux servers as a systemd service. It receives code deployments from the Berth macOS app (or CLI) over gRPC or NATS, manages execution history in a local SQLite database, runs scheduled jobs independently, publishes services via cloudflared tunnels, and supports remote self-upgrade.

Key principle: Zero runtime dependencies + full persistence. The agent is a single static binary with embedded SQLite. All execution history, logs, events, and schedules survive restarts. Communication flows through NATS (primary) or gRPC (fallback) — neither desktop nor agent needs to expose inbound ports.

| Property            | Value                                                          |
|---------------------|----------------------------------------------------------------|
| Binary              | berth-agent                                                    |
| Language            | Rust (shared berth-core crate with app)                        |
| Protocol            | gRPC over HTTP/2 (tonic 0.12) + NATS command channel, 16 RPCs  |
| Default port        | 50051                                                          |
| Protobuf schema     | proto/berth.proto                                              |
| Local database      | ~/.berth/agent.db (SQLite, 5 tables)                           |
| Deployments dir     | ~/.berth/deploys/{project_id}/v{n}/                            |
| Supported platforms | Linux x86_64, Linux aarch64                                    |
| Scheduler           | Built-in, tick every 30s, runs independently of app            |
| Tunnel providers    | cloudflared (pluggable)                                        |
| NATS transport      | User-provided via Synadia Cloud (BYON)                         |

Architecture

System Topology

macOS (Berth App)
├─ React UI (WebKit) — Terminal component, EnvVars panel
├─ Tauri Commands (Rust) — emits events such as project-log
└─ berth-core
   ├─ AgentTransport (trait)
   │  ├─ AgentClient (gRPC)
   │  └─ NatsAgentClient
   ├─ ProjectStore (SQL)
   ├─ TLS (rcgen)
   └─ Credentials (Keychain)

      │ gRPC / HTTP/2 (fallback, port 50051)
      │
      │ NATS (primary, outbound TLS)
      ▼
Your NATS Server (BYON), e.g. Synadia Cloud
      │ NATS (outbound TLS)
      ▼
Linux Server — berth-agent (PersistentAgentService)
├─ gRPC Server (16 RPCs)
│  ├─ Health/Status    → version, uptime, os, arch, CPU
│  ├─ Deploy()         → extract tarball, install deps
│  ├─ Execute()        → run code, persist logs + events
│  ├─ Stop()           → kill process, emit event
│  ├─ GetExecutions()  → query execution history
│  ├─ GetEvents()      → poll store-and-forward events
│  ├─ *Schedule()      → agent-side cron management
│  ├─ Upgrade()        → receive binary, swap, restart
│  ├─ Publish()        → start cloudflared tunnel
│  └─ Unpublish()      → stop cloudflared tunnel
├─ NatsCommandHandler (subscribes berth.<agent_id>.cmd.>)
├─ NatsPublisher (events, logs, heartbeat via JetStream)
├─ TunnelManager (cloudflared process lifecycle)
├─ AgentStore (SQLite ~/.berth/agent.db)
│  └─ tables: deployments | executions | logs | events | schedules
├─ Agent Scheduler (tick every 30s)
├─ RunningProcesses (HashMap<project_id, AbortHandle>)
└─ sysinfo (real CPU/memory metrics)

~/.berth/deploys/{project_id}/v{n}/   ← persistent deploy dirs
└── main.py | requirements.txt | ...

Component Interaction

React UI ──invoke──▸ Tauri Command ──gRPC or NATS──▸ berth-agent
   ◂── project-log events ──┘      ◂── log stream ──┘

React UI:      [Run] button, Terminal component (logs), EnvVars panel
Tauri Command: reads code from disk, loads env vars from DB, emits project-log events
berth-agent:   writes code to deploy dir, spawns process (env vars passed at runtime),
               streams stdout/stderr, TunnelManager (cloudflared)

Transport selection: target.nats_enabled → NatsAgentClient
                     otherwise           → AgentClient (gRPC)

gRPC Protocol

Defined in proto/berth.proto. The agent implements a single gRPC service with 16 RPCs:

Service: AgentService

| RPC | Request | Response | Type | Description |
|-----|---------|----------|------|-------------|
| Health | HealthRequest | HealthResponse | Unary | Agent version, status, uptime, os, arch, tunnel providers |
| Status | StatusRequest | StatusResponse | Unary | CPU, memory, running projects list |
| Deploy | DeployRequest | DeployResponse | Unary | Extract tarball, install deps, persist deployment |
| Execute | ExecuteRequest | stream ExecuteResponse | Server streaming | Run code, stream logs, persist to SQLite |
| Stop | StopRequest | StopResponse | Unary | Kill process, emit stop event |
| StreamLogs | LogStreamRequest | stream LogStreamResponse | Server streaming | Live log streaming |
| GetExecutions | GetExecutionsRequest | GetExecutionsResponse | Unary | Query persistent execution history |
| GetExecutionLogs | GetExecutionLogsRequest | stream LogStreamResponse | Server streaming | Replay stored log lines (supports since_seq) |
| GetEvents | GetEventsRequest | GetEventsResponse | Unary | Poll store-and-forward events (since_id) |
| AckEvents | AckEventsRequest | AckEventsResponse | Unary | Acknowledge + prune old events |
| AddSchedule | AddScheduleRequest | AddScheduleResponse | Unary | Create agent-side cron schedule |
| RemoveSchedule | RemoveScheduleRequest | RemoveScheduleResponse | Unary | Delete a schedule |
| ListSchedules | ListSchedulesRequest | ListSchedulesResponse | Unary | List schedules for a project |
| Upgrade | stream UpgradeChunk | UpgradeResponse | Client streaming | Upload new binary, verify, swap, restart |
| Publish | PublishRequest | PublishResponse | Unary | Start a cloudflared tunnel for a project |
| Unpublish | UnpublishRequest | UnpublishResponse | Unary | Stop the tunnel for a project |
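Assembled from the table above, the service block in proto/berth.proto plausibly reads as follows (RPC and message names are taken from the table; their order in the actual file is an assumption):

```proto
service AgentService {
  rpc Health(HealthRequest) returns (HealthResponse);
  rpc Status(StatusRequest) returns (StatusResponse);
  rpc Deploy(DeployRequest) returns (DeployResponse);
  rpc Execute(ExecuteRequest) returns (stream ExecuteResponse);
  rpc Stop(StopRequest) returns (StopResponse);
  rpc StreamLogs(LogStreamRequest) returns (stream LogStreamResponse);
  rpc GetExecutions(GetExecutionsRequest) returns (GetExecutionsResponse);
  rpc GetExecutionLogs(GetExecutionLogsRequest) returns (stream LogStreamResponse);
  rpc GetEvents(GetEventsRequest) returns (GetEventsResponse);
  rpc AckEvents(AckEventsRequest) returns (AckEventsResponse);
  rpc AddSchedule(AddScheduleRequest) returns (AddScheduleResponse);
  rpc RemoveSchedule(RemoveScheduleRequest) returns (RemoveScheduleResponse);
  rpc ListSchedules(ListSchedulesRequest) returns (ListSchedulesResponse);
  rpc Upgrade(stream UpgradeChunk) returns (UpgradeResponse);
  rpc Publish(PublishRequest) returns (PublishResponse);
  rpc Unpublish(UnpublishRequest) returns (UnpublishResponse);
}
```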

Message Definitions

ExecuteRequest

message ExecuteRequest {
  string project_id  = 1;  // Unique project identifier
  string runtime     = 2;  // "python", "node", "go", "shell", "rust"
  string entrypoint  = 3;  // Filename: "main.py", "index.js", "run.sh"
  bytes  code        = 4;  // Inline code content (the file bytes)
  string working_dir = 5;  // Fallback working directory (if no inline code)
}

ExecuteResponse (streamed)

message ExecuteResponse {
  string stream    = 1;  // "stdout" or "stderr"
  string text      = 2;  // One line of output
  string timestamp = 3;  // RFC 3339 timestamp
}

HealthResponse

message HealthResponse {
  string agent_version    = 1;  // e.g. "0.1.9"
  string status           = 2;  // "healthy"
  uint64 uptime_seconds   = 3;  // Seconds since agent started
  string os               = 6;  // e.g. "linux" (from std::env::consts::OS)
  string arch             = 7;  // e.g. "x86_64" (from std::env::consts::ARCH)
  repeated string tunnel_providers = 9;  // e.g. ["cloudflared"] — installed tunnel binaries
}

StatusResponse

message StatusResponse {
  string agent_id             = 1;  // Hostname of the server
  string status               = 2;  // "running"
  double cpu_usage            = 3;  // Global CPU % (via sysinfo)
  uint64 memory_bytes         = 4;  // Used memory in bytes
  repeated ProjectStatus projects = 5;
}

NATS Command Channel

The primary transport for remote agent communication. Both the desktop app and the agent connect outbound to Synadia Cloud — zero inbound ports required on either side. Works behind NAT, firewalls, and across different networks.

Zero inbound ports: Both desktop and agent connect outbound to your Synadia Cloud NATS account. No port forwarding, no firewall rules, no direct network connectivity between desktop and agent.

AgentTransport Trait

Defined in crates/berth-core/src/agent_transport.rs. A unified async trait that abstracts the transport layer:

// Both AgentClient (gRPC) and NatsAgentClient implement this trait.
// Transport is selected per target based on the nats_enabled flag.
//
// If target has nats_enabled=true + nats_agent_id → NatsAgentClient
// Otherwise → AgentClient (gRPC fallback)

NatsCommandKind

The NatsCommandKind enum defines 15 command variants sent over NATS:

| Variant | Description |
|---------|-------------|
| Health | Agent health check |
| Status | CPU, memory, running projects |
| Stop | Kill a running project |
| Execute | Run code (streaming response) |
| Deploy | Deploy code to agent |
| GetExecutions | Query execution history |
| GetExecutionLogs | Replay stored log lines |
| AddSchedule | Create agent-side cron schedule |
| RemoveSchedule | Delete a schedule |
| ListSchedules | List schedules for a project |
| UpgradeDownload | Agent downloads binary from URL + checksum |
| DeployChunked | Chunked code deployment over NATS |
| Rollback | Rollback to previous agent binary |
| Publish | Start a cloudflared tunnel |
| Unpublish | Stop a cloudflared tunnel |

Subject Hierarchy

berth.<agent_id>.cmd.<type>         # Commands from desktop → agent
berth.<agent_id>.resp.<request_id>  # Streaming responses from agent → desktop
berth.<agent_id>.events             # Store-and-forward events (JetStream)
berth.<agent_id>.logs               # Log streaming (JetStream)
berth.<agent_id>.heartbeat          # Periodic heartbeat (JetStream)
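As a rough sketch, the subject strings above can be built with a few helpers (these are hypothetical functions for illustration, not berth-core API):

```rust
/// Command subject: desktop → agent.
fn cmd_subject(agent_id: &str, kind: &str) -> String {
    format!("berth.{agent_id}.cmd.{kind}")
}

/// Response subject: agent → desktop, keyed by request id for streaming replies.
fn resp_subject(agent_id: &str, request_id: &str) -> String {
    format!("berth.{agent_id}.resp.{request_id}")
}

/// Wildcard the agent subscribes to for all incoming commands.
fn cmd_wildcard(agent_id: &str) -> String {
    format!("berth.{agent_id}.cmd.>")
}

fn main() {
    println!("{}", cmd_subject("my-server", "execute")); // berth.my-server.cmd.execute
    println!("{}", cmd_wildcard("my-server"));           // berth.my-server.cmd.>
}
```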

NatsCommandHandler (Agent Side)

Defined in crates/berth-agent/src/nats_cmd_handler.rs. Subscribes to berth.<agent_id>.cmd.> and dispatches incoming commands to the corresponding PersistentAgentService::do_*() methods. Uses request-reply for simple RPCs and publish+subscribe for streaming operations (Execute, Deploy, Logs).

NatsAgentClient (Desktop Side)

Defined in crates/berth-core/src/nats_cmd_client.rs. Implements the AgentTransport trait over NATS. Uses request-reply for unary RPCs and publish+subscribe for streaming responses.

Transport Selection

The get_agent_client() function in agent_transport.rs returns a Box<dyn AgentTransport>. If the target has nats_enabled=true and a nats_agent_id, communication routes through NATS. Otherwise, it falls back to direct gRPC.
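The selection rule can be sketched in plain Rust (the Target fields and the Transport enum here are illustrative stand-ins for the real berth-core types):

```rust
// Illustrative stand-in for the stored target row.
struct Target {
    endpoint: String,
    nats_enabled: bool,
    nats_agent_id: Option<String>,
}

#[derive(Debug, PartialEq)]
enum Transport {
    Nats { agent_id: String },
    Grpc { endpoint: String },
}

/// NATS only when the flag is set AND an agent id is configured;
/// everything else falls back to direct gRPC.
fn select_transport(target: &Target) -> Transport {
    match (&target.nats_agent_id, target.nats_enabled) {
        (Some(id), true) => Transport::Nats { agent_id: id.clone() },
        _ => Transport::Grpc { endpoint: target.endpoint.clone() },
    }
}

fn main() {
    let t = Target {
        endpoint: "http://192.168.1.100:50051".into(),
        nats_enabled: true,
        nats_agent_id: Some("my-server".into()),
    };
    println!("{:?}", select_transport(&t));
}
```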

Target UI: The Add Target form includes an optional "NATS Agent ID" field. Targets with NATS enabled show a green "NATS" badge. The update_target_nats Tauri command toggles NATS on/off per target.

Public URL Publishing

Running projects can be published to a public URL via cloudflared tunnels. The architecture is pluggable — adding a new tunnel provider (ngrok, bore, custom) requires changes to one file only (tunnel.rs).

TunnelManager

Defined in crates/berth-core/src/tunnel.rs. Manages the lifecycle of tunnel processes: spawning the cloudflared child process, capturing the public URL it reports, and stopping the tunnel process on Unpublish.

Important: cloudflared must be installed separately on the agent machine — it is NOT bundled with the Berth agent binary. Install from GitHub releases or your package manager.

Capability Detection

The HealthResponse includes a tunnel_providers field (repeated string) that reports which tunnel binaries are installed on the agent. The available_providers() function checks for installed binaries at health-check time.
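A plausible shape for such a probe, using only the standard library (binary_on_path and the hard-coded provider list are assumptions, not the actual berth-core code):

```rust
use std::env;

/// True if `name` resolves to a file on $PATH (a stand-in for the agent's
/// binary probing at health-check time).
fn binary_on_path(name: &str) -> bool {
    env::var_os("PATH")
        .map(|paths| env::split_paths(&paths).any(|dir| dir.join(name).is_file()))
        .unwrap_or(false)
}

/// Probe known tunnel binaries and report the installed ones,
/// as surfaced in HealthResponse.tunnel_providers.
fn available_providers() -> Vec<&'static str> {
    ["cloudflared"].into_iter().filter(|b| binary_on_path(b)).collect()
}

fn main() {
    println!("tunnel providers: {:?}", available_providers());
}
```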

Full Stack Integration

LayerPublishUnpublish
Proto RPCsPublish(PublishRequest)Unpublish(UnpublishRequest)
AgentTransport traitpublish()unpublish()
gRPC clientAgentClient::publish()AgentClient::unpublish()
NATS clientNatsAgentClient::publish()NatsAgentClient::unpublish()
NATS commandNatsCommandKind::PublishNatsCommandKind::Unpublish
Tauri commandspublish_projectunpublish_project
React UIPublishPanel (port input + button)PublishPanel (green URL bar + Unpublish)
MCP toolsberth_publish(project_id, port, provider?)berth_unpublish(project_id)
CLIberth publish <project> --port 8080berth unpublish <project>

SQLite Storage

Two columns on the projects table: tunnel_url and tunnel_provider. Updated via set_tunnel_url() and clear_tunnel_url() store methods.

Self-Upgrade

The agent supports remote self-upgrade using a cloudflared-inspired model: download binary, verify, atomic swap, exit with code 42, systemd restarts with the new binary. Tested end-to-end (v0.1.7 → v0.1.8).

Upgrade Flow

1. Download binary — Agent downloads the new binary from GitHub releases (via NATS UpgradeDownload command or gRPC Upgrade RPC).
2. Verify checksum — SHA-256 checksum is verified against the expected value provided by the desktop.
3. Atomic swap — Current binary backed up as berth-agent.old, new binary moved into place via atomic rename.
4. Exit(42) — Agent exits with code 42. systemd sees this as a success exit (via SuccessExitStatus=42) and restarts with the new binary.
5. Probation — 30-second window after startup. Agent performs 3 TCP self-connect checks. Pass → .probation-passed marker file created. Fail → exit(1) → rollback.
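Steps 3 and 4 can be sketched with std::fs::rename, which is atomic on a single filesystem (the paths and the atomic_swap helper are illustrative, not the agent's actual code):

```rust
use std::{fs, io, path::Path};

/// Back up the current binary, then move the new one into place.
/// rename(2) is atomic when source and destination share a filesystem.
fn atomic_swap(current: &Path, incoming: &Path, backup: &Path) -> io::Result<()> {
    fs::rename(current, backup)?;   // berth-agent      -> berth-agent.old
    fs::rename(incoming, current)?; // berth-agent.new  -> berth-agent
    Ok(())
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("berth-swap-demo");
    fs::create_dir_all(&dir)?;
    let (cur, new, old) = (
        dir.join("berth-agent"),
        dir.join("berth-agent.new"),
        dir.join("berth-agent.old"),
    );
    fs::write(&cur, "v0.1.7")?;
    fs::write(&new, "v0.1.8")?;
    atomic_swap(&cur, &new, &old)?;
    assert_eq!(fs::read_to_string(&cur)?, "v0.1.8");
    // The real agent would now exit with code 42 so systemd restarts it:
    // std::process::exit(42);
    Ok(())
}
```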

systemd Configuration for Upgrade

SuccessExitStatus=42                              # Prevents rate-limiting on intentional restart
ExecStopPost=+/usr/local/lib/berth/rollback.sh    # Runs as root — restores old binary on failure

Rollback

If the new binary fails its probation checks, the agent exits non-zero and systemd's ExecStopPost hook runs rollback.sh as root, restoring the backed-up berth-agent.old binary.

CLI and Remote Upgrade

# CLI self-serve upgrade (on the agent machine)
berth-agent update [--version X.Y.Z] [--yes]

# Remote upgrade via NATS (desktop sends URL + checksum, agent downloads and swaps)
NatsCommandKind::UpgradeDownload { url, checksum }

Environment Variables

Per-project environment variables are stored on the desktop side only and passed to the agent at runtime. Values are never persisted on the remote agent.
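Log lines that contain env var values are masked before storage and transmission. A minimal sketch of that rule, assuming a simple substring replacement and the documented ≥ 3 character threshold (the real implementation may differ):

```rust
/// Replace occurrences of secret values with *** before a log line is
/// persisted or streamed. Values shorter than 3 chars are left alone,
/// matching the documented masking rule.
fn mask_env_values(line: &str, secrets: &[&str]) -> String {
    let mut out = line.to_string();
    for secret in secrets {
        if secret.len() >= 3 {
            out = out.replace(secret, "***");
        }
    }
    out
}

fn main() {
    let masked = mask_env_values("connecting with token sk-abc123", &["sk-abc123"]);
    println!("{masked}"); // connecting with token ***
}
```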

Architecture

Interfaces

| Interface | Commands / Tools |
|-----------|------------------|
| UI | EnvVarsPanel — key/value editor with eye-toggle reveal, delete button, add form, .env import textarea |
| MCP | berth_env_set(project_id, key, value), berth_env_get(project_id), berth_env_delete(project_id, key), berth_env_import(project_id, content) |
| CLI | berth env set <project> <KEY> <VALUE>, berth env list <project>, berth env remove <project> <KEY>, berth env import <project> <.env file> |

Service Mode

Projects can run in oneshot mode (run once, exit) or service mode (keep running, auto-restart on crash).

Configuration

Supervisor Loop

When run_mode = service, the agent's supervisor loop monitors the spawned process: if it exits unexpectedly, the loop restarts it with exponential backoff (1s, doubling to a 60s cap) while tracking uptime and restart count.
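A sketch of the documented backoff contract; the exact doubling schedule is an assumption, the stated guarantee being only "exponential backoff, 1s to a 60s cap":

```rust
/// Delay (seconds) before the Nth restart: doubles from 1s, capped at 60s.
fn restart_backoff_secs(restart_count: u32) -> u64 {
    1u64.checked_shl(restart_count).unwrap_or(u64::MAX).min(60)
}

fn main() {
    for n in 0..8 {
        println!("restart {n}: wait {}s", restart_backoff_secs(n));
    }
}
```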

Data Flow

Remote Execution Flow

When a user clicks Run with a remote target selected in the UI:

1. Read code from disk — Tauri command reads the entrypoint file from ~/Library/Application Support/com.berth.app/projects/{name}/{entrypoint} on macOS.
2. Look up target — The target's endpoint is retrieved from SQLite. Transport is selected based on the nats_enabled flag.
3. Load environment variables — Env vars are loaded from the desktop project_env_vars table and included in ExecuteParams.
4. Connect via transport — get_agent_client() returns the appropriate transport: NatsAgentClient (if NATS enabled) or AgentClient (gRPC fallback).
5. Send ExecuteRequest — The code bytes, runtime type, entrypoint filename, and env vars are sent over the selected transport.
6. Agent writes to deploy dir — Code is written to ~/.berth/deploys/{project_id}/v{n}/. A versioned directory is created per deployment and persisted in the deployments table.
7. Agent spawns process — Creates an executions row, injects env vars into the process environment, then runs the command. Every log line is persisted to execution_logs AND streamed over the transport simultaneously. In service mode, the supervisor loop monitors for crashes.
8. Stream logs back — stdout/stderr are captured line by line and sent as ExecuteResponse messages. Each line is also written to SQLite for later replay via GetExecutionLogs. Env var values are masked before storage and transmission.
9. Tauri emits events — Each log line is emitted as a project-log Tauri event. The Terminal component renders them identically to local logs.
10. Completion — When the stream ends, a project-status-change event is emitted (idle or failed), and the run is recorded in SQLite. On the agent side, the executions row is updated with exit code + finished_at, and an execution_completed event is inserted into the events queue.
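Step 6's versioned directory naming can be sketched with the standard library (next_version_dir is a hypothetical helper for illustration, not the agent's actual code):

```rust
use std::{fs, io, path::{Path, PathBuf}};

/// Pick the next v{n} directory under ~/.berth/deploys/{project_id}/:
/// scan existing v1, v2, ... entries and return max + 1.
fn next_version_dir(project_root: &Path) -> io::Result<PathBuf> {
    let max = match fs::read_dir(project_root) {
        Ok(entries) => entries
            .filter_map(|e| e.ok())
            .filter_map(|e| e.file_name().into_string().ok())
            .filter_map(|name| name.strip_prefix('v')?.parse::<u32>().ok())
            .max()
            .unwrap_or(0),
        Err(_) => 0, // no deployments yet
    };
    Ok(project_root.join(format!("v{}", max + 1)))
}

fn main() -> io::Result<()> {
    let root = std::env::temp_dir().join("berth-deploys-demo");
    fs::create_dir_all(root.join("v1"))?;
    fs::create_dir_all(root.join("v2"))?;
    println!("{:?}", next_version_dir(&root)?); // path ending in "v3"
    Ok(())
}
```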

Sequence Diagram

Mac UI → Tauri Command:          click Run
Tauri Command:                   read code, load env vars
Tauri Command → Mac UI:          emit "running"
Tauri Command → AgentTransport:  get_client()
AgentTransport → berth-agent:    ExecuteRequest (+ env vars), over gRPC or NATS
berth-agent:                     write deploy dir, inject env vars, spawn process
berth-agent → AgentTransport:    ExecuteResponse[0..n] — stdout/stderr (masked)
Tauri Command → Mac UI:          emit project-log per line; Terminal renders
berth-agent → AgentTransport:    stream ends when the process exits
Tauri Command → Mac UI:          emit "idle"; record run in DB

Deployment

Method 1: Build from Source

For development and testing. Requires Rust toolchain and protoc on the target server.

# On the Linux server:
sudo apt install -y protobuf-compiler build-essential
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source ~/.cargo/env

# Copy source and build
rsync -avz --exclude target --exclude node_modules your-mac:~/berth/ ~/berth/
cd ~/berth
cargo build -p berth-agent --release

# Run (bind to all interfaces for network access)
./target/release/berth-agent --listen-all --port 50051

Method 2: Install Script

For production servers. Downloads a pre-built binary and installs as a systemd service.

# Install
curl -sSL https://get.berth.dev | sudo bash

# Uninstall
curl -sSL https://get.berth.dev | sudo bash -s -- --uninstall

The install script (scripts/install-agent.sh) performs:

| Step | Action |
|------|--------|
| 1. OS Detection | Linux only (rejects macOS) |
| 2. Arch Detection | x86_64 or aarch64 |
| 3. Download | Binary from release URL to /usr/local/bin/berth-agent |
| 4. User Creation | Creates berth system user (no home, nologin shell) |
| 5. systemd Service | Creates unit file, enables, starts |

systemd Service Configuration

[Unit]
Description=Berth Agent
After=network.target

[Service]
Type=simple
User=berth
ExecStart=/usr/local/bin/berth-agent --port 50051
Restart=always
RestartSec=5
SuccessExitStatus=42
ExecStopPost=+/usr/local/lib/berth/rollback.sh
Environment=RUST_LOG=info

[Install]
WantedBy=multi-user.target

Post-Install Commands

# Check status
systemctl status berth-agent

# View logs
journalctl -u berth-agent -f

# Restart
sudo systemctl restart berth-agent

Method 3: Register in Berth App

After the agent is running, register it as a target:

# Via CLI
berth targets add my-server --host 192.168.1.100 --port 50051
berth targets ping my-server

# Via UI
# Targets page → + Add Target → fill name/host/port → Add → Ping
# Optional: enter NATS Agent ID for zero-port communication

Agent CLI Flags

| Flag | Default | Description |
|------|---------|-------------|
| --port <PORT> | 50051 | gRPC server port |
| --listen-all | false | Bind to 0.0.0.0 (required for remote access) |
| --show-version | false | Print version and exit |

Important: Without --listen-all, the agent binds to 127.0.0.1 only, rejecting remote gRPC connections. Always use --listen-all for remote targets. When using NATS transport, the gRPC port is only needed for local health checks.
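The note above boils down to one choice of bind address. A sketch of that mapping (illustrative only, not the agent's actual startup code):

```rust
/// --listen-all selects the bind host: all interfaces vs. loopback only.
fn bind_addr(listen_all: bool, port: u16) -> String {
    let host = if listen_all { "0.0.0.0" } else { "127.0.0.1" };
    format!("{host}:{port}")
}

fn main() {
    println!("{}", bind_addr(false, 50051)); // 127.0.0.1:50051 — local only
    println!("{}", bind_addr(true, 50051));  // 0.0.0.0:50051 — reachable remotely
}
```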

Security

Current Security Model

Warning: The gRPC transport uses plaintext (no TLS). This is acceptable for trusted LANs and development, but must not be used over the internet without additional protection (VPN, SSH tunnel, or enabling mTLS). The NATS transport uses TLS to Synadia Cloud and is safe for use over the internet.

| Layer | Current | Planned |
|-------|---------|---------|
| Transport encryption | gRPC: plaintext / NATS: TLS | mTLS for gRPC |
| Authentication | None | Client certificates |
| Authorization | None | Per-project permissions |
| Code isolation | OS user separation | Container sandboxing |
| Credential storage | macOS Keychain | Keychain + encrypted file (Linux) |

mTLS Infrastructure (Implemented, Not Yet Wired)

The TLS module (crates/berth-core/src/tls.rs) provides certificate generation and tonic TLS configuration, ready to be activated:

Certificate Chain

Berth CA (self-signed, 10-year validity)
├─ Generated by: tls::generate_ca()
├─ Stored at: ~/Library/Application Support/com.berth.app/certs/ca.crt + ca.key
│
├─ Server Cert (per agent)
│    CN: hostname, SAN: hostname, EKU: serverAuth, 1-year validity
└─ Client Cert (Berth app)
     CN: berth-app, EKU: clientAuth, 1-year validity

Both leaf certs are signed by the Berth CA.

TLS Functions

| Function | Location | Purpose |
|----------|----------|---------|
| generate_ca() | tls.rs | Create self-signed CA with CertifiedIssuer |
| generate_server_cert(ca, hostname) | tls.rs | Sign a server cert for an agent |
| generate_client_cert(ca, name) | tls.rs | Sign a client cert for the app |
| ensure_ca() | tls.rs | Load or create CA, persist to disk |
| server_tls_config(bundle, ca_pem) | tls.rs | Build tonic::ServerTlsConfig with mTLS |
| client_tls_config(bundle, ca_pem) | tls.rs | Build tonic::ClientTlsConfig with mTLS |

Enabling mTLS (When Ready)

// Agent server startup (main.rs):
let ca_pem = fs::read_to_string("ca.crt")?;
let server_bundle = tls::load_bundle("server")?;
let tls_config = tls::server_tls_config(&server_bundle, &ca_pem)?;

Server::builder()
    .tls_config(tls_config)?
    .add_service(AgentServiceServer::new(service))
    .serve(addr).await?;

// App client connection (agent_client.rs):
let ca_pem = fs::read_to_string("ca.crt")?;
let client_bundle = tls::load_bundle("client")?;
let tls_config = tls::client_tls_config(&client_bundle, &ca_pem)?;

Channel::from_shared(endpoint)?
    .tls_config(tls_config)?
    .connect().await?;

Credential Storage

The credentials module (crates/berth-core/src/credentials.rs) stores secrets in the macOS Keychain:

| Function | Keychain Key Pattern | Purpose |
|----------|----------------------|---------|
| store_ssh_key(target, key) | target:{name}:ssh-key | SSH private key for remote access |
| store_aws_credentials(profile, ak, sk) | aws:{profile}:access-key | AWS credentials for Lambda targets |
| store_credential(key, value) | Custom key | Generic secret storage |

All credentials use the macOS security-framework crate, which stores secrets in the system Keychain — encrypted at rest, protected by the user's login password.

Security Recommendations

Agent Binary

Source Location

crates/berth-agent/
├── Cargo.toml               # Deps: berth-core, tonic, clap, sysinfo, rusqlite, chrono, uuid
├── build.rs                 # Compiles berth.proto via tonic-build
└── src/
    ├── main.rs              # CLI, SQLite init, gRPC server, NATS handler, scheduler loop spawn
    ├── service.rs           # Legacy re-export (AgentServiceImpl from berth-core)
    ├── persistent_service.rs # PersistentAgentService (16 RPCs with SQLite persistence)
    ├── agent_store.rs       # AgentStore: SQLite at ~/.berth/agent.db (5 tables)
    ├── agent_scheduler.rs   # Independent scheduler (tick every 30s, runs cron jobs)
    ├── nats_cmd_handler.rs  # NATS command subscriber and dispatcher
    ├── nats_publisher.rs    # NATS event/log/heartbeat publisher
    └── update.rs            # CLI `berth-agent update` self-upgrade command

Process Management

The agent uses a HashMap<String, RunningChild> protected by a tokio::sync::Mutex to track running processes:

struct RunningChild {
    abort_handle: tokio::task::AbortHandle,
    started_at: chrono::DateTime<chrono::Utc>,
}

// On Execute: insert(project_id, child)
// On Stop:    remove(project_id) → abort_handle.abort()
// On exit:    remove(project_id) automatically
// Service mode: supervisor loop re-inserts on crash (with backoff)
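The insert/remove semantics in those comments can be sketched with std types (FakeHandle stands in for tokio's AbortHandle, and std::sync::Mutex replaces tokio::sync::Mutex so the sketch runs without an async runtime):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

/// Stand-in for tokio::task::AbortHandle; stop() flips a flag
/// the way abort() would cancel the spawned task.
struct FakeHandle {
    aborted: bool,
}

struct ProcessTable {
    running: Mutex<HashMap<String, FakeHandle>>,
}

impl ProcessTable {
    fn new() -> Self {
        Self { running: Mutex::new(HashMap::new()) }
    }

    /// On Execute: register the project's handle.
    fn insert(&self, project_id: &str) {
        self.running
            .lock()
            .unwrap()
            .insert(project_id.to_string(), FakeHandle { aborted: false });
    }

    /// On Stop: remove the entry and abort the task.
    /// Returns false if nothing is running for this project.
    fn stop(&self, project_id: &str) -> bool {
        match self.running.lock().unwrap().remove(project_id) {
            Some(mut h) => {
                h.aborted = true;
                true
            }
            None => false,
        }
    }
}

fn main() {
    let table = ProcessTable::new();
    table.insert("my-project");
    assert!(table.stop("my-project"));
    assert!(!table.stop("my-project")); // second Stop is a no-op
}
```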

Runtime Support

| Runtime | Command | Entrypoint |
|---------|---------|------------|
| Python | python3 {entrypoint} | main.py |
| Node.js | node {entrypoint} | index.js |
| Go | go run {entrypoint} | main.go |
| Shell | bash {entrypoint} | run.sh |
| Rust | cargo run | main.rs |

Resource Monitoring

The Status RPC reports real system metrics via the sysinfo crate: global CPU usage (percent) and used memory (bytes), as surfaced in StatusResponse.

Client Library

The gRPC client (crates/berth-core/src/agent_client.rs) is used by both the Tauri app and the CLI:

use berth_core::agent_client::AgentClient;

// Connect
let mut client = AgentClient::connect("http://192.168.1.100:50051").await?;

// Health check
let health = client.health().await?;
println!("v{}, uptime {}s", health.version, health.uptime_seconds);

// Execute code remotely
let code = std::fs::read("main.py")?;
let logs = client.execute(
    "my-project",    // project_id
    "python",        // runtime
    "main.py",       // entrypoint
    "/tmp",          // working_dir
    Some(&code),     // inline code bytes
).await?;

for line in &logs {
    println!("[{}] {}", line.stream, line.text);
}

// Stop
client.stop("my-project").await?;

// System status
let status = client.status().await?;
println!("CPU: {:.1}%, Memory: {}MB", status.cpu_usage, status.memory_bytes / 1024 / 1024);

UI Integration

Target Selector (ProjectDetail page)

The project detail view shows pill-style target buttons when remote targets are configured:

┌─────────────────────────────────────────────────────────┐
│ ← Back              my-crawler                   Delete │
├─────────────────────────────────────────────────────────┤
│ ● Running 0:15                           python main.py │
├─────────────────────────────────────────────────────────┤
│ Target: [● local] [● linux-server NATS] [○ staging]     │
├─────────────────────────────────────────────────────────┤
│ [▶ Run] [■ Stop] [🔑 Env] [🌐 Publish]                  │
├─────────────────────────────────────────────────────────┤
│ $ Hello from my-server!                                 │
│ $ OS: Linux 6.8.0-100-generic                           │
│ $ Python: 3.12.3                                        │
│ $ Done!                                                 │
└─────────────────────────────────────────────────────────┘

● = green (online)   ○ = gray (unknown)   ● = red (offline)
NATS = green badge on NATS-enabled targets

Tauri Commands

| Command | Parameters | Action |
|---------|------------|--------|
| run_project | id, target: Option<String> | Read code, load env vars, send to agent (local UDS or remote TCP/NATS), stream logs via events |
| stop_project | id, target: Option<String> | Connect to agent, call Stop RPC |
| ping_target | id | Health check, update target status in DB |
| list_targets | — | List all configured targets from SQLite |
| add_target | name, host, port | Save new target to SQLite |
| remove_target | id | Delete target from SQLite |

Event Bridge

Remote execution reuses the same Tauri events as local execution — the Terminal component needs no changes:

| Event | Payload | Source |
|-------|---------|--------|
| project-log | { project_id, stream, text, timestamp } | Local: process stdout/stderr. Remote: gRPC/NATS stream |
| project-status-change | { project_id, status, exit_code } | Local: process exit. Remote: stream completion |

Current Limitations

| Limitation | Impact | Workaround |
|------------|--------|------------|
| No TLS for gRPC transport | Code and logs are transmitted in plaintext over gRPC | Use NATS transport (TLS to Synadia Cloud) or SSH tunnel. mTLS infra built but not wired. |
| No authentication | Anyone who can reach port 50051 can execute code | Firewall rules, bind to localhost + SSH tunnel, or use NATS transport |
| No execution timeout | Processes can run indefinitely on the agent | Use Stop command or process supervisor |
| No container sandboxing | Code runs as the agent's OS user, no isolation | Run agent as a dedicated low-privilege user |
| cloudflared optional | Public URL publishing requires cloudflared installed on agent | Install manually: curl from GitHub releases or package manager |

Earlier limitations (execution history, logs, events, and schedules lost on restart) were resolved by the persistent agent redesign.

Roadmap

| Feature | Status | Notes |
|---------|--------|-------|
| gRPC agent server (16 RPCs) | Done | Health, Status, Deploy, Execute, Stop, StreamLogs + 8 persistent RPCs + Publish/Unpublish |
| SQLite persistence | Done | ~/.berth/agent.db — 5 tables, survives restarts |
| Execution history + logs | Done | Persistent, replayable via GetExecutions/GetExecutionLogs |
| Store-and-forward events | Done | Agent queues events, app polls via GetEvents/AckEvents |
| Agent-side scheduler | Done | Tick every 30s, runs cron jobs independently of app |
| Dependency install on deploy | Done | pip install, npm install, go mod download during Deploy RPC |
| Remote self-upgrade | Done | Client-streaming Upgrade RPC, verify + swap + systemd restart |
| Multi-file deploy | Done | Deploy RPC accepts tarballs, extracts to persistent dir |
| Deployment versioning | Done | Auto-version, keep last 5, prune old |
| gRPC client library | Done | AgentClient with 8 new methods for persistent RPCs |
| CLI target management | Done | add, list, remove, ping |
| UI target management | Done | Targets page with add/remove/ping/stats (os, arch, CPU, memory) |
| UI remote execution | Done | Target selector + Run/Stop on remote |
| Install script + systemd | Done | systemd service with auto-restart on Linux |
| mDNS LAN discovery | Done | _berth._tcp.local. via mdns-sd |
| TLS cert generation | Done | CA + server/client certs via rcgen |
| NATS command channel | Done | Zero inbound ports, works behind NAT. AgentTransport trait abstracts gRPC vs NATS |
| Self-upgrade (cloudflared model) | Done | exit(42), 30s probation, auto-rollback. Tested v0.1.7 → v0.1.8 |
| Public URL publishing | Done | cloudflared tunnels, pluggable TunnelProvider enum. Tested end-to-end |
| Tunnel capability detection | Done | Health response reports available tunnel providers (installed binaries) |
| Environment variables | Done | Desktop-side storage, passed at runtime, log masking (values ≥ 3 chars → ***) |
| Service mode | Done | Auto-restart with exponential backoff (1s → 60s cap), uptime tracking, restart count |
| Activate mTLS | Planned | Wire tls.rs into agent + client |
| Container sandboxing | Planned | OCI/Docker isolation per project |

Berth Remote Agent Documentation — Updated March 2026