Running agents without losing my keys: a month with authsome

Five weeks with a local-first credential broker. What worked, what bit me, and the things the docs don't tell you up front.

May 13, 2026 · 16 min read


I run a small handful of agents on my laptop and a shared box. One drafts release notes from GitHub PRs. One scrapes SaaS dashboards for a Friday digest. A third runs over SSH on a CI box and pokes our staging APIs after every deploy. All three need credentials. Until recently, my "credential management" was a .env file at the project root, a second one I never committed, and a third set of exports in ~/.zshrc that I'd told myself was temporary in 2024.

This post is what happened when I replaced all of that with authsome, an open-source credential broker that runs locally on my machine. I'm not going to argue you should switch. I'm going to walk through what I actually do with it, what surprised me, and what I'd warn the next person about. Most of the rough edges aren't deal-breakers, but several of them aren't documented as loudly as they should be either.

Before

Three problems pushed me into looking for something:

The first was that I'd lost track of which key was which. I had four OpenAI keys at one point - personal, team, a Stripe-paid test account, and one I'd generated on a flight and forgot to delete. None of them were named anything meaningful in their .env lines. When the OpenAI dashboard started flagging "key seen in commit", I had to bisect through three repos and a Notion page to find which one.

The second was that my GitHub agent kept hitting Bad credentials once a quarter or so. GitHub OAuth tokens, per the fine print, have a sliding-window expiry that I had not internalized. Every time, the fix was the same: log in to GitHub manually, click through the SSO redirect, copy a fresh ghp_..., paste it into the right .env, restart the agent. Forty-five seconds of work and an hour of frustration before I realized what had broken.

The third was that I started running agents under Claude Code and Cursor, and both of them inherit my shell environment by default. Which meant every agent had access to every key, including the ones it didn't need. A coding agent doesn't need my Stripe key. It just had it because zsh had it.

I looked at three categories of tool:

  • A SaaS secrets manager. Vault, Doppler, 1Password Secrets Automation. All of them store values; none of them run the OAuth flow. I'd still be the one logging in to GitHub every quarter, just with an extra round-trip to a cloud service that costs money and adds a network dependency. Pass.
  • The OS keychain. I already use it. It's a dict[str, str]. No expiry, no refresh, no notion of "this key belongs to this service". Insufficient on its own.
  • Roll my own. I have rolled my own before. I've also rolled my own OAuth client. I won't do either again on my own time.

Authsome got my attention because it was the only thing in the "agent credentials" space that did the OAuth dance plus the storage plus the runtime injection, and ran entirely on my machine. The pitch read more honestly than the SaaS pitches did - it has an explicit "what authsome is not" page, which is unusual for a project that wants you to adopt it.

What it actually is

Authsome is a Python CLI plus a local daemon. The daemon runs on 127.0.0.1:7998 and owns three things: an encrypted SQLite vault under ~/.authsome/profiles/<name>/store.db, a small HTTP API that the CLI talks to, and a transparent HTTP proxy that you can run a subprocess underneath. The proxy intercepts outbound HTTP requests from whatever you put behind it, matches the destination host against a registered provider, and injects the right Authorization header. The subprocess never sees the real secret.

The CLI surface is small enough to memorize:

```bash
uvx authsome login github           # OAuth or API-key flow
uvx authsome list                   # what's connected
uvx authsome get github             # metadata, secret redacted
uvx authsome export github          # KEY=value lines for the shell
uvx authsome run -- python agent.py # run under the proxy
uvx authsome scan --import          # pick up keys from .env files
uvx authsome revoke github          # nuke the connection
```

Behind that surface there's a five-layer architecture - vault, auth, identity, policy, audit - though only three of those layers are actually implemented in v1. The design doc admits this openly; identity and policy are aspirational. I'll come back to what that means in practice.

Setting it up

I'll be honest about the friction first. uvx authsome works out of the box on a Mac if you already have uv installed. The first invocation initializes ~/.authsome/ and generates a master key. That's seamless.

The first OAuth login is not seamless. Here's what actually happened the first time I ran uvx authsome login github:

  1. A browser tab opened to http://127.0.0.1:7998/... with a form asking me for my GitHub OAuth app's client_id and client_secret. I didn't have a GitHub OAuth app. I had to go create one at github.com/settings/developers, set the callback URL to a specific value (http://127.0.0.1:7998/auth/callback/oauth), and come back.
  2. After pasting client credentials, a second browser tab opened to GitHub's authorization page. I approved.
  3. A third browser tab (the callback) flashed and closed. The terminal printed success.

That's three tabs and a side trip to GitHub's developer settings. It's a one-time setup per provider per profile, and the credentials get encrypted and stored, but it's not "log in, done". For services that support Dynamic Client Registration (Notion's MCP endpoint is the only example shipping today), the flow skips the OAuth-app-registration step entirely and feels closer to what you'd expect.

For API-key providers like OpenAI, the flow is a single browser form. Faster, but I still found myself reaching for --show-secret more often than I expected to verify the key got captured correctly.

The thing I didn't anticipate: when you run something under the proxy for the first time, you need to install the mitmproxy CA on your machine. HTTPS interception means mitmproxy generates its own certificate authority and presents itself as every host you're connecting to. Your OS (and Python's requests / httpx libraries, separately) needs to trust that CA, or every HTTPS call from your agent fails with CERTIFICATE_VERIFY_FAILED. The install is one command on macOS and one on Linux. It is not optional. The docs hide it under a "troubleshooting" page rather than the install page, which is the opposite of where it should be.

The mental model that finally clicked

I spent the first day confused about why authsome has two namespaces for credentials.

Profiles are top-level isolation. Different vault files, different lock files, different SQLite databases. Connections in profiles/default/ are invisible to profiles/work/. Each profile has its own master-key-protected encryption.

Connections live inside a profile, scoped to a provider. So if I want a personal GitHub and a work GitHub side by side, I can either:

  • Put them in different profiles (default and work), or
  • Put them in the same profile with named connections (personal and work).

The mental rule that finally helped: if I want the credentials to never see each other in any context, use profiles. If I want both reachable from the same script, use connections. I use connections for "two accounts on one provider" (GitHub, OpenAI) and profiles for "different roles entirely" (my personal stuff vs the digest-bot that runs as a different identity).

There's an ugly wrinkle here that the docs don't quite spotlight: the proxy uses each provider's default connection at request time. You can't say "this request should use the work connection". To switch which connection the proxy injects, you have to run uvx authsome login github --connection work --force to make work the default. The --force is mechanical. It works. It also feels like a hack, and per-request connection selection is something I'd really like to have.

Wiring it into agents

Three patterns. I use all three.

Through the proxy. This is the cleanest. Drop uvx authsome run -- in front of your agent invocation:

```bash
uvx authsome run -- python my_agent.py
```

The child process gets HTTP_PROXY=http://127.0.0.1:<port> and HTTPS_PROXY=... set in its environment, plus placeholder variables like OPENAI_API_KEY=authsome-proxy-managed. The OpenAI SDK reads the placeholder, makes the request, and the proxy substitutes the real key on the way out. The agent's process never sees the actual sk-.... If I ps -o command the agent or peek at /proc/<pid>/environ, I see nothing useful.

The proxy pattern composes well with shell tools. I run my Friday digest under uvx authsome run -- bash digest.sh, and the bash script can call gh, curl, and a Python sub-step, all of which get credentials at the wire.

It also breaks in two specific cases. The first is SDKs that ship pinned TLS certificates - they ignore HTTP_PROXY or refuse to trust the mitmproxy CA. The second is non-HTTP protocols: WebSockets, gRPC over HTTP/2 with custom transports, raw TCP database connections. The proxy doesn't see those, so it can't inject anything. For both cases, I fall back to:

Environment export. Less secure but always works:

```bash
eval "$(uvx authsome export openai --format env)"
python my_agent.py
```

export prints KEY=value lines on stdout, sourced into the shell with eval. The agent reads the real key from the environment. This is identical to having a .env file, except the key is held in the vault and renewed automatically when authsome refreshes it. It's the pattern I use for gh (which has its own custom CA bundle for GitHub Enterprise installs).

Library calls. When I'm writing the agent and I know exactly which credential I want at which point:

```python
from authsome.server.dependencies import create_auth_service

auth = create_auth_service()
token = auth.get_access_token("github", connection="work")
```

This is the most explicit pattern and the one I use inside LangChain tools and LlamaIndex readers, where each constructor wants its credential at instantiation. Refresh is silent: if the cached token is within five minutes of expiry, the next get_access_token triggers a refresh, writes the new record back to the vault, and returns the fresh string.

The library API is small but the import path is awkward - authsome.server.dependencies for the recommended constructor reads like an internal detail leaking out. I'd expect it at the top level (from authsome import create_auth_service) but it isn't there in 0.2.4. Minor irritant; I have a snippet in my editor.

The things I noticed working

Refresh. I haven't seen a Bad credentials from GitHub in five weeks. The auth layer refreshes the access token transparently when it's near expiry, and the GitHub OAuth app I set up keeps producing fresh refresh tokens. The agent doesn't care. I don't care.

One audit log to grep. Every login, logout, revoke, export, and --show-secret reveal writes a JSON event to ~/.authsome/audit.log. When I'm trying to figure out why a script behaved oddly:

```bash
uvx authsome log --json | grep openai | tail -20
```

is the first thing I run. The format is stable enough that I can feed it into Vector or jq. The events are timestamped in UTC with full ISO-8601, including the trailing +00:00, which is the format I want from anything machine-readable.

Multi-account scoping. I have --connection personal and --connection work for GitHub and OpenAI. Switching between them is one command. The vault keys are namespaced so a mistake doesn't blow away the other connection.

Bundled providers cover most of my work. 45 providers ship in the box. Most of mine are bundled (GitHub, Google, OpenAI, Linear, Resend, Slack). For the two I needed that aren't (Anthropic, Stripe), the custom provider mechanism is a single JSON file you write once. It's annoying that they're not bundled given how common they are, but adding them takes ten minutes.

The things that bit me

The mitmproxy CA install is the #1 onboarding friction. Every time I help someone else set up authsome, this is where they stall. The CA is generated automatically when the proxy first runs, but you have to manually trust it system-wide for HTTPS interception to work. The doc page is clear; finding the doc page is not. If you skip it, the first time you actually run an agent under the proxy you get a wall of TLS errors and no obvious next step.

The OAuth-app-registration step. For services that don't support Dynamic Client Registration, you need a one-time OAuth app set up at the provider. Authsome can't do this for you - the provider's terms of service require a human to register the app. For something I only use personally, this feels like overkill. For a team, it's correct: the OAuth app is the team's, not the tool's. Just be ready to spend twenty minutes the first time you onboard a new provider.

The daemon's trust model is minimal in v1. The daemon binds to loopback (127.0.0.1) and that's it. No bearer token between the CLI and the daemon, no per-session CSRF on the browser forms. The threat model is "anything on the local machine running as your user is trusted". For a personal laptop, fine. For a multi-user workstation, not fine - any other signed-in user with a process can hit the daemon on the loopback. The docs say this openly; you have to read it.

Hosted mode is internal-network only. Authsome supports running a shared daemon on a private network so multiple developers can use the same vault. The config is straightforward (AUTHSOME_SERVER_BASE_URL on the daemon, AUTHSOME_DAEMON_URL on each client). It is explicitly not ready for internet-facing or multi-tenant use. There's no per-user auth inside the daemon, no tenant isolation, no CSRF hardening. I tried setting it up behind Tailscale for my home setup - it worked, but I'd hesitate to recommend it for anything more than a small trusted team.

Profiles aren't switchable from the CLI yet. There's no --profile flag. The active profile lives in ~/.authsome/config.json under default_profile. To run a command against a non-default profile, you either edit the config file or use the library directly with a different AUTHSOME_HOME. Programmatic profile creation works, but the ergonomics around switching are not there yet.

Linux without a graphical session is awkward for the keyring backend. On Linux servers without DISPLAY, the keyring library often can't find a backend. The fix is to fall back to local_key mode in config.json, but that means the master key sits in ~/.authsome/master.key with mode 0600 and your only defense is filesystem permissions. On macOS and Windows the keychain integration is solid; on headless Linux you're back to file mode.

The CLI is in active churn. I shipped a script in February that used uvx authsome login --client-id ... as command-line args. That was the old behavior. Sensitive values are no longer accepted as CLI args (they're collected through the browser bridge), and the script broke silently. The change was the right one - passing client_secret on the command line shows up in ps output and shell history - but I didn't notice the breakage for a week. Pin your authsome version in CI.

How I use it day-to-day

I have three profiles:

  • default: personal projects, side scripts.
  • work: bound to my work GitHub identity and work OpenAI key.
  • digest: the per-agent profile for my Friday digest bot, which runs unattended.

The Friday digest bot is a cron job. The crontab line is:

```text
0 8 * * 5  AUTHSOME_HOME=/home/me/.authsome-digest uvx authsome run -- python digest.py
```

That single line is the entire credential-management story for that agent. The digest profile only has connections for the four services it touches (Linear, Slack, Resend, OpenAI). If the agent somehow got compromised, it can't reach my personal GitHub or my work credentials, because those live in different vault files protected by different effective master keys.

For interactive work in Cursor, I have a tweaked MCP config that wraps each server with authsome run:

```json
{
  "mcpServers": {
    "github": {
      "command": "uvx",
      "args": ["authsome", "run", "--", "node", "/path/to/github-mcp/server.js"]
    }
  }
}
```

This was the thing I switched to last, and it took me the longest to figure out. Cursor's MCP config isn't well-documented, but once you understand that command and args are the spawn parameters, the wrapper pattern is mechanical.

For Claude Code, there's a bundled skill at skills/authsome/SKILL.md in the authsome repo. Copy it to ~/.claude/skills/ and Claude can run authsome commands directly. I asked Claude to "list which providers I have configured" the other day and it ran uvx authsome list, read the output, and reported back. That kind of self-driving skill works because authsome's --json output is stable and the CLI surface is small.

Where it fits, where it doesn't

Authsome is a good fit if you're a single developer or a small team running agents on personal laptops, with maybe one shared internal daemon. It's overkill for a one-script-one-key setup where a .env file does the job. It's not the right tool for a production fleet where you need centralized rotation, per-user audit, and compliance reporting - that's still SaaS-secrets-manager territory, and authsome doesn't pretend otherwise.

I'd describe the trade-off this way: authsome buys you OAuth refresh, runtime injection, and an audited credential store on your own machine, in exchange for the friction of setting up OAuth apps per provider, installing the mitmproxy CA, and accepting that "your machine" is the trust boundary. For the workflows where I had been losing keys, scattering tokens, and re-authenticating quarterly, that trade is worth it. For workflows where I had a single API key in CI, I haven't bothered.

What I'd want next

If I were prioritizing for the maintainers, three things:

  1. Per-request connection selection in the proxy. The --force-the-default workaround is fine for personal use, but it makes the proxy unusable for any agent that needs to talk to multiple connections on the same provider in one run.
  2. CLI --profile flag. Editing config.json is a thing I should not have to do.
  3. A real story for headless Linux with no keyring. local_key works but file permissions are a weak defense. The roadmap mentions OS-level options but nothing concrete.

The current v1 design also calls out three "planned" layers - identity, policy, audit - that aren't fully implemented. The audit log exists. Identity and policy are scaffolding. For my use case I don't need policy (single-user, single-machine), but for any team usage that's the gating factor.

Should you use it

Probably not, if you read this far hoping for a recommendation either way. Use it if the specific shape of "OAuth refresh + runtime injection + local + multi-account" matches what you're actually missing. Don't use it if the missing piece is "team-wide rotation" or "compliance reporting" or "I just need one key in CI". Most people aren't in either bucket.

I've been running it for five weeks. I haven't lost a key, haven't re-authenticated mid-script, haven't found a real bug in the auth layer, and have hit the mitmproxy CA install three separate times across different machines. Net positive. Probably staying.

If you do try it, install the CA up front. The rest is downhill from there.

Priyansh Khodiyar

Maintainer

Works on authsome and the agentr.dev tooling.