
Where Should You Put Your API Keys? A Practical Guide for Two Common Scenarios

You just got a new API key for OpenAI, Tavily, or some other third-party service. Your code needs it to run, and your AI assistant needs it to call APIs on your behalf.

After a moment’s hesitation, you put it in a .env file. This is what everyone does when they first start out. A default choice that requires almost no thought.

But then the unease sets in. You’ve heard .env files aren’t secure. You should use 1Password, or Vault, or something else. You search around and find articles telling you “don’t put secrets in .env,” but they rarely explain: where should you put them? Even fewer tell you what makes sense in your specific situation.

This article unpacks that question. No tool comparison chart, no product recommendations. Just a clear look at what threats you’re actually facing in the two most common scenarios, and what’s good enough for each one.


First, Figure Out What You’re Defending Against

API key security isn’t a single spectrum from “insecure” to “secure.” It’s a two-dimensional problem: different scenarios come with different threats, and each approach performs differently against each threat.

You writing code alone on your MacBook, versus you deploying a service on an internet-facing server — these two situations have almost no overlap in what they need to defend against. Mixing them together in a single discussion is why most advice falls flat.

On a personal machine, there isn’t that much to actually defend against. The machine sits in your home or on your desk, with one login account and no other users running processes. A bad actor who wants your API key has three realistic paths to it: physical theft, remote intrusion, or you accidentally committing it to a Git repo.

The first path is blocked by FileVault (Apple’s built-in full-disk encryption — you can turn it on in System Settings): once the machine is powered off, the disk can’t be read, and no one who steals it can see your files. The second is extremely unlikely for a personal machine — you don’t have ports exposed to the internet, and no unfamiliar users are running commands on your system. The third is a real risk, but it’s a problem of operational habits, not of tool selection.

The CNCF Cloud Native Security Whitepaper says secrets should be “injected at runtime through non-persistent mechanisms,” and the OWASP Secrets Management Cheat Sheet says “using environment variables is not recommended unless other methods are not possible.” These standards target multi-tenant production environments — multiple users sharing the same machine, processes able to peek at each other’s environment variables (on Linux, the /proc directory lets other processes under the same user read your program’s startup arguments and environment), logs and crash debugging files being centrally collected. Your personal machine doesn’t fall into this category. There’s no trust boundary between you and root; any process that can read your files can already do far more damage. Under this threat model, the additional exposure from .env is marginal.
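That /proc mechanism is easy to see firsthand on any Linux box. A minimal demonstration with a made-up secret and variable name:

```shell
# Put a fake secret into a background process's environment:
MY_SECRET=sk-demo sleep 30 &
pid=$!
sleep 1   # give the background process a moment to exec

# Any other process running as the same user can now read it back;
# /proc/<pid>/environ is NUL-separated, so convert to lines first:
tr '\0' '\n' < "/proc/$pid/environ" | grep '^MY_SECRET='
# prints: MY_SECRET=sk-demo

kill "$pid"
```

On a single-user machine this is just you reading your own processes, which is exactly why the standards quoted above don’t map cleanly onto it.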

On a production server, the threat model is entirely different. Internet-facing exposure means any code vulnerability can become an entry point. The same server might run services from different sources, and they should be isolated from each other. Logs get collected and analyzed. Crash debugging files might be read by third-party tools. There’s an extra layer of “malicious process” in play — not your own code, but a compromised dependency, a vulnerable container, or a shell obtained through social engineering. In this environment, the fact that all processes under the same Linux user can see each other’s environment variables is no longer “me looking at myself” — it’s a real lateral movement channel. The CNCF and OWASP recommendations aren’t overkill here; they’re reasonable baseline requirements.


Scenario One: Your Own Machine

Let’s start with the situation you’re most likely in. You’re the only user. FileVault is on (on macOS) or LUKS is on (on Linux, the equivalent full-disk encryption). Your code and services all run under your own account. What you need: services that start unattended, an AI assistant that can read API keys when calling APIs, and confidence that you won’t dump all your keys into a Git repo because you forgot .gitignore.

You have four main options, ordered by how likely you are to find them useful.

1Password interactive approval (op run). In this mode, no API key lives on your disk — whenever a key is needed, 1Password pops up a Touch ID or Apple Watch prompt, you tap it, and the key is injected into the current process. When you’re sitting at your computer, this works beautifully: approval takes half a second, and the fundamental benefit is that no credential persists on disk at all. This is the most secure option across all the approaches.

The limitation is clear: when you’re not at the machine, the dialog has no one to click it, and your service stalls. So if you need remote access (SSH, phone control) or unattended background service startup, this won’t work. But if your use pattern is “sitting at your laptop writing code,” it’s the best place to start.
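Mechanically, op run works off a .env file that contains 1Password secret references rather than real keys. A sketch, assuming the 1Password CLI v2; the vault and item names here are made up:

```shell
# The .env file holds op:// references, not actual secrets
# (vault "dev" and item "tavily" are illustrative names):
cat > .env <<'EOF'
TAVILY_API_KEY=op://dev/tavily/api-key
EOF

# At launch, op prompts for Touch ID approval, resolves the references,
# and injects the real values into the child process's environment:
# op run --env-file=.env -- ./start-service.sh
```

Note that this .env is safe to commit: it names where the secrets live, but contains none of them.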

macOS Keychain (Linux equivalent: Gnome Keyring / secret-tool). macOS has a built-in password and credential manager called Keychain (Keychain Access). The passwords Safari auto-fills for you are stored in Keychain. How it works: all stored data is encrypted with a key derived from your login password, providing an extra layer of protection beyond FileVault — even if someone bypasses FileVault to read disk files, Keychain data can’t be directly decrypted. You can read and write to it using the security command-line tool:

# Store once
security add-generic-password -s "opencode-tavily" -w "sk-xxx" -A

# Read at service startup
export TAVILY_API_KEY=$(security find-generic-password -s "opencode-tavily" -w)

The -A flag allows all applications to read without prompting — critical for unattended AI services. Keychain auto-unlocks at user login, and subsequent access requires no interaction. It solves the problem of .env files appearing in plaintext in backups, without introducing an additional long-lived credential.

.env file, permissions 600. This is the baseline. Put it in your project or home directory, set it readable and writable only by you. FileVault ensures the disk is unreadable when powered off. The common real-world risks are two operational ones: forgetting .gitignore and committing it to a repo; or plaintext keys appearing in Time Machine or iCloud backups. A simple mitigation: put .env in a single unified location under your home directory (e.g., ~/.config/keys/), then symlink or relative-path reference it from each project. That way you only need to remember to exclude one location.
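A sketch of that layout; all paths are illustrative:

```shell
# One unified key location under your home directory:
mkdir -p "$HOME/.config/keys"
printf 'TAVILY_API_KEY=sk-demo\n' > "$HOME/.config/keys/.env"
chmod 600 "$HOME/.config/keys/.env"   # readable and writable only by you

# Each project references it via a symlink, so only ~/.config/keys
# ever needs excluding from backups and repos:
mkdir -p "$HOME/projects/myapp"
ln -sf "$HOME/.config/keys/.env" "$HOME/projects/myapp/.env"
```

One .gitignore rule and one backup exclusion now cover every project.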

1Password Service Account token. Store your API keys centrally in 1Password, and have your service pull them at startup with op read. The SA token (service account token — 1Password’s “machine identity card”) is itself a persistent credential that can unlock all keys in a vault. It needs to live somewhere so the service can read it without you present — in other words, it’s still an “unlock key” that sits on disk. Putting it in Keychain (see above) reduces the plaintext-on-disk exposure, but doesn’t change a fundamental fact: if this token is compromised, an attacker can dump your entire vault.

So does 1Password still have an advantage here? Yes — when you need to rotate your API keys. If you have ten keys spread across five services’ .env files, rotating one means finding it, updating it, and restarting everywhere. With 1Password, you rotate once in 1Password and restart your services. Lowering the cost of rotation is a meaningful security improvement, because it makes “I should rotate these” something you actually do.

Recommendation

Your actual usage pattern determines the best fit.

If you spend most of your time sitting at your computer (a work laptop, you at the keyboard), 1Password interactive approval is the best starting point: no persistent credentials, just tap Touch ID each time. Good security, good convenience. No complex setup needed.

If your machine needs to run unattended — background services, or you’re out and about controlling a home machine from your phone — interactive approval doesn’t work. In this case, use the OS credential store: put keys in macOS Keychain (or Gnome Keyring on Linux) and read them at service startup. It auto-unlocks at login, so services start without you, and nothing sits on disk in plaintext. Add a 1Password Service Account token on top only if centralized rotation across many keys is worth protecting one more long-lived credential.


Scenario Two: You’re Running a Production Service

For many people, this “production service” is really a VPS — a remote Linux server you rent from a cloud provider — running a few side projects, managed by systemd (Linux’s built-in service manager that handles starting, monitoring, and restarting your background services), exposed to the internet through Nginx. It’s not “enterprise infrastructure,” but its threat model is fundamentally different from your personal machine.

Internet exposure means your service may be scanned, fuzzed, and probed for vulnerabilities. Historically, many VPS compromises weren’t due to the owner’s code at all, but to running an open-source component with a known security vulnerability (the software industry tracks these with CVE numbers — public identifiers for disclosed flaws). Once someone gets a shell on your server, cat .env is the cheapest next step.

In this scenario, you need three things: API keys that can’t be read directly from environment variables (preventing Linux process-to-process snooping, log leakage, and crash dumps leaking to third parties), services that start up without you being present, and rotation cheap enough that you actually do it regularly.

Don’t use .env anymore. This is what CNCF and OWASP explicitly recommend against. Environment variables appear in crash debugging files, are visible to any process running under the same user (via ps e or /proc/<pid>/environ), and get inherited by child processes. On a production server, these aren’t theoretical risks.

systemd credentials. systemd, in addition to managing service start/stop, has a built-in credential management feature (LoadCredential= since systemd 247; encrypted credentials and the systemd-creds tool since 250). You encrypt your API key with a single command and store it in a specific directory, then declare it in your service unit:

[Service]
LoadCredentialEncrypted=api-key:/etc/credstore/myapp.api-key

systemd decrypts at service activation time, placing the key in a temporary directory accessible only to that service’s process ($CREDENTIALS_DIRECTORY), and cleans it up when the service stops. The decryption key can be bound to the TPM2 chip (a security chip on the motherboard — most recent computers and servers have one), making the encrypted file undecryptable on any other machine — even if the hard drive is pulled and plugged into a different server, the data remains unreadable. The entire process requires no additional software installation: as long as your Linux version isn’t ancient, the built-in systemd handles it.
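Setting this up is a one-time operation. A sketch, assuming systemd 250+ and root access; the credential name and path match the unit snippet above:

```shell
# Encrypt the key once; --with-key=host+tpm2 binds decryption to this
# machine's TPM2 chip (use --with-key=host if the machine has none):
printf '%s' 'sk-demo' | sudo systemd-creds encrypt \
    --name=api-key --with-key=host+tpm2 - /etc/credstore/myapp.api-key

# Inside the running service, the decrypted key appears as a file:
#   cat "$CREDENTIALS_DIRECTORY/api-key"
```

Your application reads the key from that file instead of an environment variable, and the file vanishes when the service stops.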

Compared to .env, it eliminates the environment variable visibility problem. Compared to “setting up Vault,” it adds no dependency or operational burden.

1Password Service Account token. Same logic as the personal machine scenario — pull keys with op read in your startup script. The SA token itself is still a single point sitting on disk, but storing it in Keychain or systemd credentials narrows the exposure. The centralized rotation convenience matters even more here, because the consequences of a production key leak are much more severe than on a personal machine.

HashiCorp Vault, Infisical, cloud providers’ Secrets Managers. Full key lifecycle management: dynamic secrets, automatic rotation, audit logs. The tradeoff is that these are services you need to seriously maintain. For teams of fewer than five people, Vault’s operational overhead often exceeds the security benefit it provides. If you’re an individual running a VPS, having systemd handle this is already an order of magnitude better than the status quo.

Recommendation

For small-scale production deployments (a few services, maintained by one person or a small team): on Linux, use systemd credentials; on macOS, use Keychain. Encrypt keys at rest on disk, decrypt and inject at service startup, never let them enter environment variables — this path gives you substantial security improvement with minimal infrastructure change. If your key count starts approaching double digits, or multiple people need to manage them together, that’s when to consider Infisical or 1Password for centralized management. Vault is for organizations that already have a platform engineering team.


Rotation Frequency Matters Far More Than Storage Method

Whatever approach you choose, there’s one variable that rarely gets mentioned yet has the biggest practical impact: how often do you rotate your keys?

The traditional .env rotation workflow is painful — find every file where the key lives, update each one, restart all services, pray you didn’t miss any. This pain is what keeps most people from rotating at all. And the real-world security difference between “rotate once a year” and “rotate every three months” is larger than the difference between “.env” and “Keychain.”
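The “find every file where the key lives” step alone is error-prone. Simulating it with made-up paths and a made-up key:

```shell
# The same key copied into two services' .env files:
mkdir -p /tmp/rotation-demo/svc1 /tmp/rotation-demo/svc2
printf 'TAVILY_API_KEY=sk-old\n' > /tmp/rotation-demo/svc1/.env
printf 'TAVILY_API_KEY=sk-old\n' > /tmp/rotation-demo/svc2/.env

# Rotation step one: hunt down every copy before you can update any.
# Miss one directory in the search path and that service breaks later.
grep -rl 'TAVILY_API_KEY' /tmp/rotation-demo
```

With centralized management, this whole search-and-edit loop collapses into a single update plus restarts.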

The true improvement from centralized key management isn’t the vault’s cryptography — it’s that rotation goes from “find files, edit files, restart” to “change once in one place, restart.” Rotation cost drops by orders of magnitude, and the behavior shifts from “I’ll do it when I remember” to “I can do it regularly.”

A simple self-test: under your current setup, if you suspect a specific API key has leaked, how quickly can you replace it? If the answer is “within an hour,” your setup is effective. If the answer is “might take an afternoon,” the problem isn’t which tool you picked — it’s that your rotation workflow itself prevents the security behavior from happening.


A Minimal Decision Checklist

At this point, you don’t need a tool comparison table. Three questions are enough:

Are you the only user on this machine? If yes, and you’re not running code from unknown sources, .env or Keychain is sufficient. You don’t have a multi-tenant problem — environment variable visibility between Linux processes doesn’t matter when anyone who can read them can already do far worse. Focus your energy on a solid .gitignore and checking your backups for plaintext keys.

Do you need to manage the same set of keys across multiple machines, or rotate them frequently? If yes, the operational convenience of centralized management (1Password, Infisical) is a real security gain. What matters isn’t the vault — it’s that you rotate once.

Is your service directly accessible from the internet? If yes, your highest-priority step is getting keys out of environment variables. Use systemd credentials or Keychain for runtime injection. This matters far more than which secrets manager you pick.

One final principle covers the remaining choices: if your AI assistant can read the contents of a file, you should treat those contents as if they’re already in the logs. Working backward from this: a TAVILY_API_KEY in .env is less scary than it seems — it’s a standalone credential, and if it leaks, you rotate it. But an SA token that can unlock all your keys, sitting in a file your agent can read, is a different story. Narrow its scope, shorten its lifespan, and keep the token itself in Keychain. Or, if you know which files your AI assistant reads, simply don’t put the master credential where it can see it.