Boundaries matter more with AI than with humans.

Trust is contextual.

We trust engineers with secrets because they are accountable. We do not trust AI with secrets because it is not.

That distinction matters more than people admit.

AI Has No Sense of Boundary

AI does not understand intent. It does not understand sensitivity. It does not understand consequences.

It only understands inputs and outputs.

If a secret appears in a prompt, the model treats it as data, not as something to protect.

That is not malicious. It is fundamental.

Language models are pattern matchers. They learn from correlations in training data. They predict the most likely next token based on context. They have no concept of “this is sensitive” or “this should not be repeated.”

When you paste an API key into a prompt, the model sees a string of characters. It does not see a credential that grants access to production systems. It does not understand that repeating that string in a log file or a code suggestion is a security incident.

This is not a limitation that will be fixed with better models. It is intrinsic to how these systems work. They operate on statistical patterns, not semantic understanding of risk.

Humans Know When Not to Repeat Things

Engineers learn quickly what should not be shared. They understand social and professional boundaries. They internalize risk.

AI does none of that.

If something appears in context, it is fair game. If it helps complete a task, the model will use it.

That makes secrets fundamentally incompatible with AI workflows.

Humans develop intuition about secrets through experience and consequences. An engineer who accidentally commits a credential to a repository once learns not to do it again. They understand that credentials in Slack messages are bad. They know that connection strings do not belong in documentation.

This learning comes from social context. We get feedback. We see incidents. We hear war stories. We internalize that certain information is sensitive even when it is not labeled as such.

AI has none of this context. It cannot learn from consequences because it experiences none. If a model is trained on data that includes secrets (even accidentally), it might suggest those patterns to other users. If a model sees a secret in a prompt, it has no mechanism to treat that information differently from any other text.

The problem gets worse with context windows. Modern AI tools encourage users to paste large amounts of context to get better results. “Here is my entire codebase, help me debug this issue.” That context might include configuration files, environment variables, or credentials that were never meant to leave the developer’s machine.
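
To make that concrete, here is a minimal sketch, with hypothetical paths and file names rather than any real tool, of what a naive "gather everything" helper actually collects when someone dumps a project into a prompt. Nothing in it distinguishes a README from a .env file.

```python
# Hypothetical sketch: what "paste my entire codebase" actually collects.
# Nothing here knows that .env or terraform.tfvars is different from README.md;
# that distinction has to be imposed before the dump is built.
from pathlib import Path

def gather_context(root: str) -> str:
    """Naively concatenate every readable text file under `root`."""
    chunks = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            try:
                chunks.append(f"--- {path} ---\n{path.read_text()}")
            except (UnicodeDecodeError, OSError):
                continue  # skip binaries and unreadable files
    return "\n".join(chunks)

# A .env or terraform.tfvars sitting anywhere under `root` rides along with
# everything else, and the model treats it as just more text.
```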

Accidental Exposure Is the Real Risk

Most secret leaks are not intentional. They are accidental.

Logs. Debug output. Context windows. Copied examples.

AI increases the surface area for accidental exposure because it encourages dumping context.

That alone is reason enough to draw a hard line.

The threat model for AI and secrets is not malicious exfiltration. It is accidental inclusion and unintentional propagation.

Someone copies code from a private repository into an AI prompt to get help refactoring. The code includes a hardcoded database connection string. The AI uses that pattern in future suggestions, potentially exposing it to other users or to the AI vendor’s logs.

Someone asks an AI to generate deployment documentation. They provide their actual Terraform configuration as context. That configuration includes secret ARNs, account IDs, and resource names that reveal internal architecture.

Someone debugs an authentication issue by pasting error messages that include tokens or session IDs. Those identifiers end up in the AI’s logs, where they persist longer than intended.

None of these scenarios involve malice. They are all easy mistakes. And AI makes them easier because it normalizes the practice of sharing everything to get better results.

Boundaries Must Be Structural

Policies are not enough. Training is not enough. Good intentions are not enough.

The only reliable boundary is architectural.

No secrets in prompts. No credentials in context. No access to vaults. No exceptions.

If the AI cannot see it, it cannot leak it.

We implemented this as hard constraints, not guidelines. Our AI integrations are designed so that secrets are architecturally impossible to include in prompts.

Code analysis tools redact patterns that look like secrets before sending code to AI APIs. Credentials are stored separately and never exposed in contexts that AI tools can access. When engineers need to reference a secret, they use placeholders or variable names, never actual values.
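
As an illustration of that redaction step, here is a minimal sketch, with assumed patterns rather than our production scanner, of masking secret-shaped strings before any text is handed to an API client.

```python
# Minimal redaction sketch: mask secret-shaped strings before text leaves the
# machine. The patterns are illustrative assumptions, not an exhaustive set.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                               # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"),  # key = value assignments
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]+?-----END [A-Z ]*PRIVATE KEY-----"),
]

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace anything matching a secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

snippet = 'password = "hunter2"\naws_key = "AKIAABCDEFGHIJKLMNOP"'
print(redact(snippet))
# [REDACTED]
# aws_key = "[REDACTED]"
```

The point is not that these patterns are complete. The point is that the masking sits in the code path that reaches the API, so skipping it is not an option an individual engineer gets to make.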

We also limit what AI tools can access. They cannot read from Key Vault. They cannot query environment variables. They cannot access production configuration. If a workflow requires secrets, it happens outside the AI boundary.
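
One way to picture that boundary, as a sketch with assumed names rather than our actual integration layer: the only function that assembles prompt context reads from an explicit allowlist and never touches the vault client or the process environment.

```python
# Sketch of an allowlisted context builder. The AI-facing code path holds no
# vault client and never reads os.environ; paths and suffixes are assumptions.
from pathlib import Path

ALLOWED_ROOTS = {Path("src"), Path("docs")}    # application code and public docs only
DENIED_SUFFIXES = {".env", ".tfvars", ".pem"}  # never sent, no exceptions

def build_prompt_context(files: list[Path]) -> str:
    """Assemble prompt context only from allowlisted locations."""
    chunks = []
    for path in files:
        if path.suffix in DENIED_SUFFIXES:
            raise ValueError(f"{path} is never sent to an AI tool")
        if not any(root in path.parents for root in ALLOWED_ROOTS):
            raise ValueError(f"{path} is outside the allowlisted context roots")
        chunks.append(f"--- {path} ---\n{path.read_text()}")
    return "\n\n".join(chunks)
```

If a workflow genuinely needs a secret, it calls the vault directly and never passes through this function.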

This creates friction. Engineers sometimes want to paste their entire environment into an AI prompt to get better debugging help. We say no. The inconvenience is worth the guarantee that secrets cannot leak through AI channels.

Final Thought

We do not trust AI with secrets, not because it is dangerous, but because it is indiscriminate.

Good security is about knowing where to draw the line on trust.

With AI, that line needs to be very clear.
