We thought Container Apps networking would be simple.

We were wrong.

Here is what we learned after hours of troubleshooting.

Internal vs External Environments Are Not What You Think

Container Apps have two environment types: internal and external.

We assumed:

  • external means public internet
  • internal means private network

That is partially true, but incomplete.

External environments get a public IP and can accept traffic from the internet. They can still be integrated with your VNET, and the apps inside can be restricted to it.

Internal environments do not get a public IP at all. They only get a private IP in your VNET.

The confusion comes from ingress settings, which are configured per app, not per environment. You can have an external environment whose apps all use internal-only ingress. The environment sits in your VNET with a public IP, but the apps inside do not accept public traffic.
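
A quick way to see which situation you are in is to resolve an app's FQDN and check whether the address you get back is public or private. A rough sketch using only the standard library; the FQDN is a placeholder for whatever your app shows in the portal.

  # Resolve a Container App FQDN and classify the addresses it returns.
  # Placeholder FQDN; substitute the one from your app's overview page.
  import ipaddress
  import socket

  fqdn = "my-app.happyhill-1234abcd.westeurope.azurecontainerapps.io"

  addresses = {info[4][0] for info in socket.getaddrinfo(fqdn, 443)}
  for addr in sorted(addresses):
      kind = "private" if ipaddress.ip_address(addr).is_private else "public"
      print(f"{fqdn} -> {addr} ({kind})")

In our environments, a public address meant the app was reachable from the internet, and a private address meant it was only on the internal path.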

It took us three failed deployments to understand that distinction.

The VNET Integration Is Not Optional

If you want your Container Apps to talk to private resources, you need VNET integration.

That means:

  • a subnet dedicated to the Container Apps environment
  • enough IP addresses for scaling (minimum /23, we use /21)
  • proper Network Security Group rules
  • DNS configuration for private endpoints

We underestimated the subnet size. Our apps could not scale because we ran out of IPs.

We had to recreate the environment in a larger subnet. That meant downtime.

Plan your subnet size based on the maximum scale you expect, not the typical load.
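
If you want to sanity-check the math before committing to a prefix, the standard library is enough. Azure reserves five addresses in every subnet, and the Container Apps infrastructure consumes more on top of that, so treat the output as a ceiling rather than exact capacity.

  # Compare raw subnet capacity for the two prefixes mentioned above.
  import ipaddress

  for prefix in ("/23", "/21"):
      subnet = ipaddress.ip_network(f"10.0.0.0{prefix}")
      usable = subnet.num_addresses - 5  # Azure reserves 5 addresses per subnet
      print(f"{prefix}: {subnet.num_addresses} total, at most {usable} before platform overhead")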

DNS and Private Endpoints Are the Hidden Complexity

Container Apps environments use Azure-provided DNS by default.

If you use private endpoints for other Azure services, you need private DNS zones.

We had apps that could not connect to:

  • Azure SQL with private endpoint
  • Storage Accounts with private endpoint
  • Key Vault with private endpoint

The issue was not RBAC. It was not firewall rules. It was DNS.

The Container Apps environment was not linked to our private DNS zones.

Once we linked the zones, everything worked.

That troubleshooting took four hours. The fix took two minutes.
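
In hindsight, a two-minute script run from inside the environment would have pointed straight at DNS. A sketch with placeholder hostnames for the three services above: if the private DNS zones are linked, each name should resolve to a private endpoint address; a public address means resolution is still going through public DNS.

  # Resolve each private-endpoint hostname and flag anything that still
  # resolves to a public address. Hostnames below are placeholders.
  import ipaddress
  import socket

  hosts = [
      "our-sql-server.database.windows.net",   # Azure SQL
      "ourstorageacct.blob.core.windows.net",  # Storage Account
      "our-key-vault.vault.azure.net",         # Key Vault
  ]

  for host in hosts:
      try:
          addr = socket.getaddrinfo(host, 443)[0][4][0]
      except socket.gaierror as exc:
          print(f"{host}: resolution failed ({exc})")
          continue
      kind = "private" if ipaddress.ip_address(addr).is_private else "PUBLIC"
      print(f"{host} -> {addr} ({kind})")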

Outbound Traffic Is More Complex Than It Should Be

By default, Container Apps use Azure’s outbound IP addresses.

You do not control them. They can change.

If you need a static outbound IP for whitelisting, you have two options:

  • use a NAT Gateway attached to the Container Apps subnet
  • use Azure Firewall with a user-defined route (forced tunneling)

We went with NAT Gateway. It was simpler and cheaper for our use case.

But setting it up required:

  • creating a public IP prefix
  • attaching the NAT Gateway to the subnet
  • updating our firewall rules with the new IP range

The documentation made it sound simple. The implementation had edge cases we did not expect.
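
One check worth keeping around afterwards: confirm that egress really leaves through the prefix you put on the allow list. A sketch meant to run from inside a container app; the IP-echo service and the prefix are stand-ins for whatever you actually use.

  # Ask an external echo service for our effective outbound IP and verify it
  # falls inside the NAT Gateway's public IP prefix. Prefix is a placeholder.
  import ipaddress
  import urllib.request

  NAT_PREFIX = ipaddress.ip_network("20.50.100.0/28")  # placeholder prefix

  with urllib.request.urlopen("https://api.ipify.org") as resp:
      outbound_ip = ipaddress.ip_address(resp.read().decode().strip())

  if outbound_ip in NAT_PREFIX:
      print(f"OK: egress via {outbound_ip}, inside {NAT_PREFIX}")
  else:
      print(f"WARNING: egress via {outbound_ip}, outside {NAT_PREFIX}")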

Service-to-Service Communication Is Simple, Until It Is Not

Container Apps in the same environment can talk to each other by name.
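
By name means the short app name, which resolves inside the environment. A call can be as small as this, assuming a sibling app with ingress enabled; the app name and path are made up.

  # Call a sibling Container App in the same environment by its short name.
  # "orders-api" and "/health" are hypothetical.
  import urllib.request

  with urllib.request.urlopen("http://orders-api/health") as resp:
      print(resp.status, resp.read().decode())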

That works great. Until you need to call a Container App in a different environment.

Then you need:

  • proper DNS resolution
  • ingress configured correctly
  • VNET peering if environments are in different VNETs
  • correct NSG rules

We hit this when we split dev and prod into separate environments.

Suddenly our dev apps could not call dev dependencies because they were trying to use public DNS.

We had to set up internal ingress, link the environments through VNET peering, and configure DNS properly.

It worked. But it was not obvious.
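
For reference, the cross-environment call ends up targeting the other app's full internal FQDN rather than the short name. A small sketch we could have used to verify both resolution and reachability; the FQDN is a placeholder in the shape our internal-ingress apps ended up with.

  # After peering and DNS linking, the remote app should resolve to a private
  # address and accept a TCP connection on its ingress port. Placeholder FQDN.
  import ipaddress
  import socket

  fqdn = "billing-api.internal.proudfield-5678efgh.westeurope.azurecontainerapps.io"

  addr = socket.getaddrinfo(fqdn, 443)[0][4][0]
  print(f"{fqdn} -> {addr} (private={ipaddress.ip_address(addr).is_private})")

  with socket.create_connection((fqdn, 443), timeout=5):
      print("TCP 443 reachable: peering and NSG rules look fine")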

What We Should Have Done Differently

We should have:

  • read the networking docs twice before deploying anything
  • used a /21 subnet from the start
  • linked private DNS zones during initial setup
  • planned for static outbound IPs early
  • tested cross-environment communication in dev first

Container Apps networking is powerful.

But it assumes you understand Azure networking deeply.

If you do not, you will learn the hard way.

We did.