examlab.net — The most efficient path to the most valuable certifications.

PrivateLink, VPC Endpoints, and Endpoint Policies

5,400 words · ≈ 27 min read

ANS-C01 Domain 2.2 deep dive into PrivateLink: gateway endpoints (S3 and DynamoDB only) versus interface endpoints, endpoint services backed by NLB, endpoint policies for data perimeter, private DNS for service discovery, cross-account sharing, overlapping CIDR access patterns.


PrivateLink is the connectivity answer behind roughly half of ANS-C01's multi-account scenarios. The Solutions Architect exam treats VPC endpoints as "the thing that makes S3 access private". The Advanced Networking Specialty exam asks "given a SaaS provider exposing a service to 200 customer VPCs, where the customers have overlapping 10.0.0.0/16 CIDRs, the SaaS wants per-customer access control with manual approval and per-customer logging — what topology, what endpoint type, what policy structure, and how does DNS resolve?" That is PrivateLink endpoint service architecture, and ANS-C01 typically has 4-6 questions on this exact territory.

This topic covers the full PrivateLink primitive surface: gateway endpoints (S3 and DynamoDB only) vs interface endpoints (everything else, NLB-backed), endpoint services for SaaS providers, endpoint policies for the data perimeter, private DNS for service-name resolution overrides, cross-account sharing via allowed principals and AWS RAM, overlapping CIDR scenarios where PrivateLink is the only viable connectivity, centralised interface endpoint hub patterns for cost and operations consolidation, and the security group + endpoint policy interaction. Mapped to Task Statement 2.2 (Implement routing and connectivity across multiple AWS accounts, Regions, and VPCs).

PrivateLink is the AWS-native answer to three problems that traditional routing cannot solve: (a) private access to AWS services without traversing the public internet, NAT, or peering; (b) service exposure across accounts without sharing IP space or peering routes; (c) connectivity between VPCs with overlapping CIDRs that VPC peering and TGW cannot handle. ANS-C01 expects you to recognise each of these scenarios and pick PrivateLink as the answer.

The framing across this topic is service-level connectivity, not network-level connectivity. Traditional routing (peering, TGW, VPN) connects networks — once a packet from one network can reach the other, any service on either side is reachable subject to security groups. PrivateLink connects services specifically: an endpoint exposes one specific service, and only that service is reachable through it. Other services on the provider's network are invisible. This service-level isolation is the security and architecture property the exam tests.

The exam tests PrivateLink in three distinct flavours. Gateway vs interface selection asks "which type of endpoint for which service?" with answer choices including the trap "gateway endpoint for KMS" (wrong — only S3 and DynamoDB have gateway endpoints). Endpoint policy design asks "how do I prevent S3 access to non-org buckets via endpoint policy?" Endpoint service architecture asks "how do I design a SaaS service exposed to 200 customer VPCs with per-customer authorisation?" — pure PrivateLink endpoint service territory.

PrivateLink combines a few primitives (gateway endpoints, interface endpoints, endpoint services, endpoint policies) into private service connectivity that does not require routing or peering. Three analogies anchor the moving parts.

Analogy 1: The Pneumatic Tube System Inside a Building

Think of a VPC as an office floor, AWS services like S3 and DynamoDB as the building's central supply room, and the public internet as the city street. Without VPC endpoints, accessing S3 means walking out of the building onto the street and back into the building's main entrance — slow, exposed, expensive (NAT GW data charges + IGW). A gateway endpoint is an internal pneumatic tube between your office floor and the supply room, dedicated to S3 and DynamoDB only. The tube does not need an ENI or IP address — it is just a route entry that says "for this service, use the tube". It is free. An interface endpoint is a dedicated phone line installed in your office with its own extension number (ENI with private IP), connecting directly to the AWS service via PrivateLink. Each phone line costs hourly + per-call (per-AZ + per-GB). An endpoint service is a dedicated phone line you, the SaaS provider, install for your customers — your customers' offices each get a phone (interface endpoint) connected to your service NLB. Endpoint policies are the call-screening rules on each line — only certain extensions or call types are permitted.

Analogy 2: The Library With Service Counters

A VPC is a research library. AWS services are specialised service counters in the library system (rare books, archives, microfilm, journal subscriptions). The public internet is the city's public library catalog. Without endpoints, reaching the rare books archive means walking outside to a different building. A gateway endpoint is a secret passage between your reading room and the archive — only for two specific archives (S3 and DynamoDB), free, no key needed except a route map entry. An interface endpoint is a direct phone line to the archive's reference desk, with its own extension and a librarian (security group) screening calls. Each call goes through the line and back. An endpoint service (PrivateLink) is when you yourself run an archive and provide direct phone lines to other libraries that subscribe. Endpoint policies are the rules on what each phone line can request — "this line can only request books from Section A, by patrons in our research consortium".

Analogy 3: The Hospital With Specialty Clinic Connections

A VPC is a patient ward in a hospital. AWS services are specialty clinics elsewhere in the hospital network (radiology, pathology, pharmacy). The public internet is traveling to a different hospital. A gateway endpoint is an internal corridor between your ward and two specific clinics (S3 and DynamoDB). A patient can walk through it freely — no badge, no checkpoint, no fee. An interface endpoint is a dedicated internal phone line between your ward and a specialty clinic — there is a phone (ENI + IP) on your ward end, the clinic's reception (NLB target group) on the other. Each call has hourly and per-minute charges. An endpoint service is when the lab on your campus offers private telemedicine to other hospitals' wards — they subscribe to your service via their own endpoint. Endpoint policies are the scope of what each phone line can do — "this line can only call radiology, only for X-ray requests, only by approved physicians".

For ANS-C01, the pneumatic tube analogy is the highest-yield mental model when a question contrasts gateway endpoints (free, S3/DynamoDB only, no ENI) with interface endpoints (per-AZ ENI, costed). For the data perimeter and endpoint policy questions, the call-screening rules sub-analogy makes layered policy intuitive. For SaaS endpoint services, the dedicated phone line you install for customers is sharpest. Reference: https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html

PrivateLink defines a producer (the service provider) and one or more consumers (the service users). The producer publishes an endpoint service backed by a Network Load Balancer (or Gateway Load Balancer). Consumers create interface endpoints that connect to the producer's endpoint service.

Producer side — endpoint service

The producer creates an endpoint service linked to an NLB (or GWLB). The endpoint service has:

  • Service name — globally unique, of the form com.amazonaws.vpce.<region>.vpce-svc-xxxxx.
  • Allowed principals — IAM principals (accounts, roles) authorised to create endpoints to this service.
  • Acceptance flag — manual or automatic acceptance of consumer endpoint connections.
  • Private DNS name (optional) — a custom DNS name (e.g., api.saas.example.com) that consumers can resolve internally.

Consumer side — interface endpoint

The consumer creates an interface endpoint in their VPC, specifying the producer's service name. The endpoint:

  • Provisions one ENI per AZ in the consumer's selected subnets.
  • Each ENI has a private IP in the consumer's subnet CIDR.
  • Has a security group controlling which clients can reach it.
  • Resolves the service via DNS — private DNS or auto-generated DNS name.

What AWS-managed services use

For AWS-managed services (KMS, Secrets Manager, STS, Systems Manager, ECR, etc.), AWS itself runs the endpoint service. The consumer creates an interface endpoint targeting the AWS-managed service name (e.g., com.amazonaws.<region>.kms). Same mechanism, but the producer is AWS.

PrivateLink does not provide bidirectional network connectivity. It is unidirectional: consumer can call producer, but the producer cannot initiate connections to the consumer. For bidirectional connectivity, use VPC peering, TGW, or Site-to-Site VPN.

  • Gateway endpoint: route-table-based endpoint for S3 and DynamoDB; no ENI, free.
  • Interface endpoint: ENI-based endpoint for AWS services or custom services via PrivateLink.
  • Endpoint service: a producer-side construct exposing a service via PrivateLink, backed by NLB/GWLB.
  • Allowed principals: IAM principals authorised to connect to an endpoint service.
  • Endpoint policy: IAM-style resource policy on the endpoint controlling what flows through.
  • Private DNS: AWS-managed override of public service DNS to point at the endpoint ENI.
  • Service name: global identifier for an endpoint service; format com.amazonaws.vpce.<region>.vpce-svc-xxx.
  • Prefix list: managed list of CIDRs; AWS-managed prefix lists exist for S3, DynamoDB, etc.
  • Reference: https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html

Gateway Endpoints — S3 and DynamoDB Only

A gateway endpoint is a routing target added to a VPC route table that intercepts traffic to S3 or DynamoDB and routes it across the AWS backbone, bypassing the public internet. Only S3 and DynamoDB support gateway endpoints — no other AWS service does.

How gateway endpoints work

The gateway endpoint is added as a target in a VPC route table. AWS provides prefix lists that name the IP ranges of S3/DynamoDB in each region (e.g., pl-63a5400a for S3 in us-east-1). The route table entry: destination pl-63a5400a (the prefix list), target vpce-xxxx (the gateway endpoint). When an instance sends traffic to an S3 IP within the prefix list, the route directs it to the gateway endpoint, which forwards over the AWS backbone.
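The prefix-list route selection can be sketched in plain Python (the CIDRs and the endpoint/NAT IDs below are placeholders, not the real AWS-managed prefix list):

```python
import ipaddress

# Placeholder CIDRs standing in for an AWS-managed S3 prefix list entry.
S3_PREFIX_LIST = [ipaddress.ip_network(c) for c in ("52.216.0.0/15", "54.231.0.0/16")]

# Route table: a prefix-list route to the gateway endpoint plus a default route.
ROUTES = [
    {"destinations": S3_PREFIX_LIST, "target": "vpce-0abc123"},                      # gateway endpoint
    {"destinations": [ipaddress.ip_network("0.0.0.0/0")], "target": "nat-0def456"},  # NAT gateway
]

def route_target(dst_ip: str) -> str:
    """Most-specific match wins, so prefix-list routes beat the default route."""
    ip = ipaddress.ip_address(dst_ip)
    best, best_len = None, -1
    for route in ROUTES:
        for net in route["destinations"]:
            if ip in net and net.prefixlen > best_len:
                best, best_len = route["target"], net.prefixlen
    return best
```

Traffic to an address inside the prefix list exits via the endpoint; everything else still falls through to the default route.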

Cost — free

Gateway endpoints have no per-hour or per-GB charge. They are free. This makes them the obvious choice for S3/DynamoDB access from private subnets that would otherwise need NAT GW.

Endpoint policies on gateway endpoints

Gateway endpoints support endpoint policies — IAM-style resource policies attached to the endpoint that restrict which buckets, tables, principals, or actions are accessible through it. The most powerful application is the data perimeter: an endpoint policy that only allows access to buckets in the org's account list, preventing exfiltration to attacker-controlled buckets even if compromised credentials are used.

Limitations of gateway endpoints

  • S3 and DynamoDB only. Every other service uses interface endpoints.
  • Same-region only — cannot reach S3 in another region via a gateway endpoint.
  • No access grant by itself — a cross-account bucket is reachable through the endpoint, but its bucket policy and IAM permissions still apply.
  • Not accessible from on-premises via Direct Connect or VPN — unlike interface endpoints, which can be reached over hybrid connections.

Gateway endpoint route table entries

A common ANS-C01 question: a private subnet's route table contains the prefix list entry pointing to the gateway endpoint. The prefix list is region-specific — different region, different prefix list ID. Updating the gateway endpoint's policy or route association does not require changes to client applications; the prefix list dynamically adjusts to S3 IP changes.

The most common ANS-C01 distractor: an answer choice describes a gateway endpoint for KMS, Secrets Manager, ECR, Systems Manager, or any service other than S3/DynamoDB. There is no such thing. Gateway endpoints exist exclusively for S3 and DynamoDB. Every other AWS service uses interface endpoints (PrivateLink). Memorise this binary fact: gateway endpoints = S3 + DynamoDB only. If a scenario describes private access to KMS/Secrets/STS/etc., the answer is interface endpoint, not gateway endpoint. Reference: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html

Interface Endpoints — ENI-Based Access to Everything Else

An interface endpoint is one or more ENIs in your subnets that proxy requests to an AWS service or custom PrivateLink service. Each ENI has a private IP, supports security groups, and (with private DNS enabled) overrides public service DNS for in-VPC clients.

Per-AZ ENI

Interface endpoints provision one ENI per selected subnet, typically one per AZ for HA. The ENIs are owned by the AWS service or producer, but live in the consumer's subnet and have IPs from the consumer's subnet CIDR. Traffic to these IPs is forwarded over PrivateLink to the producer's NLB or AWS service.

Cost

Interface endpoints have a per-AZ per-hour charge plus per-GB processed. Compared to NAT GW, interface endpoints are typically cheaper for AWS service traffic and provide better security (no internet egress). For high-volume workloads, the per-GB savings vs NAT GW can be significant.

Security group on interface endpoint ENI

Each interface endpoint ENI has a security group attached. The default is the VPC's default SG (often too permissive). Best practice: create a dedicated SG that allows inbound TCP 443 only from the client subnets that should access the endpoint. The SG controls which clients can reach the endpoint at all; the endpoint policy controls what they can do once connected.
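A minimal sketch of the dedicated-SG rule, shaped like the boto3 `authorize_security_group_ingress` parameters (the group ID and client CIDRs are hypothetical):

```python
def endpoint_sg_ingress(sg_id: str, client_cidrs: list) -> dict:
    """Allow HTTPS to the endpoint ENIs only from the listed client subnets."""
    return {
        "GroupId": sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,  # interface endpoints serve AWS APIs over TLS on 443
            "IpRanges": [
                {"CidrIp": c, "Description": "endpoint clients"} for c in client_cidrs
            ],
        }],
    }

params = endpoint_sg_ingress("sg-0endpoint", ["10.0.1.0/24", "10.0.2.0/24"])
```

The SG gates reachability; the endpoint policy (next section) gates what callers can do once connected.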

Private DNS

When you create an interface endpoint with Private DNS enabled, AWS registers a private hosted zone in the VPC for the AWS service's standard DNS name (e.g., secretsmanager.<region>.amazonaws.com). Clients in the VPC resolving this name get the endpoint ENI's private IP instead of the public service IP. This means no application code change is required — existing clients automatically use the endpoint.

Private DNS requires the VPC to have enableDnsSupport=true and enableDnsHostnames=true. Without these, private DNS does not function and clients still resolve to public IPs.

Endpoint DNS names without Private DNS

Each interface endpoint also gets endpoint-specific DNS names of the form:

  • Per-VPC name: vpce-<id>-<sub>.<service>.<region>.vpce.amazonaws.com
  • Per-AZ name: vpce-<id>-<sub>-<az>.<service>.<region>.vpce.amazonaws.com

Clients can use these names directly, or use the standard service DNS name with Private DNS enabled.
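The name structure can be illustrated with a small formatter (all values are hypothetical placeholders for the `<id>` and `<sub>` parts above):

```python
def vpce_dns(endpoint_id: str, suffix: str, service: str, region: str, az: str = "") -> str:
    """Build an endpoint-specific DNS name; pass az for the per-AZ variant."""
    az_part = f"-{az}" if az else ""
    return f"{endpoint_id}-{suffix}{az_part}.{service}.{region}.vpce.amazonaws.com"

regional = vpce_dns("vpce-0abc123", "x1y2z", "secretsmanager", "us-east-1")
zonal = vpce_dns("vpce-0abc123", "x1y2z", "secretsmanager", "us-east-1", az="us-east-1a")
```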

A high-frequency ANS-C01 distinction. Interface endpoints: ENI per AZ, has IP, has SG, has per-AZ per-hour and per-GB cost, supports nearly all AWS services + custom PrivateLink. Gateway endpoints: no ENI, no IP, no SG, no cost, supports only S3 and DynamoDB. The exam frequently tests "which services have which endpoint type" — memorise the binary. For services like KMS, Secrets Manager, STS, Systems Manager, ECR, SNS, SQS, Kinesis, and anything else, interface endpoint is the only option. Reference: https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html

Endpoint Policies — The Data Perimeter Lever

An endpoint policy is an IAM-style resource policy attached to a VPC endpoint. It restricts what principals, actions, and resources are accessible through the endpoint. Endpoint policies are evaluated alongside identity-based and resource-based policies; the effective permission is the intersection.

Default policy — full access

By default, an endpoint policy grants * (full access) — meaning the endpoint adds no additional restriction beyond IAM and resource policies. This is permissive for development but inadequate for production security.

Restrictive endpoint policy — the data perimeter pattern

A typical data perimeter policy on an S3 gateway endpoint:

{
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
    "Resource": "*",
    "Condition": {
      "StringEquals": {
        "aws:PrincipalOrgID": "o-xxxxxxx"
      }
    }
  }]
}

This allows S3 access only when the calling principal is in the organisation o-xxxxxxx. A compromised access key from outside the org cannot use this endpoint to reach S3 — even if the key has IAM permissions, the endpoint denies the request.

Layered with bucket policies and SCPs

The full data perimeter is three layers:

  1. Endpoint policy with aws:PrincipalOrgID — restricts who can use the endpoint.
  2. S3 bucket policy with aws:SourceVpce — restricts which endpoints can reach the bucket.
  3. Service Control Policy (SCP) with aws:ResourceOrgID — restricts which buckets the org's principals can call at all.

Together: no exfiltration to outside-org buckets (SCP blocks), no ingestion to your buckets from outside (bucket policy blocks), no use of the endpoint by foreign principals (endpoint policy blocks).
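A sketch of all three layers as Python dicts (org ID, bucket, and endpoint ID are placeholders; real deployments also need `Version` fields and usually exception statements for service roles):

```python
# Placeholder identifiers - substitute your real org ID, bucket, and endpoint ID.
ORG_ID = "o-xxxxxxx"
ENDPOINT_ID = "vpce-0abc123"
BUCKET_ARNS = ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"]

# Layer 1 - endpoint policy: only principals in our org may use the endpoint.
endpoint_policy = {
    "Statement": [{
        "Effect": "Allow", "Principal": "*",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": "*",
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
    }]
}

# Layer 2 - bucket policy: deny any request not arriving via our endpoint.
bucket_policy = {
    "Statement": [{
        "Effect": "Deny", "Principal": "*", "Action": "s3:*",
        "Resource": BUCKET_ARNS,
        "Condition": {"StringNotEquals": {"aws:SourceVpce": ENDPOINT_ID}},
    }]
}

# Layer 3 - SCP: deny S3 calls against buckets outside our org.
scp = {
    "Statement": [{
        "Effect": "Deny", "Action": "s3:*", "Resource": "*",
        "Condition": {"StringNotEqualsIfExists": {"aws:ResourceOrgID": ORG_ID}},
    }]
}
```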

Interface endpoint policies

Interface endpoints also support endpoint policies. For services like KMS, Secrets Manager, STS, the endpoint policy controls which keys, secrets, or roles are accessible through the endpoint. A common pattern: KMS endpoint policy restricting kms:Decrypt to a specific KMS key ARN, preventing use of any other key through this endpoint.
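For example, a minimal KMS endpoint policy of this shape (the key ARN is a placeholder):

```python
# Hypothetical key ARN - the policy allows Decrypt with this one key only.
KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"

kms_endpoint_policy = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "kms:Decrypt",
        "Resource": KEY_ARN,  # any other key is denied through this endpoint
    }]
}
```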

ANS-C01 expects you to recognise the three-layer data perimeter. Each layer alone has a bypass. Endpoint policy alone: blocks foreign principals from using the endpoint, but does not stop in-VPC traffic via NAT GW from reaching foreign buckets. Bucket policy alone: blocks foreign endpoints, but does not stop your principals from accessing foreign buckets. SCP alone: blocks your principals from accessing foreign buckets, but does not stop foreign principals from using your endpoints. Together they form a closed two-way perimeter. Questions describing "prevent any S3 access to or from non-organisation buckets" expect this triad. Reference: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html

Endpoint Services — SaaS Provider Pattern

An endpoint service is the producer-side PrivateLink construct that exposes a service to consumers. The service is backed by an NLB (TCP/UDP/TLS) or a GWLB (transparent inline traffic).

Producer setup

  1. Producer deploys their service behind an NLB (or GWLB) in their VPC.
  2. Producer creates an endpoint service in the same VPC, linking it to the NLB.
  3. Producer adds allowed principals — IAM principals (accounts, roles) authorised to connect.
  4. Producer optionally enables acceptance required — every consumer connection requires manual approval.
  5. Producer optionally registers a custom Private DNS name (e.g., api.saas.example.com).

Consumer connection

  1. Consumer learns the producer's service name (e.g., com.amazonaws.vpce.us-east-1.vpce-svc-xxxxx).
  2. Consumer creates an interface endpoint in their VPC targeting the service name.
  3. If acceptance is required, the producer manually accepts the connection.
  4. Consumer's traffic to the endpoint ENI flows via PrivateLink to the producer's NLB and then to backend instances.
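The producer and consumer steps can be sketched as boto3-style EC2 request parameters (all IDs, ARNs, and account numbers are hypothetical; this is a shape sketch, not a tested deployment):

```python
# Producer side (ec2 create_vpc_endpoint_service_configuration).
producer_request = {
    "NetworkLoadBalancerArns": [
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/saas-api/abc"
    ],
    "AcceptanceRequired": True,  # every consumer connection needs manual approval
}

# Producer authorises each customer account (modify_vpc_endpoint_service_permissions).
allowed_principals = ["arn:aws:iam::444455556666:root"]

# Consumer side (ec2 create_vpc_endpoint, run in the consumer account).
consumer_request = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0consumer",
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-xxxxx",
    "SubnetIds": ["subnet-aaa", "subnet-bbb"],  # one ENI per subnet/AZ
    "SecurityGroupIds": ["sg-0endpoint"],
}
```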

Cross-account and cross-region

PrivateLink endpoint services work cross-account by default (assuming allowed principals include the consumer's account). Cross-region is not directly supported — consumers in a different region than the producer cannot connect to the endpoint service. Workarounds: deploy the producer in multiple regions, each with its own endpoint service; or use TGW peering + interface endpoint in the producer's region.

Example — SaaS provider with 200 customer VPCs

A SaaS provider exposes a REST API to 200 customer VPCs:

  • Provider deploys API behind an NLB; creates endpoint service.
  • Provider adds 200 customer accounts as allowed principals.
  • Provider sets acceptance required = true for manual onboarding control.
  • Each customer creates an interface endpoint in their VPC, gets the endpoint DNS name, and points their API client at it.
  • Provider sees connection records per customer in CloudWatch and VPC Flow Logs — per-customer logging.
  • Provider's NLB target groups can be different per customer (advanced multi-tenancy) or shared (simple multi-tenancy).

For SaaS providers exposing services to customer VPCs, PrivateLink endpoint services are AWS's recommended pattern over alternatives like VPC peering (peering needs route tables and overlapping CIDRs are blockers), Transit Gateway (TGW requires more orchestration and has less granular per-consumer access control), or public APIs over the internet (less secure, more egress costs). Endpoint services support per-consumer authorisation, manual acceptance, per-AZ deployment, and connection-level logging — all the operational hooks a SaaS provider needs. ANS-C01 questions describing "SaaS exposing service to many consumer VPCs" expect endpoint service. Reference: https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-share-your-services.html

Overlapping CIDRs — Where Only PrivateLink Works

A frequent ANS-C01 scenario: two or more VPCs with overlapping 10.0.0.0/16 CIDRs need to communicate. VPC peering and TGW cannot solve this — they require unique CIDR space because routing is by destination IP. PrivateLink can solve it.

PrivateLink's data path uses a destination IP from the consumer's subnet (the endpoint ENI's IP) to reach a service that exists abstractly behind it. The producer's actual VPC CIDR is invisible to the consumer; the consumer only sees the endpoint ENI in their own subnet. So even if the consumer and producer have overlapping 10.0.0.0/16 CIDRs, the consumer's traffic stays in its own CIDR (to the endpoint ENI), and the producer's NLB sees source addresses supplied by PrivateLink rather than the consumer's addresses (Proxy Protocol v2 can carry the original client details if needed).
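The CIDR arithmetic behind this is simple to demonstrate (the ENI IP is hypothetical):

```python
import ipaddress

# Two VPCs with identical CIDRs - peering and TGW are both impossible here.
consumer_vpc = ipaddress.ip_network("10.0.0.0/16")
producer_vpc = ipaddress.ip_network("10.0.0.0/16")

# The consumer only ever addresses the endpoint ENI, which lives in its OWN subnet.
endpoint_eni_ip = ipaddress.ip_address("10.0.1.23")

# Destination stays inside the consumer's CIDR; the producer's 10.0.0.0/16
# never appears in any consumer route table.
assert endpoint_eni_ip in consumer_vpc
print(consumer_vpc.overlaps(producer_vpc))  # True - yet PrivateLink still works
```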

Use case

  • Acquired company has same RFC 1918 space as parent — must integrate without re-IPing.
  • SaaS provider must serve customers regardless of their VPC CIDRs.
  • Multi-tenant lab environments where every tenant uses 10.0.0.0/16 by template.

Limitations

  • PrivateLink is unidirectional service exposure — not full network connectivity.
  • For full bidirectional connectivity with overlapping CIDRs, the options are NAT-based workarounds (rare, complex) or re-IPing one side (most common).

Centralised Interface Endpoint Pattern — Cost and Operational Consolidation

For organisations with many VPCs each needing access to AWS services via interface endpoints, the per-VPC per-AZ hourly cost adds up. The centralised interface endpoint pattern places interface endpoints in a single shared VPC and routes other VPCs to them via TGW or peering.

Architecture

  • Central endpoints VPC: contains interface endpoints for KMS, Secrets Manager, STS, etc.
  • Spoke VPCs: have TGW attachment, route AWS service traffic through TGW to the endpoints VPC.
  • DNS: Route 53 private hosted zones for the service names, associated with the spoke VPCs and resolving to the central endpoints (or Resolver rules forwarding spoke queries into the endpoints VPC).

Cost benefit

Without centralisation: 10 spoke VPCs × 3 AZs × 5 services = 150 endpoint-AZs billed hourly. With centralisation: 1 endpoints VPC × 3 AZs × 5 services = 15 endpoint-AZs.

Significant savings on per-AZ hourly charges. Trade-off: TGW data processing fees and cross-VPC traffic, but for many workloads the consolidation wins.
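The arithmetic as a quick sketch (assuming roughly $0.01 per endpoint-AZ-hour and a 730-hour month — check current regional pricing):

```python
# Assumed rate: ~$0.01 per endpoint-AZ-hour; 730 hours in a month.
HOURLY_RATE = 0.01
HOURS_PER_MONTH = 730

def monthly_endpoint_cost(vpcs: int, services: int, azs: int) -> float:
    """Hourly charge only - per-GB processing and TGW fees are excluded."""
    return vpcs * services * azs * HOURLY_RATE * HOURS_PER_MONTH

decentralised = monthly_endpoint_cost(vpcs=10, services=5, azs=3)  # 150 endpoint-AZs
centralised = monthly_endpoint_cost(vpcs=1, services=5, azs=3)     # 15 endpoint-AZs
```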

DNS challenge

The challenge: by default, the AWS service DNS names (secretsmanager.<region>.amazonaws.com) resolve to the endpoint's private IPs only inside the endpoint's VPC. Spoke VPCs resolving the same name will get public IPs, not the central endpoint IPs. Solution: Route 53 PHZ (private hosted zone) for the service name shared across spoke VPCs, manually pointing at the central endpoint, or Route 53 Resolver rules forwarding to the central endpoint.

Implementation note

The centralised pattern adds operational complexity — DNS configuration, TGW route management, security group updates across many spokes. It is worth it for very large multi-VPC organisations; smaller deployments with 2-5 VPCs may prefer per-VPC endpoints.

  • Gateway endpoints: S3 + DynamoDB only, free, no ENI, route-table-based.
  • Interface endpoints: per-AZ ENI, has SG, per-hour + per-GB cost, supports most AWS services.
  • Endpoint services: producer-side, NLB or GWLB backed, with allowed principals and optional acceptance.
  • Default endpoint policy: full access (*); customise to restrict.
  • Private DNS: requires VPC enableDnsSupport=true and enableDnsHostnames=true.
  • Service name format: com.amazonaws.vpce.<region>.vpce-svc-xxx (third-party) or com.amazonaws.<region>.<service> (AWS).
  • Cross-region: endpoint services do not span regions; per-region producer deployment.
  • Endpoint via on-prem: interface endpoints reachable via DX/VPN; gateway endpoints not.
  • Reference: https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html

On-Premises Access to Interface Endpoints

A unique property of interface endpoints: they are reachable from on-premises via Direct Connect or Site-to-Site VPN, because their ENI IPs are routable. A common pattern: on-premises systems call AWS services through a customer-owned interface endpoint in a private VPC, instead of via NAT GW + IGW.

Architecture

  • VPC has an interface endpoint for the AWS service.
  • Direct Connect or VPN connects on-premises to the VPC.
  • On-premises DNS resolver forwards the AWS service domain to a Route 53 Resolver inbound endpoint in the VPC, which resolves to the interface endpoint's private IP.
  • On-premises applications call the AWS service via private IP, traversing DX/VPN to the VPC and then via PrivateLink to the AWS service.

Why this matters

For regulated workloads where on-premises must not have direct internet access, and where the only way to reach AWS services is through the customer's own VPC. Eliminates the need for on-premises NAT/firewall to AWS public IP space.

Gateway endpoints do NOT work this way

Gateway endpoints are route-table-based and use AWS-internal prefix lists. They are not reachable from on-premises — the prefix list does not propagate over DX/VPN. So on-premises access to S3 cannot use a gateway endpoint; it must use either S3's public endpoints (for example over a Direct Connect public VIF) or an S3 interface endpoint (an ENI-based endpoint type for S3, available alongside the gateway endpoint).

A scenario describes an on-premises system needing to reach S3 via Direct Connect, with a gateway endpoint deployed in the VPC. The candidate selects "configure on-premises BGP to advertise the prefix list" — but AWS-managed prefix lists are not BGP-advertisable to on-premises. Gateway endpoints work only for in-VPC traffic. The fix: either use an S3 interface endpoint (the newer ENI-based option), which is reachable from on-prem via the ENI's private IP, or route to S3's public endpoints (for example over a Direct Connect public VIF — a public path, weaker for compliance). The exam tests this distinction. Reference: https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-aws-services.html

Common ANS-C01 Traps

Trap 1: KMS, Secrets Manager, etc. have gateway endpoints

Wrong. Only S3 and DynamoDB have gateway endpoints. Everything else uses interface endpoints.

Trap 2: Gateway endpoints have ENIs

Wrong. Gateway endpoints have no ENI. They are route table prefix list entries.

Trap 3: Interface endpoints are free

Wrong. Interface endpoints have per-AZ per-hour and per-GB charges.

Trap 4: Gateway endpoints support security groups

Wrong. Gateway endpoints have no ENI, so no security group. Use endpoint policies for access control.

Trap 5: Endpoint services span regions

Wrong. Endpoint services are regional. For multi-region, deploy per-region.

Trap 6: PrivateLink provides bidirectional network connectivity

Wrong. PrivateLink is unidirectional service exposure. For bidirectional, use peering, TGW, or VPN.

Trap 7: Private DNS works without enableDnsHostnames

Wrong. Private DNS requires both enableDnsSupport=true and enableDnsHostnames=true on the VPC.

Trap 8: Gateway endpoints reachable from on-premises

Wrong. Gateway endpoints are in-VPC only. On-premises uses interface endpoints (or S3 interface endpoint specifically).

Frequently Asked Questions

Q1: Why do gateway endpoints exist only for S3 and DynamoDB?

Historical reason: gateway endpoints were the original VPC endpoint mechanism, introduced for S3 (2015) and DynamoDB (2017). They use route table prefix lists and the AWS backbone for traffic, with no per-AZ ENI or per-hour cost. They are extremely efficient for the high-volume use case of S3 and DynamoDB access from many VPCs. AWS introduced interface endpoints (PrivateLink) in 2017 as a more flexible mechanism that supports any service via NLB-backed PrivateLink, including third-party SaaS. AWS made an architectural choice not to expand gateway endpoints to additional services because interface endpoints solved the broader problem better. Today, S3 and DynamoDB also have interface endpoint options for cases where ENI-based access is needed (e.g., on-premises reachability), so even those services have both flavours.

Q2: How do I decide between a gateway endpoint and an interface endpoint for S3?

Default to gateway endpoint for in-VPC S3 access — it is free, low-overhead, and well-supported. Use S3 interface endpoint when you need: (a) on-premises access to S3 over Direct Connect or VPN (gateway endpoints are unreachable from on-prem); (b) cross-region access (gateway is region-scoped; interface endpoints can be reached across regions in some patterns); (c) security group control on the data path (gateway has no SG); (d) per-AZ DNS resolution for traffic locality. The cost: interface endpoints have per-AZ hourly and per-GB charges. Gateway endpoints are free. For most simple in-VPC workloads, gateway is the answer. For hybrid or complex multi-account patterns, interface endpoint is.

Q3: How does the data perimeter pattern combine endpoint policy, bucket policy, and SCP?

Each layer protects a different vector. Endpoint policy with aws:PrincipalOrgID = our-org restricts which principals can use this endpoint — a defence against compromised credentials from outside the org. S3 bucket policy with aws:SourceVpce = our-endpoint restricts which endpoints can reach this bucket — a defence against access from public internet or unauthorised VPCs. SCP with deny on s3:* for aws:ResourceOrgID != our-org restricts which buckets your principals can access — a defence against your principals exfiltrating data to attacker-controlled buckets. Together they form a closed two-way perimeter: no exfiltration outwards (SCP blocks), no ingestion inwards (bucket policy blocks), no use of endpoint by foreign principals (endpoint policy blocks). ANS-C01 expects all three.

Q4: How do I expose a service from my VPC to many customer VPCs across accounts?

Use PrivateLink endpoint services. Deploy the service behind an NLB (TCP/UDP/TLS) in your producer VPC. Create an endpoint service linked to the NLB. Add allowed principals — the consumer accounts that may connect. Optionally enable acceptance required for manual onboarding control. Customers create interface endpoints in their VPCs targeting your service name; you manually approve (if acceptance required) or auto-accept. Each customer's traffic flows from their interface endpoint ENI to your NLB to your backend. Per-customer logging via VPC Flow Logs on your NLB; per-customer access control via allowed principals. Cross-account works natively. Limitations: cross-region requires per-region deployment; bidirectional connectivity requires peering or TGW separately.

Q5: What is the difference between Private DNS enabled and disabled on an interface endpoint?

Private DNS enabled (default for AWS-managed services): AWS creates a private hosted zone in your VPC for the service's standard DNS name (e.g., secretsmanager.us-east-1.amazonaws.com). Clients in the VPC resolving this name get the endpoint's private IP. No application code change needed — existing AWS SDKs use the standard name and automatically use the endpoint. Private DNS disabled: clients resolving the standard name still get the public service IP. To use the endpoint, applications must explicitly target the endpoint-specific DNS name (vpce-xxx.secretsmanager.us-east-1.vpce.amazonaws.com). Disable private DNS only when you have multiple endpoints for the same service (e.g., separate endpoints for prod and staging in the same VPC) or when private DNS conflicts with custom Route 53 PHZs. Default to enabled.
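The client-visible difference reduces to which name the application must target — a sketch; the endpoint-specific hostname is an illustrative placeholder in the documented format:

```python
# Which DNS name a client must use, depending on the private DNS setting.
# The vpce-... hostname is an illustrative placeholder (assumption).

SERVICE_NAME = "secretsmanager.us-east-1.amazonaws.com"
ENDPOINT_NAME = "vpce-0abc1234.secretsmanager.us-east-1.vpce.amazonaws.com"

def name_clients_must_use(private_dns_enabled: bool) -> str:
    if private_dns_enabled:
        # AWS-managed PHZ overrides the public name inside the VPC:
        # unmodified SDK defaults resolve to the endpoint's private IP.
        return SERVICE_NAME
    # Without the PHZ, the standard name still resolves to public IPs;
    # the application must be reconfigured to the endpoint-specific name.
    return ENDPOINT_NAME

print(name_clients_must_use(True))
print(name_clients_must_use(False))
```

This is why disabling private DNS is the exception: it pushes an endpoint-specific hostname into every client's configuration.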

Q6: Why does my interface endpoint not work even though the security group allows traffic?

Common causes. (a) VPC dnsSupport or dnsHostnames disabled — Private DNS does not function, clients still resolve to public IPs. Fix: enable both VPC attributes. (b) Endpoint subnet not in client's AZ — interface endpoints are per-AZ; if the endpoint is in AZ-a only and the client is in AZ-b, traffic must hop AZs (and resolve to the AZ-a endpoint via DNS). Best practice: deploy endpoint in every AZ the clients use. (c) Endpoint policy too restrictive — default policy is *, but a custom policy may block the principal or action. Check policy and IAM permissions. (d) Security group on endpoint ENI does not allow client subnet — must allow inbound TCP 443 from client subnets. (e) Route table missing prefix list (gateway endpoints only) — for gateway endpoints, the route table needs the prefix list entry pointing at the endpoint.
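The checklist above can be expressed as a diagnostic function — the `cfg` dictionary shape is an assumption of this sketch, not an AWS data model:

```python
# Illustrative troubleshooting checklist for a non-working interface endpoint.
# The cfg dict keys are assumptions of this sketch, not AWS API fields.

def diagnose_interface_endpoint(cfg: dict) -> list[str]:
    """Return the likely causes (from the list above) that apply to cfg."""
    problems = []
    if not (cfg.get("enableDnsSupport") and cfg.get("enableDnsHostnames")):
        problems.append("VPC DNS attributes disabled: names resolve to public IPs")
    if cfg["client_az"] not in cfg["endpoint_azs"]:
        problems.append("no endpoint ENI in the client's AZ: cross-AZ hop")
    if cfg["client_subnet"] not in cfg["sg_allowed_443_sources"]:
        problems.append("endpoint SG does not allow TCP 443 from the client subnet")
    if not cfg.get("endpoint_policy_allows_action", True):
        problems.append("endpoint policy blocks the principal or action")
    return problems

healthy = {
    "enableDnsSupport": True, "enableDnsHostnames": True,
    "client_az": "us-east-1a", "endpoint_azs": ["us-east-1a", "us-east-1b"],
    "client_subnet": "10.0.1.0/24", "sg_allowed_443_sources": ["10.0.1.0/24"],
}
print(diagnose_interface_endpoint(healthy))   # no findings
```

Checking the causes in this order (VPC attributes, AZ placement, SG, policy) mirrors the path a packet takes, which tends to localise the fault fastest.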

Q7: How does a centralised interface endpoint pattern reduce cost?

Per-VPC interface endpoints have per-AZ hourly costs (about $0.01/hour each, multiplied by AZs). For an org with 20 VPCs each needing 5 service endpoints (KMS, Secrets, STS, ECR, Systems Manager) across 3 AZs: 20 × 5 × 3 = 300 endpoint-AZs, or roughly 300 × $0.01/hour × 730 hours ≈ $2,190/month. With centralisation: 1 endpoints VPC × 5 services × 3 AZs = 15 endpoint-AZs, or ~$110/month. Savings: ~$2,000/month at this scale. Caveats: TGW data processing charges add ~$0.02/GB for cross-VPC traffic, partially offsetting savings on high-volume workloads, and operational complexity (DNS configuration, security review) has its own cost. The centralised pattern is typically worth it at 10+ VPCs; smaller deployments may prefer per-VPC endpoints for simplicity.
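The arithmetic above, reproduced as a worked example (the $0.01/endpoint-AZ-hour rate is the illustrative figure used in the text; check current regional pricing):

```python
# Worked cost comparison: per-VPC vs centralised interface endpoints.
# RATE is the illustrative figure from the text, not a pricing guarantee.

RATE_PER_ENDPOINT_AZ_HOUR = 0.01   # USD, illustrative
HOURS_PER_MONTH = 730

def monthly_endpoint_cost(vpcs: int, services: int, azs: int) -> float:
    """Hourly endpoint-AZ charges for one month, excluding per-GB data fees."""
    return vpcs * services * azs * RATE_PER_ENDPOINT_AZ_HOUR * HOURS_PER_MONTH

decentralised = monthly_endpoint_cost(vpcs=20, services=5, azs=3)  # 300 endpoint-AZs
centralised = monthly_endpoint_cost(vpcs=1, services=5, azs=3)     # 15 endpoint-AZs

print(f"per-VPC:     ${decentralised:,.2f}/month")   # ~$2,190
print(f"centralised: ${centralised:,.2f}/month")     # ~$110
print(f"savings:     ${decentralised - centralised:,.2f}/month")
```

Note what the function deliberately excludes: per-GB endpoint data charges (paid either way) and TGW data processing (paid only in the centralised pattern), which is why the savings shrink on high-volume workloads.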

Q8: What are the security group requirements for interface endpoints?

Each interface endpoint's ENI has an attached security group. The default is the VPC's default SG (often too permissive — allows all from itself). For production: create a dedicated SG per endpoint that allows inbound TCP 443 only from client subnet CIDRs or client SGs. Outbound rules can be left default. The endpoint SG controls who can reach the endpoint at all; the endpoint policy controls what they can do once connected. Layering: SG limits the network surface, policy limits the action surface. For consumer-side endpoints calling a producer's PrivateLink service, the same SG-on-ENI applies — the consumer controls who in their VPC reaches the endpoint via SG, separate from the producer's allowed principals control.
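A dedicated endpoint SG ingress rule might look like the following — shown as a plain boto3-style parameter dict; the CIDR and description are assumptions of this sketch:

```python
# Illustrative dedicated SG ingress rule for an interface endpoint ENI,
# in the shape boto3's authorize_security_group_ingress accepts.
# The client-subnet CIDR is an assumption for this sketch.

endpoint_sg_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 443,       # AWS service endpoints terminate TLS on 443
    "ToPort": 443,
    "IpRanges": [{
        "CidrIp": "10.0.0.0/20",
        "Description": "application subnets only - not the whole VPC",
    }],
}

# No broader inbound rules: the SG bounds the network surface,
# the endpoint policy bounds the action surface.
print(endpoint_sg_ingress)
```

Scoping the source to client subnets (or better, a client SG reference) rather than 0.0.0.0/0 or the VPC CIDR is the layering point the answer makes: the SG decides who can connect at all.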

Q9: Why do I need an S3 interface endpoint in addition to (or instead of) a gateway endpoint?

Gateway endpoints have two key limitations: (a) not reachable from on-premises over Direct Connect or VPN, because they rely on prefix lists that do not propagate to on-prem; (b) no security group control on the data path, only endpoint policies. S3 interface endpoints (newer feature, 2021) solve both: they are ENI-based with private IPs reachable from on-prem via DX/VPN, and they support security groups for fine-grained access control. Trade-off: interface endpoints are not free (per-AZ per-hour + per-GB cost), so for purely in-VPC S3 access, gateway endpoint remains cheaper. The pattern: gateway endpoint for in-VPC traffic (free, route-based); S3 interface endpoint for on-prem and cross-account traffic (paid, ENI-based). Both can coexist in the same VPC.

VPC peering routes packets by destination IP, which requires unique CIDR space across the two VPCs. If both have 10.0.0.0/16, a packet for 10.0.0.5 could mean either VPC, and routing is ambiguous — peering refuses to be created. PrivateLink works differently. The consumer creates an interface endpoint in their own VPC with an ENI in their own subnet (e.g., 10.0.5.42 in the consumer's VPC). The consumer addresses traffic to this local IP to reach the producer's service. The producer never sees the consumer's CIDR; the producer's NLB sees traffic from the PrivateLink fabric. Even if both have 10.0.0.0/16, each side stays in its own CIDR — no routing collision. This makes PrivateLink the only viable connectivity mechanism for SaaS providers serving customers with arbitrary CIDR spaces, for acquired companies with overlapping spaces during integration, and for multi-tenant labs.
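The addressing argument can be checked directly — a toy illustration using the example IPs from the text:

```python
# Toy illustration: with PrivateLink, CIDR overlap between consumer and
# producer never matters, because the consumer only addresses a local ENI.
import ipaddress

consumer_cidr = ipaddress.ip_network("10.0.0.0/16")
producer_cidr = ipaddress.ip_network("10.0.0.0/16")   # identical: peering refuses
endpoint_eni_ip = ipaddress.ip_address("10.0.5.42")   # lives in the CONSUMER's VPC

# Peering/TGW would be ambiguous: a destination like 10.0.0.5 matches both sides.
assert consumer_cidr.overlaps(producer_cidr)

# But the PrivateLink target is local to the consumer - no foreign route exists.
assert endpoint_eni_ip in consumer_cidr
print(f"consumer sends to {endpoint_eni_ip}; the PrivateLink fabric "
      f"delivers to the producer's NLB without exposing either CIDR")
```

No route table on either side ever carries the other side's prefix, which is the property that makes per-customer CIDR coordination unnecessary for SaaS providers.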

Further Reading

PrivateLink is the service-level connectivity primitive; the next layers on ANS-C01 are Route 53 Resolver hybrid DNS for resolving private endpoint names from on-premises and across accounts; Transit Gateway for the network fabric that complements PrivateLink with multi-account routing; VPC routing primitives for how endpoint route tables interact with TGW and propagated routes; and IaC patterns for declaring all of the above.

Official sources
