examlab.net — The most efficient path to the most valuable certifications.

VPC Architecture, CIDR Design, and Subnetting

5,400 words · ≈ 27 min read

ANS-C01 Domain 1.6 deep dive into VPC architecture and CIDR design: IPv4 sizing, IPv6 dual-stack, secondary CIDRs with RFC 6598 (100.64.0.0/10), AWS IPAM pool hierarchy, VPC sharing via RAM, the five reserved IPs per subnet, BYOIP, the overlapping-CIDR problem and PrivateLink/NAT/TGW workarounds, VPC peering non-transitivity, and IP exhaustion in auto-scaling subnets.

Do 20 practice questions → Free · No signup · ANS-C01

VPC architecture on ANS-C01 is not the conversation it was on SAA-C03. The architect exam asks "what subnet does the database go in?". The Advanced Networking Specialty exam asks "you have 47 accounts in three regions, the original VPC was sized at /24, autoscaling groups have started running out of IPs, and a recent acquisition introduced a VPC with a 10.0.0.0/16 CIDR that overlaps yours — design connectivity in 60 seconds without breaking production". That is a Network Engineer problem that pulls in CIDR sizing rules, the five reserved IPs per subnet, IPv6 dual-stack, RFC 6598 100.64.0.0/10 carrier-grade NAT space as a secondary CIDR, AWS IP Address Manager pool hierarchies, BYOIP, VPC sharing via Resource Access Manager, the non-transitive nature of VPC peering, prefix delegation for ENIs, and the PrivateLink / NAT / Transit Gateway workarounds for overlapping address space — and ANS-C01 routinely tests every one of those moving parts inside a single five-line scenario.

This topic is Domain 1 (Network Design, 30 percent of the exam) Task Statement 1.6 in its entirety. The official ANS-C01 exam guide lists the knowledge bullets verbatim: "Different connectivity patterns and use cases (for example, VPC peering, Transit Gateway, AWS PrivateLink)", "Capabilities and advantages of VPC sharing", and "IP subnets and solutions accounting for IP address overlaps". The skills push you to pick the right inter-VPC service for the requirement, manage IP overlaps, and use VPC sharing in multi-account setups. Roughly 8 to 12 of the 65 exam questions touch this territory, almost always in scenario form rather than pure recall.

Why CIDR Design Is the Foundation of ANS-C01 Domain 1

CIDR design errors compound. A /24 VPC chosen in year one becomes a multi-account, multi-region production blast radius in year three, and remediation costs grow linearly with workload migration scope. The Specialty exam tests this because Network Engineers in the field encounter the consequences daily — IP exhaustion in auto-scaling subnets is a chronic AWS support case category, and overlapping CIDR remediation is the most-asked architecture-review question at AWS re:Invent. ANS-C01 expects you to prevent the problem at design time and unstick it at remediation time.

The mental model the exam rewards is IP address space as a finite organizational resource governed centrally. A solo VPC designed in isolation is the wrong starting point. The right starting point is an AWS IPAM pool hierarchy at the organization root, with regional and operational pools allocating non-overlapping CIDRs to every account, and an explicit reservation of RFC 6598 100.64.0.0/10 as an escape valve for autoscaling overflow. The exam rewards this thinking with full-credit answers; candidates who treat the VPC CIDR as a per-VPC decision get punished on overlapping-CIDR distractors.

Plain-Language Explanation: VPC Architecture and CIDR Design

VPC CIDR planning combines IP arithmetic, AWS-reserved addresses, IPv6 dual-stack mechanics, multi-account governance, and conflict resolution. Three analogies anchor the moving parts.

Analogy 1: The Real-Estate Master Plan

A VPC CIDR is a plot of land in a master-planned city. The CIDR block is the lot boundary — once you fence it, growing it costs surveying and zoning approvals. Subnets are city blocks within the lot, sized to the buildings (workloads) you expect; oversized blocks waste land, undersized blocks force expensive rezoning when an apartment complex (auto scaling group) wants to expand. The five reserved IPs per subnet are the fire hydrant, mailbox, lamp post, transformer, and storm drain the city always installs at the start of a block — you cannot park in their spots. AWS IPAM is the city planning department that hands out non-overlapping plot allocations from a master grid, refusing duplicate addresses and tracking who owns what. Secondary CIDRs are the annex parcels you bolt on when the original lot fills up — attached to the same property but not contiguous on the map. VPC peering is a shared driveway between two adjacent lots — useful for the two neighbors but not extendable to a third (no transitive routing). VPC sharing is the mixed-use development where one owner holds the title while multiple tenants (the participant accounts sharing the subnets) run their businesses inside. The overlapping CIDR problem is an acquired property that uses the same street numbers as your existing block — postal mail (packets) cannot tell which house to deliver to without an intermediary mail forwarder (NAT or PrivateLink).

Analogy 2: The Phone Number Numbering Plan

Think of a CIDR block as the area code in a national phone numbering plan. The /16 is a regional area code with 65,536 phone numbers (IP addresses). The /24 is a small-town exchange with 256 numbers. The /28 is a single neighborhood block with 16 numbers, of which the first (network address), second (VPC router), third (DNS resolver), fourth (reserved for future), and last (broadcast) are taken by the phone company — leaving you 11 usable numbers per /28. IPv6 dual-stack is the migration to a new numbering scheme with effectively unlimited numbers (/56 per VPC, /64 per subnet); old phones (legacy IPv4) and new phones (IPv6) coexist via dual-stack ENIs. AWS IPAM is the national numbering authority allocating area codes to telecoms (accounts) without duplication and reclaiming unused ranges. BYOIP is the port-your-number-from-another-carrier option, where AWS hosts and advertises (announces via BGP) IP space you brought from your own ARIN/RIPE allocation. Overlapping CIDR is two telcos that both got assigned area code 555 — when a customer dials 555-1234, the network has no idea which telco's customer to ring; the resolution is a NAT-equivalent that translates one telco's 555s to a different prefix.

Analogy 3: The Apartment Building With a Front Desk

A VPC is an apartment building, the CIDR is the building's address range, and subnets are the floors. The VPC router (the .1 reserved address) is the building's mail room that knows which floor every package goes to. The AmazonProvidedDNS (the .2 reserved) is the front-desk concierge answering "what's the IP of database.internal?" The five reserved IPs are five building services rooms taken on every floor — you can never rent them. VPC peering is a private hallway between two buildings — convenient for the two endpoint buildings, but residents in a third building cannot use it as a shortcut (non-transitive). Transit Gateway is the central transit hub that connects every building in the city to every other building through one well-routed transit terminal — and it can route between any pair (transitive). VPC sharing is one organisation owning the entire building while different departments rent specific floors and operate them as if they were separate offices. PrivateLink is the pneumatic tube system that connects one apartment directly to a service provider in another building without using the hallways at all — perfect when two buildings have the same address numbering (overlapping CIDR) and a normal hallway connection is impossible.

For ANS-C01, the real-estate master plan is the highest-yield mental model when the question mixes IPAM, account boundaries, and CIDR governance. The phone numbering plan is best when the question hinges on overlapping CIDRs or BYOIP. The apartment building sub-analogy is the easiest way to remember the five reserved IPs and the difference between peering (private hallway) and Transit Gateway (transit hub). Reference: https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html

VPC Fundamentals — CIDR Sizing, Reserved IPs, and Subnet Math

A VPC is a regional, logically isolated virtual network defined by one primary IPv4 CIDR block at creation. The block must be between /16 and /28 in size. Outside those bounds, AWS rejects the request — a /15 is too large, a /29 is too small.

Allowed CIDR ranges and the RFC 1918 default

AWS accepts nearly any IPv4 range (the forbidden blocks are listed under secondary CIDR limitations below), but conventionally you use RFC 1918 private space: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16. Public IPv4 inside a VPC is technically possible — you own a public range and want to use it internally — but rare, and such a range is not internet-routable through the VPC's IGW unless AWS advertises it on your behalf via BYOIP. Most VPCs use 10.x.x.x because the /8 gives the largest planning headroom.

The five reserved IPs per subnet

Every subnet, regardless of size, has five IP addresses reserved by AWS:

  • .0 — network address (subnet identifier).
  • .1 — VPC router (the implicit default gateway every ENI uses for non-local traffic).
  • .2 — AmazonProvidedDNS (the in-VPC DNS resolver, also called the Route 53 Resolver default endpoint).
  • .3 — reserved for future AWS use.
  • .255 (or the last IP in the subnet) — network broadcast (AWS does not actually broadcast, but the address is reserved).

A /28 subnet has 16 addresses, of which 5 are reserved — leaving 11 usable. A /24 has 256, leaving 251. A /22 has 1024, leaving 1019. ANS-C01 expects you to do this math instantly under exam pressure.

A frequent ANS-C01 distractor: a question describes a small workload, the candidate picks /29 or /30 to "save IPs", and the exam answer requires /28 minimum. AWS hardcodes /28 as the smallest allowed subnet — anything narrower is rejected at the API level. The second trap is the eleven-usable count for /28 — candidates who forget the five-reserved rule will compute "16 minus 2" (network + broadcast) and choose a wrong answer. The exam version tests both: "what is the smallest subnet that supports 12 EC2 instances?" — the answer is /27 (32 addresses, 27 usable), not /28 (only 11 usable). Reference: https://docs.aws.amazon.com/vpc/latest/userguide/subnet-sizing.html
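
The subnet math above can be verified in a few lines of Python — a minimal sketch using only the standard library; `usable_ips` and `smallest_subnet_for` are our own helper names, not AWS APIs:

```python
AWS_RESERVED_PER_SUBNET = 5  # .0 network, .1 router, .2 DNS, .3 future, last (broadcast)

def usable_ips(prefix_len: int) -> int:
    """Usable IPv4 addresses in an AWS subnet of the given prefix length."""
    if not 16 <= prefix_len <= 28:
        raise ValueError("AWS subnets must be between /16 and /28")
    return 2 ** (32 - prefix_len) - AWS_RESERVED_PER_SUBNET

def smallest_subnet_for(hosts: int) -> int:
    """Smallest subnet (numerically largest prefix) that fits `hosts` instances."""
    for prefix_len in range(28, 15, -1):  # try /28 first, widen as needed
        if usable_ips(prefix_len) >= hosts:
            return prefix_len
    raise ValueError("does not fit in a single /16")

print(usable_ips(28))           # 11
print(usable_ips(24))           # 251
print(smallest_subnet_for(12))  # 27 — the /28 trap: 16 - 5 = 11 < 12
```

Note the loop counts down from 28: textbook "2^n minus 2" math gives the wrong answer on AWS because five addresses, not two, are reserved.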

Subnet sizing trade-offs

Larger subnets (e.g. /22) give headroom for growth but waste address space when the pattern is repeated across multiple AZs and accounts. Smaller subnets (e.g. /27) conserve space but expose you to IP exhaustion under autoscaling bursts — when an EKS scale-up tries to land 200 pods in a /28 with 11 usable IPs, ENI allocation fails immediately.

The pragmatic balance is /24 per subnet (251 usable) with three or four subnets per AZ for tier separation (public, private, isolated, intra). For high-density EKS or container workloads, /22 or /21 per subnet is appropriate, especially when prefix delegation assigns /28 chunks of IPv4 or IPv6 to each ENI.

IPv6 Dual-Stack and IPv6-Only Subnets

IPv6 in AWS is a separate parallel address plane. A VPC can be IPv4-only, dual-stack (IPv4 + IPv6), or IPv6-only (a newer AWS option). On ANS-C01, expect questions that probe the difference between dual-stack and IPv6-only deployments and the implications for egress.

IPv6 CIDR allocation

AWS assigns each VPC a /56 IPv6 CIDR from an Amazon-owned pool (or your BYOIPv6 pool). Each subnet gets a /64 carved from that /56 — meaning a single subnet has 18 quintillion IPv6 addresses, and a single VPC can carve out 256 distinct /64 subnets.

IPv6 has no direct equivalent of the five-reserved-IPs rule — AWS reserves the first four addresses and the last address in each /64 — but with 2^64 addresses per subnet the arithmetic never constrains a design.
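
The /56-to-/64 carving is easy to demonstrate with Python's `ipaddress` module — a sketch using the RFC 3849 documentation prefix as a stand-in for an Amazon-assigned range:

```python
import ipaddress

# Hypothetical VPC /56 (2001:db8::/32 is the IPv6 documentation range)
vpc_v6 = ipaddress.ip_network("2001:db8:1234:ab00::/56")

# Each subnet takes a /64 carved from the VPC's /56: 2^(64-56) = 256 subnets
subnets = list(vpc_v6.subnets(new_prefix=64))
print(len(subnets))              # 256
print(subnets[0])                # 2001:db8:1234:ab00::/64
print(subnets[0].num_addresses)  # 18446744073709551616 == 2**64
```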

Egress-only Internet Gateway

In AWS, every VPC IPv6 address is a globally unique address (GUA) — there is no NAT-style private/public split. To prevent inbound traffic from the internet while allowing outbound, AWS provides the Egress-only Internet Gateway (EIGW). EIGW is the IPv6 analogue of outbound NAT for IPv4: stateful, allows outbound plus return traffic, denies unsolicited inbound. AWS does not offer NAT for IPv6 because the addresses are already publicly routable and translation is unnecessary.

IPv6-only subnets

An IPv6-only subnet has no IPv4 CIDR. Workloads in it have no IPv4 address. To talk to IPv4-only AWS services or the IPv4 internet, you need DNS64 + NAT64 — Route 53 Resolver synthesises AAAA records for IPv4-only destinations, and the NAT Gateway in NAT64 mode translates IPv6 → IPv4 on the wire. This is the modern AWS-recommended pattern for greenfield IPv6 deployments.

A subtle ANS-C01 trap: enabling IPv6 on an existing IPv4 subnet adds the dual-stack capability but does not change the IPv4 internet egress path — you still need a NAT Gateway for IPv4 outbound. Conversely, an IPv6-only subnet cannot reach IPv4 destinations without DNS64 + NAT64. Candidates who assume "we have IPv6 now, so we don't need NAT" lose points on questions about hybrid IPv4/IPv6 egress design. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html

Secondary CIDR Blocks and RFC 6598 100.64.0.0/10

When a VPC is running out of IPv4 addresses, you have three choices: rebuild the VPC with a larger CIDR (high migration cost), resize existing subnets (impossible — subnet CIDRs are immutable), or add a secondary CIDR.

Adding a secondary CIDR

A VPC can have up to five IPv4 CIDR blocks by default (raisable to fifty via Service Quotas). Add a secondary block via associate-vpc-cidr-block. The new block can be from the same RFC 1918 supernet or from a non-overlapping range — a common pattern is to use the original 10.0.0.0/16 as the primary and add 100.64.0.0/16 (RFC 6598 carrier-grade NAT space) as the secondary.

Why RFC 6598 100.64.0.0/10

RFC 6598 carved out 100.64.0.0/10 for shared address space — it is non-routable on the public internet, not in conflict with any RFC 1918 range, and rarely used in enterprise on-prem environments. AWS specifically supports it in VPCs as the canonical "we ran out of 10.x.x.x and need more" range. Use cases:

  • EKS pods on prefix-delegation that consume IPs faster than your /16 can sustain.
  • Lambda with high concurrency in VPC mode.
  • Auto Scaling groups that burst to thousands of instances.

The secondary CIDR's subnets work like primary CIDR subnets — same routing rules, same security groups, same egress paths.

Secondary CIDR limitations

Some restrictions apply: the secondary CIDR cannot overlap with the primary, cannot overlap with peered VPCs, and cannot be from a range that overlaps with an existing route in any TGW route table the VPC attaches to. CIDR blocks 169.254.0.0/16 (link-local), 224.0.0.0/4 (multicast), and 240.0.0.0/4 (reserved) are forbidden.
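
A client-side sketch of these checks with Python's `ipaddress` module — `can_associate` is a hypothetical helper, not an AWS API; the authoritative validation happens server-side when you call associate-vpc-cidr-block:

```python
import ipaddress

# Ranges AWS refuses as VPC CIDR blocks (per the limitations above)
FORBIDDEN = [
    ipaddress.ip_network("169.254.0.0/16"),  # link-local
    ipaddress.ip_network("224.0.0.0/4"),     # multicast
    ipaddress.ip_network("240.0.0.0/4"),     # reserved
]

def can_associate(candidate: str, existing_cidrs: list) -> bool:
    """Reject a candidate secondary CIDR that falls in a forbidden range or
    overlaps the VPC's existing blocks (include peered-VPC CIDRs in
    `existing_cidrs` to model the peering restriction too)."""
    cand = ipaddress.ip_network(candidate)
    if any(cand.overlaps(f) for f in FORBIDDEN):
        return False
    return not any(cand.overlaps(ipaddress.ip_network(c)) for c in existing_cidrs)

print(can_associate("100.64.0.0/16", ["10.0.0.0/16"]))  # True — RFC 6598 escape valve
print(can_associate("10.0.128.0/17", ["10.0.0.0/16"]))  # False — overlaps primary
print(can_associate("224.1.0.0/16", ["10.0.0.0/16"]))   # False — multicast
```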

  • VPC CIDR: /16 to /28; primary at creation; up to 5 CIDR blocks per VPC (50 with quota increase).
  • Subnet: /16 to /28; sized within VPC CIDR; 5 reserved IPs per subnet always.
  • /28 = 11 usable, /27 = 27 usable, /24 = 251 usable, /22 = 1019 usable.
  • RFC 6598 100.64.0.0/10: secondary CIDR escape valve for IP exhaustion.
  • IPv6: /56 per VPC, /64 per subnet, no NAT for IPv6 (use Egress-only IGW).
  • NAT64 + DNS64 for IPv6-only subnets reaching IPv4 destinations.
  • VPC peering is non-transitive; TGW is transitive.
  • Reference: https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html

AWS IPAM — IP Address Manager for Multi-Account Governance

Amazon VPC IP Address Manager (IPAM) is the AWS-native control plane for IP address allocation across an organization. On ANS-C01 it is the canonical answer to "we have 50 accounts in 3 regions and we need to ensure no two VPCs share a CIDR".

Pool hierarchy

IPAM models address space as a hierarchy of pools:

  • Top-level pool at the organization scope (e.g. 10.0.0.0/8) — the master allocation.
  • Regional pools carved from the top-level (e.g. 10.0.0.0/12 for us-east-1, 10.16.0.0/12 for eu-west-1).
  • Operational pools carved from regional (e.g. 10.0.0.0/16 for production, 10.1.0.0/16 for development).
  • Account-scoped allocations drawn from operational pools — when an account creates a VPC, it requests a CIDR from the operational pool and IPAM hands out a non-overlapping block.
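
The carving above can be sketched with Python's `ipaddress` module — illustrative only; real IPAM pools are created through the IPAM APIs, not client-side math:

```python
import ipaddress

# Hypothetical hierarchy mirroring the pools above: org /8 → regional /12 → operational /16
top = ipaddress.ip_network("10.0.0.0/8")
regional = list(top.subnets(new_prefix=12))           # 16 regional pools
us_east_1, eu_west_1 = regional[0], regional[1]
print(us_east_1, eu_west_1)                           # 10.0.0.0/12 10.16.0.0/12

operational = list(us_east_1.subnets(new_prefix=16))  # 16 operational /16s per region
print(operational[0], operational[1])                 # 10.0.0.0/16 10.1.0.0/16

# Non-overlap is guaranteed by construction: every child is carved from its
# parent, never invented — the property IPAM enforces at allocation time
assert not operational[0].overlaps(operational[1])
```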

Allocation rules and locale

Each pool can have allocation rules: minimum and maximum allowed CIDR length, required CIDR length, allowed CIDRs to allocate from. Locale binds a pool to a specific AWS Region — VPCs in that region must request from a pool with the matching locale.

Compliance monitoring

IPAM continuously monitors VPCs and surfaces compliance events — for example, "VPC X uses a CIDR not allocated from any pool" or "VPC Y's CIDR overlaps VPC Z's". This is the audit story for "prove no two VPCs in the org overlap" requirements common in regulated industries.

Cross-account sharing via RAM

The top-level pool is created in a centralized account (typically the network account) and shared via AWS Resource Access Manager (RAM) to all member accounts. Member accounts request CIDRs through the pool API; the pool tracks who got what. This is the AWS Security Reference Architecture pattern for IP governance and is heavily tested on the Specialty exam.

A common ANS-C01 distractor: a multi-account scenario asks for non-overlapping CIDR governance, and the answer choices include "track CIDRs in a Confluence wiki", "use AWS Config rules to detect overlap", and "use IPAM with cross-account RAM share". Wiki spreadsheets do not enforce; Config detects after-the-fact; IPAM enforces at the API level by refusing to allocate overlapping blocks. The exam favors IPAM because it is the only purpose-built, scaled-out, write-time enforcement option. Reference: https://docs.aws.amazon.com/vpc/latest/ipam/what-it-is-ipam.html

VPC Sharing via AWS Resource Access Manager

VPC sharing lets one AWS account (the owner) share a VPC's subnets with other accounts (the participants) so that participants can launch resources (EC2, RDS, Lambda) directly into the owner's subnets. The participants do not own the VPC, do not control routing, and cannot create new subnets — they only consume capacity inside the shared subnets.

Owner vs participant model

The owner account controls the VPC: CIDR, route tables, security groups, NACLs, IGW, NAT gateways, VPC peering connections, Transit Gateway attachments. The participant account controls the resources it launches into shared subnets: EC2 instances, RDS clusters, Lambda functions. The participant can also create its own security groups inside the shared VPC.

Why VPC sharing instead of one VPC per account

Pre-sharing, the typical pattern was "one VPC per account, peered or TGW-connected to the others". This wastes IP space (every VPC needs its own CIDR) and explodes administrative cost (every VPC has its own NAT, its own route tables, its own egress path). VPC sharing collapses 50 VPCs into 1 large shared VPC owned by the network account, with 50 participant accounts launching workloads inside — saving NAT cost, simplifying egress, and concentrating IP planning.

Sharing prerequisites

VPC sharing requires AWS Organizations enabled with all features. Subnets are shared via RAM. The owner can revoke sharing at any time, immediately preventing new resource launches but not terminating existing participant resources — those remain until the participant terminates them.

A canonical ANS-C01 design pattern: 100-account organisation with each account previously running a /24 VPC. Total IP consumption: 100 × 256 = 25,600 IPs. Migrate to a single /16 shared VPC owned by the network account: total IP consumption capped at 65,536 with full subnet flexibility for all 100 participants. The exam will reward this answer for "reduce IP-space sprawl across accounts" — the alternative answers (Transit Gateway hub, VPC peering mesh) do not save IPs, only routes. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html

VPC Peering — Same-Region, Inter-Region, and Non-Transitive Routing

VPC peering establishes a private connection between two VPCs in the same or different regions and accounts. Peering is non-transitive: if VPC A is peered with VPC B and VPC B is peered with VPC C, A cannot reach C through B. Each peering is point-to-point.

Same-region vs inter-region

Same-region peering carries standard intra-region data-transfer pricing (free within an AZ, per-GB across AZs); inter-region peering adds per-GB inter-region transfer fees and extra latency. AWS encrypts inter-region peering traffic in transit, and same-region traffic never leaves the AWS backbone — but neither variant exposes a user-configurable IPsec layer.

Peering route table configuration

Peering does not auto-populate route tables. Both sides must manually add routes pointing to the peer's CIDR via the peering connection ID. This is a chronic source of "I peered them but they cannot talk" troubleshooting cases.
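
As a data-shape sketch (field names are hypothetical, not a real AWS API payload), the two static routes a peering needs look like this:

```python
def peering_routes(vpc_a_cidr: str, vpc_b_cidr: str, pcx_id: str) -> list:
    """Each side must add a static route to the *other* VPC's CIDR via the
    peering connection ID; nothing is propagated automatically."""
    return [
        {"route_table": "vpc-a-main", "destination": vpc_b_cidr, "target": pcx_id},
        {"route_table": "vpc-b-main", "destination": vpc_a_cidr, "target": pcx_id},
    ]

for route in peering_routes("10.0.0.0/16", "172.16.0.0/16", "pcx-0abc123"):
    print(route)
```

Forgetting either entry produces the classic one-way-blackhole symptom: the peering shows active, but traffic in one direction never returns.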

CIDR overlap forbidden

Two VPCs with overlapping CIDRs cannot be peered — AWS rejects the peering at create time. The workaround is PrivateLink, NAT, or a CIDR refactor.

Peering scaling limits

A single VPC supports 125 active peering connections. Beyond that, Transit Gateway is the answer. The full-mesh peering of 50 VPCs requires 50 × 49 / 2 = 1225 connections — administratively prohibitive. TGW collapses this to 50 attachments and one route table.
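
A one-liner makes the scaling argument concrete (plain Python, no AWS dependency):

```python
def full_mesh_peerings(n_vpcs: int) -> int:
    """Peering connections needed for a full mesh of n VPCs: n(n-1)/2."""
    return n_vpcs * (n_vpcs - 1) // 2

print(full_mesh_peerings(5))   # 10 — manageable
print(full_mesh_peerings(50))  # 1225 — prohibitive; use Transit Gateway
```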

A frequent ANS-C01 distractor: a candidate sets up peering, enables route propagation on the route table (which is a TGW/VGW feature, not a peering feature), and is surprised when traffic doesn't flow. Peering routes are always static, always manual — propagation does not apply. The exam version: "What does the operator need to do to enable connectivity after the peering is accepted?" Answer: add static routes in both VPCs' route tables pointing to the peering connection. Reference: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html

Overlapping CIDRs — Four Workarounds

Two VPCs with overlapping CIDRs cannot peer, cannot both route their full CIDRs through the same TGW route table, and cannot route to each other directly. The Specialty exam tests four workarounds, in order of preference.

Workaround 1: AWS PrivateLink

AWS PrivateLink with a service-provider model sidesteps the overlap entirely. The provider VPC publishes a service behind an NLB; the consumer VPC creates an interface endpoint pointing at that service. The endpoint ENI lives inside the consumer VPC's own CIDR — no peering, no shared address space — so the consumer never needs to know the provider's internal IPs. This is the cleanest answer when the requirement is one-way service consumption (e.g. consuming an API).

Workaround 2: NAT-based redirect

Each VPC NATs the other's traffic into a non-overlapping range. VPC A's traffic to VPC B is NATted (e.g. via a NAT instance or appliance) to appear as traffic from a non-overlapping CIDR. Bidirectional NAT works but introduces statefulness and breaks end-to-end IP visibility — diagnostics and security telemetry become harder.
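
The address arithmetic behind a static 1:1 NAT can be sketched in Python — an illustration of the mapping, not a working NAT appliance:

```python
import ipaddress

def translate(host_ip: str, real_net: str, presented_net: str) -> str:
    """Re-base a host address from its real (overlapping) CIDR onto the
    non-overlapping CIDR the NAT appliance presents to the other VPC:
    keep the host offset, swap the network prefix."""
    real = ipaddress.ip_network(real_net)
    presented = ipaddress.ip_network(presented_net)
    assert real.prefixlen == presented.prefixlen, "1:1 NAT needs equal-sized nets"
    offset = int(ipaddress.ip_address(host_ip)) - int(real.network_address)
    return str(ipaddress.ip_address(int(presented.network_address) + offset))

# VPC B's real 10.0.0.0/16 appears to VPC A as 100.64.0.0/16
print(translate("10.0.1.5", "10.0.0.0/16", "100.64.0.0/16"))  # 100.64.1.5
```

The stateless, deterministic mapping is what makes the translation bidirectional — but note that logs and flow records on each side now show the presented address, which is exactly the telemetry loss described above.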

Workaround 3: Transit Gateway with route table tricks

TGW route tables can use route table associations and propagations to direct traffic between overlapping VPCs through an intermediate inspection or NAT VPC that does the address translation. This is more flexible than direct peering but requires careful route table design.

Workaround 4: Restructure CIDR

The brute-force option: re-IP one of the overlapping VPCs. Expensive (workload migration cost) but eliminates the problem permanently. Use AWS IPAM to allocate a fresh non-overlapping range and migrate.

ANS-C01 favors PrivateLink for overlapping-CIDR scenarios when the requirement is one-way service consumption (consumer calls a service in another VPC). PrivateLink is uniquely suited because the consumer VPC never sees the provider's CIDR — only the endpoint ENI's IP, which lives in the consumer's address space. Bidirectional or full-mesh connectivity over overlapping CIDRs requires NAT or restructure. The exam will distinguish "one-way service" (PrivateLink) from "general bidirectional connectivity" (NAT/restructure). Reference: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html

BYOIP — Bring Your Own IP Addresses

Bring Your Own IP (BYOIP) lets you advertise public IP ranges you own (from ARIN, RIPE, APNIC, etc.) on AWS, so that AWS hosts and announces those addresses on your behalf. BYOIP is the answer when scenarios mention "we own a public IP block we want to keep", "regulatory or branding requires our specific IP range", or "we are migrating from on-prem and don't want to change DNS-mapped IPs".

BYOIP requirements

You must own the IP range and be authorised by the regional registry. You provide a Route Origin Authorization (ROA) to ARIN/RIPE allowing AWS to announce. Minimum CIDR: /24 for IPv4 and /48 for IPv6. Smaller blocks are not advertisable on the global BGP mesh.

BYOIP use cases

  • Elastic IP from BYOIP pool — assigned to a NAT Gateway, an NLB, or an EC2 instance, replacing AWS-allocated EIPs.
  • VPC primary CIDR from BYOIP — your owned IP space inside the VPC; combined with EIP from BYOIP pool, on-prem DNS pointing to your IP range continues to work without DNS migration during cloud cutover.
  • MACsec on Direct Connect uses your owned IP for the peering — see Direct Connect topic.

BYOIP and MACsec

Some Direct Connect designs pair MACsec-encrypted high-speed (100 Gbps) connections with customer-owned public peering addresses. This is a Specialty-tier integration on ANS-C01 — see the Direct Connect topic for the full pattern.

Multi-Account VPC Patterns — Hub-and-Spoke vs Full-Mesh

The two canonical multi-account VPC topologies are hub-and-spoke (with TGW) and full-mesh (with peering). The Specialty exam expects you to choose between them based on number of VPCs, communication patterns, and operational complexity.

Hub-and-spoke with TGW

Every VPC attaches to a central TGW. TGW route tables segment the spokes — production spokes only see other production spokes, dev only sees dev, shared services see all. One attachment per spoke, central control of routing, central logging via TGW Network Manager. Scales to thousands of VPCs.

Full-mesh peering

Every pair of VPCs is peered. Optimal for low VPC count (5 or fewer) and high inter-VPC communication. Above 10 VPCs, the mesh becomes administratively prohibitive — the management surface grows quadratically, at n(n − 1)/2 connections.

Decision rule

  • <5 VPCs: peering is fine, no TGW cost.
  • 5–10 VPCs with selective communication: peering still defensible if traffic patterns are sparse.
  • >10 VPCs or any organization-scale architecture: TGW is the answer, full stop.

The Specialty exam typically gives scenarios with 20+ VPCs and expects TGW. If the answer choices include peering for 50 VPCs, that is the distractor.

Elastic IPs, Secondary ENIs, and Prefix Delegation

Elastic IP allocation and limits

Elastic IPs (EIPs) are static public IPv4 addresses that can be re-mapped between resources. Default limit: 5 EIPs per Region (raisable). The canonical "EIP idle charge" trap: EIPs historically billed hourly only while not associated with a running instance — though since February 2024 AWS charges hourly for every public IPv4 address, attached or not.

Secondary ENIs and alias IPs

An EC2 instance can have multiple secondary ENIs attached. Each ENI gets a primary private IP from the subnet and can have additional secondary private IPs. Secondary ENIs are used for: multi-tenant routing on a single instance, IP-based licensed software, and high-availability ENI failover patterns.

Prefix delegation for ENIs

Prefix delegation assigns /28 IPv4 prefixes or /80 IPv6 prefixes to ENIs instead of individual addresses. This is the modern AWS pattern for high-density EKS workloads — each ENI gets 16 IPv4 addresses or 2^48 IPv6 addresses without needing 16 separate secondary-IP allocations. Prefix delegation is the standard pattern for EKS pods at scale on Nitro instances.
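
The per-prefix address counts fall straight out of the exponent math (plain Python):

```python
# Address counts behind the prefix-delegation figures above
ipv4_per_prefix = 2 ** (32 - 28)    # a /28 IPv4 prefix delegated to an ENI
ipv6_per_prefix = 2 ** (128 - 80)   # a /80 IPv6 prefix delegated to an ENI
print(ipv4_per_prefix)  # 16
print(ipv6_per_prefix)  # 281474976710656 == 2**48
```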

Common Traps Recap — VPC Architecture and CIDR Design on ANS-C01

Trap 1: /29 and /30 subnets are allowed

Wrong. Minimum subnet size is /28. AWS rejects anything narrower at API time.

Trap 2: Three reserved IPs per subnet

Wrong. Five IPs are reserved per subnet (network, .1 router, .2 DNS, .3 future, last broadcast).

Trap 3: VPC peering is transitive when both ends authorise it

Wrong. Peering is never transitive. A → B → C does not work even with all peering accepted.

Trap 4: Peering routes auto-populate when propagation is enabled

Wrong. Peering routes are always static. Route propagation applies only to VGW and TGW.

Trap 5: NAT Gateway is needed for IPv6 outbound

Wrong. IPv6 uses Egress-only Internet Gateway, not NAT Gateway. NAT64 (a different feature) translates IPv6 → IPv4.

Trap 6: Two VPCs with overlapping 10.0.0.0/16 can use TGW directly

Wrong. TGW route tables cannot resolve overlap directly — you need NAT, PrivateLink, or restructure.

Trap 7: A VPC supports unlimited secondary CIDRs

Wrong. Default is 5 CIDR blocks per VPC (raisable to 50).

Trap 8: BYOIP allows advertising any size block

Wrong. Minimum is /24 for IPv4 and /48 for IPv6 — anything smaller is not BGP-advertisable globally.

Trap 9: VPC sharing requires Direct Connect or peering

Wrong. VPC sharing works via RAM and requires AWS Organizations — no network-layer setup is needed.

Trap 10: 100.64.0.0/10 is publicly routable like 1.1.1.0/24

Wrong. 100.64.0.0/10 is RFC 6598 carrier-grade NAT space — non-routable on the public internet, safe to use as secondary CIDR.

Trap 11: IPAM works without AWS Organizations

Partial. IPAM works in a single account, but the cross-account governance use case (the canonical exam scenario) requires AWS Organizations + RAM sharing.

Trap 12: Subnets can be resized after creation

Wrong. Subnet CIDRs are immutable. To resize, you must create a new subnet, migrate workloads, and delete the old one.


FAQ — VPC Architecture and CIDR Design on ANS-C01

Q1: What is the smallest subnet that can host 14 EC2 instances?

A /28 has 16 addresses, of which 5 are reserved by AWS — leaving 11 usable. That is not enough for 14 instances. The next size up, /27, has 32 addresses with 27 usable — comfortably enough. ANS-C01 frequently includes off-by-one questions where the candidate must remember the 5-reserved rule (not the 2-reserved rule from textbook subnet math) and pick the right subnet size. Memorise the table: /28 = 11, /27 = 27, /26 = 59, /25 = 123, /24 = 251.

Q2: When do I add a secondary CIDR vs restructure the VPC?

Add a secondary CIDR when (a) the existing VPC is in production and migration cost is high, (b) the IP exhaustion is in specific autoscaling subnets and you can spin up new, larger subnets in a secondary block, or (c) the original CIDR plan was reasonable but a single growth pattern blew through it. Restructure when (a) the original CIDR was poorly planned (e.g. /24 for an entire production VPC) and exhaustion will recur, (b) the secondary-CIDR option cannot provide enough headroom (you have already added five blocks), or (c) you need to harmonize with a new IPAM-managed enterprise plan. ANS-C01 scenarios typically point to a 100.64.0.0/16 secondary CIDR as the answer; restructure wins when the exam mentions "long-term" or "strategic" CIDR governance.

Q3: Why do I need both an Internet Gateway and a NAT Gateway?

The IGW is the VPC's connection to the public internet — it allows public IP addresses (EIPs and auto-assigned public IPs) to send and receive traffic. Resources in public subnets route directly to the IGW. Resources in private subnets have no public IP, cannot use the IGW directly, and need a NAT Gateway (in a public subnet) to mask their private IPs as the NAT GW's EIP for outbound-only traffic. NAT GW is stateful — return traffic to outbound-initiated connections is allowed; unsolicited inbound is denied. ANS-C01 distinguishes "public subnet" (route to IGW) from "private subnet" (route to NAT GW for outbound) sharply — picking the wrong one is a recurring distractor.

Q4: How does VPC sharing differ from VPC peering for multi-account setups?

VPC sharing is one VPC owned by one account, with subnets shared into multiple participant accounts via RAM — participants launch resources directly into the shared subnets and consume capacity from the shared CIDR. VPC peering is two separate VPCs (one per account) connected via a peering link, with each account owning its own CIDR, route tables, and resources. Sharing saves IP space (one VPC instead of N) and operational cost (one set of NAT GWs, one route table). Peering preserves account isolation (each account fully owns its VPC) but multiplies IP consumption and operational surface. For 50+ accounts in a single environment, VPC sharing is the modern best practice; peering is legacy. ANS-C01 favors VPC sharing as the answer for "reduce sprawl" and "centralize networking" scenarios.

Q5: What is the single biggest CIDR design mistake to avoid?

Choosing a /24 (256 addresses) for a primary VPC CIDR. Five reserved IPs per subnet, three or four AZ subnets, growth headroom, and EKS pod density together consume a /24 within months. Once production workloads are in, restructuring to a /16 requires expensive migration. The Network Engineer Specialty answer: start every VPC at /16, the largest CIDR AWS allows (65,536 addresses), even if 99% sits unused, because the IP cost is zero and the future flexibility is enormous. AWS IPAM lets you allocate /16s from a /8 master pool with no waste, since unallocated space stays in the pool.
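The "no waste" claim is just prefix arithmetic. A sketch with `ipaddress` showing /16 VPC allocations carved from a 10.0.0.0/8 pool (IPAM does this bookkeeping for you; the variable names are illustrative):

```python
import ipaddress

pool = ipaddress.ip_network("10.0.0.0/8")

# Carve the pool into sequential /16 allocations, one per VPC.
vpc_cidrs = list(pool.subnets(new_prefix=16))
print(len(vpc_cidrs))               # 256 possible /16 VPCs in one /8
print(vpc_cidrs[0])                 # 10.0.0.0/16
print(vpc_cidrs[1])                 # 10.1.0.0/16
print(vpc_cidrs[0].num_addresses)   # 65536 addresses per VPC
```

Even an organization allocating a /16 per VPC per region has room for 256 such allocations before the /8 is exhausted.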

Q6: Can I peer VPCs across regions and across accounts?

Yes: both inter-region peering and cross-account peering are supported. Inter-region peering charges per-GB inter-region transfer fees but enables direct VPC-to-VPC traffic without internet transit. Cross-account peering requires the requester account to send a peering request and the accepter account to accept it; both ends must add routes manually. The combination (inter-region plus cross-account) is supported in a single peering connection. ANS-C01 favors Transit Gateway over VPC peering when more than two VPCs are involved or when transit semantics are needed; VPC peering is fine for two-VPC point-to-point use cases.

Q7: What is the practical limit on VPC peering for full-mesh topology?

The hard limit is 125 active peerings per VPC (the default quota is 50, raisable to 125). The practical limit, administrative manageability, is much lower: a full mesh of even 5 to 10 VPCs becomes unmaintainable because n × (n − 1) / 2 connections grows quadratically. At 10 VPCs that is 45 peerings; at 20 it is 190; at 30 it is 435. Migration to TGW is the expected move at 10+ VPCs in any modern enterprise. ANS-C01 will give scenarios with 20+ VPCs and expect TGW; full-mesh peering at that scale is always the wrong answer.
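The quadratic growth is worth seeing explicitly:

```python
def full_mesh_peerings(n: int) -> int:
    """Peering connections needed for a full mesh of n VPCs: n choose 2."""
    return n * (n - 1) // 2

for n in (5, 10, 20, 30):
    print(n, full_mesh_peerings(n))
# 5 10
# 10 45
# 20 190
# 30 435
```

Note the per-VPC view too: in an n-VPC mesh each VPC carries n − 1 peerings, so the 125-per-VPC quota itself caps a full mesh at 126 VPCs long after manageability has collapsed.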

Q8: Why does AWS reserve five IPs per subnet rather than the textbook two?

Two of the five (network, broadcast) are standard subnet conventions. The other three are AWS-specific:

  • .1 — VPC implicit router (every ENI's default gateway).
  • .2 — AmazonProvidedDNS resolver (the in-VPC DNS).
  • .3 — reserved for future AWS use (not currently allocated to a known service).

These are AWS infrastructure addresses that cannot be reassigned, and the reservation applies to every subnet regardless of size. ANS-C01 expects you to bake it into every subnet-sizing calculation.
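For any given subnet, the five reserved addresses are mechanical to enumerate. A sketch with `ipaddress` (the function name is illustrative):

```python
import ipaddress

def reserved_ips(cidr: str) -> dict[str, str]:
    """The five AWS-reserved addresses in a subnet."""
    net = ipaddress.ip_network(cidr)
    base = net.network_address
    return {
        "network":   str(base),                   # standard network address
        "router":    str(base + 1),               # VPC implicit router
        "dns":       str(base + 2),               # AmazonProvidedDNS resolver
        "future":    str(base + 3),               # reserved for future AWS use
        "broadcast": str(net.broadcast_address),  # AWS does not support broadcast
    }

print(reserved_ips("10.0.1.0/24"))
# {'network': '10.0.1.0', 'router': '10.0.1.1', 'dns': '10.0.1.2',
#  'future': '10.0.1.3', 'broadcast': '10.0.1.255'}
```

Everything between .4 and .254 in that /24 is assignable, which is where the 251-usable figure comes from.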

Q9: How do I size a VPC for an EKS cluster with high pod density?

EKS pods consume VPC IP addresses: by default, each pod gets a secondary IP from one of the node's ENIs. With prefix delegation (Nitro instances only), each ENI slot that would hold a single secondary IP instead holds a /28 prefix (16 IPs), multiplying per-node pod capacity by roughly 16x; depending on instance size that works out to tens to hundreds of pod IPs per node. Each node itself also consumes one primary IP per ENI. Subnet sizing rule of thumb: 4 nodes × 100 pods = 400 IPs minimum; round up to /23 (512 addresses, 507 usable). For org-scale EKS, allocate a /20 or /19 per cluster subnet. ANS-C01 scenarios often involve EKS density; a secondary CIDR from 100.64.0.0/10 (e.g. a 100.64.0.0/16 block) is the canonical answer.
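The per-node capacity math is easy to sketch. The formula below assumes prefix delegation (each secondary-IP slot becomes a /28) and reserves one primary IP per ENI for the node; the example ENI and IP counts are hypothetical stand-ins, so check the real per-instance-type ENI/IP limits table before sizing:

```python
import math

def pod_ip_capacity(enis: int, ips_per_eni: int) -> int:
    """Pod IP capacity per node under prefix delegation: every slot that
    would hold a secondary IP holds a /28 prefix (16 IPs) instead; one
    primary IP per ENI belongs to the node itself."""
    return enis * (ips_per_eni - 1) * 16

def nodes_needed(pods: int, enis: int, ips_per_eni: int) -> int:
    """Node count from IP capacity alone (ignores CPU/memory limits)."""
    return math.ceil(pods / pod_ip_capacity(enis, ips_per_eni))

# Hypothetical instance limits: 3 ENIs, 10 IPs per ENI.
print(pod_ip_capacity(enis=3, ips_per_eni=10))      # 432 pod IPs per node
print(nodes_needed(1000, enis=3, ips_per_eni=10))   # 3 nodes, by IP capacity only
```

The point for subnet sizing is that pod IP demand, not node count, dominates the calculation once prefix delegation is on.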

Q10: When does BYOIP make sense vs using AWS-allocated IPs?

BYOIP makes sense when (a) you own a public IP block already allocated by ARIN/RIPE that customers, partners, or compliance regimes know about, (b) you are migrating from on-prem and DNS-mapped IPs cannot change during cutover, or (c) regulatory or contract terms require a specific IP range. AWS-allocated IPs are fine for greenfield workloads where IP pinning is not required: they are simpler, need no ROA setup, and AWS handles the BGP advertisement. The typical ANS-C01 BYOIP scenario reads "we are migrating workload X from data center Y and existing customers point DNS at our IP block"; the answer is BYOIP plus EIPs allocated from the BYOIP pool, attached to the same workload roles (NAT GW, ELB, EC2). For greenfield, BYOIP is the wrong answer.

Once VPC architecture is in place, the natural next operational layers on ANS-C01 are: Transit Gateway routing and attachments for hub-and-spoke at scale; PrivateLink, VPC endpoints, and endpoint policies for private service consumption (especially across overlapping CIDR); VPC routing — longest prefix match, propagation, and blackhole routes for routing decisions; and network performance — ENA, EFA, placement groups, and jumbo frames for tuning the throughput inside the VPCs you've architected.
