Transit Gateway on ANS-C01 is the multi-VPC fabric question. While SAA-C03 treats Transit Gateway as "the thing that connects VPCs", the Advanced Networking Specialty exam asks "given a TGW with three VPC attachments, two VPN attachments, one DX Transit VIF attachment, and one peering attachment to a TGW in another region — design route tables to enforce prod/non-prod isolation, ensure stateful inspection is symmetric, and provide centralised egress through a shared NAT VPC, while sharing the TGW across 12 AWS accounts via Resource Access Manager". That is a multi-attachment, multi-route-table, multi-account design problem and it is core ANS-C01 territory.
This topic covers the full Transit Gateway primitive surface: attachments (VPC, VPN, Direct Connect, peering, Connect), route tables with associations and propagations, appliance mode for symmetric flows through inspection VPCs, TGW Connect with GRE+BGP for SD-WAN integration, multicast domains for video/financial workloads, inter-region peering (static-only, no BGP), AWS Resource Access Manager for cross-account sharing, and the canonical centralised-egress and inspection-VPC patterns. Mapped to Task Statement 2.2 (Implement routing and connectivity across multiple AWS accounts, Regions, and VPCs).
Why Transit Gateway Is the Centre of Gravity for Multi-VPC ANS-C01
Domain 2 (26 percent of the exam) and Domain 1 task 1.6 both lean heavily on TGW. Modern AWS multi-VPC architecture is hub-and-spoke with TGW as the hub, and ANS-C01 expects you to design every aspect: which spokes go in which TGW route table, when to enable appliance mode, when to use TGW Connect vs Site-to-Site VPN for SD-WAN, when peering is preferable to TGW for cost reasons, and how to scale across regions. Expect 5-7 questions on TGW configuration in scenario form.
The framing across this topic is segmentation through route tables. A TGW has one or more route tables; each attachment is associated with exactly one TGW route table (controlling its lookup table) and propagates its own routes to one or more TGW route tables (controlling visibility from other attachments). By creating multiple route tables (prod, non-prod, shared, inspection) and choosing associations and propagations carefully, you can enforce who reaches whom without relying on subnet-level security. Misconfigured propagation is the most frequent design bug — the exam tests it directly.
Plain-Language Explanation: Transit Gateway
TGW is a transit fabric that combines a few primitives (attachments, route tables, associations, propagations, appliance mode) into surprisingly subtle multi-VPC topologies. Three analogies anchor the moving parts.
Analogy 1: The Airport Hub With Concourses and Gates
Think of Transit Gateway as a hub airport. Each VPC, VPN, or DX connection is a gate at this airport. Attachments are the planes parked at gates. The airport has multiple concourses (TGW route tables): a prod concourse, a non-prod concourse, a shared-services concourse, an inspection concourse. Each gate is associated with exactly one concourse — that determines which terminal map the passengers at that gate see. Each gate propagates its destination cities to one or more concourse terminal maps — that determines which passengers can find this gate as a destination. Appliance mode is the rule "passengers transferring between two concourses must always go through the same security checkpoint, both directions" — without it, a passenger could clear security going east but bypass it going west. TGW peering is a sister airport in another region — flights go between them but only on pre-published direct routes (static), not via dynamic dispatch (no BGP). AWS RAM is the airline alliance that lets your sister airline use your gates without owning the airport.
Analogy 2: The Postal Sorting Distribution Center
A TGW is a regional postal distribution center between dozens of post offices (VPCs) and external postal carriers (VPN, DX). Attachments are the conveyor belts feeding mail into the center. Route tables are the sorting bins, each labelled with a destination zone. Each conveyor is associated with one sorting bin — when mail comes off the conveyor, it is sorted by that bin's labelling. Each conveyor propagates its own destination labels to one or more sorting bins — so other conveyors' mail can be routed to it. Appliance mode is the rule "mail to be inspected must always be inspected at the same X-ray scanner on the way in and on the way out, in the same building" — preventing inspection-bypass via asymmetric routing. Multicast domains are the broadcast-news distribution lists that the center handles — one packet in, many subscribers receive copies.
Analogy 3: The City Subway System With Multiple Lines
A TGW is the subway hub station where multiple lines meet. Attachments are the lines (Red, Blue, Green, Express). Route tables are the line maps shown on the platforms. Each line is associated with exactly one platform map. Each line propagates its terminal stations to one or more platform maps. A passenger on the Red Line at the hub station looks up "Stop X" on the Red Line's platform map; if X is on the propagated stations of another line, the passenger transfers and continues. Appliance mode is the rule "if you transfer to the Inspection line, you must come back through the same Inspection line, both ways" — so the inspection officer sees you on departure and return. Blackhole routes are "this stop is closed permanently — do not route" entries in the line map, preventing fallback to less-specific routes.
For ANS-C01, the airport hub analogy is the highest-yield mental model when a question stacks multiple TGW route tables with different propagations — the concourse-gate-terminal-map structure maps directly onto associations vs propagations. For inspection VPC and appliance mode questions, the postal X-ray scanner sub-analogy is the cleanest. For multi-region scenarios with TGW peering, the sister airport image makes "static-only peering, no BGP" intuitive. Reference: https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html
Transit Gateway Architecture — The Core Primitives
A Transit Gateway is a regional, multi-AZ routing service that lets you connect VPCs, VPN connections, Direct Connect connections, and other TGWs through a single managed transit fabric. AWS limits one TGW to 5,000 attachments and 20 route tables by default.
Attachments — the connection types
A TGW supports five attachment types:
- VPC attachment — connects a VPC to the TGW. Requires one ENI per AZ in selected subnets.
- VPN attachment — terminates a Site-to-Site VPN. Two IPsec tunnels per VPN, BGP or static.
- Direct Connect attachment — connects via a Transit VIF attached to a Direct Connect Gateway.
- TGW peering attachment — peers two TGWs in same or different regions; static routing only.
- TGW Connect attachment — overlay GRE+BGP attachment over an existing VPC or DX attachment, used for SD-WAN.
Each attachment has its own ID, can have tags, can be associated with one TGW route table, and can propagate routes to one or more TGW route tables.
Route tables — segmentation
A TGW can have multiple route tables. Each route table holds routes (destination CIDR + target attachment) populated either by propagation (auto-populated from attachments that propagate to it) or statically (manually added; a static route overrides a propagated route for the same CIDR).
Association — which route table an attachment uses for lookups
An attachment is associated with exactly one TGW route table at a time. When a packet arrives at the TGW from this attachment, the TGW consults the associated route table to find the destination attachment.
Propagation — which route tables an attachment's routes appear in
An attachment propagates its routes (the prefixes it advertises) to one or more TGW route tables. A VPC attachment propagates its CIDR; a VPN/DX attachment propagates the BGP-advertised prefixes; a peering attachment does not propagate at all — remote CIDRs must be added as static routes pointing at the peering attachment.
The decoupling
The decoupling of association (where I look up routes) from propagation (where my routes show up) is the key TGW design lever. By choosing associations and propagations carefully, you implement segmentation: production VPCs associate to a prod RT and propagate to prod + shared, non-prod VPCs associate to non-prod RT and propagate to non-prod + shared, the inspection VPC associates to all RTs and propagates everywhere.
- Attachment: a connection point on a TGW (VPC, VPN, DX, peering, Connect).
- TGW route table: a routing table on the TGW; multiple per TGW.
- Association: which TGW RT an attachment uses for outbound lookups (one only).
- Propagation: which TGW RTs receive an attachment's routes (one or many).
- Appliance mode: per-VPC-attachment toggle ensuring symmetric flow path.
- Default route table: the TGW RT auto-created at TGW creation; can be replaced.
- Default association RT and Default propagation RT: default for new attachments unless overridden.
- Connect attachment: overlay GRE+BGP attachment for SD-WAN.
- Reference: https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html
VPC Attachments — Subnets, AZ Selection, and ENIs
A VPC attachment connects a VPC to a TGW. The TGW creates an ENI in each subnet the customer selects (typically one subnet per AZ for HA). These ENIs are the data path between the VPC and the TGW backbone.
Subnet selection
You select one subnet per AZ in which the TGW provisions its ENI. The selected subnets should be small, dedicated TGW subnets with no other workloads; the spoke's workload subnets then route 0.0.0.0/0 (or specific prefixes) to the TGW. Production-grade design uses a /28 per AZ exclusively for TGW ENIs.
AZ availability
A TGW VPC attachment is HA across the selected AZs — if one AZ fails, traffic flows through the surviving AZ's ENI. However, for traffic destined to a specific AZ in the spoke (e.g., a database in AZ-a), the TGW ENI in AZ-a must be available for traffic to reach that AZ without cross-AZ hops. Always select all AZs the spoke uses.
Multi-account VPC attachment
A VPC attachment requires the VPC and the TGW to be in the same account or shared via AWS RAM. When TGW is shared, the spoke account creates the attachment on its side; the central network account accepts (or auto-accepts) the attachment.
Spoke VPC route table
The spoke VPC's subnet route table contains: local + <other-spoke-CIDR or 0.0.0.0/0> → tgw-xxxx. The simplest spoke route is 0.0.0.0/0 → tgw-xxxx, sending all non-local traffic to the TGW for routing decision.
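A minimal sketch of both sides with the AWS CLI, assuming placeholder resource IDs throughout:

```bash
# Create the spoke's VPC attachment, selecting one dedicated TGW subnet
# per AZ the spoke uses (IDs are placeholders).
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0aaa1111bbb2222cc \
  --subnet-ids subnet-0aaa1111 subnet-0bbb2222

# Point the spoke's workload subnets at the TGW for all non-local traffic.
aws ec2 create-route \
  --route-table-id rtb-0ccc3333ddd4444ee \
  --destination-cidr-block 0.0.0.0/0 \
  --transit-gateway-id tgw-0123456789abcdef0
```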
Appliance Mode — The Must-Know Feature for Stateful Inspection
Appliance mode is a per-VPC-attachment setting that forces TGW to keep both directions of a flow on the same TGW attachment ENI in the same AZ. It is critical for stateful firewalls (Network Firewall, Palo Alto VM-Series, Check Point, Fortinet) deployed in inspection VPCs.
Why appliance mode is required
Without appliance mode, TGW makes per-flow routing decisions per AZ. A flow from spoke A in AZ-1 to spoke B might enter the inspection VPC via the AZ-1 TGW ENI, get inspected by the AZ-1 firewall instance, and leave via AZ-1 ENI to spoke B. The return packet from spoke B might enter via AZ-2 TGW ENI (because TGW chose AZ-2 for return path), be inspected by the AZ-2 firewall — which has no record of the original SYN — and dropped as an unsolicited packet.
Enabling appliance mode pins the entire flow's TGW path to one AZ ENI. The firewall in that AZ sees both directions and stateful tracking works correctly.
Configuring appliance mode
Set during VPC attachment creation, or modified later via modify-transit-gateway-vpc-attachment API. The setting applies only to the inspection VPC's attachment; spoke attachments do not need it. AWS recommends enabling appliance mode for any VPC hosting a stateful network function.
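For example, on an existing attachment (the attachment ID is a placeholder):

```bash
# Enable appliance mode on the inspection VPC's existing TGW attachment.
aws ec2 modify-transit-gateway-vpc-attachment \
  --transit-gateway-attachment-id tgw-attach-0123456789abcdef0 \
  --options ApplianceModeSupport=enable
```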
Appliance mode and cross-AZ traffic
Appliance mode does not affect the spoke's choice of which AZ it sends from. The flow stays in one AZ from the source spoke through the inspection VPC and to the destination spoke. There is no cross-AZ data transfer cost added by appliance mode itself — the cross-AZ behaviour is determined by where the spoke's instance and the destination instance live.
The most-tested TGW gotcha on ANS-C01: a customer deploys AWS Network Firewall in an inspection VPC and routes spoke traffic through it via TGW. Connections inside one AZ work fine, but cross-AZ connections to other spokes intermittently fail or break after a few seconds. Candidates blame Network Firewall rules or security groups — the actual fix is to enable appliance mode on the inspection VPC's TGW attachment. AWS publishes this exact warning in the inspection-VPC reference architecture. Memorise the symptom-to-fix mapping: "stateful firewall, asymmetric drop on cross-AZ flow" → enable TGW attachment appliance mode. Reference: https://docs.aws.amazon.com/vpc/latest/tgw/tgw-appliance-scenario.html
TGW Route Table Patterns — Segmentation Through Association and Propagation
The most common TGW design pattern is multi-segment isolation — keeping prod and non-prod VPCs from reaching each other while both can reach shared services.
Three-segment pattern
Create three TGW route tables: prod-rt, non-prod-rt, shared-rt.
- Prod VPC attachments: associated with prod-rt, propagate to prod-rt and shared-rt.
- Non-prod VPC attachments: associated with non-prod-rt, propagate to non-prod-rt and shared-rt.
- Shared services VPC attachments: associated with shared-rt, propagate to prod-rt, non-prod-rt, and shared-rt.
Result: prod can reach prod and shared; non-prod can reach non-prod and shared; shared can reach everyone; prod cannot reach non-prod because non-prod's CIDRs are not propagated to prod-rt.
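A minimal AWS CLI sketch of the association and propagation calls for the prod segment; all IDs are placeholders, and the same pattern repeats for non-prod and shared:

```bash
# Hypothetical IDs: the prod and shared route tables, one prod VPC attachment.
PROD_RT=tgw-rtb-0aaaaaaaaaaaaaaaa
SHARED_RT=tgw-rtb-0ccccccccccccccc
PROD_ATT=tgw-attach-0aaaaaaaaaaaaaaaa

# Association: prod's outbound lookups use prod-rt (exactly one per attachment).
aws ec2 associate-transit-gateway-route-table \
  --transit-gateway-route-table-id "$PROD_RT" \
  --transit-gateway-attachment-id "$PROD_ATT"

# Propagation: prod's CIDR appears in prod-rt and shared-rt, never in non-prod-rt.
aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id "$PROD_RT" \
  --transit-gateway-attachment-id "$PROD_ATT"
aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id "$SHARED_RT" \
  --transit-gateway-attachment-id "$PROD_ATT"
```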
Adding inspection
To force all traffic through an inspection VPC, change the spoke associations: prod and non-prod VPC attachments both associate with an inspection-rt whose only route is 0.0.0.0/0 → inspection-VPC-attachment. The inspection VPC routes traffic through Network Firewall, then back to the TGW for forwarding to the destination spoke. The inspection VPC's attachment is associated with a post-inspection-rt into which every spoke attachment propagates its CIDR, so inspected traffic can reach any destination spoke.
Blackhole routes for explicit isolation
In the prod-rt, add a static blackhole route for the non-prod CIDR range. Even if a propagation accidentally adds non-prod routes to prod-rt, the blackhole at the same prefix wins (static beats propagated at same length) and traffic is dropped. This is defense-in-depth segmentation.
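As a sketch (the route-table ID and the non-prod supernet 10.64.0.0/12 are hypothetical):

```bash
# Defence-in-depth: statically blackhole the entire non-prod range in prod-rt.
# A static route beats a propagated route at the same prefix length.
aws ec2 create-transit-gateway-route \
  --transit-gateway-route-table-id tgw-rtb-0aaaaaaaaaaaaaaaa \
  --destination-cidr-block 10.64.0.0/12 \
  --blackhole
```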
Default route tables
The TGW default RT (created at TGW creation) is the default association RT and default propagation RT for new attachments. For production designs, disable default association and default propagation at the TGW level so attachments do not auto-join an unintended RT — force the operator to explicitly choose.
A common ANS-C01 distractor: an operator creates a new VPC attachment, expects it to be in the prod-rt, but it ends up in the default-rt (which has propagations from every other attachment). The new VPC immediately has connectivity to every spoke — possibly including non-prod and shared — violating segmentation. The fix is to disable "default association" and "default propagation" at the TGW level when first creating it. New attachments then must explicitly specify their association and propagation, preventing accidental cross-segment leakage. This is the AWS Well-Architected Reliability + Security recommended TGW configuration. Reference: https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html
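A sketch of that recommended creation-time configuration; the ASN and description are illustrative:

```bash
# Create the TGW with default association/propagation disabled, so a new
# attachment has no connectivity until an operator explicitly places it.
aws ec2 create-transit-gateway \
  --description "central-hub" \
  --options AmazonSideAsn=64512,AutoAcceptSharedAttachments=disable,DefaultRouteTableAssociation=disable,DefaultRouteTablePropagation=disable
```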
TGW Connect — GRE Overlay + BGP for SD-WAN
TGW Connect is a special attachment type that establishes a GRE tunnel + BGP overlay over an existing VPC or DX attachment. It is designed for SD-WAN appliance integration where a third-party appliance (Aviatrix, Cisco SD-WAN, Palo Alto, Versa) runs in a VPC and needs BGP routing into TGW.
How TGW Connect works
A TGW Connect attachment is created on top of an underlying VPC attachment or DX attachment (the transport). Each Connect attachment supports up to four Connect peers, each a GRE tunnel between a TGW IP and the customer SD-WAN appliance IP, and each Connect peer carries BGP over its tunnel. Each peer can advertise prefixes; TGW ECMPs across them.
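A minimal sketch of the two calls involved, with placeholder IDs and addresses; the transport attachment must already exist:

```bash
# Layer a Connect attachment over an existing VPC transport attachment.
aws ec2 create-transit-gateway-connect \
  --transport-transit-gateway-attachment-id tgw-attach-0transportaaaa \
  --options Protocol=gre

# Add one Connect peer: a GRE tunnel to the appliance plus BGP inside it.
# The inside CIDR must be a /29 from the 169.254.0.0/16 link-local range.
aws ec2 create-transit-gateway-connect-peer \
  --transit-gateway-attachment-id tgw-attach-0connectbbbb \
  --peer-address 10.0.1.10 \
  --inside-cidr-blocks 169.254.200.0/29 \
  --bgp-options PeerAsn=64513
```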
Why GRE + BGP, not VPN
GRE has lower overhead than IPsec (no encryption, no IKE handshake) and TGW Connect can saturate VPC bandwidth (up to 5 Gbps per peer, 20 Gbps with 4 peers) — much higher than Site-to-Site VPN (~1.25 Gbps per tunnel). Combined with BGP for dynamic routing, TGW Connect is the AWS-native answer to "I have an SD-WAN cloud onramp; how do I integrate with my AWS network?"
Use cases
- SD-WAN appliance in a VPC needs BGP with TGW for dynamic routing.
- High-bandwidth aggregation across multiple BGP peers via ECMP.
- Encryption optional (encryption happens at the SD-WAN appliance layer if needed).
Limitations
- Connect attachment requires an underlying VPC or DX transport attachment.
- Encryption is the customer's responsibility (GRE is unencrypted).
- BGP only — no static routing on Connect.
Multicast on Transit Gateway
TGW multicast allows IP multicast traffic across VPCs through the TGW. Use cases include video streaming, financial market data feeds, multi-receiver telemetry, and legacy applications relying on multicast.
Multicast domain
A multicast domain is configured on the TGW. VPC attachments are added to the domain, and ENIs in those VPCs become multicast members via either IGMP (Internet Group Management Protocol) or static registration.
IGMP vs static membership
IGMP (v2) allows EC2 instances to dynamically join multicast groups by sending IGMP membership reports. The TGW snoops IGMP and updates the multicast forwarding tables. Static membership registers ENIs directly via API — simpler for stable workloads but no dynamic join/leave.
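A sketch of both membership styles with placeholder IDs; note the TGW itself must have been created with MulticastSupport=enable:

```bash
# Create an IGMPv2-enabled multicast domain on the TGW.
aws ec2 create-transit-gateway-multicast-domain \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --options Igmpv2Support=enable

# Add a VPC attachment's subnet to the domain; instances there can then
# join groups dynamically by sending IGMP membership reports.
aws ec2 associate-transit-gateway-multicast-domain \
  --transit-gateway-multicast-domain-id tgw-mcast-domain-0aaa \
  --transit-gateway-attachment-id tgw-attach-0bbb \
  --subnet-ids subnet-0ccc

# Static alternative: register a specific ENI as a group member via API.
aws ec2 register-transit-gateway-multicast-group-members \
  --transit-gateway-multicast-domain-id tgw-mcast-domain-0aaa \
  --group-ip-address 239.0.0.50 \
  --network-interface-ids eni-0ddd
```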
Multicast use cases
- Financial market data — many subscribers receiving the same data feed efficiently.
- Video streaming — single source, many receivers without per-receiver bandwidth amplification.
- Service discovery — legacy multicast-based discovery protocols.
- Stock trading platforms — low-latency, single-source-many-receiver patterns.
Limitations
- TGW multicast is regional only — no inter-region multicast.
- Cross-VPC multicast requires both source and destination VPCs to be in the multicast domain.
- Throughput per source ENI capped (varies by instance type).
Inter-Region TGW Peering — Static Only, No BGP
TGW peering connects two TGWs in the same or different regions. Inter-region peering uses the AWS backbone — encrypted, low-latency, no internet transit.
Static routing only
TGW peering attachments do not support BGP. All routes between peered TGWs must be static entries in each side's TGW route tables. This is the most-tested TGW peering trap on ANS-C01.
Configuration
Create a peering attachment from TGW-1 to TGW-2 (works across regions and accounts via RAM). Accept the peering on TGW-2's side. Add static routes in each TGW's relevant RTs: <remote-region-VPC-CIDR> → peering-attachment-id. Both sides need explicit static routes.
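A minimal CLI sketch with placeholder IDs, account number, and remote CIDR:

```bash
# Requester side (e.g., us-east-1): create the peering attachment.
aws ec2 create-transit-gateway-peering-attachment \
  --transit-gateway-id tgw-0east1111111111111 \
  --peer-transit-gateway-id tgw-0west2222222222222 \
  --peer-account-id 111122223333 \
  --peer-region eu-west-1

# Accepter side (eu-west-1): accept it.
aws ec2 accept-transit-gateway-peering-attachment \
  --transit-gateway-attachment-id tgw-attach-0peer333333333

# Both sides: static routes only, since peering never propagates.
aws ec2 create-transit-gateway-route \
  --transit-gateway-route-table-id tgw-rtb-0east444444444444 \
  --destination-cidr-block 10.128.0.0/16 \
  --transit-gateway-attachment-id tgw-attach-0peer333333333
```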
Use cases
- Multi-region DR — replicate workloads across regions with private TGW-to-TGW connectivity.
- Multi-region application — global services with regional VPCs.
- Compliance — regional data residency with selective inter-region access.
Alternatives
- Cloud WAN — AWS-managed WAN with policy-based routing across regions, BGP-capable, alternative to TGW peering for complex multi-region scenarios.
- Direct Connect Gateway — for hybrid scenarios where on-premises connects to multiple regions, DXGW + Transit VIF is sometimes simpler than TGW peering.
A scenario describes a multi-region architecture: TGW in us-east-1 peered to TGW in eu-west-1, BGP advertising routes between them. Candidates accept this as plausible — it is not. TGW peering is static-only; BGP between TGWs is not supported. The fix: maintain static routes in each TGW's RTs for the remote region's CIDRs. For dynamic multi-region routing with BGP, use AWS Cloud WAN (the newer service, which supports BGP-based segmentation across regions) or accept the operational overhead of static route maintenance. Reference: https://docs.aws.amazon.com/vpc/latest/tgw/tgw-peering.html
Direct Connect Gateway and TGW Integration
A Direct Connect Gateway (DXGW) lets a single Direct Connect connection terminate to multiple VPCs across multiple AWS regions. When combined with TGW, DXGW + Transit VIF + TGW gives a customer one DX connection reaching every VPC in every region (subject to limits).
Transit VIF + DXGW + TGW
Architecture: Direct Connect connection has a Transit VIF attached to a DXGW. The DXGW is associated with one or more TGWs. Each TGW has VPC attachments. A packet from on-premises:
- Enters AWS via the DX connection.
- Routed by Transit VIF to DXGW.
- DXGW forwards to the appropriate TGW (based on association).
- TGW routes to the target VPC attachment.
This composes seamlessly with the segmented TGW route table design — on-premises traffic enters the right segment depending on which TGW it lands on.
Limits
- Up to 20 VPCs (via virtual private gateway associations) per DXGW.
- Up to 6 TGWs per DXGW.
- Each TGW can be associated with 20 DXGWs.
- Allowed prefixes: each TGW-DXGW association has a list of allowed prefixes (the prefixes the TGW will advertise to on-premises via this DXGW). This is how you control which VPC CIDRs are visible from on-premises.
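A sketch of the association call that sets allowed prefixes; the DXGW ID, TGW ID, and CIDRs are placeholders:

```bash
# Associate a TGW with a DXGW, advertising only the listed prefixes
# to on-premises over the Transit VIF.
aws directconnect create-direct-connect-gateway-association \
  --direct-connect-gateway-id 11aa22bb-33cc-44dd-55ee-66ff77aa88bb \
  --gateway-id tgw-0123456789abcdef0 \
  --add-allowed-prefixes-to-direct-connect-gateway Cidr=10.0.0.0/16 Cidr=10.1.0.0/16
```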
TGW peering NOT over DXGW
Inter-region TGW connectivity requires TGW peering, not DXGW. DXGW is for on-premises ↔ AWS multi-region; TGW peering is for AWS-only multi-region.
AWS RAM — Cross-Account TGW Sharing
AWS Resource Access Manager (RAM) shares TGW resources across accounts in an AWS Organisation. The standard pattern: a central network account owns the TGW; member accounts (workload accounts) attach their VPCs to the shared TGW.
Sharing model
- Resource share — a RAM construct that bundles resources (TGW, subnets, etc.) and lists principals (accounts, OUs, the org) allowed to use them.
- Sharing the TGW — the network account creates a resource share, includes the TGW, and grants member accounts access.
- Member account behaviour — once shared, member accounts can create attachments to the TGW from their VPCs. The TGW itself remains owned and managed by the network account.
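Continuing the sharing model above, a minimal RAM sketch from the network account's side; the ARN, account IDs, and share name are placeholders:

```bash
# Network account: share the TGW with two workload accounts,
# restricting sharing to principals inside the AWS Organisation.
aws ram create-resource-share \
  --name central-tgw-share \
  --resource-arns arn:aws:ec2:us-east-1:999988887777:transit-gateway/tgw-0123456789abcdef0 \
  --principals 111122223333 444455556666 \
  --no-allow-external-principals
```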
Auto-acceptance
By default, attachment creation requires the central account to manually accept each attachment. Auto-acceptance can be enabled at the TGW level (the AutoAcceptSharedAttachments option), useful when the central network team trusts member accounts to make their own attachment decisions.
What member accounts cannot do
Member accounts can attach VPCs and view the TGW, but cannot:
- Modify TGW route tables (only the owning account can).
- Modify other accounts' attachments.
- Delete the TGW.
This separation is key to the central network governance model: workload teams own their VPCs and attachments; central network team owns the routing fabric.
- 5000 attachments per TGW (default).
- 20 route tables per TGW (default).
- 50 ECMP paths for VPN attachments.
- 10,000 routes per TGW route table.
- TGW Connect: up to 4 Connect peers per Connect attachment, 5 Gbps per peer, 20 Gbps aggregate.
- TGW peering: static-only, no BGP.
- DXGW: 20 VPCs, 6 TGWs; the allowed-prefixes list controls what is advertised to on-premises.
- Appliance mode: per-VPC-attachment toggle for symmetric flow.
- Default association/propagation: enabled by default; disable for production.
- Reference: https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html
Centralised Egress Pattern Through TGW
The centralised egress pattern uses a shared egress VPC attached to TGW, with NAT GW + IGW, so all spoke VPCs share one set of NAT GWs (one per AZ) for outbound internet access.
Architecture
- Egress VPC: public subnets with IGW + NAT GW (one per AZ); private subnets with TGW attachment.
- Spoke VPCs: TGW attachment, default route 0.0.0.0/0 → tgw-xxx.
- TGW route tables: every spoke RT has 0.0.0.0/0 → egress-VPC-attachment; the egress VPC's attachment RT has spoke CIDRs propagated for return traffic.
Cost benefit
Without centralised egress: each spoke needs its own NAT GW per AZ. With 10 spokes and 3 AZs, that is 30 NAT GWs. With centralised egress: 3 NAT GWs total (in the egress VPC). NAT GW per-hour and per-GB charges scale linearly with NAT count, so consolidation yields significant savings — at the cost of slightly higher cross-AZ data transfer charges (TGW + NAT GW + IGW path).
Filtering
The egress VPC is the natural place for AWS Network Firewall or a third-party egress firewall to inspect outbound traffic — domain allow-lists, malware C2 blocking, exfiltration prevention. All spoke traffic flows through this single chokepoint.
Limitations
- Cross-AZ data transfer cost: spoke traffic from AZ-1 to a NAT GW in AZ-2 incurs cross-AZ charges. Mitigate by deploying egress-VPC subnets and NAT GWs in every AZ the spokes use, so the TGW's default AZ affinity keeps each flow in its originating AZ.
- Single point of failure if not multi-AZ. Always deploy NAT GW per AZ in egress VPC.
Inspection VPC Pattern With Network Firewall
The inspection VPC pattern places AWS Network Firewall (or a third-party stateful firewall via GWLB) in a dedicated VPC, with all inter-spoke and outbound traffic routed through it via TGW.
Architecture
- Inspection VPC: subnets with Network Firewall endpoints (one per AZ); TGW attachment in appliance mode.
- Spoke VPCs: TGW attachment, default route 0.0.0.0/0 → tgw-xxx.
- TGW route tables:
  - Spoke RT: 0.0.0.0/0 → inspection-VPC-attachment and <other-spoke-CIDR> → inspection-VPC-attachment.
  - Inspection RT: spoke CIDRs propagated, allowing post-inspection traffic to forward to the destination spoke.
Why appliance mode is mandatory
Stateful firewalls require symmetric flows. Without appliance mode, return traffic could enter the inspection VPC via a different AZ's TGW ENI and hit a firewall instance that never saw the SYN — connection drops. Appliance mode pins the flow.
Combined with centralised egress
For production designs, combine inspection VPC with centralised egress: spoke → TGW → inspection VPC → TGW → egress VPC → NAT → IGW. Outbound traffic is inspected then NATed; inbound return traffic is symmetrically inspected via appliance mode.
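A sketch of the two static defaults that chain the pattern together; route-table and attachment IDs are placeholders:

```bash
# Spokes' RT: everything goes to the inspection VPC first.
aws ec2 create-transit-gateway-route \
  --transit-gateway-route-table-id tgw-rtb-0spokes111111111 \
  --destination-cidr-block 0.0.0.0/0 \
  --transit-gateway-attachment-id tgw-attach-0inspection22

# Post-inspection RT: inspected traffic bound for the internet
# goes on to the egress VPC for NAT and the IGW.
aws ec2 create-transit-gateway-route \
  --transit-gateway-route-table-id tgw-rtb-0postinspect333 \
  --destination-cidr-block 0.0.0.0/0 \
  --transit-gateway-attachment-id tgw-attach-0egress444444
```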
TGW Network Manager — Visibility and Topology
Transit Gateway Network Manager registers TGWs across regions and accounts into a global network and provides:
- Topology view — graphical representation of TGW attachments, route tables, and connections.
- Route Analyzer — simulate a packet path between attachments to verify routing intent.
- CloudWatch metrics and events — bandwidth utilisation, packet drops, attachment state changes.
- EventBridge integration — react to topology changes, attachment failures.
For multi-account organisations, register a Global Network that aggregates TGWs from multiple accounts/regions for unified visibility.
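Assuming a registered global network, a Route Analyzer run looks roughly like this; the global-network ID, attachment ARNs, and IPs are placeholders:

```bash
# Verify that the path between two attachments matches routing intent.
aws networkmanager start-route-analysis \
  --global-network-id global-network-0123456789abcdef0 \
  --source TransitGatewayAttachmentArn=arn:aws:ec2:us-east-1:999988887777:transit-gateway-attachment/tgw-attach-0aaa,IpAddress=10.1.0.10 \
  --destination TransitGatewayAttachmentArn=arn:aws:ec2:us-east-1:999988887777:transit-gateway-attachment/tgw-attach-0bbb,IpAddress=10.2.0.10
```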
Common Traps Recap — Transit Gateway on ANS-C01
Trap 1: TGW peering supports BGP
Wrong. TGW peering is static-only. Use Cloud WAN for BGP-based multi-region.
Trap 2: Appliance mode is needed only for cross-region traffic
Wrong. Appliance mode is needed for any stateful inspection in a multi-AZ inspection VPC — local-region cross-AZ flows break without it.
Trap 3: Default route table covers all attachments well
Wrong for production. Disable default association and default propagation, force explicit attachment configuration to prevent unintended cross-segment leakage.
Trap 4: TGW Connect uses IPsec
Wrong. TGW Connect uses GRE (unencrypted) over an underlying VPC/DX attachment. Encryption is the customer's responsibility.
Trap 5: VPC attachment requires only one subnet
Technically possible but bad practice. Select one subnet per AZ for HA. With a single-subnet attachment, all spoke traffic hairpins through one AZ's ENI (cross-AZ charges and latency), and that AZ's failure severs the spoke entirely.
Trap 6: TGW supports multicast across regions
Wrong. TGW multicast is regional only. No inter-region multicast.
Trap 7: DXGW does TGW peering between regions
Wrong. DXGW is for on-prem ↔ AWS multi-region. TGW peering is for AWS-only inter-region.
Trap 8: Member accounts can modify shared TGW route tables
Wrong. Only the owning account can modify TGW route tables. Member accounts can only attach VPCs.
ANS-C01 exam priority — Transit Gateway route tables, attachments, Connect, and multicast. This topic carries significant weight on the exam. Master the trade-offs, decision boundaries, and the cost/performance triggers each feature exposes — the exam tests scenarios that hinge on knowing which option is the wrong answer, not just which is right.
FAQ — Transit Gateway Routing and Attachments
Q1: What is the difference between association and propagation on TGW?
Association decides which TGW route table an attachment uses for outbound lookups. Each attachment is associated with exactly one route table — when traffic arrives from this attachment, the TGW consults the associated RT to find the destination. Propagation decides which TGW route tables receive an attachment's routes (advertised prefixes). An attachment can propagate to many RTs. Worked example: the prod VPC attachment is associated with prod-rt (so its outbound traffic looks up prod-rt) and propagates its CIDR to prod-rt and shared-rt (so prod and shared attachments can reach it). The decoupling of association from propagation is the core TGW design lever.
Q2: Why is appliance mode required for inspection VPCs and how do I enable it?
Appliance mode forces both directions of a flow through the same TGW attachment ENI in the same AZ. Without it, TGW makes per-flow decisions per AZ, potentially sending the request via AZ-1 ENI and the response via AZ-2 ENI. Stateful firewalls in the inspection VPC, having no record of the original SYN on AZ-2, drop the response. Enable appliance mode at VPC attachment creation or via modify-transit-gateway-vpc-attachment --options "ApplianceModeSupport=enable". The setting applies to the inspection VPC attachment only — spokes do not need it. Always enable appliance mode for any VPC hosting Network Firewall, Palo Alto VM-Series, Check Point, Fortinet, or any stateful network function. Forgetting appliance mode is the highest-frequency cause of "intermittent connection drops on cross-AZ flows" troubleshooting questions on ANS-C01.
Q3: When should I use TGW Connect instead of Site-to-Site VPN for SD-WAN integration?
Use TGW Connect when the SD-WAN appliance is in a VPC and you need high-bandwidth BGP-routed connectivity to TGW. Connect provides up to 5 Gbps per BGP peer (20 Gbps with 4 peers via ECMP), much higher than VPN's ~1.25 Gbps per tunnel. Connect uses GRE (lower overhead than IPsec) and assumes the SD-WAN appliance handles encryption at its own layer. Use Site-to-Site VPN when connecting on-premises hardware (router or firewall) directly to AWS without an SD-WAN appliance, or when you need IPsec encryption end-to-end in a single AWS-managed primitive. The exam scenario for "SD-WAN appliance in VPC, need 10 Gbps to TGW with BGP" is TGW Connect; "branch office router needs IPsec to AWS" is Site-to-Site VPN.
Q4: How do I share a TGW across 20 AWS accounts and prevent unintended attachments?
Use AWS Resource Access Manager (RAM) to share the TGW. The central network account creates a resource share with the TGW, and grants access to specific accounts, an organisational unit, or the entire AWS Organisation. Member accounts can then create VPC attachments to the shared TGW from their own VPCs. To prevent unintended attachments: (a) disable auto-acceptance so the central account must approve each attachment; (b) disable default association and default propagation at the TGW level so new attachments default to no route table — the central account explicitly assigns associations; (c) use AWS Config rules and EventBridge to alert when a new attachment is created. The result: only attachments the central network team approves and configures end up with connectivity.
Q5: How do I segment prod and non-prod traffic through a single TGW?
Create two TGW route tables: prod-rt and non-prod-rt. Prod VPC attachments associate with prod-rt and propagate to prod-rt + shared-rt. Non-prod VPC attachments associate with non-prod-rt and propagate to non-prod-rt + shared-rt. Shared services VPC attachments associate with shared-rt and propagate to all three. Result: prod can reach prod and shared but not non-prod (because non-prod CIDRs are not in prod-rt); non-prod symmetric. For belt-and-braces, add static blackhole routes in prod-rt for non-prod CIDRs and vice versa, so even if propagation is misconfigured the blackhole at the same prefix length wins. Combine with AWS Config rules to detect unauthorised propagation changes.
Q6: What happens if I do not enable appliance mode on an inspection VPC attachment?
Cross-AZ flows asymmetrically split between TGW ENIs. A flow's request goes through inspection appliance instance A in AZ-1; the response comes back through instance B in AZ-2. Stateful firewalls (Network Firewall, Palo Alto, Check Point, Fortinet) require both directions of a flow to traverse the same instance to maintain connection state. The response packet hits an appliance with no matching connection record and is dropped. Symptom: connections succeed initially within one AZ but drop intermittently for cross-AZ flows; debugging shows TCP RSTs from the firewall on what should be valid return traffic. Fix: enable appliance mode on the inspection VPC's TGW attachment — this is a one-time configuration change with no extra cost and resolves the entire class of bugs.
Q7: How does TGW peering differ from VPC peering, and when do I use each?
VPC peering is direct point-to-point between two VPCs, with no hourly charge (same-AZ data transfer is free; cross-AZ and cross-region transfer is charged at standard rates), supports up to 125 active peerings per VPC, but does not support transitive routing — spoke A peered to hub peered to spoke B does not let A reach B. TGW peering connects two TGWs (in the same or different regions), supports static routing only (no BGP), and provides multi-region multi-VPC connectivity through each TGW's spokes. Use VPC peering for simple two-VPC connectivity where transitive routing is not needed and cost is paramount. Use TGW peering when you have multi-VPC TGW deployments that need to span regions (e.g., DR replication, global service architecture). TGW peering data transfer is charged per-GB across regions; same-region TGW traffic also carries the TGW per-GB data-processing charge — sometimes more expensive than VPC peering for low-volume scenarios.
Q8: What are the limits I should know for TGW on the exam?
Exam-relevant defaults: 5,000 attachments per TGW, 20 route tables per TGW, 10,000 routes per TGW route table, 50 ECMP paths for VPN attachments, 20 VPCs per DXGW, 6 TGWs per DXGW, 20 DXGWs per TGW. Per-VIF prefix limits: 100 prefixes received from the customer on a Private VIF, 1,000 on a Public VIF. Bandwidth: TGW Connect delivers up to 20 Gbps aggregate; Site-to-Site VPN ~1.25 Gbps per tunnel; Direct Connect 1/10/100 Gbps native. The exam tests "exceeded route limit" and "max ECMP paths" scenarios — recognise "BGP session drops after advertising 102 routes" as the 100-prefix limit on a Private VIF.
Q9: When should I prefer Cloud WAN over Transit Gateway peering for multi-region?
Cloud WAN is the newer AWS-managed WAN service (launched 2022) that offers policy-based routing, segmentation across regions natively, and BGP between regions — features TGW peering does not have. Use Cloud WAN for: (a) BGP-based multi-region routing where static TGW peering is operationally painful; (b) policy-based segmentation across many regions with declarative configuration; (c) integration with SD-WAN providers (Aviatrix, Cisco SD-WAN, Palo Alto Prisma) at the multi-region scale; (d) modern multi-region architectures from scratch where you do not have TGW peering legacy. Use TGW peering for: existing TGW deployments where adding peering is straightforward, single-pair multi-region (one region peered to one other), or where Cloud WAN's premium pricing is not justified. The exam currently tests TGW peering more frequently than Cloud WAN, but Cloud WAN is gaining importance and may appear in newer questions.
Q10: How do I use TGW for centralised egress while also forcing inspection?
Combine the two patterns: spoke VPCs route 0.0.0.0/0 → TGW. TGW route table for spokes points to the inspection VPC attachment. The inspection VPC has Network Firewall (or third-party stateful firewall via GWLB) with appliance mode enabled. After inspection, the inspection VPC routes traffic to TGW again, with a separate TGW route table for inspection-egress that points to the egress VPC attachment. The egress VPC has NAT GW + IGW for actual internet egress. Return traffic flows symmetrically: IGW → egress VPC → TGW → inspection VPC (inspected on return path due to appliance mode) → TGW → spoke VPC. This composite pattern is the AWS Security Reference Architecture canonical layout: every outbound packet is inspected and NATed through a small set of central VPCs, providing single-pane policy enforcement and cost-efficient NAT consolidation. ANS-C01 questions that mention "centralised inspection and egress" expect this composite design.
Further Reading
- AWS Transit Gateway User Guide
- Transit Gateway Route Tables
- Transit Gateway Connect Attachments
- Transit Gateway Appliance Scenario
- Transit Gateway Multicast Overview
- Building a Scalable and Secure Multi-VPC AWS Network Infrastructure
- AWS ANS-C01 Exam Guide
Transit Gateway is the multi-VPC fabric; the next layer is VPC routing primitives for how spoke route tables direct traffic into TGW; BGP attribute manipulation for hybrid traffic engineering across DX and VPN attachments; Direct Connect VIF types for hybrid attachments; and PrivateLink for service-level connectivity that complements TGW for unidirectional service exposure.