Category Archives: Study Notes

Citrix Netscaler Setup Notes

Below are the general design highlights:

  • Dual VLAN design for redundancy: two external VLANs and two internal VLANs.
  • BGP routing is configured between the LBs, FWs, and routers. The primary VLANs are always preferred as the primary route path; the secondary VLANs are used only when the primary is down.
  • GE interface speed should be set to AUTO/AUTO at both the switch and the LBs.
  • Servers reside behind the Web POD router. The LBs are installed in L3 route mode and forward traffic between the clients and servers.
  • Virtual IP addresses come from a dedicated subnet assigned within the load balancers. These addresses are advertised by the load balancers themselves via BGP (see the sketch after this list).
  • The LBs don’t allow routing pass-through, so the server subnets are not reachable from the Internet. This creates a screened server network, which adds a layer of security because the actual server network addresses are never advertised to the Internet. In addition, the load balancers only forward traffic that is configured to be load-balanced, so nothing other than the specified web server ports would be forwarded even in the event of a screening firewall misconfiguration or compromise.
  • The LBs within the Web POD don’t change client source IPs when they forward traffic; the client source IP is preserved when traffic arrives at the servers.
  • A Web POD may contain either NetScaler or BIGIP/LTM load balancers.
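
On a NetScaler, advertising the VIP subnet from the load balancer itself is typically done with route health injection plus kernel-route redistribution in the built-in ZebOS routing suite. A minimal sketch, with all IPs and AS numbers hypothetical:

add ns ip 101.185.200.10 255.255.255.255 -type VIP -hostRoute ENABLED

Then, inside vtysh:

router bgp 64512
 neighbor 10.10.10.1 remote-as 64513
 redistribute kernel

With -hostRoute ENABLED the host route for the VIP is installed only while the VIP is up, so BGP withdraws the advertisement automatically on failure.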

Netscaler Setup

Below are design details specific to each hardware type.

  • BIGIP (4.x, 9.x, 10.x)
  • Serial failover is used for HA; a serial cable must be connected between the two boxes. VLAN failsafe is NOT used within the Web POD.
  • No mgmt interface is configured. All mgmt/administration is done via an inside self IP address; one of the internal VLAN self IPs is used for administration.
  • BIGIPs are configured with floating IPs. These IP addresses are used as the next hop by the FWs and Web POD routers when forwarding traffic, which ensures that traffic is forwarded to whichever BIGIP/LTM is currently active.
  • NetScaler: within the Web POD, NetScalers should be installed in L3 INC (Independent Network Configuration) mode.
  • INC mode allows both LBs to run independent routing. This requires each LB to be configured with its own network-level config (such as NSIP, MIP, SNIPs, and VLANs) separately; these configurations are not synchronized.
  • For high availability, network-based failover is used. Each participating interface is monitored by the NS.
  • Web POD NetScalers use the “FIS group” feature for high availability. The FIS group ensures that the NS fails over only if both interfaces are down; a single interface failure cannot cause the NS to fail over.
  • Since the client source IP is preserved within the Web POD, USIP mode should be enabled.
  • NetScalers are configured with an NSIP; this IP address is used for administration. The NSIPs are assigned from one of the internal VLANs.
  • For each VLAN (subnet), a SNIP is configured on the NS and bound to the VLAN (see the CLI sketch below).
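
A minimal NetScaler CLI sketch tying the points above together; all IPs, VLAN IDs, and interface numbers are hypothetical, and the peer unit needs its own equivalents since INC mode does not synchronize them:

set ns config -IPAddress 10.20.1.11 -netmask 255.255.255.0
add vlan 100
add ns ip 10.20.1.12 255.255.255.0 -type SNIP
bind vlan 100 -ifnum 1/1 -IPAddress 10.20.1.12 255.255.255.0
add ha node 1 10.20.2.11 -inc ENABLED
add fis FIS1
bind fis FIS1 1/1
bind fis FIS1 1/2
enable ns mode USIP

The -inc ENABLED flag creates the HA pair in Independent Network Configuration mode, and the FIS binding makes interface monitoring trigger a failover only when every interface in the set is down.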

Routing Notes: BGP Setup and Configuration Instructions


Guidelines for the T2 side of the peering:

– Transit networks must use T2 address space: 101.185.96.0 255.255.224.0

– Allows better network isolation in conjunction with the perimeter ACL (see below).

– Transit network interfaces on T2 devices must be configured with the existing perimeter ACL:
‘ip access-group VTM_TIER2_33464 in’

– Restricts external access to T2 internal devices, while allowing transit traffic. Goal is to protect T2 interior devices from targeted and untargeted attacks, or other service-impacting traffic-related conditions.

– Transit network interfaces on T2 devices must be configured with existing inbound policy: ‘service-policy input mark_control’

– Sets any inbound packets marked with TOS 6 or 7 to TOS 0, giving priority to T2 internal routing protocols and other internal T2 traffic in QOS queues. All other packets are accepted unaltered. Goal is to increase T2 stability.
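
The mark_control policy already exists on the devices and is not reproduced in these notes; a sketch of what a policy with the described behavior could look like (the class-map name and exact match criteria are assumptions):

class-map match-any EXTERNAL_CONTROL
 match ip precedence 6 7
!
policy-map mark_control
 class EXTERNAL_CONTROL
  set ip precedence 0
 class class-default ! all other packets pass unaltered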

– Transit network interfaces on T2 devices must be enabled for Netflow: ‘ip route-cache flow’

– Transit network interfaces on T2 and T3 devices:

– Must not allow OSPF to form an adjacency.

– Interfaces must be ‘passive’ in OSPF; ‘passive-interface default’ is preferred (see the sketch below).

– Should be configured using an MTU size of 1600.
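
A sketch of the preferred OSPF arrangement (the process ID and the internal interface name are hypothetical):

router ospf 1
 passive-interface default
 no passive-interface GigabitEthernetz/z ! internal links only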

– BGP timers are set to 2 and 8 on T2. While these are adequate for fast re-convergence, timers may be set differently on individual peers as required by the various component architectures.
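
For example, a peer that needs slower timers can override the global setting per neighbor (values hypothetical):

neighbor 101.185.c.c timers 10 30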

– EBGP peering configuration:

– Use peer-groups for multiple peerings to the same external AS.

– ‘next-hop-self’, while not strictly required, is added for conformity.

– ‘send-community’ is required for basic community support (‘both’ or ‘extended’ keywords not required)

– ‘soft-reconfiguration inbound’ is required for ease of maintenance/troubleshooting

– Prefix-list ‘DEFAULT-ONLY’ should be used, as required, to filter routes outbound, permitting only the default.

– Prefix-list ‘DEFAULT-ONLY’ may not pre-exist. Add it as required:
‘ip prefix-list DEFAULT-ONLY seq 5 permit 0.0.0.0/0’

– Existing route-map CLEAR-COMMUNITY clears the 100:xxx community if it was set at a lower tier, and resets it appropriately (100:1 for NA).
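
The CLEAR-COMMUNITY route-map also already exists; a sketch of the general pattern (the community-list name and regex are assumptions):

ip community-list expanded TIER-COMMUNITY permit 100:.*
!
route-map CLEAR-COMMUNITY permit 10
 set comm-list TIER-COMMUNITY delete ! strip any 100:xxx set at a lower tier
 set community 100:1 additive ! re-tag for NA

IOS performs the comm-list delete before the set community within the same route-map entry, so routes arrive carrying a single clean 100:1.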

– Route filtering:

– The preferred approach is for T3 ASes to accept only the default route from T2. This configuration requires the least maintenance (see the sketch after these bullets).

– For legacy conversions, the safest decision may be to continue sending all existing routes plus default. This configuration will eliminate most future maintenance.

– All additional route filtering, except as indicated below, should occur on the T3 side of the peering using appropriate inbound and outbound route-maps or prefix lists.
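
For the preferred default-only arrangement, the T3 side can enforce the same filter inbound; a sketch reusing the DEFAULT-ONLY prefix-list with the peer-group name from the T3 example further below:

ip prefix-list DEFAULT-ONLY seq 5 permit 0.0.0.0/0
router bgp xxxxx
 neighbor TIER2-NA-33464 prefix-list DEFAULT-ONLY in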

– Multicast support:

– PIM should only be enabled on T2 interfaces that peer to ASes with a multicast requirement.

– Required configuration:

ip pim query-interval 5
ip pim sparse-mode
ip multicast boundary MST_Tier3AS_Multicast_Boundary
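
The MST_Tier3AS_Multicast_Boundary ACL itself is not shown in these notes; a common pattern, offered purely as an assumption, is to keep administratively scoped groups from crossing the boundary:

ip access-list standard MST_Tier3AS_Multicast_Boundary
 deny 239.0.0.0 0.255.255.255 ! admin-scoped groups stay local
 permit any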

EXAMPLES T2 to T3:

– Typical T2 router interface configuration:

interface GigabitEthernetx/x
description BGP link to ??????? Gy/y
mtu 1600
ip address 101.185.a.a 255.255.255.252
ip access-group VTM_TIER2_33464 in
ip pim query-interval 5
ip pim sparse-mode
ip multicast boundary MST_Tier3AS_Multicast_Boundary
ip route-cache flow
wrr-queue cos-map 2 2 3 ! NOTE:
wrr-queue cos-map 3 2 4 ! Cos-maps differ depending on hardware.
wrr-queue cos-map 3 7 6 7 ! Ensure use of correct cos-map. See QOS Standard.
mls qos trust dscp
service-policy input mark_control

– Typical EBGP peer configuration – default only:

neighbor xxxxxxx peer-group
neighbor xxxxxxx remote-as yyyyy
neighbor xxxxxxx next-hop-self
neighbor xxxxxxx send-community
neighbor xxxxxxx default-originate
neighbor xxxxxxx soft-reconfiguration inbound
neighbor xxxxxxx prefix-list DEFAULT-ONLY out
neighbor xxxxxxx route-map CLEAR-COMMUNITY in
neighbor 101.185.c.c peer-group xxxxxxx
neighbor 101.185.c.c description ????????
neighbor 101.185.d.d peer-group xxxxxxx
neighbor 101.185.d.d description ????????

– Typical EBGP peer configuration – all routes plus default:

neighbor xxxxxxx peer-group
neighbor xxxxxxx remote-as yyyyy
neighbor xxxxxxx next-hop-self
neighbor xxxxxxx send-community
neighbor xxxxxxx default-originate
neighbor xxxxxxx soft-reconfiguration inbound
neighbor xxxxxxx route-map CLEAR-COMMUNITY in
neighbor 101.185.c.c peer-group xxxxxxx
neighbor 101.185.c.c description ????????
neighbor 101.185.d.d peer-group xxxxxxx
neighbor 101.185.d.d description ????????

– Typical EBGP peer configuration – all routes, no default:

neighbor xxxxxxx peer-group
neighbor xxxxxxx remote-as yyyyy
neighbor xxxxxxx next-hop-self
neighbor xxxxxxx send-community
neighbor xxxxxxx soft-reconfiguration inbound
neighbor xxxxxxx route-map CLEAR-COMMUNITY in
neighbor 101.185.c.c peer-group xxxxxxx
neighbor 101.185.c.c description ????????
neighbor 101.185.d.d peer-group xxxxxxx
neighbor 101.185.d.d description ????????

EXAMPLES T3 to T2:

– Typical T3 to T2 BGP peer configuration:

router bgp xxxxx
no synchronization
bgp router-id y.y.y.y
bgp log-neighbor-changes
timers bgp 2 8
neighbor TIER2-NA-33464 peer-group
neighbor TIER2-NA-33464 remote-as 33464
neighbor TIER2-NA-33464 send-community
neighbor TIER2-NA-33464 soft-reconfiguration inbound
neighbor TIER2-NA-33464 route-map ?????? out
neighbor TIER2-NA-33464 route-map ?????? in
neighbor 101.185.a.a peer-group TIER2-NA-33464
neighbor 101.185.a.a description ????????
neighbor 101.185.b.b peer-group TIER2-NA-33464
neighbor 101.185.b.b description ????????
maximum-paths 4

Remote-access & Load balancers

A Remote Access Pod is designed to give remote employees, customers, and B2B external partners access using the Internet and private dial networks as the transport medium. Within the pod, Cisco dial nodes, Nortel Contivity switches, Juniper IVE appliances (i.e., remote desktop), and high-capacity routers are used.

VPN Netscaler

The Remote Pod has load balancers installed to ensure that all remote access traffic is evenly distributed between the devices available within a site. Two pairs of firewalls protect all the devices: one pair on the external side of the network shields the remote access devices from the Internet, while the second pair protects corporate resources.
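
A minimal NetScaler sketch of the load-balancing piece; all names and IPs are hypothetical, and the SSL_BRIDGE service type is an assumption (the IVEs terminate their own SSL here):

add server ive1 10.1.1.11
add server ive2 10.1.1.12
add service svc_ive1 ive1 SSL_BRIDGE 443
add service svc_ive2 ive2 SSL_BRIDGE 443
add lb vserver vs_remote_access SSL_BRIDGE 10.1.1.100 443 -persistenceType SOURCEIP
bind lb vserver vs_remote_access svc_ive1
bind lb vserver vs_remote_access svc_ive2

SOURCEIP persistence keeps a remote user pinned to the same IVE for the life of a session.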
