
Getting Started with NSX-T 3.0

Writer: Ramy Afifi

Updated: Jul 30, 2021


VMware NSX Data Center is the network virtualization and security platform that enables a software-defined approach to networking that extends across data centers, clouds and application frameworks. With NSX Data Center, networking and security are brought closer to the application wherever it’s running, from virtual machines to containers to bare metal. Like the operational model of VMs, networks can be provisioned and managed independent of underlying hardware.

NSX Data Center reproduces the entire network model in software, enabling any network topology—from simple to complex multi-tier networks—to be created and provisioned in seconds. Users can create multiple virtual networks with diverse requirements, leveraging a combination of the services offered via NSX or from a broad ecosystem of third-party integrations—ranging from next-generation firewalls to performance management solutions—to build inherently more agile and secure environments. These services can then be extended to a variety of endpoints within and across clouds.


This post describes how to install the VMware NSX-T Data Center 3.0 release on vSphere. The information includes step-by-step configuration instructions and suggested best practices.


Preparing for Installation


The following resources are designed to help you plan your NSX-T deployment, and effectively manage your Software-defined Data Center and virtualization environment.

System Requirements for NSX-T Data Center. Before installing NSX-T, prepare the deployment environment to meet the system requirements.

VMware Product Interoperability Matrices. Provides details about the compatibility of current and earlier versions of VMware vSphere components, including NSX-T, ESXi, vCenter Server, and other VMware products.

vSphere Hardware and Guest Operating System Compatibility Guides. An online reference that shows what hardware, converged systems, operating systems, third-party applications, and VMware products are compatible with a specific version of a VMware software product.

VMware Configuration Maximums. When you configure, deploy, and operate your virtual and physical equipment, you must stay at or below the maximums supported by your product. The limits presented in the Configuration Maximums tool are tested limits supported by VMware.

Lab Topology


Downloading NSX-T Data Center OVA Files

Download the NSX-T OVA files from the VMware Downloads website. VMware NSX-T Data Center is listed under Networking & Security. You will need to download two OVA files: the NSX Manager for VMware ESXi OVA and the NSX Edge for VMware ESXi OVA.


Installing NSX Manager OVA


NSX Manager is the management component of NSX-T Data Center. NSX Manager provides a graphical user interface (GUI) and REST APIs for creating, configuring, and monitoring NSX-T Data Center components such as logical switches, logical routers, and firewalls. For high availability, NSX-T Data Center supports a management cluster of three NSX Managers. For a production environment, deploying a management cluster is recommended. For a proof-of-concept environment, you can deploy a single NSX Manager.

You can set up the NSX Manager virtual appliance by importing the OVA into your vCenter Server. Right-click the target host on which you want to deploy the appliance and select Deploy OVF Template to start the installation wizard.

Enter the OVA download URL or browse to the saved OVA file.

Enter a name for the NSX Manager virtual machine. The name you enter appears in the vSphere inventory.

Select a compute resource on which to deploy the NSX Manager virtual appliance.

Verify the OVF template details.

Select a deployment configuration.

Select a datastore to store the NSX Manager appliance files.

Select a destination network for the source network. This is the port group to which the NSX Manager virtual appliance will connect.

On the Customize template page, complete the deployment details. Enter the NSX Manager system root, CLI admin, and audit passwords. Enter the hostname of the NSX Manager; the hostname must be a valid domain name. Accept the default NSX Manager role for this VM. Finally, enter the site name, default gateway, management network IPv4 address, and management network netmask.

Enter the DNS server IP address and the domain search list.

Enter the NTP server IP address, and optionally enable SSH and allow root SSH login to the NSX Manager command line. By default, the SSH options are disabled for security reasons.

Verify that all your custom OVF template specifications are accurate and click Finish to initiate the installation. The installation might take 7 to 8 minutes.
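If you prefer a scripted install, the same OVA properties can be passed on the command line with VMware's OVF Tool instead of the vSphere wizard. The following is a minimal sketch, not a definitive recipe: the names, addresses, OVA file name, and vi:// target locator are placeholders for illustration, and the nsx_* property names should be verified against the OVA you downloaded.

    # Unattended NSX Manager deployment with ovftool (all values are placeholders)
    ovftool --name=nsx-mgr-01 --deploymentOption=medium \
      --datastore=datastore1 --network="VM Network" \
      --acceptAllEulas --noSSLVerify --diskMode=thin --powerOn \
      --X:injectOvfEnv --allowExtraConfig \
      --prop:nsx_hostname=nsx-mgr-01.lab.local \
      --prop:nsx_role="NSX Manager" \
      --prop:nsx_ip_0=192.168.110.15 \
      --prop:nsx_netmask_0=255.255.255.0 \
      --prop:nsx_gateway_0=192.168.110.1 \
      --prop:nsx_dns1_0=192.168.110.10 \
      --prop:nsx_domain_0=lab.local \
      --prop:nsx_ntp_0=192.168.110.10 \
      --prop:nsx_isSSHEnabled=True \
      --prop:nsx_allowSSHRootLogin=False \
      --prop:nsx_passwd_0='<root-password>' \
      --prop:nsx_cli_passwd_0='<admin-password>' \
      --prop:nsx_cli_audit_passwd_0='<audit-password>' \
      nsx-unified-appliance-3.0.0.ova \
      'vi://administrator@vsphere.local@vcenter.lab.local/Datacenter/host/Cluster/'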


Once the NSX Manager virtual appliance is installed, start the virtual machine and launch the console to track the boot process. After the NSX Manager boots, log in to the CLI using the admin credentials.


Run the <get interface eth0> command to verify the IP settings applied to the virtual machine.

Run the <get certificate api thumbprint> command. The command output is an alphanumeric string that is unique to this NSX Manager and will be used later to join the NSX Edge with the management plane.
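For reference, a verification session on the NSX Manager console looks roughly like the following; the prompt, addresses, and thumbprint are illustrative.

    nsx-mgr-01> get interface eth0
    Interface: eth0
      Address: 192.168.110.15/24
      MAC address: 00:50:56:xx:xx:xx

    nsx-mgr-01> get certificate api thumbprint
    1f2d...<sha-256 thumbprint, a 64-character hex string>...9c3e

Save the thumbprint string; you will paste it into the join command on the NSX Edge later.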


Logging In to the Newly Created NSX Manager


After you install NSX Manager, you can use the user interface to perform other installation tasks. From a web browser, log in with admin credentials to the NSX Manager at https://<nsx-manager-ip-address>.


The first time you log in, the EULA appears. Read and accept the EULA terms.

Select whether to join VMware's Customer Experience Improvement Program (CEIP).

In the NSX Manager UI, navigate to Home > Monitoring Dashboards > System and verify that the Management Cluster status is stable.
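The same check can be scripted against the NSX Manager REST API. This is a minimal sketch, assuming the illustrative hostname nsx-mgr-01.lab.local and the admin credentials created earlier.

    # Query the overall cluster status; -k skips certificate checks (lab use only)
    curl -k -u 'admin:<password>' https://nsx-mgr-01.lab.local/api/v1/cluster/status
    # In the JSON response, "mgmt_cluster_status" should report "STABLE"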

You have installed the NSX Manager appliance. The virtual machine appears in the vCenter server inventory.


Installing NSX Edge OVA


NSX Edge provides routing services and connectivity to networks that are external to the NSX-T DC deployment. NSX Edge is required if you want to deploy Tier-0 and Tier-1 logical routers. It provides the compute backing for the configured logical routers.

Each logical router contains a services router (SR) and a distributed router (DR). A distributed router is replicated on all transport nodes that belong to the same transport zone. A services router is required if the logical router is going to be configured to perform services, such as NAT. All Tier-0 logical routers have a services router. A Tier-1 router can have a services router if needed based on your design considerations.


Transport zones control the reach of Layer 2 networks in NSX-T Data Center. Transport zones dictate which hosts and, therefore, which virtual machines can participate in the use of a particular network.


You can set up the NSX Edge virtual appliance by importing the OVA into your vCenter Server. Right-click the target host on which you want to deploy the appliance and select Deploy OVF Template to start the installation wizard.

Enter the OVA download URL or browse to the saved OVA file.

Enter a name for the NSX Edge virtual machine. The name you type appears in the vSphere inventory.

Select a compute resource for the NSX Edge virtual appliance.

Verify the OVF template details.

Select a deployment configuration.

Select a datastore to store the NSX Edge appliance files.

Select a destination network for each source network. Select the port group or destination network for each network interface on the NSX Edge virtual appliance.


The Edge node virtual machine in NSX-T has a total of four network interfaces. Network 0 is dedicated to management traffic, and the remaining interfaces are assigned to the DPDK fast path. These fast path interfaces are used for sending overlay traffic and uplink traffic towards top-of-rack (ToR) switches. For redundancy, two interfaces can be used for uplink traffic. In this topology, we will use one fast path interface for overlay traffic and one for uplink traffic. Network 3 is disconnected and will not be used.

On the Customize template page, complete the deployment details. Enter the NSX Edge system root, CLI admin, and audit passwords. Enter the hostname of the NSX Edge; the hostname must be a valid domain name. Then enter the default gateway, management network IPv4 address, and management network netmask.

Enter the DNS server IP address and the domain search list.

Enter the NTP server IP address, and optionally enable SSH and allow root SSH login to the NSX Edge command line. By default, the SSH options are disabled for security reasons.


Note - Ignore the VMC settings; enter VMC values only for VMC deployments.

Verify that all your custom OVF template specifications are accurate and click Finish to initiate the installation. The installation might take 7 to 8 minutes.

Once the NSX Edge virtual appliance is installed, start the virtual machine and launch the console to track the boot process. After the NSX Edge boots, log in to the CLI using the admin credentials.


Run the <get interface eth0> command to verify the IP settings applied to the virtual machine.


Applying the NSX License


Beginning with NSX-T 3.0, you are required to apply a license key before you are allowed to register NSX Edge nodes with the NSX Manager node.


In the NSX Manager UI, navigate to System > Settings > Licenses. Notice that a default NSX for vShield Endpoint license key is already applied. Click Add to apply a license key for NSX Data Center.
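If you prefer the REST API, a license key can be applied with a single call; a hedged sketch with placeholder values:

    # Apply an NSX Data Center license key via the API
    curl -k -u 'admin:<password>' -X POST \
      -H 'Content-Type: application/json' \
      -d '{"license_key": "<your-license-key>"}' \
      https://nsx-mgr-01.lab.local/api/v1/licenses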


Joining the NSX Edge with the Management Plane


Joining NSX Edges with the management plane ensures that the NSX Manager and NSX Edges can communicate with each other.


On the NSX Edge node console, run the <join management-plane> command to join the management plane. The thumbprint value is the unique string captured earlier on the NSX Manager by running the <get certificate api thumbprint> command.
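The command takes the NSX Manager address, an admin username, and the thumbprint, and then prompts for the password; for example (values are placeholders):

    nsx-edge-01> join management-plane 192.168.110.15 username admin thumbprint <api-thumbprint>
    Password for API user:
    Node successfully registered as Fabric Node: <node-uuid>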

Notice that the NSX Edge node is successfully registered as a fabric node.


In the NSX Manager UI, navigate to Home > Monitoring Dashboards > System and verify that the Edge node appears in the inventory.


Adding a Compute Manager


You can add vCenter Server as a Compute Manager. NSX-T Data Center polls compute managers to collect cluster information from the vCenter Server inventory.

Navigate to System > Fabric > Compute Managers and add the vCenter Server.

Complete the compute manager details. Type a name to identify the vCenter Server, the IP address or FQDN of the vCenter Server, and the vCenter Server login credentials. Leave the thumbprint value blank; you will be prompted to accept the server-provided thumbprint.

It takes some time to register the compute manager with vCenter Server and for the connection status to appear as UP.
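The same registration can be performed through the REST API; a minimal sketch with placeholder values. Unlike the UI flow, if the thumbprint field is omitted here, the error response returns the server's thumbprint, which you can confirm and retry with.

    # Register vCenter Server as a compute manager
    curl -k -u 'admin:<password>' -X POST \
      -H 'Content-Type: application/json' \
      -d '{
            "display_name": "vcenter.lab.local",
            "server": "vcenter.lab.local",
            "origin_type": "vCenter",
            "credential": {
              "credential_type": "UsernamePasswordLoginCredential",
              "username": "administrator@vsphere.local",
              "password": "<vcenter-password>",
              "thumbprint": "<vcenter-sha256-thumbprint>"
            }
          }' \
      https://nsx-mgr-01.lab.local/api/v1/fabric/compute-managers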


Note - A vCenter Server instance can register with only one NSX Manager. NSX-T Data Center does not support registering the same vCenter Server with more than one NSX Manager.


After the vCenter Server is registered, do not power off and delete the NSX Manager virtual machine without deleting the compute manager first. Otherwise, when you deploy a new NSX Manager, you will not be able to register the same vCenter Server again; you will get an error stating that the vCenter Server is already registered with another NSX Manager.


Navigate to System > Fabric > Nodes and click the Host Transport Nodes tab. Notice that NSX-T Data Center has polled the compute manager and collected cluster information from the vCenter Server inventory.


Creating IP Pools for Tunnel Endpoint IP Addresses


Tunnel endpoints are the source and destination IP addresses used in the external IP header to identify the hypervisor hosts that originate and terminate the NSX-T Data Center encapsulation of overlay frames. You can use either DHCP or manually configured IP pools for tunnel endpoint IP addresses.


Navigate to Networking > IP Management > IP Address Pools and add two IP pools: an ESXi tunnel endpoint IP pool and an Edge tunnel endpoint IP pool.

Add the ESXi tunnel endpoint IP pool. This pool provides the TEP IPs for ESXi hosts when they are configured as transport nodes.

Add the Edge tunnel endpoint IP pool. This pool provides the TEP IPs for Edge nodes when they are configured as transport nodes.

You have configured both ESXi and Edge tunnel endpoint IP pools. These pools appear on the IP Pools page.
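Equivalent Policy API calls can create a pool and its subnet; a hedged sketch for the ESXi TEP pool (the CIDR and range are placeholders; repeat with different values for the Edge pool):

    # Create the IP pool object
    curl -k -u 'admin:<password>' -X PUT \
      -H 'Content-Type: application/json' \
      -d '{"display_name": "ESXi-TEP-Pool"}' \
      https://nsx-mgr-01.lab.local/policy/api/v1/infra/ip-pools/ESXi-TEP-Pool

    # Add a static subnet with an allocation range to the pool
    curl -k -u 'admin:<password>' -X PUT \
      -H 'Content-Type: application/json' \
      -d '{
            "resource_type": "IpAddressPoolStaticSubnet",
            "cidr": "192.168.130.0/24",
            "allocation_ranges": [{"start": "192.168.130.51", "end": "192.168.130.100"}]
          }' \
      https://nsx-mgr-01.lab.local/policy/api/v1/infra/ip-pools/ESXi-TEP-Pool/ip-subnets/subnet-1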


Understanding Transport Zones


Transport zones dictate which hosts and, therefore, which virtual machines can participate in the use of a particular network. A transport zone does this by limiting the hosts that can "see" a logical switch—and, therefore, which virtual machines can be attached to the logical switch.

An NSX-T Data Center environment can contain one or more transport zones based on your requirements. A transport zone can span one or more host clusters. A host can belong to multiple transport zones. A logical switch can belong to only one transport zone.


Transport zones control the reach of Layer 2 networks. NSX-T Data Center does not allow connection of virtual machines that are in different transport zones in the Layer 2 network. The span of a logical switch is limited to a transport zone, so virtual machines in different transport zones cannot be on the same Layer 2 network.


There are two types of transport zones:

  • Overlay for internal NSX-T Data Center tunneling between transport nodes.

  • VLAN for uplinks external to NSX-T Data Center.


The overlay transport zone is used by both host transport nodes and NSX Edges. When a host or NSX Edge transport node is added to an overlay transport zone, an N-VDS, a software switch, is installed on the host or NSX Edge.


The VLAN transport zone is used by the NSX Edge and host transport nodes for their VLAN uplinks. When an NSX Edge is added to a VLAN transport zone, a VLAN N-VDS is installed on the NSX Edge.


The purpose of the N-VDS software switch is to bind logical router uplinks and downlinks to physical NICs. For each transport zone that a host or NSX Edge transport node belongs to, a single N-VDS gets installed on the host or NSX Edge transport node.


Creating Overlay Transport Zone


Navigate to System > Fabric > Transport Zones and add the required Overlay Transport zone.

Enter a name for the transport zone. Enter a name for the N-VDS. For the traffic type, select Overlay.

View the new transport zone on the Transport Zones page.
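A transport zone can also be created with one API call; a minimal sketch (the names are placeholders, and the same call with "transport_type": "VLAN" creates the VLAN transport zone used later):

    # Create an overlay transport zone bound to an N-VDS name
    curl -k -u 'admin:<password>' -X POST \
      -H 'Content-Type: application/json' \
      -d '{
            "display_name": "TZ-Overlay",
            "host_switch_name": "N-VDS-1",
            "transport_type": "OVERLAY"
          }' \
      https://nsx-mgr-01.lab.local/api/v1/transport-zones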


Configuring ESXi Hosts as Transport Nodes


Navigate to System > Fabric > Nodes > Host Transport Nodes to configure NSX on the desired ESXi hosts.

On the Host Details page, enter a host name or use the host IP address (populated by default), and click Next.

On the Configure NSX page, select the N-VDS, NIOC profile, uplink profile, LLDP profile, ESXi TEP IP pool, and the physical NIC.


Make sure that the physical NIC is not already in use (for example, by a standard vSwitch or a vSphere distributed switch). Otherwise, the transport node state remains in “partial success”, and the fabric node LCP connectivity fails to establish.

ESXi hosts are configured as Transport Nodes. View the Configuration state on the Host Transport Nodes page.

After configuring the ESXi hosts as transport nodes, open an SSH session to each ESXi host configured for NSX and log in using the root credentials.


Run the <esxcli network ip interface ipv4 get> command to locate the TEP or tunnel endpoint IP address assigned during the host preparation.
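NSX typically creates the TEP vmkernel interface as vmk10; sample output, with illustrative addresses and columns abbreviated:

    [root@esxi-01:~] esxcli network ip interface ipv4 get
    Name   IPv4 Address    IPv4 Netmask   IPv4 Broadcast   Address Type
    ----   -------------   ------------   --------------   ------------
    vmk0   192.168.110.51  255.255.255.0  192.168.110.255  STATIC
    vmk10  192.168.130.51  255.255.255.0  192.168.130.255  STATIC

Here vmk10 carries the TEP address allocated from the ESXi tunnel endpoint IP pool created earlier.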

When a host transport node is added to an overlay transport zone, an N-VDS is installed on the host. The unused vmnic, or physical NIC, is added to the N-VDS after it is created.


Understanding Segments or Logical Switches


Segments or Logical Switches in an NSX-T DC environment are similar to VLANs, in that they provide network connections to which you can attach virtual machines. The virtual machines can then communicate with each other over tunnels between hypervisors if the virtual machines are connected to the same Segment or Logical Switch. Each Segment has a virtual network identifier (VNI), similar to a VLAN ID. Entities such as routers, virtual machines, or containers can connect to a segment through the segment ports. The NSX N-VDS, configured on each transport node, can span multiple hosts to provide the Layer 2 functionality.

Configuring a Logical Segment


Navigate to Networking > Segments to add a new segment.

Enter the new segment a name, select the type of connectivity for the segment as "None", select the Overlay Transport Zone, enter the Gateway IP Address of the subnet in a CIDR format, and click Add.


View the new segment on the Segments page.
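The segment can also be created declaratively through the Policy API; a hedged sketch (the transport zone path and subnet are placeholders):

    # Create an overlay segment with a gateway subnet
    curl -k -u 'admin:<password>' -X PUT \
      -H 'Content-Type: application/json' \
      -d '{
            "display_name": "segment1",
            "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<tz-uuid>",
            "subnets": [{"gateway_address": "172.16.10.1/24"}]
          }' \
      https://nsx-mgr-01.lab.local/policy/api/v1/infra/segments/segment1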

Note - Upstream logical gateways are not configured yet. Once configured, you can change the connectivity option to connect the segment to any upstream gateway (Tier-0 or Tier-1). The segment connectivity changes are permitted only when the gateways and the connected segments are in the same Transport Zone.

Note - If a segment is not connected to a gateway, the subnet is optional. If a segment is connected either to a Tier-1 or Tier-0 gateway, the subnet is required.

Note - Subnets of one segment must not overlap with the subnets of other segments in your network. A segment is always associated with a single virtual network identifier (VNI) regardless of whether it is configured with one subnet, two subnets, or no subnet.


After adding a new Segment, a corresponding Port Group is created automatically in the vCenter server inventory.


Attaching Workloads to the Logical Segment


Connect two test virtual machines to the newly created segment. One virtual machine is on one ESXi host and the other virtual machine is on another ESXi host. Both ESXi hosts are configured for NSX.

Both test virtual machines are connected to the same Segment or Logical Switch and can then communicate with each other over tunnels between hypervisors.
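A quick sanity check is a ping between the two guests; the addresses below are illustrative values on the segment subnet.

    # From VM-A (172.16.10.11) on the first host, ping VM-B (172.16.10.12) on the second
    ping -c 4 172.16.10.12
    # Replies confirm Layer 2 reachability across the overlay tunnel between the hosts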


Use Traceflow to inspect the path of a packet as it travels from one logical port to another.


Note - Traceflow is supported only in overlay-backed NSX environments.



Configuring NSX Edge as a Transport Node


After manually installing an NSX Edge on ESXi, add the NSX Edge to the NSX-T Data Center fabric as a transport node.


An NSX Edge can belong to one Overlay Transport Zone and multiple VLAN Transport Zones. If a virtual machine requires access to the outside world, the NSX Edge must belong to the same transport zone that the virtual machine's Logical Switch belongs to. Generally, the NSX Edge belongs to at least one VLAN transport zone to provide the uplink access.


Creating Uplink Profile for Edge VM


An uplink profile defines policies for the links from hypervisor hosts to NSX-T logical switches, or from NSX Edge nodes to top-of-rack (ToR) switches.


Navigate to System > Fabric > Profiles > Uplink Profiles and add a new Uplink Profile.

Use the default Teaming Policy and specify only one active uplink and no standby uplink.

View the new profile on the Uplink Profiles page.
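For reference, the same profile can be created via the API; a minimal sketch with one active uplink and no standby (names and the transport VLAN are placeholders):

    # Create an uplink profile with a failover-order teaming policy
    curl -k -u 'admin:<password>' -X POST \
      -H 'Content-Type: application/json' \
      -d '{
            "resource_type": "UplinkHostSwitchProfile",
            "display_name": "edge-uplink-profile",
            "teaming": {
              "policy": "FAILOVER_ORDER",
              "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}]
            },
            "transport_vlan": 0
          }' \
      https://nsx-mgr-01.lab.local/api/v1/host-switch-profiles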


Creating VLAN Transport Zone


Navigate to System > Fabric > Transport Zones and add the required VLAN Transport zone.

Enter a name for the Transport Zone. Enter a name for the N-VDS. For the traffic type, select VLAN.

View the new Transport Zone on the Transport Zones page.


Understanding NSX Edge Virtual Appliance Networking

When you install NSX Edge as a virtual appliance, internal interfaces called fp-ethX are created, where X is 0, 1, 2, or 3. These interfaces are allocated for uplinks to top-of-rack (ToR) switches and for NSX-T Data Center overlay tunneling.

When you create the NSX Edge transport node, you can select fp-ethX interfaces to associate with the Uplinks and the Overlay tunnel. You can decide how to use the fp-ethX interfaces.

On the vSphere distributed switch or vSphere standard switch, you must allocate at least two vmnics to the NSX Edge: one for NSX Edge management and one for uplinks and tunnels.

In my topology, fp-eth0 is used for the Edge Overlay tunnel. fp-eth1 is used for the Edge VLAN Uplink. fp-eth2 and fp-eth3 are not used. vNIC1 or Network Adapter 1 is assigned to the management network.


Configuring NSX Edge Node as a Transport Node


Navigate to System > Fabric > Nodes > Edge Transport Nodes to configure NSX on the desired NSX Edge node.

Enter a name for the Edge transport node. From the Available column, select the transport zones and click the right arrow to move them to the Selected column. Make sure to select both the overlay and VLAN transport zones.


Under New Node Switch, add the first Edge switch: enter a name for the switch, select the overlay transport zone associated with it, select the uplink profile, select the Edge TEP IP pool, and select the virtual NIC. Don't click Save yet.


Add the second Edge switch: enter a name for the switch, select the VLAN transport zone associated with it, select the uplink profile, and select the virtual NIC.

The NSX Edge node is now configured as a transport node. View the configuration state on the Edge Transport Nodes page.


Creating NSX Edge Cluster


Having a multi-node cluster of NSX Edges helps ensure that at least one NSX Edge is always available.

To create a Tier-0 logical router, or a Tier-1 router with stateful services such as NAT or load balancing, you must associate it with an NSX Edge cluster. Therefore, even if you have only one NSX Edge, it must still belong to an NSX Edge cluster to be useful.


Note - An NSX Edge transport node can be added to only one NSX Edge cluster.


Navigate to System > Fabric > Nodes > Edge Clusters and add a new Edge Cluster.

Enter a name for the NSX Edge cluster. Select an NSX Edge cluster profile from the drop-down menu. In the Member Type drop-down menu, select Edge Node. From the Available column, select the NSX Edges and click the right arrow to move them to the Selected column.

View the new Edge Cluster on the Edge Clusters page.
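An Edge cluster can also be created with one API call; a hedged sketch (the member UUID is the Edge transport node ID, a placeholder here):

    # Create an NSX Edge cluster with a single member
    curl -k -u 'admin:<password>' -X POST \
      -H 'Content-Type: application/json' \
      -d '{
            "display_name": "Edge-Cluster-1",
            "members": [{"transport_node_id": "<edge-transport-node-uuid>"}]
          }' \
      https://nsx-mgr-01.lab.local/api/v1/edge-clusters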



Creating a VLAN-backed Segment for the Uplinks


The NSX Edge uplink ports need to be connected to a VLAN-backed logical switch in a VLAN transport zone. The VLAN-backed logical switch will be in the same Layer 2 domain as the interface on the ToR switch.


Navigate to Networking > Segments and add a new Segment or Logical Switch.

Enter the new segment a name, select the type of connectivity for the segment as "None", select the VLAN Transport Zone, enter "0" for the VLAN ID, and leave Gateway IP Address of the subnet blank, and click Add.


View the new VLAN-backed segment on the Segments page.


Creating and Configuring Tier-0 Gateway


The Tier-0 gateway in the NSX-T Edge cluster provides a gateway service between the logical and physical network. The NSX-T Edge cluster can back multiple Tier-0 gateways.

Navigate to Networking > Connectivity > Tier-0 Gateways and add a new Tier-0 gateway.

Enter a name for the Tier-0 gateway, select the Edge cluster, select the high availability mode, and select the failover mode (this option is available only when the HA mode is set to Active-Standby).


View the new gateway on the Tier-0 Gateways page.


Adding an Uplink Interface to Tier-0 Gateway

A Tier-0 gateway has uplink connections to physical networks. Edit the Tier-0 gateway settings and add an interface for connecting upstream to the physical switch. Enter a name for the uplink interface, select "External" as the interface type, enter an IP address in CIDR format, and connect it to the Segment-Uplink segment.


The Tier-0 gateway is now created and linked upstream to the physical network.
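Declaratively, the gateway and its external interface map to two Policy API objects; a hedged sketch (the paths, addresses, and edge node path are placeholders, and the interface call assumes a locale-services object named "default" that already references your Edge cluster):

    # Create the Tier-0 gateway
    curl -k -u 'admin:<password>' -X PUT \
      -H 'Content-Type: application/json' \
      -d '{"display_name": "T0-Gateway", "ha_mode": "ACTIVE_STANDBY", "failover_mode": "NON_PREEMPTIVE"}' \
      https://nsx-mgr-01.lab.local/policy/api/v1/infra/tier-0s/T0-Gateway

    # Add an external uplink interface on the uplink segment
    curl -k -u 'admin:<password>' -X PUT \
      -H 'Content-Type: application/json' \
      -d '{
            "type": "EXTERNAL",
            "segment_path": "/infra/segments/Segment-Uplink",
            "subnets": [{"ip_addresses": ["192.168.254.2"], "prefix_len": 24}],
            "edge_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<cluster-uuid>/edge-nodes/<node-index>"
          }' \
      https://nsx-mgr-01.lab.local/policy/api/v1/infra/tier-0s/T0-Gateway/locale-services/default/interfaces/uplink-1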


Enabling Route Redistribution

After adding an Uplink Interface on the Tier-0 Router, enable Route Redistribution. This is required to advertise routes northbound.

Enter a name for the new redistribution criteria and select both T0|T1 Connected and T0|T1 Static as sources.



Configuring Routing - Static


Enable routing on the Tier-0 router. NSX-T supports static routing and the dynamic routing protocol BGP on Tier-0 routers for IPv4 and IPv6 workloads. For simplicity, I will configure a default static route on the Tier-0 router to external networks.


Enter the network IP address and mask (0.0.0.0/0 for a default route), and specify the next hop IP address.
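Via the Policy API, the same default route is a single object under the Tier-0; a hedged sketch with placeholder addresses:

    # Add a default static route pointing at the physical router
    curl -k -u 'admin:<password>' -X PUT \
      -H 'Content-Type: application/json' \
      -d '{
            "network": "0.0.0.0/0",
            "next_hops": [{"ip_address": "192.168.254.1", "admin_distance": 1}]
          }' \
      https://nsx-mgr-01.lab.local/policy/api/v1/infra/tier-0s/T0-Gateway/static-routes/default-route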



The Tier-0 router is now configured with a static route toward external subnets, with the physical router as the next hop. Open an SSH session to the NSX Edge and log in with the administrative credentials. View the routes and verify outbound connectivity.
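A typical verification session on the Edge looks roughly like this (the VRF number and addresses are illustrative):

    nsx-edge-01> get logical-routers
    # Note the VRF ID of the SERVICE_ROUTER_TIER0 entry, then switch to it
    nsx-edge-01> vrf 1
    nsx-edge-01(tier0_sr)> get route
    # Look for the static default route (0.0.0.0/0) pointing at the physical router
    nsx-edge-01(tier0_sr)> ping 192.168.254.1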





Creating and Configuring Tier-1 Gateway

Navigate to Networking > Connectivity > Tier-1 Gateways and add a new Tier-1 gateway.

Enter a name for the Tier-1 gateway, link the newly created Tier-1 gateway to the Tier-0 gateway, and enable route advertisement for All Static Routes and All Connected Segments & Service Ports.

View the new gateway on the Tier-1 Gateways page.

The Tier-1 gateway is now created and linked upstream to the Tier-0 gateway.
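The equivalent Policy API object links the Tier-1 to the Tier-0 and sets the advertisement types; a hedged sketch with placeholder names:

    # Create a Tier-1 gateway linked to the Tier-0, advertising static and connected routes
    curl -k -u 'admin:<password>' -X PUT \
      -H 'Content-Type: application/json' \
      -d '{
            "display_name": "T1-Gateway",
            "tier0_path": "/infra/tier-0s/T0-Gateway",
            "route_advertisement_types": ["TIER1_STATIC_ROUTES", "TIER1_CONNECTED"]
          }' \
      https://nsx-mgr-01.lab.local/policy/api/v1/infra/tier-1s/T1-Gateway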


Note - There is no need to advertise routes from Tier-0 to Tier-1, because Tier-1 gateways automatically have a static default route towards their connected Tier-0 gateway.


Note - A default 100.64.0.0/16 subnet is used for communication between a Tier-0 gateway and all Tier-1 gateways that are linked to it. There is no need to manually assign IP addresses. You will see the actual IP addresses automatically assigned to the link on the Tier-0 gateway side and on the Tier-1 gateway side.


Linking the Logical Segment Upstream to the Tier-1 Gateway


After creating a Tier-1 gateway, update the logical segment settings and link it upstream to the Tier-1 gateway.

Edit the segment1 settings and set the connectivity for the segment to "T1-Gateway".

The logical segment is now linked upstream to the Tier-1 gateway.
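In Policy API terms, linking the segment is a one-field change; a hedged sketch with placeholder names:

    # Attach segment1 to the Tier-1 gateway by setting its connectivity path
    curl -k -u 'admin:<password>' -X PATCH \
      -H 'Content-Type: application/json' \
      -d '{"connectivity_path": "/infra/tier-1s/T1-Gateway"}' \
      https://nsx-mgr-01.lab.local/policy/api/v1/infra/segments/segment1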


Connectivity Test


The test virtual machines attached to segment1 can communicate not only with each other over tunnels between hypervisors, but also with their default gateway and any other external targets behind the Tier-0 gateway.

Open an SSH session to the NSX Manager and log in with the administrative credentials. Run the <get logical-switch <VNI> mac-table> and <get logical-switch <VNI> arp-table> commands to view the MAC addresses and IP addresses of the test virtual machines learned on the logical segment.
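A typical session looks roughly like the following; the VNI and output are illustrative (you can find the segment's VNI on its details page in the UI):

    nsx-mgr-01> get logical-switch 71680 mac-table
    # Lists each VM MAC address with the TEP IP of the host where the VM resides
    nsx-mgr-01> get logical-switch 71680 arp-table
    # Lists the learned IP-to-MAC bindings for the test virtual machines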





Summary


NSX-T enables the creation of Layer 2 segments and Layer 3 gateways in software as logical constructs and embeds them in the hypervisor layer, abstracted from the underlying physical hardware. NSX-T creates virtual Layer 2 segments to provide connectivity between its services and the different virtual machines in the environment. The logical routing capability in the NSX-T platform provides the ability to interconnect both virtual and physical workloads deployed in different logical Layer 2 networks.


In addition to providing network virtualization, NSX-T also serves as an advanced security platform, providing a rich set of features to streamline the deployment of security solutions.


 
 
 
