NSX-T Data Center 2.5
Updated: Mar 31
VMware NSX Data Center is the network virtualization and security platform that enables the virtual cloud network, a software-defined approach to networking that extends across data centers, clouds and application frameworks. With NSX Data Center, networking and security are brought closer to the application wherever it’s running, from virtual machines to containers to bare metal. Like the operational model of VMs, networks can be provisioned and managed independent of underlying hardware. NSX Data Center reproduces the entire network model in software, enabling any network topology—from simple to complex multi-tier networks—to be created and provisioned in seconds. Users can create multiple virtual networks with diverse requirements, leveraging a combination of the services offered via NSX or from a broad ecosystem of third-party integrations—ranging from next-generation firewalls to performance management solutions—to build inherently more agile and secure environments. These services can then be extended to a variety of endpoints within and across clouds.
This post describes how to install the VMware NSX-T Data Center 2.5 release on vSphere. The information includes step-by-step configuration instructions and suggested best practices.
Preparing for Installation
The following resources are designed to help you plan your NSX-T deployment, and effectively manage your Software-defined Data Center and virtualization environment.
System Requirements for NSX-T Data Center. Before installing NSX-T, prepare the deployment environment to meet the system requirements.
VMware Product Interoperability Matrices. Provides details about the compatibility of current and earlier versions of VMware vSphere components, including NSX-T, ESXi, vCenter Server, and other VMware products.
vSphere Hardware and Guest Operating System Compatibility Guides. An online reference that shows what hardware, converged systems, operating systems, third-party applications, and VMware products are compatible with a specific version of a VMware software product.
VMware Configuration Maximums. When you configure, deploy, and operate your virtual and physical equipment, you must stay at or below the maximums supported by your product. The limits presented in the Configuration Maximums tool are tested limits supported by VMware.
Downloading NSX-T Data Center OVA Files
Download the NSX-T OVA files from the VMware Downloads website. VMware NSX-T Data Center is listed under Networking & Security. You need to download two OVA files: the NSX Manager for VMware ESXi OVA and the NSX Edge for VMware ESXi OVA.
Installing NSX Manager OVA
NSX Manager is the management component of NSX-T Data Center. NSX Manager provides a graphical user interface (GUI) and REST APIs for creating, configuring, and monitoring NSX-T Data Center components such as logical switches, logical routers, and firewalls. For high availability, NSX-T Data Center supports a management cluster of three NSX Managers. For a production environment, deploying a management cluster is recommended. For a proof-of-concept environment, you can deploy a single NSX Manager.
You can set up the NSX Manager virtual appliance by importing OVA to your vCenter server. Right-click the target host on which you want to deploy the appliance and select Deploy OVF Template to start the installation wizard.
Enter the OVA download URL or navigate to the saved OVA file.
Enter a name for the NSX Manager virtual machine. The name you enter appears in the vSphere inventory.
Select a compute resource on which to deploy the NSX Manager virtual appliance.
Verify the OVF template details.
Select a deployment configuration.
Select a datastore to store the NSX Manager appliance files.
Select a destination network for the source network. In other words, select the port group or destination network for the NSX Manager virtual appliance.
On the Customize template page, complete the deployment details. Enter the NSX Manager system root, CLI admin, and audit passwords. Enter the hostname of the NSX Manager; the hostname must be a valid domain name. Accept the default NSX Manager role for this VM, and enter the default gateway, management network IPv4 address, and management network netmask.
Enter the DNS Server and Domain Search list.
Enter the NTP server, and optionally enable SSH and allow root SSH login to the NSX Manager command line. By default, the SSH options are disabled for security reasons.
Verify that your custom OVF template specifications are accurate and click Finish to initiate the installation. The installation might take 7-8 minutes.
Once the NSX Manager virtual appliance is installed, start the virtual machine and launch the console to track the boot process. After the NSX Manager boots, log in to the CLI using the admin credentials.
Run the <get interface eth0> command to verify the IP settings applied to the virtual machine.
Run the <get certificate api thumbprint> command. The command output is an alphanumeric string that is unique to this NSX Manager and will be used later to join the NSX Edge with the management plane.
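The thumbprint reported by <get certificate api thumbprint> is the SHA-256 fingerprint of the Manager's API certificate. As a minimal illustration (a hypothetical helper, not an NSX tool), the same fingerprint can be derived from a PEM certificate with the Python standard library:

```python
import hashlib
import ssl

def api_thumbprint(pem_cert: str) -> str:
    """Return the SHA-256 fingerprint (hex) of a PEM-encoded certificate."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)  # strip PEM headers, base64-decode
    return hashlib.sha256(der).hexdigest()
```

Running this against the certificate served on the Manager's port 443 should match the CLI output, which is a quick way to confirm you captured the right value.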
Logging In to the Newly Created NSX Manager
After you install NSX Manager, you can use the user interface to perform other installation tasks. From a web browser, log in with admin credentials to the NSX Manager at https://<nsx-manager-ip-address>.
The first time you log in, the EULA appears. Read and accept the EULA terms, and select whether to join VMware's Customer Experience Improvement Program (CEIP).
In the NSX Manager UI, navigate to Home > Monitoring Dashboards > System and verify that the Management Cluster status is stable.
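The same health check is available over the REST API at the cluster-status endpoint. The stdlib-only Python sketch below builds an authenticated request; the manager address and credentials are placeholders, and it only constructs the request object, since actually calling urlopen requires the appliance to be reachable and its certificate trusted:

```python
import base64
import urllib.request

NSX_MANAGER = "nsx-manager.example.com"               # placeholder address
ADMIN_USER, ADMIN_PASS = "admin", "VMware1!VMware1!"  # placeholder credentials

def cluster_status_request(host: str, user: str, password: str) -> urllib.request.Request:
    """Build a basic-auth GET request for GET /api/v1/cluster/status."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"https://{host}/api/v1/cluster/status",
        headers={"Authorization": f"Basic {token}"},
    )

req = cluster_status_request(NSX_MANAGER, ADMIN_USER, ADMIN_PASS)
# urllib.request.urlopen(req) would return JSON containing the
# management cluster status reported in the UI dashboard
```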
You have installed the NSX Manager appliance. The virtual machine appears in the vCenter server inventory.
Installing NSX Edge OVA
NSX Edge provides routing services and connectivity to networks that are external to the NSX-T DC deployment. NSX Edge is required if you want to deploy Tier-0 and Tier-1 logical routers. It provides the compute backing for the configured logical routers.
Each logical router contains a services router (SR) and a distributed router (DR). A distributed router is replicated on all transport nodes that belong to the same transport zone. A services router is required if the logical router is going to be configured to perform services, such as NAT. All Tier-0 logical routers have a services router. A Tier-1 router can have a services router if needed based on your design considerations.
Transport zones control the reach of Layer 2 networks in NSX-T Data Center. Transport zones dictate which hosts and, therefore, which virtual machines can participate in the use of a particular network.
You can set up the NSX Edge virtual appliance by importing OVA to your vCenter server. Right-click the target host on which you want to deploy the appliance and select Deploy OVF Template to start the installation wizard.
Enter the OVA download URL or navigate to the saved OVA file.
Enter a name for the NSX Edge virtual machine. The name you type appears in the vSphere inventory.
Select a compute resource for the NSX Edge virtual appliance.
Verify the OVF template details.
Select a deployment configuration.
Select a datastore to store the NSX Edge appliance files.
Select a destination network for each source network. Select the port group or destination network for each network interface on the NSX Edge virtual appliance.
The Edge node virtual machine in NSX-T can have a total of four network interfaces. Network 0 is dedicated to management traffic, and the remaining interfaces are assigned to the DPDK fast path. These fast path interfaces carry overlay traffic and uplink traffic toward the top-of-rack (ToR) switches. For redundancy, two interfaces can be used for uplink traffic. In this topology, we will use one fast path interface for overlay traffic and one for uplink traffic. Network 3 is disconnected and will not be used.
On the Customize template page, complete the deployment details. Enter the NSX Edge system root, CLI admin, and audit passwords. Enter the hostname of the NSX Edge; the hostname must be a valid domain name. Then enter the default gateway, management network IPv4 address, and management network netmask.
Enter the DNS Server and Domain Search.
Enter the NTP server, and optionally enable SSH and allow root SSH login to the NSX Edge command line. By default, the SSH options are disabled for security reasons.
Note - Ignore VMC settings. Only enter VMC values for VMC deployments.
Verify that your custom OVA template specifications are accurate and click Finish to initiate the installation. The installation might take 7-8 minutes.
Once the NSX Edge virtual appliance is installed, start the virtual machine and launch the console to track the boot process. After the NSX Edge boots, log in to the CLI using the admin credentials.
Run the <get interface eth0> command to verify the IP settings applied to the virtual machine.
Joining the NSX Edge with the Management Plane
Joining NSX Edges with the management plane ensures that the NSX Manager and NSX Edges can communicate with each other.
On the NSX Edge node console, run the <join management-plane> command to join the management plane. The thumbprint value is the unique string we captured earlier on the NSX Manager by running the <get certificate api thumbprint> command.
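The join command takes the NSX Manager's IP address, an admin username, and the thumbprint. A sketch of the syntax, with placeholders in angle brackets (the CLI prompts for the admin password after you run it):

```
NSX-Edge> join management-plane <nsx-manager-ip> username admin thumbprint <nsx-manager-api-thumbprint>
```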
Notice that the console reports the NSX Edge node successfully registered as a fabric node.
In the NSX Manager UI, navigate to Home > Monitoring Dashboards > System and verify that the Edge node appears in the inventory.
Adding a Compute Manager
You can add vCenter Server as a Compute Manager. NSX-T Data Center polls compute managers to collect cluster information from the vCenter Server inventory.
Navigate to System > Fabric > Compute Managers and add the vCenter Server.
Complete the compute manager details. Type a name to identify the vCenter Server. Type the IP address or FQDN of the vCenter Server. Type the vCenter Server login credentials. Leave the thumbprint value blank; you will be prompted to accept the server-provided thumbprint.
It takes some time to register the compute manager with vCenter Server and for the connection status to appear as UP.
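Compute manager registration can also be scripted. The sketch below builds the request body for POST /api/v1/fabric/compute-managers; the field names follow the NSX-T 2.5 API, while the name, address, and credentials are hypothetical lab values:

```python
import json

def compute_manager_body(name: str, server: str, user: str, password: str) -> str:
    """Build the JSON body to register a vCenter Server as a compute manager."""
    body = {
        "display_name": name,
        "server": server,              # vCenter IP address or FQDN
        "origin_type": "vCenter",
        "credential": {
            "credential_type": "UsernamePasswordLoginCredential",
            "username": user,
            "password": password,
            # "thumbprint" omitted here: like the UI, the API flow lets you
            # accept the server-provided thumbprint on first contact
        },
    }
    return json.dumps(body)

payload = compute_manager_body(
    "vcsa-01", "vcenter.example.com", "administrator@vsphere.local", "secret"
)
```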
Note - A vCenter Server instance can only register with one NSX Manager. NSX-T Data Center does not support registering the same vCenter Server with more than one NSX Manager.
After the vCenter Server is registered, do not power off or delete the NSX Manager virtual machine without deleting the compute manager first. Otherwise, when you deploy a new NSX Manager, you will not be able to register the same vCenter Server again; you will get an error stating that the vCenter Server is already registered with another NSX Manager.
Navigate to Advanced Networking & Security > Fabric > Nodes and click the Host Transport Nodes tab. Notice that NSX-T Data Center polls compute managers to collect cluster information from the vCenter Server inventory.
Creating IP Pools for Tunnel Endpoint IP Addresses
Tunnel endpoints are the source and destination IP addresses used in the external IP header to identify the hypervisor hosts originating and terminating the NSX-T Data Center encapsulation of overlay frames. You can use either DHCP or manually configured IP pools for tunnel endpoint IP addresses.
Navigate to Advanced Networking & Security > Inventory > Groups > IP Pools and add two IP pools: an ESXi tunnel endpoint IP pool and an Edge tunnel endpoint IP pool.
Add the ESXi tunnel endpoint IP pool. This IP pool provides TEP IPs for ESXi hosts when they are configured as transport nodes.
Add the Edge tunnel endpoint IP pool. This IP pool provides TEP IPs for the Edge nodes when they are configured as transport nodes.
You have configured both ESXi and Edge tunnel endpoint IP pools. These pools appear on the IP Pools page.
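For reference, the equivalent REST call is POST /api/v1/pools/ip-pools, one per pool. The bodies below show the expected shape; all names, subnets, and addresses are hypothetical lab values:

```python
# Request body sketches for POST /api/v1/pools/ip-pools (NSX-T 2.5).
# Every address below is a hypothetical example; substitute your own TEP subnets.
esxi_tep_pool = {
    "display_name": "ESXi-TEP-Pool",
    "subnets": [{
        "cidr": "172.16.10.0/24",
        "gateway_ip": "172.16.10.1",
        "allocation_ranges": [{"start": "172.16.10.11", "end": "172.16.10.50"}],
    }],
}
edge_tep_pool = {
    "display_name": "Edge-TEP-Pool",
    "subnets": [{
        "cidr": "172.16.20.0/24",
        "gateway_ip": "172.16.20.1",
        "allocation_ranges": [{"start": "172.16.20.11", "end": "172.16.20.50"}],
    }],
}
```

Keeping the host and Edge TEP pools in separate subnets, as here, makes it easy to tell host and Edge tunnel traffic apart when troubleshooting.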
Understanding Transport Zones
Transport zones dictate which hosts and, therefore, which virtual machines can participate in the use of a particular network. A transport zone does this by limiting the hosts that can "see" a logical switch—and, therefore, which virtual machines can be attached to the logical switch.
An NSX-T Data Center environment can contain one or more transport zones based on your requirements. A transport zone can span one or more host clusters. A host can belong to multiple transport zones. A logical switch can belong to only one transport zone.
Transport zones control the reach of Layer 2 networks. NSX-T Data Center does not allow connection of virtual machines that are in different transport zones in the Layer 2 network. The span of a logical switch is limited to a transport zone, so virtual machines in different transport zones cannot be on the same Layer 2 network.
There are two types of transport zones:
Overlay for internal NSX-T Data Center tunneling between transport nodes.
VLAN for uplinks external to NSX-T Data Center.
The overlay transport zone is used by both host transport nodes and NSX Edges. When a host or NSX Edge transport node is added to an overlay transport zone, an N-VDS, a software switch, is installed on the host or NSX Edge.
The VLAN transport zone is used by the NSX Edge and host transport nodes for their VLAN uplinks. When an NSX Edge is added to a VLAN transport zone, a VLAN N-VDS is installed on the NSX Edge.
The purpose of the N-VDS software switch is to bind logical router uplinks and downlinks to physical NICs. For each transport zone that a host or NSX Edge transport node belongs to, a single N-VDS is installed on the host or NSX Edge transport node.
Creating Overlay Transport Zone
Navigate to System > Fabric > Transport Zones and add the required Overlay Transport Zone.
Enter a name for the transport zone. Enter a name for the N-VDS. For the traffic type, select Overlay.
View the new transport zone on the Transport Zones page.
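The API equivalent is POST /api/v1/transport-zones. The body below shows the three fields that matter; the display name and N-VDS name are example values:

```python
# Request body sketch for POST /api/v1/transport-zones (NSX-T 2.5).
# "TZ-Overlay" and "nvds-overlay" are example names, not defaults.
overlay_tz = {
    "display_name": "TZ-Overlay",
    "host_switch_name": "nvds-overlay",  # the N-VDS name entered in the UI
    "transport_type": "OVERLAY",         # "VLAN" for a VLAN transport zone
}
```

Note that the host_switch_name here is the same N-VDS name you will select later when preparing hosts and Edge nodes as transport nodes.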
Configuring ESXi Hosts as Transport Nodes
Navigate to System > Fabric > Nodes > Host Transport Nodes to configure NSX on the desired ESXi hosts.
On the Host Details page, enter a host name or use the host IP address, populated by default, and click Next.
On the Configure NSX page, select the N-VDS, NIOC profile, uplink profile, LLDP profile, ESXi TEP IP pool, and the physical NIC.
Make sure that the physical NIC is not already in use (for example, by a standard vSwitch or a vSphere distributed switch). Otherwise, the transport node state remains in “partial success”, and the fabric node LCP connectivity fails to establish.
ESXi hosts are configured as Transport Nodes. View the Configuration state on the Host Transport Nodes page.
After configuring ESXi hosts as Transport Nodes, open an SSH session to each ESXi host configured for NSX and login using the root credentials.
Run the <esxcli network ip interface ipv4 get> command to locate the TEP or tunnel endpoint IP address assigned during the host preparation.
When a host is added to an Overlay Transport Zone as a transport node, an N-VDS is installed on the host. The unused vmnic, or physical NIC, is added to the N-VDS after it is created.
Understanding Logical Switches or Segments
Logical Switches or Segments in an NSX-T DC environment are similar to VLANs, in that they provide network connections to which you can attach virtual machines. The virtual machines can then communicate with each other over tunnels between hypervisors if the virtual machines are connected to the same Segment or Logical Switch. Each Segment has a virtual network identifier (VNI), similar to a VLAN ID. Entities such as routers, virtual machines, or containers can connect to a segment through the segment ports. The NSX N-VDS, configured on each transport node, can span multiple hosts to provide the Layer 2 functionality.
Configuring a Logical Switch
Navigate to Advanced Networking & Security > Networking > Switching > Switches to add a new Logical Switch.
Enter a name for the new Logical Switch, select the Overlay Transport Zone, and click Add.
View the new Logical Switch on the Switches page.
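The same switch can be created with POST /api/v1/logical-switches. The body below is a sketch; the transport zone ID is a placeholder that you would look up first (GET /api/v1/transport-zones), and the name matches the Segment-100 switch used later in this post:

```python
# Request body sketch for POST /api/v1/logical-switches (NSX-T 2.5).
overlay_ls = {
    "display_name": "Segment-100",
    "transport_zone_id": "<overlay-tz-uuid>",  # placeholder UUID of the Overlay TZ
    "admin_state": "UP",
    "replication_mode": "MTEP",  # hierarchical two-tier BUM replication (the default)
}
```

The VNI is not specified in the request; NSX Manager assigns it automatically from its VNI pool and returns it in the response.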
After adding a new Logical Switch, a corresponding Port Group is created automatically in the vCenter server inventory.
Connect two test virtual machines to the Logical Switch. One virtual machine is on one ESXi host and the other virtual machine is on another ESXi host. Both ESXi hosts are configured for NSX.
Both test virtual machines are connected to the same Segment or Logical Switch and can then communicate with each other over tunnels between hypervisors.
Use Traceflow to inspect the path of a packet as it travels from one logical port to another.
Note - Traceflow is only supported in overlay-backed NSX environments.
Configuring NSX Edge as a Transport Node
After manually installing an NSX Edge on ESXi, add the NSX Edge to the NSX-T Data Center fabric as a transport node.
An NSX Edge can belong to one Overlay Transport Zone and multiple VLAN Transport Zones. If a virtual machine requires access to the outside world, the NSX Edge must belong to the same transport zone that the virtual machine's Logical Switch belongs to. Generally, the NSX Edge belongs to at least one VLAN transport zone to provide the uplink access.
Creating Uplink Profile for Edge VM
An uplink profile defines policies for the links from hypervisor hosts to NSX-T logical switches, or from NSX Edge nodes to top-of-rack (ToR) switches.
Navigate to System > Fabric > Profiles > Uplink Profiles and add a new Uplink Profile.
Use the default Teaming Policy and specify only one active uplink and no standby uplink.
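The API equivalent is POST /api/v1/host-switch-profiles with an UplinkHostSwitchProfile body. The sketch below mirrors the UI settings above, with a single active uplink and no standby; the profile name, transport VLAN, and MTU are hypothetical lab values:

```python
# Request body sketch for POST /api/v1/host-switch-profiles (NSX-T 2.5).
# Name, VLAN, and MTU are example values for this topology.
edge_uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile",
    "teaming": {
        "policy": "FAILOVER_ORDER",  # the default teaming policy
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        # no standby_list: one active uplink and no standby, as configured above
    },
    "transport_vlan": 0,
    "mtu": 1600,  # overlay (Geneve) traffic needs an MTU of at least 1600
}
```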
Creating VLAN Transport Zone
Navigate to System > Fabric > Transport Zones and add the required VLAN Transport zone.
Enter a name for the Transport Zone. Enter a name for the N-VDS. For the traffic type, select VLAN.
View the new Transport Zone on the Transport Zones page.
Understanding NSX Edge Virtual Appliance Networking
When you install NSX Edge as a virtual appliance, internal interfaces are created, called fp-ethX, where X is 0, 1, 2, and 3. These interfaces are allocated for uplinks to ToR switches and for NSX-T Data Center overlay tunneling.
When you create the NSX Edge transport node, you can select fp-ethX interfaces to associate with the Uplinks and the Overlay tunnel. You can decide how to use the fp-ethX interfaces.
On the vSphere distributed switch or vSphere standard switch, you must allocate at least two vmnics to the NSX Edge: one for NSX Edge management and one for uplinks and tunnels.
In my topology, fp-eth0 is used for the Edge Overlay tunnel. fp-eth1 is used for the Edge VLAN Uplink. fp-eth2 and fp-eth3 are not used. vNIC1 or Network Adapter 1 is assigned to the management network.
Configuring NSX Edge Node as a Transport Node
Navigate to System > Fabric > Nodes > Edge Transport Nodes to configure NSX on the desired NSX Edge node.
On the General tab, enter a name for the Edge transport node. From the Available column, select the transport zones and click the right arrow to move them to the Selected column. Make sure to select both the Overlay and VLAN Transport Zones.
On the N-VDS tab, add the first Edge switch. Select the N-VDS associated with the Overlay Transport Zone, the uplink profile, the Edge TEP IP pool, and the virtual NIC. Do not click Save yet.
Add the second Edge Switch. Select the N-VDS associated with the VLAN Transport Zone, Uplink Profile, and the virtual NIC.
NSX Edge Node is configured as a Transport Node. View the Configuration state on the Edge Transport Nodes page.
Creating NSX Edge Cluster
Having a multi-node cluster of NSX Edges helps ensure that at least one NSX Edge is always available.
To create a Tier-0 logical router, or a Tier-1 router with stateful services such as NAT or load balancing, you must associate it with an NSX Edge cluster. Therefore, even if you have only one NSX Edge, it must still belong to an NSX Edge cluster to be useful.
Note - An NSX Edge transport node can be added to only one NSX Edge cluster.
Navigate to System > Fabric > Nodes > Edge Clusters and add a new Edge Cluster.
Enter a name for the NSX Edge cluster. Select an NSX Edge cluster profile from the drop-down menu. In the Member Type drop-down menu, select Edge Node. From the Available column, select the NSX Edges and click the right arrow to move them to the Selected column.
View the new Edge Cluster on the Edge Clusters page.
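The API equivalent is POST /api/v1/edge-clusters, where members are referenced by their transport node IDs. A sketch with placeholder values:

```python
# Request body sketch for POST /api/v1/edge-clusters (NSX-T 2.5).
# The member ID is a placeholder for the Edge transport node's UUID,
# which you can look up with GET /api/v1/transport-nodes.
edge_cluster = {
    "display_name": "edge-cluster-01",
    "members": [
        {"transport_node_id": "<edge-transport-node-uuid>"},
    ],
}
```

A single-member cluster like this is enough for the lab topology in this post; for production, add a second Edge transport node to the members list for availability.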
Creating a VLAN Logical Switch for the Uplinks
The NSX Edge uplink ports need to be connected to a VLAN-backed Logical Switch in a VLAN Transport Zone. The VLAN-backed Logical Switch is in the same Layer 2 domain as the interface on the ToR switch.
Navigate to Advanced Networking & Security > Switching > Switches and add a new Logical Switch.
Enter a name for the Logical Switch and select the VLAN Transport Zone.
View the new VLAN backed Logical Switch on the Switches page.
Creating and Configuring Tier-0 Gateway
The Tier-0 gateway in the NSX-T Edge cluster provides a gateway service between the logical and physical network. The NSX-T Edge cluster can back multiple Tier-0 gateways.
Navigate to Advanced Networking & Security > Routers > Routers and add a new Tier-0 Router.
Enter a name for the Tier-0 Router, select the Edge Cluster, and select the high availability mode.
View the new Tier-0 Router on the Routers page.
Adding a Downlink Interface to the Tier-0 Router
After creating a Tier-0 Router, add a Router Port to it to connect downstream to the Segment-100 Overlay Logical Switch.
Enter a name for the new Router Port, select the type, select the Logical Switch, and configure the Tier-0 Router downlink port IP address.
The Logical Switch is now connected upstream to the Tier-0 Router. The test virtual machines connected to the same Segment or Logical Switch not only can communicate with each other over tunnels between hypervisors, but also can communicate with their default gateway, or the downlink port on the Tier-0 Router.
Navigate to Advanced Networking & Security > Networking > Switching > Switches and view the Logical Switch ports status and statistics.
Open an SSH session to the NSX Manager and log in with the administrative credentials. Run the <get logical-switch <VNI> mac-table> and <get logical-switch <VNI> arp-table> commands to view the MAC addresses and IP addresses of the test virtual machines learned on the Logical Switch.
Adding an Uplink Interface to Tier-0 Router
After connecting the Tier-0 Router downstream to the Overlay Segment or Logical Switch, add a Router Port to the Tier-0 Router to connect upstream to the ToR switch.
Enter a name for the new Router Port, select the type, select the Logical Switch, and configure the Tier-0 Router uplink port IP address.
The Tier-0 Router is now connected upstream to the ToR switch. The Tier-0 Router Uplink IP address is also reachable.
After adding an Uplink Interface on the Tier-0 Router, enable Route Redistribution. This is required to advertise routes northbound.
Enter a name for the new redistribution criteria and select both T0 Connected and T0 Static as sources.
Last but not least, enable routing on the Tier-0 Router. NSX-T supports static routing and the dynamic routing protocol BGP on Tier-0 Routers for IPv4 and IPv6 workloads. For simplicity, I will configure a default Static Route on the Tier-0 router to external networks.
Enter the Network IP/mask. Add the Next Hop IP address and specify the Logical Router Port.
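The static route can also be created with POST /api/v1/logical-routers/&lt;router-id&gt;/routing/static-routes. A sketch of the body, where the next-hop address is a placeholder for the physical router:

```python
# Request body sketch for adding a default static route to a Tier-0 router
# (NSX-T 2.5). The next-hop IP is a hypothetical physical-router address.
default_route = {
    "network": "0.0.0.0/0",  # default route toward external networks
    "next_hops": [
        {"ip_address": "192.168.100.1", "administrative_distance": 1},
    ],
}
```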
The Tier-0 Router is now configured with a static route toward external subnets, with the physical router as the next hop. Open an SSH session to the NSX Edge and log in with the administrative credentials. View the routes and verify outbound connectivity.