Ramy Afifi

Micro-segmentation with NSX-T

Updated: Jul 30, 2021

Micro-segmentation provided by NSX-T DC enables organizations to logically divide the data center into distinct security segments, down to the individual workload level. It establishes a security perimeter around each virtual machine or container workload with a dynamically defined policy that enforces a least-privilege security model. This is known as the Zero Trust Model. It restricts an attacker’s ability to move laterally in the data center, even after the perimeter has been breached.


Micro-segmentation with NSX-T DC supports Layer 3, Layer 4, and Layer 7 (App ID) firewalling, Identity Firewalling, IDS/IPS, and Service Insertion through integration with the partner ecosystem. The platform uses the following capabilities to deliver these outcomes:


A Uniform Security Policy Model

A uniform security policy model for on-premises and cloud deployments, supporting multiple hypervisors (ESXi and KVM) and multiple workload types, with granularity down to VM, container, and bare-metal attributes.


Intelligent Grouping

Groups abstract workload grouping from the underlying infrastructure topology. This allows a security policy to be written for either a workload or a zone (PCI zone, DMZ, or production environment). A Group is a logical construct that collects static and dynamic elements into a common container, based on various criteria including tags, virtual machine names, subnets, segments, segment ports, and AD Groups. The use of Groups gives more flexibility as an environment changes over time. Rules stay more constant for a given policy model, even as the data center environment changes: the addition or deletion of workloads affects group membership alone, not the rules.
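As a rough illustration, the sketch below shows how a tag-driven Group might be created through the NSX-T Policy API with Python. The manager URL, credentials, group name, and the exact payload fields are assumptions based on the general Policy API pattern; verify the endpoint and schema against the API guide for your NSX-T release before using it.

```python
# Hypothetical sketch: create a Group whose membership is driven by a VM tag.
# Endpoint path, payload fields, and credentials are assumptions for illustration.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address
AUTH = ("admin", "password")                  # placeholder credentials

group = {
    "display_name": "web-tier",
    "expression": [
        {
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "Tag",
            "operator": "EQUALS",
            "value": "web",   # VMs tagged "web" join the group dynamically
        }
    ],
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/domains/default/groups/web-tier",
    json=group,
    auth=AUTH,
    verify=False,   # lab only; use proper certificates in production
)
resp.raise_for_status()
```

Because membership is expression-based, any VM that later receives the "web" tag is picked up automatically, and the rules referencing the Group never need to change.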


Distributed Firewall [DFW]

DFW Enforcement at the Hypervisor Level

The NSX-T DC distributed firewall (DFW) provides stateful protection of the workload at the vNIC level. DFW enforcement occurs in the hypervisor kernel, helping deliver micro-segmentation. The scope of policy enforcement can be selective, with application- or workload-level granularity.


DFW Architecture and Components


NSX-T DFW Architecture & Components

The distributed firewall [DFW] monitors all East-West traffic between your virtual machines. When a DFW policy is configured in NSX-T DC, the management plane service validates the configuration and locally stores a persistent copy. The management plane then pushes user-published policies to the control plane service within the Manager Cluster. If the policy contains objects such as Segments or Groups, the control plane converts them into IP addresses using an object-to-IP mapping table. This table is maintained by the control plane and updated using an IP discovery mechanism. Once the policy is converted into a set of rules based on IP addresses, the central control plane (CCP) within the Manager Cluster pushes the rules to the local control plane (LCP) on all the NSX-T transport nodes [ESXi/KVM hosts].


The NSX-T DC transport nodes comprise the distributed data plane, with DFW enforcement done at the hypervisor kernel level. On each transport node, once the local control plane (LCP) has received the policy configuration from the CCP, it pushes the firewall policy and rules to the data plane filters (in kernel) for each virtual NIC. Using the “Applied To” field in the rule or section, which defines the scope of enforcement, the LCP ensures that only relevant DFW rules are programmed on the relevant virtual NICs, rather than programming every rule everywhere, which would be a sub-optimal use of hypervisor resources.
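To make that control-plane behavior concrete, here is a small, purely conceptual Python sketch (not NSX code): Group objects are expanded into IP addresses via an object-to-IP table, and an "Applied To" scope limits which rules are programmed on a given vNIC. The group names, addresses, and the "dmz" scope are invented for the example.

```python
# Conceptual model only - not actual NSX-T code.

# Object-to-IP mapping table maintained by the control plane (via IP discovery).
group_to_ips = {
    "web-tier": {"10.0.1.11", "10.0.1.12"},
    "db-tier": {"10.0.2.21"},
}

# User-published rules reference Groups and carry an "Applied To" scope.
policy_rules = [
    {"id": 1, "src": "web-tier", "dst": "db-tier", "port": 3306,
     "action": "ALLOW", "applied_to": {"web-tier", "db-tier"}},
    {"id": 2, "src": "any", "dst": "any", "port": None,
     "action": "DROP", "applied_to": {"dmz"}},
]

def expand(obj):
    """Resolve a Group name into IP addresses using the object-to-IP table."""
    return group_to_ips.get(obj, {obj})

def rules_for_vnic(vnic_groups):
    """LCP-style filtering: only program rules whose 'Applied To' scope
    intersects the groups this vNIC belongs to."""
    programmed = []
    for rule in policy_rules:
        if rule["applied_to"] & vnic_groups:
            programmed.append({
                "id": rule["id"],
                "src_ips": expand(rule["src"]),
                "dst_ips": expand(rule["dst"]),
                "port": rule["port"],
                "action": rule["action"],
            })
    return programmed

# A vNIC in the web tier receives rule 1, but not the DMZ-scoped rule 2.
print(rules_for_vnic({"web-tier"}))
```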


Data Plane Implementation on Transport Nodes


The NSX-T DC management and control plane components are identical for ESXi and KVM hosts. The data plane, however, uses a different packet-handling implementation on each platform.

ESXi Host - Data Plane Implementation

NSX-T DC uses N-VDS on ESXi hosts, which is derived from vCenter VDS, along with the VMware Internetworking Service Insertion Platform (vSIP) kernel module for firewalling.


Note - Previously, NSX-T consumed an N-VDS while vSphere networking consumed a separate VDS. With NSX-T release 3.0, it is possible to use a single Converged VDS (the native VDS built into vSphere 7.0) for both NSX-T DC 3.0 and vSphere 7 networking.

KVM Host - Data Plane Implementation

For KVM, the N-VDS leverages Open vSwitch (OVS) and its utilities. In addition to the LCP, KVM hosts run a component called the NSX agent, with both running as user-space agents. When the LCP receives DFW policy from the CCP, it sends it to the NSX agent, which processes the policy messages and converts them to a format appropriate for the OVS data path. The NSX agent then programs the policy rules onto the OVS data path using OpenFlow messages. For stateful DFW rules, NSX-T DC uses the Linux conntrack utilities to keep track of the state of connections permitted by a stateful firewall rule. For DFW policy rule logging, NSX-T DC uses the ovs-fwd module.


DFW Status & Rule Statistics


A typical DFW policy configuration consists of one or more sections with a set of rules using objects like Groups, Segments, and application-level gateways (ALGs). For monitoring and troubleshooting, the management plane interacts with a host-based management plane agent (MPA) to retrieve DFW status along with rule and flow statistics. The management plane also collects an inventory of all hosted virtualized workloads on NSX-T transport nodes; this inventory is dynamically collected and updated from all NSX-T transport nodes.


DFW Policy Lookup and Packet Flow


NSX-T DFW Policy Lookup & Packet Flow

In the data path, the DFW maintains two tables: a Rule Table and a Flow Table (also called a Connection Tracker Table). The LCP populates the Rule Table with the configured policy rules, while the Flow Table is updated dynamically to cache flows permitted by the Rule Table. The NSX-T DFW allows a policy to be stateful or stateless, with section-level granularity in the DFW Rule Table. The Flow Table is populated only for stateful policy rules; it contains no information on stateless policies. This applies to both ESXi and KVM environments.

NSX-T DFW rules are enforced as follows:

  1. A flow hits the DFW filter on the vNIC.

  2. The DFW performs a Flow Table lookup first, to check for a state match against an existing flow.

  3. The first packet of a new session results in a Flow Table miss.

  4. The DFW then performs a Rule Table lookup, in top-down order, for a 5-tuple match.

  5. Rules are processed in top-down order. Each packet is checked against the top rule in the Rule Table before moving on to subsequent rules in the table.

  6. The first rule in the table that matches the traffic parameters is enforced. The search is then terminated, so no subsequent rules will be examined or enforced.

  7. In addition, the Flow Table is updated with the new flow state for a permitted flow. Subsequent packets in this flow are checked against this entry for a state match.

Because of this behavior, it is always recommended to put the most granular policies at the top of the Rule Table; this ensures that more specific policies are enforced first. The DFW default policy rule, located at the bottom of the Rule Table, is a catchall rule: packets not matching any other rule will be enforced by the default rule, which is set to “allow” by default. This ensures that VM-to-VM communication is not broken during staging or migration phases. It is a best practice to then change this default rule to a “drop” action and enforce access control through a whitelisting model (i.e., only traffic defined in the firewall policy is allowed onto the network).
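The lookup order described above can be modeled in a few lines of Python. This is a conceptual sketch of the algorithm only, not the actual data-plane implementation; the addresses and rules are invented for illustration.

```python
# Conceptual sketch of DFW packet processing: Flow Table lookup first,
# then a top-down, first-match Rule Table lookup, then state caching
# for permitted flows.
import ipaddress

RULE_TABLE = [
    # Most granular rules first; the catchall default rule sits at the bottom.
    {"src": "10.0.1.0/24", "dst": "10.0.2.21", "port": 3306, "action": "ALLOW"},
    {"src": "any", "dst": "any", "port": None, "action": "DROP"},  # default rule
]

FLOW_TABLE = {}  # 5-tuple -> cached action for permitted, stateful flows

def matches(rule, pkt):
    if rule["src"] != "any" and ipaddress.ip_address(pkt["src"]) not in ipaddress.ip_network(rule["src"]):
        return False
    if rule["dst"] != "any" and pkt["dst"] != rule["dst"]:
        return False
    return rule["port"] in (None, pkt["port"])

def process(pkt):
    key = (pkt["src"], pkt["dst"], pkt["port"], pkt["proto"])
    if key in FLOW_TABLE:                 # Flow Table hit: state match
        return FLOW_TABLE[key]
    for rule in RULE_TABLE:               # Flow Table miss: Rule Table lookup
        if matches(rule, pkt):            # first match wins, search stops
            if rule["action"] == "ALLOW":
                FLOW_TABLE[key] = "ALLOW" # cache new flow state
            return rule["action"]

print(process({"src": "10.0.1.11", "dst": "10.0.2.21", "port": 3306, "proto": "tcp"}))  # ALLOW
print(process({"src": "10.0.3.5", "dst": "10.0.2.21", "port": 22, "proto": "tcp"}))     # DROP
```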


Layer 7 App-ID


NSX-T DFW with Layer 7 Context Profiles

Layer 7 App IDs identify which application generated a particular packet or flow, independent of the port being used. Rule enforcement based on Layer 7 App IDs enables users to allow or deny applications on any port, or to force applications to run on their standard port.


Layer 7 App IDs are used in creating Context Profiles. Context Profiles are used in DFW rules or Gateway FW rules, and are supported on ESXi and KVM hosts. Layer 7 App IDs can be combined with FQDN whitelisting and blacklisting. Layer 7 App IDs can also be combined with the Application Level Gateway [ALG] service to dynamically open ports for certain protocols [FTP/RPC/Oracle]. An ALG service monitors the control connection to detect the ports required for the data channel; by inspecting that connection traffic, it can open those ports dynamically for that connection without requiring the entire port range to be specified and opened in the policy.


NSX-T DC provides built-in attributes (App IDs) for common infrastructure and enterprise applications. Layer 7 App IDs include protocol versions (SSL/TLS and CIFS/SMB) and Cipher Suite (SSL/TLS). The Deep Packet Inspection [DPI] engine enables matching the packet payload against defined patterns, commonly referred to as signatures. Signature-based identification and enforcement enables customers to match not just the particular application/protocol a flow belongs to, but also the version of that protocol, for example TLS version 1.0, TLS version 1.2, or different versions of CIFS traffic. This allows customers to gain visibility into, or restrict the use of, protocols with known vulnerabilities across all deployed applications and their E-W flows within the data center.


Rule processing for an incoming packet:


  1. Upon entering a DFW or Gateway filter, packets are looked up in the flow table based on 5-tuple.

  2. If no flow/state is found, the flow is matched against the rule-table based on 5-tuple and an entry is created in the flow table.

  3. If the flow matches a rule with a Layer 7 service object, the flow table state is marked as “DPI In Progress.”

  4. The traffic is then punted to the DPI engine. The DPI Engine determines the App Id.

  5. After the App Id has been determined, the DPI Engine sends down the attribute which is inserted into the context table for this flow. The "DPI In Progress" flag is removed, and traffic is no longer punted to the DPI engine.

  6. The flow (now with App Id) is reevaluated against all rules that match the App Id, starting with the original rule that was matched based on 5-tuple, and the first fully matched L4/L7 rule is picked up. The appropriate action is taken (allow/deny/reject) and the flow table entry is updated accordingly.


In summary, when a context profile has been used in a rule, any traffic coming in from a virtual machine is matched against the rule table based on 5-tuple. If the rule that matches the flow also includes a Layer 7 context profile, the packet is redirected to a user-space component called the DPI engine. A few subsequent packets are punted to the DPI engine for each flow, and after it has determined the App ID, this information is stored in the in-kernel context table. When the next packet for the flow comes in, the information in the context table is compared with the rule table again, and the flow is matched on both the 5-tuple and the Layer 7 App ID. The appropriate action as defined in the fully matched rule is taken: for an ALLOW rule, all subsequent packets for the flow are processed in the kernel and matched against the connection table; for a fully matched DROP rule, a reject packet is generated. Logs generated by the firewall will include the Layer 7 App ID and applicable URL, if that flow was punted to DPI.
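The following conceptual Python sketch mirrors that sequence: punt a new flow to DPI, record the detected App ID in a context table, and then re-evaluate the flow against L7-aware rules. The classifier, flow keys, and rules are invented placeholders, not NSX internals.

```python
# Conceptual sketch of the Layer 7 lookup: a 5-tuple match on a rule with a
# context profile punts the flow to DPI until the App ID is known, after
# which the fully matched L4/L7 rule decides the action.

CONTEXT_TABLE = {}   # flow key -> detected App ID
FLOW_STATE = {}      # flow key -> "DPI_IN_PROGRESS" or final action

def dpi_engine(payload):
    """Stand-in for the user-space DPI engine: classify a payload sample."""
    return "TLS_V1_2" if payload.startswith(b"\x16\x03\x03") else "UNKNOWN"

def l7_lookup(flow_key, payload, rules):
    app_id = CONTEXT_TABLE.get(flow_key)
    if app_id is None:
        FLOW_STATE[flow_key] = "DPI_IN_PROGRESS"       # punt packets to DPI
        CONTEXT_TABLE[flow_key] = dpi_engine(payload)  # store detected App ID
        app_id = CONTEXT_TABLE[flow_key]
    for rule in rules:                                 # re-evaluate with App ID
        if rule["app_id"] in (app_id, "any"):
            FLOW_STATE[flow_key] = rule["action"]
            return rule["action"]

rules = [{"app_id": "TLS_V1_2", "action": "ALLOW"},
         {"app_id": "any", "action": "REJECT"}]
print(l7_lookup(("10.0.1.11", "10.0.2.21", 443, "tcp"), b"\x16\x03\x03...", rules))  # ALLOW
```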


Filtering Specific Domains [FQDN/URLs]


The DFW supports Fully Qualified Domain Names (FQDNs) or URLs, which can be specified in a context profile for FQDN whitelisting or blacklisting. An FQDN can be configured together with other attributes in a context profile, or each can be set in a different context profile. After a context profile has been defined, it can be applied to one or more distributed firewall rules.


You must set up a DNS rule first, and then the FQDN whitelist or blacklist rule below it, because NSX-T Data Center uses DNS snooping to obtain a mapping between the IP address and the FQDN. SpoofGuard should be enabled across the switch on all logical ports to protect against the risk of DNS spoofing attacks, in which a malicious VM injects spoofed DNS responses to redirect traffic to malicious endpoints or to bypass the firewall. For more information, refer to the NSX-T documentation on SpoofGuard.
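The sketch below is a purely conceptual Python model of why the DNS rule must come first: snooped DNS answers build an FQDN-to-IP map, and FQDN rules then match on the destination IP through that map. The domain names and addresses are illustrative only.

```python
# Conceptual sketch of DNS snooping: DNS responses seen by the DNS rule are
# used to build an FQDN -> IP mapping, which later lets FQDN rules match on
# destination IP addresses.

fqdn_to_ips = {}

def snoop_dns_response(fqdn, resolved_ips):
    """Record the mapping learned from a snooped DNS answer."""
    fqdn_to_ips.setdefault(fqdn, set()).update(resolved_ips)

def fqdn_rule_matches(dst_ip, allowed_fqdns):
    """An FQDN whitelist rule matches if the destination IP was resolved
    from one of the permitted domain names."""
    return any(dst_ip in fqdn_to_ips.get(f, set()) for f in allowed_fqdns)

snoop_dns_response("outlook.office365.com", {"52.96.0.10"})
print(fqdn_rule_matches("52.96.0.10", {"outlook.office365.com"}))     # True
print(fqdn_rule_matches("93.184.216.34", {"outlook.office365.com"}))  # False
```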


Currently, a predefined list of domains is supported. You can see the list of FQDNs when you add a new context profile with an attribute of type Domain (FQDN) Name.


In the current release, ESXi and KVM are supported. ESXi supports the drop/reject action for URL rules, while KVM supports the whitelisting feature. Gateway firewall rules do not support the use of FQDN attributes or other sub-attributes.


Identity Firewall [IDFW]


NSX-T DFW – Identity Firewall

IDFW enhances the traditional firewall by allowing firewall rules based on user identity. Identity-based firewall rules are determined by Active Directory (AD) group membership. With IDFW, an NSX administrator can create Active Directory user-based Distributed Firewall (DFW) rules.


IDFW can be used for Virtual Desktops (VDI) or Remote Desktop Sessions (RDSH), enabling simultaneous logins by multiple users with different application access. VDI management systems control whether users are granted access to the VDI virtual machines. NSX-T DC controls access to the destination servers from the source virtual machine (VM), which has IDFW enabled. With RDSH, administrators create security groups with different users in Active Directory (AD), and allow or deny those users access to an application server based on their role. For example, Human Resources and Engineering can connect to the same RDSH server and have access to different applications from that server.


A high-level overview of the IDFW configuration workflow begins with preparing the infrastructure. Preparation includes installing the host preparation components on each protected cluster and setting up Active Directory synchronization so that NSX-T DC can consume AD users and groups. Next, IDFW must know which desktop an Active Directory user logs on to in order to apply IDFW rules. User identity information is provided by the NSX Guest Introspection Thin Agent inside guest VMs. Security administrators must ensure that the Thin Agent is installed and running in each guest VM, and logged-in users should not have the privilege to remove or stop the agent. When network events are generated by a user, the Thin Agent installed with VMware Tools on the VM gathers the information and forwards it to the context engine. This information is used to provide enforcement for the Distributed Firewall.


Identity Firewall - IDFW workflow:


  1. A user logs in to a VM and starts a network connection, for example by opening Skype or Outlook.

  2. A user login event is detected by the Thin Agent, which gathers connection information and identity information and sends it to the context engine.

  3. The context engine forwards the connection and the identity information to Distributed Firewall for any applicable rule enforcement.
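The conceptual Python sketch below models the workflow above: a login event associates the user's AD groups with the session, and identity-based rules are then evaluated against those groups. The users, groups, servers, and rules are invented for illustration and are not NSX code.

```python
# Conceptual sketch of the IDFW workflow - not NSX code.

AD_GROUPS = {"alice": {"HR"}, "bob": {"Engineering"}}

IDENTITY_RULES = [
    {"ad_group": "HR", "dst": "hr-app-server", "action": "ALLOW"},
    {"ad_group": "Engineering", "dst": "git-server", "action": "ALLOW"},
    {"ad_group": "any", "dst": "any", "action": "DROP"},   # default deny
]

def on_login_event(user, source_vm):
    """Thin Agent reports the login; the context engine attaches the user's
    AD groups to the source VM/session for rule evaluation."""
    return {"vm": source_vm, "user": user, "groups": AD_GROUPS.get(user, set())}

def evaluate(context, dst):
    for rule in IDENTITY_RULES:
        if rule["ad_group"] in context["groups"] or rule["ad_group"] == "any":
            if rule["dst"] in (dst, "any"):
                return rule["action"]

ctx = on_login_event("alice", "vdi-desktop-01")
print(evaluate(ctx, "hr-app-server"))   # ALLOW
print(evaluate(ctx, "git-server"))      # DROP (falls through to default deny)
```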


Distributed Intrusion Detection [IDS]

NSX-T Distributed IDS - Threat Detection

The NSX-T DC distributed IDS adds traffic inspection capabilities to the Service-defined Firewall. The distributed IDS uses regular-expression engines that detect traffic patterns. These engines are programmed to look for known malicious traffic patterns using a configuration language; the patterns expressed using the IDS configuration language are referred to as signatures.

The NSX-T DC distributed IDS periodically connects to the cloud to update detection information, including signatures. This live-streamed information is created, tested, and disseminated by threat research organizations that track the latest exploits and vulnerabilities. By default, NSX Manager checks for new signatures once per day. New signature update versions are published every two weeks.

NSX-T DC distributed IDS engines originated in Suricata, a well-known and broadly respected open-source project. NSX builds on Suricata by giving the IDS engines a runtime environment, including networking I/O and management functionality.

NSX-T DC co-locates the IDS functionality with the firewall, leading to a single-pass design for traffic inspection. All traffic passes through the firewall first, followed by IDS inspection depending on configuration. This co-location of IDS functionality with the firewall also simplifies the expression and enforcement of network security policies.

NSX distributed IDS engines are housed in user space and connected to the firewall module that resides in the hypervisor’s kernel. An application communicates with another application by sending traffic to the hypervisor, where the firewall inspects the traffic. Subsequently, the firewall forwards the traffic to the IDS module in user space.

The IDS module uses Signatures, protocol decoders and anomaly detection to hunt for attacks in the traffic flow. If no attacks are present, the traffic is passed back to the firewall for further transport to the destination. On the other hand, if an attack is detected, an alert is generated and logged.


NSX-T DC distributed IDS workflow:


  1. Virtual machine traffic passes through the distributed firewall filter.

  2. Packets are looked up in the flow table based on 5-tuple.

  3. If no flow/state is found, the flow is matched against the rule table based on 5-tuple.

  4. The rule table has two rules: a DFW rule and an IDS rule.

  5. First, traffic is processed through the regular distributed firewall or DFW rule.

  6. If the flow matches a DFW rule, then the traffic is processed again through the IDS rule.

  7. If the flow matches an IDS rule, then an entry is created in the flow table.

  8. As a result, this packet, along with any subsequent packets for this flow, needs to be redirected to user space, to the IDPS engine.

  9. The packet is redirected from kernel space to the IDPS engine: the Mux takes a copy of the packet and sends that copy down to the IDPS engine.

  10. Once the packet has been inspected, the original packet is released back onto the data plane. Unlike Layer 7 App ID, which leverages Context Profiles, NSX IDS inspects the entire flow, so all subsequent packets for this flow are also redirected from the kernel module up to the IDPS engine in user space for inspection.


Note - The NSX-T DC distributed firewall (DFW) must be enabled for IDS to work. If traffic is blocked by a DFW rule, the IDS will not see that traffic.
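To illustrate the idea of signature-based detection, here is a minimal Python sketch. Real IDS signatures (Suricata rules) are far richer than these two invented regex patterns; the sketch only shows payload copies being matched against known-bad patterns and alerts being raised while the original traffic continues.

```python
# Conceptual sketch of signature-based detection in the IDS engine.
import re

SIGNATURES = [
    {"id": 1001, "name": "Example directory traversal", "pattern": re.compile(rb"\.\./\.\./")},
    {"id": 1002, "name": "Example SQL injection probe", "pattern": re.compile(rb"(?i)union\s+select")},
]

def inspect(payload_copy):
    """Inspect a copied packet/flow payload; return any alerts generated."""
    alerts = []
    for sig in SIGNATURES:
        if sig["pattern"].search(payload_copy):
            alerts.append({"signature_id": sig["id"], "name": sig["name"]})
    return alerts   # detection only: the original packet proceeds regardless

print(inspect(b"GET /app?id=1 UNION SELECT password FROM users"))
```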


Gateway | Edge Firewall

NSX-T Gateway Firewall

The NSX-T Gateway firewall provides essential perimeter firewall protection, which can be used in addition to a physical perimeter firewall. The Gateway firewall service is part of the NSX-T Edge node in both the bare-metal and VM form factors. Optionally, the Gateway firewall's service insertion capability can be leveraged with the partner ecosystem to provide advanced security services such as IPS/IDS. This enhances the security posture by providing next-generation firewall (NGFW) services on top of the native firewall capability NSX-T provides.


The NSX-T Gateway firewall is instantiated per gateway and supported on both Tier-0 and Tier-1 gateways. The Gateway firewall service is implemented on Tier-0 gateway uplinks and Tier-1 gateway links, on the Tier-0/1 Service Router (SR) component hosted on the NSX-T Edge. The Tier-0 Gateway firewall supports stateful firewalling only in active/standby HA mode. It can also be enabled in active/active mode, though it then operates only in stateless mode.


The NSX-T Gateway firewall works independently of the NSX-T DFW from a policy configuration and enforcement perspective. A user can consume the Gateway firewall using either the GUI or the REST API framework provided by NSX-T Manager. The Gateway firewall configuration is similar to the DFW policy: it is defined as a set of individual rules within a section. Like the DFW, Gateway firewall rules can use logical objects, tagging, and grouping constructs to build policies.
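As a rough illustration of that REST API workflow, the sketch below defines a Gateway firewall rule on a Tier-1 gateway with Python. The gateway-policies endpoint, the rule field names, the "t1-tenant-a" gateway, and the referenced group and service paths are assumptions following the general Policy API pattern; check the API reference for your NSX-T version before relying on them.

```python
# Hypothetical sketch: define a Gateway firewall rule scoped to a Tier-1 gateway.
# Endpoint path, field names, and object paths are assumptions for illustration.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address
AUTH = ("admin", "password")                  # placeholder credentials

gateway_policy = {
    "display_name": "tenant-a-edge-policy",
    "rules": [
        {
            "display_name": "allow-inbound-https",
            "source_groups": ["ANY"],
            "destination_groups": ["/infra/domains/default/groups/web-tier"],
            "services": ["/infra/services/HTTPS"],
            "action": "ALLOW",
            "scope": ["/infra/tier-1s/t1-tenant-a"],   # enforce on this gateway's SR
        }
    ],
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/domains/default/gateway-policies/tenant-a-edge-policy",
    json=gateway_policy,
    auth=AUTH,
    verify=False,   # lab only; use proper certificates in production
)
resp.raise_for_status()
```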


Tier-0 Gateway as Perimeter Firewall | North-South

Tier-0 GW as Perimeter FW - Logical Representation

The Tier-0 Gateway firewall is used as a perimeter firewall between the physical and virtual domains. It is mainly used for N-S traffic from the virtualized environment to the physical world. In this case, the Tier-0 SR component, which resides on the Edge node, enforces the firewall policy before traffic enters or leaves the NSX-T virtual environment. E-W traffic continues to leverage the distributed routing and firewalling capability that NSX-T natively provides in the hypervisor.


Tier-1 Gateway as Inter-Tenant Firewall

Tier-1 GW as Inter-Tenant FW - Logical Representation

The Tier-1 Gateway firewall is used as an inter-tenant firewall within an NSX-T virtual domain. It is used to define policies between different tenants residing within an NSX-T environment. This firewall is enforced on traffic leaving the Tier-1 router; the Tier-1 SR component, which resides on the Edge node, enforces the firewall policy before the traffic is sent to the Tier-0 Gateway for further processing. Intra-tenant traffic continues to leverage the distributed routing and firewalling capabilities native to NSX-T.


Service Insertion - Partner Integration


Service Insertion enables users to seamlessly add third-party network and security services at various points throughout the network. Service Insertion includes Network Introspection and Endpoint Protection.

Endpoint Protection examines activity inside the guest VMs, whereas Network Introspection examines traffic outside the guest VMs. Together they provide both internal (endpoint) and external (network) perspectives on the activities performed in virtual machines. NSX-T Data Center supports both Network Introspection and Endpoint Protection.

Network Introspection deals with data in motion across the network by offering services such as IDS, IPS, and next-generation firewall. You can define detailed redirection rules, which specify which traffic should be inspected by the partner services, and create selective redirection rules by using Security Groups. All traffic matching a redirection rule is redirected along the specified service chain, which defines the sequence of service profiles applied to the network traffic.
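The small conceptual Python sketch below models that redirection behavior: traffic matching a redirection rule is steered through an ordered list of service profiles before being delivered. The group names and profile names are invented placeholders, not NSX objects.

```python
# Conceptual sketch of selective redirection along a service chain.

SERVICE_CHAIN = ["partner-ips-profile", "partner-ngfw-profile"]   # ordered profiles

REDIRECTION_RULES = [
    {"src_group": "web-tier", "dst_group": "db-tier", "redirect": True},
]

def forward(packet, apply_service):
    for rule in REDIRECTION_RULES:
        if (packet["src_group"] == rule["src_group"]
                and packet["dst_group"] == rule["dst_group"]
                and rule["redirect"]):
            for profile in SERVICE_CHAIN:      # steer through each service in order
                packet = apply_service(profile, packet)
                if packet is None:             # a service in the chain dropped it
                    return None
    return packet                              # deliver to the destination

# Example: a pass-through partner service used purely for illustration.
delivered = forward(
    {"src_group": "web-tier", "dst_group": "db-tier", "payload": b"..."},
    apply_service=lambda profile, pkt: pkt,
)
print(delivered is not None)   # True - every service in the chain passed it
```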

Endpoint Protection deals with security on the workload (inside the guest VMs) by offering services such as anti-virus and anti-malware solutions. It enables use cases such as agent-less anti-virus, where an agent does not need to be installed on each workload; instead, NSX-T DC intercepts file events and passes them to a partner virtual appliance. This significantly reduces overhead, because you do not need another agent and you avoid the processing overhead associated with running scanning operations on every workload.


Please refer to the VMware Product Interoperability Matrices for details about the compatibility of current and earlier versions of VMware vSphere components, including NSX-T, ESXi, vCenter Server, and other VMware products.


Network Introspection


NSX-T DC supports both North-South and East-West Service Insertion for Network Introspection.

NSX-T North-South Network Introspection

North-South Service Insertion for Network Introspection can be applied at Tier-0 and Tier-1 gateways. The insertion points are the uplinks of the Tier-0 or Tier-1 gateways. A partner service virtual machine (SVM) is deployed close to the NSX Edge node and connected over the service plane to receive and process the redirected traffic.

NSX-T East-West Network Introspection

With East-West Service Insertion, the insertion points are at each guest VM’s vNIC.


Partner SVMs for East-West Network Introspection can be deployed either on the Compute hosts or in a Service cluster. When SVMs are deployed on Compute hosts, an SVM does not need to be installed on every host, although some customers prefer to deploy the partner SVM on each host to minimize traffic hairpinning.

When the partner SVM is deployed in a Service cluster, traffic is sent from the Compute hosts across the Overlay to the hosts in the Service cluster.


Endpoint Protection


Endpoint Protection is a platform for VMware partners to integrate with NSX-T DC and provide agent-less anti-virus and anti-malware capabilities for guest VM workloads running on vSphere ESXi hypervisors.

Endpoint Protection provides visibility into guest VMs by analyzing the OS and file data. A service virtual machine (SVM) is deployed to monitor file, network, or process activity on a guest VM. Whenever a file is accessed, such as during a file open attempt, the anti-malware Service VM is notified of the event. The Service VM then determines how to respond to the event, for example by inspecting the file for virus signatures.

If the Service VM determines that the file contains no viruses, then it allows the file open operation to succeed. If the Service VM detects a virus in the file, it requests the Thin Agent on the guest VM to act in one of the following ways:


  1. Delete the infected file or deny access to the file.

  2. Tag the infected VM. NSX can assign a tag to infected VMs, and you can define a rule that automatically moves such tagged guest VMs to a security group that quarantines the infected VM for additional scanning and isolates it from the network until the infection is completely removed.

NSX-T Endpoint Protection

The Ops agent (Context engine and Guest Introspection client) forwards the guest introspection configuration to the guest introspection host agent (Context Multiplexer). It also relays the health status of the solution to NSX Manager.

The Guest Introspection host agent (Context Multiplexer) processes the configuration of endpoint protection policies. It also multiplexes and forwards messages from protected VMs to the Service VM, reports the health status of the guest introspection platform, and maintains records of the Service VM configuration in the muxconfig.xml file.

A thin file and network introspection agent runs inside the guest VMs. This agent is part of VMware Tools and replaces the traditional agent provided by anti-virus or anti-malware security vendors. It is a generic, lightweight agent that facilitates offloading files and processes to the vendor-provided Service VM for scanning.

The Service VM consumes the guest introspection SDK provided by VMware. It contains the logic to scan file or process events to detect viruses or malware on the guest. After scanning a request, it sends back a verdict or a notification about the action to be taken by the guest VM on the request.


Summary


VMware NSX Data Center is a complete Layer 2-7 network virtualization and security platform that enables a software-defined approach to networking and security that extends across data centers, clouds, and a variety of application frameworks.
