NSX-T 2.5 Deploying the NSX Manager

So, deploying the NSX-T Manager on vSphere is a fairly standard OVA deployment, which is why I will just go through it quickly and highlight a couple of things.

The first thing is to check the “Review details” screen, which lists some information about the appliance settings.

An important change, introduced in NSX-T 2.4, is that the NSX Manager and the controllers are now collapsed into a single template, so unlike NSX-V or earlier versions of NSX-T you no longer need to deploy a separate manager and three controllers.

nsxmgr-details

On the next screen you choose the deployment configuration, so it is useful to know which sizing to go for. There are four options to choose from:

Extra Small

IMPORTANT: This configuration is only supported for the nsx-cloud-service-manager role. This configuration requires the following:

* 2 vCPU
* 8 GB RAM
* 200 GB storage
* VM hardware version 10 or greater (vSphere 5.5 or greater)

Small (Lab only)

This configuration requires the following:

* 4 vCPU
* 16 GB RAM
* 200 GB storage
* VM hardware version 10 or greater (vSphere 5.5 or greater)

Medium (Default)

This configuration requires the following:

* 6 vCPU
* 24 GB RAM
* 200 GB storage
* VM hardware version 10 or greater (vSphere 5.5 or greater)

Large

This configuration requires the following:

* 12 vCPU
* 48 GB RAM
* 200 GB storage
* VM hardware version 10 or greater (vSphere 5.5 or greater)

I will go for Small in my case since I do not need more and also don’t have that many resources. As a sizing guideline, Medium supports up to 64 hypervisors and Large supports more than 64 hypervisors.

On the next screen, when selecting storage, there are a couple of important things to note, as highlighted by VMware.

network-storage-latencies

On the template customization page there is, as usual, some information to fill in. One important part here is the role field: it defaults to NSX Manager but can be changed to nsx-cloud-service-manager.

nsx_role

Also at the bottom there are some internal properties which you should not set.

do_not_set

Once the appliance is deployed and has booted, you should be able to log in to the appliance CLI and perform a few verification steps to see that all is well.

The simplest check is to first ping the appliance and see if you get an answer. If that works, SSH to the appliance using your preferred SSH client.

Once logged in run “get interface eth0” to verify that the network settings were applied as expected.

Next run “get services” to verify that the services are running.

Sample output from the above commands:

nsx-mgr-post-install-verification

Check that the NSX Manager appliance can ping vCenter, the NTP server, the DNS server, and the ESXi hosts.
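Put together, the whole verification pass from the appliance CLI might look like the session below. The hostnames are placeholders for my lab, and the prompt will reflect whatever hostname you gave the appliance (ping runs until you stop it with Ctrl-C):

```
nsxmgr> get interface eth0
nsxmgr> get services
nsxmgr> ping vcenter.lab.local
nsxmgr> ping dns01.lab.local
nsxmgr> ping ntp01.lab.local
nsxmgr> ping esxi01.lab.local
```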

After performing these verification steps, the next step is to review the password policy. By default the admin password for the NSX Manager and Edges will expire after 90 days. This can however be changed from the CLI, and since we are already logged in, it is a convenient time to do it.

You can set it from 1-9999 days using the following command:

“set user admin password-expiration <1-9999>”

You can also disable it by running the following command:

“clear user admin password-expiration”
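As a quick sketch, the whole password-policy step from the CLI could look like this. I believe NSX-T 2.5 also lets you check the current setting with “get user admin password-expiration”, but verify that on your build; you would of course run either the set or the clear command, not both:

```
nsxmgr> get user admin password-expiration
nsxmgr> set user admin password-expiration 9999
nsxmgr> clear user admin password-expiration
```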

Once this has been done, we can move on to the next part and make NSX-T ready and then add cluster nodes.