In the last post I showed how to connect the NSX Manager to the Compute Manager, in this case the vCenter on my site 1 management cluster. Now it is time to connect the other two NSX Managers that I deployed at the beginning. Since I wrote this, things have changed a bit: I redeployed with version 2.5.1 and decided to run only one NSX Manager, since I have a lab and apparently not never-ending resources. So what follows is mainly something I kept for demo purposes.
Before starting there are some things that you may want to verify:
- NSX Manager nodes are installed
- Compute Manager is installed
- System requirements are met
- Required ports are open
- Datastores are configured and accessible
- You have the IP address, gateway, DNS server, domain search list and NTP server IP for the manager to use
- If not already done, make sure that the NSX-T Data Center appliance is on a management network
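Before adding nodes, it can be worth a quick sanity check from a jump host that DNS and the management network are actually reachable. A rough sketch, where nsxmgr.lab.local and 10.0.0.11 are placeholders for my lab values:

nslookup nsxmgr.lab.local
nc -zv 10.0.0.11 443

If name resolution fails or port 443 to the existing manager is blocked, fix that before deploying the extra nodes.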
Most of these should already be ok if you have followed along this series of NSX-T installation.
Next log into the NSX manager using the admin user.
Go to System -> Appliances -> Overview -> Add nodes
Fill in the information; you should have this handy from the deployment of the original manager node. Note that you can select the form factor. I again went with small due to constraints in my environment, and again… it is a test environment.
On the second screen, fill in the details as required. Since I will set up affinity rules later, the only things I added were name, cluster, network and IP addresses. Note that at the top of the screen you can add one more node, so you can deploy both of the extra required nodes straight away.
Once that is all done, click Finish. You are taken back to the Appliances menu, where you can see the appliances being prepared and deployed. This may take a bit of time to complete (10-15 minutes according to the installation guide).
After a while the nodes will have been added and data is synced between the three nodes. Once that is completed, the Appliances overview looks like this:
You can see that connectivity is up and the sync is completed. At the top of each node you will also see that no virtual IP is assigned yet; this comes in a bit. As with the first node deployment, you may want to check that the IP settings are correct and the services are running.
Check status of the new managers and the cluster.
SSH is not enabled by default on the two new nodes deployed from inside the NSX-T Manager, so you first need to enable it from the appliance console in vCenter.
To enable SSH on boot, run the following command from the console:
set service ssh start-on-boot
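If you do not want to wait for a reboot, you can also start the service right away from the same console and verify it. Both of these are standard NSX-T CLI commands:

start service ssh
get service ssh

The output of get service ssh should show the service running and start-on-boot enabled.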
After this you should be able to connect to the NSX Manager with your preferred SSH client.
Let's first check that the managers are OK. You do this by logging into the new appliances and running these two commands:
get interface eth0
get services
Next thing to do is to check the cluster status
This is done from CLI running the command:
get cluster status
You can also add verbose to this command to get a bit more output:
get cluster status verbose
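Instead of the CLI, the same health check can be done against the NSX-T REST API (GET /api/v1/cluster/status), which is handy if you want to poll the cluster from a script. A minimal sketch, assuming NSX-T 2.5.x, the built-in admin user and a placeholder manager FQDN; the exact response shape is my assumption from the 2.5 API, so check it against your own manager:

```python
def is_cluster_stable(status_payload):
    """Return True when the management cluster reports STABLE.

    Assumes the /api/v1/cluster/status response contains a
    mgmt_cluster_status object with a status field, as in NSX-T 2.5.x.
    """
    mgmt = status_payload.get("mgmt_cluster_status", {})
    return mgmt.get("status") == "STABLE"


def fetch_cluster_status(host, user, password):
    # requests is not in the stdlib; pip install requests first.
    import requests

    resp = requests.get(
        f"https://{host}/api/v1/cluster/status",
        auth=(user, password),
        verify=False,  # lab only: the manager still has a self-signed cert
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Example payload in the shape described above, not real manager output.
    sample = {"mgmt_cluster_status": {"status": "STABLE"}}
    print(is_cluster_stable(sample))  # True for a healthy cluster
```

In a lab you would call fetch_cluster_status("nsxmgr.lab.local", "admin", "<password>") and feed the result to is_cluster_stable.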
Set Virtual IP for the Cluster
This is easily done. Go to System -> Appliances; at the top of the nodes there is a field called Virtual IP with an edit button next to it. Click it and set a virtual IP for the cluster. Afterwards you should see the IP that has been set and which node it is associated with.
You can also verify it in the CLI with "get cluster status verbose"; towards the end of the output you should see something like this. Obviously you may have more than one node here.
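If you prefer the API, the virtual IP can also be read from the cluster API. A hedged example, again assuming NSX-T 2.5.x and a placeholder manager FQDN:

curl -k -u admin https://nsxmgr.lab.local/api/v1/cluster/api-virtual-ip

The ip_address field in the response should match the virtual IP you just set.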
That is pretty much it for now. The next part starts on the edge installation requirements, installing an edge and creating an edge cluster.