DNS / NTP Server Migration

You can migrate your DNS / NTP server after you have completed your initial deployment of CloudVision. Migrating the DNS / NTP server is typically done when you want or need to change the DNS / NTP server that CloudVision currently uses.

For example, if the current CloudVision DNS / NTP server was intentionally isolated during the initial CloudVision installation, you need to migrate the server to make it accessible to external resources.

Note: Following the DNS / NTP Server Migration procedure may make the CVP server unavailable for some time while the commands are run.

How to Modify the DNS and NTP Configuration

The process for modifying the DNS / NTP server after the completion of the initial CloudVision installation involves updating the DNS and NTP server entries on each cluster node and modifying the /cvpi/cvp-config.yaml file (on each node) to reflect the updates to the server entries.
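The relevant portion of /cvpi/cvp-config.yaml looks roughly like the fragment below. This is an illustrative sketch only; the key names, nesting, and values in your file may differ, so edit the entries that already exist rather than copying this verbatim.

```yaml
# Illustrative fragment -- layout and key names in your own
# /cvpi/cvp-config.yaml may differ. All addresses are placeholders.
dns:
  - 10.10.10.53        # new primary DNS server
  - 10.10.20.53        # new secondary DNS server
ntp:
  - ntp1.example.com   # new NTP server
```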

Prerequisites

Before you begin the migration process, make sure that:

  • The IP addresses and hostnames (FQDNs) of the nodes do not change.
  • For each node, make sure that:
    • At least one DNS server entry is present in the /cvpi/cvp-config.yaml file.
    • The DNS server that corresponds to the DNS server entry in the /cvpi/cvp-config.yaml file can be reached by the cluster throughout the migration process. (This matters because any change made to resolv.conf takes effect as soon as the file is saved.)
  • The time difference between the old NTP server and the new NTP server is negligible.
  • The old NTP server and the new NTP server are in the same time zone.
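Because changes to /etc/resolv.conf apply the moment the file is saved, it can help to record which nameservers are currently in effect on each node before editing anything. A minimal sketch (the dig query is shown only as a comment, and its server address is a placeholder):

```shell
# Print the nameservers currently in effect on this node.
# Changes to /etc/resolv.conf apply as soon as the file is saved,
# so capture the working configuration before you edit anything.
awk '/^nameserver/ {print $2}' /etc/resolv.conf

# To confirm the NEW DNS server answers queries before switching over
# (placeholder address -- substitute your own), you could run e.g.:
#   dig @192.0.2.53 some-switch.example.com +short
```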

Complete these steps to modify the DNS / NTP server.

  1. On each node, edit the /cvpi/cvp-config.yaml file to reflect the required changes to the DNS and NTP server entries.
  2. To re-read the /cvpi/cvp-config.yaml file and restart the network service, run the /cvpi/tools/cvpConfig.py -y /cvpi/cvp-config.yaml -n nodeX command on each node, where X is the respective node number.
  3. Restart the CVP components so that all Kubernetes pods re-mount the /etc/resolv.conf file: cvpi -v=3 stop all && cvpi -v=3 start all
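Steps 2 and 3 amount to running the same per-node command on every cluster node, followed by one full restart. The sketch below only prints each cvpConfig.py invocation rather than executing it, since /cvpi/tools/cvpConfig.py and cvpi exist only on a CVP node, and it assumes a three-node cluster with nodes numbered 1 to 3.

```shell
# node_cmd N prints the cvpConfig.py invocation for node N.
# Printed, not executed: /cvpi/tools/cvpConfig.py and cvpi are
# present only on an actual CVP node.
node_cmd() {
    printf '/cvpi/tools/cvpConfig.py -y /cvpi/cvp-config.yaml -n node%s\n' "$1"
}

for n in 1 2 3; do      # assuming a three-node cluster
    node_cmd "$n"
done

# After the config has been re-read on every node, restart all
# components once so the Kubernetes pods re-mount /etc/resolv.conf:
#   cvpi -v=3 stop all && cvpi -v=3 start all
```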
