Run the Docker Swarm Install
Download the .tar file that contains the Brainspace 7 deployment descriptors and install scripts.
Untar the archive, ‘cd’ into the subdirectory and run the install script as the ‘root’ user:
# switch to the root user
sudo su - root
# run the installer
./install.sh
This will bring up the Brainspace 7 Install UI.
At this point you’ll be presented with the ‘Installation Options’ screen:
For new installs, select the Install option. Steps for upgrades and uninstalls are presented in later sections. Prior to installing for the first time, select the Pre-Install Checks option to verify there are enough resources to run Brainspace 7.
Next, you will be presented with a list of options for obtaining Docker images. If using images hosted by Reveal in our Amazon ECR registry, select ‘Amazon ECR’.
If you have already configured the Docker registry on the host machines, including authentication, you can select Skip.
If Amazon ECR was selected, you’ll be asked to enter your AWS access key and AWS secret, which should have been provided to you by Reveal customer service.
Enter the access key and secret key twice and click <Ok>. The UI will attempt to validate them by using the credentials to access the Reveal Amazon ECR registry that contains the Docker images.
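If you prefer to verify the credentials yourself, the same check can be performed manually with the AWS CLI and Docker. The region and registry address below are placeholders; the actual values are provided by Reveal:
# log in to the ECR registry manually (region and registry address are placeholders)
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com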
Choose whether you want to store the Amazon ECR credentials in a file in the root user’s home directory.
Note
Storing the credentials will make it easier to administer the swarm later, but choosing 'Yes' here stores the AWS secret key in a plain-text file.
The next step is configuring the volume mount options. This step determines how the "data" and "localdata" shares are configured.
If using something other than the default (NFS), such as CIFS or GlusterFS, or if you simply want to set up the file shares on the host machines and use bind mounts to give the containers access, select the Bind option and see the section “Configuring File Shares using Bind Mounts.”
If using the recommended NFS configuration, select the NFS option. If the NFS server is set up on the application host, the IP address is that of the current host where the installation is being performed. You can find the host’s IP address using the command:
ip addr
Note
Be sure to choose the IP address associated with the interface that is connected to the network; do not use the loopback interface or ‘localhost’.
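To list only routable addresses and skip the loopback interface entirely, a filtered form of the same command can be used:
# show only global-scope IPv4 addresses (excludes loopback)
ip -4 addr show scope global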
Next, enter the source locations of the shares on the NFS device. This is the directory where the data and localdata shares exist on the NFS server, not the mount points of data and localdata if they happen to be mounted on the application host.
If using the NFS option, it is not required that data and localdata be mounted on the hosts, as is required in Brainspace 6. It is required, however, that all the hosts be listed in the NFS server’s /etc/exports file. Even if the application server is hosting the NFS shares, its IP address must be listed in /etc/exports in order for the Docker containers to have permission to mount the NFS shares.
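As an illustration only, an /etc/exports file on the NFS server might resemble the following; the share paths, host IP addresses, and export options are examples and must be adapted to your environment:
# /etc/exports on the NFS server (paths, IPs, and options are examples)
/share/data       10.224.64.19(rw,sync,no_root_squash) 10.224.64.20(rw,sync,no_root_squash)
/share/localdata  10.224.64.19(rw,sync,no_root_squash) 10.224.64.20(rw,sync,no_root_squash)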
Enter the IP address or hostname of the server that is hosting the NFS file shares:
After entering the IP address, if Check NFS is selected, the UI will attempt some minimal validation to ensure the NFS server is up and accepting connections.
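A similar check can be performed manually from any host, assuming the NFS client utilities are installed:
# list the exports offered by the NFS server (IP address is an example)
showmount -e 10.224.64.19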
Next, enter a database password of your choosing, twice, to ensure it was typed correctly. The database password will be stored in a Docker secret so it is properly protected.
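Once the deployment has been created, you can confirm the secret exists; Docker lists secrets by name but never displays their values:
# list the secrets known to the swarm
docker secret ls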
The next step is choosing the host deployment option. The options are presented in the menu with an explanation of which services will be deployed on each host.
One – All services run on a single host.
Two – Host 1: Application + Database + Messenger + UI / Host 2: ANA + ODA
Three – Host 1: Application + Database + Messenger + UI / Host 2: ANA / Host 3: ODA
Four - Host 1: Application + UI / Host 2: Database + Messenger / Host 3: ANA / Host 4: ODA
Custom – If the deployment doesn’t match one of the above configurations, use this option and refer to the section “Custom Host Deployment Option”.
Typically, the ‘Three Host’ deployment option will be selected, but other options are available.
Note
The default configuration is a 3-host install, with Postgres and RabbitMQ running on the application host. If you want to deploy the database on a different host, go to section: “Configuring a separate database host”.
If using more than one host, the next step is to add the other hosts to the swarm. To do this, copy the command that is presented in the UI and paste it into a terminal running on each of the other hosts. In the example below, the command is:
docker swarm join --token SWMTKN-1-59782iohybaolpqn2ja1d34knz9bqj7ik072ov1kvsf6jh6cc1-8sepnocscoplxadzgkq2l1ip6 10.224.64.19:2377
Back on the application host, you will see a progress bar that will update as each host is added to the swarm:
Your hosts (aka Docker nodes) have now joined the cluster and are ready to be assigned a specific role that determines which services the host will run.
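You can verify the membership at any time from the application host:
# list all nodes in the swarm and their current status
docker node ls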
You will need to choose which host will serve as the Analytics (ANA) host and which will serve as the On-Demand Analytics (ODA) host. You should be able to recognize the hosts by the hostname provided in the UI. Since the host where the installation is being performed is the Application host, it cannot be selected as an ANA/ODA host. If you wish to deploy the Analytics and On-Demand Analytics services on the application host, use option One on the previous screen.
Select the ID associated with the hostname you wish to assign to be the ANA host:
Do the same for the ODA host.
Finally, review the configuration to ensure it is correct, and select <Yes> to start the Brainspace deployment. If anything looks incorrect, select <No> and the installer will exit.
If you choose <No> here you can resume the installation where you left off by simply running the install.sh script again and choosing the “Continue Install” option.
This is convenient if you need to edit the default values in the swarm.env file, such as the NFS protocol version, or to configure an external Postgres database.
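Purely as an illustration, entries in swarm.env might take a form like the following; the variable names and values here are hypothetical, and the actual names and defaults are defined in the swarm.env file shipped with the installer:
# hypothetical swarm.env entries; consult the shipped file for the real variable names
NFS_VERSION=4.1
DB_HOST=db.example.internal
DB_PORT=5432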
Additionally, if you need to change any of the previous choices made, you can choose the “Update Configuration” option here, and update only those configuration options that need to be changed:
Note
You must select one of the save options after making updates to ensure the updated configuration is persisted. By selecting “Save & Continue” you will be brought to the “Docker Container Registry Options” screen again. You can choose <Skip> here if you’ve previously configured the container registry and it doesn’t need to change.
After proceeding, you’ll be taken back to the Deploy Brainspace screen where you can review choices again to ensure they are correct.
When you select <Yes> on the Deploy Brainspace screen, the deployment will begin.
After the deployment starts, services will begin to come up. Use the “Health Checks” option in the install UI to check the state of the Brainspace deployment and see the state of the services in the Brainspace stack:
Checking the state of the stack will show all services, their current state, and any errors the services may have encountered:
Alternatively, you can use the following command to see the state of the services:
docker stack ps <stack name> — for example: docker stack ps brainspace
Some useful flags:
To see just running services: docker stack ps -f 'desired-state=running' brainspace
To see full details about errors: docker stack ps --no-trunc brainspace
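To inspect the output of a specific service, the service logs are often the quickest route. The service name below assumes the stack is named brainspace; adjust it to match your stack name:
# follow the logs of the application service (name assumes a stack named brainspace)
docker service logs -f brainspace_brains-app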
Note
If the “batch_tools” services are still in the “Running” or “Preparing” state, this means the batch tools images, which are quite large, are still being downloaded and batch tools has not been fully set up yet.
The Brainspace UI should be available as soon as the ‘brains-app’ service is “Running”, so you can proceed with setting up the system; you just won’t be able to run any analytics builds until the batch tools services have finished. After some time, the batch_tools services should enter the “Shutdown” state, which is expected and means that batch tools has been set up and the system is ready to run analytics builds.
This long startup period only applies the first time a stack with a specific version of the software is deployed on a host.
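To watch just those services, you can filter the stack listing by name; the name prefix below assumes the stack is named brainspace:
# show only the batch_tools services (filter assumes the stack is named brainspace)
docker stack ps -f 'name=brainspace_batch_tools' brainspace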
Important
If uninstalling the Docker Engine from an existing environment for any reason, do NOT follow the instructions to delete the /var/lib/docker directory (instructions for Ubuntu, for example: https://docs.docker.com/engine/install/ubuntu/#uninstall-docker-engine).
When you set up a Docker volume for an NFS share, Docker creates a mount point under /var/lib/docker. This means that if you have set up Docker volumes for the NFS data/localdata shares and you run the command to delete that directory, it will also delete the entire Brainspace dataset and all data contained in those shares.
You must unmount those shares under /var/lib/docker prior to deleting the contents of that directory.
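A sketch of how to locate and unmount those mounts first; the volume path shown reflects the typical Docker volume layout, so verify the actual mount points from the first command’s output before unmounting anything:
# list any NFS mounts that live under /var/lib/docker
grep /var/lib/docker /proc/mounts
# unmount each listed mount point before deleting the directory contents
umount /var/lib/docker/volumes/<volume-name>/_data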