
Installation Guide for High Availability (HA) Deployment

This guide empowers IT Administrators to build a resilient, uninterrupted ServiceOps environment by eliminating single points of failure, ensuring business continuity during outages.

High Availability (HA) refers to systems that are durable and designed to operate continuously without failure for an extended period. The primary goal of HA is to ensure an agreed-upon level of operational performance and uptime, guaranteeing service and data recovery during an unplanned disruption.

This guide provides detailed instructions for setting up a High Availability environment for ServiceOps.

HA Architecture Overview

In a ServiceOps High Availability setup, three main components work together:

  • Master Server: The primary ServiceOps server accessed by users.
  • Slave Server: A secondary, idle server that continuously replicates data from the Master's database.
  • HA-Proxy Server: A load balancer that redirects traffic to the Slave server if the Master server becomes unavailable.

During a downtime event, the Slave server is promoted to become the new Master, using its synchronized database to provide uninterrupted service.

Single Data Center Architecture

High Availability Architecture for a Single Data Center

Prerequisites

Before proceeding with the HA setup, ensure you have met the System Requirements and completed the Pre-Installation Checklist.

HA Configuration Steps

Follow these steps to configure the High Availability environment.

Step 1: Install Master and Slave Servers

Install ServiceOps on two separate servers. One will act as the Master and the other as the Slave.

  1. Copy the release build installer (service_desk_master_CI) to the target machine.

  2. Open a terminal and navigate to the directory containing the build.

  3. Make sure you have permission to execute the file. If not, grant it using the following command:

    sudo chmod 777 service_desk_master_CI

  4. Run the installer using the following command:

    sudo ./service_desk_master_CI

  5. Enter the password when prompted. This is the same as the system admin password.

  6. When prompted for the file where the key should be saved, press Enter to accept the default key path.

  7. After the SSH key is added, the installer prompts for the server SSH username.

For detailed installation instructions, refer to the Standalone Installation Guide.

note

Ensure both servers have the same ServiceOps version and are fully functional before proceeding.

Step 2: Set Up the HA Observer Server

The HA Observer monitors the health of the Master and Slave servers and manages the failover process.

Pre-Installation Checklist
  • Do not use sudo when running the HA Observer installer package.
  • Ensure both Master and Slave machines are ready before starting this step.
  • Use the same SSH username and a common password on both the Master and Slave machines.
  1. Download the HA Observer installer from the Download Links page.
  2. Assign execute permissions to the installer file:
    chmod 777 service_desk_ha_CI

  3. Run the HA Observer installer:
    ./service_desk_ha_CI

  4. When prompted to generate the public key, press Enter.

  5. After the public key is generated, you are prompted for a passphrase twice. Press Enter for both prompts.

  6. When prompted to “Enter username for server ssh”, enter the username. This username must be common to both the Master and Slave machines.
note

The username flotomate shown here is only an example. Enter the respective machine's actual username.

  7. Enter the IP address of the Master server when prompted.
note

The IP address shown here is only an example. Enter your actual Master server's IP.

  8. After entering the IP address, you are prompted for the password twice. This is the respective machine's password.

  9. Enter the IP address and password for the Slave server when prompted.

  10. Enter the sudo password when prompted. This is the same as the common password entered in Step 8.

  11. The installer then automatically starts the Slave configuration and displays the message “Enter slave config started”.

  12. When that step finishes, the message "HA Installed Successfully" is displayed. The file sync mechanism then starts automatically with the message "File Sync Setup Started".

  13. When prompted for a key while generating the public/private RSA key pair, press Enter.

  14. After the key is entered, the File Sync setup finishes, which completes the HA Observer installation.

  15. Finally, ensure the Slave server's database password matches the Master's. You may need to manually update the password in the following configuration files on the Slave server:
/opt/flotomate/main-server/config/application-saas.properties
/opt/flotomate/cm-analytics/config/application-saas.properties
/etc/flotomate_env
/opt/flotomate/cm-analytics/lib/analytics-hosted-exec.conf
/opt/flotomate/main-server/lib/boot-hosted-exec.conf
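Updating the same password in all five files can be scripted. A minimal sketch — OLD_PASS and NEW_PASS are placeholders for your actual old and new database passwords, and each file is backed up before editing:

```shell
# Placeholders — replace with your actual old/new DB passwords.
OLD_PASS='oldPassword'
NEW_PASS='masterPassword'

for f in \
  /opt/flotomate/main-server/config/application-saas.properties \
  /opt/flotomate/cm-analytics/config/application-saas.properties \
  /etc/flotomate_env \
  /opt/flotomate/cm-analytics/lib/analytics-hosted-exec.conf \
  /opt/flotomate/main-server/lib/boot-hosted-exec.conf
do
  [ -f "$f" ] || { echo "skipping missing file: $f"; continue; }
  cp "$f" "$f.bak"                            # keep a backup before editing
  sed -i "s/${OLD_PASS}/${NEW_PASS}/g" "$f"
  echo "updated: $f"
done
```

Verify each file manually afterward before restarting services on the Slave.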

The core High Availability setup is now complete. At this stage, the Master and Slave servers are configured for data replication.

The following section on configuring an HA-Proxy server is optional but recommended for automatic client redirection during a failover.

tip

For advanced HA-Proxy configurations, especially in a DMZ, refer to the How to Configure HA Proxy in DMZ guide.


Optional: Configure HA-Proxy for Automatic Failover

While the core High Availability setup ensures data replication between the Master and Slave servers, it does not automatically redirect user traffic if the Master server fails. To achieve seamless, automatic failover, you can deploy an HA-Proxy server.

The HA-Proxy acts as a reverse proxy and load balancer, sitting in front of your Master and Slave servers. It monitors the health of the primary server and, in the event of an outage, automatically reroutes all incoming traffic to the standby Slave server. This ensures that users experience minimal disruption and service continuity is maintained without any manual intervention.

This section guides you through the installation and configuration of an HA-Proxy server for your ServiceOps environment.

HA-Proxy Server Requirements
  • The HA-Proxy requires a separate server with a dedicated IP address.
  • Internet connectivity is required to install the HA-Proxy package.

HA-Proxy Installation

  1. Log in to the HA-Proxy server as a root user.
  2. Install the HA-Proxy package:
    sudo apt-get update
    sudo apt-get install haproxy -y

  3. Open the HA-Proxy configuration file for editing:

    sudo nano /etc/haproxy/haproxy.cfg
  4. Add the following configuration to the end of the file. Replace the placeholder IP addresses with the actual IPs of your HA-Proxy, Master, and Slave servers.

    defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

    frontend http_front
    # Replace with your HA-Proxy IP address
    bind 172.16.8.100:80
    stats uri /haproxy?stats
    default_backend http_back

    backend http_back
    balance roundrobin
    mode tcp
    option tcp-check
    # Replace with your Master Server IP
    server master 172.16.8.241:80 check port 80
    # Replace with your Slave Server IP
    server slave 172.16.8.240:80 check port 80
  5. Restart the HA-Proxy service to apply the changes:

    sudo systemctl restart haproxy

Failover Time

In the event of a failover, the Slave server may take approximately 5-8 minutes to become fully active.
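During a failover drill, you can measure that window by polling the HA-Proxy front end until it answers again. A minimal sketch — the IP is the placeholder used in the configuration above, and the timeout is illustrative:

```shell
# Poll a URL once per second until it responds or the timeout elapses.
wait_for_http() {
  url=$1; timeout=$2; waited=0
  while [ "$waited" -lt "$timeout" ]; do
    if curl -s -o /dev/null --max-time 2 "$url"; then
      echo "service answered after ${waited}s"
      return 0
    fi
    sleep 1
    waited=$((waited + 1))
  done
  echo "no response within ${timeout}s"
  return 1
}

# Example: watch the HA-Proxy front end (placeholder IP from this guide).
wait_for_http "http://172.16.8.100:80/" 5 || true
```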

HA-Proxy SSL Configuration

This section provides instructions for configuring SSL certificates for HAProxy.

Prepare Certificate Files

  1. Create a combined PEM file from your certificate and private key:

    cat example.crt example.key > example.pem

  2. Copy the PEM file to the HAProxy server:

    cp example.pem /etc/ssl/

  3. Set proper permissions:

    sudo chmod 600 /etc/ssl/example.pem
    sudo chown root:root /etc/ssl/example.pem
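Before pointing HAProxy at the combined PEM, it is worth confirming that the certificate and key halves actually belong together. A sketch, assuming an RSA key as in the example above — a certificate and its RSA key share the same modulus:

```shell
PEM=/etc/ssl/example.pem   # the combined file created above

if [ -f "$PEM" ]; then
  # Compare the modulus digests of the certificate and the key.
  cert_digest=$(openssl x509 -noout -modulus -in "$PEM" | openssl md5)
  key_digest=$(openssl rsa  -noout -modulus -in "$PEM" | openssl md5)
  if [ "$cert_digest" = "$key_digest" ]; then
    echo "certificate and key match"
  else
    echo "MISMATCH: certificate and key do not belong together"
  fi
else
  echo "combined PEM not found at $PEM"
fi
```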

Configure HAProxy

  1. Open the HAProxy configuration file:

    vi /etc/haproxy/haproxy.cfg
    # or
    nano /etc/haproxy/haproxy.cfg
  2. Update the frontend configuration to enable SSL:

    Standard HTTP configuration (port 80):

    frontend http_front
    bind *:80
    default_backend http_back

    SSL configuration (port 443):

    frontend https_front
    bind *:443 ssl crt /etc/ssl/example.pem
    default_backend http_back

  3. Complete frontend configuration example:
    frontend https_front
    bind 172.16.13.68:443 ssl crt /etc/ssl/example.pem
    mode http
    option forwardfor
    default_backend http_back

Configure Backend for HTTPS

If your backend servers also use HTTPS, update the backend configuration:

backend http_back
mode http
balance roundrobin
server master 172.16.13.69:443 check ssl verify none
server slave 172.16.13.70:443 check ssl verify none

Verify and Test Configuration

  1. Test HAProxy configuration syntax:

    haproxy -c -f /etc/haproxy/haproxy.cfg

  2. Restart HAProxy service:

    systemctl restart haproxy
  3. Verify service status:

    systemctl status haproxy
  4. Test SSL connectivity:

    openssl s_client -connect your-domain.com:443 -servername your-domain.com

Verify SSL Certificate

  1. Access your application through HTTPS in a web browser
  2. Check certificate details by clicking the lock icon
  3. Verify certificate validity and domain match

Restart HA Proxy Server Service

After updating the port, validate the haproxy.cfg file and then restart the HAProxy service to apply the change.

note

Verify the port number is open from the OS Firewall and Network side.

Distributed Deployment with High Availability

A distributed HA deployment separates the application and database tiers across five dedicated machines. This model provides automatic failover at both the application and database tiers, and lets you scale each tier independently. It suits large enterprises running high-volume ITSM operations where a single-server failure must not interrupt service.

Prerequisites

Before starting, ensure the following are in place:

  • Operating System: Ubuntu 22, Ubuntu 24, RHEL 9.4, or RHEL 9.6
  • PostgreSQL Version: 16 or 17
  • 5 machines provisioned and reachable over the network
  • ServiceOps installed on both APP 1 (Master) and APP 2 (Slave) — follow the Standalone Installation Guide
  • Root or sudo access on all 5 machines
  • The following setup scripts available on their respective machines:
    • MotadataETCDSetupU24 — on the Observer machine
    • MotadataPatroniSetupU24 — on both DB machines
    • MotadataAppHASetup — on the Observer machine
  • The following firewall ports open on each machine:
| Machine | Ports |
| --- | --- |
| Observer | 80, 443, 2379, 2380, 5000 |
| APP 1 and APP 2 | 80, 443 |
| DB 1 and DB 2 | 5432, 7000, 8008 |

Architecture Overview

The 5-machine distributed HA setup assigns a dedicated role to each machine:

| Role | Machine | Example IP | Ports |
| --- | --- | --- | --- |
| Observer / HAProxy / ETCD | 1 machine | 172.16.13.42 | 80, 443, 2379, 2380, 5000 |
| APP 1 (Master Application) | 1 machine | 172.16.12.171 | 80, 443 |
| APP 2 (Slave Application) | 1 machine | 172.16.12.177 | 80, 443 |
| DB 1 (Master Database) | 1 machine | 172.16.12.202 | 5432, 7000, 8008 |
| DB 2 (Slave Database) | 1 machine | 172.16.12.216 | 5432, 7000, 8008 |

HAProxy load-balances traffic across APP 1 and APP 2 on ports 80 and 443, and routes database connections through port 5000. ETCD manages distributed consensus on ports 2379 and 2380, and drives Patroni-based PostgreSQL failover. The Observer monitors the application tier and triggers failover scripts when it detects a failure.

How It Works

ETCD stores the cluster state and elects a primary node for PostgreSQL via Patroni. HAProxy routes database traffic to the Patroni-managed primary on port 5000 using health checks on port 8008. The Observer load-balances application traffic between APP 1 and APP 2 on ports 80 and 443. When the Observer detects that APP 1 is down, master.sh and slave.sh scripts handle the application-tier failover. All five machines must reach each other over the network for this coordination to work.
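The coordination described above can be spot-checked from any machine with curl. A sketch using this guide's example IPs (substitute your own) — /health is etcd's standard health endpoint and /patroni is Patroni's REST status endpoint:

```shell
OBSERVER=172.16.13.42
DB1=172.16.12.202
DB2=172.16.12.216

# ETCD health on the Observer (client port 2379)
curl -s --max-time 3 "http://${OBSERVER}:2379/health" \
  || echo "etcd not reachable on ${OBSERVER}"

# Patroni role and state on each DB node (REST API port 8008)
for db in "$DB1" "$DB2"; do
  curl -s --max-time 3 "http://${db}:8008/patroni" \
    || echo "patroni not reachable on ${db}"
done

# Application front ends load-balanced by the Observer's HAProxy
for app in 172.16.12.171 172.16.12.177; do
  curl -s -o /dev/null --max-time 3 "http://${app}:80/" \
    && echo "app ${app} up" || echo "app ${app} down"
done
```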

Step 1: Download Setup Files

Download the zip file containing all the setup files from the Download Links page and extract it on each machine before proceeding with the steps below.

Step 2: Set Up the Observer (ETCD and HAProxy)

Run this step on the Observer machine (172.16.13.42).

  1. Give MotadataETCDSetupU24 execute permissions and run it:

    chmod 777 MotadataETCDSetupU24
    ./MotadataETCDSetupU24

    Terminal showing MotadataETCDSetupU24 script executing on the Observer machine

  2. When asked whether to install and configure ETCD, type yes.

    Terminal showing ETCD installation prompt asking to install and configure ETCD

  3. When asked for the ETCD node number, enter 1. This setup uses a single ETCD node.

    ETCD node number prompt with value 1 entered for single-node setup

    Maximum ETCD Cluster Nodes

    This setup supports a maximum of 7 ETCD cluster nodes. This guide uses 1 node. Leave all other node prompts blank.

  4. Leave all additional ETCD node prompts blank and press Enter.

    ETCD additional node prompts left blank

  5. When asked whether to install and configure HAProxy, type yes.

    HAProxy installation prompt asking to install and configure HAProxy

  6. Enter the DB Node 1 (Master DB) IP address when prompted — for example, 172.16.12.202. Then enter the DB Node 2 (Slave DB) IP address — for example, 172.16.12.216.

    Terminal showing HAProxy confirmation message after typing yes

    HAProxy DB node IP address prompts with Master and Slave DB IPs entered

    note

    DB Node 1 = Master DB. DB Node 2 = Slave DB.

  7. Enter the APP Node 1 (Master APP) IP address — for example, 172.16.12.171. Then enter the APP Node 2 (Slave APP) IP address — for example, 172.16.12.177.

    Terminal showing HAProxy APP node IP address prompts with Master and Slave APP IPs entered

    note

    APP Node 1 = Master APP. APP Node 2 = Slave APP.

ETCD and HAProxy setup is now complete.

Verify the HAProxy Configuration

Open /etc/haproxy/haproxy.cfg and confirm it matches this structure:

HAProxy configuration file showing global, defaults, listen stats, listen postgres, and listen backend blocks

global
maxconn 100

defaults
log global
mode tcp
retries 2
timeout client 30m
timeout connect 4s
timeout server 30m
timeout check 5s

listen stats
mode http
bind *:7000
stats enable
stats uri /

listen postgres
bind *:5000
option httpchk
http-check expect status 200
default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
server etcd1 172.16.12.202:5432 maxconn 100 check port 8008
server etcd2 172.16.12.216:5432 maxconn 100 check port 8008

listen backend
bind *:80
balance roundrobin
mode tcp
option tcp-check
server ubuntu1 172.16.12.171:80 check port 80
server ubuntu2 172.16.12.177:80 check port 80

Verify Firewall Rules

Open ports 80 and 443 on the Observer and both APP machines. Open ports 2379, 2380, and 5000 on the Observer. Open ports 5432, 7000, and 8008 on both DB machines.
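On Ubuntu with ufw, the rules above can be applied as follows. This is a sketch under the assumption that ufw (or firewalld on RHEL) manages the firewall — adapt it to whatever firewall tooling your environment uses:

```shell
# On the Observer:
sudo ufw allow 80,443,2379,2380,5000/tcp

# On APP 1 and APP 2:
sudo ufw allow 80,443/tcp

# On DB 1 and DB 2:
sudo ufw allow 5432,7000,8008/tcp

# RHEL equivalent (per port), e.g. on a DB machine:
# sudo firewall-cmd --permanent --add-port=5432/tcp
# sudo firewall-cmd --reload
```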

Verify ETCD Service Status

systemctl status motadata_etcd

If the service is inactive, start it:

systemctl start motadata_etcd

Step 3: Configure Master DB with Patroni

Run this step on DB 1 (Master DB) (172.16.12.202).

  1. Give MotadataPatroniSetupU24 execute permissions and run it:

    chmod 777 MotadataPatroniSetupU24
    ./MotadataPatroniSetupU24
  2. When asked whether this machine is Node 1 or Node 2, enter 1 (Master). Then enter the Slave DB IP address when prompted — for example, 172.16.12.216.

    Terminal showing Patroni setup prompt asking for node number with 1 entered for Master DB

  3. Enter the ETCD IP address when prompted — for example, 172.16.13.42.

    Patroni setup prompt asking for ETCD IP address

Patroni setup on Master DB is complete.

Step 4: Configure Slave DB with Patroni

Run this step on DB 2 (Slave DB) (172.16.12.216).

  1. Give MotadataPatroniSetupU24 execute permissions and run it:

    chmod 777 MotadataPatroniSetupU24
    ./MotadataPatroniSetupU24
  2. When asked whether this machine is Node 1 or Node 2, enter 2 (Slave). Enter the Master DB IP address and the ETCD IP address when prompted.

    Terminal showing Patroni setup for Slave DB with node 2 selected and Master DB and ETCD IPs entered

  3. After setup, verify that the Patroni service on the Slave DB is inactive:

    systemctl status patroni

    Terminal showing Patroni service status as inactive on Slave DB

    If it shows running or active, stop it:

    systemctl stop patroni

Why Must Slave Patroni Be Inactive?

Patroni on the Slave DB starts automatically when the DB configuration runs on the Master and Slave DB machines. Starting it earlier can cause split-brain issues.

Step 5: Run DB Configuration on Master DB

Run this step on DB 1 (Master DB) (172.16.12.202).

  1. Retrieve the application DB password from the Master APP machine:

    cat /opt/flotomate/main-server/lib/boot-hosted-exec.conf

    Terminal showing boot-hosted-exec.conf output with the application DB password

    Copy the DB password from the output.

  2. On the Master DB, go to /opt/HA and copy MotadataPatroniHADBConfig to the home directory. Give it execute permissions and run it:

    chmod 777 MotadataPatroniHADBConfig
    ./MotadataPatroniHADBConfig
  3. Enter the DB password you copied in step 1.

  4. When asked whether to reload and restart members, enter y.

    Terminal showing prompt to reload and restart Patroni members with y entered

DB configuration on Master DB is complete.

Step 6: Run DB Configuration on Slave DB

Run this step on DB 2 (Slave DB) (172.16.12.216).

  1. Run MotadataPatroniHADBConfig:

    ./MotadataPatroniHADBConfig
  2. Enter the same DB password you used on the Master DB.

    Terminal showing MotadataPatroniHADBConfig password prompt on Slave DB

DB configuration on Slave DB is complete.

Step 7: Configure Master Application

Run this step on APP 1 (Master APP) (172.16.12.171).

Copy MotadataPatroniHAMasterSlaveAppConfig from the Master DB /opt/HA folder to the Master APP machine. Give it execute permissions and run it:

chmod 777 MotadataPatroniHAMasterSlaveAppConfig
./MotadataPatroniHAMasterSlaveAppConfig
  1. Enter the Observer IP address when prompted — for example, 172.16.13.42.

    Terminal showing MotadataPatroniHAMasterSlaveAppConfig prompt asking for Observer IP address

  2. When asked whether this machine is master or slave, enter master.

    Terminal showing role prompt with master entered for APP 1

  3. The Master application configuration is now complete.

    Terminal showing Master application configuration complete message

Step 8: Configure Slave Application

Run this step on APP 2 (Slave APP) (172.16.12.177).

Copy MotadataPatroniHAMasterSlaveAppConfig to the Slave APP machine. Give it execute permissions and run it:

chmod 777 MotadataPatroniHAMasterSlaveAppConfig
./MotadataPatroniHAMasterSlaveAppConfig
  1. Enter the Observer IP address when prompted — for example, 172.16.13.42.

  2. When asked whether this machine is master or slave, enter slave.

    Terminal showing role prompt with slave entered for APP 2

  3. Enter the Master APP DB password when prompted.

    Use the Master APP DB Password

    The DB password must be identical across all machines. Always retrieve it from the Master APP's configuration file.

    Terminal showing MotadataPatroniHAMasterSlaveAppConfig prompt asking for Master APP DB password

Slave application configuration is complete.

Step 9: Configure HA Observer for Application Failover

Run this step on the Observer machine (172.16.13.42).

Run as a Normal User — Not Root

This script must run as a normal (non-root) user. Running as root or with sudo will cause it to fail.

  1. Download service_desk_ha_CI from the Motadata docs portal. Give it execute permissions and run it:

    chmod 777 service_desk_ha_CI
    ./service_desk_ha_CI

    Terminal showing service_desk_ha_CI script starting and beginning SSH key pair generation

  2. The script generates SSH key pairs. Press Enter repeatedly until key generation completes.

  3. Enter the following when prompted:

    | Prompt | Value |
    | --- | --- |
    | SSH username | The OS user on the APP machines |
    | SSH port | Default is 22 |
    | Master APP IP | 172.16.12.171 |

    Terminal showing Observer script prompting for SSH username, port, and Master APP IP address

  4. Enter the Master APP password twice when prompted.

    Terminal showing Observer script prompting for Master APP SSH password confirmation

  5. Enter the Slave APP IP address — for example, 172.16.12.177. Enter the Slave APP password twice when prompted.

  6. When asked for email configuration, enter test for now.

    Terminal showing Observer script prompting for Slave APP IP and password

  7. When prompted to generate key pairs a second time, press Enter.

    Terminal showing Observer script second key pair generation prompt

Observer configuration for application HA is complete. Check logs at:

cat /opt/HA/logs

Troubleshooting — Distributed HA

Nginx is running on the Slave APP

Nginx must remain inactive on the Slave APP. An active Nginx service on the Slave causes routing conflicts. Stop and disable it:

systemctl stop nginx
systemctl disable nginx

/opt/HA has incorrect ownership

If /opt/HA is owned by a different user than the one used during configuration, failover scripts will fail. Correct the ownership on both Master and Slave APP machines:

chown -R motadata:motadata /opt/HA

Machines cannot communicate with each other

One or more required ports are blocked by the firewall. Confirm the following ports are open: 80 and 443 on the Observer and both APP machines; 2379, 2380, and 5000 on the Observer; 5432, 7000, and 8008 on both DB machines.
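A quick way to check reachability is to probe each required port with netcat. A sketch using this guide's example IPs — substitute your own, and run it from each machine that needs to reach the others:

```shell
# Probe a host:port and report whether it is reachable. Requires nc.
check() {  # usage: check <host> <port>
  if nc -z -w 1 "$1" "$2" 2>/dev/null; then
    echo "OPEN   $1:$2"
  else
    echo "CLOSED $1:$2"
  fi
}

for p in 80 443 2379 2380 5000; do check 172.16.13.42 "$p"; done          # Observer
for p in 80 443; do check 172.16.12.171 "$p"; check 172.16.12.177 "$p"; done  # APP machines
for p in 5432 7000 8008; do check 172.16.12.202 "$p"; check 172.16.12.216 "$p"; done  # DB machines
```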

Patroni on Slave DB shows as active before DB config

Patroni started automatically during setup. Stop it before running the DB configuration on the Slave DB:

systemctl stop patroni


Configuring Air Gap in HA Environment

This section outlines how to configure the Airgap utility in an HA environment. This setup maintains high availability within the isolated network while using the external Airgap utility machine to securely fetch and transfer update metadata and patch binaries.

Prerequisites

Before beginning the installation, ensure the following requirements are met:

  • Internal Network: Three dedicated servers (Master, Slave, and HA Proxy) with static IPs and interconnectivity.
  • External Network: One Air Gap Utility Machine with stable internet access to reach vendor URLs and the Motadata Central Patch Repository.
  • Storage: A secure portable storage medium (e.g., encrypted USB or secure transfer host) for manual data movement.

Configuration Steps

1. Install the HA Cluster (Internal)

Set up the core ServiceOps architecture within your isolated network first.

  1. HA Proxy Setup: Install and configure the HA Proxy/Controller. This will act as the single entry point (192.168.x.x) for all users.
  2. Master & Slave Installation: Install ServiceOps on both the Master and Slave servers.
  3. Establish Sync: Configure DB Sync and File DB Sync between the Master and Slave to ensure the standby server remains identical to the active one. See the HA Configuration Steps section above for detailed CLI commands and configuration file edits.

2. Set Up the Air Gap Utility Machine (External)

The Utility Machine acts as the bridge for patch management.

  1. Network Configuration: Ensure this machine is not part of the internal ServiceOps domain. It only requires an internet connection.
  2. Utility Initialization: Install the Air Gap Utility tool provided by Motadata.
  3. Fetch Metadata & Patches:
    • Run the utility to download the latest Patch Catalog from vendor websites.
    • Connect to the Motadata Central Patch Repository to download required patch binaries.

3. Manual Data Transfer

Because the environment is air-gapped, data must be moved manually to the internal network.

  1. Export Data: Copy the downloaded binaries and metadata from the Utility Machine to your secure transfer medium.
  2. Import to Master: Connect the medium to the Active Master Server and move the files to the designated ServiceOps patch directory.
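Because the binaries cross the air gap on portable media, verifying their integrity after the copy is prudent. A sketch using sha256 checksums — the file names here are illustrative, and in practice you would run the first command on the Utility Machine before export and the second on the Master Server after import:

```shell
# Demo with a temporary directory; in practice, run over the patch directory.
dir=$(mktemp -d)
echo "patch-binary-contents" > "$dir/patch1.bin"

( cd "$dir" && sha256sum patch1.bin > patches.sha256 )   # before export
( cd "$dir" && sha256sum -c patches.sha256 )             # after import: prints "patch1.bin: OK"
rm -rf "$dir"
```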

4. Finalizing Synchronization

Once the data is on the Master Server, the HA architecture handles the rest:

  1. Importing: The Master Server automatically imports the new metadata and binaries into the application.
  2. Internal Replication: ServiceOps triggers a File DB Sync, automatically pushing the newly imported patches from the Master to the Slave server.
  3. Verification: Log in via the HA Proxy IP and navigate to Patch Management > Health Check to ensure both nodes recognize the new updates.
note

Always perform a manual backup of the Master Server database before importing new patch catalogs in an HA environment.
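A minimal backup sketch with pg_dump — the database name (servicedesk), user (postgres), and backup directory below are placeholders; substitute your actual ServiceOps database name and credentials:

```shell
# Skip gracefully on machines without the PostgreSQL client tools.
command -v pg_dump >/dev/null 2>&1 || { echo "pg_dump not installed"; exit 0; }

BACKUP_DIR=/opt/backups
STAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"

# Custom-format dump; restore later with pg_restore if needed.
pg_dump -U postgres -d servicedesk -F c \
  -f "${BACKUP_DIR}/servicedesk_${STAMP}.dump" \
  || echo "backup failed — check the DB name and credentials"
```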

Next Steps

After completing the HA deployment, you may need to set up additional components depending on your requirements. Below are links to guides for other server components that you can deploy.