Installing a manual failover environment

A manual failover environment consists of two primary servers, one production server and one failover server, that can both access file systems installed on a shared file server. If the production server becomes unavailable, you can move processing to the failover server until the production server is available again.

Installing a manual failover environment is a complex procedure. Before you start the configuration process, consult with your system administrator. Manual failover is primarily used in enterprise environments and might not be appropriate for your setting.

This procedure assumes that you use NFS for file sharing. Based on your system requirements and workflow, you might need a different setup, such as a SAN or NAS. Determine your system requirements and use the best technology for your company. Use the following steps as a guide to set up your system.

Before you start this procedure, open the required ports in your firewall to allow communication between your file server, production server, and failover server. Also, make sure that you have completed any prerequisite procedures.

Whether you are installing using a DVD or an ISO image, make sure that you can access the installation media from the production server, the failover server, and the file server.

To install a manual failover environment:

  1. Determine the GID for each of these system groups. See Creating system groups and users for more information about system groups. You must use the same system group names and GID values on the production and failover systems. The defaults are listed below. If you choose to use different values, record them here for future reference.
    Group Name    Default GID    Database configuration
    printq        1002           DB2, PostgreSQL
    aiwgrp1       32458          DB2, PostgreSQL
    docker        977            PostgreSQL
    aiwdbgrp      1000           DB2
    aiwdbfgp      1001           DB2
  2. Determine the UID values for each of these user names. You must use the same system user names and UIDs on the production and failover systems. See Creating system groups and users for more information about system users. The values are listed below.
    User name    Default UID    Group Membership    Database configuration
    aiw1         32457          aiwgrp1             DB2, PostgreSQL
                                printq              DB2, PostgreSQL
                                aiwdbgrp            DB2
                                docker              PostgreSQL
    aiwinst      1000           aiwdbgrp            DB2
    aiwdbfid     1001           aiwdbfgp            DB2
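The group checks in steps 1 and 2 can be scripted so you can run the same verification on both servers. This is a minimal sketch, assuming a Linux system with getent available; the expected values are the defaults from the tables above, and the script names are not part of the product.

```shell
#!/bin/sh
# Look up the default GID this procedure expects for each system group.
expected_gid() {
  case "$1" in
    printq)   echo 1002 ;;
    aiwgrp1)  echo 32458 ;;
    docker)   echo 977 ;;
    aiwdbgrp) echo 1000 ;;
    aiwdbfgp) echo 1001 ;;
  esac
}

# Compare the GID defined on this host against the expected default.
check_group() {
  actual=$(getent group "$1" | cut -d: -f3)
  if [ "$actual" = "$(expected_gid "$1")" ]; then
    echo "$1: OK (GID $actual)"
  else
    echo "$1: expected GID $(expected_gid "$1"), found ${actual:-none}"
  fi
}

# Run the same checks on the production server and on the failover server.
for g in printq aiwgrp1 docker aiwdbgrp aiwdbfgp; do
  check_group "$g"
done
```

The same pattern works for the user UIDs in step 2 with getent passwd and cut -d: -f3.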
  3. Determine the hostnames for each of these servers.
    Server Description
    Production server The system that has the primary server installed and that RICOH ProcessDirector runs on during normal operations.
    Failover server The system that RICOH ProcessDirector runs on as a backup when the production server is unavailable.
    File server The system set up by a network administrator which hosts files such as installed code, configuration files, data files, and the database. Might be a SAN or NAS.
    Note: A DNS can be set to use a single hostname alias to redirect to either the production or failover server, depending on which system is active. With this configuration, users can access the system from a single URL.
  4. Log in to the file server as an administrator.
  5. Open a command line. Go to the directory where the installation media is located and into the scripts directory. Find failover-create-shares.sh.
    If you have custom share paths or are using a technology other than NFS, copy failover-create-shares.sh to /tmp. Edit the script to match your system configuration.
  6. Run the script.
    In a PostgreSQL configuration, type:
    ./failover-create-shares.sh postgresql
    In a DB2 configuration, type:
    ./failover-create-shares.sh db2
  7. Verify that the script created these directories on the file server:
    • /aiw/aiwdata
    • /aiw/aiwpath
    • /aiw/varaiw
    • /aiw/homeaiw1
    • /aiw/homeaiwinst (this directory is created only when using a DB2 database)
    • /aiw/homeaiwdbfid (this directory is created only when using a DB2 database)
    • /aiw/varpsf
    • /aiw/docker-volumes (this directory is created only when using a PostgreSQL database)
  8. In the directory where the installation media is located, type: scripts/failover-update_exports.sh to add these shares to NFS.
  9. Restart NFS. Then type: showmount -e and cat /etc/exports to view the export settings and confirm that they are correct on the file server.
    Make sure the added shares are correct, and check the flags and permissions of each share.
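For reference, NFS export entries for these shares might look like the following. This is a hypothetical fragment, not output from the product's scripts: prodserver and failserver are placeholder hostnames, and the export options are examples that you should adjust to your site's security policy.

```
# Example /etc/exports entries (hypothetical hosts and options):
/aiw/aiwdata  prodserver(rw,sync,no_root_squash)  failserver(rw,sync,no_root_squash)
/aiw/varaiw   prodserver(rw,sync,no_root_squash)  failserver(rw,sync,no_root_squash)

# After editing, re-export the shares and confirm them:
#   exportfs -ra
#   showmount -e
```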
  10. Log in to the production server as the root user and mount the shared directories:
    1. Open a command line. Go to the directory where the installation media is located.
    2. Run the script.
      In a PostgreSQL configuration, type:
      scripts/failover-create-mountpoints.sh postgresql
      In a DB2 configuration, type:
      scripts/failover-create-mountpoints.sh db2
    3. If the directory /usr/local/bin does not exist, type: mkdir -p /usr/local/bin and press Enter.
    4. Copy scripts/mountDrives.sh from the installation media to /usr/local/bin
    5. Using a text editor, edit mountDrives.sh. Make sure you change the file server value to the name of your file server.
    6. If you are not using NFS to share and mount the file systems, modify the script to run the appropriate commands to mount them.
    7. To make the script executable, type: chmod +x /usr/local/bin/mountDrives.sh and press Enter.
    8. To run the script, type: /usr/local/bin/mountDrives.sh and press Enter.
    9. To confirm the shared directories are mounted, type: df and press Enter.
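As a reference point, the edited mountDrives.sh might follow this pattern. This is a hypothetical sketch, not the shipped script: FILESERVER is a placeholder you replace with your file server's hostname, NFS is assumed, and the sketch prints each mount command so you can review it before running it.

```shell
#!/bin/sh
# Hypothetical sketch of an edited mountDrives.sh; replace FILESERVER
# with the hostname of your file server.
FILESERVER=fileserver.example.com

# Build the NFS mount command for one shared directory.
mount_cmd() {
  echo "mount -t nfs ${FILESERVER}:$1 $1"
}

# Print the command for each share; run them once you have reviewed them.
for dir in /aiw/aiwdata /aiw/aiwpath /aiw/varaiw /aiw/homeaiw1 /aiw/varpsf; do
  mount_cmd "$dir"
done
```

If you use a SAN or NAS instead of NFS, mount_cmd is the one place you would change.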
  11. Create users on the production server:
    1. Copy scripts/failover-user-configuration from the installation media to /tmp on the production server.
    2. Using a text editor, open failover-user-configuration. Compare the system user and system group values to the system user and system group values from steps 1 and 2. If you are using the default values, these values do not need to be changed.
    3. To run the script, go to the directory where the installation media is located and type: scripts/failover-create-users.sh /tmp/failover-user-configuration and press Enter.
    4. Type: id username for each user name to verify it was created.
      For example, if you type: id aiw1, your output might look like:

      uid=3133(aiw1) gid=1038(ipserv) groups=10(wheel),1038(ipserv),111(staff1)

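The per-user id checks can be looped. A minimal sketch, assuming the default user names from step 2; in a PostgreSQL configuration, drop aiwinst and aiwdbfid from the list, because they apply only to DB2.

```shell
#!/bin/sh
# Report whether each required account exists on this server.
check_user() {
  if id "$1" >/dev/null 2>&1; then
    echo "$1 exists: $(id "$1")"
  else
    echo "$1 is missing"
  fi
}

# Default DB2 user names; remove aiwinst and aiwdbfid for PostgreSQL.
for u in aiw1 aiwinst aiwdbfid; do
  check_user "$u"
done
```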
  12. Install RICOH ProcessDirector on the production server. See Installing the base product.
    When prompted for system users and system groups, use the same values you used in the scripts and select the system user (aiw1 is the default). Do not reboot the system after installing RICOH ProcessDirector. The mapped drives might need to be re-mapped if the system is rebooted.
  13. Continue with Logging in for the first time, but do not complete the Verifying the installation procedure. Return to this section to complete the manual failover environment installation.
  14. Completely shut down RICOH ProcessDirector on the production server:
    1. Log in to the production server as the system user (aiw1 is the default).
    2. If you run in a PostgreSQL configuration, go to the directory where the installation media is located. Type scripts/failover-docker-setup.sh and press Enter.
    3. Open a command line and type: stopaiw
    4. Type: su - root and press Enter. When prompted, enter the password for the root user and press Enter.
    5. If you run in a DB2 configuration, type: /opt/infoprint/ippd/db/bin/db2fmcu -d
    6. If you run in a DB2 configuration, type: ps -ef | grep db2 to display all db2 processes that are still running. To end each db2 process, type:
      kill followed by each of the process IDs listed in the results of the grep command. For example, your results might look similar to:
      dasusr1  14729     1  0 Aug24 ?   00:00:01 /home/dasusr1/das/
                                                  adm/db2dasrrm
      root     18266     1  0 Aug24 ?   00:15:08 /opt/infoprint/ippd/db/
                                                  bin/db2fmcd
      dasusr1  18342     1  0 Aug24 ?   00:00:23 /opt/infoprint/ippd/db/das/
                                                  bin/db2fmd -i dasusr1 -m /
                                                  opt/infoprint/ippd/db/das/
                                                  lib/libdb2dasgcf.so.1
      root     21049     1  0 Sep01 ?   00:00:00 db2wdog 0 [aiwinst] 
      aiwinst  21051 21049  0 Sep01 ?   01:13:01 db2sysc 0  
      root     21059 21049  0 Sep01 ?   00:00:00 db2ckpwd 0 
      aiwinst  21061 21049  0 Sep01 ?   00:00:00 db2vend (PD Vendor 
                                                 Process - 1) 0    

      In these results, the process IDs are listed in the second column. To end the first process in the list, type: kill 14729 and press Enter.

    7. Type: ps -ef | grep psfapid to display all psfapid processes. To end each psfapid process, type:
      kill followed by each of the process IDs listed in the results of the grep command.
    8. Type: ps -ef | grep aiw1 to display all aiw1 processes. To end each aiw1 process, type:
      kill followed by each of the process IDs listed in the results of the grep command.
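The process cleanup in the preceding sub-steps can be partly scripted. A hedged sketch, assuming pgrep is available: rather than killing anything directly, it prints a kill command for each matching process ID so you can review the list first.

```shell
#!/bin/sh
# Print a kill command for each PID, tagged with the pattern it matched.
kill_commands() {
  pattern=$1; shift
  for pid in "$@"; do
    echo "kill $pid    # $pattern"
  done
}

# Gather leftover PIDs for each pattern and show what you would run.
# The unquoted $(pgrep ...) is intentional: PIDs are split on spaces.
for pattern in db2 psfapid aiw1; do
  kill_commands "$pattern" $(pgrep -f "$pattern")
done
```

Review the printed commands, then run them one at a time, as the procedure describes.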
  15. Log in to the failover server as the root user and mount the shared directories:
    1. Run the script.
      In a PostgreSQL configuration, type:
      scripts/failover-create-mountpoints.sh postgresql
      In a DB2 configuration, type:
      scripts/failover-create-mountpoints.sh db2
    2. If the directory /usr/local/bin does not exist, type: mkdir -p /usr/local/bin and press Enter.
    3. Copy scripts/mountDrives.sh from the installation media to /usr/local/bin.
    4. Using a text editor, edit mountDrives.sh. Make sure you change the file server value to the name of your file server.
    5. If you are not using NFS to share and mount the file systems, modify the script to run the appropriate commands to mount them.
    6. To make the script executable, type: chmod +x /usr/local/bin/mountDrives.sh and press Enter.
    7. To run the script, type: /usr/local/bin/mountDrives.sh and press Enter.
    8. To confirm the shared directories are mounted, type: df and press Enter.
  16. Create users on the failover server:
    1. Copy scripts/failover-user-configuration from the installation media to /tmp on the failover server.
    2. Using a text editor, open failover-user-configuration. Compare the system user and system group values to the system user and system group values from steps 1 and 2. If you are using the default values, these values do not need to be changed.
    3. To run the script, go to the directory where the installation media is located, type: scripts/failover-create-users.sh /tmp/failover-user-configuration then press Enter.
    4. Type: id username for each user name to verify it was created.
      For example, if you type: id aiw1, your output might look like:

      uid=3133(aiw1) gid=1038(ipserv) groups=10(wheel),1038(ipserv),111(staff1)

  17. On the failover server:
    1. Log in as the root user.
    2. Open a command line and go to the directory where the installation media is located. Type: scripts/failover-setup-rpd-node.sh and press Enter to run the script.
      The script adds entries to /etc/services, installs PSF if necessary, and updates the rpm database on the failover server.
    3. Type: /opt/infoprint/ippd/bin/changeHostname.pl production_server_hostname where production_server_hostname is the name of the production server.
    4. To verify the installation on the failover server, log in to the product again. This time, use the hostname of the failover server in the Web browser: http://failover_hostname:15080/pd where failover_hostname is the hostname of the failover server. If you can log in, the installation is successful.
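The login-page check can also be scripted. A sketch, assuming curl is installed; failover.example.com is a placeholder for your failover server's hostname, and the port and path come from the URL in this procedure.

```shell
#!/bin/sh
# Build the RICOH ProcessDirector user interface URL for a given host.
pd_url() {
  echo "http://$1:15080/pd"
}

# Probe the login page; prints the HTTP status code, or a note on failure.
curl -s -o /dev/null -w "%{http_code}\n" "$(pd_url failover.example.com)" \
  || echo "server not reachable"
```

A 200 status suggests the user interface is up, but logging in through a browser remains the definitive check.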
  18. Switch processing back to the production server:
    1. Log in to the failover server as the system user (aiw1 is the default).
    2. Open a command line and type: stopaiw
    3. Log in to the production server as the root user.
    4. On the production server, type: /opt/infoprint/ippd/bin/changeHostname.pl failover_server_hostname where failover_server_hostname is the name of the failover server. The failover server is currently acting as the primary server.
  19. Install the license keys for the production and failover servers. You must purchase two license keys, one per server.
    1. On the production server, install the license key for the production server. See Downloading and installing license keys.
    2. Open a command line and log in as the system user (aiw1 is the default) and type: stopaiw
    3. Switch processing to the failover server. On the failover server, open a command prompt as the root user and type: /opt/infoprint/ippd/bin/changeHostname.pl production_server_hostname where production_server_hostname is the name of the production server.
    4. On the failover server, install the license key for the failover server. See Downloading and installing license keys.
      When you open the RICOH ProcessDirector user interface on the failover server, you might see the message License key violation detected. Contact Software Support. This message does not appear after the license key is installed.
    5. Open a command line and log in as the system user (aiw1 is the default) and type: stopaiw
    6. Switch processing to the production server. On the production server, open a command prompt as the root user and type: /opt/infoprint/ippd/bin/changeHostname.pl failover_server_hostname where failover_server_hostname is the name of the failover server.
Any features installed on the production server are automatically available when you switch processing to the failover server.