System Installation Without Internet on Astra Linux

Note

Below is an example of a system installation in a closed loop (without internet access) on Astra Linux 1.7.X "Smolensk"

Warning

This instruction applies to the installation of Universe MDM version 6.9 and later, since the system has switched from Elasticsearch to Opensearch

Before you start:

  • The archive with the distribution kit is provided to the client upon purchase of the product by a manager of the "Universe Data" company.

  • Unpack the distribution archive, which contains the installation scripts, to any location. The contents will be placed in the MDM_manual_install_Astra_1.7 directory; below, this directory is referred to as <OFFLINE_REP>.

  • Copy the contents of <OFFLINE_REP> to the target server.

Installing Java

Note

Java must be installed on all servers where you plan to run Opensearch or Tomcat

All packages required for a correct OpenJDK installation are located in the Java directory.

To install Java:

  1. Navigate to the Java directory.

  2. Unzip the contents of the archive with the installation packages.

  3. In the unpacked directory, execute the command:

    sudo dpkg -i *.deb
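
To make sure Java was installed correctly, you can check the reported version (the exact version string depends on the packages in the kit):

    java -version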
    

Installing Opensearch

To install Opensearch:

  1. Navigate to the Opensearch directory and use the command:

    sudo dpkg -i *.deb
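
You can confirm that the package was registered by dpkg (this assumes the package in the kit is named opensearch):

    dpkg -s opensearch | grep -E 'Status|Version'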
    

Installing dictionaries

  1. Copy the ./<OFFLINE_REP>/Opensearch/hunspell directory to /etc/opensearch/:

    sudo cp -rv ./<OFFLINE_REP>/Opensearch/hunspell /etc/opensearch
    
  2. Grant permissions for the new directory:

    sudo chown -R root:opensearch /etc/opensearch/hunspell/
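
To verify that the dictionaries were copied and the ownership was applied:

    ls -ld /etc/opensearch/hunspell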
    

Installing the plugin

  1. Run the command with the full path to the analysis-icu-*.zip archive:

    sudo /usr/share/opensearch/bin/opensearch-plugin install /<OFFLINE_REP>/Opensearch/analysis-icu-2.7.0.zip
    
    # if you're in the directory with the file analysis-icu-*.zip
    # sudo /usr/share/opensearch/bin/opensearch-plugin install file://`pwd`/analysis-icu-2.7.0.zip
    
  2. The result of the command will look like this:

    Installed analysis-icu with folder name analysis-icu
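
You can additionally verify that the plugin is registered by listing the installed plugins:

    sudo /usr/share/opensearch/bin/opensearch-plugin list
    # the output should include: analysis-icu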
    

Opensearch Configuration

  1. Back up and clear the configuration file, then open it with any editor:

    sudo cp -i /etc/opensearch/opensearch.yml /etc/opensearch/opensearch.yml_before_ssl && sudo truncate -s 0 /etc/opensearch/opensearch.yml # back up and clear the file
    sudo vi /etc/opensearch/opensearch.yml
    
  2. Specify the cluster.name parameter, for example:

    cluster.name: mdm-os-cluster
    

The cluster name will be used in the application settings for connection. Specify each parameter in the file on a new line.

  3. By default, Opensearch listens only on localhost. If the Tomcat application is installed on another server and/or Opensearch will be used in a cluster, allow connections from other interfaces by specifying the parameter:

    network.host: 0.0.0.0

  4. You must also open port 9200 for the nodes/applications that will connect to Opensearch (see the firewall sketch after the example file below).

  5. If you plan to use only one Opensearch node, specify the parameter:

    discovery.type: single-node

  6. Specify the directories for logs and data:

    path.data: /var/lib/opensearch
    path.logs: /var/log/opensearch

  7. If SSL encryption and authorization are not required, specify the parameter:

    plugins.security.disabled: true
    

Example of the final opensearch.yml file to run Opensearch on a single node without ssl and authorization:

cluster.name: mdm-os-cluster
network.host: 0.0.0.0
discovery.type: single-node
path.data: /var/lib/opensearch
path.logs: /var/log/opensearch
plugins.security.disabled: true
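
If a host firewall is enabled on the server, port 9200 must also be allowed for the hosts that will connect. A minimal sketch, assuming ufw is used (the address 10.10.24.100 stands for the application server and is only an example); adapt the commands to the firewall adopted in your environment:

    sudo ufw allow from 10.10.24.100 to any port 9200 proto tcp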

Configuring an Opensearch cluster

Note

This is an example of a configuration without using ssl

For a cluster configuration using multiple Opensearch servers, change a number of settings in the /etc/opensearch/opensearch.yml file on each server:

  1. Set a unique node.name for each node in the Opensearch cluster:

    # for the first server:
    node.name: node01
    
    # ...
    # for the N-th server:
    # node.name: nodeN
    
  2. List the hostnames or IP addresses of all servers that are planned to be clustered in the following parameters:

    cluster.initial_master_nodes: ["10.10.24.90","10.10.24.91", "10.10.24.92"]
    discovery.seed_hosts: ["10.10.24.90", "10.10.24.91", "10.10.24.92"]
    
  3. Comment out or remove the following parameter, as it conflicts with the cluster setting:

    #discovery.type: single-node
    

Opensearch cluster servers communicate with each other using port 9300, which must be open between them.
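
A minimal sketch of opening this port, again assuming ufw is used (10.10.24.91 here stands for a neighboring Opensearch node and is only an example):

    sudo ufw allow from 10.10.24.91 to any port 9300 proto tcp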

Opensearch RAM consumption settings

You must configure the amount of allocated RAM in the /etc/opensearch/jvm.options file. The action is performed on each node in the cluster.

The values of allocated RAM should not be more than 50% of the total RAM (provided that no other resource-intensive applications are installed on the server) and not more than 32 GB. Xms must be equal to Xmx. In the example below, the value is 16 GB:

-Xms16g
-Xmx16g
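
After editing, you can make sure that exactly one pair of heap settings remains in the file:

    grep -E '^-Xm[sx]' /etc/opensearch/jvm.options
    # expected output:
    # -Xms16g
    # -Xmx16g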

Configuring Opensearch SSL

Tip

If no ssl connection is required, the following steps can be skipped

SSL requires certificates, which are generated on one node and then copied to the other nodes.

  1. On each node, delete the default demo certificates or move them to a separate directory, as they may interfere with startup:

    cd /etc/opensearch && mkdir -p /opt/_os_default_demo_certs_ && mv *.pem /opt/_os_default_demo_certs_
    
  2. Generate certificates (on one of the nodes) and navigate to the directory for certificates:

    mkdir -p /etc/opensearch/ssl && cd /etc/opensearch/ssl
    
  3. Create the root certificate and administrator certificate that you will need to manage the settings:

    # Root CA
    openssl genrsa -out root-ca-key.pem 2048
    openssl req -new -x509 -sha256 -key root-ca-key.pem -subj "/C=CA/O=ORG/OU=UNIT/CN=root-ca.dns.record" -out root-ca.pem -days 9999
    
    # Admin cert
    openssl genrsa -out admin-key-temp.pem 2048
    openssl pkcs8 -inform PEM -outform PEM -in admin-key-temp.pem -topk8 -nocrypt -v1 PBE-SHA1-3DES -out admin-key.pem
    openssl req -new -key admin-key.pem -subj "/C=CA/O=ORG/OU=UNIT/CN=A" -out admin.csr
    openssl x509 -req -in admin.csr -CA root-ca.pem -CAkey root-ca-key.pem -CAcreateserial -sha256 -out admin.pem -days 9999
    
    # clear intermediate files that are no longer needed
    rm admin-key-temp.pem
    rm admin.csr
    
    # -days N - certificate validity period in days
    
  4. Create a certificate for each node in the Opensearch cluster:

    # Node cert 1
    openssl genrsa -out node1-key-temp.pem 2048
    openssl pkcs8 -inform PEM -outform PEM -in node1-key-temp.pem -topk8 -nocrypt -v1 PBE-SHA1-3DES -out node1-key.pem
    openssl req -new -key node1-key.pem -subj "/C=CA/O=ORG/OU=UNIT/CN=opensearch-node-1" -out node1.csr
    echo 'subjectAltName=DNS:opensearch-node-1' > node1.ext
    openssl x509 -req -in node1.csr -CA root-ca.pem -CAkey root-ca-key.pem -CAcreateserial -sha256 -out node1.pem -days 9999 -extfile node1.ext
    
    # Node cert 2
    openssl genrsa -out node2-key-temp.pem 2048
    openssl pkcs8 -inform PEM -outform PEM -in node2-key-temp.pem -topk8 -nocrypt -v1 PBE-SHA1-3DES -out node2-key.pem
    openssl req -new -key node2-key.pem -subj "/C=CA/O=ORG/OU=UNIT/CN=opensearch-node-2" -out node2.csr
    echo 'subjectAltName=DNS:opensearch-node-2' > node2.ext
    openssl x509 -req -in node2.csr -CA root-ca.pem -CAkey root-ca-key.pem -CAcreateserial -sha256 -out node2.pem -days 9999 -extfile node2.ext
    
    # clean up intermediate files that are no longer needed
    rm node1-key-temp.pem
    rm node1.csr
    rm node1.ext
    rm node2-key-temp.pem
    rm node2.csr
    rm node2.ext
    
  • Above is an example for two nodes; for each subsequent node it is important to:

    • Replace all file names in the commands that contain nodeN (e.g. node1-key-temp.pem -> node3-key-temp.pem; node1-key.pem -> node3-key.pem, etc.)

    • Specify a valid hostname/DNS address of each opensearch node in the CN= parameter: -subj "/C=CA/O=ORG/OU=UNIT/CN=opensearch-node-1".

  5. Copy the resulting files to the /etc/opensearch/ssl directory on all Opensearch nodes and grant permissions on the directory and files:

    chown opensearch:opensearch -R /etc/opensearch/ssl
    chmod +x /etc/opensearch/ssl
    chmod 600 /etc/opensearch/ssl/*
    chmod 644 /etc/opensearch/ssl/root-ca.pem
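
Before copying, it may be useful to check each node certificate with standard openssl inspection commands, for example:

    # show the subject and expiration date
    openssl x509 -subject -enddate -noout -in /etc/opensearch/ssl/node1.pem
    # verify the certificate against the root CA
    openssl verify -CAfile /etc/opensearch/ssl/root-ca.pem /etc/opensearch/ssl/node1.pem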
    

Opensearch.yml file configuration (SSL)

  1. On each node, back up and clear the configuration file, then open it with any editor:

    sudo cp -i /etc/opensearch/opensearch.yml /etc/opensearch/opensearch.yml_before_ssl && sudo truncate -s 0 /etc/opensearch/opensearch.yml # clear the file
    sudo vi /etc/opensearch/opensearch.yml
    
  2. Insert the following content:

    cluster.name: universe-os-cluster
    network.host: opensearch-node-1
    node.name: node01
    
    cluster.initial_master_nodes: ["opensearch-node-1","opensearch-node-2"]
    discovery.seed_hosts: ["opensearch-node-2"]
    
    # for cluster the parameter below should be removed, or commented out
    #discovery.type: single-node
    
    plugins.security.ssl.transport.pemcert_filepath: /etc/opensearch/ssl/node1.pem
    plugins.security.ssl.transport.pemkey_filepath: /etc/opensearch/ssl/node1-key.pem
    plugins.security.ssl.transport.pemtrustedcas_filepath: /etc/opensearch/ssl/root-ca.pem
    plugins.security.ssl.transport.enforce_hostname_verification: false
    
    # enable ssl 9200
    plugins.security.ssl.http.enabled: true
    plugins.security.ssl.http.pemcert_filepath: /etc/opensearch/ssl/node1.pem
    plugins.security.ssl.http.pemkey_filepath: /etc/opensearch/ssl/node1-key.pem
    plugins.security.ssl.http.pemtrustedcas_filepath: /etc/opensearch/ssl/root-ca.pem
    plugins.security.allow_default_init_securityindex: true
    plugins.security.authcz.admin_dn:
      - 'CN=A,OU=UNIT,O=ORG,C=CA'
    plugins.security.nodes_dn:
      - 'CN=opensearch-node-1,OU=UNIT,O=ORG,C=CA'
      - 'CN=opensearch-node-2,OU=UNIT,O=ORG,C=CA'
    
    plugins.security.enable_snapshot_restore_privilege: true
    plugins.security.check_snapshot_restore_write_privileges: true
    plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
    cluster.routing.allocation.disk.threshold_enabled: false
    plugins.security.audit.config.disabled_rest_categories: NONE
    plugins.security.audit.config.disabled_transport_categories: NONE
    
    plugins.security.allow_unsafe_democertificates: false
    
    path.data: /var/lib/opensearch
    path.logs: /var/log/opensearch
    
  3. Adjust the following parameters to match your environment:

  • Cluster-wide parameters in the file:

    • cluster.name - must be the same for all nodes in the cluster

    • cluster.initial_master_nodes - specify all nodes in the cluster

    • network.host - if 0.0.0.0 is specified, all available interfaces and ip addresses are used; otherwise, a specific ip address of the current node, reachable from all other nodes, can be specified.

    • plugins.security.authcz.admin_dn - the attributes of the admin certificate; you can get the value in the required format with the command: openssl x509 -subject -nameopt RFC2253 -noout -in admin.pem

    • plugins.security.nodes_dn - specifies all nodes in the cluster. The attributes that were used when generating the certificates must be specified:

      • CN=opensearch-node-1,OU=UNIT,O=ORG,C=CA

      • CN=opensearch-node-2,OU=UNIT,O=ORG,C=CA

      The format differs from the one specified during generation. To display the subject in the required format, run the command for each certificate: openssl x509 -subject -nameopt RFC2253 -noout -in node1.pem (likewise for node2.pem, ..., nodeN.pem).

  • Parameters unique to each node:

    • Unique node.name.

    • For the certificate path parameters, specify the certificates of the current cluster node, for example:

      • plugins.security.ssl.transport.pemcert_filepath: /etc/opensearch/ssl/node1.pem

      • plugins.security.ssl.transport.pemkey_filepath: /etc/opensearch/ssl/node1-key.pem

      • plugins.security.ssl.http.pemcert_filepath: /etc/opensearch/ssl/node1.pem

      • plugins.security.ssl.http.pemkey_filepath: /etc/opensearch/ssl/node1-key.pem

    • Specify the neighboring nodes of the cluster in the discovery.seed_hosts parameter. If 3 nodes are used, then for opensearch-node-1 the parameter will look like this: ["opensearch-node-2", "opensearch-node-3"]

Creating key stores for the application's connection to Opensearch (SSL)

  1. Create a truststore and place the host's public key certificate there:

    keytool -import -trustcacerts -file node1.pem -keystore app-truststore.jks
    
  2. Enter and confirm the password (for example, MY-TrustStore-pswd), then answer yes to the question Trust this certificate?

  3. Create a keystore and place the admin certificate with its private key there. Convert them to PKCS12 format, entering and confirming an export password when prompted (for example, MY-p_12-pswd):

    openssl pkcs12 -export -in admin.pem -inkey admin-key.pem -out admin.p12 -CAfile root-ca.pem -caname root

  4. Import the resulting admin.p12 file into the keystore:

    keytool -importkeystore -destkeystore app-keystore.jks -srckeystore admin.p12 -srcstoretype PKCS12

  5. Set a new password for the keystore and confirm it; it will be required in the application configuration. For example, MY-keyStore-pswd.

  6. Enter the previously set export password MY-p_12-pswd.

    • The command can be executed without interactive password entry by passing the passwords as arguments, for example:

      keytool -importkeystore -deststorepass MY-keyStore-pswd -destkeypass MY-keyStore-pswd -destkeystore app-keystore.jks -srckeystore admin.p12 -srcstoretype PKCS12 -srcstorepass MY-p_12-pswd
      
  7. Delete the intermediate file:

    rm -f  admin.p12
    

The passwords and the resulting .jks files will be required when configuring the application
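
You can inspect the resulting stores with keytool to make sure the certificates were imported (the passwords are the example values used above):

    keytool -list -keystore app-truststore.jks -storepass MY-TrustStore-pswd
    keytool -list -keystore app-keystore.jks -storepass MY-keyStore-pswd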

Launching Opensearch

  1. Reload the systemd service configuration:

    sudo systemctl daemon-reload
    
  2. Enable Opensearch to start automatically at boot:

    sudo systemctl enable opensearch.service
    
  3. Start Opensearch:

    sudo systemctl start opensearch.service
    
  4. Check the status:

    sudo systemctl status opensearch.service
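
If the service does not start, the logs usually point to the reason:

    sudo journalctl -u opensearch.service -n 50 --no-pager
    # Opensearch also writes its own logs to path.logs, e.g.:
    # tail -n 50 /var/log/opensearch/mdm-os-cluster.log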
    

Opensearch startup check

  1. Check Opensearch nodes:

    curl 'https://localhost:9200' -k -u 'admin:admin'
    
  2. Check cluster status:

    curl 'https://localhost:9200/_cluster/health?pretty' -k -u 'admin:admin'
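
The commands above assume that SSL and authorization are enabled. If Opensearch was configured without them (plugins.security.disabled: true), use plain http and omit the credentials:

    curl 'http://localhost:9200/_cluster/health?pretty'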
    

Note

It is important that "status" is "green" and that the cluster name and number of nodes match the configured values

Changing the Opensearch login/password

Opensearch users are stored in the file /etc/opensearch/opensearch-security/internal_users.yml. The password is stored as a hash, for example:

admin:
  hash: "$2a$12$VcCDgh2NDk07JGN0rjGbM.Ad41qVR/YFJcgHp0UGns5JDymv..TOG"
  1. To get a hash of the password to use, you need to run the command, enter your password, and get a hash that you can paste into this config:

    bash /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh
    
  2. To apply the file settings, run the script:

    cd /usr/share/opensearch/plugins/opensearch-security/tools && \
    ./securityadmin.sh -cd /etc/opensearch/opensearch-security -icl -nhnv -h localhost \
       -cacert /etc/opensearch/ssl/root-ca.pem \
       -cert /etc/opensearch/ssl/admin.pem \
       -key /etc/opensearch/ssl/admin-key.pem
    
  3. The result of successful execution of the command: Done with success.

The script writes the data to a security index shared by the whole cluster. To avoid inconsistency and the possible application of the command from another node, transfer the changes in this file to all nodes of the cluster.

In the internal_users.yml file you can also rename the admin user, delete all other users, and apply the configuration.

Installing PostgreSQL 12

All packages necessary for correct installation are located in the Postgresql directory.

To install PostgreSQL:

  1. Go to the Postgresql directory and run the command:

    sudo dpkg -i *.deb
    
  2. The main PostgreSQL configuration files are:

  • /etc/postgresql/12/main/postgresql.conf

  • /etc/postgresql/12/main/pg_hba.conf

  3. In the /etc/postgresql/12/main/postgresql.conf file, set the parameters listed below and replace port = 5433 with port = 5432. The password_encryption parameter ensures that the password set below is stored in the format expected by the scram-sha-256 authentication method:

    listen_addresses = '*'
    max_connections = 1000
    max_prepared_transactions = 300
    password_encryption = scram-sha-256

  4. At the end of the /etc/postgresql/12/main/pg_hba.conf file, add a line that will allow password connections over the network for all databases:

    host    all             all             all            scram-sha-256
    
  5. Restart postgresql to apply the settings (they may be adjusted to individual needs):

    systemctl restart postgresql-12
    
  6. Continue the configuration in the console. Log in to the database as the postgres user:

    sudo su
    su postgres
    psql
    
  7. Set the user's password:

    ALTER USER postgres WITH PASSWORD 'notpostgres_change_me';
    
  8. Create a database for the application (you may choose your own logical name; it will be used later):

    CREATE DATABASE universe;
    
  9. Set the time zone (e.g. Moscow time, UTC+3) and random_page_cost:

    ALTER SYSTEM SET timezone TO 'W-SU';
    ALTER SYSTEM SET random_page_cost TO 1.1;
    
  10. Set the buffer size. With 1 GB of RAM or more, set shared_buffers to 25% of the memory size:

    ALTER SYSTEM SET shared_buffers TO '4GB';
    
  11. Restart postgresql to apply the additional settings:

    systemctl restart postgresql-12
    
  12. Check the connection to the database and the password prompt:

    psql -U postgres -h localhost
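
As an additional check that the settings took effect, you can query them through psql (enter the password set earlier):

    psql -U postgres -h localhost -c 'SHOW shared_buffers;' -c 'SHOW timezone;'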
    

Installing Tomcat

Java is required to run Tomcat (see above for installation description).

  1. Create a user to run tomcat:

    sudo useradd -r tomcat -s /sbin/nologin
    
  2. Extract the apache-tomcat-9.0.*.tar.gz archive to the /opt/ directory (see the sketch after this list).

  3. Rename the /opt/apache-tomcat-9.0.* directory to /opt/tomcat-9 (the version may vary).

  4. Remove the standard Manager App files and directories that may be contained in the Tomcat distribution from the /opt/tomcat-9/webapps directory:

    rm -rf /opt/tomcat-9/webapps/*
    
  5. Grant permissions on the tomcat directory to the tomcat user:

    chown -R tomcat:tomcat /opt/tomcat-9
    
  6. Add a service by creating the tomcat.service file:

    sudo vi /etc/systemd/system/tomcat.service
    

with the following contents, setting the RAM consumption parameters in CATALINA_OPTS:

[Unit]
Description=Apache Tomcat Web Application Container
After=network.target

[Service]
Type=forking

Environment=JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64
Environment=CATALINA_PID=/opt/tomcat-9/temp/tomcat.pid
Environment=CATALINA_HOME=/opt/tomcat-9
Environment=CATALINA_BASE=/opt/tomcat-9
Environment='CATALINA_OPTS=-Xms1024M -Xmx2048M -server -XX:+UseParallelGC'
Environment='JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom'

WorkingDirectory=/opt/tomcat-9/

ExecStart=/opt/tomcat-9/bin/startup.sh
ExecStop=/opt/tomcat-9/bin/shutdown.sh

User=tomcat
Group=tomcat
UMask=0007
RestartSec=10
Restart=always

[Install]
WantedBy=multi-user.target
  7. Apply the services configuration:

    systemctl daemon-reload
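
For steps 2-3 of this list, the extraction may look like this (a sketch; the archive name and its location depend on the distribution kit):

    sudo tar -xzf apache-tomcat-9.0.*.tar.gz -C /opt/
    sudo mv /opt/apache-tomcat-9.0.* /opt/tomcat-9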
    

Installing the Universe app

Tomcat and Java are required to run the application (see above for installation description).

Starting the installation

  1. Installing .war files: the Application_Tomcat directory contains two archives: frontend and backend. Each archive contains .war files, which should be copied to the /opt/tomcat-9/webapps/ directory.

  2. Installing additional settings and libraries: the backend archive contains a tomcat directory whose contents should be copied to /opt/tomcat-9/. From the unpacked backend archive, navigate to the tomcat directory and copy the files to the service directory:

    sudo cp -v bin/setenv.sh /opt/tomcat-9/bin/ && \
    sudo cp -rv conf/universe /opt/tomcat-9/conf/ && \
    sudo cp -v libs/* /opt/tomcat-9/lib/
    
  3. Grant file permissions for the service:

    chown -R tomcat:tomcat /opt/tomcat-9
    chmod +x /opt/tomcat-9/bin/*.sh
    

Configuring the application

  1. The basic parameters are set in variables in the setenv.sh file. Edit the file:

    vi /opt/tomcat-9/bin/setenv.sh
    
  2. Set/add the following variables, entering values that match your environment:

    # the existing JAVA_OPTS parameter is NOT affected
    
    # specify database connection parameters
    export POSTGRES_ADDRESS="localhost:5432"
    export POSTGRES_USERNAME="postgres"
    export POSTGRES_PASSWORD="notpostgres_change_me"
    export DATABASE_NAME="universe"
    
    # Specify the parameters for connecting to Opensearch:
    
    export SEARCH_CLUSTER_NAME="universe-os-cluster"
    export SEARCH_CLUSTER_ADDRESS="localhost:9200"
    # when using an Opensearch cluster, list all nodes (hostname or ip) in the SEARCH_CLUSTER_ADDRESS variable, separated by commas, for example:
    # SEARCH_CLUSTER_ADDRESS=opensearch-node-1:9200,opensearch-node-2:9200,opensearch-node-3:9200
    

Configuring an application to connect to Opensearch via SSL

  1. Create a directory for the certificates:

    sudo mkdir /opt/tomcat-9/ssl
    
  2. Copy the .jks files created in the previous steps to the created directory:

    cp -v *.jks /opt/tomcat-9/ssl
    chown -R tomcat:tomcat /opt/tomcat-9/ssl
    
  3. The basic parameters are set in variables in the setenv.sh file. Edit the file:

    vi /opt/tomcat-9/bin/setenv.sh
    
  4. Set/add the following variables, entering values that match your environment:

    export SEARCH_CLUSTER_NAME="universe-os-cluster"
    # the address must be the dns name specified in the certificate in CN
    export SEARCH_CLUSTER_ADDRESS="opensearch-node-1:9200"
    # when using an Opensearch cluster, list all nodes (hostname or ip) in the SEARCH_CLUSTER_ADDRESS variable, separated by commas, for example:
    # SEARCH_CLUSTER_ADDRESS=opensearch-node-1:9200,opensearch-node-2:9200,opensearch-node-3:9200
    
    export SEARCH_SECURITY_ENABLED=true
    # credentials for authorization in Opensearch
    export SEARCH_ADMIN_LOGIN=admin
    export SEARCH_ADMIN_PASSWORD=admin
    
    # Specify the jks files generated for Opensearch and their passwords
    export SEARCH_TRUSTSTORE_PATH=/opt/tomcat-9/ssl/app-truststore.jks
    export SEARCH_TRUSTSTORE_PASSWORD=MY-TrustStore-pswd
    
    export SEARCH_KEYSTORE_PATH=/opt/tomcat-9/ssl/app-keystore.jks
    export SEARCH_KEYSTORE_PASSWORD=MY-keyStore-pswd
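
Before starting the application, you can check from the Tomcat server that the SSL connection to Opensearch works, for example with curl and the root certificate created earlier (this assumes root-ca.pem has been copied to the Tomcat server; the path below is only an example):

    curl --cacert /opt/tomcat-9/ssl/root-ca.pem -u 'admin:admin' 'https://opensearch-node-1:9200'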
    

Configuring the application cluster

  1. If you plan to use multiple Tomcat servers, set up the application's cluster configuration by changing a number of settings in the /opt/tomcat-9/conf/universe/backend.properties file on each server:

    vi /opt/tomcat-9/conf/universe/backend.properties
    
  2. The following settings are the same for each application node:

    # enable distributed cache
    org.unidata.mdm.system.cache.tcp-ip.enabled=true
    # list all tomcat nodes, ip or hostname
    org.unidata.mdm.system.cache.tcp-ip.members=server-192-168-106-110,server-192-168-106-111
    
    # default port, can be replaced if necessary, must be open to all nodes in the application cluster
    org.unidata.mdm.system.cache.port=5701
    
  3. The parameter below must be unique for each node in the application cluster; an example value of the parameter:

    org.unidata.mdm.system.node.id=node1
    #org.unidata.mdm.system.node.id=nodeN # for the remaining N servers
    
  4. Example of a log message (logs/catalina.out) confirming that the application nodes have formed a cluster:

    INFO com.hazelcast.internal.server.tcp.TcpServerConnection.null [server-192-168-106-111]:5701 [dev] Initialized new cluster connection between /192.168.106.111:44589 and server-192-168-106-110/192.168.106.110:5701
    com.hazelcast.internal.cluster.ClusterService.null [server-192-168-106-111]:5701 [dev]
    
    Members {size:2, ver:2} [
            Member [server-192-168-106-110]:5701 - b15485d2-3121-4398-adf0-aee0147d442e
            Member [server-192-168-106-111]:5701 - c61b4e32-94da-4e6a-8f1d-269ccb7f0f10 this
    ]
    

Launching the application

  • The application is managed through the service on each tomcat node:

    sudo systemctl start tomcat
    sudo systemctl status tomcat
    # sudo systemctl restart tomcat
    # sudo systemctl stop tomcat
    
  • The application logs are in the /opt/tomcat-9/logs/ directory.

After the installation is complete, log in to Universe.