ltcluster node check — performs some health checks on a node from a replication perspective
Performs some health checks on a node from a replication perspective. This command must be run on the local node.
Currently ltcluster's replication slot checks cover physical replication slots only, with the aim of warning about streaming replication standbys which have become detached, and the associated risk of uncontrolled WAL file growth.
Execution on the primary server:
       $ ltcluster -f /etc/ltcluster.conf node check
       Node "node1":
            Server role: OK (node is primary)
            Replication lag: OK (N/A - node is primary)
             WAL archiving: OK (0 pending archive ready files)
             Upstream connection: OK (N/A - node is primary)
             Downstream servers: OK (2 of 2 downstream nodes attached)
             Replication slots: OK (node has no physical replication slots)
             Missing physical replication slots: OK (node has no missing physical replication slots)
            Configured data directory: OK (configured "data_directory" is "/var/lib/lightdb/data")
Execution on a standby server:
       $ ltcluster -f /etc/ltcluster.conf node check
       Node "node2":
            Server role: OK (node is standby)
            Replication lag: OK (0 seconds)
            WAL archiving: OK (0 pending archive ready files)
            Upstream connection: OK (node "node2" (ID: 2) is attached to expected upstream node "node1" (ID: 1))
            Downstream servers: OK (this node has no downstream nodes)
            Replication slots: OK (node has no physical replication slots)
            Missing physical replication slots: OK (node has no missing physical replication slots)
            Configured data directory: OK (configured "data_directory" is "/var/lib/lightdb/data")
Each check can be performed individually by supplying an additional command line parameter, e.g.:
        $ ltcluster node check --role
        OK (node is primary)
Parameters for individual checks are as follows:
       --role: checks if the node has the expected role
       --replication-lag: checks if the node is lagging by more than
         replication_lag_warning or replication_lag_critical seconds
       --archive-ready: checks for WAL files which have not yet been archived,
         and returns WARNING or CRITICAL if the number exceeds
         archive_ready_warning or archive_ready_critical respectively
       --downstream: checks that the expected downstream nodes are attached
       --upstream: checks that the node is attached to its expected upstream node
       --slots: checks there are no inactive physical replication slots
       --missing-slots: checks there are no missing physical replication slots
       --data-directory-config: checks that the data directory configured in
         ltcluster.conf matches the actual data directory. This check is not
         directly related to replication, but is useful to verify that
         ltcluster is correctly configured.
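For example, to verify only the data directory configuration (expected output, using the data directory from the samples above):

        $ ltcluster -f /etc/ltcluster.conf node check --data-directory-config
        OK (configured "data_directory" is "/var/lib/lightdb/data")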
      
Several checks are provided for diagnostic purposes and are not included in the general output:
       --db-connection: checks if ltcluster can connect to the database on
         the local node.

         This option is particularly useful in combination with SSH, as it
         can be used to troubleshoot connection issues encountered when
         ltcluster is executed remotely (e.g. during a switchover operation).
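For example, the connection check can be run on another node over SSH, much as ltcluster itself would during a remote operation (the hostname "node2" and SSH user "lightdb" are illustrative):

        $ ssh lightdb@node2 ltcluster -f /etc/ltcluster.conf node check --db-connection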
          
       --replication-config-owner: checks if the file containing replication
         configuration (LightDB 21 and later: lightdb.auto.conf) is owned by
         the same user who owns the data directory.

         Incorrect ownership of these files (e.g. if they are owned by root)
         will cause operations which need to update the replication
         configuration (e.g. ltcluster standby follow or
         ltcluster standby promote) to fail.
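If this check fails, ownership can be corrected manually before retrying the operation; for example (the data directory path and "lightdb" user are illustrative):

        $ chown lightdb:lightdb /var/lib/lightdb/data/lightdb.auto.conf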
          
       -S/--superuser: connect as the named superuser instead of the
         ltcluster user
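For example (the superuser name "lightdb" is illustrative):

        $ ltcluster -f /etc/ltcluster.conf node check --superuser=lightdb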
          
       --csv: generate output in CSV format (not available for individual
         checks)
       --nagios: generate output in a Nagios-compatible format (for
         individual checks only)
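For example, an individual check can be run in a format consumable by Nagios-compatible monitoring tools (output not shown here, as its exact wording is determined by ltcluster):

        $ ltcluster -f /etc/ltcluster.conf node check --upstream --nagios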
          
      When executing ltcluster node check with one of the individual
      checks listed above, ltcluster will emit one of the following Nagios-style exit codes
      (even if --nagios is not supplied):
      
       0: OK
       1: WARNING
       2: ERROR
       3: UNKNOWN
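For example, the exit code can be inspected from the shell (here the WAL archiving check passes, as on the primary in the examples above):

        $ ltcluster -f /etc/ltcluster.conf node check --archive-ready
        OK (0 pending archive ready files)
        $ echo $?
        0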
          
      One of the following exit codes will be emitted by ltcluster node check
      if no individual check was specified:
    
SUCCESS (0)
    No issues were detected.

ERR_NODE_STATUS (25)
    One or more issues were detected.
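These exit codes make the command straightforward to script; a minimal monitoring wrapper sketch, assuming a standard POSIX shell and a configured "mail" command (the recipient address and output path are illustrative):

        #!/bin/sh
        # Run the full set of node checks and capture the output
        ltcluster -f /etc/ltcluster.conf node check > /tmp/node_check.out 2>&1
        # A non-zero exit code (e.g. ERR_NODE_STATUS, 25) means at least one issue was detected
        if [ $? -ne 0 ]; then
            mail -s "ltcluster node check failed on $(hostname)" dba@example.com < /tmp/node_check.out
        fi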