Free Storage, SAN and LAN Performance and Capacity Monitoring

Hitachi storage installation

If you use the Virtual Appliance:
  • Use the local account lpar2rrd for hosting STOR2RRD on the virtual appliance
  • Use /home/stor2rrd/stor2rrd as the product home
  • Use the lpar2rrd account on the storages, as configured in /home/stor2rrd/stor2rrd/etc/stor2rrd.cfg (STORAGE_USER)
The program uses the Hitachi Storage Navigator Modular 2 CLI (HSNM2 CLI).
The CLI is on the same CD as the SNM UI. If you do not have the CD, ask HDS for the SNM CD (usually distributed as an ISO image).

Install HSNM2 CLI

  • Allow access from the STOR2RRD host to both storage controllers on port 2000 (28355 when using secure communication).
    Test if port is open:
    $ perl /home/stor2rrd/stor2rrd/bin/conntest.pl 192.168.1.1 2000
      Connection to "192.168.1.1" on port "2000" is ok
    
  • HSNM2 CLI apparently needs the libstdc++ package. Make sure it is installed.
    $ rpm -qa|grep libstdc++
      libstdc++-4.8.3-9.el7.x86_64
    
    Install it if it is not listed:
    # yum install libstdc++       # (RedHat)
    # apt-get install libstdc++   # (Debian/Ubuntu)
    
  • Get the HSNM2 CLI package for your operating system and install it under root:
    # mkdir /usr/stonavm/
    # cd /usr/stonavm/
    # tar xf /tmp/HSNM2-2810-A-CLI-P01.tar
    # chown stor2rrd /usr/stonavm   # This must be owned by stor2rrd !!!!
    
  • Set the environment in your current shell (just copy & paste to the command line)
    LIBPATH=/usr/stonavm/lib:$LIBPATH
    SHLIB_PATH=/usr/stonavm/lib:$SHLIB_PATH
    LD_LIBRARY_PATH=/usr/stonavm/lib:$LD_LIBRARY_PATH
    STONAVM_HOME=/usr/stonavm
    STONAVM_ACT=on
    STONAVM_RSP_PASS=on
    PATH=$PATH:/usr/stonavm
    export  LIBPATH SHLIB_PATH LD_LIBRARY_PATH STONAVM_HOME STONAVM_ACT STONAVM_RSP_PASS PATH
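
    To make these settings persist for the stor2rrd user (so the auunit* commands also work in future sessions), you can append them to the user's shell profile. A minimal sketch, assuming a POSIX shell that reads ~/.profile at login:
    $ cat >> ~/.profile <<'EOF'
    # HSNM2 CLI environment for STOR2RRD (expanded at login, not now)
    LIBPATH=/usr/stonavm/lib:$LIBPATH
    SHLIB_PATH=/usr/stonavm/lib:$SHLIB_PATH
    LD_LIBRARY_PATH=/usr/stonavm/lib:$LD_LIBRARY_PATH
    STONAVM_HOME=/usr/stonavm
    STONAVM_ACT=on
    STONAVM_RSP_PASS=on
    PATH=$PATH:/usr/stonavm
    export LIBPATH SHLIB_PATH LD_LIBRARY_PATH STONAVM_HOME STONAVM_ACT STONAVM_RSP_PASS PATH
    EOF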
    
  • Register the storage systems to be monitored (adjust the IPs in the examples below)
    • Automatically
      # auunitaddauto -ip 192.168.1.1 192.168.1.2
      
    • Manually
      # auunitadd -unit HUS110 -LAN -ctl0 192.168.1.1 -ctl1 192.168.1.2
      
    • Using secure communication (port 28355 on storage):
      # auunitadd -unit HUS110 -LAN -ctl0 192.168.1.1 -ctl1 192.168.1.2 -communicationtype secure
      
  • Create the user stor2rrd (lpar2rrd on the Appliance) on the storage with the role Storage Administrator (View Only)

  • Register user access for the storage
    # auaccountenv -set -uid stor2rrd -authentication 
    
  • Test connectivity
    # auunitref
    # auunitinfo -unit HUS110
    

STOR2RRD storage configuration

  • Perform all actions below as the stor2rrd user (lpar2rrd on the Virtual Appliance)

  • Configure storages in etc/storage-list.cfg
    Uncomment (remove the hash from) the example line and adjust it:
    $ vi /home/stor2rrd/stor2rrd/etc/storage-list.cfg
    
    #
    # Hitachi HUS
    #
    # Storage alias:HUS:VOLUME_AGG_DATA_LIM:VOLUME_AGG_IO_LIM:SAMPLE_RATE_MINS
    #
    #HUS110:HUS:256:10:5
    #HUS130:HUS:
    #AMS2000-alias:HUS:
    HUS110:HUS:
    
  • Make sure there is enough disk space on the filesystem where STOR2RRD is installed.
    Count roughly 2 - 30 GB per storage, depending on the number of volumes (about 30 GB for 5000 volumes).
    $ df -g /home   # AIX
    $ df -h /home   # Linux
    
  • Test the storage connectivity:
    $ cd /home/stor2rrd/stor2rrd
    $ ./bin/config_check.sh 
      =========================
      STORAGE: HUS110: HUS 
      =========================
      /usr/stonavm/auunitinfo -unit HUS110
      connection ok
    
  • Schedule the storage agent to run from the stor2rrd crontab (lpar2rrd on the Virtual Appliance; the entry might already exist there)
    $ crontab -l | grep load_husperf.sh
    $
    
    Add it if it does not exist:
    $ crontab -e
    
    # Hitachi HUS && AMS 2000 storage agent 
    0,5,10,15,20,25,30,35,40,45,50,55 * * * * /home/stor2rrd/stor2rrd/load_husperf.sh > /home/stor2rrd/stor2rrd/load_husperf.out 2>&1
    
    Make sure the crontab already contains an entry that builds the UI once an hour:
    $ crontab -e
    
    # STOR2RRD UI (just ONE entry of load.sh must be there)
    5 * * * * /home/stor2rrd/stor2rrd/load.sh > /home/stor2rrd/stor2rrd/load.out 2>&1
    
  • Let the storage agent run for 15 - 20 minutes to collect data, then:
    $ cd /home/stor2rrd/stor2rrd
    $ ./load.sh
    
  • Go to the web UI: http://<your web server>/stor2rrd/
    Use Ctrl-F5 to refresh the web browser cache.

If you use the Virtual Appliance:
  • Use the local account lpar2rrd for hosting STOR2RRD on the virtual appliance
  • Use /home/stor2rrd/stor2rrd as the product home
The program uses 3 Hitachi APIs. You have to install all of them.
  • Command Control Interface (CCI)
  • Hitachi Export tool
  • SNMP API to get Health status
You might also look at the installation procedure described in detail on www.sysaix.com.
Note that the Virtual Storage Machines (VSM) feature is not supported by the tool.

Storage configuration

Installation of Hitachi CCI

  • Allow communication from the STOR2RRD host to each storage's SVP IP on TCP ports 1099 and 51100.
    Note: newer firmware might use port 51101 or 51099 instead of 51100.
    Test the open TCP ports to the SVP IP:
    $ perl /home/stor2rrd/stor2rrd/bin/conntest.pl 192.168.1.1 1099
      Connection to "192.168.1.1" on port "1099" is ok
    $ perl /home/stor2rrd/stor2rrd/bin/conntest.pl 192.168.1.1 51100
      Connection to "192.168.1.1" on port "51100" is ok
    
    Allow communication from the STOR2RRD host to each storage's node IP on UDP port 31001.
    How to test UDP

  • Create the user stor2rrd on the storage with read-only access.
    Do not use shell special characters like #!?|$*[]\{}`"'& in the password; use ;:.+-%@ instead.
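
    For example, you can generate a random password restricted to safe characters with standard tools; a sketch assuming /dev/urandom is available (adjust the length and character set to your password policy):
    $ tr -dc 'A-Za-z0-9:.+%@-' < /dev/urandom | head -c 16; echo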

  • Obtain the CCI installation package from your Hitachi representative.
    • Install it from the .iso image under the root account
      Mount ISO image:
      • AIX command:
        # loopmount -i /HS042_77.iso -o "-V cdrfs -o ro" -m /mnt
        
      • Linux (Virtual Appliance) command:
        # mount -o loop,ro HS042_77.iso /mnt 
        
    • Create the target directory and run the installer:
      # mkdir /etc/HORCM
      # cd /mnt
      # ./RMinstsh
      
    • Install from CD
      # mkdir /opt
      # cpio -idmu < /dev/XXXX    # where XXXX = I/O device with install media
      # ln -s /opt/HORCM /HORCM
      
    • Execute the CCI installation command:
      # /HORCM/horcminstall.sh
      
    • Verify installation of the proper version using the raidqry command:
      # raidqry -h
        Model: RAID-Manager/HP-UX
        Ver&Rev: 01-29-03/05
        Usage: raidqry [options]
      
    • Make sure everything is executable and writable by the stor2rrd user
      Use the lpar2rrd user on the Virtual Appliance.
      This is a must! Execute the following under the root identity:
      # touch /HORCM/etc/USE_OLD_IOCT 
      # chown stor2rrd /HORCM
      # chown -R stor2rrd /HORCM/* /HORCM/.uds 
      # chmod 755 /HORCM /HORCM/usr/bin /HORCM/usr/bin/* /HORCM/log* /HORCM/etc/horcmgr /HORCM/etc/*conf /HORCM/.uds/ 
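
      You can verify the result by running a harmless CCI command as the stor2rrd user, for example:
      # su - stor2rrd -c "/HORCM/usr/bin/raidqry -h"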
      

Configuration of CCI

  • CCI communication with the storage can go either via LAN (this is what is described below) or via a command device (a SAN-attached volume from the storage).
    If you have many storages in place (40+), rather use the command device, as LAN communication might not be reliable enough. CCI command device configuration procedure

  • Each storage must have its own config file /etc/horcmXX.conf

  • Check that local ports 11001 and 11002 are not in use (nothing is listening there)
    # netstat -an | grep -i listen | egrep "11001|11002"
    
  • A storage with controller IPs 192.168.1.1 and 192.168.1.2; its conf file /etc/horcm1.conf will use local UDP port 11001.
    Use the storage node IPs here; the SVP IP is used later in etc/storage-list.cfg.
    # vi /etc/horcm1.conf
    
    HORCM_MON
    # ip_address service poll(10ms) timeout(10ms)
    localhost    11001   1000       3000
    HORCM_CMD
    # dev_name dev_name dev_name
    \\.\IPCMD-192.168.1.1-31001   \\.\IPCMD-192.168.1.2-31001
    
  • A storage with IPs 192.168.1.10 and 192.168.1.11, conf file /etc/horcm2.conf.
    Change the localhost port to 11002 (11001 is already used above).
    # vi /etc/horcm2.conf
    
    HORCM_MON
    # ip_address service poll(10ms) timeout(10ms)
    localhost    11002   1000       3000
    HORCM_CMD
    # dev_name dev_name dev_name
    \\.\IPCMD-192.168.1.10-31001  \\.\IPCMD-192.168.1.11-31001
    
  • Start it under the stor2rrd account (definitely not under root!). Use the lpar2rrd account on the Virtual Appliance.
    This starts HORCM instance 1 (/etc/horcm1.conf):
    # su - stor2rrd
    $ /HORCM/usr/bin/horcmstart.sh 1
    
  • Start HORCM instances 1 & 2 (/etc/horcm1.conf & /etc/horcm2.conf)
    # su - stor2rrd
    $ /HORCM/usr/bin/horcmstart.sh 1 2
    
  • Check if they are running
    $ ps -ef | grep horcm
      stor2rrd 19660912 1 0 Feb 26 - 0:03 horcmd_02
      stor2rrd 27590770 1 0 Feb 26 - 0:09 horcmd_01
    
  • Place it into operating system start/stop scripts
    # su - stor2rrd -c "/HORCM/usr/bin/horcmstart.sh 1 2"
    # su - stor2rrd -c "/HORCM/usr/bin/horcmshutdown.sh 1 2"
    
  • If HORCM does not start:
    • Make sure the filesystem permissions for /HORCM are correct (owned by the stor2rrd user)
    • Check that connections to the storage node IPs are allowed: how to test UDP
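    • Check the HORCM logs; they are written under /HORCM/log* (one directory per instance number, e.g. /HORCM/log1 for instance 1; the exact layout varies by CCI version):
      $ ls -lt /HORCM/log1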

Installation of Hitachi Export Tool

    It is typically located on a CD that comes packaged with the Service Processor on the HDS USP array; it can also be obtained from HDS support (CD location: /ToolsPack/ExportTool).
    Hitachi produces a new Export Tool for each firmware release, so unless all of your storages run the same firmware version, you will need the Export Tool version matching each firmware version running at your site.

    Find out the firmware release of your storage (here 83-01-28/00), identified via /etc/horcm1.conf (-I1).
    The Export Tool version must match the SVP firmware version.
    Run this under the stor2rrd account!
    # su - stor2rrd
    $ raidcom -login stor2rrd <password> -I1
    $ raidqry -l -I1 
      No  Group    Hostname     HORCM_ver   Uid   Serial#   Micro_ver     Cache(MB)
       1    ---   localhost   01-35-03-08     0   471234    83-01-28/00      320000
    $ raidcom -logout -I1
    
    Install each version of the Export Tool into a separate directory named after the firmware of your storage (just the first 6 digits, as in this example for firmware 83-01-28), under the root user:
    # mkdir /opt/hds
    # mkdir /opt/hds/83-01-28
    # cd /opt/hds/83-01-28
    # tar xvf export-tool.tar
    # chmod 755 runUnix.sh
    # chown -R stor2rrd /opt/hds
    
    A higher Export Tool version might work even with lower storage firmware, as in this example (Export Tool 83-01-28 and storage firmware 73-03-57).
    In that case you do not need to install the older Export Tool; just create a symlink:
    # cd /opt/hds/
    # ls
      83-01-28
    # ln -s 83-01-28 73-03-57
    

    The directory /opt/hds is just an example; it is configurable in /home/stor2rrd/stor2rrd/etc/stor2rrd.cfg : VSP_CLIDIR=/opt/hds
    The HDS Performance Monitor License must exist for each array and monitoring must be enabled.
    Storage configuration example

Allow monitoring of CU and WWN

    Note: this configuration option might not be present on all models or firmware versions; you can ignore it if you do not find it on your storage.
  • CU
    Hitachi CU menu

    Hitachi CU selection

  • WWN
    Note that monitoring apparently cannot be enabled when the number of WWNs per port exceeds the maximum of 32.
    In that case you will not get direct per-host data; host data will be aggregated from the attached volumes instead (which might be misleading when volumes are attached to multiple hosts).

    Hitachi WWN menu

    Hitachi WWN selection


    If you still do not get data, re-enabling the monitoring might help. Hitachi WWN

Health status

    The only way to get the health status from these storages is the SNMP protocol.
    This feature is available since product version 2.50.
    You have to install the SNMP modules in any case if you use 2.50.

    Install snmpwalk

    Skip this in case you are on our Virtual Appliance
    • AIX
      Download the Net-SNMP packages and install them.
      Do not use the latest packages on AIX; they do not work. Use net-snmp-5.6.2.1-1!
      # umask 0022
      # rpm -Uvh net-snmp-5.6.2.1-1 net-snmp-utils-5.6.2.1-1 net-snmp-perl-5.6.2.1-1
      
      Make sure
      • you use PERL=/opt/freeware/bin/perl in etc/stor2rrd.cfg
      • PERL5LIB in etc/stor2rrd.cfg contains the path /opt/freeware/lib/perl5/vendor_perl/5.8.8/ppc-thread-multi
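
      For example, in etc/stor2rrd.cfg (shell-style assignments; a sketch, append the path to any existing PERL5LIB value):
      PERL=/opt/freeware/bin/perl
      PERL5LIB=$PERL5LIB:/opt/freeware/lib/perl5/vendor_perl/5.8.8/ppc-thread-multi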
    • Linux
      # umask 0022
      # yum install net-snmp
      # yum install net-snmp-utils
      # yum install net-snmp-perl
      
      Note: you might need to enable optional repositories on RHEL so that yum can find the packages
      # subscription-manager repos --list
      ...
      # subscription-manager repos --enable rhel-7-server-optional-rpms
      
      Use rhel-7-for-power-le-optional-rpms for Linux on Power, etc.

    • Linux Debian/Ubuntu
      % umask 0022
      % apt-get install snmp libsnmp-perl snmp-mibs-downloader
      
      If apt-get does not find the snmp-mibs-downloader package, enable the contrib and non-free repositories.

    Storage configuration - VSP-G

    Allow SNMP on the storage, configure the protocol (SNMP version) and community string, and permit the STOR2RRD IP/hostname.
       Hitachi VSPG SNMP setup


    Storage configuration - HUS-VM

    1. Allow SNMP on the storage; the example below uses SNMP v1
    2. Configure the community string; the example uses "Public" (note the first letter is uppercase)
    3. Permit the STOR2RRD IP/hostname

       Hitachi HUS-VM SNMP setup


    Network communication

    Allow communication between the STOR2RRD server and the storage on the CNTL IP, port 161 UDP.
    You can test the network visibility like this:
    perl /home/stor2rrd/stor2rrd/bin/conntest_udp.pl vspg_CNTL_host.example.com 161
      UDP connection to "vspg_CNTL_host.example.com" on port "161" is ok
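
    Once SNMP is configured on the storage, you can also verify the credentials with snmpwalk; a sketch assuming SNMP v2c and the community string "Public" set above, walking the standard system subtree:
    $ snmpwalk -v 2c -c Public vspg_CNTL_host.example.com .1.3.6.1.2.1.1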
    

STOR2RRD storage configuration

  • Perform all actions below as the stor2rrd user (lpar2rrd on the Virtual Appliance)

  • Configure Export Tool installation directory in /home/stor2rrd/stor2rrd/etc/stor2rrd.cfg
     VSP_CLIDIR=/opt/hds
    
  • Configure storages in etc/storage-list.cfg
    Create the config entry under the stor2rrd account. Uncomment (remove the hash from) the example line and adjust it.
    Use the storage SVP IP here (the CCI above uses the storage node IP).
    $ vi /home/stor2rrd/stor2rrd/etc/storage-list.cfg
    
    #
    # Hitachi VSP-G and HUS-VM
    #
    # Storage Alias:VSPG:IP address/hostname:storage user:password:/etc/horcm.conf:VOLUME_AGG_DATA_LIM:VOLUME_AGG_IO_LIM:SAMPLE_RATE_MINS:SNMP_IP:SNMP_VERSION:SNMP_PORT:SNMP3_USER:SNMP_COMMUNITY:SNMP_PRIV_PASS:SNMP_AUTH_PASS:SNMP_SEC_LEVEL:SNMP_AUTH_PROTOCOL:SNMP_PRIV_PROTOCOL
    #
    # to encrypt password use: perl ./bin/spasswd.pl
    #
    
    # Use single line per each storage!
    
    # Example without health status monitoring (prior to v2.50)
    VSPG-400:VSPG:vspg_SVP_host.example.com:stor2rrd:KT4mXVI9N0BUPjZdVQo=:/etc/horcm2.conf
    
    # Example including health status, SNMP v2c; use just the SNMP_IP, SNMP_VERSION, SNMP_PORT, SNMP_COMMUNITY fields
    VSPG-400:VSPG:vspg_SVP_host.example.com:stor2rrd:KT4mXVI9N0BUPjZdVQo=:/etc/horcm1.conf::::vspg_CNTL_host.example.com:2c:161::Public
    
    # Example including health status, SNMP v3
    VSPG-400:VSPG:vspg_SVP_host.example.com:stor2rrd:KDo2KU0tJjVWOTcoYAo=:/etc/horcm1.conf:1024:10:5:vspg_CNTL_host.example.com:3:444:user1:private:KjYmXVI9N0BUPjZdVStAYGAK:KjYmXVI9N0BUPjZdVStAYGAK:noAuthNoPriv:MD5:DES
    
    
    The above shows a storage with SVP vspg_SVP_host.example.com which will be visible in the UI as VSPG-400

    Use an encrypted password in the storage line above; generate it like this:
    $ cd /home/stor2rrd/stor2rrd
    $ perl bin/spasswd.pl
    
      Encode password for storage authentication:
      -------------------------------------------
      Enter password:
      Re-enter password:
    
      Copy the following string to the password field of the corresponding line in etc/storage-list.cfg:
    
      KT4mXVI9N0BUPjZdVQo=
    
  • Read this for setting up health status monitoring via SNMP.

  • Make sure there is enough disk space on the filesystem where STOR2RRD is installed.
    Count roughly 2 - 30 GB per storage, depending on the number of volumes (about 30 GB for 5000 volumes).
    $ df -g /home   # AIX
    $ df -h /home   # Linux
    
  • Test the storage connectivity under stor2rrd user:
    $ cd /home/stor2rrd/stor2rrd
    $ ./bin/config_check.sh 
      =========================
      STORAGE: VSPG-600 : VSPG
      =========================
      connection ok
    
    Note: STOR2RRD v2.30 and older tests a wrong port (5110 instead of 51101) when running ./bin/config_check.sh; ignore that warning.

  • Schedule the storage agent to run from the stor2rrd crontab (lpar2rrd on the Virtual Appliance; the entry might already exist there)
    $ crontab -l | grep load_vspgperf.sh
    $
    
    Add it if it does not exist:
    $ crontab -e
    
    # Hitachi VSP-G
    0,5,10,15,20,25,30,35,40,45,50,55 * * * * /home/stor2rrd/stor2rrd/load_vspgperf.sh > /home/stor2rrd/stor2rrd/load_vspgperf.out 2>&1
    
    Make sure the crontab already contains an entry that builds the UI once an hour:
    $ crontab -e
    
    # STOR2RRD UI (just ONE entry of load.sh must be there)
    5 * * * * /home/stor2rrd/stor2rrd/load.sh > /home/stor2rrd/stor2rrd/load.out 2>&1
    
  • Let the storage agent run for 15 - 20 minutes to collect data, then:
    $ cd /home/stor2rrd/stor2rrd
    $ ./load.sh
    
  • Go to the web UI: http://<your web server>/stor2rrd/
    Use Ctrl-F5 to refresh the web browser cache.

If you use the Virtual Appliance:
  • Use the local account lpar2rrd for hosting STOR2RRD on the virtual appliance
  • Use /home/stor2rrd/stor2rrd as the product home
  • Use the lpar2rrd account on the storages, as configured in /home/stor2rrd/stor2rrd/etc/stor2rrd.cfg (STORAGE_USER)
HNAS support will be released in STOR2RRD v2.60 (Q3 2019).

The tool uses the SNMP protocol to get performance and configuration data from the storage.

Install Prerequisites (skip this in case of the Virtual Appliance)

  • AIX
    Download the Net-SNMP packages and install them.
    Do not use the latest packages on AIX; they do not work. Use net-snmp-5.6.2.1-1!
    # umask 0022
    # rpm -Uvh net-snmp-5.6.2.1-1 net-snmp-utils-5.6.2.1-1 net-snmp-perl-5.6.2.1-1
    
    Make sure
    • you use PERL=/opt/freeware/bin/perl in etc/stor2rrd.cfg
    • PERL5LIB in etc/stor2rrd.cfg contains the path /opt/freeware/lib/perl5/vendor_perl/5.8.8/ppc-thread-multi
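
    For example, in etc/stor2rrd.cfg (shell-style assignments; a sketch, append the path to any existing PERL5LIB value):
    PERL=/opt/freeware/bin/perl
    PERL5LIB=$PERL5LIB:/opt/freeware/lib/perl5/vendor_perl/5.8.8/ppc-thread-multi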

  • Linux
    # umask 0022
    # yum install net-snmp
    # yum install net-snmp-utils
    # yum install net-snmp-perl
    
    Note: you might need to enable optional repositories on RHEL so that yum can find the packages
    # subscription-manager repos --list
    ...
    # subscription-manager repos --enable rhel-7-server-optional-rpms
    
    Use rhel-7-for-power-le-optional-rpms for Linux on Power, etc.

  • Linux Debian/Ubuntu
    % umask 0022
    % apt-get install snmp libsnmp-perl snmp-mibs-downloader
    
    If apt-get does not find the snmp-mibs-downloader package, enable the contrib and non-free repositories.

Enable SNMP on the storage

  • SNMP v2c
    Hitachi docu, follow section "Configuring SNMP access"

    Navigate to: Home ➡ Server ➡ Settings ➡ SNMP Access Configuration
    Select SNMP v2c, leave port 161, add the stor2rrd server as an allowed host, and set the community string.
    Note: do not use SNMP v1; it does not work.
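
    You can then verify the access from the STOR2RRD host with snmpwalk, using the same HNAS OID that config_check.sh tests later (assuming community string "public"):
    $ snmpwalk -v 2c -c public hnas01.example.com .1.3.6.1.4.1.11096.6.1.1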

  • SNMP v3
    Hitachi docu, follow section "Configuring SNMPv3 access"

    Use the CLI command snmp-protocol to configure SNMPv3.
    When SNMPv3 is enabled, the SNMP agent will not respond to SNMPv1 or SNMPv2c requests.
    HNAS1:$ snmp-protocol -v v3
    HNAS1:$ snmp-protocol
            Protocol:      SNMPv3               
    
    Add users with the snmpv3-user-add command.
    HNAS1:$ snmpv3-user-add stor2rrd 
            Please enter the authentication password:     ********
            Please re-enter the authentication password:  ********
            Please enter the privacy password:    ********
            Please re-enter the privacy password: ********
    

Allow network access

  • Allow access from the STOR2RRD host to the storage (its admin IP) on port 161 UDP.
    Test if port is open:
    $ perl /home/stor2rrd/stor2rrd/bin/conntest_udp.pl 192.168.1.1 161
      Connection to "192.168.1.1" on port "161" is ok
    

STOR2RRD storage configuration

  • Perform all actions below as the stor2rrd user (lpar2rrd on the Virtual Appliance)

  • Configure storages in etc/storage-list.cfg
    Uncomment (remove the hash from) the example line and adjust it:
    $ vi /home/stor2rrd/stor2rrd/etc/storage-list.cfg
    
    #
    # Hitachi HNAS
    #
    #Storage Alias:HNAS:IP address/hostname:SNMP_VERSION:SNMP_PORT:SNMP3_USER:SNMP_COMMUNITY:SNMP_PRIV_PASS:SNMP_AUTH_PASS:SNMP_SEC_LEVEL:SNMP_AUTH_PROTOCOL:SNMP_PRIV_PROTOCOL
    # SNMP v3 example
    #HNAS-alias01:HNAS:hnas01.example.com:3:161:stor2rrd:public:KT4mXVI9N0BUPjZdVQo=:KT4mXVI9N0BUPjZdVQo=:authNoPriv:SHA:AES
    # SNMP v2c example
    #HNAS-alias02:HNAS:hnas01.example.com:2c:161::public:::::
    
    HNAS-alias02:HNAS:hnas01.example.com:2c:161::public:::::
    
  • Make sure there is enough disk space on the filesystem where STOR2RRD is installed.
    Count roughly 2 - 30 GB per storage, depending on the number of volumes (about 30 GB for 5000 volumes).
    $ df -g /home   # AIX
    $ df -h /home   # Linux
    
  • Test the storage connectivity:
    $ cd /home/stor2rrd/stor2rrd
    $ ./bin/config_check.sh 
      =========================
      STORAGE: HNAS-alias02: HNAS
      =========================
      UDP connection to "192.168.177.7" on port "161" is ok
    
      san_verify.pl:
      snmpwalk -v 2c -c public 192.168.1.1 .1.3.6.1.4.1.11096.6.1.1
      SNMP version : 2c (default)
      Port         : 161 (default)
      Timeout      : 5 seconds
      Community    : public
      Storage name : hnas01.example.com
      STATE        : CONNECTED!
    
      connection ok
    
  • Schedule the storage agent to run from the stor2rrd crontab (lpar2rrd on the Virtual Appliance; the entry might already exist there)
    $ crontab -l | grep load_hnasperf.sh
    $
    
    Add it if it does not exist:
    $ crontab -e
    
    # Hitachi HNAS
    0,5,10,15,20,25,30,35,40,45,50,55 * * * * /home/stor2rrd/stor2rrd/load_hnasperf.sh > /home/stor2rrd/stor2rrd/load_hnasperf.out 2>&1
    
    Make sure the crontab already contains an entry that builds the UI once an hour:
    $ crontab -e
    
    # STOR2RRD UI (just ONE entry of load.sh must be there)
    5 * * * * /home/stor2rrd/stor2rrd/load.sh > /home/stor2rrd/stor2rrd/load.out 2>&1
    
  • Let the storage agent run for 15 - 20 minutes to collect data, then:
    $ cd /home/stor2rrd/stor2rrd
    $ ./load.sh
    
  • Go to the web UI: http://<your web server>/stor2rrd/
    Use Ctrl-F5 to refresh the web browser cache.