1. OFED source.  Quite convoluted these days.
2. RHEL OFED - RHEL backports fixes and keeps a more usable OFED stack than the community version.  Use this before the community version to avoid headaches (eg Lustre compatibility).
3. Mellanox-compiled binaries for the major distros - used to lag about 3 months behind source, but probably the best route for supported cards.
   If updating the OS, need to get a new version of the driver and recompile for that OS; may take some 15 min to compile and install.  May also disable the opensmd systemd unit - just need to re-enable it.  - my cache of IB docs.

InfiniBand 101

CA - Channel Adapter
HCA - Host Channel Adapter (like how FibreChannel has an HBA)
LID - Local ID, the unique ID of an IB end point
port - a port on the HCA, like the NIC ports on an ethernet quad card
GUID - Globally Unique ID
sminfo gives the GUID of the subnet manager; it can be used to determine whether two nodes are on the same fabric or in two separate islands.

Symbol Error - think of this as a packet error in the ethernet world.  It is the rate of error accumulation, not the absolute count, that should be of concern.

opensm 	# OpenFabrics subnet manager.  Run it on one of the hosts.
	# Additional standby managers can be set up on other hosts.
	# The daemon process will only bind to ONE IB port.
	# HA fabrics should be joined into a single one.
	# Fix opensm.conf to bind to the desired GUID (port).
	# Not specifying one gets a prompt to choose if multiple ports are available.
	# If you have a two-port card and need TWO subnet managers, then need to
	# update /etc/init.d/opensm and start TWO instances of opensmd;
	# use -g GUID to specify which port each instance uses.  This overrides
	# the config in /etc/opensm/opensm.conf.  eg
	# /usr/sbin/opensm --daemon
	# /usr/sbin/opensm --daemon -g 0x506b4b03007c00c2

OFED ships the IB driver and opensm.
Mellanox compiles their own version and provides support for it: MLNX_OFED, aka MOFED.  It also has the driver and opensm.  No more support for ConnectX-2 cards as of 2018.
The SL7 version of OFED is said to have less trouble with Lustre than the general public OFED.

opensm --create-config /tmp/opensm.conf 
cp -i /tmp/opensm.conf /etc/opensm/
vi /etc/opensm/opensm.conf # set GUID for the port where openSM should bind to
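The edit above boils down to a single line; a minimal opensm.conf fragment, reusing the example GUID from these notes:

```
# /etc/opensm/opensm.conf (fragment)
# bind opensm to this port GUID (get the value from ibstat's "Port GUID" line)
guid 0x506b4b03007c00c2
```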

Troubleshooting commands
  - All ports should be Active.  A port becomes Active when a running fabric manager registers it as it comes alive.
  - If a port is connected but no Subnet Manager is running, the port status will be Initializing.
  - If the port status is Down, there is a problem with the physical link (even when no SM is running).  Reseating the cable after the SM is running may help.
  - node GUID
  - port GUID
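The state checks above can be scripted; a sketch (the function name is mine) that reads `ibstat` output on stdin and fails if any port is not Active:

```shell
# check_ib_active: scan ibstat output for State lines; print offenders and
# exit nonzero if any port is not Active.  Keys off the last field, so it
# also copes with pdsh-prefixed lines.
check_ib_active() {
    awk '/State:/ { if ($NF != "Active") { bad++; print "non-active:", $0 } }
         END { exit (bad ? 1 : 0) }'
}
# eg:  pdsh -g etna0 ibstat | check_ib_active
```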

ibv_devinfo - similar info to ibstat


ompi_info --all

iblinkinfo --node-name-map fabrics.node-name-map
	# iblinkinfo provides link info for whole fabric, very quick 
	# node-name-map converts GUID to human name
	# 0xe41d2d03004f1b40 "ib000.cf0 (Mellanox SX6025, S54-21)"

ibnetdiscover --node-name-map fabrics.node-name-map > ibnet.topo
	# generate a topology file 
	# switch LID is listed above all the ports, see the comment field

# build fabrics.node-name-map (GUID "name" pairs) from fabrics.conf
grep -v '^$' fabrics.conf | tr '[]' '#' | awk -F\= '{print $1 " \"" $2 "\""}' > fabrics.node-name-map
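As a sanity check of that pipeline, a self-contained sketch with a made-up guid=name line (the exact fabrics.conf format is assumed from what the awk implies):

```shell
# feed one synthetic line through the same transform
printf '0xe41d2d03004f1b40=ib000.cf0\n' \
    | tr '[]' '#' | awk -F\= '{print $1 " \"" $2 "\""}'
# prints:  0xe41d2d03004f1b40 "ib000.cf0"
```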

/etc/init.d/openibd status
/etc/init.d/opensmd status

# select output of ibnetdiscover

Switch  24 "S-0002c90200410d02"   # "MT47396 Infiniscale-III Mellanox Technologies" base port 0 lid 2 lmc 0
#                                                                               ib switch LID      ^^^

[16]    "H-00188b9097fe8e41"[1](188b9097fe8e42)     # "rac1001 HCA-1" lid 4 4xDDR
[15]    "H-00188b9097fe906d"[1](188b9097fe906e)     # "rac1002 HCA-1" lid 9 4xDDR
^^^^ ib port where host is connected            comment area and host's LID ^^^
ibclearerrors		# reboot will induce errors, this is normal.

hca_self_test.ofed  	# sanity check.  From the ofed-scripts rpm, lives in /usr/bin.
pdsh -g etna0 TERM=vt52 /usr/bin/hca_self_test.ofed | grep -i fail
# the script uses tput, so need to force TERM when used with pdsh

perfquery		# simple-to-read IB counters.  Pay special attention to:
			# SymbolErrorCounter
			# LinkErrorRecoveryCounter

ibqueryerrors -c -s XmWait

ibcheckerrors -b	# very noisy, from infiniband-diags rpm by Mellanox, /sbin

ibdiagnet		# scan fabric, takes a long time

wwibcheck by Yong Qin:
ibcheck -f fabrics.conf -C etna -E  -dd -a -b -O 20 > now.out
ibcheck -f fabrics.conf -C etna -E  -dd -a -b -O 20 > 2hrsLater.out
but can't simply diff the two outputs to see new errors; need to awk out the RcvPkts and other traffic columns first...

Per-node errors are listed in the bottom section.
This is the header of the available counters:

   NodeDesc Port               Guid ExcBufOverrunErrors LinkDowned LinkIntegrityErrors LinkRecovers RcvConstraintErrors    RcvData RcvErrors    RcvPkts RcvRemotePhysErrors RcvSwRelayErrors SymbolErrors VL15Dropped XmtConstraintErrors    XmtData XmtDiscards    XmtPkts
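Since the traffic counters grow between every snapshot, a plain diff drowns in them; a sketch (the helper name is mine) that masks those columns, assuming the per-node rows follow the header's column order (RcvData, RcvPkts, XmtData, XmtPkts are fields 9, 11, 17, 19):

```shell
# strip_traffic: blank out the ever-growing traffic counters so that only
# error-counter changes show up when diffing two snapshots
strip_traffic() {
    awk '{ $9 = $11 = $17 = $19 = "-"; print }' "$@"
}
# usage:  diff <(strip_traffic now.out) <(strip_traffic 2hrsLater.out)
```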

IB port on switch side

ibportstate - manage a specific port on the IB switch, eg turn it on/off, etc.

ibportstate [swlid] [swPortNum] [command]	
ibportstate  2        11         disable	# turn off IB switch port a specific host is connected to
ibportstate  6        11         disable
     # can also use enable, query.  Other forms allow changing speed, etc.

  1. Yong Qin's IB Diag & Troubleshooting
  2. UIowa basic IB troubleshooting


# bring up interface after driver install, w/o reboot 
rmmod    ib_ipoib
modprobe ib_ipoib

/etc/sysconfig/network-scripts/ifcfg-ib0 ::
CONNECTED_MODE=no  = use datagram mode (the default)
CONNECTED_MODE=yes = connected mode, behaves more like TCP (allows a larger MTU)
  • RH IPoIB cli
  • RH IPoIB
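A minimal ifcfg-ib0 sketch; the no/yes toggle above is the CONNECTED_MODE parameter, and the addresses here are placeholders:

```
# /etc/sysconfig/network-scripts/ifcfg-ib0 (sketch; IPADDR/NETMASK are placeholders)
DEVICE=ib0
TYPE=InfiniBand
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
CONNECTED_MODE=no
```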

    IPoIB with Bonding Config

    The OFED 1.4 instructions for configuring bonding for IPoIB interfaces are to create static configs in /etc/sysconfig/network-scripts in the files ifcfg-bond0, ifcfg-ib0, ifcfg-ib1, as below:
    IB-bond for system WITHOUT ethernet bond
    $ cat /etc/sysconfig/network-scripts/ifcfg-bond0
    $ cat /etc/sysconfig/network-scripts/ifcfg-ib0  
    $ cat /etc/sysconfig/network-scripts/ifcfg-ib1
    # relevant entries in  /etc/modprobe.conf ::
    ## added for Oracle 10/11 (per Cambridge WI)
    ## make sure that the hangcheck-timer kernel module is set to load when the system boots
    options hangcheck-timer hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1
    alias ib0 ib_ipoib
    alias ib1 ib_ipoib
    alias net-pf-27 ib_sdp
    ##  For IB bonding
    alias bond0 bonding
    options bond0 miimon=100 mode=1 max_bonds=1
    As of OFED 1.4.0 (circa 2009.09), the above bonding config works: bond0 is created correctly, and disabling the IB port for, say, ib0 causes things to fail over.

    However, fail over won't actually work if the machine also has ethernet bonding configured.  The config successfully creates a bond for ib0 and ib1, but the IP gets bound to a specific interface, and when the IB port is disabled from the switch, ping and rds-ping stop working.  Maybe it has to do with bugs in the ifcfg-* scripts in RHEL 5.3 that associate the HW "mac address" of the ibX interfaces incorrectly with the bonding interface.  OFED 1.4.x doesn't support the bonding config in /etc/infiniband/openib.conf anymore.  Manually creating the ib-bond after system boot works, and fail over then works correctly.  Here is the required config:
    IB-bond for system with ethernet bond
    $ cat /etc/sysconfig/network-scripts/ifcfg-ib0
    $ cat /etc/sysconfig/network-scripts/ifcfg-ib1
    # relevant entries in  /etc/modprobe.conf ::
    options bond0 mode=balance-rr miimon=100    max_bonds=2
    # ethernet bonds are configured as in stock RHEL 5.3 config.
    # init script to start ib-bond at boot time:
    # run "manual" ib-bond config
    # could not do this in /etc/sysconfig/network-scripts as bond1, ib0, ib1 scripts,
    # as somehow the presence of the eth bond stops ib-bond fail over from working.
    # maybe a bug in how the network-scripts are parsed...
    ##  nn = startLevel  Sxx Kxx
    ##  eg start at rc3 and rc5,   start as S56, kill is omitted so no Kxx script
    ##  maybe as S28 if need to be before oracleasm
    # chkconfig: 35 28 -
    # description: ib-bond config
    # source function library
    . /etc/rc.d/init.d/functions
    prog="ib-bond-config"
    start() {
            ifconfig ib0 up
            ifconfig ib1 up
            ib-bond --bond-name bond1 --bond-ip --slaves ib0,ib1 --miimon 100
    }
    stop() {
            ib-bond --bond-name bond1 --stop
    }
    status() {
            ib-bond --status-all
    }
    case "$1" in
        start)
            echo -n $"Starting $prog: "
            start ; RETVAL=$?
            [ "$RETVAL" = 0 ] && logger "ib-bond-config start ok" || logger -p local7.err "ib-bond-config start failed"
            ;;
        stop)
            echo -n $"Stopping $prog: "
            stop ; RETVAL=$?
            [ "$RETVAL" = 0 ] && logger "ib-bond-config stop ok" || logger -p local7.err "ib-bond-config stop failed"
            ;;
        status)
            status ; RETVAL=$?
            ;;
        *)
            echo $"Usage: $0 {start|stop|status}"
            RETVAL=1
            ;;
    esac
    exit $RETVAL
    ## setup aid:
    ##  sudo cp -p /nfshome/sa/config_backup/lx/conf.test-4000/ib-bond-config /etc/init.d/
    ##  sudo ln -s /etc/init.d/ib-bond-config /etc/rc.d/rc3.d/S56ib-bond-config
    ##  sudo ln -s /etc/init.d/ib-bond-config /etc/rc.d/rc5.d/S56ib-bond-config
    There is one final catch: eth fail over works for one NIC at a time.  If both eth0 and eth1 are ifdown'd, the minute eth0 is brought back up, the machine resets.  Not sure why; there is no log message, so it may not even be a kernel panic...  But if both eth interfaces are down, the machine is probably screwed anyway...

    RDS and InfiniBand

    RDS stands for Reliable Datagram Socket.  It was modeled/designed as a UDP replacement, but adds the reliability and in-sequence delivery traditionally only available from TCP.  It was meant to be lightweight, relying on the InfiniBand hardware to do the path mapping and "virtual channel" config.

    The RDS developer mailing list has a doc indicating that RDS does not depend on IP when run on InfiniBand, so IPoIB may not actually need to be configured (eg, the way MPICH works with InfiniBand without any IP support).  On the flip side, there are also discussions of implementing RDS over TCP! (eg: take 2)  At any rate, rds-ping needs an IP address, rds-info is centered around the premise of IP addresses, and Oracle doesn't work unless IPs are assigned to the IB interfaces.

    So, for HA (at least for Oracle), it seems that IPoIB fail over via IB-bonding needs to be configured.  Oracle cluster/RAC does not have the ability to use multiple ib0, ib1 interfaces as its private network, so some sort of automatic HCA fail over is needed.

    RDMA and InfiniBand

    RDMA = Remote DMA, ie Remote Direct Memory Access.
    It allows two hosts with IB HCAs to copy data from one host directly into the other, bypassing the many copies needed with a traditional NIC.  Combined with the low latency of IB, it gives very high transfer speed.
    However, RDMA is not implemented by RDS as of OFED 1.4.  (It is for TCP? But that would be at the IPoIB layer?  Or maybe for things that use IB directly like MPICH... don't know....)

    Last Updated: 2018-02-03
    (cc) Tin Ho. See main page for copyright info.
