Section: Maintenance Commands (8)
Updated: January 2019
- mount [-F cvfs] [-o options] filesystem dir
- mount [-fnv] [-t cvfs] [-o <options>] <filesystem> <dir>
- mount_cvfs filesystem dir cvfs options
- mount_cvfs server:filesystem dir cvfs options
Direct Execution (Linux)
- mount_cvfs control_device filesystem dir cvfs options
mount_cvfs is a mount helper utility that mounts a StorNext file system on client machines. On Linux and Solaris, this utility is called by the mount(8) utility to mount file systems of type cvfs. It is designed to be invoked only by mount(8); if it is invoked directly on the command line, the option and argument locations are strictly positional.
Each client file system must communicate with a File System Manager (FSM) running either locally or on a remote host. The FSM manages all the activity for the client in terms of storage allocation and metadata. Data transfers go directly between disks and the client. In the second form of the mount_cvfs command, the hostname of the FSM server is explicitly given in a syntax similar to NFS.
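The explicit-server form can also be used in /etc/fstab. A hypothetical entry (the host name fsmhost and the mount point are placeholders, not names from this manual) might look like:

```
fsmhost:snfs1  /stornext/snfs1  cvfs  rw  0  0
```

In the first form, with no host given, the FSM is located through the normal StorNext name lookup instead.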
The FSM can manage a number of different StorNext file systems. Each different file system is specified in a configuration file on the FSM host. For example, a sample file system configuration is provided in the FSM configuration file /usr/cvfs/examples/example.cfgx.
The mount_cvfs command supports mounting file systems that are running in a cluster other than your default cluster. Your default cluster is defined in the fsmcluster(4) file or, if that file does not exist, defaults to _cluster0/_addom0. When mounting a file system in a non-default cluster, the file system must be qualified with the correct cluster information. The syntax is filesystem@<cluster>[/addom].
Options supported by the mount command:
- -f LINUX ONLY. Fakes the mount process but updates the /etc/mtab file. The mount call will fail if the mtab entry already exists.
- -n LINUX ONLY. Mounts the file system without updating the /etc/mtab file.
- -v Verbose mode.
Additional options may be specified in the /etc/fstab file or on the mount(8) command line via the -o parameter. The -o parameter should be specified only once. If multiple options are needed, they should follow the -o in a comma-separated list.
- Default: rw. Mount the file system read-only.
- Default: rw. Mount the file system read/write.
- Default: off. Do not update inode access times on this file system. Silently disabled on a managed file system.
- Default: off. Do not update directory access times on this file system.
- Default: off. When set, force directory offsets to fit into 31 bits and inode numbers into 32 bits. This should be used only when a problem has been identified with applications that use the full size of the struct dirent d_off field from readdir(2), or with older clients that are unable to handle large inode numbers.
- Default: off. Do not allow the execution of programs resident on this file system.
- Default: off. When executing programs resident on this file system, do not honor the set-user-ID and set-group-ID bits.
- Default: 12. Determines the number of kernel threads that are created. On some platforms these threads show up as cvfsiod processes in the output of ps.
This setting does not affect other kernel threads, for example, cvfsd, cvfsbufiod, cvfsflusher, and cvfs_dputter.
The minimum value allowed is 12.
- Default: 8. In certain cases, such as when using JBOD devices, it may be possible to overload their command queues using SNFS. If this occurs, the I/O concurrency can be reduced by reducing the number of concurrent stripe clusters the file system uses. The reduction comes at the cost of performance.
- Default: yes. When set to yes, the file system will use buffer caching for unaligned I/O.
- Default: no. When set to yes, non-root users are unable to use the preallocation ioctl. Note: protect_alloc=yes also (silently) implies sparse=yes.
- Default: no. If the diskless option is set to yes, the mount will succeed even if the file system's disks are unavailable. Any subsequent I/O will fail until the file system's disks are visible through the SNFS portmapper.
- If the diskproxy option is set to client, then the mount may use a Proxy Server to do its data I/O. If the client host has SAN connectivity to some or all of the disks in the file system, then those disks will be accessed via the SAN connection, not the network. This client is then referred to as a disk proxy hybrid client. When SAN connectivity is used, the server license on the MDC will be charged for this mount. If it is desired that this client use the network for the mount, then the disks should be made unavailable to this host, or the cvpaths file should be configured to prevent StorNext from using the directly attached disks. The who subcommand of cvadmin shows the type of proxy mount. If the diskproxy option is set to server, then this system will become a Proxy Server for this file system. A dpserver configuration file must exist to define the operating parameters for the Server. See dpserver(4) and sndpscfg(8) for details.
A set of proxy servers may be configured in a sparse manner where each server sees only a subset of the disks in the file system. The servers make use of the diskless mount option. The proxy client will issue disk I/O requests to the appropriate server. No special configuration is needed on the client.
Note: The diskproxy option is available only on Linux, OS X, and Solaris systems, and the server option is available on Linux systems. The diskproxy selection on Windows clients is made through the Client Configuration utility.
- Only used if the diskproxy option is set to client; controls the algorithm used to balance I/O across Proxy Server connections. The proxy client keeps track of bytes of I/O pending, bytes of I/O completed, and the elapsed time for each I/O request. It uses these values and certain rules to determine the server that is used for subsequent I/O requests. These collected counters are decayed over time so that only the most recent (a minute or so) I/O requests are relevant.
There are two main components of the selection: the algorithm itself and the use of filesticky behavior. The algorithms are balance, rotate, and sticky.
The balance algorithm attempts to keep the same amount of time's worth of I/O outstanding on each connection; that is, faster links will tend to get more of the I/O. A link could be faster because a given server is more efficient or less busy, or because network traffic over that link uses higher-speed interconnects such as 10G Ethernet.
The rotate algorithm attempts to keep the same number of bytes of I/O pending on each Proxy Server connection. This is similar to balance in that servers which respond more quickly to I/O requests will have the outstanding I/O bytes reduced at a more rapid pace than slower servers and will thus be used more often than slower links.
The difference between balance and rotate is that with balance, higher speed links will have more bytes of I/O outstanding than slower links.
In both balance and rotate, if more than one path has the best score, a pseudo-random selection among the winning paths is made to break the tie.
The sticky algorithm assigns I/O for specific LUNs to specific Proxy Server connections.
Filesticky behavior attempts to assign I/O for a given file to a specific proxy server. It does this by using the file’s inode number modulo the number of servers to select a server index. Since all clients see the same inode number for a given file, all clients will select the same server. If there is more than one path to that server, then the algorithm (balance or rotate) will be used to select among the paths.
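The modulo selection described above can be sketched as follows; the inode number and server count are hypothetical example values:

```shell
# Filesticky sketch: the file's inode number modulo the server count
# picks the server index. Because every client sees the same inode
# number, every client picks the same server.
inode=100003
nservers=4
index=$((inode % nservers))
echo "server index: $index"
```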
Filesticky behavior is controlled through a mount option.
When no proxypath mount option is specified, the balance algorithm is used and filesticky behavior is selected.
For the proxypath values balance and rotate, filesticky behavior is not selected. For filestickybalance and filestickyrotate, it is selected.
Note: The proxypath mount option is available only on Linux, OS X, and Solaris systems. The proxypath options are selected on Windows clients through the Client Configuration utility.
- Only used if the diskproxy option is set to client. Defines the starting value, in seconds, to wait for a Proxy Client I/O read request to complete before disconnecting from the Proxy Server and resubmitting the request to a different Proxy Server. If reads are completing but coming close to the configured timeout, the timeout will be increased. The minimum value is 1 second, the maximum is 3600, and the default is 15.
Note: This option is available only on Linux, OS X, and Solaris systems.
- Only used if the diskproxy option is set to client. Defines the starting value, in seconds, to wait for a Proxy Client I/O write request to complete before disconnecting from the Proxy Server and resubmitting the request to a different Proxy Server. If writes are completing but coming close to the configured timeout, the timeout will be increased. The minimum value is 1 second, the maximum is 3600, and the default is 30.
Note: This option is available only on Linux and Solaris systems.
- Only used if the diskproxy option is set to client. Defines the number of seconds to wait for lost write requests. A lost write request is a write that is active through a gateway when the connection to that gateway is unexpectedly lost. These writes may or may not have been flushed to disk, or even started, at the time the client notices the connection is lost. The default behavior (0) is that lost writes are immediately re-queued to an available gateway. If the connection to the gateway over which the lost writes were sent is reactivated, the gateway will be queried for any writes from this connection that are still active. If there are none, as would be the case if the server unexpectedly rebooted, the client will immediately re-queue all lost writes from the previous connection to this gateway. A value of -1 indicates that the client will never time out lost writes. The minimum value is -1, the maximum is 2147483647, and the default is 0.
Note: This option is available only on Linux systems.
- Default: no. Perform lazy atime updates. This option improves performance by waiting until a file is closed before updating its atime value, which reduces the extra network traffic and latency associated with atime updates.
- Default: 60. The QOS Token Hold Time (nrtiotokenhold) parameter is applicable only when using the SNFS Quality of Service (QOS) feature for real-time I/O. The parameter determines the number of seconds that a client stripe group will hold on to a non-realtime I/O token during periods of inactivity. If no I/O is performed on a stripe group within the specified number of seconds, the token will be released back to the FSM.
The parameter should be specified in five-second increments; if it is not a multiple of five, it will be rounded up automatically.
- Default: no. When set to yes, allows multiple threads to write to files concurrently.
Note: setting auto_concwrite=yes requires that sparse=no also be specified. Also, protect_alloc=yes is disallowed with auto_concwrite=yes.
- Default: 64K (bytes). This option sets the size of each cache buffer, which determines the I/O transfer size used for the buffer cache. For optimal performance, cachebufsize should match the RAID stripe size. If cachebufsize is less than the RAID stripe size, write performance may be severely degraded.
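The round-up to a five-second multiple can be sketched as follows; the value 62 is an arbitrary example:

```shell
# Round a token hold time up to the next multiple of five seconds.
hold=62
rounded=$(( (hold + 4) / 5 * 5 ))
echo "$rounded"   # 62 rounds up to 65
```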
The maximum value allowed is 2048k and the minimum value allowed is 1. The value is rounded up to be a multiple of file system blocks. For example, if the file system block size is 4k and a value of 1 is used, the cachebufsize will be 4k and a value of 2047k would be rounded up to 2048k.
Can be specified in bytes (e.g. 131072) or kilobytes (e.g. 128k).
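The round-up to a multiple of the file system block size can be sketched as follows, using the 4k block size example from above:

```shell
# Round a requested cachebufsize up to a multiple of the file system
# block size (4k here, matching the example in the text).
fsblock=4096
req=$((2047 * 1024))              # a requested value of 2047k
bufsize=$(( (req + fsblock - 1) / fsblock * fsblock ))
echo "$bufsize"                   # 2097152 bytes, i.e. 2048k
```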
- Depicted in megabytes (MB).
- Default: varies by hardware, OS, and memory:
- 32-bit Windows with <= 2GB of memory => 32MB
- 32-bit Windows with > 2GB of memory => 64MB
- 32-bit Linux with <= 2GB of memory => 64MB
- 32-bit Linux with > 2GB of memory => 128MB
- all others with <= 2GB of memory => 64MB
- all others with > 2GB of memory => 256MB
Tells the system how much memory, in MB, to use for the cachebufsize associated with this mount. All mounted file systems with the same cachebufsize share this amount of memory. If a subsequent mount with the same cachebufsize increases buffercachecap, an attempt is made to allocate more buffers; if its buffercachecap is less than or equal to that of a file system already mounted with the same cachebufsize, the value is ignored. If the number of buffers already allocated for this cachebufsize is less than buffercachecap, an attempt is made to allocate more buffers. If any allocation fails, mount stops trying to allocate; the mount still succeeds unless fewer than 10 buffers could be allocated, in which case mount fails with ENOMEM.
If the total amount of memory on the system is 4GB or less, the value of buffercachecap must be between 1 and 1/2 the memory size (in MB). For example, if the machine has 2GB of memory, buffercachecap can be a value from 1 up to and including 1024.
If the total amount of memory on the system is greater than 4GB, the maximum value for buffercachecap is given by:
- MINIMUM(memory_size_in_MB - 2048, 0.9 * memory_size_in_MB)
Note that some operating systems reserve a percentage of memory for special purposes making the available memory somewhat smaller than the physical capacity of the installed RAM.
Also note that while a maximum value exists for buffercachecap that attempts to prevent having a single mount consume too much memory, no checks are made across other caches or other memory consumers including user processes. For example, it is possible to oversubscribe memory by configuring different values of cachebufsize across mounts and specifying large values of buffercachecap. Oversubscription is also possible when specifying very large values of dircachesize and buffercachecap for a particular mount.
The cvdb(8) command can be used with the -b option to see how many buffers have been allocated for each cachebufsize.
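For systems with more than 4GB of memory, the ceiling above can be sketched as follows; 8192MB is an arbitrary example size:

```shell
# Maximum buffercachecap (MB) for a machine with more than 4GB of memory:
# MINIMUM(memory_size_in_MB - 2048, 0.9 * memory_size_in_MB)
mem_mb=8192
a=$(( mem_mb - 2048 ))
b=$(( mem_mb * 9 / 10 ))          # integer approximation of 0.9 * mem_mb
if [ "$a" -lt "$b" ]; then max=$a; else max=$b; fi
echo "$max"                       # 6144 for an 8GB machine
```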
- Default: varies by hardware, OS, and memory:
- These options set the low and high watermarks for background buffer flushing and values are depicted in megabytes (MB). Background buffer flushing is initiated at bufferhighdirty and continues until bufferlowdirty is reached. Note: these options are not intended for general use. Only use when recommended by Quantum Support.
- See buffercachecap. This option has been deprecated and is ignored. Depicted in megabytes (MB).
- See buffercachecap. This option has been deprecated and is ignored. Depicted in megabytes (MB).
- Default: no. When set to yes, mount_cvfs will display configuration information about the file system being mounted.
- Default: no. When set to yes, mount_cvfs will display debugging information. This can be useful in diagnosing configuration or disk problems.
- Default: 1. Indicates the number of retransmission attempts the file system will make during the execution of the mount(2) system call. Until the file system is mounted, the kernel will only retransmit messages to the FSM mnt_retrans times. This parameter works in conjunction with the mnt_recon parameter. This can help reduce the amount of time a mount command will hang during boot; see the mnt_type option.
- Default: soft. Controls whether, after mnt_retrans attempts at contacting the FSS during the mounting and unmounting of a file system, the kernel will give up or continue retrying forever. It is advisable to leave this option at soft so that an unresponsive FSS does not hang the client during boot.
- Default: fg (foreground). Setting mnt_type to bg will cause the mount to run in the background if the mount of the indicated file system fails. mount_cvfs will retry the mount mnt_retry times before giving up. Without this option, an unresponsive FSM could cause a machine to hang during boot while attempting to mount StorNext file systems.
During background mounts, all output is re-directed to /var/adm/SYSLOG.
- Default: 100. If a mount attempt fails, retry the connection up to n times.
- Default: 5. Indicates the number of attempts the kernel will make to transmit a message to the FSM. If no response to a transmitted message arrives within the time indicated by the timeout parameter, the request will be retransmitted. If the file system was mounted with the recon=soft parameter, the file system will give up after retrans attempts at sending the message to the FSM and will return an error to the user.
- Default: hard. This option controls whether, after retrans attempts at sending a message to the FSM, the file system will give up or continue retrying forever. For hard-mounted file systems, the kernel will retry the connection forever, regardless of the value of the retrans field. For soft-mounted file systems, the kernel will try only retrans times before giving up and returning an error of ETIME (62). This is analogous to the hard and soft options to NFS (see fstab(5)).
- Default: 100 (ten seconds). The timeout value, in tenths of a second (0.1 seconds), to use when sending messages to the FSM. This timeout parameter is similar to the one used by NFS (see fstab(5) for more information on NFS timeouts). If no response is received from the FSM in the indicated period, the request is tried again. On heavily loaded systems, you may want to increase the timeout value.
- Default: notice. During normal operations, certain messages will be logged to the system console using the syslog facility. debug is the most verbose, with notice reserved for critical information. It is important to note that the syslog level is global per system, not unique to each file system; changing the level for one file system will affect all other SNFS file systems.
- Default: 64K. This option sets the maximum buffer size, in bytes, for the unaligned I/O transition buffer. Use caution when setting this option, since values that are too small may degrade performance or produce errors when performing large unaligned I/O.
- Default: 10 MB. This option sets the size of the directory cache. Directory entries are cached on the client to reduce client-FSM communications during directory reads. Note: the directory cache on a client is shared across all mounted SNFS file systems. If different values of dircachesize are specified for multiple file systems, the maximum is used. When applying this setting, ensure that the system has sufficient kernel memory.
Can be specified in bytes (e.g. 2097152), kilobytes (e.g. 2048k), or megabytes (e.g. 2m).
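The k/m suffix forms accepted by these size options can be sketched as a small conversion helper (the function name is illustrative, not part of mount_cvfs):

```shell
# Convert a size string with an optional k or m suffix into bytes (sketch).
parse_size() {
  case "$1" in
    *k) echo $(( ${1%k} * 1024 )) ;;
    *m) echo $(( ${1%m} * 1048576 )) ;;
    *)  echo "$1" ;;
  esac
}
parse_size 2048k   # prints 2097152
parse_size 2m      # prints 2097152
```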
- Default: 1048577 bytes (1MB + 1). The minimum transfer size used for performing direct DMA I/O instead of using the buffer cache for well-formed reads. All well-formed reads equal to or larger than this value will be transferred with DMA. All smaller read transfers will use the buffer cache. Reads larger than this value that are not well-formed will use a temporary memory buffer, separate from the buffer cache.
The minimum value is the cachebufsize. By default, well-formed reads of greater than 1 megabyte will be transferred with DMA; smaller reads will use the buffer cache.
auto_dma_read_length can be specified in bytes (e.g. 2097152), kilobytes (e.g. 2048k), or megabytes (e.g. 2m).
- Default: 1048577 bytes (1MB + 1). The minimum transfer size used for performing direct DMA I/O instead of using the buffer cache for well-formed writes. All well-formed writes equal to or larger than this value will be transferred with DMA. All smaller write transfers will use the buffer cache. Writes larger than this value that are not well-formed will use a temporary memory buffer, separate from the buffer cache.
The minimum value is the cachebufsize. By default, well-formed writes of greater than 1 megabyte will be transferred with DMA; smaller writes will use the buffer cache.
auto_dma_write_length can be specified in bytes (e.g. 2097152), kilobytes (e.g. 2048k), or megabytes (e.g. 2m).
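The threshold decision for a well-formed transfer can be sketched as follows; the transfer size is an arbitrary example and the threshold is the documented default:

```shell
# Choose DMA vs. buffer cache for a well-formed transfer, assuming
# the default auto_dma threshold of 1MB + 1 bytes.
threshold=1048577
size=2097152                       # a 2MB well-formed read or write
if [ "$size" -ge "$threshold" ]; then
  path="DMA"
else
  path="buffer cache"
fi
echo "$path"
```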
- Default: 16. This option modifies the size of the read-ahead window, expressed in cache buffers. Setting this value to 0 disables read-ahead.
- Default: 8, Minimum: 1, Maximum: 100. The number of background daemons used for performing buffer cache I/O.
- Default: varies by platform. This option sets the maximum number of cvnode entries cached on the client. Caching cvnode entries improves performance by reducing client-FSM communication; however, each cached cvnode entry must also be maintained by the FSM. In environments with many SNFS clients, the FSM may be overloaded with cvnode references. In that case, reducing the size of the client cvnode cache will alleviate the issue.
- LINUX ONLY. Default: varies by platform.
This option tells the kernel the maximum DMA size a user process can issue. This can impact the number of concurrent I/Os the file system issues to the driver for a user I/O; other factors can also limit the number of concurrent I/Os. The default on Linux is 512m. WARNING: Incorrectly setting this value may degrade performance or cause a crash or hang.
- LINUX ONLY. Default: 512M with Linux DM/Multipath; 512K with StorNext multipath.
This option tells the kernel the maximum I/O size to use when issuing I/Os to the underlying disk driver handling a LUN. The file system attempts to get the maximum I/O size using the IOCINFO ioctl; since the ioctl is not always reliable, this mount option exists to override the ioctl return value. Example usage: max_dev=1m or max_dev=256k. WARNING: Incorrectly setting this value may result in I/O failures or cause a crash or hang. For Linux clients, only use when recommended by Quantum Support.
- Default: varies by platform. Some utilities detect "holes" in a file and assume the file system will fill the hole with zeroes. To ensure that SNFS writes zeroes to allocated but uninitialized areas on the disk, set sparse=yes.
- LINUX ONLY. Default: 0 (disabled). This option controls the amount of DMA-based I/O that can be queued while buffer cache I/O is pending. The numerical value is multiplied by the cachebufsize (64K default); once this amount of buffered I/O is pending, all DMA I/O will be suspended until some of the buffered I/O completes. This gives buffered I/O some priority over heavy DMA loads. If you do not understand how to apply this to your workload, do not use it.
- Default: 300 (seconds). This option controls the Multi-Path I/O penalty value, where n is expressed in seconds with a minimum value of 1 and a maximum value of 4294967295 (0xFFFFFFFF). This parameter establishes the amount of time that a Path_In_Error will be bypassed in favor of an Operational Path during a Multi-Path selection sequence. If all paths are currently in the Path_In_Error state, the first available path will be selected, independent of the Path_In_Error state.
- Default: 0 (forever). This option controls the I/O retry behavior. n is expressed in seconds (0 means no time limit) and establishes the amount of time that may elapse during an I/O retry sequence. An I/O retry sequence consists of:
- Retry an I/O request across all available paths that are currently present.
- Compare the current time to the instantiation time of the I/O request; if at least n seconds have elapsed, return the I/O request in error; otherwise reset the paths used and retry again.
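The retry window amounts to an elapsed-time check against the request's instantiation time. A sketch with hypothetical timestamps:

```shell
# Retry-window check (sketch): give up once n seconds have elapsed
# since the I/O request was created; n=0 would mean retry forever.
# Timestamps are hypothetical epoch seconds.
n=30
created=1000
now=1040
if [ "$n" -gt 0 ] && [ $((now - created)) -ge "$n" ]; then
  action="return error"
else
  action="reset paths and retry"
fi
echo "$action"
```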
- Default: disabled. This option provides conversion of file and directory names from legacy ISO-8859-1 and ISO-8859-15 to the SNFS native UTF-8 format. This may be needed in environments where a critical application does not yet support Unicode and the required locale is ISO-8859-1 or ISO-8859-15.
This option is supported on Linux and Solaris.
Restrictions: 1. Cannot be enabled on an SNSM metadata server (clients only). 2. File system names must be plain ASCII. 3. Pre-existing file systems with erroneous ISO-8859 object names must be manually converted to UTF-8 before using this feature; this can easily be accomplished with a Perl script using Encode::from_to(). 4. Does not solve issues surrounding composed versus decomposed presentation.
- LINUX ONLY. Default: 0 (no). Causes a Linux client to evaluate path names in a case-insensitive, case-preserving mode. This is intended for use in SMB environments to reduce the overhead of emulating the behavior of a Windows file system on Linux in user space.
- Default: no. UNSUPPORTED: reserved for Quantum internal usage and not intended for production use. When set to yes, mount_cvfs will allow multiple mounts of the same file system.
- Linux ONLY. Default: 0. Determines the number of threads used for servicing retrieves of off-line files initiated as a side effect of NFS activity. This option should be used only on Linux systems acting as NFS servers and is only useful for managed file systems. dmnfsthreads should be set to a value at least as large as the maximum number of concurrent retrieves possible for the file system. For example, with tape-based configurations, this will equal the number of tape drives configured for retrieval.
- Linux ONLY. Default: off.
Enable support for loopback devices on top of StorNext. This causes heavy use of the Linux page cache when loop devices are used; it also changes how an NFS server interacts with StorNext. Use of this option is not recommended unless loopback device or sendfile support is required.
- Linux ONLY. Default: no.
Change some client behaviors to be more in line with Hadoop file system support. In particular, deferred close does not happen, data is flushed on last close, and allocation sessions are no longer per directory.
- Linux ONLY. Default: no value.
Pass the specified string to the FSM during connect. This allows the FSM to identify a set of clients of a particular class. The information is available via an xattr on files in the file system, via cvadmin, and via a web service call. This is initially for Hadoop support.
- Default: 0. Set the maximum size of ingest segments to n (bytes).
- Default: 0. Set the maximum time to wait before sending an ingest segment to n (seconds).
- Default: varies by platform. Set the required memory alignment for DMA transfers to n. This option is not intended for general use and may be deprecated in the future. Only use when recommended by Quantum Support.
- Linux ONLY. Default: none. Enable special buffer cache ingest throttling and LRU handling. This option is not intended for general use, and incorrect use may lead to poor performance and system instability. Only use when recommended by Quantum Support.
- Linux and Windows. When large direct I/O is performed, submit request components in parallel using a pool of daemons. This can improve performance and reduce latency if the backend storage provides more than 10 GB/s of bandwidth; on slower storage, the improvement will not be noticeable. On Linux, transparent huge pages must be disabled, or performance will actually be worse when using this option. Also, for the performance impact to be seen, the number of requests configured for the Linux block devices (nr_requests) must be large enough that I/O submission does not block due to request structure exhaustion (see deviceparams(4)). Note that "large I/Os" here are any with sizes at least 16 times the stripebreadth of the stripe group where the file resides.
This option is also available on StorNext Windows clients through the Enable Parallel I/O Submission checkbox on the Advanced Mount Options tab in the Client Configuration Tool.
This option is not intended for general use and incorrect use may lead to poor performance and system instability. Only use when recommended by Quantum Support.
On Linux, the initialization of SNFS can be controlled by the chkconfig(8) mechanism. If the cvfs chkconfig flag is set to on, then all SNFS file systems specified in the /etc/fstab file will be mounted when the system is booted to multi-user state. When the system is being shut down, the file systems will be unmounted. See the cvfs man page for more information.
On Solaris, the installation of CVFS adds a startup script to /etc/init.d/ that will automatically mount CVFS file systems present in the /etc/vfstab file with a “yes” keyword in the “mount at boot” column.
mount_cvfs will query the local portmapper for the list of all accessible SNFS disk devices. SNFS disks are recognized by their label. This list is matched with the list of devices for each stripe group in the file system. If any disk is missing, I/O will be prohibited, and you will receive I/O errors.
A socket is maintained for each unique SNFS client file system for sending and receiving commands to and from the FSM. If the socket connection is lost for any reason, it must be reconnected.
There are two daemons involved in re-establishing the connection between an SNFS client and the FSM. The first is the socket input daemon, which is a dedicated daemon that handles all input from the FSM. The second is the reconnect daemon, which handles the work of re-establishing the logical connection with the FSM. Both of these daemons appear as cvfsd in the output from ps.
Messages will be printed on the system console and to syslog during reconnect processing; the verbosity of the messages displayed can be controlled via the syslog= parameter and cvdb(8).
When the socket input daemon detects that the connection has been lost, it first attempts to connect to the FSM portmapper process, fsmpm(8). Once it has succeeded and has the port number of the fsm(8) to use, it attempts to create a new socket to the FSM using the port number returned by the FSM portmapper.
If no response is received from either the SNFS portmapper or the FSM, the daemon will pend for the amount of time specified by the timeout= parameter. The socket input daemon will attempt to reconnect to the FSM forever.
If any of the configuration parameters in the FSM configuration file have changed, the connection will be terminated and no further I/O will be allowed. The only recourse is to unmount and remount the file system. See snfs_config(5) (part of the cvfs_server product) for more information on configuring the FSM.
Whenever a process must go to sleep in the SNFS file system, the sleep is interruptible, meaning that the process can be sent a signal and the operation will fail with an error (usually EINTR). The only exceptions are when a process is executing the exit(2) system call and is closing out all open files; due to Unix limitations, processes are immune to signals at that point.
File systems can be specified either on the command line or in /etc/fstab. To mount the default file system that is served by a host in the SAN, the entry in /etc/fstab would be:
default /usr/tmp/cvfs cvfs verbose=yes
If this is the only SNFS file system in /etc/fstab, it could be mounted with the command:
# mount -at cvfs
To mount the same file system, but specifying a soft connection with a retransmit count of two, and a soft background mount with a retry count of two, the entry in /etc/fstab would be (line is shown broken up for readability; in practice, it would wrap):
default /usr/tmp/foo cvfs verbose=yes,recon=soft,retrans=2, mnt_recon=soft,mnt_retry=2,mnt_type=bg
To mount a file system in a cluster other than your default, use an /etc/fstab entry similar to the following:
snfs1@cluster1 /stornext/snfs1 cvfs rw 0 0
And the corresponding mount command would be:
mount /stornext/snfs1
Filesystems can also be specified on the command line, without an entry in /etc/fstab. To mount the default file system on mount point /usr/tmp/foo:
mount -t cvfs default /usr/tmp/foo
The command to mount a filesystem not in your default cluster would be:
mount -t cvfs snfs1@cluster1 /stornext/snfs1
To verbosely mount a file system described by the FSS configuration file mycvdr.cfgx on that host:
mount -o verbose=yes -t cvfs mycvdr /usr/tmp/foo