From 0dc646898197a4bff549fa471e1bf1c1f337f5fe Mon Sep 17 00:00:00 2001 From: Don Domingo Date: Aug 17 2010 01:48:49 +0000 Subject: Initial check-in --- diff --git a/Makefile b/Makefile new file mode 100644 index 0000000..0340dfe --- /dev/null +++ b/Makefile @@ -0,0 +1,15 @@ +#Makefile for Storage_Administration_Guide + +XML_LANG = en-US +DOCNAME = Storage_Administration_Guide +#PRODUCT = FIX_ME! +BRAND = RedHat + +#OTHER_LANGS = as-IN bn-IN de-DE es-ES fr-FR gu-IN hi-IN it-IT ja-JP kn-IN ko-KR ml-IN mr-IN or-IN pa-IN pt-BR ru-RU si-LK ta-IN te-IN zh-CN zh-TW + +# Extra Parameters start here + +# Extra Parameters stop here +COMMON_CONFIG = /usr/share/publican +include $(COMMON_CONFIG)/make/Makefile.common + diff --git a/en-US/Author_Group.xml b/en-US/Author_Group.xml new file mode 100644 index 0000000..2ef5dcf --- /dev/null +++ b/en-US/Author_Group.xml @@ -0,0 +1,187 @@ + + + + + + + Don + Domingo + + Engineering + Content Services + + ddomingo@redhat.com + + +Subject Matter Experts + + +Josef +Bacik + +Server Development +Kernel File System + +Disk Quotas +jwhiter@redhat.com + + + +Kamil +Dudka + +Base Operating System +Core Services - BRNO + +Access Control Lists +kdudka@redhat.com + + + +Hans +de Goede + +Base Operating System +Installer + +Partitions +hdegoede@redhat.com + + + +Doug +Ledford + +Server Development +Hardware Enablement + +RAID +dledford@redhat.com + + + +Daniel +Novotny + +Base Operating System +Core Services - BRNO + +The /proc File System +dnovotny@redhat.com + + + +Nathan +Straz + +Quality Engineering +QE - Platform + +GFS2 +nstraz@redhat.com + + + +David +Wysochanski + +Server Development +Kernel Storage + +LVM/LVM2 +dwysocha@redhat.com + + + +Contributors + + +Michael +Christie + +Server Development +Kernel Storage + +Online Storage +mchristi@redhat.com + + + +Sachin +Prabhu + +Software Maintenance +Engineering + +NFS +sprabhu@redhat.com + + + +Rob +Evers + +Server Development +Kernel Storage + +Online Storage +revers@redhat.com + + + +David +Howells + +Server Development +Hardware Enablement + +FS-Cache +dhowells@redhat.com + + + +David +Lehman + +Base Operating System +Installer + +Storage configuration during installation +dlehman@redhat.com + + + +Jeff +Moyer + +Server Development +Kernel File System + +Solid-State Disks +jmoyer@redhat.com + + + +Eric +Sandeen + +Server Development +Kernel File System + +ext3, ext4, XFS, Encrypted File Systems +esandeen@redhat.com + + + + +Mike +Snitzer + +Server Development +Kernel Storage + +I/O Stack and Limits +msnitzer@redhat.com + + diff --git a/en-US/Book_Info.xml b/en-US/Book_Info.xml new file mode 100644 index 0000000..3f391e4 --- /dev/null +++ b/en-US/Book_Info.xml @@ -0,0 +1,33 @@ + + +%BOOK_ENTITIES; +]> + + Storage Administration Guide + Deploying and configuring single-node storage in Fedora + Fedora + 13 + 0 + 1 + + + This guide provides instructions on how to effectively manage storage devices and file systems on Fedora 13 and later. It is intended for use by system administrators with basic to intermediate knowledge of Red Hat Enterprise Linux or Fedora. + + + + + + + + + + Logo + + + + + + + + diff --git a/en-US/DG_Filesys-Acls.xml b/en-US/DG_Filesys-Acls.xml new file mode 100644 index 0000000..74bd55e --- /dev/null +++ b/en-US/DG_Filesys-Acls.xml @@ -0,0 +1,493 @@ + + +%RH_ENTITIES; + +]> + +Access Control Lists + + Access Control Lists + ACLs + + + + Files and directories have permission sets for the owner of + the file, the group associated with the file, and all other + users for the system. 
However, these permission sets have + limitations. For example, different permissions cannot be + configured for different users. Thus, Access + Control Lists (ACLs) were implemented. + + + + ACLs + on ext3 file systems + + + ACLs + with Samba + + + The Fedora kernel provides ACL support for the ext3 file system and NFS-exported file systems. ACLs are also recognized on ext3 file systems accessed via Samba. + + + Along with support in the kernel, the acl package is required to implement ACLs. It contains the utilities used to add, modify, remove, and retrieve ACL information. + + + The cp and mv commands copy or move any ACLs associated with files and directories. + +
+ Mounting File Systems + + ACLs + mounting file systems with + + + Before using ACLs for a file or directory, the partition for the file or directory must be mounted with ACL support. If it is a local ext3 file system, it can be mounted with the following command: + + +mount -t ext3 -o acl device-name partition + + + For example: + + +mount -t ext3 -o acl /dev/VolGroup00/LogVol02 /work + + + Alternatively, if the partition is listed in the /etc/fstab file, the entry for the partition can include the acl option: + + +LABEL=/work /work ext3 acl 1 2 + + + If an ext3 file system is accessed via Samba and ACLs have been enabled for it, the ACLs are recognized because Samba has been compiled with ACL support. No special flags are required when accessing or mounting a Samba share. + +
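For a local file system that is already mounted, ACL support can also be enabled without taking it offline. As a small sketch that reuses the /work mount point from the example above:

mount -o remount,acl /work

The acl option then applies to the remounted file system immediately.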
+ NFS + + ACLs + mounting NFS shares with + + + By default, if the file system being exported by an NFS server supports ACLs and the NFS client can read ACLs, ACLs are utilized by the client system. + + + + To disable ACLs on NFS shares when configuring the server, include the no_acl option in the /etc/exports file. To disable ACLs on an NFS share when mounting it on a + client, mount it with the no_acl option via the command line or the /etc/fstab file. + +
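As an illustrative sketch only — the export path, host names, and mount point below are hypothetical, and the option name follows the text above — a server-side /etc/exports entry that disables ACLs might look like this:

/export        client.example.com(rw,sync,no_acl)

A client could likewise disable ACLs for that share with a matching /etc/fstab entry:

server.example.com:/export   /mnt/export   nfs   defaults,no_acl   0 0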
+
+
+ Setting Access ACLs + + ACLs + access ACLs + + + ACLs + setting + access ACLs + + + There are two types of ACLs: access ACLs and default ACLs. An access ACL is the access control list for a specific file or directory. A default ACL can only be associated with a directory; if a file within + the directory does not have an access ACL, it uses the rules of the default ACL for the directory. Default ACLs are optional. + + + + ACLs can be configured: + + + + + + Per user + + + + + + Per group + + + + + + Via the effective rights mask + + + + + + For users not in the user group for the file + + + + + ACLs + setfacl + + + + setfacl + + + + The setfacl utility sets ACLs for files and directories. Use the -m option to add or modify the ACL of a file or directory: + + +setfacl -m rules files + + + Rules (rules) must be specified in the following formats. Multiple rules can be specified in the same command if they are separated by commas. + + + + + u:uid:perms + + + Sets the access ACL for a user. The user name or UID may be specified. The user may be any valid user on the system. + + + + + + g:gid:perms + + + Sets the access ACL for a group. The group name or GID may be specified. The group may be any valid group on the system. + + + + + + m:perms + + + Sets the effective rights mask. The mask is the union of all permissions of the owning group and all of the user and group entries. + + + + + + o:perms + + + Sets the access ACL for users other than the ones in the group for the file. + + + + + + + Permissions (perms) must be a combination of the characters r, w, and + x for read, write, and execute. + + + + + + If a file or directory already has an ACL, and the setfacl command is used, the additional rules are added to the existing ACL or the existing rule is modified. + + + + For example, to give read and write permissions to user andrius: + + +setfacl -m u:andrius:rw /project/somefile + + + To remove all the permissions for a user, group, or others, use the -x option and do not specify any permissions: + + +setfacl -x rules files + + + For example, to remove all permissions from the user with UID 500: + + +setfacl -x u:500 /project/somefile + +
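Because multiple rules may be given in one command, the formats above can be combined. In the following sketch, the user andrius and the group devel are hypothetical names:

setfacl -m u:andrius:rw,g:devel:rx,m:rwx /project/somefile

This grants andrius read and write access, grants the devel group read and execute access, and sets the effective rights mask in a single invocation. The result can be verified with the getfacl command described later in this chapter.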
+
+ Setting Default ACLs + + ACLs + default ACLs + + + To set a default ACL, add d: before the rule and specify a directory instead of a file name. + + + + For example, to set the default ACL for the /share/ directory to read and execute for users not in the user group (an access ACL for an individual file can override it): + + +setfacl -m d:o:rx /share + +
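A default ACL entry can also name a specific user or group by prefixing the corresponding rule format with d:. For example (andrius again being a hypothetical user), the following gives that user read, write, and execute permissions on files and subdirectories subsequently created under /share/:

setfacl -m d:u:andrius:rwx /share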
+
+ Retrieving ACLs + + ACLs + retrieving + + + ACLs + getfacl + + + + getfacl + + + + To determine the existing ACLs for a file or directory, use the getfacl command. In the example below, the getfacl is used to determine the existing ACLs for a file. + + +getfacl home/john/picture.png + + + The above command returns the following output: + +# file: home/john/picture.png +# owner: john +# group: john +user::rw- +group::r-- +other::r-- + + + + + + If a directory with a default ACL is specified, the default ACL is also displayed as illustrated below. For example, getfacl home/sales/ will display similar output: + + +# file: home/sales/ +# owner: john +# group: john +user::rw- +user:barryg:r-- +group::r-- +mask::r-- +other::r-- +default:user::rwx +default:user:john:rwx +default:group::r-x +default:mask::rwx +default:other::r-x + + + +
+ +
+ Archiving File Systems With ACLs + + ACLs + archiving with + + + star + + + +BZ#561619; removed warning RE "tar and dump commands do not backup ACLs"; note the by default statement RE dump and ACLs + + +By default, the dump command now preserves ACLs during a backup operation. When archiving a file or file system with tar, use the --acls option to preserve ACLs. Similarly, when using cp to copy files with ACLs, include the --preserve=mode option to ensure that ACLs are copied across too. In addition, the -a option (equivalent to -dR --preserve=all) of cp also preserves ACLs during a backup along with other information such as timestamps, SELinux contexts, and the like. For more information about dump, tar, or cp, refer to their respective man pages. + + + + + The star utility is similar to the tar utility in that it can be used to generate archives of files; however, some of its options are different. Refer to + for a listing of more commonly used options. For all available options, refer to man star. The star package is required to use this + utility. + + + + Command Line Options for <command moreinfo="none">star</command> + + + + + + + + Option + + + + Description + + + + + + + + + + + Creates an archive file. + + + + + + + + + + Do not extract the files; use in conjunction with to show what extracting the files does. + + + + + + + + + + Replaces files in the archive. The files are written to the end of the archive file, replacing any files with the same path and file name. + + + + + + + + + + Displays the contents of the archive file. + + + + + + + + + + Updates the archive file. The files are written to the end of the archive if they do not exist in the archive, or if the files are newer than the files of the same name in the archive. This option only works if the archive is a file or an unblocked tape that may backspace. + + + + + + + + + + Extracts the files from the archive. If used with and a file in the archive is older than the corresponding file on the file system, the file is not extracted. + + + + + + + + + + Displays the most important options. + + + + + + + + + + Displays the least important options. + + + + + + + + + + Do not strip leading slashes from file names when extracting the files from an archive. By default, they are stripped when files are extracted. + + + + + + + + + + When creating or extracting, archives or restores any ACLs associated with the files and directories. + + + + +
+
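As a short sketch of the tar and cp options mentioned at the start of this section (the archive name and paths are arbitrary):

tar --acls -czf project.tar.gz /project
tar --acls -xzf project.tar.gz
cp -a /project /backup/project

The first command creates a compressed archive that retains ACLs, the second restores it with the ACLs intact, and the third copies a directory tree while preserving ACLs along with timestamps and other attributes.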
+
+ Compatibility with Older Systems + + + If an ACL has been set on any file on a given file system, that file system has the ext_attr attribute. This attribute can be seen using the following command: + + +tune2fs -l filesystem-device + + + A file system that has acquired the ext_attr attribute can be mounted with older kernels, but those kernels do not enforce any ACLs which have been set. + + + + Versions of the e2fsck utility included in version 1.22 and higher of the e2fsprogs package (including the versions in very early versions of Fedora) can check a file system + with the ext_attr attribute. Older versions refuse to check it. + +
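For example, to check the logical volume used earlier in this chapter (substitute the block device of the file system you are inspecting), filter the feature list reported by tune2fs:

tune2fs -l /dev/VolGroup00/LogVol02 | grep "Filesystem features"

If ext_attr appears in the output, the file system carries the attribute.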
+
+References + + ACLs + additional resources + + + Refer to the following man pages for more information. + + + + + + + man acl — Description of ACLs + + + + + + man getfacl — Discusses how to get file access control lists + + + + + + man setfacl — Explains how to set file access control lists + + + + + + man star — Explains more about the star utility and its many options + + + + + + +
+
diff --git a/en-US/DG_Filesys-Disk_Quotas-accurate.xml b/en-US/DG_Filesys-Disk_Quotas-accurate.xml new file mode 100644 index 0000000..ab1ba75 --- /dev/null +++ b/en-US/DG_Filesys-Disk_Quotas-accurate.xml @@ -0,0 +1,76 @@ + + + +
+ Keeping Quotas Accurate + + disk quotas + management of + + quotacheck command, using to check + + + + quotacheck command + checking quota accuracy with + + + Whenever a file system is not unmounted cleanly (due to a system crash, for example), it is necessary to run quotacheck. However, quotacheck can be run on a regular basis, even if the system has not crashed. Safe methods for periodically running quotacheck include: + + + Ensuring quotacheck runs on next reboot + + + Best method for most systems + This method works best for (busy) multiuser systems which are periodically rebooted. + + As root, place a shell script into the /etc/cron.daily/ or /etc/cron.weekly/ directory—or schedule one using the crontab -e command—that contains the touch /forcequotacheck command. This creates an empty forcequotacheck file in the root directory, which the system init script looks for at boot time. If it is found, the init script runs quotacheck. Afterward, the init script removes the /forcequotacheck file; thus, scheduling this file to be created periodically with cron ensures that quotacheck is run during the next reboot. + + +For more information about cron, refer to man cron. + + + + + + + Running quotacheck in single user mode + + An alternative way to safely run quotacheck is to (re-)boot the system into single-user mode to prevent the possibility of data corruption in quota files and run the following commands: + +quotaoff -vaug /file_system + + +quotacheck -vaug /file_system + + +quotaon -vaug /file_system + + + + + + + Running quotacheck on a running system + + If necessary, it is possible to run quotacheck on a machine during a time when no users are logged in, and thus have no open files on the file system being checked. Run the command quotacheck -vaug file_system + ; this command will fail if quotacheck cannot remount the given file_system as read-only. Note that, following the check, the file system will be remounted read-write. + + + Running quotacheck on a live file system mounted read-write is not recommended due to the possibility of quota file corruption. + + + + + + Refer to man cron for more information about configuring cron. +
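A minimal version of the reboot-time script described above might look like the following; the file name and the choice of /etc/cron.weekly/ are only examples, so adjust them to suit your site:

#!/bin/sh
# /etc/cron.weekly/forcequotacheck (example path)
# Ask the init scripts to run quotacheck during the next reboot.
touch /forcequotacheck

Remember to make the script executable (for example, with chmod +x /etc/cron.weekly/forcequotacheck) so that cron can run it.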
diff --git a/en-US/DG_Filesys-Disk_Quotas.xml b/en-US/DG_Filesys-Disk_Quotas.xml new file mode 100644 index 0000000..c43e244 --- /dev/null +++ b/en-US/DG_Filesys-Disk_Quotas.xml @@ -0,0 +1,476 @@ + + +%RH_ENTITIES; + +]> + + + +Disk Quotas + + disk quotas + + + disk storage + disk quotas + + + + Disk space can be restricted by implementing disk quotas which alert a system administrator before a user consumes too much disk space or a partition becomes full. + + + Disk quotas can be configured for individual users as well as user groups. This makes it possible to manage the space allocated for user-specific files (such as email) separately from the space allocated to the projects a user works on (assuming the projects are given their own groups). + + + In addition, quotas can be set not just to control the number of disk blocks consumed but to control the number of inodes (data structures that contain information about files in UNIX file systems). Because inodes are used to contain file-related + information, this allows control over the number of files that can be created. + + + The quota RPM must be installed to implement disk quotas. + + + +
+ Configuring Disk Quotas + + disk quotas + enabling + + + To implement disk quotas, use the following steps: + + + + + + Enable quotas per file system by modifying the /etc/fstab file. + + + + + + Remount the file system(s). + + + + + + Create the quota database files and generate the disk usage table. + + + + + + Assign quota policies. + + + + + + Each of these steps is discussed in detail in the following sections. + + +
+ Enabling Quotas + + disk quotas + enabling + /etc/fstab, modifying + + + /etc/fstab file + enabling disk quotas with + + + As root, using a text editor, edit the /etc/fstab file. Add the usrquota and/or grpquota options to the file systems that require quotas: + +/dev/VolGroup00/LogVol00 / ext3 defaults 1 1 +LABEL=/boot /boot ext3 defaults 1 2 +none /dev/pts devpts gid=5,mode=620 0 0 +none /dev/shm tmpfs defaults 0 0 +none /proc proc defaults 0 0 +none /sys sysfs defaults 0 0 +/dev/VolGroup00/LogVol02 /home ext3 defaults,usrquota,grpquota 1 2 +/dev/VolGroup00/LogVol01 swap swap defaults 0 0 . . . + + In this example, the /home file system has both user and group quotas enabled. + + + + Note + + + The following examples assume that a separate /home partition was created during the installation of &PROD;. The root (/) partition can be used for setting quota policies in the /etc/fstab file. + + + +
+ +
+ Remounting the File Systems + + + After adding the usrquota and/or grpquota options, remount each file system whose fstab entry has been modified. If the file system is not in use + by any process, use one of the following methods: + + + + + + Issue the umount command followed by the mount command to remount the file system. Refer to the man page for both umount and mount for the specific syntax for mounting and unmounting various file system types. + + + + + + Issue the mount -o remount file-system command (where file-system is the name of the file system) to remount the file system. For example, to remount the /home file system, the command to issue is mount -o remount /home. + + + + + + If the file system is currently in use, the easiest method for remounting the file system is to reboot the system. + +
+ +
+ Creating the Quota Database Files + + disk quotas + enabling + quotacheck, running + + + disk quotas + enabling + creating quota files + + + quotacheck + + + + After each quota-enabled file system is remounted, the system is capable of working with disk quotas. However, the file system itself is not yet ready to support quotas. The next step is to run the quotacheck command. + + + + The quotacheck command examines quota-enabled file systems and builds a table of the current disk usage per file system. The table is then used to update the operating system's copy of disk usage. In addition, the + file system's disk quota files are updated. + + + + To create the quota files (aquota.user and aquota.group) on the file system, use the option of the quotacheck command. For + example, if user and group quotas are enabled for the /home file system, create the files in the /home directory: + + +quotacheck -cug /home + + + The option specifies that the quota files should be created for each file system with quotas enabled, the option specifies to check for user quotas, and the option specifies to check for group + quotas. + + + + If neither the or options are specified, only the user quota file is created. If only is specified, only the group quota file is created. + + + + After the files are created, run the following command to generate the table of current disk usage per file system with quotas enabled: + + +quotacheck -avug + + + The options used are as follows: + + + + +a + + +Check all quota-enabled, locally-mounted file systems + + + + +v + + +Display verbose status information as the quota check proceeds + + + + +u + + +Check user disk quota information + + + + +g + + +Check group disk quota information + + + + + + After quotacheck has finished running, the quota files corresponding to the enabled quotas (user and/or group) are populated with data for each quota-enabled locally-mounted file system such as + /home. + +
+ +
+ Assigning Quotas per User + + disk quotas + assigning per user + + + The last step is assigning the disk quotas with the edquota command. + + + + To configure the quota for a user, as root in a shell prompt, execute the command: + + +edquota username + + + Perform this step for each user who needs a quota. For example, if a quota is enabled in /etc/fstab for the /home partition + (/dev/VolGroup00/LogVol02 in the example below) and the command edquota testuser is executed, the following is shown in the editor configured as the default for the system: + +Disk quotas for user testuser (uid 501): +Filesystem blocks soft hard inodes soft hard +/dev/VolGroup00/LogVol02 440436 0 0 37418 0 0 + + Note + + + The text editor defined by the + EDITOR + environment variable is used by edquota. To change the editor, set the + EDITOR + environment variable in your ~/.bash_profile file to the full path of the editor of your choice. + + + + + The first column is the name of the file system that has a quota enabled for it. The second column shows how many blocks the user is currently using. The next two columns are used to set soft and hard block limits for the user on the file system. The + inodes column shows how many inodes the user is currently using. The last two columns are used to set the soft and hard inode limits for the user on the file system. + + + disk quotas + hard limit + + + The hard block limit is the absolute maximum amount of disk space that a user or group can use. Once this limit is reached, no further disk space can be used. + + + disk quotas + soft limit + + + disk quotas + grace period + + + The soft block limit defines the maximum amount of disk space that can be used. However, unlike the hard limit, the soft limit can be exceeded for a certain amount of time. That time is known as the grace period. The grace period can + be expressed in seconds, minutes, hours, days, weeks, or months. + + + + If any of the values are set to 0, that limit is not set. In the text editor, change the desired limits. For example: + +Disk quotas for user testuser (uid 501): +Filesystem blocks soft hard inodes soft hard +/dev/VolGroup00/LogVol02 440436 500000 550000 37418 0 0 + + To verify that the quota for the user has been set, use the command: + + +quota testuser + +
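For instance, to have edquota open quota entries in vim rather than the system default editor, a line such as the following could be added to ~/.bash_profile (the path assumes vim is installed; substitute the full path of the editor of your choice):

export EDITOR=/usr/bin/vim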
+ +
+ Assigning Quotas per Group + + disk quotas + assigning per group + + + Quotas can also be assigned on a per-group basis. For example, to set a group quota for the devel group (the group must exist prior to setting the group quota), use the command: + + +edquota -g devel + + + This command displays the existing quota for the group in the text editor: + +Disk quotas for group devel (gid 505): +Filesystem blocks soft hard inodes soft hard +/dev/VolGroup00/LogVol02 440400 0 0 37418 0 0 + + Modify the limits, then save the file. + + + + To verify that the group quota has been set, use the command: + + +quota -g devel + +
+ +
+ Setting the Grace Period for Soft Limits + + disk quotas + assigning per file system + + + + + +If a given quota has soft limits, you can edit the grace period (i.e. the amount of time a soft limit can be exceeded) with the following command: + + + + +edquota -t + + + +This command works on quotas for inodes or blocks, for either users or groups. While other edquota commands operate on quotas for a particular user or group, the -t option operates on every file system with quotas enabled. + + +
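Running edquota -t opens the grace periods in the configured text editor, much like the per-user examples above. The result resembles the following illustrative sketch (the device name matches the earlier examples; the exact layout can vary between quota versions):

Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem             Block grace period     Inode grace period
  /dev/VolGroup00/LogVol02     7days                  7days

Edit the values as needed, then save the file to apply the new grace periods.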
+
+
+ Managing Disk Quotas + + disk quotas + management of + + + If quotas are implemented, they need some maintenance — mostly in the form of watching to see if the quotas are exceeded and making sure the quotas are accurate. + + + Of course, if users repeatedly exceed their quotas or consistently reach their soft limits, a system administrator has a few choices to make depending on what type of users they are and how much disk space impacts their work. The administrator can either help the user determine how to use less disk space or increase the user's disk quota. + + +
+ Enabling and Disabling + + quotaoff + + + + disk quotas + disabling + + + It is possible to disable quotas without setting them to 0. To turn all user and group quotas off, use the following command: + + +quotaoff -vaug + + + If neither the or options are specified, only the user quotas are disabled. If only is specified, only group quotas are disabled. The -v switch causes verbose status information to display as the command executes. + + + quotaon + + + + disk quotas + enabling + + + To enable quotas again, use the quotaon command with the same options. + + + + For example, to enable user and group quotas for all file systems, use the following command: + + +quotaon -vaug + + + To enable quotas for a specific file system, such as /home, use the following command: + + +quotaon -vug /home + + + If neither the or options are specified, only the user quotas are enabled. If only is specified, only group quotas are enabled. + +
+ +
+ Reporting on Disk Quotas + + disk quotas + management of + reporting + + + Creating a disk usage report entails running the repquota utility. For example, the command repquota /home produces this output: + + + +*** Report for user quotas on device /dev/mapper/VolGroup00-LogVol02 +Block grace time: 7days; Inode grace time: 7days + Block limits File limits +User used soft hard grace used soft hard grace +---------------------------------------------------------------------- +root -- 36 0 0 4 0 0 +kristin -- 540 0 0 125 0 0 +testuser -- 440400 500000 550000 37418 0 0 + + To view the disk usage report for all (option ) quota-enabled file systems, use the command: + + +repquota -a + + + While the report is easy to read, a few points should be explained. The -- displayed after each user is a quick way to determine whether the block or inode limits have been exceeded. If either soft limit + is exceeded, a + appears in place of the corresponding -; the first - represents the block limit, and the + second represents the inode limit. + + + + The grace columns are normally blank. If a soft limit has been exceeded, the column contains a time specification equal to the amount of time remaining on the grace period. If the grace period has + expired, none appears in its place. + +
+ + +
+
+References + + disk quotas + additional resources + + + For more information on disk quotas, refer to the man pages of the following commands: + + + +quotacheck +edquota +repquota +quota +quotaon +quotaoff + + +
+
diff --git a/en-US/DG_Filesys-Ext3.xml b/en-US/DG_Filesys-Ext3.xml new file mode 100644 index 0000000..b25d437 --- /dev/null +++ b/en-US/DG_Filesys-Ext3.xml @@ -0,0 +1,291 @@ + + +%RH_ENTITIES; + +]> + +The Ext3 File System + + file systems + ext3 + ext3 + + + + + ext3 + features + + + The ext3 file system is essentially an enhanced version of the ext2 file system. These improvements provide the following advantages: + + + + + Availability + + + After an unexpected power failure or system crash (also called an unclean system +shutdown), each mounted ext2 file system on the machine must be checked for consistency by the e2fsck program. + This is a time-consuming process that can delay system boot time significantly, especially with large volumes +containing a large number of files. During this time, any data on the volumes is unreachable. + + + + The journaling provided by the ext3 file system means that this sort of file system check is no longer necessary +after an unclean system shutdown. The only time a consistency check occurs using ext3 is in certain rare hardware failure cases, such as + hard drive failures. The time to recover an ext3 file system after an unclean system shutdown does not depend on +the size of the file system or the number of files; rather, it depends on the size of the journal used to maintain + consistency. The default journal size takes about a second to recover, depending on the speed of the hardware. + + + + + + Data Integrity + + + The ext3 file system prevents loss of data integrity in the event that an unclean system shutdown occurs. The +ext3 file system allows you to choose the type and level of protection that your data receives. By default, the ext3 volumes are configured to keep a high level +of data consistency with regard to the state of the file system. + + + + + + Speed + + + Despite writing some data more than once, ext3 has a higher throughput in most cases than ext2 because ext3's +journaling optimizes hard drive head motion. You can choose from three journaling modes to optimize speed, but doing so means trade-offs in regards to data +integrity if the system was to fail. + + + + + + Easy Transition + + + It is easy to migrate from ext2 to ext3 and gain the benefits of a robust journaling file system without +reformatting. Refer to for more on how to perform this task. + + + + + + +The Fedora 13 version of ext3 features the following updates: + + + +Default Inode Sizes Changed + +The default size of the on-disk inode has increased for more efficient +storage of extended attributes, for example ACLs or SELinux attributes. +Along with this change, the default number of inodes created on a +file system of a given size has been decreased. The inode size may be +selected with the mke2fs -I option, or specified in /etc/mke2fs.conf to set system-wide defaults for mke2fs. + + + + + + +If you upgrade to Fedora 13 with the intention of keeping any ext3 file systems intact, you do not need to remake the file system. + + + + +New Mount Option: data_err + +A new mount option has been added: data_err=abort. This option instructs ext3 to abort the journal if an error occurs in a file data (as opposed to metadata) buffer in data=ordered mode. This option is disabled by default (i.e. set as data_err=ignore). + + + +More Efficient Storage Use + +When creating a file system (i.e. mkfs), mke2fs will attempt to "discard" or "trim" blocks not used by the file system metadata. This helps to optimize SSDs or thinly-provisioned storage. 
To suppress this behavior, use the +mke2fs -K option. + + + + + The following sections walk you through the steps for creating and tuning ext3 partitions. For ext2 partitions, skip the partitioning +and formatting sections below and go directly to . + + + + +
+ Creating an Ext3 File System + + ext3 + creating + + + After installation, it is sometimes necessary to create a new ext3 file system. For example, if you add a new disk drive to the system, you may want to partition the drive and use the ext3 file system. + + + + The steps for creating an ext3 file system are as follows: + + + + + + + Format the partition with the ext3 file system using mkfs. + + + + + + Label the file system using e2label. + + + + + +
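A brief sketch of these two steps follows; /dev/sdb1 is a hypothetical partition and the label /work is arbitrary, so substitute values appropriate to your system:

mkfs -t ext3 /dev/sdb1
e2label /dev/sdb1 /work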
+
+ Converting to an Ext3 File System + + tune2fs + + converting to ext3 with + + + ext3 + converting from ext2 + + + The tune2fs allows you to convert an ext2 file system to ext3. + + + + Note + + + Always use the e2fsck utility to check your file system before and after using tune2fs. + + + A default installation of Fedora 13 uses ext4 for all file systems. + + + + + To convert an ext2 file system to ext3, log in as root and type the following command in a terminal: + + +tune2fs -j block_device + + + where block_device contains the ext2 file system you wish to convert. + + + + A valid block device could be one of two types of entries: + + + + + + A mapped device — A logical volume in a volume group, for example, /dev/mapper/VolGroup00-LogVol02. + + + + + + A static device — A traditional storage volume, for example, /dev/sdbX, where sdb is a storage device name and + X is the partition number. + + + + + + Issue the df command to display mounted file systems. + + + + + /etc/fstab + + + +
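For instance, reusing the mapped device shown above and following the note about running e2fsck before and after the conversion (the file system should be unmounted while it is checked):

e2fsck /dev/mapper/VolGroup00-LogVol02
tune2fs -j /dev/mapper/VolGroup00-LogVol02
e2fsck /dev/mapper/VolGroup00-LogVol02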
+
+ Reverting to an Ext2 File System + + tune2fs + + reverting to ext2 with + + + file systems + ext2 + ext2 + + + ext2 + reverting from ext3 + + + resize2fs + + + + e2fsck + + + + +For simplicity, the sample commands in this section use the following value for the block device: + + +/dev/mapper/VolGroup00-LogVol02 + + + + If you wish to revert a partition from ext3 to ext2 for any reason, you must first unmount the partition by logging in as root and typing, + + +umount /dev/mapper/VolGroup00-LogVol02 + + + Next, change the file system type to ext2 by typing the following command as root: + + +tune2fs -O ^has_journal /dev/mapper/VolGroup00-LogVol02 + + + Check the partition for errors by typing the following command as root: + + +e2fsck -y /dev/mapper/VolGroup00-LogVol02 + + + Then mount the partition again as ext2 file system by typing: + + +mount -t ext2 /dev/mapper/VolGroup00-LogVol02 /mount/point + + + In the above command, replace /mount/point with the mount point of the partition. + + + +If a .journal file exists at the root level of the partition, delete it. + + + + + You now have an ext2 partition. + + + + If you want to permanently change the partition to ext2, remember to update the /etc/fstab file. + +
+
diff --git a/en-US/DG_Filesys-File_System.xml b/en-US/DG_Filesys-File_System.xml new file mode 100644 index 0000000..daed06a --- /dev/null +++ b/en-US/DG_Filesys-File_System.xml @@ -0,0 +1,682 @@ + + +%RH_ENTITIES; + +]> + +File System Structure + + file system + structure + +
+ Why Share a Common Structure? + + + The file system structure is the most basic level of organization in an operating system. Almost all of the ways an operating system interacts with its users, applications, and security model are dependent on how the operating system organizes files on storage devices. Providing a common file system structure ensures users and programs can access and write files. + + + + File systems break files down into two logical categories: + + + + + + Shareable vs. unsharable files + + + + + + Variable vs. static files + + + + + + Shareable files can be accessed locally and by remote hosts; unsharable files are only available locally. Variable files, such as documents, can be changed at any time; static files, such as binaries, do not change without an action from the system administrator. + + + + Categorizing files in this manner helps correlate the function of each file with the permissions assigned to the directories which hold them. How the operating system and its users interact with a file determines the directory in which it is placed, whether that directory is mounted with read-only or read/write permissions, and the level of access each user has to that file. The top level of this organization is crucial; access to the underlying directories can be restricted, otherwise security problems could arise if, from the top level down, access rules do not adhere to a rigid structure. + + +
+
+ Overview of File System Hierarchy Standard (FHS) + + file system + hierarchy + + + FHS + file system + + + hierarchy, file system + + + Fedora uses the Filesystem Hierarchy Standard (FHS) file system structure, which defines the names, locations, and permissions for many file types and directories. + + + + The FHS document is the authoritative reference to any FHS-compliant file system, but the standard leaves many areas undefined or extensible. This section is an overview of the standard and a description of the parts of the file system not covered by + the standard. + + + +The two most important elements of FHS compliance are: + + + +Compatibility with other FHS-compliant systems +The ability to mount a /usr/ partition as read-only. This is +especially crucial, since /usr/ contains common executables and should not be changed by users. In addition, since +/usr/ is mounted as read-only, it should be mountable from the CD-ROM drive or from another machine via a read-only NFS mount. + + + + +
+ FHS Organization + + file system + FHS standard + + + file system + organization + + + FHS + file system + + + The directories and files noted here are a small subset of those specified by the FHS document. Refer to the latest FHS document for the most complete information at . + + + + +
+ Gathering File System Information + + + + system information + file systems + + + file systems + + + df + + + + + The df command reports the system's disk space usage. Its output looks similar to the following: + + + +Filesystem 1K-blocks Used Available Use% Mounted on +/dev/mapper/VolGroup00-LogVol00 + 11675568 6272120 4810348 57% / /dev/sda1 + 100691 9281 86211 10% /boot +none 322856 0 322856 0% /dev/shm + + + By default, df shows the partition size in 1 kilobyte blocks and the amount of used/available disk space in kilobytes. To view the information in megabytes and gigabytes, use the command df -h. The + -h argument stands for "human-readable" format. The output for df -h looks similar to the following: + + +Filesystem Size Used Avail Use% Mounted on +/dev/mapper/VolGroup00-LogVol00 + 12G 6.0G 4.6G 57% / /dev/sda1 + 99M 9.1M 85M 10% /boot +none 316M 0 316M 0% /dev/shm + + + system information + file systems + /dev/shm + + + + /dev/shm + + + + +The mounted partition /dev/shm represents the system's virtual memory file system. + + + + + + + du + + + + + + The du command displays the estimated amount of space being used by files in a directory, displaying the disk usage +of each subdirectory. The last line in the output of du shows the total disk usage of the directory; to see only the total disk usage of a directory in human-readable format, use du -hs. For more options, refer to man du. + + + + To view the system's partitions and disk space usage in a graphical format, use the Gnome System Monitor by clicking on Applications > System Tools > System Monitor or using the command gnome-system-monitor. Select the File Systems tab to view the system's partitions. The figure below illustrates the File Systems tab. + + + + + +
<application moreinfo="none">GNOME System Monitor File Systems tab</application> + + + + + File systems tab of the gnome-system-monitor + + + +
+ + + + + +
+ + +
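For instance, to see at a glance how much space a single directory tree consumes with du, as described above (any path works; /var/log is simply an example):

du -hs /var/log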
+ The <filename moreinfo="none">/boot/</filename> Directory + + /boot/ directory + + + directories + /boot/ + + + + The /boot/ directory contains static files required to boot the system, e.g. the Linux kernel. These files are essential for the system to boot properly. + + + + Warning + + + Do not remove the /boot/ directory. Doing so renders the system unbootable. + + +
+ + +
+ The <filename moreinfo="none">/dev/</filename> Directory + + dev directory + + + directories + /dev/ + + + + The /dev/ directory contains device nodes that represent the following device types: + + + +Devices attached to the system +Virtual devices provided by the kernel + + + +These device nodes are essential for the system to function properly. The udevd daemon creates and removes device nodes in /dev/ as needed. + + + Devices in the /dev/ directory and subdirectories are either character (providing only a serial stream of input/output, e.g. mouse or keyboard) or block (accessible randomly, e.g. hard drive, floppy drive). If you have GNOME or KDE installed, some storage devices are automatically detected when connected (e.g via USB) or inserted (e.g via CD or DVD drive), and a popup window displaying the contents appears. + + + Examples of common files in the <filename>/dev</filename> + + + + + File + + + Description + + + + + + + /dev/hda + + + The master device on primary IDE channel. + + + + + /dev/hdb + + + The slave device on primary IDE channel. + + + + + /dev/tty0 + + + The first virtual console. + + + + + /dev/tty1 + + + The second virtual console. + + + + + /dev/sda + + + The first device on primary SCSI or SATA channel. + + + + + /dev/lp0 + + + The first parallel port. + + + + +
+
+ +
+ The <filename moreinfo="none">/etc/</filename> Directory + + etc directory + + + directories + /etc/ + + + + The /etc/ directory is reserved for configuration files that are local to the machine. It should contain no binaries; any binaries should be moved to /bin/ or /sbin/. + + + + + +For example, the /etc/skel/ directory stores "skeleton" user files, which are used to populate a home directory when a user is first created. Applications also store their configuration files in this directory and may reference them when executed. The /etc/exports file controls which file systems to export to remote hosts. + + + +
+ +
+ The <filename moreinfo="none">/lib/</filename> Directory + + lib directory + + + directories + /lib/ + + + + The /lib/ directory should only contain libraries needed to execute the binaries in /bin/ and /sbin/. These shared library images are +used to boot the system or execute commands within the root file system. + +
+ +
+ The <filename moreinfo="none">/media/</filename> Directory + + media directory + + + directories + /media/ + + + + The /media/ directory contains subdirectories used as mount points for removable media such as USB storage media, DVDs, CD-ROMs, and Zip disks. +
+ +
+ The <filename moreinfo="none">/mnt/</filename> Directory + + mnt directory + + + directories + /mnt/ + + + + The /mnt/ directory is reserved for temporarily mounted file systems, such as NFS file system mounts. For all removable storage media, use the /media/ directory. Automatically detected removable media will be mounted in the /media/ directory. + + + + Note + + + The /mnt directory must not be used by installation programs. + + +
+ +
+ The <filename moreinfo="none">/opt/</filename> Directory + + opt directory + + + directories + /opt/ + + + + +The /opt/ directory is normally reserved for software and add-on packages that are not part of the default installation. A package that installs to /opt/ creates a directory bearing its name, e.g. /opt/packagename/. In most cases, such packages follow a predictable subdirectory structure; most store their binaries in /opt/packagename/bin/ and their man pages in /opt/packagename/man/, and so on. + + + + +
+ +
+ The <filename moreinfo="none">/proc/</filename> Directory + + proc directory + + + directories + /proc/ + + + + The /proc/ directory contains special files that either extract information from the kernel or send information to it. Examples of such information include system memory, cpu information, and hardware configuration. For more information about /proc/, refer to . + +
+ +
+ The <filename moreinfo="none">/sbin/</filename> Directory + + sbin directory + + + directories + /sbin/ + + + + The /sbin/ directory stores binaries essential for booting, restoring, recovering, or repairing the system. The binaries in /sbin/ require root privileges to use. In addition, /sbin/ contains binaries used by the system before the /usr/ directory is mounted; any system utilities used after /usr/ is mounted are typically placed in /usr/sbin/. + + + + + At a minimum, the following programs should be stored in /sbin/: + + +arp +clock +halt +init +fsck.* +grub +ifconfig +mingetty +mkfs.* +mkswap +reboot +route +shutdown +swapoff +swapon + + +
+ +
+ The <filename moreinfo="none">/srv/</filename> Directory + + srv directory + + + directories + /srv/ + + + + The /srv/ directory contains site-specific data served by a Fedora system. This directory gives users the location of data files for a particular service, such as FTP, WWW, or CVS. Data that only pertains to a specific user should go in the /home/ directory. + + +
+ +
+ The <filename moreinfo="none">/sys/</filename> Directory + + sys directory + + + directories + /sys/ + + + + The /sys/ directory utilizes the new sysfs virtual file system specific to the 2.6 kernel. With the increased support for hot plug hardware devices in the 2.6 kernel, the /sys/ directory contains information similar to that held by /proc/, but displays a hierarchical view of device information specific to hot plug devices. + + + +
+ +
+ The <filename moreinfo="none">/usr/</filename> Directory + + usr directory + + + directories + /usr/ + + + + The /usr/ directory is for files that can be shared across multiple machines. The /usr/ directory is often on its own partition and is mounted read-only. At a minimum, /usr/ should contain the following subdirectories: + + +/usr/bin, used for binaries +/usr/etc, used for system-wide configuration files +/usr/games +/usr/include, used for C header files +/usr/kerberos, used for Kerberos-related binaries and files +/usr/lib, used for object files and libraries that are not designed to be directly utilized by shell scripts or users +/usr/libexec, contains small helper programs called by other programs + +/usr/sbin, stores system administration binaries that do not belong to /sbin/ +/usr/share, stores files that are not architecture-specific +/usr/src, stores source code +/usr/tmp -> /var/tmp + + + +The /usr/ directory should also contain a /local/ subdirectory. As per the FHS, this subdirectory is used by the system administrator when installing software +locally, and should be safe from being overwritten during system updates. The /usr/local directory has a structure similar to /usr/, and contains the following subdirectories: + + + +/usr/local/bin +/usr/local/etc +/usr/local/games +/usr/local/include +/usr/local/lib +/usr/local/libexec +/usr/local/sbin +/usr/local/share +/usr/local/src + + + +Fedora's usage of /usr/local/ differs slightly from the FHS. The FHS states that /usr/local/ should be used to store software +that should remain safe from system software upgrades. Since the RPM Package Manager can perform software upgrades safely, it is not necessary to protect files by storing them in /usr/local/. + + + + +Instead, Fedora uses /usr/local/ for software local to the machine. + For instance, if the /usr/ directory is mounted as a read-only NFS share from a remote host, it is still possible to install a package or program under the /usr/local/ directory. + + + +
+ + +
+ The <filename moreinfo="none">/var/</filename> Directory + + var directory + + + directories + /var/ + + + + Since the FHS requires Linux to mount /usr/ as read-only, any programs that write log files or need spool/ or lock/ directories should write them to the /var/ directory. The FHS states /var/ is for variable data files, which include spool directories/files, logging data, transient/temporary files, and the like. + + + + Below are some of the directories found within the /var/ directory: + + + +/var/account/ +/var/arpwatch/ +/var/cache/ +/var/crash/ +/var/db/ +/var/empty/ +/var/ftp/ +/var/gdm/ +/var/kerberos/ +/var/lib/ +/var/local/ +/var/lock/ +/var/log/ +/var/mail -> /var/spool/mail/ +/var/mailman/ +/var/named/ +/var/nis/ +/var/opt/ +/var/preserve/ +/var/run/ +/var/spool/ + +/var/tmp/ +/var/tux/ +/var/www/ +/var/yp/ + + + + System log files, such as messages and lastlog, go in the /var/log/ directory. The /var/lib/rpm/ directory contains RPM system databases. Lock files go in the /var/lock/ directory, usually in directories for the program using the file. The /var/spool/ directory has subdirectories +that store data files for some programs. These subdirectories include: + + + +/var/spool/at/ +/var/spool/clientmqueue/ +/var/spool/cron/ +/var/spool/cups/ +/var/spool/exim/ +/var/spool/lpd/ +/var/spool/mail/ +/var/spool/mailman/ +/var/spool/mqueue/ +/var/spool/news/ +/var/spool/postfix/ +/var/spool/repackage/ +/var/spool/rwho/ +/var/spool/samba/ +/var/spool/squid/ +/var/spool/squirrelmail/ +/var/spool/up2date/ +/var/spool/uucp +/var/spool/uucppublic/ +/var/spool/vbox/ + + +
+
+
+ +
+ Special Fedora File Locations + + Fedora-specific file locations + /var/lib/rpm/ + + + + Fedora-specific file locations + /etc/sysconfig/ + + sysconfig directory + + + Fedora-specific file locations + /var/cache/yum + + + + sysconfig directory + + + var/lib/rpm/ directory + + + var/spool/up2date/ directory + + + Fedora extends the FHS structure slightly to accommodate special files. + + + + Most files pertaining to RPM are kept in the /var/lib/rpm/ directory. For more information on RPM, refer to man rpm. + + + + The /var/cache/yum/ directory contains files used by the Package Updater, including RPM header information for the system. This location may also be used to temporarily store RPMs downloaded while updating the system. For more information about &RH; Network, refer to the documentation online at https://rhn.redhat.com/. + + + + Another location specific to Fedora is the /etc/sysconfig/ directory. This directory stores a variety of configuration information. Many scripts that run at boot time use the files in this + directory. + +
+ +
diff --git a/en-US/DG_Filesys-Lvm.xml b/en-US/DG_Filesys-Lvm.xml new file mode 100644 index 0000000..0854ff4 --- /dev/null +++ b/en-US/DG_Filesys-Lvm.xml @@ -0,0 +1,724 @@ + + +%RH_ENTITIES; + +]> + +LVM (Logical Volume Manager)<remark> [screenshots may need updating]</remark><remark></remark> + + LVM + + + + + + LVM + explanation of + + + LVM is a tool for logical volume management which includes allocating disks, striping, mirroring and resizing logical volumes. + + + LVM + physical volume + + + physical volume + + + + With LVM, a hard drive or set of hard drives is allocated to one or more physical volumes. LVM physical volumes can be placed on other block devices which might span two or more disks. + + + + The physical volumes are combined into logical volumes, with the exception of the /boot/ partition. The /boot/ partition cannot be on a logical volume + group because the boot loader cannot read it. If the root (/) partition is on a logical volume, create a separate /boot/ partition which is not a part of a volume group. + + + + Since a physical volume cannot span over multiple drives, to span over more than one drive, create one or more physical volumes per drive. + + + + LVM + logical volumes + + + logical volumes + + + volume group + +
Logical Volumes + + + + + LVM Group + + + +
+ + LVM + logical volume + + + logical volume + + + The volume groups can be divided into logical volumes, which are assigned mount points, such as /home and / and file system types, such as ext2 or ext3. + When "partitions" reach their full capacity, free space from the volume group can be added to the logical volume to increase the size of the partition. When a new hard drive is added to the system, it can be added to the volume group, + and partitions that are logical volumes can be increased in size. + +
Logical Volumes + + + + + Logical Volumes + + + +
+ + On the other hand, if a system is partitioned with the ext3 file system, the hard drive is divided into partitions of defined sizes. If a partition becomes full, it is not easy to expand the size of the partition. Even if the partition is moved to + another hard drive, the original hard drive space has to be reallocated as a different partition or not used. + + + + + + + +This chapter on LVM/LVM2 focuses on the use of the LVM GUI administration tool, i.e. system-config-lvm. For +comprehensive information on the creation and configuration of LVM partitions in clustered and non-clustered storage, +please refer to the Logical Volume Manager Administration guide also provided by &RH;. + + + +In addition, the Installation Guide for Fedora 13 also documents how to +create and configure LVM logical volumes during installation. For more information, refer to the Create LVM Logical Volume +section of the Installation Guide for Fedora 13. + + + + + + +
+ What is LVM2? + + LVM2 + explanation of + + + LVM version 2, or LVM2, is the default in Fedora; it uses the device mapper driver contained in the 2.6 kernel. Systems that ran LVM on earlier releases with the 2.4 kernel can be upgraded to LVM2. + +
+ + + + + +
+ Using <filename>system-config-lvm</filename> + + LVM + system-config-lvm + + + The LVM utility allows you to manage logical volumes within X windows or graphically. You can access the application by selecting from your menu panel System > Administration > Logical Volume Management. Alternatively you can start the Logical Volume Management utility by typing system-config-lvm from a terminal. + + + In the example used in this section, the following are the details for the volume group that was created during the installation: + + +/boot - (Ext3) file system. Displayed under 'Uninitialized Entities'. (DO NOT initialize this partition). +LogVol00 - (LVM) contains the (/) directory (312 extents). +LogVol02 - (LVM) contains the (/home) directory (128 extents). +LogVol03 - (LVM) swap (28 extents). + + + + The logical volumes above were created in disk entity /dev/hda2 while /boot was created in /dev/hda1. The system also consists of 'Uninitialised Entities' which are illustrated in . The figure below illustrates the main window in the LVM utility. The logical and the physical views of the above configuration are illustrated below. The three logical volumes exist on the same physical volume (hda2). + +
Main LVM Window + + + + + Main LVM Window + + + +
+ + + The figure below illustrates the physical view for the volume. In this window, you can select and remove a volume from the volume group or migrate extents from the volume to another volume group. Steps to migrate extents are discussed in . + + + +
Physical View Window + + + + + Physical View Window + + + +
+ + + The figure below illustrates the logical view for the selected volume group. The logical volume size is also indicated with the individual logical volume sizes illustrated. + +
Logical View Window + + + + + Logical View Window + + + +
+ + + + On the left side column, you can select the individual logical volumes in the volume group to view more details about each. In this example, the objective is to rename the logical volume 'LogVol03' to 'Swap'. To perform this operation, select the respective logical volume and click on the Edit Properties button. + This will display the Edit Logical Volume window from which you can modify the Logical volume name, size (in extents), and also use the remaining space available in a logical volume group. The figure below illustrates this. + + + Please note that this logical volume cannot be changed in size as there is currently no free space in the volume group. If there were remaining space, this option would be enabled (see ). + Click on the OK button to save your changes (this will remount the volume). To cancel your changes, click on the Cancel button. To revert to the last snapshot settings, click on the Revert button. A snapshot can be created by clicking on the Create Snapshot button on the LVM utility window. If the selected logical volume is in use by the system (for example, the / (root) directory), this task will not be successful as the volume cannot be unmounted. + +
Edit Logical Volume + + + + + Edit Logical Volume + + + +
+ + + + +
+ Utilizing Uninitialized Entities + +LVM +uninitialized entries + + + +uninitialized entries +LVM + + + + +LVM +unallocated volumes + + + +unallocated volumes +LVM + + + + + + 'Uninitialized Entities' consist of unpartitioned space and non LVM file systems. In this example partitions 3, 4, 5, 6 and 7 were created during installation and some unpartitioned space was left on the hard disk. Please view each partition and ensure that you read the 'Properties for Disk Entity' on the right column of the window to ensure that you do not delete critical data. In this example partition 1 cannot be initialized as it is /boot. Uninitialized entities are illustrated below. + +
Uninitialized Entities + + + + + Uninitialized Entities + + + +
+ + + In this example, partition 3 will be initialized and added to an existing volume group. To initialize a partition or unpartitioned space, select the partition and click on the Initialize Entity button. Once initialized, a volume will be listed in the 'Unallocated Volumes' list. +
+ + +
+ Adding Unallocated Volumes to a Volume Group + + + Once initialized, a volume will be listed in the 'Unallocated Volumes' list. The figure below illustrates an unallocated partition (Partition 3). The respective buttons at the bottom of the window allow you to: + + + + create a new volume group, + + + + + + add the unallocated volume to an existing volume group, + + + + + remove the volume from LVM. + + + + + To add the volume to an existing volume group, click on the Add to Existing Volume Group button. + + +
Unallocated Volumes + + + + + Unallocated Volumes + + + +
+ + + Clicking on the Add to Existing Volume Group button will display a pop up window listing the existing volume groups to which you can add the physical volume you are about to initialize. A volume group may span across one or more hard disks. In this example only one volume group exists as illustrated below. + + +
Add physical volume to volume group + + + + + Add physical volume to volume group + + + +
+ + + + Once added to an existing volume group the new logical volume is automatically added to the unused space of the selected volume group. You can use the unused space to: + + + + + create a new logical volume (click on the Create New Logical Volume(s) button, + + + + + + select one of the existing logical volumes and increase the extents (see ), + + + + + + select an existing logical volume and remove it from the volume group by clicking on the Remove Selected Logical Volume(s) button. Please note that you cannot select unused space to perform this operation. + + + + + + The figure below illustrates the logical view of 'VolGroup00' after adding the new volume group. + + + +
Logical view of volume group + + + + + Logical view of volume group + + + +
+ + + In the figure below, the uninitialized entities (partitions 3, 5, 6 and 7) were added to 'VolGroup00'. + + +
Logical view of volume group + + + + + Logical view of volume group + + + +
+
+ + +
+ Migrating Extents + + + +LVM +migrating extents + + + +migrating extents +LVM + + + + + + +LVM +extents, migration of + + + +extents, migration of +LVM + + + + To migrate extents from a physical volume, select the volume and click on the Migrate Selected Extent(s) From Volume button. Please note that you need to have a sufficient number of free extents to migrate extents within a volume group. An error message will be displayed if you do not have a sufficient number of free extents. To resolve this problem, please extend your volume group (see ). + If a sufficient number of free extents is detected in the volume group, a pop up window will be displayed from which you can select the destination for the extents or automatically let LVM choose the physical volumes (PVs) to migrate them to. This is illustrated below. + + +
Migrate Extents + + + + + Migrate Extents + + + +
+ + + The figure below illustrates a migration of extents in progress. In this example, the extents were migrated to 'Partition 3'. + +
Migrating extents in progress + + + + + Migrating extents in progress + + + +
+ + + + Once the extents have been migrated, unused space is left on the physical volume. The figure below illustrates the physical and logical view for the volume group. Please note that the extents of LogVol00 which were initially in hda2 are now in hda3. Migrating extents allows you to move logical volumes in case of hard disk upgrades or to manage your disk space better. + + +
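+ From the command line, the same migration is performed with pvmove. A minimal sketch, assuming the extents are moved off /dev/hda2 and onto /dev/hda3 (device names taken from this example):
+
+pvmove /dev/hda2 /dev/hda3
+
+ If the destination is omitted, pvmove lets LVM place the extents on any physical volume in the volume group with enough free space, which corresponds to letting LVM choose the destination automatically in the pop-up window described above.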
Logical and physical view of volume group + + + + + Logical and physical view of volume group + + + +
+ +
+ + +
+ Adding a New Hard Disk Using LVM
+
+LVM
+new hard disk, adding a
+
+new hard disk, adding a
+LVM
+
+ In this example, a new IDE hard disk was added. The figure below illustrates the details for the new hard disk. As shown in the figure, the disk is uninitialized and not mounted. To initialize a partition, click on the Initialize Entity button. For more details, see . Once initialized, LVM will add the new volume to the list of unallocated volumes as illustrated in .
+
Uninitialized hard disk + + + + + Uninitialized hard disk + + + +
+
+ + +
+ Adding a New Volume Group
+
+LVM
+adding a new volume group
+
+adding a new volume group
+LVM
+
+ Once initialized, LVM will add the new volume to the list of unallocated volumes, where you can add it to an existing volume group or create a new volume group. You can also remove the volume from LVM. If removed from LVM, the volume will appear in the list of 'Uninitialized Entities' as illustrated in . In this example, a new volume group was created as illustrated below.
+
Create new volume group + + + + + Create new volume group + + + +
+ Once created, the new volume group will be displayed in the list of existing volume groups as illustrated below. The logical view will display the new volume group as unused space, since no logical volumes have been created. To create a logical volume, select the volume group and click on the Create New Logical Volume button as illustrated below. Select the extents you wish to use on the volume group. In this example, all the extents in the volume group were used to create the new logical volume.
+
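+ The equivalent command-line workflow uses pvcreate, vgcreate, and lvcreate. A minimal sketch, assuming the new disk's first partition is /dev/hdb1 and using a hypothetical volume group name backup_vg (the example does not state the group name chosen in the GUI):
+
+pvcreate /dev/hdb1
+vgcreate backup_vg /dev/hdb1
+lvcreate -l 100%FREE -n Backups backup_vg
+
+ The -l 100%FREE option mirrors the example above, in which all extents in the volume group are given to the new logical volume named 'Backups'.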
Create new logical volume + + + + + Create new logical volume + + + +
+ + + + The figure below illustrates the physical view of the new volume group. The new logical volume named 'Backups' in this volume group is also listed. + +
Physical view of new volume group + + + + + Physical view of new volume group + + + +
+ +
+ + +
+ Extending a Volume Group + + + +LVM +extending a volume group + + + +extending a volume group +LVM + + + + In this example, the objective was to extend the new volume group to include an uninitialized entity (partition). This was to increase the size or number of extents for the volume group. + To extend the volume group, click on the Extend Volume Group button. This will display the 'Extend Volume Group' window as illustrated below. On the 'Extend Volume Group' window, you can select disk entities (partitions) to add to the volume group. Please ensure that you check the contents of any 'Uninitialized Disk Entities' (partitions) to avoid deleting any critical data (see ). In the example, the disk entity (partition) /dev/hda6 was selected as illustrated below. + + +
Select disk entities + + + + + Select disk entities + + + +
+ + + Once added, the new volume will be added as 'Unused Space' in the volume group. The figure below illustrates the logical and physical view of the volume group after it was extended. + +
Logical and physical view of an extended volume group + + + + + Logical and physical view of an extended volume group + + + +
+
+ +
+ Editing a Logical Volume
+
+LVM
+editing a logical volume
+
+editing a logical volume
+LVM
+
+LVM
+logical volume, editing a
+
+logical volume, editing a
+LVM
+
+ The LVM utility allows you to select a logical volume in the volume group, modify its name and size, and specify file system options. In this example, the logical volume named 'Backups' was extended onto the remaining space of the volume group.
+
+ Clicking on the Edit Properties button will display the 'Edit Logical Volume' popup window, from which you can edit the properties of the logical volume. On this window, you can also choose to mount the volume after the changes are made and to mount it when the system is rebooted. Note that you must indicate the mount point. If the mount point you specify does not exist, a popup window will be displayed prompting you to create it. The 'Edit Logical Volume' window is illustrated below.
+
Edit logical volume + + + + + Edit logical volume + + + +
+ + + + If you wish to mount the volume, select the 'Mount' checkbox indicating the preferred mount point. To mount the volume when the system is rebooted, select the 'Mount when rebooted' checkbox. In this example, the new volume will be mounted in /mnt/backups. This is illustrated in the figure below. + + +
Edit logical volume - specifying mount options + + + + + Edit logical volume - specifying mount options + + + +
+ The figure below illustrates the logical and physical view of the volume group after the logical volume was extended to the unused space. Note that in this example the logical volume named 'Backups' spans across two hard disks. A volume can be striped across two or more physical devices using LVM.
+
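+ From the command line, extending a logical volume is typically a two-step operation: grow the volume, then grow the file system on it. A minimal sketch, assuming the hypothetical names from the previous example and an ext3 file system on the volume:
+
+lvextend -l +100%FREE /dev/backup_vg/Backups
+resize2fs /dev/backup_vg/Backups
+
+ The -l +100%FREE option adds all remaining free extents in the volume group to the logical volume; resize2fs then expands the ext3 file system to fill the enlarged volume.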
Edit logical volume + + + + + Edit logical volume + + + +
+ +
+ + +
+ + + + + + +
+References + + LVM + additional resources + + + Use these sources to learn more about LVM. + + + + +LVM +documentation + + + +documentation +LVM + + + Installed Documentation + + + + + rpm -qd lvm2 — This command shows all the documentation available from the lvm package, including man pages. + + + + + + lvm help — This command shows all LVM commands available. + + + + + + + Useful Websites + + + + + http://sources.redhat.com/lvm2 — LVM2 webpage, which contains an overview, link to the mailing lists, and more. + + + + + + http://tldp.org/HOWTO/LVM-HOWTO/ — LVM HOWTO from the Linux Documentation Project. + + + + + +
+ +
diff --git a/en-US/DG_Filesys-Lvm.xml.backup b/en-US/DG_Filesys-Lvm.xml.backup new file mode 100644 index 0000000..26549d0 --- /dev/null +++ b/en-US/DG_Filesys-Lvm.xml.backup @@ -0,0 +1,671 @@ + + +%RH_ENTITIES; + +]> + +LVM (Logical Volume Manager)<remark> [screenshots may need updating]</remark><remark></remark> + + LVM + + + + + + LVM + explanation of + + + LVM is a tool for logical volume management which includes allocating disks, striping, mirroring and resizing logical volumes. + + + LVM + physical volume + + + physical volume + + + + With LVM, a hard drive or set of hard drives is allocated to one or more physical volumes. LVM physical volumes can be placed on other block devices which might span two or more disks. + + + + The physical volumes are combined into logical volumes, with the exception of the /boot/ partition. The /boot/ partition cannot be on a logical volume + group because the boot loader cannot read it. If the root (/) partition is on a logical volume, create a separate /boot/ partition which is not a part of a volume group. + + + + Since a physical volume cannot span over multiple drives, to span over more than one drive, create one or more physical volumes per drive. + + + + LVM + logical volumes + + + logical volumes + + + volume group + +
Logical Volumes + + + + + LVM Group + + + +
+ + LVM + logical volume + + + logical volume + + + The volume groups can be divided into logical volumes, which are assigned mount points, such as /home and / and file system types, such as ext2 or ext3. + When "partitions" reach their full capacity, free space from the volume group can be added to the logical volume to increase the size of the partition. When a new hard drive is added to the system, it can be added to the volume group, + and partitions that are logical volumes can be increased in size. + +
Logical Volumes + + + + + Logical Volumes + + + +
+ + On the other hand, if a system is partitioned with the ext3 file system, the hard drive is divided into partitions of defined sizes. If a partition becomes full, it is not easy to expand the size of the partition. Even if the partition is moved to + another hard drive, the original hard drive space has to be reallocated as a different partition or not used. + + + + + + + +This chapter on LVM/LVM2 focuses on the use of the LVM GUI administration tool, i.e. system-config-lvm. For +comprehensive information on the creation and configuration of LVM partitions in clustered and non-clustered storage, +please refer to the Logical Volume Manager Administration guide also provided by Red Hat. + + + +In addition, the Installation Guide for Red Hat Enterprise Linux 6 also documents how to +create and configure LVM logical volumes during installation. For more information, refer to the Create LVM Logical Volume +section of the Installation Guide for Red Hat Enterprise Linux 6. + + + + + + +
+ What is LVM2? + + LVM2 + explanation of + + + LVM version 2, or LVM2, was the default for Red Hat Enterprise Linux 5, which uses the device mapper driver contained in the 2.6 kernel. LVM2 can be upgraded from versions of Red Hat Enterprise Linux running the 2.4 kernel. + + +
+ + + + + +
+ Using <filename>system-config-lvm</filename> + + LVM + system-config-lvm + + + The LVM utility allows you to manage logical volumes within X windows or graphically. You can access the application by selecting from your menu panel System > Administration > Logical Volume Management. Alternatively you can start the Logical Volume Management utility by typing system-config-lvm from a terminal. + + + In the example used in this section, the following are the details for the volume group that was created during the installation: + + +/boot - (Ext3) file system. Displayed under 'Uninitialized Entities'. (DO NOT initialize this partition). +LogVol00 - (LVM) contains the (/) directory (312 extents). +LogVol02 - (LVM) contains the (/home) directory (128 extents). +LogVol03 - (LVM) swap (28 extents). + + + + The logical volumes above were created in disk entity /dev/hda2 while /boot was created in /dev/hda1. The system also consists of 'Uninitialised Entities' which are illustrated in . The figure below illustrates the main window in the LVM utility. The logical and the physical views of the above configuration are illustrated below. The three logical volumes exist on the same physical volume (hda2). + +
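+ If you prefer the command line, the same layout information shown by the utility can be inspected with the LVM reporting commands. A minimal sketch (these commands are read-only and safe to run at any time):
+
+pvs
+vgs
+lvs
+
+ pvs lists physical volumes such as /dev/hda2, vgs summarizes each volume group and its free extents, and lvs lists logical volumes such as LogVol00 together with their sizes.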
Main LVM Window + + + + + Main LVM Window + + + +
+ + + The figure below illustrates the physical view for the volume. In this window, you can select and remove a volume from the volume group or migrate extents from the volume to another volume group. Steps to migrate extents are discussed in . + + + +
Physical View Window + + + + + Physical View Window + + + +
+ + + The figure below illustrates the logical view for the selected volume group. The logical volume size is also indicated with the individual logical volume sizes illustrated. + +
Logical View Window + + + + + Logical View Window + + + +
+ On the left side column, you can select the individual logical volumes in the volume group to view more details about each. In this example, the objective is to rename the logical volume 'LogVol03' to 'Swap'. To perform this operation, select the respective logical volume and click on the Edit Properties button.
+ This will display the Edit Logical Volume window, from which you can modify the logical volume name and size (in extents) and use any remaining space available in the volume group. The figure below illustrates this.
+
Edit Logical Volume + + + + + Edit Logical Volume + + + +
+Before committing any changes, you can create a logical volume snapshot in case you need to revert to your original settings. For more information on logical volume snapshots, refer to .
+
+ Please note that this logical volume cannot be changed in size as there is currently no free space in the volume group. If there were remaining space, this option would be enabled (see ).
+ Click on the OK button to save your changes (this will remount the volume). To cancel your changes, click on the Cancel button.
+
+
+<remark>[review!] </remark>Creating, Reverting, and Merging an LVM Snapshot
+
+You can also create a logical volume snapshot using the Create Snapshot button on the Logical Volume Management window. A snapshot is a "point-in-time" state of a logical volume, providing you with the option to revert to a logical volume's original settings when needed. To create a snapshot, select a logical volume from the LVM utility and click the Create Snapshot button ().
+
+To revert to a previous snapshot of a logical volume, select the logical volume from the LVM utility and click the Edit Properties button. This will open the Edit Logical Volume menu; from there, click the Revert button.
+
+Red Hat Enterprise Linux 6 now allows you to merge a logical volume snapshot into its original logical volume. To do this, use lvconvert --merge snapshot. When merging a single snapshot into its origin, snapshot is designated as volume_group/snapshot_name; e.g. lvconvert --merge vg00/lvol1_snap.
+
+If both the origin and snapshot volume are closed, lvconvert will merge them immediately. Otherwise, lvconvert will merge origin and snapshot once either is activated and both are closed. When merging a snapshot into an origin that cannot be closed, such as a root file system, lvconvert defers the merge until the next time the origin is activated.
+
+You can also merge multiple snapshots into their origin. To do so, first tag each snapshot to be merged using lvchange --addtag tagname snapshot (where each snapshot is designated as volume_group/snapshot_name, as in single-snapshot merges). Then, run lvconvert --merge @tagname to merge each snapshot serially into the origin.
+
+For more information about merging snapshots, refer to man lvconvert.
+
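+As a worked sketch of the merge workflow described above (the volume group and snapshot names are hypothetical):
+
+lvconvert --merge vg00/lvol1_snap
+lvchange --addtag nightly vg00/home_snap
+lvchange --addtag nightly vg00/var_snap
+lvconvert --merge @nightly
+
+The first command merges a single snapshot back into its origin; the remaining commands tag two snapshots and then merge every snapshot carrying the nightly tag, one after another.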
+ + +
+ Utilizing Uninitialized Entities + + 'Uninitialized Entities' consist of unpartitioned space and non LVM file systems. In this example partitions 3, 4, 5, 6 and 7 were created during installation and some unpartitioned space was left on the hard disk. Please view each partition and ensure that you read the 'Properties for Disk Entity' on the right column of the window to ensure that you do not delete critical data. In this example partition 1 cannot be initialized as it is /boot. Uninitialized entities are illustrated below. + +
Uninitialized Entities + + + + + Uninitialized Entities + + + +
+ In this example, partition 3 will be initialized and added to an existing volume group. To initialize a partition or unpartitioned space, select the partition and click on the Initialize Entity button. Once initialized, a volume will be listed in the 'Unallocated Volumes' list.
+
+ + +
+ Adding Unallocated Volumes to a Volume Group + + + Once initialized, a volume will be listed in the 'Unallocated Volumes' list. The figure below illustrates an unallocated partition (Partition 3). The respective buttons at the bottom of the window allow you to: + + + + create a new volume group, + + + + + + add the unallocated volume to an existing volume group, + + + + + remove the volume from LVM. + + + + + To add the volume to an existing volume group, click on the Add to Existing Volume Group button. + + +
Unallocated Volumes + + + + + Unallocated Volumes + + + +
+ + + Clicking on the Add to Existing Volume Group button will display a pop up window listing the existing volume groups to which you can add the physical volume you are about to initialize. A volume group may span across one or more hard disks. In this example only one volume group exists as illustrated below. + + +
Add physical volume to volume group + + + + + Add physical volume to volume group + + + +
+ Once added to an existing volume group, the new volume is automatically added to the unused space of the selected volume group. You can use the unused space to:
+
+ create a new logical volume (click on the Create New Logical Volume(s) button),
+
+ select one of the existing logical volumes and increase the extents (see ),
+
+ select an existing logical volume and remove it from the volume group by clicking on the Remove Selected Logical Volume(s) button. Please note that you cannot select unused space to perform this operation.
+
+ The figure below illustrates the logical view of 'VolGroup00' after adding the new volume.
+
Logical view of volume group + + + + + Logical view of volume group + + + +
+ + + In the figure below, the uninitialized entities (partitions 3, 5, 6 and 7) were added to 'VolGroup00'. + + +
Logical view of volume group + + + + + Logical view of volume group + + + +
+
+ + +
+ Migrating Extents + + + + To migrate extents from a physical volume, select the volume and click on the Migrate Selected Extent(s) From Volume button. Please note that you need to have a sufficient number of free extents to migrate extents within a volume group. An error message will be displayed if you do not have a sufficient number of free extents. To resolve this problem, please extend your volume group (see ). + If a sufficient number of free extents is detected in the volume group, a pop up window will be displayed from which you can select the destination for the extents or automatically let LVM choose the physical volumes (PVs) to migrate them to. This is illustrated below. + + +
Migrate Extents + + + + + Migrate Extents + + + +
+ + + The figure below illustrates a migration of extents in progress. In this example, the extents were migrated to 'Partition 3'. + +
Migrating extents in progress + + + + + Migrating extents in progress + + + +
+ + + + Once the extents have been migrated, unused space is left on the physical volume. The figure below illustrates the physical and logical view for the volume group. Please note that the extents of LogVol00 which were initially in hda2 are now in hda3. Migrating extents allows you to move logical volumes in case of hard disk upgrades or to manage your disk space better. + + +
Logical and physical view of volume group + + + + + Logical and physical view of volume group + + + +
+ +
+ + +
+ Adding a New Hard Disk Using LVM
+
+ In this example, a new IDE hard disk was added. The figure below illustrates the details for the new hard disk. As shown in the figure, the disk is uninitialized and not mounted. To initialize a partition, click on the Initialize Entity button. For more details, see . Once initialized, LVM will add the new volume to the list of unallocated volumes as illustrated in .
+
Uninitialized hard disk + + + + + Uninitialized hard disk + + + +
+
+ + +
+ Adding a New Volume Group
+
+ Once initialized, LVM will add the new volume to the list of unallocated volumes, where you can add it to an existing volume group or create a new volume group. You can also remove the volume from LVM. If removed from LVM, the volume will appear in the list of 'Uninitialized Entities' as illustrated in . In this example, a new volume group was created as illustrated below.
+
Create new volume group + + + + + Create new volume group + + + +
+ Once created, the new volume group will be displayed in the list of existing volume groups as illustrated below. The logical view will display the new volume group as unused space, since no logical volumes have been created. To create a logical volume, select the volume group and click on the Create New Logical Volume button as illustrated below. Select the extents you wish to use on the volume group. In this example, all the extents in the volume group were used to create the new logical volume.
+
Create new logical volume + + + + + Create new logical volume + + + +
+ + + + The figure below illustrates the physical view of the new volume group. The new logical volume named 'Backups' in this volume group is also listed. + +
Physical view of new volume group + + + + + Physical view of new volume group + + + +
+ +
+ + +
+ Extending a Volume Group + + + In this example, the objective was to extend the new volume group to include an uninitialized entity (partition). This was to increase the size or number of extents for the volume group. + To extend the volume group, click on the Extend Volume Group button. This will display the 'Extend Volume Group' window as illustrated below. On the 'Extend Volume Group' window, you can select disk entities (partitions) to add to the volume group. Please ensure that you check the contents of any 'Uninitialized Disk Entities' (partitions) to avoid deleting any critical data (see ). In the example, the disk entity (partition) /dev/hda6 was selected as illustrated below. + + +
Select disk entities + + + + + Select disk entities + + + +
+ + + Once added, the new volume will be added as 'Unused Space' in the volume group. The figure below illustrates the logical and physical view of the volume group after it was extended. + +
Logical and physical view of an extended volume group + + + + + Logical and physical view of an extended volume group + + + +
+
+ +
+ Editing a Logical Volume
+
+ The LVM utility allows you to select a logical volume in the volume group, modify its name and size, and specify file system options. In this example, the logical volume named 'Backups' was extended onto the remaining space of the volume group.
+
+ Clicking on the Edit Properties button will display the 'Edit Logical Volume' popup window, from which you can edit the properties of the logical volume. On this window, you can also choose to mount the volume after the changes are made and to mount it when the system is rebooted. Note that you must indicate the mount point. If the mount point you specify does not exist, a popup window will be displayed prompting you to create it. The 'Edit Logical Volume' window is illustrated below.
+
Edit logical volume + + + + + Edit logical volume + + + +
+ + + + If you wish to mount the volume, select the 'Mount' checkbox indicating the preferred mount point. To mount the volume when the system is rebooted, select the 'Mount when rebooted' checkbox. In this example, the new volume will be mounted in /mnt/backups. This is illustrated in the figure below. + + +
Edit logical volume - specifying mount options + + + + + Edit logical volume - specifying mount options + + + +
+ The figure below illustrates the logical and physical view of the volume group after the logical volume was extended to the unused space. Note that in this example the logical volume named 'Backups' spans across two hard disks. A volume can be striped across two or more physical devices using LVM.
+
Edit logical volume + + + + + Edit logical volume + + + +
+ +
+ + +
+ + + + + + +
+ Additional Resources + + LVM + additional resources + + + Use these sources to learn more about LVM. + + +
+ Installed Documentation + + + + + rpm -qd lvm2 — This command shows all the documentation available from the lvm package, including man pages. + + + + + + lvm help — This command shows all LVM commands available. + + + +
+ +
+ Useful Websites + + + + + http://sources.redhat.com/lvm2 — LVM2 webpage, which contains an overview, link to the mailing lists, and more. + + + + + + http://tldp.org/HOWTO/LVM-HOWTO/ — LVM HOWTO from the Linux Documentation Project. + + + +
+ +
+ +
diff --git a/en-US/DG_Filesys-Lvm.xml.march2-lvm-adminguide b/en-US/DG_Filesys-Lvm.xml.march2-lvm-adminguide new file mode 100644 index 0000000..15e95b3 --- /dev/null +++ b/en-US/DG_Filesys-Lvm.xml.march2-lvm-adminguide @@ -0,0 +1,419 @@ + + +%RH_ENTITIES; + +]> + +LVM (Logical Volume Manager) + + LVM + + + + + + logical volume + definition + + + +Volume management creates a layer of abstraction +over physical storage, allowing you to create +logical storage volumes. +This provides much greater +flexibility in a number of ways than using physical storage directly. +With a logical volume, you are not restricted to +physical disk sizes. +In addition, the hardware storage configuration is hidden +from the software so it can be resized and +moved without +stopping applications or unmounting file systems. +This can reduce operational costs. + + + +Logical volumes provide the following +advantages over using physical storage directly: + + + + +Flexible capacity + +When using logical volumes, file systems can extend across multiple disks, +since you can aggregate disks and partitions into a single logical +volume. + + + +Resizeable storage pools + +You can extend logical volumes or reduce logical volumes in +size with simple software commands, without reformatting and repartitioning +the underlying disk devices. + + + +Online data relocation + +To deploy newer, faster, or more +resilient storage subsystems, +you can move data while your system +is active. +Data can be rearranged on disks while the disks are in use. +For example, you can empty a hot-swappable +disk before removing it. + + + +Convenient device naming + +Logical storage volumes can be managed in user-defined groups, +which you can name according to your convenience. + + + +Disk striping + +You can create a logical volume that +stripes data across two or more disks. +This can dramatically increase throughput. + + + +Mirroring volumes + +Logical volumes provide a convenient way to +configure a mirror for your data. + + + +Volume Snapshots + +Using logical volumes, you can take +device snapshots for consistent backups or +to test the effect of changes without affecting the real data. + + + +The implementation of these features in LVM is described in the +remainder of this document. + + +
+LVM Architecture Overview + + LVM + architecture overview + + + LVM + history + + + LVM1 + + + LVM2 + + + +For the RHEL 4 release of the Linux operating +system, the original LVM1 logical volume manager +was replaced by LVM2, which has a more +generic kernel framework than LVM1. +LVM2 provides the following improvements +over LVM1: + + + + +flexible capacity + + + + +more efficient metadata storage + + + + +better recovery format + + + + +new ASCII metadata format + + + + +atomic changes to metadata + + + + +redundant copies of metadata + + + + + +LVM2 is backwards compatible with LVM1, with the +exception of snapshot and cluster support. +You can convert a volume group from LVM1 format +to LVM2 format with the vgconvert +command. For information on converting LVM +metadata format, see the vgconvert(8) +man page. + + + +The underlying physical storage unit of an LVM logical volume +is a block device such as a partition or whole disk. +This device is initialized as an LVM physical +volume (PV). + + + +To create an LVM logical volume, the physical volumes are +combined into a volume group (VG). +This creates a pool of +disk space out of which LVM logical volumes (LVs) can be allocated. +This process is analogous to the way in which disks are divided into +partitions. A logical volume is +used by file systems and applications (such as databases). + + + + LVM + components + + + +shows the components of a simple LVM logical volume: + +
+ LVM Logical Volume Components + + + + + + + LVM Logical Volume Components + + + +
+ + +For detailed information on the components of an LVM logical volume, +see . + + +
+ +
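+As noted above, a volume group can be converted from LVM1 to LVM2 metadata format with the vgconvert command. A minimal sketch, assuming a hypothetical volume group named vg00 (back up the metadata and deactivate its logical volumes before converting):
+
+# vgconvert -M2 vg00
+
+See the vgconvert(8) man page for the full set of options.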
+The Clustered Logical Volume Manager (CLVM) + + LVM + clustered + + + CLVM + definition + + + cluster environment + + + + +The Clustered Logical Volume Manager (CLVM) +is a set of clustering extensions to LVM. These extensions +allow a cluster of computers to manage shared storage (for example, +on a SAN) using LVM. + + + +Whether you should use CLVM depends on your system requirements: + + + + + + +If only one node of your system requires access to the storage you +are configuring as logical volumes, then +you can use LVM without the CLVM extensions and the +logical volumes created with that node are all local +to the node. + + + + + +If you are using a clustered system for failover where only a single +node that accesses the storage is active at any one time, you should +use High Availability Logical Volume Management agents (HA-LVM). +For information on HA-LVM, see +Configuring and Managing a Red Hat +Cluster. + + + + + +If more than one node of your cluster will require access to your +storage which is then shared among the active nodes, then you +must use CLVM. CLVM allows a user to configure logical volumes +on shared storage by locking access to physical storage while +a logical volume is being configured, and uses clustered +locking services to manage the shared storage. + + + + + + + + clvmd daemon + + + +In order to use CLVM, the Red Hat Cluster Suite software, +including the clvmd daemon, must be running. +The clvmd daemon is the +key clustering extension to LVM. +The clvmd daemon runs in +each cluster computer and distributes LVM metadata updates +in a cluster, presenting each cluster computer with +the same view of the logical volumes. +For information on installing and administering Red Hat Cluster +Suite, see Configuring and Managing a Red Hat +Cluster. + + + +To ensure that +clvmd is started at boot time, +you can execute a chkconfig ... on +command on the clvmd service, as follows: + + + +# chkconfig clvmd on + + + +If the clvmd daemon has not been started, +you can execute a service ... start command +on the clvmd service, as follows: + + + +# service clvmd start + + + +Creating LVM logical volumes in a cluster environment is +identical to creating LVM logical volumes on a single +node. There is no difference in the LVM commands themselves, +or in the LVM graphical user interface, as described in + and +. +In order to enable the LVM volumes +you are creating in a cluster, the cluster infrastructure +must be running and the cluster must be quorate. + + + +By default, logical volumes created with CLVM on shared storage +are visible to all systems that have access to the shared +storage. +It is possible to create volume groups +in which all of the storage devices are visible to only one +node in the cluster. It is also possible to change +the status of a volume group from a local volume +group to a clustered volume group. +For information, see + +and +. + + + + Warning + + When you create volume groups with CLVM on shared storage, you must + ensure that all nodes in the cluster have access to the physical + volumes that constitute the volume group. + Asymmmetric cluster configurations in which some nodes + have access to the storage and others do not are not supported. + + + + + + +shows a CLVM overview in a Red Hat cluster. + +
+ CLVM Overview + + + + + + + GFS with a SAN + + + +
+ + + +Shared storage for use in Red Hat Cluster Suite requires that you +be running the cluster logical volume manager daemon (clvmd) +or the High Availability Logical Volume Management agents (HA-LVM). If you are not +able to use either the clvmd daemon or HA-LVM for operational reasons or +because you do not have the correct entitlements, you must not use +single-instance LVM on the shared disk as this may result in data +corruption. If you have any concerns please contact your Red Hat service +representative. + + + + + +CLVM requires changes to the lvm.conf file for +cluster-wide locking. +Information on configuring the lvm.conf file +to support clustered locking is provided within the +lvm.conf file itself. For information +about the +lvm.conf file, see +. + + + +
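+For reference, the required change usually amounts to enabling the clustered locking type in /etc/lvm/lvm.conf. A minimal sketch (treat it as illustrative and follow the comments shipped in lvm.conf for your release):
+
+# In the global section of /etc/lvm/lvm.conf:
+locking_type = 3
+
+On systems with the lvm2-cluster package installed, running lvmconf --enable-cluster makes the same change for you.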
+ + + + + + + + + + + + + + + +
diff --git a/en-US/DG_Filesys-Parted.xml b/en-US/DG_Filesys-Parted.xml
new file mode 100644
index 0000000..3577c46
--- /dev/null
+++ b/en-US/DG_Filesys-Parted.xml
@@ -0,0 +1,626 @@
+
+
+%RH_ENTITIES;
+
+]>
+
+Partitions
+
+ disk storage
+ parted
+
+ parted
+
+ parted
+
+ The utility parted allows users to:
+
+ View the existing partition table
+ Change the size of existing partitions
+ Add partitions from free space or additional hard drives
+
+ parted
+ overview
+
+ By default, the parted package is included when installing &PROD;. To start parted, log in as root and type the command parted /dev/sda at a shell prompt (where /dev/sda is the device name for the drive you want to configure).
+
+ If you want to remove or resize a partition, the device on which that partition resides must not be in use. Creating a new partition on a device which is in use—while possible—is not recommended.
+
+ For a device to not be in use, none of the partitions on the device can be mounted, and any swap space on the device must not be enabled.
+
+ In addition, the partition table should not be modified while it is in use because the kernel may not properly recognize the changes. If the partition table does not match the actual state of the mounted partitions, information could be written to the wrong partition, resulting in lost and overwritten data.
+
+ The easiest way to achieve this is to boot your system in rescue mode. When prompted to mount the file system, select Skip.
+
+ Alternatively, if the drive does not contain any partitions in use (that is, no system processes use or lock the file system), you can unmount them with the umount command and turn off all the swap space on the hard drive with the swapoff command.
+
+ contains a list of commonly used parted commands. The sections that follow explain some of these commands and arguments in more detail.
+ + + + parted + + table of commands + + + <command moreinfo="none">parted </command> commands + + + + + + + + Command + + + + Description + + + + + + + check minor-num + + + + Perform a simple check of the file system + + + + + + cp from to + + + + Copy file system from one partition to another; from and to are the minor numbers of the partitions + + + + + + help + + + + Display list of available commands + + + + + + mklabel label + + + + + Create a disk label for the partition table + + + + + + mkfs minor-num file-system-type + + + + Create a file system of type file-system-type + + + + + + mkpart part-type fs-type start-mb end-mb + + + + Make a partition without creating a new file system + + + + + + mkpartfs part-type fs-type start-mb end-mb + + + + Make a partition and create the specified file system + + + + + + move minor-num start-mb end-mb + + + + Move the partition + + + + + + name minor-num name + + + + Name the partition for Mac and PC98 disklabels only + + + + + + print + + + + Display the partition table + + + + + + quit + + + + Quit parted + + + + + + rescue start-mb end-mb + + + + Rescue a lost partition from start-mb to end-mb + + + + + + resize minor-num start-mb end-mb + + + + Resize the partition from start-mb to end-mb + + + + + + rm minor-num + + + + Remove the partition + + + + + + select device + + + + Select a different device to configure + + + + + + set minor-num flag state + + + + Set the flag on a partition; state is either on or off + + + + + + toggle [NUMBER [FLAG] + + + + Toggle the state of FLAG on partition NUMBER + + + + + + unit UNIT + + + + Set the default unit to UNIT + + + + + +
+ +
+ Viewing the Partition Table + + parted + + viewing partition table + + + partitions + viewing list + + + partition table + viewing + + + After starting parted, use the command print to view the partition table. A table similar to the following appears: + + + +Model: ATA ST3160812AS (scsi) +Disk /dev/sda: 160GB +Sector size (logical/physical): 512B/512B +Partition Table: msdos + +Number Start End Size Type File system Flags + 1 32.3kB 107MB 107MB primary ext3 boot + 2 107MB 105GB 105GB primary ext3 + 3 105GB 107GB 2147MB primary linux-swap + 4 107GB 160GB 52.9GB extended root + 5 107GB 133GB 26.2GB logical ext3 + 6 133GB 133GB 107MB logical ext3 + 7 133GB 160GB 26.6GB logical lvm + + + + + + The first line contains the disk type, manufacturer, model number and interface, and the second line displays the disk label type. The remaining output below the fourth line shows the partition table. + + + + In the partition table, the Minor number is the partition number. For example, the partition with minor number 1 corresponds to /dev/sda1. The + Start and End values are in megabytes. Valid Type are metadata, free, primary, extended, or logical. The Filesystem is the file system type, which can be any of the following: + + + ext2 + ext3 + fat16 + fat32 + hfs + jfs + linux-swap + ntfs + reiserfs + hp-ufs + sun-ufs + xfs + + + If a Filesystem of a device shows no value, this means that its file system type is unknown. + + + The Flags column lists the flags set for the partition. Available flags are boot, + root, swap, hidden, raid, lvm, or lba. + + + Tip + + parted + + selecting device + + + To select a different device without having to restart parted, use the select command followed by the device name (for example, /dev/sda). Doing so allows you to view or configure the partition table of a device. + + +
+ +
+ Creating a Partition + + parted + + creating partitions + + + partitions + creating + + + Warning + + + Do not attempt to create a partition on a device that is in use. + + + + + Before creating a partition, boot into rescue mode (or unmount any partitions on the device and turn off any swap space on the device). + + + + Start parted, where /dev/sda is the device on which to create the partition: + + +parted /dev/sda + + + View the current partition table to determine if there is enough free space: + + +print + + + If there is not enough free space, you can resize an existing partition. Refer to for details. + + +
+ Making the Partition
+
+ partitions
+ making
+ mkpart
+
+ mkpart
+
+ From the partition table, determine the start and end points of the new partition and what partition type it should be. You can only have four primary partitions (with no extended partition) on a device. If you need more than four partitions, you can have three primary partitions, one extended partition, and multiple logical partitions within the extended partition. For an overview of disk partitions, refer to the appendix An Introduction to Disk Partitions in the Fedora 13 Installation Guide.
+
+ For example, to create a primary partition with an ext3 file system from 1024 megabytes to 2048 megabytes on a hard drive, type the following command:
+
+mkpart primary ext3 1024 2048
+
+ Tip
+
+ If you use the mkpartfs command instead, the file system is created after the partition is created. However, parted does not support creating an ext3 file system. Thus, if you wish to create an ext3 file system, use mkpart and create the file system with the mkfs command as described later.
+
+ The changes start taking place as soon as you press Enter, so review the command before executing it.
+
+After creating the partition, use the print command to confirm that it is in the partition table with the correct partition type, file system type, and size. Also remember the minor number of the new partition so that you can label any file systems on it. You should also view the output of cat /proc/partitions to make sure the kernel recognizes the new partition.
+
+ +
+ Formatting and Labeling the Partition + + partitions + formatting + mkfs + + + + mkfs + + + + The partition still does not have a file system. Create the file system: + + +/sbin/mkfs -t ext3 /dev/sda6 + + + Warning + + + Formatting the partition permanently destroys any data that currently exists on the partition. + + + + + + Next, give the file system on the partition a label. For example, if the file system on the new partition is /dev/sda6 and you want to label it /work, use: + + +e2label /dev/sda6 /work + + + By default, the installation program uses the mount point of the partition as the label to make sure the label is unique. You can use any label you want. + + +Afterwards, create a mount point (e.g. /work) as root. + + +
+ + + +
+ Add to <filename moreinfo="none">/etc/fstab</filename>
+
+As root, edit the /etc/fstab file to include the new partition using the partition's UUID. Use the blkid command (for example, blkid /dev/sda6) to retrieve the partition's UUID. The new line should look similar to the following:
+
+UUID=93a0429d-0318-45c0-8320-9676ebf1ca79 /work ext3 defaults 1 2
+
+ The first column should contain UUID= followed by the file system's UUID. The second column should contain the mount point for the new partition, and the next column should be the file system type (for example, ext3 or swap). If you need more information about the format, read the man page with the command man fstab.
+
+ If the fourth column is the word defaults, the partition is mounted at boot time. To mount the partition without rebooting, as root, type the command:
+
+mount /work
+
+
+ +
+ Removing a Partition + + parted + + removing partitions + + + partitions + removing + + + Warning + + + Do not attempt to remove a partition on a device that is in use. + + + + + Before removing a partition, boot into rescue mode (or unmount any partitions on the device and turn off any swap space on the device). + + + + Start parted, where /dev/sda is the device on which to remove the partition: + + +parted /dev/sda + + + View the current partition table to determine the minor number of the partition to remove: + + +print + + + Remove the partition with the command rm. For example, to remove the partition with minor number 3: + + +rm 3 + + + The changes start taking place as soon as you press Enter, so review the command before committing to it. + + + + After removing the partition, use the print command to confirm that it is removed from the partition table. You should also view the output of + + +cat /proc/partitions + + + to make sure the kernel knows the partition is removed. + + + + The last step is to remove it from the /etc/fstab file. Find the line that declares the removed partition, and remove it from the file. + +
+ +
+ Resizing a Partition + + parted + + resizing partitions + + + partitions + resizing + + + Warning + + + Do not attempt to resize a partition on a device that is in use. + + + + + Before resizing a partition, boot into rescue mode (or unmount any partitions on the device and turn off any swap space on the device). + + + + Start parted, where /dev/sda is the device on which to resize the partition: + + +parted /dev/sda + + + View the current partition table to determine the minor number of the partition to resize as well as the start and end points for the partition: + + +print + + + To resize the partition, use the resize command followed by the minor number for the partition, the starting place in megabytes, and the end place in megabytes. For example: + + +resize 3 1024 2048 + + + + Warning + + A partition cannot be made larger than the space available on the device + + + + + + + After resizing the partition, use the print command to confirm that the partition has been resized correctly, is the correct partition type, and is the correct file system type. + + + + After rebooting the system into normal mode, use the command df to make sure the partition was mounted and is recognized with the new size. + +
+ + +
diff --git a/en-US/DG_Filesys-Raid-softconfig.xml b/en-US/DG_Filesys-Raid-softconfig.xml new file mode 100644 index 0000000..ea84568 --- /dev/null +++ b/en-US/DG_Filesys-Raid-softconfig.xml @@ -0,0 +1,518 @@ + + + +
+ <remark>[INSTALLGUIDE?] </remark>Configuring Software RAID
+
+ RAID
+ configuring software RAID during installation
+
+ installation
+ software RAID
+
+ Users can configure Software RAID during the graphical installation process (Disk Druid), the text-based installation process, or during a kickstart installation. This chapter covers Software RAID configuration during the installation process using the Disk Druid application.
+
+ Applying software RAID partitions to the physical hard drives.
+
+ To add a boot partition (/boot/) to a RAID partition, ensure it is on a RAID1 partition.
+
+ Creating RAID devices from the software RAID partitions.
+
+ Optional: Configuring LVM from the RAID devices.
+
+ Creating file systems from the RAID devices.
+
+ Note
+
+ Although this procedure covers installing with a GUI application, system administrators can do the same with text-based installation.
+
+ Configuration of software RAID must be done manually in Disk Druid during the installation process.
+
+ These examples use two 9.1 GB SCSI drives (/dev/sda and /dev/sdb) to illustrate the creation of simple RAID1 configurations. They detail how to create a simple RAID 1 configuration by implementing multiple RAID devices.
+
+ On the Disk Partitioning Setup screen, select Manually partition with Disk Druid.
+
+ <remark>[INSTALLGUIDE?] </remark>Creating the RAID Partitions + + RAID + installing + creating the boot partition + + + RAID + installing + creating the RAID partitions + + + In a typical situation, the disk drives are new or are + formatted. Both drives are shown as raw devices with no + partition configuration in . + + +
Two Blank Drives, Ready For Configuration + + + + + Two Blank Drives, Ready For Configuration + + + +
+ + + + + In Disk Druid, + choose RAID to enter + the software RAID creation screen. + + + + + + Choose Create a software RAID + partition to create a RAID partition as shown + in . Note that no + other RAID options (such as entering a mount point) are + available until RAID partitions, as well as RAID devices, + are created. + + +
RAID Partition Options + + + + + RAID Partition Options + + + +
+
+ + + + A software RAID partition must be constrained to one + drive. For Allowable Drives, + select the drive to use for RAID. If you have multiple + drives, by default all drives are selected and you must + deselect the drives you do not want. + + +
Adding a RAID Partition + + + + + + Adding a RAID Partition + + + +
+
+ + + + Enter the size that you want the partition to be. + + + + + + Select Fixed Size to specify partition + size. Select Fill all space up to (MB) + and enter a value (in MB) to specify partition size + range. Select Fill to maximum allowable + size to allow maximum available space of the hard + disk. Note that if you make more than one space growable, + they share the available free space on the disk. + + + + + + Select Force to be a primary + partition if you want the partition to be a + primary partition. A primary partition is one of the first + four partitions on the hard drive. If unselected, the + partition is created as a logical partition. If other + operating systems are already on the system, unselecting + this option should be considered. For more information on + primary versus logical/extended partitions, refer to the + appendix section of the Fedora Installation Guide. + + + + + + Repeat these steps to create as many partitions as you need + for your partitions. + + +
+ Repeat these steps to create as many partitions as needed for your RAID setup. Note that not all of the partitions have to be RAID partitions. For example, you can configure only the /boot/ partition as a software RAID device, leaving the root partition (/), /home/, and swap as regular file systems. shows successfully allocated space for the RAID 1 configuration (for /boot/), which is now ready for RAID device and mount point creation:
+
RAID 1 Partitions Ready, Pre-Device and Mount Point Creation + + + + + RAID 1 Partitions Ready, Pre-Device and Mount Point + Creation + + + +
+
+ + +
+ <remark>[INSTALLGUIDE?] </remark>Creating the RAID Devices and Mount Points + + RAID + installing + creating the RAID devices + + + RAID + installing + creating the mount points + + + Once you create all of your partitions as Software RAID + partitions, you must create the RAID device and mount point. + + + + + + Select the RAID + button on the Disk + Druid main partitioning screen (refer to ). + + + + + + appears. Select + Create a RAID device. + + +
RAID Options + + + + + RAID Select Option + + + +
+
+ + + + Next, appears, where + you can make a RAID device and assign a mount point. + + +
Making a RAID Device and Assigning a Mount Point + + + + + Making a RAID Device and Assigning a Mount Point + + + +
+
+
+ Select a mount point.
+
+ Choose the file system type for the partition. At this point you can either configure a dynamic LVM file system or a traditional static ext2/ext3 file system. For more information on configuring LVM on a RAID device, select physical volume (LVM). If LVM is not required, continue on with the following instructions.
+
+ Select a device name such as md0 for the RAID device.
+
+ Choose your RAID level. You can choose from RAID 0, RAID 1, and RAID 5.
+
+ Note
+
+ If you are making a RAID partition of /boot/, you must choose RAID level 1, and it must use one of the first two drives (IDE first, SCSI second). If you are not creating a separate RAID partition of /boot/, and you are making a RAID partition for the root file system (/), it must be RAID level 1 and must use one of the first two drives (IDE first, SCSI second).
+
The + <command moreinfo="none">/boot/</command> Mount + Error + + + + + + The /boot/ Mount + Error + + + +
+
+
+ + + + The RAID partitions created appear in the RAID Members list. Select which + of these partitions should be used to create the RAID + device. + + + + + + If configuring RAID 1 or RAID 5, specify the number of spare + partitions. If a software RAID partition fails, the spare is + automatically used as a replacement. For each spare you want + to specify, you must create an additional software RAID + partition (in addition to the partitions for the RAID + device). Select the partitions for the RAID device and the + partition(s) for the spare(s). + + + + + + After clicking OK, + the RAID device appears in the Drive Summary list. + + + + + + Repeat this chapter's entire process for configuring + additional partitions, devices, and mount points, such as + the root partition (/), + /home/, or swap. + + +
+ + + After completing the entire configuration, the figure as shown + in resembles the default + configuration, except for the use of RAID. + + +
Final Sample + RAID Configuration + + + + + + Final Sample RAID Configuration + + + +
+ + + + The figure as shown in is an example of a RAID + and LVM configuration. + + +
Final Sample RAID With LVM Configuration + + + + + Final Sample RAID With LVM Configuration + + + +
+ + + You can continue with your installation process. Refer to the + Installation + Guide for further instructions. + +
+ +
+<remark>[NEW!] </remark>Advanced RAID Device Creation
+
+In some cases, you may wish to install the operating system on an array that can't be created after the installation completes. Usually, this means setting up the /boot or root file system arrays on a complex RAID device; in such cases, you may need to use array options that are not supported by Anaconda. To work around this, perform the following procedure:
+
+Insert the install disk as you normally would.
+
+During the initial boot up, select Rescue Mode instead of Install or Upgrade. When the system fully boots into Rescue mode, the user will be presented with a command-line terminal.
+
+From this terminal, use parted to create RAID partitions on the target hard drives. Then, use mdadm to manually create RAID arrays from those partitions using any and all settings and options available (a brief example follows this procedure). For more information on how to do this, refer to , man parted, and man mdadm.
+
+Once the arrays are created, you can optionally create file systems on the arrays as well. Refer to the different chapters under for information on file systems supported by Fedora 13. also contains basic technical information on supported file systems for quick review.
+
+Reboot the computer and this time select Install or Upgrade to install as normal. As Anaconda searches the disks in the system, it will find the pre-existing RAID devices.
+
+When asked about how to use the disks in the system, select Custom Layout and click Next. In the device listing, the pre-existing MD RAID devices will be listed.
+
+Select a RAID device, click Edit and configure its mount point and (optionally) the type of file system it should use (if you didn't create one earlier), then click Done. Anaconda will perform the install to this pre-existing RAID device, preserving the custom options you selected when you created it in Rescue Mode.
+
+The limited Rescue Mode of the installer does not include man pages. Both the man mdadm and man md pages contain useful information for creating custom RAID arrays, and may be needed throughout the workaround. As such, it can be helpful to either have access to a machine with these man pages present, or to print them out prior to booting into Rescue Mode and creating your custom arrays.
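+As an illustration of the commands used at the Rescue Mode prompt in the procedure above, the following sketch creates a two-disk RAID1 array from two existing RAID partitions (the device names are hypothetical; substitute the partitions you created with parted):
+
+mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
+mkfs -t ext3 /dev/md0
+
+The first command assembles the new array as /dev/md0; the optional mkfs step creates a file system on it so that Anaconda can reuse the array directly during the subsequent installation.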
+ + +
+RAID Utilities + + + + + + +
+ +
diff --git a/en-US/DG_Filesys-Raid.xml b/en-US/DG_Filesys-Raid.xml new file mode 100644 index 0000000..c80277a --- /dev/null +++ b/en-US/DG_Filesys-Raid.xml @@ -0,0 +1,762 @@ + + +%RH_ENTITIES; + +]> + +Redundant Array of Independent Disks (RAID) + + + The basic idea behind RAID is to combine multiple small, + inexpensive disk drives into an array to accomplish performance + or redundancy goals not attainable with one large and expensive + drive. This array of drives appears to the computer as a single + logical storage unit or drive. + + +
+ What is RAID? + + RAID + explanation of + + + + RAID allows information to be spread across several disks. RAID uses + techniques such as disk striping (RAID + Level 0), disk mirroring (RAID Level 1), + and disk striping with parity (RAID Level + 5) to achieve redundancy, lower latency, increased bandwidth, + and maximized ability to recover from hard disk crashes. + + + + striping + RAID fundamentals + + + RAID distributes data across each drive in the + array by breaking it down into consistently-sized + chunks (commonly 256K or 512k, although other values are + acceptable). Each chunk is then written to a hard drive in the + RAID array according to the RAID level employed. When the data + is read, the process is reversed, giving the illusion that the + multiple drives in the array are actually one large drive. + +
+
+ Who Should Use RAID? + + + System Administrators and others who manage large amounts of + data would benefit from using RAID technology. Primary reasons + to deploy RAID include: + + + + RAID + reasons to use + + + + + + Enhances speed + + + + + + Increases storage capacity using a single virtual disk + + + + + + Minimizes disk failure + + + +
+ +
+ RAID Types + + Hardware RAID + RAID + + + Software RAID + RAID + + + RAID + Hardware RAID + + + RAID + Software RAID + + + + There are three possible RAID approaches: Firmware RAID, Hardware RAID and + Software RAID. + + + + +Firmware RAID + +Firmware RAID (also known as ATARAID) is a type of software RAID where the RAID sets can be configured +using a firmware-based menu. The firmware used by this type of RAID also hooks into the BIOS, allowing you to boot from its +RAID sets. Different vendors use different on-disk metadata formats to mark the RAID set members. The Intel Matrix RAID +is a good example of a firmware RAID system. + + + + + Hardware RAID + + + The hardware-based array manages the RAID subsystem + independently from the host. It presents a single disk per + RAID array to the host. + + +A Hardware RAID device may be internal or external to the system, with internal devices commonly consisting of a specialized controller card that handles the RAID tasks tranparently to the operating system and with external devices commonly connecting to the system via SCSI, fiber channel, iSCSI, InfiniBand, or other high speed network interconnect and presenting logical volumes to the system. + + + + + + RAID controller cards function like a SCSI controller to the + operating system, and handle all the actual drive + communications. The user plugs the drives into the RAID + controller (just like a normal SCSI controller) and then adds + them to the RAID controllers configuration, and the operating + system won't know the difference. + + + + + Software RAID + + + Software RAID implements the various RAID levels in the kernel + disk (block device) code. It offers the cheapest possible + solution, as expensive disk controller cards or hot-swap + chassis + + A hot-swap chassis allows you to remove a hard drive without having to power-down your system. + + are not required. Software RAID also works with + cheaper IDE disks as well as SCSI disks. With today's faster + CPUs, Software RAID also generally outperforms Hardware RAID. + + + + The Linux kernel contains a multi-disk (MD) driver that allows the RAID + solution to be completely hardware independent. The + performance of a software-based array depends on the server + CPU performance and load. + + + Here are some of the key features of the Linux software RAID stack: + + + + + + + +Multi-threaded design + + + + + + + + + + Portability of arrays between Linux machines without + reconstruction + + + + + + Backgrounded array reconstruction using idle system + resources + + + + + + Hot-swappable drive support + + + + + + Automatic CPU detection to take advantage of certain CPU + features such as streaming SIMD support + + + +Automatic correction of bad sectors on disks in an array + +Regular consistency checks of RAID data to ensure the health of the array + +Proactive monitoring of arrays with email alerts sent to a designated email address on important events + +Write-intent bitmaps which drastically speed resync events by allowing the kernel to know precisely which portions of a disk need to be resynced instead of having to resync the entire array + +Resync checkpointing so that if you reboot your computer during a resync, at startup the resync will pick up where it left off and not start all over again + +The ability to change parameters of the array after installation. For example, you can grow a 4-disk raid5 array to a 5-disk raid5 array when you have a new disk to add. 
This grow operation is done live and does not require you to reinstall on the new array. + + + + + + + + +
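
As a minimal sketch of such a grow operation, assuming an existing 4-disk RAID5 array at /dev/md0 and a new disk partition /dev/sde1 (both names are illustrative only), the mdadm utility described later in this chapter can add the disk and reshape the array while it stays online:

# Add the new member to the existing array (device names are examples)
mdadm /dev/md0 --add /dev/sde1
# Reshape the array to use all 5 devices; depending on the mdadm version,
# a --backup-file may also be requested for the critical section
mdadm --grow /dev/md0 --raid-devices=5
# Watch the reshape progress
cat /proc/mdstat
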
+ +
+ RAID Levels and Linear Support + + + RAID supports various configurations, including levels 0, 1, 4, + 5, 6, 10, and linear. These RAID types are defined as follows: + + + + + + + RAID + levels + + + RAID + level 0 + + + RAID + level 1 + + + RAID + level 4 + + + RAID + level 5 + + + + +RAID +levels + + + +levels +RAID + + + + +RAID +striping + + + +striping +RAID + + + + + + +RAID +mirroring + + + +mirroring +RAID + + + + + + +RAID +parity + + + +parity +RAID + + + + + + +RAID +linear RAID + + + +linear RAID +RAID + + + + + +Level 0 + + +RAID level 0, often + called "striping," is a performance-oriented striped data + mapping technique. This means the data being written to the + array is broken down into strips and written across the + member disks of the array, allowing high I/O performance at + low inherent cost but provides no redundancy. + + +Many RAID level 0 implementations will only stripe the data across the member devices up to the size of the smallest device in the array. This means that if you have multiple devices with slightly different sizes, each device will get treated as though it is the same size as the smallest drive. Therefore, the common storage + capacity of a level 0 array is equal to the capacity + of the smallest member disk in a Hardware RAID or the capacity + of smallest member partition in a Software RAID multiplies by the number of disks or partitions in the array. + + + + + + +Level 1 + +RAID level 1, or + "mirroring," has been used longer than any other form of + RAID. Level 1 provides redundancy by writing identical data + to each member disk of the array, leaving a "mirrored" copy + on each disk. Mirroring remains popular due to its + simplicity and high level of data availability. Level 1 + operates with two or more disks, and provides very good data reliability and + improves performance for read-intensive applications but at + a relatively high cost. + + RAID level 1 comes at a high cost because you write the + same information to all of the disks in the array, provides data reliability, but in a much less space-efficient manner than parity based RAID levels such as level 5. However, this space inefficiency comes with a performance benefit: parity-based RAID levels consume considerably more CPU power in order to generate the parity while RAID level 1 simply writes the same data more than once to the multiple RAID members with very little CPU overhead. As such, RAID level 1 can outperform the parity-based RAID levels on machines where software RAID is employed and CPU resources on the machine are consistently taxed with operations other than RAID activities. + + + + + The storage capacity of the level 1 array is + equal to the capacity of the smallest mirrored hard disk in a + Hardware RAID or the smallest mirrored partition in a + Software RAID. Level 1 redundancy is the highest possible among all RAID types, with the array being able to operate with only a single disk present. + + + + +Level 4 + + +Level 4 uses parity + + + Parity information is calculated based on the contents + of the rest of the member disks in the array. This + information can then be used to reconstruct data when + one disk in the array fails. The reconstructed data can + then be used to satisfy I/O requests to the failed disk + before it is replaced and to repopulate the failed disk + after it has been replaced. + + concentrated on a single disk drive to protect + data. 
Because the dedicated parity disk + represents an inherent bottleneck on all write transactions to the RAID array, level 4 is seldom used + without accompanying technologies such as write-back + caching, or in specific circumstances where the system administrator is intentionally designing the software RAID device with this bottleneck in mind (such as an array that will have little to no write transactions once the array is populated with data). RAID level 4 is so rarely used that it is not available as an option in Anaconda. However, it could be created manually by the user if truly needed. + + + +The storage capacity of Hardware RAID level 4 is equal to the capacity of the smallest member partition multiplied by the number of partitions minus one. Performance of a RAID level 4 array will always be asymmetrical, meaning reads will outperform writes. This is because writes consume extra CPU and main memory bandwidth when generating parity, and then also consume extra bus bandwidth when writing the actual data to disks because you are writing not only the data, but also the parity. Reads need only read the data and not the parity unless the array is in a degraded state. As a result, reads generate less traffic to the drives and across the busses of the computer for the same amount of data transfer under normal operating conditions. + + + + + + +Level 5 + + +This is the most common + type of RAID. By distributing parity across all of + an array's member disk drives, RAID level 5 eliminates the + write bottleneck inherent in level 4. The only performance + bottleneck is the parity calculation process itself. With modern + CPUs and Software RAID, that is usually not a bottleneck at all since modern CPUs can generate parity very fast. However, if you have a sufficiently large number of member devices in a software RAID5 array such that the combined aggregate data transfer speed across all devices is high enough, then this bottleneck can start to come into play. + + + +As with level 4, level 5 has asymmetrical + performance, with reads substantially outperforming + writes. The storage capacity of RAID + level 5 is calculated the same way as with level 4. + + + + +Level 6 + + +This is a common level of RAID when data redundancy and preservation, and not performance, are the paramount concerns, but where the space inefficiency of level 1 is not acceptable. Level 6 uses a complex parity scheme to be able to recover from the loss of any two drives in the array. This complex parity scheme creates a significantly higher CPU burden on sofware RAID devices and also imposes an increased burden during write transactions. As such, not only is level 6 asymmetrical in performance like levels 4 and 5, but it is considerably more asymmetrical. + + +The total capacity of a RAID level 6 array is calculated similarly to RAID level 5 and 4, except that you must subtract 2 devices (instead of 1) from the device count for the extra parity storage space. + + + + + +Level 10 + + +This RAID level attempts to combine the performance advantages of level 0 with the redundancy of level 1. It also helps to alleviate some of the space wasted in level 1 arrays with more than 2 devices. With level 10, it is possible to create a 3-drive array configured to store only 2 copies of each piece of data, which then allows the overall array size to be 1.5 times the size of the smallest devices instead of only equal to the smallest device (like it would be with a 3-device, level 1 array). 
+ + +The number of options available when creating level 10 arrays (as well as the complexity of selecting the right options for a specific use case) make it impractical to create during installation. It is possible to create one manually using the command line mdadm tool. For details on the options and their respective performance trade-offs, refer to man md. + + + + + +Linear RAID + +Linear RAID is a + simple grouping of drives to create a larger virtual + drive. In linear RAID, the chunks are allocated sequentially + from one member drive, going to the next drive only when the + first is completely filled. This grouping provides no + performance benefit, as it is unlikely that any I/O + operations will be split between member drives. Linear RAID + also offers no redundancy and, in fact, decreases + reliability — if any one member drive fails, the + entire array cannot be used. The capacity is the total of + all member disks. + + + +
+ +
+<remark>[NEW!] </remark>Linux RAID Subsystems + +RAID +subsystems of RAID + + + +subsystems of RAID +RAID + + +RAID in Linux is composed of the following subsystems: + + + + + +Linux Hardware RAID controller drivers + + + +RAID +hardware RAID controller drivers + + + +hardware RAID controller drivers +RAID + + + + + +Hardware RAID controllers have no specific RAID subystem in Linux. Because +they use special RAID chipsets, hardware RAID controllers come with their own drivers; these drivers +allow the system to detect the RAID sets as regular disks. + + + + +mdraid + + + +RAID +mdraid + + + +mdraid +RAID + + +The mdraid subsystem was designed as a software RAID solution for Linux; it +is also the preferred solution for software RAID under Linux. This subsystem uses its own +metadata format, generally refered to as native mdraid metadata. + + +mdraid also +supports other metadata formats, known as external metadata. +Fedora 13 uses mdraid with +external metadata to access +ISW / IMSM (Intel firmware RAID) sets. mdraid sets are configured and +controlled through the mdadm utility. + + + + +dmraid + + + +RAID +dmraid + + + +dmraid +RAID + + + +Device-mapper RAID or dmraid +refers to device-mapper kernel code that offers the mechanism to piece disks +together into a RAID set. This same kernel code does not provide any RAID +configuration mechanism. + + + +dmraid is configured entirely in user-space, making it easy +to support various on-disk metadata formats. As such, dmraid +is used on a wide variety of firmware RAID implementations. dmraid +also supports Intel firmware RAID, although Fedora 13 uses mdraid +to access Intel firmware RAID sets. + + + +
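
To see which subsystem is handling a given set, the following commands are a useful illustrative sketch: /proc/mdstat and mdadm report on mdraid sets, while dmraid reports on sets it discovers through its metadata format handlers.

# Show active mdraid sets and their state
cat /proc/mdstat
# Scan component devices for mdraid metadata (native or external)
mdadm --examine --scan
# List RAID sets discovered by dmraid and the block devices backing them
dmraid -r
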
+ +
+<remark>[NEW!] </remark>RAID Support in the Installer + +RAID +installer support + + + +installer support +RAID + + + + +RAID +Anaconda support + + + +Anaconda support +RAID + + + + + +The Anaconda installer will automatically detect any hardware and +firmware RAID sets on a system, making them available for installation. Anaconda +also supports software RAID using mdraid, and can recognize existing mdraid +sets. + + + +Anaconda provides utilities for creating RAID sets during installation; however, +these utilities only allow partitions (as opposed to entire disks) to be members of new sets. To use an +entire disk for a set, simply create a partition on it spanning the entire disk, and use that partition as +the RAID set member. + + + +When the root file system uses a RAID set, Anaconda will add special kernel +command-line options to the bootloader configuration telling the initrd which +RAID set(s) to activate before searching for the root file system. + + + +For instructions on configuring RAID during installation, refer to the Fedora 13 Installation Guide. + + + +
+
+<remark>[NEW!] </remark>Configuring RAID Sets + +RAID +configuring RAID sets + + + +configuring RAID sets +RAID + + + +Most RAID sets are configured during creation, typically through the firmware menu or +from the installer. In some cases, you may need to create or modify RAID sets after +installing the system, preferably without having to reboot the machine and enter +the firmware menu to do so. + + + + +Some hardware RAID controllers allow you to configure RAID sets on-the-fly or even +define completely new sets after adding extra disks. This requires the use of driver-specific +utilities, as there is no standard API for this. Refer to your hardware RAID controller's +driver documentation for information on this. + + + +mdadm + + + +RAID +mdadm (configuring RAID sets) + + + +mdadm (configuring RAID sets) +RAID + + + +The mdadm command-line tool is used to manage +software RAID in Linux, i.e. mdraid. For information on the different mdadm +modes and options, refer to man mdadm. The man page also contains useful +examples for common operations like creating, monitoring, and assembling software RAID arrays. + + + + + + +dmraid + + + +RAID +dmraid (configuring RAID sets) + + + +dmraid (configuring RAID sets) +RAID + + + + + +As the name suggests, dmraid is used to manage device-mapper RAID sets. The dmraid +tool finds ATARAID devices using multiple metadata format handlers, each supporting various formats. For a +complete list of supported formats, run dmraid -l. + + + +As mentioned earlier in , the dmraid tool cannot +configure RAID sets after creation. For more information about using dmraid, +refer to man dmraid. + + + +
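
The following is a minimal sketch of the mdadm and dmraid operations mentioned above; all device names are examples and will differ on a real system.

# Create a new 3-device RAID5 set
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# Inspect the state of the new set
mdadm --detail /dev/md0
# Record the set in mdadm.conf so it is assembled automatically at boot
mdadm --examine --scan >> /etc/mdadm.conf
# List the metadata formats that dmraid understands
dmraid -l
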
+ +
+<remark>[NEW!] </remark>Advanced RAID Device Creation + +RAID +advanced RAID device creation + + + +advanced RAID device creation +RAID + + + + +In some cases, you may wish to install the operating system on an array that can't be created after the installation completes. Usually, this means setting up the /boot or root file system arrays on a complex RAID device; in such cases, you may need to use array options that are not supported by Anaconda. To work around this, perform the following procedure: + + + + + + +Insert the install disk as you normally would. + + + + + +During the initial boot up, select Rescue Mode instead of Install or Upgrade. When the system fully boots into Rescue mode, the user will be presented with a command line terminal. + + + + +From this terminal, use parted to create RAID partitions on the target hard drives. Then, use mdadm to manually create raid arrays from those partitions using any and all settings and options available. For more information on how to do these, refer to , man parted, and man mdadm. + + + + + + +Once the arrays are created, you can optionally create file systems on the arrays as well. Refer to for basic technical information on file systems supported by Fedora 13. + + + + + + +Reboot the computer and this time select Install or Upgrade to +install as normal. As Anaconda searches the disks in the system, it will find the pre-existing RAID devices. + + + + + +When asked about how to use the disks in the system, select Custom Layout and click Next. In the device listing, the pre-existing MD RAID devices will be listed. + + + + +Select a RAID device, click Edit and configure its mount point and (optionally) the type of file system it should use (if you didn't create one earlier) then click Done. Anaconda will perform the install to this pre-existing RAID device, preserving the custom options you selected when you created it in Rescue Mode. + + + + +The limited Rescue Mode of the installer does not include man pages. Both the man mdadm and man md contain useful information for creating custom RAID arrays, and may be needed throughout the workaround. As such, it can be helpful to either have access to a machine with these man pages present, or to print them out prior to booting into Rescue Mode and creating your custom arrays. + +
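
The following is a hedged sketch of the rescue-mode steps in the procedure above. The disk names, partition sizes, RAID level, and metadata version are placeholders chosen for illustration, not a recommendation for any particular system.

# Partition each target disk and mark the partition for RAID use
parted /dev/sda mklabel msdos
parted /dev/sda mkpart primary 1MiB 100%
parted /dev/sda set 1 raid on
# (repeat for /dev/sdb and any other member disks)

# Build the array with whatever options the installer does not expose,
# for example an explicit metadata version
mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=2 /dev/sda1 /dev/sdb1

# Optionally create a file system on the array before rebooting into the installer
mkfs.ext4 /dev/md0
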
+
diff --git a/en-US/DG_Filesys-Swapspace.xml b/en-US/DG_Filesys-Swapspace.xml new file mode 100644 index 0000000..bfbd380 --- /dev/null +++ b/en-US/DG_Filesys-Swapspace.xml @@ -0,0 +1,129 @@ + + +%RH_ENTITIES; +]> + + Swap Space + + swap space + +
+ What is Swap Space? + + swap space + explanation of + + + Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines + with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory. + + + Swap space can be a dedicated swap partition (recommended), a swap file, or a combination of swap partitions and swap files. + + + swap space + recommended size + + + Swap should equal 2x physical RAM for up to 2 GB of physical RAM, and then an additional 1x physical RAM for any amount above 2 GB, but never less than 32 MB. + + + So, if: + + + M = Amount of RAM in GB, and S = Amount of swap in GB, then + + If M < 2 + S = M *2 +Else + S = M + 2 + + Using this formula, a system with 2 GB of physical RAM would have 4 GB of swap, while one with 3 GB of physical RAM would have 5 GB of swap. Creating a large swap space partition can be especially helpful if you plan to upgrade your RAM at a later time. + + + For systems with really large amounts of RAM (more than 32 GB) you can likely get away with a smaller swap partition (around 1x, or less, of physical RAM). + + + Important + + File systems and LVM2 volumes assigned as swap space should not be in use when being modified. Any attempts to modify swap will fail if a system process or the kernel is using swap space. Use the free and cat /proc/swaps commands to verify how much and where swap is in use. + + + + + +You should modify swap space while the system is booted in rescue mode; for instructions on how to boot in rescue mode, refer to the Installation Guide. When prompted to mount the file system, select Skip. + + + + + + + +
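
Before modifying swap, it can help to confirm how much swap is configured and whether any of it is currently in use, for example:

# Summarize memory and swap usage in megabytes
free -m
# List each active swap partition or file and how much of it is used
cat /proc/swaps
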
+
+ Adding Swap Space + + swap space + creating + + + swap space + expanding + + + Sometimes it is necessary to add more swap space after installation. For example, you may upgrade the amount of RAM in your system from 128 MB to 256 MB, but there is only 256 MB of swap space. It might be advantageous to increase the amount of swap space to 512 MB if you perform memory-intense operations or run applications that require a large amount of memory. + + + You have three options: create a new swap partition, create a new swap file, or extend swap on an existing LVM2 logical volume. It is recommended that you extend an existing logical volume. + + + + + + + + +
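
As one hedged example of the swap file option, the following creates and activates a 512 MB swap file; the path /swapfile and the size are arbitrary choices for illustration.

# Create and initialize a 512 MB swap file
dd if=/dev/zero of=/swapfile bs=1M count=512
chmod 600 /swapfile
mkswap /swapfile
# Enable it immediately
swapon /swapfile
# To enable it at every boot, add a line like this to /etc/fstab:
/swapfile    swap    swap    defaults    0 0
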
+
+ Removing Swap Space + + swap space + removing + + + Sometimes it can be prudent to reduce swap space after installation. For example, say you downgraded the amount of RAM in your system from 1 GB to 512 MB, but there is 2 GB of swap space still assigned. It might be advantageous to reduce the amount of + swap space to 1 GB, since the larger 2 GB could be wasting disk space. + + + You have three options: remove an entire LVM2 logical volume used for swap, remove a swap file, or reduce swap space on an existing LVM2 logical volume. + + + + +
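
For example, to retire a swap file (again, /swapfile is only an illustrative path):

# Stop using the swap file, then delete it
swapoff -v /swapfile
rm /swapfile
# Remember to remove its entry from /etc/fstab as well
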
+
+ Moving Swap Space + + swap space + moving + + + To move swap space from one location to another, follow the steps for removing swap space, and then follow the steps for adding swap space. + +
+
diff --git a/en-US/DG_Filesys-proc_File_System.xml b/en-US/DG_Filesys-proc_File_System.xml new file mode 100644 index 0000000..5373370 --- /dev/null +++ b/en-US/DG_Filesys-proc_File_System.xml @@ -0,0 +1,195 @@ + + +%RH_ENTITIES; + +]> + +
+The /proc Virtual File System + +virtual file system (/proc) +/proc/devices + + + +/proc +/proc/devices + + + +/proc/devices +virtual file system (/proc) + + + + + +virtual file system (/proc) +/proc/filesystems + + + +/proc +/proc/filesystems + + + +/proc/filesystems +virtual file system (/proc) + + + + + + +virtual file system (/proc) +/proc/mdstat + + + +/proc +/proc/mdstat + + + +/proc/mdstat +virtual file system (/proc) + + + + + + +virtual file system (/proc) +/proc/mounts/ + + + +/proc +/proc/mounts/ + + + +/proc/mounts/ +virtual file system (/proc) + + + + + + +virtual file system (/proc) +/proc/mounts + + + +/proc +/proc/mounts + + + +/proc/mounts +virtual file system (/proc) + + + + + + +virtual file system (/proc) +/proc/partitions + + + +/proc +/proc/partitions + + + +/proc/partitions +virtual file system (/proc) + + + + + +Unlike most file systems, /proc contains neither +text not binary files. Instead, it houses virtual files; hence, +/proc is normally referred to as a virtual file system. These +virtual files are typically zero bytes in size, even if they contain a large amount of +information. + + + +The /proc file system is not used for storage per se. Its main purpose +is to provide a file-based interface to hardware, memory, running processes, and other system +components. You can retrieve real-time information on many system components by viewing the +corresponding /proc file. Some of the files within /proc can also be manipulated +(by both users and applications) to configure the kernel. + + + +The following /proc files are relevant in managing and monitoring system storage: + + + + + +/proc/devices + + +Displays various character and block devices currently configured + + + + + +/proc/filesystems + + +Lists all file system types currently supported by the kernel + + + + + +/proc/mdstat + + +Contains current information on multiple-disk or RAID configurations on the system, if they exist + + + + + +/proc/mounts + + +Lists all mounts currently used by the system + + + + + +/proc/partitions + + +Contains partition block allocation information + + + + + + + + + + + +For more information about the /proc file system, refer to the Fedora Deployment Guide. + + + + +
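
As a quick illustration of reading these virtual files, the following commands print their current contents; the output reflects the state of the running kernel at the moment each file is read.

# Show the partitions (and their sizes in 1 KiB blocks) known to the kernel
cat /proc/partitions
# Show the state of any MD/RAID sets on the system
cat /proc/mdstat
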
diff --git a/en-US/DG_NFS-NFS.xml b/en-US/DG_NFS-NFS.xml new file mode 100644 index 0000000..c62ccff --- /dev/null +++ b/en-US/DG_NFS-NFS.xml @@ -0,0 +1,924 @@ + + +%RH_ENTITIES; + +]> + +<remark><command>[unprocessed as yet] </command></remark>Network File System (NFS) + +NFS +introducing + + +Network File System +NFS + + + +A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate +resources onto centralized servers on the network. + + +This chapter focuses on fundamental NFS concepts and supplemental information. + +
+How It Works + + NFS + how it works + + + NFS + UDP + + + NFS + TCP + + +sprabhu@redhat.com + + Currently, there are three versions of NFS. NFS version 2 (NFSv2) is older and is widely supported. NFS version 3 (NFSv3) supports safe asynchronous writes and a more robust error handling than NFSv2; it also supports 64-bit file sizes and offsets, allowing clients to access more than 2Gb of file data. + + +NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires an rpcbind service, supports ACLs, and utilizes stateful operations. Fedora supports NFSv2, NFSv3, and NFSv4 clients. When mounting a file + system via NFS, Fedora uses NFSv4 by default, if the server supports it. + + + + + + All versions of NFS can use Transmission Control Protocol (TCP) running over an IP network, with NFSv4 requiring it. NFSv2 and NFSv3 can use the User Datagram Protocol + (UDP) running over an IP network to provide a stateless network connection between the client and server. + + + +When using NFSv2 or NFSv3 with UDP, the stateless UDP connection (under normal conditions) has less protocol overhead than TCP. This can translate into better performance on very clean, non-congested networks. However, because UDP is stateless, if the server goes down unexpectedly, UDP clients continue to saturate the network with requests for the server. +In addition, when a frame is lost with UDP, the entire RPC request must be retransmitted; with TCP, only the lost frame needs to be resent. For these reasons, TCP is the preferred protocol when connecting to an NFS server. + + + + + + +The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with rpcbind + +The rpcbind service replaces portmap, which was +used in previous versions of Fedora to map RPC program numbers to +IP address port number combinations. For more information, refer to . + +, rpc.lockd, and rpc.statd daemons. The rpc.mountd daemon is still required on the NFS server so set up the exports, but is not involved in any over-the-wire operations. + + + + + + Note + +TCP is the default transport protocol for NFS version 2 and 3 under Fedora. UDP can be +used for compatibility purposes as needed, but is not recommended for wide usage. NFSv4 requires TCP. + + + + All the RPC/NFS daemon have a '-p' command line option that can set the port, making firewall configuration easier. + + + + +After TCP wrappers grant access to the client, the NFS server refers to the /etc/exports configuration file to determine whether the client is allowed +to access any exported file systems. Once verified, all file and directory operations are available to the user. + + + + Important + + +In order for NFS to work with a default installation of Fedora with a firewall enabled, configure IPTables with the default TCP port 2049. Without proper IPTables configuration, NFS will not function properly. + + + +The NFS initialization script and rpc.nfsd process now allow binding to any specified port during system start up. However, this can be error-prone if the port is unavailable, or if it conflicts with another daemon. + + + +
+ Required Services + +NFS +required services + +sprabhu@redhat.com +renamed portmap to rpcbind + +Fedora uses a combination of kernel-level support and daemon processes to provide NFS file sharing. All NFS versions rely on Remote Procedure Calls (RPC) between clients and servers. RPC services under Fedora 13 are controlled by the rpcbind service. To share or mount NFS file systems, the following services work together, depending on which version of NFS is implemented: + + + + +The portmap service was used to map RPC program numbers to IP address port number combinations +in earlier versions of Fedora. This service is now replaced by rpcbind in +Fedora 13 to enable IPv6 support. For more information about this change, refer to the following +links: + + + +TI-RPC / rpcbind support: +IPv6 support in NFS: + + + + + +nfs + + +service nfs start starts the NFS server and the appropriate RPC processes to service requests for shared NFS file systems. + + + + +nfslock + + +service nfslock start activates a mandatory service that starts the appropriate RPC processes which allow NFS clients to lock files on the server. + + + + +rpcbind + +renamed portmap to rpcbind + +rpcbind accepts port reservations from local RPC services. These ports are then made available (or advertised) so the corresponding remote RPC services can access them. rpcbind responds to requests for RPC services and sets up connections to the requested RPC service. This is not used with NFSv4. + + + + + +The following RPC processes facilitate NFS services: + + + + +rpc.mountd + +sprabhu@redhat.com + +This process receives mount requests from NFS clients and verifies that the requested file system is currently exported. This process is started automatically by the +nfs service and does not require user configuration. + + + + + +rpc.nfsd + + +rpc.nfsd allows explicit NFS versions and protocols the server advertises to be defined. It works with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. This process +corresponds to the nfs service. + + + + +rpc.lockd + + +rpc.lockd allows NFS clients to lock files on the server. If rpc.lockd is not started, file locking will fail. rpc.lockd implements the Network Lock Manager (NLM) protocol. This process corresponds to the nfslock service. This is not used with NFSv4. + + + + +rpc.statd + + +This process implements the Network Status Monitor (NSM) RPC protocol, which notifies NFS clients when an NFS server is restarted without being gracefully brought down. rpc.statd is started automatically by the nfslock service, and does not require user configuration. This is not used with NFSv4. + + + + +rpc.rquotad + + +This process provides user quota information for remote users. rpc.rquotad is started automatically by the nfs service and does not require user configuration. + + + + +rpc.idmapd + + +rpc.idmapd provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4 names (which are strings in the form of user@domain) and local UIDs and GIDs. For +idmapd to function with NFSv4, the /etc/idmapd.conf must be configured. This service is required for use with NFSv4, although not when all hosts share the same DNS domain name. + + + + + + + +
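
To make sure these services also come up on every boot, in addition to starting them for the current session, invocations similar to the following can be used:

chkconfig rpcbind on
chkconfig nfs on
chkconfig nfslock on
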
+
+ + + + + + + +
+Common NFS Mount Options + + NFS + client + mount options + + +Beyond mounting a file system via NFS on a remote host, you can also specify other options at mount time to make the mounted share easier to use. +These options can be used with manual mount commands, + /etc/fstab settings, and autofs. + + + + The following are options commonly used for NFS mounts: + + + + + + + +intr + + +Allows NFS requests to be interrupted if the server goes down or cannot be reached. + + + + + +lookupcache=mode + +sprabhu@redhat.com + +Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid arguments for mode are all, none, or pos/positive. + + + + + +nfsvers=version + +sprabhu@redhat.com + +Specifies which version of the NFS protocol to use, where version is 2, 3, or 4.. This is useful for hosts that run multiple NFS servers. If no version is specified, NFS uses the highest version supported by the +kernel and mount command. + + + +The option vers is identical to nfsvers, and is included in this release for compatibility reasons. + + + + + +noacl + + +Turns off all ACL processing. This may be needed when interfacing with older versions of Fedora, &RHEL;, &RH; Linux, or Solaris, since the most recent ACL technology is not compatible with older systems. + + + + +nolock + + +Disables file locking. This setting is occasionally required when connecting to older NFS servers. + + + + +noexec + + +Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system containing incompatible binaries. + + + + +nosuid + + +Disables set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program. + + + + +port=num + + + — Specifies the numeric value of the NFS server port. If num is 0 (the default), then +mount queries the remote host's rpcbind service for the port number to use. If the remote host's NFS daemon is not registered with its rpcbind service, the standard NFS port number of TCP 2049 is used instead. + + + + +rsize=num and wsize=num + + +These settings speed up NFS communication for reads () and writes () by setting a larger data block size (num, in bytes), to be transferred at one +time. Be careful when changing these values; some older Linux kernels and network cards do not work well with larger block sizes. For NFSv2 or NFSv3, the default values for both parameters is set to 8192. For NFSv4, the default values for both +parameters is set to 32768. + + + + +sec=mode + + +Specifies the type of security to utilize when authenticating an NFS connection. +Its default setting is , which uses local UNIX UIDs and GIDs by using AUTH_SYS to authenticate NFS operations. + + + + uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users. + + + + uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering. + + + + uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most performance overhead. + + + + +tcp + + +Instructs the NFS mount to use the TCP protocol. + + + + +udp + + +Instructs the NFS mount to use the UDP protocol. + + + + + + +For a complete list of options and more detailed information on each one, refer to man mount and man nfs. For more information on using NFS via TCP or UDP protocols, refer to . + + +
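
To illustrate how these options are combined, a manual mount and the equivalent /etc/fstab entry might look like the following; the server name and paths are placeholders.

mount -t nfs -o nfsvers=3,rsize=32768,wsize=32768,nosuid server.example.com:/export/data /mnt/data

server.example.com:/export/data   /mnt/data   nfs   nfsvers=3,rsize=32768,wsize=32768,nosuid   0 0
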
+ + +
+Starting and Stopping NFS + + NFS + starting + + + NFS + stopping + + + NFS + status + + + NFS + restarting + + + NFS + reloading + + + NFS + condrestart + + + rpcbind + + status + + + To run an NFS server, the rpcbind service must be running. To verify that rpcbind is active, use the following command: + + +service rpcbind status + + + + +Using service command to start, stop, or restart a daemon requires root privileges. + + + + + + If the rpcbind service is running, then the nfs service can be started. To start an NFS server, use the following command as root: + + +service nfs start + + + + nfslock must also be started for both the NFS client and server to function properly. To start NFS locking, use the following command: + + + +service nfslock start + + + +If NFS is set to start at boot, ensure that nfslock also starts by running chkconfig --list nfslock. If nfslock is not set to on, this implies that you will need to manually run the service nfslock start each time the computer starts. To set nfslock to automatically start on boot, use chkconfig nfslock on. + +sprabhu@redhat.com + +nfslock is only needed for NFSv2 and NFSv3. + + + + + + To stop the server, use: + + +service nfs stop + + + The option is a shorthand way of stopping and then starting NFS. This is the most efficient way to make configuration changes take effect after editing the configuration file for NFS. + To restart the server, as root, type: + + +service nfs restart + + + The (conditional restart) option only starts nfs if it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running. + To conditionally restart the server, as root, type: + + +service nfs condrestart + + + To reload the NFS server configuration file without restarting the service, as root, type: + + +service nfs reload + + + +
+ + + + + + + +
+Securing NFS

 NFS
 security


 NFS is well-suited for sharing entire file systems with a large number of known hosts in a transparent manner. However, with ease-of-use comes a variety of potential security problems. Consider the following sections when exporting NFS file systems on a server or mounting them on a client. Doing so minimizes NFS security risks and better protects data on the server.

+Host Access in NFSv2 or NFSv3

 NFS
 security
 NFSv2/NFSv3 host access


NFS controls who can mount an exported file system based on the host making the mount request, not the user that actually uses the file system. Hosts must be given explicit rights to mount the exported file system. Access control is not possible for users, other than through file and directory permissions. In other words, once a file system is exported via NFS, any user on any remote host connected to the NFS server can access the shared data. To limit the potential risks, administrators often allow read-only access or squash user permissions to a common user and group ID. Unfortunately, these solutions prevent the NFS share from being used in the way it was originally intended.


Additionally, if an attacker gains control of the DNS server used by the system exporting the NFS file system, the system associated with a particular hostname or fully qualified domain name can be pointed to an unauthorized machine. At this point, the unauthorized machine is the system permitted to mount the NFS share, since no username or password information is exchanged to provide additional security for the NFS mount.


Wildcards should be used sparingly when exporting directories via NFS, as it is possible for the scope of the wildcard to encompass more systems than intended.

renamed portmap to rpcbind

You can also restrict access to the rpcbind service via TCP wrappers. Creating rules with iptables can also limit access to ports used by rpcbind, rpc.mountd, and rpc.nfsd, as sketched below.

renamed portmap to rpcbind

 For more information on securing NFS and rpcbind, refer to man iptables.
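
The following is a sketch of such restrictions. The 192.168.0.0/24 network is an example, and the iptables rules assume no earlier rule in the INPUT chain already matches this traffic.

# /etc/hosts.allow
rpcbind: 192.168.0.0/255.255.255.0

# /etc/hosts.deny
rpcbind: ALL

# iptables: only allow the example network to reach rpcbind (port 111)
iptables -A INPUT -p tcp --dport 111 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 111 -j DROP
iptables -A INPUT -p udp --dport 111 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j DROP
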
+ +
+Host Access in NFSv4 + + NFS + security + NFSv4 host access + + + The release of NFSv4 brought a revolution to authentication and security to NFS exports. NFSv4 mandates the implementation of the RPCSEC_GSS kernel module, the Kerberos version 5 GSS-API mechanism, SPKM-3, and LIPKEY. With NFSv4, the mandatory security mechanisms are oriented towards authenticating individual users, and not client machines as used in NFSv2 and NFSv3. As such, for security reasons, it is recommended that you choose NFSv4 over other versions whenever possible. + + + + +It is assumed that a Kerberos ticket-granting server (KDC) is installed and configured correctly, prior to configuring an NFSv4 server. Kerberos is a network authentication system which allows clients and servers to authenticate to each other through use of symmetric encryption and a trusted third party, the KDC. + + + + +NFSv4 includes ACL support based on the Microsoft Windows NT model, not the POSIX model, because of the former's features and wide deployment. NFSv2 and NFSv3 do not have support for native ACL attributes. + + Another important security feature of NFSv4 is the removal of the use of the MOUNT protocol for mounting file systems. This protocol presented possible security holes because of the way that it processed file handles. + + + + + For more information on the RPCSEC_GSS framework, including how rpc.svcgssd and rpc.gssd inter-operate, refer to + http://www.citi.umich.edu/projects/nfsv4/gssd/. + +
+ + +
+ File Permissions

NFS
security
file permissions


Once the NFS file system is mounted read/write by a remote host, the only protection each shared file has is its permissions. If two users that share the same user ID value mount the same NFS file system, they can modify each other's files. Additionally, anyone logged in as root on the client system can use the su - command to access any files via the NFS share.


By default, access control lists (ACLs) are supported by NFS under Fedora. It is recommended that you keep this feature enabled.


By default, NFS uses root squashing when exporting a file system. This sets the user ID of anyone accessing the NFS share as the root user on their local machine to nobody. Root squashing is controlled by the default option root_squash; for more information about this option, refer to . If possible, never disable root squashing.


When exporting an NFS share as read-only, consider using the option. This option makes every user accessing the exported file system take the user ID of the nfsnobody user.
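
For instance, an /etc/exports line that exports a directory read-only and maps every remote user to the anonymous account could look like the following; the directory and network are placeholders.

/export/shared   192.168.0.0/24(ro,sync,root_squash,all_squash)
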
+
+ +
+<remark>[renamed <command>portmap</command> to <command>rpcbind</command>] </remark>NFS and <command moreinfo="none">rpcbind</command> + + NFS + rpcbind + + + + rpcbind + + NFS + +renamed portmap to rpcbind + + Note + + +The following section only applies to NFSv2 or NFSv3 implementations that require the rpcbind service for backward compatibility. + + + + + The rpcbind utility maps RPC services to the ports on which they listen. RPC processes notify rpcbind when they start, registering the ports they are listening on and the RPC program numbers they expect to serve. The client system then contacts rpcbind on the server with a particular RPC program number. The rpcbind service redirects the client to the proper port number so it can communicate with the requested service. + + + + Because RPC-based services rely on rpcbind to make all connections with incoming client requests, rpcbind must be available before any of these services start. + + + + The rpcbind service uses TCP wrappers for access control, and access control rules for rpcbind affect all RPC-based services. Alternatively, it is possible to + specify access control rules for each of the NFS RPC daemons. The man pages for rpc.mountd and rpc.statd contain information regarding the precise syntax for these rules. + + +
+ <remark>[renamed <command>portmap</command> to <command>rpcbind</command>] </remark>Troubleshooting NFS and <command moreinfo="none">rpcbind</command> + +NFS +troubleshooting NFS and rpcbind + + + +troubleshooting NFS and rpcbind +NFS + + + + + +rpcbind + +rpcinfo + + + +rpcinfo + + + +rpcbind + +NFS + +renamed portmap to rpcbind + +Because rpcbind provides coordination between RPC services and the port numbers used to communicate with them, it is useful to view the status of current RPC services using rpcbind +when troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC program number, a version number, and an IP protocol type (TCP or UDP). + + + +To make sure the proper NFS RPC-based services are enabled for rpcbind, issue the following command as root: + + +rpcinfo -p + + +The following is sample output from this command: + + + +program vers proto port +100021 1 udp 32774 nlockmgr +100021 3 udp 32774 nlockmgr +100021 4 udp 32774 nlockmgr +100021 1 tcp 34437 nlockmgr +100021 3 tcp 34437 nlockmgr +100021 4 tcp 34437 nlockmgr +100011 1 udp 819 rquotad +100011 2 udp 819 rquotad +100011 1 tcp 822 rquotad +100011 2 tcp 822 rquotad +100003 2 udp 2049 nfs +100003 3 udp 2049 nfs +100003 2 tcp 2049 nfs +100003 3 tcp 2049 nfs +100005 1 udp 836 mountd +100005 1 tcp 839 mountd +100005 2 udp 836 mountd +100005 2 tcp 839 mountd +100005 3 udp 836 mountd +100005 3 tcp 839 mountd + +If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from clients for that service to the +correct port. In many cases, if NFS is not present in rpcinfo output, restarting NFS causes the service to correctly register with rpcbind and begin working. For instructions on +starting NFS, refer to . + + + +For more information and a list of options on rpcinfo, refer to its man page. + + + + +
+
+
+Using NFS over TCP + + NFS + over TCP + + + + +NFS +TCP, using NFS over + + + +TCP, using NFS over +NFS + + + +The default transport protocol for NFS is TCP; however, the Fedora kernel includes support for NFS over UDP. To use NFS over UDP, include the mount option -o udp when mounting the NFS-exported file system on the client system. Note that NFSv4 on UDP is not +standards-compliant, since UDP does not feature congestion control; as such, NFSv4 on UDP is not supported. + + + + + There are three ways to configure an NFS file system export: + + + +On demand via the command line (client side) +Automatically via the /etc/fstab file (client side) +Automatically via autofs configuration files, such as /etc/auto.master and /etc/auto.misc (server side with NIS) + + + + + For example, on demand via the command line (client side): + + +mount -o udp shadowman.example.com:/misc/export /misc/local + + + When the NFS mount is specified in /etc/fstab (client side): + + +server:/usr/local/pub /pub nfs rsize=8192,wsize=8192,timeo=14,intr,udp + + + When the NFS mount is specified in an autofs configuration file for a NIS server, available for NIS enabled workstations: + + +myproject -rw,soft,intr,rsize=8192,wsize=8192,udp penguin.example.net:/proj52 + + + Since the default is TCP, if the option is not specified, the NFS-exported file system is accessed via TCP. + + + + The advantages of using TCP include the following: + + + + + + +UDP only acknowledges packet completion, while TCP acknowledges every packet. This results in a performance gain on heavily-loaded networks that use TCP +when mounting shares. + + + + + + TCP has better congestion control than UDP. On a very congested network, UDP packets are the first packets that are dropped. This means that if NFS is writing data (in 8K chunks) all of that 8K must be retransmitted over UDP. Because of TCP's reliability, only parts of that 8K data are transmitted at a time. + + + + + + TCP also has better error detection. When a TCP connection breaks (due to the server being unavailable) the client stops sending data and restarts the connection process once the server becomes available. since UDP is connectionless, the client continues to pound the network with data until the server re-establishes a connection. + + + + + + The main disadvantage with TCP is that there is a very small performance hit due to the overhead associated with the protocol. + +
+ +
+References + + NFS + additional resources + + + Administering an NFS server can be a challenge. Many options, including quite a few not mentioned in this chapter, are available for exporting or mounting NFS shares. Consult the following sources for more information. + + + + Installed Documentation + +NFS +additional resources +installed documentation + + + + +/usr/share/doc/nfs-utils-version/ — This +directory contains a wealth of information about the NFS implementation for Linux, including a look at various NFS configurations and their impact on file transfer performance. + + + + + +man mount — Contains a comprehensive look at mount options for both NFS server and client configurations. + + + + + +man fstab — Gives details for the format of the /etc/fstab file used to mount file systems at boot-time. + + + + + +man nfs — Provides details on NFS-specific file system export and mount options. + + + + + +man exports — Shows common options used in the /etc/exports file when exporting NFS file systems. + + + + + + + Useful Websites + +NFS +additional resources +useful websites + + + + +http://nfs.sourceforge.net/ — The home of the Linux NFS project and a great place for project status updates. + + + + + +http://www.citi.umich.edu/projects/nfsv4/linux/ — An NFSv4 for Linux 2.6 kernel resource. + + + + + +http://www.nfsv4.org — The home of NFS version 4 and all related standards. + + + + + +http://www.vanemery.com/Linux/NFSv4/NFSv4-no-rpcsec.html — Describes the details of NFSv4 with Fedora Core 2, which includes the 2.6 kernel. + + + + + +http://www.nluug.nl/events/sane2000/papers/pawlowski.pdf — An excellent whitepaper on the features and enhancements of the NFS Version 4 protocol. + + + + + +http://wiki.autofs.net — The Autofs wiki, discussions, documentation and enhancements. + + + + + + + + Related Books + +NFS +additional resources +related books + + + + +Managing NFS and NIS by Hal Stern, Mike Eisler, and Ricardo Labiaga; O'Reilly & Associates — Makes an excellent reference guide for the many different NFS export and mount options available. + + + + + +NFS Illustrated by Brent Callaghan; Addison-Wesley Publishing Company — Provides comparisons of NFS to other network file systems and shows, in detail, how NFS communication occurs. + + + + + + + +
+
diff --git a/en-US/DG_NFS-adapting-autofs4-5.xml b/en-US/DG_NFS-adapting-autofs4-5.xml new file mode 100644 index 0000000..e11b9e0 --- /dev/null +++ b/en-US/DG_NFS-adapting-autofs4-5.xml @@ -0,0 +1,86 @@ + + + +
+ Adapting Autofs v4 Maps To Autofs v5 + + NFS + autofs + + adapting + + + v4 Multi-map entries + + Autofs version 4 introduced the notion of a multi-map entry in the master + map. A multi-map entry is of the form: + + +<mount-point> <maptype1> <mapname1> <options1> -- <maptype2> <mapname2> <options2> -- ... + + + + + Any number of maps can be combined into a single map in this manner. This + feature is no longer present in v5. This is because Version 5 supports + included maps which can be used to attain the same results. Consider the following multi-map example: /home file /etc/auto.home -- nis auto.home + + + This can be replaced by the following configuration for v5: + + + /etc/nsswitch.conf must list: +automount: files nis + + + /etc/auto.master should contain: +/home auto.home + + + /etc/auto.home should contain: +<entries for the home directory> ++auto.home + + + In this way, the entries from /etc/auto.home and the nis auto.home map are combined. + + + + + Multiple master maps + + In autofs version 4, it is possible to merge the contents of master maps + from each source, such as files, nis, hesiod, and LDAP. The version 4 + automounter looks for a master map for each of the sources listed in + /etc/nsswitch.conf. The map is read if it exists and its contents are merged into one large auto.master map. + + + + + In version 5, this is no longer the behaviour. Only the first master map found from the list of sources in nsswitch.conf is consulted. If it is desirable to merge the contents of multiple master maps, included maps can be used. Consider the following example: + + +/etc/nsswitch.conf: +automount: files nis + + +/etc/auto.master: +/home /etc/auto.home ++auto.master + + + + The above configuration will merge the contents of the file-based + auto.master and the NIS-based auto.master. However, because included map entries are only allowed in file maps, there is no way to include both an + NIS auto.master and an LDAP auto.master. + + + This limitation can be overcome by creating a master maps that have a + different name in the source. In the example above if we had an LDAP + master map named auto.master.ldap we could also add "+auto.master.ldap" to the file based master map and provided that "ldap" is listed as a source in our nsswitch configuration it would also be included. + + + + +
diff --git a/en-US/DG_NFS-autofs.xml b/en-US/DG_NFS-autofs.xml new file mode 100644 index 0000000..6ef0c28 --- /dev/null +++ b/en-US/DG_NFS-autofs.xml @@ -0,0 +1,542 @@ + + + +
+<command moreinfo="none">autofs</command> + + NFS + client + autofs + + + + autofs + + NFS + + +One drawback to using /etc/fstab is that, regardless of how infrequently a user accesses the NFS mounted file system, the system must dedicate resources to keep the mounted file system in place. This is not a problem with one or two mounts, but when the system is maintaining mounts to many systems at one time, overall system performance can be affected. An alternative to /etc/fstab is to use the kernel-based automount utility. An automounter consists of two components: + + + +a kernel module that implements a file system +a user-space daemon that performs all of the other functions + + + +The automount utility can mount and unmount NFS file systems automatically (on-demand mounting), therefore saving system resources. It can be used to mount other file systems including AFS, SMBFS, CIFS, and local file systems. + + + + + + +autofs uses /etc/auto.master (master map) as its default primary configuration file. This can be changed to use another supported network source and name using the autofs configuration (in /etc/sysconfig/autofs) in conjunction with the Name Service Switch (NSS) mechanism. An instance of the autofs version 4 daemon was run for each mount point configured in the master map and so it could be run manually from the command line for any given mount point. This is not possible with autofs version 5, because it uses a single daemon to manage all configured mount points; as such, all automounts must be configured in the master map. This is in line with the usual requirements of other industry standard automounters. Mount point, hostname, exported directory, and options can all be specified in a set of files (or other supported network sources) rather than configuring them manually for each host. + + + + + + +
+ +Improvements in autofs Version 5 over Version 4 + +version +what is new +autofs + + + +NFS +autofs version 5 + + + +autofs version 5 +NFS + + + + +sprabhu@redhat.com: revised title + + +autofs version 5 features the following enhancements over version 4: + + + + + + + Direct map support + + + + +NFS +direct map support (autofs version 5) + + + +direct map support (autofs version 5) +NFS + + + + + +Direct maps in autofs provide a mechanism to automatically mount file systems at arbitrary points in the file system hierarchy. A direct map is denoted by a mount point of /- in the master map. Entries in a direct map contain an absolute path name as a key (instead of the relative path names used in indirect maps). + + + + + + Lazy mount and unmount support + + + + +NFS +lazy mount/unmount support (autofs version 5) + + + +lazy mount/unmount support (autofs version 5) +NFS + + + +Multi-mount map entries describe a hierarchy of mount points under a single key. A good example of this is the -hosts map, commonly used for automounting all exports from a host under "/net/host" as a multi-mount map entry. When using the "-hosts" map, an 'ls' of "/net/host" will mount autofs trigger mounts for each export from host and mount and expire them as they are accessed. This can greatly reduce the number of active mounts needed when accessing a server with a large number of exports. + + + + + + Enhanced LDAP support + + + + +NFS +enhanced LDAP support (autofs version 5) + + + +enhanced LDAP support (autofs version 5) +NFS + + + +The Lightweight Directory Access Protocol (LDAP) support in autofs version 5 has been enhanced in several ways with respect to autofs version 4. The autofs configuration file (/etc/sysconfig/autofs) provides a mechanism to specify the autofs schema that a site implements, thus precluding the need to determine this via trial and error in the application itself. In addition, authenticated binds to the LDAP server are now supported, using most mechanisms supported by the common LDAP server implementations. A new configuration file has been added for this support: /etc/autofs_ldap_auth.conf. The default configuration file is self-documenting, and uses an XML format. + + + + + + Proper use of the Name Service Switch (nsswitch) configuration. + + + + +NFS +proper nsswitch configuration (autofs version 5), use of + + + +proper nsswitch configuration (autofs version 5), use of +NFS + + + +The Name Service Switch configuration file exists to provide a means of determining from where specific configuration data comes. The reason for this configuration is to allow administrators the flexibility of using the back-end database of choice, while maintaining a uniform software interface to access the data. While the version 4 automounter is becoming increasingly better at handling the NSS configuration, it is still not complete. Autofs version 5, on the other hand, is a complete implementation. + + + +Refer to man nsswitch.conf for more information on the supported syntax of this file. Please note that not all NSS databases are valid map sources and the parser will reject ones that are invalid. Valid sources are files, yp, nis, nisplus, ldap, and hesiod. 
+ + + + + + Multiple master map entries per autofs mount point + + + + +NFS +multiple master map entries per autofs mount point (autofs version 5) + + + +multiple master map entries per autofs mount point (autofs version 5) +NFS + + + +One thing that is frequently used but not yet mentioned is the handling of multiple master map entries for the direct mount point /-. The map keys for each entry are merged and behave as one map. + + +An example is seen in the connectathon test maps for the direct mounts below: + +/- /tmp/auto_dcthon +/- /tmp/auto_test3_direct +/- /tmp/auto_test4_direct + + + +   + +
+ + + +
+ <command moreinfo="none">autofs</command> Configuration + +NFS +autofs + +configuration + + +autofs + + + + + + + +NFS +overriding/augmenting site configuration files (autofs) + + + +overriding/augmenting site configuration files (autofs) +NFS + + + +The primary configuration file for the automounter is /etc/auto.master, also referred to as the master map which may be changed as described in the . The master map lists autofs-controlled mount points on the system, and their corresponding configuration files or network sources known as automount maps. The format of the master map is as follows: + + +mount-point map-name options + + +The variables used in this format are: + + + +mount-point + +The autofs mount point e.g /home. + + + +map-name + +The name of a map source which contains a list of mount points, and the file system location from which those mount points should be mounted. The syntax for a map entry is described below. + + + +options + +If supplied, these will apply to all entries in the given map provided they don't themselves have options specified. This behavior is different from autofs version 4 where options where cumulative. This has been changed to implement mixed environment compatibility. + + + + + + +The following is a sample line from /etc/auto.master file (displayed with cat /etc/auto.master): + +/home /etc/auto.misc + + +The general format of maps is similar to the master map, however the "options" appear between the mount point and the location instead of at the end of the entry as in the master map: + +mount-point [options] location + + The variables used in this format are: + + + + +mount-point + + This refers to the autofs mount point. This can be a single directory name for an indirect mount or the full path of the mount point for direct mounts. Each direct and indirect map entry key (mount-point above) may be followed by a space separated list of offset directories (sub directory names each beginning with a "/") making them what is known as a mutli-mount entry. + + + +options + + Whenever supplied, these are the mount options for the map entries that do not specify their own options. + + + + +location + + This refers to the file system location such as a local file system path (preceded with the Sun map format escape character ":" for map names beginning with "/"), an NFS file system or other valid file system location. + + + + + The following is a sample of contents from a map file (i.e. /etc/auto.misc): + +payroll -fstype=nfs personnel:/dev/hda3 +sales -fstype=ext3 :/dev/hda4 + + The first column in a map file indicates the autofs mount point (sales and payroll from the server called personnel). The second column indicates the options for the autofs mount while the third column indicates the source of the mount. Following the above configuration, the autofs mount points will be /home/payroll and /home/sales. The -fstype= option is often omitted and is generally not needed for correct operation. + + + + +The automounter will create the directories if they do not exist. If the directories exist before the automounter was started, the automounter will not remove them when it exits. You can start or restart the automount daemon by issuing either of the following two commands: + + + +service autofs start + + + +service autofs restart + + + +Using the above configuration, if a process requires access to an autofs unmounted directory such as /home/payroll/2006/July.sxc, the automount daemon automatically mounts the directory. 
If a timeout is specified, the directory is automatically unmounted after it has not been accessed for the timeout period.
+
+
+
+You can view the status of the automount daemon by issuing the following command:
+
+
+
+service autofs status
+
+
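+
+As a quick check of the sample configuration above (assuming the example /etc/auto.master and /etc/auto.misc maps shown earlier, and that the host personnel is reachable), simply accessing a path under the mount point is enough to trigger the automount:
+
+ls /home/sales        # first access triggers the automount of :/dev/hda4
+mount | grep /home    # confirm that the sales and payroll mounts now appear
+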
+ + + + +
+Overriding or Augmenting Site Configuration Files + + NFS + autofs + + augmenting + + + + + +NFS +storing automounter maps, using LDAP to store (autofs) + + + +storing automounter maps, using LDAP to store (autofs) +NFS + + + + +It can be useful to override site defaults for a specific mount point on a client system. For example, +consider the following conditions: + + + + +Automounter maps are stored in NIS and the /etc/nsswitch.conf file has the following directive: + + +automount: files nis + + +The auto.master file contains the following + + ++auto.master + + +The NIS auto.master map file contains the following: + + +/home auto.home + + + +The NIS auto.home map contains the following: + + +beth fileserver.example.com:/export/home/beth +joe fileserver.example.com:/export/home/joe +* fileserver.example.com:/export/home/& + +The file map /etc/auto.home does not exist. + + + + + +Given these conditions, let's assume that the client system needs to override the NIS map auto.home and mount home directories from a different server. In this case, the client will need to use the following /etc/auto.master map: + + +/home ­/etc/auto.home ++auto.master + + +And the /etc/auto.home map contains the entry: + + +* labserver.example.com:/export/home/& + +sprabhu@redhat.com + +Because the automounter only processes the first occurrence of a mount point, /home will contain the contents of /etc/auto.home instead of the NIS auto.home map. + + + + + +Alternatively, if you just want to augment the site-wide auto.home map with a few entries, create a /etc/auto.home file map, and in it put your new entries and at the end, include the NIS auto.home map. Then the /etc/auto.home file map might look similar to: + + +mydir someserver:/export/mydir ++auto.home + + +Given the NIS auto.home map listed above, ls /home would now output: + + +beth joe mydir + + +This last example works as expected because autofs knows not to include the contents of a file map of the same name as the one it is reading. As such, autofs moves on to the next map source in the nsswitch configuration. + + +
+ +
+Using LDAP to Store Automounter Maps + + NFS + autofs + + LDAP + + +sprabhu@redhat.com + + + LDAP client libraries must be installed on all systems configured to retrieve automounter maps from LDAP. In Fedora, the openldap package should be installed automatically as a dependency of the automounter. To configure LDAP access, modify /etc/openldap/ldap.conf. Ensure that BASE, URI, and schema are set appropriately for your site. + + + + + + +NFS +rfc2307bis (autofs) + + + +rfc2307bis (autofs) +NFS + + + + The most recently established schema for storing automount maps in LDAP is + described by rfc2307bis. To use this schema it is necessary to set it in the autofs configuration (/etc/sysconfig/autofs) by removing the comment characters from the schema definition. For example: + + +DEFAULT_MAP_OBJECT_CLASS="automountMap" +DEFAULT_ENTRY_OBJECT_CLASS="automount" +DEFAULT_MAP_ATTRIBUTE="automountMapName" +DEFAULT_ENTRY_ATTRIBUTE="automountKey" +DEFAULT_VALUE_ATTRIBUTE="automountInformation" + + Ensure that these are the only schema entries not commented in the configuration. Note that the automountKey replaces the cn attribute in the rfc2307bis schema. An LDIF of a sample configuration is described below: + + + + +# extended LDIF +# +# LDAPv3 +# base <> with scope subtree +# filter: (&(objectclass=automountMap)(automountMapName=auto.master)) +# requesting: ALL +# + +# auto.master, example.com +dn: automountMapName=auto.master,dc=example,dc=com +objectClass: top +objectClass: automountMap +automountMapName: auto.master + +# extended LDIF +# +# LDAPv3 +# base <automountMapName=auto.master,dc=example,dc=com> with scope subtree +# filter: (objectclass=automount) +# requesting: ALL +# + +# /home, auto.master, example.com +dn: automountMapName=auto.master,dc=example,dc=com +objectClass: automount +cn: /home + +automountKey: /home +automountInformation: auto.home + +# extended LDIF +# +# LDAPv3 +# base <> with scope subtree +# filter: (&(objectclass=automountMap)(automountMapName=auto.home)) +# requesting: ALL +# + +# auto.home, example.com +dn: automountMapName=auto.home,dc=example,dc=com +objectClass: automountMap +automountMapName: auto.home + +# extended LDIF +# +# LDAPv3 +# base <automountMapName=auto.home,dc=example,dc=com> with scope subtree +# filter: (objectclass=automount) +# requesting: ALL +# + +# foo, auto.home, example.com +dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com +objectClass: automount +automountKey: foo +automountInformation: filer.example.com:/export/foo + +# /, auto.home, example.com +dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com +objectClass: automount +automountKey: / +automountInformation: filer.example.com:/export/& + + + + + + + +
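+
+Before pointing autofs at the directory server, it can help to verify the entries with a plain LDAP search. This is a minimal sketch that assumes the example.com base DN from the LDIF above and a server that permits anonymous binds:
+
+ldapsearch -x -b "dc=example,dc=com" "(objectClass=automountMap)" automountMapName
+ldapsearch -x -b "automountMapName=auto.home,dc=example,dc=com" "(objectClass=automount)"
+
+The directory entries are only consulted if the automount service in /etc/nsswitch.conf lists ldap as a map source, for example: automount: files ldap
+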
+ + + + +
diff --git a/en-US/DG_NFS-clientconfig.xml b/en-US/DG_NFS-clientconfig.xml new file mode 100644 index 0000000..a2ead12 --- /dev/null +++ b/en-US/DG_NFS-clientconfig.xml @@ -0,0 +1,209 @@ + + + +
+NFS Client Configuration + + NFS + client + configuration + + + +NFS +mount (client configuration) + + + +mount (client configuration) +NFS + + + +The mount command mounts NFS shares on the client side. Its format is as follows: + + + + +sprabhu@redhat.com: i replaced host with server and edited the definition to be consistent with +, is this still correct? + +mount -t nfs -o options host:/remote/export /local/directory + + + +This command uses the following variables: + + + + +NFS +options (client configuration, mounting) + + + +options (client configuration, mounting) +NFS + + + + + + +NFS +server (client configuration, mounting) + + + +server (client configuration, mounting) +NFS + + + + + + +NFS +/remote/export (client configuration, mounting) + + + +/remote/export (client configuration, mounting) +NFS + + + + + + +NFS +/local/directory (client configuration, mounting) + + + +/local/directory (client configuration, mounting) +NFS + + + + + +options + + +A comma-delimited list of mount options; refer to for details on valid NFS mount options. + + + + + +server + + +The hostname, IP address, or fully qualified domain name of the server exporting the file system you wish to mount + + + + + +/remote/export + + +The file system / directory being exported from server, i.e. the directory you wish to mount + + + + + + +/local/directory + + +The client location where /remote/export should be mounted + + + + + + + + +The NFS protocol version used in Fedora 13 is identified by the mount options nfsvers or vers. By default, mount will use NFSv4 with mount -t nfs. If the server does not support NFSv4, the client will automatically step down to a version supported by the server. If you use the nfsvers/vers option to pass a particular version not supported by the server, the mount will fail. The file system type nfs4 is also available for legacy reasons; this is equivalent to running mount -t nfs -o nfsvers=4 host:/remote/export /local/directory. + + + + + + + Refer to man mount for more details. + + + +If an NFS share was mounted manually, the share will not be automatically mounted upon reboot. Fedora offers two methods for mounting remote file systems +automatically at boot time: the /etc/fstab file and the autofs service. Refer to +and for more information. + + + + + + +
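+
+For example, with the variables filled in (the server name, export path, and mount point here are hypothetical), the command might look like the following:
+
+mkdir -p /mnt/home
+mount -t nfs -o rw,hard,intr myserver.example.com:/export/home /mnt/home
+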
+ Mounting NFS File Systems using <filename moreinfo="none">/etc/fstab</filename> + + + + /etc/fstab + + + + NFS + /etc/fstab + + +An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file. The line must state the hostname of the NFS server, the directory on the server being exported, and the directory +on the local machine where the NFS share is to be mounted. You must be root to modify the /etc/fstab file. + + + +The general syntax for the line in /etc/fstab is as follows: + +sprabhu: i don't see an errant hyphen in this command? + +server:/usr/local/pub /pub nfs rsize=8192,wsize=8192,timeo=14,intr + +The mount point /pub must exist on the client machine before this command can be executed. After adding this line to /etc/fstab on the client system, use the command +mount /pub, and the mount point /pub is mounted from the server. + + + +The /etc/fstab file is referenced by the netfs service at boot time, so lines referencing NFS shares have the same effect as manually typing the +mount command during the boot process. + +sprabhu@redhat.com + +A valid /etc/fstab entry to mount an NFS export should contain the following information: + +server:/remote/export /local/directory nfs options 0 0 + + +The variables server, /remote/export, /local/directory, and options are the same ones used when manually mounting an NFS share. Refer to for a definition of each variable. + + + + +The mount point /local/directory must exist on the client before /etc/fstab is read. Otherwise, the mount will fail. + + + + + +For more information about /etc/fstab, refer to man fstab. + + + + + +
+
diff --git a/en-US/DG_NFS-revising-site-conf-files.xml b/en-US/DG_NFS-revising-site-conf-files.xml new file mode 100644 index 0000000..75cd6c1 --- /dev/null +++ b/en-US/DG_NFS-revising-site-conf-files.xml @@ -0,0 +1,104 @@ + + + +
+ Overriding or augmenting site configuration files + + NFS + autofs + + augmenting + + +It can be useful to override site defaults for a specific mount point on a client system. For example, +consider the following conditions: + + + + +Automounter maps are stored in NIS and the /etc/nsswitch.conf file has the following directive: + + + +automount: files nis + + + +The auto.master file contains the following + + + ++auto.master + + + +The NIS auto.master map file contains the following: + + + +/home auto.home + + + + +The NIS auto.home map contains the following: + + + +beth fileserver.example.com:/export/home/beth +joe fileserver.example.com:/export/home/joe +* fileserver.example.com:/export/home/& + + +The file map /etc/auto.home does not exist. + + + + + +Given these conditions, let's assume that the client system needs to override the NIS map auto.home and mount home directories from a different server. In this case, the client will need to use the following /etc/auto.master map: + + + +/home ­/etc/auto.home ++auto.master + + + +And the /etc/auto.home map contains the entry: + + + +* labserver.example.com:/export/home/& + + +sprabhu@redhat.com + +Because the automounter only processes the first occurrence of a mount point, /home will contain the contents of /etc/auto.home instead of the NIS auto.home map. + + + + + +Alternatively, if you just want to augment the site-wide auto.home map with a few entries, create a /etc/auto.home file map, and in it put your new entries and at the end, include the NIS auto.home map. Then the /etc/auto.home file map might look similar to: + + + +mydir someserver:/export/mydir ++auto.home + + + +Given the NIS auto.home map listed above, ls /home would now output: + + + +beth joe mydir + + + +This last example works as expected because autofs knows not to include the contents of a file map of the same name as the one it is reading. As such, autofs moves on to the next map source in the nsswitch configuration. + + +
diff --git a/en-US/DG_NFS-serverconfig-file.xml b/en-US/DG_NFS-serverconfig-file.xml new file mode 100644 index 0000000..5e0d417 --- /dev/null +++ b/en-US/DG_NFS-serverconfig-file.xml @@ -0,0 +1,204 @@ + + + +
+ <remark>[DUPLICATE] </remark>The <filename moreinfo="none">/etc/exports</filename> Configuration File + + NFS + server configuration + /etc/exports + + + + The /etc/exports file controls which file systems are exported to remote hosts and specifies options. It follows the following syntax rules: + + + +Blank lines are ignored. +To add a comment, start a line with the hash mark (#). +You can wrap long lines with a backslash (\). +Each exported file system should be on its own individual line. +Any lists of authorized hosts placed after an exported file system must be separated by space characters. +Options for each of the hosts must be placed in parentheses directly after the host identifier, without any spaces separating the host and the first parenthesis. + + + + +Each entry for an exported file system has the following structure: + + +export host(options) + + +The aforementioned structure uses the following variables: + + + + + +export + + +The directory being exported + + + + + +host + + +The host or network to which the export is being shared + + + + + +options + + +The options to be used for host + + + + + + +You can specify multiple hosts, along with specific options for each host. To do so, list them on the same line as a space-delimited list, with each hostname followed by +its respective options (in parentheses), as in: + + +export host1(options1) host2(options2) host3(options3) + + +For information on different methods for specifying hostnames, refer to . + + + + + + +In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted to access it, as in the following example: + + + +/exported/directory bob.example.com + + +Here, bob.example.com can mount /exported/directory/ +from the NFS server. Because no options are specified in this example, NFS will use default settings, which are: + + + + + +ro + + +The exported file system is read-only. Remote hosts cannot change the data shared on the +file system. To allow hosts to make changes to the file system (i.e. read/write), specify the +option. + + + + + +sync + +sprabhu@redhat.com + +The NFS server will not reply to requests before changes made by previous requests are +written to disk. To enable asynchronous writes instead, specify the option . + + + + + +wdelay + +sprabhu@redhat.com + +The NFS server will delay writing to the disk if it suspects another write request is imminent. +This can improve performance as it reduces the number of times the disk must be accesses by +separate write commands, thereby reducing write overhead. To disable this, specify the +; note that is only available if the +default option is also specified. + + + + + +root_squash + +sprabhu@redhat.com + +This prevents root users connected remotely from having root privileges; instead, the NFS server +will assign them the user ID nfsnobody. This effectively "squashes" the power of the +remote root user to the lowest local user, preventing possible unauthorized writes on the remote server. To disable +root squashing, specify . + + + + + + +To squash every remote user (including root), use . To specify the user and group IDs +that the NFS server should assign to remote users from a particular host, use the +and options, respectively, as in: + + +export host(anonuid=uid,anongid=gid) + + +Here, uid and gid are user ID number and group ID number, respectively. +The and options allow you to create a special user/group account for +remote NFS users to share. 
+
+
+ Important
+
+
+ By default, access control lists (ACLs) are supported by NFS. To disable this feature, specify the no_acl option when exporting the file
+ system.
+
+
+
+ Each default for every exported file system must be explicitly overridden. For example, if the option is not specified, then the exported file system is shared as read-only. The following is a sample line from
+ /etc/exports which overrides two default options:
+
+/another/exported/directory 192.168.0.3(rw,async)
+
+
+In this example 192.168.0.3 can mount /another/exported/directory/ read/write and all writes to disk are
+asynchronous. For more information on export options, refer to man exports.
+
+
+Additionally, other options are available for which no default value is specified. These include the ability to disable sub-tree checking, allow access from insecure ports, and allow insecure file locks (necessary for certain early NFS client implementations). Refer to man exports for details on these less-used options.
+
+
+ Warning
+
+
+ The format of the /etc/exports file is very precise, particularly with regard to the use of the space character. Remember to always separate exported file systems from hosts, and hosts from one another, with a space
+ character. However, there should be no other space characters in the file except on comment lines.
+
+
+ For example, the following two lines do not mean the same thing:
+
+/home bob.example.com(rw)
+/home bob.example.com (rw)
+
+
+The first line grants only users from bob.example.com read/write access to the /home directory. The second line allows users from bob.example.com to mount the directory as read-only (the default), while the rest of the world can mount it read/write.
+
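+
+As a combined illustration of the options described above, using hypothetical hosts and paths, an /etc/exports file might look like the following; after editing it, refresh the export table with exportfs -r (or service nfs reload):
+
+# one exported file system per line; no space before the opening parenthesis
+/srv/projects   client1.example.com(rw,sync) client2.example.com(ro,sync)
+/srv/public     192.168.0.0/24(ro,sync,all_squash,anonuid=65534,anongid=65534)
+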
diff --git a/en-US/DG_NFS-serverconfig-file2.xml b/en-US/DG_NFS-serverconfig-file2.xml new file mode 100644 index 0000000..46e9f6b --- /dev/null +++ b/en-US/DG_NFS-serverconfig-file2.xml @@ -0,0 +1,65 @@ + + + +
+ <remark>[DUPLICATE] </remark>The <filename moreinfo="none">/etc/exports</filename> Configuration File + + NFS + command line configuration + + + + The /etc/exports file controls what directories the NFS server exports. Its format is as follows: + + +directory hostname(options) + + + The only option that needs to be specified is one of sync or async (sync is recommended). If sync is specified, + the server does not reply to requests before the changes made by the request are written to the disk. + + + + For example, + + +/misc/export speedy.example.com(sync) + + + would allow users from speedy.example.com to mount /misc/export with the default read-only permissions, but, + + +/misc/export speedy.example.com(rw,sync) + + + would allow users from speedy.example.com to mount /misc/export with read/write privileges. + + + + Refer to for an explanation of possible hostname formats. + + + + + Caution + + + Be careful with spaces in the /etc/exports file. If there are no spaces between the hostname and the options in parentheses, the options apply only to the hostname. If there is a space between the hostname and the + options, the options apply to the rest of the world. For example, examine the following lines: + +/misc/export speedy.example.com(rw,sync) /misc/export speedy.example.com (rw,sync) + + The first line grants users from speedy.example.com read-write access and denies all other users. The second line grants users from speedy.example.com read-only access (the + default) and allows the rest of the world read-write access. + + + + + Each time you change /etc/exports, you must inform the NFS daemon of the change, or reload the configuration file with the following command: + + +/sbin/service nfs reload + + +
diff --git a/en-US/DG_NFS-serverconfig.xml b/en-US/DG_NFS-serverconfig.xml new file mode 100644 index 0000000..3ce8ab1 --- /dev/null +++ b/en-US/DG_NFS-serverconfig.xml @@ -0,0 +1,268 @@ + + + +
+NFS Server Configuration + + NFS + server configuration + + + +There are two ways to configure an NFS server: + + + +By manually editing the NFS configuration file, i.e. /etc/exports +Through the command line, i.e. through exportfs + + + + + + + + + + +
+ The <command moreinfo="none">exportfs</command> Command + +NFS +server configuration +exportfs command + + +Every file system being exported to remote users via NFS, as well as the access level for those file systems, are listed in the /etc/exports file. When the nfs service starts, the +/usr/sbin/exportfs command launches and reads this file, passes control to rpc.mountd (if NFSv2 or NFSv3) for the actual mounting process, then to +rpc.nfsd where the file systems are then available to remote users. + + + +When issued manually, the /usr/sbin/exportfs command allows the root user to selectively export or unexport directories without restarting the NFS service. When given the proper options, the +/usr/sbin/exportfs command writes the exported file systems to /var/lib/nfs/xtab. Since rpc.mountd refers to the +xtab file when deciding access privileges to a file system, changes to the list of exported file systems take effect immediately. + + + +The following is a list of commonly-used options available for /usr/sbin/exportfs: + + + + +-r + + +Causes all directories listed in /etc/exports to be exported by constructing a new export list in /etc/lib/nfs/xtab. This option effectively refreshes the export list with any changes made to /etc/exports. + + + + +-a + + +Causes all directories to be exported or unexported, depending on what other options are passed to /usr/sbin/exportfs. If no other options are specified, +/usr/sbin/exportfs exports all file systems specified in /etc/exports. + + + + +-o file-systems + + +Specifies directories to be exported that are not listed in /etc/exports. Replace file-systems with additional file +systems to be exported. These file systems must be formatted in the same way they are specified in /etc/exports. Refer to for more information on +/etc/exports syntax. This option is often used to test an exported file system before adding it permanently to the list of file systems to be exported. + + + + +-i + + +Ignores /etc/exports; only options given from the command line are used to define exported file systems. + + + + +-u + + +Unexports all shared directories. The command /usr/sbin/exportfs -ua suspends NFS file sharing while keeping all NFS daemons up. To re-enable NFS sharing, use exportfs -r. + + + + +-v + + +Verbose operation, where the file systems being exported or unexported are displayed in greater detail when the exportfs command is executed. + + + + + +If no options are passed to the exportfs command, it displays a list of currently exported file systems. +For more information about the exportfs command, refer to man exportfs. + + +
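+
+A typical test-then-commit workflow with exportfs might look like the following sketch (the host name and directory are hypothetical):
+
+exportfs -o rw,sync test.example.com:/srv/scratch    # temporary export, not in /etc/exports
+exportfs -v                                          # list current exports in detail
+exportfs -ra                                         # re-export everything after editing /etc/exports
+exportfs -u test.example.com:/srv/scratch            # withdraw the temporary export
+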
+Using <command moreinfo="none">exportfs</command> with NFSv4
+
+ NFS
+ server configuration
+ exportfs command with NFSv4
+
+
+ The exportfs command is used to maintain the NFS table of exported file systems. When used with no arguments, exportfs shows all the exported directories.
+
+
+ Because NFSv4 no longer uses the MOUNT protocol, which was used with the NFSv2 and NFSv3 protocols, the mounting of file systems has changed.
+
+
+ An NFSv4 client can see all of the exports served by the NFSv4 server as a single file system, called the NFSv4 pseudo-file system. On Fedora, the pseudo-file system is identified as a single, real file system, which is marked at export time with the fsid=0 option.
+
+
+
+ + + +
+ Running NFS Behind a Firewall + + NFS + configuration with firewall + +renamed portmap to rpcbind + NFS requires rpcbind, which dynamically assigns ports for RPC services and can cause problems for configuring firewall rules. To allow clients to access NFS shares behind a firewall, edit the /etc/sysconfig/nfs configuration file to control which ports the required RPC services run on. + + + +The /etc/sysconfig/nfs may not exist by default on all systems. If it does not exist, create it and add the following variables, replacing port with an unused port number (alternatively, if the file exists, un-comment and change the default entries as required): + + + +MOUNTD_PORT=port + + + Controls which TCP and UDP port mountd (rpc.mountd) uses. + + + +STATD_PORT=port + + + Controls which TCP and UDP port status (rpc.statd) uses. + + + +LOCKD_TCPPORT=port + + + Controls which TCP port nlockmgr (rpc.lockd) uses. + + + +LOCKD_UDPPORT=port + + + Controls which UDP port nlockmgr (rpc.lockd) uses. + + + + + +If NFS fails to start, check /var/log/messages. Normally, NFS will fail to start if you specify a port number that is already in use. After editing /etc/sysconfig/nfs, restart the NFS service using service nfs restart. Run the rpcinfo -p command to confirm the changes. + + + +To configure a firewall to allow NFS, perform the following steps: + + + + + Allow TCP and UDP port 2049 for NFS. + + + + + Allow TCP and UDP port 111 (rpcbind/sunrpc). + + + + + Allow the TCP and UDP port specified with MOUNTD_PORT="port" + + + + + Allow the TCP and UDP port specified with STATD_PORT="port" + + + + + Allow the TCP port specified with LOCKD_TCPPORT="port" + + + + + Allow the UDP port specified with LOCKD_UDPPORT="port" + + + + + +
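+
+A minimal sketch of the resulting setup, with arbitrarily chosen port numbers and iptables as the firewall (adapt the rules to whatever firewall tool your system uses):
+
+# /etc/sysconfig/nfs (excerpt)
+MOUNTD_PORT=892
+STATD_PORT=662
+LOCKD_TCPPORT=32803
+LOCKD_UDPPORT=32769
+
+# matching firewall rules
+iptables -A INPUT -p tcp -m multiport --dports 111,2049,662,892,32803 -j ACCEPT
+iptables -A INPUT -p udp -m multiport --dports 111,2049,662,892,32769 -j ACCEPT
+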
+
+ Hostname Formats + +NFS +hostname formats + + +The host(s) can be in the following forms: + + + + +Single machine + + +A fully-qualified domain name (that can be resolved by the server), hostname (that can be resolved by the server), or an IP address. + + + + +Series of machines specified via wildcards + + +Use the * or ? character to specify a string match. Wildcards are not to be used with IP addresses; however, they may accidentally work if reverse DNS lookups fail. When specifying wildcards in fully qualified domain names, dots (.) are not included in the wildcard. For example, *.example.com includes one.example.com but does not include one.two.example.com. + + + + +IP networks + + +Use a.b.c.d/z, where a.b.c.d is the network and z is the number of bits in the netmask (for example 192.168.0.0/24). Another acceptable format is +a.b.c.d/netmask, where a.b.c.d is the network and netmask is the netmask (for example, 192.168.100.8/255.255.255.0). + + + + +Netgroups + + +Use the format @group-name, where group-name is the NIS netgroup name. + + + +
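+
+Putting these formats together, a single (purely illustrative) /etc/exports line could mix them as follows:
+
+/srv/data one.example.com(rw) *.lab.example.com(ro) 192.168.0.0/24(rw) @trusted-hosts(rw,no_root_squash)
+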
+
diff --git a/en-US/DG_NFS-start-stop.xml b/en-US/DG_NFS-start-stop.xml new file mode 100644 index 0000000..ad8d2fe --- /dev/null +++ b/en-US/DG_NFS-start-stop.xml @@ -0,0 +1,93 @@ + + + +
+ Starting and Stopping NFS + + NFS + starting + + + NFS + stopping + + + NFS + status + + + NFS + restarting + + + NFS + reloading + + + NFS + condrestart + + + rpcbind + + status + + + To run an NFS server, the rpcbind service must be running. To verify that rpcbind is active, type the following command as root: + + +/sbin/service rpcbind status + + + If the rpcbind service is running, then the nfs service can be started. To start an NFS server, as root type: + + +/sbin/service nfs start + + + + nfslock also has to be started for both the NFS client and server to function properly. To start NFS locking as root type: /sbin/service nfslock start. If NFS is set to start at boot, please ensure that nfslock also starts by running chkconfig --list nfslock. If nfslock is not set to on, this implies that you will need to manually run the /sbin/service nfslock start each time the computer starts. To set nfslock to automatically start on boot, type the following command in a terminal chkconfig nfslock on. + + + +nfslock is only needed for NFSv2 and NFSv3. + + + + + + To stop the server, as root, type: + + +/sbin/service nfs stop + + + The option is a shorthand way of stopping and then starting NFS. This is the most efficient way to make configuration changes take effect after editing the configuration file for NFS. + + + + To restart the server, as root, type: + + +/sbin/service nfs restart + + + The (conditional restart) option only starts nfs if it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running. + + + + To conditionally restart the server, as root, type: + + +/sbin/service nfs condrestart + + + To reload the NFS server configuration file without restarting the service, as root, type: + + +/sbin/service nfs reload + + + +
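+
+If the server should come back up after a reboot, the same services can also be enabled persistently. A minimal sketch using chkconfig:
+
+chkconfig rpcbind on
+chkconfig nfs on
+chkconfig nfslock on
+chkconfig --list nfs      # verify the runlevels in which nfs will start
+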
diff --git a/en-US/OSG_Revision_History.xml b/en-US/OSG_Revision_History.xml new file mode 100644 index 0000000..ba4f888 --- /dev/null +++ b/en-US/OSG_Revision_History.xml @@ -0,0 +1,55 @@ + + + + +Revision History + + + + + 3.0 + Tue Mar 30 2010 + + Don + Domingo + ddomingo@redhat.com + + + + updated as per RHEL5.5 + + + + + 2.0 + Wed Jul 15 2009 + + Don + Domingo + ddomingo@redhat.com + + + + revisions as per BZ#264001 + + + + + 1.0 + Thu Jun 17 2009 + + Don + Domingo + ddomingo@redhat.com + + + + adding to RHEL5.4, correcting branch + + + + + + + diff --git a/en-US/OSG_ch-common_operations.xml b/en-US/OSG_ch-common_operations.xml new file mode 100644 index 0000000..313c52c --- /dev/null +++ b/en-US/OSG_ch-common_operations.xml @@ -0,0 +1,19 @@ + + + + + Common Operations + + The following sections provide procedures that work on fibre channel and iSCSI protocols. + + + + + + + + + + + diff --git a/en-US/OSG_ch-fc-api-procedures.xml b/en-US/OSG_ch-fc-api-procedures.xml new file mode 100644 index 0000000..bcceeee --- /dev/null +++ b/en-US/OSG_ch-fc-api-procedures.xml @@ -0,0 +1,25 @@ + + + +
+ Fibre Channel + +online storage +fibre channel + + + +fibre channel +online storage + + + + This section discusses the Fibre Channel API, native Fedora 13 Fibre Channel drivers, and the Fibre Channel capabilities of these drivers. + + + + + +
+ diff --git a/en-US/OSG_ch-iscsi-api-procedures.xml b/en-US/OSG_ch-iscsi-api-procedures.xml new file mode 100644 index 0000000..e7c3254 --- /dev/null +++ b/en-US/OSG_ch-iscsi-api-procedures.xml @@ -0,0 +1,26 @@ + + + +
+ iSCSI
+
+ This section describes the iSCSI API and the iscsiadm utility. Before using the iscsiadm utility, install the iscsi-initiator-utils package by running yum install iscsi-initiator-utils.
+
+
+In addition, the iSCSI service must be running in order to discover or log in to targets. To start the iSCSI service, run service iscsi start.
+
+
+
+
+ diff --git a/en-US/OSG_reference_fc-api-etc.xml b/en-US/OSG_reference_fc-api-etc.xml new file mode 100644 index 0000000..1adc5c1 --- /dev/null +++ b/en-US/OSG_reference_fc-api-etc.xml @@ -0,0 +1,130 @@ + + + +
+ Fibre Channel API + +fibre channel API + + + +API, fibre channel + + + + userspace API files + fibre channel API + + Below is a list of /sys/class/ directories that contain files used to provide the userspace API. In each item, host numbers +are designated by H, bus numbers are B, targets are +T, logical unit numbers (LUNs) are L, and remote port numbers are +R. + + + Important + If your system is using multipath software, consult your hardware vendor before changing any of the values described in this +section. + + + + + + Transport: +/sys/class/fc_transport/targetH:B:T/ + + + + + + +transport +fibre channel API + + + port_id — 24-bit port ID/address + node_name — 64-bit node name + port_name — 64-bit port name + + + + + + Remote Port: +/sys/class/fc_remote_ports/rport-H:B-R/ + + + + +remote port +fibre channel API + + + port_id + node_name + port_name + + + + +dev_loss_tmo +fibre channel API + + + dev_loss_tmo — number of seconds to wait before marking a link as "bad". Once a link is marked +bad, I/O running on its corresponding path (along with any new I/O on that path) will be failed. + + The default dev_loss_tmo value varies, depending on which driver/device is used. If a Qlogic adapter +is used, the default is 35 seconds, while if an Emulex adapter is used, it is 30 seconds. The dev_loss_tmo value can be changed via the +scsi_transport_fc module parameter dev_loss_tmo, although the driver can override this timeout value. + + The maximum dev_loss_tmo value is 600 seconds. If dev_loss_tmo is set to zero or +any value greater than 600, the driver's internal timeouts will be used instead. + + + + + + + +fast_io_fail_tmo +fibre channel API + + fast_io_fail_tmo — length of time to wait before failing I/O executed when a link problem is +detected. I/O that reaches the driver will fail. If I/O is in a blocked queue, it will not be failed until dev_loss_tmo expires and the queue +is unblocked. + + + + + + Host: /sys/class/fc_host/hostH/ + + + + +host +fibre channel API + + + port_id + + + + +issue_lip +fibre channel API + + issue_lip — instructs the driver to rediscover remote ports. + + + + + + + + + +
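+
+These attributes are ordinary sysfs files, so they can be read with cat and, where writable, tuned with echo. The host, bus, target, and remote port numbers below are placeholders; check your own /sys/class hierarchy first:
+
+cat /sys/class/fc_host/host5/port_id
+cat /sys/class/fc_remote_ports/rport-5:0-2/node_name
+echo 30 > /sys/class/fc_remote_ports/rport-5:0-2/dev_loss_tmo    # wait 30 seconds before marking the link bad
+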
+ diff --git a/en-US/OSG_reference_fc-native-drivers.xml b/en-US/OSG_reference_fc-native-drivers.xml new file mode 100644 index 0000000..a2003e6 --- /dev/null +++ b/en-US/OSG_reference_fc-native-drivers.xml @@ -0,0 +1,109 @@ + + + +
+ Native Fibre Channel Drivers and Capabilities + + + + +fibre channel drivers (native) + + + + + + +native fibre channel drivers + + + + + + +drivers (native), fibre channel + + + + + Fedora 13 ships with the following native fibre channel drivers: + + lpfc + qla2xxx + zfcp + mptfc + + + describes the different fibre-channel API capabilities of each native Fedora 13 driver. X denotes support for the capability. + + + + Fibre-Channel API Capabilities + + + + + lpfc + qla2xxx + zfcp + mptfc + + + + Transport port_id + X + X + X + X + + + Transport node_name + X + X + X + X + + + Transport port_name + X + X + X + X + + + Remote Port dev_loss_tmo + X + X + X + X + + + Remote Port fast_io_fail_tmo + X + X + Supported as of Fedora 10 + X Supported as of Fedora 13.0 + + + + + Host port_id + X + X + X + X + + + Host issue_lip + X + X + + + + + +
+ +
+ diff --git a/en-US/OSG_reference_iscsi-api-etc.xml b/en-US/OSG_reference_iscsi-api-etc.xml new file mode 100644 index 0000000..1073548 --- /dev/null +++ b/en-US/OSG_reference_iscsi-api-etc.xml @@ -0,0 +1,51 @@ + + + +
+ iSCSI API + +iSCSI API + + + +API, iSCSI + + + running sessions, retrieving information about + iSCSI API + + + + To get information about running sessions, run: + iscsiadm -m session -P 3 + + This command displays the session/device state, session ID (sid), some negotiated parameters, and the SCSI devices accessible through the session. + + For shorter output (for example, to display only the sid-to-node mapping), run: + + + iscsiadm -m session -P 0 + + or + iscsiadm -m session + + These commands print the list of running sessions with the format: + + +driver [sid] target_ip:port,target_portal_group_tag proper_target_name + + +For example: + + +iscsiadm -m session + +tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311 +tcp [3] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311 + + + + For more information about the iSCSI API, refer to /usr/share/doc/iscsi-initiator-utils-version/README. +
+ diff --git a/en-US/OSG_reference_iscsi-iscsiadm-etc.xml b/en-US/OSG_reference_iscsi-iscsiadm-etc.xml new file mode 100644 index 0000000..3ba130f --- /dev/null +++ b/en-US/OSG_reference_iscsi-iscsiadm-etc.xml @@ -0,0 +1,15 @@ + + + +
+ <command>iscsiadm</command> + + The iscsiadm utility is a command-line tool that allows you to manage iSCSI targets. + + CONTENT TBA + + For a complete list of iscsiadm commands and options, refer to man iscsiadm. + +
+ diff --git a/en-US/OSG_reference_iscsi-nopouts.xml b/en-US/OSG_reference_iscsi-nopouts.xml new file mode 100644 index 0000000..00f81cf --- /dev/null +++ b/en-US/OSG_reference_iscsi-nopouts.xml @@ -0,0 +1,48 @@ + + + +
+ NOP-Out Interval/Timeout
+
+NOP-Out requests
+modifying link loss
+iSCSI configuration
+
+ To help monitor problems in the SAN, the iSCSI layer sends a NOP-Out request to each target. If a NOP-Out request times out, the iSCSI layer responds by failing any running commands and instructing the SCSI layer to requeue those commands when possible.
+
+ When dm-multipath is being used, the SCSI layer will fail those running commands and defer them to the multipath layer. The multipath layer then retries those commands on another path. If dm-multipath is not being used, those commands are retried five times before failing altogether.
+
+ Intervals between NOP-Out requests are 10 seconds by default. To adjust this, open /etc/iscsi/iscsid.conf and edit the following line:
+
+node.conn[0].timeo.noop_out_interval = [interval value]
+
+
+Once set, the iSCSI layer will send a NOP-Out request to each target every [interval value] seconds.
+
+By default, NOP-Out requests time out in 10 seconds (in previous versions of Fedora, the default NOP-Out request timeout was 15 seconds). To adjust this, open /etc/iscsi/iscsid.conf and edit the following line:
+
+
+node.conn[0].timeo.noop_out_timeout = [timeout value]
+
+
+ This sets the iSCSI layer to time out a NOP-Out request after [timeout value] seconds.
+
+SCSI Error Handler
+
+SCSI Error Handler
+modifying link loss
+iSCSI configuration
+
+
+replacement_timeout
+modifying link loss
+iSCSI configuration
+
+If the SCSI Error Handler is running, commands running on a path will not be failed immediately when a NOP-Out request times out on that path. Instead, those commands will be failed after replacement_timeout seconds. For more information about replacement_timeout, refer to .
+
+To verify whether the SCSI Error Handler is running, run:
+iscsiadm -m session -P 3
+
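+
+For instance, to detect path problems more quickly, sending a NOP-Out every 5 seconds and failing it after 5 seconds, the relevant lines in /etc/iscsi/iscsid.conf could read as follows (the values are illustrative only):
+
+node.conn[0].timeo.noop_out_interval = 5
+node.conn[0].timeo.noop_out_timeout = 5
+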
+ diff --git a/en-US/OSG_reference_iscsi-replacement_timeout.xml b/en-US/OSG_reference_iscsi-replacement_timeout.xml new file mode 100644 index 0000000..503532d --- /dev/null +++ b/en-US/OSG_reference_iscsi-replacement_timeout.xml @@ -0,0 +1,34 @@ + + + +
+ <command>replacement_timeout</command>
+
+replacement_timeout
+modifying link loss
+iSCSI configuration
+
+ replacement_timeout controls how long the iSCSI layer should wait for a timed-out path/session to reestablish itself before failing any commands on it. The default replacement_timeout value is 120 seconds.
+
+ To adjust replacement_timeout, open /etc/iscsi/iscsid.conf and edit the following line:
+
+
+node.session.timeo.replacement_timeout = [replacement_timeout]
+
+
+queue_if_no_path
+modifying link loss
+iSCSI configuration
+
+The 1 queue_if_no_path option in /etc/multipath.conf sets iSCSI timers to immediately defer commands to the multipath layer (refer to ). This setting prevents I/O errors from propagating to the application; because of this, you can set replacement_timeout to 15-20 seconds.
+
+By configuring a lower replacement_timeout, I/O is quickly sent to a new path and executed (in the event of a NOP-Out timeout) while the iSCSI layer attempts to re-establish the failed path/session. If all paths time out, then the multipath and device mapper layer will internally queue I/O based on the settings in /etc/multipath.conf instead of /etc/iscsi/iscsid.conf.
+
+
+ Important
+Whether your primary consideration is failover speed or security, the recommended value for replacement_timeout depends on other factors, including the network, the target, and the system workload. As such, it is recommended that you thoroughly test any new replacement_timeout configuration before applying it to a mission-critical system.
+
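+
+A minimal sketch of the two configuration files working together, using the 15-second value suggested above (the excerpts are illustrative, not complete files):
+
+# /etc/iscsi/iscsid.conf (excerpt)
+node.session.timeo.replacement_timeout = 15
+
+# /etc/multipath.conf (excerpt)
+defaults {
+        features    "1 queue_if_no_path"
+}
+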
+ diff --git a/en-US/OSG_reference_troubleshooting.xml b/en-US/OSG_reference_troubleshooting.xml new file mode 100644 index 0000000..84e51b6 --- /dev/null +++ b/en-US/OSG_reference_troubleshooting.xml @@ -0,0 +1,70 @@ + + + +
+ Troubleshooting + + This section provides solution to common problems users experience during online storage reconfiguration. + + +online storage +troubleshooting + + + +troubleshooting +online storage + + + + + + Logical unit removal status is not reflected on the host. + + + When a logical unit is deleted on a configured filer, the change is not reflected on the host. In such cases, lvm commands will hang indefinitely when dm-multipath is used, as the logical unit has now become stale. + + To work around this, perform the following procedure: +Working Around Stale Logical Units + Determine which mpath link entries in /etc/lvm/cache/.cache are specific to the stale logical unit. To do this, run the following command: + + ls -l /dev/mpath | grep stale-logical-unit + + + + For example, if stale-logical-unit is 3600d0230003414f30000203a7bc41a00, the following results may +appear: + + +lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00 -> ../dm-4 +lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00p1 -> ../dm-5 + + This means that 3600d0230003414f30000203a7bc41a00 is mapped to two mpath links: dm-4 and +dm-5. + + + Next, open /etc/lvm/cache/.cache. Delete all lines containing stale-logical-unit +and the mpath links that stale-logical-unit maps to. + + Using the same example in the previous step, the lines you need to delete are: + + +/dev/dm-4 +/dev/dm-5 +/dev/mapper/3600d0230003414f30000203a7bc41a00 +/dev/mapper/3600d0230003414f30000203a7bc41a00p1 +/dev/mpath/3600d0230003414f30000203a7bc41a00 +/dev/mpath/3600d0230003414f30000203a7bc41a00p1 + + + + + + + + + + +
+ diff --git a/en-US/OSG_task-autolun-add.xml b/en-US/OSG_task-autolun-add.xml new file mode 100644 index 0000000..b9cc27c --- /dev/null +++ b/en-US/OSG_task-autolun-add.xml @@ -0,0 +1,107 @@ + + + +
+ Adding/Removing a Logical Unit Through rescan-scsi-bus.sh + +LUN (logical unit number) +adding/removing + + + +adding/removing +LUN (logical unit number) + + + + +LUN (logical unit number) +adding/removing +rescan-scsi-bus.sh + + + +rescan-scsi-bus.sh +adding/removing +LUN (logical unit number) + + + + + + +LUN (logical unit number) +adding/removing +required packages + + + +required packages +adding/removing +LUN (logical unit number) + + + + + + + + + + The sg3_utils package provides the rescan-scsi-bus.sh script, which can automatically update the logical unit configuration of the host as needed (after a device has been added to the system). The rescan-scsi-bus.sh script can also perform an issue_lip on supported devices. For more information about how to use this script, refer to rescan-scsi-bus.sh --help. + + + +To install the sg3_utils package, run yum install sg3_utils. + + + + +Known Issues With rescan-scsi-bus.sh + + + +LUN (logical unit number) +adding/removing +known issues + + + +known issues +adding/removing +LUN (logical unit number) + + + + When using the rescan-scsi-bus.sh script, take note of the following known issues: + + + + + +In order for rescan-scsi-bus.sh to work properly, LUN0 must be the first mapped logical unit. The rescan-scsi-bus.sh can only detect the first mapped logical unit if it is LUN0. The rescan-scsi-bus.sh will not be able to scan any other logical unit unless it detects the first mapped logical unit even if you use the --nooptscan option. + + + + + +A race condition requires that rescan-scsi-bus.sh be run twice if logical units are mapped for the first time. During the first scan, rescan-scsi-bus.sh only adds LUN0; all other logical units are added in the second scan. + + + + +A bug in the rescan-scsi-bus.sh script incorrectly executes the functionality for recognizing a change in logical unit size when the --remove option is used. + + + + + +The rescan-scsi-bus.sh script does not recognize ISCSI logical unit removals. + + + + + + +
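+
+A typical invocation, allowing for the LUN0 and double-scan caveats listed above (the package and script names are as described; output is omitted):
+
+yum install sg3_utils
+rescan-scsi-bus.sh           # run twice when logical units are mapped for the first time
+rescan-scsi-bus.sh
+rescan-scsi-bus.sh --help    # full list of options
+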
diff --git a/en-US/OSG_task_adding-storagedevice-or-path.xml b/en-US/OSG_task_adding-storagedevice-or-path.xml new file mode 100644 index 0000000..d932c4e --- /dev/null +++ b/en-US/OSG_task_adding-storagedevice-or-path.xml @@ -0,0 +1,87 @@ + + + +
+Adding a Storage Device or Path + + +adding paths to a storage device + + + + path to storage devices, adding + + + +When adding a device, be aware that the path-based device name (/dev/sd name, major:minor number, and /dev/disk/by-path name, for example) the system assigns to the new device may have been previously in use by a device that has since been removed. As such, ensure that all old references to the path-based device name have been removed. Otherwise, the new device may be mistaken for the old device. + + +The first step in adding a storage device or path is to physically enable access to the new storage device, or a new path to an existing device. This is done using vendor-specific commands at the Fibre Channel or iSCSI storage server. When doing so, note the LUN value for the new storage that will be presented to your host. If the storage server is Fibre Channel, also take note of the World Wide Node Name (WWNN) of the storage server, and determine whether there is a single WWNN for all ports on the storage server. If this is not the case, note the World Wide Port Name (WWPN) for each port that will be used to access the new LUN. + + + +Next, make the operating system aware of the new storage device, or path to an existing device. The recommended command to use is: + + + +echo "c t l" > /sys/class/scsi_host/hosth/scan + + + +In the previous command, h is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN. + + + +The older form of this command, echo "scsi add-single-device 0 0 0 0" > /proc/scsi/scsi, is deprecated. + + + +For Fibre Channel storage servers that implement a single WWNN for all ports, you can determine the correct h,c,and t values (i.e. HBA number, HBA channel, and SCSI target ID) by searching for the WWNN in sysfs. For example, if the WWNN of the storage server is 0x5006016090203181, use: + + + +grep 5006016090203181 /sys/class/fc_transport/*/node_name + + + +This should display output similar to the following: + + +/sys/class/fc_transport/target5:0:2/node_name:0x5006016090203181 +/sys/class/fc_transport/target5:0:3/node_name:0x5006016090203181 +/sys/class/fc_transport/target6:0:2/node_name:0x5006016090203181 +/sys/class/fc_transport/target6:0:3/node_name:0x5006016090203181 + + +This indicates there are four Fibre Channel routes to this target (two single-channel HBAs, each leading to two storage ports). Assuming a LUN value is 56, then the following command will configure the first path: + + + +echo "0 2 56" > /sys/class/scsi_host/host5/scan + + + +This must be done for each path to the new device. + + + +For Fibre Channel storage servers that do not implement a single WWNN for all ports, you can determine the correct HBA number, HBA channel, and SCSI target ID by searching for each of the WWPNs in sysfs. + + + +Another way to determine the HBA number, HBA channel, and SCSI target ID is to refer to another device that is already configured on the same path as the new device. This can be done with various commands, such as lsscsi, scsi_id, multipath -l, and ls -l /dev/disk/by-*. This information, plus the LUN number of the new device, can be used as shown above to probe and configure that path to the new device. + + + +After adding all the SCSI paths to the device, execute the multipath command, and check to see that the device has been properly configured. At this point, the device can be added to md, LVM, mkfs, or mount, for example. 
+ + + +If the steps above are followed, then a device can safely be added to a running system. It is not necessary to stop I/O to other devices while this is done. +Other procedures involving a rescan (or a reset) of the SCSI bus, which cause the operating system to update its state to reflect the current device connectivity, are not recommended while storage I/O is in progress. + + + +
+ diff --git a/en-US/OSG_task_config-fcoe.xml b/en-US/OSG_task_config-fcoe.xml new file mode 100644 index 0000000..c4c8849 --- /dev/null +++ b/en-US/OSG_task_config-fcoe.xml @@ -0,0 +1,154 @@ + + + +
+<remark>[NEW] </remark>Configuring a Fibre-Channel Over Ethernet Interface + +FCoE +fibre channel over ethernet + + + +fibre channel over ethernet +FCoE + + + + +FCoE +required packages + + + +required packages +FCoE + + + + + + +FCoE +configuring an ethernet interface to use FCoE + + + +configuring an ethernet interface to use FCoE +FCoE + + + + + +Setting up and deploying a Fibre-channel over ethernet (FCoE) interface requires two packages: + + + + +fcoe-utils + +dcbd + + + +Once these packages are installed, perform the following procedure to enable +FCoE over a virtual LAN (VLAN): + + + +Configuring an ethernet interface to use FCoE + + + +Configure a new VLAN (101) by creating a new network script for it. The easiest way to do +so is to copy the network script of an ethernet interface (eth3) +to a new one with the 101 file name suffix, as in: + + + +cp /etc/sysconfig/network-scripts/ifcfg-eth3 /etc/sysconfig/network-scripts/ifcfg-eth3.101 + + + + + +Open /etc/sysconfig/network-scripts/ifcfg-eth3.101. Edit it to ensure that +the following settings are configured accordingly: + + + DEVICE=eth3.101 + VLAN=yes + ONBOOT=yes + + + + + +Start the data center bridging daemon (dcbd) using the following command: + + + +/etc/init.d/dcbd start + + + + + +Use the dcbtool utility to enable data center bridging and +FCoE on the ethernet interface using the following commands: + + + +dcbtool sc eth3 dcb on + + + +dcbtool sc eth3 app:fcoe e:1 + + + +These commands will only work if no other changes have been made to the +dcbd settings for the ethernet interface. + + + + + + + +Start FCoE using the command /etc/init.d/fcoe start. The +fibre-channel device should appear shortly, assuming all other settings +on the fabric are correct. + + + + + +After correctly configuring the ethernet interface to use FCoE, +you should set FCoE and dcbd +to run at startup. To do so, use chkconfig, as in: + + + +chkconfig dcbd on + + + +chkconfig fcoe on + + + + + +Do not run software-based DCB or LLDP on CNAs that implement DCB. + + + +Some Combined Network Adapters (CNAs) implement the Data Center Bridging (DCB) protocol in firmware. The DCB protocol assumes that there is just one originator of DCB on a particular network link. This means that any higher-level software implementation of DCB, or Link Layer Discovery Protocol (LLDP), must be disabled on CNAs that implement DCB. + + + + + +
diff --git a/en-US/OSG_task_config-iface-offload.xml b/en-US/OSG_task_config-iface-offload.xml new file mode 100644 index 0000000..3f5c5e9 --- /dev/null +++ b/en-US/OSG_task_config-iface-offload.xml @@ -0,0 +1,66 @@ + + + +
+<remark>[NEW] </remark>Configuring an iface for iSCSI Offload + + + +iSCSI +offload and interface binding +iface (configuring for iSCSI offload) + + + +iface (configuring for iSCSI offload) +offload and interface binding +iSCSI + + + + + + + +By default, iscsiadm will create an iface +configuration for each Chelsio, Broadcom, and +ServerEngines port. To view available iface configurations, +use the same command for doing so in software iSCSI, i.e. iscsiadm -m iface. + + + + +Before using the iface of a network card for +iSCSI offload, first set the IP address (target_IP) that the +device should use. For +ServerEngines devices that use the be2iscsi +driver (i.e. ServerEngines iSCSI HBAs), the IP address is configured in the ServerEngines BIOS +setup screen. + + + + +For Chelsio and Broadcom devices, the procedure for configuring +the IP address is the same as for any other iface +setting. So to configure the IP address of the iface, use: + + + +iscsiadm -m iface -I iface_name -o update -n iface.ipaddress -v target_IP + + + +For example, to set the iface IP address of a Chelsio card (with iface name cxgb3i.00:07:43:05:97:07) +to 20.15.0.66, use: + + + + +iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update -n iface.ipaddress -v 20.15.0.66 + + + + + +
diff --git a/en-US/OSG_task_config-iface-softiscsi.xml b/en-US/OSG_task_config-iface-softiscsi.xml new file mode 100644 index 0000000..0922352 --- /dev/null +++ b/en-US/OSG_task_config-iface-softiscsi.xml @@ -0,0 +1,106 @@ + + + +
+<remark>[NEW] </remark>Configuring an iface for Software iSCSI + +iSCSI +software iSCSI + + + +software iSCSI +iSCSI + + + + +iSCSI +offload and interface binding +software iSCSI + + + +'software iSCSI +offload and interface binding +iSCSI + + + + + + +iSCSI +offload and interface binding +iface for software iSCSI + + + +iface for software iSCSI +offload and interface binding +iSCSI + + + + + + +As mentioned earlier, an iface configuration is required for each +network object that will be used to bind a session. + + + +Before + + + +To create an iface configuration for software iSCSI, run the following command: + + +iscsiadm -m iface -I iface_name --op=new + + + +This will create a new empty iface configuration with +a specified iface_name. If an existing iface +configuration already has the same iface_name, then it will be +overwritten with a new, empty one. + + + + + +To configure a specific setting of an iface configuration, use +the following command: + + + +iscsiadm -m iface -I iface_name --op=update -n iface.setting -v hw_address + + + +For example, to set the MAC address (hardware_address) of +iface0 to 00:0F:1F:92:6B:BF, +run: + + + +iscsiadm -m iface -I iface0 - -op=update -n iface.hwaddress -v 00:0F:1F:92:6B:BF + + + + + + + +Do not use default or iser +as iface names. Both strings are special values used by iscsiadm +for backward compatibility. Any manually-created iface configurations named +default or iser will disable +backwards compatibility. + + + + +
diff --git a/en-US/OSG_task_controlling-scsi-command-timer-onlining-devices.xml b/en-US/OSG_task_controlling-scsi-command-timer-onlining-devices.xml new file mode 100644 index 0000000..90b15d6 --- /dev/null +++ b/en-US/OSG_task_controlling-scsi-command-timer-onlining-devices.xml @@ -0,0 +1,86 @@ + + + +
+ Controlling the SCSI Command Timer and Device Status
+
+controlling SCSI command timer and device status
+Linux SCSI layer
+
+ The Linux SCSI layer sets a timer on each command. When this timer expires, the SCSI layer will quiesce the host bus adapter (HBA) and wait for all outstanding commands to either time out or complete. Afterwards, the SCSI layer will activate the driver's error handler.
+
+ When the error handler is triggered, it attempts the following operations in order (until one successfully executes):
+
+
+ Abort the command.
+ Reset the device.
+ Reset the bus.
+ Reset the host.
+
+
+offline status
+Linux SCSI layer
+
+
+running status
+Linux SCSI layer
+
+ If all of these operations fail, the device will be set to the offline state. When this occurs, all I/O to that device will be failed until the problem is corrected and the user sets the device to running.
+
+ The process is different, however, if a device uses the fibre channel protocol and the rport is blocked. In such cases, the drivers wait for several seconds for the rport to become online again before activating the error handler. This prevents devices from becoming offline due to temporary transport problems.
+
+
+ Device States
+
+
+device status
+Linux SCSI layer
+
+
+ To display the state of a device, use:
+
+ cat /sys/block/device-name/device/state
+
+ To set a device to running state, use:
+
+ echo running > /sys/block/device-name/device/state
+
+
+ Command Timer
+
+SCSI command timer
+Linux SCSI layer
+
+
+command timer (SCSI)
+Linux SCSI layer
+
+ To control the command timer, you can write to /sys/block/device-name/device/timeout. To do so, run:
+
+echo value > /sys/block/device-name/device/timeout
+
+ Here, value is the timeout value (in seconds) you want to implement.
+
+udev rule (timeout)
+command timer (SCSI)
+
+
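+
+For example, for a hypothetical device sdb, the timer and state files can be used as follows:
+
+echo 60 > /sys/block/sdb/device/timeout        # 60-second command timer
+cat /sys/block/sdb/device/state                # check the current state
+echo running > /sys/block/sdb/device/state     # bring an offline device back to running
+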
+ diff --git a/en-US/OSG_task_fc-discovering-new-storage.xml b/en-US/OSG_task_fc-discovering-new-storage.xml new file mode 100644 index 0000000..1d79fd9 --- /dev/null +++ b/en-US/OSG_task_fc-discovering-new-storage.xml @@ -0,0 +1,18 @@ + + + +
+ Discovering New Storage
+
+ If a driver implements the Host issue_lip callback, you can instruct the driver to find new targets/ports added to the storage area network (SAN) after the initial module loading. To do this, write to:
+
+ /sys/class/fc_host/hostH/issue_lip
+
+ Once a target/port is found, discover the devices/LUNs on that storage by triggering a SCSI scan. To do this, run:
+
+ echo "- - -" > /sys/class/scsi_host/hostH/scan
+
+ The native drivers lpfc and qla2xxx support the Host issue_lip capability. For more information, refer to .
+
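+
+For a hypothetical host5, the two steps look like this; lsscsi (if installed) can then confirm that the new LUNs appeared:
+
+echo 1 > /sys/class/fc_host/host5/issue_lip
+echo "- - -" > /sys/class/scsi_host/host5/scan
+lsscsi
+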
diff --git a/en-US/OSG_task_fc-modifying-link-loss-behavior.xml b/en-US/OSG_task_fc-modifying-link-loss-behavior.xml new file mode 100644 index 0000000..e1e324f --- /dev/null +++ b/en-US/OSG_task_fc-modifying-link-loss-behavior.xml @@ -0,0 +1,102 @@ + + + + diff --git a/en-US/OSG_task_iscsi-discovery-config.xml b/en-US/OSG_task_iscsi-discovery-config.xml new file mode 100644 index 0000000..abd5d29 --- /dev/null +++ b/en-US/OSG_task_iscsi-discovery-config.xml @@ -0,0 +1,162 @@ + + + +
+<remark>[NEW] </remark>iSCSI Discovery Configuration + +iSCSI +discovery + + + +discovery +iSCSI + + + + +iSCSI +discovery +configuration + + + +configuration +discovery +iSCSI + + + + + + +iSCSI +discovery +record types + + + +record types +discovery +iSCSI + + + + + + +The default iSCSI configuration file is /etc/iscsi/iscsid.conf. +This file contains iSCSI settings used by iscsid and +iscsiadm. + + + + + + +During target discovery, the iscsiadm tool uses the +settings in /etc/iscsi/iscsid.conf to create two +types of records: + + + + + +Node records in /var/lib/iscsi/nodes + + +When logging into a target, iscsiadm uses the settings in this file. + + + + + +Discovery records in /var/lib/iscsi/discovery_type + + +When performing discovery to the same destination, iscsiadm uses +the settings in this file. + + + + + + + + +Before using different settings for discovery, delete the current discovery records +(i.e. /var/lib/iscsi/discovery_type) +first. To do this, use the following command: + + + +iscsiadm -m discovery -t discovery_type -p target_IP:port -o delete + + +The target_IP and port variables refer to the IP address and port combination of +a target/portal, respectively. For more information, refer to and . + + + + + + + +Here, discovery_type can be either sendtargets, isns, or fw. + + +For details on different types of discovery, refer to the DISCOVERY TYPES section of man iscsiadm. + + + + + +There are two ways to reconfigure discovery record settings: + + + +Edit the /etc/iscsi/iscsid.conf file directly prior to performing +a discovery. Discovery settings use the prefix discovery; to view them, +run: + + +iscsiadm -m discovery -t discovery_type -p target_IP:port + + + + + +Alternatively, iscsiadm can also be used to directly change +discovery record settings, as in: + +iscsiadm -m discovery -t discovery_type -p target_IP:port -o update -n setting -v %value + + + +Refer to man iscsiadm for more information on available +settings and valid +values for each. + + + + + + + + +After configuring discovery settings, any subsequent attempts to discover new targets will use +the new settings. Refer to for details on how to scan +for new iSCSI targets. + + + + + +For more information on configuring iSCSI target discovery, refer to the man pages of iscsiadm and iscsid. +The /etc/iscsi/iscsid.conf file also contains examples on proper configuration syntax. + + + + +
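+
+As a concrete sendtargets sketch against a hypothetical portal at 10.16.41.155:3260, performing discovery, updating one setting on the stored record, and then deleting the record to start over:
+
+iscsiadm -m discovery -t sendtargets -p 10.16.41.155:3260
+iscsiadm -m discovery -t sendtargets -p 10.16.41.155:3260 -o update -n discovery.sendtargets.auth.authmethod -v CHAP
+iscsiadm -m discovery -t sendtargets -p 10.16.41.155:3260 -o delete
+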
diff --git a/en-US/OSG_task_iscsi-modifying-link-loss-behavior-dmmultipath.xml b/en-US/OSG_task_iscsi-modifying-link-loss-behavior-dmmultipath.xml new file mode 100644 index 0000000..2e16a87 --- /dev/null +++ b/en-US/OSG_task_iscsi-modifying-link-loss-behavior-dmmultipath.xml @@ -0,0 +1,38 @@ + + + + diff --git a/en-US/OSG_task_iscsi-modifying-link-loss-behavior-root.xml b/en-US/OSG_task_iscsi-modifying-link-loss-behavior-root.xml new file mode 100644 index 0000000..85b3678 --- /dev/null +++ b/en-US/OSG_task_iscsi-modifying-link-loss-behavior-root.xml @@ -0,0 +1,60 @@ + + + + diff --git a/en-US/OSG_task_iscsilogin.xml b/en-US/OSG_task_iscsilogin.xml new file mode 100644 index 0000000..52d6fe9 --- /dev/null +++ b/en-US/OSG_task_iscsilogin.xml @@ -0,0 +1,97 @@ + + + +
+<remark>[NEW] </remark>Logging In to an iSCSI Target + +iSCSI +targets + + + +targets +iSCSI + + + + +iSCSI +targets +logging in + + + +logging in +iSCSI targets + + + + + + +As mentioned in , the iSCSI service must be running in +order to discover or log into targets. To start the iSCSI service, run: + + + +service iscsi start + + + +When this command is executed, the iSCSI init scripts will +automatically log into targets where the node.startup +setting is configured as automatic. This is the +default value of node.startup for all targets. + + + +To prevent automatic login to a target, set node.startup +to manual. To do this, run the following command: + + + +iscsiadm -m node --targetname proper_target_name -p target_IP:port -o update -n node.startup -v manual + + + + + +Deleting the entire record will also prevent automatic login. To do this, run: + + + + +iscsiadm -m node --targetname proper_target_name -p target_IP:port -o delete + + + + +To automatically mount a file system from an iSCSI device on the network, add a +partition entry for the mount in /etc/fstab with the +_netdev option. For example, to automatically mount the +iSCSI device sdb to /mount/iscsi +during startup, add the following line to /etc/fstab: + + +/dev/sdb /mnt/iscsi ext3 _netdev 0 0 + + +To manually log in to an iSCSI target, use the following command: + + + +iscsiadm -m node --targetname proper_target_name -p target_IP:port -l + + + + + + +The proper_target_name and target_IP:port refer to the full name and IP address/port combination of +a target. For more information, refer to and . + + + + +
diff --git a/en-US/OSG_task_iscsioffload-main.xml b/en-US/OSG_task_iscsioffload-main.xml new file mode 100644 index 0000000..4e9af7d --- /dev/null +++ b/en-US/OSG_task_iscsioffload-main.xml @@ -0,0 +1,67 @@ + + + +
+<remark>[NEW] </remark>Configuring iSCSI Offload and Interface Binding + + +iSCSI +offload and interface binding + + + +offload and interface binding +iSCSI + + + +This chapter describes how to set up iSCSI interfaces in order +to bind a session to a NIC port when using software iSCSI. It also +describes how to set up interfaces for use with +network devices that support offloading; namely, devices +from Chelsio, Broadcom and ServerEngines. + + + + + + + + + +The network subsystem can be configured to determine the +path/NIC that iSCSI interfaces should use for binding. For example, +if portals and NICs are set up on different subnets, then it is +not necessary to manually configure iSCSI interfaces for binding. + + + + + + +Before attempting to configure an iSCSI interface for binding, run the following command +first: + + + +ping -I ethX target_IP + + + + +If ping fails, then you will not be able to bind a session to a NIC. If this is +the case, check the network settings first. + + + + + + + + + + + + +
diff --git a/en-US/OSG_task_modifying-link-loss-behavior.xml b/en-US/OSG_task_modifying-link-loss-behavior.xml new file mode 100644 index 0000000..7ca2cc1 --- /dev/null +++ b/en-US/OSG_task_modifying-link-loss-behavior.xml @@ -0,0 +1,22 @@ + + + + diff --git a/en-US/OSG_task_online-lun-resizing.xml b/en-US/OSG_task_online-lun-resizing.xml new file mode 100644 index 0000000..e072162 --- /dev/null +++ b/en-US/OSG_task_online-lun-resizing.xml @@ -0,0 +1,187 @@ + + + +
+ <remark>[EDITED] </remark>Resizing an Online Logical Unit + + resizing resized logical units + + + + resized logical units, resizing + + +In most cases, fully resizing an online logical unit involves two things: resizing the logical unit itself and reflecting the size change in the corresponding +multipath device (if multipathing is enabled on the system). + +To resize the online logical unit, start by modifying the logical unit size through the array management interface of your storage device. This procedure differs with each +array; as such, consult your storage array vendor documentation for more information on this. + + + +In order to resize an online file system, the file system must not reside on a partitioned device. + + + + +
+Resizing Fibre Channel Logical Units + + +After modifying the online logical unit size, re-scan the logical unit to ensure that the system detects the updated size. To do this for Fibre Channel logical units, use the following command: + + +echo 1 > /sys/block/sdX/device/rescan + + + +To re-scan fibre channel logical units on a system that uses multipathing, execute the aforementioned command for each sd device (i.e. sd1, sd2, and so on) that represents a path for the multipathed logical unit. To determine which devices are paths for a multipath logical unit, use multipath -ll; then, find the entry that matches the logical unit being resized. It is advisable that you refer to the WWID of each entry to make it easier to find which one matches the logical unit being resized. + + +
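For instance, if multipath -ll shows that the resized logical unit is reachable through the paths sdc and sdg (hypothetical device names), each path must be re-scanned individually:

# Identify the sd devices that are paths to the multipathed logical unit
multipath -ll

# Re-scan every path so the SCSI layer picks up the new size
echo 1 > /sys/block/sdc/device/rescan
echo 1 > /sys/block/sdg/device/rescan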
+ +
+ + +Resizing an iSCSI Logical Unit + +resizing an iSCSI logical unit + + +iSCSI logical unit, resizing + + +After modifying the online logical unit size, re-scan the logical unit to ensure that the system detects the updated size. To do this for iSCSI devices, use the following +command: + + +iscsiadm -m node --targetname target_name -R + + + +original command was iscsiadm -m node -T target_name -R; changed to --targetname proper_target_name for consistency with and + +Replace target_name with the name of the target where the device is located. + + +Note + + You can also re-scan iSCSI logical units using the following command: + + +iscsiadm -m node -R -I interface + +Replace interface with the corresponding interface name of the resized logical unit (for example, iface0). This command performs two operations: + + + + + It scans for new devices in the same way that the command echo "- - -" > /sys/class/scsi_host/host/scan does (refer to ). + + + + + + It re-scans for new/modified logical units the same way that the command echo 1 > /sys/block/sdX/device/rescan does. Note that this command is the same one used for re-scanning fibre-channel logical units. + + + + + +
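As a short sketch, using the example target name shown earlier in this chapter and the illustrative iface name iface0:

# Re-scan the logical units of a specific iSCSI target after resizing
iscsiadm -m node --targetname iqn.1992-08.com.netapp:sn.33615311 -R

# Alternatively, re-scan by interface
iscsiadm -m node -R -I iface0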
+ + +
+ Updating the Size of Your Multipath Device + +If multipathing is enabled on your system, you will also need to reflect the change in logical unit size to the logical unit's corresponding multipath +device (after resizing the logical unit). For Fedora 12 (and later), you can do this through multipathd. To do so, first ensure that multipathd is running using service multipathd status. Once you've verified that multipathd is operational, run the following command: + + +multipathd -k"resize map multipath_device" + +The multipath_device variable is the corresponding multipath entry of your device in +/dev/mapper. Depending on how multipathing is set up on your system, multipath_device +can be either of two formats: + + +mpathX, where X is the corresponding entry of +your device (for example, mpath0) +a WWID; for example, 3600508b400105e210000900000490000 + + + + + + +To determine which multipath entry corresponds to your resized logical unit, run multipath -ll. This displays a list of all existing multipath +entries in the system, along with the major and minor numbers of their corresponding devices. + + + + Important + Do not use multipathd -k"resize map multipath_device" if there are any commands queued to +multipath_device. That is, do not use this command when the no_path_retry parameter (in +/etc/multipath.conf) is set to "queue", and there are no active paths to the device. + + + + If your system is using an earlier version of Fedora, you will need to perform the following procedure to instruct the multipathd +daemon to recognize (and adjust to) the changes you made to the resized logical unit: + + + + Resizing the Corresponding Multipath Device (Required for Fedora 12 and earlier) + + resizing multipath device + resizing online resized logical units + + + + + Dump the device mapper table for the multipathed device using: + + dmsetup table multipath_device + + + + + Save the dumped device mapper table as table_name. This table will be re-loaded and edited later. + + + + + + entries, device mapper table + + Examine the device mapper table. Note that the first two numbers in each line correspond to the start and end sectors of the disk, respectively. + + + + Suspend the device mapper target: + dmsetup suspend multipath_device + + + + Open the device mapper table you saved earlier (i.e. table_name). Change the second number (i.e. the disk end sector) to reflect the new number of 512 byte sectors in the disk. For example, if the new disk size is 2GB, change the second number to 4194304. + + + + Reload the modified device mapper table: + dmsetup reload multipath_device table_name + + + + + Resume the device mapper target: + dmsetup resume multipath_device + + + + +For more information about multipathing, refer to the Using Device-Mapper Multipath guide (in ). +
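A minimal sketch of the multipathd-based method, assuming the resized logical unit's multipath entry is mpath0 (verify the actual entry name with multipath -ll first):

# Confirm multipathd is running, then find the multipath entry to resize
service multipathd status
multipath -ll

# Tell multipathd to re-read the device size and update the map
multipathd -k"resize map mpath0"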
+
+ + diff --git a/en-US/OSG_task_persistent-naming.xml b/en-US/OSG_task_persistent-naming.xml new file mode 100644 index 0000000..5e089aa --- /dev/null +++ b/en-US/OSG_task_persistent-naming.xml @@ -0,0 +1,182 @@ + + + +
+ Persistent Naming + + persistent naming + + + + The operating system issues I/O to a storage device by referencing the path that is used to reach it. For SCSI devices, the path consists of the following: + + +PCI identifier of the host bus adapter (HBA) +channel number on that HBA +the remote SCSI target address +the Logical Unit Number (LUN) + + + +This path-based address is not persistent. It may change any time the system is reconfigured (either by on-line reconfiguration, as described in this manual, or when the system is shutdown, reconfigured, and rebooted). It is even possible for the path identifiers to change when no physical reconfiguration has been done, as a result of timing variations during the discovery process when the system boots, or when a bus is re-scanned. + + +symbolic links in /dev/disk +persistent naming + + + + + +/dev/disk +persistent naming + + +The operating system provides several non-persistent names to represent these access paths to storage devices. One is the /dev/sd name; another is the major:minor number. A third is a symlink maintained in the /dev/disk/by-path/ directory. This symlink maps from the path identifier to the current /dev/sd name. For example, for a Fibre Channel device, the PCI info and Host:BusTarget:LUN info may appear as follows: + + + +pci-0000:02:0e.0-scsi-0:0:0:0 -> ../../sda + + + +For iSCSI devices, by-path/ names map from the target name and portal information to the sd name. + + + +It is generally not appropriate for applications to use these path-based names. This is because the storage device these paths reference may change, potentially causing incorrect data to be written to the device. Path-based names are also not appropriate for multipath devices, because the path-based names may be mistaken for separate storage devices, leading to uncoordinated access and unintended modifications of the data. + + +In addition, path-based names are system-specific. This can cause unintended data changes when the device is accessed by multiple systems, such as in a cluster. + + + +For these reasons, several persistent, system-independent, methods for identifying devices have been developed. The following sections discuss these in detail. + + +
+WWID + + + WWID + persistent naming + + + + World Wide Identifier (WWID) + persistent naming + + + + +The World Wide Identifier (WWID) can be used in reliably identifying devices. It is a persistent, system-independent ID that the SCSI Standard requires from all SCSI devices. The WWID identifier is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. + + + +This identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83) or Unit Serial Number (page 0x80). The mappings from these WWIDs to the current /dev/sd names can be seen in the symlinks maintained in the /dev/disk/by-id/ directory. + + + +For example, a device with a page 0x83 identifier would have: + + +scsi-3600508b400105e210000900000490000 -> ../../sda + + +Or, a device with a page 0x80 identifier would have: + + +scsi-SSEAGATE_ST373453LW_3HW1RHM6 -> ../../sda + + +Fedora automatically maintains the proper mapping from the WWID-based device name to a current /dev/sd name on that system. Applications can use the /dev/disk/by-id/ name to reference the data on the disk, even if the path to the device changes, and even when accessing the device from different systems. + + + +If there are multiple paths from a system to a device, device-mapper-multipath uses the WWID to detect this. Device-mapper-multipath then presents a single "pseudo-device" in /dev/mapper/wwid, such as /dev/mapper/3600508b400105df70000e00000ac0000. + + + +The command multipath -l shows the mapping to the non-persistent identifiers: Host:Channel:Target:LUN, /dev/sd name, and the major:minor number. + + +3600508b400105df70000e00000ac0000 dm-2 vendor,product +[size=20G][features=1 queue_if_no_path][hwhandler=0][rw] +\_ round-robin 0 [prio=0][active] + \_ 5:0:1:1 sdc 8:32 [active][undef] + \_ 6:0:1:1 sdg 8:96 [active][undef] +\_ round-robin 0 [prio=0][enabled] + \_ 5:0:0:1 sdb 8:16 [active][undef] + \_ 6:0:0:1 sdf 8:80 [active][undef] + + +Device-mapper-multipath automatically maintains the proper mapping of each WWID-based device name to its corresponding /dev/sd name on the system. These names are persistent across path changes, and they are consistent when accessing the device from different systems. + + + +When the user_friendly_names feature (of device-mapper-multipath) is used, the WWID is mapped to a name of the form /dev/mapper/mpathn. By default, this mapping is maintained in the file /var/lib/multipath/bindings. These mpathn names are persistent as long as that file is maintained. + + + +The multipath bindings file (by default, /var/lib/multipath/bindings) must be available at boot time. If /var is a separate file system from /, then you must change the default location of the file. For more information, refer to . + + + +If you use user_friendly_names, then additional steps are required to obtain consistent names in a cluster. Refer to the Consistent Multipath Device Names section in the Using Device-Mapper Multipath book. + + + +In addition to these persistent names provided by the system, you can also use udev rules to implement persistent names of your own, mapped to the WWID of the storage. For more information about this, refer to . + + + +udev +persistent naming + +
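For example, to see the WWID-based names available on a system and reference one persistently (the WWID and sd name shown are the illustrative values used above):

# List persistent WWID-based names and the sd devices they currently map to
ls -l /dev/disk/by-id/
# e.g. scsi-3600508b400105e210000900000490000 -> ../../sda

# Applications can then reference the stable path instead of /dev/sda:
# /dev/disk/by-id/scsi-3600508b400105e210000900000490000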
+ +
+UUID and Other Persistent Identifiers + + + UUID + persistent naming + + + + Universally Unique Identifier (UUID) + persistent naming + + + + +If a storage device contains a file system, then that file system may provide one or both of the following: + + + +Universally Unique Identifier (UUID) +File system label + + + +These identifiers are persistent, and based on metadata written on the device by certain applications. They may also be used to access the device using the symlinks maintained by the operating system in the /dev/disk/by-label/ (e.g. boot -> ../../sda1 ) and /dev/disk/by-uuid/ (e.g. f8bf09e3-4c16-4d91-bd5e-6f62da165c08 -> ../../sda1) directories. + + + +md and LVM write metadata on the storage device, and read that data when they scan devices. In each case, the metadata contains a UUID, so that the device can be identified regardless of the path (or system) used to access it. As a result, the device names presented by these facilities are persistent, as long as the metadata remains unchanged. + + + + + + + + + + + + +
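As a short illustration, blkid reports the UUID and label recorded in a file system's metadata, and that UUID can then be used in place of a path-based name; the device, mount point, and UUID below are the example values used above and are only illustrative:

# Show the UUID and label (if any) recorded in the file system metadata
blkid /dev/sda1

# A persistent /etc/fstab entry can then reference the UUID instead of the
# non-persistent /dev/sd name:
# UUID=f8bf09e3-4c16-4d91-bd5e-6f62da165c08  /boot  ext3  defaults  1 2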
+ + diff --git a/en-US/OSG_task_removing-devices.xml b/en-US/OSG_task_removing-devices.xml new file mode 100644 index 0000000..897bb19 --- /dev/null +++ b/en-US/OSG_task_removing-devices.xml @@ -0,0 +1,96 @@ + + + +
+Removing a Storage Device + + +removing devices + + + +devices, removing + + + + +Before removing access to the storage device itself, it is advisable to back up data from the device first. Afterwards, flush I/O and remove all operating system references to the device (as described below). If the device uses multipathing, then do this for the multipath "pseudo device" () and each of the identifiers that represent a path to the device. If you are only removing a path to a multipath device, and other paths will remain, then the procedure is simpler, as described in . + + + +Removal of a storage device is not recommended when the system is under memory pressure, since the I/O flush will add to the load. To determine the level of memory pressure, run the command vmstat 1 100; device removal is not recommended if: + + + +Free memory is less than 5% of the total memory in more than 10 samples per 100 (the command free can also be used to display the total memory). + + +Swapping is active (non-zero si and so columns in the vmstat output). + + + + +The general procedure for removing all access to a device is as follows: + + + + +Ensuring a Clean Device Removal + + +Close all users of the device and backup device data as needed. + + +Use umount to unmount any file systems that mounted the device. + + +Remove the device from any md and LVM volume using it. If the device is a member of an LVM Volume group, then it may be necessary to move data off the device using the pvmove command, then use the vgreduce command to remove the physical volume, and (optionally) pvremove to remove the LVM metadata from the disk. + + + +If the device uses multipathing, run multipath -l and note all the paths to the device. Afterwards, remove the multipathed device using multipath -f device. + + + +Run blockdev –flushbufs device to flush any outstanding I/O to all paths to the device. +This is particularly important for raw devices, where there is no umount or vgreduce operation to cause an I/O flush. + + + + +Remove any reference to the device's path-based name, like /dev/sd, /dev/disk/by-path or the major:minor number, in applications, scripts, or utilities on the system. This is important in ensuring that different devices added in the future will not be mistaken for the current device. + + + +Finally, remove each path to the device from the SCSI subsystem. To do so, use the command echo 1 > /sys/block/device-name/device/delete where device-name may be sde, for example. + + + +Another variation of this operation is echo 1 > /sys/class/scsi_device/h:c:t:l/device/delete, where h is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN. + + + + +The older form of these commands, echo "scsi remove-single-device 0 0 0 0" > /proc/scsi/scsi, is deprecated. + + + + + +You can determine the device-name, HBA number, HBA channel, SCSI target ID and LUN for a device from various commands, such as lsscsi, scsi_id, multipath -l, and ls -l /dev/disk/by-*. + + + +After performing , a device can be physically removed safely from a running system. It is not necessary to stop I/O to other devices while doing so. + + + +Other procedures, such as the physical removal of the device, followed by a rescan of the SCSI bus (as described in ) to cause the operating system state to be updated to reflect the change, are not recommended. This will cause delays due to I/O timeouts, and devices may be removed unexpectedly. 
If it is necessary to perform a rescan of an interconnect, it must be done while I/O is paused, as described in . + + + + + +
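The procedure above might look like the following for a hypothetical multipathed device whose paths are sdc and sdg and whose multipath name is mpath3 (all device names and the mount point are assumptions; identify the real ones with multipath -l and mount first):

# Check for memory pressure before starting (see the criteria above)
vmstat 1 100

# Unmount the device and remove its multipath map
umount /mnt/data
multipath -l
multipath -f mpath3

# Flush outstanding I/O on each path, then delete each path from the SCSI subsystem
blockdev --flushbufs /dev/sdc
blockdev --flushbufs /dev/sdg
echo 1 > /sys/block/sdc/device/delete
echo 1 > /sys/block/sdg/device/delete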
+ diff --git a/en-US/OSG_task_removing-path-to-storage-device.xml b/en-US/OSG_task_removing-path-to-storage-device.xml new file mode 100644 index 0000000..ff94e5d --- /dev/null +++ b/en-US/OSG_task_removing-path-to-storage-device.xml @@ -0,0 +1,54 @@ + + + +
+Removing a Path to a Storage Device + + +removing paths to a storage device + + + + path to storage devices, removing + + + +If you are removing a path to a device that uses multipathing (without affecting other paths to the device), then the general procedure is as follows: + + + +Removing a Path to a Storage Device + +Remove any reference to the device's path-based name, like /dev/sd or /dev/disk/by-path or the major:minor number, in applications, scripts, or utilities on the system. This is important in ensuring that different devices added in the future will not be mistaken for the current device. + + + +Take the path offline using echo offline > /sys/block/sda/device/state. + + + +This will cause any subsequent I/O sent to the device on this path to be failed immediately. Device-mapper-multipath will continue to use the remaining paths to the device. + + + +Remove the path from the SCSI subsystem. To do so, use the command echo 1 > /sys/block/device-name/device/delete where device-name may be sde, for example (as described in ). + + + + + + + + +After performing , the path can be safely removed from the running system. It is not necessary to stop I/O while this is done, as device-mapper-multipath will re-route I/O to remaining paths according to the configured path grouping and failover policies. + + + + + +Other procedures, such as the physical removal of the cable, followed by a rescan of the SCSI bus to cause the operating system state to be updated to reflect the change, are not recommended. This will cause delays due to I/O timeouts, and devices may be removed unexpectedly. If it is necessary to perform a rescan of an interconnect, it must be done while I/O is paused, as described in . + + +
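For example, to remove the path represented by sdc (a hypothetical device name) without disturbing the other paths of its multipath device:

# Fail the path immediately so device-mapper-multipath stops using it
echo offline > /sys/block/sdc/device/state

# Then remove the path from the SCSI subsystem
echo 1 > /sys/block/sdc/device/delete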
+ diff --git a/en-US/OSG_task_scanning-for-iscsi-devices-offload.xml b/en-US/OSG_task_scanning-for-iscsi-devices-offload.xml new file mode 100644 index 0000000..d693bf9 --- /dev/null +++ b/en-US/OSG_task_scanning-for-iscsi-devices-offload.xml @@ -0,0 +1,118 @@ + + + +
+<remark>[NEW] </remark>Binding/Unbinding an iface to a Portal + + + +iSCSI +offload and interface binding +binding/unbinding an iface to a portal + + + +binding/unbinding an iface to a portal +offload and interface binding +iSCSI + + + + + + +iface binding/unbinding +offload and interface binding +iSCSI + + + + + +Whenever iscsiadm is used to scan for interconnects, +it will first check the iface.transport settings +of each iface configuration in /var/lib/iscsi/ifaces. +The iscsiadm utility will then bind discovered portals to +any iface whose +iface.transport is tcp. + + + +This behavior was implemented for compatibility reasons. To override this, use the +-I iface_name to specify which portal +to bind to an iface, as in: + + + +iscsiadm -m discovery -t st -p target_IP:port -I iface_name -P 1 + + + + + +By default, the iscsiadm utility will not automatically bind any portals +to iface configurations that use offloading. This is because +such iface configurations will not have iface.transport +set to tcp. As such, the iface configurations +of Chelsio, Broadcom, and ServerEngines ports need to be manually bound to discovered portals. + + + + + + + +It is also possible to prevent a portal from binding to any existing +iface. To do so, use default +as the iface_name, as in: + + + +iscsiadm -m discovery -t st -p IP:port -I default -P 1 + + + +To remove the binding between a target and iface, use: + + + +iscsiadm -m node -targetname proper_target_name -I iface0 --op=delete + +Refer to for information on proper_target_name. + + + + + + +To delete all bindings for a specific iface, use: + + + +iscsiadm -m node -I iface_name --op=delete + + + + +To delete bindings for a specific portal (e.g. for Equalogic targets), use: + + + +iscsiadm -m node -p IP:port -I iface_name --op=delete + + + + + + +If there are no iface configurations defined in /var/lib/iscsi/iface +and the -I option is not used, iscsiadm will allow +the network subsystem to decide which device a specific portal should use. + + + + + + +
diff --git a/en-US/OSG_task_scanning-for-iscsi-devices.xml b/en-US/OSG_task_scanning-for-iscsi-devices.xml new file mode 100644 index 0000000..5d0e13c --- /dev/null +++ b/en-US/OSG_task_scanning-for-iscsi-devices.xml @@ -0,0 +1,218 @@ + + + +
+ <remark>[EDITED] </remark>Scanning iSCSI Interconnects +DON (mar03-2010): i edited this section as per Mike Christie's submission on RHEL iscsi setup + + + +interconnects (scanning) +iSCSI + + + +iSCSI +scanning interconnects + + + +scanning interconnects +iSCSI + + + + + + For iSCSI, if the targets send an iSCSI async event indicating new storage is added, then the scan is done automatically. Cisco +MDS and EMC Celerra support this feature. + + + However, if the targets do not send an iSCSI async event, you need to manually scan them using the iscsiadm utility. Before doing so, however, you need to first retrieve the proper --targetname and the --portal values. If your device model supports only a single logical unit and portal per target, use iscsiadm to issue a sendtargets command to the host, as in: + + + +iscsiadm -m discovery -t sendtargets -p target_IP:port + + + + + + + +new + + +The output will appear in the following format: + + + +target_IP:port,target_portal_group_tag proper_target_name + + + + + + + +For example, on a target with a proper_target_name + +of iqn.1992-08.com.netapp:sn.33615311 and a +target_IP:port of 10.15.85.19:3260, the +output may appear as: + + + + +10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311 +10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311 + + + + +In this example, the target has two portals, each using target_ip:ports +of 10.15.84.19:3260 and 10.15.85.19:3260. + + + + + +To see which iface configuration +will be used for each session, add the -P 1 option. This +option will print also session information in tree format, as in: + + + + Target: proper_target_name + Portal: target_IP:port,target_portal_group_tag + Iface Name: iface_name + + + + +For example, with iscsiadm -m discovery -t sendtargets -p 10.15.85.19:3260 -P 1, +the output may appear as: + + + +Target: iqn.1992-08.com.netapp:sn.33615311 + Portal: 10.15.84.19:3260,2 + Iface Name: iface2 + Portal: 10.15.85.19:3260,3 + Iface Name: iface2 + + + +This means that the target iqn.1992-08.com.netapp:sn.33615311 +will use iface2 as its iface configuration. + + + + + +/new + + + + +With some device models (e.g. from EMC and Netapp), however, a single target may have multiple logical units and/or portals. In this case, issue a sendtargets command to the host first to find new portals on the target. Then, rescan the existing sessions using: + + +iscsiadm -m session --rescan + + + +You can also rescan a specific session by specifying the session's SID value, as in: + + + +iscsiadm -m session -r SID --rescan +For information on how to retrieve a session's SID value, refer to . + + + + + +If your device supports multiple targets, you will need to issue a sendtargets command to the hosts to find new portals for each target. Then, rescan existing sessions to discover new logical units on existing sessions (i.e. using the --rescan option). + + + + +The sendtargets command used to retrieve --targetname and --portal values overwrites the contents of the /var/lib/iscsi/nodes database. This database will then be repopulated using the settings in /etc/iscsi/iscsid.conf. However, this will not occur if a session is currently logged in and in use. + + + +To safely add new targets/portals or delete old ones, use the -o new or -o delete options, respectively. 
For example, to add new targets/portals without overwriting /var/lib/iscsi/nodes, use the following command: + + + + + +iscsiadm -m discovery -t st -p target_IP -o new + + + +To delete /var/lib/iscsi/nodes entries that the target did not display during discovery, use: + + + +iscsiadm -m discovery -t st -p target_IP -o delete + + + +You can also perform both tasks simultaneously, as in: + + + +iscsiadm -m discovery -t st -p target_IP -o delete -o new + + + + + + + + + +The sendtargets command will yield the following output: + + +ip:port,target_portal_group_tag proper_target_name + + +For example, given a device with a single target, logical unit, and portal, with equallogic-iscsi1 as your target_name, the output should appear similar to the following: + +10.16.41.155:3260,0 iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 + + +Note that proper_target_name and +ip:port,target_portal_group_tag are identical to the values of the same name in . + + +At this point, you now have the proper --targetname and --portal values needed to manually scan for iSCSI devices. To do so, run the following command: + + + + iscsiadm --mode node --targetname proper_target_name --portal ip:port,target_portal_group_tag \ --login + +This is a single command split into multiple lines, to accommodate printed and PDF versions of this document. All concatenated lines — preceded by the backslash (\) — should be treated as one command, sans backslashes. + + + + + + Using our previous example (where proper_target_name is equallogic-iscsi1), the full command would be: + + + +iscsiadm --mode node --targetname \ iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 \ --portal 10.16.41.155:3260,0 --login + + + + + + + +
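Putting the pieces above together for a hypothetical portal at 10.16.41.155:

# Add new targets/portals and drop records for portals the target no
# longer reports, without overwriting entries still in use
iscsiadm -m discovery -t st -p 10.16.41.155 -o delete -o new

# Re-scan all existing sessions for new or modified logical units
iscsiadm -m session --rescan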
diff --git a/en-US/OSG_task_scanning-for-new-devices.xml b/en-US/OSG_task_scanning-for-new-devices.xml new file mode 100644 index 0000000..5a64703 --- /dev/null +++ b/en-US/OSG_task_scanning-for-new-devices.xml @@ -0,0 +1,69 @@ + + + +
+ Scanning for New Devices + +scanning for new devices + + + +new devices, scanning for + + + + +devices (new), scanning for + + + + Testing has revealed that reconfiguring online storage may cause some unexpected device name changes; this will result in data loss if the device is still performing I/O. As such, before attempting to dynamically configure or unconfigure a logical unit, it is strongly recommended that you quiesce I/O first. After reconfiguring a logical unit, check if the device name has changed before resuming I/O to avoid any data loss. + + + +Consult your storage array vendor documentation for possible restrictions regarding this issue. + + + + + + If you load a driver before adding the corresponding storage device, you will likely need to manually add the new storage to the operating system. As such, you will need the corresponding LUN of the added storage device. + + + To scan all buses and targets for new logical units, use the following command: + + + + echo "- - -" > /sys/class/scsi_host/host/scan + +Replace host with the appropriate host number (host0, host1, host2, and so on). + + + +Fedora 13 includes a rescan-scsi-bus.sh script that automatically updates the logical unit configuration of the host as needed after adding a new storage device. For more information about this script (along with its related known issues), refer to . + + + +
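For example, assuming the newly added storage is reachable through host4 (a hypothetical host number), you could list the SCSI hosts, scan the relevant one, and confirm that the new logical unit appeared:

# List the SCSI hosts present on the system
ls /sys/class/scsi_host/

# Scan all channels, targets, and LUNs on the chosen host
echo "- - -" > /sys/class/scsi_host/host4/scan

# Verify that the new device was added
cat /proc/partitions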
+ Scanning Fibre-Channel Devices That Support issue_lip + To perform a scan for fibre channel adapters that support issue_lip, use: + +echo "1" > /sys/class/fc_host/host/issue_lip + + + Bear in mind that issue_lip is not a synchronous operation. As such, you can only perform fibre channel device discovery after issue_lip completes its scan. + + + + The lpfc and qla2xxx drivers support issue_lip. For more information about the API capabilities supported by each driver in Fedora, refer to . + + + + Note that the proc interface is deprecated; as such, do not use it. + +
+ + +
+ diff --git a/en-US/OSG_task_scanning-storage-interconnects.xml b/en-US/OSG_task_scanning-storage-interconnects.xml new file mode 100644 index 0000000..bf09c5d --- /dev/null +++ b/en-US/OSG_task_scanning-storage-interconnects.xml @@ -0,0 +1,61 @@ + + + +
+Scanning Storage Interconnects + + +scanning storage interconnects + + + + storage interconnects, scanning + + + +There are several commands available that allow you to reset and/or scan one or more interconnects, potentially adding and removing multiple devices in one operation. This type of scan can be disruptive, as it can cause delays while I/O operations time out, and remove devices unexpectedly. As such, you should use this type of scan only when necessary. In addition, the following restrictions must be observed when scanning storage interconnects: + + + +All I/O on the affected interconnects must be paused and flushed before executing the procedure, and the results of the scan checked before I/O is resumed. +As with removing a device, interconnect scanning is not recommended when the system is under memory pressure. To determine the level of memory pressure, run the command vmstat 1 100; interconnect scanning is not recommended if free memory is less than 5% of the total memory in more than 10 samples per 100. It is also not recommended if swapping is active (non-zero si and so columns in the vmstat output). The command free can also display the total memory. + + + + + +The following commands can be used to scan storage interconnects. + + + +echo "1" > /sys/class/fc_host/host/issue_lip +This operation performs a Loop Initialization Protocol (LIP) and then scans the interconnect and causes the SCSI layer to be updated to reflect the devices currently on the bus. A LIP is, essentially, a bus reset, and will cause device addition and removal. This procedure is necessary to configure a new SCSI target on a Fibre Channel interconnect. + + +Bear in mind that issue_lip is an asynchronous operation. The command may complete before the entire scan has completed. You must monitor /var/log/messages to determine when it is done. + + +The lpfc and qla2xxx drivers support issue_lip. For more information about the API capabilities supported by each driver in Fedora 13, refer to . + + + + + +/usr/bin/rescan-scsi-bus.sh + +By default, this script scans all the SCSI buses on the system, updating the SCSI layer to reflect new devices on the bus. The script provides additional options to allow device removal and the issuing of LIPs. For more information about this script (including known issues), refer to . + + + +echo "- - -" > /sys/class/scsi_host/hosth/scan + +This is the same command described in to add a storage device or path. In this case, however, the channel number, SCSI target ID, and LUN values are replaced by wildcards. Any combination of identifiers and wildcards is allowed, so you can make the command as specific or broad as needed. This procedure will add LUNs, but not remove them. + + +rmmod driver-name or modprobe driver-name +These commands completely re-initialize the state of all interconnects controlled by the driver. Although this is extreme, it may be appropriate in some situations. This may be used, for example, to re-start the driver with a different module parameter value. + + +
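For instance, the sysfs scan command can be narrowed from a full wildcard scan to a specific channel or target; the host, channel, and target numbers below are assumptions for illustration:

# Scan everything on host2 (all channels, targets, and LUNs)
echo "- - -" > /sys/class/scsi_host/host2/scan

# Scan only channel 0, target 3 on host2, all LUNs
echo "0 3 -" > /sys/class/scsi_host/host2/scan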
+ diff --git a/en-US/OSG_task_viewing-ifaces.xml b/en-US/OSG_task_viewing-ifaces.xml new file mode 100644 index 0000000..51c74dd --- /dev/null +++ b/en-US/OSG_task_viewing-ifaces.xml @@ -0,0 +1,215 @@ + + + +
+<remark>[NEW] </remark>Viewing Available iface Configurations + + + +iSCSI +offload and interface binding +viewing available iface configurations + + + +viewing available iface configurations +offload and interface binding +iSCSI + + + + + + +iSCSI +offload and interface binding +iface configurations, viewing + + + +iface configurations, viewing +offload and interface binding +iSCSI + + + + + + + +Fedora 11 (and later) supports iSCSI offload and interface binding +for the following iSCSI initiator implementations: + + + + +iSCSI +offload and interface binding +initiator implementations + + + +initiator implementations +offload and interface binding +iSCSI + + + + + + + +iSCSI +offload and interface binding +software iSCSI (initiator implementations) + + + +software iSCSI (initiator implementations) +offload and interface binding +iSCSI + + +Software iSCSI — like the scsi_tcp +and ib_iser modules, this stack allocates an iSCSI +host instance (i.e. scsi_host) per session, with a single connection per session. As a +result, /sys/class_scsi_host and /proc/scsi will +report a scsi_host for each connection/session you are logged into. + + + + + + +iSCSI +offload and interface binding +offload iSCSI (initiator implementations) + + + +offload iSCSI (initiator implementations) +offload and interface binding +iSCSI + +Offload iSCSI — like the +Chelsio cxgb3i, Broadcom bnx2i +and ServerEngines be2iscsi modules, this stack allocates a +scsi_host for each PCI device. As such, each port on a +host bus adapter will show up as a different PCI device, with a different +scsi_host per HBA port. + + + + + + + +To manage both types of initiator implementations, iscsiadm uses the +iface structure. With this structure, an iface +configuration must be entered in /var/lib/iscsi/ifaces for each +HBA port, software iSCSI, or network device (ethX) +used to bind sessions. + + + +To view available iface configurations, run iscsiadm -m iface. +This will display iface information in the following format: + + + +iface_name transport_name,hardware_address,ip_address,net_ifacename,initiator_name + + + +Refer to the following table for an explanation of each value/setting. + + + + +iSCSI +offload and interface binding +iface settings + + + +iface settings +offload and interface binding +iSCSI + + + + + +iface Settings + +SettingDescription + + + +iface_name iface configuration name. +transport_name Name of driver +hardware_address MAC address +ip_address IP address to use for this port +net_iface_name Name used for the vlan or alias binding of a software iSCSI session. For + iSCSI offloads, net_iface_name will be <empty> because this value is not persistent across reboots. + +initiator_name This setting is used to override a default name for the initiator, which is defined in /etc/iscsi/initiatorname.iscsi + +
+ + + + + + +The following is a sample output of the iscsiadm -m iface command: + + +iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-06.com.redhat:madmax +iface1 qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,default,iqn.2005-06.com.redhat:madmax + + + +For software iSCSI, each iface configuration +must have a unique name (of fewer than 65 characters). The iface_name for network devices that support offloading +appears in the format transport_name.hardware_name. + +For example, the sample output of iscsiadm -m iface on a system using +a Chelsio network card might appear as: + +default tcp,<empty>,<empty>,<empty>,<empty> +iser iser,<empty>,<empty>,<empty>,<empty> +cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,<empty> + + +It is also possible to display the settings of a specific iface +configuration in a more friendly way. To do so, use the option -I +iface_name. This will display the settings in the following +format: + +iface.setting = value + + +Using the previous example, the iface settings of the same +Chelsio network card (i.e. iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07) +would appear as: + + + +# BEGIN RECORD 2.0-871 +iface.iscsi_ifacename = cxgb3i.00:07:43:05:97:07 +iface.net_ifacename = <empty> +iface.ipaddress = <empty> +iface.hwaddress = 00:07:43:05:97:07 +iface.transport_name = cxgb3i +iface.initiatorname = <empty> +# END RECORD + + + + +
diff --git a/en-US/Preface.xml b/en-US/Preface.xml new file mode 100644 index 0000000..4d80778 --- /dev/null +++ b/en-US/Preface.xml @@ -0,0 +1,13 @@ + + + + + Preface + + + + + + + diff --git a/en-US/Revision_History.xml b/en-US/Revision_History.xml new file mode 100644 index 0000000..f88830d --- /dev/null +++ b/en-US/Revision_History.xml @@ -0,0 +1,30 @@ + + + + + Revision History + + + + + + 1.0 + Thu Jul 09 2009 + + Don + Domingo + ddomingo@redhat.com + + + + initial build + + + + + + + + + diff --git a/en-US/Storage_Administration_Guide.ent b/en-US/Storage_Administration_Guide.ent new file mode 100644 index 0000000..69875b9 --- /dev/null +++ b/en-US/Storage_Administration_Guide.ent @@ -0,0 +1,254 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +root"> + + + + + + + + + + + + + + +Big Board"> + + + + + + +Command Center"> + + + + +Current State area"> + + + + + + + + + + + + + + + + + + + + + + +Main Menu"> + + + + +Network Map"> +Network Status bar"> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/en-US/Storage_Administration_Guide.xml b/en-US/Storage_Administration_Guide.xml new file mode 100644 index 0000000..125a2d6 --- /dev/null +++ b/en-US/Storage_Administration_Guide.xml @@ -0,0 +1,52 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/en-US/ch-diskstorage.xml b/en-US/ch-diskstorage.xml new file mode 100644 index 0000000..6510292 --- /dev/null +++ b/en-US/ch-diskstorage.xml @@ -0,0 +1,25 @@ + + + + + Disk Storage + + +This part describes how to manage and secure local storage devices. It also dicusses the management of RAID devices and logical volumes. + + + + + + + + + + + + + + + + diff --git a/en-US/ch-dmmultipath.xml b/en-US/ch-dmmultipath.xml new file mode 100644 index 0000000..2e0757a --- /dev/null +++ b/en-US/ch-dmmultipath.xml @@ -0,0 +1,25 @@ + + + + + <remark><command>dmm </command></remark>Device Mapper Multipathing + + +Device Mapper Multipathing (DM-Multipath) allows you to configure +multiple I/O paths between server nodes and storage arrays into +a single device. +These I/O paths are physical SAN connections +that can include separate cables, switches, +and controllers. +Multipathing aggregates the I/O paths, creating +a new device that consists of the aggregated paths. + + + + + + + + + diff --git a/en-US/ch-dmultipath_virtstorage.xml b/en-US/ch-dmultipath_virtstorage.xml new file mode 100644 index 0000000..3f79e73 --- /dev/null +++ b/en-US/ch-dmultipath_virtstorage.xml @@ -0,0 +1,105 @@ + + + + +Device Mapper Multipathing and Virtual Storage + + + +Fedora 13 also supports DM-Multipath and +virtual storage. Both features are documented in detail +in other stand-alone books also provided by &RH;. + + +
+Virtual Storage + +virtual storage + + +Fedora 13 supports the following file systems/online storage methods for virtual storage: + +content for this section will be pulled from the Virtualization book. + +Fibre Channel +iSCSI +NFS +GFS2 + + + +Virtualization in Fedora 13 uses libvirt to manage virtual instances. The +libvirt utility uses the concept of storage pools to manage storage +for virtualized guests. A storage pool is storage that can be divided up into smaller volumes or allocated +directly to a guest. Volumes of a storage pool can be allocated to virtualized guests. There are two categories of storage pools available: + + + + +Local storage pools +Local storage covers storage devices, files or directories directly attached to a host. Local storage includes local directories, directly attached disks, and LVM Volume Groups. + + + +Networked (shared) storage pools +Networked storage covers storage devices shared over a network using standard protocols. Networked storage includes shared storage devices using Fibre Channel, iSCSI, NFS, GFS2, and SCSI RDMA protocols. Networked storage is a requirement for migrating virtualized guests between hosts. + + + + + +For comprehensive information on the deployment and configuration of virtual storage instances in your environment, +please refer to the Virtualization Storage section of the Virtualization +guide provided by &RH;. + +
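As a brief, hedged sketch of how libvirt exposes storage pools and their volumes from the command line (the pool name default is an assumption; your pools may be named differently):

# List the storage pools defined on the host
virsh pool-list --all

# List the volumes in a pool that can be allocated to virtualized guests
virsh vol-list default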
+ +
+DM-Multipath + +device-mapper multipathing + + +Device Mapper Multipathing (DM-Multipath) is a feature that allows you to configure multiple +I/O paths between server nodes and storage arrays into a single device. These +I/O paths are physical SAN connections that can include separate cables, switches, +and controllers. Multipathing aggregates the I/O paths, creating a new device +that consists of the aggregated paths. + + + +DM-Multipath is used primarily for the following reasons: + + + + +Redundancy + + +DM-Multipath can provide failover in an active/passive configuration. In an active/passive configuration, only half the paths are used at any time for I/O. If any element of an I/O path (the cable, switch, or controller) fails, DM-Multipath switches to an alternate path. + + + + + +Improved Performance + + +DM-Multipath can be configured in active/active mode, where I/O is spread over the paths in a round-robin fashion. In some configurations, DM-Multipath can detect loading on the I/O paths and dynamically re-balance the load. + + + + + + +For comprehensive information on the deployment and configuration of DM-Multipath in your environment, +please refer to the Using DM-Multipath guide provided by &RH;. + + + +
+ + +
diff --git a/en-US/ch-filesystems.xml b/en-US/ch-filesystems.xml new file mode 100644 index 0000000..ca44c45 --- /dev/null +++ b/en-US/ch-filesystems.xml @@ -0,0 +1,36 @@ + + + + + File Systems + + The term file system refers to the series of files and + directories stored on a computer. A file system can have + different formats called file system + types. These formats determine how the information + is stored as files and directories. Some file system types + store redundant copies of the data, while some file system + types make hard drive access faster. This chapter discusses the + ext3, swap, and RAID file system types. It also discusses + the parted utility to + manage partitions and access control lists (ACLs) to customize + file permissions. + + + + + + + + + + + + + + + + + + diff --git a/en-US/ch-lvm.xml b/en-US/ch-lvm.xml new file mode 100644 index 0000000..1881f7f --- /dev/null +++ b/en-US/ch-lvm.xml @@ -0,0 +1,25 @@ + + + + + Test + + This is a test paragraph + +
+ Section 1 Test + + Test of a section + +
+ +
+ Section 2 Test + + Test of a section + +
+ +
+ diff --git a/en-US/ch-osrg.xml b/en-US/ch-osrg.xml new file mode 100644 index 0000000..2728de7 --- /dev/null +++ b/en-US/ch-osrg.xml @@ -0,0 +1,79 @@ + + + + + <remark><command>osrg</command> </remark>Online Storage Management + + + +online storage +overview + + + +overview +online storage + + + + +online storage +overview +sysfs + + + +sysfs +overview +online storage + + + + + + It is often desirable to add, remove or re-size storage devices while the operating system is running, and without rebooting. This chapter outlines the procedures that may be used to reconfigure storage devices on Fedora 13 host systems while the system is running. It covers iSCSI and Fibre Channel storage interconnects; other interconnect types may be added it the future. + + + This chapter focuses on adding, removing, modifying, and monitoring storage devices. It does not discuss the Fibre Channel or iSCSI protocols in detail. For more information about these protocols, refer to other documentation. + + + + +This chapter makes reference to various sysfs objects. Fedora advises that the sysfs object names and directory structure are subject to change in major Fedora releases. This is because the upstream Linux kernel does not provide a stable internal API. For guidelines on how to reference sysfs objects in a transportable way, refer to the document Documentation/sysfs-rules.txt in the kernel source tree for guidelines. + +need link to actual doc, or if local, an absolute file path + + + +Online storage reconfiguration must be done carefully. System failures or interruptions during the process can lead to unexpected results. You should reduce system load to the maximum extent possible during the change operations. This will reduce the chance of I/O errors, out-of-memory errors, or similar errors occurring in the midst of a configuration change. The following sections provide more specific guidelines regarding this. + + + +In addition, you should back up all data before reconfiguring online storage.  + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/en-US/ch-overview_main.xml b/en-US/ch-overview_main.xml new file mode 100644 index 0000000..fe44124 --- /dev/null +++ b/en-US/ch-overview_main.xml @@ -0,0 +1,189 @@ + + + + + Overview + + +overview + + +introduction + + +The Storage Administration Guide contains extensive information +on supported file systems and data storage features in Fedora 13. This +book is intended as a quick reference for administrators managing single-node (i.e. non-clustered) +storage solutions. + + + + + + +
+What's New in Fedora 13 + + + + +This release of Fedora features several improvements +in file system support and storage device management. Support for the following +file systems have now been added: + + + +ext4 +GFS2 +XFS + + + +Fedora 13 also features the following file system enhancements: + + + +File System Encryption + +overview +file system encryption + + + +file system encryption +overview + + + + +overview +encryption, file system + + + +encryption, file system +overview + + + + + + +overview +ecryptfs + + + +ecryptfs +overview + + + +You can now encrypt a file system at mount using eCryptfs, which provides +an encryption layer on top of an actual file system. +This "pseudo-file system" +allows per-file and file name encryption, which offers more granular encryption than encrypted block devices. +For more information +about file system encryption, refer to . + + + + + +File System Caching + +overview +file system caching + + + +file system caching +overview + + + + +overview +caching, file system + + + +caching, file system +overview + + + + + + +overview +fs-cache + + + +fs-cache +overview + + + + +FS-Cache allows you to use local storage for caching +data from file systems served over the network (e.g. through NFS). This helps minimize +network traffic, although it does not guarantee faster access to data over the network. FS-Cache allows a file system on a server to interact directly with a client's local cache without creating an overmounted file system. For more information about FS-Cache, refer to . + + + + + +I/O Limit Processing + +overview +I/O limit processing + + + +I/O limit processing +overview + + + +processing, I/O limit +overview + + + +The Linux I/O stack can now process I/O limit information for +devices that provide it. This allows storage management tools to +better optimize I/O for some devices. For more information on this, refer to +. + + + + + + + +ext4 Support + +The ext4 file system is fully supported in this release. It is now the default file system of Fedora 13, supporting an unlimited number of subdirectories. It also features more granular timestamping, extended attributes support, and quota journalling. For more information on ext4, refer to . + + + + + +Network Block Storage + +Fibre-channel over ethernet is now supported. This allows a fibre-channel interface to use 10-Gigabit ethernet networks while preserving the fibre-channel protocol. For instructions on how to set this up, refer to . + + + + +
+ + + + + +
+ diff --git a/en-US/ch-planning_storage_strategy.xml b/en-US/ch-planning_storage_strategy.xml new file mode 100644 index 0000000..d6349c2 --- /dev/null +++ b/en-US/ch-planning_storage_strategy.xml @@ -0,0 +1,12 @@ + + + + + Planning A Storage Strategy + +CONTENT TBA. Now that you know how to manage storage devices, learn how to build an awesome one from scratch. Include best practices. + + + + diff --git a/en-US/ch-securingstorage.xml b/en-US/ch-securingstorage.xml new file mode 100644 index 0000000..8fc9be1 --- /dev/null +++ b/en-US/ch-securingstorage.xml @@ -0,0 +1,11 @@ + + + + + Storage Backup and Security + +CONTENT TBA. Learn how to secure access to your storage devices. Configure and prepare for disaster with backup plans and techniques to secure data in the event of meltdown. + + + diff --git a/en-US/ch-whatsnew.xml b/en-US/ch-whatsnew.xml new file mode 100644 index 0000000..b5600be --- /dev/null +++ b/en-US/ch-whatsnew.xml @@ -0,0 +1,25 @@ + + + + +Major Updates + +CONTENT TBA + + +this chapter will serve as an overview for all new features in RHEL6. each new feature will include a link to is respective section + +
+New File System and Storage Features +new features in RHEL6 + +
+ +
+New Storage Management Features +new features in RHEL6 + +
+ +
diff --git a/en-US/glossary.xml b/en-US/glossary.xml new file mode 100644 index 0000000..e4391cb --- /dev/null +++ b/en-US/glossary.xml @@ -0,0 +1,177 @@ + + + + +Glossary + + + + +This glossary defines common terms relating to file systems and storage used throughout the Storage Administration Guide. + + + +Delayed Allocation + + An allocator behavior in which disk locations are chosen when data + is flushed to disk, rather than when the write occurs. This can + generally lead to more efficient allocation because the allocator + is called less often and with larger requests. + + + + + + + Persistent Preallocation + + +A type of file allocation which chooses locations on disk, and marks +these blocks as used regardless of when or if they are written. Until +data is written into these blocks, reads will return 0s. +Preallocation is performed with the fallocate() glibc function. + + + + + + + + + + +Stripe Unit + + Also sometimes referred to as stride or chunk-size. The stripe unit + is the amount of data written to one component of striped storage + before moving on to the next. Specified in byte or file system block + units. + + + + + +Stripe Width + + The number of individual data stripe units in striped storage + (excluding parity). Depending on the administrative tool used, may + be specified in byte or file system block units, or in multiples of + the stripe unit. + + + + + +Stripe-aware allocation + + An allocator behavior in which allocations and I/O are well-aligned + to underlying striped storage. This depends on stripe information + being available at mkfs time as well. Doing well-aligned allocation + I/O can avoid inefficient read-modify-write cycles on the underlying + storage. + + + + + +Extent + + A unit of file allocation, stored in the file's metadata + as an offset, length pair. A single extent record can describe + many contiguous blocks in a file. + + + + + +Quota + + +A limit on block or inode usage of individual users and groups in a +file system, set by the administrator. + + + + + + + + +Fragmentation + + The condition in which a file's data blocks are not allocated + in contiguous physical (disk) locations for contiguous logical + offsets within the file. File fragmentation can lead to poor + performance in some situations, due to disk seek time. + + + + + +Defragmentation + + The act of reorganizing a file's data blocks so that they are + more physically contiguous on disk. + + + + + +Extended Attributes + + Name/Value metadata pairs which may be associated with a file. + + + + + +POSIX Access Control Lists (ACLs) + + Metadata attached to a file which permits more fine-grained access controls. + ACLS are often implemented as a special type of extended attribute. + + + + + +Write Barriers + + A method to enforce consistent I/O ordering on storage devices which have + volatile write caches. Barriers must be used to ensure that after a power + loss, the ordering guarantees required by metadata journalling are not +violated due to the storage hardware writing out blocks from its +volatile write cache in a different order than the operating system +requested. + + + + + + + + + +Metadata Journaling + + A method used to ensure that a file system's metadata is consistent even + after a system crash. Metadata journalling can take different forms, but + in each case a journal or log can be replayed after a crash, writing only + consistent transactional changes to the disk. 
+ + + + +File System Repair (fsck) + + A method of verifying and repairing consistency of a file system's metadata. + May be needed post-crash for non-journalling file systems, or after a hardware + failure or kernel bug. + + + + + + + + diff --git a/en-US/images/fig-gfs2-gnbd-san.png b/en-US/images/fig-gfs2-gnbd-san.png new file mode 100644 index 0000000..29c94a3 Binary files /dev/null and b/en-US/images/fig-gfs2-gnbd-san.png differ diff --git a/en-US/images/fig-gfs2-gnbd-storage.png b/en-US/images/fig-gfs2-gnbd-storage.png new file mode 100644 index 0000000..710b194 Binary files /dev/null and b/en-US/images/fig-gfs2-gnbd-storage.png differ diff --git a/en-US/images/fig-gfs2-with-san.png b/en-US/images/fig-gfs2-with-san.png new file mode 100644 index 0000000..e9e8455 Binary files /dev/null and b/en-US/images/fig-gfs2-with-san.png differ diff --git a/en-US/images/fs-cache.png b/en-US/images/fs-cache.png new file mode 100644 index 0000000..89a293d Binary files /dev/null and b/en-US/images/fs-cache.png differ diff --git a/en-US/images/gfs-fig-gfs-gnbd-san.png b/en-US/images/gfs-fig-gfs-gnbd-san.png new file mode 100644 index 0000000..b58c9fd Binary files /dev/null and b/en-US/images/gfs-fig-gfs-gnbd-san.png differ diff --git a/en-US/images/gfs-fig-gfs-gnbd-storage.png b/en-US/images/gfs-fig-gfs-gnbd-storage.png new file mode 100644 index 0000000..e559f5e Binary files /dev/null and b/en-US/images/gfs-fig-gfs-gnbd-storage.png differ diff --git a/en-US/images/gfs-fig-gfs-with-san.png b/en-US/images/gfs-fig-gfs-with-san.png new file mode 100644 index 0000000..9225f66 Binary files /dev/null and b/en-US/images/gfs-fig-gfs-with-san.png differ diff --git a/en-US/images/gnome-system-monitor-filesystems.png b/en-US/images/gnome-system-monitor-filesystems.png new file mode 100644 index 0000000..18854ce Binary files /dev/null and b/en-US/images/gnome-system-monitor-filesystems.png differ diff --git a/en-US/images/icon.svg b/en-US/images/icon.svg new file mode 100644 index 0000000..c471a60 --- /dev/null +++ b/en-US/images/icon.svg @@ -0,0 +1,3936 @@ + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + id="path2858" /> + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/en-US/images/lvg.png b/en-US/images/lvg.png new file mode 100644 index 0000000..c7a6719 Binary files /dev/null and b/en-US/images/lvg.png differ diff --git a/en-US/images/lvm-auto-config.png b/en-US/images/lvm-auto-config.png new file mode 100644 index 0000000..bb8d7e4 Binary files /dev/null and b/en-US/images/lvm-auto-config.png differ diff --git a/en-US/images/lvm-compo-extent-map.png b/en-US/images/lvm-compo-extent-map.png new file mode 100644 index 0000000..c085fc8 Binary files /dev/null and b/en-US/images/lvm-compo-extent-map.png differ diff --git a/en-US/images/lvm-compo-mirrored_vol.png b/en-US/images/lvm-compo-mirrored_vol.png new file mode 100644 index 0000000..84c20f0 Binary files /dev/null and b/en-US/images/lvm-compo-mirrored_vol.png differ diff --git a/en-US/images/lvm-compo-physvol.png b/en-US/images/lvm-compo-physvol.png new file mode 100644 index 0000000..45fe690 Binary files /dev/null and b/en-US/images/lvm-compo-physvol.png differ diff --git a/en-US/images/lvm-compo-stripedvol.png b/en-US/images/lvm-compo-stripedvol.png new file mode 100644 index 0000000..f5e3fe7 Binary files /dev/null and b/en-US/images/lvm-compo-stripedvol.png differ diff --git a/en-US/images/lvm-compo-uneven_pvs.png b/en-US/images/lvm-compo-uneven_pvs.png new file mode 100644 index 0000000..d264266 Binary files /dev/null and b/en-US/images/lvm-compo-uneven_pvs.png differ diff --git a/en-US/images/lvm-compo-uneven_vols.png b/en-US/images/lvm-compo-uneven_vols.png new file mode 100644 index 0000000..8f69f7e Binary files /dev/null and b/en-US/images/lvm-compo-uneven_vols.png differ diff --git a/en-US/images/lvm-dvcmap-multipathmap.png b/en-US/images/lvm-dvcmap-multipathmap.png new file mode 100644 index 0000000..1a8bd21 Binary files /dev/null and b/en-US/images/lvm-dvcmap-multipathmap.png differ diff --git a/en-US/images/lvm-main1.png b/en-US/images/lvm-main1.png new file mode 100644 index 0000000..b31935d Binary files /dev/null and b/en-US/images/lvm-main1.png differ diff --git a/en-US/images/lvm-main10.png b/en-US/images/lvm-main10.png new file mode 100644 index 0000000..d69338d Binary files /dev/null and b/en-US/images/lvm-main10.png differ diff --git a/en-US/images/lvm-main13.png b/en-US/images/lvm-main13.png new file mode 100644 index 0000000..5dfb234 Binary files /dev/null and b/en-US/images/lvm-main13.png differ diff --git a/en-US/images/lvm-main14.png b/en-US/images/lvm-main14.png new file mode 100644 index 0000000..29679e9 Binary files /dev/null and b/en-US/images/lvm-main14.png differ diff --git a/en-US/images/lvm-main15.png b/en-US/images/lvm-main15.png new file mode 100644 index 0000000..aaf3437 Binary files /dev/null and b/en-US/images/lvm-main15.png differ diff --git a/en-US/images/lvm-main16.png b/en-US/images/lvm-main16.png new file mode 100644 index 0000000..c621f91 Binary files /dev/null and b/en-US/images/lvm-main16.png differ diff --git a/en-US/images/lvm-main17.png b/en-US/images/lvm-main17.png new file mode 100644 index 0000000..3592633 Binary files /dev/null and 
b/en-US/images/lvm-main17.png differ diff --git a/en-US/images/lvm-main18.png b/en-US/images/lvm-main18.png new file mode 100644 index 0000000..b249b29 Binary files /dev/null and b/en-US/images/lvm-main18.png differ diff --git a/en-US/images/lvm-main2.png b/en-US/images/lvm-main2.png new file mode 100644 index 0000000..262b114 Binary files /dev/null and b/en-US/images/lvm-main2.png differ diff --git a/en-US/images/lvm-main21.png b/en-US/images/lvm-main21.png new file mode 100644 index 0000000..00bd4f9 Binary files /dev/null and b/en-US/images/lvm-main21.png differ diff --git a/en-US/images/lvm-main23.png b/en-US/images/lvm-main23.png new file mode 100644 index 0000000..9b44669 Binary files /dev/null and b/en-US/images/lvm-main23.png differ diff --git a/en-US/images/lvm-main26.png b/en-US/images/lvm-main26.png new file mode 100644 index 0000000..bb6bff0 Binary files /dev/null and b/en-US/images/lvm-main26.png differ diff --git a/en-US/images/lvm-main27.png b/en-US/images/lvm-main27.png new file mode 100644 index 0000000..6948d82 Binary files /dev/null and b/en-US/images/lvm-main27.png differ diff --git a/en-US/images/lvm-main28.png b/en-US/images/lvm-main28.png new file mode 100644 index 0000000..fc8707d Binary files /dev/null and b/en-US/images/lvm-main28.png differ diff --git a/en-US/images/lvm-main3.png b/en-US/images/lvm-main3.png new file mode 100644 index 0000000..9d219c4 Binary files /dev/null and b/en-US/images/lvm-main3.png differ diff --git a/en-US/images/lvm-main30.png b/en-US/images/lvm-main30.png new file mode 100644 index 0000000..6a7c34b Binary files /dev/null and b/en-US/images/lvm-main30.png differ diff --git a/en-US/images/lvm-main32.png b/en-US/images/lvm-main32.png new file mode 100644 index 0000000..84b6b88 Binary files /dev/null and b/en-US/images/lvm-main32.png differ diff --git a/en-US/images/lvm-main33.png b/en-US/images/lvm-main33.png new file mode 100644 index 0000000..cda6747 Binary files /dev/null and b/en-US/images/lvm-main33.png differ diff --git a/en-US/images/lvm-main34.png b/en-US/images/lvm-main34.png new file mode 100644 index 0000000..9e5fe58 Binary files /dev/null and b/en-US/images/lvm-main34.png differ diff --git a/en-US/images/lvm-main36.png b/en-US/images/lvm-main36.png new file mode 100644 index 0000000..9c71571 Binary files /dev/null and b/en-US/images/lvm-main36.png differ diff --git a/en-US/images/lvm-main7.png b/en-US/images/lvm-main7.png new file mode 100644 index 0000000..7c9f09a Binary files /dev/null and b/en-US/images/lvm-main7.png differ diff --git a/en-US/images/lvm-manual-boot.png b/en-US/images/lvm-manual-boot.png new file mode 100644 index 0000000..6a0b6bc Binary files /dev/null and b/en-US/images/lvm-manual-boot.png differ diff --git a/en-US/images/lvm-manual-done.png b/en-US/images/lvm-manual-done.png new file mode 100644 index 0000000..8735895 Binary files /dev/null and b/en-US/images/lvm-manual-done.png differ diff --git a/en-US/images/lvm-manual-free.png b/en-US/images/lvm-manual-free.png new file mode 100644 index 0000000..27ce579 Binary files /dev/null and b/en-US/images/lvm-manual-free.png differ diff --git a/en-US/images/lvm-manual-lv.png b/en-US/images/lvm-manual-lv.png new file mode 100644 index 0000000..2a39b2e Binary files /dev/null and b/en-US/images/lvm-manual-lv.png differ diff --git a/en-US/images/lvm-manual-lvdone.png b/en-US/images/lvm-manual-lvdone.png new file mode 100644 index 0000000..7a9f89b Binary files /dev/null and b/en-US/images/lvm-manual-lvdone.png differ diff --git a/en-US/images/lvm-manual-postboot.png 
b/en-US/images/lvm-manual-postboot.png new file mode 100644 index 0000000..30a65e5 Binary files /dev/null and b/en-US/images/lvm-manual-postboot.png differ diff --git a/en-US/images/lvm-manual-pv01.png b/en-US/images/lvm-manual-pv01.png new file mode 100644 index 0000000..ea16714 Binary files /dev/null and b/en-US/images/lvm-manual-pv01.png differ diff --git a/en-US/images/lvm-manual-pvdone.png b/en-US/images/lvm-manual-pvdone.png new file mode 100644 index 0000000..872e47d Binary files /dev/null and b/en-US/images/lvm-manual-pvdone.png differ diff --git a/en-US/images/lvm-manual-vg.png b/en-US/images/lvm-manual-vg.png new file mode 100644 index 0000000..b6ec9af Binary files /dev/null and b/en-US/images/lvm-manual-vg.png differ diff --git a/en-US/images/lvm-ovrvw-basic-lvm-volume.png b/en-US/images/lvm-ovrvw-basic-lvm-volume.png new file mode 100644 index 0000000..eacc2f4 Binary files /dev/null and b/en-US/images/lvm-ovrvw-basic-lvm-volume.png differ diff --git a/en-US/images/lvm-ovrvw-clvmoverview.png b/en-US/images/lvm-ovrvw-clvmoverview.png new file mode 100644 index 0000000..e38a57d Binary files /dev/null and b/en-US/images/lvm-ovrvw-clvmoverview.png differ diff --git a/en-US/images/lvols.png b/en-US/images/lvols.png new file mode 100644 index 0000000..dae912b Binary files /dev/null and b/en-US/images/lvols.png differ diff --git a/en-US/images/multipath-server1.png b/en-US/images/multipath-server1.png new file mode 100644 index 0000000..c4eca3a Binary files /dev/null and b/en-US/images/multipath-server1.png differ diff --git a/en-US/images/multipath-server2.png b/en-US/images/multipath-server2.png new file mode 100644 index 0000000..fcf8b88 Binary files /dev/null and b/en-US/images/multipath-server2.png differ diff --git a/en-US/images/multipath-server3.png b/en-US/images/multipath-server3.png new file mode 100644 index 0000000..ba53894 Binary files /dev/null and b/en-US/images/multipath-server3.png differ diff --git a/en-US/images/nfs-add.png b/en-US/images/nfs-add.png new file mode 100644 index 0000000..2b70a3f Binary files /dev/null and b/en-US/images/nfs-add.png differ diff --git a/en-US/images/nfs-general-options.png b/en-US/images/nfs-general-options.png new file mode 100644 index 0000000..21bab60 Binary files /dev/null and b/en-US/images/nfs-general-options.png differ diff --git a/en-US/images/nfs-server-settings.png b/en-US/images/nfs-server-settings.png new file mode 100644 index 0000000..e2f63bd Binary files /dev/null and b/en-US/images/nfs-server-settings.png differ diff --git a/en-US/images/nfs-user-access.png b/en-US/images/nfs-user-access.png new file mode 100644 index 0000000..5192e64 Binary files /dev/null and b/en-US/images/nfs-user-access.png differ diff --git a/en-US/images/raid-manual-boot-error.png b/en-US/images/raid-manual-boot-error.png new file mode 100644 index 0000000..68df6f2 Binary files /dev/null and b/en-US/images/raid-manual-boot-error.png differ diff --git a/en-US/images/raid-manual-final.png b/en-US/images/raid-manual-final.png new file mode 100644 index 0000000..ffc3c63 Binary files /dev/null and b/en-US/images/raid-manual-final.png differ diff --git a/en-US/images/raid-manual-free.png b/en-US/images/raid-manual-free.png new file mode 100644 index 0000000..3b98e20 Binary files /dev/null and b/en-US/images/raid-manual-free.png differ diff --git a/en-US/images/raid-manual-lvm-final.png b/en-US/images/raid-manual-lvm-final.png new file mode 100644 index 0000000..859a2f3 Binary files /dev/null and b/en-US/images/raid-manual-lvm-final.png differ diff --git 
a/en-US/images/raid-manual-mntpt.png b/en-US/images/raid-manual-mntpt.png new file mode 100644 index 0000000..4a8a8fd Binary files /dev/null and b/en-US/images/raid-manual-mntpt.png differ diff --git a/en-US/images/raid-manual-part-add.png b/en-US/images/raid-manual-part-add.png new file mode 100644 index 0000000..ed278ac Binary files /dev/null and b/en-US/images/raid-manual-part-add.png differ diff --git a/en-US/images/raid-manual-part-bootready.png b/en-US/images/raid-manual-part-bootready.png new file mode 100644 index 0000000..898162a Binary files /dev/null and b/en-US/images/raid-manual-part-bootready.png differ diff --git a/en-US/images/raid-manual-part-opt.png b/en-US/images/raid-manual-part-opt.png new file mode 100644 index 0000000..8e99e92 Binary files /dev/null and b/en-US/images/raid-manual-part-opt.png differ diff --git a/en-US/images/raid-manual-part-opt2.png b/en-US/images/raid-manual-part-opt2.png new file mode 100644 index 0000000..746b901 Binary files /dev/null and b/en-US/images/raid-manual-part-opt2.png differ diff --git a/en-US/images/system-config-nfs.png b/en-US/images/system-config-nfs.png new file mode 100644 index 0000000..b5f1979 Binary files /dev/null and b/en-US/images/system-config-nfs.png differ diff --git a/en-US/images/xfs-15a.png b/en-US/images/xfs-15a.png new file mode 100644 index 0000000..bf25616 Binary files /dev/null and b/en-US/images/xfs-15a.png differ diff --git a/en-US/images/xfs-15b.png b/en-US/images/xfs-15b.png new file mode 100644 index 0000000..21de61a Binary files /dev/null and b/en-US/images/xfs-15b.png differ diff --git a/en-US/images/xfs-16.png b/en-US/images/xfs-16.png new file mode 100644 index 0000000..5f46c52 Binary files /dev/null and b/en-US/images/xfs-16.png differ diff --git a/en-US/images/xfs-18.png b/en-US/images/xfs-18.png new file mode 100644 index 0000000..4097348 Binary files /dev/null and b/en-US/images/xfs-18.png differ diff --git a/en-US/images/xfs-20a.png b/en-US/images/xfs-20a.png new file mode 100644 index 0000000..52d0c26 Binary files /dev/null and b/en-US/images/xfs-20a.png differ diff --git a/en-US/images/xfs-20b.png b/en-US/images/xfs-20b.png new file mode 100644 index 0000000..977d57d Binary files /dev/null and b/en-US/images/xfs-20b.png differ diff --git a/en-US/images/xfs-23.png b/en-US/images/xfs-23.png new file mode 100644 index 0000000..a74cd55 Binary files /dev/null and b/en-US/images/xfs-23.png differ diff --git a/en-US/images/xfs-28.png b/en-US/images/xfs-28.png new file mode 100644 index 0000000..e85b2c1 Binary files /dev/null and b/en-US/images/xfs-28.png differ diff --git a/en-US/images/xfs-30.png b/en-US/images/xfs-30.png new file mode 100644 index 0000000..f623fe5 Binary files /dev/null and b/en-US/images/xfs-30.png differ diff --git a/en-US/images/xfs-31.png b/en-US/images/xfs-31.png new file mode 100644 index 0000000..48b0172 Binary files /dev/null and b/en-US/images/xfs-31.png differ diff --git a/en-US/images/xfs-32.png b/en-US/images/xfs-32.png new file mode 100644 index 0000000..05da0b1 Binary files /dev/null and b/en-US/images/xfs-32.png differ diff --git a/en-US/images/xfs-35.png b/en-US/images/xfs-35.png new file mode 100644 index 0000000..25c3160 Binary files /dev/null and b/en-US/images/xfs-35.png differ diff --git a/en-US/images/xfs-36.png b/en-US/images/xfs-36.png new file mode 100644 index 0000000..c1d8b65 Binary files /dev/null and b/en-US/images/xfs-36.png differ diff --git a/en-US/images/xfs-39.png b/en-US/images/xfs-39.png new file mode 100644 index 0000000..0f264f4 Binary files 
/dev/null and b/en-US/images/xfs-39.png differ diff --git a/en-US/images/xfs-43.png b/en-US/images/xfs-43.png new file mode 100644 index 0000000..c9ef36b Binary files /dev/null and b/en-US/images/xfs-43.png differ diff --git a/en-US/images/xfs-48.png b/en-US/images/xfs-48.png new file mode 100644 index 0000000..e906f18 Binary files /dev/null and b/en-US/images/xfs-48.png differ diff --git a/en-US/images/xfs-54.png b/en-US/images/xfs-54.png new file mode 100644 index 0000000..9e2ee03 Binary files /dev/null and b/en-US/images/xfs-54.png differ diff --git a/en-US/images/xfs-6.png b/en-US/images/xfs-6.png new file mode 100644 index 0000000..36c22fa Binary files /dev/null and b/en-US/images/xfs-6.png differ diff --git a/en-US/images/xfs-61.png b/en-US/images/xfs-61.png new file mode 100644 index 0000000..7b18e61 Binary files /dev/null and b/en-US/images/xfs-61.png differ diff --git a/en-US/images/xfs-62.png b/en-US/images/xfs-62.png new file mode 100644 index 0000000..e240fa3 Binary files /dev/null and b/en-US/images/xfs-62.png differ diff --git a/en-US/images/xfs-64.png b/en-US/images/xfs-64.png new file mode 100644 index 0000000..ced8ffc Binary files /dev/null and b/en-US/images/xfs-64.png differ diff --git a/en-US/images/xfs-69.png b/en-US/images/xfs-69.png new file mode 100644 index 0000000..3efa679 Binary files /dev/null and b/en-US/images/xfs-69.png differ diff --git a/en-US/images/xfs-72.png b/en-US/images/xfs-72.png new file mode 100644 index 0000000..fd7a99f Binary files /dev/null and b/en-US/images/xfs-72.png differ diff --git a/en-US/images/xfs-76.png b/en-US/images/xfs-76.png new file mode 100644 index 0000000..346aa7d Binary files /dev/null and b/en-US/images/xfs-76.png differ diff --git a/en-US/images/xfs-code-33a.png b/en-US/images/xfs-code-33a.png new file mode 100644 index 0000000..9ffb4f4 Binary files /dev/null and b/en-US/images/xfs-code-33a.png differ diff --git a/en-US/images/xfs-code-33b.png b/en-US/images/xfs-code-33b.png new file mode 100644 index 0000000..b45323c Binary files /dev/null and b/en-US/images/xfs-code-33b.png differ diff --git a/en-US/images/xfs-code-40.png b/en-US/images/xfs-code-40.png new file mode 100644 index 0000000..38441ea Binary files /dev/null and b/en-US/images/xfs-code-40.png differ diff --git a/en-US/images/xfs-code-46.png b/en-US/images/xfs-code-46.png new file mode 100644 index 0000000..df7abd5 Binary files /dev/null and b/en-US/images/xfs-code-46.png differ diff --git a/en-US/images/xfs-code-57.png b/en-US/images/xfs-code-57.png new file mode 100644 index 0000000..6f3b679 Binary files /dev/null and b/en-US/images/xfs-code-57.png differ diff --git a/en-US/images/xfs-code-60.png b/en-US/images/xfs-code-60.png new file mode 100644 index 0000000..5795be9 Binary files /dev/null and b/en-US/images/xfs-code-60.png differ diff --git a/en-US/images/xfs-code-61.png b/en-US/images/xfs-code-61.png new file mode 100644 index 0000000..ecff05e Binary files /dev/null and b/en-US/images/xfs-code-61.png differ diff --git a/en-US/images/xfs-code-65.png b/en-US/images/xfs-code-65.png new file mode 100644 index 0000000..dd74d54 Binary files /dev/null and b/en-US/images/xfs-code-65.png differ diff --git a/en-US/images/xfs-code-66.png b/en-US/images/xfs-code-66.png new file mode 100644 index 0000000..3479ac4 Binary files /dev/null and b/en-US/images/xfs-code-66.png differ diff --git a/en-US/images/xfs-code-67.png b/en-US/images/xfs-code-67.png new file mode 100644 index 0000000..efec42a Binary files /dev/null and b/en-US/images/xfs-code-67.png differ diff --git 
a/en-US/images/xfs-code-71.png b/en-US/images/xfs-code-71.png new file mode 100644 index 0000000..e66b710 Binary files /dev/null and b/en-US/images/xfs-code-71.png differ diff --git a/en-US/images/xfs-code-73-74.png b/en-US/images/xfs-code-73-74.png new file mode 100644 index 0000000..e44878c Binary files /dev/null and b/en-US/images/xfs-code-73-74.png differ diff --git a/en-US/images/xfs-code-74.png b/en-US/images/xfs-code-74.png new file mode 100644 index 0000000..e50f0a4 Binary files /dev/null and b/en-US/images/xfs-code-74.png differ diff --git a/en-US/new-p1-installconfig.xml b/en-US/new-p1-installconfig.xml new file mode 100644 index 0000000..971b6b4 --- /dev/null +++ b/en-US/new-p1-installconfig.xml @@ -0,0 +1,643 @@ + + + + +<remark>[NEW!] </remark>Storage Considerations During Installation + +storage considerations during installation +updates + + + +installation storage configurations +updates + + + +updates +storage considerations during installation + + + + + +storage considerations during installation +what's new + + + +installation storage configurations +what's new + + + +what's new +storage considerations during installation + + + +Many storage device and file system settings can only be configured at install time. +Other settings, such as file system type, can only be modified up to a certain point without +requiring a reformat. As such, it is prudent that you plan your storage configuration accordingly before installing +Fedora 13. + + + +This chapter discusses several considerations when planning a storage configuration for +your system. For actual installation instructions (including storage configuration during +installation), refer to the Fedora 13 Installation Guide. + + +
+Updates to Storage Configuration During Installation + + +Installation configuration for the following settings/devices has been updated for Fedora 13: + + + + +storage considerations during installation +fibre-channel over ethernet (FCoE) + + + +installation storage configurations +fibre-channel over ethernet + + + +fibre-channel over ethernet +storage considerations during installation + + + + + + + +FCoE +storage considerations during installation + + + +Fibre-Channel over Ethernet (FCoE) +Anaconda can now configure FCoE storage devices during installation. + + + + + +storage considerations during installation +storage device filter interface + + + +installation storage configurations +storage device filter interface + + + +storage device filter interface +storage considerations during installation + + + +Storage Device Filtering Interface + +Anaconda now has improved control over which storage devices are used during +installation. You can now control which devices are available/visible to the installer, in addition +to which devices are actually used for system storage. There are two paths through device filtering: + + + + + +storage considerations during installation +basic path + + + +installation storage configurations +basic path + + + +basic path +storage considerations during installation + + + + + + +storage considerations during installation +advanced path + + + +installation storage configurations +advanced path + + + +advanced path +storage considerations during installation + + + + +Basic Path + + +For systems that only use locally attached disks and firmware RAID arrays as storage devices + + + + + +Advanced Path + + +For systems that use SAN (e.g. multipath, iSCSI, FCoE) devices + + + + + + + +storage considerations during installation +auto-partitioning and /home + + + +installation storage configurations +auto-partitioning and /home + + + +auto-partitioning and /home +storage considerations during installation + + +Auto-partitioning and /home + +Auto-partitioning now creates a separate logical volume for the +/home file system when 50GB or more is available +for allocation of LVM physical volumes. The root file system (/) +will be limited to a maximum of 50GB whe creating a separate /home +logical volume, but the /home logical volume will grow to +occupy all remaining space in the volume group. + + + + +
+ + +
+Overview of Supported File Systems + + + +storage considerations during installation +file systems, overview of supported types + + + +installation storage configurations +file systems, overview of supported types + + + +file systems, overview of supported types +storage considerations during installation + + + +This section shows basic technical information on each file system supported by Fedora 13. + + + +Technical Specifications of Supported File Systems + + + + + File System + + + Max Supported Size + + + Max File Size + + + Max Subdirectories (per directory) + + + Max Depth of Symbolic Links + + + ACL Support + + + Details + + + + + + + Ext2 + + + 8TB + + + 2TB + + + 32,000 + + + 8 + + + Yes + + + N/A + + + + + Ext3 + + + 16TB + + + 2TB + + + 32,000 + + + 8 + + + Yes + + + + + + + + Ext4 + + + + 16TB + + + 16TB + + + 65,000 + + When the link count exceeds 65,000, it is reset to 1 and no longer increases. + + + + + 8 + + + Yes + + + + + + + + XFS + + +100TB + + + + 16TB + + + 65,000 + + + 8 + + + Yes + + + + + + + +
+ + + +Not all file systems supported by Fedora 13 are documented in this guide. In addition, file systems that are unsupported in Red Hat Enterprise Linux (for example, BTRFS) are not documented here either. + + + +
+ +
+Special Considerations + + +This section enumerates several issues and factors to consider for +specific storage configurations. + + + +Separate Partitions for /home, /opt, /usr/local + + + +storage considerations during installation +separate partitions (for /home, /opt, /usr/local) + + + +installation storage configurations +separate partitions (for /home, /opt, /usr/local) + + + +separate partitions (for /home, /opt, /usr/local) +storage considerations during installation + + + +If it is likely that you will upgrade your system in the future, +place /home, /opt, and +/usr/local on a separate device. This will +allow you to reformat the devices/file systems containing the +operating system while preserving your user and application data. + + + + + + + + + +DASD and zFCP Devices on IBM System Z + + + +storage considerations during installation +DASD and zFCP devices on IBM System z + + + +installation storage configurations +DASD and zFCP devices on IBM System z + + + +DASD and zFCP devices on IBM System z +storage considerations during installation + + + + +storage considerations during installation +channel command word (CCW) + + + +installation storage configurations +channel command word (CCW) + + + +channel command word (CCW) +storage considerations during installation + + +CCW, channel command word +storage considerations during installation + + +On the IBM System Z platform, DASD and zFCP devices are configured via the Channel +Command Word (CCW) mechanism. CCW paths must be explicitly added to the system +and then brought online. For DASD devices, this is simply means listing +the device numbers (or device number ranges) as the DASD= parameter at the boot +command line or in a CMS configuration file. + + +For zFCP devices, you must list +the device number, logical unit number (LUN), and world wide port name (WWPN). Once the zFCP device is initialized, it is +mapped to a CCW path. The FCP_x= lines on the boot +command line (or in a CMS configuration file) allow you to specify this information for the installer. + + + + + + +Encrypting Block Devices Using LUKS + + + +storage considerations during installation +LUKS/dm-crypt, encrypting block devices using + + + +installation storage configurations +LUKS/dm-crypt, encrypting block devices using + + + +LUKS/dm-crypt, encrypting block devices using +storage considerations during installation + + + +Formatting a block device for encryption using +LUKS/dm-crypt will destroy any existing formatting on +that device. As such, you should decide which devices to encrypt (if any) +before the new system's storage configuration is +activated as part of +the installation process. + + + + +Stale BIOS RAID Metadata + + + +storage considerations during installation +stale BIOS RAID metadata + + + +installation storage configurations +stale BIOS RAID metadata + + + +stale BIOS RAID metadata +storage considerations during installation + + + + + +Moving a disk from a system configured for firmware +RAID without removing the RAID metadata from the disk +can prevent Anaconda from correctly detecting +the disk. + + + + +Removing/deleting RAID metadata from disk could potentially destroy +any stored data. You should back up your data +before proceeding. + + + + +To delete RAID metadata from the disk, use the following command: + + + + +dmraid -r -E /device/ + + + + + +For more information about managing RAID devices, refer to man dmraid and . 
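For example, to check a disk for firmware RAID metadata and then erase it, the following sketch can be used (the device name /dev/sda is hypothetical; substitute the disk reported by dmraid):

dmraid -r
dmraid -r -E /dev/sda

The first command lists the block devices on which dmraid finds RAID metadata; the second erases that metadata from the named disk. As noted above, back up any data on the disk before erasing its metadata.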
+ + + + + +iSCSI Detection and Configuration + + + + +storage considerations during installation +iSCSI detection and configuration + + + +installation storage configurations +iSCSI detection and configuration + + + +iSCSI detection and configuration +storage considerations during installation + + + +For plug and play detection of iSCSI drives, configure them in the firmware of an iBFT boot-capable network interface card (NIC). +CHAP authentication of iSCSI targets is supported during installation. +However, iSNS discovery is not supported during installation. + + + + +FCoE Detection and Configuration + + +For plug and play detection of fibre-channel over ethernet (FCoE) drives, configure +them in the firmware of an EDD boot-capable NIC. + + + + +DASD + +Direct-access storage devices (DASD) cannot be added/configured during +installation. Such devices are specified in the +CMS configuration file. + + + + + + +Block Devices with DIF/DIX Enabled + + + + +storage considerations during installation +DIF/DIX-enabled block devices + + + +installation storage configurations +DIF/DIX-enabled block devices + + + +DIF/DIX-enabled block devices +storage considerations during installation + + + +DIF/DIX is a hardware checksum feature provided by certain +SCSI host bus adapters and block devices. When DIF/DIX is enabled, errors will occur if the block device is used as a +general-purpose block device. Buffered I/O or mmap(2)-based I/O will not work reliably, as there are no +interlocks in the buffered write path to prevent buffered data from being overwritten after the DIF/DIX checksum has been calculated. + + + +Because of this, the I/O will later fail with a checksum error. This problem is common to all block device (or file system-based) +buffered I/O or mmap(2) I/O, so it is not possible to work around +these errors caused by overwrites. + + + +As such, block devices with DIF/DIX enabled should only be used with applications that +use O_DIRECT. Such applications should use the raw block device. +Alternatively, it is also safe to use the XFS filesystem on a DIF/DIX enabled block +device, as long as only O_DIRECT I/O is issued through the file system. +XFS is the only filesystem that does not fall back to buffered IO when +doing certain allocation operations. + + + + +The responsibility for ensuring that the I/O data does not change after the +DIF/DIX checksum has been computed always lies with the application, so only applications +designed for use with O_DIRECT I/O and DIF/DIX hardware should +use DIF/DIX. + + + + + + +
+
diff --git a/en-US/new-p1-storagemannew.xml b/en-US/new-p1-storagemannew.xml new file mode 100644 index 0000000..7336fd3 --- /dev/null +++ b/en-US/new-p1-storagemannew.xml @@ -0,0 +1,23 @@ + + + + + <remark><command>[NEW!]</command></remark>New Storage Management Features + +new features in RHEL6; short description of overarching file system development themes for RHEL6 to be added later + + + + + + + + + + + + + + + diff --git a/en-US/new-p1-storagenew.xml b/en-US/new-p1-storagenew.xml new file mode 100644 index 0000000..705e6eb --- /dev/null +++ b/en-US/new-p1-storagenew.xml @@ -0,0 +1,27 @@ + + + +
+ <remark><command>[NEW!]</command></remark>New File System Management Features + +new features in RHEL6; short description of overarching file system development themes for RHEL6 to be added later + + + + + + + + + + + + + + + + + +
+ diff --git a/en-US/newfilesys-efs.xml b/en-US/newfilesys-efs.xml new file mode 100644 index 0000000..7f1d565 --- /dev/null +++ b/en-US/newfilesys-efs.xml @@ -0,0 +1,168 @@ + + + + +<remark><command>[NEW!]</command></remark>Encrypted File System + + +eCryptfs +file system types + + + +file system types +encrypted file system + + + + + + +Fedora 13 now supports eCryptfs, a "pseudo-file system" which provides data and filename encryption on a per-file basis. The term "pseudo-file system" refers to the fact that eCryptfs does not have an on-disk format; rather, it is a file system layer that resides on top of an actual file system. The eCryptfs layer provides encryption capabilities. + + + +eCryptfs works like a bind mount, as it intercepts file operations that write to the underlying (i.e. encrypted) file system. The eCryptfs layer adds a header to the metadata of files in the underlying file system. This metadata describes the encryption for that file, and eCryptfs encrypts file data before it is passed to the encrypted file system. Optionally, eCryptfs can also encrypt filenames. + + + + + +eCryptfs is not an on-disk file system; as such, there is no need to create it via tools such as mkfs. Instead, eCryptfs is initiated by issuing a special mount command. To manage file systems protected by eCryptfs, the ecryptfs-utils package must be installed first. + + +
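On Fedora, the package can typically be installed with yum; a minimal sketch:

yum install ecryptfs-utils

Once installed, rpm -q ecryptfs-utils confirms the package version, and the mount commands described in the next section become available.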
+Mounting a File System as Encrypted + +encrypted file system +mounting + + + +eCryptfs +mounting + + + +mounting +encrypted file system + + + + +The easiest way to encrypt a file system with eCryptfs and mount it is interactively. To start this process, execute the following command: + + + +mount -t ecryptfs /source /destination + + + +Encrypting a directory heirarchy (i.e. /source) with eCryptfs means mounting it to a mount point encrypted by eCryptfs (i.e. /destination). All file operations to /destination will be passed encrypted to the underlying /source file system. In some cases, however, it may be possible for a file operation to modify /source directly without passing through the eCryptfs layer; this could lead to inconsistencies. + + + + +encrypted file system +mounting a file system as encrypted + + + +eCryptfs +mounting a file system as encrypted + + + +mounting a file system as encrypted +encrypted file system + + + + +Eric: "inconsistencies" is a bit vague, can you be more specific on the effects? will the underlying FS be corrupted, etc? +A: In particular, removing or adding files on the "lower" fs may not be seen by the upper encrypted fs. +It's just a bad thing to do, and not well tested (and not particularly useful). + +This is why for most environments, both /source and /destination should be identical. For example: + + + +mount -t ecryptfs /home /home + + +This effectively means encrypting a file system and mounting it on itself. Doing so helps ensure that all file operations to /home pass through the eCryptfs layer. + + + + +During the interactive encryption/mount process, mount will allow the following settings to be configured: + + + + +encrypted file system +mount settings for encrypted file systems + + + +eCryptfs +mount settings for encrypted file systems + + + +mount settings for encrypted file systems +encrypted file system + + +Encryption key type; openssl, tspi, or passphrase. When choosing passphrase, mount will ask for one. + +Cipher; aes, blowfish, des3_ede, cast6, or cast5. + +Eric: is there any man page or other installed doc that users can refer to for info on these? + + +Key bytesize; 16, 32, 24 + +Whether or not plaintext passthrough is enabled +Whether or not filename encryption is enabled + + + +After the last step of an interactive mount, mount will display all the selections made and perform the mount. This output consists of the command-line option equivalents of each chosen setting. For example, mounting /home with a key type of passphrase, aes cipher, key bytesize of 16 with both plaintext passthrough and filename encryption disabled, the output would be: + + + +Attempting to mount with the following options: + ecryptfs_unlink_sigs + ecryptfs_key_bytes=16 + ecryptfs_cipher=aes + ecryptfs_sig=c7fed37c0a341e19 +Mounted eCryptfs + + +The options in this display can then be passed directly to the command line to encrypt and mount a file system using the same configuration. To do so, use each option as an argument to the -o option of mount. For example: + + + +mount -t ecryptfs /home /home -o ecryptfs_unlink_sigs \ ecryptfs_key_bytes=16 ecryptfs_cipher=aes ecryptfs_sig=c7fed37c0a341e19 +This is a single command split into multiple lines, to accommodate printed and PDF versions of this document. All concatenated lines — preceded by the backslash (\) — should be treated as one command, sans backslashes. + + + + + +
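After mounting, it can be useful to confirm that the eCryptfs layer is actually in place. A brief sketch, following the /home example above:

mount | grep ecryptfs
grep ecryptfs /proc/mounts

If the mount succeeded, an entry of type ecryptfs appears for the chosen mount point; unmounting it (for example, umount /home) ends the encrypted session.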
+ +
+Additional Information + +For more information on eCryptfs and its mount options, refer to man ecryptfs (provided by the ecryptfs-utils package). The following Kernel document (provided by the kernel-doc package) also provides additional information on eCryptfs: + + + +/usr/share/doc/kernel-doc-version/Documentation/filesystems/ecryptfs.txt + +
+ +
diff --git a/en-US/newfilesys-ext4.xml b/en-US/newfilesys-ext4.xml new file mode 100644 index 0000000..eb2108b --- /dev/null +++ b/en-US/newfilesys-ext4.xml @@ -0,0 +1,752 @@ + + + + + <remark><command>[NEW!]</command></remark>The Ext4 File System + + +ext4 +main features + + + +main features +ext4 + + + + +ext4 +allocation features + + + +allocation features +ext4 + + + +ext4 +file system types + + + +file system types +ext4 + + + +The ext4 file system is a scalable extension of the ext3 file system, which was the default file system in previous versions of Fedora. Ext4 is now the default file system of Fedora 13, and can support files and file systems of up to 16 terabytes in size. It also supports an unlimited number of sub-directories (the ext3 file system only supports up to 32,000). Further, ext4 is backward compatible with ext3 and ext2, allowing these older versions to be mounted with the ext4 driver. + + +Eric: added statement that ext4 is default; default test install verifies this. i assume this will remain until GA? + + + + +Main Features + + +Ext4 uses extents (as opposed to the traditional block mapping scheme used by ext2 and ext3), which improves performance when using large files and reduces metadata overhead for large files. In addition, ext4 also labels unallocated block groups and inode table sections accordingly, which allows them to be skipped during a file system check. This makes for quicker file system checks, which becomes more beneficial as the file system grows in size. + + + + + +Allocation Features + +The ext4 file system features the following allocation schemes: + + + +Persistent pre-allocation +Delayed allocation +Multi-block allocation +Stripe-aware allocation + + + + +ext4 +fsync() + + + +fsync() +ext4 + + + + + +Because of delayed allocation and other performance optimizations, +ext4's behavior of writing files to disk is different from ext3. In +ext4, a program's writes to the file system are not guaranteed to be on-disk unless +the program issues an fsync() call afterwards. + + + +By default, ext3 automatically forces newly created files to disk +almost immediately even without fsync(). This behavior +hid bugs in programs that did not use fsync() to +to ensure that written data was on-disk. The ext4 file system, on the other hand, +often waits several seconds to write out changes to disk, allowing it to +combine and reorder writes for better disk performance than ext3 + + + + +If a system crashes while ext4 is waiting to write out changes to disk, +the write will fail (i.e. newly created files will not be on-disk). To +prevent this, add an fsync() call to any programs that +depend on writes being on-disk. + + + + + + + + +Other Ext4 Features + +The Ext4 file system also supports the following: + + +Extended attributes (xattr), which allows the system to associate several additional name/value pairs per file. +Quota journalling, which avoids the need for lengthy quota consistency checks after a crash. + +"No journalling" mode, which allows users to disable journalling for a slight improvement albeit at the cost of file system integrity +Subsecond timestamps + + + + + + + +Eric: any other features we need to list? FYi i got some of these items from and + + +Eric: FYI i added subsecond timestamps to XFS as you requested + +
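Returning to the note above about delayed allocation and fsync(): for scripts and ad hoc copies (as opposed to programs that can call fsync() directly), durability can be requested from the shell. A small sketch with hypothetical paths:

dd if=/path/to/source of=/path/to/copy conv=fsync
sync

The conv=fsync flag makes dd flush the copied data to disk before exiting, and sync requests a flush of all outstanding dirty data on the system. These are blunter tools than a targeted fsync() call inside the application, but they provide the same on-disk guarantee for the data they cover.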
+Creating an Ext4 File System + + + +ext4 +creating + + + +creating +ext4 + + + + + + +ext4 +mkfs.ext4 + + + +mkfs.ext4 +ext4 + + + +To create an ext4 file system, use the mkfs.ext4 command. In general, the default +options are optimal for most usage scenarios, as in: + + + +mkfs.ext4 /dev/device + + + +Below is a sample output of this command, which displays the resulting file system geometry and features: + + +mke2fs 1.41.9 (22-Aug-2009) +Filesystem label= +OS type: Linux +Block size=4096 (log=2) +Fragment size=4096 (log=2) +1954064 inodes, 7813614 blocks +390680 blocks (5.00%) reserved for the super user +First data block=0 +Maximum filesystem blocks=4294967296 +239 block groups +32768 blocks per group, 32768 fragments per group +8176 inodes per group +Superblock backups stored on blocks: + 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, + 4096000 + +Writing inode tables: done +Creating journal (32768 blocks): done +Writing superblocks and filesystem accounting information: done + + +For striped block devices (e.g. RAID5 arrays), the stripe geometry can be +specified at the time of file system creation. Using proper stripe geometry greatly enhances performance +of an ext4 file system. + + + +When creating file systems on lvm or md volumes, mkfs.ext4 chooses an optimal geometry. This may also be true on some hardware RAIDs which export geometry information to the operating system. + + + + +ext4 +stripe geometry + + + +stripe geometry +ext4 + + + + + + +ext4 +stride (specifying stripe geometry) + + + +stride (specifying stripe geometry) +ext4 + + + + + + +ext4 +stripe-width (specifying stripe geometry) + + + +stripe-width (specifying stripe geometry) +ext4 + + + +To specify stripe geometry, use the -E option of mkfs.ext4 (i.e. extended file system options) with the following sub-options: + + + + + +stride=value + + +Specifies the RAID chunk size. + + + + + +stripe-width=value + + +Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe. + + + + + + + +For both sub-options, value must be specified in file system block units. For example, to create a file system with a 64k stride (i.e. 16 x 4096) on a 4k-block file system, use the following commmand: + + + + + + +mkfs.ext4 -E stride=16,stripe-width=64 /dev/device + + + +For more information about creating file systems, refer to man mkfs.ext4. + + +
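As a further worked example (the RAID layout here is hypothetical), consider a RAID5 array with four member disks, and therefore three data disks, a 64k chunk size, and 4k file system blocks: stride = 64k / 4k = 16, and stripe-width = 16 x 3 = 48.

mkfs.ext4 -E stride=16,stripe-width=48 /dev/device

The geometry recorded in the superblock can be checked afterwards with dumpe2fs, which reports the RAID stride and RAID stripe width fields when they are set:

dumpe2fs -h /dev/device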
+ + +
+Converting an Ext3 File System to Ext4 + + + +ext4 +converting ext3 to ext4 + + + +converting ext3 to ext4 +ext4 + + + + + + +ext4 +ext3 to ext4, converting + + + + + + + +ext4 +tune2fs (converting ext3 to ext4) + + + + + +Many of the ext4 file system enhancements over ext3 result from modified metadata structures configured on the disk during file system creation. However, an existing ext3 file system can be upgraded to take advantage of other improvements in ext4. + + + + +Whenever possible, create a new ext4 file system and migrate your data to it instead of converting from ext3 to ext4. This +ensures a better metadata layout, allowing for the enhanced performance natively provided by ext4. + + + + +To enable ext4 features on an existing ext3 file system, begin by using the tune2fs command in the following manner: + + + +tune2fs -O extents,uninit_bg /dev/device + + + +The -O option sets, clears, or initializes a comma-delimited list of file system features. With the extents parameter, the file system will now use extents instead of the indirect block scheme for storing data blocks in an inode (but only for files created after activating extents feature). The uninit_bg parameter allows the kernel to mark unused block groups accordingly. + + + + + + +After using tune2fs to modify the file system, perform a file system check using the following command: + + + + +ext4 +e2fsck (converting ext3 to ext4) + + + +e2fsck (converting ext3 to ext4) +ext4 + + + +e2fsck -f /dev/device + + + +Note that without the file system check, the converted file system cannot be mounted. During the course of conversion, e2fsck may print the following warning: + + + +One or more block group descriptor checksums are invalid + + + +This warning is generally benign, as e2fsck will repair any invalid block group descriptors it encounters during the conversion process. + + + +An ext3 file system converted to ext4 in the manner described in this section can no longer be mounted as ext3. Refer to for information on how to mount an ext3 file system as ext4 without converting. + + + +In addition, an ext2 file system cannot be converted directly to ext4; it should be converted to ext3, at which point it can be converted to ext4 (or mounted using the ext4 driver). For more information on converting an ext2 file system to ext3, refer to . + + + + + + +For more information on converting an ext3 file system to ext4, refer to man tune2fs and man e2fsck. + + +Eric: is this the same procedure for converting ext2 to ext4? + +
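Putting the steps in this section together, a complete conversion might look like the following sketch (the device name and mount point are hypothetical, and the file system should be unmounted first):

umount /dev/sdb1
tune2fs -O extents,uninit_bg /dev/sdb1
e2fsck -f /dev/sdb1
mount -t ext4 /dev/sdb1 /mnt/data

If the file system is listed in /etc/fstab, also change its type field from ext3 to ext4 so that the ext4 driver is used on subsequent boots.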
+ +
+Mounting an Ext4 File System + + + + +ext4 +mounting + + + +mounting +ext4 + + + + + + +ext4 +tune2fs (mounting) + + + +tune2fs (mounting) +ext4 + + +An ext4 file system can be mounted with no extra options. For example: + + + +mount /dev/device /mount/point + + + +The ext4 file system also supports several mount options to influence behavior. For example, the acl parameter enables access control lists, while the user_xattr parameter enables user extended attributes. To enable both options, use their respective parameters with -o, as in: + + + +mount -o acl,user_xattr /dev/device /mount/point + + + +The tune2fs utility also allows administrators to set default mount options in the file system superblock. For more information on this, refer to man tune2fs. + + + + + +ext4 +write barriers + + + +write barriers +ext4 + + + + + + +ext4 +nobarrier mount option + + + +nobarrier mount option +ext4 + + + +Write Barriers +By default, ext4 uses write barriers to ensure file system integrity even +when power is lost to a device with write caches enabled. For devices without +write caches, or with battery-backed write caches, disable barriers using the +nobarrier option, as in: + + + +mount -o nobarrier /dev/device /mount/point + + + +For more information about write barriers, refer to . + + + + + + +Mounting an Ext3 File System as Ext4 + + + +ext4 +mounting ext3 as ext4 + + + +mounting ext3 as ext4 +ext4 + + + + + + +ext4 +ext3 (mounting as ext4) + + + +ext3 (mounting as ext4) +ext4 + + + +An ext3 file system can also be mounted as ext4 without changing the format, allowing it to be mounted as ext3 again in the future. To do so, run the following command (where device is an ext3 file system): + + + +mount -t ext4 /dev/device /mount/point + + + +Doing so will only allow the ext3 file system to use ext4-specific features that do not require a file format conversion. These features include delayed allocation and multi-block allocation, and exclude features such as extent mapping. + + + + +For more information about mounting an ext4 file system, refer to man mount. + + +
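To mount an ext4 file system persistently across reboots, add an entry to /etc/fstab. A sketch, using a placeholder UUID and a hypothetical mount point (blkid prints the UUID of a formatted device):

blkid /dev/device
UUID=<uuid-reported-by-blkid> /mnt/data ext4 defaults 1 2

The first field can also be the device node itself, and additional mount options such as acl, user_xattr, or nobarrier can be appended to defaults, separated by commas.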
+ + + + +
+<remark>[UNFINISHED] </remark>Resizing an Ext4 File System + + + +ext4 +resizing + + + +resizing +ext4 + + + + + + +ext4 +resize2fs (resizing ext4) + + + +resize2fs (resizing ext4) +ext4 + + + + + +Before growing an ext4 file system, ensure that the underlying block device is of an appropriate size to hold the file system later. Use the appropriate resizing methods for the affected block device. + + + + +An ext4 file system may be grown while mounted using the resize2fs command, as in: + + + +resize2fs /mount/point size + + +Eric: what particular tools are used in resizing block devices? i need to add references to them in this section, as well as in XFS... + + +The resize2fs command can also decrease the size of an unmounted ext4 file system, as in: + + + +resize2fs /dev/device size + + + + + +When resizing an ext4 file system, the resize2fs utility reads the size in units of file system block size, unless a suffix indicating a specific unit is used. The following suffixes indicate specific units: + + + + +s — 512kb sectors +K — kilobytes +M — megabytes +G — gigabytes + + + +For more information about resizing an ext4 file system, refer to man resize2fs. + + +
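The underlying block device is resized with whatever tool matches that device; for an LVM logical volume, that is lvextend (or lvreduce when shrinking). A sketch for growing an LVM-backed ext4 file system while it is mounted, with hypothetical volume group and logical volume names:

lvextend -L +10G /dev/myvg/mydata
resize2fs /dev/myvg/mydata

When no size argument is given, resize2fs grows the file system to fill the enlarged device. Shrinking works in the opposite order and must be done offline: unmount the file system, run e2fsck -f, shrink it with resize2fs and an explicit size, and only then reduce the logical volume.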
+ +
+Other Ext4 File System Utilities + + + +ext4 +other file system utilities + + + +other file system utilities +ext4 + + + + + + +ext4 +e2label + + + +e2label +ext4 + + + + + + +ext4 +e2label (other ext4 file system utilities) + + + +e2label (other ext4 file system utilities) +ext4 + + + + + + +ext4 +quota (other ext4 file system utilities) + + + +quota (other ext4 file system utilities) +ext4 + + + + + + +ext4 +debugfs (other ext4 file system utilities) + + + +debugfs (other ext4 file system utilities) +ext4 + + + + + + +ext4 +e2image (other ext4 file system utilities) + + + +e2image (other ext4 file system utilities) +ext4 + + + +Fedora 13 also features other utilities for managing ext4 file systems: + + + + +e2fsck + + +Used to repair an ext4 file system. This tool checks and repairs an ext4 file system more efficiently than ext3, thanks to updates in the ext4 disk structure. + + + + + +e2label + + +Changes the label on an ext4 file system. This tool can also works on ext2 and ext3 file systems. + + + + + +quota + + +Controls and reports on disk space (blocks) and file (inode) usage by users and groups on an ext4 file system. For more information on using quota, refer to man quota and . + + + + + + + + + +As demonstrated earlier in , the tune2fs utility can also adjust configurable file system parameters for ext2, ext3, and ext4 file systems. In addition, the following tools are also useful in debugging and analyzing ext4 file systems: + + + + +debugfs + + +Debugs ext2, ext3, or ext4 file systems. + + + + + +e2image + + +Saves critical ext2, ext3, or ext4 file system metadata to a file. + + + + + + +For more information about these utilities, refer to their respective man pages. + + + + +
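A few brief usage sketches for these utilities, using a hypothetical device:

e2label /dev/sdb1 backup_data
e2image /dev/sdb1 /tmp/sdb1.e2i
debugfs /dev/sdb1

The first command sets the file system label to backup_data, the second saves critical file system metadata to /tmp/sdb1.e2i, and the third opens an interactive debugfs session against the device (read-only unless -w is given).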
+ diff --git a/en-US/newfilesys-fscache.xml b/en-US/newfilesys-fscache.xml new file mode 100644 index 0000000..6e936e1 --- /dev/null +++ b/en-US/newfilesys-fscache.xml @@ -0,0 +1,807 @@ + + + + +FS-Cache + +FS-Cache +cache back-end + + + +cache back-end +FS-Cache + + + +FS-Cache is a persistent local cache that can be used by file systems +to take data retrieved from over the network and cache it on local disk. +This helps minimize network traffic for users accessing data from a file +system mounted over the network (for example, NFS). + + + + +The following diagram is a high-level illustration of how FS-Cache works: + +will replace the following ASCII with graphics, as per RT3#59600 + +
FS-Cache Overview (figure)
+ + +FS-Cache is designed to be as transparent as possible to the users and administrators of a system. Unlike cachefs on Solaris, FS-Cache allows a file system on a server to interact directly with a client's local cache without creating an overmounted file system. With NFS, a mount option instructs the client to mount the NFS share with FS-Cache enabled. + + +David: i'm not sure if i rewrote this correctly, or if i understood "overmount" correctly. please advise + + + +FS-Cache does not alter the basic operation of a file system that works over the network - it merely provides that file system with a persistent place in which it can cache data. For instance, a client can still mount an NFS share whether or not FS-Cache is enabled. In addition, cached NFS can handle files that won't fit into the cache (whether individually or collectively), as files can be partially cached and do not have to be read completely up front. FS-Cache also hides all I/O errors that occur in the cache from the client file system driver. + + + + + +To provide caching services, FS-Cache needs a cache back-end. A cache back-end is a storage driver configured to provide caching services (i.e. cachefiles). In this case, FS-Cache requires a mounted block-based file system that supports bmap and extended attributes (e.g. ext3) as its cache back-end. + + + +FS-Cache +cachefiles + + + +cachefiles +FS-Cache + + + + + + + + + +FS-Cache +indexing keys + + + +indexing keys +FS-Cache + + + + + + +FS-Cache +coherency data + + + +coherency data +FS-Cache + + + +FS-Cache cannot arbitrarily cache any file system, whether through the network or otherwise: the shared file system's driver must be altered to allow interaction with FS-Cache, data storage/retrieval, and metadata setup and validation. FS-Cache needs indexing keys and coherency data from the cached file system to support persistence: indexing keys to match file system objects to cache objects, and coherency data to determine whether the cache objects are still valid. + + + + +
+Performance Guarantee + + + +FS-Cache +performance guarantee + + + +performance guarantee +FS-Cache + + + +FS-Cache does not guarantee increased performance. Rather, using a cache back-end incurs a performance penalty: for example, cached NFS shares add disk accesses to cross-network lookups. While FS-Cache tries to be as asynchronous as possible, there are synchronous paths (e.g. reads) where this isn't possible. + + + +For example, using FS-Cache to cache an NFS share between two computers over an otherwise unladen GigE network will not demonstrate any performance improvements on file access. Rather, NFS requests would be satisfied from server memory faster than from local disk. + + + + +The use of FS-Cache, therefore, is a compromise between various factors. If FS-Cache is being used to cache NFS traffic, for instance, it may slow the client down a little, but massively reduce the network and server loading by satisfying read requests locally without consuming network bandwidth. + + + + +
+ +
+Setting Up a Cache + + + + + +cache setup +FS-Cache + + + + + + +FS-Cache +setting up a cache + + + +setting up a cache +FS-Cache + + + + + + + + +FS-Cache +cachefilesd + + + +cachefilesd +FS-Cache + + + +Currently, Fedora 13 only provides the cachefiles caching back-end. The cachefilesd daemon initiates and manages cachefiles. The /etc/cachefilesd.conf file controls how cachefiles provides caching services. To configure a cache back-end of this type, the cachefilesd package +must be installed. + + +ditto: use of cachefilesd again + + +The first setting to configure in a cache back-end is which directory to use as a cache. To configure +this, use the following parameter: + + + +dir /path/to/cache + + + +Typically, the cache back-end +directory is set in /etc/cachefilesd.conf as /var/cache/fscache, +as in: + + +dir /var/cache/fscache + + + + + +FS-Cache will store the cache in the file system that hosts +/path/to/cache. On a laptop, it is +advisable to use the the root file system (/) as the host file system, but for a +desktop machine it would be more prudent to mount a disk partition specifically for the cache. + +do we have to be explicit about why desktop machines should mount separate disk partitions specifically for the cache, or is this obvious enough for users? i'm leaning towards the latter... + + + +File systems that support functionalities required by FS-Cache cache back-end include the Fedora 13 implementations of the following file systems: + + + +ext3 (with extended attributes enabled) +ext4 +BTRFS +XFS + + + + +FS-Cache +tune2fs (setting up a cache) + + + +tune2fs (setting up a cache) +FS-Cache + + + +The host file system must support user-defined extended attributes; FS-Cache uses these +attributes to store coherency maintenance information. To enable user-defined extended +attributes for ext3 file systems (i.e. device), use: + + + +tune2fs -o user_xattr /dev/device + + + +Alternatively, extended attributes for a file system can be enabled at mount time, as in: + + + + +mount /dev/device /path/to/cache -o user_xattr + + + + + + +The cache back-end works by maintaining a certain amount of free space on +the partition hosting the cache. It grows and shrinks the cache in response to +other elements of the system using up free space, making it safe to use on the root +file system (for example, on a laptop). FS-Cache sets defaults on this behavior, +which can be configured via cache cull limits. For more information +about configuring cache cull limits, refer to . + + + +Once the configuration file is in place, start up the cachefilesd daemon: + + + + +service cachefilesd start + + + +To configure cachefilesd to start at boot time, execute the following command as root: + + +chkconfig cachefilesd on + + +
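Tying the steps in this section together, a sketch for hosting the cache on a dedicated partition (the device name is hypothetical; the settings already present in /etc/cachefilesd.conf are left unchanged apart from dir):

mkfs.ext4 /dev/sdb1
mount -o user_xattr /dev/sdb1 /var/cache/fscache
service cachefilesd start
chkconfig cachefilesd on

With dir /var/cache/fscache set in /etc/cachefilesd.conf, cachefilesd then builds its cache on the newly mounted partition. Adding a matching entry (with the user_xattr option) to /etc/fstab keeps the cache partition mounted across reboots.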
+ +
+Using the Cache With NFS + + + +FS-Cache +NFS (using with) + + + +NFS (using with) +FS-Cache + + + + + + +NFS +FS-Cache + + + + + +NFS will not use the cache unless explicitly instructed. To configure an NFS mount to use FS-Cache, include the -o fsc option in the mount command, as in: + + + + + + +mount nfs-share:/ /mount/point -o fsc + + + +All access to files under /mount/point will go through the cache, unless the file is opened for direct I/O or writing (refer to for more information). NFS indexes cache contents using the NFS file handle, not the file name; this means that hard-linked files share the cache correctly. + + + +Caching is supported in versions 2, 3, and 4 of NFS. However, each version uses different branches for caching. + + +
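To make a cached NFS mount persistent, the fsc option can likewise be listed in /etc/fstab. A sketch, using the same placeholder share and mount point as above:

nfs-share:/ /mount/point nfs defaults,fsc 0 0

Every NFS mount that should use FS-Cache needs the fsc option; mounts without it bypass the cache entirely.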
+Cache Sharing + + + +FS-Cache +cache sharing + + + +cache sharing +FS-Cache + + +There are several potential issues to do with NFS cache sharing. Because the +cache is persistent, blocks of data in the cache are indexed on a sequence of four keys: + + + +Level 1: Server details +Level 2: Some mount options; security type; FSID; uniquifier +Level 3: File Handle +Level 4: Page number in file + + + +To avoid coherency management problems between superblocks, all NFS superblocks +that wish to cache data have unique Level 2 keys. +Normally, two NFS +mounts with same source volume and options will share a superblock, and +thus share the caching, even if they mount different directories within that +volume. Take the following two mount commands: + + + + + + + +mount home0:/disk0/fred /home/fred -o fsc + + +mount home0:/disk0/jim /home/jim -o fsc + + + + +Here, /home/fred and /home/jim will likely +share the superblock as they have the same options, especially if they come from the +same volume/partition on the NFS server (home0). Now, consider +the next two subsequent mount commands: + + + +mount home0:/disk0/fred /home/fred -o fsc,rsize=230 + + +mount home0:/disk0/jim /home/jim -o fsc,rsize=231 + + + +In this case, /home/fred and /home/jim will +not share the superblock as they have different network access parameters, which +are part of the Level 2 key. The same goes for the following mount sequence: + + + +mount home0:/disk0/fred /home/fred1 -o fsc,rsize=230 + + +mount home0:/disk0/fred /home/fred2 -o fsc,rsize=231 + + + + +Here, the contents of the two subtrees (/home/fred1 and /home/fred2) +will be cached twice. + + + + +Another way to avoid superblock sharing is to suppress it explicitly with the nosharecache +parameter. Using the same example: + + + +mount home0:/disk0/fred /home/fred -o nosharecache,fsc + + +mount home0:/disk0/jim /home/jim -o nosharecache,fsc + + + +However, in this case only one of the superblocks will be permitted to use +cache since there is nothing to distinguish the Level 2 keys of home0:/disk0/fred +and home0:/disk0/jim. To address this, add a unique identifier +on at least one of the mounts, i.e. fsc=unique-identifier. For example: + + +changed "uniquifier" here, unless that's an actual term? please advise + + + +mount home0:/disk0/fred /home/fred -o nosharecache,fsc + + +mount home0:/disk0/jim /home/jim -o nosharecache,fsc=jim + + + +Here, the unique identifier jim will be added to the Level 2 key used +in the cache for /home/jim. + + + + +
+ +
+Cache Limitations With NFS + + + +FS-Cache +NFS (cache limitations with) + + + +NFS (cache limitations with) +FS-Cache + + + + + + + + +cache limitations with NFS +FS-Cache + + + +Opening a file from a shared file system for direct I/O will automatically bypass +the cache. This is because this type of access must be direct to the server. + + + +Opening a file from a shared file system for writing will not work on NFS +version 2 and 3. The protocols of these versions do not provide sufficient +coherency management information for the client to detect a concurrent write +to the same file from another client. + + + +As such, opening a file from a shared file system for either direct I/O or +writing will flush the cached copy of the file. FS-Cache will not cache +the file again until it is no longer opened for direct I/O or writing. + + + +Furthermore, this release of FS-Cache only caches regular NFS files. +FS-Cache will not cache directories, symlinks, device +files, FIFOs and sockets. + + + +
+ + +
+ +
+Setting Cache Cull Limits +please note numerous references to cachefilesd if correct + + + +FS-Cache +cache cull limits + + + +cache cull limits +FS-Cache + + + +The cachefilesd daemon works by caching remote data from shared file systems to +free space on the disk. This could potentially consume all available free +space, which could be bad if the disk also housed the root partition. To control +this, cachefilesd tries to maintain a certain amount of free space by discarding old +objects (i.e. accessed less recently) from the cache. This behavior is known as +cache culling. + + + +When dealing with file system size, the CacheFiles culling behavior is controlled by +three settings in /etc/cachefilesd.conf: + + + + + +brun N% + + + + +FS-Cache +brun (cache cull limits settings) + + + +brun (cache cull limits settings) +FS-Cache + + + +If the amount of free space rises above N% of total disk capacity, cachefilesd disables culling. + + + + + +bcull N% + + + + +FS-Cache +bcull (cache cull limits settings) + + + +bcull (cache cull limits settings) +FS-Cache + + +If the amount of free space falls below N% of total disk capacity, cachefilesd starts culling. + + + + + +bstop N% + + + + +FS-Cache +bstop (cache cull limits settings) + + + +bstop (cache cull limits settings) +FS-Cache + + + +If the amount of free space falls below N%, cachefilesd +will no longer allocate disk space until until culling raises the amount of free space above N%. + + + + + + + +Some file systems have a limit on the number of files they can actually support (for example, ext3 +can only support up to 32,000 files). This makes it possible for CacheFiles to reach the file system's +maximum number of supported files without triggering bcull or bstop. +To address this, cachefilesd also tries to keep the number of files below +a file system's limit. This behavior is controlled by the following settings: + + +does this mean that cachefilesd is aware of the type of file system housing the cache, +and also knows its file number limits (if any)? + + + +frun N% + + + + +FS-Cache +frun (cache cull limits settings) + + + +frun (cache cull limits settings) +FS-Cache + + + +If the number of files the file system can further accommodate falls below N% of +its maximum file limit, cachefilesd disables culling. For example, with frun 5%, +cachefilesd will disable culling on an ext3 file system if it can accommodate more than 1,600 +files, or if the number of files falls below 95% of its limit, i.e. 30,400 files. + + + + + +fcull N% + + + + +FS-Cache +fcull (cache cull limits settings) + + + +fcull (cache cull limits settings) +FS-Cache + + + +If the number of files the file system can further accommodate rises above N% of +its maximum file limit, cachefilesd starts culling. For example, with fcull 5%, +cachefilesd will start culling on an ext3 file system if it can only accommodate 1,600 more files, +or if the number of files exceeds 95% of its limit, i.e. 30,400 files. + + + + + +fstop N% + + + + +FS-Cache +fstop (cache cull limits settings) + + + +fstop (cache cull limits settings) +FS-Cache + + +If the number of files the file system can further accommodate rises above N% of its +maximum file limit, cachefilesd will no longer allocate disk space until culling drops the +number of files to below N% of the limit. For example, with fstop 5%, +cachefilesd will no longer accommodate disk space until culling drops the number of +files below 95% of its limit, i.e. 30,400 files. 
+ + + + + + + +The default value of N for each setting is as follows: + + + +brun/frun — 10% +bcull/fcull — 7% +bstop/fstop — 3% + + + + +When configuring these settings, the following must hold true: + + + +0 <= bstop < bcull < brun < 100 + + +0 <= fstop < fcull < frun < 100 + + + + + + +
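+Putting these settings together, an /etc/cachefilesd.conf that uses the documented
+default cull limits might look like the following sketch; the cache directory shown is
+the usual default:
+
+dir /var/cache/fscache
+brun 10%
+bcull 7%
+bstop 3%
+frun 10%
+fcull 7%
+fstop 3%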
+
+Statistical Information
+
+
+
+FS-Cache
+statistical information (tracking)
+
+
+
+statistical information (tracking)
+FS-Cache
+
+
+
+
+tracking statistical information
+FS-Cache
+
+
+FS-Cache also keeps track of general statistical information. To view this information,
+use:
+
+
+cat /proc/fs/fscache/stats
+
+
+
+FS-Cache statistics include information on decision points
+and object counters. For more details on the statistics
+provided by FS-Cache, refer to the following kernel document:
+
+
+/usr/share/doc/kernel-doc-version/Documentation/filesystems/caching/fscache.txt
+
+
+ +
+References
+
+
+For more information on cachefilesd and how to configure it, refer to man cachefilesd
+and man cachefilesd.conf. The following kernel documents also provide additional information:
+
+
+/usr/share/doc/cachefilesd-0.5/README
+/usr/share/man/man5/cachefilesd.conf.5.gz
+/usr/share/man/man8/cachefilesd.8.gz
+
+
+
+For general information about FS-Cache, including details on its design constraints, available statistics,
+and capabilities, refer to the following kernel document:
+
+
+/usr/share/doc/kernel-doc-version/Documentation/filesystems/caching/fscache.txt
+
+
+
diff --git a/en-US/newmds-ssdtuning.xml b/en-US/newmds-ssdtuning.xml new file mode 100644 index 0000000..3f7198e --- /dev/null +++ b/en-US/newmds-ssdtuning.xml @@ -0,0 +1,282 @@ + + + + +<remark><command>[NEW!]</command></remark>Solid-State Disk Deployment Guidelines + + +solid state disks +SSD + + + +SSD +solid state disks + + + + +solid state disks +deployment guidelines + + + +deployment guidelines +solid state disks + + + + + + +solid state disks +throughput classes + + + +throughput classes +solid state disks + + +Solid-state disks (SSD) are storage devices that use NAND flash chips to +persistently store data. This sets them apart from previous generations of disks, which store data +in rotating, magnetic platters. In an SSD, the access time for data across the full Logical Block Address +(LBA) range is constant; whereas with older disks that use rotating media, access patterns that span +large address ranges incur seek costs. As such, SSD devices have better latency and throughput. + + + +Not all SSDs show the same performance profiles, however. In fact, +many of the first generation devices show little or no advantage over +spinning media. Thus, it is important to define classes of solid +state storage to frame further discussion in this section. + + + +SSDs can be divided into three classes, based on throughput: + + + +The first class of SSDs use a PCI-Express connection, which offers +the fastest I/O throughput compared to other classes. This class also has a +very low latency for random access. +The second class uses the traditional SATA connection, and features fast +random access for read and write operations (though not as fast as SSDs that use +PCI-Express connection). +The third class also uses SATA, but the performance of SSDs in this class do not +differ substantially from devices that use 7200rpm rotational disks. + + + + +For all three classes, performance degrades as the number of used blocks approaches the disk capacity. +The degree of performance impact varies greatly by vendor. However, all devices experience some +degradation. + + + + +solid state disks +TRIM command + + + +TRIM command +solid state disks + + +To address the degradation issue, the ATA specification outlines a new command: TRIM. +This command allows the file system to communicate to +the underlying storage device that a given range of blocks is no +longer in use. The SSD can use this information to free up space +internally, using the freed blocks for wear-leveling. + + + +Enabling TRIM support is most useful when there is available free space on +the file system, but the file system has already written to most logical blocks on the underlying +storage device. For more information about TRIM, refer to its Data Set +Management T13 Specifications from the following link: + + + + + + + + +Not all solid-state devices in the market support TRIM. + + + +
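+One way to check whether a particular drive advertises TRIM support is to inspect its
+ATA identify data. The following is a sketch only; it assumes the disk is /dev/sda and
+that the hdparm utility is installed:
+
+hdparm -I /dev/sda | grep -i trim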
+<remark>[UNFINISHED] </remark>Deployment Considerations
+
+
+
+solid state disks
+deployment
+
+
+
+deployment
+solid state disks
+
+
+Because of the internal layout and operation of SSDs, it is best to
+partition devices on an internal erase block boundary.
+Partitioning utilities in Fedora 13 choose sane defaults
+if the SSD exports topology information. This is especially true if the exported
+topology information includes alignment offsets and optimal I/O sizes.
+
+
+
+However, if the device does not export topology information,
+it is recommended that the first partition be created at a 1MB boundary
+(see the parted sketch at the end of this section).
+
+
+later: (reference the IO Topology documentation that I'm sure will be a part
+ of this guide)
+
+
+In addition, keep in mind that logical volumes, device-mapper targets, and md
+targets do not support TRIM. As such, the default Fedora 13 installation will not allow the use of the TRIM command, since this
+install uses DM-linear targets.
+
+
+
+Take note as well that software RAID levels 1, 4, 5, and 6 are not recommended for use on SSDs.
+During the initialization stage of these RAID levels, some RAID management utilities
+(such as mdadm) write to all of the blocks
+on the storage device to ensure that checksums operate properly. This will cause the
+performance of the SSD to degrade quickly.
+
+
+
+At present, ext4 is the only fully-supported file system that supports TRIM. To
+enable TRIM commands on a device, use the mount option discard.
+For example, to mount /dev/sda2 to /mnt with TRIM
+enabled, run:
+
+
+mount -t ext4 -o discard /dev/sda2 /mnt
+
+
+
+By default, ext4 does not issue the TRIM command. This is mostly
+to avoid problems on devices which may not properly implement the TRIM
+command. The Linux swap code will issue TRIM commands to TRIM-enabled devices,
+and there is no option to control this behavior.
+
+
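+The following sketch shows one way to create a single partition starting at a 1MB
+boundary with parted, as recommended above for devices that do not export topology
+information. The device name and the choice of a GPT label are assumptions:
+
+# create a GPT label and one partition aligned to 1MiB (assumed device)
+parted -s /dev/sda mklabel gpt
+parted -s /dev/sda mkpart primary 1MiB 100%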
+
+Tuning Considerations + + + +solid state disks +tuning + + + +tuning +solid state disks + + +This section describes several factors to consider when configuring settings that may affect SSD performance. + + + + +I/O Scheduler + + + +solid state disks +I/O scheduler (tuning) + + + +I/O scheduler (tuning) +solid state disks + + + +Any I/O scheduler should perform well with most SSDs. However, as with any +other storage type, you should benchmark to determine the +optimal configuration for a given workload. + + + + + + + +When using SSDs, you should change the I/O scheduler only for benchmarking +particular workloads. For more information about the different types of I/O schedulers, refer to +the &RHEL; I/O Tuning Guide. +The following kernel document also contains instructions on how to switch between I/O schedulers: + + + +/usr/share/doc/kernel-version/Documentation/block/switching-sched.txt + + + + +Virtual Memory + + + +solid state disks +virtual memory (tuning) + + + +virtual memory (tuning) +solid state disks + + +Like the I/O scheduler, virtual memory (VM) subsystem requires no special tuning. Given +the fast nature of I/O on SSD, it should be possible to turn down the vm_dirty_background_ratio +and vm_dirty_ratio settings, as increased +write-out activity should not negatively impact the latency of other +operations on the disk. However, this can generate more overall I/O +and so is not generally recommended without workload-specific testing. + + + + + +Swap + + + +solid state disks +swap (tuning) + + + +swap (tuning) +solid state disks + +An SSD can also be used as a swap device, and is likely to +produce good page-out/page-in performance. + + + + + +
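+For reference, the scheduler and virtual memory settings discussed above can be
+inspected and changed at runtime. The following is a sketch for benchmarking only;
+the device name sda and the numeric values are assumptions, not recommendations:
+
+# show and switch the I/O scheduler for the assumed device
+cat /sys/block/sda/queue/scheduler
+echo noop > /sys/block/sda/queue/scheduler
+# temporarily lower the dirty ratios while testing a workload
+sysctl -w vm.dirty_background_ratio=5
+sysctl -w vm.dirty_ratio=10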
+ + +
diff --git a/en-US/newosrg-fcoeiscsi.xml b/en-US/newosrg-fcoeiscsi.xml new file mode 100644 index 0000000..fecb5b0 --- /dev/null +++ b/en-US/newosrg-fcoeiscsi.xml @@ -0,0 +1,115 @@ + + + +
+<remark><command>[NEW!]</command></remark>Network Block Storage (FCoE, iSCSI) + + + + Network Block Storage (FCoE, iSCSI) +by Siddharth Nagar — last modified Nov 19, 2009 12:03 AM +— filed under: In: Fedora 12, Owner: coughlan, Feature: DOC-RED, QEOwner:kernel-storage, In: Fedora 11, Feature: DEV-GREEN, Feature: QE-RED + +Enhance existing support for FCoE and iSCSI. +Overview + +Enhance existing support for FCoE and iSCSI. +Owner (package maintainer): + + * kernel + +Summary: + + * Continue development of FCoE and iSCSI drivers. This includes software-only solutions, running on standard NICs, as well as NICs and HBAs that provide hardware assists. + * This also includes the development of management applications. + +Detailed description: + + * Enhance existing support for FCoE and iSCSI. + +Completion Status + + * % Completed in F11: 60% + * % Completed in F12: 90% + * Confidence factor that any remaining work for this feature will land by F12 beta (~2009-07-28) (high/med/low): high + * % Completed in RHEL 6 (if only in RHEL6): 100% + * Basic support is upstream. Enhancements are being developed. + * add fcoe-utils, driver updates, and more HW support to Fedora 12 and RHEL5.4 + +Fedora Links Other Helpful Information + + * + +Scoping +Target Audience + + * OPEN + +Product Variants / High Level Use Cases + + * OPEN + +Hardware Architectures + + * OPEN + +Constraints and Limitations + + * OPEN + +Third-Party Dependencies + + * OPEN + +Links +Features this feature depends on + + * This project depends on Anaconda to provide support for install and boot on these interconnects. + * OPEN + +Features depending on this Feature + + * OPEN + +Use-Cases + + * OPEN + +Test Cases + + * OPEN + +Bugzilla Numbers + + * https://bugzilla.redhat.com/show_bug.cgi?id=519880 + +Documentation Upstream Project + + * OPEN + +Business Aspects +Business Justification + + * OPEN + +Themes + + * OPEN + +Customers + + * OPEN + +Partners + + * OPEN + +Planned Certifications + + * OPEN + + + + +
diff --git a/en-US/newstorage-disklesssystems.xml b/en-US/newstorage-disklesssystems.xml new file mode 100644 index 0000000..b0bcce2 --- /dev/null +++ b/en-US/newstorage-disklesssystems.xml @@ -0,0 +1,295 @@ + + + + +Setting Up A Remote Diskless System + + +diskless systems +remote diskless systems + + + +remote diskless systems +diskless systems + + + + +diskless systems +required packages + + + +required packages +diskless systems + + +diskless systems +network booting service + + + +network booting service +diskless systems + + + +The Network Booting Service (provided by system-config-netboot) is no longer available in Fedora 13. +Deploying diskless systems is now possible in this release without the use of system-config-netboot. + + + +To set up a basic remote diskless system booted over PXE, you need the following packages: + + + + +tftp-server +xinetd +dhcp +syslinux +dracut-network + + + +Remote diskless system booting requires both a tftp service (provided by tftp-server) +and a DHCP service (provided by dhcp). The tftp service is used to retrieve kernel +image and initrd over the network via the PXE loader. Both tftp and DHCP services +must be provided by the same host machine. + + + +The following sections outline the necessary procedures for deploying remote diskless systems in a network environment. + + + + +
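+All of the packages listed above can be installed in a single step; for example, as root:
+
+yum install tftp-server xinetd dhcp syslinux dracut-network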
+Configuring a tftp Service for Diskless Clients + + + +diskless systems +tftp service, configuring + + + +tftp service, configuring +diskless systems + + + + + + + + +configuring a tftp service for diskless clients +diskless systems + + + + + +The tftp service is disabled by default. To enable it and allow PXE booting via the network, set the +Disabled option in /etc/xinetd.d/tftp to no. +To configure tftp, perform the following steps: + + + + + +The tftp root directory (chroot) is located in /var/lib/tftpboot. +Copy /usr/share/syslinux/pxelinux.0 to /var/lib/tftpboot/, as in: + + + +cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/ + + + + + +Create a pxelinux.cfg directory inside the tftp root directory: + + + +mkdir -p /var/lib/tftpboot/pxelinux.cfg/ + + + + + + +You will also need to configure firewall rules properly +to allow tftp traffic; as tftp supports TCP wrappers, you can configure host access to tftp +via /etc/hosts.allow. For more information on configuring TCP wrappers and the /etc/hosts.allow +configuration file, refer to the Security Guide for Fedora 13 or &RHEL; &RHELVER;; man hosts_access +also provides information about /etc/hosts.allow. + + + +After configuring tftp for diskless clients, configure DHCP, NFS, and the exported file system accordingly. Refer to + and for instructions on how to do so. + + +
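+Note that because tftp runs under xinetd, the edit to /etc/xinetd.d/tftp described
+earlier only takes effect once xinetd is (re)started. A sketch:
+
+chkconfig xinetd on
+service xinetd restart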
+ +
+Configuring DHCP for Diskless Clients + + + +diskless systems +DHCP, configuring + + + +DHCP, configuring +diskless systems + + + + + + + +configuring DHCP for diskless clients +diskless systems + + + +After configuring a tftp server, you need to set up a DHCP service on the same host machine. Refer to the Fedora 13 Deployment Guide for instructions on how to set up a DHCP server. In addition, you should +enable PXE booting on the DHCP server; to do this, add the following configuration to /etc/dhcp/dhcp.conf: + + +allow booting; +allow bootp; +class "pxeclients" { + match if substring(option vendor-class-identifier, 0, 9) = "PXEClient"; + next-server server-ip; + filename "linux-install/pxelinux.0"; +} + + +Replace server-ip with the IP address of the host machine on +which the tftp and DHCP services reside. Now that tftp and DHCP are configured, +all that remains is to configure NFS and the exported file system; refer to for instructions. + +
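+Note that changes to the DHCP configuration only take effect after the dhcpd service is
+restarted; for example:
+
+service dhcpd restart
+chkconfig dhcpd on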
+ +
+Configuring an Exported File System for Diskless Clients + + + +diskless systems +exported file systems + + + +exported file systems +diskless systems + + +The root directory of the exported file system (used by diskless clients in the network) is shared via NFS. Configure the +NFS service to export the root directory by adding it to /etc/exports. For instructions on how to do so, +refer to . + + + +To accommodate completely diskless clients, the root directory should contain a complete Fedora 13 installation. You can synchronize this +with a running system via rsync, as in: + + + +rsync -a -e ssh --exclude='/proc/*' --exclude='/sys/*' hostname.com:/ /exported/root/directory + + + +Replace hostname.com with the hostname of the running system with which to synchronize via rsync. The /exported/root/directory is the path to the exported file system. + + + +Alternatively, you can also use yum with the --installroot option to install Fedora to a specific +location. For example: + + + +yum groupinstall Base --installroot=/exported/root/directory + + + +The file system to be exported still needs to be configured further before it can be used by diskless clients. To do this, perform the following procedure: + + + + + + +Configure the exported file system's /etc/fstab to contain (at least) the following configuration: + +none /tmp tmpfs defaults 0 0 +tmpfs /dev/shm tmpfs defaults 0 0 +sysfs /sys sysfs defaults 0 0 +proc /proc proc defaults 0 0 + + + + +Select the kernel that diskless clients should use (vmlinuz-kernel-version) and copy it to the tftp boot directory: + + + +cp /boot/vmlinuz-kernel-version /var/lib/tftpboot/ + + + + + + + +Create the initrd (i.e. initramfs-kernel-version.img) with network support: + + + +dracut initramfs-kernel-version.img vmlinuz-kernel-version + + + +Copy the resulting initramfs-kernel-version.img into the tftp boot directory as well. + + + + + + +Edit the default boot configuration to use the initrd and kernel inside /var/lib/tftpboot. +This configuration should instruct the diskless client's root to mount the exported file system (/exported/root/directory) +as read-write. To do this, configure /var/lib/tftpboot/pxelinux.cfg/default with the following: + + +default rhel6 + +label rhel6 + kernel vmlinuz-kernel-version + append initrd=initramfs-kernel-version.img root=nfs:server-ip:/exported/root/directory rw + + + + +Replace server-ip with the IP address of the host machine on which the tftp and DHCP services reside. + + + + + +The NFS share is now ready for exporting to diskless clients. These clients can boot over the network via PXE. + + + +
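+For reference, the NFS export described at the beginning of this section might be
+declared in /etc/exports as in the following hypothetical entry; the client network
+range and export options are assumptions and should be adjusted to local policy:
+
+/exported/root/directory    192.168.1.0/24(rw,no_root_squash,async)
+
+After editing /etc/exports, run exportfs -r (or restart the nfs service) so that the new
+export takes effect.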
+ +
diff --git a/en-US/newstorage-iolimits.xml b/en-US/newstorage-iolimits.xml new file mode 100644 index 0000000..bb2e354 --- /dev/null +++ b/en-US/newstorage-iolimits.xml @@ -0,0 +1,596 @@ + + + + + +<remark><command>[NEW!]</command> </remark>Storage I/O Alignment and Size + +I/O alignment and size +Linux I/O stack + + + +Linux I/O stack +I/O alignment and size + + + +I/O alignment and size + + + +Recent enhancements to the SCSI and ATA standards allow storage devices to +incidate their preferred (and in some cases, required) I/O alignment and I/O size. +This information is particularly useful with newer disk drives that increase the +physical sector size from 512 bytes to 4k bytes. This information may also be beneficial +for RAID devices, where the chunk size and stripe size may impact performance. + + + + +The Linux I/O stack has been enhanced to process vendor-provided +I/O alignment and I/O size information, allowing storage management +tools (parted, lvm, mkfs.*, +and the like) to optimize data placement and access. If a legacy +device does not export I/O alignment and size data, then storage management +tools in Fedora 13 will conservatively align I/O on a +4k (or larger power of 2) boundary. This will ensure that 4k-sector devices +operate correctly even if they do not indicate any required/preferred +I/O alignment and size. + + + +Fedora 13 supports 4k-sector devices as data disks, not as boot disks. +Boot support for 4k-sector devices is planned for a later release. + + + +Refer to to learn how to determine the information +that the operating system obtained from the device. This data is subsequently used by the storage management +tools to determine data placement. + + + + +
+Parameters for Storage Access
+
+
+
+I/O alignment and size
+storage access parameters
+
+
+
+storage access parameters
+I/O alignment and size
+
+
+
+
+parameters for storage access
+I/O alignment and size
+
+
+
+The operating system uses the following information to determine I/O alignment
+and size:
+
+
+
+physical_block_size
+The smallest internal unit on which the device can operate
+
+logical_block_size
+The unit used externally to address a location on the device
+
+alignment_offset
+The number of bytes that the beginning of the Linux block device (partition/MD/LVM device) is offset from the underlying physical alignment
+
+minimum_io_size
+The device’s preferred minimum unit for random I/O
+
+optimal_io_size
+The device’s preferred unit for streaming I/O
+
+
+
+
+For example, certain 4K sector devices may use a 4K physical_block_size
+internally but expose a more granular 512-byte logical_block_size to Linux.
+This discrepancy introduces potential for misaligned I/O.
+To address this, the Fedora 13 I/O stack will attempt to start
+all data areas on a naturally-aligned boundary (physical_block_size) by making
+sure it accounts for any alignment_offset if the beginning of the block device
+is offset from the underlying physical alignment.
+
+
+
+Storage vendors can also supply I/O hints about the preferred minimum unit for
+random I/O (minimum_io_size) and streaming I/O (optimal_io_size) of a device.
+For example, minimum_io_size and optimal_io_size may correspond to a RAID
+device's chunk size and stripe size respectively.
+
+
+ + +
+Userspace Access + + + + +I/O alignment and size +userspace access + + + +userspace access +I/O alignment and size + + + + + +I/O alignment and size +logical_block_size + + + +logical_block_size +I/O alignment and size + + + + +Always take care to use properly aligned and sized I/O. This +is especially important for Direct I/O access. Direct I/O should be +aligned on a logical_block_size boundary, +and in multiples of the logical_block_size. + + + + +With native 4K devices (i.e. logical_block_size is 4K) +it is now critical that applications perform direct I/O in +multiples of the device's logical_block_size. This +means that applications will fail with native 4k devices that perform +512-byte aligned I/O rather than 4k-aligned I/O. + + + +To avoid this, an application should consult the I/O parameters of a device +to ensure it is using the proper I/O alignment and size. As mentioned earlier, +I/O parameters are exposed through the both sysfs and block +device ioctl interfaces. + + + +For more details, refer to man libblkid. This man page +is provided by the libblkid-devel package. + +revisit later: is markup enough for + + +sysfs Interface + + + +I/O alignment and size +sysfs interface (userspace access) + + + +sysfs interface (userspace access) +I/O alignment and size + + + + +/sys/block/disk/alignment_offset +/sys/block/disk/partition/alignment_offset +/sys/block/disk/queue/physical_block_size +/sys/block/disk/queue/logical_block_size +/sys/block/disk/queue/minimum_io_size +/sys/block/disk/queue/optimal_io_size + + + + + + +The kernel will still export these sysfs attributes for +"legacy" devices that do not provide I/O parameters information, for example: + + +alignment_offset: 0 +physical_block_size: 512 +logical_block_size: 512 +minimum_io_size: 512 +optimal_io_size: 0 + + +Block Device ioctls + + + + +I/O alignment and size +block device ioctls (userspace access) + + + +block device ioctls (userspace access) +I/O alignment and size + + + + +BLKALIGNOFF: alignment_offset +BLKPBSZGET: physical_block_size +BLKSSZGET: logical_block_size +BLKIOMIN: minimum_io_size +BLKIOOPT: optimal_io_size + + + + + + + +
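+For example, the values exported through sysfs can be read directly from the command
+line. This sketch assumes the disk of interest is sda:
+
+cat /sys/block/sda/alignment_offset
+cat /sys/block/sda/queue/physical_block_size
+cat /sys/block/sda/queue/logical_block_size
+cat /sys/block/sda/queue/minimum_io_size
+cat /sys/block/sda/queue/optimal_io_size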
+ + +
+Standards + + +This section describes I/O standards used by ATA and SCSI devices. + + + +ATA + + + +I/O alignment and size +ATA standards + + + +ATA standards +I/O alignment and size + +ATA devices must report appropriate information via the +IDENTIFY DEVICE command. ATA devices only +report I/O parameters for physical_block_size, +logical_block_size, and alignment_offset. +The additional I/O hints are outside the scope of the ATA Command Set. + + + +SCSI + + + +I/O alignment and size +SCSI standards + + + +SCSI standards +I/O alignment and size + + + + +I/O parameters support in Fedora 13 requires at least +version 3 of the SCSI Primary Commands +(SPC-3) protocol. The kernel will only send an extended inquiry +(which gains access to the BLOCK LIMITS VPD page) and +READ CAPACITY(16) command to devices which claim compliance with SPC-3. + + + + + +I/O alignment and size +READ CAPACITY(16) + + + +READ CAPACITY(16) +I/O alignment and size + + + + +The READ CAPACITY(16) command provides the block sizes and alignment offset: + + + + + +LOGICAL BLOCK LENGTH IN BYTES is used to derive /sys/block/disk/queue/physical_block_size + + + + + +LOGICAL BLOCKS PER PHYSICAL BLOCK EXPONENT is used to derive +/sys/block/disk/queue/logical_block_size + + + + +LOWEST ALIGNED LOGICAL BLOCK ADDRESS is used to derive: + + + +/sys/block/disk/alignment_offset +/sys/block/disk/partition/alignment_offset + + + + + +The BLOCK LIMITS VPD page (0xb0) provides the I/O hints. It also uses OPTIMAL TRANSFER +LENGTH GRANULARITY and OPTIMAL TRANSFER LENGTH +to derive: + + + + +/sys/block/disk/queue/minimum_io_size +/sys/block/disk/queue/optimal_io_size + + + +The sg3_utils package provides the sg_inq +utility, which can be used to access the BLOCK LIMITS VPD page. +To do so, run: + + + +sg_inq -p 0xb0 disk + + + +
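+The sg3_utils package also provides the sg_readcap utility; with its long option it
+issues READ CAPACITY(16), which reports the block sizes and lowest aligned LBA
+described above. A sketch, assuming the disk is /dev/sda:
+
+sg_readcap --long /dev/sda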
+ +
+Stacking I/O Parameters + + + + +I/O alignment and size +stacking I/O parameters + + + +stacking I/O parameters +I/O alignment and size + + + + + + + + +I/O parameters stacking +I/O alignment and size + + + +All layers of the Linux I/O stack have been engineered to propagate the +various I/O parameters up the stack. When a layer consumes an attribute +or aggregates many devices, the layer must expose appropriate +I/O parameters so that upper-layer devices or tools will have +an accurate view of the storage as it transformed. Some practical examples are: + + + +Only one layer in the I/O stack should adjust for a non-zero + alignment_offset; once a layer adjusts accordingly, it +will export a device with an alignment_offset of zero. + +A striped Device Mapper (DM) device created with LVM must export + a minimum_io_size and optimal_io_size +relative to the stripe + count (number of disks) and user-provided chunk size. + + + +In Fedora 13, Device Mapper and Software Raid (MD) +device drivers can be used to arbitrarily combine devices with +different I/O parameters. The kernel's block layer will attempt +to reasonably combine the I/O parameters of the individual +devices. The kernel will not prevent combining heterogenuous +devices; however, be aware of the risks associated with doing so. + + + +For instance, a 512-byte device and a 4K device may be combined into a +single logical DM device, which would have a +logical_block_size of 4K. File systems layered +on such a hybrid device assume that 4K will be written atomically, +but in reality it will span 8 logical block addresses when issued +to the 512-byte device. Using a 4K logical_block_size +for the higher-level DM device increases potential for a partial write +to the 512-byte device if there is a system crash. + + + + +If combining the I/O parameters of multiple devices results in a conflict, the +block layer may issue a warning that the device is susceptible to partial +writes and/or is misaligned. + + + +
+ +
+Logical Volume Manager + + + + +I/O alignment and size +LVM + + + +LVM +I/O alignment and size + + + +LVM provides userspace tools that are used to manage the kernel's DM +devices. LVM will shift the start of the data area (that a given DM +device will use) to account for a non-zero alignment_offset +associated with any device managed by LVM. This means logical volumes will be +properly aligned (alignment_offset=0). + + +By default, LVM will adjust for any +alignment_offset, but this behavior can be disabled by setting +data_alignment_offset_detection to 0 +in /etc/lvm/lvm.conf. Disabling this is not +recommended. + + + + +LVM will also detect the I/O hints for a device. The start of a +device's data area will be a multiple of the minimum_io_size or +optimal_io_size exposed in sysfs. LVM will use the minimum_io_size +if optimal_io_size is undefined (i.e. 0). + + + +By default, LVM will automatically determine these I/O hints, but this + behavior can be disabled by setting +data_alignment_detection to 0 +in /etc/lvm/lvm.conf. Disabling this is not +recommended. + + +
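+One way to confirm that LVM placed the data area where expected is to display the
+physical extent start of each physical volume; for example:
+
+pvs -o +pe_start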
+ + +
+Partition and File System Tools + + + + +I/O alignment and size +tools (for partitioning and other file system functions) + + + +tools (for partitioning and other file system functions) +I/O alignment and size + + +This section describes how different partition and file system management tools +interact with a device's I/O parameters. + + + + +util-linux-ng's libblkid and fdisk + + +The libblkid library provided with the util-linux-ng package includes a +programmatic API to access a device's I/O parameters. libblkid allows +applications, especially those that use Direct I/O, to properly size +their I/O requests. The fdisk utility from util-linux-ng uses libblkid to determine +the I/O parameters of a device for optimal placement of all partitions. The fdisk utility will align all +partitions on a 1MB boundary. + + + + + +parted and libparted + + +The libparted library from parted also uses the +I/O parameters API of libblkid. The Fedora installer +(Anaconda) uses libparted, which means that +all partitions created by either the installer or parted will be properly aligned. +For all partitions created on a device that does not appear to provide I/O parameters, the default +alignment will be 1MB. + + + + + + +The heuristics parted uses are as follows: + + + +Always use the reported alignment_offset as the offset for the + start of the first primary partition. +If optimal_io_size is defined (i.e. not 0), +align all partitions on an optimal_io_size boundary. + + + +If optimal_io_size is undefined (i.e. 0), alignment_offset is 0, + and minimum_io_size is a power of 2, use a 1MB default alignment. + + + +This is the catch-all for "legacy" devices which don't appear to provide +I/O hints. As such, by default all partitions will be aligned on a 1MB boundary. + + + + + +Fedora cannot distinguish between devices that don't provide +I/O hints and those that do so with alignment_offset=0 and +optimal_io_size=0. Such a device might be a single SAS 4K device; +as such, at worst 1MB of space is lost at the start of the disk. + + + + + + +File System tools + +The different mkfs.filesystem utilities +have also been enhanced to consume a device's I/O parameters. These utilities will not allow +a file system to be formatted to use a block size smaller than the logical_block_size +of the underlying storage device. + + + + +Except for mkfs.gfs2, all other mkfs.filesystem +utilities also use the I/O hints to layout on-disk data structure and data areas relative to the +minimum_io_size and optimal_io_size of the underlying storage +device. This allows file systems to be optimally formatted for various RAID (striped) layouts. + + + + + +
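+For reference, the I/O topology values that libblkid reports for a device can also be
+displayed from the command line. This is a sketch only; it assumes the disk is /dev/sda
+and that the installed blkid supports the -i (I/O limits) option:
+
+blkid -i /dev/sda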
+ +
diff --git a/en-US/newstorage-writebarriers.xml b/en-US/newstorage-writebarriers.xml new file mode 100644 index 0000000..2950d05 --- /dev/null +++ b/en-US/newstorage-writebarriers.xml @@ -0,0 +1,335 @@ + + + + +Write Barriers + +content taken from + +write barriers +definition + + + + + +A write barrier is a kernel mechanism used to ensure that file system metadata is correctly written and ordered on persistent storage, even when storage devices with volatile write caches lose power. File systems with write barriers enabled also ensure that data transmitted via fsync() is persistent throughout a power loss. + + + +Enabling write barriers incurs a substantial performance penalty for some applications. Specifically, applications that use fsync() heavily or create and delete many small files will likely run much slower. + + + + + +
+Importance of Write Barriers + + + +write barriers +importance of write barriers + + + +importance of write barriers +write barriers + + + + + +File systems take great care to safely update metadata, ensuring consistency. Journalled file systems bundle metadata updates into transactions and send them to persistent storage in the following manner: + + + + + +First, the file system sends the body of the transaction to the storage device. + + + + +Then, the file system sends a commit block. + + + + +If the transaction and its corresponding commit block are written to disk, the file system assumes that the transaction will survive any power failure. + + + + +However, file system integrity during power failure becomes more complex for storage devices with extra caches. Storage target devices like local S-ATA or SAS drives may have write caches ranging from 32MB to 64MB in size (with modern drives). Hardware RAID controllers often contain internal write caches. Further, high end arrays, like those from NetApp, IBM, Hitachi and EMC (among others), also have large caches. + + +"when storage devices add extra caches" > "for storage devices with extra caches" + + +Storage devices with write caches report I/O as "complete" when the data is in cache; if the cache loses power, it loses its data as well. Worse, as the cache de-stages +to persistent storage, it may change the original metadata ordering. When this occurs, the commit block may be present on disk without having the complete, associated transaction in place. As a result, the journal may replay these uninitialized transaction blocks into the file system during post-power-loss recovery; this will cause data inconsistency and corruption. + + + + + + + + +How Write Barriers Work + + + +write barriers +how write barriers work + + + +how write barriers work +write barriers + + + + + +Write barriers are implemented in the Linux kernel via storage write cache flushes before and after the I/O, which is order-critical. After the transaction is written, the storage cache is flushed, the commit block is written, and the cache is flushed again. This ensures that: + + + +The disk contains all the data. +No re-ordering has occurred. + + + +With barriers enabled, an fsync() call will also issue a storage cache flush. This guarantees that file data is persistent on disk even if power loss occurs shortly after fsync() returns. + + + + + + +
+
+Enabling/Disabling Write Barriers + + + + + + +write barriers +enablind/disabling + + + +enablind/disabling +write barriers + + + + +To mitigate the risk of data corruption during power loss, some storage devices use battery-backed write caches. Generally, high-end arrays and some hardware controllers use battery-backed write cached. However, because the cache's volatility is not visible to the kernel, Fedora 13 enables write barriers by default on all supported journaling file systems. + + + + + +Write caches are designed to increase I/O performance. However, enabling write barriers means constantly flushing these caches, which can significantly reduce performance. + + + + +For devices with non-volatile, battery-backed write caches and those with write-caching disabled, you can safely disable write barriers at mount time using the -o nobarrier option for mount. However, some devices do not support write barriers; such devices will log an error message to /var/log/messages (refer to ). + + + + +write barriers +error messages + + + +error messages +write barriers + + + + + +Write barrier error messages per file system + + + + + File System + Error Message + + + + + ext3/ext4 + JBD: barrier-based sync failed on device - disabling barriers + + + XFS + Filesystem device - Disabling barriers, trial barrier write failed + + + btrfs + btrfs: disabling barriers on dev device + + + + + +
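+For example, to mount an ext4 file system with write barriers disabled, where the device
+and mount point are placeholders:
+
+mount -o nobarrier /dev/device /mount/point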
+ + + +
+ +
+Write Barrier Considerations + + + +Some system configurations do not need write barriers to protect data. In most cases, other methods +are preferable to write barriers, since enabling write barriers +causes a significant performance penalty. + + + +Disabling Write Caches + + + +write barriers +disabling write caches + + + +disabling write caches +write barriers + + + + + + +write caches, disabling +write barriers + + + + +One way to alternatively avoid data integrity issues is to ensure +that no write caches lose data on power failures. When possible, the +best way to configure this is to simply disable the write cache. +On a simple server or desktop with one or more SATA drives (off a local SATA controller Intel AHCI part), +you can disable the write cache on the target SATA drives with the hdparm command, as in: + + + + +hdparm -W0 /device/ + + + + +Battery-Backed Write Caches + + + +write barriers +battery-backed write caches + + + +battery-backed write caches +write barriers + + + +Write barriers are also unnecessary whenever the system uses hardware RAID controllers with battery-backed write cache. If +the system is equipped with such controllers and if its component drives have write caches disabled, the controller +will advertise itself as a write-through cache; this will inform the kernel that the write cache data will survive a power loss. + + + + +Most controllers use vendor-specific tools to query and manipulate target drives. For example, the LSI Megaraid SAS controller +uses a battery-backed write cache; this type of controller requires the MegaCli64 tool to manage target drives. +To show the state of all back-end drives for LSI Megaraid SAS, use: + + + +MegaCli64 -LDGetProp -DskCache -LAll -aALL + + + +To disable the write cache of all back-end drives for LSI Megaraid SAS, use: + + + +MegaCli64 -LDSetProp -DisDskCache -Lall -aALL + + +please check: should it be DskCache or DisDskCache? +please check: should it be -Lall or -LAll? or doesn't it matter? + + +Hardware RAID cards recharge their batteries while the system is operational. +If a system is powered off for an extended period of time, the batteries will +lose their charge, leaving stored data vulnerable during a power failure. + + + + + + +High-End Arrays + + + +write barriers +high-end arrays + + + +high-end arrays +write barriers + + +High-end arrays have various ways of protecting data in the event of a power failure. +As such, there is no need to verify the state of the internal drives in external RAID storage. + + + + + +NFS + + + + +write barriers +NFS + + + +NFS +write barriers + +NFS clients do not need to enable write barriers, since data integrity is handled +by the NFS server side. As such, NFS servers should be configured to ensure data persistence +throughout a power loss (whether through write barriers or other means). + + + + + +
+ +
diff --git a/en-US/p1-storagemannew.xml b/en-US/p1-storagemannew.xml new file mode 100644 index 0000000..d5b7a09 --- /dev/null +++ b/en-US/p1-storagemannew.xml @@ -0,0 +1,15 @@ + + + + + New Storage Management Features + + This is a test paragraph + +new features in RHEL6 + + + + + diff --git a/en-US/p1-storagenew.xml b/en-US/p1-storagenew.xml new file mode 100644 index 0000000..5a0c2f0 --- /dev/null +++ b/en-US/p1-storagenew.xml @@ -0,0 +1,14 @@ + + + + + New File System Management Features + + This is a test paragraph + +new features in RHEL6 + + + + diff --git a/en-US/remove-comments.rb b/en-US/remove-comments.rb new file mode 100644 index 0000000..12e4033 --- /dev/null +++ b/en-US/remove-comments.rb @@ -0,0 +1,14 @@ +#!/usr/bin/env ruby +#arg1 = filename +#arg2 = tag to remove - if none then remove comments +result = File.read(ARGV[0]) +tag = ARGV[1] +#puts tag +if tag == nil then + result = result.gsub(//, "") + #result = result.gsub(//,"") +else + result = result.gsub(Regexp.new("/<#{tag}[^>]*>(.*?)<\/#{tag}>/"), "") + #result = result.gsub(/<remark[^>]*>(.*?)<\/remark>/, "") +end +puts result diff --git a/en-US/swap-creating-file.xml b/en-US/swap-creating-file.xml new file mode 100644 index 0000000..0fbf797 --- /dev/null +++ b/en-US/swap-creating-file.xml @@ -0,0 +1,71 @@ + + + +
+ Creating a Swap File + + swap space + file + creating + + + To add a swap file: + + + + + Determine the size of the new swap file in megabytes and multiply by 1024 to determine the number of blocks. For example, the block size of a 64 MB swap file is 65536. + + + + + At a shell prompt as root, type the following command with count being equal to the desired block size: + + +dd if=/dev/zero of=/swapfile bs=1024 count=65536 + + + + + Setup the swap file with the command: + + +mkswap /swapfile + + + + + To enable the swap file immediately but not automatically at boot time: + + +swapon /swapfile + + + + + To enable it at boot time, edit /etc/fstab to include the following entry: + + +/swapfile swap swap defaults 0 0 + + + The next time the system boots, it enables the new swap file. + + + + + + +To test if the new swap file was successfully created, use cat /proc/swaps or free to +inspect the swap space. + +
diff --git a/en-US/swap-creating-lvm2.xml b/en-US/swap-creating-lvm2.xml new file mode 100644 index 0000000..edc245d --- /dev/null +++ b/en-US/swap-creating-lvm2.xml @@ -0,0 +1,67 @@ + + + +
+ Creating an LVM2 Logical Volume for Swap + + swap space + LVM2 + creating + + + To add a swap volume group (assuming /dev/VolGroup00/LogVol02 is the swap volume you want to add): + + + + + + Create the LVM2 logical volume of size 256 MB: + + +lvcreate VolGroup00 -n LogVol02 -L 256M + + + + + Format the new swap space: + + +mkswap /dev/VolGroup00/LogVol02 + + + + + Add the following entry to the /etc/fstab file: + + +/dev/VolGroup00/LogVol02 swap swap defaults 0 0 + + + + + Enable the extended logical volume: + + +swapon -v /dev/VolGroup00/LogVol02 + + + + + +To test if the logical volume was successfully created, use cat /proc/swaps or free to +inspect the swap space. + + + + + +
diff --git a/en-US/swap-extending-lvm2.xml b/en-US/swap-extending-lvm2.xml new file mode 100644 index 0000000..842e5b1 --- /dev/null +++ b/en-US/swap-extending-lvm2.xml @@ -0,0 +1,71 @@ + + + +
+ Extending Swap on an LVM2 Logical Volume + + swap space + LVM2 + extending + + +By default, Fedora 13 uses all available space during installation. If this is the case with your system, then you must first add a new physical volume to the volume group used by the swap space. For instructions on how to do so, refer to . + + + +After adding additional storage to the swap space's volume group, it is now possible to extend it. To do so, perform the following procedure (assuming /dev/VolGroup00/LogVol01 is the volume you want to extend by 256MB): + + + + + + + Disable swapping for the associated logical volume: + + +swapoff -v /dev/VolGroup00/LogVol01 + + + + + Resize the LVM2 logical volume by 256 MB: + + +lvresize /dev/VolGroup00/LogVol01 -L +256M + + + + + Format the new swap space: + + +mkswap /dev/VolGroup00/LogVol01 + + + + + Enable the extended logical volume: + + + +swapon -v /dev/VolGroup00/LogVol01 + + + + + + + +To test if the logical volume was successfully extended, use cat /proc/swaps or free to +inspect the swap space. + + + +
diff --git a/en-US/swap-reducing-lvm2.xml b/en-US/swap-reducing-lvm2.xml new file mode 100644 index 0000000..ed07fa8 --- /dev/null +++ b/en-US/swap-reducing-lvm2.xml @@ -0,0 +1,62 @@ + + + +
+ Reducing Swap on an LVM2 Logical Volume + + swap space + LVM2 + reducing + + + To reduce an LVM2 swap logical volume (assuming /dev/VolGroup00/LogVol01 is the volume you want to reduce): + + + + + Disable swapping for the associated logical volume: + + +swapoff -v /dev/VolGroup00/LogVol01 + + + + + Reduce the LVM2 logical volume by 512 MB: + + +lvreduce /dev/VolGroup00/LogVol01 -L -512M + + + + + Format the new swap space: + + +mkswap /dev/VolGroup00/LogVol01 + + + + + Enable the extended logical volume: + + +swapon -v /dev/VolGroup00/LogVol01 + + + + + + +To test if the swap's logical volume size was successfully reduced, use cat /proc/swaps or free to +inspect the swap space. + +
diff --git a/en-US/swap-removing-file.xml b/en-US/swap-removing-file.xml new file mode 100644 index 0000000..1684f64 --- /dev/null +++ b/en-US/swap-removing-file.xml @@ -0,0 +1,44 @@ + + + +
+ Removing a Swap File
+
+ swap space
+ file
+ removing
+
+
+ To remove a swap file:
+
+
+
+ At a shell prompt as root, execute the following command to disable the swap file (where /swapfile is the swap file):
+
+
+swapoff -v /swapfile
+
+
+
+ Remove its entry from the /etc/fstab file.
+
+
+
+ Remove the actual file:
+
+
+rm /swapfile
+
+
diff --git a/en-US/swap-removing-lvm2.xml b/en-US/swap-removing-lvm2.xml new file mode 100644 index 0000000..73e20b1 --- /dev/null +++ b/en-US/swap-removing-lvm2.xml @@ -0,0 +1,56 @@ + + + +
+ Removing an LVM2 Logical Volume for Swap + + swap space + LVM2 + removing + + + + To remove a swap volume group (assuming /dev/VolGroup00/LogVol02 is the swap volume you want to remove): + + + + + Disable swapping for the associated logical volume: + + +swapoff -v /dev/VolGroup00/LogVol02 + + + + + Remove the LVM2 logical volume of size 512 MB: + + +lvremove /dev/VolGroup00/LogVol02 + + + + + Remove the following entry from the /etc/fstab file: + + +/dev/VolGroup00/LogVol02 swap swap defaults 0 0 + + + + + + +To test if the logical volume size was successfully removed, use cat /proc/swaps or free to +inspect the swap space. + + +
diff --git a/en-US/xfs-Allocation_Groups.xml b/en-US/xfs-Allocation_Groups.xml new file mode 100644 index 0000000..8b09634 --- /dev/null +++ b/en-US/xfs-Allocation_Groups.xml @@ -0,0 +1,933 @@ + + + +
+ Allocation Groups
+
+ XFS filesystems are divided into a number of equally sized chunks called Allocation Groups. Each AG can almost be thought of as an individual filesystem that maintains its own space usage. Each AG can be up to one terabyte in size (512 bytes * 2^31), regardless of the underlying device's sector size.
+
+ Each AG has the following characteristics:
+
+
+ A super block describing overall filesystem info
+
+ Free space management
+
+ Inode allocation and tracking
+
+
+ Having multiple AGs allows XFS to handle most operations in parallel without degrading performance as the number of concurrent accesses increases.
+
+ The only global information maintained by the first AG (primary) is free space across the filesystem and total inode counts. If the XFS_SB_VERSION2_LAZYSBCOUNTBIT flag is set in the superblock, these are only updated on-disk when the filesystem is cleanly unmounted (umount or shutdown).
+
+ Immediately after a mkfs.xfs, the primary AG has the following disk layout; the subsequent AGs do not have any inodes allocated:
+
+
+ [figure: on-disk layout of the primary AG immediately after mkfs.xfs]
+
+
+ Each of these structures is expanded upon in the following sections.
+
+ Superblocks + + Each AG starts with a superblock. The first one is the primary superblock that stores aggregate AG information. Secondary superblocks are only used by xfs_repair when the primary superblock has been corrupted. + + + The superblock is defined by the following structure. The description of each field follows. + + +typedef struct xfs_sb +{ + __uint32_t        sb_magicnum; + __uint32_t        sb_blocksize; + xfs_drfsbno_t     sb_dblocks; + xfs_drfsbno_t     sb_rblocks; + xfs_drtbno_t      sb_rextents; + uuid_t        sb_uuid; + xfs_dfsbno_t      sb_logstart; + xfs_ino_t        sb_rootino; + xfs_ino_t        sb_rbmino; + xfs_ino_t        sb_rsumino; + xfs_agblock_t    sb_rextsize; + xfs_agblock_t    sb_agblocks; + xfs_agnumber_t    sb_agcount; + xfs_extlen_t      sb_rbmblocks; + xfs_extlen_t      sb_logblocks; + __uint16_t        sb_versionnum; + __uint16_t        sb_sectsize; + __uint16_t        sb_inodesize; + __uint16_t        sb_inopblock; + char        sb_fname[12]; + __uint8_t        sb_blocklog; + __uint8_t        sb_sectlog; + __uint8_t        sb_inodelog; + __uint8_t        sb_inopblog; + __uint8_t        sb_agblklog; + __uint8_t        sb_rextslog; + __uint8_t        sb_inprogress; + __uint8_t        sb_imax_pct; + __uint64_t        sb_icount; + __uint64_t        sb_ifree; + __uint64_t        sb_fdblocks; + __uint64_t        sb_frextents; + xfs_ino_t        sb_uquotino; + xfs_ino_t        sb_gquotino; + __uint16_t        sb_qflags; + __uint8_t        sb_flags; + __uint8_t        sb_shared_vn; + xfs_extlen_t      sb_inoalignmt; + __uint32_t        sb_unit; + __uint32_t        sb_width; + __uint8_t        sb_dirblklog; + __uint8_t        sb_logsectlog; + __uint16_t        sb_logsectsize; + __uint32_t        sb_logsunit; + __uint32_t        sb_features2; +} xfs_sb_t; + + + + sb_magicnum + Identifies the filesystem. It's value is XFS_SB_MAGIC = 0x58465342 "XFSB". + + + sb_blocksize + The size of a basic unit of space allocation in bytes. Typically, this is 4096 (4KB) but can range from 512 to 65536 bytes. + + + sb_dblocks + Total number of blocks available for data and metadata on the filesystem. + + + sb_rblocks + Number blocks in the real-time disk device. Refer to for more information. + + + sb_rextents + Number of extents on the real-time device. + + + sb_uuid + UUID (Universally Unique ID) for the filesystem. Filesystems can be mounted by the UUID instead of device name. + + + sb_logstart + First block number for the journaling log if the log is internal (ie. not on a separate disk device). For an external log device, this will be zero (the log will also start on the first block on the log device). + + + sb_rootino + Root inode number for the filesystem. Typically, this is 128 when using a 4KB block size. + + + sb_rbmino + Bitmap inode for real-time extents. + + + sb_rsumino + Summary inode for real-time bitmap. + + + sb_rextsize + Realtime extent size in blocks. + + + sb_agblocks + Size of each AG in blocks. For the actual size of the last AG, refer to the agf_length value. + + + sb_agcount + Number of AGs in the filesystem. + + + sb_rbmblocks + Number of real-time bitmap blocks. + + + sb_logblocks + Number of blocks for the journaling log. + + + sb_versionnum + + Filesystem version number. This is a bitmask specifying the features enabled when creating the filesystem. Any disk checking tools or drivers that do not recognize any set bits must not operate upon the filesystem. Most of the flags indicate features introduced over time. 
The value must be 4 including the following flags: + + + + Flag + + + Description + + + + + + + XFS_SB_VERSION_ATTRBIT + + + Set if any inode have extended attributes. + + + + + XFS_SB_VERSION_NLINKBIT + + + Set if any inodes use 32-bit di_nlink values. + + + + + XFS_SB_VERSION_QUOTABIT + + + Quotas are enabled on the filesystem. This also brings in the various quota fields in the superblock. + + + + + XFS_SB_VERSION_ALIGNBIT + + + Set if sb_inoalignmt is used. + + + + + XFS_SB_VERSION_DALIGNBIT + + + Set if sb_unit and sb_width are used. + + + + + XFS_SB_VERSION_SHAREDBIT + + + Set if sb_shared_vn is used. + + + + + XFS_SB_VERSION_LOGV2BIT + + + Version 2 journaling logs are used. + + + + + XFS_SB_VERSION_SECTORBIT + + + Set if sb_sectsize is not 512. + + + + + XFS_SB_VERSION_EXTFLGBIT + + + Unwritten extents are used. This is always set. + + + + + XFS_SB_VERSION_DIRV2BIT + + + Version 2 directories are used. This is always set. + + + + + XFS_SB_VERSION_MOREBITSBIT + + + Set if the sb_features2 field in the superblock contains more flags. + + + + + + + + + sb_sectsize + Specifies the underlying disk sector size in bytes. Majority of the time, this is 512 bytes. This determines the minimum I/O alignment including Direct I/O. + + + sb_inodesize + Size of the inode in bytes. The default is 256 (2 inodes per standard sector) but can be made as large as 2048 bytes when creating the filesystem. + + + sb_inopblock + Number of inodes per block. This is equivalent to sb_blocksize / sb_inodesize. + + + sb_fname[12] + Name for the filesystem. This value can be used in the mount command. + + + sb_blocklog + log2 value of sb_blocksize. In other terms, sb_blocksize = 2sb_blocklog. + + + sb_sectlog + log2 value of sb_sectsize. + + + sb_inodelog + log2 value of sb_inodesize. + + + sb_inopblog + log2 value of sb_inopblock. + + + sb_agblklog + log2 value of sb_agblocks (rounded up). This value is used to generate inode numbers and absolute block numbers defined in extent maps. + + + sb_rextslog + log2 value of sb_rextents. + + + sb_inprogress + Flag specifying that the filesystem is being created. + + + sb_imax_pct + Maximum percentage of filesystem space that can be used for inodes. The default value is 25%. + + + sb_icount + Global count for number inodes allocated on the filesystem. This is only maintained in the first superblock. + + + sb_ifree + Global count of free inodes on the filesystem. This is only maintained in the first superblock. + + + sb_fdblocks + Global count of free data blocks on the filesystem. This is only maintained in the first superblock. + + + sb_frextents + Global count of free real-time extents on the filesystem. This is only maintained in the first superblock. + + + sb_uquotino + Inode for user quotas. This and the following two quota fields only apply if XFS_SB_VERSION_QUOTABIT flag is set in sb_versionnum. Refer to for more information. + + + sb_gquotino + Inode for group or project quotas. Group and Project quotas cannot be used at the same time. + + + sb_qflags + + Quota flags. It can be a combination of the following flags: + + + + Flag + + + Description + + + + + + + XFS_UQUOTA_ACCT + + + User quota accounting is enabled. + + + + + XFS_UQUOTA_ENFD + + + User quotas are enforced. + + + + + XFS_UQUOTA_CHKD + + + User quotas have been checked and updated on disk. + + + + + XFS_PQUOTA_ACCT + + + Project quota accounting is enabled. + + + + + XFS_OQUOTA_ENFD + + + Other (group/project) quotas are enforced. 
+ + + + + XFS_OQUOTA_CHKD + + + Other (group/project) quotas have been checked. + + + + + XFS_GQUOTA_ACCT + + + Group quota accounting is enabled. + + + + + + + + sb_flags + Miscellaneous flags. + + + sb_shared_vn + Reserved and must be zero ("vn" stands for version number). + + + sb_inoalignmt + Inode chunk alignment in fsblocks. + + + sb_unit + Underlying stripe or raid unit in blocks. + + + sb_width + Underlying stripe or raid width in blocks. + + + sb_dirblklog + log2 value multiplier that determines the granularity of directory block allocations in fsblocks. + + + sb_logsectlog + log2 value of the log subvolume's sector size. This is only used if the journaling log is on a separate disk device (i.e. not internal). + + + sb_logsectsize + The log's sector size in bytes if the filesystem uses an external log device. + + + sb_logsunit + The log device's stripe or raid unit size. This only applies to version 2 logs (XFS_SB_VERSION_LOGV2BIT is set in sb_versionnum). + + + sb_features2 + + Additional version flags if XFS_SB_VERSION_MOREBITSBIT is set in sb_versionnum. The currently defined additional features include: + + + XFS_SB_VERSION2_LAZYSBCOUNTBIT  (0x02): Lazy global counters. Making a filesystem with this bit set can improve performance. The global free space and inode counts are only updated in the primary superblock when the filesystem is cleanly unmounted. + + + XFS_SB_VERSION2_ATTR2BIT  (0x08): Extended attributes version 2. Making a filesystem with this optimises the inode layout of extended attributes. + + + XFS_SB_VERSION2_PARENTBIT  (0x10): Parent pointers. All inodes must have an extended attribute that points back to its parent inode. The primary purpose for this information is in backup systems. + + + + + + + + + + + + + + + + + + +xfs_db Example: + A filesystem is made on a single SATA disk with the following command: + +# mkfs.xfs -i attr=2 -n size=16384 -f /dev/sda7 +meta-data=/dev/sda7 isize=256 agcount=16, agsize=3923122 blks + = sectsz=512 attr=2 +data = bsize=4096 blocks=62769952, imaxpct=25 + = sunit=0 swidth=0 blks, unwritten=1 +naming =version 2 bsize=16384 +log =internal log bsize=4096 blocks=30649, version=1 + = sectsz=512 sunit=0 blks +realtime =none extsz=65536 blocks=0, rtextents=0 + + + + And in xfs_db, inspecting the superblock: + +xfs_db> sb +xfs_db> p +magicnum = 0x58465342 +blocksize = 4096 +dblocks = 62769952 +rblocks = 0 +rextents = 0 +uuid = 32b24036-6931-45b4-b68c-cd5e7d9a1ca5 +logstart = 33554436 +rootino = 128 +rbmino = 129 +rsumino = 130 +rextsize = 16 +agblocks = 3923122 +agcount = 16 +rbmblocks = 0 +logblocks = 30649 +versionnum = 0xb084 +sectsize = 512 +inodesize = 256 +inopblock = 16 +fname = "\000\000\000\000\000\000\000\000\000\000\000\000" +blocklog = 12 +sectlog = 9 +inodelog = 8 +inopblog = 4 +agblklog = 22 +rextslog = 0 +inprogress = 0 +imax_pct = 25 +icount = 64 +ifree = 61 +fdblocks = 62739235 +frextents = 0 +uquotino = 0 +gquotino = 0 +qflags = 0 +flags = 0 +shared_vn = 0 +inoalignmt = 2 +unit = 0 +width = 0 +dirblklog = 2 +logsectlog = 0 +logsectsize = 0 +logsunit = 0 +features2 = 8 + + + +
+ + + + +
+ AG Free Space Management + The XFS filesystem tracks free space in an allocation group using two B+trees. One B+tree tracks space by block number, the second by the size of the free space block. This scheme allows XFS to quickly find free space near a given block or of a given size. + All block numbers, indexes and counts are AG relative. +
+ AG Free Space Block + The second sector in an AG contains the information about the two free space B+trees and associated free space information for the AG. The "AG Free Space Block", also knows as the AGF, uses the following structure: + +typedef struct xfs_agf { + __be32 agf_magicnum; + __be32 agf_versionnum; + __be32 agf_seqno; + __be32 agf_length; + __be32 agf_roots[XFS_BTNUM_AGF]; + __be32 agf_spare0; + __be32 agf_levels[XFS_BTNUM_AGF]; + __be32 agf_spare1; + __be32 agf_flfirst; + __be32 agf_fllast; + __be32 agf_flcount; + __be32 agf_freeblks; + __be32 agf_longest; + __be32 agf_btreeblks; +} xfs_agf_t; + + + + + + + The rest of the bytes in the sector are zeroed. XFS_BTNUM_AGF is set to 2, index 0 for the count B+tree and index 1 for the size B+tree. + + + + agf_magicnum + Specifies the magic number for the AGF sector: "XAGF" (0x58414746). + + + agf_versionnum + Set to XFS_AGF_VERSION which is currently 1. + + + agf_seqno + Specifies the AG number for the sector. + + + agf_length + Specifies the size of the AG in filesystem blocks. For all AGs except the last, this must be equal to the superblock's sb_agblocks value. For the last AG, this could be less than the sb_agblocks value. It is this value that should be used to determine the size of the AG. + + + agf_roots + Specifies the block number for the root of the two free space B+trees. + + + agf_levels + Specifies the level or depth of the two free space B+trees. For a fresh AG, this will be one, and the "roots" will point to a single leaf of level 0. + + + agf_flfirst + Specifies the index of the first "free list" block. Free lists are covered in more detail later on. + + + agf_fllast + Specifies the index of the last "free list" block. + + + agf_flcount + Specifies the number of blocks in the "free list". + + + agf_freeblks + Specifies the current number of free blocks in the AG. + + + agf_longest + Specifies the number of blocks of longest contiguous free space in the AG. + + + agf_btreeblks + Specifies the number of blocks used for the free space B+trees. This is only used if the XFS_SB_VERSION2_LAZYSBCOUNTBIT bit is set in sb_features2. + + +
+ +
+ AG Free Space B+trees + The two Free Space B+trees store a sorted array of block offset and block counts in the leaves of the B+tree. The first B+tree is sorted by the offset, the second by the count or size. + The trees use the following header: + +typedef struct xfs_btree_sblock { + __be32 bb_magic; + __be16 bb_level; + __be16 bb_numrecs; + __be32 bb_leftsib; + __be32 bb_rightsib; +} xfs_btree_sblock_t; + + Leaves contain a sorted array of offset/count pairs which are also used for node keys: + +typedef struct xfs_alloc_rec { + __be32 ar_startblock; + __be32 ar_blockcount; +} xfs_alloc_rec_t, xfs_alloc_key_t; + + + Node pointers are an AG relative block pointer: + typedef __be32 xfs_alloc_ptr_t; + + + + As the free space tracking is AG relative, all the block numbers are only 32-bits. + + + The bb_magic value depends on the B+tree: "ABTB" (0x41425442) for the block offset B+tree, "ABTC" (0x41425443) for the block count B+tree. + + + The xfs_btree_sblock_t header is used for intermediate B+tree node as well as the leaves. + + + For a typical 4KB filesystem block size, the offset for the xfs_alloc_ptr_t array would be 0xab0 (2736 decimal). + + + There are a series of macros in xfs_btree.h for deriving the offsets, counts, maximums, etc for the B+trees used in XFS. + + + The following diagram shows a single level B+tree which consists of one leaf: + + + + 15a + + + + + With the intermediate nodes, the associated leaf pointers are stored in a separate array about two thirds into the block. The following diagram illustrates a 2-level B+tree for a free space B+tree: + + + + 15b + + + +
+ + + + + + +
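+ The 0xab0 pointer-array offset quoted above falls straight out of the structure sizes. A small sketch of the arithmetic, assuming a 4KB filesystem block (the variable names are local to the example; the real derivations live in the xfs_btree.h macros):
+
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+        const uint32_t blocksize = 4096;
+        const uint32_t hdr = 4 + 2 + 2 + 4 + 4;   /* sizeof(xfs_btree_sblock_t) = 16 */
+        const uint32_t rec = 8;                   /* xfs_alloc_rec_t / xfs_alloc_key_t */
+        const uint32_t ptr = 4;                   /* xfs_alloc_ptr_t */
+
+        uint32_t leaf_maxrecs = (blocksize - hdr) / rec;          /* records per leaf   */
+        uint32_t node_maxrecs = (blocksize - hdr) / (rec + ptr);  /* key/ptr pairs/node */
+        uint32_t ptr_offset   = hdr + node_maxrecs * rec;         /* where ptrs start   */
+
+        printf("leaf records : %u\n", leaf_maxrecs);   /* 510 */
+        printf("node entries : %u\n", node_maxrecs);   /* 340 */
+        printf("ptr offset   : 0x%x\n", ptr_offset);   /* 0xab0 (2736) */
+        return 0;
+}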
AG Free List + The AG Free List is located in the 4th sector of each AG and is known as the AGFL. It is an array of AG relative block pointers for reserved space for growing the free space B+trees. This space cannot be used for general user data including inodes, data, directories and extended attributes. + With a freshly made filesystem, 4 blocks are reserved immediately after the free space B+tree root blocks (blocks 4 to 7). As they are used up as the free space fragments, additional blocks will be reserved from the AG and added to the free list array. + As the free list array is located within a single sector, a typical device will have space for 128 elements in the array (512 bytes per sector, 4 bytes per AG relative block pointer). The actual size can be determined by using the XFS_AGFL_SIZE macro. + Active elements in the array are specified by the AGF's () agf_flfirst, agf_fllast and agf_flcount values. The array is managed as a circular list. + + + + 16 + + + + + The presence of these reserved block guarantees that the free space B+trees can be updated if any blocks are freed by extent changes in a full AG. + + xfs_db Examples: + These examples are derived from an AG that has been deliberately fragmented. + The AGF: + +xfs_db> agf <ag#> +xfs_db> p +magicnum = 0x58414746 +versionnum = 1 +seqno = 0 +length = 3923122 +bnoroot = 7 +cntroot = 83343 +bnolevel = 2 +cntlevel = 2 +flfirst = 22 +fllast = 27 +flcount = 6 +freeblks = 3654234 +longest = 3384327 +btreeblks = 0 + + In the AGFL, the active elements are from 22 to 27 inclusive which are obtained from the flfirst and fllast values from the agf in the previous example: + +xfs_db> agfl 0 +xfs_db> p +bno[0-127] = 0:4 1:5 2:6 3:7 4:83342 5:83343 6:83344 7:83345 8:83346 9:83347 + 10:4 11:5 12:80205 13:80780 14:81496 15:81766 16:83346 17:4 18:5 + 19:80205 20:82449 21:81496 22:81766 23:82455 24:80780 25:5 + 26:80205 27:83344 + + + The free space B+tree sorted by block offset, the root block is from the AGF's bnoroot value: + +xfs_db> fsblock 7 +xfs_db> type bnobt +xfs_db> p +magic = 0x41425442 +level = 1 +numrecs = 4 +leftsib = null +rightsib = null +keys[1-4] = [startblock,blockcount] + 1:[12,16] 2:[184586,3] 3:[225579,1] 4:[511629,1] +ptrs[1-4] = 1:2 2:83347 3:6 4:4 + + + Blocks 2, 83347, 6 and 4 contain the leaves for the free space B+tree by starting block. Block 2 would contain offsets 16 up to but not including 184586 while block 4 would have all offsets from 511629 to the end of the AG. + The free space B+tree sorted by block count, the root block is from the AGF's cntroot value: + +xfs_db> fsblock 83343 +xfs_db> type cntbt +xfs_db> p +magic = 0x41425443 +level = 1 +numrecs = 4 +leftsib = null +rightsib = null +keys[1-4] = [blockcount,startblock] + 1:[1,81496] 2:[1,511729] 3:[3,191875] 4:[6,184595] +ptrs[1-4] = 1:3 2:83345 3:83342 4:83346 + + + The leaf in block 3, in this example, would only contain single block counts. The offsets are sorted in ascending order if the block count is the same. + Inspecting the leaf in block 83346, we can see the largest block at the end: + +xfs_db> fsblock 83346 +xfs_db> type cntbt +xfs_db> p +magic = 0x41425443 +level = 0 +numrecs = 344 +leftsib = 83342 +rightsib = null +recs[1-344] = [startblock,blockcount] + 1:[184595,6] 2:[187573,6] 3:[187776,6] + ... + 342:[513712,755] 343:[230317,258229] 344:[538795,3384327] + + + The longest block count must be the same as the AGF's longest value. + +
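+ Because the free list is circular, walking its active entries has to wrap at the end of the array. A minimal sketch, assuming the 128-entry array of a 512 byte sector (the function and variable names are illustrative, with a few values copied from the agfl dump above):
+
+#include <stdint.h>
+#include <stdio.h>
+
+#define AGFL_SIZE 128   /* 512 byte sector / 4 byte AG-relative block pointer */
+
+/* print the active AGFL entries, wrapping as a circular list */
+static void walk_agfl(const uint32_t bno[AGFL_SIZE], uint32_t flfirst, uint32_t flcount)
+{
+        for (uint32_t i = 0; i < flcount; i++) {
+                uint32_t idx = (flfirst + i) % AGFL_SIZE;
+                printf("bno[%u] = %u\n", idx, bno[idx]);
+        }
+}
+
+int main(void)
+{
+        /* a few entries copied from the xfs_db agfl example above */
+        uint32_t bno[AGFL_SIZE] = {0};
+        bno[22] = 81766; bno[23] = 82455; bno[24] = 80780;
+        bno[25] = 5;     bno[26] = 80205; bno[27] = 83344;
+
+        walk_agfl(bno, 22, 6);   /* agf_flfirst = 22, agf_flcount = 6 */
+        return 0;
+}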
+
+ + +
+ AG Inode Management +
+ Inode Numbers
+ Inode numbers in XFS come in two forms: AG relative and absolute.
+ AG relative inode numbers always fit within 32 bits. The number of bits actually used is determined by the sum of the superblock's () sb_inopblog and sb_agblklog values. Relative inode numbers are found within the AG's inode structures.
+ Absolute inode numbers include the AG number in the high bits, above the bits used for the AG relative inode number. Absolute inode numbers are found in directory () entries.
+
+
+ 18
+
+
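+ The split between the two forms is plain bit arithmetic on the superblock's log2 fields. A small sketch using the agblklog = 22 and inopblog = 4 values from the earlier superblock example (the helper name is local to this sketch):
+
+#include <stdint.h>
+#include <stdio.h>
+
+/* number of low bits occupied by the AG-relative part of an inode number */
+static unsigned agino_bits(unsigned agblklog, unsigned inopblog)
+{
+        return agblklog + inopblog;
+}
+
+int main(void)
+{
+        unsigned agblklog = 22, inopblog = 4;    /* from the xfs_db sb example */
+        uint64_t ino = 128;                      /* rootino in the same example */
+
+        unsigned bits  = agino_bits(agblklog, inopblog);
+        uint64_t agno  = ino >> bits;                    /* AG holding the inode   */
+        uint32_t agino = ino & ((1ull << bits) - 1);     /* AG-relative inode no.  */
+        uint32_t agbno = agino >> inopblog;              /* block within the AG    */
+        uint32_t slot  = agino & ((1u << inopblog) - 1); /* inode index in block   */
+
+        printf("ino %llu -> ag %llu, agino %u (block %u, slot %u)\n",
+               (unsigned long long)ino, (unsigned long long)agno,
+               agino, agbno, slot);
+        return 0;
+}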
+
+ Inode Information + Each AG manages its own inodes. The third sector in the AG contains information about the AG's inodes and is known as the AGI. + The AGI uses the following structure: + +typedef struct xfs_agi { + __be32 agi_magicnum; + __be32 agi_versionnum; + __be32 agi_seqno + __be32 agi_length; + __be32 agi_count; + __be32 agi_root; + __be32 agi_level; + __be32 agi_freecount; + __be32 agi_newino; + __be32 agi_dirino; + __be32 agi_unlinked[64]; +} xfs_agi_t; + + + + agi_magicnum + Specifies the magic number for the AGI sector: "XAGI" (0x58414749). + + + agi_versionnum + Set to XFS_AGI_VERSION which is currently 1. + + + agi_seqno + Specifies the AG number for the sector. + + + agi_length + Specifies the size of the AG in filesystem blocks. + + + agi_count + Specifies the number of inodes allocated for the AG. + + + agi_root + Specifies the block number in the AG containing the root of the inode B+tree. + + + agi_level + Specifies the number of levels in the inode B+tree. + + + agi_freecount + Specifies the number of free inodes in the AG. + + + agi_newino + Specifies AG relative inode number most recently allocated. + + + agi_dirino + Deprecated and not used, it's always set to NULL (-1). + + + agi_unlinked[64] + Hash table of unlinked (deleted) inodes that are still being referenced. Refer to for more information. + + +
+ + +
Inode B+trees + Inodes are allocated in chunks of 64, and a B+tree is used to track these chunks of inodes as they are allocated and freed. The block containing root of the B+tree is defined by the AGI's agi_root value. + The B+tree header for the nodes and leaves use the xfs_btree_sblock structure which is the same as the header used in the AGF B+trees (): + typedef struct xfs_btree_sblock xfs_inobt_block_t; + + Leaves contain an array of the following structure: + +typedef struct xfs_inobt_rec { + __be32 ir_startino; + __be32 ir_freecount; + __be64 ir_free; +} xfs_inobt_rec_t; + + + + Nodes contain key/pointer pairs using the following types: + +typedef struct xfs_inobt_key { + __be32 ir_startino; +} xfs_inobt_key_t; +typedef __be32 xfs_inobt_ptr_t; + + + + For the leaf entries, ir_startino specifies the starting inode number for the chunk, ir_freecount specifies the number of free entries in the chuck, and the ir_free is a 64 element bit array specifying which entries are free in the chunk. + The following diagram illustrates a single level inode B+tree: + + + + 20a + + + + And a 2-level inode B+tree: + + + + 20b + + + + +xfs_db Examples: + TODO:
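+ Since ir_free is a 64-bit mask with one bit per inode in the chunk, picking a free inode out of a record is a bit scan. A hedged sketch, assuming bit 0 corresponds to ir_startino (the record values below are invented for illustration):
+
+#include <stdint.h>
+#include <stdio.h>
+
+struct inobt_rec {               /* in-core copy of xfs_inobt_rec_t */
+        uint32_t ir_startino;
+        uint32_t ir_freecount;
+        uint64_t ir_free;        /* set bit = free inode in the chunk */
+};
+
+/* return the AG-relative inode number of the first free slot, or -1 */
+static int64_t first_free_inode(const struct inobt_rec *r)
+{
+        if (r->ir_freecount == 0 || r->ir_free == 0)
+                return -1;
+        for (unsigned bit = 0; bit < 64; bit++)
+                if (r->ir_free & (1ull << bit))
+                        return (int64_t)r->ir_startino + bit;
+        return -1;
+}
+
+int main(void)
+{
+        struct inobt_rec rec = { 128, 2, 0x3ull << 62 };  /* last two slots free */
+        printf("first free inode: %lld\n", (long long)first_free_inode(&rec));
+        return 0;
+}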
+ Real-time Devices + TODO:
diff --git a/en-US/xfs-Author_Group.xml b/en-US/xfs-Author_Group.xml new file mode 100644 index 0000000..474378e --- /dev/null +++ b/en-US/xfs-Author_Group.xml @@ -0,0 +1,16 @@ + + + + + + + Dude + McDude + + My Org + Best Div in the place + + dude.mcdude@myorg.org + + diff --git a/en-US/xfs-Chapter.xml b/en-US/xfs-Chapter.xml new file mode 100644 index 0000000..1be5904 --- /dev/null +++ b/en-US/xfs-Chapter.xml @@ -0,0 +1,9 @@ + + + +
+Journaling Log + TODO: +
+ diff --git a/en-US/xfs-Common_XFS_Types.xml b/en-US/xfs-Common_XFS_Types.xml new file mode 100644 index 0000000..1abb3f8 --- /dev/null +++ b/en-US/xfs-Common_XFS_Types.xml @@ -0,0 +1,68 @@ + + + +
+ Common XFS Types + + All the following XFS types can be found in xfs_types.h. NULL values are always -1 on disk (ie. all bits for the value set to one). + + + + xfs_ino_t + Unsigned 64 bit absolute inode number (). + + + xfs_off_t + Signed 64 bit file offset. + + + xfs_daddr_t + Signed 64 bit disk address. + + + xfs_agnumber_t + Unsigned 32 bit AG () number. + + + xfs_agblock_t + Unsigned 32 bit AG relative block number. + + + xfs_extlen_t + Unsigned 32 bit extent () length in blocks. + + + xfs_extnum_t + Signed 32 bit number of extents in a file. + + + xfs_dablk_t + Unsigned 32 bit block number for directories () and extended attributes (). + + + xfs_dahash_t + Unsigned 32 bit hash of a directory file name or extended attribute name. + + + xfs_dfsbno_t + Unsigned 64 bit filesystem block number combining AG () number and block offset into the AG. + + + xfs_drfsbno_t + Unsigned 64 bit raw filesystem block number. + + + xfs_drtbno_t + Unsigned 64 bit extent number in the real-time () sub-volume. + + + xfs_dfiloff_t + Unsigned 64 bit block offset into a file. + + + xfs_dfilblks_t + Unsigned 64 bit block count for a file. + + +
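+ For orientation only, these map naturally onto fixed-width C types. A rough <stdint.h> rendering, not the real xfs_types.h (which also carries on-disk endian annotations):
+
+#include <stdint.h>
+
+typedef uint64_t xfs_ino_t;       /* absolute inode number        */
+typedef int64_t  xfs_off_t;       /* file offset                  */
+typedef int64_t  xfs_daddr_t;     /* disk address                 */
+typedef uint32_t xfs_agnumber_t;  /* AG number                    */
+typedef uint32_t xfs_agblock_t;   /* AG-relative block number     */
+typedef uint32_t xfs_extlen_t;    /* extent length in blocks      */
+typedef int32_t  xfs_extnum_t;    /* number of extents in a file  */
+typedef uint32_t xfs_dablk_t;     /* dir/attr block number        */
+typedef uint32_t xfs_dahash_t;    /* dir/attr name hash           */
+typedef uint64_t xfs_dfsbno_t;    /* absolute filesystem block    */
+typedef uint64_t xfs_drfsbno_t;   /* raw filesystem block         */
+typedef uint64_t xfs_drtbno_t;    /* real-time extent number      */
+typedef uint64_t xfs_dfiloff_t;   /* block offset into a file     */
+typedef uint64_t xfs_dfilblks_t;  /* block count for a file       */
+
+/* NULL values are always -1 on disk (all bits set); illustrative macro name */
+#define XFS_NULL64 ((uint64_t)-1)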
diff --git a/en-US/xfs-Data_Extents.xml b/en-US/xfs-Data_Extents.xml new file mode 100644 index 0000000..ed84912 --- /dev/null +++ b/en-US/xfs-Data_Extents.xml @@ -0,0 +1,265 @@ + + + +
+Data Extents + XFS allocates space for a file using extents: starting location and length. XFS extents also specify the file's logical starting offset for a file. This allows a files extent map to automatically support sparse files (i.e. "holes" in the file). A flag is also used to specify if the extent has been preallocated and not yet been written to (unwritten extent). + A file can have more than one extent if one chunk of contiguous disk space is not available for the file. As a file grows, the XFS space allocator will attempt to keep space contiguous and merge extents. If more than one file is being allocated space in the same AG at the same time, multiple extents for the files will occur as the extents get interleaved. The effect of this can vary depending on the extent allocator used in the XFS driver. + An extent is 128 bits in size and uses the following packed layout: + + + + 30 + + + + The extent is represented by the xfs_bmbt_rec_t structure which uses a big endian format on-disk. In-core management of extents use the xfs_bmbt_irec_t structure which is the unpacked version of xfs_bmbt_rec_t: + + +typedef struct xfs_bmbt_irec { + xfs_fileoff_t br_startoff; + xfs_fsblock_t br_startblock; + xfs_filblks_t br_blockcount; + xfs_exntst_t br_state; +} xfs_bmbt_irec_t; + + + + + The extent br_state field uses the following enum declaration: + + +typedef enum { + XFS_EXT_NORM, + XFS_EXT_UNWRITTEN, + XFS_EXT_INVALID +} xfs_exntst_t; + + + + Some other points about extents: + + + The xfs_bmbt_rec_32_t and xfs_bmbt_rec_64_t structures are effectively the same as xfs_bmbt_rec_t, just different representations of the same 128 bits in on-disk big endian format. + + + When a file is created and written to, XFS will endeavour to keep the extents within the same AG as the inode. It may use a different AG if the AG is busy or there is no space left in it. + + + If a file is zero bytes long, it will have no extents, di_nblocks and di_nexents will be zero. Any file with data will have at least one extent, and each extent can use from 1 to over 2 million blocks (221) on the filesystem. For a default 4KB block size filesystem, a single extent can be up to 8GB in length. + + + The following two subsections cover the two methods of storing extent information for a file. The first is the fastest and simplest where the inode completely contains an extent array to the file's data. The second is slower and more complex B+tree which can handle thousands to millions of extents efficiently. + + + +
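+ On disk each record is two big-endian 64-bit words, so unpacking one is a matter of shifts and masks. A sketch of the decode, assuming the usual split of 1 flag bit, 54 bits of logical offset, 52 bits of start block and 21 bits of length; the values packed below are the first extent from the xfs_db example in the next section:
+
+#include <stdint.h>
+#include <stdio.h>
+
+struct irec {                    /* unpacked form, like xfs_bmbt_irec_t */
+        uint64_t startoff;
+        uint64_t startblock;
+        uint32_t blockcount;
+        int      unwritten;
+};
+
+/* decode the two (already byte-swapped) words of a packed extent record */
+static struct irec unpack_extent(uint64_t l0, uint64_t l1)
+{
+        struct irec r;
+        r.unwritten  = (int)(l0 >> 63);
+        r.startoff   = (l0 >> 9) & ((1ull << 54) - 1);
+        r.startblock = ((l0 & 0x1ff) << 43) | (l1 >> 21);
+        r.blockcount = (uint32_t)(l1 & ((1u << 21) - 1));
+        return r;
+}
+
+int main(void)
+{
+        /* pack the extent [startoff 0, startblock 25356, count 2025, normal] */
+        uint64_t l0 = ((uint64_t)0 << 63) | (0ull << 9) | (25356ull >> 43);
+        uint64_t l1 = (25356ull << 21) | 2025;
+
+        struct irec r = unpack_extent(l0, l1);
+        printf("[%llu,%llu,%u,%d]\n", (unsigned long long)r.startoff,
+               (unsigned long long)r.startblock, r.blockcount, r.unwritten);
+        return 0;
+}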
+ Extent List + Local extents are where the entire extent array is stored within the inode's data fork itself. This is the most optimal in terms of speed and resource consumption. The trade-off is the file can only have a few extents before the inode runs out of space. + The "data fork" of the inode contains an array of extents, the size of the array determined by the inode's di_nextents value. + + + + + + 32 + + + + + + The number of extents that can fit in the inode depends on the inode size and di_forkoff. For a default 256 byte inode with no extended attributes, a file can up to 19 extents with this format. Beyond this, extents have to use the B+tree format. + +xfs_db Example: + An 8MB file with one extent: + +xfs_db> inode <inode#> +xfs_db> p +core.magic = 0x494e +core.mode = 0100644 +core.version = 1 +core.format = 2 (extents) +... +core.size = 8294400 +core.nblocks = 2025 +core.extsize = 0 +core.nextents = 1 +core.naextents = 0 +core.forkoff = 0 +... +u.bmx[0] = [startoff,startblock,blockcount,extentflag] + 0:[0,25356,2025,0] + + + A 24MB file with three extents: + +xfs_db> inode <inode#> +xfs_db> p +... +core.format = 2 (extents) +... +core.size = 24883200 +core.nblocks = 6075 +core.nextents = 3 +... +u.bmx[0-2] = [startoff,startblock,blockcount,extentflag] + 0:[0,27381,2025,0] + 1:[2025,31431,2025,0] + 2:[4050,35481,2025,0] + + + Raw disk version of the inode with the third extent highlighted (di_u always starts at offset 0x64): + + + code33a + + We can expand the highlighted section into the following bit array from MSB to LSB with the file offset and the block count highlighted: + + + code33b + + + + A 4MB file with two extents and a hole in the middle, the first extent containing 64KB of data, the second about 4MB in containing 32KB (write 64KB, lseek ~4MB, write 32KB operations): + +xfs_db> inode <inode#> +xfs_db> p +... +core.format = 2 (extents) +... +core.size = 4063232 +core.nblocks = 24 +core.nextents = 2 +... +u.bmx[0-1] = [startoff,startblock,blockcount,extentflag] + 0:[0,37506,16,0] + 1:[984,37522,8,0] + + + +
+ + + +
B+tree Extent List + Beyond the simple extent array, to efficiently manage large extent maps, XFS uses B+trees. The root node of the B+tree is stored in the inode's data fork. All block pointers for extent B+trees are 64-bit absolute block numbers. + For a single level B+tree, the root node points to the B+tree's leaves. Each leaf occupies one filesystem block and contains a header and an array of extents sorted by the file's offset. Each leaf has left and right (or backward and forward) block pointers to adjacent leaves. For a standard 4KB filesystem block, a leaf can contain up to 254 extents before a B+tree rebalance is triggered. + For a multi-level B+tree, the root node points to other B+tree nodes which eventually point to the extent leaves. B+tree keys are based on the file's offset. The nodes at each level in the B+tree point to the adjacent nodes. + The base B+tree node is used for extents, directories and extended attributes. The structures used for inode's B+tree root are: + + +typedef struct xfs_bmdr_block { + __be16 bb_level; + __be16 bb_numrecs; +} xfs_bmdr_block_t; +typedef struct xfs_bmbt_key { + xfs_dfiloff_t br_startoff; +} xfs_bmbt_key_t, xfs_bmdr_key_t; +typedef xfs_dfsbno_t xfs_bmbt_ptr_t, xfs_bmdr_ptr_t; + + + + + + + On disk, the B+tree node starts with the xfs_bmbr_block_t header followed by an array of xfs_bmbt_key_t values and then an array of xfs_bmbt_ptr_t values. The size of both arrays is specified by the header's bb_numrecs value. + + + The root node in the inode can only contain up to 19 key/pointer pairs for a standard 256 byte inode before a new level of nodes is added between the root and the leaves. This will be less if di_forkoff is not zero (i.e. attributes are in use on the inode). + + + The subsequent nodes and leaves of the B+tree use the xfs_bmbt_block_t declaration: + + +typedef struct xfs_btree_lblock xfs_bmbt_block_t; +typedef struct xfs_btree_lblock { + __be32 bb_magic; + __be16 bb_level; + __be16 bb_numrecs; + __be64 bb_leftsib; + __be64 bb_rightsib; +} xfs_btree_lblock_t; + + + + + For intermediate nodes, the data following xfs_bmbt_block_t is the same as the root node: array of xfs_bmbt_key_t value followed by an array of xfs_bmbt_ptr_t values that starts halfway through the block (offset 0x808 for a 4096 byte filesystem block). + + + For leaves, an array of xfs_bmbt_rec_t extents follow the xfs_bmbt_block_t header. + + + Nodes and leaves use the same value for bb_magic: + + + +#define XFS_BMAP_MAGIC        0x424d4150        /* 'BMAP' */ + + + + The bb_level value determines if the node is an intermediate node or a leaf. Leaves have a bb_level of zero, nodes are one or greater. + + + Intermediate nodes, like leaves, can contain up to 254 pointers to leaf blocks for a standard 4KB filesystem block size as both the keys and pointers are 64 bits in size. + + + The following diagram illustrates a single level extent B+tree: + + + + + + + + + + 35 + + + + + + + + + + + The following diagram illustrates a two level extent B+tree: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + xfs_db Example: + TODO:
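+ The 0x808 offset and the 254-entry limits quoted above also follow directly from the structure sizes, just as in the short-form case earlier:
+
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+        const uint32_t blocksize = 4096;
+        const uint32_t hdr = 4 + 2 + 2 + 8 + 8;   /* sizeof(xfs_btree_lblock_t) = 24 */
+        const uint32_t rec = 16;                  /* xfs_bmbt_rec_t (128 bits)       */
+        const uint32_t key = 8, ptr = 8;          /* xfs_bmbt_key_t, xfs_bmbt_ptr_t  */
+
+        uint32_t leaf_maxrecs = (blocksize - hdr) / rec;          /* 254 extents  */
+        uint32_t node_maxrecs = (blocksize - hdr) / (key + ptr);  /* 254 key/ptrs */
+        uint32_t ptr_offset   = hdr + node_maxrecs * key;         /* 0x808 (2056) */
+
+        printf("extents per leaf : %u\n", leaf_maxrecs);
+        printf("entries per node : %u\n", node_maxrecs);
+        printf("ptr array offset : 0x%x\n", ptr_offset);
+        return 0;
+}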
diff --git a/en-US/xfs-Directories.xml b/en-US/xfs-Directories.xml new file mode 100644 index 0000000..11b8c87 --- /dev/null +++ b/en-US/xfs-Directories.xml @@ -0,0 +1,1132 @@ + + + +
+ Directories + + + Only v2 directories covered here. v1 directories are obsolete. + + + The size of a "directory block" is defined by the superblock's () sb_dirblklog value. The size in bytes = sb_blocksize * 2sb_dirblklog. For example, if sb_blocksize = 4096, sb_dirblklog = 2, the directory block size is 16384 bytes. Directory blocks are always allocated in multiples based on sb_dirblklog.  Directory blocks cannot be more that 65536 bytes in size. + + + Note: the term "block" in this section will refer to directory blocks, not filesystem blocks unless otherwise specified. + + + + + + All directory entries contain the following "data": + + + + + Entry's name (counted string consisting of a single byte namelen followed by name consisting of an array of 8-bit chars without a NULL terminator). + + + + + Entry's absolute inode number (), which are always 64 bits (8 bytes) in size except a special case for shortform directories. + + + + + + An offset or tag used for iterative readdir calls. + + + + + + All non-shortform directories also contain two additional structures: "leaves" and "freespace indexes". + + + + Leaves contain the sorted hashed name value (xfs_da_hashname() in xfs_da_btree.c) and associated "address" which points to the effective offset into the directory's data structures. Leaves are used to optimise lookup operations. + + + + + Freespace indexes contain free space/empty entry tracking for quickly finding an appropriately sized location for new entries. They maintain the largest free space for each "data" block. + + + + + + + A few common types are used for the directory structures: + +typedef __uint16_t xfs_dir2_data_off_t; +typedef __uint32_t xfs_dir2_dataptr_t; + + + + + + +
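+ Since the directory block size is derived from two superblock fields, the arithmetic is worth spelling out once (values from the example above; this is illustration, not library code):
+
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+        uint32_t sb_blocksize = 4096;
+        uint8_t  sb_dirblklog = 2;
+
+        /* directory block size = sb_blocksize * 2^sb_dirblklog, never above 65536 */
+        uint32_t dirblksize = sb_blocksize << sb_dirblklog;
+        printf("directory block size = %u bytes\n", dirblksize);   /* 16384 */
+        return 0;
+}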
+ Shortform Directories + + + Directory entries are stored within the inode. + + + Only data stored is the name, inode # and offset, no "leaf" or "freespace index" information is required as an inode can only store a few entries. + + + "." is not stored (as it's in the inode itself), and ".." is a dedicated parent field in the header. + + + The number of directories that can be stored in an inode depends on the inode size (), the number of entries, the length of the entry names and extended attribute data. + + + Once the number of entries exceed the space available in the inode, the format is converted to a "Block Directory". + + + Shortform directory data is packed as tightly as possible on the disk with the remaining space zeroed: + +typedef struct xfs_dir2_sf { + xfs_dir2_sf_hdr_t hdr; + xfs_dir2_sf_entry_t list[1]; +} xfs_dir2_sf_t; +typedef struct xfs_dir2_sf_hdr { + __uint8_t count; + __uint8_t i8count; + xfs_dir2_inou_t parent; +} xfs_dir2_sf_hdr_t; +typedef struct xfs_dir2_sf_entry { + __uint8_t namelen; + xfs_dir2_sf_off_t offset; + __uint8_t name[1]; + xfs_dir2_inou_t inumber; +} xfs_dir2_sf_entry_t; + + + + + 39 + + + + + + + + + Inode numbers are stored using 4 or 8 bytes depending on whether all the inode numbers for the directory fit in 4 bytes (32 bits) or not. If all inode numbers fit in 4 bytes, the header's count value specifies the number of entries in the directory and i8count will be zero. If any inode number exceeds 4 bytes, all inode numbers will be 8 bytes in size and the header's i8count value specifies the number of entries and count will be zero. The following union covers the shortform inode number structure: + + typedef struct { __uint8_t i[8]; } xfs_dir2_ino8_t; +typedef struct { __uint8_t i[4]; } xfs_dir2_ino4_t; +typedef union { + xfs_dir2_ino8_t i8; + xfs_dir2_ino4_t i4; +} xfs_dir2_inou_t; + + + + + + + +xfs_db Example: +A directory is created with 4 files, all inode numbers fitting within 4 bytes: + +xfs_db> inode <inode#> +xfs_db> p +core.magic = 0x494e +core.mode = 040755 +core.version = 1 +core.format = 1 (local) +core.nlinkv1 = 2 +... +core.size = 94 +core.nblocks = 0 +core.extsize = 0 +core.nextents = 0 +... +u.sfdir2.hdr.count = 4 +u.sfdir2.hdr.i8count = 0 +u.sfdir2.hdr.parent.i4 = 128 /* parent = root inode */ +u.sfdir2.list[0].namelen = 15 +u.sfdir2.list[0].offset = 0x30 +u.sfdir2.list[0].name = "frame000000.tst" +u.sfdir2.list[0].inumber.i4 = 25165953 +u.sfdir2.list[1].namelen = 15 +u.sfdir2.list[1].offset = 0x50 +u.sfdir2.list[1].name = "frame000001.tst" +u.sfdir2.list[1].inumber.i4 = 25165954 +u.sfdir2.list[2].namelen = 15 +u.sfdir2.list[2].offset = 0x70 +u.sfdir2.list[2].name = "frame000002.tst" +u.sfdir2.list[2].inumber.i4 = 25165955 +u.sfdir2.list[3].namelen = 15 +u.sfdir2.list[3].offset = 0x90 +u.sfdir2.list[3].name = "frame000003.tst" +u.sfdir2.list[3].inumber.i4 = 25165956 + + + The raw data on disk with the first entry highlighted. The six byte header precedes the first entry: + + + code40 + + Next, an entry is deleted (frame000001.tst), and any entries after the deleted entry are moved or compacted to "cover" the hole: + +xfs_db> inode <inode#> +xfs_db> p +core.magic = 0x494e +core.mode = 040755 +core.version = 1 +core.format = 1 (local) +core.nlinkv1 = 2 +... +core.size = 72 +core.nblocks = 0 +core.extsize = 0 +core.nextents = 0 +... 
+u.sfdir2.hdr.count = 3 +u.sfdir2.hdr.i8count = 0 +u.sfdir2.hdr.parent.i4 = 128 +u.sfdir2.list[0].namelen = 15 +u.sfdir2.list[0].offset = 0x30 +u.sfdir2.list[0].name = "frame000000.tst" +u.sfdir2.list[0].inumber.i4 = 25165953 +u.sfdir2.list[1].namelen = 15 +u.sfdir2.list[1].offset = 0x70 +u.sfdir2.list[1].name = "frame000002.tst" +u.sfdir2.list[1].inumber.i4 = 25165955 +u.sfdir2.list[2].namelen = 15 +u.sfdir2.list[2].offset = 0x90 +u.sfdir2.list[2].name = "frame000003.tst" +u.sfdir2.list[2].inumber.i4 = 25165956 + + + Raw disk data, the space beyond the shortform entries is invalid and could be non-zero: + +xfs_db> type text +xfs_db> p +00: 49 4e 41 ed 01 01 00 02 00 00 00 00 00 00 00 00 INA............. +10: 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 03 ................ +20: 44 b2 45 a2 09 fd e4 50 44 b2 45 a3 12 ee b5 d0 D.E....PD.E..... +30: 44 b2 45 a3 12 ee b5 d0 00 00 00 00 00 00 00 48 D.E............H +40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ +50: 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 ................ +60: ff ff ff ff 03 00 00 00 00 80 0f 00 30 66 72 61 ............0fra +70: 6d 65 30 30 30 30 30 30 2e 74 73 74 01 80 00 81 me000000.tst.... +80: 0f 00 70 66 72 61 6d 65 30 30 30 30 30 32 2e 74 ..pframe000002.t +90: 73 74 01 80 00 83 0f 00 90 66 72 61 6d 65 30 30 st.......frame00 +a0: 30 30 30 33 2e 74 73 74 01 80 00 84 0f 00 90 66 0003.tst.......f +b0: 72 61 6d 65 30 30 30 30 30 33 2e 74 73 74 01 80 rame000003.tst.. +c0: 00 84 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ + + TODO: 8-byte inode number example
+ + + +
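+ Whether the packed entries carry 4 or 8 byte inode numbers is decided purely by the header, as described above. A small sketch of that decision (structure abbreviated to the two relevant fields):
+
+#include <stdint.h>
+#include <stdio.h>
+
+struct sf_hdr {                  /* like xfs_dir2_sf_hdr_t, parent omitted */
+        uint8_t count;           /* entries, when inode numbers are 4 bytes */
+        uint8_t i8count;         /* entries, when inode numbers are 8 bytes */
+};
+
+/* how many entries the shortform directory holds */
+static unsigned sf_entries(const struct sf_hdr *h)
+{
+        return h->i8count ? h->i8count : h->count;
+}
+
+/* how many bytes each stored inode number occupies */
+static unsigned sf_inosize(const struct sf_hdr *h)
+{
+        return h->i8count ? 8 : 4;
+}
+
+int main(void)
+{
+        struct sf_hdr small = { 4, 0 };   /* the 4-entry example above       */
+        struct sf_hdr big   = { 0, 4 };   /* same entries with >32-bit inodes */
+
+        printf("%u entries, %u-byte inode numbers\n", sf_entries(&small), sf_inosize(&small));
+        printf("%u entries, %u-byte inode numbers\n", sf_entries(&big), sf_inosize(&big));
+        return 0;
+}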
+ Block Directories + When the shortform directory space exceeds the space in an inode, the directory data is moved into a new single directory block outside the inode. The inode's format is changed from "local" to "extent". Following is a list of points about block directories. + + + All directory data is stored within the one directory block, including "." and ".." entries which are mandatory. + + + The block also contains "leaf" and "freespace index " information. + + + The location of the block is defined by the inode's in-core extent list (): the di_u.u_bmx[0] value. The file offset in the extent must always be zero and the length = (directory block size / filesystem block size). The block number points to the filesystem block containing the directory data. + + + Block directory data is stored in the following structures: + +#define XFS_DIR2_DATA_FD_COUNT 3 +typedef struct xfs_dir2_block { + xfs_dir2_data_hdr_t hdr; + xfs_dir2_data_union_t u[1]; + xfs_dir2_leaf_entry_t leaf[1]; + xfs_dir2_block_tail_t tail; +} xfs_dir2_block_t; +typedef struct xfs_dir2_data_hdr { + __uint32_t magic; + xfs_dir2_data_free_t bestfree[XFS_DIR2_DATA_FD_COUNT]; +} xfs_dir2_data_hdr_t; +typedef struct xfs_dir2_data_free { + xfs_dir2_data_off_t offset; + xfs_dir2_data_off_t length; +} xfs_dir2_data_free_t; +typedef union { + xfs_dir2_data_entry_t entry; + xfs_dir2_data_unused_t unused; +} xfs_dir2_data_union_t; +typedef struct xfs_dir2_data_entry { + xfs_ino_t inumber; + __uint8_t namelen; + __uint8_t name[1]; + xfs_dir2_data_off_t tag; +} xfs_dir2_data_entry_t; +typedef struct xfs_dir2_data_unused { + __uint16_t freetag; /* 0xffff */ + xfs_dir2_data_off_t length; + xfs_dir2_data_off_t tag; +} xfs_dir2_data_unused_t; +typedef struct xfs_dir2_leaf_entry { + xfs_dahash_t hashval; + xfs_dir2_dataptr_t address; +} xfs_dir2_leaf_entry_t; +typedef struct xfs_dir2_block_tail { + __uint32_t count; + __uint32_t stale; +} xfs_dir2_block_tail_t; + + + + + + + + 43 + + + + + + The tag in the xfs_dir2_data_entry_t structure stores its offset from the start of the block. + + + Start of a free space region is marked with the xfs_dir2_data_unused_t structure where the freetag is 0xffff. The freetag and length overwrites the inumber for an entry. The tag is located at length - sizeof(tag) from the start of the unused entry on-disk. + + + The bestfree array in the header points to as many as three of the largest spaces of free space within the block for storing new entries sorted by largest to third largest. If there are less than 3 empty regions, the remaining bestfree elements are zeroed. The offset specifies the offset from the start of the block in bytes, and the length specifies the size of the free space in bytes. The location each points to must contain the above xfs_dir2_data_unused_t structure. As a block cannot exceed 64KB in size, each is a 16-bit value. bestfree is used to optimise the time required to locate space to create an entry. It saves scanning through the block to find a location suitable for every entry created. + + + The tail structure specifies the number of elements in the leaf array and the number of stale entries in the array. The tail is always located at the end of the block. The leaf data immediately precedes the tail structure. + + + The leaf array, which grows from the end of the block just before the tail structure, contains an array of hash/address pairs for quickly looking up a name by a hash value. Hash values are covered by the introduction to directories. 
The address on-disk is the offset into the block divided by 8 (XFS_DIR2_DATA_ALIGN). Hash/address pairs are stored on disk to optimise lookup speed for large directories. If they were not stored, the hashes have to be calculated for all entries each time a lookup occurs in a directory. + + + + +xfs_db Example: + A directory is created with 8 entries, directory block size = filesystem block size: + +xfs_db> sb 0 +xfs_db> p +magicnum = 0x58465342 +blocksize = 4096 +... +dirblklog = 0 +... +xfs_db> inode <inode#> +xfs_db> p +core.magic = 0x494e +core.mode = 040755 +core.version = 1 +core.format = 2 (extents) +core.nlinkv1 = 2 +... +core.size = 4096 +core.nblocks = 1 +core.extsize = 0 +core.nextents = 1 +... +u.bmx[0] = [startoff,startblock,blockcount,extentflag] 0:[0,2097164,1,0] + + + Go to the "startblock" and show the raw disk data: + +xfs_db> dblock 0 +xfs_db> type text +xfs_db> p +000: 58 44 32 42 01 30 0e 78 00 00 00 00 00 00 00 00 XD2B.0.x........ +010: 00 00 00 00 02 00 00 80 01 2e 00 00 00 00 00 10 ................ +020: 00 00 00 00 00 00 00 80 02 2e 2e 00 00 00 00 20 ................ +030: 00 00 00 00 02 00 00 81 0f 66 72 61 6d 65 30 30 .........frame00 +040: 30 30 30 30 2e 74 73 74 80 8e 59 00 00 00 00 30 0000.tst..Y....0 +050: 00 00 00 00 02 00 00 82 0f 66 72 61 6d 65 30 30 .........frame00 +060: 30 30 30 31 2e 74 73 74 d0 ca 5c 00 00 00 00 50 0001.tst.......P +070: 00 00 00 00 02 00 00 83 0f 66 72 61 6d 65 30 30 .........frame00 +080: 30 30 30 32 2e 74 73 74 00 00 00 00 00 00 00 70 0002.tst.......p +090: 00 00 00 00 02 00 00 84 0f 66 72 61 6d 65 30 30 .........frame00 +0a0: 30 30 30 33 2e 74 73 74 00 00 00 00 00 00 00 90 0003.tst........ +0b0: 00 00 00 00 02 00 00 85 0f 66 72 61 6d 65 30 30 .........frame00 +0c0: 30 30 30 34 2e 74 73 74 00 00 00 00 00 00 00 b0 0004.tst........ +0d0: 00 00 00 00 02 00 00 86 0f 66 72 61 6d 65 30 30 .........frame00 +0e0: 30 30 30 35 2e 74 73 74 00 00 00 00 00 00 00 d0 0005.tst........ +0f0: 00 00 00 00 02 00 00 87 0f 66 72 61 6d 65 30 30 .........frame00 +100: 30 30 30 36 2e 74 73 74 00 00 00 00 00 00 00 f0 0006.tst........ +110: 00 00 00 00 02 00 00 88 0f 66 72 61 6d 65 30 30 .........frame00 +120: 30 30 30 37 2e 74 73 74 00 00 00 00 00 00 01 10 0007.tst........ +130: ff ff 0e 78 00 00 00 00 00 00 00 00 00 00 00 00 ...x............ + + The "leaf" and "tail" structures are stored at the end of the block, so as the directory grows, the middle is filled in: + +fa0: 00 00 00 00 00 00 01 30 00 00 00 2e 00 00 00 02 .......0........ +fb0: 00 00 17 2e 00 00 00 04 83 a0 40 b4 00 00 00 0e ................ +fc0: 93 a0 40 b4 00 00 00 12 a3 a0 40 b4 00 00 00 06 ................ +fd0: b3 a0 40 b4 00 00 00 0a c3 a0 40 b4 00 00 00 1e ................ +fe0: d3 a0 40 b4 00 00 00 22 e3 a0 40 b4 00 00 00 16 ................ +ff0: f3 a0 40 b4 00 00 00 1a 00 00 00 0a 00 00 00 00 ................ + + + In a readable format: + +xfs_db> type dir2 +xfs_db> p +bhdr.magic = 0x58443242 +bhdr.bestfree[0].offset = 0x130 +bhdr.bestfree[0].length = 0xe78 +bhdr.bestfree[1].offset = 0 +bhdr.bestfree[1].length = 0 +bhdr.bestfree[2].offset = 0 +bhdr.bestfree[2].length = 0 +bu[0].inumber = 33554560 +bu[0].namelen = 1 +bu[0].name = "." +bu[0].tag = 0x10 +bu[1].inumber = 128 +bu[1].namelen = 2 +bu[1].name = ".." +bu[1].tag = 0x20 +bu[2].inumber = 33554561 +bu[2].namelen = 15 +bu[2].name = "frame000000.tst" +bu[2].tag = 0x30 +bu[3].inumber = 33554562 +bu[3].namelen = 15 +bu[3].name = "frame000001.tst" +bu[3].tag = 0x50 +... 
+bu[8].inumber = 33554567 +bu[8].namelen = 15 +bu[8].name = "frame000006.tst" +bu[8].tag = 0xf0 +bu[9].inumber = 33554568 +bu[9].namelen = 15 +bu[9].name = "frame000007.tst" +bu[9].tag = 0x110 +bu[10].freetag = 0xffff +bu[10].length = 0xe78 +bu[10].tag = 0x130 +bleaf[0].hashval = 0x2e +bleaf[0].address = 0x2 +bleaf[1].hashval = 0x172e +bleaf[1].address = 0x4 +bleaf[2].hashval = 0x83a040b4 +bleaf[2].address = 0xe +... +bleaf[8].hashval = 0xe3a040b4 +bleaf[8].address = 0x16 +bleaf[9].hashval = 0xf3a040b4 +bleaf[9].address = 0x1a +btail.count = 10 +btail.stale = 0 + + + + Note that with block directories, all xfs_db fields are preceded with "b". + + + For a simple lookup example, the hash of frame000000.tst is 0xb3a040b4. Looking up that value, we get an address of 0x6. Multiply that by 8, it becomes offset 0x30 and the inode at that point is 33554561. + When we remove an entry from the middle (frame000004.tst), we can see how the freespace details are adjusted: + +bhdr.magic = 0x58443242 +bhdr.bestfree[0].offset = 0x130 +bhdr.bestfree[0].length = 0xe78 +bhdr.bestfree[1].offset = 0xb0 +bhdr.bestfree[1].length = 0x20 +bhdr.bestfree[2].offset = 0 +bhdr.bestfree[2].length = 0 +... +bu[5].inumber = 33554564 +bu[5].namelen = 15 +bu[5].name = "frame000003.tst" +bu[5].tag = 0x90 +bu[6].freetag = 0xffff +bu[6].length = 0x20 +bu[6].tag = 0xb0 +bu[7].inumber = 33554566 +bu[7].namelen = 15 +bu[7].name = "frame000005.tst" +bu[7].tag = 0xd0 +... +bleaf[7].hashval = 0xd3a040b4 +bleaf[7].address = 0x22 +bleaf[8].hashval = 0xe3a040b4 +bleaf[8].address = 0 +bleaf[9].hashval = 0xf3a040b4 +bleaf[9].address = 0x1a +btail.count = 10 +btail.stale = 1 + + A new "bestfree" value is added for the entry, the start of the entry is marked as unused with 0xffff (which overwrites the inode number for an actual entry), and the length of the space. The tag remains intact at the offset+length - sizeof(tag). The address for the hash is also cleared. The affected areas are highlighted below: + + + code46 + + +
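+ The hash/address pairs only make sense alongside the hash function itself. The following stand-alone sketch mirrors xfs_da_hashname() from xfs_da_btree.c (it reproduces the 0x2e and 0x172e values for "." and ".." seen in the leaf array) and shows the address-to-offset step used in the lookup example; treat it as illustrative rather than authoritative:
+
+#include <stdint.h>
+#include <stdio.h>
+#include <string.h>
+
+static uint32_t rol32(uint32_t x, unsigned n)
+{
+        return (x << n) | (x >> (32 - n));
+}
+
+/* hash a directory entry name, four characters at a time */
+static uint32_t da_hashname(const uint8_t *name, int namelen)
+{
+        uint32_t hash;
+
+        for (hash = 0; namelen >= 4; namelen -= 4, name += 4)
+                hash = (name[0] << 21) ^ (name[1] << 14) ^ (name[2] << 7) ^
+                       (name[3] << 0) ^ rol32(hash, 7 * 4);
+        switch (namelen) {
+        case 3:
+                return (name[0] << 14) ^ (name[1] << 7) ^ name[2] ^ rol32(hash, 7 * 3);
+        case 2:
+                return (name[0] << 7) ^ name[1] ^ rol32(hash, 7 * 2);
+        case 1:
+                return name[0] ^ rol32(hash, 7 * 1);
+        default:
+                return hash;
+        }
+}
+
+int main(void)
+{
+        const char *names[] = { ".", "..", "frame000000.tst" };
+        for (int i = 0; i < 3; i++)
+                printf("hash(%s) = 0x%x\n", names[i],
+                       da_hashname((const uint8_t *)names[i], (int)strlen(names[i])));
+
+        /* a leaf address is the byte offset into the block divided by 8 */
+        uint32_t address = 0x6;
+        printf("entry offset = 0x%x\n", address * 8);   /* 0x30, as in the example */
+        return 0;
+}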
+ + + + + +
+ + Leaf Directories + Once a Block Directory () has filled the block, the directory data is changed into a new format. It still uses extents () and the same basic structures, but the "data" and "leaf" are split up into their own extents. The "leaf" information only occupies one extent. As "leaf" information is more compact than "data" information, more than one "data" extent is common. + + + Block to Leaf conversions retain the existing block for the data entries and allocate a new block for the leaf and freespace index information. + + + As with all directories, data blocks must start at logical offset zero. + + + The "leaf" block has a special offset defined by XFS_DIR2_LEAF_OFFSET. Currently, this is 32GB and in the extent view, a block offset of 32GB/sb_blocksize. On a 4KB block filesystem, this is 0x800000 (8388608 decimal). + + + The "data" extents have a new header (no "leaf" data): + +typedef struct xfs_dir2_data { + xfs_dir2_data_hdr_t hdr; + xfs_dir2_data_union_t u[1]; +} xfs_dir2_data_t; + + + + + The "leaf" extent uses the following structures: + +typedef struct xfs_dir2_leaf { + xfs_dir2_leaf_hdr_t hdr; + xfs_dir2_leaf_entry_t ents[1]; + xfs_dir2_data_off_t bests[1]; + xfs_dir2_leaf_tail_t tail; +} xfs_dir2_leaf_t; +typedef struct xfs_dir2_leaf_hdr { + xfs_da_blkinfo_t info; + __uint16_t count; + __uint16_t stale; +} xfs_dir2_leaf_hdr_t; +typedef struct xfs_dir2_leaf_tail { + __uint32_t bestcount; +} xfs_dir2_leaf_tail_t; + + + + + The leaves use the xfs_da_blkinfo_t filesystem block header. This header is used for directory and extended attribute () leaves and B+tree nodes: + +typedef struct xfs_da_blkinfo { + __be32 forw; + __be32 back; + __be16 magic; + __be16 pad; +} xfs_da_blkinfo_t; + + + + The size of the ents array is specified by hdr.count. + + + The size of the bests array is specified by the tail.bestcount which is also the number of "data" blocks for  the directory. The bests array maintains each data block's bestfree[0].length value. + + + + 48 + + + + + + + + +xfs_db Example: + + For this example, a directory was created with 256 entries (frame000000.tst to frame000255.tst) and then deleted some files (frame00005*, frame00018* and frame000240.tst) to show free list characteristics. + +xfs_db> inode <inode#> +xfs_db> p +core.magic = 0x494e +core.mode = 040755 +core.version = 1 +core.format = 2 (extents) +core.nlinkv1 = 2 +... +core.size = 12288 +core.nblocks = 4 +core.extsize = 0 +core.nextents = 3 +... +u.bmx[0-2] = [startoff,startblock,blockcount,extentflag] + 0:[0,4718604,1,0] + 1:[1,4718610,2,0] + 2:[8388608,4718605,1,0] + + + + As can be seen in this example, three blocks are used for "data" in two extents, and the "leaf" extent has a logical offset of 8388608 blocks (32GB). + Examining the first block: + +xfs_db> dblock 0 +xfs_db> type dir2 +xfs_db> p +dhdr.magic = 0x58443244 +dhdr.bestfree[0].offset = 0x670 +dhdr.bestfree[0].length = 0x140 +dhdr.bestfree[1].offset = 0xff0 +dhdr.bestfree[1].length = 0x10 +dhdr.bestfree[2].offset = 0 +dhdr.bestfree[2].length = 0 +du[0].inumber = 75497600 +du[0].namelen = 1 +du[0].name = "." +du[0].tag = 0x10 +du[1].inumber = 128 +du[1].namelen = 2 +du[1].name = ".." +du[1].tag = 0x20 +du[2].inumber = 75497601 +du[2].namelen = 15 +du[2].name = "frame000000.tst" +du[2].tag = 0x30 +du[3].inumber = 75497602 +du[3].namelen = 15 +du[3].name = "frame000001.tst" +du[3].tag = 0x50 +... 
+du[51].inumber = 75497650 +du[51].namelen = 15 +du[51].name = "frame000049.tst" +du[51].tag = 0x650 +du[52].freetag = 0xffff +du[52].length = 0x140 +du[52].tag = 0x670 +du[53].inumber = 75497661 +du[53].namelen = 15 +du[53].name = "frame000060.tst" +du[53].tag = 0x7b0 +... +du[118].inumber = 75497758 +du[118].namelen = 15 +du[118].name = "frame000125.tst" +du[118].tag = 0xfd0 +du[119].freetag = 0xffff +du[119].length = 0x10 +du[119].tag = 0xff0 + + + Note that the xfs_db field output is preceded by a "d" for "data". + The next "data" block: + +xfs_db> dblock 1 +xfs_db> type dir2 +xfs_db> p +dhdr.magic = 0x58443244 +dhdr.bestfree[0].offset = 0x6d0 +dhdr.bestfree[0].length = 0x140 +dhdr.bestfree[1].offset = 0xe50 +dhdr.bestfree[1].length = 0x20 +dhdr.bestfree[2].offset = 0xff0 +dhdr.bestfree[2].length = 0x10 +du[0].inumber = 75497759 +du[0].namelen = 15 +du[0].name = "frame000126.tst" +du[0].tag = 0x10 +... +du[53].inumber = 75497844 +du[53].namelen = 15 +du[53].name = "frame000179.tst" +du[53].tag = 0x6b0 +du[54].freetag = 0xffff +du[54].length = 0x140 +du[54].tag = 0x6d0 +du[55].inumber = 75497855 +du[55].namelen = 15 +du[55].name = "frame000190.tst" +du[55].tag = 0x810 +... +du[104].inumber = 75497904 +du[104].namelen = 15 +du[104].name = "frame000239.tst" +du[104].tag = 0xe30 +du[105].freetag = 0xffff +du[105].length = 0x20 +du[105].tag = 0xe50 +du[106].inumber = 75497906 +du[106].namelen = 15 +du[106].name = "frame000241.tst" +du[106].tag = 0xe70 +... +du[117].inumber = 75497917 +du[117].namelen = 15 +du[117].name = "frame000252.tst" +du[117].tag = 0xfd0 +du[118].freetag = 0xffff +du[118].length = 0x10 +du[118].tag = 0xff0 + + + And the last data block: + +xfs_db> dblock 2 +xfs_db> type dir2 +xfs_db> p +dhdr.magic = 0x58443244 +dhdr.bestfree[0].offset = 0x70 +dhdr.bestfree[0].length = 0xf90 +dhdr.bestfree[1].offset = 0 +dhdr.bestfree[1].length = 0 +dhdr.bestfree[2].offset = 0 +dhdr.bestfree[2].length = 0 +du[0].inumber = 75497918 +du[0].namelen = 15 +du[0].name = "frame000253.tst" +du[0].tag = 0x10 +du[1].inumber = 75497919 +du[1].namelen = 15 +du[1].name = "frame000254.tst" +du[1].tag = 0x30 +du[2].inumber = 75497920 +du[2].namelen = 15 +du[2].name = "frame000255.tst" +du[2].tag = 0x50 +du[3].freetag = 0xffff +du[3].length = 0xf90 +du[3].tag = 0x70 + + + Examining the "leaf" block (with the fields preceded by an "l" for "leaf"): + The directory before deleting some entries: + +xfs_db> dblock 8388608 +xfs_db> type dir2 +xfs_db> p +lhdr.info.forw = 0 +lhdr.info.back = 0 +lhdr.info.magic = 0xd2f1 +lhdr.count = 258 +lhdr.stale = 0 +lbests[0-2] = 0:0x10 1:0x10 2:0xf90 +lents[0].hashval = 0x2e +lents[0].address = 0x2 +lents[1].hashval = 0x172e +lents[1].address = 0x4 +lents[2].hashval = 0x23a04084 +lents[2].address = 0x116 +... +lents[257].hashval = 0xf3a048bc +lents[257].address = 0x366 +ltail.bestcount = 3 + + + Note how the lbests array correspond with the bestfree[0].length values in the "data" blocks: + +xfs_db> dblock 0 +xfs_db> type dir2 +xfs_db> p +dhdr.magic = 0x58443244 +dhdr.bestfree[0].offset = 0xff0 +dhdr.bestfree[0].length = 0x10 +... +xfs_db> dblock 1 +xfs_db> type dir2 +xfs_db> p +dhdr.magic = 0x58443244 +dhdr.bestfree[0].offset = 0xff0 +dhdr.bestfree[0].length = 0x10 +... 
+xfs_db> dblock 2 +xfs_db> type dir2 +xfs_db> p +dhdr.magic = 0x58443244 +dhdr.bestfree[0].offset = 0x70 +dhdr.bestfree[0].length = 0xf90 + + + Now after the entries have been deleted: + +xfs_db> dblock 8388608 +xfs_db> type dir2 +xfs_db> p +lhdr.info.forw = 0 +lhdr.info.back = 0 +lhdr.info.magic = 0xd2f1 +lhdr.count = 258 +lhdr.stale = 21 +lbests[0-2] = 0:0x140 1:0x140 2:0xf90 +lents[0].hashval = 0x2e +lents[0].address = 0x2 +lents[1].hashval = 0x172e +lents[1].address = 0x4 +lents[2].hashval = 0x23a04084 +lents[2].address = 0x116 +... + + As can be seen, the lbests values have been update to contain each hdr.bestfree[0].length values. The leaf's hdr.stale value has also been updated to specify the number of stale entries in the array. The stale entries have an address of zero. + + TODO: Need an example for where new entries get inserted with several large free spaces.
+ + + + + + + +
+ + Node Directories + When the "leaf" information fills a block, the extents undergo another separation. All "freeindex" information moves into its own extent. Like Leaf Directories (), the "leaf" block maintained the best free space information for each "data" block. This is not possible with more than one leaf. + + + The "data" blocks stay the same as leaf directories. + + + The "leaf" blocks eventually change into a B+tree with the generic B+tree header pointing to directory "leaves" as described in Leaf Directories. The top-level blocks are called "nodes". It can exist in a state where there is still a single leaf block before it's split. Interpretation of the node vs. leaf blocks has to be performed by inspecting the magic value in the header. The combined leaf/freeindex blocks has a magic value of XFS_DIR2_LEAF1_MAGIC (0xd2f1), a node directory's leaf/leaves have a magic value of XFS_DIR2_LEAFN_MAGIC  (0xd2ff) and intermediate nodes have a magic value of XFS_DA_NODE_MAGIC (0xfebe). + + + The new "freeindex" block(s) only contains the bests for each data block. + + + The freeindex block uses the following structures: + +typedef struct xfs_dir2_free_hdr { + __uint32_t magic; + __int32_t firstdb; + __int32_t nvalid; + __int32_t nused; +} xfs_dir2_free_hdr_t; +typedef struct xfs_dir2_free { + xfs_dir2_free_hdr_t hdr; + xfs_dir2_data_off_t bests[1]; +} xfs_dir2_free_t; + + + + The location of the leaf blocks can be in any order, the only way to determine the appropriate is by the node block hash/before values. Given a hash to lookup, you read the node's btree array and first hashval in the array that exceeds the given hash and it can then be found in the block pointed to by the before value. + +typedef struct xfs_da_intnode { + struct xfs_da_node_hdr { + xfs_da_blkinfo_t info; + __uint16_t count; + __uint16_t level; + } hdr; + struct xfs_da_node_entry { + xfs_dahash_t hashval; + xfs_dablk_t before; + } btree[1]; +} xfs_da_intnode_t; + + + + + The freeindex's bests array starts from the end of the block and grows to the start of the block. + + + When an data block becomes unused (ie. all entries in it have been deleted), the block is freed, the data extents contain a hole, and the freeindex's hdr.nused value is decremented and the associated bests[] entry is set to 0xffff. + + + As the first data block always contains "." and "..", it's invalid for the directory to have a hole at the start. + + + The freeindex's hdr.nvalid should always be the same as the number of allocated data directory blocks containing name/inode data and will always be less than or equal to hdr.nused. hdr.nused should be the same as the index of the last data directory block plus one (i.e. when the last data block is freed, nused and nvalid are decremented). + + + + 54 + + + + + + + + + +xfs_db Example: + With the node directory examples, we are using a filesystems with 4KB block size, and a 16KB directory size. The directory has over 2000 entries: + +xfs_db> sb 0 +xfs_db> p +magicnum = 0x58465342 +blocksize = 4096 +... +dirblklog = 2 +... +xfs_db> inode <inode#> +xfs_db> p +core.magic = 0x494e +core.mode = 040755 +core.version = 1 +core.format = 2 (extents) +... +core.size = 81920 +core.nblocks = 36 +core.extsize = 0 +core.nextents = 8 +... 
+u.bmx[0-7] = [startoff,startblock,blockcount,extentflag] 0:[0,7368,4,0] +1:[4,7408,4,0] 2:[8,7444,4,0] 3:[12,7480,4,0] 4:[16,7520,4,0] +5:[8388608,7396,4,0] 6:[8388612,7524,8,0] 7:[16777216,7516,4,0] + + + + + As can already be observed, all extents are allocated is multiples of 4 blocks. + Blocks 0 to 19 (16+4-1) are used for the data. Looking at blocks 16-19, it can seen that it's the same as the single-leaf format, except the length values are  a lot larger to accommodate the increased directory block size: + +xfs_db> dblock 16 +xfs_db> type dir2 +xfs_db> p +dhdr.magic = 0x58443244 +dhdr.bestfree[0].offset = 0xb0 +dhdr.bestfree[0].length = 0x3f50 +dhdr.bestfree[1].offset = 0 +dhdr.bestfree[1].length = 0 +dhdr.bestfree[2].offset = 0 +dhdr.bestfree[2].length = 0 +du[0].inumber = 120224 +du[0].namelen = 15 +du[0].name = "frame002043.tst" +du[0].tag = 0x10 +du[1].inumber = 120225 +du[1].namelen = 15 +du[1].name = "frame002044.tst" +du[1].tag = 0x30 +du[2].inumber = 120226 +du[2].namelen = 15 +du[2].name = "frame002045.tst" +du[2].tag = 0x50 +du[3].inumber = 120227 +du[3].namelen = 15 +du[3].name = "frame002046.tst" +du[3].tag = 0x70 +du[4].inumber = 120228 +du[4].namelen = 15 +du[4].name = "frame002047.tst" +du[4].tag = 0x90 +du[5].freetag = 0xffff +du[5].length = 0x3f50 +du[5].tag = 0 + + + + Next, the "node" block, the fields are preceded with 'n' for node blocks: + +xfs_db> dblock 8388608 +xfs_db> type dir2 +xfs_db> p +nhdr.info.forw = 0 +nhdr.info.back = 0 +nhdr.info.magic = 0xfebe +nhdr.count = 2 +nhdr.level = 1 +nbtree[0-1] = [hashval,before] 0:[0xa3a440ac,8388616] 1:[0xf3a440bc,8388612] + + + + + The following leaf blocks have been allocated once as XFS knows it needs at two blocks when allocating a B+tree, so the length is 8 fsblocks. For all hashes < 0xa3a440ac, they are located in the directory offset 8388616 and hashes below 0xf3a440bc are in offset 8388612. Hashes above f3a440bc don't exist in this directory. + +xfs_db> dblock 8388616 +xfs_db> type dir2 +xfs_db> p +lhdr.info.forw = 8388612 +lhdr.info.back = 0 +lhdr.info.magic = 0xd2ff +lhdr.count = 1023 +lhdr.stale = 0 +lents[0].hashval = 0x2e +lents[0].address = 0x2 +lents[1].hashval = 0x172e +lents[1].address = 0x4 +lents[2].hashval = 0x23a04084 +lents[2].address = 0x116 +... +lents[1021].hashval = 0xa3a440a4 +lents[1021].address = 0x1fa2 +lents[1022].hashval = 0xa3a440ac +lents[1022].address = 0x1fca +xfs_db> dblock 8388612 +xfs_db> type dir2 +xfs_db> p +lhdr.info.forw = 0 +lhdr.info.back = 8388616 +lhdr.info.magic = 0xd2ff +lhdr.count = 1027 +lhdr.stale = 0 +lents[0].hashval = 0xa3a440b4 +lents[0].address = 0x1f52 +lents[1].hashval = 0xa3a440bc +lents[1].address = 0x1f7a +... +lents[1025].hashval = 0xf3a440b4 +lents[1025].address = 0x1f66 +lents[1026].hashval = 0xf3a440bc +lents[1026].address = 0x1f8e + + + + An example lookup using xfs_db: + +xfs_db> hash frame001845.tst +0xf3a26094 +Doing a binary search through the array, we get address 0x1ce6, which is +offset 0xe730. Each fsblock is 4KB in size (0x1000), so it will be offset +0x730 into directory offset 14. From the extent map, this will be fsblock +7482: +xfs_db> fsblock 7482 +xfs_db> type text +xfs_db> p +... 
+730: 00 00 00 00 00 01 d4 da 0f 66 72 61 6d 65 30 30 .........frame00 +740: 31 38 34 35 2e 74 73 74 00 00 00 00 00 00 27 30 1845.tst.......0 + + + + Looking at the freeindex information (fields with an 'f' tag): + +xfs_db> fsblock 7516 +xfs_db> type dir2 +xfs_db> p +fhdr.magic = 0x58443246 +fhdr.firstdb = 0 +fhdr.nvalid = 5 +fhdr.nused = 5 +fbests[0-4] = 0:0x10 1:0x10 2:0x10 3:0x10 4:0x3f50 + + Like the Leaf Directory (), each of the fbests values correspond to each data block's bestfree[0].length value. + The raw disk layout, old data is not cleared after the array. The fbests array is highlighted: + + + code57 + + TODO: Example with a hole in the middle
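+ Descending a node block is a scan of its btree array for the first hashval greater than or equal to the hash being looked up, followed by a jump to that entry's before block. A minimal sketch using the two-entry node from the example above (names local to the sketch):
+
+#include <stdint.h>
+#include <stdio.h>
+
+struct node_entry {              /* like struct xfs_da_node_entry */
+        uint32_t hashval;
+        uint32_t before;         /* logical directory block number */
+};
+
+/* return the directory block to search next, or -1 if the hash is too large */
+static int64_t node_lookup(const struct node_entry *btree, unsigned count, uint32_t hash)
+{
+        for (unsigned i = 0; i < count; i++)
+                if (btree[i].hashval >= hash)
+                        return btree[i].before;
+        return -1;               /* no entry with such a hash exists */
+}
+
+int main(void)
+{
+        /* nbtree[0-1] from the xfs_db node block dump above */
+        struct node_entry btree[2] = {
+                { 0xa3a440ac, 8388616 },
+                { 0xf3a440bc, 8388612 },
+        };
+
+        uint32_t hash = 0xf3a26094;   /* hash of frame001845.tst in the example */
+        printf("search dir block %lld\n", (long long)node_lookup(btree, 2, hash));
+        return 0;
+}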
+ + + +
+ B+tree Directories + When the extent map in an inode grows beyond the inode's space, the inode format is changed to a "btree". The inode contains a filesystem block point to the B+tree extent map for the directory's blocks. The B+tree extents contain the extent map for the "data", "node", "leaf" and "freeindex" information as described in Node Directories (). + Refer to the previous section on B+tree Data Extents () for more information on XFS B+tree extents. + The following situations and changes can apply over Node Directories, and apply here as inode extents generally cannot contain the number of directory blocks that B+trees can handle: + + + The node/leaf trees can be more than one level deep. + + + More than one freeindex block may exist, but this will be quite rare. It would required hundreds of thousand files with quite long file names (or millions with shorter names) to get a second freeindex block. + + + +xfs_db Example: + A directory has been created with 200,000 entries with each entry being 100 characters long. The filesystem block size and directory block size are 4KB: + +xfs_db> inode 772 +xfs_db> p +core.magic = 0x494e +core.mode = 040755 +core.version = 1 +core.format = 3 (btree) +... +core.size = 22757376 +core.nblocks = 6145 +core.extsize = 0 +core.nextents = 234 +core.naextents = 0 +core.forkoff = 0 +... +u.bmbt.level = 1 +u.bmbt.numrecs = 1 +u.bmbt.keys[1] = [startoff] 1:[0] +u.bmbt.ptrs[1] = 1:89 +xfs_db> fsblock 89 +xfs_db> type bmapbtd +xfs_db> p +magic = 0x424d4150 +level = 0 +numrecs = 234 +leftsib = null +rightsib = null +recs[1-234] = [startoff,startblock,blockcount,extentflag] + 1:[0,53,1,0] 2:[1,55,13,0] 3:[14,69,1,0] 4:[15,72,13,0] + 5:[28,86,2,0] 6:[30,90,21,0] 7:[51,112,1,0] 8:[52,114,11,0] + ... + 125:[5177,902,15,0] 126:[5192,918,6,0] 127:[5198,524786,358,0] + 128:[8388608,54,1,0] 129:[8388609,70,2,0] 130:[8388611,85,1,0] + ... + 229:[8389164,917,1,0] 230:[8389165,924,19,0] 231:[8389184,944,9,0] + 232:[16777216,68,1,0] 233:[16777217,7340114,1,0] 234:[16777218,5767362,1,0] + + + +We have 128 extents and a total of 5555 blocks being used to store name/inode pairs. With only about 2000 values that can be stored in the freeindex block, 3 blocks have been allocated for this information. The firstdb field specifies the starting directory block number for each array: + +xfs_db> dblock 16777216 +xfs_db> type dir2 +xfs_db> p +fhdr.magic = 0x58443246 +fhdr.firstdb = 0 +fhdr.nvalid = 2040 +fhdr.nused = 2040 +fbests[0-2039] = ... +xfs_db> dblock 16777217 +xfs_db> type dir2 +xfs_db> p +fhdr.magic = 0x58443246 +fhdr.firstdb = 2040 +fhdr.nvalid = 2040 +fhdr.nused = 2040 +fbests[0-2039] = ... +xfs_db> dblock 16777218 +xfs_db> type dir2 +xfs_db> p +fhdr.magic = 0x58443246 +fhdr.firstdb = 4080 +fhdr.nvalid = 1476 +fhdr.nused = 1476 +fbests[0-1475] = ... + + Looking at the root node in the node block, it's a pretty deep tree: + +xfs_db> dblock 8388608 +xfs_db> type dir2 +xfs_db> p +nhdr.info.forw = 0 +nhdr.info.back = 0 +nhdr.info.magic = 0xfebe +nhdr.count = 2 +nhdr.level = 2 +nbtree[0-1] = [hashval,before] 0:[0x6bbf6f39,8389121] 1:[0xfbbf7f79,8389120] +xfs_db> dblock 8389121 +xfs_db> type dir2 +xfs_db> p +nhdr.info.forw = 8389120 +nhdr.info.back = 0 +nhdr.info.magic = 0xfebe +nhdr.count = 263 +nhdr.level = 1 +nbtree[0-262] = ... 262:[0x6bbf6f39,8388928] +xfs_db> dblock 8389120 +xfs_db> type dir2 +xfs_db> p +nhdr.info.forw = 0 +nhdr.info.back = 8389121 +nhdr.info.magic = 0xfebe +nhdr.count = 319 +nhdr.level = 1 +nbtree[0-318] = [hashval,before] 0:[0x70b14711,8388919] ... 
+
+ The leaves at each end of a node always point to the end leaves in adjacent nodes. Directory block 8388928's forward pointer is to block 8388919, and vice versa, as highlighted in the following example:
+
+
+ code60
+ +
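+ Locating the freeindex block and fbests slot for a given data block is simple division, because each freeindex block carries a fixed number of bests. A sketch using the 4KB geometry of this example (the constants are derived from the structure sizes, not read from disk):
+
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+        const uint32_t dirblksize = 4096;
+        const uint32_t hdr = 4 + 4 + 4 + 4;                        /* xfs_dir2_free_hdr_t */
+        const uint32_t bests_per_block = (dirblksize - hdr) / 2;   /* 2040 two-byte bests */
+
+        uint32_t datablock = 5000;                        /* a "data" block number */
+        uint32_t fblock = datablock / bests_per_block;    /* which freeindex block */
+        uint32_t slot   = datablock % bests_per_block;    /* index into fbests[]   */
+
+        printf("bests/block = %u\n", bests_per_block);
+        printf("data block %u -> freeindex block %u (firstdb %u), slot %u\n",
+               datablock, fblock, fblock * bests_per_block, slot);
+        return 0;
+}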
+ diff --git a/en-US/xfs-Extended_Attributes.xml b/en-US/xfs-Extended_Attributes.xml new file mode 100644 index 0000000..7c33985 --- /dev/null +++ b/en-US/xfs-Extended_Attributes.xml @@ -0,0 +1,542 @@ + + +
+ Extended Attributes + Extended attributes implement the ability for a user to attach name:value pairs to inodes within the XFS filesystem. They could be used to store meta-information about the file. + The attribute names can be up to 256 bytes in length, terminated by the first 0 byte. The intent is that they be printable ASCII (or other character set) names for the attribute. The values can be up to 64KB of arbitrary binary data. Some XFS internal attributes (eg. parent pointers) use non-printable names for the attribute. + Access Control Lists (ACLs) and Data Migration Facility (DMF) use extended attributes to store their associated metadata with an inode. + XFS uses two disjoint attribute name spaces associated with every inode. They are the root and user address spaces. The root address space is accessible only to the superuser, and then only by specifying a flag argument to the function call. Other users will not see or be able to modify attributes in the root address space. The user address space is protected by the normal file permissions mechanism, so the owner of the file can decide who is able to see and/or modify the value of attributes on any particular file. + To view extended attributes from the command line, use the getfattr command. To set or delete extended attributes, use the setfattr command. ACLs control should use the getfacl and setfacl commands. + XFS attributes supports three namespaces: "user", "trusted" (or "root" using IRIX terminology) and "secure". + The location of the attribute fork in the inode's literal area is specified by the di_forkoff value in the inode's core. If this value is zero, the inode does not contain any extended attributes. Non-zero, the byte offset into the literal area = di_forkoff * 8, which also determines the 2048 byte maximum size for an inode. Attributes must be allocated on a 64-bit boundary on the disk except shortform attributes (they are tightly packed). To determine the offset into the inode itself, add 100 (0x64) to di_forkoff * 8. + The following four sections describe each of the on-disk formats. + + +
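+ The fork offset arithmetic above is easy to get wrong by a factor of eight, so here it is spelled out once, using the forkoff = 15 value from the shortform example in the next section:
+
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+        uint8_t di_forkoff = 15;                 /* from the xfs_db dump below */
+
+        if (di_forkoff == 0) {
+                printf("no extended attributes\n");
+                return 0;
+        }
+        uint32_t literal_off = di_forkoff * 8;       /* offset into the literal area */
+        uint32_t inode_off   = 100 + literal_off;    /* 0x64 + 120 = 220 (0xdc)      */
+
+        printf("attr fork at literal offset %u, inode offset 0x%x\n",
+               literal_off, inode_off);
+        return 0;
+}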
+ Shortform Attributes + When the all extended attributes can fit within the inode's attribute fork, the inode's di_aformat is set to "local" and the attributes are stored in the inode's literal area starting at offset di_forkoff * 8. + Shortform attributes use the following structures: + +typedef struct xfs_attr_shortform { + struct xfs_attr_sf_hdr { + __be16 totsize; + __u8 count; + } hdr; + struct xfs_attr_sf_entry { + __uint8_t namelen; + __uint8_t valuelen; + __uint8_t flags; + __uint8_t nameval[1]; + } list[1]; +} xfs_attr_shortform_t; +typedef struct xfs_attr_sf_hdr xfs_attr_sf_hdr_t; +typedef struct xfs_attr_sf_entry xfs_attr_sf_entry_t; + + + + + + 64 + + + + + + namelen and valuelen specify the size of the two byte arrays containing the name and value pairs. valuelen is zero for extended attributes with no value. + + + nameval[] is a single array where it's size is the sum of namelen and valuelen. The names and values are not null terminated on-disk. The value immediately follows the name in the array. + + + flags specifies the namespace for the attribute (0 = "user"): + + + + + + Flag + + + Description + + + + + + + XFS_ATTR_ROOT + + + The attribute's namespace is "trusted". + + + + + XFS_ATTR_SECURE + + + The attribute's namespace is "secure". + + + + + + + +xfs_db Example: + A file is created and two attributes are set: + +# setfattr -n user.empty few_attr +# setfattr -n trusted.trust -v val1 few_attr + + Using xfs_db, we dump the inode: + +xfs_db> inode <inode#> +xfs_db> p +core.magic = 0x494e +core.mode = 0100644 +... +core.naextents = 0 +core.forkoff = 15 +core.aformat = 1 (local) +... +a.sfattr.hdr.totsize = 24 +a.sfattr.hdr.count = 2 +a.sfattr.list[0].namelen = 5 +a.sfattr.list[0].valuelen = 0 +a.sfattr.list[0].root = 0 +a.sfattr.list[0].secure = 0 +a.sfattr.list[0].name = "empty" +a.sfattr.list[1].namelen = 5 +a.sfattr.list[1].valuelen = 4 +a.sfattr.list[1].root = 1 +a.sfattr.list[1].secure = 0 +a.sfattr.list[1].name = "trust" +a.sfattr.list[1].value = "val1" + + We can determine the actual inode offset to be 220 (15 x 8 + 100) or 0xdc. + Examining the raw dump, the second attribute is highlighted: + + + code65 + + Adding another attribute with attr1, the format is converted to extents and di_forkoff remains unchanged (and all those zeros in the dump above remain unused): + +xfs_db> inode <inode#> +xfs_db> p +... +core.naextents = 1 +core.forkoff = 15 +core.aformat = 2 (extents) +... +a.bmx[0] = [startoff,startblock,blockcount,extentflag] 0:[0,37534,1,0] + + Performing the same steps with attr2, adding one attribute at a time, you can see di_forkoff change as attributes are added: + +xfs_db> inode <inode#> +xfs_db> p +... +core.naextents = 0 +core.forkoff = 15 +core.aformat = 1 (local) +... +a.sfattr.hdr.totsize = 17 +a.sfattr.hdr.count = 1 +a.sfattr.list[0].namelen = 10 +a.sfattr.list[0].valuelen = 0 +a.sfattr.list[0].root = 0 +a.sfattr.list[0].secure = 0 +a.sfattr.list[0].name = "empty_attr" + + Attribute added: + +xfs_db> p +... +core.naextents = 0 +core.forkoff = 15 +core.aformat = 1 (local) +... 
+a.sfattr.hdr.totsize = 31 +a.sfattr.hdr.count = 2 +a.sfattr.list[0].namelen = 10 +a.sfattr.list[0].valuelen = 0 +a.sfattr.list[0].root = 0 +a.sfattr.list[0].secure = 0 +a.sfattr.list[0].name = "empty_attr" +a.sfattr.list[1].namelen = 7 +a.sfattr.list[1].valuelen = 4 +a.sfattr.list[1].root = 1 +a.sfattr.list[1].secure = 0 +a.sfattr.list[1].name = "trust_a" +a.sfattr.list[1].value = "val1" + + Another attribute is added: + + + code66 + + One more is added: + +xfs_db> p +core.naextents = 0 +core.forkoff = 10 +core.aformat = 1 (local) +... +a.sfattr.hdr.totsize = 69 +a.sfattr.hdr.count = 4 +a.sfattr.list[0].namelen = 10 +a.sfattr.list[0].valuelen = 0 +a.sfattr.list[0].root = 0 +a.sfattr.list[0].secure = 0 +a.sfattr.list[0].name = "empty_attr" +a.sfattr.list[1].namelen = 7 +a.sfattr.list[1].valuelen = 4 +a.sfattr.list[1].root = 1 +a.sfattr.list[1].secure = 0 +a.sfattr.list[1].name = "trust_a" +a.sfattr.list[1].value = "val1" +a.sfattr.list[2].namelen = 6 +a.sfattr.list[2].valuelen = 12 +a.sfattr.list[2].root = 0 +a.sfattr.list[2].secure = 0 +a.sfattr.list[2].name = "second" +a.sfattr.list[2].value = "second_value" +a.sfattr.list[3].namelen = 6 +a.sfattr.list[3].valuelen = 8 +a.sfattr.list[3].root = 0 +a.sfattr.list[3].secure = 1 +a.sfattr.list[3].name = "policy" +a.sfattr.list[3].value = "contents" + + A raw dump is shown to compare with the attr1 dump on a prior page, the header is highlighted: + + + code67 + + It can be clearly seen that attr2 allows many more attributes to be stored in an inode before they are moved to another filesystem block.
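 Because shortform entries are variable length (three header bytes followed by the packed name and value bytes, with no terminators), walking the list means advancing by 3 + namelen + valuelen for each entry. The following C sketch demonstrates this using simplified stand-ins for the xfs_attr_sf_hdr and xfs_attr_sf_entry structures shown earlier; the hand-built byte array and attribute names in main() are purely illustrative:

#include <stdint.h>
#include <stdio.h>

/*
 * Simplified mirror of xfs_attr_sf_entry: namelen, valuelen, flags, then
 * the packed name bytes immediately followed by the value bytes.
 */
struct sf_entry {
    uint8_t namelen;
    uint8_t valuelen;
    uint8_t flags;
    uint8_t nameval[];
};

static void walk_shortform(const uint8_t *fork)
{
    /* xfs_attr_sf_hdr: totsize (2 bytes, ignored here) then count (1 byte) */
    uint8_t count = fork[2];
    const uint8_t *p = fork + 3;

    for (uint8_t i = 0; i < count; i++) {
        const struct sf_entry *e = (const struct sf_entry *)p;
        printf("attr %u: flags=0x%x name=%.*s valuelen=%u\n",
               i, e->flags, e->namelen, (const char *)e->nameval, e->valuelen);
        p += 3 + e->namelen + e->valuelen;    /* advance past the packed entry */
    }
}

int main(void)
{
    /* Hand-built list with two user-namespace attributes (flags = 0):
     * "empty" with no value, and "small" with the value "val1". */
    uint8_t fork[] = {
        0, 0, 2,                                   /* totsize (unused here), count = 2 */
        5, 0, 0, 'e', 'm', 'p', 't', 'y',
        5, 4, 0, 's', 'm', 'a', 'l', 'l', 'v', 'a', 'l', '1',
    };
    walk_shortform(fork);
    return 0;
}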
+ + + + +
Leaf Attributes + When an inode's attribute fork space is used up with shortform attributes and more are added, the attribute format is migrated to "extents". + Extent based attributes use hash/index pairs to speed up an attribute lookup. The first part of the "leaf" contains an array of fixed size hash/index pairs with the flags stored as well. The remaining part of the leaf block contains the array name/value pairs, where each element varies in length. + Each leaf is based on the xfs_da_blkinfo_t block header declared in Leaf Directories. The structure encapsulating all other structures in the xfs_attr_leafblock_t. + The structures involved are: + +typedef struct xfs_attr_leaf_map { + __be16 base; + __be16 size; +} xfs_attr_leaf_map_t; + +typedef struct xfs_attr_leaf_hdr { + xfs_da_blkinfo_t info; + __be16 count; + __be16 usedbytes; + __be16 firstused; + __u8 holes; + __u8 pad1; + xfs_attr_leaf_map_t freemap[3]; +} xfs_attr_leaf_hdr_t; + +typedef struct xfs_attr_leaf_entry { + __be32 hashval; + __be16 nameidx; + __u8 flags; + __u8 pad2; +} xfs_attr_leaf_entry_t; + +typedef struct xfs_attr_leaf_name_local { + __be16 valuelen; + __u8 namelen; + __u8 nameval[1]; +} xfs_attr_leaf_name_local_t; + +typedef struct xfs_attr_leaf_name_remote { + __be32 valueblk; + __be32 valuelen; + __u8 namelen; + __u8 name[1]; +} xfs_attr_leaf_name_remote_t; + +typedef struct xfs_attr_leafblock { + xfs_attr_leaf_hdr_t hdr; + xfs_attr_leaf_entry_t entries[1]; + xfs_attr_leaf_name_local_t namelist; + xfs_attr_leaf_name_remote_t valuelist; +} xfs_attr_leafblock_t; + + + Each leaf header uses the following magic number: + +#define XFS_ATTR_LEAF_MAGIC        0xfbee + + + The hash/index elements in the entries[] array are packed from the top of the block.  Name/values grow from the bottom but are not packed. The freemap contains run-length-encoded entries for the free bytes after the entries[] array, but only the three largest runs are stored (smaller runs are dropped).  When the freemap doesn’t show enough space for an allocation, name/value area is compacted and allocation is tried again.  If there still isn't enough space, then the block is split. The name/value structures (both local and remote versions) must be 32-bit aligned. + For attributes with small values (ie. the value can be stored within the leaf), the XFS_ATTR_LOCAL flag is set for the attribute. The entry details are stored using the xfs_attr_leaf_name_local_t structure. For large attribute values that cannot be stored within the leaf, separate filesystem blocks are allocated to store the value. They use the xfs_attr_leaf_name_remote_t structure. + + + + + + + 69 + + + + + + + Both local and remote entries can be interleaved as they are only addressed by the hash/index entries. The flag is stored with the hash/index pairs so the appropriate structure can be used. + Since duplicate hash keys are possible, for each hash that matches during a lookup, the actual name string must be compared. + An “incomplete” bit is also used for attribute flags.  It shows that an attribute is in the middle of being created and should not be shown to the user if we crash during the time that the bit is set.  The bit is cleared when attribute has finished being setup.  This is done because some large attributes cannot be created inside a single transaction. + + + +xfs_db Example: + A single 30KB extended attribute is added to an inode: + +xfs_db> inode <inode#> +xfs_db> p +... 
+core.nblocks = 9 +core.nextents = 0 +core.naextents = 1 +core.forkoff = 15 +core.aformat = 2 (extents) +... +a.bmx[0] = [startoff,startblock,blockcount,extentflag] + 0:[0,37535,9,0] +xfs_db> ablock 0 +xfs_db> p +hdr.info.forw = 0 +hdr.info.back = 0 +hdr.info.magic = 0xfbee +hdr.count = 1 +hdr.usedbytes = 20 +hdr.firstused = 4076 +hdr.holes = 0 +hdr.freemap[0-2] = [base,size] 0:[40,4036] 1:[0,0] 2:[0,0] +entries[0] = [hashval,nameidx,incomplete,root,secure,local] + 0:[0xfcf89d4f,4076,0,0,0,0] +nvlist[0].valueblk = 0x1 +nvlist[0].valuelen = 30692 +nvlist[0].namelen = 8 +nvlist[0].name = "big_attr" + + Attribute blocks 1 to 8 (filesystem blocks 37536 to 37543) contain the raw binary value data for the attribute. + Index 4076 (0xfec) is the offset into the block where the name/value information is. As can be seen by the value, it's at the end of the block: + +xfs_db> type text +xfs_db> p +000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 01 00 14 ................ +010: 0f ec 00 00 00 28 0f c4 00 00 00 00 00 00 00 00 ................ +020: fc f8 9d 4f 0f ec 00 00 00 00 00 00 00 00 00 00 ...O............ +030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ +... +fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 ................ +ff0: 00 00 77 e4 08 62 69 67 5f 61 74 74 72 00 00 00 ..w..big.attr... + + A 30KB attribute and a couple of small attributes are added to a file: + +xfs_db> inode <inode#> +xfs_db> p +... +core.nblocks = 10 +core.extsize = 0 +core.nextents = 1 +core.naextents = 2 +core.forkoff = 15 +core.aformat = 2 (extents) +... +u.bmx[0] = [startoff,startblock,blockcount,extentflag] + 0:[0,81857,1,0] +a.bmx[0-1] = [startoff,startblock,blockcount,extentflag] + 0:[0,81858,1,0] + 1:[1,182398,8,0] +xfs_db> ablock 0 +xfs_db> p +hdr.info.forw = 0 +hdr.info.back = 0 +hdr.info.magic = 0xfbee +hdr.count = 3 +hdr.usedbytes = 52 +hdr.firstused = 4044 +hdr.holes = 0 +hdr.freemap[0-2] = [base,size] 0:[56,3988] 1:[0,0] 2:[0,0] +entries[0-2] = [hashval,nameidx,incomplete,root,secure,local] + 0:[0x1e9d3934,4044,0,0,0,1] + 1:[0x1e9d3937,4060,0,0,0,1] + 2:[0xfcf89d4f,4076,0,0,0,0] +nvlist[0].valuelen = 6 +nvlist[0].namelen = 5 +nvlist[0].name = "attr2" +nvlist[0].value = "value2" +nvlist[1].valuelen = 6 +nvlist[1].namelen = 5 +nvlist[1].name = "attr1" +nvlist[1].value = "value1" +nvlist[2].valueblk = 0x1 +nvlist[2].valuelen = 30692 +nvlist[2].namelen = 8 +nvlist[2].name = "big_attr" + + + As can be seen in the entries array, the two small attributes have the local flag set and the values are printed. + A raw disk dump shows the attributes. The last attribute added is highlighted (offset 4044 or 0xfcc): + + + + c + + + + +
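 Each hash/index entry gives a nameidx byte offset into the leaf block, and its flags decide whether the name/value data at that offset is a local (xfs_attr_leaf_name_local_t) or remote (xfs_attr_leaf_name_remote_t) structure. The following C sketch decodes entries in that way. The structures are simplified local mirrors of the ones listed earlier; the caller supplies the byte offset of the entries[] array (just past the leaf header), and the numeric value used for the XFS_ATTR_LOCAL bit is an assumption that should be checked against the XFS headers:

#include <arpa/inet.h>   /* ntohs/ntohl for the big-endian on-disk fields */
#include <stdint.h>
#include <stdio.h>

#define ATTR_LOCAL 0x01   /* assumed bit value for XFS_ATTR_LOCAL */

struct leaf_entry {       /* mirrors xfs_attr_leaf_entry_t */
    uint32_t hashval;
    uint16_t nameidx;
    uint8_t  flags;
    uint8_t  pad2;
};

void dump_leaf_entries(const uint8_t *block, int count, int entries_offset)
{
    const struct leaf_entry *e = (const struct leaf_entry *)(block + entries_offset);

    for (int i = 0; i < count; i++) {
        uint16_t idx = ntohs(e[i].nameidx);   /* byte offset of the name/value data */
        if (e[i].flags & ATTR_LOCAL) {
            /* xfs_attr_leaf_name_local_t: be16 valuelen, u8 namelen, nameval[] */
            uint8_t namelen = block[idx + 2];
            printf("entry %d: local, hash 0x%08x, name %.*s\n",
                   i, ntohl(e[i].hashval), namelen, (const char *)&block[idx + 3]);
        } else {
            /* xfs_attr_leaf_name_remote_t: be32 valueblk, be32 valuelen, u8 namelen, name[] */
            uint8_t namelen = block[idx + 8];
            printf("entry %d: remote, hash 0x%08x, name %.*s\n",
                   i, ntohl(e[i].hashval), namelen, (const char *)&block[idx + 9]);
        }
    }
}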
+ + + + + +
+ Node Attributes + When the number of attributes exceeds the space that can fit in one filesystem block (ie. hash, flag, name and local values), the first attribute block becomes the root of a B+tree where the leaves contain the hash/name/value information that was stored in a single leaf block. The inode's attribute format itself remains extent based. The nodes use the xfs_da_intnode_t structure introduced in Node Directories. + The location of the attribute leaf blocks can be in any order, the only way to determine the appropriate is by the node block hash/before values. Given a hash to lookup, you read the node's btree array and first hashval in the array that exceeds the given hash and it can then be found in the block pointed to by the before value. + + + + + + + + + 72 + + + + + + + + + +xfs_db Example: + An inode with 1000 small attributes with the naming "attribute_n" where 'n' is a number: + +xfs_db> inode <inode#> +xfs_db> p +... +core.nblocks = 15 +core.nextents = 0 +core.naextents = 1 +core.forkoff = 15 +core.aformat = 2 (extents) +... +a.bmx[0] = [startoff,startblock,blockcount,extentflag] 0:[0,525144,15,0] +xfs_db> ablock 0 +xfs_db> p +hdr.info.forw = 0 +hdr.info.back = 0 +hdr.info.magic = 0xfebe +hdr.count = 14 +hdr.level = 1 +btree[0-13] = [hashval,before] + 0:[0x3435122d,1] + 1:[0x343550a9,14] + 2:[0x343553a6,13] + 3:[0x3436122d,12] + 4:[0x343650a9,8] + 5:[0x343653a6,7] + 6:[0x343691af,6] + 7:[0x3436d0ab,11] + 8:[0x3436d3a7,10] + 9:[0x3437122d,9] + 10:[0x3437922e,3] + 11:[0x3437d22a,5] + 12:[0x3e686c25,4] + 13:[0x3e686fad,2] + + + The hashes are in ascending order in the btree array, and if the hash for the attribute we are looking up is before the entry, we go to the addressed attribute block. + For example, to lookup attribute "attribute_267": + +xfs_db> hash attribute_267 +0x3437d1a8 + + + In the root btree node, this falls between 0x3437922e and 0x3437d22a, therefore leaf 11 or attribute block 5 will contain the entry. + + + code73-74 + + Each of the hash entries has XFS_ATTR_LOCAL flag set (1), which means the attribute's value follows immediately after the name. Raw disk of the name/value pair at offset 2864 (0xb30), highlighted with "value_267\d" following immediately after the name: + + + code74 + + Each entry starts on a 32-bit (4 byte) boundary, therefore the highlighted entry has 2 unused bytes after it. +
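 The lookup rule described above (take the first btree[] entry whose hashval is greater than or equal to the hash being searched for, then descend into its before block) can be expressed in a few lines of C. The structure below is a simplified stand-in with fields already converted to host byte order; a real implementation would binary-search rather than scan linearly:

#include <stdint.h>

struct node_btree_entry {
    uint32_t hashval;   /* highest hash reachable through this subtree */
    uint32_t before;    /* logical attribute/directory block to descend into */
};

/* Returns the "before" block to read next, or 0 if the hash is beyond all entries.
 * For the example above, hash 0x3437d1a8 first satisfies hash <= 0x3437d22a,
 * so attribute block 5 is read next. */
uint32_t node_lookup(const struct node_btree_entry *btree, int count, uint32_t hash)
{
    for (int i = 0; i < count; i++)
        if (hash <= btree[i].hashval)
            return btree[i].before;
    return 0;
}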
+ + + + +
B+tree Attributes + When the attribute's extent map in an inode grows beyond the available space, the inode's attribute format is changed to a "btree". The inode contains root node of the extent B+tree which then address the leaves that contains the extent arrays for the attribute data. The attribute data itself in the allocated filesystem blocks use the same layout and structures as described in Node Attributes. + Refer to the previous section on B+tree Data Extents for more information on XFS B+tree extents. + + + +xfs_db Example: + Added 2000 attributes with 729 byte values to a file: + +xfs_db> inode <inode#> +xfs_db> p +... +core.nblocks = 640 +core.extsize = 0 +core.nextents = 1 +core.naextents = 274 +core.forkoff = 15 +core.aformat = 3 (btree) +... +a.bmbt.level = 1 +a.bmbt.numrecs = 2 +a.bmbt.keys[1-2] = [startoff] 1:[0] 2:[219] +a.bmbt.ptrs[1-2] = 1:83162 2:109968 +xfs_db> fsblock 83162 +xfs_db> type bmapbtd +xfs_db> p +magic = 0x424d4150 +level = 0 +numrecs = 127 +leftsib = null +rightsib = 109968 +recs[1-127] = [startoff,startblock,blockcount,extentflag] + 1:[0,81870,1,0] + ... +xfs_db> fsblock 109968 +xfs_db> type bmapbtd +xfs_db> p +magic = 0x424d4150 +level = 0 +numrecs = 147 +leftsib = 83162 +rightsib = null +recs[1-147] = [startoff,startblock,blockcount,extentflag] + ... + (which is fsblock 81870) +xfs_db> ablock 0 +xfs_db> p +hdr.info.forw = 0 +hdr.info.back = 0 +hdr.info.magic = 0xfebe +hdr.count = 2 +hdr.level = 2 +btree[0-1] = [hashval,before] 0:[0x343612a6,513] 1:[0x3e686fad,512] + + + The extent B+tree has two leaves that specify the 274 extents used for the attributes. Looking at the first block, it can be seen that the attribute B+tree is two levels deep. The two blocks at offset 513 and 512 (ie. access using the ablock command) are intermediate xfs_da_intnode_t nodes that index all the attribute leaves.
+ + + diff --git a/en-US/xfs-Internal_Inodes.xml b/en-US/xfs-Internal_Inodes.xml new file mode 100644 index 0000000..6a093d0 --- /dev/null +++ b/en-US/xfs-Internal_Inodes.xml @@ -0,0 +1,180 @@ + + +
Internal Inodes + XFS allocates several inodes when a filesystem is created. These are internal and not accessible from the standard directory structure. These inodes are only accessible from the superblock. + +
+ Quota Inodes + If quotas are used, two inodes are allocated for user and group quota management. If project quotas are used, these replace the group quota management and therefore uses the group quota inode. + + + Project quota's primary purpose is to track and monitor disk usage for directories. For this to occur, the directory inode must have the XFS_DIFLAG_PROJINHERIT flag set so all inodes created underneath the directory inherit the project ID. + + + Inodes and blocks owned by ID zero do not have enforced quotas, but only quota accounting. + + + ­Extended attributes do not contribute towards the ID's quota . + + + To access each ID's quota information in the file, seek to the ID offset multiplied by the size of xfs_dqblk_t (136 bytes). + + + + + + 76 + + + + + + Quota information stored in the two inodes (in data extents) are an array of the xfs_dqblk_t structure where there is one instance for each ID in the system: + +typedef struct xfs_disk_dquot { + __be16 d_magic; + __u8 d_version; + __u8 d_flags; + __be32 d_id; + __be64 d_blk_hardlimit; + __be64 d_blk_softlimit; + __be64 d_ino_hardlimit; + __be64 d_ino_softlimit; + __be64 d_bcount; + __be64 d_icount; + __be32 d_itimer; + __be32 d_btimer; + __be16 d_iwarns; + __be16 d_bwarns; + __be32 d_pad0; + __be64 d_rtb_hardlimit; + __be64 d_rtb_softlimit; + __be64 d_rtbcount; + __be32 d_rtbtimer; + __be16 d_rtbwarns; + __be16 d_pad; +} xfs_disk_dquot_t; +typedef struct xfs_dqblk { + xfs_disk_dquot_t dd_diskdq; + char dd_fill[32]; +} xfs_dqblk_t; + + + + + + + + d_magic + Specifies the signature where these two bytes are 0x4451 (XFS_DQUOT_MAGIC), or "DQ" in ASCII. + + + + d_version + Specifies the structure version, currently this is one (XFS_DQUOT_VERSION). + + + + d_flags + Specifies which type of ID the structure applies to: + +#define XFS_DQ_USER 0x0001 +#define XFS_DQ_PROJ 0x0002 +#define XFS_DQ_GROUP 0x0004 + + + + + + d_id + The ID for the quota structure. This will be a uid, gid or projid based on the value of d_flags. + + + + d_blk_hardlimit + Specifies the hard limit for the number of filesystem blocks the ID can own. The ID will not be able to use more space than this limit. If it is attempted, ENOSPC will be returned. + + + + d_blk_softlimit + Specifies the soft limit for the number of filesystem blocks the ID can own.  The ID can temporarily use more space than by d_blk_softlimit up to d_blk_hardlimit. If the space is not freed by the time limit specified by ID zero's d_btimer value, the ID will be denied more space until the total blocks owned goes below d_blk_softlimit. + + + + d_ino_hardlimit + Specifies the hard limit for the number of inodes the ID can own. The ID will not be able to create or own any more inodes if d_icount reaches this value. + + + + d_ino_softlimit + Specifies the soft limit for the number of inodes the ID can own. The ID can temporarily create or own more inodes than specified by d_ino_softlimit up to d_ino_hardlimit. If the inode count is not reduced by the time limit specified by ID zero's d_itimer value, the ID will be denied from creating or owning more inodes until the count goes below d_ino_softlimit. + + + + d_bcount + Specifies how many filesystem blocks are actually owned by the ID. + + + + d_icount + Specifies how many inodes are actually owned by the ID. + + + + d_itimer + Specifies the time when the ID's d_icount exceeded d_ino_softlimit. The soft limit will turn into a hard limit after the elapsed time exceeds ID zero's d_itimer value. 
When d_icount goes back below d_ino_softlimit, d_itimer is reset back to zero. + + + + d_btimer + Specifies the time when the ID's d_bcount exceeded d_blk_softlimit. The soft limit will turn into a hard limit after the elapsed time exceeds ID zero's d_btimer value. When d_bcount goes back below d_blk_softlimit, d_btimer is reset back to zero. + + + + d_iwarns + d_bwarns + d_rtbwarns + Specifies how many times a warning has been issued. Currently not used. + + + + d_rtb_hardlimit + Specifies the hard limit for the number of real-time blocks the ID can own. The ID cannot own more space on the real-time subvolume beyond this limit. + + + + d_rtb_softlimit + Specifies the soft limit for the number of real-time blocks the ID can own. The ID can temporarily own more space than specified by d_rtb_softlimit up to d_rtb_hardlimit. If d_rtbcount is not reduced by the time limit specified by ID zero's d_rtbtimer value, the ID will be denied from owning more space until the count goes below d_rtb_softlimit + + + + d_rtbcount + Specifies how many real-time blocks are currently owned by the ID. + + + + d_rtbtimer + Specifies the time when the ID's d_rtbcount exceeded d_rtb_softlimit. The soft limit will turn into a hard limit after the elapsed time exceeds ID zero's d_rtbtimer value. When d_rtbcount goes back below d_rtb_softlimit, d_rtbtimer is reset back to zero. + + +
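 Putting the earlier lookup rule into code: the byte offset of an ID's record within the quota inode's data is simply the ID multiplied by sizeof(xfs_dqblk_t), which is 136 bytes (the 104-byte xfs_disk_dquot_t plus the 32-byte dd_fill). A minimal C sketch with illustrative helper names:

#include <stdint.h>
#include <stdio.h>

#define DQBLK_SIZE 136u            /* sizeof(xfs_dqblk_t): 104-byte dquot + 32-byte fill */

static uint64_t dquot_offset(uint32_t id)
{
    return (uint64_t)id * DQBLK_SIZE;
}

int main(void)
{
    uint32_t id = 500;             /* example uid, gid or projid */
    uint32_t blocksize = 4096;
    uint64_t off = dquot_offset(id);
    printf("id %u: byte offset %llu (block %llu, offset %llu within the block)\n",
           id, (unsigned long long)off,
           (unsigned long long)(off / blocksize),
           (unsigned long long)(off % blocksize));
    return 0;
}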
+ + + + + +
Real-time Inodes + There are two inodes allocated to managing the real-time device's space, the Bitmap Inode and the Summary Inode. + +
Real-Time Bitmap Inode + The Bitmap Inode tracks the used/free space in the real-time device using an old-style bitmap. One bit is allocated per real-time extent. The size of an extent is specified by the superblock's sb_rextsize value. + The number of blocks used by the bitmap inode is equal to the number of real-time extents (sb_rextents) divided by the block size (sb_blocksize) and bits per byte. This value is stored in sb_rbmblocks. The nblocks and extent array for the inode should match this. + + +xfs_ino_t        sb_rbmino;        +
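 The sb_rbmblocks calculation described above (one bit per real-time extent, rounded up to whole filesystem blocks) can be written as the following C sketch; the function name is illustrative:

#include <stdint.h>

#define NBBY 8   /* bits per byte */

uint64_t rt_bitmap_blocks(uint64_t sb_rextents, uint32_t sb_blocksize)
{
    uint64_t bits_per_block = (uint64_t)sb_blocksize * NBBY;
    return (sb_rextents + bits_per_block - 1) / bits_per_block;   /* round up */
}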
Real-Time Summary Inode + +xfs_ino_t        sb_rsumino;        +
+ diff --git a/en-US/xfs-Introduction.xml b/en-US/xfs-Introduction.xml new file mode 100644 index 0000000..8573934 --- /dev/null +++ b/en-US/xfs-Introduction.xml @@ -0,0 +1,10 @@ + + + +
 Introduction
 This document describes the layout of an XFS filesystem. It also shows how to inspect the filesystem manually, with examples that use the xfs_db user-space tool supplied with the XFS filesystem driver.
 TODO:
diff --git a/en-US/xfs-Journaling_Log.xml b/en-US/xfs-Journaling_Log.xml new file mode 100644 index 0000000..1be5904 --- /dev/null +++ b/en-US/xfs-Journaling_Log.xml @@ -0,0 +1,9 @@ + + + +
+Journaling Log + TODO: +
+ diff --git a/en-US/xfs-On-disk_Inode.xml b/en-US/xfs-On-disk_Inode.xml new file mode 100644 index 0000000..ba4e22a --- /dev/null +++ b/en-US/xfs-On-disk_Inode.xml @@ -0,0 +1,438 @@ + + + +
+ On-disk Inode + All files, directories and links are stored on disk with inodes and descend from the root inode with it's number defined in the superblock (). The previous section on AG Inode Management () describes the allocation and management of inodes on disk. This section describes the contents of inodes themselves. + An inode is divided into 3 parts: + + + + 23 + + + + + + + The core contains what the inode represents, stat data and information describing the data and attribute forks. + + + The di_u "data fork" contains normal data related to the inode. It's contents depends on the file type specified by di_core.di_mode (eg. regular file, directory, link, etc) and how much information is contained in the file which determined by di_core.di_format. The following union to represent this data is declared as follows: + +union { + xfs_bmdr_block_t di_bmbt; + xfs_bmbt_rec_t di_bmx[1]; + xfs_dir2_sf_t di_dir2sf; + char di_c[1]; + xfs_dev_t di_dev; + uuid_t di_muuid; + char di_symlink[1]; +} di_u; + + + + + The di_a "attribute fork" contains extended attributes. Its layout is determined by the di_core.di_aformat value. Its representation is declared as follows: + +union { + xfs_bmdr_block_t di_abmbt; + xfs_bmbt_rec_t di_abmx[1]; + xfs_attr_shortform_t di_attrsf; +} di_a; + + + + Note: The above two unions are rarely used in the XFS code, but the structures within the union are directly cast depending on the di_mode/di_format and di_aformat values. They are referenced in this document to make it easier to explain the various structures in use within the inode. + The remaining space in the inode after di_next_unlinked where the two forks are located is called the inode's "literal area". This starts at offset 100 (0x64) in the inode. + The space for each of the two forks in the literal area is determined by the inode size, and di_core.di_forkoff. The data fork is located between the start of the literal area and di_forkoff. The attribute fork is located between di_forkoff and the end of the inode. + + + + +
Inode Core + The inode's core is 96 bytes in size and contains information about the file itself including most stat data information about data and attribute forks after the core within the inode. It uses the following structure: + +typedef struct xfs_dinode_core { + __uint16_t di_magic; + __uint16_t di_mode; + __int8_t di_version; + __int8_t di_format; + __uint16_t di_onlink; + __uint32_t di_uid; + __uint32_t di_gid; + __uint32_t di_nlink; + __uint16_t di_projid; + __uint8_t di_pad[8]; + __uint16_t di_flushiter; + xfs_timestamp_t di_atime; + xfs_timestamp_t di_mtime; + xfs_timestamp_t di_ctime; + xfs_fsize_t di_size; + xfs_drfsbno_t di_nblocks; + xfs_extlen_t di_extsize; + xfs_extnum_t di_nextents; + xfs_aextnum_t di_anextents; + __uint8_t di_forkoff; + __int8_t di_aformat; + __uint32_t di_dmevmask; + __uint16_t di_dmstate; + __uint16_t di_flags; + __uint32_t di_gen; +} xfs_dinode_core_t; + + + + + + di_magic + The inode signature where these two bytes are 0x494e, or "IN" in ASCII. + + + di_mode + Specifies the mode access bits and type of file using the standard S_Ixxx values defined in stat.h. + + + + di_version + Specifies the inode version which currently can only be 1 or 2. The inode version specifies the usage of the di_onlink, di_nlink and di_projid values in the inode core. Initially, inodes are created as v1 but can be converted on the fly to v2 when required. + + + + di_format + Specifies the format of the data fork in conjunction with the di_mode type. This can be one of several values. For directories and links, it can be "local" where all metadata associated with the file is within the inode, "extents" where the inode contains an array of extents to other filesystem blocks which contain the associated metadata or data or "btree" where the inode contains a B+tree root node which points to filesystem blocks containing the metadata or data. Migration between the formats depends on the amount of metadata associated with the inode. "dev" is used for character and block devices while "uuid" is currently not used. + +typedef enum xfs_dinode_fmt { + XFS_DINODE_FMT_DEV, + XFS_DINODE_FMT_LOCAL, + XFS_DINODE_FMT_EXTENTS, + XFS_DINODE_FMT_BTREE, + XFS_DINODE_FMT_UUID +} xfs_dinode_fmt_t; + + + + di_onlink + In v1 inodes, this specifies the number of links to the inode from directories. When the number exceeds 65535, the inode is converted to v2 and the link count is stored in di_nlink. + + + di_uid + Specifies the owner's UID of the inode. + + + di_gid + Specifies the owner's GID of the inode. + + + di_nlink + Specifies the number of links to the inode from directories. This is maintained for both inode versions for current versions of XFS. Old versions of XFS did not support v2 inodes, and therefore this value was never updated and was classed as reserved space (part of di_pad). + + + di_projid + Specifies the owner's project ID in v2 inodes. An inode is converted to v2 if the project ID is set.  This value must be zero for v1 inodes. + + + di_pad[8] + Reserved, must be zero. + + + di_flushiter + Incremented on flush. + + + + + + di_atime + Specifies the last access time of the files using UNIX time conventions the following structure. This value maybe undefined if the filesystem is mounted with the "noatime" option. + +typedef struct xfs_timestamp { + __int32_t t_sec; + __int32_t t_nsec; +} xfs_timestamp_t; + + + + di_mtime + Specifies the last time the file was modified. + + + di_ctime + Specifies when the inode's status was last changed. 
+ + + di_size + Specifies the EOF of the inode in bytes. This can be larger or smaller than the extent space (therefore actual disk space) used for the inode. For regular files, this is the filesize in bytes, directories, the space taken by directory entries and for links, the length of the symlink. + + + di_nblocks + Specifies the number of filesystem blocks used to store the inode's data including relevant metadata like B+trees. This does not include blocks used for extended attributes. + + + di_extsize + Specifies the extent size for filesystems with real-time devices and an extent size hint for standard filesystems. For normal filesystems, and with directories, the XFS_DIFLAG_EXTSZINHERIT flag must be set in di_flags if this field is used. Inodes created in these directories will inherit the di_extsize value and have XFS_DIFLAG_EXTSIZE set in their di_flags. When a file is written to beyond allocated space, XFS will attempt to allocate additional disk space based on this value. + + + di_nextents + Specifies the number of data extents associated with this inode. + + + di_anextents + Specifies the number of extended attribute extents associated with this inode. + + + di_forkoff + Specifies the offset into the inode's literal area where the extended attribute fork starts. This is an 8-bit value that is multiplied by 8 to determine the actual offset in bytes (ie. attribute data is 64-bit aligned). This also limits the maximum size of the inode to 2048 bytes. This value is initially zero until an extended attribute is created. When in attribute is added, the nature of di_forkoff depends on the XFS_SB_VERSION2_ATTR2BIT  flag in the superblock. Refer to the Extended Attribute Versions section () for more details. + + + di_aformat + Specifies the format of the attribute fork. This uses the same values as di_format, but restricted to "local", "extents" and "btree" formats for extended attribute data. + + + di_dmevmask + DMAPI event mask. + + + di_dmstate + DMAPI state. + + + di_flags + Specifies flags associated with the inode. This can be a combination of the following values: + + + + Flag + + + Description + + + + + + XFS_DIFLAG_REALTIME + + + The inode's data is located on the real-time device. + + + + + XFS_DIFLAG_PREALLOC + + + The inode's extents have been preallocated. + + + + + XFS_DIFLAG_NEWRTBM + + + Specifies the sb_rbmino uses the new real-time bitmap format + + + + + XFS_DIFLAG_IMMUTABLE + + + Specifies the inode cannot be modified. + + + + + XFS_DIFLAG_APPEND + + + The inode is in append only mode. + + + + + XFS_DIFLAG_SYNC + + + The inode is written synchronously. + + + + + XFS_DIFLAG_NOATIME + + + The inode's di_atime is not updated. + + + + + XFS_DIFLAG_NODUMP + + + Specifies the inode is to be ignored by xfsdump. + + + + + XFS_DIFLAG_RTINHERIT + + + For directory inodes, new inodes inherit the XFS_DIFLAG_REALTIME bit. + + + + + XFS_DIFLAG_PROJINHERIT + + + For directory inodes, new inodes inherit the di_projid value. + + + + + XFS_DIFLAG_NOSYMLINKS + + + For directory inodes, symlinks cannot be created. + + + + + XFS_DIFLAG_EXTSIZE + + + Specifies the extent size for real-time files or a and extent size hint for regular files. + + + + + XFS_DIFLAG_EXTSZINHERIT + + + For directory inodes, new inodes inherit the di_extsize value. + + + + + XFS_DIFLAG_NODEFRAG + + + Specifies the inode is to be ignored when defragmenting the filesystem. + + + + + + di_gen + + A generation number used for inode identification. 
This is used by tools that do inode scanning such as backup tools and xfsdump. An inode's generation number can change by unlinking and creating a new file that reuses the inode. + + + + + + + +
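 The version handling described above can be summarised in a short C sketch: di_magic must be 0x494e ("IN"), v1 inodes keep their link count in di_onlink, and v2 inodes keep it in di_nlink. The structure below is a trimmed, host-endian stand-in for xfs_dinode_core_t, not the real on-disk layout:

#include <stdint.h>

struct dinode_core_lite {
    uint16_t di_magic;     /* must be 0x494e ("IN") */
    int8_t   di_version;   /* 1 or 2 */
    uint16_t di_onlink;    /* link count for v1 inodes */
    uint32_t di_nlink;     /* link count for v2 inodes */
};

/* Returns the link count, or -1 if the inode core does not look valid. */
long inode_link_count(const struct dinode_core_lite *core)
{
    if (core->di_magic != 0x494e)
        return -1;
    if (core->di_version == 1)
        return core->di_onlink;
    if (core->di_version == 2)
        return core->di_nlink;
    return -1;
}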
+ +
Unlinked Pointer + The di_next_unlinked value in the inode is used to track inodes that have been unlinked (deleted) but which are still referenced. When an inode is unlinked and there is still an outstanding reference, the inode is added to one of the AGI's () agi_unlinked hash buckets. The AGI unlinked bucket points to an inode and the di_next_unlinked value points to the next inode in the chain. The last inode in the chain has di_next_unlinked set to NULL (-1). + Once the last reference is released, the inode is removed from the unlinked hash chain, and di_next_unlinked is set to NULL. In the case of a system crash, XFS recovery will complete the unlink process for any inodes found in these lists. + The only time the unlinked fields can be seen to be used on disk is either on an active filesystem or a crashed system. A cleanly unmounted or recovered filesystem will not have any inodes in these unlink hash chains. + + + + + 28 + + + +
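 Conceptually, processing one AGI unlinked bucket is just a linked-list walk: start at the inode number stored in the bucket and follow each inode's di_next_unlinked until the NULL value (-1) is reached. The C sketch below illustrates this; the read_next_unlinked callback is hypothetical and simply stands in for reading di_next_unlinked from disk:

#include <stdint.h>
#include <stdio.h>

#define NULL_AGINO 0xffffffffu   /* -1, end of chain */

typedef uint32_t (*read_next_unlinked_fn)(uint32_t agino);

void walk_unlinked_bucket(uint32_t bucket_head, read_next_unlinked_fn read_next_unlinked)
{
    for (uint32_t agino = bucket_head; agino != NULL_AGINO;
         agino = read_next_unlinked(agino))
        printf("unlinked but still referenced: inode %u\n", agino);
}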
+ +
Data Fork
 The structure of the inode's data fork is based on the inode's type and di_format. It always starts at offset 100 (0x64) into the inode, which is the start of the inode's "literal area". The size of the data fork is determined by the type and format. The maximum size is determined by the inode size and di_forkoff. In code, use the XFS_DFORK_PTR macro, specifying XFS_DATA_FORK for the "which" parameter. Alternatively, the XFS_DFORK_DPTR macro can be used.
 Each of the following sub-sections summarises the contents of the data fork based on the inode type.
 Regular Files (S_IFREG)
 The data fork specifies the file's data extents. The extents specify where the file's actual data is located within the filesystem. Extents can be stored in one of two formats, as defined by the di_format value:

 XFS_DINODE_FMT_EXTENTS: The extent data is fully contained within the inode, which holds an array of extents pointing to the filesystem blocks for the file's data. To access the extents, cast the return value from XFS_DFORK_DPTR to xfs_bmbt_rec_t*.

 XFS_DINODE_FMT_BTREE: The extent data is contained in the leaves of a B+tree. The inode contains the root node of the tree and is accessed by casting the return value from XFS_DFORK_DPTR to xfs_bmdr_block_t*.

 Details for each of these data extent formats are covered in the Data Extents section () later on.
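 The choice between the two formats is driven entirely by di_format, so code reading a regular file's data fork typically dispatches on that value and casts the fork pointer accordingly. The C sketch below shows only that dispatch; get_data_fork() is a hypothetical stand-in for the XFS_DFORK_DPTR macro, and the xfs_* types are left opaque:

#include <stdint.h>

typedef struct xfs_bmbt_rec   xfs_bmbt_rec_t;    /* on-disk extent record */
typedef struct xfs_bmdr_block xfs_bmdr_block_t;  /* B+tree root block */

enum dinode_fmt { FMT_DEV, FMT_LOCAL, FMT_EXTENTS, FMT_BTREE, FMT_UUID };

extern void *get_data_fork(void *inode);         /* hypothetical XFS_DFORK_DPTR stand-in */

void interpret_regular_file_fork(void *inode, int di_format)
{
    void *fork = get_data_fork(inode);

    switch (di_format) {
    case FMT_EXTENTS: {
        xfs_bmbt_rec_t *extents = (xfs_bmbt_rec_t *)fork;   /* array of extents */
        (void)extents;
        break;
    }
    case FMT_BTREE: {
        xfs_bmdr_block_t *root = (xfs_bmdr_block_t *)fork;  /* root of the extent B+tree */
        (void)root;
        break;
    }
    default:
        break;   /* other formats do not apply to regular files */
    }
}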
+ + + +
Directories (S_IFDIR) + The data fork contains the directory's entries and associated data. The format of the entries is also determined by the di_format value and can be one of 3 formats: + + + XFS_DINODE_FMT_LOCAL: The directory entries are fully contained within the inode. This is accessed by casting the value from XFS_DFORK_DPTR to xfs_dir2_sf_t*. + + + XFS_DINODE_FMT_EXTENTS: The actual directory entries are located in another filesystem block, the inode contains an array of extents to these filesystem blocks (xfs_bmbt_rec_t*). + + + XFS_DINODE_FMT_BTREE: The directory entries are contained in the leaves of a B+tree. The inode contains the root node (xfs_bmdr_block_t*). + + + Details for each of these directory formats are covered in the Directories section () later on.
+ + + + +
Other File Types + For character and block devices (S_IFCHR and S_IFBLK), cast the value from XFS_DFORK_DPTR to xfs_dev_t*.
+ + + + +
Attribute Fork
 The attribute fork in the inode always contains the location of the extended attributes associated with the inode.
 The location of the attribute fork in the inode's literal area (offset 100 to the end of the inode) is specified by the di_forkoff value in the inode's core. If this value is zero, the inode does not contain any extended attributes. If it is non-zero, the byte offset into the literal area is di_forkoff * 8; this encoding also limits the maximum size of an inode to 2048 bytes. Attributes must be allocated on a 64-bit boundary on the disk. To access the extended attributes in code, use the XFS_DFORK_PTR macro, specifying XFS_ATTR_FORK for the "which" parameter. Alternatively, the XFS_DFORK_APTR macro can be used.
 Which structure in the attribute fork is used depends on the di_aformat value in the inode. It can be one of the following values:

 XFS_DINODE_FMT_LOCAL: The extended attributes are contained entirely within the inode. This is accessed by casting the value from XFS_DFORK_APTR to xfs_attr_shortform_t*.

 XFS_DINODE_FMT_EXTENTS: The attributes are located in other filesystem blocks; the inode contains an array of pointers to these filesystem blocks. They are accessed by casting the value from XFS_DFORK_APTR to xfs_bmbt_rec_t*.

 XFS_DINODE_FMT_BTREE: The extents for the attributes are contained in the leaves of a B+tree. The inode contains the root node of the tree and is accessed by casting the value from XFS_DFORK_APTR to xfs_bmdr_block_t*.

 Detailed information on the layout of extended attributes is covered in the Extended Attributes section () later on in this document.
Extended Attribute Versions
 Extended attributes come in two versions: "attr1" and "attr2". The attribute version is specified by the XFS_SB_VERSION2_ATTR2BIT flag in the sb_features2 field of the superblock. It determines how the inode's extra space is split between the di_u and di_a forks, which in turn determines how the di_forkoff value is maintained in the inode's core.
 With "attr1" attributes, di_forkoff is set to somewhere in the middle of the space between the core and the end of the inode and never changes (which has the effect of artificially limiting the space available for data). As the data fork grows, when it reaches di_forkoff, the data is moved to the next format level (ie. local > extent > btree). If very little space is used for either attributes or data, then a good portion of the available inode space is wasted with this version.
 "Attr2" was introduced to maximise the utilisation of the inode's literal area. The di_forkoff starts at the end of the inode and works its way towards the data fork as attributes are added. Attr2 is highly recommended if extended attributes are used.
 The following diagram compares the two versions:

 30
diff --git a/en-US/xfs-Preface.xml b/en-US/xfs-Preface.xml new file mode 100644 index 0000000..405e716 --- /dev/null +++ b/en-US/xfs-Preface.xml @@ -0,0 +1,13 @@ + + + + + Preface + + + + + + + diff --git a/en-US/xfs-Symbolic_Links.xml b/en-US/xfs-Symbolic_Links.xml new file mode 100644 index 0000000..f14810b --- /dev/null +++ b/en-US/xfs-Symbolic_Links.xml @@ -0,0 +1,78 @@ + + + diff --git a/en-US/xfs-XFS_Filesystem_Structure.xml b/en-US/xfs-XFS_Filesystem_Structure.xml new file mode 100644 index 0000000..8fa5d44 --- /dev/null +++ b/en-US/xfs-XFS_Filesystem_Structure.xml @@ -0,0 +1,1267 @@ + + + + +<remark><command>xfs </command></remark>The XFS File System + +XFS +file system types + + + +file system types +XFS + + + + +XFS +main features + + + +main features +XFS + + + +XFS is a highly scalable, high-performance file system which was originally +designed at Silicon Graphics, Inc. It was created to support extremely large filesystems (up to 16 exabytes), files (8 exabytes) and directory structures (tens of millions of entries). + + + + + +Main Features + +XFS supports metadata journaling, which facilitates quicker crash recovery. The XFS file system can also be defragmented and enlarged while mounted and active. In addition, Fedora 13 supports backup and restore utilities specific to XFS. + + + + +Allocation Features +XFS features the following allocation schemes: + + + +XFS +allocation features + + + +allocation features +XFS + + +Extent-based allocation +Stripe-aware allocation policies +Delayed allocation +Space pre-allocation + + + + +XFS +fsync() + + + +fsync() +XFS + + + +Delayed allocation and other performance optimizations +affect XFS the same way that they do ext4. Namely, a program's +writes to a an XFS file system are not guaranteed to be +on-disk unless the program issues an fsync() +call afterwards. + + + +For more information on the implications of delayed allocation +on a file system, refer to Allocation Features in +. The workaround for ensuring +writes to disk applies to XFS as well. + + + + + + + + + + +Other XFS Features + +The XFS file system also supports the following: + + +Extended attributes (xattr), which allows the system to associate several additional name/value pairs per file. +Quota journalling, which avoids the need for lengthy quota consistency checks after a crash. +Project/directory quotas, allowing quota restrictions over a directory tree. +Subsecond timestamps + + + + + + +chapter opening to be fixed later + + + +
+Creating an XFS File System + + + +XFS +creating + + + +creating +XFS + + + + + + +XFS +mkfs.xfs + + + +mkfs.xfs +XFS + + + +To create an XFS file system, use the mkfs.xfs /dev/device command. In general, the default +options are optimal for common use. + + + +When using mkfs.xfs on a block device containing an existing +file system, use the -f option to force an overwrite of that file system. + + + +Below is a sample output of the mkfs.xfs command: + + + +meta-data=/dev/device isize=256 agcount=4, agsize=3277258 blks + = sectsz=512 attr=2 +data = bsize=4096 blocks=13109032, imaxpct=25 + = sunit=0 swidth=0 blks +naming =version 2 bsize=4096 ascii-ci=0 +log =internal log bsize=4096 blocks=6400, version=2 + = sectsz=512 sunit=0 blks, lazy-count=1 +realtime =none extsz=4096 blocks=0, rtextents=0 + + +After an XFS file system is created, its size cannot be reduced. However, it can still be enlarged using the xfs_growfs command (refer to ). + + + + +For striped block devices (e.g., RAID5 arrays), the stripe geometry can be +specified at the time of file system creation. Using proper stripe geometry greatly enhances the performance +of an XFS filesystem. + + + +When creating filesystems on lvm or md volumes, mkfs.xfs chooses an optimal geometry. This may also be true on some hardware RAIDs which +export geometry information to the operating system. + + + +To specify stripe geometry, use the following mkfs.xfs sub-options: + + + + +XFS +su (mkfs.xfs sub-options) + + + +su (mkfs.xfs sub-options) +XFS + + + + + + +XFS +sw (mkfs.xfs sub-options) + + + +sw (mkfs.xfs sub-options) +XFS + + + + + + +su=value + + +Specifies a stripe unit or RAID chunk size. The value must be specified in bytes, with an optional k, m, or g suffix. + + + + + +sw=value + + +Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe. + + + + + +The following example specifies a chunk size of 64k on a RAID device containing 4 stripe units: + + + +mkfs.xfs -d su=64k,sw=4 /dev/device + + + + + +For more information about creating XFS file systems, refer to man mkfs.xfs. + +
+ +
+Mounting an XFS File System + + + +XFS +mounting + + + +mounting +XFS + + + + +An XFS file system can be mounted with no extra options, for example: + + + +mount /dev/device /mount/point + + + +XFS also supports several mount options to influence behavior. + + + +By default, XFS allocates inodes to reflect their +on-disk location. However, because some 32-bit userspace applications are +not compatible with inode numbers greater than 232, XFS will allocate all +inodes in disk locations which result in 32-bit inode numbers. This +can lead to decreased performance on very large +filesystems (i.e. larger than 2 terabytes), because inodes are skewed to the beginning of the block device, +while data is skewed towards the end. + + + +To address this, use the inode64 mount option. This option configures XFS to allocate inodes and data across the entire file system, which can improve performance: + + + + +XFS +inode64 mount option + + + +inode64 mount option +XFS + + + + + + +mount -o inode64 /dev/device /mount/point + + +Eric: the above may change; this may be made the default very soon + + + +Write Barriers + + + +XFS +write barriers + + + +write barriers +XFS + + + + + + +XFS +nobarrier mount option + + + +nobarrier mount option +XFS + + + +By default, XFS uses write barriers to ensure file system integrity even +when power is lost to a device with write caches enabled. For devices without +write caches, or with battery-backed write caches, disable barriers using the nobarrier option, as in: + + + +mount -o nobarrier /dev/device /mount/point + + + +For more information about write barriers, refer to . + + + +
+
+XFS Quota Management + + + +XFS +quota management + + + +quota management +XFS + + + +The XFS quota subsystem manages limits on disk space (blocks) and file (inode) +usage. XFS quotas control and/or report on usage of these items on a user, group, or directory/project level. Also, note that while user, group, and directory/project quotas are enabled independently, group and project quotas a mutually exclusive. + + + +When managing on a per-directory or per-project basis, XFS manages the disk usage of directory heirarchies associated with a specific project. In doing so, XFS recognizes cross-organizational "group" boundaries between projects. This provides a level of control that is broader than what is available when managing quotas for users or groups. + + + + +XFS quotas are enabled at mount time, with specific mount options. Each mount +option can also be specified as noenforce; this will allow usage reporting without enforcing any limits. Valid quota mount options are: + + + + +XFS +uquota/uqnoenforce + + + +uquota/uqnoenforce +XFS + + + + + + +XFS +gquota/gqnoenforce + + + +gquota/gqnoenforce +XFS + + + + + + +XFS +pquota/pqnoenforce + + + +pquota/pqnoenforce +XFS + + + +uquota/uqnoenforce - User quotas +gquota/gqnoenforce - Group quotas +pquota/pqnoenforce - Project quota + + + +Once quotas are enabled, the xfs_quota tool can be used to set limits and report on disk usage. By default, xfs_quota is run interactively, and in basic mode. Basic mode sub-commands simply report usage, and are available to all users. Basic xfs_quota sub-commands include: + + + + +XFS +xfs_quota + + + +xfs_quota +XFS + + + + + + +XFS +expert mode (xfs_quota) + + + +expert mode (xfs_quota) +XFS + + + + + + + +quota username/userID + + +Show usage and limits for the given username or numeric userID + + + + + +df + + +Shows free and used counts for blocks and inodes. + + + + + + +In contrast, xfs_quota also has an expert mode. The sub-commands of this mode allow actual configuration of limits, and are available only to users with elevated privileges. To use expert mode sub-commands interactively, run xfs_quota -x. Expert mode sub-commands include: + + + + +XFS +report (xfs_quota expert mode) + + + +report (xfs_quota expert mode) +XFS + + + + + + +XFS +limit (xfs_quota expert mode) + + + +limit (xfs_quota expert mode) +XFS + + + + + +report /path + + +Reports quota information for a specific file system. + + + + + +limit + + +Modify quota limits. + + + + + + +For a complete list of sub-commands for either basic or expert mode, use the sub-command help. + + + +All sub-commands can also be run directly from a command line using the -c option, with -x for expert sub-commands. For example, to display a sample quota report for /home (on /dev/blockdevice), use the command xfs_quota -cx 'report -h' /home. This will display output similar to the following: + + + +User quota on /home (/dev/blockdevice) + Blocks +User ID Used Soft Hard Warn/Grace +---------- --------------------------------- +root 0 0 0 00 [------] +testuser 103.4G 0 0 00 [------] +... + + + +To set a soft and hard inode count limit of 500 and 700 respectively for user john (whose home directory is /home/john), use the following command: + + + +xfs_quota -x -c 'limit isoft=500 ihard=700 /home/john' + + + +By default, the limit sub-command recognizes targets as users. When configuring the limits for a group, use the -g option (as in the previous example). Similarly, use -p for projects. 
+ + + +Soft and hard block limits can also be configured using bsoft/bhard instead of isoft/ihard. For example, to set a soft and hard block limit of 1000m and 1200m, respectively, to group accounting on the /target/path file system, use the following command: + + + +xfs_quota -x -c 'limit -g bsoft=1000m bhard=1200m accounting' /target/path + + + + +While real-time blocks (rtbhard/rtbsoft) are described in man xfs_quota as valid units when setting quotas, the real-time sub-volume is not enabled in this release. As such, the rtbhard and rtbsoft options are not applicable. + + + + +Setting Project Limits + + + +XFS +project limits (setting) + + + +project limits (setting) +XFS + + +Before configuring limits for project-controlled directories, add them first to /etc/projects. Project names can be added to/etc/projectid to map project IDs to project names. Once a project is added to /etc/projects, initialize its project directory using the following command: + + + +xfs_quota -c 'project -s projectname' + + + +Quotas for projects with initialized directories can then be configured, as in: + + + +xfs_quota -x -c 'limit -p bsoft=1000m bhard=1200m projectname' + + + +Generic quota configuration tools (e.g. quota, repquota, edquota) may also be used to manipulate +XFS quotas. However, these tools cannot be used with XFS project quotas. + + + +For more information about setting XFS quotas, refer to man xfs_quota. + + + + +
+ + +
+Increasing the Size of an XFS File System + + + +XFS +increasing file system size + + + +increasing file system size +XFS + + + + + + +XFS +xfs_growfs + + + +xfs_growfs +XFS + + + + +An XFS file system may be grown while mounted using the xfs_growfs command, as in: + + + +xfs_growfs /mount/point -D size + + + + + +The -D size option grows the file system to the specified +size (expressed in file system blocks). Without the +-D size option, xfs_growfs will +grow the file system to the maximum size supported by the device. + + + + +Before growing an XFS file system with -D size, ensure that the underlying block device is of an appropriate size to hold the file system later. Use the appropriate resizing methods for the affected block device. + + + + +While XFS file systems can be grown while mounted, their size cannot be reduced at all. + + + + +For more information about growing a file system, refer to man xfs_growfs. + +
+ +
+<remark>[UNFINISHED] </remark>Repairing an XFS File System + + + +XFS +repairing file system + + + +repairing file system +XFS + + + + + + +XFS +xfs_repair + + + +xfs_repair +XFS + + + +To repair an XFS file system, use xfs_repair, as in: + + + +xfs_repair /dev/device + + + +The xfs_repair utility is highly scalable, and is designed to repair even very large file systems +with many inodes efficiently. Note that unlike other Linux file systems, xfs_repair does +not run at boot time, even when an XFS file system was not cleanly unmounted. In the event of an unclean unmount, xfs_repair +simply replays the log at mount time, ensuring a consistent file system. + +Don: eric will provide hard numbers for previous para, i.e. "many inodes" and "reasonable amount of time" + + + +XFS +repairing XFS file systems with dirty logs + + + +repairing XFS file systems with dirty logs +XFS + + + + + + + +dirty logs (repairing XFS file systems) +XFS + + + +The xfs_repair utility cannot repair an XFS file system with a dirty log. To +clear the log, mount and unmount the XFS file system. If the log is corrupt +and cannot be replayed, use the -L option ("force log zeroing") to clear the log, i.e. xfs_repair -L /dev/device. Note, however, that this may result in further corruption or data loss. + + + +For more information about repairing an XFS file system, refer to man xfs_repair. + + +
+ +
+Suspending an XFS File System + + + +XFS +suspending + + + +suspending +XFS + + + + + +XFS +xfs_freeze + + + +xfs_freeze +XFS + + + + + + +XFS +xfsprogs + + + +xfsprogs +XFS + + + +To suspend or resume write activity to a file system, use xfs_freeze. Suspending write activity allows hardware-based device snapshots +to be used to capture the file system in a consistent state. + + + + + +The xfs_freeze utility is provided by the xfsprogs package, which is only +available on x86_64. + + + + + +To suspend (i.e. freeze) an XFS file system, use: + + + +xfs_freeze -f /mount/point + + + +To unfreeze an XFS file system, use: + + + +xfs_freeze -u /mount/point + + + +When taking an LVM snapshot, it is not necessary to use xfs_freeze +to suspend the file system first. Rather, the LVM management tools will automatically +suspend the XFS file system before taking the snapshot. + + + + +You can also use the xfs_freeze utility to freeze/unfreeze an ext3, ext4, GFS2, XFS, and BTRFS, file system. +The syntax for doing so is also the same. + + + + + + +For more information about freezing and unfreezing an XFS file system, refer to man xfs_freeze. + +
+ +
+<remark>[UNFINISHED] </remark>Backup and Restoration of XFS File Systems + + + +XFS +backup/restoration + + + +backup/restoration +XFS + + + + + + + +restoring a backup +XFS + + + + +XFS +xfsdump + + + +xfsdump +XFS + + + + + + +XFS +xfsrestore + + + +xfsrestore +XFS + + + + +XFS file system backup and restoration involves two utilities: xfsdump and xfsrestore. + + + +To backup or dump an XFS file system, use the xfsdump utility. Fedora 13 supports backups to tape drives or regular file images, and also allows multiple dumps to be written to the same tape. The xfsdump utility also allows a dump to span multiple tapes, although only one dump can be written to a regular file. In addition, xfsdump supports incremental backups, and can exclude files from a backup using size, subtree, or inode flags to filter them. + + + + +XFS +dump levels + + + +dump levels +XFS + + + +In order to support incremental backups, xfsdump uses dump levels to determine a base dump to which a specific dump is relative. The -l option specifies a dump level (0-9). To perform a full backup, perform a level 0 dump on the file system (i.e. /path/to/filesystem), as in: + + + +xfsdump -l 0 -f /dev/device /path/to/filesystem + + +original command was "xfsdump -l 0 -f /dev/st0 /mnt" + + + +The -f option specifies a destination for a backup. For example, the /dev/st0 destination is normally used for tape drives. An xfsdump destination can be a tape drive, regular file, or remote tape device. + + + + +In contrast, an incremental backup will only dump files that changed since the last level 0 dump. A level 1 dump is the first incremental dump after a full dump; the next incremental dump would be level 2, and so on, to a maximum of level 9. So, to perform a level 1 dump to a tape drive: + + + +xfsdump -l 1 -f /dev/st0 /path/to/filesystem + + + +Conversely, the xfsrestore utility restores file systems from dumps produced by xfsdump. The xfsrestore utility has two modes: a default simple mode, and a cumulative mode. Specific dumps are identified by session ID or session label. As such, restoring a dump requires its corresponding session ID or label. To display the session ID and labels of all dumps (both full and incremental), use the -I option, as in: + + + +xfsrestore -I + + + +This will provide output similar to the following: + +file system 0: + fs id: 45e9af35-efd2-4244-87bc-4762e476cbab + session 0: + mount point: bear-05:/mnt/test + device: bear-05:/dev/sdb2 + time: Fri Feb 26 16:55:21 2010 + session label: "my_dump_session_label" + session id: b74a3586-e52e-4a4a-8775-c3334fa8ea2c + level: 0 + resumed: NO + subtree: NO + streams: 1 + stream 0: + pathname: /mnt/test2/backup + start: ino 0 offset 0 + end: ino 1 offset 0 + interrupted: NO + media files: 1 + media file 0: + mfile index: 0 + mfile type: data + mfile size: 21016 + mfile start: ino 0 offset 0 + mfile end: ino 1 offset 0 + media label: "my_dump_media_label" + media id: 4a518062-2a8f-4f17-81fd-bb1eb2e3cb4f +xfsrestore: Restore Status: SUCCESS + + + + + +xfsrestore Simple Mode + + + +XFS +simple mode (xfsrestore) + + + +simple mode (xfsrestore) +XFS + + + +The simple mode allows users to restore an entire file system from a level 0 dump. After identifying a level 0 dump's session ID (i.e. session-ID), restore it fully to /path/to/destination using: + + + + +xfsrestore -f /dev/st0 -S session-ID /path/to/destination + + + + +The -f option specifies the location of the dump, while the -S or -L option specifies which specific dump to restore. 
The -S option is used to specify a session ID, while the -L option is used for session labels. The -I option displays both session labels and IDs for each dump. + + + + +xfsrestore Cumulative Mode + + + +XFS +cumulative mode (xfsrestore) + + + +cumulative mode (xfsrestore) +XFS + + +The cumulative mode of xfsrestore allows file system restoration from a specific incremental backup, i.e. level 1 to level 9. To restore a file system from an incremental backup, simply add the -r option, as in: + + + +xfsrestore -f /dev/st0 -S session-ID -r /path/to/destination + + + +Interactive Operation + + + +XFS +interactive operation (xfsrestore) + + + +interactive operation (xfsrestore) +XFS + + + + +The xfsrestore utility also allows specific files from a dump to be extracted, added, or deleted. To use xfsrestore interactively, use the -i option, as in: + + + +xfsrestore -f /dev/st0 -i + + + +The interactive dialogue will begin after xfsrestore finishes reading the specified device. Available commands in this dialogue include cd, ls, add, delete, and extract; for a complete list of commands, use help. + + + +For more information about dumping and restoring XFS file systems, refer to man xfsdump and man xfsrestore. + + + + + + +
+ +
+Other XFS File System Utilities + + + + +XFS +xfs_fsr + + + +xfs_fsr +XFS + + + + + + +XFS +xfs_bmap + + + +xfs_bmap +XFS + + + + + + +XFS +xfs_info + + + +xfs_info +XFS + + + + + + +XFS +xfs_admin + + + +xfs_admin +XFS + + + + + + +XFS +xfs_copy + + + +xfs_copy +XFS + + + + + + +XFS +xfs_metadump + + + +xfs_metadump +XFS + + + + + + +XFS +xfs_mdrestore + + + +xfs_mdrestore +XFS + + + + + + +XFS +xfs_db + + + +xfs_db +XFS + + + + +Fedora 13 also features other utilities for managing XFS file systems: + + + + + +xfs_fsr + + +Used to defragment mounted XFS file systems. When invoked with no arguments, xfs_fsr defragments all regular files in all mounted XFS file systems. This utility also allows users to suspend a defragmentation at a specified time and resume from where it left off later. + + + +In addition, xfs_fsr also allows the defragmentation of only one file, as in xfs_fsr /path/to/file. +Periodic defragmentation of an entire file system is not advised, as this is normally not warranted. + + + + + +xfs_bmap + + +Prints the map of disk blocks used by files in an XFS filesystem. This map list each extent used by a specified file, as well as regions in the file with no corresponding blocks (i.e. holes). + + + + + +xfs_info + + +Prints XFS file system information. + + + + + +xfs_admin + + +Changes the parameters of an XFS file system. The xfs_admin utility can only modify parameters of unmounted devices/file systems. + + + + + +xfs_copy + + +Copies the contents of an entire XFS file system to one or more targets in parallel. + + + + + + + + +The following utilities are also useful in debugging and analyzing XFS file systems: + + + + +xfs_metadump + + +Copies XFS file system metadata to a file. The xfs_metadump utility should only be used to copy unmounted, read-only, or frozen/suspended file systems; otherwise, generated dumps could be corrupted or inconsistent. + + + + + +xfs_mdrestore + + +Restores and XFS metadump image (generated using xfs_metadump) to a file system image. + + + + + +xfs_db + + +Debugs an XFS file system. + + + + + + + +For more information about these utilities, refer to their respective man pages. + + +
+ + + + + +
+ diff --git a/publican.cfg b/publican.cfg new file mode 100644 index 0000000..c06bc03 --- /dev/null +++ b/publican.cfg @@ -0,0 +1,8 @@ +# Config::Simple 4.59 +# Thu Feb 4 09:44:55 2010 + +debug: 1 +#show_remarks: 1 +xml_lang: en-US +brand: fedora +