Create ACFS File System on RAC
Table of Contents
___________________________________________________________________________________________________
1. Overview
2. Environment
3. Verify ACFS modules
4. Create ASM Disk group
5. Create Volume
6. Create File System
7. Register File System on OCR
8. Verify Mount Point on All Nodes
9. Find ACFS mountpoints
10. Unmount ACFS filesystem
11. Start/Stop ACFS filesystem
_________________________________________________________________________________________________
1. Overview

Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is a multi-platform, scalable file system and storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support customer files maintained outside of Oracle Database. Oracle ACFS supports many database and application files, including executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data.

Oracle ACFS does not support files for the Oracle Grid Infrastructure home, nor does it support Oracle Cluster Registry (OCR) and voting files. Oracle ACFS functionality requires that the disk group compatibility attributes for ASM and ADVM be set to 11.2 or greater. For platform certification details, see ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1).
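
If an existing disk group was created with lower compatibility settings, they can be checked and raised before using it for ACFS. The following is a minimal SQL sketch, assuming a disk group named DATA (the name is only an example, and raising compatibility is a one-way change, so confirm it suits your environment first):

-- Check the current compatibility attributes of the example disk group DATA
SQL> SELECT name, value
     FROM   v$asm_attribute
     WHERE  group_number = (SELECT group_number FROM v$asm_diskgroup WHERE name = 'DATA')
     AND    name LIKE 'compatible%';

-- Raise the attributes if required (11.2 is the minimum for ACFS/ADVM)
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm'  = '11.2';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.advm' = '11.2';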
2. Environment

Nodes         : RAC1, RAC2
GI Version    : 12.2
RDBMS Version : 12.2
3. Verify ACFS modules

[root@rac1 ~]# lsmod | grep ora
oracleacfs           4616192  0
oracleadvm            782336  0
oracleoks             655360  2 oracleacfs,oracleadvm
oracleasm              65536  1
[root@rac1 ~]#

[root@rac2 ~]# lsmod | grep ora
oracleacfs           4616192  0
oracleadvm            782336  0
oracleoks             655360  2 oracleacfs,oracleadvm
oracleasm              61440  1
[root@rac2 ~]#
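
If the oracleacfs, oracleadvm, and oracleoks modules are not listed, the ACFS drivers can be checked and loaded from the Grid Infrastructure home. A minimal sketch, run as root (the GI home path below matches this environment but may differ on yours):

[root@rac1 ~]# export GRID_HOME=/u01/app/grid/product/12.2      # assumed GI home for this cluster
[root@rac1 ~]# $GRID_HOME/bin/acfsdriverstate installed          # are the ACFS drivers installed?
[root@rac1 ~]# $GRID_HOME/bin/acfsdriverstate loaded             # are they loaded in the kernel?
[root@rac1 ~]# $GRID_HOME/bin/acfsload start                     # load them if they are not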
4. Create ASM Disk group

Check the available candidate disks:

SQL> set lines 250
SQL> set pages 9999
SQL> column path format a20
SQL> select path, group_number group_#, disk_number disk_#, mount_status, header_status, state, total_mb, free_mb
     from v$asm_disk order by group_number;

PATH                                        GROUP_#     DISK_# MOUNT_S HEADER_STATU STATE      TOTAL_MB    FREE_MB
---------------------------------------- ---------- ---------- ------- ------------ -------- ---------- ----------
/dev/oracleasm/disks/DISK9                        0          0 CLOSED  PROVISIONED  NORMAL            0          0
/dev/oracleasm/disks/DISK8                        0          1 CLOSED  PROVISIONED  NORMAL            0          0
/dev/oracleasm/disks/DISK7                        1          1 CACHED  MEMBER       NORMAL         1020        948
/dev/oracleasm/disks/DISK6                        1          0 CACHED  MEMBER       NORMAL         1020        936
/dev/oracleasm/disks/DISK5                        2          4 CACHED  MEMBER       NORMAL         1020        512
/dev/oracleasm/disks/DISK3                        2          2 CACHED  MEMBER       NORMAL         1020        524
/dev/oracleasm/disks/DISK4                        2          3 CACHED  MEMBER       NORMAL         1020        508
/dev/oracleasm/disks/DISK2                        2          1 CACHED  MEMBER       NORMAL         1020        516
/dev/oracleasm/disks/DISK1                        2          0 CACHED  MEMBER       NORMAL         1020        528
/dev/oracleasm/disks/GIMR3                        3          3 CACHED  MEMBER       NORMAL        10236      10144
/dev/oracleasm/disks/GIMR4                        3          2 CACHED  MEMBER       NORMAL        10236      10160
/dev/oracleasm/disks/GIMR1                        3          1 CACHED  MEMBER       NORMAL        10236      10140
/dev/oracleasm/disks/GIMR2                        3          0 CACHED  MEMBER       NORMAL        10236      10116

13 rows selected.

SQL>

Find ASM physical disk mapping:

[root@rac1 ~]# oracleasm querydisk -d DISK8
Disk "DISK8" is a valid ASM disk on device [8,129]
[root@rac1 ~]# oracleasm querydisk -d DISK9
Disk "DISK9" is a valid ASM disk on device [8,145]

[root@rac1 ~]# ls -l /dev | grep 8, | grep 129
brw-rw----. 1 root disk 8, 129 Sep 21 14:31 sdi1   <---
[root@rac1 ~]#
[root@rac1 ~]# ls -l /dev | grep 8, | grep 145
brw-rw----. 1 root disk 8, 145 Sep 21 14:31 sdj1   <---
[root@rac1 ~]#

[OR]

[root@rac1 ~]# oracleasm querydisk -p DISK8 | head -2 | grep /dev | awk -F: '{print $1}'
/dev/sdi1
[root@rac1 ~]#
[root@rac1 ~]# oracleasm querydisk -p DISK9 | head -2 | grep /dev | awk -F: '{print $1}'
/dev/sdj1
[root@rac1 ~]#

[OR]

#!/bin/bash
echo "ASM Disk Mappings"
echo "----------------------------------------------------"
for f in `oracleasm listdisks`
do
  dp=`oracleasm querydisk -p $f | head -2 | grep /dev | awk -F: '{print $1}'`
  echo "$f: $dp"
done

[OR]

[root@rac1 ~]# oracleasm querydisk -p DISK8
Disk "DISK8" is a valid ASM disk
/dev/sdi1: LABEL="DISK8" TYPE="oracleasm"
[root@rac1 ~]#
[root@rac1 ~]# oracleasm querydisk -p DISK9
Disk "DISK9" is a valid ASM disk
/dev/sdj1: LABEL="DISK9" TYPE="oracleasm"
[root@rac1 ~]#

[root@rac1 ~]# fdisk -l /dev/sdi1

Disk /dev/sdi1: 1072 MB, 1072693248 bytes, 2095104 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@rac1 ~]#
[root@rac1 ~]# fdisk -l /dev/sdj1

Disk /dev/sdj1: 1072 MB, 1072693248 bytes, 2095104 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@rac1 ~]#

Check the current disk group compatibility attributes:

SQL> SELECT NAME, VALUE, GROUP_NUMBER FROM v$asm_attribute WHERE name LIKE '%com%';

NAME                           VALUE      GROUP_NUMBER
------------------------------ ---------- ------------
compatible.asm                 12.2.0.1.0            1
compatible.rdbms               10.1.0.0.0            1
compatible.advm                12.2.0.1.0            1
compatible.asm                 12.2.0.1.0            2
compatible.rdbms               10.1.0.0.0            2
compatible.advm                12.2.0.1.0            2
compatible.asm                 12.2.0.1.0            3
compatible.rdbms               10.1.0.0.0            3
compatible.advm                12.2.0.1.0            3

9 rows selected.
Create the disk group with the compatibility attributes required for ACFS/ADVM, then verify:

SQL> CREATE DISKGROUP ACFSDG EXTERNAL REDUNDANCY
     DISK '/dev/oracleasm/disks/DISK8', '/dev/oracleasm/disks/DISK9'
     ATTRIBUTE 'compatible.asm' = '12.2.0.1.0', 'compatible.rdbms' = '10.1.0.0.0', 'compatible.advm' = '12.2.0.1.0';

Diskgroup created.

SQL> SELECT NAME, VALUE, GROUP_NUMBER FROM v$asm_attribute WHERE name LIKE '%com%';

NAME                           VALUE      GROUP_NUMBER
------------------------------ ---------- ------------
compatible.asm                 12.2.0.1.0            1
compatible.rdbms               10.1.0.0.0            1
compatible.advm                12.2.0.1.0            1
compatible.asm                 12.2.0.1.0            2
compatible.rdbms               10.1.0.0.0            2
compatible.advm                12.2.0.1.0            2
compatible.asm                 12.2.0.1.0            3
compatible.rdbms               10.1.0.0.0            3
compatible.advm                12.2.0.1.0            3
compatible.asm                 12.2.0.1.0            4
compatible.rdbms               10.1.0.0.0            4
compatible.advm                12.2.0.1.0            4

12 rows selected.

SQL> COL % FORMAT 99.0
SQL> SELECT name, free_mb, total_mb, ((total_mb-free_mb)/total_mb)*100 as "USED %", free_mb/total_mb*100 "FREE%"
     from v$asm_diskgroup order by 1;

NAME                              FREE_MB   TOTAL_MB     USED %      FREE%
------------------------------ ---------- ---------- ---------- ----------
ACFSDG                               1989       2046 2.78592375 97.2140762
ARCH                                 1884       2040 7.64705882 92.3529412
DATA                                 2556       5100 49.8823529 50.1176471
GIMR                                40560      40944 .937866354 99.0621336

SQL> select path, group_number group_#, disk_number disk_#, mount_status, header_status, state, total_mb, free_mb
     from v$asm_disk order by group_number;

PATH                              GROUP_#     DISK_# MOUNT_S HEADER_STATU STATE      TOTAL_MB    FREE_MB
------------------------------ ---------- ---------- ------- ------------ -------- ---------- ----------
/dev/oracleasm/disks/DISK7              1          1 CACHED  MEMBER       NORMAL         1020        760
/dev/oracleasm/disks/DISK6              1          0 CACHED  MEMBER       NORMAL         1020        748
/dev/oracleasm/disks/DISK2              2          1 CACHED  MEMBER       NORMAL         1020        500
/dev/oracleasm/disks/DISK3              2          2 CACHED  MEMBER       NORMAL         1020        504
/dev/oracleasm/disks/DISK4              2          3 CACHED  MEMBER       NORMAL         1020        496
/dev/oracleasm/disks/DISK5              2          4 CACHED  MEMBER       NORMAL         1020        496
/dev/oracleasm/disks/DISK1              2          0 CACHED  MEMBER       NORMAL         1020        512
/dev/oracleasm/disks/GIMR4              3          2 CACHED  MEMBER       NORMAL        10236      10160
/dev/oracleasm/disks/GIMR3              3          3 CACHED  MEMBER       NORMAL        10236      10144
/dev/oracleasm/disks/GIMR1              3          1 CACHED  MEMBER       NORMAL        10236      10136
/dev/oracleasm/disks/GIMR2              3          0 CACHED  MEMBER       NORMAL        10236      10116
/dev/oracleasm/disks/DISK8              4          0 CACHED  MEMBER       NORMAL         1023        459
/dev/oracleasm/disks/DISK9              4          1 CACHED  MEMBER       NORMAL         1023        459

13 rows selected.

SQL>

The new disk group is mounted only on the node where it was created, so mount it on the remaining node:

[root@rac1 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@rac1 ~]# crsctl stat res ora.ACFSDG.dg -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSDG.dg
               ONLINE  ONLINE       rac1                     STABLE
               OFFLINE OFFLINE      rac2                     STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#

--- Logon to Node2

[oracle@rac2 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM2
The Oracle base has been set to /u01/app/oracle
[oracle@rac2 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Mon Sep 21 22:37:44 2020
Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> ALTER DISKGROUP ACFSDG MOUNT;

Diskgroup altered.

SQL>
[root@rac1 ~]# crsctl stat res ora.ACFSDG.dg -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSDG.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#
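
Alternatively, instead of running ALTER DISKGROUP MOUNT on each remaining node, srvctl can mount the disk group cluster-wide or on a specific node. A sketch, using the disk group and node names from this environment:

[root@rac1 ~]# srvctl start diskgroup -diskgroup ACFSDG -node rac2    # mount only on rac2
[root@rac1 ~]# srvctl status diskgroup -diskgroup ACFSDG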
5. Create Volume

[oracle@rac1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1 ~]$ asmcmd volcreate -G ACFSDG -s 1G acfs_test
[oracle@rac1 ~]$
[oracle@rac1 ~]$ asmcmd volinfo -G ACFSDG acfs_test
Diskgroup Name: ACFSDG

         Volume Name: ACFS_TEST
         Volume Device: /dev/asm/acfs_test-463
         State: ENABLED
         Size (MB): 1024
         Resize Unit (MB): 64
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:

[oracle@rac1 ~]$

[root@rac1 ~]# crsctl stat res -t | grep -i "advm"
ora.ACFSDG.ACFS_TEST.advm
ora.proxy_advm

[root@rac1 ~]# crsctl stat res ora.ACFSDG.ACFS_TEST.advm -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSDG.ACFS_TEST.advm
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#

[root@rac1 ~]# fdisk -l /dev/asm/acfs_test-463

Disk /dev/asm/acfs_test-463: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@rac1 ~]#

[root@rac2 ~]# fdisk -l /dev/asm/acfs_test-463

Disk /dev/asm/acfs_test-463: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@rac2 ~]#
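
If the volume turns out to be too small, it can be resized. A sketch with illustrative sizes only: asmcmd volresize grows the ADVM volume directly, and once an ACFS file system has been created on it, acfsutil size resizes the file system and the underlying volume together.

ASMCMD> volresize -G ACFSDG -s 2G acfs_test        # grow the ADVM volume (no file system on it yet)
[root@rac1 ~]# acfsutil size +512M /acfs_test      # grow a mounted ACFS file system in place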
6. Create File System

[oracle@rac1 ~]$ mkfs -t acfs /dev/asm/acfs_test-463
mkfs.acfs: version = 12.2.0.1.0
mkfs.acfs: on-disk version = 46.0
mkfs.acfs: volume = /dev/asm/acfs_test-463
mkfs.acfs: volume size = 1073741824 ( 1.00 GB )
mkfs.acfs: Format complete.
[oracle@rac1 ~]$
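
At this point the file system exists but is not mounted anywhere. For a quick manual test it can be mounted by hand on a node before registering it (a sketch, run as root; the registration in the next section is what makes the mount persistent and cluster-managed):

[root@rac1 ~]# mkdir -p /acfs_test
[root@rac1 ~]# mount -t acfs /dev/asm/acfs_test-463 /acfs_test
[root@rac1 ~]# chown oracle:oinstall /acfs_test
[root@rac1 ~]# umount /acfs_test      # unmount again before registering in the next step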
7. Register File System on OCR

[root@rac1 ~]# mkdir -p /acfs_test
[root@rac1 ~]# chown oracle:oinstall /acfs_test
[root@rac1 ~]# /sbin/acfsutil registry -a /dev/asm/acfs_test-463 /acfs_test -u oracle
acfsutil registry: mount point /acfs_test successfully added to Oracle Registry
[root@rac1 ~]#

[OR]

-- The above command is equivalent to the following srvctl command:

[root@rac1 ~]# . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base remains unchanged with value /u01/app/oracle
[root@rac1 ~]# which srvctl
/u01/app/grid/product/12.2/bin/srvctl
[root@rac1 ~]# srvctl add filesystem -d /dev/asm/acfs_test-463 -m /acfs_test -u oracle -fstype ACFS -autostart ALWAYS

[root@rac1 ~]# crsctl stat res -t | grep -i "acfsdg"
ora.ACFSDG.ACFS_TEST.advm
ora.ACFSDG.dg
ora.acfsdg.acfs_test.acfs
[root@rac1 ~]#

[root@rac1 ~]# crsctl stat res ora.acfsdg.acfs_test.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.acfsdg.acfs_test.acfs
               ONLINE  ONLINE       rac1                     mounted on /acfs_tes
                                                             t,STABLE
               ONLINE  ONLINE       rac2                     mounted on /acfs_tes
                                                             t,STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#
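
If the registration ever needs to be undone (for example before dropping the volume), it can be removed with either tool. A sketch; unmount the file system on all nodes first:

[root@rac1 ~]# /sbin/acfsutil registry -d /acfs_test                      # remove the ACFS registry entry
[root@rac1 ~]# srvctl remove filesystem -device /dev/asm/acfs_test-463    # or remove the CRS file system resource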
8. Verify Mount Point on All Nodes
[oracle@rac1 ~]$ df -h /acfs_test
Filesystem              Size  Used Avail Use% Mounted on
/dev/asm/acfs_test-463  1.0G  487M  538M  48% /acfs_test
[oracle@rac1 ~]$
[oracle@rac1 ~]$ touch /acfs_test/raj
[oracle@rac1 ~]$ ls -ltr /acfs_test/raj
-rw-r--r--. 1 oracle oinstall 0 Sep 21 21:20 /acfs_test/raj
[oracle@rac1 ~]$

[oracle@rac2 ~]$ df -h /acfs_test
Filesystem              Size  Used Avail Use% Mounted on
/dev/asm/acfs_test-463  1.0G  487M  538M  48% /acfs_test
[oracle@rac2 ~]$
[oracle@rac2 ~]$ ls -ltr /acfs_test/raj
-rw-r--r--. 1 oracle oinstall 0 Sep 21 21:20 /acfs_test/raj
[oracle@rac2 ~]$

[root@rac1 ~]# srvctl status filesystem -d /dev/asm/acfs_test-463
ACFS file system /acfs_test is mounted on nodes rac1,rac2
[root@rac1 ~]#

[root@rac1 ~]# srvctl config filesystem
Volume device: /dev/asm/acfs_test-463
Diskgroup name: acfsdg
Volume name: acfs_test
Canonical volume device: /dev/asm/acfs_test-463
Accelerator volume devices:
Mountpoint path: /acfs_test
Mount point owner: oracle
Mount users:
Type: ACFS
Mount options:
Description:
ACFS file system is enabled
ACFS file system is individually enabled on nodes:
ACFS file system is individually disabled on nodes:
[root@rac1 ~]#
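
acfsutil can also report the file system details (size, free space, and the number of nodes with it mounted) directly from any node, which is another quick verification. A sketch:

[oracle@rac1 ~]$ /sbin/acfsutil info fs /acfs_test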
9. Find ACFS mountpoints

[oracle@rac1 ~]$ /sbin/acfsutil registry -l
Device : /dev/asm/acfs_test-463 : Mount Point : /acfs_test : Options : none : Nodes : all : Disk Group: ACFSDG : Primary Volume : ACFS_TEST : Accelerator Volumes :
[oracle@rac1 ~]$

[oracle@rac1 ~]$ asmcmd volinfo -G ACFSDG ACFS_TEST
Diskgroup Name: ACFSDG

         Volume Name: ACFS_TEST
         Volume Device: /dev/asm/acfs_test-463
         State: ENABLED
         Size (MB): 1024
         Resize Unit (MB): 64
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /acfs_test

[oracle@rac1 ~]$

[oracle@rac1 ~]$ mount -t acfs
/dev/asm/acfs_test-463 on /acfs_test type acfs (rw,relatime,device,rootsuid,ordered)
[oracle@rac1 ~]$
10. Unmount ACFS filesystem

[oracle@rac1 ~]$ /sbin/acfsutil registry -l
Device : /dev/asm/acfs_test-463 : Mount Point : /acfs_test : Options : none : Nodes : all : Disk Group: ACFSDG : Primary Volume : ACFS_TEST : Accelerator Volumes :
[oracle@rac1 ~]$

[root@rac1 ~]# umount /dev/asm/acfs_test-463
[root@rac1 ~]#

[root@rac2 ~]# umount /dev/asm/acfs_test-463
[root@rac2 ~]#

[root@rac1 ~]# crsctl stat res ora.acfsdg.acfs_test.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.acfsdg.acfs_test.acfs
               OFFLINE OFFLINE      rac1                     admin unmounted /acf
                                                             s_test,STABLE
               OFFLINE OFFLINE      rac2                     admin unmounted /acf
                                                             s_test,STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#

More information: if the unmount fails because the mount point is busy, identify and stop the processes that are using it.

[root@rac1 ~]# umount /dev/asm/acfs_test-463
umount: /acfs_test: target is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
umount.acfs: CLSU-00100: operating system function: OfsWaitProc failed with error data: 32
umount.acfs: CLSU-00101: operating system error message: Broken pipe
umount.acfs: CLSU-00103: error location: OWPR_1
umount.acfs: ACFS-04151: unmount of mount point /acfs_test failed
[root@rac1 ~]#

[root@rac1 ~]# lsof | grep /acfs_test
bash       6022    root  cwd    DIR    248,237057    32768     2 /acfs_test
vi        30169    root  cwd    DIR    248,237057    32768     2 /acfs_test
vi        30169    root    3u   REG    248,237057    12288    77 /acfs_test/.test.swp
[root@rac1 ~]#

After stopping or killing these processes, the unmount should go through:

[root@rac1 ~]# kill -9 30169
[root@rac1 ~]# kill -9 6022
[root@rac1 ~]# lsof | grep /acfs_test
[root@rac1 ~]#
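
fuser is another way to identify, and if appropriate terminate, the processes keeping the mount point busy before retrying the unmount. A sketch (fuser -k kills every process using the file system, so use it with care):

[root@rac1 ~]# fuser -vm /acfs_test      # list processes holding the mount point
[root@rac1 ~]# fuser -km /acfs_test      # forcibly terminate them (destructive)
[root@rac1 ~]# umount /acfs_test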
11. Start/Stop ACFS filesystem
[root@rac1 ~]# /sbin/acfsutil registry -l
Device : /dev/asm/acfs_test-463 : Mount Point : /acfs_test : Options : none : Nodes : all : Disk Group: ACFSDG : Primary Volume : ACFS_TEST : Accelerator Volumes :
[root@rac1 ~]#

[root@rac1 ~]# srvctl start filesystem -d /dev/asm/acfs_test-463
[root@rac1 ~]#
[root@rac1 ~]# srvctl status filesystem -d /dev/asm/acfs_test-463
ACFS file system /acfs_test is mounted on nodes rac1,rac2
[root@rac1 ~]#

[root@rac1 ~]# srvctl stop filesystem -d /dev/asm/acfs_test-463
[root@rac1 ~]#
[root@rac1 ~]# srvctl status filesystem -d /dev/asm/acfs_test-463
ACFS file system /acfs_test is not mounted
[root@rac1 ~]#

[root@rac1 ~]# srvctl start filesystem -d /dev/asm/acfs_test-463
[root@rac1 ~]#
[root@rac1 ~]# srvctl status filesystem -d /dev/asm/acfs_test-463
ACFS file system /acfs_test is mounted on nodes rac1,rac2
[root@rac1 ~]#

[root@rac1 ~]# crsctl stat res ora.acfsdg.acfs_test.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.acfsdg.acfs_test.acfs
               ONLINE  ONLINE       rac1                     mounted on /acfs_tes
                                                             t,STABLE
               ONLINE  ONLINE       rac2                     mounted on /acfs_tes
                                                             t,STABLE
--------------------------------------------------------------------------------
[root@rac1 ~]#

[root@rac1 ~]# df -h /acfs_test
Filesystem              Size  Used Avail Use% Mounted on
/dev/asm/acfs_test-463  1.0G  487M  538M  48% /acfs_test
[root@rac1 ~]#

[root@rac2 ~]# df -h /acfs_test
Filesystem              Size  Used Avail Use% Mounted on
/dev/asm/acfs_test-463  1.0G  487M  538M  48% /acfs_test
[root@rac2 ~]#
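
srvctl can also start or stop the file system on a single node, and force the stop when sessions are still using it. A sketch using the node and device names from this environment:

[root@rac1 ~]# srvctl stop filesystem -device /dev/asm/acfs_test-463 -node rac2 -force
[root@rac1 ~]# srvctl start filesystem -device /dev/asm/acfs_test-463 -node rac2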
Caution: Your use of any information or materials on this website is entirely at your own risk. It is provided for educational purposes only. It has been tested internally, however, we do not guarantee that it will work for you. Ensure that you run it in your test environment before using.
Thank you,
Rajasekhar Amudala
Email: br8dba@gmail.com
Linkedin: https://www.linkedin.com/in/rajasekhar-amudala/
Reference:
ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1)