How To Reclaim ASM Disks
The project I have been working on involves migrating GoldenGate trail files from DBFS to ACFS. Now that the migration is complete and the DBFS database has been dropped, the next step is to reclaim the ASM disks. DBFS was running on a two-node RAC cluster.
1. Initial Environment Assessment
Before proceeding with disk reclamation, it is essential to verify the current state of the ASM instance and ensure the target diskgroups are identified correctly.
Checking ASM Version and Current Diskgroup State
From node1 - Check asmcmd version:
$ asmcmd -V
asmcmd version 12.1.0.2.0
List ASM diskgroup:
$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/
MOUNTED  EXTERN  N         512   4096  4194304   9289488   793500                0          793500              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  4194304    153584    83580                0           83580              0             N  DBFS_DATA/
MOUNTED  EXTERN  N         512   4096  1048576     15344     9832                0            9832              0             Y  GRID/
2. Pre-Reclamation Verification
Reclaiming disks requires ensuring that no processes are currently accessing the files within the diskgroup and identifying which physical disks are associated with the target.
Verifying Open Files and Identifying Candidate Disks
Verify there are no open files in DBFS_DATA:
$ asmcmd lsof -G DBFS_DATA
DB_Name  Instance_Name  Path
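As a hedged aside, this visual "header line only" check can be turned into a scriptable guard. The helper name `no_open_files` below is my own; the sketch assumes the first line of `asmcmd lsof` output is always a header and that anything after it represents an open file:

```shell
#!/bin/sh
# Hypothetical helper: read `asmcmd lsof -G <diskgroup>` output on stdin and
# succeed only when nothing beyond the header line is present (no open files).
no_open_files() {
  # Count non-empty lines after the header; awk always exits 0
  open=$(awk 'NR > 1 && NF { n++ } END { print n + 0 }')
  [ "$open" -eq 0 ]
}

# Example usage (assumes asmcmd is on PATH with the ASM environment set):
#   asmcmd lsof -G DBFS_DATA | no_open_files && echo "safe to unmount"
```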
Check current candidate disks:
$ asmcmd lsdsk --candidate
Path
/dev/mapper/dbfs_data02p1
List specific disks currently assigned to DBFS_DATA:
$ asmcmd lsdsk -G DBFS_DATA
Path
/dev/mapper/dbfs_data01p1
3. Step-by-Step Reclamation on Node 1
The first phase of removal involves unmounting the diskgroup from the first node in the RAC cluster.
Unmounting the Target Diskgroup
Unmount the DBFS_DATA diskgroup, then confirm it is no longer listed on this node:
$ asmcmd umount -f DBFS_DATA
$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/
MOUNTED  EXTERN  N         512   4096  4194304   9289488   793500                0          793500              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576     15344     9832                0            9832              0             Y  GRID/
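This "no longer listed" check can also be scripted. The helper name `dg_absent` is my own; the sketch assumes `asmcmd lsdg` prints each diskgroup name as the last column with a trailing slash, as in the output above:

```shell
#!/bin/sh
# Hypothetical helper: read `asmcmd lsdg` output on stdin and succeed only
# when the named diskgroup is NOT listed on this node.
dg_absent() {
  # Match "<whitespace><name>/" at end of line, then negate the result
  ! grep -q "[[:space:]]$1/\$"
}

# Example usage (assumes asmcmd is on PATH):
#   asmcmd lsdg | dg_absent DBFS_DATA && echo "DBFS_DATA is unmounted here"
```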
4. Final Cleanup and Removal on Node 2
Once unmounted on the first node, we move to the second node to verify the state and permanently drop the diskgroup from the ASM metadata.
Final Verification and Dropping the Diskgroup
From node2 - List ASM diskgroup:
$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/
MOUNTED  EXTERN  N         512   4096  4194304   9289488   793500                0          793500              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  4194304    153584    83580                0           83580              0             N  DBFS_DATA/
MOUNTED  EXTERN  N         512   4096  1048576     15344     9832                0            9832              0             Y  GRID/
Verify there are no open files in DBFS_DATA:
$ asmcmd lsof -G DBFS_DATA
DB_Name  Instance_Name  Path
Drop DBFS_DATA and verify the updated diskgroup list:
$ asmcmd dropdg -r DBFS_DATA
$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/
MOUNTED  EXTERN  N         512   4096  4194304   9289488   793500                0          793500              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576     15344     9832                0            9832              0             Y  GRID/
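Taken together, the final step can be wrapped in a guarded helper. This is a sketch only: the function name `reclaim_dg` is my own, it assumes `asmcmd` is on the PATH with the ASM environment set, and it is meant to run on the one node where the diskgroup is still mounted, after `asmcmd umount -f` has been run on the other nodes:

```shell
#!/bin/sh
# Hypothetical sketch of the final drop step: refuse to drop a diskgroup
# that still has open files. Run on the node where the diskgroup is mounted.
reclaim_dg() {
  dg="$1"
  # Guard: `asmcmd lsof` prints only its header line when no files are open
  open=$(asmcmd lsof -G "$dg" | awk 'NR > 1 && NF { n++ } END { print n + 0 }')
  if [ "$open" -ne 0 ]; then
    echo "ERROR: $dg still has $open open file(s); aborting" >&2
    return 1
  fi
  asmcmd dropdg -r "$dg"   # -r drops the diskgroup including its contents
}
```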
5. Conclusion
Check the candidate disks: /dev/mapper/dbfs_data01p1 is now listed as a candidate and can be repurposed:
$ asmcmd lsdsk --candidate
Path
/dev/mapper/dbfs_data01p1
/dev/mapper/dbfs_data02p1
In summary, don't forget to reclaim disks after dropping databases so they can be repurposed for other storage needs within your infrastructure.