ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?
This has been solved. The key is that deduplicated volumes need to have the dedup flag turned off before deletion. This should be done at the pool level as well as at the zvol or filesystem level; otherwise, the deletion itself is essentially being run through deduplication. The process takes time because the ZFS deduplication table has to be consulted for the blocks being freed. In this case, RAM helps. I temporarily added 16 GB of additional RAM to the system and brought the server back online. The zpool import completed within 4 hours.
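For what it's worth, the sequence looked roughly like this (the pool and zvol names here are placeholders, not my actual ones):

```shell
# Turn dedup off at the pool's root dataset and on the zvol itself,
# then destroy it; otherwise the destroy is itself pushed through
# the dedup table.
zfs set dedup=off tank
zfs set dedup=off tank/myvol
zfs destroy tank/myvol
```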
The moral is probably that dedupe isn't super polished and that RAM is essential to its performance. I'd suggest 24 GB or more, depending on the environment. Otherwise, leave ZFS dedupe off. It's definitely not reasonable for home users or smaller systems.
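If you're deciding whether dedupe is worth turning on at all, `zdb` can simulate a dedup table against an existing pool without enabling the feature (the pool name `tank` is just an example):

```shell
# Simulate deduplication on an existing pool. This prints a DDT
# histogram and an estimated dedup ratio; if the ratio is low, the
# RAM cost of a real DDT isn't worth paying.
zdb -S tank
```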
As a long-time user of Sun/Oracle ZFS 7000-series appliances, I can tell you without question that dedupe isn't polished. Never confuse sales with delivery! The sales guys will tell you "Oh, it's been fixed." In real life - my real life - I can tell you 24 GB isn't enough to handle the DDT, the deduplication table: the back-end index that stores the dedupe entries. That table has to reside in system memory so that each I/O can be intercepted in flight to decide whether it needs to be written to disk or not. The larger your storage pool and the more your data changes, the larger this table grows - and the larger the demand on system memory. That memory comes at the expense of the ARC (cache) and, at times, the OS itself - which is why you experience the hangs, as certain commands happen in the foreground and some in the background. It seems the pool delete happens in the foreground unless you tell it otherwise on the CLI; the GUI wizards won't do this.
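To put a rough number on the memory demand: a commonly cited rule of thumb is around 320 bytes of in-core DDT per unique block. Those figures (and the 4 TiB pool and 64 KiB average block size below) are assumptions for illustration, not measured values:

```shell
# Back-of-envelope DDT memory estimate using rule-of-thumb numbers:
# ~320 bytes of in-core DDT per unique block, 64 KiB average blocks.
pool_bytes=$((4 * 1099511627776))   # 4 TiB of unique data
avg_block=65536                     # 64 KiB average block size
entry_bytes=320                     # rough in-core cost per DDT entry
ddt_gib=$(( pool_bytes / avg_block * entry_bytes / 1073741824 ))
echo "${ddt_gib} GiB of RAM just for the DDT"   # -> 20 GiB
```

So a modest 4 TiB of unique data already wants ~20 GiB of memory for the table alone, before the ARC or the OS get anything - which is exactly why 24 GB total doesn't cut it.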
Even a mass delete of NFS data within a share defined on a deduped volume will bring your system to a halt if you don't have enough memory to process the "writes" to ZFS telling it to delete the data.
In all, unless you max out your memory - and even then find a way to reserve memory for the OS by restricting the ARC and the DDT (and I don't think you can restrict the DDT, by the nature of what it is; it's just an index tied directly to your I/O) - you're hosed during large deletes or zvol/pool destroys.
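On the appliances you're stuck with what the management UI exposes, but on a plain OpenZFS-on-Linux box you can at least cap the ARC so the OS keeps some headroom. The 8 GiB figure below is just an example value; size it for your own RAM:

```shell
# Cap the ARC at 8 GiB (value is in bytes). The module option
# persists across reboots; the sysfs write takes effect immediately.
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```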