I had exactly the same problem and had no success solving it.
It happened with Proxmox 4.1 when I applied the latest patches.
My server is a ProLiant ML10 v2. I have two of them working as a cluster, and the problem only occurred on one of them.
What is even stranger: when I moved the disks to another computer, they booted correctly.
Things I have done:
I booted with a SystemRescueCd build that supports ZFS (
http://ftp.osuosl.org/pub/funtoo/distfiles/sysresccd/ ) and imported rpool. I had to change the mountpoint of rpool/ROOT/pve-1 in order to access the files. All ZFS datasets were working without errors.
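For reference, the import looked roughly like this (a sketch, not an exact transcript; /mnt/pve-1 is just the path I picked):

```
# Import the pool without mounting anything, since the root dataset
# normally has mountpoint=/ and would collide with the rescue system.
zpool import -f -N rpool

# Give the root dataset a reachable mountpoint and mount it.
zfs set mountpoint=/mnt/pve-1 rpool/ROOT/pve-1
zfs mount rpool/ROOT/pve-1

# Verify pool health and list the datasets.
zpool status -v rpool
zfs list -r rpool

# Important: set the mountpoint back to / before rebooting into the
# installed system, otherwise it will not find its root filesystem:
# zfs set mountpoint=/ rpool/ROOT/pve-1
```

With the pool accessible, I tried: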
* Reinstalling GRUB, with no success (see the sketch after this list;
https://forum.proxmox.com/threads/grub2-recovery-on-zfs-proxmox-ve-3-4.21306)
* Disabling compression for all ZFS datasets, also with no success (second sketch below)
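The GRUB reinstall went along the lines of the linked thread: chroot into the mounted pool and reinstall on every disk of the mirror. A rough sketch (the device names /dev/sda and /dev/sdb are examples; yours may differ):

```
# With rpool/ROOT/pve-1 mounted at /mnt/pve-1, bind the virtual
# filesystems the bootloader tools need, then chroot in.
mount --bind /dev  /mnt/pve-1/dev
mount --bind /proc /mnt/pve-1/proc
mount --bind /sys  /mnt/pve-1/sys
chroot /mnt/pve-1 /bin/bash

# Inside the chroot: reinstall GRUB on both mirror disks and
# regenerate its configuration.
grub-install /dev/sda
grub-install /dev/sdb
update-grub
```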
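Disabling compression was just a property change per dataset, something like the loop below. Note that this only affects newly written blocks; data already compressed on disk stays compressed, which may be why it made no difference:

```
# Set compression=off on every dataset in the pool.
for ds in $(zfs list -H -o name -r rpool); do
    zfs set compression=off "$ds"
done
```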
After battling with it for a long time, I gave up and installed a fresh system on two new disks as a ZFS RAID1.
I followed the instructions from
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster (Reinstalling a cluster node).
I booted with the SystemRescueCd again in order to back up the cluster files.
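The backup itself was just an archive of the relevant directories from the old root filesystem, roughly as below (check the wiki page for the authoritative file list; backuphost is a placeholder):

```
# From the rescue system, with the old root mounted at /mnt/pve-1.
# /var/lib/pve-cluster holds the pmxcfs database (the contents of
# /etc/pve); /root/.ssh holds the keys the cluster nodes use.
tar czf /tmp/pve-cluster-backup.tar.gz -C /mnt/pve-1 \
    var/lib/pve-cluster root/.ssh

# Copy it off the machine before reinstalling.
scp /tmp/pve-cluster-backup.tar.gz root@backuphost:/backups/
```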
As it was running on Ceph storage, I also had to back up and restore the Ceph directories (/var/lib/ceph and /etc/ceph) to the new machine.
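Same approach for Ceph, sketched below (again with the old root at /mnt/pve-1):

```
# On the rescue system: archive the Ceph state from the old disks.
tar czf /tmp/ceph-backup.tar.gz -C /mnt/pve-1 var/lib/ceph etc/ceph

# On the freshly installed node: restore before starting the Ceph
# services (or simply reboot afterwards).
tar xzf /tmp/ceph-backup.tar.gz -C /
```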
Now my server is running again, but I still do not understand why the old installation did not boot on this server after the update, yet boots fine in another computer.
P.S. I don't know if there is an official guide to recovering a damaged ZFS Proxmox system that won't boot, but maybe the people from Proxmox could write one. I spent a lot of time trying to boot the system offline, and I had to hunt down a special version of SystemRescueCd with ZFS support.