Adding a bit more to this: it almost seems like a bunch of rpool ROOT snapshots are being created, and those are what is interfering with the mounting. Perhaps each exit is for one of these?
root@fmt-pve-10:~# zfs list -t snapshot
NAME...
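If those stray rpool/ROOT snapshots are the cause, a sketch of the cleanup is to filter the snapshot names before removing them one at a time. The dataset and snapshot names below are illustrative assumptions; on a real system the list would come from `zfs list` itself, not printf:

```shell
# Simulated `zfs list -t snapshot -o name -H` output (assumed names);
# on a real box, replace the printf with:  zfs list -t snapshot -o name -H
printf '%s\n' \
  'rpool/ROOT/pve-1@snap1' \
  'rpool/data/vm-100-disk-1@backup' \
  'rpool/ROOT/pve-1@snap2' |
  grep '^rpool/ROOT'
# Each name this prints could then be removed with:  zfs destroy <name>
```

Obviously double-check the filtered list before destroying anything, since `zfs destroy` is not reversible.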
Just a note: seeing the same as the OP. Tried both:
proxmox-ve_5.1-722cc488-1.iso
and
proxmox-ve_5.1-3.iso
Both have this issue.
Tried the @st0ne "spam exit at prompt" method and it actually worked, I believe in under 20 attempts.
rpool is on a 2x Intel DC S3610 480GB ZFS mirror on the Intel SATA...
Hi @wolfgang, I see where you can set up times to snapshot and transfer, but I cannot find where to specify: keep 15-minute snapshots for the last day (so 96 snapshots), keep the last 7 daily snapshots, and keep the last 4 weekly snapshots.
I can see how to snapshot every few minutes, days and...
Not sure where to put a suggestion for this @wbumiller and everyone.
It would be significantly more useful to have the ability to specify different time period sets for the snapshots. For example one might want to keep:
Recent snapshots (e.g. every 15 minutes for a day)
Daily snapshots
Weekly...
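Until tiered retention is built in, one workaround sketch is to run separate pve-zsync jobs from cron, each with its own --name and --maxsnap, so each tier prunes independently. The VM ID, destination pool, and schedule below are assumptions for illustration only:

```shell
# /etc/cron.d/pve-zsync-tiers (hypothetical example, not a shipped file)
# Every 15 minutes, keep 96 snapshots (about one day):
*/15 * * * * root pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup --name tier15min --maxsnap 96
# Daily at 01:00, keep 7 snapshots:
0 1 * * *    root pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup --name tierdaily --maxsnap 7
# Weekly on Sunday at 02:00, keep 4 snapshots:
0 2 * * 0    root pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup --name tierweekly --maxsnap 4
```

Because each job has a distinct --name, the snapshots are tracked separately and --maxsnap only trims within its own tier.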
I recently had a failed boot SATA DOM and tried re-installing a cluster node. Luckily, there was nothing too important on the node while it was running.
I used these steps exactly: http://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Re-installing_a_cluster_node
The files backed up were...
If it helps anyone here - when I got this issue I did:
nano /etc/apt/sources.list
Then I changed the jessie mirror from ftp.us.debian.org to ftp.uk.debian.org
And it worked. Certainly not the fanciest fix, but it can get you moving in a pinch.
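The same edit can be scripted instead of done in nano. This sketch works on a scratch copy of the file so nothing is touched by accident; the mirror hostnames are from the post above, the sample line is an assumption:

```shell
# Make a scratch copy (sample content is illustrative; on the real box
# you would operate on /etc/apt/sources.list directly)
echo 'deb http://ftp.us.debian.org/debian jessie main contrib' > /tmp/sources.list
# Point the jessie lines at the UK mirror instead of the US one
sed -i 's|ftp\.us\.debian\.org|ftp.uk.debian.org|g' /tmp/sources.list
grep jessie /tmp/sources.list
```

After editing the real file, run apt-get update so apt picks up the new mirror.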
Is there any way to add a username to a pve-zsync destination? I am trying to sync to a FreeNAS server that does not have root as a login.
root@pve-test:~# pve-zsync sync --source 107 --dest 192.168.1.107:pool1/backups --name test1 --limit 1024
That gives a prompt for the root user: root@192.168.1.107's...
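pve-zsync talks to the destination over ssh, so one thing worth trying (an untested assumption: this only helps if pve-zsync passes the destination host to ssh without forcing root@ on the command line, since an explicit command-line user overrides the config) is a Host alias in root's ssh config that sets the remote user and key:

```shell
# /root/.ssh/config on the Proxmox node (hypothetical sketch)
Host freenas-backup
    HostName 192.168.1.107
    User backupuser          # assumed non-root login on the FreeNAS box
    IdentityFile ~/.ssh/id_rsa
```

The job would then target the alias, e.g. --dest freenas-backup:pool1/backups. The remote user also needs ZFS receive/snapshot rights on the target pool (e.g. via zfs allow) for this to work at all.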
No clues, but I have seen this on two different E5 v3 processors with Proxmox VE 4.0.
System 1: dual E5-2650L V3, Supermicro motherboard
System 2: dual E5-2683 V3, ASRock motherboard
Udo, thank you for this. fmt-pve-01 is now offline due to the failure of the mirrored SSDs. Those 4 OSDs are offline and the hostname of fmt-pve-01 cannot be resolved (it is offline).
Due to the node being offline, Ceph still has the monitors/OSDs listed, but there is no way to delete via the...
I have been working on a few STH articles on Proxmox VE 4.0. (e.g. http://www.servethehome.com/add-raid-1-mirrored-zpool-to-proxmox-ve/ and http://www.servethehome.com/proxmox-ve-ceph-how-to-fix-a-create-ceph-osd-option-being-unavailable-grayed-out/ as examples) Absolutely great job on 4.0. It...
I think this is a GRUB/ZFS issue.
Interestingly enough I changed the Intel SSDs to sdc and sdd (from sda and sdc). I then put two other SSDs as sda and sdb. This worked.
Just wanted to pass along something I have found with the Proxmox VE 4.0 release installed via legacy boot (i.e. not UEFI):
The latest release 0d8559d0-17 ISO, using a ZFS RAID 1 mirror on sda and sdc (Intel 320 SSDs), got:
Beta 2 ISO installs ZFS RAID 1 root and has no issue completing the setup...
@mir - thank you again for all of your help!
Here is what I did: first, re-install Proxmox VE 3.4 fresh
Then similar to the link you provided:
Downloaded the 3.2.10 files
Edited the repos so I could install build tools
Did apt-get install gcc fakeroot build-essential debhelper rsync...