Thanks for pointing that out!
I checked my apt sources and found no Proxmox repository at all (except the commented-out enterprise one).
Added the PVE No-subscription repository, found 23 (!) new packages.
The GUI shows version 6.0-8 now.
Obviously I had missed manually adding the apt source (which IMHO should be installed...
BUG? Update needed?
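For anyone else stumbling over this: the no-subscription repo is a one-line file. A minimal sketch for a PVE 6.x / Buster install (the file name below is just my own choice):

echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update

After that the new packages show up as expected.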
Installed a fresh new Proxmox server
downloaded the template "Debian 10.0 (Standard) 10.0-1" through the GUI
created new CT (ID 444)
entered new CT
ran apt dist-upgrade inside it
everything fine.
rebooted host
the new CT won't start
deep, deep in the logs:
"lxc-start 444 20191012123719.464 DEBUG...
o.k. solved :)
logs are written to be read ;)
in boot.log I spotted:
Oct 05 16:06:55 pve systemd[1]: Starting Mount ZFS filesystems...
Oct 05 16:06:55 pve zfs[1113]: cannot mount '/rpool': directory is not empty
Oct 05 16:06:55 pve systemd[1]: zfs-mount.service: Main process exited...
And for me :(
The upgrade seemed to go well, but after the reboot no container will start...
Kinda stuck here - tried your script and got:
All packages are up to date.
Any ideas?
~R.
EDIT:
different paths: as shown in zfs list, mine were in rpool/subvol-xxx - the script assumes rpool/data/subvol-xxx -...
did multiple reboots since then - the problem with the halt on boot never showed up again and is not reproducible (the 'purely cosmetic' kernel messages about missing thin target support still show up though :°)
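(If you want to check which layout your pool uses before running such a script, a simple listing shows it - rpool is just my pool name:

zfs list -r -o name,mountpoint rpool

and then adjust the dataset prefix in the script accordingly.)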
Thanks,
~R.
I have to correct my post above. I only had a report from our IT and hadn't seen the problem myself yet. Now I saw that boot stops because of a backup directory on / that is not found:
[ TIME ] Timed out waiting for device dev-mapper-pve\x2dbackups.device.
[DEPEND] Dependency failed for...
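(One workaround I know of, in case others hit that timeout: mark the fstab entry as non-critical so boot continues even if the LV is missing - device path and mountpoint below are only an example from my setup:

/dev/mapper/pve-backups  /mnt/backups  ext4  defaults,nofail  0  2

With nofail, systemd no longer blocks the boot waiting for that device.)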
I just suffered the same issue.
The box was set up with a clean install in March using a 5.x standard official release, downloaded from here.
Did an upgrade through the GUI a couple of days ago and today finally rebooted (or rather tried to reboot), as recommended at the end of the upgrade process.
Boot...
I love you totalimpact! :)
That was exactly what I was looking for: -V (capital V instead of -v, as proposed by dear Miha ;°)
For noobs like me who come across this post:
When the LV is finally created, it must of course be formatted and mounted to show up in the GUI:
mkfs.ext4 /dev/pve/backups...
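For completeness, the whole sequence as I understand it - size, thin pool name and mountpoint are only examples from my setup:

lvcreate -V 500G --thinpool data -n backups pve   # thin LV 'backups' in VG 'pve', virtual size 500G
mkfs.ext4 /dev/pve/backups                        # format it
mkdir -p /mnt/backups
mount /dev/pve/backups /mnt/backups               # mount it (plus a matching fstab entry)

and then add the mountpoint as a directory storage in the GUI.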
Apparently Proxmox, or rather the data LV, grabbed the space - how do I get it back to use it for backups/dumps?
--- Logical volume ---
LV Name data
VG Name pve
LV UUID qOKuXk-Z0K7-BziW-1Uc7-qltl-Kbme-wTmcnZ
LV Write Access read/write
LV Creation...
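(For reference, this is how I looked at what is actually allocated vs. free - plain LVM commands, nothing Proxmox-specific:

vgs pve        # total size and free extents of the volume group
lvs -a pve     # all LVs, incl. the 'data' thin pool and its Data% usage
)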
Dear Miha,
could you kindly elaborate on that?
What exactly did you type, what is the exact command?
I tried several variations but still get the same insufficient free space error.
Just trying to get my vzdumps to the 5TB 'empty space' on me lil server :)
Cheers,
R.
I had the same problem: even though I don't have spaceship NCC Enterprise travelling in my /etc/apt/sources.list, PVE accessed the PAID repository and threw errors.
I solved this by commenting out the entry in...:
/etc/apt/sources.list.d/pve-enterprise.list
Obviously, by default, it is...
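(In that file, the single deb line just gets a # in front - the exact URL and suite depend on the PVE/Debian version, this is roughly what it looks like on a Buster-based install:

# deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise

and apt update then stops hitting the paid repo.)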
I reckon the "destructions" of 100, 222, 333, 444 and 1100 happened automatically when I restored them from backup, which re-created the disks/subvolumes. I never ordered these to be destroyed. The restore happened only once for each VE and without errors.
No idea - it's a simple, standard Proxmox VE 4.3-12 with some containers :)
@fabian:
root@pve:~# zpool history rpool
...
2016-12-12.12:17:12 zfs destroy -r rpool/subvol-666-disk-2
2016-12-12.12:17:26 zfs destroy -r rpool/vm-555-disk-1
2016-12-12.12:17:34 zfs destroy -r rpool/vm-777-disk-1...
With disk space having become short, I wanted to clean up my server a bit. I deleted a virtual disk that was not actually in use (VE 1100 uses subvol-1100-disk-2) and could not be deleted via the GUI, so I did a:
zfs destroy -f rpool/subvol-1100-disk-1
As a result, all other subvolumes sort of...
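Lesson learned - before a forced destroy I would now at least do a dry run to see what would actually be affected; -n/-v are standard zfs destroy options:

zfs list -r rpool                            # look at the full dataset tree first
zfs destroy -nv rpool/subvol-1100-disk-1     # dry run: only prints what WOULD be destroyed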
Just a brief follow-up: I upgraded my box (HP MicroServer Gen8). RAM from 2GB to 4GB, replaced the Celeron with an i3. No more problems since then. Proxmox' official system requirements state 8GB RAM for a good reason :)
I can provoke a system crash by manually starting a backup of the VE that I keep finding in a locked state.
After 1 or 2 minutes, the z_wr_iss process takes 97% CPU and then the server fails. It automatically reboots, but the VE is still locked from the pending/failed backup process and...
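(As far as I know, the leftover lock can at least be cleared manually so the container becomes usable again - the VMID is of course that of the affected VE:

pct unlock <VMID>

The underlying crash is a separate story, of course.)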