Same problem here, more or less. After spending hours I eventually got apt update and apt upgrade to complete without errors, but apt full-upgrade still errors out:
root@pve2:~# cat /etc/apt/sources.list
deb https://ftp.debian.org/debian bullseye main contrib
deb https://ftp.debian.org/debian...
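For comparison only, since the actual file above is cut off: a Proxmox VE 7 / Bullseye install usually carries roughly the following repository lines (plus the pve-no-subscription repo in sources.list.d), and it is worth checking what apt is holding back before full-upgrade. This is a sketch, not a copy of the poster's setup:

# /etc/apt/sources.list -- typical PVE 7 (Bullseye) layout
deb http://ftp.debian.org/debian bullseye main contrib
deb http://ftp.debian.org/debian bullseye-updates main contrib
deb http://security.debian.org/debian-security bullseye-security main contrib

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription

# simulate the upgrade and list any held packages
apt update
apt full-upgrade -s
apt-mark showhold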
What do I need to do to be offered the choice to create a CT on my vmdata pool?
The vmdata pool is shown everywhere, but when I create a CT I am only offered local.
# pvesm status
Name Type Status Total Used Available %
local dir...
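In case it helps anyone landing here: when a ZFS pool shows up in pvesm status but is not offered during CT creation, the usual reason is that the storage's content types do not include container root disks. A minimal sketch, assuming vmdata is the zfspool-type storage from the post above:

# allow both VM disks and CT root disks on the vmdata storage
pvesm set vmdata --content rootdir,images

# verify the content line
cat /etc/pve/storage.cfg

The same change can be made in the GUI under Datacenter -> Storage -> Edit -> Content.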
So .... is the only option to start over again?
Or is it just not possible to run Docker on ZFS storage?
Created a new CT with ZFS storage, installed docker, and it runs fine, so that's not the problem.
Won't moving the volume to another storage end up the same as a snapshot restore to that same storage (ZFS storage in both cases)?
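For anyone who wants to try the move instead of a restore, the command would look roughly like this (the VMID 101 and the storage name are placeholders; on older PVE releases the subcommand is spelled pct move_volume). Either way the root disk ends up as a ZFS dataset, so it should behave the same as the restored copy:

# move the container's root disk to the ZFS-backed storage (CT must be stopped)
pct move-volume 101 rootfs vmdata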
arch: amd64
cores: 4
features: nesting=1
hostname: mail.xxx.net
memory: 6144
net0...
So I have this LXC container (unprivileged, nesting=1) running Docker just fine, but I just realized I created it on a raw image, which means snapshots take too long. I restored a snapshot to ZFS storage, which completed fine, but now Docker no longer starts.
I get this error...
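Not the original error (it is cut off above), but the usual culprit when Docker stops working after a move to ZFS-backed container storage is the overlay2 storage driver, which cannot sit on a ZFS dataset on older kernels. A commonly suggested workaround, assuming that is what the error is about, is to switch Docker inside the CT to fuse-overlayfs (CT ID 101 is a placeholder):

# inside the CT: install the FUSE-based overlay driver and point Docker at it
apt install fuse-overlayfs

# /etc/docker/daemon.json
{
  "storage-driver": "fuse-overlayfs"
}

# on the PVE host: allow FUSE in the unprivileged CT, then restart it
pct set 101 --features nesting=1,fuse=1
pct reboot 101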
Wait - doesn't qm stop implicitly reboot the server? Or does it suspend/hibernate/freeze it first, so that it can resume without booting after qm start? Otherwise I don't see the point.
I always assumed qm stop is like a qm shutdown without being nice about it.
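For reference, and as I read the man page (so treat this as a sketch, VMID 100 is a placeholder): qm shutdown asks the guest OS to shut down cleanly, qm stop kills the VM immediately like pulling the power cord, and neither of them saves any state. Resuming where the guest left off is what qm suspend with the to-disk option is for:

# clean ACPI shutdown - the guest gets to flush and stop services
qm shutdown 100
# hard stop, equivalent to pulling the plug - no state is saved
qm stop 100
# hibernate: write the VM's RAM state to disk so qm start resumes it
qm suspend 100 --todisk 1
qm start 100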
That was another thing I found wasn't really clear: which NODENAME do you need to remove? The NODENAME of the server you are working on, or the other one? Or both?
There is a problem with the procedure as explained: after deleting /etc/pve/corosync.conf, every pvecm command - including pvecm delnode and pvecm expected - reports Error: Corosync config '/etc/pve/corosync.conf' does not exist - is this node part of a cluster?
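For whoever hits this next: the order matters. pvecm delnode has to be run on a node that is still part of the cluster, and only the node being separated deletes its corosync config. Roughly the sequence from the pvecm documentation ("Separate a Node Without Reinstalling"), shown here as a sketch:

# on the node you are removing from the cluster
systemctl stop pve-cluster corosync
pmxcfs -l                       # start the cluster filesystem in local mode
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster

# on a node that stays in the cluster
pvecm delnode <nodename-of-removed-node>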
Ok, that was quite painless, thanks.
So now I have two nodes:
# pvecm nodes
Membership information
----------------------
Nodeid Votes Name
1 1 server39
2 1 server40 (local)
I now need to stop the nodeid 1 server and remove it from the cluster, in...
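If I read the docs right, the next steps would be roughly the following, with the node names taken from the listing above: power off server39 first, then remove it from the surviving node. In a two-node cluster the survivor also needs its expected votes lowered, otherwise it loses quorum and /etc/pve goes read-only:

# on server40, after server39 has been shut down
pvecm expected 1
pvecm delnode server39
pvecm status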
I have a bastard cluster - as in two identical servers, each with its own disks, i.e. no shared disks - and I need to move one 400 GB VM from one server to the other.
Since they don't share disks I suppose I need to do it as an offline migration. Not a problem per se, but I need to know what...
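Without shared storage, the simplest route I know of is a stop-mode backup on one node and a restore on the other. A rough sketch, where the VMID 100, the storage names and the archive path are placeholders:

# on the source node: stop-mode backup of the VM to a local dump directory
vzdump 100 --mode stop --storage local --compress zstd

# copy the resulting vzdump-qemu-100-*.vma.zst to the target node (scp/rsync),
# then restore it there onto whatever storage the target has
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-zfs

Recent versions can reportedly also do this directly with qm migrate and --targetstorage on a stopped VM, but the backup/restore route is the one I would trust for a one-off 400 GB move.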
I added /dev/sdg -n idle,6,q to /etc/smartd.conf and rebooted, but the drive still reports IDLE_A and keeps spinning.
I realize it's not a premium drive. It's just one of those cheap WD USB appliances whose only purpose is to back up my ZFS raid.
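In case it helps: as far as I can tell, the -n directive in smartd.conf only tells smartd not to wake the drive for its checks; it never spins the drive down by itself. The spin-down timer has to be set on the drive, e.g. with hdparm, and whether that works at all depends on the USB bridge in those WD enclosures. A sketch, assuming the drive accepts standard ATA power commands:

# ask the drive to enter standby after ~10 minutes of inactivity (-S 120 = 120 * 5 s)
hdparm -S 120 /dev/sdg
# or send it to standby right now
hdparm -y /dev/sdg

# and keep smartd from waking it back up during its polling interval
# /etc/smartd.conf
/dev/sdg -a -n standby,q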