either you set up your VMs properly in the first place, or you have to do a shutdown, wait, and then follow up with a stop (or run qm shutdown XXX --timeout YYY --forceStop 1). if you don't care about properly shutting down, then you can of course always just do a stop (a shutdown is basically...
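for example, a minimal sketch (VMID 100 and the 60 second timeout are just placeholders):

    # ask the guest to shut down cleanly; if it has not stopped within
    # 60 seconds, it gets stopped forcefully (like a hard power-off)
    qm shutdown 100 --timeout 60 --forceStop 1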
no, the bug is not yet fixed.
as a workaround, you could
- restore into a new, empty dir (see the sketch after this list)
- delete offending "files" in the target directory before extracting
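a minimal sketch of the first option, assuming a file-level (.pxar) backup restored with proxmox-backup-client (snapshot, archive and repository names are placeholders):

    # extract into a freshly created, empty directory so no pre-existing
    # entries in the target can conflict with the archive contents
    mkdir /tmp/restore-target
    proxmox-backup-client restore host/myhost/2024-05-01T10:00:00Z root.pxar /tmp/restore-target \
        --repository backup@pbs@pbs.example.com:mystore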
there is work going on to make bridges (and SDN vnets) ACL entities covered by the permission system - that would then allow you to say "user X can only configure guests to use bridge foo" (for example). whether that will be ready in time for your summer interns I cannot promise ;)
just re-add the storage entry (Datacenter -> Storage -> Add on the GUI) - that is a purely "logical" operation writing a PVE config file. use the name "local-zfs" and the correct dataset (probably "rpool/data"), and if you changed any of the other options, set them as well.
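the CLI equivalent would be something like this (assuming the default options of a standard installation; adjust content types etc. if you had changed them):

    # writes the storage entry back into /etc/pve/storage.cfg
    pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --sparse 1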
no, this is not possible directly. qm showcmd XXX will give you the commandline, which might or might not be enough to start the VM manually. I would re-evaluate why you regularly lose quorum and fix that problem (or not use a cluster ;))
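for example (the VMID is a placeholder):

    # print the full KVM commandline, one option per line
    qm showcmd 100 --pretty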
the original problem is a limitation of grub when UEFI is not used and files that are needed for booting (e.g. grub's stage2, kernel, initrd, ..) have to be read from a large disk. the solution for that is either a smaller disk for /boot or /...
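a quick way to check which way a system booted (not from the original thread, just a generic check):

    # the directory only exists when the system was booted via UEFI
    [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS/grub boot"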
no there isn't (yet - it would be possible to implement though).
if you attach both disks to the same system (or within a well connected, local network) you could do a real sync as well (e.g. by setting up a remote pointing to localhost), if you repeatedly run the sync it will give you an idea...
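a rough sketch of the "remote pointing to localhost" approach (remote name, auth-id, datastore names and the fingerprint are placeholders):

    # expose the local PBS instance as a remote
    proxmox-backup-manager remote create local-self \
        --host localhost \
        --auth-id sync@pbs \
        --password 'SECRET' \
        --fingerprint '<fingerprint of the local certificate>'
    # pull the old datastore into the new one; the job can then be run
    # (repeatedly) from the GUI - re-runs only transfer what is missing
    proxmox-backup-manager sync-job create old-to-new \
        --remote local-self \
        --remote-store old-store \
        --store new-store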
just be sure that the directory structure of the original and new datastore match - the chunks are in subdirs of .chunks in the datastore, and need to follow that exact scheme for them to be found and re-used!
yes. one way to do it would be to create the datastore on the new disks, then copy the chunk store contents into its chunk store, transfer the new disks to the new server, manually create the datastore.cfg entry and then start the sync. just be careful to wait for a full sync to finish before...
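a minimal sketch of those steps (paths and names are made up; the datastore.cfg snippet follows the format PBS itself writes):

    # copy the chunk store into the new datastore, preserving the exact
    # .chunks/<4-hex-digit-prefix>/ layout
    rsync -a /mnt/old-datastore/.chunks/ /mnt/new-datastore/.chunks/

    # on the new server, /etc/proxmox-backup/datastore.cfg:
    datastore: mystore
        path /mnt/new-datastore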
both quota (maximum usage) and reservation (assigned space) are interesting. both properties can also be changed later on, e.g. to assign more space to a datastore. other than that, simply "zfs create" and then configure the mountpoint as a new datastore in PBS...
in that case it would probably make sense to use separate datastores per customer (potentially also gives more flexibility with regard to quotas, if the underlying storage supports that. with ZFS, for example, this is easily done via 1 datastore == 1 dataset ;)).
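a sketch of the ZFS variant (pool, dataset and datastore names are made up):

    # one dataset per customer; quota = maximum usage, reservation =
    # guaranteed space - both can be changed later with 'zfs set'
    zfs create rpool/backup/customer1
    zfs set quota=500G reservation=100G rpool/backup/customer1
    # register the dataset's mountpoint as a new PBS datastore
    proxmox-backup-manager datastore create customer1 /rpool/backup/customer1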
yes indeed :)
thanks for the logs! I will have to take some time to digest them in detail, but they look sensible to me.
AFAICT:
- starting only corosync and establishing quorum worked with all nodes
- starting pmxcfs afterwards triggered the issue once node 109 was reached
- there were...