Hi,
Have you run Garbage Collection on your datastore?
Can you share the log?
Not possible, but a request exists to enhance this: https://bugzilla.proxmox.com/show_bug.cgi?id=5799
Prune, and after 24 hours run Garbage Collection.
It's the role of...
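For reference, a hedged sketch of that sequence on the PBS host; the datastore name store1 is a placeholder for your own setup:

```shell
# "store1" is a hypothetical datastore name – adjust to your setup.
# After pruning (via the GUI or a prune job) and waiting out the ~24h
# grace period, start garbage collection manually:
proxmox-backup-manager garbage-collection start store1
# Check progress/result:
proxmox-backup-manager garbage-collection status store1
```

The same can of course be triggered from the datastore's GC panel in the web UI.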
Hi all, author of the hardening guide here. Thanks for bringing this up and sorry for seeing this post so late.
Quick clarification: my guide does not say you cannot or should not use the Proxmox ISO. Maybe I should adjust the wording there...
Hi,
It was introduced in v256: https://github.com/systemd/systemd/blob/2ba910ab06e5970e9e2dc6d28a8d38a4adcd4a16/NEWS#L3509-L3583
But if I understand correctly, this still uses the sshd config for authentication, while allowing potential host-to-VM...
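For context, a hedged usage sketch: assuming systemd ≥ 256 with systemd-ssh-generator in the guest and systemd-ssh-proxy on the client side, and a guest vsock CID of 42 (a placeholder), the connection looks roughly like:

```shell
# Connect to the VM's sshd over AF_VSOCK instead of TCP; authentication
# is still whatever the guest's sshd config allows (keys, passwords, etc.).
ssh vsock/42
```

So the transport changes, not the authentication model.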
I got the same problem on my 1-node cluster. (It was a 2-node cluster while migrating to new hardware.)
I got it fixed; it seems the problem was that there was no quorum for HA.
After a reboot, logging in locally again, and starting dpkg --configure -a, it got stuck...
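In case it helps others hitting the same quorum deadlock on a shrunken cluster: a common way out, assuming you know the other node is permanently gone, is to lower the expected vote count:

```shell
# Temporarily tell corosync to expect only one vote so the remaining
# node regains quorum (use with care – only on an intentionally lone node):
pvecm expected 1
# Verify the quorum state afterwards:
pvecm status
```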
Here is the workflow:
* I have PVE and PBS integrated
* I set up several VMs to automatically back up every night to PBS
* one VM has 500 GB of storage, and I didn't want to back it up anymore
* I deleted the VM's backups via the garbage...
I have to follow up on the earlier discussion once more. I have now set up a NUT server and a NUT client on my Proxmox mini PC. Following the numerous guides, this has worked well so far. But today I wanted to...
I would either limit the bandwidth (like @UdoB said) or modify the CIFS mount in PVE itself:
vers=3.1.1,soft,serverino,nosharesock,cache=none
This prevents write stalls. But SMB/CIFS on "low-end" storage is generally not recommended. I would...
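For anyone wondering where those options go: a minimal sketch of the relevant /etc/pve/storage.cfg entry, assuming a PVE version whose CIFS storage type supports the options property (storage name, server, share, and user below are placeholders):

```
cifs: nas-backup
        server 192.168.1.10
        share backup
        username backupuser
        content backup
        options vers=3.1.1,soft,serverino,nosharesock,cache=none
```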
I have no specific idea, just a layman's suspicion: the Synology is overloaded, busy emptying internal write caches and blocking network IO until it is done (with the current chunk of data).
To verify/check this idea, simply set a...
Hi everyone!
I created an account so I could post that I've faced a similar problem and could not figure out what was going on until I found this topic. Yesterday I reinstalled/recreated my PBS 3.x as the newest 4.x, a simple default install, with an empty datastore running...
It's a "silent thread" ... I was hoping for an "official" comment.
For now I have done this (on the Proxmox hypervisor):
/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="systemd.ssh_auto=no" <<---------- *** add...
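For completeness, applying such a change follows the standard Debian GRUB workflow (nothing Proxmox-specific assumed):

```shell
# After editing /etc/default/grub, regenerate the GRUB config:
update-grub
# Reboot, then confirm the parameter made it onto the kernel command line:
cat /proc/cmdline
```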
Yes, you're right. It works as intended.
The same is true for a vanished storage. Losing all virtual disks is surprisingly not considered a problem.
If you want this problem area to be more visible to the developers you could open a Feature...
The other day I wrote the tool "ProxCLMC" (Prox CPU Live Migration Checker) for this, since the question keeps coming up. As @Falk R. already wrote, this happens on a per-VM basis. The idea is to integrate this into a pipeline in order to...
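The underlying check can be sketched in plain shell. This is not the actual ProxCLMC tool, just an illustration of the idea, and the sample flag lists are made up:

```shell
# Sample flag sets – on real nodes you would collect them with e.g.:
#   ssh nodeX "grep -m1 '^flags' /proc/cpuinfo"
printf 'fpu vme sse sse2 avx avx2\n' | tr ' ' '\n' | sort > node1.flags
printf 'fpu vme sse sse2 avx\n' | tr ' ' '\n' | sort > node2.flags
# Flags present on node1 but missing on node2: a VM with CPU type "host"
# started on node1 may not live-migrate cleanly to node2.
comm -23 node1.flags node2.flags   # -> avx2
```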
Hello,
Now that PBS' S3 support has been around for a while and people have been using it (and the dev team has been putting so much effort into continuously refining it), I'm curious: for someone who's new to S3-based storage solutions, when is...