You need to export PBS_FINGERPRINT=00:00:00... as well, using the fingerprint shown on the dashboard of your PBS installation, or use a certificate that is in the system trust store.
Could you try to run the file-restore command manually on the CLI of your PVE node?
proxmox-file-restore list vm/100/2021-09-20T16:20:30Z /drive-scsi0.img.fidx/ --repository root@pam@127.0.0.1:datastore
(substitute your own IP/datastore/VM combo of course)
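Put together, a session on the PVE node might look like this (the fingerprint, host, datastore and snapshot are placeholders; substitute your own values):

```shell
# Export the fingerprint shown on the PBS dashboard (placeholder shown here):
export PBS_FINGERPRINT="<fingerprint-from-your-PBS-dashboard>"

# List the contents of the scsi0 disk image inside the backup snapshot:
proxmox-file-restore list "vm/100/2021-09-20T16:20:30Z" "/drive-scsi0.img.fidx/" \
    --repository "root@pam@127.0.0.1:datastore"
```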
As I said, might be MAC related. VMs attached to a bridge do not have their MAC "translated", the bridge sends out the packets to the physical NIC with the source MAC of the VM. So potentially some layer in your stack filters that.
My personal favourite trick to debug anything network related...
Can your PVE host request an IP address when the device is connected to a bridge? If not, your device might just be incompatible with bridging. Sometimes there are weird MAC address filters built into the hardware or firmware/driver...
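A quick way to check this from the host itself; the bridge name 'vmbr1' is a placeholder for whichever bridge the adapter is enslaved to:

```shell
# On the PVE host: ask for a DHCP lease on the bridge the device is attached to.
dhclient -v vmbr1

# In a second shell, watch the DHCP exchange on the bridge:
tcpdump -ni vmbr1 port 67 or port 68
```

If the host gets a lease but VMs on the same bridge never do, a MAC filter somewhere in the path becomes the prime suspect.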
Obviously I meant the logs from the time when such a crash occurred. Also check /var/log/syslog plus its logrotated copies. Filter them for the timestamp when your issue occurs; please don't just post the whole log.
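For example, to narrow the logs down to a window around the crash (timestamps here are illustrative):

```shell
# Journal entries in a window around the crash:
journalctl --since "2021-09-15 12:15" --until "2021-09-15 12:30"

# The same timeframe in syslog and its rotated copies:
grep -h "Sep 15 12:2" /var/log/syslog
zgrep -h "Sep 15 12:2" /var/log/syslog.*.gz
```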
In your posted output only a single VM stops, at Sep 15 12:21:04. This appears...
Yes, and as I said, that's not the right way to think about Ceph deployment. If you absolutely require local data access for performance (which, given how fast Ceph can be on 10G+ networks, is rather unlikely), Ceph is the wrong tool for the job. Or put differently: If you're intra-cluster...
Check your logs (journalctl -e, dmesg), and please include your pveversion -v output as well as configs from your VMs (qm config <vmid>). This is very little information to go on...
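Concretely, the commands to collect that information (replace 100 with your actual VMID):

```shell
# Kernel messages and the tail of the journal:
dmesg --ctime | tail -n 50
journalctl -e

# Installed package versions and the VM configuration:
pveversion -v
qm config 100
```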
Best not to translate the 'options' keyword at the beginning; apart from that, the config is correct ;) Don't forget to run update-initramfs afterwards.
That said, I do wonder why you would want to limit the ARC to 14 GiB with 256 GB of RAM and that much storage? It certainly doesn't do the performance any...
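For reference, a sketch of the setting being discussed, assuming the standard modprobe.d mechanism for ZFS module options (the file name is a common convention):

```shell
# /etc/modprobe.d/zfs.conf - the 'options' keyword must stay in English.
# 14 GiB expressed in bytes (14 * 1024^3):
options zfs zfs_arc_max=15032385536

# After editing, regenerate the initramfs so the limit applies at boot:
update-initramfs -u -k all
```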
No it won't, but that is also not how Ceph works in general. To ensure consistency, every write needs to be confirmed by the quorum in the cluster anyway, so whether the data is local or not should not be of concern. Ceph will auto-balance to the best of its abilities, but as an administrator...
You will always get a confirmation dialog. Clicking restore from the "Backup" tab while on the container will overwrite, doing so from the "Backups" tab on the storage itself will give you the option to assign a new VMID (i.e. not overwrite).
You didn't fully upgrade your system, or something went wrong. This is 'pve-manager' version 6.4 running with the kernel from PVE 7. Try apt update && apt dist-upgrade or apt-get install -f.
Please also post the full output of pveversion -v.
Thin pools are, as the name suggests, thinly allocated. However, that does not mean the container only has part of the disk space available; it always has all of it available. Thus, the filesystem will immediately be extended to 200 GB in your example - but since most of it is unused, it...
You can try, of course, though if the corresponding driver is compiled as "built-in" it will not work (not sure how we configure it, probably as module though). Take a look at the Makefile in the repository I linked to see how we build our pve-kernel packages. The source is in the submodules...
This configuration does not need CPU pinning (i.e. a 1:1 mapping between pCPU and vCPU). Simply set the number of cores in the GUI (that includes hyperthreads) to 2, the linux kernel scheduler on PVE will automatically balance the VMs. That is, if all of them need their assigned resources, they...
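The CLI equivalent of that GUI setting, assuming VMID 100 as a placeholder:

```shell
# Give VM 100 two vCPUs; the kernel scheduler handles placement on pCPUs:
qm set 100 --cores 2
```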
Last one. In regular usage, you do not need to run 'mkfs'. If you want a manual filesystem (not for container use), you'd have to create a thin-LV on your thinpool and then run mkfs on that (/dev/mapper/xxxxx).
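As a sketch, using the PVE default volume group 'pve' and thinpool 'data' (substitute your own names and size):

```shell
# Create a 50 GiB thin LV named 'mydata' on the thinpool:
lvcreate -V 50G -T pve/data -n mydata

# Then put a filesystem on it manually:
mkfs.ext4 /dev/mapper/pve-mydata
```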
Try kdump. If you're using grub, here's a past explanation: https://forum.proxmox.com/threads/random-proxmox-server-hang-no-vms-no-web-gui.58823/#post-271632
Are you using USB passthrough to attach the adapter to a VM? If so, that will not work: the bandwidth of USB passthrough is very limited, certainly nowhere near 2.5 Gb/s. Does the adapter work on the PVE host? If so, I'd recommend not using passthrough at all, assigning a static IP to a new bridge...
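A sketch of what such a bridge could look like in /etc/network/interfaces; the interface name, bridge name and address are hypothetical:

```shell
# Bridge over the USB NIC (name 'enx001122334455' is a placeholder):
auto vmbr1
iface vmbr1 inet static
        address 192.168.2.10/24
        bridge-ports enx001122334455
        bridge-stp off
        bridge-fd 0
```

VMs then get a vNIC on vmbr1 instead of the passed-through USB device.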
These entries are normal on most systems, the "interrupt took too long" comes from the perf monitoring subsystem of the kernel, nothing fatal. Are there any crash logs available from when the systems actually died? Otherwise, potentially look into setting up kdump or netconsole, to get a log of...
I'd always recommend playing through your changes in a test environment first, but that seems like a reasonable plan for a clean re-install. Make sure to wait for Ceph to finish re-creating the replicas/rebalancing after each node.