Hi everyone,
I'm trying to recover the machine configurations from a disk by connecting it via a virtual USB adapter, but the configuration file that should be in /etc/pve/qemu-server/<VMID>
is not found;
in fact, the entire /etc/pve directory is empty. Am I doing the procedure wrong? Thank you
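You're not doing anything wrong with the mount itself: /etc/pve is not a regular directory but the pmxcfs FUSE filesystem, which is only populated while the pve-cluster service is running, so on an offline disk it always appears empty. The real data lives in a SQLite database on the root filesystem. A minimal sketch of pulling the guest configs out of it, assuming the recovered disk is mounted at /mnt/recovery (hypothetical path) and that the pmxcfs schema (table `tree`, columns `name`/`data`) matches your version:

```shell
# /etc/pve is a FUSE mount backed by this SQLite DB; query it
# directly to list and dump the VM/CT configuration files.
sqlite3 /mnt/recovery/var/lib/pve-cluster/config.db \
  "SELECT name, data FROM tree WHERE name LIKE '%.conf';"
```

Alternatively, copying the whole config.db to a working node and starting pve-cluster against it restores the entire /etc/pve tree.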
From Debian 10 onwards the mcelog package was replaced by rasdaemon, so the package does not exist for Debian 11. I have the same problem; now I have to upgrade to Proxmox 8, which is based on Debian 12, to see whether the problem is fixed and whether the logs are written correctly, because at the moment the kernel...
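For reference, switching over is just a matter of installing the replacement and querying its event log; a minimal sketch:

```shell
# rasdaemon replaces mcelog; it logs machine-check / RAS events
# to its own SQLite store, queried with ras-mc-ctl.
apt install rasdaemon
systemctl enable --now rasdaemon
ras-mc-ctl --summary   # per-type counts of recorded errors
ras-mc-ctl --errors    # detailed list of recorded errors
```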
If, on the other hand, the backups are encrypted, do I have to do some particular operation so that the recreated PBS machine recognizes them?
Thanks for the answer. So by backing up the PBS configuration files and restoring them, can I access all the backup files even if they are on a NAS?
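On the encryption question: the chunks on the datastore stay encrypted with the *client-side* key, so rebuilding the PBS server is not enough by itself; you must preserve and restore that key. A sketch, assuming the default key locations (paths may differ on your setup):

```shell
# Backups made with proxmox-backup-client: the key is a JSON file
# in the client user's config directory.
cp /safe/location/encryption-key.json \
   ~/.config/proxmox-backup/encryption-key.json

# Backups made from a PVE node: the key is stored per storage
# on the PVE side, not on PBS.
ls /etc/pve/priv/storage/   # contains <storage-id>.enc key files
```

Without a copy of the key, encrypted backups cannot be restored, so keep it (and its passphrase) somewhere outside the PBS machine.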
Hello to all.
I'm using a NAS as an external PBS datastore via NFS.
In the event of a disaster, how should I proceed? Or better: if my PBS machine is damaged, can I connect the NAS as a datastore to a new PBS machine and recover the backups? Or is that wrong? Or do I have to back up something from...
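In general yes: the datastore directory on the NAS is self-contained (chunk store plus index files), so a fresh PBS install can point at it. One caveat, as a sketch under assumed names and paths: `proxmox-backup-manager datastore create` tries to initialise a *new* chunk store, so an already-populated directory is usually re-attached by declaring it in the datastore config instead:

```shell
# Mount the NAS export (hypothetical server/export/path).
mount -t nfs nas.example.com:/backup /mnt/nas-datastore

# Re-declare the existing datastore instead of creating a new one.
cat >> /etc/proxmox-backup/datastore.cfg <<'EOF'
datastore: nas-store
	path /mnt/nas-datastore
EOF
```

Users, API tokens, and sync/prune job definitions live in /etc/proxmox-backup/ on the PBS host itself, so backing up that directory separately is what makes the rebuild painless.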
I removed the VPS and the disk, and I was able to remove the node from the list by deleting the directory /etc/pve/nodes/brokennode.
Now I would like to figure out how to remove the Ceph references to the broken node.
Thank you
I managed to remove the VPS by moving the file as you suggested. Now, how does removing the node from the list and its Ceph disks work? I have already done the cluster-removal procedure.
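For the Ceph side, the dead node's OSDs have to be taken out of the cluster map by hand since the node can no longer do it itself. A sketch, with the OSD id (3) and node name (brokennode) as placeholders for your own:

```shell
# Mark the dead node's OSD out, then purge it: this removes the
# OSD, its CRUSH entry and its auth key in one step.
ceph osd out osd.3
ceph osd purge 3 --yes-i-really-mean-it

# Once all its OSDs are gone, drop the now-empty host bucket
# from the CRUSH map.
ceph osd crush remove brokennode
```

Repeat the out/purge pair for each OSD the broken node owned before removing the host bucket.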
Hello everyone, when trying to migrate a VPS from one node of the cluster to another, this error comes out:
2023-04-06 12:39:24 use dedicated network address for sending migration traffic (x.x.x.1)
2023-04-06 12:39:25 starting migration of VM 100 to node 'fra' (x.x.x.1)
2023-04-06 12:39:25...
Hi all, I have a 3-node cluster and one node died badly. I re-added a third node and removed the dead one from the cluster, but I can't figure out how to remove it from Ceph, so it stays inside the cluster and I can't remove it. I've looked in the documentation but I don't understand how to remove it.
the...
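If the dead node also ran a monitor (and most likely it did in a 3-node setup), that is usually what keeps it visible in Ceph after the OSDs are purged. A sketch, with `brokennode` standing in for your node name:

```shell
# Remove the dead node's monitor from the monitor map.
ceph mon remove brokennode

# Then edit /etc/pve/ceph.conf by hand and delete the
# [mon.brokennode] section and the node's address from mon_host,
# so the remaining nodes stop trying to contact it.
```

After that, `ceph -s` should report only the surviving monitors and the node should disappear from the Ceph panels in the GUI.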
Hello. I put it into production, and to avoid problems I use NVMe SSDs in order to have throughput.
I went crazy setting up the OVH vRacks decently.
When I finish everything, maybe I'll post the configuration here so that it can be useful to someone without them going crazy like I did.
Hi everyone, I created a cluster
of 3 machines taken on OVH with the vRack.
The servers have two interfaces:
a public vmb0 pointing to one card,
while the cluster traffic runs on a bridge hooked to a separate VLAN on the vRack with network 10.0.0.0/29.
I was able without...
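For anyone following along, that two-bridge layout can be sketched in /etc/network/interfaces roughly as below; the interface names, addresses and VLAN id are assumptions, not the poster's actual values:

```
# Public bridge on the first NIC (addresses are placeholders).
auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10/24
    gateway 203.0.113.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# Cluster bridge on a vRack VLAN (VLAN 20 assumed) of the second NIC.
auto vmbr1
iface vmbr1 inet static
    address 10.0.0.2/29
    bridge-ports eno2.20
    bridge-stp off
    bridge-fd 0
```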
OK, I managed to show the ZFS volumes on all nodes.
I first created the volume on the first node and placed it in the pool.
Then I created the other volumes on the other nodes without checking the option to put them in the datastore.
When I did everything from the cluster management, I entered the names of the...
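The cluster-management step described above can also be done from the CLI: one shared storage definition that maps to the same-named local pool on each node. A sketch with hypothetical storage, pool and node names:

```shell
# Register a single ZFS storage entry for the whole cluster,
# restricted to the nodes that actually have the pool.
pvesm add zfspool tank-vm --pool tank/vm \
    --nodes node1,node2,node3 --content images,rootdir
```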
Hi, I'm testing a 3-node cluster. Before I venture into Ceph, where I already know I'm going to cry, I'm testing ZFS replication.
There are 3 nodes, each with an SSD holding the operating system and a second disk dedicated to ZFS.
If I try to insert a ZFS with the same name in the cluster it...
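Having the same pool name on every node is in fact what ZFS replication requires; the jobs are then defined per guest. A sketch, with VM id, target node and schedule as placeholder values:

```shell
# Replicate VM 100 to node2 every 15 minutes.
# Job ids follow the <vmid>-<n> convention.
pvesr create-local-job 100-0 node2 --schedule '*/15'

# Check the state of all replication jobs.
pvesr status
```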
I reinstalled both nodes, rebuilt the cluster, and re-added the OVH NAS as shared storage.
Same result: I added the test VPS to the HA section, but when I try to restart the node where the VPS is present, it is not migrated.
Is there any other operation to do that I don't understand?
Could this...
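One thing worth checking for the behaviour above: by default the HA shutdown policy is "conditional", which on a *reboot* stops the resources and restarts them on the same node instead of migrating them; migration on reboot has to be enabled explicitly. A sketch with a placeholder VM id:

```shell
# Put the guest under HA management.
ha-manager add vm:100 --state started

# In /etc/pve/datacenter.cfg, switch the shutdown policy so that
# a node reboot migrates HA guests away instead of stopping them:
#   ha: shutdown_policy=migrate

# Verify quorum and resource placement.
ha-manager status
```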