It's a mirror. I don't know why it's shown like that in initramfs.
I have it mounted as read-only and the zpool status output is: https://prnt.sc/vKzNL0nUFALc
If just attaching a new drive isn't possible, is there a way of migrating the VMs to a new PVE over the network (Samba, maybe)?
I have an SSD with a failed controller that was part of a mirrored ZFS vdev. Now the system boots into initramfs and says it can't import the pool "rpool". When I issue
zpool import -f rpool
it says that there's no such pool or dataset. However, in "zpool import" I can see that rpool has...
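For anyone in the same spot: a sequence like the following sometimes gets a degraded mirror imported from the initramfs prompt. This is only a sketch based on the symptoms described; the numeric pool ID is a placeholder (use the one printed by the bare "zpool import"), and the read-only import is a precaution, not something from the original post.

```shell
# List importable pools; note the numeric pool ID printed next to rpool.
zpool import

# Importing by numeric ID (placeholder value shown) can work when importing
# by name fails; -o readonly=on keeps the surviving mirror disk safe.
zpool import -f -o readonly=on 1234567890123456 rpool

# Once imported, check the state of the mirror.
zpool status rpool
```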
Suddenly I started getting: TASK ERROR: could not activate storage 'backup1': backup1: error fetching datastores - 500 Server closed connection without sending any data back
and I can't figure out what might be causing it.
When issuing: proxmox-backup-client version --repository...
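A check like the one below can help narrow down whether the 500 error comes from the network/TLS side or from the datastore itself. The hostname, realm, and datastore name are placeholders, not values from the original post:

```shell
# Repository string format: user@realm@host:datastore (all placeholders here)
proxmox-backup-client version --repository root@pam@pbs.example.com:backup1

# A raw request to the PBS API port shows whether the server answers at all,
# independently of the datastore configuration (hostname is a placeholder).
curl -k https://pbs.example.com:8007/api2/json/version
```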
I don't use any LXC containers, just a few VMs.
Yes, I have double-checked that the MACs are unique; moreover, the MAC address doesn't change when I migrate. It stays the same, and it was working flawlessly before the migration.
Now for the last question: I noticed something strange (strange as in I don't...
I've been facing a rather bizarre issue on PVE 6.4-15.
I have a small 2-node cluster, and every time I migrate a VM to the other node it no longer has network connectivity.
To solve the issue I have to remove the NICs and re-add them.
Afterwards I have to modify...
I have found the culprit to be the migration process from the other node.
Although the HWADDRESS of the virtio NIC is kept during the migration process, I have to remove the NIC and add it again every time I migrate, and of course change the connecting NIC...
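The manual remove/re-add workaround described above can be done from the CLI. VM ID 100, the MAC address, and the bridge name below are placeholders, not values from the original post:

```shell
# Note the current net0 line (model, MAC, bridge) before touching anything.
qm config 100 | grep ^net0

# Remove and re-add the NIC, reusing the same MAC so the guest OS keeps
# its interface naming (all values below are placeholders).
qm set 100 --delete net0
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
```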
I have a host running only one VM, with an E5-2667 v4 CPU (8 cores / 16 threads).
I have assigned all threads to the VM, which is the only VM on the host.
I can see ~900% CPU usage on the kvm process (no other host process consumes more than 50%), while at the same time the VM itself is at ~40-45%...
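One thing to keep in mind when comparing those numbers: top reports process CPU as a percentage of a single core, so ~900% on a 16-thread host is a bit over half of total host capacity, which is in the same ballpark as what the guest reports. A quick sanity check of that arithmetic:

```python
# top shows process CPU relative to one core, so 900% on a 16-thread
# host corresponds to this share of total host capacity.
threads = 16
kvm_pct = 900  # kvm process usage as reported by top
host_share = kvm_pct / (threads * 100)
print(f"{host_share:.0%}")  # 56%
```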
I've been facing this issue and have run out of ideas about what the problem might be.
The host can access the internet just fine and everything works.
However, within the VM I can only ping the host and nothing else.
The firewall within Proxmox is disabled everywhere.
Here is my config from the...
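When a guest can reach the host but nothing beyond it, the usual suspects are the default route inside the guest and the bridge the NIC is attached to. A minimal set of checks (the gateway address and bridge name are placeholders):

```shell
# Inside the VM: confirm a default route exists and note the gateway.
ip route show

# Ping the gateway first (placeholder address) rather than a public host,
# to separate bridge/VLAN problems from DNS or upstream routing.
ping -c 3 192.168.1.1

# On the Proxmox host: confirm the VM's tap interface is on the bridge.
ip link show master vmbr0
```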
I managed to fix the issue, and I wanted to share the solution in case another member faces the same problem.
The server is running cPanel, and there's a function that hardens the /tmp partition.
You can find more here...
I face a very weird problem, and although I found a few threads with a "similar" problem, I wasn't able to diagnose and fix it.
This is stopping the replication from finishing and also makes the guest unusable.
The guest system hangs almost immediately when issuing fsfreeze-status.
qm guest cmd...
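For anyone hitting the same hang, the guest agent's freeze state can be inspected and cleared with qm guest cmd (VM ID 100 is a placeholder):

```shell
# Ask the guest agent for the current filesystem freeze state.
qm guest cmd 100 fsfreeze-status

# If the filesystems are stuck frozen after a failed snapshot or
# replication, thawing usually makes the guest responsive again.
qm guest cmd 100 fsfreeze-thaw
```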
Do you believe our scenario will be trustworthy if we use a Raspberry Pi to keep the quorum?
I have no experience with Raspberry Pi, but I just found out that I can easily install Debian on it, so I could also use it as a firewall for the iLO management ports.
Their SLA for the connectivity between the DCs (they are both from the same company and have direct links between them) is 99.99%,
with a maximum latency of 9 ms.
My tests have shown latency of no more than 7 ms, though.
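As a rough sanity check on that SLA figure, 99.99% availability allows only about 52-53 minutes of link downtime per year:

```python
# Allowed downtime per year under a 99.99% availability SLA.
sla = 0.9999
minutes_per_year = 365 * 24 * 60  # 525600
downtime_minutes = (1 - sla) * minutes_per_year
print(round(downtime_minutes, 1))  # 52.6
```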
If connectivity is lost between the DCs and one node goes down, HA will not work...
I have decided to use 2 HP G9 servers and set up a 2-node HA cluster for production (at the moment we cannot afford a 3rd node),
and will use a VM hosted on a separate server in another DC (around 6-7 ms ping) that will keep the quorum with corosync-qnetd.
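The QDevice setup for a 2-node cluster boils down to a few commands; the IP below is a placeholder for the quorum VM (or Raspberry Pi):

```shell
# On the external machine that will hold the third vote:
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# On one cluster node, register the external vote
# (10.0.0.50 is a placeholder for the qnetd host):
pvecm qdevice setup 10.0.0.50

# Verify: the cluster should now report 3 expected votes.
pvecm status
```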
The servers feature 2 NICs with 4...