Thank you for your answer.
I didn't know about `pmgcm update-fingerprints`; I've corrected the fingerprints in the cluster.conf on both servers. If the issue appears again, I'll try it.
Hello,
Yes, I've used the one from PMG 6.4; however, it's still marked as "standalone" under ACME. Is this expected?
Here is the output for the first server (main):
I'll see how it goes during the next renewal.
Thanks.
Hello,
This morning I found that my cluster was no longer in sync.
I recently replaced my one-year certificate with a Let's Encrypt one.
So I wanted to know: could the renewal process have something to do with this error?
Thanks!
Hello,
I'm using PVE 6.2-6.
Is it possible to throttle the speed when moving a VM disk between two storages, to reduce the impact on all the other VMs?
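If there is a built-in limit, something like this is what I have in mind (storage name and values are just examples; if I read the docs right, `--bwlimit` is in KiB/s):

```shell
# Hypothetical example: move VM 100's scsi0 disk to storage 'nas-nfs',
# capped at ~50 MiB/s (51200 KiB/s)
qm move_disk 100 scsi0 nas-nfs --bwlimit 51200

# Or a cluster-wide default in /etc/pve/datacenter.cfg:
# bwlimit: move=51200
```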
Thanks.
Hello,
We upgraded our PVE cluster nodes from 5.4 to 6.2 a few weeks ago, and we also upgraded our ZFS pool to version 0.8.4. We also added an iSCSI storage (FreeNAS), which now hosts most of the data.
I'm now seeing that the ZFS cache (ARC) is eating a bit too much RAM (this is...
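From what I've read, the ARC defaults to using up to half of the host's RAM, and it can be capped with the `zfs_arc_max` module parameter (in bytes). A sketch of what I'm considering, with an example cap of 8 GiB:

```shell
# Cap the ZFS ARC at 8 GiB (example value; zfs_arc_max is in bytes)
ARC_MAX=$((8 * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=${ARC_MAX}"
# The line above goes into /etc/modprobe.d/zfs.conf, followed by
# 'update-initramfs -u' and a reboot (or echo the value into
# /sys/module/zfs/parameters/zfs_arc_max for a live change).
```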
Hi menk,
When you say "LVM over iSCSI", can I access the same LVM volume group from both hosts, or do I have to set up two of them, one per host? If the second option is the right one, it means the files are not shared between the two volume groups, is that correct?
Thank you!
Hello,
It's certainly written somewhere, but I didn't find it on https://pve.proxmox.com/wiki/Storage:_iSCSI
It's said that it's generally not recommended to mount the same iSCSI LUN on multiple hosts, as it can lead to data corruption.
Therefore, how does Proxmox handle this when it's used inside a cluster...
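From what I've gathered so far, the usual pattern seems to be putting LVM on top of the iSCSI LUN and marking it shared, so the cluster coordinates access through its own locking rather than several hosts mounting the same filesystem. A hypothetical /etc/pve/storage.cfg fragment (names, portal and target are made up):

```
iscsi: freenas-iscsi
        portal 192.168.1.10
        target iqn.2005-10.org.freenas.ctl:pve
        content none

lvm: vm-lvm
        vgname vg-freenas
        shared 1
        content images
```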
I've found the culprit! FreeNAS is causing the issue, and specifically the disks inside it (all SSD, 100% busy during the "not reachable" time). Now I'm trying to find out why they are saturating... so nothing to do on the Proxmox side, I think.
Hello,
I recently added a FreeNAS server to my network and set it up as an NFS server for Proxmox.
Ping is OK, no loss at all. The network is 10 Gbit/s.
Randomly, I get the following error on my nodes:
Jul 3 02:12:45 athos pvestatd[7137]: unable to activate storage 'freenas-a' - directory...
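Next time it happens I'll try a few checks from the affected node (the FreeNAS IP here is an example):

```shell
showmount -e 192.168.1.10   # is the NFS export still visible?
rpcinfo -p 192.168.1.10     # are rpcbind/mountd/nfsd responding?
pvesm status                # what does PVE think of 'freenas-a'?
```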
Hi, thank you very much for your reply; very interesting, as I was thinking about doing the same.
Apart from databases, you didn't experience any big issues when switching to a cloned snapshot from the second NAS? Like read-only or corrupted VMs?
How do you manage the FreeNAS updates on both your...
Hello,
I have some customers who would like to get a weekly spam report instead of the daily one. Would this be possible in some way?
Thank you,
Yann
Hello,
I'm looking into building a FreeNAS server which will host my VMs through a NFS share.
As Proxmox does not support snapshots of raw images hosted on an NFS server, I'm trying to find an "easy" way to do this.
I was thinking about creating one dataset per VM, which would allow me to...
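Roughly, the idea would be something like this on the FreeNAS side (pool and dataset names are hypothetical):

```shell
zfs create -p tank/vms/vm-100            # one dataset per VM
zfs snapshot tank/vms/vm-100@nightly     # snapshot just that VM's data
# zfs rollback tank/vms/vm-100@nightly   # ...and roll back only that VM
```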
Hi,
I wanted to know the risks of ending up with a broken VM in case of an error (filesystem, network, disks) during a live migration of a VM without shared storage using the above command. What happens if the migration is cancelled mid-process? Is the VM left partially moved?
Indeed, we'd like to...
Hi!
When upgrading a cluster from 5.2-10 to 5.4, does the upgrade disable SWAP in any way? And what would be the safest way to disable/remove SWAP on an existing installation?
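For the manual route, what I had in mind is the standard procedure (a sketch only; the actual fstab entry depends on the installation):

```shell
swapoff -a              # disable swap at runtime
grep swap /etc/fstab    # find the swap entry...
# ...then comment it out so it stays disabled after a reboot
```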
Thanks.
We connect to the hosts using a different port; however, they talk to each other.
Sorry, here we go:
2019-04-03 09:43:00 100-0: start replication job
2019-04-03 09:43:00 100-0: guest => VM 100, running => 0
2019-04-03 09:43:00 100-0: volumes => local-zfs-hdd:vm-100-disk-1
2019-04-03 09:43:01...
I'm having the same issue: when I move a VM back to its original host, replication fails and I have to manually delete the affected VM's snapshots to get replication working again. Here are the logs from the failure:
2019-04-03 08:28:03 100-0: end replication job with error: command 'set...
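For reference, this is how I clean up manually (the dataset name is from my setup and the snapshot timestamp is an example; the stale ones are the `__replicate_*` snapshots):

```shell
# List leftover snapshots for the affected disk
zfs list -t snapshot -r rpool/data/vm-100-disk-1
# Destroy the stale replication snapshot so the job can start fresh
zfs destroy rpool/data/vm-100-disk-1@__replicate_100-0_1554272883__
```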