Prior to upgrading to 6.2-4, live migration using a Ceph storage backend worked like a charm. Recently, I migrated to Proxmox 6.2-4, and practically everything is working .... except for live migration. I get the following error:
2020-05-15 12:07:17 ERROR: Failed to sync...
When running a Proxmox 6.1 cluster with Ceph OSDs, what is the recommended I/O scheduler for the OSD drives? By default it is mq-deadline right now, but would there be any benefit to changing it to BFQ? Has anyone done a benchmark?
Thanks in advance.
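For anyone who wants to test this, the scheduler can be switched at runtime (a sketch, assuming the OSD sits on /dev/sdb; substitute your own device, and note the change does not survive a reboot without a udev rule):

# show the available schedulers; the active one is in brackets
cat /sys/block/sdb/queue/scheduler
# load BFQ if needed, then switch the device over for a quick benchmark
modprobe bfq
echo bfq > /sys/block/sdb/queue/scheduler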
Thanks, Alwin. I tried that, but I still could not get the monitor to start. I went through the configuration line by line, commenting it out until I got the monitors to start. Ultimately, this is the ONLY line I needed to comment out to make it work:
ms type = simple
I am documenting it...
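For reference, the relevant excerpt of my ceph.conf now looks like this (a sketch; the rest of the [global] section is unchanged):

[global]
    # ms type = simple    # commented out so the Nautilus monitors start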
I upgraded a Proxmox 5.4 cluster with Ceph 12.2 to Nautilus using the instructions provided. It was basically uneventful.
However, after restarting the nodes, I found that the monitor process would not run. I even tried running it manually:
/usr/bin/ceph-mon --debug_mon 10 -f...
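For anyone attempting the same, a full foreground invocation looks something like this (the monitor ID node1 is hypothetical; use your own mon's ID, typically the hostname):

/usr/bin/ceph-mon -i node1 --debug_mon 10 -f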
Unfortunately, after a few hours, they got disconnected again ... no pertinent errors inside the Windows guests either ... I am stumped ....
At least a workaround exists (use the e1000 driver), but personally I prefer the virtio drivers ...
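For anyone needing the workaround, the NIC model can be switched from the CLI (a sketch, assuming VM ID 100 and bridge vmbr0; adjust both to your setup, and note that omitting a MAC address will generate a new one):

qm set 100 --net0 e1000,bridge=vmbr0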
Hi!
Yes, it never happened before the upgrade, and I am using the 1.1.16 virtio drivers.
However, it seems that new packages were put up in the pvetest repository (since the initial announcement above), and I updated the hosts. I have replaced some Windows guests' NICs (back from e1000...
I performed an upgrade on two Proxmox servers that host both Linux and Windows 2008 R2 guests. All guests use VIRTIO for both drives and network cards. What I noticed is that after some time, the Windows guests' virtio network becomes unresponsive. The Windows guests therefore become unreachable...
As a workaround, have you tried using the Chromium browser (apt-get install chromium-browser) ... or installing Google Chrome for Ubuntu? That's what I use to access Proxmox ...
Hi there,
First off, please allow me to thank the Proxmox team for your effort in making Proxmox a great product! Kudos to the team!
Presently, I am hosting my Proxmox images on an NFS share. The performance is decent, but I would like to experiment with changing the wsize and rsize to see how...
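In case it helps anyone else experimenting, the mount options can be set per storage in /etc/pve/storage.cfg (a sketch; the storage name, server, export, and sizes below are illustrative, not my actual setup):

nfs: nfs-images
    path /mnt/pve/nfs-images
    server 192.168.1.10
    export /export/images
    content images
    options vers=3,rsize=32768,wsize=32768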
I have been running Proxmox in production for the past month and have been generally happy with it.
However, in this same period I have encountered instances where a running KVM instance dies. Unfortunately, I can't seem to find any log that might point me to what is wrong...
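In case anyone else hits this, these are the places I would check first (assuming a default Debian-based install; a silent guest death is often the kernel's OOM killer):

# anything QEMU/KVM logged around the time of death
grep -iE 'kvm|qemu' /var/log/syslog
# check whether the kernel killed the process for running out of memory
grep -iE 'oom|out of memory' /var/log/kern.log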
I was able to trace that the problem had NOTHING to do with the configuration ... rather, it was the network driver I used inside the KVM guest (E1000) ... when I switched to using VIRTIO, everything worked as expected!
Again, kudos on a job well done .... thanks to the Proxmox VE team!
Thanks for the prompt reply ...
Please see below for the contents of /etc/network/interfaces ...
# network interface settings
auto lo
iface lo inet loopback
iface eth0 inet manual
iface eth1 inet manual
auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100...
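For context, the rest of the file follows the usual bridge-on-bond pattern, roughly like this (illustrative values, not my exact config):

    bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0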
First off, allow me to congratulate the team on a wonderful product. Great work!
What I am trying to do is set up KVM machines and OpenVZ machines in one box. Everything seems to work out of the box. I did encounter a problem though ... I could not communicate with (ping, for instance) the KVM...