Search results

  1. NGINX Security Headers

    Hi! I believe nginx and HAProxy, as reverse proxies, will allow you to add or replace headers, including CSP (see the nginx sketch after this list). Thanks!
  2. Proxmox 6.2-4 Live migration

    unfortunately pvesm list images_vm gives an error:

    rbd error: rbd: listing images failed: (2) No such file or directory

    /etc/pve/storage.cfg:

    dir: local
        path /var/lib/vz
        content vztmpl,backup,images,iso
        maxfiles 2
        shared 0

    lvmthin: local-lvm...
  3. Proxmox 6.2-4 Live migration

    source node (pveversion -v):

    proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
    pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
    pve-kernel-5.4: 6.2-1
    pve-kernel-helper: 6.2-1
    pve-kernel-5.3: 6.1-6
    pve-kernel-5.4.34-1-pve: 5.4.34-2
    pve-kernel-5.3.18-3-pve: 5.3.18-3
    pve-kernel-4.15: 5.3-3...
  4. Proxmox 6.2-4 Live migration

    Prior to upgrading to 6.2-4, live migration using a Ceph storage backend worked like a charm. Recently, I migrated to Proxmox 6.2-4, and practically everything is working like a charm ... except for live migration. I get the following error: 2020-05-15 12:07:17 ERROR: Failed to sync...
  5. For Proxmox and Ceph, which IO Scheduler is optimum?

    When running a Proxmox 6.1 cluster with Ceph OSDs, what is the recommended IO scheduler for the OSD drives? By default it is mq-deadline right now, but would there be a benefit to changing it to BFQ? Has anyone done a benchmark? Thanks in advance. (See the scheduler sketch after this list.)
  6. Upgraded Ceph Monitor (from Luminous to Nautilus) not starting

    Thanks, Alwin. I tried that, but I still could not get the monitor to start. I went through the configuration line by line, commenting it out until I got the monitors to start. Ultimately, this is the ONLY line I needed to comment out to make it work: ms type = simple (see the ceph.conf fragment after this list). I am documenting it...
  7. Upgraded Ceph Monitor (from Luminous to Nautilus) not starting

    I upgraded a Proxmox 5.4 cluster with Ceph 12.2 to Nautilus using the instructions provided. It was basically uneventful. However, after restarting the nodes, I found that the monitor process does not run. I even tried to run it manually thus: /usr/bin/ceph-mon --debug_mon 10 -f...
  8. test repository updates (kvm 0.14.0)

    Unfortunately, after a few hours, they got disconnected again ... no pertinent errors inside the Windows guests either ... I am stumped ... at least a work-around exists (use the e1000 drivers), but personally I prefer the virtio drivers ...
  9. test repository updates (kvm 0.14.0)

    Hi! Yes, it never happened before the upgrade, and I am using the 1.1.16 virtio drivers. However, it seems that new packages were put up in the pvetest repository (since the initial announcement above) and I updated the hosts. I have replaced some Windows guests' NIC (back from e1000...
  10. test repository updates (kvm 0.14.0)

    I performed an upgrade on two Proxmox servers that host both Linux and Windows 2008 R2 guests. All guests use VIRTIO for both drives and network cards. What I noticed is that after some time, the Windows guests' virtio network becomes unresponsive. The Windows guests therefore become unreachable...
  11. firefox 3.6.14 java applets bug - vm console - DO NOT upgrade!!!

    As a work-around, have you tried using the Chromium browser (apt-get install chromium-browser) ... or installing Google Chrome for Ubuntu? That's what I use to access Proxmox ...
  12. Proxmox 1.5 and NFS

    That's great! Thanks, Dietmar!
  13. Proxmox 1.5 and NFS

    Hi there! First off, please allow me to thank the Proxmox team for your effort in making Proxmox a great product! Kudos to the team! Presently, I am hosting my Proxmox images on an NFS share. The performance is decent, but I would like to experiment with changing the wsize and rsize to see how...
  14. memory ballooning & page sharing

    Hi mobius! The proper setting in your VMID.conf file should be: args: -mem-path /hugepages (see the hugepages sketch after this list)
  15. Crashes

    PERFECT! Exactly what I was looking for ... thanks!
  16. Crashes

    I have been running Proxmox in production for the past month and have been generally happy with it overall. However, in this same period I have encountered instances where a running KVM instance dies. Unfortunately, I can't seem to find any log that might direct me to what is wrong...
  17. Network Issue between KVM and OpenVZ

    I was able to trace that the problem had NOTHING to do with the configuration ... rather, it was the network driver I used inside the KVM guest (E1000) ... when I switched to using VIRTIO, everything worked as expected (see the VMID.conf sketch after this list)! Again, kudos on a job well done ... thanks to the Proxmox VE team!
  18. Network Issue between KVM and OpenVZ

    Thanks, Dietmar. Now that I know that it SHOULD work, I will try to find out what may be causing this ... I will post whatever findings I get ...
  19. Network Issue between KVM and OpenVZ

    Thanks for the prompt reply ... Please see below for the contents of /etc/network/interfaces ...

    # network interface settings
    auto lo
    iface lo inet loopback

    iface eth0 inet manual

    iface eth1 inet manual

    auto bond0
    iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100...
  20. Network Issue between KVM and OpenVZ

    First off, allow me to congratulate the team on a wonderful product. Great work! What I am trying to do is to set up KVM machines and OpenVZ machines in one box. Everything seems to work out of the box. I did encounter a problem though ... I could not communicate (ping, for instance) with the KVM...
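
Sketches for the results above

Result 1 mentions adding or replacing response headers, including CSP, at a reverse proxy. A minimal nginx sketch, assuming nginx fronts the Proxmox web UI on its default port 8006; the hostname and policy value are placeholders, not from the thread:

    server {
        listen 443 ssl;
        server_name pve.example.com;              # placeholder hostname

        location / {
            proxy_pass https://127.0.0.1:8006;    # Proxmox VE web UI backend
            # drop any CSP the backend already sends, then set our own;
            # "always" attaches the header to error responses as well
            proxy_hide_header Content-Security-Policy;
            add_header Content-Security-Policy "default-src 'self'" always;
        }
    }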
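Result 5 asks about the IO scheduler for Ceph OSD drives. Inspecting and switching the scheduler at runtime goes through sysfs; sda below is a placeholder device, and the change does not persist across reboots (a udev rule would be needed for that):

    # list the available schedulers; the active one is shown in brackets
    cat /sys/block/sda/queue/scheduler
    # switch this device to bfq at runtime (run as root)
    echo bfq > /sys/block/sda/queue/scheduler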
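Results 6 and 7 end with a Nautilus monitor that only starts once one line is commented out. A sketch of what that edit could look like in /etc/ceph/ceph.conf, based solely on the line quoted in the thread:

    [global]
        # commented out per the thread; with it active, the upgraded
        # monitor refused to start after the Luminous-to-Nautilus upgrade
        #ms type = simple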
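Result 14 quotes the VMID.conf line for backing a guest with hugepages. A sketch of the surrounding setup, assuming a Proxmox 1.x-era layout where VM configs live under /etc/qemu-server/; the mount point and page count are illustrative, not from the thread:

    # reserve some 2 MiB hugepages and mount hugetlbfs (run as root)
    echo 1024 > /proc/sys/vm/nr_hugepages
    mkdir -p /hugepages
    mount -t hugetlbfs hugetlbfs /hugepages

    # then, in /etc/qemu-server/VMID.conf, the line quoted in the thread:
    args: -mem-path /hugepages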
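Results 8-10 and 17 revolve around switching a guest NIC between e1000 and virtio. In a VMID.conf this is the model prefix on the net line; a sketch where the MAC address and bridge name are placeholders:

    # virtio NIC attached to bridge vmbr0
    net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
    # the e1000 work-around mentioned in the threads would instead read:
    # net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr0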
