Search results

  1. VM freezes for a few minutes after migration and gets time offset

    I'm starting to think that this issue might be related to the guest kernel. I'll try to narrow it down, but my guess is that it happens with guests running a kernel older than 3.13. UPDATE: nope, it happens with the 3.13 guest kernel too.
  2. VM freezes for a few minutes after migration and gets time offset

    I could upgrade one of the five production nodes. Is a downgrade back to stable possible using similar methods? Even if the testing goes fine I would like to downgrade back to stable (we have the community subscription), knowing that the fix will make its way into the stable repo someday.
  3. VM freezes for a few minutes after migration and gets time offset

    Hi, I experience VM lockups after live migration. These lockups usually last for 2-5 minutes with 100% CPU usage; the VM does not respond, and the network and disk graphs show a 200-400 petabyte spike. After the lockup the clock inside the VM is off by 16 to 360 seconds. The VMs are...
  4. Ceph problem when master node is out

    Yes, 3 monitors are enough for a small to medium sized cluster. And as spirit has pointed out, please show us your pool size configuration; your CRUSH map would also be helpful.
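
    (A quick sketch of how to pull that information, assuming a stock Ceph install and a pool named "rbd" rather than anything confirmed in the thread:)

      # Show the replication size of the pool:
      ceph osd pool get rbd size
      # Dump the CRUSH map and decompile it into readable text:
      ceph osd getcrushmap -o crushmap.bin
      crushtool -d crushmap.bin -o crushmap.txt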
  5. Ceph problem when master node is out

    Actually you need at least 3 monitors to form a quorum. Two are neither safe nor recommended. From the Ceph documentation:
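
    (Aside, not the documentation quote: monitor quorum is a strict majority, i.e. floor(N/2)+1 of N monitors must be up. A small sketch:)

      # Majority needed for N monitors: with N=3 you can lose one monitor,
      # with N=2 losing either monitor stops the cluster.
      N=3; echo $(( N / 2 + 1 ))    # -> 2
      # Check which monitors are currently in quorum:
      ceph quorum_status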
  6. Ceph - High I/O wait on OSD add/remove

    Thanks Marius for the detailed response! Could you please point me to the article regarding Firefly and the SSD journals? That would be an interesting update for me. :) Getting rid of those 6 SSDs and putting spinning disks in would give us a small storage space boost which would be handy...
  7. Ceph - High I/O wait on OSD add/remove

    Regarding SSD-based journals, I think it depends on the use case. The journal is used for 2 purposes: 1) gathering small writes and flushing them in batches, to improve seek times and the overall performance of the backing OSD (which, per the recommendation, should be a single spinning disk); 2)...
  8. Load balancing on Proxmox

    In this context I really don't understand what you mean by load balancing. What you've described is more like resource balancing: you migrate your VMs to balance the load on your hosts. But 'load balancing' means distributing requests between more than one real server. Anyway, they are just terms...
  9. Load balancing on Proxmox

    In my opinion, if you need more resources than a single physical machine can provide for an app or service, then virtualisation is not a good option, as it means some overhead (and not just performance-wise). In that case you have to design your infrastructure to use a HA load-balancer tier and a HA...
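
    (To make the distinction concrete, a minimal HAProxy-style sketch of request-level load balancing; all names and addresses are hypothetical:)

      frontend www
          bind *:80
          default_backend app
      backend app
          balance roundrobin
          server app1 10.0.0.11:80 check
          server app2 10.0.0.12:80 check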
  10. "system" full?

    What does an fsck run say?
  11. "system" full?

    My guess would be that pvdisplay reports that all of the PV space on /dev/sda5 is allocated to your root logical volume, so LVM-wise you cannot allocate more space on it. The actual filesystem usage on root is another question, which involves the filesystem on the logical volume and not the physical...
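
    (A short sketch of how to check this; the device and LV names are the usual Proxmox defaults, not confirmed in the thread:)

      pvdisplay /dev/sda5          # "Free PE" shows how many extents are unallocated
      vgdisplay pve                # free space at the volume group level
      lvdisplay /dev/pve/root      # size of the root logical volume
      df -h /                      # filesystem-level usage, a separate question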
  12. [SOLVED] Mixed kernels 2.6 & 3.10 : no quorum

    Have you tried KVM live migration between a 2.6.32 and a 3.10 node?
  13. Bug in iPXE while booting DHCP

    Thanks! It appears that cman has to be on another physical network. In my case I've tried with a different network interface but the same VLAN, and it didn't work.
  14. Bug in iPXE while booting DHCP

    Sounds a bit scary but might work, because pvecm status and clustat show the nodes' names and not the actual IP addresses. I'll give it a try and let you know how it went. Thanks!
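
    (For reference, a few commands to confirm what those names actually resolve to; the node name is hypothetical:)

      pvecm status              # cluster membership as PVE sees it
      clustat                   # RGManager view of the nodes
      getent hosts node1        # how a node name resolves, e.g. via /etc/hosts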
  15. Bug in iPXE while booting DHCP

    I'm having this problem too since I switched to the 3.10 kernel. How did you move your PVE network to another physical network? I might try that and see if it helps.
  16. New 3.10.0 Kernel

    Is it normal behaviour that after live migrating a KVM instance from a host running kernel 3.10 to a host running 2.6.32, the VM locks up and goes into the "internal-error" state? The hardware is identical, the CPU is set to KVM64, and the guest is a Linux system running a 3.13 kernel. Here's the...
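
    (A minimal sketch of checking the CPU type and triggering such a migration; the VMID and target node are assumptions:)

      qm config 101 | grep ^cpu          # confirm the VM uses the kvm64 CPU type
      qm migrate 101 othernode -online   # live-migrate to the other host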
  17. Automated Install under KVM?

    Try http://fai-project.org. Combined with the Proxmox API and a configuration management system (like Puppet, Chef or Ansible), you can have your systems installed and configured automatically.
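
    (A rough sketch of the Proxmox side using pvesh; node name, VMID and options are assumptions, not from the post:)

      # Create a VM that boots from the network so FAI can install it over PXE:
      pvesh create /nodes/pve1/qemu -vmid 101 -memory 2048 \
          -net0 virtio,bridge=vmbr0 -boot n
      qm start 101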
  18. KVM/Qemu Online Migration often ends up with 100% cpu, no ping, frozen VM

    Is the freeze permanent, or will it disappear after a few minutes? I had the same issue, and it turned out that the hosts were using the intel_pstate driver, which altered the clock frequency and affected timekeeping in the guests, so during live migration the guests were frozen for a few minutes...
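
    (To check whether a host is affected, something like this; the sysfs path is standard on Linux:)

      # Which frequency-scaling driver is active on the host?
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver   # e.g. intel_pstate
      # If it is intel_pstate, booting the host with "intel_pstate=disable"
      # on the kernel command line falls back to acpi-cpufreq.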
  19. 3.2 PXE boot connection timed out

    Hi, PXE boot with virtio or e1000 interfaces ends with a connection timeout during IP acquisition. There's no DHCPACK after the DHCPOFFER message. RTL8139 seems to be working, but it's damn slow... Any ideas what might be in the background? May 15 14:51:13 pxeboot dhcpd: DHCPDISCOVER from...
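
    (One way to see where the handshake stops, assuming the VMs sit on the default vmbr0 bridge:)

      # Watch the DHCP exchange on the host bridge:
      tcpdump -ni vmbr0 port 67 or port 68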
  20. FreeBSD with Proxmox 3.2

    You made my day! It really works! :)
