Search results

  1. IO Scheduler with SSD and HWRaid

    One more thing about the guest: in many distributions using kernels 3.16 and later, the OS will not use a traditional IO scheduler in VMs, but rather the multiqueue block layer for fast devices, so when # cat /sys/block/vda/queue/scheduler prints none, you have nothing else to do, your... (see the scheduler check sketched after this list)
  2. IO Scheduler with SSD and HWRaid

    This recommendation is not useful, as it's not based on facts, only on a belief that "defaults must be the best". Unfortunately, defaults are often not good; in this case they are really bad. We have to talk about the host and the guest separately, but the general rule is you only want IO...
  3. Ceph (or CephFS) for vzdump backup storage?

    Multiple active MDS daemons do not work, which means that if one goes down, you have to set another one active for CephFS. This is an annoyance, yes, but not a showstopper for us. We would like to see CephFS implemented as a Proxmox storage plugin for vzdump backups. Backup storage has lower uptime...
  4. Ceph (or CephFS) for vzdump backup storage?

    I'm not sure I follow... every software project has a bug tracker (Proxmox as well); that does not mean it's unstable. Are you aware of any specific bug that would make CephFS unusable as a Proxmox storage plugin? Or do you just like to quote links for no particular reason?
  5. Ceph (or CephFS) for vzdump backup storage?

    What bug? According to the Jewel release notes, CephFS is stable and the necessary repair and recovery tools are there! There is even a volume manager included that could be used to create the Proxmox storage plugin! See here: http://ceph.com/releases/v10-2-0-jewel-released/
  6. HowTo: Upgrade Ceph Hammer to Jewel

    As Jewel includes CephFS, the question arises: will CephFS become a storage plugin in Proxmox, enabling the storage of vzdump backups and qcow2 disk images? Is this feature on the roadmap at all?
  7. Ceph (or CephFS) for vzdump backup storage?

    We have a small Ceph Hammer cluster (only a few monitors and fewer than 10 OSDs), yet it proves very useful for low-IO guest storage. Our Ceph cluster runs on our Proxmox nodes, but has its own, separate gigabit LAN, and performance is adequate for our needs. We would like to use it as backup...
  8. Live KVM migration without share storage

    Just wanted to revisit this, did anything happen to this planned feature in the last two months? Any idea when it might hit the test repository? Or is it already out, and I missed it?
  9. Random Restarting

    Spontaneous reboots with nothing peculiar in syslog are most likely caused by ZFS during high memory pressure. We have had many of these occur on wildly different hardware, on both Proxmox 3.x (kernel 2.6.32) and 4.x (kernel 4.4), where the only common thing was both system...
  10. Fresh install 4.4: ZFS performance 6x Samsung 850 evo

    You forget that you are only reading data from 4 disks simultaneously; the remaining 2 disks hold parity, since you are running RAIDZ2. So the theoretical limit of an uncompressed, fully sequential read is four disks' worth of sequential throughput rather than six, which is closer to 1600 MBytes/sec. The rest of the difference could easily come from...
  11. proxmox 4 to proxmox 3.4

    I remember testing (and even partially moving to) Proxmox 4 and LXC last fall, a decision we later reversed, moving back to 3.x for another year. We used ext4 on LVM on an Adaptec RAID10, and the many parallel ext4 journal (write) processes were generating much higher load on the same...
  12. Random Restarting

    According to our experience, the system resets when ZFS puts high memory pressure on a system that is already short on RAM. It has also been reported that swap on ZFS is connected to this. Since we set these variables upon installation... zfs set primarycache=metadata rpool/swap zfs set... (a sketch of typical swap-zvol settings follows after this list)
  13. Proxmox ZFS RAIDZ1 pool on SSD drives - sync parameter

    Well, I'm not sure you can create a RAIDZ1 pool out of 2 SSDs (you need at least 3 drives), and even if you could, it would have no advantage over a RAID1 (mirror) of 2 drives, only disadvantages (much higher CPU usage due to parity calculation), and you would still lose an entire disk (like with the...
  14. Backup naming policy

    This has been one of the most requested features in Proxmox due to its great positive impact on usability, and probably one of the easiest to implement, yet the Proxmox team has been unwilling to add it to the backup job options for some unknown reason for years now. The argument that it...
  15. Storage Problems With Proxmox/Ceph

    Speaking of Jewel: it supports CephFS. Is there any way currently to mount CephFS as storage in Proxmox or do we have to wait until official support comes?
  16. Storage Problems With Proxmox/Ceph

    Yeah how?
    root@proxmox:~# pveceph install -version jewel
    400 Parameter verification failed.
    version: value 'jewel' does not have a value in the enumeration 'hammer'
  17. Storage Problems With Proxmox/Ceph

    @Nicolas Dey @spirit @dietmar I am experiencing very similar issues with Ceph. I have set up a two-bridge network in my five-node Proxmox 4.3 cluster: vmbr0 is for regular cluster traffic (10.10.0.x), and vmbr1 is for Ceph traffic (192.168.0.x, set up with pveceph init). After installing Ceph... (see the initialisation sketch after this list)
  18. Multiple subnets on same interface?

    At the moment I have two ethernet ports in each cluster node, each of them connected to a bridge: eth0 > vmbr0 is 10.10.10.x and eth1 > vmbr1 is 192.168.0.x. I would like to create another bridge (vmbr2) connected to eth1 with the 172.16.0.x subnet; is this possible somehow? (one possible approach is sketched after this list)
  19. Frequent CPU stalls in KVM guests during high IO on host

    Well, our nodes are very different hardware-wise, yet the problem surfaces on all of them (but not in all guests). We have experienced the issue on single- and dual-socket servers sporting 32GB to 96GB of RAM, but there is one thing in common: all of them use ZFS (KVM guests are running on...
  20. Frequent CPU stalls in KVM guests during high IO on host

    If you read carefully, the 4GB of RAM belongs to the KVM guest that produced the errors. These CPU stalls are happening on all of our nodes, regardless of hardware configuration. Yes, we tried monitoring the hosts and the guest nodes as well; no particular cause has shown up.
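
A minimal sketch of the scheduler check mentioned in result 1, assuming a virtio disk named vda inside the guest; the echo line is only relevant when the guest still exposes a traditional scheduler and noop is available (an assumption, not something stated in the post):

    # inside the KVM guest: show the active IO scheduler for the virtio disk
    cat /sys/block/vda/queue/scheduler
    # "none" means the multiqueue block layer is in use and nothing needs changing;
    # otherwise a low-overhead scheduler such as noop can be selected at runtime
    echo noop > /sys/block/vda/queue/scheduler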
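
The zfs set commands in result 12 are cut off by the snippet; below is a sketch of the swap-zvol properties commonly recommended by the ZFS on Linux FAQ, shown for illustration and not necessarily the exact set used in that post (rpool/swap is the zvol name from the snippet):

    # commonly recommended tuning for a swap zvol (ZFS on Linux FAQ)
    zfs set primarycache=metadata rpool/swap
    zfs set secondarycache=none rpool/swap
    zfs set compression=zle rpool/swap
    zfs set logbias=throughput rpool/swap
    zfs set sync=always rpool/swap
    zfs set com.sun:auto-snapshot=false rpool/swap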
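
For the separate Ceph bridge described in result 17, here is a hedged sketch of how such a setup is typically initialised on Proxmox 4.x; the subnet comes from the snippet, while the device name /dev/sdb is a placeholder assumption:

    # run once on the first node to pin Ceph traffic to the dedicated subnet
    pveceph init --network 192.168.0.0/24
    # then create monitors and OSDs as usual, for example:
    pveceph createmon
    pveceph createosd /dev/sdb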
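
Regarding result 18, one possible approach (a sketch, not the only option) is to carry the extra 172.16.0.x subnet on the existing vmbr1 instead of attaching eth1 to a second bridge; the .254 host address is a placeholder:

    # quick runtime test: add the second subnet's address to the existing bridge
    ip addr add 172.16.0.254/24 dev vmbr1
    # guests attached to vmbr1 can then use 172.16.0.x addresses as well;
    # to make the host address permanent, add an equivalent stanza to
    # /etc/network/interfaces on each node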
