Search results

  1. IO Scheduler with SSD and HWRaid

    We already tested it on Proxmox, but I look forward to seeing your benchmarks. They do recommend deadline, but not for VMs (Debian 8 disables the IO scheduler inside VMs and on NVMe SSDs). Also, they are not talking about HW RAID cards, for which noop is the only sensible choice.
  2. IO Scheduler with SSD and HWRaid

    Yes, because SSDs are usually much faster than what the schedulers were designed for. The above means you have the deadline scheduler set for your disk. You will be better off with the noop scheduler when inside a VM.
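
    As an illustration of switching to noop (the device name sda is only an example, not taken from the thread):

      # switch the active scheduler for one disk at runtime (not persistent)
      echo noop > /sys/block/sda/queue/scheduler

      # to make it persistent, add elevator=noop to GRUB_CMDLINE_LINUX in /etc/default/grub,
      # then regenerate the boot configuration
      update-grub
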
  3. IO Scheduler with SSD and HWRaid

    One more thing about the guest: in many distributions using kernel 3.16 and later, the OS will not use a traditional IO scheduler in VMs, instead using the multiqueue block layer for fast devices, so when you see # cat /sys/block/vda/queue/scheduler print "none", then you have nothing else to do, your...
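
    For reference, on kernels still using the single-queue block layer the same file lists every available scheduler with the active one in brackets; the output below is illustrative, not taken from the thread:

      # inside the guest; vda is the virtio disk from the snippet above
      cat /sys/block/vda/queue/scheduler
      noop [deadline] cfq    # brackets mark the active scheduler
      # on a blk-mq kernel the file simply prints: none
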
  4. IO Scheduler with SSD and HWRaid

    This recommendation is not useful, as it's not based on facts, only on a belief that "defaults must be the best". Unfortunately, defaults are often not good; in this case they are really bad. We have to talk about the host and the guest separately, but the general rule is you only want IO...
  5. Ceph (or CephFS) for vzdump backup storage?

    Multiple active metadata servers do not work, which means that if the active one goes down, you have to set another one active for CephFS. This is an annoyance, yes, but not a showstopper for us. We would like to see CephFS implemented as a Proxmox storage plugin for vzdump backups. Backup storage has lower uptime...
  6. Ceph (or CephFS) for vzdump backup storage?

    I'm not sure I follow... every software project has a bug tracker (Proxmox as well); that does not mean it's unstable. Are you aware of any specific bug that would make CephFS unusable as a Proxmox storage plugin? Or do you just like to quote links for no particular reason?
  7. Ceph (or CephFS) for vzdump backup storage?

    What bug? According to the Jewel release notes, CephFS is stable and the necessary repair and recovery tools are there! There is even a volume manager included, that could be used to create the Proxmox storage plugin! See here: http://ceph.com/releases/v10-2-0-jewel-released/
  8. HowTo: Upgrade Ceph Hammer to Jewel

    As Jewel includes CephFS, it begs the question: will CephFS become a storage plugin in Proxmox, enabling the storage of vzdump backups and qcow2 disk images? Is this feature on the roadmap at all?
  9. Ceph (or CephFS) for vzdump backup storage?

    We have a small Ceph Hammer cluster (only a few monitors and fewer than 10 OSDs), yet it proves very useful for low-IO guest storage. Our Ceph cluster runs on our Proxmox nodes but has its own separate gigabit LAN, and performance is adequate for our needs. We would like to use it as backup...
  10. Live KVM migration without share storage

    Just wanted to revisit this, did anything happen to this planned feature in the last two months? Any idea when it might hit the test repository? Or is it already out, and I missed it?
  11. Random Restarting

    Spontaneous reboots with nothing peculiar in syslog are most likely caused by ZFS during high memory pressure situations. We have had many of these occur on wildly different hardware architectures, on both 3.x (kernel 2.6.32) and 4.x (kernel 4.4), where the only common thing was both system...
  12. Fresh install 4.4: ZFS performance 6x Samsung 850 evo

    You forget that you are only reading data simultaneously from 4 disks; the remaining 2 disks hold parity, since you are running RAIDZ2. So the theoretical limit of an uncompressed, fully sequential read is closer to 1600 MBytes/sec. The rest of the difference could easily come from...
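
    Rough arithmetic behind that figure (the per-disk throughput is an assumption, not a number from the thread): a 6-disk RAIDZ2 stripes data over 4 disks, so at roughly 400 MBytes/sec of sequential read per SATA SSD the ceiling is about 4 x 400 ≈ 1600 MBytes/sec; the two parity disks contribute no usable read bandwidth for uncompressed sequential reads.
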
  13. proxmox 4 to proxmox 3.4

    I remember testing (and even partially moving to) Proxmox 4 and LXC last fall, a decision we later reversed, moving back to 3.x for another year. We used ext4 on LVM on an Adaptec RAID10, and the many parallel ext4 journal (writing) processes were generating much higher load on the same...
  14. Random Restarting

    In our experience the system resets when ZFS puts high memory pressure on a system that is already short on RAM. It has also been reported that swap on ZFS is connected to this. Since we set these variables upon installation... zfs set primarycache=metadata rpool/swap zfs set...
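
    The snippet is cut off after the first command; the full set of swap-zvol tweaks commonly recommended at the time looked roughly like the sketch below (everything past the first line is an assumption, not taken from the thread):

      zfs set primarycache=metadata rpool/swap        # from the snippet: cache only metadata in ARC
      zfs set secondarycache=metadata rpool/swap      # assumed: same restriction for L2ARC
      zfs set sync=always rpool/swap                  # assumed: write swap pages out synchronously
      zfs set logbias=throughput rpool/swap           # assumed: avoid double writes through the ZIL
      zfs set com.sun:auto-snapshot=false rpool/swap  # assumed: never snapshot the swap zvol
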
  15. Proxmox ZFS RAIDZ1 pool on SSD drives - sync parameter

    Well, I'm not sure you can create a RAIDZ1 pool out of 2 SSDs (you need at least 3 drives), and even if you could, there is no advantage over a RAID1 (mirror) when using 2 drives, only disadvantages (much higher CPU usage due to parity calculation), and you also lose an entire disk (like with the...
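
    For comparison, a sketch of the two layouts (pool and device names are placeholders; ashift=12 assumes 4K-sector SSDs):

      # two SSDs: a plain mirror, no parity calculation involved
      zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc

      # RAIDZ1 only becomes possible (and sensible) from three devices upwards
      zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd
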
  16. Backup naming policy

    This has been one of the most requested features in Proxmox due to its great positive impact on usability, and probably one of the easiest to implement, yet the Proxmox team has been unwilling to add it to the backup job options for some unknown reason for years now. The argument that it...
  17. Storage Problems With Proxmox/Ceph

    Speaking of Jewel: it supports CephFS. Is there any way currently to mount CephFS as storage in Proxmox or do we have to wait until official support comes?
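
    Until official support arrives, one workaround is to mount CephFS with the kernel client and point a plain directory storage at the mountpoint; the monitor address, user name and paths below are placeholders, not values from the thread:

      mkdir -p /mnt/cephfs
      mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
      # /mnt/cephfs can then be added in Proxmox as a 'Directory' storage and used for vzdump backups
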
  18. Storage Problems With Proxmox/Ceph

    Yeah how?
    root@proxmox:~# pveceph install -version jewel
    400 Parameter verification failed.
    version: value 'jewel' does not have a value in the enumeration 'hammer'
  19. Storage Problems With Proxmox/Ceph

    @Nicolas Dey @spirit @dietmar I am experiencing very similar issues with Ceph. I have set up a two-bridge network in my five-node Proxmox 4.3 cluster: vmbr0 is for regular cluster traffic (10.10.0.x), and vmbr1 is for Ceph traffic (192.168.0.x, set up with pveceph -init). After installing Ceph...
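
    For reference, the dedicated Ceph network mentioned above would normally be declared at initialisation time; the exact CIDR below is inferred from the 192.168.0.x subnet in the snippet:

      pveceph init --network 192.168.0.0/24
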
  20. Multiple subnets on same interface?

    At the moment I have two ethernet ports in each cluster node, each connected to its own bridge: eth0 > vmbr0 is 10.10.10.x and eth1 > vmbr1 is 192.168.0.x. I would like to create another bridge (vmbr2) connected to eth1 with the 172.16.0.x subnet; is this possible somehow?
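
    To make the current layout concrete, the two existing bridges described above would look roughly like this in /etc/network/interfaces (the host addresses are assumptions):

      auto vmbr0
      iface vmbr0 inet static
          address 10.10.10.2
          netmask 255.255.255.0
          bridge_ports eth0
          bridge_stp off
          bridge_fd 0

      auto vmbr1
      iface vmbr1 inet static
          address 192.168.0.2
          netmask 255.255.255.0
          bridge_ports eth1
          bridge_stp off
          bridge_fd 0

    Note that a physical NIC can only be enslaved to one Linux bridge, so vmbr2 could not simply list eth1 as its bridge port as well.
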