Search results

  1. Planning first installation on Dell R710 with 6x8TB drives

    1 - Makes sense. 2 - Would you say a ZIL is better for VMs, as opposed to L2ARC? 3 - What about an Intel Optane as a ZFS ZIL cache when compared to hardware RAID?
  2. Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    Hey @jdancer Thanks for this info! A company I work for has a 3-node CEPH cluster already set up. The configuration for each of those servers: 2 x Intel Gold CPUs, 512GB RAM, IT-mode RAID controller, 2 x 256GB Samsung EVO 870 SSD (Proxmox OS), 6 x 1TB Samsung EVO 870 SSD, 2 x 10GB NICs (CEPH Public...
  3. Planning first installation on Dell R710 with 6x8TB drives

    I was actually referring to what @UdoB was saying regarding RAID-10 vs RAID 5/6, but with ZIL + L2ARC... not with hardware RAID. To clear up my question: if you are using an IT-mode flashed controller and not a battery-backed hardware RAID, can we still not get good performance with RAID 5 or...
  4. Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    Hey folks, I'm trying to get clear on something here with a possible setup... Can we create a Proxmox hyper-converged GlusterFS system on a system with hardware RAID? I'm thinking of the following scenario: 3 x Dell R720 with PERC hardware RAID, 2 x 256GB SSD -- Proxmox OS (RAID1), 6 x 1TB SSD --...
  5. Proxmox Cluster with local Gluster servers.

    Hey folks, I'm trying to get clear on something here with a possible setup... From what you described here, it sounds like you can do a Proxmox hyper-converged GlusterFS system on a system with hardware RAID. Is that correct? I'm thinking of the following scenario: 3 x Dell R720 PERC...
  6. CEPH: Increasing the PG (Placement Group) Count from 128 to 512

    Hi folks, Thanks for everybody's contributions on CEPH and Proxmox so far. I'm looking for some instructions: I'm running Proxmox 8.1 with Ceph 17.2.7 (Quincy) on a 3-node cluster. Each node has 6 x 1TB Samsung 870 EVOs. Each Proxmox node has a 1TB Samsung 980 Pro NVMe with 6 x 40GB...
  7. [PVE+Ceph] Increasing PG count on a production cluster

    Hi @RokaKen, Thanks for your write-up on the steps to Ceph. I would also like some instructions: I'm running Proxmox 8.1 with Ceph 17.2.7 (Quincy) on a 3-node cluster. Each node has 6 x 1TB Samsung 870 EVOs. Each Proxmox node has a 1TB Samsung 980 Pro NVMe with 6 x 40GB WAL/DB...
  8. [SOLVED] Backup fails when LXC has FUSE activated and in use (error code 23)

    The typical problem is that you are running ZFS without POSIX ACL support. The LXC container has ACL settings inside its filesystem, and the 'snapshot' backup process that the Proxmox VE host runs is an rsync to the /var/tmp directory. If POSIX ACL is not turned on in the rpool/ROOT/pve-1...
  9. [SOLVED] Error code 23 when backing up LXC from PVE to PBS

    The typical problem is that you are running ZFS without POSIX ACL support. The LXC container has ACL settings inside its filesystem, and the 'snapshot' backup process that the Proxmox VE host runs is an rsync to the /var/tmp directory. If POSIX ACL is not turned on in the rpool/ROOT/pve-1...
  10. [SOLVED] lxc container backup suspend mode exit code 23

    The typical problem is that you are running ZFS without POSIX ACL support. The LXC container has ACL settings inside its filesystem, and the 'snapshot' backup process that the Proxmox VE host runs is an rsync to the /var/tmp directory. If POSIX ACL is not turned on in the rpool/ROOT/pve-1...
  11. [SOLVED] LXC Backup failed (exit code 23)

    The typical problem is that you are running ZFS without POSIX ACL support. The LXC container has ACL settings inside its filesystem, and the 'snapshot' backup process that the Proxmox VE host runs is an rsync to the /var/tmp directory. If POSIX ACL is not turned on in the rpool/ROOT/pve-1...
  12. Planning first installation on Dell R710 with 6x8TB drives

    What about the performance scenario when keeping RAID 5/6 and using ZFS ZIL + L2ARC?
  13. Proxmox slow file transfer performance

    Wanting to bump this thread. Here is my situation: Iperf3 @ 10GB/s! However, SCP speed at 1GB/s ??!! Hi, I have a Debian (11) and a CentOS 7 VM. The Proxmox PVE host has a 10Gb Ethernet NIC and I'm using Open vSwitch (OVS Bridge, OVS Port and OVS IntPort). All the VMs are running VirtIO drivers...
  14. Iperf3 @ 10GB/s! However, SCP Speed at 1GB/s ??!!

    Hi, I have a Debian (11) and a CentOS 7 VM. The Proxmox PVE host has a 10Gb Ethernet NIC and I'm using Open vSwitch (OVS Bridge, OVS Port and OVS IntPort). All the VMs are running VirtIO drivers for their NICs. On my Debian and CentOS VMs, SCP transfers between VMs are only going at 1GB/s and...
  15. Looking for a Proxmox Specific MAC Address generator (as opposed to having PVE autogenerate it for me)

    I'd like to pre-generate the MAC address for a tool I'm building. I'm currently using one for VMware (and VMware only allows specific MAC addresses) and am wondering if Proxmox has something similar. Is there a Proxmox-specific MAC address generator? (I'm guessing there is a function in the...
  16. [TUTORIAL] Proxmox VE 7.2 Benchmark: aio native, io_uring, and iothreads

    Hi @guletz, I just want to unpack your explanation a little bit more, if you don't mind... Do you mean create an iSCSI block device on each of my Proxmox servers, then attach each iSCSI device to each of the Proxmox servers, and then put LVM on top of each attached iSCSI device? Then when I create a...
  17. [TUTORIAL] Proxmox VE 7.2 Benchmark: aio native, io_uring, and iothreads

    @guletz Thanks for your suggestions. Question for you: I've never heard of a "3-way mirror iSCSI". How would you suggest I set one up? Thanks
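
Results 6 and 7 ask how to raise a Ceph pool's placement group count from 128 to 512 on a 3-node Proxmox 8.1 / Ceph Quincy cluster. A minimal sketch of the usual CLI steps, assuming a pool named vm-pool (the pool name and the choice to disable the autoscaler are assumptions, not taken from the results above):

    # List pools to find the right name (vm-pool below is assumed).
    ceph osd pool ls
    # Optionally stop the autoscaler from reverting the manual change.
    ceph osd pool set vm-pool pg_autoscale_mode off
    # Raise pg_num; on recent releases pgp_num follows automatically,
    # but it can also be set explicitly.
    ceph osd pool set vm-pool pg_num 512
    ceph osd pool set vm-pool pgp_num 512
    # Watch backfill/rebalance progress before making further changes.
    ceph -s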
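
Results 8 through 11 describe the same root cause for the exit code 23 backup failures: the LXC filesystem carries POSIX ACLs, but the ZFS dataset the rsync-based backup writes through (rpool/ROOT/pve-1 in the snippets) has ACL support disabled, so rsync cannot copy the ACL attributes. A minimal sketch of the fix those posts point toward, assuming the default Proxmox root dataset name; the xattr=sa setting is a common companion tweak, not quoted from the results:

    # Check the current ACL setting on the root dataset.
    zfs get acltype rpool/ROOT/pve-1
    # Enable POSIX ACLs so rsync can preserve them.
    zfs set acltype=posixacl rpool/ROOT/pve-1
    # Store extended attributes (including ACLs) efficiently.
    zfs set xattr=sa rpool/ROOT/pve-1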