Search results

  1. Proxmox Backup Server 3.4 released!

    Amazing! Congratulations! I will go ahead and test whether there are performance improvements in the use case where there is a large number of backups in a single namespace. In version 3.3, listing these backups takes way, way, way too long, and they also do not show up in PVE (in the backup...
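    A minimal way to reproduce the slow-listing comparison from the CLI might look like this (the repository string and namespace are placeholders, and I'm assuming the client's `--ns` flag that shipped alongside namespace support):

    ```bash
    # Placeholder repository: substitute your own user, host, and datastore.
    export PBS_REPOSITORY="root@pam@pbs.example.com:datastore1"

    # Time a listing of backup groups, then of a specific namespace.
    time proxmox-backup-client list
    time proxmox-backup-client list --ns my-namespace
    ```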
  2. Proxmox VE 8.4 released!

    BTW, is there a plan to do live migration with no downtime for LXC containers?
  3. Proxmox VE 8.4 released!

    Congratulations to the entire Proxmox Team! Congratulations to all the users of Proxmox. This looks like an awesome release! We're near completion of an Automated Provisioning Platform specifically for Proxmox, and I cannot wait to try it out on this! Looking forward to upgrading to Ceph 19!!!!
  4. [SOLVED] Kernel panic installing rocky or almalinux

    You could do 'host' (which would also give you more performance) or, in the Proxmox 8.x series, you could use the new default of 'x86-64-v2-AES', which gives you greater compatibility with Proxmox features (like live migration) when it comes to having different servers with different CPU generations...
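    For reference, a sketch of switching an existing VM's CPU type from the CLI (VMID 100 is just an example):

    ```bash
    # Broadly compatible default in Proxmox VE 8.x; safer for live migration
    # across mixed CPU generations:
    qm set 100 --cpu x86-64-v2-AES

    # Or pass through the host CPU for maximum performance:
    qm set 100 --cpu host
    ```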
  5. Debian VM CIFS mount issues

    @jamesgrafton Would you be able to give us some examples?
  6. Invoking a VNC Console From Bash using Proxmox API

    Hi @nosoop4u, are you already authenticated to Proxmox in that browser, though?
  7. Invoking a VNC Console From Bash using Proxmox API

    Hey Folks, I'm needing a bit of help here, and if someone can point me in the right direction, it would be greatly appreciated. What I'm trying to do is create a bash script which has Proxmox credentials and the VMID and, from those, prints a URL whereby when I click it, it will open up a...
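    A rough sketch of that idea, assuming curl and the standard ticket endpoint (host, node, credentials, and VMID below are placeholders; the browser still needs the ticket set as the PVEAuthCookie for the printed URL to open):

    ```bash
    #!/usr/bin/env bash
    HOST="pve.example.com"; NODE="pve1"; VMID="100"
    USER="root@pam"; PASS="secret"

    # Request an authentication ticket from the Proxmox API.
    TICKET=$(curl -sk -d "username=${USER}&password=${PASS}" \
        "https://${HOST}:8006/api2/json/access/ticket" \
        | sed 's/.*"ticket":"\([^"]*\)".*/\1/')

    # noVNC console URL for the VM, plus the cookie value the browser needs.
    echo "https://${HOST}:8006/?console=kvm&novnc=1&vmid=${VMID}&node=${NODE}"
    echo "PVEAuthCookie=${TICKET}"
    ```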
  8. CEPH: Increasing the PG (Placement Group) Count from 128 to 512

    It went great! It was super simple. The Proxmox team has done some outstanding work with their integration of CEPH and the necessary features.
  9. Async IO: io_uring, native or threads?

    Hey folks, when browsing this forum, I've seen a couple of different recommendations on which of these settings to choose (especially when it comes to NFS storage options). I was wondering if anybody could educate us on which one of these is best for the various use cases (and for personally...
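    For what it's worth, the setting is applied per disk as part of the drive string (VMID, storage, and volume names below are examples only):

    ```bash
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=io_uring
    # aio=native requires cache=none or directsync (it uses O_DIRECT):
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=native,cache=none
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=threads
    ```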
  10. CEPH: Increasing the PG (Placement Group) Count from 128 to 512

    Thanks Aaron! Very helpful. I'll let you know how this all goes!
  11. Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    1. Re: Samsung PM883 --> Thank you, I will take a look into this. What about Samsung Pros?
    2. Re: "Set VM cache to none" --> We do this already. The question here is why are you using this? If you want to ensure that no data is lost in the event of a power outage, then both none and writeback...
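    For context, the two cache modes under discussion are also set per disk in the drive string (VMID and volume are placeholders):

    ```bash
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none       # bypass host page cache
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback  # use host page cache; guest must send flushes
    ```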
  12. Proxmox Cluster with local Gluster servers.

    I have 1 x 10GB switch and 1 x 1GB switch. Thanks for those recommendations; we'll look at getting another two of these switches of the same brand.
  13. Proxmox Cluster with local Gluster servers.

    Gluster does not like hardware RAID? Even if the filesystem Gluster sits on is XFS?
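    For reference, a minimal sketch of the usual layout, XFS bricks under a replica-3 volume (hostnames, devices, and paths are placeholders; run the mkfs/mount steps on every node):

    ```bash
    # Format and mount the brick filesystem (per the Gluster docs' XFS advice).
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /data/brick1 && mount /dev/sdb1 /data/brick1

    # Create and start a replica-3 volume across the three nodes.
    gluster volume create gv0 replica 3 \
        node1:/data/brick1/gv0 node2:/data/brick1/gv0 node3:/data/brick1/gv0
    gluster volume start gv0
    ```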
  14. Planning first installation on Dell R710 with 6x8TB drives

    It would be a pretty randomly used VM:
    - Linux web server (httpd/nginx)
    - Application (Zabbix, Nagios, Nextcloud)
    - DB (MySQL/Postgres/Mongo)
    - Load balancer (HAProxy)
  15. CEPH: Increasing the PG (Placement Group) Count from 128 to 512

    Hi @aaron, thanks for your input. I had 3 questions for you though:
    1. What do you mean by "(.mgr can be ignored)"?
    2. Shouldn't the "target_ratio" be "1.0", given that it's exactly all the same hardware?
    3. Because it's all the same hardware, it looks like I don't need to adjust the 'Autoscaler', but I...
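    The relevant CLI for those questions, as a sketch (the pool name is an example; note that target ratios are relative weights across pools, so 1.0 only means "all the space" when no other pool has a ratio set):

    ```bash
    # Shows every pool the autoscaler tracks, including the small built-in .mgr pool.
    ceph osd pool autoscale-status

    # Set a target ratio on a pool.
    ceph osd pool set mypool target_size_ratio 1.0
    ```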
  16. Planning first installation on Dell R710 with 6x8TB drives

    @alexskysilk Thanks for the resource, I will check it out. I've got a Dell R620 with 8 x 1.2TB HDDs configured in RAIDZ2. This is the benchmark that I've done:
    Command: sync; dd if=/dev/zero of=tempfile bs=1M count=10240; sync
    Results: Proxmox host, ZFS / = 1.5 GB/s; Proxmox VM on...
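    One caveat worth noting: dd reading from /dev/zero writes highly compressible data, so ZFS with compression enabled can report inflated throughput. Something like fio with incompressible data is more representative (the flags below are a sketch):

    ```bash
    # Sequential 1M writes to a 10G test file, bypassing the page cache
    # and fsyncing at the end so the result reflects actual disk speed.
    fio --name=seqwrite --filename=tempfile --rw=write --bs=1M --size=10G \
        --direct=1 --end_fsync=1
    ```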
  17. CEPH: Increasing the PG (Placement Group) Count from 128 to 512

    I just noticed this feature in Proxmox 8.1: Proxmox Node --> Ceph --> Pools, then select a pool and click Edit. I'm guessing I could just do it from here on each node.
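    The CLI equivalent of that Edit dialog would be something like the following (pool name is an example); since pools are cluster-wide objects, it should only need to be done once, not once per node:

    ```bash
    pveceph pool set mypool --pg_num 512
    ```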
  18. Planning first installation on Dell R710 with 6x8TB drives

    1 - Makes sense.
    2 - Would you say a ZIL is better for VMs, as opposed to L2ARC?
    3 - What about an Intel Optane as a ZFS ZIL cache when compared to hardware RAID?
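    For illustration, attaching a fast device as a SLOG (the separate on-disk ZIL) versus as L2ARC on an existing pool (pool and partition names are placeholders):

    ```bash
    zpool add tank log /dev/nvme0n1p1    # SLOG: accelerates sync writes (databases, NFS)
    zpool add tank cache /dev/nvme0n1p2  # L2ARC: caches repeated random reads
    ```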
  19. Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    Hey @jdancer Thanks for this info! A company I work for has a 3-node CEPH cluster already set up. The configuration for each of those servers:
    - 2 x Intel Gold CPUs
    - 512GB RAM
    - IT-mode RAID controller
    - 2 x 256GB Samsung EVO 870 SSDs (Proxmox)
    - 6 x 1TB Samsung EVO 870 SSDs
    - 2 x 10GB NICs (CEPH public...
  20. Planning first installation on Dell R710 with 6x8TB drives

    I was actually referring to what @UdoB was saying regarding RAID 10 vs RAID 5/6, but with ZIL + L2ARC... not with hardware RAID. To clear up my question: if you are using an IT-mode flashed controller and not a battery-backed hardware RAID, can we still not get good performance with RAID 5 or...
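    In ZFS terms, that setup would look something like a raidz2 pool (the RAID 6 analogue) on an IT-mode HBA, with a separate fast log device for sync writes (pool and device names are placeholders):

    ```bash
    zpool create tank raidz2 sda sdb sdc sdd sde sdf log nvme0n1
    ```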