Search results

  1.

    "New" SR-IOV AMD graphics card vs Proxmox

    Has anyone had a chance to try them?
  2.

    "New" SR-IOV AMD graphics card vs Proxmox

    How does Proxmox behave with the "new" SR-IOV AMD graphics cards, like the Instinct MI50 (16GB and 32GB) and the Instinct MI100? Do they work? Do they need a specific driver in Proxmox? Can they be "divided"?
  3.

    Monitor Disk(s) and network activity

    I'm not interested in statistical data. I'm interested in the activities carried out. Something like (proxmox firewall log): 116 7 tap116i0-IN 07/Jun/2021:15:36:12 +0200 ACCEPT: IN=fwbr116i0 OUT=fwbr116i0 PHYSIN=fwln116i0 PHYSOUT=tap116i0 MAC=01:00:5e:7e:7f:3f:78:8a:20:89:19:2b:08:00...
  4.

    Monitor Disk(s) and network activity

    Can we do that in Proxmox? If so, how? I'll try to explain myself better. I see that Proxmox keeps a log of firewall activity, even at the single-VM level. How can I export it, maybe in real time? How is it interpreted? Is it possible to increase the level of detail of the log? Is... (A log-parsing sketch follows at the end of this list.)
  5.

    Windows Server 2K19 kvm64 to host

    Try setting all the "Extra CPU Flags" to "Default":
  6.

    AMD s7150x2 on Proxmox VE 6

    I just can't get the "make install" to work. And to even get to the "make" I have to switch to the experimental version by enabling the PVE pve-no-subscription repository, which I wanted to avoid so as not to recompile with every kernel update. I don't know if it makes much sense to insist now, as the...
  7.

    Ceph usage space

    I'm just starting to experiment with Ceph. So far my impressions are excellent, but there are also some things I don't understand. Pool usage is one of them. For example, on the test machine I have two virtual machines, one a clone of the other. Both have a 320GB drive, and the occupied space...
  8.

    [SOLVED] Proxmox VE 6.4-4 problem with ceph installation

    Now I don't think I'll be able to test it in time; I'll check the new version. Thank you.
  9.

    [SOLVED] Proxmox VE 6.4-4 problem with ceph installation

    Brand new installation of Proxmox VE 6.4-4 fails to install Manager mgr.pve proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve) pve-manager: 6.4-4 (running version: 6.4-4/337d6701) pve-kernel-5.4: 6.4-1 pve-kernel-helper: 6.4-1 pve-kernel-5.4.106-1-pve: 5.4.106-1 ceph: 15.2.11-pve1...
  10.

    3 x 4TB Samsung SSD in ZFS raidz1 => poor performance

    OK, where and how do I apply these settings?
  11.

    3 x 4TB Samsung SSD in ZFS raidz1 => poor performance

    I haven't changed any ZFS configuration. I don't have a hardware RAID. Are Samsung 860 EVOs slow? I don't think it depends on the disks. iotop shows random 500 MiB/s writes not related to any VM but mostly related to ZFS tasks.
  12.

    3 x 4TB Samsung SSD in ZFS raidz1 => poor performance

    I have two zpools, one for the host and one for the VM disks: rpool (mirror), 2 x 500GB Samsung 970 EVO, for Proxmox; fastSSD (raidz1), 3 x 4TB Samsung 860 EVO, for the VM disks. I created the ZFS pools with the Proxmox GUI. No other configuration has been made. (A ZFS property-check sketch follows at the end of this list.)
  13.

    3 x 4TB Samsung SSD in ZFS raidz1 => poor performance

    I don't think that's the problem. I think it's more a problem with the ZFS configuration, or with ZFS itself. Is raidz1 that bad? How and where do I configure ZFS correctly?
  14.

    3 x 4TB Samsung SSD in ZFS raidz1 => poor performance

    The original point was not the lifetime of the disks but the performance. In my case even a single VM can block all the others.
  15.

    3 x 4TB Samsung SSD in ZFS raidz1 => poor performance

    I also use ZFS raidz1 for rpool, and there it's fine, with no performance issues. For HA it seems that Ceph is the best option; am I missing any other options?
  16.

    3 x 4TB Samsung SSD in ZFS raidz1 => poor performance

    If it's slower, then it's not worth it. The system must manage more than 40 VMs, and the weak link is storage. Sometimes a single VM using the disk is enough to block all the others.
  17.

    3 x 4TB Samsung SSD in ZFS raidz1 => poor performance

    Hardware RAID is not an option. I'm going to upgrade to server-grade SSDs. I'm thinking about ditching ZFS. Is Ceph better?
  18.

    3 x 4TB Samsung SSD in ZFS raidz1 => poor performance

    I have a server with 3 Samsung 860 EVO 4TB SSDs. I configured them with ZFS in raidz1. Intermittently, but more and more frequently, I have performance problems: the IO delay exceeds 30%. Where am I going wrong? Is ZFS suitable for this? Is Ceph better?
  19.

    AMD s7150x2 on Proxmox VE 6

    But that guide is for VE 5.x (Linux kernel 4), right? I have VE 6.x (Linux kernel 5) and I don't want to go back to an older version.
  20.

    VDI (Virtual Desktop Infrastructure)

    It looked cool, but it seems to be a paid product and not especially easy to use.
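
Results 3 and 4 above show the pve-firewall log format and ask how to export and interpret it per VM. Below is a minimal sketch of how that log could be followed and parsed, assuming the node-wide log lives at /var/log/pve-firewall.log and each line follows the layout quoted in result 3 (VMID, log level, chain, timestamp, verdict, then iptables-style KEY=VALUE pairs); the path, the field order, and the VMID 116 filter are assumptions taken from that snippet, not something the thread confirms.

#!/usr/bin/env python3
"""Minimal sketch: follow the Proxmox firewall log and print per-VM entries.

Assumed (not confirmed by the thread): the node-wide log is
/var/log/pve-firewall.log and each line has the layout quoted above, i.e.
  <vmid> <loglevel> <chain> <timestamp> <verdict>: KEY=VALUE KEY=VALUE ...
"""
import re
import time
from pathlib import Path

LOG_PATH = Path("/var/log/pve-firewall.log")  # assumed location

# vmid, loglevel, chain, "dd/Mon/yyyy:HH:MM:SS +zzzz", verdict, key=value rest
LINE_RE = re.compile(
    r"^(?P<vmid>\d+)\s+(?P<level>\d+)\s+(?P<chain>\S+)\s+"
    r"(?P<ts>\S+ [+-]\d{4})\s+(?P<verdict>\w+):\s*(?P<rest>.*)$"
)


def parse_line(line):
    """Return a dict for one log line, or None if it does not match."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    entry = m.groupdict()
    # the remainder is iptables-style KEY=VALUE pairs (IN=, OUT=, PHYSOUT=, ...)
    entry["fields"] = dict(
        kv.split("=", 1) for kv in entry.pop("rest").split() if "=" in kv
    )
    return entry


def follow(path):
    """Yield lines as they are appended to the file, like `tail -f`."""
    with path.open() as fh:
        fh.seek(0, 2)  # start at the end: only report new activity
        while True:
            line = fh.readline()
            if line:
                yield line
            else:
                time.sleep(0.5)


if __name__ == "__main__":
    for raw in follow(LOG_PATH):
        entry = parse_line(raw)
        if entry and entry["vmid"] == "116":  # single-VM filter, as in the thread
            print(entry["ts"], entry["verdict"], entry["fields"].get("PHYSOUT", "-"))

Run it on the node itself; it only reads the log, so it can be stopped with Ctrl+C at any time.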
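
The raidz1 thread (results 10-18) keeps coming back to which ZFS settings matter and where to check them. The sketch below only reads the properties most often involved in raidz1 write stalls (ashift, recordsize/volblocksize, sync, compression, atime) and takes a short zpool iostat snapshot; the pool names rpool and fastSSD are taken from result 12, which values are appropriate depends on the workload, and the script changes nothing.

#!/usr/bin/env python3
"""Minimal sketch: report the ZFS settings most often behind raidz1 write stalls.

The pool names come from the thread; the properties queried are standard
OpenZFS properties, but the "right" values depend on the workload, so this
script only reads them and changes nothing.
"""
import subprocess

POOLS = ["rpool", "fastSSD"]  # pool names mentioned in the thread


def run(cmd):
    """Run a command and return its stdout, or the error text if it fails."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return out.stdout
    except (subprocess.CalledProcessError, FileNotFoundError) as exc:
        return "(failed: {})\n".format(exc)


if __name__ == "__main__":
    for pool in POOLS:
        # ashift is fixed at pool creation; 12 (4K sectors) is the usual SSD value
        print(run(["zpool", "get", "-H", "ashift", pool]))
        # dataset/zvol properties that shape steady-state write behaviour
        print(run(["zfs", "get", "-r", "-H",
                   "recordsize,volblocksize,sync,compression,atime", pool]))
    # short latency snapshot: two one-second zpool iostat reports
    print(run(["zpool", "iostat", "-v", "1", "2"]))

It needs to run on the PVE host itself, where the zpool and zfs binaries are available.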