Search results

  1. J

    Windows Server 2022 problem

Thanks for your reply, Robert. Unfortunately we cannot do without hotplug; for us it is an indispensable function. Ideally we would find some configuration or change in Windows Server 2022 that lets us keep the Proxmox configuration with the 2 sockets and NUMA. Searching about this problem I...
  2. J

    Windows Server 2022 problem

For a few months we have been having problems with Windows Server 2022 virtual machines. When applying Microsoft updates, the machines hang with a BSOD. The updates we believe are causing the problems are these: KB5022291 KB5022842 All our servers have 2 sockets and in the CPU configuration of...
  3. J

    Problem with Windows Server 2019 and viostor

Hello, I have the same issue, but my virtual machine has been configured with iSCSI since creation, and the kernel running on the Proxmox host is version 5.15.53-1-pve pveversion proxmox-ve: 7.2-1 (running kernel: 5.15.53-1-pve) pve-manager: 7.2-11 (running version: 7.2-11/b76d3178) pve-kernel-helper...
  4. J

    Proxmox Backup Server 2.2 available

Finally I had to downgrade to 2.1-8 because I can't upgrade my PVE nodes to 7.2 right now. There is this bug with backing up large files: https://forum.proxmox.com/threads/possible-bug-after-upgrading-to-7-2-vm-freeze-if-backing-up-large-disks.109272/ And I can't accept that risk.
  5. J

    Proxmox Backup Server 2.2 available

Hello, I have two PBS nodes in two different locations, and every night I sync from one to the other. I have updated both servers to 2.2.1 and tonight the sync jobs failed; this is the error: 2022-05-20T10:28:34+02:00: Starting datastore sync job 'PBS01:ds01:DS01::s-665551eb-83cd'...
  6. J

    Backup PVE cluster with 15 nodes and over 650 VMs

Thanks for your answer @SINOS I think getting better hardware is only a patch; I need horizontal scalability. The number of virtual machines grows every day, it is impossible to grow with brute force alone, and doing so requires a big budget. If PBS can't scale horizontally, it is only for small-business clusters...
  7. J

    Backup PVE cluster with 15 nodes and over 650 VMs

Hi guys, I have a PVE cluster with 15 nodes and over 650 virtual machines, plus some containers, and it is growing every day. Until now I have been backing up with one PBS for all nodes, using a schedule that starts the backup on each node with a 15-minute delay. Some time ago I started to get time out...
  8. J

    Proxmox VE Ceph Benchmark 2020/09 - hyper-converged with NVMe

In my Intel SSDPE2KX080T8 NVMe disks I see two LBA formats: LBA Format 0: Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good; LBA Format 1: Metadata Size: 0 bytes - Data Size: 4096 bytes - Relative Performance: 0 Best (in use). You wrote 512K to 4M, is that...
  9. J

    Proxmox VE Ceph Benchmark 2020/09 - hyper-converged with NVMe

Hi, if we create 4 OSDs on each NVMe disk, will we get more performance, or is this irrelevant? Thanks in advance.
  10. J

    iSCSI message

I gave up looking for a solution to this; I saw that it was not an error, and as I had no more time to devote to it, I let it go. Today my company no longer uses those Compellent storage arrays; we have moved on to Ceph. Sorry I can't help more.
  11. J

    iSCSI message

Hello Jorge, I'm installing a new cluster with Proxmox 5.2 and a Dell Compellent SCv2020 and I see the same messages in syslog. I'm looking for a solution but have not found anything yet; if I find it I will write it here, and if you find it, I would appreciate it if you also post it here...
  12. J

    Problem creating Ceph OSD

The log is too long to post here, so I attach it as a file.
  13. J

    Problem creating Ceph OSD

I have a cluster of 8 nodes, and when I try to create a Ceph OSD it fails. The Proxmox version is: pveversion pve-manager/5.1-51/96be5354 (running kernel: 4.13.16-2-pve) The Ceph version is: dpkg -l | grep ceph ii ceph 12.2.4-pve1 amd64...
  14. J

    [SOLVED] Linux virtual machines poor IO perfomance over SSD

Thank you for your answers. Finally I managed to get the expected performance from the SSD disks by changing the fio execution configuration. Using this config: [global] bs=4k ioengine=libaio iodepth=32 size=4g direct=1 runtime=60 directory=/home filename=ssd.test.file [seq-read] rw=read stonewall...
  15. J

    [SOLVED] Linux virtual machines poor IO perfomance over SSD

Hello, I have a 5-node cluster with a Dell Compellent SC2020 with 7 SSD disks with multipath as storage. On Windows virtual machines, when I run a CrystalDiskMark benchmark, I get a little over 22000 IOPS, but when I run a fio test on Linux machines I never get over 4800 IOPS. So in Windows the...
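  The fio configuration quoted in result 14 is easier to read laid out as a standard job file. This is a sketch reconstructed from the visible fragment only; the snippet is truncated after `stonewall`, so any further job sections are unknown and not reproduced here:

```ini
[global]
bs=4k                        ; 4 KiB block size
ioengine=libaio              ; Linux native async I/O
iodepth=32                   ; keep 32 requests in flight per job
size=4g                      ; 4 GiB test file
direct=1                     ; bypass the page cache (O_DIRECT)
runtime=60                   ; cap each job at 60 seconds
directory=/home
filename=ssd.test.file

[seq-read]
rw=read                      ; sequential read workload
stonewall                    ; wait for previous jobs before starting
```

  Compared with fio's defaults (synchronous engine, iodepth=1, buffered I/O), the combination of `libaio`, `iodepth=32`, and `direct=1` is what lets the benchmark keep the SSDs' queues full, which would explain the jump from the ~4800 IOPS the poster saw initially.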
