Search results

  1. Pruning doesn't work

    Thanks! I thought all options were counted from the current time, like in Veeam B&R.
  2. Pruning doesn't work

    Hello, I don't understand how pruning works. Today is the first Monday in August, but the backup copy from 03/31/2024 still has not been deleted. Why?
    2024-08-05T00:00:00+05:00: prune job 'default-PVE01Backup-6c1d041d-a53'
    2024-08-05T00:00:00+05:00: task triggered by schedule 'daily'...
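
    For context, the PBS keep options select snapshots per calendar period: keep-daily, for example, keeps the newest backup of each of the last N days rather than counting back from the current time. A dry run shows what a given policy would do; the repository and backup group below are placeholders:

    # proxmox-backup-client prune vm/101 --repository user@pbs@pbs-host:datastore --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --dry-run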
  3. The VM hangs if access to PBS is interrupted during backup

    Hello! We have PVE 8.1.4 and PBS 3.1. Our PBS has frozen two days in a row while a backup was in progress. At the same time, the VM being backed up freezes too. If I try to restart this VM, the error message "TASK ERROR: VM is locked (backup)" appears. If I restart the PBS, the VM will...
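
    If the backup task is genuinely dead, the stale lock can usually be removed on the PVE node before restarting the VM; the VMID below is a placeholder:

    # qm unlock 101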
  4. Show datacenter resources from cli

    Thanks a lot! This is exactly what I was looking for.
  5. Show datacenter resources from cli

    Hi! Is it possible to get current values of datacenter resources from the CLI? Like the Search view in the GUI. Thanks
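
    One way to get the same data as the GUI's Search view is the pvesh API client against the cluster resources endpoint, optionally filtered by type:

    # pvesh get /cluster/resources --output-format json-pretty
    # pvesh get /cluster/resources --type vm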
  6. Poor performance of ZFS pool on SSD

    I have 5 SSD disks connected to a P420i in HBA mode on a DL380 Gen8. Each disk produces about 420 IOPS on 4K blocks. I created a RAIDZ1 ZFS pool on them. But I don't understand why the performance of the ZFS pool is so poor. Why is each disk loaded as much as the entire ZFS pool? And what...
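
    Worth noting for this layout: a single RAIDZ1 vdev delivers roughly the random-I/O IOPS of one member disk, since every record is striped across all members, which would explain each disk showing the same load as the whole pool. A sketch for benchmarking one member directly, outside ZFS (the device name is a placeholder, and the test destroys its data):

    # fio --name=singledisk --filename=/dev/sdX --direct=1 --ioengine=libaio --bs=4k --iodepth=32 --rw=randwrite --runtime=30 --time_based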
  7. Unexpected drop in disk subsystem performance

    I found the cause. I switched the P420i to HBA mode and tested all the disks. It turned out that three of them had degraded.
    iops : min= 376, max= 474, avg=461.68
    iops : min= 380, max= 478, avg=460.37
    iops : min= 370, max= 472, avg=459.93
    iops : min= 26...
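
    A quick way to spot degraded members once the controller is in HBA mode is a SMART health check per disk before running full fio passes; this sketch assumes the five disks appear as /dev/sda through /dev/sde:

    # for d in /dev/sd{a..e}; do echo "== $d"; smartctl -H "$d"; done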
  8. Unexpected drop in disk subsystem performance

    New Samsung SSD 860 EVO on P420i FW v8.32. It looks like that is the case. I'll try investigating in this direction. Thanks for the tip.
  9. Unexpected drop in disk subsystem performance

    Hi! My PVE 6.2 worked for a few months without any problems. But three days ago the latency of the disk subsystem increased dramatically. PVE is installed on an HPE DL380 Gen8 with RAID6 on 8 SSDs of 2 TB. No action was taken on the server when the problem started. There are no suspicious messages in the...
  10. storage 'VMBackup' is not online (500)

    I even restarted the PVE node. The volume was not mounted automatically. I recreated the storage and it was mounted automatically right away, but I see the same errors in the GUI again. And showmount shows the NFS share, but not the CIFS one.
    # df -h /mnt/pve/VMBackup2
    Filesystem Size Used Avail...
  11. storage 'VMBackup' is not online (500)

    # cat /etc/pve/storage.cfg
    dir: local
        path /var/lib/vz
        content vztmpl,backup,iso

    lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

    dir: VMs
        path /pool1/VMs
        content images,iso
        shared 0

    dir: Backup
        path...
  12. storage 'VMBackup' is not online (500)

    Hi! I have PVE 6.2-11. I created a new CIFS storage from FreeNAS. Now I can see it from the shell on the PVE host:
    # df -h
    //st-nas1/VMBackup2  5.2T  5.0T  214G  96%  /mnt/pve/VMBackup
    # mount
    //st-nas1/VMBackup2 on /mnt/pve/VMBackup type cifs...
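
    The "not online" error reflects PVE's reachability check of the storage server, so it helps to compare the mounted state with what PVE itself sees, and to verify the storage definition; the entry below is only a sketch of a typical CIFS definition in storage.cfg, using names from this thread:

    # pvesm status --storage VMBackup
    cifs: VMBackup
        server st-nas1
        share VMBackup2
        content backup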
  13. Case sensitive email addresses

    Hello. Why is PMG case-sensitive with email addresses?
  14. Poor ZFS performance On Supermicro vs random ASUS board

    root@vmc3-1:~# cat /sys/block/nvme0n1/queue/physical_block_size
    512
    root@vmc3-1:~# zfs set atime=off nvmepool/VMs
    root@vmc3-1:/nvmepool/VMs# fio --randrepeat=1 --ioengine=libaio --direct=0 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
    test...
  15. Poor ZFS performance On Supermicro vs random ASUS board

    Unfortunately, RAID10 does not help.
    root@vmc3-1:~# zpool list -v
    NAME       SIZE  ALLOC  FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    nvmepool   952G  4.13G  948G  -         3%    0%   1.00x  ONLINE  -
      mirror   476G  1.92G  474G  -         3%    0%
        nvme0n1  -  -...
  16. Poor ZFS performance On Supermicro vs random ASUS board

    Ok. I set recordsize to 4k and turned off compression, but ZFS still writes several times more data to the disks than it should.
    root@vmc3-1:/nvmepool# smartctl -a /dev/nvme0n1 | grep Written
    Data Units Written: 770,603 [394 GB]
    root@vmc3-1:/nvmepool# smartctl -a /dev/nvme1n1 | grep...
  17. Poor ZFS performance On Supermicro vs random ASUS board

    My server has 320 GB of RAM, but the NVMe disk is only 450 GB. And how does that explain ZFS writing 25 times more data to disk?
  18. Poor ZFS performance On Supermicro vs random ASUS board

    root@vmc3-1:/nvmepool/VMs# zfs get sync nvmepool/VMs
    NAME          PROPERTY  VALUE     SOURCE
    nvmepool/VMs  sync      disabled  local
    root@vmc3-1:/nvmepool/VMs# smartctl -a /dev/nvme1n1
    smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.13.13-1-pve] (local build)
    Copyright (C) 2002-16, Bruce Allen...
  19. Poor ZFS performance On Supermicro vs random ASUS board

    Look at the numbers above. When fio writes 1 MB to the ZFS volume, ZFS writes 26 MB to disk. Why?
  20. Poor ZFS performance On Supermicro vs random ASUS board

    Hi, I have a similar problem with ZFS. Can anybody explain why fio shows bw=30249KB/s while iostat shows 797297.60 wkB/s at the same time?
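
    Across this thread, the usual suspects for such write amplification are a 4k random workload on a much larger recordsize, sync and caching behaviour (fio with --direct=0 goes through the page cache), and pool-level overhead such as ashift padding. Comparing the relevant properties is a cheap first check; dataset and pool names are taken from the thread:

    # zfs get recordsize,compression,sync,atime nvmepool/VMs
    # zpool get ashift nvmepool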
