Search results

  1. [SOLVED] Storage space mismatches

    So, should I assume the 1.60T of snaps (if I read USEDSNAP correctly) is not shown in the GUI monitor? I also assume they are linked or otherwise tied to the backups. Wouldn't it be better to show the full situation on the monitor? Again, I know this must be the n-th time this comes up, sorry XD
  2. [SOLVED] Storage space mismatches

    zfs list -o space
    NAME        AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
    Backups     1.44T  2.08T  1.60T     487G    0B             917M
    rpool       203G   21.6G  0B        96K     0B             21.6G
    rpool/ROOT  203G   21.4G  0B        96K...
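The USED column in this output is itself the sum of the columns to its right, which is worth checking whenever the GUI and `zfs list` seem to disagree. A minimal sanity check on the figures above (plain shell arithmetic, values copied from the post):

```shell
# In `zfs list -o space`, USED = USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD.
# Plugging in the Backups dataset's figures (converted to GiB):
awk 'BEGIN {
    usedsnap      = 1.60 * 1024   # 1.60T of snapshot-only data
    usedds        = 487           # 487G referenced by the live dataset
    usedrefreserv = 0             # 0B reserved
    usedchild     = 0.917         # 917M in child datasets
    printf "%.2fT\n", (usedsnap + usedds + usedrefreserv + usedchild) / 1024
}'
```

The result, 2.08T, matches the USED column, so the "missing" space really is snapshot data.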
  3. [SOLVED] Storage space mismatches

    root@pbs1:~# zpool list -v
    NAME     SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    Backups  3.62T  2.02T  1.60T  -        -         15%   55%  1.00x  ONLINE  -
    sda...
  4. [SOLVED] Storage space mismatches

    Hi, Sure this is one of the many times this comes up, but I'd need some insight on this. I have a 4TB SSD that I put on ZFS to create a Datastore. The datastore seems to only be 2.13TB. Why? Thank you.
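Part of the answer to the "4TB vs 2.13TB" gap is plain unit arithmetic: drives are sold in decimal terabytes, while ZFS tools report binary tebibytes; the rest of the gap is snapshot usage. A quick sketch of the unit conversion:

```shell
# A "4 TB" drive holds 4e12 bytes; zpool reports TiB (2^40 bytes),
# which is why the empty pool already shows up as only ~3.6T.
awk 'BEGIN { printf "%.2f TiB\n", 4e12 / (1024 ^ 4) }'
```

That gives roughly 3.64 TiB (zpool showed 3.62T; the small remainder likely goes to partitioning and ZFS labels), and the 1.60T USEDSNAP from the same thread accounts for the rest of the shrinkage down to ~2.1T.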
  5. [SOLVED] Proxmox Shell for Upgrades logs in wrong directory

    For reference, the root home directory seems to be right; here are three ways to access the console: ssh (right): GUI Shell (right): GUI Shell from Upgrades (wrong): As said, the other nodes are really OK. Thank you.
  6. [SOLVED] Proxmox Shell for Upgrades logs in wrong directory

    Hi, Anyone having this problem? Why on earth do the two shells on the same node (console and upgrade shell) put me in different paths?
  7. question on kubernetes + proxmox + persistent volume

    I tried several different storage solutions for K8s running on Proxmox VMs, and I have found some useful things to think about before deploying your cluster when choosing the best storage solution for your persistent volumes. If you have Ceph running on Proxmox, the best way is Ceph...
  8. question on kubernetes + proxmox + persistent volume

    Hi, I'm trying to achieve a stable cluster of k8s VMs. It's a 3-node PVE cluster; the VMs are on a Ceph pool of fast NVMe. Currently using Longhorn on virtual disks on the same Ceph pool, but it seems to fail whenever the VM backup jobs run or when I restart the PVE nodes for maintenance: I always...
  9. [SOLVED] Proxmox Shell for Upgrades logs in wrong directory

    Hi, I'm facing a strange behaviour. I have a 3-node cluster that has been working fine for months (almost a year and a half); now it has started behaving like this: on Node1, when I start the shell from Updates to Upgrade, it logs into the / directory instead of /root. On Node2 and Node3 it correctly logs into...
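The symptom in this thread (a shell landing in / instead of /root) is typical of a shell spawned without a login step: login(1) and `su -` change directory to $HOME, while a plainly spawned shell just inherits its parent's working directory. A minimal reproduction with ordinary sh:

```shell
# A spawned shell inherits the caller's cwd; only a login-style start
# changes directory to $HOME first.
cd /
sh -c 'pwd'                          # inherits the cwd: prints /
HOME=/tmp sh -c 'cd "$HOME"; pwd'    # what a login does: prints /tmp
```

So the fix is usually on whichever wrapper spawns the Upgrade shell on Node1: something in its chain is no longer doing the chdir-to-$HOME step that the regular console shell does.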
  10. Backup leaves VMs unusable

    @basil the QEMU agent works great until the first backup run (the internal one); by the second one the VM is already frozen, QEMU agent with it. @fiona's suggestion worked, though, and now both backup runs (internal and external) work fine without breaking VMs.
  11. Backup leaves VMs unusable

    The trick seems to work, thank you! Only one backup run so far, but it works. Is there an issue tracker or something to monitor?
  12. Backup leaves VMs unusable

    Hi, I set up two different daily backup jobs: 00:30 internal backup to a CephFS pool (SAS disks), 01:30 external to a PBS instance. Every night, one or two of my Ubuntu 22.04 VMs (Kubernetes nodes) go stale after the first backup (I think). I find them stale (and have to Stop and Start, as the Guest...
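When chasing this kind of freeze, it helps to have a quick probe for agent responsiveness right after each backup window. A sketch with a hypothetical VMID of 100; the `qm` lines are shown commented since they only make sense on a PVE host:

```shell
VMID=100   # hypothetical VM ID
# On the PVE host, after each backup window:
# qm config "$VMID" | grep agent   # confirm the guest agent option is enabled
# qm guest cmd "$VMID" ping        # errors/times out if the agent froze
echo "probe VM $VMID after each backup run"
```

Running the ping probe after the 00:30 job and again after the 01:30 job would pin down which of the two backups actually freezes the guest.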
  13. Speedtest Throughput too low

    @logan893 I really think it's related to multiqueue. Any IDS/IPS like Suricata is quite CPU intensive, and pfSense seems not to be multiqueue-aware on vtnet (VirtIO), so we end up with only one queue, and thus just one vCPU used by Suricata, ending up with a lot of lag and performance disruption at...
  14. How to get better performance with pfsense vm

    Hi, My own experience (pfSense+) is that the problem was Suricata performance. With Suricata enabled I got up to 0.6G throughput; with it disabled I get the expected 2G. I see talk of multiqueue NICs; is there a way to properly manage/check them? Thank you.
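On the question of checking and managing multiqueue: on the PVE side the VirtIO NIC takes a `queues=` option, and a Linux guest can inspect its queues with ethtool (pfSense/FreeBSD uses sysctl tunables instead of ethtool). A sketch with a hypothetical VMID and interface name; the host/guest commands are commented because they need the respective systems:

```shell
# Queues beyond the vCPU count buy nothing, so check the core count first:
nproc
# PVE host (hypothetical VMID 100): give the VirtIO NIC 4 queues
# qm set 100 -net0 virtio,bridge=vmbr0,queues=4
# Linux guest (hypothetical interface eth0):
# ethtool -l eth0              # show current vs. maximum queue counts
# ethtool -L eth0 combined 4   # raise the active queue count
```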
  15. Speedtest Throughput too low

    I tested with Ookla speedtest. In the meantime I found out that the problem was Suricata on pfSense: after disabling it, the throughput went up to 1.9G, which is in line with expectations. Is there a way to optimize Suricata? (I see something about multiqueue: how do I manage the queues? Are they usable on...
  16. pfSense VM - very slow network throughput

    Hi, I have the same problem on my pfSense VM. 2.5G fiber from the ISP. Proxmox gets 2G throughput if tested directly on vmbr0; a simple Ubuntu VM gets 2G throughput if tested directly on vmbr0, so it seems not to be hardware related? pfSense gets capped at roughly 0.5G on a VirtIO NIC on vmbr0. I...
  17. Speedtest Throughput too low

    Ok, A new Ubuntu VM with a VirtIO NIC on the vmbr0 bridge gets the 2G throughput. It has the very same settings as the "wan" NIC in pfSense, which is getting up to 0.6G of throughput. Has anyone had the same problem with pfSense?
  18. Speedtest Throughput too low

    Ok, So it seems to be pfSense capping the throughput, as pointing the gateway directly at the ISP modem gets 2G throughput. I'll try some other experiments and get back.
  19. Speedtest Throughput too low

    Hi, I recently upgraded my internet contract to 2.5G, and I can speedtest up to 0.9G with a CAT6 cable from a 2.5G switch to the 1G NIC on my PC. As soon as I connect the same cable to a transceiver on the 10G SFP+ NIC on my Proxmox server, I only get up to 0.5G. Also, I cannot get more if I connect...
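With mixed 1G/2.5G/10G gear, the first thing worth ruling out is a bad autonegotiation on the SFP+ port, and it helps to know what the line rates translate to in file-transfer terms. A sketch (the interface name is hypothetical):

```shell
# On the Proxmox host, check what the SFP+ port actually negotiated:
# ethtool enp1s0 | grep -E 'Speed|Auto-negotiation'
# For reference, the 2.5G line rate as a payload ceiling in MB/s:
awk 'BEGIN { printf "2.5G line rate = %.1f MB/s\n", 2.5e9 / 8 / 1e6 }'
```

Many 10G SFP+ ports and transceivers only negotiate 1G or 10G, not 2.5G, which would be consistent with the throughput collapsing when the 2.5G switch is cabled into the SFP+ side.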
  20. Nvme passthrough seen as sdx

    Hi, I'm trying to pass through an NVMe drive (seen as /dev/nvme0n1 from PVE) to a VM, but the VM sees it as /dev/sdd. Is there something I'm missing? Will the VM use all of the NVMe features? How can I make the VM see it as NVMe? Thanks,
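What this thread describes is expected: attaching the drive as a SCSI/VirtIO disk presents it to the guest as a generic block device (/dev/sdX), so NVMe-specific features are not exposed. To get a native /dev/nvme* device in the guest, the NVMe controller itself has to be passed through as a PCIe device. A sketch with hypothetical IDs; the `qm` lines are commented since they need a PVE host:

```shell
VMID=100                 # hypothetical VM ID
PCI_ADDR=0000:01:00.0    # hypothetical PCI address of the NVMe controller
# Block-device passthrough: guest sees a plain SCSI disk (/dev/sdX):
# qm set "$VMID" -scsi1 /dev/nvme0n1
# PCIe passthrough of the controller: guest sees /dev/nvme0n1 natively:
# qm set "$VMID" -hostpci0 "$PCI_ADDR"
echo "VM $VMID: pass controller $PCI_ADDR for native NVMe"
```

Note that PCIe passthrough hands the whole controller to one VM (it needs IOMMU enabled and the host loses access to the drive), so it is only worth it when the guest genuinely needs NVMe features such as SMART or namespace management.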