Latest activity

  • _gabriel replied to the thread network speed problems.
    What is the host CPU?
  • SteveITS replied to the thread CEPH cache disk.
    Yes, the cache tiering. I meant set up pools by device class like https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster#pve_ceph_device_classes. The DB/WAL is generally on SSD or faster disk in front of an HDD OSD. If you share a...
  • bbgeek17
    Hi @soufiyan, welcome to the forum. I think the expectation is that you already know the $vmid. You can always do something like this: pvesh get /nodes/pve-2/qemu/$vmid/agent/get-fsinfo --output-format json | jq --argjson vmid "$vmid" '...
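bbgeek17's pvesh-plus-jq one-liner above can also be handled in Python when the monitoring script needs more processing than jq comfortably allows. A minimal sketch, assuming a payload shaped like the QEMU guest agent's get-fsinfo reply; the sample data, the `summarize_fsinfo` helper, and all field values below are invented for illustration, not taken from the real API:

```python
import json

# Invented sample of what `pvesh get /nodes/<node>/qemu/<vmid>/agent/get-fsinfo
# --output-format json` might return; the real call requires a running guest agent.
SAMPLE = json.dumps({
    "result": [
        {"name": "sda1", "mountpoint": "/", "type": "ext4",
         "total-bytes": 10737418240, "used-bytes": 5368709120},
        {"name": "sdb1", "mountpoint": "/data", "type": "xfs",
         "total-bytes": 21474836480, "used-bytes": 1073741824},
    ]
})

def summarize_fsinfo(raw: str, vmid: int) -> list[dict]:
    """Flatten a get-fsinfo-style payload into per-filesystem usage records."""
    info = json.loads(raw)
    rows = []
    for fs in info.get("result", []):
        total = fs.get("total-bytes")
        used = fs.get("used-bytes")
        rows.append({
            "vmid": vmid,
            "mountpoint": fs.get("mountpoint"),
            "type": fs.get("type"),
            # Percentage used, or None when the agent reports no size.
            "used_pct": round(100 * used / total, 1) if total else None,
        })
    return rows

for row in summarize_fsinfo(SAMPLE, vmid=101):
    print(row)
```

In a real script the raw JSON would come from running pvesh per VMID (e.g. via subprocess) instead of the inline sample.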
  • S
    Hello Proxmox community! I'm working on a monitoring script that collects filesystem information from multiple VMs using the agent/get-fsinfo API endpoint. However, I've hit a major roadblock: The Problem: When I call GET...
  • tcabernoch replied to the thread CEPH cache disk.
    Thanks @guruevi. (And @SteveITS) - So don't bother with doing cache with one of the current disks, as they are all the same. K. - And a SAS disk with high IOPs for DB/WAL on each host should be significantly faster than these SATAs. I've had...
  • SteveITS replied to the thread CEPH cache disk.
    @tcabernoch IIRC from when I looked into it using a disk for read cache was deprecated or otherwise not viable with Ceph. I just don’t recall the details now. It does have memory caching. Our prior Virtuozzo Storage setup did have that ability...
  • UdoB reacted to twowordz's post in the thread zpool issue while importing pool with Like.
    thanks, that did it.
  • UdoB reacted to news's post in the thread zpool issue while importing pool with Like.
    or zpool import -f -d /dev/disk/by-id/ 2697102583354024049
  • L
    I’m experiencing the same issue where the I/O utilization goes up to 100% when restarting VMs — apparently whenever some load is generated. Is anyone else having this problem? Can anyone reproduce this behavior? The problem occurs with both an...
    • Attachment: 1760291168959.png
  • tcabernoch reacted to guruevi's post in the thread CEPH cache disk with Like.
    The DB/WAL are both things you CAN put on other disks; it's only recommended if those disks are significantly faster than your data disk, e.g. NVRAM for NVMe, or NVMe/SAS SSD for spinning disks. You can read up on exactly what they do, but the WAL is...
  • T
    thanks, that did it.
  • news replied to the thread zpool issue while importing pool.
    or zpool import -f -d /dev/disk/by-id/ 2697102583354024049
  • news reacted to waltar's post in the thread zpool issue while importing pool with Like.
    zpool import -f -d /dev/disk/by-id/ storage0
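The zpool excerpts in this thread fit together as one recovery pattern. A non-runnable CLI fragment for illustration only; the GUID 2697102583354024049 and pool name storage0 are quoted from the thread, the rest is a hedged sketch:

```shell
# List importable pools by scanning stable by-id device paths
# (useful when cached /dev/sdX names have changed across reboots).
zpool import -d /dev/disk/by-id/

# Force-import by pool GUID, as news suggested:
zpool import -f -d /dev/disk/by-id/ 2697102583354024049

# ...or by pool name, as waltar suggested:
zpool import -f -d /dev/disk/by-id/ storage0
```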
  • toto-ets replied to the thread network speed problems.
    Yes, in single mode with the down I do a little better; I even get to 5G.
  • J
    Good morning all, Proxmox VE 9 is now officially supported on the most recent Veeam with the Proxmox VE plugin, according to this KB article: Veeam KB4775 - Proxmox VE 9 plugin. I have tried a backup with the latest VM version, which is working now. Download...
  • UdoB reacted to jdancer's post in the thread [SOLVED] Proxmox VE Incorrect Time with Like.
    It's always DNS. When given the chance, use static IPs.
  • Learn4Fun reacted to AngryAdm's post in the thread [SOLVED] sda has a holder... with Like.
    Solution once and for all to all those "disk is busy" / "disk has holder" errors: root@pve01:~# sgdisk --zap-all /dev/sdx root@pve01:~# readlink /sys/block/sdx...
  • _gabriel replied to the thread network speed problems.
    Is there a difference between a single-stream and a multi-stream speedtest?
  • UdoB replied to the thread fhfhg.
    Well, getting-started problems can take many forms - not just PVE-specific ones. And that certainly includes "operating XenForo". And we only go to "a lot of trouble" when it happens to suit us anyway... ;-)
  • Behem0th reacted to netadair's post in the thread ARM Support with Like.
    Hi all, just in case this is still relevant: I've built a 3.4.7 client for armhf (i.e. Beaglebone on armv7l) running Buster. Not fully tested, but it at least compiled after a lot of tweaks and is running the first backups right now, some problem with >4GB...