Search results

  1. Hardware interface packages best practices

    I'd say the question is too broad for a reasonable answer; most "software" doesn't care about direct access to hardware. Applications that DO need access (e.g., a GPU) have these methods available: 1. for exclusive access, a GPU can be passed through directly to a guest; 2. for shared access, it...
  2. Proxmox - crash disc

    The syntax is qm disk import <vmid> <source> <storage> [OPTIONS]. In your case, source would be /path/to/my/recovered.raw.
  3. Mellanox ConnectX-3 & Proxmox 9.1

    1. Try proxmox-kernel. 2. Says who? While the DRIVERS aren't provided in the package, mst should work just fine; if it doesn't, you can always uninstall :) 3. That is one way; another would be to make a persistent live USB so you don't have to mess with your install at all.
  4. Site to Site VM replication

    Site-to-site replication can be done to some extent with PDM. Third-party derived solutions also exist (Veeam, Pegaprox), as well as creative use of PBS. ZFS replication can be used if the sites are single-node.
  5. Shared Remote ZFS Storage

    Ah, this is a sore point for me. The ONUS of decision-making is on the purchaser. If the purchaser doesn't understand what he is buying (for any reason short of outright fraud), then the fault lies there too; NO technology is without edge cases, and 99.999% uptime still means over 5 minutes...
  6. CephFS "Block Size"?

    The default stripe_unit size for Ceph is 4M, but you may not want to raise that (and may even want to shrink it instead). The stripe size is a maximum "file chunk" limit for a CephFS object (file). If a file is larger than the stripe size, the next portion of the file is written to A DIFFERENT PG...
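The chunking described above can be sketched as a small model. This is a simplified, from-memory rendering of RADOS striping using the CephFS file-layout parameter names (stripe_unit, stripe_count, object_size); the real mapping additionally places each object into a PG via CRUSH, which this sketch ignores:

```python
# Simplified sketch of how a CephFS file offset maps to a RADOS object.
# The real system then hashes the object name into a PG via CRUSH.

def object_for_offset(offset: int, stripe_unit: int,
                      stripe_count: int, object_size: int):
    """Return (object index, byte offset within that object) for a file offset."""
    assert object_size % stripe_unit == 0
    stripes_per_object = object_size // stripe_unit
    block = offset // stripe_unit            # which stripe-unit block of the file
    stripe_pos = block % stripe_count        # which object of the current stripe set
    object_set = block // (stripe_count * stripes_per_object)
    object_no = object_set * stripe_count + stripe_pos
    block_in_object = (block // stripe_count) % stripes_per_object
    return object_no, block_in_object * stripe_unit + offset % stripe_unit

MiB = 1 << 20
# Default-style layout: 4M stripe unit, one object per stripe, 4M objects.
# A write at offset 5 MiB has spilled past the first object into a second one.
print(object_for_offset(5 * MiB, 4 * MiB, 1, 4 * MiB))  # → (1, 1048576)
```

With the default layout the math degenerates to simple 4M chunking; raising stripe_count interleaves consecutive chunks across several objects (and hence, usually, several PGs).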
  7. Shared Remote ZFS Storage

    I think I understand your point, but I don't subscribe to the viewpoint that "if I can do it myself, there is no justification for someone else to get paid." There is nothing wrong, in my view, with a Nexenta or TrueNAS using ZFS internally; ZFS is tried and true and works well. When you buy a...
  8. Shared Remote ZFS Storage

    I am really curious about this. Can you point to an "it" that a specific vendor does that is not a good idea in your view? More pointedly: what would be an example of "shady stuff"? Interesting. I wasn't even aware that such solutions exist. Would you be so kind as to point me to a vendor that...
  9. W11 Pro license and VM migration

    So... you're both kinda right and kinda wrong. It depends on the TYPE of license and how you bought it. RETAIL licenses are not tied to an underlying Windows Server license and can therefore be moved with the VM. OEM licenses ARE tied to an underlying Windows Server. VL licenses are... maybe...
  10. Nvidia install for LXC

    https://forum.proxmox.com/threads/jellyfin-lxc-with-nvidia-gpu-transcoding-and-network-storage.138873/ That should get you the rest of the way, and it applies to other containers as well.
  11. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    It's likely that your particular configuration escaped testing. As others noted, it's not really a sane configuration.
  12. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    Well, I've got nothing. You should submit a bug report (https://github.com/openzfs/zfs/issues).
  13. PVE to PBS

    A backup isn't a backup if it depends on the primary to function.
  14. Mellanox ConnectX-3 & Proxmox 9.1

    Since mst 4.22 seems to be broken on kernel 6.17, you have a few alternatives: 1. Install an older kernel; 6.14 and 6.11 are both available, and I think 6.8 is too. 2. Use a newer version of mst; the most current version is 4.34.1-10. 3. Boot from a thumbdrive with a Linux live CD. Once you set the...
  15. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    I reread the OP, and for the life of me I don't understand what you're asking. Actual free space is exactly what you think it is, but it's not USABLE. Usable free space depends on the makeup of your data, since you have a special device that (presumably at the default setting) is eating all <128k writes. And...
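Why usable space depends on the makeup of the data can be shown with a toy accounting sketch. The workload numbers below are made up, and the rule used (blocks at or below a threshold land on the special vdev, mirroring the special_small_blocks property) ignores metadata, compression, and ashift, which all affect real ZFS allocation:

```python
# Toy model: split incoming data between the special vdev and the regular
# data vdevs purely by block size. Real ZFS also places metadata (and
# optionally dedup tables) on the special vdev, so treat this as illustrative.

def split_by_special(block_histogram: dict[int, int],
                     special_small_blocks: int) -> tuple[int, int]:
    """block_histogram maps block size -> total bytes written at that size.

    Returns (bytes landing on the special vdev, bytes landing on data vdevs).
    """
    special = sum(nbytes for size, nbytes in block_histogram.items()
                  if size <= special_small_blocks)
    data = sum(nbytes for size, nbytes in block_histogram.items()
               if size > special_small_blocks)
    return special, data

K = 1024
# Hypothetical workload: many small blocks plus some large records.
workload = {16 * K: 40 * K * K, 128 * K: 200 * K * K, 1024 * K: 800 * K * K}
print(split_by_special(workload, 128 * K))  # → (251658240, 838860800)
```

The point of the sketch: with a 128k threshold, a small-block-heavy workload can fill a modest special vdev long before the data vdevs are full, so the "usable" number depends entirely on the mix.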
  16. Ceph multi-public-network setup: CephFS on separate network

    I don't think that note is as wide in scope as the wording seems to suggest. Public and private (cluster) traffic don't need routability to each other to function, so you're safe on that front. Reading up further, monitors can only have one pinned IP, which means that while you can have multiple...
  17. Proxmox - crash disc

    If the original data was a zvol, you can just append .raw to the file; then you can use qemu-img to import the data to your new destination.
  18. Ceph multi-public-network setup: CephFS on separate network

    Ceph allows multiple public networks; just make sure your monitors exist on whatever public network(s) you define. See https://docs.ceph.com/en/reef/rados/configuration/network-config-ref/
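As a rough sketch of what that looks like in practice, a ceph.conf fragment with a comma-delimited public_network list might read as follows (the subnets are made-up placeholders; check the linked network-config reference for the exact option semantics on your release):

```ini
[global]
    # Comma-delimited list of public networks; each monitor must have an
    # address on one of these subnets.
    public_network = 192.168.10.0/24, 192.168.20.0/24
    # Optional separate back-end network for OSD replication traffic.
    cluster_network = 10.10.10.0/24
```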