Search results

  1. A

    SAS pool on one node, SATA pool on the other

Let's only discuss the node in question, since the other one is working. I assume the end result here is a zpool named saspool? If so: zpool create SASPool [raidtype] [device list]. If something else, please note what your desired end result is.
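The command sketch above, filled in with hypothetical values (a two-disk mirror; the by-id paths below are placeholders for your actual SAS disks), might look like:

```shell
# Assumption: a simple two-disk mirror. by-id paths are preferred over
# /dev/sdX so the pool survives device renumbering across reboots.
zpool create SASPool mirror \
    /dev/disk/by-id/scsi-35000c500aaaaaaaa \
    /dev/disk/by-id/scsi-35000c500bbbbbbbb

# Confirm the pool came up healthy:
zpool status SASPool
```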
  2. A

    SAS pool on one node, SATA pool on the other

I am completely confused. If the drives that host saspool are NOT PRESENT, then it's a foregone conclusion that a defined store looking for the filesystem is going to error... what are you asking? Let's start over: do you have the drives plugged in, and can you see them in lsblk?
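A quick way to run that check (the TRAN column shows each disk's transport, so the SAS drives should report sas):

```shell
# List block devices with size, type, transport and model:
lsblk -o NAME,SIZE,TYPE,TRAN,MODEL
```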
  3. A

    ZFS - How do I stop resilvering a degraded drive and replace it with a cold spare?

Don't worry about attaching/detaching it; simply remove and replace it. edit- DON'T PULL IT YET. There's a resilver in progress; even though the disk is bad, the vdev partner is not in a "complete" state. Let it finish the resilver, and THEN replace the bad drive.
  4. A

    SAS pool on one node, SATA pool on the other

So I assume you have a datastore defined looking for saspool... remember to delete that too.
  5. A

    SAS pool on one node, SATA pool on the other

What do zpool list and zpool import show?
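For context, a sketch of what those two commands report (no assumptions beyond stock ZFS tooling):

```shell
zpool list     # pools currently imported on this node, with capacity and health
zpool import   # with no arguments: scans attached disks for importable pools
```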
  6. A

    Evaluating ProxMox and failing on UEFI

The likely culprit is a pre-existing filesystem on your local drive. The installer is usually pretty good at dealing with it, but in your case it clearly isn't. Use a Linux live CD like https://gparted.org/download.php to completely wipe the drive, then retry the installation.
  7. A

    SAS pool on one node, SATA pool on the other

Actual errors would be useful- this can refer to the ZFS pool or the storage pool in PVE. No. Each pool must have a different name on the same host; the same goes for PVE stores.
  8. A

    /etc/init.d/ceph warnings

Ignore it. It's just informing you what the default behavior for a Ceph install is, which isn't aware of how Ceph is integrated in a PVE environment.
  9. A

    Custom Auto install proxmox with zfs and offline packages

I feel so inadequate... for years, I've just run a postinstall script that sits on an NFS mount. Yes, I could have done more automation, but in the end it seemed the better part of valor to just type mount followed by /path/to/mount/postinstall.sh. Maybe if I was doing 100 installs a day I'd look...
  10. A

    Installing multipath-tools-boot for iSCSI?

This package is necessary to prevent race conditions at boot; without it, VGs can get activated ahead of multipath by the operating system. If you don't have any entries in fstab for your iSCSI mounts, you don't need it. edit- to clarify, if you're not issuing iscsi...
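A hedged illustration of the fstab case being described: an entry like the hypothetical one below mounts a multipathed iSCSI-backed device at boot, which is exactly the situation where the package matters.

```shell
# /etc/fstab (example values only); _netdev defers the mount until networking,
# and therefore the iSCSI session, is available.
/dev/mapper/mpatha-part1  /mnt/iscsi-data  ext4  defaults,_netdev  0  2
```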
  11. A

    Proxmox - best practice to handle (USB) drives

As long as your PVE box sees the drives (eg, /dev/sdb and /dev/sdc), you can do whatever you want as far as deployment: make them individual VGs, a single VG spanning both, a zpool, a btrfs mirror, or pass them through to a guest if you really want OMV. Understand that USB as a disk transport...
  12. A

    Recover broken Ceph config possible?

Well, since the cluster is still up, not all is lost (or at least there is still a chance of recovery). Start by logging into a node containing a working monitor: ceph config dump | tee -a /etc/pve/ceph.conf.rebuild Alternatively, if that doesn't work: ceph config show-with-defaults...
  13. A

    PVE-root is full

    https://forum.proxmox.com/threads/full-disk-on-vm-but-cant-find-files-taking-up-space.75084/post-336530
  14. A

    Hardware interface packages best practices

I'd say the question is too broad for a reasonable answer; most "software" doesn't care about direct access to hardware. Applications that DO need direct access (eg, to a GPU) have these methods available: 1. for exclusive access, a GPU can be passed directly to a guest 2. for shared access, it...
  15. A

    Proxmox - crash disc

The syntax is qm disk import <vmid> <source> <storage> [OPTIONS] In your case, source would be /path/to/my/recovered.raw
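Filled in with hypothetical values (VM ID 100, target storage local-lvm), the import and the follow-up attach might look like:

```shell
# Import the recovered image as a new disk owned by VM 100:
qm disk import 100 /path/to/my/recovered.raw local-lvm

# The disk lands as "unused" on the VM; attach it (bus/slot are assumptions):
qm set 100 --scsi0 local-lvm:vm-100-disk-0
```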
  16. A

    Mellanox ConnectX-3 & Proxmox 9.1

1. Try the proxmox-kernel. 2. Says who? While the DRIVERS aren't provided in the package, mst should work just fine; if it doesn't, you can always uninstall :) 3. That is one way; another would be to make a persistent live USB so you don't have to mess with your install at all.
  17. A

    Site to Site VM replication

Site-to-site replication can be done to some extent with PDM. Third-party solutions also exist (Veeam, Pegaprox), as well as creative use of PBS. ZFS replication can be used if the sites are single-node.
  18. A

    Shared Remote ZFS Storage

Ah. This is a sore point for me. The ONUS of decision-making is on the purchaser. If the purchaser doesn't understand what he is buying (for any reason short of outright fraud), then the fault lies there too; NO technology is without edge cases, and 99.999% uptime still means over 5 minutes...
  19. A

    CephFS "Block Size"?

The default stripe_unit size for Ceph is 4M, but you may not want to raise that (and maybe even shrink it instead). The stripe size is a maximum "file chunk" limit for a CephFS object (file); if a file is larger than the stripe size, the next portion of the file is written to A DIFFERENT PG...
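For reference, CephFS layouts (including stripe_unit) are exposed as virtual xattrs; a hedged sketch with an assumed mount point and directory:

```shell
# Show the layout that new files in this directory will inherit:
getfattr -n ceph.dir.layout /mnt/cephfs/mydir

# Hypothetical change: shrink stripe_unit to 1 MiB (value is in bytes):
setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/mydir
```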
  20. A

    Shared Remote ZFS Storage

I think I understand your point, but I don't subscribe to the viewpoint that "if I can do it myself, there is no justification for someone else to get paid." There is nothing wrong, in my view, with a Nexenta or TrueNAS using ZFS internally; ZFS is tried and true and works well. When you buy a...