Search results

  1.

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    You don't need PCI passthrough for LXC; you just need to install the proper NVIDIA driver for the hardware and kernel deployed. You are better off creating an installation script, especially if you intend to have multiple nodes with GPUs. FYI, that 4x node solution is VERY old and has...
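    As a rough sketch of that approach (not from the post itself): the usual way to hand an NVIDIA GPU to an LXC container is to allow the device nodes in the container config and bind-mount them in. The container ID, device paths, and the major number 195 are assumptions that vary with driver version.

        # /etc/pve/lxc/<ctid>.conf  (<ctid> is a placeholder container ID)
        # 195 is the customary major number for the NVIDIA character devices;
        # nvidia-uvm gets a dynamic major, check /proc/devices on the host
        lxc.cgroup2.devices.allow: c 195:* rwm
        lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
        lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
        lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file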
  2.

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    I think you need to carefully consider what your end goal is. PCIe passthrough is not a good citizen in a PVE cluster, since VMs with PCIe pins not only cannot move anywhere, but are also liable to hang the host. If you MUST use PCIe passthrough, consider leaving that node outside the cluster. I...
  3.

    yawgpp (Yet Another Windows Guest Performance Post)

    In the many years I've been using PVE, I haven't had much call for Windows guests, and when I did it was usually Windows 2016 (and older before that) with reasonably good results. In the last few weeks, I had need of a Windows guest for a specific purpose and figured it's a good time to set...
  4.

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    In a cluster you don't need or even want to back up a host; everything important lives in /etc/pve, which exists on all nodes. If you DID back up host(s), you'd open the possibility of restoring a node that has been removed from the cluster and causing untold damage when turning it on. This is...
  5.

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    The dashboard and smb modules are, as the term suggests, OPTIONAL MODULES. They are not required for "basic functionality" and provide no utility to a Ceph installation as a component of PVE.
  6.

    Splitbrain 1 Node after Hardware Error

    The short answer is yes. The longer answer is that you need to take into consideration what Ceph daemons are running on the node and account for them in the interim. Moving all but OSDs is trivial: just create new ones on other nodes and delete the ones on the "broken" one. OSDs add a wrinkle; it's...
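    For the non-OSD daemons, that "create new ones, delete the old ones" step can be done with pveceph from the surviving nodes; a sketch, with the broken node's name as a placeholder:

        # on a healthy node: create replacement daemons
        pveceph mon create
        pveceph mgr create
        # then drop the instances that lived on the broken node
        pveceph mon destroy <broken-node>
        pveceph mgr destroy <broken-node>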
  7.

    Splitbrain 1 Node after Hardware Error

    Simplest fix? Evict the "out" node from the cluster and reinstall PVE on it from scratch, WITH A NEW NAME.
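    A sketch of the eviction half, assuming the dead node is named <oldnode> and the commands run on a member that still has quorum:

        # remove the dead member from the cluster
        pvecm delnode <oldnode>
        # clean up its leftover configuration so the old name can't resurface
        rm -r /etc/pve/nodes/<oldnode>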
  8.

    Move Storage fails with qemu-img convert (Invalid argument)

    The interwebs say this happens when the on-disk block size goes from a 4k source to a 512b destination. Is reformatting the destination volume a possibility?
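    One way to test the 4k-vs-512 theory before reformatting anything; device paths are placeholders:

        # logical sector size (--getss) and physical block size (--getpbsz)
        blockdev --getss --getpbsz /dev/<source-device>
        blockdev --getss --getpbsz /dev/<destination-device>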
  9.

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    I didn't know that, but that kind of begs the question: what does the dashboard offer you beyond what PVE presents? If it's really something necessary, I'd probably just set up Ceph with cephadm separate from PVE. PVE doesn't consider the entirety of the Ceph stack necessary for full function since it...
  10.

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    You don't actually need the module; all it does is integrate what you could always do with smbd. It's a nice-to-have, not a showstopper.
  11.

    SAS pool on one node, SATA pool on the other

    Correct. And it doesn't have to be a full PVE install either; see https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support
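    For reference, the external-vote setup in that chapter boils down to a qnetd daemon on the outside machine plus a qdevice on the cluster; the IP below is a placeholder:

        # on the external (non-PVE) box
        apt install corosync-qnetd
        # on every cluster node
        apt install corosync-qdevice
        # then, from any one cluster node
        pvecm qdevice setup <qdevice-ip>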
  12.

    Evaluating ProxMox and failing on UEFI

    https://forum.proxmox.com/threads/proxmox-7-0-failed-to-prepare-efi.95466/
  13.

    SAS pool on one node, SATA pool on the other

    Make sure that only the node that actually has this store is in this box. Also, you need a third node or a qdevice in your cluster, or you can have issues if any node is down.
  14.

    SAS pool on one node, SATA pool on the other

    pvesm remove SATAPool
    Out of curiosity, why do you keep bringing up the other node? Are they clustered? If clustered, don't delete the store; you need to go to Datacenter > Storage and make sure you EXCLUDE the node that doesn't have that pool from the store definition, or unmark shared.
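    The CLI equivalent of that Datacenter > Storage change, as a sketch (the node name is a placeholder):

        # restrict the SATAPool store definition to the node that actually has the pool
        pvesm set SATAPool --nodes <node-with-sata-pool>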
  15.

    SAS pool on one node, SATA pool on the other

    Let's only discuss the node in question, since the other one is working. I assume that the end result here is a zpool named saspool? If so:
    zpool create SASPool [raidtype] [device list]
    If something else, please note what your desired end result is.
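    Filled in with example values only (use stable /dev/disk/by-id paths for the real disks), that might look like:

        # a three-disk raidz1 pool named SASPool; raid type and devices are placeholders
        zpool create SASPool raidz1 /dev/disk/by-id/<disk1> /dev/disk/by-id/<disk2> /dev/disk/by-id/<disk3>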
  16.

    SAS pool on one node, SATA pool on the other

    I am completely confused. If the drives that host saspool are NOT PRESENT, then it's a foregone conclusion that a defined store looking for that filesystem is going to error... What are you asking? Let's start over: do you have the drives plugged in, and can you see them in lsblk?
  17.

    ZFS - How do I stop resilvering a degraded drive and replace it with a cold spare?

    Don't worry about attaching/detaching it; simply remove and replace it. Edit: DON'T PULL IT YET. There's a resilver in progress; even though the disk is bad, the vdev partner is not in a "complete" state. Let it finish resilvering, and THEN replace the bad drive.
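    In command form, the sequence described is roughly (pool and device names are placeholders):

        # watch the resilver until zpool status reports it finished
        zpool status <pool>
        # only then swap the cold spare in for the failed disk
        zpool replace <pool> <failed-device> <new-device>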
  18.

    SAS pool on one node, SATA pool on the other

    So I assume you have a data store defined looking for saspool... remember to delete that too.
  19.

    SAS pool on one node, SATA pool on the other

    What do zpool list and zpool import show?