Search results

  1. Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    The dashboard and smb modules are, as the term suggests, OPTIONAL MODULES. They are not required for "basic functionality" and provide no utility to a ceph installation as a component of PVE.
  2. Splitbrain 1 Node after Hardware Error

    The short answer is yes. The longer answer is that you need to take into consideration which ceph daemons are running on the node and account for them in the interim. Moving all but the OSDs is trivial: just create new ones on other nodes and delete the ones on the "broken" one (see the sketch after these results). OSDs add a wrinkle; its...
  3. Splitbrain 1 Node after Hardware Error

    Simplest fix? Evict the "out" node from the cluster and reinstall PVE on it from scratch, WITH A NEW NAME.
  4. Move Storage fails with qemu-img convert (Invalid argument)

    Interwebs say this happens when the on-disk block size goes from a 4k source to a 512b destination. Is reformatting the destination volume a possibility?
  5. Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    I didn't know that, but that kinda raises the question: what does the dashboard offer you beyond what PVE presents? If it really is something necessary, I'd probably just set up ceph with cephadm separate from PVE. PVE doesn't consider the entirety of the ceph stack necessary for full function since it...
  6. Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    You don't actually need the module; all it does is integrate what you could always do with smbd. It's a nice-to-have, not a showstopper.
  7. SAS pool on one node, SATA pool on the other

    Correct, and it doesn't have to be a full PVE node either. See https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support (a minimal QDevice setup is sketched after these results).
  8. Evaluating ProxMox and failing on UEFI

    https://forum.proxmox.com/threads/proxmox-7-0-failed-to-prepare-efi.95466/
  9. SAS pool on one node, SATA pool on the other

    Make sure that only the node that actually has this store is in this box. Also, you need a third node or a QDevice in your cluster, or you can have issues if any node is down.
  10. SAS pool on one node, SATA pool on the other

    pvesm remove SATAPool. Out of curiosity, why do you keep bringing up the other node? Are they clustered? If clustered, don't delete the store; you need to go to Datacenter > Storage and make sure you EXCLUDE the node that doesn't have that pool from the store definition, or unmark shared (see the pvesm sketch after these results).
  11. SAS pool on one node, SATA pool on the other

    Let's only discuss the node in question, since the other one is working. I assume that the end result here is a zpool named saspool? If so: zpool create SASPool [raidtype] [device list] (a filled-in example follows these results). If something else, please note what your desired end result is.
  12. SAS pool on one node, SATA pool on the other

    I am completely confused. If the drives that host saspool are NOT PRESENT, then it's a foregone conclusion that a defined store looking for the filesystem is going to error... What are you asking? Let's start over: do you have the drives plugged in, and can you see them in lsblk?
  13. ZFS - How do I stop resilvering a degraded drive and replace it with a cold spare?

    Don't worry about attaching/detaching it; simply remove and replace it. Edit: DON'T PULL IT YET. There's a resilver in progress; even though the disk is bad, the vdev partner is not in a "complete" state. Let the resilver finish, and THEN replace the bad drive (see the zpool sketch after these results).
  14. SAS pool on one node, SATA pool on the other

    So I assume you have a data store defined looking for saspool... remember to delete that too.
  15. SAS pool on one node, SATA pool on the other

    What do zpool list and zpool import show?
  16. Evaluating ProxMox and failing on UEFI

    The likely culprit is a pre-existing filesystem on your local drive. The installer is usually pretty good at dealing with it, but in your case it clearly isn't. Use a Linux live CD like https://gparted.org/download.php and completely wipe the drive, then retry the installation (a command-line equivalent is sketched after these results).
  17. SAS pool on one node, SATA pool on the other

    Actual errors would be useful; this can refer to the ZFS pool or the storage pool in PVE. No, each pool must have a different name on the same host. The same goes for PVE stores.
  18. /etc/init.d/ceph warnings

    Ignore it. It's just informing you of the default behavior for a ceph install, which is not aware of how ceph interacts in a PVE environment.
  19. Custom Auto install proxmox with zfs and offline packages

    I feel so inadequate.... For years, I've just run a postinstall script that sits on an NFS mount. Yes, I could have done more automation, but in the end it seemed the better part of valor to just type mount followed by /path/to/mount/postinstall.sh. Maybe if I was doing 100 installs a day I'd look...
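
For result 2, a minimal sketch of relocating the non-OSD ceph daemons away from a broken node. The node name "brokennode" is an assumption, and if that node is fully offline the destroy steps may need the underlying ceph tooling rather than pveceph.

    # on a healthy node: stand up replacement daemons first
    pveceph mon create
    pveceph mgr create

    # then drop the copies that lived on the broken node
    # ("brokennode" is a hypothetical node/mon ID)
    pveceph mon destroy brokennode
    pveceph mgr destroy brokennode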
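
For result 7, a minimal QDevice setup, assuming the external vote host is reachable at 192.168.1.50 (hypothetical address); see the linked pvecm chapter for the full procedure.

    # on the external machine that only provides a vote (does not need to run PVE)
    apt install corosync-qnetd

    # on every cluster node
    apt install corosync-qdevice

    # from one cluster node: register the external vote daemon
    pvecm qdevice setup 192.168.1.50

    # confirm the quorum now shows the extra vote
    pvecm status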
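
For result 10, a sketch of restricting the store instead of deleting it; the store ID SATAPool comes from the thread, the node name nodeB is an assumption.

    # limit the SATAPool store definition to the node that actually has the pool
    # (same effect as editing the Nodes field under Datacenter > Storage)
    pvesm set SATAPool --nodes nodeB

    # only if the store really should not exist anywhere:
    pvesm remove SATAPool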
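
For result 11, one filled-in form of the zpool create line; the raidz1 level, the device paths, and the node name nodeA are assumptions to be replaced with the real layout.

    # create a pool named SASPool as a raidz1 across four SAS disks (device ids are hypothetical)
    zpool create SASPool raidz1 \
        /dev/disk/by-id/scsi-DISK1 \
        /dev/disk/by-id/scsi-DISK2 \
        /dev/disk/by-id/scsi-DISK3 \
        /dev/disk/by-id/scsi-DISK4

    # then define a PVE store on top of it, restricted to the node that owns the disks
    pvesm add zfspool SASPool --pool SASPool --nodes nodeA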
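
For result 13, the wait-then-replace sequence in command form; the pool name tank and the disk ids are assumptions.

    # watch until the scan line no longer says "resilver in progress"
    zpool status tank

    # only after the resilver completes, swap the cold spare in for the bad disk
    zpool replace tank /dev/disk/by-id/OLD-DISK /dev/disk/by-id/NEW-DISK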
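
For result 16, a command-line equivalent of the GParted wipe, run from any live environment; /dev/sda is a hypothetical target, verify it with lsblk before wiping.

    # confirm which disk is the intended install target
    lsblk -o NAME,SIZE,MODEL

    # remove old filesystem signatures and the partition table, then retry the installer
    wipefs --all /dev/sda
    sgdisk --zap-all /dev/sda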