Search results

  1. Repurposing vxrails hardware with Ceph

    I do have the SSDs, 2x400GB per chassis, unless they used to come with even more SSDs...
  2. Repurposing vxrails hardware with Ceph

    Thanks for all the advice, will be playing with stuff and trying things :) Out of curiosity: while I know HDDs are not great, these are SAS disks, which should provide better performance, and all in all this system was supposedly performant enough when it was running vxrail; looking at the hw I...
  3. Repurposing vxrails hardware with Ceph

    NFS boot will probably apply to some guests, but mostly it will be a self-contained playground where we can break things and try things faster than elsewhere.
  4. Repurposing vxrails hardware with Ceph

    Yes. This is the first foot in the door for Proxmox; the main goal at the moment is that I get a lab env with fewer of the encumbrances of the way they currently mostly work, and as an aside it may show some of the good sides of Proxmox. The outfit is actually a mostly Debian outfit (just...
  5. Repurposing vxrails hardware with Ceph

    It has 4 10GbE NICs (Intel X550). I believe all are connected to a switch at 10GbE, but I don't have access to the switch atm, and enabling more than 1 port led to spanning tree issues, so at least until the person in charge of all of that is a bit more available all 3 nodes are running on a single 10G...
  6. Repurposing vxrails hardware with Ceph

    Boot sits on RAIDZ1 SATA SSDs (which were barely hit by usage, due to VMware, I guess, not writing a lot to the OS disks)
  7. Repurposing vxrails hardware with Ceph

    At a place I am working at, I have been given access to a set of old vxrails machines to implement a Proxmox PoC (and I hope I'll be able to help them migrate from VMware to Proxmox in the future :)). The machines are 3 Dell S570 (probably from 2018/2019), each with 4x4TB HDD and 2x400GB SSD (all...
  8. Disk pass through or ZFS datasets

    A lot of guides suggest passing through physical disks to VMs when people want to run things like TrueNAS. But what if you want to use your HDDs for more than just a NAS, for instance as log devices to reduce "less important" writes to SSDs, or for PBS? My gut says I should just set up the RAIDZ at...
  9. [SOLVED] NFS server in LXC

    Sorry for the very delayed reply. If you follow @unclevic's instructions you might as well install directly on the host; there is no difference, since all the guardrails are removed and the service ties into the host kernel. Solution 1 from @lz114, on the other hand, would not make an unsecure...
  10. ZFS no pools available yet ONLINE import status, I/O error on Import attempt

    For anyone in the future - the following was the sequence of actions that worked for me:
    echo 0 > /sys/module/zfs/parameters/spa_load_verify_metadata
    echo 0 > /sys/module/zfs/parameters/spa_load_verify_data
    zpool import rpool -f -o readonly=on -R /mnt
    # mounted the key volume from the gnome...
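    The two echo commands above change the ZFS module parameters only for the currently loaded module, so they are lost on reboot. If the relaxed verification were needed across a reboot mid-recovery, the equivalent persistent form would be a modprobe.d fragment like the sketch below (the filename is hypothetical, and the settings should be removed again as soon as the pool is recovered):

    ```
    # /etc/modprobe.d/zfs-recovery.conf  (hypothetical name; delete after recovery)
    # Skip metadata/data verification during pool load - recovery use only
    options zfs spa_load_verify_metadata=0 spa_load_verify_data=0
    ```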
  11. ZFS no pools available yet ONLINE import status, I/O error on Import attempt

    Hey @colinstu, did you ever do a full write up? I am currently trying to resolve a similar situation, with the added complication of encrypted ZFS. Were you able to make the zpool importable again? (I believe that once I manage to import, decrypting should be possible.) Thanks!
  12. [Server migration] How should I approach this?

    In the end I got it working by reformatting, reinitializing and updating the boot partition(s) from the chroot (which had /sys and /dev bind mounted). I actually had an error with one partition, so I need to double check that *both* SSDs actually have working boot partitions, but this is already...
  13. [Server migration] How should I approach this?

    Please note that I have run https://forum.proxmox.com/threads/proxmox-rescue-disk-trouble.127585/#post-557888 I have also tried to chroot into the resulting mount of rpool and run `proxmox-boot-tool status` and `proxmox-boot-tool refresh`; the output, as I understand it, seems to suggest all is...
  14. [Server migration] How should I approach this?

    At the moment I'm still trying to fix boot issues. What I ended up doing so far: 1. Connect old mirror to SATA ports of new motherboard 2. Boot Ubuntu 25.04 live (just what I happened to have an ISO of) 3. Create GPT partition table and 3 partitions (1M - bios_boot, 1G - EFI, the rest) 4. add...
  15. [Server migration] How should I approach this?

    Just putting all the old drives on the new motherboard is not possible, since it "only" has 9 SATA ports and I am using 10; also, as said, I saw this as a nice opportunity to upgrade the zfs-mirror used for Proxmox and primary storage to NVMe. I could have a degraded OS disk and migrate it to the NVME...
  16. [Server migration] How should I approach this?

    (Sorry about the vague title, I was a bit unsure what to use; even writing this post is brainstorming for me) I have a single Proxmox host in my homelab; it has 10 SATA SSDs, split as follows: - Proxmox OS + majority of guest OS disks sit on a 2-device zfs mirror - Some guest VMs have data living...
  17. [heartbeat] Are alternative interfaces supported?

    Thanks for the fast reply! I really liked the idea of simple cables, since that is the fewest possible things that can break, but I guess it was not to be.
  18. [heartbeat] Are alternative interfaces supported?

    I was wondering: is it possible to leverage other interfaces like USB, serial, etc. for the corosync heartbeat in a Proxmox cluster? That way you may be able to avoid having a switch (which can also fail) in your heartbeat path and just have a mesh (for small 3 node clusters you would only need 3...
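    USB and serial are not transports corosync speaks, but the switch-free idea can be approximated with direct Ethernet cables between the nodes carried as an additional corosync/knet link. A minimal corosync.conf sketch, assuming hypothetical node names and addresses, and assuming the point-to-point cables have been given addresses reachable from every node (e.g. via a full-mesh network setup), could look like:

    ```
    totem {
      version: 2
      cluster_name: lab
      transport: knet
      interface { linknumber: 0 }              # normal switched network
      interface { linknumber: 1 }              # direct-cable mesh, extra link
    }

    nodelist {
      node {
        name: node1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 192.0.2.1                  # switched LAN (example address)
        ring1_addr: 10.10.10.1                 # direct-cable mesh (example address)
      }
      # node2 / node3 entries follow the same pattern
    }
    ```

    knet supports multiple links per node (numbered link 0 upward), and corosync fails over between them, so losing the switch would not take down the heartbeat as long as the cable mesh stays up.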
  19. [SOLVED] NFS server in LXC

    I think you are 100% correct: unless you use nfs-ganesha, you are probably worse off using a container, because you are providing a server that ties into your kernel on the host, so all "benefits" of containers/VMs go out the window.