Search results

  1. S3 Backup support

    I would love to give this a try, but I am a little lost on how to set up the minio side. Would someone be able to point to a guide for the minio setup? I've been skimming https://min.io/docs/minio/linux/operations/installation.html but it seems it's expecting dedicated disk(s) locally? I'm...
  2. Void Linux LXC?

    For what it's worth, the instructions posted to this Reddit thread work: https://www.reddit.com/r/voidlinux/comments/10m528z/void_lxc_on_proxmox/ What I did was build a container, turn it into a template, and then clone the template to spin up additional Void containers. The only things I...
  3. Can two PBS instances share same datastore?

    Cool, that matches my understanding which is good as I already started juggling things around, heh. The amount of free disk space on one of the hosts was looking to be a bit tight if I did things the most proper way and did a sync, assuming a sync would result in a complete second copy of the...
  4. Can two PBS instances share same datastore?

    Yeah, which is where I was thinking that if they were cross-pulling into the same datastore, things could go wrong. Yep yep, this is precisely why I have a local PBS instance handle backups for the local environment and then sync across to the remote PBS instance! Oooooooh I will definitely need to...
  5. Can two PBS instances share same datastore?

    Hello! I have two locations that are geographically distant from one another where PVE and PBS (to back up the local PVE environment) are running. I have the opposing PBS instances configured with a separate datastore and a sync job so that the backups for location A periodically sync to...
  6. Migrate PBS to new host

    I think I found the mechanism to do this. Remotes! https://pbs.proxmox.com/docs/managing-remotes.html#remote Question is, can I set up the new server as a remote, then pivot to having PVE back up directly to it?
  7. Migrate PBS to new host

    Hello! I need to replace a host that is currently running Proxmox Backup Server. Is there a way I can migrate the datastore and configuration? I'd like to avoid re-baselining the backups if possible. I've found a post about moving the datastore to a new volume, but this would be migrating the...
  8. [SOLVED] Cancelled Disk Move - Orphaned Disk

    Nevermind! Found the answer to my question: qm rescan --vmid 104. Then I can delete the unused disk from the VM's hardware pane.
  9. [SOLVED] Cancelled Disk Move - Orphaned Disk

    Hello, I started a disk move from a ZFS storage pool to a local thin LVM storage pool. I thought there was enough disk space, but I was mistaken. So, before the local LVM storage became full, I cancelled the move. The task seemed to cancel fine, so I went ahead and did the move again to a...
  10. Homelab migration to Proxmox

    That makes complete sense. I think what I may end up doing is simplifying the setup as follows: One node running Proxmox VE paired with one DAS shelf. One node running Proxmox Backup paired with one DAS shelf. I actually do not have a great backup strategy at the moment, so I think the above...
  11. Homelab migration to Proxmox

    I mean, someone has to push the envelope...heh. I am not tied to Ceph specifically. It was my original plan as per my setup, but these disk shelves have thrown me a curve ball, heh. I may look into sourcing a 3rd disk shelf, but that would be a future thing unfortunately. I would like to use...
  12. Homelab migration to Proxmox

    Hmm. So, I just acquired two IBM EXP2512 DAS shelves. I'm trying to think of how to integrate these into my setup. Having two shelves, not three, poses a bit of a challenge. I was reading about how one can build a 2-node Proxmox cluster and have a 3rd system participate as a...
  13. Homelab migration to Proxmox

    This makes sense. I have to think about it a bit differently since the setup would be different than my existing gluster pool. With my current gluster setup, each host has its own RAID array, so a single disk failure in a given node does not result in any lost redundancy until an entire node...
  14. Homelab migration to Proxmox

    Thanks for the feedback everyone! Sounds like this is totally doable and I'm excited to get started! When it comes to the Ceph pool, I was planning on configuring it for 2 copies. The idea being that the cluster only needs to survive losing a single node. This is what I do currently with...
  15. Homelab migration to Proxmox

    Gotchya. The temporary storage was something I was hoping to avoid; however, I understand why it's recommended. So, this got me thinking and I *think* I have a plan. Since with Ceph I'd be presenting the disks raw, assuming the hardware will support this... I can take two of the drives of...
  16. Homelab migration to Proxmox

    Hello, I am new to Proxmox and am looking to migrate my home lab over. I'm currently running a 3-node hodgepodged together libvirt/KVM/gluster stack. I'd like to move this to Proxmox using Ceph. However, I'd need to juggle this around existing hardware. Here's what I'm thinking of doing...
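The orphaned-disk fix quoted in result 8 can be sketched as a short shell sequence on a Proxmox VE node. This is a sketch, not a definitive procedure: the VM ID 104 comes from the post, while the `unused0` key is an assumption — the actual key appears after the rescan and should be checked first.

```shell
# Re-scan storages so Proxmox VE notices the volume left behind by the
# cancelled disk move; it is re-attached to the config as an "unused disk".
qm rescan --vmid 104

# Inspect the VM config to find the unusedN key that was added
# (e.g. "unused0: local-lvm:vm-104-disk-1").
qm config 104

# Remove the unused-disk entry (substitute the real unusedN key here),
# mirroring the "delete the unused disk from the hardware pane" step.
qm set 104 --delete unused0
```

These commands only make sense on the PVE host itself; run `qm config` again afterwards to confirm the `unused0` line is gone before assuming the orphan is cleaned up.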