Search results

  1. Debian 13 LXC Template

    Umm... extracting archive '/var/lib/vz/template/cache/debian-13-standard_13.1-1_amd64.tar.zst' Total bytes read: 545423360 (521MiB, 340MiB/s) TASK ERROR: unable to create CT 105 - unsupported debian version '13.1'
  2. Debian 13 LXC Template

    pveam update. New template is out for Debian 13.
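If the new template does not show up after the update, the usual pveam workflow is sketched below (run on a PVE node; the file name is taken from the task log quoted in result 1 and may differ for you). The "unsupported debian version" error above usually means the node's pve-container package predates Debian 13, so updating the host itself is typically the fix.

```
# Refresh the list of available templates
pveam update

# See which Debian 13 templates the node now knows about
pveam available --section system | grep debian-13

# Download to local storage (file name from the task log above;
# yours may differ)
pveam download local debian-13-standard_13.1-1_amd64.tar.zst
```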
  3. ZFS drive failed, HA didn't migrate

    Thanks, I know about PLP, and I did try to buy a Micron 7400 Pro, which turned out to be fake. My issue is I only have 2280 slots, and only 2 of them, in my MiniPC. I will get 2x 2TB on each node and run a ZFS mirror.
  4. ZFS drive failed, HA didn't migrate

    Hi there, I have a 3-node PVE cluster with a single ZFS drive on all 3. I set up replication to run every 2 hours between all 3 nodes. Today a ZFS drive on node1 died; instead of the CTs/VMs migrating to other nodes, they all just failed. What is the best way to get them back up and...
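For reference, a 2-hour replication schedule like the one described can be created per guest from the CLI as well as the GUI (guest ID 100 and target node pve2 are hypothetical examples; the schedule string is a Proxmox calendar event):

```
# Replicate guest 100 to node pve2 every two hours (job ID 100-0)
pvesr create-local-job 100-0 pve2 --schedule '*/2:00'

# Verify the job and its last sync time
pvesr status
```

Worth noting: PVE HA generally reacts to node failures (fencing a node that drops out of the cluster), not to a failed local disk on a node that is otherwise still up, which may explain why the guests failed in place instead of migrating.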
  5. Networking and Clusters

    Yeah I plan to do this. This is the option, when creating the cluster, to select both NICs, right? And then when you join a cluster, to add both NICs of the other nodes?
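Selecting both NICs during cluster creation corresponds to corosync links 0 and 1; from the CLI it would look roughly like this (cluster name and addresses are made-up examples):

```
# On the first node: create the cluster with two corosync links
pvecm create mycluster --link0 address=10.0.0.1 --link1 address=10.0.1.1

# On each joining node: point at an existing member and give the
# joining node's own address for each link
pvecm add 10.0.0.1 --link0 address=10.0.0.2 --link1 address=10.0.1.2
```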
  6. Networking and Clusters

    Was just thinking about bonding the 2x 2.5Gb NICs for a 5Gb connection for everything: Ceph, Proxmox, and VM/LXC. Would having extra bandwidth be better than a dedicated slower NIC?
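A bond of the two 2.5Gb ports in /etc/network/interfaces might look like this (interface names and addresses are assumptions; 802.3ad/LACP needs switch support, and a bond gives ~5Gb aggregate while any single stream still tops out at 2.5Gb). Keep in mind corosync is latency-sensitive, not bandwidth-hungry, which is why a dedicated link is often recommended over sharing a bond with Ceph:

```
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0 enp2s0
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```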
  7. Any issues with Dell branded Intel D3-4610, & Enterprise NVME?

    Hi all. I need a new SSD for a PBS backup box. I was looking at the Intel D3-4610, but I found some Dell-branded ones for slightly cheaper. Are there any concerns with buying a Dell-branded drive rather than an OEM Intel one? Are there any other solid cheap SSDs for a backup server...
  8. Networking and Clusters

    @UdoB or @SteveITS can you help me decide if I put the Proxmox network on the Ceph network or the VM/LXC network?
  9. Networking and Clusters

    Yeah I've been testing for a year on a smaller cluster. It's been rock solid so far. I just want to figure out the best option in terms of 2 NIC utilization and which way to install Proxmox.
  10. Networking and Clusters

    Probably not very. A couple of Windows VMs and about 10 LXCs. I was going to use Ceph as my storage across the 3 nodes, on the mgmt LAN.
  11. Networking and Clusters

    Hi all, I'm about to build a new cluster with dual 2.5Gb NICs. I want one or two LXCs on my main default network (UniFi) and the rest on a VLAN. I can do this by leaving the VM/LXC LAN on default, and then just using the VLAN tag for all containers I want elsewhere. But my main question is...
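The setup described (default network for a couple of LXCs, VLAN tags for the rest) only needs the bridge to be VLAN-aware; a sketch with assumed interface names, addresses, and VLAN IDs:

```
# /etc/network/interfaces: VLAN-aware bridge
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Container on the default (untagged) network:
#   net0: name=eth0,bridge=vmbr0,ip=dhcp
# Container tagged onto VLAN 20:
#   net0: name=eth0,bridge=vmbr0,ip=dhcp,tag=20
```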
  12. Planning advice

    Hi all, I'm looking for some advice. I have 2 main nodes that I want to use. I don't NEED HA, but it would be nice to have. Basically I plan to run the 2 sorta individual nodes, but be able to transfer VMs across if needed (was thinking of using the Datacenter Manager). I was planning to put...
  13. How to remove a node from cluster with Ceph?

    Hi all, I want to do a clean install of my nodes one by one to upgrade to PVE 9.0. Is there any documentation on how to do this successfully? It also made me think, what if one of my nodes actually dies, how do I go about replacing the node? I've found how to remove a Ceph OSD, and how to...
  14. Ceph not starting - New network

    Hi all, so I had to move my PVE cluster to a new network, from 192.168.x to 10.0.0.x. I followed this: https://bookstack.dismyserver.net/books/documentation/page/how-to-change-the-ip-address-of-a-proxmox-clustered-node For the most part it seems ok; I can see all the nodes in the cluster, all...
  15. Email notifications sending to root@pam

    Hi all, I recently set up PBS and notifications, similar to how PVE is set up. But the emails keep going to root@pam, and also to the email that was configured for this root account... This is the mail system at host pve-mini02.home.arpa. I'm sorry to have to inform you that your message could not...
  16. subvol umount and removed but keeps coming back after container restart

    Running this, then going to the container, and clicking remove on both those unused volumes resolved the issue...
  17. subvol umount and removed but keeps coming back after container restart

    This is all that is in my conf. Yes, mp0 & mp1 show up, but they are bind mounts. This is a container that is not re-creating those empty folders: both are mounted the same way; the only difference is that the 2nd one was created without any resources for mounts. I added them after via the conf...
  18. subvol umount and removed but keeps coming back after container restart

    I did bind mounts, like a lot of my other containers, so there should only be a subvol-201-disk-0 on the name pool. I did originally set the same resource mounts and did have subvol-201-disk-0 on ssd and vault, but I switched them to bind mounts. I removed the old mounts and I removed those...