Search results

  1. Intel Nuc 13 Pro Thunderbolt Ring Network Ceph Cluster

    I'm running different hardware than you guys, the minisforum ms-01. I'm getting a lot of retransmits with iperf3. I started with some no-name thunderbolt4 cables, "connbull". I ordered and tried a Belkin TB4 cable and an OWC TB4 cable; no apparent change. I've followed @scyto's guide over on...
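    A quick way to quantify those retransmits is to run iperf3 one direction at a time over the thunderbolt link and watch the Retr column (a minimal sketch; the 10.0.0.x addresses are placeholders for whatever the mesh interfaces actually use):

      # on the receiving node
      iperf3 -s
      # on the sending node, 30 second run toward the peer's thunderbolt address
      iperf3 -c 10.0.0.2 -t 30
      # same pair, reverse direction, without swapping roles
      iperf3 -c 10.0.0.2 -t 30 -R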
  2. MS-01 HCI HA Ceph Sanity Check

    I considered that, but i couldn't find any A/E nvme drives in the US. I found one in Europe and i found a bunch of adapters. the usb storage has been working fine though. I'm probably going to put a google coral in the A/E slot eventually, but going to try openvino first with the v-igpu. I...
  3. MS-01 HCI HA Ceph Sanity Check

    My goal for this project was to decrease power utilization and gain redundancy versus my single r730xd, which has dual e5-2690v4 cpu's and 256 gb of ram. My r730xd's power floor, even with power tuning, is around 280w (that's with storage). Each ms-01 node has 96gb of ram, and 3 nvme drives...
  4. How do i reserve/prioritize resources for a critical VM/CT?

    A lot of my VM's and CT's are for fun home hosting stuff, but a few of them are important. My cluster has 3 nodes and while it has enough resources if all the nodes are up, memory gets a little tight if one of them goes down. There's enough, but it depends on how the VM's wind up being...
  5. MS-01 HCI HA Ceph Sanity Check

    I'm doing the homelab with a trio of ms-01s. Each host has 1x7.68 u.2, and 2x 3.84tb m.2. booting off of a USB to nvme adapter. Ceph network is a thunderbolt mesh with open fabric routing, and using the built in sfp ports lacp'ed to my switch for proxmox vms, 1 of the 2.5 ports dedicated for...
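    For reference, the LACP side of a setup like this usually boils down to a bond plus a bridge in /etc/network/interfaces (a rough sketch, assuming the SFP+ ports show up as enp2s0f0/enp2s0f1 and the switch has a matching 802.3ad port-channel; interface names and addresses are placeholders):

      auto bond0
      iface bond0 inet manual
          bond-slaves enp2s0f0 enp2s0f1
          bond-mode 802.3ad
          bond-miimon 100
          bond-xmit-hash-policy layer3+4

      auto vmbr0
      iface vmbr0 inet static
          address 192.168.1.11/24
          gateway 192.168.1.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0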
  6. Alternative backup method for containers with big disk images?

    Thank you for the 2 links! I'm glad to see progress is being made on the issue
  7. Alternative backup method for containers with big disk images?

    So i've seen a few posts on this, and it seems that as it currently stands, containers are hard to back up quickly. Per my understanding the backup client has to scan the entire "disk" every time you want to make an incremental backup, unlike a vm where it has the dirty-bitmap. I have a container that...
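    One alternative that comes up for this situation is backing up the large data directory on its own with proxmox-backup-client, separate from the container's small root filesystem, so the two can run on different schedules (a hedged sketch; the archive name, path, and repository string are placeholders and assume the client is installed wherever that directory is visible):

      proxmox-backup-client backup bigdata.pxar:/srv/bigdata \
          --repository root@pam@pbs.example.lan:store1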
  8. Proxmox/Ceph Cluster with 2 nodes now and a 3rd later?

    I did manage to get this working in a 2/2 configuration on ceph. I'm only using it for testing, but won't go into production until I get all three nodes. Good news is the new node is arriving today, along with most of the enterprise ssd's. Instead of a qdevice I edited corosync to give one...
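    The corosync tweak being described is presumably giving one node an extra vote so the two-node cluster stays quorate if the other node drops. A minimal sketch of the relevant nodelist entries in /etc/pve/corosync.conf (names and addresses are placeholders, config_version in the totem section has to be bumped when editing, and the extra vote should be reverted once the third node joins):

      nodelist {
        node {
          name: pve1
          nodeid: 1
          quorum_votes: 2
          ring0_addr: 192.168.1.11
        }
        node {
          name: pve2
          nodeid: 2
          quorum_votes: 1
          ring0_addr: 192.168.1.12
        }
      }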
  9. Proxmox/Ceph Cluster with 2 nodes now and a 3rd later?

    I'm building my first proxmox cluster and i have 2 of my 3 nodes, the third one should be arriving next month, it's delayed. Can i set up a cluster and ceph now? and then add the third node once i get it? This will be in my test lab, and not in production until the third node arrives. I've...
  10. So Truenas VM vs Cockpit Samba ?

    I have a similar config, but with an r730xd. And I'm passing the entire front 12-drive backplane's HBA to a VM running TrueNAS. I never even considered having proxmox with cockpit manage it because I migrated over from VMware and that's just the way I've been doing it for a decade. But I've...
  11. Install PBS on PVE Host?

    Ok, i used an unprivileged lxc container, set up bind mounts on both hosts, and with the sync job function this all works perfectly. Thank you all.
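    For anyone following along, the bind mount part is just an extra mount point in the container config (a rough sketch, assuming an unprivileged CT 105 and a host dataset at /tank/pbs-datastore; IDs and paths are placeholders):

      # on the PVE host
      pct set 105 -mp0 /tank/pbs-datastore,mp=/mnt/datastore
      # which lands in /etc/pve/lxc/105.conf as:
      #   mp0: /tank/pbs-datastore,mp=/mnt/datastore
      # for an unprivileged CT the host directory generally needs ownership inside the mapped uid range, e.g.
      chown -R 100000:100000 /tank/pbs-datastore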
  12. Install PBS on PVE Host?

    that makes sense, i just installed pbs in a vm to get an idea of how it works. So it looks like data in the ".chunks" dir is the actual backup, and data in the vm-id folders are manifests? i'm working on setting it up in an LXC with bind mounts to test it. I should just be able to cronjob an...
  13. Install PBS on PVE Host?

    with a bind mount i assume that pbs would recognize that there is an underlying zfs file system? since it can't directly control the disks, i assume garbage collection can't trim the file system directly?
  14. Install PBS on PVE Host?

    This is for a home lab situation. I have 2 locations, each with a single PVE host and 1 gig fiber. I'd like to install PBS on each PVE host and backup both locally, and over the wan. I'm sure this isn't a recommended solution, but it would satisfy my needs for the time being as I don't want...
  15. vm with pci passthrough fails with "failed: got timeout"

    so I installed the nvme drives, moved the datastores over and the system is no longer timing out trying to start the vm. the single drive i was using as a crutch until the nvme adapter came in was probably a bottleneck, or maybe just rebooting the host fixed it. EDIT: I didn't re-paste the...
  16. vm with pci passthrough fails with "failed: got timeout"

    memory is evenly distributed already. i didn't have numa on, i just enabled it and rebooted all the VM's. Do i just need to enable numa in proxmox? or are there any config settings i need to make on each guest os? specifically freebsd and ubuntu right now. Eventually i'll be running windows...
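    On the Proxmox side, enabling it per VM is the main step; FreeBSD and Ubuntu should just see the NUMA topology without extra guest configuration (a minimal sketch, assuming VM 101 with 2 virtual sockets; the IDs are placeholders):

      qm set 101 --numa 1 --sockets 2
      # the VM needs a full stop/start (not just a reboot from inside the guest) to pick up the change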
  17. vm with pci passthrough fails with "failed: got timeout"

    tried it again with 32gb of ram

      root@pve1:~# time /usr/bin/kvm -id 101 -name TrueNAS -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon...
  18. vm with pci passthrough fails with "failed: got timeout"

    Ok, i tried running the kvm command manually as i saw suggested in a post i found from 2016.

      root@pve1:~# time /usr/bin/kvm -id 101 -name TrueNAS -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev...
  19. vm with pci passthrough fails with "failed: got timeout"

    i just tried with 4gb and the same issue. i have tried with ballooning on and off.
  20. vm with pci passthrough fails with "failed: got timeout"

    log from journalctl as vm is starting up

      Nov 06 10:38:28 pve1 qm[3819359]: <root@pam> starting task UPID:pve1:003A47A5:00E817AD:6367D4F4:qmstart:101:root@pam:
      Nov 06 10:38:28 pve1 qm[3819429]: start VM 101: UPID:pve1:003A47A5:00E817AD:6367D4F4:qmstart:101:root@pam:
      Nov 06 10:38:28 pve1...