Search results

  1. Alternative backup method for containers with big disk images?

    So I've seen a few posts on this, and as it currently stands containers are hard to back up quickly. Per my understanding, the backup client has to scan the entire "disk" every time you want to make an incremental backup, unlike a VM, which has a dirty bitmap. I have a container that...
  2. Proxmox/Ceph Cluster with 2 nodes now and a 3rd later?

    I did manage to get this working in a 2/2 configuration on Ceph. I'm only using it for testing, but won't go into production until I get all three nodes. Good news is the new node is arriving today, along with most of the enterprise SSDs. Instead of a qdevice I edited corosync to give one...
  3. Proxmox/Ceph Cluster with 2 nodes now and a 3rd later?

    I'm building my first Proxmox cluster and I have 2 of my 3 nodes; the third one is delayed and should be arriving next month. Can I set up a cluster and Ceph now, and then add the third node once I get it? This will be in my test lab, and not in production until the third node arrives. I've...
  4. So Truenas VM vs Cockpit Samba ?

    I have a similar config, but with an R730xd, and I'm passing the entire front 12-drive backplane's HBA to a VM running TrueNAS. I never even considered having Proxmox with Cockpit manage it, because I migrated over from VMware and that's just the way I've been doing it for a decade. But I've...
  5. Install PBS on PVE Host?

    OK, I used an unprivileged LXC container, set up bind mounts on both hosts, and used the sync job function; this all works perfectly. Thank you all.
  6. Install PBS on PVE Host?

    That makes sense. I just installed PBS in a VM to get an idea of how it works. So it looks like the data in the ".chunks" dir is the actual backup, and the data in the VM-ID folders are manifests? I'm working on setting it up in an LXC with bind mounts to test it. I should just be able to cronjob an...
  7. Install PBS on PVE Host?

    With a bind mount, I assume PBS would recognize that there is an underlying ZFS file system? Since it can't directly control the disks, I assume garbage collection can't trim the file system directly?
  8. Install PBS on PVE Host?

    This is for a home lab situation. I have 2 locations, each with a single PVE host and 1-gig fiber. I'd like to install PBS on each PVE host and back up both locally and over the WAN. I'm sure this isn't a recommended solution, but it would satisfy my needs for the time being, as I don't want...
  9. vm with pci passthrough fails with "failed: got timeout"

    So I installed the NVMe drives, moved the datastores over, and the system is no longer timing out trying to start the VM. The single drive I was using as a crutch until the NVMe adapter came in was probably a bottleneck, or maybe just rebooting the host fixed it. EDIT: I didn't re-paste the...
  10. vm with pci passthrough fails with "failed: got timeout"

    Memory is evenly distributed already. I didn't have NUMA on; I just enabled it and rebooted all the VMs. Do I just need to enable NUMA in Proxmox, or are there any config settings I need to make on each guest OS? Specifically FreeBSD and Ubuntu right now. Eventually I'll be running Windows...
  11. vm with pci passthrough fails with "failed: got timeout"

    Tried it again with 32 GB of RAM: root@pve1:~# time /usr/bin/kvm -id 101 -name TrueNAS -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon...
  12. vm with pci passthrough fails with "failed: got timeout"

    OK, I tried running the kvm command manually, as suggested in a post I found from 2016: root@pve1:~# time /usr/bin/kvm -id 101 -name TrueNAS -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev...
  13. vm with pci passthrough fails with "failed: got timeout"

    I just tried with 4 GB and hit the same issue. I have tried with ballooning both on and off.
  14. vm with pci passthrough fails with "failed: got timeout"

    Log from journalctl as the VM is starting up: Nov 06 10:38:28 pve1 qm[3819359]: <root@pam> starting task UPID:pve1:003A47A5:00E817AD:6367D4F4:qmstart:101:root@pam: Nov 06 10:38:28 pve1 qm[3819429]: start VM 101: UPID:pve1:003A47A5:00E817AD:6367D4F4:qmstart:101:root@pam: Nov 06 10:38:28 pve1...
  15. vm with pci passthrough fails with "failed: got timeout"

    I have a VM with an HBA doing PCI passthrough to TrueNAS. I started having issues after I added a second HBA with an identical chipset but did not want passthrough for the second HBA. I did some trickery binding the bus address to the vfio driver per this post, and I created the referenced...
  16. migrated esxi VM Boots to blank screen

    I have an older Photon OS VM (VMware's Linux distribution for Docker hosts) running on ESXi. I plan on converting it over to Ubuntu Server, but all the Docker configs are pretty old and would not come up under Ubuntu (lots of errors about deprecated commands), so I decided to migrate the machine...
  17. pass through 1 hba but not the other when they both have the same device id?

    Well, I solved my own problem. After figuring out the right keywords, I found a post from someone with a similar issue. The instructions seem to have worked: https://forum.proxmox.com/threads/pci-passthrough-selection-with-identical-devices.63042/post-287937
  18. pass through 1 hba but not the other when they both have the same device id?

    I'm trying to figure out how to pass through one of my two HBAs, but not the other. I do have passthrough working correctly on HBA 1: 03:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097] (rev 02) DeviceName: Integrated RAID...
  19. Help with stack trace! iommu related

    The system is VERY sluggish again, but nothing is running: load averages are high, no iowait. apache2 was running and eating about 50% CPU, so I killed it. I don't understand how the system can be idle yet have high load averages. ps sorted by CPU usage: # ps -e -o pcpu,cpu,nice,state,cputime,args...
  20. Help with stack trace! iommu related

    root@proxmox:~# pveversion -v pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46) running kernel: 2.6.32-14-pve proxmox-ve-2.6.32: 2.1-74 pve-kernel-2.6.32-11-pve: 2.6.32-66 pve-kernel-2.6.32-14-pve: 2.6.32-74 lvm2: 2.02.95-1pve2 clvm: 2.02.95-1pve2 corosync-pve: 1.4.3-1 openais-pve: 1.1.4-2...
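Several of the passthrough threads above (results 15, 17, and 18) revolve around the same technique: when two PCI cards share an identical vendor/device ID, you can't select one of them with a vendor:device match, so instead you bind the specific bus address to `vfio-pci` via the per-device `driver_override` sysfs attribute. A minimal sketch of that approach, assuming a hypothetical bus address `0000:03:00.0` and a Debian-style initramfs hook; the address and script location are illustrative examples, not values taken from the posts above:

```shell
#!/bin/sh
# Example early-boot script (e.g. under /etc/initramfs-tools/scripts/init-top/,
# path is an example). Marks ONE PCI device, by bus address, for vfio-pci,
# so an identical second card keeps its normal kernel driver.

DEVS="0000:03:00.0"   # hypothetical address of the HBA to pass through

for DEV in $DEVS; do
    # driver_override restricts which driver may claim this single device,
    # without matching every card that shares the same vendor:device ID.
    echo "vfio-pci" > "/sys/bus/pci/devices/$DEV/driver_override"
done

# Load vfio-pci after setting the override so it claims only the marked device.
modprobe -i vfio-pci
```

Because `driver_override` is per-device rather than per-ID, the second HBA with the identical chipset stays on its regular storage driver; running the script from the initramfs ensures the override is in place before that driver would otherwise grab both cards.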
