Recent content by Quintin Cardinal

  1. [SOLVED] [Warning] Latest Patch just broke all my WINDOWS vms. (6.3-4) [Patch Inside] (See #53)

    I've had multiple instances on multiple hosts across different sites where Windows VMs treated their network interface as a new one, and therefore lost their static IP address configuration. Nothing else about the new devices causes issues, but the NIC does if you're statically configured.
  2. PVE 6 VM boots single thread

    I'd forgotten to write back for some time. It's definitely a Windows problem, updates work at a decent speed but running DISM operations takes hours. I've got an instance of Windows 10 running on PVE 5.4 doing this as well as the Server 2019 on PVE 6.1 referenced above. Just how things are now I...
  3. PVE 6 VM boots single thread

    I have a snapshot of the VM; unfortunately we are running ZFS and cannot create a clone from a snapshot. I found another forum post that had a workaround: creating a zfs clone of the snapshot, creating a VM (not installing an OS), and dd if=*clone* of=/dev/zvol/rpool/data/*vm destination disk*...
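
    A sketch of that workaround with hypothetical names (vm-100-disk-0, the "pre-update" snapshot, and VM 101 are placeholders, not values from the thread):

    ```shell
    # Clone the snapshot into a new dataset (a clone is a writable view of the snapshot's data):
    zfs clone rpool/data/vm-100-disk-0@pre-update rpool/data/vm-100-rescue
    # Create an empty VM (ID 101) without installing an OS, so PVE allocates a
    # destination zvol, then block-copy the clone into it:
    dd if=/dev/zvol/rpool/data/vm-100-rescue of=/dev/zvol/rpool/data/vm-101-disk-0 bs=1M status=progress
    ```
    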
  4. PVE 6 VM boots single thread

    Hello, We are experiencing the same extremely slow boot with this server again. CPU is pinned to 100% for the process, using one full core. Boot is taking ~15 minutes so far, I'm looking through all the logs that I can and not finding anything useful. Thanks
  5. PVE 6 VM boots single thread

    pveversion -v
    proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
    pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
    pve-kernel-5.3: 6.1-5
    pve-kernel-helper: 6.1-5
    pve-kernel-5.0: 6.0-11
    pve-kernel-5.3.18-2-pve: 5.3.18-2
    pve-kernel-5.3.13-2-pve: 5.3.13-2
    pve-kernel-5.0.21-5-pve: 5.0.21-10...
  6. PVE 6 VM boots single thread

    Hey all, Honestly I'm burned out and have not done much investigation into this yet. I have a couple month old installation on PVE 6 at a customer location. Tonight I did updates, which involved Windows updates on a single 2019 VM followed by guest OS shutdown, PVE updates by "apt-get update...
  7. Strange harddisk activity

    When you get storage involved, there are many different things at play. Are you using ZFS, or LVM on ext4 or xfs? If using ZFS, are you using the M.2 SSD for ZIL/L2ARC? Does your ARC have enough RAM? A hardware RAID controller? HBA controller? Enterprise disks? A hardware RAID controller could be...
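
    If it is ZFS, a few of those questions can be answered from the shell (a rough sketch using the stock OpenZFS tools that ship with PVE):

    ```shell
    zpool status              # pool layout, plus any attached log (ZIL) or cache (L2ARC) devices
    arc_summary | head -n 40  # ARC size, target size, and hit ratios
    # Raw ARC size vs. cap, in bytes:
    awk '$1=="size" || $1=="c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
    ```
    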
  8. FreeNAS 11.2u4 with Proxmox 5.4 iSCSI

    That's interesting, I run the same ZFS over iSCSI patch in a completely virtual environment (FreeNAS, 2x PVE hosts, and an LXC container for corosync-qnetd running on their own network all virtualized on my single node PVE on ZFS physical host) and I didn't have any speed issues at all. A basic...
  9. Running Commands at Container Startup

    Depending on the situation, I've always liked creating systemd services or /etc/init.d/ scripts. For machines that just need to run a script on startup, I create a script in /etc/init.d/ and symlink it in /etc/rc2.d to get it to run on startup. For machines where I would rather have a daemon, I...
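
    The /etc/init.d/ variant, sketched end to end (the script name "myscript" and its contents are made up for illustration):

    ```shell
    # Drop a plain sysvinit-style script into /etc/init.d/:
    cat > /etc/init.d/myscript <<'EOF'
    #!/bin/sh
    # Runs once at boot; replace with whatever the container actually needs to do.
    echo "started at $(date)" >> /var/log/startup.log
    EOF
    chmod +x /etc/init.d/myscript
    # Symlink into runlevel 2; the S99 prefix makes it start late in the sequence:
    mkdir -p /etc/rc2.d
    ln -sf /etc/init.d/myscript /etc/rc2.d/S99myscript
    ```
    
    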
  10. ZFS config recommendation

    You may need to put the RAID controller into a non-RAID mode where it passes the disks straight to the OS; HBA is what I've seen this mode called on Dell servers. Whatever you do, you don't want to make a RAID 0 for each disk, as that will prevent ZFS from seeing the hard drive directly. Proxmox...
  11. Advice needed

    You can pass devices through to VMs. As for the first part (using the physical Proxmox host while passing the GPU through to the guest, getting video output from the GPU to a monitor to see the guest OS, and passing the keyboard/mouse through to the guest OS), I'm not sure...
  12. Combining a Proxmox and a Freenas installation in home setup

    It is thin provisioned! The raw file, if you view it in your VM storage, shows the configured disk size, but it only uses what the container uses. Most of my LXC containers (like my BIND DNS servers) are less than 1GB in size. Space can be added to a disk while the container is running, and so can...
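
    For the resize part, the PVE CLI does this online (the IDs, disk names, and sizes below are examples, not values from the thread):

    ```shell
    pct resize 101 rootfs +10G   # grow a running container's root disk by 10 GiB
    qm resize 100 scsi0 +10G     # the VM equivalent
    ```
    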
  13. Combining a Proxmox and a Freenas installation in home setup

    LXC containers are fun, and I much prefer them over Docker. One acts like its own VM, but really it works kind of like a BSD jail. It has a MAC address, an IP address, and an OS, but no kernel; it uses the Proxmox host kernel. This means extremely low overhead, I'm talking like 20MB of RAM and...
  14. Combining a Proxmox and a Freenas installation in home setup

    Proxmox can connect to iSCSI, sure, but you would have to use plain iSCSI to a zvol on the FreeNAS box. This patch allows you to connect ZFS over iSCSI to the entire pool, and your VMs' disks would each be a separate zvol on the zpool. It supports snapshots, HA, all the cool-guy stuff. If you...
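
    For reference, a ZFS over iSCSI entry in /etc/pve/storage.cfg looks roughly like this (the pool name, portal IP, and target IQN are illustrative; the "freenas" provider name comes from the patch, while the stock providers are comstar/istgt/iet/LIO):

    ```
    zfs: freenas-zfs
            pool tank
            portal 192.168.1.10
            target iqn.2005-10.org.freenas.ctl:proxmox
            iscsiprovider freenas
            content images
            sparse 1
    ```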
  15. Combining a Proxmox and a Freenas installation in home setup

    I'd like to advocate for a single node Proxmox on ZFS setup. I took my old gaming PC hardware, bought 4x 1TB spinning disks, threw my old 256GB SATA SSD in for ZIL and L2ARC (I know, single disk no RAID isn't recommended for ZIL but this is not production, just my home server). In a ZFS RAID10...
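
    That pool layout would be created along these lines (device names are placeholders; use stable /dev/disk/by-id paths on real hardware):

    ```shell
    # ZFS "RAID10": two mirrored pairs, striped together
    zpool create -o ashift=12 tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd
    # Split the single SSD into two partitions beforehand, then:
    zpool add tank log   /dev/sde1   # small partition as SLOG (ZIL)
    zpool add tank cache /dev/sde2   # remainder as L2ARC
    ```
    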
