Recent content by j4ys0n

  1. Mount ZFS dataset with legacy mountpoint to LXC container

    @frank0366 can you describe how to add the ZFS dataset as a mount point in the meantime? It would be greatly appreciated!
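
    (For anyone finding this later - a minimal sketch of adding a host path to an LXC as a bind mount point; the container ID 100 and the host path /tank/data are placeholders, not values from this thread:)

      pct set 100 -mp0 /tank/data,mp=/mnt/data

    or, equivalently, add this line to /etc/pve/lxc/100.conf:

      mp0: /tank/data,mp=/mnt/data
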
  2. Mellanox ConnectX-5 working on one machine but not another

    After some initial switch issues, I was able to get one NIC working. I then installed the other NIC in a separate machine and followed the same steps, but only the first NIC is working correctly. Details below. Anyone know what the issue could be? Proxmox 7.4 on both machines. I'm able to assign an...
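
    (A few generic checks that can help narrow down where the second card stops working; these are standard commands, not steps quoted from the thread:)

      lspci -nnk | grep -iA3 mellanox   # is the card detected, and is mlx5_core bound to it?
      dmesg | grep -i mlx               # driver/firmware errors during initialization
      ip -br link                       # does the interface appear, and is the link UP?
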
  3. Create a VM copy from a VM snapshot

    I take a slightly different approach, but I think it will result in something similar to what you're looking for. I use LXCs instead of VMs so that I can mount ZFS datasets from the host directly into the LXC. The result is that you get nearly native performance of the underlying storage...
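
    (A rough sketch of that idea applied to the original question - snapshot the dataset, clone the snapshot, and bind the clone into a second container; the dataset names and container ID below are placeholders:)

      zfs snapshot tank/data@copy
      zfs clone tank/data@copy tank/data-copy
      pct set 101 -mp0 /tank/data-copy,mp=/mnt/data
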
  4. [SOLVED] Failed to run lxc.hook.pre-start

    Just recording this in case anyone else has it happen. I got the same error in the UI, then I ran this: lxc-start -n 178 -F -lDEBUG -o lxc-178.log. Inspecting the log, I saw a "disk quota exceeded" message, i.e. the provisioned disk was full. I expanded the attached root storage and it started right up.
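
    (For reference, the root disk can also be grown from the CLI; 178 is the container ID from the post, and the +8G increment is just an example:)

      pct resize 178 rootfs +8G
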
  5. [SOLVED] ssh error transferring to new node in cluster

    I got it - I had to make sure the old host key was gone from all of the known_hosts files on all of the nodes, which I thought I had done. I ran ssh-keygen -f "/etc/pve/priv/known_hosts" -R "starhawk" and ssh-keygen -f "/root/.ssh/known_hosts" -R "starhawk" on each node, then connected manually with...
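
    (Putting the steps in one place - the node name starhawk and the IP are from this thread; the ssh line is an assumption about what the truncated "connected manually" step looked like, and pvecm updatecerts is a generic extra, not from the thread:)

      ssh-keygen -f /etc/pve/priv/known_hosts -R starhawk
      ssh-keygen -f /root/.ssh/known_hosts -R starhawk
      ssh -o HostKeyAlias=starhawk root@10.10.1.17 /bin/true   # accept the new host key (assumed step)
      pvecm updatecerts                                        # optional: redistribute cluster keys/certs
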
  6. [SOLVED] ssh error transferring to new node in cluster

    @Stoiko Ivanov any idea what the problem is? The known_hosts files seem to be fine.
  7. [SOLVED] ssh error transferring to new node in cluster

    Yes:
      2022-05-13 00:14:00 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=starhawk' root@10.10.1.17 /bin/true
      2022-05-13 00:14:00 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      2022-05-13 00:14:00 @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
      2022-05-13...
  8. [SOLVED] ssh error transferring to new node in cluster

    The error only seems to occur when I'm initiating a transfer of a VM or LXC from one server to another. I get this message:
      2022-05-13 00:14:00 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
      2022-05-13 00:14:00 @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @...
  9. Grub update failed during PVE 6.4 -> 7.1 upgrade - switch to UEFI boot?

    Alternatively - I'm trying to figure out if I should switch to UEFI boot. I'm not totally sure why this system uses legacy BIOS boot, as it's a newer board and its BIOS is UEFI. Maybe because the boot drives are SATA and not NVMe?
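
    (Two quick, generic checks for how the running system actually booted - neither is quoted from this thread:)

      ls /sys/firmware/efi   # directory exists only when the system booted in UEFI mode
      efibootmgr -v          # only works under UEFI; errors out on a legacy BIOS boot
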
  10. Grub update failed during PVE 6.4 -> 7.1 upgrade - switch to UEFI boot?

    I have not seen that - thanks for sending. Going through the guide: I'm pretty sure I started this server on 6.4, so proxmox-boot-tool is set up and looks to be configured. (At least the purchase dates line up to indicate that 6.4 was available before I purchased the hardware.) I haven't run...
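
    (For anyone checking the same thing, proxmox-boot-tool can report what it is currently managing:)

      proxmox-boot-tool status
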
  11. Grub update failed during PVE 6.4 -> 7.1 upgrade - switch to UEFI boot?

    I've upgraded 2 of the 4 nodes in my cluster already without issues, but on upgrading this current node, grub failed to update. Here are the relevant output lines from when the failure occurred:
      Setting up pve-docs (7.1-2) ...
      Setting up libpython2.7-stdlib:amd64 (2.7.18-8) ...
      Setting up...
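
    (A generic way to let the interrupted upgrade finish once the underlying grub problem is fixed - standard dpkg/apt recovery, not advice quoted from this thread:)

      dpkg --configure -a         # finish packages left half-configured by the failed run
      apt -f install              # resolve anything still broken
      proxmox-boot-tool refresh   # if the node boots via proxmox-boot-tool; otherwise update-grub
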
  12. Live-Migration almost freezes Targetnode

    I should have added more info to the post. The server isn't completely new - it's some hardware I had lying around that I got a "new" (also old) motherboard for: an i7 7700K, 32GB of 2400MHz memory, 4 Kingston SSDs in RAID10, a 10G NIC and an ASRock Rack board. It's for some services I wanted to...
  13. Live-Migration almost freezes Targetnode

    Same thing here. I thought my new server crashed, until I logged into another node and saw the IO delay spike. Are individuals able to contribute to the Proxmox source code? I'd love to fix things, add features :)
  14. The server certificate /etc/pve/local/pve-ssl.pem is not yet active

    Yep - I was just about to follow up on that. Networking was the issue. All of the nodes are on a 10G switch, so on that switch I disabled IGMP snooping, and on the nodes I ran echo 0 >/sys/class/net/vmbr0/bridge/multicast_snooping and then on the existing node ran: service pve-cluster restart...
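
    (Collected in one place - the echo line and the pve-cluster restart are from the post; the corosync restart is my assumption about the truncated part:)

      # on each node: disable multicast snooping on the bridge
      echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping
      # on the existing node:
      service pve-cluster restart
      service corosync restart   # assumed continuation of the truncated command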