Recent content by menelaostrik

  1. dead ssd drive on zfs points to initramfs

     Omg, you're totally right. Any chance I could restore from this? root@hyperlan:~# zfs list -t snapshot | grep 101 rpool/data/vm-101-disk-0@autosnap_2022-10-03_09:42:51_monthly 855M - 89.9G - rpool/data/vm-101-disk-0@autosnap_2022-10-05_09:55:18_weekly 454M - 89.9G -...
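Since the zvol still has autosnap snapshots, one possible recovery path (a sketch only, not verified against this setup) is to roll the disk back to a snapshot in place, or to clone the snapshot to a new zvol so the current broken state is kept around. The snapshot names are taken from the listing above; the clone name is an example.

```shell
# Sketch: recover VM 101's disk from one of the autosnap snapshots above.
# WARNING: rollback discards everything written after the snapshot, and
# with -r it also destroys any newer snapshots -- stop VM 101 first.
qm stop 101

# Option A: roll the zvol back in place to the weekly snapshot
zfs rollback -r rpool/data/vm-101-disk-0@autosnap_2022-10-05_09:55:18_weekly

# Option B: keep the current state and clone the snapshot to a new zvol
# (the name vm-101-disk-0-restore is just an example)
zfs clone rpool/data/vm-101-disk-0@autosnap_2022-10-05_09:55:18_weekly \
    rpool/data/vm-101-disk-0-restore
```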
  2. dead ssd drive on zfs points to initramfs

     It's a mirror. I don't know why in initramfs it's shown like that. I have it mounted read-only, and the zpool status output is: https://prnt.sc/vKzNL0nUFALc If just attaching a new drive isn't possible, is there a way of migrating the VMs to a new PVE over the network (Samba maybe?)
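If the pool can at least be imported read-only, migrating the VM disks to a new PVE host doesn't need Samba; piping `zfs send` over SSH is the usual route. A sketch, where the hostname `new-pve` and the target pool layout are assumptions:

```shell
# Sketch: copy a VM disk from the degraded pool to a fresh PVE host.
# On a read-only pool you cannot create a new snapshot, so reuse an
# existing autosnap instead of the @migrate example below.
zfs snapshot rpool/data/vm-101-disk-0@migrate
zfs send rpool/data/vm-101-disk-0@migrate | \
    ssh root@new-pve zfs receive rpool/data/vm-101-disk-0
```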
  3. dead ssd drive on zfs points to initramfs

     Yes, it's degraded. Please check out this screenshot: https://prnt.sc/91X8xc3vG1SA
  4. dead ssd drive on zfs points to initramfs

     I have an SSD drive with a failed controller which was part of a mirrored ZFS vdev. Now the host boots into initramfs and says that it can't import pool "rpool". When I issue zpool import -f rpool it says that there's no such pool or dataset. However, in "zpool import" I can see that rpool has...
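From the initramfs shell, the usual sequence for this situation (a sketch; device paths and pool state are assumptions) is to scan the by-id device paths explicitly, since a plain `zpool import -f rpool` can miss the pool when the cachefile still references the dead disk:

```shell
# Sketch: importing a degraded rpool from the initramfs shell.
zpool import -d /dev/disk/by-id          # list importable pools and their state
zpool import -d /dev/disk/by-id -f -N rpool   # force-import without mounting datasets
exit                                     # leave initramfs and continue booting
```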
  5. cannot activate storage

     Hi, suddenly I started getting: TASK ERROR: could not activate storage 'backup1': backup1: error fetching datastores - 500 Server closed connection without sending any data back and I can't figure out what might be causing it. When issuing: proxmox-backup-client version --repository...
  6. After migrating to another node there's no internet connection

     All VMs are configured on vmbr0. Here is the config: root@hyper:/etc/network# qm config 301 agent: 1,fstrim_cloned_disks=1 boot: order=scsi0 cores: 4 cpu: host ide2: none,media=cdrom memory: 8000 name: centos8 net0: virtio=4A:09:4F:42:82:CA,bridge=vmbr0 numa: 0 onboot: 1 ostype: l26 scsi0...
  7. After migrating to another node there's no internet connection

     Hi, I don't use any LXC, just a few VMs. Yes, I have double-checked that the MACs are unique; moreover, the MAC address doesn't change when I migrate, it stays the same, and it was working flawlessly before the migration. Now, for the last question, I noticed something strange (strange like I don't...
  8. After migrating to another node there's no internet connection

     Hi everyone, I've been facing a rather bizarre issue on PVE 6.4-15. I have a small 2-node cluster, and every time I migrate a VM to the other node, it doesn't have network connectivity anymore. In order to solve the issue I have to remove the NICs and re-add them. Afterwards I have to modify...
  9. abnormal cpu usage

     Sure, here is the VM config: root@hyper2:/etc/pve/qemu-server# cat 500.conf agent: 1 boot: order=scsi0;ide2;net0 cores: 16 cpu: host ide2: none,media=cdrom memory: 55000 name: Cpanel net0: virtio=D2:BE:0B:96:8A:22,bridge=vmbr0,firewall=1 numa: 0 onboot: 1 ostype: l26 scsi0...
  10. [SOLVED] VMs can't ping gateway on a certain node

     Good idea! Unfortunately, no. I have found the culprit to be the migration process from another node. While it keeps the HWADDRESS of the virtio NIC during the migration, I have to remove the NIC and add it again every time I migrate, and of course changing the connecting NIC...
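The remove-and-re-add workaround described above can be scripted with `qm set` instead of the GUI. A sketch, where the VMID 301 and the MAC/bridge values are placeholders taken from the other thread:

```shell
# Sketch: detach and re-attach a VM's NIC after migration.
qm set 301 --delete net0                                   # remove the NIC
qm set 301 --net0 virtio=4A:09:4F:42:82:CA,bridge=vmbr0    # re-add with the same MAC
```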
  11. abnormal cpu usage

     I have a host with an E5-2667 v4 CPU (8 cores / 16 threads) running only one VM, and I have assigned all threads to that VM. I can see ~900% CPU usage on the kvm process (all other processes on the host don't consume more than 50%), while at the same time the VM is at ~40-45%...
  12. [SOLVED] VMs can't ping gateway on a certain node

     Hi, I've been facing this issue and have run out of ideas on what the problem might be. The host can access the internet just fine and everything works. However, within the VM I can only ping the host and nothing else. The firewall within Proxmox is disabled everywhere. Here is my config from the...
  13. Can't empty free space

     Hi, I have pruned most of the backups (out of 100 I kept 10) but I can't free up the space on the storage. Pruning ran without errors. Any clues? Thank you in advance, Menelos
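On Proxmox Backup Server, pruning only removes the backup snapshots' indexes; the underlying data chunks are reclaimed by a separate garbage-collection run, which is the likely reason the space doesn't come back. A sketch, where the datastore name `backup1` is an assumption carried over from the "cannot activate storage" thread:

```shell
# Sketch: pruning removes snapshots, but disk space is only reclaimed by GC.
# Note: chunks must also be older than roughly 24h before GC will delete them.
proxmox-backup-manager garbage-collection start backup1
proxmox-backup-manager garbage-collection status backup1
```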
  14. guest hangs due to qemu-guest-agent

     I managed to fix the issue and wanted to share the solution in case another member faces the same problem. The server is running cPanel, and there's a function that hardens the /tmp partition; you can find more here...
