Search results

  1. Bulk Hibernate?

    I know it's quite an old thread and I didn't comment at the time, but from time to time I keep finding myself at the point where this bulk hibernation option would be very handy to have.
  2. [SOLVED] Proxmox 8 - systemd-shutdown[1] - Failed to get MD_LEVEL property

    Sure, here it is, and indeed it contains errors=remount-ro, but only for md2:

        # / was on /dev/md2 during installation
        UUID=45380162-b151-4e50-9af3-a9a549ca1757 / ext4 discard,noatime,nodiratime,relatime,errors=remount-ro 0 1
        # /boot was on /dev/md0 during installation...
  3. [SOLVED] Proxmox 8 - systemd-shutdown[1] - Failed to get MD_LEVEL property

    After the update it looked like the problem was solved on the first reboot, but after extensive testing, it's not. I still see these errors:

        54.882310] watchdog: watchdog0: watchdog did not stop!
        55.047108] systemd-shutdown[1]: Could not stop MD /dev/md1: No such device
        55.047134] watchdog...
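    A generic cross-check, not taken from the thread itself: before rebooting, you can list which MD arrays the kernel actually knows about, which shows whether a device like /dev/md1 still exists at all:

        cat /proc/mdstat          # active arrays (md0, md1, md2, ...)
        mdadm --detail /dev/md1   # fails if the array no longer exists
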
  4. Wrong zpool version

    Well... somehow it looks like I was missing the zfsutils-linux and zfs-zed packages... Installing them fixed the problem (removing that zfs-fuse service too). Of course, I had to import the pool, but all went fine...
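    For reference, a minimal sketch of the repair steps this post describes, assuming the package names given above and a placeholder pool name:

        apt install zfsutils-linux zfs-zed   # the packages that were missing
        apt remove zfs-fuse                  # drop the conflicting fuse service
        zpool import <poolname>              # re-import the pool afterwards
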
  5. Wrong zpool version

    Hi, I have quite a weird problem here... I was running PVE 6 and decided to upgrade to 7 and then, if everything was OK, to 8. Sounded like a plan. Before starting, I was checking things, and the first thing that was wrong was that the zpool command complained about /etc/init.d/zfs-fuse...
  6. [SOLVED] Proxmox 8 - systemd-shutdown[1] - Failed to get MD_LEVEL property

    I have the same problem after upgrading from PVE 7 to 8, and editing /usr/lib/systemd/system-shutdown/mdadm.shutdown didn't help...
  7. Little bug in backup interface?

    Hi, after upgrading to 6.2-9 and 6.2-10, the backup UI in PVE shows the backups from other VMs with a higher ID but the same prefix. Can anyone confirm this? Dan
  8. VM disks get corrupted on thin ZFS storage

    Not really... nothing related to a running VM. But I see some things like: I saw 6 segfaults during the current uptime of 19 days. And also: I think these latest messages are related to repairs of the VM disks. No other errors... No, they are connected to the AHCI SATA ports on the motherboard.
  9. VM disks get corrupted on thin ZFS storage

    It doesn't look like it... Also, the VM disks are not full, 50% at maximum.

        NAME    USED   AVAIL  REFER  MOUNTPOINT
        vmdata  1.23T  2.28T  31.5G  /vmdata
  10. VM disks get corrupted on thin ZFS storage

    Hi, I have a server running the latest PVE with ZFS RAIDZ storage on SSDs. From time to time, some KVM VMs get a read-only filesystem and need to be rebooted and the filesystem repaired. It doesn't look like the problem is specific to a certain OS, because it has happened on CentOS, Ubuntu, Debian... The...
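    The repair cycle described here would, for an ext4 guest, typically look like the following sketch (the device name is a placeholder; this is not taken from the thread):

        # inside the affected VM, once the root filesystem has flipped to read-only:
        reboot
        # then, from a rescue environment or initramfs shell:
        fsck.ext4 -f /dev/sda1
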
  11. CentOS 8 on LXC and the output of getconf

    Hi, As you know, LiteSpeed has licenses for 2 and 8 GB of RAM. It uses getconf to read the available memory. There has been no problem with any of these licenses on CentOS 7, but now, with the availability of CentOS 8, I decided to give it a try on my dev server... and here comes the problem...
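    For context, this is roughly how memory can be read with getconf; whether LiteSpeed combines exactly these POSIX variables is an assumption:

        getconf _PHYS_PAGES   # number of physical memory pages
        getconf PAGE_SIZE     # page size in bytes
        # total memory in bytes = _PHYS_PAGES * PAGE_SIZE
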
  12. Issue with outbound traffic on VM

    One thing I just noticed with PVE 6 (latest version, upgraded from PVE 5): on container creation, if "MAC address" is left set to auto, it will generate a MAC when the container is running, but when you activate the firewall on that container, even if you define output rules, it won't have any...
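    A hedged workaround sketch (the VMID and MAC below are placeholders): set a fixed hwaddr on the container NIC up front, so the address already exists when the firewall rules are evaluated:

        pct set <vmid> -net0 name=eth0,bridge=vmbr0,firewall=1,hwaddr=DE:AD:BE:EF:00:01
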
  13. LXC Ubuntu 14.04 template

    It's strange that the only problem was with Ubuntu 14.04, but after a reboot it looks like everything is working as expected...
  14. LXC Ubuntu 14.04 template

    I pressed Enter a lot and the cursor goes down to the next line with no output. The network interface is not coming up. I think it hangs on something during boot, way before network setup. 118 is an Ubuntu 12.04 container that boots successfully, and 116 is the 14.04 one. If I "pct enter 116"...
  15. LXC Ubuntu 14.04 template

    I re-uploaded that template and tested, with the same result. The console shows the "Connected" message, but there's no output. It's quite strange. I also tested, with the same result, on two other PVE nodes I have access to. All of them were PVE 5.2 and 5.3 installations updated to 5.4.
  16. LXC Ubuntu 14.04 template

    I've tested with all the Ubuntu templates from http://download.proxmox.com/images/system/ and the only one I have this problem with is 14.04.
  17. LXC Ubuntu 14.04 template

    Sure, here it is:

        root@hn01:~# pct config 116
        arch: amd64
        cores: 1
        hostname: ubuntu.example.com
        memory: 1024
        net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=1A:89:B8:0B:CB:77,ip=10.10.5.5/24,ip6=auto,type=veth
        ostype: ubuntu
        rootfs: vmdata:116/vm-116-disk-0.raw,size=10G
        swap: 512
        unprivileged: 1
  18. LXC Ubuntu 14.04 template

    Hi, I was trying to migrate some Ubuntu 14.04 containers from OVZ to LXC. After restoring the dumps, the containers boot, but there is no console output in the PVE UI and the network interface doesn't come up. I thought it was a problem with the original OVZ containers, but the same happens when...
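    The migration described usually comes down to restoring the dump as an LXC container; a minimal sketch, with the archive path and storage name as placeholders:

        pct restore 116 /var/lib/vz/dump/vzdump-116.tar --storage <storage>
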
