Search results

  1. Can't start some containers after upgrade 5.14 -> 6.1

    Everything is working again. Not sure why a dev directory was created in 2 of the container mount points, but that was probably the root cause. Issue solved.
  2. Can't start some containers after upgrade 5.14 -> 6.1

    Trying a "zfs mount -a" displayed an error that subvol-105-disk-0 and subvol-114-disk-0 were not empty, and therefore couldn't be mounted. Both of those subdirs had an empty "dev" directory. Once I removed them, "zfs mount -a" worked, and I could start 105 and 114. Also, the mount on 104 is now...
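The fix described in that snippet can be sketched as a small shell check; nothing here is specific to the thread, and the dataset names are simply whatever `zfs list` reports:

```shell
# A ZFS dataset refuses to mount when its mountpoint directory already
# contains entries (here, stray empty dev/ directories). Flag the blockers.

is_empty_dir() {
    # true when the directory exists and contains no entries
    [ -d "$1" ] && [ -z "$(ls -A "$1")" ]
}

if command -v zfs >/dev/null; then
    # only unmounted datasets matter; a mounted one is naturally non-empty
    zfs list -H -o name,mountpoint,mounted | while read -r name mnt mounted; do
        if [ "$mounted" = "no" ] && [ -d "$mnt" ] && ! is_empty_dir "$mnt"; then
            echo "non-empty mountpoint blocks $name: $mnt"
        fi
    done
    # after removing the stray entries, retry:
    # zfs mount -a
fi
```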
  3. Can't start some containers after upgrade 5.14 -> 6.1

    Evidently, the zfs pool did not auto-mount. I'm not sure if this is due to the upgrade from jessie -> buster, or the PVE upgrade. Issuing a 'zpool mount MEDIA' attached the drive to /MEDIA, and *one* container was able to start (104). However, none of the files were there. Other containers that...
  4. Can't start some containers after upgrade 5.14 -> 6.1

    It looks like I may have mounted the 2TB disk on /MEDIA on the proxmox server itself, since there is an empty /MEDIA dir. Also the config looks like that /MEDIA dir is then shared onto 104. Trying to mount the device returns this error: root@proxmox:/media# mount /dev/sda1 /MEDIA mount...
  5. Can't start some containers after upgrade 5.14 -> 6.1

    Hello. After upgrading my proxmox from jessie -> buster, several of my containers won't start. I had added a second disk through the UI, and stored several containers on it. The volume shows up in the UI as a ZFS volume, and the containers show up under Content, but attempts to start always...
  6. NFS clarification – LXC container exporting an NFS share to other LXC containers

    Well, no takers yet. :/ What I've done is to create a VM for the host that needs to export the NFS share, and left the consumers as LXC containers. That works fine. I would like some confirmation of the circumstances under which LXC containers cannot export NFS. I used to be able to in earlier...
  7. NFS clarification – LXC container exporting an NFS share to other LXC containers

    Hello, all. I have to share an NFS dir from one host (server1) to three other hosts (server2-4). I've read many different threads here on this, and I'd like some clarification. 1) Firstly, can an LXC container share an NFS mount point with other LXC containers on the same node? 2) If not, what...
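For the working layout from the earlier reply (NFS served from a VM, consumed by the LXC guests), the VM-side export might look like this; the path and host names are illustrative, not taken from the thread:

```
# /etc/exports on the exporting VM (illustrative path and client names)
/srv/share  server2(rw,sync,no_subtree_check) server3(rw,sync,no_subtree_check) server4(rw,sync,no_subtree_check)
```

After editing the file, `exportfs -ra` reloads the export table.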
  8. Proxmox node: can't mount NFS share

    This is really strange. When I try to create the NFS storage-whatever, I get that mkdir error, but the OS actually mounts the share in /mnt/pve/.
  9. Proxmox node: can't mount NFS share

    Thanks! tried that out, but got a permissions error: create storage failed: error with cfs lock 'file-storage_cfg': mkdir /mnt/pve/Livespare/images: Permission denied at /usr/share/perl5/PVE/Storage/Plugin.pm line 954. (500)
  10. Expand default 100G 'local' storage volume to 200G?

    Hello. It appears that by default proxmox creates a 100G 'local' storage volume for VZDump backup files, ISO images and Container templates. A local-lvm was set up using the balance of our 2T disk, of which 12% is used. I'm finding now that I need more room to bring over a 162G lzo backup. Is...
  11. Proxmox node: can't mount NFS share

    Hello. I need to mount an NFS share on the node so I can restore a backup. I noticed I can't get nfs-common services running as they're masked in /lib/systemd/system. How would I enable NFS mounting at the node level? root@proxmox3:~# mount livespare:/liveshare /mnt2 mount.nfs: requested NFS...
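One way to check what that post describes, as a sketch assuming Debian-style unit files (the exact unit names on a given node may differ):

```shell
# A masked systemd unit is a symlink to /dev/null; a masked nfs-common
# is what blocks mount.nfs here. Detect the mask, then unmask and retry.

is_masked() {
    [ "$(readlink "$1" 2>/dev/null)" = "/dev/null" ]
}

for unit in /lib/systemd/system/nfs-common.service \
            /lib/systemd/system/rpcbind.service; do
    is_masked "$unit" && echo "masked: $unit"
done

if command -v systemctl >/dev/null; then
    systemctl unmask nfs-common rpcbind 2>/dev/null || true
    systemctl start rpcbind 2>/dev/null || true
fi
# then retry the mount from the post:
# mount -t nfs livespare:/liveshare /mnt2
```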
  12. No Space Left on Device when restoring from tar.lzo

    Alex, thanks. Do you know why the fs space requirements would grow? I'm wondering if it would have anything to do with the fact that ZFS was used on the old proxmox, and ext4 on the new.
  13. No Space Left on Device when restoring from tar.lzo

    Trying again, but bumping up rootfs to 40G...
  14. No Space Left on Device when restoring from tar.lzo

    Thanks, Dietmar. I'm still getting the same error: root@proxmox3:/var/lib/vz/dump# pct restore --rootfs 32 105 vzdump-openvz-billingdev-prox3.tar.lzo -storage local-lvm Using default stripesize 64.00 KiB. Logical volume "vm-105-disk-0" created. mke2fs 1.43.4 (31-Jan-2017) Discarding device...
  15. No Space Left on Device when restoring from tar.lzo

    Hello. I'm in the process of migrating all my VMs and containers from version 3.3.5 and 4.x of proxmox to version 5.4. On several occasions, I have run into errors on restore: tar: ./usr/lib/pymodules/python2.7/ndg/httpsclient/test/scripts/openssl_https_server.sh: Cannot create symlink to...
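The retries quoted above (32 GiB, then 40 GiB) can be sketched like this; the 8 GiB step is an illustrative choice, while the VMID, archive name and storage name are copied from the quoted command:

```shell
# Re-run the restore with a larger rootfs when the unpacked archive no
# longer fits; a different filesystem on the new host (ext4 vs the old
# ZFS) can make the restored tree need more space than before.

next_size() {
    # illustrative heuristic: grow the request by 8 GiB per retry,
    # matching the thread's jump from 32 to 40
    echo $(( $1 + 8 ))
}

if command -v pct >/dev/null; then
    pct restore 105 vzdump-openvz-billingdev-prox3.tar.lzo \
        --rootfs "$(next_size 32)" --storage local-lvm
fi
```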
  16. Using a second NIC on the node from VMs

    Hey all, My containers require two networks for application reasons. Within the node, adding a second interface to the VMs allows them to communicate with each other. However, accessing other servers outside of the proxmox environment does not work. To that end, I've added a second physical...
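The usual way to expose a second physical NIC to guests is a second bridge on the node; a sketch of the node-side config, with the interface name and bridge name as illustrative assumptions:

```
# /etc/network/interfaces fragment on the node (illustrative names)
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eth1
        bridge-stp off
        bridge-fd 0
```

Guests then get a second virtual NIC attached to vmbr1 so their second network reaches hosts outside the node.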
  17. Adding more backup room

    Thanks. the LVM is already in use for containers, so are the methods above destructive?
  18. Adding more backup room

    Hello, all. I have a 5.4.13 node running that has a rather small 'local' directory that can have backups, templates and ISO images. I added a 1T storage called local-lvm, and 2T disk set called local2, but those only accept Disk Image and Container content. I need to create more room for dump...
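A sketch of adding a directory storage that does accept dump content, assuming a spare filesystem mounted at /mnt/backup2 (a hypothetical path; the storage name backup2 is also made up):

```shell
# Directory ("dir") storages are the storage type that can hold VZDump
# backups, ISO images and container templates; LVM/LVM-thin cannot.

storage_cmd() {
    # build the pvesm invocation for a dir storage with dump content
    echo "pvesm add dir $1 --path $2 --content backup,iso,vztmpl"
}

mkdir -p /mnt/backup2 2>/dev/null || true
if command -v pvesm >/dev/null; then
    $(storage_cmd backup2 /mnt/backup2)
fi
```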
  19. debconf: Perl may be unconfigured (Can't locate IO/File.pm in @INC)

    I fixed most of the perl issues, but now vzctl won't install cleanly: Reading changelogs... Done (Reading database ... 66738 files and directories currently installed.) Preparing to unpack .../vzctl_4.8-1+deb8u2_amd64.deb ... Unpacking vzctl (4.8-1+deb8u2) over (4.0-1pve6) ... dpkg: error...
  20. debconf: Perl may be unconfigured (Can't locate IO/File.pm in @INC)

    After an attempt to update from Debian wheezy to jessie, something broke perl. Now nothing works with apt-get (since it depends on perl), and therefore no PVE elements are getting updated, and all my VMs are now dead in the water. Many packages are held back, and I've tried force and reinstall...
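The usual first-aid for a half-configured dpkg/perl state can be sketched as below; these are generic Debian recovery steps, not something confirmed in the thread, and the helper prints each command before running it (DRY_RUN is left on so the sketch is safe to paste):

```shell
# Generic recovery for an interrupted dist-upgrade that left perl (and
# therefore apt) half configured. Each step is printed before it runs.

run() {
    echo "+ $*"
    [ -n "$DRY_RUN" ] || "$@"
}

DRY_RUN=1   # drop this line to actually execute the commands

run dpkg --configure -a                       # finish interrupted configuration
run apt-get -f install                        # repair broken dependencies
run apt-get install --reinstall perl-base     # put a working perl back first
```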
