ERROR: migration aborted (duration 00:00:06): storage migration for 'vdd:subvol-601-disk-1' to storage '' failed - no storage ID specified
TASK ERROR: migration aborted
I tried to migrate an LXC container from one (nested) QEMU node to another and got this error.
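The "no storage ID specified" part suggests the migration task never resolved a target storage for that volume. One possible workaround (my assumption, not confirmed in the post) on recent Proxmox releases is to name the target storage explicitly; check `pct help migrate` first, since older releases lack the option:

```shell
# Hypothetical workaround; "601", "vh3" and "local-zfs" are placeholders
# for your own container id, target node and target storage.
# Off a Proxmox node pct is absent, so fall back to printing the command.
if command -v pct >/dev/null 2>&1; then PCT=pct; else PCT="echo pct"; fi
$PCT migrate 601 vh3 --restart --target-storage local-zfs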
I remember when I...
I would like to import a list of IPs into a Proxmox IPSet rule.
Any idea how that might be done?
I have been collecting a Fail2Ban recidive list for a while now and would like to import it into a cluster-wide rule set for every LXC container.
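A sketch of how this could be scripted with pvesh, the CLI wrapper around the Proxmox API (the set name "recidive" and the list-file layout are my assumptions, not from the post). Without pvesh on the machine, the commands are only printed so you can review them first:

```shell
# Push a one-IP-per-line ban list into a cluster-wide IPSet via pvesh.
import_ipset() {
    ipset=$1
    list=$2
    if command -v pvesh >/dev/null 2>&1; then run=; else run=echo; fi
    # Create the IPSet once; ignore the error if it already exists.
    $run pvesh create /cluster/firewall/ipset --name "$ipset" || true
    while read -r ip; do
        case $ip in ''|'#'*) continue ;; esac   # skip blanks and comments
        $run pvesh create "/cluster/firewall/ipset/$ipset" --cidr "$ip"
    done < "$list"
}

# Example: import_ipset recidive /root/recidive-bans.txt
```

Entries under /cluster/firewall/ipset are visible on every node, so a single cluster firewall rule referencing `+recidive` then covers all containers.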
When I created a ZFS RAID, Proxmox allocated 13T out of 28T to data (i.e. local-zfs).
How do I increase the size allocated to data?
NAME USED AVAIL REFER MOUNTPOINT
rpool 19.7T 805G 151K /rpool
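For what it's worth: on a default install local-zfs is the rpool/data dataset, and ZFS datasets normally share the pool's free space rather than getting a fixed slice, so a smaller "data" figure usually means a quota/reservation somewhere or space consumed elsewhere in the pool. A sketch for checking, assuming the default dataset names (dry-runs with echo where zfs is not available):

```shell
if command -v zfs >/dev/null 2>&1; then ZFS=zfs; else ZFS="echo zfs"; fi
$ZFS list -o space rpool rpool/data              # where the pool's space went
$ZFS get quota,refquota,reservation rpool/data   # a quota here caps local-zfs
```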
After a looong struggle with RADVD I finally got Proxmox LXC containers to receive IPs (and even DNS) from the pfSense firewall on the WAN side. Woohooo!
Problem is that RADVD is REALLY picky and refuses to work unless you offer it a /64 network (and it really does not like my /48 network).
I still can't...
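For reference, this is roughly the shape of a minimal radvd.conf for the working /64 case — my sketch, with a placeholder bridge name and documentation prefixes. The /64 restriction is not really radvd's fault: SLAAC builds addresses from 64-bit interface identifiers, so only /64 prefixes can be autoconfigured.

```
interface vmbr0 {
    AdvSendAdvert on;
    prefix 2001:db8:0:1::/64 {    # SLAAC only autoconfigures /64 prefixes
        AdvOnLink on;
        AdvAutonomous on;
    };
    RDNSS 2001:db8:0:1::53 { };   # the "even DNS" part (RFC 8106)
};
```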
I have been trying to figure this one out by myself, but I think I need some help interpreting the numbers.
One of my nested Proxmox servers, which runs 4 LXC containers (each using ~1G RAM), claims that it is using 14G out of 16G.
What is the rest of the memory being used?
root@vh0:~# cat /proc/meminfo...
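One usual suspect for "missing" memory on Proxmox hosts is the ZFS ARC, which is counted as used rather than as cache in /proc/meminfo; KSM and page tables for nested KVM can add more. A quick check along those lines (the arcstats file only exists when ZFS is loaded on that node — an assumption here):

```shell
grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo
# ZFS ARC size in GiB, if ZFS is in use on this node:
awk '$1 == "size" {printf "ARC: %.1f GiB\n", $3 / 2^30}' \
    /proc/spl/kstat/zfs/arcstats 2>/dev/null || true
```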
I decided to test Proxmox 6 on my home PC, where I installed 4 HDs (8 TB) that I set up with RAIDZ1.
From the start the speed of the RAID set was just terrible, but I managed to speed it up somewhat by using "zfs set sync=disabled rpool".
Now I'm testing other methods to speed up the setup. Maybe...
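A caution and a sketch: `sync=disabled` trades away write safety (acknowledged sync writes can be lost on power failure), so these are knobs commonly tried first — my suggestions, not from the post; device names are placeholders and the commands dry-run with echo off a ZFS host:

```shell
if command -v zfs >/dev/null 2>&1; then ZFS=zfs; else ZFS="echo zfs"; fi
$ZFS set atime=off rpool          # skip access-time writes on every read
$ZFS set compression=lz4 rpool    # often already on; usually a net win on HDs
# A small fast SSD as a separate log (SLOG) device gives much of the
# sync=disabled speedup without losing sync semantics:
# zpool add rpool log /dev/disk/by-id/<your-ssd-partition>
```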
There seems to be a problem with the new Debian 10 template.
I'm not just talking about its failure to run MySQL; it also seems to hang easily.
Two different Bind9 servers (LXC) on two different (KVM) hosts have now hung after I moved them to run on the new template.
They show high CPU usage...
My rig is itching :)
My rig is a server with about 10 KVM clients that each host about 10 LXC containers (nested virtualization is the way to go these days). Lately I have seen several "/dev/loop0" problems, usually I/O errors indicating a malfunctioning disk, causing problems with...
I noticed that there does not seem to be a simple reset/reboot script for problematic clients, so I made one.
Copy/paste this to autorestart-vm1100.sh using nano:
# Replace the VM id number and IP address below with your own
ping -c 1 10.168.100.100 &>...
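A fuller sketch of the same idea, under the same placeholders (VM 1100, 10.168.100.100) — not the author's exact script, since the post is truncated. Off a Proxmox node qm is absent, so the commands are only printed:

```shell
#!/bin/sh
# Watchdog: if the guest stops answering ping, restart it with qm.
watchdog() {
    vmid=$1
    host=$2
    # qm exists only on a Proxmox node; elsewhere just print the commands.
    if command -v qm >/dev/null 2>&1; then qm_cmd=qm; else qm_cmd="echo qm"; fi
    if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
        echo "VM $vmid ($host) answers ping, nothing to do"
    else
        echo "VM $vmid ($host) is not answering, restarting it"
        $qm_cmd stop "$vmid"
        $qm_cmd start "$vmid"
    fi
}

watchdog 1100 10.168.100.100
```

Run it from cron every minute or so; for LXC guests, swap qm for pct.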
Ok. It's not a picture but still... take a look at how long it took to migrate an LXC container (that lives on a NAS, so there is nothing to actually migrate).
2018-04-23 15:53:24 shutdown CT 144
2018-04-23 15:54:06 starting migration of CT 144 to node 'vh3' (10.168.100.103)
2018-04-23 15:54:07 volume...
I have been testing NFS mounts in LXC containers and they work mostly OK after you have modified the AppArmor settings on the host. Problems arise when the LXC containers are moved or the host node restarts.
The LXC container does not always seem to honor its /etc/fstab settings, which results in mounts...
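Two approaches commonly suggested instead of fstab inside the container — a sketch; the container id and paths are placeholders, and the commands dry-run with echo off a Proxmox node:

```shell
if command -v pct >/dev/null 2>&1; then PCT=pct; else PCT="echo pct"; fi
# 1) Let the container mount NFS itself via the feature flag
#    (generally needs a privileged container):
$PCT set 144 --features mount=nfs
# 2) Or mount the share on the host and bind-mount it in, so the container
#    never touches NFS, fstab or AppArmor at all:
$PCT set 144 --mp0 /mnt/nas-share,mp=/mnt/share
```

The bind-mount route also survives container moves and host reboots, since the host's own mount ordering handles the NFS share.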