I upgraded the Devuan guest from 2.1 to 3.0; the kernel changed from 4.9 to 4.19. No packet loss so far with this kernel. Tested with ping -c 100 and mtr -s 1000 -r -c 1000.
I'll try installing the Debian 10 kernel on the problem Debian 9 guests next.
I did more testing. The guests I had created previously, either as new VMs or restored from Proxmox 5.4 dumps, do work. It is only the two guests I dumped on Proxmox 5.4 and restored yesterday that have heavy packet loss.
I created a new VM just now, Debian 10. It does not have packet loss.
It...
Recently installed Proxmox VE 6.3. Two guests work OK, two have > 50 % packet loss, so they are not really usable.
I cannot understand what is wrong with the two problem guests. I moved both of them today from Proxmox 5.4-15: dumped there and restored on this newer Proxmox. Of the two working guests, one I...
Not those, I followed the Hetzner instructions I linked to in #1. But apart from not installing open-iscsi they look similar.
So I just mount the /dev/md1 to /var/lib/vz as dir storage?
root@hetrauta ~ # mount /dev/md1 /srv
mount: /srv: unknown filesystem type 'LVM2_member'.
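The `LVM2_member` error means /dev/md1 is an LVM physical volume, not a filesystem, so it cannot be mounted directly. A rough sketch of how to inspect it and mount a logical volume instead; the volume group name `vg0`, the LV name `data`, and the mount point are assumptions, check the output of the inspection commands for your actual names:

```shell
# Show which volume group and logical volumes live on /dev/md1
pvs
vgs
lvs

# Activate the volume group found above (name "vg0" is an assumption)
vgchange -ay vg0

# Mount a logical volume, then register it in Proxmox as dir storage
mkdir -p /mnt/data
mount /dev/vg0/data /mnt/data
pvesm add dir local-vz --path /mnt/data --content images,rootdir,backup
```

Alternatively, if you do not need LVM on md1, you can wipe it, create a plain filesystem, and mount that as the dir storage instead.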
On a Hetzner dedicated server I installed Debian 10 using installimage, and then ran apt-get install proxmox-ve per the Hetzner instructions.
Now /var/lib/vz is on root partition, which is small and most of the NVMe disk is unused.
root@hetrauta ~ # pveversion
pve-manager/6.3-3/eee5f901 (running kernel...
Do you still see that storage in the GUI under Storage? If yes, you can read off the values you need to write into that file, or save the storage from the GUI and Proxmox writes the file itself.
Consult the Proxmox documentation on Storage, you can recreate the lines yourself. Or add the storage again in the GUI.
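For reference, a dir storage entry in /etc/pve/storage.cfg looks roughly like this (the storage name and path here are examples, adjust to your setup):

```
dir: local-vz
        path /mnt/data
        content images,rootdir,backup
```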
Then shutdown the virtual machines, make a backup dump from all of them, copy the dump files offsite. Then reboot the host.
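The dump-and-copy step could look like this; the VM ID 100, the dump directory, and the backup host are examples:

```shell
# Stop the guest cleanly and create a compressed dump
vzdump 100 --mode stop --compress zstd --dumpdir /var/lib/vz/dump

# Copy the dump offsite
scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst user@backuphost:/backups/
```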
Consider updating your Proxmox, 5.1 is very old and not supported anymore.
If it breaks, you have the virtual machine dumps. You can install a new Proxmox and copy the...
Make the swap bigger or add more RAM. Swap being full means all memory is used, including swap acting as additional memory, so you have memory exhaustion.
You could try modifying the VM memory settings. I think it is the ballooning device that makes the guest request only the memory it actually uses, so it should release some RAM when not...
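On Proxmox the balloon target is a per-VM setting; a sketch, where the VM ID 100 and the sizes are examples (setting the balloon minimum below the maximum memory lets the host reclaim unused RAM from the guest):

```shell
# Give VM 100 up to 4 GiB, but let the balloon driver shrink it down to 1 GiB
qm set 100 --memory 4096 --balloon 1024
```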
That shows you are not using ZFS. ZFS uses lots of memory, so it was one possible way to run out of memory.
What does free -h show?
If you do not have swap, add swap partition.
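If repartitioning is not practical, a swap file works too. A sketch, with the 2 GiB size as an example:

```shell
# Create a 2 GiB swap file (fallocate may not work on every filesystem; dd is the fallback)
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Persist across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```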
Use top or htop commands to see which application uses memory.
It is a DNS record. Enter it as TXT record for your e-mail domain in your name service.
If you need to figure out what to write between the " characters, search the web for an SPF wizard to find tools that create the text for you.
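A minimal SPF TXT record, assuming mail for the domain is only sent from the hosts in its MX records (the domain name is an example):

```
example.com.  IN  TXT  "v=spf1 mx -all"
```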
ESXi is VMware; Proxmox is a different product.
You do not say what kind of load you expect to run on those virtual machines. On my Proxmox installations, CPU has not been a problem. If there is enough memory they run plenty fast. The disk system used to be a bottleneck, five VMs running on a SATA...
I only have experience on Debian, but there jail.conf is read first and jail.local last. Found this on the man page:
So the Debian way is to create jail.local and copy into it only the jails I want to modify. Enable the jails there (it seems everything is disabled in jail.conf; sshd gets enabled in...
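A minimal /etc/fail2ban/jail.local on Debian might look like this; the ban and retry values are examples:

```
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
```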