After upgrading 4 of our Proxmox servers from 6.0 to 6.1, I get this error:
starting apt-get update
Hit:1 http://ftp.no.debian.org/debian buster InRelease
Hit:2 http://ftp.no.debian.org/debian buster-updates InRelease
Hit:3 http://security.debian.org buster/updates InRelease
Hit:4...
Hello.
We have a physical machine running Windows Server 2012. This server runs Shadow Copy backups.
Is it possible to migrate/import a full shadow copy image directly into Proxmox?
Thanks.
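I am not sure about importing a shadow copy directly, but as a rough sketch (assuming the backup can first be exported to a VHDX or raw disk image), an existing disk image is normally brought into Proxmox with qemu-img and qm importdisk; the VM ID 100, the storage name local-lvm and the file name server2012.vhdx below are only placeholders:
# Convert the exported image to raw (qemu-img can read VHD/VHDX)
qemu-img convert -O raw server2012.vhdx server2012.raw
# Attach the converted disk to an existing VM (here VM ID 100) on storage local-lvm
qm importdisk 100 server2012.raw local-lvm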
It was a 2012 R2 server. We disabled ballooning, and that seems to work for now.
To free up the space we just migrated the VM to another host and then back again, and the memory was back to normal :)
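For reference, that round trip can also be done from the CLI; this is only a sketch with a placeholder VM ID (100) and node names:
# Live-migrate the VM away to release the memory ...
qm migrate 100 proxmox2 --online
# ... then, from the target node, migrate it back
qm migrate 100 proxmox1 --online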
Hello.
We have one server using more memory than it's supposed to.
I have configured ZFS to use at most 512 MB of RAM,
and I have configured the Ceph OSDs to use 1 GB each.
But I am still missing memory:
root@proxmox1:~# arcstat
time read miss miss% dmis dm% pmis pm% mmis mm% arcsz...
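For reference, this is roughly how those two limits are usually set (values in bytes; the ceph config set form assumes a reasonably recent Ceph release such as the Nautilus shipped with PVE 6, otherwise osd_memory_target goes in ceph.conf):
# ZFS ARC cap in /etc/modprobe.d/zfs.conf (512 MB = 536870912 bytes)
options zfs zfs_arc_max=536870912
# Ceph OSD memory target, 1 GB per OSD
ceph config set osd osd_memory_target 1073741824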
Oh no no, I was manually migrating the VMs when it crashed. I was trying to recreate the scenario to check if we did anything wrong. :)
Initially I thought the crash was caused by migrating a VM outside its HA group, but that was not the case.
root@proxmox3:~# pvesm status
Name Type Status Total Used Available %
ceph-hdd rbd active 2824362176 943718976 1880643200 33.41%
ceph-ssd rbd active 1695300161 1342886849 352413312...
Is the graph determined by the GUI running on the server I am connected to? What is the service running the web interface?
Maybe I can try restarting it.
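As far as I understand, the web interface is served by the pveproxy service, and the RRD graph data is collected by pvestatd, so restarting those two is the usual first step:
# Restart the web interface and the statistics collector
systemctl restart pveproxy pvestatd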
I thought maybe it was because the new server was not in an HA group, but when I tried to recreate the crash in a virtual environment, it did not crash.
Are HA groups necessary? How does Proxmox determine which node is best?
Found this post:
https://forum.proxmox.com/threads/pve-6-0-cannot-set-zfs-arc_min-and-arc_max.55940/#post-257738
It says he got it working using: pve-efiboot-tool init /dev/device
I am running ZFS RAID1 on my boot volume; do I run the command against both disks?
Device Start End...
Yes. All of them, one by one.
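For a two-disk ZFS RAID1 boot setup that would look roughly like this (the partition numbers are placeholders; the ESP is normally the small vfat partition, often partition 2 on a standard PVE 6 install):
# Identify the ESP on both mirror members
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME
# Register each ESP with the efiboot hook
pve-efiboot-tool init /dev/sda2
pve-efiboot-tool init /dev/sdb2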
update-initramfs: Generating /boot/initrd.img-5.0.21-1-pve
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
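That last line suggests the ESPs have not been registered yet: /etc/kernel/pve-efiboot-uuids is the file pve-efiboot-tool init writes. Once init has been run on the ESPs, a refresh should copy the kernels over (sketch):
# Re-run the kernel/ESP sync once the ESPs are registered
pve-efiboot-tool refresh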
Here is another view of how the patch fixed the issue, and then, after a reboot of one of the nodes/servers, it went back.
Is there a Proxmox server that acts as master for how the volumes are read?
Hi.
I just installed the patch; it worked for a little while, then it went back :(
The start of this hourly graph showed the correct value; then we rebooted one of our servers, and it went back to the wrong size.
Hi.
My /etc/modprobe.d/zfs.conf on all servers looks like:
options zfs zfs_arc_max=573741824
I have tried running:
update-initramfs -u
on all servers, and the result currently is:
root@proxmox1:~# cat /proc/spl/kstat/zfs/arcstats | grep c_max
c_max 4 67530100736...
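That c_max (about 63 GiB) means the module option is not being applied at boot. As a quick sanity check it can also be set at runtime (a sketch; on ZFS 0.8 the value must be larger than zfs_arc_min to take effect):
# Apply the ARC cap at runtime (bytes) and verify
echo 573741824 > /sys/module/zfs/parameters/zfs_arc_max
cat /proc/spl/kstat/zfs/arcstats | grep c_max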
The new server was updated to pve-cluster 6.0.7,
while the rest of the cluster was running pve-cluster 6.0.5.
The new server was also running pve-kernel-5.0.21-2, while the others were running 5.0.21-1.
I don't know if that has something to do with it.
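For comparing package versions across the nodes, pveversion gives a quick overview, e.g.:
# Show the relevant PVE package versions on each node
pveversion -v | grep -E 'pve-cluster|pve-kernel'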