I've had to do the same, as a VM seems to have crashed unexpectedly yesterday.
"The computer has rebooted from a bugcheck. The bugcheck was: 0x000000d1 (0x0000000000000228, 0x0000000000000006, 0x0000000000000000, 0xfffff8008943f715). A dump was saved in: C:\Windows\MEMORY.DMP. Report Id...
I have had to totally disable all backup jobs because they all either hit a QMP timeout or fail on fs-freeze, halting the entire VM for several minutes, which is unacceptable.
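For what it's worth, when this happens a quick check of the guest agent from the host shows whether the guest is stuck in a frozen state. A minimal sketch, assuming the QEMU guest agent is installed and enabled in the guest (VM ID 212 is only an example here):

# check that the guest agent answers at all
qm guest cmd 212 ping
# ask whether the guest filesystems are currently frozen
qm guest cmd 212 fsfreeze-status
# as a last resort, ask the agent to thaw a stuck freeze
qm guest cmd 212 fsfreeze-thaw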
I was unable to find the task log in /var/log??? So I had to copy it from the UI into Notepad, which apparently resulted in 90% blank space at the beginning of the document...
Where is the file located that you are requesting?
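In case it helps: as far as I can tell, PVE keeps task logs as plain files under /var/log/pve/tasks/ (one file per UPID plus an index), and PBS keeps its own under /var/log/proxmox-backup/tasks/. The paths below are from memory, so treat them as a starting point rather than gospel:

# on the PVE node: per-task log files and the recent-task index
ls /var/log/pve/tasks/
cat /var/log/pve/tasks/index
# on the PBS side the equivalent directory should be
ls /var/log/proxmox-backup/tasks/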
receiving server PBS02:
Version: 2.1-2
Kernel: 5.13.19-2-pve
data to be...
Hi, we just experienced this again; I will attempt to fill in the information you requested.
2022-02-08T06:43:51+01:00: re-sync snapshot "vm/212/2022-02-02T06:00:02Z" done
2022-02-08T06:43:51+01:00: percentage done: 80.88% (14/18 groups, 157/281 snapshots in group #15)...
I suspect it is the syncs taking a very long time. The backup is around 18 TB being synced TO the offsite server over a 300 Mbit line.
Is this the reason for the ticket timeout? Is there a way to adjust it or otherwise fix this problem?
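Rough math, assuming decimal terabytes and a fully saturated line: 18 TB is about 144,000,000 Mbit, and at 300 Mbit/s that is 144,000,000 / 300 ≈ 480,000 seconds, i.e. roughly 5.5 days for one full pass, before any protocol or verification overhead. So multi-hour (and multi-day) sync tasks are pretty much guaranteed here.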
What's wrong with the scheduler? Well, inconsistency, for one.
Every 2 hours: */2:00
Mon-Fri between 7 and 18, every 15 minutes:
mon..fri 7..18:00/15
Ok, so mon-fri 7-18 every 2 hours should be
mon..fri 7..18:00/2:00 Nope...
How about
mon..fri 7..18:00/*/2:00 Nope...
So I am considering just...
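A possible workaround, assuming the documented comma-separated hour lists are accepted here (I have not verified this against every PVE version), is to spell the hours out instead of fighting the step syntax:

mon..fri 7,9,11,13,15,17:00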
The cluster consists of 6 online nodes.
p1-2 for VMs
p3-4 nodes for Ceph
Each Ceph node currently has two 2 TB Kingston SEDC500M enterprise SSDs. More to be added later.
It pushes out around 800 MB/s sequential read and around 550 MB/s sequential write at 32K.
Network consists of two sets of stacked 10 GbE...
Hi, the qm rescan command was a useful hint.
However, the unreferenced disks that pop up left and right are Proxmox's own creation when it fails to move disks. It does not always remove them, even with the checkbox ticked to delete the source after the move.
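For anyone hitting the same thing, this is roughly the cleanup I would sketch on the CLI; the VM ID and the stray volume name below are only examples, so substitute your own:

# re-scan storages so stray volumes show up as unusedN in the VM config
qm rescan --vmid 107
# drop the config reference to the stray disk
qm set 107 --delete unused0
# and, if the underlying image should go as well, free the volume itself
pvesm free PVE02-STORAGE2:107/vm-107-disk-1.raw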
The charade with migration continues...
Cancel? Why? Watchers? Huh? Shoot those watchers and get on!
This time the VM in question crashed as a bonus. Impressive. /s
drive-sata0: transferred 351.7 GiB of 1.2 TiB (28.36%) in 10m 19s
drive-sata0: transferred 352.3 GiB of 1.2 TiB (28.18%) in 10m...
.................
Move disk dialog:
Disk: efidisk0
Target Storage:
Format:
Delete source:
[Move disk]

Task viewer: VM 107 - Move disk
create full clone of drive efidisk0 (PVE02-STORAGE2:107/vm-107-disk-0.raw)
drive mirror is starting for drive-efidisk0
drive-efidisk0...
I wonder why it claims I cannot migrate from storage type DIR to RBD... because it is a blatant lie.
Go to hardware, move disk from DIR to RBD = SUCCESS.
Once done, migrate the VM... it is now all running on Ceph... literally migrated from DIR to RBD....
WHY is PVE lying and rolling over like a little...
Dead PVE.... I wanted to move the EFI disk, hence I clicked MOVE.... I did not ask for any cancel... why do you behave like an infantile AI and cancel my job? To annoy me? Success!!
WHY?
I don't know how many times I've had to shut down a VM to move its EFI disk... as if it's even in use at all... This is...
5 of 6 nodes down for maint.
Can't log in to the web interface on the 6th with the correct password.
Can SSH to the 6th with the correct password.
Why?
If quorum is the culprit.... WHY?
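My understanding, and it is only that, is that without quorum pmxcfs goes read-only, so the web login path fails while plain SSH does not care. The usual workaround I have seen for planned maintenance like this is to lower the expected vote count on the surviving node, and revert it once the others are back:

# confirm the node has actually lost quorum
pvecm status
# temporarily tell the cluster to expect a single vote
pvecm expected 1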
You should in fact increase the arc_max size, or alternatively do nothing; that will be fine as well.
ARC memory will be released if the system needs the memory for anything else.
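In case it is useful, this is how I would pin arc_max on a ZFS-on-Linux host; the 32 GiB figure is purely an example value, not a recommendation for this particular box:

# set the cap at runtime (value in bytes; 34359738368 = 32 GiB, example only)
echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max
# make it persistent across reboots
echo "options zfs zfs_arc_max=34359738368" >> /etc/modprobe.d/zfs.conf
update-initramfs -u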
So you are saying that with 160 GB of free memory it cannot find a place to write 32K contiguously? That is hopefully not the problem, as that would be absurd.
I am also curious why buffers/cache in free -m showed 160 GB used as "buffers/cache", considering ALL drives are in ZFS pools; I do not see how...
The server has 512 GB of RAM.
It has 5 running VMs with the following allocated memory:
1: 2 GB
2: 4 GB
3: 32 GB
4: 64 GB
5: 16 GB
Please explain how this server can run OOM when it clearly has sufficient memory to run the meager load it is running.
Even with 2 ZFS pools at the default ARC cap, it will never fill...
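To put numbers on it: the five VMs add up to 2 + 4 + 32 + 64 + 16 = 118 GB. Assuming the stock ZFS-on-Linux behaviour of capping the ARC at half of physical RAM (and the ARC is shared across all pools, not allocated per pool), that is at most another 256 GB, so roughly 374 GB in the absolute worst case, comfortably short of 512 GB.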