A long time ago I had the same issue, but with Win2k12. The solution was to prepare the Windows machine with IDE drivers; I'm not sure it will solve your problem too. I also used the Clonezilla migration method.
- make sure that you have standard IDE drivers enabled, use the mergeide.reg...
Hello,
have you tried following this guide https://pve.proxmox.com/wiki/USB_Devices_in_Virtual_Machines ?
An example of a command could be: qm set XXX -usb0 host=0d8c:013a, where XXX is the number of the VM to which you want to connect the device.
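A minimal sketch of the two steps, assuming the same device ID as above and VM 100 as an example:
lsusb                              # find the vendor:product ID of the device on the host
qm set 100 -usb0 host=0d8c:013a    # attach that device to VM 100 as usb0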
I know it's a different method, but if I've understood correctly, you could perhaps use the Clonezilla migration system. I have done many migrations this way and it normally works well:
https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#Clonezilla_Live_CDs
I would not recommend deploying a cluster with 2.5Gb connectivity for Ceph in a production environment.
This goes against Ceph's best practices.
Additionally, having such a low number of OSDs increases the likelihood of storage loss. Just think, with a 1Gbps network, it takes approximately 3...
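As a rough example of why the network matters: a saturated 1 Gbps link moves about 125 MB/s, so re-replicating just 1 TB of OSD data already takes roughly 1,000,000 MB / 125 MB/s ≈ 8,000 seconds, i.e. more than two hours, before any client traffic or overhead is accounted for.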
Thanks for your answers.
In general, it seems that the problem occurs when migrating from a node with the Xeon Silver 4110 CPU to one with the E5-2630 and vice versa.
Migrating between the Xeon Silver nodes gives me no problems.
Can you confirm it?
Bye
Hi Kellion, I still have the same problem.
I haven't upgraded PVE to the latest version 7.3 yet.
Perhaps the problem could be fixed with the latest kernel version? I hope so... :mad:
This is my current cluster configuration:
PVE1
32 x Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz (2 Sockets)
Linux 5.15.39-3-pve #2 SMP PVE 5.15.39-3
PVE2
32 x Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz (2 Sockets)
Linux 5.15.53-1-pve #1 SMP PVE 5.15.53-1
PVE3
40 x Intel(R) Xeon(R) CPU E5-2630...
This is the migration log:
2022-09-07 08:57:36 starting migration of VM 213 to node 'pve3' (192.168.253.58)
2022-09-07 08:57:36 found local, replicated disk 'local-zfs:vm-213-disk-0' (in current VM config)
2022-09-07 08:57:36 found local, replicated disk 'local-zfs:vm-213-disk-1' (in current VM...
Hello everyone,
I have a PVE cluster, version 7.2-7, upgraded to the latest package versions.
I'm having trouble figuring out why some VMs (both Windows and Linux) get stuck after live migration.
Moreover, this doesn't happen for every host in the cluster.
For example, If I migrate the...
How can you set about 31 GB as the ZFS ARC minimum?
Below is my current arc_summary:
ZFS Subsystem Report Wed Aug 03 16:40:02 2022
Linux 5.15.35-2-pve 2.1.4-pve1
Machine: pve (x86_64) 2.1.4-pve1
ARC...
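For reference, a common way to pin the ARC minimum and maximum on PVE is via ZFS module options in /etc/modprobe.d/zfs.conf; the values below are just the ~31 GB figure from the question converted to bytes, plus an example maximum:
options zfs zfs_arc_min=33285996544    # ~31 GiB
options zfs zfs_arc_max=34359738368    # 32 GiB
On systems with root on ZFS this should be followed by update-initramfs -u and a reboot.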
Thank you fabian,
The very strange thing is that I have 5 VMs, so total RAM usage should be about 8GB + 12GB + 12GB + 16GB + 8GB + ZFS 16GB (max) = 72GB.
On the PVE web page I see about 95% RAM usage!
It's the first time I've seen something like this...
Thanks
Hi,
On my PVE 7.2-4 host I've noticed very high RAM usage, and VMs have been stopped randomly.
I have already limited ZFS RAM usage, and for several months everything has been going fine.
I've tried to use the dmesg command in this way in order to figure out what happened. The output...
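Assuming the kernel's OOM killer is what stops the VMs, a typical thing to grep for in that output would be something like:
dmesg -T | grep -iE "out of memory|oom-killer|killed process"    # human-readable timestamps, OOM events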
Hello everyone,
We had trouble with a Windows 10 VM.
This VM reboots every night at 12:00 and every reboot has worked fine. Last night it rebooted to install updates, and in the morning I got the black Windows page with the spinning circle...
very strange...
Hello all!
I have an easy question: where can I set the disk IO limit during backups?
I tried to set it in the GUI (datastore, options, bandwidth limit), but it doesn't seem to work.
My goal is to reduce IO during the incremental backup task.
How can I do that?
Regards
Alessandro
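For what it's worth, one place a backup bandwidth limit can be set on the PVE side is /etc/vzdump.conf (value in KiB/s); this is just a sketch and may not be the same knob as the GUI option mentioned above:
bwlimit: 51200    # limit backup bandwidth to ~50 MiB/s
The same file also has an ionice option that can lower the IO priority of the backup worker.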
Logs: OK, please tell me which logs (path or command) you want to see, and I will post them.
root@pbs1:/var/log# proxmox-backup-manager versions --verbose
proxmox-backup 1.0-4 running kernel: 5.4.78-2-pve
proxmox-backup-server 1.0.6-1 running version: 1.0.6...
Hello all,
I have the system below:
Backup Server 1.0-6
Disks:
2 small SSDs dedicated to the OS (ZFS mirror), name: rpool
4 SATA 1TB disks dedicated to the data pool (raidz-1), name: Storage1
The health of all disks is OK!
The system runs well!
Sometimes, after a reboot, PBS loses the data pool Storage1 (not...
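When that happens, a first thing to check, assuming the pool simply was not imported at boot, could be:
zpool import              # list pools that are exported/available for import
zpool import Storage1     # import the data pool manually by name
systemctl status zfs-import-cache.service zfs-import-scan.service    # boot-time pool import services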