Hi
Noticed today that after backing up an LXC, when I tried restoring it to another node I got a message ....
tar: ./etc/vzdump/pct.conf: time stamp 2024-11-03 11:50:15.351796943 is 10308.553838456 s in the future
tar: ./etc/vzdump/pct.fw: time stamp 2024-11-03 11:50:15.371796943 is...
Changing it back to its initial value did the trick: the VM started and users were able to log in to the domain.
Bottom line, the issue was the automatic change of the vmgenid value after restore.
New edit: Tried that with a VM that additionally has an EFI disk and TPM storage, and it worked as well. Only...
If you want to see it before you change it, wouldn't that be something like qm get ID --vmgenid 106, assuming the VM has an ID of 106? Tried a couple of combinations to no effect.
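For reference, a way that should show it (just a sketch, not something I've verified on every setup; 106 is the example VMID from above): the value appears as a vmgenid: line in the VM config, so on the node hosting the VM either of these prints it:
qm config 106 | grep vmgenid
grep vmgenid /etc/pve/qemu-server/106.conf
Changing it back should then be a matter of qm set 106 --vmgenid <old-uuid>, though I'd try that on a test VM first.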
Unfortunately it not only helps but also raises a lot more questions that shouldn't be necessary (one of them being where DcCloneConfig.xml is located, since it is mentioned a lot (in the VM, in the hypervisor itself, which path... etc.)). The article speaks of moving the DC itself rather than a...
Hi
Yesterday, I started the process of transferring all VMs from the old node to the new one.
The VMs include a Domain Controller, a File Server, an SQL Server, two Win 10 & 11 machines for some services, and an RDS server.
All servers are based on the Win 2019 OS. The hardware is different; the servers...
For me there is a winner; I just need to have that assumption confirmed by others as well. I'm not a storage architect. This thing is a job by itself.
ashift 12 and a block size of 8k instead of the default 16k. In almost every situation it has at least slightly better IOPS and, more importantly, lower latency...
Well, with examples this time, I believe that my initial urge for an 8k block size with ashift 12, instead of the default 16k, was correct according to my plain calculations ..... I guess.
So I ran Iometer (I couldn't find the benefit of just benching the underlying raw storage) inside a WinServ2019...
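For anyone wanting to reproduce the setup, the knobs I mean would be applied roughly like this (a sketch only; pool name, disks and storage ID are made-up examples):
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
pvesm set my-zfs-storage --blocksize 8k
zfs get volblocksize tank/vm-100-disk-0
As far as I know the blocksize on the storage only affects newly created zvols; existing disks keep whatever volblocksize they were created with.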
That's what I thought, not what you meant.
yes
... because only then will the KVM process restart too and pick up the changes. Why were you so cautious about a restart? It's not that bad.
I'm doing it once every few weeks for Proxmox updates and such.
The part that cache=none would give just max performance.
So with cache=none the guest OS's cache communicates not with the host page cache but with the ZFS file system directly, and that gives reliability?
That is the part where I thought it was the opposite. I was <<wrongly>> under the impression that that is why you choose...
Why? What is the difference between the 2 shutdown modes?
Yet, I've been using this option as good practice (meaning write back). <<none>> and <<This gives max reliability, saves RAM by not double storing data in RAM and does not lie to the guest OS.>> just don't glue together for me.
Care to...
Mine as well. I was under the impression that write back was the safe option for data integrity and that it was a specific option for Windows VMs. For Linux it was set to off. I observed this in many configuration videos.
Can you set it to off afterwards? I guess yes, but it never hurts to double check...
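What I had in mind for "afterwards" would be something like this (a sketch with an example VMID and disk name; as far as I understand any other per-disk options not repeated in the string get reset, and I believe a full stop/start of the VM is needed for it to take effect):
qm set 106 --scsi0 local-zfs:vm-106-disk-0,cache=none
qm config 106 | grep scsi0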
Any specific reason you had that disabled in the first place? I noticed that if you choose "VirtIO SCSI single" as the SCSI Controller, it auto-enables it by default.
I've also read that single has better performance, without any mention of why, though.
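If it matters, my understanding is that "single" means one virtual controller per disk, so combined with iothread each disk can get its own I/O thread instead of all disks sharing one controller. On the CLI that combination would look something like this (example VMID, storage and disk name again):
qm set 106 --scsihw virtio-scsi-single
qm set 106 --scsi0 local-zfs:vm-106-disk-0,iothread=1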
I know about fio in general. Since it is installed in the OS, how come it benches the raw storage? It seems to bench the OS layer where the data gets accessed. You mentioned it in your link as well: <<you should always benchmark the final layer on which you access your data>>.
So in my case...
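For the raw-storage side of it, I suppose a run directly against a zvol on the host would look something like the following (just a sketch; the zvol is a throwaway test volume because the run is destructive, and the path/names are examples):
zfs create -V 10G -o volblocksize=8k rpool/fiotest
fio --name=zvoltest --filename=/dev/zvol/rpool/fiotest --direct=1 --ioengine=libaio --rw=randwrite --bs=4k --iodepth=16 --numjobs=1 --runtime=60 --time_based --group_reporting
zfs destroy rpool/fiotest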
OK, but with SSDs not advertising their true internal page size most of the time (all of the time, now that I come to think about it), the ashift value would be wrong nevertheless. I've already read some posts saying that with SSDs you should always go with ashift=13 -> 8k block size for the disks, but how would you know...
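At least for what the drives advertise (which, as said, may not be the real internal page size), checking the reported sector sizes is easy (device names are examples):
lsblk -o NAME,MODEL,PHY-SEC,LOG-SEC
smartctl -i /dev/sda | grep -i sector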
You propose 8k, but for your storage (see below), which will be the same in terms of the disks used and RAID type, you say 16k. Why is that?
Also, about your future (since then, you have probably set it up) mirror of 2 drives as storage for VMs using DBs: which DB do you mean, because most of them tend to...
So your rule of thumb is what? All defaults?
Check my post here as well: https://forum.proxmox.com/threads/blocksize-recordsize-thin-provision-options.155553/#post-710296 (specifically the part with my examples).
If you have any insights on my assumptions in the examples, feel free to join in and speak...
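For context, before deciding whether "all defaults" is fine I look at the values currently in effect, roughly like this (pool and dataset names are just examples):
zpool get ashift rpool
zfs get recordsize,compression rpool/data
zfs get volblocksize rpool/data/vm-100-disk-0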
Thanks for showing up. You started the party without me :) (probably due to the time difference).
Depends. It was you, 2 years ago (all my knowledge of this matter comes from the documentation of our past conversation (of course I don't get many aspects of it)), who agreed with me on using 4k, but...