Check this recent thread.
It discusses a few options.
I think this applies to your situation as well:
https://forum.proxmox.com/threads/a-way-to-share-internal-disks-for-shared-storage.117464/
The author of that thread achieves something similar by partitioning the single disk into 5 partitions.
Those are then used in a raidz1 pool. I had to read it twice to get the point, and I like this unconventional approach.
It will have its downsides as well (I/O will suffer) and still I...
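Just to illustrate the idea (device, partition sizes and pool name are placeholders, not from that thread; on a single physical disk this gets you the ZFS features but no real redundancy):

sgdisk -n 1:0:+30G -n 2:0:+30G -n 3:0:+30G -n 4:0:+30G -n 5:0:+30G /dev/sdX
zpool create tank raidz1 /dev/sdX1 /dev/sdX2 /dev/sdX3 /dev/sdX4 /dev/sdX5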
I was curious enough to check if I could reproduce it, but I have neither the needed Windows 2000 ISO nor a serial anymore.
So the only thing I can give you is that idea.
Try to install this VM on PVE with a smaller disk and attach the large disk later to access your data. Maybe VirtualBox does "some magic"...
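If you try that, a rough sketch of importing the existing VirtualBox disk and attaching it as a second disk would be (VM ID, path and storage name are made up here):

qm importdisk 100 /path/to/old-disk.vmdk local-lvm
qm set 100 --ide1 local-lvm:vm-100-disk-1

The exact volume name to attach is printed by importdisk.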
If you pass through the HBA, the entire responsibility lies with the guest OS. PVE does not even "see" those disks anymore.
So ZFS caching (assuming you use the attached devices in a zpool) is up to the guest. But it happens there. It is not disabled - so the guest might/will need more memory.
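If guest memory becomes tight, the usual knob on a Linux guest with OpenZFS is to cap the ARC, e.g. limiting it to 4 GiB (the value is just an example):

options zfs zfs_arc_max=4294967296

in /etc/modprobe.d/zfs.conf, followed by a reboot.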
Hm, that sounds interesting. Hiren's BootCD basically provides a boot loader, IIRC.
Something I have picked up on which I had not noticed before: the size of the disk is "152636M" - is it the same size in VirtualBox?
Asking because I do recall that at some point some OSes had issues with...
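You can read the exact size of the original image with qemu-img (the path is a placeholder):

qemu-img info /path/to/original-disk.vdi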
As said: shared storage, except for Ceph or similar concepts, just introduces another problem in the form of a SPOF (single point of failure). And this is often overlooked, especially at home or in semi-professional environments. Hence I am mentioning it.
If you are only pushing your problem from one level to another, including the...
Most likely there is already a file with that name, hence Windows adds a (1) to it - this is pretty common. I think macOS does this as well.
There are a few things one shouldn't do. Blanks and special chars in file names, for instance ;)
Not everything that can be done should be done.
My 2 cents
AFAIK you need to have multiple disks to use Ceph, and the usage with 2 nodes is questionable. You need a tie breaker (the cluster needs to have quorum to run).
So in the end the question is: why shared storage?
HA? You are potentially moving your SPOF onto the storage. Or onto the network.
Because it is...
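For the tie breaker, Proxmox supports an external QDevice on a small third machine running corosync-qnetd (the IP is a placeholder):

pvecm qdevice setup 192.168.1.50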
In GRUB (the boot loader) you can select different options to boot. One should be an older kernel.
Aside from that, you can try to upgrade PVE as suggested, but this can also make things worse/more complicated. If it has worked until the reboot, I personally would first try to get back into that state...
Try to use an older kernel. I had that situation in the past, nearly wetting my pants.
A reset of the server and selecting the older kernel from the boot menu got me back on track.
After another kernel revision got installed this typically went away.
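If the older kernel works, you can also pin it until things are sorted out (assuming a recent PVE with proxmox-boot-tool; the version string is just an example):

proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 5.15.107-2-pve

and proxmox-boot-tool kernel unpin to go back to the default later.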
Good luck
First of all: go to the CLI and check if the pool is there via "zpool list".
If it is there, then the only thing to do is to register the pool with Proxmox again.
If it is not there, you might need to import it. This might be of help:
https://docs.oracle.com/cd/E19253-01/819-5461/gazuf/index.html...
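Roughly (pool name "tank" and the storage ID are just examples): running zpool import with no arguments lists importable pools, then

zpool import tank
pvesm add zfspool tank-storage --pool tank

The second command re-registers the pool as a storage in PVE; you can also do that via the GUI instead.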
Same vendor and series?
It's not uncommon that there is a very small difference which leads to this kind of issue. I have seen this especially with SSDs in the past.
I have also seen this kind of thing when the controller firmware was going nuts. Very unlikely with a ZFS software approach, but...
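To rule out a size mismatch you can compare the exact sizes in bytes:

lsblk -b -o NAME,SIZE,MODEL,SERIAL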
I think your cache-file for auto-starting those pools might need an update.
Have you checked the content?
Refer to https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#the-etc-zfs-zpool-cache-file
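If the cachefile turns out to be stale or missing, regenerating it for the pool usually fixes the auto-import (the pool name is a placeholder):

zpool set cachefile=/etc/zfs/zpool.cache tank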
Also check if the pools still exist in /etc/pve/storage.cfg
I'd expect it is not...
I'd try to disable KVM and also reduce the number of vCPUs to 1.
It is a long way back, but I think multi-core CPUs didn't exist in those days. So maybe the system stumbles at that point.
Have you tried installing Win 2000 from scratch? Does that work?
Try a minimal viable setup.
Aside from that I'd...
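In terms of commands, the first two suggestions would look roughly like this (VM ID 200 is a placeholder; without KVM the VM is fully emulated and will be slow):

qm set 200 --kvm 0 --cores 1 --sockets 1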
I am having a hard time getting to the bottom of it.
Either I do not understand your setup correctly or the question does not make sense to me.
Option 1: ZFS send/receive would be an obvious choice for me in such a case. But then why are you asking if ZFS would be the choice - it is a...
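For reference, a minimal ZFS send/receive sketch (dataset, snapshot and target names are made up):

zfs snapshot tank/data@backup1
zfs send tank/data@backup1 | ssh backup-host zfs receive -F backuppool/data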
It feels very good to be right ;) Glad I could help
Good question.
This might answer your question - I think it is the second option:
https://forum.proxmox.com/threads/delay-start-of-a-vm.111212/post-479307
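If it is indeed the startup order/delay, that can also be set from the CLI (VM ID and values are examples):

qm set 101 --startup order=2,up=60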
/edit: However, if that were the case, it should not work for you - I am confused
HTH
Can you share the VM configuration you have used?
Windows 2000 is very old. I question its use as of today due to security concerns, but I guess you have your reasons. ;)
VirtIO for Windows 2000 would be new to me. From my perspective: if it runs on VirtualBox, it should run on PVE as well.
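To share the config, you can dump it on the PVE host (VM ID 100 is a placeholder):

qm config 100

or read /etc/pve/qemu-server/100.conf directly.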