Proxmox 4.4 virtio_scsi regression.

OK. Although this is a very old thread, I think I need to point out that quite a few people are chasing red herrings here by talking about different filesystems and other unrelated things.

This issue was completely independent of the FS used inside the guest VM. On the very first page (post #8, I think):
"...
Yep - I discovered it on btrfs and then was able to replicate it with dd directly to disk device :/
..."
I personally don't care about this issue anymore, but IF an issue doesn't occur on one version of Proxmox and then the next version corrupts data on disks, that's a problem.
IF the problem CAN'T be solved within the realm of Proxmox, then disallow this use case for users.
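For anyone wondering what "dd directly to disk device" looks like in practice, here is a minimal sketch of such a check, assuming a throwaway passthrough disk at /dev/sdX inside the guest (the device name, paths and sizes are placeholders, and this destroys whatever is on the disk):

Code:
# inside the guest, against the raw passthrough disk - DESTROYS its contents
# 1. generate a known 1 GiB pattern (size is arbitrary)
dd if=/dev/urandom of=/root/pattern.bin bs=1M count=1024
# 2. write it straight to the disk, bypassing the page cache
dd if=/root/pattern.bin of=/dev/sdX bs=1M oflag=direct
# 3. read the same region back, also bypassing the cache
dd if=/dev/sdX of=/root/readback.bin bs=1M count=1024 iflag=direct
# 4. any difference means data was corrupted somewhere along the passthrough path
cmp /root/pattern.bin /root/readback.bin && echo OK || echo CORRUPTED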

4. displeased with inaction, the hypothesis (2) was added to the thread;
5. displeased with continuing inaction, another solution was chosen.
A company losing most of its data over the Christmas period can cause "displeasement".
8. I can choose a filesystem that ignores my O_DIRECT (as you did), but;
9. I can also choose not to use O_DIRECT (see the sketch below).
If you pass through a RAW disk to a guest and dd doesn't work on it - what's the point of even having RAW passthrough?!
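To make points 8 and 9 concrete: whether dd uses O_DIRECT is just a flag on the writer's side. A minimal sketch with placeholder paths:

Code:
# with O_DIRECT: writes bypass the guest page cache (the mode that exposed the issue)
dd if=/data/image.bin of=/dev/sdX bs=1M oflag=direct
# without O_DIRECT: buffered writes, flushed to the device once at the end
dd if=/data/image.bin of=/dev/sdX bs=1M conv=fsync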


So, if after 7 years the issue is still reproducible (as mentioned by @t.lamprecht - I didn't follow the link, I just trust what he stated) - maybe remove this functionality from the GUI AND/OR tell people in the documentation that it will corrupt their data, but DO NOT dance around the issue and close this thread.
 
OK. Although this is a very old thread, I think I need to point out that quite a few people are chasing red herrings here by talking about different filesystems and other unrelated things.

I apologize that I kind of "hijacked" it, but I believe that after such a long time it's a non-issue. The reason I restarted it here is that this was the oldest instance of the statement (quoted in multiple places) from Wolfgang on MDRAID. So this is not about your case, I believe.

A company losing most of its data over the Christmas period can cause "displeasement".

The problem - with MDRAID - was never solved, whatever that problem was. Do note that even Thomas quoted "a user". And by the way, it might have been a problem with QEMU at the time.

If you pass through a RAW disk to a guest and dd doesn't work on it - what's the point of even having RAW passthrough?!

I don't think explaining it further here will convince you, but the point of my reply was that you did not discover anything with that test case. It is a test case as if I "discovered" that ZFS uses more RAM than BTRFS because of the ARC. What kind of test case is that? What (non)problem does it document?

So, if after 7 years the issue is still reproducible (as mentioned by @t.lamprecht - I didn't follow the link, I just trust what he stated) - maybe remove this functionality from the GUI AND/OR tell people in the documentation that it will corrupt their data, but DO NOT dance around the issue and close this thread.

They "closed" that path for ordinary users by not providing MDRAID in installer. There's nothing to be "still reproduced", this is by the specs. We are discussing non-issue when we are discussing "test case". The corruption of that "a user" was related to we have no idea what.

EDIT: Moved away: https://forum.proxmox.com/threads/mdraid.156036/
 
I've never stated that I used MDRAID within the guest or the host. So maybe this is a completely unrelated discussion, because for me a RAW passthrough disk was having issues (later even reproducible with an ordinary "dd if= of=" within the guest). I don't know all the annals of the discussion surrounding this - but if that problem still persists, the ability to do raw passthrough should be disabled.
 
I've never stated that I used MDRAID within the guest or the host. So maybe this is a completely unrelated discussion, because for me a RAW passthrough disk was having issues (later even reproducible with an ordinary "dd if= of=" within the guest). I don't know all the annals of the discussion surrounding this - but if that problem still persists, the ability to do raw passthrough should be disabled.

I know, and I apologise if I made you believe this was about that. Scrolling back now, the split-off discussion starts at post #35:
https://forum.proxmox.com/threads/proxmox-4-4-virtio_scsi-regression.31471/page-2#post-159209

So I am not the primary hijacker. :)
 
No need to - I've got pretty thick skin; over 30 years of the internet does that to you ;)
But maybe let's take this thread behind the shed and say "good dog" while we reach for the gun?

No worries, I also don't typically apologise for hurt feelings. ;) But I did not mean to lead anyone to believe that the 10-year-old issue per se had resurfaced. It's better off in its own thread anyway. Maybe this is the last of it, in which case I know I was spot on and everyone can move on - as Hannes already mentioned, "nothing is broken", after all.
 
@fabian, do you want an output under 4.3? (Also, the sas -> sas_disk setup has had Proxmox removed from it for safety's sake, so it will take some time to get Proxmox back onto it.)

Hey again @tomtom13 - right, let's not hijack the wrong threads, so back to here, where this belongs and is actually ON-TOPIC.

I went to look at: https://qemu-devel.nongnu.narkive.com/7NrZkgBQ/data-corruption-in-qemu-2-7-1#post12

I will try to check the old test case with current QEMU and let you know later on.
 
Good luck!

So, long story short, you could still hit this, just not by default. There is a scsiblock switch on qm:

Code:
scsiblock=<boolean> (default = 0)
               whether to use scsi-block for full passthrough of host block device

                   Warning
                   can lead to I/O errors in combination with low memory or high memory fragmentation
                   on host

Here is the patch adding it.
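To illustrate how one would opt back into the risky mode (not from the thread, just applying the documented option): a sketch with a placeholder VM ID and disk path:

Code:
# hypothetical VM 100 and hypothetical disk; scsiblock=1 selects full scsi-block passthrough
qm set 100 --scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK,scsiblock=1
# the default (scsiblock=0) keeps scsi-hd emulation on top of the raw device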

And that's about it.
 
Yeah, so the bottom line is that good old Fabian created a patch that changed the default option to "not really a passthrough disk", but the underlying QEMU problem still persists.

See? I was right, there are some skeletons (covered by thick dust) in that closet... and although I'm an idiot, I'm not a moron just yet :p
 
Yeah, so the bottom line is that good old Fabian created a patch that changed the default option to "not really a passthrough disk", but the underlying QEMU problem still persists.

The funny thing is, scsi-hd actually gives better performance:
https://www.qemu.org/2021/01/19/virtio-blk-scsi-configuration/
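For context, the two modes from that article map onto different QEMU device types; a rough sketch of the relevant command-line fragments, with a placeholder host device:

Code:
# scsi-hd: QEMU emulates a plain disk backed by the host block device (the Proxmox default)
-drive file=/dev/sdX,if=none,id=drive0,format=raw,cache=none
-device scsi-hd,drive=drive0

# scsi-block: SCSI commands are passed through to the host device (what scsiblock=1 selects)
-device scsi-block,drive=drive0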

See? I was right, there are some skeletons (covered by thick dust) in that closet... and although I'm an idiot, I'm not a moron just yet :p

You kind of have to go out of your way to hit it, and it really only happens under specific circumstances (which, on a regular system, you would only hit after a prolonged period - must have been a great surprise). I am kind of surprised it stayed this way, to be honest.
 
You kind of have to go out of your way to hit it, and it really only happens under specific circumstances (which, on a regular system, you would only hit after a prolonged period - must have been a great surprise). I am kind of surprised it stayed this way, to be honest.
Hence it's more dangerous - you do your preliminary tests for production, everything is cool, you dump your data in - a few months later you hit some strange behaviour and realise all your data and backups are corrupt, because the corruption was slowly creeping in ;)
 
On the other hand, I would be quite confident now that it's not going to happen with the other modes, so if you are fine with that, you do get the raw passthrough and good performance (without "actual" passthrough ;)). So I would not be worried about running the other combinations.

Well, this was fun - I learned something myself about a topic I did not know about from hijacking it. ;)
 
