Debian 10 is stricter about how much entropy the kernel needs before its random pool counts as initialised, which is why everything takes longer to start when running in a VM.
Here's a page from Debian about this "issue":
https://wiki.debian.org/BoottimeEntropyStarvation
To fix this issue, install haveged as the article states, or...
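If you want to confirm entropy starvation is really what's slowing the boot, you can check the kernel's entropy pool before and after installing haveged (a quick sketch; the install itself is just `apt install haveged`):

```shell
# How much entropy the kernel currently has available.
# On a starved VM (Debian 10's 4.19 kernel) this sits near zero at boot
# until something feeds the pool; with haveged running it fills quickly.
cat /proc/sys/kernel/random/entropy_avail
```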
Another Dutchie here. I've had good experiences with Quadro and GTX cards by dumping the BIOS and mounting it to the VM.
Here's a tutorial on the dumping part:
https://forums.unraid.net/topic/41951-gpu-passthrough-with-only-one-card/page/2/?tab=comments#comment-478049
And a snippet from the pve man...
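To give an idea of the mounting part: assuming the dumped ROM was saved as /usr/share/kvm/vbios.rom and the card sits at 01:00.0 (both placeholders, check your own lspci output), the line in the VM config would look something like:

```
hostpci0: 01:00.0,pcie=1,x-vga=1,romfile=vbios.rom
```

PVE resolves romfile relative to /usr/share/kvm, and pcie=1 needs the q35 machine type.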
Hi,
I'm sorry I didn't answer your question earlier about dd'ing from the host instead of the VM.
xvda1 and xvda2 are partitions on the xvda disk. If you create a disk of the same size or larger on the target VM, you can simply dd the whole of xvda and all partitions will be copied over.
Worst case scenario you could always dd the disks over from the virtual machine itself to the host:
Make sure to stop all services and freeze the filesystem to prevent inconsistency (or just boot a recovery ISO):
fsfreeze -f /
dd the complete filesystem over to an already created vm on the pve...
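The dd step itself can be sketched locally with plain files standing in for the disks (the ssh pipe in the comment is the shape you'd use against a real /dev/xvda; host and device names are placeholders):

```shell
# Create a 4 MiB "source disk" filled with random data
dd if=/dev/urandom of=/tmp/src.img bs=1M count=4 status=none

# Against real disks this would be something like:
#   dd if=/dev/xvda bs=4M | ssh root@pve-host 'dd of=/dev/sdX bs=4M'
dd if=/tmp/src.img of=/tmp/dst.img bs=1M status=none

# Verify the clone is byte-identical
cmp /tmp/src.img /tmp/dst.img && echo "copy OK"
```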
Installing an LXC container is pretty easy; there are loads of tutorials out there on how to set it up.
https://www.youtube.com/watch?v=cyjXxsQ8Igw is a nice video on installing an LXC container on PVE.
After you're done it's basically the same process as installing Plex on a VM.
This guide will...
I run Plex in an LXC container and my Intel Atom C3758 is able to transcode 3-4 1080p streams.
My CPU usage jumps pretty high because I have set the transcoder throttle buffer to 10 minutes, but it's always pretty responsive.
I am running BIOS because UEFI on PVE 5 gave me some issues with booting.
Here's a snippet of my bash history which results in a working config:
vim /etc/default/zfs
vim /etc/default/grub
update-grub
vim /etc/modules
dmesg | grep ecap
find /sys/kernel/iommu_groups/ -type l
ls...
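For reference, the edits in that history boil down to something like the following (a sketch for Intel IOMMU on PVE 5; for AMD you'd use amd_iommu=on instead):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```

After update-grub and a reboot, the dmesg grep and the iommu_groups find above should actually show output.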
I have the exact same setup as you, and mine works fine with PCIe passthrough.
There's something you should check: 1st gen 2670 CPUs have a known bug which breaks VT-d:
SR0H8 is C1 stepping, VT-d is broken in this stepping. SR0KX is the C2 stepping where VT-d is fixed.
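If the box is reachable you can also read the stepping from software. If I remember correctly, Sandy Bridge-EP C1 reports stepping 6 and C2 reports stepping 7, but verify against the S-spec (SR0H8/SR0KX) printed on the heat spreader:

```shell
# Print the CPU model and stepping as the kernel reports them
grep -m1 'model name' /proc/cpuinfo
grep -m1 '^stepping' /proc/cpuinfo
```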
My server isn't on at...
I started a scrub on my server without making any changes to the setup; for safety I stopped all services (like NFS and SMB) and VMs/containers.
Scrubbing rpool (which consists of 2 SSD mirrors) turned up no errors, and trying again with all services etc. started, it still didn't bug out.
At this...
I had the same issue yesterday.
Server was running fine until I upgraded my pools to zfs 0.8.1.
The error messages started just after midnight on the first Sunday of the month.
I'll check if I can see what the cron does and if I can reproduce the issue this evening.
Thanks a lot for your hard work devs! Love the new features like TRIM support and live migration with local disks.
I need to upgrade some 10+ node clusters and doing them all in one go won't be an option.
Would running 5.4 and 6.0 in the same cluster be an issue for a few weeks after...