Looks like it's having a problem with the 0a1f disk, which is the OS partition "/".
Not sure if that would help, because I'm attempting to boot with a specific GRUB/kernel.
Maybe there is a way I can see the full error? I'm not entirely sure how to do that, as it's cut off with the KVM...
Thanks for the quick reply! I am not able to boot into the system. Is it possible for me to use a live CD of some kind to edit this and check it out or fix it?
EDIT:
Ended up booting up via an Ubuntu Live CD and here is the output of the fstab...all of the UUIDs are the same as listed here...
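For reference, this is roughly the check I ran from the live CD (assuming the RH7 root shows up as /dev/sda1 on the live system; adjust for your own disk layout):

sudo blkid                     # UUIDs the kernel actually sees on the migrated disks
sudo mount /dev/sda1 /mnt
cat /mnt/etc/fstab             # compare these entries against the blkid output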
I have a multi-disk VM running RH7 that I'm attempting to move over from VMware Fusion. I have migrated all the files over and converted them from OVF --> VMDK --> .raw files. It is two separate disks. I am able to get the VM to boot up but am getting errors.
dracut-initqueue[283]: Warning...
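One thing that might be worth trying is regenerating the initramfs from the live CD so the virtio drivers get included, since the VMware image won't have been built with them; a rough sketch, assuming the RH7 root shows up as /dev/sda1 and /boot lives on the same partition:

mount /dev/sda1 /mnt
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt dracut -f --regenerate-all      # rebuild the initramfs for every installed kernel
umount /mnt/dev /mnt/proc /mnt/sys /mnt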
I'm having issues with this constantly now, not just when backing up. I'm really not able to find any possibilities as to why this is happening. Even simple poweroffs of CTs cause this to happen now. Is there anyone on the Proxmox team who would be able to help us with this?!
I've been having this issue as well...whenever I initiate a backup the system faults and is thrown into this state. There really isn't anything in the logs to go off of. Restarting services doesn't seem to fix the issue either. My backup is being sent to an NFS share, but it NEVER had issues...
I'm seeing something similar to this, where all of my LXCs and the node itself show a "?"
But the containers are mostly still running. This is after trying to restart one LXC.
root@pve:~# ps aux| grep pmxcfs
root 1955 0.1 0.1 710848 43664 ? Ssl Jan29 7:15 /usr/bin/pmxcfs
root...
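In case it's useful, this is the minimal set of checks I'd run when everything goes grey (standard Proxmox services, nothing exotic): pvestatd is what feeds the status shown in the GUI, and pve-cluster is the service running pmxcfs.

root@pve:~# systemctl status pvestatd pve-cluster
root@pve:~# journalctl -u pvestatd -u pve-cluster --since "1 hour ago"
root@pve:~# systemctl restart pvestatd          # often enough to bring the "?" icons back if pvestatd hung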
Thank you for the reply and information. I'll look into it. I might be able to get away with the performance as I'm only doing this for homelab stuff.
Do you know if the Ceph read/write performance will really impact the actual speed of the running LXCs/VMs? Or is it only the replicating to the...
I am using Samsung 850 Pros. It is definitely not an ideal setup, but I think I should be getting better throughput out of them. You can see I'm able to get good speeds with sequential writes.
I am running 3 nodes, all with 10GbE networking. Only one OSD per machine right now, but the OSDs are SSDs. The filesystem is also on SSDs. Benchmarking them puts them at around 450 MB/s.
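I can also run a small-block sync write test if that's more telling, since Ceph journaling stresses that rather than sequential throughput; something along these lines (the file path is just a placeholder for a directory on the OSD SSD):

root@b:~# fio --name=synctest --filename=/mnt/ssd/fio.test --size=1G --direct=1 --sync=1 --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based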
Need help tweaking the write performance...I hope it might just be my settings.
I'm currently getting these performance numbers:
root@b:~# rados -p ceph bench 60 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 60 seconds or 0...
I got it working completely...just need help tweaking the write performance. I hope it might just be my settings.
I'm currently getting these performance numbers:
root@b:~# rados -p ceph bench 60 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size...
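For comparison I can also run the read side against the same objects and clean up afterwards (standard rados bench sub-commands):

root@b:~# rados -p ceph bench 60 seq        # sequential reads of the objects left by --no-cleanup
root@b:~# rados -p ceph bench 60 rand       # random reads
root@b:~# rados -p ceph cleanup             # remove the benchmark objects when done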
So, on my A & C machines I was able to createosd just fine:
root@a:~# fdisk -l
Disk /dev/sda: 233.8 GiB, 251000193024 bytes, 490234752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk...
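For reference, the sequence I'm using on each node is roughly this (/dev/sdb is just a placeholder for whichever disk is meant to become the OSD; the wipe steps are only needed if it has leftover partitions, and they are destructive):

root@a:~# wipefs -a /dev/sdb            # clear old filesystem signatures
root@a:~# sgdisk --zap-all /dev/sdb     # clear old partition tables
root@a:~# pveceph createosd /dev/sdb    # "pveceph osd create" on newer Proxmox versions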