I had an Intel i350 set up with SR-IOV and it was working fine. I could use VFs with Linux and Windows 11. I swapped the i350 for an i710. The swap was smooth: the OS saw the new NIC and it seemed to work. I changed the number of VFs from 7 on the i350 to 32 on the i710. I have assigned VFs...
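For reference, a rough sketch of how the VF count gets set via sysfs (the PF name enp5s0f0 here is just an example, and the count has to be zeroed before it can be changed):

cat /sys/class/net/enp5s0f0/device/sriov_totalvfs     # hardware maximum
echo 0 > /sys/class/net/enp5s0f0/device/sriov_numvfs  # reset before changing
echo 32 > /sys/class/net/enp5s0f0/device/sriov_numvfs # set the new VF count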
On my test system I ran pve7to8 and got this warning on my only LXC container:
WARN: CT 102 - volume 'vm_images:102/vm-102-disk-1.raw' (in config) - storage does not have content type 'rootdir' configured.
WARN: Proxmox VE enforces stricter content type checks since 7.0. The guests above might not...
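If anyone hits the same warning, one way to clear it (assuming vm_images really should hold container root disks as well as VM images) is to add the missing content type:

pvesm set vm_images --content images,rootdir   # add 'rootdir' to the storage's content types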
While the docs are not 100% clear, your pointers got me over the hump. The mirror is back together and the system boots just fine. Thanks!
root@pveprod:~# zpool status rpool
pool: rpool
state: ONLINE
scan: resilvered 7.34G in 01:23:32 with 0 errors on Wed Nov 1 20:09:15 2023
config:
NAME...
I had a bad disk in my rpool. I replaced the disk, but the new disk turned out to be really slow, so I removed it with zpool detach. Now I am trying to attach a new drive to the rpool but I get an error. See below:
root@pveprod:~# zpool status rpool
pool: rpool
state: ONLINE
scan...
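For completeness, this is roughly the attach command I'm running (device paths are placeholders; on a standard PVE install the pool members are the disks' third partitions):

zpool attach rpool /dev/disk/by-id/<good_rpool_disk>-part3 /dev/disk/by-id/<new_rpool_disk>-part3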
I am getting flooded with: IPv4: martian destination 0.0.0.0 from 192.168.1.1, dev ens2f1v0
I am using SR-IOV on an Intel quad-port i350 NIC. ens2f1v0 is a VF passed through to an LXC container. This NIC is 5:10.1
root@pveprod:/etc/pve/lxc# ip -br a
lo UNKNOWN 127.0.0.1/8...
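In the meantime I can at least quiet the log flood with sysctl (interface name from above; this only hides the symptom, it doesn't explain the martians):

sysctl -w net.ipv4.conf.ens2f1v0.log_martians=0
sysctl -w net.ipv4.conf.all.log_martians=0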
Thomas,
I followed the directions and, like you, had an issue getting the container to be seen. I have a zpool set up for container templates since I use a 64G USB key for the Proxmox OS. I made sure all content types were selected under Datacenter->Storage->templates. Templates points to zpool...
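Roughly the CLI equivalent of what I selected in the GUI (storage ID 'templates' as above):

pvesm set templates --content vztmpl,rootdir   # allow templates and container root disks
pvesm list templates                           # verify the template shows up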
Thanks for the clarification. The BIOS is at its latest level. The system is an LGA2011; the manufacturer will not be doing any more updates AFAIK. Will make the change and try again.
I had a disk in a ZFS mirrored rpool go bad. I put the new disk in (USB key) and did:
sgdisk <good_rpool_disk> -R <new_rpool_disk>
sgdisk -G <new_rpool_disk>
zpool replace rpool <bad_disk> <new_rpool_disk-3>
proxmox-boot-tool format <new_rpool_disk-2>
proxmox-boot-tool init <new_rpool_disk-2>...
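For anyone searching later, the same steps with hypothetical device names (/dev/sda the healthy mirror member, /dev/sdb the replacement; on a standard PVE layout partition 2 is the ESP and partition 3 the ZFS partition):

sgdisk /dev/sda -R /dev/sdb    # copy the partition table to the new disk
sgdisk -G /dev/sdb             # randomize GUIDs so they don't collide
zpool replace rpool <bad_disk> /dev/sdb3
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2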
This turned out to be a disk HW issue. One of my cache disks was bad, causing zpool issues. Replaced the cache disk and things are running like a champ.
Found this by running zpool iostat -v zones 3
I have a couple of Windows 10 VMs, and all of a sudden disk Active Time goes to 100% with very little disk transfer. I have tried different virtio settings and nothing changes. I did a forum search and found someone else had the same issue but no resolution. Any ideas? Let me know if you...
All well and good, but the man page (and this link) says nothing about the parameters needed for serial console output. Can you give an example of the lines needed to get it working? I've tried different iterations but can't get it to work.
What is the proper way to get host terminal output with EFI boot? Do you add lines to /etc/kernel/cmdline similar to GRUB? E.g.
"console=ttyS1,115200n8 console=tty0"
"serial --speed=115200 --unit=1 --word=8 --parity=no --stop=1"
I can get this working using the GRUB boot loader but not all the way...
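For the record, this is what I tried on the systemd-boot side (the root=ZFS=... part is whatever is already in the file; only the console parameters are appended):

# /etc/kernel/cmdline is a single line; append the console parameters:
root=ZFS=rpool/ROOT/pve-1 boot=zfs console=ttyS1,115200n8 console=tty0
# then write the change out to the ESPs:
proxmox-boot-tool refresh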
Wanted to see if anyone has run across this issue. I have a Supermicro X10SLL-F LGA1150 motherboard with 3 PCI cards: an NVIDIA Quadro P400, an Intel quad-port i350, and an LSI SAS2008.
The issue I am having is that when I add pci=assign-busses to GRUB_CMDLINE_LINUX_DEFAULT to get VFs, I lose...
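For context, the exact change is just the one parameter in /etc/default/grub, followed by a regenerate:

# /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=assign-busses"
# then:
update-grub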
The zpool is now in a good state with the two new drives added. All done without data loss, which amazes me. Thank you, ZFS :)
pool: zones
state: ONLINE
scan: resilvered 1.32G in 00:03:14 with 0 errors on Mon Apr 26 13:25:01 2021
remove: Removal of vdev 2 copied 982G in 4h51m, completed on Tue...