Just wanted to follow up and confirm that the instructions mentioned above by Noel worked for me on the Tesla P4.
nvidia-smi -e 0
I have the full RAM available now (8192 MB).
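For anyone checking the result on their own card, a quick sketch of how to verify ECC is off and the full memory is exposed (standard nvidia-smi queries, not specific to this thread):
# confirm ECC mode is now disabled
nvidia-smi -q -d ECC
# confirm the full 8192 MiB is reported
nvidia-smi --query-gpu=memory.total --format=csv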
Thanks for the detailed reply. Yes, it's running consumer SSD drives; I will need to upgrade the SSDs I think. Thanks again for the detailed explanation of the issue.
Hi, sorry for digging up this old thread, but I just wanted to ask: did you ever get this figured out? I have had a very similar issue for the last 6 months or so on my standalone system. I tried a bunch of stuff including ZFS performance tweaks, a new RAID card (running in IT mode), the Btrfs file system...
Yeah, it's a weird one alright. Maybe check your user account or accounts in Proxmox, for example does an admin@pam one exist? Maybe also check whether the root account is enabled or disabled, though I'm not sure that is even possible.
@roycordero I don't know if it's relevant, but maybe try logging in as admin@pam (see pic below) just to see if that works. This could be normal, not sure to be honest, but it's worth a try anyway :)
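A quick way to see which accounts actually exist is from the host shell; a sketch using the standard pveum tool:
# list all users configured in Proxmox VE and their realms (pam, pve, etc.)
pveum user list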
@LnxBil thanks for the reply. Unfortunately I am going in the other direction: I am migrating my drives from an external NetApp disk shelf (with an HBA connection attached to the TrueNAS VM) into the TrueNAS VM with the disks directly passed through. I am aware of the downsides of doing this, SMART issues etc., if...
Having similar issues after upgrading to the latest kernel. Getting the following error:
TASK ERROR: Cannot bind 0000:43:00.0 to vfio
This PCI device is an LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (flashed to IT mode), passed through to a VM (TrueNAS Core).
Had to revert to Linux...
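For anyone hitting the same bind error, a minimal sketch of how I'd check which driver currently owns the HBA before and after the failed bind (standard lspci/dmesg usage; the PCI address is the one from the error above):
# show the kernel driver currently bound to the device (mpt3sas vs vfio-pci)
lspci -nnk -s 0000:43:00.0
# look for vfio-related messages from the failed bind
dmesg | grep -i vfio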
Hi
Just found this post as I am having the same issue as the screenshot above. I recently ran an update via the web GUI and carried out a host reboot; I also tried a different browser, time checks, etc. My VMs are all running as normal but the web GUI is as above. Just wondering, did anyone manage to...
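In case it helps anyone else landing here, one thing worth trying (just an assumption on my part, not a confirmed fix for this thread) is restarting the services that serve the web GUI on the host:
# restart the Proxmox web proxy and daemon behind the GUI
systemctl restart pveproxy pvedaemon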
Hi
I have noticed the same issue running a Debian 10 LXC; it was taking a long time to start up. To solve the issue I disabled IPv6 fully in the LXC. To disable it, do the following in the LXC console:
nano /etc/sysctl.conf
Add the following, then reboot the container (or apply it without a reboot, see the note after the snippet):
net.ipv6.conf.all.disable_ipv6=1...
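The line above got cut off; as a sketch, the full set of sysctl entries typically used to disable IPv6 everywhere looks like the following (assumed, not copied from the original post), and sysctl -p applies it without a reboot:
# typical full set (assumption - the original post was truncated)
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
# apply immediately without rebooting the container
sysctl -p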
I had an issue mounting shares also; this is how I fixed it: https://forum.proxmox.com/threads/pve-5-1-cifs-share-issue-mount-error-112-host-is-down.37788/
Hope it helps :)
Just wanted to post a solution to an issue I had. Hope it helps someone.
After recently updating to 5.1 I could not connect a network share to my server; it kept saying "mount error(112): Host is down", but I knew it was not. Anyway, the problem was due to the share I was connecting to being an...
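The post was cut off above, but for anyone hitting the same error, the usual workaround is to pin the SMB protocol version on the mount; a sketch (the server path, mount point, username, and the exact vers value are placeholders, not from the original post):
# force an older SMB dialect if the server does not speak SMB3 (adjust vers as needed)
mount -t cifs //server/share /mnt/share -o username=user,vers=2.0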
I also have the exact same problem as this, also hitting the ZFS problem after the latest update. I ran lscpu; here is my hardware output:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list...