Search results

  1. Proxmox VE 6 on HPE DL380 Gen10 with 10 GB/s-NIC

    I will be testing Proxmox 6 on my Gen10s soon. I will report back once I do.
  2. Deleting Large LVM disks with 4.15.18-20-pve fails

    - Yep, lots of iSCSI-related errors.
      Sep 3 11:54:11 testprox1 kernel: [ 267.285924] sd 3:0:0:0: Power-on or device reset occurred
      Sep 3 11:54:11 testprox1 kernel: [ 267.285927] connection3:0: detected conn error (1008)
      Sep 3 11:54:12 testprox1 kernel: [ 267.294631] connection4:0...
  3. Proxmox 6.0 + Windows 10, VM often goes black + CPU 80%

    We are a software company and have been on Proxmox for over 7 years. Nothing to dream about here; it's reality for us. It has already proven itself time and time again.
  4. Proxmox 6.0 + Windows 10, VM often goes black + CPU 80%

    We run plenty of OSes. Either way, good luck with your endeavours!
  5. Proxmox 6.0 + Windows 10, VM often goes black + CPU 80%

    Yep, and we have thousands of VMs and hundreds of Proxmox hosts, and yet we don't have any issues. Good luck!
  6. Proxmox 6.0 + Windows 10, VM often goes black + CPU 80%

    Plenty of people have very successful environments; it's unfortunate you can't figure out your issues. If the forums were littered with posts like yours, it would be one thing, but they're not. I can tell from your response you don't know enough to run this type of environment. You're better off with...
  7. Deleting Large LVM disks with 4.15.18-20-pve fails

    Been flipping back and forth between 4.15.18-18 and 4.15.18-20 and can confirm everything is solid on 4.15.18-18.
  8. Deleting Large LVM disks with 4.15.18-20-pve fails

    Hitting some odd issues with deleting larger LVM disks on 4.15.18-20-pve 5.4-13. Our setup is HP DL380 Gen10 front ends with Nimble iSCSI storage and LVM on top. Deleting VM disks over 2TB fails on 4.15.18-20-pve and causes the host to basically lose access to the storage. Only way to get...
  9. ZFS Central Storage

    Yea, I have looked at those in the past, but when you get down to the nitty-gritty they came in with a price tag pretty much the same as our Nimble iSCSI arrays. Only so many of our customers can afford such a setup. We are trying to come up with a bit more cost-effective solution.
  10. ZFS Central Storage

    It would be SAS connectivity; it should be a simple zfs import on failover. I do get what you're saying, to a degree.
  11. ZFS Central Storage

    Ceph requires 4-5 nodes to be a real HA setup. We have a full Ceph cluster setup in-house; I wouldn't even consider it without at least 4 nodes. Money is a factor as well. zfs send/receive requires front ends with the same number of disks. With central storage we only have to buy one set of...
  12. ZFS Central Storage

    I stumbled on this and thought it was a pretty neat idea. https://github.com/ewwhite/zfs-ha/wiki https://github.com/skiselkov/stmf-ha It would be pretty slick to have proxmox capable of something like this! Seems like it wouldn't be all that difficult either.
  13. Proxmox 6.0 + Windows 10, VM often goes black + CPU 80%

    On a side note, I posted this fix in the 2nd post lol.
  14. Proxmox 6.0 + Windows 10, VM often goes black + CPU 80%

    We have been using Proxmox for almost 10 years with minimal issues. The vast majority of our infrastructure is based on it. However, we run and manage our own hardware, which is probably the largest difference. We have been running IBM and HP for years and years; we have tens and hundreds of...
  15. Proxmox 6.0 + Windows 10, VM often goes black + CPU 80%

    We run a small VDI environment and as of late have been having a lot of black-screen issues and high CPU usage. This is more a Windows issue, IMO. For us, the CPU usage is being caused by the new 1903 update to RDP...
  16. VM with 1TB memory +

    I can also confirm that this is working for me.
      cpu: host
      args: -cpu host,host-phys-bits=true
  17. VM with 1TB memory +

    I can confirm that adding "host-phys-bits=true" is working as expected. Just booted a VM with 1.4TB of RAM with no issues. I hope this can get added to both 5.x and 6. https://bugzilla.proxmox.com/show_bug.cgi?id=2318
  18. VM with 1TB memory +

    Seems like we should be able to get around this with phys-bits or host-phys-bits, but I don't see any way to add this as an option.
  19. VM with 1TB memory +

    Pretty bummed to say that we just got a set of DL560s with 1.5TB of RAM and we can't fire up a VM with more than 1TB. Anything more than that and the VM hits an internal error. I've tried with NUMA on and off, huge pages enabled and disabled; nothing seems to help. root@testprox1:/var/log#...
  20. Suggestion: Tips and Tricks sub-forum

    Has nothing to do with "what's" being discussed. It has to do with organization and being able to quickly see threads like this. It can prevent people from asking the same questions over and over. I am on a lot of vendor forums, and a lot of them have something along these lines.
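The >1TB RAM workaround discussed in results 16-19 can be sketched as a guest config fragment. This is a minimal sketch, assuming the standard Proxmox VE config layout; the VM ID (100) is hypothetical, and the memory value is just an illustration:

```
# /etc/pve/qemu-server/100.conf  (VM ID 100 is hypothetical)
# Pass the host's physical address bits through to the guest so QEMU
# can address more than 1TB of guest RAM.
cpu: host
args: -cpu host,host-phys-bits=true
```

Assuming the standard qm CLI, the same args line can reportedly be set with something like `qm set 100 --args '-cpu host,host-phys-bits=true'` (root only); per the linked Bugzilla entry, later Proxmox releases may handle this without a manual override.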
