Recent content by lweidig

  1. SPICE three monitors - Windows 10 VM

    I have a VM set up using SPICE with three monitors. As long as I leave the memory setting at 32 MB it boots fine and runs, though video performance is at times a bit slow. Any time I try to increase this, all I get at the console is: "Guest has not initialized the display (yet)". The VM uses...
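For reference, the display type and memory can be set together from the CLI. This is a sketch only: the VMID (101) and the 64 MB value are illustrative, and the `memory` sub-option of `--vga` is an assumption about the installed qemu-server version, so check `man qm` on the node first.

```shell
# Illustrative VMID; qxl3 = three SPICE displays.
# The memory=<MB> sub-option is assumed to exist in this qemu-server version.
qm set 101 --vga qxl3,memory=64

# Confirm what ended up in the VM config:
qm config 101 | grep '^vga'
```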
  2. Ubuntu do-release-upgrade container

    Yes, we are very current (all but the last few updates) and the container was running great before. We have other containers running 16.04 and 18.04.
    # pveversion --verbose
    proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve)
    pve-manager: 5.2-5 (running version: 5.2-5/eb24855a)
    pve-kernel-4.15: 5.2-4...
  3. Ubuntu do-release-upgrade container

    So, I tried upgrading a container from 14.04 to 16.04. The container was fully patched before running do-release-upgrade. I tried to restart the container once that completed, and now it completely refuses to boot! Hoping somebody can help figure out why; I have been poking around...
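When a container refuses to start after an upgrade, a foreground start with debug logging usually shows where boot stops. A sketch, assuming the standard PVE 5.x / LXC tooling; 108 is an illustrative container ID:

```shell
# Review the container's config first:
pct config 108

# Start in the foreground with verbose logging; the log often names
# the failing init step or missing binary inside the upgraded rootfs.
lxc-start -n 108 -F --logfile /tmp/lxc-108.log --logpriority DEBUG
```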
  4. ixgbe initialize fails

    Excellent! Is that the 5.3.7 driver, and do you plan to maintain it going forward with new releases?
  5. ixgbe initialize fails

    81:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
    81:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
    We did end up having to download the Intel drivers and install them. With the system upgrading to 5.2...
  6. ixgbe initialize fails

    In 4.15.17-2-pve I am seeing continual Adapter Reset messages, which make our 10G storage network / cluster INCREDIBLY unstable. I had to revert to an older kernel, 4.13.13-6-pve, to get everything up and running again. I would really prefer not to have to maintain building the module from Intel...
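For anyone stuck on this, building Intel's out-of-tree driver by hand looks roughly like the following. A sketch only: the 5.3.7 version number is taken from this thread and the tarball name is illustrative. Note the build has to be repeated after every kernel upgrade, which is exactly the maintenance burden being avoided here.

```shell
# Headers must match the running PVE kernel:
apt-get install pve-headers-$(uname -r) build-essential

tar xzf ixgbe-5.3.7.tar.gz
cd ixgbe-5.3.7/src
make install                 # builds and installs ixgbe.ko for this kernel

# Swap the module in (do this from a console, not over the 10G link):
rmmod ixgbe && modprobe ixgbe
```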
  7. Ceph and node reboot

    Four nodes with 8-10 OSDs per node. Two of the OSDs are SSDs and the others are 10K 600 GB SAS drives. The SSDs are being used for the DB / WAL. Storage capacity is about 30% used at this point. The nodes are interconnected with dual 10 Gbps Intel adapters running LACP for the storage...
  8. Ceph issues new cluster 5.1 fully patched

    Pretty sure we have narrowed this down to one of the four nodes, as ALL of the pgIds have an OSD located on that node. Now to dig further into why this node is misbehaving.
  9. Ceph and node reboot

    EVERY time we need to restart one of the nodes in our cluster, we face this HORRIFIC impact on disk I/O while the Ceph pools "rebuild". It virtually consumes all of the resources, and we need to know how to prevent this. I am simply talking about issuing a 'reboot' after...
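The usual way to avoid this rebuild storm for a planned reboot is to set the cluster's noout flag first, so the OSDs on the rebooting node are not marked out and no backfill starts while the node is down. These are standard Ceph flags:

```shell
# Before the planned reboot:
ceph osd set noout
ceph osd set norebalance     # optional: also pause rebalancing
reboot

# After the node is back and its OSDs have rejoined:
ceph osd unset norebalance
ceph osd unset noout
```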
  10. Ceph issues new cluster 5.1 fully patched

    We have a new four-node cluster that is almost identical to other clusters we are running. However, since it has been up and running, at what seem to be random times we end up with errors similar to:
    2018-02-05 06:48:16.581002 26686 : cluster [ERR] Health check update: Possible data damage: 4...
  11. Replace Journal / WAL SSD drive

    We have a four-node Proxmox cluster with all of the nodes also providing Ceph storage services. One of the nodes is having issues with the SSD we are using for the journal / WAL drives (this is 5.1 / bluestore). We use a command like:
    pveceph createosd /dev/sdc --journal_dev /dev/sdr...
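A rough replacement sequence under PVE 5.1 might look like the following; a sketch only. The OSD id (12) and the replacement device name are illustrative, and the pveceph subcommand names should be verified against your version's man page, since they have changed between releases.

```shell
ceph osd out 12                  # drain the OSD backed by the failing SSD
systemctl stop ceph-osd@12
pveceph destroyosd 12            # remove it from the cluster

# Recreate against the replacement journal/WAL SSD (names illustrative):
pveceph createosd /dev/sdc --journal_dev /dev/sdt
```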
  12. Proxmox Network Question

    A single connection will "pick" one of the links, so most tests you run will never exceed 10G. HOWEVER, as mentioned, when you have multiple hosts they should start distributing across the links, so that the aggregate bandwidth can reach 20G. With only a few hosts you...
  13. Proxmox Network Question

    This is still LACP-related, and honestly that is the best bonding mode. LACP uses certain pieces of information to determine which link it will use for each connection (and this can be configured). The connection then stays on that link for its entire life, assuming the link does not go down...
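On Debian/PVE ifupdown, the bond stanza that enables this behavior looks roughly like the following; the slave interface names are illustrative. layer3+4 hashing lets different connections between the same two hosts land on different links, while any single connection still sticks to one link.

```
# /etc/network/interfaces (sketch; slave names illustrative)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4
```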
  14. Decommission Cluster

    We have a three-node Proxmox cluster that we are in the process of decommissioning, as we have a new 5.1 cluster that we have migrated the majority of the machines to. However, for a while we need to keep one of the nodes running with one of its containers. But we also want to take...
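The relevant command is pvecm delnode for each retired node, run from the node that stays; node names here are illustrative. Because a single surviving node cannot reach the default quorum on its own, pvecm expected 1 is typically needed as well:

```shell
pvecm nodes             # confirm current membership first
pvecm delnode pve2      # repeat for each node being retired
pvecm delnode pve3
pvecm expected 1        # let the lone remaining node keep quorum
pvecm status
```

Keep the removed nodes powered off (or reinstalled) afterwards so they cannot rejoin the cluster.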
  15. [SOLVED] Cluster Letsencrypt SSL

    Yep, that was the problem. I had installed from the zip file rather than git. Switched to git and it went through just as documented. Thanks for catching this!
