I have a VM set up using SPICE with three monitors, and as long as I leave the memory setting at 32MB it boots fine and runs, though video performance is at times a bit slow. Any time I try to increase this, all I get at the console is: "Guest has not initialized the display (yet)." The VM uses...
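For anyone trying to reproduce this, the relevant knobs on the Proxmox side look roughly like the following. This is only a sketch: VM ID 100 is a placeholder, `qxl3` is the SPICE display type for three monitors, and the inline `memory=` option on `--vga` is an assumption that holds on newer `qm` versions (on older releases the video RAM had to be adjusted another way, so check `man qm` for your release).

```shell
# Placeholder VM ID 100; adjust to your VM.
# Select the SPICE/qxl display with three heads:
qm set 100 --vga qxl3

# On newer Proxmox versions the video memory (MiB) can be set inline
# (assumption -- verify against `man qm` on your release):
qm set 100 --vga qxl3,memory=64
```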
Yes, we are very current (all but the last few updates), and the container was running great prior. We have other containers running 16.04 and 18.04.
# pveversion --verbose
proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve)
pve-manager: 5.2-5 (running version: 5.2-5/eb24855a)
pve-kernel-4.15: 5.2-4...
So, I tried upgrading a container from 14.04 -> 16.04. The container was fully patched prior to running do-release-upgrade. I tried to restart the container once that completed, and now it completely refuses to boot! Hoping somebody can help figure out why; I have been poking around...
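For what it's worth, when a container refuses to start after an upgrade, the first thing I reach for is a foreground start with debug logging, since the log usually shows which step of init is failing. A sketch, assuming a container ID of 101:

```shell
# Start the container in the foreground with verbose LXC logging
# (101 is a placeholder container ID):
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log

# Review the log afterwards for the failing step:
less /tmp/lxc-101.log
```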
81:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
81:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
We did end up having to download the Intel drivers and install them. With the system upgrading to 5.2...
On 4.15.17-2-pve I am seeing continual Adapter Reset messages, making our 10G storage network / cluster INCREDIBLY unstable. I had to revert to an older kernel, 4.13.13-6-pve, to get everything up and running. Would really prefer not to have to maintain building the module from Intel...
Four nodes with 8-10 OSDs per node. Two of the OSDs are SSD and the others are 10K 600GB SAS. The SSDs are being used for the DB / WAL. Storage capacity is about 30% used at this point. The nodes are interconnected with dual 10Gbps Intel adapters running LACP for the storage...
Pretty sure we have narrowed this down to one of the four nodes, as ALL of the affected PG IDs have an OSD located on that node. Now to dig further into why this node is misbehaving.
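In case it's useful to others, the PG-to-node mapping can be confirmed from the CLI. A sketch, with a placeholder PG ID:

```shell
# Show the up/acting OSD set for one of the affected PGs
# (1.2f is a placeholder PG ID -- use one from `ceph health detail`):
ceph pg map 1.2f

# Then map the reported OSD numbers back to their hosts:
ceph osd tree
```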
EVERY time we need to restart one of the nodes in our cluster we are faced with this HORRIFIC impact on disk I/O while the Ceph pools rebalance. It virtually consumes all of the resources, and we need to know how to prevent this. I am simply talking about issuing a 'reboot' after...
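The usual way to avoid a full rebalance for a planned reboot is to tell Ceph not to mark the node's OSDs out while it is briefly down. A sketch of the sequence (these are standard Ceph cluster flags, not Proxmox-specific):

```shell
# Before the planned reboot: keep OSDs from being marked out
# and stop rebalancing while the node is down.
ceph osd set noout
ceph osd set norebalance

reboot

# After the node is back and its OSDs are up again:
ceph osd unset norebalance
ceph osd unset noout
```

With `noout` set, Ceph only has to replay the writes the node missed instead of re-replicating all of its data.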
We have a new four node cluster that is almost identical to other clusters we are running. However, since it has been up and running, at what seem to be random times we end up with errors similar to:
2018-02-05 06:48:16.581002 26686 : cluster [ERR] Health check update: Possible data damage: 4...
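When that health error appears, the inconsistent PGs can be listed and, once the underlying cause is understood, repaired. A sketch (the PG ID is a placeholder; note that repair trusts the primary copy, so investigate which replica is bad before running it):

```shell
# List the PGs flagged as inconsistent / damaged:
ceph health detail

# Inspect one of the reported PGs (placeholder ID):
ceph pg 2.1a query

# Ask Ceph to repair it -- only after investigating,
# since repair favors the primary's copy of the data:
ceph pg repair 2.1a
```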
We have a four node Proxmox cluster with all of the nodes also providing Ceph storage services. One of the nodes is having issues with the SSD that we are using for the journal / WAL drives (this is 5.1 / bluestore). We use a command like:
pveceph createosd /dev/sdc --journal_dev /dev/sdr...
A single connection will "pick" one of the links, and therefore most tests that you run will never exceed 10G. HOWEVER, as mentioned, when you have multiple hosts they should start distributing across the links so that the aggregate bandwidth approaches 20G. With only a few hosts you...
This is still LACP-related, and honestly that is the best bonding mode. LACP uses certain pieces of information to determine which link it will use for each connection (and this can be configured). The connection then stays on that link for its entire life, assuming the link does not go down...
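For reference, the "pieces of information" LACP hashes on are what `bond-xmit-hash-policy` controls. A minimal `/etc/network/interfaces` sketch (interface names and addresses are placeholders), assuming the switch side is already configured for 802.3ad:

```shell
# /etc/network/interfaces fragment
# (placeholders: eth0/eth1 slaves, bond0, 10.10.10.11 address)
auto bond0
iface bond0 inet static
    address 10.10.10.11
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    # layer3+4 hashes on IP address + port, so different connections
    # between the same pair of hosts can land on different links:
    bond-xmit-hash-policy layer3+4
```

With the default layer2 policy, all traffic between two hosts hashes to the same link; layer3+4 spreads distinct connections, which is usually what you want for a storage network.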
We have a three node Proxmox cluster that we are in the process of decommissioning as we have a new 5.1 cluster that we have migrated the majority of the machines over to. However, for a while we need to keep one of the nodes running with one of the containers it has. But, we also want to take...
Yep, that was the problem. I had installed using the zip file and not git. Changed to git and went through just as documented. Thanks for catching this!