Some more information that would be helpful to look at: the output of these commands:
ceph versions
ceph mon dump --format json-pretty
ceph config dump
cat /etc/pve/ceph.conf
What usually helps in this regard is running lsusb -t once before and once after plugging in the device, to spot the difference directly. The output of dmesg should also contain further hints at the moment of plugging it in.
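The before/after comparison above can be sketched like this; the /tmp paths are just examples, and the "|| true" guards keep the sketch harmless on systems without lsusb or dmesg access:

```shell
# Capture the USB topology before plugging in the device (lsusb comes from usbutils)
lsusb -t > /tmp/usb-before.txt 2>/dev/null || true
# ... plug the device in, then capture again ...
lsusb -t > /tmp/usb-after.txt 2>/dev/null || true
# Only the lines belonging to the new device should show up in the diff
diff /tmp/usb-before.txt /tmp/usb-after.txt || true
# The kernel log usually names the new device and the driver that claimed it
dmesg 2>/dev/null | tail -n 20 || true
```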
https://bugzilla.proxmox.com/show_bug.cgi?id=2276 is the link to it - so the information doesn't have to be hunted down. Thanks for that.
Is it possible to find out from the logs (e.g. /var/log/apt/history.log) which packages were updated between the point where it was last known to still be shown only once...
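As a sketch of how to pull that out of apt's history log: each transaction there is a stanza with Start-Date, Commandline, and Upgrade lines. The stanza below is invented sample data; in real use you would run the grep directly against /var/log/apt/history.log:

```shell
# Invented sample stanza in the format apt writes to /var/log/apt/history.log
sample='Start-Date: 2019-06-18  09:00:00
Commandline: apt-get dist-upgrade
Upgrade: pve-kernel-4.15 (5.4-1, 5.4-2)
End-Date: 2019-06-18  09:01:00'
# Show each transaction's date together with the packages it upgraded
printf '%s\n' "$sample" | grep -E '^(Start-Date|Upgrade):'
```

On a real system, replace the printf pipeline with: grep -E '^(Start-Date|Upgrade):' /var/log/apt/history.log (older runs are rotated into history.log.*.gz, readable with zgrep).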
You can only set the expected value to something that would give you quorum - corosync doesn't allow you anything else. And when you currently have only one node active, trying to set expected to 2 (even if that is lower than your current value of 3) would mean you don't have quorum...
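corosync's actual vote handling is more involved, but the arithmetic behind the refusal can be illustrated with the usual majority formula, quorum = floor(expected/2) + 1 (an assumption for this toy sketch, not corosync code):

```shell
# Toy illustration of the majority rule: does `active` nodes reach quorum
# given `expected` total votes? Assumes quorum = floor(expected/2) + 1.
has_quorum() {
  active=$1
  expected=$2
  needed=$(( expected / 2 + 1 ))
  if [ "$active" -ge "$needed" ]; then echo yes; else echo no; fi
}
has_quorum 1 2   # one active node, expected 2 -> needs 2 -> no
has_quorum 1 1   # expected lowered to 1 -> needs 1 -> yes
has_quorum 2 3   # two of three -> needs 2 -> yes
```

This is why setting expected to 2 with a single active node still leaves you without quorum, while 1 would not.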
Just for completeness, you are referencing USN-4041-1: https://usn.ubuntu.com/4041-1/
Please link such information rather than just copying the text. That said, we are of course tracking this and are aware of it - it's not overlooked and is on our radar.
You are encouraged to use the workaround of filtering small segmented packets, as mentioned in the link you posted. Given that Proxmox offers firewall management but doesn't mandate it, it would be quite difficult to figure out how to apply such a rule to your ruleset so as not to...
The patched kernel is already available in the pvetest/pve-no-subscription repositories for 5.*, and will be pushed to enterprise tomorrow. You can mitigate the issue by blocking small-MSS packets in your firewall.
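As a sketch of that mitigation: the commonly published rule for this issue (see USN-4041-1) drops TCP SYN segments advertising an implausibly small MSS. The 1:500 range follows that published advice; applying it needs root, and it is only a stopgap until the patched kernel is booted:

```shell
# Drop incoming TCP SYN segments that advertise an MSS of 1-500 bytes.
# Needs root; mirror the same rule with ip6tables for IPv6 traffic.
iptables -I INPUT -p tcp --tcp-flags SYN SYN -m tcpmss --mss 1:500 -j DROP
```

Remember to remove the rule (or let it expire with a reboot into the fixed kernel), since legitimate low-MSS peers are rare but not impossible.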
Given that 4.* has been out of support since June last year, there won't be a patch. You...
There was a bug in the tools that generate our Proxmox CDs which resulted in a broken Packages file in the proxmox/packages directory - if that is the issue you are facing with it not getting recognized. The one under dists/stretch/pve/binary-amd64 does work, though. The broken Packages...
That looks good - and proxmox-ve is installed just fine? I was rather interested in the output from onetwothree, though, who had the issue of not being able to install it.
We are unaware of new issues with 5.4 and cloud-init. Some things have been fixed, but if it worked for you with 5.3, it is expected to work the same with 5.4 too.
In general I suggest upgrading one node at a time, moving all containers and VMs off that node for the time being. Especially when it comes to kernel upgrades, you want to reboot anyway. :)
Try to move VMs and containers to a node that has the same or a newer stack, not the other way round...
In essence: if you copy the CD to the hard disk, you can change into the proxmox/packages directory and run "dpkg-scanpackages . /dev/null > Packages" there to create it. That should do the trick for the time being.
Actually, I found the culprit: the second Packages file on the CD was missing its line breaks. We fixed the build tools - the next release should be compatible with apt-cdrom (again). No need to file that feature request anymore. :)
You are right, the format of the CD isn't recognized by apt-cdrom. What you can do is go into the dists/stretch/pve/binary-amd64 and proxmox/packages directories and install the .deb files with "dpkg -i" (those that you already have installed - at some point I had a one-liner ready for that...
The ideal case would be that the developers support the downgrade. Debian doesn't, and neither does Proxmox. Especially with the Debian base, in the case of library transitions and the like this might lead to interesting issues, so when you ask Debian developers you will usually receive...
Which distribution are you using? The way memory information gets picked up varies between tools, and not all of them are really container-aware. The contents of /proc/meminfo and what can be found under /sys/fs/cgroup/memory are the more relevant sources for that. And the memory management of...
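To illustrate where to look, these are the two sources mentioned above; the paths are standard Linux (the cgroup lines assume the cgroup v1 memory controller layout, and are guarded since the exact files vary by setup):

```shell
# What most tools ultimately parse: the kernel's view of memory
grep -E '^(MemTotal|MemAvailable):' /proc/meminfo
# Inside a v1 memory cgroup (e.g. a container), the actual limit and usage live here
cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null || true
cat /sys/fs/cgroup/memory/memory.usage_in_bytes 2>/dev/null || true
```

If a tool shows the host's total inside a container, it is reading /proc/meminfo without consulting the cgroup limits.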
Can you look at the differences in the configuration under /etc/pve/qemu-server/ and check it against the output of "ip r s" and "ip a s", both inside the VMs but specifically also on the host?