Best practice for Ceph is a minimum of 4 nodes with adequate resources (4GB of RAM per OSD and one CPU core per OSD) and redundant (aggregated) network interfaces for the Ceph public and monitor networks. If your Ceph cluster is failing, consider rebooting each node, and make sure to back up guests frequently...
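Before rebooting anything, it's worth capturing the cluster state first. A minimal sketch of the checks I would start with, assuming the ceph CLI is available on the node (as it is on a standard PVE-managed Ceph install):

ceph -s              # overall cluster health and any degraded/misplaced PGs
ceph health detail   # expands HEALTH_WARN/HEALTH_ERR into specific causes
ceph osd tree        # which OSDs are down/out, and on which nodes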
Agreed with Dunuin; also, don't forget to delete unused disks from a VM that was moved from one storage to another. Check the VM Hardware and LXC Resources menus for each guest to see whether there are any "unused" disks (the originals from before the move); when moving a disk, PVE safely copies it but does not delete the original until...
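The same check can be done from the CLI. A quick sketch, using VMID 100 purely as an example:

qm config 100                  # unused disks show up as "unused0: ..." lines
qm set 100 --delete unused0    # drop the unused0 reference (verify on the storage that the old volume is actually gone)
pct config 100                 # same idea for LXC containers
pct set 100 --delete unused0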
As a junior Debian maintainer and senior Systems Engineer in Canadia-Land, I can confidently tell you that apt remove does indeed work.
Some things to consider (see the commands after this list):
Is kdump referenced by other packages?
Are some packages from the greater kdump suite pre-installed with Debian before kdump-tools is added...
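Stock apt/dpkg commands can answer both questions; a quick sketch:

apt-cache rdepends kdump-tools        # which packages depend on kdump-tools?
dpkg -l | grep -i kdump               # which kdump-related packages are installed?
apt-mark showmanual | grep -i kdump   # were they installed explicitly or pulled in?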
Thank you, that's good to know for diagnostic purposes; I had overlooked that. Of course, just because atime appears to be "on" while relatime is in use doesn't mean relatime isn't doing "the same" (similar) thing as it does on EXT4.
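For anyone wanting to check this on their own pool, a quick sketch (rpool/data is the dataset a default PVE ZFS install creates; substitute your own):

zfs get atime,relatime rpool/data
# atime=on together with relatime=on means atime is only updated when it is
# older than mtime/ctime (or more than ~24h old), not on every read.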
Relatime it is, thank you Thomas.
Tmanok
If I had to guess, it comes from this forum thread: https://forum.proxmox.com/threads/problem-after-updating-kernel.103293/ but I'm not sure why they didn't suggest uninstalling kdump-tools instead. In production I wouldn't (because it creates logs when the kernel crashes), but if...
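For a lab machine, removal is one line; a quick sketch (remove keeps the package's config files, purge does not):

apt remove kdump-tools    # uninstall but keep its config files
apt purge kdump-tools     # uninstall and delete its config files too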
To some, that's adequate documentation, hahaha... For me, I mix between Google Search and Confluence. It's impossible to document every possibility in Confluence, so it's good to have the PVE forums from time to time.
;) Sometimes I return to old posts and think "This is fascinating!" only to find out that I posted the solution or a follow-up comment a couple years ago. :O :D Hah!
Hi Wbanta99,
Thanks for sharing further details. OK, to better understand, can you answer the following questions? I'm a little confused based on the config file alone.
What are the names of your nodes?
I did not see any external network-attached storage (e.g. iSCSI); can you describe how you...
OK, simple troubleshooting first (see the sample commands after this list):
Is the PVE node able to ping anything else on the network? -> If no, check cabling and configuration.
Is the PVE node able to ping the router? -> If no, continue:
Are other physical hosts able to ping PVE? Consider trying multiple hosts.
Are other physical hosts on...
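As promised above, a sample of those first checks from the PVE node's shell (addresses are illustrative; 192.168.0.1 stands in for your router):

ping -c 3 192.168.0.1    # can the node reach the router?
ip -br addr              # do the interfaces carry the addresses you expect?
ip route                 # is there a default route via the router?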
@Dunuin I found it: the reason why GC is a 24-hour minimum. Thanks Thomas!
On another note, more related to this thread and to clear my head after reading conflicting information about ZFS atime from another thread:
PVE VM Storage can safely be configured with relatime instead of atime using ZFS...
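Concretely, that one-time change looks like this (rpool/data is the default dataset the PVE installer creates; adjust for your pool):

zfs set atime=on relatime=on rpool/data
zfs get atime,relatime rpool/data    # verify both properties took effect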
Hi Maher,
It is a little bit ambiguous what you've done, so I'll share a sample config and then offer a path forward for you.
# /etc/network/interfaces (sample)
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2/24
    gateway 192.168.0.1
    bridge-ports eno1    # physical NIC enslaved to the bridge; it carries no IP itself
    bridge-stp off
    bridge-fd 0
What I've done...
It could also be a misconfigured NIC on the server itself. Can anything reach the server? Your title says that you restored the system; did you restore /etc/network/interfaces too?
Cheers,
Tmanok
I'm not sure why there would be an issue, but in my experience with mounting NFS (and most shared file systems, e.g. AFP, SMB), mounting a parent directory and a child directory separately using the same user account often causes issues (even on Windows Server, or on something as simple as a workstation)...
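To illustrate the pattern I mean (the server name and export paths here are made up for the example):

mount -t nfs nas01:/export/data     /mnt/data        # parent export
mount -t nfs nas01:/export/data/vms /mnt/data-vms    # child of the same export
# Mounting both like this under the same account is the overlap I've seen cause odd behaviour.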
Dunuin is correct, the choice is in your hands and depends on your workload.
Personally, and without knowing your circumstances, I would mirror the SSDs using ZFS during installation of PVE onto the SSDs. The installation will create two partitions, one of which is block-based for VMs. You can...
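After the installer finishes, you can confirm the mirror took effect (rpool is the pool name the PVE installer uses by default):

zpool status rpool
# Look for a "mirror-0" vdev listing both SSDs in the output.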
Something else to note: you are swapping 1GB in your first screenshot. Your system does not appear to have enough memory; the PVE memory bar in Summary does not show ARC cache, but if you open htop you will see that all your memory is wired and cached.
ARC cache behaves a little more hardcore than...
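If memory pressure is the problem, ARC can be capped. A minimal sketch, with 8 GiB as a purely illustrative value; size it for your workload:

echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf    # 8 GiB
update-initramfs -u                                                     # persist across reboots
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max                # apply live, no reboot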
Yes, if your EFI partition is full then you should clear old kernels and startup components.
apt autoclean; apt autoremove
Be sure that you do not need the old versions before hitting "Y" on your keyboard. This is not a Proxmox "bug" from what I can tell; it's simply how Linux operates, and your...
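Before confirming, it can help to see what is actually installed; a quick sketch (the package prefix varies by PVE release: pve-kernel on older releases, proxmox-kernel on newer ones):

dpkg --list | grep -E 'pve-kernel|proxmox-kernel'
uname -r    # the running kernel - do not remove this one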
Is your cluster using shared storage or Ceph? Can you list some storage and physical setup details (NIC link speed, iSCSI or Ceph performance, configuration)?
Cheers,
Tmanok
Dunuin is correct, your checksum errors are troublesome. It is possible that the drive or file system may have automatically stopped using the bad sectors, but you should not rely on that disk if it is the problem.
Did you move the drive from drive bay / cable 3 to another? I've also witnessed...
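For reference, the checks I would run here (the pool name and device node are illustrative):

zpool status -v rpool    # which device is accumulating the checksum errors?
smartctl -a /dev/sdc     # needs smartmontools; watch Reallocated_Sector_Ct
                         # and Current_Pending_Sector in the output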
For anyone else encountering this issue, I have observed it on PVE 5 and 6 when the NIC and storage are both slow (on an HP G6 as well, with a poor storage server running under 50MB/s and gigabit NICs). This can also affect your ability to create VMs with large (>200GB) disks.
Glad you figured it...