No, you don't need to write it in ceph.conf, and no reboot is needed.
Setting it on a single node will propagate to the others. However, you can run the following command on all the nodes to make sure the scrub load threshold was applied everywhere:
ceph config get osd osd_scrub_load_threshold
Hi, can you please post the network configuration of the Proxmox VE host (`cat /etc/network/interfaces`) and the LXC config (`pct config <CTID>`)?
In general, you can do that as follows in the Proxmox VE network configuration:
post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport <PORT>...
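For example, a complete pair of rules forwarding TCP port 8080 on vmbr0 to a container at 192.168.1.50 might look like this (the port and IP are hypothetical, adjust them to your setup):
post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8080 -j DNAT --to-destination 192.168.1.50:8080
post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 8080 -j DNAT --to-destination 192.168.1.50:8080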
Is this the same request as here? https://forum.proxmox.com/threads/disk-raw-container-proxmox-6-recovery-lxc-proxmox-8.157190/
If yes, you can do that using dd; the exact procedure depends on the target storage type. However, if you have a backup, the process will be easier.
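As a rough sketch with hypothetical paths (adjust the source image and target disk to your setup):
dd if=/path/to/vm-disk.raw of=/dev/pve/vm-100-disk-0 bs=4M status=progress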
You can set the scrub load threshold [0] as follows:
ceph config set osd osd_scrub_load_threshold 2.0
[0] https://docs.ceph.com/en/quincy/rados/configuration/osd-config-ref/#confval-osd_scrub_load_threshold
Hi, can you check whether the network device inside Windows is installed, i.e., open the Device Manager and check whether the Red Hat VirtIO Ethernet Adapter is present? If it is, could you please try setting the VM's network device model to `Intel E1000`?
Thank you for the syslog!
Could you please check the disk health using `smartctl`?
smartctl -a /dev/sdb
The `Input/output error (os error 5)` points to a hardware issue.
Can you please check the output of `ip route` in the VM? The output should show your gateway `192.168.1.1`. Additionally, what does pinging the gateway return?
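For reference, `ip route` should show a default route roughly like `default via 192.168.1.1 dev ens18` (the interface name is just an example), and you can test the gateway with:
ping -c 3 192.168.1.1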
Hi,
A full SMB storage can impact the pvestatd daemon, which can cause the Proxmox VE web UI to become unresponsive. I would check the syslog around the backup time.
You can also restart the pvestatd and pveproxy services.
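For example:
systemctl restart pvestatd pveproxy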
Hi,
Can you check whether a firmware update is available for the NIC? Sometimes a firmware update helps. Additionally, check whether the CPU microcode package is installed on your Proxmox VE host.
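On a Debian-based Proxmox VE host you can install the microcode package like this (assuming an Intel CPU; use `amd64-microcode` for AMD):
apt install intel-microcode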
Hi,
I haven't tried that yet. However, you can try resolving the node names to IPv6 on both nodes via /etc/hosts, i.e., remove the IPv4 entries from /etc/hosts and add IPv6 ones.
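A minimal sketch of such /etc/hosts entries, with hypothetical node names and addresses:
fd00::10 pve1.example.com pve1
fd00::11 pve2.example.com pve2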
Hi,
We offer support in German and English. Please consider posting in one of these languages for quicker help!
Have you managed to resolve your issue?