The firewall is disabled on both the VM and the host. The only way to make it work is to delete any netfilter rule (on the host) that switches the firewall into stateful mode. I guess there is a bug in the netfilter code which prevents fragmented packets from being reassembled when using a bridge (instead of routing).
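As a rough illustration of what is meant here (the exact rules and chain names will differ per host, so treat this as a sketch):
sysctl net.bridge.bridge-nf-call-iptables   # 1 means bridged frames are passed through iptables
iptables-save | grep -i conntrack           # lists the stateful (conntrack) rules added on the host
conntrack -L | head                         # inspect the connection tracking table (needs conntrack-tools)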
I...
Yes, I'm sure, because after 'service pve-firewall stop' everything works as expected.
Works.
I don't understand this. There are many routers between the VM and the other end of the IPsec tunnel, and all of them forward these packets without problems.
I did some investigation. After
iptables-save > ipt...
The MTU is set to 1500 on all interfaces (node and VM). What kind of problem do you mean? Yes, the packet is fragmented, but that is the proper way to send large packets over the network.
Why? Which rule in the firewall drops these packets?
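To illustrate the fragmentation point above, a quick way to check where large packets get fragmented or dropped (10.0.0.1 is just a placeholder for the remote tunnel endpoint):
ping -M do -s 1472 10.0.0.1   # DF set: 1472 + 28 = 1500 bytes, fits a 1500 MTU path
ping -s 2000 10.0.0.1         # larger than the MTU: the kernel fragments it, which is expected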
Hello,
The firewall is enabled at the datacenter level, but disabled on the host and the VM. Packets 5 and 6 (see attached image), as well as 8 and 9, appear on the VM's tap interface on the host, but pve-firewall drops those frames (they do not appear on the host's uplink interface). Everything works fine when the pve-firewall service is...
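For anyone who wants to reproduce the capture, something like the following shows the difference (tap100i0 and eno1 are placeholders for the VM's tap interface and the host uplink):
tcpdump -ni tap100i0 'ip[6:2] & 0x3fff != 0'   # fragmented packets as they leave the VM
tcpdump -ni eno1 'ip[6:2] & 0x3fff != 0'       # same filter on the uplink; the fragments never show up here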
It comes from ceph-volume.
Jul 25 09:07:22 ceph3 sh[1177]: Running command: /usr/sbin/ceph-volume simple trigger 12-dadf1750-4f14-4248-bd7b-054112ccc3cb
Jul 25 09:07:23 ceph3 kernel: No source specified
Jul 25 09:07:23 ceph3 kernel: fuse: Bad value for 'source'
Maybe it has something to do...
Hello.
The Ceph cluster was upgraded from Luminous to 14.2.1 and now to 14.2.2.
After the upgrade, ceph health detail shows:
osd.13 legacy statfs reporting detected, suggest to run store repair to get consistent statistic reports
for all upgraded OSDs.
How can we run a 'store repair'?
You should read this thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-July/035930.html
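If I remember the gist of that thread correctly, the repair is run per OSD with ceph-bluestore-tool while the OSD is stopped, roughly like this (osd.13 just as an example):
systemctl stop ceph-osd@13
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-13
systemctl start ceph-osd@13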
I know that the Proxmox team hates RAID-0 configurations, but for me this is (with battery-backed write cache) the only way to achieve low latency with high I/O in HDD-only clusters.
You have a pool with size=2 and min_size=2. If an OSD is down, some placement groups have only one copy available, which is less than min_size, so all I/O on them is blocked.
You are probably going to use a Ceph pool with failure domain = host and min_size = 2, so you need at least 2 hosts.
Now you have these choices (from good to very bad; see the command sketch after the list):
- add second node;
- create rule with failure domain = osd;
- change min_size of the pool to 1.
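Rough commands for the last two options (pool and rule names are just examples):
ceph osd crush rule create-replicated replicated-osd default osd   # new rule with failure domain = osd
ceph osd pool set mypool crush_rule replicated-osd
ceph osd pool set mypool min_size 1                                # last resort, risks data loss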
Hello.
In PVE5.4:
root@fujitsu1:~# ceph <TAB><TAB>
auth df heap mon quorum service version
balancer features injectargs mon_status quorum_status status versions
compact...
Are there any slow requests in the ceph status?
If not, and the OSD is still up, there haven't been any I/O operations on that drive, so Ceph doesn't know whether the disk is still there.
Ceph marks an OSD down immediately after an I/O error, and marks it out 10 minutes later.
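A quick way to check (the 10 minutes corresponds to mon_osd_down_out_interval, default 600 s):
ceph -s              # overall cluster status, including slow requests and down OSDs
ceph health detail   # details on which PGs/OSDs are affected
ceph osd tree        # per-OSD up/down state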
I/O is blocked because of:
2019-02-06 11:10:56.387126 mon.bluehub-prox02 mon.0 10.9.9.2:6789/0 33971 : cluster [WRN] Health check failed: Reduced data availability: 329 pgs inactive (PG_AVAILABILITY)
Please show your crush map.
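It can be dumped in readable form like this:
ceph osd getcrushmap -o crush.bin     # export the binary crush map
crushtool -d crush.bin -o crush.txt   # decompile it to text
cat crush.txt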