Search results

  1. J

    pve-firewall blocks large UDP ipsec packets

    The firewall is disabled on the VM and on the host. The only way to make it work is to delete every netfilter rule (on the host) that switches the firewall into stateful mode. I guess there is a bug in the netfilter code which prevents fragmented packets from being reassembled when using a bridge (instead of routing). I...
  2. J

    pve-firewall blocks large UDP ipsec packets

    You're right. Yes, IKE can be configured with fragment=yes, but the remote end of the IPsec tunnel does not support it. So there is no solution? (See the IKE fragmentation sketch after these results.)
  3. J

    pve-firewall blocks large UDP ipsec packets

    Yes, I'm sure, because after 'service pve-firewall stop' everything works as expected. Works. I don't understand this. There are many routers between the VM and the other end of the IPsec tunnel, and all of them forward these packets without problems. I did some investigating (see the firewall inspection sketch after these results). After iptables-save > ipt...
  4. J

    pve-firewall blocks large UDP ipsec packets

    The MTU is set to 1500 on all interfaces (node and VM). What kind of problem do you mean? Yes, the packet is fragmented, but that is the proper way to send large packets over the network (see the path-MTU sketch after these results). Why? Which rule in the firewall drops these packets?
  5. J

    pve-firewall blocks large UDP ipsec packets

    Hello, the firewall is enabled at the datacenter level, but disabled on the host and the VM. Packets 5 and 6 (see attached image), same as 8 and 9, appear on the VM's tap interface on the host, but pve-firewall drops those frames (they do not appear on the host's uplink interface). Everything works fine when the pve-firewall service is...
  6. J

    dmesg shows many: fuse: Bad value for 'source'

    It comes from ceph-volume. Jul 25 09:07:22 ceph3 sh[1177]: Running command: /usr/sbin/ceph-volume simple trigger 12-dadf1750-4f14-4248-bd7b-054112ccc3cb Jul 25 09:07:23 ceph3 kernel: No source specified Jul 25 09:07:23 ceph3 kernel: fuse: Bad value for 'source' Maybe it has something to do...
  7. J

    [SOLVED] Osd legacy statfs reporting detected

    Oh, got it: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-July/035889.html
  8. J

    [SOLVED] Osd legacy statfs reporting detected

    Hello. The Ceph cluster was upgraded from Luminous to 14.2.1 and now to 14.2.2. After the upgrade, ceph health detail shows: osd.13 legacy statfs reporting detected, suggest to run store repair to get consistent statistic reports for all upgraded osds. How can we run a 'store repair'? (See the store repair sketch after these results.)
  9. J

    Ceph nautilus - Raid 0

    You should read this thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-July/035930.html I know that the Proxmox team hates RAID-0 configurations, but for me this (with battery-backed write cache) is the only way to achieve low latency with high I/O in HDD-only clusters.
  10. J

    VM's offline after OSD change

    You have a pool with size=2 and min_size=2. If an OSD is down, some placement groups have only one copy available, which is less than min_size, so any I/O on them is blocked.
  11. J

    Cannot connect to ceph after install

    You are probably going to use a Ceph pool with failure domain = host and min_size = 2, so you need at least 2 hosts. Now you have these choices (from good to very bad): - add a second node; - create a rule with failure domain = osd; - change the pool's min_size to 1. (See the crush rule sketch after these results.)
  12. J

    Ceph bash completion broken in pve6?

    Hello. In PVE5.4: root@fujitsu1:~# ceph <TAB><TAB> auth df heap mon quorum service version balancer features injectargs mon_status quorum_status status versions compact...
  13. J

    Proxmox 6.0 CEPH OSD's don't show down

    Are there any slow requests in the Ceph status? If not, and the OSD is still up, there was no I/O operation on that drive, so Ceph doesn't know whether the disk is still there. Ceph marks an OSD down immediately after an I/O error, and marks it out 10 minutes later. (See the status check sketch after these results.)
  14. J

    [SOLVED] Can't restore backup

    Upgrading Proxmox fixed the problem. Thank you.
  15. J

    [SOLVED] Can't restore backup

    proxmox-ve: 5.2-2 (running kernel: 4.15.18-4-pve) pve-manager: 5.2-8 (running version: 5.2-8/fdf39912) pve-kernel-4.15: 5.2-7 pve-kernel-4.13: 5.2-2 pve-kernel-4.15.18-4-pve: 4.15.18-23 pve-kernel-4.15.18-1-pve: 4.15.18-19 pve-kernel-4.15.17-1-pve: 4.15.17-9 pve-kernel-4.13.16-4-pve: 4.13.16-51...
  16. J

    [SOLVED] Can't restore backup

    Restoring to local-lvm doesn't work either. restore vma archive: zcat /mnt/pve/qnap/dump/vzdump-qemu-1191-2019_03_08-10_18_13.vma.gz | vma extract -v -r /var/tmp/vzdumptmp988144.fifo - /var/tmp/vzdumptmp988144 CFG: size: 591 name: qemu-server.conf DEV: dev_id=1 size: 34359738368 devname: drive-scsi0...
  17. J

    [SOLVED] Can't restore backup

    Log: restore vma archive: zcat /mnt/pve/qnap/dump/vzdump-qemu-1191-2019_03_08-10_18_13.vma.gz | vma extract -v -r /var/tmp/vzdumptmp974517.fifo - /var/tmp/vzdumptmp974517 CFG: size: 591 name: qemu-server.conf DEV: dev_id=1 size: 34359738368 devname: drive-scsi0 DEV: dev_id=2 size: 8589934592...
  18. J

    Ceph down if one node is down

    The problem is that you have size=min_size, so any down OSD will freeze the pool. Change size to 3 (this will cause mass data movement, so be advised; see the pool size sketch after these results).
  19. J

    Ceph down if one node is down

    And pool size/min size?
  20. J

    Ceph down if one node is down

    I/O is blocked because of: 2019-02-06 11:10:56.387126 mon.bluehub-prox02 mon.0 10.9.9.2:6789/0 33971 : cluster [WRN] Health check failed: Reduced data availability: 329 pgs inactive (PG_AVAILABILITY) Please show your crush map (see the crush map sketch after these results).
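
Sketches for the results above

For item 2: a minimal sketch of enabling IKE fragmentation, assuming the local endpoint runs strongSwan with a classic ipsec.conf; the connection name and addresses are placeholders, and, as the post notes, this only helps if the remote end supports it too.

  # /etc/ipsec.conf -- hypothetical connection
  conn site-a
      keyexchange=ikev2
      fragmentation=yes   # fragment oversized IKE messages at the IKE layer instead of relying on IP fragmentation
      left=192.0.2.1      # placeholder local address
      right=198.51.100.1  # placeholder remote address
      auto=start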
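
For item 3: a sketch of that investigation, i.e. confirming pve-firewall is the culprit and then looking for the stateful (conntrack) rules mentioned in item 1; the grep pattern is just an assumption about what to search for.

  service pve-firewall stop                    # traffic passes -> the firewall ruleset is involved
  iptables-save > /tmp/rules.txt               # dump the generated ruleset
  grep -E 'conntrack|ctstate' /tmp/rules.txt   # stateful matches that can break bridged fragments
  service pve-firewall start                   # re-enable afterwards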
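
For item 4: a quick path-MTU check, assuming a Linux host; 203.0.113.5 stands in for the remote tunnel endpoint. 1472 bytes of ICMP payload plus 28 bytes of headers gives exactly a 1500-byte packet.

  ip link show                      # confirm MTU 1500 on the node and VM interfaces
  ping -M do -s 1472 203.0.113.5    # DF set: succeeds only if a 1500-byte path MTU really holds
  ping -s 2000 203.0.113.5          # no DF: forces fragmentation, like the large IKE/ESP packets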
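
For items 7 and 8: a sketch of the per-OSD repair described in the linked ceph-users thread, using osd.13 from the snippet as the example; run it with the OSD stopped, repeat for each upgraded OSD, and check the data path on your own nodes first.

  systemctl stop ceph-osd@13
  ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-13   # rewrites the statfs metadata in the new format
  systemctl start ceph-osd@13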
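
For item 11: sketches of the second and third options, assuming a replicated pool named mypool (hypothetical) and the default crush root; as the post says, min_size 1 is the "very bad" end of the scale.

  # option 2: replicate across OSDs instead of hosts
  ceph osd crush rule create-replicated rule-osd default osd
  ceph osd pool set mypool crush_rule rule-osd
  # option 3: allow I/O with a single remaining copy (risky)
  ceph osd pool set mypool min_size 1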
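
For item 13: the checks that reasoning implies, using standard ceph CLI calls; the 10-minute figure matches the default mon_osd_down_out_interval of 600 seconds.

  ceph -s               # overall status, including slow-request warnings
  ceph health detail    # which OSDs or PGs are affected
  ceph osd tree         # per-OSD up/down and in/out state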
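
For items 10, 18 and 19: checking and changing the replication settings discussed there, again assuming a pool named mypool (hypothetical); raising size triggers backfill of the third copy, hence the "mass data movement" caveat.

  ceph osd pool get mypool size        # current number of replicas
  ceph osd pool get mypool min_size    # replicas required to keep serving I/O
  ceph osd pool set mypool size 3      # with min_size 2, a single failure no longer blocks the pool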
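
For item 20: how a crush map is usually exported for posting, with standard ceph and crushtool commands; the file names are arbitrary.

  ceph osd getcrushmap -o crush.bin     # binary crush map from the monitors
  crushtool -d crush.bin -o crush.txt   # decompile to readable text
  cat crush.txt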
