Search results

  1. Can not re-add OSD after destroy it

    Hi guys. After destroying the OSDs, it doesn't let me re-add them because it says the disk is a member of an LVM group, but neither lvscan, nor vgscan, nor ceph-volume lvm list shows that membership. The OSD Add button doesn't display the disk to add. This shows the membership, but this tells...
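
    A hedged sketch of how one might check for and clear leftover LVM signatures in this situation (/dev/sdX is a placeholder; the zap is destructive and assumes the OSD's data is no longer needed):

        # list the LVM volumes ceph-volume knows about
        ceph-volume lvm list
        # show LVM physical volumes and volume groups on the host
        pvs
        vgs
        # wipe leftover LVM/filesystem signatures -- destroys all data on /dev/sdX
        ceph-volume lvm zap /dev/sdX --destroy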
  2. DB/WAL Size after upgrade to Nautilus

    I have realized that my DB partitions are sized at 1 GB (my bad). I have one 480 GB SSD holding two DB/WAL partitions that belong to two 6 TB hard drives. After a spillover warning, I discovered the 1 GB partitions. My plan was then to resize them to 220 GB each, but because of the spillover that is...
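
    A hedged way to confirm DB usage and spillover per OSD (osd.0 is a placeholder; the bluefs counters report sizes in bytes):

        # spillover warnings appear here
        ceph health detail
        # inspect BlueFS/DB usage on a given OSD
        ceph daemon osd.0 perf dump bluefs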
  3. Gateways of VMs

    Is 10.0.1.1 the default gateway for your network? Then that is the gateway for your VMs too, if they are on that 10.0.1.0/24 net.
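
    As a minimal illustration, a static network configuration inside a Debian-style guest on that subnet might look like this (the interface name ens18 and the address 10.0.1.50 are hypothetical):

        # /etc/network/interfaces inside the VM
        auto ens18
        iface ens18 inet static
            address 10.0.1.50
            netmask 255.255.255.0
            gateway 10.0.1.1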
  4. [SOLVED] Shutting down any node makes VMs unavailable

    Yes, I think so. I remember somebody downsizing the replication numbers, but by doing that you put yourself at risk.
  5. [SOLVED] Shutting down any node makes VMs unavailable

    2/1 in a test environment is OK; in a production system it's a no-go. If you lose one node, nothing happens, but if something occurs before the rebuild, you are game over. Be ready for a mass restoration from backups, because you will have data loss for sure. With 3/2, if you lose a second...
  6. [SOLVED] Shutting down any node makes VMs unavailable

    Please post: ceph -s, ceph health detail, pvecm status. You don't have to rebuild the pool, just increase the replica size to 3. Something like: ceph osd pool set POOL_NAME size 3
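
    A hedged sketch of the full change, assuming the pool is currently at size 2 / min_size 1 (POOL_NAME is a placeholder):

        # raise the number of replicas to 3
        ceph osd pool set POOL_NAME size 3
        # keep serving I/O while at least 2 replicas are available
        ceph osd pool set POOL_NAME min_size 2
        # watch the backfill/recovery progress
        ceph -s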
  7. [SOLVED] Ceph down/out timeout

    Gotcha!!! It was a parameter: mon osd min in ratio = 0.75. Changing it to 0.70 lets me lose more OSDs and have them marked as out. Thank you for your help.
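
    For reference, a hedged sketch of applying that change (on Proxmox the ceph.conf path is usually /etc/pve/ceph.conf):

        # persistent change, [global] section of ceph.conf
        mon osd min in ratio = 0.70

        # or apply at runtime without restarting the monitors
        ceph tell mon.* injectargs '--mon_osd_min_in_ratio 0.70'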
  8. [SOLVED] Ceph down/out timeout

    Can anybody test this? I'm testing this: 1) If one or several OSDs in one host go down, they work as expected: marked down, and after 600 s marked as out. Ceph rebuilds itself. 2) If one host goes down, all the OSDs in that host are marked as down, and after 600 s marked...
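
    The 600 s delay is the default mon_osd_down_out_interval; a hedged sketch of checking and tuning it on Nautilus or later:

        # show the current down -> out interval (default 600 seconds)
        ceph config get mon mon_osd_down_out_interval
        # shorten it to 300 seconds, for example
        ceph config set mon mon_osd_down_out_interval 300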
  9. [SOLVED] Ceph down/out timeout

    Yep, I know. But one of the OSDs never goes from down/in to down/out. All the others get marked as out and recovery starts, but that one OSD stays faulty.
  10. [SOLVED] Ceph down/out timeout

    Hi, I have a problem. I am testing a 7-node cluster. Each node has 1 NVMe (4 OSDs) and two HDDs (2 OSDs), so 6 OSDs per node. There are two replication rules (device classes nvme and hdd) and two pools (fast and slow) according to those rules. All is OK, but when I shut down one node, and thereafter another...
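
    A hedged sketch of how such device-class rules are typically created (the rule names are assumptions; the pool names fast and slow are from the post):

        # one CRUSH rule per device class, failure domain host
        ceph osd crush rule create-replicated fast_rule default host nvme
        ceph osd crush rule create-replicated slow_rule default host hdd
        # attach each pool to its rule
        ceph osd pool set fast crush_rule fast_rule
        ceph osd pool set slow crush_rule slow_rule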
  11. [SOLVED] Problem with corosync, Cluster stuck several minutes

    OK, issue resolved. It was my fault: the IGMP querier was not enabled on the VLAN at the switch. I changed the VLANs and forgot to enable it. I apologize for the inconvenience caused, and thank you very much for the time and brainpower you spent on this.
  12. [SOLVED] Problem with corosync, Cluster stuck several minutes

    The 10-minute test is OK too.

        10.9.5.151 : unicast,   xmt/rcv/%loss = 600000/600000/0%, min/avg/max/std-dev = 0.023/0.090/0.918/0.032
        10.9.5.151 : multicast, xmt/rcv/%loss = 600000/600000/0%, min/avg/max/std-dev = 0.023/0.093/0.920/0.032
        10.9.5.152 : unicast,   xmt/rcv/%loss =...
  13. [SOLVED] Problem with corosync, Cluster stuck several minutes

    I will test omping for 10 minutes... There is no bridge on the corosync interfaces.

        more /etc/network/interfaces
        auto lo
        iface lo inet loopback

        auto enp175s0f0
        iface enp175s0f0 inet static
            address 10.9.5.156
            netmask 255.255.255.0
        #Corosync RING0

        auto enp175s0f1
        iface...
  14. [SOLVED] Problem with corosync, Cluster stuck several minutes

        omping -c 10000 -i 0.001 -F -q 10.9.5.151 10.9.5.152 10.9.5.153 10.9.5.154 10.9.5.155 10.9.5.156 10.9.5.157

        10.9.5.151 : unicast,   xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.029/0.082/0.237/0.023
        10.9.5.151 : multicast, xmt/rcv/%loss = 10000/10000/0%...
  15. [SOLVED] Problem with corosync, Cluster stuck several minutes

    I have a 7-node cluster. Corosync is configured with 2 rings on two different Ethernet interfaces. Everything runs OK. When I shut down a node (for example node4), the cluster notices the shutdown and carries on fine. But when I power node4 up again, the whole cluster goes down...
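
    For context, a two-ring setup like this is declared per node in corosync.conf; a minimal hedged sketch of one node entry (the 10.9.5.x addresses follow the posts above, the 10.9.6.x subnet for the second ring is an assumption):

        node {
          name: node4
          nodeid: 4
          quorum_votes: 1
          ring0_addr: 10.9.5.154
          ring1_addr: 10.9.6.154
        }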
  16. Tagged VLAN in VM, vlan aware OVswitch again..

    Is it necessary to declare the OVS port (OVSIntPort) on the Proxmox host in order to use tagged VLANs in the VMs? I mean, I have this test configuration:

        allow-vmbr0 bond0
        iface bond0 inet manual
            ovs_bridge vmbr0
            ovs_type OVSBond
            ovs_bonds enp176s0f0 enp176s0f1...
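
    For reference, a hedged sketch of the matching bridge stanza; an OVSIntPort is only needed to give the host itself an address on a VLAN, not for tagged VM traffic (VLAN 20 and the addresses are assumptions):

        allow-ovs vmbr0
        iface vmbr0 inet manual
            ovs_type OVSBridge
            ovs_ports bond0 vlan20

        # host management address on VLAN 20 (optional, host-only)
        allow-vmbr0 vlan20
        iface vlan20 inet static
            address 10.0.20.10
            netmask 255.255.255.0
            ovs_type OVSIntPort
            ovs_bridge vmbr0
            ovs_options tag=20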
  17. [SOLVED] Fresh VE 5.3 installation doesn't work with Ubuntu 18.04

    I can confirm this point. Ubuntu-18.04.2-live-server-amd64.iso doesn't work. Ubuntu 18.04.1 works well.
  18. Issues installing Debian VM

    See https://www.debian.org/releases/stable/i386/ch03s04.html.en and try 128 MB.
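
    A hedged one-liner for giving the installer VM that much memory on Proxmox (the VMID 100 is a placeholder):

        # set the VM's RAM to 128 MB
        qm set 100 --memory 128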
