Search results

  1. Bulk OSD replace in Ceph

    @RokaKen Hmm... this is an idea. We hadn't thought about reweighting to 0; we'll have to calculate it. Thanks. (See the drain sketch after this list.)
  2. Bulk OSD replace in Ceph

    Thanks a lot for the reply, @aaron. We fully understand the risk when an OSD is down/out, and under our "per node" redundancy policy we never remove two OSDs on separate nodes. I was only asking whether it is OK to remove an OSD while Ceph is healthy and no PG is degraded, but the remap/backfill process is still running (no...
  3. Bulk OSD replace in Ceph

    Hello, this is not an issue report, but I would really appreciate your answers. We have a 12-node PVE cluster with 54 Ceph OSDs (all SSD). PVE 6.1.5, Ceph 14.2.5, 54 OSDs, 2048 PGs, replica 3, OSDs on RAID0 (we know that is not a supported configuration, but we have to do it this way. And BTW, the OSDs in...
  4. [SOLVED] NFS mount- wont "online"

    Hi, sorry for bumping an old thread, but I have the same issue, with a few differences. Freshly installed Proxmox 6.2-4 (with all the latest updates), a 4-node cluster connected through a 10G switch, and a Synology NAS in the same broadcast domain and IP network (no firewall, no ACLs on the switch side). Before the cluster... (See the NFS check sketch after this list.)
  5. [SOLVED] OSDs fail on on one node / cannot re-create

    lucentwolf, did you reboot the node after removing the partitions? I tried to mount the disks after removing the partitions and got the same error. Only after rebooting the node were all the disks available for Ceph.
  6. [SOLVED] OSDs fail on on one node / cannot re-create

    Hi lucentwolf, yes, ceph-volume lvm list didn't work either. I found a workaround for my issue (formatting the disk from a LiveCD; see the manual cleanup sketch after this list). I also found something that looks like our issue: https://github.com/ceph/ceph/pull/23532 Hope this is helpful.
  7. [SOLVED] OSDs fail on on one node / cannot re-create

    Hello, sorry in advance; to avoid opening a new, similar topic I'll write here. I have an issue with the ceph-volume lvm zap command too. I use PVE 6.0.1 with Ceph for testing purposes. Now I have to migrate this cluster to production. I did a clean install (PVE + upgrade + Ceph) on each of the 4 nodes, created...
  8. whmcs: HTTP/1.1 596 Connection timed out

    You use a public IP. 1. Check the FQDN in DNS or in the hosts file on the WHMCS host. 2. Check connectivity from WHMCS to all cluster nodes on all necessary ports, both by IP and by DNS name (see the connectivity sketch after this list). Your problem is outside the scope of PVE.
  9. CephFS share via NFS to VMware

    @Alwin Thanks a lot for your reply. We'll try to do it. Thanks. Yes, I know about this product, but as you understand, a backup process is not only creating a VZDump and then backing up the delta; it also involves tape archiving, syncing with a disaster-recovery site, and so on. It's not easy to build the whole B&R process from scripts...
  10. CephFS share via NFS to VMware

    Thanks a lot for your time. Step by step: 1. iSCSI gateways? You mean this one: https://docs.ceph.com/docs/mimic/rbd/iscsi-targets/ Does Proxmox support it well? I can only use software iSCSI; some nodes' cards can't handle HW iSCSI. 2. OK, but I can't delete the existing pool :( I'll try from the CLI...
  11. CephFS share via NFS to VMware

    Hello, sorry in advance for the dumb questions, but I can't find in the docs how to connect VMware 5.5 hosts to CephFS via NFS (see the NFS export sketch after this list); all the docs point to the official Ceph site. As it is a production cluster, I'm afraid of doing something very bad. I have a 10-node PVE cluster and a Ceph cluster with 40 SSD OSDs and 10...
  12. Proxmox VE 6 does not boot ubuntu cloud images with cloud-init for VM.

    @kakao73, @ujiam did you check where Proxmox was trying to find the cloud-init drive? When we hit this error on older versions, the VM disk was on a Ceph SSD pool, but in the VM hardware options the cloud-init disk was on our dev/test SAS pool. We re-attached the correct cloud-init disk and the VM was able to start. (See the cloud-init sketch after this list.)
  13. Proxmox VE 6 does not boot ubuntu cloud images with cloud-init for VM.

    Which qemu version do you use? We had a lot of problems with various cloud-init images on qemu-server 6.0.10 through 6.0.17; with 6.0.9 or the latest 6.1.1 everything is OK. Try one of those versions. (See the version-pinning sketch after this list.)
  14. Cluster network with storage, migration network

    Hi, thanks a lot for the reply. It finally works. The answer was not in that topic, but it pushed me to re-check the physical network: one of the two 1GbE ports was not properly configured. Thanks.
  15. Cluster network with storage, migration network

    Still can't find what I'm doing wrong. :(
  16. Cluster network with storage, migration network

    After a few days of reading I decided to leave the corosync network on the 10G net, but I have a problem with VLANs. As I mentioned above, I have 2 x 10G NICs without VLANs and 2 x 1G NICs with VLANs configured on the switch. I created two bonds: a regular Linux bond1 for the 10G network, then created vmbr0 with... (See the interfaces sketch after this list.)
  17. Cluster network with storage, migration network

    Hi all, I'm going to migrate to Proxmox and I have a question about the cluster network. I have servers with 4 NICs: 2 x 1GbE for uplinks connected to separate switches in active-standby mode, which connect to the core router, so all VMs get public IPs; and 2 x 10GbE connected to separate switches in...
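
Sketches for the results above

For the "Bulk OSD replace in Ceph" thread (results 1-3), a minimal drain-and-remove sketch, assuming a placeholder osd.12; re-check the exact steps against the Ceph documentation for your release before touching production:

    ceph osd crush reweight osd.12 0           # drain data off the OSD via CRUSH
    ceph -s                                    # wait until backfill/recovery has finished
    ceph osd safe-to-destroy osd.12            # proceed only if Ceph reports it is safe
    ceph osd out 12
    systemctl stop ceph-osd@12                 # run on the node that hosts the OSD
    ceph osd purge 12 --yes-i-really-mean-it   # removes the OSD from the CRUSH map, auth and the OSD map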
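
For the NFS storage thread (result 4), a quick check sketch, assuming a placeholder NAS address of 192.168.1.50; it only verifies that the exports are visible from the node and that the storage comes up:

    showmount -e 192.168.1.50      # which exports does the NAS offer?
    pvesm scan nfs 192.168.1.50    # the same check through the PVE storage tooling
    pvesm status                   # the NFS storage should now report as active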
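
For the "OSDs fail on on one node / cannot re-create" thread (results 5-7), a hedged manual cleanup sketch for when ceph-volume lvm zap refuses to wipe a disk; /dev/sdX is a placeholder, so double-check the device name before running anything destructive:

    ceph-volume lvm zap /dev/sdX --destroy   # normal path: remove LVM metadata and wipe the device
    # if zap fails, clear the leftover signatures by hand:
    wipefs --all /dev/sdX                    # remove filesystem/LVM signatures
    sgdisk --zap-all /dev/sdX                # remove GPT/MBR partition structures
    partprobe /dev/sdX                       # re-read the partition table; as noted in result 5,
                                             # a reboot may still be needed before Ceph sees the disk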
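
For the WHMCS timeout thread (result 8), a connectivity-check sketch to run from the WHMCS host; the node names are placeholders and 8006 is the default Proxmox VE API port:

    for node in pve1.example.com pve2.example.com pve3.example.com; do
        getent hosts "$node"      # does the FQDN resolve (DNS or /etc/hosts)?
        nc -zv -w 3 "$node" 8006  # can the API port be reached?
    done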
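
For the "CephFS share via NFS to VMware" thread (results 9-11), a rough sketch of re-exporting a CephFS mount over kernel NFS from a gateway node, assuming placeholder addresses and that the ESXi 5.5 hosts mount it as NFSv3; test it outside production first:

    apt install nfs-kernel-server
    mkdir -p /mnt/cephfs
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    echo '/mnt/cephfs 10.0.0.0/24(rw,sync,no_root_squash,fsid=1)' >> /etc/exports
    exportfs -ra                 # the ESXi hosts then mount <gateway-ip>:/mnt/cephfs over NFSv3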
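
For the cloud-init boot thread (result 12), a sketch of checking which storage backs the cloud-init drive and recreating it next to the VM disk; the VMID 9001 and the storage name ceph-ssd are placeholders:

    qm config 9001 | grep -Ei 'cloudinit|^(ide|sata|scsi|virtio)'   # where does each disk live?
    qm set 9001 --delete ide2                                       # drop the misplaced cloud-init drive
    qm set 9001 --ide2 ceph-ssd:cloudinit                           # recreate it on the same pool as the VM disk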
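
For result 13, a sketch of holding qemu-server at a known-good build until a fixed package ships; the version string below is only an example and depends on what is still available in the repository:

    apt install qemu-server=6.1-1   # a version reported as working in the thread
    apt-mark hold qemu-server       # keep upgrades from pulling in a broken build
    apt-mark unhold qemu-server     # release the hold once a fixed version is out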
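
For the "Cluster network with storage, migration network" thread (results 16-17), a hedged /etc/network/interfaces sketch with one active-backup bond per pair of NICs and a VLAN-aware bridge for the VM uplinks; interface names and addresses are placeholders:

    auto bond1
    iface bond1 inet static
        bond-slaves ens1f0 ens1f1   # 2 x 10G, no VLANs: cluster/storage/migration traffic
        bond-mode active-backup
        bond-miimon 100
        address 10.10.10.11/24

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2       # 2 x 1G uplinks, VLANs tagged on the switch side
        bond-mode active-backup
        bond-miimon 100

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes       # set the VLAN tag per VM virtual NIC
        bridge-vids 2-4094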
