Search results

  1.

    [SOLVED] Corosync redundancy - corosync.conf

    Updated original post with PVE version information.
  2.

    [SOLVED] Corosync redundancy - corosync.conf

    Here's the original corosync: nodelist { node { name: VMHost2 nodeid: 3 quorum_votes: 1 ring0_addr: 10.10.2.16 ring1_addr: 10.10.1.16 ring2_addr: 10.10.0.16 } node { name: VMHost4 nodeid: 1 quorum_votes: 1 ring0_addr: 10.10.2.14 ring1_addr...
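Reflowed for readability, and with an illustrative totem section added, a redundant-link (kronosnet) configuration like the one in this snippet would look roughly as follows. This is a sketch only: the cluster_name, config_version, and knet link priorities are assumptions not shown in the snippet, and config_version must be incremented on every change.

```
totem {
  cluster_name: mycluster    # assumption: real name not shown in the snippet
  config_version: 4          # must be bumped on every edit
  ip_version: ipv4
  link_mode: passive         # kronosnet uses one link at a time, by priority
  secauth: on
  version: 2
  interface {
    linknumber: 0
    knet_link_priority: 10   # preferred link (ring0, 10.10.2.0/24)
  }
  interface {
    linknumber: 1
    knet_link_priority: 5    # first fallback (ring1, 10.10.1.0/24)
  }
}

nodelist {
  node {
    name: VMHost2
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.2.16
    ring1_addr: 10.10.1.16
    ring2_addr: 10.10.0.16
  }
  # ... remaining nodes (VMHost4 etc.) follow the same pattern
}
```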
  3.

    [SOLVED] Corosync redundancy - corosync.conf

    I have an existing 3-node cluster that was originally created in Proxmox 5.x. proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve) pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1) pve-kernel-helper: 7.2-2 pve-kernel-5.15: 7.2-1 pve-kernel-5.13: 7.1-9 pve-kernel-5.15.30-2-pve: 5.15.30-3...
  4.

    [SOLVED] Ceph Octopus to Pacific - PG is stuck inactive

    1) Yes, this happened. 2) This is when the issue occurred. I think it had something to do with restarting the monitors for the Pacific upgrade, honestly.
  5.

    [SOLVED] Ceph Octopus to Pacific - PG is stuck inactive

    I upgraded the last node and restarted daemons and now it has healed itself. Not sure exactly what happened, but I think my issue was not following the upgrade steps EXACTLY as they are in the documentation. Let this be a warning to my future self and others!
  6.

    [SOLVED] Ceph Octopus to Pacific - PG is stuck inactive

    3-node cluster. I just upgraded to the newest Proxmox 7.2 and Ceph Octopus 15.2.16 to Pacific 16.2.7, and after I upgraded and rebooted the second node I am getting stuck PGs. At this time I have one last node to upgrade, but [global] auth_client_required = none auth_cluster_required = none...
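For PGs stuck inactive during an upgrade like this, the standard Ceph CLI queries below (run on any node with a monitor) show which PGs are affected and why; the PG id in the last command is a placeholder:

```shell
ceph -s                        # overall cluster and PG state
ceph health detail             # per-PG health messages
ceph pg dump_stuck inactive    # list PGs stuck in the inactive state
ceph pg 2.1a query             # detailed state of one PG (2.1a is a placeholder)
```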
  7.

    [SOLVED] Cannot Down and Remove the last remaining CEPH OSD from previous pool

    I started destroying the old osd, leaving 1 on each host, and I just went back in and noticed that the final osd was down! I'm not sure if it was just a matter of time or what happened, but the problem appears to have resolved itself.
  8.

    [SOLVED] Cannot Down and Remove the last remaining CEPH OSD from previous pool

    Info: proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve) pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3) pve-kernel-helper: 7.1-14 pve-kernel-5.13: 7.1-9 pve-kernel-5.11: 7.0-10 pve-kernel-5.13.19-6-pve: 5.13.19-15 pve-kernel-5.13.19-3-pve: 5.13.19-7 pve-kernel-5.13.19-2-pve: 5.13.19-4...
  9.

    CIFS/SMB Unable to activate storage - directory does not exist or is unreachable

    MTU is the same at 1500 (defaults). Haven't even had time to go back and diagnose, partly also because I don't know what else to check.
  10.

    [SOLVED] VM Stuck Locked

    I just moved the hard disk to another source, where I wanted it anyway, and it removed any snapshots and related issues.
  11.

    [SOLVED] VM Stuck Locked

    I shut down a VM (Windows 11) and wanted to take a snapshot. I realized the VM was not yet shut down because it was taking a snapshot of the RAM, so I cancelled it while it was snapshotting. Unfortunately, this caused the VM to get stuck in a locked state, which I cannot get out of...
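A lock left behind by an aborted snapshot can usually be cleared on the CLI of the node hosting the VM; the VMID and snapshot name below are placeholders:

```shell
qm config 100                    # the "lock:" line shows the stale lock
qm unlock 100                    # clear the stale lock
qm listsnapshot 100              # check whether a half-finished snapshot remains
qm delsnapshot 100 BeforeCancel  # remove it if so (name is a placeholder)
```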
  12.

    [SOLVED] Database Sync Error

    Oh wow, I originally suspected it was a permissions issue but I missed that small detail about user 700! Thanks so much for the help, it has resolved the problem.
  13.

    [SOLVED] Database Sync Error

    This is a pretty new 3-host cluster: version 7.1-1, pmg01 (master), pmg02 (node), pmg03 (node). I am seeing the following message in the syslogs for pmg01 and pmg03: <27>1 2021-12-30T08:49:16.624941-06:00 pmg01 pmgmirror 926 - - database sync 'pmg02' failed - DBI...
  14.

    CIFS/SMB Unable to activate storage - directory does not exist or is unreachable

    So far I discovered the following: the TrueNAS server has two interfaces, one for the management GUI only via bonded 1 GbE, and one for storage access via bonded 10 GbE. If I try to mount and write via the 1 GbE management interface, it works fine. If I try to mount and write via the 10 GbE storage...
  15.

    CIFS/SMB Unable to activate storage - directory does not exist or is unreachable

    I just tried to mount a TrueNAS Core NFS share via NFS v3 and I am having the same issue! When I attempt to upload an ISO for example, Proxmox freezes at the Copy data step. The server is not responding, but it mounts fine! I do not have this problem with Raspbian systems, or Ubuntu server...
  16.

    CIFS/SMB Unable to activate storage - directory does not exist or is unreachable

    I'm suspecting it might be a permissions issue. ProxmoxVE writes to these shares as the root user, correct? PS: I tried NFS sharing and I had a similar issue.
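Proxmox VE does mount and write to CIFS storage as root. One way to rule permissions in or out is to mount the share manually, outside of PVE's storage layer, and test a write as root; the server, share, and credentials below are placeholders:

```shell
mkdir -p /mnt/cifstest
mount -t cifs //192.168.1.50/proxmox /mnt/cifstest \
    -o username=pveuser,password=secret,vers=3.0
touch /mnt/cifstest/writetest    # a failure here points at NAS-side permissions/ACLs
umount /mnt/cifstest
```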
  17.

    CIFS/SMB Unable to activate storage - directory does not exist or is unreachable

    PVE Versions: proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve) pve-manager: 7.0-13 (running version: 7.0-13/7aa7e488) pve-kernel-helper: 7.1-2 pve-kernel-5.11: 7.0-8 pve-kernel-5.4: 6.4-5 pve-kernel-5.11.22-5-pve: 5.11.22-10 pve-kernel-5.11.22-4-pve: 5.11.22-9 pve-kernel-5.4.128-1-pve...
  18.

    [SOLVED] How to remove orphaned snapshots?

    I have a couple of snapshots that show up in their respective Ceph pools: vm-105-state-ERPNextReady and vm-101-state-ubuntu20. These snapshots do not exist in the VM snapshot list: vm-105: vm-101: Does anyone know how I can remove them? When I try to remove both snapshots I get the...
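The vm-XXX-state-* volumes are separate RBD images that store the RAM state of a snapshot; when the snapshot itself is no longer listed in PVE, they can typically be removed directly with the rbd tool. The pool name below is a placeholder:

```shell
rbd ls -p ceph-vm                         # confirm the orphaned state images exist
rbd rm ceph-vm/vm-105-state-ERPNextReady  # fails if a watcher/lock remains
rbd rm ceph-vm/vm-101-state-ubuntu20
```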
  19.

    Accessing VM Discs on CEPH Filesystems

    Yeah, I would imagine that if Ceph is not installed on that node, it wouldn't have access to the RBD protocol.