Recent content by alexskysilk

  1. tips for shared storage that 'has it all' :-)

    Ceph is the only "all features" supported shared solution easily available for PVE. It's also the most heavily worked on for other virt platforms such as XCP-NG and various flavors of KVM. Your decision tree going forward heavily depends on WHY Ceph was rejected. If it's a matter of available...
  2. Problem with SANs on a three node cluster

    As I mentioned before, options 1 and 2 are available to you. Option 3 is not, at least not in a non-hacky way. Options 1 and 2 work the same no matter whether you choose to use one, the other, or both, and don't interact/interfere with each other unless you gave them identical IQNs (which must...
  3. Problem with SANs on a three node cluster

    I see. So you really do have two independent iSCSI hosts (targets). There should not be any conflict between the two pools. Post logs (dmesg, iscsiadm -m session, lsblk, etc.) for the host with the failure and we can go from there.
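The logs asked for above can be collected in one pass on the failing node. A sketch, to be run as root on the affected host; the exact targets and device names will of course differ per setup:

```shell
# Recent kernel messages (iSCSI connection errors, timeouts, path drops):
dmesg | tail -n 50
# Active iSCSI sessions, one per target/portal:
iscsiadm -m session -P 1
# Targets configured on this initiator:
iscsiadm -m node
# How the discovered LUNs map to block devices:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```

Comparing the session list against lsblk usually shows quickly whether both targets logged in and which one is missing its devices.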
  4. Problem with SANs on a three node cluster

    A SAN would be a shared device of some sort. When you say you have "two" SANs, do you mean you have two boxes serving independent iSCSI LUNs, or two boxes in a failover capacity (meaning one set of LUNs)? In either case, this becomes a simple matter of mapping iSCSI LUNs to your cluster. I...
  5. I cannot make a proxmox cluster with msi z790 motherboard.

    Post the contents of /etc/network/interfaces for all 3 nodes. Also double-check each machine's hosts file and make sure they contain the same records for all 3 machines, and that they all match the output of hostname for each.
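A quick way to do that check on each node. A sketch — the node names pve1/pve2/pve3 are placeholders, substitute your own:

```shell
# Run on every node: each should print identical address records,
# and the local hostname must match its own record.
for n in pve1 pve2 pve3; do
    echo "== $n =="
    getent hosts "$n"    # resolves via /etc/hosts (or DNS) on this node
done
hostname                 # compare against the record printed above
```

If any node prints a different address for the same name, corosync membership will misbehave even though everything else looks fine.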
  6. Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    I feel you. This leaves you with 2 options: 1. do not use pveceph, which you totally CAN; 2. open a feature suggestion here: https://bugzilla.proxmox.com/enter_bug.cgi
  7. I cant upgrade proxmox

    Your issues are larger than can be corrected with dpkg/apt. You have two choices here: 1. regress EVERYTHING you did on this system (software installed, kernel line arguments, etc.) until you can have that command complete without blowing up your computer; 2. wipe and reinstall, and pay attention...
  8. Ceph rbd du shows usage 2-4x higher than inside VM

    Ceph usage gives you raw utilization numbers. If you use ceph df detail it will give you actual statistics, including compression (which I suspect is the cause of the difference between du and df).
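The raw numbers also fold in replication overhead, which is why they can look several times larger than what the VM sees. A sketch of the commands to compare; the pool name is a placeholder:

```shell
# Cluster-wide raw utilization (includes replication overhead):
ceph df
# Per-pool STORED vs USED plus compression statistics:
ceph df detail
# Per-image allocation inside a pool:
rbd du -p <poolname>
```

Comparing STORED (logical data) against USED (raw, after replication and before compression savings) in the detail output usually accounts for the gap.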
  9. Ceph rbd du shows usage 2-4x higher than inside VM

    The one other factor that could be at play is the size of your PGs. How many PGs are in the pool, and what's the raw capacity?
  10. System won't fully reboot or shutdown

    Disable ACPI extended modes in the BIOS.
  11. Update PVE, Ceph and PBS

    This can be done without any downtime. And yes, you want to make sure that the running Ceph version is available on the next distro, so before you start, upgrade your Ceph to Squid. This process is non-disruptive and will not introduce downtime. See...
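Before starting, it's worth confirming the cluster is healthy and that every daemon already runs the same Ceph release. A sketch of the pre-flight checks (pve8to9 is Proxmox's own checker shipped with PVE 8 for the 8-to-9 jump; adjust to the tool for your upgrade path):

```shell
# Every mon/mgr/osd should report the same release before you touch PVE:
ceph versions
# Cluster must report HEALTH_OK before any upgrade step:
ceph -s
# PVE's pre-upgrade checklist, run on each node:
pve8to9 --full
```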
  12. My RDS-2025 VM running extremely slow.

    That should be your first option ;) PVE8 is not EOL yet, and even when it is (this August) you can keep running for some time after. It might be the better option, especially if you don't need anything PVE9-specific. This will give you time to figure out the issue on a PVE9 testbed before...
  13. Proxmox Cluster Random Reboot

    So here's what I'd suggest: don't use 10.13.30.x for corosync at all. Assign arbitrary addresses to bond0 and bond1 (edit: ON DIFFERENT SUBNETS). Ideally, they should be on separate VLANs too. Use those addresses as ring0 and ring1.
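The resulting two-ring layout would look something like this in the nodelist section of corosync.conf. A sketch only — the node names and the 10.99.0.x/10.98.0.x subnets are placeholders for whatever you assign to bond0 and bond1:

```
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.99.0.1   # bond0, subnet A
    ring1_addr: 10.98.0.1   # bond1, subnet B
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.99.0.2
    ring1_addr: 10.98.0.2
  }
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.99.0.3
    ring1_addr: 10.98.0.3
  }
}
```

On PVE, edit the clustered copy at /etc/pve/corosync.conf and increment config_version in the totem section so the change propagates to all nodes.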