Search results

  1. 40Gb/s Mellanox InfiniBand

    First I tried this, but I found out later the settings weren't persistent: echo eth > /sys/bus/pci/devices/0000:XX:00.0/mlx4_port1 and echo eth > /sys/bus/pci/devices/0000:XX:00.0/mlx4_port2. They don't survive reboots, or even ifdowns. Basically for drivers on linux, you install the Mellanox...
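    A persistent alternative can be sketched as a module option instead of the sysfs writes (an assumption on my part: this uses the in-tree mlx4_core driver rather than Mellanox OFED, and the file name is just a convention):

    ```
    # /etc/modprobe.d/mlx4_core.conf  (file name is an example)
    # Ask mlx4_core to bring both ConnectX ports up as Ethernet at
    # driver load time (1 = InfiniBand, 2 = Ethernet), one value per port.
    options mlx4_core port_type_array=2,2
    ```

    Because the option is applied when the driver loads, it survives reboots and ifdowns, unlike the sysfs echo approach.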
  2. 40Gb/s Mellanox InfiniBand

    Did you ever find another way to change this permanently?
  3. Access Proxmox managed Ceph Pool from standalone node

    So for the record everyone: as I couldn't get the monitors or OSDs to bind to more than one address (IPv6 or IPv4), we simply created some static routes on the standalone host, and that enabled it to reach the segregated subnet that Ceph is on. Of course first we had to have IPv6 addresses on...
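    The static-route workaround can be sketched like this (all addresses and the interface name below are made-up placeholders; a Debian-style interfaces stanza is one way to make the route persistent):

    ```
    # /etc/network/interfaces on the standalone host (example values)
    auto eth1
    iface eth1 inet6 static
        address fd00:a::50/64
        # reach the isolated ceph public_network via a cluster node
        # that has a leg in both subnets
        post-up ip -6 route add fd00:b::/64 via fd00:a::1
    ```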
  4. Access Proxmox managed Ceph Pool from standalone node

    From what I can see, the monitors and OSDs only bind to one address, no matter what you put in there.
  5. Adding a Second Public Network to Proxmox VE with Ceph Cluster

    Man I wish this thread went longer. I'm having the same issue. Any luck @b2a225 ?
  6. Access Proxmox managed Ceph Pool from standalone node

    So I tried just another IPv6 subnet, but still no luck.
  7. Access Proxmox managed Ceph Pool from standalone node

    Oh okay. I will look into how to go about routing it.
  8. Access Proxmox managed Ceph Pool from standalone node

    Hello. To help make migration much easier, I'd like to connect a standalone node I have to the Proxmox managed ceph storage on a three node cluster by adding that storage as RBD on the standalone. The issue is, the current ceph public_network is isolated, because it's directly attached 40G NICs...
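    Attaching a cluster's pool as RBD storage on an external Proxmox node boils down to a storage.cfg entry along these lines (a sketch: the storage ID, monitor IPs, pool name, and username are all placeholders; the matching keyring is expected at /etc/pve/priv/ceph/<storage-id>.keyring):

    ```
    # /etc/pve/storage.cfg on the standalone node (all values are examples)
    rbd: cluster-rbd
        content images
        krbd 0
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool vm-pool
        username admin
    ```

    The standalone node still has to be able to reach the monitor addresses, which is exactly the routing problem discussed in this thread.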
  9. [SOLVED] Remove or reset cluster configuration.

    Worked for me too. Thank you! @FloUhl and @Gilberto Ferreira !
  10. Joining Cluster pve-ssl.pem error

    This seemed to do the trick for me as well. Thank you for sharing!
  11. Disk IOPS and Throughput Limit Best Practices

    Hello all, we're looking for best practices regarding setting IOPS and throughput limits on "Hard Disks" in Proxmox. There are obviously limit and burst settings under Advanced on a given disk. Questions we have: Is there a way to see, from the host level, the current IOPS/throughput a...
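    For reference, the limits set under Advanced in the GUI end up as options on the disk line of the VM config; a sketch with made-up values (VM ID, storage, and all numbers are examples):

    ```
    # /etc/pve/qemu-server/100.conf (example values throughout)
    # iops_* are operations/s, mbps_* are MB/s; the *_max variants
    # set the burst ceiling above the sustained limit.
    scsi0: local-zfs:vm-100-disk-0,size=32G,iops_rd=500,iops_rd_max=1000,iops_wr=500,iops_wr_max=1000,mbps_rd=100,mbps_wr=100
    ```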
  12. [SOLVED] Problem add external rbd storage

    This worked for me too, thank you!! (I had to restart afterwards as well; before that it only worked with the rbd tool in the CLI)
  13. Cannot Live Migrate with "Discard" Set

    So we had this problem again, with the same setup as described at the start of this thread, but while live migrating a VM that didn't have discard set :/ The only thing I could find worth noting was that this VM did not have "format=raw", as mentioned in the bug referenced above. What happened...
  14. VM doesn't start Proxmox 6 - timeout waiting on systemd

    I get it. But that's what we did, and we've been good since. Plus the kernel change of course.
  15. VM doesn't start Proxmox 6 - timeout waiting on systemd

    Checkout https://forum.proxmox.com/threads/vm-doesnt-start-proxmox-6-timeout-waiting-on-systemd.56218/post-276920
  16. VM doesn't start Proxmox 6 - timeout waiting on systemd

    Did you follow all the suggestions earlier in this thread? I haven't had any issues since I reported things were fixed a while back.
  17. Cannot Live Migrate with "Discard" Set

    We have ZFS for the Proxmox OS, and then a separate volume for VMs. We're not using LVM, just ZFS, which I believe natively supports thin provisioning (otherwise we wouldn't see the storage usage shrink when we run "fstrim -a" with discard set).
  18. HowTo defrag an zfs-pool?

    This was very helpful. Thank you!
  19. HowTo defrag an zfs-pool?

    Good to know. That said, is there a certain fragmentation percentage I should monitor for, so that I don't have to find out when customers start complaining about performance issues?
