Search results

  1. Dedicated Ceph Setup for Multiple PVE Clusters

    Since you are effectively disaggregating Ceph from PVE and dedicating hardware resources to it, an alternative option would be Blockbridge - both the control and data paths are natively multi-cloud/multi-tenant.
  2. FC SAN with Proxmox cluster

    For Proxmox, our current recommendation is iSCSI. We build all-flash datacenter solutions primarily used by service providers, where performance, availability, and automation are essential. We've been helping folks run Proxmox on shared storage for quite a while now. Initially, this was using...
  3. multipath config error after upgrading to version 7

    Multipath is just above the OS layer and below the Virtualization (Proxmox) layer. You should google something like "MD32xxi dell multipath guide linux" - the very first hit seems to be very promising...
  4. FC SAN with Proxmox cluster

    Sounds like by process of elimination you have arrived at my original advice :) : Clustered Filesystems with no direct support from PVE (https://en.wikipedia.org/wiki/Clustered_file_system#Examples). Of course there is also Blockbridge, however it's not practical when the goal is to re-use...
  5. FC SAN with Proxmox cluster

    Neither thin LVM nor ZFS can be used natively with shared storage.
  6. FC SAN with Proxmox cluster

    I wouldn't say it's futile, it just requires more work than a simple Proxmox cluster installation. You can take a LUN (I'd start with a single one instead of 22), deploy and configure Oracle Cluster File System (OCFS) on each of the cluster nodes, and then use Proxmox directory storage to store... (a rough command sketch for this approach follows the results list)
  7. FC SAN with Proxmox cluster

    Without knowing low-level details about your SAN, I am going to assume a few things, chiefly that your SAN has built-in disk protection (RAID). That means the LUNs that you present to clients (PVE nodes) are already protected from disk failure. A very rough equivalent of a VMware VMFS datastore...
  8. FC SAN with Proxmox cluster

    While there may be a valid reason to program WWIDs individually, most likely you can get away with using a wildcard. Keep in mind that PVE runs on Debian, so this low-level storage configuration is standard and not PVE-specific. If you expand your search, for example google "multipath command... (an example multipath.conf fragment follows the results list)
  9. Question about ISCSI storage, redundancy, and multipathing?

    The best way is to test it. Make the LUN available to Proxmox, then manually move it to the other controller. The disk UUID will remain the same, and the LVM (if you use that) will remain the same. Could there be some other dependencies at the app layer that will trip you up? Possibly. Even with... (a short verification sketch follows the results list)
  10. Question about ISCSI storage, redundancy, and multipathing?

    There are several layers of redundancy involved in an enterprise SAN connection: port/path (NIC/cable/switch) and controller. Keep in mind that Compellent is not active/active for a given LUN. The LUN "belongs" to one of the two controllers and will only be seen via a network path on that...
  11. Import Disk from NFS Slow

    Unfortunately, any thoughts on what your perceived bottleneck might be would be completely wild guesses. You've provided insufficient information about your environment. More importantly, you've described none of the troubleshooting steps you took. It's entirely possible that you've reached the max...
  12. Should I worry about input/output errors?

    A 95% chance it's a bad disk, a 5% chance it's the PCI slot.
  13. Should I worry about input/output errors?

    The first suspect is the non-enterprise disk. It has no power-loss protection, so an unexpected shutdown could very likely lead to file system issues. Perhaps an auto-fsck was run on boot, perhaps it fixed things. Are there guarantees? Of course not. It is probably a good idea to re-run all file...
  14. Install Ceph on dell PowerEdge 720 with perc

    While there are a few alternatives to Ceph, GlusterFS among them, none of them is designed for use on a RAID'ed local disk. Distributed file systems are built with distributed data protection in mind. They replicate data for recovery from component failure, which means the blocks are replicated more...
  15. HELP PLEASE - Problem upgrading to PVE7

    Judging from the screenshots, you have major hardware issues. Three disks are missing (USB?), and sdf has gone kaput. My advice would be to build a bootable live USB, boot from it, and deal with each disk you expect to be present one by one from the console. Make sure they are visible to the OS... (a disk-triage command sketch follows the results list)
  16. HELP PLEASE - Problem upgrading to PVE7

    There are many more Google hits for that error message, e.g.: https://askubuntu.com/questions/287021/how-to-fix-read-only-file-system-error-when-i-run-something-as-sudo-and-try-to The issue is not Proxmox-specific; it's an OS upgrade problem that many people have run into over the years. Go... (a recovery command sketch follows the results list)
  17. HELP PLEASE - Problem upgrading to PVE7

    https://askubuntu.com/questions/269856/dpkg-error-unable-to-access-dpkg-status-area-read-only-file-system
  18. HELP PLEASE - Problem upgrading to PVE7

    A very basic intro, from an ssh session: apt install screen, then run screen, then run your commands. If you want to disconnect from screen and leave it running: Ctrl-A, D. If you want to reconnect, either after a manual disconnect or a network crash: screen -r. There is much more to screen functionality, you can...
  19. HELP PLEASE - Problem upgrading to PVE7

    If it was an ssh session and not the console that crashed, then most likely it got killed and the process got interrupted midway. Your best and only option is to restart whatever step you were on. In the future you should use "screen" to run important interactive tasks over ssh.
  20. One LVM for multiple nodes

    @Mike Tkatchouk I think the important part to highlight is that you are using _thick_ LVM, so as the VG gets sliced, a thick LV gets created for each virtual disk. This LV is used by only one node (the one where the VM is active), and that is enforced by PVE. This is a fine and fully supported method by... (a shared thick-LVM setup sketch follows the results list)
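
Command sketches

For the OCFS2 approach in result 6, a minimal sketch of the moving parts, assuming a Debian-based PVE node, a multipathed FC LUN at /dev/mapper/mpatha, and the placeholder names "pvefs" and /mnt/ocfs2 (none of these come from the thread):

  apt install ocfs2-tools                        # on every node
  # describe all nodes in /etc/ocfs2/cluster.conf (identical file on every node), then:
  dpkg-reconfigure ocfs2-tools                   # enable the o2cb cluster stack at boot, set the cluster name
  mkfs.ocfs2 -L pvefs -N 4 /dev/mapper/mpatha    # run once, from a single node; -N = node slots
  mkdir -p /mnt/ocfs2
  mount -t ocfs2 /dev/mapper/mpatha /mnt/ocfs2   # on every node; add to /etc/fstab with _netdev
  pvesm add dir ocfs2-store --path /mnt/ocfs2 --shared 1 --content images,iso

The last command registers the mount as shared directory storage in PVE, which is what lets virtual disk images live on the clustered filesystem.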
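
For the wildcard idea in result 8, an example /etc/multipath.conf fragment; the 3600a0980 WWID prefix is purely illustrative and has to be replaced with whatever prefix your array actually reports:

  blacklist {
          wwid .*                     # ignore everything by default
  }
  blacklist_exceptions {
          wwid "3600a0980.*"          # except LUNs whose WWID matches the array's prefix
  }
  defaults {
          find_multipaths yes
  }

After editing, multipath -r reloads the maps and multipath -ll shows what was picked up.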
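
For the failover test in result 9, a few commands for comparing state before and after manually moving the LUN to the other controller (device names are placeholders, not from the thread):

  multipath -ll            # individual path states flip, but the mpath device itself should not change
  ls -l /dev/disk/by-id/   # the wwid-based symlinks for the LUN should stay identical
  pvs; vgs; lvs            # if LVM sits on top, the PV/VG/LV signatures should be unchanged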
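
For the per-disk checks in result 15, a rough triage sequence from a live USB environment; sdX and sdXN are placeholders, and smartctl requires the smartmontools package:

  lsblk -o NAME,SIZE,MODEL,SERIAL   # is the disk enumerated by the kernel at all?
  dmesg | grep -i sdf               # look for I/O errors on the disk that dropped out
  smartctl -a /dev/sdX              # SMART health, reallocated sectors, error log
  fsck -f /dev/sdXN                 # only on unmounted filesystems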
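
For result 16, a hedged recovery sequence (not from the thread itself) once the cause of the read-only remount has been dealt with:

  mount -o remount,rw /   # give the root filesystem write access back
  dpkg --configure -a     # let dpkg finish whatever the upgrade left half-done
  apt -f install          # resolve any remaining broken packages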
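
For the thick-LVM setup in result 20, a minimal sketch on top of a multipathed shared LUN; the device and volume group names are placeholders:

  pvcreate /dev/mapper/mpatha
  vgcreate vg_san /dev/mapper/mpatha
  pvesm add lvm san-lvm --vgname vg_san --shared 1 --content images

PVE then carves one thick LV per virtual disk out of the VG and makes sure only the node where the VM is active has it in use.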
