Search results

  1. CephFS share via NFS to VMware

    Nothing worth doing is easy. VMware doesn't do this by itself either; you're depending on third-party software to do so. The better question, in my view, is why you are trying to fix what isn't broken (except that vSphere 5.5 has been end-of-support for almost 2 years now and should be considered...
  2. CephFS share via NFS to VMware

    If that's all that's stopping you: https://github.com/Corsinvest/cv4pve-barc
  3. ceph performance is really poor

    Yes, I did. It ended up not being Ceph at all; this cluster had Sophos AV running, and it slowed many disk operations to a crawl. If you can rule out any in-memory process that could be slowing you down, it's probably just your slow disks. Spinning disks are only capable of delivering ~100 IOPS...
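As a back-of-envelope sketch of what that ~100 IOPS figure implies (the disk count and replica count below are my own illustrative assumptions, not from the thread): with 3x replication, each client write turns into roughly three backend writes, so the aggregate spindle IOPS divide accordingly.

```shell
# Illustrative only: 8 spinning disks at ~100 IOPS each, 3x replication.
DISKS=8
IOPS_PER_DISK=100
REPLICAS=3
# Each client write costs roughly REPLICAS backend writes, so:
echo "approx client write IOPS: $(( DISKS * IOPS_PER_DISK / REPLICAS ))"
```

Even a modest spinner pool tops out at a few hundred sustained write IOPS, which is why an AV scanner hooking every disk operation hurts so much.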
  4. migration from iSCSI single to iSCSI multipath

    It is theoretically possible to do all of the above without a reboot, but it would necessitate moving all your VMs to other nodes first (in which case you can reboot safely anyway).
  5. migration from iSCSI single to iSCSI multipath

    It's a relatively simple matter of: 1. bring up your second interface; 2. install and configure MPIO; 3. edit your iSCSI targets and replace the disks with their MPIO counterparts; 4. filter out /dev/sd* in lvm.conf; 5. reboot. The cluster should remain unaffected, as the actual LVM is the same. I...
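For the lvm.conf step, a minimal sketch of what the filter might look like (the mpath device pattern is an assumption; adjust to your actual device layout). The idea is to reject the bare /dev/sd* paths so LVM only scans the multipath devices:

```
# /etc/lvm/lvm.conf (sketch, not verbatim from the thread)
devices {
    # accept multipath devices, reject their underlying sd* paths,
    # then accept everything else
    filter = [ "a|^/dev/mapper/mpath.*|", "r|^/dev/sd.*|", "a|.*|" ]
}
```

LVM applies filter entries in order and stops at the first match, so the accept rule for the mpath devices must come before the reject rule for /dev/sd*.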
  6. No Space Left on Device when restoring from tar.lzo

    I couldn't say; I was simply pointing out that your archive extracts to 44 GB / 42 GiB. I don't know where you saw the 32 GB figure, so I can't comment. Edit: IF the original FS was ZFS, you may have read the post-compression disk utilization.
  7. Ceph and a Datacenter failure

    then all you have to do is edit your CRUSH rules to distribute your objects like so: rule replicated_ruleset { ruleset X type replicated min_size 2 max_size 3 step take default step choose firstn 2 type datacenter step chooseleaf firstn -1 type host step emit } I am curious about...
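The same rule from the snippet above, laid out as it would appear in a decompiled CRUSH map (X is the placeholder ruleset id from the original):

```
rule replicated_ruleset {
    ruleset X
    type replicated
    min_size 2
    max_size 3
    step take default
    step choose firstn 2 type datacenter
    step chooseleaf firstn -1 type host
    step emit
}
```

Read top to bottom: pick two datacenter buckets first, then spread the remaining replicas across hosts within them, so no single datacenter holds all copies.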
  8. Ceph and a Datacenter failure

    The first problem to solve is the matter of fencing. With Ceph this is easier, since CRUSH is hierarchical: you can create datacenter-level objects and distribute replication down; see https://ceph.io/geen-categorie/manage-a-multi-datacenter-crush-map-with-the-command-line/ for more discussion...
  9. How do I license MS Server 2016 standard and assign cores?

    That's... sort of true, but there is a way around it. This applies if you PAY for the host server's Windows OS even if you're not using it. How to prove that is another question, which my Microsoft rep was never able to really answer unless you have a SPLA. In other words: have a SPLA, or don't use...
  10. Cannot add node(s) to cluster

    Yes, but there is no real corrective action I can prescribe. It is most likely network-related, although I can't say what. The problem was cured by blowing everything away and reinstalling; there is no rational reason for it having worked afterwards, but no rational reason for it not working in the...
  11. Adding external server to Proxmox CEPH Cluster

    I suppose you should describe what your needs are in this context. Since CephFS presents a normal POSIX file system (which you could access with a client as @dcsapak recommended, or even more simply via NFS or SSHFS), how would latency interfere with your remote access?
  12. [TUTORIAL] PVE 6.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Just curious: why are you formatting your LUNs? With lvmlockd you should be able to use lvm-thin and just map LVs directly to a virtual machine... I haven't actually tried it, but it should work...
  13. Need help with vmware migration

    You need to make sure the CORRECT disk is set in the VM's boot order:
  14. I need to migrate a Proxmox host out of Ceph storage into a VMware cluster, any ideas?

    ceph osd lspools will show you your pools; rbd ls -p poolname will show you your objects, where poolname is the one you identified above.
  15. I need to migrate a Proxmox host out of Ceph storage into a VMware cluster, any ideas?

    qemu-img convert -f raw -O vmdk rbd:pool/object /path/to/disk.vmdk Edit: removed brackets for clarity ;)
  16. CentOS 8 Container?

    What version of Proxmox? What is the minimum version required to support it?
  17. Poor performance

    forgive the crude markups :) creating a backup: [screenshot] restoring a backup: [screenshot] The datastore for restoration is the same one you will choose during backup creation.
  18. Poor performance

    htop is not part of a default installation; you can always add it by installing it, e.g. apt-get install htop. Incidentally, the less pretty "top" is present by default in almost all Linux distros :) https://pve.proxmox.com/wiki/Backup_and_Restore This may have answers to a ton of questions...
  19. [SOLVED] How to set up a vm using on a RBD storage

    Since your machine boots when set to IDE, the simple fix is to add a NEW drive on the SCSI bus, install the VirtIO driver, shut down, and change the disk type of your BOOT disk. Instructions available here...
