Search results

  1. Ceph SSD for WAL

    I'm looking for a good SSD to offload the WAL for my OSDs. Which SSDs can you recommend? I have already looked at the following models: SM883, Intel D3-S4610 series. Best regards
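    Whichever SSD is chosen, the WAL can be placed on it at OSD-creation time. A minimal sketch with `ceph-volume` (device paths are examples, substitute your own data disk and SSD partition):

    ```shell
    # Create a BlueStore OSD with its WAL on a separate fast device.
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.wal /dev/nvme0n1p1   # SSD/NVMe partition for the WAL
    # Optionally also place the RocksDB metadata on fast storage:
    #   --block.db /dev/nvme0n1p2
    ```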
  2. Proxmox Buster with discard option in virtio?

    Will Proxmox on Buster add support for discard in virtio-blk? https://git.qemu.org/?p=qemu.git;a=commitdiff;h=37b06f8d46fe602e630e4 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d548e65904ae43b0637d200a2441fc94e0589c30
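    For context, discard is already an option on Proxmox disk definitions; a sketch with `qm set` (VM ID 100 and the storage name are examples):

    ```shell
    # Discard has long worked with SCSI disks on the VirtIO SCSI controller:
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
    # With the linked QEMU/kernel changes, the same option would apply to
    # virtio-blk disks as well:
    qm set 100 --virtio0 local-lvm:vm-100-disk-0,discard=on
    # Inside the guest, trim and verify with:
    fstrim -av
    ```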
  3. LVM in a Virtual Machine

    Hi, I have the following situation: for testing ceph-ansible we use a virtual environment on Proxmox. There are 4 VMs simulating a Ceph environment; the block devices are located in an LVM on the Proxmox node. So far the OSDs were installed with ceph-disk and BlueStore. ceph-disk will not be...
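    Since ceph-disk is deprecated, the usual replacement is `ceph-volume`. A sketch of creating a BlueStore OSD the ceph-volume way (device and LV names are examples):

    ```shell
    # Directly on a block device:
    ceph-volume lvm create --bluestore --data /dev/vdb
    # Or on a pre-created logical volume:
    ceph-volume lvm create --bluestore --data vg_osd/lv_osd0
    ```

    In ceph-ansible the corresponding setting is the `lvm` OSD scenario instead of the old ceph-disk-based ones.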
  4. ceph rbd overprovisioning

    Hello, I wanted to ask how you deal with this topic in general. The RBD images are created thin-provisioned. The storage display shows only the currently used space, not the possible usage by the created images. Regards
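    One way to compare provisioned versus actually allocated space is `rbd du`; a sketch (pool and image names are examples):

    ```shell
    # "PROVISIONED" is the full virtual size of each image,
    # "USED" the space actually allocated by written extents.
    rbd du -p rbd              # all images in a pool
    rbd du rbd/vm-100-disk-0   # a single image
    # Pool-level raw usage:
    ceph df
    ```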
  5. Show VM IOPS from Ceph storage

    Hello, we use Proxmox with Ceph (ceph version 12.2.8) as storage. Is there a way to see how many IOPS a VM uses? I would like to know which VM I have to limit accordingly. Best regards
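    A sketch of one approach: read the per-disk I/O counters of a running VM through the QEMU monitor, then throttle the noisy disk with the drive's IOPS limits (VM ID 100, the storage name, and the limit values are examples):

    ```shell
    # Open the QEMU monitor of the VM:
    qm monitor 100
    # at the "qm>" prompt, dump per-disk read/write operation counters:
    #   info blockstats
    # Sample twice and divide the delta by the interval to get IOPS.

    # To cap a disk at e.g. 500 read and 500 write IOPS:
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,iops_rd=500,iops_wr=500
    ```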
  6. qdevice for a scalable cluster

    Hi, at the moment we are using a cluster of 2 nodes. To achieve a quorum I would like to follow the approach of https://pve.proxmox.com/pipermail/pve-devel/2017-July/027732.html in a VM. The post says exactly: "Note that we highly recommend the ff-split algorithm!" This implies that it...
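    On newer Proxmox VE releases this no longer requires following the mailing-list recipe by hand; a sketch, assuming 10.0.0.5 is the VM running `corosync-qnetd`:

    ```shell
    # On both cluster nodes:
    apt install corosync-qdevice
    # Register the external qnetd host as a quorum device:
    pvecm qdevice setup 10.0.0.5
    # Verify the vote count afterwards:
    pvecm status
    ```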
  7. [SOLVED] pvecm addnode = 400 Parameter verification failed

    Hello, I want to set up a cluster and I am failing when adding node B to node A: pvecm addnode 192.168.1.11 400 Parameter verification failed. node: invalid format - value does not look like a valid node name pvecm addnode <node> [OPTIONS] pveversion -v proxmox-ve: 5.1-43...
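    The error hints at the usual cause: `pvecm addnode` expects a node *name*, not an IP address, and joining is normally done from the new node with `pvecm add`. A sketch (192.168.1.10 is a placeholder for the existing cluster member's IP):

    ```shell
    # Run on node B, the node being added:
    pvecm add 192.168.1.10
    # Verify cluster membership:
    pvecm status
    ```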
  8. Windows 2016 virtio hard disk error code 10

    Hello, pveversion -v proxmox-ve: 5.1-26 (running kernel: 4.13.4-1-pve) pve-manager: 5.1-36 (running version: 5.1-36/131401db) pve-kernel-4.13.4-1-pve: 4.13.4-26 libpve-http-server-perl: 2.0-6 lvm2: 2.02.168-pve6 corosync: 2.4.2-pve3 libqb0: 1.0.1-1 pve-cluster: 5.0-15 qemu-server: 5.0-17...

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
