Hello guys,
I updated my PVE cluster yesterday to the newest Ceph version and did not notice any issues at first.
The VMs and CTs run normally; I can read and write to the virtual disks, and even create new ones, just fine! Migrating between nodes also works, as does deleting CTs...
But...
Hello,
I have a Linux container with a large Ceph RBD volume that I am supposed to move to another pool. From the GUI, however, that is only possible with the container shut down. Copying would take about 2 days, so I looked for alternatives and came across the...
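One alternative that might avoid a days-long offline copy is RBD live migration (available since Ceph Nautilus); a rough sketch with placeholder pool/image names, and I have not verified that it works while the volume is mapped via krbd:
# Link the image into the target pool (the source must not be open at this point)
rbd migration prepare oldpool/vm-101-disk-0 newpool/vm-101-disk-0
# Copy the data in the background
rbd migration execute newpool/vm-101-disk-0
# Finalize once the copy has finished
rbd migration commit newpool/vm-101-disk-0
Afterwards the container config would still need to be pointed at the new pool's storage entry.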
Hi,
When I run rbd ls -l cephpool I get the error below:
rbd: error opening vm-121-disk-0: (2) No such file or directory
This VM does not exist on the server. How can I delete the image?
Thank you.
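A minimal sketch of how one might clean this up, assuming vm-121-disk-0 really is an orphaned leftover (pool and image names taken from the error above):
# Check whether the image header still exists
rbd info cephpool/vm-121-disk-0
# If it does, remove the image
rbd rm cephpool/vm-121-disk-0
If rbd rm also fails with "No such file or directory", only parts of the image may be left behind and a closer look at the pool's objects would be needed.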
Hi All,
I'm using 3x PVE 7.1-10 nodes as a cluster. I have already connected a Synology box as shared storage, and all features are working properly at the moment.
I'm working on moving some workloads from a failing Red Hat OpenStack cluster with Ceph storage to the PVE cluster. As I did many times...
/dev/rbd1
kvm: -drive file=/dev/rbd/rbd/vm-150-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap: Could not open '/dev/rbd/rbd/vm-150-disk-0': No such file or directory
TASK ERROR: start failed: QEMU exited with code 1
I suspect this issue has...
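A few checks that might help narrow it down, assuming the storage uses krbd and the pool is actually named rbd as the device path suggests:
# Does the image still exist in the pool?
rbd ls rbd | grep vm-150
# Which RBD devices are currently mapped on this node?
rbd showmapped
# If the image exists but is not mapped, try mapping it manually
rbd map rbd/vm-150-disk-0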
Hi Everyone,
I'm getting into using the Ceph rbd command because I need to identify storage usage in my Ceph cluster. When issuing the command below, it almost appears as though my snapshots are using an immense amount of space:
rbd -p pool du
Excerpt of one VM...
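Two things that might explain the numbers, using a placeholder image name: the per-snapshot USED values overlap (each snapshot line counts all data referenced at that point in time, not just the delta to the previous snapshot), and without the fast-diff feature rbd du has to scan every object, which is slow:
# Check whether object-map/fast-diff are enabled for the image
rbd info pool/vm-100-disk-0 | grep features
# Limit the report to a single image instead of the whole pool
rbd du pool/vm-100-disk-0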
Hello,
I'm trying to set up snapshot-based Ceph RBD mirroring between 2 Ceph clusters, each installed from a different PVE cluster.
I call them pve-c1 and pve-c2. Has anyone here already set this up successfully? At the moment I am only trying a one-way replication from pve-c1 to pve-c2.
Proxmox VE 6.3-2...
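For reference, a rough sketch of the command sequence for one-way, snapshot-based mirroring as I understand it from the Ceph docs (pool and image names are placeholders; this assumes both clusters run Ceph Octopus or newer, and for one-way replication the rbd-mirror daemon only needs to run on the receiving cluster, pve-c2):
# On both clusters: enable per-image mirroring on the pool
rbd mirror pool enable mypool image
# On pve-c1: create a bootstrap token
rbd mirror pool peer bootstrap create --site-name pve-c1 mypool > /root/peer-token
# On pve-c2: import the token, receive-only
rbd mirror pool peer bootstrap import --site-name pve-c2 --direction rx-only mypool /root/peer-token
# On pve-c1: enable snapshot-based mirroring per image and schedule mirror snapshots
rbd mirror image enable mypool/vm-100-disk-0 snapshot
rbd mirror snapshot schedule add --pool mypool 30m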
Good morning. I have a cluster with 16 Proxmox hosts and an external Ceph cluster configured to store the VMs from Proxmox. I was using it normally, but recently we had to do maintenance on the storage, so we moved all the VMs to another storage. I allocated the RBD storage from...
Hello,
I searched this forum and Google but I cannot find a final answer.
We have a Proxmox cluster with a remote Ceph Luminous cluster.
I see that I get much faster writes with cache=writeback in the disk options in Proxmox (random 4k up to 16x faster and sequential up to 10x faster) than with cache=none...
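For reference, the setting being compared is the per-disk cache option; a sketch of how it is set, with VM ID, storage and disk names as placeholders:
# Set writeback caching on an existing Ceph-backed disk
qm set 100 --scsi0 ceph-rbd:vm-100-disk-0,cache=writeback
# or, equivalently, in /etc/pve/qemu-server/100.conf:
scsi0: ceph-rbd:vm-100-disk-0,cache=writeback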
Hi all,
We are using OpenStack for most of our production instances with a Ceph storage backend. Recently we added additional hardware, set up Proxmox v6, and attached it to the same Ceph storage cluster.
With the Ceph storage integration we tested a couple of instances and it works perfectly fine...
Hi,
I have defined a storage of type "RBD" with only SSD drives connected to the relevant pool.
All RBDs are available:
root@ld3955:~# rbd ls -l ssd
NAME          SIZE  PARENT FMT PROT LOCK
vm-100-disk-0 1 MiB...
I am running PVE 6.0. On this PVE node I have already started 2 VMs with RBD disks on my Ceph storage. However, sometimes when I need to start a third VM using an RBD disk, PVE errors with the following message. It can only be solved by rebooting the PVE node, after which I can start 3 VMs or more using RBD...
Hi everyone,
I've run into a particular issue. I have a ClearOS VM in Proxmox acting as a domain controller with roaming profiles for some Windows PCs.
I have a 3TB disk in the Proxmox machine that I'd like to share with the ClearOS VM and other VMs in the future.
At the moment I'm exporting the...
Hello guys,
I am trying to use a persistent volume claim dynamically, after defining a storage class to use Ceph storage, on a Proxmox VE 6.0-4 one-node cluster.
The persistent volume gets created successfully on Ceph storage, but pods are unable to mount it. It throws the error below. I am not sure...
I have a cluster that has relatively heavy IO and consequently free space on the ceph storage is constantly constrained. I'm finding myself performing fstrim on a more and more frequent interval. Is there a way to auto trim a disk for an lxc container?
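One workaround might be a cron job that calls pct fstrim for the container; a minimal sketch, with container ID 101 as a placeholder:
#!/bin/sh
# /etc/cron.weekly/pct-fstrim (hypothetical script name)
# Trim the filesystems of container 101; add one line per container as needed
pct fstrim 101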
Fresh single node install currently, with the intent to add 2 more nodes later that will use the Ceph storage from the first node.
I installed the first node and installed ceph.
I have 2 virtual bridges
- vmbr0, 10.0.1.2/16 - general network
- vmbr3, 10.10.10.2/24 - ceph network
I...
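For context, this is roughly how the Ceph network would be reflected in /etc/pve/ceph.conf; a sketch assuming the 10.10.10.0/24 subnet on vmbr3 is meant to carry both public and cluster traffic:
[global]
    public_network = 10.10.10.0/24
    cluster_network = 10.10.10.0/24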
There is a Proxmox VE 5.3-11 server running several VMs.
Disk images are stored on Ceph RBD.
I modified "print_drivedevice_full" subroutine in /usr/share/perl5/PVE/QemuServer.pm for RBD tuning as described in https://pve.proxmox.com/pipermail/pve-devel/2018-June/032787.html
Here is my patch...
Hello!
I have successfully set up a PVE cluster with Ceph.
After creating ceph pools and related RBD storage I moved the VM's drive to this newly created RBD storage.
Due to some issues I needed to reboot all cluster nodes one after the other.
Since then the PVE storage reports that all RBD is...