Hello,
I searched this forum and Google but I cannot find a definitive answer.
We have a Proxmox cluster with a remote Ceph Luminous cluster.
I see much faster writes with cache=writeback in the disk options in Proxmox (random 4K up to 16x faster and sequential 10x faster) than with cache=none...
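For reference, the cache mode is set per virtual disk; a minimal sketch, assuming a VM with ID 100 and an RBD-backed scsi0 volume on a storage called ceph-vm (both names are made up):
qm set 100 --scsi0 ceph-vm:vm-100-disk-0,cache=writeback
This ends up as cache=writeback on the scsi0 line of /etc/pve/qemu-server/100.conf.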
Hi all,
We are using OpenStack for most of our production instances with a Ceph storage backend. Recently we added additional hardware, set up Proxmox v6, and attached it to the same Ceph storage cluster.
With the Ceph storage integration we tested a couple of instances and it works perfectly fine...
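For anyone attaching Proxmox to an existing external cluster in a similar way, the RBD storage definition in /etc/pve/storage.cfg looks roughly like the sketch below (storage ID, monitor addresses, and pool are placeholders, not actual values), with the client keyring copied to /etc/pve/priv/ceph/<storage-id>.keyring:
rbd: ceph-ext
    content images,rootdir
    krbd 0
    monhost 192.168.10.11 192.168.10.12 192.168.10.13
    pool vm-pool
    username admin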
Hi,
I have defined a storage of type "RBD" backed by a pool that uses only SSD drives.
All RBDs are available:
root@ld3955:~# rbd ls -l ssd
NAME          SIZE  PARENT FMT PROT LOCK
vm-100-disk-0 1 MiB...
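As an aside, limiting a pool to SSD OSDs is usually done through a device-class CRUSH rule; a sketch for Luminous or later (the rule name is illustrative):
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool set ssd crush_rule ssd-only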
I am running PVE 6.0. On this PVE node I have already started 2 VMs with RBD disks on my Ceph storage. However, sometimes when I need to start a third VM using an RBD disk, PVE errors out with the following message. It can only be solved by rebooting the PVE node, after which I can start 3 or more VMs using RBD...
Hi everyone,
I've run into a particular issue. I have a ClearOS VM in Proxmox acting as a domain controller with roaming profiles for some Windows PCs.
I have a 3 TB disk in the Proxmox machine that I'd like to share with the ClearOS VM and other VMs in the future.
At the moment I'm exporting the...
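The export method is cut off above; purely as an illustration of one common approach for sharing a host directory with several VMs, an NFS export could look like this (path and subnet are made up, and this may not match the poster's actual setup):
/mnt/share3tb 10.0.1.0/24(rw,sync,no_subtree_check)
added to /etc/exports on the host, followed by exportfs -ra.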
Hello guys,
I am trying to provision a persistent volume claim dynamically, after defining a storage class that uses Ceph storage, on a Proxmox VE 6.0-4 single-node cluster.
The persistent volume gets created successfully on the Ceph storage, but pods are unable to mount it. It throws the error below. I am not sure...
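For context, a storage class for the in-tree RBD provisioner typically looks like the sketch below; the monitor address, pool, and secret names are placeholders and the actual manifest may differ. Mount failures on RBD-backed PVs are often caused by image features the node kernel cannot handle, which is why only layering is enabled here:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.10.10.2:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
  imageFormat: "2"
  imageFeatures: layering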
I have a cluster with relatively heavy I/O, and consequently free space on the Ceph storage is constantly constrained. I find myself running fstrim at more and more frequent intervals. Is there a way to automatically trim a disk for an LXC container?
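One possible approach (a sketch, not a built-in feature) is to run pct fstrim on a schedule, for example from a cron file that trims every container once a week:
# /etc/cron.d/lxc-fstrim (hypothetical file): trim all containers every Sunday at 03:00
0 3 * * 0 root for id in $(pct list | awk 'NR>1 {print $1}'); do pct fstrim "$id"; done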
Fresh single-node install currently, with the intent to add 2 more nodes later that will use the Ceph storage from the first node.
I installed the first node and installed Ceph.
I have 2 virtual bridges:
- vmbr0, 10.0.1.2/16 - general network
- vmbr3, 10.10.10.2/24 - ceph network
I...
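For reference, keeping Ceph traffic on the dedicated bridge is normally done by pointing the Ceph networks at that subnet in /etc/pve/ceph.conf; a sketch using the subnets above:
[global]
    public_network = 10.10.10.0/24
    cluster_network = 10.10.10.0/24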
There is a Proxmox VE 5.3-11 server running several VMs.
Disk images are stored on Ceph RBD.
I modified "print_drivedevice_full" subroutine in /usr/share/perl5/PVE/QemuServer.pm for RBD tuning as described in https://pve.proxmox.com/pipermail/pve-devel/2018-June/032787.html
Here is my patch...
Hello!
I have successfully set up a PVE cluster with Ceph.
After creating Ceph pools and the related RBD storage I moved the VM's drive to this newly created RBD storage.
Due to some issues I needed to reboot all cluster nodes one after the other.
Since then the PVE storage reports that all RBD is...
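When the PVE storage status looks wrong after a rolling reboot, comparing what PVE and Ceph report directly is a reasonable first step (standard commands; the pool name is a placeholder):
pvesm status
ceph -s
rbd ls -l <poolname>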
Hi there,
I am running a 2-node Proxmox cluster and have mounted RBD images from a remote Ceph cluster (latest Mimic release). Currently we are using the RBD image mount as backup storage for our VMs (mounted at /var/lib/backup).
It all works fine unless an OSD or an OSD host (we have 3, each...
Just noticed this on one of my clusters; disk resize is failing with the following error message:
Resizing image: 100% complete...done.
mount.nfs: Failed to resolve server rbd: Name or service not known
Failed to update the container's filesystem: command 'unshare -m -- sh -c 'mount...
Hi all,
So I have two external Ceph clusters: one uses IPv4 and the other IPv6. When I'm using the IPv4 cluster, the dashboard shows the status and content of the RBD storage and I have no problem creating new images for VMs. But when I try to configure the storage information for the IPv6 cluster, I get...
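For comparison, an external RBD entry with IPv6 monitors would look roughly like this in /etc/pve/storage.cfg (addresses, storage ID, and pool are made up, and exact IPv6 formatting may vary by PVE version):
rbd: ceph-v6
    content images
    monhost 2001:db8::11 2001:db8::12 2001:db8::13
    pool vm-pool
    username admin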
Hello!
Can you please share some information on why the storage type "rbd" is only available for the Disk image and Container content types?
I would prefer to dump a backup to another RBD.
THX
Hi,
I have created a pool + image using these commands:
rbd create --size 500G backup/gbs
Then I modified the features:
rbd feature disable backup/gbs exclusive-lock object-map fast-diff deep-flatten
The last step was to create a client to get access to the cluster:
ceph auth get-or-create...
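The exact auth command is cut off above; purely as a reference, a client limited to RBD access on that one pool is typically created with caps like these (the client name is illustrative):
ceph auth get-or-create client.gbs mon 'profile rbd' osd 'profile rbd pool=backup'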
Hi Community,
to create an RBD image of 1T with an object size of 16K is easy. I did it like this:
rbd create -s 1T --object-size 16K --image-feature layering --image-feature exclusive-lock --image-feature object-map --image-feature fast-diff --image-feature deep-flatten -p Poolname vm-222-disk-4...
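To verify the result, rbd info prints the object size as an order value (16 KiB corresponds to order 14):
rbd info Poolname/vm-222-disk-4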
I have an intermittent problem with storage returning 0 values for a specific RBD pool. It's only happening on one cluster, and there doesn't seem to be a correlation with which node's context is being called...
Recently I had to re-install Proxmox on my SSDs since replication is not supported on LVM, and I had to make "sort of a backup" of some files from a container, around 250 GB in total. To achieve that I mounted a disk on Ceph storage, transferred the files to it, and unmounted the disk...
I have a new problem (well, it could be old and I just noticed it). I have a number of containers that show some number of snapshots, but when I look at the disk those snapshots don't exist.
Example:
pvesh get /nodes/sky12/lxc/16980/snapshot
200 OK
[
{
"description" : "Automatic snapshot...