Nobody?
So I created a new test setup with one Proxmox box and three VMs. Each VM has a single public IP address.
It's using openvswitch with ovs_port eth0. eth0 is physically connected to the switch for the internet uplink.
VM1 sends a lot of traffic to VM2.
VM3 has nothing to do with VM1 or VM2...
I'm running Proxmox 4.4 with openvswitch 2.6.0.
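For anyone wanting to reproduce this: a minimal sketch of what that Open vSwitch uplink looks like in /etc/network/interfaces on Proxmox (bridge name and addresses are placeholders, not my actual values):

```
# eth0 is attached to the OVS bridge as the uplink port
allow-vmbr0 eth0
iface eth0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

# the bridge carries the host IP; the VM taps attach to it as well
auto vmbr0
iface vmbr0 inet static
    address 198.51.100.10
    netmask 255.255.255.0
    gateway 198.51.100.1
    ovs_type OVSBridge
    ovs_ports eth0
```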
I just discovered that one VM, a clean Linux install running no services, is seeing about 2 MB/s of incoming traffic according to the Proxmox VM graph. So I installed a network sniffer in this VM, and it can see a lot of traffic caused by...
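For anyone who wants to check their own VMs: this is roughly how I'd sample the flood with tcpdump (assuming tcpdump as the sniffer; the interface name is a placeholder):

```bash
# Show a sample of incoming packets, no DNS lookups
tcpdump -i eth0 -nn -c 200

# Tally which source MACs dominate the traffic (-e prints link-level headers)
tcpdump -i eth0 -nn -e -c 1000 | awk '{print $2}' | sort | uniq -c | sort -rn | head
```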
Aren't RBD snapshots going to hurt performance if they are active 24/7?
For this backup script to work, it has to keep snapshots around at all times in order to generate incremental backups. But when I read online about RBD snapshots, I read that they aren't meant to be active all the time and...
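For context, this is roughly how incremental RBD exports work, and why the baseline snapshot has to stay around between runs (pool, image, and snapshot names are made up for the example):

```bash
# Day 1: baseline snapshot plus a full export
rbd snap create pool1/vm-100-disk-1@backup-day1
rbd export pool1/vm-100-disk-1@backup-day1 /backup/vm-100-disk-1.day1.full

# Day 2: export only the changes since day 1, then roll the baseline forward
rbd snap create pool1/vm-100-disk-1@backup-day2
rbd export-diff --from-snap backup-day1 \
    pool1/vm-100-disk-1@backup-day2 /backup/vm-100-disk-1.day1-day2.diff
rbd snap rm pool1/vm-100-disk-1@backup-day1
```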
Yes, but the Proxmox backup system sends commands to the qemu-agent (running inside the guest) to momentarily freeze the filesystem, in order to make the snapshot more consistent before taking a backup. I don't know if this is worth it, but this is something your script won't...
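If a script wanted to get the same consistency, something like this should work, assuming the guest agent is enabled and your Proxmox version has the qm agent subcommand (untested sketch; the VM ID and image name are placeholders):

```bash
# Freeze the guest filesystem via qemu-agent, snapshot, thaw again
qm agent 100 fsfreeze-freeze
rbd snap create pool1/vm-100-disk-1@consistent
qm agent 100 fsfreeze-thaw
```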
So your script is not interacting with the guest OS (through qemu-agent, for example), and the backups are maybe (more) inconsistent compared to Proxmox's VM backup method?
About the CloudLinux freezing: I just use the "Take snapshot" button in the Proxmox GUI. It works for every single VM except those running CloudLinux. The dialog that opens when it starts to create the snapshot never finishes, and the VM freezes completely. No idea why that is.
Maybe it helps...
Okay. So, for RBD snapshots + exports: is there any script available for that already, or will I have to write something myself?
And before I can use this, I need to look in more detail into why making a snapshot of a VM running CloudLinux always freezes the VM completely.
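In case it's useful to others: a minimal sketch of such a snapshot + export-diff loop (untested; the pool name, backup path, and snapshot naming are all assumptions):

```bash
#!/bin/bash
# Snapshot every image in the pool and export a diff against yesterday's
# snapshot; fall back to a full export if no baseline exists yet.
POOL=pool1
DEST=/mnt/backup
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)

for IMG in $(rbd ls "$POOL"); do
    rbd snap create "$POOL/$IMG@$TODAY"
    if rbd snap ls "$POOL/$IMG" | grep -qw "$YESTERDAY"; then
        rbd export-diff --from-snap "$YESTERDAY" \
            "$POOL/$IMG@$TODAY" "$DEST/$IMG.$YESTERDAY-$TODAY.diff"
        rbd snap rm "$POOL/$IMG@$YESTERDAY"   # keep only the newest baseline
    else
        rbd export "$POOL/$IMG@$TODAY" "$DEST/$IMG.$TODAY.full"
    fi
done
```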
I tried that before and as...
There's a patch for Proxmox that implements incremental backups. It's available here.
I read that it's not compatible with Ceph, so I wonder if there's a way to do incremental backups using Ceph + Proxmox one way or another?
I really need incremental backups ...
Okay. So, in all my systems I have identical disk configurations like this:
PCIe SSD (consumer, for boot/Proxmox)
3x 960 GB SSD (enterprise, for pool1)
2x 900 GB SAS + 1x 240 GB SSD (enterprise, for pool2)
1x 480 GB SSD (enterprise, for pool3)
The 240 GB SSD is used as a journal for both SAS disks...
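For completeness, the journal assignment happens when the OSDs are created, something like this (device names are examples; I believe the 4.x pveceph tool takes a -journal_dev option, but check your version):

```bash
# Both SAS OSDs get their filestore journal on the shared 240 GB SSD
pveceph createosd /dev/sdb -journal_dev /dev/sdd
pveceph createosd /dev/sdc -journal_dev /dev/sdd
```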
I have Proxmox installed on a PCIe NVMe SSD. This SSD is not an enterprise-class one, so it doesn't have a high endurance (TBW) rating.
Now, my Proxmox + Ceph setup has been running for only 4 days, and when I run "iostat -k" I can see that about 280 GB has already been written to my PCIe SSD. Which is not...
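A way to cross-check that number against the drive's own wear counters (the device name is a placeholder, and this assumes a smartmontools build with NVMe support):

```bash
# iostat counters reset at boot; the drive's own counters do not
iostat -k                  # kB_wrtn column = data written since boot

# "Data Units Written" in the NVMe health log: 1 unit = 1000 x 512-byte blocks
smartctl -a /dev/nvme0
```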
Okay, so I have to write my own script for this? Or is there something ready-made already?
I just tried that, but my VM froze. After doing some more testing, it turns out that VMs running CloudLinux freeze as soon as I try to create a snapshot. Is this a known issue?
So, is there a different way to make backups and increase the performance? Because at this speed, the backup process will take longer than 24 hours once I move all VMs to this kind of setup.
The system has 192 GB of RAM, and I'm only running 5 VMs on this node, each with 8 GB assigned. So RAM shouldn't be a problem.
During a backup, the kvm process sits between 40 and 100% CPU according to 'top' (but the VM is using CPU too, of course...).
I already tried disabling compression, to...
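For reference, the compression and I/O knobs live partly on the command line and partly in /etc/vzdump.conf (the values below are just examples, not recommendations):

```bash
# One-off backup without compression (VM ID and storage name are placeholders)
vzdump 100 --compress 0 --storage nfs-backup

# Defaults in /etc/vzdump.conf:
#   bwlimit: 0      # KB/s, 0 = unlimited
#   ionice: 7       # lower I/O priority for the backup process
#   pigz: 1         # parallel gzip when gzip compression is used
```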
I have a new Proxmox cluster with Ceph storage for VMs and NFS storage for backups.
Performance on Ceph is perfect (10 Gbit network limit) and performance on NFS is 1 Gbit (= network limit).
Now, my problem is that backing up VMs stored on Ceph is extremely slow. The backup process reports an...
Thanks! This helped a lot and my speed is now actually a lot better ;)
I do have another problem now:
I removed all my test VMs, but there's still 16 GB used in the Ceph pool. When I go to a node -> storage -> content, I can see a vm-102-disk-1 of 16 GB there, but when I click it, the remove...
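If the GUI won't remove it, the leftover image can be inspected and deleted directly with rbd (the pool name is a guess; check for leftover snapshots first, since they block removal):

```bash
rbd -p pool1 ls                      # confirm the orphan is still there
rbd -p pool1 info vm-102-disk-1
rbd -p pool1 snap ls vm-102-disk-1   # snapshots prevent 'rbd rm'

rbd -p pool1 snap purge vm-102-disk-1
rbd -p pool1 rm vm-102-disk-1
```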
I noticed that "rados bench -p testpool 20 write" shows 988 MB/s, but it's multi-threaded.
If I do "rados bench -p testpool 20 write -t 1", I get about 200 MB/s... the same value as inside the VM.
So it looks like my Ceph pool is slow with single-threaded I/O. But why? What can I do about it?
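For anyone comparing: rados bench uses 16 threads by default, so the single-threaded run shows the per-operation latency a VM actually feels. One mitigation worth trying is writeback caching on the VM disk (the VM ID and storage names are placeholders, and this is a guess at a fix, not a confirmed one):

```bash
rados bench -p testpool 20 write         # 16 concurrent ops (the default)
rados bench -p testpool 20 write -t 1    # each write waits for the full replication round trip

# Let qemu absorb some of that latency with writeback caching
qm set 102 --virtio0 ceph-pool:vm-102-disk-1,cache=writeback
```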