Thanks for the feedback!!
Sharing out the new server's ZFS pool via NFS to the older cluster is currently working. I tested a live disk move on a smaller VM and everything went fine. I was then able to move it, as you suggested, from local directory storage on the new server to the ZFS pool on the...
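In case it helps anyone else, the rough shape of the commands was something like this (server IP, export path, storage names and VM ID are placeholders, not my exact values):

# On the old cluster: add the NFS export of the new server's ZFS pool as storage
pvesm add nfs newserver-nfs --server 192.0.2.10 --export /tank/migrate --content images
# Live-move the VM's disk onto that NFS storage while the VM keeps running
qm move_disk 100 virtio0 newserver-nfs
# Later, on the new server, move the disk onto the ZFS pool (storage name is a placeholder)
qm move_disk 100 virtio0 local-zfs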
I have an older Proxmox and Ceph cluster running 5.3-11 that I need to decommission. I have a new host running 7.3-3 with ZFS that I want to move a VM to.
The current VM has two virtio drives: one on local storage and a large 20 TB volume on Ceph.
I would like to live migrate this VM to the...
Thanks, guys! I did forget to add the non-subscription repo. Added that and the install works fine!
Sorry about that. I thought I had added it before. That is usually step 1 after a fresh install.
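For anyone who hits the same thing, the line I had forgotten is the standard pve-no-subscription entry, something like this (adjust the Debian codename to match your release):

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

# then refresh and re-run the installer
apt update
pveceph install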
When running pveceph install on a brand-new cluster using the non-subscription repo, the installer wants to remove necessary packages like proxmox-ve and qemu-server. Please see below.
PVE Version
Output of pveceph install
Running pve 5.2-2 (and also previous versions)
When I select the "Disks" option in the GUI, I get an error:
file '/proc/mounts' too long - aborting (500)
On the nodes where I receive this message, I have a large number of ZFS datasets, each with its own mountpoint. I believe this is interfering...
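In case anyone wants to compare, the size of /proc/mounts and the number of ZFS entries in it are easy to check:

wc -c /proc/mounts          # total size of the file the GUI is reading
wc -l /proc/mounts          # number of mount entries
grep -c zfs /proc/mounts    # how many of them are ZFS datasets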
Hi spirit,
I am researching the best method for live migrating VMs from an old cluster to a new cluster. My plan is to add the old cluster nodes as ceph clients of the new cluster, move the storage to the new ceph cluster, and then figure out a way to live migrate the running VM to the new host...
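The part I am prototyping is adding the new Ceph cluster as external RBD storage on the old cluster, roughly like this (monitor IPs, pool and storage name are placeholders, and it assumes the client keyring has been copied to /etc/pve/priv/ceph/new-ceph.keyring):

# On the old cluster: point at the new Ceph cluster as external RBD storage
pvesm add rbd new-ceph --monhost "10.0.0.1;10.0.0.2;10.0.0.3" --pool rbd --username admin --content images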
Here is a snippet from one of my interfaces files:
# Ceph Cluster
# Activated on eth2, eth3
# eth2 Primary
auto eth2.200
iface eth2.200 inet manual
bond-master Bond200
bond-primary eth2.200
bond-mode active-backup
bond-miimon 100
bond-updelay 45000...
I had the exact same thing happen on 7/27. Windows is 2012 R2, with a virtio disk and NIC and the guest agent installed.
root@pve20:~# pveversion -v
proxmox-ve: 4.4-77 (running kernel: 4.4.35-1-pve)
pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
pve-kernel-4.4.6-1-pve: 4.4.6-48...
Interesting. I have not seen any performance degradation on my side. I wasn't aware of this, so I have been researching it all morning. It does seem like a fair number of people have seen issues with snapshots on their clusters. Do you happen to have a link you could share that mentions snapshots...
I agree, pigz can be dangerous. You can limit the number of CPUs it uses with the -p option. The nice thing about the rbd stdout option is that you can choose whatever compression tool you prefer.
Good question about compression. I installed the pigz compressor on one of the nodes in my cluster. It's a parallel compressor based on gzip, so it is really fast.
To add compression when exporting with the rbd command, instead of specifying an output file, specify a...
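A minimal example of what I mean, assuming a pool called rbd and an image called vm-100-disk-1 (names are placeholders), with pigz limited to 4 CPUs via -p as mentioned above:

# Export the image to stdout and compress it on the fly
rbd export rbd/vm-100-disk-1 - | pigz -p 4 > /backup/vm-100-disk-1.raw.gz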
You are right, my script does not do this currently. It should be pretty simple to quiesce the filesystems and then take the ceph snapshot, just to make sure everything is consistent.
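Something along these lines, assuming the guest agent is running in the VM (VM ID, pool and image names are placeholders):

qm agent 100 fsfreeze-freeze                   # quiesce the guest filesystems
rbd snap create rbd/vm-100-disk-1@nightly      # take the ceph snapshot while frozen
qm agent 100 fsfreeze-thaw                     # thaw the guest again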
Actually, you don't need a working ceph cluster to restore. rbd export creates raw image files. Those images can then be mapped directly to VMs or converted to another format using qemu-img. I know this works with a full export. For differential exports, it looks like I will have to write my own tool...
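For example, a full export can be handled entirely outside of ceph (file names are placeholders):

qemu-img info vm-100-disk-1.raw                                  # inspect the exported raw image
qemu-img convert -f raw -O qcow2 vm-100-disk-1.raw vm-100-disk-1.qcow2   # convert it to qcow2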
The snapshot button in the GUI creates a VM snapshot. Ceph rbd snapshots are completely different. My script makes use of them heavily. Hopefully they will work and my script will work for you.
I'll keep you posted on the script.
Here you go. Please go easy on me, I am not great at BASH scripting :) But all feedback is welcome.
I plan to modify the restore script so it will act like vzdump with vma files. Right now it just restores the image to the ceph cluster. Soon the script will restore the vm.conf and all disks...
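The restore itself is just an rbd import of the full export, plus import-diff for the incrementals, roughly like this (pool, image and file names are placeholders):

rbd import /backup/vm-100-disk-1.raw rbd/vm-100-disk-1       # restore the full export as a new image
rbd import-diff /backup/vm-100-disk-1.diff rbd/vm-100-disk-1 # apply a differential export on top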
Are you creating a VM snapshot of CloudLinux or a ceph snapshot of the disk of the CloudLinux VM?
I have written a script that performs a nightly export of the ceph images. On Saturday night it takes a full export of the disk image from ceph. On the other nights of the week it takes a differential...
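The core of it is just rbd snapshots plus export / export-diff, roughly like this (pool, image and snapshot names are placeholders):

rbd snap create rbd/vm-100-disk-1@sat                           # Saturday: snapshot the image
rbd export rbd/vm-100-disk-1@sat /backup/vm-100-disk-1.sat.raw  # full export of that snapshot
rbd snap create rbd/vm-100-disk-1@mon                           # other nights: new snapshot
rbd export-diff --from-snap sat rbd/vm-100-disk-1@mon /backup/vm-100-disk-1.sat-mon.diff  # changes since Saturday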