Search results

  1. Best way to migrate large volume from Ceph to ZFS

    Thanks for the feedback!! Sharing out the new server ZFS pool via NFS to the older cluster is currently working. I tested a live disk move on a smaller VM and everything went fine. I was then able to move it, as you suggest, from a local directory storage on the new server to the ZFS pool on the... (see the storage sketch after this list)
  2. Best way to migrate large volume from Ceph to ZFS

    I have an older Proxmox and Ceph cluster running 5.3-11 that I need to decommission. I have a new host running 7.3-3 with ZFS that I want to move a VM to. The current VM has 2 virtio drives: one on local storage and a large 20TB volume on Ceph. I would like to live migrate this VM to the...
  3. [SOLVED] pveceph install tries to do bad things

    Thanks Guys! I did forget to add the non-subscription repo. Added that and the install works fine! Sorry for that. I thought I had added it before. That is usually step 1 after a fresh install. (The repo line is sketched after this list.)
  4. [SOLVED] pveceph install tries to do bad things

    I believe so. See the output of cat /etc/apt/sources.list. Also, this is a fresh install from the 5.2 ISO.
  5. [SOLVED] pveceph install tries to do bad things

    When running pveceph install on a brand new cluster using the non-subscription repo, the installer wants to remove necessary packages like proxmox-ve and qemu-server. Please see below for the PVE version and the output of pveceph install.
  6. Disks GUI - /proc/mounts too long - aborting

    Running pve 5.2-2 (and also previous versions). When I select the "Disks" option in the GUI I get an error: file '/proc/mounts' too long - aborting (500). On the nodes where I receive this message, I have a large number of ZFS datasets with subsequent mountpoints. I believe this is interfering...
  7. How to best migrate to new host?

    aderumier, I look forward to it. Thanks!
  8. How to best migrate to new host?

    Hi spirit, I am researching the best method for live migrating VMs from an old cluster to a new cluster. My plan is to add the old cluster nodes as ceph clients of the new cluster, move the storage to the new ceph cluster and then figure out a way to live migrate the running vm to the new host...
  9. Ceph network

    Here is a snippet from one of my interfaces files:

        # Ceph Cluster
        # Activated on eth2, eth3
        # eth2 Primary
        auto eth2.200
        iface eth2.200 inet manual
            bond-master Bond200
            bond-primary eth2.200
            bond-mode active-backup
            bond-miimon 100
            bond-updelay 45000
        ...
  10. Windows Guest suddenly stops

    I had the exact same thing happen on 7/27. Windows is 2012R2, virtio disk and nic with guest agent installed.

        root@pve20:~# pveversion -v
        proxmox-ve: 4.4-77 (running kernel: 4.4.35-1-pve)
        pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
        pve-kernel-4.4.6-1-pve: 4.4.6-48
        ...
  11. Preventing Proxmox from importing zpools at boot

    Sorry for reviving an old thread, but was this ever resolved? I am experiencing the same issue.
  12. Incremental backups & Ceph?

    Interesting. I have not seen any performance degradation on my side. I wasn't aware of this so I have been researching it all morning. It does seem like a fair amount of people have seen issues with snapshots on their clusters. Do you happen to have a link you could share that mentions snapshots...
  13. Incremental backups & Ceph?

    I agree, pigz can be dangerous. You can limit the number of CPUs with the -p option in pigz. The nice thing about the rbd stdout option is you can choose whatever compression tool you prefer.
  14. Incremental backups & Ceph?

    Good question about compression. I installed the pigz compressor on one of my nodes in my cluster. It's a parallel compressor based on gzip, so it is really fast. To add compression when exporting with the rbd command, instead of specifying an output file to the rbd command, specify a... (see the pipeline sketch after this list)
  15. Incremental backups & Ceph?

    You are right, my script does not do this currently. It should be pretty simple to quiesce the filesystems and then take the ceph snapshot, just to make sure everything is consistent. (A sketch of that step appears after this list.)
  16. Incremental backups & Ceph?

    Actually you don't need a working ceph cluster to restore. rbd export creates raw image files. Those images can then be mapped directly to VMs or converted using qemu to another format. I know this works with a full export. For differential exports, it looks like I will have to write my own tool... (The convert step is sketched after this list.)
  17. Incremental backups & Ceph?

    The snapshot button in the GUI creates a VM snapshot. Ceph rbd snapshots are completely different. My script makes use of them heavily. Hopefully they will work and my script will work for you. I'll keep you posted on the script.
  18. Incremental backups & Ceph?

    Here you go. Please go easy on me, I am not great at BASH scripting :) But all feedback is welcome. I plan to modify the restore script so it will act like vzdump with vma files. Right now it just restores the image to the ceph cluster. Soon the script will restore the vm.conf and all disks...
  19. Incremental backups & Ceph?

    Are you creating a VM snapshot of CloudLinux or a ceph snapshot of the disk of the CloudLinux VM? I have written a script that performs a nightly export of ceph images. On Saturday night, it takes a full export of the disk image from ceph. Every other day of the week it takes a differential... (see the export-cycle sketch after this list)
  20. Insecure migration settings

    Thanks Tom! I'll try that out and report back.
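
Sketches referenced in the results above

A minimal sketch of the NFS-share-then-move approach from result 1, assuming a hypothetical storage name, address, dataset, and VM ID. The storage entry goes in /etc/pve/storage.cfg on the old cluster; qm move_disk then moves the running VM's disk onto it:

    # /etc/pve/storage.cfg on the old cluster (all names and addresses are examples)
    nfs: newzfs
        server 192.0.2.10        # new ZFS host
        export /tank/migrate     # dataset shared out over NFS
        path /mnt/pve/newzfs
        content images

    # live-move the running VM's disk onto the shared storage
    qm move_disk 100 virtio1 newzfs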
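
The fix in results 3-5 was adding the pve-no-subscription repository before running pveceph install. A sketch for the PVE 5.x / Debian Stretch era discussed in that thread (adjust the suite for your release):

    # /etc/apt/sources.list.d/pve-no-subscription.list
    deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

    apt update
    pveceph install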
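
The compression pipeline described in results 13-14, sketched with a hypothetical pool/image name. Passing - as the destination makes rbd export write the raw image to stdout, and pigz -p caps the number of worker threads:

    # stream the raw image to stdout and compress in parallel
    rbd export rbd/vm-100-disk-1 - | pigz -p 4 > /backup/vm-100-disk-1.raw.gz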
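
The quiesce-then-snapshot step from result 15, sketched assuming the QEMU guest agent is running in the VM; the VM ID, pool, and snapshot name are hypothetical:

    # freeze guest filesystems via the guest agent
    qm agent 100 fsfreeze-freeze
    # take the ceph snapshot while I/O is quiesced
    rbd snap create rbd/vm-100-disk-1@nightly-$(date +%F)
    # thaw the guest
    qm agent 100 fsfreeze-thaw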
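
Result 16's restore path, sketched with example file names: a full rbd export is a plain raw image, so qemu-img can convert it without any ceph cluster involved:

    qemu-img convert -f raw -O qcow2 vm-100-disk-1.raw vm-100-disk-1.qcow2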
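
The weekly-full/daily-differential cycle from result 19, sketched with rbd snapshots and export-diff; the image and snapshot names are examples, not the author's actual script:

    IMG=rbd/vm-100-disk-1
    TODAY=$(date +%F)

    # snapshot first so full and differential exports share a reference point
    rbd snap create "$IMG@$TODAY"

    if [ "$(date +%a)" = "Sat" ]; then
        # Saturday: full export of the snapshot
        rbd export "$IMG@$TODAY" "/backup/$TODAY-full.raw"
    else
        # other days: differential against yesterday's snapshot
        YESTERDAY=$(date -d yesterday +%F)
        rbd export-diff --from-snap "$YESTERDAY" "$IMG@$TODAY" "/backup/$TODAY.diff"
    fi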
