Recent content by valeech

  1.

    Best way to migrate large volume from Ceph to ZFS

    Thanks for the feedback!! Sharing out the new server's ZFS pool via NFS to the older cluster is currently working. I tested a live disk move on a smaller VM and everything went fine. I was then able to move it, as you suggest, from local directory storage on the new server to the ZFS pool on the...
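The flow described above can be sketched as a command transcript; the VM ID (100), disk name (virtio1), and storage IDs (nfs-tank, tank) are placeholders, and note that on PVE 5.x the subcommand is spelled `qm move_disk` (later releases rename it to `qm move-disk` / `qm disk move`):

```shell
# On the old cluster, after adding the new server's NFS export as shared
# storage (Datacenter -> Storage -> Add -> NFS, storage ID "nfs-tank"):
# move the disk live onto the NFS-backed directory storage while the VM runs.
qm move_disk 100 virtio1 nfs-tank --delete

# After the VM itself has been migrated to the new host, move the disk a
# second time, from directory storage onto the local ZFS pool ("tank"):
qm move_disk 100 virtio1 tank --delete
```

The `--delete` flag removes the source disk image once the move completes; leave it off if you want to keep the original as a fallback.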
  2.

    Best way to migrate large volume from Ceph to ZFS

    I have an older Proxmox and Ceph cluster running 5.3-11 that I need to decommission. I have a new host running 7.3-3 with ZFS storage that I want to move a VM to. The current VM has 2 virtio drives: one on local storage and a large 20TB volume on Ceph. I would like to live migrate this VM to the...
  3.

    [SOLVED] pveceph install tries to do bad things

    Thanks, guys! I did forget to add the non-subscription repo. Added that and the install works fine! Sorry for that. I thought I had added it before; that is usually step 1 after a fresh install.
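For reference, the repository line in question can be sketched as follows (assuming PVE 5.x on Debian stretch; the file path shown is a common convention, not mandated):

```shell
# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
```

Run `apt update` after adding the line, then `pveceph install` should resolve its dependencies without trying to remove proxmox-ve or qemu-server.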
  4.

    [SOLVED] pveceph install tries to do bad things

    I believe so.
    cat /etc/apt/sources.list
    Also, this is a fresh install from the 5.2 ISO.
  5.

    [SOLVED] pveceph install tries to do bad things

    When running pveceph install on a brand new cluster using the non-subscription repo, the installer wants to remove necessary packages like proxmox-ve and qemu-server. Please see below for the PVE version and the output of pveceph install.
  6.

    Disks GUI - /proc/mounts too long - aborting

    Running pve 5.2-2 (and also previous versions). When I select the "Disks" option in the GUI I get an error: file '/proc/mounts' too long - aborting (500). On the nodes where I receive this message, I have a large number of ZFS datasets, each with its own mountpoint. I believe this is interfering...
  7.

    How to best migrate to new host?

    aderumier, I look forward to it. Thanks!
  8.

    How to best migrate to new host?

    Hi spirit, I am researching the best method for live migrating VMs from an old cluster to a new cluster. My plan is to add the old cluster nodes as Ceph clients of the new cluster, move the storage to the new Ceph cluster, and then figure out a way to live migrate the running VM to the new host...
  9.

    Ceph network

    Here is a snippet from one of my interfaces files:
    # Ceph Cluster
    # Activated on eth2, eth3
    # eth2 Primary
    auto eth2.200
    iface eth2.200 inet manual
        bond-master Bond200
        bond-primary eth2.200
        bond-mode active-backup
        bond-miimon 100
        bond-updelay 45000...
  10.

    Windows Guest suddenly stops

    I had the exact same thing happen on 7/27. Windows is 2012R2, virtio disk and NIC with guest agent installed.
    root@pve20:~# pveversion -v
    proxmox-ve: 4.4-77 (running kernel: 4.4.35-1-pve)
    pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
    pve-kernel-4.4.6-1-pve: 4.4.6-48...
  11.

    Preventing Proxmox from importing zpools at boot

    Sorry for reviving an old thread, but was this ever resolved? I am experiencing the same issue.
  12.

    Incremental backups & Ceph?

    Interesting. I have not seen any performance degradation on my side. I wasn't aware of this, so I have been researching it all morning. It does seem like a fair number of people have seen issues with snapshots on their clusters. Do you happen to have a link you could share that mentions snapshots...
  13.

    Incremental backups & Ceph?

    I agree, pigz can be dangerous. You can limit the number of CPUs with the -p option in pigz. The nice thing about the rbd stdout option is you can choose whatever compression tool you prefer.
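One possible shape of that rbd-to-stdout pipeline, with hypothetical pool and image names (vm-100-disk-1 in pool rbd):

```shell
# Export an RBD image to stdout ("-") and compress it with pigz,
# capped at 4 CPUs so compression doesn't starve the host:
rbd export rbd/vm-100-disk-1 - | pigz -p 4 > vm-100-disk-1.raw.gz

# Restore: decompress to stdout and import back into a pool:
pigz -dc vm-100-disk-1.raw.gz | rbd import - rbd/vm-100-disk-1
```

Because the export goes to stdout, pigz could be swapped for gzip, zstd, or any other stream compressor without changing the rbd side of the pipeline.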