Hello
I have some local disks and other disks on NFS, on a KVM server running Proxmox 5.
Is it possible to detach a disk and then retrieve the data on it from another KVM server?
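If the disk lives on shared NFS storage, something like the following should work with the `qm` CLI. This is only a hedged sketch: the VM IDs (100, 101), the storage name `nfs-store`, and the volume name are placeholders I made up, and depending on the Proxmox version you may need to rename or move the volume first, since volumes are named after the owning VMID.

```shell
# On the node currently running the VM: detach the disk.
# Detaching keeps the volume on the storage and lists it as an
# "unusedN" entry in the VM's config.
qm set 100 --delete scsi1

# On another node in the cluster (a shared NFS storage is visible
# cluster-wide), attach the same volume to a different VM:
qm set 101 --scsi1 nfs-store:100/vm-100-disk-1.qcow2
```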
Thanks for your help. In our case it's for a hosting service, so I don't think Ceph is the best option for us, because I need good (or at least acceptable) I/O on each node, and I suppose Ceph will always have lower performance than hardware RAID 10. I'm looking into doing RAID 10 with ZFS instead...
I have the same problem, without any update, on Proxmox 5.0...
I have 4 nodes that replicate to another one every 30 minutes. Everything was working fine, but tonight replication stopped on all of them, and pvesr and pvedaemon are at 100% CPU...
I have checked the replication service and everything seems fine, there is...
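For anyone hitting the same symptom, these are roughly the checks I mean; a hedged sketch, assuming a PVE 5 node where replication is driven by the stock systemd units:

```shell
# Show the state of all configured replication jobs on this node
pvesr status

# pvesr runs from a systemd timer; verify it is still firing
systemctl status pvesr.timer

# Recent replication-related log messages
journalctl -u pvesr.service --since "-2 hours"
```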
Hello
I am trying to move some disks from a NAS to local storage, but sometimes I have to stop the move because of server overload and redo it later when activity is lower. When I do that, the ZFS disks that were created still use space on the server, but they don't appear as "unused" disks on the VM and can't be deleted from the GUI...
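From what I understand, the leftover volumes can be re-registered from the CLI so the GUI can delete them; a hedged sketch (VM ID 100 is a placeholder):

```shell
# Rescan the storages and re-add orphaned volumes to the VM config
# as "unusedN" entries, after which they can be removed in the GUI:
qm rescan --vmid 100

# Or rescan volumes for every VM on the node:
qm rescan
```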
Hello
I have a cluster with 4 servers: 3 with 1 VM each, and none on the last server. I configured replication of each VM to the empty last server every 30 minutes.
At 21:00 today I reviewed the state of all of them and saw that one still showed the task from 05:30... the others had completed correctly...
OK, I understand that the only way in this case is to kill it over SSH. I know the task can resume later, but when there is a performance impact for clients... I think it is sometimes better to stop it.
By the way, I really love this new feature in Proxmox 5! Thanks for your work.
Disable only takes effect for the next replication run, but I sometimes get an overload from this task and have to stop the running replication to stop the overload... I can do it by killing the task over SSH, but I want to know if there is a way to do it from the GUI...
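For reference, the SSH workaround I mean looks roughly like this; a hedged sketch only, since the process names may differ by version and `<pid>` is a placeholder you have to read from the `ps` output:

```shell
# Find the replication worker and the underlying ZFS transfer
ps aux | grep -E 'pvesr|zfs (send|recv)'

# Stop the transfer; the job should be retried at the next scheduled run
kill <pid>
```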