I never said it was not possible today, just that it could be largely automated and therefore less prone to errors, with a far better user experience in the end.
Anyway, this is not my main point ... the real problem is that:
deleting a VM template on Ceph+KRBD works but leaves you with...
I can perfectly understand it is not easy, but as I proposed just before, if the template has no children, I don't see much difficulty in doing it (and in fact, you can do that just by removing the "template" option in the conf file, but you end up with a VM with snapshots on the disks that you have to...
Well, I don't see this as a problem if there is sufficient warning before that.
An even smarter idea would be to offer to promote those children to independent VMs by moving the child's storage to a full clone of the disk image.
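For what it's worth, something close to that promotion can already be done by hand with qm move_disk, since moving a linked clone's disk to another storage makes a full, independent copy. A rough sketch only (the VM ID, disk name and storage name are just examples):

# check which disk the child VM uses (a linked clone points at the base image)
qm config 101 | grep -i disk
# copy it to another storage as a full clone and drop the old linked-clone disk
qm move_disk 101 virtio0 other-storage --delete 1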
Well, I would agree if that were the case, but I had no children of this...
Hi,
Found a bug with PVE 4.4-13.
When I convert a VM whose disks are stored on Ceph to a template, a snapshot is created for my VM disks.
When I want to destroy this template, I have a message in the console saying the image still has watchers so it can't be removed.
Anyway, the VM IS removed in PVE...
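In case it helps anyone hitting the same thing, this is roughly how I poke at it from the CLI (pool and image names are examples; a Proxmox template base disk on RBD usually looks like base-<vmid>-disk-1 with a protected __base__ snapshot):

# see who is still holding a watch on the image (with KRBD it is usually a kernel mapping)
rbd status my-pool/base-100-disk-1
# check for a leftover kernel mapping and release it
rbd showmapped
rbd unmap /dev/rbd0
# then the snapshot can be unprotected/purged (only if it has no children) and the image removed
rbd snap unprotect my-pool/base-100-disk-1@__base__
rbd snap purge my-pool/base-100-disk-1
rbd rm my-pool/base-100-disk-1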
There is none and I don't know if there is anything like that planned.
Anyway, this can be done without too much hassle in many ways; however, I'm not sure it would be such a good idea.
I already had problems with returning servers (some not returning at all, others with cert problems...
I got very good results (performance- and durability-wise) with the Samsung EVO 850 (not Pro) 500GB.
They were under heavy and permanent r/w pressure and they worked flawlessly for more than a year.
You would probably get better durability results with Intel DC S35xx or "Pro" version of the EVO but...
I don't use NUCs as ProxMox nodes, but I have already installed ProxMox on many different configurations, from PCs to servers.
I currently run a few ProxMox clusters; my biggest one is a 20-node cluster with both ZFS local storage and Ceph.
My ZFS local storage runs on a RAIDZ1 pool of 6 Samsung...
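For reference, that kind of layout is just a single raidz1 vdev, something along these lines (pool and device names are only examples):

# 6-disk RAIDZ1 pool with lz4 compression, used as local VM storage
zpool create -o ashift=12 tank raidz1 sdb sdc sdd sde sdf sdg
zfs set compression=lz4 tank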
I personally keep on using ProxMox, which I find is getting better and better ... To me it is the simplest and most efficient solution to date.
I'm not saying other solutions are bad, but they are either very complicated to set up and manage (OpenStack, CloudStack, ...) or very expensive (VMware ...), especially...
I would personally remove the "folder view" and "storage view", which are pretty redundant and useless to me ... I never used them because "server view" and "pool view" are far more useful.
For example, when you want to take a node down for maintenance, you need to migrate its VMs to the remaining nodes, but right now you have to move them manually one by one, or move all of them to a single target node, which is sometimes not possible.
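Just to illustrate the kind of thing I mean, a rough CLI workaround could look like this (the target node name is just an example, and everything goes to a single node, which is exactly the limitation I am talking about):

# migrate every running VM of this node to pve-node2, one by one
for vmid in $(qm list | awk 'NR>1 && $3=="running" {print $1}'); do
    qm migrate "$vmid" pve-node2 --online
done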
I try to keep my node's memory usage under...
Definitely love it!
It would need some more information, like Ceph status, cluster load, ... but it's definitely the way to go!
ProxMox is great, but it still lacks a more abstract view; it is still too low-level and it is hard to get a global view of the cluster state.
One thing that is related and...
I did a little summary of the SSDs installed in my nodes ... all the same model (Corsair Force LS 60GB).
As you can see, SSD_Life_Left and Lifetime_Writes_GiB correlate quite well except in some rare cases ... but this does not always hold when compared to Power_On_Hours, which is odd since all...
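For the record, those attributes can be read with smartctl, more or less like this on each node (the device name is an example; the attribute names are the ones this Corsair model reports):

# dump the relevant SMART attributes for the SSD
smartctl -A /dev/sda | grep -E 'SSD_Life_Left|Lifetime_Writes_GiB|Lifetime_Reads_GiB|Power_On_Hours'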
I'll try some of your tips.
No, no backups on those disks.
In fact those disks are nearly empty, I have a little less than 5GB occupied by ProxMox itself, otherwise, the remaining space is unused.
my ceph-mon.log is about 50k for each mon I have ... not much at all.
The ceph-mon process is usually using about 150~200 kB/s ... it must have peaked when I grabbed the stat.
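That rate is easy enough to double-check per process (assuming a single mon per node; pidstat comes from the sysstat package), e.g.:

# per-process disk I/O of the monitor, sampled every 5 seconds
pidstat -d -p $(pidof ceph-mon) 5
# or the raw counters
cat /proc/$(pidof ceph-mon)/io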
Hi,
I know LXC live migration is not supported for now and that it is waiting on the CRIU project, but it seems that things are moving on the CRIU side (even Docker implements it now), so is there any timeframe for a fully functional CRIU implementation in ProxMox?
Best regards.
Okay, checked the disk I changed yesterday.
Yesterday, I cloned the failed drive to the new one, and since the drive is 60GB in size, when looking at the SMART values I got this:
SSD_Life_Left : 99
Lifetime_Writes_GiB : 56
Lifetime_Reads_GiB : 1
Today, I read this:
SSD_Life_Left : 99...
For instance, most of my SSDs are cheap Corsair Force LS 60GB but I added a Samsung 750 on another node nearly 5 months ago and it is at 1% Wearout.
Might be a problem related to this precise model ... I purchased Intel DC S3520 SSDs to replace them all ... I'll check them to see if they behave...