Hey all,
I have to size a PVE/Ceph environment for two data centers.
We need a new home for roughly 300 small VMs (4 cores, 4 GB memory, 100-200 GB storage).
I estimate half a year until all 300 VMs are migrated and have calculated 100% growth over the next three years.
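For the raw numbers, a quick back-of-the-envelope sketch (the 3x Ceph replication and the upper 200 GB per VM are my own assumptions, adjust them to your pool size):

# Rough capacity sketch for the 300-VM / 100%-growth scenario.
# Assumptions (not from the post above): 3x replication, 200 GB per VM,
# no thin-provisioning savings.
vms_now = 300
growth_factor = 2          # 100% growth over the next three years
cores_per_vm = 4
ram_gb_per_vm = 4
disk_gb_per_vm = 200
ceph_replicas = 3          # assumed pool size

vms_total = vms_now * growth_factor
print(f"vCPUs:            {vms_total * cores_per_vm}")
print(f"RAM (GB):         {vms_total * ram_gb_per_vm}")
usable_tb = vms_total * disk_gb_per_vm / 1000
print(f"usable disk (TB): {usable_tb:.0f}")
print(f"raw Ceph (TB):    {usable_tb * ceph_replicas:.0f}")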
Storage bandwidth should not be...
Hey guys,
for the next VM deployment I want to identify the least utilized node in a cluster via the API.
Sorry, but I couldn't find it by myself.
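My rough idea so far is to sort the nodes returned by /cluster/resources by their free memory; a sketch using the proxmoxer library (hostname, credentials and the choice of free memory as the metric are just my assumptions):

# Sketch: pick the node with the most free memory from /cluster/resources.
# Hostname and credentials are placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve0.example.com", user="root@pam",
                     password="secret", verify_ssl=False)

nodes = [r for r in proxmox.cluster.resources.get(type="node")
         if r.get("status") == "online"]

# Take the node with the largest amount of free memory (maxmem - mem).
best = max(nodes, key=lambda n: n["maxmem"] - n["mem"])
print(f"least utilized node: {best['node']} "
      f"({(best['maxmem'] - best['mem']) / 2**30:.1f} GiB free)")

Is that the intended way, or does the API expose a better metric for this?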
Cheers,
luphi
Hello all,
I just want to add the 15th SCSI drive to a VM. I'm aware of the limitation of 14 drives per SCSI bus, therefore I have chosen virtio-scsi-single in the controller option to have a dedicated controller per drive. But the GUI still doesn't allow me to add values higher than 13 for the...
Hello,
I have the same issue but don't want to restart the whole node.
pct list is hanging, stopping the container is also hanging.
Unfortunately the storage seems to be okay, so I need help with further investigation.
root@pve:~# pvesm status
Name Type Status Total Used...
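In the meantime I'm checking whether anything is stuck in uninterruptible sleep (D state), which usually points at a hung storage or filesystem operation; a small sketch (plain /proc parsing, nothing Proxmox-specific):

# List processes in uninterruptible sleep (D state) by scanning /proc.
# A hanging pct or mount usually shows up here.
import os

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/status") as f:
            fields = dict(line.split(":\t", 1) for line in f if ":\t" in line)
        if fields.get("State", "").startswith("D"):
            print(pid, fields.get("Name", "?").strip(), fields["State"].strip())
    except OSError:
        pass  # process disappeared while we were reading it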
hey there,
I migrated a ceph node to new hardware. After moving the OSDs, they won't come up again. All pgs are unknown, all OSDs are down. The GUI shows them as filestore, but they are definitely bluestore.
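For reference, one way to check what the cluster itself reports (rather than the GUI) is to query the OSD metadata; a quick sketch, assuming the OSDs have registered their metadata at least once:

# Ask the cluster which object store each OSD reports.
import json
import subprocess

out = subprocess.check_output(["ceph", "osd", "metadata", "--format", "json"])
for osd in json.loads(out):
    print(f"osd.{osd['id']}: {osd.get('osd_objectstore', 'unknown')}")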
The old server was originally installed with PVE 4.4/hammer and migrated from time to...
Hey all,
I just tried to get ceph's zabbix module running.
I followed the ceph docs at http://docs.ceph.com/docs/master/mgr/zabbix/
Communication seems to be fine, but no data is sent.
Got this in the mgr log:
2017-11-01 19:07:46.614999 7f7415d00700 20 mgr[zabbix] Waking up for new iteration...
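To narrow it down I'm verifying the module config and forcing a send, using the config-show/send subcommands from the linked docs; a small sketch (the zabbix_sender check is my own addition, since the module relies on that binary being present on the active mgr host):

# Sanity checks for the ceph-mgr zabbix module.
import shutil
import subprocess

print("zabbix_sender binary:", shutil.which("zabbix_sender") or "NOT FOUND")

# Show the module configuration (zabbix_host, identifier, interval, ...).
subprocess.run(["ceph", "zabbix", "config-show"], check=True)

# Force an immediate send instead of waiting for the next interval.
subprocess.run(["ceph", "zabbix", "send"], check=True)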
I'm in the same situation, but before I delete the lock file manually, I want to make sure that it is safe to do so.
Which process is using these lock files? What do I have to take care of to be on the safe side?
Btw: The lock file is more than a week old and I rebooted the VM several time...
Hey guys,
this time I really need your help :-(
first my setup:
3 nodes (pve0,pve1,pve2)
2 pools (both of size 2/1)
all nodes are running the latest version, 5.0/12.2.0
I just wanted to migrate all OSDs from filestore to bluestore, so I removed all OSDs of node pve2 and recreated them...
I did some tests:
rados bench -p test 30 write --no-cleanup
                    journal on SSD   OSD
Total time run:     30.824673        30.506182
Total writes made:  485              405
Write size:         4194304          4194304
Object size:        4194304          4194304...
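The bandwidth lines got cut off, but they can be recomputed from the figures above (rados bench uses the 4194304-byte object size and reports MiB/s labelled as MB/sec):

# Recompute the truncated rados bench write bandwidth from the numbers above.
def write_bandwidth(writes, seconds, object_bytes=4194304):
    return writes * object_bytes / 2**20 / seconds

print(f"journal on SSD: {write_bandwidth(485, 30.824673):.1f} MB/s")
print(f"OSD:            {write_bandwidth(405, 30.506182):.1f} MB/s")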
Thanks for your reply.
But how can I make sure that the primary and the replicated PG are not on OSDs which have their journal on the same SSD?
If that SSD fails, I will lose my data.
Is my setup not the right way to do it?
Cheers,
Martin
I did some further research this morning by monitoring "ceph -s" and "ceph osd tree" during startup.
(I removed the host bucket since this is an unnecessary layer in my hierarchy)
At the beginning, everything seems to be ok: mgr is active, the osd tree is correct, OSDs are just coming up...
Hey all,
for a few days I have been planning a new single PVE server for home use.
Here is my strategy:
I don't care much about availability, so a single host is ok for me.
I don't care much about performance, so a single host is ok for me.
I care about flexibility; that's why I want ceph for storage and...