Yes, on a single node you NEED to replicate over the bucket type OSD, whereas on a multi-node setup you want to go with the bucket type host, rack, or a higher bucket type.
In this particular case (I'm doing tests on this particular system) I have a k=9, m=3 EC pool, a corresponding SSD cache tier (3/1) and a...
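For reference, a minimal sketch of the single-node case on a hammer/jewel-era cluster (rule and pool names are just placeholders; newer releases call the pool setting crush_rule instead of crush_ruleset):
[CODE]
# create a replicated rule that uses OSD as the failure domain instead of host
ceph osd crush rule create-simple replicated-osd default osd

# point an existing pool at that rule (look the rule id up with "ceph osd crush rule dump")
ceph osd pool set rbd crush_ruleset 1
[/CODE]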
I just encountered the same issue on a single-node Ceph install and was able to solve it. Since this is the only Google hit covering the symptoms, I thought I ought to post the steps leading to a fruitful solution:
The symptom manifests like this:
You try to start one of your VMs and it is...
Do you use ZFS, and is your swap on said ZFS?
If so, check https://forum.proxmox.com/threads/zfs-swap-crashes-system.25208/#post-126215
is totally unrelated, paging @spirit to confirm.
Can you post:
cat /etc/network/interfaces for the VMs? Preferably in [CODE][/CODE] tags.
Also a list of which vmbr each vNIC of each VM on the Proxmox node is using?
cat /etc/network/interfaces for the backup server? Preferably in [CODE][/CODE] tags.
Can you specify who runs 4.1 and...
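For context, a bridge stanza on the Proxmox node typically looks something like this (interface names and options are made up for illustration), which is the kind of output I'm after:
[CODE]
auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
[/CODE]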
Yeah, and you might need to trigger a manual scrub:
ceph osd scrub osd.x
so that Ceph actually deletes the PGs that are no longer referenced.
You will know if you need to by watching ceph with
ceph -w
while executing the pool delete. If your available space does not increase, you probably need to scrub.
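Put together, a rough sketch of the whole sequence (pool name and OSD ids are placeholders):
[CODE]
# watch the cluster while the pool gets removed
ceph -w

# in a second shell, delete the old pool
ceph osd pool delete oldpool oldpool --yes-i-really-really-mean-it

# if the available space does not come back, scrub the OSDs one by one
ceph osd scrub osd.0
ceph osd scrub osd.1
[/CODE]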
Welcome to the exciting world of IT.
Here you will encounter the magic that is ones and zeros:
gigabytes (1000^3) represented as GB and gibibytes (1024^3) represented as GiB.
Or in other words, the decimal system meeting the binary system, and if you're not careful someone might throw in hexadecimal...
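A quick back-of-the-envelope example of why the numbers never quite match (the 4 TB disk is just an example, bash arithmetic):
[CODE]
# a "4 TB" (decimal) disk expressed in GiB, which is what most tools report
echo $(( 4 * 1000**4 / 1024**3 ))   # ~3725 GiB
[/CODE]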
I assume your 2 VMs use vmbr1 as a bridge for their vNICs, connected at 10.0.0.57 and 10.0.0.58 respectively? And eth1 is connected to a switch that is connected to the 10.0.0.200 node?
Any chance you use VLAN tags and just forgot to set one for the "node"?
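If it helps, the tag sits on the vNIC line in the VM config; a hypothetical example (VMID, MAC and tag are made up):
[CODE]
# /etc/pve/qemu-server/101.conf
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1,tag=10
[/CODE]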
Depends on the underlying replication used, the write patterns, and whether or not your dedicated Ceph network will be able to deal with it.
I suggest you have a look at this ceph documentation:
http://docs.ceph.com/docs/hammer/architecture/
specifically this diagram:
When you write to a ceph...
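As a rough rule of thumb (numbers purely illustrative): with a replicated pool of size 3 the primary OSD forwards two additional copies for every client write, so:
[CODE]
# client writes 500 MB/s into a size=3 pool -> replication traffic on the ceph network
echo $(( 500 * (3 - 1) ))   # ~1000 MB/s
[/CODE]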
There is a "workaround" (rough command sketch below):
Create a new pool.
Move all vDisks that reside on the old pool (and need to be kept) to the new pool.
Double-check there is no data you want to keep left on the old pool.
Remove the old pool.
Profit. Kinda. Sadly.
PS: we encounter this issue mostly on EC pools and use a...
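The rough sketch mentioned above, with placeholder names for pool, VMID and disk; the new pool also needs to be added as an RBD storage in Proxmox before the move:
[CODE]
# create the replacement pool (pg count is just an example)
ceph osd pool create newpool 128 128

# move each vDisk that needs to be kept, e.g. VM 101, disk virtio0, to the new storage
qm move_disk 101 virtio0 newstorage

# once nothing you care about is left on the old pool, drop it
ceph osd pool delete oldpool oldpool --yes-i-really-really-mean-it
[/CODE]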
Yes, you create a vNIC for every IP of the subnet that has been assigned to you.
CIDR is just a "language" to describe what's going on:
I could say 10.2.0.1 255.0.0.0
or I could say 10.2.0.1/8
Quick question, which ISP are you with?
the same would apply. Generally, you assign each IP of the /28 separately. You then use the appropriate gateway (depends on the ISP config) to connect the IP.
Some ISPs, like e.g. OVH, work with vMACs, so you go into their Robot panel, assign that IP to a vMAC, then go into Proxmox and assign that...
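Purely as an illustration (addresses, netmask and gateway are invented and very much ISP-specific), a guest using one address out of such a /28 could end up with an interfaces stanza like:
[CODE]
auto eth0
iface eth0 inet static
    address 203.0.113.18
    netmask 255.255.255.240
    gateway 203.0.113.17
[/CODE]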
Oh, I see.
Well, if you keep a copy of the VMID.conf, you can just plug the appropriate Ceph pool back into a new Proxmox node, then copy the VMID.conf over, then bring up the VM.
You'd need to make sure to name the pool the same as you named it on the first Proxmox node (compare...
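A minimal sketch of that recovery, assuming the pool/storage is already reachable from the new node and using VMID 101 as a placeholder:
[CODE]
# put the saved config back into place on the new node
cp /root/saved-configs/101.conf /etc/pve/qemu-server/101.conf

# then bring the VM up
qm start 101
[/CODE]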
http://docs.ceph.com/docs/master/rados/configuration/auth-config-ref/
has all you need on this.
Basically safe to do unless one of the following applies:
an attacker is inside your Ceph network
a second Ceph cluster is on the same network (you should not do this, EVER)
a Ceph cluster doing "CEPH" over the...
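Assuming the question was about switching cephx off entirely, a sketch of how that looks in the [global] section of /etc/ceph/ceph.conf (daemons and clients need a restart afterwards):
[CODE]
[global]
    auth cluster required = none
    auth service required = none
    auth client required = none
[/CODE]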