Great! Thanks. I will try that out.
Once I add the parameter to the config file, do I need to do anything for it to take effect? Do I need to run pveupdate or anything, or is it automatic?
Hello,
Running PVE 4.3. I would like to disable secure migrations. Per http://pve.proxmox.com/wiki/Manual:_datacenter.cfg, I should add the following line to /etc/pve/datacenter.cfg:
migration: type=insecure
I have added that to the config file. Now, every night I get an email with the...
I followed this https://pve.proxmox.com/wiki/Ceph_Server#Creating_Ceph_OSDs and am noticing the same symptoms. ceph-disk list does not show any journal information.
Using the tutorial, I set my OSD journals to a NVMe partition. ceph-disk does not show it as active.
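To illustrate what I mean, these are the checks (the journal symlink path assumes the standard filestore layout, so treat it as an example):
ceph-disk list                             # should show each data partition with its journal device
ls -l /var/lib/ceph/osd/ceph-*/journal     # the journal symlinks should point at the NVMe partitions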
LVM-thin simply allows thin provisioning of volumes. The drawback is that you can over-provision the underlying storage.
And I mistyped. Plain (non-thin) LVM storage does not support snapshots; LVM-thin does. See here: https://pve.proxmox.com/wiki/LVM2#LVM_vs_LVM-Thin
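As a rough sketch of how over-provisioning happens (volume group and LV names here are made up):
lvcreate -L 100G -T vg0/thinpool            # 100G thin pool in VG vg0
lvcreate -V 80G -T vg0/thinpool -n data1    # 80G thin volume
lvcreate -V 80G -T vg0/thinpool -n data2    # another 80G: 160G promised from a 100G pool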
Always glad to help. Good luck!
The commands look good.
You can use a directory on the USB drive as a mount point in the container(s). You will lose the ability to snapshot those containers, as directory storage is not a supported storage type for snapshots, so just be aware of that. You could also use an LVM storage without...
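As an example of the mount point approach, assuming the USB drive is already mounted on the host at /mnt/usbdrive and the container is VMID 101 (both names made up):
pct set 101 -mp0 /mnt/usbdrive,mp=/data     # bind-mount the host directory into the container at /data
The mount point takes effect the next time the container starts.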
I mistyped. Ceph uses a cluster network for data replication between OSD nodes and a public network for Ceph monitor traffic and client access. http://docs.ceph.com/docs/jewel/rados/configuration/network-config-ref/
Per your own comment above, "multiple interfaces with the same latency...
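In ceph.conf the two networks are split something like this (the subnets are just placeholders):
[global]
    public network  = 192.168.10.0/24    # monitors and client traffic
    cluster network = 192.168.20.0/24    # OSD replication/heartbeat traffic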
Yes, you will need ntfs-3g in order to mount those partitions. Once they are mounted on the Proxmox host, you can add them at the datacenter level as directory storage, and your VMs/containers will see them.
And if you intend on leaving those drives on the Proxmox server, I would highly recommend...
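Roughly like this (device and mount point names are only examples):
apt-get install ntfs-3g                       # NTFS driver
mkdir -p /mnt/ntfs-data
mount -t ntfs-3g /dev/sdb1 /mnt/ntfs-data     # mount the NTFS partition
# to make it permanent, add something like this to /etc/fstab:
# /dev/sdb1  /mnt/ntfs-data  ntfs-3g  defaults  0  0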
You could bond and dedicate at the same time as long as you use active-backup bonding.
Using 2 VLANs, you can create eth1.10 and eth2.10 for the Ceph private (cluster) network and eth1.20 and eth2.20 for the Ceph public network. bond0 will use eth1.10 and eth2.10 as its interfaces and set eth1.10 as the...
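A sketch of the relevant /etc/network/interfaces pieces for the cluster side, assuming VLAN 10 carries the Ceph cluster network (addresses and the choice of primary are placeholders):
iface eth1.10 inet manual
iface eth2.10 inet manual

auto bond0
iface bond0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    slaves eth1.10 eth2.10
    bond_mode active-backup
    bond_primary eth1.10
    bond_miimon 100
A second bond (e.g. bond1) over eth1.20 and eth2.20 would carry the Ceph public network the same way.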
You won't need to pass through the USB devices because you are using a container. The container should have access to the drives.
You have the basic idea. Now you just need to mount the drives in the container by using a mount point. You will need to add the USB drives as storage at the data...
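Something along these lines once the drive is mounted on the host (storage ID and path are made up):
pvesm add dir usb-data --path /mnt/usbdrive --content rootdir,images    # register the directory as storage
After that you can point a container mount point at it from the GUI or with pct.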
+1 on marsian's suggestions. Show node software versions and updates. Also, show node uptime. It would be nice to see who in the cluster has been up and for how long. Can you add a field for Fence events? Somehow track how often a node gets fenced.
+1 on Ceph dashboard view too.
You could add sdc (the new drive) to the system and create a software RAID1 mirror on sdc. When you create the mirror, just specify that the second disk is missing. The RAID device (e.g. md0) will come up active on sdc alone. You can then add the RAID device to LVM as a new volume group. Once added to LVM...
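Roughly, with the device names only as examples:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 missing   # degraded RAID1 on sdc alone
pvcreate /dev/md0                                                       # turn the mirror into an LVM PV
vgcreate vg_data /dev/md0                                               # new volume group on top of it
# later, once the original disk is free, complete the mirror with:
# mdadm /dev/md0 --add /dev/sdX1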
Hi Fabian,
Set up the exact same cluster in the lab running 4.2. Tested NFS backup: 30MB/s. Tested ceph export to the same NFS filesystem: 400MB/s.
Upgraded cluster to latest 4.3. No change. 30MB/s on backup. ceph export, 400MB/s.
Hello,
I have a 3-node cluster running Ceph. I have a Ceph monitor and 6 OSDs on each node. Each OSD journal is mapped to an NVMe partition on a separate disk. Each node has a dedicated 10G NIC for the Ceph public network and a dedicated NIC for the Ceph cluster network.
On the same LAN as the ceph...