Thanks, this is a temporary setup; we will add more nodes once we finish migrating the VMs off the old system (the old server will be reused as a new Proxmox node).
Does this mean we can lose half the system (6 OSDs simultaneously, in my current setup)?
Thanks, I have managed it already.
However, when I tried creating an SSD pool again and migrating a disk to it, the storage became degraded again:
Degraded data redundancy: 261/783 objects degraded (33.333%), 167 pgs degraded, 256 pgs undersized
pg 2.cd is active+undersized+degraded, acting...
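For what it's worth, 261 out of 783 is exactly one third, which is what you would see with a 3-replica pool on only 2 hosts: one copy of every object has no host to land on (this is my reading of the numbers, not confirmed from the post). The arithmetic:

```shell
# 261 of 783 object copies degraded is exactly one third:
awk 'BEGIN { printf "%.3f%%\n", 261 / 783 * 100 }'
```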
I have taken the following steps and my storage looks OK now:
/etc/init.d/ceph stop osd.0
/etc/init.d/ceph stop osd.2
/etc/init.d/ceph stop osd.8
/etc/init.d/ceph stop osd.9
ceph osd tree
ceph auth del osd.0
ceph auth del osd.2
ceph auth del...
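For reference, the steps above follow the usual manual OSD removal sequence from the Ceph docs: mark out, stop the daemon, remove from the CRUSH map, delete the auth key, remove the OSD id. A dry-run sketch over the same ids (0, 2, 8, 9, taken from the steps above); it only prints the commands, so drop the `echo` to actually execute them:

```shell
# Dry run: print the standard removal sequence for the stopped OSDs.
for id in 0 2 8 9; do
  echo ceph osd out osd.$id
  echo ceph osd crush remove osd.$id
  echo ceph auth del osd.$id
  echo ceph osd rm osd.$id
done
```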
Hi,
I have 2 nodes with 6 OSDs per node, and I followed this guide: https://pve.proxmox.com/wiki/Ceph_Server to create the Ceph cluster. However, after creating the OSDs from the web GUI, it shows 4/8 OSDs out.
I tried creating a pool on Ceph and copying data to it. The storage cluster reports degraded...
Hi,
I have switched from Windows 7 to Windows Server 2012. I log in via the SPICE console, then look at noVNC, and it shows the same thing I am doing in the SPICE console. How do you handle this privacy issue for your users?
Hello,
I am deciding between the VNC and SPICE consoles for VDI usage; SPICE looks more interesting to me.
May I know the advantages and disadvantages of both solutions?
Can we disable noVNC for a KVM guest? With Windows 7, when I log in via SPICE, the same session is also open in noVNC.
Hello,
We use this for hosting VPSes only.
All these nodes connect only to Ceph storage (for VM disks) and NFSv3 (for backups).
Our backup server had a problem with its RAID array and timed out (per service pvestatd status) over the past few days. It is up again now after rebuilding the RAID array.
We will monitor...
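Since pvestatd hangs when an NFS backup store stops answering, a small probe can be run by hand or from cron to catch this early. A minimal sketch, assuming the store is mounted at the hypothetical path /mnt/pve/backup:

```shell
# Probe the (hypothetical) NFS-backed mount point with a hard 5-second
# timeout, so a hung mount cannot block the check itself.
if timeout 5 stat /mnt/pve/backup >/dev/null 2>&1; then
  echo "backup storage responsive"
else
  echo "backup storage hung or unreachable"
fi
```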
Thanks, I tried it and it works.
root@hv109:~# qm set 292 -net0 virtio=3A:8F:9B:FC:37:DE,bridge=vmbr0,tag=575,rate=12
update VM 292: -net0 virtio=3A:8F:9B:FC:37:DE,bridge=vmbr0,tag=575,rate=12
root@hv109:~#
I will code it myself.
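The working command above re-specifies the whole net0 value, which is why the bare rate=12 failed: net0 is one complete property string, so the model and MAC must be included again. A loop over many VMs could start from a sketch like this; it is a dry run (remove the echo to apply), the net0 value reuses the MAC from the post, and in practice each VM's current value would come from `qm config <vmid>`:

```shell
# Dry run: append rate=12 (MB/s) to an existing net0 value and print the
# qm command that would apply it, rather than executing it.
vmid=292
net0="virtio=3A:8F:9B:FC:37:DE,bridge=vmbr0,tag=575"
echo qm set "$vmid" -net0 "${net0},rate=12"
```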
Hi,
Is there any way to limit network speed to 12 MB/s for 1000+ VMs via the command line?
I tried:
qm set 341 --net0 rate=12
But it returns an error:
400 Parameter verification failed.
net0: invalid format - format error
net0.model: property is missing and it is not optional
qm set <vmid> [OPTIONS]
After the upgrade a new error appeared: a question mark in front of all nodes.
I logged in and found that the nodes cannot ping each other. After removing one slave from the bonded NIC and rebooting, they can ping, but the problem is still there:
root@cp102:~# pvecm s
Quorum information
------------------
Date...
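Since pulling one bond slave restored ping, the bond mode may not match what the switch expects (for example balance-rr or LACP without the matching switch-side configuration). A hedged /etc/network/interfaces sketch using active-backup, which needs no switch-side setup; the interface names and address here are hypothetical:

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```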
Hi,
My cluster was running OK, but after adding external Ceph storage it goes red. May I know how to debug this?
My Proxmox version:
root@cp101:~# pveversion -v
proxmox-ve: 5.0-19 (running kernel: 4.10.17-2-pve)
pve-manager: 5.0-30 (running version: 5.0-30/5ab26bc)
pve-kernel-4.10.17-2-pve: 4.10.17-19...
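With an external Ceph cluster, the storage definition and the client keyring are the usual suspects when nodes turn red: pvestatd blocks while querying an unreachable store. A hedged /etc/pve/storage.cfg sketch; the storage id, monitor addresses, and pool name are hypothetical, and the client keyring must also be copied to /etc/pve/priv/ceph/<storage-id>.keyring:

```
rbd: ceph-ext
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool rbd
        content images
        username admin
        krbd 0
```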