The question is: can I use the Ceph native repository? Will it work?
If I use the Ceph native repository, could I run into problems when I later update Proxmox?
Hi,
we are planning to create a Proxmox cluster (8 nodes) for computing and use a native (original) Ceph cluster (Octopus, always updated to the latest version) installed on Debian Buster. Now I have some questions:
For the Ceph "client", which repository should I use? The original Proxmox repository...
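To be concrete, if I am not mistaken these would be the two candidate entries for /etc/apt/sources.list.d/ceph.list on Buster (the upstream one is what I mean by "native"):

# upstream ("native") Ceph Octopus repository
deb https://download.ceph.com/debian-octopus/ buster main
# Ceph Octopus repository built by Proxmox
deb http://download.proxmox.com/debian/ceph-octopus buster main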
Hi,
I ran a test with RAID10 + one spare disk, but I'm a little confused by the results...
I ran the test with:
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=16384M --numjobs=4 --runtime=240 --group_reporting
and the result of fio is:
Jobs: 4...
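One doubt about my own command: with --direct=0 the 4k writes mostly land in the page cache / ARC, so the numbers may say more about RAM than about the disks. A variant I'm thinking of trying (just my assumption, not what I ran above) forces a flush after every write:

fio --name=randwrite-sync --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --fsync=1 --size=16384M --numjobs=4 --runtime=240 --group_reporting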
Hi,
I opted to use ZFS. Now I should decide whether to use RAIDZ2 or RAID10 with the fifth disk as SPARE. I have currently tested RAIDZ2 with:
fio --name=randwrite --output /NVME-1-VM --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=16384M --numjobs=4 --runtime=240...
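For reference, the two layouts I am comparing would be created more or less like this (pool name and disk names are only placeholders for my five NVMe disks):

# RAIDZ2 across all five disks
zpool create tank raidz2 nvme0n1 nvme1n1 nvme2n1 nvme3n1 nvme4n1
# striped mirrors ("RAID10") with the fifth disk as hot spare
zpool create tank mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1 spare nvme4n1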
Hi,
Yes, I know that Ceph needs at least 3 nodes, and that for a standalone server ZFS is the "natural" choice. My question arose because in the near future the customer will add two more servers, and with Ceph I would have had the storage ready; it would have been enough to add nodes to Ceph ...
Hi,
a client of mine currently has only one server with the following characteristics:
Server: SUPERMICRO Server AS -1113S-WN10RT
CPU: 1 x AMD EPYC 7502 32C
RAM: 512 GB ECC REC
NIC 1: 4 x 10Gb SFP+ Intel XL710-AM1 (AOC-STG-I4S)
NIC 2: 4 x 10Gb SFP+ Intel XL710-AM1 (AOC-STG-I4S)
SSD for OS ...
Hi,
so if I understand correctly, I just tag the bond with the outer VLAN (in the example, VLAN 10) and then tag the VM NIC with the inner VLANs (in the example, 34 and 35)? Below I sketch what I mean.
Thank you
PS: I will be waiting impatiently for the stable version of the new SDN feature....
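In the meantime, this is roughly the /etc/network/interfaces part I have in mind for customer A (untested sketch; the bond mode and the VLAN-aware bridge are my own assumptions):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100

auto vmbr10
iface vmbr10 inet manual
    bridge-ports bond0.10
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
# the VM NICs attached to vmbr10 carry the inner tag (34 or 35) set on the VM,
# while bond0.10 adds the outer tag 10 on the wire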
Hi,
I would like to start using Q-in-Q.... This is a typical scenario:
bond0: eno1 + eno2
bond0.10: vlan 10
bond0.20: vlan 20
vmbr10: bridge assigned to the VMs of customer A
vmbr11: bridge assigned to the VMs of customer B
Customer A has 10 VMs, 4 of which have to "ping" between them but not with...
Hi,
within a few weeks I will have to configure a 5-node Proxmox cluster using Ceph as storage. I have 2 Cisco Nexus 3064-X switches (48 ports, 10Gb SFP+) and I would like to configure the Ceph networks, for each node, in the following way:
Ceph Public: 2 x 10Gb Linux Bond in Active/Standby...
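For the bonds themselves I was thinking of something like this per node (interface names and addresses are only placeholders; one leg goes to each Nexus):

auto bond1
iface bond1 inet static
    address 192.168.100.11
    netmask 255.255.255.0
    bond-slaves enp65s0f0 enp65s0f1
    bond-mode active-backup
    bond-primary enp65s0f0
    bond-miimon 100
# Ceph Public network; with active/standby no vPC/MLAG is required on the Nexus side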
Hi,
I've just read this post, and as I wrote in this post https://forum.proxmox.com/threads/proxmox-5-4-stops-to-work-zfs-issue.63849/#post-298631
I have the exact same problem (last time two days ago) and I don't know what to do anymore. I have updated the BIOS and Proxmox (with pve-subscription)...
Hi,
I have removed swap with "swapoff -a" and now... let's hope for the best...
I also noticed in kern.log the following message:
Feb 7 14:11:50 dt-prox1 kernel: [57824.528963] perf: interrupt took too long (4912 > 4902), lowering kernel.perf_event_max_sample_rate to 40500
What is it?
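Side note to myself: "swapoff -a" does not survive a reboot, so I think I also have to comment out the swap entry in /etc/fstab (the line below is just an example, mine will differ):

# /etc/fstab: keep swap disabled after reboot by commenting the swap line
#UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none swap sw 0 0

and the value the perf message talks about can be read back with:

sysctl kernel.perf_event_max_sample_rate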
If it can help, this is arc_summary (part 2):
ZFS Tunables:
dbuf_cache_hiwater_pct 10
dbuf_cache_lowater_pct 10
dbuf_cache_max_bytes 104857600
dbuf_cache_max_shift 5...
If it can help, this is arc_summary (part 1):
------------------------------------------------------------------------
ZFS Subsystem Report Thu Feb 06 07:44:18 2020
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 13.13M...