I am new to Proxmox/Ceph and looking into some performance issues.
5 OSD nodes and 3 Monitor nodes
Cluster VLAN - 10.111.40.0/24
OSD node:
CPU - AMD EPYC 2144G (64 cores)
Memory - 256 GB
Storage - Dell 3.2 TB NVMe x 10
Network - 40 GbE for the Ceph cluster
Network - 1 GbE for Proxmox mgmt
MON node:
CPU -...
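A minimal baseline sketch that usually separates network, OSD and client issues before any tuning; the pool name 'testpool' is a placeholder:

    ceph -s                                                      # overall cluster health
    ceph osd perf                                                # per-OSD latency
    rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup    # 60s write test, 4M objects, 16 threads
    rados bench -p testpool 60 rand -t 16                        # random reads against the same objects
    rados -p testpool cleanup                                    # remove the benchmark objects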
When I check 'Disks' under the storage view, it shows the 1TB NVMe I have installed, and next to it it says usage: ZFS.
When I click on 'ZFS' just below 'Disks' there is a single pool named rpool which does not include the 1TB NVMe, and I see no way to add it to this pool.
Please assist.
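A minimal sketch of how to see what rpool currently contains and how a disk would be attached, assuming /dev/nvme1n1 is the new 1TB drive (check your own device names first; the attach/add commands change the pool layout, so they are left commented out):

    zpool status rpool            # which devices rpool is built from
    lsblk -o NAME,SIZE,MODEL      # confirm which device is the new 1TB NVMe
    # attach as a mirror of an existing single-disk vdev:
    #   zpool attach rpool <existing-device> /dev/nvme1n1
    # or add it as a separate, non-redundant vdev (hard to undo):
    #   zpool add rpool /dev/nvme1n1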
Hello everyone,
a quick note up front: I am not only new here, I am also a complete beginner when it comes to Proxmox and Linux.
My experience so far has only been with Windows and Hyper-V servers.
Regarding the matter described in the subject line, I have...
Recently added an extra NVMe drive to the system, using ZFS.
After adding it I'm lost as to what is happening. ZFS appears to have 'absorbed' it, but I cannot partition it, and there appears to be no way to undo it.
I've definitely done something wrong but cannot progress, any pointers?
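A minimal sketch for checking what ZFS did with the drive, assuming the device is /dev/nvme1n1 (a placeholder); the label-clearing commands destroy data and are commented out:

    zpool status                  # shows whether the NVMe ended up in a pool
    zpool list -v                 # pools with their member devices and sizes
    # only if the disk is NOT part of any pool and just carries stale ZFS labels:
    #   zpool labelclear -f /dev/nvme1n1
    #   wipefs -a /dev/nvme1n1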
Hey guys,
thanks for the release of Proxmox Backup Server!
PBS looks very promising with regard to what our company needs:
Incremental backups of our VMs, e.g. every 15 minutes
Flexible retention cycle, e.g. keep last 8, keep 22 hours, ...
One pushing PVE client, several backup servers pull...
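A hedged sketch of how such a retention policy maps onto PBS prune options; the repository, datastore and backup group below are placeholders:

    # dry-run prune matching "keep last 8, keep 22 hourly"
    proxmox-backup-client prune vm/100 \
        --repository backup@pbs@192.0.2.10:store1 \
        --keep-last 8 --keep-hourly 22 --dry-run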
My servers all run HGST260 enterprise PCIe NVMe drives, mirrored. The drives have great performance, but my guests seem to be limited by queue depth. Are we able to use emulated NVMe devices to increase the parallelization of disk IO, and would this help relative to the standard SCSI devices?
I...
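A sketch of the two usual knobs for guest queue depth, assuming VMID 100 and a ZFS-backed disk (all names are placeholders); the raw '-device nvme' variant is an experiment via args passthrough, not a managed Proxmox disk type:

    qm set 100 --scsihw virtio-scsi-single                    # one controller/iothread per disk
    qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=1
    # experimental: emulated NVMe controller straight from QEMU
    # qm set 100 --args '-drive file=/dev/zvol/rpool/data/vm-100-disk-1,if=none,id=nvm0,format=raw -device nvme,drive=nvm0,serial=nvme100'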
My 1TB NVMe is installed in an NVMe PCIe adapter (these can be bought for around $10).
First you simply install Proxmox to the NVMe; this is straightforward, just like installing to a hard drive. The problem comes into play when you try booting from the NVMe, since older BIOSes do not support that...
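A quick sketch for checking what the firmware actually registered, assuming a UEFI install; on legacy-only boards the usual workaround (an assumption here, not something confirmed above) is keeping the bootloader on a small SATA or USB device:

    efibootmgr -v                        # firmware boot entries -- look for the Proxmox/NVMe entry
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT   # confirm where / and the EFI partition landed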
Hi everyone,
I'm not very familiar with Proxmox yet and therefore need a bit of help.
I would like to use my second NVMe, which is currently unused, as backup storage. Unfortunately I have no idea how exactly to do that. A really good guide that...
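A minimal sketch of one common way to do this, assuming the spare NVMe is empty; its by-id path, the pool name and the storage ID are placeholders:

    zpool create backuppool /dev/disk/by-id/nvme-XXXXXXXX            # destroys anything on that disk
    zfs set compression=on backuppool
    pvesm add dir nvme-backup --path /backuppool --content backup    # makes it selectable for vzdump backups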
I have a long-running server which uses 2x ZFS-mirrored 7TB Ultrastar SN200 Series NVMe SSDs and runs Proxmox 6.2. I bought two more identical devices and set up a second server about a month ago.
I could create VMs and migrate to/from local-vmdata (the local ZFS pool on these drives).
At some...
Hi everybody,
we are currently in the process of replacing our VMware ESXi / NetApp NFS setup with a Proxmox Ceph configuration.
We purchased 8 nodes with the following configuration:
- ThomasKrenn 1HE AMD Single-CPU RA1112
- AMD EPYC 7742 (2.25 GHz, 64-core, 256 MB)
- 512 GB RAM
- 2x 240GB SATA...
Hi
We are using our Proxmox with storage that connects over fabric using the Mellanox driver.
We also use NVMesh as our storage management software, so we can see the volume as local,
and we are using it as a LOCAL ZFS file system. The problem is that Proxmox doesn't see it as shared...
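For comparison only, a storage.cfg sketch of the 'shared' flag on a directory storage; whether it is safe to mark a ZFS-backed NVMesh volume this way is exactly the open question, so the IDs and path below are assumptions, not a recommendation:

    # /etc/pve/storage.cfg (sketch)
    dir: nvmesh-vol
            path /mnt/nvmesh/vol01
            content images,rootdir
            shared 1          # tells PVE every node sees the same data at this path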
Currently I have one Ceph pool consisting of a mix of SAS3 SSDs (different models and sizes, but all in the same performance category).
I'm thinking of creating another bucket (I don't know if bucket is the right name for what I want to do).
Currently we have the default (consisting of SAS3 SSDs)...
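A sketch of how this is usually expressed in Ceph with device classes and a CRUSH rule per class rather than a hand-built bucket; the OSD IDs, rule name and pool name are placeholders:

    ceph osd crush rm-device-class osd.12 osd.13              # clear any auto-assigned class first
    ceph osd crush set-device-class nvme osd.12 osd.13        # tag the new drives with their own class
    ceph osd crush rule create-replicated nvme-rule default host nvme
    ceph osd pool set fastpool crush_rule nvme-rule           # point a pool at the new rule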
[cross posted on STH]
I work in a research lab that recently purchased a Threadripper 3970x workstation from a system integrator.
It is a much better deal than Dell-Intel, which would have cost us twice as much.
The role is to run Proxmox as the base hypervisor and run multiple Windows and...
Hi,
I have a workstation with an MSI TRX40 motherboard, which comes with a dual-SSD PCIe card that lets two SSDs on the card split a PCIe x16 slot.
A quad card can also be purchased.
I was wondering how Proxmox will see them when passing through NVMe SSDs on the dual or quad card.
Will they behave...
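A small sketch of the check that usually answers this, run on the Proxmox host with IOMMU enabled; whether each SSD can go to a different VM depends on it landing in its own IOMMU group:

    lspci -nn | grep -i nvme                 # each SSD should appear as its own PCIe function
    find /sys/kernel/iommu_groups/ -type l   # devices in separate groups can be passed through separately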
Hey guys,
I am building my first production Proxmox - not my first virtualization setup, mind you, but costs are astronomical with 'other' virtualization options. Anyway.
I have specific needs and a specific setup - regarding the setup, let's leave that. My question is this - from your experience - is this...
I am passing through an Intel P3600 1.6TB NVMe SSD to a fresh install of an Ubuntu 19.10 VM, using the Q35 machine type, 44 cores and 48GB of RAM, and the performance is terrible, maxing out around 1000 MB/s but averaging 450 MB/s.
It had previously been connected, through passthrough, on a Dell R710...
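A sketch of an in-guest read benchmark that usually shows whether the device or the I/O pattern is the limit; the device path, queue depth and runtime are placeholder values:

    fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M \
        --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting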
Current kernel used: 5.3.13-3-pve
I seem to be having some occasional I/O error issues with my NVMe PCIe 4.0 drives (Corsair MP600 on an ASUS Pro WS X570-ACE motherboard), which I've come to determine could be related to kernel TRIM support for NVMe drives, as reflected in kernel bug...
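A sketch of the read-only checks usually done before blaming the kernel's TRIM handling; the device name is a placeholder:

    uname -r                         # confirm the running pve kernel
    lsblk --discard /dev/nvme0n1     # DISC-GRAN/DISC-MAX show whether discard is advertised
    dmesg | grep -i nvme             # controller resets / I/O error messages
    # fstrim -v /                    # manual trim of a mounted filesystem, run deliberately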
I am running the latest version of Proxmox on a 16-node 40 GbE cluster. Each node has 2 Samsung 960 EVO 250GB NVMe SSDs and 3 Hitachi 2TB 7200 RPM Ultrastar disks. I am using Bluestore for all disks with two CRUSH rules, one ('fast') for NVMe and one ('slow') for HDD.
I have tested bandwidth between all...
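A sketch of the per-tier benchmarks that usually narrow this down, with the pool names and peer address as placeholders:

    iperf3 -s                                     # on one node
    iperf3 -c 10.10.10.2 -P 4                     # from another node, 4 streams over the 40 GbE link
    rados bench -p nvme-pool 60 write -b 4M -t 16 --no-cleanup
    rados bench -p hdd-pool 60 write -b 4M -t 16 --no-cleanup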