Currently I have one Ceph pool consisting of a mix of SAS3 SSDs (different models and sizes, but all in the same performance category).
I'm thinking of creating another bucket (I don't know if "bucket" is the right name for what I want to do).
Currently we have the default root (consisting of SAS3 SSDs)...
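In recent Ceph releases the usual way to separate a group of drives is with device classes rather than a new bucket hierarchy. A sketch, not runnable outside a live cluster; the class name `sas3-fast`, the OSD id and the pool name are all hypothetical examples:

```shell
# Hypothetical sketch: split off a set of SSDs via a custom device class.
# Repeat the two class commands for each OSD that should move.
ceph osd crush rm-device-class osd.10            # clear the auto-assigned class
ceph osd crush set-device-class sas3-fast osd.10

# Create a replicated CRUSH rule that only targets that class
ceph osd crush rule create-replicated fast-rule default host sas3-fast

# Point a new (or existing) pool at the rule
ceph osd pool set mypool crush_rule fast-rule
```

The advantage over a second root bucket is that the OSDs stay under the same hosts in the CRUSH map, so failure domains keep working as before.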
[cross posted on STH]
I work in a research lab where we recently purchased a Threadripper 3970X workstation from a system integrator.
It was a much better deal than a Dell/Intel system, which would have cost us twice as much.
The plan is to run Proxmox as the base hypervisor and run multiple Windows and...
I have a workstation with an MSI TRX40 motherboard, which comes with a dual-SSD PCIe card that lets two SSDs on the card share a PCIe x16 slot.
A quad card can also be purchased.
I was wondering how Proxmox will see them when passing through the NVMe SSDs on the dual or quad card.
Will they behave...
I am building my first production Proxmox setup - not my first virtualization deployment, mind you, but costs are astronomical with the 'other' virtualization options. Anyway.
I have specific needs and a specific setup - regarding the setup, let's leave that aside. My question is this - from your experience - is this...
I am passing through an Intel P3600 1.6TB NVMe SSD to a fresh install of an Ubuntu 19.10 VM, using the Q35 machine type, 44 cores and 48GB of RAM, and the performance is terrible: it maxes out around 1000 MB/s but averages 450 MB/s.
It was previously passed through on a Dell R710...
Current kernel used: 5.3.13-3-pve
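To make numbers like these comparable, it can help to run the exact same benchmark on the host and inside the VM. A sketch using fio (the device path and job parameters are example values only; it performs reads, but double-check the path before running anything against a raw device):

```shell
# Sequential read benchmark against the passed-through NVMe device.
# Run the identical command on the host and in the guest and compare.
fio --name=seqread --filename=/dev/nvme0n1 --rw=read \
    --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting
```

If the host hits the expected numbers and only the guest is slow, the problem is likely in the passthrough/VM configuration rather than the drive itself.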
I seem to be having occasional I/O errors with my NVMe PCIe 4.0 drives (Corsair MP600 on an ASUS Pro WS X570-ACE motherboard), which I've come to determine could be related to kernel TRIM support for NVMe drives, as reflected in a kernel bug...
I am running the latest version of Proxmox on a 16-node 40GbE cluster. Each node has 2 Samsung 960 EVO 250GB NVMe SSDs and 3 Hitachi 2TB 7200 RPM Ultrastar disks. I am using BlueStore for all disks with two CRUSH rules: one fast rule for NVMe and one slow rule for HDD.
I have tested bandwidth between all...
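Besides raw network bandwidth tests, Ceph's built-in benchmark can isolate OSD-side throughput per CRUSH rule. A sketch, not runnable outside a live cluster; it assumes a throwaway test pool named `bench` (hypothetical) created against the rule you want to measure:

```shell
# 30 seconds of 4 MB object writes into the "bench" pool
rados bench -p bench 30 write --no-cleanup
# Sequential reads of the objects just written
rados bench -p bench 30 seq
# Remove the benchmark objects afterwards
rados -p bench cleanup
```

Running this once against the NVMe-backed pool and once against the HDD-backed pool shows whether the slow tier, rather than the network, is the bottleneck.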
Hi, I’m thinking of replacing the single SATA SSD (500GB Samsung 860 Evo) disk in one of my PVE6 cluster nodes with a 500GB Samsung 970 Evo Plus NVMe disk...
Should this simply be a case of using Clonezilla to clone the SATA SSD to the NVMe and then booting PVE6 from the NVMe, or would there...
Hi all. I am in the process of getting new hardware and moving to Ceph storage.
I think my hardware will be based on 5 nodes, each with 3x PCIe NVMe drives and 10x 2.5" SSDs.
The 2.5" SSDs will be used mostly for read-intensive access,
and the PCIe NVMe drives will be used for the OS, SQL DB and LXC containers...
I have a simple question which I would like to share because I'm interested in your point of view.
On a Proxmox server I have fast storage (SSDs or NVMe) and slow storage (SAS or SATA).
My first choice would be to install Proxmox on the fast storage and also use it for storing virtual...
I am absolutely new to Proxmox and have also never installed/configured a virtualisation server for hosting several VMs. So, from what I've heard and read, Proxmox seems a good choice for that, especially as I like Debian a lot.
The system will be: Ryzen 7 3700X, 32 GB RAM, Gigabyte Mainboard, 1...
I have 32GB of RAM and 12 Xeon cores. All the containers I will launch will be the same version of Ubuntu. On top of that, some of the containers will be running identical applications with nearly identical data.
Because of deduplication, I think I could squeeze a lot more data onto the local...
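One thing worth checking before enabling dedup is the RAM cost of the dedup table (DDT). A back-of-the-envelope sketch, assuming roughly 320 bytes of RAM per unique block and the default 128K recordsize, with every block unique as the worst case (all three are assumptions; real data will differ):

```shell
# Rough ZFS dedup-table RAM estimate (assumed: ~320 B per unique block,
# 128K recordsize, every block unique -- worst case)
pool_gb=500                 # hypothetical amount of unique data
recordsize_kb=128
bytes_per_entry=320
blocks=$(( pool_gb * 1024 * 1024 / recordsize_kb ))
ddt_mib=$(( blocks * bytes_per_entry / 1024 / 1024 ))
echo "~${blocks} blocks -> ~${ddt_mib} MiB of DDT"   # ~1250 MiB for 500 GB
```

On an existing pool, `zdb -S <poolname>` will simulate the dedup ratio and table size for the data already on disk, without actually turning dedup on.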
I have a MicroBlade MBI-6219G-T8HX node.
I added an additional M.2 drive to this node, a Samsung 970 EVO NVMe M.2 2TB,
but I don't see the new drive, neither in fdisk nor in the Disks list in the Proxmox GUI.
If I boot from the installation CD, I can see the new drive in the installer's drive list.
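Since the installer kernel does see the drive, the running system may simply not have enumerated the slot. A few hedged checks to narrow it down (commands need root; nothing here is specific to this board):

```shell
# Is the NVMe controller visible on the PCI bus at all?
lspci | grep -i nvme
# Is the nvme driver loaded, and did it log anything?
lsmod | grep nvme
dmesg | grep -i nvme
# Force a PCI bus rescan without rebooting
echo 1 > /sys/bus/pci/rescan
```

If `lspci` shows nothing even after the rescan, the difference between the installer and the installed system is likely a kernel version or firmware/BIOS setting rather than a Proxmox issue.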
I have two ADATA SX6000 LITE 512GB M.2 SSDs (ASX6000LNP-512GT-C), but only one of them is detected in Linux.
[ 0.720367] nvme nvme0: pci function 0000:01:00.0
[ 0.720415] nvme nvme1: pci function 0000:1f:00.0
[ 0.942706] nvme nvme1: ignoring ctrl due to duplicate subnqn...
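The log suggests both drives report the same subsystem NQN, which the kernel rejects by default because NQNs are supposed to be unique. You can confirm this with nvme-cli (device paths are examples):

```shell
# Compare the subsystem NQN reported by each controller; identical values
# on both drives confirm the duplicate-subnqn problem from the log above.
nvme id-ctrl /dev/nvme0 | grep -i subnqn
nvme id-ctrl /dev/nvme1 | grep -i subnqn
```

If they match, the fix is usually a firmware update from the vendor or a kernel new enough to carry a quirk for the affected controller, rather than anything configurable in Proxmox.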
I'm trying to install Proxmox 5.4 with RAIDZ-1 on 3 NVMe drives, but I'm receiving the error "unable to create zfs root pool".
The NVMe drives are correctly recognised and I can configure RAIDZ-1 on them:
But I'm receiving the following error:
Could you help me please?
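"unable to create zfs root pool" is often caused by leftover partition tables or stale ZFS labels on drives that were used before. A hedged cleanup sketch to run from the installer's debug shell; the device names are examples and this destroys all data on the listed disks:

```shell
# WARNING: wipes the listed disks. Adjust device names first.
for d in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
    zpool labelclear -f "$d"      # remove stale ZFS labels, if any
    sgdisk --zap-all "$d"         # wipe GPT/MBR structures
done
```

After wiping, restarting the installer and retrying the RAIDZ-1 setup gives ZFS clean devices to work with.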
I am currently testing a configuration with Proxmox VE 5.3-8
and have a problem with two NVMe SSDs.
The base installation sits on two SATA SSDs, on which I first tested the virtual machines. There were no problems with that - now I wanted to add two more...
Ok, I have read probably 30 posts now about Proxmox and NVMe + UEFI, so I do apologize for this post - it is absolutely a duplicate of many others - however, I don't see a clear answer for my setup in any of those posts, and my head is spinning trying to figure this all out, so I figured, I...
It should be possible to mark a virtual disk as backed by an SSD/flash device. VirtIO SCSI with SCSI, VirtIO and SATA drives should support this.
VirtualBox has supported this since 2014 - when can we expect this to happen in PVE?
According to the upstream sources this should be possible...
I've got a couple of Proxmox servers (single nodes, no HA) with SATA drives, and getting a backup is a real pain.
It takes 50 minutes or so to get a 60-70GB backup.
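For reference, that works out to quite a low effective rate; a quick shell calculation with the numbers from the post:

```shell
# Effective backup throughput: ~65 GB in 50 minutes
size_gb=65
minutes=50
mb_per_s=$(( size_gb * 1024 / (minutes * 60) ))
echo "${mb_per_s} MB/s"    # ~22 MB/s -- well below sequential SATA speeds
```

Since even spinning SATA disks sustain far more than that sequentially, the bottleneck is probably compression, random I/O from the VM images, or the backup target rather than the drives' raw speed.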
I'm planning to migrate a few of these servers into one new much more powerful server with the following drives/setup: