I have some containers with large disks (mount points). I'd like to reduce the number of reserved blocks on these. How can I do this?
According to mount inside the container, the filesystem is ext4:
colin@www1:/dev$ mount | grep images
/dev/mapper/pve-vm--120--disk--2 on /disks/images type ext4...
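The reserved-block count on ext4 is adjusted with `tune2fs -m` (percentage) or `-r` (absolute block count); on the real container disk you would point it at the mapped device, e.g. /dev/mapper/pve-vm--120--disk--2. A minimal sketch on a throwaway image file (the path and size here are arbitrary, just for demonstration):

```shell
# Create a small scratch ext4 image to demonstrate on (no root needed)
truncate -s 64M /tmp/ext4-demo.img
mkfs.ext4 -q /tmp/ext4-demo.img

# Lower the reserved-block percentage from the default 5% to 1%
tune2fs -m 1 /tmp/ext4-demo.img

# Confirm the new reserved block count
dumpe2fs -h /tmp/ext4-demo.img 2>/dev/null | grep -i 'reserved block count'
```

tune2fs can be run on a mounted filesystem, so no downtime for the container should be needed.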
Good day everybody,
we would like to retrofit a software RAID 1 on our Proxmox system with mdadm.
Proxmox itself is currently installed on an NVMe with ext4, which is now to be mirrored on another NVMe.
However, the following error occurs.
"Floating point exception"
Hello everybody!
I am building a decentralised cloud with Proxmox, using multiple nodes across my country. It's a "best effort" project, aiming to give people an alternative to the classic GAFAM-based clouds.
Some of my nodes are proxmox clusters, running an underlying ZFS...
I have a cluster with all nodes on ext4 root.
I would like to add a few (new) nodes with zfs raid1 root. I found out that there are some issues, and I would like to know what's the best way to proceed:
- The local-lvm storage shows up with a "?" on the machine with zfs root, which is...
I installed Proxmox (using the ext4 filesystem) on a 512 GB SATA SSD (a used consumer-grade SanDisk SD8SB8U512G1001) to evaluate it. I ran some benchmarks using fio to get an idea of how fast it is. Before installing Proxmox, I benchmarked the disk with PassMark's Performance Test on Win10...
I have a problem with my PVE 6.4-13/9f411e79 (running kernel: 5.4.128-1-pve) and a disk volume
Disk model: PERC H730 Mini (raid6 2Gb*8)
Disk /dev/sdd: 10.9 TiB
Created the partition in parted:
(parted) mklabel gpt
(parted) mkpart primary 1 90%
Created the ext4 file system: mkfs.ext4 /dev/sdd1
My machine running Proxmox lost power in a storm. Now I wanted to start the box back up, but I get the following error: VFS: Can't find ext4 filesystem
Failed to mount /mnt/sdb
Dependency failed for local file systems
Failed to start import zfs pool ssd1
thanks to a reply to an earlier question of mine, I have now switched my server to ZFS. This raises the question: XFS or ext4 as the guest FS? So far I have always been a fan of XFS:
- online defrag possible (also relevant under Linux; perhaps less so, or not at all, with SSDs?)
I am running proxmox on a Dell R210 II, which I recently upgraded to use four Kingston A400 SSDs running in RAID 10 using the H200 RAID controller. After this machine ran for a few hours, it encountered an error and fell back into a read-only root filesystem (I've seen some answers that suggest...
I installed a new setup on my server (unfortunately the old one crashed completely). I had to rebuild it from scratch.
In terms of capacity, it's a 2 TB SATA HDD + a 500 GB SSD, with 64 GB RAM.
I followed the steps as per the documents. I selected ext4 for my installation.
Surprisingly, once I loaded the VMs, it...
The server is running, but it has 10 disks behind a HW RAID controller:
2×SSD as RAID1 (system partitions for the VMs and databases)
6×HDD as RAID6 (storage for VMs 1 & 2, somewhat faster than RAID1)
2×HDD as RAID1 (storage for VM 3, speed doesn't matter)
The SAS RAID controller also has a battery...
I apologize if this is a noob question.
So I installed Proxmox "normally", i.e. with LVM and ext4, some time ago. Now I noticed that my SSD shows up as 223.57 GiB in size under Datacenter->pve->Disks. But under Datacenter->pve->Disks->LVM-Thin the data LV is just 141.43 GiB. What is the rest used...
Hi, I had a PVE 6.2 install running for quite some time.
Now, after creating a new VM, my server is acting up and I don't know why or how to fix it (even after extensive googling).
When trying to boot the server, I get repeated messages of this form:
EXT4-fs (dm-1): I/O error while writing...
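For errors like this, a common first diagnostic step is a forced filesystem check, run from a rescue/live environment with the affected filesystem (here dm-1) unmounted. A harmless demo of the invocation on a scratch image:

```shell
# Scratch ext4 image; on the real server you would target the affected
# device from a rescue shell, with the filesystem unmounted
truncate -s 64M /tmp/fsck-demo.img
mkfs.ext4 -q /tmp/fsck-demo.img

# -f forces a full check even if the filesystem looks clean;
# -n answers "no" to all prompts, i.e. a read-only check
e2fsck -fn /tmp/fsck-demo.img
```

Note that repeated I/O errors can also point at failing hardware underneath the filesystem, so checking SMART data and kernel logs first is worthwhile.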
I am trying to decide between using XFS or EXT4 inside KVM VMs. My goal is not to over-optimise in an early stage, but I want to make an informed file system decision and stick with that.
Ceph as backend storage
Writeback cache on VM disk
No LVM inside VM
After testing some new disks with ext4 and XFS, I ended up with an entry under Server/Disks/Directory: "/mnt/pve/test-disk /dev/disk/by-uuid/xxxxxx ext4 defaults".
This disk is now gone/formatted and replaced with XFS manually.
But I can't figure out how to remove this line in the GUI. (It's not in...
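For context, directory storages defined through the GUI end up in two places: the storage definition in /etc/pve/storage.cfg (removable with `pvesm remove <storage-id>`), and a systemd mount unit under /etc/systemd/system/ that performs the actual mount. A sketch of what the storage.cfg entry typically looks like (the storage ID follows the post; the content line is illustrative):

```
dir: test-disk
        path /mnt/pve/test-disk
        content images,iso
```

Removing both the storage.cfg entry and the matching mount unit should make the stale line disappear from the GUI.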
My config is:
3 nodes configured as hyperconverged (cluster+ceph+HA), two networks (one for services and one for Ceph). All ISOs, templates and disks (for VMs and CTs) are stored in Ceph. One of the three nodes was installed not with ZFS but with ext4.
After you migrate a VM or CT to...
I'm looking to move from several physical machines to one big server and several VMs. One of those physical machines is a OMV-based NAS with several ext4 disks full of data.
Can I mount the ext4 filesystems to proxmox and then make them available to a VM (which will be...
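Yes, the host mounts them like any other Linux system; from there the data can be exposed to a VM (e.g. shared via NFS/Samba from the host, or by passing a whole disk through to the VM with `qm set`). A hedged /etc/fstab sketch for the host side, with placeholder UUIDs and mountpoints:

```
# /etc/fstab on the Proxmox host (UUIDs and paths are placeholders)
UUID=aaaa-bbbb /mnt/nasdisk1 ext4 defaults 0 2
UUID=bbbb-cccc /mnt/nasdisk2 ext4 defaults 0 2
```

Note that a disk passed through to a VM as a block device must not stay mounted on the host at the same time.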
I noticed that the only way to get low iowaits with Ceph is to set the VM disk cache to writeback.
But it's not enough: with MySQL (InnoDB) we still have high iowaits under high load. We had to disable barriers in the ext4 mount options. After that, disk performance is OK.
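For reference, the barrier toggle is just an ext4 mount option inside the guest; disabling it trades crash consistency for speed, so it is generally only considered when the storage layer underneath provides its own guarantees. A sketch of the relevant guest /etc/fstab line (device and mountpoint are placeholders):

```
# ext4 with write barriers disabled (guest /etc/fstab; placeholders)
/dev/vda1 /var/lib/mysql ext4 defaults,barrier=0 0 2
```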
On a 5 nodes cluster...
We're planning on using a few Intel NUCs in lab conditions with Proxmox. They take a single M.2 NVME device. What would be the best filesystem to use in this case?
Also, if ZFS, would a RAID1 or RAID0 array be the correct option?
Hi, I have set up a thin pool for container creation. When I create CTs, the filesystem is formatted with ext4 automatically.
Is it possible to create containers with XFS instead of ext4 when using thin provisioning?