Hi, I had a PVE 6.2 install running for quite some time.
Now, after creating a new VM, my server is acting up and I don't know why or how to fix it (even after extensive googling).
When trying to boot the server, I get repeated messages of this form:
EXT4-fs (dm-1): I/O error while writing...
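A reasonable first step here is to map dm-1 back to its logical volume and check the health of the underlying disk; a minimal diagnostic sketch, assuming the physical disk is /dev/sda:
lsblk -o NAME,KNAME,TYPE,MOUNTPOINT   # find which LV shows up as dm-1
dmsetup ls                            # list device-mapper targets with their (major:minor) numbers
smartctl -a /dev/sda                  # SMART health of the underlying disk (smartmontools package)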
I am trying to decide between using XFS or EXT4 inside KVM VMs. My goal is not to over-optimise at an early stage, but I want to make an informed file system decision and stick with it.
Ceph as backend storage
Writeback cache on VM disk (see the sketch after this list)
No LVM inside VM
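For reference, the writeback cache mode can be set per disk with qm; a minimal sketch, assuming VM ID 100 whose first SCSI disk lives on a Ceph storage named ceph-vm:
qm set 100 --scsi0 ceph-vm:vm-100-disk-0,cache=writeback   # re-declare the disk with cache=writeback
The same cache option is also selectable per disk in the GUI under the VM's Hardware tab.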
After testing some new disks with ext4 and xfs, I ended up with an entry under Server/Disks/Directory reading "/mnt/pve/test-disk /dev/disk/by-uuid/xxxxxx ext4 defaults".
This disk is now gone, formatted and replaced with xfs manually.
But I can't find out how to remove this line in the GUI. (It's not in...
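In case someone else hits this: when the directory was created via the GUI wizard, that line usually comes from a systemd mount unit plus a storage definition, so one plausible cleanup (assuming the storage ID is test-disk, and noting the \x2d is systemd's escaping of the hyphen) is:
pvesm remove test-disk                                 # drop the storage definition, if one exists
systemctl disable --now 'mnt-pve-test\x2ddisk.mount'   # stop and disable the mount unit
rm '/etc/systemd/system/mnt-pve-test\x2ddisk.mount'    # remove the unit file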
My config is:
3 nodes configured as hyperconverged (cluster + Ceph + HA), two networks (one for services and one for Ceph). All ISOs, templates, and disks (for VMs and CTs) are stored in Ceph. One of the three nodes was installed not with ZFS but with ext4.
After you migrate a VM or CT to...
I'm looking to move from several physical machines to one big server and several VMs. One of those physical machines is an OMV-based NAS with several ext4 disks full of data.
Can I mount the ext4 filesystems on Proxmox and then make them available to a VM (which will be...
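Mounting such a disk on the host is straightforward; a minimal sketch, assuming the data partition shows up as /dev/sdb1:
mkdir -p /mnt/omv-data
mount /dev/sdb1 /mnt/omv-data
# persistent variant for /etc/fstab, by UUID:
# UUID=<uuid-of-sdb1>  /mnt/omv-data  ext4  defaults  0  2
Exposing the data to a VM is a separate decision, e.g. passing the whole disk through (as in the qm set example further down) or sharing the mount over NFS/Samba.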
I noticed that the only way to get low iowait with Ceph is to set the VM disk cache to writeback.
But it's not enough: with MySQL (InnoDB) we still have high iowait under heavy load. We also have to disable barriers in the ext4 mount options. After that, disk performance is OK.
On a 5-node cluster...
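For reference, disabling barriers is a guest-side ext4 mount option; a sketch of the guest's /etc/fstab line, assuming the root filesystem is on /dev/vda1 (note that barrier=0 trades crash safety for speed and is only reasonable when the write path underneath is protected or replicated):
/dev/vda1  /  ext4  defaults,barrier=0  0  1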
We're planning on using a few Intel NUCs in lab conditions with Proxmox. They take a single M.2 NVME device. What would be the best filesystem to use in this case?
Also, if ZFS, would a RAID1 or RAID0 array be the correct option?
Hi, I have set up a thin pool for container creation. When I create CTs, the filesystem is formatted with ext4 automatically.
Is it possible to create containers with XFS instead of EXT4 using thin provisioning?
Hi, I have an OMV setup as a QEMU virtual machine in Proxmox.
I want to pass through a physical disk to the virtual machine.
I have the command to do that:
qm set 104 -virtio2 /dev/disk/by-id/ata-WDC_WD6003FFBX-68MU3N0_V8G3xxxx
I should also say that I created 2 partitions first on this...
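A quick way to confirm the passthrough took effect is to read the config back on the host and look for the new device inside the guest:
qm config 104 | grep virtio2   # host: the disk should be listed on the virtio2 slot
lsblk                          # guest: a new /dev/vdX block device should appear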
I have a Proxmox home lab, and now I'm trying to choose a filesystem for my virtual machines.
I will run Windows VMs and I need the best filesystem for that.
I set up a Windows VM with this config:
Since I upgraded my PVE 3.4 to 5.2, backup, clone, and restore are very slow and block access to my virtual machines.
Here you can find the whole setup:
PVE root, swap and data are on ext3, the same as used with PVE 3.4.
The Synology NAS is using ext4 (connected via NFS)...
I upgraded from 4.4 to 5.2 recently and noticed my /var/lib/vz ext4 filesystem didn't survive the upgrade. Is this a known issue with Debian 9 or something? On 4.4 this system was rebooted, and fsck'd, practically every weekend for either Proxmox or SAN upgrades, so I'm wondering if the error is...
I believe it was up to 3.x that a PVE setup from the ISO resulted in an ext3 filesystem over LVM. Nowadays we get LVM-thin storage, which may be nicer, but it makes it hard to copy VM disks by simply copying their disk files.
So the question is: is there any easy way to install 5.1 from the ISO the way it was long...
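For what it's worth, disks on LVM storage can still be copied out as files by converting the logical volume; a minimal sketch, assuming VM 100 is shut down and its disk sits on the default pve volume group:
qemu-img convert -O qcow2 /dev/pve/vm-100-disk-1 /root/vm-100-disk-1.qcow2   # raw LV -> qcow2 file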
I have two 500GB SSDs in RAID1. The first RAID1 partition is ext4 for the system (20GB in size) and the second partition is LVM-Thin for VMs (446GB).
Everything works great, except one thing: if I create a backup of a VM, Proxmox wants to save it at "/var/lib/vz", which is on the 20GB ext4 partition.
Is it possible to save...
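One common approach is to add a directory storage on a bigger disk and point backups at it; a minimal sketch, assuming a filesystem mounted at /mnt/backup (the storage ID backup-store is just an example):
pvesm add dir backup-store --path /mnt/backup --content backup
Then pick that storage in the backup dialog, or set dumpdir in /etc/vzdump.conf for the command-line default.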
We have Proxmox 4.4 installed on a VDS with ZFS and several VMs running on it.
After the regular nightly VM snapshot, the VM filesystem sometimes gets remounted read-only (all VMs run Debian with ext4, SCSI VirtIO and no caching mode); some recent failures left the system non-bootable...
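That read-only flip is typically ext4's errors=remount-ro policy reacting to an I/O error during the snapshot; a small sketch for checking and repairing from inside the guest (the device name /dev/sda1 is assumed):
tune2fs -l /dev/sda1 | grep -i behavior   # shows the configured ext4 error behaviour
fsck.ext4 -f /dev/sda1                    # repair offline, e.g. from a rescue/live boot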
After many hours spent trying to understand why quotas in OpenVZ in Proxmox 3.4 don't work anymore, I found that it happens when the OpenVZ container sits on a ZFS partition; the problem doesn't occur on an EXT4 partition.
I hope that you'll fix this issue because I believe that Proxmox 3.4 is the...
I currently have virtual servers (KVM) on very low-profile machines (Atom C2750, with 8G RAM and one non-enterprise SSD disk).
As I don't have redundancy (only 1 disk), I first used DRBD between 2 servers to get some RAID1-like security, but DRBD was often broken, so I bought a QNAP (ES1640DC-V2) in...
4 months ago I installed PVE v4.4 on an ext4-formatted SSD. I put about 10 different containers on it and was the only user. I didn't use Ceph or HA, and I had no other clusters--it was just one node with about 10 different LXCs. All containers were under 50% usage of their allotted storage...
For the past 4 months I had PVE 4.4 running exclusively on a 250GB M.2 SSD via a PCIe adapter (ext4 filesystem). The drive had less than 4 total TB written to it, I had 12 Ubuntu containers, and <50% overall space utilization as well as <50% consumed space within each container. Then...