I was working on an LXC container and, after a server restart later on, I realized that I had set the password on the Proxmox root instead of on the LXC.
So..... now I need to reset the password for Proxmox. The problem is that it's on ZFS.
Would anyone know how I could mount the ZFS and reset the...
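The usual approach is to boot the host from a live/installer ISO with ZFS support, import the root pool under an alternate root, and change the password in a chroot. A minimal sketch, assuming the default Proxmox pool and dataset names (rpool, rpool/ROOT/pve-1):

  zpool import -f -R /mnt rpool     # import the root pool under an alternate root
  zfs mount rpool/ROOT/pve-1        # only needed if the root dataset was not mounted by the import
  chroot /mnt passwd root           # set the new root password inside the installation
  zpool export rpool                # clean export before rebooting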
I have a server running the latest PVE with ZFS RAIDZ storage on SSDs. From time to time, some KVM VMs get a read-only filesystem and need to be rebooted and the filesystem repaired. The problem doesn't seem specific to a certain OS, because it has happened on CentOS, Ubuntu, Debian... The...
I'm testing a large number of VPSes replicated using the ZFS -R (recursion) option.
I am noticing that with over 90/100 VPSes the incremental replication sometimes fails and I have to reset all snapshots on the receiving server..
Has anyone had the same experience?
What is the best way to synchronize two ZFS...
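For keeping two ZFS datasets in sync by hand, the common pattern is recursive snapshots plus incremental send/receive, with -F on the receiving side so it rolls back to the last common snapshot instead of failing. A rough sketch, with hypothetical dataset names (tank/vps) and a host called backup:

  zfs snapshot -r tank/vps@sync2
  zfs send -R -I tank/vps@sync1 tank/vps@sync2 | ssh backup zfs recv -F tank/vps

If the snapshot chains diverge (something wrote to or snapshotted the receive side), the incremental send fails, which matches having to reset all snapshots on the receiving server; keeping the target dataset read-only (zfs set readonly=on) helps avoid that.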
Greetings to all,
I am currently encountering a serious issue with ZFS on one of my Proxmox systems. When trying to clone a VM from a template, it is extremely slow, although it does work. The bigger issue is that I am unable to extend the size of the disk: when I do so I get the following...
ALL backups are failing with: unable to create temporary directory '/mnt/pve/backup/dump/vzdump-qemu-146-2020_02_12-02_00_02.tmp' at /usr/share/perl5/PVE/VZDump.pm line 703.
We export ZFS via the NFS kernel server, and it has been working fine all along. Now the NFS shares are read-only.
The latest update was...
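A few things worth checking on the exporting host, since both the NFS export options and the dataset itself can end up read-only. A diagnostic sketch, with tank/backup as a hypothetical dataset name:

  exportfs -v                            # do the active export options now say "ro"?
  zfs get readonly,sharenfs tank/backup  # is the dataset itself read-only, or shared with ro options?
  exportfs -ra                           # re-apply /etc/exports after correcting it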
The PVE host uses LVM.
I set up a new ZFS pool, and the following error occurred while migrating a virtual machine:
mount: /var/lib/lxc/104/.copy-volume-2: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.
My server is using LVM. One day I went to add ZFS; the build process went well, but when I migrated the VM, the console showed the following error:
create full clone of drive sata1 (sdb:100/vm-100-disk-0.qcow2)
drive mirror is starting for drive-sata1 drive-sata1: Cancelling block job drive-sata1...
I found this odd.
I had an old server set up with 2 x 1 TB SSDs: one NVMe and one SATA SSD set up as a ZFS RAID 1, i.e. a mirror.
The storage pool I set up with the ZFS mirror is a little smaller than the one on the new server, but it contains way more VM disks while taking up less space. It's weird.
Can somebody help...
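The discrepancy is usually explained by thin provisioning and compression rather than anything being wrong: zvols only consume what has actually been written, and compression ratios can differ a lot between pools. A quick way to compare, assuming the default rpool/data dataset and a hypothetical disk name:

  zfs list -o name,used,referenced,compressratio -r rpool/data
  zfs get volsize,used,compression rpool/data/vm-100-disk-0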
I have noticed an issue where all of my LXC containers fail to back up, while all of the VMs back up successfully. This is similar to This Thread, which has been marked "Solved", though that sounds like a workaround.
This is the error I am seeing:
INFO: Starting Backup of VM 110 (lxc)
We run a Proxmox cluster with 4 nodes (1x storage node on Proxmox 6.0-15 and 3x nodes on Proxmox 6.0.12). The storage node uses an mdadm RAID10 across 8x 2TB SM883 SSDs to provide an iSCSI device via LIO for ZFS over iSCSI.
We are currently virtualizing existing...
I have 2 Proxmox 6.0 nodes with replication set up between them for a single VM. The replication works, but the freeze and thaw of the guest filesystem causes problems for the application running in the VM; it can't handle the brief pause. Is there a way to do the replication withOUT the...
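As far as I know, PVE only issues fs-freeze/fs-thaw through the QEMU guest agent, so one blunt workaround is to disable the agent for that VM, at the cost of less consistent snapshots and losing the other agent features. With a hypothetical VMID:

  qm set 100 --agent enabled=0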
I'm rebuilding a cluster.
I have a single-node cluster with LVM storage (node1).
I tried to add a node (node2) to the cluster. Node2 has ZFS storage (RAID 1) that worked fine before I tried to join it to the cluster.
The join process seems fine, but after the join the ZFS storage in the GUI is displayed as...
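Joining a cluster replaces the new node's /etc/pve (including storage.cfg) with the cluster-wide copy, so a storage definition that only existed on node2 disappears and has to be re-added, restricted to the node that actually has the pool. A sketch with hypothetical storage and pool names:

  pvesm add zfspool zfs-node2 --pool tank --nodes node2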
I need some expert knowledge!
I connected an external hard drive to my server, wanted to use it as a backup disk, and created a ZFS pool on it.
But then I couldn't put any backups on it, because the content type was only listed as "Disk image, Container"...
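The zfspool storage type only holds disk images and container volumes; backups need a directory-type storage, so the usual fix is to create a dataset on the pool and add it as a directory storage with the backup content type. A sketch with hypothetical pool and storage names:

  zfs create usbpool/dump
  pvesm add dir usb-backup --path /usbpool/dump --content backup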
Currently I have an old server at home that I wish to upgrade.
This server is an old i5-2500K with 16GB of RAM, a 120GB SSD for booting, and an Areca RAID controller with a 2x500GB mirror and a 6x2TB RAID 6 volume. The current OS is Windows Server 2016.
I use it for the following purposes:
I had a mirrored ZFS pool called "wd" with two disks, sda and sdb, on a Proxmox host. To expand the pool, I detached sdb, added a new disk, sdc, resilvered it, and then detached the remaining old disk, sda. Unfortunately I messed up, and now I would like to have my old two disks sda and sdb running again...
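If the goal is just to turn the now single-disk pool back into a mirror using one of the old disks, attaching it to the remaining device and letting it resilver is usually enough. A sketch with hypothetical device names (check zpool status first, and note that this overwrites whatever is still on the old disk):

  zpool status wd
  zpool attach wd /dev/sdc /dev/sdb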
So I just found out that I'm pretty much out of space on my file server (as shown in the attached snapshots).
Is there any way to extend that file server's storage?
PS: ZFS-local seems to have a decent amount of GBs left; is it possible to use that?
Thanks all !
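If the file server is a VM, the usual route is to grow its virtual disk from the host and then extend the partition/filesystem inside the guest; containers use pct resize instead. A sketch with a hypothetical VMID and disk name:

  qm resize 105 scsi1 +100G
  # then grow the partition and filesystem inside the guest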
As I made a config mistake, my boot now ends in a kernel panic, so I'd like to repair the installation (zfs-utils is not running correctly, so no root filesystem is available for booting). I thought about booting a buster ISO, chrooting, and trying to fix it, but I can't find anything about how to boot...
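The rescue route I would expect to work is importing the root pool from a live environment with ZFS support, bind-mounting the pseudo-filesystems, and fixing the packages/initramfs from a chroot. A rough sketch, assuming the default rpool name:

  zpool import -f -R /mnt rpool
  for d in dev proc sys; do mount --rbind /$d /mnt/$d; done
  chroot /mnt /bin/bash
  apt install --reinstall zfs-initramfs   # or whatever package/config actually broke
  update-initramfs -u -k all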
We have a setup of ZFS over iSCSI using LIO on Ubuntu 18, and we have an issue with high IO load once we move disks that are bigger than 100GB.
Once the move starts, the load is low until about half of the transfer is done, and then it gets crazy high.
Our setup is very high end ...
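One knob that sometimes helps here is capping the bandwidth of disk moves cluster-wide so the target storage isn't saturated; the value is in KiB/s. A sketch for /etc/pve/datacenter.cfg with a hypothetical ~100 MiB/s limit:

  bwlimit: move=102400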