I have two datastores for PBS: a 4x2 raid10 (ZFS) and a ZFS mirror. GC for the latter works just fine. For the former, every time it runs, in phase 1 (marking), I see:
found (and marked) 602 index files outside of expected directory scheme. (602 is the total number of chunks)...
So I have three nodes, pve1, pve2 and pve3. Because I started off with pve1, the mon is called 'mon.localhost' and the manager is also 'localhost'. I'm assuming this is all OK, but it looks weird to see mons localhost, pve2 and pve3, as well as managers with the same naming convention. I also...
Running a new install of 7.2, upgraded to 7.3. Guests live on a Ceph/RBD datastore. I've noticed a couple of times when doing a live migration that the guest seems to be migrated successfully, but then fails to be up and running on the target host. Log snippet:
2022-12-05 09:40:36 migration...
Migrating off vSphere. My backup strategy had been: daily, weekly and monthly to a ZFS raid10 using Veeam. On the 1st of each month, I'd hotplug a 1TB SSD and run a manual Veeam job, scrub the hotplug pool, export it, and shelve it safely. Is there any way to do that with PBS? I've got a datastore...
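A rotating offline copy like that can be sketched with a second datastore on the hot-plugged pool plus a pull sync. Everything here is an assumption for illustration: the pool name 'offsite', the datastore path, the main datastore 'tank', and a remote named 'local' that points back at the PBS host itself (sync pulls go through a remote definition, so a self-remote is one common workaround).

```shell
# Hot-plug the SSD and import its pool (hypothetical pool name 'offsite')
zpool import offsite

# First time only: register a datastore on the removable pool
proxmox-backup-manager datastore create offsite /offsite/pbs

# Pull snapshots from the main datastore 'tank' via the self-remote 'local'
proxmox-backup-manager pull local tank offsite

# Cleanly export the pool before pulling the disk
zpool export offsite
```

The exact `pull` arguments may differ between PBS versions; check `proxmox-backup-manager pull --help` on your install before relying on this.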
I currently have shared storage on an OmniOS ZFS appliance via NFS. This uses a 10GbE interface. I have a second host in the cluster which is normally off (it is a smaller, older host), used literally only to migrate guests to while doing maintenance on the primary host. Currently I need...
I have my one big server with NVMe for the guests. Works great. Sometimes Proxmox pushes an update which requires a reboot, so I have an (older) Sandy Bridge server which I added to the cluster. I migrate all the storage to NFS, then all the guests to the other node. Upgrade & reboot...
It isn't a big deal, but when I did the initial 4.4 install, I used two SSDs on an old LSI RAID controller. I'd like to migrate the install to a ZFS mirror. If there is no good way to do that without a reinstall, I'm fine with leaving it the way it is. Thanks!
SATA raid1 and NVMe raid1. Rebooted. The NVMe pool was not present. Imported it manually and all was well. Looked at /var/log/syslog and saw this:
May 1 17:23:20 pve zpool[1036]: internal error: Value too large for defined data type
May 1 17:23:20 pve systemd[1]: zfs-import-cache.service: main...
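When zfs-import-cache.service fails like this, one common fix is to rewrite the pool's entry in the zpool cache file and refresh the copies baked into the initramfs. A minimal sketch, assuming the affected pool is named 'nvme' (the real pool name will differ):

```shell
# Re-register the pool in the cache file zfs-import-cache.service reads
zpool set cachefile=/etc/zfs/zpool.cache nvme

# Rebuild the initramfs so early boot sees the updated cache file
update-initramfs -u -k all
```

If the cache file itself is corrupt, deleting /etc/zfs/zpool.cache and re-running the `zpool set` for each pool regenerates it from scratch.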
I have two hosts; one has a ZFS raid1 with two NVMe drives. They are both connected to a JBOD via NFS. To upgrade the first host, I did this:
move all disks from nvme pool to jbod (shared) pool
migrate all guests from host1 to host2
upgrade and reboot host1.
Unfortunately, if I try to...
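The steps above can be sketched with the standard `qm` commands. The VM ID (100), disk name (scsi0) and shared storage ID ('jbod-nfs') are hypothetical placeholders, not values from the post:

```shell
# Step 1: move the guest's disk from the local NVMe pool to shared storage,
# deleting the source copy once the move completes
qm move_disk 100 scsi0 jbod-nfs --delete 1

# Step 2: live-migrate the guest to the second host
qm migrate 100 host2 --online

# Step 3: upgrade and reboot host1, then migrate back the same way
```

On newer Proxmox releases the first command is spelled `qm disk move`; both forms do the same thing.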
Proxmox 4.4. I have a Windows Server 2012 R2 VM cloned from a newly created template. It is on ZFS storage. If I try to back it up, it proceeds to the step where it says 'INFO: creating archive ...', and then sits there. If I stop it, it seems to leave the VM locked, as I can't do anything...
So I created a storage of type 'ZFS'. I have confirmed that virtual disks created there (raw or qcow2) are being created as zvols. If I manually take a snapshot of such a guest, it creates a ZFS snapshot on that zvol. That makes sense. If I then back up this guest, and specify 'snapshot'...
So I got a couple of Samsung 1TB 960 PRO drives. I tried to use them with ESXi and Xenserver, but performance in both cases sucked. I had created a simple mirror using them. In both cases, I tried using a virtual storage appliance and exporting the ZFS datastore via iSCSI or NFS. I was lucky...
I'm trying to install Proxmox 3.4 on a host with a Supermicro motherboard with built-in VGA. It comes up to the splash screen with only a small part of the license screen showing. The 'Abort' button is visible towards the bottom of the screen, but 'I agree' is apparently off to the right, so...
Okay, forgive me if this is a silly question, but I've googled and looked at the online help, and came up empty. When I have used Proxmox before, I just used root. I need to deploy a server with Proxmox for me and two other people to use. I created the Unix accounts for them with passwords...
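Since the Unix accounts already exist, the usual approach is to register them in Proxmox's PAM realm and grant a role via an ACL; they then log in to the web UI with their existing system passwords. A sketch with a hypothetical user name 'alice' and the Administrator role (pick a narrower role like PVEVMAdmin if full admin is too much):

```shell
# Register the existing Unix account in the PAM realm
pveum user add alice@pam

# Grant a role on the whole tree ('/'); PAM handles the password check
pveum acl modify / --users alice@pam --roles Administrator
```

Older pveum versions use the spelling `pveum aclmod / -user alice@pam -role Administrator`; the effect is the same.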
My current setup: two hosts running VMware in 2-node HA. A third node running CentOS 7 as a NAS (exporting storage to the VMware hosts). I wanted storage redundancy, so there are virtual CentOS guests on each VMware host. The storage is on a SAS JBOD with 2 inputs - one to the physical...
Based on a couple of articles I read, I decided to dabble with this. It was certainly simpler than setting up DRBD, I'll say that much! Took about 5 minutes and I had a CentOS guest up and running. Write performance (two nodes with --copies=2) was about wire speed (110 MB/sec). A large...
So I am running two PE2.3 hosts in a cluster using DRBD as shared local storage. Works perfectly. My only complaint is that I can't snapshot any of the guests, since the storage does not support snapshots (LVs on the VG on top of the DRBD devices). What is frustrating is that since the storage is...
I have a running Ubuntu 12.04 guest on proxmox2. I am trying to migrate it to proxmox1. It has no local resources or anything. I fire up the migration (same results from CLI as GUI) and see this:
root@proxmox2:/var/log# qm migrate 100 proxmox1 --online
Apr 04 09:49:12 starting migration of VM...
I have a number of VMs with raw disk files on an NFS datastore. I created a DRBD mirror to the other host in the Proxmox cluster and put LVM on top of that with 'shared' set to true. This way, as I understand it, each VM disk is an LV. Anyway, all but two restored just fine (e.g. I did backup from NFS raw...