Hi. Thanks. I did the upgrade through Proxmox's Upgrade link as opposed to apt update.
root@proxmox:/var/log/apt# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.126-1-pve)
pve-manager: 7.4-17 (running version: 7.4-17/513c62be)
pve-kernel-5.15: 7.4-9
pve-kernel-5.4: 6.4-20
pve-kernel-5.3...
I performed a standard update on my 7.4 proxmox server to get the latest deb11 patches.
I did not see any errors during the upgrade, but after rebooting the box for the new kernel, I got this error:
Booting `Proxmox VE GNU/Linux`
Loading Linux 5.15.131-2-pve...
error: file...
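The error line is cut off above, but when a freshly installed kernel fails at the GRUB stage, a common first check (this is an assumption, not a diagnosis of the truncated message) is whether /boot filled up or the boot entries simply weren't refreshed after the upgrade:

```shell
# Hedged checklist, assuming the truncated error relates to a stale or
# missing kernel image on the boot partition:
df -h /boot                # is /boot (or the ESP) out of space?
proxmox-boot-tool status   # which ESPs does PVE manage, if any?
proxmox-boot-tool refresh  # re-copy kernels/initrds and rewrite configs
```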
Thanks, Lee. Yeah, I saw that. I'll have to try this in a lab. Backing up and restoring everything is just untenable. The main issue I gleaned from the manual is that there might be "ID conflicts". My takeaway is that, if I have 1 node using IDs 100, 101, 102 and a second node having completely...
Hello. I have 5 separate nodes running now and I'm planning to create 1 cluster for all of them.
However, they each have VM/CT IDs starting at "100".
Will this present a problem, as far as ID conflicts, or will pvecm resolve these automagically?
If I have to change IDs on the 4 nodes I wish to...
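For what it's worth, pvecm does not renumber guests when a node joins; conflicting VMIDs have to be resolved beforehand (and recent PVE versions expect a joining node to carry no guests at all). A rough sketch of taking inventory and manually renumbering a container — the 100 → 200 IDs and the rpool/data storage layout are illustrative, not from any of the nodes here:

```shell
# Read-only inventory of IDs on each node:
qm list    # VMs
pct list   # containers

# Manual renumber of CT 100 -> 200 (illustrative; adapt to your storage):
pct shutdown 100
zfs rename rpool/data/subvol-100-disk-0 rpool/data/subvol-200-disk-0
sed -i 's/subvol-100-disk-0/subvol-200-disk-0/' /etc/pve/lxc/100.conf
mv /etc/pve/lxc/100.conf /etc/pve/lxc/200.conf
```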
Thanks again. I've added the new drive using its by-id value, and it's showing as part of the pool; resilvering has begun.
Once it's done, then I shall try again to remove the faulted drive.
root@proxmox:~# zpool attach rpool sda2 /dev/disk/by-id/ata-ST2000DM008-2FR102_ZFL63932-part2...
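Once the resilver completes, the remaining step should look roughly like this (FAULTED-DEVICE is a placeholder — read the actual name from `zpool status` first):

```shell
zpool status rpool                 # wait for "resilvered ... with 0 errors"
zpool detach rpool FAULTED-DEVICE  # then drop the old member of the mirror
```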
Sorry to ask more, but I'm really nervous about the possibility of blowing up the raid set...
The syntax of attach is: zpool attach [-fsw] [-o property=value] pool device new_device
Given my pool 'rpool' has 'sda2' as what I'm assuming is the "device", would the proper command be:
zpool attach...
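Mapping that syntax onto this pool: "device" is the existing member (sda2) and "new_device" is the disk being added, so the command would plausibly take this shape (the by-id name below is shortened for illustration, not the real disk):

```shell
zpool attach rpool sda2 /dev/disk/by-id/ata-NEWDISK-part2
```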
I've used sgdisk to set up the new disk. Now the partitions match the disk in the pool.
I'm thinking of adding it first to the mirror and let it resilver before figuring out how to remove the dead/removed disk.
do I use 'zpool replace', 'zpool attach' or 'zpool add'? Do I use 'sdb' or the...
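A rough rule of thumb, hedged against version differences (confirm against zpool(8) on the host):

```shell
# zpool attach  -- add another leg to an existing (mirror) vdev
# zpool replace -- swap a failed/removed device for a new one in place
# zpool add     -- add a NEW top-level vdev; not wanted here, and hard
#                  to undo on older ZFS releases
# Prefer stable by-id names over sdX, which can change across reboots:
ls -l /dev/disk/by-id/ | grep -w sdb
```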
Hey, Dunuin. Incorrect terminology on my part, then. I did install this server using a Proxmox installer image. I may have clicked Initialize in the GUI for this new disk, but I don't recall. It doesn't have any data on it at all, so no problem reformatting and repartitioning it. Is there a...
Hello, all.
One of the drives in my zpool has failed, and so I removed it and ordered a replacement drive.
Now that it's here, I am having problems replacing it.
OS: Debian 11
Pve: 7.3-4
I've installed the replacement drive, and it shows up in both lsblk and the GUI.
Zpool status...
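The zpool status output is cut off above, but on a mirrored pool the fix usually has this general shape. Every device name below is a placeholder — confirm against `zpool status` and `lsblk` before running anything destructive:

```shell
sgdisk --replicate=/dev/NEW /dev/SURVIVING  # copy the partition table
sgdisk --randomize-guids /dev/NEW           # fresh GUIDs for the copy
zpool replace rpool OLD-DEVICE /dev/disk/by-id/NEW-DISK-partN
zpool status rpool                          # watch the resilver progress
```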
Hey, all. I've purchased a new 4TB volume expressly for holding backups. I noticed that in the UI you set the retention policy on the volume itself, so I set this value to 3. However, I have 1 rather large container that almost completely fills the backup volume with 3 backups. So, I'd like...
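One way around a single storage-wide retention value (hedged — the ID and storage name below are made up) is to give the large container its own backup job with tighter retention via vzdump's --prune-backups option:

```shell
# CT 100 and storage "backup4tb" are placeholders:
vzdump 100 --storage backup4tb --prune-backups keep-last=1
```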
Hello,
I've not been able to configure Proxmox 5.4-16 to allow LXC containers to serve directories via NFS. I've heard all kinds of different answers on whether it's possible or not. Can someone from Proxmox answer this definitively for me, please? I would rather not run my NFS server as a QEMU VM if...
Everything is working again. Not sure why a dev directory was created in 2 of the container mount points, but that was probably the root cause. Issue solved.
Trying a "zfs mount -a" displayed an error that subvol-105-disk-0 and subvol-114-disk-0 were not empty, and therefore couldn't be mounted. Both of those subdirs had an empty "dev" directory. Once I removed them, "zfs mount -a" worked, and I could start 105 and 114. Also, the mount on 104 is now...
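For anyone hitting the same thing, the fix described above boils down to the following (the dataset paths assume default mountpoints under the pool — verify with `zfs get mountpoint`):

```shell
rmdir /POOL/subvol-105-disk-0/dev /POOL/subvol-114-disk-0/dev
zfs mount -a
pct start 105 && pct start 114
```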
Evidently, the ZFS pool did not auto-mount. I'm not sure if this is due to the upgrade from jessie -> buster, or the PVE upgrade. Issuing a 'zpool mount MEDIA' attached the drive to /MEDIA, and *one* container was able to start (104). However, none of the files were there.
Other containers that...
It looks like I may have mounted the 2TB disk on /MEDIA on the Proxmox server itself, since there is an empty /MEDIA dir. The config also indicates that the /MEDIA dir is then shared into 104. Trying to mount the device returns this error:
root@proxmox:/media# mount /dev/sda1 /MEDIA
mount...
Hello.
After upgrading my proxmox from jessie -> buster, several of my containers won't start. I had added a second disk through the UI, and stored several containers on it. The volume shows up in the UI as a ZFS volume, and the containers show up under Content, but attempts to start always...
Well, no takers yet. :/
What I've done is to create a VM for the host that needs to export the NFS share, and left the consumers as LXC containers. That works fine.
I would like some confirmation of the circumstances under which LXC containers cannot export NFS. I used to be able to in earlier...
Hello, all. I have to share an NFS dir from one host (server1) to three other hosts (server2-4).
I've read many different threads here on this, and I'd like some clarification.
1) Firstly, Can an LXC container share an NFS mount point to other lxc containers on the same node?
2) If not, what...
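For context while waiting on answers: the kernel NFS server inside an unprivileged container is generally a non-starter; the setups reported to work involve a *privileged* container with AppArmor relaxed, e.g. a container config fragment along these lines (the CT ID is a placeholder, and 'unconfined' carries an obvious security cost):

```
# /etc/pve/lxc/101.conf -- illustrative fragment only
lxc.apparmor.profile: unconfined
```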