Hi, guys!
I'd appreciate your help figuring out my case.
I have two machines with completely identical hardware, each with a clean default installation and partitioning: one Proxmox VE and one Proxmox Backup Server. Here they are:
Proxmox VE
Proxmox Backup Server
As you can note, there is a difference in the file system... And...
It's also annoying that we got a warning about a bind mount point. What's wrong with it? Why does PVE think I'll delete the bind mount point while restoring the LXC from backup?
It sounds mysterious, but the only proof of the incident that remains is the system email:
ZFS has detected that a device was removed.
impact: Fault tolerance of the pool may be compromised.
eid: 6
class: statechange
state: REMOVED
host: bs
time: 2024-02-17 07:51:44+0500
vpath...
Guys, it's OK now and PVE works, thanks to all the good people who helped me.
I need one more piece of advice.
Look, the degraded SSD is only about a week out of the shop. Not a good situation...
To return it to the shop I must prove that the problem was in that SSD...
Is it possible to find the proof somewhere in the syslogs...
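So far I'm thinking of digging roughly like this (just a sketch; /dev/nvme0 is only my assumption of what the degraded SSD enumerated as):
journalctl -k --since "2024-02-17 07:00" --until "2024-02-17 09:00" | grep -i nvme    # kernel messages around the ZFS event
smartctl -a /dev/nvme0      # SMART/health data of the suspect drive (smartmontools)
nvme smart-log /dev/nvme0   # same data via nvme-cli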
Would you clarify which /dev node exactly?
ls -la /dev/nvme*
crw------- 1 root root 241, 0 Feb 17 15:44 /dev/nvme0
brw-rw---- 1 root disk 259, 0 Feb 18 10:34 /dev/nvme0n1
brw-rw---- 1 root disk 259, 1 Feb 18 10:34 /dev/nvme0n1p1
brw-rw---- 1 root disk 259, 2 Feb 18 10:34 /dev/nvme0n1p2
brw-rw---- 1...
Guys, and one more thing I'm stuck on 8)
proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
0FF5-B5BA is configured with: uefi (versions: 6.5.11-7-pve, 6.5.11-8-pve)
How can I add the new SSD here?
The documentation is unclear (for...
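From what I gather from the docs, the pattern would be roughly this (a sketch only; /dev/nvme0n1p2 is just my assumption of which partition is the new disk's ESP):
proxmox-boot-tool format /dev/nvme0n1p2    # create a fresh ESP on the new disk
proxmox-boot-tool init /dev/nvme0n1p2      # register it and copy kernels/bootloader onto it
proxmox-boot-tool status                   # the new ESP's UUID should then appear alongside 0FF5-B5BA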
Look, I detached the new disk from the zpool, deleted its partitions, and cloned the partition table with
sgdisk /dev/nvme1n1 -R /dev/nvme0n1
sgdisk -G /dev/nvme0n1
But now I'm stuck on attaching it...
zpool attach -f rpool...
As I understand it, zpool remove|add would destroy the pool, while zpool detach|attach is exactly for working with a mirror's disks.
Would you clarify that point?
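If I read the man page right, the attach for a mirror should look roughly like this (a sketch; the by-id names are placeholders for the existing member and the new disk's ZFS partition):
zpool attach rpool /dev/disk/by-id/<existing-member>-part3 /dev/disk/by-id/<new-disk>-part3
zpool status -v    # the new device should show up resilvering into the mirror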
root@bs:~# zpool status -v
pool: rpool
state: ONLINE
scan: resilvered 104G in 00:02:26 with 0 errors on Sat Feb 17 09:54:12 2024...
Hi, guys!
I just discovered that one of the two SSDs of the ZFS pool on mirror RAID1 was degraded.
After
- replacing SSD
- running zpool replace -f rpool 7349143022040719209 /dev/disk/by-id/nvme-Q2000MN_512G_MN2312512G00269
- I got a success message: ZFS has finished a resilver
But as you can see, I got a...
As I said, it's an LXC container containing a database. A nightly backup alone is not really sufficient (though useful, of course).
So, to be honest, I don't know any other way (besides PBS) to save all changes of the database hourly...
Do you?
The LXC container that needs to be backed up as often as possible produces a compressed PVE vzdump file of about 11G.
The same container is reported by PBS in the datastore content as 58G in size.
I meant that I don't need to back up all of production every hour, just the one sensitive container with the changing database.
I'd appreciate any advice!
Anyway, would you clarify for a novice: is an hourly (or half-hourly) backup job (PVE GUI, Datacenter > Backup) for a container a robust approach to keeping a close-to-production backup?
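For context, the command-line equivalent of such a job would be roughly this (a sketch; 'pbs' is an assumed name for the PBS storage entry and 101 is the container in question):
# hourly vzdump of CT 101 to a PBS-backed storage, e.g. as an /etc/cron.d entry
0 * * * * root /usr/bin/vzdump 101 --storage pbs --mode snapshot --quiet 1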
Hi, guys.
Is it a robust approach to run a backup of an LXC container every hour?
I have 2 containers, but backing up number 101 (~10G) to PBS is very expensive! At the same time, backing up number 102 (~25G) looks fine.
Please help me figure this out! Where should I dig?
Hi, guys!
I'm really stuck with the command
/usr/bin/vzdump 104 --dumpdir /root/backups --mode snapshot --compress zstd --exclude-path /mnt/*
which leads to the error
400 Parameter verification failed.
vmid: invalid format - value does not look like a valid VM ID
vzdump {<vmid>} [OPTIONS]
As i...
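My only guess so far: the shell expands the unquoted /mnt/* before vzdump sees it, so an extra expanded path gets parsed as a second VMID. Quoting the pattern would look like this (just a sketch of the same command):
/usr/bin/vzdump 104 --dumpdir /root/backups --mode snapshot --compress zstd --exclude-path '/mnt/*'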
I do appreciate your answer. That was my second question!
My simple first question concerns containers (CT). In the PVE GUI it's easy to back one up to PBS.
How do I do it with proxmox-backup-client backup on the command line?
The command line the PVE GUI uses is too complex for me to understand...
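What I've pieced together so far is only a file-level sketch (the repository string, the datastore name 'backup' and the path are all placeholders I made up):
proxmox-backup-client backup ct101.pxar:/srv/ct101-data --repository root@pam@192.168.1.10:backup
Though if I read the task log right, the GUI actually drives vzdump with --storage pointing at the PBS storage, which is probably the more natural CT-level route.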