I preferred a hardware solution: I got two mSATA-to-SATA adapters and simply cloned the mSATA disk under Windows 10 with Macrium Reflect.
It resized the LVM partition, and afterwards I adjusted it inside Proxmox.
Thank you. That's the way I did it. The only thing I'd still like to figure out is
what the last step is for. The thing is, I'm far from understanding these nuances and I work as an all-inclusive specialist.
I mean, I have a volume group pve,
and df -h just shows
Filesystem...
Is this right and enough, guys?
I don't understand the advice
xfs_growfs /
or
resize2fs /dev/mapper/pve-data
that I saw on forums, which concerns filesystem adjustment...
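For what it's worth, that last step exists because resizing the partition and the logical volume only enlarges the container; the filesystem inside still reports the old size to df -h until it is grown as well. A minimal sketch, assuming a stock Proxmox LVM layout with the root LV at /dev/mapper/pve-root (check your actual device names first):

```shell
# See which filesystem is on the root LV before choosing a command
lsblk -f /dev/mapper/pve-root

# ext4: grow the filesystem to fill the (already enlarged) LV
resize2fs /dev/mapper/pve-root

# XFS: grown via the mount point rather than the device
xfs_growfs /
```

Only one of the last two commands applies, depending on the filesystem type that lsblk reports.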
Hi!
I just cloned the Proxmox SSD (128 GB) to a 256 GB SSD. The LVM partition was resized during cloning.
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 528383 524288 256M EFI System
/dev/sda3 528384 500117503 499589120 238.2G...
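If df -h still shows the old sizes after such a clone, the gap is usually between the enlarged partition and LVM. A hedged sketch of the usual follow-up, assuming /dev/sda3 is the LVM physical volume (as the fdisk output above suggests); the LV path is an assumption, so check it with lvs first:

```shell
# Let LVM notice that the underlying partition grew
pvresize /dev/sda3

# Check the free space now available in the volume group
vgs

# Extend a logical volume into the free space (LV path assumed)
lvextend -l +100%FREE /dev/pve/data
```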
I have a 1.73 TB mirrored HDD pool plus a 512 GB mirrored SSD pool on my system.
So what does the rule say:
4 + 1*1.73 + 4 + 1*0.512 to avoid a potential bottleneck, right?
Or should I add the 4 GiB only once for all disk systems: 4 + 1*1.73 + 1*0.512?
Hi, guys.
I have a Proxmox 6.4-13 system with a bootable root on a single SSD and an additional LVM on a single HDD. Nothing special...
Now I need to replace the SSD with a larger one.
Would you share your ideas on how to do this without losing any configuration?
I'd appreciate any help.
It seems to be a working command line! Thank you for your patience!
Could you advise how I can restore the owner of my mount point folder? It becomes root after restoring, but previously it was www-data...
Should I run something like the following after restoring:
pct exec 101 -- bash -c 'chown www-data.www-data...
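One hedged way to spell that out in full, assuming the mount point path /var/www/files.local used elsewhere in this thread; the -R flag and the colon form of chown are my additions:

```shell
# Recursively reset ownership inside container 101 after the restore
# (path and user taken from the thread; adjust to your setup)
pct exec 101 -- chown -R www-data:www-data /var/www/files.local
```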
After
/usr/sbin/pct restore 101 /opt/downloads/backup_vm/vzdump-lxc-101-*.tar.zst --storage hgst-lvm --mp0 volume=hgst-lvm:vm-101-disk-1,mp=/var/www/files.local,replicate=0,size=1G
I got
mount points configured, but 'rootfs' not set - aborting
etc etc etc
It doesn't work at all, apparently because rootfs also needs to be set...
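The error message suggests that once mount points are given on the command line, the root disk has to be given explicitly too. A sketch of what that might look like, assuming the root disk should also land on hgst-lvm; the size 8 (GiB) is a placeholder you would replace with the container's real rootfs size:

```shell
# Restore with both rootfs and mp0 set explicitly
# (hgst-lvm and the mp0 settings come from the thread; the rootfs size is a placeholder)
pct restore 101 /opt/downloads/backup_vm/vzdump-lxc-101-*.tar.zst \
  --storage hgst-lvm \
  --rootfs hgst-lvm:8 \
  --mp0 volume=hgst-lvm:1,mp=/var/www/files.local,replicate=0
```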
I really appreciate your support. After reading your advice, I tried to read the documentation and solve a new task: restoring the mount point with a new size of 1 GiB.
/usr/sbin/pct restore 101 /opt/downloads/backup_vm/vzdump-lxc-101-*.tar.zst --storage hgst-lvm --mp0 volume=STORAGE_ID:1
But pct...
Yes, a pct restore approach would be suitable. But can you clarify what the command line should look like?
/usr/sbin/pct restore 101 /opt/downloads/backup_new/vzdump-lxc-101-*.tar.zst -mp0
or
/usr/sbin/pct restore 101 /opt/downloads/backup_new/vzdump-lxc-101-*.tar.zst --mp0
or what?
It's unchecked by default already.
And as I said earlier, the backup log contains a strange message about excluding a mount point, marked as disabled... What exactly is disabled here, and why?
"excluding volume mount point mp0 ('/var/www/files.local') from backup (disabled)"
The files are ignored thanks to the option
vzdump 101 --exclude-path '/var/www/files**' --compress zstd --dumpdir /backups/backup_vm --mode snapshot
But my problem is that the mount point and its volume get restored. I don't need that volume to be restored.
Hi, guys!
I'm really stuck with the manual https://pve.proxmox.com/pve-docs/vzdump.1.html
It says: "By default additional mount points besides the Root Disk mount point are not included in backups."
But the vzdump log says "excluding volume mount point mp0 ('/var/www/files.local') from backup...
Hi, guys.
I need your advice because I'm not experienced with Proxmox.
Here is my df -h output:
rpool 1.6T 128K 1.6T 1% /rpool
rpool/ROOT 1.6T 128K 1.6T 1% /rpool/ROOT
rpool/data 1.6T 128K 1.6T 1% /rpool/data...
The thing is that iptables -t nat -A POSTROUTING -s 10.1.1.0/24 -o vmbr0 -j MASQUERADE was in /etc/rc.local, and it failed to run normally after a reboot.
I don't know whether there is a better way for LXC containers with internal IPs to reach the internet via the PVE host's routing, but that's the approach we use.
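As an alternative to rc.local, the masquerading example in the Proxmox network documentation attaches such rules to the bridge in /etc/network/interfaces, so they are reapplied whenever the interface comes up. A sketch adapted to the vmbr0 bridge and 10.1.1.0/24 subnet from this post; the address and bridge-ports lines are placeholders for your existing configuration:

```
auto vmbr0
iface vmbr0 inet static
        address 10.1.1.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.1.1.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.1.1.0/24' -o vmbr0 -j MASQUERADE
```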