This is not a good production setup. I have not seen any virtualization platform that does not recommend RAID10 or SSDs for a decent workload; these days even desktop users who want performance run SSDs. It sounds like you are using 2TB *consumer* spinning drives on a low budget...
lvcreate -V 1000G -T pve/data -n backups
But... using -V means he didn't account for the available size properly and is over-provisioning. That is fine, but if you actually put that amount of data on there you will have a time bomb unless you add a disk to the volume group first. It is suggested to count your...
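A quick way to see how much of the thin pool is actually committed before over-provisioning; the pool and VG names below are the stock pve/data ones, adjust to your layout:
lvs -o lv_name,lv_size,data_percent,metadata_percent pve   # thin pool and thin volume usage
vgs pve                                                    # free space left in the volume group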
I don't back up my boot devices, I just rsync out my /etc/pve, and any /etc/lvm and /etc/zfs, but even those can be recovered easily.
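A minimal sketch of that kind of config-only copy; the destination host and path are placeholders:
rsync -av /etc/pve /etc/lvm /etc/zfs backuphost:/srv/pve-config-backup/   # copies the cluster, LVM and ZFS config dirs off-box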
The GUI is simple enough for backups; you can use a local drive, remote NFS, etc.:
https://pve.proxmox.com/wiki/Backup_and_Restore
If you have a remote (or local)...
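The same backups can also be scripted with vzdump from the CLI; a minimal sketch, assuming VMID 100 and a storage entry named backup-nfs that you have already defined:
vzdump 100 --storage backup-nfs --mode snapshot   # one-off backup of VM 100 to the NFS storage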
This is a great guide with lots of useful tricks (some other tricks not so much)... but as noted, I would never want to do this in production - not even at home. What if something went haywire, e.g. an upgrade to the newest rev of PVE, and you need to get back up quickly? There are a lot of commands...
I was trying to re-add an unused disk to a VM and it failed with the warning below. I checked the conf file and there is no duplicate size entry.
5.1-43, Latest updates as of today.
Error: Parameter verification failed. (400)
I am up in the air on which platform to go with - Dell or Quanta. I have done some limited ZFS installs to date so I know my way around it for the most part, but never anything big. I need at least 16TB of fast space, with room to grow later.
So typically my go-to is a Dell R730xd 26x 2.5...
I came across some Quanta servers too cheap to turn down and want to build a test lab to try some ZFS clustering on low-end disks... looking for some commentary: what will give the best performance with the least risk, the least complexity, and the most functionality?
I have been using PVE since v1.4 and am...
That is correct - the best way to do ZFS is with your RAID controller or HBA in "IT mode", presenting standalone disks to the OS. I did not have time to do that though, maybe on another round.
I don't know if ESX + Veeam offers the "by the minute" snapshotting ability of pve-zsync - depending on your scenario, this can also be handy.
In general I think anything VMware can do, Proxmox can do too, but with a bit more flexibility, while VMware needs 3rd-party apps like Veeam to add...
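For reference, setting up that kind of frequent replication with pve-zsync looks roughly like this; the destination host, pool and job name are placeholders:
pve-zsync create --source 100 --dest 192.168.1.50:tank/backups --name hourly --maxsnap 24 --verbose   # replicates VM 100's ZFS disks and keeps 24 snapshots
pve-zsync then drops a cron entry (see /etc/cron.d/pve-zsync) so the sync keeps running on a schedule.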
Not sure what problem you are referring to. Run "zfs list" to check the pools/datasets and their mount points, and run "zpool status -v" to see the member disks in each pool. The default PVE ZFS install creates a single pool with a couple of storage entries on it, and a portion of it can be directly accessed as file-system storage, the...
Never mind... I wouldn't waste all that time, ZFS can make any sort of RAID you need, no need to re-install.
https://www.zfsbuild.com/2010/06/03/howto-create-zfs-striped-vdev-pool/
Assuming sda is your main PVE OS drive and sdb-sdd are the 3 other drives you have:
zpool create backuptank /dev/sdb...
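The full command got cut off above; a sketch of the two common layouts, assuming sdb-sdd are the three spare drives (by-id device paths are preferable in practice):
zpool create backuptank /dev/sdb /dev/sdc /dev/sdd          # striped, maximum space, no redundancy
zpool create backuptank raidz /dev/sdb /dev/sdc /dev/sdd    # raidz1, survives one disk failure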
Spare hardware lying around here, so I decided to benchmark some different local (not SAN) storage formats and attempt to show the pros/cons of each; help me out if I'm missing any important points.
Test bed:
Dell R730xd, H730P RAID card (array specs at the bottom)
2x Xeon E5-2683 / 64GB RAM
1x...
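The post doesn't state the exact benchmark tool or parameters, but a typical fio run for this kind of local-storage comparison would look something like this (file path and sizes are placeholders):
fio --name=randrw --filename=/mnt/testvol/fio.dat --size=8G --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting   # mixed 4k random read/write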
Personally I have a couple of machines with the PVE OS and VM data on the same medium... but I always try to avoid that scenario. For instance, what if today I am using ZFS for the pve-zsync options, but tomorrow decide ZFS is too slow and want to move to raw devices on LVM? If my OS is separate, I have the...
In that spirit, many servers (e.g. Dell) come with dual SD slots to run ESXi... so certainly we could use the same principle with 2 of the mentioned USB SSDs in a ZFS or mdadm RAID... or make an occasional manual clone of the USB and keep a cold spare on the shelf - even better than RAID. You...
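A minimal sketch of that occasional manual clone, assuming /dev/sdX is the live boot device and /dev/sdY is the cold spare (double-check the device names before running, dd will happily overwrite the wrong disk):
dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync   # block-level clone of the boot device onto the spare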
Actually - he COULD use a USB, if it was one of these:
https://www.sandisk.com/home/usb-flash/extremepro-usb
Another option to use USB would be a USB-to-SATA adapter, dock, or drive bay with a 2.5" SSD.
As said by fortechitsolutions, you need to combine all those extra disks into a single array on your VM host. Once in an array, probably your easiest route is to format the array with some file system, e.g.:
mkfs.ext4 /dev/md0   # (assuming you made an mdadm array)
mkdir /mnt/backups
mount...
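The sequence got cut off; a fuller sketch under the same assumption of an mdadm array built from three spare disks (device names are placeholders):
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd   # build the array
mkfs.ext4 /dev/md0                # format it
mkdir -p /mnt/backups             # mount point for the backup storage
mount /dev/md0 /mnt/backups       # add an /etc/fstab entry to make this persistent, then add /mnt/backups as a directory storage in the GUI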