Nextcloud HDD setup

hux

New Member
Apr 14, 2020
Hi

I'm just about to move over and start using Nextcloud to store all my data (pictures, documents).
I have set up a zpool of 9x2TB drives in raidz2 with a spare disk.
Now to my question: what is best practice for adding storage to your Nextcloud VM?
Do I just make a VM with a big disk, or is there a smarter way to allocate disk space to the VM?
If I make the VM's disk too small, can I resize it on the fly?

Thanks in advance
 
If you don't plan to use the full 9x2TB for Nextcloud alone, you can create an NFS share and mount that inside your Nextcloud VM. That way your Nextcloud VM can access folders stored outside of the VM.
If you create a virtual HDD and store it on the big pool, only that one VM can access your pictures and so on, because you can't attach the same virtual HDD to different VMs.
I use NFS for my Nextcloud so different hosts/guests can access the same files.
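Roughly like this, if you want to try it (the pool name "tank", the subnet and the host IP are just placeholders for your setup):

    # on the Proxmox host: share a folder of the pool over NFS
    apt install nfs-kernel-server
    echo '/tank/nextcloud 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra

    # inside the Nextcloud VM: mount that share
    apt install nfs-common
    mount -t nfs 192.168.1.10:/tank/nextcloud /mnt/nextcloud-data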

And yes, you can increase the size of a virtual HDD later, but it is a little bit annoying, because each time you increase the size you need to edit the partitions, rebuild the partition table and grow the filesystem inside the VM.
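For example, something like this (VM ID 100, disk scsi0 and ext4 on /dev/sda1 are assumptions, your setup may differ):

    # on the Proxmox host: grow the virtual disk by 50G
    qm resize 100 scsi0 +50G

    # inside the VM: grow the partition and then the filesystem
    growpart /dev/sda 1     # from the cloud-guest-utils package
    resize2fs /dev/sda1     # for ext4; use xfs_growfs for XFS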
 
Just create an LXC container and use a mount point to the zpool. It will be much easier to manage the data in the long run. It truly runs great, I've been doing it like this for a long time, and there are fewer abstraction layers!
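For example (the container ID 100 and the paths are just placeholders):

    # bind-mount a folder of the zpool into the container
    pct set 100 -mp0 /tank/nextcloud,mp=/srv/nextcloud-data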
 
Just create an LXC container and use a mount point to the zpool. It will be much easier to manage the data in the long run. It truly runs great, I've been doing it like this for a long time, and there are fewer abstraction layers!
Yes, and that is exactly why it is less secure: it isn't fully isolated. Because of that I only run webservers (and other stuff accessible from the internet) in VMs. But of course, if security isn't your highest priority, you can run an LXC with bind-mounts too.
 
So based on your statement, unprivileged containers are not safe to use for any forward-facing web services? Care to elaborate? I disagree, but maybe I just don't know enough, so it would be great if you could expand on why.
 
So based on your statement, unprivileged containers are not safe to use for any forward-facing web services? Care to elaborate? I disagree, but maybe I just don't know enough, so it would be great if you could expand on why.
As soon as your webserver gets hacked, it is way easier for the attacker to gain full access to the complete host and all other guests if you use an LXC, because the container shares the kernel and hardware with the host.
Because of that weaker isolation you offer a lot more attack vectors.
With the complete isolation of a VM it's much harder to break out; an attacker would need to attack the hardware itself, via things like branch-prediction exploits, RAM overflows and so on.

If you are not a trained IT specialist, it's very likely that you won't always fix every vulnerability, and the server could get hacked sooner or later. In that case you want that webserver to be isolated so it can't damage other systems.
And most of the people here just want a plug-and-play solution like the TurnKey LXC, where they don't need to learn anything new. That's always risky.
 
Thanks for your answers and time.

I do care a lot about security. It's behind a firewall with IDS/IPS and with no more open ports than needed, but one open port is enough.

So with NFS I get convenience: Nextcloud takes the space it needs, and so on. Using a VM with a big disk is the safest.
One more thing: I'm going to use Proxmox Backup Server as well to back up the VMs. Why so much data security? Because I'm going to store pictures of my child and so on, and that stuff can't ever be destroyed! Isn't it easier/better to just go with a standalone VM when it comes to backups as well?
So to sum it up: a standalone VM is the way to go?
 
One more thing: I'm going to use Proxmox Backup Server as well to back up the VMs. Why so much data security? Because I'm going to store pictures of my child and so on, and that stuff can't ever be destroyed!
Yeah, that's also why I buy everything twice now. In the past I lost the drive in my computer with all the pictures and videos. I didn't replace it as soon as possible, and some weeks later, when I got my replacement drive and tried to copy the stuff back from my USB backup HDD, that HDD died too and I lost 10 years of photos. Some pictures I got back from friends, but most of the stuff is lost forever. So always keep at least three copies of data you can't recreate or replace.
Isn't it easier/better to just go with a standalone VM when it comes to backups as well?
So to sum it up: a standalone VM is the way to go?
There are several ways to back up stuff. You don't need to back up the complete VM with all the user data like pictures. This is how I do it:

The virtual HDDs of the VMs are stored on a ZFS pool consisting of SSDs. That way the VMs sit on very fast storage with a lot of IOPS. And those VMs don't need to be big, because they only store the OS and programs, without any real user data like media.
User data is stored on another ZFS pool consisting of HDDs. These are slow but cheap and big, so they're totally fine for storing photos, videos and so on. If a VM needs access to specific user data, I mount it into the VM using SMB or NFS shares. These mounted NFS/SMB shares won't be included in the VM backups, so the backups stay quite small, finish quickly and won't block the VMs for too long. For data integrity I only use "stop" as the backup mode, where the VM is offline for as long as the backup is running, and that could take hours or days if the VM's virtual HDD is too big. If every backup stopped the VMs for hours, I wouldn't back up often enough, because it would be way too inconvenient.

To back up my user data I use replication. I've got another server with the same ZFS pool setup that is booted up once a week. The replication syncs everything from the primary pool to the backup pool on the other server and shuts it down after everything has finished. So even if my complete primary server dies and kills the whole pool, I've got another server with everything in the basement, and I could only lose data created in the past 7 days.
Single drive failures aren't a big problem because both pools are raidz1. I think raidz1 is fine in my case because everything is stored twice, so it's annoying but not super critical to lose a complete pool if two of its drives die at the same time. And I'm quite safe against ransomware because I keep my snapshots for 2 months on both servers; as long as I notice a problem within 2 months, I can always roll back, even if the ransomware has already encrypted my files. The backup server is offline most of the time, so its drives won't wear out as fast. I also always try not to update both servers at the same time, so they won't both be hit by a critical bug. And the SMB/NFS shares of the backup server are set to read-only, so no infected computer on the LAN can delete files (except for the server doing the replication).
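If you want to see what such a rollback looks like on the CLI, it is roughly this (the pool, dataset and snapshot names are just examples):

    # list the snapshots that are still kept
    zfs list -t snapshot
    # roll the dataset back; -r also destroys the newer (encrypted) snapshots
    zfs rollback -r tank/userdata@weekly-2020-04-14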

And every X months I copy the most important files to a USB HDD. That backup is never up to date, but that's a good thing, because it can't be hit by long-term problems, and I can store it offsite.
 
didn't replace it as soon as possible, and some weeks later, when I got my replacement drive and tried to copy the stuff back from my USB backup HDD, that HDD died too and I lost 10 years of photos
That is not a good day :(. I feel for you, and it's nothing I want to go through.

This is how I do it:
A really nice setup, and in a way exactly what I'm looking for.
Can you share more about how the setup is configured? NFS and that stuff is new to me. My zpool is local, so how do I share the zpool over NFS and then mount the NFS share in a VM? I don't get that.
For the user data replication, do you have another node in a cluster, and the data is synced between them that way?
 
If I use Proxmox Backup Server with snapshot-mode backups of a VM, then only new changes will be backed up, right, with no downtime for the VM? Isn't snapshot mode good enough?
 
My zpool is local, so how do I share the zpool over NFS and then mount the NFS share in a VM?
You basically want your Proxmox host to be a NAS, so it can securely store files and share them on your LAN. But Proxmox can't do that by itself if you want a GUI. So one option would be to use the drives with ZFS on the Proxmox host itself. But that way you need to use the CLI to set up, monitor and manage everything, because that stuff isn't implemented in the Proxmox web GUI. You could install another GUI like Webmin or Cockpit to help you a little bit.
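The CLI part isn't that bad though; a raidz2 pool with a hot spare like yours is roughly this (the device names are placeholders, in practice you'd rather use the stable /dev/disk/by-id paths):

    # create the pool on the host: 9 data disks in raidz2 plus one hot spare
    zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi spare sdj
    # monitor health and usage
    zpool status tank
    zfs list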
My favorite option would be to create a NAS VM where you install the NAS OS of your choice (I use FreeNAS/TrueNAS, which also uses ZFS), so you get a nice GUI. If you have a free PCIe 8x slot you can use a PCIe HBA card and attach the HDDs to it. If you then enable PCI passthrough (and your hardware supports it), you can pass the complete HBA card with all attached drives through into the NAS VM, so the NAS OS can access those drives physically, without any virtualization overhead. But once you pass hardware through into a VM, it can't be used anymore by the host or by other VMs. So you need some other drives (durable SSDs preferred) that are connected directly to your mainboard and not passed through, so you can install Proxmox on them and run all VMs from them. The only way to access stuff on the HDDs is then to use network protocols like NFS/SMB/iSCSI/FTP/WebDAV/SSH and so on.
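The passthrough itself is just two steps once IOMMU is enabled in the BIOS and the kernel cmdline (the PCI address and the VM ID below are placeholders for your system):

    # find the PCI address of the HBA on the host
    lspci -nn | grep -i -e sas -e raid
    # pass the whole card into the NAS VM
    qm set 101 -hostpci0 0000:03:00.0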
For the user data replication, do you have another node in a cluster, and the data is synced between them that way?
I've got two FreeNAS servers, but the same thing also works with Proxmox. Both OSes use the same native ZFS commands ("zfs send | zfs receive") to replicate stuff from one pool to another pool. Look at zsync and pvesr for that. If you are using FreeNAS, it is way easier to set up the replication because everything is built into the GUI.
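At its core the replication is just this (the pool names, snapshot names and the IP are placeholders):

    # initial full sync to the backup server
    zfs snapshot -r tank@repl-2020-04-14
    zfs send -R tank@repl-2020-04-14 | ssh 192.168.1.20 zfs receive -F backuptank
    # later runs only send the difference between two snapshots
    zfs send -R -i tank@repl-2020-04-07 tank@repl-2020-04-14 | ssh 192.168.1.20 zfs receive -F backuptank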
If I use Proxmox Backup Server with snapshot-mode backups of a VM, then only new changes will be backed up, right, with no downtime for the VM? Isn't snapshot mode good enough?
Not sure if PBS supports incremental backups by now. I know they wanted to implement that, but the last time I tried it, it wasn't ready.
Whether "snapshot" as the backup mode is good enough depends on your needs. "stop" is always safer, because the VM is shut down first, so everything is safely stored on the drives, no software is running and everything is in a defined state.
If you use "snapshot" as the backup mode you are just saving the state of the disks while the VM is running: not the RAM, not the state of running processes, and not the stuff that sits in the cache waiting to be written to disk. With the last one I'm not totally sure; it could be that Proxmox tells the VM to finish async writes first. If you later restore a VM from a backup created in "snapshot" mode, it is as if you had pulled the power plug while the VM was running and everything crashed, so you are booting into a VM that wasn't shut down properly. That might be fine most of the time, but I wouldn't rely on it, especially if the VM stores important stuff you don't want to lose.

And if you are not using PBS but want VM backups, they won't be incremental. So if you've got a 10TB VM, each backup will store the complete 10TB VM again and again.
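You can see the difference in the vzdump call (the VM ID and the storage name are placeholders):

    # "stop" mode: VM is shut down first, fully consistent, but offline during the backup
    vzdump 100 --mode stop --storage pbs
    # "snapshot" mode: VM keeps running, but the result is only crash-consistent
    vzdump 100 --mode snapshot --storage pbs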
 
So one option would be to use the drives with ZFS on the Proxmox host itself.
My setup right now is like this: Proxmox is installed on an Intel SSD. Then I have 12 HDDs attached to an HBA, out of which I've made two datasets. One of the datasets is the 9x2TB raidz2, and it's added to Proxmox as ZFS storage.

Now that I've read a bit about this, my thought is to do it like this:
Make a VM, and make it a big one, to store all the Nextcloud data. The dataset has thin provisioning enabled, so the VM will only allocate as much space as it uses. I'll back the VM up to PBS, and only incremental data will be sent when the backup runs. That will make the backup fast on a gigabit network, and I can then use backup mode "stop" as well?
 
Make a VM, and make it a big one, to store all the Nextcloud data. The dataset has thin provisioning enabled, so the VM will only allocate as much space as it uses. I'll back the VM up to PBS, and only incremental data will be sent when the backup runs. That will make the backup fast on a gigabit network, and I can then use backup mode "stop" as well?
Thin provisioning won't make backups fast. If you create a 10TB VM with only 20GB of data on it, the stored backup will be 20GB (or less), but the backup task still needs to read the complete 10TB to check what is empty and what isn't. So reading a complete 10TB virtual HDD each time will take ages, even if 99.9% of that virtual HDD is empty.
If your mostly empty virtual HDD is 10TB and your pool can read at 200MB/s, it takes about 13.9 hours (10,000,000MB / 200MB/s = 50,000s) just to read the complete virtual HDD so it can be backed up.
 
I did some testing/labbing today, and even though my dataset can do 1.5GiB+/s, just a 1TB virtual HDD took over 11 minutes to back up.
Like you say, no matter what, it has to read the complete virtual drive. Garbage. Back to the drawing board.
 
Like I said, just create a NAS VM, PCI passthrough the HBA with all the HDDs you want to use as network storage attached to it, and do the ZFS inside the VM. That way you get:
- fast incremental backups of your userdata (syncing incremental changes using snapshots from one pool to another)
- fast VM backups because the virtual HDDs can be really small if they don't need to store all that big user data
- you can access the same files from different VMs or even other hosts in your network using SMB, NFS and so on (that's by the way not super slow, because virtio NICs and Linux bridges can handle around 5Gbit, so VM-to-VM communication should be around 5Gbit even if your physical NICs are only 1Gbit)
- all ZFS benefits
- no virtualization overhead and so no additional write amplification for the drives
- easy to manage, because you are using an OS with a GUI that is primarily designed to manage pools and share files

There is really no need to store the Nextcloud user data inside that VM. Use an SMB/NFS share as the Nextcloud data folder; then your Nextcloud VM's virtual HDD only needs to be something like 32GB, and you can still upload TBs of data to your Nextcloud.
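A sketch of how that could look inside the Nextcloud VM (the server IP, share name and mountpoint are placeholders):

    # mount the NAS share at boot via /etc/fstab
    echo '//192.168.1.10/nextcloud /mnt/ncdata cifs credentials=/root/.smbcred,uid=www-data,gid=www-data 0 0' >> /etc/fstab
    mount /mnt/ncdata

Then point 'datadirectory' in Nextcloud's config/config.php at that mountpoint (easiest when you do it at install time).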
 
Yes, exactly. I'll try that one and see if I manage to pull it off.
Thanks again for all the good advice. I'll get back to you with results or maybe more questions :)
 
