I have a detached disk I can't get rid of.
In my storage view for Ceph I see vm-100-disk-0. I click remove and it refuses because VM 100 exists.
I go to the Hardware tab, and the only disk is local-zfs:vm-100-disk-0 (which is what I want, as I'm moving things off Ceph to reconfigure my...
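For anyone hitting the same wall, here's a sketch of the CLI route I'd try before fighting the GUI. The storage ID "ceph-storage" is a placeholder; substitute whatever your Ceph storage is actually called.

```shell
# See which disks VM 100's config actually references
qm config 100 | grep disk

# Rescan so Proxmox notices volumes that exist on storage but are
# no longer referenced; orphans get added to the config as unused0, ...
qm rescan --vmid 100

# If vm-100-disk-0 now shows up as an unused disk, detach and delete
# it from the Hardware tab, or free the volume directly:
pvesm free ceph-storage:vm-100-disk-0
```

The GUI refuses because the volume's name ties it to VM 100; `pvesm free` operates on the volume itself, so it sidesteps that check.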
I've done some more digging and this is only more confusing and frustrating.
The source disk is an LVM device on top of iSCSI. Trying from another node where enp6s0 is a 10Gb link...
# iscsiadm --mode node
10.0.1.108:3260,-1 iqn.2005-10.org.freenas.ctl:data-c-iscsitarget
# ip r
default via...
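Since the question is whether the session actually rides enp6s0, a quick check worth doing (target IP from the output above; the iface name "enp6s0-iface" is one I made up):

```shell
# Which interface/route the kernel would pick for the target IP
ip route get 10.0.1.108

# If it's not the 10Gb link, open-iscsi lets you bind the session
# to a specific NIC via an iface record:
iscsiadm --mode iface --op new --interface enp6s0-iface
iscsiadm --mode iface --op update --interface enp6s0-iface \
    --name iface.net_ifacename --value enp6s0
```

After that, log the target in through that iface so traffic can't silently fall back to a 1Gb path.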
I'm moving virtual disks from a FreeNAS box to local-zfs storage.
The FreeNAS machine I'm migrating from is on 10.0.1.108/24 (storage network)
The machine I'm migrating to has:
vmbr0: 10.0.0.111/24 (management network)
vmbr1: (public ip + gateway)
bond0: 10.0.1.111/24 (storage network)
In my...
It seems odd to me, as I've moved virtual disks between FreeNAS boxes without issue before.
I appreciate any ideas, because I feel like I must be missing some knowledge.
Ok, assuming I'm going to try to work out my ZFS setup for now and look into ceph later:
I'm migrating from a FreeNAS server that only has a 1Gb NIC. Storage is on a dedicated subnet.
As I said, migrating the disk goes fine at first. RAM usage steadily climbs until it hits 50% usage...
I have a 2U rack server with 4 nodes. Each node has:
2 CPUs, 6 cores, hyperthreaded (24 total)
128GB RAM
2x 1Gb + 1x 10Gb NIC
3x 4TB WD Gold
2 NVMe slots
I'm migrating from an old cluster involving multiple FreeNAS boxes and a big mess.
Currently I have set up a couple nodes with a 3 way...
Seriously, the solution is to download and pay for another control panel to run on top of Proxmox? Why am I paying for a Proxmox subscription, then? It's been 8 months since this was posted; how is it not fixed?
I had an issue growing a disk. It's on LVM.
I tried increasing the disk from 300GB to 400GB via the GUI. I got permission denied.
After trying a few things, I found I was missing quorum. Fixed that.
Increased the disk size via the GUI. Now it reported 700GB. Ugh. Ok. It's because I ran the command to...
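In case it helps someone else untangle the same mess, a hedged sketch of how I'd reconcile the config with what LVM actually allocated (VM ID 101 and scsi0 are placeholders):

```shell
# See what the logical volume really is on disk
lvs --units g | grep vm-101-disk-0

# qm rescan re-reads the actual volume sizes and corrects the size=
# value stored in the VM config if it has drifted
qm rescan --vmid 101

# Note: qm resize grows by a relative amount -- "+100G" means
# "add 100G", not "set to 100G", which is an easy way to overshoot
qm resize 101 scsi0 +100G
```

The 700GB reading smells like the resize was applied twice (once failing at the config level, once succeeding), so the config and the LV disagreed until a rescan.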
I'm wondering what the best way to virtualize database loads (specifically Postgres) is. The use case in question requires as few I/O bottlenecks as possible. I'm using local ZFS storage on the host, and I understand that if you tune ZFS to an 8k recordsize to match Postgres's 8k page size, you can...
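For reference, the tuning I've seen commonly suggested looks roughly like this. Pool/dataset names are hypothetical, and these are community conventions rather than anything official:

```shell
# Data directory: match recordsize to Postgres's 8k page size
zfs create -o recordsize=8k -o compression=lz4 rpool/pgdata

# WAL is mostly sequential writes; a larger recordsize is often
# recommended for it, on a separate dataset
zfs create -o recordsize=128k rpool/pgwal

# Avoid double-caching the same pages in ARC and shared_buffers;
# caching only metadata in ARC is a commonly debated tweak
zfs set primarycache=metadata rpool/pgdata
```

Whether `primarycache=metadata` helps depends on how big shared_buffers is relative to RAM, so it's worth benchmarking both ways rather than taking it on faith.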
I have some VMs running with disks on zvols in Proxmox. I am using sanoid/syncoid to push ZFS snapshots to a FreeNAS box as backup - but I'm curious if I can mount a snapshot read-only and extract single files. The filesystem in the VM is ext4, if that matters.
Purely educational at this point...
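Answering my own curiosity: yes, this should work, since a zvol snapshot is just a point-in-time block device. A sketch, with hypothetical pool/zvol/snapshot names ("tank", "vm-100-disk-0", "daily1"):

```shell
# Clone the snapshot so the original is never touched (the clone is
# technically writable, but we only read from it)
zfs clone tank/vm-100-disk-0@daily1 tank/restore

# udev exposes the zvol's partitions as ...-part1, -part2, etc.
mount -o ro /dev/zvol/tank/restore-part1 /mnt

# Pull out whatever single files are needed, then clean up
cp -a /mnt/etc/fstab /root/
umount /mnt
zfs destroy tank/restore
```

Alternatively, `zfs set snapdev=visible tank/vm-100-disk-0` exposes device nodes for the snapshots themselves, which you can then mount read-only without cloning at all.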
It would be simply amazing to use ZFS over iSCSI with FreeNAS. I'm a bit nervous to put it into production without an official release, though. Are there any thoughts on this from the Proxmox devs? I'm going to give it a shot on some test hardware.