@warlocksyno @curruscanis Have either of you noticed any irregularity viewing the disks in the UI (pve > Disks in the menu)? I get communication failures, but not all hosts are affected, i.e. Connection refused (595), Connection timed out (596).
I had not checked that status, and I moved all of my VMs to an NFS share on the same box... (I had a hardware issue to resolve; used = unstable.)
Can you check to see if your pvestatd service in Proxmox is crashing? It's probably that. I have a branch being tested by @curruscanis at the moment to see if it fixes a similar issue.
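For anyone following along, this is roughly how I check it (assuming a standard PVE install where pvestatd runs under systemd):

Code:
# Check whether pvestatd is running or has crashed/restarted
systemctl status pvestatd

# Look for crash messages or repeated restarts in the journal
journalctl -u pvestatd --since "1 hour ago"

# If it is wedged, a restart usually brings the Disks view back
systemctl restart pvestatd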
[ASCII-art "truenas plugin" banner - For Proxmox VE]
TrueNAS Plugin v1.2.4 - Installed
---------------------------------
p.s. will this ever support "snapshots"?
My bad, I was using "RAW" formatted disks....
The plugin does support snapshots? What are you referring to?
Feature Comparison

| Feature | TrueNAS Plugin | Standard iSCSI | NFS |
|---|---|---|---|
| Snapshots | | | |
| VM State Snapshots (vmstate) | | | |
| Clones | | | |
| Thin Provisioning | | | |
| Block-Level Performance | | | |
| Shared Storage | | | |
| Automatic Volume Management | | | |
| Automatic Resize | | | |
| Pre-flight Checks | | | |
| Multi-path I/O | | | |
| ZFS Compression | | | |
| Container Storage | | | |
| Backup Storage | | | |
| ISO Storage | | | |
| Raw Image Format | | | |

Legend: Native Support | Via Additional Layer | Not Supported


@warlocksyno In some cases, I can't migrate to the iSCSI storage unless I use the raw format. The target format choice is greyed out.
This might just be a UI issue.
IMHO, and I could be wrong, we should be able to change the format when leaving the plugin-supplied storage and going to, e.g., a local ZFS storage, to support qcow2 snapshots.
Hmm, it should ALWAYS be RAW. Since NVMe/TCP and iSCSI are exclusively block storage, the format should be RAW.
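If the end goal is qcow2 snapshots on the destination, a rough CLI sketch (hypothetical VM ID 100 and a file/directory-based target storage named local-dir; the format choice is only offered for file-based storages, which is probably why the dropdown is greyed out for block targets, and older releases spell the command qm move_disk):

Code:
# Move scsi0 off the block storage onto a directory-based storage and
# convert it to qcow2 in the process; block storages stay raw regardless
qm disk move 100 scsi0 local-dir --format qcow2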
I also changed my zvol_blocksize to 128k in my most recent testing, since my systems will be used for VM workloads... I was reading through the documentation on the plugin, and it recommends that.
truenasplugin: truenas-iscsi
        api_host 10.10.5.10
        api_key 3-vVeJEyC29sGwNEjpCfVnRBjHOKei8F3TekksP6bbr1NlStP4OHQ48UVmXl2laDiI
        dataset tank/proxmox-iscsi
        target_iqn iqn.2005-10.org.freenas.ctl:proxmox-iscsi-1
        api_insecure 1
        shared 1
        discovery_portal 10.10.5.10:3260
        zvol_blocksize 128k
        tn_sparse 1
        use_multipath 1
        portals 10.10.6.10:3260
        content images
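As a sanity check that the definition above is actually usable from each node (the storage ID truenas-iscsi is taken from my config; pvesm is the standard Proxmox storage CLI):

Code:
# Show status/usage for all configured storages, including truenas-iscsi
pvesm status

# List the disk images the plugin currently exposes on that storage
pvesm list truenas-iscsi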
IMHO, and I could be wrong, we should be able to change the format when leaving the plugin-supplied storage and going to, e.g., a local ZFS storage, to support qcow2 snapshots.

@warlocksyno is there an expectation that we should be able to use the Proxmox VM replication?
I was hoping that in the future I could deploy a second TrueNAS and target it for replication.
I'm a bit curious about the documentation's recommendation of a 128k volblocksize. It's my understanding that KVM virtual machines running on ZFS can get better I/O with 64k.
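For what it's worth, volblocksize is fixed when a zvol is created, so changing the storage setting only affects newly created disks. A rough way to confirm what the existing zvols actually use (the dataset name tank/proxmox-iscsi is taken from my config above):

Code:
# On the TrueNAS box: show the volblocksize of every zvol under the plugin dataset
zfs get -r -t volume volblocksize tank/proxmox-iscsi

# Or as a compact listing with sizes
zfs list -r -t volume -o name,volblocksize,volsize tank/proxmox-iscsi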
This is my current ENV. I have one dedicated TrueNAS host exporting both iSCSI and NFS. This is a common VM configuration.
Interesting... Can you give me a rundown of what you are trying to achieve in this setup and with what machines/nodes? I'll see if I can get a similar setup going and get a fix started.
I moved my repo to alpha and updated... it still shows "orphans" but won't clean them.
I could be wrong but, if the storage is "Shared" and I have another storage considered "Shared", I should be able to use the Proxmox "Replication" to create a standby "Disk" for HA / DR. At the very least, a copy I can manually reconfigure the VM to use on the second "Shared" storage.
I am looking for a configuration to "pre-stage" VM disks as a DR for a primary storage failure. I understand the Proxmox limitation, TY for confirming.
Within the cluster itself? If so, I don't think you need the replication feature. Replication is for when you have non-shared storage between the nodes, such as local storage, like a ZFS pool on each host. You can then use the replication feature to have the nodes replicate the data to the other nodes so they all have a 1:1 copy of the data. The VM config is always synced between all cluster members though, so that is already covered by default.
If all 3 nodes are set up to use the plugin correctly, with HA turned on, and the HA manager decides the VM needs to be on a different node for any reason, that node should automatically connect the LUNs as needed.
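In case it helps, this is roughly how a VM gets handed to the HA manager (hypothetical VM ID 100; ha-manager is the stock PVE tool):

Code:
# Register VM 100 as an HA resource so the cluster will restart/relocate it as needed
ha-manager add vm:100 --state started

# Check the current HA state across the cluster
ha-manager status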
If you are wanting to duplicate the data onto another storage, I'm not aware of Proxmox being able to do that natively unless you set up the other storage as a backup destination, which you could then do a live-restore from.
But maybe I'm misunderstanding the intent?
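If the second storage is set up as a backup target, the manual path would be something along these lines (hypothetical VM ID 100 and a backup storage named backup-nfs; live-restore requires a reasonably recent PVE release):

Code:
# Back the VM up to the secondary storage
vzdump 100 --storage backup-nfs --mode snapshot

# Later, restore it and start the VM while the restore is still running
qmrestore <backup-archive> 100 --storage truenas-iscsi --live-restore 1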
As for multipathing, you will have to have the portal on TrueNAS shared out on more than one network interface, and since TrueNAS does not allow two or more interfaces to share the same subnet, you would also have to have the same subnets set up on the Proxmox side.
For instance:
Code:
┌─────────────┐
│   TrueNAS   │
│             │
│ eth0: 10.15.14.172/23 ────────┐
│ eth1: 10.30.30.2/24 ───────┐  │
└─────────────┘              │  │
                             │  │
                  ┌──────────┘  │
                  │  ┌──────────┘
                  │  │
               ┌──▼──▼────────┐
               │  Switch(es)  │
               └──┬──┬────────┘
                  │  │
                  │  └──────────┐
                  └──────────┐  │
                             │  │
┌─────────────┐              │  │
│   Proxmox   │              │  │
│             │              │  │
│ eth2: 10.15.14.89/23 ◄─────┘  │
│ eth3: 10.30.30.3/24 ◄─────────┘
└─────────────┘
From:
https://github.com/WarlockSyno/True...a/wiki/Advanced-Features.md#multipath-io-mpio
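Once both portals are reachable, a rough way to verify multipath is actually being used on the Proxmox node (standard open-iscsi and multipath-tools commands, nothing plugin-specific):

Code:
# Confirm the initiator has a session to each portal
iscsiadm -m session

# Show the multipath map and the active paths for each LUN
multipath -ll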

yet I still get: