Search results

  1. No Disk Storage Options Available for VM's Following Upgrade to VE7.2-7 - Communication Failure (0)

    As embarrassed as I am to admit it, I actually had this same problem last year and solved it by freeing up inodes that had been consumed by excessive .vg file creation: https://forum.proxmox.com/threads/excessive-inode-consumption-and-etc-lvm-archive.91868/ What happened is that when I upgraded to Proxmox 7, it...
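    For anyone hitting the same wall, a minimal sketch of that check-and-cleanup, based on the linked thread (verify your own setup before deleting anything):

        # Check whether the root filesystem is out of inodes (IUse% near 100%)
        df -i /
        # Count the LVM metadata backups piling up as .vg files
        ls /etc/lvm/archive | wc -l
        # Remove old .vg archives to free the inodes (per the linked thread)
        find /etc/lvm/archive -name '*.vg' -delete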
  2. No Disk Storage Options Available for VM's Following Upgrade to VE7.2-7 - Communication Failure (0)

    I have an issue where I cannot create new VMs following what appears to be an otherwise successful upgrade to VE7.2-7. Looking at another thread on what seems to be a related issue, I ran the pvesm status command, which succeeded with the following output: I am able to select an ISO from...
  3. Can't create VM - Storage not available

    I would also like to point out that I am no longer experiencing the issue now that I have done what I wrote up here (cleared out the excessive inode usage): https://forum.proxmox.com/threads/excessive-inode-consumption-and-etc-lvm-archive.91868/ Is it possible that the lack of inodes available...
  4. Can't create VM - Storage not available

    pvesm status

        root@prxmox:~# pvesm status
        Name    Type  Status       Total       Used  Available      %
        Store1   nfs  active    38193152      49152   36173824  0.13%
        Data1    nfs  active   959853568  509200384...
  5. apt-get update/autoremove failed - no space left

    I had to start another thread about inode issues (https://forum.proxmox.com/threads/excessive-inode-consumption-and-etc-lvm-archive.91868/), so I defer to people who are dealing with Samba inode consumption directly. My research said deleting my .vg files in /etc/lvm/archive was safe; I cannot...
  6. Excessive Inode Consumption and /etc/lvm/archive

    Some of you have probably seen these threads below: https://forum.proxmox.com/threads/apt-get-update-autoremove-failed-no-space-left.91867/ https://forum.proxmox.com/threads/an-help-to-free-inode-usage-on-my-servers.69730/ I am experiencing a similar problem - but not due to Samba. I have...
  7. apt-get update/autoremove failed - no space left

    You are afflicted like many others: https://forum.proxmox.com/threads/an-help-to-free-inode-usage-on-my-servers.69730/ You need to free up the inodes consumed by Samba.
  8. apt-get update/autoremove failed - no space left

    I believe I am running into the same problem, and you may be out of inodes. Do this:

        cd /
        du --inodes -xS | sort -n

    At the bottom of that list, do you have a large number?
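    A quick way to confirm the diagnosis before hunting for the culprit directory (a minimal sketch; "/" assumes the root filesystem is the one that is full):

        # IUse% at or near 100% means inode exhaustion, even when df -h shows free blocks
        df -i /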
  9. Can't create VM - Storage not available

    Running into a similar issue. I see that no one responded to this, but I ran 'pvesm status' to try to keep this going: the only error was that some .vg files were not found. All other storage was listed correctly. Just highlighting that this behavior is being observed by others too...
  10. [TUTORIAL] Guide: Setup ZFS-over-iSCSI with PVE 5x and FreeNAS 11+

    Just successfully tested on 6.4-8. Please note:
    - Updated from FreeNAS 11.3 to TrueNAS Core 12.0-U4
    - GrandWazoo seems to have made recent updates, so I deleted the freenas-proxmox repo I had cloned and re-cloned it
    - Appears to be working A-OK
  11. [SOLVED] Peculiar Behavior for ZFS-over-ISCSI on FreeNAS (VM creates but can't be started because LUN can't be reached?)

    I was able to figure it out. See below if you run into similar issues: The iSCSI service on my FreeNAS-11.2-U8 was initially having trouble restarting after I reconfigured the network, because the service and its associated pools were still bound to the prior network configuration...
  12. [SOLVED] Peculiar Behavior for ZFS-over-ISCSI on FreeNAS (VM creates but can't be started because LUN can't be reached?)

    All, I re-did a lot of the networking on my cluster and appear to have successfully changed everything over to ZFS-over-iSCSI using LACP and Linux bonded interfaces for some throughput improvement and redundancy. I am connecting to a FreeNAS cluster using GrandWazoo's ZFS-over-iSCSI patch...
  13. Local Shared Storage: Question about behavior

    Thank you sir - I am now tracking. The answer: actually create a share (NFS, for example), and go from there.
  14. Local Shared Storage: Question about behavior

    Thanks for the reply - why am I not seeing the same content across all the nodes, then, since they are marked shared? I only see the drives listed under the nodes.
  15. Local Shared Storage: Question about behavior

    Hello all, So I have a 3-node cluster with each server loaded with local hard drives. I want to share the local hard drives across all the systems in the cluster, so I did the following (see the sketch below):
    - Used parted to initialize with GPT and create a partition
    - Used mkfs.ext4 to create a filesystem on each...
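    A rough sketch of those preparation steps, assuming a blank disk at /dev/sdb (the device name is illustrative; double-check it before writing anything):

        # Label the disk with GPT and create one partition spanning it
        parted -s /dev/sdb mklabel gpt
        parted -s -a optimal /dev/sdb mkpart primary ext4 0% 100%
        # Create an ext4 filesystem on the new partition
        mkfs.ext4 /dev/sdb1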
  16. [TUTORIAL] Guide: Setup ZFS-over-iSCSI with PVE 5x and FreeNAS 11+

    Again, for the weary, FreeNAS-seeking-but-very-frustrated-because-no-one-actually-wrote-everything-up Proxmox user: I believe I have solved the problem of the "iSCSI: Failed to connect to LUN : Failed to log in to target. Status: Authorization failure(514) " listed above. As a comprehensive...
  17. [TUTORIAL] Guide: Setup ZFS-over-iSCSI with PVE 5x and FreeNAS 11+

    Updates for the weary Proxmox/FreeNAS Internet traveler (sketch below):
    - Do not set a passphrase on the SSH key you generate unless you have a way to supply it every time the key is used
    - On the line "Target: IQN on the FreeNAS box and target ID", that means the syntax needs to be of the...
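    A sketch of that key-generation step (the portal address is a placeholder; the Proxmox ZFS-over-iSCSI plugin is documented to look for keys under /etc/pve/priv/zfs/):

        # -N '' creates the key with no passphrase so PVE can use it unattended
        ssh-keygen -t rsa -b 4096 -N '' -f /etc/pve/priv/zfs/192.168.1.50_id_rsa
        # Install the public key on the FreeNAS box for root logins
        ssh-copy-id -i /etc/pve/priv/zfs/192.168.1.50_id_rsa.pub root@192.168.1.50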
  18. FreeNAS ZFS-over-iSCSI Issue

    Well, I already solved one problem. I added the target name using the following syntax in the Edit: ZFS over iSCSI dialog box, and it created the VM with no errors:

        iqn.custom-name.net.freenas.ctl:targetname

    However, I now have these problems when I attempt to start the VM: iscsiadm: No...
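    When the VM fails to start like this, a hedged way to test the target by hand from the PVE node (the portal address is a placeholder; the IQN is the one from the dialog above):

        # Ask the portal which targets it exports; the IQN must match exactly
        iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
        # Attempt a manual login to reproduce the iscsiadm error outside of PVE
        iscsiadm -m node -T iqn.custom-name.net.freenas.ctl:targetname -p 192.168.1.50:3260 --login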
  19. FreeNAS ZFS-over-iSCSI Issue

    I am trying to use a FreeNAS server for shared storage in a 3-node Proxmox cluster to enable HA and live migration. The iSCSI target is set up on a ZFS pool of 4 identical enterprise-grade SSDs and is reporting "healthy" in FreeNAS. I followed the helpful instructions here...
