Search results

  1. Extending an iSCSI disk on an Ubuntu VM causes the disk to go read-only?

    Unfortunately, there are no logs. The host sees the disk extend as successful, and the VM stops writing anything to syslog from the moment the extend starts until after a reboot.
  2. Extending an iSCSI disk on an Ubuntu VM causes the disk to go read-only?

    I am wondering if I'm doing something wrong here. I have 2 Proxmox 8.2.2 hosts connected to a TrueNAS Scale box presenting ZFS over iSCSI LUNs over fiber, and everything works well. I have Ubuntu 22.04.4 VMs with qemu-guest-agent installed, and everything seems happy, until I...
  3. Request for Consideration: New Support Tier

    Yea, this was meant with good intentions, I'm asking if there is a way to create a price tier which does let me help the project without getting any additional benefit or removing any existing tiers. Ultimately the nag will go away, but if there's a way for me to do that and also find a new...
  4. Request for Consideration: New Support Tier

    I'm a homelab Proxmox user. I have 2 physical servers with Proxmox on them, each with 2 physical sockets. I'm not using a subscription, but that's fine, I don't need the enterprise repository, I don't need the support.... ...Except for the nag. The nag is super effective, I feel bad. I want...
  5. Removing erroneously created disks?

    That did it: from the TrueNAS web UI I just deleted the datasets for the VM and nothing bad happened. Once it was clear, I migrated the storage over, which is how I wanted it, and it all went fine.
  6. Removing erroneously created disks?

    Curious, when doing that I get this: root@sr66-prox-02:~# qm disk rescan rescan volumes... Could not find lu_name for zvol vm-117-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 118. I still can't remove it from the LUN's GUI, and the extra disks don't show in the hardware UI for the VM. (A cleanup sketch follows after this list.)
  7. Removing erroneously created disks?

    I recently set up ZFS over iSCSI on my TrueNAS using a new 10G fiber network. While setting it up I had a few failures (binding iSCSI to the right NICs on the TrueNAS, etc.). The result was that I tried to move a disk, but the move failed with an error. It's all since been resolved. In doing...
  8. Locked myself out of networking following upgrade from 7->8, do I have options other than reimaging?

    I had an install of Proxmox 7 that heavily utilized Open vSwitch. Upon upgrading to Proxmox 8, I lost networking in the middle of the upgrade, as I believe openvswitch was being upgraded at that point. I was able to get in via the iDRAC console, and the system is up and survives a reboot, but my networking is...
  9. Open vSwitch troubleshooting, having trouble getting cluster-communication VLAN traffic working.

    I am setting up a 2-node Proxmox cluster. These are connected to my switch via a 4-port LACP bond carrying multiple VLANs, each of which has a /16 subnet. VLAN 2 (subnet 10.2.0.0/16) is routable and is my management network; I can get to the web interfaces on this VLAN and all is well. I have... (An example OVS network config is sketched after this list.)
  10. Help understanding best practice of monitoring Linux guest memory on Proxmox

    Yea I'm monitoring both, but the question really is should I just disable the trigger prototype in the template. I think that's the move. Thanks!
  11. Help understanding best practice of monitoring Linux guest memory on Proxmox

    I'm using a completely up-to-date Proxmox install and monitoring my Ubuntu VMs with an up-to-date Zabbix. I have noticed a large discrepancy between the memory usage reported by Proxmox and the memory usage reported by the VM. This makes sense; from the hypervisor side, this is a ballooning... (A quick comparison sketch follows after this list.)
  12. Proxmox VE and ZFS over iSCSI on TrueNAS Scale: My steps to make it work.

    Does this help? Make sure the API in TrueNAS is bound to the correct interface: https://github.com/TheGrandWazoo/freenas-proxmox/issues/44
  13. Proxmox VE and ZFS over iSCSI on TrueNAS Scale: My steps to make it work.

    Dunno, I'm not using LXC. All I know is my VM is happily running, backed by the ZFS datastore on TrueNAS over iSCSI:
  14. Proxmox VE and ZFS over iSCSI on TrueNAS Scale: My steps to make it work.

    I did a lot of searching and testing, and found plenty of people struggling with this, but I was able to make it work. I wanted to document my exact steps in the hope it helps someone else. I have a VM provisioned and working, with snapshots and migrations. First, on TrueNAS Scale, I have a ZFS dataset with a... (A storage.cfg sketch for this kind of setup follows after this list.)
  15. Install help, PVE 7.2-1 from USB, "FATAL ERROR:writer: failed to read/uncompress file /target/usr/sbin/cgdisk"

    Maybe; I checksummed it and it looked good. My initial theory was that, based on the error, it was trying to pull something off a hard-coded CD-ROM path, so by making that available it could complete that step and continue as expected. That's why I tried it, but I have no idea if that was an...
  16. Install help, PVE 7.2-1 from USB, "FATAL ERROR:writer: failed to read/uncompress file /target/usr/sbin/cgdisk"

    This is kind of insane but I found a workaround. I burned a copy of the DVD, put it in the drive, booted off the USB, and it installed fine first try.
  17. Install help, PVE 7.2-1 from USB, "FATAL ERROR:writer: failed to read/uncompress file /target/usr/sbin/cgdisk"

    I'm installing PVE 7.2-1 on a Dell PowerEdge R710 from a USB key. I'm attempting to set up a BTRFS RAID1 on 2 drives for the OS, and I have 6 other drives which will be a RAIDZ-2. When attempting to install, I get the following error: "FATAL ERROR:writer: failed to read/uncompress file...
  18. Given my configuration, what is the best way to achieve reliable storage?

    I did do that, but the results made me think it was a bad idea, because the OS and Ceph would be backed by the same RAID 5, and I read this: "Do not mix the OS disk with Ceph. Ceph will thrash the performance of the disk. Besides that the OS might grind to a halt, also Ceph won't benefit." And...
  19. Given my configuration, what is the best way to achieve reliable storage?

    Yea, the networking stack is shared with production; we built this connected to both the prod and the lab VLANs in case we decided it was reliable enough. There are 4 NICs per server and they're bonded with LACP, but we don't have them all cabled up until we clear out some stuff on the...
  20. Given my configuration, what is the best way to achieve reliable storage?

    I'm a bit overwhelmed setting up the shared storage for a new Proxmox cluster in my lab. This is our previous-generation hardware, which was sitting idle, and each of the 10 servers has 4 ~300GB SAS disks. It is connected to our production Dell SAN, where our first 2TB test LUN is presented over...
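
A note on the stray-disk cleanup in results 5-7: when qm disk rescan works, the usual CLI cleanup path looks roughly like the sketch below. VMID 117 and the volume name come from the quoted error; the storage name is a placeholder, and in this thread the poster ultimately deleted the stray datasets from the TrueNAS UI instead (result 5).

    # Re-scan storages so stray volumes are picked up as unusedN entries in the VM config
    qm disk rescan --vmid 117

    # Check whether the stray disk now shows up in the config
    qm config 117 | grep unused

    # Drop the reference from the VM config...
    qm set 117 --delete unused0

    # ...then free the underlying volume on the storage (storage name assumed)
    pvesm free truenas-iscsi:vm-117-disk-0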
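
A note on the Open vSwitch setup in result 9: below is a minimal /etc/network/interfaces sketch of one way to carry a management VLAN and a separate cluster VLAN over a 4-port LACP bond with Open vSwitch on Proxmox. The interface names, the cluster VLAN tag (3), and the addresses are assumptions for illustration, not the poster's actual configuration.

    auto bond0
    iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds eno1 eno2 eno3 eno4
        ovs_options bond_mode=balance-tcp lacp=active

    auto vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 mgmt cluster

    # Management interface on VLAN 2 (10.2.0.0/16 as described in the post)
    auto mgmt
    iface mgmt inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=2
        address 10.2.0.11/16
        gateway 10.2.0.1

    # Hypothetical cluster/corosync interface on its own VLAN (tag 3 assumed)
    auto cluster
    iface cluster inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=3
        address 10.3.0.11/16

With something like this in place, cluster traffic would use the 10.3.0.0/16 address on the tagged OVSIntPort rather than the management interface.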
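
A note on the memory discrepancy in result 11: a quick way to compare what the hypervisor reports with what the guest itself reports, assuming VMID 117 as a placeholder.

    # On the Proxmox host: what QEMU and the balloon driver report for the VM
    qm monitor 117
    # then, at the qm> prompt, type:
    #   info balloon

    # Inside the guest: what the VM itself thinks it is using (note the cache/buffers split)
    free -m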
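
A note on the ZFS over iSCSI write-up in result 14: the Proxmox side of such a setup usually boils down to a single /etc/pve/storage.cfg entry. The sketch below assumes the stock LIO provider; the storage name, pool, portal address, and target IQN are placeholders, and the TheGrandWazoo freenas-proxmox plugin linked in result 12 uses its own provider settings instead.

    zfs: truenas-iscsi
        iscsiprovider LIO
        portal 10.10.10.10
        target iqn.2005-10.org.freenas.ctl:proxmox
        pool tank/proxmox
        lio_tpg tpg1
        blocksize 8k
        sparse 1
        content images

The config entry alone is not the whole story: the built-in plugin also expects passwordless SSH from the PVE nodes to the storage host so it can create and delete zvols there.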