Search results

  1.

    [SOLVED] ZFS vs Other FS system using NVME SSD as cache

    1. I'm not seeing any difference (based on Samsung info) between the PM863a and the SM863a, apart from the marketing claim that one is better for write-intensive workloads. The speeds and IOPS between the two (again based on Samsung docs) show almost identical numbers. So...? 2. Have you actually seen any...
  2.

    Upgraded to 5.0, nodes seem to want to SWAP more

    And? I'm not seeing how this is relevant to a change in swapping behaviour with a major release upgrade... Or, to rephrase it, swapping is happening _more_ since upgrading to 5.0 from 4.4, by quite a bit. Neither of the links you provided is relevant to that.
  3.

    [SOLVED] ZFS vs Other FS system using NVME SSD as cache

    L2ARC is only going to be useful if you actually anticipate your ARC size to be much larger than your max RAM in the system. Otherwise it is useless and a waste of your money and time. If you have 128 GB of RAM available for your ARC (which I think you do?), that's going to give you a very large...
  4.

    Upgraded to 5.0, nodes seem to want to SWAP more

    Recently upgraded from 4.4 to 5.0-23/af4267bf, and now my nodes are putting more into SWAP than before, even though they're at about 50% RAM usage on each node. This is really annoying, as it can lead to higher IO latency, and I periodically flush my SWAP back to RAM in times like this. Sure...
  5.

    Proxmox VE 5.0 released!

    Yeah, I'm using an Avaya 4548GT, and the "Prod" node 2 is also using LACP in literally the exact same configuration, and has not failed once. This single node is the consistent failure point. When doing the live migration of this many VMs, it's coming from Prod 2, to Prod 1, and Prod 2 is on the...
  6.

    Proxmox VE 5.0 released!

    I don't mean to be rude, but have you read through all of my prior messages? I've been very exhaustive in my testing, and I have performed a good amount of switch-centric testing. If you haven't had a chance to review what I wrote, please do, and share your thoughts. :)
  7.

    Proxmox VE 5.0 released!

    Okay after a few hours or something the "received packet on bond0 with own address as source address" error is coming back up again. This is really frustrating and I'm just going to disable the bonding until I get some dev response here :/ I have absolutely no clue what the root cause is, but I...
  8.

    Proxmox VE 5.0 released!

    From what I've been reading the naming is actually a systemd thing, not kernel or debian specific. Hence 16.04 is systemd ;)
  9.

    SNMP indexing shifts on reboot (libreNMS)

    Hi Folks, This is an issue I've been having for months, and I am not entirely sure where the cause is, but it might be proxmox. I've been having it since 4.3, and it happened in 4.4, and is now happening in 5.0. I use LibreNMS to monitor many things, including a bunch of proxmox nodes, via...
  10.

    Proxmox VE 5.0 released!

    Another update to my quest for GLORY! Turns out Prod node 1, which was a from-scratch 5.0 install in our last episode, had a few packages to update. I updated them, and now have the bond0 LACP working. I don't think I did anything special. Apt update, apt upgrade, install presented packages...
  11.

    Proxmox VE 5.0 released!

    Another bit of info. Prod node 2, upgraded from 4.4 to 5.0, keeps the old eth0/eth1 interface naming, and it has the /etc/udev/rules.d/70-persistent-net.rules file. Prod node 1, reinstalled from scratch with the 5.0 release, has the "new" renaming of interfaces to enp4s0 or whatever, and I manually...
  12.

    Proxmox VE 5.0 released!

    Okay, so the node I upgraded from 4.4 to 5.0 has ifconfig, but the node I rebuilt from scratch on 5.0 does not have ifconfig... what.. the hell... :( EDIT: fresh installs do not get the "net-tools" package, but upgraded ones retain it. This is how I got my precious ifconfig back.
  13.

    Proxmox VE 5.0 released!

    BTW I'm loving the little nuanced GUI improvements, like: Shift-click to select multiple VMs during migration. This is really convenient! (should be documented so others know) Right-click on nodes in the left list to issue node-centric commands, like mass migrate. Colourising of the logs and...
  14.

    Proxmox VE 5.0 released!

    I was upgrading the whole cluster, which consisted of two nodes that operate 24/7, two nodes that are turned on for labbing purposes, then turned off when not needed (due to their power inefficiencies and loudness). All nodes were previously 4.4 with the latest updates. I upgraded the lab nodes...
  15.

    Proxmox VE 5.0 released!

    I googled the snot out of the error, and likely found a bunch of results you did too. Unfortunately what I found did not help at all. I tried doing very drastic stuff including rebooting the switch, reconfiguring the ports, switching which ports are used on the switch in the LACP bond...
  16.

    Proxmox VE 5.0 released!

    You're not bad at all! I think you sufficiently explained your situation :) Nice! :D Perhaps reconsider ZFS in the future, but I understand your hesitation.
  17.

    Proxmox VE 5.0 released!

    Did you have any LACP going on? Live migration between nodes during upgrade did not work for me at all :( (but might have been due to... other... issues as my notes above outline).
  18.

    Proxmox 3.4 "Out of Range" at install

    This issue is still current for Proxmox 5.0, as I'm using a monitor that has a max res of 1280x1024, and what I assume to be the GRUB menu for the USB stick is out of range or something like that. Hitting enter boots into the installer. Thanks for this!
  19.

    Proxmox VE 5.0 released!

    I just upgraded the cluster, and on the last node I was able to live migrate a bunch of VMs onto it, but now I can't migrate them off it. No matter what node it tries to migrate to, I get: ERROR: migration aborted (duration 00:00:01): Can't connect to destination address using public key EXCEPT it...
  20.

    Proxmox VE 5.0 released!

    I'm in the process of upgrading an environment I work with from 4.4 to 5.0, and I just tried live migrating a test VM from a 4.4 box to 5.0, and it seems to fail every time. :/ It looks like this version upgrade might be one of those where you will experience downtime, but...
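
Aside on the L2ARC sizing argument in results 1 and 3 above: an L2ARC only pays off when the hot working set is much larger than the RAM available to the ARC, and every block cached on the L2ARC also consumes a small ARC header in RAM. The Python sketch below is a rough back-of-the-envelope check of that trade-off; the sizes and the ~70-byte header cost are assumed example values, not figures taken from these threads.

    # Rough sanity check of the L2ARC argument above (hypothetical numbers only):
    # L2ARC helps when the hot working set is much larger than the RAM available
    # to the ARC, and every block cached on L2ARC also costs a small ARC header.

    def l2arc_header_overhead_gib(l2arc_gib, record_kib=128, header_bytes=70):
        """RAM consumed by ARC headers for an L2ARC of the given size (assumed costs)."""
        records = (l2arc_gib * 1024 * 1024) / record_kib
        return records * header_bytes / (1024 ** 3)

    ram_for_arc_gib = 128    # RAM the poster expects to have available for the ARC
    working_set_gib = 100    # hypothetical hot data set
    l2arc_gib = 480          # hypothetical NVMe cache device

    if working_set_gib <= ram_for_arc_gib:
        print("Working set fits in the ARC; an L2ARC adds little.")
    else:
        overhead = l2arc_header_overhead_gib(l2arc_gib)
        print(f"L2ARC may help; expect ~{overhead:.2f} GiB of ARC spent on headers.")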