Search results

  1.

    Time synchronisation between PVE node/host and VM/guest without access to the Internet

    Hey everyone! As far as I understand, my Proxmox 7 cluster, installed on top of Debian 11 Bullseye, uses systemd-timesyncd to keep the clock in sync via external NTP servers, configured either in /etc/systemd/timesyncd.conf or in a separate file inside /etc/systemd/timesyncd.conf.d/. All of...
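
    A minimal sketch of such a drop-in file, assuming a hypothetical local NTP server at 192.168.0.1 (the file name and server address are illustrative, not from the thread):

        # /etc/systemd/timesyncd.conf.d/local-ntp.conf
        [Time]
        NTP=192.168.0.1

        # Apply and verify:
        # systemctl restart systemd-timesyncd
        # timedatectl timesync-status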
  2.

    HTTPS macro includes UDP?

    I agree. I opened a feature request bug report instead. Not that I could not open dozens of them, come to that, but I think this one is a must these days.
  3.

    HTTPS macro includes UDP?

    No worries. I'll keep the extra rule in the security group for the time being. Thanks for the reply! P.S. I guess I am better off not hacking those rules into /usr/share/perl5/PVE/Firewall.pm, or is it just that they would be overwritten with each update?
  4.

    HTTPS macro includes UDP?

    Hey everyone! Quick question: does the HTTPS macro of the PVE Firewall include UDP traffic (to port 443), or is it still just TCP? I am on PVE 7.4-17, but if it is available in PVE 8.x I would also be interested in the answer, since I plan on migrating to it soon. Thanks in advance.
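
    The "extra rule" workaround mentioned above, as a sketch of a cluster-level security group (the group name is an assumption; the macro itself only matches TCP):

        # /etc/pve/firewall/cluster.fw
        [group webout]
        OUT HTTPS(ACCEPT)              # HTTPS macro: TCP to port 443
        OUT ACCEPT -p udp -dport 443   # extra rule for QUIC/HTTP-3 over UDP 443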
  5.

    [SOLVED] Error code 11 when moving storage from zfspool to local: "no space left on device" but there is plenty of space

    For future reference, Fabian means this: the problem is that the upper layer doesn't know whether the lower storage layer uses compression, or how much it affects the data. Still, my request would be to expose such information. Not urgent, not critical, but when possible :)
  6.

    [SOLVED] Error code 11 when moving storage from zfspool to local: "no space left on device" but there is plenty of space

    I think that is an excellent idea, @BruceX. Any chance this warning message could be brought into the next version of Proxmox, @fabian? Once you've experienced the situation, it makes sense and you will probably not fall for it again, but for the first-timers it could be very helpful. :)
  7.

    How to configure the firewall of an LXC via Ansible module proxmox?

    I like this idea, as it is quicker (no need to do a remote call to the API). On the other hand, calling the API ensures that, were the Proxmox developers to change the behaviour in the future (e.g. if the method were to perform some other tasks), the playbook would still be valid. Thanks for the heads-up!
  8.

    Periodically run fsck on LXC

    I use ZFS in two of the five nodes. Specifically, the one that had the filesystem corruption was using both SSD disks in software RAID 1 with ext4 and ZFS disks in RAID 1. I had LXC on both storages, and all of them suffered from corruption. Unfortunately, I never managed to figure out...
  9.

    Periodically run fsck on LXC

    Good day everyone! I have a number of LXC on a Proxmox VE 7.4 cluster with 5 nodes at the moment, and recently I had a file corruption problem on one of the nodes that affected the LXC in it (both the SSD-ext4 pool and the ZFS pool). That's in the past, but it got me thinking: how am I...
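
    For the ext4 side, a minimal sketch using pct (the vmid is illustrative, and the container must be stopped first; ZFS subvols have no fsck, so scrubbing covers those):

        pct stop 109
        pct fsck 109     # run fsck on the container's block-backed volume
        pct start 109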
  10.

    Migrating an LXC leaves it with no access to the network beyond other LXC in the same node

    I just found this thread which seems to describe my issue as well. Clearly the guys in that thread know more about networking than I do.
  11.

    Migrating an LXC leaves it with no access to the network beyond other LXC in the same node

    I started a tcpdump host 192.168.0.145 on the node where I migrated another LXC to (proxmox1 to proxmox4 again, but an LXC I can live without), and I see lots of ARP traffic:

        tcpdump host 192.168.0.145
        tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
        listening on eno1...
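
    To watch just the ARP side of it, a narrower capture could look like this (a sketch; interface and address reused from the output above):

        tcpdump -n -i eno1 arp and host 192.168.0.145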
  12.

    How can I scrub the ZFS subvol of an LXC?

    Thanks for your replies, @Dunuin and @LnxBil. Fortunately, it's a small pool so it just takes around 3 minutes to scrub. I've run several manual scrubs and all came clean. :)
  13.

    Migrating an LXC leaves it with no access to the network beyond other LXC in the same node

    Hello. As of late (the last week or two; it started on 7.3 and the problem persists after upgrading to 7.4), when I migrate an LXC from one node (e.g. proxmox1) to another (e.g. proxmox4), the LXC cannot connect/ping to, or be connected/pinged from, any LXC but those in the same, new node. When I migrate...
  14.

    How can I scrub the ZFS subvol of an LXC?

    I run backups every 12 hours via Proxmox Backup Server using the snapshot mode. Out of curiosity, would that count?
  15.

    How can I scrub the ZFS subvol of an LXC?

    Yes, that is done every month:

        # cat /etc/cron.d/zfsutils-linux
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # TRIM the first Sunday of every month.
        24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi

        # Scrub...
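
    To confirm when the scheduled jobs last touched the pool, something like this should do (a sketch; the pool name is reused from the thread):

        zpool status -t zfspool    # the scan line shows the last scrub; -t adds TRIM status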
  16.

    How can I scrub the ZFS subvol of an LXC?

    As the title says, I can use the zfs command to get information about the subvolume of an LXC that resides in my zfspool pool, e.g.

        zfs get all zfspool/subvol-109-disk-0

    How can I scrub that subvolume to make sure everything is fine after running into issues with the filesystem and having to...
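
    For reference, ZFS scrubs operate on whole pools rather than on individual datasets, so the answer boils down to something like:

        zpool scrub zfspool     # scrubs every dataset in the pool, subvol-109-disk-0 included
        zpool status zfspool    # check progress and the result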
  17.

    How to configure the firewall of an LXC via Ansible module proxmox?

    No, not really. The only thing I could do to improve it was to check the value of the enable property beforehand:

        # Enable and configure the firewall of an LXC
        - name: Check the status of the container firewall
          ansible.builtin.command:
            cmd: "pvesh get /nodes/{{ proxmox_node }}/lxc/{{...
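
    Outside Ansible, the underlying call would look something like this (a sketch; the node name and vmid are assumptions reused from other posts in these results):

        pvesh get /nodes/proxmox1/lxc/109/firewall/options --output-format json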
  18.

    Are remnants of old LXC in /var/lib/lxc safe to delete?

    Hey everyone! I had a problem with one of my nodes (proxmox3) on my LXC-only, 5-node, v7.4 cluster which led to data corruption (filesystem) inside the LXC. I am still trying to figure out exactly what happened, but rebooting the machine triggered fsck on the main disks and the node was...
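
    A sketch of how one might compare those leftovers against the containers PVE actually knows about (read-only; nothing here deletes anything):

        pct list           # containers the node is aware of
        ls /var/lib/lxc    # leftover directories; entries with no matching vmid are candidates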
  19.

    [SOLVED] Error code 11 when moving storage from zfspool to local: "no space left on device" but there is plenty of space

    Couldn't it be inferred from the compressratio property (and maybe some others) of the zfs get all command? As in, at the very least, provide a warning to the user and a lead to a solution in case he or she gets the "no space left on device" error? Just trying to be helpful here :)
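
    The property in question, as a sketch against the subvolume named elsewhere in these results:

        zfs get compressratio,used,logicalused zfspool/subvol-109-disk-0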
  20.

    [SOLVED] Error code 11 when moving storage from zfspool to local: "no space left on device" but there is plenty of space

    Just to confirm that it worked. I tried adding 1 GB at a time and at 5 GB the process went fine. Again, thank you very much for your time, Fabian. Marking the thread as solved. P.S. Wouldn't it be nice for the command being executed via the WebGUI to check for the compression ratio and do some...
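
    The "adding 1 GB at a time" step, as a sketch with pct (the vmid and disk name are assumptions):

        pct resize 109 rootfs +1G    # grow the volume, then retry the move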