Search results

  1. Question about VM migration when using local zfs

    Hello! Two migration scenarios; source and destination servers are using local ZFS storage, let's call it local-zfs. 1st scenario (with preparation): 1. set up replication to the destination, schedule a manual replication, wait until it completes - only used space is copied; 2. migrate - will copy the disk delta...
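
    A minimal command-line sketch of that 1st scenario, assuming VMID 100 and a target node called "pve2" (both placeholders, not from the thread):

      # set up replication to the destination and wait for the initial sync
      pvesr create-local-job 100-0 pve2 --schedule '*/15'
      pvesr status                                     # wait until the last sync has completed

      # the actual migration then only needs to transfer the delta since the last sync
      qm migrate 100 pve2 --online --with-local-disks
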
  2. What ashift for ZFS SSDs?

    Most SSDs these days use 8k as their internal block size (even if they advertise 512). Also, you could try to create a pool without specifying any ashift and see what ZFS tells you (they have a list of hdd/ssd models with the right sector size; maybe your drives are in their list)
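
    A hedged sketch of both approaches; pool and device names are placeholders:

      # force 8K sectors explicitly (ashift=13, i.e. 2^13 = 8192 bytes)
      zpool create -o ashift=13 tank mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B

      # or create the pool without -o ashift and check what ZFS auto-detected
      zdb -C tank | grep ashift
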
  3. LetsEncrypt certificate with dns alias challenge using nsupdate

    Hello. By digging in the code I saw that it is possible (but only from the console) to use the DNS challenge with an alias. The problem is that nsupdate runs as nobody/nogroup, and if you want to keep the DNS key in a single place somewhere in /etc/pve/ it will be root:www-data, and no 0644 is possible (even...
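
    For context, the alias mechanism delegates the challenge record via CNAME into a zone that nsupdate is allowed to modify; a hedged example with placeholder zone names:

      ; validation for www.example.com is answered from the delegated "acme" zone
      _acme-challenge.www.example.com.  IN  CNAME  _acme-challenge.acme.example.net.
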
  4. [SOLVED] Old machines with virtio network driver stop responding after update to pve-kernel-4.15.18-30-pve

    Proxmox 5.x (fully updated): when updating to kernel pve-kernel-4.15.18-30-pve / 4.15.18-58, older guest machines (Debian <= 6) with the virtio network driver became inaccessible (not even ping). No message is logged, neither on the host nor in the guest, but ping just stops working. Kernel version in...
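
    A hedged workaround sketch (not from the thread): temporarily switching the affected guest's NIC model from virtio to e1000 to confirm the driver is the trigger; VMID, MAC address and bridge are placeholders:

      qm config 100 | grep ^net0                        # note the current MAC address first
      qm set 100 --net0 e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0
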
  5. ACME over DNSChallenge failing

    Related question: I wish to use the nsupdate plugin with DNS alias challenges, which works perfectly with acme.sh, but I really cannot understand how to implement this in the Proxmox interface. I only see "API Data" when selecting the "nsupdate" DNS API. Or in my case the best solution would be to...
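
    The "API Data" box takes the environment variables of the matching acme.sh plugin, one per line; a hedged example for dns_nsupdate with placeholder values:

      NSUPDATE_SERVER=ns1.example.net
      NSUPDATE_KEY=/path/to/tsig.key
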
  6. Remote Syslog

    These are the packages needed for RELP-TLS (versions for Debian 10): rsyslog 8.1901.0-1, rsyslog-gnutls 8.1901.0-1, rsyslog-relp 8.1901.0-1. And to understand better, it's about using the rsyslog RELP protocol over a GnuTLS connection (in newer...
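
    A hedged sketch of pulling those packages in and of a receiving-side config; port and certificate paths are placeholders:

      apt install rsyslog rsyslog-relp rsyslog-gnutls

      # /etc/rsyslog.d/relp-in.conf on the central log server
      module(load="imrelp")
      input(type="imrelp" port="2514"
            tls="on"
            tls.caCert="/etc/rsyslog.d/certs/ca.pem"
            tls.myCert="/etc/rsyslog.d/certs/server.pem"
            tls.myPrivKey="/etc/rsyslog.d/certs/server.key")
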
  7. Remote Syslog

    Actually there is an even better method to forward the logs: RELP-TLS. You get encryption but also reliability. Works perfectly with rsyslogd. Now I'm trying to understand where all the Proxmox-specific logs are, in order to verify that all those logs are correctly forwarded.
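
    A hedged sketch of the sending side on each node; target host and certificate paths are placeholders:

      # /etc/rsyslog.d/relp-out.conf -- forward everything over RELP with TLS
      module(load="omrelp")
      *.* action(type="omrelp" target="logs.example.net" port="2514"
                 tls="on"
                 tls.caCert="/etc/rsyslog.d/certs/ca.pem"
                 tls.myCert="/etc/rsyslog.d/certs/node.pem"
                 tls.myPrivKey="/etc/rsyslog.d/certs/node.key")
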
  8. [SOLVED] Proxmox and Old ProLiant Server

    Proxmox 6.2 is working "fine" on a DL380 G5 with at least 4G of RAM, as long as you don't expect any performance and it's just for fun and testing/learning. 2G was too low (at least when also using ZFS); the VM began to get OOM-killed (because I didn't set up swap).
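
    A hedged sketch (not from the post) of adding swap on a ZFS-only install, following the common ZFS-on-Linux swap-zvol recipe; the 4G size is a placeholder:

      zfs create -V 4G -b $(getconf PAGESIZE) \
          -o compression=zle -o logbias=throughput -o sync=always \
          -o primarycache=metadata -o secondarycache=none rpool/swap
      mkswap -f /dev/zvol/rpool/swap
      swapon /dev/zvol/rpool/swap
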
  9. About live migrations

    A related issue: LVM-thin storage(s), online migration with local disks. VM with virtio SCSI - 1st phase: thickening the LV; 2nd phase: actually copying the disk data. Old VM with plain virtio - single phase: just copying the disk data (so in total ~half the time). (I'm only talking about the disk...
  10. Proxmox 6.2 on ZFS on HPE Proliant servers

    Not necessarily mixing ancient and new in the same cluster, but putting some hardware to work ... As I looked through the G6 specs, only the HPE SC08Ge Host Bus Adapter is listed as a "compatible" HBA card ... and I know HPE is very picky in terms of compatibility, even when "it seems to work" :-P...
  11. Proxmox 6.2 on ZFS on HPE Proliant servers

    Topic: Proxmox 6.2 on ZFS (live migration included) on HPE ProLiant servers. (Sorry for such a long post :-P ) Ceph is not quite a solution, because it requires (or at least highly recommends) at least a 10-gig network. I have some older and some newer HP servers, like: - HP ProLiant DL380 G6 /...
  12. 4 drive config

    Like I've said, it's always a tradeoff between performance, data safety and more. Indeed, raidz2 / raid6 on 4 devices maybe seems stupid at a glance, but what if you lose 2 disks from the same mirror in a raid10 configuration? You lose all the data! (well, you have backup, right? so you...
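
    The two 4-drive layouts being weighed, as a hedged sketch with placeholder pool/device names:

      # striped mirrors ("raid10"): fastest, 50% usable capacity, but losing both disks of one mirror loses the pool
      zpool create tank mirror sda sdb mirror sdc sdd

      # raidz2: also 50% usable capacity on 4 drives, slower, but survives any two-disk failure
      zpool create tank raidz2 sda sdb sdc sdd
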
  13. 4 drive config

    Or maybe raidz2: 50% overhead, slower even than raidz1, but you can survive ANY 2-drive failure (better chances than raid1). Also, never mix HDD with SSD (as in the same raid); it's a no-no for performance (but indeed, a nice idea for "disaster recovery"). You can "mix" SSD with HDD by running VMs...
  14. As a percentage, how much work needs to be done in Debian?

    I don't want to be rude, but I consider HP G5 (even G6, except for some test lab) as pieces of junk these days. You talk about a G2 (that's 3 generations before G5), which frankly is so old that I don't think I've ever seen one (one or two DL G3 maybe, and I have some forgotten DL G4 still...
  15. VM Storage Location

    I would move everything from STRAIGHT /rpool (containers, templates, etc.) to /rpool/something. Also, the usage of the same pool for thin but also thick images is "interesting"; I cannot say if it's good or bad without further testing; but for you it will be hard to tell at a glance if a zvol is thick...
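
    One hedged way to tell at a glance: thick (non-sparse) zvols carry a refreservation, thin ones do not; the pool name is a placeholder:

      zfs get -r -t volume volsize,refreservation rpool
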
  16. Replication error out of space

    you can "lose" significant space if source and destination are different raid types, different ashift, different number of disks etc. (and sometimes significant is really significant) but I will bet (blindly :-P) on the snapshots at source; maybe the disk is 1TB, but exactly what is the used...
  17. Replication error out of space

    Another question: VM snapshots (or any other ZFS snapshots which would increase the used space)? You should post some zfs list output for the source and destination storages.
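
    The kind of output that helps here, as a hedged example with placeholder dataset names (run on both source and destination):

      zfs list -o space -r rpool/data          # per-dataset breakdown incl. space used by snapshots
      zfs list -t snapshot -r rpool/data       # the snapshots themselves
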
  18. Using a main proxmox web panel for all nodes

    AFAIK, in order to administer all nodes from a single server, all nodes must be in the same cluster, and (without asking for trouble) the nodes must be in the same network (or at least the cluster network must have the lowest possible latency); and you will have to pay extra support for one server...
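
    A hedged sketch of joining the nodes into one cluster so that any node's web UI shows all of them; hostnames/IPs are placeholders:

      pvecm create mycluster        # on the first node
      pvecm add 192.0.2.10          # on every other node, pointing at the first one
      pvecm status                  # verify quorum afterwards
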
  19. VM Storage Location

    Well, you have 3 nested ZFS mounts in your storage.cfg; although ZFS can handle that, for me it's just "asking for trouble". You should try to separate them (remember: divide et impera) and put the filesystems on the same level; it will be easier for you in the future.
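
    A hedged example of a flat layout, with placeholder dataset and storage names: sibling datasets directly under the pool, each backing exactly one storage entry in /etc/pve/storage.cfg:

      zfs create rpool/vmdata
      zfs create rpool/media

      # /etc/pve/storage.cfg
      zfspool: local-vmdata
              pool rpool/vmdata
              sparse 1
              content images,rootdir

      dir: local-media
              path /rpool/media
              content iso,vztmpl
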
  20. Replication error out of space

    Do you have some reservations on the destination? Although you already said that the destination has 3 TB free ...
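
    A quick hedged check for that on the destination pool (the pool name is a placeholder):

      zfs get -r reservation,refreservation rpool | grep -v none
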