Recent content by alexc

  1.

    Upload to PMG email list from Carbonio Mail Server to check against

    I'm planning to set up two PMGs in front of my Carbonio Mail Server (the one formerly known as Zimbra). My goal is to make the PMGs fully autonomous, so that even if the mail server goes down, the PMGs can continue to receive emails and forward them to Carbonio once it's back online. To achieve...
  2.

    Proxmox metrics to InfluxDB Cloud

    Thank you! That was what I tried, but it seems my mistake was a space in the name. Now I need to connect Grafana to the cloud DB; still working on it.
  3.

    Proxmox metrics to InfluxDB Cloud

    I found the InfluxDB Cloud service, and they even have a free tier (pay-as-you-go, but retention up to 30 days seems to be free), and I really wonder if someone has tried this. I tried to connect my lab server and had no success, but I'm not a pro with InfluxDB. The idea I have is that even with lab...
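For context, Proxmox VE can push metrics to an InfluxDB v2-compatible endpoint over HTTPS using an organization, bucket, and API token, which is how InfluxDB Cloud authenticates. A minimal sketch of what the metric server entry might look like in /etc/pve/status.cfg; the server name, organization, bucket, and token are placeholders, the key names should be double-checked against the PVE documentation, and note that an entry name containing spaces will not parse:

```
influxdb: influxdb-cloud
        server <your-cluster>.cloud2.influxdata.com
        port 443
        influxdbproto https
        organization my-org
        bucket proxmox
        token <api-token>
```

The same settings can be entered in the GUI under Datacenter → Metric Server, which avoids hand-editing the file.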
  4.

    Policy-based routing in LXC Container

    I have an LXC container on a PVE host, and initially it was assigned one WAN IP and one LAN IP (the LAN IP is on an internal subnet used to reach other VMs on the PVE host). It was configured via the web UI, and this is how it looked in the config: net0...
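For reference, policy-based routing in a dual-homed container typically comes down to a second routing table plus a source rule, so that replies from the WAN address leave via the WAN gateway. A minimal sketch with made-up addresses (assume 203.0.113.10/24 on eth0 with gateway 203.0.113.1, and the LAN on eth1):

```shell
# Hypothetical addresses; adjust to the container's actual config.
# Route traffic sourced from the WAN IP out through the WAN gateway,
# using a dedicated routing table (100):
ip route add default via 203.0.113.1 dev eth0 table 100
ip rule add from 203.0.113.10/32 table 100
```

In a Debian-based container these lines can be persisted as post-up hooks in /etc/network/interfaces so they survive a restart.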
  5.

    New system disks on host instead of old ones

    If I may trouble you further: I added two new drives to rpool, so the mirror pool rpool now lists 4 drives. I rebooted and found it works well. root@server:~# zpool status rpool pool: rpool state: ONLINE scan: resilvered 1.45T in 05:04:53 with 0 errors on Tue Nov 26 10:22:57...
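With both new drives resilvered into the now 4-way mirror, the usual next steps are to detach the old drives and let the pool grow into the larger partitions. A hedged sketch, where the /dev/sdX3 names are placeholders for the actual ZFS partitions:

```shell
# Drop an old 2TB leg from the mirror (repeat for the second old disk):
zpool detach rpool /dev/sdc3
# Allow vdevs to grow, then trigger expansion on each new partition:
zpool set autoexpand=on rpool
zpool online -e rpool /dev/sda3
```

Only detach an old disk once the new disks are confirmed bootable, so the system can still start if something goes wrong.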
  6.

    New system disks on host instead of old ones

    You're right! root@server:~# lsblk -o+PARTTYPENAME -o+UUID NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT UUID sda 8:0 0 9.1T 0 disk ├─sda1 8:1 0 1007K 0 part ├─sda2 8:2 0 512M 0 part 7ACC-88E0 └─sda3 8:3 0 9.1T 0 part...
  7.

    New system disks on host instead of old ones

    This was my first intention, but I'm afraid two rpools in one system is not a good idea. And yes, I have nowhere to back up to, which is why I want to do this "in place" (almost). I just added two new disks to rpool, and the resilver started. The server... well, became very slow, so it seems data...
  8.

    New system disks on host instead of old ones

    root@server:~# proxmox-boot-tool format /dev/sda2 UUID="7ACC-88E0" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sda" MOUNTPOINT="" E: '/dev/sda2' contains a filesystem ('vfat') - exiting (use --force to override) root@server:~# proxmox-boot-tool init...
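The error is expected here: copying the partition table from the old disk carries over the old ESP's vfat signature, so proxmox-boot-tool refuses to format what looks like a live filesystem. Since sda2 is a freshly copied partition that should be wiped, the override the tool itself suggests applies:

```shell
# Reformat the new ESP despite the inherited vfat signature,
# then register it with the boot tool:
proxmox-boot-tool format /dev/sda2 --force
proxmox-boot-tool init /dev/sda2
```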
  9.

    New system disks on host instead of old ones

    Here is the output: root@server:~# lsblk -o+PARTTYPENAME NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT PARTTYPENAME sda 8:0 0 9.1T 0 disk ├─sda1 8:1 0 1007K 0 part BIOS boot ├─sda2 8:2 0 512M 0 part EFI System └─sda3 8:3...
  10.

    New system disks on host instead of old ones

    Sorry for my dumb questions. I copied the partition boundaries from the old disks to the new ones, so sda1 and sda2 (sda is one of the new disks) are the same size as sdc1 and sdc2 (sdc is one of the old disks). Now I need to add sda3 instead of sdc3; I can do zpool replace rpool ... ... or zpool attach rpool...
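Both commands can get there, but they behave differently: zpool replace swaps the old device for the new one and removes the old one automatically when the resilver finishes, while zpool attach adds the new partition as an extra mirror leg (a 2-way mirror becomes 3-way) and the old leg must be detached manually later. A sketch using the device names from the post (stable /dev/disk/by-id/ paths are generally preferable to /dev/sdX):

```shell
# Option 1: one-shot swap; sdc3 is dropped automatically after resilver:
zpool replace rpool /dev/sdc3 /dev/sda3

# Option 2: widen the mirror now, detach the old leg yourself later:
zpool attach rpool /dev/sdc3 /dev/sda3
```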
  11.

    New system disks on host instead of old ones

    This is what I need to expand!!!
  12.

    New system disks on host instead of old ones

    Yes, making it bootable is the question. Actually, I now have 3 partitions on each disk (the 3rd partition is in the pool, while partitions 1 and 2 are not), so I can copy the partition scheme for 1 and 2 to the new disks, and then use https://pve.proxmox.com/wiki/Host_Bootloader , right?
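Yes, that matches the procedure on the Host Bootloader wiki page for replacing a bootable device. A hedged sketch, assuming sdc is a healthy old disk and sda is a new one; the resize step is an assumption added to realize the plan of a larger last partition:

```shell
# Copy the partition table from the old disk to the new one,
# then randomize GUIDs so the two tables stay distinct:
sgdisk /dev/sdc -R /dev/sda
sgdisk -G /dev/sda
# The copied table keeps the old 2TB size for partition 3; recreate it
# to span the rest of the disk (bf01 = Solaris/ZFS partition type):
sgdisk -d 3 -n 3:0:0 -t 3:bf01 /dev/sda
# Make the new ESP bootable (add --force if format complains about a
# vfat signature carried over by the table copy):
proxmox-boot-tool format /dev/sda2
proxmox-boot-tool init /dev/sda2
```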
  13.

    New system disks on host instead of old ones

    I have a PVE host that has been running for 5+ years. It was set up on a couple of 2TB HDDs in a ZFS mirror, and now it's time to replace these disks with newer ones. I bought 10TB HDDs and plan the following: - partition the new disks with the layout of the old 2TB disks, but with the last partition sized to accommodate all 10TB...
  14.

    Auto migrate on cluster node high load?

    Thank you for pointing this out! It looks like there's a lot of work ahead. When can we expect this feature (auto migrate)? Dynamically loaded VMs would be much more useful if we could arrange and reorganize them in a meaningful way, not just once at start but over time. P.S. Say, running developer VMs is a...