Search results

  1. VictorSTS

    [TUTORIAL] what is the best way to encrypt the disks

    What's the point of having the decryption key physically in the same place as the server itself? IMHO that's as secure as an unencrypted disk. What am I missing here?
  2. VictorSTS

    Adding a New Server to Existing Proxmox Cluster - Network Configuration and VM Communication

    You need vRack for such a setup to work properly and seamlessly. Also a vRack public IP range so you can "move" the IP with VMs among the servers in the cluster (i.e. when you move that pfSense VM to another host in the cluster). You really need 3 servers or at least 2 servers + QDevice to keep...
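The 2-servers-plus-QDevice option mentioned above can be sketched as follows, assuming a third small machine reachable at a hypothetical address 192.0.2.50 acts as the external vote arbiter:

```shell
# On the external (non-cluster) host: run the QDevice network daemon.
apt install corosync-qnetd

# On every PVE cluster node: install the QDevice client side.
apt install corosync-qdevice

# On one cluster node: register the external arbiter (IP is hypothetical).
pvecm qdevice setup 192.0.2.50
```

This gives the 2-node cluster a third vote, so losing one server still leaves quorum.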
  3. VictorSTS

    Taking advantage of Special vdev after data has been written

    A special device does not help with verify, as verify reads the whole data from your HDD to checksum each chunk again and make sure that it still has the same checksum it had when originally stored. I mean, it barely puts any load on the special vdev. A special vdev does help with backup listing...
  4. VictorSTS

    Proxmox cluster is in a state of development ?

    I would bet that something isn't properly installed/updated and you should solve it before continuing deployment, as you may find similar behavior with other actions too.
  5. VictorSTS

    Backup encryption

    The traditional backup (vzdump to a file) is just a bit copy of whatever is in the source. Three options come to my mind atm: encrypt the VM / CT if possible; use LUKS or ZFS encryption on the storage (disk, NFS, etc.) used as the destination of the backups. You will have to somehow type the...
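The ZFS option above can be sketched with native encryption on the destination dataset; the pool and dataset names here are hypothetical:

```shell
# Create an encrypted dataset to hold vzdump files (names are examples only).
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/vzdump

# After every reboot the passphrase must be supplied before the dataset is usable,
# which is the "you will have to somehow type the key" caveat from the post:
zfs load-key rpool/vzdump
zfs mount rpool/vzdump
```

The backups themselves stay plain vzdump files; only the storage underneath is encrypted at rest.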
  6. VictorSTS

    Advice for disk layout and RAID type

    Running PVE on an SD card or USB drive is neither supported nor recommended due to the heavy use PVE/Debian makes of the OS disk. Also, not having a redundant OS disk makes no sense to me for anything except testing machines. If the main storage is also SSD, having a ZIL on NVMe will add nothing but...
  7. VictorSTS

    Proxmox cluster is in a state of development ?

    I've installed literally hundreds of clusters with zero issues related to clustering. Yes, there have been bugs with corosync, qdevice, etc., but they got sorted out eventually. There aren't too many questions about clustering issues in the forum, so I would bet that clustering works correctly...
  8. VictorSTS

    [SOLVED] host type backup: missing catalog.pcat1

    Has this patch been applied yet? Doesn't seem to be present on proxmox-backup-client v3.2.7. Thanks!
  9. VictorSTS

    [SOLVED] Failed to start pve-cluster.service - /etc/pve/local/pveproxy-ssl.key: failed to load local private key

    /etc/pve is populated by the pve-cluster service based on the contents of the sqlite database at /var/lib/pve-cluster. If pve-cluster isn't running, /etc/pve should be empty... If it has contents, someone/something copied data there while the service was stopped and now it can't start. Umm...
  10. VictorSTS

    [SOLVED] Failed to start pve-cluster.service - /etc/pve/local/pveproxy-ssl.key: failed to load local private key

    Jan 21 21:52:06 prx500 pmxcfs[44947]: fuse: mountpoint is not empty
    /etc/pve must be empty for pve-cluster to be able to mount the pmxcfs filesystem
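The recovery implied by that log line can be sketched like this; the backup directory name is hypothetical, and anything moved aside should be reviewed before deleting:

```shell
# pmxcfs refuses to mount over a non-empty directory, so with the service
# stopped, move any stray files out of /etc/pve before starting it again.
systemctl stop pve-cluster
mkdir -p /root/etc-pve-stray
mv /etc/pve/* /root/etc-pve-stray/ 2>/dev/null || true
systemctl start pve-cluster
```

Once pve-cluster is running, /etc/pve is repopulated from the database at /var/lib/pve-cluster.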
  11. VictorSTS

    [SOLVED] Failed to start pve-cluster.service - /etc/pve/local/pveproxy-ssl.key: failed to load local private key

    Is this host part of a cluster? Looks like the cert and the privkey don't match for some reason, or they don't match the host name. Probably caused by that ACME plugin. I would try to move pveproxy-ssl.key and pveproxy-ssl.pem out of /etc/pve/local and try to restart pve-cluster and pveproxy...
  12. VictorSTS

    [SOLVED] Failed to start pve-cluster.service - /etc/pve/local/pveproxy-ssl.key: failed to load local private key

    How did you change the host name? You must change it both in /etc/hostname and /etc/hosts, pointing to the proper IP in the latter file.
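A minimal sketch of that two-file change, with purely hypothetical names and address:

```shell
# 1. New short name in /etc/hostname.
echo newname > /etc/hostname

# 2. /etc/hosts must resolve the new name to the host's real IP, e.g. a line like:
#    192.0.2.10 newname.example.com newname
sed -i 's/\boldname\b/newname/g' /etc/hosts
```

If the two files disagree, or the name resolves to the wrong IP, pve-cluster and pveproxy can fail exactly as described in this thread.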
  13. VictorSTS

    SOLVED: ZVOL with LVM (for targetcli) gets locked after reboot

    UNTESTED, use at your own risk: I think that an LVM filter should work here... In /etc/lvm/lvm.conf, replace:

    devices {
        # added by pve-manager to avoid scanning ZFS zvols and Ceph rbds
        global_filter=["r|/dev/zd.*|","r|/dev/rbd.*|"]
    }

    with something like:

    devices {
        # added by...
  14. VictorSTS

    I can´t get space if i delete backups on PBS

    The Garbage Collector runs in two steps or phases. GC phase one goes through all snapshots, reads which chunks are used by each and updates each chunk's mtime to now(). GC phase two removes all chunks whose mtime is older than 24 hours 5 minutes. So even if you run that command to manually force the...
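The phase-two cutoff can be illustrated with plain files and find; 1445 minutes is the 24h05m window from the post, and the paths are throwaway temp files, not a real PBS datastore:

```shell
tmp=$(mktemp -d)
touch -d '25 hours ago' "$tmp/old_chunk"   # mtime NOT refreshed by phase 1: unreferenced
touch "$tmp/fresh_chunk"                   # mtime refreshed by phase 1: still referenced
# Phase 2 analogue: drop chunks whose mtime is older than 24h05m (1445 minutes).
find "$tmp" -type f -mmin +1445 -delete
ls "$tmp"
```

Only fresh_chunk survives, which is why space freed by deleting backups only shows up after GC has run and the grace window has passed.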
  15. VictorSTS

    Shared storage over Fiber channel

    If I understand this well, you are using NFS to access the storage, not Fibre Channel, even if your network uses fiber instead of copper? Or can a Synology share some volumes using FC?
  16. VictorSTS

    Shared storage over Fiber channel

    Just curious: how slow is your Ceph, and with what infrastructure (servers, OSDs, network)? I would like to compare your numbers with some that I've got from a few clusters. Do your apps use a Firebird database by chance? I've got a few cases with that workload that I'm still trying to get working properly...
  17. VictorSTS

    what is the point of CEPH, if there is no HA?

    To add to this, just think that Ceph adds data protection (i.e. reduces the chances of data loss), while HA adds VM uptime protection (i.e. reduces the downtime of the VMs on some events like host failure). They are solutions to different problems and together increase the availability of the...
  18. VictorSTS

    Installation of PBS via FC

    Don't do that; it doesn't make sense in that kind of deployment. Install PBS alongside PVE on that third node, or even install PBS directly without PVE on that third server [1]. Then create a filesystem on that LUN, mount it, and add a PBS datastore on it. [1]...
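The filesystem-plus-datastore step can be sketched as below; the multipath device name, mount point, and datastore name are all hypothetical:

```shell
# Put a filesystem on the FC LUN (device name is an example).
mkfs.xfs /dev/mapper/fc-lun0

# Mount it somewhere persistent (add to /etc/fstab as well in practice).
mkdir -p /mnt/datastore/fc
mount /dev/mapper/fc-lun0 /mnt/datastore/fc

# Register it as a PBS datastore.
proxmox-backup-manager datastore create fc-store /mnt/datastore/fc
```

PBS then manages the chunk store on that path; the LUN itself stays attached to a single PBS host, not shared.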
  19. VictorSTS

    Advice for disk layout and RAID type

    Looks OK to me, mainly because you don't use RAIDz as VM disk storage. You should be able to mix disks from the HBA and the motherboard in the same zpool, but to give exact recommendations on performance, etc., one would need to know the exact NUMA configuration and PCIe lanes of the server.