Search results

  1.

    Reboot = FullBackup?

    And if you reboot the VM via PVE because you want to change some parameter that cannot be changed hot...
  2.

    Reboot = FullBackup?

    Live migration requires the same CPU type, or the default kvm64, which is slower due to missing CPU flags... And if PVE reboots unplanned... And btw, the OP means a VM reboot, not a PVE reboot. Just think about a kernel update & reboot...
  3.

    Live migration over which communication

    The PVE frontend. Or you can define a migration network: https://pve.proxmox.com/wiki/Manual:_datacenter.cfg
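    Defining a migration network comes down to one line in /etc/pve/datacenter.cfg; the subnet below is only an example, adjust it to your own migration VLAN:

    ```
    # /etc/pve/datacenter.cfg - subnet is a placeholder
    migration: secure,network=10.20.30.0/24
    ```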
  4.

    Multiple Management IPs and VLAN segregation to VMs

    Why two management IPv4 addresses? What are you trying to solve? NIC3 - read the documentation about bridges.
  5.

    [SOLVED] Ceph safe nearfull ratio

    Any source about overriding the one-copy-per-host rule by default in a 3-node cluster when one node is down for 600 s?
  6.

    One LVM for multiple nodes

    https://pve.proxmox.com/wiki/Storage
  7.

    [SOLVED] CEPH separate public/cluster network

    Two ways (assuming no workload, i.e. the Ceph pool is empty): 1] monitor IPs stay - manually edit the Ceph config file, set the cluster network, reboot. 2] monitor IPs change - remove all monitors, edit the Ceph config file, recreate the monitors. A public (aka internet) network will never work with 9k...
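    For way 1], the split comes down to two lines in the Ceph config; a sketch with made-up subnets:

    ```
    # /etc/pve/ceph.conf - subnets are examples only
    [global]
        public_network  = 10.10.10.0/24  # monitors = client access
        cluster_network = 10.10.20.0/24  # OSD replication traffic
    ```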
  8.

    Ceph Network public and cluster some questions

    2] What will you do when you connect another PVE host? You will need to rework the corosync network. Think twice about it. 1G switches are cheap. 3] What if your connection/switch fails during a backup/VM migration/etc.? Use LACP and don't even think about a single connection.
  9.

    Ceph Network public and cluster some questions

    I have a little problem deciphering "connection for clients to the backend". What is the client, what is the backend? If you have the PVE frontend on the same subnet as all the users in the company - from my point of view that is a security and performance NO. Any broadcast can overwhelm this subnet. Create a management...
  10.

    Ceph Network public and cluster some questions

    If you have slots for 1GbE, use them for corosync. Now split Ceph into 2 networks: C1] Ceph cluster (OSDs etc.) - 100GbE. C2] Ceph public (monitors = client access) - 10GbE minimum. Now the Proxmox side: P1] PVE cluster (corosync) - 1GbE primary, 1GbE secondary (or use the Ceph backend or the PVE frontend). P2]...
  11.

    Best Option / Raid-1 of SSD or ZFS raidZ1

    Depends... 1] Experience with ZFS? 2] Performance requirements? 3] HW available? 4] etc...
  12.

    Poor write performance on ceph backed virtual disks.

    How many 10Gb links does every node have? Where are the fio/iperf/ceph bench tests? What is your Ceph setup - 3/2 etc.?
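    The kind of numbers being asked for here are usually gathered like this; a hedged sketch - the peer IP, pool name and file sizes are placeholders:

    ```
    iperf3 -s                          # on one node
    iperf3 -c 10.10.20.11              # from another node: raw link throughput
    fio --name=seqwrite --filename=/tmp/fio.test --size=1G \
        --rw=write --bs=4M --direct=1 --ioengine=libaio --iodepth=32
    rados bench -p testpool 30 write   # Ceph-level write benchmark
    ```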
  13.

    no network connection on 2nd onboard lan X11DPI-N

    After the update from PVE 6.2 to pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve) I had the same problem - the 2nd link was down on the PVE host. The links are managed by Open vSwitch in LACP 802.3ad mode. The switch side showed both links up and LACPed. Feb 15 10:38:00 pve-backup-1 kernel: [...
  14.

    Configurating cluster networks with Ceph?

    My cheap suggestion: 1] LACP 802.3ad 2x 1Gbps - backup corosync VLAN + PVE VLAN (= Ceph monitors too) + VM access VLANs. 2] LACP 802.3ad 2x 10Gbps - primary corosync VLAN + Ceph cluster (OSDs). My standard cheap suggestion: use 2x2 10Gbps ports for 1] + 2] and a spare 1Gbps for primary corosync. And...
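    An LACP bond like 1] or 2] is defined on the PVE side in /etc/network/interfaces; a sketch, where the NIC names, addresses and VLAN-awareness are assumptions to adapt:

    ```
    # /etc/network/interfaces - interface names and subnets are examples
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
    ```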
  15.

    [SOLVED] PMG 6.2.5 sa-update doesn't run automatically

    I will keep half an eye on it, but currently I can confirm that the automatic update works.
  16.

    [SOLVED] PMG 6.2.5 sa-update doesn't run automatically

    Because this task was manually initiated from the PMG web UI. I can't find any log showing that sa-update ran successfully via pmg-daily.
  17.

    [SOLVED] PMG 6.2.5 sa-update doesn't run automatically

    Hi, based on the timers: Thu 2020-08-27 04:21:58 CEST 17h left Wed 2020-08-26 03:57:57 CEST 6h ago pmg-daily.timer pmg-daily.service - I would expect sa-update to run automatically. But: root@pmg-01:/var/log/pve/tasks/C# cat...
  18.

    How to add redundancy to the system after the installation ?

    I don't use Proxmox with EFI, let alone mixed with mdraid. Try another way - boot from a repair CD, mount your "new raided system", regenerate GRUB... Can't help more.
  19.

    How to add redundancy to the system after the installation ?

    Because you have some spare space on the VG pve... 1] Prepare ssd2 as a RAID1 disk (HW RAID, mdraid, ZFS, etc.). 2] Create a VG "not pve" on that RAID1 (rename it later if you want pve as the VG name). 3] Copy the logical volumes from ssd1 to ssd2 - for example, create a new LV on ssd2/vg_name and use dd to copy...
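    Step 3] can be sketched like this; the VG/LV names and the size are placeholders and must match your actual layout (and the source LV should not be in use while copying):

    ```
    # names and size are examples - match them to your own VGs/LVs
    lvcreate -L 32G -n vm-100-disk-0 vg_ssd2   # same size as the source LV
    dd if=/dev/pve/vm-100-disk-0 of=/dev/vg_ssd2/vm-100-disk-0 bs=4M status=progress
    ```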
  20.

    Backup error - user mismatch

    1] Back up a VM to PBS as some_user to repository A (ACL assigned). 2] Add other_user to repository A (ACL assigned). 3] Change the backup user in PVE. 4] Backup error: other_user@pbs != some_user@pbs
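    The user PVE authenticates with is part of the storage definition; a sketch of the relevant /etc/pve/storage.cfg entry after step 3], where the storage ID, hostname and users are made up:

    ```
    # /etc/pve/storage.cfg - hypothetical PBS storage entry
    pbs: backup-a
        server pbs.example.com
        datastore A
        username other_user@pbs
        fingerprint <pbs-certificate-fingerprint>
    ```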
