Search results

  1. P

    One common additional disk for many vm

    I changed vmbr1 ("auto vmbr1 / iface vmbr1 inet static / address 192.168.188.13/24 / bridge-ports enp0s31f6") from address 192.168.188.13/24 to 192.168.189.13/24 and then rebooted PVE. This did not improve the issue (and I cannot access PVE on 192.168.189.13/24 from 192.168.188.xx/24, just to test it).
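    For reference, a minimal /etc/network/interfaces stanza matching the setup described above (the NIC name enp0s31f6 is taken from the post; the bridge-stp/bridge-fd lines are assumptions, matching the PVE defaults):

    ```
    auto vmbr1
    iface vmbr1 inet static
            address 192.168.189.13/24
            bridge-ports enp0s31f6
            bridge-stp off
            bridge-fd 0
    ```

    Note that a client in 192.168.188.xx/24 can only reach 192.168.189.13 through a router, since the two /24 subnets do not overlap, so the failed access test from 192.168.188.xx is expected.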
  2. P

    One common additional disk for many vm

    VM/UCS runs on vmbr0, https://192.168.188.12/univention nextcloud runs as docker under UCS on https://192.168.188.12/nextcloud ======================================= root@s1:/etc/network# more interfaces # network interface settings; autogenerated # Please do NOT modify this file directly...
  3. P

    One common additional disk for many vm

    This is what I tried, but from the (client-)VMs I cannot see the smb-shares; only from external clients do I see the smb-shares.
  4. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    I "solved" the issue, found a workaround, which is not too funny: As a test, I cloned the well-running VM/OPNsense, added a new network-device and got: ======= bridge 'vmbr5' does not exist kvm: network script /var/lib/qemu-server/pve-bridge failed with status 512 TASK ERROR: start failed: QEMU...
  5. P

    UCS

    On PVE 6.6 on a VM I installed UCS 4.4 / Nextcloud 20.0.10-0 (https://www.univention.de/) and configured a share (smb & nfs by samba). In a second VM/client Linux Mint 20.1c, I cannot see the nfs- or smb-shares. Server and Client can ping each other. I switched off all firewalls on all...
  6. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    Thank You. I will stay with VirtIO. It is said to be better and I do not need to change the eth interface config of PVE.
  7. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    I agree if You use the NIC for an exposed uplink to the internet. But for a single uplink, I would prefer to use a router on dedicated hardware, like OPNsense on a Zotac CI329 or comparable, so as not to lose the connection when rebooting, crashing or misconfiguring the host.
  8. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    More than one virtual network-device can be assigned to a Linux-Bridge, hooked to a Port/Slave (NIC). This way many VMs can use a single NIC in parallel. And I can reconfigure this quickly and flexibly, especially if You additionally use OPNsense for tasks not offered by PVE. For example, I...
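    The sharing described above can be sketched via the net lines in each VM's config file; the VMIDs and MAC addresses below are made up for illustration:

    ```
    # /etc/pve/qemu-server/101.conf  (hypothetical VMID)
    net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0

    # /etc/pve/qemu-server/102.conf  (hypothetical VMID)
    net0: virtio=DE:AD:BE:EF:00:02,bridge=vmbr0
    ```

    Both VMs attach to the same bridge vmbr0, which in turn has the single physical NIC as its bridge-port.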
  9. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    I configured Intel-E1000 for all network-devices of the VM/OPNsense. I changed all four to VirtIO, still running. So I do not have any need to test it with another kernel. Thanx for Your hints.
  10. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    NIC on-board: Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31) description: Ethernet interface product: Ethernet Connection (2) I219-LM vendor: Intel Corporation physical id: 1f.6 bus info: pci@0000:00:1f.6...
  11. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    Now I cloned the VM/OPNsense 21.1, added one network-device ... and it starts without error. Very strange. I have no idea why adding a network-device sometimes leads to errors/no-start and in other cases runs smoothly.
  12. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    8 x Intel(R) Xeon(R) CPU E3-1275 v6 @ 3.80GHz (1 Socket), 64GB RAM Linux 5.4.114-1-pve #1 SMP PVE 5.4.114-1 pve-manager/6.4-6/be2fa32c
  13. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    ERRORS (from Tasks list, vmbr3 does exist) VM 127 not running TASK ERROR: Failed to run vncproxy. AND: bridge 'vmbr3' does not exist kvm: network script /var/lib/qemu-server/pve-bridge failed with status 512 TASK ERROR: start failed: QEMU exited with code 1
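    For context: "bridge 'vmbrX' does not exist" from the pve-bridge script typically means the bridge named in the VM's netX line is not up on the host at start time. A guest-only bridge would be defined roughly like this (a sketch; the bridge-ports/stp/fd values are assumptions matching PVE defaults):

    ```
    auto vmbr3
    iface vmbr3 inet manual
            bridge-ports none
            bridge-stp off
            bridge-fd 0
    ```

    After editing /etc/network/interfaces, the change takes effect with `ifreload -a` (ifupdown2, the default on current PVE) or a reboot.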
  14. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    I used "Linux Bridge" = "vmbr". Many other changes to the VM/os-configuration lead to a non-boot as well.
  15. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    I found a workaround: (THIS IS NOT A GOOD WORKAROUND)
    1. clone VM with os (opnsense 21.1)
    2. in this clone, reset os to factory default
    3. clone VM/os a second time
    4. under PVE "Hardware" add network device to second VM/os-clone
    5. boot second VM/os clone
    6. I did not test to import old...
  16. P

    [SOLVED] OPNsense 21.1 on PVE 6.4

    I installed OPNsense 21.1 into a VM on PVE 6.4. When I add a second network-device, then opnsense does not boot anymore.
  17. P

    powertop

    Do You recommend using powertop and letting it optimize power consumption of PVE 6.4?
  18. P

    ZFS storage configuration

    a newbie question:
    > It is recommended to create an extra ZFS file system to store your VM images:
    > # zfs create tank/vmdata
    > To enable compression on that newly allocated file system:
    > # zfs set compression=on tank/vmdata
    > You can get a list of available ZFS filesystems with:
    > # pvesm...
  19. P

    ZFS cache with mirrored SSDs?

    My small beginners' roundup about setting up a ZFS raidz1 (RAID-5-like) with 3 HDDs and 2 SSDs: always use a UPS, a high-quality power supply, HDDs rated for continuous usage with CMR, and SSDs with power-loss protection. # zpool create -f -o ashift=12 <pool> raidz1 <HDD1> <HDD2> <HDD3> log mirror...
  20. P

    ZFS cache with mirrored SSDs?

    Does it make sense to use mirrored SSDs for the ZFS cache to avoid data loss (e.g. in case of an SSD hardware failure)? Does this create a zfs-pool with mirrored SSDs? zpool create -f -o ashift=12 <pool> mirror <hdd1> <hdd2> cache <ssd1> <ssd2>
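    A sketch of the two layouts in question (pool and device names are placeholders). ZFS does not mirror cache (L2ARC) devices: listing two devices after `cache` adds them independently, and losing one is harmless anyway, since L2ARC only holds copies of data that still lives in the pool. Mirroring is instead useful for the log (SLOG) device, where a failure combined with a power loss could cost recent synchronous writes:

    ```
    # two independent L2ARC cache devices -- NOT a mirror
    zpool create -f -o ashift=12 tank mirror sda sdb cache sdc sdd

    # a mirrored SLOG, where redundancy actually matters
    zpool create -f -o ashift=12 tank mirror sda sdb log mirror sdc sdd cache sde
    ```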