Search results

  1. permissions removable datastore

    Hi, thanks for your reply. Unmounting is only part of the problem, although a hook on the notification is a nice idea. Mounting and running the sync would be necessary too. Fine-grained permissions for these operations would be very useful. wbr, tja...
  2. permissions removable datastore

    Hi @all, I'm trying to get a grip on removable datastores and on a workable procedure for a backup-operator user. In my book, a backup-operator user should be able to mount the removable drive, see stats like free space, and start a pre-defined sync job (local to removable, remote to removable), no...
  3. rpcbind

    Hi justinclift. We basically restricted RPC to localhost. We did this by adding /etc/systemd/system/rpcbind.socket.d/override.conf: [Socket] ListenStream= ListenDatagram= ListenStream=127.0.0.1:111 ListenDatagram=127.0.0.1:111 ListenStream=[::1]:111 ListenDatagram=[::1]:111 I can't remember...
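    Written out as the systemd drop-in file the snippet above describes (the empty assignments clear the stock listeners before the loopback-only ones are re-added):

      # /etc/systemd/system/rpcbind.socket.d/override.conf
      [Socket]
      ListenStream=
      ListenDatagram=
      ListenStream=127.0.0.1:111
      ListenDatagram=127.0.0.1:111
      ListenStream=[::1]:111
      ListenDatagram=[::1]:111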
  4. rpcbind

    In one of our (lazy, infrequent) security scans we stumbled upon a running rpcbind. It seems that it was installed around 8.0.4. Trying to remove it tells us that PVE depends on it: The following packages will be REMOVED: libpve-guest-common-perl* libpve-storage-perl* nfs-common* proxmox-ve*...
  5. spice troubles with newer versions of remote-viewer

    Thanks Tom. I would do that gladly, but I'm not sure the DRBD 8.3 configuration will work in 5.x?! wbr, tja...
  6. spice troubles with newer versions of remote-viewer

    Hi, I've got an old, small PVE 3.4 cluster with 2 nodes and DRBD 8.3 which has worked flawlessly for some years now. It runs a small office environment with a file server VM and a couple of Windows 7 VMs used with SPICE by the users, kind of a VDI. We did not upgrade because of the drbd-not-supported-anymore...
  7. parse error in '/etc/pve/datacenter.cfg' - 'migration': invalid format - format error#012migration.

    Thanks Tom, it was the additional space in the line between the comma "," and "network"; I double-checked now. Works: migration: type=insecure,network=10.101.103.0/24. Raises the error: migration: type=insecure, network=10.101.103.0/24. Happy New Year and all the best for 2018!
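    For clarity, the two /etc/pve/datacenter.cfg variants from the reply above as single lines (only the space after the comma differs):

      works:        migration: type=insecure,network=10.101.103.0/24
      raises error: migration: type=insecure, network=10.101.103.0/24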
  8. parse error in '/etc/pve/datacenter.cfg' - 'migration': invalid format - format error#012migration.

    Thanks Tom for the fast reply. proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve) pve-manager: 5.1-41 (running version: 5.1-41/0b958203) pve-kernel-4.13.4-1-pve: 4.13.4-26 pve-kernel-4.13.13-2-pve: 4.13.13-32 libpve-http-server-perl: 2.0-8 lvm2: 2.02.168-pve6 corosync: 2.4.2-pve3 libqb0: 1.0.1-1...
  9. parse error in '/etc/pve/datacenter.cfg' - 'migration': invalid format - format error#012migration.

    Hi all, I rebuilt our cluster with 5.1 and I'm happy so far. One thing that bugs me is the above error. https://pve.proxmox.com/pve-docs/datacenter.cfg.5.html states the format. My datacenter.cfg: keyboard: de migration: insecure, network=10.101.103.0/24. I tried with type=insecure, too ... still I get...
  10. corosync: using 'ring1_addr' parameter needs a configured ring 1 interface!

    Hi Alwin, thanks for looking into this. First I created the cluster without bindnet and ring addresses, but before adding the first node I changed /etc/corosync/corosync.conf according to https://pve.proxmox.com/wiki/Cluster_Manager, section "RRP On Existing Clusters". But as said above it's a...
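    For reference, a rough sketch of what the RRP layout from that wiki section looks like in /etc/corosync/corosync.conf on corosync 2.x; the cluster name, node name and addresses below are made-up placeholders, not values from this thread:

      totem {
        version: 2
        cluster_name: testcluster
        rrp_mode: passive
        interface {
          ringnumber: 0
          bindnetaddr: 192.168.1.0
        }
        interface {
          ringnumber: 1
          bindnetaddr: 10.10.10.0
        }
      }
      nodelist {
        node {
          name: node1
          nodeid: 1
          quorum_votes: 1
          ring0_addr: 192.168.1.11
          ring1_addr: 10.10.10.11
        }
      }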
  11. corosync: using 'ring1_addr' parameter needs a configured ring 1 interface!

    Hi! I'm building a new test cluster with 5.1 to try all the new features. I'm getting "corosync: using 'ring1_addr' parameter needs a configured ring 1 interface!" while adding the first node to the cluster, regardless of using hostnames or IPs. What I did: installed 3 basically identical...
  12. proxmox 4 storage options

    Hi @all, I'm using Proxmox 3 for a very small company with a dedicated 2-node iSCSI storage cluster and a couple of storage-less hypervisor nodes. I'm not very happy with the current IO performance; Proxmox 4 is out, so I'm considering my future options. I've set up a small 3-node lab cluster with...