Search results

  1. MS-SQL as LXC on Ubuntu 20.04 vs Win10-VM in Ceph Cluster

    Dear all, does anyone have experience with whether MS SQL performs better in an LXC on Ubuntu or in a Win10 VM? Background info: Hardware: Ceph cluster, 3 nodes, NVMe 9300 MAX, RAM as needed. The setup is for an MS Access ERP system connected over ODBC. We have 80% write and 20% read...
  2. Security comparison/setting: VM or bare metal

    Thank you very much for your response. I have thought further about this topic. At the moment a bare-metal machine is running, but the hardware is out of date. My Proxmox cluster is brand new, with more power than I need, so I am starting to think about virtualizing my firewall. Maybe I will do a hybrid solution as "half HA"...
  3. Guest default gw after migrating

    I think, but am not sure, that it would be best to place a reverse proxy between the internet and the gateways. This would route the VMs. But this scenario is new for me and I have no experience with it, so I cannot help here, sorry.
  4. Guest default gw after migrating

    If I understand you right, you have your hosts behind a router/DHCP server, and you are using 3 nodes which each have their own IP in the network behind the router. You want to use the same NIC for guest traffic in the same network the hosts use to connect to the router? If yes, you create on all 3 nodes a Linux bridge with...
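A bridge of that kind can be sketched in /etc/network/interfaces roughly like this (the NIC name, addresses, and gateway are placeholders; adjust them per node):

```shell
# /etc/network/interfaces (sketch) — same on all 3 nodes, with each node's own IP
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

Guests attached to vmbr0 then sit in the same L2 network as the hosts and get their addresses from the router's DHCP.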
  5. Security comparison/setting: VM or bare metal

    Hi all, I want to (maybe) set up a firewall/gateway on Proxmox. I need to know whether it is a good idea to virtualize this kind of machine, or whether it must be installed on bare metal for security reasons. I have read around the internet, but I did not find a clear answer on whether it is possible to set it...
  6. Delete Ceph pool "device_health_metrics"

    Dear all, I have deleted the default pool "device_health_metrics". Don't ask why o_O! My question is: is it repairable in an already running cluster, or must the cluster be set up completely new? If it is possible to fix while it is running, is it enough to create the pool manually with pg_num 1? Or is there...
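The repair being asked about can be sketched with standard Ceph commands (the pool name comes from the thread; the application tag is an assumption based on the mgr devicehealth module, and recreating the pool this way may still require the mgr to repopulate its data):

```shell
# Recreate the health-metrics pool with a single placement group
ceph osd pool create device_health_metrics 1
# Tag it so the mgr devicehealth module can use it again
ceph osd pool application enable device_health_metrics mgr_devicehealth
```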
  7. Windows VM locks up and unable to shutdown/restart

    Dear, I had a nearly identical issue. I was stuck in a loop while installing Win10 because of a security/kernel issue. Stopping was not possible because of a lock issue. I set this, and after that I could stop the machine without shutting down the node. xxx is your VM ID. Hope this helps. Regards
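The snippet elides the actual command, but a stuck lock on a Proxmox VM is normally cleared with the `qm` CLI (this is an assumption about what was meant; `100` is a placeholder VM ID):

```shell
# Remove the stale lock on the VM, then force-stop it (run on the hosting node)
qm unlock 100
qm stop 100
```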
  8. [SOLVED] Problem with Mellanox MCX556A-ECAT on PVE 6.3

    UPDATE: I did an update from a Microsoft Windows PC to the newest HP firmware. In the end, the problem was that the port mode was set to IB. I changed it to ETH. Since then the cards have been visible in PVE as a usual network card.
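The same IB-to-ETH port-mode change can also be made on Linux with Mellanox's mlxconfig tool (the device path is a placeholder for a ConnectX-5 card; 1 = InfiniBand, 2 = Ethernet; a reboot is needed afterwards):

```shell
# Query the current port configuration (device path is an example)
mlxconfig -d /dev/mst/mt4119_pciconf0 query
# Set both ports to Ethernet mode, then reboot
mlxconfig -d /dev/mst/mt4119_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
```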
  9. [SOLVED] Problem with Mellanox MCX556A-ECAT on PVE 6.3

    Hi all, I have set up servers with MCX556A-ECAT cards, but they do not run out of the box. I cannot see the cards in the network configuration. When I run the following I get that response; if I try to start a FW update I get that. When I try to install Mellanox's own OFED, it wants to uninstall PVE-required packages; if I...
  10. Live migration over which communication

    Many thanks for the answer. Would you do that over the corosync network and increase the corosync connection to 10 GbE? Or should the corosync connection stay absolutely separated at all times because of its low-latency requirement? Regards
  11. Live migration over which communication

    Dear all, I want to know over which network the live migration traffic runs. I did not find this information in the documentation. I separate corosync over a bond with two NICs. I separate Ceph (cluster and public together) over a bond with two NICs. And I have separated with one NIC the entry to the...
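For reference, the live-migration network can be pinned explicitly in the datacenter configuration rather than left to the default (management) network; a sketch, with a placeholder CIDR:

```shell
# /etc/pve/datacenter.cfg — route live-migration traffic over a dedicated network
migration: secure,network=10.10.10.0/24
```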
  12. Ceph Network public and cluster some questions

    I thought that this was only for corosync communication, and as admin I don't really need to listen on this connection; for my maintenance or status checks I can see everything over the GUI or CLI on Proxmox itself. For sure, yes, you are right, it is an important point. But let's say I do LACP...
  13. Ceph Network public and cluster some questions

    So you are right. I thought I had read that in the documentation. I don't have a big cluster, as you already said. So, to avoid latency problems, I set corosync also on a full mesh without a switch. Do I have redundancy when I do the full mesh as described in the documentation? This makes...
  14. Ceph Network public and cluster some questions

    Thanks for the detailed response. I have some 10 GbE NICs free which I want to use only in bridge mode for the guest VMs. I believe you mean that in point 5 4] in my scenario this would be in my main network with all my users in the company. This would be a bad decision regarding interruption of the...
  15. Ceph Network public and cluster some questions

    Many thanks. I was confused about whether 3 dedicated networks are required. This means if I want to separate, as recommended, the Ceph cluster and Ceph public networks, I need in sum 3 dedicated networks, including the Proxmox cluster network. 3 dedicated networks: 1. Ceph cluster network 2. Ceph public...
  16. Ceph Network public and cluster some questions

    This is for corosync (pmxcfs)? SFP+ 10 GbE for the cluster network. You are using NVMe? If yes, how many OSDs? So it seems that the public network is not as hungry for high speed as the cluster network. Can somebody else confirm that? Is there a recommendation for the speed of the public network...
  17. Ceph Network public and cluster some questions

    Dear, I want to set up for Ceph a full-mesh network over dual 100 GbE NICs. In the documentation "Deploy Hyper-Converged Ceph Cluster" it is recommended to split the public and cluster networks of Ceph. If I am right, I have read that it is recommended to have redundancy in the Ceph network. To...
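The public/cluster split discussed here boils down to two lines in the Ceph configuration (a sketch; the subnets are placeholders for the two dedicated networks):

```shell
# /etc/pve/ceph.conf — separate client traffic from OSD replication traffic
[global]
    public_network = 10.10.20.0/24
    cluster_network = 10.10.30.0/24
```

Clients and monitors talk over the public network; OSD-to-OSD replication and recovery traffic use the cluster network.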
  18. Server configuration for a PVE+Ceph 3-node HA cluster

    Hello everyone, I would like to configure and set up a PVE Ceph cluster with 3 nodes. I have chosen the following configuration and am unsure whether it would fit as-is, or whether there is a design flaw or even a bottleneck built in. Per node, the following hardware: Chassis: Supermicro 2U...
  19. How to Deploy a 2 Node Cluster on ProxMox 4.x VE

    Also if no HA is required? Let's say it is only needed to share the VMs between two nodes and to reach load balancing. If it is made with two LVM volume groups, one running on each node and replicating to the other: LVM VG 1 (Node A, 3 VMs running and replicating to Node B), LVM VG 2 (Node B...
  20. DRBD Diskless after first reboot

    Update: I found my problem regarding the issue described above. It was the LVM filter. I have used the following filter and now it works well again: filter = [ "a|^/dev/drbd0|", "a|^/dev/drbd1|", "a|^/dev/sda3|", "r/.*/" ] But I have the further problem that DRBD doesn't start automatically; I have to run...
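For context, that filter belongs in the devices section of /etc/lvm/lvm.conf, and on a systemd-based DRBD install the autostart problem is usually addressed by enabling the service (service name is an assumption about this setup):

```shell
# /etc/lvm/lvm.conf — accept the DRBD devices and the backing PV, reject everything else
devices {
    filter = [ "a|^/dev/drbd0|", "a|^/dev/drbd1|", "a|^/dev/sda3|", "r/.*/" ]
}
```

followed by `systemctl enable --now drbd` so the resources come up at boot.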