Search results

  1. Configuring a 2-node PVE cluster

    This is not a sane approach. When you have multiple failure domains, the design should account for that, e.g. two separate DCs with the potential for disrupted connectivity should be redundant (and have an outside witness node), not members of the same failure domain. And again, even if you insisted...
  2. Configuring a 2-node PVE cluster

    True in theory. In practice, the chances of the cluster splitting down the middle (so half the nodes only see themselves and not the other half) are so astronomically low they may as well be zero. If this is really a concern for you, you can always set your quorum minimum at n/2 + 1 so you'd get...
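    For reference, corosync's votequorum already enforces a strict majority (floor(n/2) + 1). A minimal sketch of the relevant corosync.conf section, assuming PVE defaults:

    ```
    quorum {
      provider: corosync_votequorum
      # two_node: 1 would let a 2-node cluster stay quorate on a single vote,
      # at the cost of split-brain safety; this is not the PVE default
    }
    ```

    You can check the current expected and total votes with pvecm status.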
  3. Configuring a 2-node PVE cluster

    PVE clustering requires 3 nodes. The 3rd node can be a simple quorum vote, as @leesteken linked, but don't confuse that with "replication." ZFS replication is a separate issue; see https://pve.proxmox.com/wiki/PVE-zsync
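    A sketch of the PVE-zsync tool from the wiki linked above; the guest ID, target IP, and pool name are placeholders:

    ```shell
    # create a recurring sync job replicating guest 100 to a remote ZFS pool
    pve-zsync create --source 100 --dest 192.168.1.2:rpool/backup --verbose --maxsnap 7
    # list configured jobs and their state
    pve-zsync list
    ```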
  4. They asked me for a CEPH deployment plan!

    100Gb is great, but bandwidth is only one consideration; contention is the real enemy, especially during a Ceph rebalance storm. A 4x25 setup will be more resilient and more dependable than 1x100. The general gist of what you want here (edit: AT MINIMUM; other networks are probably desirable as...
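    The 4x25 idea usually means an LACP bond. A minimal sketch for /etc/network/interfaces (interface names are assumptions, and the switch side needs a matching LAG):

    ```
    auto bond0
    iface bond0 inet manual
        bond-slaves enp65s0f0 enp65s0f1 enp65s0f2 enp65s0f3
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4
    ```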
  5. They asked me for a CEPH deployment plan!

    Step 1: remove all non-boot drives from your R840s; retain those for compute. Examine your existing network topology, as you will likely want/need to upgrade it. Step 2: buy 3 smaller and cheaper nodes and populate them with at least 4 NICs each; the fatter the better. Step 3: repopulate the new nodes with...
  6. Enterprise SSD Showing 0B in size

    This would be really concerning to me. What hardware were these connected to? Those power supplies should have protected the devices on the low-voltage rails from any spike or adverse condition; I'd effectively rule out its use in any meaningful application.
  7. What if root filesystem became readonly

    Lots of advice, but no one asked the obvious: what do you see in dmesg to explain the fault? Obviously only available before rebooting the node, since your logs aren't being written to your read-only file system.
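    A quick sketch of what to look for; the grep patterns are just common suspects:

    ```shell
    # look for the remount event and the I/O errors that triggered it
    dmesg | grep -iE 'remount|read-only|i/o error|ext4-fs error'
    # confirm how root is currently mounted
    mount | grep ' / '
    ```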
  8. Enterprise SSD Showing 0B in size

    I think the key to this mystery lies in "all the sudden." What happened immediately prior to that boot? Firmware update? Kernel update?
  9. Should an official Proxmox "Hardening" wiki page be created?

    Source benchmarks would be good here; I don't think they will bear out that statement. There are a number of API calls that only work when called by root, and the API mechanism requires a password.
  10. PBS question

    Yeah, that's what I'm currently doing; I just thought it's a bit hacky and hoped there was a blessed method for it.
  11. PBS question

    Gah, sorry. I'm not only blind to documentation, apparently.
  12. PBS question

    Apologies in advance if I'm being obtuse and not seeing it in the docs. When using vzdump, I can always restore backups directly from their resulting tarballs. How do I go about restoring backups from a failed PBS instance? Is there a DB/config backup mechanism I'm not seeing?
  13. Should an official Proxmox "Hardening" wiki page be created?

    It's a good idea, BUT not really necessary. Security as applicable to a PVE environment isn't really any different from any other virtualization platform, which means any hardening policies that would be best practice generically, or even specific to another platform (e.g. VMware), would be...
  14. Proxmox on VRTX

    One other possibility (not likely, but worth a try): do you have firmware-bnx2 installed? It's in the non-free repo, so you may need to add that to your repo list to install it.
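    A sketch, assuming a Debian 12 base (older releases use "non-free" instead of "non-free-firmware"):

    ```shell
    # /etc/apt/sources.list must include the firmware component, e.g.:
    #   deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
    apt update
    apt install firmware-bnx2
    ```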
  15. Proxmox on VRTX

    So you did... You need to add pci=realloc=off to your kernel boot line. I see you already did. If you have a Dell support entitlement, you might need to give them a call. I'm fresh out of useful ideas, except replacing the NIC with an Intel one...
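    For anyone landing here later, a sketch of where that parameter goes on a GRUB-booted node:

    ```shell
    # in /etc/default/grub:
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=realloc=off"
    # then apply and reboot
    update-grub
    ```

    Systemd-boot (ZFS root) installs keep the command line in /etc/kernel/cmdline instead, applied with proxmox-boot-tool refresh.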
  16. Proxmox on VRTX

    Dude, you're still no further along.
  17. Proxmox on VRTX

    Great, so all 4 NICs are visible. Since they all look like they use the same hardware, it is most likely that they all use the same kernel module, which means they're likely all loaded. What does ip l show?
  18. Proxmox on VRTX

    Not likely. The VRTX doesn't actually do PCI fabric; it's simple routing. A node has exclusive access to that hardware, and it shows up on that individual node's PCI root: it either shows up, or it doesn't. In lspci, what are all the NICs you see, and what are their PCI addresses?
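    A sketch of the commands that answer this; output will vary by chassis, and the PCI address below is a placeholder:

    ```shell
    # PCI addresses plus vendor/device IDs of every NIC
    lspci -nn | grep -i ethernet
    # which kernel driver is bound to a given device
    lspci -k -s 01:00.0
    ```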
  19. Proxmox on VRTX

    I have not seen or touched one of these in quite a few years, but if memory serves, the standard switch has 8 ports facing in, which are typically configured as 2 per node. Unless you actively set the MC to change that behavior, that's all you SHOULD be seeing. How many nodes do you have in your...
  20. Problem after importing a VMware virtual machine

    You should see the detached drive at the bottom of your hardware list (under Node → VMID → Hardware). Select it and hit the "Edit" button; you'll be able to remap it there.
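    The same remap can be done from the CLI; a sketch assuming VM 100 and a hypothetical unused volume name:

    ```shell
    # detached disks show up as unusedN entries in the VM config
    qm config 100 | grep ^unused
    # reattach the volume on a free bus slot
    qm set 100 --scsi1 local-lvm:vm-100-disk-1
    ```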
