Search results

  1. Meltdown and Spectre Linux Kernel fixes

    "Proxmox Host" = on the Proxmox node = "PVE server". I had to read this twice as well; it's really meant literally: there is now a ("Type:") option called "host", which you have to select... (CLI sketch after the list)
  2. Proxmox 3.4, kernel 2.6.32-48-pve, aacraid - kernel panic

    not yet - as I said, we just discovered this issue... for now, we'll point grub to the 2.6.32-46 kernel. Perhaps dkms is an option... interestingly, PVE4 still uses the older, working aacraid driver... (grub sketch after the list)
  3. Proxmox 3.4, kernel 2.6.32-48-pve, aacraid - kernel panic

    perhaps related to what we also just found out: the changelog for 2.6.32-47: http://download.proxmox.com/debian/dists/wheezy/pve-no-subscription/binary-amd64/pve-kernel-2.6.32-47-pve_2.6.32-178.changelog => "update aacraid sources to aacraid-linux-src-1.2.1-50667.tgz", and when you look at...
  4. Why can't I re-install rsyslog?

    we just recently discovered an rsyslog problem on some wheezy hosts, probably induced by some updates in the past - the symptoms could be similar to yours: logfiles seem to remain empty, but in fact the daemon just logs into the last *rotated* one (meaning, e.g., "messages" remains empty, but... (check/restart sketch after the list)
  5. Failing backup job

    ...sure, but obviously only if you have the needed leeway to skip days for a backup job; in our case it's a matter of "backup needed daily", with the jobs "spread" over the night...
  6. Failing backup job

    we also tried something in this direction; however, there's a problem: since one cannot configure a backup job as the "successor" of another one, the time schedule has to be "guessed" somehow. This does not work reliably, at least for us; sometimes jobs take longer than expected, or longer... (chaining sketch after the list)
  7. Q: Upgrade 3-node Cluster (PVE 2.3->3.2) Sanity check

    I used this script a few times (~6 nodes / 2 clusters) - and I got it *always* working afterwards. On the other hand, I always had to do some manual fine tuning/fixing afterwards, but this was probably because of having installed some other packages... "fixing" afair consisted of some manual...
  8. Proxmox VE Ceph Server released (beta)

    Do quota and the Proxmox UI (showing disk usage etc.) still work with OpenVZ on CephFS? I once tried XFS and had all sorts of problems, even with patched PVE perl modules (ok, that was around Prox V1.9 or so).
  9. Proxmox VE Ceph Server released (beta)

    Hi, here's a test using rbd with "writeback": and here the same config using "nocache": a ceph cluster of 3 nodes, using bonded 2 GBit for OSD links and bonded 2 GBit for MONs, 4 OSDs per node, SATA disks. (cache-mode sketch after the list)
  10. is CEPH stable for environment production critical?

    interesting figures; do you use SSDs for journals? I am using one SSD per 4 OSDs - which is the recommended maximum. Without these separate journals, my throughput is significantly lower; I also increased the journal size to 10GB/OSD (the default was only 1GB, if I remember correctly). I used a... (ceph.conf sketch after the list)
  11. is CEPH stable for environment production critical?

    Hi, may I ask what "not so good" means in figures? I'm running a test ceph installation: 3 nodes, each with 4 SATA disks and 4 OSDs/node, using 2 GBit links (bonding). I'm getting up to 120 MB/s, and I'm wondering if 10 GBit links would still improve rates in this case... :) (rados bench sketch after the list)
  12. Proxmox VE Ceph Server released (beta)

    Ok; but for me at least the interesting - if not the most important - part of "trying" something like this is "what happens if something happens"... :) (I'd believe the standard use case for Ceph includes some sort of redundancy - though you might also go without it, of course...)
  13. Proxmox VE Ceph Server released (beta)

    well, but when the ceph node running this one MON dies, the whole thing becomes unavailable, no? Seems a bit pointless to me...
  14. Proxmox VE Ceph Server released (beta)

    hm, isn't a quorum needed? Imo at least an uneven (odd) number of MONs is required. Though you can probably run 2 MONs on one of the two nodes, to get a quorum with 3 MONs... (quorum-check sketch after the list)
  15. [nginx + apache]

    nat? this is all I use for nginx & proxmox - works for me; replace $IP with your public IP... server { listen $IP:80; server_name $IP; return 307 https://$IP$uri; } server { listen $IP:443 ssl...
  16. PVE 3.1 Console not working

    I see this quite frequently - if I use a slow client, where the Java startup takes a bit longer; in my case, a 2nd try using "reload" in the console window then usually succeeds.
  17. Proxmox VE 3.1 beta (pvetest)

    great news; there seems to be a little glitch, though: after updating to this version, there's a new password-protected apt repo configured, which prevents further updates: Err https://enterprise.proxmox.com wheezy/pve-enterprise amd64 Packages The requested URL returned error: 401 Ign... (repo sketch after the list)
  18. openvz and ploop

    you're right of course - at least not by default; some seem to have installed this vzstat thing, though: 2.6.32-20-pve 5 2.6.32-19-pve 5 2.6.32-7-pve 2
  19. openvz and ploop

    ...perhaps a bit of a 'chicken/egg problem' - *if* Proxmox supported ploop, *then* the figures would probably hit the ceiling... ;-)
  20. Node replacement after hardware failure on a cluster

    tell the cluster to use a quorum of 2: pvecm expected 2 (sketch after the list)
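
For result 1, a minimal CLI sketch of selecting the new "host" CPU type (the VMID 100 is a placeholder; the same thing can be done via the GUI dropdown):

    # set the CPU type of a KVM guest to "host"
    qm set 100 --cpu host
    # a full stop/start (not just a reboot from inside the guest) is needed
    # so the guest actually sees the new CPU type
    qm stop 100 && qm start 100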
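
For result 2, a rough sketch of pointing grub at the older 2.6.32-46 kernel on a wheezy-based PVE 3.4 node - the exact menu entry title below is an assumption, so check grub.cfg for the real one first:

    # list the available menu entries and copy the title of the 2.6.32-46 one
    grep menuentry /boot/grub/grub.cfg
    # then set GRUB_DEFAULT in /etc/default/grub to that title (quoted), e.g.
    #   GRUB_DEFAULT="Proxmox VE GNU/Linux, with Linux 2.6.32-46-pve"
    # and regenerate the grub config
    update-grub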
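
For result 4, a quick way to check for the rotated-logfile symptom (file names are the usual Debian defaults): if the rotated file keeps growing while the current one stays empty, restarting rsyslog makes it reopen the correct files.

    # compare sizes/timestamps of the current and the last rotated logfile
    ls -l /var/log/messages /var/log/messages.1
    # make rsyslog reopen its output files
    service rsyslog restart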
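
For result 6, one possible workaround (a sketch, not a built-in PVE feature): instead of guessing a start time for the "successor" job, chain the vzdump runs in a single cron script, so the second set only starts once the first has finished. The VMIDs and the storage name are placeholders.

    # run the two backup sets strictly one after the other
    vzdump 101 102 --storage backup-nfs --mode snapshot && \
    vzdump 201 202 --storage backup-nfs --mode snapshot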
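
For result 9, the "writeback" and "nocache" runs correspond to the per-disk cache setting of the VM; a sketch of switching it from the CLI (VMID, storage and disk names are placeholders):

    # rbd-backed disk with writeback caching
    qm set 100 --virtio0 ceph-rbd:vm-100-disk-1,cache=writeback
    # the same disk with caching disabled ("nocache" in the GUI = cache=none)
    qm set 100 --virtio0 ceph-rbd:vm-100-disk-1,cache=none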
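
For result 10, a sketch of the journal-size change mentioned there, assuming it is set cluster-wide in ceph.conf (the value is in MB, so 10240 = 10 GB per OSD, and it only applies to journals created after the change):

    # append the [osd] journal size setting to the cluster config
    printf '[osd]\nosd journal size = 10240\n' >> /etc/ceph/ceph.conf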
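
For result 11, the kind of raw-throughput test such figures are usually compared against (the pool name "test" is a placeholder; --no-cleanup keeps the written objects around so the read test has something to read):

    # 60 seconds of writes into the pool
    rados bench -p test 60 write --no-cleanup
    # then 60 seconds of sequential reads of those objects
    rados bench -p test 60 seq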
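
For result 14, the reasoning in short: MONs need a strict majority, so with 2 MONs the quorum is still 2 and either failure stalls the cluster, while 2 out of 3 MONs keep it running (though two MONs on one node mean that node's failure still breaks quorum). A standard way to see who currently forms the quorum:

    # show the monitors currently in quorum (run on any cluster node)
    ceph quorum_status --format json-pretty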
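
For result 17, the usual workaround at the time, assuming no subscription key is available: disable the new enterprise repository and use the public pve-no-subscription repository instead (the .list file name is the default one created by the update; adjust if yours differs):

    # comment out the password-protected enterprise repo
    sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
    # add the public no-subscription repo for wheezy
    echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-no-subscription.list
    apt-get update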
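
For result 20, the typical sequence after losing a node (the value passed to "expected" is the number of votes that should now count as quorate, so it depends on how many nodes are actually left; 2 matches the situation above):

    # check the current quorum state and expected votes
    pvecm status
    # lower the expected votes so the remaining nodes are quorate again
    pvecm expected 2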
