Search results

  1. fuckwit/kaiser/kpti

    Hi all, The application of this kernel security patch could result in a noticeable performance impact, notably on servers. See for example this first benchmark from Phoronix: https://www.phoronix.com/scan.php?page=article&item=linux-415-x86pti&num=1 So I wonder if this patch should be applied... (a quick KPTI status check is sketched after this list)
  2. [solved] Ceph time out after upgrade from PVE 5 beta to 5.0

    Finally, I solved the problem by rebooting the second node. I think it was in some state of error, and as the cluster has only three nodes, quorum is two, and since the third node was not yet upgraded, there was no quorum. It is much better now: ~# ceph -s cluster: id... (quorum checks are sketched after this list)
  3. [solved] Ceph time out after upgrade from PVE 5 beta to 5.0

    Some more information after reading another thread: ~# systemctl status ceph ceph-osd ● ceph.service - PVE activate Ceph OSD disks Loaded: loaded (/etc/systemd/system/ceph.service; enabled; vendor preset: enabled) Active: inactive (dead) since Mon 2017-08-14 22:19:06 CEST; 18h ago Main...
  4. [solved] Ceph time out after upgrade from PVE 5 beta to 5.0

    Hello all, I am a little bit new to Ceph. A few months ago I installed a new Proxmox cluster with Proxmox 5 beta, using 3 Dell PE R630 nodes, each with 2 SSDs (one for the OS and one for journals) and 8 × 500 GB HDDs for OSDs. So I have 24 OSDs. Proxmox and Ceph share the same servers. I...
  5. Trouble with ceph on PVE 5

    For completeness, I managed to deal with the warning 'no active mgr' by creating one with pveceph. # pveceph createmgr creating manager directory '/var/lib/ceph/mgr/ceph-prox-nest3' creating keys for 'mgr.prox-nest3' setting owner for directory enabling service 'ceph-mgr@prox-nest3.service'... (a verification sketch follows this list)
  6. Trouble with ceph on PVE 5

    I ran into the same problem. I am testing the migration from PVE 4.4 with Ceph Jewel to Proxmox 5.0 on a test cluster. So I first migrated from Ceph Jewel to Luminous following the documentation, then migrated from Jessie to Stretch. I ended up with this ceph package, as reported above: #...
  7. Proxmox VE 5.0 beta1 released!

    I just hit the same error, installing PVE 5.0 beta over a previous 4.4 install, so not on a clean disk. I have the full screenshot. It was on a Dell PE R630, /dev/sda being an SSD.
  8. Problem new server with SSD and LVM thin volume

    Hi again, I think there is a problem with the default configuration of a thin LVM volume with the 4.2 ISO when you have an existing cluster with no thin LVM already configured. When I introduced this new node into my cluster, I saw a small /var/lib/vz partition of about 45 GB that was created...
  9. Apt-get behind proxy - setting authentication

    Hi exup, We have a proxy on our network, and all we did (as for our other Linux servers) is create an apt.conf file inside /etc/apt and fill it with: # cat apt.conf Acquire::http::Proxy "http://proxy:8080"; Acquire::https::Proxy "https://proxy:8080"; You should perhaps add the https proxy... (an authenticated variant is sketched after this list)
  10. Problem new server with SSD and LVM thin volume

    Hi Fabian, Thanks for your answer. Your advice is in line with what I was thinking. It is not very sustainable to add a new server with thin LVM to a cluster where it has not been configured previously. I would like to keep the possibility of migrating VMs between the new node and the older ones, so I think I...
  11. Problem new server with SSD and LVM thin volume

    Hello all, I would like some advice on a configuration problem with a new server we bought recently. It is a Dell PE R730 with two 200 GB SSDs in RAID 1 and 8 × 500 GB NL-SAS drives in RAID 10. We bought this configuration to use this server in a Ceph cluster in the future, as described...
  12. No reboot with 4.4 pve kernel

    Hi all, I have the same problem with a new server, a Dell PE R730, where I replaced the default network card with a Broadcom 57800 (two 10 Gb Base-T ports + two 1 Gb Base-T ports): # lspci | grep Broadcom 01:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57800 1/10 Gigabit...
  13. Proxmox VE 4.0 2-node cluster: cannot migrate online

    Hi Hivane, The Xeon E5440 is rather old, but I verified that it does have virtualization extensions. http://ark.intel.com/fr/products/33082/Intel-Xeon-Processor-E5440-12M-Cache-2_83-GHz-1333-MHz-FSB I guess you have to enable virtualization in the BIOS on the second node. (a quick flags check is sketched after this list)
  14. Proxmox VE 4.0 released!

    Hi Tom, Thanks for your advice. I'll try not to forget this.
  15. Proxmox VE 4.0 released!

    Hi Mortph027, Thanks for the detailed information. I feel reassured that this will not be a problem when migrating. I still have a small doubt, though, as you did a fresh install and I plan to migrate from 3.4. So the first step is to upgrade to Jessie, then to install (apt-get install) the...
  16. Proxmox VE 4.0 released!

    Any answer to the question above? I have a cluster of Dell PowerEdge R620 and R630 servers with Broadcom interfaces: # lspci | egrep -i --color 'network|ethernet' 01:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe 01:00.1 Ethernet controller: Broadcom...
  17. Proxmox 3.4 Install Resolution Issue

    So this confirms that the problem is with UEFI, and not only on Dell servers. But it seems that the current Debian installer works fine in UEFI mode?
  18. Proxmox 3.4 Install Resolution Issue

    Hello, As I suspected, since very few people are reporting this problem, the screen-size issue comes from booting in UEFI mode. I reverted to BIOS mode, and everything is fine with the display. So it seems that screen resolution and settings are not the same in UEFI and BIOS mode...
  19. Proxmox 3.4 Install Resolution Issue

    Hello, I have the same problem as hpk. Only the left part of the screen is displayed. I tried with a brand-new 22" Dell monitor: auto-adjust, moving the displayed part of the screen to the left (via the monitor menu), but there is nothing on the right. I was able to install Proxmox just by confirming...
  20. Proxmox v3.4 + 4.0 Cluster

    Proxmox v4 will use corosync v2 (the software that sets up the cluster), which is not compatible with corosync v1, used previously. See: http://pve.proxmox.com/pipermail/pve-devel/2015-February/014411.html So indeed, the cluster upgrade will certainly have to be done all at once.
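
Notes on the results above:

On result 1 (KPTI): before benchmarking, it helps to confirm whether page-table isolation is actually active. A minimal sketch, assuming a kernel recent enough (4.15+, or a distribution backport) to expose the vulnerabilities entries in sysfs:

    ~# dmesg | grep -i 'isolation'           # expect "Kernel/User page tables isolation: enabled"
    ~# cat /sys/devices/system/cpu/vulnerabilities/meltdown   # "Mitigation: PTI" when KPTI is on
    ~# grep -o 'pti=off' /proc/cmdline       # prints something only if KPTI was disabled at boot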
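
On result 2 (quorum): with N votes, quorum is floor(N/2) + 1, so a 3-node cluster needs 2 votes; with one node in a bad state and the third not yet upgraded, only one vote was left. Two checks, one for the PVE/corosync side and one for the Ceph monitors:

    ~# pvecm status                             # look for "Quorate: Yes" and the vote counts
    ~# ceph quorum_status --format json-pretty  # lists the monitors currently in quorum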
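
On result 5 (ceph-mgr): after creating the manager you can confirm it went active. A small sketch, reusing the 'prox-nest3' node name from that post:

    ~# systemctl status ceph-mgr@prox-nest3.service   # the unit enabled by pveceph createmgr
    ~# ceph -s | grep mgr                  # should report something like "mgr: prox-nest3(active)"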
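
On result 9 (apt proxy): since that thread is about authentication, note that the same apt.conf syntax accepts credentials embedded in the proxy URL. A sketch, where 'user', 'password' and 'proxy:8080' are placeholders for your own values:

    ~# cat /etc/apt/apt.conf
    Acquire::http::Proxy "http://user:password@proxy:8080";
    Acquire::https::Proxy "https://user:password@proxy:8080";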
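
On result 13 (virtualization extensions): the CPUID flags tell you whether the CPU supports the extensions, while the kvm module messages tell you whether the BIOS left them enabled. A minimal check:

    ~# egrep -c '(vmx|svm)' /proc/cpuinfo   # > 0 means VT-x (vmx) or AMD-V (svm) is advertised
    ~# dmesg | grep -i kvm                  # "kvm: disabled by bios" means enable it in BIOS setup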