Debian update

mlaci001

New Member
Jan 12, 2015
Hi All !

Be careful!

After the Debian update (stable), Proxmox could not use NFS storage at all!

I found that the "rpcinfo" disappeared from "libc-bin" and thus from "/usr/bin".

A temporary workaround is to create a soft link at the old path pointing to the new location:
"/usr/bin/rpcinfo --> /usr/sbin/rpcinfo"

Proxmox uses the following file to handle NFS storage:
/usr/share/perl5/PVE/Storage/NFSPlugin.pm

And the following path is hard-wired into it:
my $cmd = ['/usr/bin/rpcinfo', '-p', $server];
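
For reference, here is the workaround as a command (just a sketch; it assumes rpcinfo now lives in /usr/sbin, which is where the rpcbind package puts it, and <nfs-server> is a placeholder for your NFS server's address):

# ln -s /usr/sbin/rpcinfo /usr/bin/rpcinfo
# /usr/bin/rpcinfo -p <nfs-server>

If the second command lists portmapper/mountd/nfs again, the storage should reappear once pvestatd picks it up (or after a reboot).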

Thanks

Update !!
The other node runs DRBD; after a reboot, the following messages repeat:

Node2 (debian updated):

pmxcfs[3354]: [quorum] crit: quorum_initialize failed: 6
pmxcfs[3354]: [quorum] crit: can't initialize service
pmxcfs[3354]: [confdb] crit: confdb_initialize failed: 6
pmxcfs[3354]: [quorum] crit: can't initialize service
pmxcfs[3354]: [dcdb] crit: cpg_initialize failed: 6
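
(For context: error code 6 from quorum_initialize/cpg_initialize usually means pmxcfs cannot reach the cluster communication stack, i.e. cman/corosync is not running or not quorate. On a standard PVE 3.x / Wheezy install the first thing to try would be roughly the following; this is only a sketch, not a fix for the underlying package problem:)

# service cman start
# service pve-cluster restart
# pvecm status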

DRBD does not come up (stuck waiting for connection):
kernel: [ 51.707946] d-con r0: Starting worker thread (from drbdsetup-84 [3455])
kernel: [ 51.708207] block drbd0: disk( Diskless -> Attaching )
kernel: [ 51.708501] d-con r0: Method to ensure write ordering: drain
kernel: [ 51.708506] block drbd0: max BIO size = 262144
kernel: [ 51.708513] block drbd0: drbd_bm_resize called with capacity == 2757917920
kernel: [ 51.729314] block drbd0: resync bitmap: bits=344739740 words=5386559 pages=10521
kernel: [ 51.729322] block drbd0: size = 1315 GB (1378958960 KB)
kernel: [ 52.066784] block drbd0: bitmap READ of 10521 pages took 337 jiffies
kernel: [ 52.111632] block drbd0: recounting of set bits took additional 45 jiffies
kernel: [ 52.111639] block drbd0: 5259 MB (1346382 bits) marked out-of-sync by on disk bit-map.
kernel: [ 52.111653] block drbd0: disk( Attaching -> UpToDate )
kernel: [ 52.111658] block drbd0: attached to UUIDs FE433C2398EEB5A0:72C9D5C2D2FFB309:F558F4736881D727:F557F4736881D727
kernel: [ 52.117477] d-con r0: conn( StandAlone -> Unconnected )
kernel: [ 52.117502] d-con r0: Starting receiver thread (from drbd_w_r0 [3457])
kernel: [ 52.117570] d-con r0: receiver (re)started
kernel: [ 52.117586] d-con r0: conn( Unconnected -> WFConnection )

Node1 (debian not updated):
#cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
srcversion: 19422058F8A2D4AC0C8EF09
0: cs:StandAlone ro:primary/Unknown ds:UpToDate/DUnknown r-----
ns:112558888 nr:183756053 dw:327791976 dr:1416325951 al:13656 bm:120 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:14119228

Node2 (debian updated):
#cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
srcversion: 19422058F8A2D4AC0C8EF09
0: cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:5385528
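
(Side note on the states above: Node1 is sitting in StandAlone while Node2 waits in WFConnection, so the two will never reconnect on their own. A manual reconnect attempt would look roughly like this, assuming the resource is really named r0 as in the logs and there is no split-brain that has to be resolved first:)

Node1:
# drbdadm connect r0
# drbdadm cstate r0   (should go from WFConnection to Connected once the peers can talk)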

Please help !!
Thanks
 
Be careful using the word stable. Debian just released a new stable yesterday, Jessie, 8.0. Are you still on Wheezy?
 
I just ran apt-get upgrade via the web GUI (stable repo).
# cat os-release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
 
I am sorry, but this is simply not supported. The current packages only work on Wheezy. I suggest you re-install that node.
 
I upgraded both nodes, created the symbolic link (/usr/bin/rpcinfo --> /usr/sbin/rpcinfo), rebooted all nodes, and everything is fine! All nodes online!
I am happy :)
Thanks
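
(In case it helps anyone else: with the link in place, a quick way to double-check from the CLI that storage and cluster are healthy again; the output will of course differ per setup:)

# pvesm status
# pvecm status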
 
What exactly do you gain from an upgrade to jessie?
Does jessie have something which is needed in the current proxmox, or does jessie contain fixes to bugs in wheezy?

Me, who doesn't understand why all the young people seem to have forgotten the wise words: "If it ain't broken, don't fix it!" ;)
 
I wanted to answer, but I'd rather not :(
 
Check /etc/apt/sources.list

Make sure it's on wheezy (not stable)... I had 1 of 4 servers do this. I'm sure it was my fault.
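
(For example, on a PVE 3.x / Wheezy node the relevant lines look roughly like this; the exact Debian mirror and the Proxmox repository you use (enterprise vs. pve-no-subscription) depend on your setup:)

deb http://ftp.debian.org/debian wheezy main contrib
deb http://security.debian.org/ wheezy/updates main contrib
deb http://download.proxmox.com/debian wheezy pve-no-subscription

The point is that the suite is spelled "wheezy" explicitly, so a new Debian stable release cannot silently pull in distribution changes on the next upgrade.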
 
