Search results

  1. how can we reduce multicast traffic?

    Note the multicast traffic from our Layer 3 switch (ifIndex 417):
        Octets Received: 6673610456
        Packets Received Without Errors: 24005319
        Unicast Packets Received: 1431638
        Multicast Packets Received: 16965403
        Broadcast Packets Received: 5608278
        Receive Packets Discarded: 5125808...
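
    A hedged sketch of one common mitigation, assuming vmbr0 is a standard Linux bridge (the sysfs knob may not exist on every kernel):

        # Enable IGMP snooping so multicast is only forwarded to
        # bridge ports that have actually joined the group.
        echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping

        # Verify the setting.
        cat /sys/class/net/vmbr0/bridge/multicast_snooping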
  2. sshfs shared storage

    I think it would be good to allow sshfs for shared storage. For us sshfs is more reliable than NFS. We use sshfs for user documents, with $HOME/Documents mounted via sshfs from a KVM. We've done that for 3 years and have had very few issues.
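
    For reference, a minimal sshfs mount of the kind described might look like this (host and path are hypothetical):

        # Mount a remote directory over SSH; reconnect after dropped sessions.
        sshfs user@fileserver:/srv/documents $HOME/Documents -o reconnect

        # Unmount when finished.
        fusermount -u $HOME/Documents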
  3. no video on system console

    I have a Supermicro model X7DAL-E+; it does not have an on-board video card. We used this for Proxmox 1.9 with an add-on video card and had display on the system console. For Proxmox 2.1 kernel 12, we do not have video - after GRUB loads the kernel and gets to 'waiting for /dev/...' or...
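
    One generic thing to try, assuming the display dies when the kernel switches video modes (a guess, not a confirmed fix for this board): boot once with nomodeset, then persist it if it helps:

        # /etc/default/grub - add nomodeset to the kernel command line
        GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"

        # Regenerate the GRUB config and reboot.
        update-grub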
  4. vzdump vz local snapshot question

    I'm trying to better understand vzdump and how it does a snapshot for an OpenVZ container. The VZs are at /var/lib/vz. How does vzdump make a snapshot of a directory like /var/lib/vz/private/107? I'd like to use that same method to snapshot and then back up directories within the VZ.
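
    In snapshot mode vzdump leans on LVM: it snapshots the volume that holds /var/lib/vz, mounts the snapshot, and archives the container directory from there. A rough hedged sketch of the same idea, assuming /var/lib/vz sits on an LVM volume /dev/pve/data (names and sizes hypothetical):

        # Create a temporary LVM snapshot of the data volume.
        lvcreate -L 1G -s -n vzsnap /dev/pve/data

        # Mount it read-only and archive the container's private area.
        mkdir -p /mnt/vzsnap
        mount -o ro /dev/pve/vzsnap /mnt/vzsnap
        tar -czf /backup/ct107.tar.gz -C /mnt/vzsnap/private/107 .

        # Clean up.
        umount /mnt/vzsnap
        lvremove -f /dev/pve/vzsnap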
  5. KVM optimization

    With HA working great for KVM on DRBD, I'm tempted to switch a container to KVM. This container is where we do a lot of data entry. But speed tests on our data show the CT 7 times faster than KVM. The system runs Debian Etch. Inside the system, memory and CPU usage are low. In case someone...
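
    If the gap is disk-bound, one commonly suggested first step is virtio instead of IDE for the guest disk; a hedged example of the disk line in /etc/pve/qemu-server/<vmid>.conf (storage name and file are hypothetical):

        # virtio disk, host page cache bypassed
        virtio0: local:107/vm-107-disk-1.raw,cache=none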
  6. kvm migration fails, exit code 250

    I get this trying to migrate a KVM:
        Executing HA migrate for VM 101 to node fbc241
        Trying to migrate pvevm:101 to fbc241...Temporary failure; try again
        TASK ERROR: command 'clusvcadm -M pvevm:101 -m fbc241' failed: exit code 250
    However 4 other KVMs migrate back and forth without an issue...
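
    When an rgmanager migration fails like this, a hedged first step is to check service state on each node before forcing anything (the disable/enable cycle interrupts the VM, so treat it as a last resort):

        # Show cluster membership and service status.
        clustat

        # Last resort: stop and restart the HA service on the target node.
        clusvcadm -d pvevm:101
        clusvcadm -e pvevm:101 -m fbc241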
  7. "tar: write error" during a restore

    Hello, restoring a KVM displays this:
        extracting archive '/var/lib/vz/dump/vzdump-qemu-100-2012_05_01-10_52_28.tar.lzo'
        extracting 'qemu-server.conf' from archive
        extracting 'vm-disk-ide0.raw' from archive
        Rounding up size to full physical extent 4.00 GiB
        Logical volume "vm-103-disk-1"...
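
    A "tar: write error" during a restore is very often just a full target; a hedged first check, assuming the restore writes to local storage on LVM:

        # Free space on the restore path and in the volume group.
        df -h /var/lib/vz
        vgs
        lvs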
  8. Failover Domains question.

    So I've been reading threads about FDs. [No more floppy disks around, so FD = Failover Domain :)] I'm setting up a 3-node cluster with two DRBD resources on nodeA and nodeB, per what e100 has written. nodeC is to provide proper quorum and development. From this thread...
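
    For reference, failover domains are declared in cluster.conf inside the <rm> block; a minimal hedged sketch for the layout described (the domain name is hypothetical, and whether the domain attribute applies to pvevm resources exactly like this should be verified against the wiki):

        <rm>
          <failoverdomains>
            <failoverdomain name="ab_domain" restricted="1" ordered="0" nofailback="1">
              <failoverdomainnode name="nodeA"/>
              <failoverdomainnode name="nodeB"/>
            </failoverdomain>
          </failoverdomains>
          <pvevm autostart="1" vmid="101" domain="ab_domain"/>
        </rm>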
  9. glusterfs and high availability.

    I tested GlusterFS over the last week and am not going to use it for data that needs high availability. It worked great until a reboot and umount caused a split brain. I'll post details if there are responses to this. In the future I think GlusterFS version 3.3+ on top of ZFS will be a way...
  10. /etc/network/interfaces question

    On a new install from the most recent ISO, interfaces looks like this:
        auto lo
        iface lo inet loopback

        auto vmbr0
        iface vmbr0 inet static
            address 10.100.100.73
            netmask 255.255.0.0
            gateway 10.100.100.2
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
        ...
  11. VM is locked (migrate)

    A migration failed, and the KVM will not start; more details on that later. The KVM was controlled by HA. I tried to do an online migration and the target system rebooted - not sure yet if it was a panic or what caused that. Now, trying to start the KVM, this was in the log: Task started...
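
    Once the failed migrate task is confirmed dead on both nodes, the lock can usually be cleared by hand (hedged: forcing a lock off a VM whose state is uncertain carries some risk):

        # Remove the lock from VM 101, then try starting it again.
        qm unlock 101
        qm start 101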
  12. HA cluster question about node priority.

    We have a 3-node cluster. Two of the nodes have more memory and newer hardware than the third node. Is there a way to assign a higher priority to nodes, so that when "/etc/init.d/rgmanager stop" is run the KVMs go to the 2 stronger nodes, and to the third only as a last resort?
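
    This sounds like a job for an ordered failover domain, where a lower priority number means a more preferred node; a hedged cluster.conf sketch (node names hypothetical):

        <failoverdomains>
          <failoverdomain name="strong_first" ordered="1" restricted="0">
            <failoverdomainnode name="node1" priority="1"/>
            <failoverdomainnode name="node2" priority="1"/>
            <failoverdomainnode name="node3" priority="100"/>
          </failoverdomain>
        </failoverdomains>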
  13. after "pvecm delnode", deleted node still shows in "pvecm nodes"

    Hello, we removed a node with "pvecm delnode s002", then powered off s002. However, "pvecm nodes" shows:
        fbc241 s012 ~ # pvecm nodes
        Node  Sts   Inc   Joined                Name
           1    M    264   2012-04-17 06:58:42   fbc240
           2    M    324   2012-04-18 19:50:40   fbc246
           3    X    204...
  14. Connect failed: connect: Connection refused; Connection refused (500)

    Connect failed: connect: Connection refused; Connection refused (500). *SOLVED* This issue was solved by running pvecm updatecerts; see the 2nd post. --- Hello, I have a server which has been running Proxmox 2.0 since December. According to the aptitude logs it originally had proxmox-ve-2.6.32...
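
    For anyone searching for the same error, the fix quoted above boils down to regenerating the node certificates; restarting the daemon afterwards is my assumption, not from the post:

        pvecm updatecerts
        /etc/init.d/pvedaemon restart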
  15. High Availability 2

    We have a couple of containers where data changes a lot. One is a mail system. Currently for high availability we use DRBD and Heartbeat, so disk writes always occur on two physical servers. If the primary system fails, the other takes over and has all the emails. But Heartbeat does not...
  16. High Availability Cluster questions

    Hello, I'm trying to set up a 3-node cluster using the wiki information. First question: does fencing need to be set up before creating the cluster?
  17. heartbeat and Proxmox 2.0

    Hello, we have two 1.9 servers using heartbeat and DRBD. When upgrading the secondary to 2.0 using pve-upgrade-1.9-to-2.0, I noticed heartbeat was removed. After completing the upgrade I tried to install heartbeat and got:
        root@fbc4 /etc # aptitude install heartbeat
        The following NEW...
  18. how do I increase the size of a FreeBSD disk?

    Hello, we have pfSense running low on space. I searched the forums and wiki and could not find how to increase the size of a FreeBSD disk. The disk is IDE. Any suggestions? Otherwise we'll just do a fresh install and restore a backed-up pfSense config file.
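
    One hedged outline with a raw disk image (paths and sizes hypothetical; back up first). Grow the image on the Proxmox host with the VM powered off, then extend the slice and filesystem inside FreeBSD:

        # On the Proxmox host: add 8 GiB to the raw image.
        qemu-img resize /var/lib/vz/images/100/vm-100-disk-1.raw +8G

        # Inside FreeBSD, after enlarging the slice/partition,
        # grow the UFS filesystem (device name is a guess):
        # growfs /dev/ad0s1a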
  19. Native ZFS for Linux on Proxmox

    For testing I want to use extra disks with Proxmox for ZFS. We'll use this for backups for now. I put my install notes here: http://pve.proxmox.com/wiki/ZFS#Native_ZFS_for_Linux_on_Proxmox_2.0 . ZFS is amazing.
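
    As a tiny example of the kind of setup those notes cover, a mirrored backup pool could be created like this (disk names hypothetical):

        # Create a mirrored pool named "backup" from two spare disks.
        zpool create backup mirror /dev/sdb /dev/sdc

        # Check pool health.
        zpool status backup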
  20. after aptitude full-upgrade and reboot, system issues

    *SOLVED* Hello, I have 2 non-clustered systems which have been running 2.0 for 5 weeks. After upgrading fbc240 and rebooting, the 6 or so VZs and a KVM were not started. In /var/log/daemon.log there were many:
        Feb 16 23:49:23 fbc240 pvestatd[2890]: WARNING...
