Hi,
Initially empty; {'*pve-host*' => '{"release":"7.1","version":"7.1-8","repoid":"5b267f33"}'} after restarting pvestatd.
Deleting users and adding TFA data also work now.
I just checked the log; I did not reboot the nodes after creating the cluster (nor when setting up a lonely test...
That cluster is not yet in production; it would be a minor nuisance to reinstall, but not a problem : )
I have no subscriptions for those nodes yet (and the other cluster with subscriptions is still at PVE 6 : /; there, user deletion works just fine), but the non-subscription software should be...
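For completeness, the restart that made the version info reappear was just the following, run on each node:
<pre>
# restart the statistics daemon so it re-broadcasts its version/release info
systemctl restart pvestatd
</pre>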
Hey,
0.: Is it okay to "revive" a thread already marked as solved?
Because I think the other issue that popped up, being unable to delete a user, is still present.
Yesterday we had the same issue - while creating accounts I misclicked and created an account for the colleague as "foo@pam"...
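For reference, the CLI side of the cleanup I would expect to use is sketched below (pveum on a PVE 7.x node; whether it runs into the same deletion error as the GUI is exactly the open question):
<pre>
# list users in all realms to confirm the accidental account exists
pveum user list
# delete the accidentally created account
pveum user delete foo@pam
</pre>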
hi,
In short: I have converted my Debian Xen host to a Proxmox host, with the Enterprise repository enabled.
Today I wanted to create a new VM on a single VE host, but I get "proxy detected vanished client connection" messages. It seems that this problem is storage related, because it happens only...
hi,
it seems that I found (one?) reason:
I got a lot of:
<pre>
pve-firewall[234067]: status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
</pre>
errors, and the ruleset for IPv6 wasn't added. The reason was a (via...
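In case someone hits the same thing: a harmless way to look at what pve-firewall is actually handing to ip(6)tables-restore is the compile subcommand, which only prints the generated rule set:
<pre>
# show whether the firewall is enabled and running
pve-firewall status
# print the generated IPv4/IPv6 rule set without applying it,
# so a malformed line that ip6tables-restore rejects can be spotted
pve-firewall compile
</pre>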
hi,
hmm, I tested around a bit ... enabled IPv6 on host/datacenter and VM level, via Proxmox -> Firewall -> Add Rule -> Proto "ipv6 ... ipv6-icmp ...", but nothing. I also see nothing in the pve-firewall.log about dropped IPv6 packets ...
I also removed the [x] firewall option from the interface, which only has an...
I've got most things working, except IPv6. After starting the firewall (with some rules), IPv6 traffic from the KVM VMs stops working. For example, ping to ipv6.google.com -> destination unreachable.
Where do I have to set the correct rules to get IPv6 working for my VMs?
any suggestions?
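For context, a minimal sketch of the per-VM rule file I have in mind, which at least lets ICMPv6 (neighbour discovery, ping) through; the vmid 100 is just a placeholder, and it assumes the pve-firewall version in use supports IPv6 filtering at all:
<pre>
# /etc/pve/firewall/100.fw  (vmid 100 is an example)
[OPTIONS]
enable: 1

[RULES]
# ICMPv6 is needed for neighbour discovery / router advertisements, not just ping
IN ACCEPT -p ipv6-icmp
OUT ACCEPT -p ipv6-icmp
</pre>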
hi,
my private root server currently runs Debian Wheezy with Xen and six paravirtualized VMs. I have 5 public IPs:
Host itself
Mail VM
Web VM
Jabber VM
DNS
One VM has only a private IP, but public IPv6.
All VMs also have a private IP for internal communication ...
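Roughly what I have in mind for the interfaces file is sketched below: one bridge on the public NIC for the VMs with public IPs, and one isolated bridge for the internal/private network. The NIC name and all addresses are placeholders.
<pre>
# /etc/network/interfaces (sketch; eth0 and the addresses are placeholders)
auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10
        netmask 255.255.255.0
        gateway 203.0.113.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

# host-internal bridge for the private IPs of all VMs
auto vmbr1
iface vmbr1 inet static
        address 192.168.100.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
</pre>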
hi,
at the moment, all of our IBM X3550M3 / X3750M4 / Sun ... Proxmox (3.2) nodes run with 2 x 1 Gbit/s for iSCSI and cluster communication (multipath) and 2 x 1 Gbit/s for external communication (LACP bonding).
We are planning a new network with 10 Gbit/s with the following hardware:
For...
hi,
For external communication I have one bond0 configured with LACP (Cisco). bond0 is a trunk interface with all the VLANs I need. The problem is that the node's external address (for the web interface) is in the same VLAN that I need for VMs. How should I configure Proxmox 3.2 so that I can access...
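One pattern I am considering (sketch only; VLAN 10, the NIC names and the addresses are placeholders) is to put a bridge on the tagged sub-interface and give that bridge the node address, so that VMs in the same VLAN simply attach to it untagged:
<pre>
# /etc/network/interfaces (sketch for PVE 3.x)
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode 802.3ad
        bond_miimon 100

# bridge on the tagged sub-interface bond0.10: carries the node/web interface
# address; VMs in the same VLAN use vmbr10 without setting a tag
auto vmbr10
iface vmbr10 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bridge_ports bond0.10
        bridge_stp off
        bridge_fd 0
</pre>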
hi Udo,
thanks for the reply, and sorry for the very late answer :-) My goal is to keep most parts identical, so the backup VM too. Everything is KVM, which makes the documentation easier :-), but the IO is not a real problem at this time, because everything can be "backed up" in a...
hi,
I got 4 new IBM V3700 RBODs and they are not compatible with our SAS switch (LSI 6160) :-( So I have to move all VMs from our "old" environment to the new one. After that, I have 4 x LSI 630J JBODs with 12 x 2TB SAS drives (Seagate Constellation) free. My idea was to connect one JBOD to our 4...
hi, we have 5 nodes in a cluster configuration. All use iSCSI over 2 x 1 Gbit/s in a multipath configuration (the iSCSI hosts are Open-E DSS7 in a cluster setup, connected to 2 x RAID5 volumes (6 x 2TB SAS Seagate) in a JBOD via 6 Gbit/s SAS). At the moment we have ~30 KVM VMs active. Some with very low I/O...
hi,
yes, I solved the problem, but not on the Proxmox side. The main problem was a broken iSCSI / DRBD setup on the Open-E DSS7. DSS7 is a black box, so I don't know what the root cause was, but the support service fixed the issue.
Sorry that I can't help you.
hi,
I am migrating from Proxmox 2.x with several (KVM) VMs to a cluster-enabled Proxmox 3.1. I do that by shutting down the VM, copying the image to the new host, converting it to raw and dd'ing it to the LVM volume ... I thought that there must be a problem with the conversion, but I started the original VM on the old...
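For reference, the convert-and-dd step looks roughly like this (image, VM id and VG names are placeholders):
<pre>
# on the old host: convert the copied image to raw
qemu-img convert -O raw vm-101-disk-1.qcow2 vm-101-disk-1.raw
# on the new host: write the raw image onto the prepared logical volume
dd if=vm-101-disk-1.raw of=/dev/vg_name/vm-101-disk-1 bs=1M
</pre>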
Yes, I think I do. We use two Open-E DSS7 iSCSI storage servers (with underlying BBU RAID5) in a failover configuration. With DSS7 you have no NFS failover (not implemented yet). So I have created an iSCSI target and created a shared VG in Proxmox (with "use LUNs directly" disabled, as...
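Roughly what that corresponds to on the CLI (the storage names, portal and target below are placeholders; disabling "use LUNs directly" just means the LUN only serves as the physical volume for the shared VG):
<pre>
# add the iSCSI target itself, without exposing its LUNs directly as guest disks
pvesm add iscsi san-iscsi --portal 10.0.0.1 --target iqn.2012-01.com.example:storage --content none
# add an LVM storage on the VG created on that LUN, marked shared for the whole cluster
pvesm add lvm san-lvm --vgname vg_san --shared 1 --content images
</pre>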