With the firewall rules below, the machine can reach IPv4 addresses without any issue.
But for some reason IPv6 can't get through. (The default policy is drop in / drop out.)
When I enable the first 2 lines, IPv6 has internet again.
Nothing seems to be blocked in the logs, am I missing...
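With a default-drop policy, the usual IPv6 culprit is ICMPv6: IPv6 relies on it for neighbour discovery and router advertisements, so dropping it kills connectivity entirely. A minimal ip6tables sketch of the rules that typically restore this (these are illustrative, not the poster's actual rules):

    # ICMPv6 is required for NDP and router advertisements
    ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
    ip6tables -A OUTPUT -p ipv6-icmp -j ACCEPT
    # let replies to outbound connections back in
    ip6tables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

Dropped ICMPv6 also wouldn't necessarily show up in logs that only record TCP/UDP rules, which would explain why nothing looks blocked.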
We have sanoid / syncoid running to manage snapshots and replicate them.
Is it possible to show the snapshots they take in the snapshot overview in Proxmox?
Hi, I was thinking of building a small server for the office based on the new Ryzen processors (after they fixed the Linux issue).
Something like this:
Ryzen 3900X
64GB ECC RAM
Asrock Rack X470D4U
Intel 905P drive as cache/SLOG
3x 10TB WD Red in RAID-Z2
I will be running a few VMs on them of...
I was hoping someone could help me clear something up.
When making a ZFS snapshot and sending it offsite, the sizes don't really look like what they should be.
What I mean is this: I made a snapshot and the size of the snapshot is 26.4M.
When transferring this same snapshot with zfs send /...
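The snapshot's USED value only counts blocks unique to that snapshot, while a full zfs send stream carries everything the snapshot references, so the two figures will rarely agree. A dry run shows the stream size up front (dataset and snapshot names are made up):

    # estimate a full send stream without actually sending
    zfs send -nv tank/data@snap1
    # estimate an incremental stream between two snapshots
    zfs send -nv -i tank/data@snap1 tank/data@snap2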
My containers get their IPv6 address assigned through SLAAC; this works.
They can also ping and connect to the outside over IPv6.
But for some reason they won't accept any incoming connection.
Not from the internet, but also not locally on their public IP. (Connections are open to the internet.)
As far as...
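When outbound IPv6 works but inbound doesn't, a useful first step is to confirm the service is actually listening on a v6 socket and then probe the public address from the same LAN; a quick sketch (the address and port are made up):

    # inside the container: is anything bound to a v6 socket?
    ss -tln6
    # from another host on the LAN: probe the public IPv6 address directly
    nc -6 -zv 2001:db8::100 80

If the local probe also fails, the container's own firewall or the host bridge is the more likely culprit than anything upstream.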
I saw that QNAP has a NAS with a Ryzen processor.
I was wondering if anyone has a Proxmox build with a Ryzen CPU and ZFS drives.
If so, what is your configuration?
I want to build a new server.
I was thinking of an i7 7820X with 128GB RAM.
Now the question is how to arrange the hard drives.
I have 9x 1TB HDDs which I would like to keep using.
How can I optimize for speed with ZFS?
Should I buy an NVMe drive for L2ARC & ZIL?
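If you add an NVMe device, it can be split between a SLOG (for the ZIL) and L2ARC; one possible layout as a sketch (pool, disk, and partition names are assumptions):

    # nine disks in a single RAID-Z2 vdev
    zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi
    # NVMe partitions as separate log and cache devices
    zpool add tank log /dev/nvme0n1p1
    zpool add tank cache /dev/nvme0n1p2

Keep in mind that a SLOG only accelerates synchronous writes, and with 128GB of RAM the ARC may be large enough that L2ARC adds little.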
Hi, we have 3 nodes running Ceph with Bluestore OSDs.
The disks are HDDs.
We get medium performance out of this.
Currently the nodes also run the containers and VMs.
I would like to introduce a new node which will run all containers and VMs and no OSDs.
Hopefully this will increase...
After running the upgrade to Luminous, my data pool seems to be gone, but it is still accessible.
root@nod2:~# ceph status
  cluster:
    id:     d13548c9-2763-4d87-bf30-27de2be235fd
    health: HEALTH_WARN
            crush map has straw_calc_version=0
            no active mgr
  services...
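Both warnings are normal right after a Luminous upgrade: Luminous makes the ceph-mgr daemon mandatory, and old CRUSH tunables trigger the straw_calc_version notice. On a Proxmox 5 node, the follow-up usually looks roughly like this:

    # create and start a manager daemon (new requirement in Luminous)
    pveceph createmgr
    # update CRUSH tunables, which also sets straw_calc_version=1
    ceph osd crush tunables optimal

Changing the tunables can trigger a rebalance, so it is best done in a quiet window.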
When I buy an OEM license for a VM this should work, but it would be bound to that VM.
If I reinstall or recreate the VM, the license would become invalid, I guess.
Is it possible through the SMBIOS option to give it the same hardware ID if I recreate the VM?
Or do you need to buy a...
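Proxmox can pin the SMBIOS UUID on a VM, so a rebuilt VM can present the same hardware identity; a sketch (the VMID and UUID are made up):

    # give VM 100 a fixed SMBIOS UUID that survives recreating the VM
    qm set 100 -smbios1 uuid=564d9bb1-1e84-4e7d-8b3f-aabbccddeeff

Whether that is enough to keep the OEM activation valid is a licensing question rather than a technical one.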
Ceph Jewel was acting strange, so I rebooted my servers, and then Ceph didn't want to come up.
The OSDs were down (except for one).
Starting the OSDs with systemctl didn't work either; it seems to hang on authentication.
The error was:
** ERROR: unable to open OSD superblock on...
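That superblock error often means the OSD's data partition simply was not mounted after the reboot. On a Jewel-era node, two quick checks (the paths are the stock ceph-disk layout):

    # is the OSD data directory actually mounted?
    mount | grep /var/lib/ceph/osd
    # ask ceph-disk to re-activate any OSD partitions it can find
    ceph-disk activate-all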
This seems like an awesome addition to / replacement for the current backup solution of Proxmox.
It is made for storing VM backups offsite:
https://github.com/wamdam/backy2
I am writing a backup script for Ceph volumes.
It makes a full backup of the RBD volume once, then continues making diff backups.
This makes it a lot easier to store offsite, as you only have to transfer the full backup once.
The next step will be to merge the differences after a month or so...
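The rbd tooling maps directly onto that workflow; a sketch of the full-then-diff-then-merge cycle (pool, image, snapshot, and file names are assumptions):

    # one-time full export from a baseline snapshot
    rbd snap create rbd/vm-100-disk-1@base
    rbd export rbd/vm-100-disk-1@base /backup/full.img
    # each day: new snapshot, then export only the changed blocks
    rbd snap create rbd/vm-100-disk-1@day1
    rbd export-diff --from-snap base rbd/vm-100-disk-1@day1 /backup/day1.diff
    rbd snap create rbd/vm-100-disk-1@day2
    rbd export-diff --from-snap day1 rbd/vm-100-disk-1@day2 /backup/day2.diff
    # after a month: collapse consecutive diffs into one
    rbd merge-diff /backup/day1.diff /backup/day2.diff /backup/merged.diff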
I am not an advanced network engineer.
I would like to connect my modem to the switch and give it a VLAN tag.
Then set a VLAN tag on the network interface of a KVM guest and run the firewall in there.
As a test I gave a PC a VLAN tag in my switch, set the switch port to Access, and...
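For the KVM side, tagging the guest's NIC is a single Proxmox command, provided the bridge sits on a trunk port; a sketch (VMID, bridge, and VLAN ID are assumptions):

    # put VM 105's first NIC on VLAN 10 via bridge vmbr0
    qm set 105 -net0 virtio,bridge=vmbr0,tag=10

The switch port facing the Proxmox host then has to carry VLAN 10 tagged, while the modem's port stays an untagged access port in that VLAN.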
This is more a Ceph question than a Proxmox question, I guess.
But I want to back up the images in my Ceph pool offsite, preferably incrementally; how would one do this?
Can I take a snapshot like
rbd snap create vm-105-disk-1@Initial
and then maybe for each day
rbd snap create...
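Snapshots by themselves stay inside the pool; for an offsite copy, the usual pattern is to pair each day's snapshot with rbd export-diff against the previous one and replay it remotely (the daily snapshot name and backup pool are assumptions):

    # export only the blocks changed since the Initial snapshot
    rbd export-diff --from-snap Initial vm-105-disk-1@day1 /backup/day1.diff
    # on the offsite cluster: replay the diff into a standby image
    rbd import-diff /backup/day1.diff backup-pool/vm-105-disk-1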
I upgraded one node and all went fine except for two small issues: Ceph wouldn't start.
It complained there was no config file, but it's there.
When starting it with -c pointing to the config file, it works?
And I keep getting the following error when starting things:
libust[22857/22857]: Warning: HOME...
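On Proxmox, /etc/ceph/ceph.conf is normally a symlink into the pmxcfs cluster filesystem, so "no config file" while the file appears to exist often means the symlink target is not mounted; a quick check (paths as on a stock PVE install):

    # the symlink should point into /etc/pve, which only exists while pve-cluster is running
    ls -l /etc/ceph/ceph.conf
    systemctl status pve-cluster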
So looking forward to finally being able to use containers on Ceph instead of only KVM!!! :cool:
Hope this will come in v4.0
I'll instantly convert all Linux VMs to LXC :rolleyes:
Is it possible in any way to directly map an RBD device from Ceph,
given that the RBD module is not present in the PVE kernel?
I would like to convert an OpenVZ container to KVM; to make this easier it would help to be able to map an RBD device.
Or does anyone know of another method?
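If the kernel module really is unavailable, qemu-img can reach Ceph in userspace through librbd, which sidesteps mapping entirely; a sketch (pool, image, and source path are assumptions):

    # copy a local raw image straight into an RBD image, no kernel mapping needed
    qemu-img convert -f raw -O raw /var/lib/vz/images/105/disk.raw rbd:rbd/vm-105-disk-1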