Thread: New features

  1. #1
    Join Date
    May 2008
    Posts
    5

    Default New features

    Hi,

    I would like to see the following new features:

    1. Installation onto software MD RAID devices

    At the moment it is only possible to install the system onto a single disk (which may be a hardware RAID volume). For us it would be useful to allow installation onto software MD RAID devices. For example, when installing on bare metal with two disks in the system, you currently have to choose /dev/sda or /dev/sdb; there should be an option to create a software RAID0 from them, or an advanced mode in the installer where you can partition the disks yourself (for advanced users).

    2. User management. Allow creating a user who administers a single virtual container (or several). That user should be able to start, stop, back up and redeploy the container, but not change any of its characteristics (disk, memory, ...).


    3. Better limit configuration: allow limit templates in the configuration menu, with limits for CPU, disk I/O and the many other settings available in the OpenVZ environment.

    4. A bug report, or at least strange behaviour: if you stop a virtual container with init 0 or the shutdown command, you cannot start it again from the web interface; you have to start it manually with vzctl start ID. It would be nice to be able to start it from the web interface again, which currently just shows the container as mounted.

    Your work looks promising. Thanks for the code.

  2. #2
    Join Date
    May 2008
    Posts
    5

    Default Small mistake

    The RAID should of course be RAID1 (mirroring).

  3. #3
    Join Date
    Aug 2006
    Posts
    9,920

    Default

    Quote Originally Posted by georg View Post
    Hi,

    I would like to see the following new features:

    1. Installation onto software MD RAID devices

    At the moment it is only possible to install the system onto a single disk (which may be a hardware RAID volume). For us it would be useful to allow installation onto software MD RAID devices. For example, when installing on bare metal with two disks in the system, you currently have to choose /dev/sda or /dev/sdb; there should be an option to create a software RAID0 from them, or an advanced mode in the installer where you can partition the disks yourself (for advanced users).
    Hi Georg,

    please see this thread concerning software RAID.


    Quote Originally Posted by georg View Post
    2. User management. Allow creating a user who administers a single virtual container (or several). That user should be able to start, stop, back up and redeploy the container, but not change any of its characteristics (disk, memory, ...).
    User management is on the roadmap; I have added your comments to it.


    Quote Originally Posted by georg View Post
    3. Better limit configuration: allow limit templates in the configuration menu, with limits for CPU, disk I/O and the many other settings available in the OpenVZ environment.
    The question here is: what is better? We decided to make it as simple as possible, and we also have to take care of running OpenVZ and KVM guests on the same host.

    Quote Originally Posted by georg View Post
    4. A bug report, or at least strange behaviour: if you stop a virtual container with init 0 or the shutdown command, you cannot start it again from the web interface; you have to start it manually with vzctl start ID. It would be nice to be able to start it from the web interface again, which currently just shows the container as mounted.

    Your work looks promising. Thanks for the code.
    Yes, this is already a known bug and we will fix it.
    Best regards,
    Tom


  4. #4
    Join Date
    May 2008
    Posts
    5

    Default

    Quote Originally Posted by tom View Post
    Hi Georg,
    please see this thread concerning software RAID.
    Thanks, I have read those posts. You wrote:
    We initially had software raid, but removed support because it is too
    difficult to recover after crashes.

    I think that it should not be such a big problem. You can just create
    /dev/md0 consisting of /dev/sda and /dev/sdb in RAID1.

    Then you can do your whole partitioning on top of /dev/md0 instead of
    on a single hard drive. The md device is just a small additional layer
    on top of the existing devices.

    It would be very useful to be able to use it this way. Without any RAID protection
    we are not able to use your stuff.

    As it is now, when you have installed it on a single disk and that disk crashes,
    you have a much bigger problem.

    Software RAID should simply be an option. There is no problem writing documentation on how to replace a bad drive in a degraded array, even on the fly when hot-swap disks are available; a rough sketch of the idea is below.
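
    Something like the following, as a minimal sketch. The device names are only examples, the "pve" volume group name is simply the one used elsewhere in this thread, and in practice a small separate RAID1 partition would still be needed for /boot:
    Code:
    # build a RAID1 mirror from two disks
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

    # put LVM2 on top of the md device instead of a bare disk
    pvcreate /dev/md0
    vgcreate pve /dev/md0

    # replacing a failed disk later, even on the fly with hot-swap bays
    mdadm --manage /dev/md0 --fail /dev/sdb
    mdadm --manage /dev/md0 --remove /dev/sdb
    # ...swap the physical drive, then...
    mdadm --manage /dev/md0 --add /dev/sdb
    cat /proc/mdstat        # watch the rebuild progress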


    Quote Originally Posted by tom View Post
    User management is on the roadmap; I have added your comments to it.
    Thanks.
    Quote Originally Posted by tom View Post
    The question here is: what is better? We decided to make it as simple as possible, and we also have to take care of running OpenVZ and KVM guests on the same host.
    Yes, in basic mode that should be enough. But some kind of advanced mode should offer the ability to specify the whole range of values, especially for OpenVZ.

    To begin with, the ability to specify a configuration file would be enough (we can create the configuration files ourselves on disk for our specific needs). I am only talking about the management possibilities: of course you can do all of this from the console, but it is much better to have one central place where you can manage everything. See the sketch below.
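
    For example, something along these lines already works from the console with vzctl, and --save writes the values into the container's configuration file, which could equally serve as a template. The container ID 101 and all values are only examples, and --ioprio depends on the vzctl/kernel version in use:
    Code:
    # relative CPU weight plus a hard CPU limit (in percent)
    vzctl set 101 --cpuunits 1000 --cpulimit 50 --save

    # disk quota, soft:hard
    vzctl set 101 --diskspace 10G:11G --diskinodes 200000:220000 --save

    # disk I/O priority (0-7), if the vzctl/kernel version supports it
    vzctl set 101 --ioprio 4 --save

    # --save writes the settings to the container config, e.g. /etc/vz/conf/101.conf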




    Another new idea: implement bandwidth limitations and traffic accounting, as described at
    http://wiki.openvz.org/Traffic_shaping_with_tc
    and monitor not just CPU and disk but also the network interfaces. Something like the sketch below.
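
    A rough sketch of the kind of setup that wiki page describes, limiting one container to 10 Mbit on the host's venet0 interface. The interface name, the rate and the 10.0.0.101 address are only examples:
    Code:
    # HTB root qdisc on the interface the containers are reached through
    tc qdisc add dev venet0 root handle 1: htb

    # a 10 Mbit class for one container
    tc class add dev venet0 parent 1: classid 1:10 htb rate 10mbit ceil 10mbit

    # steer traffic for the container's IP into that class
    tc filter add dev venet0 parent 1: protocol ip prio 1 u32 \
        match ip dst 10.0.0.101/32 flowid 1:10

    # the per-class counters double as simple traffic accounting
    tc -s class show dev venet0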

  5. #5
    Join Date
    Apr 2005
    Location
    Austria
    Posts
    12,234

    Default

    Quote Originally Posted by georg View Post

    I think that it should not be such a big problem. You can just create
    /dev/md0 consisting of /dev/sda and /dev/sdb in RAID1.
    ...
    Well, the way to set up software RAID1 is well known; for example, our mail gateway (www.proxmox.com) supports software RAID.

    The problems are:

    1.) Most system admins are unable to recover from a software raid error because they never read the documentation.

    2.) RAID1 is not the only RAID level. If we want to fully support software RAID we need a quite complex interface to configure it (RAID5, RAID10, ...).

    3.) We want to use LVM2 (snapshots). This adds an additional level of complexity, e.g. when you have to recover, or when you want to extend your system by adding hard disks.

    4.) One single failed disk can make the whole system unusable. I have observed this with software RAID, but never with hardware RAID.

    Quote Originally Posted by georg View Post
    It would be very useful to be able to use it this way. Without any RAID protection
    we are not able to use your stuff.
    We suggest using hardware RAID instead.

    Can you please elaborate on why you don't want to use HW RAID?

    Quote Originally Posted by georg View Post
    I am only talking about the management possibilities: of course you can do all of this from the console, but it is much better to have one central place where you can manage everything.
    Agreed.

    Quote Originally Posted by georg View Post
    Another new idea: implement bandwidth limitations and traffic accounting, as described at
    http://wiki.openvz.org/Traffic_shaping_with_tc
    and monitor not just CPU and disk but also the network interfaces.
    Thanks for that hint.

  6. #6
    Join Date
    May 2008
    Posts
    5

    Default HW raid vs SW raid

    Hi,

    Quote Originally Posted by dietmar View Post
    Well, the way to set up software RAID1 is well known; for example, our mail gateway (www.proxmox.com) supports software RAID.

    The problems are:

    1.) Most system admins are unable to recover from a software raid error because they never read the documentation.

    2.) RAID1 is not the only RAID level. If we want to fully support software RAID we need a quite complex interface to configure it (RAID5, RAID10, ...).

    3.) We want to use LVM2 (snapshots). This adds an additional level of complexity, e.g. when you have to recover, or when you want to extend your system by adding hard disks.

    4.) One single failed disk can make the whole system unusable. I have observed this with software RAID, but never with hardware RAID.

    We suggest using hardware RAID instead.

    Can you please elaborate on why you don't want to use HW RAID?
    We have had bad experiences with HW RAID. We use HP DL140 and HP DL160 servers, and we found that the performance of the "cheap" SC40Ge HW RAID is much worse than that of SW RAID. The rebuild time of the HW RAID was also much longer than with SW RAID.

    To add to your points:

    1. That is their problem.
    2. Why not use the installer's standard partitioning tools? After partitioning, you could then choose in the installer where to install the system (disks, MD device).
    3. Yes, if you do not know exactly what you are doing it can be confusing.
    4. I also had a bad experience with a HW RAID card from Intel
    http://www.intel.com/design/servers/...s28x/index.htm
    (the rebuild of a single failed 200 GB disk took 5 days at the highest priority)

    and with Adaptec:
    an Adaptec Serial ATA RAID 2810SA broke and was replaced with an
    Adaptec Serial ATA II RAID 2820SA,
    which also had problems, although that was combined with a firmware problem on the WD drives:
    http://www.theinquirer.net/en/inquir...-caviar-drives

    Maybe I would have a better experience with HW RAID now ;-)

    At the moment we would like to use your product for our virtual server offering.

    The basic servers should use MD SW RAID (for cost reasons).

  7. #7
    Join Date
    Apr 2005
    Location
    Austria
    Posts
    12,234

    Default

    Quote Originally Posted by georg View Post
    1. That is their problem.
    We want to provide support for the system, so it is a major issue to keep the system manageable without deep Linux knowledge.

    Quote Originally Posted by georg View Post
    At the moment we would like to use your product for our virtual server offering. The basic servers should use MD SW RAID (for cost reasons).
    I think the best way to deal with your requirements is to install Debian first and then install the pve packages on top; we will provide a Debian 'task' package for that. The rough shape of this is sketched below.
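
    A sketch of how that looks in practice on a plain Debian system (installed, for example, on your own md/LVM layout). The repository line and the proxmox-ve package name below are assumptions from memory, so check the official installation instructions for the exact values:
    Code:
    # add the Proxmox VE repository (suite name matches the Debian release in use)
    echo "deb http://download.proxmox.com/debian lenny pve" >> /etc/apt/sources.list
    wget -O- http://download.proxmox.com/debian/key.asc | apt-key add -
    apt-get update

    # the metapackage pulls in the pve kernel, pve-manager, qemu-server, vzctl, ...
    apt-get install proxmox-ve-2.6.24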

    - Dietmar

  8. #8
    Join Date
    Aug 2009
    Posts
    20

    Default Re: New features

    Quote Originally Posted by dietmar View Post
    I think the best way to deal with your requirements is to install Debian first and then install the pve packages on top; we will provide a Debian 'task' package for that.
    - Dietmar
    Dietmar, so if I install Debian on Linux RAID, can I then install the Proxmox packages from a repository? You ship a different kernel; will that kernel support my already created Linux RAID, or do I have to compile a kernel to enable it?
    Thanks from Argentina.

    Sorry about my English.

  9. #9
    Join Date
    Feb 2009
    Posts
    53

    Default Re: New features

    Quote Originally Posted by lucho115 View Post
    So if I install Debian on Linux RAID, can I then install the Proxmox packages from a repository? You ship a different kernel; will that kernel support my already created Linux RAID, or do I have to compile a kernel to enable it?
    The standard PVE kernel works with software RAID:
    Code:
    Linux vhost2 2.6.24-2-pve #1 SMP PREEMPT Wed Jan 14 11:32:49 CET 2009 x86_64
    vhost2:~# cat /proc/mdstat 
    Personalities : [raid1] [raid10] 
    md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
          1952523904 blocks 64K chunks 2 near-copies [4/4] [UUUU]
          
    md0 : active raid1 sda1[0] sdb1[1]
          497856 blocks [2/2] [UU]
    
    vhost2:~# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/pve-root  100G  7.9G   93G   8% /
    tmpfs                 5.9G     0  5.9G   0% /lib/init/rw
    udev                   10M   96K   10M   1% /dev
    tmpfs                 5.9G     0  5.9G   0% /dev/shm
    /dev/mapper/pve-data  1.8T  110G  1.7T   7% /var/lib/vz
    /dev/md0              471M   79M  368M  18% /boot
    
    vhost2:/tmp# pveperf 
    CPU BOGOMIPS:      42564.35
    REGEX/SECOND:      250158
    HD SIZE:           99.95 GB (/dev/mapper/pve-root)
    BUFFERED READS:    112.48 MB/sec
    AVERAGE SEEK TIME: 10.81 ms
    FSYNCS/SECOND:     53.04
    DNS EXT:           82.35 ms
    
    vhost2:~# pveversion -v
    pve-manager: 1.3-1 (pve-manager/1.3/4023)
    qemu-server: 1.0-14
    pve-kernel: 2.6.24-8
    pve-kvm: 86-3
    pve-firmware: 1
    vncterm: 0.9-2
    vzctl: 3.0.23-1pve3
    vzdump: 1.1-2
    vzprocps: 2.0.11-1dso2
    vzquota: 3.0.11-1dso1
    This is a 1U Supermicro with 4 x 1 TB drives, an i7-920 and 12 GB of RAM.

  10. #10
    Join Date
    Aug 2009
    Posts
    20

    Default Re: New features

    Quote Originally Posted by fromport View Post
    Code:
     vhost2:/tmp# pveperf 
    CPU BOGOMIPS:      42564.35
    REGEX/SECOND:      250158
    HD SIZE:           99.95 GB (/dev/mapper/pve-root)
    BUFFERED READS:    112.48 MB/sec
    AVERAGE SEEK TIME: 10.81 ms
    FSYNCS/SECOND:     53.04
    DNS EXT:           82.35 ms
    This is a 1U Supermicro with 4 x 1 TB drives, an i7-920 and 12 GB of RAM.
    OK, thanks, but is this pveperf result OK? Only "FSYNCS/SECOND: 53.04"? I thought RAID10 would be faster. Does the system work OK? Is it in production?
    Thanks

  11. #11
    Join Date
    Jul 2008
    Posts
    29

    Default Re: New features

    We've been running with MD SW RAID: we installed Lenny first, then followed the directions to install PVE on top of that. However, there is a hitch in the get-along, at least for us. We solved it with the following:

    Code:
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    Then edit /etc/mdadm/mdadm.conf and comment out the first set of references to the MD devices, so that the only references left are the ones added by the step above.

    Then you need to do the following:

    Code:
    cd /lib/modules
    update-initramfs -k 2.6.24-7-pve -c -t
    Note that 2.6.24-7-pve may change if there is a newer version; see below for how to check.
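
    To see which -pve kernel is installed or currently running before building the initramfs:
    Code:
    uname -r          # the running kernel, e.g. 2.6.24-7-pve
    ls /lib/modules   # every kernel version with modules installed on this host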

    This worked for us, use at your own risk.

    We are running with no problems on Linux SW RAID10. It has been very successful to date.

  12. #12
    Join Date
    May 2009
    Posts
    24

    Default Re: New features

    Quote Originally Posted by dietmar View Post
    We want to provide support for the system, so it is a major issue to keep the system manageable without deep Linux knowledge.



    I think the best way to deal with your requirements is to install Debian first and then install the pve packages on top; we will provide a Debian 'task' package for that.

    - Dietmar
    Dietmar,

    I have read your comments and the requests for mdadm in other posts, and thought I would add my two cents. In my previous job I worked for a large OEM as a Linux support analyst for our enterprise customers. Almost all of them used mdadm instead of the onboard hardware RAID provided or available from the OEM for enterprise-level servers. This surprised me, so I started asking them why they used mdadm instead of the hardware RAID.

    They made pretty good cases for using SW RAID: 1. performance was as good as all but the most expensive controllers (running Linux, not Windows); 2. portability, since you could take the drives of your RAID array and drop them into another server; 3. no lock-in, because with hardware RAID on an old server a failed controller could cost you all your data, as even a change in chipset or BIOS could make the array unreadable (I have seen this many times).

    The drawback? No battery-backed cache. If the system shuts down unexpectedly, you risk unrecoverable data corruption. One mitigation is sketched below.
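
    One common way to reduce that risk with software RAID, at a real performance cost (compare the FSYNCS/SECOND numbers elsewhere in this thread), is to turn off the drives' volatile write cache, for example:
    Code:
    hdparm -W /dev/sda     # show the current write-cache setting
    hdparm -W0 /dev/sda    # disable the drive's write cache
    hdparm -W1 /dev/sda    # re-enable it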

    Aside from this one thing (no built-in support for mdadm), you have made a sound product that is yards ahead of Virtuozzo (which I hate) and HyperVM (no longer supported), plus we can deploy KVM, which is the future.

  13. #13
    Join Date
    May 2009
    Posts
    24

    Default Re: New features

    Quote Originally Posted by fromport View Post
    Code:
    vhost2:/tmp# pveperf 
    CPU BOGOMIPS:      42564.35
    REGEX/SECOND:      250158
    HD SIZE:           99.95 GB (/dev/mapper/pve-root)
    BUFFERED READS:    112.48 MB/sec
    AVERAGE SEEK TIME: 10.81 ms
    FSYNCS/SECOND:     53.04
    DNS EXT:           82.35 ms
    Here is mine, from a hardware RAID10 (Adaptec controller), also on a Supermicro (2U):
    Code:
    CPU BOGOMIPS: 40403.67
    REGEX/SECOND: 592889
    HD SIZE: 94.49 GB (/dev/pve/root)
    BUFFERED READS: 210.59 MB/sec
    AVERAGE SEEK TIME: 14.87 ms
    FSYNCS/SECOND: 503.83
    DNS EXT: 181.26 ms
    DNS INT: 45.49 ms (jadase.net)

    Notice you are faster in every category!

  14. #14
    Join Date
    Feb 2009
    Posts
    53

    Default Re: New features

    Quote Originally Posted by lucho115 View Post
    Only "FSYNCS/SECOND: 53.04"? I thought RAID10 would be faster. Does the system work OK? Is it in production?
    Thanks
    Yes, about 20 VMs are running on it, of which 8 are KVM guests.
    Code:
    vhost2:~# procinfo
    Linux 2.6.24-2-pve (root@oahu) (gcc [can't parse]) #???  8CPU [vhost2.(none)]
    
    Memory:      Total        Used        Free      Shared     Buffers      
    Mem:      12288952     6378444     5910508           0          56
    Swap:       995944          92      995852
    
    Bootup: Mon Jun 29 20:29:47 2009    Load average: 2.92 2.78 2.55 1/221 22526
    
    user  :  22d  6:18:46.75   6.5%  page in : 23774774  disk 1: 30037038r51253478w
    nice  :   1d 22:32:09.00   0.5%  page out:698557285  disk 2: 28750172r51225305w
    system:   8d 11:48:12.61   2.4%  page act:  4188876  disk 3: 29375317r52543944w
    IOwait:       2:58:56.22   0.0%  page dea:  1212726  disk 4: 26589453r52521652w
    hw irq:       0:31:54.70   0.0%  page flt:4759994403
    sw irq:       3:00:38.75   0.0%  swap in :        0
    idle  : 257d  2:23:04.37  75.4%  swap out:       24
    uptime:  42d 14:43:55.66         context :61547219605
    
    irq    0:3144299918 timer                 irq   19:         0 uhci_hcd:usb3,       
    irq    1:         2 i8042                 irq   21:         0 uhci_hcd:usb2        
    irq    3:         0 serial                irq   23:         0 uhci_hcd:usb4,       
    irq    4:        14                       irq 2293:         4 ahci                 
    irq    8:         3 rtc                   irq 2294:         0 eth1                 
    irq    9:         0 acpi                  irq 2295:         0 eth1-Q0              
    irq   12:         0 i8042                 irq 2296:         0 eth0                 
    irq   16:         0 uhci_hcd:usb1         irq 2297:         0 eth0-Q0              
    irq   18:         1 uhci_hcd:usb6,
    So the FSYNCS/second figure is probably "influenced" by that load.

  15. #15
    Join Date
    Aug 2009
    Posts
    20

    Default Re: New features

    Quote Originally Posted by lucho115 View Post
    OK, thanks, but is this pveperf result OK? Only "FSYNCS/SECOND: 53.04"? I thought RAID10 would be faster. Does the system work OK? Is it in production?
    Thanks
    Did you disable the disks' cache?

  16. #16
    Join Date
    May 2009
    Posts
    24

    Default Re: New features

    Quote Originally Posted by lucho115 View Post
    Did you disable the disks' cache?
    No, we are too cheap to buy cached controllers; there is none.

  17. #17
    Join Date
    Feb 2009
    Posts
    53

    Default Re: New features

    Quote Originally Posted by Cybodog View Post
    Here is mine, from a hardware RAID10
    [SNIP]
    Notice you are faster in every category!
    You mean slower?
    Is your server under any load?

    I did a test with hdparm:
    md0 = software RAID1 (2 drives)
    md1 = software RAID10 (4 drives)
    For comparison I also included one bare drive.
    The drives are WD Black 1 TB, by the way.
    Code:
    sync; hdparm -tT /dev/md0; sync ;  hdparm -tT /dev/md1
    
    /dev/md0:
     Timing cached reads:   14080 MB in  1.99 seconds = 7060.89 MB/sec
     Timing buffered disk reads:  286 MB in  3.01 seconds =  95.15 MB/sec
    
    /dev/md1:
     Timing cached reads:   13762 MB in  1.99 seconds = 6899.60 MB/sec
     Timing buffered disk reads:  470 MB in  3.01 seconds = 156.34 MB/sec
    
    vhost2:~# sync; hdparm -tT /dev/sda
    
    /dev/sda:
     Timing cached reads:   14442 MB in  1.99 seconds = 7244.80 MB/sec
     Timing buffered disk reads:  294 MB in  3.01 seconds =  97.67 MB/sec
    
    vhost2:~# hdparm -i /dev/sda
    
    /dev/sda:
    
     Model=WDC WD1001FALS-00J7B0                   , FwRev=05.00K05, SerialNo=     WD-WMATV0487135
     Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
     RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50
     BuffType=unknown, BuffSize=32767kB, MaxMultSect=16, MultSect=?0?
     CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=1953525168
     IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
     PIO modes:  pio0 pio3 pio4 
     DMA modes:  mdma0 mdma1 mdma2 
     UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 
     AdvancedPM=no WriteCache=enabled
     Drive conforms to: Unspecified:  ATA/ATAPI-1,2,3,4,5,6,7
    Again: this system is under load and running guests, so all the figures are rough indications without any real value.
    I had hoped that my new Supermicro 6026TT chassis would be here by now. For a start it is being equipped with two E5530s.
    I am also building an iSCSI server for storage.
    Really looking forward to the "new" PVE release.

  18. #18
    Join Date
    Aug 2009
    Posts
    20

    Default Re: New features

    Quote Originally Posted by mikeborschow View Post
    We've been running with MD SW RAID: we installed Lenny first, then followed the directions to install PVE on top of that. However, there is a hitch in the get-along, at least for us. We solved it with the following:

    Then edit /etc/mdadm/mdadm.conf and comment out the first set of references to the MD devices, so that the only references left are the ones added by the step above.

    Then you need to do the following:

    Note that 2.6.24-7-pve may change if there is a newer version.

    This worked for us, use at your own risk.

    We are running with no problems on Linux SW RAID10. It has been very successful to date.
    fromport, do you need to do the same after every kernel update, or does this problem no longer exist with the kernel from version 1.3 and up?

    I have been reading all the posts about Linux RAID, and I realise that the whole problem is about batteries keeping the cache alive so it can be written to the disks in a power-down situation. So, from my ignorance: does any kind of battery exist that could be attached to individual disks, or any other way to do this? That would solve the whole problem with Linux RAID: we could use SW RAID with the cache on and the performance would be OK. Sorry if my question makes no sense.

    Thanks, and sorry again for my English.

  19. #19
    Join Date
    May 2009
    Posts
    24

    Default Re: New features

    No, no load. I have KVM guests running but they are not doing much. I thought I meant "faster", but after looking at the numbers again I see you are getting more throughput and I simply read them backwards. So much for supporting my arguments for software RAID!

  21. #21
    Join Date
    Apr 2005
    Location
    Austria
    Posts
    12,234

    Default Re: New features

    Quote Originally Posted by lucho115 View Post
    I have been reading all the posts about Linux RAID, and I realise that the whole problem is about batteries keeping the cache alive so it can be written to the disks in a power-down situation. So, from my ignorance: does any kind of battery exist that could be attached to individual disks, or any other way to do this? That would solve the whole problem with Linux RAID: we could use SW RAID with the cache on and the performance would be OK. Sorry if my question makes no sense.
    You just invented the ultimate solution ;-) But AFAIK there is no such solution on the market. I guess newer RAID controllers will replace the battery with a flash memory cache (Adaptec SAS-5805Z). I have also heard about disks using flash memory, but I do not know of any vendor.

    Also, I have no idea how SSDs behave. AFAIK they have large amounts of volatile RAM cache. Does anybody know if/how that cache is protected against power loss?

  22. #22
    Join Date
    Feb 2009
    Posts
    53

    Default Re: New features

    Quote Originally Posted by dietmar View Post
    You just invented the ultimate solution ;-) But AFAIK there is no such solution on the market. I guess newer RAID controllers will replace the battery with a flash memory cache (Adaptec SAS-5805Z). I have also heard about disks using flash memory, but I do not know of any vendor.

    Also, I have no idea how SSDs behave. AFAIK they have large amounts of volatile RAM cache. Does anybody know if/how that cache is protected against power loss?
    I don't get the whole battery backup thing.
    Don't we all put our servers on UPSes these days?
    The really mission-critical servers have redundant power supplies, with one leg on the mains (protected by the building owner with a big generator set) and the other leg on a big 120 kVA UPS. The chance of the power going out is really minimal.

  23. #23
    Join Date
    Aug 2009
    Posts
    20

    Default Re: New features

    Quote Originally Posted by fromport View Post
    I don't get the whole battery backup thing.
    Don't we all put our servers on UPSes these days?
    The really mission-critical servers have redundant power supplies, with one leg on the mains (protected by the building owner with a big generator set) and the other leg on a big 120 kVA UPS. The chance of the power going out is really minimal.
    The motherboard could break, or the CPU, etc. In a homemade environment, a 12 V battery in parallel with the disks would solve the problem, like in old car stereos, hehe. I will try it.
    Thanks
