If you are looking around for an alternative storage solution that can compete with big-iron providers such as IBM, EMC, NetApp, or Hitachi Data Systems, or for a way to build a cloud solution using commercial and open-source components – you should watch the webinar led by @Marek Lubinski – Senior Engineer @LeaseWEB.
Marek presented a cloud solution built on:
- Nexenta storage HA clusters
- Supermicro NFS heads
- Apache Cloudstack as a cloud framework
- KVM and vSphere hypervisors – the target is to run KVM only
- HP ProLiant and Dell PowerEdge servers
Marek explained in detail the target setup, along with the caveats, challenges, and solutions to problems faced during implementation and migration to the new platform. I highly recommend watching the presentation from beginning to end.
[box type="download"] WebeX recorded presentation
Presentation in PDF format[/box]
Thanks Artur for posting. I hope that people who follow your blog will find it useful, and won’t be scared to use NexentaStor 🙂
Good presentation, but I just don’t get why you started with raidz2 (in your first configuration it’s as if you only have the performance of 11 disks). In ZFS, random-workload performance is defined by the number of vdevs. With ZFS and a random workload you have only one way to go: mirrored vdevs. And by the way, if you run the test, I’m sure that with mirrored vdevs two ZeusRAMs are sufficient. How many disks do you have in your new configuration? Because 12k IOPS isn’t that much. P.S.: I don’t like the OCZ SSDs, I think…
n00n, thanks for your reply. You are right (but only partially). We started with raidz2 and we did indeed have the performance of 11 disks, BUT that was write performance. That means that when writing we were able to push around 1,100 raw write IOPS, but still around 7–8k raw read IOPS, because with vdevs you get as many write IOPS as a single vdev, but as many read IOPS as you have spindles (combined, of course). And this setup was sized perfectly for what we estimated at the beginning (based on our input). Really, until we moved 1,200 VMs to this…
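As a rough back-of-the-envelope illustration of the rule of thumb the reply describes (random write IOPS scale with the number of vdevs, random read IOPS with the total number of spindles), here is a small sketch. The per-disk figure of ~100 random IOPS is an assumption typical of 7.2k-rpm drives, and the 22-disk layouts are hypothetical examples – neither comes from the webinar or the commenters' actual pools.

```python
# Rule-of-thumb ZFS pool IOPS estimator (a sketch, not a benchmark).
# Assumption: ~100 random IOPS per 7.2k-rpm spindle (hypothetical figure).

def pool_iops(total_disks, disks_per_vdev, per_disk_iops=100):
    vdevs = total_disks // disks_per_vdev
    # Random writes: roughly one disk's worth of IOPS per vdev.
    write_iops = vdevs * per_disk_iops
    # Random reads: roughly all spindles combined (mirrors can also
    # serve reads from every side).
    read_iops = vdevs * disks_per_vdev * per_disk_iops
    return write_iops, read_iops

# Same 22 disks, two layouts:
print(pool_iops(22, 11))  # 2 x 11-disk raidz2 vdevs -> (200, 2200)
print(pool_iops(22, 2))   # 11 x 2-disk mirrors      -> (1100, 2200)
```

This is why, for the same disk count, mirrored vdevs give far better random write performance: more vdevs means more independent write streams, which is the point n00n was making.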