RAID 0 Stripe Sizes Compared with SSDs: OCZ Vertex Drives Tested

SSD by jmke @ 2010-01-15

We all know that two is better than one: we have dual-core CPUs, dual-GPU video cards, and if you really want to get the most out of your storage, a set of SSDs in RAID will boost your performance noticeably. We tested 6 different RAID stripe sizes and 3 different RAID configs in 4 different storage benchmarks, some synthetic, others real-world operations. More than 1200 benchmark results, summed up in a few charts.

Introduction & Test Setup

Introduction

“All good things come in pairs”: we have two eyes, two ears, two CPU cores, dual-GPU configs. So why not link up two storage devices? When RAID was first conceived it certainly had a business-minded approach: increase redundancy without impacting performance (too much). But with more affordable RAID chips we have been playing around with RAID on desktop systems for many years now.

RAID 0 is what it is all about on desktop systems when you want the highest performance; of course, you always run a big risk of data loss in case one of the members of the RAID 0 array decides to stop working. With ye ol’ HDD, which has moving parts and spinning platters, it’s only a matter of time before it stops working. When SSDs were introduced they boasted impressive speeds, but also a very high MTBF (mean time between failures):

Madshrimps (c)
(source)


SSD: 2 million hours roughly translates into 228 years, whereas the HDD figure comes to about ~34 years. Most of us know that 34 years for an HDD is a bit too optimistic; when your HDD is more than 5 years old you can start expecting it to fail. Not saying it will, but keep a backup copy in mind. If we apply that same 34:5 ratio to the SSD side, its 228 years scale down to roughly 34 years. A realistic MTBF of more than 30 years is quite sufficient; you’ll most likely run out of rewrite cycles on the NAND flash chips inside first.
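
A quick back-of-the-envelope version of that arithmetic in Python (the 300,000-hour HDD figure is our assumption, derived from the ~34-year number above; the 5-year “realistic” lifetime is the rule of thumb from the paragraph above):

```python
HOURS_PER_YEAR = 24 * 365                 # 8760

ssd_mtbf_hours = 2_000_000                # SSD spec-sheet MTBF
hdd_mtbf_hours = 300_000                  # assumed HDD spec-sheet MTBF (~34 years)

ssd_years = ssd_mtbf_hours / HOURS_PER_YEAR      # ~228 years
hdd_years = hdd_mtbf_hours / HOURS_PER_YEAR      # ~34 years

# Scale the SSD figure by the same "spec sheet vs. reality" ratio (34 -> 5 years)
realistic_ssd_years = ssd_years * (5 / hdd_years)   # ~33 years

print(f"SSD: {ssd_years:.0f} years on paper, roughly {realistic_ssd_years:.0f} years realistically")
```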

Why is this all important? Because a RAID 0 array of SSDs carries less risk than one based on HDDs.


Stripe Size - Does size matter?

When we talk about stripe size in RAID configurations, we’re referring to the size of the chunks in which your data is divided between the RAID members. If you have a 256KB file and a stripe size of 128KB in a RAID 0 config with 2 members, each member will get one 128KB piece of the 256KB file. The available stripe size settings depend on the RAID controller you will be using.

Most RAID controllers will allow you to go from a 4KB stripe size up to 64 or 128KB. Our test setup, based around an Intel X58 motherboard with the integrated Intel RAID controller, goes up to 128KB.
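
As a small illustration of how that division works, here is a minimal Python sketch (purely illustrative; the real work happens inside the controller, not in code you write):

```python
def stripe_map(file_size_kb, stripe_kb, members=2):
    """Return the (member_index, chunk_kb) pieces a file is split into, in write order."""
    pieces = []
    remaining = file_size_kb
    chunk_index = 0
    while remaining > 0:
        chunk = min(stripe_kb, remaining)
        pieces.append((chunk_index % members, chunk))   # round-robin over the members
        remaining -= chunk
        chunk_index += 1
    return pieces

# The 256KB file from the example above, on a 2-drive RAID 0 with a 128KB stripe:
print(stripe_map(256, 128))   # [(0, 128), (1, 128)] -> one chunk per drive
```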

We won’t go through the motions of setting up RAID on your system; if you intend to use it, set aside a bit of spare time to experiment with the different settings. What we’ve done for you in this article is configure RAID 0 with 2x SSDs using different stripe sizes and see how this impacts performance.


Enabling Hard Disk Write-Back Cache

When setting up a RAID array on an Intel-based controller you should install their Matrix Storage Manager. In your Windows OS this tool will allow you to enable write-back cache for your RAID array. In a non-RAID setup you can set this up using Windows’ device manager, but once you have defined your RAID array you’ll have to use the Matrix Storage Console.
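
Conceptually, write-back caching acknowledges a write as soon as it lands in the cache and commits it to the drive later, while write-through sends every write straight to the drive. A rough Python sketch of the idea (illustrative only; the real logic lives in the Intel driver, not in anything you write yourself):

```python
media = {}     # stands in for the physical drive
cache = {}     # stands in for the write-back cache

def write_through(block, data):
    media[block] = data            # every write hits the drive before returning

def write_back(block, data):
    cache[block] = data            # returns as soon as the data is cached...

def flush():
    media.update(cache)            # ...and is committed later, in bigger batches
    cache.clear()
```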

Madshrimps (c)


We’ll do some tests on the next pages to see if and where there are differences when enabling this software option.

Test Setup

After our real world SSD tests we asked for a second sample of OCZ’s Vertex SSD. Armed with two 30GB SSDs, we installed them in a Dell T5500 workstation equipped with an X58 motherboard, a 3GHz Core i7 CPU and 4GB of RAM.

Madshrimps (c)


We installed Windows 7 x64 edition and started our tests. The OCZ Vertex drives were flashed with firmware version 1.41 which has OCZ’s garbage collection.

  • Note: For all RAID 0 tests Cache Write-Back is enabled unless mentioned otherwise

    Partitions were created using the Windows 7 disk manager with the default NTFS file format. This is an important side note, as you can align your partition to match the stripe size you’re using, and set the NTFS allocation unit size to match the stripe size as well. We did a quick test to see how big the impact on performance would be: negligible. But when you are setting up your final config, it’s recommended to follow these steps nonetheless.
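
    A quick way to sanity-check that alignment, as a hedged Python sketch (the helper is made up for illustration; the 1MiB figure is Windows 7’s default partition offset):

    ```python
    def is_aligned(partition_offset_bytes, stripe_kb):
        # the partition should start at a multiple of the stripe size
        return partition_offset_bytes % (stripe_kb * 1024) == 0

    # Windows 7's disk manager starts new partitions at 1MiB by default,
    # which is a multiple of every stripe size tested here (4k up to 128k):
    for stripe in (4, 8, 16, 32, 64, 128):
        print(stripe, is_aligned(1024 * 1024, stripe))   # all True
    ```
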
  • Single Disk vs Raid 0 (128k)

    Single Disk vs Raid 0 (128k)

    We’ll start with the most obvious one: RAID 0 vs single disk performance. In the charts below we compared a single Vertex to two of them in RAID 0 (stripe size 128k).

    The data is presented by a percentage increase/decrease over the single disk setup.

    AS SSD Benchmark

    First up is a pretty new benchmark called AS SSD Benchmark. A translation of the German description below:

    The synthetic tests determine the sequential and random read and write performance of the SSD, and are carried out without using the operating system cache. The Seq test measures how long it takes to read and write a 1GB file. The 4K test determines the read and write performance for random 4K blocks. The 4K-64-Thrd test is the same as the 4K test, except that the read and write operations are spread across 64 threads (typical of a program start).

    In all three synthetic tests the test file size is 1GB. Finally, the access time of the SSD is determined: read access time is measured across the entire capacity of the drive (full stroke), while write access time is measured on a 1GB test file.
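
    To give an idea of what a “64-thread 4K” load looks like, here is a rough Python sketch (illustrative only, not AS SSD’s actual code; the file path is hypothetical, and plain Python reads do not bypass the OS cache the way the benchmark does):

    ```python
    import concurrent.futures, random, time

    def random_4k_reads(path, file_size, count=1000):
        # one worker: issue `count` random 4KiB reads from the test file
        with open(path, "rb", buffering=0) as f:
            for _ in range(count):
                f.seek(random.randrange(0, file_size - 4096, 4096))
                f.read(4096)

    def run_4k_64thrd(path, file_size, threads=64):
        start = time.time()
        with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as pool:
            for _ in range(threads):
                pool.submit(random_4k_reads, path, file_size)
        return time.time() - start        # the pool waits for all workers on exit

    # e.g. run_4k_64thrd("D:/testfile.bin", 1024**3)   # against a 1GB test file
    ```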


    First up are READ speeds:

    Madshrimps (c)


    Sequential and threaded 4k read operations see a ~100% speed increase, the ideal result when going from a one-disk to a two-disk setup. The single-threaded random 4k result drops about ~20% though. Read access times go up slightly, but negligibly.

    Madshrimps (c)


    Write performance is up all over the chart, the biggest gain is random 4k: 136% faster! Write access is also better with the drives in RAID.

    Next up are the “normal” copy/paste tests, which don’t bypass the OS cache:

    In the copy test (menu Tools > Copy Benchmark) the following test folders are created: ISO (two large files), Programs (a typical program folder with many small files) and Games (a game folder with both small and large files). These three folders are copied with a simple copy command of the operating system. The cache is enabled for this test.


    Madshrimps (c)


    The smaller the files, the bigger the performance increase; RAID + OS cache really pays off here, almost 3x faster than a single disk!

    HD Tune

    The synthetic benchmark HD Tune is more widely known.

    Madshrimps (c)


    Sequential read tests are quite promising: at worst there’s a 153% performance boost; average read is 2x better than a single disk.

    Madshrimps (c)


    The write performance boost is less pronounced in this test, hovering at ~60% average.

    The Random Access tests display average throughput using different size file chunks:

    Madshrimps (c)


    Random read with 1024KB chunks goes up, but with smaller file chunks performance is down ~20%.

    Madshrimps (c)


    Random write is “better”, meaning there’s less loss of performance this time with smaller file chunks.

    FC Test

    FC Test, or File-Copy Test, is a small, straightforward application. It measures the time it takes to create files of different sizes, how long it takes to copy them between volumes, and how long it takes to delete them. For our test we were interested in the write/creation speeds, so we used the PROG, WIN, MP3 and ISO templates to measure disk speed.
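
    As a rough idea of what such a file-creation measurement boils down to, a minimal Python sketch (not the actual FC Test tool; the path and sizes are made up):

    ```python
    import os, time

    def create_files(path, count, size_bytes):
        """Write `count` files of `size_bytes` each and return the throughput in MB/s."""
        data = os.urandom(size_bytes)
        start = time.time()
        for i in range(count):
            with open(os.path.join(path, f"file_{i}.bin"), "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())      # make sure the data actually reaches the disk
        elapsed = time.time() - start
        return count * size_bytes / elapsed / 1024**2

    # e.g. create_files("D:/fctest", 100, 4 * 1024**2)   # 100 files of 4MB each
    ```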

    Madshrimps (c)


    Going from a single SSD to two in RAID 0 will give you a noticeable boost in real world applications; the large sequential ISO pattern does best, which is interesting as the AS SSD benchmark had the ISO test as one of the “lesser” ones. Overall we get a boost of at least 70% and up to 100%.

    PassMark

    The last benchmark used in this article is PassMark, a system benchmark tool which sports a quite complete HDD performance test, allowing you to define different “worker” threads that replicate “real world” drive usage. We used the PassMark patterns Database, FileServer and Workstation, and we also created a custom random write thread; a small sketch of these access patterns follows the list below.

  • Database: 10% Sequential / 90% Random IO, 90% Read / 10% Write, 2k file chunks, Asynchronous 128 queue
  • FileServer: 0% Sequential / 100% Random IO, 80% Read / 20% Write, 16Kb file chunks, Asynchronous 128 queue
  • Workstation: 20% Sequential / 80% Random IO, 70% Read / 30% Write, 16Kb file chunks, Synchronous
  • Custom: 0% Sequential / 100% Random IO, 0% Read / 100% Write, 4k file chunks, Synchronous
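
    The sketch below shows roughly what these mixes boil down to (a hedged illustration; PassMark configures its worker threads through its own GUI, and this little generator is made up; queue depth and async behaviour are left out for brevity):

    ```python
    import random

    #  name:         (sequential ratio, read ratio, chunk size in bytes)
    PATTERNS = {
        "Database":    (0.10, 0.90,  2 * 1024),
        "FileServer":  (0.00, 0.80, 16 * 1024),
        "Workstation": (0.20, 0.70, 16 * 1024),
        "Custom":      (0.00, 0.00,  4 * 1024),   # 100% random write
    }

    def next_io(pattern, last_offset, disk_size):
        """Pick the next (offset, size, is_read) request for a workload pattern."""
        seq_ratio, read_ratio, chunk = PATTERNS[pattern]
        if random.random() < seq_ratio:
            offset = last_offset + chunk                    # sequential: continue where we left off
        else:
            offset = random.randrange(0, disk_size, chunk)  # random: jump anywhere on the volume
        return offset, chunk, random.random() < read_ratio

    # Example: a few requests from the custom 100% random-write thread
    offset = 0
    for _ in range(3):
        offset, size, is_read = next_io("Custom", offset, 30 * 1024**3)
        print(offset, size, "read" if is_read else "write")
    ```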

    Madshrimps (c)


    These tests put your storage through some serious workloads; few people will match this with day-to-day usage (surfing, email, etc.). That said, 4k write performance with RAID gets a spectacular boost, the same as seen in the AS SSD benchmark. The FileServer and Database patterns get a nice boost too; only the Workstation pattern, which has a higher write/read balance than the other two default patterns, sees a very small boost.




    Overall we can conclude that you can expect a 50~200% boost in disk performance going from a single SSD to two of them in RAID 0. Sequential operations benefit the most, but smaller file operations won’t be slower than a single disk, on average.
  • Raid 0 128k: with/without Cache Write-Back

    Raid 0 (128k) – Cache Write-Back Enabled/Disabled

    Now that we have left single-disk territory and are in RAID land, the Intel controller has a nice feature called “Cache Write-Back” (CWB) which Intel recommends you enable for an extra performance boost.

    How much of a boost can we expect?

    (We’ll repeat the benchmark descriptions for quick reference, to help those jumping between pages find their way.)


    AS SSD Benchmark

    AS SSD Benchmark. A translation of the German description below:

    The synthetic tests determine the sequential and random read and write performance of the SSD, and are carried out without using the operating system cache. The Seq test measures how long it takes to read and write a 1GB file. The 4K test determines the read and write performance for random 4K blocks. The 4K-64-Thrd test is the same as the 4K test, except that the read and write operations are spread across 64 threads (typical of a program start).

    In all three synthetic tests the test file size is 1GB. Finally, the access time of the SSD is determined: read access time is measured across the entire capacity of the drive (full stroke), while write access time is measured on a 1GB test file.


    First up are READ speeds:

    Madshrimps (c)


    With CWB enabled we see a small boost in sequential read speed, but random performance is slightly slower. Latency remains approximately the same.

    Madshrimps (c)


    Write performance with CWB doesn’t impress. There are no dramatic drops in throughput, though write access time does increase by ~20%. But seeing as we’re in the 0.xx millisecond range here, it’s hardly dramatic.

    Next up are the “normal” copy/paste tests, which don’t bypass the OS cache:

    In the copy test (menu Tools > Copy Benchmark) the following test folders are created: ISO (two large files), Programs (a typical program folder with many small files) and Games (a game folder with both small and large files). These three folders are copied with a simple copy command of the operating system. The cache is enabled for this test.


    Madshrimps (c)


    With CWB we see a noticeable boost in this test: COPY Game gets +60%, while the COPY ISO gain is not really worth mentioning.

    HD Tune

    The synthetic benchmark HD Tune is more widely known.

    Madshrimps (c)


    Sequential read performance gets a very nice boost with CWB enabled, up to 2x faster!

    Madshrimps (c)


    Sequential write doesn’t get a large boost from CWB, but it’s noticeable nonetheless.

    The Random Access tests display average throughput using different size file chunks:

    Madshrimps (c)

    Madshrimps (c)


    Enabling CWB has no positive effect on random read/write performance; worst-case scenario is a ~20% drop.

    FC Test

    FC Test, or File-Copy Test, is a small, straightforward application. It measures the time it takes to create files of different sizes, how long it takes to copy them between volumes, and how long it takes to delete them. For our test we were interested in the write/creation speeds, so we used the PROG, WIN, MP3 and ISO templates to measure disk speed.

    Madshrimps (c)


    The real world tests show a 13~17% performance increase with CWB.

    PassMark

    The last benchmark used in this article is PassMark, a system benchmark tool which sports a quite complete HDD performance test, allowing you to define different “worker” threads that replicate “real world” drive usage. We used the PassMark patterns Database, FileServer and Workstation, and we also created a custom random write thread.

  • Database: 10% Sequential / 90% Random IO, 90% Read / 10% Write, 2k file chunks, Asynchronous 128 queue
  • FileServer: 0% Sequential / 100% Random IO, 80% Read / 20% Write, 16Kb file chunks, Asynchronous 128 queue
  • Workstation: 20% Sequential / 80% Random IO, 70% Read / 30% Write, 16Kb file chunks, Synchronous
  • Custom: 0% Sequential / 100% Random IO, 0% Read / 100% Write, 4k file chunks, Synchronous

    Madshrimps (c)


    With small file chunks and random operations CWB doesn’t help performance, but as the file chunks get bigger it does pay off: a 30% “free” performance boost.




    Summary? Just enable it. It won’t do any harm; worst case you won’t notice a difference, best case you get a nice throughput boost.
  • RAID 0 128k vs Software RAID 0 & Single Disk vs R1

    Software vs Hardware RAID

    If your motherboard doesn’t support RAID you can opt to create a software RAID array from inside your operating system. Windows does impose a few limitations: you can’t have Windows installed on a software RAID 0 array, but you can split your disk into different partitions and then RAID 0 the non-Windows partition on DISK 1 with the complete DISK 2. RAID 1 in software doesn’t have this restriction.

    So we set up the two Vertex drives in software RAID 0 and compared them with hardware RAID 0 (128k) in the following two charts. Why only two benchmarks? The other benchmarks didn’t see the software RAID array.

    (We’ll repeat the benchmark descriptions for quick reference, to help those jumping between pages find their way.)

    FC Test

    FC Test, or File-Copy Test, is a small, straightforward application. It measures the time it takes to create files of different sizes, how long it takes to copy them between volumes, and how long it takes to delete them. For our test we were interested in the write/creation speeds, so we used the PROG, WIN, MP3 and ISO templates to measure disk speed.

    Madshrimps (c)


    It seems the software RAID is a bit faster on average in this mostly sequential write test.

    PassMark

    The last benchmark used in this article is PassMark, a system benchmark tool which sports a quite complete HDD performance test, allowing you to define different “worker” threads that replicate “real world” drive usage. We used the PassMark patterns Database, FileServer and Workstation, and we also created a custom random write thread.

  • Database: 10% Sequential / 90% Random IO, 90% Read / 10% Write, 2k file chunks, Asynchronous 128 queue
  • FileServer: 0% Sequential / 100% Random IO, 80% Read / 20% Write, 16Kb file chunks, Asynchronous 128 queue
  • Workstation: 20% Sequential / 80% Random IO, 70% Read / 30% Write, 16Kb file chunks, Synchronous
  • Custom: 0% Sequential / 100% Random IO, 0% Read / 100% Write, 4k file chunks, Synchronous

    Madshrimps (c)


    Here we see why you should opt for a hardware RAID solution: random read/write is noticeably faster on the Intel array, and the software RAID 0 is up to ~47% slower in some tests!




    Single Disk vs RAID 1

    RAID 1 on SSDs would not be a very interesting option, but we tested it anyway; since RAID 1 writes the same data to all the array members, the read speed tests might be interesting.
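
    A tiny conceptual sketch of why that is (illustrative only; the controller does this at block level, not in Python):

    ```python
    # RAID 1 mirrors every write to both members, so writes cost double work,
    # while a read can be served by whichever member happens to be free.
    members = [{}, {}]

    def raid1_write(block, data):
        for disk in members:                              # same data, written twice
            disk[block] = data

    def raid1_read(block, preferred=0):
        return members[preferred % len(members)].get(block)   # either copy will do

    raid1_write(0, b"hello")
    print(raid1_read(0, preferred=1))                     # b'hello', from the second member
    ```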


    AS SSD Benchmark

    AS SSD Benchmark. A translation of the German description below:

    The synthetic tests determine the sequential and random read and write performance of the SSD, and are carried out without using the operating system cache. The Seq test measures how long it takes to read and write a 1GB file. The 4K test determines the read and write performance for random 4K blocks. The 4K-64-Thrd test is the same as the 4K test, except that the read and write operations are spread across 64 threads (typical of a program start).

    In all three synthetic tests the test file size is 1GB. Finally, the access time of the SSD is determined: read access time is measured across the entire capacity of the drive (full stroke), while write access time is measured on a 1GB test file.


    First up are READ speeds:

    Madshrimps (c)


    Sequential read speed doesn’t start off very promising; it’s not until you add multiple threads to the 4k test that there is a noticeable advantage for the RAID 1 setup.

    Madshrimps (c)


    We expected the write speeds to be lower; a ~20% performance penalty is noticeable.

    Next up are the “normal” copy/paste tests, which don’t bypass the OS cache:

    In the copy test (menu Tools > Copy Benchmark) the following test folders are created: ISO (two large files), Programs (a typical program folder with many small files) and Games (a game folder with both small and large files). These three folders are copied with a simple copy command of the operating system. The cache is enabled for this test.


    Madshrimps (c)


    In this write test, too, performance is noticeably lower compared to a single-disk setup.

    HD Tune

    The synthetic benchmark HD Tune is more widely known.

    Madshrimps (c)


    Quite interesting results in the sequential read test: the minimum read speed increase with RAID 1 is larger than going from a single disk to RAID 0 (172% boost vs 156%)!

    Madshrimps (c)


    Sequential write takes a large hit, up to 63% slower!

    The Random Access tests display average throughput using different size file chunks:

    Madshrimps (c)

    Madshrimps (c)


    RAID 1 is not very good for random read/write, up to ~84% slower in some cases!




    Software RAID and RAID 1 are not what this article is about; they can be interesting to “play around with”, but it’s time we check out the impact of stripe size on a RAID 0 array ->
  • RAID 0: AS SSD Benchmark

    AS SSD Benchmark

    First up is a pretty new benchmark called AS SSD Benchmark. A translation of the German description below:

    The synthetic tests determine the sequential and random read and write performance of the SSD, and are carried out without using the operating system cache. The Seq test measures how long it takes to read and write a 1GB file. The 4K test determines the read and write performance for random 4K blocks. The 4K-64-Thrd test is the same as the 4K test, except that the read and write operations are spread across 64 threads (typical of a program start).

    In all three synthetic tests the test file size is 1GB. Finally, the access time of the SSD is determined: read access time is measured across the entire capacity of the drive (full stroke), while write access time is measured on a 1GB test file.


    Madshrimps (c)


    Sequential READ likes large stripe sizes, as is visible in the chart above; the difference between 64k and 128k is quite small.

    Madshrimps (c)


    4k performance is surprisingly not highest with the 4k stripe; 16k and 8k score best here. Overall it doesn’t really matter, as the difference between them is ~1MB/s of throughput.

    Madshrimps (c)


    Threading the 4k test shows no clear winner.

    Madshrimps (c)


    Read access times are all pretty much the same.

    Madshrimps (c)


    Unlike the sequential read test, the write test doesn’t show a clear winner; the 16k stripe offers the best performance here, but the 128k stripe is only ~5MB/s slower.

    Madshrimps (c)


    4k write performance is best with a 32k stripe size, while the 4k stripe size is worst; 128k doesn’t do that well either.

    Madshrimps (c)


    The threaded 4k performance follows the previous result chart.

    Madshrimps (c)


    Access times are all well below half a millisecond; the differences are a bit larger here, with the 32k stripe performing best.

    In the copy test (menu Tools > Copy Benchmark) the following test folders are created: ISO (two large files), Programs (a typical program folder with many small files) and Games (a game folder with both small and large files). These three folders are copied with a simple copy command of the operating system. The cache is enabled for this test.


    Madshrimps (c)


    The COPY ISO test is all about large files; the largest stripe size performs best, and the smaller the stripe size, the lower the average throughput gets.

    Madshrimps (c)


    The result chart with the Program preset is less linear, with 128k still in the lead.

    Madshrimps (c)


    The Game COPY performs best with a 32k stripe size; going one step lower incurs a large penalty, from 80MB/s down to 51.46MB/s with a 16k stripe size.




    Let’s take an average of all the throughput tests to see which stripe size does best:

    Madshrimps (c)


    Let us find out if this trend continues in the other benchmarks.

    RAID 0: HD Tune Pro 3.5 - Read Tests

    HD Tune – Sequential Read Tests

    The synthetic benchmark HD Tune is more widely known.

    Madshrimps (c)


    In the minimum read test 128/64/32k are all on the same level; going smaller gives a noticeable speed decrease.

    Madshrimps (c)

    Madshrimps (c)


    The max and average tests follow the same pattern: 128/64k in the lead, with 32k consistently slower but still noticeably faster than the 16/8/4k stripe sizes.

    Madshrimps (c)


    Read access times are fastest with 128k stripe size, slower with 32k, but don’t read too much into these numbers.

    HD Tune – Random Read Tests

    Madshrimps (c)


    At 512 bytes performance is low for all configurations, lowest with 16/4k stripe sizes.

    Madshrimps (c)


    Finally! 4k random read, 4k stripe size among the fastest.

    Madshrimps (c)


    With 16KB files the smaller stripe sizes are still doing quite well.

    Madshrimps (c)


    When the big file chunks come along, the 128k stripe size is again at the top of the list.

    Onto the write tests ->

    RAID 0: HD Tune Pro 3.5 - Write Tests

    HD Tune – Sequential Write Tests

    The synthetic benchmark HD Tune is more widely known.

    Madshrimps (c)


    Minimum write speed is only noticeably worse with 16k stripe size.

    Madshrimps (c)


    Max write speed is highest with 32k; the smaller stripe sizes are a bit slower than the others.

    Madshrimps (c)


    The difference in average write speed between slowest and fastest is ~16.7MB/s; not that noticeable.

    Madshrimps (c)


    Access times for read actions are lowest with the smallest stripe size, according to this benchmark.


    HD Tune – Random Write Tests

    Madshrimps (c)


    Random write performance with the 4k stripe size is 7x faster than the slowest competitor; if all you do is write 512 byte files, 4k stripe size should be your choice… what? you don’t have that many 512 byte files?

    Madshrimps (c)


    With 4k file chunks performance is highest with the 16k stripe size; the others are pretty much on par, except for 128k.

    Madshrimps (c)


    64k does best in this benchmark; 128k is even slower than the 4k stripe size in this write test.

    Madshrimps (c)


    And here’s why you don’t want a 4/8k stripe size for RAID 0: even if random performance with the smallest file chunks is 7x faster, when you randomly write larger files to your RAID array you don’t want to get stuck below 50MB/s.




    Again throwing all throughput benchmarks together in a single chart we come up with:

    Madshrimps (c)


    128k is not the fastest according to HD Tune; you’ll get better results with 64 and 32k.

    RAID 0: FC Test - Write Speeds

    FC Test

    FC Test, or File-Copy Test, is a small, straightforward application. It measures the time it takes to create files of different sizes, how long it takes to copy them between volumes, and how long it takes to delete them. For our test we were interested in the write/creation speeds, so we used the PROG, WIN, MP3 and ISO templates to measure disk speed.

    Madshrimps (c)


    The 16k stripe size takes the first spot, but with only a small lead over 128k; the others don’t do half bad either.

    Madshrimps (c)


    Again 16k is in the lead, this time a bit more comfortably. 4/8k are noticeably slower.

    Madshrimps (c)


    Except for 4/8k, this pattern performs about the same on all stripe sizes.

    Madshrimps (c)


    The 32/16k stripe sizes do best; 4/8k to be avoided again.




    The 16k stripe size did pretty well in FC Test; the chart below is a total of the charts seen above and confirms it:

    Madshrimps (c)

    RAID 0: PassMark HD Test

    PassMark

    The last benchmark used in this article is PassMark, a system benchmark tool which sports a quite complete HDD performance test, allowing you to define different “worker” threads that replicate “real world” drive usage. We used the PassMark patterns Database, FileServer and Workstation, and we also created a custom random write thread.

  • Database: 10% Sequential / 90% Random IO, 90% Read / 10% Write, 2k file chunks, Asynchronous 128 queue
  • FileServer: 0% Sequential / 100% Random IO, 80% Read / 20% Write, 16Kb file chunks, Asynchronous 128 queue
  • Workstation: 20% Sequential / 80% Random IO, 70% Read / 30% Write, 16Kb file chunks, Synchronous
  • Custom: 0% Sequential / 100% Random IO, 0% Read / 100% Write, 4k file chunks, Synchronous

    Madshrimps (c)


    128k is in the lead; the results of the other stripe sizes are all over the place.

    Madshrimps (c)


    The file server test uses larger file chunks, and now we see 128k clearly in the lead. 4/8k are the slowest, trailing the rest.

    Madshrimps (c)


    The workstation load test has 32/64k in the lead, although it’s a very small lead.

    Madshrimps (c)


    Our custom 100% random write test puts 128k stripe size in first spot, 4k stripe size scores worst.




    128k stripe size was the best choice according to PassMark HD test:

    Madshrimps (c)

  • Conclusive Thoughts

    Conclusive Thoughts

    And the ideal stripe size for RAID 0 with SSDs is:

    The default 128k setting


    Well, you either decide that this article was a complete waste of time, or think of it as confirmation that the default RAID stripe size turned out to offer the best balanced performance throughout the different benchmarks.

    It was fun to see a RAID 0 array with a 4k stripe size outperform the rest by a factor of 7x, but that was only in a single benchmark. More than 1200 bench runs later, we can safely say that you should not configure a RAID 0 array with a stripe size smaller than 32k.

    After throwing all the random, sequential, threaded and asynchronous read and write tasks at the RAID 0 arrays with different stripe sizes, here is the final chart:

    Madshrimps (c)


    Performance-wise, with RAID 0 we saw a boost over a single SSD of up to 250% in certain tasks; on average it will give you a 100% boost in disk-related tasks, so it’s certainly a recommended path if performance is your main goal.

    Cost-wise, buying two smaller SSDs will be more expensive than one larger SSD; but if you invest a bit of time configuring your RAID array you will have a faster system than is possible with a single SSD.

    The reliability of a RAID 0 array with SSDs is of course a lot better than what was possible with HDDs, but don’t go thinking it’s a foolproof plan. An SSD can still malfunction, a RAID array can still become corrupt, and moving a RAID 0 install between systems poses an extra challenge. Also don’t forget that TRIM doesn’t work on RAID arrays, so you’ll have to manually perform the necessary steps if you want your array to keep performing at its best.

    The OCZ SSDs used in this article are already a bit older; the Vertex 30GB costs about ~€130 and doesn’t have a 30~40MB/s write cap like some other entry-level products recently launched. With two of these drives in RAID 0 we saw speeds close to 200MB/s; definitely enough for most power users.

    If you add more members to a RAID 0 array you’ll see nice performance scaling; but to get the most out of it you’ll have to invest in a dedicated RAID controller with onboard cache and a PCI Express interface, which will allow you to build your own 2000MB/s disk volume. OCZ and other manufacturers have been playing around with this approach (RAID controller + SSDs in RAID 0) since CeBIT last year, but as with everything new, flashy, shiny and speedy, it doesn’t come cheap.

    If you do want to take a peek at the future on a limited budget and you have an interest in SSDs and RAID, we hope this article can be of use to you.

    We would like to thank OCZ for allowing us to stress test their Vertex drives. Until next time, thank you for reading!