PLEASE NOTE: This article has been archived. It first appeared in September 2000, contributed by then-Contributing Editors D. Glen Cardenas and Jose-Maria Catena. We will not be making any updates to the article. Please visit the home page for our latest content. Thank you!

A Complete Assessment


Look in any newsgroup devoted to DAW discussion and sooner or later there will be some sort of mention regarding favorite hard disks or preferred disk formatting techniques or optimum parameter settings or SOMETHING about the impact of specific hard drives on the performance of audio streaming.

Often, the argument starts with the personal preference between SCSI and IDE disk drives. Why “personal preference”? We think that after going over the data in this article, you will see that there’s a lot of room for subjective opinion in this discussion. Far from proving that there is one clear winner between the two, research has proven just the opposite.

There is a lot to be said for SCSI. On the other hand, many readers are about to say “A-HA! I knew SCSI was better!” and are about to be disappointed. This will come as a shock to many hard core SCSI advocates – perhaps even an insult! However, before proponents on either side start sending us an HTML flame-thrower, look over the data here and keep an open mind. You, too, may discover things about both formats you didn’t know and even more chilling, things about the whole argument that you never took into account before.

One problem with this whole argument has been a blatant lack of facts and information in the discussions seen in many newsgroups. This article is out to change that by offering a full range of facts, specifications and information from manufacturers, testers and hackers. The facts as we have found them show that either format will work very well in any system, and that one format can have a slight edge over the other if properly set up and under some conditions.

This may seem like overkill for a discussion that is destined to be a washout (so to speak), so why bother?

Well, even though there may be little advantage to either EIDE or SCSI in most system configurations, there are very important specifications to examine in terms of the drives themselves, and in cases where the format DOES make a difference, it is good to have those facts and a clear understanding of them. Besides, we don’t want to offer conclusions without backing them up or it would be just more opinion and nothing else. No thanks!

This may be about to rock some people’s boats and it should have something to stand on. In the following pages we offer a fair sample of the information gleaned during the past few weeks and over the past several years of looking at this issue. The information is formatted in a way that, hopefully, will provide the reader with a strong overview of all aspects of the issue and any conclusions.

Information is power. Have some juice!

A Question of Drives

The heart of any successful DAW, be it a PC, Mac, Amiga or whatever, is the storage media. In this case, the hard disk drive. No matter how sophisticated the software or how clean the sound card, it boils down to the data stored on the disk. If you can’t get the data off the drive in a timely manner, well then, what’s the point?

Although there are many who use the Mac for their DAW, and there is now a renewed interest in the Amiga as well, we will limit this discussion to the PC, and furthermore, to the 32-bit Windows based PC. This isn’t to say that Windows is a better platform than BeOS, Linux or a number of other alternative operating systems. In fact, it’s probably the worst! However, it’s just a practical fact that it’s the most popular and thus the most widely supported.

When we talk about Windows in this article, we will be referring to Windows 95B, Windows 98, and Windows NT4. Each version has some specific considerations and when these become relevant, a clear distinction between them will be made.

The Contenders

There are two major types of disk drive formats associated with personal computers in general, IDE and SCSI. The IDE system is built into all modern motherboards and is the default choice for almost all PC users. The SCSI system is supported directly on some newer motherboards but by and large, requires a separate PCI controller card called a “host adapter” to which all SCSI devices are connected.

Other Factors that Drive Audio Performance

Many things will affect the performance of digital audio software in general, and multi-track production software in particular. The performance of the disk drive being used to store the audio data is only the beginning. Naturally, this component must be of optimum efficiency in order to allow real time streaming at a high track count. However, this isn’t necessarily always going to be the limiting factor in a DAW’s performance. There are other places to look as well.

Modems: Having a modem plugged into a DAW’s PCI bus can lead to conflicts, particularly if the modem is a voice modem. Some production software will attempt to configure the modem as a sound card. While more “aware” programs such as Cakewalk will report a modem upon finding it and allow you to ignore it as part of the sound system setup, other programs may not, and could default to a lower bit depth or sample rate as a result. Many DAW users agree that an external modem connected to one of the computer’s serial ports is the safest way to go here.

Video: Likewise, using a fancy accelerated 3D “gamer’s” style video card can cause significant degradation in system performance due to the high demands of the card on system timing. The fact is that DAW software is not graphic intensive and doesn’t require a high performance video card unless you intend to do A/V production. If that is the case, follow the recommendations of others in the field who are using the same kind of motherboard/processor configuration as yourself and choose a video card accordingly.

Windows Settings: Sometimes turning down the hardware acceleration slider (from Control Panel double click the “Display” icon, go to the “Settings” tab and click the “Advanced” button. The slider is in the “Performance” tab) will improve performance, though it can also disable your video card’s higher resolution/color functions.

Specificity: You may want to consider keeping your DAW separate from all non-DAW operations and using another computer for internet activity, game playing and business applications. Your DAW might then contain only the sound cards, video card and an Ethernet card to allow transfers between the DAW and the separate system containing the CD recorder, internet access, back-up space, etc.

Simplicity: You may consider not even putting a CD drive in the DAW. You can access the CD ROM on the other system over the Ethernet as effectively as if it were a local drive. To prevent the Ethernet LAN from consuming system time, you can re-boot your system when you wish to do very demanding streaming production and at that time decline (hit ESCAPE) when asked for your network password. When your production load is not so demanding or when you want to move files around, re-boot and enter your network password to sign on to the LAN. All of this may seem a bit extreme, but it doesn’t hurt to consider it.

Testing: If you wish to use one system for all of your computing needs as well as a DAW, then you will want to consider carefully each choice you make in terms of devices and software you install. After any changes you make, test your system fully for potential conflict with your sound equipment and production software. This way, if something causes trouble, you can quickly pin it down to a single item.

Screen Savers: As a general rule, do not load or configure any screen savers or activate the power saver functions of your DAW. These features consume resources, often at the oddest moments. Screen savers were important in the days of monochrome displays due to screen burn-in. However, modern color monitors are not subject to this problem, so don’t bother with them. Do not use a background virus scanner, although you will want to have one loaded to perform scans on demand from time to time.

2 Drives or Not 2 Drives?

There has been some debate as to the proper allocation of disk space in a DAW. For one thing, many feel it is important to keep the software and data on separate drives for fear that the system might bog down if required to access different parts of the same drive for software while data is streaming.

When you have a large executable plus DLLs, (simply) running the program doesn’t always ensure that all of the bytes (for that program) are loaded into physical memory. Some “pages” of the code can remain on disk, and will be paged in on demand when certain areas of the program are executed for the first time.

For an example of this, watch what your disk does the first time you open the Staff View in Cakewalk. Then watch it again the second time you open the Staff View. Also, every program has “resources” (text, menus, dialog boxes, etc) which live on disk, paged in on demand. So unless the DAW prevents the user from manipulating the UI during playback, it is not true in general that the only disk access you’ll get during streaming is from the data files.

Of course, there are ways to be clever and force things to get paged in ahead of time. This is actually easier to do under NT than on Win9x, but on a system with not a whole lot of RAM this isn’t a good idea.

Ron Kuper
Chief Technology Officer, Cakewalk Music Software

The key word in the above statement is “RAM”. If a DAW has sufficient RAM, then the user can open the views they intend to use during streaming before streaming starts. Those resources are then pre-loaded into memory. As for the dialogs and menus and so on, it must be understood that cranking around in the program while doing very demanding streaming isn’t a very good idea anyway, even if the disk access for those resources is negligible.

The simple fact is, once a digital production program is running and streaming data, the drive shouldn’t have to access the disk for more software. All the software it needs should be in RAM. With a bit of planning and prudent use of the UI during streaming, this shouldn’t become an issue. Therefore, there is no drive access conflict.

Aside from being able to back up one drive to the other to guard against losing your data to a sudden disk crash, the best reason to use two drives isn’t to keep the data and software separate, but to be able to put the partition holding your streaming data at the very front of a drive. This has advantages, as we will discuss below, and has a much higher impact on overall performance than having the data and the software on two separate drives.

Virtual Memory

Ron Kuper also offers the following observation, “Another process which can hit the disk unexpectedly is virtual memory compaction and/or cache cleanup. Windows can decide to do this during ‘idle’ time. What constitutes ‘idle’ time? It’s up to the O/S.”

With the virtual memory wild card in mind, many have advocated that disabling the Windows virtual memory will provide a boost in performance because the system will no longer attempt to use the swap file on the disk, removing this possible conflict during streaming. This is not a good idea. Again, if you have the proper amount of RAM in your system to load the software you need and to buffer the data as it streams, then virtual memory will not need to swap at all.

Although preventing errant housekeeping tasks is almost impossible, you can minimize the effects. For advice on optimizing the virtual memory system, read the article Virtual Memory Optimization by José Catena. There is also good advice there on how to change your Windows cache settings to enhance performance. This article is required reading for anyone wishing to fine tune their system.

Other Tweaks

Another simple tweak that helps is setting the “System Type” and “Read Ahead” optimizations. To get to them, open Control Panel and from there double click on the “System” icon. Go to the “Performance” tab and then click on the “File System” button. There is a box that allows you to select the use to which your PC will be put. The “Desktop” setting is the default and will likely be the setting you see when you look here. Changing that to “Server” gives higher priority to disk I/O, which usually helps a bit with disk intensive applications such as those found on a DAW.

You will also see a “Read Ahead” optimization slider. By default, this should be set to maximum (64 KB). We recommend that you leave it there. It might be worth noting that some audio programs recommend resetting it, or even automatically set it, to the minimum, but this usually results in very little improvement for those programs and noticeable degradation for others. It might be wise to check this setting from time to time to see if it has been tampered with, especially if you notice a sudden drop in performance of your system after running a new program for the first time.

Some drive optimization tricks can make a big difference. If you share a disk for audio and other data or applications, you can partition the drive such that your audio data is concentrated at the front of the disk. This area has as much as 60% faster access performance than the back of the disk. On a single drive system, one might partition (for example) a 10 gig drive with 10 MB as a C drive boot partition only, which then points to the E drive where the operating system is installed, then a D drive partition of 6 gig for audio. The back partition, the E drive, would hold Windows itself and the application software. Of course, a better option is to dedicate a drive to only audio data.

It is also true that streaming is more efficient using large cluster sizes, not the default 4K clusters generated by the Windows FAT 32 partitioning system. You can force the issue by applying the /Z:64 switch to the FORMAT command. This switch will tell FORMAT to build each cluster out of 64 sectors thus generating 32K clusters (each sector is 512 bytes). Better still, use a program like Partition Magic to reset the cluster sizes without having to reformat the drive or destroy your current data. As a side note, if the partition in question is 2 gigabytes or smaller in size, it can be partitioned using FAT 16 instead of FAT 32. For more information on optimizing your disk system, read Hard Disk Optimization by José Catena.
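The arithmetic behind the /Z:64 switch can be sketched in a few lines. This is an illustrative calculation only, using the 512-byte sector size mentioned above; the function names are ours, not part of any tool.

```python
# Hypothetical sketch: relate FORMAT's /Z:<n> (sectors per cluster) to the
# resulting cluster size, and to how many cluster-sized reads it takes to
# stream a megabyte of audio data.
SECTOR_BYTES = 512

def cluster_bytes(sectors_per_cluster):
    """Cluster size in bytes for a given /Z:<n> value."""
    return sectors_per_cluster * SECTOR_BYTES

def reads_per_megabyte(sectors_per_cluster):
    """Cluster-sized reads needed to fetch 1 MB of data."""
    return (1024 * 1024) // cluster_bytes(sectors_per_cluster)

print(cluster_bytes(8))        # 4096  -- the default 4K FAT32 clusters
print(cluster_bytes(64))       # 32768 -- the 32K clusters from /Z:64
print(reads_per_megabyte(8))   # 256 reads per MB
print(reads_per_megabyte(64))  # 32 reads per MB
```

Fewer, larger reads per megabyte means less per-request overhead while streaming, which is the whole point of the larger clusters.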

Finally, the autoinsert notification for CD-ROMs can hurt performance a bit because the system will periodically access the CD-ROM to test for the insertion of a disk. It is a good idea to disable autoinsert notification by un-checking that box in the Device Manager.

Application Design

Certain multitasking schedule and synchronization issues can degrade file I/O performance because audio buffer processing is a very high priority task. It can take a significant amount of available time, particularly with heavy real time effects loads. This can be minimized if disk I/O is done through bus mastering or at least DMA, and the application has been designed to optimize disk I/O throughput.

As a general rule, the ratio between audio buffer size and file buffer size is critical for such optimizing. The larger the file I/O buffer, the better. The smaller the audio buffer, the better. The file I/O buffer size should be large enough so that the typical time required to transfer a file buffer to/from disk is significantly larger (4 times or more) than the time required to playback an audio buffer. If the time ratio is low, the performance penalty will be large, and if it’s below 1, it will be dramatic.
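The timing rule above can be put into numbers. The figures below are assumptions for illustration (CD-quality stereo audio, a 10 MB/s sustained disk, a 2 MB file buffer against an 8 KB audio buffer), not measurements from any particular application.

```python
# Illustrative sketch of the buffer-timing ratio described above.
AUDIO_RATE = 44100 * 2 * 2      # bytes/sec: 16-bit stereo at 44.1 kHz
DISK_RATE = 10 * 1024 * 1024    # bytes/sec: assumed sustained disk rate

def audio_playback_time(buffer_bytes):
    """Seconds it takes to play back one audio buffer."""
    return buffer_bytes / AUDIO_RATE

def file_transfer_time(buffer_bytes):
    """Seconds it takes to move one file buffer to/from disk."""
    return buffer_bytes / DISK_RATE

t_file = file_transfer_time(2 * 1024 * 1024)   # 2 MB file I/O buffer
t_audio = audio_playback_time(8 * 1024)        # 8 KB audio buffer
print(round(t_file / t_audio, 2))  # comfortably above the 4x guideline
```

Shrinking the file buffer to 256 KB under the same assumptions drops the ratio close to 1, the region the text warns carries a large performance penalty.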

As a consumer, it is up to you to judge your application software wisely and invest in an application that has a good track record overall, and a commitment to customer support. If the software can’t take advantage of the high disk throughput you’ve invested so much into, then you’ve shot yourself in the foot.

About SCSI

The SCSI interface is an old timer. Before there was IDE, there was SCSI. It was used not only for disk drives, but scanners, printers and even to interface the PC with synthesizers and automated sound and light boards. For a long time, SCSI was the only really high performance disk interface, and in early versions, high performance was a whopping 5 MBytes/sec. WOW! Remember, that was in the time before the PC-based DAW, before Windows and before a person could buy a PC with more than 1 meg of RAM. Here’s a quick rundown on SCSI.

The History of SCSI

In 1980, SCSI amounted to a proposed interface whose specifications occupied little more than 20 pages. Compare that with the more than 600 pages used to describe the interface standard today. In 1985, a group of manufacturers got together and started pressing for ANSI to define SCSI. This came to pass in 1986 with the publishing of the first SCSI standard, now referred to as SCSI-1. This new interface standard consisted of a controller card, often called a host adapter, that interfaced the PC to a bus capable of driving up to 7 devices with a combined throughput of 5 Mbytes/sec.

Not far behind SCSI-1 came SCSI-2. This new standard removed some of the instructions from SCSI-1 and replaced them with an enhanced set of instructions as well as many low level enhancements. At this point, the concept of the “Wide” SCSI bus was introduced allowing for 16 and 32 bit data paths between the host adapter and the expanded array of now up to 15 devices. Along with the wide bus, “Fast” SCSI allowed for bus speeds of up to 20 MBytes/sec using the 16 bit wide bus. The addition of active termination made for better data integrity.

The SCSI-2 standard was rounded off by the addition of Command Queuing. This feature allows up to 256 separate commands to be stored in the controller. The host adapter can then send several requests to the same device before it processes the first one. Command queuing is defined in SCSI at the device level. That is, each device will support command queuing only as far as the designers want to take it for that device and the maximum number of queued commands is optional for each device in the bus (4 is a very common number). At a different level, the host adapter can queue commands to be sent later when the device can accept new ones (also optional). In both levels of command queuing, one thing that advanced devices or controllers can do is “out of order” processing of queued commands, optimizing overall times by reducing head movements.
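The payoff of “out of order” processing is easy to see in a toy model. The sketch below is our own illustration, not SCSI firmware: it reorders a queue of pending requests into a one-directional sweep (elevator-style) and compares the total head travel against first-come, first-served order.

```python
# Toy model of a controller reordering queued commands to reduce head movement.
def service_order(head, requests):
    """Reorder requests into a single sweep starting from the head position."""
    above = sorted(r for r in requests if r >= head)
    below = sorted((r for r in requests if r < head), reverse=True)
    return above + below

def total_seek(head, order):
    """Total head travel, in tracks, to service requests in the given order."""
    travel, pos = 0, head
    for r in order:
        travel += abs(r - pos)
        pos = r
    return travel

queue = [98, 183, 37, 122, 14, 124, 65, 67]        # pending track numbers
print(total_seek(53, queue))                        # 640 tracks, arrival order
print(total_seek(53, service_order(53, queue)))     # 299 tracks, reordered
```

The same eight requests cost less than half the head travel once reordered, which is exactly the optimization queued commands make possible.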

Ultra SCSI, although not a new standard, has nevertheless become the de facto state of the current art. Ultra SCSI is seen as a stepping stone between the current generation of parallel bus devices and the new proposed SCSI-3 standard designed to, among other things, introduce a high speed serial SCSI protocol such as the Fibre Channel. In fact, SCSI-3 amounts to a sum of separate standards as defined by different interested parties. As a result, the direction of SCSI seems to be the breaking up of the standard into smaller packets designed to address various individual projects while keeping the efforts coordinated under the general umbrella of SCSI.

It is intended that this aspect of SCSI-3 will accelerate the development of future SCSI implementations. Another place SCSI-3 is hoping to go is the removal of the current 15 device limit imposed on wide SCSI. The plan is to offer a “2 phase” addressing system that sends the higher order selection bits in the first phase and then the final selection bits in a second addressing phase. As a result, up to 255 devices on a narrow SCSI bus or 1023 on a wide bus could be accessed.

The SCSI Standard

Because Ultra 2 SCSI is now the norm among SCSI systems, we will focus on its implementation for the balance of this discussion. Ultra 2 SCSI currently allows transfer speeds of up to 80 MBytes/sec over a 16 bit bus with 160 Mbytes/sec becoming popular on high-end systems. The new higher speeds available in Ultra SCSI are due in large part to improvements in the processing speeds of the controller’s chips. Ultra SCSI calls for a doubling of internal clock speeds for the controller electronics and thus the data transfer cycle times are now much shorter. Add this to the quicker execution of SCSI commands and the throughput has become impressive, even for multiple-drive server implementations.

That’s not even the best part. The best part is that in order to take advantage of these higher burst transfer rates, there need be no change in the operating system or the SCSI drivers that command the controller. From a system point of view, the enhanced throughput is free! In fact, there is little change needed in either hardware or firmware of the peripherals themselves to become Ultra 2 SCSI compliant. How this will pan out as SCSI reaches for the 160 Mbytes/sec grail is uncertain, almost surely requiring some sort of major change in the hardware. Still, the impact on the system designer is promised to be minimal.

There is a price to be paid for these higher data transfer speeds. That price is data integrity. In order to prevent the higher data speeds from being consumed by transfer retries, the SCSI cable must be kept to a length limit of 1.5 meters. This doesn’t offer any real obstacle in the average PC implementation, including DAW implementations. The place where this really starts to hurt is in large server arrays, RAID arrays and video-on-demand systems. Under these conditions, it isn’t convenient to cram 15 drives into a single tower. The use of outboard drive bays can often tax the cable length specification to the point of non-compliance.

In order to address this issue, a type of SCSI bus called “Low Voltage Differential” or LVD has been introduced. A differential bus provides a second set of data lines being driven with the opposite electrical polarity as the original lines. If a “one” bit is represented by +5 volts in the standard data line, that same “one” bit is echoed on the supplemental data line as -5 volts. Thus, there is a higher overall voltage swing to represent the data, there is redundancy in the transmission paths and any outside noise that might enter the lines will be canceled out at the receiving end. The result is a boost in cable lengths to 25 meters. Another feature being offered by the new generation of Adaptec Ultra160 LVD host adapters is both CRC (Cyclic Redundancy Checking) and Domain Validation which scans the system for proper configuration. These adapters limit cable length to 12 meters.
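The noise-cancelling property of a differential pair can be shown with a toy model. This is a simplification of our own (the voltages are illustrative, not the actual LVD signal levels): the same bit is driven at opposite polarities on two lines, identical interference hits both conductors, and the receiver recovers the bit by subtracting.

```python
# Toy model of differential signalling: common-mode noise cancels out.
def drive(bit):
    """Encode one bit as a (+line, -line) differential pair, in volts."""
    v = 5.0 if bit else -5.0
    return (v, -v)

def receive(plus, minus):
    """Recover the bit from the difference between the two lines."""
    return (plus - minus) > 0

noise = 3.2  # the same interference couples onto both conductors
for bit in (0, 1):
    p, m = drive(bit)
    assert receive(p + noise, m + noise) == bool(bit)
print("noise cancelled")
```

Because the noise appears equally on both lines, the subtraction removes it entirely, which is what lets the differential bus run over much longer cables.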

SCSI Implementation        Bus Width (bits)    Burst Speed (MB/sec)
Fast SCSI                  8                   10
Fast Wide SCSI             16                  20
Ultra SCSI                 8                   20
Wide Ultra SCSI            16                  40
Ultra 2 SCSI LVD           8                   40
Wide Ultra 2 SCSI LVD      16                  80
Wide Ultra 3 SCSI LVD      16                  160

Something to keep in mind before you run right out and snag an LVD drive: in the past, you could not mix normal SCSI and LVD SCSI drives on the same bus. You needed a host adapter that specifically supported LVD and unless you were ready to accept 2 SCSI adapters in your DAW or upgrade to an adapter that had both a normal and a LVD connector, you either had to dump all of your current SCSI devices and replace them with LVD devices, or close your jaw and not worry about that side of the cutting edge.

However, many modern SCSI LVD drives can be switched to run on a single-ended bus and some even auto-switch from single to LVD depending on how they sense the bus they’re connected to. Before you buy, be sure of what you’re getting. Many “super fast” SCSI drives are LVD and may not be switchable. Mixing the two bus formats can result in a lot of smoke and no SCSI!

Setting Up SCSI

There’s really no point in installing a SCSI host adapter that is not:

1) A PCI card. Trying to get DAW performance from an ISA SCSI card is like trying to pull an elephant through a toilet seat. Don’t bother!

2) Configured for bus mastering. Although it is possible to still buy a PIO-only SCSI adapter, and at not exactly a modest price either, don’t even think about it. If you’re going to be spending the extra money to go SCSI, do it right.

3) Ultra SCSI. There is no point in going all the way to the ocean with your bathing suit on and not jumping into the water. If you want to implement SCSI on your system, do so with an eye to the future. Besides, the entry level for an Ultra SCSI host adapter is no more than $80 or so with Ultra 2 Wide running about $180. Take a look at the price comparison charts on the COMPARING DRIVES page.

If you are thinking of building a DAW from the ground up, perhaps the wise choice for implementing SCSI is to do it at the motherboard level. Several good motherboards support SCSI right on the board much the same way motherboards support IDE. Just plug your SCSI controller cable into the SCSI port on the motherboard and then run the other end to your drive(s).

Don’t forget to terminate the cable at your last SCSI device. All signals on the bus must be terminated with resistors at the bus ends to avoid electrical reflections. This is achieved either by a switch on the device (not always present) or by placing an external terminator block on the connector of the first and last devices on the bus. Often, the first device will be the host adapter itself.

Installing the SCSI drivers is best left up to Windows, which will see the SCSI controller and set up the drivers for you through Plug And Play. SCSI adapters are either PIO, DMA, or bus mastering, and the user can’t choose the mode. If you have a bus mastering SCSI adapter, the driver only works in bus mastering mode. SCSI configuration is somewhat more complex, as there are many configurable options such as disconnect strategy, SCAM, LUN, BIOS emulation, etc. All of this should be explained in the adapter installation guide.

It should be noted that some users have reported problems using some host adapters with some motherboards and chip sets. MVP3 and Aladdin V chipsets have fallen into question, although there seems to be no problem using a motherboard with the workhorse Intel BX chip set. There have also been comments made about some AGP cards being so power hungry that on some motherboards they rob the PCI bus of the needed juice to reliably run high-end host adapters. Not-so-high-end adapters may not complain about the chip set or video card’s appetite.

About IDE

IDE, or more formally, IDE/ATA, is the most common system for connecting a hard drive to a PC.

In modern systems (to which this discussion is limited), IDE drives plug directly into the motherboard through a 40 pin cable. Most motherboards offer 2 separate IDE channels and thus 2 connectors on the board. Each connector can support 2 IDE devices, be they disk drives, CD drives, tape drives, removable drives and so on. If a channel has 2 devices on it, one must be designated a master and the other a slave. This is done simply by moving or removing a jumper on the drive itself.

As a result of this configuration, any system can have 4 IDE devices connected to it. Using an external controller board connected to the PCI bus supporting 2 additional channels, up to 8 devices can be supported on a PC. This is a practical limit, as attempting to add 4 more devices with an extra controller will consume more interrupts and other system resources. This contrasts with modern SCSI, which can have up to 15 devices on a controller and occupies the same amount of system resources regardless of the number of devices connected up to that limit.

The History of IDE

IDE replaces older interfaces such as ST-506 and ESDI. Through the years, many changes have been made to the IDE standard as defined by ANSI.

The original standard, called simply ATA, called for 2 devices on the same channel configured as master and slave. It also defined PIO modes 0, 1 and 2 and DMA single word modes 0, 1 and 2 and multiword mode 0. However, this standard had problems. Often drives by different manufacturers wouldn’t work if combined on a single channel as master and slave. ATA-2 added the faster PIO modes 3 and 4 (mode 4 being the common default PIO mode for modern PCs), faster DMA multiword modes 1 and 2, the ability to do block mode transfers, Logical Block Addressing or LBA, and improved support for the “identify drive” command that allows the system to interrogate the drive for manufacturer, model and geometry.

The terms “Fast ATA” and “Fast ATA-2” are the inventions of Seagate and Quantum. They are not really standards and only denote drives that are compliant to all or part of the ATA-2 standard. ATA-3, however, was a real standard that improved reliability and defined the SMART feature in disk drives. It was followed by the current Ultra ATA or UATA. UATA also goes by many other names like UDMA, DMA-33/66 and ATA-33/66.

UATA isn’t really a new standard, and UATA drives are still backward compatible with ATA and ATA-2 systems. Ultra ATA is the term given to drives that support the new DMA modes that provide up to 33 MB/s (UDMA-33) or up to 66 MB/s (UDMA-66) transfer rates, with 100 MB/s just over the next hill. Both UDMA versions support CRC error checking that assures data integrity through the IDE cable, which was a source of serious problems in previous standards. Note that the UDMA-66 standard calls for an 80 conductor cable instead of the 40 conductor cable used up to and through UDMA-33.

EIDE or Enhanced IDE is a designation created by Western Digital to describe its newer line of high speed drives. It really isn’t a standard at all, but just a marketing tool. However, it has taken on common public use to refer to all high speed drives and the systems that support them.

Bus Mastering

By default, IDE disk drives transfer data to and from the system using a protocol called “Programmed Input/Output” or PIO. This technique requires the CPU to get into the middle of things by executing commands that shuffle the data to or from RAM and the drive. Thus, the CPU is tied up doing the work of fetching and stuffing. Also, the time overhead involved in putting data in the cache, reading each byte into the CPU, sending it out to the cache again and then routing it to its destination puts a top end to the speed of the transfers.

In typical desktop systems this isn’t much of a problem. The system doesn’t have much to do during these transfers anyway, so who cares? Even if a user has several applications open at once, seldom is more than one actually doing anything, and during disk I/O, the application will likely be idle anyhow.

Now suppose you have an activity known as “streaming” going on which is pulling lots of data from the drive in real time while the application doing the streaming is simultaneously attempting to process the data as it arrives. Wow! Now we have a problem. The CPU really does have lots to do while data is being transferred and so getting tied up actually DOING the transfers cuts into application processing time.

In all fairness, even at the fastest rate, a disk drive couldn’t pump enough data to or from memory fast enough to cause modern high speed CPUs to break into a sweat. Even at this high demand level, there is time to shuffle data, process that data, shuffle it back, service interrupts, update the screen, send a byte to the modem, and so on.

Enter the DAW

Now we have a whole new ball game. Not only is the digital audio application trying to stream data and process in real time, but it needs to stream multiple files for multi-track mixing at the same time and still supply CPU horsepower to real time effects like reverbs and compressors. This forces a limit on the number of tracks in the mix and the number of real time effects that the project can sustain when attempting to perform real time production.

Under this load, even a Pentium 500 will fall short of the goal if it has to contend with PIO along with all of this other processing. If you want to mix more than 6 or 7 tracks using more than a few parametric EQs and one reverb, you will need to free up some major CPU cycles!
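Some back-of-the-envelope arithmetic shows why the bottleneck here is CPU overhead rather than raw disk bandwidth. The figures below assume 16-bit, 44.1 kHz mono tracks; they are illustrative, not benchmarks.

```python
# Sustained throughput needed to stream a given number of audio tracks,
# assuming 16-bit samples at 44.1 kHz per track.
def stream_rate_mb_s(tracks, sample_rate=44100, bytes_per_sample=2):
    """Required disk throughput in MB/sec for simultaneous track streaming."""
    return tracks * sample_rate * bytes_per_sample / (1024 * 1024)

for n in (8, 24, 48):
    print(n, "tracks:", round(stream_rate_mb_s(n), 2), "MB/s")
```

Even 48 tracks needs only about 4 MB/s of sustained throughput, well within any modern drive’s reach; what limits the mix is the CPU time eaten by shuffling that data via PIO while also rendering real time effects.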

The answer is to put the load of data I/O someplace else so the CPU can just go to RAM and expect to find the data already there and process it. This is the idea behind DMA or Direct Memory Access. Using DMA, a system splits the responsibility of data communication among several intelligent sub-systems so each can do a specialized job very well.

DMA may seem like a new idea, but actually it has been around since before the first PC was ever designed. In the PC, sound cards, floppy drives and even SCSI controllers have been using DMA on the ISA bus for a long time. This method requires the DMA controller chip on the ISA bus to referee the transfers between the devices and RAM and thus is called “third party” DMA.

However, the ISA bus is slow. This doesn’t bother low-throughput devices like the floppy drive and simple sound cards, but to make DMA effective for high speed disk drives, the ISA bus is useless. The world had to wait for the development of the “local bus” to get the job done. This local bus technology is being implemented today on newer motherboards by the PCI bus.

With PCI, third party DMA is fast enough to become a useful disk access alternative to PIO. The PCI bus also allows a device connected to it to take control of the bus and perform transfers without the use of a DMA controller chip. This is referred to as “first party” DMA or, more commonly, bus mastering. Using bus mastering, the peripheral device can access system memory the same way as the CPU itself.

Just about everything on the PCI bus (and its offshoot, the AGP connector) can use bus mastering if the designers wish it to. This includes Ethernet controllers, sound cards, Win-modems, display adapters, and so on, although due to little demand for high speed data transfer by these adapters, most of them still stick to PIO. It’s important to understand that disk controllers are the bus master devices on the PCI bus, not the drives themselves. However, for most disk controllers to be operated in bus master mode, they require that the drives themselves at least support multiword DMA mode 2 so the data handshaking controls can be implemented between the drive and the bus mastering controller.

Bus mastering, being an advanced form of DMA, demands very specific motherboard chip set support as well as specific support from the hardware attempting to use it. The operating system must also be able to support it by loading special “bus mastering aware” drivers. This may sound rather complicated, and it is. However, the gain in data transfer speeds and CPU overhead reduction associated with bus mastering is such that there is no way modern digital audio applications could perform acceptably without it.

Luckily, Intel and Windows support it on the board and in the system and most if not all SCSI and IDE controllers can operate using it. Don’t think that these manufacturers went through all of this trouble just for us musicians. Be assured, they didn’t. This improvement was to facilitate network server applications. However, we can also reap the benefits of this technology.

A Shaky Start

Almost all modern SCSI controllers connect to the PCI bus and use bus mastering. This has been SCSI’s largest advantage in terms of DAW performance, but that all changed with the advent of IDE’s entry into PCI bus mastering. For the record though, from this point on, we will limit the definition of PCI bus mastering to a system whereby the IDE controller transfers data to and from the drive using an enhanced DMA protocol. It is usually referred to as Ultra DMA (or DMA-33/66) or Ultra ATA (UATA or ATA33/66).

In the past, there were a lot of problems getting this to work. The Intel drivers shipped with many motherboards were “behind the curve” in terms of functionality when compared to the Intel drivers installed with Windows. Also, Windows 95 didn’t start off supporting bus mastering. Upgrading to 95B was necessary to provide this feature. The same goes for NT4. Service Pack 3 must be installed to provide bus mastering. Many people were tempted, not knowing any better, to install the drivers shipped with the motherboard during setup because, well, they were shipped with the motherboard! It seemed the thing to do.

Unfortunately, these drivers gave poor performance, and sometimes none at all. Even after discovering the mistake and attempting to remove them from Windows, they wouldn’t de-install cleanly. This left no alternative but to wipe the drive and re-install Windows and all of the software that came after it. Not fun! As it turns out, just using the native Windows drivers seems the way to go.

There was some early confusion as to which drives would or would not work under bus mastering. Since there are currently two types of Ultra DMA in common use, UDMA-33 and UDMA-66, one needs to check the specs. With UDMA-33, this isn’t much of an issue any more, as almost any disk drive manufactured in the past two years is capable of multiword DMA mode 2 or better transfers and thus will run under bus mastering. The same can be said for current motherboards. Most using the Intel 430 FX, HX, VX, TX or 440 FX, LX, EX, BX, GX Pentium chip sets will support bus mastering, as will those using the VIA chip sets. Naturally, the Intel 810, 820 and 840 chip sets support bus mastering, but this chip set family is plagued with problems in the memory department, so at this point a DAW using a motherboard with any of these three chip sets is a dicey matter.

Make sure that both the motherboard and disk drive support the newer UDMA-66 if you want this higher performance transfer feature. UDMA-33 will use the same IDE cable between the drive and motherboard as the older PIO system, so if you currently have a newer drive and motherboard but don’t use bus mastering, usually all you need to do is go to Windows and switch it on. UDMA-66, however, requires a different cable and chip set support, so you must make some real effort to upgrade to UDMA-66 from PIO or UDMA-33 even if the drive supports it. If your current motherboard isn’t UDMA-66 capable, you can get a separate IDE controller board designed for UDMA-66 which plugs into your PCI bus to get UDMA-66 up and running on your current system.

What is the big deal with the new cable, you ask? As it turns out, the cable is 80 conductor instead of the usual 40 conductor. Both ends still have 40 pin connectors. Huh? Here’s the deal. The extra 40 wires are grounds and lie in between the other 40 signal lines acting as shielding. This reduces crosstalk on the lines and enhances reliability. UDMA-66 drives will not function at 66 MB/s without this 80 conductor cable, and will default back down to 33 MB/s if they sense a 40 conductor cable. On the other hand, using an 80 conductor cable on a UDMA-33 drive will likely enhance its performance too, due to the more reliable connection and thus fewer transfer retries.

As one example of the kinds of things that can go wrong, this is an experience Glen, one of this article’s authors, had setting up his new DAW.

When I set up my first DAW system 2 years ago, I picked up one of the new Western Digital 13 gig drives and tried to set it up for bus mastering. When I tried, the stupid thing kept defaulting to DOS mode! Nothing I did helped until a friend suggested I poke around on the WD web site for clues.

I hunted for quite some time until I came across an obscure reference to the fact that all of these new drives were being shipped enabled for UDMA-66 by default. If a user wanted to use UDMA-33 instead, they needed to download this little program that will talk to the drive and tell it to switch modes. Fancy that! I downloaded and ran the utility. Within a few moments I had bus mastering running and a benchmark reading of 3.53% CPU usage for streaming transfers and an estimated track count of over 80 tracks of digital audio.

I understand that these drives are no longer being shipped with UDMA-66 as the default. I wonder if my letter had anything to do with that!

To make things a bit more livable, almost all UDMA-66 drives made today will auto-switch between 33 and 66 depending on the abilities of the controller and the cable. Incidents like the one described above are now, hopefully, a thing of the past. After all, drive manufacturers WANT you to buy these new drives regardless of whether or not you can use the enhanced throughput. This way, they only need to make one type of interface for their drives. Again, this isn’t to make our lives easier, but theirs.

That Voodoo That You Do

Another example of things that go “bump in the night” is from a series of posts on the PC-DAW news group where a fellow tried to enable bus mastering in NT4 only to be told by the Microsoft utility he was running that there were no such drives in his system. Between convincing his system that he has the authority to hack the registry and finding drivers that would work, he got it set up but still an air of mystery hangs over his system because it simply didn’t follow the rules during set-up.

When you have to resort to shaking chicken bones over the tower and smoking chunks of cactus to make things work, you know you’re dealing with Windows.

One caveat involves bus mastering and CD ROMs: if you put a CD drive on a bus mastering channel, you must be sure it is Ultra DMA compatible. This is a good reason never to put a hard disk from which you will be streaming data and a CD drive (player or recorder) on the same IDE channel unless you are sure the CD drive is Ultra DMA compatible.

Bus Mastering and DMA

What is the difference between regular DMA and bus mastering?


Bus Mastering Logistics

First, let’s look at bus mastering again but from a DMA point of view. A bus is a data transport. Bus mastering is a very advanced means of transporting data to and from devices and/or memory using the PCI bus as a conduit.

A device that issues read and write operations to memory and/or I/O slave devices is considered the master, although a master device can have slave memory and/or I/O ports available to be accessed by other masters. For example, an Ethernet controller must convey data it receives from over the LAN and must also access data to send over the LAN as a bus master, but acts as a slave when the CPU, acting as a master, programs it to initialize and to specify where it must get and put data.

Only one bus master can own, or “drive,” the bus at a given instant, and the bus arbiter is responsible for arbitrating bus master requests from the various bus master devices. A bus master device will request access to the bus, which is granted immediately provided no other master has it at the moment. If another master device has been granted access, the new one must wait until the first one completes its single or burst transfer, or the bus arbiter times out and yanks the access away in favor of the new requesting master, whichever happens first.

If an operation is interrupted by a timeout, it is resumed when that issuing master receives its turn again. The CPU is a bus master device, and is always present. The Intel PIIX family of IDE controllers found in all modern Intel chipsets for the x86 family are bus master devices. The SoundBlaster Live! is a bus master device that accesses main memory through the bus to read samples. There are many peripherals which use bus mastering on the PCI bus to free the CPU from actually doing every transfer, for example, video cards, network cards, SCSI controllers, other storage devices, and so on. Note that bus mastering transfers do not require and therefore do not tie up the DMA channels like normal DMA devices do.

DMA Logistics

Normal DMA is controlled by a chip. The DMA chip itself is a bus master device. It can be programmed by the CPU to perform transfers from memory to I/O, or I/O to memory (some also allow memory to memory, but that is not the case with the PC, although two DMA channels can be used to do that given some fancy driver footwork). Therefore, the DMA system acts as a bus master to perform the programmed operation while the CPU can be doing something else. The DMA controller sends a signal to the CPU when the transfer is complete.

DMA is used to perform transfers without CPU intervention to or from peripherals that don’t have bus master capabilities. DMA issues accesses similar to standard bus I/O accesses, but with the addition of handshaking lines DMA_Request and DMA_Acknowledge. These signals are present on the bus for each DMA channel. A slave device must handle these handshaking lines to be able to be operated through DMA. Obviously, this is a much simpler system than having to support all the complex and necessary logic in a bus master device.

The main limitations of a DMA capable slave compared with a bus master peripheral are:

1) The DMA slave is passive. It is the CPU which must specify the transfers to be done. A bus master device can perform transfers on its own initiative without restrictions.

2) DMA can only transfer blocks of contiguous memory content, and only one block for each programmed transaction. A bus master can access memory or I/O following any pattern without restriction.

3) In the case of the PC, the DMA device can only transfer blocks of up to 64 KBytes, and no single transfer may cross a 64 KByte boundary, which limits its utility. In older PCs, the DMA system could only access the first megabyte of memory. Later it was extended to the first 16 megabytes, and currently the DMA device can more often access all memory, but always within a 64 KByte boundary for each operation.

4) DMA is generally slower, although there are new faster modes and burst timing modes achieving considerable throughputs. These modes must be specifically supported by the slaves in order to use them. The original Intel 8237 DMA controller was extremely slow. So slow that disk transfers were more efficiently done by the CPU using PIO mode 4, because DMA would become the bottleneck. In the best theoretical case (which was never met) it could only transfer 4 MB/s. The reality was more like 1 MB/s.
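The 64 KByte boundary restriction in point 3 can be made concrete with a small check (a sketch only; the addresses and lengths are hypothetical):

```python
PAGE = 64 * 1024  # classic PC DMA page size: 64 KBytes

def crosses_dma_boundary(addr: int, length: int) -> bool:
    """True if a transfer of `length` bytes starting at `addr` would
    cross a 64 KByte page boundary -- something one classic PC DMA
    operation cannot do, forcing the driver to split the transfer."""
    if length == 0:
        return False
    return addr // PAGE != (addr + length - 1) // PAGE

# A 2-byte transfer straddling 0xFFFF/0x10000 spans two pages:
print(crosses_dma_boundary(0xFFFF, 2))      # True
# A full 64 KByte block starting on a page boundary is legal:
print(crosses_dma_boundary(0x20000, PAGE))  # False
```

This is why DMA-era drivers had to allocate buffers carefully, while a bus master can scatter its accesses anywhere in memory.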

UDMA Bus Mastering

So how do you get this so-called Bus Mastering to work anyhow?

First, let’s make sure your ducks are in a row. You must have the following squared away:

1) A motherboard with the proper chip set for bus mastering. The 430 FX, HX, VX, TX and 440 FX, LX, EX, BX, GX chipsets from Intel will support UDMA bus mastering as well as the VIA chip set and some other competing chip sets.

2) A disk drive that is Ultra DMA compliant. Most new drives are.

3) Windows 98, Windows 95 OSR2 or above, or Windows NT with service pack 3 (at least!) installed.

How can you tell if you have this condition met? If you have Windows 98 installed, you’re ready to rock. If you are running Windows NT and don’t know if you have service pack 3 installed, then you aren’t the one to be messing with NT and you need to call in whoever it is that normally administers your system. If that’s you and you still don’t know what I’m talking about, sell your system and buy a Windows 98 system. You’ll be better off. Otherwise, NT users please skip ahead to step 6) below. Windows 2000 users can skip to step 7).

For those of you still running Windows 95, open Control Panel and double click on the SYSTEM icon. You will see a box with the Windows logo and a heading “System” beside it. The panel will show “Microsoft Windows 95” on one line. If you see “4.00.950b” or “4.00.950c” on the second line, you’re good to go. If you see “4.00.950” or “4.00.950a” on the second line, then you should upgrade to OSR2 or OSR 2.5 or Windows 98. If you’re stubborn, you can also download the Intel bus master driver for your version of Windows 95 from Intel at:

However, there’s a catch! If you install this driver and later upgrade to Windows 98, you MUST un-install this driver prior to the upgrade. This driver was never meant for Windows 98 and your system will likely go nuts if you leave this driver in place. If the driver is already on your system and you are going to upgrade to Windows 98, you can download the driver install program and use it to un-install the driver first, then do the upgrade.

4) Now you need to see if Windows has properly identified your motherboard chip set. You may need to run the Windows 95/98 INF Update utility. To see if you do, you must first know the chip set on your motherboard. Get out the book that came with it and see what it says. Next, open CONTROL PANEL and double click on the SYSTEM icon to get the box with your system version number. Now look at the following chart to see if you need to go any further with this process.

[Table not recoverable from the archive: it listed each chip set against the Windows versions 4.00.950, 4.00.950a, 4.00.950b, 4.00.950c and 4.10.1998, with a YES or NO indicating whether the INF Update utility is needed. Operating System Table taken from Intel document “Troubleshooting Common System Configuration Issues”.]

If you fall into a NO category, then skip to part 5) below. If you got a YES, again open CONTROL PANEL and double click on the SYSTEM icon. With the System box open, click the Device Manager tab and make sure the “View Devices by Type” button is checked. Click on the “+” box next to “Hard disk controllers”. If you see a list of controllers like this:

Primary IDE Controller (single FIFO)
Standard Dual PCI IDE Controller
Standard IDE/EIDE Controller

and that’s it, well, you need to run this utility. You can get it from:

Be sure to read everything there including the README text to make sure you know what’s going on.

5) Open CONTROL PANEL and double click on the SYSTEM icon. Select the Device Manager tab and then click on the “+” box next to “Disk Drives” to expand the list of hard drives on your system. Double click on the first IDE disk entry in the list. You will get a new control box related to that drive. Click on the “Settings” tab. You should see a check box labeled “DMA” on the right side, near the center. If this box is grayed out, you have some troubleshooting to do. If it’s not grayed out, check the box and then click OK at the bottom. Now double click on the next IDE entry if you have more than one IDE drive. Do the same thing for this one and any others you have. Click on the “+” box for your CD ROM also and follow the same procedure. If your CD ROM is UDMA compliant, you will have a DMA check box for it as well. Check these boxes also. Now all of your drives will be running bus mastering once you reboot.

6) To activate bus mastering on Windows NT, you must run a utility called DMACHECK.EXE in the support\utils\i386 directory. If it’s not there, download it from:

Run DMACHECK and it will show you if DMA is enabled on either IDE channel. If not, click on the ENABLE radio button for all of the DMA compatible devices on your system that you wish to activate. This should be all of your hard drives and any CD ROM drives too. Any other listed devices, well, that’s up to you. After the selections are made, reboot and run the program again. It should tell you that all of the devices you selected are now enabled for DMA protocol. If this operation failed, there’s some good advice from the web. Go to:

Be prepared to do some registry hacking! It may come to that. At least this document will give you a fighting chance, so check it out even if you don’t think anything went wrong. You may have been fooled by DMACHECK!

Users of Windows 2000 have an easy time of it. Bus mastering is installed and activated by default so you need do nothing to use it.

7) Now it’s time to run a benchmark test that will focus on your system’s performance under DAW conditions. José has written a benchmark, DSKBENCH.EXE, that accurately simulates multi-track digital audio streaming.

This program is run from a DOS shell and will report record and play throughput in MB/sec as well as an estimated number of 16-bit 44.1KHz audio tracks you might expect to be able to stream simultaneously with that drive under real conditions. For details on using that benchmark, continue to the section DAW Disk Benchmark.
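The arithmetic behind such a track estimate is simple to sketch (this illustrates the idea only; it is not DSKBENCH’s actual formula, and it ignores seek overhead and safety margins):

```python
# One 16-bit 44.1 kHz mono track consumes a fixed data rate.
BYTES_PER_TRACK = 44_100 * 2   # 88,200 bytes/sec

def estimated_tracks(throughput_bytes_per_sec: float) -> int:
    """How many audio tracks a measured sustained throughput could
    feed, as a naive upper bound."""
    return int(throughput_bytes_per_sec // BYTES_PER_TRACK)

# A drive sustaining 7.2 MB/sec could in principle feed:
print(estimated_tracks(7_200_000))   # 81
```

A real benchmark reports a lower figure because random seeks between track files eat into the sustained rate.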

Comparing Drives

When it comes to picking a drive for a DAW, you have a bit of a job ahead of you.

We looked at the two contending controller formats in the last sections, but that’s just an overview. What about the specifications? What do you need to know about a drive’s performance in order to make an intelligent choice regardless of which format you’re interested in?

As it turns out, the specifications of both the drives and the controllers can lead you quite clearly to the best choice so long as you don’t lose track of what you’re after. You want a disk for a DAW – not a file server – so many of the drive specs and controller advantages don’t apply and others will count more heavily. On the other hand, you’re not just going to be typing email or surfing the net on this system either, so not “just any old drive” will do.

Decision Criteria

To an extent, the drive format you have already committed to will be a big factor. If you don’t want to support a large number of drives and CD devices, IDE will look like the best path to follow and SCSI will be much less appealing. If you already have SCSI, then the choice is clear.

If you are building from scratch, you should at this point have a good idea what CPU you would like to run, how much RAM you will need, what you feel is right as far as video, sound, and perhaps LAN cards go, and if you want an internal modem. Your choice of motherboards and disk system are now at issue.

Should you spring for SCSI and should you get a motherboard with built-in SCSI support?

Is IDE the best way to go?

Does it really matter?

We can’t answer these questions for you, but we will give you better tools for reaching that decision yourself with less dependence on the common DAW disk superstitions, misconceptions and other people’s unfounded prejudice.

Care and Maintenance is Still Important

Just for the record, no amount of care in picking a drive will offer you advantage if you don’t do a few simple optimizing steps on your own.

For one thing, defragment your data partitions often. Keeping large file access sequential will allow your drive’s performance qualities to shine. Also, store your data in the front tracks of the drive (first partition) or as close to the front as you can. As the track numbers get higher and the tracks get closer to the spindle, the “zoned formatting” of your drive will result in fewer sectors per track as you move toward the spindle. The more data you can pull from a single track, the faster the throughput. The outer tracks with the higher sector count will hold more data, thus offering up to 60% faster read/write throughput compared to the inner tracks.
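Zoned recording’s effect on sequential throughput is simple geometry: at a fixed rotation speed, throughput is sectors per track times sector size times revolutions per second. The sector counts below are hypothetical, chosen only to illustrate the outer-versus-inner gap:

```python
SECTOR = 512  # bytes per sector

def track_throughput(sectors_per_track: int, rpm: int) -> float:
    """Sustained sequential throughput in MB/sec for one track."""
    revs_per_sec = rpm / 60
    return sectors_per_track * SECTOR * revs_per_sec / 1_000_000

# Hypothetical 7200 rpm drive: 300 sectors on an outer track,
# 190 sectors on an inner one.
outer = track_throughput(300, 7200)   # ~18.4 MB/sec
inner = track_throughput(190, 7200)   # ~11.7 MB/sec
print(round(outer / inner - 1, 2))    # ~0.58, i.e. roughly 60% faster
```

This is why putting your audio partition at the front of the drive is worth the trouble.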

For another thing, if you are using FAT16 partitions for your system, consider reformatting the data partition as a FAT32 drive with the /z:64 switch. Assuming you’ve already fdisk’ed the drive as a FAT32 device, the proper format command is:

format d: /z:64

to format the d: drive. Replace the d: with the drive letter you wish to format. FAT32 with the /z:64 switch will remove the 2.1 gig partition limit while still giving you the large cluster sizes. Note that Windows NT cannot use FAT32 formatting, and NTFS uses small cluster sizes. Therefore, under Windows NT you need to format your audio disks as FAT16 disks or suffer a modest performance hit from NTFS.

However, Windows 2000 (NT 5 by another name) users need not suffer the constraints of NTFS because W2000 supports FAT32. Keep in mind that if you’re already a W2000 user and have already formatted all of your drives to NTFS, Windows cannot “un-do” an NTFS format back to FAT. You’re stuck with it unless you’re willing to either FDISK your audio data partition and reformat it or use a third party program like Partition Magic to make the switch.

That said, let’s look at the guts of a hard drive.

Media Access Speed

In comparing IDE and SCSI it is important to understand that both types of drive are, from a “between the shells” point of view, the same.

Inside the Drive

Hard disks have a sealed case with one or more platters of magnetically coated media, a small synchronous motor designed to rotate the platters at a precise speed, and an actuator with one or more arms attached, each with a read/write head at the tip. The platters hold the data in the form of concentric tracks, each split like a pie into many sectors. Each sector will hold 512 bytes of user data as well as error correction information and other alignment information.

The actuator is designed like a speaker voice coil, extending or retracting along its throw path depending on the strength of an electrical signal in the coil which will force it very precisely to any location. The arms attached to the actuator are thereby positioned to various places above the spinning platters where the heads can pick up or lay down streams of magnetic information.

The heads float on a cushion of air at a distance of about 10 microns above the platter surface. The platter’s rotation produces that cushion of air. In contrast, a particle of smoke is about 100 microns in size, or 10 times the head gap. For this reason, these drives are manufactured in very closely controlled “clean room” conditions, are sealed at the factory against any interaction with the outside environment and sold with the expressed condition that the user never, for any reason, open the drive casing.

The drive also has a circuit board to control the mechanism and coordinate the transfer of data to and from the platters in a specific format. Aside from the data and power connectors, that’s about the whole story. It stands to reason, therefore, that the physical properties of these moving parts hold the key to a drive’s access speed and data throughput.

In reality, this is more the case than is commonly believed, and for that matter, commonly disclosed by the drive manufacturers. So many drives are advertised with little more than their data storage capacity and interface burst transfer speed. Neither of these factors relates directly to a drive’s usability as a DAW storage system. To get the real story, you must dig into the drive specifications, usually available only on the maker’s web site and even then only after linking past several pages of ad hype and chest pounding.

The drive actually performs two distinct operations in order to read or write data, those being head positioning and data transfer. Let’s start with head positioning. To perform this act, the drive must:

1) receive a request to position the heads to a specific location on the platter.

2) select the proper head to access the requested platter.

3) wait for the requested sector on the track to rotate into position for access.

All of this positioning and the buffering of the data to be written or that is finally read must be controlled by the drive electronics. Although the electronics is quite fast by all accounts, there is still a certain amount of overhead associated with this activity. It is referred to as… you guessed it, Controller Overhead. Sometimes this spec will be listed for the drive and is usually the same over a given product line or at least a given model range. It is expressed in milliseconds (thousandths of a second), or “mSec”.

Understanding the Specs

The act of locating and positioning to a specific track is called “seek” and is likewise measured in mSec. It can come in three flavors. A seek can cause the heads to ramp from one end of the drive, say the outer most track, to the other end, the inner most track or vice versa. Obviously, this end-to-end movement represents the worst case as far as seek time. It is specified as the “full stroke” seek time. Another case is the head having to move only one track over, most common in reading data from a large file that extends from one track to another. This is called “track-to-track” seek time and represents the best case. The last is a measure of the average time required to perform a series of random seeks to various tracks and is referred to as “average” seek time.

Here is another factor, but one that is seldom specified. Given that the heads are at the end of long arms that are being swung along an arc by the actuator, and considering that the track being hunted is very narrow and separated from the adjacent tracks by fractions of microns, once the actuator has stopped, the head will require a finite amount of time to stop jiggling around and hover precisely over the target track. This time is called “settling” time. If you look at a drive’s track-to-track seek time and then its full stroke seek time, it will be obvious that it takes a long time for a head to move just to the next track as compared to the head moving across a thousand tracks. In other words, it doesn’t take a thousand times longer to move the head across a thousand tracks. This is because a good portion of that seek time is really settling time and is determined by a pre-programmed delay in the drive electronics.

Once the seek has taken place, the drive must wait for the target sector to rotate into position under the head. This delay is called “Rotational Latency”, and “average latency” is given to be one half the time it takes the platter to make one full rotation. It will be the same for all drives running at the same rotation speed. There isn’t much you can do about it. However, the faster the rotation speed of a drive, the less time it takes the target sector to rotate into position. Faster is better.
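Average latency follows directly from the rotation speed: half of one revolution. This little function reproduces the figures quoted in drive spec sheets:

```python
def avg_latency_ms(rpm: int) -> float:
    """Average rotational latency: half a revolution, in milliseconds."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

print(round(avg_latency_ms(5400), 2))   # 5.56
print(round(avg_latency_ms(7200), 2))   # 4.17
print(round(avg_latency_ms(10000), 2))  # 3.0
```

Compare these with the 5.5ms, 4.17ms and 2.99ms figures in the Western Digital chart later in this article.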

It is important to note that drives are formatted with what is called “track skewing”, whereby the sectors of adjacent tracks are not laid out next to each other, but are offset along the arc of the track. The offset is chosen so that the next sequential sector in a read or write operation is likely to be rotating into place under the head just as the head finishes moving to the next track. Because the rotational latency number is specified for new, random accesses, the latency spec simply doesn’t apply to the sequential access case.

As the drive reads a large file that extends beyond the capacity of a single track, the drive will switch heads to the same numbered track of the next platter to continue reading or writing. This wastes less time than doing a track-to-track seek after filling every track. In technical terms, all tracks of the same number on all sides of all platters is referred to as a single “cylinder”.

Therefore, if a drive has 4 platters making for a total of 8 “heads” or sides of platters and each side has 1400 tracks, then the entire drive has 1400 cylinders and each cylinder is then made up of 8 tracks that all line up under each other. The more platters there are in a drive, the more often the heads will be switched from platter to platter during a long read or write, exactly the sort of thing that will happen in a DAW. The heads are switched electronically and thus are not subject to mechanical delays except for rotational latency, and track skewing helps here too.

You might think that the more platters, the better. In rough terms, yes.

However, it is even better if each platter is so large that a track has many more sectors and thus will hold more data before the heads need to be switched or sent seeking the next track. Therefore, the important spec in this area is the lowest number of heads for the same amount of storage space. Drives with higher capacity platters are the clear winners here. This spec is sometimes given as the “areal density” or the number of bits per square inch that the magnetic material can hold. Even if this number is not given, you can make a good guess by taking the storage capacity of a drive and dividing by the number of heads in the drive (that is, the number of PHYSICAL heads, not the number reported by DOS). If you compare this figure among drives, you will have a good guide even if areal density isn’t listed.
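The capacity-to-head comparison suggested above is a one-line calculation. The figures used here come from the Western Digital SCSI chart later in this article:

```python
def mb_per_head(capacity_mb: float, heads: int) -> float:
    """Capacity per physical head: a rough stand-in for areal
    density when the manufacturer doesn't list it."""
    return capacity_mb / heads

# WDE18300 (7200 rpm): 18,310 MB spread over 12 heads.
print(round(mb_per_head(18_310, 12)))   # 1526
# The 10,000 rpm class drive in the same chart: 18,310 MB over 8 heads.
print(round(mb_per_head(18_310, 8)))    # 2289
```

The same capacity on fewer heads means denser platters, which is the better spec for streaming.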

Now let us review and prioritize:

1) In DAW streaming access, we must read or write small chunks of data for each audio track, and so random seeks are the majority. The average track seek time holds a lot of weight, although it will be reduced if all data accessed is inside a relatively small part of the disk. The lower this average seek time, the better.

2) The higher the areal density, the more data will be throughput before the head will have to switch platters or move to another track. Therefore, the higher the capacity-to-head ratio, the better.

3) Finally, rotation speed is a big factor in increasing throughput. If other factors are held constant, the faster a drive can spin, the better.

Just the Facts

The following charts are a sample of drive specs as offered by several popular drive manufacturers. The data was taken from the manufacturers’ web sites. A list of web addresses for drive manufacturers and other interesting places on the internet may be found on the NOTES page of this article.

Western Digital Disk Drive Specification Comparison Chart as of June 29, 2000

IDE Drives | Rotational Speed | Capacity (MB) | Platters / Heads | Avg. Seek Read | Avg. Seek Write | Track-to-Track | Full Stroke Read | Avg. Latency | Controller Overhead | Buffer-to-Disk Xfer Rate | Buffer
(models not recovered) | 7,200rpm* | not recovered | 2 / 2, 2 / 3, 2 / 3 | 8.9ms | 10.9ms | 2.0ms | 21ms | 4.2ms | 0.3ms | 23.6 to 38 MB/sec | 2MB
(models not recovered) | 5,400rpm* | not recovered | 1 / 2, 2 / 4, 2 / 3, 3 / 6 | 9.5ms | 11.5ms | 2.0ms | 19ms | 5.5ms | 0.3ms | 16.6 to 29.1 MB/sec | 2MB
WD307AA | 5,400rpm | 30,758 | 3 / 6 | 9.5ms | 11.5ms | 2.0ms | 19ms | 5.5ms | 0.3ms | 19.7 to 33.9 MB/sec | 2MB
WD450AA | 5,400rpm | 40,020 | 3 / 6 | 9.5ms | 11.5ms | 2.0ms | 19ms | 5.5ms | 0.3ms | 22.2 to 37.6 MB/sec | 2MB

SCSI Drives | Rotational Speed | Capacity (MB) | Platters / Heads | Avg. Seek Read | Avg. Seek Write | Track-to-Track | Full Stroke Read | Avg. Latency | Areal Density | Buffer-to-Disk Xfer Rate | Buffer
WDE18300-0048 | 7,200rpm | 18,310 | 6 / 12 | 6.9ms | 7.9ms | 0.8ms | 16ms | 4.17ms | 2.24 Gb/SqIn | 30 MB/sec max | 2MB
WDE18300-0049 | 7,200rpm | 18,310 | 6 / 12 | 6.9ms | 7.9ms | 0.8ms | 16ms | 4.17ms | 2.24 Gb/SqIn | 30 MB/sec max | 4MB
(model not recovered) | 10,036rpm | 18,310 | 4 / 8 | 5.2ms | 6.2ms | 0.6ms | 14ms | 2.99ms | 3.311 Gb/SqIn | 45 MB/sec max | 2MB
(model not recovered) | 10,036rpm | 18,310 | 4 / 8 | 6.6ms | 7.9ms | 0.6ms | 15.7ms | 2.99ms | 3.311 Gb/SqIn | 45 MB/sec max | 2MB

* Rotational speed inferred from the listed average latency (latency in ms = 30,000 / rpm); some model names and capacities did not survive in the source.

Maxtor Disk Drive Specification Comparison Chart as of June 29, 2000

IDE Drives | Rotational Speed | Platters / Heads (per model) | Avg. Seek | Track-to-Track | Full Stroke | Avg. Latency | Controller Overhead | Buffer-to-Disk Xfer Rate | Buffer
36 series | 5,400rpm* | 2 / 3, 2 / 4, 3 / 6, 4 / 8 | 9ms | 1ms | 20ms | 5.55ms | 0.3ms | 34.2 MB/sec | 2MB
40 series | 5,400rpm* | 1 / 2, 2 / 4, 3 / 6, 4 / 8 | 9ms | 1ms | 20ms | 5.55ms | 0.3ms | 36.9 MB/sec | 2MB
60 series | 5,400rpm* | 3 / 6, 4 / 8 | 9ms | 1ms | 20ms | 5.55ms | 0.3ms | 40.8 MB/sec | 2MB
VL17 series | 5,400rpm* | 1 / 1, 1 / 2, 2 / 3, 2 / 4 | 9.5ms | 1ms | 20ms | 5.55ms | 0.3ms | 34.2 MB/sec | 512K
VL20 series | 5,400rpm* | 1 / 2, 2 / 3, 2 / 4 | 9.5ms | 1ms | 20ms | 5.55ms | 0.3ms | 36.9 MB/sec | 512K
VL30 series | 5,400rpm* | 1 / 1, 1 / 2, 2 / 3, 2 / 4 | 9.5ms | 1ms | 20ms | 5.55ms | 0.3ms | 40.8 MB/sec | 512K
6800 series | 7,200rpm* | 1 / 2, 2 / 3, 2 / 4, 3 / 6, 4 / 8 | 9ms | 1ms | 20ms | 4.18ms | 0.3ms | 33.7 MB/sec | 2MB
Plus40 series | 7,200rpm* | 1 / 2, 2 / 3, 2 / 4, 3 / 6, 4 / 8 | 9ms | 1ms | 20ms | 4.17ms | 0.3ms | 43.2 MB/sec | 2MB

* Rotational speed inferred from the listed average latency; per-model capacities did not survive in the source.

Quantum Disk Drive Specifications Comparison Chart as of June 29, 2000

IDE Drives | Rotational Speed | Capacity (MB) | Platters / Heads | Avg. Seek Read | Track-to-Track | Full Stroke Read | Avg. Latency | Controller Overhead | Buffer-to-Disk Xfer Rate | Buffer
Fireball lct 10 | 5,400rpm | 5,121 | not recovered | 8.9ms | 2ms | 18ms | 5.56ms | Not Listed | 37.13 MB/sec | 512K
Fireball Plus LM | 7,200rpm | 10,273 | 2 / ?, 3 / ?, 4 / ?, 6 / ? | 8.5ms | 0.8ms | 15ms | 4.17ms | Not Listed | Not Listed | 2MB

SCSI Drives | Rotational Speed | Capacity (MB) | Platters / Heads | Avg. Seek Read | Track-to-Track | Full Stroke Read | Avg. Latency | Areal Density | Buffer-to-Disk Xfer Rate | Buffer
Atlas V | 7,200rpm | 9,100 | 1 / 2, 2 / 4, 4 / 8 | 6.3ms | 0.8ms | 15ms | 4.17ms | 6.5 Gb/SqIn | 24.25 to 42.5 MB/sec | 4MB
Atlas 10K II | 10,000rpm | 9,200 | 2 / 3, 3 / 6, 5 / 10, 10 / 20 | not recovered | not recovered | not recovered | 3ms | 7.7 Gb/SqIn | 23.33 to 59.75 MB/sec | 8MB

Seagate Disk Drive Specifications Comparison Chart as of June 29, 2000

IDE Drives | Rotational Speed | Capacity (MB) | Platters / Heads | Avg. Seek Read | Track-to-Track | Full Stroke Read | Avg. Latency | Buffer-to-Disk Xfer Rate | Buffer
(model not recovered) | 7,200rpm* | not recovered | 1 / 2, 3 / 4, 2 / 4, 3 / 6 | 8.2ms | Not Listed | Not Listed | 4.16ms | 45.5 MB/sec | 2MB
U Series 5 UDMA100 | 5,400rpm | 10,000 | 1 / 1, 1 / 2, 1 / 2, 2 / 3, 2 / 4 | 8.9ms | Not Listed | Not Listed | 5.6ms | 41.25 MB/sec | Not Listed

SCSI Drives | Rotational Speed | Capacity (MB) | Platters / Heads | Avg. Seek Read | Track-to-Track | Full Stroke Read | Avg. Latency | Buffer-to-Disk Xfer Rate | Buffer
Cheetah X15 | 15,000rpm | 18,350 | 5 / 10 | 3.9ms | 0.7ms | Not Listed | 2ms | 37.4 to 48.9 MB/sec | Not Listed
Cheetah 18LP | 10,000rpm | 9,100 | 3 / 6, 6 / 12, 12 / 24 | 5.2ms | 0.5ms | Not Listed | 2.99ms | 22.7 to 36.2 MB/sec | Not Listed
Barracuda 18XL ST311184 series | 7,200rpm | 18,400 | 3 / 6 | 5.8ms | 0.5ms | Not Listed | 4.17ms | 18.5 to 29.4 MB/sec | Not Listed

* Rotational speed inferred from the listed average latency; this model’s name and capacity did not survive in the source.

Fujitsu Disk Drive Specifications Comparison Chart as of June 29, 2000

IDE Drives | Rotational Speed | Capacity (MB) | Platters / Heads | Avg. Seek Read | Track-to-Track | Full Stroke Read | Avg. Latency | Buffer-to-Disk Xfer Rate | Buffer
(model not recovered) | 5,400rpm* | not recovered | 3 / 5, 3 / 6, 4 / 8 | 9.5ms | 1.5ms | 18ms | 5.56ms | 14.51 to 26.1 MB/sec | 512K
(model not recovered) | 7,200rpm* | not recovered | 3 / 6, 4 / 8 | 9ms | 1.3ms | 17ms | 4.17ms | 21.4 to 34.6 MB/sec | 512K
(model not recovered) | 5,400rpm* | not recovered | 3 / 5, 3 / 6, 4 / 8 | 9.5ms | 1.5ms | 18ms | 5.56ms | 17.2 to 30.4 MB/sec | 512K
MPE3173AE | 5,400rpm | 17,340 | 2 / 4 | 9.5ms | 1.5ms | 18ms | 5.56ms | 20.3 to 34.4 MB/sec | 512K
(model not recovered) | 7,200rpm* | not recovered | 2 / 4, 3 / 6, 4 / 8 | 8.5ms | 1ms | 17ms | 4.17ms | 24.5 to 40.7 MB/sec | 2MB
(model not recovered) | 5,400rpm* | not recovered | 1 / 2 | 9.5ms | 1.2ms | 18ms | 5.56ms | 21.5 to 37.8 MB/sec | 512K
(model not recovered) | 7,200rpm* | not recovered | 1 / 2, 2 / 3, 2 / 4 | 8.5ms | 0.8ms | 17ms | 4.17ms | 30.6 to 53.9 MB/sec | 2MB

SCSI Drives | Rotational Speed | Capacity (MB) | Platters / Heads | Avg. Seek Read | Track-to-Track | Full Stroke Read | Avg. Latency | Buffer-to-Disk Xfer Rate | Buffer
MAA3182 | 7,200rpm | 18,200 | 10 / 19 | 8ms | 0.9ms | 18ms | 4.17ms | 12.3 to 19.5 MB/sec | 512K
MAE3182 | 7,200rpm | 18,200 | 4 / 8 | 7.5ms | 0.8ms | 16ms | 4.17ms | 21.7 to 32.8 MB/sec | 2MB
MAG3182 | 10,025rpm | 18,200 | 5 / 10 | 5.2ms | 0.7ms | 11ms | 2.99ms | 29.5 to 45 MB/sec | 2MB
MAF3364 | 10,025rpm | 36,400 | 10 / 19 | 5.7ms | 0.7ms | 12ms | 2.99ms | 30.3 to 45 MB/sec | 2MB
MAH3182 | 7,200rpm | 18,200 | 2 / 4 | 6.8ms | 0.6ms | 15ms | 4.17ms | 40 to 49.5 MB/sec | 4MB
(model not recovered) | 10,025rpm* | not recovered | 3 / 5, 5 / 10 | 4.7ms | 0.6ms | 11ms | 2.99ms | 41.8 to 62.5 MB/sec | 4MB

* Rotational speed inferred from the listed average latency; model names and capacities for these rows did not survive in the source.

IBM Disk Drive Specifications Comparison Chart as of June 29, 2000

IDE Drives | Rotational Speed | Capacities (platters / heads) | Avg. Seek Read | Track-to-Track | Full Stroke Read | Avg. Latency | Areal Density | Buffer-to-Disk Xfer Rate | Sustained Throughput | Buffer
75GXP Series | 7,200rpm* | 76.8 GB (5 / 10), 61.4 GB (4 / 8), 46.1 GB (3 / 6), 30.7 GB (2 / 4), 20.5 GB (2 / 3), 15.3 GB (1 / 2) | 8.5ms | 1.2ms | 15ms | 4.17ms | 11.0 Gb/SqIn | 55.5 MB/sec | 37 MB/sec | 2MB
40GV Series | 5,400rpm* | 41.1 GB (2 / 4), 30.7 GB (2 / 3), 20.5 GB (1 / 2) | 9.5ms | 1.6ms | 16ms | 5.56ms | 14.5 Gb/SqIn | 46.5 MB/sec | 32 MB/sec | 512K
37GP Series | 5,400rpm* | 37.5 GB (5 / 10), 30.0 GB (4 / 8), 22.5 GB (3 / 6), 15.0 GB (2 / 4) | 9ms | 2.2ms | 15.5ms | 5.56ms | 5.3 Gb/SqIn | 31 MB/sec | 10.7 to 19.9 MB/sec | Not Listed
34GXP Series | 7,200rpm* | 34.2 GB (5 / 10), 27.3 GB (4 / 8), 20.5 GB (3 / 6), 13.6 GB (2 / 4) | 9ms | 2.2ms | 15.5ms | 4.17ms | 5.3 Gb/SqIn | 35.5 MB/sec | 13.8 to 22.9 MB/sec | 2MB
25GP Series | 5,400rpm* | 25.0 GB (5 / 10), 20.3 GB (4 / 8), 15.2 GB (3 / 6), 10.1 GB (2 / 4) | 9ms | 2.2ms | 15.5ms | 5.56ms | 3.74 Gb/SqIn | 24.45 MB/sec | 8.7 to 15.5 MB/sec | Not Listed
22GXP Series | 7,200rpm* | 22.0 GB (5 / 10), 18.0 GB (4 / 8), 13.5 GB (3 / 6), 9.1 GB (2 / 4) | 9ms | 2.2ms | 15.5ms | 4.17ms | 3.43 Gb/SqIn | 27.93 MB/sec | 10.7 to 17.9 MB/sec | 2MB
16GP Series | 5,400rpm* | 16.8 GB (5 / 10), 13.5 GB (4 / 8), 12.9 GB (3 / 6), 10.1 GB (3 / 5) | 9.5ms | 2.2ms | 15.5ms | 5.56ms | 2.42 Gb/SqIn | 24 MB/sec | 6 to 12 MB/sec | 512K
14GXP Series | 7,200rpm* | 14.4 GB (5 / 10), 12.9 GB (4 / 8), 10.1 GB (4 / 7) | 9.5ms | 2.2ms | 15.5ms | 4.17ms | 2.66 Gb/SqIn | 18.38 MB/sec | 3 to 8 MB/sec | 512K

SCSI Drives | Rotational Speed | Capacities (platters / heads) | Avg. Seek Read | Track-to-Track | Full Stroke Read | Avg. Latency | Areal Density | Buffer-to-Disk Xfer Rate | Sustained Throughput | Buffer
36LZX Series | 10,000rpm* | 36.7 GB (6 / 12), 18.3 GB (3 / 6), 9.1 GB (2 / 3) | 4.9ms | 0.5ms | 10.5ms | 2.99ms | 7.04 Gb/SqIn | 35 to 56.5 MB/sec | 21.7 to 36.1 MB/sec | 4MB
36LP Series | 7,200rpm* | 36.9 GB (5 / 10), 18.3 GB (3 / 5), 9.1 GB (2 / 3) | 6.8ms | 0.6ms | 15ms | 4.17ms | 6.44 Gb/SqIn | 31 to 50 MB/sec | 19.5 to 31.9 MB/sec | 4MB
36XP Series | 7,200rpm | 36.4 GB (10 / 20) | 7.5ms | 0.3ms | Not Listed | 4.17ms | 2.76 Gb/SqIn | 17.9 to 28.9 MB/sec | Not Listed | 4MB
36ZX / 18LZX | 10,000rpm* | 36.7 GB (10 / 20), 18.3 GB (5 / 10), 9.1 GB (3 / 5) | 4.9ms to 5.4ms | 0.3ms | Not Listed | 3.0ms | 3.53 Gb/SqIn | 23.3 to 44.3 MB/sec | 15.2 to 29.5 MB/sec | 2 to 4MB, up to 8MB
18ZX / 9LZX | 10,000rpm* | 18.2 GB (10 / 20), 9.1 GB (5 / 10) | not recovered | 0.7ms | Not Listed | 2.99ms | 2.2 Gb/SqIn | 23.4 to 30.5 MB/sec | Not Listed | 4MB
18XP / 9LP | 7,200rpm* | 18.2 GB (10 / 20), 9.1 GB (5 / 10) | not recovered | 0.7ms | Not Listed | 4.17ms | 1.25 Gb/SqIn | 11.5 to 22.4 MB/sec | Not Listed | 1MB
18ES Series | 7,200rpm* | 18.3 GB (5 / 10), 9.1 GB (3 / 5) | 7ms | 0.8ms | 13ms | 4.17ms | 3.03 Gb/SqIn | 19.9 to 30.5 MB/sec | Not Listed | 2MB
9ZX Series | 10,000rpm | 9.1 GB (6 / 12) | 6.3ms | 0.7ms | Not Listed | 2.99ms | 1.14 Gb/SqIn | 16.2 to 25.5 MB/sec | Not Listed | 1MB

* Rotational speed inferred from the listed average latency.

Interpreting the Specifications

Seek Times

Seek time accounts for a large share of the total access time during DAW operation. That said, the size of each discrete access, as chosen by the software, has a dramatic impact on this ratio: the larger the reads, the smaller the percentage of time wasted in seeks. That’s why larger buffer sizes translate into higher track counts. For relatively large read sizes, the sequential sustained transfer rate can become more important to the final result than seek time, although seek time is still a big part of the total.

The speed and size of the cache on the controller is of little importance when streaming multimedia data. The cache operates on the assumption that the system will want to access data in chunks smaller than the cache size thus allowing those very impressive burst transfer speeds that the drive manufacturer likes to promise. The problem here is that streaming data will quickly overrun the cache and the interface transfer rate will quickly drop to the drive’s sustained throughput rate. If the cache is VERY large, like 2 or more megabytes, then the drive’s ability to “cache ahead” of the reads can be utilized, speeding up at least the first parts of the transfers. This is why some drives with very large caches but otherwise the same specs as their siblings are advertised as “multimedia” drives.
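To see how quickly streaming defeats the cache, consider a back-of-the-envelope sketch (the rates here are illustrative assumptions, not figures from any particular drive):

```python
# A 2 MB cache fed by the interface faster than the platters can drain it
# is overrun in a fraction of a second of continuous streaming.
cache_mb = 2.0
interface_rate_mb_s = 33.0   # burst rate filling the cache
sustained_mb_s = 20.0        # rate the platters can actually sustain
overrun_s = cache_mb / (interface_rate_mb_s - sustained_mb_s)
print(f"cache overrun after about {overrun_s:.2f} seconds")
```

After that point, the interface can move data no faster than the drive's sustained throughput, whatever the burst figures promise.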

Sustained Throughput

So now let’s take a look at that ever important sustained throughput speed and the factors that affect it.

First of all, let it be understood that when a drive is listed with a “disk to buffer” transfer rate, this is NOT the same as sustained throughput. It’s at least a good marker, but not the same.

Why is this, you ask?

Well first of all, most of the time this disk to buffer transfer rate includes a lot of bits that are not data. Every sector contains error correction code (ECC) bytes that help the interface unscramble slightly mangled data without having to resort to a re-read of the sector. While these ECC bytes are very good to have, they don’t contribute to the quantity of user data read. However, these extra bytes, usually 28 of them, are counted as part of the data transferred, and so make the numbers look higher. There are other bits on a disk that are counted in this number that don’t make up any part of the user data and they too distort these numbers.

Another point is that when only one value is given for this disk to buffer rate, it’s almost always going to be under the most ideal conditions. This isn’t realistic from a buyer’s point of view, and so these numbers need to be taken with the appropriate quantity of salt. Only when a manufacturer lists explicitly the “sustained data throughput” or “sustained transfer rate” can you take these numbers to heart. This value will also take into account head switching and track-to-track seeking as well as latency and controller overhead. As a rule, the sustained throughput will be on the order of 60% to 70% of any advertised disk-to-buffer or buffer-to-disk rate. Said another way, the media to buffer rates will be about 1.5 times the sustained rates.
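That 1.5:1 rule of thumb makes it easy to estimate a sustained figure when only the media-to-buffer rate is advertised. For instance, applying it to the Cheetah X15's advertised 48.9 MB/sec maximum:

```python
def sustained_estimate(buffer_rate_mb_s, ratio=1.5):
    """Rule of thumb from above: sustained throughput is roughly the
    advertised media-to-buffer rate divided by about 1.5."""
    return buffer_rate_mb_s / ratio

print(f"~{sustained_estimate(48.9):.1f} MB/sec sustained")  # ~32.6 MB/sec
```

Treat the result as an estimate only; the actual ratio runs from roughly 1.3 to 2.8 among the drives in the comparison chart that follows.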

Comparison of sustained throughput specifications for those drives that provide them

IDE Drive Make and Series | Rotational Speed | Avg. Read Seek Time | Buffer-to-Disk Xfer Rate | Sustained Transfer Rate | Ratio of the Two Rates
IBM 75GXP Series | 7200rpm | 8.5ms | 55.5 MB/sec | 37 MB/sec | 1.5
IBM 40GV Series | 5400rpm | 9.5ms | 46.5 MB/sec | 32 MB/sec | 1.45
IBM 37GP Series | 5400rpm | 9ms | 31 MB/sec | 10.7 to 19.9 MB/sec | 1.56
IBM 34GXP Series | 7200rpm | 9ms | 35.5 MB/sec | 13.8 to 22.9 MB/sec | 1.55
IBM 25GP Series | 5400rpm | 9ms | 24.45 MB/sec | 8.7 to 15.5 MB/sec | 1.58
IBM 22GXP Series | 7200rpm | 9ms | 27.93 MB/sec | 10.7 to 17.9 MB/sec | 1.56
IBM 16GP Series | 5400rpm | 9.5ms | 24 MB/sec | 6 to 12 MB/sec | 2
IBM 14GXP Series | 7200rpm | 9.5ms | 18.38 MB/sec | 3 to 8 MB/sec | 2.3
Seagate Barracuda | 7200rpm | 8.2ms | 45.4 MB/sec | 30 MB/sec avg. min. | 1.51
Seagate U Series 5 | 5400rpm | 8.9ms | 41.25 MB/sec | 14.5 MB/sec avg. min. | 2.84

SCSI Drive Make and Series | Rotational Speed | Avg. Read Seek Time | Buffer-to-Disk Xfer Rate | Sustained Transfer Rate | Ratio of the Two Rates
IBM 36LZX Series | 10000rpm | 4.9ms | 35 to 56.5 MB/sec | 21.7 to 36.1 MB/sec | 1.61 to 1.57
IBM 36LP Series | 7200rpm | 6.8ms | 31 to 50 MB/sec | 21.7 to 36.1 MB/sec | 1.43 to 1.39
IBM 36ZX-18LZX Series | 10000rpm | 4.9ms to 5.4ms | 23.3 to 44.3 MB/sec | 15.2 to 29.5 MB/sec | 1.53 to 1.5
Western Digital WDE 9100 Series | 7200rpm | 7.8ms | 21 MB/sec | 9.3 to 14.4 MB/sec | 1.46
Quantum Atlas V Series | 7200rpm | 6.3ms | 24.25 to 42.5 MB/sec | 17 to 29 MB/sec | 1.43 to 1.47
Quantum Atlas 10K II Series | 10000rpm | 4.7ms to 5.2ms | 23.33 to 59.75 MB/sec | 18 to 40 MB/sec | 1.30 to 1.49
In obtaining the ratio when only one number was given for one rate and two numbers were given for the other, the highest numbers were used in the calculation, due to the industry’s tendency to list only its highest rate values.

As you can see from the chart above, the sustained throughput is never as high as the disk to buffer transfer rate. However, if you know the sustained rate of one drive, you can compare its disk to buffer numbers with those of other drives and come to some intelligent conclusions.

There should be no question in anyone’s mind that the faster a drive’s rotation speed, the faster it can transfer data. It is safe to say that rotation speed is the single most important factor for segregating drives into classes. As mentioned above, higher rotation speeds position the target sector under the heads sooner, and the bits can be written or read faster.
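The rotational latency figures in the charts above all follow directly from this: on average, the target sector is half a revolution away, so average latency in milliseconds is 30,000 divided by the rpm.

```python
# Average rotational latency = half a revolution = 30,000 / rpm (in ms).
for rpm in (5400, 7200, 10000, 15000):
    print(f"{rpm:>6} rpm: {30000 / rpm:.2f} ms average latency")
```

This reproduces the 5.56 ms, 4.17 ms, and 2 ms figures in the spec charts (10,000-class drives list 2.99 ms because they actually spin slightly faster than 10,000 rpm).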

In the same way, high areal density translates directly into faster throughput. Simply put, the more data that can be packed into a track, the more data can be transferred without moving or switching the head, and the faster the bits will pass through the head. The disk-to-head transfer rate is the product of areal density and rotational speed: if the same revolution of the platter delivers more data, you are reading it faster. Large-capacity drives with few heads are going to be faster than drives with the same capacity but more heads, and the faster the rotation speed, the quicker the throughput. It’s that simple.

Understanding the requirements of your audio software is critical. Software that allows for large read-block sizes tends to benefit more from the highest-throughput drives than from the drives with the fastest seek times. On the other hand, software that uses smaller read-block sizes tends to perform best with the disks that have the lowest seek times. Generally speaking, the best overall audio performance comes from applications with large read blocks paired with very high-throughput disks.
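One way to see this trade-off is a crude streaming model in which every track needs one block read per buffer period, and each read pays for a seek, half a revolution of latency, and the transfer itself. All the numbers below are illustrative assumptions, not the behavior of any particular drive or program:

```python
# Crude DAW streaming model.  track_kb_s assumes 16-bit/44.1 kHz stereo
# (~176.4 KB/s per track); the drive figures are hypothetical.
def max_tracks(block_kb, seek_ms, rpm, sustained_mb_s, track_kb_s=176.4):
    # Cost of one read: seek + rotational latency + transfer time.
    read_cost_s = (seek_ms + 30000 / rpm) / 1000 + block_kb / 1024 / sustained_mb_s
    block_period_s = block_kb / track_kb_s   # how often each track needs a read
    return int(block_period_s / read_cost_s)

print(max_tracks(64, 9.0, 5400, 20.0))     # small blocks  -> 20 tracks
print(max_tracks(512, 9.0, 5400, 20.0))    # large blocks  -> 73 tracks
```

With large blocks the fixed seek-plus-latency cost is amortized over far more data, which is exactly why high sustained throughput pays off most for software that reads in big chunks.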

Noise and Heat

There is one point that must be made about the faster 7200 and 10,000+ rpm drives. They can be VERY noisy! This could be a consideration if your computer is in the same room where you like to track. Special computer enclosures are available to keep the system cool while muffling the noise, but be prepared to spend for them. If you attempt to build your own “quiet box” for your system, be sure to provide good air circulation. There’s nothing gained by a quiet computer that keeps shutting itself down right in the middle of that “golden” take! Luckily, newer 7200 rpm drives are much quieter than the earlier ones.

A consideration that may not occur to many folks is how cooling will affect drive performance. The faster the rotation speed of a drive, the hotter it will get. Also, as a rule, SCSI drives tend to run hot anyhow. This is why many SCSI drives recommend drive fans or some form of passive drive cooling. If a drive is allowed to overheat, this not only puts stress (perhaps fatal stress) on the electronics, but can cause the data storage media to fail, and there goes your big hit record! This isn’t to say that IDE drives are immune to this consideration. These drives get hot too, especially the 7200 rpm jobs.

Everyone knows that good cooling will keep the CPU happy, even if you’re not overclocking. Good cooling will keep your drive happy too. For starters, don’t mount the hard drive at the bottom of the internal bay unless the bottom of the bay is open to the inside (that is, it uses mounting flanges instead of a solid metal bottom). Keeping a gap under the drive will promote good air circulation. Add a second fan if you don’t have one already; a fan in the front of the chassis blowing in will complement the power supply fan in the back blowing out.

Another heat-related phenomenon is thermal recalibration, the drive controller’s response to detecting that the platters have expanded due to heat. The controller homes the heads, launches them to a specific spot, and calculates the error between where it thinks the heads should be and where they ended up. It then re-calculates the predicted locations of the tracks on the disk using this data. In doing so, the controller has a much better chance of hitting its target track on a seek, since it knows how far that track has moved out of position due to thermal expansion of the platter. Yes, this is a good idea and a cool thing to do – except during a recording or mixing session. Thermal recalibration takes a large fraction of a second to perform, and that’s more than enough time to stall out a streaming operation. A recal is NOT the kind of thing you want your drive to be doing in the middle of a recording session!

Keeping the drive cool and at a stable temperature is the best defense against this happening.

Interface Access Speed

At this point, not much else needs to be said about the interface.

To make that point, look at the charts again. Look at the data throughput figures. See anything interesting?

You should!

Controller Speed: Not the Bottleneck

If you are using an IDE interface set up for bus mastering and burst transfer rates of 66 MBytes/sec, then you will never be able to get any of the drives listed to come all that close to taxing the interface. In fact, even if you could force the 15,000 rpm SCSI drives to talk to an IDE interface at the same sustained data throughput rates they boast for SCSI, it still wouldn’t come all of the way to 66 MBytes/sec.

Likewise, a SCSI controller running under bus mastering and offering a burst rate of 80 MBytes/sec wouldn’t break into a sweat with the highest throughput drive. In fact, a 40 MB/sec controller wouldn’t be all that taxed using most of the drives listed. As shown from the charts above, the buffer transfer rate is in the neighborhood of 1.5 times higher than sustained transfer rate. Therefore, from this logic, the Cheetah’s 48.9 MB/sec internal transfer rate would translate to something like 32.6 MB/sec sustained throughput. That wouldn’t even fully saturate a UDMA 33 interface! In fact, only the Quantum Atlas 10K II and the Fujitsu MAH/MAJ series might saturate a 40 MB/sec SCSI interface with the IBM 75GXP series likely to swamp a UDMA33 interface.

As for those SCSI drives, how their advertised internal burst rates of around 60 MB/sec would relate to the REAL WORLD of day-to-day DAW activity is hard to tell. Except for IBM and some of the Quantum and Seagate drives, the sustained rates aren’t listed by most manufacturers because if they did list them, their drives wouldn’t be quite as impressive.

From the point of view of interface speed only, it’s a wash! Remember, in a DAW, you are interested in getting the highest throughput from one drive – period! You don’t care if you can get high burst throughput from six SCSI drives all at once, because that’s not how you’re going to be using it. Keep your eye on the ball: sustained throughput is what you need to be looking at.

Other Interface Tradeoffs

SCSI is a very complex interface, and as such, it has a complex command set. The CPU must do more work to set up a SCSI transaction than it must do to set up an IDE transaction. As a result, in the specific world of the DAW, that is, a single user system not engaged in multi-tasking and multi-drive I/O, SCSI can be a bit slower than IDE. Remember, SCSI shines the brightest when you have a bunch of devices operating simultaneously. Your average DAW isn’t one of these situations.

Another interface enhancement that holds no advantage for real time data streaming is cache. Disk controller cache is great for burst operations and random file access, but when data is being streamed constantly through the controller, there is no need for cache. Any cache would be overrun in the first seconds of streaming and never get a chance to refill again. For DAW use, ignore any advertised cache advantages.

Something important to keep in mind is the possible conflict that attaching devices of differing interface characteristics has on overall system performance. At one time, attaching a fast disk drive as a master and a slow CD ROM drive as a slave on the same IDE channel would pull that channel’s overall performance down to the level of the slower device. Early PIIX controllers put severe limitations on the configuration of IDE modes. There could be only two modes available. So, if both devices couldn’t operate at the fastest mode supported by the controller, either both would run at the speed of the slowest, or one of them would run at PIO mode 0 regardless of its capabilities. The first PIIX imposed this limitation even across channels. The PIIX3 removed the limitation across channels, but not across devices in the same channel.

The latest controllers with UDMA support (PIIX4 and PIIX4E) have removed these limits completely, and the mode can be configured independently for each device. This should not be an issue any longer. However, the software drivers may not all be taking advantage of the hardware improvement (we can’t confirm if the drivers shipping with Win98SE take advantage of this improvement in channel mode selection, but Win2K does for sure). Even though this may be a dead issue, it doesn’t hurt to avoid connecting a hard drive and a CD ROM on the same IDE channel unless the CD ROM does support UDMA (has a DMA checkbox in Device Manager like the hard drive) and it is enabled.

Lastly, if you want to use UDMA66 transfer rates and your motherboard doesn’t support it, you can buy a PCI UDMA66 controller board and go that route. Simply disable your second internal IDE port and use the freed IRQ for the new interface. However, as we will see in the next part of this article, even the average IDE drive running at 33 MB/sec will likely give you more raw tracks than you will ever need, so don’t hurt yourself trying to go for the fastest drive on earth just because it’s out there. Those tracking at 96 kHz rates will need to be a bit more mindful of drive speed and so should consider ONLY the UDMA66 or 80 MB/sec and 160 MB/sec SCSI options with the fastest, highest-throughput drives. Even so, if you look at the drives with the very highest throughputs, there are both IDE and SCSI drives that tie at the top rates.

Comparing Cost

It is no secret that SCSI is more expensive than IDE.

Not only are the drives more expensive, even for drives that compare equivalently with IDE, but you must also buy either a SCSI controller or a motherboard with built-in SCSI support and that will set you back several bucks compared to a motherboard without SCSI. At some point you need to justify the added cost if you want to go SCSI from scratch.

The best (and maybe only) reason to choose SCSI is this: the SCSI interface only requires one IRQ and address space in order to operate all devices connected to the SCSI controller. IDE requires an IRQ and address space for each port and each port is good for only two devices. Do the math: 1 IRQ for 15 drives on SCSI, 2 IRQs for 4 drives on IDE. If your system is a resource hog, that may be enough to send the argument over the top toward SCSI. Not only is SCSI a better steward of your system’s resources, but it isn’t likely that you will run out of space on a SCSI bus any time soon even with 4 drives, a CD ROM, a JAZ drive, a scanner, and a CDRW writer hanging off of your controller.

On the other hand, if you plan to keep your DAW free of extraneous devices and uses and just do audio, then an IDE solution would seem better: you will not be taxing system resources, will not need to go beyond the 4-drive limit, and will gain a bit of speed from using the less complex interface. On top of that, it’s cheaper!

After all, if you want to reach or beat the performance of the current fastest IDE disks, you need ultra-wide SCSI or better and one of the fastest SCSI disks. That translates into a rather large initial investment.

Below is a chart listing drives by size and how much some internet sites are charging for them. As you would expect, these prices can and do change daily. However, all of these prices were taken during one day and should reflect the relative prices of the drives. Also on this chart is the max disk to buffer throughput. As mentioned before, this isn’t the same as sustained throughput, but these figures were given for all drives in the chart and serve as a basis of comparison. The chart may surprise you. Some drives that have low throughput figures may be rather expensive. This may be because the buffer is larger or there may be more heads in this unit or some other reason. When looking over the drives in the list, keep in mind that a lot goes into pricing a drive. Many of these reasons have nothing to do with using them in a DAW.

Price Survey through LowerPrices.COM as of June 29, 2000

9 GB to 16 GB IDE

Western Digital WD102AA | 10.2 GB | 5400rpm | 29.1 MB/sec | $89 to $115
Western Digital WD102BA | 10.2 GB | 7200rpm | 38 MB/sec | $110
Maxtor 51024U2 | 10.2 GB | 7200rpm | 43.2 MB/sec | $107
Fujitsu MPD3130AT | 13 GB | 5400rpm | 26.1 MB/sec | $114
Western Digital WD136AA | 13.6 GB | 5400rpm | 29.1 MB/sec | $96 to $135
Maxtor 91531U3 | 15 GB | 5400rpm | 36.9 MB/sec | $113 to $129
Quantum Fireball Plus LM | 15 GB | 7200rpm | Not Listed | $136
Maxtor 51536U3 | 15.3 GB | 7200rpm | 43.2 MB/sec | $125
Western Digital WD153BA | 15.3 GB | 7200rpm | 38 MB/sec | $113 to $159

9 GB to 16 GB SCSI

Quantum Atlas V | 9.1 GB | 7200rpm | 42.5 MB/sec | $226 to $289
Seagate Barracuda | 9.1 GB | 7200rpm | 29.4 MB/sec | $255 to $280
Seagate Cheetah 18LP | 9.1 GB | 10,000rpm | 36.2 MB/sec | $345 to $381
Quantum Atlas 10K | 9.1 GB | 10,000rpm | 59.75 MB/sec | $345

17 GB to 25 GB IDE

Fujitsu MPD3173AT | 17 GB | 5400rpm | 26.1 MB/sec | $128 to $145
Fujitsu MPE3204AT | 20 GB | 5400rpm | 30.4 MB/sec | $135 to $155
Maxtor 92049U6 | 20 GB | 7200rpm | 33.7 MB/sec | $167
IBM DTLA307020 | 20 GB | 7200rpm | 55.5 MB/sec | $170 to $188
IBM DPTA372050 | 20 GB | 7200rpm | 35.5 MB/sec | $189
Quantum LCT10 | 20.4 GB | 5400rpm | 37.13 MB/sec | $124
Maxtor 92041U4 | 20.4 GB | 5400rpm | 36.9 MB/sec | $145
Maxtor 52049U4 | 20.4 GB | 7200rpm | 43.2 MB/sec | $179
Seagate Barracuda | 20.4 GB | 7200rpm | 45.5 MB/sec | $163

17 GB to 25 GB SCSI

Western Digital WDE18300-0048 | 18 GB | 7200rpm | 30 MB/sec | $385 to $480
Seagate Barracuda 18XL | 18.2 GB | 7200rpm | 29.4 MB/sec | $372 to $465
Quantum Atlas V | 18.2 GB | 7200rpm | 42.5 MB/sec | $395
Seagate Cheetah 18LP | 18.2 GB | 10,000rpm | 36.2 MB/sec | $460 to $630
Quantum Atlas 10K | 18.2 GB | 10,000rpm | 59.75 MB/sec | $555
Fujitsu MAG3182LP | 18.2 GB | 10,000rpm | 45 MB/sec | $475
IBM 18LZX | 18.2 GB | 10,000rpm | 44.3 MB/sec | $515 to $545

26 GB to 41 GB IDE

Maxtor 93073U6 | 30 GB | 5400rpm | 36.9 MB/sec | $198
Maxtor 53073U6 | 30 GB | 7200rpm | 43.2 MB/sec | $201 to $219
IBM DTLA307030 | 30 GB | 7200rpm | 37 MB/sec | $232 to $237
Quantum LCT10 | 30.6 GB | 5400rpm | 37.13 MB/sec | $175
Western Digital WD307AA | 30.7 GB | 5400rpm | 33.9 MB/sec | $158 to $199
Maxtor 54098U8 | 40.9 GB | 7200rpm | 43.2 MB/sec | $270 to $279

26 GB to 41 GB SCSI

IBM 36XP | 36.4 GB | 7200rpm | 28.9 MB/sec | $1,103
Seagate Barracuda | 36.4 GB | 7200rpm | 29.4 MB/sec | $795
Quantum Atlas V | 36.4 GB | 7200rpm | 42.5 MB/sec | $735 to $950
Quantum Atlas 10K | 36.4 GB | 10,000rpm | 59.75 MB/sec | $945
IBM 36ZX | 36.7 GB | 10,000rpm | 44.3 MB/sec | $790 to $859
Seagate Cheetah | 36.7 GB | 10,000rpm | 36.2 MB/sec | $895 to $971

42 GB and up IDE

Western Digital WD450AA | 45 GB | 5400rpm | 37.6 MB/sec | $237
IBM DTLA307045 | 45 GB | 7200rpm | 37 MB/sec | $298 to $395
IBM DTLA307060 | 60 GB | 7200rpm | 37 MB/sec | $647
Maxtor 96147U8 | 61 GB | 5400rpm | 40.8 MB/sec | $343
IBM DTLA307075 | 75 GB | 7200rpm | 37 MB/sec | $635

42 GB and up SCSI

Seagate Barracuda | 50.1 GB | 7200rpm | 29.4 MB/sec | $895 to $943
Seagate Cheetah | 73.4 GB | 10,000rpm | 36.2 MB/sec | $1,675

Disk-to-buffer transfer speeds were used in this chart because it is the only transfer rate that is reliably reported for all of the drives listed.
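A simple dollars-per-gigabyte calculation on a couple of entries from the survey above makes the gap concrete (using the low end of each quoted price range):

```python
# $/GB for one comparable IDE and one SCSI entry from the June 2000 survey.
survey = {
    "Western Digital WD307AA, 30.7 GB IDE": (30.7, 158.0),
    "Quantum Atlas V, 36.4 GB SCSI":       (36.4, 735.0),
}
for name, (gb, usd) in survey.items():
    print(f"{name}: ${usd / gb:.2f} per GB")
```

Roughly $5 per gigabyte against roughly $20: the SCSI drive here costs about four times as much per gigabyte before you even add the controller.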

SCSI Controller | Specifications | Price
Adaptec 2940 AU | Ultra | $165
Adaptec 2940 UW | Ultra Wide | $197 to $205
Adaptec 2940 U2W | Ultra 2 Wide | $205 to $255
Adaptec 19160 | Ultra 160 | $175
Adaptec 29160N PCI 32 | Ultra 160, 32-bit PCI | $259 to $270
Adaptec 29160 PCI 64 | Ultra 160 LVD, 64-bit PCI | $269 to $350
Adaptec 39160 PCI 64 | Dual Ultra Wide, 64-bit PCI | $359 to $400
Koutech SCSI-2 | SCSI 2 | $47
Koutech UW | Ultra Wide | $85
Koutech UW LVD | Ultra Wide LVD | $135

SCSI Motherboard | Specifications | Price
ASUS P2B-DS | 440BX Dual PII w/Ultra 2 Wide SCSI 80 MB/sec | $526
ASUS P2B-LS | 440BX PII ATX w/Ultra Wide SCSI 80 MB/sec | $369
ASUS P2B-S | 440BX PII ATX w/Ultra Wide SCSI 80 MB/sec | $329
Iwill DBL-100 | 440BX PII/PIII ATX w/AIC7890 SCSI chip | $429
Iwill BS-100 | 440BX PII/PIII ATX w/AIC7890 SCSI chip | $259
Supermicro SQ2R6 | Quad PIII w/Dual Ultra 160 SCSI | $2,785
Supermicro 370DL3 | Dual PIII, LC chip set w/Ultra 160 SCSI | $489
Supermicro Super S2DG2 | Dual PII/PIII Xeon 440GX w/Dual Ultra-2 SCSI | $479
Supermicro Super S2DGU | Dual PII/PIII Xeon 440GX w/Ultra-2 SCSI | $475
Supermicro Super D2DGR | Dual PII/PIII Xeon 440GX w/Ultra Wide SCSI | $419
SuperMicro P6DBS | Dual PII 440BX w/Dual Ultra Wide SCSI | $345
Supermicro P6SBU | PII 440BX w/Ultra-2 SCSI | $297
Supermicro P6SBS | PII 440BX w/Dual Ultra Wide SCSI | $269
Tyan S2257DUAN | Dual PII/PIII 840 chip set w/Dual Ultra-2 LVD | $649
Tyan S1952DLUR | Dual PII Xeon 440GX w/Ultra-2 SCSI | $599
Tyan S1837UANG-L | Dual PII/PIII 440GX w/Dual Ultra-2 SCSI | $469
Tyan S1837UANG-R | Dual PII/PIII 440GX w/Dual Ultra-2 SCSI | $585

A quick product and price survey from the internet and some randomly chosen on-line retailers

How you wish to allocate financial and system resources is a very personal decision, but there are clear trade-offs in going either way. On the one hand, you might have more flexibility with SCSI. On the other, IDE might be a bit faster and allow you to take the money you save and put it into a faster CPU, better motherboard and more RAM which will translate directly into better performance.

DAW Considerations

We just want to start out this section by quoting a few lines from some of the documents encountered while researching this article.

“For most people, IDE bus mastering is not worth the effort and problems, and I now do not bother with it on new installs of Windows 95. This may be somewhat controversial, but in my opinion it is overrated as a potential system improvement, given how much effort it requires.”
Charles M. Kozierok – “The PC Guide”

“Now SCSI, the traditional I/O powerhouse, is facing challenges from the recently released Ultra DMA interface standard. Yet, benchmark tests on Ultra DMA drives show limited performance improvements over previous generation ATA drives.”
Thomas W. Martin and Andy Scholl – “SCSI Remains the I/O Interface of Choice for Workstations: An analysis comparing SCSI and Ultra DMA”

“The conventional wisdom held that…SCSI disks would pull ahead thanks to features like command-tag queuing…Yet our most recent round of tests showed that SCSI and EIDE disks performed almost identically in a variety of desktop environments, even in multitasking scenarios and even with multiple hard disks installed – precisely the kind of situation in which we’d expect SCSI to shine.”
Nick Stam – PC Magazine On Line – PC Labs On Line

“We’ve done a fair amount of performance profiling lately and to our surprise, we found SCSI drives to be very “bursty” as compared to EIDE/ATA drives. This means that there are higher system latencies when the SCSI controller and it’s device driver grab the processor and bus. As a real-world example, with one particular system we have audio performance latencies of about 40 ms with an EIDE drive and with the same system, a SCSI drive required much larger audio buffers, about 160 ms latency, to avoid dropouts.”
Rob Ranck, Gadget Labs, Inc. – taken from the Cakewalk Audio news group.

These comments need to be put a bit into perspective.

The first comment by Mr. Kozierok may be a bit dated. After all, when Windows 95 came out, it didn’t support bus mastering at all. One needed to hunt down the driver and install it. Sometimes it wouldn’t work. Sometimes the drive or chip set simply wasn’t up to the challenge. After OSR2, Windows 95 installed the proper drivers by default most of the time, but sometimes Windows wouldn’t see the Intel chip set and so wouldn’t install them. NT4 was a bit of a problem too, and all of these service packs (what are they up to now – SP5, is it?) make keeping up with NT a bit of a zoo anyhow; there is no “DMA check box” in NT. You must run a utility to enable UDMA, and sometimes even that falls short, leading to direct hacking of the registry.

With Windows 98, it’s been reduced to simply going to the drive dialog under Device Manager and clicking on the DMA check box and rebooting. That’s about it. Windows 2000 is even simpler. Just install Windows and start tracking.

As for the comments from Mr. Martin and Mr. Scholl, well, first remember that this paper was written for the SCSI Trade Association, so how objective can you expect them to be? Second, they ran their tests using 10,000 RPM SCSI drives against IDE drives of unknown rotation speed, likely 5400 given the numbers they reported. It seems strangely conspicuous that they specifically listed the RPM rates of the SCSI drives in their tests but made no mention of the IDE drives. In a test of “one” SCSI drive against “one” IDE drive, both simply streaming data, there should be little difference if the rotational speeds and access figures are equivalent for both drives. If you want to compare just the interface, the drives must have the same guts in order to apply any meaning to the results. This was clearly not the case here.

This brings us to the comments by Mr. Stam with PC Magazine and the post by Mr. Ranck of Gadget Labs. After careful review of the data at hand, it seems clear that for the specific function of a DAW, which is unique as compared to a server on one end and a desktop workstation on the other end, there is little advantage to using SCSI, and perhaps even a bit of a disadvantage. This goes against conventional wisdom to such an extent that if it were not for the facts we have collected, we too might have thought differently.

DAW Disk Benchmark

There may be a lot of disk performance benchmark tests out there, but the only one that seems to pinpoint drive performance as it relates to the DAW is DSKBENCH, written by this article’s co-author, José Catena. It is available for downloading here from ProRec. Unzip the package, read the docs and then move the EXE file to your WINDOWS directory or to some other directory that is in your DOS PATH.

To use it, open a DOS window and at the prompt, log on to the drive you wish to test. At the next prompt, type DSKBENCH and the program will start to run. If you wish to make a record of the resulting analysis, add the “>” redirect command at the end and send the output to a text file of your choice, such as this:

DSKBENCH > C:\TEMP\RESULTS.TXT
You don’t need to use upper case type. It is used here to make it easier to read. In the above example, the program will run without printing anything to your display, but will instead write everything to a text file called RESULTS.TXT that will be created in your C:\TEMP folder. You can send it to any folder you wish, of course.

Here is an example of what you will get by running this program:

DskBench 2.11
(c) 1998, SESA, J.M.Catena
Timer Check = 990 (should be near 1000)
CPU Check = 50.40 % (should be near 50.00 %)
CPU index (relative to Pro 200 MHz) = 0.900380
Open = 4 ms
Write = 43105 ms, 5.94 MB/s, CPU = 7.88 %
Flush = 283 ms
Rewin = 0 ms
Read = 39995 ms, 6.40 MB/s, CPU = 5.31 %
Close = 193 ms
BlockSize = 131072, MB/s = 4.45, Tracks = 52.94, CPU = 6.06 %
BlockSize = 65536, MB/s = 3.18, Tracks = 37.85, CPU = 3.74 %
BlockSize = 32768, MB/s = 2.00, Tracks = 23.73, CPU = 3.67 %
BlockSize = 16384, MB/s = 1.10, Tracks = 13.10, CPU = 3.47 %
BlockSize = 8192, MB/s = 0.58, Tracks = 6.87, CPU = 3.40 %
BlockSize = 4096, MB/s = 0.30, Tracks = 3.53, CPU = 3.51 %

The first part of the readout is just to make sure there isn’t something drastically wrong with your CPU timing system. As long as the numbers are close, don’t worry about them. The CPU index is a comparison of the relative processing power of your CPU to that of a reference standard – in this case, a Pentium Pro 200 MHz. The above readings were taken on a Pentium MMX 166 MHz and so reports in at only 90% of the Pro 200’s power.

The disk performance numbers start with a simple sequential write to, and then read from, an uncached 256 MB file. First is the file open time, in milliseconds. It should be at or near zero. The file write time in milliseconds yields a true sustained transfer rate. CPU usage shows how much of the processor’s time had to be spent setting up the transfers. Under bus mastering, this figure should be quite low. With PIO transfers, the CPU percentage will usually be in the 90% range.

File flush time in milliseconds should be 0, as the writes were uncached. A non-zero figure shows that the controller is ignoring the “un-cached file” flag and is using the controller cache anyway. The rewind time should be zero as well, as this operation is simply a software pointer adjustment. The file read time in milliseconds is also a true sustained transfer rate, and CPU usage is again the amount of processor time occupied with performing the read.

This value is very useful in comparing overall disk performance objectively. The CPU usage value is also very important, particularly for audio applications. With low CPU usage percentage values, we have more CPU cycles free for mixing, real time effects and other processing functions. Bus master drivers usually offer very low CPU usage values (typically below 2%). PIO mode wastes much more CPU bandwidth. File close time in milliseconds rounds out the sequential access part of the test.
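DSKBENCH itself is a DOS program, but the idea behind its sequential test is easy to sketch. The Python fragment below is an illustrative approximation only: it cannot bypass the OS file cache the way the real tool’s uncached I/O does, and the scratch file name and 16 MB size are arbitrary choices for the example.

```python
import os
import time

# Sketch of a sequential write/read throughput test in the spirit of
# DSKBENCH. Illustrative only: the OS file cache will inflate these
# numbers compared to the real tool's uncached 256 MB test.
FILE_SIZE = 16 * 1024 * 1024       # 16 MB test file (DSKBENCH uses 256 MB)
BLOCK = 128 * 1024                 # 128 KB per transfer
PATH = "dskbench_sketch.tmp"       # hypothetical scratch file name

buf = bytes(BLOCK)

t0 = time.perf_counter()
with open(PATH, "wb", buffering=0) as f:   # unbuffered at the Python level
    for _ in range(FILE_SIZE // BLOCK):
        f.write(buf)
    os.fsync(f.fileno())                   # force the data out to the disk
write_s = time.perf_counter() - t0

t0 = time.perf_counter()
with open(PATH, "rb", buffering=0) as f:
    while f.read(BLOCK):
        pass
read_s = time.perf_counter() - t0

os.remove(PATH)
mb = FILE_SIZE / (1024 * 1024)
print(f"Write: {mb / write_s:.2f} MB/s  Read: {mb / read_s:.2f} MB/s")
```

Real numbers from a sketch like this will look optimistic; DSKBENCH’s flush-time check (described above) exists precisely to catch caching of this kind.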

The lines that follow contain measurements performed by accessing 8 files in rotating alternation, using a different block size for each pass. This shows how larger read sizes dramatically improve performance. The files are each 16 MB long. BlockSize is the size of the block read from each file. The test is run at 128K (131072 bytes), 64K (65536 bytes) and so on down to small 4K blocks. For each block size we get a total transfer rate in MBytes/sec that includes all overhead for doing an 8-file alternating read operation. The track number is the total number of 44.1 KHz, 16 bit mono tracks that could be transferred at that rate. This assumes a DAW with no real time effects and software written to make effective use of the system’s performance. Finally, we have CPU usage figures to show the processor overhead during the file transfers for that block size.
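The track figure falls out of simple arithmetic. The constant below is our inference from the sample readout (one 16-bit, 44.1 kHz mono track consumes 88,200 bytes per second), not documented DSKBENCH internals:

```python
# One 16-bit, 44.1 kHz mono track consumes 44100 * 2 = 88,200 bytes/sec,
# so the reported track count appears to be throughput / 88,200.
# (Inferred from the sample readout; not official DSKBENCH documentation.)
BYTES_PER_TRACK_SEC = 44100 * 2    # 88,200 bytes/sec per mono track

def track_count(mb_per_sec):
    """Tracks sustainable at a given throughput (1 MB = 2**20 bytes)."""
    return mb_per_sec * 1024 * 1024 / BYTES_PER_TRACK_SEC

# The 128K line of the sample readout: 4.45 MB/s
print(f"{track_count(4.45):.1f}")  # prints 52.9, close to the reported 52.94
```

The small gap between 52.9 and the reported 52.94 is just rounding in the MB/s figure itself.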

We have asked some of our friends to report their DSKBENCH numbers for various drives and motherboards. Below is a chart of the results. You will note that we have recorded results only for the transfer rates and CPU usage figures for the 256 MB sequential write and read, and for the block tests only down to the 32K block size, as the other sizes are of marginal use.

Motherboard (chipset) | CPU / OS | Drive / Controller | Mfg. spec, buffer to disk (MB/s) | Write MB/s (CPU) | Read MB/s (CPU) | 128K MB/s (CPU) | 64K MB/s (CPU) | 32K MB/s (CPU)

A dash marks a figure that was not reported.

ABIT BH6 (BX) | Celeron 533 overclocked 800 MHz, W98 (ProRec RYO2K) | Maxtor 52048U4 20 GB on UDMA/33 | 43.2 | 20.63 (2.45%) | 28.77 (3.21%) | 12.58 (2.24%) | 6.47 (0.99%) | 3.29 (0.96%)
ABIT BH6 (BX) | Celeron 300A overclocked 450 MHz, W98 | Maxtor 91020D6 10 gig | 18.6 | 11.87 (1.74%) | 12.06 (1.44%) | 7.28 (1.96%) | 4.30 (0.96%) | 2.20 (0.83%)
ABIT BH6 (BX) | Celeron 300A overclocked 450 MHz, W98 | Maxtor Diamond Plus 2500 10 gig | - | 12.42 (2.55%) | 12.51 (2.26%) | 6.24 (2.25%) | 4.76 (1.06%) | 2.60 (1.00%)
VIA | P 166, W98SE | Quantum Fireball 2 gig | - | 7.23 (96.16%) | 6.70 (96.13%) | 4.15 (64.22%) | 2.99 (46.44%) | 1.79 (31.71%)
Intel 440BX (BX) | PIII 450 MHz, W98 | Seagate Medallist UDMA2 7200 rpm 9 gig | - | 12.47 (2.74%) | 12.76 (2.59%) | 4.92 (2.05%) | 5.01 (1.43%) | 3.05 (1.51%)
Intel 440BX (BX) | PIII 450 MHz, W98 | Fujitsu MPB3085 UDMA2 5400 rpm 8.5 gig | - | 10.61 (1.76%) | 11.86 (1.30%) | 6.35 (0.55%) | 3.78 (0.77%) | 2.99 (0.40%)
Intel 440BX (BX) | PIII 450 MHz, W98 | Fujitsu MPB3064 UDMA2 5400 rpm 6.5 gig | 10 | 9.36 (2.77%) | 10.65 (2.35%) | 5.33 (2.23%) | 4.24 (1.44%) | 2.74 (1.44%)
Intel 440BX (BX) | PIII 450 MHz, W98 | Hewlett Packard HP3725S 5400 rpm SCSI-2, Adaptec 2940AU controller | 5.7 | 4.93 (1.96%) | 5.04 (2.24%) | 3.29 (1.98%) | 2.66 (1.02%) | 1.56 (1.05%)
QDI P51430TX (TX) | P 166 MMX, W98 | Samsung 3.4 gig | - | 5.94 (7.88%) | 6.40 (5.31%) | 4.45 (6.06%) | 3.18 (3.74%) | 2.00 (3.67%)
QDI P51430TX (TX) | P 166 MMX, W98 | Samsung 2.1 gig | - | 5.35 (6.22%) | 6.65 (4.90%) | 4.67 (5.45%) | 3.43 (3.53%) | 2.12 (3.32%)
ABIT BH6 (BX) | Celeron 300A overclocked 450 MHz, W98 | Western Digital AC313000 13 gig | 10.99 to 21.43 | 12.31 (7.25%) | 12.73 (2.68%) | 7.05 (3.53%) | 2.43 (0.76%) | 1.27 (0.72%)
Asus Super-7 (VIA) | AMD K6/2-400 3D, W98 | Quantum Fireball ST6 6.4 gig | 16 | 3.44 (12.48%) | 8.40 (36.84%) | 3.56 (17.87%) | 4.39 (7.82%) | 2.39 (4.42%)
Asus Super-7 (VIA) | AMD K6/2-400 3D, W98 | IBM Deskstar 5 DHEA-36480 6.4 gig | 8.9 to 14.94 (min sustained 5.4) | 0.33 (3.07%) | 9.00 (39.92%) | 4.46 (22.26%) | 3.58 (6.36%) | 2.01 (3.67%)
Toshiba Tecra 8000 laptop | PII 266, W95B | Unknown | - | 5.77 (1.04%) | 6.37 (0.35%) | 3.95 (2.56%) | 2.80 (0.32%) | 1.73 (1.98%)
Asus | Celeron 300A overclocked 450 MHz | Maxtor 2800 series 8.4 gig | - | 10.72 (2.07%) | 12.07 (1.73%) | 5.47 (1.75%) | 4.30 (0.92%) | 2.20 (0.86%)
Asus | Celeron 300A overclocked 450 MHz | Maxtor 2800 series 10 gig | - | 11.84 (2.05%) | 12.19 (3.23%) | 6.32 (1.81%) | 5.09 (1.04%) | 2.69 (0.94%)
Asus | Celeron 300A overclocked 450 MHz | Maxtor 4120 10 gig | - | 12.57 (2.41%) | 14.82 (2.47%) | 6.00 (1.82%) | 5.76 (1.17%) | 3.00 (1.07%)
Asus | Celeron 300A overclocked 450 MHz | Maxtor 6800 series 20 gig | - | 13.48 (2.91%) | 17.62 (2.32%) | 7.11 (2.23%) | 6.18 (1.40%) | 4.36 (1.46%)
Asus TXP4 (TX) | PII 200 MMX, W98SE | Seagate 6 gig | - | 7.78 (14.62%) | 8.68 (6.13%) | 5.33 (8.14%) | 3.17 (8.74%) | 2.36 (4.23%)
Abit BX6r2 (BX) | Celeron 366 at 550 MHz, W98SE | Seagate Medallist Pro 5400 RPM UW SCSI 9.1 gig, Diamond Fireport 40 UW SCSI controller | - | 13.39 (1.86%) | 13.78 (0.01%) | 7.03 (~0%) | 4.86 (~0%) | 3.22 (~0%)
Abit BX6r2 (BX) | Celeron 366 at 550 MHz, W98SE | Seagate Barracuda 7200 RPM UW SCSI 9.1 gig, Diamond Fireport 40 UW SCSI controller | - | 13.06 (2.62%) | 13.72 (2.14%) | 7.18 (2.10%) | 4.95 (1.14%) | 2.58 (0.91%)
Abit BX6r2 (BX) | Celeron 366 at 550 MHz, W98SE | Maxtor DiamondMax Plus (5120) ATA33 20 gig, 1 MB cache | - | 17.45 (1.11%) | 20.08 (0.63%) | 9.25 (3.3%) | 5.25 (2.42%) | 2.67 (~0%)
Abit BX6r2 (BX) | Celeron 366 at 550 MHz, NT4-SP5 | Seagate Medallist Pro 5400 RPM UW SCSI 9.1 gig, Diamond Fireport 40 UW SCSI controller | - | 13.10 (2.95%) | 13.35 (1.41%) | 6.07 (1.05%) | 4.57 (0.94%) | 2.91 (0.94%)
Abit BX6r2 (BX) | Celeron 366 at 550 MHz, NT4-SP5 | Seagate Barracuda 7200 RPM UW SCSI 9.1 gig, Diamond Fireport 40 UW SCSI controller | - | 13.12 (0.97%) | 13.56 (1.03%) | 7.88 (0.62%) | 5.46 (0.21%) | 3.53 (0.66%)
Abit BX6r2 (BX) | Celeron 366 at 550 MHz, NT4-SP5 | Maxtor DiamondMax Plus (5120) ATA33 20 gig, 1 MB cache | - | 17.24 (1.52%) | 20.08 (1.82%) | 7.23 (0.94%) | 4.44 (0.64%) | 3.52 (0.75%)
FIC SD-11 (VIA) | Athlon 600 | Maxtor 6800 series 7200 RPM 17 gig | - | 16.53 (3.60%) | 23.17 (4.00%) | 9.09 (2.68%) | 7.01 (1.05%) | 4.97 (1.91%)
Abit BX6r2 (BX) | PII 450 | Quantum Atlas 10k Ultra160 LVD SCSI 18 gig, Adaptec 2940UW (non-LVD) controller | - | (no figures reported) | | | |
Abit BH6 (BX) | Celeron 300A at 450 MHz, W98 | Western Digital WD205BA 7200 rpm 20.5 gig | - | 20.81 (6.78%) | 23.10 (3.82%) | 9.57 (5.04%) | 7.46 (1.72%) | 7.38 (2.41%)

Looking at DSKBENCH results, you can see how the total throughput decreases dramatically as the block size gets smaller. That’s because more and more time is wasted in seeks. Note that even at 128 K blocks, the throughput is much lower than for sequential access; the difference is the added seek time.

Notice the entry for a DeskStar 5 drive on an Asus Super-7 motherboard. This is a rather poor performance spec for the sustained write throughput. The system is badly configured. That disk should write at about 10 MBytes/sec instead of the 0.33 that is listed. This needs to be looked at! Also note that as a rule, reports for VIA chipsets show higher CPU usage than PIIX chipsets. This is shown here, too, although there are few entries for VIA among this data, and it’s a good bet that in one case bus mastering isn’t engaged on this system.

Many folks have become skeptical of bus mastering results because of the way Intel has cheated on the reporting of performance using its bus mastering drivers. Standard benchmark programs that attempt to measure disk performance under “business” or “server” conditions were obtaining ridiculously overinflated figures due to Intel’s behind-the-back use of memory for disk cache, even after the software instructed the driver to disable all caching. With other bus mastering drivers following the rules and Intel’s not, it appeared that Intel was the only company that knew how to write good drivers. This has been discovered and taken into account in newer benchmark programs, including DSKBENCH, which uses the “flush” time to detect any underhanded activity – not that using cache would improve the results of streaming tests like the kind DSKBENCH performs anyway.

CPU Power Bottleneck

Let’s be realistic about this.

Let’s say you would like to get 50 tracks of 16 bit, 44.1 KHz audio to play and/or record at the same time on your system. Will you ever use 50 tracks of raw audio voicing at the same time in one project? Now keep in mind, this isn’t to say that you will not have many projects with 50 or more tracks in use, but how many total tracks of audio will be voicing AT THE SAME TIME? How many will be archived takes? How many will be short clips that will always fall in between other short clips? When was the last time you even considered using 50 tracks of solid audio? When was the last time you had to close-mic the Mormon Tabernacle Choir?

Now granted, as a good friend, David Hallock, mentioned in response to this question, sometimes he likes to double or triple up on a few tracks, move them slightly out of phase and mix them to fatten the sound, but never to such a point that it would add up to 50 simultaneously voicing tracks at any given time (Dave also notes that he usually will mix those tracks to a final stereo sub group mix anyway).

However, if you go to 24 bits per track, that might limit you to 32 tracks on the same system, and at 96 KHz on top of it, 16 tracks. Now we’re talking some limitations. For those of us that wish to use 24/96 recording, there are some tough choices to be made. However, it stands to reason that at those requirement levels, an improvement in drives will amount to an even smaller gain in track count because each track is so incredibly demanding that you must have nothing short of a miracle to gain another track through tweaking. Again, is there any advantage of one drive over another once you get to a certain point?
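To put those track counts in context, here is a back-of-the-envelope sketch of the sustained disk bandwidth each format demands (uncompressed mono PCM; the 50/32/16 track counts come from the discussion above):

```python
# Bytes per second for one uncompressed mono track, and the sustained
# disk bandwidth a given track count requires at each format.
def track_rate(sample_rate_hz, bits):
    return sample_rate_hz * bits // 8          # bytes per second, mono

MB = 1024 * 1024
for label, rate, bits, tracks in [("16/44.1", 44100, 16, 50),
                                  ("24/44.1", 44100, 24, 32),
                                  ("24/96",   96000, 24, 16)]:
    need = tracks * track_rate(rate, bits) / MB
    print(f"{label}: {tracks} tracks need about {need:.1f} MB/s sustained")
```

All three cases land near the same 4 to 4.5 MB/s budget, which is why moving to higher resolutions cuts the track count roughly in proportion to the extra bytes per sample.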

For this kind of production, you need the absolute fastest rotational speeds and highest sustained throughput if you want the same kind of track count that the average user is getting at 16/44.1 or 16/48. At the present time, only drives like the Quantum Atlas 10K or IBM Ultrastar SCSI with 10,000 rpm rotation speeds, the new Seagate Cheetah SCSI spinning at 15,000 rpm and the Fujitsu MPF-AH series IDE will hit that ceiling.

User experience indicates that one can obtain over 24 solid tracks at 24/96 in Cakewalk 9.03 using a 7200 RPM Maxtor on a UDMA33 bus. You must decide just how much disk power you really need in order to get your work done and set your requirements accordingly.

So why not recommend these drives for EVERYBODY?

With the notable exception of a very few situations, almost all digital audio production utilizing streaming of multiple tracks in real time will use some real time (TDM, Direct X, or VST) effects. As a result, the number of tracks in a given project will be limited more by the amount of CPU power available for real time effects processing and track mixing than by the drive’s ability to stream the data.

If you want proof, simply load your most demanding, highest track count project into the streaming application of your choice and hit PLAY. Now look at the drive LED. Is it lit 100% of the time? Not likely! The drive is not being taxed to the point where it must maintain a constant flow of data. There is a resting period, short as it may be, between the bursts required to keep the I/O buffers full.

Now, in all fairness, without a fast drive and some minimal optimization, there will be a reduction in track count due to the drive even with effects in use. This goes without saying, even though we’ve spent a lot of room here saying just that! However, setting aside the case of a user who ignores common sense and configures his DAW as carelessly as possible, we’re more limited by the CPU than by the drive in every case.

A drive that dumps data like a bat out of hell will not improve the situation and only cost you money that would be better spent on a faster CPU or more RAM.



When it comes to raw track count per dollar, the clear advantage is IDE, which can offer very high track counts at rates that are actually cheaper than their analog alternatives. For example, an under-$200 20 GB IDE disk can offer over 24 tracks of 24 bit / 96 KHz audio, sufficient to record multitrack audio for a full-length CD project. By comparison, 24-track analog tape typically costs roughly $100 per 15 minutes of recording time, or $400 to record enough tracks for a full-length CD. Although IDE allows for only 4 devices, this limitation should not pose a constraint for dedicated DAW systems, allowing a system (OS) disk, two audio disks, and a UDMA-compatible CDRW.

When it comes to flexibility, the clear advantage is SCSI. With its 15-device limit and ability to work with all kinds of devices besides hard disks, SCSI is the hands-down winner in terms of flexibility. If you have a need for lots of devices on your dedicated DAW, or need to use your computer for tasks beyond audio, SCSI may be your only alternative.

Note, however, that to achieve real performance benefits from SCSI, you need to invest in fairly expensive disks. If you use your computer for tasks beyond audio, some DAW enthusiasts would suggest spending the additional money on a dedicated IDE-based DAW rather than a SCSI interface and expensive SCSI disks – a solution that also offers the benefits of having a minimal amount of software and hardware on your audio computer. Only you can decide what solution best meets your needs.


All of the analysis of this article comes down to two major points.

First of all, don’t be swayed by myth or biased opinions based on factors that do not apply to your situation. In the world of the DAW, there is enough evidence to say that either SCSI or IDE can offer the full level of drive performance necessary to take your system to its full potential. Although there are places for SCSI where IDE dare not go, one of them is NOT the digital audio workstation. Here, IDE can outperform SCSI just as often as SCSI can outperform IDE, in both cases usually not significantly and not even consistently. The format is not the issue. The drive is the issue. You’re looking for high sustained throughput as made possible by fast rotation speed, quick average seek times and high areal density. Where the choice of interface comes in is the way you intend to use the system when in non-streaming mode.

The specifications regarding the interface data transfer speeds are, for all intents and purposes, irrelevant in DAW operations. This isn’t to say that it isn’t a good idea to have a fast interface, just a reality that the specification of concern is the sustained throughput of the drive, not the burst speed of the interface. IDE drives that use a UDMA66 interface (66 MB/sec) or even the new UDMA100 interface (100 MB/sec) being offered by Maxtor and Seagate cannot hope to sustain that rate. SCSI drives and host adapters designed to burst at 160 MB/sec or even higher are likewise limited by the drive’s sustained throughput.

With very few exceptions, any drive, IDE or SCSI can’t begin to saturate the interface it’s connected to during media streaming regardless of the advantages of these high interface speeds for “normal” desktop and server application demands. Even the new IBM packetized SCSI interface, although a minor wonder in server applications, can’t help much in streaming media because the advancement was made in the device-to-device transfer setup protocols, not in the way the data itself is moved. The simple fact is, all other factors being equal, the type of interface the drive uses is not a very important factor in a DAW based strictly on the drive’s ability to stream data.
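The point in numbers: taking 20 MB/s as a sustained rate representative of the fastest drives in the chart above (a rough illustration, not a measurement), even that drive keeps only a fraction of each interface's rated burst bandwidth busy.

```python
# Fraction of each interface's rated burst bandwidth that a drive
# sustaining 20 MB/s (about the best figure in the chart above)
# can actually keep busy during streaming.
sustained = 20.0                                   # MB/s, illustrative
for name, burst in [("UDMA33", 33.0), ("UDMA66", 66.0),
                    ("UDMA100", 100.0), ("Ultra160 SCSI", 160.0)]:
    print(f"{name}: {sustained / burst:.0%} of rated burst speed")
```

Even the quickest drive here leaves most of a UDMA66 or Ultra160 bus idle while streaming, which is exactly why the interface's burst rating matters so little for a DAW.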

A DAW system with two or maybe three disk drives and a UDMA compliant CD ROM will likely lead to the IDE choice. These drives, well chosen, will offer all of the performance necessary to satisfy the software accessing them and the CPU running them. In fact, it is almost ridiculous to even consider the idea of ever using all of the tracks these drives can make available, especially in the face of the limitation placed on that track count by mixing overhead and real time effects. On the other hand, if you want to use more total drives than four, are running additional hardware that needs the extra IRQ that the second IDE channel eats up, or plan on using the system for server-level activities when not acting as a DAW, then you must expend the extra cash and go SCSI. However, this is not because the interface is better for operating a DAW, but because the demands you are placing on your system go beyond that required of a DAW.

Another consideration is the case of wanting to process large projects at 24 bit depth and 96KHz sample rates. If you need a high track count (>24 tracks), this might warrant a 10K or 15K rpm drive for its higher throughput. This will limit you to SCSI until future IDE drives pick up the technology. That said, keep your eye on the Maxtor multi-processor drives when they come out and on other advances in IDE drive technology like those from Fujitsu. It could end up making you want to dump SCSI and go IDE for the fastest drives instead of the other way around. Obviously, if you already have an investment in SCSI, you can stay with SCSI, or you can add IDE drives to a SCSI-only system for a lot less money. Providing your system doesn’t fall into the categories mentioned above, you will have the same level of additional performance either way.

The second point to be made here is that a lot of a DAW system’s performance hinges on the way the user has configured it. A well tweaked system of modest means can outperform a cutting edge system that has been poorly set up. Use bus mastering – period! It doesn’t matter what format you select, just make sure your IDE bus mastering is active or your SCSI host adapter is of the bus mastering persuasion. The savings in CPU time will add more “real world” tracks to your DAW than any changes you could make to the drives themselves.

Buy more RAM! You want to make a big performance difference? Buy more RAM. After 256 MBytes, you can start to relax a bit and think about a faster CPU and/or better motherboard. Remember, the disk drive isn’t the only hardware in a DAW that directly affects your performance. By the same token, save up and buy good software. Nothing sends a DAW out a third floor window faster than crummy software.

Use good judgement when selecting the other hardware for your DAW. Don’t get dazzled by top-of-the-line video cards, as they can significantly degrade DAW system performance. Also, keep up with the latest drivers for your sound card. It doesn’t hurt to cruise the manufacturer’s web site once each month or so just to see what might be new.

Make the tweaks to your virtual memory, your “System Type” settings and your system cache. Pay attention to your drive partition setups and your FAT cluster size, also mentioned in Part 1. Keep your DAW uncluttered with programs or Windows features that cut into your CPU time from the background. These include anti-virus autoscan settings, autoinsert notification for CD ROMs, screen savers, power monitors and so on. If you use the DAW for other computing tasks, check thoroughly for conflicts after each new hardware or software install. When confronted with very demanding projects, decline network connection at boot-up to save a few CPU cycles there too.

Participate in, or at least monitor, news groups that focus on DAW issues. There’s a lot of information there from fellow users that have hands-on experience to share. Just be sure to take anything you see with the necessary grain of salt. Much myth, bias and misinformation can be found anywhere “the average user” is allowed to post without peer scrutiny, so select the good news groups if you want advice worth following.

Finally, keep your eye on the ball. Remember what you want to do with this DAW – make music. This is the absolute bottom line, and at some point it isn’t so much the equipment you have on hand as it is the way you use what you do have. Creativity is a human process, not a function of gigabytes or megahertz. Once you have a level of performance you can be creative with, don’t get distracted obsessing over more and better. Allow your system to evolve through use, not because you feel there’s no point going on unless you have the best and fastest. Leave such folly for network geeks being paid to obsess.

As the late Frank Zappa said, “Shut up and play your guitar.” Good advice to give yourself when your eyes get bigger than your needs.



Thanks to the following people for submitting their DSKBENCH results:

John Bartelt
Dean Brewer
Keni Fink
David Hallock
David Light
Kraig Olmstead
Jim Roseberry
Jan van Schalkwyk 
John Vernon

Article Related Links:

SCSI Trade Association Web Site:

Adaptec Inc. Web Site:

Webopedia, Online Dictionary and Search Engine for Computer and Internet technology:

Microsoft Web Site:

Intel Corp Web Site:

Quantum Inc. Web Site:

Samsung Web Site:

IBM Disk Drives Web Site:

Maxtor Web Site:

Fujitsu Web Site:

Western Digital Web Site:

Seagate Web Site:

Cakewalk Music Software web site:

Gadget Labs web site:

On-line price comparisons from:


ANSI – Stands for American National Standards Institute, the body responsible for establishing industry standards in, among other fields, computer electronics.
ATA – Stands for AT Attachment and describes the standard for attaching devices to any AT-style PC.
Block Mode – A feature that allows multiple read or write commands to be processed in batches of up to 32 sectors at a time instead of sector by sector.
Bus – A group of data, control and/or addressing lines that extend from device to device and act as a conduit for signals. Often the bus is shared by several devices, and a set of signals or “protocol” is implemented to arbitrate who may send and receive at any given time.
CRC – Stands for Cyclic Redundancy Checking. At the sender, a block of data is subjected to a mathematical algorithm that produces a result value which is sent with the data block. The receiver performs the same calculation. If the results agree, the data is presumed to be error free.
DAW – Stands for Digital Audio Workstation. A computer whose function is devoted primarily to digital audio recording and production.
DMA – Stands for Direct Memory Access. A system by which peripherals can transfer data to and from system RAM without the intervention of the CPU.
FAT – Stands for File Allocation Table, a table the system builds on any disk to keep track of which sectors are bad, which are in use, by what file and in what sequence. Damage to the FAT is catastrophic! DOS/Windows keeps two copies of the FAT on any disk just for safety.
FAT 16 – DOS and Windows through version 95A only supported FAT 16. This system stored the FAT in a 16 bit map. Large drives were split into very large clusters so they could be mapped. Drive partitions were limited to just over 2 gig as a result.
FAT 32 – Starting with Windows 95B, a 32 bit mapping system was applied to the FAT so cluster sizes could be kept smaller and very large drives could be mapped. The result is more efficient use of disk space, because any entry in the directory is allocated space in clusters, not sectors.
Handshaking – A data transfer protocol that requires both the talker and the listener to exchange signals verifying that a data packet has been received error free before the next packet is sent. If needed, the sender will be asked to repeat a packet that arrives corrupted.
IDE – Stands for Integrated Drive Electronics. Common name given to the ATA disk drive format popularly used in PCs today. Usually connects directly to the motherboard. EIDE is Enhanced IDE; drives capable of bus mastering are EIDE drives.
I/O – Stands for Input/Output. A term for situations where data is transferred to and/or from devices or a system.
ISA – Stands for Industry Standard Architecture. Originally an 8 bit bus in the first PCs, it was quickly upgraded to 16 bits in the IBM AT. Still in use on modern motherboards, it is limited to slower peripherals due to its inherently low transfer speeds.
LBA – Stands for Logical Block Addressing. Allows the BIOS to remap a drive’s geometry so that drives larger than 504 MB can be configured. Requires BIOS support.
PCI – Stands for Peripheral Component Interconnect and refers to a bus in modern PCs that allows high speed connections to plug-in cards and IDE drives. The AGP connector now being used by many graphics cards is, in fact, an extension of the PCI bus.
PIO – Stands for Programmed Input/Output. If your drives aren’t set for DMA, then you can bet they are set for PIO mode 4.
RAID – Stands for “Redundant Array of Inexpensive Disks”. A system that clusters a group of disk drives together and stores information redundantly across them, so that the failure of any one drive does not cause down time.
Retries – If a device is attempting to communicate over a channel but the receiver at the other end signals back that the data was corrupt, the sender then “retries” the transmission. Constant retries due to a bad connection will eat away at otherwise good throughput specifications.
SCSI – Stands for Small Computer System Interface. Considered a high performance interface for disk drives, CD drives, scanners and other peripherals.
Serial SCSI – Several attempts are being made to standardize a high speed serial communications protocol between a PC and peripheral devices such as cameras, keyboards and printers. Aside from serial SCSI, the USB port and “FireWire” are also jockeying for acceptance.
SMART – Stands for Self-Monitoring Analysis and Reporting Technology. Allows drives to perform sophisticated self diagnostics, auto-correct when possible, and report faults to the OS when necessary.
System Latency – Not to be confused with disk drive rotational latency. An example of system latency is the amount of time it takes a change in the user interface of a DAW application (such as clicking a SOLO button) to translate into a change in the audio output.
UI – Stands for User Interface and usually refers to the combination of the display, keyboard and pointing device (mouse) that allow the user to interact with the program. Can also refer to just the graphical information being displayed by the program.
Note about SCSI host adapters
When the number of devices that a SCSI controller can access is mentioned, it is understood that the TOTAL number of devices includes the controller itself. Therefore, SCSI-1 can address 8 devices, but only 7 remain once the controller itself is counted. The same goes for wide SCSI, which can address 16 devices: the controller plus 15 more.