As Wédney said, the improvement would be minimal. Actually, I am not even sure there would be any real gain, since I don't think SQL Server can read the two tables in parallel or asynchronously for the same query.
Considering the Execution Plan
Before starting to read from disk, database systems such as SQL Server first calculate the execution plan. The first step is to check whether there is an index that can be used to determine in advance which records need to be read from the physical table.
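You can ask SQL Server to show this plan without executing the query. A minimal sketch (the table names foo and bar come from the question; the id columns and the join condition are just illustrative assumptions):

```sql
-- Return the estimated execution plan as XML instead of running the query
SET SHOWPLAN_XML ON;
GO

SELECT f.id, b.id
FROM foo AS f
JOIN bar AS b ON b.id = f.id;
GO

SET SHOWPLAN_XML OFF;
GO
```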
If the join is performed only on the primary keys of foo and bar, then there will certainly be indexes. Having enough memory helps SQL Server keep those indexes in memory and thus avoid reading them from disk. So, up to this step, no disk reading would be necessary.
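For reference, a primary key in SQL Server automatically creates a unique index (clustered, by default), which is what lets the join be resolved from the indexes. A sketch with hypothetical table definitions, since the real schemas were not shown:

```sql
-- PRIMARY KEY creates a unique (clustered, by default) index automatically
CREATE TABLE foo (
    id          INT NOT NULL PRIMARY KEY,
    description NVARCHAR(100) NULL
);

CREATE TABLE bar (
    id     INT NOT NULL PRIMARY KEY,
    amount DECIMAL(10, 2) NULL
);

-- Join driven only by the primary keys: the matching ids can be located
-- through the indexes before any other row data has to be fetched
SELECT f.id, f.description, b.amount
FROM foo AS f
JOIN bar AS b ON b.id = f.id;
```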
With the execution plan calculated, knowing exactly which records need to be read from disk, SQL Server will then start reading them. If the tables are on separate disks, it may not make any difference, since it probably has to wait for one table or record to be read before accessing the other table.
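If you want to see how much of your query actually hits the disk, SET STATISTICS IO reports logical reads (pages served from the buffer cache) versus physical reads (pages fetched from disk). A quick sketch, again using the hypothetical id join:

```sql
SET STATISTICS IO ON;

SELECT f.id, b.id
FROM foo AS f
JOIN bar AS b ON b.id = f.id;

SET STATISTICS IO OFF;
-- The Messages output lists, per table, logical reads (buffer cache)
-- and physical/read-ahead reads (actual disk access).
```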
Furthermore, even with the two tables on different disks, there is no guarantee that the movements of the HD's read/write head will be more efficient. Data from the same table may be fragmented across sectors scattered over the HD's tracks.
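Index fragmentation inside the database can be measured with the sys.dm_db_index_physical_stats DMV; for example (the NULL parameters mean "all objects and indexes" in the current database):

```sql
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
```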
Conclusion
Therefore, it may be much more efficient to simply compact your database, eliminating unused portions of the file, and then defragment the HD, than to split the tables across separate disks.
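As a sketch of that kind of maintenance (MyDatabase, foo and bar are placeholders; the HD defragmentation itself is done with the operating system's defrag tool, outside SQL Server):

```sql
-- Release unused space in the data files
DBCC SHRINKDATABASE (MyDatabase);

-- Shrinking tends to fragment indexes, so rebuild them afterwards
ALTER INDEX ALL ON foo REBUILD;
ALTER INDEX ALL ON bar REBUILD;
```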