GCFScape v1.6.0 and HLLib v2.0.2 - Nem
Posted: Nov 10th, 2006 - 5:41:14 pm

I mentioned previously that my attempt to add a fragmentation property to HLLib revealed a surprising amount of GCF fragmentation, even after Steam's built-in GCF defragmenter had been run. As a result, I've released new versions of GCFScape (complete change-log) and HLLib with my own implementation of a GCF defragmenter. As always, you can find GCFScape here.

The results of HLLib's defragmenter were better than I had expected, with an average speedup of 2.62 on the HLLib tests and 2.32 on the Steam tests (for a total average speedup of 2.47). The baseline tests were all conducted on Day of Defeat: Source GCF files that had been run through Steam's defragmenter (which claimed 0% fragmentation, even though I calculated an average of 8.38%). I restarted my computer between tests to ensure that it was in a similar state for each test. The complete results are as follows:

Test Results
Test                   Library            Fragmented Time   Defragmented Time   Speedup
Load dod_jagd          HLLib (Crafty)     1:07              0:58                1.16
Load dod_anzio         Steam              1:52              1:26                1.30
Validate DOD: Source   HLLib (GCFScape)   8:01              1:58                4.08
Validate DOD: Source   Steam              6:29              1:56                3.35
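
The averages quoted above follow directly from the table: each speedup is the fragmented time divided by the defragmented time, and the per-library figures are simple means. A quick sketch of the arithmetic (times hardcoded from the table):

```cpp
#include <cstddef>
#include <vector>

// Convert a mm:ss time from the table to seconds.
double seconds(int minutes, int secs) {
    return minutes * 60.0 + secs;
}

// Speedup of one test: fragmented time divided by defragmented time.
double speedup(double fragmented, double defragmented) {
    return fragmented / defragmented;
}

// Mean speedup over a set of tests.
double mean(const std::vector<double> &speedups) {
    double sum = 0.0;
    for (double s : speedups)
        sum += s;
    return sum / speedups.size();
}

// From the table: the HLLib tests average (1.16 + 4.08) / 2 = 2.62, the
// Steam tests (1.30 + 3.35) / 2 ≈ 2.32, and all four together ≈ 2.47.
```

For example, the DOD: Source validation through GCFScape gives speedup(seconds(8, 1), seconds(1, 58)), i.e. 481 / 118 ≈ 4.08, matching the table to within rounding.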

Of particular interest is the 3.35 speedup for DOD: Source validation through Steam. Because validation is more IO bound than CPU bound, this speedup is indicative of Steam's raw file IO speedup. The result of this IO speedup can be seen in the dod_anzio test, where I was able to load dod_anzio 26 seconds faster than before defragmentation. Even the casual gamer should appreciate that.
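
As a rough illustration of what a fragmentation percentage like the 8.38% mentioned above might mean, here is a minimal sketch. This is my own metric, not necessarily the one HLLib uses: treat a file as its list of data-block indices and count the fraction of adjacent block pairs that are not stored consecutively on disk.

```cpp
#include <cstddef>
#include <vector>

// Fragmentation of one file: the fraction of adjacent block pairs that are
// not consecutive on disk. 0.0 means fully contiguous; 1.0 means every
// block break requires a seek. (Illustrative metric only.)
double fragmentation(const std::vector<unsigned> &blocks) {
    if (blocks.size() < 2)
        return 0.0; // a zero- or one-block file cannot be fragmented
    std::size_t breaks = 0;
    for (std::size_t i = 1; i < blocks.size(); ++i)
        if (blocks[i] != blocks[i - 1] + 1)
            ++breaks;
    return static_cast<double>(breaks) / (blocks.size() - 1);
}
```

Under a metric like this, a tool can report a nonzero average even when another tool, counting differently (for example, only whole-file relocations), reports 0%.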

To defragment your GCF files using GCFScape, do the following:

  1. Shut down Steam.
  2. Launch GCFScape.
  3. In the Options menu, enable Write Access and disable Volatile Access.
  4. Open your GCF file.
  5. Select Defragment from the Tools menu.

HLLib uses a crude and slow defragmentation algorithm which moves each file (in the order it appears in the directory) to the start of the GCF file sequentially, regardless of whether or not the file is fragmented. This algorithm may be crude, but it ensures that each file is 100% defragmented and has the added benefit of arranging files lexicographically, so that items such as models, which are made up of multiple files, are placed side by side and can be read with little seeking. The algorithm can be canceled at any time.
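
The algorithm described above can be sketched in a few lines. This is a simplified model of my own, not HLLib's actual code: a package is a flat array of fixed-size blocks, each directory entry owns a list of block indices in file order, and defragmentation walks the directory in order, copying every file's blocks to the next free position at the front of the package.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Toy model of a GCF-like package: a flat block array plus a directory
// mapping each file to the (possibly scattered) blocks that hold its data.
struct File {
    std::string name;
    std::vector<std::size_t> blocks; // indices into Package::blocks, in file order
};

struct Package {
    std::vector<char> blocks; // one char per block, for illustration
    std::vector<File> files;  // directory order
};

// Crude sequential defragmenter, as described above: walk the directory in
// order and move every file's blocks to the front of the package one after
// another, whether or not the file was fragmented. Afterwards every file is
// contiguous and files sit side by side in directory order.
void defragment(Package &pkg) {
    std::vector<char> packed(pkg.blocks.size());
    std::size_t next = 0; // next free block at the front
    for (File &file : pkg.files) {
        for (std::size_t &block : file.blocks) {
            packed[next] = pkg.blocks[block]; // copy the data block forward
            block = next++;                   // update the directory entry
        }
    }
    pkg.blocks = std::move(packed);
}
```

Note the simplification: this version copies into a second buffer, whereas a real defragmenter working on a multi-gigabyte GCF must shuffle blocks in place, which is part of why the real algorithm is slow.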


Nem's Tools v2.0 © 2006 Ryan Gregg.