Performance Testing of Distributed File Systems at Google
May 31st, 2008 | Published in Google Testing
Posted by Rajat Jain and Marc Kaplan, Infrastructure Test Engineering
Google is unique in that we develop most of our software infrastructure from scratch inside the company. Distributed filesystems are no exception, and we have several here at Google that serve different purposes. One such filesystem is the Google File System (GFS), which is used to store almost all data at Google. Although GFS is the ultimate endpoint for much of that data, many other distributed file systems are built on top of GFS for a variety of purposes (Bigtable, for example, but several others exist as well), and their developers are constantly trying to improve performance to meet the ever-increasing demands of serving data at Google. The challenge for the teams testing the performance of these filesystems is that running performance tests, analyzing the results, and repeating the cycle over and over is very time consuming. Also, since each filesystem is different, we have traditionally had a different performance testing tool for each one, which made it difficult to compare performance between filesystems and led to a lot of unnecessary maintenance work on the tools.
In order to streamline testing of these filesystems, we wanted to create a new framework capable of easily performance-testing any filesystem at Google. The goals of this system were as follows:
- Generic: The testing framework should be generic enough to test any type of filesystem inside Google. Having a generic framework makes it easier to compare the performance of different filesystems across many operations.
- Ease of use: The framework should be easy enough to use that software developers can design and run their own tests without any help from the test team.
- Scalable: Testing can be done at various scales depending on the scalability of the filesystem, and the framework can issue any number of operations simultaneously. So, for testing a Linux file system we might issue only 1000 parallel requests, while for the Google File System we might want to issue requests at a much larger scale.
- Extensible:
  - First, it should be easy to add a new kind of operation to the framework if one is developed in the future (for example, the RecordAppend operation in GFS).
  - Second, the framework should allow the user to easily generate complex load scenarios on the server. For example, we might want a scenario in which we issue File Create operations simultaneously with Read, Write, and Delete operations. We want a good mix of operations, but not a randomized one, so that the results can serve as benchmarks.
- Unified testing: The framework should be stand-alone and independent, i.e. a one-stop solution to set up the tests, run them, and monitor the results.
We developed a framework which allows us to achieve all of the above goals. We used Google's generic File API for writing the framework, since any file system can then be tested just by changing the file namespace in which the test data is generated (e.g. /gfs vs. /bigtable). Following Google's standard, we developed a Driver + Worker system. The Driver coordinates the overall test by reading configuration files to set up the test, automatically launching a different number of workers depending on the load, monitoring the health of the workers, collecting performance data from each worker, and calculating the overall performance. The Worker class is the one that loads the file system with the appropriate operations. Worker is an abstract class, and a new child class can be created for each file operation, which gives us the flexibility to add any operation we want in the future. Separate Worker instances are launched on different machines depending on the load that we want to generate, and it is simple to run more or fewer workers on remote machines just by changing the config file.
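To make the Driver + Worker structure concrete, here is a minimal, hypothetical sketch of what the abstract Worker pattern might look like in C++. The names here (Worker, OpResult, DoOneOp, CreateWorker) are invented for this post rather than the framework's real interfaces:

// Hypothetical sketch of the Worker abstraction; names are illustrative.
#include <string>
#include <vector>

// What a single operation reports back to the driver.
struct OpResult {
  bool ok;
  double latency_ms;
  long long bytes;  // bytes moved; 0 for pure-metadata operations
};

// Abstract base class: one subclass per file operation (create, read, stat, ...).
class Worker {
 public:
  virtual ~Worker() {}

  // Run the configured number of operations and collect per-operation results.
  std::vector<OpResult> Run(int count) {
    std::vector<OpResult> results;
    for (int i = 0; i < count; ++i) {
      results.push_back(DoOneOp(i));
    }
    return results;
  }

 protected:
  // Each operation type implements its own single-operation logic.
  virtual OpResult DoOneOp(int index) = 0;
};

// Example subclass: a worker that creates files under a base path.
class CreateWorker : public Worker {
 public:
  CreateWorker(const std::string& base_path, const std::string& file_prefix)
      : base_path_(base_path), file_prefix_(file_prefix) {}

 protected:
  virtual OpResult DoOneOp(int index) {
    // A real implementation would call the generic File API here to create
    // base_path_/file_prefix_.index and time the call.
    OpResult result = {true, 0.0, 0};
    return result;
  }

 private:
  std::string base_path_;
  std::string file_prefix_;
};

In a structure like this, adding a new operation type (for example, the RecordAppend operation mentioned in the goals above) would amount to adding another Worker subclass.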
The test is divided into various phases. In a phase, we run a single operation N times (with a given concurrency) and collect performance data. So we can run a create phase followed by a write phase followed by a read phase. We can also have multiple sub-phases inside a phase, which gives us the ability to generate many different simultaneous operations on the system. For example, a phase might contain three subphases (create, write, and delete), which issue all of these kinds of operations simultaneously from remote client machines against the distributed filesystem.
It is instructive to look at an example config file for an idea of how the load is specified against this filesystem:
# Example of create
phase {
  label: "create"
  shards: 200
  create {
    file_prefix: "metadata_perf"
  }
  count: 100
}
So in the example above, we launch 200 shards (which all run on different client machines) that all do creates of files with a prefix of metadata_perf and suffixes based upon the index of the worker shard. In practice, the user of the performance test passes a flag into the performance test binary that specifies a base path to use, e.g. /gfs/cell1/perftest_path, and the resulting files will be /gfs/cell1/perftest_path/worker.i/metadata_perf.j, for i = 1 to #shards and j = 1 to count.
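For illustration only, here is a small sketch of how a shard might build those paths; the function and parameter names are hypothetical, not the framework's own:

#include <cstdio>
#include <string>

// Build the path used by worker shard i for its j-th file, following the
// pattern <base_path>/worker.<i>/<file_prefix>.<j> described above.
std::string FilePathForShard(const std::string& base_path,
                             const std::string& file_prefix,
                             int shard_index, int file_index) {
  char suffix[64];
  std::snprintf(suffix, sizeof(suffix), "/worker.%d/", shard_index);
  std::string path = base_path + suffix + file_prefix;
  std::snprintf(suffix, sizeof(suffix), ".%d", file_index);
  return path + suffix;
}

// Example: FilePathForShard("/gfs/cell1/perftest_path", "metadata_perf", 1, 1)
// yields "/gfs/cell1/perftest_path/worker.1/metadata_perf.1".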
# Example of using subphases
phase {
  label: "subphase_test"
  subphase {
    shards: 200
    label: "stat_subphase"
    stat {
      file_prefix: "metadata_perf"
    }
    count: 100
  }
  subphase {
    shards: 200
    label: "open_subphase"
    open {
      file_prefix: "metadata_perf"
    }
    count: 100
  }
}
In the example above, we simultaneously do stats and opens of the files that were initially created in the create phase. Different workers execute these, and then report their results to the driver.
On conclusion of the test, the driver prints a performance test results report that details the aggregate results of all of the clients: MB/s for data-intensive ops, ops/s for metadata-intensive ops, and latency measures of central tendency and dispersion.
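To give a rough sense of what that aggregation involves, here is a sketch of how a phase summary could be computed; this is illustrative only, and the names are made up rather than taken from the framework's actual reporting code:

#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical per-operation sample reported by a worker shard.
struct OpResult {
  double latency_ms;
  long long bytes;  // 0 for metadata-only operations such as stat or open
};

// Aggregate one phase's samples into the kinds of numbers the report shows:
// throughput (MB/s), operation rate (ops/s), and latency mean / 99th percentile.
void PrintPhaseSummary(const std::vector<OpResult>& samples, double wall_seconds) {
  if (samples.empty() || wall_seconds <= 0) return;

  long long total_bytes = 0;
  std::vector<double> latencies;
  for (size_t i = 0; i < samples.size(); ++i) {
    total_bytes += samples[i].bytes;
    latencies.push_back(samples[i].latency_ms);
  }
  std::sort(latencies.begin(), latencies.end());

  double latency_sum = 0;
  for (size_t i = 0; i < latencies.size(); ++i) latency_sum += latencies[i];

  std::printf("ops/s:        %.1f\n", samples.size() / wall_seconds);
  std::printf("MB/s:         %.1f\n",
              total_bytes / (1024.0 * 1024.0) / wall_seconds);
  std::printf("latency mean: %.2f ms  p99: %.2f ms\n",
              latency_sum / latencies.size(),
              latencies[(latencies.size() * 99) / 100]);
}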
In conclusion, Google's generic File API, the use of a Driver and Workers, and the concept of phases have been very useful in developing the performance testing framework, and hence in making performance testing easier. Almost as important, the fact that this is a simple script-driven method of testing complex distributed filesystems has made it easy for both developers and testers to quickly experiment and iterate, resulting in faster code development and better performance overall.