Darshan 1.1.4 release


  • Track files opened via Parallel NetCDF
  • Track files opened via HDF5
  • Record slowest individual POSIX read and write times along with access size for those operations
  • Inspect symbols at compile time to determine whether to enable Darshan, based on the presence of MPI and PMPI symbols
  • Use the GNU and IBM compilers found in the path rather than a hard-coded location
  • Simplify warning message if unable to open log file
  • Remove unused internal benchmark routines

This release is now available on the download page.  Note that the output files generated by Darshan 1.1.4 are not compatible with the output files generated by 1.1.3.

Darshan 1.1.1 release


  • Set default permissions to 0400 (user read only) for output files
  • Automatically disable Darshan at link time if common PMPI libraries are detected in the command line
  • Experimental tool (darshan-gen-cc.pl) to automatically generate Darshan-enabled mpicc scripts

This release is now available on the download page.

    Welcome to the Darshan project

    This is the home page for Darshan, a scalable HPC I/O characterization tool. Darshan is designed to capture an accurate picture of application I/O behavior, including properties such as patterns of access within files, with minimum overhead.  The name is taken from a Sanskrit word for “sight” or “vision”.

    Darshan can be used to investigate and tune the I/O behavior of complex HPC applications.  In addition, Darshan’s lightweight design makes it suitable for full time deployment for workload characterization of large systems.  We hope that such studies will help the storage research community to better serve the needs of scientific computing.

    Darshan was originally developed on the IBM Blue Gene series of computers deployed at the Argonne Leadership Computing Facility, but it is portable across a wide variety of platforms, including the Cray XE6, Cray XC30, and Linux clusters.  Darshan routinely instruments jobs using up to 786,432 compute cores on the Mira system at ALCF.

    You will find current news about the Darshan project posted below.  Additional documentation and details about Darshan are available from the links at the top of this page.

    Testpio Case Study #1

    Last week I did some comparative runs of the “testpio” kernel to find out why PnetCDF I/O was slower than raw binary MPI-IO. In this scenario, 512 cores write a 51 MB file ten times.

    There were some minor differences: the binary (MPI-IO) case uses a block-indexed datatype, while PnetCDF uses a subarray datatype. PnetCDF also syncs the file a few more times – it calls MPI_FILE_SYNC when exiting define mode, but I think we will change that soon.
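To make the datatype difference concrete, here is a minimal sketch of the file layout that a 2D subarray datatype (the kind PnetCDF uses here) describes for one process: each rank owns a rectangular block of a global array, and its block maps to a set of strided contiguous segments in the file. The function name, array dimensions, and element size below are illustrative assumptions, not values taken from the testpio runs.

```python
def subarray_offsets(gsizes, subsizes, starts, elem_size):
    """Return (byte_offset, byte_length) for each contiguous row segment
    that one process's 2D subarray occupies in the flattened global array
    (C row-major order), mirroring what MPI_Type_create_subarray describes."""
    grows, gcols = gsizes      # global array dimensions
    srows, scols = subsizes    # this process's block dimensions
    r0, c0 = starts            # block's starting coordinates in the global array
    segments = []
    for r in range(srows):
        # Each local row is contiguous in the file, but consecutive local
        # rows are separated by a full global row stride.
        offset = ((r0 + r) * gcols + c0) * elem_size
        segments.append((offset, scols * elem_size))
    return segments

# Example: a 4x4 global array of 8-byte doubles; this process owns the
# 2x2 block starting at row 2, column 0.
print(subarray_offsets((4, 4), (2, 2), (2, 0), 8))
# -> [(64, 16), (96, 16)]
```

A block-indexed datatype can describe the same segments as an explicit list of displacements; the subarray type just derives them from the global and local dimensions, which is why the two cases generate nearly identical I/O apart from the extra sync.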