Large-scale science applications are expected to generate exabytes of data over the next 5 to 10 years. With scientific data collected at unprecedented volumes and rates, the success of large scientific collaborations will require that they provide distributed data access with improved data access latencies and increased reliability to a large user community. To meet these requirements, scientific collaborations are increasingly replicating large datasets over high-speed networks to multiple sites. The main objective of this work is to develop and deploy a general-purpose data access framework for scientific collaborations that provides lightweight performance monitoring and estimation, fine-grained and adaptive data transfer management, and enforcement of site and Virtual Organization (VO) policies for resource sharing. Lightweight mechanisms will collect monitoring information from data movement tools without putting extra load on the shared resources. Data transfer management mechanisms will select transfer properties based on each transfer's performance estimate and will adapt those properties when observed performance changes due to dynamic load on storage, network, and other resources. Finally, policy-driven resource management using VO policies on replication and resource allocation will balance user requirements for data freshness against the load on resources.
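As an illustration only (not the proposed framework's API), the adaptive transfer management described above can be sketched as a simple feedback loop that adjusts one transfer property, the parallel-stream count, when observed throughput drifts from the performance estimate. All names, thresholds, and bounds here are hypothetical placeholders.

```python
def adapt_streams(streams, estimated_mbps, observed_mbps,
                  min_streams=1, max_streams=16):
    """Return an adjusted parallel-stream count for the next monitoring interval.

    Hypothetical sketch: compares observed throughput against the estimate
    and nudges the stream count up or down within fixed bounds.
    """
    if observed_mbps < 0.8 * estimated_mbps:
        # Under-performing: add streams to compensate for contention
        # on storage or network resources.
        streams = min(max_streams, streams + 2)
    elif observed_mbps > 1.2 * estimated_mbps:
        # Over-performing: shed a stream to reduce load on shared resources.
        streams = max(min_streams, streams - 1)
    # Otherwise, observed performance matches the estimate; keep settings.
    return streams
```

In practice such a controller would also consult site and VO policies before raising its resource usage; this sketch shows only the performance-feedback step.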
Intellectual merit: The team will produce a software framework that will improve the ability of distributed scientific collaborations to provide efficient access to replicated datasets by a large community of users; this framework will combine fine-grained transfer management, transfer advice from policy-driven resource management, and lightweight monitoring.
Broader impact: The proposed development will facilitate scientific advances in many domains that increasingly depend on replication and sharing of ever-growing datasets.