The ScienceDMZ is a network-architecture approach optimized for high-performance scientific applications and the transfer of large research data sets over high-speed wide-area networks. It supports big-data movement by improving security, cost-effectiveness, and the efficient handling of large, mostly scientific, data sets.
The ScienceDMZ works by reserving a small part of the campus network to provide friction-free, high-performance networking in an environment separate from the business or enterprise systems that constitute the great majority of Pitt’s local area network (LAN). This separate environment provides a relatively small space optimized for the wide area data-movement needs of systems whose effectiveness depends on high-speed flows of big data. It also gives researchers greater networking capacity to work with colleagues at other campuses, institutions, or national labs.
For Prospective PIs Working on NSF Proposals
Results of Prior Work Notice for Prospective PIs:
CC*IIE Networking Infrastructure: The University of Pittsburgh is the recipient of NSF CC*IIE award #144064: Accelerating Science, Translational Research, and Collaboration at the University of Pittsburgh through the Implementation of Network Upgrades (2014, $499,437). This award provided funding for upgrades to campus cyberinfrastructure used in research and data-driven scientific projects accessing high-performance computing resources, data storage services, data repositories, and scientific collaborations at regional, national, and international locations. Researchers and educators using institutional resources for research computing can be more productive by using the new capacity and services built for fast and efficient movement of research data. The project included an upgrade of bandwidth (100 Gb/s) for the campus-to-data-center (NOC) connection; an upgrade of bandwidth (100 Gb/s) for the campus connection to the Three Rivers Optical Exchange (3Rox), providing access to Internet2 and other research and education networks; and implementation of a dedicated data transfer node (DTN) using Globus Connect Server for the transfer of files to and from University HPC resources using campus credentials via InCommon. The new capacity and services have been integrated into a “ScienceDMZ” network architecture providing a secure, high-performance platform for meeting the expanding needs of collaborative and multi-disciplinary data-driven research using cyberinfrastructure at the University of Pittsburgh. [Brian S. Stengel, PI]
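As a sketch of how a researcher might move data through a DTN of this kind, the Globus command-line interface supports authenticated transfers between endpoints. The endpoint UUIDs and paths below are placeholders for illustration only, not the actual identifiers for Pitt's DTN:

```shell
# Authenticate once with campus credentials (InCommon federated
# login opens in a browser).
globus login

# Placeholder endpoint UUIDs -- substitute the real DTN and
# destination collection IDs shown in the Globus web app.
SRC="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # campus DTN (hypothetical)
DST="11111111-2222-3333-4444-555555555555"   # collaborator endpoint (hypothetical)

# Recursively transfer a results directory, verifying checksums
# end to end; Globus retries and resumes interrupted transfers.
globus transfer --recursive --verify-checksum \
    "$SRC:/data/project/results/" \
    "$DST:/ingest/results/"
```

Because Globus manages the transfer asynchronously, the command returns a task ID immediately; the actual data movement rides the high-bandwidth ScienceDMZ path rather than the enterprise network.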