Next-generation Network-aware Data Management Middleware

Mehmet Balman

Accessing and managing large amounts of data is one of the major difficulties in both science and business applications. In addition to increasing data volumes, future collaborations will require cooperative work at the extreme scale. As the number of multidisciplinary teams and experimental facilities increases, data sharing and resource coordination among globally distributed centers become more challenging with every passing year. We require complex middleware to orchestrate storage, network, and compute resources, and to manage end-to-end processing of data. Next-generation high-bandwidth networks also need to be evaluated carefully from the application's perspective.

In this talk, I will first introduce a flexible network reservation system for guaranteed-bandwidth virtual circuit services. Many scientific applications need support from a communication infrastructure that provides predictable performance, which in turn requires effective algorithms for network provisioning. I will present a new data scheduling model with advance resource provisioning that offers data placement as a service, so researchers and higher-level meta-schedulers can plan ahead and submit their data requests in advance.

Next, I will describe a new data movement prototype used in 100Gbps demonstrations, in which applications map memory blocks for remote data instead of using send/receive semantics. 100Gbps is beyond the capacity of today's commodity machines, since driving it requires a substantial amount of processing power and the involvement of multiple cores.
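As a rough illustration of the advance-reservation idea described above, the following Python sketch shows how a data-placement request (a volume and a deadline) might be matched to an earliest time window on a bandwidth-limited path. It is a minimal sketch under simplifying assumptions (a single path, a greedy policy, non-overlapping windows); the Reservation class, the earliest_window function, and the units are hypothetical and are not the interface of the reservation system presented in the talk.

# Hypothetical sketch of advance bandwidth reservation for data placement.
# Names, units, and the scheduling policy are illustrative assumptions,
# not the actual reservation system's API.

from dataclasses import dataclass

@dataclass
class Reservation:
    start: float      # seconds from now
    end: float        # seconds from now
    bandwidth: float  # Gbps reserved on the path

def earliest_window(volume_gb, deadline_s, path_capacity_gbps, existing):
    """Find the earliest start time so that `volume_gb` gigabytes finish
    transferring before `deadline_s`, using the bandwidth left on the path
    after existing reservations (greedy, single-path sketch)."""
    duration = lambda bw: (volume_gb * 8.0) / bw  # transfer time at bw Gbps
    # Try candidate start times at the boundaries of existing reservations.
    candidates = [0.0] + sorted(r.end for r in existing)
    for start in candidates:
        # Residual bandwidth: capacity minus reservations active at `start`.
        used = sum(r.bandwidth for r in existing if r.start <= start < r.end)
        free = path_capacity_gbps - used
        if free <= 0:
            continue
        end = start + duration(free)
        # Accept only if the transfer meets the deadline and the window does
        # not overlap any existing reservation (keeps the sketch simple).
        if end <= deadline_s and all(r.start >= end or r.end <= start for r in existing):
            return Reservation(start, end, free)
    return None  # request cannot be satisfied; caller may renegotiate

# Example: place 500 GB over a 10 Gbps path within the next two hours,
# around an existing 6 Gbps reservation that holds the first half hour.
existing = [Reservation(0.0, 1800.0, 6.0)]
print(earliest_window(500.0, 7200.0, 10.0, existing))

Running the example returns a window starting after the existing reservation ends, since waiting for the full 10 Gbps path finishes the transfer sooner than sharing the link; a real scheduler would weigh many such trade-offs across multiple paths and requests.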

I will conclude my talk with a discussion of future design principles for high-speed networking, network virtualization, and autonomous resource provisioning in next-generation dynamic networks.

Bio: Mehmet Balman is a research engineer who has been working in the Computational Research Division at Lawrence Berkeley National Laboratory (Berkeley Lab) since 2009. His recent work focuses on performance problems in high-bandwidth networks, efficient data transfer mechanisms and data streaming, high-performance network protocols, network virtualization, and data transfer scheduling for large-scale applications. Before coming to Berkeley, he worked in the Center for Computation & Technology (CCT) at Louisiana State University (LSU). Before joining LSU, he gained several years of industrial experience as a system administrator and R&D specialist at various software companies. He worked as a summer intern at Los Alamos National Laboratory in 2008. Mehmet received his doctoral degree in computer science from LSU in 2010. He also holds M.S. and B.S. degrees in computer engineering from Bogazici University, Turkey.