Accession Number ADA584727
Title Exploiting Data Sparsity in Parallel Matrix Powers Computations.
Publication Date May 2013
Media Count 20p
Personal Author E. Carson, J. Demmel, N. Knight
Abstract The increasingly high relative cost of moving data on modern parallel machines has caused a paradigm shift in the design of high-performance algorithms: to achieve efficiency, one must focus on strategies which minimize data movement, rather than minimize arithmetic operations. We call this a communication-avoiding approach to algorithm design. In this work, we derive a new parallel communication-avoiding matrix powers algorithm for matrices of the form A = D + USV^H, where D is sparse and USV^H has low rank but may be dense. Matrices of this form arise in many practical applications, including power-law graph analysis, circuit simulation, and algorithms involving hierarchical (H) matrices, such as multigrid methods, fast multipole methods, numerical partial differential equation solvers, and preconditioned iterative methods. If A has this form, our algorithm enables a communication-avoiding approach. We demonstrate that, with respect to the cost of computing k sparse matrix-vector multiplications, our algorithm asymptotically reduces the parallel latency by a factor of O(k) for small additional bandwidth and computation costs. Using problems from real-world applications, our performance model predicts that this reduction in communication allows for up to 24x speedups on petascale machines.
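The splitting A = D + USV^H described in the abstract can be illustrated with a small sketch (this is an illustrative example with made-up dimensions, not the authors' algorithm): applying A to a vector never requires forming the dense product USV^H, so each application of A costs roughly one sparse matrix-vector product plus O(nr) work for the rank-r term, and repeating it k times yields the basis [x, Ax, ..., A^k x] used by a matrix powers kernel.

```python
import numpy as np
import scipy.sparse as sp

# Hypothetical small instance of the splitting A = D + U S V^H:
# D is sparse, U S V^H is a low-rank (possibly dense) correction.
n, r, k = 8, 2, 3
rng = np.random.default_rng(0)

D = sp.random(n, n, density=0.2, random_state=0, format="csr")
U = rng.standard_normal((n, r))
S = np.diag(rng.standard_normal(r))
V = rng.standard_normal((n, r))

def matvec(x):
    # Apply A = D + U S V^H without forming the dense n-by-n matrix:
    # the low-rank term costs O(n r) instead of O(n^2).
    return D @ x + U @ (S @ (V.conj().T @ x))

# Matrix powers computation: build the basis [x, A x, ..., A^k x].
x = rng.standard_normal(n)
basis = [x]
for _ in range(k):
    basis.append(matvec(basis[-1]))

# Sanity check against the explicitly formed dense A.
A = D.toarray() + U @ S @ V.conj().T
assert np.allclose(basis[k], np.linalg.matrix_power(A, k) @ x)
```

In the paper's parallel setting the point is that the low-rank term can be communicated and combined with far less latency than k rounds of neighbor exchanges for a dense A; the sketch above only shows the serial arithmetic structure being exploited.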
Keywords Algorithms
Communication avoiding
Computational science
Data transfer
Parallel blocking covers algorithm
Parallel matrix powers algorithms
Parallel processing
Sparse matrix

Source Agency Non Paid ADAS
NTIS Subject Category 72B - Algebra, Analysis, Geometry, & Mathematical Logic
Corporate Author California Univ., Berkeley. Dept. of Electrical Engineering and Computer Science.
Document Type Technical report
Title Note Technical rept.
NTIS Issue Number 1403
Contract Number HR0011-12-2-0016
