Machine Learning Algorithm Manages Supercomputer Datasets, Sets a New Record

Los Alamos National Laboratory researchers have developed a new machine-learning algorithm that identifies the key characteristics of large datasets and factorizes them into manageable batches. By working on the data in pieces, it sidesteps a common hardware bottleneck: it can process datasets that exceed a computer’s available memory. In a test run on Oak Ridge National Laboratory’s Summit, the fifth-fastest supercomputer in the world, it set a new world record.

“We developed an ‘out-of-memory’ implementation of the non-negative matrix factorization method that allows you to factorize larger data sets than previously possible on a given hardware,” said Ismael Boureima, a Los Alamos National Laboratory computational physicist. “Our implementation simply breaks down the big data into smaller units that can be processed with the available resources. Consequently, it’s a useful tool for keeping up with exponentially growing data sets,” he added.
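The article does not show the lab’s implementation, but the idea of factorizing data in pieces can be sketched in a few lines. The example below is a minimal NumPy illustration, not the Los Alamos code: the function name `chunked_nmf`, the `blocks` callable, and all parameters are assumptions made for the sketch. It updates the factors one row block at a time, so the full matrix never has to sit in memory at once.

```python
import numpy as np

def chunked_nmf(blocks, n_features, k, n_iter=100, eps=1e-9, seed=0):
    """Factorize X ~ W @ H one row block at a time (illustrative sketch only).

    `blocks` is a zero-argument callable returning an iterator of
    (block_index, X_block) pairs, so the data can be re-read on every pass,
    e.g. streamed from disk.
    """
    rng = np.random.default_rng(seed)
    H = rng.random((k, n_features))
    W_blocks = {}                      # each block's rows of W are kept separately

    for _ in range(n_iter):
        HHt = H @ H.T                  # small (k x k) matrix shared by every block
        num_H = np.zeros_like(H)       # accumulators for the H update
        WtW = np.zeros((k, k))
        for i, X in blocks():          # stream one block at a time
            W = W_blocks.get(i)
            if W is None:
                W = rng.random((X.shape[0], k))
            # multiplicative update for this block's rows of W
            W *= (X @ H.T) / (W @ HHt + eps)
            W_blocks[i] = W
            num_H += W.T @ X
            WtW += W.T @ W
        # multiplicative update for H from the accumulated block statistics
        H *= num_H / (WtW @ H + eps)
    return W_blocks, H
```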

In the record-setting test run, the algorithm used 25,000 GPUs to factorize a 340-terabyte dense matrix and an 11-exabyte sparse matrix, demonstrating that it can partition and factorize exabyte-scale data, a first-of-its-kind achievement.

The algorithm is also remarkably versatile. It scales to thousands of GPUs on high-performance computers and processes datasets that exceed their available memory, yet it also runs effectively and efficiently on desktop computers and laptops. That range of supported hardware makes it accessible to a much broader set of users.

Furthermore, it could make waves across multiple industries by transforming how data is processed. Cancer research, social media network analysis, satellite imagery, and earthquake research are all disciplines where the algorithm could change how data is used.

Memory limitations have long constrained traditional computing methods, because a dataset must fit in the available memory to be processed. The record-breaking ML algorithm resolves this issue by breaking massive datasets into smaller segments that can be processed with the resources at hand, cycling each segment in and out of memory as needed and thereby “keeping up with exponentially growing data sets.”
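One common way to cycle segments in and out of memory is to leave the matrix on disk and map it, reading only one block at a time. The sketch below assumes a binary file named `big_matrix.npy` and a block size chosen for illustration; it simply feeds disk-resident blocks to the chunked factorization sketched above and is not the article’s actual pipeline.

```python
import numpy as np

# Memory-map the matrix on disk: nothing is read until a slice is touched.
# The filename and block size are assumptions for the example.
X = np.lib.format.open_memmap("big_matrix.npy", mode="r")
block_rows = 100_000   # number of rows that fits comfortably in RAM

def blocks():
    """Yield (index, block) pairs, pulling one slice off disk at a time."""
    for i, start in enumerate(range(0, X.shape[0], block_rows)):
        yield i, np.array(X[start:start + block_rows])  # copy the slice into memory

# W_blocks, H = chunked_nmf(blocks, n_features=X.shape[1], k=16)
```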

The factorization algorithm runs efficiently on hardware ranging from desktop computers to supercomputers. It leverages accelerators such as GPUs to speed up computation and fast interconnects to move data between machines efficiently.
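The article does not describe how the GPU work is wired up, but the basic pattern is to ship each block to the accelerator, run the dense updates there, and ship the result back. Below is a minimal sketch using CuPy, assuming a CUDA-capable GPU; the function name and arguments are illustrative, not the lab’s interface.

```python
import cupy as cp   # GPU-backed array library with a NumPy-like API; needs a CUDA GPU

def gpu_block_update(X_block, W_block, H, eps=1e-9):
    """One multiplicative update of a block's rows of W, computed on the GPU."""
    Xg, Wg, Hg = cp.asarray(X_block), cp.asarray(W_block), cp.asarray(H)
    Wg *= (Xg @ Hg.T) / (Wg @ (Hg @ Hg.T) + eps)
    return cp.asnumpy(Wg)   # move the updated factor back to host memory
```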

In machine learning, the non-negative matrix factorization the algorithm performs is a form of unsupervised learning: it extracts meaning from data by identifying hidden features that may be relevant or useful to the user, which makes it valuable for data processing, analytics, and machine learning.
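For readers unfamiliar with the method, the standard in-memory version is available in common libraries. The toy example below uses scikit-learn’s NMF to pull two latent features out of a small non-negative matrix; the data and the number of components are made up purely for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy non-negative data: 6 "documents" described by 4 "term" counts.
X = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4],
              [2, 1, 3, 0]], dtype=float)

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)   # how strongly each row expresses each latent feature
H = model.components_        # what each latent feature looks like in the original columns

print(W.round(2))
print(H.round(2))
```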

“The question is no longer whether it is possible to factorize a larger matrix, rather how long is the factorization going to take,” said Boureima. “That’s very important for machine learning and data analytics because the algorithm can identify explainable latent features in the data that have a particular meaning to the user,” he added.

This development sets a precedent for advances in data processing: by analyzing massive datasets and removing memory barriers, it can empower data-driven applications and transform scientific research and development.
