Adoption of high-performance computing (HPC) and the public cloud is growing, alongside the overlapping field of artificial intelligence (AI). We often talk about the convergence of HPC, simulation, and data analytics, with machine learning (ML) at the center: the idea that these areas can come closer together even as each one grows.
But are we really converging, or are we all simply becoming generalists? As our sectors grow, they become more complex. Can the same tools adequately serve each business's needs? To answer that question, we must understand whether these technologies and sectors really are converging or simply growing alongside one another.
Last year the company I had spent ten years building was acquired by Altair, giving my technology the context it needed to be truly successful and me the opportunity to work at the center of several converging technologies.
Moving in many directions
To start with what we do: Altair has long dominated the simulation and product design market. We provide best-in-class tools for building everything from smart connected washing machines to Formula 1 cars. Altair solutions fit into every aspect of product development, from the design and test of embedded systems to physics-based modelling of the overall design.
Several key acquisitions over the last ten years have seen Altair expand into the neighboring spaces of HPC and data analytics. Many physics simulations and design processes produce large amounts of data and require significant compute resources to run, so the incoming HPC and data analytics solutions are a natural fit within the organization.
These closer ties have enhanced the technology within Altair and opened the door for us to lead the way in areas such as digital twins: the representation of a physical object in electronic form. By combining 3D representation, physics modelling, in-service data collection, and ML, the lifecycle of a product or component can be understood like never before. These techniques dramatically improve safety, drive up profit margins, and increase the longevity of products, something that has significant environmental ramifications.
HPC is not getting easier. The diversification of workloads is making it harder for the same HPC tools to cover the needs of different verticals and organizations, or even of teams within the same organization. As a result, we are seeing continued investment in tools that have been finely tuned to industry- or function-specific workloads.
For example, the very wide MPI workloads that run for days have very different data, compute, and orchestration requirements from the single-core, high-throughput workloads commonly found in finance or semiconductor design. Life science workloads have their own unique challenges and look nothing like the massive single-application runs of weather forecasting.
The adoption of hybrid cloud is also making it harder to keep things simple. The cloud makes it easy to deploy an ever-increasing range of products and solutions that go far beyond the selection typically found in an on-prem environment.
In short, I have yet to see two HPC environments set up in the same way and I don’t think the cloud is going to change that any time soon.
At Altair, this is where the vast diversity of our product portfolio continues to provide a strategic advantage. With simulation-driven design tools, physics solvers, data analytics, and IoT solutions in use by tens of thousands of customers across the globe, the Altair teams building and implementing our HPC solutions gain an intimate understanding of how to architect solutions for each of these diverse use cases. No two infrastructures may be identical, but we strive to pack as much of this problem-solving know-how into every HPC solution we deliver.
Will we all become data scientists?
Data science is equally applicable to scientific discovery, to design, and to managing the IT infrastructures needed to support those functions. A natural response to increased system complexity is to increase monitoring at all levels, which is why the Altair Breeze™ and Altair Mistral™ I/O profiling tools we develop play such an important part in the Altair HPC product suite. Increased monitoring is both necessary and an opportunity to do better when we are faced with diversity in the problem space, tools, and compute environments. But it is not enough simply to collect the data: it must be transformed and visualized appropriately in order to provide actionable insight.
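As a minimal sketch of that idea, the snippet below turns raw per-job I/O metrics into a simple actionable flag. The record fields, thresholds, and function names here are invented for illustration and do not come from Breeze, Mistral, or any Altair product.

```python
# Hypothetical illustration: transforming raw per-job I/O metrics into
# an actionable summary. All field names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class JobIO:
    job_id: str
    runtime_s: float   # wall-clock runtime in seconds
    bytes_read: int    # total bytes read by the job
    small_reads: int   # read calls under 4 KiB
    total_reads: int   # all read calls

def flag_inefficient(jobs, small_read_ratio=0.8, min_reads=1000):
    """Flag jobs whose I/O is dominated by tiny reads -- a common sign
    of a workload that will scale poorly on shared storage."""
    flagged = []
    for j in jobs:
        if j.total_reads >= min_reads and j.small_reads / j.total_reads > small_read_ratio:
            flagged.append(j.job_id)
    return flagged

jobs = [
    JobIO("sim-001", 3600, 10**9, 50, 20000),   # healthy: mostly large reads
    JobIO("eda-042", 900, 10**7, 9500, 10000),  # 95% tiny reads
]
print(flag_inefficient(jobs))  # -> ['eda-042']
```

Even a reduction this crude turns thousands of raw trace records into a short list a cluster administrator can act on, which is the point of the transformation step.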
Now that I am part of Altair, I am fortunate enough to work alongside some of the leading experts in data science. Altair Knowledge Works™ assists in every stage of the ML pipeline, from data preparation through AutoML to explainable AI and data visualization. These techniques are already shaping our lives in areas such as drug discovery, financial modelling, and product design, and their influence will continue to grow. The HPC industry has started to look to ML to improve efficiency and automate decision-making, but I think it is safe to say that we have a long way to go before much of the industry is automated in this way.
There will be a lot of wins and a lot of misses, I think, on our road to adopting AI and data science in these engineering disciplines. Only when the dust settles will we really be able to see whether we are all converging or not.
Dr. Rosemary Francis founded Ellexus in 2010, which was acquired by Altair in 2020. She obtained her PhD in computer architecture from the University of Cambridge and founded Ellexus to build tools for managing the complex tool chains needed for semiconductor design. Ellexus went on to become the I/O profiling company, working with high-performance computing organizations around the world in semiconductor, life sciences, and oil and gas. Now part of Altair, Francis continues to lead the Ellexus team to work on job-level analytics and storage-aware scheduling. She is a member of the Raspberry Pi Foundation, an educational charity that promotes access to technology education and digital making.