
Commit

removed references to 4k plot
cwishard committed Mar 4, 2023
1 parent 3bd4f26 commit d850a41
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions paper/paper.md
@@ -49,14 +49,14 @@ The combination of modern programming practices, flexible data processing tools,

# Performance

Modeling the behavior of thousands of fully interacting bodies over long timescales is computationally expensive, with typical runs taking weeks or months to complete. The addition of collisional fragmentation can quickly generate hundreds or thousands of new bodies in a short time period, creating further computational challenges for traditional \textit{n}-body integrators. As a result, enhancing computational performance was a key aspect of the development of `Swiftest`. Here we show a comparison between the performance of `Swift`, `Swifter-OMP` (a parallel version of `Swifter`), and `Swiftest` on simulations with 1k, 2k, 4k, 8k, and 16k fully interacting bodies. The number of CPUs dedicated to each run is varied from 1 to 24 to test the parallel performance of each program.
Modeling the behavior of thousands of fully interacting bodies over long timescales is computationally expensive, with typical runs taking weeks or months to complete. The addition of collisional fragmentation can quickly generate hundreds or thousands of new bodies in a short time period, creating further computational challenges for traditional \textit{n}-body integrators. As a result, enhancing computational performance was a key aspect of the development of `Swiftest`. Here we show a comparison between the performance of `Swift`, `Swifter-OMP` (a parallel version of `Swifter`), and `Swiftest` on simulations with 1k, 2k, 8k, and 16k fully interacting bodies. The number of CPUs dedicated to each run is varied from 1 to 24 to test the parallel performance of each program.
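
A CPU-count sweep like the one described above can be scripted in a few lines. The following is a minimal sketch, assuming an OpenMP-parallelized build; the executable name `swiftest_driver` and parameter file `param.in` are hypothetical stand-ins for the local installation, while `OMP_NUM_THREADS` is the standard OpenMP thread-count variable.

```python
import os
import subprocess
import time

# Hypothetical sweep over OpenMP thread counts (1 to 24 cores on a single node).
# "./swiftest_driver symba param.in" is a placeholder invocation; substitute the
# actual executable and parameter file for the local installation.
timings = {}
for ncpu in (1, 2, 4, 8, 12, 24):
    env = dict(os.environ, OMP_NUM_THREADS=str(ncpu))  # set OpenMP thread count
    start = time.perf_counter()
    subprocess.run(["./swiftest_driver", "symba", "param.in"], env=env, check=True)
    timings[ncpu] = time.perf_counter() - start        # wall-clock time in seconds
```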

\autoref{fig:performance} shows the results of this performance test. `Swiftest` outperforms `Swifter-OMP` and `Swift` in every simulation set, even when run in serial. When run in parallel, `Swiftest` shows a significant performance boost that grows as the number of bodies increases. The improved performance of `Swiftest` compared to `Swifter-OMP` and `Swift` is a critical step forward in \textit{n}-body modeling, providing a powerful tool for studying the dynamical evolution of planetary systems.
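
For concreteness, the speedup metric plotted in \autoref{fig:performance} can be written out explicitly. The sketch below uses invented wall-clock times purely for illustration: speedup is the serial `Swift` time divided by each run's time, and the ideal 1:1 curve scales `Swiftest`'s serial speedup linearly with core count.

```python
# Hypothetical wall-clock times in hours; real values come from the benchmark runs.
t_swift_serial = 100.0                              # serial Swift baseline
t_swiftest = {1: 40.0, 2: 21.0, 12: 4.5, 24: 2.8}   # Swiftest times by core count

# Speedup relative to serial Swift (the dashed baseline in the figure).
speedup = {n: t_swift_serial / t for n, t in t_swiftest.items()}

# Ideal 1:1 scaling anchored to Swiftest's serial speedup (the dotted upper limit).
ideal = {n: n * t_swift_serial / t_swiftest[1] for n in t_swiftest}
```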

# Acknowledgements

`Swiftest` was developed at Purdue University and was funded under the NASA Emerging Worlds and Solar System Workings programs. Active development by the Purdue Swiftest Team is ongoing and contributions from the community are highly encouraged.

![Performance testing of `Swiftest` on systems of (a) 1k, (b) 2k, (c) 4k, (d) 8k, and (e) 16k fully interacting massive bodies. All simulations were run using the \textit{SyMBA} integrators included in `Swift`, `Swifter-OMP`, and `Swiftest`. Speedup is measured relative to `Swift` (dashed), with an ideal 1:1 speedup relative to `Swiftest` in serial shown as an upper limit (dotted). The performance of `Swifter-OMP` is shown in green, while the performance of `Swiftest` is shown in blue. All simulations were run on the Purdue University Rosen Center for Advanced Computing Brown Community Cluster. Brown contains 550 Dell compute nodes, with each node containing two 12-core Intel Xeon Gold (Skylake) processors (CPUs), resulting in 24 cores per node. Each node has 96 GB of memory. \label{fig:performance}](performance.png)
![Performance testing of `Swiftest` on systems of (a) 1k, (b) 2k, (c) 8k, and (d) 16k fully interacting massive bodies. All simulations were run using the \textit{SyMBA} integrators included in `Swift`, `Swifter-OMP`, and `Swiftest`. Speedup is measured relative to `Swift` (dashed), with an ideal 1:1 speedup relative to `Swiftest` in serial shown as an upper limit (dotted). The performance of `Swifter-OMP` is shown in green, while the performance of `Swiftest` is shown in blue. All simulations were run on the Purdue University Rosen Center for Advanced Computing Brown Community Cluster. Brown contains 550 Dell compute nodes, with each node containing two 12-core Intel Xeon Gold (Skylake) processors (CPUs), resulting in 24 cores per node. Each node has 96 GB of memory. \label{fig:performance}](performance.png)

# References
