From d850a41e43aad56ad814f2781c8a38c78fc79d15 Mon Sep 17 00:00:00 2001
From: Carlisle Wishard
Date: Sat, 4 Mar 2023 12:30:39 -0500
Subject: [PATCH] removed references to 4k plot

---
 paper/paper.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/paper/paper.md b/paper/paper.md
index 782da0cf1..3bba09655 100644
--- a/paper/paper.md
+++ b/paper/paper.md
@@ -49,7 +49,7 @@ The combination of modern programming practices, flexible data processing tools,
 
 # Performance
 
-Modeling the behavior of a thousands of fully interacting bodies over long timescales is computationally expensive, with typical runs taking weeks or months to complete. The addition of collisional fragmentation can quickly generate hundreds or thousands of new bodies in a short time period, creating further computational challenges for traditional \textit{n}-body integrators. As a result, enhancing computational performance was a key aspect of the development of 'Swiftest'. Here we show a comparison between the performance of 'Swift', 'Swifter-OMP' (a parallel version of 'Swifter'), and 'Swiftest' on simulations with 1k, 2k, 4k, 8k, and 16k fully interacting bodies. The number of CPUs dedicated to each run is varied from 1 to 24 to test the parallel performance of each program.
+Modeling the behavior of thousands of fully interacting bodies over long timescales is computationally expensive, with typical runs taking weeks or months to complete. The addition of collisional fragmentation can quickly generate hundreds or thousands of new bodies in a short time period, creating further computational challenges for traditional \textit{n}-body integrators. As a result, enhancing computational performance was a key aspect of the development of 'Swiftest'. Here we show a comparison between the performance of 'Swift', 'Swifter-OMP' (a parallel version of 'Swifter'), and 'Swiftest' on simulations with 1k, 2k, 8k, and 16k fully interacting bodies. The number of CPUs dedicated to each run is varied from 1 to 24 to test the parallel performance of each program.
 
 \autoref{fig:performance} shows the results of this performance test. We can see that 'Swiftest' outperforms 'Swifter-OMP' and 'Swift' in each simulation set, even when run in serial. When run in parallel, 'Swiftest' shows a significant performance boost when the number of bodies is increased. The improved performance of 'Swiftest' compared to 'Swifter-OMP' and 'Swift' is a critical step forward in \textit{n}-body modeling, providing a powerful tool for modeling the dynamical evolution of planetary systems.
 
@@ -57,6 +57,6 @@ Modeling the behavior of a thousands of fully interacting bodies over long times
 
 `Swiftest` was developed at Purdue University and was funded under the NASA Emerging Worlds and Solar System Workings programs. Active development by the Purdue Swiftest Team is ongoing and contributions from the community are highly encouraged.
 
-![Performance testing of 'Swiftest' on systems of (a) 1k, (b) 2k, (c) 4k, (d) 8k, and (e) 16k fully interacting massive bodies. All simulations were run using the \textit{SyMBA} integrators included in 'Swift', 'Swifter-OMP', and 'Swiftest'. Speedup is measured relative to 'Swift' (dashed), with an ideal 1:1 speedup relative to 'Swiftest' in serial shown as an upper limit (dotted). The performance of 'Swifter-OMP' is shown in green while the performance of 'Swiftest' is shown in blue. All simulations were run on the Purdue University Rosen Center for Advanced Computing Brown Community Cluster. Brown contains 550 Dell compute nodes, with each node containing 2 12-core Intel Xeon Gold Sky Lake processors (CPUs), resulting in 24 cores per node. Each node has 96 GB of memory. \label{fig:performance}](performance.png)
+![Performance testing of 'Swiftest' on systems of (a) 1k, (b) 2k, (c) 8k, and (d) 16k fully interacting massive bodies. All simulations were run using the \textit{SyMBA} integrators included in 'Swift', 'Swifter-OMP', and 'Swiftest'. Speedup is measured relative to 'Swift' (dashed), with an ideal 1:1 speedup relative to 'Swiftest' in serial shown as an upper limit (dotted). The performance of 'Swifter-OMP' is shown in green, while the performance of 'Swiftest' is shown in blue. All simulations were run on the Purdue University Rosen Center for Advanced Computing Brown Community Cluster. Brown contains 550 Dell compute nodes, with each node containing two 12-core Intel Xeon Gold Skylake processors (CPUs), resulting in 24 cores per node. Each node has 96 GB of memory. \label{fig:performance}](performance.png)
 
 # References
\ No newline at end of file