More machines and machine sizes
The four existing scalcs seem to work exactly as we wanted. So it's time to add more.
Before that, we should think about resizing both the existing machines and the new ones. (My gut feeling is that 32 GB was a little small.)
- The three ESX hosts each have 2×20 double-threaded CPU cores (80 vCPUs) and 1000 GB of memory. That is about 12.5 GB per vCPU, call it ~10 GB per vCPU with a little slack.
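The per-host arithmetic can be spelled out in a couple of lines (all numbers from the text):

```python
# Per-host arithmetic for one ESX host, numbers from the note.
cores = 2 * 20            # 2 sockets x 20 cores
vcpus = cores * 2         # double-threaded: 2 vCPUs per core
mem_gb = 1000             # memory per host

print(vcpus)              # 80 vCPUs per host
print(mem_gb / vcpus)     # 12.5 GB per vCPU
```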
- Our analyses handle data in chunks of up to a few GB. (One time slice of one field of a large ocean model is ~5 GB.) For more complicated / convoluted calculations, each chunk needs to fit into memory O(10-20) times. So we'd need 50-100 GB per machine.
- At ~10 GB per vCPU, 50-100 GB of memory would come with 5 to 10 vCPUs. (Is it a bad idea to move away from powers of two here?)
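As a quick sketch, the chunk argument above turns into VM sizes like this (the ~10 GB per vCPU ratio is the host ratio rounded down):

```python
# Back-of-envelope VM sizing from the chunk argument:
# chunks of up to ~5 GB, O(10-20) in-memory copies, ~10 GB per vCPU.
chunk_gb = 5
gb_per_vcpu = 10           # rounded down from the host's 12.5 GB / vCPU

for copies in (10, 20):
    mem_gb = chunk_gb * copies
    vcpus = mem_gb // gb_per_vcpu
    print(f"{copies} copies -> {mem_gb} GB, {vcpus} vCPUs")
    # 10 copies -> 50 GB, 5 vCPUs; 20 copies -> 100 GB, 10 vCPUs
```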
- We cannot make the individual VMs too big, because we want them to stay mobile between ESX hosts.
- We need to leave some room on the ESX hosts. How much? 25%?
Scenario A

- scalc{01..15} with 100 GB, 10 vCPU each
- scalcg{01..03} with 100 GB, 10 vCPU, 1 GPU each
- ursus7 and taurus7 with 256 GB, 32 CPU each
Scenario B

- scalc{01..30} with 50 GB, 5 vCPU each
- scalcg{01..03} with 100 GB, 10 vCPU, 1 GPU each
- ursus7 and taurus7 with 256 GB, 32 CPU each
Scenario C

- scalc{01..06} with 200 GB, 20 vCPU each
- scalcg{01..03} with 200 GB, 20 vCPU, 1 GPU each
- ursus7 and taurus7 with 256 GB, 32 CPU each
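The headroom question above can be sanity-checked in a few lines. All VM sizes are taken from the scenarios; whether ursus7 and taurus7 count against the same ESX pool is an assumption here (they are listed with "CPU", not "vCPU", so they may be separate physical boxes):

```python
# Rough capacity check for the three scenarios above.
# Assumption: ursus7/taurus7 share the ESX pool; flip the flag if not.
INCLUDE_BIG_PAIR = True

POOL_MEM_GB = 3 * 1000    # three ESX hosts, 1000 GB each
POOL_VCPUS = 3 * 80       # 2x20 double-threaded cores: 80 vCPUs per host

scenarios = {             # entries are (count, mem_gb, vcpus)
    "A": [(15, 100, 10), (3, 100, 10)],
    "B": [(30, 50, 5), (3, 100, 10)],
    "C": [(6, 200, 20), (3, 200, 20)],
}
big_pair = [(2, 256, 32)]  # ursus7 and taurus7

for name, vms in scenarios.items():
    if INCLUDE_BIG_PAIR:
        vms = vms + big_pair
    mem = sum(n * m for n, m, _ in vms)
    cpu = sum(n * c for n, _, c in vms)
    print(f"{name}: {mem} GB ({mem / POOL_MEM_GB:.0%} of memory), "
          f"{cpu} vCPU ({cpu / POOL_VCPUS:.0%} of vCPUs)")
```

With the big pair included, all three scenarios come out identical: 2312 GB (77% of pool memory) and 244 vCPUs (102%, i.e. slightly oversubscribed); without them, 1800 GB (60%) and 180 vCPUs (75%). So a 25% memory headroom holds in every scenario either way.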