Sender: LSF System
Subject: Job 14302: in cluster Done

Job was submitted from host by user in cluster at Mon Mar 2 09:35:59 2020
Job was executed on host(s) , in queue , as user in cluster at Mon Mar 2 10:05:38 2020
 was used as the home directory.
 was used as the working directory.
Started at Mon Mar 2 10:05:38 2020
Terminated at Mon Mar 2 22:51:59 2020
Results reported at Mon Mar 2 22:51:59 2020

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
#!/usr/bin/env bash
#### cryoSPARC cluster submission script template for LSF
## Available variables:
## /data/home/cryosparc_user/cryosparc2_worker/bin/cryosparcw run --project P75 --job J1193 --master_hostname amc-cryowks1 --master_command_core_port 39002 > /data/home/bonilla/Desktop/Projects/P75/J1193/job.log 2>&1 - the complete command string to run the job
## 12 - the number of CPUs needed
## 2 - the number of GPUs needed.
##     Note: the code will use this many GPUs starting from dev id 0;
##     the cluster scheduler or this script has the responsibility
##     of setting CUDA_VISIBLE_DEVICES so that the job code ends up
##     using the correct cluster-allocated GPUs.
## 32.0 - the amount of RAM needed in GB
## /data/home/bonilla/Desktop/Projects/P75/J1193 - absolute path to the job directory
## /data/home/bonilla/Desktop/Projects/P75 - absolute path to the project dir
## /data/home/bonilla/Desktop/Projects/P75/J1193/job.log - absolute path to the log file for the job
## /data/home/cryosparc_user/cryosparc2_worker/bin/cryosparcw - absolute path to the cryosparc worker command
## --project P75 --job J1193 --master_hostname amc-cryowks1 --master_command_core_port 39002 - arguments to be passed to cryosparcw run
## P75 - uid of the project
## J1193 - uid of the job
## Steve Bonilla Rosales - name of the user that created the job (may contain spaces)
## STEVE.BONILLAROSALES@CUANSCHUTZ.EDU - cryosparc username of the user that created the job (usually an email)
##
## What follows is a simple LSF script:

#BSUB -J cryosparc_P75_J1193
#BSUB -n 12
#BSUB -gpu "num=2:j_exclusive=yes"
#BSUB -R "rusage[mem=32.0]"
#BSUB -R "span[hosts=1]"
#BSUB -o /data/home/bonilla/Desktop/Projects/P75/J1193/
#BSUB -e /data/home/bonilla/Desktop/Projects/P75/J1193/
#BSUB -q cryosparc

#available_devs=""
#for devidx in $(seq 0 3);
#do
#    if [[ -z $(nvidia-smi -i $devidx --query-compute-apps=pid --format=csv,noheader) ]] ; then
#        if [[ -z "$available_devs" ]] ; then
#            available_devs=$devidx
#        else
#            available_devs=$available_devs,$devidx
#        fi
#    fi
#done
#export CUDA_VISIBLE_DEVICES=$available_devs

#USER Steve Bonilla Rosales
echo $LSB_JOBID >> /tmp/cuda_visible.txt
sleep 10
CUDA_VISIBLE_DEVICES=$( /opt/relion/scripts/cryosparc_cuda.sh $LSB_JOBID )
export CUDA_VISIBLE_DEVICES
export CUDA_HOME=/data/software/cuda
(... more ...)
------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time :                 130159.91 sec.
    Max Memory :               15 GB
    Average Memory :           8.50 GB
    Total Requested Memory :   32.00 GB
    Delta Memory :             17.00 GB
    Max Swap :                 -
    Max Processes :            14
    Max Threads :              73
    Run time :                 45981 sec.
    Turnaround time :          47760 sec.
The output (if any) follows:

2,3
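For reference, the free-GPU detection loop that appears commented out in the script above can be sketched in a self-contained, testable form. Here `query_gpu` is a hypothetical stub standing in for the real `nvidia-smi -i $devidx --query-compute-apps=pid --format=csv,noheader` call (which prints the PIDs of compute processes on a device, empty output meaning the device is idle); the stub assumes devices 0 and 1 are busy, purely for illustration.

```shell
#!/usr/bin/env bash
# Sketch of the commented-out device-selection loop, with the nvidia-smi
# query replaced by a stub so it can run on a machine without GPUs.

query_gpu() {
  # Hypothetical stand-in for:
  #   nvidia-smi -i "$1" --query-compute-apps=pid --format=csv,noheader
  # Prints a PID if device $1 is busy, nothing if it is free.
  case "$1" in
    0|1) echo "12345" ;;   # assume devices 0 and 1 are running jobs
    *)   ;;                # devices 2 and 3 are free
  esac
}

available_devs=""
for devidx in $(seq 0 3); do
  # A device with no compute processes is considered available.
  if [[ -z "$(query_gpu "$devidx")" ]]; then
    # Append with a comma separator only when the list is non-empty.
    available_devs="${available_devs:+$available_devs,}$devidx"
  fi
done

export CUDA_VISIBLE_DEVICES=$available_devs
echo "$CUDA_VISIBLE_DEVICES"   # with the stub above: 2,3
```

Note that the submitted job did not use this inline loop; it delegated the same decision to a site-local helper (`/opt/relion/scripts/cryosparc_cuda.sh`), whose "2,3" result appears as the job's stdout above.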