The current configuration of the computational system has a total of 1568 CPU cores and slightly more than 83,000 GPU cores, spread across a platform with an ultrafast InfiniBand network and 10 Gbit/s links between the master and slave nodes. With more than 300 TB of total storage space, 3248 GB of available memory, and a total annual energy consumption of 59,270 kWh, this system had an average occupancy rate above 112% in the last year, minimum and maximum wait times of around 1.45 h and 8.76 days respectively, and an accumulated total offline time of 11 days for the slave nodes and 4 h for the master nodes.

Users have access to the master nodes only; from there, they can submit whatever calculations they need for their work to the slave nodes using the queuing system implemented in the cluster. This system selects, from the several types of slave nodes, the ones suitable for the submitted job according to the job's resource demands, the availability of the slave nodes, and the user's total usage. No reservation is active: all online slave nodes are available to all users, and the priority is set by the following rule: 0.1 (FIFO) + 0.5 (academic degree) + 0.4 (least usage / user usage).
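The weighted priority rule above can be sketched as a small scoring function. This is a minimal illustration, not the cluster's actual scheduler code: the normalization of the FIFO rank and the academic-degree score, and the function and parameter names, are assumptions introduced here for clarity.

```python
def job_priority(fifo_rank, degree_score, least_usage, user_usage):
    """Illustrative priority rule: 0.1 (FIFO) + 0.5 (academic degree)
    + 0.4 (least usage / user usage). All scores are assumed to lie in [0, 1].

    fifo_rank    -- position in the queue (0 = submitted first); hypothetical input
    degree_score -- weight assigned to the user's academic degree; hypothetical input
    least_usage  -- smallest accumulated usage among all users
    user_usage   -- this user's accumulated usage
    """
    # Hypothetical normalization: earlier submissions get a FIFO score closer to 1.
    fifo_score = 1.0 / (1 + fifo_rank)
    # Users who have consumed less of the cluster get a higher usage score.
    usage_score = least_usage / user_usage if user_usage > 0 else 1.0
    return 0.1 * fifo_score + 0.5 * degree_score + 0.4 * usage_score
```

Under this sketch, a job from a lightly loaded user outranks an otherwise identical job from a heavy user, matching the "least usage / user usage" term in the stated rule.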