My configuration:
I ran a new test with llm_benchmark, in order to compare against my last working configuration.
I changed the NVIDIA card: there are now two NVIDIA cards with 8 GB each, and both are seen by the VM launched by Proxmox:
# nvidia-smi --list-gpus
GPU 0: Quadro M5000 (UUID: GPU-)
GPU 1: Quadro M4000 (UUID: GPU-)
# nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.15              Driver Version: 570.86.15      CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Quadro M5000                   Off |   00000000:00:10.0 Off |                  Off |
| 38%   37C    P8             13W / 150W  |       5MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  Quadro M4000                   Off |   00000000:00:11.0 Off |                  N/A |
| 46%   39C    P8             13W / 120W  |       5MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
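To double-check the passthrough from a script rather than by eye, nvidia-smi's CSV query mode works well. A minimal Python sketch, assuming only the standard nvidia-smi query flags (the list_gpus helper and the output formatting are my own illustration):

import subprocess

# List every GPU the VM can see, via nvidia-smi's machine-readable CSV mode.
def list_gpus():
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,memory.total,memory.used",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        index, name, total, used = [field.strip() for field in line.split(",")]
        print(f"GPU {index}: {name} ({used} used / {total} total)")

if __name__ == "__main__":
    list_gpus()

With both cards passed through correctly, this should print two lines, matching the table above.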
I found an LLM benchmarking tool: llm_benchmark (installed via pip). The test results are shown below.
I am currently in last place on https://llm.aidatatools.com/results-linux.php , with “llama3.1:8b”: “1.12”.
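For reference, installation is a single pip command; as far as I know the package is published on PyPI as llm-benchmark (verify on the project page if pip cannot find it):

pip install llm-benchmark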


llm_benchmark run
-------Linux----------
{'id': '0', 'name': 'Quadro 4000', 'driver': '390.157', 'gpu_memory_total': '1985.0 MB',
'gpu_memory_free': '1984.0 MB', 'gpu_memory_used': '1.0 MB', 'gpu_load': '0.0%',
'gpu_temperature': '60.0°C'}
Only one GPU card
Total memory size : 61.36 GB
cpu_info: Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz
gpu_info: Quadro 4000
os_version: Ubuntu 22.04.5 LTS
ollama_version: 0.5.7
----------
LLM models file path:/usr/local/lib/python3.10/dist-packages/llm_benchmark/data/benchmark_models_16gb_ram.yml
Checking and pulling the following LLM models
phi4:14b
qwen2:7b
gemma2:9b
mistral:7b
llama3.1:8b
llava:7b
llava:13b
----------
....
----------------------------------------
Sending the following data to a remote server
-------Linux----------
{'id': '0', 'name': 'Quadro 4000', 'driver': '390.157', 'gpu_memory_total': '1985.0 MB',
'gpu_memory_free': '1984.0 MB', 'gpu_memory_used': '1.0 MB', 'gpu_load': '0.0%',
'gpu_temperature': '61.0°C'}
Only one GPU card
-------Linux----------
{'id': '0', 'name': 'Quadro 4000', 'driver': '390.157', 'gpu_memory_total': '1985.0 MB',
'gpu_memory_free': '1984.0 MB', 'gpu_memory_used': '1.0 MB', 'gpu_load': '0.0%',
'gpu_temperature': '61.0°C'}
Only one GPU card
{
"mistral:7b": "1.40",
"llama3.1:8b": "1.12",
"phi4:14b": "0.76",
"qwen2:7b": "1.31",
"gemma2:9b": "1.03",
"llava:7b": "1.84",
"llava:13b": "0.73",
"uuid": "",
"ollama_version": "0.5.7"
}
----------
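To compare runs without reading the raw JSON by eye, the final block is easy to post-process. A small Python sketch, assuming the JSON above was saved to a hypothetical results.json and that the figures are average tokens per second (which is what the leaderboard appears to display):

import json

# "results.json" is a hypothetical file holding the JSON block printed above.
with open("results.json") as f:
    results = json.load(f)

# Keep only model -> throughput pairs; drop the metadata fields.
scores = {k: float(v) for k, v in results.items()
          if k not in ("uuid", "ollama_version")}

# Print models from fastest to slowest.
for model, tps in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{model:<12} {tps:.2f} tokens/s")

On the numbers above, llava:7b comes out fastest (1.84) and llava:13b slowest (0.73), with llama3.1:8b at the 1.12 that landed last on the leaderboard.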