<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>IA on Blog GoHugo de Fredô : Linux, Proxmox, IA, Trail, Course, Randonnée, Gravel, Ski de Randonnée</title>
    <link>https://move.cyber-neurones.org/tags/ia/</link>
    <description>Recent content in IA on Blog GoHugo de Fredô : Linux, Proxmox, IA, Trail, Course, Randonnée, Gravel, Ski de Randonnée</description>
    <generator>Hugo</generator>
    <language>fr</language>
    <lastBuildDate>Mon, 02 Jun 2025 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://move.cyber-neurones.org/tags/ia/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Proxmox / Ollama / Open WebUI: Adding AUTOMATIC1111</title>
      <link>https://move.cyber-neurones.org/post/2025/06/2025-06-02-proxmox-ollama-open-webui-ajout-de-automatic1111/</link>
      <pubDate>Mon, 02 Jun 2025 00:00:00 +0000</pubDate>
      <guid>https://move.cyber-neurones.org/post/2025/06/2025-06-02-proxmox-ollama-open-webui-ajout-de-automatic1111/</guid>
      <description>&lt;p&gt;My configuration:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Proxmox: &lt;strong&gt;8.4.1&lt;/strong&gt; (kernel: Linux &lt;strong&gt;6.8.12-10-pve&lt;/strong&gt; (2025-04-18T07:39Z))&lt;/li&gt;&#xA;&lt;li&gt;Ubuntu VM: Ubuntu &lt;strong&gt;22.04.5&lt;/strong&gt; LTS (kernel: &lt;strong&gt;5.15.0-140&lt;/strong&gt;-generic)&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Ollama: &lt;strong&gt;0.9.0&lt;/strong&gt; (on &lt;a href=&#34;https://www.cyber-neurones.org/2025/02/proxmox-ollama-llm_benchmark-test-n2/&#34; title=&#34;14:21&#34;&gt;10/02/2025&lt;/a&gt;: &lt;a href=&#34;https://www.cyber-neurones.org/2025/02/proxmox-ollama-llm_benchmark-test-n2/&#34;&gt;0.5.7&lt;/a&gt;): &lt;a href=&#34;https://ollama.com/&#34;&gt;https://ollama.com/&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;Python: &lt;strong&gt;3.10&lt;/strong&gt; (for AUTOMATIC1111) &amp;amp; &lt;strong&gt;3.11&lt;/strong&gt; (for Ollama)&lt;/li&gt;&#xA;&lt;li&gt;Open WebUI: &lt;strong&gt;0.6.13&lt;/strong&gt;: &lt;a href=&#34;https://github.com/open-webui/open-webui&#34;&gt;https://github.com/open-webui/open-webui&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;NVIDIA driver: &lt;strong&gt;575.51.03&lt;/strong&gt; (on &lt;a href=&#34;https://www.cyber-neurones.org/2025/02/proxmox-ollama-llm_benchmark-test-n2/&#34; title=&#34;14:21&#34;&gt;10/02/2025&lt;/a&gt;: &lt;a href=&#34;https://www.cyber-neurones.org/2025/02/proxmox-ollama-llm_benchmark-test-n2/&#34;&gt;570.86.15&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;CUDA: &lt;strong&gt;12.8.93&lt;/strong&gt; (&lt;em&gt;nvcc --version&lt;/em&gt;)&lt;/li&gt;&#xA;&lt;li&gt;AUTOMATIC1111: &lt;strong&gt;1.10.1&lt;/strong&gt;: &lt;a href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui&#34;&gt;https://github.com/AUTOMATIC1111/stable-diffusion-webui&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;I ran a new test with llm_benchmark to compare against my last working configuration; a minimal sketch of the AUTOMATIC1111 hookup follows below.&lt;/p&gt;</description>
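      <p>To make the new AUTOMATIC1111 install reachable from Open WebUI, here is a minimal sketch, assuming the default install directory and the stock port 7860 (the directory, host, and prompt below are hypothetical):</p>
      <pre><code># Launch AUTOMATIC1111 with its REST API enabled so Open WebUI can call it
cd ~/stable-diffusion-webui
./webui.sh --api --listen

# Smoke-test the txt2img endpoint from another shell
curl -s http://127.0.0.1:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a test image", "steps": 20}'</code></pre>
      <p>The same base URL (http://127.0.0.1:7860) then goes into Open WebUI's image-generation settings with AUTOMATIC1111 selected as the engine.</p>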
    </item>
    <item>
      <title>Proxmox/Ollama: llm_benchmark (Test #2)</title>
      <link>https://move.cyber-neurones.org/post/2025/02/2025-02-10-proxmox-ollama-llm_benchmark-test-n2/</link>
      <pubDate>Mon, 10 Feb 2025 00:00:00 +0000</pubDate>
      <guid>https://move.cyber-neurones.org/post/2025/02/2025-02-10-proxmox-ollama-llm_benchmark-test-n2/</guid>
      <description>&lt;p&gt;I changed the NVIDIA card: there are now two NVIDIA cards with 8 GB each, and both are visible to the VM that Proxmox launches (a passthrough sketch follows this excerpt):&lt;/p&gt;&#xA;&#xA;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# nvidia-smi --list-gpus&#xA;GPU 0: Quadro M5000 (UUID: GPU-)&#xA;GPU 1: Quadro M4000 (UUID: GPU-)&#xA;# nvidia-smi      &#xA;+-----------------------------------------------------------------------------------------+&#xA;| NVIDIA-SMI 570.86.15              Driver Version: 570.86.15      CUDA Version: 12.8     |&#xA;|-----------------------------------------+------------------------+----------------------+&#xA;| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |&#xA;| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |&#xA;|                                         |                        |               MIG M. |&#xA;|=========================================+========================+======================|&#xA;|   0  Quadro M5000                   Off |   00000000:00:10.0 Off |                  Off |&#xA;| 38%   37C    P8             13W /  150W |       5MiB /   8192MiB |      0%      Default |&#xA;|                                         |                        |                  N/A |&#xA;+-----------------------------------------+------------------------+----------------------+&#xA;|   1  Quadro M4000                   Off |   00000000:00:11.0 Off |                  N/A |&#xA;| 46%   39C    P8             13W /  120W |       5MiB /   8192MiB |      0%      Default |&#xA;|                                         |                        |                  N/A |&#xA;+-----------------------------------------+------------------------+----------------------+&#xA;                                                                                         &#xA;+-----------------------------------------------------------------------------------------+&#xA;| Processes:                                                                              |&#xA;|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |&#xA;|        ID   ID                                                               Usage      |&#xA;|=========================================================================================|&#xA;|  No running processes found                                                             |&#xA;+-----------------------------------------------------------------------------------------+&lt;/code&gt;&lt;/pre&gt;&#xA;&lt;p&gt;The test results are as follows:&lt;/p&gt;</description>
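      <p>For context, a minimal sketch of how both cards can be passed through to the VM from the Proxmox host; the VM id (100) and the host-side PCI addresses are hypothetical, and IOMMU plus a q35 machine type are assumed to be configured already:</p>
      <pre><code># On the Proxmox host: find the PCI addresses of the two NVIDIA cards
lspci -nn | grep -i nvidia

# Attach each card to the VM (VM id and addresses are placeholders)
qm set 100 -hostpci0 0000:01:00.0,pcie=1
qm set 100 -hostpci1 0000:02:00.0,pcie=1</code></pre>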
    </item>
    <item>
      <title>Proxmox/Ollama: llm_benchmark</title>
      <link>https://move.cyber-neurones.org/post/2025/01/2025-01-25-proxmox-ollama-llm_benchmark/</link>
      <pubDate>Sat, 25 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://move.cyber-neurones.org/post/2025/01/2025-01-25-proxmox-ollama-llm_benchmark/</guid>
      <description>&lt;p&gt;I found an LLM benchmarking tool: llm_benchmark (installed via pip; an install sketch follows this excerpt):&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://pypi.org/project/llm-benchmark/&#34;&gt;https://pypi.org/project/llm-benchmark/&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://github.com/aidatatools/ollama-benchmark&#34;&gt;https://github.com/aidatatools/ollama-benchmark&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://llm.aidatatools.com/&#34;&gt;https://llm.aidatatools.com/&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;I am in last place at &lt;a href=&#34;https://llm.aidatatools.com/results-linux.php&#34;&gt;https://llm.aidatatools.com/results-linux.php&lt;/a&gt;, with &amp;ldquo;llama3.1:8b&amp;rdquo;: &amp;ldquo;1.12&amp;rdquo;.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img src=&#34;images/2025-01-24-21-27-37-300x125.png&#34; alt=&#34;&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img src=&#34;images/2025-01-25-12-07-06-300x18.png&#34; alt=&#34;&#34;&gt;&lt;/p&gt;&#xA;&#xA;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt; llm_benchmark run&#xA;-------Linux----------&#xA;{&amp;#39;id&amp;#39;: &amp;#39;0&amp;#39;, &amp;#39;name&amp;#39;: &amp;#39;Quadro 4000&amp;#39;, &amp;#39;driver&amp;#39;: &amp;#39;390.157&amp;#39;, &amp;#39;gpu_memory_total&amp;#39;: &amp;#39;1985.0 MB&amp;#39;,&#xA;&amp;#39;gpu_memory_free&amp;#39;: &amp;#39;1984.0 MB&amp;#39;, &amp;#39;gpu_memory_used&amp;#39;: &amp;#39;1.0 MB&amp;#39;, &amp;#39;gpu_load&amp;#39;: &amp;#39;0.0%&amp;#39;, &#xA;&amp;#39;gpu_temperature&amp;#39;: &amp;#39;60.0°C&amp;#39;}&#xA;Only one GPU card&#xA;Total memory size : 61.36 GB&#xA;cpu_info: Intel(R) Xeon(R) CPU E5-2450 v2 @ 2.50GHz&#xA;gpu_info: Quadro 4000&#xA;os_version: Ubuntu 22.04.5 LTS&#xA;ollama_version: 0.5.7&#xA;----------&#xA;LLM models file path：/usr/local/lib/python3.10/dist-packages/llm_benchmark/data/benchmark_models_16gb_ram.yml&#xA;Checking and pulling the following LLM models&#xA;phi4:14b&#xA;qwen2:7b&#xA;gemma2:9b&#xA;mistral:7b&#xA;llama3.1:8b&#xA;llava:7b&#xA;llava:13b&#xA;----------&#xA;....&#xA;----------------------------------------&#xA;Sending the following data to a remote server&#xA;-------Linux----------&#xA;{&amp;#39;id&amp;#39;: &amp;#39;0&amp;#39;, &amp;#39;name&amp;#39;: &amp;#39;Quadro 4000&amp;#39;, &amp;#39;driver&amp;#39;: &amp;#39;390.157&amp;#39;, &amp;#39;gpu_memory_total&amp;#39;: &amp;#39;1985.0 MB&amp;#39;,&#xA; &amp;#39;gpu_memory_free&amp;#39;: &amp;#39;1984.0 MB&amp;#39;, &amp;#39;gpu_memory_used&amp;#39;: &amp;#39;1.0 MB&amp;#39;, &amp;#39;gpu_load&amp;#39;: &amp;#39;0.0%&amp;#39;, &#xA;&amp;#39;gpu_temperature&amp;#39;: &amp;#39;61.0°C&amp;#39;}&#xA;Only one GPU card&#xA;-------Linux----------&#xA;{&amp;#39;id&amp;#39;: &amp;#39;0&amp;#39;, &amp;#39;name&amp;#39;: &amp;#39;Quadro 4000&amp;#39;, &amp;#39;driver&amp;#39;: &amp;#39;390.157&amp;#39;, &amp;#39;gpu_memory_total&amp;#39;: &amp;#39;1985.0 MB&amp;#39;,&#xA; &amp;#39;gpu_memory_free&amp;#39;: &amp;#39;1984.0 MB&amp;#39;, &amp;#39;gpu_memory_used&amp;#39;: &amp;#39;1.0 MB&amp;#39;, &amp;#39;gpu_load&amp;#39;: &amp;#39;0.0%&amp;#39;,&#xA; &amp;#39;gpu_temperature&amp;#39;: &amp;#39;61.0°C&amp;#39;}&#xA;Only one GPU card&#xA;{&#xA;    &amp;#34;mistral:7b&amp;#34;: &amp;#34;1.40&amp;#34;,&#xA;    &amp;#34;llama3.1:8b&amp;#34;: &amp;#34;1.12&amp;#34;,&#xA;    &amp;#34;phi4:14b&amp;#34;: &amp;#34;0.76&amp;#34;,&#xA;    &amp;#34;qwen2:7b&amp;#34;: &amp;#34;1.31&amp;#34;,&#xA;    &amp;#34;gemma2:9b&amp;#34;: &amp;#34;1.03&amp;#34;,&#xA;    &amp;#34;llava:7b&amp;#34;: &amp;#34;1.84&amp;#34;,&#xA;    &amp;#34;llava:13b&amp;#34;: &amp;#34;0.73&amp;#34;,&#xA;    &amp;#34;uuid&amp;#34;: &amp;#34;&amp;#34;,&#xA;    &amp;#34;ollama_version&amp;#34;: &amp;#34;0.5.7&amp;#34;&#xA;}&#xA;----------&lt;/code&gt;&lt;/pre&gt;</description>
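      <p>For reference, the install and run amount to two commands; a minimal sketch (note the PyPI package name uses a hyphen while the CLI uses an underscore):</p>
      <pre><code># Install the benchmark tool from PyPI
pip install llm-benchmark

# Run it against the local Ollama instance; as the transcript above shows,
# the results are also uploaded to the aidatatools leaderboard
llm_benchmark run</code></pre>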
    </item>
  </channel>
</rss>
