  • Step 0: Install Joplin and activate the REST API (https://joplin.cozic.net/api/).

    Step 1: Install staticmap with pip (for more information, see https://github.com/komoot/staticmap):

    $ pip install staticmap
    Collecting staticmap
      Downloading https://files.pythonhosted.org/packages/f9/9f/5a3843533eab037cba031486175c4db1b214614404a29516208ff228dead/staticmap-0.5.4.tar.gz
    Collecting Pillow (from staticmap)
      Downloading https://files.pythonhosted.org/packages/c9/ed/27cc92e99b9ccaa0985a66133baeea7e8a3371d3c04cfa353aaa3b81aac1/Pillow-5.4.1-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (3.7MB)
        100% |████████████████████████████████| 3.7MB 6.3MB/s 
    Requirement already satisfied: requests in /usr/local/lib/python3.7/site-packages (from staticmap) (2.21.0)
    Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (3.0.4)
    Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (2.8)
    Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (1.24.1)
    Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (2018.11.29)
    Building wheels for collected packages: staticmap
      Building wheel for staticmap (setup.py) ... done
      Stored in directory: /Users/..../Library/Caches/pip/wheels/fe/a6/a5/2acceb72471d85bd0498973aabd611e6ff1cdd48796790f047
    Successfully built staticmap
    Installing collected packages: Pillow, staticmap
    Successfully installed Pillow-5.4.1 staticmap-0.5.4
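
    Since the post's own script is not reproduced here, a rough sketch of the idea — read a note's latitude/longitude from the Joplin REST API and hand it to staticmap — might look like the following. The helper names and the `fields` query parameter usage are my assumptions, not the post's code:

```python
import json
import urllib.request

JOPLIN = "http://127.0.0.1:41184"
TOKEN = "put-your-token-here"  # same kind of token as in the scripts below

def note_coords(note):
    """Return (lon, lat) from a Joplin note dict, or None when no location is set."""
    lat = float(note.get("latitude") or 0)
    lng = float(note.get("longitude") or 0)
    return None if (lat, lng) == (0.0, 0.0) else (lng, lat)

def fetch_note(note_id):
    """GET one note with its geolocation fields (assumes the REST API is running)."""
    url = f"{JOPLIN}/notes/{note_id}?token={TOKEN}&fields=id,title,latitude,longitude"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

def render_note_map(note, out_png, zoom=12):
    """Render a PNG centred on the note's location; needs network access for map tiles."""
    from staticmap import StaticMap, CircleMarker  # third-party: pip install staticmap
    coords = note_coords(note)
    if coords is None:
        return False
    m = StaticMap(600, 400)
    m.add_marker(CircleMarker(coords, 'red', 12))  # staticmap takes (lon, lat)
    m.render(zoom=zoom).save(out_png)
    return True
```

    Only `render_note_map` touches staticmap, so the coordinate handling can be tested without network access.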

    The source code:

    joplin maps python rest-api Created Wed, 13 Feb 2019 00:00:00 +0000
  • Install JOPLIN (https://joplin.cozic.net) and start the REST API. (Easy.)

    Step 1: Put this script in a folder.

    Step 2: Edit the script and add your token.

    Step 3: Run the script.

    The script:

    #
    # Version 1 
    # for Python 3
    # 
    #   ARIAS Frederic
    #   Sorry ... Python is difficult for me :)
    #
    
    import feedparser
    from os import listdir
    from pathlib import Path
    import glob
    import csv
    import locale
    import os
    import time
    from datetime import datetime
    import json
    import requests
    
    #Token
    ip = "127.0.0.1"
    port = "41184"
    token = "Put your token here"
    
    nb_import = 0
    headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
    
    url_notes = (
        "http://"+ip+":"+port+"/notes?"
        "token="+token
    )
    url_folders = (
        "http://"+ip+":"+port+"/folders?"
        "token="+token
    )
    url_tags = (
        "http://"+ip+":"+port+"/tags?"
        "token="+token
    )
    # note: Joplin spells this endpoint "resources" (a single "s")
    url_ressources = (
        "http://"+ip+":"+port+"/resources?"
        "token="+token
    )
    
    #Init
    Wordpress_UID = "12345678901234567801234567890123"
    UID = {}
    
    payload = {
        "id":Wordpress_UID,
        "title":"Wordpress Import"
    }
    
    try:
        resp = requests.post(url_folders, data=json.dumps(payload, separators=(',',':')), headers=headers)
        resp.raise_for_status()
        resp_dict = resp.json()
        print(resp_dict)
        print("My ID")
        print(resp_dict['id'])
        Wordpress_UID_real = resp_dict['id']
        save = str(resp_dict['id'])
        UID[Wordpress_UID]= save
    except requests.exceptions.HTTPError as e:
        print("Bad HTTP status code:", e)
    except requests.exceptions.RequestException as e:
        print("Network error:", e)
    
    feed = feedparser.parse("https://www.cyber-neurones.org/feed/")
    
    feed_title = feed['feed']['title']
    feed_entries = feed.entries
    
    numero = 0
    nb_entries = 1
    nb_metadata_import = 1
    
    while nb_entries > 0 :
      numero += 1  # WordPress feed pages go /feed/?paged=1, 2, 3, ... one at a time
      print ("----- Page ",numero,"-------")
      url = "https://www.cyber-neurones.org/feed/?paged="+str(numero)
      feed = feedparser.parse(url)
      feed_title = feed['feed']['title']
      feed_entries = feed.entries
      nb_entries = len(feed['entries'])
      for entry in feed.entries:
         nb_metadata_import += 1
         my_title = entry.title
         my_link = entry.link
         article_published_at = entry.published # Unicode string
         article_published_at_parsed = entry.published_parsed # Time object
         article_author = entry.author
         import calendar  # published_parsed is UTC; timegm avoids mktime's local-time offset
         timestamp = calendar.timegm(entry.published_parsed)*1000
         print("Published at "+article_published_at)
         my_body = entry.description
         payload_note = {
             "parent_id":Wordpress_UID_real,
             "title":my_title,
             "source":"Wordpress",
             "source_url":my_link,
             "order":nb_metadata_import,
             "user_created_time":timestamp,
             "user_updated_time":timestamp,
             "author":article_author,
             "body_html":my_body
             }
         payload_note_put = {
             "source":"Wordpress",
             "order":nb_metadata_import,
             "source_url":my_link,
             "user_created_time":timestamp,
             "user_updated_time":timestamp,
             "author":article_author
             }
    
         try:
             resp = requests.post(url_notes, json=payload_note)
             resp.raise_for_status()
             resp_dict = resp.json()
             print(resp_dict)
             print(resp_dict['id'])
             myuid= resp_dict['id']
         except requests.exceptions.HTTPError as e:
             print("Bad HTTP status code:", e)
         except requests.exceptions.RequestException as e:
             print("Network error:", e)
    
         url_notes_put = (
             "http://"+ip+":"+port+"/notes/"+myuid+"?"
             "token="+token
         )
         try:
             resp = requests.put(url_notes_put, json=payload_note_put)
             resp.raise_for_status()
             resp_dict = resp.json()
             print(resp_dict)
         except requests.exceptions.HTTPError as e:
             print("Bad HTTP status code:", e)
         except requests.exceptions.RequestException as e:
             print("Network error:", e)
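
    Joplin's user_created_time / user_updated_time fields take Unix epochs in milliseconds; the conversion the script does from the feed's publication date can be checked in isolation (the sample date string here is arbitrary):

```python
import calendar
from email.utils import parsedate_to_datetime

rfc822 = "Wed, 13 Feb 2019 00:00:00 +0000"  # arbitrary RFC-822 date, like a feed's
dt = parsedate_to_datetime(rfc822)          # timezone-aware datetime (UTC here)
joplin_ms = int(dt.timestamp() * 1000)      # milliseconds, as Joplin expects

# equivalent via a UTC struct_time, avoiding mktime's local-time assumption:
joplin_ms2 = calendar.timegm(dt.utctimetuple()) * 1000
```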
    import joplin wordpress Created Tue, 12 Feb 2019 00:00:00 +0000
  • Link to the Diaro app: https://diaroapp.com .

    But far too much tracking!

    Link to JOPLIN: https://joplin.cozic.net/ , and to the REST API: https://joplin.cozic.net/api/

    Step 1: Add  before  on the first line of the file DiaroBackup.xml … it's mandatory!
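
    Before running the import, it's worth checking that the patched DiaroBackup.xml parses at all. A minimal sketch — the sample below is a made-up skeleton mimicking the `<table name=…>`/`<r>` layout the script expects, not real Diaro output:

```python
import xml.etree.ElementTree as etree

# Made-up minimal skeleton mimicking DiaroBackup.xml's layout.
sample = """<data>
  <table name='diaro_folders'>
    <r><uid>f1</uid><title>Journal</title></r>
  </table>
  <table name='diaro_entries'>
    <r><uid>e1</uid><folder_uid>f1</folder_uid><title>First day</title><text>Hello</text></r>
  </table>
</data>"""

root = etree.fromstring(sample)
# one dict of rows per table, each row being {tag: text}
tables = {
    t.attrib.get('name'): [{c.tag: c.text for c in r} for r in t.iter('r')]
    for t in root.iter('table')
}
```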

    My note for REST API :

    1. Not possible to choose the ID of a folder.
    2. Not possible to choose the ID of a tag.
    3. Not possible to do a PUT on a note to append [](:/ID_RESOURCE) to the end of its text. The syntax would be: PUT /ressources/ID_RESSOURCE/notes/ID_NOTE?token=… It would be simpler….
    4. Not possible to add tag IDs instead of text on notes (POST).
    5. Not possible to create a note with “user_created_time” (POST); it's mandatory to do a PUT afterwards.
    6. Not possible to change “user_updated_time” with a PUT.

    After installing python3 (it's easy), run this script — note: put your token in the script first.
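
    Notes 5 and 6 above boil down to a create-then-update dance. A sketch of that pattern (the helper is mine, not the post's; it assumes a Joplin instance with its Web Clipper service on the usual port):

```python
import requests

BASE = "http://127.0.0.1:41184"
TOKEN = "put-your-token-here"  # your Joplin Web Clipper token

def create_backdated_note(title, body, created_ms):
    """POST cannot set user_created_time (note 5 above), so create the
    note first, then PUT the millisecond timestamp onto it."""
    resp = requests.post(f"{BASE}/notes?token={TOKEN}",
                         json={"title": title, "body": body})
    resp.raise_for_status()
    note_id = resp.json()["id"]
    resp = requests.put(f"{BASE}/notes/{note_id}?token={TOKEN}",
                        json={"user_created_time": created_ms})
    resp.raise_for_status()
    return note_id
```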

    diaro joplin Created Mon, 11 Feb 2019 00:00:00 +0000
  • Here is what I did to install pip on Mac OS:

    $ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 1662k  100 1662k    0     0   560k      0  0:00:02  0:00:02 --:--:-- 560k
    
    $ python3 get-pip.py
    Collecting pip
      Downloading https://files.pythonhosted.org/packages/d7/41/34dd96bd33958e52cb4da2f1bf0818e396514fd4f4725a79199564cd0c20/pip-19.0.2-py2.py3-none-any.whl (1.4MB)
        100% |████████████████████████████████| 1.4MB 154kB/s 
    Installing collected packages: pip
      Found existing installation: pip 18.1
        Uninstalling pip-18.1:
          Successfully uninstalled pip-18.1
    Successfully installed pip-19.0.2
    
    $ pip install feedparser
    Collecting feedparser
      Downloading https://files.pythonhosted.org/packages/91/d8/7d37fec71ff7c9dbcdd80d2b48bcdd86d6af502156fc93846fb0102cb2c4/feedparser-5.2.1.tar.bz2 (192kB)
        100% |████████████████████████████████| 194kB 500kB/s 
    Building wheels for collected packages: feedparser
      Building wheel for feedparser (setup.py) ... done
      Stored in directory: ....
    Successfully built feedparser
    Installing collected packages: feedparser
    Successfully installed feedparser-5.2.1
    pip python Created Mon, 11 Feb 2019 00:00:00 +0000
  • (See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )

    Now, with release V3, it's possible to import the data… The last remaining issue is with user_created_time and user_updated_time.

    The REST API is very good (https://joplin.cozic.net/api/), but, if it's not too complex, here is my wish list:

    1. Add the possibility to choose the ID of a folder.
    2. Add the possibility to choose the ID of a tag.
    3. Add the possibility to do a PUT on a note to append [](:/ID_RESOURCE) to the end of its text. The syntax: PUT /ressources/ID_RESSOURCE/notes/ID_NOTE?token=…
    4. Add the possibility to use tag IDs instead of text on notes.

    My last source :

    import joplin Created Sat, 09 Feb 2019 00:00:00 +0000
  • (See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )

    I thought I had found my bug… I thought that the space before the variable was why the values were not being taken into account.

    With: requests.post(url_folders, json=payload) we get a . i.e. a 20

    With: requests.post(url_folders, data=json.dumps(payload, separators=(',',':')), headers=headers)

    Given that: headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}

    The space is gone, but the numeric values are still not taken into account…
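
    The difference between the two requests.post calls above is only in the serialized body, which json.dumps makes easy to see directly:

```python
import json

payload = {"id": "abc", "order": 2}
default_body = json.dumps(payload)                         # ", " and ": " separators
compact_body = json.dumps(payload, separators=(',', ':'))  # no spaces at all
print(default_body)  # {"id": "abc", "order": 2}
print(compact_body)  # {"id":"abc","order":2}
```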

    python Created Sat, 09 Feb 2019 00:00:00 +0000
  • (See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )

    I have an issue with resources (the link between a resource and a note)… error 404. The logs are in: .config/joplin-desktop/log-clipper.txt

    ....: "Request: PUT /ressources/71dd2cba2af54c4ebb53fb7fd8d0543b/notes/cbbc6076b2ac321ccae1f036a2fe6659?token=...."
    ....: "Error: Not Found
    Error: Not Found
        at Api.route (/Applications/Joplin.app/Contents/Resources/app/lib/services/rest/Api.js:103:41)
        at execRequest (/Applications/Joplin.app/Contents/Resources/app/lib/ClipperServer.js:147:39)
        at IncomingMessage.request.on (/Applications/Joplin.app/Contents/Resources/app/lib/ClipperServer.js:185:8)
        at emitNone (events.js:105:13)
        at IncomingMessage.emit (events.js:207:7)
        at endReadableNT (_stream_readable.js:1045:12)
        at _combinedTickCallback (internal/process/next_tick.js:138:11)
        at process._tickCallback (internal/process/next_tick.js:180:9)"
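
    For reference, the curl upload the script below eventually shells out to (`curl -F 'data=@file' -F 'props={...}'`) can also be expressed with requests. A common pitfall — visible in the script's commented-out attempts — is setting the Content-type header by hand, which loses the multipart boundary requests would otherwise generate. A sketch that inspects the prepared request instead of hitting a live server (not a verified fix for the 404 itself):

```python
import io
import json
import requests

fh = io.BytesIO(b"fake image bytes")  # stand-in for open('photo.jpg', 'rb')
req = requests.Request(
    "POST",
    "http://127.0.0.1:41184/resources",  # Joplin spells the endpoint "resources"
    params={"token": "TOKEN"},
    files={"data": ("photo.jpg", fh)},
    data={"props": json.dumps({"title": "photo.jpg"})},  # props as a JSON string
)
prepared = req.prepare()  # requests sets Content-Type with a boundary by itself
```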

    My last code :

    #
    # Version 2 
    # for Python 3
    # 
    #   ARIAS Frederic
    #   Sorry ... Python is difficult for me :)
    #
    
    #from lxml import etree
    import xml.etree.ElementTree as etree
    from time import gmtime, strftime
    import time
    import json
    import requests
    import os
    
    print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))  # log the start time
    start = time.time()
    
    #Token
    ip = "127.0.0.1"
    port = "41184"
    token = "ABCD123ABCD123ABCD123ABCD123ABCD123"
    
    url_notes = (
        "http://"+ip+":"+port+"/notes?"
        "token="+token
    )
    url_folders = (
        "http://"+ip+":"+port+"/folders?"
        "token="+token
    )
    url_tags = (
        "http://"+ip+":"+port+"/tags?"
        "token="+token
    )
    # note: Joplin spells this endpoint "resources" (a single "s")
    url_ressources = (
        "http://"+ip+":"+port+"/resources?"
        "token="+token
    )
    
    #Init
    Diaro_UID = "12345678901234567801234567890123"
    Lat = {}
    Lng = {}
    UID = {} 
    TAGS = {}
    Lat[""] = ""
    Lng[""] = ""
    
    payload = {
        "id": Diaro_UID,
        "title": "Diaro Import"
    }
    
    try:
        resp = requests.post(url_folders, json=payload)
        #time.sleep(1)
        resp.raise_for_status()
        resp_dict = resp.json()
        print(resp_dict)
        print("My ID")
        print(resp_dict['id'])
        Diaro_UID_real = resp_dict['id']
        save = str(resp_dict['id'])
        UID[Diaro_UID]= save
    except requests.exceptions.HTTPError as e:
        print("Bad HTTP status code:", e)
    except requests.exceptions.RequestException as e:
        print("Network error:", e)
    
    print("Start : Parse Table")
    tree = etree.parse("./DiaroBackup.xml")
    for table in tree.iter('table'):
        name = table.attrib.get('name')
        print(name)
        myorder = 1
        for r in table.iter('r'):
             myuid = ""
             mytitle = ""
             mylat = ""
             mylng = ""
             mytags = ""
             mydate = ""
             mydate_ms = 0
             mytext = ""
             myfilename = ""
             myfolder_uid = Diaro_UID
             mylocation_uid = ""
             myprimary_photo_uid = ""
             myentry_uid = ""
             myorder += 1
             for subelem in r:
                 print(subelem.tag)
                 if (subelem.tag == 'uid'):
                     myuid = subelem.text
                     print ("myuid",myuid)
                 if (subelem.tag == 'entry_uid'):
                     myentry_uid = subelem.text
                     print ("myentry_uid",myentry_uid)
                 if (subelem.tag == 'primary_photo_uid'):
                     myprimary_photo_uid = subelem.text
                     print ("myprimary_photo_uid",myprimary_photo_uid)
                 if (subelem.tag == 'folder_uid'):
                     myfolder_uid = subelem.text
                     print ("myfolder_uid",myfolder_uid)
                 if (subelem.tag == 'location_uid'):
                     mylocation_uid = subelem.text
                     print ("mylocation_uid",mylocation_uid)
                 if (subelem.tag == 'date'):
                     mydate = subelem.text
                     mydate_ms = int(mydate)
                     print ("mydate",mydate," in ms",mydate_ms)
                 if (subelem.tag == 'title'):
                     mytitle = subelem.text
                     print ("mytitle",mytitle)
                     #if type(mytitle) == str:
                        #mytitle = mytitle.encode('utf8')
                 if (subelem.tag == 'lat'):
                     mylat = subelem.text
                     print ("mylat",mylat)
                 if (subelem.tag == 'lng'):
                     mylng = subelem.text
                     print ("mylng",mylng)
                 if (subelem.tag == 'tags'):
                     mytags = subelem.text
                     if mytags:
                    mytags = mytags[1:]  # drop the leading separator character
                     print ("mytags",mytags)
                 if (subelem.tag == 'text'):
                     mytext = subelem.text
                     print ("mytext",mytext)
                     #if type(mytext) == str:
                           #mytext = mytext.encode('utf8')
                 if (subelem.tag == 'filename'):
                     myfilename = subelem.text
                     print ("myfilename",myfilename)
                     
         if (name == 'diaro_folders'):
            payload_folder = {
                "id": myuid,
                "title": mytitle,
                "parent_id": Diaro_UID_real
            }
                print(payload_folder)
                try:
                    resp = requests.post(url_folders, json=payload_folder)
                    #time.sleep(1)
                    resp.raise_for_status()
                    resp_dict = resp.json()
                    print(resp_dict)
                    print(resp_dict['id'])
                    save = str(resp_dict['id']) 
                    UID[myuid]= save
                except requests.exceptions.HTTPError as e:
                    print("Bad HTTP status code:", e)
                except requests.exceptions.RequestException as e:
                    print("Network error:", e)
    
             if (name == 'diaro_tags'):
                payload_tags = {
                    "id": myuid,
                    "title": mytitle
                }
                try:
                    resp = requests.post(url_tags, json=payload_tags)
                    #time.sleep(1)
                    resp.raise_for_status()
                    resp_dict = resp.json()
                    print(resp_dict)
                    print(resp_dict['id'])
                    UID[myuid]= resp_dict['id']
                    TAGS[myuid] = mytitle
                except requests.exceptions.HTTPError as e:
                    print("Bad HTTP status code:", e)
                except requests.exceptions.RequestException as e:
                    print("Network error:", e)
    
             if (name == 'diaro_attachments'):
                payload_ressource = {
                    "id": myuid
                }
                filename = "./media/photo/"+myfilename
                files = {'document': open(filename, 'rb')}
                files2 = {'data': open(filename, 'rb')}
                files3 = {'data': open(filename, 'rb'), 'props': payload_ressource}
                data_ressource = {
                     "title": myfilename
                }
                multiple_files = [
                    ('data', (myfilename, open(filename, 'rb'))),
                    ('props', data_ressource)]
                headers = {'Content-type': 'multipart/form-data'}  # caution: set by hand, this loses the multipart boundary requests would add
                print("Push : "+filename)
                #print os.path.isfile(filename)
                print("----------0-----------")
                #try:
                   #resp = requests.post(url_ressources, files=filename, json=payload_ressource)
                   #resp = requests.post(url_ressources, files=files, json=payload_ressource, headers=headers) 
                   #resp = requests.post(url_ressources, files=files2, headers=headers)
                   #resp = requests.post(url_ressources, files=files2, headers=headers)
                   #resp = requests.post(url_ressources,files = {'data' : (myfilename, open(filename, 'rb'), 'image/jpg')}, data = {'id' : myuid}, headers=headers)
                   #resp = requests.post(url_ressources,files = files2, data= data_ressource, headers=headers)
                   #resp = requests.post(url_ressources,files = multiple_files, headers=headers)
                   #resp = requests.post(url_ressources,files = multiple_files)
                   #resp.text
                   #time.sleep(1)
                   #resp.raise_for_status()
                   #if (resp.status_code == requests.codes.ok):
                   #    resp_dict = resp.json()
                   #    print(resp_dict)
                   #    print(resp_dict['id'])
                   #    UID[myuid]= resp_dict['id']
                #except requests.exceptions.HTTPError as e:
                   #print("Bad HTTP status code:", e)
                   #UID[myuid]=""
                   #print("----------1-----------")
                #except requests.exceptions.RequestException as e:
                   #print("Network error:", e)
                   #UID[myuid]=""
                   #print("----------2-----------")
    
                cmd = "curl -F 'data=@"+filename+"' -F 'props={\"title\":\""+myfilename+"\"}' http://"+ip+":"+port+"/resources?token="+token
                resp = os.popen(cmd).read()
                respj = json.loads(resp)
                #resp_dict = respj.json() 
                print(respj['id'])
                UID[myuid]= respj['id']
    
                print("Link : ",myuid," => ",myentry_uid," // ",UID[myuid]+" => ",UID[myentry_uid])
                time.sleep(1)
    
                cmd = "curl -X PUT http://"+ip+":"+port+"/ressources/"+UID[myuid]+"/notes/"+UID[myentry_uid]+"?token="+token  # this is the PUT that returns 404 (see the log above)
                resp = os.popen(cmd).read()
                print (resp)
                #url_link = (
                #   "http://"+ip+":"+port+"/ressources/"+UID[myuid]+"/notes/"+UID[myentry_uid]+"?"
                #   "token="+token
                #   )
                #try:
                #  resp = requests.post(url_link)
                #   #time.sleep(1)
                #   resp.raise_for_status()
                #   resp_dict = resp.json()
                #   print(resp_dict)
                #   print(resp_dict['id'])
                #   UID[myuid]= resp_dict['id']
                #except requests.exceptions.HTTPError as e:
                #   print("Bad HTTP status code:", e)
                #except requests.exceptions.RequestException as e:
                #   print("Network error:", e)
    
             if (name == 'diaro_locations'):
                  Lat[myuid] = mylat
                  Lng[myuid] = mylng
    
             if (name == 'diaro_entries'):
                if not mytext:
                      mytext = ""
                if not myfolder_uid:
                      myfolder_uid = Diaro_UID
                if not mytags:
                      mytags = ""
                if not mylocation_uid:
                      mylocation_uid = ""
                mytext = mytext.replace("'", "")
                mytitle = mytitle.replace("'", "")
                mytext = mytext.strip("\'")
                mytitle = mytitle.strip("\'")
                mytext = mytext.strip('(')
                mytitle = mytitle.strip('(')
                listtags = mytags.split(",")
                new_tagslist = ""
                for uid_tags in listtags:
                     if (len(uid_tags) > 2):
                            if uid_tags in UID:
                                 new_tagslist = new_tagslist + TAGS[uid_tags] + ","
                print ("TAGS",mytags,"==>",new_tagslist)
                payload_note = {
                    "id": myuid,
                    "latitude": Lat[mylocation_uid],
                    "longitude": Lng[mylocation_uid],
                    "tags": new_tagslist,
                    "parent_id": UID[myfolder_uid],
                    "title": mytitle,
                    #"created_time": mydate_ms,
                    "user_created_time": mydate_ms,
                    "user_updated_time": mydate_ms,
                    "author": "Diaro",
                    "body": mytext 
                }
                try:
                    resp = requests.post(url_notes, json=payload_note)
                    #time.sleep(1)
                    resp.raise_for_status()
                    resp_dict = resp.json()
                    print(resp_dict)
                    print(resp_dict['id'])
                    UID[myuid]= resp_dict['id']
                except requests.exceptions.HTTPError as e:
                    print("Bad HTTP status code:", e)
                except requests.exceptions.RequestException as e:
                    print("Network error:", e)
    
    print("End : Parse Table")
    
    strftime("%Y-%m-%d %H:%M:%S", gmtime())
    done = time.time()
    elapsed = done - start
    print(elapsed)
    
    # END : Ouf ...
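The tag-remapping loop above can be isolated into a small helper. This is only a sketch: `map_tags` and the `tag_names` dict are my names, standing in for the script's `TAGS` lookup, and `str.join` is used instead of concatenation, which also avoids the trailing comma the loop leaves behind.

```python
def map_tags(mytags, tag_names):
    """Translate a comma-separated list of Diaro tag UIDs into tag titles.

    tag_names maps a Diaro UID to the title collected while parsing the
    diaro_tags table; empty, too-short or unknown UIDs are skipped, as in
    the loop above.
    """
    names = [tag_names[uid] for uid in mytags.split(",")
             if len(uid) > 2 and uid in tag_names]
    return ",".join(names)
```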
    diaro python Created Fri, 08 Feb 2019 00:00:00 +0000
  • I wanted to follow the procedure with brew, pip, … but without success with version 2.7.2

    $ python --version
    Python 2.7.2

    I was getting errors like:

    $ brew reinstall python
    ...
    xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
    Error: An exception occurred within a child process:
      CompilerSelectionError: python cannot be built with any available compilers. Install GNU's GCC: brew install gcc

    $ python -m pip install --user requests
    /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python: No module named pip

    $ sudo easy_install pip
    Searching for pip
    Reading http://pypi.python.org/simple/pip/
    Couldn't find index page for 'pip' (maybe misspelled?)
    Scanning index of all packages (this may take a while)
    Reading http://pypi.python.org/simple/
    No local packages or download links found for pip
    Best match: None
    Traceback (most recent call last):
      File "/Library/Frameworks/Python.framework/Versions/2.7/bin/easy_install", line 8, in <module>
        load_entry_point('setuptools==0.6c11', 'console_scripts', 'easy_install')()
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1712, in main
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1700, in with_ei_usage
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1716, in <lambda>
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 152, in setup
        dist.run_commands()
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands
        self.run_command(cmd)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command
        cmd_obj.run()
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 211, in run
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 434, in easy_install
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/package_index.py", line 475, in fetch_distribution
    AttributeError: 'NoneType' object has no attribute 'clone'

    So I took a different approach:

    joplin python Created Wed, 06 Feb 2019 00:00:00 +0000
  • (See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )

    Step 1: Add as the first line, before  in the file DiaroBackup.xml … it's mandatory!

    I use the REST API to insert into Joplin: https://joplin.cozic.net/api/ ; the documentation is good.
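    Every call below hits the Joplin Clipper server on 127.0.0.1:41184 with the token passed as a query parameter. A tiny helper can build those URLs once instead of concatenating strings at each call site (a sketch; `api_url` and its defaults are my names, mirroring the values in the script):

```python
def api_url(path, ip="127.0.0.1", port="41184", token="blablabla"):
    """Build a Joplin Clipper API URL such as /notes, /folders or /tags.

    The defaults mirror the ip/port/token variables used in the script.
    """
    return "http://{0}:{1}/{2}?token={3}".format(ip, port, path, token)
```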

    Here is my first release in Python to import data from a Diaro App backup via the Joplin API:

    #
    # Version 1  
    #  
    #   ARIAS Frederic
    #   Sorry ... Python is difficult for me :)
    
    from urllib2 import unquote
    from lxml import etree
    import os
    from time import gmtime, strftime
    import time
    
    strftime("%Y-%m-%d %H:%M:%S", gmtime())
    start = time.time()
    
    print("Start : Parse Table")
    tree = etree.parse("./DiaroBackup.xml")
    for table in tree.xpath("/data/table"):
        print(table.get("name"))
    print("End : Parse Table")
    
    #Token
    ip = "127.0.0.1"
    port = "41184"
    #token = "ABCD123ABCD123ABCD123ABCD123ABCD123"
    token = "blablabla"
    cmd = 'curl http://'+ip+':'+port+'/notes?token='+token
    print cmd
    os.system(cmd)
    
    #Init
    Diaro_UID = "12345678901234567801234567890123"
    Lat = {}
    Lng = {}
    Lat[""] = ""
    Lng[""] = ""
    cmd = 'curl --data \'{ "id": "'+Diaro_UID+'", "title": "Diaro Import"}\' http://'+ip+':'+port+'/folders?token='+token
    print cmd
    os.system(cmd)
    
    print("Start : Parse Table")
    tree = etree.parse("./DiaroBackup.xml")
    for table in tree.iter('table'):
        name = table.attrib.get('name')
        print name
        myorder = 1
        for r in table.iter('r'):
             myuid = ""
             mytitle = ""
             mylat = ""
             mylng = ""
             mytags = ""
             mydate = ""
             mytext = ""
             myfilename = ""
             myfolder_uid = Diaro_UID
             mylocation_uid = ""
             myprimary_photo_uid = ""
             myentry_uid = ""
             myorder += 1
             for subelem in r:
                 print(subelem.tag)
                 if (subelem.tag == 'uid'):
                     myuid = subelem.text
                     print ("myuid",myuid)
                 if (subelem.tag == 'entry_uid'):
                     myentry_uid = subelem.text
                     print ("myentry_uid",myentry_uid)
                 if (subelem.tag == 'primary_photo_uid'):
                     myprimary_photo_uid = subelem.text
                     print ("myprimary_photo_uid",myprimary_photo_uid)
                 if (subelem.tag == 'folder_uid'):
                     myfolder_uid = subelem.text
                     print ("myfolder_uid",myfolder_uid)
                 if (subelem.tag == 'location_uid'):
                     mylocation_uid = subelem.text
                     print ("mylocation_uid",mylocation_uid)
                 if (subelem.tag == 'date'):
                     mydate = subelem.text
                     print ("mydate",mydate)
                 if (subelem.tag == 'title'):
                     mytitle = subelem.text
                     print ("mytitle",mytitle)
                     print type(mytitle)
                     if type(mytitle) == unicode:
                         mytitle = mytitle.encode('utf8')
                 if (subelem.tag == 'lat'):
                     mylat = subelem.text
                     print ("mylat",mylat)
                 if (subelem.tag == 'lng'):
                     mylng = subelem.text
                     print ("mylng",mylng)
                 if (subelem.tag == 'tags'):
                     mytags = subelem.text
                     if mytags:
                        mytags = mytags[1:]  # drop the leading separator (the bare slice was a no-op)
                     print ("mytags",mytags)
                 if (subelem.tag == 'text'):
                     mytext = subelem.text
                     print ("mytext",mytext)
                     if type(mytext) == unicode:
                            mytext = mytext.encode('utf8')
                 if (subelem.tag == 'filename'):
                     myfilename = subelem.text
                     print ("myfilename",myfilename)
             if (name == 'diaro_folders'):
                  cmd = 'curl --data \'{ "id": "'+myuid+'", "title": "'+mytitle+'", "parent_id": "'+Diaro_UID+'"}\' http://'+ip+':'+port+'/folders?token='+token
                  print cmd
                  os.system(cmd)
             if (name == 'diaro_tags'):
                  cmd = 'curl --data \'{ "id": "'+myuid+'", "title": "'+mytitle+'"}\' http://'+ip+':'+port+'/tags?token='+token
                  print cmd
                  os.system(cmd)
             if (name == 'diaro_attachments'):
                  cmd = 'curl -F \'data=@media/photo/'+myfilename+'\'  -F \'props={"id":"'+myuid+'"}\' http://'+ip+':'+port+'/resources?token='+token
                  print cmd
                  os.system(cmd)
                  cmd = 'curl -X PUT http://'+ip+':'+port+'/resources/'+myuid+'/notes/'+myentry_uid+'?token='+token
                  print cmd
                  os.system(cmd)
             if (name == 'diaro_locations'):
                  Lat[myuid] = mylat
                  Lng[myuid] = mylng
             if (name == 'diaro_entries'):
                 if not mytext:
                      mytext = ""
                 if not myfolder_uid:
                      myfolder_uid = Diaro_UID
                 if not mytags:
                      mytags = ""
                 if not mylocation_uid:
                      mylocation_uid = ""
                 mytext = mytext.replace("'", "")
                 mytitle = mytitle.replace("'", "")
                 mytext = mytext.strip("\'")
                 mytitle = mytitle.strip("\'")
                 mytext = mytext.strip('(')
                 mytitle = mytitle.strip('(')
                 print type(mytext)
                 cmd = 'curl --data \'{"latitude":"'+Lat[mylocation_uid]+'","longitude":"'+Lng[mylocation_uid]+'","tags":"'+mytags+'","parent_id":"'+myfolder_uid+'","id":"'+myuid+'","title":"'+mytitle+'", "created_time": "'+mydate+'", "body": "'+mytext+'"}\' http://'+ip+':'+port+'/notes?token='+token
                 print cmd
                 os.system(cmd)
    print("End : Parse Table")
    
    strftime("%Y-%m-%d %H:%M:%S", gmtime())
    done = time.time()
    elapsed = done - start
    print(elapsed)
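    The quote and parenthesis stripping above is only needed because the JSON body is hand-assembled into a curl command line; serializing the payload with json.dumps escapes those characters instead of deleting them. A minimal sketch of the alternative (the payload values are made up for illustration):

```python
import json

# A title or body containing quotes breaks the hand-built curl string,
# but json.dumps escapes them, so nothing has to be stripped from the text.
payload = {"title": "It's a 'test'", "body": 'He said "hello" (twice)'}
serialized = json.dumps(payload)
print(serialized)

# The round trip preserves the quotes and parentheses exactly.
assert json.loads(serialized) == payload
```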

    But I don't fully understand the API; I can force the id (for example: 12345678901234567801234567890123):

    diario diaro joplin python Created Tue, 05 Feb 2019 00:00:00 +0000
  • Here is my setup:

    • Mac version 10.14.3:
      • Joplin version: 1.0.125
      • WebDAVNav Server: v2.6.4
    • Android 9 (build 9.0.0.162, Honor View 10):
      • Joplin version: v1.0.234 - database v17.

    I managed to synchronize from Mac -> Android and from Android -> Mac. So it is fully functional!

    A few screenshots:

    And on Android:

    joplin Created Mon, 04 Feb 2019 00:00:00 +0000