  • (See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )

    Now, with release V3, it's possible to import the data … The last remaining issue concerns user_created_time and user_updated_time.
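    For reference, Diaro's <date> field is a Unix epoch in milliseconds, which is also the unit Joplin's user_created_time / user_updated_time fields take, so the value can be passed through unchanged. A minimal sketch (the sample timestamp is taken from the backup excerpt later on this page):

```python
from datetime import datetime, timezone

# Diaro stores <date> as a Unix epoch in milliseconds.
diaro_date = "1475771220000"  # value from a DiaroBackup.xml entry
mydate_ms = int(diaro_date)

# Joplin's user_created_time / user_updated_time also take epoch milliseconds,
# so the value goes into the note payload as-is.
payload_fragment = {"user_created_time": mydate_ms, "user_updated_time": mydate_ms}

# For display, convert to a human-readable UTC timestamp.
human = datetime.fromtimestamp(mydate_ms / 1000, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
```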

    The REST API is very good (https://joplin.cozic.net/api/), but if it's not too complex, here are some requests:

    1. Add the possibility to choose the ID when creating a folder.
    2. Add the possibility to choose the ID when creating a tag.
    3. Add the possibility to do a PUT on a note to append [](:/ID_RESOURCE) to the end of its text. The syntax: PUT /resources/ID_RESOURCE/notes/ID_NOTE?token=…
    4. Add the possibility to reference tags on notes by ID instead of by text.
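    For illustration, a minimal sketch of how that proposed "attach resource to note" call could be built with requests. ID_RESOURCE, ID_NOTE, and TOKEN are placeholders; note the endpoint spelling "resources" (the double-s "ressources" spelling returns a 404, as the logs further down show):

```python
# Hedged sketch of the proposed resource-to-note link URL; the IDs and token
# are placeholders, not real values.
def link_resource_url(ip, port, resource_id, note_id, token):
    return ("http://" + ip + ":" + port + "/resources/" + resource_id
            + "/notes/" + note_id + "?token=" + token)

url = link_resource_url("127.0.0.1", "41184", "ID_RESOURCE", "ID_NOTE", "TOKEN")
# requests.put(url) would then send the proposed call (not executed here).
```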

    My latest source:

    import joplin Created Sat, 09 Feb 2019 00:00:00 +0000
  • (See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )

    I thought I had found my bug … I thought it was the space before the value that kept the values from being taken into account.

    With: requests.post(url_folders, json=payload), there is a space, i.e. a 0x20 character.

    With: requests.post(url_folders, data=json.dumps(payload, separators=(',',':')), headers=headers)

    Given that: headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}

    I no longer have the space, but the numeric values are still not taken into account …
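    The two calls above differ only in how the JSON body is serialized. A quick check of what each form actually sends (the payload mirrors the "Diaro Import" folder created later in the script):

```python
import json

# Same payload shape as the "Diaro Import" folder used in the script below.
payload = {"id": "12345678901234567801234567890123", "title": "Diaro Import"}

# json.dumps defaults to separators (', ', ': '), so the body contains
# 0x20 space characters after every comma and colon.
default_body = json.dumps(payload)

# Compact separators remove those spaces, matching the data=json.dumps(...) call.
compact_body = json.dumps(payload, separators=(",", ":"))
```

    Note that requests.post(..., json=payload) also sets the Content-Type header to application/json automatically.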

    python Created Sat, 09 Feb 2019 00:00:00 +0000
  • (See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )

    I have an issue with resources (the link between resources and notes) … error 404. The logs are in .config/joplin-desktop/log-clipper.txt

    ....: "Request: PUT /ressources/71dd2cba2af54c4ebb53fb7fd8d0543b/notes/cbbc6076b2ac321ccae1f036a2fe6659?token=...."
    ....: "Error: Not Found
    Error: Not Found
        at Api.route (/Applications/Joplin.app/Contents/Resources/app/lib/services/rest/Api.js:103:41)
        at execRequest (/Applications/Joplin.app/Contents/Resources/app/lib/ClipperServer.js:147:39)
        at IncomingMessage.request.on (/Applications/Joplin.app/Contents/Resources/app/lib/ClipperServer.js:185:8)
        at emitNone (events.js:105:13)
        at IncomingMessage.emit (events.js:207:7)
        at endReadableNT (_stream_readable.js:1045:12)
        at _combinedTickCallback (internal/process/next_tick.js:138:11)
        at process._tickCallback (internal/process/next_tick.js:180:9)"

    My latest code:

    #
    # Version 2 
    # for Python 3
    # 
    #   ARIAS Frederic
    #   Sorry ... It's difficult for me the python :)
    #
    
    #from lxml import etree
    import xml.etree.ElementTree as etree
    from time import gmtime, strftime
    import time
    import json
    import requests
    import os
    
    strftime("%Y-%m-%d %H:%M:%S", gmtime())
    start = time.time()
    
    #Token
    ip = "127.0.0.1"
    port = "41184"
    token = "ABCD123ABCD123ABCD123ABCD123ABCD123"
    
    url_notes = (
        "http://"+ip+":"+port+"/notes?"
        "token="+token
    )
    url_folders = (
        "http://"+ip+":"+port+"/folders?"
        "token="+token
    )
    url_tags = (
        "http://"+ip+":"+port+"/tags?"
        "token="+token
    )
    url_ressources = (
        "http://"+ip+":"+port+"/resources?"  # note: the endpoint is spelled "resources"
        "token="+token
    )
    
    #Init
    Diaro_UID = "12345678901234567801234567890123"
    Lat = {}
    Lng = {}
    UID = {} 
    TAGS = {}
    Lat[""] = ""
    Lng[""] = ""
    
    payload = {
        "id": Diaro_UID,
        "title": "Diaro Import"
    }
    
    try:
        resp = requests.post(url_folders, json=payload)
        #time.sleep(1)
        resp.raise_for_status()
        resp_dict = resp.json()
        print(resp_dict)
        print("My ID")
        print(resp_dict['id'])
        Diaro_UID_real = resp_dict['id']
        save = str(resp_dict['id'])
        UID[Diaro_UID]= save
    except requests.exceptions.HTTPError as e:
        print("Bad HTTP status code:", e)
    except requests.exceptions.RequestException as e:
        print("Network error:", e)
    
    print("Start : Parse Table")
    tree = etree.parse("./DiaroBackup.xml")
    for table in tree.iter('table'):
        name = table.attrib.get('name')
        print(name)
        myorder = 1
        for r in table.iter('r'):
             myuid = ""
             mytitle = ""
             mylat = ""
             mylng = ""
             mytags = ""
             mydate = ""
             mydate_ms = 0
             mytext = ""
             myfilename = ""
             myfolder_uid = Diaro_UID
             mylocation_uid = ""
             myprimary_photo_uid = ""
             myentry_uid = ""
             myorder += 1
             for subelem in r:
                 print(subelem.tag)
                 if (subelem.tag == 'uid'):
                     myuid = subelem.text
                     print ("myuid",myuid)
                 if (subelem.tag == 'entry_uid'):
                     myentry_uid = subelem.text
                     print ("myentry_uid",myentry_uid)
                 if (subelem.tag == 'primary_photo_uid'):
                     myprimary_photo_uid = subelem.text
                     print ("myprimary_photo_uid",myprimary_photo_uid)
                 if (subelem.tag == 'folder_uid'):
                     myfolder_uid = subelem.text
                     print ("myfolder_uid",myfolder_uid)
                 if (subelem.tag == 'location_uid'):
                     mylocation_uid = subelem.text
                     print ("mylocation_uid",mylocation_uid)
                 if (subelem.tag == 'date'):
                     mydate = subelem.text
                     mydate_ms = int(mydate)
                     print ("mydate",mydate," in ms",mydate_ms)
                 if (subelem.tag == 'title'):
                     mytitle = subelem.text
                     print ("mytitle",mytitle)
                     #if type(mytitle) == str:
                        #mytitle = mytitle.encode('utf8')
                 if (subelem.tag == 'lat'):
                     mylat = subelem.text
                     print ("mylat",mylat)
                 if (subelem.tag == 'lng'):
                     mylng = subelem.text
                     print ("mylng",mylng)
                 if (subelem.tag == 'tags'):
                     mytags = subelem.text
                     if mytags:
                        mytags = mytags[1:]  # drop the leading comma (the slice result was previously discarded)
                     print ("mytags",mytags)
                 if (subelem.tag == 'text'):
                     mytext = subelem.text
                     print ("mytext",mytext)
                     #if type(mytext) == str:
                           #mytext = mytext.encode('utf8')
                 if (subelem.tag == 'filename'):
                     myfilename = subelem.text
                     print ("myfilename",myfilename)
                     
             if (name == 'diaro_folders'):
                payload_folder = {
                    "id": myuid,
                    "title": mytitle,
                    "parent_id": Diaro_UID_real
                }
                print(payload_folder)
                try:
                    resp = requests.post(url_folders, json=payload_folder)
                    #time.sleep(1)
                    resp.raise_for_status()
                    resp_dict = resp.json()
                    print(resp_dict)
                    print(resp_dict['id'])
                    save = str(resp_dict['id']) 
                    UID[myuid]= save
                except requests.exceptions.HTTPError as e:
                    print("Bad HTTP status code:", e)
                except requests.exceptions.RequestException as e:
                    print("Network error:", e)
    
             if (name == 'diaro_tags'):
                payload_tags = {
                    "id": myuid,
                    "title": mytitle
                }
                try:
                    resp = requests.post(url_tags, json=payload_tags)
                    #time.sleep(1)
                    resp.raise_for_status()
                    resp_dict = resp.json()
                    print(resp_dict)
                    print(resp_dict['id'])
                    UID[myuid]= resp_dict['id']
                    TAGS[myuid] = mytitle
                except requests.exceptions.HTTPError as e:
                    print("Bad HTTP status code:", e)
                except requests.exceptions.RequestException as e:
                    print("Network error:", e)
    
             if (name == 'diaro_attachments'):
                payload_ressource = {
                    "id": myuid
                }
                filename = "./media/photo/"+myfilename
                files = {'document': open(filename, 'rb')}
                files2 = {'data': open(filename, 'rb')}
                files3 = {'data': open(filename, 'rb'), 'props': payload_ressource}
                data_ressource = {
                     "title": myfilename
                }
                multiple_files = [
                    ('data', (myfilename, open(filename, 'rb'))),
                    ('props', data_ressource)]
                headers = {'Content-type': 'multipart/form-data'}
                print("Push : "+filename)
                #print os.path.isfile(filename)
                print("----------0-----------")
                #try:
                   #resp = requests.post(url_ressources, files=filename, json=payload_ressource)
                   #resp = requests.post(url_ressources, files=files, json=payload_ressource, headers=headers) 
                   #resp = requests.post(url_ressources, files=files2, headers=headers)
                   #resp = requests.post(url_ressources, files=files2, headers=headers)
                   #resp = requests.post(url_ressources,files = {'data' : (myfilename, open(filename, 'rb'), 'image/jpg')}, data = {'id' : myuid}, headers=headers)
                   #resp = requests.post(url_ressources,files = files2, data= data_ressource, headers=headers)
                   #resp = requests.post(url_ressources,files = multiple_files, headers=headers)
                   #resp = requests.post(url_ressources,files = multiple_files)
                   #resp.text
                   #time.sleep(1)
                   #resp.raise_for_status()
                   #if (resp.status_code == requests.codes.ok):
                   #    resp_dict = resp.json()
                   #    print(resp_dict)
                   #    print(resp_dict['id'])
                   #    UID[myuid]= resp_dict['id']
                #except requests.exceptions.HTTPError as e:
                   #print("Bad HTTP status code:", e)
                   #UID[myuid]=""
                   #print("----------1-----------")
                #except requests.exceptions.RequestException as e:
                   #print("Network error:", e)
                   #UID[myuid]=""
                   #print("----------2-----------")
    
                cmd = "curl -F 'data=@"+filename+"' -F 'props={\"title\":\""+myfilename+"\"}' http://"+ip+":"+port+"/resources?token="+token
                resp = os.popen(cmd).read()
                respj = json.loads(resp)
                #resp_dict = respj.json() 
                print(respj['id'])
                UID[myuid]= respj['id']
    
                print("Link : ",myuid," => ",myentry_uid," // ",UID[myuid]+" => ",UID[myentry_uid])
                time.sleep(1)
    
                cmd = "curl -X PUT http://"+ip+":"+port+"/resources/"+UID[myuid]+"/notes/"+UID[myentry_uid]+"?token="+token  # fixed: the misspelled "/ressources" endpoint was returning 404
                resp = os.popen(cmd).read()
                print (resp)
                #url_link = (
                #   "http://"+ip+":"+port+"/ressources/"+UID[myuid]+"/notes/"+UID[myentry_uid]+"?"
                #   "token="+token
                #   )
                #try:
                #  resp = requests.post(url_link)
                #   #time.sleep(1)
                #   resp.raise_for_status()
                #   resp_dict = resp.json()
                #   print(resp_dict)
                #   print(resp_dict['id'])
                #   UID[myuid]= resp_dict['id']
                #except requests.exceptions.HTTPError as e:
                #   print("Bad HTTP status code:", e)
                #except requests.exceptions.RequestException as e:
                #   print("Network error:", e)
    
             if (name == 'diaro_locations'):
                  Lat[myuid] = mylat
                  Lng[myuid] = mylng
    
             if (name == 'diaro_entries'):
                if not mytext:
                      mytext = ""
                if not myfolder_uid:
                      myfolder_uid = Diaro_UID
                if not mytags:
                      mytags = ""
                if not mylocation_uid:
                      mylocation_uid = ""
                mytext = mytext.replace("'", "")
                mytitle = mytitle.replace("'", "")
                mytext = mytext.strip("\'")
                mytitle = mytitle.strip("\'")
                mytext = mytext.strip('(')
                mytitle = mytitle.strip('(')
                listtags = mytags.split(",")
                new_tagslist = ""
                for uid_tags in listtags:
                     if (len(uid_tags) > 2):
                            if uid_tags in UID:
                                 new_tagslist = new_tagslist + TAGS[uid_tags] + ","
                print ("TAGS",mytags,"==>",new_tagslist)
                payload_note = {
                    "id": myuid,
                    "latitude": Lat[mylocation_uid],
                    "longitude": Lng[mylocation_uid],
                    "tags": new_tagslist,
                    "parent_id": UID[myfolder_uid],
                    "title": mytitle,
                    #"created_time": mydate_ms,
                    "user_created_time": mydate_ms,
                    "user_updated_time": mydate_ms,
                    "author": "Diaro",
                    "body": mytext 
                }
                try:
                    resp = requests.post(url_notes, json=payload_note)
                    #time.sleep(1)
                    resp.raise_for_status()
                    resp_dict = resp.json()
                    print(resp_dict)
                    print(resp_dict['id'])
                    UID[myuid]= resp_dict['id']
                except requests.exceptions.HTTPError as e:
                    print("Bad HTTP status code:", e)
                except requests.exceptions.RequestException as e:
                    print("Network error:", e)
    
    print("End : Parse Table")
    
    strftime("%Y-%m-%d %H:%M:%S", gmtime())
    done = time.time()
    elapsed = done - start
    print(elapsed)
    
    # END : Ouf ...
    diaro python Created Fri, 08 Feb 2019 00:00:00 +0000
  • It's very easy to see, in two images:

    12 trackers vs. 1 tracker.

    joplin Created Wed, 06 Feb 2019 00:00:00 +0000
  • (See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )

    Step 1: Add the XML declaration as the first line, before the opening element, in the file DiaroBackup.xml … it's mandatory!
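    Assuming the line the blog engine swallowed is the XML declaration (the backup otherwise starts directly at its root element), prepending it could look like this. DiaroBackup.fixed.xml is a made-up output name, and a stand-in file replaces the real backup:

```shell
# Assumption: the stripped first line is the XML declaration, which must come
# before the <data version="2"> root element of DiaroBackup.xml.
# Create a stand-in backup file for the demonstration:
printf '%s\n' '<data version="2"></data>' > DiaroBackup.xml
# Prepend the declaration into a new file, then append the original content:
printf '%s\n' '<?xml version="1.0" encoding="UTF-8"?>' > DiaroBackup.fixed.xml
cat DiaroBackup.xml >> DiaroBackup.fixed.xml
head -n 1 DiaroBackup.fixed.xml
```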

    I use the REST API to insert into Joplin: https://joplin.cozic.net/api/ — the documentation is good.

    Here is my first version in Python to import data from a Diaro App backup into Joplin via the API:

    #
    # Version 1  
    #  
    #   ARIAS Frederic
    #   Sorry ... It's difficult for me the python :)
    
    from urllib2 import unquote  # Python 2 only (unused below)
    from lxml import etree
    import os
    from time import gmtime, strftime
    import time
    
    strftime("%Y-%m-%d %H:%M:%S", gmtime())
    start = time.time()
    
    print("Start : Parse Table")
    tree = etree.parse("./DiaroBackup.xml")
    for table in tree.xpath("/data/table"):
        print(table.get("name"))
    print("End : Parse Table")
    
    #Token
    ip = "127.0.0.1"
    port = "41184"
    #token = "ABCD123ABCD123ABCD123ABCD123ABCD123"
    token = "blablabla"
    cmd = 'curl http://'+ip+':'+port+'/notes?token='+token
    print cmd
    os.system(cmd)
    
    #Init
    Diaro_UID = "12345678901234567801234567890123"
    Lat = {}
    Lng = {}
    Lat[""] = ""
    Lng[""] = ""
    cmd = 'curl --data \'{ "id": "'+Diaro_UID+'", "title": "Diaro Import"}\' http://'+ip+':'+port+'/folders?token='+token
    print cmd
    os.system(cmd)
    
    print("Start : Parse Table")
    tree = etree.parse("./DiaroBackup.xml")
    for table in tree.iter('table'):
        name = table.attrib.get('name')
        print name
        myorder = 1
        for r in table.iter('r'):
             myuid = ""
             mytitle = ""
             mylat = ""
             mylng = ""
             mytags = ""
             mydate = ""
             mytext = ""
             myfilename = ""
             myfolder_uid = Diaro_UID
             mylocation_uid = ""
             myprimary_photo_uid = ""
             myentry_uid = ""
             myorder += 1
             for subelem in r:
                 print(subelem.tag)
                 if (subelem.tag == 'uid'):
                     myuid = subelem.text
                     print ("myuid",myuid)
                 if (subelem.tag == 'entry_uid'):
                     myentry_uid = subelem.text
                     print ("myentry_uid",myentry_uid)
                 if (subelem.tag == 'primary_photo_uid'):
                     myprimary_photo_uid = subelem.text
                     print ("myprimary_photo_uid",myprimary_photo_uid)
                 if (subelem.tag == 'folder_uid'):
                     myfolder_uid = subelem.text
                     print ("myfolder_uid",myfolder_uid)
                 if (subelem.tag == 'location_uid'):
                     mylocation_uid = subelem.text
                     print ("mylocation_uid",mylocation_uid)
                 if (subelem.tag == 'date'):
                     mydate = subelem.text
                     print ("mydate",mydate)
                 if (subelem.tag == 'title'):
                     mytitle = subelem.text
                     print ("mytitle",mytitle)
                     print type(mytitle)
                     if type(mytitle) == unicode:
                         mytitle = mytitle.encode('utf8')
                 if (subelem.tag == 'lat'):
                     mylat = subelem.text
                     print ("mylat",mylat)
                 if (subelem.tag == 'lng'):
                     mylng = subelem.text
                     print ("mylng",mylng)
                 if (subelem.tag == 'tags'):
                     mytags = subelem.text
                     if mytags:
                        mytags = mytags[1:]  # drop the leading comma (the slice result was previously discarded)
                     print ("mytags",mytags)
                 if (subelem.tag == 'text'):
                     mytext = subelem.text
                     print ("mytext",mytext)
                     if type(mytext) == unicode:
                            mytext = mytext.encode('utf8')
                 if (subelem.tag == 'filename'):
                     myfilename = subelem.text
                     print ("myfilename",myfilename)
             if (name == 'diaro_folders'):
                  cmd = 'curl --data \'{ "id": "'+myuid+'", "title": "'+mytitle+'", "parent_id": "'+Diaro_UID+'"}\' http://'+ip+':'+port+'/folders?token='+token
                  print cmd
                  os.system(cmd)
             if (name == 'diaro_tags'):
                  cmd = 'curl --data \'{ "id": "'+myuid+'", "title": "'+mytitle+'"}\' http://'+ip+':'+port+'/tags?token='+token
                  print cmd
                  os.system(cmd)
             if (name == 'diaro_attachments'):
                  cmd = 'curl -F \'data=@media/photo/'+myfilename+'\'  -F \'props={"id":"'+myuid+'"}\' http://'+ip+':'+port+'/resources?token='+token
                  print cmd
                  os.system(cmd)
                  cmd = 'curl -X PUT http://'+ip+':'+port+'/resources/'+myuid+'/notes/'+myentry_uid+'?token='+token
                  print cmd
                  os.system(cmd)
             if (name == 'diaro_locations'):
                  Lat[myuid] = mylat
                  Lng[myuid] = mylng
             if (name == 'diaro_entries'):
                 if not mytext:
                      mytext = ""
                 if not myfolder_uid:
                      myfolder_uid = Diaro_UID
                 if not mytags:
                      mytags = ""
                 if not mylocation_uid:
                      mylocation_uid = ""
                 mytext = mytext.replace("'", "")
                 mytitle = mytitle.replace("'", "")
                 mytext = mytext.strip("\'")
                 mytitle = mytitle.strip("\'")
                 mytext = mytext.strip('(')
                 mytitle = mytitle.strip('(')
                 print type(mytext)
                 cmd = 'curl --data \'{"latitude":"'+Lat[mylocation_uid]+'","longitude":"'+Lng[mylocation_uid]+'","tags":"'+mytags+'","parent_id":"'+myfolder_uid+'","id":"'+myuid+'","title":"'+mytitle+'", "created_time": "'+mydate+'", "body": "'+mytext+'"}\' http://'+ip+':'+port+'/notes?token='+token
                 print cmd
                 os.system(cmd)
    print("End : Parse Table")
    
    strftime("%Y-%m-%d %H:%M:%S", gmtime())
    done = time.time()
    elapsed = done - start
    print(elapsed)

    But I don't fully understand the API: I can force the id (for example: 12345678901234567801234567890123).

    diario diaro joplin python Created Tue, 05 Feb 2019 00:00:00 +0000
  • The format is XML (DiaroBackup.xml); the syntax is as follows:

    <data version="2">
      <table name="diaro_folders">
        <r>
          <uid>0773341a39b09938e234d0c4e2970988</uid>
          <title>Folder name</title>
          <color>#ff921c</color>
          <pattern></pattern>
        </r>
        ...
      </table>
      <table name="diaro_tags">
        <r>
          <uid>0b2cc127642c774a77e4e048278fb716</uid>
          <title>Tag name</title>
        </r>
        ...
      </table>
      <table name="diaro_locations">
        <r>
          <uid>008e9d97ecbae5876ceefc3463c57753</uid>
          <title>Place</title>
          <address>Address</address>
          <lat>YY.YYYYY</lat>
          <lng>X.XXXXX</lng>
          <zoom>10</zoom>
        </r>
        ...
      </table>
      <table name="diaro_entries">
        <r>
          <uid>f4526cfd9536ecc422df849bc4b69d89</uid>
          <date>1475771220000</date>
          <tz_offset>+02:00</tz_offset>
          <title>Title</title>
          <text>Text</text>
          <folder_uid>4c4db654f97a84333d4e29fd949cbada</folder_uid>
          <location_uid>85c77bb40d800da8f5a9d9777967d325</location_uid>
          <tags>,28f79fcdf75cb5a3deb10ab40d1ed956,</tags>
          <primary_photo_uid></primary_photo_uid>
          <weather_temperature>null</weather_temperature>
          <weather_icon></weather_icon>
          <weather_description></weather_description>
          <mood>0</mood>
        </r>
        ...
      </table>
      <table name="diaro_attachments">
        <r>
          <uid>0237499c90decb1cc9787ecb11718a35</uid>
          <entry_uid>53b97932b1acc1b4a5be5895d22bc16d</entry_uid>
          <type>photo</type>
          <filename>name.jpg</filename>
          <position>1</position>
        </r>
        ...
      </table>
    </data>

    Note that the photos are then located in the media/photo/ directory.
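    As a quick check, the structure above can be walked with Python's standard ElementTree. A trimmed inline sample stands in for the real DiaroBackup.xml:

```python
import xml.etree.ElementTree as etree

# Trimmed inline sample with the same shape as the backup shown above.
sample = """<data version="2">
  <table name="diaro_folders"><r><uid>0773341a39b09938e234d0c4e2970988</uid><title>Folder</title></r></table>
  <table name="diaro_entries"><r><uid>f4526cfd9536ecc422df849bc4b69d89</uid><date>1475771220000</date></r></table>
</data>"""

root = etree.fromstring(sample)
# Each <table name="..."> groups one kind of record (folders, tags, entries, ...).
tables = [t.attrib["name"] for t in root.iter("table")]
```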

    My goal is to convert this into an .ENEX file and then import it into Joplin. I came across a fairly interesting Python program: https://github.com/andrewheiss/nvalt2evernote — "Convert plain text notes stored in Notational Velocity or nvALT to an .enex file to import into Evernote."
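    As a rough idea of the target, an .enex file is an <en-export> root with one <note> per entry, each holding a <title>, a CDATA-wrapped ENML <content> block, and a <created> timestamp in compact UTC form. A minimal sketch (entry_to_enex is a hypothetical helper; the field values are the placeholders from the sample above):

```python
from datetime import datetime, timezone
from xml.sax.saxutils import escape

# Hypothetical helper: render one Diaro entry as a single-note ENEX document.
def entry_to_enex(title, text, date_ms):
    # Evernote's export format uses compact UTC timestamps: yyyymmddThhmmssZ.
    created = datetime.fromtimestamp(date_ms / 1000, tz=timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    # The note body is an ENML document embedded in a CDATA section.
    body = ('<?xml version="1.0" encoding="UTF-8"?>'
            '<!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd">'
            "<en-note>" + escape(text) + "</en-note>")
    return ("<en-export>"
            "<note><title>" + escape(title) + "</title>"
            "<content><![CDATA[" + body + "]]></content>"
            "<created>" + created + "</created></note>"
            "</en-export>")

enex = entry_to_enex("Titre", "Texte", 1475771220000)
```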

    diaro joplin Created Mon, 04 Feb 2019 00:00:00 +0000
  • Here is my setup:

    • Mac, version 10.14.3:
      • Joplin version: 1.0.125
      • WebDAVNav Server: v2.6.4
    • Android 9 (build 9.0.0.162, Honor View 10):
      • Joplin version: v1.0.234 - database v17

    I managed to sync from Mac -> Android and from Android -> Mac. So it's working!

    A few screenshots:

    And on Android:

    joplin Created Mon, 04 Feb 2019 00:00:00 +0000
  • Smartphone updated to EMUI version 9.0.0.159 (C432E4R1P9).

    I like "Digital balance": the new dashboard shows how the time spent on the device is used.

    In terms of performance, I feel like I've lost a lot … from 200,000 down to 160,000. To be continued.

    honor Created Thu, 03 Jan 2019 00:00:00 +0000
    Created Wed, 02 Jan 2019 00:00:00 +0000
  • As always, I followed the tutorial: https://www.youtube.com/watch?v=7dSO0BigR4U

    It's quite complex … but not impossible.

    Created Sun, 16 Dec 2018 00:00:00 +0000