Step 0: Install Joplin and activate the REST API (https://joplin.cozic.net/api/).
Step 1: Install gmplot with pip
$ pip install gmplot
Collecting gmplot
Downloading https://files.pythonhosted.org/packages/e2/b1/e1429c31a40b3ef5840c16f78b506d03be9f27e517d3870a6fd0b356bd46/gmplot-1.2.0.tar.gz (115kB)
100% |████████████████████████████████| 122kB 1.0MB/s
Requirement already satisfied: requests in /usr/local/lib/python3.7/site-packages (from gmplot) (2.21.0)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.7/site-packages (from requests->gmplot) (1.24.1)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/site-packages (from requests->gmplot) (2018.11.29)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.7/site-packages (from requests->gmplot) (2.8)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from requests->gmplot) (3.0.4)
Building wheels for collected packages: gmplot
Building wheel for gmplot (setup.py) ... done
Stored in directory: /Users/...../Library/Caches/pip/wheels/81/6a/76/4dd6a7cc310ba765894159ee84871e8cd55221d82ef14b81a1
Successfully built gmplot
Installing collected packages: gmplot
Successfully installed gmplot-1.2.0
The source code (change your token first):
Step 0: Install Joplin and activate the REST API (https://joplin.cozic.net/api/).
Step 1: Install staticmap with pip (for more information, see https://github.com/komoot/staticmap)
$ pip install staticmap
Collecting staticmap
Downloading https://files.pythonhosted.org/packages/f9/9f/5a3843533eab037cba031486175c4db1b214614404a29516208ff228dead/staticmap-0.5.4.tar.gz
Collecting Pillow (from staticmap)
Downloading https://files.pythonhosted.org/packages/c9/ed/27cc92e99b9ccaa0985a66133baeea7e8a3371d3c04cfa353aaa3b81aac1/Pillow-5.4.1-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (3.7MB)
100% |████████████████████████████████| 3.7MB 6.3MB/s
Requirement already satisfied: requests in /usr/local/lib/python3.7/site-packages (from staticmap) (2.21.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (2.8)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (1.24.1)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (2018.11.29)
Building wheels for collected packages: staticmap
Building wheel for staticmap (setup.py) ... done
Stored in directory: /Users/..../Library/Caches/pip/wheels/fe/a6/a5/2acceb72471d85bd0498973aabd611e6ff1cdd48796790f047
Successfully built staticmap
Installing collected packages: Pillow, staticmap
Successfully installed Pillow-5.4.1 staticmap-0.5.4
The source code:
Step 0: Install Joplin (https://joplin.cozic.net) and start the REST API. (Easy.)
Step 1: Put this script in a folder.
Step 2: Edit the script and add your token.
Step 3: Run the script.
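All of the scripts in this post build their endpoint URLs by string concatenation. A tiny helper (hypothetical, not part of the original scripts) keeps that pattern in one place:

```python
def joplin_url(endpoint, token, ip="127.0.0.1", port="41184"):
    """Build a Joplin Data API URL, e.g. http://127.0.0.1:41184/notes?token=..."""
    return "http://" + ip + ":" + port + "/" + endpoint + "?token=" + token
```

The scripts could then call joplin_url("notes", token) instead of repeating the concatenation for each endpoint.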
The script:
#
# Version 1
# for Python 3
#
# ARIAS Frederic
# Sorry ... Python is difficult for me :)
#
import feedparser
from os import listdir
from pathlib import Path
import glob
import csv
import locale
import os
import time
from datetime import datetime
import json
import requests
#Token
ip = "127.0.0.1"
port = "41184"
token = "Put your token here"
nb_import = 0
headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
url_notes = (
    "http://"+ip+":"+port+"/notes?"
    "token="+token
)
url_folders = (
    "http://"+ip+":"+port+"/folders?"
    "token="+token
)
url_tags = (
    "http://"+ip+":"+port+"/tags?"
    "token="+token
)
url_resources = (
    "http://"+ip+":"+port+"/resources?"
    "token="+token
)
#Init
Wordpress_UID = "12345678901234567801234567890123"
UID = {}
payload = {
    "id": Wordpress_UID,
    "title": "Wordpress Import"
}
try:
    resp = requests.post(url_folders, data=json.dumps(payload, separators=(',',':')), headers=headers)
    resp.raise_for_status()
    resp_dict = resp.json()
    print(resp_dict)
    print("My ID")
    print(resp_dict['id'])
    Wordpress_UID_real = resp_dict['id']
    save = str(resp_dict['id'])
    UID[Wordpress_UID] = save
except requests.exceptions.HTTPError as e:
    print("Bad HTTP status code:", e)
except requests.exceptions.RequestException as e:
    print("Network error:", e)
feed = feedparser.parse("https://www.cyber-neurones.org/feed/")
feed_title = feed['feed']['title']
feed_entries = feed.entries
numero = -2
nb_entries = 1
nb_metadata_import = 1
while nb_entries > 0:
    print("----- Page ", numero, "-------")
    numero += 2
    url = "https://www.cyber-neurones.org/feed/?paged="+str(numero)
    feed = feedparser.parse(url)
    feed_title = feed['feed']['title']
    feed_entries = feed.entries
    nb_entries = len(feed['entries'])
    for entry in feed.entries:
        nb_metadata_import += 1
        my_title = entry.title
        my_link = entry.link
        article_published_at = entry.published  # Unicode string
        article_published_at_parsed = entry.published_parsed  # Time object
        article_author = entry.author
        timestamp = time.mktime(entry.published_parsed)*1000
        print("Published at "+article_published_at)
        my_body = entry.description
        payload_note = {
            "parent_id": Wordpress_UID_real,
            "title": my_title,
            "source": "Wordpress",
            "source_url": my_link,
            "order": nb_metadata_import,
            "user_created_time": timestamp,
            "user_updated_time": timestamp,
            "author": article_author,
            "body_html": my_body
        }
        payload_note_put = {
            "source": "Wordpress",
            "order": nb_metadata_import,
            "source_url": my_link,
            "user_created_time": timestamp,
            "user_updated_time": timestamp,
            "author": article_author
        }
        try:
            resp = requests.post(url_notes, json=payload_note)
            resp.raise_for_status()
            resp_dict = resp.json()
            print(resp_dict)
            print(resp_dict['id'])
            myuid = resp_dict['id']
        except requests.exceptions.HTTPError as e:
            print("Bad HTTP status code:", e)
        except requests.exceptions.RequestException as e:
            print("Network error:", e)
        url_notes_put = (
            "http://"+ip+":"+port+"/notes/"+myuid+"?"
            "token="+token
        )
        try:
            resp = requests.put(url_notes_put, json=payload_note_put)
            resp.raise_for_status()
            resp_dict = resp.json()
            print(resp_dict)
        except requests.exceptions.HTTPError as e:
            print("Bad HTTP status code:", e)
        except requests.exceptions.RequestException as e:
            print("Network error:", e)
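One caveat in the script above: time.mktime() interprets its argument as local time, while feedparser's published_parsed is a UTC struct_time, so the imported timestamps can be off by the timezone offset. A sketch of a timezone-safe conversion using calendar.timegm (the helper name is mine, not the script's):

```python
import calendar
import time

def to_joplin_ms(published_parsed):
    # feedparser's published_parsed is a UTC struct_time;
    # calendar.timegm treats it as UTC (time.mktime would assume local time).
    # Joplin's user_created_time / user_updated_time expect milliseconds.
    return calendar.timegm(published_parsed) * 1000

# Example with a fixed UTC date:
ts = to_joplin_ms(time.strptime("2019-02-01 12:00:00", "%Y-%m-%d %H:%M:%S"))  # 1549022400000
```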
Link to the Diaro app: https://diaroapp.com.

But too many trackers!

Link to Joplin: https://joplin.cozic.net/, and the REST API: https://joplin.cozic.net/api/
Step 1: Add the first line … to the file DiaroBackup.xml beforehand; it's mandatory!
My note on the REST API: to embed a resource in a note you write "![](:/ID_RESOURCE)" in the body; a syntax like "PUT /resources/ID_RESOURCE/notes/ID_NOTE?token=…" would be simpler. After installing Python 3 (it's easy), put your token in the script and run it.
Step 0: Install Joplin (https://joplin.cozic.net) and start the REST API.
Step 1: Download everything with https://takeout.google.com
Step 2: Uncompress it and put everything in the same folder.
Step 3: Put this script in the folder.
Step 4: Edit the script and add your token.
The script:
#
# Version 1
# for Python 3
#
# ARIAS Frederic
# Sorry ... Python is difficult for me :)
#
from os import listdir
from pathlib import Path
import glob
import csv
import locale
import os
import time
from datetime import datetime
import json
import requests
nb_metadata = 0
nb_metadata_import = 0
def month_string_to_number(string):
    # French (and abbreviated English) month names, as found in Takeout metadata
    m = {
        'janv.': 1,
        'feb.': 2,
        'févr.': 2,
        'mar.': 3,
        'mars': 3,
        'apr.': 4,
        'avr.': 4,
        'may.': 5,
        'mai': 5,
        'juin': 6,
        'juil.': 7,
        'aug.': 8,
        'août': 8,
        'sept.': 9,
        'oct.': 10,
        'nov.': 11,
        'déc.': 12
    }
    s = string.strip()[:5].lower()
    try:
        return m[s]
    except KeyError:
        raise ValueError('Not a month')
locale.setlocale(locale.LC_TIME, 'fr_FR.UTF-8')
#today = datetime.date.today()
#print(today.strftime('The date :%d %b. %Y à %H:%M:%S UTC'))
from time import strftime,localtime
print(localtime())
print(strftime("%H:%M:%S, %d %b. %Y",localtime()))
date = datetime.strptime('2017-05-04',"%Y-%m-%d")
#Token
ip = "127.0.0.1"
port = "41184"
token = "Put your token here"
nb_import = 0
headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
url_notes = (
    "http://"+ip+":"+port+"/notes?"
    "token="+token
)
url_folders = (
    "http://"+ip+":"+port+"/folders?"
    "token="+token
)
url_tags = (
    "http://"+ip+":"+port+"/tags?"
    "token="+token
)
url_resources = (
    "http://"+ip+":"+port+"/resources?"
    "token="+token
)
#Init
GooglePlus_UID = "12345678901234567801234567890123"
UID = {}
payload = {
    "id": GooglePlus_UID,
    "title": "GooglePlus Import"
}
try:
    resp = requests.post(url_folders, data=json.dumps(payload, separators=(',',':')), headers=headers)
    resp.raise_for_status()
    resp_dict = resp.json()
    print(resp_dict)
    print("My ID")
    print(resp_dict['id'])
    GooglePlus_UID_real = resp_dict['id']
    save = str(resp_dict['id'])
    UID[GooglePlus_UID] = save
except requests.exceptions.HTTPError as e:
    print("Bad HTTP status code:", e)
except requests.exceptions.RequestException as e:
    print("Network error:", e)
for csvfilename in glob.iglob('Takeout*/**/*.metadata.csv', recursive=True):
    nb_metadata += 1
    print(nb_metadata, " ", csvfilename)
    #print("Picture:"+os.path.basename(csvfilename))
    mybasename = os.path.basename(csvfilename)
    mylist = mybasename.split(".")
    myfilename = mylist[0] + "." + mylist[1]
    filename = os.path.dirname(csvfilename)+"/"+myfilename
    my_file = Path(filename)
    with open(csvfilename) as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            if len(row['description']) > 0:
                print(row['title'], row['description'], row['creation_time.formatted'], row['geo_data.latitude'], row['geo_data.longitude'])
                #date = datetime.strptime(row['creation_time.formatted'], "%d %b %Y à %H:%M:%S %Z").timetuple()
                #print(date)
                mylist2 = row['creation_time.formatted'].split(" ")
                mylist3 = mylist2[4].split(":")
                date = date.replace(hour=int(mylist3[0]), year=int(mylist2[2]), month=month_string_to_number(mylist2[1]), day=int(mylist2[0]))
                timestamp = time.mktime(date.timetuple())*1000
                print(timestamp)
                nb_metadata_import += 1
                mybody = row['description']
                if len(row['geo_data.latitude']) > 2:
                    payload_note = {
                        "parent_id": GooglePlus_UID_real,
                        "title": row['creation_time.formatted'],
                        "source": myfilename,
                        "source_url": row['url'],
                        "order": nb_metadata_import,
                        "body": mybody
                    }
                    payload_note_put = {
                        "latitude": float(row['geo_data.latitude']),
                        "longitude": float(row['geo_data.longitude']),
                        "source": myfilename,
                        "source_url": row['url'],
                        "order": nb_metadata_import,
                        "user_created_time": timestamp,
                        "user_updated_time": timestamp,
                        "author": "Google+"
                    }
                else:
                    payload_note = {
                        "parent_id": GooglePlus_UID_real,
                        "title": row['creation_time.formatted'],
                        "source": myfilename,
                        "source_url": row['url'],
                        "order": nb_metadata_import,
                        "user_created_time": timestamp,
                        "user_updated_time": timestamp,
                        "author": "Google+",
                        "body": mybody
                    }
                    payload_note_put = {
                        "source": myfilename,
                        "order": nb_metadata_import,
                        "source_url": row['url'],
                        "user_created_time": timestamp,
                        "user_updated_time": timestamp,
                        "author": "Google+"
                    }
                try:
                    resp = requests.post(url_notes, json=payload_note)
                    resp.raise_for_status()
                    resp_dict = resp.json()
                    print(resp_dict)
                    print(resp_dict['id'])
                    myuid = resp_dict['id']
                except requests.exceptions.HTTPError as e:
                    print("Bad HTTP status code:", e)
                except requests.exceptions.RequestException as e:
                    print("Network error:", e)
                url_notes_put = (
                    "http://"+ip+":"+port+"/notes/"+myuid+"?"
                    "token="+token
                )
                try:
                    resp = requests.put(url_notes_put, json=payload_note_put)
                    resp.raise_for_status()
                    resp_dict = resp.json()
                    print(resp_dict)
                except requests.exceptions.HTTPError as e:
                    print("Bad HTTP status code:", e)
                except requests.exceptions.RequestException as e:
                    print("Network error:", e)
                if my_file.is_file():
                    cmd = "curl -F 'data=@"+filename+"' -F 'props={\"title\":\""+myfilename+"\"}' http://"+ip+":"+port+"/resources?token="+token
                    print("Command"+cmd)
                    resp = os.popen(cmd).read()
                    try:
                        respj = json.loads(resp)
                        print(respj['id'])
                        myuid_picture = respj['id']
                    except ValueError:
                        print('bad json: ', resp)
                    # Embed the uploaded picture in the note body (Joplin resource link)
                    mybody = row['description'] + "\n![](:/" + myuid_picture + ")\n"
                    payload_note_put = {
                        "body": mybody
                    }
                    try:
                        resp = requests.put(url_notes_put, json=payload_note_put)
                        resp.raise_for_status()
                        resp_dict = resp.json()
                        print(resp_dict)
                    except requests.exceptions.HTTPError as e:
                        print("Bad HTTP status code:", e)
                    except requests.exceptions.RequestException as e:
                        print("Network error:", e)
print(nb_metadata)
print(nb_metadata_import)
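The filename juggling at the top of the loop (deriving IMG_1234.jpg from IMG_1234.jpg.metadata.csv) can be isolated into a small testable helper; a sketch (the helper name is mine, not the script's):

```python
import os

def media_filename(metadata_path):
    # "Takeout/Photos/IMG_1234.jpg.metadata.csv" -> "IMG_1234.jpg"
    # i.e. keep the first two dot-separated parts of the basename.
    parts = os.path.basename(metadata_path).split(".")
    return parts[0] + "." + parts[1]
```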
(See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )
Now, with release V3, it's possible to import the data … The last remaining issue is with user_created_time and user_updated_time.
The REST API is very good (https://joplin.cozic.net/api/), but, if it's not too complex, a request: to embed a resource in a note you currently write "![](:/ID_RESOURCE)" in the body; a syntax like "PUT /resources/ID_RESOURCE/notes/ID_NOTE?token=…" would be simpler.
My last source:
I tried to follow the procedure with brew, pip, etc., but without success with version 2.7.2:
$ python --version
Python 2.7.2
I was getting errors like:
$ brew reinstall python
xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
Error: An exception occurred within a child process:
CompilerSelectionError: python cannot be built with any available compilers.
Install GNU's GCC
brew install gcc
$ python -m pip install --user requests
/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python: No module named pip
$ sudo easy_install pip
Searching for pip
Reading http://pypi.python.org/simple/pip/
Couldn't find index page for 'pip' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading http://pypi.python.org/simple/
No local packages or download links found for pip
Best match: None
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/bin/easy_install", line 8, in <module>
    load_entry_point('setuptools==0.6c11', 'console_scripts', 'easy_install')()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1712, in main
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1700, in with_ei_usage
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1716, in <lambda>
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 211, in run
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 434, in easy_install
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/package_index.py", line 475, in fetch_distribution
AttributeError: 'NoneType' object has no attribute 'clone'
So I changed my approach:
It's very easy to see, in two images:


12 trackers vs. 1 tracker.
(See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )
Step 1: Add the first line … to the file DiaroBackup.xml beforehand; it's mandatory!
I use the REST API (https://joplin.cozic.net/api/) to insert the data into Joplin; the documentation is good.
Here is my first Python release to import data from a Diaro app backup into Joplin via its API:
#
# Version 1
#
# ARIAS Frederic
# Sorry ... Python is difficult for me :)
from urllib2 import unquote
from lxml import etree
import os
from time import gmtime, strftime
import time
strftime("%Y-%m-%d %H:%M:%S", gmtime())
start = time.time()
print("Start : Parse Table")
tree = etree.parse("./DiaroBackup.xml")
for table in tree.xpath("/data/table"):
    print(table.get("name"))
print("End : Parse Table")
#Token
ip = "127.0.0.1"
port = "41184"
#token = "ABCD123ABCD123ABCD123ABCD123ABCD123"
token = "blablabla"
cmd = 'curl http://'+ip+':'+port+'/notes?token='+token
print cmd
os.system(cmd)
#Init
Diaro_UID = "12345678901234567801234567890123"
Lat = {}
Lng = {}
Lat[""] = ""
Lng[""] = ""
cmd = 'curl --data \'{ "id": "'+Diaro_UID+'", "title": "Diaro Import"}\' http://'+ip+':'+port+'/folders?token='+token
print cmd
os.system(cmd)
print("Start : Parse Table")
tree = etree.parse("./DiaroBackup.xml")
for table in tree.iter('table'):
    name = table.attrib.get('name')
    print name
    myorder = 1
    for r in table.iter('r'):
        myuid = ""
        mytitle = ""
        mylat = ""
        mylng = ""
        mytags = ""
        mydate = ""
        mytext = ""
        myfilename = ""
        myfolder_uid = Diaro_UID
        mylocation_uid = ""
        myprimary_photo_uid = ""
        myentry_uid = ""
        myorder += 1
        for subelem in r:
            print(subelem.tag)
            if subelem.tag == 'uid':
                myuid = subelem.text
                print ("myuid", myuid)
            if subelem.tag == 'entry_uid':
                myentry_uid = subelem.text
                print ("myentry_uid", myentry_uid)
            if subelem.tag == 'primary_photo_uid':
                myprimary_photo_uid = subelem.text
                print ("myprimary_photo_uid", myprimary_photo_uid)
            if subelem.tag == 'folder_uid':
                myfolder_uid = subelem.text
                print ("myfolder_uid", myfolder_uid)
            if subelem.tag == 'location_uid':
                mylocation_uid = subelem.text
                print ("mylocation_uid", mylocation_uid)
            if subelem.tag == 'date':
                mydate = subelem.text
                print ("mydate", mydate)
            if subelem.tag == 'title':
                mytitle = subelem.text
                print ("mytitle", mytitle)
                print type(mytitle)
                if type(mytitle) == unicode:
                    mytitle = mytitle.encode('utf8')
            if subelem.tag == 'lat':
                mylat = subelem.text
                print ("mylat", mylat)
            if subelem.tag == 'lng':
                mylng = subelem.text
                print ("mylng", mylng)
            if subelem.tag == 'tags':
                mytags = subelem.text
                if mytags:
                    mytags = mytags[1:]  # drop the leading comma
                print ("mytags", mytags)
            if subelem.tag == 'text':
                mytext = subelem.text
                print ("mytext", mytext)
                if type(mytext) == unicode:
                    mytext = mytext.encode('utf8')
            if subelem.tag == 'filename':
                myfilename = subelem.text
                print ("myfilename", myfilename)
        if name == 'diaro_folders':
            cmd = 'curl --data \'{ "id": "'+myuid+'", "title": "'+mytitle+'", "parent_id": "'+Diaro_UID+'"}\' http://'+ip+':'+port+'/folders?token='+token
            print cmd
            os.system(cmd)
        if name == 'diaro_tags':
            cmd = 'curl --data \'{ "id": "'+myuid+'", "title": "'+mytitle+'"}\' http://'+ip+':'+port+'/tags?token='+token
            print cmd
            os.system(cmd)
        if name == 'diaro_attachments':
            cmd = 'curl -F \'data=@media/photo/'+myfilename+'\' -F \'props={"id":"'+myuid+'"}\' http://'+ip+':'+port+'/resources?token='+token
            print cmd
            os.system(cmd)
            cmd = 'curl -X PUT http://'+ip+':'+port+'/resources/'+myuid+'/notes/'+myentry_uid+'?token='+token
            print cmd
            os.system(cmd)
        if name == 'diaro_locations':
            Lat[myuid] = mylat
            Lng[myuid] = mylng
        if name == 'diaro_entries':
            if not mytext:
                mytext = ""
            if not myfolder_uid:
                myfolder_uid = Diaro_UID
            if not mytags:
                mytags = ""
            if not mylocation_uid:
                mylocation_uid = ""
            mytext = mytext.replace("'", "")
            mytitle = mytitle.replace("'", "")
            mytext = mytext.strip("\'")
            mytitle = mytitle.strip("\'")
            mytext = mytext.strip('(')
            mytitle = mytitle.strip('(')
            print type(mytext)
            cmd = 'curl --data \'{"latitude":"'+Lat[mylocation_uid]+'","longitude":"'+Lng[mylocation_uid]+'","tags":"'+mytags+'","parent_id":"'+myfolder_uid+'","id":"'+myuid+'","title":"'+mytitle+'", "created_time": "'+mydate+'", "body": "'+mytext+'"}\' http://'+ip+':'+port+'/notes?token='+token
            print cmd
            os.system(cmd)
print("End : Parse Table")
strftime("%Y-%m-%d %H:%M:%S", gmtime())
done = time.time()
elapsed = done - start
print(elapsed)
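The hand-built curl strings above break whenever a title or body contains a quote, an apostrophe, or a newline, which is why the script strips those characters out. json.dumps handles the escaping and loses no content; a sketch of the same note payload built safely (field names as in the script, the helper name is mine):

```python
import json

def note_payload(uid, title, body, parent_id, created_time):
    # json.dumps escapes quotes, apostrophes and newlines that would
    # otherwise break a shell-quoted curl --data string.
    return json.dumps({
        "id": uid,
        "title": title,
        "body": body,
        "parent_id": parent_id,
        "created_time": created_time,
    })
```

The result can be passed to requests.post(..., data=payload) instead of shelling out to curl.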
But there is one thing about the API I don't understand: I can force the id (for example: 12345678901234567801234567890123).
The format is XML (DiaroBackup.xml); the syntax is as follows:
<data version="2">
  <table name="diaro_folders">
    <r>
      <uid>0773341a39b09938e234d0c4e2970988</uid>
      <title>Folder name</title>
      <color>#ff921c</color>
      <pattern></pattern>
    </r>
    ...
  </table>
  <table name="diaro_tags">
    <r>
      <uid>0b2cc127642c774a77e4e048278fb716</uid>
      <title>Tag name</title>
    </r>
    ...
  </table>
  <table name="diaro_locations">
    <r>
      <uid>008e9d97ecbae5876ceefc3463c57753</uid>
      <title>Place</title>
      <address>Address</address>
      <lat>YY.YYYYY</lat>
      <lng>X.XXXXX</lng>
      <zoom>10</zoom>
    </r>
    ...
  </table>
  <table name="diaro_entries">
    <r>
      <uid>f4526cfd9536ecc422df849bc4b69d89</uid>
      <date>1475771220000</date>
      <tz_offset>+02:00</tz_offset>
      <title>Title</title>
      <text>Text</text>
      <folder_uid>4c4db654f97a84333d4e29fd949cbada</folder_uid>
      <location_uid>85c77bb40d800da8f5a9d9777967d325</location_uid>
      <tags>,28f79fcdf75cb5a3deb10ab40d1ed956,</tags>
      <primary_photo_uid></primary_photo_uid>
      <weather_temperature>null</weather_temperature>
      <weather_icon></weather_icon>
      <weather_description></weather_description>
      <mood>0</mood>
    </r>
    ...
  </table>
  <table name="diaro_attachments">
    <r>
      <uid>0237499c90decb1cc9787ecb11718a35</uid>
      <entry_uid>53b97932b1acc1b4a5be5895d22bc16d</entry_uid>
      <type>photo</type>
      <filename>name.jpg</filename>
      <position>1</position>
    </r>
    ...
  </table>
</data>
Note that the photos then live in the media/photo/ directory.
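The structure above can also be read with the standard library alone, without lxml; a minimal sketch (sample data abridged from the syntax shown above):

```python
import xml.etree.ElementTree as ET

SAMPLE = """<data version="2">
  <table name="diaro_entries">
    <r><uid>f4526cfd</uid><date>1475771220000</date><title>Title</title><text>Text</text></r>
  </table>
</data>"""

def diaro_entries(xml_text):
    # Collect each <r> row of the diaro_entries table as a tag -> text dict.
    root = ET.fromstring(xml_text)
    rows = []
    for table in root.iter("table"):
        if table.get("name") == "diaro_entries":
            for r in table.iter("r"):
                rows.append({child.tag: child.text for child in r})
    return rows
```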
My goal is to convert this into a .ENEX file and then import it into Joplin. I found a rather interesting Python program: https://github.com/andrewheiss/nvalt2evernote ("Convert plain text notes stored in Notational Velocity or nvALT to an .enex file to import into Evernote").