Link to Diaro App: https://diaroapp.com.

But too much tracking!

Link to JOPLIN: https://joplin.cozic.net/, and the REST API: https://joplin.cozic.net/api/
Step 1: Add a first line … at the top of the file DiaroBackup.xml before doing anything else. It's mandatory!
My notes on the REST API: a resource is referenced in a note body with `![](:/ID_RESOURCE)`, and the syntax to link a resource to a note is `PUT /ressources/ID_RESSOURCE/notes/ID_NOTE?token=…`. It's simpler that way. After installing Python 3 (it's easy), run the script below; note that you must put your own token in the script.
(See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )
I have an issue with resources (the link between resources and notes): error 404. The logs are in .config/joplin-desktop/log-clipper.txt:
....: "Request: PUT /ressources/71dd2cba2af54c4ebb53fb7fd8d0543b/notes/cbbc6076b2ac321ccae1f036a2fe6659?token=...."
....: "Error: Not Found
Error: Not Found
at Api.route (/Applications/Joplin.app/Contents/Resources/app/lib/services/rest/Api.js:103:41)
at execRequest (/Applications/Joplin.app/Contents/Resources/app/lib/ClipperServer.js:147:39)
at IncomingMessage.request.on (/Applications/Joplin.app/Contents/Resources/app/lib/ClipperServer.js:185:8)
at emitNone (events.js:105:13)
at IncomingMessage.emit (events.js:207:7)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)"
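One workaround I sketched while debugging this 404 (an assumption on my part, not something the API documentation promised me: there may simply be no dedicated resource-to-note link endpoint): since a resource appears in a note through the Markdown reference `![](:/ID)`, the link can be made by appending that reference to the note body and then updating the note with `PUT /notes/ID_NOTE`.

```python
def attach_resource(body, resource_id):
    # A resource shows up in a note via the Markdown reference ![](:/ID),
    # so "linking" boils down to appending that reference to the note body.
    return body + "\n![](:/" + resource_id + ")"

new_body = attach_resource("My diary entry", "71dd2cba2af54c4ebb53fb7fd8d0543b")
print(new_body)
# My diary entry
# ![](:/71dd2cba2af54c4ebb53fb7fd8d0543b)

# The actual update would then be (not executed here; note_id and token are
# placeholders, and this assumes the Clipper service is running):
# requests.put("http://127.0.0.1:41184/notes/" + note_id + "?token=" + token,
#              json={"body": new_body})
```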
My latest code:
#
# Version 2
# for Python 3
#
# ARIAS Frederic
# Sorry ... It's difficult for me the python :)
#
#from lxml import etree
import xml.etree.ElementTree as etree
from time import gmtime, strftime
import time
import json
import requests
import os

print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
start = time.time()

# Joplin Clipper server and token
ip = "127.0.0.1"
port = "41184"
token = "ABCD123ABCD123ABCD123ABCD123ABCD123"

url_notes = "http://" + ip + ":" + port + "/notes?token=" + token
url_folders = "http://" + ip + ":" + port + "/folders?token=" + token
url_tags = "http://" + ip + ":" + port + "/tags?token=" + token
url_resources = "http://" + ip + ":" + port + "/resources?token=" + token

# Init
Diaro_UID = "12345678901234567801234567890123"
Lat = {}
Lng = {}
UID = {}
TAGS = {}
Lat[""] = ""
Lng[""] = ""

# Create the root notebook that will hold the whole import
payload = {
    "id": Diaro_UID,
    "title": "Diaro Import"
}
try:
    resp = requests.post(url_folders, json=payload)
    #time.sleep(1)
    resp.raise_for_status()
    resp_dict = resp.json()
    print(resp_dict)
    print("My ID")
    print(resp_dict['id'])
    Diaro_UID_real = resp_dict['id']
    UID[Diaro_UID] = str(resp_dict['id'])
except requests.exceptions.HTTPError as e:
    print("Bad HTTP status code:", e)
except requests.exceptions.RequestException as e:
    print("Network error:", e)

print("Start : Parse Table")
tree = etree.parse("./DiaroBackup.xml")
for table in tree.iter('table'):
    name = table.attrib.get('name')
    print(name)
    myorder = 1
    for r in table.iter('r'):
        myuid = ""
        mytitle = ""
        mylat = ""
        mylng = ""
        mytags = ""
        mydate = ""
        mydate_ms = 0
        mytext = ""
        myfilename = ""
        myfolder_uid = Diaro_UID
        mylocation_uid = ""
        myprimary_photo_uid = ""
        myentry_uid = ""
        myorder += 1
        for subelem in r:
            print(subelem.tag)
            if subelem.tag == 'uid':
                myuid = subelem.text
                print("myuid", myuid)
            if subelem.tag == 'entry_uid':
                myentry_uid = subelem.text
                print("myentry_uid", myentry_uid)
            if subelem.tag == 'primary_photo_uid':
                myprimary_photo_uid = subelem.text
                print("myprimary_photo_uid", myprimary_photo_uid)
            if subelem.tag == 'folder_uid':
                myfolder_uid = subelem.text
                print("myfolder_uid", myfolder_uid)
            if subelem.tag == 'location_uid':
                mylocation_uid = subelem.text
                print("mylocation_uid", mylocation_uid)
            if subelem.tag == 'date':
                mydate = subelem.text
                mydate_ms = int(mydate)
                print("mydate", mydate, " in ms", mydate_ms)
            if subelem.tag == 'title':
                mytitle = subelem.text
                print("mytitle", mytitle)
            if subelem.tag == 'lat':
                mylat = subelem.text
                print("mylat", mylat)
            if subelem.tag == 'lng':
                mylng = subelem.text
                print("mylng", mylng)
            if subelem.tag == 'tags':
                mytags = subelem.text
                if mytags:
                    mytags = mytags[1:]  # drop the leading comma
                print("mytags", mytags)
            if subelem.tag == 'text':
                mytext = subelem.text
                print("mytext", mytext)
            if subelem.tag == 'filename':
                myfilename = subelem.text
                print("myfilename", myfilename)
        if name == 'diaro_folders':
            payload_folder = {
                "id": myuid,
                "title": mytitle,
                "parent_id": Diaro_UID_real
            }
            print(payload_folder)
            try:
                resp = requests.post(url_folders, json=payload_folder)
                resp.raise_for_status()
                resp_dict = resp.json()
                print(resp_dict)
                print(resp_dict['id'])
                UID[myuid] = str(resp_dict['id'])
            except requests.exceptions.HTTPError as e:
                print("Bad HTTP status code:", e)
            except requests.exceptions.RequestException as e:
                print("Network error:", e)
        if name == 'diaro_tags':
            payload_tags = {
                "id": myuid,
                "title": mytitle
            }
            try:
                resp = requests.post(url_tags, json=payload_tags)
                resp.raise_for_status()
                resp_dict = resp.json()
                print(resp_dict)
                print(resp_dict['id'])
                UID[myuid] = resp_dict['id']
                TAGS[myuid] = mytitle
            except requests.exceptions.HTTPError as e:
                print("Bad HTTP status code:", e)
            except requests.exceptions.RequestException as e:
                print("Network error:", e)
        if name == 'diaro_attachments':
            payload_ressource = {
                "id": myuid
            }
            filename = "./media/photo/" + myfilename
            files = {'document': open(filename, 'rb')}
            files2 = {'data': open(filename, 'rb')}
            files3 = {'data': open(filename, 'rb'), 'props': payload_ressource}
            data_ressource = {
                "title": myfilename
            }
            multiple_files = [
                ('data', (myfilename, open(filename, 'rb'))),
                ('props', data_ressource)]
            headers = {'Content-type': 'multipart/form-data'}
            print("Push : " + filename)
            #print(os.path.isfile(filename))
            print("----------0-----------")
            # None of my attempts to upload the file with requests worked:
            #try:
            #    resp = requests.post(url_resources, files=filename, json=payload_ressource)
            #    resp = requests.post(url_resources, files=files, json=payload_ressource, headers=headers)
            #    resp = requests.post(url_resources, files=files2, headers=headers)
            #    resp = requests.post(url_resources, files={'data': (myfilename, open(filename, 'rb'), 'image/jpg')}, data={'id': myuid}, headers=headers)
            #    resp = requests.post(url_resources, files=files2, data=data_ressource, headers=headers)
            #    resp = requests.post(url_resources, files=multiple_files, headers=headers)
            #    resp = requests.post(url_resources, files=multiple_files)
            #    resp.raise_for_status()
            #    if resp.status_code == requests.codes.ok:
            #        resp_dict = resp.json()
            #        print(resp_dict)
            #        print(resp_dict['id'])
            #        UID[myuid] = resp_dict['id']
            #except requests.exceptions.HTTPError as e:
            #    print("Bad HTTP status code:", e)
            #    UID[myuid] = ""
            #    print("----------1-----------")
            #except requests.exceptions.RequestException as e:
            #    print("Network error:", e)
            #    UID[myuid] = ""
            #    print("----------2-----------")
            # ... so I fall back on curl, which works for the upload:
            cmd = "curl -F 'data=@" + filename + "' -F 'props={\"title\":\"" + myfilename + "\"}' http://" + ip + ":" + port + "/resources?token=" + token
            resp = os.popen(cmd).read()
            respj = json.loads(resp)
            print(respj['id'])
            UID[myuid] = respj['id']
            print("Link : ", myuid, " => ", myentry_uid, " // ", UID[myuid], " => ", UID[myentry_uid])
            time.sleep(1)
            # This is the call that fails with "Error: Not Found" (see the logs above):
            cmd = "curl -X PUT http://" + ip + ":" + port + "/ressources/" + UID[myuid] + "/notes/" + UID[myentry_uid] + "?token=" + token
            resp = os.popen(cmd).read()
            print(resp)
            # The same call through requests gives the same result:
            #url_link = ("http://" + ip + ":" + port + "/ressources/" + UID[myuid]
            #            + "/notes/" + UID[myentry_uid] + "?token=" + token)
            #try:
            #    resp = requests.post(url_link)
            #    resp.raise_for_status()
            #    resp_dict = resp.json()
            #    print(resp_dict)
            #    print(resp_dict['id'])
            #    UID[myuid] = resp_dict['id']
            #except requests.exceptions.HTTPError as e:
            #    print("Bad HTTP status code:", e)
            #except requests.exceptions.RequestException as e:
            #    print("Network error:", e)
        if name == 'diaro_locations':
            Lat[myuid] = mylat
            Lng[myuid] = mylng
        if name == 'diaro_entries':
            if not mytext:
                mytext = ""
            if not myfolder_uid:
                myfolder_uid = Diaro_UID
            if not mytags:
                mytags = ""
            if not mylocation_uid:
                mylocation_uid = ""
            mytext = mytext.replace("'", "")
            mytitle = mytitle.replace("'", "")
            mytext = mytext.strip("'")
            mytitle = mytitle.strip("'")
            mytext = mytext.strip('(')
            mytitle = mytitle.strip('(')
            listtags = mytags.split(",")
            new_tagslist = ""
            for uid_tags in listtags:
                if len(uid_tags) > 2:
                    if uid_tags in TAGS:
                        new_tagslist = new_tagslist + TAGS[uid_tags] + ","
            print("TAGS", mytags, "==>", new_tagslist)
            payload_note = {
                "id": myuid,
                "latitude": Lat[mylocation_uid],
                "longitude": Lng[mylocation_uid],
                "tags": new_tagslist,
                "parent_id": UID[myfolder_uid],
                "title": mytitle,
                #"created_time": mydate_ms,
                "user_created_time": mydate_ms,
                "user_updated_time": mydate_ms,
                "author": "Diaro",
                "body": mytext
            }
            try:
                resp = requests.post(url_notes, json=payload_note)
                resp.raise_for_status()
                resp_dict = resp.json()
                print(resp_dict)
                print(resp_dict['id'])
                UID[myuid] = resp_dict['id']
            except requests.exceptions.HTTPError as e:
                print("Bad HTTP status code:", e)
            except requests.exceptions.RequestException as e:
                print("Network error:", e)

print("End : Parse Table")
print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
done = time.time()
elapsed = done - start
print(elapsed)
# END : Ouf ...
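Diaro stores each `<date>` as milliseconds since the Unix epoch, and Joplin's `user_created_time` / `user_updated_time` fields are in milliseconds as well, so the script passes the value through unchanged. A quick sanity check on the sample timestamp that appears in the backup format shown later in this post:

```python
from datetime import datetime, timezone

# Diaro <date> value: milliseconds since the Unix epoch
mydate_ms = 1475771220000
dt = datetime.fromtimestamp(mydate_ms / 1000, tz=timezone.utc)
print(dt.isoformat())  # 2016-10-06T16:27:00+00:00
```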
I use the REST API to insert into JOPLIN: https://joplin.cozic.net/api/ ; the documentation is good.
Here is my first release in Python to import data from a Diaro App backup through the Joplin API:
#
# Version 1
# (for Python 2: urllib2, unicode)
#
# ARIAS Frederic
# Sorry ... It's difficult for me the python :)
#
from urllib2 import unquote
from lxml import etree
import os
from time import gmtime, strftime
import time

print strftime("%Y-%m-%d %H:%M:%S", gmtime())
start = time.time()

print("Start : Parse Table")
tree = etree.parse("./DiaroBackup.xml")
for table in tree.xpath("/data/table"):
    print(table.get("name"))
print("End : Parse Table")

# Joplin Clipper server and token
ip = "127.0.0.1"
port = "41184"
#token = "ABCD123ABCD123ABCD123ABCD123ABCD123"
token = "blablabla"

cmd = 'curl http://' + ip + ':' + port + '/notes?token=' + token
print cmd
os.system(cmd)

# Init
Diaro_UID = "12345678901234567801234567890123"
Lat = {}
Lng = {}
Lat[""] = ""
Lng[""] = ""

cmd = 'curl --data \'{ "id": "' + Diaro_UID + '", "title": "Diaro Import"}\' http://' + ip + ':' + port + '/folders?token=' + token
print cmd
os.system(cmd)

print("Start : Parse Table")
tree = etree.parse("./DiaroBackup.xml")
for table in tree.iter('table'):
    name = table.attrib.get('name')
    print name
    myorder = 1
    for r in table.iter('r'):
        myuid = ""
        mytitle = ""
        mylat = ""
        mylng = ""
        mytags = ""
        mydate = ""
        mytext = ""
        myfilename = ""
        myfolder_uid = Diaro_UID
        mylocation_uid = ""
        myprimary_photo_uid = ""
        myentry_uid = ""
        myorder += 1
        for subelem in r:
            print(subelem.tag)
            if subelem.tag == 'uid':
                myuid = subelem.text
                print ("myuid", myuid)
            if subelem.tag == 'entry_uid':
                myentry_uid = subelem.text
                print ("myentry_uid", myentry_uid)
            if subelem.tag == 'primary_photo_uid':
                myprimary_photo_uid = subelem.text
                print ("myprimary_photo_uid", myprimary_photo_uid)
            if subelem.tag == 'folder_uid':
                myfolder_uid = subelem.text
                print ("myfolder_uid", myfolder_uid)
            if subelem.tag == 'location_uid':
                mylocation_uid = subelem.text
                print ("mylocation_uid", mylocation_uid)
            if subelem.tag == 'date':
                mydate = subelem.text
                print ("mydate", mydate)
            if subelem.tag == 'title':
                mytitle = subelem.text
                print ("mytitle", mytitle)
                print type(mytitle)
                if type(mytitle) == unicode:
                    mytitle = mytitle.encode('utf8')
            if subelem.tag == 'lat':
                mylat = subelem.text
                print ("mylat", mylat)
            if subelem.tag == 'lng':
                mylng = subelem.text
                print ("mylng", mylng)
            if subelem.tag == 'tags':
                mytags = subelem.text
                if mytags:
                    mytags = mytags[1:]  # drop the leading comma
                print ("mytags", mytags)
            if subelem.tag == 'text':
                mytext = subelem.text
                print ("mytext", mytext)
                if type(mytext) == unicode:
                    mytext = mytext.encode('utf8')
            if subelem.tag == 'filename':
                myfilename = subelem.text
                print ("myfilename", myfilename)
        if name == 'diaro_folders':
            cmd = 'curl --data \'{ "id": "' + myuid + '", "title": "' + mytitle + '", "parent_id": "' + Diaro_UID + '"}\' http://' + ip + ':' + port + '/folders?token=' + token
            print cmd
            os.system(cmd)
        if name == 'diaro_tags':
            cmd = 'curl --data \'{ "id": "' + myuid + '", "title": "' + mytitle + '"}\' http://' + ip + ':' + port + '/tags?token=' + token
            print cmd
            os.system(cmd)
        if name == 'diaro_attachments':
            cmd = 'curl -F \'data=@media/photo/' + myfilename + '\' -F \'props={"id":"' + myuid + '"}\' http://' + ip + ':' + port + '/resources?token=' + token
            print cmd
            os.system(cmd)
            cmd = 'curl -X PUT http://' + ip + ':' + port + '/resources/' + myuid + '/notes/' + myentry_uid + '?token=' + token
            print cmd
            os.system(cmd)
        if name == 'diaro_locations':
            Lat[myuid] = mylat
            Lng[myuid] = mylng
        if name == 'diaro_entries':
            if not mytext:
                mytext = ""
            if not myfolder_uid:
                myfolder_uid = Diaro_UID
            if not mytags:
                mytags = ""
            if not mylocation_uid:
                mylocation_uid = ""
            mytext = mytext.replace("'", "")
            mytitle = mytitle.replace("'", "")
            mytext = mytext.strip("'")
            mytitle = mytitle.strip("'")
            mytext = mytext.strip('(')
            mytitle = mytitle.strip('(')
            print type(mytext)
            cmd = 'curl --data \'{"latitude":"' + Lat[mylocation_uid] + '","longitude":"' + Lng[mylocation_uid] + '","tags":"' + mytags + '","parent_id":"' + myfolder_uid + '","id":"' + myuid + '","title":"' + mytitle + '", "created_time": "' + mydate + '", "body": "' + mytext + '"}\' http://' + ip + ':' + port + '/notes?token=' + token
            print cmd
            os.system(cmd)

print("End : Parse Table")
print strftime("%Y-%m-%d %H:%M:%S", gmtime())
done = time.time()
elapsed = done - start
print(elapsed)
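The quote-stripping in version 1 was only needed because the JSON payload is glued together by string concatenation inside a curl command. A safer sketch is to build the payload as a dict and let json.dumps do the escaping of quotes, backslashes and newlines (this is effectively what version 2 gets for free from requests' json= argument, which also sidesteps shell quoting entirely):

```python
import json

# Build the note payload as a dict; json.dumps escapes the awkward characters
payload = {
    "id": "f4526cfd9536ecc422df849bc4b69d89",
    "title": "It's a \"quoted\" title",
    "body": 'Line 1\nLine "2"',
}
data = json.dumps(payload)
print(data)  # double quotes become \" and the newline becomes \n
```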
But there is one thing I don't understand about the API: I can force the id (for example: 12345678901234567801234567890123).
The format is XML: DiaroBackup.xml, with the following syntax:
<data version="2">
  <table name="diaro_folders">
    <r>
      <uid>0773341a39b09938e234d0c4e2970988</uid>
      <title>Folder name</title>
      <color>#ff921c</color>
      <pattern></pattern>
    </r>
    ...
  </table>
  <table name="diaro_tags">
    <r>
      <uid>0b2cc127642c774a77e4e048278fb716</uid>
      <title>Tag name</title>
    </r>
    ...
  </table>
  <table name="diaro_locations">
    <r>
      <uid>008e9d97ecbae5876ceefc3463c57753</uid>
      <title>Place</title>
      <address>Address</address>
      <lat>YY.YYYYY</lat>
      <lng>X.XXXXX</lng>
      <zoom>10</zoom>
    </r>
    ...
  </table>
  <table name="diaro_entries">
    <r>
      <uid>f4526cfd9536ecc422df849bc4b69d89</uid>
      <date>1475771220000</date>
      <tz_offset>+02:00</tz_offset>
      <title>Title</title>
      <text>Text</text>
      <folder_uid>4c4db654f97a84333d4e29fd949cbada</folder_uid>
      <location_uid>85c77bb40d800da8f5a9d9777967d325</location_uid>
      <tags>,28f79fcdf75cb5a3deb10ab40d1ed956,</tags>
      <primary_photo_uid></primary_photo_uid>
      <weather_temperature>null</weather_temperature>
      <weather_icon></weather_icon>
      <weather_description></weather_description>
      <mood>0</mood>
    </r>
    ...
  </table>
  <table name="diaro_attachments">
    <r>
      <uid>0237499c90decb1cc9787ecb11718a35</uid>
      <entry_uid>53b97932b1acc1b4a5be5895d22bc16d</entry_uid>
      <type>photo</type>
      <filename>name.jpg</filename>
      <position>1</position>
    </r>
    ...
  </table>
</data>
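This structure is easy to walk with the standard library; a minimal sketch on a trimmed sample in the same shape as DiaroBackup.xml:

```python
import xml.etree.ElementTree as etree

# Trimmed sample in the same shape as DiaroBackup.xml
sample = """<data version="2">
  <table name="diaro_entries">
    <r>
      <uid>f4526cfd9536ecc422df849bc4b69d89</uid>
      <date>1475771220000</date>
      <title>Title</title>
      <text>Text</text>
      <tags>,28f79fcdf75cb5a3deb10ab40d1ed956,</tags>
    </r>
  </table>
</data>"""

root = etree.fromstring(sample)
for table in root.iter('table'):
    print(table.attrib['name'])                       # diaro_entries
    for r in table.iter('r'):
        print(r.findtext('uid'), r.findtext('date'))  # uid and date in ms
```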
Note that the photos then live in the media/photo/ directory.
My goal was to convert all this to an .ENEX file and then import it into Joplin. I found an interesting Python program: https://github.com/andrewheiss/nvalt2evernote — "Convert plain text notes stored in Notational Velocity or nvALT to an .enex file to import into Evernote."
In my opinion Diaro should do more development on data imports. Currently, people who switch from iPhone to Android are looking for an equivalent of Awesome Note, and the problem is that nobody imports Awesome Note backup data. In fact, before 22/05/2017 it was impossible because the data was encrypted; now it is entirely possible. Each note is a plist file in binary format.
Since 22/05/2017 it has been easier to get access to the data in Awesome Note:
The situation is simple: with this app (Awesome Note) you do not have a usable local backup of your data. You do have a backup file, but it is protected by a password, and BRID does not communicate that password. On its site, Awesome Note does not make this clear:
"3. Awesome Note Data Backup: This will compress all notes created in Awesome Note and save them to one '.anb' file. The backup file will be saved according to the date, for you to choose which file to restore from in the future."
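Since each Awesome Note note is a binary plist, Python's standard plistlib can read one directly once you have the file. A sketch on synthetic data (I am not shipping a real Awesome Note file here, so the dict content is made up; only the binary plist format itself is the point):

```python
import plistlib

# Simulate a note stored as a binary plist (keys are illustrative, not
# the real Awesome Note schema)
note = {"title": "My note", "body": "Hello"}
blob = plistlib.dumps(note, fmt=plistlib.FMT_BINARY)

print(blob[:8])  # binary plists start with the magic b'bplist00'
restored = plistlib.loads(blob)  # format is auto-detected
print(restored["title"])
```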
The software's site is http://www.diaroapp.com; the latest version dates from "Aug 12, 2016". The company behind it is http://www.pixelcrater.com (Audit House, 260 Field End Road, Ruislip, UK). Since that last version there has not been much activity on the software, which is a shame, because it seemed fairly clean to me. I think it is clean precisely because it is easy for another program to take over the data. The main problem with this type of software is data migration.
A short post on my first survey of "personal journal" software:
On Mac: I selected 7.
On iOS: I kept only 2.