Step 0: Install Joplin and enable the REST API (https://joplin.cozic.net/api/).
Step 1: Install staticmap with pip (for more information, see https://github.com/komoot/staticmap):
$ pip install staticmap
Collecting staticmap
Downloading https://files.pythonhosted.org/packages/f9/9f/5a3843533eab037cba031486175c4db1b214614404a29516208ff228dead/staticmap-0.5.4.tar.gz
Collecting Pillow (from staticmap)
Downloading https://files.pythonhosted.org/packages/c9/ed/27cc92e99b9ccaa0985a66133baeea7e8a3371d3c04cfa353aaa3b81aac1/Pillow-5.4.1-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (3.7MB)
100% |████████████████████████████████| 3.7MB 6.3MB/s
Requirement already satisfied: requests in /usr/local/lib/python3.7/site-packages (from staticmap) (2.21.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (2.8)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (1.24.1)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/site-packages (from requests->staticmap) (2018.11.29)
Building wheels for collected packages: staticmap
Building wheel for staticmap (setup.py) ... done
Stored in directory: /Users/..../Library/Caches/pip/wheels/fe/a6/a5/2acceb72471d85bd0498973aabd611e6ff1cdd48796790f047
Successfully built staticmap
Installing collected packages: Pillow, staticmap
Successfully installed Pillow-5.4.1 staticmap-0.5.4
The source code:

Step 0: Install Joplin (https://joplin.cozic.net) and start the REST API (easy).
Step 1: Put this script in a folder.
Step 2: Edit the script and insert your token.
Step 3: Run the script.

The script:
#
# Version 1
# for Python 3
#
# ARIAS Frederic
# Sorry ... Python is difficult for me :)
#
import time
import json
import feedparser
import requests
# Joplin Web Clipper API address and token
ip = "127.0.0.1"
port = "41184"
token = "Put your token here"
headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
url_notes = "http://" + ip + ":" + port + "/notes?token=" + token
url_folders = "http://" + ip + ":" + port + "/folders?token=" + token
url_tags = "http://" + ip + ":" + port + "/tags?token=" + token
# Note: the endpoint is spelled "resources"
url_resources = "http://" + ip + ":" + port + "/resources?token=" + token
# Init: create a parent folder for the import
Wordpress_UID = "12345678901234567801234567890123"
UID = {}
payload = {
    "id": Wordpress_UID,
    "title": "Wordpress Import"
}
try:
    resp = requests.post(url_folders, data=json.dumps(payload, separators=(',', ':')), headers=headers)
    resp.raise_for_status()
    resp_dict = resp.json()
    print(resp_dict)
    print("My ID")
    print(resp_dict['id'])
    Wordpress_UID_real = resp_dict['id']
    UID[Wordpress_UID] = str(resp_dict['id'])
except requests.exceptions.HTTPError as e:
    print("Bad HTTP status code:", e)
except requests.exceptions.RequestException as e:
    print("Network error:", e)
feed = feedparser.parse("https://www.cyber-neurones.org/feed/")
feed_title = feed['feed']['title']
feed_entries = feed.entries
numero = -2
nb_entries = 1
nb_metadata_import = 1
while nb_entries > 0:
    print("----- Page ", numero, "-------")
    numero += 2
    url = "https://www.cyber-neurones.org/feed/?paged=" + str(numero)
    feed = feedparser.parse(url)
    feed_title = feed['feed']['title']
    feed_entries = feed.entries
    nb_entries = len(feed['entries'])
    for entry in feed.entries:
        nb_metadata_import += 1
        my_title = entry.title
        my_link = entry.link
        article_published_at = entry.published                # Unicode string
        article_published_at_parsed = entry.published_parsed  # Time object
        article_author = entry.author
        # Joplin expects timestamps in milliseconds
        timestamp = time.mktime(entry.published_parsed) * 1000
        print("Published at " + article_published_at)
        my_body = entry.description
        payload_note = {
            "parent_id": Wordpress_UID_real,
            "title": my_title,
            "source": "Wordpress",
            "source_url": my_link,
            "order": nb_metadata_import,
            "user_created_time": timestamp,
            "user_updated_time": timestamp,
            "author": article_author,
            "body_html": my_body
        }
        payload_note_put = {
            "source": "Wordpress",
            "order": nb_metadata_import,
            "source_url": my_link,
            "user_created_time": timestamp,
            "user_updated_time": timestamp,
            "author": article_author
        }
        myuid = None
        try:
            resp = requests.post(url_notes, json=payload_note)
            resp.raise_for_status()
            resp_dict = resp.json()
            print(resp_dict)
            print(resp_dict['id'])
            myuid = resp_dict['id']
        except requests.exceptions.HTTPError as e:
            print("Bad HTTP status code:", e)
        except requests.exceptions.RequestException as e:
            print("Network error:", e)
        if myuid is None:
            continue  # note creation failed, nothing to update
        url_notes_put = "http://" + ip + ":" + port + "/notes/" + myuid + "?token=" + token
        try:
            resp = requests.put(url_notes_put, json=payload_note_put)
            resp.raise_for_status()
            resp_dict = resp.json()
            print(resp_dict)
        except requests.exceptions.HTTPError as e:
            print("Bad HTTP status code:", e)
        except requests.exceptions.RequestException as e:
            print("Network error:", e)
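The script builds Joplin's millisecond timestamps with time.mktime(entry.published_parsed) * 1000, which interprets the parsed time as local time. A timezone-safe sketch using only the standard library (the helper name to_joplin_ms is mine, not part of any API):

```python
from email.utils import parsedate_to_datetime

def to_joplin_ms(published):
    # Parse the RFC-822 date string a WordPress feed provides
    # (entry.published) and convert it to the millisecond epoch
    # values Joplin expects in user_created_time / user_updated_time.
    dt = parsedate_to_datetime(published)
    return int(dt.timestamp() * 1000)

print(to_joplin_ms("Mon, 11 Feb 2019 10:00:00 +0000"))
```

Because the parsed datetime carries its UTC offset, the result does not depend on the machine's local timezone, unlike time.mktime.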
Link to Diaro App: https://diaroapp.com.

But too much tracking!

Link to Joplin: https://joplin.cozic.net/, and the REST API: https://joplin.cozic.net/api/

Step 1: Add the first line at the top of the file DiaroBackup.xml … it's mandatory!

My note on the REST API: a resource is referenced from a note body with ![](:/ID_RESOURCE). A syntax like PUT /resources/ID_RESOURCE/notes/ID_NOTE?token=… would be simpler. After installing python3 (it's easy), run this script; note: put your token in the script.
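The ![](:/ID_RESOURCE) body syntax can be wrapped in a small helper that appends such a reference to a note body before it is posted. A minimal sketch (the function name body_with_resource is mine, not from the Joplin API):

```python
def body_with_resource(body, resource_id, title=""):
    # Append a Markdown reference to an uploaded Joplin resource;
    # Joplin resolves ![title](:/resource_id) to the attachment.
    return body + "\n![" + title + "](:/" + resource_id + ")"

print(body_with_resource("My entry", "71dd2cba2af54c4ebb53fb7fd8d0543b"))
```

The resulting body can then be sent via POST /notes or PUT /notes/:id as shown in the scripts.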
Here is what I did to install pip on Mac OS:
$ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1662k 100 1662k 0 0 560k 0 0:00:02 0:00:02 --:--:-- 560k
$ python3 get-pip.py
Collecting pip
Downloading https://files.pythonhosted.org/packages/d7/41/34dd96bd33958e52cb4da2f1bf0818e396514fd4f4725a79199564cd0c20/pip-19.0.2-py2.py3-none-any.whl (1.4MB)
100% |████████████████████████████████| 1.4MB 154kB/s
Installing collected packages: pip
Found existing installation: pip 18.1
Uninstalling pip-18.1:
Successfully uninstalled pip-18.1
Successfully installed pip-19.0.2
$ pip install feedparser
Collecting feedparser
Downloading https://files.pythonhosted.org/packages/91/d8/7d37fec71ff7c9dbcdd80d2b48bcdd86d6af502156fc93846fb0102cb2c4/feedparser-5.2.1.tar.bz2 (192kB)
100% |████████████████████████████████| 194kB 500kB/s
Building wheels for collected packages: feedparser
Building wheel for feedparser (setup.py) ... done
Stored in directory: ....
Successfully built feedparser
Installing collected packages: feedparser
Successfully installed feedparser-5.2.1
(See the final release: https://www.cyber-neurones.org/2019/02/diaro-app-pixel-crater-ltd-diarobackup-xml-how-to-migrate-data-to-joplin/ )
Now, with release V3, it's possible to import the data … The last issue is with user_created_time and user_updated_time.
The REST API is very good (https://joplin.cozic.net/api/), but, if it's not too complex, a syntax like PUT /resources/ID_RESOURCE/notes/ID_NOTE?token=… for linking a resource (referenced in a note as ![](:/ID_RESOURCE)) to a note would be welcome. My latest source:
I thought I had found my bug … I thought it was the space before the value that kept the values from being taken into account.

With requests.post(url_folders, json=payload) we get a space, i.e. a 0x20.

With requests.post(url_folders, data=json.dumps(payload, separators=(',',':')), headers=headers),
where headers = {'Content-type': 'application/json', 'Accept': 'text/plain'},

I no longer have the space, but the numeric values are still not taken into account …
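The difference between the two calls can be seen directly with json.dumps: the default separators insert a space (0x20) after ',' and ':', while separators=(',', ':') produces compact JSON:

```python
import json

payload = {"id": "12345678901234567801234567890123", "title": "Diaro Import"}

default_form = json.dumps(payload)                        # uses ', ' and ': '
compact_form = json.dumps(payload, separators=(",", ":"))  # no spaces

print(default_form)
print(compact_form)
```

Note that requests' json= keyword uses the default separators; the whitespace is legal JSON either way, so a server that parses JSON correctly should treat both forms the same.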
I have an issue with resources (the link between resources and notes) … error 404. The logs are in .config/joplin-desktop/log-clipper.txt:
....: "Request: PUT /ressources/71dd2cba2af54c4ebb53fb7fd8d0543b/notes/cbbc6076b2ac321ccae1f036a2fe6659?token=...."
....: "Error: Not Found
Error: Not Found
at Api.route (/Applications/Joplin.app/Contents/Resources/app/lib/services/rest/Api.js:103:41)
at execRequest (/Applications/Joplin.app/Contents/Resources/app/lib/ClipperServer.js:147:39)
at IncomingMessage.request.on (/Applications/Joplin.app/Contents/Resources/app/lib/ClipperServer.js:185:8)
at emitNone (events.js:105:13)
at IncomingMessage.emit (events.js:207:7)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)"
My latest code:
#
# Version 2
# for Python 3
#
# ARIAS Frederic
# Sorry ... Python is difficult for me :)
#
#from lxml import etree
import xml.etree.ElementTree as etree
from time import gmtime, strftime
import time
import json
import requests
import os

print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
start = time.time()
# Joplin Web Clipper API address and token
ip = "127.0.0.1"
port = "41184"
token = "ABCD123ABCD123ABCD123ABCD123ABCD123"
url_notes = "http://" + ip + ":" + port + "/notes?token=" + token
url_folders = "http://" + ip + ":" + port + "/folders?token=" + token
url_tags = "http://" + ip + ":" + port + "/tags?token=" + token
# Note: the endpoint is spelled "resources"
url_resources = "http://" + ip + ":" + port + "/resources?token=" + token
# Init: create a parent folder for the import
Diaro_UID = "12345678901234567801234567890123"
Lat = {}
Lng = {}
UID = {}
TAGS = {}
Lat[""] = ""
Lng[""] = ""
payload = {
    "id": Diaro_UID,
    "title": "Diaro Import"
}
try:
    resp = requests.post(url_folders, json=payload)
    resp.raise_for_status()
    resp_dict = resp.json()
    print(resp_dict)
    print("My ID")
    print(resp_dict['id'])
    Diaro_UID_real = resp_dict['id']
    UID[Diaro_UID] = str(resp_dict['id'])
except requests.exceptions.HTTPError as e:
    print("Bad HTTP status code:", e)
except requests.exceptions.RequestException as e:
    print("Network error:", e)
print("Start : Parse Table")
tree = etree.parse("./DiaroBackup.xml")
for table in tree.iter('table'):
    name = table.attrib.get('name')
    print(name)
    myorder = 1
    for r in table.iter('r'):
        myuid = ""
        mytitle = ""
        mylat = ""
        mylng = ""
        mytags = ""
        mydate = ""
        mydate_ms = 0
        mytext = ""
        myfilename = ""
        myfolder_uid = Diaro_UID
        mylocation_uid = ""
        myprimary_photo_uid = ""
        myentry_uid = ""
        myorder += 1
        for subelem in r:
            print(subelem.tag)
            if subelem.tag == 'uid':
                myuid = subelem.text
                print("myuid", myuid)
            if subelem.tag == 'entry_uid':
                myentry_uid = subelem.text
                print("myentry_uid", myentry_uid)
            if subelem.tag == 'primary_photo_uid':
                myprimary_photo_uid = subelem.text
                print("myprimary_photo_uid", myprimary_photo_uid)
            if subelem.tag == 'folder_uid':
                myfolder_uid = subelem.text
                print("myfolder_uid", myfolder_uid)
            if subelem.tag == 'location_uid':
                mylocation_uid = subelem.text
                print("mylocation_uid", mylocation_uid)
            if subelem.tag == 'date':
                mydate = subelem.text
                mydate_ms = int(mydate)
                print("mydate", mydate, " in ms", mydate_ms)
            if subelem.tag == 'title':
                mytitle = subelem.text
                print("mytitle", mytitle)
            if subelem.tag == 'lat':
                mylat = subelem.text
                print("mylat", mylat)
            if subelem.tag == 'lng':
                mylng = subelem.text
                print("mylng", mylng)
            if subelem.tag == 'tags':
                mytags = subelem.text
                if mytags:
                    mytags = mytags[1:]  # drop the leading separator
                print("mytags", mytags)
            if subelem.tag == 'text':
                mytext = subelem.text
                print("mytext", mytext)
            if subelem.tag == 'filename':
                myfilename = subelem.text
                print("myfilename", myfilename)
        if name == 'diaro_folders':
            payload_folder = {
                "id": myuid,
                "title": mytitle,
                "parent_id": Diaro_UID_real
            }
            print(payload_folder)
            try:
                resp = requests.post(url_folders, json=payload_folder)
                resp.raise_for_status()
                resp_dict = resp.json()
                print(resp_dict)
                print(resp_dict['id'])
                UID[myuid] = str(resp_dict['id'])
            except requests.exceptions.HTTPError as e:
                print("Bad HTTP status code:", e)
            except requests.exceptions.RequestException as e:
                print("Network error:", e)
        if name == 'diaro_tags':
            payload_tags = {
                "id": myuid,
                "title": mytitle
            }
            try:
                resp = requests.post(url_tags, json=payload_tags)
                resp.raise_for_status()
                resp_dict = resp.json()
                print(resp_dict)
                print(resp_dict['id'])
                UID[myuid] = resp_dict['id']
                TAGS[myuid] = mytitle
            except requests.exceptions.HTTPError as e:
                print("Bad HTTP status code:", e)
            except requests.exceptions.RequestException as e:
                print("Network error:", e)
        if name == 'diaro_attachments':
            filename = "./media/photo/" + myfilename
            print("Push : " + filename)
            # Several attempts with requests here (files=..., a
            # multipart/form-data header, props passed as a second form
            # part) all failed, so the upload falls back to curl.
            cmd = "curl -F 'data=@" + filename + "' -F 'props={\"title\":\"" + myfilename + "\"}' http://" + ip + ":" + port + "/resources?token=" + token
            resp = os.popen(cmd).read()
            respj = json.loads(resp)
            print(respj['id'])
            UID[myuid] = respj['id']
            print("Link : ", myuid, " => ", myentry_uid, " // ", UID[myuid], " => ", UID[myentry_uid])
            time.sleep(1)
            # Attempt to attach the resource to its note; this is the PUT
            # that returned 404 in the log above.
            cmd = "curl -X PUT http://" + ip + ":" + port + "/resources/" + UID[myuid] + "/notes/" + UID[myentry_uid] + "?token=" + token
            resp = os.popen(cmd).read()
            print(resp)
        if name == 'diaro_locations':
            Lat[myuid] = mylat
            Lng[myuid] = mylng
        if name == 'diaro_entries':
            if not mytext:
                mytext = ""
            if not myfolder_uid:
                myfolder_uid = Diaro_UID
            if not mytags:
                mytags = ""
            if not mylocation_uid:
                mylocation_uid = ""
            mytext = mytext.replace("'", "")
            mytitle = mytitle.replace("'", "")
            mytext = mytext.strip('(')
            mytitle = mytitle.strip('(')
            # Translate the Diaro tag uids into the titles registered above
            new_tagslist = ""
            for uid_tags in mytags.split(","):
                if len(uid_tags) > 2 and uid_tags in TAGS:
                    new_tagslist = new_tagslist + TAGS[uid_tags] + ","
            print("TAGS", mytags, "==>", new_tagslist)
            payload_note = {
                "id": myuid,
                "latitude": Lat[mylocation_uid],
                "longitude": Lng[mylocation_uid],
                "tags": new_tagslist,
                "parent_id": UID[myfolder_uid],
                "title": mytitle,
                "user_created_time": mydate_ms,
                "user_updated_time": mydate_ms,
                "author": "Diaro",
                "body": mytext
            }
            try:
                resp = requests.post(url_notes, json=payload_note)
                resp.raise_for_status()
                resp_dict = resp.json()
                print(resp_dict)
                print(resp_dict['id'])
                UID[myuid] = resp_dict['id']
            except requests.exceptions.HTTPError as e:
                print("Bad HTTP status code:", e)
            except requests.exceptions.RequestException as e:
                print("Network error:", e)
print("End : Parse Table")
print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
done = time.time()
elapsed = done - start
print(elapsed)
# END : Ouf ...
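The attachment upload above shells out to curl via os.popen with plain string concatenation. A sketch that builds the same command with shlex.quote, so filenames containing spaces or quotes cannot break the shell line (the function name is mine; the endpoint and form fields match the script):

```python
import shlex

def curl_upload_cmd(filename, title, ip="127.0.0.1", port="41184", token="TOKEN"):
    # Build the same multipart POST to /resources that the script
    # issues, with every argument shell-quoted.
    url = "http://" + ip + ":" + port + "/resources?token=" + token
    props = '{"title":"' + title + '"}'
    return ("curl -F " + shlex.quote("data=@" + filename) +
            " -F " + shlex.quote("props=" + props) +
            " " + shlex.quote(url))

print(curl_upload_cmd("./media/photo/my pic.jpg", "my pic.jpg"))
```

Using subprocess.run with a list of arguments would avoid the shell entirely, but the quoted string keeps the one-line os.popen style of the original.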
I wanted to follow the procedure with brew, pip, … but without success with version 2.7.2:

$ python --version
Python 2.7.2

I had errors of this type:
$ brew reinstall python
....
xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
Error: An exception occurred within a child process:
CompilerSelectionError: python cannot be built with any available compilers.
Install GNU's GCC
brew install gcc

$ python -m pip install --user requests
/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python: No module named pip

$ sudo easy_install pip
Searching for pip
Reading http://pypi.python.org/simple/pip/
Couldn't find index page for 'pip' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading http://pypi.python.org/simple/
No local packages or download links found for pip
Best match: None
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/bin/easy_install", line 8, in <module>
    load_entry_point('setuptools==0.6c11', 'console_scripts', 'easy_install')()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1712, in main
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1700, in with_ei_usage
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 1716, in <lambda>
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 211, in run
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/command/easy_install.py", line 434, in easy_install
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/package_index.py", line 475, in fetch_distribution
AttributeError: 'NoneType' object has no attribute 'clone'
So I changed my approach:
Step 1: Add the first line at the top of the file DiaroBackup.xml … it's mandatory!
I use the REST API to insert into Joplin (https://joplin.cozic.net/api/); the documentation is good.
Here is my first release in Python to import data from a Diaro App backup into Joplin via the API:
#
# Version 1
# for Python 2
#
# ARIAS Frederic
# Sorry ... Python is difficult for me :)
#
from lxml import etree
from time import gmtime, strftime
import time
import os

print strftime("%Y-%m-%d %H:%M:%S", gmtime())
start = time.time()
print("Start : Parse Table")
tree = etree.parse("./DiaroBackup.xml")
for table in tree.xpath("/data/table"):
    print(table.get("name"))
print("End : Parse Table")

# Joplin Web Clipper API address and token
ip = "127.0.0.1"
port = "41184"
token = "blablabla"
cmd = 'curl http://' + ip + ':' + port + '/notes?token=' + token
print cmd
os.system(cmd)

# Init: create a parent folder for the import
Diaro_UID = "12345678901234567801234567890123"
Lat = {}
Lng = {}
Lat[""] = ""
Lng[""] = ""
cmd = 'curl --data \'{ "id": "' + Diaro_UID + '", "title": "Diaro Import"}\' http://' + ip + ':' + port + '/folders?token=' + token
print cmd
os.system(cmd)
print("Start : Parse Table")
tree = etree.parse("./DiaroBackup.xml")
for table in tree.iter('table'):
    name = table.attrib.get('name')
    print name
    myorder = 1
    for r in table.iter('r'):
        myuid = ""
        mytitle = ""
        mylat = ""
        mylng = ""
        mytags = ""
        mydate = ""
        mytext = ""
        myfilename = ""
        myfolder_uid = Diaro_UID
        mylocation_uid = ""
        myprimary_photo_uid = ""
        myentry_uid = ""
        myorder += 1
        for subelem in r:
            print(subelem.tag)
            if subelem.tag == 'uid':
                myuid = subelem.text
                print("myuid", myuid)
            if subelem.tag == 'entry_uid':
                myentry_uid = subelem.text
                print("myentry_uid", myentry_uid)
            if subelem.tag == 'primary_photo_uid':
                myprimary_photo_uid = subelem.text
                print("myprimary_photo_uid", myprimary_photo_uid)
            if subelem.tag == 'folder_uid':
                myfolder_uid = subelem.text
                print("myfolder_uid", myfolder_uid)
            if subelem.tag == 'location_uid':
                mylocation_uid = subelem.text
                print("mylocation_uid", mylocation_uid)
            if subelem.tag == 'date':
                mydate = subelem.text
                print("mydate", mydate)
            if subelem.tag == 'title':
                mytitle = subelem.text
                print("mytitle", mytitle)
                if type(mytitle) == unicode:
                    mytitle = mytitle.encode('utf8')
            if subelem.tag == 'lat':
                mylat = subelem.text
                print("mylat", mylat)
            if subelem.tag == 'lng':
                mylng = subelem.text
                print("mylng", mylng)
            if subelem.tag == 'tags':
                mytags = subelem.text
                if mytags:
                    mytags = mytags[1:]  # drop the leading separator
                print("mytags", mytags)
            if subelem.tag == 'text':
                mytext = subelem.text
                print("mytext", mytext)
                if type(mytext) == unicode:
                    mytext = mytext.encode('utf8')
            if subelem.tag == 'filename':
                myfilename = subelem.text
                print("myfilename", myfilename)
        if name == 'diaro_folders':
            cmd = 'curl --data \'{ "id": "' + myuid + '", "title": "' + mytitle + '", "parent_id": "' + Diaro_UID + '"}\' http://' + ip + ':' + port + '/folders?token=' + token
            print cmd
            os.system(cmd)
        if name == 'diaro_tags':
            cmd = 'curl --data \'{ "id": "' + myuid + '", "title": "' + mytitle + '"}\' http://' + ip + ':' + port + '/tags?token=' + token
            print cmd
            os.system(cmd)
        if name == 'diaro_attachments':
            cmd = 'curl -F \'data=@media/photo/' + myfilename + '\' -F \'props={"id":"' + myuid + '"}\' http://' + ip + ':' + port + '/resources?token=' + token
            print cmd
            os.system(cmd)
            cmd = 'curl -X PUT http://' + ip + ':' + port + '/resources/' + myuid + '/notes/' + myentry_uid + '?token=' + token
            print cmd
            os.system(cmd)
        if name == 'diaro_locations':
            Lat[myuid] = mylat
            Lng[myuid] = mylng
        if name == 'diaro_entries':
            if not mytext:
                mytext = ""
            if not myfolder_uid:
                myfolder_uid = Diaro_UID
            if not mytags:
                mytags = ""
            if not mylocation_uid:
                mylocation_uid = ""
            mytext = mytext.replace("'", "")
            mytitle = mytitle.replace("'", "")
            mytext = mytext.strip('(')
            mytitle = mytitle.strip('(')
            print type(mytext)
            cmd = 'curl --data \'{"latitude":"' + Lat[mylocation_uid] + '","longitude":"' + Lng[mylocation_uid] + '","tags":"' + mytags + '","parent_id":"' + myfolder_uid + '","id":"' + myuid + '","title":"' + mytitle + '", "created_time": "' + mydate + '", "body": "' + mytext + '"}\' http://' + ip + ':' + port + '/notes?token=' + token
            print cmd
            os.system(cmd)
print("End : Parse Table")
print strftime("%Y-%m-%d %H:%M:%S", gmtime())
done = time.time()
elapsed = done - start
print(elapsed)
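Both scripts assume DiaroBackup.xml has a data root containing table elements (with a name attribute) whose r rows hold the fields as child tags. That traversal can be exercised without a real backup; a self-contained sketch with a made-up row (Python 3 here, unlike the listing above):

```python
import xml.etree.ElementTree as etree

# Hypothetical fragment shaped like the tables the scripts iterate over.
sample = """<data>
  <table name="diaro_folders">
    <r><uid>abc123</uid><title>Journal</title></r>
  </table>
</data>"""

root = etree.fromstring(sample)
for table in root.iter('table'):
    print(table.attrib.get('name'))
    for r in table.iter('r'):
        # Direct children of <r> are the row's fields
        fields = {subelem.tag: subelem.text for subelem in r}
        print(fields)
```

Collecting the children into a dict replaces the long chain of `if subelem.tag == ...` tests with simple lookups like `fields.get('uid', "")`.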
But I don't understand the API: can I force the id (for example: 12345678901234567801234567890123)?
Here is my architecture:
I managed to sync Mac -> Android and Android -> Mac. So it's functional!

Some screenshots:




And on Android:

