FOAF-ing the music, RDF templates and suggesting content

Oscar Celma introduces a service that includes music interests in your FOAF profile. Instead of writing and updating them yourself regularly, the script searches your profile (using SPARQL) for accounts on playlist-sharing websites (last.fm, Pandora…) and creates an updated FOAF file with your interests. I really like this approach of using decentralized content to create RDF files… let computers do the job for you!

Yet I wonder whether the script could have an option to return only the portion of RDF that should be included. It makes me think we could have “templated RDF” files that query services in order to be constructed or updated. Using Oscar’s service, I could include this in my profile:

<tpl:hasService>
  <tpl:Service rdf:about="http://foafing-the-music.iua.upf.edu/RDFize/me">
    <tpl:hasParameter>
      <tpl:Parameter>
        <tpl:has_name>foaf</tpl:has_name>
        <tpl:has_value>http://apassant.net/foaf.rdf</tpl:has_value>
      </tpl:Parameter>
    </tpl:hasParameter>
    <tpl:hasParameter>
      <tpl:Parameter>
        <tpl:has_name>account</tpl:has_name>
        <tpl:has_value>http://last.fm</tpl:has_value>
      </tpl:Parameter>
    </tpl:hasParameter>
  </tpl:Service>
</tpl:hasService>

Then, a parser would interpret the file and add or replace this part with the RDF returned by the service. Any service that returns RDF could then be included. Does anyone know if such a feature already exists?
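I haven’t found one, so here is a minimal sketch of what such a parser could look like with Jena, assuming a hypothetical http://example.org/tpl# namespace for the tpl: vocabulary above (TemplateResolver and the GET-parameter convention are mine, not an existing API):

import java.net.URLEncoder;
import java.util.List;
import com.hp.hpl.jena.rdf.model.*;
import com.hp.hpl.jena.vocabulary.RDF;

public class TemplateResolver {

    // Hypothetical namespace for the tpl: vocabulary sketched above
    static final String TPL = "http://example.org/tpl#";

    public static void main(String[] args) throws Exception {
        Model profile = ModelFactory.createDefaultModel();
        profile.read("http://apassant.net/foaf.rdf");

        Property hasParameter = profile.createProperty(TPL, "hasParameter");
        Property hasName = profile.createProperty(TPL, "has_name");
        Property hasValue = profile.createProperty(TPL, "has_value");

        // Each tpl:Service node is identified by the URL of the service to call;
        // collect them first so we can safely add triples to the model afterwards
        List<Resource> services = profile
            .listResourcesWithProperty(RDF.type, profile.createResource(TPL + "Service"))
            .toList();

        for (Resource service : services) {
            StringBuilder call = new StringBuilder(service.getURI()).append('?');
            // Turn each tpl:Parameter into a name=value pair of a GET request
            StmtIterator params = service.listProperties(hasParameter);
            while (params.hasNext()) {
                Resource param = params.nextStatement().getResource();
                call.append(param.getProperty(hasName).getString())
                    .append('=')
                    .append(URLEncoder.encode(param.getProperty(hasValue).getString(), "UTF-8"))
                    .append('&');
            }
            // Merge the RDF returned by the service into the profile
            profile.read(call.toString());
        }
        profile.write(System.out, "RDF/XML");
    }
}

This only merges the returned triples; a real implementation would also need to drop the triples produced by a previous run before merging, which is where contexts or named graphs would help.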

In the meantime, I also discovered the Music Use Case, from the same team. It briefly shows how, using music metadata and people’s interests, it can suggest relevant artists and concerts, as last.fm does, but using Semantic Web technologies. This is also something I like about the Semantic Web: metadata, ontologies and linked data. It makes me think of an approach I’ll present at ICWSM, regarding blogs and information retrieval.

Using Twitter from irssi

There are lots of ways to update your Twitter status or browse your friends’ statuses without going to the website (which is sometimes really slow!). You can even simply use curl from bash.
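For instance, Twitter’s REST API accepts HTTP basic authentication, so updating your status should be roughly this one-liner (user:pass stands for your own credentials):

curl -u user:pass -d status="testing from curl" http://twitter.com/statuses/update.xml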

I chose wd to make it work through irssi.

In irssi type:

 /alias twit /exec wd $*
/save

Then you can use

 /twit 
/twit myfriend

to see your friends’ statuses (no argument = all friends, one argument = that friend only), or

 /twit -sv my new status 

to update yours.

Using Protégé through an authenticated proxy

I recently struggled to make Protégé work through an authenticated enterprise proxy, since I wanted to import and extend ontologies from the Web.

When running a Java application, you can pass it the URL of a proxy, but it seems the only way to set up authentication parameters is to modify the original source code. I agree a plugin would certainly be better, but I don’t have time to invest in one, so here’s a simple hack to make Protégé-OWL work with such a proxy.
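Passing an unauthenticated proxy is indeed just a matter of standard JVM system properties, e.g. (proxy.example.com, 3128 and yourapp.jar are placeholders):

# http.proxyHost / http.proxyPort are the standard JVM proxy properties
java -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 -jar yourapp.jar

But it seems there is no standard property for the Proxy-Authorization credentials, hence the hack below.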

  • First, create a new directory and check out the Protégé-OWL SVN source code.
 mkdir protege-owl
cd protege-owl
svn checkout http://smi-protege.stanford.edu/repos/protege/owl/trunk .

You’ll also need to get the protege.jar file, which can be downloaded here, and move it to the /libs folder.

  • Edit src/edu/stanford/smi/protegex/owl/jena/parser/ProtegeOWLParser.java

Add the following import:

 import java.util.Properties; 

In the getInputStream method, replace:

 conn.addRequestProperty("Accept", "*/*");
return conn.getInputStream();

with:

conn.addRequestProperty("Accept", "*/*");
// Set proxy settings only if the file is outside our domain
if (url.getHost().indexOf("company.host.tld") == -1) {
    Properties systemSettings = System.getProperties();
    systemSettings.put("http.proxyHost", "proxy_URL");
    systemSettings.put("http.proxyPort", "proxy_Port");
    sun.misc.BASE64Encoder encoder = new sun.misc.BASE64Encoder();
    String encodedUserPwd = encoder.encode("proxy_User:proxy_Pass".getBytes());
    conn.setRequestProperty("Proxy-Authorization", "Basic " + encodedUserPwd);
}
return conn.getInputStream();
  • Edit src/edu/stanford/smi/protegex/owl/ui/metadatatab/imports/wizard/URLImportEntry.java

Add the following imports:

 import java.util.Properties;
import java.io.InputStream;
import java.net.URLConnection;

In the isPossibleToImport() method, replace:

 OntologyNameExtractor extractor = new OntologyNameExtractor(url.openConnection().getInputStream(), url);
URI uri = extractor.getOntologyName();

with:

OntologyNameExtractor extractor;
// Set proxy settings only if the file is outside our domain
if (url.getHost().indexOf("company.host.tld") == -1) {
    URLConnection conn = url.openConnection();
    conn.setRequestProperty("Accept", "application/rdf+xml");
    conn.addRequestProperty("Accept", "text/xml");
    conn.addRequestProperty("Accept", "*/*");
    Properties systemSettings = System.getProperties();
    systemSettings.put("http.proxyHost", "proxy_URL");
    systemSettings.put("http.proxyPort", "proxy_Port");
    sun.misc.BASE64Encoder encoder = new sun.misc.BASE64Encoder();
    String encodedUserPwd = encoder.encode("proxy_User:proxy_Pass".getBytes());
    conn.setRequestProperty("Proxy-Authorization", "Basic " + encodedUserPwd);
    InputStream stream = conn.getInputStream();
    extractor = new OntologyNameExtractor(stream, url);
} else {
    extractor = new OntologyNameExtractor(url.openConnection().getInputStream(), url);
}
URI uri = extractor.getOntologyName();
  • Compile with ant (either ant plugin.jar or ant plugin.zip), and replace the original edu.stanford.smi.protegex.owl folder with the new one created in the /dists folder (in .jar or .zip format).
  • Run Protégé: you should be able to import and load both internal and external ontologies through your proxy!

Laneo, Web 2.0 in the service of ecology

www.laneo.org

Available in many languages and based in France, Laneo offers an interesting take on the participatory website:

How does Laneo work?

You answer surveys about your outdoor activities. We compile the results of this research.

Organizations buy these results from us. We donate the money raised to ecological clean-up initiatives.

You vote for the organization that will receive these donations. We keep you informed of their progress.

The planet becomes a cleaner playground.

The site currently has a little over 1,000 registered users; now is the time to become a member!

DOAP Dataset

For those who want to play with DOAP files, doap:store now provides a dump of its content, containing all DOAP files fetched from PTSW.

The dump is available in both RDF/XML and N3, and is regenerated daily[1]. It also contains information about the RDF files themselves, since I made a complete export of the RDF store, including the information 3store adds when dealing with contexts.

Also remember that there’s a SPARQL endpoint for doap:store, which accepts CONSTRUCT queries.
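For example, here is a minimal CONSTRUCT query that extracts a small graph of project names and homepages (assuming the store contains the usual doap:Project descriptions; the doap: terms are standard DOAP vocabulary):

PREFIX doap: <http://usefulinc.com/ns/doap#>
CONSTRUCT {
  ?project doap:name ?name ;
           doap:homepage ?homepage .
}
WHERE {
  ?project a doap:Project ;
           doap:name ?name ;
           doap:homepage ?homepage .
}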

Notes

[1] It seems there is some broken use of rdf:resource in the RDF/XML version; the N3 one is a cleaned version, thanks to rapper.
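For the record, such a cleanup can be done with something like the following rapper invocation (dump.rdf standing for the RDF/XML dump file):

rapper -i rdfxml -o turtle dump.rdf > dump.n3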