
Export and structure your musical activity with schema.org

Following my recent post on schema.org and personalisation on the Web, I wrote a music actions exporter for various services, including Facebook, Deezer, and last.fm. Available at http://music-actions.appspot.com, it’s mostly a proof-of-concept, but it showcases the ability to uniformly export and structure your data (in this case, music listening actions) whatever service you initially used. Does that ring a bell?

As the previous post focused on why it matters, I’ll cover technical aspects of the exporter here, including the role of JSON-LD for representing content on the Web.

One model to rule them all

The Music Actions exporter is not rocket science: it translates application-specific JSON data into another, open JSON representation with shared semantics, using JSON-LD. But that’s also where the power lies: it would take most platforms only a few engineering hours to expose their actions with schema.org, provided they already have a public API – or user profile pages (think RDFa or microdata) – to build on. And they would probably enjoy the same benefits as when publishing factual data with schema.org.

Moreover, it would make life easier for developers: understanding a single model and semantics, and learning a common set of tools, would be enough to get and use data from multiple sources, as opposed to handling multiple APIs as is currently the case – meaning, eventually, more exposure for the service. This is the grand Semantic Web promise, and I’m glad to see it more alive than ever.

In particular, let’s consider the music vertical: interoperable taste profiles, shared playlists, portable collections, death-to-cold-start… you name it, it could finally be done. The promise has been here for a while, many have tried, and it obviously reminds me of some earlier work I did circa 2008 (during and post-Ph.D.), including this initiative with Yves Raimond from the BBC using FOAF, SIOC, MO and more:

Coming back to the exporter, here’s an excerpt of my recent Facebook music.listens activity (mostly gathered from Spotify here) exported as JSON-LD, with a longer feed here.

{
  "@context": {
    "@vocab": "http://schema.org/",
    "agent_of": {
      "@reverse": "http://schema.org/agent"
    }
  },
  "@id": "http://facebook.com/alexandre.passant",
  "url": "http://facebook.com/alexandre.passant",
  "name": "Alexandre Passant",
  "@type": "Person",
  "agent_of": [{
    "@type": "ListenAction",
    "object": {
      "@id": "http://open.spotify.com/track/1B930FbwpwrJKKEQOhXunI",
      "url": "http://open.spotify.com/track/1B930FbwpwrJKKEQOhXunI",
      "@type": "MusicRecording",
      "name": "Represent (Rocked Out Mix)",
      "audio": "http://open.spotify.com/track/1B930FbwpwrJKKEQOhXunI",
      "byArtist": [{
        "@id": "http://open.spotify.com/artist/3jOstUTkEu2JkjvRdBA5Gu",
        "url": "http://open.spotify.com/artist/3jOstUTkEu2JkjvRdBA5Gu",
        "@type": "MusicGroup",
        "name": "Weezer"
      }],
      "inAlbum": [{
        "@id": "http://open.spotify.com/album/0s56sFx1BJMyE8GGskfYJX",
        "url": "http://open.spotify.com/album/0s56sFx1BJMyE8GGskfYJX",
        "@type": "MusicAlbum",
        "name": "Hurley"
      }]
    }
  }]
}

For every service, it returns the most recent tracks listened to (as ListenAction instances), including – when available – additional data about artists and albums. For Deezer and last.fm, that information is already in the history feed, while for Facebook it requires additional calls to the Graph API, querying individual song entities in their data graph.

Using Google Cloud Endpoints as an API layer

Since the exporter works as a simple API, I’ve implemented it using Google Cloud Endpoints. As part of Google’s Cloud offering, it greatly facilitates the process of building Web-based APIs. No need to build a full – albeit lightweight – application with routes / handlers (webapp2, etc.): document the API patterns (Request and Response messages), define the application logic, and let the infrastructure manage everything.

It also automatically provides a Web-based front-end to test the API, plus the other advantages of the Google App Engine infrastructure, such as Web-based log management, so you can trace production errors without logging in to a remote box.

GAE Endpoints API Explorer

The only issue is that it can’t directly return JSON-LD, since it encapsulates everything into the following response.

{
  "kind": "musicactions#resourcesItem",
  "etag": "\"_oj1ynXDYJ3PHpeV8owlekNCPi4/NH17nWS3hMc3GSHWziswWp2pTFk\"",
  "data": "<the JSON-LD document, serialised as a string>"
}

Thus, if you use the exporter, you’ll need to parse the response, extract the data string value, and transform it into JSON to get the “real” JSON-LD data. That’s not a big deal, as you probably won’t link to the API URL anyway since it contains your private authentication tokens. But it’s worth keeping in mind for some projects.
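Unwrapping the envelope takes two passes of JSON decoding – once for the response itself, once for the string carried in its data field. A minimal sketch (the stubbed response below is illustrative, not actual exporter output):

```python
import json

def extract_jsonld(envelope_text):
    """Decode the Endpoints envelope, then decode the JSON-LD
    document carried as a string in its "data" field."""
    envelope = json.loads(envelope_text)
    return json.loads(envelope["data"])

# A stubbed response (real ones also carry an "etag" field):
inner = {"@context": "http://schema.org", "@type": "ListenAction"}
response = json.dumps({"kind": "musicactions#resourcesItem",
                       "data": json.dumps(inner)})
doc = extract_jsonld(response)
```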

JSON-LD and the beauty of RDF

Last but not least: the use of JSON-LD, augmenting JSON with the concept of “Linked Data”, i.e. “meanings, not strings”.

Let’s look at the representation of two ListenAction instances for the same user (using their Facebook IDs in this example). The JSON-LD serialisation is as follows. I’m using the @graph property to represent statements about distinct objects (as those are two different ListenAction instances) in the same document, but I could have used multiple contexts.

{
  "@context": "http://schema.org",
  "@graph": [{
    "@type": "ListenAction",
    "agent": {
      "@id": "http://graph.facebook.com/607513040",
      "name": "Alexandre Passant",
      "@type": "Person"
    },
    "object": {
      "@id": "http://graph.facebook.com/10150500879645722",
      "name": "My Name Is Jonas",
      "@type": "MusicRecording"
    }
  }, {
    "@type": "ListenAction",
    "agent": {
      "@id": "http://graph.facebook.com/607513040",
      "name": "Alexandre Passant",
      "@type": "Person"
    },
    "object": {
      "@id": "http://graph.facebook.com/10150142973310868",
      "name": "Buddy Holly",
      "@type": "MusicRecording"
    }
  }]
}

Below is the corresponding graph representation, with two nodes for the same agent (i.e. the user performing the action).

Representing ListeningActions with JSON-LD

Yet, an interesting aspect of JSON-LD is its relation to RDF – the Resource Description Framework, a graph model especially suited for the Web. As JSON-LD uses @id values as common node identifiers, a.k.a. URIs, those two agents are actually the same node, and so the merged graph looks like:

Merging agents with JSON-LD
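A JSON-LD processor performs that merge when it maps the document to an RDF graph: every node carrying the same @id is the same resource. A toy stdlib sketch of just the merging step (not a full JSON-LD processor) could look like:

```python
def merge_nodes(doc):
    """Walk a JSON-LD document and merge the literal properties of
    every node sharing an @id -- a toy version of what an RDF-aware
    processor does when building the graph."""
    merged = {}

    def visit(node):
        if isinstance(node, list):
            for item in node:
                visit(item)
        elif isinstance(node, dict):
            node_id = node.get("@id")
            if node_id:
                merged.setdefault(node_id, {}).update(
                    {k: v for k, v in node.items()
                     if not isinstance(v, (dict, list))})
            for value in node.values():
                visit(value)

    visit(doc)
    return merged

doc = {
    "@context": "http://schema.org",
    "@graph": [
        {"@type": "ListenAction",
         "agent": {"@id": "http://graph.facebook.com/607513040",
                   "name": "Alexandre Passant", "@type": "Person"},
         "object": {"@id": "http://graph.facebook.com/10150500879645722",
                    "name": "My Name Is Jonas", "@type": "MusicRecording"}},
        {"@type": "ListenAction",
         "agent": {"@id": "http://graph.facebook.com/607513040",
                   "name": "Alexandre Passant", "@type": "Person"},
         "object": {"@id": "http://graph.facebook.com/10150142973310868",
                    "name": "Buddy Holly", "@type": "MusicRecording"}}
    ]
}
nodes = merge_nodes(doc)
# The two agent occurrences collapse into a single node.
```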

Finally, an interesting property of RDF / JSON-LD graphs is that their edges are directed. Thus, instead of writing the previous statements from an action-centric perspective, with unidentified action instances (a.k.a. blank nodes), we can write them from a user-centric perspective using an inverse property (“@reverse” in the JSON-LD world), as follows.

Using inverse properties in JSON-LD

This leads to the following JSON-LD document, thanks to the definition of an additional reverse property in the context. IMO this makes the document easier to understand, since it’s now user-centric: the user / Person is the core element of the document, with edges from it to the actions it contributes to.

{
  "@context": {
    "@vocab": "http://schema.org/",
    "agent_of": {
      "@reverse": "http://schema.org/agent"
    }
  },
  "@id": "http://graph.facebook.com/607513040",
  "name": "Alexandre Passant",
  "@type": "Person",
  "agent_of": [{
    "@type": "ListenAction",
    "object": {
      "@id": "http://graph.facebook.com/10150500879645722",
      "name": "My Name Is Jonas",
      "@type": "MusicRecording"
    }
  }, {
    "@type": "ListenAction",
    "object": {
      "@id": "http://graph.facebook.com/10150142973310868",
      "name": "Buddy Holly",
      "@type": "MusicRecording"
    }
  }]
}
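What @reverse achieves declaratively can also be emulated by hand, which helps to see what the inversion actually does: group the actions of an action-centric @graph document under their agent. The `to_user_centric` helper below is my own sketch, not part of any JSON-LD library:

```python
def to_user_centric(doc):
    """Regroup an action-centric @graph document under its agents,
    mimicking what the "agent_of" @reverse term does declaratively."""
    people = {}
    for action in doc.get("@graph", []):
        agent = action.get("agent", {})
        # First time we see this agent, keep a copy of its properties.
        person = people.setdefault(agent.get("@id"), dict(agent))
        # Attach the action (minus its agent edge) under the person.
        person.setdefault("agent_of", []).append(
            {k: v for k, v in action.items() if k != "agent"})
    return list(people.values())

action_centric = {
    "@context": "http://schema.org",
    "@graph": [
        {"@type": "ListenAction",
         "agent": {"@id": "http://graph.facebook.com/607513040",
                   "name": "Alexandre Passant", "@type": "Person"},
         "object": {"@id": "http://graph.facebook.com/10150500879645722",
                    "name": "My Name Is Jonas", "@type": "MusicRecording"}},
        {"@type": "ListenAction",
         "agent": {"@id": "http://graph.facebook.com/607513040",
                   "name": "Alexandre Passant", "@type": "Person"},
         "object": {"@id": "http://graph.facebook.com/10150142973310868",
                    "name": "Buddy Holly", "@type": "MusicRecording"}}
    ]
}
user_centric = to_user_centric(action_centric)
```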

From shared actions to shared entities

While it is (for now) a proof of concept, the exporter is a first step towards a common integration of musical actions on the Web. Of course, the same pattern / method could be applied to any other vertical. More interestingly, we can hope that services will directly publish their actions using schema.org, as they’ve been doing for other facts – for instance artist concert data, now enriching Google’s search results through the Knowledge Graph.

In addition, an interesting next step would be to use common object identifiers across services, in order to share common semantics not only about actions, but also about the objects used in those actions. This could be achieved by referring to open knowledge bases such as Freebase, or to vertical-specific ones such as our new seevl API in the music area. Oh, and there will be more to come about seevl and actions in the near future. Interested? Let’s connect.


The new schema.org actions: What they mean for personalisation on the Web

The schema.org initiative just announced the release of a new action vocabulary. As their blog post emphasises:

The Web is not just about static descriptions of entities. It is about taking action on these entities.

Whether they’re online or offline, publishing those actions in a machine-readable format follows TimBL’s “Weaving the Web” vision of the Web as a social machine.

It’s even more relevant as the online and offline worlds become one, whether through apps (4square, Uber, etc.) or via sensors and wearable tech (mobile phones, Glass, etc.). A particular aspect I’m interested in is how those actions can help personalise the Web.

The rise of dynamic content and structured data on the Web

This is not the first time actions – at least online ones – have been used on the Web: think of Activity Streams, Web Intents, as well as SIOC-Actions, which I worked on with Pierre-Antoine Champin a few years ago.

Yet, considering the recent advances in structured Web data (schema.org, Google’s Knowledge Graph, Facebook Open Graph, Twitter Cards…), this addition is a timely move. Everyone can now publish their actions using a shared vocabulary, meaning that apps and services can consume them openly – pending the correct credentials and privacy settings. And that’s a big move for personalisation.

Personalising content from distributed data

Let’s consider my musical activity. Right now, I can plug my services into Facebook and use the Graph API to retrieve my listening history. Or query APIs such as Deezer’s. Or check my Twitter and Instagram feeds to remember some of the records I’ve put on my turntable. Yet, if all of them published actions using the new ListenAction type, I could use a single query engine to get the data from those different endpoints.

Deezer could describe actions using the following JSON-LD, and Spotify could use RDFa, but it doesn’t really matter – both would agree on shared semantics through a single vocabulary.

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "ListenAction",
  "agent": {
    "@type": "Person",
    "name": "Alex"
  },
  "object": {
    "@type": "MusicGroup",
    "name": "The Clash"
  },
  "instrument": {
    "@type": "WebApplication",
    "name": "Deezer",
    "url": "http://deezer.com"
  }
}
</script>

Ultimately, that means every service could gather data from different sources to meaningfully extract information about me, and deliver a personalised experience as soon as I log in.
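Because every service describes listens with the same ListenAction shape, a consumer needs only one extraction routine, whatever the source. A sketch (the sample documents below follow the Deezer example above; the Spotify one is hypothetical):

```python
def listened_objects(actions):
    """Pull (service, object name) pairs out of schema.org
    ListenAction documents, whatever service published them."""
    return [(a.get("instrument", {}).get("name", "unknown"),
             a["object"]["name"])
            for a in actions if a.get("@type") == "ListenAction"]

deezer = {"@context": "http://schema.org", "@type": "ListenAction",
          "agent": {"@type": "Person", "name": "Alex"},
          "object": {"@type": "MusicGroup", "name": "The Clash"},
          "instrument": {"@type": "WebApplication", "name": "Deezer"}}
spotify = {"@context": "http://schema.org", "@type": "ListenAction",
           "agent": {"@type": "Person", "name": "Alex"},
           "object": {"@type": "MusicRecording", "name": "Buddy Holly"},
           "instrument": {"@type": "WebApplication", "name": "Spotify"}}
history = listened_objects([deezer, spotify])
```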

You might think that Facebook enables this already with the Graph API. Indeed, but the data needs to be in Facebook. This is not always the case, either because the seed services haven’t implemented – or have removed – the proper connectors, or because you didn’t allow them to share your actions.

In this new configuration, I could decide, for every service I log in to, which sources it can access. Logging in to a music platform? Let it access my Deezer and Spotify profiles, where some schema.org actions can be found. Booking a restaurant? Check my OpenTable ones. From there, those services can quickly build my profile and start personalising my online experience.

In addition, websites could decide to use background knowledge to enrich one’s profile, using vertical databases – e.g. Factual for geolocation data, or our recently relaunched seevl API for music metadata – combined with advanced heuristics such as time decay, action-object granularity and more to enhance the profiling capabilities (if you’re interested, check the slides of Fabrizio Orlandi’s Ph.D. viva on the topic).

Privacy matters

This way of personalising content could also have important privacy implications. By selecting which sources a service can access, I implicitly block access to data that is irrelevant or too private for that particular service – as opposed to granting access to all my content.

Going further, we can imagine a privacy-control matrix where I select not only the sources, but also the action types to be used, keeping my data safe and avoiding freakomendations. I could provide my 4square eating actions (restaurants I’ve checked in to) to a food website, but offer my musical background (concerts I’ve been to) to a music app, keeping both separate.
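Such a matrix boils down to a per-service whitelist of (source, action type) pairs. A minimal sketch – the service names and the matrix itself are hypothetical, while EatAction and ListenAction are real schema.org types:

```python
# Hypothetical access matrix: consuming service -> set of
# (source, schema.org action type) pairs it may read.
MATRIX = {
    "food-site": {("4square", "EatAction")},
    "music-app": {("deezer", "ListenAction"),
                  ("spotify", "ListenAction")},
}

def allowed(service, source, action_type):
    """True if `service` may read actions of `action_type` from `source`."""
    return (source, action_type) in MATRIX.get(service, set())
```

A food website could read my restaurant check-ins but would be denied my listening history, and vice versa.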

Of course, websites should be smart enough to know which actions they require, doing a source / action pre-selection for me. This could ultimately solve some of the trust issues often discussed when talking about personalisation, as Facebook’s Sam Lessin addressed in his keynote on the future of travel.

What’s next?

As you can see, I’m particularly interested in what’s going to happen with this new schema.org update, both from the publishers’ and the consumers’ point of view.

It will also be interesting to see how mappings could emerge between it and the Facebook Graph API, adding another level of interoperability in this quest to make the Web a social space.

Timeline for RSS feeds

When playing with Timeline, I thought it could be a nice interface for RSS feeds, especially for weblogs or planets.

So, I wrote an “RSS to Timeline” service that takes any RSS/Atom feed as input and translates it into the correct JSON / Timeline format. Just set the right URL as a data source for your Timeline, and you’ll get it!


As you can see, I’ve also set up a demo service where you can see your feed in action. Everything is described in detail here.

Regarding the implementation, the script is written in Python, using feedparser and mod_python. I first started in PHP with MagpieRSS, but it doesn’t provide universal methods to access feed/item information, so the way to access content depends on the feed format. With feedparser, methods and properties are the same whatever the feed format – RSS 0.9, RSS 1.0, Atom… – which is really useful for writing universal aggregators / translators.
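The translation itself is straightforward once the feed is parsed. Here’s a stdlib-only sketch of the output side (using xml.etree instead of feedparser, and assuming the SIMILE Timeline event fields "start", "title", "link" and "description" – check the Timeline docs for the exact format):

```python
import xml.etree.ElementTree as ET

def rss_to_timeline(rss_xml):
    """Translate RSS 2.0 items into a Timeline-style JSON event list
    (field names assumed from the SIMILE Timeline format)."""
    root = ET.fromstring(rss_xml)
    events = []
    for item in root.iter("item"):
        events.append({
            "start": item.findtext("pubDate"),
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "description": item.findtext("description"),
        })
    return {"dateTimeFormat": "Gregorian", "events": events}

feed = """<rss version="2.0"><channel><title>Demo</title>
<item><title>Hello</title><link>http://example.org/1</link>
<pubDate>Mon, 17 Jul 2006 10:00:00 GMT</pubDate>
<description>First post</description></item>
</channel></rss>"""
timeline = rss_to_timeline(feed)
```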

It was also the first time I used mod_python, and I must say the Publisher handler is very easy to use, with a templating system and interaction between the interface and the script.

SPARQL/JSON into Timeline

Timeline seems to be the tool of the moment in the SemWeb area. Indeed, it’s a really nice JavaScript tool to display temporal data, and we can imagine a lot of uses for it.

Timeline first used its own XML format for data; then JSON was introduced. Yet, the JSON format it needs is different from the one defined by the JSON serialisation of SPARQL results, as produced by Joseki – among other RDF stores.

I’ve written a function, based on the original JSON loader, to load data formatted according to those specs: http://apassant.net/home/2006/07/sioc-timeline/sparqljson.js
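My loader is JavaScript, but the mapping is easy to illustrate in Python: walk the bindings of the W3C SPARQL Query Results JSON Format and emit Timeline events. The variable names (`date`, `title`) and the Timeline event fields are assumptions for the sketch:

```python
def sparql_json_to_timeline(results, date_var="date", title_var="title"):
    """Map SPARQL Query Results JSON (head/results/bindings) onto a
    Timeline-style event list, assuming the query binds a date and
    a title variable."""
    events = []
    for binding in results["results"]["bindings"]:
        events.append({
            "start": binding[date_var]["value"],
            "title": binding[title_var]["value"],
        })
    return {"dateTimeFormat": "iso8601", "events": events}

# A result set as a SPARQL endpoint would serialise it:
results = {
    "head": {"vars": ["date", "title"]},
    "results": {"bindings": [
        {"date": {"type": "literal", "value": "2006-07-17T10:00:00Z"},
         "title": {"type": "literal", "value": "A SIOC post"}}
    ]}
}
timeline = sparql_json_to_timeline(results)
```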

Just before, Morten Frederiksen also wrote a SPARQL/XML loader for timeline.

So, now, you’ve got the choice: either load XML / JSON data into Timeline (optionally using Danny Ayers’ XSLT), or use the SPARQL/XML or SPARQL/JSON loader. You don’t have any excuse anymore for not loading some data into your Timeline :)

Here’s an example of Timeline loading JSON data generated on the fly from a SPARQL endpoint (using SIOC-enabled websites as data sources): http://apassant.net/home/2006/07/sioc-timeline/.

Uldis also wrote a SIOC Timeline, with a nice design and the full text of the blog posts.

So… Timeline offers another cool way to see the Semantic Web in action!