Last night a DJ saved my life: What if Twitter could be your own DJ?

While the Twitter music app eventually failed, it’s still clear that people use Twitter’s data stream to share and discover new #music. Thanks to Twitter cards, you can watch a YouTube video, or listen to a SoundCloud clip, right from your feed, without leaving the platform. But what if Twitter could be your own DJ, playing songs at your request?

Since it’s been a few months since I enjoyed my last Music Hack Day – oh, I definitely miss that! – I’ve hacked a proof of concept using the seevl API, combined with the Twitter and YouTube ones, to make Twitter act as your own personal DJ.

Hey @seevl, play something cool

The result is a Twitter bot, running under our @seevl handle, which accepts a few (controlled) natural-language queries and replies with an appropriate track, embedded in a Tweet via a YouTube card. Here are a few patterns you can use:

Hey @seevl, play something like A

To play something that is similar to A. For instance, tweet “play something like New Order”, and you might get a reply with a Joy Division track in your feed.

Hey @seevl, play something from L

To play something from an artist signed to label L (or, at least, one that was on this label at some stage)

Hey @seevl, play some G

To play something from a given genre G

Hey @seevl, play A

To simply play a track from A.
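If you wonder how such requests can be handled, here’s a minimal sketch of the pattern matching in Python – the regexes and the helper are illustrative only, and the actual bot also has to call the seevl and YouTube APIs before posting the reply:

import re

# Hypothetical patterns, tried in order: similarity, label, genre,
# then a plain artist name. The tweet text is assumed to still contain
# the "Hey @seevl" part, which the regexes simply ignore.
PATTERNS = [
    (re.compile(r"play something like (?P<value>.+)", re.I), "similar"),
    (re.compile(r"play something from (?P<value>.+)", re.I), "label"),
    (re.compile(r"play some (?P<value>.+)", re.I), "genre"),
    (re.compile(r"play (?P<value>.+)", re.I), "artist"),
]

def parse_request(tweet_text):
    """Return an (intent, value) pair for a DJ request, or None."""
    for pattern, intent in PATTERNS:
        match = pattern.search(tweet_text)
        if match:
            return intent, match.group("value").strip()
    return None

print(parse_request("Hey @seevl, play something like New Order"))
# ('similar', 'New Order')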

By the way, you can replace “Hey” with anything you want, as long as you politely ask your DJ what you want him to spin. Here’s an example, with my tweet just posted (top of the timeline), and a reply from the bot (bottom left).

Twitter As A DJ

A little less conversation

As it’s all Twitter-based, not only can you send messages, but you can also have a conversation with your virtual DJ. Here’s, for instance, what I sent first

And got this immediate reply – with the embedded YouTube video

Followed by (“coo” meant to be “cool”)

To immediately listen to Bessie Smith in my stream

It’s kind of fun, I have to say, especially given the instantaneous nature of the conversation – it even reminds me of IRC bots!

Unfortunately, it’s likely that the bot will reach the API rate limit when posting Tweets (and I’m not handling those errors in the current MVP), so you may not get a reply when you interact with it.

Twitter As A Service?

Besides the music-related hack, I also wanted to showcase the growth of intelligent services on the Web – and how a platform like Twitter can be part of it, using “Twitter As A Service” as a layer for an intelligent Web.

The recently-launched “Buy button” is a simple example of how Twitter can be a Siri-like interface to the world. But why not bring more intelligence into Twitter? What about “Hey @uber, pick me up in 10 minutes”, using the Tweet geolocation plus an Uber-API integration to directly pick up – and bill – whoever #requested a black car? Or “Please @opentable, I’d love to have sushi tonight”, and get a reply with links to the top-rated places nearby, with in-tweet booking capability (via the aforementioned Buy button)? The data is there, the tools and APIs are there, so…

Yes, this sounds a bit like what’s described in the seminal Semantic Web article by Tim Berners-Lee, James Hendler and Ora Lassila. Maybe it’s because we’re finally there, in an age where computers can be those social machines we’ve been dreaming about!

 

Love Product Hunt? Here’s a Chrome extension to discover even more products

Product Hunt is the new rising star of the start-up community. Think of it as a mix of Beta List and Hacker News, but with products that are already live, and a wider community, including engineers of course, but also product people, investors, media and more. A few days ago, and by popular demand, they launched early access to their official API.

https://twitter.com/rrhoover/status/501022229899902977

More Products: A Chrome plug-in for Product Hunt recommendations

With more than 6,000 products already in the Product Hunt database, I’ve decided to use the API to build a product recommendation engine. It seems that every time it comes to hacking and APIs, I can’t get away from discovery, or music. Or both.

The result is a Chrome extension simply named “More Products!”. It directly integrates the top 10 related products into each product page, as you can see below. I might iterate on the algorithm itself, but I want to keep this plug-in very focused, so it’s unlikely to gain other features. Note that it doesn’t track anything, so your privacy is preserved.

More Products on Product Hunt!


Under the hood

The engine relies on the API to get the list of all products and related posts, and then uses TF-IDF and cosine similarity to find similarities between them, using NLTK and scikit-learn, respectively the standard Python tools for Natural Language Processing and Machine Learning. To put it simply, it builds a giant database of words used in all posts, mapped to products with their frequency, and then finds how close products are, based on those frequencies.
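As an illustration, here’s a minimal sketch of that similarity step with scikit-learn – the toy product texts below are made up, and the real pre-processing (tokenization, stemming with NLTK, etc.) is more involved:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical product texts, e.g. name + tagline (+ comments) per product
products = {
    "Product Hunt": "the best new products, every day",
    "Hacker News": "news and discussions for hackers and founders",
    "Beta List": "discover tomorrow's startups, today",
}

names = list(products)
# One row per product, one column per term, weighted by TF-IDF
tfidf = TfidfVectorizer(stop_words="english").fit_transform(products.values())

# Pairwise cosine similarities between all products
similarities = cosine_similarity(tfidf)

# Related products for the first one, most similar first (skipping itself)
ranked = similarities[0].argsort()[::-1][1:]
print([names[i] for i in ranked])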

New products are fetched every 2 hours, and recommendations are updated at the same time. Flask handles requests between the extension and the recommendations database, and Redis is used as a cache layer. 
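To give a rough idea of the serving side, here’s a minimal Flask sketch, assuming the pre-computed recommendations are stored as JSON in Redis under a hypothetical reco:<product_id> key – the endpoint and key naming are made up for this example:

import json

import redis
from flask import Flask, jsonify

app = Flask(__name__)
cache = redis.StrictRedis(host="localhost", port=6379)

@app.route("/recommendations/<int:product_id>")
def recommendations(product_id):
    """Return the pre-computed top-10 related products for a product."""
    cached = cache.get("reco:%d" % product_id)
    if cached is None:
        return jsonify(products=[])  # not computed yet
    return jsonify(products=json.loads(cached))

if __name__ == "__main__":
    app.run()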

Back in Black: Analyzing the loudness and tempo of the Rolling Stone top 500 songs

Here’s the second post of my data analysis series on the Rolling Stone top 500 greatest songs of all time. While the first one focused on lyrics, this one is all about the acoustic properties of the tracks in the data set – especially their volume and tempo.

To do so, I used the EchoNest, which delivers a good understanding of each track at the section level (e.g. verse, chorus, etc.) but also at a deeper “segment” level, providing loudness details over very short intervals (down to less than a second). This is not perfect, due to some issues discussed below, but it gives a few interesting insights.

[Update 22 July] As noted in the comments, there were a few unexpected results. I’ve run the pipeline again and done some more cleaning on the API results, as explained here.

Black leather, knee-hole pants, can’t play no highschool dance

As my goal was to identify relevant tracks from the dataset, in addition to absolute values for the loudness and tempo of each track, I also looked at their standard deviation. If you’re not familiar with it, this helps to identify which songs / artists tend to be closer to their average tempo / loudness, versus the ones that are more dynamic.

Before going through individual songs from the top-500, let’s take an example with the top-10 Spotify tracks of a few artists to check their loudness:

Artist | Average loudness (dB) | Standard deviation
Motörhead | -5.05 | 1.29
Ramones | -6.85 | 3.22
Radiohead | -11.83 | 3.20
Daft Punk | -11.23 | 4.82
Public Enemy | -5.34 | 2.30
Beastie Boys | -9.38 | 4.30
Bob Dylan | -10.67 | 2.88
Pink Floyd | -16.06 | 6.50

And the tempo:

Artist | Average tempo (BPM) | Standard deviation
Motörhead | 130.58 | 33.35
Ramones | 175.34 | 7.69
Radiohead | 104.80 | 28.86
Daft Punk | 109.90 | 11.96
Public Enemy | 102.93 | 13.28
Beastie Boys | 108.37 | 16.85
Bob Dylan | 126.60 | 33.95
Pink Floyd | 118.23 | 25.08

You can see that some bands really deserve their reputation. For instance, while Pink Floyd have a high standard deviation both in volume and tempo (not surprising), Motörhead is not only the loudest (on average) of the list, but also the one with the smallest standard deviation, meaning most of their tracks stay around that average loudness. In other words, they play everything loud. The Ramones, meanwhile, are just fast – everything fast. And when they’re together on stage, the result is not surprising

But you don’t really care for music, do you?

Coming back to the top 500, I ran the Echonest analysis on 474 tracks of the list. The 26 missing are due to various errors at different stages of the full pipeline.

On the one hand, I’ve used raw results from the song API to get the average values. [Update 22 July] I had to consolidate the data by aggregating multiple API results together. For a single song, multiple tracks are returned by the API (as expected), but there can be large inconsistencies between them. For instance, if you search for American Idiot, one track (ID=SOHDHEA1391229C0EF) is identified as having a tempo of 93, the other one (SOCVQDB129F08211FC) of 186. Some can also have smaller variations (in volume for instance, between a live and the original version). To simplify things – and I agree it introduces a bias in the results – I averaged the first 3 results from the API.

On the other hand, I relied on numpy to compute the standard deviation from the first API result, first removing the fade-in and fade-out of each track. Here, I’ve also skipped every segment or section where the API confidence was too low (< 0.4).
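Here’s roughly what that per-track computation looks like with numpy – the fields mirror the Echonest analysis output (fade-in / fade-out markers on the track, and segments with a start time, a maximum loudness and a confidence value), but treat this as a simplified sketch rather than the exact pipeline:

import numpy as np

def loudness_deviation(analysis, confidence_threshold=0.4):
    """Standard deviation of loudness over a track's segments,
    skipping the fade-in / fade-out and low-confidence segments."""
    track = analysis["track"]
    fade_in_end = track["end_of_fade_in"]
    fade_out_start = track["start_of_fade_out"]

    values = [
        segment["loudness_max"]
        for segment in analysis["segments"]
        if fade_in_end <= segment["start"] <= fade_out_start
        and segment["confidence"] >= confidence_threshold
    ]
    return np.std(values)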

The average loudness for the dataset is -10.38 dB. Paul Lamere ran an analysis of 1,500 tracks a few years ago, with an average of -9.5 dB, so we can see that this dataset is not too far from a “random” sample – check the conclusion of this post to understand why the Echonest’s loudness is less than 0.

Going through individual tracks, here are the loudest tracks from the list:

And the quietest ones:

You can clearly see the dB difference between a loud (CCR) and a quiet (Jeff Buckley) track on the following plots.

Loudness plot: Creedence Clearwater Revival – Who’ll Stop the Rain

Loudness plot: Jeff Buckley – Hallelujah (a low-level but very dynamic track)

Looking at the standard deviations, here are now the most dynamic tracks, volume-wise.

This last one is a beautiful example of a soul song with a dynamic volume range, and here’s a live version below.

On the other side of the spectrum, here are the least dynamic tracks – i.e. the ones with the smallest standard deviation, volume-wise:

The Ramones strike again – but I’m not sure that Highway to Hell is actually so linear, even though its second part definitely is!

AC/DC – Highway to Hell

Please could you stop the noise, I’m trying to get some rest

Moving away from loudness and focusing on tempo, here are the fastest tracks (in average BPM) of the list (some results seem a bit off here):

And the slowest ones, also including the Stones:

But I believe that, once again, it’s interesting to look at how dynamic the tracks can be, starting with the most dynamic ones (tempo-wise):

And the most static ones, i.e. the ones with the least tempo variation:

If you’ve ever looked at the Man vs Machine app, you might find it fun that even though the least dynamic (or the most consistent, depending on how you look at it) one uses samples (Run DMC), all the others involved drummers. Don’t forget to thank the best backing band ever for the perfect tempo on Marvin Gaye’s track (and I couldn’t resist sharing their own cover of the song).

I’m waiting for that final moment you say the words that I can’t say

Last but not least, I’ve normalized and combined both the tempo deviation and the loudness one to assign a [0:1] score to each track, in order to find the most and least dynamic tracks overall (a quick sketch of that scoring step follows the plot below). Here’s the top 5 of the most dynamic ones:

If you listen to My Generation, you can clearly hear the dynamics, both in tempo and volume, with the different bursts of the song. The Radiohead one is more about the long run, with clearly distinct phases, as shown below for the volume part.

Radiohead – Paranoid Android
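As promised, here’s a minimal sketch of the scoring step, assuming the per-track loudness and tempo deviations are already computed – min-max normalization and a simple average are just one way to combine them:

import numpy as np

def dynamism_scores(loudness_devs, tempo_devs):
    """Combine loudness and tempo standard deviations into a single
    [0:1] score per track (higher = more dynamic)."""
    def normalize(values):
        values = np.asarray(values, dtype=float)
        return (values - values.min()) / (values.max() - values.min())

    return (normalize(loudness_devs) + normalize(tempo_devs)) / 2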

Finally, here are the least dynamic ones. Several on that list made it into the charts, showing that even though a song can be pretty flat in both volume and tempo, it can still be a hit – or at least an earworm:

Music recommendations with 300M data points and one SQL query

While toying with the public BigQuery datasets, impatiently waiting for Google Cloud Dataflow to be released, I noticed the Wikipedia Revision History one, which contains a list of 314M Wikipedia edits, up to 2010. In the spirit of Amazon’s “people who bought this”, I’ve decided to run a small experiment on music recommendations based on Wikipedia edits. The results are not perfect, but they provide some insights that could be used to bootstrap a recommendation platform.

Wikipedia edits as a data source

Wikipedia pages are often an invaluable source of knowledge. Yet, the type and frequency of their edits also provide great data to mine knowledge from. See for instance the Wikipedia Live Monitor by Thomas Steiner, detecting breaking news through Wikipedia,  “You are what you edit“, an ICWSM09 study of Wikipedia edits to identify contributors’ location, or some of my joint work on data provenance with Fabrizio Orlandi.

Here, my assumption for building a recommendation system is that Wikipedia contributors edit similar pages, because they have expertise and interest in a particular domain, and tend to focus on it. This obviously becomes more relevant at the macro level, taking a large number of edits into account.

In the music-discovery context, this means that if 200 of the Wikipedia editors contributing to the Weezer page also edited the Rivers Cuomo one, well, there might be something in common between both.

The dataset

Let’s have a quick look at the aforementioned Wikipedia Revision History dataset:

This dataset contains a version of that data from April, 2010. This dataset does not contain the full text of the revisions, but rather just the meta information about the revisions, including things like language, timestamp, article and the like.

  • Name: publicdata:samples.wikipedia
  • Number of rows: 314M

Sounds not too bad: it contains a large set of (page, title, user) tuples, exactly what we need for our experiment.

Querying for similarity

Instead of building a complete user/edits matrix to compute the cosine distance between pages, or using a more advanced algorithm like Slope One (with the number of edits as an equivalent for ratings), I’m simply finding common edits, as explained in the original Amazon paper. And, to make this a bit more fun, I’ve decided to do it with a single query over the 314M rows, testing BigQuery capabilities at the same time.

The following query is used to find all pages sharing common editors with a given one, ranked by the number of common edits. Tested with multiple inputs, it took an average of 5 seconds over the full dataset. You can run it yourself by going to your Google BigQuery console and selecting the Wikipedia dataset.

SELECT title, id, count(id) as edits
FROM [publicdata:samples.wikipedia]
WHERE contributor_id IN (
  SELECT contributor_id
  FROM [publicdata:samples.wikipedia]
  WHERE id=30423
    AND contributor_id IS NOT NULL
    AND is_bot is NULL
    AND is_minor is NULL
    AND wp_namespace = 0
  GROUP BY contributor_id
  )
  AND is_minor is NULL
  AND wp_namespace = 0
GROUP EACH BY title, id
ORDER BY edits DESC
LIMIT 100

Update 2014-07-14: To clarify a comment on Twitter / reddit - I’m using page ID instead of title to make sure the edits over time apply to the same page, since IDs are immutable but page titles can change upon requests from the community.

This is actually a simple query, finding all pages (wp_namespace=0 restricts to content pages, excluding user pages, talks, etc.) edited (excluding minor edits) by users who also edited (excluding bots and minor contributions) the page with ID 30423, ranking them by number of edits. You can read it as “Find all pages edited by people who also edited the page about the Clash, ranked by most edited first”.

And here are some of the results

Who's related the the Clash, using Wikipedia edits

Who’s related to the Clash, using Wikipedia edits

As you can see, from a music-discovery perspective, that’s a mix of relevant entries (Ramones, Sex Pistols) and WTF ones (The Beatles, U2). There’s also a need to exclude non-music pages, but that could be done programmatically with some more information in the dataset.

Towards long tail discovery

As we could expect, and as seen before, results are not that good for mainstream / popular artists. Indeed, edits to the Beatles page are unlikely, on average, to say much about the musical preferences of their editors. Yet, this becomes more relevant for long-tail artist discovery: if you bother editing indie bands’ pages, it’s most likely because you care about them.

Trying with Mr Bungle, the query returns Meshuggah and The Mars Volta as the first two music-related entries, all of them playing some kind of experimental metal – but then digresses again with the Pixies. Looking at band members / solo artists and using Frank Black as a seed leads to The Smashing Pumpkins, Pearl Jam, R.E.M. and, obviously, the Pixies as the first four recommendations. Not perfect in either case, but not too bad for an algorithm that is completely music-agnostic!

Scaling the approach

There are many ways this could be improved, for instance:

  • Removing too-active contributors – who may edit pages to ensure Wikipedia guidelines are followed, rather than for topic-based interest, and consequently introduce some bias;
  • Filtering the results using some ML approaches or graph-based heuristics – e.g. exclude results if their genres are more than X nodes away in a genre taxonomy.
  • Using time-decay – someone editing Nirvana pages in 1992 might be interested in completely different genres now, so joint edits might not be relevant if they happened more than x days apart.

Yet, besides its scientific interest, and showing that BigQuery is very cool to use, this approach also showcases – if needed – that even though algorithms may rule the world of music discovery, they might not be able to do much without user-generated content.

The Long Tail, with Spotify and Polymer

The Long Tail. That’s not something new, neither on the Web nor in the music field. I remember when I first read Chris Anderson’s article, and since then, many have talked or written about it, including Paul Lamere and Oscar Celma in the music-tech sphere.

Yet, one must admit that, with millions of tracks available online, it’s always a challenge to find something new by digging into that so-called long tail of less popular artists or songs.

So, between World Cup games, I’ve built a web component – and a companion web app – to enjoy the less popular tracks of any artist.

A web component to play an artist’s long tail

Built with Polymer and using the new Spotify Web API, <long-tail> is a web component that embeds a Spotify play button with the least popular tracks of an artist.

First, install it with Bower:

bower install long-tail

Then, include it in an HTML page:

<html>

<head>
  <script src="bower_components/platform/platform.js"></script>
  <link rel="import" href="bower_components/long-tail/long-tail.html">
</head>

<body>
  <long-tail artist="4tZwfgrHOc3mvqYlEYSvVi" size="25"></long-tail>
</body>

</html>

And there it is – you’re ready to play. No JS to write, no code to copy and paste, everything is handled internally: the beauty of Web components. Unfortunately, JavaScript can’t be used on WordPress.com blogs, but here’s the result of the previous snippet.

Daft Punk less popular tracks

The source is on GitHub (MIT license), and you can see how easy it is to create. It simply calls the Spotify API to find an artist’s albums, then their tracks (limited to 50 results each time – hence parsing a maximum of 2,500 tracks per artist), finally sorting them by inverse popularity. It also excludes the ones with popularity=0, as those do not always seem to be the least popular ones. Maybe some region-dependent issue?
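The component itself is JavaScript, but the logic boils down to something like the following Python sketch against the Spotify Web API – the endpoints are the real ones, while pagination, duplicate albums and authentication (now required by the API) are ignored to keep it short:

import requests

API = "https://api.spotify.com/v1"

def long_tail(artist_id, size=25):
    """Return an artist's least popular tracks, least popular first."""
    # Up to 50 albums per artist (a single page, to keep the sketch simple)
    albums = requests.get("%s/artists/%s/albums" % (API, artist_id),
                          params={"limit": 50}).json()["items"]

    tracks = []
    for album in albums:
        # Up to 50 tracks per album, hence at most 2500 tracks per artist
        items = requests.get("%s/albums/%s/tracks" % (API, album["id"]),
                             params={"limit": 50}).json()["items"]
        if not items:
            continue
        # Track objects from the album endpoint don't carry popularity,
        # so fetch the full track objects (50 at a time is allowed)
        ids = ",".join(t["id"] for t in items)
        tracks += requests.get("%s/tracks" % API,
                               params={"ids": ids}).json()["tracks"]

    # Exclude popularity=0, which oddly isn't always the least popular
    tracks = [t for t in tracks if t["popularity"] > 0]
    return sorted(tracks, key=lambda t: t["popularity"])[:size]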

I suppose that, as with many recent JS toolkits such as AngularJS, the learning curve will be steeper when building advanced components (probably due to the early-stage documentation), but at first glance, it looks very intuitive, and there are already many elements to reuse.

Try it with your favorite artists

As the component is mostly for coders, I’ve put together a companion Web app – shamelessly reusing Paul’s design from his recent Spotify hacks. For each artist, it uses the previous component and displays their 50 least popular tracks, according to Spotify.

Try it at The Long Tail, and have fun exploring the hidden gems of your favorite artist!

The Long Tail of Rancid tracks

 

Google I/O 2014 Recap: Android, Knowledge Graph and more

Back in April, I was lucky enough to get a partner invite for Google I/O. Coupled with a stay at the Startup House, a co-working / housing space (ideal when you’re jet-lagged at 4AM and want a proper desk to code a few meters away from your bed) located only one block away from Moscone, I’m very glad I made the trip to my first I/O!

Google I/O after hours party in Yerba Buena Gardens

Here are a few highlights from a conference that clearly confirmed the role of (1) Android as a global OS, and (2) the Knowledge Graph as a hub for everything AI-related, at Google and beyond.

Most of the videos of the sessions are online on Google Developers’ YouTube channel, and I’ve tried as much as possible to link to the relevant ones below.

Android – One OS to rule them all

While I’m not (yet) a full-time Android user (let alone a developer), it’s now clear that Android goes far beyond a phone-only OS. With the introduction of Android Wear, Android Auto and Android TV during the keynote, the OS is now the core of all hardware-related initiatives at Google.

With common SDKs and APIs to interact with, wherever the OS is used, this makes the life of developers much easier when building cross-device products. Relying on a single ecosystem is also important when building an engineering team, and I guess it may also be a decision factor for small start-ups when deciding which market to tackle.

Last but not least, the improvements in the OS itself, including a new runtime – see “What’s new in Android“ – make it even faster than before, a plus for embedded systems of all sorts.

Google’s Knowledge Graph – From search to voice controls and app indexing

So far, Google’s Knowledge Graph has been used mostly in search-related projects, including the snippets you can see when searching for entities such as places, people, music and movies on Google. Several sessions showed how it is now used as a central hub for AI-related projects and products.

Search results getting richer with Google’s Knowledge Graph

Using Android TV, you can ask your TV (literally, by talking to your Android watch) to suggest an Oscar-winning movie from 2000, or who’s in the cast of X or Y – all answers coming from the Knowledge Graph. In the first case, results can be bought from Google Play, another nice piece of integration between the company’s different offerings.

Another interesting case is the use of the Knowledge Graph to connect the dots between previously isolated silos, namely mobile apps. One of the common issues with those apps is their lack of links and outside-world connections, in spite of recent efforts such as Facebook-supported App Links. In the session “The Future of Apps and Search“, a combination of app indexing, JSON-LD and the Knowledge Graph was presented to link directly into an app from, e.g., Google’s search results or autocompletion-search in Android, as well as launching actions from search results – e.g. playing a track in Spotify, a use-case announced a few days before I/O – using the new schema.org actions I’ve recently blogged about.

As an early JSON-LD enthusiast, having worked on related technologies for almost a decade, you can’t imagine how excited I was when I saw this in something used by millions of users! Let’s bet that’s only the beginning, and that new verticals will follow.

Spotify, with real bits of JSON-LD inside

Google Cloud and DataFlow – Smarter, faster, easier

I’ve recently been using Google Cloud infrastructure in several projects (from GAE to Google Prediction – watch “Predicting the future with the Google Cloud Platform” for more about their ML infrastructure), and a few announcements made my day here:

  • Cloud Debugger – making DevOps and back-end engineers more efficient when debugging code. You can now add breakpoints, including conditional ones (e.g. user=X), in your live app, without jeopardising its speed, and most importantly, without having to stop/restart/deploy anything. This means that code can be debugged on production servers with live data, without patching / tracing multiple boxes, all in the comfort of your browser. A kind of New Relic on steroids, so big thumbs-up here!
  • Dataflow – aiming to replace MapReduce, with a special focus on stream processing and scalability. A convincing use-case during the keynote was Twitter sentiment analysis, showing not only the simplicity of the interface, but also the orchestration of the services through the API. The service is not open yet, but you can check “Big data, the Cloud Way: Accelerated and simplified” to learn more. I’m looking forward to trying it on a few stream-processing tasks for content discovery!

Dataflow – Coming soon to a theater near you

The Web platform – Polymer, WebRTC and HTML5

Whether you’re accessing it from your desktop, phone, or now, your watch or Glass, there’s only one Web. And far from just websites, it can be used as a platform to build powerful apps, as many sessions focused on:

  • Polymer / Web components – or how to build your own HTML tags for quick prototyping and distribution. As an AngularJS user, I was immediately convinced by its two-way data bindings. Polymer (“Polymer and the Web Components revolution“) adds another elegant layer to the Web, allowing you to define tags that are then rendered as full components. Imagine a <my-recent-tracks> tag that automatically renders the top tracks you’ve played on all your favorite music platforms. Well, that’s exactly what Polymer can do;
  • HTML5 – the Web as a platform, from different perspectives. In particular, “HTML5 everywhere: How and why YouTube uses the Web platform” was a great intro talk to understand the benefits of HTML5 from different points of view: UX, scalability, cross-platform. Recommended to anyone who still has doubts about it.
  • WebRTC – building real-time systems in your browser. “Making music mobile with the Web” not only showed how to transform your Macbook into a Marshall JCM2000 with Soundtrap, but also how WebRTC was used for real-time collaborative music creation, with very low latency.

Wearables – It’s all about the UX

Then, a big part of the conference: Glass and smart watches. I often thought that most of the effort to build those was put in the hardware and OS side of things (reducing footprint, optimising battery life, gathering sensor data, etc.).

While some talks clearly focused on this (with some nice hacks such as a rear camera for biking in “Innovate with the Glass Platform“, and football-related ones), I was impressed by “Designing for wearables“, which focused on the role of UX in making sure wearables are devices that let you connect with the world, rather than interfere with it as a phone does.

Paris Saint-Germain represents at I/O 2014!

Showing some early prototypes and discussing how and why Glass / Wear notifications are so minimalistic, this was an inspiring session for anyone interested in UX and products. A must-watch for developers and entrepreneurs aiming to build appealing user-facing products, whether for wearables or more standard devices.

Google+ – Or how Google missed the spot

I may have missed it in other sessions, but none of the ones I attended mentioned Google+. I was not expecting much about it at I/O, given the departure of Vic Gundotra, Sergey Brin’s statements, and a plus-free agenda. Still, that was a big surprise, as it would have been a no-brainer use-case in many talks.

Using Dataflow to process streams from your social circles? Not a word about it. Using Glass to see what your friends are posting? Nope. Alerts on your Google TV to binge-watch some TV show together with your friends at home 5,000 km away? Neither.

G+ could have been an awesome social network – or should I say a social platform. Combined with Freebase / the Knowledge Graph, linking people to things they like, the possibilities would be endless in terms of profiling, discovery and more. Yet, with a poor API and a lack of the portability that could have differentiated it from its main competitors from day 1 (imagine PubSubHubbub / WebSockets as an easy way to integrate G+ into other platforms), I’m sad they missed the spot.

Up to 2015?

Overall, a great conference, in spite of the queue mix-up that forced me to miss about 30 minutes of the keynote, queueing twice around Moscone – a real shame when you travel 8,000 km for such an event.

I particularly enjoyed the focus around the 3D topics (Design, Develop, Distribute), the diversity of talks (watch the awesome “Robotics in a new world – Presented by Women Techmakers“), and the accessibility of the DevRel team between sessions at the Developer sandboxes.

Looking forward to the next one!