Remove inactive Twitter followees with this tiny Python script

I recently reached the Twitter limit for adding new followees, so I’ve written a tiny Python script, Twitter Cleaner, to remove people who haven’t sent anything for a given number of days (30 by default) – and consequently make room for new ones. It’s now available on github.

Twitter Cleaner

Note that it might conflict with the Twitter TOS if you unfollow too many people at once. However, that will only happen the first time if you put it into a daily crontab. It was safe in my case, but I can’t guarantee it will be in yours. You may also hit the API rate limit if you have too many followees.

It’s built using python-twitter, and is available under the MIT license.
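The core logic fits in a few lines. Here is a minimal sketch of the idea using python-twitter – the helper names and date handling are my own illustration, not necessarily the actual script, so check the repository for the real code:

```python
from datetime import datetime, timedelta

def is_inactive(last_tweet_at, days=30, now=None):
    """Return True if the last tweet is older than `days` days (or missing)."""
    now = now or datetime.utcnow()
    if last_tweet_at is None:  # the account never tweeted
        return True
    return now - last_tweet_at > timedelta(days=days)

def clean_followees(api, days=30):
    """Unfollow everyone whose latest tweet is older than `days` days.

    `api` is an authenticated twitter.Api instance from python-twitter.
    """
    for friend in api.GetFriends():
        # Twitter timestamps look like "Mon Sep 24 03:35:21 +0000 2012"
        last = friend.status and datetime.strptime(
            friend.status.created_at, "%a %b %d %H:%M:%S +0000 %Y")
        if is_inactive(last, days=days):
            api.DestroyFriendship(user_id=friend.id)
```

Running `clean_followees` daily from cron keeps the number of unfollows per run small, which is also gentler on the API rate limits.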

 

Last night a DJ saved my life: What if Twitter could be your own DJ?

While the Twitter music app eventually failed, it’s still clear that people use Twitter’s data stream to share and/or discover new #music. Thanks to Twitter cards, you can directly watch a YouTube video, or listen to a SoundCloud clip, right from your feed, without leaving the platform. But what if Twitter could be your own DJ, playing songs on request?

Since it’s been a few months since I enjoyed my last Music Hack Day – oh, I definitely miss that! – I’ve hacked a proof of concept using the seevl API, combined with the Twitter and YouTube ones, to make Twitter act as your own personal DJ.

Hey @seevl, play something cool

The result is a twitter bot, running under our @seevl handle, which accepts a few (controlled) natural-language queries and replies with an appropriate track, embedded in a Tweet via a YouTube card. Here are a few patterns you can use:

Hey @seevl, play something like A

To play something that is similar to A. For instance, tweet “play something like New Order”, and you might get a reply with a Joy Division track in your feed.

Hey @seevl, play something from L

To play something from an artist signed to label L (or, at least, one that was on this label at some stage).

Hey @seevl, play some G

To play something from a given genre G

Hey @seevl, play A

To simply play a track from A.
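A bot like this needs to classify each mention into one of those patterns before querying the music APIs. Here is a sketch of how such parsing could work with plain regular expressions – the intent names and ordering are my assumptions, not the actual bot’s code:

```python
import re

# Order matters: the most specific patterns must be tried first,
# otherwise "play something like X" would match the bare "play X" rule.
# The (?!thing) lookahead keeps "play something ..." out of the genre rule.
PATTERNS = [
    ("similar", re.compile(r"\bplay something like (?P<q>.+)", re.I)),
    ("label",   re.compile(r"\bplay something from (?P<q>.+)", re.I)),
    ("genre",   re.compile(r"\bplay some (?!thing)(?P<q>.+)", re.I)),
    ("artist",  re.compile(r"\bplay (?P<q>.+)", re.I)),
]

def parse_request(tweet_text):
    """Return (intent, query) for a mention, or (None, None) if no match."""
    for intent, pattern in PATTERNS:
        match = pattern.search(tweet_text)
        if match:
            return intent, match.group("q").strip()
    return None, None
```

For instance, `parse_request("Hey @seevl, play something like New Order")` yields the `similar` intent with `New Order` as the query, which can then be sent to a similarity API.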

By the way, you can replace “Hey” with anything you want, as long as you politely ask your DJ what you want him to spin. Here’s an example, with my tweet just posted (top of the timeline), and a reply from the bot (bottom left).

Twitter As A DJ

A little less conversation

As it’s all Twitter-based, not only can you send messages, you can have a full conversation with your virtual DJ. Here’s, for instance, what I sent first

And got this immediate reply – with the embedded YouTube video

Followed by (“coo” meant to be “cool”)

To immediately listen to Bessie Smith in my stream

It’s kind of fun, I have to say, especially due to the instantaneous nature of the conversation – it even reminds me of IRC bots!

Unfortunately, it’s likely that the bot will reach the API rate-limit when posting Tweets (and I’m not handling those errors in the current MVP), so you may not have a reply when you interact with it.

Twitter As A Service?

Besides the music-related hack, I also wanted to showcase the growth of intelligent services on the Web – and how a platform like Twitter can be part of it, using “Twitter As A Service” as a layer for an intelligent Web.

The recently-launched “Buy button” is a simple example of how Twitter can be a Siri-like interface to the world. But why not bring more intelligence into Twitter? What about “Hey @uber, pick me up in 10 minutes”, using the Tweet geolocation plus an Uber-API integration to directly pick up – and bill – whoever #requested a black car? Or “Please @opentable, I’d love to have sushi tonight”, getting a reply with links to the top-rated places nearby, with in-tweet booking capability (via the previous Buy button)? The data is there, the tools and APIs are there, so…

Yes, this sounds a bit like what’s described in the seminal Semantic Web article by Tim Berners-Lee, James Hendler and Ora Lassila. Maybe that’s because we’re finally there, in an age where computers can be those social machines we’ve been dreaming about!

 

Love Product Hunt? Here’s a Chrome extension to discover even more products

Product Hunt is the new rising star of the start-up community. Think of it as a mix of Beta List and Hacker News, but with products that are already live, and a wider community, including engineers of course, but also product people, investors, media and more. A few days ago, and by popular demand, they launched early access to their official API.

https://twitter.com/rrhoover/status/501022229899902977

More Products: A Chrome plug-in for Product Hunt recommendations

With more than 6,000 products already in the Product Hunt database, I decided to use the API to build a product recommendation engine. It seems that every time it comes to hacking and APIs, I can’t get away from discovery, or music. Or both.

The result is a Chrome extension simply named “More Products!”. It directly integrates the top-10 related products into each product page, as you can see below. I might iterate on the algorithm itself, but I want to keep this plug-in very focused, so it’s unlikely to gain other features. Note that it doesn’t track anything, so your privacy is preserved.

More Products on Product Hunt!

Under the hood

The engine relies on the API to get the list of all products and related posts, and then uses TF-IDF and cosine similarity to find similarities between them, using NLTK and scikit-learn, respectively the standard Python tools for Natural Language Processing and Machine Learning. To put it simply, it builds a giant database of words used in all posts, mapped to products with their frequency, and then finds how close products are based on those frequencies.
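To illustrate the idea without the full NLTK / scikit-learn stack, here is a toy, dependency-free version of the same pipeline, with made-up product descriptions (the real engine uses scikit-learn’s TfidfVectorizer and cosine similarity):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a TF-IDF vector (word -> weight) for each tokenized document."""
    n = len(docs)
    # Document frequency: in how many documents each word appears.
    df = Counter(word for doc in docs for word in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse word-weight vectors."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical product descriptions, naively tokenized on whitespace.
products = [
    "music discovery app for new artists".split(),
    "discovery engine for music fans".split(),
    "todo list productivity app".split(),
]
vecs = tfidf_vectors(products)
# The two music products end up closer to each other than to the todo app.
```

Each product is reduced to a bag of weighted words, and the recommendation list is simply the other products ranked by cosine similarity.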

New products are fetched every 2 hours, and recommendations are updated at the same time. Flask handles requests between the extension and the recommendations database, and Redis is used as a cache layer. 

Back in Black: Analyzing the loudness and tempo of the Rolling Stone top 500 songs

Here’s the second post of my data analysis series on the Rolling Stone top 500 greatest songs of all time. While the first one focused on lyrics, this one is all about the acoustic properties of the data-set – especially their volume and tempo.

To do so, I used the EchoNest, which delivers a good understanding of each track at the section level (e.g. verse, chorus, etc.), but also at a deeper “segment” level, providing loudness details for very short intervals (less than a second each). This is not perfect, due to some issues discussed below, but it gives a few interesting insights.

[Update 22 July] As noted in the comments, there were a few unexpected results. I’ve run the pipeline again and done some more cleaning on the API results, as explained here.

Black leather, knee-hole pants, can’t play no highschool dance

As my goal was to identify relevant tracks from the dataset, in addition to absolute loudness and tempo values for each track, I also looked at their standard deviation. If you’re not familiar with it, it helps identify which songs / artists tend to stay close to their average tempo / loudness, versus the ones that are more dynamic.

Before going through individual songs from the top-500, let’s take an example with the top-10 Spotify tracks of a few artists to check their loudness:

Artist         Average loudness (dB)   Standard deviation
Motörhead             -5.05                   1.29
Ramones               -6.85                   3.22
Radiohead            -11.83                   3.20
Daft Punk            -11.23                   4.82
Public Enemy          -5.34                   2.30
Beastie Boys          -9.38                   4.30
Bob Dylan            -10.67                   2.88
Pink Floyd           -16.06                   6.50

And the tempo:

Artist         Average tempo (BPM)   Standard deviation
Motörhead            130.58                 33.35
Ramones              175.34                  7.69
Radiohead            104.80                 28.86
Daft Punk            109.90                 11.96
Public Enemy         102.93                 13.28
Beastie Boys         108.37                 16.85
Bob Dylan            126.60                 33.95
Pink Floyd           118.23                 25.08

You can see that some bands really deserve their reputation. For instance, while Pink Floyd have a high standard deviation in both volume and tempo (not surprising), Motörhead are not only the loudest (on average) of the list, but also the ones with the smallest standard deviation, meaning most of their tracks stay around that average loudness. In other words, they play everything loud. The Ramones are just fast – everything fast. And when they’re together on stage, the result is not surprising.

But you don’t really care for music, do you?

Coming back to the top 500, I ran the Echonest analysis on 474 tracks from the list. The 26 missing ones are due to various errors at different stages of the pipeline.

On the one hand, I’ve used raw results from the song API to get the average values. [Update 22 July] I had to consolidate the data by aggregating multiple API results together. For a single song, multiple tracks are returned by the API (as expected), but there can be large inconsistencies between them. For instance, if you search for American Idiot, one track (ID=SOHDHEA1391229C0EF) is identified as having a tempo of 93, the other one (SOCVQDB129F08211FC) of 186. Others have slighter variations (in volume, for instance, between a live and the original version). To simplify things – and I agree this introduces a bias in the results – I averaged the first 3 results from the API.

On the other hand, I relied on numpy to compute the standard deviation from the first API result, first removing the fade-in and fade-out of each track. Here, I also skipped every segment or section where the API confidence was too low (< 0.4).
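As a sketch of that filtering step, with hypothetical segment data shaped like the Echonest analysis output (the real pipeline uses numpy; the standard library is enough to show the idea):

```python
from statistics import mean, pstdev

def loudness_stats(segments, min_confidence=0.4):
    """Mean and standard deviation of segment loudness,
    ignoring low-confidence segments.

    `segments` is a list of dicts shaped like the Echonest output,
    e.g. {"loudness_max": -7.2, "confidence": 0.8}.
    """
    values = [s["loudness_max"] for s in segments
              if s["confidence"] >= min_confidence]
    return mean(values), pstdev(values)

# Hypothetical segments: the 0.1-confidence one is dropped.
segments = [
    {"loudness_max": -6.0, "confidence": 0.9},
    {"loudness_max": -8.0, "confidence": 0.7},
    {"loudness_max": -30.0, "confidence": 0.1},  # low confidence, skipped
    {"loudness_max": -7.0, "confidence": 0.5},
]
avg, dev = loudness_stats(segments)
```

Dropping the low-confidence segment keeps a single noisy reading from inflating the deviation, which is exactly why the real pipeline filters on the 0.4 threshold.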

The average loudness for the dataset is -10.38 dB. Paul Lamere ran an analysis of 1500 tracks a few years ago, with an average of -9.5 dB, so we can see that this dataset is not too far from a “random” sample – check the conclusion of this post to understand why the Echonest’s loudness is less than 0.

Going through individual tracks, here are the loudest tracks from the list:

And the quietest ones:

You can clearly see the dB difference between a loud (CCR) and a quiet (Jeff Buckley) track on the following plots.

Loudness for CCR - Who'll stop the rain
Creedence Clearwater Revival – Who’ll Stop the Rain
Loudness plot for Hallelujah by Jeff Buckley
Jeff Buckley – Hallelujah (a low-level but very dynamic track)

Looking at the standard deviations, here are now the most dynamic tracks, volume-wise:

This last one is a beautiful example of a soul song with a dynamic volume range, and here’s a live version below.

On the other side of the spectrum, here are the least dynamic tracks – i.e. the ones with the smallest standard deviation, volume-wise:

The Ramones strike again – but I’m not sure that Highway to Hell is actually so linear – even though the 2nd part definitely is!

AC/DC – Highway to Hell

Please could you stop the noise, I’m trying to get some rest

Moving away from loudness to focus on tempo, here are the fastest tracks (in average BPM) of the list (some seem a bit odd here):

And the slowest ones, also including the Stones:

But I believe that, once again, it’s interesting to look at how dynamic the tracks can be, starting with the most dynamic ones (tempo-wise):

And the most static ones, i.e. the ones with the least tempo variation:

If you’ve ever looked at the Man vs Machine app, you might find it fun that even though the least dynamic (or the most consistent, depending on how you look at it) one uses samples (Run DMC), all the others involved drummers. Don’t forget to thank the best backing band ever for the perfect tempo on Marvin Gaye’s track (and I couldn’t resist sharing their own cover of the song).

I’m waiting for that final moment you say the words that I can’t say

Last but not least, I normalized and combined both the tempo deviation and the volume one to assign a [0:1] score to each track, in order to find the most and least dynamic tracks overall. Here’s the top-5 of the most dynamic ones:
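The combined score can be computed with a simple min-max normalization of each deviation before averaging them. Here is a sketch with made-up numbers – the equal weighting of the two deviations is my assumption:

```python
def normalize(values):
    """Min-max normalize a list of values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def dynamics_scores(tempo_devs, loudness_devs):
    """Per-track average of the normalized tempo and loudness deviations.

    Both inputs are aligned lists (one value per track); the result
    is a [0, 1] score where 1 means most dynamic overall.
    """
    t, l = normalize(tempo_devs), normalize(loudness_devs)
    return [(a + b) / 2 for a, b in zip(t, l)]

# Hypothetical standard deviations for three tracks.
scores = dynamics_scores([33.4, 7.7, 28.9], [1.3, 3.2, 6.5])
```

Normalizing first matters because tempo deviations (tens of BPM) and loudness deviations (a few dB) live on very different scales; without it, tempo would dominate the combined score.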

If you listen to My Generation, you can clearly hear the dynamics, both in tempo and volume, in the different bursts of the song. The Radiohead one is more of a slow build, with clearly distinct phases, as shown below for the volume part.

Radiohead – Paranoid Android

Finally, here are the least dynamic ones. Several tracks on that list made it through the charts, showing that even a song that is pretty flat in both volume and tempo can still be a hit – or at least an earworm:

Music recommendations with 300M data points and one SQL query

While toying with the public BigQuery datasets, impatiently waiting for Google Cloud Dataflow to be released, I noticed the Wikipedia Revision History one, which contains a list of 314M Wikipedia edits, up to 2010. In the spirit of Amazon’s “people who bought this”, I decided to run a small experiment on music recommendations based on Wikipedia edits. The results are not perfect, but provide some insights that could be used to bootstrap a recommendation platform.

Wikipedia edits as a data source

Wikipedia pages are often an invaluable source of knowledge. Yet, the type and frequency of their edits also provide great data to mine knowledge from. See for instance the Wikipedia Live Monitor by Thomas Steiner, which detects breaking news through Wikipedia, “You are what you edit”, an ICWSM09 study of Wikipedia edits to identify contributors’ location, or some of my joint work on data provenance with Fabrizio Orlandi.

Here, my assumption for building a recommendation system is that Wikipedia contributors edit similar pages, because they have expertise and interest in a particular domain and tend to focus on it. This obviously becomes more relevant at the macro level, taking a large number of edits into account.

In the music-discovery context, this means that if 200 of the Wikipedia editors contributing to the Weezer page also edited the Rivers Cuomo one, well, there might be something in common between both.

The dataset

Let’s have a quick look at the aforementioned Wikipedia Revision History dataset:

This dataset contains a version of that data from April, 2010. This dataset does not contain the full text of the revisions, but rather just the meta information about the revisions, including things like language, timestamp, article and the like.

  • Name: publicdata:samples.wikipedia
  • Number of rows: 314M

Not too bad, as it contains a large set of (page, title, user) tuples – exactly what we need for the experiment.

Querying for similarity

Instead of building a complete user/edits matrix to compute the cosine distance between pages, or using a more advanced algorithm like Slope One (with the number of edits as an equivalent for ratings), I’m simply finding common edits, as explained in the original Amazon paper. And, to make this a bit more fun, I decided to do it with a single query over the 314M rows, testing BigQuery’s capabilities at the same time.

The following query finds all pages sharing common editors with the current one, ranked by the number of common edits. Tested with multiple inputs, it took an average of 5 seconds over the full dataset. You can run it yourself by going to your Google BigQuery console and selecting the Wikipedia dataset.

SELECT title, id, count(id) as edits
FROM [publicdata:samples.wikipedia]
WHERE contributor_id IN (
  SELECT contributor_id
  FROM [publicdata:samples.wikipedia]
  WHERE id=30423
    AND contributor_id IS NOT NULL
    AND is_bot is NULL
    AND is_minor is NULL
    AND wp_namespace = 0
  GROUP BY contributor_id
  )
  AND is_minor is NULL
  AND wp_namespace = 0
GROUP EACH BY title, id
ORDER BY edits DESC
LIMIT 100

Update 2014-07-14: To clarify a comment on Twitter / reddit - I’m using page ID instead of title to make sure the edits over time apply to the same page, since IDs are immutable but page titles can change upon requests from the community.

This is actually a simple query, finding all pages (wp_namespace = 0 restricts it to content pages, excluding user pages, talks, etc.) edited (excluding minor edits) by users who also edited (excluding bots and minor contributions) the page with ID 30423, ranking them by number of edits. You can read it as “Find all pages edited by people who also edited the page about the Clash, ranked by most edited first”.

And here are some of the results

Who’s related to the Clash, using Wikipedia edits

As you can see, from a music-discovery perspective, that’s a mix of relevant ones (Ramones, Sex Pistols) and WTF ones (The Beatles, U2). There’s also a need to exclude non-music pages, but that could be done programmatically with some more information in the dataset.

Towards long tail discovery

As expected, and as seen above, results are not that good for mainstream / popular artists. Indeed, edits to the Beatles page are unlikely, on average, to say much about the musical preferences of their editors. Yet, this becomes more relevant for long-tail artist discovery: if you take the time to edit indie bands’ pages, you most likely care about them.

Trying with Mr Bungle, the query returns Meshuggah and The Mars Volta as the first two music-related entries, all of them playing some kind of experimental metal – but then digresses again with the Pixies. Looking at band members / solo artists and using Frank Black as a seed leads to The Smashing Pumpkins, Pearl Jam, R.E.M. and, obviously, the Pixies as the first four recommendations. Not perfect in either case, but not too bad for an algorithm that is completely music-agnostic!

Scaling the approach

There are many ways this could be improved, for instance:

  • Removing too-active contributors – who may edit pages to ensure Wikipedia guidelines are followed, rather than for topic-based interest, and consequently introduce some bias;
  • Filtering the results using some ML approaches or graph-based heuristics – e.g. exclude results if their genres are more than X nodes away in a genre taxonomy.
  • Using time-decay – someone editing Nirvana pages in 1992 might be interested in completely different genres now, so joint edits might not be relevant if done with an x-days interval or more.

Yet, besides its scientific interest, and showing that BigQuery is very cool to use, this approach also showcases – if needed – that even though algorithms may rule the world of music discovery, they might not be able to do much without user-generated content.

The Long Tail, with Spotify and Polymer

The Long Tail. That’s nothing new, neither on the Web nor in the music field. I remember when I first read Chris Anderson’s article, and since then, many have talked or written about it, including Paul Lamere and Oscar Celma in the music-tech sphere.

Yet, one must admit that, with millions of tracks available online, it’s always a challenge to find something new, digging into that so-called long tail of less popular artists or songs.

So, between World Cup games, I built a web component – and a companion web app – to enjoy the least popular tracks of any artist.

A web component to play an artist’s long tail

Built with Polymer and using the new Spotify Web API, <long-tail> is a web component that embeds a Spotify play button with the least popular tracks of an artist.

First, install it with Bower:

bower install long-tail

Then, include it in an HTML page:

<html>

<head>
  <script src="bower_components/platform/platform.js"></script>
  <link rel="import" href="bower_components/long-tail/long-tail.html">
</head>

<body>
  <long-tail artist="4tZwfgrHOc3mvqYlEYSvVi" size="25"></long-tail>
</body>

</html>

And there it is – you’re ready to play. No JS to write, no code to copy and paste; everything is handled internally: the beauty of Web components. Unfortunately, Javascript can’t be used on WordPress.com blogs, but here’s the result of the previous snippet.

Daft Punk less popular tracks

The source is on github (MIT license), and you can see how easy it is to create. It simply calls the Spotify API to find an artist’s albums, then their tracks (limited to 50 results each time – hence parsing a maximum of 2,500 tracks per artist), finally sorting them by inverse popularity. It also excludes the ones with popularity=0, as those do not always seem to be the least popular ones. Maybe some region-dependent issue?
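The ranking step itself is straightforward. Here is a sketch of the sorting logic over the track objects returned by the Spotify Web API (the field names follow the API’s track object; the sample data is made up):

```python
def long_tail(tracks, size=25):
    """Return the `size` least popular tracks, skipping popularity == 0.

    `tracks` is a list of Spotify track objects, each carrying a
    "popularity" field between 0 and 100.
    """
    candidates = [t for t in tracks if t["popularity"] > 0]
    return sorted(candidates, key=lambda t: t["popularity"])[:size]

# Hypothetical track objects for one artist.
tracks = [
    {"name": "Hit Single", "popularity": 85},
    {"name": "Deep Cut", "popularity": 4},
    {"name": "B-Side", "popularity": 12},
    {"name": "Unavailable", "popularity": 0},  # excluded, see above
]
```

Sorting ascending instead of descending is the whole trick: the usual “top tracks” widget becomes a long-tail explorer.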

I suppose that, as with many recent JS toolkits such as AngularJS, the learning curve will be steeper when building advanced components (probably due to the early-stage documentation), but at first glance it looks very intuitive, and there are already many elements to reuse.

Try it with your favorite artists

As the component is mostly for coders, I’ve put together a companion Web app – shamelessly reusing Paul’s design from his recent Spotify hacks. For each artist, it uses the previous component to display their 50 least popular tracks, according to Spotify.

Try it at The Long Tail, and have fun exploring the hidden gems of your favorite artist!

The Long Tail of Rancid tracks
The Long Tail of Rancid tracks