People naturally identify the rhythm of music as they tap their feet and sway in time with the beat. Underlying such motions is an act of cognition that is not easily reproduced in a computer program or automated by machine. This work asks (and answers) the question: How can we build a device that can "tap its foot" along with the music?
Live-tweeting has emerged as a popular hybrid media activity during broadcast media events. Through second screens, users are able to engage with one another and react in real time to the broadcast content. These reactions are dynamic: they ebb and flow throughout the media event as users respond to and converse about different memorable moments. Using the first 2016 U.S. presidential debate between Hillary Clinton and Donald Trump as a case, this paper employs a temporal method for identifying resonant moments on social media during televised events, combining time series analysis, qualitative (human-in-the-loop) evaluation, and a novel natural language processing tool to identify discursive shifts before and after resonant moments. This analysis finds key differences in social media discourse about the two candidates. Notably, Trump received substantially more coverage than Clinton throughout the debate. However, a more in-depth analysis of the candidates' resonant moments reveals that discourse about Trump tended to be more critical than discourse associated with Clinton's resonant moments.