Here’s my documentation for Siri++, which was performed for the first time in class on March 9, 2015. The video of (most of) that performance is available here: In Class Siri++ Performance
The basics of the trick involve fake shuffling, a stacked deck, and my digital assistant (a pre-recorded audio track). This trick was fairly topical, since it involved references to the first trick I performed, the most recent Apple keynote, a recently released Netflix show, and MIT. But the trick is generalizable to nearly anything. Any sleight-of-hand or algorithm-based card trick can be augmented with a “digital assistant”. In this instance, Siri was “working against me” and causing trouble, but you could also have a helpful version. Below is a video explanation, a practice video, the raw soundtrack file, and instructions for how to make your own Siri dialogue & script video.
A video based explanation and demonstration of my trick is available here:
A home practice of the trick (without the minor hiccup from the class performance) is available here:
The video that plays the audio clip, and shows me my lines during the trick is available here:
Since the stacked deck and fake shuffles are fairly ordinary, the remainder of this documentation will describe how I made the pre-recorded audio, as well as the raw script video. I used a Mac, and will be talking about specific Mac applications, but this could easily be done on Windows using analogous programs.
Macs can speak text aloud using the “say” terminal command. For example, opening Terminal.app and typing say “Hello World” will have the computer speak the phrase.
Instead of immediately speaking the phrase, say can save the output as a .aiff audio file using the -o flag. Next, instead of reading text typed into Terminal, it can read from a .txt file using the -f flag. Finally, Macs come with a few different built-in voices. You can also download (for free) higher quality versions of those voices (which simply take up more disk space). I found that the “Enhanced” version of “Samantha” was the closest to sounding like Siri on iOS devices. You can set this voice by going to System Preferences.app -> Dictation & Speech -> Text to Speech -> and then choosing “Customize” from the available options.
Therefore, I first created a script, and then made a text file with all of “Siri”’s spoken lines. I then set the system voice to Enhanced Samantha, and ran the terminal command: say -f “script.txt” -o “result.aiff”
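Putting those pieces together, here is a minimal end-to-end sketch. The line content and filenames are placeholders, and the -v flag selects the voice directly (so the System Preferences step is optional); since say only exists on macOS, the render step is guarded.

```shell
#!/bin/sh
# Write "Siri"'s lines to a text file (these sample lines are placeholders;
# use your own script).
printf '%s\n' \
  "Hello. How can I help you?" \
  "Pick a card. Any card." > script.txt

# Render the whole script to an AIFF with a specific voice.
# say is macOS-only, so skip the render if it isn't available.
if command -v say >/dev/null 2>&1; then
  say -v Samantha -f script.txt -o result.aiff
fi
```

On a Mac, this leaves result.aiff containing every line read back-to-back, which is the raw material for the pause-editing step below.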
This produces an audio file of the script. However, there need to be pauses between lines to give the performer time to respond. I didn’t find a convenient way to set timed pauses from the say command line itself (though Apple’s synthesizer does understand embedded silence markup such as [[slnc 2000]] for two seconds), so we’ll get to that with a separate application.
For audio file editing, I used GarageBand (though Audacity, or any other basic audio editor, should work). There was no exact science to this part: I simply played one line of Siri, then hit mute (so I wouldn’t be interrupted) while I said my next line. When I finished my line, I paused GarageBand. The amount of time that had passed since Siri finished her line was the amount of time my line took. I then split the audio clip and inserted silence for that amount of time. I repeated this process for every line in the script, and then exported the result as an .mp3 file.
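For anyone who prefers to stay on the command line, the same spacing can be approximated by rendering each line separately and appending silence with sox (an assumption: sox installed, e.g. via Homebrew). The lines and pause lengths below are hypothetical; you would tune the pauses by rehearsing your own replies.

```shell
#!/bin/sh
# Render each line to its own AIFF and pad it with trailing silence,
# leaving room for the performer's reply. Both `say` (macOS-only) and
# `sox` are guarded; line text and pause lengths are placeholders.
i=0
while IFS='|' read -r line pause; do
  i=$((i + 1))
  if command -v say >/dev/null 2>&1 && command -v sox >/dev/null 2>&1; then
    say -v Samantha -o "line$i.aiff" "$line"
    sox "line$i.aiff" "padded$i.aiff" pad 0 "$pause"  # append silence
  fi
done <<'EOF'
Hello. How can I help you?|3
Pick a card. Any card.|6
EOF
```

Concatenating the padded clips afterwards (sox padded1.aiff padded2.aiff full.aiff) yields the same kind of spaced-out track that the GarageBand process produces.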
I then wrote out all of my lines, one per slide, in a Keynote presentation. I “recorded” the Keynote presentation while I played the audio file, which tracked the time I spent on each slide/line. I was then able to export that recorded slideshow as a QuickTime file.
Finally, I combined the timed Keynote slides and the audio saved from GarageBand in iMovie, and then exported the entire video. In class, I was simply able to play the video file: the audio was presented to the audience, and the visual script lines were presented to me.
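If you would rather script the final assembly too, the Keynote export and the audio track can be muxed together with ffmpeg instead of iMovie (an assumption: ffmpeg installed; all filenames here are placeholders).

```shell
#!/bin/sh
# Mux the recorded slide video (the performer-facing script) with the
# pre-recorded "Siri" audio track. Filenames are placeholders, and the
# step is skipped when ffmpeg or the inputs are missing.
SLIDES="slides.mov"
AUDIO="result.mp3"
OUT="trick.mov"
if command -v ffmpeg >/dev/null 2>&1 && [ -f "$SLIDES" ] && [ -f "$AUDIO" ]; then
  # -map picks the video from the slides and the audio from the soundtrack;
  # -c:v copy avoids re-encoding the video.
  ffmpeg -y -i "$SLIDES" -i "$AUDIO" \
    -map 0:v -map 1:a -c:v copy -shortest "$OUT"
fi
echo "$OUT"
```

The resulting file plays exactly like the iMovie export: one video whose audio goes to the room and whose slides cue the performer.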
Conclusion & Expansion
This trick worked really well. There was a slight hiccup from a mistake I made during the first performance, and there was some crackling because of the speaker system. But the general concept works quite well.
There is definitely room for expansion. The obvious direction would be to make it nonlinear: either by having it actually respond to my voice (though, unfortunately, the best voice recognition software out there (Apple’s & Google’s) is not accessible to third-party developers), or by making it dynamic, subtly changing what it will say based on a keypress on the keyboard or a button in an iPhone app. Card tricks where Siri was able to reveal what card you had, and could do it for any card, not just one, would be very cool.
What other ideas do you have for a digital assistant?