Final Project Documentation (Kenny Friedman)

Beginnings
My final project is the culmination of projects I’ve done throughout the semester using magic to tell stories. I started with a math-based card trick involving the Fibonacci sequence. Then, for my Trick++, I augmented that trick with a fake Siri and had an argument with my computer. Next, for my midterm project, I took the pre-recorded audio concept and applied it to audio and video across multiple screens (the projector, an iPad, and an iPhone). This was similar to Marco Tempest’s iPod illusion, except with vertical displays of multiple sizes.

Now, I’m taking the same concept of interacting with screens, except the audio and video are no longer pre-recorded. Because the act responds to the performer, it requires less precision and leaves more room for audience interaction or mistakes.

I think using techno illusions to convey a concept is a really powerful medium. In the iPod illusion, Marco uses it (aptly) to discuss the concept of deception. I really enjoyed the meta level of using a concept to talk about the concept. Since my technology involves augmented reality (AR), I initially thought of talking about AR by using AR. However, after playing around with the story, I realized that a more general and universal concept like deception, empathy, or time is a better use of techno illusions. I decided to talk about time.

Finally, I tried to step into a magic circle to think about mediums that are rarely used in magic. After looking through different kinds of art and word play, I noticed that I couldn’t find many examples of poetry & magic. And, with the exception of Bo Burnham, I couldn’t find any examples of comedy*, poetry, and magic combined.

*not that my goal was to perform something humorous or anything.

Thoughts Behind the Technology

Marco Tempest’s MultiVid ( http://marcotempest.com/screen/Public_MultiVid ) is a fantastic piece of software; however, it’s limited to videos on iOS devices. The videos sync, but they can’t interact. I wanted to make a framework that incorporated many aspects of an interactive multimedia performance.

I ended up successfully implementing three interactive multimedia elements. They are, in increasing order of technical impressiveness: (1) timing a video projected on a wall that you can interact with, (2) communicating with a fake Artificial Intelligence, and (3) knowing the position of a mobile device in free space.

For my performance, I put the least technically interesting one in the middle. Ignoring (for now) the gimmicks & props that I used throughout, there were 3 main parts to my trick, each corresponding to one of the technologies. These three are described below:

1. Interactive Projected Screen

While an interesting trick, this part is the least impressive from a technological standpoint. I created an app that can control pause/play functionality on another device by tapping anywhere on the first device’s screen. This means you can have “chapters” in an interactive screen trick: you don’t have to have a single video that encapsulates the entire performance (as I did for my midterm performance). This capability exists in Marco’s MultiVid as well, but my version can send multiple commands (instead of simply play/pause), so it would be possible to branch based on audience input (though I don’t use this functionality in my trick).

The devices communicate over OSC (see more on OSC below), which works on both iOS and the Mac. (My trick uses only iOS-to-iOS communication for this section.) Each device runs an instance of a custom app: one receives data and displays the video, and the other acts as the controller.
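To make that concrete, here is a minimal sketch of what the controller side could look like in Swift, assuming a hand-rolled OSC packet sent over UDP with Apple’s Network framework. The address pattern “/video/command”, the port, and the IP address are placeholders, not the values from my app.

```swift
import Foundation
import Network

// Minimal sketch, not the original app's code: builds a one-integer OSC message
// by hand and sends it over UDP using Apple's Network framework. The address
// pattern, port, and IP address below are placeholders.

/// Null-terminates and pads bytes out to a multiple of four, as the OSC spec requires.
func oscPadded(_ bytes: [UInt8]) -> [UInt8] {
    var padded = bytes
    repeat { padded.append(0) } while padded.count % 4 != 0
    return padded
}

/// Packs an OSC message carrying a single 32-bit integer argument.
func oscMessage(address: String, value: Int32) -> Data {
    var packet = oscPadded(Array(address.utf8))                   // address pattern
    packet += oscPadded(Array(",i".utf8))                         // type tag string: one int
    packet += withUnsafeBytes(of: value.bigEndian) { Array($0) }  // big-endian int32 argument
    return Data(packet)
}

// Tapping anywhere on the controller sends a "toggle playback" command (1).
let connection = NWConnection(host: "192.168.1.20", port: 9000, using: .udp)
connection.stateUpdateHandler = { state in
    // Wait until the connection is ready before sending the command.
    if state == .ready {
        connection.send(content: oscMessage(address: "/video/command", value: 1),
                        completion: .contentProcessed { _ in })
    }
}
connection.start(queue: .main)
```

Because an OSC message is just an address pattern, a type tag string, and arguments, each padded to four-byte boundaries, it’s easy to define commands beyond play/pause, which is what makes branching on audience input possible.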

2. Communicating with a fake Artificial Intelligence

In both my Trick++ and my midterm, I had a conversation with a fake AI that pretended to be Siri. In both performances, the AI’s lines were prerecorded. There are three problems with this approach: timing during the performance has to be nearly perfect; changing the script after it’s been created is very difficult (you have to regenerate the entire audio clip); and my old method had no pause commands (see my midterm documentation), so once the audio was generated, I had to insert breaks manually with an audio editor.

This time, I created an easy-to-use system that procedurally generates the audio during the performance and provides simple timing controls (either pause commands or a remote device). The audio is generated using AVSpeechUtterance, which is part of Apple’s native iOS SDK. This solves all three of the original problems with the prerecorded versions. It also opens up the possibility of branching during the performance (by pressing different buttons on the remote), but again, that was not part of my performance.
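As a rough sketch (not my exact code), the live speech generation boils down to something like this; the class name and the script line are illustrative only.

```swift
import AVFoundation

// A rough sketch of the live text-to-speech approach; the class name and the
// script line below are illustrative, not taken from the performance.

final class FakeAIVoice {
    private let synthesizer = AVSpeechSynthesizer()

    /// Speaks one line of the script, generated on the fly rather than prerecorded.
    func say(_ line: String) {
        let utterance = AVSpeechUtterance(string: line)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }

    /// Pauses at the next word boundary, so the timing can follow the performer.
    func hold() { _ = synthesizer.pauseSpeaking(at: .word) }

    /// Resumes after a pause, e.g. when a "continue" command arrives from the remote.
    func resume() { _ = synthesizer.continueSpeaking() }
}

// Usage during the performance, triggered by taps or remote commands.
let voice = FakeAIVoice()
voice.say("Hello. What would you like to talk about?")
```

Since each line is synthesized at performance time, editing the script is just editing a string, and the pause/resume calls replace the breaks I previously had to splice in with an audio editor.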

3. Knowing the Position of an iOS Device in Free Space

Here are a few ways not to get this feature to work (or, at least, ways I couldn’t get it working):

  • Accelerometer Data: I first tried to track position in free space by playing with the accelerometer data. However, the accelerometer only reports acceleration; you have to take the double integral to recover the distance traveled, and integrating twice amplifies the noise so much that the position is impossible to calculate accurately. Even holding the device perfectly still, the estimated position drifts by meters, so centimeter accuracy over any length of time is out of reach (see the sketch after this list).
  • Vuforia AR: Vuforia is a great AR framework developed by Qualcomm with nearly perfect target tracking. The targets/markers can be a photo of any object with well-defined (high-contrast) borders. I had used this framework before in a UROP, but not for this purpose. The goal was to find the vectors of a particular marker that the projector was projecting onto the screen: using the camera on the iOS device, it would detect the marker and work out the device’s position relative to it. I couldn’t get this approach working reliably either.
  • Optical Flow: I believe this one is possible if you have a better grasp of linear algebra than I do. I don’t have much matrix experience and couldn’t figure out the math to do it correctly. It’s probably doable, and someone definitely should do it.
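Here is the sketch promised in the accelerometer bullet: a naive double integration of CMMotionManager’s user acceleration. The 100 Hz interval and single-axis simplification are my own choices for illustration; the point is only to show how quickly the noise accumulates.

```swift
import CoreMotion

// Naive double integration of user acceleration, to illustrate the drift problem.
let motionManager = CMMotionManager()
var velocity = 0.0   // m/s along the device's x axis
var position = 0.0   // m
let dt = 0.01        // seconds between updates (100 Hz)

motionManager.deviceMotionUpdateInterval = dt
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let motion = motion else { return }
    // userAcceleration is gravity-compensated and reported in g's.
    let acceleration = motion.userAcceleration.x * 9.81
    velocity += acceleration * dt   // first integral: velocity
    position += velocity * dt       // second integral: position
    // Any small bias in the acceleration grows quadratically in `position`,
    // which is why the device appears to drift by meters even when held still.
    print(String(format: "estimated x offset: %.3f m", position))
}
```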

Those three methods all involve the device itself detecting its location. After none of them worked, I moved to a fallback: have a secondary, stationary device detect the movement and wirelessly transfer the data to the moving device. I ended up using OpenCV in OpenFrameworks to detect a marker on screen, and then transfer the location information via OSC back to the iPad itself.
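For completeness, here is a rough sketch of what the receiving end on the iPad could look like in Swift, assuming the stationary OpenFrameworks/OpenCV tracker sends the marker position as two floats in an OSC message. The address layout and port are assumptions for illustration, not my app’s actual values.

```swift
import Foundation
import Network

// Sketch of the iPad-side receiver: listens for UDP packets from the stationary
// tracker and pulls two big-endian float arguments out of a simple OSC message.

/// Extracts two float arguments from an OSC message assumed to carry ",ff".
func markerPosition(from packet: Data) -> (x: Float, y: Float)? {
    let bytes = [UInt8](packet)
    var offset = 0
    // Skip the address pattern and the type tag string, both null-terminated
    // and padded to four-byte boundaries.
    for _ in 0..<2 {
        guard let terminator = bytes[offset...].firstIndex(of: 0) else { return nil }
        offset = terminator + 1
        offset += (4 - offset % 4) % 4
    }
    guard bytes.count >= offset + 8 else { return nil }
    func floatArgument(at start: Int) -> Float {
        var raw: UInt32 = 0
        for byte in bytes[start..<start + 4] { raw = (raw << 8) | UInt32(byte) }
        return Float(bitPattern: raw)
    }
    return (floatArgument(at: offset), floatArgument(at: offset + 4))
}

// Listen for UDP packets from the tracker and follow the marker.
let listener = try! NWListener(using: .udp, on: 9001)
listener.newConnectionHandler = { connection in
    connection.start(queue: .main)
    connection.receiveMessage { data, _, _, _ in
        if let data = data, let point = markerPosition(from: data) {
            // Here the app would reposition its on-screen content to track the marker.
            // (A real app would call receiveMessage again to keep reading datagrams.)
            print("marker at \(point.x), \(point.y)")
        }
    }
}
listener.start(queue: .main)
```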

How To Properly Communicate Between Apple Devices

I did not perform on Final Presentations day because I could not get the devices communicating (which ruins parts 1 and 3 [and sometimes 2] of my performance).

I initially thought that high-level Bluetooth LE protocols would be the way to go, since I’ve used them for iOS-to-iOS data transfer before. However, they didn’t work for transferring data from iOS to Mac: BLE on OS X is only set up to act as a central device, not a peripheral, and my system needed the Mac to be the peripheral and the iPad to be the central, receiving device.

OSC saved the day, using the built-in OSC framework that comes standard with both the iOS and OS X versions of OpenFrameworks.