Author Archives: kennethf

Final Project Documentation (Kenny Friedman)

My final project is the culmination of projects I’ve done throughout the semester using magic to tell stories. I started with a math-based card trick involving the Fibonacci sequence. Then, I augmented that trick with a fake-Siri and had an argument with my computer for my Trick++. Next, for my midterm project, I used the pre-recorded audio concept and applied it to audio and video across multiple screens (the projector, an iPad, and an iPhone). This was similar to Marco Tempest’s iPod illusion, except with vertical displays of multiple sizes.

Now, I’m taking the same concept of interacting with screens, except the audio and video are no longer pre-recorded. This requires less precision, since the act responds to the performer, and it leaves more room for audience interaction or mistakes.

I think using techno illusions to convey a concept is a really powerful medium. In the iPod illusion, Marco uses it (aptly) to discuss the concept of deception. I really enjoyed the meta level of using a concept to talk about the concept. Since my technology involves augmented reality (AR), I initially thought of talking about AR by using AR. However, after playing around with the story, I realized that a more general and universal concept like deception, empathy, or time is a better use of techno illusions. I decided to talk about time.

Finally, I tried to step into a magic circle to think about mediums that are rarely used in magic. After looking through different kinds of art and word play, I noticed that I couldn’t find many examples of poetry & magic. And, with the exception of Bo Burnham, I couldn’t find any examples of comedy*, poetry, and magic combined.

*not that my goal was to perform something humorous or anything.

Thoughts Behind the Technology

Marco Tempest’s MultiVid is a fantastic piece of software; however, it’s limited to videos on iOS devices. The videos sync, but they can’t interact. I wanted to make a framework that incorporated many aspects of an interactive multimedia performance.

I ended up successfully implementing three interactive multimedia elements. They are, in increasing order of technical impressiveness: (1) timing a video projected on a wall that you can interact with, (2) communicating with a fake Artificial Intelligence, and (3) knowing the position of a mobile device in free space.

For my performance, I put the least technically interesting one in the middle. Ignoring (for now) the gimmicks & props that I used throughout, there were 3 main parts to my trick, each corresponding to one of the technologies. These three are described below:

1. Interactive Projected Screen

While an interesting trick, this part is the least impressive from a technological standpoint. I created an app that can control pause/play functionality on another device by tapping anywhere on the screen of the first device. This means you can have “chapters” in an interactive screen trick: you don’t have to have a single video that encapsulates the entire performance (as I did for my midterm performance). This capability exists in Marco’s MultiVid as well, but my version can send multiple commands (instead of simply play/pause), so it would be possible to branch based on audience input (however, I don’t use this functionality in my trick).

The communication between devices is OSC (see more on OSC below) on both iOS and Mac. (My trick involves just iOS-to-iOS for this section.) Each device runs an instance of a custom app: one app receives data and displays video, and the other is used as the controller.
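For illustration, here is a minimal sketch of what a play/pause command can look like on the wire, hand-encoded per the OSC 1.0 message layout using only Python’s standard library. The address `/video/control`, host, and port are made up for this example; the real apps used openFrameworks’ OSC support rather than this code.

```python
import socket

def osc_pad(raw: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, as the OSC spec requires."""
    raw += b"\x00"
    while len(raw) % 4:
        raw += b"\x00"
    return raw

def osc_message(address: str, command: str) -> bytes:
    """Build a minimal OSC message with a single string argument."""
    # Layout: padded address, padded type-tag string ",s", padded string arg.
    return osc_pad(address.encode()) + osc_pad(b",s") + osc_pad(command.encode())

def send_command(command: str, host: str = "192.168.1.20", port: int = 9000):
    """Fire a command (e.g. "play", "pause", "chapter2") at the receiver app."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message("/video/control", command), (host, port))
    sock.close()
```

Because the argument is an arbitrary string rather than a single play/pause bit, the same channel can carry “chapter” or branching commands.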

2. Communicating with a fake Artificial Intelligence

In both my Trick++ and my midterm, I had a conversation with a fake AI that pretended to be Siri. In both performances, the conversation was prerecorded. There are three problems with this approach: timing during the performance has to be nearly perfect; changing the script is very difficult after it has been created the first time (you have to regenerate the entire audio clip); and there are no pause commands using my old method (see the midterm documentation), so once the audio was generated, you had to manually insert breaks using an audio editor.

This time, I created an easy-to-use system that procedurally generates the audio during the performance and provides easy timing controls (either with pause commands or with a remote device). The audio is generated using AVSpeechUtterance, which is part of Apple’s native iOS SDK. This solves all three of the original problems with the prerecorded versions. It also enables branching during the performance (by pressing different buttons on the remote), but again, that was not part of my performance.
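To make the idea concrete, here is a sketch of one way a script with pause commands might be parsed. The `[pause N]` marker syntax and the sample lines are my own invention for this example, not taken from the actual performance; on iOS, each spoken line would become an AVSpeechUtterance (which also offers `preUtteranceDelay`/`postUtteranceDelay` for timing).

```python
import re

# Hypothetical script format: spoken lines interleaved with [pause N] markers
# that give the performer time to deliver their own line.
SCRIPT = """\
Hello, Kenny.
[pause 3]
I don't think that's a good idea.
[pause 1.5]
Fine. Have it your way.
"""

def parse_script(text):
    """Turn the script into a queue of ("say", line) / ("pause", seconds) steps."""
    steps = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        match = re.fullmatch(r"\[pause ([0-9.]+)\]", line)
        if match:
            steps.append(("pause", float(match.group(1))))
        else:
            steps.append(("say", line))
    return steps
```

Editing the performance then means editing a text file, and a remote button press can simply skip to the next `("say", …)` step, which is what makes branching possible.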

3. Knowing the Position of an iOS Device in Free Space

Here are several ways not to get this feature to work (or, at least, ways I couldn’t get it working):

  • Accelerometer Data: My first idea was to track position in free space using the accelerometer data. However, the accelerometer reports acceleration; you need to take the double integral of that to find the distance traveled. Taking the integral twice amplifies so much noise that the position is impossible to calculate accurately: holding the device perfectly still will report that it has traveled meters. So centimeter accuracy for any length of time is impossible.
  • Vuforia AR: Vuforia is a great AR framework developed by Qualcomm that has nearly perfect target tracking. The targets/markers can be a photo of any object with well-defined (high-contrast) borders. I had used this framework before in a UROP, but not for this purpose. The goal was to find the vectors of a particular marker that the projector was projecting onto the screen: using the camera on the iOS device, it would detect the marker and derive the device’s position from it. I couldn’t get this working reliably.
  • Optical Flow: I believe this one is possible with a better understanding of linear algebra. I don’t have much matrix experience and couldn’t figure out the math to do this one correctly. It’s probably doable, and someone definitely should do it.
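The accelerometer problem is easy to demonstrate. This little simulation (my own toy numbers, not real sensor data) integrates pure noise twice for a device that is perfectly still; the computed position wanders away from zero even though the true motion is exactly zero.

```python
import random

# Simulate why double-integrating accelerometer data fails: the device is
# perfectly still (true acceleration is zero), but every reading carries noise.
random.seed(42)                       # fixed seed so the run is repeatable
dt = 0.01                             # 100 Hz sample rate
velocity = 0.0
position = 0.0
for _ in range(6000):                 # one minute of samples
    noisy_accel = random.gauss(0.0, 0.05)   # assumed ~0.05 m/s^2 sensor noise
    velocity += noisy_accel * dt            # first integral: velocity
    position += velocity * dt               # second integral: position
print(abs(position))                  # meters of drift despite zero true motion
```

The noise in velocity is a random walk, and integrating a random walk makes the position error grow even faster, which is why the drift reaches meters within a minute.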

Those three methods all involve the device itself detecting its location. After none of them worked, I moved to a fallback: have a secondary, stationary device detect the movement and wirelessly transfer the data to the moving device. I ended up using OpenCV in openFrameworks to detect a marker on screen, and then transferred the location information via OSC back to the iPad itself.

How To Properly Communicate Between Apple Devices

I did not perform on Final Presentations day because I could not get the devices communicating (which ruins parts 1 and 3 [and sometimes 2] of my performance).

I initially thought that high-level Bluetooth LE protocols would be the way to go, since I’ve used them for iOS-to-iOS data transfer before. However, transferring data from iOS to Mac doesn’t work this way: BLE on OS X is only set up to act as a central device, not a peripheral. My system needed the Mac to be the peripheral and the iPad to be the central, receiving device.

OSC saved the day, using the OSC framework that comes standard with both the iOS and OS X versions of openFrameworks.

Midterm Trick Documentation (Kenny Friedman)

Midterm Teaser

In Class Trick Performance

Here is a video of the in-class performance. Unfortunately, the iPhone 5s was not playing, which both took away from the overall effect, and distracted me during the second half of the video. Check back soon for a staged (but better!) performance of the trick.

Three Videos At Once

Here is a video that shows all three of the clips playing simultaneously. The top clip is the video displayed on the Projector (by means of an iPhone 4S), the bottom left video is played on the iPad, and the bottom right video is played on the iPhone 5s.

Main Idea


The main idea was to tell the story of the computer using multiple devices: the projector, an iPad, an iPhone, and some physical devices.

The inspiration for this trick was Marco Tempest’s iPod TED Talk trick. I love the way he augments his storytelling with “tricks” spread throughout. I wanted to build on this idea by using vertical screens (instead of on a table) as well as using multiple sized screens. I tried multiple ways (see “Expanding on the Idea”) to get this effect. After trying many methods, Marco’s MultiVid software ended up being the best way to perform the trick.

How It Was Done


Here is a diagram of the system in place. The trick uses 4 videos playing at the same time: one for the iPad, one for the iPhone, one for the projector (which is projected from a second iPhone), and a 4th that serves as a “teleprompter” only the magician can see (so they know where they are in the trick). All 4 videos are synced through a MacBook. All 5 devices are running MultiVid (available for free online).

I purchased an iPad case that allows for holding it with your palm, so my hand would not get in the way of the screen. I could not find a similar product for the iPhone. I tried to make a “handle” on the back of the iPhone, which failed miserably. I also used a 30pin-to-VGA adapter to transfer the iPhone video to the projector.

The videos were all created using a 30-day free trial of Final Cut Pro X [FCPX] (some of the animations and “slides” were created in Keynote, and then exported as QuickTime files to be used in FCPX). All of the audio across all 3 of the video displays (that the audience saw) was pushed to a single device (the projector-connected iPhone) to get the audio through the room’s speaker system, as well as to prevent slightly out-of-sync audio (which is more obvious than out-of-sync video).

Audio: The audio was created in the same way as in my previous trick (see: Siri++), but I augmented Siri’s voice with additional sounds and music. The music included instrumentals of The Beatles & Bob Dylan (Jobs’s favorite bands… but mine too, so there’s not too much of a connection there 😉 ). Most of the sound effects came from the default iLife sound-effects library. All audio editing was done within FCPX.

Video: After some failed attempts to get After Effects up and running (I hope to have it figured out by the final presentation), I defaulted to Keynote animations and the FCPX free trial.

Expanding on the Idea

I would love to get interactive elements working on multiple screens; however, MultiVid only supports video. It would be great to write software that makes interactive elements work across screens. (I am going to try to play around with this, but for it to work, I need to figure out how to do it in a way that doesn’t take as long as a PhD thesis [see: THAW].)

Also, after working on the trick, I see why Marco did it on a flat surface. You are limited by what you can hold if you want to do the trick vertically. I will have to see how this can be improved.

Dragon Flight

Isa, Jonathan, and Kenny.

Teaser Trailer


The year is 1235 PD (post-dragonism). You’re a young dragon master tasked with getting newborn dragons ready for battle. You come from a small, rural town in the West, and you’ve just been thrown into the big leagues. You’re the underdog, but you’ve got grit, and you’re going to make it. Welcome to Dragon Flight.

For your first task, you’ve been allowed to choose five new dragons. Right now, they are just eggs, so you won’t be able to tell them apart. However they will soon become great warriors. Your job is to turn your dragon eggs into fighters before anyone else does.

Each dragon will come from one of four tribes: earth, fire, wind, and water. Earth dragons have a great understanding of their terrain. They use the ground to their advantage: knowing how to hide, traveling quickly, and manipulating soil. Fire dragons use heat to their advantage. They can wield fire to defend and attack. Wind dragons are the fastest. Flying through the air faster than the rest, they don’t like to stay on the ground for long. Water dragons can fly in water. Also known as swimming, these dragons have the advantage of traveling below the surface (as long as that surface is liquid). Wind and fire dragons work well together. Earth and water are friendly as well. However, fire and water don’t mix. Earth stays still and wind is ever changing – don’t let them combine!


Be the first to teach your dragon to fly!

Included Pieces

  • Board
  • Catching Net
  • Dragon Eggs


First, place the board on a flat surface. Randomly distribute 5 eggs to each of the 2 to 4 players (either from a bag or with all eggs face down on the table). Once you have your eggs, add your color marker to each egg so you know whose eggs are whose! The board has a starting location for each color, and that is where your dragons will begin their journey.


The game is turn-based. Each player takes a turn in order (youngest player goes first). Two (2) actions may be performed per turn; see the possible actions below. The game is over when the first player successfully flies all 5 of their dragons out of their nests.

Element Interactions

Dragons born of different elements have different powers. Similar to rock paper scissors, a fire dragon might be extinguished by a water dragon when coming in contact. The chart below settles the score.

  • A wind dragon defeats a water dragon if they touch.
  • A fire dragon defeats an earth dragon if they touch.
  • A water dragon defeats a fire dragon if they touch.
  • An earth dragon defeats a wind dragon if they touch.


Possible Actions

  • Flick
  • Fly


How to Flick

Just like in shuffleboard, curling, or table football, you need to flick your dragon eggs into their nests. For a flick to be official, your finger and thumb need to be together at the start.

How to Fly

Before your dragons can fly, they need to hatch from their eggs. Hatching requires an entire round, so you cannot fly a dragon in the same turn it lands in the nest. Remember, dragon nests are not a safe place, so you don’t want to stay there too long; you might just get knocked off the nest.

The mechanics of flying are fun, but be careful. When you try flying, you need to place your dragon on a certified dragon launch pad (aka a fork). Then launch your dragon into the air. For a flight to be successful, you need to catch the dragon mid-flight, the only way to properly harness a dragon.


If a dragon is knocked off of the board or comes in contact with a dragon that defeats it, the dragon egg returns to your initial pile and can rejoin the battlefield on your next turn.


The first player to hatch all of their dragons and successfully harness them in flight wins!!

Good luck, young Dragon Master!



D&D Characters, Kenny Friedman

Question: How would you characterize the moments in this account in which stats are referenced or dice are rolled? What is happening in these moments? How do they differ from the rest of the account? How do they differ from each other (that is, how are the stats lookups different from the dice rolls)?

Stats and dice rolls serve as decision points that guide divergences in the otherwise linear storyline/gameplay. A stats lookup is a conscious decision about what to focus on, while a dice roll is random chance: the butterfly effect of the game.

Imagine what kind of person you’ve just created based on these attributes. What personality is created by combining these attributes? Do you know anyone in real life who matches this mix of characteristics? How would you describe someone like this to a friend? What jobs would they thrive in? What are some situations in which they’d be really out of place? Write a paragraph describing your character as if they were a real person. Pretend you’re describing a friend or professor of yours to someone you know.

Character 1 – Jake

Intelligence: 16

Wisdom: 14

Charisma: 12

Dexterity: 11

Constitution: 9

Strength: 6

Jake is sharp. He can quickly understand the context of a concept, conversation, or joke. He’s very good at seeing other people’s points of view. He has good taste and a respect for quality. He would be friends with the jocks in high school, but not be one of them. He listens to all types of music that aren’t “country.” He loves adventure, but is also set in his ways. He wants to explore the world, but he likes eating the same thing for lunch every day. He uses an iPhone.

Now move the numbers around and do this again. Try to create a character who’s very different from your first character without just being the exact inverse.

Character 2 – Brad

Intelligence: 11

Wisdom: 6

Charisma: 12

Dexterity: 14

Constitution: 16

Strength: 9

Brad is an ambitious guy. He wants to be well liked, and could be described as a “networking”-type, but he doesn’t like that label. He cares about other people’s opinions more than he should, and he knows it. He takes good care of himself, waking up at 6am to do his morning jog. He occasionally chooses friends based on their popularity. He doesn’t like school, but gets good grades, just like all of the other social people.

Select a class for your character. While considering options try imagining a person with your character’s attributes in each of the 12 classes. How would they have ended up doing that job? How would they make it work even if it might seem wildly inappropriate on the surface?

Jake is a sorcerer. He has some charisma, but what he really brings to the table is an understanding of others. He can predict what his friends and enemies will do by understanding their motivation. While there are other sorcerers that have more charisma and better bloodlines, Jake always surprises others with how well he can keep up.

Brad is a fighter. He fell into this position, and he doesn’t love it. However, it’s admired in the community, and Brad always fights his way to the top, so he is usually considered the top dog.

As you read through the race descriptions, try each one on for size. How would they fit with the character you’ve been building? What story would you make up to explain your character’s experience growing up within this culture? Maybe they were emigrants so they didn’t grow up amongst too many people of their kind. How would that change their attitudes to their race’s mainstream values? Would they romanticize them or be embarrassed of the traits that made them different from their surroundings?

Jake is a human. In a world filled with eccentric creatures, Jake comes across as quite plain at first. Only when you get to know him on more than a shallow level do you learn about his passions, ambitions, and plans for the future. Jake sometimes wishes he wasn’t just a human, but realizes that this gives him the ability to play the underdog. When he hits, no one will have seen it coming.

Brad is a dragonborn. As Mr. Popular, he is proud of being a dragonborn, and doesn’t really want to interact with half-orcs or high elves. Of course, he will be polite if he needs to be, but he’ll keep his distance so that others don’t notice his associations.

Now select an alignment for your character. As should be familiar by now, start by exploring each option and imagining how you’d incorporate it into the existing portrait you’ve been building for your character. What stories can you come up with to make them make sense? Are there any alignments that seem to match particularly well with your character’s attributes, class, and relationship to their race?

Jake is chaotic good. He is light hearted, fun, and passionate. He wants to help others, so he does his best to do what he thinks is right. But he has no respect for the status quo, and no respect for rules. He doesn’t care what other people see as right or wrong, because those people often lack the perspective to see the big picture.

Brad is true neutral. He does what he thinks other people will want him to do. If it’s popular, it doesn’t matter if it’s right or wrong. He cares about being liked above all else.

For each one that you pick, write down what you think their strength, charisma, wisdom, intelligence, dexterity, and constitutions scores are. What’s the closest class to what they do in real life? What race’s traditions or aesthetic matches them? What alignment are they?

Serena Williams

Intelligence: 11

Wisdom: 6

Charisma: 16

Dexterity: 9

Constitution: 12

Strength: 14

Martin Luther King, Jr.

Intelligence: 12

Wisdom: 14

Charisma: 16

Dexterity: 6

Constitution: 11

Strength: 9

Siri++ (Trick++ Documentation)

Here’s my documentation for Siri++, which was performed for the first time in class on March 9, 2015. The video of (most of) that performance is available here: In Class Siri++ Performance

The basics of the trick involve fake shuffling, a stacked deck, and my digital assistant (a pre-recorded audio track). This trick was fairly topical, since it involved references to the first trick I performed, the most recent Apple keynote, a recently released Netflix show, and MIT. But the trick is generalizable to nearly anything: any sleight-of-hand or algorithm-based card trick can be augmented with a “digital assistant.” In this instance, Siri was “working against me” and causing trouble, but you could also have a helpful version. Below is a video explanation, a practice video, the raw soundtrack file, and instructions for how to make your own Siri dialogue & script video.

A video based explanation and demonstration of my trick is available here:

A home practice of the trick (without the minor hiccup from the class performance) is available here:

The video that plays the audio clip, and shows me my lines during the trick is available here:

Since the stacked deck and fake shuffles are fairly ordinary, the remainder of this documentation will describe how I made the pre-recorded audio, as well as the raw script video. I used a Mac, and will be talking about specific Mac applications, but this could easily be done on Windows using analogous programs.


Macs can speak text aloud using the “say” terminal command. For example, opening Terminal and typing >> say "Hello World" will have the computer speak the phrase.

Instead of immediately speaking the phrase, the audio can be saved as an .aiff file using the -o flag. Next, instead of reading text typed into Terminal, say can read from a .txt file using the -f flag. Finally, Macs come with a few different built-in voices at different speeds. You can also download (for free) higher-quality versions of those voices (which simply take up more disk space). I found that the “Enhanced” version of “Samantha” was the closest to sounding like Siri on iOS devices. You can set this voice by going to System Preferences -> Dictation & Speech -> Text to Speech -> and then choosing “Customize” from the available options.

Therefore, I first created a script, and then made a text file with all of “Siri”’s spoken lines. I then set the system voice to Enhanced Samantha and ran the terminal command: >> say -f "script.txt" -o "result.aiff"
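The same command can also be assembled programmatically. This small Python helper is just a sketch: it uses only the say flags described above plus the documented -v voice flag, and the file names are placeholders. On a Mac, subprocess could then actually render the audio.

```python
import subprocess

def build_say_command(script_path, output_path, voice=None):
    """Build the argv for macOS's `say`: read a text file, write an AIFF file."""
    cmd = ["say", "-f", script_path, "-o", output_path]
    if voice:
        cmd += ["-v", voice]   # e.g. "Samantha"; otherwise the system voice is used
    return cmd

# On a Mac, this line would render script.txt to result.aiff:
# subprocess.run(build_say_command("script.txt", "result.aiff", voice="Samantha"))
```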

This produces an audio file of the script. However, there need to be pauses between lines to give the performer time to respond. There is no way to set timed pauses with the say command itself, so that has to be handled in a separate application.

For audio file editing, I used GarageBand (though Audacity or any other basic audio editor should work). There was no exact science to this part: I simply played one line of Siri, then hit mute (so I wouldn’t be interrupted) while I said my next line. When I finished my line, I paused GarageBand. The amount of time that had passed since Siri finished her line was the length of my line. I then split the audio clip and added silence for that amount of time. I repeated this process for every line in the script, then exported the result as an .mp3 file.


I then wrote out all of my lines on different slides of a Keynote presentation, and “recorded” the Keynote presentation while I played the audio file, which tracked the time I spent on each slide/line. I was then able to export that recorded slide show as a QuickTime file.

Finally, I combined the timed Keynote slides and the audio saved from GarageBand in iMovie, and exported the entire video. In class, I was simply able to play the video file: the audio was presented to the audience, and the visual script lines were presented to me.

Conclusion & Expansion

This trick worked really well. There was a slight hiccup from a mistake I made during the first performance, and there was some crackling from the speaker system. But the general concept works quite well.

There is definitely room for expansion. The obvious direction would be to make it nonlinear: either by having it actually respond to my voice (though, unfortunately, the best voice recognition software out there (Apple’s & Google’s) is not accessible to 3rd-party devs), or by making it dynamic, subtly changing what it will say based on a keypress on the keyboard or a button in an iPhone app. Card tricks where Siri reveals what card you had, and can do it for any card, not just one, would be very cool.

What other ideas do you have for a digital assistant?

Math & Magic: Week 1 Trick

Here’s my magic trick in progress. In true week-one fashion, I made a couple mistakes in the performance.

My entire idea is not completely fleshed out yet, but the gist is that I am trying to tell a story about the intersection of math and magic: how people who are bad at math think it’s all magic, and how magicians use math to deceive their audience. In it, I want to have cards do some math for us. So far, I have the number 26 appear (the age Newton was when he invented calculus), as well as the Fibonacci sequence.

The current way I’m revealing the Fibonacci sequence takes a ridiculous number of cards, which leaves very little room for fake shuffling, so I’ll probably have to figure out a better way to reveal them.
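To see why the card count blows up, assume each Fibonacci number is revealed by counting out that many cards (a simplification of my actual handling, made up for this sketch). The running total passes a full 52-card deck after only eight terms:

```python
def fib_card_total(n):
    """Total cards needed to deal out the first n Fibonacci numbers (1, 1, 2, ...)."""
    a, b, total = 1, 1, 0
    for _ in range(n):
        total += a        # counting out this term costs `a` cards
        a, b = b, a + b   # advance the sequence
    return total

# 1 + 1 + 2 + 3 + 5 + 8 + 13 + 21 = 54 cards for just eight terms
print(fib_card_total(8))
```

The total for n terms equals F(n+2) − 1, so it grows exponentially, which is why the reveal eats the whole deck so quickly.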


Kenny Friedman