Category Archives: Projects

Midterm Documentation

My midterm project was an extension of my Trick++, and used the same deck of NFC-tagged cards I created for that project. This time, I developed the trick to move the reveal away from my phone and onto a volunteer's – having the selected card appear on their smartphone instead of simply being shown on mine.


This was accomplished mostly through software development on my Android app. Since the application knows which card is selected long before the reveal, it isn't limited to just displaying the image of the card. For this midterm I added functionality to automatically text the card to any phone number, or send it as an email to any address.

On the technical side, this was not difficult to implement. Sending SMS is trivial within Android apps and required just a few lines of code. Email was only slightly more difficult, since it needs to authenticate with some sort of account; I decided to just use a Parse app and a free Mailgun account. I also tried to get the app to post to the group Slack, but since I'm a restricted user I don't have the right access privileges.
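For the curious, the SMS piece really is just a few lines. Here's a minimal sketch of the idea (not my exact code; the phone number and card name are placeholders for whatever the app has on hand):

    import android.telephony.SmsManager;

    // Text the selected card to the volunteer's phone.
    // Requires the SEND_SMS permission in AndroidManifest.xml.
    public void textCard(String phoneNumber, String cardName) {
        SmsManager sms = SmsManager.getDefault();
        sms.sendTextMessage(phoneNumber, null,
                "Your card was the " + cardName + "!", null, null);
    }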


The code for the app is available here; it's a little better organized than the version from the Trick++. The app has also picked up some other features, such as card history/deck tracking, just in case I decide to integrate those into a performance some day as well.

In review: although the in-class performance didn't quite work out (due to 1) my phone's WiFi being off and 2) T-Mobile's poor data coverage in the depths of the Media Lab), I think this trick was an intuitive and natural extension of what I had already built. I wish I had more time to develop more export destinations (Slack, Facebook, Twitter… anything!), but I think the trick played well as is. This midterm will probably also wrap up the career of my NFC deck, since I have a few separate ideas I'd like to pursue for my final.

So long, Magic Deck. You’ve served me well.

 

Midterm Trick Documentation (Kenny Friedman)

Midterm Teaser

In Class Trick Performance

Here is a video of the in-class performance. Unfortunately, the iPhone 5s was not playing its video, which both took away from the overall effect and distracted me during the second half of the performance. Check back soon for a staged (but better!) performance of the trick.

Three Videos At Once

Here is a video that shows all three of the clips playing simultaneously. The top clip is the video displayed on the projector (by means of an iPhone 4S), the bottom left video is played on the iPad, and the bottom right video is played on the iPhone 5s.

Main Idea


The main idea was to tell the story of the computer using multiple devices: the projector, an iPad, an iPhone, and some physical devices.

The inspiration for this trick was Marco Tempest's iPod TED Talk trick. I love the way he augments his storytelling with "tricks" spread throughout. I wanted to build on this idea by using vertical screens (instead of screens lying flat on a table), as well as screens of multiple sizes. I tried multiple approaches (see "Expanding on the Idea") to get this effect. After trying many methods, Marco's MultiVid software ended up being the best way to perform the trick.

How It Was Done

[Diagram of the system setup]

Here is a diagram of the system in place. The trick uses four videos playing at the same time: one for the iPad, one for the iPhone, one for the projector (which is projected from a second iPhone), and a fourth "teleprompter" video that only the magician can see (so they know where they are in the trick). All four videos are coordinated from a MacBook, and all five devices run MultiVid (available for free online).

I purchased an iPad case that lets you hold the iPad with your palm, so my hand would not get in the way of the screen. I could not find a similar product for the iPhone; I tried to make a "handle" for the back of the iPhone, which failed miserably. I also used a 30-pin-to-VGA adapter to send the iPhone's video to the projector.

The videos were all created using a 30-day free trial of Final Cut Pro X [FCPX] (some of the animations and "slides" were created in Keynote.app and exported as QuickTime files for use in FCPX). All of the audio across the three video displays the audience saw was pushed to a single device (the projector-connected iPhone), both to get the audio through the room's speaker system and to prevent slightly out-of-sync audio (which is more obvious than out-of-sync video).

Audio: The audio was created in the same way as in my previous trick (see: Siri++), except that I augmented Siri's voice with additional sounds and music. The music included instrumentals of The Beatles & Bob Dylan (Jobs's favorite bands… but mine too, so there's not too much of a connection there 😉 ). Most of the sound effects came from the default iLife sound effects (which come with GarageBand.app or iMovie.app). All audio editing was done within FCPX.

Video: After some failed attempts to get After Effects up and running (I hope to have it figured out by the final presentation), I defaulted to Keynote animations and the FCPX free trial.

Expanding on the Idea

I would love to get interactive elements working on multiple screens; however, MultiVid only supports video. It would be great to write software that makes interactivity work across screens. (I am going to try to play around with this, but for it to work, I need to figure out how to do it in a way that doesn't take as long as a PhD thesis [see: THAW].) A rough sketch of the core synchronization idea appears below.
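To give a sense of what such software would involve, here is a hypothetical sketch in Java (I don't know how MultiVid actually works) of the core synchronization step: one machine broadcasts a cue telling every device to start its local clip at a shared wall-clock time, so nothing heavy ever crosses the network.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Broadcast a "start clip X at time T" cue to the local network.
    // Each device plays its own local video file, so only the tiny cue
    // message crosses the network. Assumes device clocks are roughly
    // synchronized (e.g., via NTP).
    public class CueBroadcaster {
        public static void main(String[] args) throws Exception {
            long startMillis = System.currentTimeMillis() + 3000; // start in 3 seconds
            byte[] cue = ("PLAY clip1 " + startMillis).getBytes(StandardCharsets.UTF_8);
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setBroadcast(true);
                socket.send(new DatagramPacket(cue, cue.length,
                        InetAddress.getByName("255.255.255.255"), 5005));
            }
        }
    }

Interactivity could then ride on the same channel: a tap on one device broadcasts an event, and every other device reacts to it.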

Also, after working on the trick, I see why Marco did it on a flat surface: if you want to do the trick vertically, you are limited by what you can hold. I will have to see how this can be improved.

“It’s Showtime, Girls”: A “Magical Girl” Transformation Made Real through the Intersections of Magic and Technology

Kyrie Caldwell, CMS Grad Student

My Trick++ somewhat unintentionally ended up being design work on a more ambitious, larger trick that will become my semester-long project in stages. With little previous design experience, this is a large undertaking and will require plenty of iteration and catch-up methodological work, so please bear with me!

Research Question
The initial idea came through my interest in and exploration of anime's and video games' representations of women, as well as the cosplay communities around these two media forms. Cosplay is a portmanteau of "costume" and "play," capturing both the fashion-design and performance elements of the practice, which involves creating or purchasing and then wearing outfits and accessories in order to emulate favorite characters, usually from anime or games (digital or tabletop). Knowing of the magical girl anime genre and its leaks into games, I decided to explore the "transformation" sequences that appear ubiquitously in the genre.
How does one incorporate technological enhancements into magic, illusionist, and mentalist performances while still retaining the sought-after sense of wonder, often built through the showmanship and preparatory work of the performer? Alternatively but similarly, how does one incorporate the work of such magic performances into the front-end development, demonstration, and use of technology, such as via user interfaces and user experiences?
Also, how might the socio-cultural implications of the “magical girl” genre translate into real-world performances? How would this contribute to cosplay performances, namely through practitioners’ experimentation with media representations of women and through “D.I.Y.” approaches to cosplay and the status of such creator-performers in cosplay communities? These questions are somewhat out of the scope of the current project but could lead to rich research using the results of the design work represented here.

Trend Research
Although deeper research into the "magical girl" genre in media and in cosplay performance again lies outside of the current project's scope, an understanding of the aesthetics of the genre is critical here. Popular representatives of the genre include Sailor Moon and Prétear in anime and Final Fantasy X-2 in video games. In Sailor Moon and Final Fantasy X-2, the action centers around an all-female main cast; in Prétear, the main cast has one female character and a group of male characters (a version of the "harem" genre, in which a character is surrounded by characters of the other gender, with sexual/romantic antics ensuing). In Sailor Moon and Prétear, the transformation sequences mark a shift from an otherwise normal girl to one with spectacular (literally, in the sense of the spectacle) powers; in Prétear and Final Fantasy X-2, these transformations also depend on the powers that will be gained, resulting in different outfits for different sets of skills and abilities.

Overall, these transformations take place in a usually unnamed realm that is separate from the main setting, shown through a lack of or abstraction (moving colored light fields) of the background. The character who is transforming is often outlined as a body without clothes levitating in space as those clothes transition, either in one bright flash of light, or materializing in stages. Then the character appears again in the main setting, now in a different and usually much more elaborate outfit, complete with the new powers afforded by it. 

Trend research here also includes magic performances, such as the levitation and quick-change sequences that will be included in the current project. There are several ways to approach each illusion, but they are based around similar ideas. For levitation, the key is to control where the audience is looking so that very simple optical illusions become the audience's only way of seeing the performance; that is, if a mirror is used, making sure that the audience sees the mirror not as a reflection but as a part of the flat visual landscape, which requires either a symmetrical room or a setting with visual ambiguity (such as plants or other objects that obscure where shapes begin and end). Other methods use the expectation that shoes will accompany feet, even when the latter have discreetly exited from and are moving independently of the former. For quick changes, there is almost always an obscuring element (e.g., a sheet, a burst of confetti, a flash of light) at the moment of the change. The speed of the change is usually enabled by specially made clothes that look like normal garments but have a quick-release mechanism, such as snaps.

Brainstorming
Brainstorming the performance was an exercise in thinking about what has been done and could be done through researching trends as outlined above. Each part of the performance (aesthetics, quick change, levitation, and the material/technological components needed for each) was considered as its own element. I generated ideas around each element, eliminating ones that seemed beyond my capacity in terms of technical/physical skills and temporal/financial constraints.

Concept
The working concept is a physically performed sequence, set to sound and visual effects, taking place in a reasonably symmetrical room with lights that can be dimmed remotely. The performer and audience are situated across from each other, oriented to maximize symmetry between the audience's and performer's halves of the room. The performer theatrically discusses what she is about to perform, such as needing to quickly get ready for a special performance, either theatrical or in efforts against a vague evil. The pre-programmed audiovisual sequence then begins: the lights dim, a projector behind the performer casts an animated field of colored light, and a coordinated sound effects track plays. The performer mounts the mirror levitation as a Microsoft Kinect tracks her body, projecting visual noise (e.g., shimmering light) onto her. Meanwhile, the performer carefully manipulates the quick-change clothes as much as possible without showing this to the audience. The light from the projector and Kinect and the sound effects crescendo, culminating in a flash of overhead light, during which the quick change is performed and the mirror dismounted. Then the ambient lighting returns to its dim state, and the projector and Kinect effects fade away as the audience's eyes readjust to the performer's silhouette, now in a different costume.

Storyboard/Flowchart and Enabling Technology

[Photo: storyboard/flowchart sketch]

Timeline
Considering my inexperience with this kind of production, my timeline will be somewhat more extended than might otherwise be required. The initial design work has taken two weeks, and perfecting each part of the performance is expected to need a similar timeframe. I am estimating two to three weeks to practice the levitation and quick-change maneuvers, another two to three weeks for editing the audiovisual effects sequence, and a final two to three weeks for putting all the elements together into one performance.

Revision
I expect the design to shift as the prototyping and storyboarding are attempted in real time/space. The current design accounts as thoroughly as possible for all constraints, but iteration is likely and will be recorded as the production is realized.

Included below is the slide deck for the first presentation of my design work. Videos and GIF files are rendered here as static images.

[Trick++ slides 1–7]

Trick++ documentation: Faux Mentalism

My trick was a sleight-of-hand routine involving a card force and a false mentalism component. The effect is to divine a card chosen by a spectator, seemingly by reading microexpressions on their face.

The trick has multiple sleight-of-hand components.

First, you need to shuffle the deck in such a way that you know what the bottom card is. This is done by riffling through half the deck with the same motion used to cut the cards prior to a riffle shuffle. However, when half the cards have been riffled from your right hand to your left, look at the card on the bottom of the right-hand half. Then, shuffle the cards, taking care to release cards from your right hand first, keeping the card you've seen on the bottom.

Further shuffles can be done to enhance the misdirection. It is possible to do an overhand shuffle that moves the bottom card of the deck to the top, which can then be reversed, keeping the same card on the bottom. To do this, start with a normal overhand shuffle, but make sure that you end up with only one card left (the bottom card), which you then place on top of the shuffled deck. The second shuffle should start with you pulling off only the top card with your thumb, then shuffling the rest of the deck normally. Thus the bottom card stays on the bottom.

During the entire shuffling routine, make consistent eye contact with the audience and discuss the mechanics of reading microexpressions. This will distract their attention from the giveaways in the shuffles (looking at the bottom card during the riffle shuffle, pulling off single cards during the overhand), which is very important. I didn't do a great job at the banter.

[Video: the shuffling routine]

You now have a deck with a known card on the bottom.  The next step is forcing this card on the spectator while giving the illusion of free choice.  There are a variety of forces, and I used one that is particularly easy to learn and has good angles.  It is complicated to describe in words, so I will simply post a video of me performing the move.

[Video: performing the force]

 

Now you’ve forced the card on your spectator, who believes they have chosen a card randomly.  Go ahead and give them the card face down and set the rest of the cards aside.  Now is a good time to shift the tone of the trick to one of quiet focus.  Tell the spectator to visualize their card, imagine it turning over and revealing itself, and to clear their mind of everything else.  Then step through the possible card values and suits, feigning intense focus and mental strain.  To enhance the illusion, I pretended to get a poor reading the first time and repeated the values twice.

Finally, you can dramatically reveal the selected card.

 

Trick++ Documentation: Magic Deck

Here’s the documentation for my Trick++ last Monday, which was constructed around a deck that had NFC chips embedded in all of the cards.

Inspiration for this trick came from several examples of computer vision/card recognition tricks I came across online, which create an interesting performance dynamic: it's only the computer assistant – not the magician – who knows what the chosen card is. Since I was already toying with NFC communication for a separate personal project, extending it to magic tricks was a natural step. NFC is obscure and new enough that most people aren't aware of it (much less anticipate it), unlike cameras or microphones, which we've been trained to assume are "everywhere". NFC was the perfect combination of secrecy and subtlety.

Actually constructing an NFC-tagged deck was more work than I anticipated. RFID decks are available online, but cost ridiculous amounts (>$100), so I was forced to build my own. I purchased a pack of 75 NFC stickers from Amazon for $40 (25-pack available here), which was still more than I'd hoped, but much more reasonable. The tags (including the plastic sticker casing) were 12x19mm and just 0.157mm thick (cards, for comparison, are just over 0.300mm), and could store 144 bytes. I found an old, low-quality deck that I wouldn't mind losing if I messed up, and that was made of paper stock, not plastic. Then I got a glue stick, an X-Acto knife, needle-nose pliers, a lamp, and a long Sunday afternoon.

The process wasn't bad once I had practiced on a few Jokers (may they rest in peace). Paper-stock cards are easy enough to split open from the corners, and I didn't need to cut very deep to slide the chip in with the pliers. I also alternated corners, to keep the thickness increase in the final deck as small as possible. After the tag was in, I used a toothpick to "paint" the inside with glue, and then flattened out the card and put it under a heavy stack of books to set. After iterating through all 52 (miraculously without any card fatalities), I went through them again with the knife to trim any glue left on the edges. The final deck was still slightly thicker than a normal deck, so I omitted six cards when performing so that it appeared the same height. Here's the final result:

[Photo: the finished deck]

In any sort of ordinary handling, the chips are effectively invisible. If you examine each corner very closely (and get the light to catch just right), you can notice warping, although I think that is more a result of the glue than of the chip.

[Photo: close-up of a tagged corner]

The deck itself was only half of the trick. On the software side, I used a generic NFC reader/writer app from the Google Play Store to write each card's name to its tag. The format I used was a two-character text string, like "4h" for the Four of Hearts or "td" for the Ten of Diamonds.
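If you'd rather skip the third-party app, the write itself is short on Android. Here's a sketch of the approach (assuming a Tag object already delivered by NFC discovery; NdefRecord.createTextRecord needs Android 5.0+):

    import android.nfc.NdefMessage;
    import android.nfc.NdefRecord;
    import android.nfc.Tag;
    import android.nfc.tech.Ndef;

    // Write a two-character card code (e.g., "4h") to a tag.
    // The Tag comes from the NFC discovery intent.
    public void writeCardCode(Tag tag, String code) throws Exception {
        NdefMessage message = new NdefMessage(NdefRecord.createTextRecord("en", code));
        Ndef ndef = Ndef.get(tag);
        ndef.connect();
        ndef.writeNdefMessage(message);
        ndef.close();
    }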

Once the cards were all encoded, I built an Android app that automatically launches itself upon contact with any of those cards and displays the appropriate card image. The app filters for the ACTION_NDEF_DISCOVERED intent and for text-only tags, and will launch whenever the phone comes in contact with one, regardless of whether the app is open (thanks, Android!). The code was extremely hacky, relied heavily on examples here and here, and I'm deeply embarrassed for ever having written it, but it is available here if anyone would like to use it for themselves. Otherwise you can get the .apk here.
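The auto-launch comes from declaring an ACTION_NDEF_DISCOVERED intent filter (with a text/plain MIME type) in the manifest; the activity then pulls the card code out of the launching intent. Roughly, and simplified from what the real app does:

    import android.content.Intent;
    import android.nfc.NdefMessage;
    import android.nfc.NfcAdapter;
    import android.os.Parcelable;
    import java.nio.charset.StandardCharsets;

    // Extract the card code from the intent that launched the activity.
    private String readCardCode(Intent intent) {
        Parcelable[] raw = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
        if (raw == null) return null;
        NdefMessage message = (NdefMessage) raw[0];
        byte[] payload = message.getRecords()[0].getPayload();
        // A text record's payload starts with a status byte whose low six
        // bits give the language-code length ("en" = 2), then the text.
        int langLength = payload[0] & 0x3F;
        return new String(payload, 1 + langLength,
                payload.length - 1 - langLength, StandardCharsets.UTF_8);
    }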

Here’s a short video of the app in action:

Having created the mechanics of the trick, I then needed to construct a performance around it. I felt like I really under-utilized the potential here – the deck opened up so many possibilities that I hardly knew where to start to properly take advantage of its strengths.

I practiced a few variations in which I would have the volunteer pick a card, which I would wave around and casually bring near my phone in my pocket. This was rather awkward, since a pocket is not a natural place to leave your hand hanging, especially when it's holding something so important. Also, "checking" my phone later on aroused a lot of suspicion.

I eventually settled on a routine built around me leaving my phone face-down on the table, then slapping the card down on top of it. Forcefully slapping things on tables seems to be a common theme in a lot of magic tricks, so it didn’t arouse too much suspicion in my practice runs – most people thought it was just showmanship, not a critical part of the trick itself. Here’s a video of one of my practice runs:

But I couldn't just let a volunteer pick a card and then immediately slap-reveal it on the phone – that would be far too obvious and, much worse, far too short for a trick like this. Instead, I decided to purposefully obfuscate the performance: start with a complex, unimpressive version that would leave the audience with several theories. My original plan was to perform the same trick three times, with each repeat explicitly whittling down the possible theories until none remained. In class, on a whim (that I regret), I decided to perform it only twice. Here's the final presentation, complete with spur-of-the-moment patter that tried to "explain" why I was slapping the card down on the phone:

… “Now… It seems like some of you think this is just a magic trick. And your suspicions are well-founded – I mean, I got to see the card myself, and I could have interacted with the phone in all sorts of ways. But to show you that this is, in fact, a legitimate scientific experiment, I’m going to do the trick again…”

I need to say “um” less often.

I’ll probably stick to my NFC deck for at least the midterm project, and am really searching for ideas on how to expand the presentation. How can I take full advantage of this technology?

What would you do with a magic deck?

Siri++ (Trick++ Documentation)

Here’s my documentation for Siri++, which was performed for the first time in class on March 9, 2015. The video of (most of) that performance is available here: In Class Siri++ Performance

The basics of the trick involve fake shuffling, a stacked deck, and my digital assistant (a pre-recorded audio track). This trick was fairly topical, since it referenced the first trick I performed, the most recent Apple keynote, a recently released Netflix show, and MIT. But the approach generalizes to nearly anything: any sleight-of-hand or algorithm-based card trick can be augmented with a "digital assistant". In this instance, Siri was "working against me" and causing trouble, but you could also have a helpful version. Below are a video explanation, a practice video, the raw soundtrack file, and instructions for how to make your own Siri dialogue & script video.

A video based explanation and demonstration of my trick is available here:

A home practice of the trick (without the minor hiccup from the class performance) is available here:

The video that plays the audio clip, and shows me my lines during the trick is available here:

Since the stacked deck and fake shuffles are fairly ordinary, the remainder of this documentation will describe how I made the pre-recorded audio, as well as the raw script video. I used a Mac, and will be talking about specific Mac applications, but this could easily be done on Windows using analogous programs.

Audio

Macs can speak text aloud using the "say" terminal command. For example, opening Terminal.app and typing:

    say "Hello World"

will have the computer speak the phrase.

Instead of immediately speaking the phrase, the audio can be saved as an .aiff file using the -o flag. Next, instead of reading text typed into Terminal, say can read from a .txt file using the -f flag. Finally, Macs come with a few different built-in voices at different speeds, and you can download (for free) higher-quality versions of those voices (which simply take up more disk space). I found that the "Enhanced" version of "Samantha" was the closest to sounding like Siri on iOS devices. You can set this voice by going to System Preferences.app -> Dictation & Speech -> Text to Speech -> and then choosing "Customize" from the available options.

Therefore, I first created a script, and then made a text file with all of "Siri"'s spoken lines. I then set the system voice to Enhanced Samantha and ran the terminal command:

    say -f "script.txt" -o "result.aiff"

This produces an audio file of the script. However, there need to be pauses between lines to give the performer time to respond. I didn't find a way to insert timed pauses from the say command itself, so I handled that in a separate application.

For audio file editing, I used GarageBand (though Audacity or any other basic audio editor should work). There was no exact science to this part: I played one line of Siri, then hit mute (so I wouldn't be interrupted) while I said my next line. When I finished my line, I paused GarageBand. The amount of time that had passed since Siri finished her line was the amount of time my line needed. I then split the audio clip and inserted that much silence. I repeated this process for every line in the script, then exported the result as an .mp3 file.

Video

I then wrote out all of my lines on separate slides of a Keynote presentation. Next, I "recorded" the Keynote presentation while playing the audio file, which captured the time I spent on each slide/line. I was then able to export that recorded slideshow as a QuickTime file.

Finally, I combined the timed Keynote slides and the audio saved from GarageBand in iMovie, and exported the entire video. In class, I simply played the video file: the audio was presented to the audience, and the visual script lines were presented to me.

Conclusion & Expansion

This trick worked really well. There was a slight hiccup from a mistake I made during the first performance, and there was some crackling from the speaker system, but the general concept works quite well.

There is definitely room for expansion. The obvious direction would be to make it nonlinear: either by having it actually respond to my voice (though unfortunately, the best voice recognition software out there (Apple's and Google's) is not accessible to third-party devs), or by making it dynamic, subtly changing what it says based on a keypress on the keyboard or a button in an iPhone app. Card tricks where Siri could reveal your chosen card, whatever it is rather than a single pre-arranged one, would be very cool. A bare-bones sketch of the keypress idea appears below.
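For the keypress variation, one could pre-render several "say" clips and map each to a key. Here's a hypothetical sketch in Java using the standard sound API (the .aiff filenames are made-up placeholders for clips generated with the say command):

    import java.awt.event.KeyAdapter;
    import java.awt.event.KeyEvent;
    import java.io.File;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.Clip;
    import javax.swing.JFrame;

    // Play a different pre-recorded "Siri" line depending on which key
    // the performer secretly presses.
    public class DynamicSiri {
        static void play(String file) throws Exception {
            Clip clip = AudioSystem.getClip();
            clip.open(AudioSystem.getAudioInputStream(new File(file)));
            clip.start();
        }

        public static void main(String[] args) {
            JFrame frame = new JFrame("Siri++");
            frame.addKeyListener(new KeyAdapter() {
                @Override public void keyPressed(KeyEvent e) {
                    try {
                        if (e.getKeyChar() == 'h') play("siri_hearts.aiff");
                        if (e.getKeyChar() == 's') play("siri_spades.aiff");
                    } catch (Exception ex) { ex.printStackTrace(); }
                }
            });
            frame.setSize(200, 200);
            frame.setVisible(true);
        }
    }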

What other ideas do you have for a digital assistant?