Elke Reinhuber, Hannes Rall & Sebastian Pelz – Layered Images: The Desire to See More Than the Obvious

1. Introduction

As a general observation, audio-visual input has increased exponentially since the appearance of moving images: the static camera observing an action, for instance in Méliès’ pioneering works, escalated to faster and faster cuts, as pioneered in Dziga Vertov’s Man with a Movie Camera, concurrent with Abel Gance’s approach of spreading the images across more than one screen.

Audio added another layer of information, emotion or sometimes even distraction. The screens became bigger and more immersive, from CinemaScope to Omnimax, until moving images became ubiquitous, omnipresent and shrank down to pocket- or even watch-sized. Fast-forward and repeat options add new ways of watching films in a non-linear manner, which were not possible for movie-goers a century ago. As James Monaco observed: “Films are a lot more like books, now” (Monaco 2000, p. 13).

We can even see how the screen now enhances our real world through augmented reality and location-based services. And not only for our physical existence: the information of our fictitious onscreen reality does not suffice anymore either. The modern multitasking human being is able to cope with more information simultaneously (Baxter 2013) and even seems to desire and demand it: from the capturing of our real world to suggestive editing, as proposed by Kuleshov (Mobbs et al. 2006), this has extended to an enhanced flow of information on multiple levels, which can provide insight into what is not visible to the naked eye.

In the following, we explore the occurrence of these additional layers, the ways by which they are added, and we suggest the purpose and future of these methods.

The interest in this field is based on our own works with different approaches. Elke Reinhuber introduces the additional layer as a counterfactual element, a ‘what if’ or alternative reality; whereas Hans-Martin Rall enhances the spoken words suggestively, in a manner redundant to the visual, to increase the level of entertainment in the documentary.

2. The second layer and its manifestations

Not only are our artistic works enhanced through multiple layers of information, visually and verbally: we have observed a tendency in that direction in fiction, documentaries or advertising and are currently researching the occurrence and possible effect of multi-layered movies and videos.

2.1 Appearance of the additional layer

In this section, we introduce our observations with a few examples of how visual additions to our environment were employed in media and our viewing habits were progressively changed.

Modern cities are dominated by messages and advertisements. Printed posters, billboards and the advertising column – specifically invented for the purpose – announce anything from cultural events to detergents in public. The idea of the future in the imagination of film creators suggested enormous customised video walls, such as in Blade Runner (Scott 1982). This vision has already come true in global cities, which glimmer in the flickering video light from commercials, film clips and news flashes. Projection mapping can be seen as a special case to modify and transform buildings or sights. And even graffiti could be recognised as a supplement to urban architectural structures, particularly when it has been commissioned.

Second screen

The demand for more information becomes obvious in the usage of the second screen. The beginnings of this can be found in the early days of television: on election day, for example, while the endless counting of votes was transmitted for several hours, another television set might have been brought over from next door to view collectively with the neighbours; with an entertainment programme on the other channel, the waiting time for the final results of the election could be shortened while staying up-to-date. Nowadays, ubiquitous mobile devices enable constant media multitasking while watching TV or working in front of a screen (Turner 2013). On-task activities, such as gathering additional information, taking notes or accessing related content, support the main pursuit, whereas off-task activities are often related to engagement in social networks, reducing media consumption to mere background adornment.

To date, there are only a few examples of content produced thoughtfully for the second screen. The thriller App (Boermans 2013) was the first feature film created for a second-screen experience. A young student becomes addicted to her smartphone after she downloads an application; this app starts to terrorise her and takes over the protagonist’s life. At the same time, the audience in the movie theatre is encouraged to use their mobile devices and explore additional content which is provided simultaneously with the onscreen action (Blake 2017).

One technique to synchronise the main and the second screen is called automatic content recognition, ACR. It started with the audio identification tool Shazam, which identified music and retrieved information such as title and artist on a mobile device. The fingerprinting technology was later extended to send messages to the audience when the app recognised a sound inaudible to the human ear. With this background, the software is now able to enhance the content for a second screen in real time, besides providing basic information instantly.
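To illustrate the general principle behind such fingerprinting, the following is a minimal, self-contained sketch in Python (using only NumPy). It is an illustrative assumption of how peak-pair hashing and offset voting can work, not Shazam’s or any broadcaster’s actual ACR implementation; all function names, parameters and the synthetic test signals are hypothetical.

# Minimal, illustrative audio-fingerprinting sketch (not Shazam's actual algorithm).
# Idea: hash pairs of dominant spectral peaks, then match a short query clip against
# a database of reference tracks by counting time-aligned hash hits.
import numpy as np

def spectral_peaks(signal, frame=1024, hop=512):
    """Return the dominant frequency bin for each analysis frame."""
    peaks = []
    for start in range(0, len(signal) - frame, hop):
        window = signal[start:start + frame] * np.hanning(frame)
        spectrum = np.abs(np.fft.rfft(window))
        peaks.append(int(np.argmax(spectrum)))
    return peaks

def fingerprints(peaks, fan_out=3):
    """Yield (hash, frame_index) pairs built from nearby peak pairs."""
    for i, p1 in enumerate(peaks):
        for j in range(1, fan_out + 1):
            if i + j < len(peaks):
                yield (p1, peaks[i + j], j), i   # hash = (freq bin 1, freq bin 2, frame gap)

def build_index(tracks):
    """tracks: dict of name -> 1-D numpy signal. Returns hash -> list of (name, frame)."""
    index = {}
    for name, signal in tracks.items():
        for h, t in fingerprints(spectral_peaks(signal)):
            index.setdefault(h, []).append((name, t))
    return index

def identify(query, index):
    """Return the reference track whose hashes best align in time with the query clip."""
    votes = {}
    for h, t_query in fingerprints(spectral_peaks(query)):
        for name, t_ref in index.get(h, []):
            key = (name, t_ref - t_query)        # a consistent offset signals a true match
            votes[key] = votes.get(key, 0) + 1
    if not votes:
        return None
    (name, _offset), _count = max(votes.items(), key=lambda kv: kv[1])
    return name

if __name__ == "__main__":
    sr = 8000                                     # sample rate for the synthetic test signals
    t = np.arange(sr * 5) / sr
    tracks = {"song_a": np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t),
              "song_b": np.sin(2 * np.pi * 330 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)}
    index = build_index(tracks)
    query = tracks["song_a"][sr:3 * sr]           # a two-second excerpt of "song_a"
    print(identify(query, index))                 # expected output: song_a

The decisive element is the consistent time offset between matching hashes: only a genuine match produces many votes for one and the same offset, which is what allows second-screen content to be triggered at the right moment.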

The major studios, such as Disney or Warner Brothers, connect content on the main and the second screens, but have not yet fully integrated the audience and their interaction with an additional device into the narrative (Orange Transmedia Lab 2013). In most cases, supplementary apps include the features familiar from the ‘Extras’ on DVDs: artwork, behind-the-scenes material, bonus material, quizzes; but now they may also comprise opinions and comments from social networks.

 PiP and Teletext

The next level after the early double-television set-up was reached in the 1980s. In order not to miss what was happening on the other channel at the same time, Picture in Picture (PiP) and Teletext were introduced. A small window on the main screen could simultaneously display a car race while a ‘soap’ ran on the big screen. Both transmissions could easily be swapped on demand to enlarge a car crash or other exciting moment while minimising the boring scenes of the serialised suburban drama. Only last year, PiP technology experienced a revival through its official implementation on Android mobile phones (Android Developers 2018): a fine alternative to using two mobile devices simultaneously.

Teletext added a plain layer of additional information to the analogue broadcast signal. Whether related to the programme being shown or totally independent of the content, the character-based text pages delivered current news and programming schedules to the viewer, a crude but effective individual data-retrieval system.

Another method of text-based overlays appeared during the 1970s, mainly in live sports transmissions: an informative layer indicating goals, the running time, the names of players, etc. This method has become more and more advanced, initially relying on dedicated hardware, allowing the analysis of certain moves or strategic areas on the turf and developing into the customised touchline advertising of recent years.

     Augmented Reality, AR

The additional, informative layer over our corporeal world has proven useful for many years, particularly for technical applications, and is already well established in mechanical engineering to visualise assembly instructions or circuit diagrams. In engineering, the actual subject is overlaid with information which otherwise could only be accessed in a manual. In medicine, images which would otherwise need to be viewed on a light table next to the patient can be superimposed. The surgeon is thus able to view the patient’s body through an imaging layer not visible to the human eye, almost as if with X-ray eyes.

Although augmented reality is nothing new, it only reached the masses in recent years. Early examples such as the Metro Paris Subway app from 2009 (liamc 2009), which allowed the user to spot the closest subway station while in Paris, provided only a glimpse of what is possible today.

As another representative case, Immonet’s informative mobile AR app, used to spot vacant apartments in German cities, did not achieve worldwide reach after its launch in 2010 (Mehr 2010) because of deficient geocoding, limited internet accessibility and privacy issues (Lang et al. 2011).

Entertaining applications and games such as Pokémon Go recently popularised the augmentation of our physical world with imaginary pocket monsters, while the introduction of Google Glass was highly anticipated – however, it has not proved satisfactory to date. A pair of glasses functioning as an optical head-mounted display, OHMD, would in theory allow mobility while perceiving multiple realities – educative, entertaining or informative.

Head-up displays, or HUDs, employ a similar technique: text and graphics are presented on a pane of glass, most commonly a windscreen, to allow pilots, drivers and other users to stay attentive to their environment while seeing important instructions.

This futuristic ingenuity available in luxury automobiles for the last decade is now accessible for the common compact car and provides useful information, so that drivers do not need to take their eyes off the road anymore.

     Multi-channel installations

The display of multiple screens in media arts has also become increasingly popular in the past decade. While the first extensive occurrence dates back to the World Expo in Montréal in 1967, digital technology has facilitated the multiplication of screens and projectors. Art exhibitions all around the world increasingly feature productions with more than one screen or projections to create an immersive, embodied and overwhelming experience for visitors. For instance, Isaac Julien’s engaging 9-channel multi-screen installation Ten Thousand Waves (Julien 2010) has been screened in numerous museums around the world and has left a lasting impression on the audience. The visitor almost becomes the editor of the piece by choosing the viewing position and angle, as well as by composing the sequence of screens to look at.

     Layering in movies

Although they could rather be seen as technical enhancements to a fictional reality, we also include additional layers in theatre and movie productions. As one of the oldest techniques in theatres, the Pepper’s Ghost effect has succeeded in tricking audiences with optical illusions from the 19th century onwards: actors appear as ghosts on the stage, as a reflection from an adjacent illuminated room. A similar technique, refined by master cameraman Eugen Schüfftan, was applied in Fritz Lang’s Metropolis to combine the futuristic cityscape model with the actors. With this clever technique, a predecessor of the travelling matte and green screen processes, the visual overlay replaces the stage on which the actors move (Pinteau 2005).

This method can also be implemented in Virtual Showcases (Bimber et al. 2001) in museums: videos and informative graphics appear on precious artefacts; these, however, are placed behind a glass screen tilted at a 45° angle.

Obviously, these general developments led to a rising usage of multiple layers of recorded images, moving or still. Nowadays, as almost everyone owns some sort of mobile device, phone, tablet or portable computer, looking at more than one screen has become much easier. Based on our own observations, we propose multiple applications while watching movies: staying informed with general news (e.g. the above-mentioned elections), acquiring additional information for the feature on the main screen, watching related content on the second screen, sharing and communicating on the social network – or just finishing the daily duties on the laptop.

2.2 Possible reasons for increasing tendency

At this stage, we can only speculate about the reasons for an increasing tendency to apply multiple layers in feature films. However, it is a field which invites further exploration, ideally in collaboration with directors, psychologists and audience members from different generations.

     Changing perception

In our everyday lives, and particularly in our experience as educators, we observe a growing desire for more visual input among the younger generation.

Moving images are a form of communication which is taken for granted today. We have come a long way from the invention of moving images, when an approaching train in black-and-white, shot with a static camera in one take, managed to excite the audience.

The fixed, or motionless, camera, which captured a staged action similar to a theatre production, advanced to meticulously choreographed camera movements for extraordinarily long takes. Specialised motion-control cranes and well-trained Steadicam operators have achieved breath-taking shots, as in Kill Bill (Tarantino 2003) or Russian Ark (Sokurov 2002). On the other hand, the average shot length (ASL) of films has become shorter and shorter (Salt 2009a), clearly facilitated by digital editing techniques. An average shot length of two seconds is commonplace, especially for action movies such as Mad Max: Fury Road (Miller 2015) [ASL 2.1]. However, even in the romantic drama Moulin Rouge! (Luhrmann 2001), the ASL was below 2 seconds [ASL 1.9]. For music videos the ASL may even be less than one second [e.g. Jay Z: 99 Problems (Romanek 2004), with ASL 0.7].
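For readers unfamiliar with the metric, the ASL values quoted here follow from a simple calculation: the running time of a film (or sequence) divided by its number of shots. The following minimal Python sketch, with purely hypothetical cut timestamps rather than data from any of the films mentioned, illustrates the arithmetic.

# Minimal sketch: average shot length (ASL) = running time / number of shots.
# The timestamps below are hypothetical and not measured from any actual film.
cut_times = [0.0, 1.8, 3.1, 5.6, 7.0, 9.4]   # seconds at which each shot begins
film_end = 11.0                               # total running time in seconds

shot_lengths = [end - start
                for start, end in zip(cut_times, cut_times[1:] + [film_end])]
asl = sum(shot_lengths) / len(shot_lengths)
print(f"ASL = {asl:.2f} s")                   # 11.0 s over 6 shots -> about 1.83 s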

As Marshall McLuhan establishes in “Media Hot and Cold”, the second chapter of Understanding Media: “Any hot medium allows of less participation than a cool one” (McLuhan 2003, p. 25). He argues that some media, such as film, engage the viewer with an intensity that is unusual in other media, which leave the user or viewer detached and less involved. According to McLuhan, the immersive character of cinema might have diminished over the last half-century, and the audience needs more activity in front of them to remain captivated.

The change in audience perception and behaviour affects the ways of seeing and demands another style in capturing and playback, but the viewer still seems to long for more: additional information, additional entertainment, additional excitement and additional amusement.

The majority of people have become global citizens as a result of the internet, since we are no longer satisfied with what is happening in our immediate vicinity. Moreover, wider knowledge is becoming imperative in order to survive in the global jungle in which competition is getting fiercer and fiercer (Blossfeld et al. 2006).

     Augmenting Fiction

The different manifestations of the layered screen in our little survey are aspects of a grander scheme. Artist and filmmaker Chris Gerrard distinguishes between multi-frame, multi-layered and multi-screen and has summarised all approaches to extend the individual image as multi-image (Gerrard 2016).

3. Implementation of the second layer

In the following, we will describe several case studies which demonstrate how the addition of layers to captured reality has been successfully combined with different techniques and motivations. As exemplary works, we introduce a range of movies in which the layering technique is successfully applied and which illustrate different approaches.

     3.1 The Beginning

The first addition to the single stream of meaning in the early days of movies happened to be intertitles, spoken dialogue and music.

Adding another layer of information could be seen as a new form of the intertitle from the age of silent movies. Title cards, which substituted for the spoken words and contributed to the transfer of information (Gaudreault et al. 2013), became obsolete with the introduction of sound in ‘talkies’ – a form of onscreen communication which is taken for granted in movies today. However, in foreign-language films meaning is still conveyed via subtitles – the soundtrack conveys only the universal actions, such as gunshots, car crashes, etc.

Even while listening, the spectator often wants to gather additional information when watching a movie or programme (Van Cauwenberge et al. 2014). Viewing habits change immensely with every decade, so the methods of the film creators have to adapt constantly in order to keep the audience engaged.

The 1960s television series Batman (Dozier 1966) introduced a transition from intertitles to an additional element as stylistic reference to the famous DC comic magazine. Hilarious fight scenes were overlaid at the moment of the punch with screen-filling letters like in the comic strip: SWOOSH!, BONK!, AWKKKKKK!, WHACK! and many more.[1]

Not only did the overlays serve to obscure the details of the fight scenes, they were also entertaining and ironic. But mainly they connected the footage to the aesthetics of the comic book, targeting a similar audience [Fig. 1].

Figure 1: Scene from the Batman TV Series (Dozier 1966), Adam West as the Superhero. Photo: Public Domain, screen shot taken from broadcast, https://www.youtube.com/watch?v=qpoxHvmWPfc.

The Australian TV short Basically Black (1973) similarly retains links to graphic novels, albeit in a slightly more subtle way. Thought bubbles appear above the characters to share their inner thoughts and ideas in an entertaining way. Initially planned as a series, the pilot addressed issues of racism toward Australian Aboriginal peoples in a comical style [Fig. 2].

Figure 2: Scene from Basically Black (Parsons 1973), Zac Martin as “Super Boong.” Photo: Public Domain, screen shot taken from broadcast, https://www.youtube.com/watch?v=uTunYAlu6Rk.

3.2 A Second Reality

Who Framed Roger Rabbit? (Zemeckis 1988) is the apex specimen in which cinematography and animation merge into a perfect hybrid, establishing a new quality of entertainment. Comic animation and live action are treated equally, visually and in terms of content. Therefore, we would refrain from even talking about an additional layer in this example.

Hans-Martin Rall’s animated documentary Little Red Bard – Shakespeare in Singapore (Rall 2018) also combines animation and live footage in a captivating style. Interviews with experts on Shakespeare and with young Singaporeans are rich in content. However, as elaborated in section 2, digital natives in particular would not be engaged by talking heads and conversations in the same way as by a visually enjoyable supplement. With his expertise in animation, Hans-Martin Rall finds an appropriate visual method to interpret the spoken words and, at the same time, trigger the imagination of the viewer by inserting micro-narratives on a meta-level through animated elements. These go beyond merely illustrating factual content by adding elements of satirical commentary. In the interview with Professor Dobson [Fig. 3], the very topic under discussion, animated documentary, is simultaneously visualised: while Prof. Dobson explains his ideas for the animated documentary, he literally turns into a frog.

Figure 3: Shakespeare expert Michael Dobson turns into a frog during an interview with Hans-Martin Rall after suggesting exactly that: ‘And I hope my words are being spoken by a froggy…’ Photo Copyright: the authors, © 2016-2018.

The medium is used to explain itself: Through the transformation of the interviewee into an animated character, the proposed visual approach is demonstrated in ‘real time.’ By doing so, the live action/animation combination turns into something bigger than the sum of its parts: information that could not be communicated through an isolated use of either live action or animation.

A second example from this project further demonstrates this approach: the interviewee talks about the potential of internet memes to attract Singaporean millennials to Shakespeare. While he does so, animated memes appear behind and around him – simultaneously applying the concept as he explains its relevance. Verbalised theory is explained through visualised practice. Beyond this educational impact, the animated technique instantly provides the whimsical approach needed to reach out to an audience of digital natives: ‘Shakespeare 4.0’ demonstrated.

An animated documentary which is in principle similar to Hans-Martin Rall’s interviews is Is the Man Who Is Tall Happy? by Michel Gondry (Gondry 2013). The director uses the footage of his conversation with linguist Noam Chomsky as a canvas to draw over the moving images in his idiosyncratic style, visualising the complex topics of their exchange and explaining abstract ideas in metaphorical scribbles. The footage of the interview almost disappears behind the second layer, but reminds the viewer every now and then of the dialogue which took place between Gondry and Chomsky. [Fig. 4]

Figure 4: Film still from the animated conversation Is the Man Who Is Tall Happy?, with Noam Chomsky (Gondry 2013). Photo: Public Domain, screen shot taken from the movie trailer, https://www.youtube.com/watch?v=d9c4xJEP6eI.

One of the most vivid and obvious examples underlining the tendency toward, and fascination with, layering the image, as well as referring to the media multitasking society, will be introduced hereafter. The original video footage almost disappears behind the overlay, however in another genre and spirit. [Fig. 5]

Figure 5: Film still from the multi-layered video short Hyper-Reality (Matsuda 2016). Photo: Public Domain, screen shot taken from broadcast, https://vimeo.com/166807261

     Hyperreality

The video short Hyper-Reality (Matsuda 2016) vividly shows how our daily life is overcrowded with multiple layers of messages, images and videos. Using competing layers, it illustrates the life of the protagonist from a POV perspective. The video is not only visually enhanced but also addresses the information overload of everyday life within its narrative. The trashy aesthetic is unpleasant in the same way as the advertisements and messages which haunt us. The moment the protagonist restarts her digital identity almost comes as a relief: all the commercials and messages disappear, revealing the plain video image for a short time, until the extensive overlays cover the screen again. Matsuda financed the work through crowdfunding, for which he produced another video showing a convincingly cluttered world around his desk, introducing himself as a visionary creator.

     Stranger Than Fiction

Despite a more pleasing aesthetic, the fantasy-drama Stranger Than Fiction (Forster 2006), with its animated graphics overlaying the video image, bridges the visuals and the narration in a similar way by opening an insight into the world of the main character, Harold [Fig. 6]. The animated monochrome graphics, synchronised with the moving images of the opening sequence, underline the meta-level introduced by his narrator – or better, his author. Roger Ebert describes how the film contains ‘quirky little graphics that show how Harold compulsively counts and sees spatial relationships’ (Ebert 2006). By embellishing the daily habits of the protagonist [Fig. 7], the movie makes the secret mechanics behind him evident and intensifies this message visually: the construction of Harold as a fictional character is emphasised via the graphical layer.

Figure 6: Film still from Stranger Than Fiction by Marc Forster (2006), starring Will Ferrell as Harold Crick. Photo: Public Domain, screen shot taken from the excerpt, https://www.youtube.com/watch?v=WDwTQ57YyzI.

Figure 7: Film still from Stranger Than Fiction (Forster 2006), Harold Crick performs his morning routine. Photo: Public Domain, screen shot taken from the excerpt, https://www.youtube.com/watch?v=WDwTQ57YyzI.

Fight Club operates in an aesthetically similar way, though with a different intent (Fincher 1999). An empty room becomes furnished in the view of the protagonist while we see him browsing through a furniture catalogue [Fig. 8]. Labels, descriptions and even prices appear as an overlay in the completely outfitted set, while the viewer sees the protagonist walking through the catalogue image, talking into the cordless phone. The verbally explained atmosphere establishes the character and his context, while the overlays create a close correspondence to the everyday experiences of the audience.

Figure 8: Film still from Fight Club (Fincher 1999), starring Edward Norton as the unnamed narrator. Photo: Public Domain, screen shot taken from the movie trailer, https://www.youtube.com/watch?v=SUXWAEX2jlg

     Graphic Animation

As in the case above, in the TV series Sherlock (Gatiss, Moffat 2010-2017) additional layers serve to show what the protagonists look at or think about [Fig. 9/10]. Here, the supplements are cleverly inserted: for instance, the protagonist receives text messages which are displayed as cut-out text on the screen for the audience to read. In this way, an insert cut of the mobile phone or a detail of a personal file becomes superfluous, and the onscreen addition can be regarded as a novel stylistic element reflecting the use of mobile devices as a form of communication – although appealingly animated, they are reminiscent of the early intertitles. Other TV series, such as House of Cards (Willimon 2013-ongoing), illustrate non-sonic conversations in a comparable manner [Fig. 11]. In a recent paper on information visualisation (Bach et al. 2017), very similar methods were discussed, since ‘the physical world contains physical objects’ and is not a blank space.

Figures 9/10: Season 2, Episode 1, ‘A Scandal in Belgravia’ of the series Sherlock (Gatiss, Moffat 2010-2017), about the detective who is reading text messages and people. Photo: Public Domain, promotional material.

Figure 11: Texting in House of Cards (Willimon 2013-ongoing). Photo: Public Domain, promotional material.

3.3 Layering as a narrative device

Stacked layers that convey several meanings at once can be employed in more sophisticated ways, as can some chat protocols. Elke Reinhuber has developed different approaches to dealing with conflicting information in one story by adding layers to the captured visible reality in an artwork.

The film short A Standard Video by Elke Reinhuber (Reinhuber 2014) presents a distinct additional layer on the original screen. The press spokesman of the German Institute for Standardisation, DIN, explains the principles of standardisation in an interview, while the message in the subtitles could almost be regarded as a second narrative. The inserted text describes how the recording, editing and playing back of videos requires the knowledge and application of an abundance of standards. At the same time, some of these standards are applied to the footage itself, such as light sensitivity (ISO), aspect ratio, resolution, colour space or compression, and the footage demonstrates the effect while describing it. The video short challenges the audience to multitask, as the written and narrated texts have nothing in common – but both also function individually.

Elke Reinhuber’s silent movie VENOMENON (Reinhuber 2016) is less challenging in this regard, as the additional layer supports the storyline. Infographics from a life-support-machine appear in a scene with a long tracking shot along a corridor and set the context for the location – but also introduce the critical state of the female protagonist. In another scene, old-fashioned postcard frames direct the audience to a flashback scene. The work adds another dimension, though, to challenge the viewers: the stereoscopic video is presented as 2-channel video with two opposing screens. In a manner different from a normal second screen, which can just be used as an addition without having the eyes leave the main screen, here the audience needs to decide which screen to look at and will miss the content of the other display by turning the head – a reference to the tale being presented: it is based on the myth of Orpheus and Eurydice. The hero from ancient Greek mythology tried to rescue his deceased wife from the underworld, but turned his head around despite being told not to do so – and lost his wife forever. In the video installation, the audience only misses the content on the other screen.

In an experimental version, based on the same footage, ADM student Kaie Slater suggested in a study project that overlays be included which related to mythological tales connected to Singapore as a multi-cultural country. In this version, Orpheus’ long path to the underworld is covered with Chinese paper money, burned as effigies for the dead. As a reference to Indian culture, the representation of the underworld is connected to the Yama lintel, and motifs of Malayan background and the contemporary influences of the Western world add visual context to the project too.

As a final example, we would like to introduce a work which can be seen as a reverse approach by using only a second screen.

3.4 Additional layer on reality

As in many AR applications, location is one of the most essential components in ‘Alter Bahnhof Video Walk’ by Janet Cardiff and George Bures Miller, which was produced for Documenta 13 (Cardiff, Bures Miller 2012).

The artist duo recorded a twenty-six-minute walk through the old train station in Kassel on video. The only way to watch the film was site-specific: by renting a prepared iPod touch and headphones in the train station. Once equipped, the viewer was guided by Cardiff’s instructions, delivered in her engaging voice. By aligning the screen with the actual backdrop of the station, a second layer was added for the viewer. It was very close to the actual setting and therefore easy to identify with. The architecture in the small-scale movie and in the station were congruent, whereas items such as the advertising board in the bakery indicated a shift in time; people, animals and moving trains reminded the viewer occasionally of the two layers, the verisimilitude of the pre-recorded video and the physical existence of the station. The sound of a brass band became audible; slowly, from behind a column, musicians appeared carrying huge instruments. Cardiff’s work invited the viewer – or better, the participant – to walk behind them. Without hesitating, the immersed participant could turn around and follow the image by aligning the recorded past on the small video screen with the present scenario of the station. The narration directed the viewer even further back in time, to traumatic events in the same location, the former central station, during World War II. In the exhibition catalogue, Cardiff and Miller wrote: ‘An alternate world opens up where reality and fiction meld in a disturbing and uncanny way that has been referred to as “physical cinema”.’ ‘Alter Bahnhof Video Walk’ is a fine example of how present and past worlds can merge by immersing the audience, through audio, in a video scenario; instead of adding content to a movie, it adds a fictional perspective to reality.

4. Considerations and outlook

In summary, we have observed a tendency to increase visual input through layering and multiple screens, which provide an interesting supplement to the plain image that only offers the same impression as seen by the unaided human eye.

But the things we see on any of the screens provided seem never to be enough for our insatiable appetite for visual sensation. Offering more than one stream of imagery, delegating the information to several channels with different properties, means we are closing the gap to the feeling of being-in-the-world, since many impressions of different quality reach our audiences simultaneously and heighten their engagement.

According to Jeff Gomez, the pioneering transmedia producer, the gaps in encapsulating reality should be delivered intentionally (Miller 2008, p. 167). He came up with the idea of the ‘cheese hole’ in regard to transmedia storytelling, representing spaces in the story-world to be filled by the spectator. An open story design could offer opportunities to the audience – and their social media networks – to be part of and co-create content.

In the examples provided, we have explored several possibilities, techniques and results. The most common appearance, in the form of supplementary information, is already well established on main and second screens. The same can be said for approaches that set the framework and establish the context of a film by working with moving and still images, using sandwiching or collaging techniques. The relationship between the several layers and screens is significant, ultimately determining an effective connection between the content and its extensions (Willett et al. 2017).

However, connecting content between multiple screens has only recently reached a wider audience and may, in our media multitasking society, be a field to explore further – in theory, but particularly through practical examples. We have identified the in-between, or even transformative, state between reality and fiction as another area of interest. Augmentations, which engage the audience not only with moving images but with their environment, are a promising approach to be investigated further.

Acknowledgements

We thank everyone involved: you know who you are!

This research has been kindly supported by ADM / NTU Singapore and the Singapore Ministry of Education as part of an AcRF Tier 1 and the AcRF Tier 2 grant From Print to Digital: Re-Defining Narrativity for Interactive Digital Media.


References

Allen, Woody. 1985. Purple Rose of Cairo. Feature film. [82 min., 1.85:1, colour, monaural] United States.

Android Developers. 2018. Picture-in-picture Support. Retrieved October 31, 2018 from https://developer.android.com/guide/topics/ui/picture-in-picture.html

Bach, Benjamin, Ronell Sicat, Hanspeter Pfister and Aaron Quigley. 2017. Drawing into the AR-CANVAS: Designing Embedded Visualizations for Augmented Reality. Workshop on Immersive Analytics, IEEE VIS. Retrieved October 31, 2018 from 10.1109/TVCG.2016.2598608

Bailey, Jason. 2014. “Second Screens” at Movie Theaters Isn’t Just a Terrible Idea — It’s a Nonsensical One. Retrieved October 31, 2018 from http://flavorwire.com/462957/second-screens-at-movie-theaters-isnt-just-a-terrible-idea-its-a-nonsensical-one

Baxter, Mike. 2013. Evolution in Hollywood editing patterns? Retrieved October 31, 2018 from http://www.cinemetrics.lv/dev/Evolution_paper_for_Cinemetrics.pdf

Bimber, Oliver, Bernd Fröhlich, Dieter Schmalstieg and Miguel Encarnação. 2001. The Virtual Showcase. Retrieved October 31, 2018 from http://web.uni-weimar.de/cms/fileadmin/medien/vr/documents/publications/TheVirtualShowcase2001.pdf

Blake, James. 2017. Second Screen interaction in the cinema. Experimenting with transmedia narratives and commercialising user participation. Participations, 14:2, pp. 526-544.

Blossfeld, Hans-Peter, Erik Klijzing, Melinda Mills and Karin Kurz (eds.). 2006. Globalization, Uncertainty and Youth in Society: The Losers in a Globalizing World, Abingdon-on-Thames: Routledge (Advances in Sociology).

Boermans, Bobby. 2013. App. Feature film. [75 min., 2.35:1, colour, Dolby Digital] Netherlands.

Cardiff, Janet and George Bures Miller, 2012. Alter Bahnhof Video Walk. Media art. [26 min., 1.78:1, colour, stereo] Documenta 13, Kassel. Retrieved October 31, 2018 from http://www.cardiffmiller.com/artworks/walks/bahnhof.htm

Carrier, L. Mark, Larry D. Rosen, Nancy A. Cheever and Alex F. Lim. 2015. Causes, effects, and practicalities of everyday multitasking. Developmental Review, 35: 64-78. https://doi.org/10.1016/j.dr.2014.12.005

Van Cauwenberge, Anna, Gabi Schaap and Rob van Roy. 2014. “TV no longer commands our full attention”: Effects of second-screen viewing and task relevance on cognitive load and learning from news, Elsevier Computers in Human Behavior, 38, pp.100-109. https://doi.org/10.1016/j.chb.2014.05.021

Corrigan, Timothy and Patricia White. 2012. “Telling Stories: Narrative Films” In The Film Experience; 2nd. ed., Boston, New York: Bedford/St. Martin’s, pp.233-253.

Covert, Hannah H. and Mirka Koro-Ljungberg. Layers of narratives, images, and analysis. Qualitative Research Journal, 15:3, pp.306-318. https://doi.org/10.1108/QRJ-08-2014-0042

Cutting, James E., Kaitlin L. Brunick, Jordan E. DeLong, Catalina Iricinschi and Ayse Candan. 2011. Quicker, faster, darker: Changes in Hollywood film over 75 years. i-Perception, 2, pp.569-576.

Dozier, William. 1966-1968, 3 Seasons. Batman. TV series. [25 min./120 episodes, 1.33:1, colour, monaural] United States: ABC.

Ebert, Roger. 2006. Stranger Than Fiction. Retrieved October 31, 2018 from http://www.rogerebert.com/reviews/stranger-than-fiction-2006

Chiachiri Filho, Antonio Roberto and Edson do Prado Pfützenreuter. 2012. Stranger than Fiction: A Movie to think about Semiosis. Paper presented at the 11th World Congress of Semiotics (IASS), 2012, Nanjing. Retrieved October 31, 2018 from https://www.academia.edu/6531465/Stranger_than_fiction_a_movie_to_think_on_semiosis_-_11th_World_Congres_of_the_IASS_2012_co-author_Prof._Roberto_Chiachiri._China_2012

Fincher, David. 1999. Fight Club. Feature film. [139 min., 2.35:1, colour, DTS] United States.

Forster, Marc. 2006. Stranger Than Fiction. Feature film. [113 min., 1.85:1, colour, SDDS] United States.

Gance, Abel. 1927. Napoleon. Feature film. [313 min., 1.33:1/4:1, black and white (tinted), silent] France.

Gatiss, Mark and Steven Moffat. 2010-2017, 4 Seasons. Sherlock. TV series. [ca. 90 min./13 episodes, 1.78:1, colour, stereo] United Kingdom: BBC.

Gaudreault, André and Timothy Barnard. 2013. Titles, Subtitles, and Intertitles: Factors of Autonomy, Factors of Concatenation. Film History: An International Journal, 25:1-2, pp.81-94.

Gerrard, Chris. 2016. The Cinema Multi’s: Image, Frame, Layers and Screen. Retrieved October 31, 2018 from https://chrisgerrardfilmart.wordpress.com/2016/10/07/the-cinema-multis-image-frame-layers-and-screen/

Gondry, Michel. 2013. Is the Man Who Is Tall Happy? Animated documentary. [88 min., 1.85:1, colour, stereo] France.

Handler Miller, Carolyn. 2008. Digital storytelling: A creator’s guide to interactive entertainment, Oxford: Focal Press.

Ifeanyi, KC. 2016. Every Batman Fight Scene Onomatopoeia in One Alphabetical GIF. Retrieved October 31, 2018 from https://www.fastcompany.com/3055253/every-batman-fight-scene-onomatopoeia-in-one-alphabetical-gif

Julien, Isaac. 2010. Ten Thousand Waves. Media art. [50 min., 1.85:1 – 9-channel multi-screen, colour, 9.2 sound] Retrieved October 31, 2018 from https://www.isaacjulien.com/projects/ten-thousand-waves/

Kessler, Frank. 2005. Georges Méliès, in R. Abel (ed.), Encyclopedia of Early Cinema, Abingdon-on-Thames: Routledge, pp.600-602.

Lang, Fritz. 1927. Metropolis. Feature film. [153 min., 1.33:1, black and white, silent] Germany.

Lang, Veronika and Peter Sittler. 2011. Looking for property via Smartphone trends, offers and prospects; Research Paper presented at the 18th European Real Estate Society Conference (ERES) 15-18 June 2011, Eindhoven. Retrieved October 31, 2018 from http://eres.scix.net/data/works/att/eres2011_49.content.pdf

liamc/limalicas. 2009. First Augmented Reality App for the iPhone Is Live. Retrieved October 31, 2018 from https://gigaom.com/2009/08/26/first-augmented-reality-app-for-the-iphone-is-live/

Luhrmann, Baz. 2001. Moulin Rouge! Feature film. [127 min., 2.39:1, colour, SDDS] Australia, United States.

Matsuda, Keiichi. 2016. Hyper-Reality. Video short [6 min., 1.85:1, colour, stereo] Colombia. Retrieved October 31, 2018 from https://vimeo.com/166807261

McLuhan, Marshall (W. Terrence Gordon, ed.). 2003. Understanding Media: The Extensions of Man – Critical Edition, Berkeley, Ca.: Gingko Press.

Mehr, Hannes. 2010. Immonet iPhone-App mit Augmented Reality. Retrieved October 31, 2018 from https://www.immobilienportale.com/20102808-immonet-iphone-app-mit-augmented-reality/

Miller, George. 2015. Mad Max: Fury Road. Feature film. [120 min., 2.35:1, colour, Dolby Atmos] Australia, United States.

Miller, Greg. 2014. Data From A Century Of Cinema Reveals How Movies Have Evolved. Retrieved October 31, 2018 from https://www.wired.com/2014/09/cinema-is-evolving/

Mobbs, Dean, Nikolaus Weiskopf, Hakwan C. Lau, Eric Featherstone, Ray J. Dolan and Chris D. Frith. 2006. The Kuleshov Effect: the influence of contextual framing on emotional attributions. Social Cognitive and Affective Neuroscience, 1:2, pp.95–106. http://doi.org/10.1093/scan/nsl014

Monaco, James. 2000. How to Read a Film; 3rd ed., New York City: Oxford University Press.

Morris, Chris. 2012. Cinema second screen? Retrieved October 31, 2018 from http://variety.com/2012/digital/news/cinema-second-screen-1118059277/

Nedomansky, Vashi. 2017. Average Shot Length Archives – Blog. Retrieved October 31, 2018 from http://vashivisuals.com/category/one-sheets/average-shot-length-one-sheets/

Orange Transmedia Lab. 2013. Cinema and second screen applications: focus on the film App and the Disney Second Screen experience. Retrieved October 31, 2018 from http://www.transmedialab.org/en/the-blog-en/cinema-and-second-screen-applications-focus-on-the-film-app-and-the-disney-second-screen-experience/

Parsons, Nicholas R.. 1973. Basically Black. TV short. [30 min., 1.33:1, black and white, monaural] Australia: Australian Broadcasting Commission.

Pinteau, Pascal. 2005. Special Effects: An Oral History – Interviews with 37 Masters, Spanning 100 Years, New York City: Harry N. Abrams, Inc..

Rall, Hans-Martin. 2018. Little Red Bard – Shakespeare in Singapore. Documentary, live action and animation [30 min., 1.78:1, colour, stereo] Singapore. Retrieved October 31, 2018 from https://vimeo.com/*

Reinhuber, Elke. 2014. A Standard Video. Video short [11 min., 1.78:1, colour, stereo] France, Germany. Retrieved October 31, 2018 from https://vimeo.com/98644158

—–. 2016. Venomenon. Video short in S3D [12 min., 2 screens S3D, each 1.33:1, colour, stereo] Singapore. Retrieved October 31, 2018 from https://vimeo.com/*

Romanek, Mark. 2004. Jay Z: 99 Problems. Music video. [4 min., 1.78:1, black and white, stereo] United States.

Salt, Barry. 2009a. Film Style and Technology: History and Analysis, London: Starword.

—–. 2009b. Statistical Film Analysis (Basic Concepts and Practical Details). Retrieved October 31, 2018 from http://www.cinemetrics.lv/salt.php

Scott, Ridley. 1982. Blade Runner. Feature film. [137 min., 2.39:1, colour, 6-track audio] United States.

Sokurov, Aleksandr. 2002. Russian Ark. Feature film. [99 min, 1.85:1, colour, Dolby Digital] Russia.

Spielberg, Steven. 2002. Minority Report. Feature film. [145 min., 2.35:1, colour, SDDS] United States.

Sterritt, David. 1999. The Films of Jean Luc Godard. Seeing the Invisible, Cambridge: Cambridge University Press.

Tarantino, Quentin. 2003. Kill Bill. Feature film. [248 min., 2.39:1, colour, SDDS] United States.

Turner, Steve. 2013. The Evolution Of The Second Screen. Retrieved October 31, 2018 from https://www.wired.com/insights/2013/05/the-evolution-of-the-second-screen/

Vertov, Dziga. 1929. Man with a Movie Camera. Documentary film. [68 min., 1.33:1, black and white, silent] Soviet Union.

Walton, Kendall. 1990. Mimesis as Make-Believe: On the Foundations of the Representational Arts, Cambridge, Ma.: Harvard University Press.

Willimon, Beau. 2013-2018. House of Cards. TV series. [51 min./73 episodes, 2:1, colour, Dolby Digital] United States: Netflix.

Willett, Wesley, Yvonne Jansen and Pierre Dragicevic. 2017. Embedded Data Representations. IEEE Transactions on Visualization and Computer Graphics, Institute of Electrical and Electronics Engineers, pp.461-470. Retrieved October 31, 2018 from https://doi.org/10.1109/TVCG.2016.2598608

Zemeckis, Robert. 1988. Who Framed Roger Rabbit? Feature film. [104 min., 1.85:1, colour, 6-track audio] United States.

 

Notes

[1]        AIEEE! – AIIEEE! – ARRRGH! – AWK! – AWKKKKKK! – BAM! – BANG! – BANG-ETH! – BAP! – BIFF! – BLOOP! – BLURP! – BOFF! – BONK! – CLANK! – CLANGE! – CLANK-EST! – CLASH! – CLUNK! – CLUNK-ETH! – CRAAACK! – CRASH! – CRRAAACK! – CRUNCH! – CRUNCH-ETH! – EEE-YOW! – FLRBBBBB! – GLIPP! – GLURPP! – KAPOW! – KAYO! – KER-PLOP! – KER-SPLOOSH! – KLONK! – KRUNCH! – OOOFF! – OOOOFF! – OUCH! – OUCH-ETH! – OWWW! – OW-ETH! – PAM! – PLOP! – POW! – POWIE! – QUNCKKK! – RAKKK! – RIP! – SLOSH! – SOCK! – SPLAAT! – SPLATT! – SPLOOSH! – SWAAP! – SWISH! – SWOOSH! – THUNK! – THWACK! – THWACKEL! – THWAPE! – THWAPP! – TOUCHÉ! – UGGH! – URKK! – URKKK! – VRONK! – WHACK! – WHACK-ETH! – WHAM-ETH! – WHAMM! – WHAMMM! – WHAP! – ZAM! – ZAMM! – ZAMMM! – ZAP! – ZAP-ETH – ZGRUPPP! – ZLONK! – ZLOPP! – ZLOTT! – ZOK! –ZOWIE! – ZWAPP! – Z-ZWAP! – Z-Z-Z-Z-WAP! – ZZZZZWAP! (Ifeanyi, 2016)

© Elke Reinhuber, Hannes Rall & Sebastian Pelz

Edited by Amy Ratelle