Practice Log 1: Bradford Industrial Museum Recordings

Date: 03/01-09/01/2022

Augmented Reality (AR) Project Showcase

How to download and view:

  1. Download Adobe Aero to your smartphone. Note that the app is not supported on iPhone; if you are using an Android phone, please download it here.
  2. You might need to log in to the app with an Adobe account.
  3. Download the file from the Google Drive link: https://drive.google.com/file/d/1Xxn2q8VMFa_Z6c2jftpxHBUjejTKNhZi/view?usp=drive_link. Then open it via Adobe Aero. It might take some time to load.
  4. After opening, the app will ask you to find a large flat area in your surroundings on which to place the 3D model.
  5. Please do not change any settings; just click the Preview button (as shown in picture 1 below) to start viewing. If you accidentally change any settings, please close the app without saving. Feel free to walk around the model and use your phone screen as a window connecting reality and virtuality.
  6. There are two buttons on the side of the model for you to experience different sound environments. To activate one, look at the button on your phone screen, physically move close to it, then tap it with your finger. The sound stops when the loop finishes playing; you can tap again to listen as many times as you like.

If you cannot download or view the original file, please have a look at the video tutorial below, which walks through downloading and viewing the project.

Video representation:

360-degree video with panning sound showcase


Reflective Diary

I started writing my PhD proposal when the pandemic was at its most severe. I have always been a museum enthusiast, and I felt confused when I saw the news – on one hand, small and medium-sized museums were closing one after another, either temporarily or permanently; on the other hand, large museums were launching new online museums. This made me ask: What was causing this polarisation? What role was digital technology really playing? Why couldn’t digital technology be made more accessible to everyone? Could low-cost digital technologies do anything to help small and medium-sized museums join the online museum world? And if they could, what was the best way to select and use these technologies? (A more detailed account of my motivation is available in Chapter 1.4 of my thesis.)

Therefore, my starting point was to test whether consumer-grade technology could deliver the complexity of offline museum exhibits, both visually and aurally, as an authentic or even sensual experience. After attending the Science Museum Group’s Congruence Engine conference in the winter of 2021, I learned about the organisation’s attempt to build more profound relationships between museums, as well as between visitors and museums, to empower the public to become contributors to historical information. Guided by this concept, I entered the Bradford Industrial Museum and secured the opportunity to digitise the museum’s industrial machinery exhibition hall. At the time, I did not have a clear hypothesis or even a clear method, but rather two questions driven by curiosity and enthusiasm. Firstly, on a technical level, what happens when a space designed to display industrial memory is captured and presented using tools that are readily available to anyone (such as 360-degree cameras and consumer-grade recorders)? Secondly, on a social level, could this low-cost digital re-presentation then be used as a tool to follow the lead of the Science Museum Group? (e.g. Could it become a platform that empowers the public to become contributors of historical information, or does it just end up being a static digital copy?)

This was my first attempt at recording, photographing, and scanning artefacts in a museum environment. As the earliest practice in this research, it was conducted before Ethics Approval was granted and therefore involved no participants, relying instead on self-observation. Looking back now, at the end of this four-year research journey, it is striking what a crucial inspirational role this practice played in the subsequent development of my research.

When it came to visual data, I was definitely influenced by everything happening in 2021. Apple was integrating 3D scanning (LiDAR) into its phones, and suddenly this technology felt like it was moving into everyday life, widely discussed by the public and especially by tech-testing YouTubers. Consequently, I chose to use an iPhone 13 Pro, the latest model in 2022, with its native LiDAR scanner, in combination with three applications: Scaniverse, Polycam, and 3D Scanner App. Furthermore, considering the feasibility of presenting panoramic video via portable VR headsets (given their low cost, ease of setup, and interaction potential), I recorded panoramic video too. For the audio equipment, drawing on my previous experience producing and browsing ASMR content, I selected mono, binaural, and multi-channel recording devices to try to recreate the spatial experience in sensory dimensions. To enable engagement within a 3D environment and ensure device compatibility for the 3D model component, I chose Adobe Aero (Beta version) for data presentation. The dataset was generated in three layers: isolated object sounds (mono and binaural recordings), collective ambient sounds, and 3D scans of the collections. For the environmental soundscape, I combined panoramic video with multi-channel audio, uploaded it to YouTube, and enabled panning (this muxing and metadata step is sketched below). This was crucial, as it meant participants could access it easily and (more importantly) could perceive changes in sound as they rotated their heads.
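For readers who want to reproduce the YouTube step, here is a minimal sketch of one common route rather than a record of my exact toolchain. It assumes a 360-degree MP4, a four-channel first-order ambisonic (AmbiX) WAV, ffmpeg on the PATH, and a checkout of Google’s open-source spatial-media metadata injector; all filenames are hypothetical.

```python
# Hedged sketch: mux a 4-channel ambisonic track into a 360-degree video,
# then inject the spherical-video and spatial-audio metadata that YouTube
# looks for. Filenames are hypothetical placeholders.
import subprocess

# 1. Replace the camera audio with the ambisonic (AmbiX) recording.
subprocess.run([
    "ffmpeg", "-i", "walkthrough_360.mp4", "-i", "hall_ambix.wav",
    "-map", "0:v", "-map", "1:a",   # video from input 0, audio from input 1
    "-c:v", "copy", "-c:a", "aac",  # copy video; encode 4-channel AAC
    "muxed_360.mp4",
], check=True)

# 2. Inject metadata with Google's spatial-media tools
#    (github.com/google/spatial-media); run from the repo checkout.
subprocess.run([
    "python", "spatialmedia", "-i", "--spatial-audio",
    "muxed_360.mp4", "walkthrough_360_youtube.mp4",
], check=True)
```

Without the injected metadata, YouTube treats the upload as a flat video and the head-tracked panning described above never activates.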

This initial practice was not simply a technical experiment, but also a chance to generate answers to the questions I raised earlier. At the same time, it prompted a deeper consideration of how everyday devices designed for consumer use, rather than for cultural heritage, can be mobilised as curatorial agents. I carried this question into later practices. The choice to use Scaniverse on an iPhone 13 Pro without external lighting or sensors was deliberate, mimicking the infrastructural conditions of many small museums that have limited resources and personnel but are keen to reach a larger audience. The simplicity of these apps and devices reduced barriers to entry, but their outputs were uneven. Owing to the shortcomings of LiDAR, dark-painted machines and those with reflective surfaces frequently failed to be detected by the scanner. This made manual repairs in Blender necessary (a sketch of such a repair follows below), which in turn raised the technical demands on the user. It exposed an early conceptual tension in using accessible technology: the more accessible a technology is advertised to be, the more invisible labour seems to be displaced downstream. It led me to suspect that simplification at the front end rarely translates into simplicity in the process of meaning-making.
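To give a concrete sense of what this “manual repair” involves, below is a minimal Blender (bpy) sketch of the most common fix: filling the holes the scanner left on dark or reflective surfaces. It illustrates the kind of work described above rather than my exact repair script; the object name is hypothetical.

```python
# Run inside Blender's scripting tab with the imported scan in the scene.
import bpy

obj = bpy.data.objects["machine_scan"]  # hypothetical name of the scan mesh
obj.select_set(True)
bpy.context.view_layer.objects.active = obj

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.fill_holes(sides=0)        # sides=0 patches holes of any size
bpy.ops.mesh.normals_make_consistent(inside=False)  # repair flipped normals
bpy.ops.object.mode_set(mode='OBJECT')
```

Even this “one-click” repair assumes the user knows what edit mode, selection, and normals are, which is precisely the displaced labour described above.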

The sound recording process offered a similar insight. I used a TASCAM stereo recorder to capture both isolated machine operations and the soundscape of the hall. I added these recordings to the Aero interface in two modes, so that visitors could choose what to engage with: they can use gestures to listen to the sound of one collection alone, or to the sound of the whole exhibition with all the machines operating simultaneously. By combining this with physical movement (walking closer to the virtual machines), visitors can also adjust the volume, allowing them to experience different soundscapes (the attenuation principle behind this is sketched below). However, this choice is difficult to provide in a physical museum. The operation of the machines is arranged as a showcase, available only within a fixed schedule set by the museum. Moreover, in open museum spaces, sound from different machines reflects, bleeds, and echoes through the hall, introducing a “crash” of sounds and creating confusion rather than the intended orientation. My experiment tried to address this issue: I wanted to see whether digital technology could smooth this “crash” and balance the visual-audio engagement according to the audience’s choices. This whole dynamic (the “crash” of sounds in the physical exhibition versus the smoothing of the digital) is a good example of Pink’s (2009) argument that sensory perception, experience, and categories are all interconnected and constantly in flux. My experiment was testing whether I could control that flux using digital technologies.
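Aero handles the proximity behaviour internally, but the underlying principle is the standard inverse-distance attenuation model used by common 3D audio engines. The sketch below is illustrative only, not Aero’s actual implementation:

```python
# Illustrative inverse-distance attenuation: gain falls off as the
# listener walks away from a virtual machine.
def proximity_gain(distance_m: float, ref_distance_m: float = 1.0,
                   rolloff: float = 1.0) -> float:
    """Gain in [0, 1]; full volume at or inside the reference distance."""
    d = max(distance_m, ref_distance_m)
    return ref_distance_m / (ref_distance_m + rolloff * (d - ref_distance_m))

print(proximity_gain(1.0))  # 1.0 -> standing next to the machine
print(proximity_gain(5.0))  # 0.2 -> five metres away, roughly one fifth
```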

Another reflection emerged around the concept of interactivity. While Aero allowed for gesture- and movement-triggered sound (or changes of sound), it lacked semantic depth. It offered only a simulation of interaction, without substance or the ability to trigger deeper thoughts based on the digitised content. It made me ask: was this gesture-triggered audio truly interpretive, or merely decorative? Is it actually meaningful simply to copy and paste limited offline engagements into online environments? I began to suspect that such experiences risk slipping into what Tyler (1984) critiques as the dominance of the visual mode, albeit shifted into audio-visual terms. This led to a methodological self-questioning: can accessibility still be meaningful without agency?

As Bolt (2007) insists, practice is not just technique, but also epistemology. Applying this to my practice, I learned that this was not just about the limitations of software, but more about the fragile assumptions I had held about “low-cost digitisation”. This is not a negative comment on my practice or my starting point at this initial stage; on the contrary, it has an inspirational and iterative value that could only emerge after practice. Immersion requires multiple modalities, but cannot be achieved by layering sensory inputs alone. I would say the AR project I created is a multisensory duplication of an offline museum experience rather than an independent immersive experience. It also showed me that careful consideration of interpretation, orientation, and response from the user is vital for virtual engagement. Moreover, I realised that sensory fidelity is not the same as epistemic clarity, which I will explore further in the future.

In addition, the combination of machine sounds in AR complicates environmental perception.

In my AR prototype, I noticed a fundamental difference: while the sound of an individual exhibit could convey location or movement (e.g., the direction of steam emission, which plays when tapping the button on the right in AR), the environmental sound (the button on the left), which combines multiple machine sounds into a single ambient track, erased the clarity of positional sound cues (a small numerical illustration of this flattening follows below). I think this means that sound technology mediates not only the experience but also direction itself. It also means that the choice of a specific presentation technology influences the selection of technologies for initial data collection. Without exploratory practice, therefore, it is difficult to determine which presentation technology to use and what kind of data to collect to achieve a desired effect.
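The flattening can be shown numerically. In the toy example below, two stereo sources are panned hard left and hard right; once they are summed into a single ambient track, the per-channel level difference that carried each source’s position is averaged away. The signals are synthetic stand-ins, not my museum recordings:

```python
# Toy demonstration: summing spatially distinct sources into one
# ambient track flattens the inter-channel level cues.
import numpy as np

t = np.linspace(0, 1, 48000)
loom = np.sin(2 * np.pi * 220 * t)    # stand-in for one machine
engine = np.sin(2 * np.pi * 90 * t)   # stand-in for another

# Rows are [left, right]: loom panned hard left, engine hard right.
loom_st = np.stack([0.9 * loom, 0.1 * loom])
engine_st = np.stack([0.1 * engine, 0.9 * engine])

ambient = loom_st + engine_st         # the combined "ambient track"

def channel_levels(x):
    """RMS level of each channel."""
    return np.sqrt((x ** 2).mean(axis=1))

print(channel_levels(loom_st))    # strongly left-weighted
print(channel_levels(engine_st))  # strongly right-weighted
print(channel_levels(ambient))    # near-equal: the positional cue is gone
```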

This problem was exacerbated by my being deaf in one ear. Few researchers, particularly those studying sound, are likely to share this hearing condition. For me, it is not a deficit, but a trigger condition that enabled self-observation and offered a unique perspective, which perhaps makes the significance and findings of this study distinctive. Normally, I can navigate a sound environment by adjusting my head position repeatedly to locate sound cues. But in the echo-filled machinery hall of the Bradford Industrial Museum, I felt overwhelmed, swallowed by sound without a sense of direction. Echo transforms the acoustic experience into an incomprehensible blur; different sound elements can act like a fog, masking sound events and displacing spatial cues (Patterson & Green, 2012). However, this fog taught me more than any clear signal: even immersive technologies cannot overcome the embodied reality of perception. It made me begin to question whether the museum’s use of immersive technology actually resulted in an immersive effect, and to what extent that effect was achieved.

I later attempted to rectify this by superimposing omnidirectional audio over 360° video. However, the limitations of the technology, coupled with the acoustic nature of the hall, meant that the illusion of sound agency still collapsed. The sound refused to behave as expected. Enveloped by the sound of massive belts and engines, I found it difficult to discern the sounds of other machines in specific locations. I remained in the “fog”.

The visuals were no better. The 3D models I produced were barely passable at best. Material textures, especially dark colours and reflective metals, were difficult for the LiDAR scanner to handle. The result was a low-fidelity set of objects whose visual form suggests presence, but not precision. However, this failure also illuminated something important: sound was doing more with less. While the visual models struggled to convey historical texture, the ambient sound, despite its flaws, evoked something more atmospheric and intuitive.

This made me reflect on the cultural dominance of the visual in museum practice. As Stephen Tyler (1984) reminds us, Western epistemologies prioritise the visual, often at the expense of the other senses. But I am not trying to advocate a binary opposition between the visual and the other senses. In my research, I found that despite its technical imperfections, sound provided a more affective channel of communication. This tension between clarity and ambiguity, and between what is seen and what is felt, becomes not only a technical issue but also a curatorial and philosophical one. I believe that understanding the role and potential of sound will be the first step in developing multisensory engagement in future online museums; subsequently, an emphasis on, and understanding of, the engagement of the other senses should also be placed on the agenda.

Ultimately, this foundational experiment (Practice Log 1), with its mixture of frustrating failures and unexpected insights, was not an end in itself but a crucial methodological starting point. The challenges of visual capture—specifically the ‘black box’ nature of the apps and their inconsistent results—directly set the agenda for the more systematic technical evaluations in Practice Log 2 (3D Scan Test) and the critical exploration of an alternative platform in Practice Log 4 (Matterport Scan). Similarly, the difficulties and latent potential of audio recording (the ‘acoustic fog’ versus the evocative power of ambience) necessitated a more focused investigation into sound itself in Practice Log 3 (Machine Sound Collection). Most importantly, the tension between my initial top-down design and the clear need for richer participant agency prompted the vital shift towards a collaborative, co-creative methodology, explored in the workshops of Practice Log 5.

Reflective Methodological Note

“In the best PaR, there is an intellectual diagnostic rigour in the critical reflection on practice, in the movement between the tacit know-how and the explicit know-what and in the resonances marked between know-what and know-that. […] The purpose of critical reflection in a PaR context is better to understand and articulate – by whatever specific means best meet the need in a particular project – what is at stake in the praxis in respect of substantial new insights.”

——Robin Nelson, Practice as Research in the Arts (2013)

What emerged from this experience was not simply a technical lesson, but a deeper realisation about my position within the research. I am not only a designer or observer – I am also a sensory subject whose embodied experience shapes the very process I am trying to study. My hearing impairment is a barrier in my daily life; in this practice, however, it became an unexpected lens that helped me detect the disconnection between sight and sound.

In this project, I acted as participant, mediator, and reflective practitioner at the same time. I moved through the space not only to collect data, but also to begin to understand how digital reconstruction is not a transparent mirror of historical truth, but a negotiation and encounter between technology, body, and memory.

As recent research on sensory museum design argues, immersion is not a totality but a process of cross-modal coordination (Pietroni, 2025; Parker, Spennemann & Bond, 2024). In this instance, the auditory offered not certainty but ambiguous resonance: the hum of the machines was not seen through movement but felt through sound, and the echo of the hall seemed to evoke a sense of loss (of the direction of sound cues, and of the detail of quieter machines) rather than of presence.


Reference list

Barrett, E. and Bolt, B. eds. 2019. Practice as research: Approaches to creative arts enquiry. London: Bloomsbury Publishing.

Pietroni, E. 2025. Multisensory Museums, Hybrid Realities, Narration and Technological Innovation: A Discussion Around New Perspectives in Experience Design and Sense of Authenticity. [Online]. [Accessed 23 April 2025]. Available from: https://doi.org/10.20944/preprints202502.0440.v1

Pink, S. 2009. Doing sensory ethnography. London: SAGE Publications Ltd.

Parker, M., Spennemann, D.H. and Bond, J. 2024. Sensory perception in cultural studies—A review of sensorial and multisensorial heritage. The Senses and Society. [Online]. 19(2), pp.231-261. [Accessed 23 April 2025]. Available from: https://doi.org/10.1080/17458927.2023.2284532

Tyler, S.A. 1984. The vision quest in the West, or what the mind’s eye sees. Journal of anthropological research. [Online]. 40(1), pp.23-40. [Accessed 23 April 2025]. Available from: https://doi.org/10.1086/jar.40.1.3629688

Patterson, R.D. and Green, D.M. 2012. Auditory masking. In: Carterette, E.C. and Friedman, M.P. eds. Hearing. Los Angeles: Academic Press, pp.337-361.

Nelson, R. 2013. Practice as research in the arts: Principles, protocols, pedagogies, resistances. London: Springer.
