
Dr Sam Ferguson

Biography

Sam Ferguson is a musician, researcher and programmer, and a lecturer at the University of Technology, Sydney. His research focus is understanding the relationship between, and the effects of, sound and music on human beings.
He has around 40 publications in areas ranging from spatial hearing and loudness research to data sonification, emotion, and tabletop computing. He has been a research fellow or research assistant on more than six ARC research projects, and continues to maintain several open source code projects. He has taught numerous subjects at postgraduate and undergraduate level at the University of Technology, Sydney, the University of Sydney and UWS, and is currently a lecturer at UTS in the Faculty of Engineering and IT.
Lecturer, School of Software
Core Member, HCTD - Human Centred Technology Design
B.Mus, M. Des. Sci (Hons) (Audio), PhD
 
Phone
+61 2 9514 4682

Sam teaches or has taught subjects such as:

31265 Communications for IT Professionals 

95569 Digital Media Studio

95566 Digital Information and Interaction Design

31080 Digital Multimedia 

32027 Multimedia Systems Design 

50860 Sonology

50858 Audio Production

50846 Situated Media Installation Studio

Sam is currently heavily involved in the Software Development Studio, an interdisciplinary studio where students gain hands-on experience developing software in a collaborative environment with industry mentors. He also acts as an academic advisor for IT students enrolled in the Enterprise Systems Development major.

Chapters

Tan, C. & Ferguson, S. 2014, 'The Role of Emotions in Art Evaluation' in Candy, L. & Ferguson, S. (eds), Interactive Experience in the Digital Age, Springer, pp. 139-152.
View/Download from: UTS OPUS or Publisher's site
With contributions from artists, scientists, curators, entrepreneurs and designers engaged in the creative arts, this book is an invaluable resource for both researchers and practitioners working in this emerging field.
Candy, L. & Ferguson, S.J. 2014, 'Interactive Experience, Art and Evaluation' in Candy, L. & Ferguson, S. (eds), Interactive Experience in the Digital Age, Springer, pp. 1-10.
View/Download from: UTS OPUS
Loke, L. & Khut, G.P. 2014, 'Intimate Aesthetics and Facilitated Interaction' in Candy, L. & Ferguson, S. (eds), Interactive Experience in the Digital Age, Springer, pp. 91-108.
View/Download from: Publisher's site
Ferguson, S., Martens, W. & Cabrera, D. 2011, 'Statistical Sonification for Exploratory Data Analysis' in Hermann, T., Hunt, A. & Neuhoff, J.G. (eds), The Sonification Handbook, Logos Verlag Berlin GmbH, Berlin, Germany, pp. 175-196.
At the time of writing, it is clear that more data is available than can be practically digested in a straightforward manner without some form of processing for the human observer. This problem is not a new one, but has been the subject of a great deal of practical investigation in many fields of inquiry. Where there is ready access to existing data, there have been a great many contributions from data analysts who have refined methods that span a wide range of applications, including the analysis of physical, biomedical, social, and economic data. A central concern has been the discovery of more or less hidden information in available data, and so statistical methods of data mining for 'the gold in there' have been a particular focus in these developments. A collection of tools that has been amassed in response to the need for such methods forms a set that has been termed Exploratory Data Analysis [48], or EDA, which has become widely recognized as constituting a useful approach. The statistical methods employed in EDA are typically associated with graphical displays that seek to 'tease out' a structure in a dataset, and promote the understanding or falsification of hypothesized relationships between parameters in a dataset.

Conferences

Prior, J., Ferguson, S. & Leaney, J. 2016, 'Reflection is hard: teaching and learning reflective practice in a software studio', http://dl.acm.org/citation.cfm?id=2843346, Australasian Computing Education Conference, ACM, Canberra, Australia.
We have observed that it is a non-trivial exercise for undergraduate students to learn how to reflect. Reflective practice is now recognised as important for software developers and has become a key part of software studios in universities, but there is limited empirical investigation into how best to teach and learn reflection. In the literature on reflection in software studios, there are many papers that claim that reflection in the studio is mandatory. However, there is inadequate guidance about teaching early stage students to reflect in that literature. The essence of the work presented in this paper is a beginning to the consideration of how the teaching of software development can best be combined with teaching reflective practice for early stage software development students. We started on a research programme to understand how to encourage students to learn to reflect. As we were unsure about teaching reflection, and we wished to change our teaching as we progressively understood better what to do, we chose action research as the most suitable approach. Within the action research cycles we used ethnography to understand what was happening with the students when they attempted to reflect. This paper reports on the first 4 semesters of research. We have developed and tested a reflection model and process that provide scaffolding for students beginning to reflect. We have observed three patterns in how our students applied this process in writing their reflections, which we will use to further understand what will help them learn to reflect. We have also identified two themes, namely, motivation and intervention, which highlight where the challenges lie in teaching and learning reflection.
Murray-Leslie, A., Ferguson, S. & Johnston, A. 2014, 'Colour Tuning', http://www.tuttocongressi.it/website/congresses/congressDetail2.aspx?idc..., Costume Colloquium IV: Colors in Fashion, Life Beyond Tourism, Florence, Italy.
Colour Tuning is practice-based research into the relationship between colour, dance, fashion and music, in the form of an app (iPad application) developed to be used in conjunction with a performance fashion or Live Art context. The app is used during a performance, encouraging the audience and performers to tune into each other, via the iPad, to compose acoustic compositions or 'Colour Music'. Colour Tuning enables environments and bodies in space to tune in and out of each other, using the iPad as a digital viewfinder through the app. The app player (e.g. an audience member) points the iPad in the direction he or she would like to compose music and create colour feedback, and selects colours (e.g. a collection of coloured clothing worn by dancers, models or actors on stage) on the iPad screen, which are then tracked. Each colour denotes a different sound space (each colour being mapped to an acoustic generative algorithm). Once the colours on the screen (colour fields denoting the bodies of the actors) start moving, the sounds change according to which colour/actor comes close to another colour/actor, and when the colours/actors overlap or make contact, new sounds are generated, like mixing coloured paint, meaning the performative composition of colour and sound is always in flux and never sounds the same. Colour Tuning addresses multiple themes of the conference, including 'Symbolism of colours in dress and fashion', by translating colour and fashion into metaphorical sounds and timbres in music to create a larger synaesthetic Live Art experience. Colour Tuning presents a participatory dialogue between audience and designer, through the interactive nature of the Colour Tuning app's mode of presentation, by inviting audience members to use the iPad app to tune into the colours they want to hear whilst watching the actors on stage or on the street. Colour Tuning presents a critical view on the history of colours in style and fashion, questioning the powerful...
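The colour-to-sound mapping described above can be illustrated with a small sketch (Python, hypothetical; the colour names, sound labels and overlap logic are placeholders, not the app's implementation): each tracked colour triggers its own generative sound, and an overlap of two colour regions triggers an additional 'mixed' sound, by analogy with mixing paint.

    # Hypothetical sketch of the colour-to-sound mapping idea only.
    SOUND_FOR_COLOUR = {"red": "drone_a", "blue": "drone_b", "yellow": "drone_c"}

    def sounds_to_play(visible_colours, overlapping_pairs):
        """visible_colours: colours currently tracked on screen;
        overlapping_pairs: frozensets of colours whose regions touch."""
        active = {SOUND_FOR_COLOUR[c] for c in visible_colours if c in SOUND_FOR_COLOUR}
        for pair in overlapping_pairs:
            a, b = sorted(pair)
            active.add("mix_%s_%s" % (a, b))   # overlap generates a new, mixed sound
        return active

    print(sounds_to_play({"red", "blue"}, {frozenset({"red", "blue"})}))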
Bown, O., Loke, L., Ferguson, S.J. & Reinhardt, D. 2015, 'Distributed Interactive Audio Devices: Creative strategies and audience responses to novel musical interaction scenarios', http://isea2015.org/publications/proceedings-of-the-21st-international-s..., International Symposium on Electronic Art, Vancouver, Canada.
With the rise of ubiquitous computing come new possibilities for experiencing audio, visual and tactile media in distributed and situated forms, disrupting modes of media experience that have been relatively stable for decades. We present the Distributed Interactive Audio Devices (DIADs) project, a set of experimental interventions to explore future ubiquitous computing design spaces in which electronic sound is presented as distributed, interactive and portable. The DIAD system is intended for creative sound and music performance and interaction, yet it does not conform to traditional concepts of musical performance, suggesting instead a fusion of music performance and other forms of collaborative digital interaction. We describe the thinking behind the project, the state of the DIAD system's technical development, and our experiences working with user interaction in lab-based and public performance scenarios.
Ferguson, S.J. 2015, 'Using audio feature extraction for interactive feature-based sonification of sound', https://smartech.gatech.edu/bitstream/handle/1853/54106/ICAD%20Proceedin..., International Conference on Auditory Display, Georgia Institute of Technology, Graz, Austria, pp. 66-72.
Ferguson, S., Schubert, E. & Stevens, C.J. 2014, 'Dynamic dance warping: Using dynamic time warping to compare dance movement performed under different conditions', Proceedings of the 1st International Workshop on Movement and Computing, ACM, Paris, France, pp. 94-99.
View/Download from: UTS OPUS
Ferguson, S.J., Johnston, A. & Murray-Leslie, A. 2014, 'Methodologies with fashion acoustics Live on Stage!', 14th International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 1-4.
View/Download from: UTS OPUS
Tan, C.T., Johnston, A., Bluff, A., Ferguson, S. & Ballard, K.J. 2014, 'Retrogaming as visual feedback for speech therapy', Proceedings SA '14 SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, ACM, Shenzhen Convention & Exhibition Center.
View/Download from: UTS OPUS or Publisher's site
A key problem in speech therapy is the motivation of patients in repetitive vocalization tasks. One important task is the vocalization of vowels. We present a novel solution by incorporating formant speech analysis into retro games to enable intrinsic motivation in performing the vocalization tasks in a fun and accessible manner. The visuals in the retro games also provide a simple and instantaneous feedback mechanism to the patients' vocalization performance. We developed an accurate and efficient formant recognition system to continuously recognize vowel vocalizations in real time. We implemented the system into two games, Speech Invaders and Yak-man, published on the iOS App Store in order to perform an initial public trial. We present the development to inform like-minded researchers who wish to incorporate real-time speech recognition in serious games.
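As a rough illustration of formant-based vowel recognition (a sketch only, assuming F1 and F2 have already been estimated, e.g. by LPC; the reference values below are approximate textbook averages, not those used in the published system), a vowel can be classified by finding the nearest reference formant pair:

    import math

    VOWEL_FORMANTS = {          # vowel -> (F1 Hz, F2 Hz), rough reference targets
        "i": (270, 2290),
        "u": (300, 870),
        "a": (730, 1090),
        "e": (530, 1840),
        "o": (570, 840),
    }

    def classify_vowel(f1, f2):
        """Return the reference vowel whose (F1, F2) pair is nearest."""
        def dist(ref):
            rf1, rf2 = ref
            return math.hypot(f1 - rf1, f2 - rf2)
        return min(VOWEL_FORMANTS, key=lambda v: dist(VOWEL_FORMANTS[v]))

    print(classify_vowel(700, 1100))   # -> "a"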
Tan, C.T., Johnston, A.J., Bluff, A., Ferguson, S. & Ballard, K.J. 2014, 'Speech invaders & yak-man: retrogames for speech therapy', Proceedings SA '14 SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, Shenzhen Convention & Exhibition Center.
View/Download from: UTS OPUS or Publisher's site
Speech therapy is used for the treatment of speech disorders and commonly involves a patient attending clinical sessions with a speech pathologist, as well as performing prescribed practice exercises at home [Ruggero et al. 2012]. Clinical sessions are very effective -- the speech pathologist can carefully guide and monitor the patient's speech exercises -- but they are also costly and time-consuming. However, the more inexpensive and convenient home practice component is often not as effective, as it is hard to maintain sufficient motivation to perform the rigid repetitive exercises.
Ferguson, S. 2013, 'Sonifying every day: Activating everyday interactions for ambient sonification systems', Website Proceedings of the 2013 International Conference on Auditory Display, International Conference on Auditory Display, Lodz University of Technology Press, Lodz, Poland, pp. 77-84.
View/Download from: UTS OPUS
Sonifying every day: Activating everyday interactions for ambient sonification systems
Ferguson, S., Schubert, E., Lee, D., Cabrera, D. & McPherson, G.E. 2013, 'A Comparison between Continuous Categorical Emotion Responses and Stimulus Loudness Parameters', 4th International Conference on Information, Intelligence, Systems and Applications, ISSA 2013, IEEE, Piraeus, Greece, pp. 118-123.
View/Download from: UTS OPUS or Publisher's site
This paper investigates the use of psychoacoustic loudness analysis as a method for determining the likely emotional responses of listeners to musical excerpts. 19 excerpts of music were presented to 86 participants (7 randomly chosen excerpts per participant) who were asked to rate the emotion category using the emotion-clock-face continuous response interface. The same excerpts were analysed with a loudness model, and time series results were summarised as both loudness median and standard deviation. Comparisons indicate that the median and standard deviation of loudness play an important role in determining the emotion category responses.
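The summary step is straightforward to sketch (the loudness values themselves would come from a psychoacoustic loudness model, which is not shown here; the sample values are illustrative):

    import numpy as np

    def summarise_loudness(loudness_series):
        """Reduce a loudness time series (e.g. sones) to median and standard deviation."""
        x = np.asarray(loudness_series, dtype=float)
        return {"median": float(np.median(x)), "std": float(np.std(x))}

    print(summarise_loudness([4.2, 5.1, 6.3, 5.8, 4.9]))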
Ferguson, S., Johnston, A.J. & Martin, A.G. 2013, 'A corpus-based method for controlling guitar feedback', Proceedings of the International Conference on New Interfaces for Musical Expression, Korea Advanced Institute of Science and Technology, Daejeon & Seoul, Korea Republic, pp. 541-546.
View/Download from: UTS OPUS
The use of feedback created by electric guitars and amplifiers is problematic in musical settings. For example, it is difficult for a performer to accurately obtain specific pitch and loudness qualities. This is due to the complex relationship between these quantities and other variables such as the string being fretted and the positions and orientations of the guitar and amplifier. This research investigates corpus-based methods for controlling the level and pitch of the feedback produced by a guitar and amplifier. A guitar-amplifier feedback system was built in which the feedback is manipulated using (i) a simple automatic gain control system, and (ii) a band-pass filter placed in the signal path. A corpus of sounds was created by recording the sound produced for various combinations of the parameters controlling these two components. Each sound in the corpus was analysed so that the control parameter values required to obtain particular sound qualities can be recalled in the manner of concatenative sound synthesis. As a demonstration, a recorded musical target phrase is recreated on the feedback system.
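A minimal sketch of the corpus-lookup idea, with illustrative numbers and field names (not the system's actual parameters): each corpus entry stores the control settings that produced a recorded feedback sound together with that sound's analysed pitch and level, and the settings of the entry nearest a target sound are recalled.

    import math

    corpus = [
        # (analysed_pitch_hz, analysed_level_db, gain, bandpass_centre_hz)
        (196.0, -12.0, 0.60, 200.0),
        (246.9, -10.0, 0.70, 250.0),
        (329.6,  -8.0, 0.85, 330.0),
    ]

    def recall_parameters(target_pitch, target_level, pitch_weight=1.0, level_weight=5.0):
        """Return (gain, centre) of the corpus entry nearest the target sound."""
        def cost(entry):
            pitch, level, _, _ = entry
            # weight pitch distance in octaves against level distance in dB
            return pitch_weight * abs(math.log2(pitch / target_pitch)) + \
                   level_weight * abs(level - target_level) / 100.0
        _, _, gain, centre = min(corpus, key=cost)
        return gain, centre

    print(recall_parameters(330.0, -9.0))   # -> (0.85, 330.0)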
Tan, C., Johnston, A.J., Ballard, K.J., Ferguson, S. & Perera-Schulz, D. 2013, 'sPeAK-MAN: towards popular gameplay for speech therapy', Proceedings of 9th Australasian Conference on Interactive Entertainment IE'13, Australasian Conference on Interactive Entertainment, ACM, Melbourne, VIC, Australia, pp. 1-4.
View/Download from: UTS OPUS or Publisher's site
Current speech therapy treatments are not easily accessible to the general public due to cost and demand. Therapy sessions are also laborious and maintaining motivation of patients is hard. We propose using popular games and speech recognition technology for speech therapy in an individualised and accessible manner. sPeAK-MAN is a Pac-Man-like game with a core gameplay mechanic that incorporates vocalisation of words generated from a pool commonly used in clinical speech therapy sessions. Other than improving engagement, sPeAK-MAN aims to provide real-time feedback on the vocalisation performance of patients. It also serves as an initial prototype to demonstrate the possibilities of using familiar popular gameplay (instead of building one from scratch) for rehabilitation purposes.
Ferguson, S., Nagai, Y., Hewett, T., Yi-Luen Do, E., Dow, S., Ox, J., Smith, S., Nishimoto, K. & Tan, C. 2013, 'Proceedings of the 9th ACM Conference on Creativity & Cognition', Proceedings of the 9th ACM Conference on Creativity & Cognition, ACM, Sydney, NSW, Australia.
Ferguson, S., Johnston, A.J., Ballard, K.J., Tan, C. & Perera-Schulz, D. 2012, 'Visual feedback of acoustic data for speech therapy: model and design parameters', Proceedings of the 7th Audio Mostly Conference: A Conference on Interaction with Sound, Audio Mostly, ACM, Corfu, Greece, pp. 135-140.
View/Download from: UTS OPUS
Feedback, usually of a verbal nature, is important for speech therapy sessions. Some disadvantages exist however with traditional methods of speech therapy, and visual feedback of acoustic data is a useful alternative that can be used to complement typical clinical sessions. Visual feedback has been investigated before, and in this paper we propose several new prototypes. From these prototypes we develop an iterative model of analysing the design of feedback systems by examining the feedback process. From this iterative model, we then extract methods to inform design of visual feedback systems for speech therapy.
Schubert, E., Ferguson, S., Farrar, N., Taylor, D. & McPherson, G.E. 2012, 'Continuous Response to Music using Discrete Emotion Faces', Proceedings of the 9th International Symposium on Computer Music Modelling and Retrieval, 9th International Symposium on Computer Music Modelling and Retrieval, Queen Mary University of London, London, UK, pp. 3-19.
View/Download from: UTS OPUS
An interface based on facial expressions in simple graphics, aligned in a clock-like distribution, was developed with the aim of allowing participants to quickly and easily rate emotions in music continuously. We developed the interface and tested it using six extracts of music, one targeting each of the six faces: 'Excited' (at 1 o'clock), 'Happy' (3), 'Calm' (5), 'Sad' (7), 'Scared' (9) and 'Angry' (11). 30 participants rated the emotion expressed by these excerpts on our 'emotion-face-clock'. By demonstrating how continuous category selections (votes) changed over time, we were able to show that (1) more than one emotion-face could be expressed by music at the same time and (2) the emotion face that best portrayed the emotion the music conveyed could change over time, and that the change could be attributed to changes in musical structure.
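The 'votes over time' analysis can be sketched as a simple aggregation (illustrative data and labels only, not the study's responses): at each time step, count how many participants currently have each emotion face selected.

    from collections import Counter

    FACES = ["Excited", "Happy", "Calm", "Sad", "Scared", "Angry"]

    def votes_over_time(responses):
        """responses: list of per-timestep lists of currently selected face labels."""
        return [Counter(step) for step in responses]

    timeline = [["Happy", "Happy", "Calm"], ["Happy", "Sad", "Sad"]]
    for t, counts in enumerate(votes_over_time(timeline)):
        print(t, {face: counts.get(face, 0) for face in FACES})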
Taylor, D., Schubert, E., Ferguson, S. & McPherson, G.E. 2012, 'The Role of Musical Features in the Perception of Initial Emotion', Proceedings of the 9th International Symposium on Computer Music Modelling and Retrieval, CMMR 2012, Queen Mary University of London, London, pp. 136-143.
View/Download from: UTS OPUS
170 participants were played short excerpts of orchestral music and instructed to move a mouse cursor as quickly as possible to one of six faces that best corresponded to the emotion they thought the music expressed. Excerpts were analysed and the musical cues coded. Relationships between the number of cues and participants' response times were investigated and reported. No relationship between the number of cues available to the listener and the speed of response was found. Findings suggest that the initial response to ecologically plausible musical excerpts is quite complex, and requires further investigation to provide emotion-retrieval models of music with psychologically driven data.
Johnston, A.J., Beilharz, K.A., Chen, Y. & Ferguson, S. 2010, 'Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010)', Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010), University of Technology Sydney, Sydney, Australia.
Beilharz, K.A. & Ferguson, S. 2009, 'An Interface and Framework Design for interactive Aesthetic Sonification', Proceedings of the 15th International Conference on Auditory Display, International Conference on Auditory Display, Re:New Digital Arts Forum, Copenhagen, Denmark, pp. 1-8.
View/Download from: UTS OPUS
This paper describes the interface design of our AeSon (Aesthetic Sonification) Toolkit motivated by user-centred customisation of the aesthetic representation and scope of the data. The interface design is developed from 3 premises that distinguish our approach from more ubiquitous sonification methodologies. Firstly, we prioritise interaction both from the perspective of changing scale, scope and presentation of the data and the user's ability to reconfigure spatial panning, modality, pitch distribution, critical thresholds and granularity of data examined. The user, for the majority of parameters, determines their own listening experience for real-time data sonification, even to the extent that the interface can be used for live data-driven performance, as well as traditional information analysis and examination. Secondly, we have explored the theories of Tufte, Fry and other visualization and information design experts to find ways in which principles that are successful in the field of information visualization may be translated to the domain of sonification. Thirdly, we prioritise aesthetic variables and controls in the interface, derived from musical practice, aesthetics in information design and responses to experimental user evaluations to inform the design of the sounds and display. In addition to using notions of meter, beat, key or modality and emphasis drawn from music, we draw on our experiments that evaluated the effects of spatial separation in multivariate data presentations.
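As a minimal sketch of the kind of user-configurable mapping the toolkit supports (an assumption for illustration, not the AeSon implementation), data values can be scaled onto a pitch range chosen by the user:

    import numpy as np

    def map_to_pitch(data, low_midi=48, high_midi=84):
        """Linearly map data values onto MIDI note numbers within a user-chosen range."""
        x = np.asarray(data, dtype=float)
        norm = (x - x.min()) / (x.max() - x.min() + 1e-12)
        return np.round(low_midi + norm * (high_midi - low_midi)).astype(int)

    print(map_to_pitch([1.0, 3.5, 2.2, 9.0]))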
Ferguson, S. & Beilharz, K.A. 2009, 'An Interface for Live Interactive Sonification', Proceedings of New Interfaces for Musical Expression, New Interfaces for Musical Expression, NIME, Pittsburgh PA, pp. 35-36.
View/Download from: UTS OPUS
Ferguson, S.J. & Cabrera, D. 2009, 'Auditory Spectral Summarisation for Audio Signals with Musical Applications', 10th International Society for Music Information Retrieval Conference, International Symposium for Music Information Retrieval, International Society on Music Information Retrieval, Kobe, Japan, pp. 567-572.
View/Download from: UTS OPUS
Methods for spectral analysis of audio signals and their graphical display are widespread. However, assessing music and audio in the visual domain involves a number of challenges in the translation of auditory images into mental or symbolically represented concepts. This paper presents a spectral analysis method that exists entirely in the auditory domain, and results in an auditory presentation of a spectrum. It aims to strip a segment of audio signal of its temporal content, resulting in a quasi-stationary signal that possesses a similar spectrum to the original signal. The method is extended and applied for the purpose of music summarisation.
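One way to realise this idea can be sketched as follows (an assumption for illustration, not the paper's exact algorithm): average the short-time magnitude spectra of the signal, then resynthesise a quasi-stationary sound with that average spectrum and random phase, discarding the temporal structure.

    import numpy as np

    def spectral_summary(signal, frame=2048, hop=1024):
        """Long-term average magnitude spectrum from overlapping windowed frames."""
        frames = [signal[i:i + frame] * np.hanning(frame)
                  for i in range(0, len(signal) - frame, hop)]
        mags = np.abs(np.fft.rfft(np.asarray(frames), axis=1))
        return mags.mean(axis=0)

    def resynthesise(avg_mag, frame=2048, seconds=2.0, sr=44100):
        """Overlap-add grains sharing the average spectrum but with random phase."""
        out = np.zeros(int(seconds * sr))
        for start in range(0, len(out) - frame, frame // 2):
            phase = np.exp(1j * np.random.uniform(0, 2 * np.pi, len(avg_mag)))
            grain = np.fft.irfft(avg_mag * phase, n=frame) * np.hanning(frame)
            out[start:start + frame] += grain
        return out / (np.max(np.abs(out)) + 1e-12)

    noise = np.random.randn(44100)
    print(resynthesise(spectral_summary(noise)).shape)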
Ferguson, S., Cabrera, D., Beilharz, K.A. & Song, H. 2006, 'Using Psychoacoustical Models for Information Sonification', Proceedings of 12th International Conference on Auditory Display ICAD 2006, International Conference on Auditory Display, ICAD, London, UK, pp. 113-120.
View/Download from: UTS OPUS
Beilharz, K.A., Jakovich, J. & Ferguson, S. 2006, 'Hyper-shaku [Border-crossing]: Towards the multi-modal gesture-controlled hyper-instrument', NIME06: Sixth International Conference on New Interfaces for Musical Expression 2006, International Conference on New Interfaces for Musical Expression, Ircam - Centre Pompidou, Paris, France, pp. 352-357.
View/Download from: UTS OPUS

Journal articles

Ferguson, S., Kenny, D.T., Mitchell, H.F., Ryan, M. & Cabrera, D. 2013, 'Change in messa di voce characteristics during 3 years of classical singing training at the tertiary level', Journal of Voice, vol. 27, no. 4, pp. 35-48.
View/Download from: UTS OPUS or Publisher's site
A 3-year longitudinal study was conducted to investigate changes in vocal quality as a result of singing training at a tertiary level conservatorium in Australia. Singers performed a messa di voce (MDV) at intervals of 6 months over the 3-year period of training. The study investigated the evolving acoustic features of the singers' voices exhibited during the MDV, including sound pressure level (SPL), short-term energy ratio (STER), duration, and vibrato parameters of the fundamental frequency (F0), SPL, and STER. The maximum SPL exhibited a marginal systematic increase over the training period, but the maximum STER did not systematically change. F0 vibrato extent increased significantly, whereas the extent of SPL and STER vibrato did not change significantly.
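As a hedged sketch of one way to estimate F0 vibrato extent (not the study's measurement procedure; the window length and synthetic test signal are illustrative), the slow F0 trend can be removed with a moving average and the residual expressed in cents:

    import numpy as np

    def vibrato_extent_cents(f0_hz, fps=100, smooth_s=0.25):
        """Peak-to-peak F0 vibrato extent, in cents, after removing the slow trend."""
        f0 = np.asarray(f0_hz, dtype=float)
        win = max(1, int(smooth_s * fps))
        trend = np.convolve(f0, np.ones(win) / win, mode="same")
        cents = 1200.0 * np.log2(f0 / trend)
        cents = cents[win:-win]                     # drop smoothing edge effects
        return float(cents.max() - cents.min())

    t = np.arange(0, 2, 0.01)
    f0 = 220.0 * 2 ** (0.5 * np.sin(2 * np.pi * 5.5 * t) / 12)  # ~±50 cent vibrato
    print(round(vibrato_extent_cents(f0), 1))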
Schubert, E., Ferguson, S., Farrar, N., Taylor, D. & McPherson, G.E. 2013, 'The Six Emotion-Face Clock as a Tool for Continuously Rating Discrete Emotional Responses to Music', Lecture Notes in Computer Science, vol. 7900, pp. 1-18.
View/Download from: UTS OPUS or Publisher's site
Recent instruments measuring continuous self-reported emotion responses to music have tended to use dimensional rating scale models of emotion such as valence (happy to sad). However, numerous retrospective studies of emotion in music use checklist style responses, usually in the form of emotion words (such as happy, angry, sad) or facial expressions. A response interface based on six simple sketch style emotion faces aligned into a clock-like distribution was developed with the aim of allowing participants to quickly and easily rate emotions in music continuously as the music unfolded. We tested the interface using six extracts of music, one targeting each of the six faces: 'Excited' (at 1 o'clock), 'Happy' (3), 'Calm' (5), 'Sad' (7), 'Scared' (9) and 'Angry' (11). 30 participants rated the emotion expressed by these excerpts on our 'emotion-face-clock'. By demonstrating how continuous category selections (votes) changed over time, we were able to show that (1) more than one emotion-face could be expressed by music at the same time, (2) the emotion face that best portrayed the emotion the music conveyed could change over time, and (3) the change could be attributed to changes in musical structure. Implications for research on orientation time and mixed emotions are discussed.
Ferguson, S., Beilharz, K.A. & Calo, C.A. 2012, 'Navigation of interactive sonifications and visualisations of time-series data using multi-touch computing', Journal on Multimodal User Interfaces, vol. 5, no. 3-4, pp. 97-109.
View/Download from: UTS OPUS or Publisher's site
This paper discusses interaction design for interactive sonification and visualisation of data in multi-touch contexts. Interaction design for data analysis is becoming increasingly important as data becomes more openly available. We discuss how navigation issues such as zooming, selection, arrangement and playback of data relate to both the auditory and visual modality in different ways, and how they may be linked through the modality of touch and gestural interaction. For this purpose we introduce a user interface for exploring and interacting with representations of time-series data simultaneously in both the visual and auditory modalities.
Ferguson, S., Schubert, E. & Dean, R. 2011, 'Continuous subjective loudness responses to reversals and inversions of a sound recording of an orchestral excerpt', Musicae Scientiae, vol. 15, no. 3, pp. 387-401.
View/Download from: Publisher's site
Twenty-four respondents continuously rated the loudness of the first 65 seconds of a Dvorak Slavonic Dance, which was known to vary considerably in loudness. They also rated the same excerpt when the sound file was digitally treated so that (1) the sound pressure level (SPL) was inverted or (2) it was temporally reversed or (3) both 1 and 2. Specifically we wanted to see if acoustic intensity was processed into the percept of loudness primarily using a bottom-up (indifferent to timbral environment and thematic cues) or top-down style (where musical context, such as instrument identity and musical expectation affects the loudness rating). Comparing the different versions (conditions) allowed us to ascertain which style they were likely to be using. A single, six-second region was located as being differentiated across two conditions, where loudness seemed to be increased due to expectation of the instrument and orchestral texture, despite the lower SPL. We named this effect an auditory loudness stroop. A second region was differentiated between the two conditions, but its explanation appears to involve two factors, auditory looming perception and the reversal of stimulus note ramps. The overall conclusion was that the predominant processing style for loudness rating was bottom-up. Implications for further research and application to models of loudness are discussed.
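The two stimulus treatments can be sketched as follows (an assumption about how such processing might be implemented, not the authors' actual signal chain; the frame size and dB mirroring rule are illustrative): temporal reversal simply reverses the samples, and level inversion mirrors the short-term RMS envelope, in dB, about its mean.

    import numpy as np

    def reverse(signal):
        """Temporal reversal of the sample array."""
        return signal[::-1]

    def invert_level(signal, frame=4410):
        """Mirror each frame's level (in dB) about the mean level of the whole signal."""
        out = np.copy(signal).astype(float)
        rms = np.array([np.sqrt(np.mean(out[i:i + frame] ** 2)) + 1e-9
                        for i in range(0, len(out), frame)])
        db = 20 * np.log10(rms)
        target_db = 2 * db.mean() - db              # mirror about the mean level
        for k, i in enumerate(range(0, len(out), frame)):
            out[i:i + frame] *= 10 ** ((target_db[k] - db[k]) / 20)
        return out

    x = np.random.randn(44100) * np.linspace(0.1, 1.0, 44100)   # rising level
    y = invert_level(x)
    print(round(float(np.sqrt(np.mean(y[:4410] ** 2))), 3),
          round(float(np.sqrt(np.mean(y[-4410:] ** 2))), 3))    # level now falls instead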
Ferguson, S., Kenny, D.T. & Cabrera, D. 2010, 'Effects of training on time-varying spectral energy and sound pressure level in nine male classical singers', Journal of Voice, vol. 24, no. 1, pp. 39-46.
View/Download from: UTS OPUS or Publisher's site
Ferguson, S.J. & Cabrera, D. 2005, 'Vertical Localization of Sound from Multiway Loudspeakers', Journal of the Audio Engineering Society, vol. 53, no. 5, pp. 163-173.

Non-traditional outputs

Haeusler, M., Beilharz, K.A., Ferguson, S. & Barker, T., 'Polymedia Pixel', Media Architecture Biennale 2010, Media Architecture Institute, Kuenstlerhaus Vienna.