Group member: Stig Turner [sttur14@student.sdu.dk]

Introduction
This blog post is part of a portfolio in the course OK-IOS-U1 (autumn 2015). The blog is about the process of designing and implementing a prototype audiometry application for iOS.
The idea of an audiometry application comes from Jesper Hvass Schmidt, who is a doctor at Odense University Hospital.

Welfare is getting more and more expensive due to a growing number of patients.
It is therefore interesting to create an audiometry application that is controlled by the test person alone. It may be able to make audiometry tests easier and cheaper.
The application is not necessarily meant to replace a test performed by a healthcare professional, but could be used to evaluate whether such a test is even necessary.

Common platform
Before you start reading, it is preferred that you have some experience with Xcode, AudioKit and Swift. However, if you know a bit about the things listed below, you will be fine:

  • Programming in Swift (if you know a language like C, C#, C++ or Java, you will be fine).
  • UIs (just a little about e.g. HTML, CSS, WPF, Android UI or Qt)
  • Some knowledge about audio

Audiometry in a nutshell
An audiometry test is a test that is able to estimate how good a person's hearing is. The test is performed by playing a frequency at multiple amplitudes (or volumes). Each time the frequency is played at a new amplitude, the test person answers whether he/she heard the sound. These answers can then be used to find where the threshold of the person's hearing is for the tested frequency. The test is then repeated for multiple frequencies, giving a doctor a clear idea of what a person can and cannot hear.

Problem statement
There will be a couple of challenges in creating an audiometry application. One is to create a simple and easy application that anybody can use, so that as many test persons as possible are able to complete the test by themselves. Another challenge is to implement the functionality to perform an audiometry test. This means that it should be able to play specific frequencies at specific amplitudes in stereo. Another important issue is that when audiometry tests are performed, patients sometimes lie. Not necessarily by choice: sometimes they are convinced that they hear a sound even though it is not there, simply because they are so concentrated on the listening task. This has to be taken into consideration when designing the application. The main goals are:

  • Simple user-interface
  • Basis for implementing a full test
  • Compensate for the fact that users don't always answer truthfully

However, some relevant parts have been left out to limit the scope of the project:

  • The prototype will not use the correct dB scale for controlling the amplitude
  • The test frequencies used will be more or less randomly selected, but easy to change
  • The prototype will not be able to store data. However, the results will be stored in a single array, making future implementation of data storage easier

A bit about audiometry
When doing a hearing test like this, the goal is to find the threshold values of a patient's hearing. A threshold is the amplitude of a specific frequency at which the patient is just able to hear the sound. The test frequencies are within what a person with normal hearing should be able to hear. So when a test is done, the result is simply a set of frequency/amplitude pairs that the patient is able to hear. A doctor can then use these results to judge whether the patient has normal hearing.

A simple scenario
A patient is at the doctor's because she/he is concerned about her/his hearing. While the patient is in the waiting room, waiting for his/her turn, the secretary hands the patient an iPad/iPhone with a set of earphones. While the patient waits, he/she can complete the hearing test. When it is the patient's turn to see the doctor, the doctor can see the relevant test results right away and judge whether further tests are needed.

Design Brainstorm and Mockups
Before building a digital prototype of the user interface, it is always a good idea to create a mockup, e.g. using the POP app. POP is a simple way of making paper drawings of an application act like an application. This makes the design process a lot easier, since it is a lot faster to change a detail of a drawing than of a digital UI.
A POP prototype was created with the idea of a simple interface, keeping a minimum of information on the screen at all times. The layout was designed with some guidance from Madeleine Friis at Aarhus University, who has a bachelor's degree with a focus on visual culture.

The storyboard of the app can be seen in the figure below:

  1. Enter Name:
    A simple view, asking the user to enter their name.
  2. Introduction Video:
    A view with a short video showing how to use the test.
  3. Initial sound level test:
    A view with a test for estimating a rough threshold for the current frequency. (This makes the step size test a lot faster.)
  4. Step size test:
    A view with a test that plays a sound and then asks the user whether they heard it. This test is repeated until the amplitude where the user is just able to hear the sound (the threshold) is found.
  5. Test done and test results:
    A simple view telling that the test is done, followed by a view with the test results.

Steps 3 and 4 repeat for all test frequencies.
The logic for finding the threshold is explained later (see the Test Model description under Object diagram).

Two mockups of step 3 in the storyboard were created in POP: one with a trackbar and one with a big round button. The idea of the trackbar is to make the user move it until he/she is just able to hear a sound, and then use this value as an initial guess of the threshold. The other idea is a big button that the user holds down. When the button is pressed, the app starts playing a sound at a very low amplitude and then slowly increases it until the user releases the button. The amplitude present when the button is released can then be used as an initial guess of the threshold.

The paper drawings of the app can be seen below, and the interactive POP prototype can be found here: popapp

A bit of feedback
To ensure that the design was moving in the right direction, it was presented to Madeleine Friis and a couple of fellow students. They were all satisfied with the simple design. What was more interesting was that they were not fans of the trackbar compared to the hold-button idea. The info button in the top left corner was also added on the basis of the feedback. The info button's function is to show the introduction video again.

Building a navigational digital prototype
After the mockup feedback, the process of creating a digital prototype of the UI could begin. As the design is created with the idea of as little information on the screen as possible at a time, the UI is fairly easy to build. The layout can be set to center in the middle of the screen, and the distances between the different UI elements can then simply be constrained. This will also make e.g. future work on creating an iPad version of the UI easier.
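
Such constraints are typically set up in the storyboard editor, but to illustrate the idea, the same kind of centering could also be expressed in code with NSLayoutAnchor (iOS 9). This is only a sketch; the view names playButton and infoLabel are hypothetical:

playButton.translatesAutoresizingMaskIntoConstraints = false
infoLabel.translatesAutoresizingMaskIntoConstraints = false
//Center the button in the middle of the screen
playButton.centerXAnchor.constraintEqualToAnchor(view.centerXAnchor).active = true
playButton.centerYAnchor.constraintEqualToAnchor(view.centerYAnchor).active = true
//Constrain the distance between the label and the button
infoLabel.bottomAnchor.constraintEqualToAnchor(playButton.topAnchor, constant: -20).active = true
infoLabel.centerXAnchor.constraintEqualToAnchor(view.centerXAnchor).active = true
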
To make the prototype dynamic, all buttons were connected to segues, creating a storyboard identical to the mockup storyboard. In the "Swift universe", a segue is a connection between views that changes the presented view and controller. A segue can also be started from code, as shown below.
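
Given that a segue has been given an identifier in the storyboard, starting it from code takes a single call (the identifier "toIntroVideo" is just an example):

//Start the segue with the identifier "toIntroVideo" from code
self.performSegueWithIdentifier("toIntroVideo", sender: self)
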

Object diagram
The object diagram of the application can be seen below, followed by short descriptions of each object.

Enter name controller class:
This class gets the name input from the user and creates a new Test Model with the given name. This ensures that a new Test Model is created every time a new test begins.
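
As a sketch of what this handover could look like in code (the class name InitialSoundLevelController, the property testModel and the outlet nameField are hypothetical, not the actual prototype code):

//Create a fresh Test Model and pass it on when the segue to the next view fires
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
    if let destination = segue.destinationViewController as? InitialSoundLevelController {
        destination.testModel = TestModel(name: nameField.text ?? "")
    }
}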

Initial Sound Level test controller class:
Responsible for detecting when the Initial Sound Level view's button is pressed/released. Calls the start and stop functions in the Test Model controlling the sound.

Step size sound level test controller class:
Calls a function in the Test Model when the user presses play, and calls a function with the user's answer as argument.

Test result controller class:
Gets the results from the Test Model and presents them in the view.

Test Model class:
Holds the logic for finding the threshold, and contains the results. It is also able to start playing a tone at a low amplitude and slowly increase it until a stop function is called. At this point the Test Model saves the amplitude value as the initial threshold guess.
The class reads the test frequencies from an array, making it easy to change the frequencies and the number of them.
The algorithm the Test Model class uses to find the threshold amplitude is shown below:

  1. Find the initial threshold guess for each ear as described earlier
  2. Go a large step down in amplitude (both ears)
  3. Play the current test frequency in a random ear
    Ask if the user can hear the sound
    If yes: Save the amplitude value and go a large step down in amplitude
    If no: Go a step up in amplitude
    If wrong answer (wrong ear): Go a large step down
  4. Repeat until three "yes" answers have been given for each ear at the current test frequency
  5. Calculate the mean of the saved amplitude values and save it as the result for each ear
  6. Repeat for all test frequencies
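
A minimal Swift sketch of this staircase logic for a single ear and a single frequency could look like the following. The names, step sizes and frequencies are hypothetical examples, and the real Test Model also handles the random ear selection and the wrong-ear answers:

let testFrequencies: [Float] = [250, 500, 1000, 2000, 4000] //easy to change
let largeStep: Float = 0.05
let smallStep: Float = 0.01

func findThreshold(initialGuess: Float, userHeardTone: (Float) -> Bool) -> Float {
    var amplitude = max(initialGuess - largeStep, 0) //start a large step below the guess
    var heardValues: [Float] = []
    while heardValues.count < 3 { //stop after three "yes" answers
        if userHeardTone(amplitude) {
            heardValues.append(amplitude)             //save the heard amplitude...
            amplitude = max(amplitude - largeStep, 0) //...and go a large step down
        } else {
            amplitude += smallStep //go a step up
        }
    }
    //the threshold estimate is the mean of the saved amplitude values
    return heardValues.reduce(0, combine: +) / Float(heardValues.count)
}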

Tone Player class:
Creates the AudioKit classes and plays the sound. (A code snippet of the class can be found later in this blog.)

Code snippets:
The following are some interesting and very reusable code snippets from the application, using frameworks for iOS.

Adding a video to the project
Adding a video to a project can be done by simply dragging the video into the file explorer shown in the left panel of Xcode.

[Screenshot] Xcode: Video file located in the left panel

Use of AVKit to play the added video and present it modally.
Presenting a view modally means that the AVPlayer pops up on top of the current view, and returns to that view when the "pop up" is dismissed. This keeps all data in the previous view saved.

private func playVideo() throws {
        //"Locate" the video file... it is located at the root of the project, so the name of the video file can be used directly
        guard let path = NSBundle.mainBundle().pathForResource("Instruction_Video", ofType:"mp4") else {
            throw AppError.InvalidResource("video", "mp4") //file type matches the resource lookup above
        }
        //Load the file as an item and use it to create the AVPlayer
        let item = AVPlayerItem(URL: NSURL(fileURLWithPath: path))
        let player = AVPlayer(playerItem: item)
        //add control buttons to the AVPlayer
        let playerController = AVPlayerViewController()
        playerController.player = player
        //Add a NSNotification, to call function when the video ends
        NSNotificationCenter.defaultCenter().addObserver(self, selector: "playerDidFinishPlaying:", name: AVPlayerItemDidPlayToEndTimeNotification, object: item)
        //present the Modal view (AVPlayer)
        self.presentViewController(playerController, animated: true) {
        }
        //start the video
        player.play()
    }
    //function called when video ends
    func playerDidFinishPlaying(note: NSNotification) {
        self.dismissViewControllerAnimated(true,completion: {}) //dismiss the modal view
    }
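
Since playVideo() throws, the caller has to handle a possible error, e.g.:

do {
    try playVideo()
} catch {
    print("Could not play the instruction video: \(error)")
}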

Use of AudioKit for stereo audio (Tone Player class)

import Foundation
import AVKit

class TonePlayer {
    //declare the instrument and its properties (later used to change the properties of the instrument)
    let instrument = AKInstrument()
    let leftAmplitude = AKInstrumentProperty(value: 0.25, minimum: 0.0, maximum: 1.0)
    let leftFrequency = AKInstrumentProperty(value: 1000, minimum: 0, maximum: 10000)
    let rightAmplitude = AKInstrumentProperty(value: 0.25, minimum: 0.0, maximum: 1.0)
    let rightFrequency = AKInstrumentProperty(value: 1000, minimum: 0, maximum: 10000)

    var playing = false

    //"Constructor"
    init(){
        //add the properties to the instrument
        instrument.addProperty(leftAmplitude)
        instrument.addProperty(leftFrequency)
        instrument.addProperty(rightAmplitude)
        instrument.addProperty(rightFrequency)

        //declare and set up the oscillators for the instrument
        let leftOscillator = AKOscillator(waveform: AKTable.standardSineWave(), frequency: leftFrequency, amplitude: leftAmplitude)
        let rightOscillator = AKOscillator(waveform: AKTable.standardSineWave(), frequency: rightFrequency, amplitude: rightAmplitude)
        //declare the stereo audio for the instrument
        let stereoAudio = AKStereoAudio(leftAudio: leftOscillator, rightAudio: rightOscillator)

        //add the stereo audio output to the instrument, and add the instrument to the AKOrchestra
        instrument.setStereoAudioOutput(stereoAudio)
        AKOrchestra.addInstrument(instrument)
    }

    //PlaySound function
    func playSound(amp: Float, freq: Float, left: Bool){
        //Pick a side to play the audio in
        if left {
            rightAmplitude.value = 0.0 //mute the other side
            leftAmplitude.value = amp  //set the amplitude of the oscillator
            leftFrequency.value = freq //set the frequency of the oscillator
        }
        else {
            leftAmplitude.value = 0.0
            rightAmplitude.value = amp
            rightFrequency.value = freq
        }
        if !playing{ //if the instrument is not playing, then start it
            instrument.play()
            playing = true
        }
    }
    
    //stop the instrument
    func stopSound()
    {
        instrument.stop()
        playing = false
    }
    
}
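
Using the class is then straightforward (the values below are just examples):

let tonePlayer = TonePlayer()
//Play a 1000 Hz tone at a quarter of the maximum amplitude in the left ear
tonePlayer.playSound(0.25, freq: 1000, left: true)
//...and stop the tone again
tonePlayer.stopSound()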

Use of NSTimer to call a function periodically.
The timer is used to slowly increase the amplitude in the hold-button test.

//Stored property holding the timer (assumed declared in the controller)
var timer = NSTimer()

//Start a periodic timer that calls "timerfunc" every 0.05 seconds
func startTimer(){
    timer = NSTimer.scheduledTimerWithTimeInterval(0.05, target: self, selector: Selector("timerfunc"), userInfo: nil, repeats: true)
}

//stop the periodic timer
func stopTimer(){
    timer.invalidate()
}
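
The timerfunc itself is not shown here. A sketch of how it could slowly raise the amplitude during the hold-button test (the properties currentAmplitude, currentFrequency, testingLeftEar and tonePlayer are hypothetical) could be:

//Called every 0.05 seconds while the hold-button is pressed
func timerfunc() {
    currentAmplitude += 0.002 //small increment per tick (example value)
    tonePlayer.playSound(currentAmplitude, freq: currentFrequency, left: testingLeftEar)
}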

Frameworks used

  • AudioKit: “Powerful Audio synthesis, processing, and analysis, without the steep learning curve” – audiokit.io
    Instructions on how to set up and use AudioKit can be found at http://audiokit.io/
  • AVKit: Default framework in Xcode, used for e.g. video playback.

Demo:
Before watching the demo video, please note that:
- The test has been modified to only test with a 640 Hz tone (to limit the length of the demo); the result is displayed as the result for 100 Hz.
- The step sizes of the amplitude have been modified, and the change in amplitude is therefore not accurate. (The values in the real app are optimized for earphones, which require a significantly lower amplitude to emit sound.)
- There is no instruction video; instead a demo video of a Lego helicopter is displayed.

The complete code can be downloaded from GitHub.

Discussion
The prototype has a lot of good qualities, however some things still need to be improved before the application can be taken into use. An algorithm for converting the amplitude values into dB is needed before the results can be used for diagnostics. Such an algorithm cannot be designed without a set of earphones with a known frequency response. A better choice of frequencies would also be needed, and would require more knowledge about audiometry tests than I have. However, these are expected issues and are not among the goals of the prototype. Some improvements to the logic for finding the threshold could also be made, e.g. when the listener hears the sound at the very first step of the step size sound level test. In this situation, the first heard amplitude value will have an effect on the result, even though it might not be near the threshold. Nevertheless, the prototype demonstrates a clear idea of how an audiometry test application for iOS could be implemented.
