Project Title: Hearing Test App.

BitBucket link: https://bitbucket.org/anan416/ioshearingtestapp/src/master/

Group members: Andreas Thorlund Andersen (anan416), Omar Nabil Hawwash (omhaw16), Persha Pakdast (pepak16)

Introduction

This project is about a hearing test app that lets a person check their hearing ability using their own headphones (which need to be calibrated so the test is as accurate as possible and the user can reliably detect the test tones). The main goal of the app is to give the user an understanding of how good their hearing is.

This information about hearing and ear health can be of great importance to some users. Chemotherapy patients need to monitor their hearing from time to time, because they are at risk of losing it as an unwanted side effect of chemotherapy. Other people, such as the elderly or people who have already lost some of their ability to hear, may also want to monitor their ear health and current hearing ability. This is valuable because hearing damage is permanent and lost hearing cannot be regained.

Hearing is one of the basic human senses, and one that most people do not want to lose. So, to make sure they do not lose more of it when they are at higher risk or have already lost some, people may want to have their ears and hearing screened. Such a procedure is usually carried out at a clinic, with specialized staff handling it.

This means that there is a location constraint on where one's hearing can be screened. It also means that specialized personnel need to set aside time for each individual patient to consult and screen them. This costs money and time, for both the clinics and the patients.

What the app will do to solve this problem is eliminate the location constraint and make it possible to take the hearing screening test wherever the patient might be, without the help of clinical personnel.

The goal of the app is to screen the patient, show the results to the patient, and be able to automatically hand the results over to the patient's own doctor.

 

Shown here is the use case diagram for the app.

Methods and materials

Methods

Design

The architecture design was made with draw.io.

We used the MoSCoW model to prioritize the identified requirements.

Development

Development was done with Xcode.

The development code is hosted on Bitbucket and managed with GitKraken.

Testing

Testing was done manually, with the help of the Xcode iPhone simulator.

Materials for development

Swift 4

Our choice of programming language is Swift 4.

Xcode

Our choice of IDE is Xcode, where we can test our app in an iPhone simulator.

BitBucket.com

We use Bitbucket to host our code, share it internally in the group, and get Git functionality.

GitKraken

GitKraken is used for managing code changes.

Draw.io

Draw.io is used for drawing architecture diagrams.

marvelapp.com

This tool is used for making conceptual illustrations of the app idea.

 

Results

In order to actually implement the application, it would be good to have some requirements to aim for, and to be certain that the idea also fulfills the mandatory project requirements introduced at the beginning of the course.

The project theme was to use sensors, which could be GPS, microphone, temperature or something else. This project aims at using the microphone, which would be used for calibrating the different kinds of headphones users might take the test with, in order to make the hearing test more precise, although the test will still lack some clinical accuracy.

The context was to make either a game or something related to health technology, and the group chose to work with a health technology related project.

To get an idea of what the application would look like, the website Marvelapp.com was used, which made it possible to sketch the user interface of the application.

 

Some of the sketch drawings are shown here:

The first picture (left) is the front page, which would contain buttons for e.g. entering the page for starting the test, the sign-up and login page, the user information page (where the user can edit their user data, such as full name, age, etc.), info on how the hearing test is done, and last but not least, settings, which would contain options for adjusting text size, changing language and much more.

The next picture (middle) is the sign-up page, where the user can create an account in order to save their hearing test records. This page would also contain a small label button leading to the login page, in case the user already has an account and just needs to log in to view their hearing test records.

The last picture (right) is the page used for calibrating the user's headphones, as the user needs their headphones to work as precisely as possible.

After the development of the prototype, it was still unclear how precise the application could realistically be. Some of the group members asked the lecturer and found out that he knew a student who had made an Android version of the same application and who could ease the group's work.

A meeting was later arranged with the lecturer Jacob and the student who made the Android version of the hearing test app, and the group received feedback and tips on what tools it would benefit from using in the project.

One thing that was clear is that the student used an algorithm called the one-up-two-down algorithm, which describes how the test progresses while a user takes the hearing test.

The algorithm works as follows: every time the user answers "yes" (the sound was heard), the sound level goes two steps down (quieter); every time the user answers "no" (no sound was noticed), the sound level goes one step up (louder).
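In code, the step rule could look like the minimal sketch below. The 5 dB step size and the two-down/one-up directions match the final implementation shown later in this section, but the function and parameter names here are only illustrative.

// Illustrative sketch of the one-up-two-down step rule.
// The names nextLevel, stepDB, minDB and maxDB are hypothetical;
// the project's real implementation is shown further below.
func nextLevel(current: Int, heard: Bool, stepDB: Int = 5, minDB: Int = 0, maxDB: Int = 100) -> Int {
    if heard {
        return max(current - 2 * stepDB, minDB)   // heard: two steps down (quieter)
    } else {
        return min(current + stepDB, maxDB)       // not heard: one step up (louder)
    }
}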

Another issue discussed with the student who made the Android version was that the headphone and calibration part of the application could be quite complicated, as he used specialized hardware to measure the headphone's output at the ear. The group would not go into this part of the development, though.

There is a wide range of headphones with different specifications for how well they can be used for a hearing test, and this was also discussed with the students who wrote their bachelor project as an Android hearing test application. It was mentioned that in-ear headphones are more problematic than on-ear headphones, as the distance between the ear and the headphone needs to be within a specific range to allow a good hearing test. The headphones tested for the Android application were a pair of Sennheiser HD 280 Pro.

In terms of requirements, we made a MoSCoW model in order to prioritize all requirements, where the most essential requirements to be fulfilled by the application form the "Must" part of the model. The requirements are listed below:

Must:

  • Pure tone test with oscillator tones at different frequencies and amplitudes.
  • Button for the user to register that they heard the sound.
  • Local storage of result data.
  • Sound output through the stereo channels.

 

Should:

  • Headphone calibration.
  • System for Noise Cancellation.
  • Ability to override and change the font size inside the app.
  • User registration and login.

Could:

  • Quick introduction to the app.
  • Speech test with noise overlay.
  • Quick consulting questionnaire about the user's hearing habits.

 

In terms of classes and MVC, the group agreed that the Model part would contain all the algorithmic work, i.e. the one-up-two-down algorithm. The Model part of the MVC would then be connected to the Controller classes, which would use it for the user interaction part of the application.

 

The Controller classes respond to all the interactions the user makes; starting the test, for example, should trigger the code that uses the algorithm in the Model. Another thing to consider was storing the users' hearing test results in a database of some sort. This would require a connection to the database, as users would need to register an account, log in, and store their tests as they take them. However, this was not a must according to the requirements, hence it was not prioritized. An alternative is to store the data locally on the phone.
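Since local storage was a Must requirement while a database was not, a minimal sketch of how results could be stored locally is shown below, assuming a simple Codable result type kept in UserDefaults. The type name, field names and storage key are illustrative and do not necessarily mirror the project's actual Persistence class.

import Foundation

// Illustrative sketch of local result storage using Codable and
// UserDefaults; the type and key names are hypothetical.
struct StoredObservation: Codable {
    let ear: String
    let freq: Int
    let dBTreshold: Int
}

func saveResults(_ results: [StoredObservation]) {
    if let data = try? JSONEncoder().encode(results) {
        UserDefaults.standard.set(data, forKey: "hearingTestResults")
    }
}

func loadResults() -> [StoredObservation] {
    guard let data = UserDefaults.standard.data(forKey: "hearingTestResults"),
          let results = try? JSONDecoder().decode([StoredObservation].self, from: data) else {
        return []
    }
    return results
}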

 

The View would be used to allow the users to interact with the application.

The aim was to make a user-friendly GUI, as it should be possible for elderly people to use the application too; part of the focus would be settings for adjusting the text size of all texts in the GUI.

This figure explains the MVC relations of the project.

Here are some of the important parts of the implementation:

Code snippet 1: the outer loop for testing one ear:

func screenEar(ear: String) {
    for currentFreq in Frequencies {
        let treshold = oneUpTwoDown(currentFreq: currentFreq)
        let result = FreqObservation(ear: ear, freq: currentFreq, dBTreshold: treshold)
        resultList.append(result)
    }
}

 

What we see here is the code for testing one ear at a time in the pure tone test. It iterates through the list of defined frequencies (250, 500, 1000 and 2000 Hz) and executes the one-up-two-down test for each frequency.
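The snippet relies on a few declarations that are not shown here. A minimal sketch of what they might look like is given below; the exact definitions in the project may differ.

// Illustrative declarations for the snippet above; the real
// definitions in the project may differ.
let Frequencies = [250, 500, 1000, 2000]   // test frequencies in Hz

struct FreqObservation {
    let ear: String        // "left" or "right"
    let freq: Int          // frequency in Hz
    let dBTreshold: Int    // lowest dB level the user registered hearing
}

var resultList = [FreqObservation]()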

 

OneUpTwoDown

We want to find the lowest decibel level the user can hear at each frequency. To do this, we use the one-up-two-down algorithm. It starts at a base decibel level, which we have set to 60 in the app. It plays the oscillator tone at the specified frequency, and then the app waits for the user to press the on-screen button to indicate that the sound was heard. If the user pressed the button, the sound level goes down two steps in decibel level; one step is 5 decibels, so two down is 10 decibels. If the user did not hear the sound, the level goes up one step. Once the user has registered hearing the sound 3 times at the same decibel level, the app records that level as the result for the frequency.

 

This is the algorithm:

func oneUpTwoDownPerDB(currentFreq: Int) -> Int {
    oscillator.frequency = Double(currentFreq)
    thisFreq = currentFreq
    var currentDBLevel = baseDB
    var checkedDBLevels = [DBHitCounter]()
    startAudio()

    while (checkForSpecificHits(receivedSet: checkedDBLevels, hit: 3) == nil && TestActive) {
        thisDB = currentDBLevel
        print(currentFreq, currentDBLevel)
        oscillator.amplitude = convertDBToAKAmplitude(db: currentDBLevel)
        startOsc()

        timerEnabled = true

        let work = DispatchWorkItem(block: {self.fireTimer()})
        DispatchQueue.global(qos: .background).asyncAfter(deadline: .now() + 4.0, execute: work)
        while (timerEnabled) {
            if (userRegisteredSound) {
                work.cancel()
                print("cancelled timer because user registered sound", currentDBLevel)
            }
        }
        work.cancel()
        stopOsc()

        if (userRegisteredSound) { // do this if user heard the sound; we check if the db level was heard before
            if (getIndexForDB(receivedSet: checkedDBLevels, DB: currentDBLevel) == -1) { // if this level isn't heard before
                let currentDBHit = DBHitCounter(dBLevel: currentDBLevel)
                currentDBHit.addHit()
                checkedDBLevels.append(currentDBHit)
            } else { // if this level was heard before
                let index = getIndexForDB(receivedSet: checkedDBLevels, DB: currentDBLevel)
                checkedDBLevels[index].addHit()
            }

            if ((currentDBLevel - 10) >= MinDB) { // goes two down
                currentDBLevel = currentDBLevel - 10
            } else {
                currentDBLevel = MinDB
            }
            userRegisteredSound = false

        } else { // user didn't hear sound
            if ((currentDBLevel + 5) <= MaxDB) { // adds one up
                currentDBLevel = currentDBLevel + 5
            } else {
                currentDBLevel = MaxDB
                if (getIndexForDB(receivedSet: checkedDBLevels, DB: currentDBLevel) == -1) { // if this level isn't heard before
                    let currentDBHit = DBHitCounter(dBLevel: currentDBLevel)
                    currentDBHit.addHit()
                    checkedDBLevels.append(currentDBHit)
                } else {
                    let index = getIndexForDB(receivedSet: checkedDBLevels, DB: currentDBLevel)
                    checkedDBLevels[index].addHit()
                }
            }
        }
    }
    stopAudio()
    return (checkForSpecificHits(receivedSet: checkedDBLevels, hit: 3)?.db ?? -1)
}
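The helper convertDBToAKAmplitude(db:) is not shown above. A common way to map a decibel value to a linear oscillator amplitude is the standard 10^(dB/20) relation; the sketch below assumes that MaxDB corresponds to full amplitude, which is an assumption made for illustration only and not the project's actual calibration.

import Foundation

// Illustrative dB-to-amplitude mapping, assuming MaxDB maps to an
// amplitude of 1.0. The project's real conversion (and any headphone
// calibration) may differ.
let MaxDB = 100

func convertDBToAKAmplitude(db: Int) -> Double {
    // Every 20 dB below MaxDB divides the linear amplitude by 10.
    return pow(10.0, Double(db - MaxDB) / 20.0)
}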

 

The App’s interface


The video provided shows that the Results window crashes, but if you look at the actual 'Results' view, you will see a TableView that we attempted to implement. Moreover, in the "Start test" window, if you click on the ear, the app automatically starts tracking the microphone and printing the amplitude in the console. This, and the persistence that happens through the OneUpTwoDown and Persistence classes (which save and retrieve the results), was unfortunately not shown in the video due to lack of recording time. However, if you look in the source code, both the attempted and the final persistence code is available in the OneUpTwoDown and Persistence classes, respectively.

 

More on this can be found in the Discussion section below.

 

Discussion

Although the application succeeded with the most essential parts of the MoSCoW requirements, a lot of functionality is still missing, such as the settings for making the app more user-friendly in terms of text size and language, or the headphone calibration.

From here it would be a good idea to continue working on the smaller functions, such as the different settings, or a video guide that walks the user through the hearing test.

In the beginning, we expected to implement more views for the test, but as it turned out, one view with just a button was enough to carry out the test.

The application is messier than expected: it is somewhat monolithic and contains a lot of classes in the root folder of the project, instead of keeping things organized. With MVC, the Model, View and Controller parts of the project should each have their own group of classes, but as it looks now, the project does not live up to this, as classes are spread all over the project. This is simply because the main focus was to make everything work first, which is why the MVC structure was not prioritized.

While the microphone is used, it is not as functional as we wished. As of now, the microphone tracks the amplitude of the sound picked up from the user's surroundings. The intended implementation is for it to, depending on the amplitude, either tell the user to move to a quieter place or confirm that the current location is fine for a complete hearing test.
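A minimal sketch of the intended check is shown below. It assumes AudioKit 4 (the AK prefix in convertDBToAKAmplitude suggests AudioKit is used), and the 0.05 amplitude threshold is an arbitrary placeholder rather than a calibrated value.

import AudioKit

// Illustrative ambient-noise check using AudioKit's microphone and
// amplitude tracker; names and the 0.05 threshold are placeholders.
let mic = AKMicrophone()
let tracker = AKAmplitudeTracker(mic)

func startNoiseCheck() throws {
    AudioKit.output = AKBooster(tracker, gain: 0)   // keep tracking, but output silence
    try AudioKit.start()
}

func locationIsQuietEnough() -> Bool {
    return tracker.amplitude < 0.05
}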

The labels in the hearing test app unfortunately do not always show the right numbers. An update mechanism was attempted, and will be implemented prior to the exam, along with a more functional version of the microphone and a persistence layer connected to the Table View in Results.
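One likely cause of the label issue is that the test loop runs on a background queue, so any label update triggered from it has to be dispatched back to the main queue. A minimal sketch is shown below; the class and outlet names are hypothetical.

import UIKit

// Illustrative sketch: UI updates coming from the background test
// loop must run on the main queue. The class and outlet names here
// are hypothetical.
class TestViewController: UIViewController {
    @IBOutlet weak var dBLabel: UILabel!

    func updateLevelLabel(dBLevel: Int) {
        DispatchQueue.main.async {
            self.dBLabel.text = "\(dBLevel) dB"
        }
    }
}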

 

The conclusion is that, with more time, the project could have focused more on completing the MVC structure and implementing more of the features from the MoSCoW model.

 
