By: Daniel Kaypour [dakay14@student.sdu.dk]

Introduction

This project is about a green energy solution built around making consumers aware of their bad energy habits. The idea is to give the user information that tells them when they are wasting energy. Examples of bad energy habits include leaving your windows open while your heater is on, or leaving the lights on after you have left the building.

This project aims to address the issues that can occur in a project like this. It is important to note, however, that due to the limited time available, a full-blown application should not be expected from this project.
This project is about more than just coding an application, so some technical knowledge is required in order to understand the functionality of the whole system.

This application is not intended to be a precise calculator of bad energy consumption. Rather, it will serve as an information center that can guide users in the right direction, and perhaps help reduce the world's CO2 emissions.

Motivation

Bad energy habits have unfortunately become part of many people's daily lives. We forget to turn off the lights when we are not in the room. We forget to turn off the heater when we open the windows.

Initial inspiration for this project can be found here: http://www.york.ac.uk/energyconservation/fact_sheets/lights%20comp%20window.pdf

Though automated lights (which turn off when no movement is detected for a while) have existed for some time, I have tried to move in a slightly different direction. The main idea is not to interfere with the person's daily life, but instead to make personal information available to the user on request.
I believe that showing the user concise consumption data will make them change their habits. I do not believe that interfering with the user's behaviour is the right solution to this global problem.

However, showing the user how much energy they waste, and how much they are paying for that wasted energy, could make them change their habits.
This will benefit both the user and the planet by making it "greener": CO2 emissions are reduced because power plants have to produce less energy when more people are aware of their consumption and use less.

At this time, I do not see this as a feasible solution for companies, but I do see it as a viable option for private households, whether in existing houses and apartments or as part of new smart houses.

Problem statement

The essential goal is to create a system that is feasible for the user to invest in. I need to identify the problems correctly, so that I solve this specific issue in a worthy manner.

In this particular project, there are a few problems.

First, I need to find out whether or not this product is feasible, by doing some kind of pilot study.
Once I have established that it is feasible, I need to figure out how to gather the energy consumption information from the user: which scenarios to cover, and which hardware and sensors to use.
Finally, I need a way to show the user this information in a user-friendly way. It will be presented through an iOS application.

Market research

Before coming up with an idea for a product solution, I analyzed the market for opportunities using the market attractiveness pie chart. The model studies the market from six different perspectives and helps identify opportunities to direct the idea generation.

Screen Shot 2015-12-06 at 01.18.47

The target market is savings in building energy consumption through building refurbishment. Starting from one end, the Demographic Environment lists the potential customers in the market, which at first glance is basically everyone paying an energy bill. More specifically, the most attractive demographic segments would be:

  1. Private House Owners
  2. Landlords
  3. The Government
  4. House Owners To Be
  5. Business Owners
  6. Tenants

Not necessarily in that order. Of course, all of those categories could be subdivided further, e.g. by income, educational level or similar.

The Socio-Cultural Environment governs beliefs, values and norms. In our market there is a rising environmental awareness and a felt responsibility to keep the planet healthy for future generations, a general trend of going green, and a trend towards cloud services that might open up opportunities for a product.

The Economic Environment is a very important factor, as it includes the (monetary) size of the market. In our case it is a high-value market, seeing that purchasing a building has a high cost and buildings require constant upkeep. As the product will by definition help save energy, it can also be considered an investment, returning its cost over the next few years. Additionally, the initial target market in a Western country gives the population high purchasing power, and, thinking globally, the middle class of developing countries is rapidly growing.

In the Political-Legal Environment of the starting market, Denmark, the government has a general goal of fossil-free electricity and heat production by 2035, which in most projected scenarios requires a substantial increase in energy efficiency. More specifically, there is a goal of a 75% reduction in energy consumption in buildings by 2050. This has led, and will presumably continue to lead, to more subsidies for energy-saving investments and projects. Additionally, there is a general subsidy in the form of a tax deduction for maintenance in private homes that is applicable to modernization and refurbishment.

An important opportunity in the Technological Environment is the long lifetime of buildings relative to the fast rate of technological development, resulting in a large discrepancy between average energy consumption and the state of the art. Other opportunities are the low price and high availability of sensors, and the widespread access to and acceptance of smartphones, the internet and cloud-based services.

Finally, the Natural Environment is largely covered by the Socio-Cultural Environment, as its upkeep is vital to the responsibility towards future generations. Opportunities here are the impact of energy savings on global warming and, additionally, the reduction of local pollution.

Report structure

To give the reader the best possible understanding of the problem, I have structured the report in a way where we move from the conceptual overview slowly down to the technical part of the project.
However, this report does expect the reader to have some technical knowledge in order to understand the technical details.

The report starts by analysing the problem and showcasing relevant current technologies. We then proceed with analysing the overall system and justify the choices we have made for the technology we chose to work with.

We then look at the system design and implementation details.
The report will finish with a discussion, followed by a conclusion.

System analysis

The system consists of different components. First, there is a need for some hardware in the user's home that can read their consumption and also detect whether or not someone is inside the room. I need this data because it is required for the core calculations. Furthermore, the hardware needs to be able to push the information to the cloud, where I can start to analyze it and shape it the way we want the user to see it. Finally, I need an app for the user's phone, so they can log in to the cloud, fetch their personal data and look at their statistics.

Obviously, security should be taken very seriously in this matter, since the habits of the user could potentially be exposed to the outside world.

It is important that the hardware that pushes the data to the cloud cannot be accessed from the outside world. Its only function is to upload the data to a certain endpoint.

From a functional perspective, the most relevant objective is that the hardware measures consumption transparently.

As such, it must be possible for the hardware to detect by itself whether or not there are people in the room. If there is no one in the room, it will proceed to upload relevant data until either the energy source gets shut off or people re-enter the room.

This requires some sort of backend (cloud endpoint) to receive the data from the hardware. This backend will also be the one the user logs in to from their smartphone app.

Basically, the calculations happen on the backend once it receives data from the hardware. When the user then logs into the app and requests information, the backend handles the query and returns the relevant information.

In a nutshell, the operational concerns are fault tolerance, availability, reliability, security and safety. It is critical that the hardware is well tested, since constantly sending out technicians is not feasible in the long run. It must be able to run by itself for many years to come.

It must also be available, which is very important for backup plans in case something happens. This is particularly important for the hardware, since the hardware is out in the field and thus more costly to fix. It is much easier to handle a breakdown in the server room than in the field. Therefore, the system must be as fault tolerant, available and reliable as possible.

Security and safety are two other concerns that involve the whole system. We are talking about household consumption, which is categorised as private data.
The vision of this system is still in its infancy, so it is very important that the system is easily maintainable, because it will evolve rapidly once I start to add support for more kinds of monitoring than those mentioned in this report.

My purchasing power is very limited, so I need to use my resources sparingly. This means that I cannot go to the market and acquire expensive hardware (microcontrollers / sensors); instead I need to focus on developing a prototype that is feasible within my economic constraints.

Choices of technology

To achieve success, we must not exceed our economic threshold. As of this report being written, there is a limited amount of money available (mostly out of my own pockets). Therefore, the technology I use for the prototype might deviate from the technology I would use if I were funded.

The important factor here is that I need to choose two kinds of technology: hardware and software.
We previously described the importance of cloud computing in this project. This is a key feature to remember when we think about hardware components.

Fundamentally, my hardware requirements are very basic: a microcontroller with internet connectivity, and sensors that can sample the required data so it can be pushed to the cloud.
This needs to be cost-effective for the prototype, but it must also be reliable enough to serve as a showcase until I eventually get funded.

During my market search, I found several valid microcontroller / microprocessor candidates. However, only one seemed to suit perfectly, because it is economical, has internet connectivity, and also runs an operating system.

Screen Shot 2015-12-06 at 00.52.22

The candidates are as follows:

Arduino: suffers from the fact that it has no internet connectivity built in, so you need to add a network shield or implement your own networking (which is time-consuming and error-prone). However, Arduinos are pretty cheap, making them very cost-effective, and they can read from analog sensors, which is the type of sensor we will use to measure consumption.

Intel Edison: Intel's new IoT (Internet of Things) product. Whilst being very attractive, it lost on two factors. One is that it is very expensive for us, even as a prototype. The second is that it is missing IO (Input/Output) ports, which makes it hard to work with in our project, since we need to read data from analog pins.

Raspberry Pi 2: the latest release from the Raspberry Pi Foundation, dramatically increasing the computing power of the Raspberry Pi and making it the strongest computer in this comparison. A further strength of this device is its low price tag. It comes with internet connectivity through Ethernet and can run an operating system of choice.
One issue we will encounter is that it only has digital IO by default, which makes consumption measurement hard, since that requires analog input (an external analog-to-digital converter is needed).

BeagleBone: being very close to the Raspberry Pi in architecture (having internet connectivity and IO ports), it lost solely because of its expensive price tag, being 2-3 times the price of a Raspberry Pi (depending on the vendor). This makes it unfeasible for our prototype.

After this rough comparison, I concluded that the Raspberry Pi 2 suits me best. Not only is it cheap, it also provides good computing power and internet connectivity.

Selecting sensors that can measure power consumption is a much simpler choice than the previous comparison. Basically, the best option I have, without interfering with the existing electrical installation, is to use a non-invasive method. As such, only one type of sensor stands out.

Screen Shot 2015-12-06 at 00.52.14

This is a CT (Current Transformer) sensor: a cheap sensor that can be installed simply by clamping it around a wire carrying electrical current. The reading is sent back through an analog pin, measuring the amperage of the unit.

It works like any other transformer: it has a primary winding, a magnetic core, and a secondary winding. Whilst being non-invasive and therefore easy to use, it has the drawback of not being 100% precise; its readings can deviate by a few percent. However, before we start developing and testing, we do not know exactly by how much it deviates, so it is too early to say whether it is unsuitable.
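As a minimal sketch of how a CT reading could be turned into a power figure, assuming a 230 V mains supply (the same constant used in the app code later), the conversion might look like this. The calibration factor is a hypothetical placeholder for compensating the few-percent deviation mentioned above; a real value would have to be determined empirically per sensor:

```swift
// Nominal mains voltage in Denmark (an assumption; a real deployment
// should measure or configure this per installation).
let mainsVoltage = 230.0

// Convert a CT sensor amperage reading to apparent power in watts.
// Note: this ignores the power factor, so it is an upper-bound estimate.
func apparentPowerWatts(amps: Double) -> Double {
    return mainsVoltage * amps
}

// A CT clamp can deviate by a few percent; applying a per-sensor
// calibration factor (hypothetical, determined empirically) reduces the error.
func calibrated(watts: Double, factor: Double = 1.0) -> Double {
    return watts * factor
}
```

For example, a reading of 10 A corresponds to 2300 W before calibration.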

To complete the embedded part of the system, I need a way to detect whether there is human presence in the room. This can be done in many different ways, depending on how you combine sensors.

Screen Shot 2015-12-06 at 00.59.57

One idea could be to use these antennas, which create a WiFi field. This field can then be used to count whether there are people in the area.
More information can be found here:

http://www.ece.ucsb.edu/~ymostofi/HeadCountingWithWiFi.html

The way this technology works is by using the noise on the signal level, combined with an algorithm.

Screen Shot 2015-12-06 at 01.00.09

Evidently, the above picture shows how the signal level becomes noisy when someone walks through the LoS (Line of Sight). This obviously works fine for moving objects; however, things are different with static objects. Therefore, an algorithm must be added that can handle such behaviour.
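To make the idea of "noise on the signal level" concrete, here is a toy sketch only: the actual head-counting algorithm in the cited paper is far more sophisticated. This simply flags "motion likely" when the variance of recent received-signal samples exceeds a threshold; the threshold value is a made-up placeholder, not taken from the paper:

```swift
// Variance of a set of received-signal samples (e.g. dBm readings).
// A still room gives near-zero variance; a person crossing the LoS
// perturbs the signal and increases it.
func signalVariance(samples: [Double]) -> Double {
    guard !samples.isEmpty else { return 0 }
    let mean = samples.reduce(0, +) / Double(samples.count)
    let squaredDeviations = samples.map { ($0 - mean) * ($0 - mean) }
    return squaredDeviations.reduce(0, +) / Double(samples.count)
}

// Placeholder threshold: purely illustrative, not from the paper.
func motionLikely(samples: [Double], threshold: Double = 4.0) -> Bool {
    return signalVariance(samples: samples) > threshold
}
```

A steady stream of identical samples yields variance 0 (no motion), while a perturbed stream trips the threshold.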

Screen Shot 2015-12-06 at 01.00.14

This system also creates a scattering effect, which can be seen on the graph as multipath. This means the system also detects people who are not directly in the LoS, which makes it powerful, because the effect increases the operational range.

Screen Shot 2015-12-06 at 01.00.24

Thus, this works effectively in a room because of the scattering effect, as can be seen in the results below.

Screen Shot 2015-12-06 at 00.59.50

In my case the precision is quite acceptable. Though the outdoor precision is poor compared to the indoor precision, I am unaffected by this, because my system is built for indoor use.

Conceptual overview

Ultimately, I require this system to have a loosely coupled architecture. I do not want the system to halt if one of the applications stops working.
This means that if the microcontroller cannot establish a connection to the backend, it should be able to store data locally and then upload it when the connection is re-established. Likewise, the smartphone application should be able to show cached data in case the backend is unavailable, and resync whenever the connection is re-established.
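The store-and-forward behaviour described above can be sketched as follows. This is illustrative only: the `Reading` type, the class name and the injected `send` function are all assumptions for the sketch, not part of the real system:

```swift
// One sensor sample waiting to be uploaded (illustrative type).
struct Reading {
    let timestamp: Double
    let amps: Double
}

final class ReadingUploader {
    private var pending: [Reading] = []   // locally buffered readings

    // `send` is injected so the network layer can be swapped or mocked;
    // it returns true on a successful upload.
    func record(_ reading: Reading, send: (Reading) -> Bool) {
        if send(reading) {
            flush(send: send)             // connection is up: drain the backlog
        } else {
            pending.append(reading)       // backend unreachable: buffer locally
        }
    }

    func flush(send: (Reading) -> Bool) {
        while let next = pending.first {
            guard send(next) else { return }  // stop on the first failure
            pending.removeFirst()
        }
    }

    var backlogCount: Int { return pending.count }
}
```

While the connection is down, readings accumulate in the backlog; the first successful upload drains it in order.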
For me, availability, security, maintainability, scalability and reliability are the main concerns when we think about what our overall architecture should be. Therefore, it seems feasible to choose an n-tier architecture, for various reasons.
For one, by having each application on its own tier, I can have a set of rules that targets that specific tier, rather than only global rules. Say we have a huge database load and would like to scale out: we can modify the database tier and add additional clusters without any other tier being affected directly.

This also gives us the opportunity to maintain each tier independently and transparently of the others.

Screen Shot 2015-12-06 at 00.52.34

An overall architecture could be described as above, where each symbol on the diagram is a tier by itself.

We have the microcontroller, which sends data to the load balancer. The load balancer acts like a proxy that delegates the request to the appropriate backends (endpoints), possibly using a round-robin algorithm; from there the work can be delegated to the database tier, depending on the request.
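The round-robin delegation mentioned above can be sketched in a few lines. This is what a real load balancer does internally; the backend names here are placeholders:

```swift
// Minimal round-robin selector: hand out backends in a fixed cycle so
// requests are spread evenly across the pool.
final class RoundRobin {
    private let backends: [String]
    private var index = 0

    init(backends: [String]) {
        self.backends = backends
    }

    // Return the next backend, wrapping around at the end of the pool.
    func next() -> String {
        let backend = backends[index]
        index = (index + 1) % backends.count
        return backend
    }
}
```

Adding a node to the pool is just a matter of extending the list; callers are unaffected, which mirrors the scaling argument made below.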

Consequently, this gives us easily maintainable components and makes it much easier to scale.
For example, if HTTP load is our problem, we can simply add another backend node, and our load balancer will register it and add it to the algorithm it uses. If we are having a big database load, we can add another database cluster to the database tier, all transparently to the other tiers. The other tiers simply do not need to know; they just do their part of the chain.
This also enhances maintainability, which was one of our requirements.

Even though this gives us maintainability, scalability and reliability at an architectural level, it can be further enhanced at the software level by using specific design patterns.

 

Early Mockups

Screen Shot 2015-12-06 at 00.55.42

 

The early prototype was made so I could have an overall idea before I started to build the product. The prototype is not complete and deviates from the real product; it is merely a "first step".

Screen Shot 2015-12-06 at 01.44.48

As we can see in the picture above, the Xcode version has deviated from the original prototype, which was expected. However, it is not yet as complete as the prototype, which it eventually will be.
The screen renders a graph, which cannot be seen in the Xcode project, because it loads dynamically.

Screen Shot 2015-12-06 at 01.49.46

At first you are presented with a login screen, which connects to the main REST service. Your requests are handled there, which means you never actually interact with the hardware device at all. The REST service has all the information the app might require.

Screen Shot 2015-12-06 at 01.50.04

The graph shows the bad energy that the user has consumed throughout the year. The pie chart below it gives more detailed information. Evidently, the app is not finished, but this is the direction I want to continue in.

Performance

Currently the application uses polling, which can lead to some serious problems. For one, polling is generally not a good idea in mobile applications, because it can drain the battery and increase data consumption transparently. This is not something mobile users are interested in, since battery power is sparse compared to everything else evolving in the mobile world.

The problem with polling is that you are never sure whether an actual update has occurred, which can render the poll you just made useless.
The solution to this would be to use sockets, which create a full-duplex connection so the server can push updates to the client without any request having been made.

An example of a poll would be:
You send a polling request with your JWT (JSON Web Token); this request can be around 80 bytes in size. The response you typically get is around 200 + N bytes, where N depends on the sensors you have installed and the data they contain.
So in the best case one poll costs about 280 bytes, which does not sound like a lot. But if you poll 60 times a minute (once per second), you will have used 280 * 60 = 16,800 bytes = 16.8 KB.
This still seems like a relatively low number, but over ten minutes we get 10 * 16.8 = 168 KB. Apart from being a downside in the app, this also increases the load on our backends. Pretend we have one million users: 168 KB * 1,000,000 = 168,000,000 KB ≈ 164,000 MB ≈ 160 GB every ten minutes!
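The back-of-the-envelope arithmetic above can be reproduced with a small helper. The 280-byte figure is the rough per-poll estimate from the text (80-byte request plus a ~200-byte response), not a measured value:

```swift
// Total polling traffic in bytes for a given poll size, rate, duration
// and user count. Purely an estimate for capacity planning.
func pollingTrafficBytes(bytesPerPoll: Int, pollsPerMinute: Int,
                         minutes: Int, users: Int) -> Int {
    return bytesPerPoll * pollsPerMinute * minutes * users
}
```

One user polling once per second for a minute costs 16,800 bytes; a million users over ten minutes cost 168,000,000,000 bytes, i.e. on the order of 160 GB.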

Therefore the conclusion is to change the polling to full-duplex communication.

Demo

A video demo can be seen at the link below.

YouTube

Results

// Poll the backend every 500 ms by calling update() repeatedly.
NSTimer.scheduledTimerWithTimeInterval(0.5, target: self, selector: "update", userInfo: nil, repeats: true)

 

    // Fetch the current consumption for the selected room from the REST
    // service, using the stored JWT for authorization.
    func getRoom() {
        let url = NSURL(string: "http://localhost:8000/rooms/"+currentRoom)
        let request = NSMutableURLRequest(URL: url!)
        request.HTTPMethod = "GET"
        let token = NSUserDefaults.standardUserDefaults().objectForKey("authorization")
        request.setValue((token! as! String), forHTTPHeaderField: "Authorization")

        let task = NSURLSession.sharedSession().dataTaskWithRequest(request) {
            data, response, error in

            if error != nil {
                print("error=\(error)")
                return
            }
            // A 403 means the token is no longer valid: clear it and
            // return the user to the login screen.
            let statusCode = (response as? NSHTTPURLResponse)?.statusCode
            if statusCode == 403 {
                NSUserDefaults.standardUserDefaults().removeObjectForKey("authorization")
                NSUserDefaults.standardUserDefaults().synchronize()
                self.performSegueWithIdentifier("loginView", sender: self)

            } else {
                // Sum the apparent power (230 V * measured amps), either
                // across all boards ("Total") or for a single room.
                var watt: Float64 = 0
                if self.currentRoom == "Total" {
                    let json = try! NSJSONSerialization.JSONObjectWithData(data!, options: .MutableContainers) as? NSArray
                    for obj : AnyObject in json! {
                        let board = obj["Board"] as? NSDictionary
                        watt += 230 * (board!["TotalAmp"] as! Float64)
                    }

                } else {
                    let json = try! NSJSONSerialization.JSONObjectWithData(data!, options: .MutableContainers) as? NSDictionary
                    let board = json!["Board"] as? NSDictionary
                    watt = 230 * (board!["TotalAmp"] as! Float64)
                }
                // Convert watts to kilowatts and update the label on the main thread.
                watt = watt / 1000
                dispatch_async(dispatch_get_main_queue(), {
                    self.tt.text = NSString(format: "%.2f", watt) as String
                })
            }
        }
        task.resume()
    }

In this snippet we see the pitfall of the polling, which we will ultimately change to a socket-based approach later. The polling occurs every 500 ms, as shown above, and the function runs asynchronously.

func setChart(dataPoints: [String], values: [Double], limit: Float64) {

        // Convert the raw values into chart entries shared by both charts.
        var dataEntries: [ChartDataEntry] = []

        for i in 0..<dataPoints.count {
            let dataEntry = ChartDataEntry(value: values[i], xIndex: i)
            dataEntries.append(dataEntry)
        }

        let pieChartDataSet = PieChartDataSet(yVals: dataEntries, label: "Bad kWh")
        let pieChartData = PieChartData(xVals: dataPoints, dataSet: pieChartDataSet)
        pieChartView.data = pieChartData

        // Assign a random color to each pie slice.
        var colors: [UIColor] = []

        for _ in 0..<dataPoints.count {
            let red = Double(arc4random_uniform(256))
            let green = Double(arc4random_uniform(256))
            let blue = Double(arc4random_uniform(256))

            let color = UIColor(red: CGFloat(red/255), green: CGFloat(green/255), blue: CGFloat(blue/255), alpha: 1)
            colors.append(color)
        }

        pieChartDataSet.colors = colors

        // The line chart reuses the same entries, with a horizontal limit
        // line marking the user's consumption threshold.
        let lineChartDataSet = LineChartDataSet(yVals: dataEntries, label: "Bad kWh")
        let lineChartData = LineChartData(xVals: dataPoints, dataSet: lineChartDataSet)
        lineChartView.data = lineChartData
        let ll = ChartLimitLine(limit: limit)
        lineChartView.rightAxis.addLimitLine(ll)

    }

This function is used to draw the graphs that are bound to a view. Within the view there are two subviews: one for the line chart and one for the pie chart. The charts are then drawn with whatever data they receive from the server.

Discussion

One of the problems my system had to solve was fetching data from the user's house using various sensors: sensors measuring the power consumed, and some sensor to detect whether there was human presence in the house. I found out that I can measure the power consumption using non-invasive sensors. However, detecting human presence in a room would require several different sensors, which would increase cost and, depending on the sensors, could also add complexity at the hardware level. I therefore chose an alternative path using software. I found that with a TX and RX antenna, I can use an algorithm based on the WiFi signal to estimate how many people are in the room.

I am convinced that this is the solution to the problem, because it reduces my cost and complexity significantly and also provides a more precise reading than combining multiple sensors would.

Conclusion

The biggest problem I encountered, very early in the development process, was that the cost of the sensors was too high for my student pockets, making this project only feasible if I get funded.

However, this did not halt the overall project, because I decided to create a virtual prototype instead, ready to plug and play once I get the real hardware. The strength of a virtual prototype is that I can run cost-free tests and benchmarks on it and check for cases we could eventually support. The weakness, on the other hand, is that I do not actually have a physical product to show, making it harder to find investors who want to fund me.

One of the main obstacles I had to overcome was the ability to detect human presence in a room. This turned out to be solvable at both the hardware and the software level.

At a very early stage, I knew that detecting human presence in a room was non-trivial and thus not solvable with a single sensor. To detect reliably, I would have to combine different sensors, analyse their output and then calculate whether or not there was human presence. However, this would significantly increase my cost, since I would need more sensors, and would add complexity because of the way the detection would be done.
I then moved from solving the problem at a hardware level to a software level, where I found an article that makes it possible to count the heads in a room using WiFi signals and an algorithm. That made me comfortable that I could not only reduce complexity and save cost, but also reliably know whether there was human presence in the room, without the potentially misleading output of many sensors.

Whilst I do not have a functional physical product, I believe that my virtual prototype can be a strong foothold in which I can run many different scenarios, until I find the money through an investor or otherwise. After that, I should be able to develop the real product relatively easily, since I will have the experience and knowledge from the virtual prototype, enabling me to rapidly develop a physical prototype for my potential investors.

With the virtual prototype experiment, I saw that what I was developing really was a solution to the identified problem, which convinced me to continue development even after the course ends.
As for the prototype, I feel it sheds light on the problem before I start investing money into this project.
