This project was executed and described by Group 1:
Christina Silverwing (chsil12), Daniel Davidsen (dadav13), Fardin Sherzai (fashe12), Rasmus Elving (raelv13)

Download Code in .Zip format

University of Southern Denmark
24th May 2017

Introduction

This project was developed during the 2nd semester of the Master's program Learning and Experience Technology. The primary goal of the project was to build an autonomous robot using Lego NXT, programmed in the language NXC. The secondary goal was to design the system as a multi-agent system (MAS).

Concept of the project

A brainstorm in class resulted in the concept of the project. The project is based on a scenario in which agents (Lego Mindstorms NXT robots) explore and map the different buildings of the University of Southern Denmark (SDU) while searching for a treasure. When an agent finds the treasure, it gives the location to the other agents, and all agents then meet in that specific area.

Problem definition

  • Build a functional robot with Technic Lego bricks and the NXT.
  • Select the necessary sensors and attach them to the NXT.
  • Build an appropriate map of SDU.
  • Assemble the tasks from the different groups and implement them on all robots.

Distribution of the tasks

Due to the scope of the given task, all functions were divided into five subtasks and delegated to the groups. To conclude the project, each group was tasked with implementing all the subtasks in one collective, functional program. The individual tasks are described below.

G1:  Buildings’ mapping, color and location of a building.

  • Build a function that is capable of detecting the location of a building and linking it with its color.
    • Input: The coordinates of the area to be scanned (the area of the whole map).
    • Output: Building location and color, e.g. the TEK building is at location (x, y) and is blue.

G2: Data exchange, Bluetooth connection.

  • Build a communication network based on a Bluetooth connection.
    • Input: NXT name.
    • Output: Communication established/failed.

G3: Area scanning, mainly used for finding the treasure.

  • Build a building scanner.
    • Input: Building location and color (from Group 1) plus the treasure color.
    • Output: Treasure found/not found.

G4: Data sharing (File sharing) work as a database.

  • Build a data exchange mechanism to share information between robots via the Bluetooth connection (made by Group 2).
    • Input: The data to be shared, e.g. through a push function.
    • Output: A file (database) from which data can be accessed and fetched.

G5: GPS and robot steering. To move towards a specific location.

  • Drive the robot.
    • Input: GPS coordinates of the destination building (can be taken from Group 1).
    • Output: The robot drives to the destination location.

SDU map

A map of SDU was made (see figure 1). The buildings are the following:

  • The yellow area: The parking area
  • The black area: The sports club
  • The red area: The main building
  • The green area: The Maersk-building
  • The blue area: The TEK-building

Figure 1 – Map of SDU

Theory

Multi-agent systems (MAS) are significant because agents with simple instructions can complete a major task through collaboration. The less complex each agent is, the easier it is to replace. The secondary goal of this project has been to simulate this idea of agents working together to achieve a common goal.

As mentioned, a multi-agent system consists of agents in an environment. These agents can be divided into different categories:

  • Software Agents
  • Physical agents
  • Passive agents
  • Active agents

A software agent is a piece of software that functions as an agent for a user or another program, working autonomously and continuously in a particular environment. It is inhibited by other processes and agents, but is also able to learn from its experience of functioning in an environment over a long period of time. Physical agents are real and act in real physical environments. Passive agents are agents without any goals; they do not perform any action by themselves, but can act on external input. Active agents have defined goals that they seek to achieve in their environment. In addition to these categories, it is important that an agent can exercise some autonomy in its environment.

Methods

In this section, the methods used throughout this project will be listed.

Brainstorming process

The project started with a brainstorm in which all groups were involved. This was done with the tool PollEv.com/app, where each group responded with its ideas. After all the ideas were up, each person who had written an idea explained it to the class so that everyone understood what it was about. The ideas were then put into a poll in which each person in the class could vote for the idea they liked best.

Conceptual design process

For the conceptual design process, each group had to find at least three papers that related to the project in some way, for example through the keywords multi-agent systems, Lego Mindstorms, mapping and GPS. These papers were then sent to the teacher, who decided which of them seemed most relevant to the project. Each group presented its paper in detail to the whole class, while another group acted as questioners, asking the presenting group questions about the paper. These papers also acted as a form of state of the art for the project.

Incremental building and testing

We used incremental building and testing when constructing the robot for this project. The robots used are Lego Mindstorms NXT robots. They are assembled from Lego bricks, motors and sensors, which enable the robot to react to and interact with its surroundings.

 

Testing the modules

The first part was to test all the sensors and motors required for the project. This was done by attaching each module to the NXT brick and then uploading a piece of test code to the NXT through the program BricxCC. Each piece of code made one or several modules run, to see that they worked and that no errors occurred during runtime.
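
A typical test looked something like the NXC sketch below: run one motor for two seconds and print its encoder count to the LCD. The port choice and values are assumptions for illustration, not the project's actual wiring.

    // NXC: minimal motor test, uploaded via BricxCC
    task main() {
      long t0 = MotorRotationCount(OUT_A);
      OnFwd(OUT_A, 75);                     // run motor A at 75% power
      Wait(2000);
      Off(OUT_A);
      // Show how many degrees the motor turned, to verify the encoder:
      NumOut(0, LCD_LINE1, MotorRotationCount(OUT_A) - t0);
      Wait(3000);                           // keep the result on screen
    }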

 

 

Assembling the robot

The next part of the project was to assemble the robot so that it could move. This was done with Technic Lego, through trial and error, where we tried different concepts for fitting the modules together.

 

Test run the assembled robot

After assembling the robot, it was time to test whether the robot could drive around properly. This was done with test code and with the direct control functionality in BricxCC, the program used to upload code to the NXT brick.

If a part did not work as expected, it was taken apart and adjusted as needed.

Pair programming

As a group, we were given the task of programming the part that enables each robot to find and outline the buildings. The programming was done using pair programming, where two people work on and test different parts of the code at the same time. The advantage of this is that each person can use the other as a sparring partner, and the work generally goes faster.

 

Establishing the main functions

The first part was to establish the main functions for accomplishing the task. This was done by considering what the robot had to do in different scenarios. An example of such a scenario could be what the robot should do when it meets a building: should it move left, right or backward, or should it follow the line around? To give an overview of the scenarios, the whole thing was written up on the blackboard as pseudocode, from which the main functions were found. A bit of this process can be seen in figure 2.

Figure 2 – Pseudocode

The main functions found through this were:

  • checkRight(): enables the robot to check for a building to its right.
  • updateMap(): enables the robot to update the current map with new information about the map's layout.
  • checkForward(): enables the robot to check for a building in front of it.
  • checkColor(): checks the color of the building the robot meets.
  • drawMap(): draws the map on the screen.
  • rotateLeft(): rotates the robot to the left.

Programming the main functions

Here, the main functions were split up between the group members working on the code and programmed until they worked. A dummy map was used to emulate the data coming from the color sensor, since the robot did not yet have code for navigating a map.

Testing the assembled program

In this part, the full code with all the main functions is tested for errors and compilation mistakes.

Materials

In this section, the materials that have been used to build the robot will be described in more detail.

 

  1. 1 x NXT Intelligent Brick
  2. 2 x NXT Motor
  3. 1 x Color sensor
  4. 1 x Dexter GPS Module (dGPS)
  5. Several Technic Lego bricks

LEGO Mindstorm NXT brick

Lego Mindstorms NXT is a programmable robotics kit released by Lego in late July 2006. The main component of the kit is a brick-shaped computer called the NXT Intelligent Brick. The NXT brick contains a 32-bit microprocessor, has four input ports (sensor ports) and three output ports (servo motor ports), and allows the NXT to communicate wirelessly via Bluetooth. The NXT runs on a 9V power source, either a rechargeable battery or six AA batteries in a battery pack.

Figure 3 – NXT Brick

We placed the Lego Mindstorms NXT at the top of our robot, as this makes it easy to replace the batteries. Figure 4 shows the NXT on our robot.

Figure 4 – Robot top view

NXT Motor

To build a drivable robot we needed two servomotors. The NXT servomotor is a very nice piece of hardware and fairly easy to use. It features a 9V DC motor with a gearbox and a built-in encoder with a resolution of approximately 1 degree. The servomotors act as actuators that are told how fast, in which direction and for how long to run; they are what make the robot move.

Figure 5 – NXT Motor
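
Because of the encoder, movements can be commanded in whole degrees. A small sketch of how two motors are typically driven in NXC (the ports and values are assumptions for illustration, not the project's actual code):

    task main() {
      // Drive straight: both motors, 720 degrees, no turn, synchronized.
      RotateMotorEx(OUT_AC, 75, 720, 0, true, true);
      // Turn on the spot: 100% turn ratio makes the wheels counter-rotate.
      RotateMotorEx(OUT_AC, 50, 360, 100, true, true);
    }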

 

Figure 6 shows how the two servomotors are attached to the Lego Mindstorms NXT with the help of Lego bricks.

Figure 6 – Robot side view

 

Color Sensor

To recognize where the buildings are located, we used a color sensor. The color sensor enables the robot to distinguish between black and white as well as a range of six colors, and it can detect both reflected and ambient light.

Figure 7 – NXT Color Sensor
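
In NXC the sensor can be read as in the minimal sketch below; the port and the on-screen label are assumptions, while the color constants are the standard NXC ones:

    task main() {
      SetSensorColorFull(IN_3);          // full-color mode
      while (true) {
        int c = Sensor(IN_3);            // 1 = black ... 6 = white
        NumOut(0, LCD_LINE1, c);         // raw color number
        if (c == INPUT_BLUECOLOR)
          TextOut(0, LCD_LINE2, "TEK building ");
        Wait(100);
      }
    }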

The color sensor is located at the front of the robot, and as close as possible to the ground to minimize the impact of external light. See figure 8.

Figure 8 – Robot front

 

Dexter GPS Module (dGPS)

To get the robots to find the SDU buildings, we needed some kind of positioning device. For this project, the Dexter Industries GPS module (dGPS) was used. The dGPS sensor provides GPS coordinates to the robot and calculates navigation information; it supplies latitude, longitude, time, speed and heading.

Figure 9 – NXT dGPS module
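
A sketch of displaying the dGPS readings is shown below. Dexter Industries supplies an NXC driver for the sensor; since we do not reproduce its exact function names here, the two wrappers are stubs returning fixed values (roughly SDU's position), so the sketch runs as a simulation:

    // Stubs standing in for the Dexter dGPS driver calls; replace the
    // bodies with the read functions from the supplied dGPS library.
    long dGPSLatitude()  { return 55367400; }   // degrees * 10^6
    long dGPSLongitude() { return 10428600; }   // degrees * 10^6

    task main() {
      while (true) {
        ClearScreen();
        TextOut(0, LCD_LINE1, "Lat:");
        NumOut(30, LCD_LINE1, dGPSLatitude());
        TextOut(0, LCD_LINE2, "Lon:");
        NumOut(30, LCD_LINE2, dGPSLongitude());
        Wait(1000);
      }
    }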

 

As can be seen in figure 10, the dGPS sensor is placed on top of the robot to receive better signals from the satellites. The dGPS sensor is connected to the NXT like any other Mindstorms sensor, via a cable.

Figure 10 – dGPS mounted on robot

Technic Lego bricks

To build a drivable robot, various Technic Lego bricks were used. As shown in figure 6, the different sensors were attached to the NXT using Lego bricks. We gave the robot a shape similar to a car to achieve the best stability and maneuverability.

Stages before the final result

In this section, the stages before the final code and results are described, to show what the project went through before the final result was reached.

First code

This is a description of the first code, written before the task was reformulated. The code was never completely finished, but it gives an insight into how the task was to be solved before the reformulation. In this version, the code had to trace around a building when the robot detected one, find the building's form, size and position, and send this to the other robots once the tracing was complete.

 

The main functions in this code were:

  • calibrateSensor(): calibrates the sensor by letting it sense the different colors and finding, for each building color, a threshold between the color of the building and the color of the map.
  • sensorReading(): lets the robot continually read the inputs from the sensor.
  • forward(): a state machine that lets the robot change between search and trace mode.
  • saveCoords(): saves the coordinates from a reading of the GPS module into an array.

 

What you see below (figure 11) is the function forward(). As mentioned above, this code was built as a state machine in which the robot changes between states when different criteria are met.

When the task is initiated, it runs forever, letting the robot change between states as the criteria for each state are met. The robot starts out in Search mode, where it searches for buildings. When one is found, it changes its mode to TracingBegin, in which the robot is moved into position to begin tracing the building. Once in position, it changes its mode to Tracing and begins to follow the edge of the building, while the saveCoords() function runs every other second, checking whether the sensor is on the color of the building and, if so, saving the robot's coordinates.

This yields a series of coordinates along the building, giving an overview of its form and position. The robot continues until it nears the coordinate where it began tracing, at which point it sends the coordinates and changes its mode back to Search to find a new building to trace.

Figure 11 – The forward() function
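
As the original listing is only shown as an image, here is a minimal sketch of such a state machine in NXC. The state names follow the text; the ports, the edge-following steering and the onBuilding() helper are our assumptions, and the dGPS handling is reduced to comments:

    #define SEARCH        0
    #define TRACING_BEGIN 1
    #define TRACING       2

    int mode = SEARCH;
    int nSaved = 0;                      // coordinate pairs saved so far

    bool onBuilding() {                  // assumed: color sensor on port 1
      return Sensor(IN_1) != INPUT_WHITECOLOR;
    }

    task saveCoords() {
      while (true) {
        if (mode == TRACING && onBuilding())
          nSaved++;                      // the real code stores dGPS lat/lon here
        Wait(2000);                      // runs every other second
      }
    }

    task forward() {
      while (true) {                     // runs forever, switching modes
        switch (mode) {
          case SEARCH:                   // drive until a building is seen
            OnFwd(OUT_AC, 50);
            if (onBuilding()) mode = TRACING_BEGIN;
            break;
          case TRACING_BEGIN:            // move into position at the edge
            RotateMotorEx(OUT_AC, 40, 180, 100, true, true);
            mode = TRACING;
            break;
          case TRACING:                  // follow the edge of the building
            if (onBuilding()) OnFwdSync(OUT_AC, 40, 20);
            else              OnFwdSync(OUT_AC, 40, -20);
            // The real code returns to SEARCH when the dGPS position
            // nears the coordinate where the tracing began.
            break;
        }
        Wait(50);
      }
    }

    task main() {
      SetSensorColorFull(IN_1);
      Precedes(forward, saveCoords);
    }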

The GPS sensor was, however, found to be imprecise, with an error margin of about 3 meters around a point, which was too much for this code to work. This led to a discussion in which the task was reformulated and new requirements were set.

Results

This section contains the results of the project: the brainstorming results, the conceptual design results, the final design of the robot and the programming of the NXT module.

Brainstorming results

The brainstorming process ended with two ideas at the top, which were merged into one as the final project: Treasure hunt using GPS and Obstacle mapping.

Conceptual design results

The result of the conceptual design process was five different papers presented to the class. Our group presented the paper Towards a theory of delegation for agent-based systems by Cristiano Castelfranchi and Rino Falcone, which describes a theory of delegation among agents in a multi-agent system. All the papers presented by the groups related to the project and could be used as inspiration for how to handle an agent in a multi-agent system.

Final design of the robot

The robot's final design, after a few adjustments, ended up as seen in figure xx. It has two large wheels carrying the bulk of the robot, which contains the NXT module and the two motors, and two sets of two small wheels that let it turn easily. The color sensor is placed at the front of the robot, while the GPS module is placed at the back. The whole robot is assembled with Technic Lego, which fits together with the different modules and sensors used in this project.

Programming of the NXT module

In this section you will find a description of the code that traces the buildings, and of the final code, where the code from the different groups has been pieced together into a final program.

Building tracer code

After the task was reformulated, we had to use a built-in 2D map/grid from which to draw the information. Apart from that, the code was now supposed to simulate the movement of the robot, in order to get a consistent result, which was not obtainable with the GPS.

mapBuildingOutline() is the main function and is responsible for outlining the whole building. The function takes two parameters, an x- and a y-value, which define the robot's position.

The code starts by running the setup function, in which the x- and y-values are copied into a global array, to make them more easily accessible to other functions, and a comparison variable is created to determine when the robot is back at its start position.

Afterwards the code enters a do-while loop, in which it checks the next position and draws the map on the LCD screen, until the robot is back at its start position.

Figure 12 – mapBuildingOutline()

checkLeft() and checkForward() are responsible for checking the next field in the array. To do this, the code needs the position of the robot and the direction the robot is facing.

The code uses a switch-case, based on the robot's direction, to determine which position in the predefined map array (dummieMap) it should check. Afterwards it uses the checkColor() function to get the color of the new field and updates the map with the updateMap() method.

The checkLeft() method then determines, based on the color of the field, whether the robot should move left to the new field or use the checkForward() method. The checkForward() method determines whether the robot should move forward to the new field or change the direction of the robot by calling the rotateRight() method.

Figure 13 – checkLeft() and checkForward()

Since the code was only supposed to simulate a robot running, we created a method to illustrate the progress of the process. The drawMap() method outputs all the data from the map[] array to the LCD screen. It uses a nested for-loop to print the data as a 2D map on the screen, which is dynamically updated as newly checked fields enter the map[] array. A consolidated sketch of the whole tracer follows figure 14.

Figure 14 – drawMap()
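
Since figures 12–14 are only shown as images, the pieces described above are gathered into the runnable NXC sketch below. The function names follow the text; the grid size, the dummy building and the left-hand tracing rule are our assumptions:

    #define SIZE  8
    #define FREE  0                     // field color: no building
    #define BUILD 1                     // field color: building

    int dummieMap[SIZE][SIZE];          // simulated world read by checkColor()
    int map[SIZE][SIZE];                // the robot's own map, drawn on the LCD
    int posX, posY, dir;                // position and heading: 0=N 1=E 2=S 3=W
    int startX, startY;                 // comparison values for the do-while

    int checkColor(int x, int y) { return dummieMap[x][y]; }
    void updateMap(int x, int y, int c) { map[x][y] = c; }
    void rotateRight() { dir = (dir + 1) % 4; }

    void neighbor(int d, int &x, int &y) {  // field next to the robot
      x = posX; y = posY;
      switch (d) {
        case 0: y--; break;
        case 1: x++; break;
        case 2: y++; break;
        case 3: x--; break;
      }
    }

    void drawMap() {                    // nested for-loop printing map[]
      ClearScreen();
      for (int y = 0; y < SIZE; y++)
        for (int x = 0; x < SIZE; x++)
          NumOut(x * 6, LCD_LINE1 - y * 8, map[x][y]);
      Wait(200);
    }

    void checkForward() {               // step ahead if free, else turn right
      int nx, ny;
      neighbor(dir, nx, ny);
      int c = checkColor(nx, ny);
      updateMap(nx, ny, c);
      if (c == FREE) { posX = nx; posY = ny; }
      else rotateRight();
    }

    void checkLeft() {                  // prefer left: keeps building on the left
      int nx, ny;
      int left = (dir + 3) % 4;
      neighbor(left, nx, ny);
      int c = checkColor(nx, ny);
      updateMap(nx, ny, c);
      if (c == FREE) { dir = left; posX = nx; posY = ny; }
      else checkForward();
    }

    void mapBuildingOutline(int x, int y) {
      posX = x; posY = y; dir = 2;      // setup: start position, facing south
      startX = x; startY = y;           // comparison values
      do {
        checkLeft();
        drawMap();
      } while (posX != startX || posY != startY);
    }

    task main() {
      for (int x = 2; x <= 4; x++)      // dummy world: a 3x2 building
        for (int y = 3; y <= 4; y++)
          dummieMap[x][y] = BUILD;
      mapBuildingOutline(1, 3);         // start just west of the building
      Wait(5000);
    }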

Final robot code

This code is a collection of all the code from the different groups and was intended to work as the final program running the robot. This has, however, not been possible, because of errors and code that did not work as intended, or at all. The code has therefore been structured with stand-in code emulating the intended functionality in place of the code that did not work. The final code can be run as a simulation on the NXT module, where it writes to the screen to show the events happening in the code.

In the first part of the code, all the libraries made by the other groups are included into the main file with #include statements. This can be seen in figure 15.

Figure 15 – #include statements
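
The include section is not reproduced on this page, but it amounts to a few lines of the following form (Datahandler and Edgefinder are named in the text below; the exact file names are our reconstruction):

    #include "Bluetooth.nxc"     // group 2: Bluetooth connection
    #include "Datahandler.nxc"   // group 4: map sharing
    #include "Edgefinder.nxc"    // building tracer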

The next part contains two 2D arrays holding the robot's internal knowledge about the map. The first one, mapfree[][], contains knowledge about the state of the different areas of the map. The possible states can be seen below:

  • 0 = Free (anyone can go here)
  • 1 = Marked (reserved, the robot is going here)
  • 2 = Occupied (exploration in progress)
  • 3 = Explored (have already been explored)
  • 4 = Treasure (contains the treasure)

The other map, buildingMap[][], contains the positions of the buildings that have been found.

Both maps should be empty at the beginning, but since this is a simulation they have been partly filled in to illustrate their functionality. The two maps can be seen in figure 16.

Figure 16 – freeMap and buildingMap
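
As a rough illustration, the two maps can be declared as below. The grid size and the pre-filled fields are assumptions made for the simulation; note that the prose calls the first array mapfree[][], while figure 16 calls it freeMap:

    #define GRID 6                      // assumed grid size

    int mapfree[GRID][GRID];            // area states: 0 = free ... 4 = treasure
    int buildingMap[GRID][GRID];        // 1 where a building has been found

    task main() {
      mapfree[0][0] = 3;                // pre-filled: already explored
      mapfree[1][0] = 2;                // pre-filled: occupied by another robot
      buildingMap[2][3] = 1;            // pre-filled: a known building field
      NumOut(0, LCD_LINE1, mapfree[0][0]);
      Wait(2000);
    }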

The next part, after the initialization of the maps and before the main task, contains the variables and functions used in the code. Some of these are variables and functions for simulating the functionality, and they should be replaced with the proper functions once those work as intended.

The two functions ConvertMapToStringM() and UpdateMapWithBTStringM() are both taken from the Datahandler library made by group 4 and modified to take a map as a parameter, as the original functions were written for a single map defined inside the Datahandler library. The modified functions can be seen in figure 17.

Figure 17 – ConvertMapToStringM() and UpdateMapWithBTStringM()
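
A sketch of what the two modified helpers can look like is given below: the map is flattened into a string of digits so that it fits through a Bluetooth mailbox, and unpacked again on the receiving side. The signatures and the one-digit-per-field encoding are our assumptions based on the description above:

    #define GRID 6

    string ConvertMapToStringM(int m[][]) {
      string s = "";
      for (int y = 0; y < GRID; y++)
        for (int x = 0; x < GRID; x++)
          s = StrCat(s, NumToStr(m[x][y]));   // one digit per field
      return s;
    }

    void UpdateMapWithBTStringM(int &m[][], string s) {
      for (int y = 0; y < GRID; y++)
        for (int x = 0; x < GRID; x++)
          m[x][y] = StrToNum(SubStr(s, y * GRID + x, 1));
    }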

Another notable function is ChooseFreeSquare(), which handles the robot's choice of which area of the map grid to search. The function simply looks for the number 0, which means the area is free, and changes it to 1 to inform the other robots that the area is taken. When the other robots receive this map, they change the 1 to a 2, which means the area is occupied by another robot; this is done with the UpdateMapStatus() function. The ChooseFreeSquare() function can be seen in figure 18.

Figure 18 – ChooseFreeSquare()
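
A sketch of how ChooseFreeSquare() can be written is shown below; returning the chosen square through reference parameters is our assumption:

    #define GRID 6
    int mapfree[GRID][GRID];

    bool ChooseFreeSquare(int &outX, int &outY) {
      for (int y = 0; y < GRID; y++)
        for (int x = 0; x < GRID; x++)
          if (mapfree[x][y] == 0) {     // 0 = free: no robot has claimed it
            mapfree[x][y] = 1;          // 1 = marked: this robot goes here
            outX = x; outY = y;
            return true;
          }
      return false;                     // no free squares left
    }

    task main() {
      int x, y;
      if (ChooseFreeSquare(x, y)) {
        TextOut(0, LCD_LINE1, "Chose square:");
        NumOut(0,  LCD_LINE2, x);
        NumOut(24, LCD_LINE2, y);
      }
      Wait(3000);
    }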

The most important part of the code is the main task, which runs the whole program. It is here that the order in which the functions should run is defined.

The first thing to run here should be the Bluetooth receiver loop, which is not illustrated in the code, as this part did not work. After that come the functions ChooseFreeSquare() and ConvertMapToStringM(). However, only one robot should run these first, otherwise all robots would choose the same square to search; one robot should therefore start the chain, and the others should wait until they receive a map from the robot before them in the chain.

The function mapBuildingOutline() belongs inside the while loop, but it has been placed outside the loop to show how it works. It cannot be placed inside the loop, as it does not work together with the Datahandler library, because the two libraries handle maps differently. The mapBuildingOutline() function comes from the Edgefinder library, which is described in more detail in the section "First code".

The while loop runs as long as there are unexplored areas on the map. This makes sure the whole map has been covered before the robots begin to search for the treasure inside the buildings found on the map. An if-statement inside the while loop makes the robot wait until it has received an updated map from the robot before it in the chain, before it chooses and searches a new area; this, again, makes sure that two robots do not choose the same area. After the while loop, the code handling the search for the treasure is run. The intended order is condensed into the sketch below.
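
In this sketch, the Bluetooth receive is reduced to a flag and the searching of a square is simulated; unexploredLeft() and exploreOneSquare() are hypothetical helpers, not part of the original code:

    #define GRID 4
    int mapfree[GRID][GRID];            // 0 = free, 3 = explored

    bool unexploredLeft() {             // hypothetical helper
      for (int y = 0; y < GRID; y++)
        for (int x = 0; x < GRID; x++)
          if (mapfree[x][y] == 0) return true;
      return false;
    }

    void exploreOneSquare() {           // stands in for the real search code
      for (int y = 0; y < GRID; y++)
        for (int x = 0; x < GRID; x++)
          if (mapfree[x][y] == 0) { mapfree[x][y] = 3; return; }
    }

    task main() {
      bool mapReceived = true;          // placeholder for the BT receiver loop
      while (unexploredLeft()) {
        if (mapReceived) {
          // Reserve a square (ChooseFreeSquare), share the updated map
          // (ConvertMapToStringM + Bluetooth), then search the square:
          exploreOneSquare();
        }
      }
      // After the loop: search the found buildings for the treasure.
      TextOut(0, LCD_LINE1, "Map explored");
      Wait(3000);
    }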

Most of this code has been set up to show a simulation of how the robot would handle the task given to it, and where the functions should be run. But as most of the code either did not work together or contained errors, it was not possible to set up a fully working program. Even if it had been possible, the code would only work in theory, because of the inaccuracy of the GPS module.

Link to final code: https://www.dropbox.com/sh/liwvqmn2k9do663/AAADpuHn9XzzR83PqHkqTRYea?dl=0

In the clip below, the robot searches for and maps a building.

Discussion

As mentioned earlier, a series of problems presented themselves throughout this project; here we will try to shine a light on these problems and on potential solutions. The main idea behind the project was to build a system of communicating robots that could map and search an area for a treasure. In theory, this sounds like a straightforward task, but as we show below, that turned out to be far from the truth.

The way the task was given and administered meant that five groups, each responsible for a sub-task, were to solve a part of the big task. This resulted in a stalemate when problems arose in the individual groups, as each group's completion of the project depended on all the other groups completing their tasks at a given point. The subdivision of tasks also brought forth the need for some sort of management of how all the different pieces of the puzzle were to fit together, but this was never established, resulting in a mess of non-functional code that each group then had to try to piece together.

The sub-task that we were charged with solving was to build functionality to find and outline buildings in a mapped grid. Due to problems with the accuracy of the chosen location-sensing method (the dGPS module), it was concluded that our functionality could only ever be tested in a simulated environment, with dummy data standing in for the GPS coordinates. The group achieved this simulation after the parameters of the task were changed mid-project, in an effort to accommodate the functionality of another group's code, as can also be seen in the accompanying video. The overall task was, however, never achieved, due to the aforementioned factors of miscommunication, hardware incompatibility and a lack of functional code produced by the other groups.
