Nicklas Schmidt Hansen (nicha13), Anders Nyboe Jensen(andej13) and Niclas Kildahl Laugesen (nilau12)

Introduction

The purpose of this project was to create a multi-agent system of robots that scan an area for a treasure. The area for this project was a miniature version of the SDU campus in Odense, with different colors of cardboard signifying the different buildings. The robots were tasked with finding the buildings and registering their coordinates in a database, from which the other agents could then determine where to go. When all buildings were located, each building was to be searched in order to find the treasure.

The project was split between five groups, each responsible for developing a specific part of this multi-agent system. In the end, every group had to integrate the work of the other groups into its own robot. Our group was responsible for developing software to scan for the treasure within the buildings.

Methods and Materials

The computing platform used for the project was a LEGO NXT controller programmed in the NXC language. The controller was built into a two-wheeled robot with a supporting leg at the back. This allowed the robot to have the smallest possible turning circle while having sufficient propulsion. The robot was equipped with a GPS chip on top and a color sensor underneath.

The group worked in an iterative process of gathering ideas, testing them and evaluating them in order to arrive at innovative solutions.

In this process we have used brainstorming, small-scale testing and pair programming.

Furthermore, we have relied on related work both in the form of robot design and in programming.

Results

Process

As mentioned earlier we came up with a solution through an iterative process.

Through this process, a wide range of ideas was discussed as to how the robot should scan a building for the treasure. Some of the ideas that had the most potential were:

  • Mapping the borders using the GPS and color sensor, using math to determine the borders of the building, and then scanning from side to side inside the building using the GPS to follow the plan.
  • Starting from a point inside the building, based on the GPS coordinate associated with the building, and scanning outwards in a spiraling motion, stopping only when a full lap had been made without finding the color associated with the building.
  • Locating GPS points for the corners of the building and using these to determine the center of the building, then scanning in an increasingly large square, using the color sensor to determine the borders and alter the course accordingly.
  • Using the GPS coordinates of the building's borders, provided by another group, to create a box surrounding the building. The robot will then calculate points to visit within this box and scan the box from side to side using these GPS coordinates and the GPS sensor.

We ended up using the last solution, since it was efficient and required less computation than some of the others.

The small-scale test showed that we could use the robot's heading function to keep track of its direction, and that this could be used to keep the robot on track when moving from one point to another.

The pair programming method made our coding process more efficient by catching mistakes early and by letting us continuously brainstorm coding solutions.

Code

This section contains a description of how this group thinks the general structure of the overall multi-agent system should look. The overall structure of the multi-agent system will be broken down into five different tasks: communication between the agents in the multi-agent system, finding the buildings, verifying the buildings, outlining the buildings, and scanning the buildings for treasure. Furthermore, there will be a detailed description of how the algorithm we are responsible for works and the idea behind it. First, however, there will be a description of how to use the different sensors, which are: compass, GPS and color sensor.

Sensors

Compass

The compass used for this project is called a HiTechnic NXT Compass Sensor. This compass utilizes the I2C communication protocol which means that in order to use the sensor, you will need to initialize it like so:

SetSensorLowspeed(compass);

When it has been initialized you can access the sensor data by calling this function:

SensorHTCompass(S2);
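As a minimal sketch of how these two calls fit together (assuming the compass is connected to port S2 and that compass is simply a define for that port), reading the heading could look like this:

// Minimal sketch: read the compass heading once per second.
// Assumption: the HiTechnic compass is plugged into port S2.
#define compass S2

task main()
{
    SetSensorLowspeed(compass);                 // initialize the I2C (lowspeed) port
    while (true)
    {
        int heading = SensorHTCompass(compass); // heading in degrees, 0-359
        NumOut(0, LCD_LINE1, heading);          // show it on the NXT display
        Wait(1000);
    }
}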

GPS

The GPS sensor is a Dexter Industries GPS sensor made for the LEGO NXT system. This sensor is utilized through an external library made by Dexter Industries. In order to initialize the GPS, this function is called:

SetSensor(DGPS, SENSOR_LOWSPEED);

The data we are interested in from the GPS sensor is: latitude, longitude and linkstatus. The linkstatus is a boolean variable that is true when the GPS is properly connected to GPS satellites and false otherwise. This data is accessed using these three functions from the GPS library:

longitude: DGPSreadLongitude(DGPS);

latitude: DGPSreadLatitude(DGPS);

linkstatus: DGPSreadStatus(DGPS);
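As a rough illustration, the sketch below polls the link status and only reads coordinates once the dGPS reports a satellite fix. It assumes DGPS is a define for the port the sensor is attached to and that the Dexter Industries driver file has been included; the unit of the returned values depends on that library.

// Sketch: wait for a GPS fix, then read the position once.
// The Dexter Industries dGPS driver must be included above this code.
#define DGPS S1   // assumed port for the dGPS sensor

task main()
{
    SetSensor(DGPS, SENSOR_LOWSPEED);

    // wait until the sensor reports a valid satellite link
    while (!DGPSreadStatus(DGPS))
        Wait(500);

    long lat = DGPSreadLatitude(DGPS);    // units depend on the library
    long lon = DGPSreadLongitude(DGPS);

    NumOut(0, LCD_LINE1, lat);
    NumOut(0, LCD_LINE2, lon);
    Wait(5000);
}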

Color Sensor

The color sensor is an NXT color sensor. We want to access the three color channels separately, so that we can have maximum precision. To use the sensor in this way, it must be initialized with this function:

SetSensorColorFull(colorSensor);

In order to access the three channels the following function calls must be made:

Red: ColorSensorValue(colorSensor, INPUT_RED);

Green:ColorSensorValue(colorSensor, INPUT_GREEN);

Blue: ColorSensorValue(colorSensor, INPUT_BLUE);
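Put together, a small sketch that reads all three channels (assuming the sensor sits on port S3 behind a colorSensor define) could look like this:

// Sketch: read the three color channels once and show them on the display.
// Assumption: the NXT color sensor is connected to port S3.
#define colorSensor S3

task main()
{
    SetSensorColorFull(colorSensor);

    int r = ColorSensorValue(colorSensor, INPUT_RED);
    int g = ColorSensorValue(colorSensor, INPUT_GREEN);
    int b = ColorSensorValue(colorSensor, INPUT_BLUE);

    NumOut(0, LCD_LINE1, r);
    NumOut(0, LCD_LINE2, g);
    NumOut(0, LCD_LINE3, b);
    Wait(3000);
}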

Communication between robots

Because the robots are collaborating in this multi agent system, they need to communicate in some way. We believe it would be best to share an array of coordinates for each building, several variables and some shared mutexes.

variables:

  • bool buildingDiscovered
    • a boolean for each building that tells whether the building has been detected
  • bool buildingVerified
    • a boolean for each building that tells whether the building has been verified.
  • bool buildingOutlined
    • a boolean for each building that tells whether the building has been outlined.
  • bool buildingTakenForVerification
    • a boolean for each building that tells whether a robot has chosen to try to verify this building.
  • bool buildingTakenForScan
    • a boolean for each building that tells whether a robot has chosen to scan this building for treasure.
  • bool buildingChecked
    • a boolean for each building that tells whether the building has been scanned for treasure.
  • int buildingIndex
    • an integer value for each building that provides the size of the coordinate array for the building
  • bool treasureFound
    • a boolean that tells whether the treasure has been found

mutexes:

  • a mutex for each building, ensuring that only one robot can make changes to that building's data.
  • a mutex for all the data at once. To be used when updating all the shared information.


We have created a struct called coordinate. This struct contains, of course, longitude and latitude, but also a verified and a foundByRobot attribute. The longitude and latitude should speak for themselves. The verified attribute is a boolean that tells whether the coordinate has been verified by another member of the multi-agent system. Lastly, foundByRobot is an integer attribute that holds the ID of the robot that originally found the coordinate. This also means that each robot must have its own unique ID.
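A possible NXC declaration of this shared data, following the variable and mutex lists above, is sketched below. The array sizes, the colour index defines and the mutex names are our own assumptions, not a specification from the other groups.

// Sketch of the shared data for the five buildings.
#define NUM_BUILDINGS 5
#define green  0   // assumed colour indices
#define yellow 1
#define red    2
#define black  3
#define blue   4

struct coordinate
{
    float latitude;
    float longitude;
    bool  verified;       // has another robot confirmed this coordinate?
    int   foundByRobot;   // unique ID of the robot that originally found it
};

// shared per-building flags, indexed by the colour constants above
bool buildingDiscovered[NUM_BUILDINGS];
bool buildingVerified[NUM_BUILDINGS];
bool buildingOutlined[NUM_BUILDINGS];
bool buildingTakenForVerification[NUM_BUILDINGS];
bool buildingTakenForScan[NUM_BUILDINGS];
bool buildingChecked[NUM_BUILDINGS];
int  buildingIndex[NUM_BUILDINGS];   // number of coordinates stored for each building

bool treasureFound;

// one mutex per building plus one covering all shared data
mutex greenMutex;
mutex yellowMutex;
mutex redMutex;
mutex blackMutex;
mutex blueMutex;
mutex allDataMutex;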

Finding the buildings

Because we were not responsible for this task, we will not go into much detail; we will merely outline how we would solve it.

We would make sure that there is an area which the robots are not allowed to exit. This area should contain all of the buildings. To begin with, the robots will drive in a straight line. When a robot reaches the boundary of the enclosed area or a building, it will randomly select a new direction and continue in a straight line. If it has reached a building, it will also save the coordinate and set the corresponding buildingDiscovered flag. Please note that the robot must first acquire the corresponding building mutex before saving and updating this information.
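As an illustration of the note above, registering a newly found building under its mutex could look roughly like the sketch below. It builds on the shared declarations sketched earlier; the green building, MY_ROBOT_ID and the function name are our own placeholders, and the actual storage of the coordinate is left out.

// Sketch: the robot has just driven onto the green building at (lat, lon).
#define MY_ROBOT_ID 1   // assumed unique ID of this robot

void registerGreenBuilding(float lat, float lon)
{
    coordinate c;
    c.latitude     = lat;
    c.longitude    = lon;
    c.verified     = false;
    c.foundByRobot = MY_ROBOT_ID;

    Acquire(greenMutex);                 // lock the green building's shared data first
    // append c to the green building's coordinate list here (storage omitted)
    buildingIndex[green]++;
    buildingDiscovered[green] = true;
    Release(greenMutex);
}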

Verifying the buildings

Even though we were not responsible for developing this task we have come up with an idea as to how to solve this task. Furthermore, we have produced a function to determine which coordinate to verify.

When all the buildings have been discovered, the robots will start to verify the found coordinates. In order to do that, each robot must determine which coordinate to check. The function we have produced for this purpose picks a random number between 0 and the number of buildings and uses this number as a starting point. If the building corresponding to this number has a coordinate that was not discovered by the robot executing the function, the robot will try to verify this coordinate, but only if the mutex for that building is not held by another robot. In that case the robot acquires the corresponding mutex, drives to the coordinate and checks that there is a building there and that it is the correct building. However, if the mutex is held by another robot, or the coordinate was found by this robot itself, the robot adds 1 to the number, wrapping back to 0 when the number reaches the number of buildings, and tries the next building.
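The selection logic described above could be sketched as follows. This is only an outline of the idea: it reuses the shared flags from the earlier declarations, foundByMe() is a placeholder for a check against the building's stored coordinates, and the claim of a building would in practice happen under that building's mutex.

// Placeholder: would inspect the building's coordinate entries and compare foundByRobot
// against this robot's own ID.
bool foundByMe(int building)
{
    return false;
}

// Sketch: pick a building for this robot to verify, or return -1 if none fits right now.
int pickBuildingToVerify()
{
    int start = Random(NUM_BUILDINGS);                // random starting point, 0 .. NUM_BUILDINGS-1
    for (int step = 0; step < NUM_BUILDINGS; step++)
    {
        int b = start + step;
        if (b >= NUM_BUILDINGS) b -= NUM_BUILDINGS;   // wrap around to 0

        // skip buildings we found ourselves or that another robot is already handling
        if (!buildingVerified[b] && !buildingTakenForVerification[b] && !foundByMe(b))
        {
            buildingTakenForVerification[b] = true;   // claim it (under the building mutex in practice)
            return b;
        }
    }
    return -1;
}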

Outline the buildings

A group was tasked with creating an algorithm for outlining the buildings. These data points are to be communicated to the rest of the groups so that they can be acted upon.

To obtain a complete set of data points, the group could be imagined to have solved the task by following the line that forms the border of the building. This can be done by sweeping the robot from inside to outside the building and recording the coordinate each time there is a change. By only moving one wheel at a time, the robot would move along the course of the line; it would then stop when it registers the same coordinate twice. This requires a rather accurate GPS, but given such a GPS it would be a sound solution.

Scanning buildings for treasure

The purpose of the algorithm this group is responsible for is to search for the treasure within a building. This software is utilized when the robot has reached the building. Furthermore, the algorithm has access to an array of coordinates of the edge of the building. The array will be accompanied by the color of the building it belongs to. From this array, the algorithm will calculate corners of a box surrounding the building (see image 1). The algorithm will now calculate points to visit within the building (see image 2).


 

Image 1 – showing the outer points


 

Image 2 – showing the surrounding box and its corners


While the robot is going from point to point the robot will use the color sensor to check for the treasure. If the treasure is found the robot will broadcast a message to the other robots (this broadcast message is out of scope for this group).

Now that the tasks the algorithm should perform have been described, we will describe how the algorithm performs them. This description contains flowcharts and pieces of code. First is a flowchart of the overall task (see image 3).

Image 3 – Flowchart of the algorithm searching the building for treasure

 

As stated earlier, the box description will be handed to us as an array from another group. CalcCorners is the function that calculates the corners of the box surrounding the building. CalcBoxPoints is the function that calculates the points to visit within the box. GoToPoints is the function that moves the robot from point to point (this function is provided by another group). After a point is visited, the algorithm checks if there are more points in the array. If there are, it goes to the next point; if not, it ends the search. Furthermore, the algorithm will end when the treasure is found.

CalcCorners

The corner coordinates will be (minimum latitude value, minimum longitude value), (minimum latitude value, maximum longitude value), (maximum latitude value, maximum longitude value) and (maximum latitude value, minimum longitude value).
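A sketch of how calcCorners could be implemented is shown below. It assumes the coordinate struct from earlier, that the edge array handed over by the outlining group holds the building's border points, and that lower/upper refers to latitude while left/right refers to longitude.

// Sketch: find the bounding box of the building's edge points.
// edge[] is the coordinate array provided by the outlining group, n its length;
// corners[] receives lower left, upper left, upper right, lower right (in that order).
void calcCorners(coordinate edge[], int n, coordinate & corners[])
{
    float minLat = edge[0].latitude;
    float maxLat = edge[0].latitude;
    float minLon = edge[0].longitude;
    float maxLon = edge[0].longitude;

    for (int i = 1; i < n; i++)
    {
        if (edge[i].latitude  < minLat) minLat = edge[i].latitude;
        if (edge[i].latitude  > maxLat) maxLat = edge[i].latitude;
        if (edge[i].longitude < minLon) minLon = edge[i].longitude;
        if (edge[i].longitude > maxLon) maxLon = edge[i].longitude;
    }

    corners[0].latitude = minLat; corners[0].longitude = minLon;   // lower left
    corners[1].latitude = maxLat; corners[1].longitude = minLon;   // upper left
    corners[2].latitude = maxLat; corners[2].longitude = maxLon;   // upper right
    corners[3].latitude = minLat; corners[3].longitude = maxLon;   // lower right
}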

CalcBoxPoints

Before we can calculate the box points, we need to convert from meters to degrees in order to find the offset to the next point. This is done by utilizing the haversine formula, which calculates the distance in meters between two coordinates:

a = sin²(Δφ/2) + cos φ1 ⋅ cos φ2 ⋅ sin²(Δλ/2)

c = 2 ⋅ atan2( √a , √(1−a) )

d = R ⋅ c

where Δφ is the difference in latitude, φ1 is the starting latitude, φ2 is the ending latitude, Δλ is the difference in longitude, d is the distance and R is the radius of the earth. Please note that longitudes and latitudes must be converted to radians. In order to calculate the offset in coordinates, Δλ is isolated in the haversine formula.

This gives the following formula:

Δλ = 2 ⋅ arcsin( √( ((tan(0.1 / (2R)))⁻² + 1)⁻¹ / cos²(φ) ) )

Please note that φ is a constant here, since it serves as both the start and end latitude, and that 0.1 is the desired offset of 10 cm expressed in meters.
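A possible implementation of this conversion, written with the equivalent arcsine form of the inverted formula, is sketched below. The function name matches the convertDiffMetersToDiffLon call used in the code that follows; the earth radius constant is our own assumption.

// Sketch: convert an east-west distance in meters into a longitude offset in degrees
// at the given latitude, by inverting the haversine formula with Δφ = 0.
#define EARTH_RADIUS 6371000.0   // mean earth radius in meters (assumed value)

float convertDiffMetersToDiffLon(float meters, float latDeg)
{
    float latRad = latDeg * PI / 180.0;
    // d = 2R*asin( cos(lat)*sin(Δλ/2) )  =>  Δλ = 2*asin( sin(d/(2R)) / cos(lat) )
    float dLonRad = 2.0 * asin( sin(meters / (2.0 * EARTH_RADIUS)) / cos(latRad) );
    return dLonRad * 180.0 / PI;  // back to degrees
}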

This function calcBoxPoints will be explained using the following piece of code:

// for storing the calculated coordinates
// (note: in NXC the array must be given a size, e.g. with ArrayInit, before elements are assigned)
coordinate buildingCoordinates[];
// for storing the corners
coordinate corners[4];
// first building coordinate will be the lower left corner
buildingCoordinates[0] = corners[lowerLeft];
// next will be the upper left corner
buildingCoordinates[1] = corners[upperLeft];
// we have to find what 10 cm to the east is in coordinates for yMax and yMin
float yMax = corners[upperLeft].y;
float yMin = corners[lowerLeft].y;
float offsetYMax = convertDiffMetersToDiffLon(0.1, yMax);
float offsetYMin = convertDiffMetersToDiffLon(0.1, yMin);
coordinate nextCoordinate;

int index = 2;
int state = 0;
do{
    if (state == 0){  // going towards the right and yMax
        nextCoordinate.y = yMax;
        nextCoordinate.x = buildingCoordinates[index - 1].x + offsetYMax;
    }
    else if (state == 1){ // going towards yMin
        nextCoordinate.y = yMin;
        nextCoordinate.x = buildingCoordinates[index - 1].x;
    }
    else if (state == 2){ // going towards the right and yMin
        nextCoordinate.y = yMin;
        nextCoordinate.x = buildingCoordinates[index - 1].x + offsetYMin;
    }
    else if (state == 3){ // going towards yMax
        nextCoordinate.y = yMax;
        nextCoordinate.x = buildingCoordinates[index - 1].x;
    }
    buildingCoordinates[index] = nextCoordinate;
    index++;
    state = (state + 1) % 4; // wrap back to state 0 after the fourth move
}while(buildingCoordinates[index - 1].x < corners[lowerRight].x);

     

Looking for the treasure

Because we need to look for the treasure while going from point to point, the two functions must run in parallel. This is achieved by utilizing the task structure provided by the NXC language. All of the functionality described above runs in one task, while the search for the treasure runs in another. The task searching for the treasure will look at the three color channels of the color sensor and check whether they match the color of the treasure.
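A rough sketch of this two-task structure is given below. The treasure colour check (a dominant red reading), the port define and the thresholds are our own assumptions; the driving task is only a stub standing in for the point-to-point logic described above.

// Sketch: one task drives between points while another watches for the treasure colour.
#define colorSensor S3          // assumed port for the colour sensor

bool treasureFound = false;     // shared flag (part of the shared data in the full program)
mutex treasureMutex;

task scanForTreasure()
{
    while (!treasureFound)
    {
        int r = ColorSensorValue(colorSensor, INPUT_RED);
        int g = ColorSensorValue(colorSensor, INPUT_GREEN);
        int b = ColorSensorValue(colorSensor, INPUT_BLUE);

        // assumed treasure signature: red clearly dominates the other two channels
        if (r > 2 * g && r > 2 * b)
        {
            Acquire(treasureMutex);
            treasureFound = true;    // picked up by the driving task (and broadcast elsewhere)
            Release(treasureMutex);
        }
        Wait(50);
    }
}

task driveBetweenPoints()
{
    while (!treasureFound)
    {
        // calcBoxPoints / goToPoints would be called from here
        Wait(100);
    }
    Off(OUT_BC);                 // stop the motors once the treasure is reported
}

task main()
{
    SetSensorColorFull(colorSensor);
    Precedes(scanForTreasure, driveBetweenPoints);
}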

When to switch between the tasks

As already mentioned, there are four tasks that have to be dealt with. However, they need to be dealt with at different times. First all the buildings need to be found, then the buildings need to be verified, afterwards the buildings need to be outlined, and lastly the buildings need to be searched for treasure. So how do we determine which task to perform?

The first task will be performed as long as one or more buildings have not been found:

if( !buildingFound[green]  ||
    !buildingFound[yellow] ||
    !buildingFound[red]    ||
    !buildingFound[black]  ||
    !buildingFound[blue] ){
    // task 1: find the buildings
}

     

The second task will be performed when all buildings have been found and one or more buildings have not been verified:

else if( !buildingVerified[green]  ||
         !buildingVerified[yellow] ||
         !buildingVerified[red]    ||
         !buildingVerified[black]  ||
         !buildingVerified[blue] ){
    // task 2: verify the buildings
}
    

     

The third task will be performed as long as all buildings have been found and verified but one or more buildings have not been outlined:

else if( !buildingOutlined[green]  ||
         !buildingOutlined[yellow] ||
         !buildingOutlined[red]    ||
         !buildingOutlined[black]  ||
         !buildingOutlined[blue] ){
    // task 3: outline the buildings
}
    

     

The fourth task will be performed in the case that all the buildings have been found, verified and outlined but the treasure has not been found. This is simply an else clause.

All these if-else statements are located within a while loop that checks whether the treasure has been found. That means that if the treasure is found at any point during any task, the robots will stop all of the tasks and gather at the treasure.
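Put together, the structure described above amounts to a loop of roughly the following shape (a skeleton only; the bodies of the branches are omitted, and buildingFound corresponds to the buildingDiscovered flag from the shared data):

// Skeleton of the overall task switching.
while (!treasureFound)
{
    if (!buildingFound[green] || !buildingFound[yellow] || !buildingFound[red] ||
        !buildingFound[black] || !buildingFound[blue])
    {
        // task 1: search for buildings
    }
    else if (!buildingVerified[green] || !buildingVerified[yellow] || !buildingVerified[red] ||
             !buildingVerified[black] || !buildingVerified[blue])
    {
        // task 2: verify the registered coordinates
    }
    else if (!buildingOutlined[green] || !buildingOutlined[yellow] || !buildingOutlined[red] ||
             !buildingOutlined[black] || !buildingOutlined[blue])
    {
        // task 3: outline the buildings
    }
    else
    {
        // task 4: scan the buildings for the treasure
    }
}
// treasure found: stop the current task and drive to the treasure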

Heading controller

Even though we ended up not being responsible for utilizing the compass or steering the robot according to its heading, we were responsible for it at an earlier stage. As a result, we had already implemented a solution for steering the robot according to its heading using a P controller.

We implemented functionality so that the robot can be given a desired coordinate; it then calculates the heading from its current position to that coordinate. Furthermore, the robot is able to turn towards that exact heading using a P controller.

Calculate heading

For calculating the heading we have used information from this page: http://www.movable-type.co.uk/scripts/latlong.html.

The equation is as follows:

θ = atan2( sin Δλ ⋅ cos φ2 , cos φ1 ⋅ sin φ2 − sin φ1 ⋅ cos φ2 ⋅ cos Δλ )

in code it looks like this:

float difLon = desiredLon - longitude;
float desiredHeading;
float toRad = PI/180;
float y = sin(difLon*toRad) * cos(desiredLat*toRad);
float x = cos(latitude*toRad) * sin(desiredLat*toRad) - sin(latitude*toRad) * cos(desiredLat*toRad)*cos(difLon*toRad);
desiredHeading = atan2(y, x) * 180/PI; // convert back to degrees
desiredHeading = normalizeAngle(desiredHeading); // convert from interval [-180, 180] to [0, 360]
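The normalizeAngle helper referenced above is not shown in the original snippet; a minimal version that maps the atan2 result from [-180, 180] into [0, 360) could look like this:

// Minimal helper (our own sketch): map an angle into the [0, 360) range used by the compass.
float normalizeAngle(float angle)
{
    while (angle < 0)    angle += 360;
    while (angle >= 360) angle -= 360;
    return angle;
}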

 

P – controller

The kind of controller we are talking about is a controller from control system theory; we will only explain the basics here. The purpose of such a controller is to move a system from one state to another effectively. In this case we want to input the desired heading and have the system reach that heading. So what is a control system? A simple control system can be seen in image 4: it consists of an input r, an error signal e, a controller, a control signal u, a system, an output y and feedback. In our case the input is the desired heading, i.e. the heading that makes the robot point towards the desired position. The output is the actual heading of the robot. The error signal is the desired heading minus the measured heading. The system is the robot's steering ability; in other words, it is how the control signal affects the output, i.e. the heading. The controller is responsible for minimizing the error effectively, i.e. changing the heading towards the desired heading. The control signal is the force the controller has calculated to minimize the error effectively.


Image 4
simple control system

 

P stands for proportional. The controller itself is actually a proportional controller combined with a differential controller, but here we only describe the proportional part. A proportional controller is quite straightforward: it takes the error and multiplies it by a constant, p, hence the name. This means that the greater the error, the greater the control signal, which in turn makes the robot turn faster. The equation is as follows:

u = e ⋅ p
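As an illustration only (the gain, the base power and the motor layout are assumptions, not the group's tuned controller), a proportional heading controller in NXC could look like the sketch below. It assumes the compass has already been initialized as shown earlier.

// Sketch of a proportional (P) heading controller.
// Assumptions: compass on port S2, driving motors on ports B and C.
#define compass S2
#define KP 0.8            // assumed proportional gain, would need tuning on the real robot

void steerTowards(float desiredHeading)
{
    float heading = SensorHTCompass(compass);   // measured heading, 0-359 degrees
    float error = desiredHeading - heading;

    // take the shortest way around the circle so the error stays within [-180, 180]
    if (error > 180)  error -= 360;
    if (error < -180) error += 360;

    float u = KP * error;                       // control signal: u = e * p
    if (u > 40)  u = 40;                        // clamp the correction
    if (u < -40) u = -40;

    // apply the correction as a power difference between the two wheels
    // (the sign may need flipping depending on how the motors are mounted)
    int leftPower  = 60 + u;
    int rightPower = 60 - u;
    OnFwd(OUT_B, leftPower);
    OnFwd(OUT_C, rightPower);
}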

Discussion

The lack of clarity regarding the project's scope, and the constant changes to our specific task within it, put us in a situation where less than ideal options were chosen. Furthermore, we were unable to perform a full-scale test. The solution that we have come up with shows promise in its efficiency and versatility; it can be used on any shape and to locate any object with only very few modifications.

The solution relies heavily on the GPS's accuracy; however, tests have shown that the GPS module used for our project was rather inconsistent. This could be accounted for by using one of the other solutions, but those would be very time-consuming and/or require much more code and computation.

For our group, we still need to put all the pieces of code together and test their ability to act on information from the other robots in real time. The algorithm that we have created also needs to be tested on a robot in the same environment it will be used in.

Download (ZIP, 44KB)
