Reading GPS & Steering the Robot

This project was executed and described by Group 5:
Jonas Grundtvig (jgrun13), Søren Holm Juul (sojuu12), Thor Lind (thlin10), Trine Søberg Torp (trtor13)

University of Southern Denmark
2nd May 2017


The aim of this project is to build a multi-agent system application using the LEGO Mindstorms robots. The project aims at exploring a certain area at SDU and searching for a treasure. While searching for the treasure, the entire area is mapped, which leads to the project name: “Treasure Hunting with Obstacle Mapping”. Each group in the class works towards the goal with its own robot, but by collaborating and splitting the different tasks among the groups, the project can be completed faster and more cooperatively.

The tasks that need to be resolved in order to reach the main goal are the following:

  • Build the robot/vehicle and identify the needed sensors and actuators
  • Implement the GPS-module to be able to track position
  • Create a map with the environment for the buildings and treasure
  • Programming:
    • Find out how to send/receive information between the robots
    • Build a function that is capable of detecting a location of a building and link it with its color
    • Build communication network based on Bluetooth connection
    • Build a scanner for buildings
    • Build a data exchange mechanism to share information between robots via Bluetooth connection
    • Drive the robot to the right location when GPS coordinates are received
  • Compile all the work into each robot, so that they are all able to do their task in the multi-agent system
  • Find the treasure and get everybody to the location!!!

The environment

The environment in which the robots search for the treasure is a printed map of a part of SDU. An overview of the entire map can be seen in figure 1A. The map contains the buildings of a certain area at SDU, and the buildings that matter to the project are marked with different colors (the rest are grey):

  • The yellow area: The parking lot between the TEK-building and the rest of the campus
  • The blue area: The TEK-building
  • The black area: The sports club
  • The red area: The main building
  • The green area: The Maersk-building


Figure 1A: The environment

To be able to test the early prototypes, a simpler map was made that only takes the colored buildings into account. The training map can be seen in figure 1B. The map clearly separates the different colors to make it easier to test the color sensor and associate GPS coordinates with them.


Figure 1B: The training map

Things to consider regarding the environment include, among others, whether or not the other buildings (the grey ones) should be taken into account when moving around in the environment. It also needs to be decided how big the printed map should be, to make sure that GPS inaccuracy does not disturb the treasure hunt. Furthermore, the environment has to fit all of the robots at the same time, which impacts both the size of the map and the programming of the robots (to make sure that they do not collide in the environment).

The agents

The agents will share the workload in this project. They will keep track of places they have been and communicate them to the other robots during the test. They will also keep track of where the buildings are, and when the treasure has been found, the robot that finds it will send a signal to the other robots, along with the coordinates of the treasure. When this signal is received, the robots will drive towards the goal and meet up there.

The agents detect the buildings using the color sensor and distribute that information via the Bluetooth connection, by updating a map of coordinates which holds a state for each coordinate. Each state represents either a search state or a building state. The states are organized as follows:

  • 0: A tile which has not been searched
  • 1: A tile which is going to be searched
  • 2: A tile which is currently being searched
  • 3: A tile which has been searched and does not hold the treasure
  • 4-8: Different colored buildings
  • 9: A tile which holds the treasure
Figure 2: The arrows show the direction of the data stream between the robots.

Our subtask is twofold, as each robot needs to act as both a master and a slave: it must receive information from one robot as well as send information to another. This is done in a cycle, as seen in Figure 2. Every robot is a master to one of the other robots and a slave to another.

Every robot has the same task, so there is no distribution of tasks among the robots. Every robot should follow the same structure of subtasks.

They should drive around, search each field, and map the buildings. When they start mapping a field, they should update the state of that field. After a field has been fully searched, the robot should communicate what it has found to the other robots (update the field states).

Materials and Methods


This project makes use of the LEGO Mindstorms NXT to create a multi-agent system. The LEGO Mindstorms NXT is a programmable robotics kit released by LEGO. The programming of the NXT brick has been done in the C-inspired programming language NXC, which is designed for programming LEGO Mindstorms robots. To be able to repeat this project you will need the following materials:

NXT intelligent brick

The main component of the LEGO Mindstorms kit is the brick-shaped computer called the NXT intelligent brick. It works as “the brain” of the robot. It can take input from up to four sensors and control up to three motors. The brick has an LCD screen and four buttons that can be used to navigate the user interface shown on the screen. The most important feature for this project is the Bluetooth support, which is what makes the robots able to “talk” to each other. The battery of the NXT is rechargeable.

Two motors

To be able to move the robot around, we need two interactive servo motors with wheels connected to them. The motors function as actuators that receive a message telling them how fast to go, in which direction and for how long. These actuators make the robot able to move to the goal and stop when reaching it.

Color sensor

The color sensor enables the robot to distinguish between a range of six colors, including black and white. This sensor is used to detect the color of the different buildings on the map. It can detect both reflected and ambient light. It can also emit three colors of light (red, green and blue), but this feature is not necessary in this project.

GPS sensor

With the GPS sensor for the NXT, the robot is able to know where it is and where it is heading when outdoors. It can provide the following data to the LEGO Mindstorms NXT robot: latitude, longitude, time, speed and heading. The module also has a microcontroller able to perform positioning-related calculations (e.g. the heading towards target coordinates).


Compass sensor

The digital magnetic compass measures the earth’s magnetic field and calculates a heading angle. The magnetic heading is calculated to the nearest 1° and returned as a number from 0 to 359. The heading is updated 100 times per second, and the sensor operates in two modes, read mode and calibrate mode. In read mode, the current heading is calculated and returned each time the NXT program executes a read command. In calibrate mode the compass can be calibrated to compensate for externally generated magnetic field anomalies, such as those that surround motors and batteries, thereby maintaining maximum accuracy.

Printed map with buildings in different colors

Finally, you will need a printed map as described in the previous section “The environment”. You can use any printed map containing areas printed in the different colors represented on the previously described map.


In this section, the methods of the project will be described. Some of the methods have been done in collaboration with the rest of the class, and the rest has been done within the group. The different methods will be described individually, but all affect each other in some way.


The overall goal of this project is to make a multi-agent system (MAS) with LEGO Mindstorms, where each group in the class is in control of its own robot as part of the bigger system. To make sure that the class had a common starting point for the project, a brainstorming exercise was initiated with the aim of finding the direction of the project. The brainstorming was done online, where the members of the class could send their proposals to a shared page. The brainstorming resulted in many proposals for the direction, and when they had all been presented, the class held an online vote to agree on a project.

Conceptual Model

With the direction of the project in the bag, modelling the concept of the robot could begin. The robot needs to be able to move freely around on the basis of GPS readings. The robot also needs to contain the different sensors and actuators needed to solve the given tasks. On top of that, the robot is very dependent on being able to communicate with the other robots to reach the common goal.

Simple Prototype Building

When the conceptual model was done, it was time to start building the robot. The first part of the prototype building was to make sure that the NXT software was up to date and running problem-free. This was tested by adding a motor and a touch sensor to the NXT and making the motor run when the button was pressed. The next part of the building process included testing of the color sensor and the GPS module.

When all of this worked to our satisfaction, another NXT was added to the process to simulate the other robots in the project (handled by the other groups), but no extra sensors/actuators were added in this process.

High Fidelity Prototype Building

After the first design was done, the different tasks of the project were divided between the different groups. This was done to make the workload smaller, and thereby to make sure that the common goal would be reached in time. By focusing on the task given to the group, an evaluation of the design of the robot was done, which revealed that we needed another sensor to complete the task. Adjustments of the primary sensors were also made. This will be described in the results.

Multi-agent System

A multi-agent system consists of a group of robots that complete a task by working together. In our case we have a cooperative multi-agent system which searches an area and keeps an updated map of which areas have been searched. This map is distributed amongst the robots via the Bluetooth communication and kept up to date at all times. The robots each search their own individual field, which means that two robots cannot search the same field at the same time. When the goal is found, the robot that finds it sends the coordinates to the rest of the robots, who then proceed to move towards the goal.

There are four principles in MAS:

  1. Capabilities for solving the problem in each agent
  2. No global control, which means that no single unit controls the rest
  3. Decentralized data
  4. Asynchronous computation


Execution of the methods

By doing an in-class brainstorming and voting for the best direction of the project, the direction ended up being “Treasure Hunt with Obstacle Mapping” which is a combination of two different proposals.

The design of the robot worked fairly well. To be able to move around, the NXT needed a body made out of LEGO bricks and wheels of some kind controlled by motors. The robot was built with two belts as drivetrain and is controlled using tank-like controls, which gives the robot a lot of grip and makes it able to turn on a dime.

Figure 3: The robot

Figure 4: The Color Sensor

Figure 5: The GPS Sensor

To get input from the surroundings, the robot also needed a color sensor to track colors. The color sensor was placed underneath the robot and worked very well, with no false readings at all.

The robot also needed a GPS module to be able to either send coordinates to the other robots or go to received coordinates when the treasure is found. The GPS module was placed on the right side of the robot, after testing the performance of the module in different positions and orientations gave inconclusive results. The robot was sometimes able to locate the given goal position, but it was not reliable due to very poor performance by the GPS module attached to the robot.

By writing a slave and a master program, one robot was able to send information to the others via Bluetooth (which is integrated in the NXTs). At first the master made the motors on the slave run, and afterwards the master became able to send coordinates to the slave. This was an important feature to get working before the group could do the given task, and it was used to simulate the receiving of coordinates from other robots.

Due to missing functions in the first prototype, it was decided to add the compass sensor to the robot. The compass was placed on top of the robot, raised about 15 cm from the “body”. This was done in accordance with the design guidelines given in the documentation, which state that the compass should be placed 10-15 cm away from the NXT and any motors.

Programming of the NXT

The robot is programmed using the program BricxCC and the language NXC (Not eXactly C). The idea of the final code (shared by the whole class) was to divide it into different subtasks controlled by a main task, including:

  1. Mapping buildings using color sensor for distinguishing between location of the buildings
  2. Data exchange between Master and Slave robots using Bluetooth
  3. Area scanning used for finding the treasure
  4. Data sharing which will work as a database
  5. Reading GPS and doing robot steering for moving towards a specific location

Each group had the responsibility of building and explaining a subtask and its function, which should then be presented in the Wiki. This allowed each group to focus on their own subtask, and by doing knowledge exchange we would then be able to combine the subtasks of each group to create a functional MAS and thereby achieve the goal of the project. Our group was responsible for subtask five: using the GPS to read the location of the robot and steering it toward a specific location.

The main task runs automatically when the program is executed, and from here each subtask can be called when needed. The five subtasks should then run in parallel within the main task and create the desired behavior of the robot.

Due to the lack of working and completed subtasks from the other groups, this chapter will mainly focus on our own part of the code, how it works, and the initialization of the program in general. Temporary subtasks were also created out of necessity, to let us test the code for subtask five.

In addition to this, we have tried to map out how the entire project should have worked, if finished. This is explained in a flow chart, which can be seen in the next section. After this, a section with proposed pseudo code can be found, followed by a section explaining the source code of our task.

Software Architecture

Figure 6: Flow chart – the blue circles highlight the tasks that have been done within the group.

The flow chart seen on Figure 6 shows the architecture of the project as we see it. When the program starts, the NXT listens for any updates to the field states from the master and finds an empty field to go to.

If there are any empty fields, the program goes down the right branch (“!Done”), and the NXT updates the field state of the chosen field to “1” (see the section “The agents” for the definition of the states) to let the other NXTs know that this field is going to be searched. Then the NXT calculates the heading towards the field and moves to it. Upon arrival, it updates the field state to “2”, which means that the field is currently being searched. Now the NXT can scan the field to check whether there is a building (color) within it or not. If not (left branch, “!Building”), the NXT updates the field to “3” (no building) and goes back to the start. If there is a building (right branch, “Building”), the NXT updates the field state to a number between 4 and 8 according to the color found within the field and goes back to the start.

As soon as all fields have been searched, the program runs down the left branch (“Done”) and finds a field with a building (field state 4-8). Hereafter the NXT goes through the same procedure as when scanning an empty field. If the treasure is found (right branch, “Treasure”), the NXT updates the field state to 9 (treasure found) and waits for the rest of the NXTs to come to the coordinates of the treasure. If the treasure is not found (left branch, “!Treasure”), the NXT updates the field state to 3 (checked field) and waits for another robot to find the treasure. As soon as the treasure is found, the NXT calculates the heading towards it and moves there.

Pseudo Code

This section shows the pseudo code for the complete project. It outlines an example of how the project could have been built to the full extent of development. All of the functions shown in the architectural description above are described here, to the extent where it will be possible to complete the project in future developments.

// Include driver for the GPS Module
#include "DGPS-driver.nxc"

// Include API functions for NXC
#include "NXCDefs.h"

// Defining Bluetooth Connection slots for mailbox sending/receiving
#define BT_CONN 1
#define OUT_MBOX 1
#define IN_MBOX 5

/* Global variable holding the latitude and longitude of the map
   in a multi-dimensional array with 4*36 coordinate sets representing
   36 fields */
float map[][];

/* Global variable holding the state of each field by index 0-35,
   where each index corresponds to a coordinate set */
string fieldStates = "000000000000000000000000000000000000";

// Defining Input port of GPS, Color and Compass module
#define DGPS S1
#define Color S2
#define Compass S3

/* Function for opening a Bluetooth connection to a paired Master robot
   and receiving an updated field state string */
sub receiveFieldStates() {

   /* Connect to the Master by Bluetooth and receive an updated fieldStates string */
}

/* Function for opening a Bluetooth connection to a paired Slave robot
   and sending an updated field state string */
sub sendFieldStates() {

   /* Connect to the Slave by Bluetooth and send the updated fieldStates string */
}

/* Function for updating the state of a field (from 0 to 9) */
sub updateFieldStates(index, state) {

   /* Find the character by parameter 1, set its state to parameter 2 and
      update the fieldStates string. Then call sendFieldStates(),
      which will send the new fieldStates to the Slave */
}

/* Function for reading latitude/longitude from the GPS connection
   and calculating the heading toward a given coordinate */
int calculateHeadingToField(int index, int state, string destinationCoord, string currentCoord, int heading) {

   moveToField(index, state, heading, destinationCoord, currentCoord);
}

/* Function for calculating the relative bearing of the robot and steering
   the robot towards a given heading. Will also check if the given position
   is a WIN state (treasure) and then end the program */
int moveToField(int index, int state, int heading, string destinationCoord, string currentCoord) {

   /* When going forward, check if destinationCoord is equal to currentCoord,
      and if so STOP at the location and update the state of the field to 2 */
   updateFieldStates(index, state);

   /* When the state of the field has been updated, start scanning for buildings
      using the color sensor */
   scanField(index, state);
}

/* Function for scanning a field to identify if a building is present. This
   is done using the color sensor */
scanField(int index, int state) {

   /* Scan the entire field and update the state when meeting the following
      conditions */

   /* If a color is found, call updateFieldStates() and update the field state
      to 4-8 accordingly. Then reset the local state to 0 and look for the next field.

      Else, if no color is found, call updateFieldStates() and update the field state
      to 3. Then reset the local state to 0 and look for the next field.

      Else, if the color white (treasure) is found, call updateFieldStates() and update
      the field state to 9. This is a WIN state and will STOP the program after
      updateFieldStates() */

      updateFieldStates(index, state);
}


/* Function for checking the state of a field. Used for checking which
   fields have been searched and choosing a random unsearched field to
   investigate. Will also check whether the treasure has been located */
int checkFieldStates(int state) {

   /* For all characters in fieldStates, search for the following condition:

      If there exists a field state equal to parameter 1, call updateFieldStates()
      with the index of the character that should be updated and the state set to 1 */

         updateFieldStates(index, state);
         calculateHeadingToField(index, state, destinationCoord, currentCoord, heading);

      /* Else, if no field state is equal to parameter 1, all fields have been scanned and
         checkFieldStates() is called. A state 4-8 is found by searching for the
         first character between 4 and 8 in the fieldStates string. This is the building
         to be scanned */
}


/* Main controller for handling all subtasks in the program */
task main () {

   /* Start state: all fields are unoccupied */
   state = 0;

   while (true) {

      /* Continuously receive an updated fieldStates string from the Master */

      /* Continuously check the current state */
   }
}

Source Code (Subtask 5)

The code of the group's subtask is explained in the following section, first with a short introduction to each code section, followed by the actual code developed for the purpose of completing the subtask.

Initializing ports and defining modules

The first part of our code defines a few global values, which can then be used by all tasks running in the program. These include the driver for the DGPS, the core NXC API, the Bluetooth connection slots, and the input ports for the DGPS (S1), color sensor (S2) and compass (S3) modules.

// Driver for the GPS Module and core NXC API
#include "DGPS-driver.nxc"
#include "NXCDefs.h"

// Bluetooth Connections for msg in/out
#define BT_CONN 1
#define OUT_MBOX 1
#define IN_MBOX 5

// GPS on Input Port 1
// Color Sensor on Input Port 2
// Compass on Input Port 3
#define DGPS S1
#define Color S2
#define Compass S3

Main Task

The main task defines sensor ports and our own subtask for receiving a message over the Bluetooth connection, used for testing purposes. This is also where we get the GPS coordinates of the robot. As mentioned before, the while loop creates the behavior of the robot, where each subtask runs in parallel. For our subtask, this is where we get the GPS coordinate and then steer the robot.

// Main Controller
// This function is the main task and will execute all subtasks
task main () {

     // Variables for holding GPS data and the communication string
     long longitude            = 0;
     long latitude             = 0;
     int goal                  = 0;
     bool linkstatus           = false;
     string in;
     // Setting the compass sensor to lowspeed
     SetSensorLowspeed(Compass);
     // Receiving the coordinate message from the Master
     string coord = recieveMSG(in);
     // Main loop
     while (true) {

           // Get GPS data from the GPS module
           getGPS(coord, longitude, latitude, linkstatus, goal);
     }
}

Reading the GPS

The getGPS function is responsible for getting a link status and receiving the latitude/longitude of the robot from the GPS satellites, parsing the goal coordinate received over the Bluetooth connection, and then calculating the heading towards the goal.

The function starts by converting the parsed coordinate string into two longs, holding the goal latitude and longitude respectively. This is done by creating two substrings from positions 0-9 and 10-19 of a coordinate string not containing any periods or commas. The substrings are then converted into numbers.

Using the built-in function from the GPS driver, the heading towards the goal coordinate is calculated and stored for later use. Then the latitude and longitude of the robot are read, and everything is printed on the LCD screen. Finally the link status to the GPS satellites is checked, and if the link is up, the move function for steering the robot is called with the goal heading as its parameter.

// GPS Sensor Controller
// Function for getting the GPS coordinate of the robot position
// This includes longitude, latitude and the heading toward the waypoint
int getGPS(string coord, long longitude, long latitude, bool linkstatus, int goal) {

    // Variables holding the coordinates received from the master robot
    // Converting the string to numbers via two 10-character substrings
    long glatitude = StrToNum(SubStr(coord, 0, 10));
    long glongitude = StrToNum(SubStr(coord, 10, 10));

    // Setting the GPS sensor to lowspeed raw mode (without 9V)
    SetSensorType(DGPS, SENSOR_TYPE_LOWSPEED);
    SetSensorMode(DGPS, SENSOR_MODE_RAW);
    // Setting the coordinates of the waypoint destination
    // This is used for calculating the heading and distance to the waypoint
    SetSensorDIGPSWaypoint(DGPS, glatitude, glongitude);
    // Reading the input GPS data
    longitude = SensorDIGPSLongitude(DGPS);
    latitude = SensorDIGPSLatitude(DGPS);
    linkstatus  = DGPSreadStatus(DGPS);
    goal = SensorDIGPSHeadingToWaypoint(DGPS);

    // Printing the latitude and longitude on screen
    TextOut(0, LCD_LINE2, "Lon:");
    NumOut(25, LCD_LINE2, longitude);
    TextOut(0, LCD_LINE3, "Lat:");
    NumOut(25, LCD_LINE3, latitude);

    // If there is a connection to the GPS satellites,
    // execute the move function and start moving toward the goal
    // If not, wait and try again until a connection is established
    if (linkstatus) {
      // Printing the link status
      TextOut(0, LCD_LINE8, "Link Stat: UP");
      // Moving the robot toward the goal using the heading
      move(goal);
    } else {
      // Printing the link status
      TextOut(0, LCD_LINE8, "Link Stat: DOWN");
    }
    return goal;
}

Steering the robot

The move function is responsible for steering the robot towards the goal heading given as its parameter. Since the heading received from the GPS is very unstable and unreliable, we decided to use the compass module. The relative bearing is calculated from the goal heading, and the motors are turned on and off to achieve movement towards the goal location.

At first the function defines the necessary variables for the compass and the relative bearing. A while loop then displays the goal heading, reads the compass and subtracts the goal heading from the current heading of the robot to obtain the relative bearing. If the relative bearing is above 10 degrees and below 180 degrees, the robot should adjust to the left by setting a power of 75 on each motor, with one being negative for reverse direction. If the relative bearing is above 180 and below 350 degrees, the robot should adjust to the right, using the same approach in reverse. If none of the above (a margin of 20 degrees, from 351 to 9 degrees), the robot should move straight ahead toward the goal heading.

// Robot Movement Controller
// Function for steering the robot towards the goal (1st parameter)
// Using the compass, the relative bearing to the goal is calculated
// Motors are then turned on and off to steer the robot towards the goal
int move (int goal) {

    // Variables for compass and relative bearing
    int currentCompass;
    int relBearing;
    // Steering loop
    while (true) {
       // Printing the goal heading on screen
       TextOut(0, LCD_LINE5, "Goal");
       NumOut(10, LCD_LINE6, goal);
       // Reading the compass heading value into currentCompass
       currentCompass = SensorHTCompass(Compass);

       // Subtract the goal heading from currentCompass
       // This is the relative bearing
       relBearing = currentCompass - goal;

       // If the relative bearing is below 0 degrees,
       // add 360 degrees to bring it into the 0-359 range
       if (relBearing < 0) {
         relBearing = relBearing + 360;
       }
       // Printing the relative bearing on screen
       TextOut(0, LCD_LINE3, "Relative");
       NumOut(10, LCD_LINE4, relBearing);

       // If the relative bearing is above 10 degrees and below 180 degrees,
       // turn left
       if (relBearing > 10 && relBearing < 180) {
         // Printing that the robot is turning left
         TextOut(0, LCD_LINE7, "Turning Left");
         // Set the outputs to opposite directions and turn them on
         // -75 for reverse direction
         OnFwd(OUT_C, -75);
         OnFwd(OUT_A, 75);
       // If the relative bearing is above 180 and below 350,
       // turn right
       } else if (relBearing > 180 && relBearing < 350) {
         // Printing that the robot is turning right
         TextOut(0, LCD_LINE7, "Turning Right");
         // Set the outputs to opposite directions and turn them on
         // -75 for reverse direction
         OnFwd(OUT_C, 75);
         OnFwd(OUT_A, -75);
       // If none of the above (between 351 and 9 degrees),
       // go straight ahead
       } else {
         // Printing that the robot is going ahead
         TextOut(0, LCD_LINE7, "Ahead");
         // Set the outputs to forward direction and turn them on
         // 100 for full speed
         OnFwd(OUT_C, 100);
         OnFwd(OUT_A, 100);
       }
    }
}


In our final test we were only able to test the “go to destination” part of the task, due to other parts of the project not being completed in time. This meant that only the task assigned to us worked correctly, and even this was still very unstable due to the GPS, which is very unreliable with a margin of uncertainty between 0 and 20 meters. This made it difficult to ensure that the robot would navigate to the desired position, for two reasons: the position is used to calculate the bearing, so when the position is unreliable, the bearing will be as well; and with an unreliable position it is also impossible to confirm that the robot arrived at the correct goal location. When the end coordinates were hard-coded as the goal for the robot, the steering worked very well, and the robot adjusted even faster than expected to the correct heading.

Since the code from the other groups was delayed, we did not have sufficient software to run a full test, and so we were only able to test our own subtask's software. Our approach to the given subtask was to code flexible stand-alone software which could achieve the task on its own. By developing our subtask this way, we were able to test our own software without any other code. The solution is thereby an independent piece of code, which gave the other groups the opportunity of a fast integration into this project, as well as the possibility of reusing the code in future projects where a robot should go to a specific GPS location.


Overall we are happy with the design of the robot, it has good movement and every module works as expected with the exception of the GPS module.

The pieces still missing at the end of development for a successful project were: building mapping, data exchange, area scanning and data sharing in a database. All of this missing development is of course unacceptable in terms of reaching the goal, but due to the time limits and the extent of work needed, we were not able to reach an acceptable solution for the main task single-handedly. However, we have proposed a structure for the entire code, written in pseudo code, and furthermore created a flow chart to visualize the idea. This shows how our source code can be implemented in future projects with the same goal.

The project has not been completed, due to some groups not delivering their code, and we were unable to pick up the slack in the time remaining after we realized that some groups were not going to complete their designated tasks. The missing code made it impossible for us to test anything other than the navigation. The navigation task which was assigned to us has been completed to the best of our ability. The navigation is still flawed due to the unreliable GPS module, which makes a quick fix difficult; however, we have a possible solution which will be explained further in the perspective section.

The system would, if completed, hold up against the four principles of multi-agent systems as described in the method section. The robots are theoretically able to solve the tasks independently of each other; there is no global controller that controls the robots; the data is stored locally on all the robots; and, last but not least, no robot has to wait for another robot to finish its task before continuing with its own.


The problem with the inaccurate GPS readings could be solved by using the same method smartphones use when pinpointing their location: the position is based on an average taken from a large pool of samples. This means the position is not based on just one GPS reading, but on a number of readings, making the impact of incorrect readings smaller.

The code which was delegated to the other groups, and which has not been completed, needs to be finished and distributed across all the agents in the system.

As the system is right now, there is nothing preventing the robots from running into each other on their way to another field. This problem could be solved with an ultrasonic sensor and an algorithm enabling the robot to decide how to avoid obstacles while still keeping the right heading towards the goal.

The code which makes the robot drive to a given set of coordinates has no end state, which means that the robot will drive to the location of the goal and then start spinning around itself. Optimally there would be a goal state in which the robot simply stops and displays “Reached destination” on the screen.


Find the full source code of the project here.
