The students behind this report are:

Anna Gabriela Buslowska (anbus17), Andrzej Omieljanowicz (anomi17), & Anne Katrine K. Egsvang (anegs12)

University of Southern Denmark

8th May 2018

Content

  1. Introduction
    1. Warehouse setup
      1. Track 1
      2. Track 2
    2. Line-following entity
      1. Behaviour and functions
      2. Actions
      3. Sensors
  2. Methods and Materials
    1. Methods
    2. Materials
  3. Results
    1. Line navigation code
  4. Discussion
  5. References


1. Introduction

The line-based navigation consists of a warehouse setup with two environments. A basic line-following robot has been designed to detect waypoints and follow routes in these environments. A detailed description of each environment (track) is presented below.

Warehouse setup

Figure 1. Setup of the warehouse

Figure 1 presents the map and a description of the distinctive points of the warehouse. Line-based navigation takes place on both tracks (Track 1 and Track 2). Track 1 is for the fetch bot and Track 2 is for the delivery bot. The whole process should look like this (a coordination sketch follows the list):

  1. The fetch bot receives the information about which object is needed (Communication group)
  2. The line following starts (Line following group)
  3. Once the color sensor detects the marker, it stops the line following (Marker group)
  4. The grabbing module grasps the object (Grabbing group)
  5. Line following starts again
  6. Once the color sensor detects the drop-off marker, it stops the line following
  7. The grabbing module releases the object
  8. The crane transports the object from Track 1 to Track 2 onto the delivery agent
  9. The delivery agent stops in the same marker-based way as the fetch bot
  10. The delivery agent delivers the object to the goal/delivery point
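
A minimal sketch of how this marker-based stop/start could be coordinated on the fetch bot is shown below, written in the same NXC-style code as the navigation program in section 3.1. The ports (IN_1 for the light sensor, IN_2 for a color sensor), the marker colors and the fixed grabbing pause are assumptions made purely for illustration; the actual marker detection and grabbing logic belongs to the other groups.

bool followLine = false; //shared flag read by the line-following task

task lineFollowing(){

  while(true)
  {
    if(followLine)
    {
      //one PID line-following step goes here (see the code in section 3.1)
    }
    else
    {
      Off(OUT_BC); //stand still while the grabbing/marker modules work
    }
    Wait(5); //give the other tasks time to run
  }
}

task coordination(){

  followLine = true; //step 2: line following starts
  until(Sensor(IN_2) == INPUT_REDCOLOR); //step 3: object marker reached (red assumed)
  followLine = false; //stop on the marker
  Wait(3000); //step 4: placeholder for the grabbing module
  followLine = true; //step 5: line following starts again
  until(Sensor(IN_2) == INPUT_BLUECOLOR); //step 6: drop-off marker (color assumed)
  followLine = false; //steps 7-8: release the object, the crane takes over
}

task main(){

  SetSensorLight(IN_1); //light sensor used for line following
  SetSensorColorFull(IN_2); //assumed: NXT 2.0 color sensor used for the markers
  Precedes(lineFollowing, coordination); //both tasks start running when main ends
}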

Track 1

In this environment, one line-following entity works as a fetch robot that:

  1. Moves to the needed object (blue [BO], green [GO] or red [RO]). The locations of the objects are known and marked with markers of the same color as the object.
  2. Next, the robot picks up the needed object.
  3. Transports the object to the drop-off point [DP], where a crane transports the object from Track 1 to Track 2 onto a delivery robot.

Track 2

The second environment has another line-following entity transporting the object from the pick-up point [PP] to the final destination [GP]. Similarly to the first track, the characteristic points are marked with markers. The movement algorithm is as follows:

  1. The delivery bot stops at the pick-up location, where the crane puts the object onto it.
  2. Next, the delivery bot moves to the delivery point, where a person retrieves the object.
  3. The delivery bot returns to the pick-up point.

Line-following entity

In this section, the design and construction of the LEGO NXT robot base for the line-following entity of the multi-agent system are presented. The line-following base is designed with the least possible complexity in order to avoid errors such as driving off the track. A more detailed description is presented below.

Behaviour and functions

The line-following entity has simple functions/behaviours. The goal has been to cooperate with the other modules without problems. That is why the line-following function was designed to run in the “background”, meaning that the program responsible for line following is not noticeable during the normal work of the robot (cf. the sketch in section 1.1). The detailed behaviours and functions that the line-following entity provides are listed below:

  1. A stable, firm and universal base for the other modules. It includes a plane where other modules can be attached.
  2. Keeping the base as simple as possible, not using any odd-shaped LEGO bricks.
  3. Proper interpretation of sensor readings and responding to them according to the programmed behaviour.

Actions

The most important action that the line-following base has to perform is following the line. In order to achieve this, a light sensor (presented in section 1.2.3) has been attached and an algorithm using a PID controller (presented in section 3.1) has been made. The constructed base is simple and easy to modify in order to adjust to other modules. The smallest possible number of odd-shaped LEGO bricks has been used so that many copies of the same device can be built. The base is visible in Figure 2 and the list of actions is presented below:

  1. Modularity
  2. Simplicity
  3. Line-following ability  

Figure 2. The line following base (a) Side view (b) Corner view

Sensors

The line-following base uses only one sensor for detecting lines. The sensor is a light detection sensor (line sensor), which enables the ability to follow a line. The sensor has two small bulbs: the one on top is a phototransistor, which measures the light; the one on the bottom is a light-emitting diode (LED), which shines red light. When the light sensor is positioned close to a surface, the LED increases the amount of reflected light that the sensor reads, which also increases the sensor’s sensitivity to different colors (different colors reflect light differently). It is possible to turn off the LED in order to detect only the surrounding or ambient light [1].

Figure 3. LEGO light sensor

[https://www.coleka.com/media/item/201803/10/lego-alien-conquest-capteur-de-lumiere-9844.jpg]

The light sensor enables the robot to distinguish between light and dark by measuring the light intensity of its surrounding area. The light intensity of surfaces enables it to (indirectly) distinguish between surfaces of different colors. The NXT reports the light sensor readings as a percentage: the highest possible reading is 100 percent, which can easily be achieved by holding the sensor up to a light bulb, and the lowest possible reading is 0 percent, which can be achieved in a very dark room [1]. Figure 4 presents an example of how the light sensor works.

Figure 4. Light sensor reading example [Lego Mindstorms NXT Education Kit]
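
As a small illustration of these two modes, the sketch below reads the sensor once with the LED on (reflected light) and once with the LED off (ambient light) and shows both percentages on the NXT screen. It is only a sketch; the port (IN_1) is an assumption.

task main(){

  //reflected-light mode: the red LED is on, readings are given in percent
  SetSensorType(IN_1, SENSOR_TYPE_LIGHT_ACTIVE);
  SetSensorMode(IN_1, SENSOR_MODE_PERCENT);
  ResetSensor(IN_1);
  int reflected = Sensor(IN_1);

  //ambient-light mode: the LED is switched off, only the surrounding light is measured
  SetSensorType(IN_1, SENSOR_TYPE_LIGHT_INACTIVE);
  SetSensorMode(IN_1, SENSOR_MODE_PERCENT);
  ResetSensor(IN_1);
  int ambient = Sensor(IN_1);

  NumOut(0, LCD_LINE1, reflected); //reflected light, 0-100 %
  NumOut(0, LCD_LINE2, ambient); //ambient light, 0-100 %
  Wait(5000); //keep the readings on the screen for a moment
}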

Attaching the sensor to the module can be challenging due to instability; however, the sensor is attached using Technic LEGO bricks as shown in Figures 5-7. The sensor sits approx. 1 mm above the ground, which gives stable measurements.

Figure 5-7. Sensor attachment – 1, 2, & 3


2. Methods and Materials

Methods

In order to complete the project, many questions had to be answered and many steps had to be followed (see below):

  1. Setting up a goal – one must ask themselves what they want to achieve
  2. How can the goal be achieved – setting up the plan
  3. What are the constraints or requirements – taking a broader picture into consideration
  4. Can they be satisfied – if so, then how – making adjustments to the plan
  5. Creating a blueprint for the task – the first plan
  6. Evaluation of the first project/plan
  7. Prototyping
  8. Evaluation of the prototype & tests – does it meet all constraints and demands from step 3? If so, go to step 10; if not, go to step 9
  9. Improve the prototype until it meets all demands
  10. The final project
  11. Evaluation of the final project
  12. Close the project/plan

In the case of this project, the goal was clear. The plan was set and the first construction was built. It was improved several times in order to meet all requirements. Firstly, the line-navigation base was built from random pieces found in the lab boxes. Over many iterations and different ideas, the robot became what is presented in Figure 2. Once the construction was settled, the focus shifted to the line-following algorithm. During the development of the project, work was also performed on the environment: the lines/paths and the locations of the markers on them. Setting up the environment first required a definition of the goal and of the problems associated with it (as in the definition of the whole project). Later, all the small tasks leading towards the final look of the environment were divided into even smaller sub-tasks, so that there was full control and knowledge of what had to be done.

Figure 8. Prototype of the base.

One of the first constructions is presented in Figure 8. It was apparent that the prototype was not stiff enough to cope with the load of the other modules. Therefore, the final design seen in Figure 2 was built. In this construction’s case, the key to success was keeping in mind stiffness and good mass distribution, as it was not clear what the final load would be.

Materials

  • 1 NXT brick
  • 2 NXT servo motors (drives)
  • 1 NXT Light sensor
  • 3 cables (2 x 35 cm and 1 x 20 cm)
  • Lego bricks
    • Technic
    • Normal

3. Results

The robot drives smoothly; it follows lines, is not distracted by intersections and does not get lost. Its mass distribution is well balanced, making it a stable base for mounting additional modules.

Line navigation code

The line navigation code should be adjusted depending on the needs of the other groups. The code shown below is just the bare ‘core’ of the navigation. It has no extra loops; it only provides continuous line navigation using a PID controller. It would be possible to write similarly working code using if and while statements, but the code would be longer and harder to grasp and read. The code was first written using only a P controller, but it did not provide movement as smooth as the PID controller did. There was also the problem that the ranges of values on ‘light/white’ surfaces and on ‘dark/black’ surfaces differed a lot, causing uneven behaviour of the robot; for example, it could not turn as efficiently after entering a too dark surface as it could after a light one.

The values of Kp, Ki and Kd were found empirically (Table 1); an illustrative calculation with these settings is given after the table. There are mathematical, control-engineering tools and equations which make it possible to calculate those values, but we made assumptions based on previous experience and ‘rules of thumb’ and succeeded. It was necessary to measure the light values in the environment where the robot would finally operate, keeping in mind that light intensity varies depending on humidity, season, time of day and artificial lighting. The velocity was set to a small value (only 40% of the power) so that, above all, the line navigation would be smooth rather than rapid and would not cause the transported package/object to move. What is more, it turned out that the color sensor is not very sensitive and only provides readings when it is extremely close to an object, so the robot cannot move fast or it would miss a marker.

        Kp (proportional)   Ki (integral)   Kd (derivative)
Value   7                   0.00018         0.00018

Table 1. PID controller settings
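
To illustrate how these settings steer the robot, consider a single loop iteration with purely illustrative numbers: with the threshold of 60 and a light reading of 65, the error is 5. The proportional term contributes 7 × 5 = 35, while the integral and derivative terms are negligible at these gains, so turn ≈ 35 and the motor powers become roughly 40 - 35 = 5 and 40 + 35 = 75. One wheel almost stops while the other speeds up, and the robot turns sharply to correct the deviation.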

task main(){

  float threshold = 60; //threshold, setpoint, optimal value of 'light'
  float Kp = 7; //proportional P, gives output proportional to our error
  float Ki = 0.00018; //integral I, makes response time longer, but makes the error decrease (in steady state)
  float Kd = 0.00018; //derivative D, reacts to the change of the error, makes the response quicker and stronger
  float velocity = 40; //base velocity value

  float error; //difference between the current reading and the threshold
  float integral = 0; //variable for I, it sums up errors over time
  float derivative; //variable for D, it is used for prediction based on previous measurements
  float lastError = 0; //holds the previous error value
  float turn; //variable for velocity adjustment
  float powerB; //power of the left motor
  float powerC; //power of the right motor

  SetSensorLight(IN_1); //setting up of the light sensor, change the input if needed

  while(true)
  {
    error = Sensor(IN_1)-threshold; //difference between the current light value and the desired one
    integral = integral+error; //I part, sum of errors over time
    derivative = error-lastError; //D part, change of the error
    turn = (Kp*error)+(Ki*integral)+(Kd*derivative); //PID controller, adjustment of velocity

    powerB = velocity-turn; //adjust the left motor velocity based on the measurement
    powerC = velocity+turn; //adjust the right motor velocity based on the measurement

    //keep the powers within the -100..100 range accepted by the motors
    if(powerB > 100) powerB = 100;
    if(powerB < -100) powerB = -100;
    if(powerC > 100) powerC = 100;
    if(powerC < -100) powerC = -100;

    OnFwd(OUT_B, powerB); //left motor power adjustment
    OnFwd(OUT_C, powerC); //right motor power adjustment

    lastError = error; //remember the error for the D part
  }
}
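
Because the threshold depends on the lighting in the room where the robot runs (as noted above), a small helper sketch such as the one below can be used to read off a suitable value before tuning. It simply shows the live light reading on the NXT screen; port IN_1 is assumed, as in the navigation code.

task main(){

  SetSensorLight(IN_1); //same light sensor port as in the navigation code

  while(true)
  {
    ClearScreen();
    NumOut(0, LCD_LINE1, Sensor(IN_1)); //current light reading in percent
    Wait(200); //refresh roughly five times per second
  }
}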

Figure 9. Side view of finished module

Figure 10. Bottom view of finished module


4. Discussion

From the beginning it was clear what the goal was – a line-following base for a robot. We decided that it would be best to keep it as simple and as problem-free as possible. The main goal was achieved: the line-following base, run by the tuned code, was created.

It took many rounds of trial and error to improve the project. The first code was tested and improved on white paper sheets taped together, with black tape in the middle. The code was run again and again with different parameters, so that it could be understood what had the biggest impact on the behavior of the robot. Many trials were performed to find the optimal variable values. In the end, an in-lab test was performed, as light intensity varies depending on the location, which must also be taken into consideration when developing a line-following robot.

This project’s solution is universal: a top layer/module can be mounted to add functionality. The robot itself does not do much, it only follows the line, but that was its goal and therefore it is sufficient.

The final tests, with the other modules and markers, have not been performed yet. It would have been much quicker and more comfortable if there had been a project manager keeping her/his finger on the pulse, so that the designs stay responsive to the changes and limitations of the other groups, since all the projects overlap and all the groups have the same goal.

Apart from the other modules, our base could be extended with obstacle avoidance. This would require an ultrasonic sensor and a scenario for what to do when an obstacle is detected (a rough sketch of this idea follows below). The base could also be used as a general line-following solution.
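
The sketch below only illustrates that obstacle-avoidance idea; it is not part of this project, and the ultrasonic sensor port (IN_4) and the 15 cm limit are assumptions chosen purely for illustration.

task main(){

  SetSensorLowspeed(IN_4); //the ultrasonic sensor uses the I2C (lowspeed) interface

  while(true)
  {
    if(SensorUS(IN_4) < 15) //distance reading in centimetres
      Off(OUT_BC); //stop both drive motors if an obstacle is too close
    Wait(50);
  }
}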


5. References

[1] http://nxt.cs.uwindsor.ca/499football/features_limitations.pdf
