Autonomous Agents: From Individual Behaviors to Complex Systems

What is an Autonomous Agent?

In artificial intelligence and computer graphics, an Autonomous Agent is an entity that decides for itself how to act in its environment, without direction from a leader or a global plan. There are three key components:

  1. Limited perception: an agent can sense only its local environment, not the global state of the world.
  2. Local decision-making: an agent processes what it perceives and computes its own action.
  3. No leader: no central authority tells the agent what to do; its behavior follows from its own rules.

Artificial simulations of ant and termite colonies are excellent demonstrations of autonomous agent systems. I recommend reading Mitchel Resnick’s Turtles, Termites, and Traffic Jams (Bradford Books, 1997).

Vehicles and Steering

In the late 1980s, computer scientist Craig Reynolds developed Steering Behaviors for animated characters. These behaviors enable individual elements to navigate their digital environments in a realistic manner, with strategies such as fleeing, wandering, arriving, pursuing, and avoiding. Later, in his 1999 paper “Steering Behaviors for Autonomous Characters,” Reynolds used the term “vehicle” to describe his autonomous agents.

Therefore, we will name our autonomous agent class Vehicle.

Reynolds described the idealized motion of a vehicle in three layers:

  1. Action selection: the vehicle picks a goal or behavior, such as "seek that target" or "follow this path."
  2. Steering: the chosen behavior computes the steering force needed to act on that goal; this is the layer covered here.
  3. Locomotion: the mechanics that actually move the vehicle (wings, legs, wheels); in these examples it is simply the position-velocity-acceleration motion model, sketched below.
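
The article does not spell out the Vehicle class itself, so here is a minimal sketch of the locomotion layer that the steering methods below rely on. The maxspeed and maxforce values are arbitrary placeholders.

JAVASCRIPT
class Vehicle {
  constructor(x, y) {
    this.position = createVector(x, y);
    this.velocity = createVector(0, 0);
    this.acceleration = createVector(0, 0);
    this.maxspeed = 4;    // assumed value: top speed
    this.maxforce = 0.1;  // assumed value: maximum steering force
  }

  applyForce(force) {
    // Assume mass = 1, so a force maps directly to acceleration
    this.acceleration.add(force);
  }

  update() {
    this.velocity.add(this.acceleration);
    this.velocity.limit(this.maxspeed);
    this.position.add(this.velocity);
    this.acceleration.mult(0);  // clear accumulated forces each frame
  }
}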

Seek

Consider the following scenario: a vehicle is seeking a target.

I want the vehicle to make intelligent decisions to steer towards the target based on its perception of its own state (its velocity and current direction of movement) and the environment (the position of the target).

The desired velocity can be set as the vector from the vehicle’s current position to the target’s position, p5.Vector.sub(target, position). The magnitude of this vector is set to the maximum speed, so we need to add a maxspeed property to the vehicle.

$$ \text{steering force} = \text{desired velocity} - \text{current velocity} $$

The steering force is equal to the desired velocity minus the current velocity.

We also need to consider the vehicle’s maneuverability to determine if it can immediately change to the desired velocity. To do this, we need to limit the magnitude of the steering force by adding a maxforce property to the vehicle.

Putting it all together, we can write a method called seek() that takes a p5.Vector target and calculates the steering force towards that target.

JAVASCRIPT
seek(target) {
  // Desired velocity: from the current position straight toward the target
  let desired = p5.Vector.sub(target, this.position);
  desired.setMag(this.maxspeed);
  // Steering force = desired velocity - current velocity, limited by maneuverability
  let steer = p5.Vector.sub(desired, this.velocity);
  steer.limit(this.maxforce);
  this.applyForce(steer);
}

Arrive

When the vehicle is very close to the target, to avoid "overshooting," we can scale the desired velocity with the distance to the target, for example desired.mult(0.05), so the vehicle slows down as it approaches.

Reynolds described a more complex approach. Imagine a circle with a given radius r around the target. If the vehicle is inside this circle, it will gradually slow down, reaching a velocity of 0 at the target.

JAVASCRIPT
arrive(target) {
  let desired = p5.Vector.sub(target, this.position);
  let d = desired.mag();  // distance to the target
  if (d < 100) {
    // Inside the slowing circle (radius 100): scale speed down with distance
    let m = map(d, 0, 100, 0, this.maxspeed);
    desired.setMag(m);
  } else {
    desired.setMag(this.maxspeed);
  }
  let steer = p5.Vector.sub(desired, this.velocity);
  steer.limit(this.maxforce);
  this.applyForce(steer);
}

Wander

Both seeking and arriving can be seen as calculating a single vector for each behavior: the desired velocity. Every steering behavior proposed by Reynolds follows this pattern.

The goal of wandering is not random movement, but rather the feeling of moving in one direction for a while, then wandering in another direction for a bit. The direction in the next frame is related to the direction in the previous frame, which produces more interesting movement than generating a random steering direction in each frame.

First, the vehicle predicts its future position at a fixed distance in front of it (in the direction of its current velocity). Then, it draws a circle centered at that position with a radius r and randomly selects a point on the circumference. This point, which moves randomly around the circle in each frame, is the vehicle’s target, so its desired velocity points in that direction.

The wander behavior treats a random point on the circumference of a circle in front of the vehicle as the target.

It uses randomness to drive the vehicle’s steering, but constrains this randomness with the circle to prevent the vehicle’s movement from appearing jittery or completely random.
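
A minimal sketch of this idea, assuming the Vehicle keeps a wanderTheta angle (initialized to 0 in the constructor); the projection distance and circle radius are arbitrary values:

JAVASCRIPT
wander() {
  // Predict a point ahead of the vehicle along its current velocity
  let wanderPoint = this.velocity.copy();
  wanderPoint.setMag(100);  // assumed projection distance
  wanderPoint.add(this.position);

  // Nudge the angle a little each frame so successive targets stay related
  this.wanderTheta += random(-0.3, 0.3);
  let theta = this.wanderTheta + this.velocity.heading();

  // Pick the point on a circle of radius 50 around the predicted position
  let radius = 50;  // assumed circle radius
  wanderPoint.add(radius * cos(theta), radius * sin(theta));

  // Steer toward that point, just like seek()
  let steer = p5.Vector.sub(wanderPoint, this.position);
  steer.setMag(this.maxforce);
  this.applyForce(steer);
}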

Flow-field following

What is a flow field? Imagine the canvas as a grid, where each cell has a direction vector. As the vehicle moves across the canvas, it asks, “Hey, what arrow is below me? That’s the velocity I want!”

Using Perlin noise, we can map the noise value from 0-1 to an angle from 0-2π. (Note that Perlin noise values cluster around 0.5 rather than being uniformly distributed, so the mapped angles tend toward π and the field as a whole tends to point to the left.)

JAVASCRIPT
let xoff = 0;
for (let i = 0; i < this.cols; i++) {
  let yoff = 0;
  for (let j = 0; j < this.rows; j++) {
    let angle = map(noise(xoff, yoff), 0, 1, 0, TWO_PI);
    this.field[i][j] = p5.Vector.fromAngle(angle);  // Generate a unit vector from the angle
    yoff += 0.1;
  }
  xoff += 0.1;
}
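To complete the picture, here is a minimal sketch of the lookup the vehicle performs, assuming the flow field stores its grid cell size in a hypothetical this.resolution property and the vehicle steers exactly as in seek():

JAVASCRIPT
// In the FlowField class: which vector sits under a given position?
lookup(position) {
  let column = constrain(floor(position.x / this.resolution), 0, this.cols - 1);
  let row = constrain(floor(position.y / this.resolution), 0, this.rows - 1);
  return this.field[column][row].copy();
}

// In the Vehicle class: treat that vector as the desired velocity
follow(flow) {
  let desired = flow.lookup(this.position);
  desired.setMag(this.maxspeed);
  let steer = p5.Vector.sub(desired, this.velocity);
  steer.limit(this.maxforce);
  this.applyForce(steer);
}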

Path Following

Simple Path Following

Path following involves five pieces: a path, a vehicle, the vehicle's predicted future position, the normal point of that prediction on the path, and a target on the path.

What is a path? A simple approach is to define the path as a series of connected points. The path also has a radius, which defines how wide the road is around its center line.

The vehicle predicts its future position along its current direction of travel. A steering force only needs to be applied when that predicted position strays outside the path's radius.

Find the normal point (the projection of the predicted position onto the path), set the target a little way ahead of the normal point along the path, and finally calculate the desired velocity toward that target.

Path Following with Multiple Segments

The key is how to find the target point along the path, which means first finding the correct line segment and then calculating the normal point on that segment.

The solution proposed by Reynolds is to compute the normal point for each segment, keep only those that actually lie on their segment (otherwise clamping to the segment's endpoint), and choose the one nearest to the predicted position.
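
Finding the normal point comes down to vector projection. A minimal sketch, using a hypothetical helper getNormalPoint(position, a, b) where a and b are the endpoints of one segment:

JAVASCRIPT
function getNormalPoint(position, a, b) {
  let ap = p5.Vector.sub(position, a);  // from the segment start to the position
  let ab = p5.Vector.sub(b, a);         // direction along the segment
  ab.normalize();
  ab.mult(ap.dot(ab));                  // scalar projection of ap onto ab
  return p5.Vector.add(a, ab);          // closest point on the segment's line
}

Note that this projects onto the infinite line through a and b; a multi-segment follower still has to check whether the result lies between the two endpoints before comparing distances.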

Complex Systems

As a logical next step, agents should be able to perceive not only their physical environment but also the behavior of their fellow agents and act accordingly.

A complex system is often defined as a system that is more than the sum of its parts. While the individual elements of the system may be very simple and easy to understand, the behavior of the system as a whole can be highly complex, intelligent, and difficult to predict.

Imagine a tiny crawling ant. The ant is an autonomous agent; it can perceive its environment (using its antennae to gather information about the direction and strength of chemical signals) and make movement decisions based on these signals. But can a single ant acting alone build a nest, gather food, or protect the queen? An ant is a simple unit that can only perceive its immediate environment. However, an ant colony is a complex system, a superorganism working in concert, whose components work together to achieve difficult, complex goals.

Three core principles for building complex systems:

  1. Short-range relationships between simple units: Limited perception of the environment.
  2. Simple units operate in parallel: In each cycle, each unit will calculate its own steering force.
  3. The system as a whole exhibits emergence: Complex behaviors, patterns, and intelligence can emerge from the interactions between simple units. This phenomenon occurs in nature, such as in ant colonies, migration patterns, earthquakes, and snowflakes.

Three other characteristics of complex systems will help frame the discussion and provide guidelines for software simulation. This is a fuzzy set of features, and not all complex systems have all of them:

  1. Non-linearity: This aspect of complex systems is often referred to as the butterfly effect, originating from the mathematician and meteorologist Edward Norton Lorenz, a pioneer in the study of chaos theory. It is called non-linear because there is no linear relationship between changes in initial conditions and the outcome. Small changes in initial conditions can have a huge impact on the outcome. Even in a system composed of many 0s and 1s, changing just one bit can result in a completely different outcome. Non-linear systems are a superset of chaotic systems.
  2. Competition and cooperation: There is both competition and cooperation among the constituent elements.
  3. Feedback: Complex systems often include a loop that feeds the output of the system back into the system, influencing its behavior in a positive or negative direction.

Flocking

Flocking is a group animal behavior found in many organisms, such as birds, fish, and insects. In 1986, Reynolds created a computer simulation of flocking behavior and documented the algorithm in his paper Flocks, Herds, and Schools: A Distributed Behavioral Model.

Reynolds used the term boid (a fictional word referring to a bird-like object) to describe an element of a flocking system.

The three rules of flocking:

  1. Separation: steer to avoid crowding nearby flockmates.
  2. Alignment: steer toward the average heading of nearby flockmates.
  3. Cohesion: steer toward the average position of nearby flockmates.

These simple local rules can produce natural, realistic flocking patterns at the system level.
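
As one illustration, here is a minimal sketch of the separation rule, assuming each boid is a Vehicle as above and boids is the array of all boids; the 25-pixel separation distance is an arbitrary choice:

JAVASCRIPT
separation(boids) {
  let desiredSeparation = 25;       // assumed: how much personal space a boid wants
  let steer = createVector(0, 0);
  let count = 0;
  for (let other of boids) {
    let d = p5.Vector.dist(this.position, other.position);
    if (d > 0 && d < desiredSeparation) {
      // Point away from the neighbor, weighted more strongly the closer it is
      let diff = p5.Vector.sub(this.position, other.position);
      diff.normalize();
      diff.div(d);
      steer.add(diff);
      count++;
    }
  }
  if (count > 0) {
    steer.div(count);               // average of all the "flee" directions
    steer.setMag(this.maxspeed);
    steer.sub(this.velocity);
    steer.limit(this.maxforce);
  }
  return steer;
}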

Copyright Notice

Author: Aspi-Rin

Link: https://blog.aspi-rin.top/en/posts/autonomous-agents-from-individual-behaviors-to-complex-systems/

License: CC BY 4.0

This work is licensed under a Creative Commons Attribution 4.0 International License. Please attribute the source.
