Wednesday 29 September 2021

Properly scale random movement in simulation with variable time delta

I want to write a simulation of a particle moving in 1D. The simulation has the following parameters:

  • max_speed: the maximum speed the particle can attain (in meters/second)
  • delta: the time interval between two simulation steps (in seconds)
  • iters: the number of steps to run the simulation

Below is an example of a working simulation. It returns two lists: the particle's position at each time step, and the corresponding time in seconds at each step.

import random

def simulate_particle(max_speed, delta, iters):
    current_position = 0
    positions = [current_position]
    seconds = [0]
    for frame in range(1, iters + 1):
        # The furthest the particle can travel in one step of `delta` seconds
        max_distance_possible = max_speed * delta
        # Move a random fraction of that distance, in a random direction
        current_position += max_distance_possible * random.uniform(-1, 1)
        positions.append(current_position)
        seconds.append(frame * delta)
    return positions, seconds

As you can see, the particle's position at the next frame is determined by:

  • calculating the maximum distance the particle can travel between two frames (taking into account the max_speed and delta parameters)
  • multiplying this value by a value sampled from a uniform distribution over [-1, 1] (to add some randomness)

The issue here is that the particle's movement becomes tied to the delta parameter. Since larger deltas allow a larger maximum step, particles in simulations with large deltas drift further from the starting position over the same total simulated time.
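The effect can be quantified directly: holding the total simulated time fixed, the standard deviation of the final position grows with delta. A minimal check (the 6000-second span and 1000 repeats are my own choices for illustration, not from the original setup):

```python
import random
import statistics

def simulate_particle(max_speed, delta, iters):
    current_position = 0
    positions = [current_position]
    seconds = [0]
    for frame in range(1, iters + 1):
        max_distance_possible = max_speed * delta
        current_position += max_distance_possible * random.uniform(-1, 1)
        positions.append(current_position)
        seconds.append(frame * delta)
    return positions, seconds

# Same 6000 s of simulated time, split into coarse and fine steps
for delta, iters in [(600, 10), (6, 1000)]:
    finals = [simulate_particle(5, delta, iters)[0][-1] for _ in range(1000)]
    print(f"{delta=}: std of final position ~ {statistics.stdev(finals):.0f}")
```

On a typical run, the coarse-stepped simulation spreads roughly ten times wider than the fine-stepped one, despite both covering the same simulated time.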

Here's a graphical representation of this issue:

import matplotlib.pyplot as plt

for delta, iters, color in zip(
    [600, 60, 6],  # decreasing deltas
    [10, 100, 1000],  # larger number of iterations to achieve same end time
    ['blue', 'orange', 'green'],
):
    for repeat in range(50):  # run simulations 50 times for each delta
        ys, xs = simulate_particle(max_speed=5, delta=delta, iters=iters)
        label = f'{delta=}, {iters=}' if repeat == 0 else None
        plt.plot(xs, ys, color=color, label=label)

plt.xlabel('Time (s)')
plt.ylabel('Position')
plt.legend()
plt.show()

Output: (plot of position vs. time, one colored bundle of 50 trajectories per delta)

I understand why this happens: it's much harder to reach extreme values by summing many samples from a narrow distribution than by summing a few samples from a broader one.
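To put numbers on that intuition (my own back-of-the-envelope derivation, not from the original post): each step is max_speed * delta * U with U uniform on [-1, 1], so one step has standard deviation max_speed * delta / sqrt(3). Over a fixed total time T there are n = T / delta independent steps, and the standard deviation of their sum is max_speed * delta * sqrt(n / 3) = max_speed * sqrt(T * delta / 3). The spread therefore scales with sqrt(delta), which matches the widening seen in the plot. A sketch checking the formula against simulation:

```python
import math
import random
import statistics

def final_position(max_speed, delta, iters):
    # Sum of `iters` independent steps, each uniform in [-d, d] with d = max_speed * delta
    d = max_speed * delta
    return sum(d * random.uniform(-1, 1) for _ in range(iters))

max_speed, total_time = 5, 6000
for delta in (600, 6):
    iters = int(total_time / delta)
    simulated = statistics.stdev(
        final_position(max_speed, delta, iters) for _ in range(2000)
    )
    predicted = max_speed * math.sqrt(total_time * delta / 3)
    print(f"{delta=}: simulated std {simulated:.0f}, predicted {predicted:.0f}")
```

The simulated and predicted standard deviations agree closely, confirming the sqrt(delta) scaling.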

My question is: how do I fix this? Is there some kind of normalization I can apply? I tried replacing the uniform distribution with other distributions (e.g. Gaussian), but the same effect eventually appears.

