Direction Fields and Euler's Method¶
Real-World Scenario: Bacterial Growth in a Bioreactor¶
In whole cell modeling, we often encounter ODEs that describe how cellular quantities evolve over time -- metabolite concentrations, protein levels, or population sizes. Before solving these ODEs analytically, we can gain powerful geometric intuition by visualizing direction fields (slope fields) and computing approximate solutions via Euler's method.
We'll model bacterial population dynamics in a bioreactor where the growth rate depends on both the current population size and time (due to nutrient depletion). This gives us an ODE of the form $y' = f(t, y)$ that we can explore visually and numerically.
This notebook covers the key ideas from AEM Section 1.2:
- Geometric interpretation of $y' = f(x, y)$ as a slope at each point
- Direction fields (slope fields) and how they reveal solution behavior
- Isoclines -- curves of constant slope
- Euler's method for numerical approximation
- Error analysis: how step size affects accuracy
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
plt.style.use('seaborn-v0_8-whitegrid')
mpl.rcParams['font.family'] = 'DejaVu Sans'
Key Formulas from AEM Section 1.2¶
First-order ODE (explicit form):
$$y' = f(x, y)$$
Geometric meaning: At every point $(x_0, y_0)$, the ODE prescribes a slope $f(x_0, y_0)$. A solution curve $y(x)$ passing through that point must have slope equal to $f(x_0, y_0)$ there:
$$y'(x_0) = f(x_0, y_0)$$
Direction field: A grid of short line segments (lineal elements) at points $(x, y)$, each with slope $f(x, y)$. Solution curves are tangent to these segments everywhere.
Isoclines: Curves along which $f(x, y) = c$ (constant). All lineal elements along an isocline have the same slope $c$.
Euler's method: Given $y(x_0) = y_0$ and step size $h$:
$$y_{n+1} = y_n + h \, f(x_n, y_n), \qquad x_{n+1} = x_n + h$$
Each step follows the tangent line at $(x_n, y_n)$ for a distance $h$.
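Before automating this, it helps to do a step or two by hand. A minimal sketch (not from the text) of the first two steps for $y' = y + x$, $y(0) = 0$, $h = 0.2$:

```python
def euler_step(f, x, y, h):
    """One Euler step: follow the tangent at (x, y) for a distance h."""
    return x + h, y + h * f(x, y)

f = lambda x, y: y + x            # the textbook ODE

x, y = 0.0, 0.0                   # initial condition y(0) = 0
x, y = euler_step(f, x, y, 0.2)   # slope f(0, 0) = 0, so y stays at 0
x, y = euler_step(f, x, y, 0.2)   # slope f(0.2, 0) = 0.2, so y gains 0.2 * 0.2 = 0.04
print(x, round(y, 10))            # x = 0.4, y = 0.04
```

These two values reappear in the Euler table of Section 4 below.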
1. Direction Fields -- Geometric Meaning of y' = f(x, y)¶
The derivative $y'$ is the slope of the solution curve. At every point $(x, y)$ in the plane, the ODE $y' = f(x, y)$ tells us what slope a solution passing through that point must have. By drawing short line segments with slope $f(x, y)$ at a grid of points, we build a direction field (or slope field).
Let's start with the textbook example: $y' = y + x$.
def plot_direction_field(f, x_range, y_range, nx=20, ny=20, ax=None, title=None):
"""Plot a direction field for y' = f(x, y)."""
if ax is None:
fig, ax = plt.subplots(figsize=(8, 6))
x = np.linspace(*x_range, nx)
y = np.linspace(*y_range, ny)
X, Y = np.meshgrid(x, y)
slopes = f(X, Y)
# Normalize arrows to unit length for uniform appearance
U = np.ones_like(slopes)
V = slopes
N = np.sqrt(U**2 + V**2)
U, V = U / N, V / N
ax.quiver(X, Y, U, V, slopes, cmap='coolwarm', alpha=0.7,
headwidth=0, headlength=0, headaxislength=0, pivot='middle',
scale=35)
ax.set_xlim(x_range)
ax.set_ylim(y_range)
ax.set_xlabel('x')
ax.set_ylabel('y')
if title:
ax.set_title(title)
return ax
# Example from the textbook (Fig. 7): y' = y + x
f_textbook = lambda x, y: y + x
fig, ax = plt.subplots(figsize=(8, 6))
plot_direction_field(f_textbook, (-2, 1), (-2, 2), ax=ax,
title="Direction field of y' = y + x")
# Overlay exact solutions: y = ce^x - x - 1 (general solution)
x_fine = np.linspace(-2, 1, 200)
for c in [-2, -1, 0, 1, 2]:
y_exact = c * np.exp(x_fine) - x_fine - 1
mask = (y_exact > -2.5) & (y_exact < 2.5)
ax.plot(x_fine[mask], y_exact[mask], 'C0', alpha=0.6, lw=1.5)
ax.plot([], [], 'C0', lw=1.5, label='Solution curves y = ce$^x$ - x - 1')
ax.legend(loc='upper left')
plt.tight_layout()
plt.show()
The direction field reveals the structure of all solutions at once. Each short segment shows the slope that any solution passing through that point must have. The blue curves are exact solutions for different initial conditions -- notice how they are tangent to the field segments everywhere.
Key insight: The direction field exists regardless of whether we can solve the ODE analytically. For many ODEs, the field gives us qualitative understanding (growth, decay, oscillation, equilibria) even when no closed-form solution exists.
2. Isoclines¶
An isocline is a curve along which $f(x, y) = c$ (constant slope). All lineal elements along an isocline are parallel. This makes it easy to sketch direction fields by hand:
- Set $f(x, y) = c$ for several values of $c$
- Draw the resulting curves
- Along each curve, draw segments with slope $c$
For $y' = y + x$, the isoclines are $y + x = c$, i.e., the lines $y = -x + c$.
fig, ax = plt.subplots(figsize=(8, 6))
plot_direction_field(f_textbook, (-2, 1), (-2, 2), ax=ax,
title="Isoclines of y' = y + x")
# Isoclines: y + x = c => y = -x + c
x_iso = np.linspace(-2, 1, 100)
colors_iso = plt.cm.viridis(np.linspace(0.2, 0.9, 5))
for c_val, col in zip([-2, -1, 0, 1, 2], colors_iso):
y_iso = -x_iso + c_val
mask = (y_iso > -2.5) & (y_iso < 2.5)
ax.plot(x_iso[mask], y_iso[mask], '--', color=col, lw=2,
label=f'y + x = {c_val} (slope {c_val})')
ax.legend(loc='upper left', fontsize=9)
plt.tight_layout()
plt.show()
Along each dashed line, all the direction field segments have the same slope. The isocline $y + x = 0$ (the line $y = -x$) has slope 0 -- this is where solutions have horizontal tangents and transition from decreasing to increasing (or vice versa).
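We can also verify the isocline picture against an exact solution (a quick numerical check, not in the text): the particular solution $y = e^x - x - 1$ crosses the isocline $y = -x + c$ where $e^x = 1 + c$, i.e. at $x = \ln(1 + c)$ for $c > -1$, and its slope there should be exactly $c$.

```python
import numpy as np

# Exact solution of y' = y + x with y(0) = 0: y = e^x - x - 1
y_sol  = lambda x: np.exp(x) - x - 1
dy_sol = lambda x: np.exp(x) - 1          # its derivative y'

for c in [-0.5, 0, 1, 2]:
    x_c = np.log(1 + c)                   # crossing with isocline y = -x + c
    assert np.isclose(y_sol(x_c), -x_c + c)   # the point lies on the isocline
    assert np.isclose(dy_sol(x_c), c)         # slope there equals c, as predicted
print("isocline property verified at the crossings")
```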
3. Bioreactor Example -- Direction Field for Logistic Growth¶
Now let's apply direction fields to our biotech scenario. Bacterial growth in a bioreactor with limited nutrients follows the logistic ODE:
$$\frac{dN}{dt} = r \, N \left(1 - \frac{N}{K}\right)$$
where:
- $N(t)$ = population (millions of cells)
- $r$ = intrinsic growth rate
- $K$ = carrying capacity (limited by nutrients and space)
This ODE is nonlinear -- the direction field will reveal equilibria and stability without us solving anything.
# Logistic growth parameters
r = 0.5 # growth rate (per hour)
K = 10.0 # carrying capacity (millions of cells)
f_logistic = lambda t, N: r * N * (1 - N / K)
fig, ax = plt.subplots(figsize=(10, 6))
plot_direction_field(f_logistic, (0, 15), (-1, 14), nx=25, ny=25, ax=ax,
title="Direction field: logistic growth N' = 0.5 N(1 - N/10)")
# Exact solution: N(t) = K / (1 + ((K - N0)/N0) * exp(-rt))
t_fine = np.linspace(0, 15, 300)
for N0 in [0.5, 1, 2, 5, 8, 12, 14]:
N_exact = K / (1 + ((K - N0) / N0) * np.exp(-r * t_fine))
ax.plot(t_fine, N_exact, 'C0', alpha=0.6, lw=1.5)
# Mark equilibria
ax.axhline(y=K, color='C3', ls=':', lw=2, alpha=0.7, label=f'Stable equilibrium N = K = {K:.0f}')
ax.axhline(y=0, color='C1', ls=':', lw=2, alpha=0.7, label='Unstable equilibrium N = 0')
ax.set_xlabel('Time t (hours)')
ax.set_ylabel('Population N (millions)')
ax.legend(loc='right', fontsize=9)
plt.tight_layout()
plt.show()
The direction field immediately reveals:
- Two equilibria: $N = 0$ (unstable) and $N = K = 10$ (stable)
- Below $K$: slopes are positive -- population grows
- Above $K$: slopes are negative -- population decreases back to $K$
- All solutions converge to $K$ regardless of initial population (as long as $N_0 > 0$)
This qualitative analysis required no formula for the solution -- just the direction field!
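The stability claims can also be confirmed by linearization (a standard test, sketched here rather than taken from the text): an equilibrium $N^*$ of $N' = f(N)$ is stable when $f'(N^*) < 0$ and unstable when $f'(N^*) > 0$.

```python
# Linearized stability of the logistic equilibria:
# f(N) = r N (1 - N/K)  =>  f'(N) = r (1 - 2N/K)
r, K = 0.5, 10.0
df = lambda N: r * (1 - 2 * N / K)

print(f"f'(0) = {df(0.0):+.2f}  -> positive: N = 0 is unstable")
print(f"f'(K) = {df(K):+.2f}  -> negative: N = K is stable")
```

The signs match what the direction field showed: small populations grow away from $N = 0$, and perturbations around $N = K$ decay.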
4. Euler's Method¶
When we can't solve an ODE analytically, Euler's method gives us a simple numerical approximation. The idea is to follow the tangent line at each step:
$$y_{n+1} = y_n + h \cdot f(x_n, y_n)$$
Starting from the initial condition $(x_0, y_0)$, we step forward by $h$, using the slope $f(x_n, y_n)$ to estimate the next value.
Let's reproduce the textbook example (Table 1.1): $y' = y + x$, $y(0) = 0$, with step $h = 0.2$.
def euler_method(f, x0, y0, h, n_steps):
"""Euler's method for y' = f(x, y), y(x0) = y0."""
xs = np.zeros(n_steps + 1)
ys = np.zeros(n_steps + 1)
xs[0], ys[0] = x0, y0
for i in range(n_steps):
ys[i + 1] = ys[i] + h * f(xs[i], ys[i])
xs[i + 1] = xs[i] + h
return xs, ys
# Textbook example: y' = y + x, y(0) = 0, h = 0.2
h = 0.2
n_steps = 5
xs, ys = euler_method(f_textbook, x0=0, y0=0, h=h, n_steps=n_steps)
# Exact solution: y = e^x - x - 1 (with c=1 for y(0)=0)
y_exact_vals = np.exp(xs) - xs - 1
errors = y_exact_vals - ys
# Reproduce Table 1.1
print(f"{'n':>3} {'x_n':>6} {'y_n (Euler)':>12} {'y(x_n) exact':>13} {'Error':>8}")
print("-" * 50)
for i in range(n_steps + 1):
print(f"{i:3d} {xs[i]:6.1f} {ys[i]:12.3f} {y_exact_vals[i]:13.3f} {errors[i]:8.3f}")
  n    x_n  y_n (Euler)  y(x_n) exact    Error
--------------------------------------------------
  0    0.0        0.000         0.000    0.000
  1    0.2        0.000         0.021    0.021
  2    0.4        0.040         0.092    0.052
  3    0.6        0.128         0.222    0.094
  4    0.8        0.274         0.426    0.152
  5    1.0        0.488         0.718    0.230
The errors grow with each step -- this is typical of Euler's method. Each step introduces a local error, and these errors accumulate. Let's visualize this.
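The local error can be made quantitative with a Taylor expansion (a sketch of the standard local-error analysis, not from the text): a single Euler step misses the true solution by roughly $\tfrac{h^2}{2} y''$, the first neglected Taylor term. For $y' = y + x$, $y(0) = 0$ we have $y'' = y' + 1$, so $y''(0) = 1$ and the first step's error should be close to $h^2/2$.

```python
import numpy as np

# Local error of ONE Euler step from (0, 0) for y' = y + x.
# The true solution is y = e^x - x - 1; the Euler step gives 0
# because the slope at the origin is f(0, 0) = 0.
for h in [0.2, 0.1, 0.05]:
    exact_after_step = np.exp(h) - h - 1    # y(h) on the true solution
    euler_after_step = 0.0 + h * 0.0        # y_1 = y_0 + h f(x_0, y_0)
    local_error = exact_after_step - euler_after_step
    print(f"h = {h:5.2f}: local error = {local_error:.5f}  vs  h^2/2 = {h**2/2:.5f}")
```

The local error per step is $O(h^2)$; accumulating it over the $1/h$ steps needed to cross a fixed interval gives the global $O(h)$ error examined in the next section.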
fig, ax = plt.subplots(figsize=(8, 5))
# Direction field in background
plot_direction_field(f_textbook, (0, 1), (-0.1, 0.8), nx=15, ny=12, ax=ax,
title="Euler's method vs exact solution for y' = y + x, y(0) = 0")
# Exact solution
x_fine = np.linspace(0, 1, 200)
ax.plot(x_fine, np.exp(x_fine) - x_fine - 1, 'C0', lw=2, label='Exact: y = e$^x$ - x - 1')
# Euler steps
ax.plot(xs, ys, 'o-', color='C3', lw=2, ms=7, label=f'Euler (h = {h})', zorder=5)
# Show tangent lines at each step
for i in range(n_steps):
slope = f_textbook(xs[i], ys[i])
x_tang = np.array([xs[i], xs[i+1]])
y_tang = ys[i] + slope * (x_tang - xs[i])
ax.plot(x_tang, y_tang, '--', color='C3', alpha=0.4, lw=1)
ax.legend(fontsize=10)
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.tight_layout()
plt.show()
5. Effect of Step Size on Euler's Method Accuracy¶
The accuracy of Euler's method depends critically on the step size $h$. Smaller $h$ means more steps but better accuracy. The global error is $O(h)$ -- halving the step size roughly halves the error.
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
# Left: solutions for different step sizes
ax = axes[0]
x_fine = np.linspace(0, 1, 200)
ax.plot(x_fine, np.exp(x_fine) - x_fine - 1, 'k', lw=2, label='Exact')
step_sizes = [0.5, 0.2, 0.1, 0.05]
colors = ['C3', 'C1', 'C2', 'C4']
for h_val, col in zip(step_sizes, colors):
n = int(1.0 / h_val)
xs_h, ys_h = euler_method(f_textbook, 0, 0, h_val, n)
ax.plot(xs_h, ys_h, 'o-', color=col, ms=4, lw=1.5, label=f'h = {h_val}')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('Euler approximations with different step sizes')
ax.legend()
# Right: error at x=1 vs step size (log-log)
ax = axes[1]
h_values = np.array([0.5, 0.2, 0.1, 0.05, 0.02, 0.01, 0.005])
errors_at_1 = []
for h_val in h_values:
n = int(1.0 / h_val)
xs_h, ys_h = euler_method(f_textbook, 0, 0, h_val, n)
exact_at_1 = np.exp(1) - 1 - 1 # e - 2
errors_at_1.append(abs(exact_at_1 - ys_h[-1]))
errors_at_1 = np.array(errors_at_1)
ax.loglog(h_values, errors_at_1, 'o-', color='C0', lw=2, ms=7, label='Measured error')
ax.loglog(h_values, h_values * errors_at_1[0] / h_values[0], '--', color='gray',
lw=1.5, label='O(h) reference')
ax.set_xlabel('Step size h')
ax.set_ylabel('|Error| at x = 1')
ax.set_title('Error convergence (log-log)')
ax.legend()
plt.tight_layout()
plt.show()
The log-log plot confirms that Euler's method has first-order convergence: the error is proportional to $h$. The measured errors follow the $O(h)$ reference line. This means we need 10x more steps to gain one digit of accuracy -- a significant limitation that motivates higher-order methods (Runge-Kutta, etc.).
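To see why higher-order methods pay off, here is a sketch (not from the text) of Heun's method, also called the improved Euler method: instead of committing to the slope at the left end of each step, it averages the left-end slope with a predicted right-end slope, upgrading the global error from $O(h)$ to $O(h^2)$.

```python
import numpy as np

def heun_method(f, x0, y0, h, n_steps):
    """Heun's (improved Euler) method: average the slopes at both
    ends of each step. Global error is O(h^2) instead of O(h)."""
    xs = np.zeros(n_steps + 1)
    ys = np.zeros(n_steps + 1)
    xs[0], ys[0] = x0, y0
    for i in range(n_steps):
        k1 = f(xs[i], ys[i])                    # slope at the left end
        k2 = f(xs[i] + h, ys[i] + h * k1)       # predicted slope at the right end
        ys[i + 1] = ys[i] + h * (k1 + k2) / 2   # step with the average slope
        xs[i + 1] = xs[i] + h
    return xs, ys

f = lambda x, y: y + x
exact_at_1 = np.exp(1) - 2
for h in [0.2, 0.1, 0.05]:
    _, ys = heun_method(f, 0, 0, h, int(round(1 / h)))
    print(f"h = {h:5.2f}: |error| at x = 1 is {abs(ys[-1] - exact_at_1):.6f}")
```

Halving $h$ now cuts the error by roughly a factor of four, compared with the factor of two seen for Euler above.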
6. Euler's Method Applied to the Bioreactor¶
Let's use Euler's method to simulate bacterial growth in our bioreactor and compare with the exact logistic solution.
# Simulate bioreactor growth with Euler's method
N0 = 0.5 # initial population: 0.5 million cells
t_end = 20 # hours
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
# Exact logistic solution
t_fine = np.linspace(0, t_end, 300)
N_exact = K / (1 + ((K - N0) / N0) * np.exp(-r * t_fine))
# Left: Euler with different step sizes
ax = axes[0]
ax.plot(t_fine, N_exact, 'k', lw=2, label='Exact')
for h_val, col in zip([2.0, 1.0, 0.5, 0.1], ['C3', 'C1', 'C2', 'C4']):
n = int(t_end / h_val)
ts, Ns = euler_method(f_logistic, 0, N0, h_val, n)
ax.plot(ts, Ns, 'o-', color=col, ms=3 if h_val < 0.5 else 5,
lw=1.2, label=f'Euler h = {h_val}', alpha=0.8)
ax.set_xlabel('Time t (hours)')
ax.set_ylabel('Population N (millions)')
ax.set_title('Euler method for logistic growth')
ax.legend(fontsize=9)
# Right: step-by-step illustration with h = 2.0
ax = axes[1]
h_demo = 2.0
n_demo = int(t_end / h_demo)
ts_demo, Ns_demo = euler_method(f_logistic, 0, N0, h_demo, n_demo)
ax.plot(t_fine, N_exact, 'k', lw=2, label='Exact')
ax.plot(ts_demo, Ns_demo, 'o-', color='C3', lw=2, ms=8, label=f'Euler h = {h_demo}', zorder=5)
# Annotate first few steps
for i in range(min(4, n_demo)):
slope = f_logistic(ts_demo[i], Ns_demo[i])
ax.annotate(f'slope = {slope:.2f}',
xy=(ts_demo[i], Ns_demo[i]),
xytext=(ts_demo[i] + 0.3, Ns_demo[i] + 1.5 - 0.5 * i),
fontsize=8, color='C3',
arrowprops=dict(arrowstyle='->', color='C3', lw=0.8))
ax.set_xlabel('Time t (hours)')
ax.set_ylabel('Population N (millions)')
ax.set_title('Euler step-by-step (h = 2.0)')
ax.legend()
plt.tight_layout()
plt.show()
Notice how even with a coarse step size ($h = 2.0$), Euler's method captures the qualitative behavior: initial exponential growth followed by saturation at the carrying capacity. During the early growth phase (below the inflection point at $N = K/2$, where the curve is concave up), the method systematically underestimates the solution because it uses the slope at the beginning of each interval, which is smaller than the average slope over that interval.
7. Gallery: Direction Fields for Different ODE Types¶
Direction fields look different depending on the ODE's structure. Let's compare several types to build intuition.
odes = [
(lambda x, y: np.cos(x), "y' = cos x\n(depends on x only)", (-4, 4), (-3, 3)),
(lambda x, y: -y, "y' = -y\n(exponential decay)", (-2, 4), (-3, 3)),
(lambda x, y: y - y**2, "y' = y - y$^2$\n(logistic, K=1)", (-1, 6), (-0.5, 2)),
(lambda x, y: 1 + y**2, "y' = 1 + y$^2$\n(blows up!)", (-2, 2), (-3, 3)),
(lambda x, y: -x / (y + 1e-10), "y' = -x/y\n(circles)", (-3, 3), (-3, 3)),
(lambda x, y: np.sin(x) * y, "y' = sin(x) y\n(periodic growth)", (-6, 6), (-3, 3)),
]
fig, axes = plt.subplots(2, 3, figsize=(15, 9))
for ax, (f_ode, title, xr, yr) in zip(axes.flat, odes):
plot_direction_field(f_ode, xr, yr, nx=18, ny=18, ax=ax, title=title)
plt.tight_layout()
plt.show()
Each direction field has a distinctive pattern:
- y' = cos x: Slopes depend only on $x$ (independent of $y$). Vertical bands of parallel segments along each line $x = \text{const}$. All solution curves are vertical translates of $y = \sin x + c$.
- y' = -y: Exponential decay. The zero-isocline is $y = 0$; all solutions converge to it.
- y' = y - y$^2$: Logistic growth with equilibria at $y = 0$ (unstable) and $y = 1$ (stable).
- y' = 1 + y$^2$: No equilibria. Solutions blow up in finite time ($y = \tan(x + c)$).
- y' = -x/y: Solutions form circles $x^2 + y^2 = c^2$. The field is perpendicular to the radius at every point.
- y' = sin(x) y: Growth alternates between positive and negative, creating oscillating amplitudes.
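The closed-form solutions quoted above are easy to spot-check numerically (a sketch, not in the text): compare a centered finite-difference derivative of each claimed solution with the ODE's right-hand side.

```python
import numpy as np

def satisfies_ode(y_sol, f, xs, tol=1e-6):
    """Check y'(x) ≈ f(x, y(x)) via a centered finite difference."""
    h = 1e-5
    dy = (y_sol(xs + h) - y_sol(xs - h)) / (2 * h)
    return np.allclose(dy, f(xs, y_sol(xs)), atol=tol)

xs = np.linspace(-1, 1, 50)
assert satisfies_ode(lambda x: np.sin(x) + 2,   lambda x, y: np.cos(x), xs)  # y' = cos x
assert satisfies_ode(lambda x: 3 * np.exp(-x),  lambda x, y: -y,        xs)  # y' = -y
assert satisfies_ode(lambda x: np.tan(x + 0.2), lambda x, y: 1 + y**2,  xs)  # y' = 1 + y^2
print("gallery solutions check out numerically")
```

The particular constants ($c = 2$, $c = 3$, $c = 0.2$) are arbitrary choices for illustration; any value within the domain of validity works.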
8. When Euler's Method Fails: Stiff and Blow-up Cases¶
Euler's method can fail spectacularly for certain ODEs. Let's see two failure modes.
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
# Case 1: Stiff ODE y' = -50y, exact solution y = e^{-50t}
# With h too large, Euler oscillates wildly
ax = axes[0]
f_stiff = lambda t, y: -50 * y
t_fine = np.linspace(0, 0.5, 300)
ax.plot(t_fine, np.exp(-50 * t_fine), 'k', lw=2, label='Exact: y = e$^{-50t}$')
for h_val, col in zip([0.05, 0.03, 0.01], ['C3', 'C1', 'C2']):
n = int(0.5 / h_val)
ts, ys_stiff = euler_method(f_stiff, 0, 1, h_val, n)
ax.plot(ts, ys_stiff, 'o-', color=col, ms=3, lw=1, label=f'h = {h_val}')
ax.set_xlabel('t')
ax.set_ylabel('y')
ax.set_title("Stiff ODE: y' = -50y (Euler oscillates if h > 2/50)")
ax.set_ylim(-2, 2)
ax.legend(fontsize=9)
# Case 2: Blow-up ODE y' = y^2, exact solution y = 1/(1-t), blows up at t=1
ax = axes[1]
f_blowup = lambda t, y: y**2
t_fine = np.linspace(0, 0.95, 200)
ax.plot(t_fine, 1 / (1 - t_fine), 'k', lw=2, label='Exact: y = 1/(1-t)')
for h_val, col in zip([0.2, 0.1, 0.05], ['C3', 'C1', 'C2']):
n = int(0.95 / h_val + 1e-9)  # tiny offset guards against float truncation (0.95/0.05 rounds below 19)
ts, ys_blow = euler_method(f_blowup, 0, 1, h_val, n)
# Clip for display
mask = ys_blow < 30
ax.plot(ts[mask], ys_blow[mask], 'o-', color=col, ms=3, lw=1, label=f'h = {h_val}')
ax.set_xlabel('t')
ax.set_ylabel('y')
ax.set_title("Blow-up ODE: y' = y² (solution → ∞ at t = 1)")
ax.set_ylim(0, 25)
ax.legend(fontsize=9)
plt.tight_layout()
plt.show()
Left (Stiff ODE): For $y' = -50y$, Euler's method requires $h < 2/|{-50}| = 0.04$ for stability. With $h = 0.05 > 0.04$, the approximation oscillates with growing amplitude -- even though the true solution decays smoothly. This is the stiffness problem: fast dynamics demand tiny step sizes.
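A standard remedy for stiffness (sketched here; not covered in this section of AEM) is the implicit, or backward, Euler method. For $y' = -50y$ the implicit update $y_{n+1} = y_n + h \, (-50 \, y_{n+1})$ can be solved algebraically: $y_{n+1} = y_n / (1 + 50h)$. The update factor lies in $(0, 1)$ for every $h > 0$, so the method is stable at any step size.

```python
import numpy as np

def backward_euler_decay(y0, h, n_steps, a=50.0):
    """Backward Euler for y' = -a y: y_{n+1} = y_n / (1 + a h).
    The update factor is always in (0, 1), so the iteration decays
    monotonically for any h > 0 -- no stability restriction."""
    ys = np.zeros(n_steps + 1)
    ys[0] = y0
    for i in range(n_steps):
        ys[i + 1] = ys[i] / (1 + a * h)
    return ys

# h = 0.05 makes forward Euler oscillate with growing amplitude (left panel);
# backward Euler just decays smoothly toward zero.
ys = backward_euler_decay(1.0, h=0.05, n_steps=10)
print(ys)
```

The price is that for a general $f$, each implicit step requires solving an equation for $y_{n+1}$; for this linear ODE the solve is a one-line rearrangement.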
Right (Blow-up): For $y' = y^2$, the exact solution $y = 1/(1-t)$ goes to infinity at $t = 1$. Euler's method lags behind the true solution -- it always underestimates because the slope is increasing faster than it can keep up. No finite step size can track a blow-up.