#EECS#Python#Intro

Search and Approximation

In the AM session, we introduced iteration and used it to verify quantified statements by exhaustive checks over a domain. The primality test asked: does there exist a divisor of $n$ in $\{2, \ldots, n-1\}$? The universal check asked: does every element of a domain satisfy some predicate? In each case, the loop served as a computational implementation of a quantifier, scanning the domain one element at a time.

In this recitation, we push the idea further. Instead of merely checking whether a value with a given property exists, we use loops to find it. The shift from verification to search is where iteration becomes powerful, and it brings with it new questions: how do we guarantee that a search terminates? How quickly does it converge? And what happens when the arithmetic itself introduces errors?

Exhaustive Enumeration

The simplest search strategy is the one we have already seen in the context of quantifiers: try every candidate in some domain until one works. When the domain is finite (or can be made finite by restricting attention to a bounded range), this approach is called exhaustive enumeration, and it is guaranteed to find a solution if one exists.

Cube Roots

Suppose we want to find the integer cube root of a perfect cube. Given an integer $n$, we seek an integer $x$ such that $x^3 = n$, or wish to report that no such integer exists.

The strategy is direct: test $x = 0, 1, 2, \ldots$ in order until either $x^3 = |n|$ (success) or $x^3 > |n|$ (failure, since $a < b$ implies $a^3 < b^3$ for positive integers). For negative $n$, the cube root is the negation of the cube root of $|n|$.

n = 27
x = 0

while x ** 3 < abs(n):
    x = x + 1

if x ** 3 != abs(n):
    print(f'{n} is not a perfect cube')
else:
    if n < 0:
        x = -x
    print(f'Cube root of {n} is {x}')

# Cube root of 27 is 3

Hand simulation for $n = 27$:

| Iteration | x | x ** 3 | x ** 3 < 27 |
| --- | --- | --- | --- |
| Start | 0 | 0 | $\top$ |
| 1 | 1 | 1 | $\top$ |
| 2 | 2 | 8 | $\top$ |
| 3 | 3 | 27 | $\bot$ |

The loop terminates when $x = 3$, and since $3^3 = 27 = |n|$, the programme reports success.

Problem 1

Tracing cube roots. The programme above searches for an integer cube root by incrementing x from $0$. Trace the execution for $n = 8$, $n = -8$, and $n = 9$: in each case, construct the hand-simulation table, state the value of x when the loop terminates, and determine whether the programme reports a perfect cube.

Example 1 (Speed of Exhaustive Search)

How fast is exhaustive enumeration in practice? Try $n = 1{,}957{,}816{,}251$. The cube root is $1{,}251$ (since $1251^3 = 1{,}957{,}816{,}251$), so the loop iterates $1{,}251$ times. On a modern machine, this completes almost instantaneously. Now try $n = 7{,}406{,}961{,}012{,}236{,}344{,}616$. The cube root is $1{,}949{,}306$, so the loop runs nearly two million times, yet the programme still finishes in under a second. Modern processors execute billions of instructions per second; a million iterations is negligible.
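To check this on your own machine, one can wrap the search in a timer. This is a sketch: the helper name cube_root_steps is ours, and time.perf_counter is the standard library's high-resolution clock.

```python
import time

def cube_root_steps(n):
    # Exhaustive search as above; returns x, which counts the loop
    # iterations and equals the cube root when |n| is a perfect cube.
    x = 0
    while x ** 3 < abs(n):
        x = x + 1
    return x

start = time.perf_counter()
steps = cube_root_steps(1_957_816_251)
elapsed = time.perf_counter() - start
print(f'{steps} iterations in {elapsed:.4f} seconds')
```

The elapsed time varies by machine, but even the two-million-iteration example above should finish well within a second.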

Termination and Decrementing Functions

Every loop in a correct programme must eventually terminate (unless it is intentionally infinite, such as a server’s event loop). To prove termination rigorously, we exhibit a quantity that strictly decreases on each iteration and is bounded below.

Definition 1 (Decrementing Function)

A decrementing function for a loop is an integer-valued expression $D$ of the programme variables such that:

  1. $D \geqslant 0$ whenever the loop test evaluates to $\top$, and
  2. $D$ strictly decreases on every iteration of the loop body.

If a loop admits a decrementing function, it terminates after at most $D_0$ iterations, where $D_0$ is the initial value of $D$.

The existence of a decrementing function converts the informal assertion “this loop eventually stops” into a rigorous mathematical argument. For the cube-root search, the natural candidate is $D = \lceil |n|^{1/3} \rceil - x$: it starts non-negative, and each iteration reduces it by $1$.

But this uses a notation we have not yet formalised. In Homework 2 (Problem 12), we introduced the floor function $\lfloor x \rfloor$, the greatest integer less than or equal to $x$. Its companion rounds upward.

Definition 2 (Ceiling Function)

The ceiling of a real number $x$, written $\lceil x \rceil$, is the least integer greater than or equal to $x$. Equivalently, $\lceil x \rceil = -\lfloor -x \rfloor$.

For example, $\lceil 2.3 \rceil = 3$, $\lceil -1.7 \rceil = -1$, and $\lceil 5 \rceil = 5$. The floor and ceiling of $x$ satisfy $\lfloor x \rfloor \leqslant x \leqslant \lceil x \rceil$, with equality on both sides precisely when $x$ is an integer.
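Python's math module provides both functions, so the examples and the identity above can be checked directly:

```python
import math

# math.floor and math.ceil implement the floor and ceiling functions.
print(math.ceil(2.3))    # 3
print(math.ceil(-1.7))   # -1
print(math.ceil(5))      # 5

# The identity ceil(x) = -floor(-x) holds for every x.
for x in [2.3, -1.7, 5, 0.0, -3.5]:
    assert math.ceil(x) == -math.floor(-x)
```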

Theorem 1 (Termination of the Cube-Root Search)

The exhaustive cube-root search terminates for every non-negative integer $n$.

Proof

Define $D = \lceil |n|^{1/3} \rceil - x$. Initially $x = 0$, so $D = \lceil |n|^{1/3} \rceil \geqslant 0$. Whenever the loop test $x^3 < |n|$ holds, we have $x < |n|^{1/3} \leqslant \lceil |n|^{1/3} \rceil$, giving $D \geqslant 1 > 0$. Each iteration increments $x$ by $1$, so $D$ decreases by exactly $1$. Since $D$ is a non-negative integer that decreases by $1$ per iteration, the loop executes at most $\lceil |n|^{1/3} \rceil$ times.

Note (Debugging with Decrementing Functions)

When a programme appears to run forever, the decrementing-function framework suggests a diagnostic strategy. Identify the quantity that should be decreasing. If you cannot find one, the loop may lack a valid decrementing function, which is the mathematical signature of a potential infinite loop. Print the candidate decrementing function at each iteration; if the printed values fail to decrease, the bug is in the loop body. For example, commenting out x = x + 1 in the cube-root search gives $D = \lceil |n|^{1/3} \rceil$ on every iteration, constant rather than decreasing.
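As a concrete illustration of this diagnostic, we can instrument the cube-root loop to print the candidate decrementing function. A sketch: for a small perfect cube we recover the integer cube root with round(); an exact integer cube-root routine would be needed in general.

```python
n = 27
x = 0

# Candidate decrementing function D = ceil(|n|^(1/3)) - x.
# round() suffices here because 27 is a small perfect cube.
root = round(abs(n) ** (1 / 3))
while x ** 3 < abs(n):
    print(f'x = {x}, D = {root - x}')   # prints D = 3, 2, 1: strictly decreasing
    x = x + 1
```

Commenting out the increment would freeze the printed value of D, exposing the infinite loop immediately.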

Remark (Speed of Exhaustive Enumeration)

Exhaustive enumeration is correct and simple, but it can be slow. The cube-root search tests $\lceil |n|^{1/3} \rceil$ candidates. For $n = 10^{12}$, that is $10{,}000$ iterations: entirely manageable. For $n = 10^{30}$, it is $10^{10}$ iterations, which could take minutes or hours. We shall shortly see a method that reduces $10^{10}$ steps to roughly $100$.

Approximate Solutions

The cube-root search works only for perfect cubes: it demands an exact integer answer. But many numerical problems have no exact answer in the integers, or even in the rationals. The square root of $2$ is irrational; if we insist on exactness, no finite programme can return $\sqrt{2}$. The remedy is to relax the requirement: instead of the exact root, we seek a value close enough.

Definition 3 (Epsilon-Approximation)

Let $f$ be a function and $y$ a target value. An $\varepsilon$-approximation to a solution of $f(x) = y$ is a value $\hat{x}$ such that $|f(\hat{x}) - y| < \varepsilon$, where $\varepsilon > 0$ is a prescribed tolerance.

The idea is to accept a controlled amount of error. The tolerance $\varepsilon$ is chosen by the programmer: a smaller $\varepsilon$ demands a more accurate answer but, as we shall see, may require vastly more computation.

Exhaustive Search for Square Roots

We adapt exhaustive enumeration to the approximate setting. Instead of testing integers $0, 1, 2, \ldots$, we test values $0, \delta, 2\delta, 3\delta, \ldots$ where $\delta > 0$ is a small step size. For each candidate $\hat{x}$, we check whether $|\hat{x}^2 - n| < \varepsilon$.

n = 25
epsilon = 0.01
step = 0.0001
guess = 0.0
num_guesses = 0

while abs(guess ** 2 - n) >= epsilon and guess <= n:
    guess = guess + step
    num_guesses = num_guesses + 1

if abs(guess ** 2 - n) >= epsilon:
    print(f'Failed to find sqrt({n})')
else:
    print(f'{guess} is close to sqrt({n})')
    print(f'Number of guesses: {num_guesses}')

# 4.99900000000... is close to sqrt(25)
# Number of guesses: 49990

The programme tests up to $n / \delta$ candidates in the worst case; here it stops after roughly $\sqrt{n} / \delta$ of them. For $n = 25$ and $\delta = 0.0001$, that is up to $250{,}000$ candidates, of which about $50{,}000$ are actually tried. The result is not $5$ exactly, but that is by design: $4.9990\ldots$ satisfies $|\hat{x}^2 - 25| < \varepsilon$, which is all we asked for.

Example 2 (When the Search Space Misses the Answer)

Consider the same programme with $n = 0.25$:

n = 0.25
epsilon = 0.01
step = 0.0001
guess = 0.0

while abs(guess ** 2 - n) >= epsilon and guess <= n:
    guess = guess + step

if abs(guess ** 2 - n) >= epsilon:
    print(f'Failed to find sqrt({n})')

# Failed to find sqrt(0.25)

The search fails because $\sqrt{0.25} = 0.5$, but the loop condition guess <= n halts the search at $\hat{x} = 0.25$, well before reaching $0.5$. The guard was designed for $n \geqslant 1$, where $\sqrt{n} \leqslant n$. For $0 < n < 1$, we have $\sqrt{n} > n$, so the upper bound must be adjusted.
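A minimal repair, keeping exhaustive enumeration and only widening the guard so the search space always contains the root:

```python
n = 0.25
epsilon = 0.01
step = 0.0001
guess = 0.0

# Widen the guard: sqrt(n) <= max(1, n) for every positive n,
# so the search space now contains the root for 0 < n < 1 as well.
while abs(guess ** 2 - n) >= epsilon and guess <= max(1.0, n):
    guess = guess + step

if abs(guess ** 2 - n) >= epsilon:
    print(f'Failed to find sqrt({n})')
else:
    print(f'{guess} is close to sqrt({n})')
```

The same max(1.0, n) bound reappears below when we set up bisection search.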

Example 3 (When the Step Size is Too Large)

Now try $n = 123{,}456$ with the same step $\delta = 0.0001$. The programme runs for a long time and eventually reports failure. The step size is too large: the algorithm steps over every value close to $\sqrt{123456} \approx 351.36$ without landing within $\varepsilon$ of the true root. Making $\delta$ smaller (say $\delta = \varepsilon^3 = 10^{-6}$) fixes the problem, but the programme must then test roughly $351{,}000{,}000$ candidates. We could try to speed things up by starting closer to the answer, but that presumes we already know the neighbourhood of the answer.

There is a deeper tension at work. The step size $\delta$ controls both the accuracy of the answer and the running time. A smaller $\delta$ gives a more precise result but requires more iterations. We need a fundamentally better strategy.

The key observation is that exhaustive enumeration ignores information. Each failed guess tells us whether $\hat{x}^2$ was too small or too large, but we discard this knowledge and blindly proceed to $\hat{x} + \delta$. A cleverer strategy would use the comparison to eliminate half the remaining candidates at each step.

This is exactly what you do when looking up a word in a dictionary. You do not start at page one and scan every entry. You open the dictionary near the middle. If the word you seek comes after the visible page, you discard the first half; otherwise, you discard the second half. You then repeat on the surviving half, halving the search space each time.

Before stating the algorithm precisely, we fix notation for the sets it operates on.

Definition 4 (Interval Notation)

For real numbers $a \leqslant b$, we write (reading $\{x \in \mathbb{R} : \ldots\}$ as “the set of all real $x$ such that $\ldots$”):

  • $[a, b] = \{x \in \mathbb{R} : a \leqslant x \leqslant b\}$, a closed interval,
  • $(a, b) = \{x \in \mathbb{R} : a < x < b\}$, an open interval,
  • $[a, b) = \{x \in \mathbb{R} : a \leqslant x < b\}$ and $(a, b] = \{x \in \mathbb{R} : a < x \leqslant b\}$, half-open intervals.

The notation $(a, b)$ for an open interval is standard but potentially ambiguous with the ordered pair $(a, b)$. Context resolves the ambiguity; where confusion is possible, the continental notation $]a, b[$ is sometimes used.

Definition 5 (Bisection Search)

Bisection search (or binary search) solves $f(x) = y$ on an interval $[\ell, h]$ where $f$ preserves order (that is, $a < b$ implies $f(a) < f(b)$ throughout), by maintaining the invariant that the solution lies within $[\ell, h]$ and repeatedly halving the interval:

  1. Compute the midpoint $m = (\ell + h) / 2$.
  2. If $f(m)$ is too large, set $h \leftarrow m$. If $f(m)$ is too small, set $\ell \leftarrow m$.
  3. Repeat until $|f(m) - y| < \varepsilon$ or the interval is sufficiently narrow.

After $k$ steps, the interval has width $(h_0 - \ell_0) / 2^k$, where $h_0 - \ell_0$ is the initial width.

Note (Successive Approximation)

The narrowing interval in bisection search is an early instance of a pattern that recurs throughout analysis: a sequence of values approaching a target, with the error shrinking by a predictable factor at each step. We will formalise this idea under the name convergence when we study sequences and limits.

The figure below traces bisection search for $\sqrt{25}$ on the interval $[0, 25]$. Each row shows one iteration: the grey bar is the surviving interval, the green diamond marks the midpoint, and the dashed red line marks the true root at $x = 5$.

Bisection search for the square root of 25

Implementation

n = 25
epsilon = 0.01
low = 0.0
high = max(1.0, n)
guess = (low + high) / 2.0
num_guesses = 0

while abs(guess ** 2 - n) >= epsilon:
    num_guesses = num_guesses + 1
    if guess ** 2 < n:
        low = guess
    else:
        high = guess
    guess = (low + high) / 2.0

print(f'{guess} is close to sqrt({n})')
print(f'Number of guesses: {num_guesses}')

# 4.999923706054... is close to sqrt(25)
# Number of guesses: 13

Where exhaustive enumeration needed roughly $50{,}000$ guesses, bisection needs $13$. Note the use of max(1.0, n) for the upper bound: Python’s built-in max returns the larger of its arguments (Homework 2, Problem 4), so this ensures that $\sqrt{n}$ lies within $[0, h]$ whether $n \geqslant 1$ or $0 < n < 1$, correcting the failure we observed with exhaustive approximation.

Example 4 (Hand Simulation of Bisection)

Tracing the first four steps for $\sqrt{25}$ on $[0, 25]$:

| Step | low | high | guess | guess ** 2 | Action |
| --- | --- | --- | --- | --- | --- |
| 0 | 0.0 | 25.0 | 12.5 | 156.25 | Too high: high = 12.5 |
| 1 | 0.0 | 12.5 | 6.25 | 39.0625 | Too high: high = 6.25 |
| 2 | 0.0 | 6.25 | 3.125 | 9.765625 | Too low: low = 3.125 |
| 3 | 3.125 | 6.25 | 4.6875 | 21.972656 | Too low: low = 4.6875 |

After four steps, the interval has shrunk from width $25$ to width $6.25 - 4.6875 = 1.5625$. Four more steps reduce it to below $0.1$.
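The table can be generated mechanically. A sketch that replays the first four steps and prints one row per iteration (low and high shown as they stand before the update):

```python
n = 25.0
low, high = 0.0, 25.0

# Replay the first four bisection steps for sqrt(25).
for step in range(4):
    guess = (low + high) / 2.0
    print(f'{step}  low={low}  high={high}  guess={guess}  guess**2={guess ** 2}')
    if guess ** 2 < n:
        low = guess    # too low: keep the upper half
    else:
        high = guess   # too high: keep the lower half
```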

Example 5 (Bisection on a Larger Input)

Recall that exhaustive approximation failed on $n = 123{,}456$ because the step size was either too large (missing the root) or too small (requiring hundreds of millions of guesses). Bisection search handles it effortlessly. The initial interval is $[0, 123456]$, and bisection needs roughly $\lceil \log_2(123456 / 0.01) \rceil = \lceil \log_2(12{,}345{,}600) \rceil = 24$ guesses. Running the programme confirms: it terminates in $30$ guesses (the slight excess over the bound comes from testing $|\hat{x}^2 - n|$ rather than the interval width).

Convergence

The bisection algorithm produces a sequence of guesses $m_0, m_1, m_2, \ldots$ that draw progressively closer to the true root. Each step halves the interval containing the root, so the distance between the guess and the true answer shrinks by a factor of $2$ at every iteration. This is an instance of a general phenomenon, a sequence of values approaching a target, which we will later study under the name convergence. For now, we establish the rate at which bisection approaches its target.

The bound involves the logarithm base $2$, written $\log_2(N)$: the power to which $2$ must be raised to produce $N$. For example, $\log_2(8) = 3$ since $2^3 = 8$, and $\log_2(1024) = 10$ since $2^{10} = 1024$. For values that are not exact powers of $2$, $\log_2$ is not an integer; for instance, $\log_2(2500) \approx 11.29$.
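Python's math module provides this function directly as math.log2; a quick check of the examples above:

```python
import math

# math.log2 computes the base-2 logarithm directly.
print(math.log2(8))       # 3.0
print(math.log2(1024))    # 10.0
print(math.log2(2500))    # approximately 11.29
```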

Theorem 2 (Convergence of Bisection Search)

Let $[a, b]$ be the initial interval and $\varepsilon > 0$ the desired tolerance. Bisection search terminates after at most $\lceil \log_2((b - a) / \varepsilon) \rceil$ iterations.

Proof

After $k$ iterations, the interval has width $(b - a) / 2^k$. The midpoint of an interval of width $w$ is within $w / 2$ of any point in the interval, so the guess is within $(b - a) / 2^{k+1}$ of the true root. It suffices that $(b - a) / 2^k < \varepsilon$, which gives $2^k > (b - a) / \varepsilon$, hence $k > \log_2((b - a) / \varepsilon)$. The smallest integer satisfying this is $k = \lceil \log_2((b - a) / \varepsilon) \rceil$.

Example 6

For $\sqrt{25}$ on $[0, 25]$ with $\varepsilon = 0.01$:

$\lceil \log_2(25 / 0.01) \rceil = \lceil \log_2(2500) \rceil = \lceil 11.29 \rceil = 12$

The bound predicts at most $12$ iterations. Our programme used $13$ (the slight discrepancy arises because the termination test checks $|\hat{x}^2 - n|$ rather than the interval width directly, and floating-point rounding can add an extra step).

Remark (Logarithmic vs. Linear Growth)

Exhaustive enumeration with step $\delta$ tests roughly $n / \delta$ candidates. Bisection search tests roughly $\log_2(n / \varepsilon)$ candidates. To appreciate the difference concretely: for $n = 10^9$ and tolerance $\varepsilon = 0.01$, exhaustive enumeration would need on the order of $10^{13}$ guesses, while bisection needs roughly $\log_2(10^{11}) \approx 37$. Every time the search range doubles, exhaustive enumeration doubles its work; bisection adds a single extra step. This is the power of halving: even for astronomically large ranges, the number of bisection steps remains modest.

Problem 2

Bisection for cube roots. The bisection programme above approximates $\sqrt{n}$ by checking whether guess ** 2 overshoots or undershoots $n$. Adapt it to approximate $\sqrt[3]{27}$ to within $\varepsilon = 0.001$. How many guesses does your programme require? Compare this to the number of steps exhaustive enumeration with step $\delta = 0.001$ would need.

Machine Arithmetic

The approximate methods above assume that arithmetic on real numbers is exact: that $0.1 + 0.1 + 0.1$ equals $0.3$, that halving an interval truly halves it, and that the only source of error is the deliberate tolerance $\varepsilon$. On a computer, these assumptions are false.

Binary Fractions

Computers store numbers in binary. Just as the decimal system represents fractions using negative powers of $10$, so that $0.375 = 3/10 + 7/100 + 5/1000$, the binary system uses negative powers of $2$:

$(0.b_1 b_2 b_3 \ldots)_2 = \frac{b_1}{2} + \frac{b_2}{4} + \frac{b_3}{8} + \cdots$

Example 7 (Exact Binary Fractions)

The decimal fraction 0.3750.375 has an exact binary representation:

$0.375 = \frac{1}{4} + \frac{1}{8} = 0 \cdot \frac{1}{2} + 1 \cdot \frac{1}{4} + 1 \cdot \frac{1}{8} = (0.011)_2$

Similarly, $0.5 = 1/2 = (0.1)_2$ and $0.625 = 1/2 + 1/8 = (0.101)_2$. These fractions are exact because their denominators are powers of $2$: $0.375 = 3/8$, $0.5 = 1/2$, $0.625 = 5/8$.

But not every decimal fraction is so accommodating.

Theorem 3 (Inexact Representation of 1/10 in Binary)

The fraction $1/10$ has no finite binary representation.

Proof

Suppose for contradiction that $1/10 = (0.b_1 b_2 \ldots b_k)_2$ for some finite sequence of bits $b_1, \ldots, b_k$. Then

$\frac{1}{10} = \frac{b_1}{2} + \frac{b_2}{4} + \cdots + \frac{b_k}{2^k} = \frac{M}{2^k}$

for some non-negative integer $M = b_1 \cdot 2^{k-1} + b_2 \cdot 2^{k-2} + \cdots + b_k$. This gives $2^k = 10M$, so $2^k$ is divisible by $5$. But $2^k$ is a power of $2$, and since $5$ is an odd prime, $5 \nmid 2^k$ for any $k \geqslant 0$. This contradicts $2^k = 10M$.

This is not a quirk of the number $1/10$. A rational number $p/q$ in lowest terms has a finite binary expansion if and only if $q$ is a power of $2$. Since $10 = 2 \times 5$, the fraction $1/10$ requires an infinite repeating binary expansion, just as $1/3 = 0.333\ldots$ repeats infinitely in decimal.
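We can watch the repetition emerge by running base-2 long division on $1/10$: repeatedly double the remainder and emit the integer part as the next bit. A sketch in exact integer arithmetic, so no floating-point error intrudes:

```python
# Base-2 long division for the fraction 1/10: at each step the next
# bit is the integer part of 2 * remainder / 10, computed exactly.
num, den = 1, 10
bits = []
for _ in range(12):
    num *= 2
    bits.append(num // den)
    num %= den

print(''.join(map(str, bits)))   # 000110011001: the block 0011 repeats forever
```

So $1/10 = (0.0\overline{0011})_2$, and any finite number of stored bits must round it.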

Floating-Point Consequences

Python’s float type stores numbers using the IEEE 754 double-precision format: $64$ bits encoding a sign, an $11$-bit exponent, and a $52$-bit significand (plus one implied leading bit). This provides approximately $15$ to $17$ significant decimal digits of precision.
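Python exposes this representation directly: the float method hex prints the exact stored significand, and sys.float_info reports the format's parameters. A brief sketch:

```python
import sys

# The stored bits of 0.1: the hex significand 1.999999999999a is the
# infinite binary pattern ...10011001... rounded to 52 stored bits.
print((0.1).hex())              # 0x1.999999999999ap-4

# Parameters of the IEEE 754 double format.
print(sys.float_info.mant_dig)  # 53 significand bits (52 stored + 1 implied)
print(sys.float_info.dig)       # 15 guaranteed decimal digits
```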

The practical consequence is that decimal constants which look exact in source code are silently rounded to the nearest representable binary fraction. We can expose this by using the format specifier :.20f inside an f-string, which prints $20$ digits after the decimal point:

print(0.1)              # 0.1  (display is rounded)
print(f'{0.1:.20f}')    # 0.10000000000000000555

print(0.1 + 0.1 + 0.1 == 0.3)    # False
print(f'{0.1 + 0.1 + 0.1:.20f}') # 0.30000000000000004441
print(f'{0.3:.20f}')              # 0.29999999999999998890

The sum $0.1 + 0.1 + 0.1$ is not $0.3$: it overshoots by approximately $4.4 \times 10^{-17}$. Meanwhile, the literal 0.3 is itself rounded downward from the true $3/10$. The two rounding errors go in opposite directions, so == returns False.

Example 8 (A Loop That Never Lands)

Consider the following attempt to count from $0$ to $1$ in steps of $0.1$:

x = 0.0
count = 0

while x != 1.0:
    x = x + 0.1
    count = count + 1
    if count > 20:
        print('Gave up')
        break

This loop never terminates naturally. After $10$ additions of $0.1$, x is $0.9999999999999999$, not $1.0$. The eleventh addition overshoots to $1.0999999999999999$, so x skips past $1.0$ entirely. The condition x != 1.0 therefore remains true forever, and only the count > 20 guard stops the loop. Replacing != with < 1.0 or abs(x - 1.0) < epsilon resolves the issue.

Remark (Never Compare Floats with ==)

Never test floating-point numbers for exact equality. The expression x == 0.3 is almost certainly wrong, even if x was computed by a formula that is mathematically equal to $3/10$. Instead, test whether the difference is small:

x = 0.1 + 0.1 + 0.1
print(abs(x - 0.3) < 1e-9)   # True

This is precisely the $\varepsilon$-approximation philosophy applied to equality testing. The tolerance should reflect the precision of the computation, not the precision of the display.
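The standard library provides this comparison ready-made: math.isclose tests closeness with a relative tolerance (rel_tol, default 1e-09) and an optional absolute tolerance (abs_tol) for comparisons near zero. A brief sketch:

```python
import math

x = 0.1 + 0.1 + 0.1

# The relative tolerance scales with the operands' magnitude;
# abs_tol is needed when one side may be exactly zero.
print(math.isclose(x, 0.3))                      # True
print(math.isclose(x - 0.3, 0.0, abs_tol=1e-9))  # True
```

Preferring isclose over a hand-rolled abs(a - b) < eps avoids choosing a single absolute tolerance that is too loose for small numbers and too strict for large ones.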

Example 9 (Accumulation of Rounding Error)

Suppose we sum $0.1$ one thousand times:

total = 0.0

for i in range(1000):
    total = total + 0.1

print(f'{total:.20f}')    # not exactly 100.0
print(total == 100.0)      # False
print(abs(total - 100.0))  # small but nonzero

Each addition rounds its result to the nearest representable float: near a running total of $100$, consecutive floats are about $10^{-14}$ apart. Over $1{,}000$ additions, these errors accumulate to roughly $10^{-12}$: small in absolute terms, but enough to defeat an exact equality test.
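When accumulated rounding matters, the standard library's math.fsum performs exactly rounded summation: it tracks all the partial sums without loss, so the only remaining error is the representation error of the inputs themselves. A brief sketch:

```python
import math

# Naive left-to-right summation accumulates a rounding error at
# every addition; fsum defers all rounding to a single final step.
naive = sum(0.1 for _ in range(1000))
exact = math.fsum(0.1 for _ in range(1000))

print(abs(naive - 100.0))   # around 1e-12
print(abs(exact - 100.0))   # 0.0 here: the exact sum rounds to 100.0
```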

Remark (When Floating-Point Error Matters)

For the bisection search above, floating-point error is harmless: we already expect an approximate answer, and the tolerance $\varepsilon$ is vastly larger than the rounding errors. The danger arises when code implicitly assumes exact arithmetic, for instance by testing x == 0.0 instead of abs(x) < epsilon, or by relying on a loop counter that increments by 0.1 to hit exactly 1.0 after ten steps.

Exercises

Exercise 1 (Quantifier Ordering)

Let $p(x, y)$ denote the predicate “$x + y$ is even,” where $x$ and $y$ range over the integers. The order in which quantifiers appear determines the meaning of a statement; exchanging $\forall$ and $\exists$ can change a true statement into a false one.

(a) Prove that $\forall x\, \exists y\, p(x, y)$ is true.

(b) Prove that $\exists y\, \forall x\, p(x, y)$ is false.

Exercise 2 (Distribution and Pigeonhole)

You have $m$ indistinguishable marbles, $m \in \mathbb{Z}^+$, and $5$ indistinguishable bags.

(a) What is the smallest value of $m$ such that any distribution of marbles into bags is guaranteed to produce either $5$ marbles in the same bag or $5$ bags each containing at least one marble? Prove your answer.

(b) Now suppose there are $n$ marbles and $k$ bags, where $n$ and $k$ are positive integers. Prove that there is a unique way to distribute the marbles such that the number of marbles in any two bags differs by at most one.

Exercise 3 (Parametric Uniqueness)

Consider the equation $x^4 y + ay + x = 0$, where $a, x, y \in \mathbb{R}$.

(a) Show that the following statement is false: for all $a, x \in \mathbb{R}$, there is a unique $y \in \mathbb{R}$ such that $x^4 y + ay + x = 0$.

(b) Find the set of real numbers $a$ such that the following statement is true: for all $x \in \mathbb{R}$, there is a unique $y \in \mathbb{R}$ such that $x^4 y + ay + x = 0$.

Exercise 4 (Even-Indexed Extraction)

Strings in Python are indexed starting from $0$, so the even-indexed characters of a string s occupy positions $0, 2, 4, \ldots$ Given a string variable my_str, write a programme that prints a new string containing the even-indexed characters of my_str. For example, if my_str = "abcdefg", the output should be aceg.

Exercise 5 (Bisection Guessing)

Suppose you are given an integer $N$ with $0 \leqslant N \leqslant 1000$. Write a programme that uses bisection search to determine $N$. The programme should print two lines: count: followed by the number of guesses needed, and answer: followed by the value of $N$. If the midpoint of the current interval falls exactly between two integers, choose the smaller one.

Exercise 6 (Perfect Power Decomposition)

A positive integer $n$ is a perfect power if $n = r^p$ for some integers $r \geqslant 1$ and $p \geqslant 2$. Write a programme that reads a positive integer and prints integers root and pwr satisfying $1 < \text{pwr} < 6$ and $\text{root}^{\text{pwr}} = n$. If no such pair exists, print a message to that effect.

Test on $n = 64$ (which admits $8^2$, $4^3$, and $2^6$), $n = 27$ ($= 3^3$), and $n = 72$ (not a perfect power with exponent below $6$).

Exercise 7 (Logarithm by Bisection)

The logarithm base $2$ of a positive real number $x$ is the value $a$ satisfying $2^a = x$. Write a programme that uses bisection search to approximate $\log_2(x)$ to within $\varepsilon = 0.001$. Your programme must first determine an interval $[L, H]$ guaranteed to contain $\log_2(x)$: note that $2^0 = 1$, and $2^k$ grows without bound as $k$ increases, so such an interval can always be found. Test on $x = 1$ ($\log_2 1 = 0$), $x = 32$ ($\log_2 32 = 5$), and $x = 0.1$.