# The Limit of a Function

We have seen that the idea of the limit of a function at a point
\(x_0\) can be fairly intuitive. In order to formalize that idea,
we need somehow to say that the closer \(x\) gets to \(x_0\),
the closer \(f(x)\) gets to the limit. The way this
actually gets stated is a little roundabout. Here we go.
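The standard \(\epsilon\)-\(\delta\) statement, the one the remarks below unpack, is the following. We say that \[\lim_{x\to x_0} f(x) = L\] if, for every \(\epsilon>0\), there is a \(\delta>0\) such that \(\vert f(x)-L\vert<\epsilon\) whenever \(0<\vert x-x_0\vert<\delta\).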

This seems a little circular, but the point is basically that
if you tell me that you want the values of \(f\) to be
within \(\epsilon\) of \(L\),
I can always find some interval \((x_0-\delta,x_0+\delta)\)
where all the values of \(f(x)\) *are* that close.

There are several things worth noting here.

- Note that the definition does not involve \(f(x_0)\). The function \(f\) does not even need to be defined at \(x_0\). Instead, the limit is a description of the behavior of \(f\) at all the points near \(x_0\). That's a good thing. Most of the limits we are actually interested in involve quotients where the denominator vanishes at \(x_0\).
- To emphasize, the definition says that we can make all the function values as close as we want to the limit, just by keeping all the function arguments close to \(x_0\). A function fails to have a limit at \(x_0\) only when, no matter how close we choose the \(x\) values to \(x_0\), there are nonetheless function values \(f(x)\) that are far from the limit.

With a definition like this, we can go about proving (or disproving) that any particular function has a limit at a point. For example, consider the function \(f\) given by \[f(x)=x^2-1.\] We will show that the limit of \(f(x)\) as \(x\) approaches zero is -1. Suppose you give me some number \(\epsilon\), as small as you like. Whatever \(\epsilon\) you choose, I choose \(\delta=\sqrt\epsilon\). With that choice, no matter what \(x\) we pick from the interval \((0-\sqrt\epsilon,0+\sqrt\epsilon)\), we have \(\vert x\vert<\delta\), and so \begin{align*} \vert f(x)-(-1)\vert &= \vert (x^2-1)+1\vert\\ &= x^2\\ &< \delta^2\\ &= (\sqrt\epsilon)^2 = \epsilon. \end{align*}
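This argument can be spot-checked numerically. The sketch below is an illustration rather than a proof, since it only samples finitely many points: for several values of \(\epsilon\) it sets \(\delta=\sqrt\epsilon\) and confirms that every sampled \(x\) in \((-\delta,\delta)\) satisfies \(\vert f(x)-(-1)\vert<\epsilon\).

```python
# Numerical spot-check of the epsilon-delta argument for f(x) = x^2 - 1 at x0 = 0.
# For each epsilon we choose delta = sqrt(epsilon), sample points strictly inside
# (-delta, delta), and confirm |f(x) - (-1)| < epsilon at every sample.
import math

def f(x):
    return x * x - 1

L = -1  # the claimed limit

for eps in [1.0, 0.1, 0.01, 1e-4]:
    delta = math.sqrt(eps)
    # 1000 sample points strictly inside the open interval (-delta, delta)
    samples = [-delta + 2 * delta * (k + 1) / 1002 for k in range(1000)]
    assert all(abs(f(x) - L) < eps for x in samples)

print("every sample satisfied |f(x) + 1| < epsilon")
```

Of course, a finite sample cannot replace the inequality \(x^2<\delta^2=\epsilon\), but it is a useful sanity check on the choice of \(\delta\).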

Figure 1: Epsilon-delta box for the function \(f(x)=x^2-1\) with \(x_0=0\).

Figure 1 shows a limit setup for this example. The graph of the function \(f\) is shown in blue. We think that the limit is -1. The lines \(-1+\epsilon\) and \(-1-\epsilon\) are shown in gray; they show how close you have required us to be to the limit. As you make \(\epsilon\) smaller, these lines move closer together, meaning that you are requiring the values of \(f(x)\) to be closer to the limit -1. The red box has left and right edges at \(0-\delta\) and \(0+\delta\), respectively, and its upper and lower edges mark the maximum and minimum values of the function over that interval. Thus the points inside the red box all have \(x\) coordinates within \(\delta\) of 0, and the graph of \(f\) lies within the box over that interval. If this box, containing all the values of \(f\) over the delta interval, lies between the lines \(L+\epsilon\) and \(L-\epsilon\), then our definition is satisfied: the limit is -1. As you make \(\epsilon\) smaller, I have to make \(\delta\) smaller to fit the box between the \(\epsilon\) lines. This is how the definition works: if we can always choose \(\delta\) so that the graph of \(f\) stays between the lines, then the limit exists.
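The box test in Figure 1 can also be phrased computationally: find the minimum and maximum of \(f\) over \((x_0-\delta,x_0+\delta)\) and check that both lie strictly between \(L-\epsilon\) and \(L+\epsilon\). A small sketch, again sampling the interval rather than proving anything:

```python
# Sketch of the "box test" from Figure 1: the box's vertical extent is the
# range of f over (x0 - delta, x0 + delta), and the box fits exactly when
# that range lies inside (L - epsilon, L + epsilon).
def box_fits(f, x0, L, eps, delta, n=1001):
    xs = [x0 - delta + 2 * delta * (k + 1) / (n + 1) for k in range(n)]
    ys = [f(x) for x in xs]
    return L - eps < min(ys) and max(ys) < L + eps

def f(x):
    return x * x - 1

# With delta = sqrt(eps) the box fits; with delta too large it does not.
print(box_fits(f, 0, -1, 0.25, 0.5))   # True  (delta = sqrt(0.25))
print(box_fits(f, 0, -1, 0.25, 2.0))   # False (delta too large)
```

Shrinking `delta` in the second call until the box fits mirrors exactly what happens when you drag the \(\delta\) slider in the figure.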

Figure 2 shows a limit setup for a different example. In this case we want to show that the limit of \(f(x)\) as \(x\) approaches 1 is equal to 0. We will save the required computations for the examples page [QuadraticLimit]. Note, however, that this time the function is not symmetric around \(x_0=1\). The function values grow more quickly to the right of \(x_0=1\) than they do to the left, so we have to choose a \(\delta\) small enough to control the function on the right half of the interval \((1-\delta,1+\delta)\). We can do that, though, so the limit exists and equals zero.
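To see the asymmetry concretely: the right edge of the interval is the binding constraint, since \(f\) pulls away from 0 faster for \(x>1\) than for \(x<1\). One sufficient choice, offered here as an illustration (the examples page carries out the full computation), is \(\delta=\sqrt{1+\epsilon}-1\), which makes \((1+\delta)^2-1=\epsilon\) exactly while the left edge stays strictly closer to the limit:

```python
# Numerical illustration of the asymmetric case x0 = 1, L = 0 for f(x) = x^2 - 1.
# The choice delta = sqrt(1 + eps) - 1 makes the right edge of the interval
# hit the epsilon line exactly: (1 + delta)^2 - 1 = eps.
import math

def f(x):
    return x * x - 1

for eps in [1.0, 0.1, 0.01]:
    delta = math.sqrt(1 + eps) - 1
    xs = [1 - delta + 2 * delta * (k + 1) / 1002 for k in range(1000)]
    assert all(abs(f(x)) < eps for x in xs)
    # the right endpoint itself sits on the epsilon line (up to rounding),
    # which is why no larger delta could work
    assert math.isclose(f(1 + delta), eps)

print("delta = sqrt(1 + eps) - 1 works for every epsilon tested")
```

On the left edge, \(\vert f(1-\delta)\vert = 2\delta-\delta^2 < 2\delta+\delta^2 = \epsilon\), so the same \(\delta\) controls both halves of the interval.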

Figure 2: Epsilon-delta box for the function \(f(x)=x^2-1\) with \(x_0=1\).

At this point, the obvious question is whether there is anything to this at all: does every function have a limit at every point? What does the situation look like when the definition does not apply? One example of a function that does not have a limit at zero is given by \[ h(x) = \begin{cases} 1 & x>0\\ 0 & x=0\\ -1 & x<0 \end{cases} \] We can see intuitively that as \(x\) approaches zero from the left, \(h(x)\) is always -1, while as \(x\) approaches from the right, \(h(x)\) is always 1. Since this is a limit process, we don't care at all what \(h(0)\) is. Since the limit from the left is -1, which is different from the limit from the right, we guess that the limit as \(x\) approaches 0 does not exist.

Using the definition, we look at this a bit differently. Choose any value at all for \(L\), the number we think is a limit. Since the definition must work for every choice of \(\epsilon\), let's choose \(\epsilon=1\). Then for any positive \(x\) we have \[\vert h(x)-L\vert = \vert 1-L\vert,\] and for any negative \(x\) we have \[\vert h(x)-L\vert = \vert -1-L\vert.\] At least one of those differences is greater than or equal to 1, since their sum is at least \(\vert (1-L)-(-1-L)\vert = 2\). That means that no matter how we choose \(L\), there are \(x\) values arbitrarily close to 0 for which \(\vert h(x)-L\vert \ge 1 = \epsilon\). The situation is illustrated in Figure 3. This time, no matter how small a \(\delta\) we choose, the red box containing all the function values never shrinks to fit between the epsilon lines. We conclude that the definition is not satisfied, and the limit does not exist.
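The same argument can be checked numerically for a handful of candidate limits. This is only an illustration of the argument, since it tries finitely many values of \(L\), but it shows the mechanism: one point from each side of 0 is enough to defeat any candidate.

```python
# Illustration that no L works for the step function h at x0 = 0 with eps = 1:
# whatever L we try, some x arbitrarily close to 0 has |h(x) - L| >= 1.
def h(x):
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0

eps = 1
for L in [-1.0, -0.5, 0.0, 0.5, 1.0, 3.0]:
    # one point from each side of 0, as close to 0 as we like
    worst = max(abs(h(1e-9) - L), abs(h(-1e-9) - L))
    assert worst >= eps  # the red box never fits between the epsilon lines

print("no candidate L beats eps = 1")
```

Replacing `1e-9` by any smaller positive number changes nothing, which is the numerical shadow of the fact that shrinking \(\delta\) never helps here.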