The derivation of Newton’s algorithm, and the proof that it converges (given a “reasonable” choice for the initial point), require techniques beyond the scope of this text. See RALSTON and RABINOWITZ for more detail and for discussion of other numerical methods. Our description of Newton’s algorithm is the special two-variable case of a more general algorithm that can be applied to functions of \(n \ge 2\) variables.

In the case of functions which have a global maximum or minimum, Newton’s algorithm can be used to find those points. In general, global maxima and minima tend to be more interesting than local versions, at least in practical applications. A maximization problem can always be turned into a minimization problem (why?), so a large number of methods have been developed to find the global minimum of functions of any number of variables. This field of study is called nonlinear programming. Many of these methods are based on the steepest descent technique, which is based on an idea that we discussed in Section 2.4. Recall that the negative gradient \(-\nabla f\) gives the direction of the fastest rate of decrease of a function \(f\). The crux of the steepest descent idea, then, is that starting from some initial point, you move a certain amount in the direction of \(-\nabla f\) at that point. Wherever that takes you becomes your new point, and you then just keep repeating that procedure until eventually (hopefully) you reach the point where \(f\) has its smallest value. There is a “pure” steepest descent method, and a multitude of variations on it that improve the rate of convergence, ease of calculation, etc. In fact, Newton’s algorithm can be interpreted as a modified steepest descent method. For more discussion of this, and of nonlinear programming in general, see BAZARAA, SHERALI and SHETTY.

This example is very simplistic and a bit contrived. (After all, most people create a design then buy fencing to meet their needs, and not buy fencing and plan later.) But it models well the necessary process: create equations that describe a situation, reduce an equation to a single variable, then find the needed extreme value.

In this section we’ll take a look at solving equations with exponential functions or logarithms in them. We’ll start with equations that involve exponential functions. The main property that we’ll need for these equations is that if \(b^x = b^y\) (with \(b > 0\) and \(b \ne 1\)), then \(x = y\).

Example 1 Solve \(7 + 15e^{1-3z} = 10\).
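As a quick numerical check of the algebra in Example 1 (this solution sketch is my own, not taken from the text): isolating the exponential gives \(e^{1-3z} = \tfrac{10-7}{15} = \tfrac{1}{5}\), and taking logarithms gives \(1 - 3z = \ln\tfrac{1}{5} = -\ln 5\), so \(z = \tfrac{1 + \ln 5}{3}\). A few lines of Python confirm that this value satisfies the original equation:

```python
import math

# Solve 7 + 15*e^(1 - 3z) = 10 for z.
# Isolate the exponential: e^(1 - 3z) = (10 - 7)/15 = 1/5,
# then take natural logs: 1 - 3z = ln(1/5) = -ln(5).
z = (1 + math.log(5)) / 3

print(z)                             # ≈ 0.8698
print(7 + 15 * math.exp(1 - 3 * z))  # ≈ 10.0, matching the right-hand side
```

Plugging the closed-form answer back into the left-hand side recovers 10, which is the standard sanity check for equations of this type.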
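The steepest descent procedure described above (move from the current point in the direction of \(-\nabla f\), then repeat) can be sketched in a few lines. The particular function, fixed step size, and iteration count below are my own illustrative choices, not from the text; the "pure" method uses a fixed step, while the variations mentioned choose it adaptively:

```python
# "Pure" steepest descent on f(x, y) = (x - 1)^2 + 2*(y + 3)^2,
# whose global minimum is at (1, -3).
def grad_f(x, y):
    # Gradient of f: (df/dx, df/dy)
    return (2 * (x - 1), 4 * (y + 3))

x, y = 0.0, 0.0   # initial point (a "reasonable" starting guess)
step = 0.1        # fixed step size; variations on the method tune this

for _ in range(200):
    gx, gy = grad_f(x, y)
    # Move a certain amount in the direction of -grad f;
    # wherever that takes you becomes the new point.
    x, y = x - step * gx, y - step * gy

print(x, y)  # converges near the minimizer (1, -3)
```

For this simple quadratic the iteration contracts toward the minimizer at a fixed rate, so 200 steps land essentially on \((1, -3)\); for less well-behaved functions, convergence is only hoped for, as the text notes.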