The main drawback of this approach is that in order to obtain improved resolution, the grid must be made finer over the entire domain, including parts of the domain that are far from the zero set, so a lot of work is done even where it is unnecessary. The main idea of AGrid is to begin with a fairly coarse initial grid, and then refine it only where necessary. This allows for higher resolution in the neighborhood of singularities, for example, while still allowing for lower resolution in places where the level curve can be represented adequately by longer linear segments.
An algorithm of this sort requires a subdivision criterion that determines when a portion of the grid should be divided, and when subdivision is no longer necessary. AGrid employs several different tests to decide whether a subdivision is needed, and these fall into three groups: tests based on the behavior of the function along the edges of the grid; tests based on the line segments used to approximate the zero set within a region of the grid; and tests based on the subdivisions occurring within neighboring regions of the grid. If any of the tests along any edge of a region calls for subdivision, the region is subdivided. Likewise, if the approximation to the zero set requires more subdivision, or if the neighboring regions suggest that subdivision is needed, then the region is subdivided; otherwise it is divided no further, and the approximations to the zero set that it contains are added to the final representation of the zero set. The tests themselves are controlled by the parameters described below.
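The overall refine-where-needed loop can be illustrated with a small sketch. This is not the actual AGrid code: the names are hypothetical, and the sign-change check is a crude stand-in for AGrid's full battery of edge and segment tests.

```python
# A minimal sketch of adaptive refinement (illustrative only -- not the
# actual AGrid code).  A triangle is a tuple of three (x, y) vertices; it
# is refined only while the zero set of f appears to pass through it.
def subdivide(tri):
    """Split a triangle into four by joining the edge midpoints."""
    a, b, c = tri
    mid = lambda p, q: ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def crosses_zero(f, tri):
    """Crude stand-in for the edge tests: f changes sign on the triangle."""
    vals = [f(x, y) for (x, y) in tri]
    return min(vals) < 0 < max(vals)

def refine(f, tri, depth, min_depth, max_depth, leaves):
    """Recursively subdivide, collecting the final (leaf) triangles."""
    split = depth < min_depth or (depth < max_depth and crosses_zero(f, tri))
    if split:
        for child in subdivide(tri):
            refine(f, child, depth + 1, min_depth, max_depth, leaves)
    else:
        leaves.append(tri)

f = lambda x, y: x * x + y * y - 1.3   # zero set: circle of radius ~1.14
leaves = []
refine(f, ((0, 0), (4, 0), (0, 4)), 0, 2, 6, leaves)
# Triangles straddling the circle end up much smaller than those far from it.
```

The leaf triangles tile the original region exactly, but only those cut by the circle are refined past the minimum depth.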
Min_Depth
The minimum number of times the grid will be subdivided before any of the other tests are applied (this determines the resolution of the initial triangles). A setting of 5 typically gives a good starting configuration.
Max_Depth
The maximum subdivision depth: once a triangle has been subdivided this many times, it will not be subdivided further even if the other tests say it should. Setting Max_Depth equal to Min_Depth makes AGrid work essentially like the traditional (non-adaptive) algorithm. A value of around 12 to 15 is usually sufficient.
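The interplay between these two depth limits and the accuracy tests can be sketched as follows (should_subdivide is a hypothetical helper, not part of AGrid):

```python
def should_subdivide(depth, tests_want_split, min_depth=5, max_depth=12):
    # Hypothetical sketch: Min_Depth forces refinement, Max_Depth caps it,
    # and only in between do the accuracy tests get a say.
    if depth < min_depth:
        return True
    if depth >= max_depth:
        return False
    return tests_want_split
```

Note that with min_depth equal to max_depth the accuracy tests never apply, which is the non-adaptive behavior described above.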
Abs_Interp
When the function changes sign along an edge, the location of a zero is estimated by linear interpolation, and the function is evaluated at the proposed zero. If this value is greater than Abs_Interp in absolute value, then the proposed zero is not considered to be good enough, and the triangle will be subdivided. Otherwise, the value is compared to the average of the absolute values of the function at the endpoints of the edge; if the proposed zero's value is less (in absolute value) than Rel_Interp times that average, then the zero is accepted, otherwise the triangle will be subdivided. A zero must pass both tests in order to be considered accurate. The reason for Rel_Interp is that on an edge where the value of the function is uniformly small, more accuracy is needed when determining the zero, so we test for zero relative to the values of the function nearby; the zero is always required to be within Abs_Interp as well.
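The two tests can be sketched as follows. The helper is hypothetical (not AGrid's code), and for simplicity the edge is parameterized so that f is a function of one variable with f(a) and f(b) of opposite signs.

```python
def edge_zero_ok(f, a, b, abs_interp=0.01, rel_interp=0.05):
    """Sketch of the two interpolation tests on an edge (hypothetical
    helper; assumes f(a) and f(b) have opposite signs)."""
    fa, fb = f(a), f(b)
    t = fa / (fa - fb)               # linear interpolation parameter
    z = a + t * (b - a)              # proposed zero
    fz = abs(f(z))
    if fz > abs_interp:              # absolute test: |f(z)| <= Abs_Interp
        return False
    avg = (abs(fa) + abs(fb)) / 2    # average |f| at the endpoints
    return fz <= rel_interp * avg    # relative test against nearby values
```

For f(x) = x - 0.5 on [0, 1] the interpolated zero is exact and passes both tests; for f(x) = x^3 - 2 on [1, 2] the interpolated zero misses by about 0.5, failing the absolute test, so the triangle would be subdivided.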
If the zeros along the edges of a triangle all pass these tests, then an approximation to the zero set within the triangle is given by the line segment joining the zeros on the boundary edges. Before this approximation is accepted, it too is tested for accuracy: the function is evaluated at the midpoint of the segment, and the result is tested against Abs_Interp and Rel_Interp (here the average of the absolute values of the function at the corners of the triangle is used for comparison). If the value passes both tests, the approximated zero set is accepted; otherwise the triangle is subdivided.
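The midpoint test on the approximating segment might look like this (a hypothetical sketch, not the actual AGrid code):

```python
def segment_ok(f, z1, z2, corners, abs_interp=0.01, rel_interp=0.05):
    """Sketch of the midpoint test on the approximating segment.
    z1, z2 are the edge zeros and corners are the triangle's three
    vertices, all as (x, y) pairs (hypothetical helper)."""
    mx, my = (z1[0] + z2[0]) / 2, (z1[1] + z2[1]) / 2
    fm = abs(f(mx, my))                  # |f| at the segment midpoint
    avg = sum(abs(f(x, y)) for (x, y) in corners) / len(corners)
    return fm <= abs_interp and fm <= rel_interp * avg
```

A linear function passes exactly (its zero set really is a segment), while for a curved zero set such as a circle the midpoint of the chord lies off the curve and the triangle is subdivided.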
Reasonable values for Abs_Interp and Rel_Interp depend on the function you are actually using, but 0.01 for Abs_Interp and 0.05 for Rel_Interp are good starting places. Increase these values if subdivisions are occurring where they are not needed, or decrease them if not enough subdivision occurs near the zero set.
Neighbors
Setting Neighbors to 1 causes subdivisions to propagate into neighboring regions. This can make the results more accurate, and allows the other parameters to be less strict, but at the cost of causing more work to be done when it might not really be necessary. The actual test performed is: if a region is being subdivided and its neighbor is more than twice the size of the resulting triangles, then the neighbor will be subdivided as well. Such subdivisions may propagate outward to neighbors that are farther away. As a result, when Neighbors is set to 1, the amount of processing time may be affected.
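The propagation can be illustrated in one dimension, where a "region" is just an interval in a row of cells. This is an illustrative sketch, not AGrid's actual data structure: splitting one cell cascades until no cell is more than twice the size of its neighbor.

```python
# Hypothetical 1-D sketch of the neighbor rule: after splitting cell i,
# any cell more than twice the size of a neighbor is split too, and the
# splits can propagate outward.  Cells are represented by their sizes.
def balance(sizes, i):
    """Split cell i, then enforce the neighbor rule along the row."""
    child = sizes[i] / 2
    sizes[i:i + 1] = [child, child]        # replace cell i with two children
    changed = True
    while changed:
        changed = False
        for j in range(len(sizes)):
            left = sizes[j - 1] if j > 0 else None
            right = sizes[j + 1] if j + 1 < len(sizes) else None
            for nb in (left, right):
                if nb is not None and sizes[j] > 2 * nb:
                    half = sizes[j] / 2
                    sizes[j:j + 1] = [half, half]  # split the oversized cell
                    changed = True
                    break
            if changed:
                break
    return sizes
```

Splitting the smallest cell of `[1, 4, 16]` forces the size-4 and size-16 cells to split repeatedly, leaving a row that grades smoothly from fine to coarse.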
Max_h2w
When the function has the same sign at both endpoints of an edge, a different pair of tests is used. Roughly speaking, if the ratio of the function's magnitude over the edge to the width of the edge (its height-to-width ratio) is more than Max_h2w, then the function is considered to be too far from zero over that edge, and we assume there is no zero along that edge, so no subdivision is required. Otherwise, we compare the value of the function at the midpoint of the edge to the value of the linear function connecting the values at the endpoints of the edge; if these differ by more than Model_Close times the width of the edge, then the triangle will be subdivided. The idea here is that we are using a linear approximation to model the actual function; if the model is close enough to the function, then the fact that the linear model doesn't have a zero indicates that the actual function doesn't have a zero on the edge either. (The actual formula used is a bit more complicated than the one given above, and is explained in detail in the source code.)
Reasonable values for Max_h2w and Model_Close depend on the actual function you are using, but values of 2 for Max_h2w and 0.1 for Model_Close are good starting points. Decreasing either of these parameters will cause more subdivisions to occur.
Note that Abs_Interp and Rel_Interp only come into play on edges where the function values at the endpoints have opposite signs, while Model_Close and Max_h2w only come into play when they have the same sign.
Comments and bug reports may be sent to software@geom.umn.edu.
This code is based on an algorithm developed by Davide Cervone at the Geometry Center, though this implementation does not include all the features of the complete algorithm. The full algorithm uses tangent information to improve the results, and to locate and identify singularities and non-transverse zeros.