Gill, as usual, I found your comments quite interesting!
Gill Barequet wrote:
> Mike Brindley wrote:
> > (... slicing - snipped ...)
> > The time consuming part of the topological slicing just described is
> > reconstructing the topology of the surface, normally an O(n^2) algorithm.
> > Using various well known techniques to speed up searches (trees, hashing,
> > etc.) this algorithm becomes O(n log n). The topology reconstruction
> > can be done ahead of time (before building starts), enabling
> > 'slice-on-the-fly'.
> Here are my two cents. Many years ago I compared many methods for
> reconstructing the topological info (adjacencies of polygons) of a soup
> of triangles (aka stl file). In principle (asymptotically, in the expected
> case) the hashing approach should be the best, because under the assumption
> of good hashing you expect to waste constant time per operation (insert,
> find [delete is not needed here]), summing up to an O(n) expected-time
> algorithm. However, the involved constant of proportionality is too high:
> the hashing approach will eventually win for high-enough values of n, which
> are larger than any file processed in practice. My experiments (again, about
> 10 years ago), led to the conclusion that the simple sort-and-scan approach
> is the best for up to even millions of triangles. Let me explain in short
> this approach. Assume we have n triangles. Then we have 3n edges, and
> Theta(n) vertices (according to Euler's formula). First you need to identify
Just a clarification. The STL file directly specifies n triangular facets,
so we have 3n edges and 3n vertices. Euler's formula (V - E + F = 2, where
V is the number of vertices, E the number of edges, and F the number of
faces -> 'n' in this discussion) only applies after the topology has been
reconstructed (and the resulting object is error-free -> watertight and
2-manifold). Analysis of a valid, error-free STL object shows that, after
topology reconstruction, the object will have exactly 1.5 times as many
edges as faces (E/F = 3/2). And for objects with a large number of faces
(and a couple of other assumptions), each vertex will be shared by about
6 faces (and edges) on average. This leads to the number of unique
vertices being approximately half the number of faces (V ~= F/2). I did
this analysis some years ago when I was writing software to deal with
STL files (I wanted an estimate of how large an STL file the software
could handle with the memory on the machine).
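As a quick sanity check (my own sketch, not part of the original analysis),
these counts follow directly from Euler's formula for a watertight,
2-manifold, genus-0 triangle mesh: each edge is shared by exactly two
triangles, so E = 3F/2, and then V = 2 + E - F = 2 + F/2:

```python
# Sketch: predicted element counts for a watertight, 2-manifold,
# genus-0 triangle mesh with F faces.  Each edge is shared by exactly
# two triangles, so E = 3F/2; Euler's formula V - E + F = 2 then gives
# V = 2 + F/2, i.e. V ~= F/2 for large F.

def counts_for_closed_triangle_mesh(faces):
    """Return (edges, vertices) predicted for `faces` triangles."""
    edges = 3 * faces // 2
    vertices = 2 + faces // 2   # from V - E + F = 2
    return edges, vertices

# Tetrahedron: 4 faces -> 6 edges, 4 vertices.
print(counts_for_closed_triangle_mesh(4))    # (6, 4)
# Icosahedron: 20 faces -> 30 edges, 12 vertices.
print(counts_for_closed_triangle_mesh(20))   # (30, 12)
```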
> repeating vertices. Sort the vertices according to lexicographic order (say,
> according to keys X, Y, Z, in this order). This is an O(n log n)-time step.
> Now scan the sorted list in O(n) time and identify all duplicates. The same
> approach works for finding adjacencies. Pile up all the 3n edges, keeping them
> as pairs of two point indices (smaller index first; also keep a flag indicating
> edge direction). Again, sort all edges in lexicographic order in O(n log n)
> time, scan the sorted list in O(n) time and find all adjacencies. Nothing
> of the above is new. I just claim that according to my experiments the
> described approach is the fastest. Trees and friends give the same
> functionality (and comparable running times) but require more overhead.
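For anyone following along, the sort-and-scan approach Gill describes can be
sketched roughly as below (my own Python sketch, not Gill's code; the names
are mine, and I've omitted the edge-direction flag for brevity):

```python
# Sketch of the sort-and-scan topology reconstruction:
# 1. deduplicate vertices by lexicographic sort + linear scan,
# 2. find adjacencies by sorting the 3n edges as index pairs.

def reconstruct_topology(triangles):
    """triangles: list of triangles, each a tuple of 3 (x, y, z) points
    (an STL-style 'soup').  Returns (unique_vertices, indexed_faces,
    edge_to_faces), where edge_to_faces maps each vertex-index pair
    (smaller index first) to the list of faces sharing that edge."""
    # Step 1: sort all 3n vertices lexicographically -- O(n log n) --
    # then scan the sorted list once to merge duplicates -- O(n).
    all_verts = [v for tri in triangles for v in tri]
    order = sorted(range(len(all_verts)), key=lambda i: all_verts[i])
    index_of = {}              # flat position -> unique vertex index
    unique, prev = [], None
    for i in order:
        if all_verts[i] != prev:
            unique.append(all_verts[i])
            prev = all_verts[i]
        index_of[i] = len(unique) - 1
    faces = [tuple(index_of[3 * f + k] for k in range(3))
             for f in range(len(triangles))]
    # Step 2: pile up the 3n edges as index pairs (smaller index
    # first), sort them, and scan for matching pairs -- adjacencies.
    edges = []
    for f, (a, b, c) in enumerate(faces):
        for u, v in ((a, b), (b, c), (c, a)):
            edges.append((min(u, v), max(u, v), f))
    edges.sort()
    edge_to_faces = {}
    for u, v, f in edges:
        edge_to_faces.setdefault((u, v), []).append(f)
    return unique, faces, edge_to_faces
```

On a valid watertight mesh every key in edge_to_faces ends up with exactly
two faces; edges with one face reveal holes, and edges with more than two
reveal non-manifold geometry, so the same scan doubles as an error check.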
I'll keep this algorithm in mind for possible future use. Was your study
published? I tried your website, but the CS Department web server was
not responding; I'll try again later.
--> Mike Brindley firstname.lastname@example.org Corvallis, Oregon, USA
"I take unanimity on any difficult topic as a danger sign."
- P. J. Plauger
For more information about the rp-ml, see http://ltk.hut.fi/rp-ml/
This archive was generated by hypermail 2.1.2 : Tue Jun 05 2001 - 23:02:42 EEST