
Mike Brindley wrote:

> (... slicing - snipped ...)
>
> The time-consuming part of the topological slicing just described is
> reconstructing the topology of the surface, normally an O(n^2) algorithm.
> Using various well-known techniques to speed up searches (trees, hashing,
> etc.), this algorithm becomes O(n log n). The topology reconstruction
> can be done ahead of time (before building starts), enabling 'slice-on-
> the-fly'.

Here are my two cents. Many years ago I compared many methods for
reconstructing the topological information (adjacencies of polygons) of a
soup of triangles (a.k.a. an STL file). In principle (asymptotically, in the
expected case) the hashing approach should be the best: under the assumption
of good hashing, you expect to spend constant time per operation (insert and
find; delete is not needed here), summing up to an O(n) expected-time
algorithm. However, the constant of proportionality involved is too high.
The hashing approach will eventually win for high enough values of n, but
those values are larger than any file processed in practice.
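To make the comparison concrete, here is a minimal sketch of the hashing
approach for the vertex-identification step (Python's built-in dict as the
hash table; the function name and data layout are my own illustration, not
code from any particular slicer):

```python
def weld_vertices_hashed(triangles):
    """Expected O(n): one hash find/insert per corner point.

    triangles: list of ((x,y,z), (x,y,z), (x,y,z)) coordinate tuples.
    Returns (vertices, index), where index holds, per triangle, the
    three indices into the deduplicated vertex list.
    """
    seen = {}       # point -> its index in the deduplicated list
    vertices = []
    index = []
    for tri in triangles:
        idx = []
        for v in tri:
            if v not in seen:            # expected O(1) find
                seen[v] = len(vertices)  # expected O(1) insert
                vertices.append(v)
            idx.append(seen[v])
        index.append(idx)
    return vertices, index
```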

My experiments (again, about 10 years ago) led to the conclusion that the
simple sort-and-scan approach is the best for up to even millions of
triangles. Let me explain this approach briefly. Assume we have n triangles.
Then we have 3n edges and Theta(n) vertices (by Euler's formula). First you
need to identify repeated vertices. Sort the vertices in lexicographic order
(say, by the keys X, Y, Z, in this order); this is an O(n log n)-time step.
Then scan the sorted list in O(n) time and identify all duplicates, which
are now adjacent in the list.
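A minimal sketch of this sort-and-scan vertex welding (again with
hypothetical names, and assuming bitwise-identical coordinates for shared
vertices, as is typical of STL files):

```python
def weld_vertices(triangles):
    """O(n log n) sort + O(n) scan, as described above.

    Same input/output convention as weld_vertices_hashed().
    """
    # Collect all 3n corner points, remembering where each came from.
    corners = [(v, t, i) for t, tri in enumerate(triangles)
                         for i, v in enumerate(tri)]
    # O(n log n): lexicographic sort by (X, Y, Z).
    corners.sort(key=lambda c: c[0])

    vertices = []                          # deduplicated vertex list
    index = [[None] * 3 for _ in triangles]
    prev = None
    # O(n): equal points are now adjacent; assign one index per run.
    for v, t, i in corners:
        if v != prev:
            vertices.append(v)
            prev = v
        index[t][i] = len(vertices) - 1
    return vertices, index
```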

The same approach works for finding adjacencies. Pile up all 3n edges,
keeping each one as a pair of two point indices (smaller index first; keep
also a flag indicating the edge's original direction). Again, sort all the
edges in lexicographic order in O(n log n) time, then scan the sorted list
in O(n) time and read off all adjacencies. Nothing of the above is new. I
just claim that, according to my experiments, the described approach is the
fastest. Trees and friends give the same functionality (and comparable
running times) but require more overhead.
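And a matching sketch of the edge pass (same caveats as above; it assumes
the welded per-triangle indices produced by the previous sketch):

```python
def find_adjacencies(index):
    """index: per-triangle vertex-index triples from weld_vertices().
    Returns a list of (triangle_a, triangle_b) pairs sharing an edge.
    """
    edges = []
    for t, (a, b, c) in enumerate(index):
        for u, v in ((a, b), (b, c), (c, a)):
            # Canonical key: smaller index first; the flag records
            # whether the edge was reversed, i.e. its direction.
            edges.append(((min(u, v), max(u, v)), u > v, t))
    # O(n log n): identical edge keys become adjacent after sorting.
    edges.sort(key=lambda e: e[0])
    adjacencies = []
    # O(n): on a closed manifold each edge appears exactly twice
    # (once per direction), so paired entries sit next to each other.
    for (key1, _, t1), (key2, _, t2) in zip(edges, edges[1:]):
        if key1 == key2:
            adjacencies.append((t1, t2))
    return adjacencies
```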

Gill.

---------------------------------------------------------------------------
Gill Barequet, Ph.D.           Phone:  +972-4-829-3219
Faculty of Computer Science    Fax:    +972-4-822-1128
The Technion---IIT             E-mail: barequet@cs.technion.ac.il
Haifa 32000                    WWW:    http://www.cs.technion.ac.il/~barequet
Israel

"Life is NP-Hard." (-)


