Re: Slice Formats....

From: Gary Scholl (gscholl@handel.jlc.net)
Date: Sat Mar 02 1996 - 01:34:16 EET


At 04:31 PM 2/29/96 -0500, you wrote:

>>Peter H. Gien wrote:
>> 4) Contour data does not use surface normal information. A definite advantage
>> given the number of files out there with messed up normals.

>Justin wrote:
>If I knew more about Rapid Prototyping (I'm still learning), I would say
>that this is a disadvantage, not an advantage. From what I understand,
>although they are easy to mess up (and therefore make the model bad), they
>are a vital piece of information for post-processing operations that is
>very difficult to get from a contour format.

I tutorialize:
In a solid model, a planar facet defines a boundary. The normal in the STL
format is only utilized to differentiate between the side with mass and the
side without mass. (A more compact approach would have been to specify the
order in which the three vertices are listed, so as to remove the ambiguity
of the normal.)
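
(A minimal sketch of that idea, in Python and with made-up vertex values: the
outward normal is recoverable from the vertex listing order alone via the
right-hand rule, which is what makes the explicit normal redundant.)

# Sketch: recover a facet's implied normal from counter-clockwise vertex
# order (right-hand rule). The vertices below are illustrative only.
def facet_normal(v0, v1, v2):
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    nx = ay * bz - az * by
    ny = az * bx - ax * bz
    nz = ax * by - ay * bx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# A facet in the z = 0 plane, listed counter-clockwise when seen from +z,
# so the mass is implied to lie on the -z side:
print(facet_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # (0.0, 0.0, 1.0)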

What is meant by "messed up" is that they indicate that the mass is on the
wrong side. (Want to buy some shore front property?)

A format that specifies contours must also provide inside/outside
information. (One approach might be the direction of the list.)
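
(One illustrative sketch of the "direction of the list" approach, under a
convention I am assuming for the example rather than taking from any
particular slice format: counter-clockwise contours enclose mass, clockwise
contours enclose holes, and the direction is recovered from the signed
shoelace area.)

def signed_area(contour):
    # Positive for counter-clockwise point order, negative for clockwise.
    area = 0.0
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        area += x0 * y1 - x1 * y0
    return area / 2.0

outer = [(0, 0), (1, 0), (1, 1), (0, 1)]   # CCW -> +1.0, mass inside
hole = list(reversed(outer))               # CW  -> -1.0, a hole
print(signed_area(outer), signed_area(hole))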

>Is fitting a surface (NURBS) to the contours really this difficult? Maybe
>it is, but I don't think so...

If the *contours* are conveyed as points that fall on the contour (as you
suggest), one might easily connect the dots with a spline and the curve will
approximate the designer's intent. Sometimes we misinterpret the *true*
contour conveyed by: (0,0; 1,0; 1,1; 0,1) and this is the approximation
error we accept when using this approach. On the other hand, if one thinks
of a contour as a serial collection of curves with *exact* mathematical
definitions the ambiguity disappears.
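
(To make that ambiguity concrete -- an illustrative calculation of my own,
not anything from the original posts: the four points fit a unit square
exactly, but they also lie on the circle of radius sqrt(0.5) about
(0.5, 0.5). If the circle were the designer's intent, the straight chords
hide a real deviation between the sample points.)

import math

r = math.sqrt(0.5)                   # circle through all four corner points
chord_midpoint_gap = r - 0.5         # worst-case deviation at an edge midpoint
print(round(chord_midpoint_gap, 3))  # ~0.207 -- the error accepted up front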

>BTW, you only mentioned surface models here. Is there any difference between
>processing a surface model and processing a solid model (CSG or B-Rep)?
>I would assume that a solid model makes things like topology checking, volume
>computation, etc. easier. Just a guess...

The computations for volumes, etc., are easy enough for either model. If the
designer's intent is to have an object with all planar surfaces, all models
yield the same result. If the designer intends to use some curvature,
fillets, blends, etc., then the surface model explicitly conveys the
designer's intent and the computation yields an *exact* value. Planar
representations of these models will only yield an *approximate* value, and
some *interpretation* of the designer's intention is required to fully
understand the model. This is the three-dimensional manifestation of the
polygonal line problem above.
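
(A small illustration of that exact-versus-approximate point, using a
hypothetical part of my own choosing rather than anything from the thread: a
cylinder of radius 1 and height 1 has exact volume pi, while a planar
representation replaces the circular cross-section with a regular n-gon and
only approaches that value as the facet count grows.)

import math

def faceted_cylinder_volume(n, radius=1.0, height=1.0):
    # Prism over a regular n-gon inscribed in the circular cross-section.
    polygon_area = 0.5 * n * radius * radius * math.sin(2 * math.pi / n)
    return polygon_area * height

for n in (8, 64, 512):
    print(n, faceted_cylinder_volume(n))   # approaches math.pi from below
print("exact:", math.pi)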

****
Since everything ends up as point-to-point instructions anyway, why not let
the designer do the approximation up front and transfer a "cloud of points"?

****
   1) The data size for a surface model doesn't change with a requirement
for increased precision in computation or reproduction.

   2) For an approximate model, the data size does increase as the
requirement for precision increases (see the sketch after this list).

   3) Although specific cost factors are applied to model size, computation
time, and operator time according to our individual utility functions, as the
requirement for precision increases, the comparative cost of using surface
models and contours produced from exact surfaces continually decreases.
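
(The sketch promised in point 2, with illustrative numbers only: an exact
circular contour is three numbers -- centre and radius -- regardless of the
tolerance asked for, while a chord approximation needs more and more segments
as the allowed deviation shrinks.)

import math

def chords_needed(radius, tolerance):
    # Segments required so no chord sags more than `tolerance` from the arc
    # (the sagitta of a chord spanning angle theta is radius * (1 - cos(theta/2))).
    theta = 2.0 * math.acos(1.0 - tolerance / radius)
    return math.ceil(2.0 * math.pi / theta)

for tol in (0.1, 0.01, 0.001, 0.0001):
    print(tol, chords_needed(1.0, tol))   # grows roughly as 1/sqrt(tolerance)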

g
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
Gary W. Scholl
Metal Casting Technology, Inc.
Milford, New Hampshire 03055

v: 1 (603) 673-9720 x 437
f: 1 (603) 673-7456
e: gscholl@jlc.net


