Re: STL format MISINFORMATION (long)

From: Michael Brindley <brindley@ECE.ORST.EDU>
To: Stephen Rock (Rensselaer Polytechnic Institute)
Cc: RP-ML
Date: Monday, October 3, 1994
Subject: Re: STL format MISINFORMATION (long)

>Steve Rock writes: 
> It's clear that I need to correct some misinformation circulating
> about this work.
> 
> (1) The RPI representation presented IS NOT EXCEEDINGLY COMPLEX, 

Thank you for the clarification.  Is there a publicly available
document detailing your RPI format so its ideas can be discussed
intelligently?

>     However, along with entity tags, the RPI format specifies the 
>     data type associated with the tag.  This allows a post-processor
>     completely unaware of a particular tag id to skip select information
>     but continue to process the remainder of the file.  This is useful
>     (nearly essential) for forward compatibility as new capabilities 
>     and processes evolve.  Others have recognized the benefit of
>     combining data type information within data files (IBM's 
>     DataExplorer).

This sounds like a good idea.  The minimum needed to support skipping
of unknown tagged data is a size field (as per the IFF-85 spec, which
I believe you referenced).  The full details of how this embedded
description of the data works would be interesting to read.  However,
if the tag is unrecognized, the reader application cannot make sense
of the data anyway; and if the tag is recognized, the reader
application already knows how the data is laid out and what it means.
Again, it is difficult to discuss this well without a full description.
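
To make the size-field idea concrete, here is a rough sketch in C of a
reader skimming a tagged file.  The tag name "FACE", the four-byte
big-endian size field, and the function names are all my own invention
for illustration; they are not the RPI format.

/* Skim a stream of (tag, size, payload) chunks, IFF-85 style.
 * Unknown tags are skipped using only the size field, so the rest
 * of the file can still be processed. */
#include <stdio.h>
#include <string.h>

static long read_be32(FILE *fp)          /* 4-byte big-endian size */
{
    unsigned char b[4];
    if (fread(b, 1, 4, fp) != 4) return -1;
    return ((long)b[0] << 24) | ((long)b[1] << 16) |
           ((long)b[2] << 8)  |  (long)b[3];
}

int skim_file(FILE *fp)
{
    char tag[5];
    long size;

    while (fread(tag, 1, 4, fp) == 4) {
        tag[4] = '\0';
        size = read_be32(fp);
        if (size < 0)
            return -1;                   /* truncated chunk header */

        if (strcmp(tag, "FACE") == 0) {
            /* recognized tag: a real reader would parse the payload
             * here, since it already knows the layout and meaning */
        }
        /* unrecognized (or, in this sketch, any) tag: the size field
         * alone is enough to step over the payload and carry on */
        if (fseek(fp, size, SEEK_CUR) != 0)
            return -1;
    }
    return 0;
}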

My ideas have only been stated publicly in this forum (the rp-ml
mailing list).

> Well, a _good_ STL file is not a necessity.  The paper presented
> at SFF91 only talked about slicing "valid" (a.k.a. _good_)
> models; however, it doesn't require much imagination to envision 
> extending this approach for models missing some facets--providing 
> the topology exists for the remaining facets. (March around the 
> holes and interpolate.)

We think alike again!  I haven't had an opportunity to implement it
yet.

> 
> Mike Brindley replies:
> >I don't discount Rock's slicing ideas - they are nearly identical
> >to mine!  I developed the same ideas (independently) as the only
> >reasonable approach to slicing.  I was amazed and horrified by
> >the descriptions I saw of what 3D Systems was doing to slice files.
> 
> Good to hear :-)
> Are the descriptions of 3D Systems approach public?  If so, can someone
> please point me to a reference?
> Also, Mike, did you publish your slicing ideas somewhere?  I'd be
> interested in comparing thoughts.

Not published; they are part of the software I am evolving for
work.  I don't have your paper handy at the moment, but I think
the major difference was that I chose to use edges as the basic
intersection unit and you were using faces.  Equivalent, but
different in implementation.
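
To illustrate what I mean by using edges as the basic intersection
unit, here is a rough sketch in C.  The types and names are my own;
this is not code from either implementation.

/* An edge is intersected with the slice plane z = z_cut: if its
 * endpoints lie on opposite sides, interpolate the crossing point.
 * A face-based slicer would instead apply this to the three edges of
 * each face; the arithmetic is the same, only the bookkeeping differs. */
typedef struct { double x, y, z; } Point;

int edge_plane_cross(Point a, Point b, double z_cut, Point *hit)
{
    double t;

    if ((a.z - z_cut) * (b.z - z_cut) > 0.0)
        return 0;                    /* both endpoints on one side */
    if (a.z == b.z)
        return 0;                    /* edge lies in the plane; needs special handling */

    t = (z_cut - a.z) / (b.z - a.z); /* parametric position along the edge */
    hit->x = a.x + t * (b.x - a.x);
    hit->y = a.y + t * (b.y - a.y);
    hit->z = z_cut;
    return 1;                        /* *hit is the crossing point */
}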

The 'descriptions' of 3D Systems' approach were very vague.  I synthesized
an idea of a brute-force method by weaving together hints from numerous
publicly available sources.  Basically, what I think they do is this (a
rough sketch in C follows the list):

1.  Read the faces from the STL file into a data structure consisting
    of a list of faces, each face holding the coordinates of each of
    its vertices (i.e., the STL file format is essentially a 'dump'
    of this structure).

2.  For each slice plane, check every face to see whether the plane
    intersects it.  (They may do some sorting here so they can drop
    out of the checking early, before testing every face.)  The
    intersection with a face normally gives a line segment.  Save
    this line segment.

3.  Sort the line segments saved in step 2 'head-to-tail', hopefully
    forming a set of complete outlines.
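
Here is the rough C sketch of step 2 (the face/plane intersection
loop).  The data layout and names are my guesses at the brute-force
method, not 3D Systems' actual code.

typedef struct { double x, y, z; } Point;
typedef struct { Point v[3]; } Facet;    /* one face read from the STL file */
typedef struct { Point a, b; } Segment;  /* one piece of a slice outline */

/* Intersect one face with the plane z = z_cut; returns 1 and fills in
 * *seg when the plane cuts the face in a proper line segment. */
static int facet_cut(const Facet *f, double z_cut, Segment *seg)
{
    Point pts[2];
    int i, n = 0;

    for (i = 0; i < 3 && n < 2; i++) {
        Point a = f->v[i], b = f->v[(i + 1) % 3];
        double da = a.z - z_cut, db = b.z - z_cut;

        if (da * db < 0.0) {             /* this edge straddles the plane */
            double t = da / (da - db);   /* parametric crossing point */
            pts[n].x = a.x + t * (b.x - a.x);
            pts[n].y = a.y + t * (b.y - a.y);
            pts[n].z = z_cut;
            n++;
        }
    }
    if (n < 2)
        return 0;                        /* grazing and in-plane cases ignored here */
    seg->a = pts[0];
    seg->b = pts[1];
    return 1;
}

/* Step 2: test every face against every slice plane and save the
 * resulting segments.  With no sorting this is nslices * nfaces
 * intersection tests. */
int slice_all(const Facet *faces, int nfaces,
              double z0, double dz, int nslices,
              Segment *out, int maxout)
{
    int s, i, nseg = 0;

    for (s = 0; s < nslices; s++) {
        double z_cut = z0 + s * dz;
        for (i = 0; i < nfaces; i++)
            if (nseg < maxout && facet_cut(&faces[i], z_cut, &out[nseg]))
                nseg++;
    }
    return nseg;   /* step 3, the 'head-to-tail' sort into outlines, not shown */
}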

The results of step 3 seem to me to be prone to error.  It is fairly
easy to see why the above would lead to the long slice times you
sometimes see mentioned.  I think Ed Garguilo (spelling?) of
DuPont mentioned that the sliced file provided with the SLA Users
Group benchmark part took about 2.5 hours to produce on a 386-class
personal computer.  These figures are from memory, as the appropriate
documents are not in front of me at the moment.

  --> Mike Brindley

