SAVE $$ and TIME with RP "Benchmarks"

From: Derek Smith-EDS014 (Derek_Smith-EDS014@email.mot.com)
Date: Wed Apr 30 1997 - 22:24:19 EEST


Elaine,

Interesting comments about "benchmarking." I couldn't agree more that each
company should evaluate systems and processes according to the particular
nature of their parts. You mentioned build times and Motorola in the same
sentence, and that is enough of a connection for me to run with it. I will
also differentiate between the typical "benchmarks" which are performed to
measure part quality, and our "benchmark" which considers build time and
more.

We have characterized our parts by recording historical data for critical
build variables (height, volume, projected vertical surface area). A
theoretical "average" part is then identified. The distribution of these
variables was not normal, so it was appropriate to identify certain
"types" of "average" parts, along with a distribution in model volume. This
reflects the types of products we make, which include both portable (hand-held)
and mobile (much bigger, installed in police cars, etc.) units. If we
differentiated these categories even further, we would see other ranges, such
as knobs and buttons versus the housings.

Given the "average" part(s) (in industrial engineering terms these would be
aggregate units of production), we can then proceed to the next step. We have
also obtained the average number of parts produced on any given SLA
build, which allows us to calculate an aggregate build. The next step is to
either 1) physically model such a part in CAD (inefficient in the long
run), or 2) mathematically model the build process. This is required
because comparisons can only be made if the process (SLA) and the parts
(average part) are in the same world, either physical or virtual.
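As a toy illustration of the bookkeeping involved (the records, field names, and numbers below are invented for the sketch, not our actual data), the "aggregate part" for a category is just the mean of each critical build variable over the historical records:

```python
from statistics import mean

# Invented historical build records:
# (category, height_in, volume_in3, projected_vert_area_in2)
records = [
    ("portable", 4.0,  3.2,  6.1),
    ("portable", 3.5,  2.8,  5.4),
    ("mobile",   9.0, 14.5, 22.0),
    ("mobile",   8.2, 12.1, 19.3),
]

def aggregate_part(category):
    """Mean of each critical build variable for one part category."""
    rows = [r for r in records if r[0] == category]
    return {
        "height": mean(r[1] for r in rows),
        "volume": mean(r[2] for r in rows),
        "area":   mean(r[3] for r in rows),
    }
```

With more categories (knobs, buttons, housings) you would simply keep one aggregate per "type" rather than one overall average.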

We took the latter approach, building on the work that Kamesh published on
build time estimation, and created our own build time estimator which uses
all pertinent machine/process variables. This tool is not very efficient for
everyday use because it is not intended for estimating in the traditional
sense, where every part is run through to give a customer a quote; rather, it
serves process improvement purposes.
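I won't reproduce the tool here, but a bare-bones sketch of the idea, with every constant and scaling factor invented for illustration (the real estimator uses the published build-time relationships and measured machine parameters), might look like:

```python
import math

def estimate_build_hours(height_in, scan_in_per_layer, layer_mils,
                         laser_mw, recoat_s=60.0,
                         dp_mils=5.0, ec=10.0, k=2000.0):
    """Toy SLA build-time model: each layer costs one recoat plus
    vector draw time set by the resin working curve Cd = Dp*ln(E/Ec),
    so required exposure (and draw time per unit length) grows
    exponentially with cure depth.  All constants are illustrative."""
    n_layers = math.ceil(height_in * 1000.0 / layer_mils)
    exposure = ec * math.exp(layer_mils / dp_mils)  # required exposure
    scan_speed = k * laser_mw / exposure            # in/s, toy scaling
    draw_s = n_layers * scan_in_per_layer / scan_speed
    return (n_layers * recoat_s + draw_s) / 3600.0
```

Halving the layer thickness doubles the recoat count but speeds up each scan, which is exactly the trade-off the estimator lets you explore.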

Using the build time estimator and the calculated "aggregate part" and
"aggregate build," you can really get to work efficiently. Process changes
can be simulated, etc. An SLA service center can now minimize its build
times by understanding the balance between recoat and draw time in the case
of SLA. We used linear programming to vary the layer thickness such that
build time was minimized. This calculation takes into account both the
logarithmic nature of resin cure and the number of recoats resulting
from a change in layer thickness, plus all relevant machine conditions such
as laser power. So, this minimum value was obtained and we have a new goal
to produce layer thicknesses of 4.2 mils. The time savings were projected
and put into terms of better service and $$ saved, and were used to justify
an intern to accomplish the task, as we talked about a few weeks ago. This
method was also used to evaluate Zephyr recoating systems, high-power lasers,
the optimum time to switch out an old laser (wait till dead or switch at 22
mW?), new resins (DuPont, etc.), and other such decisions.
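To make the recoat-versus-draw trade-off concrete: with a toy version of such a model (all constants below are invented, so don't expect it to reproduce our 4.2-mil result), a simple one-dimensional search over layer thickness finds an interior minimum, since fewer recoats push toward thick layers while the exponential cure exposure pushes toward thin ones:

```python
import math

def total_build_s(layer_mils, height_mils=4000.0, scan_in=250.0,
                  recoat_s=45.0, dp_mils=5.0, ec=10.0, k=500.0):
    """Toy trade-off: n layers, each costing one recoat plus draw
    time that grows exponentially with layer thickness (resin cure)."""
    n = math.ceil(height_mils / layer_mils)
    speed = k / (ec * math.exp(layer_mils / dp_mils))  # in/s, illustrative
    return n * (recoat_s + scan_in / speed)

# Brute-force 1-D search over 2.0 .. 20.0 mils in 0.5-mil steps
# (the real analysis used linear programming).
candidates = [t / 2 for t in range(4, 41)]
best = min(candidates, key=total_build_s)
```

The optimum shifts as recoat time, laser power, or resin parameters change, which is how the same model can score Zephyr recoaters, higher-power lasers, and new resins.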

Now, back to the benchmarking part. We have used this approach not only to
optimize the SLA process to suit our SPECIFIC needs at Motorola, but also to
evaluate other RP technologies. I mentioned breaking down the distribution
even further to include a range for knobs, buttons, etc. We did this and
identified a "type" of part that would be a good fit for a small, detailed
modeler such as the Sanders machine. You have one of those in your lab, don't
you?
Based on the build time estimation provided by Sanders for some sample parts
of this size and volume, we decided to hold off for a while. Things may have
changed with their new machine upgrades, I'm not sure. The point is that
appropriate benchmarks take into account the nature of parts one is going to
build. Cycle time and automation for model production are major competitive
advantages provided by RP over other processes. Understanding the nature of
parts being built by a service center is a critical first step toward making
a good decision for a specific technology. Applying this to project how a
technology will perform in a "production" mode is the next step. This brings
pre- and post-processing into the equation, a consideration I often find
lacking when talking to people in the industry.

Well, I hope you enjoyed my rambling comments, and I must say, you did a
fine job at the SME conference, just 50+ more weeks till the next one.

E. Derek Smith
Director, Freeform Technology Development
Motorola
Plantation, Florida

________________________________________________________
To: rp-ml@bart.lpt.fi@INTERNET
From: sfarenti@scf.usc.edu@INTERNET on Wed, Apr 30, 1997 1:14 PM
Subject: Re: Benchmark Parts

> I think it is high time that there is a unified site where 1-2 dozen
> standard real-world STL parts can be found. This test suite should include
> at least 2-3 examples each of automotive, aerospace, medical, etc.,
> from small parts with intricate detail to large solid objects. This
> would allow people to investigate the suitability of the various RP
> systems to different shapes, volumes, wall thickness, etc.

Funny how benchmarking always surfaces each year.....is it spring or
something?

  Now about benchmarking....... The SLA user's group created a part (Ed
Gargiulo) back in 1990 and at the 1997 meeting approved the Kodak part as
the official injection molding benchmark part. When the NASUG attempted to
do a real study....participants were few and far between....why? BUILD
TIMES...who has time to tinker when real MONEY was being lost.

I guess I favor an individual company doing its benchmark around its
product line. If Ford used a Motorola model I doubt they would add much to
their knowledge except some fundamental cans and can'ts. Real information
is learned when a company understands their product line and attempts to
use RP systems to enhance their market share. We learned not by
benchmarking but by getting real work accomplished. Benchmarking is a
security blanket for those who have to have some data to assure the boss
they made or are making the right decisions. But what happens if you
choose the wrong part to benchmark and the end result leads to a bad
business decision?

I don't mean to sound negative, but I want a reliable, repeatable process
that I can control and maintain. Benchmarking won't help in that area
unless I am Guy McDonald.

Look on fantasia.vr.clemson.edu;
somewhere under "benchmarks" lurk the files that exist today.

Elaine



This archive was generated by hypermail 2.1.2 : Tue Jun 05 2001 - 22:39:32 EEST