REGULAR MEETINGS

Summaries of Past Regular Meetings
2012 2011 2009 2007 2006 2005 2004 2003 2002 2001 2000 1999 1998 1997 1996 1995 1994 1993 1992 1991 1990 1989 1986 1985 1984 1983 1982 1981 1980 1974 1971 1970 1968 1966 1965 1952
 2012 Talks
 Friday, March 30, 2012
 The Future of Introductory Statistics Courses and the Common Core State Standards in K-12: Christine Franklin

Common Core State Standards for mathematics in grades K-12 have been adopted
by 45 states and the District of Columbia. Standards for teaching
statistics and probability span topics from counting the number in each category
to determining statistical significance. In the near future, most students entering
college will have been taught some statistics and probability, so
introductory statistics courses will have to change.
In addition, we must rethink the preparation of future K-12 teachers to teach this curriculum.
This presentation will provide an overview of the statistics and probability content of
these standards, consider their effect on our introductory statistics courses, and describe
the knowledge and preparation needed by the current and future K-12 teachers who will teach to these standards.
 Friday, January 13, 2012
 Applied Survey Sampling: Patricia Cirillo

The flavor of this workshop will be an honest-to-goodness,
this-is-how-it-really-works-out-there viewpoint from someone using real
research applications. Topics will include sources of sampling
(commercial, clientside, etc.), the pitfalls of using databases,
weighting, sampling biases, and nonresponse tricks of the trade.
Attendees will learn "rules of thumb" regarding sample performance, how
to determine the "right" sample size, and how to determine if you have
sampling problems.
 2011 Talks
 Tuesday, March 8, 2011
 The ASA Accreditation Process: Tom Short, ASA Board Member

"An accreditation program is a significant way of meeting the challenge,
identified in the ASA Strategic Plan, to reach out to underserved groups
while continuing to serve our traditional constituencies," said ASA President Sally Morton.
"After carefully studying such a program and after hearing from a large number of our members
through a sample survey that such a program would be useful to them, the board felt it
was time for the ASA to provide this distinction to members who feel
they will benefit from it."
Morton reiterated that such accreditation is voluntary; it is not the same as certification,
which, in other professions, may be required before one can practice. "ASA accreditation is for
those who feel it will help them professionally," she said. "As is demonstrated by similar programs
offered by statistical societies in other countries, not everyone will need or want to be
accredited."
Come find out about it!
 2009 Talks
 Friday, January 23, 2009
 Screening for Fuel Economy: Use of a Supersaturated Design and Analysis in a Real-World Application: Philip Scinto

The purpose of this presentation is to highlight the successful use of a
Supersaturated Design and analysis in an industrial application.
A 28-run Supersaturated Design is developed and run to screen
for more than 70 factors and interactions that affect friction properties,
and hence, fuel economy, in passenger car motor oil applications.
Given physical limitations on factors, natural correlation between factors,
a need for factors with more than two levels, and an emphasis on screening for interactions,
the construction of the design, while not complicated, is not straightforward either.
Bayesian Variable Assessment and model averaging techniques are used to identify important
factors and effects. From the analysis, four main effects, several of which are nonlinear,
and one interaction are identified as affecting friction. As a result of the analysis,
additional follow-up experiments were planned and conducted with the identified factors.
The results from the follow-up confirm the model from the original supersaturated experiment.
 2007 Talks
 Wednesday, December 12, 2007
 Building Risk-Adjustment Models for the Assessment of Obstetrical Quality: Jennifer L. Bailit and Thomas E. Love

In this talk we will discuss the data and statistical challenges involved in creating risk-adjustment
models for the evaluation of obstetrical care.
Many traditional measures of obstetrical quality, such as raw hospital cesarean rates,
are becoming obsolete. When cesarean rates are compared without risk adjustment for
maternal risk factors, hospitals treating complex patients appear to be providing poor
quality care when in fact they may be providing superb care. Additionally, hospitals
providing care to low risk patients may appear to be providing high quality care when in fact they are not.
The statistical and other methodological considerations related to producing valuable
risk-adjustment measures in this context are substantial, especially in light of the many
additional issues related to working in a "hot button" field like the assessment of obstetrical care. We will share some
of our more interesting experiences from the varied perspectives of a clinical researcher and a biostatistician.
 Friday, March 30, 2007
 Adventures in Teaching with Technology: Webster West

There are a tremendous number of technology resources
available for teaching statistics. These range from
interactive web-based applets to video lectures. The
trick is often determining which resources
actually work and how best to incorporate them into
your courses. Techniques for evaluating technology
learning tools and some common integration approaches
will be discussed. A discussion of the potential
impact of technology on the future of statistics
education will also be provided.
 2006 Talks
 Wednesday, November 1, 2006
 Missing Data Methodology in Malaria Studies: Rhoderick Machekano

Efficacy studies of malaria treatments can be plagued by indeterminate outcomes for some patients.
The study motivating this work defines the outcome of interest (treatment failure) as recrudescence, and for
some subjects it is unclear whether a recurrence of malaria is due to recrudescence or to a new infection.
This results in a specific kind of missing data. The effect of missing data in causal inference
problems is widely recognized. Methods that adjust for possible bias from missing data include a
variety of imputation procedures (extreme case analysis, hot-deck, single and multiple imputation),
inverse weighting methods, and likelihood-based methods (data augmentation, EM procedures and their extensions).
In this talk, I focus on multiple imputation, two inverse weighting procedures
(the inverse probability weighted (IPW) and the doubly robust (DR) estimators),
and a likelihood-based methodology (G-computation), comparing the methods' applicability to the efficient
estimation of malaria treatment effects.
I present results from simulation studies as well as results from an application to
malaria efficacy studies from Uganda.
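The inverse-probability-weighting idea the abstract mentions can be illustrated on synthetic data (a toy sketch with invented variables, not the speaker's analysis): when outcomes are missing at random given a covariate, weighting each observed outcome by the inverse of its estimated observation probability removes the bias of a complete-case analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.binomial(1, 0.5, n)           # binary covariate (e.g., baseline severity)
y = rng.normal(1.0 + 2.0 * x, 1.0)    # outcome; true population mean is 2.0
p_obs = np.where(x == 1, 0.9, 0.3)    # P(outcome observed | x): missing at random
r = rng.binomial(1, p_obs)            # r = 1 if the outcome was determined

cc_mean = y[r == 1].mean()            # complete-case mean: biased upward

# IPW: weight each observed outcome by 1 / P(observed | x),
# with the observation probabilities estimated within levels of x.
phat = np.array([r[x == 0].mean(), r[x == 1].mean()])[x]
ipw_mean = np.sum(r * y / phat) / np.sum(r / phat)
```

Here `cc_mean` lands near 2.5 while `ipw_mean` recovers the true mean of 2.0; the doubly robust estimator in the abstract adds an outcome regression so that correctness of either model suffices.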
 Wednesday, September 20, 2006
 What Mathematics and Forrest Gump Teach Us About Lotteries: Ron Wasserstein

Since the inception of the Kansas Lottery in 1987, I have been speaking to school and civic groups
about the lottery: how it works, what the probability of winning is and how it is computed, and, most importantly,
what the probability is of coming out ahead (a winner!) in the lottery.
Initial efforts involved a standard lecture, but before long it became clear that more was needed.
Thus, I have developed a computer game that simulates playing the lottery multiple times.
Playing the game is not only fun, but it also shows students vividly what happens in the long run.
Accompanying the computer game is a set of PowerPoint slides which teach about the lottery and probability.
In this presentation to the Cleveland Chapter, I will "teach" this subject, demonstrating how these tools can
be used as outreach to local groups. I will also provide the computer game and slides on CD for free use by chapter members.
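A simulation in the spirit of the speaker's game (a hypothetical Python sketch, not his actual program) makes the long-run lesson concrete: even in a simple pick-3 game paying 500-to-1 on a 1-in-1000 chance, the player loses about half of every dollar wagered in the long run.

```python
import random

random.seed(42)

def play_pick3(n_tickets, payout=500, price=1):
    """Buy n_tickets $1 pick-3 tickets: guess a 3-digit number and win
    `payout` on an exact match, which has probability 1/1000."""
    net = 0
    for _ in range(n_tickets):
        win = random.randrange(1000) == 0     # exact-match event
        net += (payout if win else 0) - price
    return net

# Expected net per ticket = 500 * (1/1000) - 1 = -0.50 dollars.
net = play_pick3(200_000)
avg = net / 200_000                           # close to -0.50 in the long run
```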
 Wednesday, April 12, 2006
 Bayesian and Frequentist Methods for Provider Profiling Using Risk-Adjusted Assessments of Medical Outcomes: Joe Sedransk

We propose a new method and compare conventional and Bayesian methodologies that are used or proposed
for use for 'provider profiling,' an evaluation of the quality of health care. The conventional approaches
to computing these provider assessments are to use likelihood-based frequentist methodologies, and the
new Bayesian method is patterned after these. For each of three models we compare the frequentist and
Bayesian approaches using the data employed by the New York State Department of Health for its
annually released reports that profile hospitals permitted to perform coronary artery bypass graft surgery.
Additional, constructed, data sets are used to sharpen our conclusions. With the advances of Markov Chain
Monte Carlo methods, Bayesian methods are easily implemented and are preferable to standard frequentist
methods for models with a binary dependent variable since the latter always rely on asymptotic approximations.
Comparisons across methods associated with different models are important because of current proposals
to use random effect (exchangeable) models for provider profiling. We also summarize and discuss important
issues in the conduct of provider profiling such as inclusion of provider characteristics in the model and
choice of criteria for determining unsatisfactory performance.
 2005 Talks
 December 15, 2005
 Hands-on Bayesian Data Analysis Using WinBUGS [doubled as fall workshop]: William F. Guthrie, National Institute of Standards and Technology

This workshop is designed to provide statisticians, scientists, or engineers
with the tools necessary to begin to use Bayesian inference in applied problems. Participants in the course
will learn the basics of Bayesian modeling and inference using Markov chain Monte Carlo simulation with the
freely available software package WinBUGS.
The workshop will introduce some of the theory underlying Bayesian analysis,
but will primarily focus on Bayesian analysis of "real-world" scientific applications using examples from
collaborative research with NIST scientists and engineers. Topics discussed will include Bayesian modeling,
Markov chain Monte Carlo algorithms, convergence tests, model validation, and inference.
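WinBUGS hides the sampler behind its model language, but the core Markov chain Monte Carlo idea can be sketched in a few lines (a toy random-walk Metropolis sampler for a normal mean, not workshop material):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(5.0, 2.0, size=50)      # observations; sigma = 2 treated as known

def log_post(mu):
    # log posterior up to a constant: N(0, 10^2) prior plus normal likelihood
    return -mu**2 / (2 * 10.0**2) - np.sum((data - mu)**2) / (2 * 2.0**2)

mu, chain = 0.0, []
for _ in range(20_000):
    prop = mu + rng.normal(0, 0.5)        # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                         # accept with the Metropolis probability
    chain.append(mu)

post_mean = np.mean(chain[5_000:])        # posterior mean after burn-in
```

With this nearly flat prior the posterior mean essentially matches the sample mean, which is the standard sanity check; in real models the convergence diagnostics the workshop covers matter far more.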
 September 14, 2005
 SAT Scores for Sale? Assessment of Commercial Test Preparation via Optimal Full Matching: Ben B. Hansen, University of Michigan

Poststratification is an old, flexible, efficient, and
conceptually plain statistical technique. If a treatment and a
control group are to be compared, and if every treated
subject is sufficiently similar to one or more controls as to
justify comparison to it, then with the right stratification one
can rightly estimate treatment effects simply by averaging and
differencing outcomes.
An impediment is that poststratification is usually practically feasible
only when there are few covariates. Often there
are many covariates, not just one or two, on which
subjects should be similar so as to justify their comparison.
For the purpose of estimating treatment effects, this issue is
dispensed with by propensity scores, which reduce the dimension
of the covariates to one. Because of this, observational studies
are increasingly analyzed by way of stratification along a
propensity score. Commonly used methods for stratifying along an
estimated propensity include pair matching, matching with
multiple controls, and subclassification along quintiles of the
score.
Now the "right" stratification, one that pairs only subjects
with sufficiently similar values of the estimated score, need
not take a simple form, in which case the commonly used methods
of stratification carry no guarantee of finding it. However,
there is always a so-called full matching that is at least as
good as any other stratification. This will be illustrated with
a case study of full matching, with propensity scores, applied to
estimate effects of commercial coaching for the SAT. I have
created an add-on package for R to perform optimal full matching,
"optmatch," and I shall also illustrate its use.
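optmatch solves the full-matching problem optimally via network flow; for intuition only, here is a greedy nearest-neighbor pair match on an estimated score (a deliberately naive stand-in with made-up scores, carrying none of optmatch's optimality guarantee):

```python
import numpy as np

def greedy_pair_match(score_t, score_c):
    """Match each treated unit to the closest not-yet-used control on the
    estimated propensity score; returns (treated_index, control_index) pairs."""
    used, pairs = set(), []
    for i in np.argsort(score_t):                  # treated units, low to high
        dists = [(abs(s - score_t[i]), j)
                 for j, s in enumerate(score_c) if j not in used]
        _, best = min(dists)                       # nearest remaining control
        used.add(best)
        pairs.append((int(i), best))
    return pairs

# Two treated units, four controls: each treated unit grabs its nearest control.
pairs = greedy_pair_match(np.array([0.3, 0.7]), np.array([0.1, 0.32, 0.68, 0.9]))
```

Greedy matching can paint itself into a corner that an optimal full matching avoids, which is exactly the talk's motivation for the network-flow approach.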
 June 1, 2005
 When Statistical Process Control Outweighs Randomized Clinical Trials: Dr. Mireya Diaz-Insua, Case Western Reserve University
Clinical trials are the pinnacle in the hierarchy of evidence-based
medicine, the main reason being the control they exert over observable
and unobservable bias by virtue of the randomization process. We find
evidence of the use of trials as far back as verses in the book of Daniel
in the Old Testament. In the late 1920s, Amberson designed the first
randomized trial in medicine (known to us) to assess the efficacy of
sanocrysin in the treatment of tuberculosis, 43 years after Peirce
incorporated randomization in experimentation. Despite its established
validity in the chain of evidence, there are circumstances when a
randomized trial is not feasible because of time constraints and/or ethical
concerns. In these situations, viable alternatives are necessary.
Vaccines or treatments for containing outbreaks, parachute
jumping, and lethal conditions that require fast results
are just a few examples of such scenarios. We show here tools from Statistical
Process Control (SPC) that present a feasible alternative for those
cases. We show the implementation of such a strategy within the context of
a specific case study of vaccine development and testing for rabies
conducted by Pasteur in the 1880s, when randomization in experimentation
was almost unknown. This example will show how and when SPC tools
are a plausible alternative to the paradigmatic randomized
clinical trial. This is joint work with Dr. Duncan Neuhauser, with whom
she has produced three recent articles on related topics for
the journal Quality and Safety in Health Care.
 2004 Talks
 December 1, 2004
 Calls for 911 Service: A Collaboration Between CSU and the Cleveland City Police: Dr. John P. Holcomb, Jr., Cleveland State University

In the last two years, professors from the Cleveland State University
Departments of Mathematics and Sociology have been working closely with
the City of Cleveland Police Department. This partnership has resulted
in access to police records cataloging all 911 calls for the city
since 1995. In the presentation, I will share summary graphs and statistics
as they relate to calls for service throughout the city and by various
districts. I will also share various models that undergraduate and graduate
students have developed to predict the number of calls for service.
 September 1, 2004
 Regression Modeling of Left- and Right-Censored Outcomes: Jeff Hammel, Cleveland Clinic

This presentation is motivated by the study of the susceptibility of
bacteria to antimicrobial drugs. Susceptibility (or its opposite,
resistance) is often measured by the minimum drug concentration required
to kill the bacteria, or simply the minimum inhibitory concentration (MIC).
An MIC can be measured for a bacterial sample from a patient diagnosed
with an infection. The MIC is often determined in a laboratory using a
finite set of titered drug concentrations. When the MIC is below the
lowest tested concentration or above the largest tested concentration,
the recorded value is left- or right-censored, respectively. That
is, it is of the form '≤ k_{1}' or '>k_{2}'. The tested concentrations are
also typically integer powers of 2 mg/L, in which case the censored
values can be written as '≤2^{L}' or '>2^{R}' for integer values of L and R.
MIC data of this form are easily analyzed using either SAS or S-Plus.
I will demonstrate code for doing these analyses.
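The same censored likelihood is easy to code directly; here is a hedged Python analogue (not the speaker's SAS/S-Plus code, and on simulated data) that fits a normal model to log2(MIC) by maximum likelihood, using the CDF for left-censored records and the survival function for right-censored ones:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
z = rng.normal(1.0, 1.5, 400)       # true log2(MIC): mu = 1.0, sd = 1.5
lo, hi = -2.0, 4.0                  # tested dilutions span 2^-2 .. 2^4
obs = np.clip(z, lo, hi)            # outside the range only '<=2^L' / '>2^R' is known

def negloglik(theta):
    mu, log_sd = theta
    sd = np.exp(log_sd)             # parameterize by log(sd) to keep sd > 0
    ll = np.where(obs <= lo, norm.logcdf(lo, mu, sd),   # left-censored: P(Z <= lo)
         np.where(obs >= hi, norm.logsf(hi, mu, sd),    # right-censored: P(Z > hi)
                  norm.logpdf(obs, mu, sd)))            # observed exactly
    return -ll.sum()

fit = minimize(negloglik, x0=[0.0, 0.0])
mu_hat, sd_hat = fit.x[0], np.exp(fit.x[1])   # recovers roughly (1.0, 1.5)
```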
 June 2, 2004
 An Introduction to the R Language: Ethan Katz, Cleveland Clinic; Jason Connor, Carnegie Mellon University
 A basic, interactive, hands-on introduction to the R
language, including statistical analysis and graphics procedures. Topics include:
 - Getting started with R: downloading R, obtaining documentation, and
using the help system.
 - Statistics with R: descriptive statistics, statistical tests, and
model fitting.
 - Graphics with R: basics and examples.
 February 27, 2004
 Analytical Tools in the SAS System: William Kuhfield, SAS Institute; Andrew Karp, Sierra Information Services, Inc.
 Dr. Kuhfield (a Cleveland-area native), a leading developer
in the Research and Development group at SAS Worldwide Headquarters in Cary, NC,
will present an introduction to the field of experimental design. His talk will
focus on an easy-to-use tool for building efficient experimental designs.
The second presentation is a tutorial on how to build and interpret predictive
models using PROC LOGISTIC. This analytic technique is frequently used in both
medical/scientific and business analytic projects to predict the probability
that an event (e.g., product purchase, disease outcome) will occur. This
tutorial will be given by Andrew Karp, a well-known SAS software consultant,
trainer, and SAS user group meeting presenter.
 2003 Talks
 December 3, 2003
 Propensity Scores: What Do They Do, How Should I Use Them, And Why Should I Care?: Thomas E. Love, Case Western Reserve University and MetroHealth Medical Center
 Many statistical problems aim to draw causal inferences about the effects of
policies, treatments or interventions. But the data in most cases are
observational, rather than experimental; that is, the data are collected
through the observation of systems as they operate in normal practice,
rather than under carefully controlled conditions. In particular, the
investigators have no control over the assignments of exposures.
Such data are relatively inexpensive to obtain, and may represent the
"real world" more effectively than the results of randomized experiments.
However, in using an observational study to estimate a treatment effect,
the "exposed" and "control" groups often differ substantially in terms
of background characteristics.
This talk will provide a friendly introduction to propensity score methods
for reducing the impact of selection bias on observational studies.
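As a minimal sketch of the idea (synthetic data and invented parameters, not from the talk): estimate the propensity score by logistic regression, cut it into quintiles, and average the within-stratum treated-control differences. Subclassification on five strata typically removes most, though not all, of the overt bias.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                          # confounder
t = rng.binomial(1, expit(1.5 * x))             # exposure more likely when x is high
y = 1.0 * t + 2.0 * x + rng.normal(size=n)      # true treatment effect = 1.0

naive = y[t == 1].mean() - y[t == 0].mean()     # confounded: far above 1.0

# Propensity score P(t = 1 | x) by maximum-likelihood logistic regression.
def negloglik(beta):
    p = np.clip(expit(beta[0] + beta[1] * x), 1e-12, 1 - 1e-12)
    return -(t * np.log(p) + (1 - t) * np.log(1 - p)).sum()

b = minimize(negloglik, x0=[0.0, 0.0]).x
ps = expit(b[0] + b[1] * x)

# Stratify on propensity quintiles; average the within-stratum differences.
stratum = np.searchsorted(np.quantile(ps, [0.2, 0.4, 0.6, 0.8]), ps)
adjusted = np.mean([y[(stratum == s) & (t == 1)].mean()
                    - y[(stratum == s) & (t == 0)].mean() for s in range(5)])
```

The adjusted estimate lands much closer to the true effect of 1.0 than the naive comparison, though a residual bias remains: finer stratification, matching, or weighting shrinks it further.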
 October 27, 2003 (Fall Workshop)
 Analysis of Curves: Esteban Walker, University of Tennessee
 Advances in technology have dramatically increased
the amount and quality of data that are recorded in all areas of human
endeavor. Thousands of measurements are available nowadays in situations
where previously only a few measurements, at given points in time or space,
were taken. These measurements allow the reconstruction of the whole profile
or "signature". Basically, the profile becomes the unit of analysis.
This seminar will discuss two problems with profiles: (1) how to determine
if predetermined sets of curves are different, and (2) how to identify clusters
in a set of curves. Examples from various fields will be presented.
Instructions on the implementation of these techniques in S-Plus and SAS will
be given.
 September 17, 2003
 Reversible Jump Markov Chain Monte Carlo For Linear Regression Models: Patrick Gaffney, Lubrizol
 Reversible Jump Markov Chain Monte Carlo (RJMCMC) is increasing in
popularity as witnessed by the number of citations to the seminal paper
by Green. This methodology gives the statistician a means of solving
difficult problems where a parametric model has been proposed but the
number of terms in the model is unknown. Some examples include change-point
analysis, image analysis, and gene discovery.
Several issues arise in practice: initialization of the chain,
the proposal mechanism for parameters, correct formation of the
acceptance ratio, and final model selection. In this talk, I will
address these issues when RJMCMC is applied to solve linear regression
models in a Bayesian framework. The lessons learned can readily be
applied to more complex situations.
 June 4, 2003
 An Introduction to Value-Added Methods for Teacher Accountability: J.R. Lockwood, RAND
Statistics Group
 As underscored by the federal No Child Left Behind Act of 2001,
a currently active and central education policy initiative is the
use of scores on standardized achievement tests to hold educators
accountable for student outcomes. One of the challenges to this
endeavor is combining test score information into a single measure
that provides evidence of school or teacher effectiveness.
Although many states and districts rely on simple aggregate
score averages or differences, a few are exploring more complex
models that leverage longitudinal data on students to isolate the
"value added" by a particular teacher or school. This modeling
approach has taken a number of forms and is generally referred to as
"value-added modeling" (VAM). Enthusiasm for this approach stems
from the belief that it can remove the effects of factors not under
the control of the school, such as prior performance and socioeconomic
status, and thereby provide a more accurate indicator of school or teacher
effectiveness.
In this talk, we critically evaluate this claim in the context of
models used to isolate teacher effects. We briefly review the principal
existing modeling approaches and conclusions from the literature. We then
present a general multivariate, longitudinal mixed model that incorporates
the crossed grouping structures inherent to longitudinal student data linked
to teachers. We discuss the potential impact of model misspecifications,
including missing student covariates and assumptions about the accumulation
of teacher effects over time, on key inferences made from the models. We
conclude with an assessment of the strengths, limitations, and the most
challenging unresolved issues of value-added models as a tool for teacher
accountability.
 March 5, 2003
 The Biostatistician's Role in Health Economics: Adrienne Heerey, Cleveland
Clinic Foundation
 Rates of healthcare expenditure have risen in excess of inflation
in the past decade. The fruits of intensive research in the '80s
are now being realized, resulting in a number of expensive medical
interventions becoming available. Simultaneously, focus is being
placed on reducing healthcare expenditure. As a result, the
requirement for economic evaluation has become vital in the effort
to obtain the best value from a given healthcare budget. Due to the
scarcity of health economists, many biostatisticians are being
asked to contribute to these studies. This presentation details the
fundamental components of cost-effectiveness evaluation and economic
modeling and highlights statistical queries faced in these studies.
 2002 Talks
 December 4, 2002
 Financial and Estate Planning by the Estate Planning Team, Inc.: Stuart Kleiner,
Estate Planning Team, Inc.

Estate Planning is "the action an individual takes to
maximize use of their assets to enhance lifestyle and
support their desired level of activity, to protect
their estate from financial devastation in the event
of an unforeseen nursing home stay and to provide for
their loved ones through the transfer of property by
gifting during lifetime or through inheritance after
death, while minimizing all administrative costs
and taxes." Mr. Kleiner will discuss basic legal
documents every person should have, retirement planning,
reducing income and estate taxes, multigenerational
family financial planning, college funding, how to help
your parents with planning, common planning mistakes,
how to select and coordinate professional advisors, etc.
 November 6, 2002
 Industrial Applications in Statistics: Robert Mason, Southwest Research
Institute
 Statistics has been used to solve data problems in the United
States for over 160 years. The fields of application continue to expand and include
common areas such as the physical, engineering, and biological sciences as well as
more recent applications such as in space and environmental sciences. This presentation
provides an interesting application resulting from recent work conducted at Southwest
Research Institute. It concerns a statistical experiment conducted to determine if
honeybees can be used to detect land mines. The results have recently appeared in
major newspapers, magazines, and network news programs.
 October 14, 2002 (Fall Workshop)
 A Review: Missing Data: Joseph Schafer, Pennsylvania State
University
 Statistical procedures for missing data have vastly
improved in the last two decades, yet misconception and unsound practice
still abound. In this seminar, we will
 - frame the missing-data problem from a statistician's viewpoint
 - introduce fundamental concepts regarding the distribution of
missingness and missing at random (MAR)
 - review older, ad hoc procedures including case deletion and
single imputation, discussing their merits and weaknesses
 - discuss the theory, implementation and use of maximum likelihood
(ML) for incomplete-data problems
 - introduce the idea of multiple imputation (MI), discuss its
properties, and demonstrate its use on a real data example
 - discuss some of the latest developments in the missing-data
literature that have appeared in the last five years, including
weighted estimating equations and methods for handling missing
values that are not MAR.
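The multiple-imputation idea can be sketched on synthetic data (a toy, improper version that fixes the regression parameters; proper MI also draws them from their posterior, as the seminar discusses):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)
y = 2.0 + 1.0 * x + rng.normal(size=n)            # true E[y] = 2.0
obs = rng.random(n) > np.where(x > 0, 0.7, 0.1)   # MAR: y usually missing when x > 0

cc_mean = y[obs].mean()                           # complete-case mean: biased low

# Impute from the regression of y on x among observed cases, adding
# residual noise; repeat m times and pool the estimates (Rubin's rules).
b1, b0 = np.polyfit(x[obs], y[obs], 1)
resid_sd = np.std(y[obs] - (b0 + b1 * x[obs]))
estimates = []
for _ in range(20):
    y_imp = y.copy()
    y_imp[~obs] = b0 + b1 * x[~obs] + rng.normal(0, resid_sd, (~obs).sum())
    estimates.append(y_imp.mean())
mi_mean = np.mean(estimates)                      # pooled estimate, near 2.0
```

Imputing with residual noise, rather than plugging in the regression prediction, is what keeps the between-imputation variance honest when Rubin's rules are used for standard errors.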
 September 4, 2002
 Optimal Price Rules, Administered Prices and Suboptimal
Prevention: Evidence from a Medicare Program: Avi Dor, Case Western
Reserve University
 Pricing methodologies in Medicare vary from one
component of the system to another, often leading to conflicting
incentives. Failure to recognize linkages between them may result in
inefficient allocation of resources and higher overall costs. To
motivate the analysis, I derive pricing rules for a welfare-maximizing
regulator. I show that while optimal inpatient payments are standard
Ramsey prices, optimal outpatient payments must incorporate net loss due
to unnecessary hospitalizations, as well as supply elasticities. A
myopic regulator will tend to ignore this, leading to underprovision of
preventive services. The dialysis program represents a useful case for
empirical investigation, since payments for maintenance services are more
rigidly determined than payments for related hospital care. Given
constant prices, empirical analysis focuses on the effect of dialysis
intensity on various measures of hospital use. Results indicate that
greater dialysis intensity reduces hospital use. Moreover, this is found
even at moderate or high levels of intensity, where dialysis is viewed
ex ante as being adequate. A simple costbenefit calculation suggests
that for every dollar of additional spending on outpatient intensity,
about $1.50 in hospital expenditures can be saved. This suggests that the
current pricing structure within aspects of the Medicare program is
inefficient, and underscores the problem of regulatory myopia.

 June 5, 2002
 Assessing Risk and Fairness: The Role of Statistical Science in
Policy: David W. Scott, Rice University
 Probability theory provides a mathematical framework
for modeling risk. Philosophy considers fundamental questions of the
nature and meaning of chance. But it falls upon statistical science to
collect and analyze data to estimate risk, influence policy, and make
decisions. Insurance provides a compelling case study for notions of
fairness and subsidy. This talk will examine the notion of a fair game
and consider its application in areas including decision making, social
security, medical insurance, and exit polling. What are some of the
elements of "fair" policies? A deeper understanding of statistical
modeling and evaluation would illuminate subsidies implicit in public
policy and would sharpen political debate.

 May 1, 2002
 Defining 1-Sided P-Values in a Genetic Problem: Robert C.
Elston, Case Western Reserve University
 Fisher defined the P-value as the probability of an
observed result or anything more extreme, i.e., anything that would alert
the research worker even more than the observed result to the
possibility that the null hypothesis is not true. Depending on the
situation, the appropriate P-value should be 1-sided or 2-sided. If
segregation at a genetic locus underlies the etiology of a disease, then
pairs of siblings who are both affected with the disease will tend to
share a larger number of alleles identical by descent at a linked marker
locus. Under the null hypothesis that the marker locus is not linked to
such a disease locus, the siblings will share 0, 1 or 2 marker alleles
identical by descent with probabilities ¼, ½ and ¼,
respectively. The definition of a 1-sided P-value in this situation will
be discussed.
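The null distribution in question is simple enough to enumerate exactly. In a small sketch (hypothetical numbers, not from the talk): since linkage can only push IBD sharing upward, the natural 1-sided P-value is the upper tail of the total sharing count across affected sib pairs.

```python
import numpy as np

def ibd_pvalue(n_pairs, s_obs):
    """1-sided P-value for the total number of alleles shared IBD across
    n_pairs affected sib pairs. Under the null each pair shares 0, 1, or 2
    alleles IBD with probabilities 1/4, 1/2, 1/4; linkage inflates sharing,
    so 'more extreme' means a larger total: P(S >= s_obs)."""
    dist = np.array([1.0])
    for _ in range(n_pairs):
        dist = np.convolve(dist, [0.25, 0.5, 0.25])   # add one pair's sharing
    return dist[s_obs:].sum()

# 20 sib pairs sharing 26 alleles in total (null mean is 20): modest evidence.
p = ibd_pvalue(20, 26)
```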
 April 3, 2002
 The SAS Solution to Warranty Analysis: Jeff Wright, SAS
Institute
 SAS is in the process of bringing to market a
solution product for Warranty Analysis. This solution has been initially
developed for the automotive (and automotive-supplier) industry but will
soon be applicable to any discrete manufacturer who must manage
warranty claims and costs. The solution will integrate warranty data with
key customer, vehicle, production, and geographic information so that
the organization can achieve a level of process knowledge that
translates into significant value. The solution allows manufacturers to
automatically detect emerging quality issues before they make it to the
top of the 'issue list'; as needed, use statistical analysis to
determine root causes quickly and focus resources in an accurate and
timely manner (using SAS Enterprise Guide, a graphical user interface to
the SAS system); as needed, use data mining techniques for further
inquiry into manufacturing and customer relationship issues (using SAS
Enterprise Miner); forecast warranty costs to protect against financial
risk. A demo of the Warranty Solution will be presented along with quick
looks at SAS Enterprise Guide and SAS Enterprise Miner.
 March 6, 2002
 Design of Computer Experiments to Optimize the Mean of a Response:
Bill Notz, The Ohio State University
 For purposes of this talk, a computer experiment is
assumed to consist of computer code that produces deterministic
responses for a given set of inputs. We consider the situation in which
there are two types of inputs; manufacturing variables and noise
variables. Manufacturing variables are those that can be controlled by
the product designer. Noise variables cannot be controlled but have
values that follow some probability distribution. We seek the values of
the manufacturing variables that optimize the mean of a response. The
approach is Bayesian; the prior information is that the response is a
draw from a stationary Gaussian stochastic process with correlation
function belonging to a parametric family with unknown parameters.
Following an initial design, a sequential strategy is used to select
values of the inputs at which to observe the response. This strategy
involves computing, for each unobserved set of values of the inputs, the
posterior expected "improvement" over the current best guess
at the optimum; the next set of inputs to observe is the one that
maximizes this expected improvement. The strategy is illustrated with
examples. Other issues regarding computer experiments will be addressed
as they arise.
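The expected-improvement criterion has a closed form under the Gaussian posterior; a short sketch for the minimization case (a generic formula, not necessarily the talk's notation):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI at candidate inputs with GP posterior mean `mu` and sd `sigma`,
    for minimization: EI = (f_best - mu) * Phi(z) + sigma * phi(z),
    where z = (f_best - mu) / sigma and f_best is the current best value."""
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# A candidate whose mean merely ties the current best but is very uncertain
# still has substantial EI: that is what drives the design to explore.
ei = expected_improvement(mu=np.array([1.0, 0.5]),
                          sigma=np.array([0.1, 1.0]), f_best=1.0)
```

EI balances exploitation (low posterior mean) against exploration (high posterior uncertainty), which is why the sequential strategy does not simply sample where the current prediction is best.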
 February 6, 2002
 Geometric Morphometric Techniques to Design Skull Prostheses:
David Dean, Case Western Reserve University
 Recently, we have begun to look at the use of
patient-specific radiological data as the source for highly accurate
computer-aided manufacture (CAM) of tissue-engineered prosthetic
implants. The templates for the CAD (Computer-Aided Design) of these
prosthetics come from the study of the shape of human anatomy referred to as
geometric morphometrics. Traditional multivariate statistical
morphometrics (circa 1880-1980) indirectly gave rise to these methods
which are truly a graft from the engineering literature on best fit
(rigid body) and warping (forced fit through bending) algorithms.
Geometric morphometrics has truly been a field of inquiry and
application for only a decade or so. There are no textbooks, but there
are several good conference and review texts as well as journal papers.
Most emphasize two primary algorithms, Procrustes and Thin Plate Spline,
for best fit and warping, respectively. The primary novelty in these
methods is that the transformed landmark coordinates themselves, instead
of linear, area, volume, or other traditional measurements, have become
the statistical basis of shape measurement. These approaches are useful
in describing phenomena of average biological shape or biological
symmetry. These data have significant implications for CAD/CAM of human
prosthetics.
This approach is expected to facilitate highly accurate control of
implant-host fit and complex internal structures that promote
resorption. This work has been done with a 3D Systems (Valencia, CA) SLA
250/40 stereolithographic device. Stereolithographic CAM of PPF
components begins with a computer-aided design (CAD) STL (public
stereolithography ASCII data format) file generated in a 3D (i.e.,
solid) CAD interface. Now that more than 20 patients have received
implants produced in this fashion, we are beginning to look at more
efficient ways to use stereolithographic production of skull prostheses.
Previously, we and other groups have used patient CT images as source
data to print a skull model of the patient. These large skull models are
expensive to produce. Additionally, the manual work that follows to
design and manufacture an implant is also expensive. We will present new
technology that allows us to print the implant directly, thereby saving
the cost of printing the patient's skull and the expense of several
generations of manual work.
 January 9, 2002 (Cleveland Chapter Presidential Address)
 Analyzing Overdispersed Count Data from One-Way Designs:
Nancy Campbell, John Carroll University
 A comparative study of several possible statistical
tests for analyzing overdispersed count data from completely randomized
one-way experimental designs will be discussed. Specifically, a Monte
Carlo study was done in which tests involving the general linear model
on the raw data and on transformations of the data were compared to
tests based on the generalized linear model utilizing either the Poisson
or negative binomial distribution. The problem was motivated by a
co-author's frequent encountering of overdispersed count data from such
experiments, often involving small means and small sample sizes. The
past simulation study will be explained and conclusions drawn, and a
current related comparative study will be discussed as well. In
addition, an ongoing undergraduate research project by Computer Science
majors at John Carroll done in conjunction with the current study will
be discussed.
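The overdispersion the abstract refers to, count data whose variance exceeds its mean, can be illustrated with a short simulation. This is a generic sketch and not the study's code; the Poisson-gamma (negative binomial) mixture and the particular mean and dispersion values are invented for illustration.

```python
import math
import random

random.seed(1)

def poisson_sample(rate, rng=random):
    """Poisson draw via Knuth's multiplication method (fine for modest rates)."""
    limit, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def overdispersed_counts(mean, dispersion, n):
    """Counts from a Poisson-gamma mixture: rate ~ Gamma(1/dispersion,
    mean*dispersion), count ~ Poisson(rate), so Var = mean + dispersion*mean^2."""
    return [poisson_sample(random.gammavariate(1.0 / dispersion, mean * dispersion))
            for _ in range(n)]

counts = overdispersed_counts(mean=3.0, dispersion=1.0, n=2000)
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
# For these settings the theoretical variance is 3 + 9 = 12, four times the mean,
# which is why a plain Poisson model (variance = mean) fits such data poorly.
```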
 2001 Talks
 December 5, 2001
 The Breast Implant Controversy: Statisticians and Epidemiologists
Meet the Media and the Legal Profession: W. Michael O'Fallon, Mayo
Clinic and Past-President of the ASA
 In 1992, the FDA announced a moratorium on the
installation of breast implants, ending a 30-year period of increasing
utilization of this technology. Women with surgically removed or
deformed breasts and women who desired breast augmentation had access to
such implants in the U.S. since about 1962 and were by and large
satisfied. However, some case reports of a variety of connective tissue
diseases occurring in women with implants and a small number of court
cases led the FDA to use its newly obtained regulatory authority to
request safety data from the various manufacturers of implant materials.
When such data were not forthcoming and/or were deemed incomplete or
unsatisfactory, the moratorium was declared. A firestorm of lawsuits
ensued and pleas for scientific information on the topic were issued.
The Department of Health Sciences Research at the Mayo Clinic provided
the first response with a paper published in the New England Journal of
Medicine in 1994. This presentation discusses our contacts with the
media and primarily with the legal profession following this
publication. Some of our experiences were exhilarating but most were
nerve-wracking, depressing, disillusioning, and even frightening.
However, we have survived.
 November 7, 2001
 Two-Stage Group Screening in the Presence of Noise Factors:
Angela Dean, The Ohio State University
 A major advantage of factorial experiments is the
information that they provide on interactions. When the number of
factors is large and little prior knowledge on the various factorial
effects is available, conventional fractional factorial experiments
capable of estimating interactions require too many observations to be
economically viable. To overcome this problem, interactions are
frequently dropped from consideration and assumed to be negligible,
often without substantive justification. In industrial experimentation,
in particular, the loss of information on interactions is a serious
problem, because a key tool for product improvement is the exploitation
of interactions between design (control) factors which can be set in the
product specification, and noise factors which cannot. Two different
strategies for "group screening" will be presented for a large number
of factors, over two stages of experimentation, with particular emphasis
on the detection of interactions between design and noise factors. Three
criteria are used to guide the choice of screening technique, and also
the size of the groups of factors for study in the first stage
experiment. The criteria seek to minimize the expected total number of
observations in the experiment, the probability that the experiment size
exceeds a pre-specified target, and the proportion of active factorial
effects which are not detected. Examples will be used to illustrate the
methodology, and some issues and open questions for the practical
implementation of the results will be discussed.
 October 3, 2001 (Fall Workshop)
 Experiments: Planning, Analysis, and Parameter Design
Optimization: C. F. Jeff Wu, The University of Michigan
 This seminar will be based on the book "Experiments:
Planning, Analysis, and Parameter Design Optimization" by Jeff Wu
and Mike Hamada (2000). Course notes will be made available to
attendees. This book contains many new methods not found in existing
textbooks, and covers more than 80 data sets and 200 exercises. The new
tools covered include robust parameter design, use of minimum aberration
criterion for optimal factor assignment, orthogonal arrays of economic
run size, analysis strategies to exploit interactions, experiments for
reliability improvement, and analysis of experiments with nonnormal
responses. Data from real experiments will be used to illustrate
concepts. Time will be reserved for questions and discussion.
 September 5, 2001
 Irregular Factorial Designs: Tena I. Katsaounis, Mansfield
Restaurants Inc
 Partially Balanced arrays with N runs, m factors,
two symbols, strength t greater than two, and a given index will be
discussed by viewing them as Partially Balanced arrays with N runs, m
factors, two symbols, and strength equal to two. Formulas for
calculating the corresponding index, given any number of factors m, will
be presented. This method is useful for easily showing the non-existence
of a Partially Balanced array with N runs, m factors, two symbols,
strength t greater than two, and a given index. Construction of
irregular factorial designs using this method will be discussed. Such
designs belong to the class of Partially Balanced, PB1, or Extended PB1
arrays.
 June 6, 2001
 A Case Study: An Inside Job by an Outsider: Dennis Fox,
Cleveland Technical Societies Council President
 The presentation will focus on three things. The first
is the "Case" study: the results, trends, and hopes for the fourth-grade
students at a local elementary school. The second is thoughts on how to
make meaningful changes in classes, with comments on Time's "Schools of
the Year: Schools That Stretch" article in the May 21, 2001 issue.
Finally, a report about a new charter school, scheduled to open this
fall in Cleveland, that I hope will use EQL, DDM, QL, and SEAQL
techniques and other technical societies' educational outreach programs
as the basis of its curriculum. We will be very open to discussion on
where and how we go from here at Case Elementary and other programs.
Some of the answers, while not difficult or surprising (in hindsight),
may surprise you. We are all educators; some of us just received formal
training in it.
 May 2, 2001 (A Workshop sponsored by the ASA Council of
Chapters)
 Permutation Tests: A Guide for Practitioners: Phillip Good,
Information Research
 This is a half-day course on permutation
methods. It is intended for practicing statisticians and others with
interest in applying statistical methods. High school algebra is assumed
but no higherlevel mathematics is required. Some familiarity with
computer simulation would also be helpful. Attendees will be given
historical background on resampling methods and a formal introduction to
these methods. Emphasis will be placed on the wide variety of
applications of the techniques and the computer-intensive nature of
their implementation, along with many examples and "real-world"
applications. The course is intended for those who use statistical
methods in their work. This includes practitioners in medicine,
business, engineering and the social sciences. It also will be useful to
professors of statistics and those who do statistical research but may
not be familiar with resampling methods and want to be updated on the
latest advances in methodology and application. Dr. Good is the author
of two popular texts on resampling methods. Resampling is a powerful
technique which has only recently seen an explosion in applications due
to enhancements in computational techniques that make these
computer-intensive methods practical.
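The kind of resampling method the course covers can be sketched very compactly. This is a minimal illustration (a two-sample permutation test on the difference in means), not material from the course itself, and the data in the usage lines are invented.

```python
import random

def permutation_test(x, y, n_perm=10000, seed=0):
    """Two-sample permutation test on the absolute difference in means.
    Returns a two-sided Monte Carlo p-value."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # one random relabeling of the groups
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)             # add-one correction avoids p = 0

# Clearly separated groups give a small p-value; identical groups give p = 1.
p_small = permutation_test([5, 6, 7, 8, 9], [1, 2, 3, 2, 1])
p_null = permutation_test([1, 2, 3], [1, 2, 3])
```

The only distributional assumption is exchangeability of the observations under the null hypothesis, which is what makes the approach so widely applicable.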
 April 4, 2001
 Design of Two-Level Fractional Factorials with Vague Prior
Information: Arthur G. Holms
 An experimenter's prior information, however vague,
is incorporated, according to its vagueness, into the design of
expansible sequences of two-level fractional factorials with
crossed-classification block effects, and all simpler cases. The
experimenter's prior subjective probabilities of the importance of model
coefficients, block parameters, and likely stopping stages of expansible
sequences are used to maximize the expected utility of a design by
making optimum selections of model coefficients from the aliased and
confounded sets of model coefficients, for all physical-to-plan variable
matchings, for the experiment designer's choices of groups and subgroups
of defining contrasts. Differing groups and subgroups can then be
compared under the condition of optimal coefficient selection and
physical-to-plan variable matching.
 March 7, 2001
 Statistics Education and the Making Statistics More Effective in
Schools of Business (MSMESB) Conferences: Thomas Love, Case Western
Reserve University
 The Making Statistics More Effective in Schools of
Business (MSMESB) conferences have been held annually since 1986. Since
its inception, MSMESB has focused on improving the teaching of
statistics and statistical thinking, on cross-disciplinary research and
cross-pollination between academia and industry, and on continuous
improvement in business education. The conferences have led to
substantial changes in curricular content, modes of delivery, and
supplemental material in business (and other) programs all over the
world. This talk describes the impact of MSMESB on the teaching of
statistics, especially the development of textbooks and software. The
talk traces some of MSMESB's history and draws out some practical
recommendations. We also highlight some key challenges and opportunities
for the future. Earlier versions of this work were jointly prepared with
David Hildebrand, of the Wharton School. The MSMESB web site houses
details of many of the sessions at
http://weatherhead.cwru.edu/msmesb
 February 7, 2001
 African American Study of Kidney Disease and Hypertension -
Design and Update: Jennifer J. Gassman, The Cleveland Clinic
Foundation
 The African American Study of Kidney Disease and
Hypertension is a factorial design study of the effect of
antihypertensive regimen and level of blood pressure control on the
progression of kidney disease in African Americans with hypertensive
nephrosclerosis. In this presentation, the study design will be reviewed
and there will be an update on the progress of the study.
 January 3, 2001
 Uses and Abuses of Proficiency Tests: Robert S. Butler,
BFGoodrich
 In recent years tremendous emphasis has been placed
on the idea of quality in education. To this end many states, including
Ohio, have embraced the concept of proficiency testing as a means of
quality control. While the pronouncements from Columbus have all of the
verbal hallmarks of a quality control effort they have little else. This
is unfortunate because, with proper analysis, the proficiency data can
provide valuable information about many aspects of the education
process. In order to provide an understanding of what can and cannot be
expected from proficiency testing, the talk will focus on the analysis
of the 1996-1999 Shaker Heights 4th grade proficiency scores.
 2000 Talks
 December 6, 2000 (Cleveland Chapter Presidential Address)
 Cryptography, Primes, and Privacy: Wm. O. Orgel, Solutions;
et cetera
 There are three words that describe privacy. They
are secrecy, secrecy, and secrecy. Article IV of the Bill of Rights
guarantees: "The right of the people to be secure in their persons,
houses, papers, and effects...". Modern technology is making the
privacy of medical records, banking, credit, and our bodies (Genetics),
etc. an international issue, which disturbs many people.
Cryptography, not thought of by most people very often, is supposed to be
at work protecting our privacy every second of every day. Why do we
trust cryptology to protect our privacy/security, both personal and
national? From whom are we being protected? We will discuss the
historical connection between ancient, modern, and contemporary
mathematics (prime numbers) and the Internet. Euclid is usually
discussed with respect to geometry, and Fermat with respect to his Last
Theorem, but not this time.
There are "keys" to making cryptography work, and we'll
discuss them. Is the future of cryptography in Quantum Computing?
Finally, we will discuss the Awards of "Privacy International".
 November 2, 2000
 Statistics for a New Century: Meeting the Needs of a World of
Data: Richard Scheaffer, University of Florida, President ASA
 The world is awash in data. Most people are aware of
the importance and power of data in their professional and personal
lives, and many attempt to use data in making everyday decisions. But
few are educated in ways that would allow them to comprehend more fully
the vast array of uses (and misuses) of data and to use more effectively
the quantitative information that confronts them daily. Even fewer are
aware of the fact that formal study of statistics can serve to
strengthen their own academic preparation for a wide variety of careers.
This talk will provide an overview of the current efforts in the United
States to infuse statistics into the school (K-12) curriculum and to
enhance opportunities for undergraduates to learn more about statistics.
Ties to similar efforts in the international community will be
mentioned. The goals are to empower students through improved
quantitative literacy and to provide strong foundations for careers that
depend increasingly on data. Among the strengths of these efforts are
the terrific interest they have generated among educators and students
at all levels; among the weaknesses are the tendency for programs to
become narrow and rote rather than broad and creative. A goal for the
next century will be to bring to educational programs in statistics the
same kind of creative vitality that marks the practice of statistics
among professionals in the field. This will require new emphases in both
content and pedagogy. Statistics education has caught the attention of
many; it now must prove itself by making effective use of this
opportunity to produce new generations of graduates that will not drown
in their world of data.
 October 4, 2000 (Fall Workshop)
 The Grammar of Graphics: Designing a System for Displaying
Statistical Graphics on the Web: Leland Wilkinson, SPSS, Inc.
 "The Grammar of Graphics" (GOG) is the
title of a recent Springer-Verlag book that encapsulates a new theory of
statistical graphics. GOG is based on an algebraic model. It contrasts
with the prevailing view toward classifying and analyzing charts by
their appearance, a view that one might call Pictures of Graphics
(POG). In POG, there are pie charts, bar charts, line charts, and so on.
Not only are most books and papers on graphics organized by chart type,
but so also are most charting programs and production libraries. By
contrast, GOG begins with a strong statement: there is no such thing as
a pie chart. A pie chart is a stacked bar graphic measured on a
proportional scale and transformed into polar coordinates.
Significantly, descriptions of simple POG charts (such as a pie)
are more complex in GOG, and seemingly complex POG charts (such as
scatterplot matrices) are simple to describe in GOG. This contrast
between surface POG descriptions and deep GOG specifications exposes not
only previously unappreciated subtleties in the structures of common
charts but also the existence of charts not generally known. GOG is
ideally suited for designing a system for interactive Web graphics.
Examples will be shown using a Java production library called nViZn
(formerly GPL).
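GOG's claim that a pie is just a stacked bar transformed into polar coordinates can be made concrete in a few lines. This sketch is an editorial illustration of the idea, not code from the book or from nViZn: it turns a list of values into the (start, end) arc angles of the corresponding wedges.

```python
def pie_wedges(values):
    """A pie chart as GOG describes it: a stacked bar on a proportional
    scale, transformed to polar coordinates. Each value becomes an arc
    given as (start_angle, end_angle) in degrees."""
    total = sum(values)
    wedges, start = [], 0.0
    for v in values:
        end = start + 360.0 * v / total   # proportional-scale stacking step
        wedges.append((start, end))
        start = end
    return wedges

# Three categories in ratio 1:1:2 become quarter, quarter, and half wedges.
wedges = pie_wedges([1, 1, 2])
```

The stacking step is exactly the computation behind a stacked bar; only the final coordinate system (angles rather than heights) makes it a "pie."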
 September 6, 2000
 Epidemiological Causation in the Legal Context: Sana Loue,
Case Western Reserve University
 Reliance on epidemiological evidence has become
increasingly common in various legal contexts, including toxic tort
cases, criminal matters, civil lawsuits between individuals for alleged
harm, and actions to involuntarily quarantine individuals with specified
infectious diseases. However, the operationalization of causation
differs between law and epidemiology and the purposes of law and
epidemiology are quite different. These divergent methodologies and
purposes often result in the misuse and misinterpretation of
epidemiological principles and findings in the courtroom. Case examples
are used to illustrate these difficulties.
 June 7, 2000
 Flourishing or Floundering? Relief Pitching in Major League
Baseball: Rich Charnigo, Case Western Reserve University
 A manager in major league baseball has the
often-difficult task of deciding when and how often to switch pitchers
during the course of a game. Looking at the simultaneous decline in
pitching performance and increase in the use of relief pitching (and
perhaps with a few contests from last season in mind), one wonders if
relief pitching is being employed past the point of optimal
effectiveness. We will consider how relief pitching has changed over the
last thirty-nine years and formulate three linear models that will
relate the earned run average (a primary measure of pitching
performance) to the proportion of complete games (a proxy for the amount
of relief pitching). By considering a statistic designed to assess the
quality of a pitching change and the proportion of pitching
substitutions in 1999 contests that exceed a certain threshold value of
this statistic, we will assess the overall effectiveness of relief
pitching. With this finding and the results from the aforementioned
linear models, we will argue that relief pitching may now be
contributing to the very problem which it is designed to circumvent:
high scoring by the opposition.
 May 3, 2000
 Computer Generation of Magic Squares Using Minitab: Josephina
de los Reyes, University of Akron
 The purpose of this talk is to present a computer
method of generating a magic square using MINITAB®, a statistical
software package. A magic square of order n is an n x n array of the
integers 1, 2, ..., n^2 such that the sum of the entries in each of the
n rows, n columns, and two main diagonals is a constant. Amid the
fascinating historical and recreational aspects of magic squares, and
the theoretical questions asked about them when regarded as matrices,
two ideas in particular caught this author's interest. One is the fact
that under certain restrictions, a magic square of order n may be
constructed from two mutually orthogonal Latin squares ("MOLS")
of order n. Is this procedure reversible for all n, that is, under what
conditions can MOLS be obtained from a magic square? Initial results
obtained by the author show that given a magic square and a Latin
square, another Latin square that is orthogonal is derivable. The second
idea comes from an article in electronic engineering about the
effectiveness of a color-printing dithering algorithm based on a 3x3
magic square over one based on a "direct" dithering algorithm.
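The talk presents a MINITAB construction; as a language-neutral illustration, here is one classical construction (the Siamese method for odd orders), which is not necessarily the method used in the talk.

```python
def magic_square(n):
    """Build an odd-order magic square by the classical Siamese method:
    place 1 in the middle of the top row, keep moving up-and-right
    (wrapping around the edges), and drop down one row whenever the
    target cell is already occupied."""
    if n % 2 == 0:
        raise ValueError("the Siamese construction requires odd n")
    square = [[0] * n for _ in range(n)]
    r, c = 0, n // 2
    for k in range(1, n * n + 1):
        square[r][c] = k
        nr, nc = (r - 1) % n, (c + 1) % n   # up-and-right with wraparound
        if square[nr][nc]:
            nr, nc = (r + 1) % n, c         # occupied: drop down one row
        r, c = nr, nc
    return square

# The 3x3 case gives the classical Lo Shu square with magic constant 15.
lo_shu = magic_square(3)
```

Every row, column, and main diagonal of the result sums to the magic constant n(n^2 + 1)/2.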
 April 5, 2000
 Computing Multivariate B-splines: A Simulation-Based Approach:
Nidhan Choudhuri, Case Western Reserve University
 Univariate B-splines have played an important role,
both as a theoretical and a practical tool, in dealing with polynomial
splines; their multivariate counterpart, despite having all the
theoretical properties of the univariate case, lacks application because
of computational difficulties. Unlike the univariate case, where there
is an explicit form of the B-spline function, the multivariate B-spline
is only implicitly defined and needs numerical approximation. Some
computational procedures, based on a recurrence relation formula by
Micchelli (1980), are available today, but their computing time is too
long and increases exponentially with the number of knots. In this talk,
we shall introduce a new simulation-based procedure for computing a
multivariate B-spline function, which is less time-consuming and easy to
implement. Theoretical results will also be presented to support the
validity of this procedure.
 March 1, 2000
 Likelihood Functions, Parametric Statistical Inference and
Mathematica: Daniel Cap, DMC Technology
The relationship between parametric probability
distributions and likelihood functions plays a fundamental role in both
applied and theoretical statistics. The object of this talk is to show
how Mathematica, a modern mathematical modeling environment, can be used
to explore likelihood functions and make parametric statistical
inferences. We'll use simple examples, with binomial, normal, and
Weibull distributed random variables, to illustrate the key ideas. After
this introduction, we'll explore practical applications to reliability
engineering and robust (Taguchi) engineering design problems, where both
the location and scale parameters of the associated likelihood functions
are modeled with regression equations.
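The kind of likelihood exploration the talk describes for Mathematica can be sketched in any language. This binomial example is an editorial illustration, not the speaker's material: it profiles the log-likelihood over a grid of parameter values and recovers the analytic maximum-likelihood estimate k/n. The sample size and success count are invented.

```python
import math

def binomial_log_likelihood(p, n, k):
    """Log-likelihood of success probability p for k successes in n trials;
    the binomial coefficient is constant in p and is dropped."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

# Profile the likelihood over a grid of p values and take the maximizer.
# For the binomial the analytic MLE is k/n, which the grid search recovers.
grid = [i / 1000.0 for i in range(1, 1000)]
mle = max(grid, key=lambda p: binomial_log_likelihood(p, n=20, k=7))
```

Plotting the profiled log-likelihood over the same grid is the visual exploration step that an environment like Mathematica makes convenient.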
 February 2, 2000
 Recent Findings from Two Clinical Surveys: John Holcomb,
Youngstown State University
This presentation will discuss two recent projects
involving Dr. Ralph Rothenberg, MD of Forum Health Services, Youngstown,
OH. The first project involves determining if guidelines mailed to every
physician in the country by The American College of Rheumatology
impacted baseline and follow-up blood, kidney, and liver testing of
patients given prescriptions of a nonsteroidal anti-inflammatory
medication. The second project also involved Dr. Joan Boyd. Here we
investigated the effectiveness of health fair osteoporosis screenings.
Subjects identified to be at risk for osteoporosis by ankle bone density
screening were surveyed six months later to determine if the primary
physician was seen and treatment obtained. The talk will present the
results of these investigations and thoughts on future research in these
areas.
 January 5, 2000
 The Supreme Court's Decision on the Census: Dan O'Leary,
Marconi Medical Systems
On January 25, 1999 the US Supreme Court ruled on
a pair of lawsuits relating to the use of statistical methods for
apportionment of the US House of Representatives. The Court ruled that using
statistical methods for apportionment of the House of Representatives
violates the US Constitution. They also ruled, however, that current law
requires the Census Bureau to use statistical methods for other
purposes. The presentation reviews the Supreme Court's decision and
explains the issues and the Court's reasoning. It looks at the
Constitutional requirements for apportionment and the census including a
number of constitutional amendments in this area. The presentation
touches on the problem of fair apportionment, the legislation relating
to the Census, the undercount problem, the initial plan for Census 2000,
and the revised plan for Census 2000. Time permitting, we look at some
of the arguments for and against sampling.
 1999 Talks
 December 1, 1999 (Cleveland Chapter Presidential Address)
 Acute Care for Elders: Stopping Functional Decline in
Hospitalized Elders: Linda Quinn, QED Industries
Patients age 65 and over account for approximately
37% of acute non-federal hospital admissions and 48% of inpatient days
in the United States. Functional decline is common in older adults
following an acute medical illness and hospitalization. Evidence
substantiates that at least one-third of patients age 70 and over lose
the ability to independently perform one or more activities of daily
living (ADL) after hospitalization for an acute medical illness, and
three months post-discharge, 40% of these patients still do not regain
pre-admission functional status. Patients with ADL decline accompanying
an acute illness and hospitalization are more likely to have a prolonged
hospital stay, die in hospital, be newly institutionalized at discharge,
and be readmitted to the hospital post-discharge. The Acute Care for
Elders (ACE) Unit is a multifaceted intervention that integrates
geriatric assessment into the optimal medical and nursing care of
patients in an interdisciplinary environment. Designed specifically to
help patients maintain or achieve independence in self-care activities,
the ACE Unit embodies four key elements: a specially designed
environment (with, for example, uncluttered hallways, large clocks and
calendars, and carpeting); patient-centered care emphasizing
independence, including specific protocols for prevention of disability
and for rehabilitation; discharge planning with the goal of returning
the patient to his or her home; and intensive review of medical care to
minimize the adverse effects of procedures and medications. This talk
will discuss the series of randomized clinical trials that evaluated the
effectiveness of the ACE Unit intervention.
 November 3, 1999 (Fall Workshop)
 Logistic Regression - A Workshop: Mike Kutner, The Cleveland
Clinic Foundation
Data are said to be binary when each outcome falls
into one of two categories such as alive or dead, success or failure,
true or false. Binary outcome data occur frequently in practice and are
commonly analyzed using the logistic regression model. This workshop
will emphasize the practical aspects of modeling binary outcome data
using logistic regression, including checking the adequacy of the fitted
models. Several examples from the health sciences area are presented to
illustrate the techniques. Parallels between multiple linear regression
modeling and multiple logistic regression modeling will be presented
throughout the workshop. Therefore, attendees of this workshop are
expected to be familiar with multiple linear regression modeling
techniques.
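The model at the center of the workshop can be sketched from first principles. This is an editorial illustration, not workshop material: it fits a one-predictor logistic regression by gradient ascent on the log-likelihood, with invented dose-response style data; in practice one would use a statistical package as the workshop does.

```python
import math

def fit_logistic(xs, ys, steps=20000, lr=0.1):
    """Fit P(y = 1 | x) = 1 / (1 + exp(-(b0 + b1 * x))) by gradient ascent
    on the log-likelihood (a single predictor, for illustration only)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p           # score contribution for the intercept
            g1 += (y - p) * x     # score contribution for the slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Invented data: a binary outcome that becomes more likely as x grows.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
```

The parallel the workshop draws with multiple linear regression is visible here: the linear predictor b0 + b1*x is the same, and only the link between it and the outcome changes.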
 October 6, 1999
 Optimal Small Response Surface Designs: Tena Katsaounis, MS
Three-level factorial designs will be discussed
that are suitable for small second-order designs. The best choice of
potential design points will be discussed under the criterion of
minimizing the generalized variance of the parameters in the model. Two
methods will be illustrated that yield designs with different resolution
for the main effects and two-factor interactions. Both methods involve
the use of Partially Balanced and PB1 arrays. The Extended Partially
Balanced Array will be defined as a generalization of the Partially
Balanced array.
 September 8, 1999
 Handicapping Systems for Sailboat Racing: Paul Mathews
Wherever there are sailboats in the water there
are sailboat races. There are almost as many race scoring systems as
there are different kinds of sailboats. When boats that are racing are
all the same design (one design racing) or are very similar in design
(box rules) then scoring is easy and accurate. However, most racing
fleets contain a mixture of boats. In a single race it's not unusual to
find boats that run from 20 to 50 feet long, that weigh from 2500 to
30,000 pounds, that race with 3 to 15 people, and that cost from $3000
to $300,000. Under these diverse conditions performance handicapping
systems have evolved so that everyone has a chance (or is supposed to
have a chance) to win. Paul will use video clips to demonstrate the
scale of the handicapping problem. He will discuss the committee-driven
handicapping systems that are based on elapsed time (Time-On-Time) and
course length (Time-On-Distance) and the rating systems that are based
on aero- and hydrodynamic computer models (velocity prediction
programs). Then he will present a new rating system that is based on the
results of computer physical modeling but is refined with actual race
data. Paul will also talk about some of the technological developments
that will affect rating systems in the next five years.
 June 2, 1999
 Optimal Experimental Design in Poisson Impaired Reproduction
Studies: Jennifer Huffman, The Lubrizol Corporation
Impaired reproduction experiments are a class of
studies involving Poisson responses that are functions of concentrations
of toxicants, chemotherapy drugs, or other substances. Factorial-like D,
Ds, and interaction optimal designs are demonstrated for models
involving interaction through k factors. Augmentations of these designs
that result in desirable lack of fit properties are discussed. The
augmentation contains "center runs" that are analogous to
standard center runs in factorial designs for linear models. In
addition, fractional factorial designs are detailed along with their
alias structure. Robustness properties are addressed as well.
 May 5, 1999 (Joint Dinner Meeting at the University of Akron)
 Proportional Hazards Models With Informative Censoring: P.V.
Rao, University of Florida
A modification of the proportional hazards model
is proposed for informatively or randomly censored times to two types of
events: a primary event and a follow-up event. The proposed model
treats informative censoring as a type of risk in a competing risks
setup and incorporates suitable parameters to describe the conditional
probability of failure given the subject is not informatively censored.
Inferences about the parameters in the model can be based on standard
partial likelihood theory. The model is used for analyzing some data on
waiting time to bone marrow transplantation (primary event) and time to
relapse or death after transplantation (follow-up event) for leukemia
patients.
 April 7, 1999
 Careers in Statistics: The Real World - A Panel Discussion:
Susan Cowling, The Lubrizol Corporation; David Nelson, The Cleveland
Clinic; Patricia Cirillo, Cypress Research Group
 The panel presentation will be from 4:00pm to
5:00pm. The panel consists of three local statisticians who are employed
in different application areas. They will discuss: What do I do? What
types of analyses do I run? What types of degrees are required or
useful? What skills are important? What about salary and the job market?
 March 3, 1999
 A Unified Regression Approach for Fixed Effects ANOVA Models with
Missing Cells: Mike Kutner, Cleveland Clinic Foundation
Many standard statistical software programs claim
to "handle" fixed effects ANOVA models when some of the cells
do not contain data (missing cell problem). Simple examples using SAS,
SPSS, and BMDP will be given documenting how these programs "handle"
the missing cells. Surprisingly, these computer packages generally
produce strange results. The sources of these difficulties will be
delineated. A unified full-rank regression cell means model approach
will be presented as a definitive solution to this problem. Experimental
design models, such as the balanced incomplete block design, will be
shown to be special cases.
 February 3, 1999
 Using Economic Designs for Multivariate Control Charts:
Thomas Love, Case Western Reserve University
The economic design of control charts means making
rational decisions about several parameters, including control limits,
sample (subgroup) sizes, sampling intervals, and degree of data
smoothing in light of a model for the costs and time associated with
monitoring and controlling a process. Technological advances have
sparked interest in multivariate process control methods, which allow
for the simultaneous study of several quality characteristics. The
Multivariate Exponentially Weighted Moving Average (MEWMA) control chart
is especially attractive to practitioners looking for methods that
effectively detect small shifts in a process mean vector. We discuss the
use of economic designs for multivariate control charts, focusing on the
MEWMA. This presentation is the result of joint work with Kevin
Linderman, at the University of Minnesota.
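The MEWMA statistic described above smooths successive observation vectors and charts a quadratic form of the smoothed vector. The sketch below is a minimal illustration of that statistic only (not the economic-design machinery of the talk); it assumes an in-control mean of zero, uses the asymptotic covariance of the smoothed vector, and the function name and demo data are hypothetical.

```python
import numpy as np

def mewma_statistics(X, lam=0.1):
    """MEWMA T^2 statistics for the rows of X.

    z_i = lam * x_i + (1 - lam) * z_{i-1}, z_0 = 0, and
    T^2_i = z_i' Sigma_z^{-1} z_i, with the asymptotic
    Sigma_z = lam / (2 - lam) * Sigma.
    """
    X = np.asarray(X, dtype=float)
    sigma = np.cov(X, rowvar=False)  # in practice, estimated in a Phase I study
    sigma_z_inv = np.linalg.inv(lam / (2.0 - lam) * sigma)
    z = np.zeros(X.shape[1])
    t2 = np.empty(len(X))
    for i, x in enumerate(X):
        z = lam * x + (1.0 - lam) * z        # exponentially weighted smoothing
        t2[i] = z @ sigma_z_inv @ z          # charted statistic
    return t2

# Demo: a small shift in the mean vector halfway through the series
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
X[50:] += 1.5
t2 = mewma_statistics(X)
```

A signal is declared when T^2 exceeds a control limit chosen for a target in-control average run length; the small smoothing weight is what makes the chart sensitive to small sustained shifts.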
 January 6, 1999
 "Actuarial" vs. "Actual" Analyses:
Accounting Simultaneously for Multiple Competing Risks of Events:
Eugene Blackstone, Cleveland Clinic Foundation
Typical life tables (generically called "actuarial"
tables, be they related to survival of patients or reliability of
machines) address the distribution of times to a single event. In some
settings, however, other events may occur prior to the event of interest
that effectively remove a patient or item from the possibility of
experiencing the event of interest. For example, one may wish to study
the need for additional procedures following a heart bypass operation.
However, death may occur before the need for an additional procedure.
This usually does not affect the estimates of the probability of
reintervention, but it does affect the "actual" number of
reinterventions that will be observed in patients across time. Or a
heart valve manufacturer may know that there is a high probability that
a valve will deteriorate over a period of many years, but if these are
used in elderly people, the number of actual valve replacements for
deterioration may be low because of natural attrition from other causes.
In the 18th century, Daniel Bernoulli presented the mathematics for
multiple competing events, using it to predict the impact on survival of
a population were smallpox to be eradicated. This led to multiple
decrement life tables in demography and the general theory of competing
risks in statistics. This talk will be introductory in nature using
buckets with different sized leaks as a motif, providing motivation for
the technique in several medical settings, examining the appropriateness
of the methodology to different questions being asked, and briefly
introducing both nonparametric and parametric estimation methods.
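The "buckets with leaks" idea corresponds to the nonparametric cumulative incidence (Aalen-Johansen) estimator, which weights each cause-specific event by the probability of still being event-free just before it. A minimal sketch, assuming right-censored data with censoring coded as cause 0; the function name and toy data are illustrative, not the speaker's.

```python
import numpy as np

def cumulative_incidence(times, causes, cause):
    """Aalen-Johansen cumulative incidence for one competing risk.

    times  : event or censoring times
    causes : 0 = censored, 1, 2, ... = event type
    Returns (event_times, CIF) for the requested cause.
    """
    times = np.asarray(times, dtype=float)
    causes = np.asarray(causes)
    order = np.argsort(times)
    times, causes = times[order], causes[order]
    at_risk = len(times)
    surv, cif = 1.0, 0.0
    out_t, out_c = [], []
    for t in np.unique(times):
        here = times == t
        d_any = np.sum(here & (causes != 0))    # events of any type at t
        d_k = np.sum(here & (causes == cause))  # events of this type at t
        cif += surv * d_k / at_risk             # weight by event-free survival
        surv *= 1.0 - d_any / at_risk
        at_risk -= here.sum()
        out_t.append(t)
        out_c.append(cif)
    return np.array(out_t), np.array(out_c)

# Toy data: cause 1 = reintervention, cause 2 = death before reintervention
t1, c1 = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 1], cause=1)
```

Unlike a naive Kaplan-Meier curve that censors the competing event, the cause-specific incidences sum to the overall event probability, which is the "actual" quantity discussed in the talk.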
 1998 Talks
 December 2, 1998 (Cleveland Chapter Presidential Address)
 Five Randomized Clinical Trials of Abciximab During Percutaneous
Coronary Intervention: Shelly Sapp, Cleveland Clinic Foundation
Platelet thrombus formation, resulting from
platelet activation, adhesion and aggregation, predisposes patients to
the development of ischemic complications after percutaneous coronary
interventions (PCIs). Clinical evidence suggested that sustained
platelet inhibition after PCIs may reduce the occurrence of
post-hospital events such as myocardial infarction and coronary
revascularization. Recently, a new class of platelet antagonists
directed against the platelet membrane glycoprotein IIb/IIIa receptor
has undergone extensive clinical testing. One of these agents, Abciximab
(ReoPro, Centocor), blocks this receptor, thus preventing platelet
adhesion and aggregation. Between 1991 and 1998, five randomized,
placebo-controlled clinical trials of Abciximab during percutaneous
coronary intervention involving a total of 9038 patients have been
completed. This talk will describe the motivation behind the design of
these trials as well as some of the major results.
 October 29, 1998 (Dinner meeting)
 Geographical Trends in Cancer Mortality: Using Spatial Smoothers
and Methods for Adjustment: Karen Kafadar, Ph.D., University of
Colorado Denver
Mapping healthrelated data can lead to important
insights and generate hypotheses about causes and potential effects.
Such data are commonly adjusted for the variables age and gender, so
that inferences are not influenced by these obvious factors. In a
similar fashion, data for certain diseases ought to be adjusted for
known risk factors. One method of adjustment is suggested here, and
insights from the adjusted data are enhanced by smoothing the data in
the two dimensions (longitude and latitude). The process of adjustment
and smoothing is illustrated on three sets of cancer mortality data:
lung cancer (using urbanicity as the adjustor), prostate cancer in
nonwhites (using percent African-American as the adjustor), and melanoma
among whites (using latitude as the adjustor). In each case, the maps of
the adjusted rates indicate patterns that are worthy of investigation
and may contribute to the generation of hypotheses and further
understanding of the etiology of the diseases.
 October 7, 1998
 Testing For Equivalence of Diagnostic Tests: Nancy
Obuchowski, Ph.D., Cleveland Clinic Foundation
Studies comparing the diagnostic accuracy of
clinical tests are common, particularly in the field of radiology.
Accuracy is defined in terms of the test's sensitivity, specificity, or
indices associated with the Receiver Operating Characteristic (ROC)
curve. In comparing two tests, we are often interested in determining if
a new test has accuracy similar to an existing test, i.e., "Are the two
tests equivalent?" We propose two criteria for defining diagnostic
equivalence and methods for testing equivalence. The criteria are
referred to as "Population Equivalence" and "Individual Equivalence";
they are modifications of criteria used for assessing equivalence
between generic and standard drugs. The proposed methods are illustrated
for a study comparing the diagnostic accuracy of digitized mammographic
images to original film. The digitized images are easy to store and
retrieve, but the digitization process may result in a loss of accuracy.
We test whether the accuracy of the digitized images is equivalent to
film and whether the management of individual patients will be impacted
if digitized images replace films.
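The talk's Population and Individual Equivalence criteria are not reproduced here, but the general logic of equivalence testing can be sketched with the standard two one-sided tests (TOST) procedure applied to a difference in sensitivities, assuming a normal approximation and an illustrative equivalence margin of five percentage points; the function, margin, and numbers are hypothetical.

```python
import math
from scipy.stats import norm

def tost_two_proportions(p1, n1, p2, n2, margin=0.05):
    """Two one-sided tests (TOST) for equivalence of two proportions,
    e.g. the sensitivities of two diagnostic tests, within +/- margin.
    Normal approximation; returns the larger of the two one-sided
    p-values, which is compared to alpha.
    """
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    p_lower = 1 - norm.cdf((diff + margin) / se)  # H0: diff <= -margin
    p_upper = norm.cdf((diff - margin) / se)      # H0: diff >= +margin
    return max(p_lower, p_upper)

# Identical observed sensitivities in large samples: equivalence supported
p_equiv = tost_two_proportions(0.90, 500, 0.90, 500)
# A 10-point gap in small samples: equivalence cannot be claimed
p_not = tost_two_proportions(0.95, 50, 0.85, 50)
```

Note the asymmetry with ordinary testing: failing to reject a difference is not evidence of equivalence, which is exactly why equivalence-specific criteria are needed.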
 September 9, 1998
 Is the Rank Transformation Method a Bad Idea?: Guang-Hwa "Andy"
Chang, Ph.D., Youngstown State University
The rank transformation (RT) refers to the
replacement of data by their ranks, with a subsequent analysis using the
usual normal theory procedure, but calculated on the ranks rather than
on the original data. This idea was originally suggested by Lemmer and
Stoker (1967) and advocated by Conover and Iman. The availability of
statistical packages for parametric tests makes the rank transformation
method appealing. SAS has also added this option to its package.
However, Blair, Sawilowsky and Higgins (1987) showed that, for 4x3
factorial designs, a severe inflation in Type I error of the RT
statistics for testing interaction is observed as either the cell size
becomes large or the row and column main effects are large. It was a
huge disappointment. Is the rank transformation method a bad idea? Some
research results after the simulation study by Blair et al. will be
presented in this talk.
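The rank transformation itself is easy to demonstrate: pool the data, replace values by their ranks, and run the usual normal-theory ANOVA on the ranks. For a one-way layout (where RT behaves well, unlike the factorial-interaction case discussed above) it closely tracks the Kruskal-Wallis test; the data below are simulated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Three skewed samples; the third has a shifted distribution
groups = [rng.lognormal(mean=m, size=30) for m in (0.0, 0.0, 0.8)]

# Rank transformation: pool the data, rank, then split the ranks back out
pooled_ranks = stats.rankdata(np.concatenate(groups))
splits = np.cumsum([len(g) for g in groups])[:-1]
rank_groups = np.split(pooled_ranks, splits)

f_rt, p_rt = stats.f_oneway(*rank_groups)  # normal-theory ANOVA on ranks
h_kw, p_kw = stats.kruskal(*groups)        # classical rank test
```

The inflation Blair et al. found arises only when this recipe is applied to interaction tests in factorial designs, which is the question the talk takes up.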
 June 3, 1998
 Genetic Mapping of Complex Human Diseases: Jane M. Olson,
Ph.D., CWRU
In recent years, genetic study of complex human
diseases has increased dramatically. Most human diseases are now
believed to have some genetic component, and considerable effort is
being made to find and study the genes involved. As a result,
statistical methods used to find disease genes are receiving a great
deal of attention, and statistical mapping methodology is evolving
rapidly. In this talk, I will provide an overview of genetic mapping
methods, focusing primarily on genetic linkage analysis. I will first
explain concepts in genetic inheritance that statisticians exploit in
genetic mapping, and introduce relevant terminology. I will then explore
the two main types of genetic linkage analysis: model-based and
model-free. In model-based linkage analysis, one estimates a genetic
model of inheritance for the disease using pedigree likelihood methods,
then fixes the model in subsequent estimation of the linkage parameter
that describes the relationship between the inheritance of disease and
marker loci. In model-free linkage analysis, a smaller set of parameters
that are functions of the genetic model and the linkage parameter are
estimated, so that the genetic model of the disease need not be known
prior to analysis. Instead, sharing of marker alleles between related
individuals is exploited. I will discuss the relative usefulness of
these approaches in practical genetic mapping problems, and provide some
discussion of future directions.
 May 6, 1998
 Good Apple? Sampling Biases: Jiayang Sun, Case Western
Reserve University.
In practice, data are more often from "hell"
than from "heaven". They may come with missing values,
censored or truncated observations, outliers and/or from a biased
sample. If missing values are missing at random (and are not too many),
deletion or an imputation procedure may be used to clean up the data
before analyses. If whether a data point is missing depends on the true
value, the data have come from a biased sample. In the presence of
sampling biases, standard procedures often fail badly. In this talk we
present some fun examples with conclusions that were drawn ignoring
biases, and illustrate why some cases with censored or truncated
observations may be regarded as arising from biased sampling. We then offer
some simple solutions and go into the author's current research in this
area.
 March 24, 1998 (Joint Dinner meeting with Case Western Reserve
University)
 Follow-up Designs that Make the Most of an Initial Screening
Experiment: Robert Mee, Department of Statistics at University of
Tennessee in Knoxville.
Industrial experimentation is often sequential,
with initial screening experiments followed by other stages of
experimentation. Common alternatives for subsequent experimentation
include the path of steepest ascent, augmenting an initial fractional
factorial via foldover, and adding axial and center points to complete a
central composite design. This talk will focus on two other
alternatives: augmentation via semifolding and noncentral composite
designs. In each case we find opportunity to use estimates from the
initial experiment to direct the location of the follow-up design.
 March 4, 1998
 Navigating the Net: Susan E. Branchick, Ricerca
The Internet contains a wealth of information, but
locating it can be a frustrating experience. This talk will present tips
and techniques to make your experience on the Internet a smoother ride.
Topics will include how to subscribe to a discussion group, work within
frames on the WWW, search for information and handle documents. It will
conclude with an overview of some statistics-related sources.

 February 4, 1998
 Statistical Literacy and Statistical Competence in the 21st
Century: David S. Moore, 1998 President, American Statistical
Association
Educated people face a new environment at
century's end: work is becoming intellectualized, formal higher
education more common, technology almost universal, and information (as
well as misinformation and disinformation) a flood. In this setting, what is
statistical literacy, what every educated person should know? What is
statistical competence, roughly the content of a first course for those
who must deal with data in their work? If the details are automated,
are the concepts and strategies that guide us enough to maintain "statistics"
as a distinct discipline?

 January 7, 1998
 Statistical Opportunities and Job Search Fundamentals: Greg
Jarecki, Trilogy
This presentation will focus on three areas:
career trends in the statistical field, elements of successful career
planning, and fundamentals of job hunting. Included with career trends
are career opportunities, skill requirements, and salary ranges for
various statistics professionals. Included with job hunting fundamentals
are resume preparation, and the differing perspectives of job candidates
and employers.
 1997 Talks
 December 3, 1997 (Cleveland Chapter Presidential Address)
 Quantitative Investing: Bill Katcher
Is the stock market just another process? Yogi
Berra says, "You can see a lot just by looking." I believe
that a lot can be learned with basic statistical methods like regression
analysis, data plotting, and time series analysis. Spreadsheets are the
tool that make it possible for the individual investor to monitor the
process in near real time. There are two parts to the investment
decision. First, what's the overall market direction and second, what
specific investments are the best at this time. I hope to be able to
give you some new ideas in both areas.

 November 5, 1997
 Designing for Better Data, Timely Analysis, and Effective
Reporting: Mark Martin, Chiron Diagnostics Corporation
As statisticians, we too frequently encounter the
frustration of spending more time than we like cleaning and preparing
data. Ideally, we would like to spend the majority of our time designing
for, analyzing, and reporting from good data. Doing so requires a good
balance of foresight, collaborative planning, information sharing,
tools, continuous learning, and communication skills. Experiences from a
fast-paced project will be shared, including successes and pitfalls.
While specific practices vary in different environments, certain basic
principles apply across workplaces. An extensive bibliography of good
references (for both statisticians and nonstatisticians) will be
provided. As a specific tool, the use of "data flow diagrams"
will be demonstrated.

 October 1, 1997
 Rank Analysis of Means: Dan Sommers, General Electric
The typical nonparametric procedure for analyzing
a one-factor design is the Kruskal-Wallis test. An alternative
procedure, using a rank approach to the one-factor analysis of means,
will be presented.

 September 10, 1997
 Testing for Treatment Effects which are Increasing Functions of
the Rate of Disease Progression: Tom Greene, Cleveland Clinic
Foundation
Clinical trials in chronic disease populations are
often designed to test whether a treatment intervention slows disease
progression as assessed by the slope of repeated measurements of a
marker of disease severity. For continuous markers, mixed effects models
are generally used to test for additive treatment effects on the mean
slope, implicitly assuming that the treatment effect for each patient is
independent of that patient's progression rate. However, a close
examination of the hypothesized mechanisms of the treatment often
suggests that the clinical hypothesis can be better formulated by a
treatment effect whose magnitude increases as a function of the
progression rate. We introduce a mixed effects model in which the
treatment effect is proportional to the rate of progression for patients
with negative slope, and equal to zero for patients whose slope is
nonnegative. An estimated least squares approach is proposed for
estimation and hypothesis testing in this setting. We show that use of
standard additive analyses can lead to markedly lower power than the
proposed procedure when the treatment effect increases with the effect
size. We also examine time-to-event analysis based on the time to a
specified clinically meaningful reduction in the outcome marker as a
practical alternative to mixed models when increasing treatment effects
are considered likely. These approaches are illustrated with examples
from clinical trials in renal disease.

 June 4, 1997
 Projection Methods for Generating Mixed-Level Fractional
Factorial and Supersaturated Designs: Alonzo Church, Jr.
The definitions of resolution and projectivity
have been used to develop an algorithm to find mixed-level fractional
factorial designs. Some of the designs differ from standard designs and
have superior projection properties. In addition their least squares
properties are often superior. In his presentation, Mr. Church will
describe the algorithm and give details on some useful alternative
designs.

 May 7, 1997
 Forensic Economics: John F. Burke, Ph.D.
In his talk, entitled "Forensic Economics",
Dr. Burke will concentrate on the ways statistics and probability can be
used as evidence in jury trials. He will use actual cases involving
racial discrimination and the asbestos industry as examples.

 April 2, 1997
 Alternative Control Charts: Tom Ryan, Case Western Reserve
University
Traditional methods for determining control chart
limits have appealed to the Central Limit Theorem when an X-bar chart is
used and to the normal approximations to the binomial and Poisson
distributions when attribute charts are used. Unfortunately, these
approaches can produce very poor results. Recent research on improved
techniques for measurement and attribute data will be presented,
including methods developed by the speaker. The control chart panel
discussion that is appearing in the April issue of the Journal of
Quality Technology will also be briefly
discussed.
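The weakness of normal-approximation limits is easiest to see for attribute data: with a small fraction nonconforming, 3-sigma p-chart limits can fall below zero while exact binomial quantiles stay in range. A small illustration with assumed values p0 = 0.02 and n = 50 (this is a generic comparison, not the speaker's improved methods):

```python
import numpy as np
from scipy.stats import binom

# Assumed in-control fraction nonconforming and subgroup size
p0, n = 0.02, 50

# Normal-approximation 3-sigma p-chart limits
sigma = np.sqrt(p0 * (1 - p0) / n)
lcl_norm = p0 - 3 * sigma   # can be negative, i.e. meaningless
ucl_norm = p0 + 3 * sigma

# Exact binomial limits at comparable tail probabilities (~0.00135 per side)
lcl_exact = binom.ppf(0.00135, n, p0) / n
ucl_exact = binom.ppf(1 - 0.00135, n, p0) / n
```

The exact limits respect the skewness of the binomial, whereas the symmetric 3-sigma limits both truncate at zero on the low side and understate the upper tail.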

 March 5, 1997
 Cleveland Technical Societies Council: Fred Lisy, CTSC
The Cleveland Technical Societies Council (CTSC)
brings together the members of the technical and engineering societies
in Northeastern Ohio to: Provide a forum for discussion and a vehicle
for action on matters of mutual concern and interest; Serve as a focal
point for contact between industry and society at large with the
technical and scientific community; Promote intersociety communications,
professional interchange of ideas and coordination of society
activities; Promote interest in and encourage careers in the scientific
and technical professions through career guidance and other educational
programs. The CTSC has approximately 50 member societies and
associations such as American Association of Cost Engineers (NE Ohio
Section), American Institute of Aeronautics and Astronautics, American
Institute of Chemical Engineers, the American Society for Information
Science, the American Society for Quality Control (Cleveland Section),
the Cleveland Computer Society, the Cleveland
Engineering Society, Society of Fire Protection Engineers and the
Cleveland Chapter of the American Statistical Association. The CTSC's
programs include: technology infrastructure, such as educational
programs, an education program database, career days, science fair
mentoring, and local student programs; technology transfer, such as
consortium and R&D partnership issues, a technology policy forum, and
regional engineering technical symposiums; and technology development,
such as forums for technology awareness, public awareness of technology,
and industry-government R&D partnerships.

 February 5, 1997
 Update on the Data Coordinating Center and the African American
Study of Kidney Disease and Hypertension: Jennifer Gassman,
Cleveland Clinic Foundation
The African American Study of Kidney Disease and
Hypertension (AASK) is a randomized clinical trial sponsored by the
National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK)
of the NIH. There are 21 Clinical Centers across the United States,
including one in Cleveland at Case Western Reserve University. The Data
Coordinating Center is in the Department of Biostatistics at the
Cleveland Clinic. Statisticians working on the study include Mike
Kutner (Principal Investigator), Jennifer Gassman, Tom Greene, and Shin
Ru Wang. In this 3x2 factorial design study, participants receive 1 of 3
blinded antihypertensive agents (ACE inhibitors, beta blockers, or
calcium channel blockers) and either usual blood pressure control (MAP
of 102 to 107 mm Hg) or low blood pressure control (MAP <92 mm Hg),
where MAP = 1/3 systolic + 2/3 diastolic blood pressure. The primary
outcome variable is rate of change in glomerular filtration rate (GFR),
a measure of kidney function. Data are entered directly from the
clinical centers into a central Oracle database, using the Internet. The
study has been going on for about two years, and there are about four
more years to go. In this non-technical
presentation, the speaker will give an update on what the Data
Coordinating Center is doing and how the AASK Study is progressing.
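The blood pressure goals are stated in terms of the MAP formula quoted above, which is simple to compute; the function name and example reading are illustrative.

```python
def mean_arterial_pressure(systolic, diastolic):
    """MAP = 1/3 systolic + 2/3 diastolic, as quoted in the AASK summary."""
    return systolic / 3 + 2 * diastolic / 3

# A reading of 140/90 falls inside the usual-control goal range (MAP 102-107)
map_140_90 = mean_arterial_pressure(140, 90)  # about 106.7
```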

 January 8, 1997
 Response Surface Methods for Costly Noisy Data: Art Holmes
The presentation will include a quick review of
the concepts of response surface methods. The review will be followed by
the rationale of a specialization to high efficiency central composite
experiments for costly noisy data. For 2 through 8 independent
variables, design tables will present the efficient numbers of center
points that can be distributed among the blocks. The numbers of center
points vary from 2 to the number of hypercube blocks plus 4. The
tables list the star point radii for orthogonal blocking.
 1996 Talks
 December 4, 1996 (Cleveland Chapter Presidential Address)
 Presenting Results: Using Tabular Displays Effectively: John
Schollenberger, Ricerca
Graphical displays are important, but we also
need to give careful consideration to our use of tables. The design and
layout of the tabular displays in a report can also help or hinder a
client's understanding of the results. My recent experiences suggest
that, more often than not, tables are used only to provide a listing,
either of raw data or of summary statistics and results, with little
thought as to their digestibility. That is, we don't consider how that
table, or a different table, might be better used to assist the reader
in understanding the results or believing the conclusions that are
drawn. I will step through a suggested redesign of several tables.

 November 6, 1996
 Inference About the Change-Points in a Sequence of Random
Vectors: Arjun K. Gupta, Bowling Green State University
In this talk, the change-point problem is
reviewed. Then, testing and estimation of multiple covariance change
points for a sequence of m-dimensional (m > 1) Gaussian random vectors
by using the Schwarz information criterion (SIC) is studied. The unbiased
SIC is also obtained. The asymptotic null distribution of the test
statistic is also derived. The result is applied to the weekly prices of
two stocks (m=2), Exxon and General Dynamics, from 1990 to 1991, and
changes are successfully detected.
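The SIC approach compares the criterion with no change point against its minimum over candidate change points. Below is a univariate sketch for a single variance change in a zero-mean Gaussian series; the talk treats the multivariate covariance case, and the function name, penalty bookkeeping, and simulated data here are illustrative assumptions.

```python
import numpy as np

def sic_variance_change(x):
    """Locate a single variance change point in a zero-mean Gaussian series
    by minimizing the Schwarz information criterion (SIC).
    Returns (best_k, sic_no_change, best_sic_with_change)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    const = n * np.log(2 * np.pi) + n        # common log-likelihood terms
    # No change: one variance parameter
    sic0 = const + n * np.log(np.mean(x**2)) + np.log(n)
    best_k, best_sic = None, np.inf
    for k in range(2, n - 2):
        s1 = np.mean(x[:k] ** 2)             # MLE variance before k
        s2 = np.mean(x[k:] ** 2)             # MLE variance after k
        sic_k = const + k * np.log(s1) + (n - k) * np.log(s2) + 2 * np.log(n)
        if sic_k < best_sic:
            best_k, best_sic = k, sic_k
    return best_k, sic0, best_sic

# Demo: standard deviation changes from 1 to 3 after observation 100
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(0, 3, 100)])
k_hat, sic0, sic1 = sic_variance_change(x)
```

A change is declared when the best change-point SIC beats the no-change SIC; in the multivariate version the scalar variances become covariance matrix determinants.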

 October 2, 1996
 The Meaning of Analysis of Means: Edward G. Schilling,
Rochester Institute of Technology
The Analysis of Means is becoming increasingly
well known. It is a method for the graphical analysis of averages or
proportions resulting from a designed experiment. The procedure has been
applied to a variety of experiments, including crossed and nested fixed
effects models, balanced incomplete blocks, split-plot designs, etc. It is
especially useful as a communication tool in an industrial environment
since it is based on Shewhart control chart concepts. The background and
use of this approach will be discussed in terms of industrial examples.
 September 4, 1996
 Constructing a Deterministic Model of an Epidemic Process When
All Parameters Must be Estimated: John Neill, City of Cleveland
Department of Public Health
It is often impossible to obtain the parameters
required to construct a classical deterministic model of an epidemic
process since the number of people at risk and contact rate are usually
unknown. New diseases, such as AIDS, may have no known incubation
period. A technique is presented for using observed incidence to
estimate all the parameters necessary to make deterministic models of
diseases like AIDS or hepatitis A. The method also produces estimates
of incubation distributions for AIDS and hepatitis A comparable to those
in the medical literature.
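For reference, the classical deterministic model the talk starts from can be written as the SIR system and integrated numerically. The parameter values below are assumed for illustration; the talk's point is precisely that such parameters usually must be estimated from observed incidence.

```python
import numpy as np

def sir(beta, gamma, s0, i0, days, dt=0.1):
    """Euler integration of the classical deterministic SIR model.
    beta = transmission rate, gamma = removal rate (assumed, not estimated).
    Returns the (S, I, R) trajectory as an array."""
    s, i, r = float(s0), float(i0), 0.0
    n = s + i
    traj = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # new infections this step
        removed = gamma * i * dt          # recoveries/removals this step
        s, i, r = s - new_inf, i + new_inf - removed, r + removed
        traj.append((s, i, r))
    return np.array(traj)

# Assumed parameters: R0 = beta/gamma = 3 in a population of 1000
traj = sir(beta=0.3, gamma=0.1, s0=999, i0=1, days=300)
```

Fitting such a model to incidence data, as the talk describes, amounts to choosing beta, gamma, the at-risk population, and the incubation distribution so the predicted incidence curve matches the observed one.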

 June 5, 1996 (Joint Meeting with the Cleveland SAS Users Group)
 Statistical Quality Control Using the SAS System: Dennis
King, STATKing Consulting
This presentation will be divided into two parts.
The first part discusses the seven tools of quality with emphasis on the
SAS software and programming useful for implementing these tools. The
second part of the presentation focuses on the control chart and the
statistical methodology surrounding the use of this tool.

 May 1, 1996
 Pros and Cons of Cutpoints: Jane Goldsmith, Health Sciences
Biostatistics Center at the University of Louisville
Authors have noted the disadvantages of
dichotomizing or taking cutpoints in the data. Some textbooks and
educators still advise students to dichotomize in order to use
convenient Chi-square statistical analysis. Statistical packages make
this transformation easy. This talk summarizes the costs of
dichotomization, considering efficiencies of Chi-square versus t-tests,
Mann-Whitney U tests, regression and correlation analysis, and Spearman
Correlation Analysis. Practical examples will be given for cases when
dichotomization is desirable and undesirable from a power standpoint.
Discussion of logistic regression and implications for the use of CART
are included.
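The power cost of dichotomizing can be seen directly by simulation: compare the two-sample t-test on the raw data with a Chi-square test after a median split. This is a generic sketch with assumed effect size and sample sizes, not the speaker's examples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, shift, reps = 40, 0.6, 400     # assumed per-group size and mean shift
t_rej = chi_rej = 0
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(shift, 1.0, n)
    t_rej += stats.ttest_ind(a, b).pvalue < 0.05
    # Dichotomize at the pooled median, then test the resulting 2x2 table
    cut = np.median(np.concatenate([a, b]))
    table = [[np.sum(a > cut), np.sum(a <= cut)],
             [np.sum(b > cut), np.sum(b <= cut)]]
    _, p_chi, _, _ = stats.chi2_contingency(table)
    chi_rej += p_chi < 0.05
power_t, power_chi = t_rej / reps, chi_rej / reps
```

Under normality the median split discards roughly a third of the information (asymptotic relative efficiency about 2/pi), which shows up here as a visibly lower rejection rate.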

 April 3, 1996
 Statistician Meets the Customer: Mark Martin, Ciba Corning
On an automated immunochemistry system which runs
more than 25 different blood tests, customer satisfaction can vary with
the performance of the assays that a particular hospital or laboratory
runs on the instrument. During assay development, one challenge is to
predict the customer complaint rate if the assay were to be released at
a given point in time, and to provide indicators of when an assay meets
release-for-sale goals that will please the customer. As a part of this
effort, interviews were conducted at several customer sites.
 March 6, 1996
 On The Importance of Assessing Measurement Reliability When
Fitting Regression Models: Leon J. Gleser, University of Pittsburgh
In many regression contexts, some predictor
variables are measured with error, or replaced by proxy measurements for
reasons ranging from convenience to cost. In such cases, adjustments to
classical least squares slope estimates may be needed to correct for
bias. These adjustments require knowledge about the parameters of the
model that is often not supplied by the data. It is argued here that the
required knowledge concerns the reliability matrix of the vector of
measured predictor variables. Ways to design auxiliary experiments to
obtain reliability information, and to combine such information with
information available from published studies, are discussed.
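The classical consequence of measurement error in a single predictor is attenuation: the least squares slope on an error-prone proxy is biased toward zero by the reliability of the proxy, and knowing that reliability lets one correct it. A simulated sketch with assumed values (the multi-predictor reliability-matrix case of the talk generalizes this):

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta, rel = 5000, 2.0, 0.6    # rel = assumed reliability of the proxy

x = rng.normal(size=n)                                      # true predictor
w = x + rng.normal(scale=np.sqrt((1 - rel) / rel), size=n)  # proxy: var(x)/var(w) = rel
y = beta * x + rng.normal(size=n)

b_naive = np.polyfit(w, y, 1)[0]   # attenuated toward zero, about beta * rel
b_corrected = b_naive / rel        # reliability-corrected slope
```

In practice the reliability is not known, which is the talk's argument for auxiliary experiments (e.g. replicate measurements) designed to estimate it.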
 February 7, 1996
 A Paradigm Shift in Statistical Consulting: Mukul Mehta,
Quality Sciences, Inc.
The eighties saw TQM emerge as a major movement in
American industry. The TQM wave is now followed by reengineering and
corporate right-sizing. Staff groups are out of fashion or under intense
pressure to produce more with less. This is an excellent opportunity for
an industrial statistician to prosper or perish. Time to market is
becoming a dominant force in the marketplace and will forever continue
to be. For professional statisticians to survive and prosper, we need to
rethink how TQM, reengineering, and time to market affect our profession
and what we need to do to respond to challenges. We need a paradigm
shift. Information technology is becoming the key component of
reengineering and has the potential of forever changing
the traditional approach to statistical consulting. QSI has developed a
softwarebased approach to enable large groups of scientists and
engineers to take advantage of the power of statistical thinking without
the pain. Through slides and software, Mukul
will illustrate issues, challenges and opportunities that lie ahead.
 January 10, 1996
 Making Statistics Understandable for Students and Industry:
Paul Mathews, Lakeland Community College
Paul's students come from industry, most with 5 to
20 years experience as machine operators, technicians, or supervisors.
They have a growing interest in quality engineering but, unfortunately,
very limited math skills. They quickly come to the opinion that people
write textbooks to make money (based on their experience in the
bookstore) and impress their friends. Paul will describe some of the
notational conventions he has found well received by the students, and
show some of their favorite graphical presentations.
 1995 Talks
 December 5, 1995 (Cleveland Chapter Presidential Address)
 Portrait of a Cleveland Clinic Foundation Statistician: Lisa
Rybicki, The Cleveland Clinic Foundation
George Williams arrived at the Cleveland Clinic
Foundation in August 1980 and the Department of Biostatistics was born.
George began to expand the department in 1981. Now, 15 years later, the
department employs 80 people, almost half of whom are statisticians.
The mission of the department is to excel in the conduct of medical and
methodological research, and to promote the proper use of statistics and
epidemiological methods. The department fulfills this mission through a
wide variety of collaborative and educational activities. Lisa will
describe the organizational structure and work environment, illustrate
the mechanisms used to fulfill their mission, and examine the role of
the collaborative biostatistician in this dynamic environment.
 November 1, 1995
 Implementation of Total Quality Management in an R&D
Environment: Dennis J. Keller, Real-World Quality Systems
This talk gives an overview of the unique
challenges and solutions associated with implementing Total Quality
Management (TQM) in a technical or R&D environment. It begins with
defining TQM, its goals and objectives, and how these differ from their
well understood counterparts in manufacturing and general business
arenas. Next, a generalized global framework of hierarchical "systems"
is presented, which when taken as a whole, represents the "System"
of TQM in R&D. Each system in the hierarchy is then explored in only
moderate detail to demonstrate its critical function in the global
system. The systems in the hierarchy are: 1) Project/Problem Solving, 2) the
Researcher's Personal TQM System, 3) the Branch/Group/Division TQM
System, and 4) the overall TQM System. Lastly, a short overview will be
given on strategies for actual implementation and managing for change.
 October 11, 1995
 Product Optimization in the Chemical Additives Industry:
Carlos L. Cerda de Groote, The Lubrizol Corporation
Very often in industrial practice there is an
interest in understanding how to control and optimize a response. Two
optimization examples are given. The first example deals with
formulating practice. One wants to know how a blend's composition
affects viscosity. The goal is to obtain cost effective blends that
satisfy viscometric requirements. Predictive viscometric equations are
fitted to data and then used together with constraints and a cost
function to obtain minimum cost formulations. The second example deals
with a predictive model for the molecular weight distributions (MWD) of
blends of polymers. Engine performance depends on the MWD of the
polymer(s) used with the additive chemistry. Control of this parameter
also leads to opportunities for optimization within the formulation
framework. In both examples the optimization was done with Microsoft
Excel.
 September 6, 1995
 News and Numbers: How Reporters Can Tell Facts From Lies, How You
Can Help: Victor Cohn, Research Fellow of ASA
Journalists and the public are often confused by
constantly conflicting "they say's" about health, science, the
environment and many other subjects. How can you help reporters report
facts, or the best facts you can muster, correctly? Tell them about
uncertainty. All you can give them is the best estimate at the moment.
Tell them about the use of probability, the power of large numbers, the
pitfalls of variability and the dangers of bias. Some basic rules for
dealing with the media and informing the citizenry will be discussed.
Remember that candor builds credibility.
 June 7, 1995
 Statistics in the Insurance Industry: James Jiang,
Progressive Insurance
Insurance rates are mainly determined by the past
loss experience. Statistical methods have been employed to analyze the
historical data to set up adequate rates. Even though there are some
differences in reasoning between a casualty actuary and a statistician,
they often arrive at similar conclusions. Following an introduction to
automobile insurance rate making, James will demonstrate how statistical
science can be applied to reach some important results in credibility
theory and risk classification. The alternative reasoning by the
casualty actuaries will also be presented.
 May 3, 1995
 Robust Methods for the Detection of Linkage and the
Identification of Genes for Complex Disorders: Deborah V. Dawson,
Case Western Reserve University
Genetic linkage refers to the tendency of two
different genes to be inherited together if they are located
sufficiently close together on the same chromosomal strand of DNA.
Modern molecular biology has put at our disposal a vast array of genetic
markers, located at intervals throughout the human genome. The
establishment of genetic linkage between a known marker system and a
putative gene for a disorder is generally regarded as the ultimate
statistical evidence for a genetic component in the disease etiology.
Statistical methods for the detection of linkage include maximum-likelihood-based
approaches and the so-called robust or model-free
approaches. The latter are usually based on pairs of relatives, and in
contrast to the likelihood-based methods, do not require specification
of the mode of inheritance for the disease trait.
 April 12, 1995 (Joint dinner meeting with the ASQC)
 Quality is Personal: Harry V. Roberts, University of Chicago
Personal quality is the application of quality
ideas to your job (and even to your life). Personal quality does not
directly tell you how to do your job better. Rather, it offers a
philosophy and methodology for learning how to do it better. Personal
quality helps people to perceive and remove waste from their own jobs.
As a result, they can achieve both continuous improvement and
breakthroughs. Some potential gains to the individual are increased job
satisfaction, education in quality management concepts and tools, and
assessment of the usefulness of quality management concepts based on
direct experience rather than the claims of others. Personal quality
differs from traditional time management although it may help to make
better use of time management tools. Personal quality is not just
another self-improvement program like speed reading, public speaking, or
memory development, although these skills can
lead to improvement of personal quality.
 March 1, 1995
 Statistical Graphics: Innovate! Show the Data: Ralph O'Brien,
The Cleveland Clinic Foundation
Today's computing tools allow us to create
statistical graphics that are customized to the particular hypotheses,
research designs, measures, and conclusions at hand. This demands higher
levels of "statistical graphicacy" from both authors and
readers. We should be open to innovation and tolerant of some
complexity. As Tufte proclaimed, "Above all else, show the data".
In clinical trials, for example, we must strive to show how patients
respond to treatments, so that we can see both general tendencies and
individual variation. These points will be discussed by studying
examples.
 February 1, 1995
 Statistics and the Law: Lynn D. Sivinski, Case Western
Reserve University
In spite of the apparent disparity between the two
fields, there are several areas where statisticians can assist lawyers
in litigation. Lynn will discuss a few of these subject areas and how
the statistical tools are used in making arguments. She will also
discuss how legal standards of proof and scientific ones differ.
 January 11, 1995
 Quantitative Literacy: A Data Driven Curriculum: Jeff Witmer,
Oberlin College
One of the many spinoffs of ASA's Quantitative
Literacy program is the Data Driven Curriculum Project. The goal of this
project, which is supported by an NSF grant, is to develop materials for
mathematics courses taken in grades 7-12. These materials are intended
to be used within existing courses in algebra, geometry, etc. The
Cleveland Chapter has been very active in supporting the QL program for
several years and Jerry Moreno has planned a 1995 summer workshop based
on the Data Driven Curriculum materials. During this talk we will look
at the project and some of the activities within it.
 1994 Talks
 December 7, 1994 (Cleveland Chapter Presidential Address)
 The Ethical Statistician: Linda Quinn, Case Western Reserve
University
A working definition for unethical professional
behavior as a statistician is any action intended to mislead, misinform,
or mask information without making clear the relevant limitations of the
outcome and its presentation. From incorrect analyses and violated
assumptions to fraudulent data and misrepresented results, ethics and
statistics interact.
 November 2, 1994
 Interim Analysis for Clinical Trials: A Generalized Spending
Function Approach: Hussein R. Al-Khalidi, The Procter & Gamble
Company
 In most clinical trials, patient recruitment occurs
over a period of years and data or information from the trial
accumulates steadily throughout its duration. The trial should involve
the smallest number of patients required to reach a firm decision. Group
sequential methodology in which groups of patients are analyzed
periodically is an approach that is economical, ethical and allows for
steady accumulation of data. Boundaries for maintaining a fixed
significance level will be discussed. Since interim analyses will be
conducted at times other than at those planned, a proposed generalized
spending function will be used to generate discrete boundaries. The
optimality property of the proposed function will be discussed.
 October 5, 1994
 Beyond the Shewhart Paradigm: J. Stuart Hunter, Princeton
University
 In today's 'bits and pieces' assembly industries
modern instrumentation allows the precise measurement of almost every
item produced. The consequence is time series data not unlike that
obtained in the continuous process industries. The basic concepts
underlying the popular Shewhart charts are thus often inappropriate when
applied in an assembly industry environment. This lecture proposes an
alternative, yet simple model that can extend the usefulness of the
Shewhart methodology and, if desired, lead to immediate active control
of the industrial process.
 September 7, 1994
 Robust Data Analysis Made Simple: Tom G. Filloon, The Procter
& Gamble Company
 In many practical situations an observed data set
may contain extreme values (heavy tailed/contamination) that may cause
one to be leery of normality based conclusions. Hence a robust analysis
may be in order. However, a more efficient analysis might be obtained by
something other than a log or rank transformation approach. Tom will
briefly outline various robust methods and then discuss M estimation in
some detail. Implementing a pseudo value approach, he will show how M
estimation can be carried out via simple least square (such as in SAS)
and the utility of this method will be demonstrated using several real
examples.
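As a rough illustration of the general idea (not Tom's pseudo-value method), Huber M-estimation of a linear model can be computed by iteratively reweighted least squares, so each iteration is an ordinary weighted regression. The data, tuning constant, and function names below are illustrative assumptions:

```python
import numpy as np

def huber_irls(X, y, c=1.345, max_iter=50, tol=1e-8):
    """Huber M-estimate of a linear model via iteratively reweighted
    least squares: observations with large residuals get downweighted."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # start at ordinary LS
    for _ in range(max_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust (MAD) scale
        w = np.minimum(1.0, c * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Contaminated data: gross outliers pull ordinary least squares,
# while the M-estimate stays near the true intercept 2 and slope 3.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2 + 3 * x + rng.normal(0, 0.5, 200)
y[:10] += 25                                   # heavy-tailed contamination
X = np.column_stack([np.ones_like(x), x])
beta_m = huber_irls(X, y)
```

Because each step is just weighted least squares, the same computation can indeed be pushed through any regression routine, which is the appeal of the approach described in the talk.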
 June 1, 1994
 Some Interesting Regression Modeling Examples: Mike Kutner,
The Cleveland Clinic Foundation
 As a consulting statistician one sometimes comes
across data sets that are not routine textbook examples. Three such
examples will be presented and solutions proposed. Each of these
examples is from a "real life" data set. In addition, all
three examples are relatively small data sets with only two to four
potential predictor variables that need to be modeled.
 May 4, 1994
 Reporting and Reviewing Research Results: Thomas Lang and
Michelle Secic, The Cleveland Clinic Foundation
 No one can deny the existence of a long-standing
problem in the medical literature: errors and omissions in reporting
statistical results. The speakers are developing a comprehensive set of
guidelines for presenting statistical information in biomedical
publications entitled Reporting Statistical Information in Biomedical
Publications: A Guide for Authors, Editors and Reviewers. The purpose of
the guide is to aid in recognizing, understanding, and properly reporting
statistical analyses. The guide is being developed from a thorough
review of the literature. It will not serve as a statistics text but
will function as a reference designed to help authors, journal editors,
medical students and peer reviewers understand and correctly present
statistical information.
 April 13, 1994
 Practical Issues and Difficulties in Coordinating a Stroke
Treatment Trial - A Statistician's Perspective: Robert Woolson,
University of Iowa
 In Prof. Woolson's talk a particular trial is
discussed with emphasis on issues such as distributed data entry,
reliability assessment, training and monitoring.
 March 2, 1994
 Getting Started with PROC MIXED: Kristen Latour, The SAS
Institute
 This presentation introduces PROC MIXED using
examples that include split-plot designs, repeated measures data and
one-way ANOVA with unequal variances. The GLM and MIXED procedures are
compared in terms of syntax, output and interpretation. An overview of
SAS statistical modeling procedures is included.
 February 2, 1994
 Multivariate Data Analysis with New Visualization Methods: Vladimir
Grishin, Case Western Reserve University
 Experienced statisticians know that the
effectiveness of statistical methods depends on the choice of
solution structure or data model, feature selection, interpretation of
results, and so on. Today they solve these problems largely by
applying knowledge and experience, sometimes aided by visual displays
which are often difficult to interpret. Data analysis by pictorial
representation is based on years of research on and modeling of the
human visual system's ability to effectively detect and describe
complicated nonlinear dependencies directly in high-dimensional data space.
 January 5, 1994
 Applications of Statistics and Operations Research Methods: Mike
Sweeney, American Greetings Corporation
 Applications of statistics and operations research
methods at American Greetings Corporation will be discussed. A
background on the company will be presented and a description of how
Management Sciences services the needs of the various internal
departments will be provided. Concentration is on solving business
problems through a blend of pure statistics/O.R. and general analytical
methods.
 1993 Talks
 December 1, 1993 (Cleveland Chapter Presidential Address)
 Comparing the Robustness of Alternative Methods of Exploring
Response Surfaces: Gary Skerl, Glidden Paints
 Traditional methods of response surface analysis
frequently involve fitting a quadratic model to a central composite or
Box-Behnken design. This approach purports to give a reasonable
approximation even when the "true" response surface is more complex.
New approaches are emerging which claim to perform better. Two of these
approaches include neural networks and locally weighted regression. A
protocol is proposed for comparing the robustness of these alternatives.
An example is given using a simulated test case.
 November 3, 1993
 Optimum Experimental Designs for Comparative Bioavailability
Study: Hegang Chen, Case Western Reserve University
 The purpose of a pharmacokinetic drug-drug interaction
study is to investigate whether coadministration of two or several drugs
will alter the absorption, distribution, metabolism, and elimination
profile of each drug. To design the study and analyze the data in a way
that can provide answers to our questions, we introduce a model for an
m-component drug-drug interaction bioavailability study. Under this
model we identify optimal designs for the criterion of strongly
universal optimality. Using some concepts and results in the areas of
resistant block designs and finite geometries, several families of
strongly universal optimal designs are constructed.
 October 6, 1993
 Certification? Pros and Cons: Linda Quinn, Case Western
Reserve University
 This meeting was a roundtable discussion on
certification. Considerable discussion has taken place to assess whether
or not it would be in the statistical community's best interest to
establish a procedure to identify someone as a "Certified
Professional Statistician" (CPS). Some of the pros and cons are
given below: PROS: 1) Competent professionals are compromised by
unqualified people. 2) Other professions have program accreditation that
defines and guarantees professional standards. 3) Undergraduate education
in statistics is highly variable. Certification will provide objectives
for undergraduate course work and may attract more students to the
profession. CONS: 1) The field of statistics is so diverse and changing
so rapidly, no certification examination can possibly work. 2)
Certification may stifle diversity in undergraduate education in
statistics and ultimately negatively influence research. 3)
Certification will create a bureaucracy and may well be costly.
 September 8, 1993
 Incidence of Appropriate Statistical Testing in Orthopaedic Spine
Literature: Are Statistics Misleading?: Lori Vrbos, Loyola
University
 An analysis of 300 randomly drawn orthopaedic spine
articles was performed to assess the quality of biostatistical testing
and research design reported in the literature. Statistical deficiencies
were identified in 54% of the total articles. Conclusions based on
misleading significance levels, failure to document statistical tests
used, and inappropriate statistics for specific design structure are
problems documented by this research. Copies of the papers will be
available.
 June 2, 1993
 From Observations to Inferences: Oscar Kempthorne
 The video will be about an hour in length. This is
one of the more modern videos and should be very good.
 May 5, 1993
 ARIMA Modeling Using SAS/ETS: Donna Woodward, The SAS
Institute
 With large datasets it is essential to use computer
packages to perform useful analyses. SAS is an information delivery
system with a time series module called ETS. ARIMA identification and
modeling can be done with procedures from SAS/ETS.
 April 7, 1993
 The Role of the Statistician in Total Quality Management:
Dale Flowers, Case Western Reserve University
 Under Professor Flowers' guidance, Merit Brass
implemented a computerbased strategy to help pinpoint customer usage of
manufactured and outsourced items. The company adopted a four point
modernization program including sales forecasting, inventory management,
manufacturing and purchaseresource planning, and cellular
manufacturing. Professor Flowers will use his consulting experiences and
TQM course to help us see where the role of the statistician is in TQM.
 March 3, 1993
 1994 Cleveland Biometrics Meeting: Jennifer Gassman, The
Cleveland Clinic Foundation
 We will hear about progress on the plans for holding
the 1994 Biometrics ENAR/ASA/IMS meeting. The meeting will be held at
Stouffer's Tower City Center during April 9-13, 1994. Jennifer will
describe the Biometric Society and talk about a typical annual meeting,
including the short courses, program, and the banquet. The Program
Committee for this meeting will be chaired by Dr. Linda Young of the U.
of Nebraska. Jennifer will briefly summarize the duties of the Local
Arrangements Committee and describe exciting ways YOU can become
involved. This presentation will be in lieu of the preluncheon meeting
that was announced in last month's mailing.
 February 3, 1993
 An Online Demonstration of the Cleveland FreeNet: Jeff
Wright, BFGoodrich
 January 6, 1993
 ARIMA Model Identification: Tom Turiel
 The method of using the autocorrelation function and the
partial autocorrelation function to identify the values of p, d and q in an
ARIMA (p, d, q) model has been described as "necessarily inexact"
(p. 173, Time Series Analysis: Forecasting and Control, by Box and
Jenkins), "requir[ing] skill obtained by experience" (p. 262,
Forecasting and Time Series Analysis, 2nd Ed., by Montgomery et al.), and
"difficult to identify…for mixed (both AR and MA) models" (p. 62,
Forecasting with Dynamic Regression Models, by Pankratz). Some tools
for identifying the values of p, d and q that were developed after the
publication of the book by Box and Jenkins are presented and applied to
several data sets.
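The classical identification step compares the sample autocorrelation function against the patterns the candidate models imply: an AR(p) ACF tails off geometrically, while an MA(q) ACF cuts off after lag q. A minimal sketch of that diagnostic, computing the sample ACF directly rather than with the newer tools the talk presents (the simulated series and parameter values are illustrative):

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelations r_1..r_nlags of a univariate series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    c0 = x @ x / len(x)                       # lag-0 autocovariance
    return np.array([x[:-k] @ x[k:] / (len(x) * c0)
                     for k in range(1, nlags + 1)])

# Simulate an AR(1) series with phi = 0.8; its theoretical ACF is 0.8**k,
# so the sample ACF should tail off rather than cut off sharply.
rng = np.random.default_rng(1)
n, phi = 5000, 0.8
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]
r = sample_acf(x, 5)
```

For an MA(1) series the same computation would instead show one large spike at lag 1 and near-zero values after it, which is exactly the inexact visual judgment the quoted authors warn about.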
 1992 Talks
 December 2, 1992 (Cleveland Chapter Presidential Address)
 An Application of Multivariate Design: Kathleen Blalock, The
Lubrizol Corporation
 November 4, 1992
 Probability and Statistics in K12 Curriculum: Jerry Moreno,
John Carroll University
 In 1989 the National Council of Teachers of
Mathematics published an outstanding document "Curriculum and
Evaluation Standards for School Mathematics" that is
revolutionizing mathematics content and teaching from kindergarten
through twelfth grade. One of the key components of the reformation is
the implementation of probability and statistics into all levels of
mathematics courses. The talk will concentrate on what is expected in
this implementation and what part the QL program is playing. Audience
participation will demonstrate the QL materials. This talk should be
especially beneficial to you if you are a parent of a child in school,
not only so you may be aware of what is taking place in the current
mathematics "revolution", but also so that you may be
encouraged to share your expertise with your school district.
 October 7, 1992
 Statistical Issues in the Assessment of the Effects of Preschool
Lead Exposure on Cognitive Development: Tom Greene, The Cleveland
Clinic Foundation
 Many studies have investigated the relationship
between lowlevel lead exposure and cognitive function. These studies
have produced inconsistent results yielding controversial implications.
Much of the controversy centers on basic methodological difficulties
which are inherent in this area of research. In this talk he will
overview some of the key methodological problems from a statistical
perspective investigating the effects of prenatal and preschool lead
exposure in a cohort of Cleveland children. Topics to be discussed
include: 1) Multicollinearity between measures of lead exposure and
social and demographic variables; 2) Measurement error in lead exposure
indices; 3) Measurement error in covariates used to control for possible
confounding; 4) Problems in controlling for confounders which might
themselves be affected by lead exposure.
 September 2, 1992
 Rank Covariance Methods for Analysis of Survival Data: Saeid
Amini, Case Western Reserve University
 A procedure for comparing survival times between
several groups of patients through rank analysis of covariance was
introduced by Woolson and Lachenbruch (1983). It is a modification of
Quade's rank analysis of covariance procedure (1967) and can be used for
the analysis of rightcensored data. In this paper, two additional
modifications of Quade's original test statistic are proposed and
compared to the original modification introduced by Woolson and
Lachenbruch. These statistics are compared to one another and to the
score test from Cox's proportional hazards model by way of a limited
Monte Carlo study. One of the statistics, Qr2, is recommended for
general use for the rank analysis of covariance of rightcensored
survivorship data.
 June 17, 1992
 Design and Analysis of Antiepileptic Drug Trials: Gordon
Pledger, R.W. Johnson Pharmaceutical Research Institute
 Clinical trials of investigational antiepileptic
drugs usually have as their primary objective the demonstration of
effectiveness. Unfortunately, both the standard designs and analyses are
problematic. That is, for the standard designs the analysis usually
focuses on seizure counts obtained over a treatment period of several
months. The correspondence between effectiveness and a reduction in
seizure rate is not evident since seizures may be of different types,
durations, and severities. Even if the seizures in a given trial are
largely of one type, questions remain about the most appropriate function
of seizure rate for analysis. His talk will be based on his six-plus years
at the NIH.
 May 6, 1992
 Bootstrapping: A Method for Inference When All Else Fails:
Paul Thompson, Case Western Reserve University
 Bootstrapping is a method for estimating standard
errors when other (analytic) methods have not been successful. The
method proceeds by intensively re-examining the sample of data which is
actually on hand, re-estimating the parameter θ from B resamples (taken
with replacement). The method will be briefly explained, and some basic
results will be sketched. Three examples will be used to illustrate some
of the advantages: the difference between means, standard errors for
coordinates in factor analysis, and testing some relatively simple
hypotheses for relatively complex psychological data. Finally methods
for actually performing these tedious calculations will be described.
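A minimal sketch of the resampling loop for the first of those examples, the standard error of a difference between two means; the data, number of resamples B, and function name are illustrative assumptions:

```python
import numpy as np

def bootstrap_se(stat, *samples, B=2000, seed=0):
    """Bootstrap standard error: recompute `stat` on B resamples drawn
    with replacement, independently from each input sample."""
    rng = np.random.default_rng(seed)
    reps = np.empty(B)
    for i in range(B):
        resamples = [s[rng.integers(0, len(s), len(s))] for s in samples]
        reps[i] = stat(*resamples)
    return reps.std(ddof=1)   # spread of the resampled statistic

# Difference between two means, where the analytic answer
# sqrt(var(a)/n_a + var(b)/n_b) is available as a check.
rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 100)
b = rng.normal(0.5, 1.0, 80)
se = bootstrap_se(lambda u, v: u.mean() - v.mean(), a, b)
```

The same loop works unchanged for statistics with no convenient analytic standard error, such as the factor-loading coordinates mentioned above; only `stat` changes.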
 1991 Talks
 December 4, 1991 (Cleveland Chapter Presidential Address)
 Establishment Of A Series Of Courses In Experimental Design And
Analysis: Jeff Wright, BFG's Avon Lake Technical Center
 In 1991, statisticians at BFGoodrich's Avon Lake
Technical Center developed and offered for the first time a series of
three courses to instruct BFG managers, scientists, and technicians in
using statistical tools to aid in effective industrial decision making.
This presentation discusses issues surrounding the construction,
marketing, and teaching of the courses and offers some suggestions to
those considering establishing a similar program in their own
companies.
 November 6, 1991
 Statistical Analysis For Rankings: Douglas Critchlow, Ohio
State University
 Suppose that each member of a group of judges
independently ranks a set of items from first place to last place. A
variety of probability models for such ranked data have been studied in
both the statistical and psychological literature. In this talk, several
such models are discussed, and the problem of introducing covariates in
these models is considered. Estimation and hypothesis testing schemes
are described for these models, both with and without covariates, and
are implemented in an example involving rankings of salad dressings.
 October 2, 1991
 Statistical Consulting At Price Waterhouse: James Thompson,
Price Waterhouse
 Mr. Thompson is currently managing a nationwide
survey for the U.S. Coast Guard and U.S. Fish and Wildlife Service which
will determine the amount of gasoline and diesel fuel used by
recreational boaters. The results of the survey will be used to allocate
Federal and State fuel excise taxes to boating programs. The
presentation will include brief overviews of Price Waterhouse and the
Office of Government Services. The particulars of the Recreational
Boaters Survey will be explained and lastly, preliminary results will be
discussed.
 September 4, 1991
 The American Statistical Association And Its Cleveland Chapter:
John D. McKenzie, Jr., ASA Board of Directors
 The American Statistical Association (ASA) was
founded in 1839. Eighty-six years later a group of statisticians
organized the Cleveland Chapter of the ASA. It was one of the
Association's first chapters. During the first part of this talk, the
speaker will present a history of the ASA and its Cleveland Chapter.
This history will emphasize the interface between the two from the
1920's to the 1990's. Then, the speaker will give some scenarios for the
future of the Association and its Cleveland Chapter. These accounts of
projected events will be based upon his examination of past ASA success
stories and lost opportunities.
 May 1, 1991
 Using The SAS System For SPC Applications: John C. Boling,
SAS Institute Inc.
 The SAS System has always been recognized as the
industry leader in statistical software. Version 6, the current release,
contains new functionality in statistical process control methods and
tools for generating and analyzing experimental designs. These new
methods will be discussed. A tool for generating and analyzing
experimental designs will be demonstrated. Future statistical plans will
also be addressed.
 March 6, 1991
 Autoregressive Estimation Of The Prediction Mean Squared Error
And An R² Measure: An Application: Raj Bhansali, CWRU from Liverpool,
England
 For predicting the future values of a stationary
process {x_t} (t = 0, ±1, ±2, ...) on the basis of its past, two key parameters
are the h-step mean squared error of prediction, V(h) (h ≥ 1), and
Z(h) = {R(0) - V(h)} / R(0), the corresponding measure, in an R² sense, of
predictability of the process from its past alone, where R(0) denotes
the variance of x_t. The estimation of V(h) and Z(h) from a realization
of T consecutive observations of {x_t} is considered, without requiring
that the process follow a finite parameter model. Three different
autoregressive estimators are considered and their large sample
properties discussed. An illustration of the results obtained with real
data is given by applying the estimation procedure to the well-known
Beveridge Wheat Price Index, the Wolf Sunspot Series, as well as to a
Refrigerator Sales Series.
 February 6, 1991
 Significance Testing, Inference, And Estimation In Randomized
Clinical Trials: Michael P. Meredith, Procter & Gamble
 Recent literature on randomized clinical trials
(RCTs) has seen strong recommendations for use of confidence intervals
and a coincident suppression of significance testing (pvalues). This
recommendation is typically seen in large RCTs with some welldefined
and meaningful clinical parameter that reflects what we expect to
observe in clinical practice. This strategy is found to be ill-advised
when considering a sequence of RCTs, as is necessary for the
development of a new drug. In this case, focus upon estimation can be
destructive as it largely ignores the true population of inference and
diverts attention from design considerations.
 January 9, 1991
 Statistical Evidence In An Age Discrimination Case: Thomas H.
Short, Carnegie Mellon University
 Data from an actual legal case are explored using
various statistical techniques. Emphasis is on the choice of appropriate
techniques for presenting analysis conclusions to a jury.
 1990 Talks
 November 7, 1990
 Properties Of The Zone Control Chart: Robert B. Davis,
University of Akron
 Consider the problem of monitoring a process mean
under the usual assumption of normality. The Shewhart chart is the most
commonly used control chart in this situation. A new chart, the zone
control chart, has recently been proposed as an easily implementable
alternative to the Shewhart chart. This new chart is actually a special
case of the run sum test developed by Reynolds (1971). The zone control
chart will be characterized as a Markov chain in order to investigate
its run length properties and compare them to Shewhart charts with
various runs rules combinations. It will be shown that the zone control
chart is superior to the Shewhart chart in terms of average run length.
A fast initial response (FIR) feature will also be proposed for the zone
control chart.
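The Markov-chain device treats the chart's memory (here, the accumulated zone score) as the state; the expected first-passage time to the signal state then has a closed form. A sketch of that generic computation, checked on the memoryless Shewhart chart, whose single transient state gives the familiar in-control ARL of about 370; this is only the general technique, not Davis's actual zone-chart transition matrix:

```python
import math
import numpy as np

def arl(Q):
    """Average run length of a chart whose memory is a Markov chain:
    expected steps to absorption (signal) starting from state 0 is the
    first entry of (I - Q)^{-1} 1, where Q holds transition probabilities
    among the transient (no-signal) states."""
    Q = np.atleast_2d(np.asarray(Q, dtype=float))
    n = Q.shape[0]
    return float(np.linalg.solve(np.eye(n) - Q, np.ones(n))[0])

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Sanity check: the plain Shewhart chart has one transient state, with
# Q = [[P(point falls inside the 3-sigma limits)]]; the in-control
# average run length is then 1 / P(signal) ~ 370.4.
p_in = phi(3.0) - phi(-3.0)
shewhart_arl = arl([[p_in]])
```

For the zone control chart one would instead build Q over the possible cumulative-score states and fill in the zone probabilities under normality, which is how the run-length comparisons in the talk are obtained.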
 September 5, 1990
 Prediction Of ShortTerm Morbidity And Survival After Coronary
Artery Bypass Surgery: Gerald Beck, Cleveland Clinic Foundation
 Each year about 300,000 Americans undergo coronary
artery bypass graft (CABG) surgery for treatment of coronary artery
disease. Based upon over 5000 CABG surgeries done at the Cleveland
Clinic over a two year period, models have been developed for
identifying risk factors for shortterm mortality and for morbidity in
the intensive care unit after surgery. Logistic and clinical models
based upon preoperative and/or intraoperative factors have been
developed and will be presented. Statistical issues relating to the
development of these models will be discussed as well.
 1989 Talks
 September 6, 1989
 Incorporating Information On Covariates Into Sample Size
Calculations For Survival Studies : Jennifer J. Gassman, Cleveland
Clinic Foundation
 Clinical trials are medical studies in which
patients are randomized to treatments and then followed over time. Many
clinical trials are survival studies, and patient survival is the
primary outcome variable. These trials are designed to detect
differences in survival rates of various treatment groups. This
presentation is a summary of methods biostatisticians have used to
estimate sample size in survival studies, and the focus is on recent
developments which use parametric and nonparametric techniques to more
accurately determine clinical trial sample sizes, utilizing information
on covariates known to affect survival.
 1986 Talks
 September 10, 1986
 Analyzing Linear Models With the SAS System: Kirk Easley,
Cleveland Clinic Foundation
 The SAS System provides tremendous flexibility and
power for analyzing linear models. The GLM (General Linear Model)
procedure, coupled with IML (Interactive Matrix Language), provides both
low- and high-level programming approaches to statistical modeling.
Potential pitfalls often encountered with missing cells, least-squares
means, covariance analysis and repeated measures will be discussed.

 June 11, 1986
 Direct Quadratic Spectrum Estimation from Unequally Spaced Data:
Donald W. Marquardt, President of American Statistical Association
 One objective of this talk is to emphasize the
importance of unequally spaced time series and to provide contacts with
the literature. The principal objective is to describe a versatile,
generalpurpose method of direct quadratic spectrum estimation, and to
explore various properties of the method. Numerical examples from
several applications will be presented to demonstrate the practical
results that can be obtained. The examples include synthetic data,
chronobiology data (oral temperature, blood pressure), and environmental
data (stratospheric ozone).

 May 7, 1986
 Career Planning for the Statistics Professional: Linda
Burtch, Smith Hanley Associates, Inc.
 Today's program has been presented throughout the
country, in particular, to several other ASA chapters. Its information
is suitable for students who are presently studying to become
statisticians as well as statisticians at all career levels. Statistics
professionals should plan their careers from the moment they select a
mathematics or statistics undergraduate degree through retirement.
Today's talk is designed to help you develop your
background and areas of expertise so as to optimize your career
potential based on your long-term personal objectives. The talk will
touch on specific educational levels and skill requirements for a
variety of career path scenarios. This will include a discussion of the
necessity of an M.S. or Ph.D. as well as the popular question of the
value of an MBA for the Statistician for corporate career growth. A
larger percentage of statisticians are entering less traditional areas
of application, including marketing analysis and finance. The talk will
highlight problem areas and career "traps" to avoid, and will
surface high-growth, high-potential opportunities for the future of
statistical professionals. Finally, salary ranges for the various
application areas and experience level will be presented.

 March 5, 1986
 Statistical Aspects to Nonlinear Modeling: Dale Borowiak,
University of Akron
 In this talk, recent developments in nonlinear
modeling are discussed. Measures of nonlinearity and the use of
transformations are to be explored through specific examples. In
addition, procedures for the assessment and selection of a proposed
model based on asymptotic fit and stability are explored.
 February 5, 1986
 Bayesian Classification Statistics in Discriminant Analysis:
Arjun K. Gupta, Bowling Green State University
 In this talk, three models (the fixed effects
model, the random effects model, and the two-factor mixed hierarchical
model) will be discussed in the context of discriminant analysis. Under
each of these models, Bayes classification statistics and some of their
properties are discussed. Situations where each of the models is
appropriate will be considered, as well as estimators for each
classification statistic.
 January 8, 1986
 Implementation of Statistical Process Control: Mukul M.
Mehta, B.F. Goodrich Avon Lake Research Center
 At the B.F. Goodrich Chemical Company, statistical
methods have been used in R & D and Manufacturing in some form since
1955. In 1981, Senior Management felt a strong need to develop
industry-wide leadership in the quality of our products and technologies
and initiated a top-down implementation of Crosby's Quality Management
process. While both Crosby's "Zero Defects" approach and
Deming's "Statistical Process Control" approach are aimed at
improved quality, productivity, profitability and competitive position
and require that a cultural change be accomplished, their methods are
quite different. Crosby addresses and emphasizes management needs
(through zero defects, cost of quality, recognition and rewards for
employees as motivation tools, etc.), whereas Deming stresses and
addresses employee needs (through need and opportunities for training,
pride in workmanship, elimination of communication barriers, etc.) and
recommends statistical process control as a simple but effective tool
for objective decision making. This talk will discuss the issues
concerning the integration of these sometimes conflicting viewpoints in
the implementation of statistical process control methods.
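The control-chart idea at the heart of Deming's approach is easy to sketch. The following is an illustrative example (not from the talk): a Shewhart X-bar chart whose 3-sigma limits are built from the standard tabled A2 constants, applied to synthetic subgroup data.

```python
import numpy as np

def xbar_limits(subgroup_means, subgroup_ranges, n):
    # 3-sigma X-bar chart limits: center +/- A2 * R-bar (Shewhart).
    # A2 values below are the standard tabled constants for n = 2..5.
    A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}[n]
    center = float(np.mean(subgroup_means))
    rbar = float(np.mean(subgroup_ranges))
    return center - A2 * rbar, center, center + A2 * rbar

rng = np.random.default_rng(4)
data = rng.normal(10.0, 1.0, size=(25, 4))   # 25 subgroups of 4 measurements
lcl, center, ucl = xbar_limits(data.mean(axis=1), np.ptp(data, axis=1), 4)
flagged = (data.mean(axis=1) < lcl) | (data.mean(axis=1) > ucl)
```

Points falling outside `[lcl, ucl]` signal assignable causes worth investigating; for an in-control process like this simulated one, almost none should be flagged.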

 1985 Talks
 December 4, 1985 (Cleveland Chapter Presidential Address)
 Concomitants of Length of Hospital Stay: Paul K. Jones, Case
Western Reserve University
 Data from the National Hospital Discharge Survey are
analyzed using regression analysis. A probability sample of 200,000
patients hospitalized in U.S. short-stay hospitals in 1983 is studied.
Length of stay is predicted using characteristics of patients. Observed
and predicted length of stay are compared using size of hospital,
hospital ownership, and geographic region of the U.S.

 November 6, 1985
 On the D-Optimal Experimental Design: Chinkyooh Lee, J.N.
Berrettoni and Associates
 Classical experimental designs are, in many settings, the best
designs, and they have many desirable properties. However, these classical designs
cannot be used when the experimenter is confronted with special problems
such as a restriction on the number of experiments, restricted
experimental runs, augmentation of design, or special shapes of an
experimental region. The experimenter may choose an arbitrary
experimental design, but a better design is obtained by using a
criterion that yields good designs. In this talk, the
D-optimality criterion is used because designs based on it
are good in many respects, such as low variances for the
parameter estimates, low correlations among parameters, and low maximum
prediction variance over all candidate points x. After experiments based
on the D-optimal design are run, the data can be easily analyzed by the
computer program RSP using the backward elimination method.
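As an illustration of the criterion itself (not of the RSP program mentioned above), here is a minimal sketch that exhaustively picks the subset of candidate points maximizing det(X'X) for a straight-line model; for this model, D-optimality pushes the runs to the extremes of the experimental region.

```python
import numpy as np
from itertools import combinations

def information_det(X):
    # Log-determinant of the information matrix X'X (larger is better).
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

def d_optimal_subset(candidates, n_runs):
    # Exhaustively choose the n_runs rows of `candidates` maximizing det(X'X).
    best, best_val = None, -np.inf
    for idx in combinations(range(len(candidates)), n_runs):
        val = information_det(candidates[list(idx)])
        if val > best_val:
            best, best_val = list(idx), val
    return best

# Model: intercept + linear term in x, candidate grid on [-1, 1]
grid = np.linspace(-1.0, 1.0, 5)
X_cand = np.column_stack([np.ones_like(grid), grid])
best = d_optimal_subset(X_cand, 2)
print([float(v) for v in sorted(grid[best])])  # -> [-1.0, 1.0]
```

Exhaustive search is only feasible for toy problems; practical D-optimal software uses exchange algorithms instead, but the criterion being optimized is the same.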

 October 2, 1985
 Categorical Data Analysis in the Pharmaceutical Industry: A
Specific Application: George Dirnberger, Merrell-Dow Pharmaceuticals,
Inc.
 Physician ratings of patient health or improvement
are frequently used as primary measures of drug efficacy. Hence, methods
of categorical data analysis form a key part of the analytical
techniques used in the pharmaceutical industry. This talk will consider
various techniques (logits, Cochran-Mantel-Haenszel, fixed and random
effects linear models using weighted least squares) in the analysis of a
specific pharmaceutical data set.
 September 4, 1985
 Beware the Client Bearing Statistical Gifts: Jerome Senturia,
Lubrizol Corporation
 The client who brings a preconceived statistical
model to the consulting session can often subtly and adversely influence
the statistician's ability to solve the client's problem. The
presentation describes a collection of interesting and amusing anecdotal
experiences, putting them in the perspective of the properties of
statistical models that statisticians are taught to strive toward.
Additionally, some thoughts on Chapter projects for the ensuing year are
presented.
 June 5, 1985
 On The Error-in-Variables Regression Problem: Dennis J.
Keller, Avon Lake Technical Center of the B.F. Goodrich Company
 The error-in-variables regression problem is still
one of the hottest areas in statistical research today. As yet, no "final
word" has been given on how to handle the general case of using
regression techniques when there exist errors in the independent
variables. Mr. Keller's talk will pave the road to the
error-in-variables problem by looking at simple linear regression,
inverse simple linear regression (with application to the classic
calibration problem), and finally error-in-variables regression,
thus exposing some of the major roadblocks in solving the problem. One
approach, which views the problem more from a correlation than a
regression standpoint, will be presented for discussion. Pros and cons
will be shared as well as entertained.
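The talk's own correlation-based approach is not specified here, but one classical answer to the error-in-variables problem, valid when the error variances in x and y are equal, is orthogonal (total least squares) regression. The sketch below is purely illustrative: it shows how ordinary least squares is attenuated toward zero by measurement error in the predictor, while the orthogonal fit recovers the true slope.

```python
import numpy as np

def orthogonal_regression(x, y):
    # Total least squares: the fitted line's direction is the first right
    # singular vector of the centered data; its normal is the last one.
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    nx, ny = vt[-1]                      # normal vector of the best-fit line
    slope = -nx / ny
    return slope, y.mean() - slope * x.mean()

rng = np.random.default_rng(3)
t = rng.uniform(0, 10, 500)              # true (unobserved) predictor values
x = t + rng.normal(0, 1, 500)            # predictor measured with error
y = 2 * t + rng.normal(0, 1, 500)        # response measured with error
slope_ols = np.polyfit(x, y, 1)[0]       # attenuated below the true slope of 2
slope_tls, _ = orthogonal_regression(x, y)
```

When the two error variances differ, the perpendicular-distance fit is no longer consistent; Deming regression generalizes it by weighting the two directions by the variance ratio.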

 May 1, 1985
 The Road from Boston in '39 to Las Vegas in '85 to Washington in
'89 (The Professional Association in the Development of the
Profession and its Response to Society): Fred C. Leone, American
Statistical Association (executive director)
 In 1839, five individuals came together to start a
professional association. The first general meeting of 109 members
included the intelligentsia of the Boston area. Very early this included
a U.S. President, foreign ministers and researchers in a number of
fields. In August 1985, 3,000 ASA members are coming together in Las Vegas
with the theme of our Annual Meeting "Statistics in Public Policy."
In 1989, perhaps as many as 5,000 from the U.S. and many foreign
countries will celebrate the ASA sesquicentennial in Washington, D.C.
The road has taken the Association from a small group of individuals who
considered it a professional association through its role as a learned
society and in the past 10 to 15 years back to a professional
association. In this talk we will explore ASA programs for individual
members, for the profession, and for society. We will talk about programs in
the social sciences, the physical and health sciences, engineering and
many areas. We examine the participation in programs with industry, with
academe and with government. In the past year, Dr. Leone has visited a
number of corporations and foundations. He would like to share with you
the responses from more than 75 interviews with corporate executives and
with corporate funding agencies as he explained some of the programs of
value to the corporation and to society. ASA is engaged
in many fine exciting programs. We will explore a few of these and
consider the role of the chapter and its members in these programs.

 April 3, 1985
 Biostatistics in Epidemiology: Jennifer J. Gassman, The
Cleveland Clinic Foundation
 Epidemiology is the study of the distribution and
determinants of disease, and this presentation will focus on
biostatistics in epidemiology. Several types of epidemiologic studies
will be identified and discussed, and we will look at some special
topics in medical research statistics.

 March 6, 1985
 A Manufacturing Forecasting System for General Electric Lighting
Products: Roger Rust, General Electric
 Mr. Rust's talk will concern a large mainframe
computer system which provides a repository of marketing and sales
intelligence used to forecast requirements for G.E.'s manufacturing
facilities. The system provides top management with basic control
over information used to develop operating plans within
G.E. with respect to manufacturing products and financial budgets.

 February 6, 1985
 Estimating prepayment rates on mortgage-backed securities:
Frank J. Navratil, John Carroll University
 Trading in mortgage-backed securities is the most
rapidly growing activity on Wall Street. Originally, techniques used to
value these instruments were similar to those used in bond pricing.
Unlike bonds, however, the original mortgage borrower can call the debt
(prepay it) at any time without substantial call premiums (prepayment
penalties). During periods of volatile interest rates, the financial
incentives to prepay mortgages vary dramatically, and so, not
surprisingly, do the cash flows from such securities. Thus, a critical
first step in the proper valuation of these instruments is the ability
to estimate future prepayment rates. This talk will discuss
mortgage-backed securities and show how the speaker has used econometric
techniques to estimate them.

 1984 Talks
 December 5, 1984 (Cleveland Chapter Presidential Address)
 Introduction and use of statistics in new areas of your business
or (How to get non-scientists to use statistics): Steven A.
Richardson, Sohio
 November 7, 1984
 A Model for Estimating the Effect of Son or Daughter Preference
on Population Size: Magued I. Osman, Case Western Reserve University
 In this talk, Dr. Osman will describe a probability model in
which son or daughter preferences may be an important factor
in fertility. Some simulation results will also be presented.

 October 3, 1984
 Computer Software Reliability Models: M. Kazim Khan, Kent
State University
 Dr. Khan's talk will present a few statistical
estimation aspects of the new field of software reliability. Some
statistical properties of the most commonly applied models (such as
Jelinski-Moranda (1972), Moranda (1975), Musa-Okumoto (1984), etc.) will
be discussed. Recent simulation results will be presented to show the
bias of the Maximum Likelihood estimates. Finally, some steps to reduce
the bias will be discussed.

 September 5, 1984
 Use of Discriminant Analysis in Laboratory Diagnosis: Paul K.
Jones, Case Western Reserve University
 Discriminant analysis has wide applicability in
medicine in the diagnosis of abnormality using laboratory measurements.
Statistical techniques include linear and quadratic discriminant
analysis as well as the kernel and nearestneighbor methods.
Applications discussed in Dr. Jones' talk will include laboratory
diagnosis of the carrier state in classic hemophilia.

 June 6, 1984
 Camaraderie and Business
 1. Progress report from the Microcomputer Statistical Applications
Committee headed by Jim Unger (266-3420).
 2. Further discussion on the proposed Statistics Clinic. Jerry Senturia
has updated information: a. Ohio State's experience. b. Cost estimates
(place, computer time, advertising). c. Mailing list of possible
participants.
 3. Chapter brochure.
 4. New business.

 May 2, 1984
 On Speaking Each Other's Language: Julio N. Berrettoni, Case
Western Reserve University
 He will describe interactions between people in
industry and the consulting statistician in the interpretation of
significance tests, confidence intervals, and solutions to simultaneous
equations. Often these interactions have humorous aspects which Dr.
Berrettoni will relate.
 March 7, 1984
 Statistics and Computer Performance: Steve Sidik, CWRU/Lewis
Research Center
 Randall Spoeri, Associate Executive Director, ASA,
has asked local chapters to discuss two matters. The first involves
creation of a chapter brochure to be distributed to various germane
organizations to encourage new membership and participation. The second
concerns creation of a chapter newsletter to be compiled and distributed
on a quarterly basis by the national office. Details and subsequent
discussion on these two issues will be conducted in the business portion
of this month's meeting. Your input on these matters is very important,
so please make an extra effort to attend on March 7.
 January 4, 1984
 A Discussion with C.R. Rao: C. R. Rao via videotape
 This tape is from the National office of the ASA and
is another in their series of 'lectures and discussions' by wellknown
statisticians of our time. The length of this tape is 57 minutes and
consists of Rao with a panel of discussants: Anscombe, Chernoff,
Gnanadesikan, Koehn, and Posten. This panel discussion took place in
1981. Given the speaker, the discussion centers on multivariate
statistics.

 1983 Talks
 December 7, 1983 (Cleveland Chapter Presidential Address)
 Nonresponse in Surveys: Ed Durkin
 Nonresponse in a survey can produce biased
estimates. This talk will explain the nature of the problem and some
techniques that may be used in dealing with nonresponse.
 November 2, 1983
 Statistical Process Control Program: Gerald Hurayt, Packard
Electric
 This talk is on a program of Statistical Process
Control developed and being used at Packard Electric (a division of
General Motors). This program was put together as a result of meetings
with W. Edwards Deming after his successes in Japan. The talk will discuss
the various facets and the implementation of this plan.

 October 5, 1983
 The Importance of Practice in the Development of Statistics:
George Box via videotape
 This videotape is from the ASA national and is a
lecture made by Box. This lecture covers his reasoning on why statistics
is useful and indeed necessary in today's world. The tape is
approximately 45 minutes in length and will be followed by a discussion
of the tape by the viewers (if they so wish).

 September 7, 1983
 Utilization of Statistical Techniques in Fish Management:
Greg Mountz, Ohio Department of Natural Resources
 This talk will encompass a number of statistical
techniques utilized by fish management personnel in defining and
managing fish populations. These techniques are not statistically
complex, but they do allow management personnel to make general
inferences about fish populations. These techniques ultimately help
managers set harvest regulations for the wise use of the fish resource.
Areas addressed will include creel census techniques, age-growth,
length-width, and condition analysis, and general population estimators.

 April 6, 1983
 Regression Diagnostics: Ralph St. John, Bowling Green State
University
 In many regression problems considerable attention
is devoted to identifying unusual values of the dependent or response
variables, sometimes called outliers, but little (if any) attention is
paid to identifying unusual values of the independent or predictor
variables. This talk will address this oversight using two different
measures. The first concentrates on identifying those observations which
have large influence in determining the predicted response value. The
second concentrates on identifying those observations which have large
influence in determining the parameter estimates obtained. Numerical
examples will be given to illustrate the use of these diagnostic tools.
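The two kinds of measures the talk describes correspond to familiar regression diagnostics: the hat-matrix diagonals (leverage, flagging unusual predictor values) and influence statistics such as Cook's distance (flagging observations that move the coefficient estimates). This is a minimal numpy sketch on synthetic data, not the speaker's own examples.

```python
import numpy as np

def leverage_and_cooks(X, y):
    # Hat-matrix diagonals (leverage of each predictor point) and Cook's
    # distances (influence of each observation on the fitted coefficients).
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)
    cooks = (resid**2 / (p * s2)) * h / (1.0 - h)**2
    return h, cooks

rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0.0, 1.0, 19), [10.0]])  # one extreme x value
X = np.column_stack([np.ones_like(x), x])                # intercept + slope model
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, 20)
h, cooks = leverage_and_cooks(X, y)
# The outlying predictor value dominates the leverage even though its
# response is perfectly consistent with the fitted line.
```

A common rule of thumb flags leverages above 2p/n and Cook's distances near 1 or above for closer inspection.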

 March 2, 1983
 Are there unbreakable codes?: Richard H. Black, Cleveland
State University
 The need for sending information by coded messages
has existed since man first had information that he wanted kept from
others. Modern man has used codes in business, telegraph messages and of
course warfare. The U.S. government alone employs massive amounts of
time, money, and manpower coding and decoding information (military,
economic, and diplomatic). During World War II the Nazis developed the
famous "Enigma" machines for coding and decoding messages.
Recently, there have been developments in encryption (encoding) that
seem to imply that unbreakable codes may now become available. Dr. Black
will be discussing these developments.

 February 2, 1983
 Why bother with a probability sample?: Clark E. Zimmerman,
Clark Zimmerman & Associates, Inc.
 Have you ever been interviewed about the brand of
complexion soap you use, the place where you bank or even how you expect
to have someone handle your remains? Have you ever been questioned by
someone while at work regarding your readership of a magazine, the type
of computer services you need or your company's use of Vbelts, raw
chemicals, or plastic tubing? Perhaps you were contacted by phone, in
person, in your home, or possibly as you walked through a mall. Have you
wondered why you were selected? How accurate do you feel the results
might be? Could they lead to erroneous decisions? A waste of money?
Learn how you can develop probability samples by the use of some simple
techniques. Such results can be projected within descriptive limits.


 1982 Talks
 December 1, 1982 (Cleveland Chapter Presidential Address)
 A Look at Real Time Optimization: John Stansbrey
GliddenDurkee Dwight P. Joyce Research Center
 There are a number of frequently occurring problems
presented to the statistician. One of them is optimization. Yet, except
for the approach set forth by Box and Wilson in 1951 and further
developments of Response Surface designs, the statistical literature has
been almost devoid of optimizing techniques. In 1962, Spendley, Hext,
and Himsworth presented a novel approach to optimization based on
simplices. This method is unique since a new "better" point is
determined after each run rather than after an experimental design is
completed. Our effort is to achieve the optimum configuration with the
smallest number of experiments. Usually, the optimization problem is not
simply to find a maximum or minimum; rather, multiple criteria must be
met simultaneously. Some applications of these methods will be discussed
and compared. Your experiences will add to the benefits of the meeting.

 November 3, 1982
 Two New National Acceptance Sampling Standards: Ed
Schilling, General Lighting Business Group
 When acceptance sampling plans are applied to
measurement characteristics, a choice must be made. With the recent
revisions of the American National Standards ANSI/ASQC Z1.9
Variables system, and the ANSI/ASQC Z1.4 Attribute system, the standards
have now been matched so that it is possible to move between them.
Exploitation of the resulting synergistic relationship to achieve more
rational and effective Acceptance Sampling will be discussed. These ANSI
standards are the civilian versions of the familiar MIL Standards 414
and 105D.

 October 6, 1982
 Statistical Methods in Corrosion Research: Charles Barrett,
NASA Lewis Research Center
 Corrosion studies involve a large number of
variables interacting at the same time as, for example, in the
development of high temperature alloys for jet engine turbine blades.
Fractional factorial designs and multiple regression techniques are used
in the design and analysis of test results. Several programs will be
discussed.

 September 8, 1982
 Data Analysis Using Graphical Techniques: Michael Mazu, B. F.
Goodrich Company
 Today, engineers have to be concerned about
identifying manufacturing problems and identifying areas for
improvements. Decisions about the process cannot be made without
analyzing data collected from the process. The analysis of the data is
not an easy task. The formal statistical methods require a good
understanding of the theory associated with the method if valid
conclusions are to be drawn from the analysis. Unfortunately, in many
cases, the engineers do not have this understanding. In other cases, the
formal approach is impractically complex. The purpose of this
presentation is to discuss graphical techniques that can be used by
engineers.

 April 29, 1982
 Statistics: American Heritage, Future Challenge (Highlights of
the Early Years of the American Statistical Association): Fred Leone,
American Statistical Association (executive director)
 Highlights of the early years of the American
Statistical Association: why Lemuel Shattuck risked the dangers of Lake
Erie in the early 1800s, and other interesting stories about the people
who founded and nourished the ASA. Shattuck was one of the five founding
members of the ASA. Many of these people are "characters" that
Fred and some of us know personally.

 March 3, 1982
 Long Runs in Bernoulli Sequence: Anthony A. Salvia,
Pennsylvania State University
 In a sequence of n Bernoulli variables, define L0 as
the longest run of zeros, L1 as the longest run of ones, and W =
max(L0, L1). The distribution of L0 is important in the analysis of
k-in-a-row failure networks; the distribution of W in the symmetric case
p = ½ has application to tests of trend or contagion. This report
describes those distributions and gives examples of their use.
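The quantities defined above are simple to compute for any observed 0/1 sequence; a short sketch:

```python
def longest_runs(seq):
    # L0 = longest run of zeros, L1 = longest run of ones, W = max(L0, L1).
    longest = {0: 0, 1: 0}
    run, prev = 0, None
    for b in seq:
        run = run + 1 if b == prev else 1   # extend or restart the current run
        prev = b
        longest[b] = max(longest[b], run)
    return longest[0], longest[1], max(longest.values())

print(longest_runs([0, 0, 1, 1, 1, 0, 1, 1]))  # -> (2, 3, 3)
```

Simulating many such sequences gives a quick empirical check of the distributions the talk derives, e.g. of W under the symmetric case p = ½.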

 February 3, 1982
 Improving Productivity in Quality Control via Multi-Stage
Attribute Sampling Plans: Dale Flowers, Indiana University
 For acceptance sampling by attributes, multi-stage
plans are generally more efficient, in terms of inspection time, than
simpler single sampling plans. In spite of this, there has not been
widespread use of multi-stage plans. Current methods for determining
such plans are burdened by restrictive assumptions. The purpose of this
presentation is to explain a new computerized method for determining
such plans which relaxes many of these assumptions. It is easy to use
and computationally efficient, requiring only a few seconds or less of
computer time per plan. It has been implemented for a variety of
industrial sampling inspections with uniformly successful results.

 1981 Talks
 December 2, 1981
 Some Applications of Statistics in an Operations Research Group:
Alan Dutka, B. F. Goodrich Company
 Actual applications of statistics in business and
industry will be reviewed. An objective is to contrast the diverse
nature of techniques which have been successfully employed. For example,
several problems will be discussed in which the financial impact was
measured in millions of dollars. However, the statistical solutions of
these problems required only a knowledge of elementary undergraduate
business statistics. In addition, applications of more sophisticated
statistical techniques will also be presented.

 November 4, 1981
 Biased Regression: Berni Schapiro, General Tire and Rubber
Company
 Biased regression methods (particularly ridge
regression) have for many years been advocated as being helpful in the
analysis of ill-conditioned data. Based on a computer simulation, this
discussion will address the causes of obtaining poor regression
estimates (poor predictability), how Ridge Regression proposes to
produce improved estimates, and how effective it is. Ridge regression
results will be compared to those obtained with some other, more
commonly used methods. Finally, we will discuss our own, preferred
methods of addressing such problems in an industrial environment.
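A minimal sketch of ridge regression on deliberately ill-conditioned data (illustrative only; the talk's simulation is not reproduced here). With two nearly collinear predictors, only their sum is well determined, so the individual OLS coefficients are erratic; the ridge penalty shrinks the poorly determined direction and returns both coefficients near their true value of 1.

```python
import numpy as np

def ridge(X, y, k):
    # Ridge estimate (X'X + kI)^(-1) X'y; k = 0 reduces to ordinary least squares.
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=1e-3, size=100)     # nearly collinear predictors
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(scale=0.1, size=100)  # true coefficients are (1, 1)

b_ols = ridge(X, y, 0.0)    # individual coefficients erratic; only the sum is stable
b_ridge = ridge(X, y, 1.0)  # shrinkage pulls both coefficients toward 1
```

Choosing the ridge constant k is the contentious part in practice; the ridge trace and cross-validation are the usual tools.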

 September 9, 1981
 Polling and Political Forecasting: Ronald Busch, Cleveland
State University
 Forecasting political events has become a fine art.
Witness the early determination of the winners in the last election. Dr.
Busch has been intimately involved in these activities  modeling,
gathering data, implementing, testing… He will describe the
interesting aspects of these activities and others and discuss the plans
for forecasting the forthcoming elections in November.

 April 1, 1981
 A proposal for a structured MS degree program in applied
statistics: Steven M. Sidik, NASA Lewis Research Center; H. Smith
Haller, Goodrich Chemical Company; Pauline Ramis, CWRU
 The speakers have been developing a proposal for a
new MS program in Applied Statistics. The program is intended to develop
professional statisticians, not only with expertise in theory and
methods, but with ability and experience to solve practical problems.
This meeting will be different than usual. We are invited to participate
in shaping this program to provide the best possible statistical
training of those new statisticians we need to hire. We have here the
opportunity, indeed the obligation, to provide guidance to our educators
to fill our needs. There is no better group anywhere to fulfill this
service then our own group here in Northeast Ohio. The meeting is
arranged so that everyone will have the chance to introduce and discuss
his/her ideas on how such a program should be structured.

 1980 Talks
 March 5, 1980
 Epidemiology: The Statistics of Cause and Effect: Marian
Chew, Sohio
 An epidemiologist studies how, where, when, and why
a disease distributes itself over time within human populations.
Application of epidemiologic techniques to the industrial environment,
although not new, is a growing, exciting field for statisticians and
biostatisticians. Marian will present some background information about
epidemiology and the statistical tools used by epidemiologists. She will
then show the application of epidemiology to the working environment,
illustrating with her ongoing study of a suspected carcinogen that is
produced in the petrochemical industry.

 April 2, 1980
 Regression estimates based on survey and satellite data:
George Hanuschak, U.S. Department of Agriculture
 Crop area estimates using NASA's LANDSAT satellites
and USDA ground-gathered data were developed for the USDA's 1978 Annual
Crop Summary. These estimates of Iowa's 1978 planted crop areas for corn
and soybeans had smaller sampling errors than conventional estimates.
The statistical methodology used was a regression estimator. Estimates
were developed at the State, multicounty (analysis district), and
individual county levels. At the State and multicounty level, the
regression estimates (using LANDSAT and ground-gathered data) were
substantially more precise than the direct expansion estimate, which
used ground data only. Significant gains in time and cost
efficiency were realized in the Iowa project. Improvements were made in
all phases of the LANDSAT data processing. Problems with total project
cost, delivery of LANDSAT data to ESCS in time for analysis, and with
cloud cover, however, remain.
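The regression estimator described above adjusts the sample mean of the survey variable using an auxiliary variable whose population mean is known (here, satellite-classified acreage plays the auxiliary role, ground data the survey role). The following is a small synthetic simulation, not the Iowa data, showing the precision gain over the direct expansion estimator.

```python
import numpy as np

def regression_estimator(y_s, x_s, x_pop_mean):
    # Survey regression estimator of the population mean of y:
    # ybar + b * (Xbar_pop - xbar_sample), b = sample slope of y on x.
    b = np.cov(x_s, y_s)[0, 1] / np.var(x_s, ddof=1)
    return y_s.mean() + b * (x_pop_mean - x_s.mean())

rng = np.random.default_rng(2)
x_pop = rng.uniform(50.0, 150.0, 10_000)            # auxiliary value, known for all units
y_pop = 0.9 * x_pop + rng.normal(0.0, 5.0, 10_000)  # survey variable, seen only in samples
errs_direct, errs_reg = [], []
for _ in range(300):                                # 300 repeated samples of size 200
    idx = rng.choice(10_000, 200, replace=False)
    errs_direct.append(y_pop[idx].mean() - y_pop.mean())
    errs_reg.append(regression_estimator(y_pop[idx], x_pop[idx],
                                         x_pop.mean()) - y_pop.mean())
# With a strongly correlated auxiliary variable, the regression
# estimator's errors are far smaller than the direct estimator's.
```

The gain depends entirely on the correlation between the two variables, which is why accurate satellite classification mattered so much in the Iowa project.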

 1974 Talks
 December 4, 1974
 Can Education Survive?: Peter Hilton, Case Western Reserve
University
 November 6, 1974
 Statistics in Research: Norbert Soltys, Babcock and Wilcox
 September 16, 1974
 U.S. Economic Outlook and Its Implications for Cleveland:
Michael McCarthy, Case Western Reserve University
 May 1, 1974
 A Dialog Between the Environmentalist and the Statistician:
Fred C. Leone, American Statistical Association
 April 3, 1974 (evening meeting at Akron University)
 Statistical Applications to Medical Research: William H.
Beyer, Akron University
 March 6, 1974
 Applications of Stochastic Processes: Thaddeus Dillon,
Youngstown State University
 February 4, 1974
 Can We Call It Threshold Regression?: Richard B. Schafer,
Ford Motor Company
 January 9, 1974
 Problems of Statistical Analysis of Air Pollution Data:
Harold Neustadter, NASA
 1971 Talks
 November 3, 1971
 Leo F. Lightner, Ferro Corp.
 October 6, 1971
 Autoregressive Moving Average Time Series Models: David A.
Pierce, Federal Reserve Bank
 May 5, 1971
 An Alternative to Polynomial Regression: Alan F. Dutka, B.F.
Goodrich Co.
 April 7, 1971
 Management Uses of Computer Simulation Models: Alex
Steinbergh, McKinsey & Company
 March 2, 1971
 Telescoping Design of Experiments: Arthur G. Holms, NASA
 February 3, 1971
 A New Approach to Statistical Hypothesis Evaluation: Thaddeus
Dillon, Youngstown State University
 January 6, 1971
 The Theory and Application of Composite Forecasting: Samuel
Wolpert, Predicasts, Inc.
 1970 Talks
 November 4, 1970
 Presidential Address: Noel Bartlett, The Standard Oil Company
 October 7, 1970
 How Goes the Battle Against Inflation: Charles Walton,
Central National Bank of Cleveland
 September, 1970
 Merlyn Trued, InterAmerican Development Bank
 May, 1970
 Tour: Carling Brewing Company: U.S. Brewers Association
 April 1, 1970
 Using Quality Control Charts to Monitor Dietary Intake:
Arthur Littell, Case Western Reserve University
 March 4, 1970
 Using Discriminant Analysis of Automotive Emission Data:
Marian Chew, Consultant
 February 5, 1970
 Computer Time Sharing and Statistical Applications: Joseph
Mezera, General Electric Company
 January 7, 1970
 A Truncated Sequential Test: Ronald Suich, Case Western
Reserve University
 1968 Talks
 April 3, 1968
 Markov Model for Consumer Behavior: T. N. Bhargava, Kent
State University
 March 6, 1968
 The Use of Statistics in Predicting Election Results: Louis
Masotti, Case Western Reserve University
 February 7, 1968
 Some Industrial Applications of NonParametric Statistics: C.
B. Bell, Case Western Reserve University
 1966 Talks
 November 17, 1966
 A Dozen Uses for Binomial Probability Paper: Lloyd S. Nelson,
General Electric Lamp Division
 October 20, 1966
 The Potential of New Tools for Modifying the Business Cycle:
Robert E. Johnson, Western Electric Company
 September 15, 1966
 The Best Economic and Social Statistics in the World: Raymond
T. Bowman, U.S. Bureau of the Budget
 May 19, 1966
 The Automation of Information Retrieval Systems: John
Costello, Battelle Memorial Institute
 April 21, 1966
 The Role of Statistics in Credit: John C. Sawhill, Commercial
Credit Company
 March 17, 1966
 Implications of Projections of Economic Growth for Marketing
Analysis: Jack Alterman, U.S. Bureau of Labor Statistics
 February 17, 1966
 Anticipations and Consumer Behavior: Robert Ferber,
University of Illinois
 January 20, 1966
 How to Make Beer: William A. Golomski, Joseph Schlitz Brewing
Company
 1965 Talks
 November 18, 1965
 New Findings on Economic Indicators: Geoffrey H. Moore,
National Bureau of Economic Research
 October 21, 1965
 The Law of Forecast Feedback: George Cline Smith,
McKayShields Economics, Inc.
 September 7, 1965
 Wordwise in Jargon Land: Lawrence R. Klein, U.S. Department
of Labor
 1952 Talks
 April 8, 1952
 An Analysis of the Revised Wholesale Price Index and Problems
Resulting from its Revision: Aryness Joy Wickens, U.S. Bureau of
Labor Statistics and President of the American Statistical Association
