The Relationship between the Risk of Catastrophic Failure and the Size of the Scale Up Steps in Chemical Process Development
kilomentor | 16 October, 2012 10:51
What Do I Mean by Catastrophic Failure?
In the context used herein, I am defining a catastrophic failure of a process step trial as a very large loss of product quality or isolated yield from which
there is no recovery. That is, by definition, there is no patch known and
reprocessing is not viable. Characteristically, the failure, when it occurs,
comes as a complete surprise. Catastrophic failures at scale usually create
serious financial losses and make project schedule extension necessary. It is
the risk we face when we ‘put too many eggs in one basket’.
How Is the Size of the Scale-Up Steps Linked to the Risk of Catastrophic Failure?
What is risked when a process step is increased in scale? It is fairly widely
accepted that, at first and quite normally, the yield of any reaction step is likely to fall
somewhat. More serious, but still not unexpected, is that the type and quantity
of impurities in the isolated product may change in unanticipated ways. Worse
still, and verging on the catastrophic, the reaction may create a mixture that cannot be purified enough to give
an isolable physical form. Still worse,
the reactor contents may become unprocessable (can’t cut, can’t stir, can’t
cool, can’t filter, can’t distil). When these latter things, for which there has
been no preparation, occur, unacceptable time and money are lost. More material
must be ordered. Project milestones are missed. These possibilities
limit the size of the scale-up steps in development. Consequently, as the
cost of the inputs at risk and/or the probability of catastrophic failure falls, the size of the steps in scale-up can grow.
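This trade-off can be caricatured with a toy expected-loss calculation. Everything numerical here, including the assumption that failure probability grows with the logarithm of the jump in scale, is invented purely for illustration:

```python
# Toy model: expected material loss over a scale-up campaign as a function of
# step size. All numbers and the failure-probability rule are invented.

import math

def campaign_expected_cost(start_kg, target_kg, step_factor,
                           input_cost_per_kg=100.0, base_fail_prob=0.02):
    """Expected material loss summed over the steps needed to reach target."""
    scale = start_kg
    expected_loss = 0.0
    n_steps = 0
    while scale < target_kg:
        scale = min(scale * step_factor, target_kg)
        # crude assumption: risk per step rises with the log of the jump size
        p_fail = min(1.0, base_fail_prob * 10 * math.log10(step_factor))
        # a failure is taken to cost the full value of the inputs charged
        expected_loss += p_fail * scale * input_cost_per_kg
        n_steps += 1
    return n_steps, expected_loss

for factor in (3, 10, 100):
    steps, loss = campaign_expected_cost(0.01, 100.0, factor)
    print(f"step factor {factor:>3}: {steps} steps, expected loss ${loss:,.0f}")
```

Under these made-up numbers, smaller jumps lower the expected material loss but multiply the number of campaigns; the useful point is only that step size, failure probability, and the value of the inputs at risk trade off against one another.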
The approximately optimal conditions determined
with laboratory equipment can still differ, in a number of variables, from
what must be done in a pilot plant. Just for
starters, some parameters such as heating, cooling, stirring, and reagent
addition times most often cannot be physically matched after increasing
scale because of equipment limitations. Surprises can occur as one increases
the size of operations, and these can lead to product with unacceptable properties.
How Does One Rank Risks?
Any risk to workers’ physical safety must be made inconsequential. It would be
immoral to knowingly add to risks to health and safety. Even from a completely
selfish perspective, a lost time industrial
accident can put a chemist manager’s professional career at risk. Safety
issues are paramount and signs of a hazard dictate slow scaling.
A loss of starting material is both a loss of time and money. The budget can perhaps be repaired but the
time required for the delivery of fresh starting materials is lost forever. If
the inputs are inexpensive as a proportion of total costs and are quickly
available from multiple sources, one risk of more aggressive scaling is reduced.
It is usually in the early steps of a process that inputs can be replaced
cheaply and quickly, and, other things being approximately equal, early steps
can be scaled up in larger increments for that reason.
Can One Estimate the Likelihood of a Particular Type of Scale-Up Failure?
Perhaps instead of this section heading one ought to ask: How well have I been
able to scale-down the pilot plant environment and reproduce it in my
laboratory equipment? Scaling down is
the exercise of selecting the bench-scale equipment, operating conditions, and
mathematical models to successfully simulate pilot or production scale
operations in the lab.
Risk can be reduced by testing with such equipment. If the experimentation has
been conducted using exactly the same quality for solvents, reagents,
processing aids and catalysts, the biggest source of deviation in scale up is
removed. If the processing times including times of addition, times for
transfers, and times for filtration approximate those necessitated in the pilot
plant, risk is reduced. If the corrosiveness and abrasiveness of the reactants
have been tested against the reactor’s materials of construction, that too reduces risk.
If the procedure is insensitive to rate over a wide range of agitation speeds, another
sensitivity has been allowed for. If the sensitivity to traces of air and
moisture is known and taken into consideration, life is simplified. If none of
the reactants, reagents, or co-products in the process step are more completely
swept out of the reactor at one scale compared to the other, another frequent
source of deviation is accounted for.
There are auguries of danger that can be divined while still in the laboratory
and addressed before moving to higher scale:
addition or removal of a gas
high viscosity of the reaction medium
need for a low reaction temperature
drown out quenching
rapid addition rates
fast reaction relative to the rate of addition of reacting component
decomposition on the reactor walls
presence of byproduct polymer
use of polymer reagents which may disintegrate
high speed stirring
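For what it is worth, such a list of auguries can be kept as an explicit screening checklist so that no sign is silently forgotten. This minimal sketch simply mirrors the list above; any scoring or weighting one might bolt on would be invented:

```python
# Minimal screening sketch: record which laboratory warning signs apply to a
# process step. The sign names mirror the list in the text.

WARNING_SIGNS = [
    "addition or removal of a gas",
    "high viscosity of the reaction medium",
    "need for a low reaction temperature",
    "drown out quenching",
    "rapid addition rates",
    "fast reaction relative to the rate of addition of reacting component",
    "decomposition on the reactor walls",
    "presence of byproduct polymer",
    "use of polymer reagents which may disintegrate",
    "high speed stirring",
]

def screen_step(observed_signs):
    """Return the recognized warning signs observed for one process step."""
    unknown = set(observed_signs) - set(WARNING_SIGNS)
    if unknown:
        raise ValueError(f"unrecognized signs: {unknown}")
    # preserve the canonical checklist order in the report
    return [s for s in WARNING_SIGNS if s in observed_signs]

flags = screen_step(["high viscosity of the reaction medium",
                     "drown out quenching"])
print(f"{len(flags)} warning sign(s) flagged: {flags}")
```

Each flagged sign is an argument for a smaller next scale-up increment, or for more scale-down work before moving at all.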
When one scales up, it is advantageous if the first step is of sufficient size that all the changes
in the main discontinuous variables (reactor material, reactor shape, minimum
stirrable volume, type of agitation, heat transfer etc.) are introduced together.
Making these changes together often can be better accommodated by also including
initially an increase in the amount of solvent in the reactor, to give an overall
dilution. Often, the biggest impediment to moving into the pilot plant is
the cost of the materials needed to operate at the minimum acceptable volume in
the larger reactor; making an initial dilution, one that can later be reversed,
may set up a more acceptable combination of risks at a more acceptable price.
Said another way, it may be better to delay optimizing the throughput, which very
often means increasing the concentration of the reactants and reducing
the amount of diluent (i.e. solvent), until after the transition to the pilot
plant or manufacturing equipment. This makes the transition from laboratory to
pilot plant less expensive, because it requires less of the expensive chemical
to reach the minimum stirrable volume at the start of the run.
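A back-of-the-envelope calculation makes the saving concrete. The vessel’s minimum stirrable volume, the concentrations, and the price below are all invented numbers:

```python
# Back-of-the-envelope: substrate cost to charge a pilot reactor to its
# minimum stirrable volume at two concentrations. All numbers are invented.

MIN_STIRRABLE_L = 50.0          # smallest workable fill of the pilot reactor
SUBSTRATE_COST_PER_KG = 2000.0  # expensive early-development material

def charge_cost(conc_kg_per_l):
    """Substrate cost to reach the minimum stirrable volume at a given conc."""
    return MIN_STIRRABLE_L * conc_kg_per_l * SUBSTRATE_COST_PER_KG

optimized = charge_cost(0.25)   # throughput-optimized lab concentration
diluted   = charge_cost(0.05)   # deliberately diluted first pilot run
print(f"optimized charge: ${optimized:,.0f}")
print(f"diluted charge:   ${diluted:,.0f}")
```

At the diluted concentration the same minimum fill is reached with a fifth of the substrate, and the dilution can be reversed once the step has been demonstrated at scale.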
The Increased Scale-Up Risk with Catalyzed Reactions
The probability of catastrophic failure is increased for catalyzed reactions of
which, for example, enantioselective reactions are a prominent contemporary
class. The special additional risk is that the catalytic system may be more
easily shut down by small, even trace, impurities that are difficult to measure
much less control. Put another way, a catalyzed reaction is susceptible to
poisoning and this can lead to catastrophic failure of conversion with no
easily identifiable cause. Catalyzed reactions are inherently less rugged than the
uncatalyzed because the catalytic substance by definition is used in lower than
stoichiometric quantity and so would be disproportionately affected by a
particular quantity of a catalyst poison. Impurities in the inputs to a
catalytic process can also accelerate the reaction. When such a beneficial
impurity is absent, as after a switch to a different source of an input, the
performance may deteriorate or fail outright. Neal G. Anderson wrote in Practical Process Research
& Development, First Edition, p. 194: “The importance of trace
beneficial impurities may become evident only by failure of the reaction when
using different lots of starting materials, reagents, or solvents.” Thus the
recommendation to perform laboratory experiments with the same materials to be
used in the plant goes double for catalyzed reactions and this includes
chemicals used to wash and prep the reactor.
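The disproportion can be put in numbers. Assuming, purely for illustration, a poison that binds the catalyst one-to-one, a fixed trace of it wipes out a far larger fraction of a substoichiometric charge:

```python
# Why substoichiometric catalysts are fragile: a fixed trace of poison knocks
# out a far larger fraction of a 1 mol% catalyst charge than of a
# stoichiometric reagent. Assumes 1:1 poison binding; all numbers invented.

def fraction_deactivated(loading_mol_pct, poison_mol_pct):
    """Fraction of a species consumed by a 1:1-binding poison."""
    return min(1.0, poison_mol_pct / loading_mol_pct)

poison = 0.5  # mol% poison relative to substrate
for loading in (100.0, 10.0, 1.0, 0.1):
    f = fraction_deactivated(loading, poison)
    print(f"{loading:>5.1f} mol% charge: {f:.1%} deactivated")
```

In this sketch a 0.5 mol% trace barely dents a stoichiometric reagent but deactivates half of a 1 mol% catalyst charge, which is the sense in which catalyzed reactions are inherently less rugged.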
A catalytic reaction can more easily be shut down without leaving forensic
evidence. A catastrophic failure can poison our minds as much as our reactions.
We may start to harbor conspiracy delusions.
Have we been harmed by some disgruntled or mentally disoriented employee?
Have some operators made an error and
covered it up? Are we now engaged in a long, expensive, and ultimately fruitless
investigation? Human minds, in the
absence of a clear causal connection for a phenomenon, are programmed to find
signs suggesting hypotheses even in random data.
When a procedure that has been running successfully at large scale suddenly
fails, while laboratory experiments with the same raw materials run immediately
afterwards succeed, these ideas come to mind and make the resulting
investigation even more difficult to bear.
A suggestion that may be just too inconvenient to implement should at least be
contemplated. When a clear most probable cause cannot be identified after a failure, yet the work must
go on, the next run performed at that scale should, to be fair, use a
completely different group of operators or be run under special oversight.
If the team is all completely different,
a second failure will at least rule out a malevolent intervention by a team
member. What must be avoided is the
situation where a second failure would
throw what is likely unwarranted suspicion upon employees who participate in
both failing runs.