R. M. R. Fick's Laws

 

FICK'S LAWS

 

      I once had an algebra textbook in which the author devoted several paragraphs to a discussion of the nature of mathematical proof, and he cited several subjects around which one could still stir up a heated debate among the experts.  A prime ingredient in mathematical proof, according to this author, was our intuition regarding countable sets and their members, as often comes up in questions of probability theory.  Consider the toss of a coin, for example.  If we discard the cases when the coin comes to rest on its edge, there are two possible outcomes, namely heads and tails.  We require the probability of heads or tails to be certainty, P{h or t} = 1, and since we can find no logical argument to favor one outcome over the other, our intuition regarding this set of outcomes is that P{h} = P{t} = 1/2.  Moreover, our intuition tells us that P{h} = P{t} = 1/2 on the next toss, yet to be performed, is completely independent of the set of outcomes at all previous tosses.  What, then, is the probability P{hh} = P{tt} in 2 tosses?  We can enumerate all of the possible outcomes, hh, ht, th, tt, and use our intuition regarding countable sets and their members to find P{hh} = P{tt} = 1/4, since we can find no logical argument to favor any one of these outcomes over any other.  In general we can say that the probability of a sequence of independent outcomes is just the product of the individual probabilities.  Thus the probability of 5 heads in 5 tosses, one after the other, is (1/2)^5 = 1/32.
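
      To make the product rule concrete, here is a small Python sketch, not part of the original text, that enumerates the equally likely outcomes of n independent tosses and counts those that are all heads.  The enumeration reproduces (1/2)^n directly.

# Enumerate all 2^n equally likely sequences of n fair, independent tosses
# and count the ones that are all heads.
from itertools import product

def prob_all_heads(n):
    """Probability of n heads in n fair, independent tosses."""
    outcomes = list(product("HT", repeat=n))
    favorable = sum(1 for seq in outcomes if all(s == "H" for s in seq))
    return favorable / len(outcomes)

for n in (1, 2, 5):
    print(n, prob_all_heads(n), 0.5 ** n)   # n = 5 gives 0.03125 = 1/32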

 

      Suppose, now, that we construct a machine to do the tossing and record the results and then find, as a matter of fact, that in one sequence of 100 tosses heads came up 100 times in a row.  The question arises: are the machine and/or the coin biased in some way, or has an extremely unlikely event occurred?  And what is the probability of a head or a tail on the next toss?  Our intuition regarding sets and their members is very strong, but so is our intuition regarding the relative frequency of events.  One may argue that the facts speak for themselves and a head on the next toss is all but certain.  Others will argue that a careful examination of the machine and the coin has turned up no evidence of a bias and the probabilities are still 50/50.  I have heard people I consider to be far more intelligent than I am argue the matter off and on for weeks without coming to any agreement.  The people who operate the games in Nevada and elsewhere strongly promote the idea that their customers should bet the farm when they find themselves on a "winning streak", but my intuition regarding sets and their members is stronger than my intuition regarding relative frequencies, so I have a problem with the concept of a "winning streak", particularly when independent events are at issue.[1]

 

      My former office mate, the historian, was deeply concerned with such dilemmas.  He once chided me for what he found to be an unwarranted reverence on my part for certain fundamental laws of physics as absolute truths.  He said that his study of the history of science showed a painfully turbulent and slow acceptance of many concepts which were routinely taught as proven facts in college at the time.  He was of the strong opinion that the laws of physics as we know them had come to be accepted only because they had been proved out on the battlefield.  Warring nations, he submitted, had been hiring technical consultants since time immemorial to design weapons and strategies and the winners were simply more likely to be those who hired people immersed in the lore of modern physics while the losers tended to hire astrologers and mystics and those who specialized in reading the entrails of birds.

 

      I will now consider the problem of estimating the residence time for an atomic particle trapped by its neighbors and in thermal equilibrium with them.  Extensive experience in the laboratory shows that there is usually a well defined threshold energy for escape in this situation and a finite probability, however small, that the particle will acquire the required energy given sufficient time.  In this model, the trapped particle oscillates back and forth between its neighbors, rebounding each time it gets too close to any of them.  The particle may gain or lose energy at each rebound, with the average energy directed along each of three spatial coordinates being kT, roughly 0.026 electron volts at room temperature.  The probability of a sequence of favorable rebounds mostly increasing the energy of the particle is extremely small, but not zero, while the frequency of rebounds is extremely high.  The combined effect is that some particles trapped in solid solution can be expected to remain in a fixed location at room temperature for times comparable to the age of the universe or longer, while other particles can be expected to migrate perceptibly as we watch, with every possibility in between.  As the temperature is increased the residence times decrease, often dramatically, and all of the elements we know can be transformed into a liquid or a vapor at temperatures attainable in the laboratory.

 

      Let p represent the probability of escape at a trial.  The probability of failure to escape is thus (1-p).  Since the probability of escape, or failure, at a trial is independent of all previous trials, we see that the probability of a sequence of n failures is simply (1-p)^n = (1-np/n)^n.  This form is studied extensively in advanced high school algebra and beginning calculus courses.  As n gets indefinitely large the expression converges to the form EXP(x), where x = -np, a function found on most, if not all, pocket calculators.  When I enter x = -np = -1, for example, my calculator gives the value 0.367879441.  If the frequency of escape attempts is f per second, then n = ft, where t is the elapsed time since we last looked and found the particle in residence.[2]  The probability that the particle is still in residence after n trials (t seconds) is just EXP(-np) = EXP(-ftp) = EXP(-t/to), where to = 1/fp seconds.  If we think of the residence time as a random variable, it works out that the expected residence time is just to = 1/fp seconds.  The probability of escape at a trial, from Boltzmann statistics, is just p = EXP(-Qd/kT), while the frequency of escape attempts is on the order of 10^12 to 10^13 per second.  Figure 4-1 is a plot of LOG(Expected Residence Time) vs Temperature for various activation energies from 0.25 eV to 2 eV, assuming that f = 10^13 trials/second.  By way of stimulating the intuition, note that Qd for the hydrogen nucleus in soft iron is about 0.24 eV while Qd for sulphur in stainless steel is in excess of 2 eV.  There are very few, if any, values of Qd less than 0.24 eV in the literature, but a large number in excess of 2 eV.
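
      The calculation behind Figure 4-1 is easy to repeat.  The following Python sketch, taking f = 10^13 trials/second as in the text and k = 8.617 x 10^-5 eV per kelvin, evaluates to = EXP(Qd/kT)/f for a few activation energies at room temperature.  The particular energies and the temperature are illustrative, not measurements.

# A sketch of the arithmetic behind Figure 4-1: expected residence time
# to = 1/(f*p) with p = EXP(-Qd/kT), assuming f = 1e13 trials/second.
import math

K_BOLTZMANN = 8.617e-5   # Boltzmann constant, eV per kelvin
F_TRIALS = 1.0e13        # assumed frequency of escape attempts, per second

def expected_residence_time(qd_ev, temp_k, f=F_TRIALS):
    """Expected residence time to = exp(Qd/kT)/f, in seconds."""
    p_escape = math.exp(-qd_ev / (K_BOLTZMANN * temp_k))
    return 1.0 / (f * p_escape)

for qd in (0.25, 0.5, 1.0, 2.0):                # activation energies, eV
    t0 = expected_residence_time(qd, 300.0)     # room temperature, ~300 K
    print(f"Qd = {qd:4.2f} eV -> log10(to/seconds) = {math.log10(t0):7.2f}")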

 

      We can estimate the frequency of escape attempts in terms of the mass of the particle in question, the physical size of the cell in which the particle is trapped between its neighbors, and its average kinetic energy as determined by the temperature.  It turns out, however, that this quantity appears in our final calculations along with other factors so that precise knowledge of this factor alone is not important.  We can measure directly in the laboratory the overall combined effect of all of these factors without knowing the individual factors precisely.

 

      Suppose we have, in a 3 dimensional solid, a distribution of atomic particles (AP) subject to thermally activated random walk.  We can define a concentration, C, AP/Cm^3, at each point in the solid.  This concentration will, in general, be a function of the three spatial coordinates, say x, y, and z, and the time, t, and we may find it convenient to use the notation C = C(x,y,z,t) from time to time.  The population of diffusing AP in a neighborhood will decrease in proportion to the population present due to emigration and increase in proportion to the population of neighboring points due to immigration.  Suppose we have an incremental increase in concentration, ΔC, corresponding to an incremental displacement, Δx, in the x direction.  We can define a concentration gradient, ∂C/∂x, in this direction and postulate a net flow of AP in the direction corresponding to a decrease in concentration.  We designate the flow of AP as J, AP/Sec/Cm^2, and argue, from intuition at this point, that J is proportional to the negative of the concentration gradient.  This is Fick's First Law of diffusion, which may be stated formally as follows:

 

Jx = -D ∂C/∂x   AP/Second/Cm^2

 

Jx being the x component of the total flow.  There are, of course, diffusion currents in the y and z directions as well, provided that there are also concentration gradients in those directions.  The full 3 dimensional relation may be expressed in vector notation as J = -D ∇C, but we shall be concerned mostly with the simpler one dimensional version.
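
      A numerical reading of the first law may help the intuition.  The short Python sketch below approximates Jx = -D ∂C/∂x with a two point difference; the value of D and the concentration profile are invented purely for illustration.

# Approximate Fick's First Law, Jx = -D dC/dx, with a two-point difference.
D = 1.0e-8                            # assumed diffusion parameter, Cm^2/Second

def flux_x(c_left, c_right, dx, d=D):
    """Jx = -D * (C(x+dx) - C(x)) / dx, in AP/Second/Cm^2."""
    gradient = (c_right - c_left) / dx
    return -d * gradient

# Concentration falls from 1e20 to 5e19 AP/Cm^3 over 0.1 cm, so the net
# flow of AP is in the +x direction (down the gradient) and Jx is positive.
print(flux_x(1.0e20, 5.0e19, 0.1))    # about 5e12 AP/Second/Cm^2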

 

      The constant of proportionality, D, has the dimensions Cm^2/Second, and it can be shown that D takes the form D = N*d^2*f*EXP(-Qd/kT)/2, where f is the frequency of escape attempts per second, mentioned earlier, d is the average distance between traps where the AP may reside between migrations, and N is the average number of d spacings the AP travels between traps.  In general, an AP may not fall into the nearest trap as soon as possible after a successful escape, but may wander through the solid, passing up a number of opportunities before falling into one.  The uncertainty in the value of N is covered by a further uncertainty in the value of f, although d is generally knowable from X-ray studies of the solid crystal lattice under consideration.  The frequency f is, in general, a slowly varying function of temperature, but the temperature dependence expressed in the factor EXP(-Qd/kT) dominates the whole process so overwhelmingly that any experimental determination of f as a function of temperature is not possible.  The diffusion parameter, D, is usually expressed in the form D = Do EXP(-Qd/kT), where Do is a constant which can be determined experimentally in a number of ways, for example, as Lewis Hall and I did in the case of the diffusion of sulphur in stainless steel.  We also measured the heat of diffusion, Qd, at the same time.
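
      The following Python sketch, with invented numbers, illustrates one way such a determination can go: since LOG(D) is linear in 1/T for the form D = Do EXP(-Qd/kT), two measured values of D at known temperatures are enough to fix both Do and Qd.  This is only an illustration of the Arrhenius form, not the procedure used in the measurements mentioned above.

# Recover Do and Qd from two (temperature, D) pairs using the fact that
# ln(D) = ln(Do) - Qd/(k*T) is linear in 1/T.  The data below are invented.
import math

K_BOLTZMANN = 8.617e-5   # Boltzmann constant, eV per kelvin

def arrhenius_fit(t1, d1, t2, d2):
    """Return (Do, Qd) given D = d1 at T = t1 and D = d2 at T = t2."""
    qd = K_BOLTZMANN * (math.log(d1) - math.log(d2)) / (1.0 / t2 - 1.0 / t1)
    do = d1 * math.exp(qd / (K_BOLTZMANN * t1))
    return do, qd

do, qd = arrhenius_fit(900.0, 1.0e-12, 1100.0, 1.0e-10)   # invented data
print(f"Do = {do:.2e} Cm^2/Second, Qd = {qd:.2f} eV")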

 

      The derivation of Fick's Second Law of Diffusion is based on the idea that the total number of AP in the extended neighborhood of any interior point is conserved.  At a surface, of course, we will allow AP to escape the solid as a whole.  Consider the simple case where there are no concentration gradients in the y or z directions and thus no net migration of AP in those directions.  If we find that Jx is constant with variation in x, then there are as many AP entering a small volume in the neighborhood of x as there are leaving it, and the concentration within the volume is constant in time.  If, on the other hand, Jx is not constant with a variation in x, the concentration will not be constant in time.  If there are more AP entering a small volume than there are leaving it, the concentration will increase with time, and vice versa.  We can express this idea in mathematical notation as follows: ∂Jx/∂x = -∂C/∂t, AP/Cm^3/Sec.  We can take the partial derivative of both sides of Fick's First Law with respect to x and find ∂Jx/∂x = -D ∂²C/∂x² = -∂C/∂t.  This is Fick's Second Law of Diffusion, restated here in the form:

 

∂²C/∂x² = (1/D) ∂C/∂t

 

      The left hand side of this expression is the curvature of the concentration profile, while the right hand side is proportional to the rate of increase of the local concentration with respect to time.  Where the profile curves upward, the local concentration is increasing with time; if the local concentration is decreasing with time, the concentration profile will be curved downward.
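
      The second law lends itself to a simple marching calculation.  The Python sketch below advances a one dimensional concentration profile in time by the explicit finite difference rule; the grid spacing, the value of D, and the initial step profile are all invented for illustration, and the time step is chosen to satisfy the usual stability condition D*Δt/Δx^2 ≤ 1/2.

# Explicit finite-difference marching of Fick's Second Law in one dimension:
# C_new[i] = C[i] + D*dt * (C[i-1] - 2*C[i] + C[i+1]) / dx^2, ends held fixed.
D = 1.0e-8                  # assumed diffusion parameter, Cm^2/Second
DX = 0.01                   # grid spacing, cm
DT = 0.4 * DX * DX / D      # time step satisfying the stability condition

def step(c, d=D, dx=DX, dt=DT):
    """Advance the concentration profile by one time step."""
    new = c[:]
    for i in range(1, len(c) - 1):
        curvature = (c[i - 1] - 2.0 * c[i] + c[i + 1]) / (dx * dx)
        new[i] = c[i] + d * dt * curvature
    return new

# A step profile: high concentration on the left half, zero on the right.
profile = [1.0e20] * 10 + [0.0] * 10
for _ in range(1000):
    profile = step(profile)
print(["%.2e" % c for c in profile])   # the step smooths out with time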

 

      These equations are in the exact form of the celebrated heat equations made famous by M. Fourier, who worked for Napoleon Bonaparte as a science advisor, or so I was told by my friend, the historian.  The issue was the heating and cooling cycles of cannon.  The heat equation is derived on the assumption that the temperature in the neighborhood of some point in a solid is proportional to the density of the thermal kinetic energy of the atoms in the vicinity and that this energy is conserved as it diffuses from regions of high temperature to regions of lower temperature.  The thermal diffusion parameter D has the same dimensions, Cm^2/Second, and is usually defined in terms of the thermal conductivity of the material, k, Watt/Cm/DgK, the density, ρ, Gm/Cm^3, and the specific heat, c, Joule/Gm/DgC, of the material.  Thus, D = k/ρc, Cm^2/Second, is the thermal diffusivity in a homogeneous solid.  The literature is full of solutions to the heat equation, and canned computer programs for finding solutions in almost limitless situations abound.
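
      As a quick check of the units in D = k/ρc, the short Python snippet below uses approximate handbook values for iron; the numbers are round figures for illustration only.

# D = k/(rho*c): k in Watt/Cm/DgK, rho in Gm/Cm^3, c in Joule/Gm/DgC,
# giving a thermal diffusivity in Cm^2/Second.  Iron values are approximate.
def thermal_diffusivity(k, rho, c):
    """Thermal diffusivity D = k/(rho*c)."""
    return k / (rho * c)

print(thermal_diffusivity(0.80, 7.87, 0.45))   # iron, roughly 0.23 Cm^2/Second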


    [1]My spiritual advisor once observed that most people believe either (1) that they are, in fact, on a roll, or (2) that their luck is about to change.  He also told me that people have a psychological hangup when it comes to uncertainty, or, for that matter, certainty.

    [2]This is characteristic of attrition statistics in situations where "a live one is as good as a new one".  The expected lifetime at any instant is independent of the length of any previous residence.  The history is irrelevant. 

 
