Birthday Borax and An Explanation of Crystal Nucleation

For my birthday a while back, a friend of mine gave me food coloring, a box of borax and spools of thread. I asked what they were for, and she said crystals.

Fig. 1: Borax crystal growing birthday gift (orig.)
Of course! I hadn’t ever seen people grow borax crystals before, only sugar or salt, so I looked up a tutorial on YouTube. The procedure was pretty standard: heat water up to near boiling, dissolve the borax, insert a pipe cleaner or thread and wait overnight for the solution to cool down and precipitate out crystals. But while the procedure is simple, the science behind nucleation is more complicated and pretty interesting.

In order for a crystal to form in solution, molecules of a substance must conglomerate to an adequate size to encourage the spontaneous coordination of more molecules. A cluster of this adequate size is called a "nucleus," and the process of its formation is called "nucleation." Clusters that are not large enough to be nuclei are called "embryos." The nucleation process can be described in terms of Gibbs free energy, which accounts for both enthalpy, related to the nucleus’ internal energy, and entropy. Gibbs free energy is given by the equation

                      1.       ΔG=ΔH-TΔS (H is enthalpy, T is temperature in Kelvins and S is entropy)
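To get a feel for equation 1, here's a quick numerical sketch in Python. The numbers are made up but roughly water-like, purely for illustration; it shows that freezing is favored below the melting point and unfavored above it:

```python
# Sign of dG = dH - T*dS for freezing. Values are illustrative
# (roughly water-like), not measured data.
dH = -6000.0        # J/mol, enthalpy change of freezing (heat is released)
Tm = 273.0          # K, melting point
dS = dH / Tm        # J/(mol*K), using dG = 0 at Tm (equations 3 and 4 below)

def dG(T):
    """Gibbs free energy change of freezing at temperature T (K)."""
    return dH - T * dS

print(dG(263.0))    # below Tm: negative, so freezing is spontaneous
print(dG(283.0))    # above Tm: positive, so freezing is not spontaneous
```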

For a newborn crystal born homogenously (meaning suspended in a medium without contact with other surfaces), there are two main energy changes occurring. The first is a lowering of molecular energy due to the formation of attractions between coordinating substance molecules, for reasons described in Why Cold Drinks "Sweat". This change in energy can be described by the equation

                  2.       ΔG=VΔGv (V is volume, ΔGv is the change in internal energy per unit volume)

To find ΔGv, we will assume that the crystal-solution system is cooled to slightly below the substance's melting point, Tm. At this small undercooling, ΔH and ΔS can be approximated as temperature independent [1]. Note also that at Tm, the difference in free energy ΔG between a substance's solid and liquid forms is 0. Therefore,

                                                                      3.       ΔGm = ΔHm - TmΔSm = 0
                                                                      4.       ΔSm = ΔHm/Tm

Now, assuming that ΔH and ΔS are temperature independent, we can use the same expression for entropy as used at Tm to find the value of ΔGv. While we’re at it, let’s assume the crystal nucleus is spherical for simplicity’s sake. The following are therefore true:

          5.       ΔGv = ΔHfv - TΔSv = ΔHfv - T(ΔHfv/Tm) = ΔHfv(Tm - T)/Tm = ΔHfvΔT/Tm (from equations 1 and 4, per unit volume)
          6.       ΔG = VΔGv = (4/3)πr³(ΔHfvΔT/Tm) (from equations 2 and 5, Vsphere = (4/3)πr³)

The second energy change occurring is an increase in a newborn crystal’s energy because the surface of the crystal is disrupting bonding in the liquid medium around it. This change can be described as

          7.       ΔG = Aγs = 4πr²γs (A is the nucleus' surface area, γs is the surface free energy value
           characteristic of the medium-solid interface)

This expression is just the disruption free energy per unit area multiplied by the surface area of a sphere. Together, these two energy changes dictate nucleation. Putting the two expressions together, we get

          8.       ΔGhom = VΔGv + Aγs = (4/3)πr³ΔHfvΔT/Tm + 4πr²γs (ΔGhom indicates that this equation is for
           homogenous nucleation)
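To see the competition in equation 8 numerically, here's a short Python sketch. The values of γs and ΔHfv below are rough, water-like order-of-magnitude assumptions, not measured constants for borax:

```python
import numpy as np

Tm = 273.0         # K, melting point
dT = 10.0          # K of undercooling (Tm - T)
dHfv = -3.34e8     # J/m^3, volumetric enthalpy of fusion (negative: heat released)
gamma_s = 0.025    # J/m^2, solid-liquid surface free energy

dGv = dHfv * dT / Tm   # equation 5: negative below Tm

def dG_hom(r):
    """Equation 8: free energy of a spherical nucleus of radius r (meters)."""
    volume_term = (4.0 / 3.0) * np.pi * r**3 * dGv   # negative, favors growth
    surface_term = 4.0 * np.pi * r**2 * gamma_s      # positive, opposes growth
    return volume_term + surface_term

r = np.linspace(1e-10, 8e-9, 2000)
r_peak = r[np.argmax(dG_hom(r))]   # radius where the free energy is maximized
print(f"free energy peaks near r = {r_peak * 1e9:.2f} nm")
```

The radius at the peak is the critical radius r* that the following derivation extracts analytically.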

This equation is very useful and can be used to describe nucleation events from borax crystals precipitating to homogenous cloud formation. To tease a little more information out of equation 8, let's see how ΔGhom changes in response to a substance clump slowly growing in radius. This rate of change corresponds to the derivative of ΔGhom with respect to radius r [2]:

                                                             9.       d(ΔGhom)/dr = 4πr²ΔGv + 8πrγs

The point where ΔGhom no longer changes with r corresponds to the peak on the following graph of equation 8 and its component parts, equations 2 and 7.

Fig. 2: Free energy of nucleation as radius increases (Materials Science and Engineering, an Introduction)
Equation 2 is shown as a decreasing curve because ΔHf is negative for crystallization. This makes sense: as a material crystallizes, it releases heat to its surroundings, so the system's enthalpy change ΔHf is negative. To find the critical nucleation radius, we set the derivative to zero:

                                           10.   d(ΔGhom)/dr = 4πr²ΔGv + 8πrγs = 0 (from equation 9)
                                           11.   r* = -2γs/ΔGv (positive, since ΔGv is negative)

Plugging the critical radius into equation 8 and extracting the negative from ΔHfv to avoid confusion, we get

                                  12.   ΔG*hom = -(4/3)π(2γs/ΔGv)³ΔGv + 4π(2γs/ΔGv)²γs = (16/3)πγs³/ΔGv²

This expression describes the magnitude of the energy change needed to get a molecular cluster up to the size of the critical radius r*, a sort of activation energy [3].
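As a sanity check, we can verify numerically that plugging r* into equation 8 reproduces the closed form of equation 12. The values below are again assumed, water-like numbers for illustration only:

```python
import math

gamma_s = 0.025    # J/m^2, assumed surface free energy
dGv = -1.22e7      # J/m^3, assumed volume free energy change (negative)

r_star = -2.0 * gamma_s / dGv     # equation 11, with the sign kept explicit
barrier_direct = ((4.0 / 3.0) * math.pi * r_star**3 * dGv
                  + 4.0 * math.pi * r_star**2 * gamma_s)       # equation 8 at r*
barrier_formula = 16.0 * math.pi * gamma_s**3 / (3.0 * dGv**2)  # equation 12

print(r_star, barrier_direct, barrier_formula)
```

Both routes give the same activation barrier, which is a nice consistency check on the algebra.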

However, homogenous nucleation is relatively difficult compared to heterogenous nucleation, as you may have heard. Heterogenous nucleation occurs when a nucleus forms on a surface, such as a dust particle for clouds or thread for nucleating borax crystals. Because the heterogenous nucleation equations involve more complicated volume and surface area terms as well as extra surface energy considerations, I will not walk through those calculations here. They can be found in reference 3, but the summarized result is that the critical radius in heterogenous nucleation remains the same as in homogenous nucleation while the nucleation free energy barrier is lowered, making the nucleation process more accessible. This is why minimizing dust or rough surfaces is important for growing larger crystals rather than clusters of small ones.

What has been described above is just a small bit of the complexity of crystal growing. Knowledge of nucleation rates and crystal growth is widely used in processes such as tuning metals to have properties fit for specific purposes or growing single-crystal silicon for computer processors, and I'm sure this information will come up again in later posts. But mildly difficult math aside, it's cool stuff, huh? As a treat for reading through this post, watch this fun Minute Earth video on the homogenous nucleation of clouds and see if you recognize some of the concepts we discussed.


I’m planning on growing some borax crystals soon, and when I do I’ll likely write an experiment post about it so be sure to come back and check that out. I’ve already had my first few days of classes and so far it seems that I will be able to continue posting once a week, likely on Sunday or Monday. As always, thanks for reading and I’ll be posting new stuff soon!

Primary Colors: Why One Set Wouldn't Suffice

Colors are a ubiquitous fact of human life. Imagine a world without colors: all of the great masterpieces would be painted in grayscale, that potato could be purple or brown and there would be no more blue skies. Experientially, we are highly familiar with the concept of colors, but I would say it isn't common to understand the more technical side of the world of colors. Let's explore this more analytical side and its applications as we try to answer a question most of us have probably had: why are there multiple sets of primary colors?

At the most basic, colors are categories of light within the visible spectrum that can be described as having either different wavelengths or frequencies since the two variables are directly correlated by the equation

                     1. c=λv (c is the speed of light, λ is wavelength and v is frequency)
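As a quick worked example of equation 1, here's a Python sketch converting a wavelength in the visible range to its frequency:

```python
c = 2.998e8   # speed of light in m/s

def frequency(wavelength_m):
    """Equation 1 rearranged: v = c / lambda."""
    return c / wavelength_m

# 540 nm green light, near the M-cone sensitivity peak discussed below:
print(f"{frequency(540e-9):.3e} Hz")   # about 5.55e14 Hz
```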

The visible spectrum is made up of the rainbow colors described by the acronym ROYGBIV (red, orange, yellow, green, blue, indigo and violet).

Fig. 1: Visible spectrum for humans (Arstechnica)
White is the presence of all wavelengths while black is the absence of light. Technically, there is no physical meaning associated with colors since the color spectrum is defined based on human capacity to perceive and differentiate different colors. That is to say, the visible spectrum and colors would be defined very differently had we been insects able to see UV light [1]. So keep in mind that all of this talk of analyzing colors is human-specific and don’t go off trying to explain it to your dog.

The human eye consists of rods, which perceive low intensity light, and cones, which perceive colors and high intensity light [2]. There are three types of cones, dubbed L, M and S, that respond to different wavelengths of light. The peak sensitivities for these three cone types are 580nm (red), 540nm (green) and 440nm (blue) respectively, adding to a maximum sensitivity at 560nm (in the yellow-green region of the spectrum) [3].
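To make the overlap idea concrete, here's a toy Python model of the three cone responses. The peak wavelengths come from the numbers above, but the Gaussian shape and its width are invented purely for illustration, not physiological data:

```python
import math

PEAKS = {"L": 580, "M": 540, "S": 440}   # nm, peak sensitivities from the text
WIDTH = 40.0                             # nm, an invented curve width

def cone_response(wavelength_nm):
    """Toy relative response of each cone type to monochromatic light."""
    return {name: math.exp(-((wavelength_nm - peak) / WIDTH) ** 2)
            for name, peak in PEAKS.items()}

# 560 nm light excites L and M cones strongly and S cones barely at all;
# the brain reads this combination as yellow-green.
print(cone_response(560))
```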

Fig. 2: L, M and S cone response curves and response sum (Cyberphysics)

This should start to sound familiar to those of you who know the concept of primary light colors or who have ever squinted really hard at a television screen. Aside from these three colors, other colors are perceived by simultaneous stimulation of multiple cone types. The mixing ratios of red, green and blue light needed to perceive each color were actually indexed in 1931, creating the CIE 1931 RGB system [4]. The impact of breaking each color into three values of red, green and blue, called the RGB tristimulus values, is that each color can now be defined in three-dimensional space as a combination of three basis vectors representing red, green and blue relative intensity values. The mathematical derivation can be found in reference 4, but the result is the chromaticity diagram familiar to tech aficionados wanting to know what range of human-perceivable colors their devices can display. Look along the edge of the chromaticity diagram and you should find a color wheel for light.

So far we have one set of primary colors consisting of red, green and blue that has widespread applications in electronic devices, since many of these generate colors for humans to perceive when watching movies or reading billboards and such. But this set of primaries and its corresponding wheel only apply to the production of light by adding ranges of wavelengths together. This is called additive color. When light is absorbed by colored materials via quantum effects, as has been described in Thoughts in Black Ink, the color perceived is the light range that has not been absorbed. To describe the phenomenon of light absorption generating a reflected color, the painter's wheel was invented by Isaac Newton in 1666 [5] with the familiar primaries of red, blue and yellow. What this wheel describes is how subtracting light of certain wavelength ranges stacks up to reflect light of a certain color when starting from ambient pan-frequency white light. However, this is not strictly subtractive color, because the painter's primaries mix to brown, not black, as anyone who has tried to make black paint from the primaries in art class knows. The subtractive color wheel is instead defined with yellow, magenta and cyan as primaries, and should be familiar from the ink cartridges in your printer, which in theory let your computer print black by mixing all three (but dedicated black ink is cheaper).

Fig. 3: Additive and subtractive color (Mac Developer Library)

Why these three colors? It turns out that if you take the three primary colors of light, red, blue and green, and combine them two at a time, you get cyan, magenta and yellow [6]. And since the color of a surface is what the surface doesn’t absorb, each subtractive primary color cancels out one of the additive primary colors until no light is left. And there you have it, the three most common primary color sets.
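The complement relationship above can be written out as a tiny sketch, modeling a color as an (R, G, B) triple of intensities between 0 and 1:

```python
WHITE = (1, 1, 1)
RED, GREEN, BLUE = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def add(a, b):
    """Additive mixing: light sources stack their wavelength ranges."""
    return tuple(min(1, x + y) for x, y in zip(a, b))

def absorbed(surface):
    """A surface's color is white light minus what the surface absorbs."""
    return tuple(w - s for w, s in zip(WHITE, surface))

# Additive primaries combined two at a time give the subtractive primaries:
cyan, magenta, yellow = add(GREEN, BLUE), add(RED, BLUE), add(RED, GREEN)

# And each subtractive primary cancels exactly one additive primary:
print(absorbed(cyan))      # red is absorbed by a cyan surface
print(absorbed(magenta))   # green is absorbed by a magenta surface
print(absorbed(yellow))    # blue is absorbed by a yellow surface
```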

This post was made in response to a comment by my friend Lilia back on the article How Soap Helps Us Clean. I hadn't addressed the comment until now because I knew there would be a biological component to this explanation and cellular biology is not my strong suit, hence the brevity with which I describe the rods and cones of the eye. But if you guys have anything you would like to hear about, feel free to leave suggestions in the comments below and I will do my best to write a post for you. Thanks!

Why Cold Drinks "Sweat"

With a horrible heat wave hitting the Philadelphia area, it's good to think cool thoughts. Already feeling the heat last night, I left a coconut water in the freezer with the intent to drink it but forgot, and so took it to work this morning frozen solid. I figured since it's so hot outside and the metal can is a good conductor, it'd probably melt pretty quickly. And while the ice in immediate contact did melt, the inside remained frozen and I had to cut the top open with scissors to eat it. Before I figured this out, the can had already shed a puddle at my desk. Have you ever wondered why it is that cold things "sweat"?

Fig. 1: My favorite coconut juice brand, Foco (pinstopin.com)

Most of us are familiar with the concept of condensation, having learned about the water cycle in elementary school. We are commonly taught that water exists as vapor at high temperatures, condenses to liquid as the temperature drops and eventually expands (not condenses, as ice has a lower density than water due to hydrogen bonding) into ice as the temperature drops further. In high school, we learn about the ideal gas law and how pressure also affects phase transitions, yielding the phase diagram.

Fig. 2: Phase diagram for water (myhomeimprovement.org)
So from this standpoint, we are all familiar with why cold things "sweat." What else is there to it? While the basic principles stand, there are some other angles from which we can view this phenomenon.

Phase transitions can be viewed as an equilibrium process, as demonstrated by the fact that an ice and water mix maintains a 0°C temperature. In such a mix, the ice melting and the water freezing are competing processes controlled by environmental factors; if you cool the mix, the ice grows, but if heated, the ice melts. Additionally, the entire mix must become either all ice or all water before the temperature can deviate significantly from the equilibrium temperature of 0°C. What's cool about this process is that if we track the energy entering the ice and water mix, say a glass of iced coconut water (let's treat this as an ideal glass of pure iced water), we can predict the corresponding phase transitions based on molecular kinetics.

When bonds are formed, whether strong or weak, we know that energy is released as heat. The reverse is true as well: breaking bonds requires energy. The direction of bond energy transfer can be simplistically remembered by taking into account the conservation of energy in a two molecule, one-dimensional collision. Say two water molecules are moving towards each other and stick together upon impact. Where did the kinetic energy go? Ignoring molecular vibrations, the energy had to have been released to the surroundings as heat. In order to separate the water molecules, we need to get them to move apart, i.e. add energy as heat to give them back kinetic energy. In our glass of iced water, this sort of energy transfer is happening extremely fast and on a large scale, one that can be described by Le Chatelier's principle since the ice and water form an equilibrium.

Fig. 3: Ice-water equilibrium state (JVC's Science Fun)
Now let’s put the iced water outside on a hot summer Philadelphia day. From experience, we know that the ice will melt and the water will become unappealingly warm. If we track the direction of energy transfer, the higher energy hot air must be donating energy to the lower energy iced water simply because this is the default direction of energy transfer in our universe according to the Second Law of Thermodynamics. The added energy must translate into kinetic energy as temperature is positively correlated to molecular kinetic energy. From our two water molecule system we know that a decrease in water molecule association is predicted, favoring water over ice and vapor over water. This manifests as the ice melting and the water warming and eventually evaporating.  

What has been described so far, however, is focused only on the iced water itself. Let's change our basis to focus on the hot air along the iced water glass instead (assume the glass does not hamper kinetic energy exchange between air and iced water). Hot air carries a lot of water, since at higher temperatures water enters the vapor phase preferentially according to Le Chatelier's principle. From the viewpoint of the air, the cold iced water is pulling kinetic energy from it, accordingly cooling the air within a certain range of the glass. Plugging this information back into our two molecule system, the transferred energy must be afforded by reducing the molecular kinetic energy of the air, increasing the probability of water molecules existing in associated groups, i.e. liquid water. And this is why a cold drink sweats in the summer.
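We can even estimate when a drink will sweat: condensation forms when the surface is colder than the air's dew point, which can be approximated with a Magnus-type fit for saturation vapor pressure. The coefficients below are one common published fit; treat the numbers as approximate:

```python
import math

def e_sat(T_c):
    """Approximate saturation vapor pressure (hPa) at T_c degrees Celsius."""
    return 6.112 * math.exp(17.62 * T_c / (243.12 + T_c))

def dew_point(T_c, rh):
    """Dew point (Celsius) for air at T_c degrees and relative humidity rh (0-1)."""
    g = math.log(rh) + 17.62 * T_c / (243.12 + T_c)
    return 243.12 * g / (17.62 - g)

# A muggy Philadelphia summer day: 32 C air at 60% relative humidity.
td = dew_point(32.0, 0.60)
print(f"dew point ~ {td:.1f} C")
# A can of frozen coconut water sits near 0 C, far below this dew point,
# so vapor in the cooled air layer condenses on the can: it "sweats."
```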

Since school is starting up again, I will not be able to post as frequently as I have been during the summer. I will try to post at least once a week, and will probably be doing so during the weekend since this is when time is most available. Please have patience with me on this, and as always thanks for reading!

The Fun of Latin and Ballroom Dancing Explained with Mechanics

Last semester, I picked up Latin and ballroom dancing as a hobby. All through high school I didn't dance, but thanks to the recommendation of a friend I decided to go to free introductory dance lessons. At first it was just fun to socialize, learn some new steps and practice body coordination, but I soon grew to love it. Our Latin dance teacher has a particular habit of describing dance movements in terms of coordinates and physics; rumba walks require your center of mass over your front foot for balance and look best if you extend the axis running from your shoulder to the opposite leg as far as possible. This mentality has slowly crept its way into my brain, and eventually I realized that, hey, dance really is about physics! You don't know it while dancing, but every movement is purposeful when viewed through a pair of physics-colored safety glasses.

When first starting to dance, everything is about mechanical control of your body and constantly asking the question: where is my center of mass (let's abbreviate it as COM) right now? Of course you wouldn't consciously ask yourself that, but even without asking, your body will give the answer. While posing still, if your COM is not somewhere well supported by your limbs, you'll fall down, because your bodyweight generates a torque about your point of contact with the floor. On a single limb, your COM needs to be over the supporting limb to stay balanced. On more than one limb, your COM should be somewhere between the limbs based on how your bodyweight is distributed towards each (unless the floor is slippery. Then it's best to choose a limb to center over, otherwise your supports will simply slide apart and you'll hit the floor). When you begin to move horizontally, core muscles contract in order to support your center of mass on its journey from one place to another through fluid motion.

Spinning generates torque along other axes and accordingly makes you conscious of where your body axes are. Since torque acts along levers originating at your COM, all body axes will include your COM. Torque is defined in parallel to Newton’s second law by the equation

                                                              1.       τ=Iα    (torque equation)

where τ is torque, I is the moment of inertia and α is angular acceleration. The moment of inertia is related to the distribution of your body mass in space around a rotational axis. While spinning on an axis perpendicular to the ground like a ballerina or ice skater, contracting your arms from an extended position reduces your moment of inertia and increases your angular acceleration for the existing torque. And expanding your body to increase inertia while tilting your axis slightly can slow your spin and generate a simultaneous forward momentum to get you out of the spin and into horizontal motion. Manipulating the position of your center of mass so that it remains relatively stationary while your body moves around it under torque is what allows break-dancers to do such complicated flips. Dancing is all a game of mechanics.
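A complementary way to quantify the arm-contraction effect is through conservation of angular momentum, L = Iω, which holds while no external torque acts. Here's a toy Python model of a spinning dancer, a cylinder for the torso plus point masses for the arms; all numbers are invented for illustration:

```python
def moment_of_inertia(m_torso, r_torso, m_arm, arm_distance):
    """Cylinder about its axis (0.5*m*r^2) plus two arms as point masses (m*r^2)."""
    return 0.5 * m_torso * r_torso**2 + 2 * m_arm * arm_distance**2

I_extended = moment_of_inertia(60.0, 0.15, 4.0, 0.9)  # arms out, ~0.9 m from axis
I_tucked = moment_of_inertia(60.0, 0.15, 4.0, 0.2)    # arms pulled in close

omega0 = 2.0                                   # initial spin rate, rad/s
omega1 = omega0 * I_extended / I_tucked        # I0*w0 = I1*w1 with no torque
print(f"spin speeds up by a factor of {omega1 / omega0:.1f}")
```

Even this crude model shows why pulling in your arms produces such a dramatic spin-up.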

Fig. 1: Dancing as an act of balancing forces (University of Illinois at Urbana-Champaign)

What makes Latin and ballroom dance special to me is that all of the dances are partner dances. This means double the complication, but also double the fun! Instead of your COM, your tandem COM is what counts, and all movements and spins are done together accordingly. What's even more exhilarating is that your tandem COM is outside your body, so when you go for a partner spin you'll both be spinning about an axis between the two of you (assuming equal mass). In Latin and ballroom dance, a concept that is stressed is partner connection. This means making sure both of your bodies push against each other at the point of contact in a way that conveys information about how each of you is moving. What this also does is make the tandem COM more stable. When partner connection is weak, your tandem COM can constantly dissolve and reform. This makes both partners unstable and forces them to rely on knowing their own COMs to maintain balance. A strong connection means both partners can fully commit to the unwavering tandem COM and structure each motion together. And believe me, you can feel this. It's really exciting when you and your partner start to move as one, and not having to worry about your own balance allows you to focus on the art of the dance.

This post is a little less reference and math heavy than some of the past ones, which I thought would be a nice break. I just wanted to share what makes Latin and ballroom dance such a special hobby to me. Dancers out there, please let me know if you think my description does dance justice in the comments below. Other Thinkers, I wanna hear your thoughts too. And check out a few minutes of this video of world-class Latin dancers. Thanks guys!


It's Just Plastic (But What IS Just Plastic)

I have a question for you. We use the stuff all the time, but do you really know what plastics are? I mean really know. I definitely didn't until this past year when I took a course on materials science that covered the nature of polymers beyond what is traditionally taught about just joining a bunch of monomers together into strings. So let’s take a closer look at a material that has revolutionized the modern age.

Plastic is a broad term for a number of synthetic polymers with varying structures, properties, origins and chemical compositions. The properties of plastics vary according to three major factors: chemical composition, tacticity and structure. Chemical composition as a factor is fairly obvious, since the different types of plastic we make vary in makeup. Composition affects properties of a plastic such as the range of temperatures in which it maintains its structure. Polyethylene's (PE, Nos. 2 and 4) melting temperature (Tm) is around 115-130°C while polypropylene's (PP, No. 5) Tm is around 130-170°C. The only difference between these two polymers is an extra methyl group on every other backbone carbon in the case of PP. Chemical composition also informs how a polymer is produced. Polyethylene terephthalate (PETE, No. 1), the plastic most water bottles are made of, is synthesized through the esterification of terephthalic acid with ethylene glycol (or the transesterification of dimethyl terephthalate), while many other polymers such as PE, PP and polystyrene (PS, No. 6) are produced by free radical polymerization. Copolymers, or polymers made with more than one monomer type, also exist in a variety of forms (block, graft, random and alternating) with more limited usage.

Fig. 1: Free radical polymerization of PS (California State University, Dominguez Hills)


Polymer tacticity is another property of plastics, but one you usually don't hear about. Tacticity relates to the stereochemical organization of monomer units within polymer strands. There are three categories of polymer tacticity, in order of increasing variability: isotactic, syndiotactic and atactic. Isotactic polymers have all of their monomer side groups oriented the same way along the backbone, syndiotactic polymers alternate orientation with every other monomer unit and atactic polymers are oriented any which way.

Fig. 2: Isotactic, syndiotactic and atactic polymer models (University of California, Davis)

Different synthesis methods affect the tacticity of the produced polymer; for example, the use of metal-catalyzed polymerization reactions can enable more stereospecific polymer synthesis [1]. Tacticity mainly affects whether or not a polymer can form crystalline regions. Isotactic PS, for example, is more likely to form crystalline regions through pi-stacking than atactic PS is. Plastic performance is also sometimes affected directly by tacticity, as with isoprene rubber. Natural isoprene rubber is isotactic in that it is comprised entirely of cis double bonds, which make it springy. Atactic isoprene rubber and isotactic trans-polyisoprene rubber are relatively unspringy [2].

A plastic's three-dimensional polymer structure has a lot to do with the macroscopic properties of a plastic. Polymers exist as spaghetti-looking chains of controllable length that bundle and stack into a giant mess. Each chain can be linear, branched, dendritic or cross-linked to other chains. When pulled upon intensely, linear polymer chains slide out from underneath each other and cause plastic deformation. Branched or dendritic polymer chains can make this harder to do, and cross-linked polymer chains, like vulcanized isoprene rubber, can be extremely hard to plastically deform (not a pun, I swear. It’s a type of deformation, along with elastic, related to a material’s tensile strength).

Fig. 3: Some possible polymer morphologies (Nature)

But if this is true, why does high-density polyethylene (HDPE, No. 2), made of linear polymer chains, have a higher tensile strength than its branched cousin low-density polyethylene (LDPE, No. 4)? This is due to crystal packing. When polymer strands organize, they form microcrystalline regions surrounded by amorphous, glassy regions.

Fig. 4: Crystalline and amorphous polymer structure (Polymer Science Learning Center)
Being linear allows HDPE to organize into a higher crystalline content than LDPE, and the extra intermolecular forces associated with these crystalline regions make HDPE harder to deform than LDPE. Fun fact: polymer crystallinity also has an effect on opacity, in the same way the amorphous nature of glass causes its transparency; crystalline regions are generally more opaque than amorphous regions.

Due to their low capacity for biodegradation, the ubiquitous use of plastics in our society poses a serious problem for many environmentally-concerned individuals, and for good reason. Current plastic recycling programs are, to be frank, lacking, and this is largely brought on by the economic limitations of recycling. Cultural attitudes towards sorting trash and recycled products also affect the efficacy of recycling programs. However, grass-roots activism and local- and national-level legislation are quickly changing opinions on the matter of plastic recycling. Knowing which plastics are recyclable (often only PETE, No. 1, and HDPE, No. 2, in many areas) will help reduce the cost of sorting recyclables, and public pressure for wider recycling programs may light a fire under companies to increase recycling, assuming the science exists. Research related to the synthesis of biodegradable polymers has picked up in recent years to produce products like polylactic acid (PLA, No. 7 (other)), but subsidies and funding are always welcome in science. If you're interested, these are things to talk to your local government representative about.

So there you have it, a brief overview of the chemical and structural background of plastics (and a little bit on the side). Plastics are quite complicated and therefore impossible to talk about in depth through any medium shorter than a book, but if you want to hear more about specific aspects of polymers let me know in the comments and I’ll be happy to do so. Thanks!

Thoughts in Black Ink

The only foreign language that I claim to know is Japanese. I started studying Japanese in elementary school and took courses throughout middle and high school as well. Simple Japanese books, everyday conversations and Japanese YouTubers are fine for the most part, but vocabulary is still a problem for me. While at work the other day, I decided to practice some new terms I read on a blog post.

Fig. 1: Vocabulary practice (orig.)

Papers covered in words like this are scattered all over my room. Sometimes the need to write just gets to you, you know? Staring at the scribbled words, I suddenly thought, what is this stuff I’m writing with anyways? 

Ignoring the fancy colors, black pen ink is a relatively simple composition made of carbon black suspended in drying oils, alcohols or petroleum products [1]. Carbon black is carbon powder obtained by burning organic matter. This type of black ink has been around for thousands of years, and I use another form of the same stuff while doing Japanese calligraphy. East Asian calligraphy ink is traditionally made of pine soot (carbon black) mixed with bindings and herbs. This page provides a more in-depth description of how sumi, or a Japanese ink stick, is made.

Fig. 2: Carbon black sample molecular structure (Bruno Glaser)

So why is carbon black... black? Writer Maggie Koerth-Baker does a good job of describing this. In organic chemistry, pi-conjugated systems, or systems where electrons of sp2 hybridized atoms are delocalized in combined p-orbitals, can exhibit a wide range of light absorption and emission properties. This is because of the many possible bonding and antibonding p-orbital orientations within the conjugated system.

Fig. 3: Molecular orbital states for benzene (Michigan State University)

When we talked about why excited metals emit colored light back in The Dazzling Chemistry of Fireworks, we mentioned that the Fermi level of metals is located within an electronic band, meaning that electron transitions between the highest occupied energy level and other higher energy electron states are possible over a range of values. The many available higher energy electron states are not unlike the many possible organizations of bonding and antibonding orbitals in pi-conjugated systems. An incident photon on a pi-conjugated system can excite the shared p-orbital system into one of the possible antibonding arrangements, which later reemits the light energy at lower energies, i.e. longer, sometimes non-visible, wavelengths. Pi-conjugated systems that include more atoms result in more complex molecular orbital states. And when conjugated units of varied length exist together in a material, then on the whole even more electronic states are possible! All of these electron transitions cooperate to make the familiar black ink of pens (graphite pencils too, just think about it).

If you enjoyed this post and those I've written before, share your thoughts in the comments to let me know!

On The Definition of Sustainable Energy

Let’s finally cover what I meant by truly sustainable energy production.

Listing electrical energy generation methods portrayed as sustainable, many would consider solutions such as solar power, wind turbines, hydroelectric dams, geothermal plants, solar farms, wave generators, nuclear fission and nuclear fusion to fit the image of futuristic green power. But while all of the above may be green, not all of them are sustainable. To me at least, it seems that green energy is measured against the present as a benchmark: anything that produces fewer greenhouse emissions or causes less ecological destruction than methods currently employed is considered "green." However, sustainable energy production is a more definite construct that can be described through thermodynamics.

From thermodynamics we know that every real work-energy conversion in our universe is irreversible, meaning that some energy is lost as heat every time something is done. Taking planet earth as a giant engine, which is accurate in the sense that most life processes convert available energy into work and expend heat, we can conclude that the earth needs constant energy input in order to keep running. Aside from ambient radiation from other sources in space, the sun is the only source of energy earth knows consistently from day to day. Many of us understand this experientially, contrasting the death of winter, when days are short, with the blossoming of spring, when long hours of sunlight return.

Fig. 1: Boston sun angle during the seasons, affecting solar energy influx (Science Blogs)

Therefore, in order to be sustainable within the lifetime of our sun, an extremely long life expectancy for planet earth, our energy generation tactics should be geared towards the sun's life-giving energy. Does this mean that solar is the only way to go? No, absolutely not. The sun's energy hitting the earth sets off a grand cascade of events that drives the winds, helps generate the tides and keeps the water cycle flowing. This extends the definition of sustainable energy generation to include wind power, wave power and hydroelectric generation.

Another side category of what I would consider sustainable energy sources includes those whose failure would coincide with the end of life on earth. One example is geothermal energy. Sure, eventually the earth's core may cool down, but that scenario comes with bigger problems than humans running out of electricity, such as the dissipation of the earth's magnetic field letting a bombardment of solar radiation char the planet. Wind power may also be considered part of this category, as wind is in part shaped by the Coriolis effect, which depends on the earth continuing to spin. If the earth stopped spinning, bad things would ensue.

So what is commonly perceived as sustainable electricity generation but not covered under this definition? The top two that really get to me are nuclear energy and fusion energy. The selling point of nuclear energy is that it is clean in terms of greenhouse gas generation. However, clean does not directly translate into sustainable. The input into nuclear reactors is uranium, and lots of it. Uranium is not an uncommon element on earth, but a large 1,000 MWe nuclear power plant requires about 200 tons of refined uranium to operate for one year [1]. Refining uranium is itself a wasteful process, and mining large amounts of ore has an ecological impact as well.

Nuclear fusion is often sold as the sustainable energy solution of the future, but in actuality nuclear fusion requires the same input as the sun does: hydrogen. With so much hydrogen on earth, why would this be a problem? Call me paranoid, but any time someone wants to convert something necessary for life into a luxury item (yes, electricity is a luxury item) and some useless helium, I begin to worry. And relative to the sun's supply of hydrogen, earth's hydrogen bank is chump change. The hydrogen fusion reaction that releases the most energy is that of deuterium (₁H²) with tritium (₁H³), as described by this equation [2]:

                                                        1.  ₁H² + ₁H³ → ₂He⁴ + n (a neutron)

If we calculate the mass difference

       2.  m(₂He⁴) + m(n) – m(₁H²) – m(₁H³) = 4.002602 + 1.008665 – 2.014102 – 3.016050
            = 5.011267 – 5.030152 = –0.018885 amu

then insert this mass difference into Einstein’s mass-energy equation

                  3.  ΔE = Δmc²
                  4.  Δm = 0.018885 amu x (1.66053892 x 10⁻²⁷ kg/amu) = 3.135928 x 10⁻²⁹ kg
                  5.  ΔE = (3.135928 x 10⁻²⁹ kg)(3 x 10⁸ m/s)² = 2.82 x 10⁻¹² J/He nucleus formed

and convert the result into a meaningful number

       6.  (2.82 x 10⁻¹² J/He nucleus)(2.778 x 10⁻⁷ kWh/J)(1 He nucleus formed/2 H nuclei consumed)
            = 3.92 x 10⁻¹⁹ kWh/H nucleus consumed
            (3.92 x 10⁻¹⁹ kWh/H nucleus)(6.022 x 10²³ H nuclei/mol H)(1 mol H/((2.014102 + 3.016050)/2 g))
            = 9.39 x 10⁴ kWh/g H

we get that 9.39 x 10⁴ kWh of energy can be generated from one gram of hydrogen (assuming the hydrogen is half deuterium, half tritium by nuclei). Considering that the average American consumes 10,908 kWh annually [3] and that the population of America is about 321 million [4], approximately 37,300 kg of hydrogen would be consumed annually to support America alone by fusion energy. This is not zero input.
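Arithmetic like equations 2 through 6 is easy to mis-transcribe, so here is a short script that reproduces the calculation end to end, using the same isotope masses and conversion factors as above. It is only a sanity check of the numbers, not new physics.

```python
# Reproduce the D-T fusion energy calculation: energy per reaction from the
# mass defect, then kWh per gram of a 50/50 deuterium-tritium fuel mix.

AMU_KG = 1.66053892e-27   # kg per atomic mass unit
C = 2.998e8               # speed of light, m/s
J_TO_KWH = 2.778e-7       # kWh per joule
AVOGADRO = 6.022e23

m_d, m_t = 2.014102, 3.016050   # deuterium, tritium masses (amu)
m_he, m_n = 4.002602, 1.008665  # helium-4, neutron masses (amu)

defect_amu = (m_d + m_t) - (m_he + m_n)   # mass lost per reaction, amu
e_joules = defect_amu * AMU_KG * C**2     # E = mc^2, joules per reaction
print(f"{e_joules:.2e} J per He nucleus formed")   # ~2.82e-12 J

# Each reaction consumes 2 hydrogen nuclei; average mass per nucleus for a
# 50/50 D-T mix is (m_d + m_t)/2 amu.
grams_per_nucleus = (m_d + m_t) / 2 / AVOGADRO
kwh_per_gram = (e_joules / 2) * J_TO_KWH / grams_per_nucleus
print(f"{kwh_per_gram:.2e} kWh per gram of fuel")  # ~9.4e4 kWh/g

# Annual US demand, per the per-capita and population figures cited above
us_kwh = 10908 * 321e6
print(f"{us_kwh / kwh_per_gram / 1000:.0f} kg of D-T fuel per year")
```

Running it lands within rounding of the hand-worked values: about 2.82 x 10⁻¹² J per reaction, about 9.4 x 10⁴ kWh per gram, and roughly 37,000 kg of fuel per year for America.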

All in all, what I am trying to say is that science has laid out a very specific definition of what sustainable energy truly is, so it is the job of every person to evaluate whether what is being portrayed as sustainable energy generation is truly backed by the facts.

Does your definition of sustainable energy sourcing differ? Let me know your thoughts in the comments below!

A Quick Coffee Break

I know that I promised an explanation of what I meant in the last post by truly sustainable energy, and I still owe you guys that. But I wanted to take a quick break from environmental talk to discuss something of great importance to me: coffee. Every day on the way to work, I stop by Dunkin' Donuts to pick up a small iced dark roast coffee with cream and sugar. I'm there frequently enough that the employees have begun to recognize a pattern in my order (which basically doesn't change). Some people aren't coffee drinkers at all, and neither was I until I started college. The class that broke me was MSE 220. Don't get me wrong, it was one of the best classes I've ever taken. It's just that a dark, warm classroom was more conducive to my nodding off than to my concentration.

Fig. 1: The usual Dunkin' Donuts morning coffee (orig.)

So let's back up this post with a bit of content: coffee's main active ingredient, as I'm sure you know, is caffeine. This relatively small organic molecule is the most widely consumed psychoactive drug in the world and makes an appearance in many foodstuffs, especially beverages. As a drug that promotes dopamine signaling, it can be a source of addiction, though its withdrawal symptoms are much milder than those of controlled substances. The caffeine content of various coffees, teas and sodas can be found here if you're interested.

Fig. 2: Caffeine and adenosine molecules (University of Texas at Austin)
Caffeine works by competitively binding with adenosine receptors on neurons and is therefore deemed a receptor antagonist of all adenosine receptor types (A1, A2A, A2B and A3) [1]. Adenosine is a central nervous system neuromodulator that binds to receptors on neurons, slowing neuronal activity, dilating blood vessels and causing sleepiness [2]. I’m not good at tracking signaling pathways, but information on the adenosine pathway can be found here. When caffeine binds to the adenosine receptor, adenosine can no longer bind to slow neuronal activity and sleepiness is defeated. The increased neuronal activity caused by caffeine also induces the pituitary gland to secrete adrenaline and institute a state of “fight or flight” activity, another way caffeine energizes the sleepy world round [3].
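The competitive binding described above can be put in simple quantitative terms. The Gaddum relation is the standard pharmacology expression for how an agonist's receptor occupancy falls as a competing antagonist is added: occupancy = (A/K_A) / (1 + A/K_A + C/K_C). The function below is a minimal sketch of that relation; the concentrations and dissociation constants are made-up illustration values, not measured affinities for adenosine or caffeine.

```python
# Competitive antagonism at equilibrium (Gaddum relation): the fraction of
# receptors bound by the agonist (adenosine) drops as a competitive
# antagonist (caffeine) occupies the same binding site. K values are
# arbitrary illustration units, not real affinities.

def adenosine_occupancy(adenosine, caffeine, k_a=1.0, k_c=1.0):
    """Fraction of receptors bound by adenosine in the presence of caffeine.

    adenosine, caffeine: concentrations; k_a, k_c: dissociation constants,
    all in the same arbitrary units.
    """
    a, c = adenosine / k_a, caffeine / k_c
    return a / (1 + a + c)

# With no caffeine, one K-unit of adenosine occupies half the receptors;
# adding the competitor pushes adenosine occupancy (and sleepiness) down.
for caff in (0.0, 1.0, 4.0):
    print(caff, round(adenosine_occupancy(1.0, caff), 3))
```

Note that a competitive antagonist only dilutes the agonist's hold on the receptor; enough adenosine can always win the site back, which is consistent with caffeine wearing off as adenosine accumulates.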

Besides caffeine, there is other chemistry going on in coffee as well. A morning coffee is a good source of antioxidants (good for reasons mentioned in A Bit on That Shampoo Vitamin) and has been identified as the number one source of antioxidants in America, followed by black tea [1]. Coffee also has a nice hearty flavor, especially with my dark roast coffee, that comes from caramelization and Maillard processes (discussed back in Party Science, Part 3: The Tasty Grub) as well as the subtle bitterness of caffeine. And there you have it, a small cup of coffee science.

I’m going to try posting more photos that I take myself, but I’m not particularly trained in photography and only have my phone camera for the moment. Regardless, let me know what you think of the photos and if you have any photography tips I’d love to hear them. Anything helps!