Thoughts on Science, etc.

Friday, March 27, 2015

I have been thinking about this subject for a while. How is coauthorship determined for a scientific paper? Are there any standards set by the community by which everyone abides? The motivation for this was the recent publication of two papers by my former group at NIST, on which I was not listed as a coauthor. NIST has decided to enforce strict guidelines for authorship - one must make a "significant intellectual contribution" to the work. (This guideline is supposed to apply generally, but is often not followed.) Of course, "significant intellectual contribution" is vague and ill-defined. A significant contribution that is not "intellectual" can be overlooked for authorship purposes. Technicians and staff are incredibly important to the generation of scientific knowledge, but are not awarded authorship because their contributions are often not "intellectual." As a note, this often leads to authorship discrimination based on formal educational attainment - a PhD can be given coauthorship while someone with a bachelor's degree who does very similar support work on the same project will not.
"Significant" is then next term to discuss. If we talked about the project and the proper way to analyze data, does that count? What about deriving an equation from a known source? Simply being in a position of power on a given project, but contributing essentially nothing to experimental design, sample preparation, and data analysis? Some would argue that these are not significant. Some might draw an arbitrary line (well, we all have to draw an arbitrary line somewhere).
Problems arise when different institutions have different standards for coauthorship. If one individual works in a group with more lax standards, that individual will produce more papers and their CV will look better. Because the general standards for judging scientists often involve total number of publications, this can be problematic for scientists competing for jobs. Simply working in an institution with more stringent publication standards, including how many publications should be written for a given data set (some groups publish essentially the same data in multiple journals for slightly different audiences), can hinder an individual's career for good.
When I was at NIST, management was cracking down with more stringent standards for authorship. This is fine for an institution that provides full-time, permanent jobs for its staff - they need not worry about finding a new job and competing in the market. However, for postdoctoral researchers who will be let go after two years no matter what (as happens now that money has been removed from the system), it can be a disaster.
A final note, as to what motivated me to think about these issues, was reading a recent paper published by a former colleague in a very high impact journal in our field. This colleague asked me to derive an expression so he could compare it to experiments. I did this, went through the derivation with him, and pointed out differences from some other groups' work, which he was using to fit the data, arising from the unique geometry of his samples. (I know the changes were crucial because the original version of his work, which was initially rejected by Nature, is still on the arXiv, and it includes the old expression. The version published in Physical Review Letters has the new expression, based on my input.) In addition, the supplemental material refers to work that I performed, both intellectually and physically, some of which was published elsewhere. And, while I was aware that the work was being published, I was not given access to the prepublication versions. Now, we can argue about the intellectual significance and merit of my contributions. I personally don't particularly care about the acknowledgement so long as the science gets done - but I am realizing that I might be too easy-going (for my own career) on the topic.
Saturday, July 19, 2014
Inspired by recent Science magazine cover and backlash
A recent issue of “Science” magazine (July 11, 2014) featured transgender Indonesian sex workers (or at least their bodies below the chest) on its cover. The cover photo referenced a topic heavily discussed in the issue: HIV transmission and control. The implication is that Indonesian transgender sex workers (presumably one example among many groups) have an HIV problem – that as a group they spread the virus, and governments are not doing enough to prevent the spread and provide treatment. Complaints about the nature of the cover, made through Twitter, revealed that some editors are clueless with regard to gender issues and sexuality.
Numerous problems with the cover exist. Putting sexualized images on what is supposed to be a purely scientific magazine is questionable. Sexualized images of a heavily oppressed minority group, with the implication that they are responsible for spreading HIV, are incredibly degrading to them as human beings (never mind that we cannot see their faces, which is even more dehumanizing). I do not know whether the individuals depicted on the cover consented to being portrayed this way. Overall, the cover dehumanizes and vilifies this group of people.
How could this have happened? Doesn’t “Science” magazine have editors and people who might look at the cover before it goes to print and object? In principle, yes. In practice, apparently not, and this highlights problems within the scientific community – problems of privilege, closed-mindedness, and failure to educate about these things. This post will not deal with specifics regarding this incident, but will analyze overall attitudes within the community of scientists.
Science, as a discipline, likes to think of itself as meritocratic – scientists get where they are due to hard work and intelligence, not because of affirmative action or luck. However, we cannot ignore the fact that virtually all higher-level scientists are heterosexual males, specifically ones who do not recognize their own privileged status (recognizing it would puncture the myth of advancement purely on merit). As such, heterosexual male norms are the cultural standard – including viewing women as sex objects, using homophobic slurs to describe people, etc. (I have personal experience with this. Less so in graduate school, but at my most recent position at a government lab it was rampant. We had one woman in a group of 30 or so scientists. “Pussy” and “candy-ass,” among others, were used by scientists to describe colleagues. “My grandmother could [verb with comparative adjective, such as run faster] than you” was bandied around. Touching on race as well, one scientist remarked, regarding putting a sample in a small gap, that we should hire a small Asian girl with slim fingers to do it. Another told me that he wanted to work in Boulder rather than Silicon Valley because of the higher female-to-male ratio. Never mind that women scientists and tech workers are more prevalent in Silicon Valley than at that particular lab in Boulder – to him, women are there to date, not to be colleagues. I, as a human-rights-supporting, heterosexual white male, was often offended, told I shouldn't be, and felt I could do nothing about it because it was ingrained in the culture.)
These attitudes can be positively reinforcing (in the technical sense of reinforcement, not the everyday sense of “positive”). Such cultural aspects of the scientific community can drive away groups who are made to feel marginalized – women, gays, some ethnic/racial minorities, people with mental health problems such as depression and anxiety, even men who simply do not like or fit into the culture. Because of the myth of meritocracy and the need to justify outcomes, the ones who remain must justify their position, oftentimes concluding that the marginalized groups are simply weaker – which leads to further marginalization in casual conversation, and the continued reinforcement of the culture. The feeling that such groups are weaker (with manifestations such as being seen as stupid, lacking work ethic, etc.), and the fact that interactions between scientists and others become very limited, can lead scientists to dehumanize certain groups – the groups are there to be studied, used for sex, or paid low wages – simply because they are not good enough to be scientists. (Scientists tend to think of themselves as better than the general public, for the reasons referred to above. Egotism is a huge problem.) The lack of understanding and empathy described here is concerning, for one because many people who would become scientists will choose not to or will be purged from the pipeline, and for another because insensitivity leads to outreach problems like those represented by the “Science” cover – a very marginalized group (transgender people) is put on display and dehumanized without a second thought.
This is a problem, and some of us are able to admit it is a problem (many will not). One solution is broader education for scientists. Make them take a [fill in group]-studies class. Have them read literature about minority experiences and privilege. It will be a slow, generational process of acceptance, requiring older opinions (many people educated in the 1960s, when sexism and racism were prevalent throughout society, still hold active leadership roles in science and have passed their attitudes on to those now in their 30s and older) to retire and younger opinions to take over. (I was surprised by people’s attitudes when working at the government lab. My college experience involved many lectures and workshops on privilege, discrimination, and casual -isms. Courses I took on cultural anthropology and Black literature opened my eyes, as did talking to people who grew up less privileged than myself. If not for that educational experience I would probably be part of the problem. I expected everyone to have had similar learning opportunities, but I was mistaken. Working with those people was, in some sense, a good experience, as it showed me that certain attitudes are rampant even among the supposedly highly educated. Knowing that things were better in graduate school than among older workers is encouraging for the future, but it may just be that I went to a relatively progressive graduate school.)
As a final note, turning large segments of the population off of science because of its internal culture cannot be good for society. We want more educated people pursuing scientific knowledge, and we want to encourage the best, brightest, and most motivated (truly motivated, not merely fitting the artificial categories that make someone seem motivated within the culture) – and these characteristics are not limited to white and Asian heterosexual men.
Monday, July 14, 2014
Richard "Tricky Dick" Feynman, and legacy
Richard Feynman’s legacy has been discussed recently in regard to the fact that he was kind of a womanizing, sexist jerk, and what that means for how we judge his scientific work. Richard Feynman, among modern physical scientists, is hailed as a sort of god who walked the Earth. He was instrumental in formulating quantum electrodynamics and interpreting quantum mechanics, and was an excellent lecturer and teacher, providing good analogies and qualitative understanding of phenomena. He also spent much of his spare time going to bars to pick up women, picking up undergrads at orientation parties, and other sexual exploits (details can be found in his autobiographical books like “Surely You’re Joking” and “What Do You Care What Other People Think”). He was also rather condescending toward “lesser” fields such as psychology and the humanities.
Is this relevant? At some level, yes. Hailing Feynman as a hero among physical scientists inherently includes his personal life, which we should denounce for what it was. Many young scientists will read about his personal life and think how cool he was, and this can only lead to more gender disparity in the physical sciences and to the feeling that the physical sciences maintain a culture of machismo, hostile to women, homosexuals, and some minorities. (As a note, my high school physics teacher talked about how awesome Feynman was for picking up women in bars. I did not realize it at the time, but that attitude is definitely hostile toward women in science, and toward women as anything more than sexual conquests in general.)
Feynman did come from a different time, so maybe we should cut him some slack on that front, but we should still be able to say that such behavior is unacceptable in today’s world. Should this discount his purely scientific legacy? Absolutely not. His work was fundamental and instrumental to our understanding of quantum mechanics, the modern foundation of, well, a lot (solid-state computers, among other things).
Plenty of other individuals have had personalities that today we would find despicable, and plenty of us today do things that 50 years from now might be frowned upon. If we discounted accomplishments based on whether the accomplisher was a jerk, progress would halt (I do not know the details, but we might have to remove the wheel, or Kepler’s laws, or the literary works of Hemingway from human knowledge). An advancement is an advancement, regardless of the personality of the discoverer, and we need to accept that.
Should we take measures to prevent some of these personal attitudes in the future? Of course. We can even hold people like Feynman up as examples not to be emulated in terms of personal life. We all have personal lives and professional lives, and we often want to keep them separate (though that is becoming more difficult these days). When discussing Feynman (or Einstein, or Hawking, or any number of other jerks who happened to do good science), focus on the science only. When doing outreach and mentioning these individuals, focus only on their scientific work. Then perhaps their legacies will be limited to scientific exploits, not personal ones. If you need to mention personal exploits, discuss within that context why they are not to be emulated. (Discussions along these lines also help in getting across concepts of privilege, which many people struggle to acknowledge.)
Tuesday, July 1, 2014
SciFi
I’ve been reading some old science fiction recently, and it got me thinking about how the world is now versus how people in the 1950s-1970s thought it would be, and why many predictions have not met expectations. It has also made me realize how far our knowledge of biology and physics has come (or not come, as the case may be).
Back when much of the “classic” science fiction (Asimov, Bradbury, Dick . . .) was written, we were in the Cold War, with nuclear attack seemingly imminent. We were also making huge strides in understanding fundamental physics, with new particles and elements created regularly. It was the beginning of the study of biology at the molecular level, the structure and function of DNA having recently been deduced. Since then, little truly new knowledge in fundamental physics has been gained – the standard model, developed in the 1960s-70s, has held firm, grand unification has not happened, and the main discovery has been that the universe is not only expanding but accelerating in its expansion.
According to the literature of the time, by now we should have colonized Mars and possibly extrasolar planets, thanks to improvements in rocket technology and the ability to overcome special relativistic limitations. Such new physics has not emerged, so of course neither has the corresponding technology. This may also help explain why no alien species has yet visited us. If new physics exists, it must lie at higher energies (unattainable with even our most powerful colliders), and any civilization would have to spend exorbitant amounts of energy to discover it – and even more energy to power any technology utilizing it. This, plus the great distances to travel, plus not knowing where you would want to go, makes alien encounters nearly impossible.
The other theme that appears in many works is robotics. We should by now have walking, talking robots, nearly indistinguishable from humans, either to serve us, to ease loneliness, or to rise up and try to kill us. While raw computational power has increased (possibly more than predicted), artificial intelligence has not kept pace. We just recently had a chatbot “pass” the Turing test (though, looking at some chat transcripts, the judges who were fooled must have been fools), and in any case most judges were clearly not fooled. Siri and the Jeopardy-playing Watson can pick out key words but have not mastered true conversational speech. We do not yet have self-driving cars, due to the difficulty of machine learning – the automated driver must learn certain things and discard others, and deciding which information to discard is difficult. The human brain does this automatically – which just shows how far we are from knowing how the human brain processes and retains information. The old science fiction writers thought we would have figured that out by now, and would have mood-altering and memory-erasing/implanting technologies available. Once we know how the human brain works, designing artificial ones should be straightforward – and we are slowly moving in that direction. However, the artificial brains in science fiction always have imperfections, intentional or not, so that one can distinguish real humans from artificial ones, even when the artificial ones are biological in nature. Much (though not all) old science fiction retained the notion that humans are special in some way – they can have empathy, or feel love, or otherwise have emotions beyond simple biochemical processes in a way that artificial life cannot – though there is very little scientific basis for that notion.
This brings us to the treatment of “life” in the past. When I was in school, “life” was defined as something that uses energy, grows, and reproduces. Now the definition of “life” is controversial, e.g. are self-replicating molecules “alive?” Alien life in most old science fiction was also compatible with terrestrial life. We shared the same biochemistry, could contract the same diseases, could even reproduce together in some stories. Modern knowledge shows that this is unlikely to be true. Even if alien life is carbon based, there is almost no way that we would share the same biochemical processes, such as use of ATP as an energy currency, the same genetic code, the same amino acids, or even the same chirality of biomolecules. All this would make an alien encounter fairly useless – neither species could use the other for food, reproduction, or anything besides raw chemical fuel – completely unlike what is portrayed in any alien-encounter science fiction of which I am aware. Part of the explanation may be that these realizations are recent – the next generation of writers may include such caveats. It may also be that stories where aliens arrive and simply die from oxygen exposure are not interesting. But we do know better now, and criticism should reflect that fact.
There may or may not be a place for 1960s-style, interplanetary, alien-encounter science fiction these days. I do not know. Certainly such stories can make for interesting fiction, and can delve into philosophical questions about what makes us “human.” Stories about robots might be more realistic but will need to be set farther in the future than anyone 50 years ago could have predicted. Again, these stories can make for good, interesting fiction that addresses philosophical questions about life. Continued reading and writing of fiction, combined with the facts of modern science, will continue to improve our minds and provide interesting insights.
Wednesday, June 4, 2014
Repatriation
I recently finished reading Return of the Bones, a novel based on the true story of a New Mexican Native American woman who went to Washington, DC and Harvard to regain possession of ancestral bones that had been dug up during an archaeological expedition in the early 20th century. The lesson is that the skeletons are sacred and deserving of a proper burial, and that studying them for science robs the individuals (and tribes) of their dignity. The US government now has a streamlined process by which surviving Native Americans can regain their ancestral bones for reburial and other purposes.
Repatriation of such skeletons is clearly anti-science. New methods, such as DNA analysis and modern dating techniques, can provide us with significant information about human migrations, evolution, ecology, how societies have managed climate change, and other very useful information. The idea that the skeletons' spirits will be disturbed is as scientifically unfounded as the idea that humans arose by the hand of God some 6000 years ago. Giving in to such belief systems cannot be good for the advancement of knowledge.
That said, we must also look at historical context to understand the reasons for doing things as well as the best way to proceed. Until recently, anthropology (and its branches such as archaeology and linguistics) was a less-than-rigorously-scientific, racist endeavor. Much of anthropology was about comparing societal or anatomical features to show that European civilization and anatomy were superior to all others, and to rank societies and races accordingly. Thus, in the not-so-distant past, digging up skeletons (not just in the Americas, but also in Africa, Australia, Asia and Europe) for study was absolutely intertwined with White supremacy, and was in fact designed to augment White supremacist arguments. Archaeology at that time was an agent of oppression, and one can see why, in this context, Native Americans would be opposed to the digging up of their ancestors for "science."
As a comparison, there is very little outrage (and repatriation requests would not be taken seriously) when, for example, medieval burial sites in England are dug up to study medieval nutrition, plagues, etc. (To note, Return of the Bones puts the initial occupation of the sites by the Pecos and Jemez tribes in New Mexico in the 1200s. These are not ancient skeletons; they date to roughly the medieval period in Europe. Truly ancient bones cannot be said, with any scientific certainty short of DNA testing, to belong to any modern tribe. This does not stop tribes from claiming such skeletons as their own, based on mythologies that their people have existed in the same place since time immemorial.)
Current physical, archaeological, and linguistic anthropology has been shedding its racist past and is being put on more solid, rigorous footing as a science (though some might argue that it is still a "soft" science, with too much noise and variation in the data to make definitive, predictive statements). Old bones, tools, and structures can tell us much about our past, including how we as humans have dealt with climatic disasters, and other information that will be useful in helping us prepare for the future. We do need to acknowledge the past use of archaeology as a tool of oppression, but it must also be made clear that those days were a mistake and that we have moved beyond, and learned from, our past.
Monday, June 2, 2014
Climate Change
Cosmos, with Neil deGrasse Tyson, discussed climate change last night. The entire discussion was limited to the greenhouse effect, carbon dioxide (with a brief mention of methane), and a comparison of Venus and Earth - one of which had a runaway greenhouse effect early on and now has an environment inhospitable to life, while the other did not. One implication was that our current situation on Earth is nearing the threshold for a runaway greenhouse effect, which would have dire consequences for life (including humans).
Now, I do not want this post to put me into the "denier" camp, or anything. The correlation between atmospheric carbon dioxide (and methane, which is less discussed in the media but is also very important) and global temperatures is well established. If we continue to pump carbon dioxide and methane into the atmosphere at current levels it will likely be catastrophic. But this is not the first time an organism, or collection of organisms, has been responsible for devastating climate change. We are just the most recent in a line of atmosphere-changing events precipitated by life.
Early in Earth's history, its atmosphere was rich in carbon dioxide and methane, with very little diatomic oxygen gas. (Currently, the atmosphere is about 20% diatomic oxygen, often cited as a prerequisite for life. That is a very narrow, eukaryote-centric view of life.) Early life forms were chemotrophic, deriving energy from inorganic chemical reactions. Once photosynthesis evolved (roughly 2.8 billion years ago), oxygen gas was released and carbon dioxide was pulled from the atmosphere. Free oxygen initially reacted with dissolved metals in the ocean, then accumulated in the atmosphere, oxidizing the methane (and metals on the surface). The net result was a decrease in atmospheric carbon dioxide and methane and an increase in oxygen gas - i.e. a removal of potent greenhouse gases from the atmosphere (about 800 million years ago, after metallic minerals became oxygen-saturated). This may have led to the cooling of Earth (snowball Earth). It almost certainly led to the evolution of eukaryotic organisms, which require oxygen for metabolism. (Prior species were all anaerobic, and were in fact poisoned by oxygen. The oxygenation of Earth by photosynthetic organisms may have caused a large-scale extinction of early life forms.) This chain of events likely spared Earth the runaway global warming that struck Venus. It also allowed Earth to acquire an ozone (triatomic oxygen) layer, creating suitable conditions for life to colonize land without being destroyed by UV radiation.
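(As a rough sketch of the chemistry involved - my own simplification, not something spelled out in the episode - the two key net reactions here are oxygenic photosynthesis and the oxidation of atmospheric methane:

6 CO2 + 6 H2O + sunlight -> C6H12O6 + 6 O2

CH4 + 2 O2 -> CO2 + 2 H2O

Photosynthesis draws carbon dioxide down directly, and the oxygen it releases converts methane, a far more potent greenhouse gas per molecule, into carbon dioxide and water. So even though the second reaction produces some carbon dioxide, the net effect of the two together is a large reduction in greenhouse forcing.)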
Fast forward a few hundred million years, to about 374 million years ago. The Late Devonian extinction may have been helped along by the evolution of land plants, with the rapid increase in biomass from the colonization of land pulling more carbon dioxide out of the air and leading to rapid cooling.
Next, the Permian-Triassic extinction (252 million years ago), the largest known mass extinction in Earth's history. New results indicate that it may have been triggered by a sudden release of methane from methanogenic (i.e. methane-producing) microorganisms that had just evolved the capability (similar to how the evolution of photosynthesis resulted in a huge release of oxygen). Methane is a potent greenhouse gas, and such a release would have started rapid global warming, leading to mass extinctions. Such a catastrophe would presumably resemble what could happen with our current warming from the sudden release of carbon dioxide.
As a note, the causes of the Triassic-Jurassic extinction are poorly understood (one hypothesis is ocean acidification, which would be related to sudden increases in acidifying compounds), and the Cretaceous extinction was likely caused by rapid climate change due to sunlight being blocked by dust thrown up by an asteroid impact.
Do we see a pattern here? Mass extinctions on Earth are generally caused by climatic shifts, which are in turn caused by rapid changes in the equilibrium levels of certain chemical compounds, their precursors, and solar energy.
If you get nothing else from this post, understand this: the biosphere is continually in flux. Levels of carbon dioxide, oxygen, methane, etc. have been changing since life first evolved, and life on Earth is responsible for the current cycles and equilibrium patterns of many elements (if photosynthetic organisms all went extinct, the oxygen in the atmosphere would rapidly disappear). Life evolves within the confines of its environment, and if the equilibria change too quickly for evolution to react, you get mass extinctions. This has been going on for billions of years, and humans are only the latest organism to participate. The difference is that we are the first organism with the intelligence to recognize that this is a choice - we can continue on our current path and go extinct (along with numerous other species) at some future time (likely farther in the future than anyone alive today will experience), or we can try to prevent the next mass extinction event from being human-caused and spare our descendants the suffering.
Thursday, May 29, 2014
STEM (and anti-intellectualism)
The acronym STEM - science, technology, engineering, mathematics. It shows up everywhere. Some ungodly fraction of new jobs will be in STEM fields. We need more STEM majors in college.
As general statements, these are true. The economy is becoming more knowledge-based, with knowledge of technology necessary to perform basic tasks - use of spreadsheets for accounting, computer-generated presentations, even the fancy espresso machines in the coffee industry. (An aside: I know that my colleagues will hate me for saying this, but using most equipment used by scientists - a mass spectrometer, a PCR machine, a thin film deposition chamber, to name a few - is not any more difficult than using the espresso machine at Starbucks. This has obvious implications for what defines "skilled" versus "unskilled" work, and for the resulting prestige and salaries.) We therefore do need more people trained on certain equipment, with the appropriate backgrounds.
(I will note: One problem I have with the STEM categorization is that it encompasses a huge range of positions, from glass cleaner at a pharmaceutical company to PhD astrophysicist. Not all positions are truly creative or thinking-based - in fact the vast majority are not.)
Where does this training need to occur? Currently, colleges are the bridge between high school and good jobs in the STEM sectors. However, a college degree involves more than the background for a certain job, and much more than simply training on a certain piece of equipment (which is what many people actually do at a given STEM job). As such, an argument can be made to get rid of such "waste" - general education classes - and streamline the process to produce more employment-ready individuals. The implication is that understanding subjects like literature, history, and touchy areas such as systemic racism and sexism is pointless if one wants employment in the modern economy.
Is this true? If one's only goal in life is to be a drone, then it probably is. And it is true that most employers simply want drones, who know one thing, are trained to do one task (but can be retrained for a different one if necessary), and do not think about their place in the world or the economy - and who therefore will not complain. Employers also do not want to be reminded of their own systemic "-isms," which are generally taught about in US colleges at this point in time (not necessarily true in the past). As such, they are less likely to encourage broad-based studies. But just because something is good for the business community, is it good for everyone else? It is true that some individuals are not interested in learning extraneous topics and are only interested in a decent-paying job, no matter how boring. This is fine, but these people should not be encouraged to attend a traditional college.
What is required here are vocational schools - training centers for specific jobs that take little time to complete and do not require general education. These need to be funded by businesses (who directly benefit from the training) either directly or through corporate taxes. We also need better career paths for intellectually curious college graduates - having them sit inputting numbers into a spreadsheet is boring and a waste of human capital. I do not know if I have a solution for that problem. The modern economy requires a certain amount of labor that is not automatable yet, but also does not require much intellectual skill (leading to some 70% of employees not being engaged at work, including myself in my supposedly high-quality science position at NIST).
(In an ironic twist, there are claims that certain STEM jobs are less likely to be automated, unlike paper pushers, burger flippers and such. This is not true. The unautomatable jobs and careers are all creative - author, artist, historian, museum curator, etc. - and outside of STEM. At least until we develop true artificial intelligence.)