Friday, February 25, 2011

Try this at home

I was looking through Science’s website today, and besides a hilarious blog post about what it is actually like to be a scientist, I came across an interesting how-to article. It started with an astronomer in need, Chris Lintott from the Adler Planetarium in Chicago, who had way too many images to classify and not enough staff to do it.


In response, Lintott and his collaborators set up a website called Galaxy Zoo to allow volunteers to do the classification heavy lifting. They were hoping for a few thousand helpers, but ended up getting 375,000 and counting. The project has exceeded all expectations, resulting in over twenty astronomical papers and the discovery of two astronomical phenomena, mostly through the help of citizen-scientists.


The second part of the article gave advice for other scientists hoping to tap into the distributed-thinking phenomenon to assist in their research. The advice boils down to making sure the user interface is foolproof, and providing extra resources that advanced users can harness to really dig into problems.


Citizen science isn’t new; it was first popularized in 1999 by the SETI@home project from UC Berkeley, but it seems to have taken off in recent years. SETI@home and other distributed-computing projects were really just glorified grid computers. Users downloaded programs that ran in their computer’s background, adding their machine to a network of thousands, all combined by software to mimic a supercomputer and work on big science problems.
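The pattern behind these projects can be sketched in a few lines. This is a toy illustration only, not the real SETI@home or BOINC protocol: a coordinator splits one big job into independent work units, each volunteer machine processes a unit in the background, and the coordinator combines the partial results. The function names and the sum-as-analysis stand-in are my own inventions for the sake of the example.

```python
def split_into_work_units(data, n_units):
    """Coordinator divides one large dataset into independent chunks."""
    size = max(1, len(data) // n_units)
    return [data[i:i + size] for i in range(0, len(data), size)]

def volunteer_process(unit):
    """Work done on a volunteer's machine, e.g. scanning a chunk of
    radio-telescope data; summing stands in for the real analysis."""
    return sum(unit)

def combine(partial_results):
    """Coordinator merges the results volunteers send back."""
    return sum(partial_results)

signal_data = list(range(1000))                   # pretend telescope data
units = split_into_work_units(signal_data, n_units=10)
partials = [volunteer_process(u) for u in units]  # in reality, thousands of PCs
total = combine(partials)
assert total == sum(signal_data)  # same answer as one big computer would get
```

The key property is that the work units are independent, so volunteers never need to coordinate with each other, only with the central server.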


The paradigm started to shift with Foldit, a protein folding game created by researchers from the University of Washington in Seattle. Instead of passively relying on volunteers’ spare computing power, Foldit enlisted their brainpower. In the distributed-thinking game, volunteers experiment with protein folding, determining how a linear chain of amino acids curls up into a three-dimensional shape that minimizes the internal stresses and strains.
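The underlying search problem can be illustrated with a toy model. This is emphatically not Foldit’s real scoring function; it is a made-up sketch in which a chain of unit-length links is laid out by joint angles, and a crude random search tweaks the angles to lower a simple clash-penalty energy. Foldit players perform this kind of search by hand, guided by intuition that algorithms lack.

```python
import math
import random

def positions(angles):
    """Place unit-length links end-to-end from the given joint angles."""
    x = y = heading = 0.0
    pts = [(0.0, 0.0)]
    for a in angles:
        heading += a
        x += math.cos(heading)
        y += math.sin(heading)
        pts.append((x, y))
    return pts

def energy(angles):
    """Penalize non-adjacent residues that come too close (steric clashes)."""
    pts = positions(angles)
    e = 0.0
    for i in range(len(pts)):
        for j in range(i + 2, len(pts)):
            d = math.dist(pts[i], pts[j])
            e += 1.0 / (d * d + 0.01)  # large penalty when residues overlap
    return e

random.seed(0)
angles = [random.uniform(-2, 2) for _ in range(8)]  # a random starting fold
e0 = energy(angles)
best = e0
for _ in range(2000):  # crude random-tweak search; a player does this smartly
    trial = angles[:]
    trial[random.randrange(len(trial))] += random.uniform(-0.2, 0.2)
    if energy(trial) < best:
        angles, best = trial, energy(trial)
assert best <= e0  # the fold's internal strain only ever goes down
```

Real protein energy landscapes are vastly more rugged than this toy one, which is exactly why human pattern recognition turned out to be so valuable.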


Projects like Galaxy Zoo and Foldit allow anyone to participate in real scientific research, which is pretty amazing when you stop to think about it. Contrary to the common perception of academics perched in ivory towers looking down upon the rest of humanity, most of the researchers I have come across started in science out of a genuine desire to help others. Distributed-thinking is a wonderful tool that allows the people whose lives will ultimately be impacted by the research to contribute to the scientific process.


Check out some of the distributed-thinking games linked to in this post, and the next time someone asks you what you did over the weekend, you can look them in the eye and say... science.

Sunday, February 20, 2011

Bruce Alberts and fostering innovation in science

I attended an interesting talk the other day. Bruce Alberts, a man with many titles - biochemistry and biophysics prof at UCSF, United States Science Envoy, and Editor in Chief of Science magazine - spoke about his research and science in general. The talk was part of the IMED Seminar Series, a great series out of UCLA’s School of Medicine.


Alberts started off with a brief overview of his research, and shared an interesting tidbit about his graduate work. Because he failed to think about what would happen if his PhD experiment didn’t work, he accomplished something that no one else at Harvard had ever done: he failed his PhD thesis defense. After this jarring experience, he realized how important it is to set up an experiment so that whether you get the result you want or not, you learn something valuable. At this point he added the requisite, “my failures have made me the person I am.” Even though this idea is a cliché, it is still motivational.


Because of his work as a United States Science Envoy, his views about the current state of science made it into the talk. He is a big advocate of anything that drives research in new directions, and he feels that the general structure of academic research discourages innovative thinking.


To illustrate this point he showed a great figure: a pie chart with many slices of varying size, alternating between red and white. The center of the chart represented the beginnings of lines of research. As work began along a particular line, shown as a red slice, the knowledge gained expanded that slice from the middle out toward the edge of the circle. This expansion is helped along by students, who typically continue in their mentor’s vein of research. As the lines of active research expand, the lines of inactive research, the white slices, expand too, with potential but unexplored topics building on other potential but unexplored topics.


I liked this example because it is the counterpoint to the famous saying by Isaac Newton. While Newton saw farther because he was standing on the shoulders of giants, these other potential areas of knowledge and innovation have no giant shoulders to stand on.


Alberts didn’t have detailed prescriptions for fostering innovative research, beyond the standard line of favoring interdisciplinary work, but he did advocate a way to spark new research ideas. Universities have a lot of academic talks, but the people who typically attend are those doing research in the same field. These people, students or professors, tend to already know 90% of the material being presented, so it is much more valuable for researchers to go to talks outside their areas of interest. This approach is applicable to all parts of life; new experiences tend to generate ideas and innovative problem solving.


On the policy side he discussed a project he initiated during his time at the National Academy of Sciences. In 1996 members of Congress were calling for the reduction of funding for research without any immediate application, or basic science. While on the surface it makes sense to favor research that leads directly to products, or applied research, it is terribly difficult to predict what will come of research. Though an experiment might at first seem only useful for knowledge’s sake, eventually someone could build on that knowledge to create something of immense value. 


He managed to change the minds of those members of Congress with a series of articles called Beyond Discovery™: The Path from Research to Human Benefit. These articles showed how basic science can lead, sometimes unexpectedly, to inventions with world-changing consequences. From MRIs to GPS, many of today’s vital tools wouldn’t be available without investments in basic science.

Friday, February 11, 2011

Nanotechnology, what is it good for?

I’d like to use this week’s post to define nanotechnology. This is a bit of a tall order in a blog post, more like a thesis topic, but I think it’s a useful exercise.

The first problem is that the word itself is so vague, and somewhat controversial. Some people prefer nanoscience, feeling that nanotechnology is tilted too heavily towards applied science. Others, including the institute I work for, use NanoSystems to encompass both nanoscience and nanotechnology. But I’ll stick with nanotechnology because that seems to be more widely recognized.

As for the vagueness, it seems that everything involves nanotechnology these days. It is an incredibly popular buzzword in science and engineering, and products from sporting goods to sunscreen have a bit of nano thrown in. Everything, that is, except for food products. Even technology-happy Americans recoil in horror at unpronounceable nanotechnologies showing up on ingredient lists.

Nanotechnology has such a wide reach because there has yet to be a real definition of it. Anyone wanting to sprinkle a little marketing magic on their tennis racket or grant proposal is free to enlist nanotechnology, defining it however they like.

Broadening things even further, products and research can be called a nanotechnology as long as one major component involves processes at the nanoscale. Researchers have not had luck creating entire systems at the nanoscale yet. Linking up processes at such small dimensions has turned out to be trickier than once thought. So constructing nanobots entirely at the nanoscale is still in the realm of science fiction at this point.

Moving past these philosophical points of contention, we get into the meat of what nanotechnology is. The National Nanotechnology Initiative, the U.S. program coordinating Federal nanotechnology research and development, defines it as follows: “Nanotechnology is the understanding and control of matter at dimensions between approximately 1 and 100 nanometers, where unique phenomena enable novel applications.” Besides 100 seeming like an arbitrarily round number to use as a boundary, this definition only helps if one knows what a nanometer is.

Again, the NNI helpfully points out that a nanometer is one-billionth of a meter, but that is such a mindbendingly small scale to imagine that points of reference are needed. Nanowerk, a popular website for nanoscience and nanotechnology information, provides one of my favorite ways to describe the nanoscale: “a sphere with a diameter of one nanometer compares to a soccer ball as the soccer ball compares to the Earth.” Also from Nanowerk, 8 to 10 atoms (depending on the element) in a row span one nanometer.
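It’s easy to check how well that analogy holds up with some rough arithmetic. The ballpark figures below are my own, not from Nanowerk: a regulation soccer ball is about 22 cm across, and the Earth’s diameter is about 12,700 km.

```python
nanometer = 1e-9    # meters
soccer_ball = 0.22  # regulation ball diameter, roughly 22 cm
earth = 1.27e7      # Earth's diameter, roughly 12,700 km

ball_to_nm = soccer_ball / nanometer  # how many 1 nm spheres span a ball
earth_to_ball = earth / soccer_ball   # how many balls span the Earth

# The two ratios land within an order of magnitude of each other
# (~2.2e8 vs ~5.8e7), which is all an analogy like this needs.
assert 1e7 < ball_to_nm < 1e9
assert 1e7 < earth_to_ball < 1e9
```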

Now that we have a relatively decent idea of what the nanoscale is, why is it important? As the NNI definition alluded to, materials exhibit new and exciting properties at these dimensions. Graphite, made up of carbon atoms, is used in pencils and is pretty brittle. But graphene, a one-atom-thick layer of carbon atoms, is one of the strongest materials known to man.

There is such a long list of materials that exhibit new properties at the nanoscale that one has to wonder if everything operates this way. The anticlimactic answer is no. The materials that do have different properties get all the press because a story about some material that behaves the same as always isn’t very exciting.

These new properties add up to a revolution in the making with nanotechnology. There are three generally agreed upon waves of innovation. The first is already upon us: electronics like smart phones, tablet computers, and laptops rely heavily on nanotechnology to pack so much computing power into such small packages.

The next wave, which is just approaching shore, is in medicine. Nanotechnology is enabling researchers to design medications that can specifically target diseased cells, providing more effective treatments with fewer side-effects.

The third wave, a bit further off but not out of sight, comes from renewable energy. A handful of promising developments are in the works on this front including biofuels from algae, polymer solar cells, and hydrogen fuel cells.

All this potential adds up to nanotechnology being a very exciting field to work in; definitions are overrated anyway.

Saturday, February 5, 2011

A matter of trust

It could be worse than ‘mildly interested’; they could be hostile.

On Friday I participated in a workshop on communicating science. As an introduction the teacher outlined the general public’s attitude towards nanotechnology. The sketch painted was of people who are mildly interested in the technology, but aren’t capable of understanding technical material and are very unsure of their knowledge of science. This is the benefit of living in America, where people are generally in favor of new technology, even if they don’t know much about it. Other countries, like Germany, are almost reflexively distrustful of new technology.

The workshop was for researchers who wanted to hone their writing skills, but I crashed it to get some tips for my own communications about science. As a sample of the type of material presented, here is my favorite quote from the day, “never underestimate the intelligence of your audience, but never overestimate their knowledge.”

There are a number of reasons that scientists should be at least competent writers, but career advancement is the main one. Most university research is funded through grants, which have an application process involving competing teams writing proposals. Being able to write a proposal that clearly and effectively argues a point is a big part of getting grants. Also, the frequency and quality of a scientist’s publications determine whether they will get promotions and eventually tenure. Research gains acceptance in the scientific community through publication in peer-reviewed journals. Though the quality of research is the most important factor in getting published, prestigious journals expect a certain standard of writing. Therefore, to get the money to do research, scientists must be able to write effective grant proposals, and to publish their subsequent work, they must be able to write clearly about the results of their research.

My challenge in writing about science is slightly different. The audience a scientist is writing for in grants and academic publications is other people with technical backgrounds, so their task is to describe their research as accurately as possible. In writing for a general audience, I am translating the scientific material into something that is broadly accessible, but still gets the research correct. My goal is to distill the research into something digestible for the reader.

This idea of synthesizing information into usable bits is relevant in a number of areas. We rely on a number of people, including lawyers, doctors, and accountants, to translate complex information so that we can make decisions. Another group that breaks down complex information for a wider audience is journalists. But a different idea for news publication has emerged, and it comes from WikiLeaks.

Julian Assange, the founder and editor-in-chief of WikiLeaks, is advocating what he calls ‘scientific journalism.’ This entails publishing the background materials used to write a story, along with the story itself. The basic idea is to let the reader have access to the same materials the journalist does, so that they can determine whether the analysis provided by the journalist is accurate.

I’m not going to make a judgment call on WikiLeaks in general, but I do think the idea of scientific journalism is a bit of a waste of time. There is a world of difference between scientific publishing, where the entire data set is available to be analyzed, and journalism. Global events don’t occur in a vacuum; they exist in a complex world where regional differences and history can play a big role.

Just like I trust a lawyer to make a better legal decision for me because he has studied law, I trust a journalist who specializes in financial markets to analyze the bankruptcy of Lehman Brothers more than I would trust myself to go over their financial statements and come to a valid conclusion. The world is too complex for everyone to be an expert in everything; we need to rely on others to provide information.

Hopefully I won’t have to resort to scientific journalism and I’ll be able to retain my audience’s mild interest without having to publish my supporting materials to retain trustworthiness.