Physics A-Level

"Scientists, what are your dirty secrets?" (part 3/3)

9/16/2018

Part 3 of 3 ...

  • Some scientists mentioned that initiatives are underway to make all data available on big data servers. By making all data available (the good with the bad, the logic goes), the field would become more transparent… I personally find this a difficult idea to swallow, since a very large data set would need an accompanying document of instructions so large that the whole exercise becomes prohibitively impractical. One theorist present, for example, uses a continuous three-month slot on a supercomputer to calculate some energy/electrical property of a highly complicated surface/molecule/interface interaction… Good luck sending that into the ether… Scientists would then have to contend with merely being heard above the background noise.
 
  • One scientist said that science should be presented as a ‘narrative’ story. It “needs appeal”, otherwise it won’t gain traction. This appeals to basic human instinct. If a scientist ‘takes me on a journey’ in a lecture or a scientific paper, it is not only easier to follow, it also helps place the work in the context of the wider field. Some erred on the side of caution: “we don’t need to tell stories, those who are interested will read it anyway”… I personally don’t think it works like that.
 
  • The scandal of ‘authority’. Everyone agreed that ‘authority’ is meaningless, i.e. your reputation as a scientist should have nothing to do with the number of papers you publish or the journals they appear in, but rather with the science that you do. Everyone knew, from reading the literature in their own field and from conversations with scientists themselves, who the ‘good’ and ‘bad’ scientists in their field were.
 
  • The issue of the ‘h-index’ is a really sore point. The h-index is a modern metric that ranks a scientist’s impact: it is the largest number h such that the scientist has h publications with at least h citations each (more or less). This index is (partially) socially constructed and follows algorithms, which leads some scientists to play the game of “how do I boost my h-index” without necessarily being good scientists. One scientist told me afterwards that both extremes of the h-index are suspicious: you could have an h-index of zero, which could mean that you are a closet genius with the best science ever but no one has ever heard of you (and you never publish your work), or that you are a loser in your mother’s basement. Conversely, you could have the highest h-index in the world, which could indicate that something is going wrong, i.e. you wrote a handful of papers which, for trending reasons, people cite and re-cite, but you yourself are not actually the greatest scientist in your field.
 
  • All of these discussions led some scientists to demand the ‘abolishing’ of h-indices (that will never happen… computer algorithms exist, get over it), but others merely said that we should just ignore them – and that’s what most scientists do. H-indices have SOME meaning, but they shouldn’t be the final arbiter of any faculty position decision.
 
  • HOWEVER, and a big however it is… young budding scientists are facing a problem. They NEED to publish in high-impact journals to get positions at universities, and at many institutions they have to play this game. It’s mostly the ‘better’ research institutions that can see past this. A few scientists present admitted that this is exactly what happened to them. Hmm, what to do?
 
  • One scientist said that researchers need to get out of the habit of only using their own highly specialized technique. Perhaps the best way of ensuring that your results (as) closely (as possible) resemble a scientific ‘fact’ is to approach the problem from multiple angles, methods, techniques and theories. The sum total of all of these should paint a clearer picture for everyone. Promisingly, good scientists do this, although it did serve as a warning to everyone.

  • A number of scientists commented on the difficulty of the review process. When a scientific paper is submitted to a journal, the journal asks experts to critically review the work, normally within a fixed amount of time. This is hard. Some papers need real consideration, and it’s not always possible to check every single word, figure and reference. Do you decline to review the paper? Review it poorly? Or spend more time on it than you are paid for!? These are difficult questions, and the pressure leads to flawed outcomes. One scientist commented: “I received a paper to review, and another reviewer let the paper sail through because he respected the researcher…” Alarm bells ringing, anyone?! People need to wake up!
 
  • High-impact journals (like Nature and Science) don’t guarantee that the work is of great quality. Some scientists admitted that their best papers were in more specialized journals with a unique readership.
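As a footnote to the h-index discussion above: the standard (Hirsch) definition is simple to state and compute — the largest h such that an author has h papers with at least h citations each. A minimal sketch, purely illustrative:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    h papers with at least h citations each (Hirsch's definition)."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # this paper's citation count supports rank h
        else:
            break
    return h

# One heavily cited paper still gives h = 1 ...
print(h_index([900]))              # 1
# ... while steady output across many papers gives a higher h.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

This also makes the text's point concrete: a single runaway 'trending' paper barely moves the index, whereas sustained (or strategically self-cited) output does.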
 
All in all, it was fascinating to hear about these issues. Many of them may seem subsidiary, and outside the remit of science, but they present real problems that bottleneck the scientific endeavor. It’s why budding scientists need to be pragmatic and smart about their science.

Don’t stick to your comfort zone. Try many things. Understand that politics is unfortunately interwoven with science in many ways, and that to get ahead you need to be steadfast in your scientific convictions.


"Scientists, what are your dirty secrets?" (part 2/3)

9/13/2018

... So what were my observations?

  • Reproducibility. This is desirable all across science, and there are a number of reasons why it is important. Experiments stand or fall on whether they can be reproduced. However, the precise conditions of measurements are often not clearly stated (and sometimes entirely absent) in scientific papers, and the same goes for appropriate control measurements. Some scientists complained that (as with the current state of popular media) 'new science' is sexy, and journals will often publish something very exciting at the expense of quality. Essentially, time is the arbiter of whether a piece of science gains acceptance. One person suggested that a repeat experiment (of another group's work) should be published in an open-source format (very simple to do), which would build on the credibility of the experiment or theory.
 
  • Nothing is measured in isolation! Whether you’re measuring the conductance of a single molecule, the vibrational modes of a crystal or the reactive nature of an enzyme, you are using tools and instruments. Those tools are limited, and in certain fields, measuring the ‘same thing’ in a different setup can yield different results. A scientist’s job is to be transparent about methods. Some groups, however, when publishing breakthrough work, withhold methods to block others from entering the field; it gives them a grace period before others can join that same niche of experiments. It’s important to control for your experimental method by measuring in different ways.

  • Practically speaking, however, scientific groups cannot publish a straight repeat of another group's experiment (no journal would take it). This is where conferences are important, and scientists DO tell other scientists "we cannot reproduce your results". The upshot is either: 1) more details of the original experiment are needed, or 2) the experiment was flawed. Both are possible...

  • Issue: every scientist will select interesting data to publish. But don’t think that scientists are being misleading by doing this; most/many (?!) are responsible and publish their experimental yield, so you can judge for yourself. "Yield" is essentially the percentage of the time that your 'stuff' actually works, but it must be carefully defined (!) e.g. “my experiment showed cool stuff 10% of the time” is different to “of the 100 experiments, 10 showed something but only 1 showed cool stuff” – both could be presented as a “10%” yield: be careful. ‘Yield’ also has an analogue in everyday life: how much of your own work day was productive enough that you would report it in a minute-by-minute log of what you did?

  • In science, often the things that DON'T work are very, very important. In my own work, I'm quite certain that I've repeated failed experiments of scientists of the past, which was perhaps avoidable had I known they had been tried before. In fact, I remember finding an old lab book from 15 years ago (in our lab) containing details of a failed experiment that I was just about to do myself! This is why it's important to speak with others in the field to see what they're doing, what they've tried, and to paint a vision for the future.
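The 'yield' ambiguity above is easy to make concrete with numbers. A hypothetical sketch (all figures invented for illustration):

```python
# Two experiments that could both be reported as a "10% yield",
# with very different meanings.

# Case A: 100 runs, and the interesting effect appeared in 10 of them.
runs_a, hits_a = 100, 10
yield_a = hits_a / runs_a            # 0.10

# Case B: 100 runs, 10 produced *any* usable data,
# and only 1 of those showed the interesting effect.
runs_b, usable_b, hits_b = 100, 10, 1
yield_usable = usable_b / runs_b     # 0.10 -- what might get quoted
yield_effect = hits_b / runs_b       # 0.01 -- what actually matters
```

Quoting "10%" without saying which denominator you used hides a factor-of-ten difference in how often the result actually appears.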

...... to be continued


"Scientists, what are your dirty secrets?" (part 1/3)

9/12/2018

(Part 1)

"Ok everyone, gather round", said a conference organiser to the lecture hall of scientists. "We'd like to end by doing something different today. There is some really high-quality science coming from all around the room, but I'd like to open a discussion into those things that we never talk about"... "let's say... what could be considered the 'dirty secrets' of your field?"
 
I attended a scientific conference this week on the topic of surface spectroscopy and electrical phenomena. Experts from around the world presented their work on quite a wide range of experimental and theoretical topics. It was one of hundreds of small, specialized conferences that unassumingly take place all over the world. However, in my judgement, from discussions with other scientists and general reading, the criticisms of… science raised at this conference seemed rather ubiquitous.
 
So how does ‘science’ work, practically speaking? Peer-reviewed journals are a great way of getting good science out into the scientific community; other experts anonymously review your work, and with the appropriate additions and corrections you can publish and advertise it to the world. Conferences serve as another great venue for science. You can share ideas, collaborate, network, present your findings and get a critical analysis of your work. I've been to some conferences where the question-and-answer sessions were pretty brutal. Normally, in the good spirit of science, this is positively encouraged, and is mostly done well.
 
But is there anything really 'off the table' that cannot be (or isn't) discussed in a scientific setting? Are there dogmatic truths about the way science is done, published and disseminated to the masses? Is all data published? SHOULD all data be published? Is there an issue with reproducibility? What if two lines of solid experimental evidence directly contradict each other? What are the external factors hindering good science, and how can they be addressed? Should we expect scientists to be moral? With whom does the burden of checking for faulty science lie?
 
All these questions (and more) were posed at the conference I attended. I would say that most of the scientists knew exactly what was being referred to... and never without plenty of disagreement! Some of my conclusions will follow shortly...

