• 1 Post
  • 23 Comments
Joined 1 year ago
Cake day: August 8th, 2023

  • JoBo@feddit.uk to Science Memes@mander.xyz · Community
    6 months ago

    The claim of higher protein content appears to be true (without going back to find and critique all the original studies). Explanations are much harder to ‘prove’ for questions like this.

    We can’t do experiments on the evolution of tears, so all we can do is come up with plausible theories and look at how they fit the body of evidence. With enough evidence, from enough different angles, we might one day be able to say which proposed explanations fit the facts (and which don’t). That’s how we (eventually) proved smoking was killing people (another question we couldn’t answer by experimenting on human beings), but not all questions are as important as smoking was, and there isn’t necessarily a neat, single-factor explanation to find even if someone were willing to fund all the necessary research.

    Not my area but, for example, I recently saw a study claim that sniffing women’s tears makes men less aggressive. That’s an angle that might help build support for, or knock down, the theory that emotional tears are useful for social communication (i.e. help get women killed slightly less often). Did those studies use sad stories or onions? Did any study compare sad stories to onions? If we’re seeing hints of differences between sad stories and onions, that would tend to support the social-communication element of the explanation. Unless we think there’s a difference between sad tears and frightened tears, which there probably is, so we should check that too. And the rest of the literature on tears, if the theory is considered important enough to get right. And we need to remember that sticky tears are not the same thing as smelly tears, so can we do experiments where non-emotional tears are made sticky, and non-sticky tears made to smell frightened?

    Etc etc.

    Explaining things we observe but cannot directly experiment on is a process, one which typically takes many years and dozens of research groups. And a lot of funding. And decades of exhausting battles, if there is a lot riding on the answer (as there was with Big Tobacco vs Public Health).



  • Batteries are too heavy for many applications (including, arguably, cars).

    That doesn’t make hydrogen the only solution, but it is at least a currently available one. I posted a link about why the Orkneys (population 23k) are producing hydrogen and switching much of their transport to it: they have so much wind that the UK (population 70m) national grid can’t absorb all the power they generate from it.
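
    To put rough numbers on the weight point, here’s a back-of-envelope sketch in Python. Every figure in it is an assumption of mine for illustration (typical Li-ion pack density, hydrogen’s lower heating value, a rough fuel-cell efficiency and tank-mass overhead), not data from the Orkney article:

    ```python
    # Back-of-envelope mass comparison for storing the same usable energy
    # on board a vehicle. All figures are rough, assumed values.

    USABLE_KWH = 300  # hypothetical daily energy need for a lorry or small ferry

    BATTERY_KWH_PER_KG = 0.20  # typical Li-ion pack, pack level (assumed)
    H2_LHV_KWH_PER_KG = 33.3   # lower heating value of hydrogen
    FUEL_CELL_EFF = 0.50       # rough PEM fuel-cell efficiency (assumed)
    TANK_KG_PER_KG_H2 = 15     # pressurised-tank mass per kg of H2 (assumed)

    battery_kg = USABLE_KWH / BATTERY_KWH_PER_KG
    h2_kg = USABLE_KWH / (H2_LHV_KWH_PER_KG * FUEL_CELL_EFF)
    h2_system_kg = h2_kg * (1 + TANK_KG_PER_KG_H2)

    print(f"battery pack:  ~{battery_kg:,.0f} kg")    # ~1,500 kg
    print(f"hydrogen fuel: ~{h2_kg:,.0f} kg")         # ~18 kg
    print(f"fuel + tank:   ~{h2_system_kg:,.0f} kg")  # ~290 kg
    ```

    Even with a heavy tank, the hydrogen system comes out several times lighter for the same usable energy, which is the whole argument for it in aviation and shipping.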



  • That is true of all colours of hydrogen other than green (and possibly natural stores of ‘fossil’ hydrogen if they can be extracted without leakage).

    Green hydrogen is better thought of as a battery than a fuel. It’s a good way to store the excess from renewables, and may be the only way to decarbonise sectors like air travel.

    How hydrogen is transforming these tiny Scottish islands

    That’s not to say it’s perfect. Hydrogen in the atmosphere slows down the decomposition of methane so leaks must be kept well below 5% or the climate benefits are lost. We don’t have a good way to measure leaks. It’s also quite inefficient because a lot of energy is needed to compress it for portable uses.
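
    A minimal sketch of why the efficiency point bites, using assumed stage efficiencies (illustrative round numbers of mine, not measurements):

    ```python
    # Rough round-trip efficiency of green hydrogen used as a "battery":
    # electricity -> electrolysis -> compression -> fuel cell -> electricity.
    # All stage efficiencies are assumed, illustrative figures.

    ELECTROLYSIS_EFF = 0.70  # electricity to H2 (LHV basis)
    COMPRESSION_EFF = 0.90   # energy left after compressing for transport
    FUEL_CELL_EFF = 0.55     # H2 back to electricity

    round_trip = ELECTROLYSIS_EFF * COMPRESSION_EFF * FUEL_CELL_EFF
    print(f"round trip: ~{round_trip:.0%}")  # ~35%, vs ~90% for a Li-ion battery
    ```

    Losing roughly two thirds of the energy only makes sense when the input would otherwise be curtailed, which is exactly the Orkney situation.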

    And, of course, the biggest problem is that Big Carbon will never stop pushing for dirtier hydrogens to be included in the mix, if green hydrogen paves the way.



  • The study is massively confounded. Did exercise cause good sleep, or did good sleep provide enough energy to do exercise?

    They have not found evidence that exercising even when you are exhausted from lack of sleep, and struggling to do everything else that has to be done, will cause you to sleep better. They haven’t done a study which can find causal effects, only associations.

    I don’t think it is bad advice; for people who are struggling to sleep well enough to keep up with the demands of daily life, trying to find the time and energy for more fresh air and walking is very unlikely to do any harm.

    But it is harmful to imply that people who are struggling are struggling because they’re lazy, when it may well be that they appear lazy because they are struggling. Doctors are already fucking terrible for this kind of thing, and doctors who do research should not present it this way when, if they are qualified to do the research, they know they have not established the causal pathway or even the direction of the causal arrow.
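
    To make the confounding point concrete, here’s a toy simulation (the variable names and effect sizes are mine, purely for illustration). A hidden factor drives both exercise and sleep; exercise has no effect on sleep at all, yet the observational association is strong:

    ```python
    # Toy simulation: a hidden confounder ("capacity" -- think health, energy,
    # free time) causes both exercise and good sleep. Exercise has zero direct
    # effect on sleep, but the two still correlate, which is all an
    # observational study like this one can see.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    capacity = rng.normal(size=n)                   # unobserved confounder
    exercise = 0.8 * capacity + rng.normal(size=n)  # capacity -> exercise
    sleep = 0.8 * capacity + rng.normal(size=n)     # capacity -> sleep
    # note: `exercise` never appears in the equation for `sleep`

    r = np.corrcoef(exercise, sleep)[0, 1]
    print(f"exercise-sleep correlation: {r:.2f}")   # ~0.39, entirely spurious
    ```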





  • JoBo@feddit.uk to Science Memes@mander.xyz · We’re sorry.
    7 months ago

    They do not include the peer reviewers in their list of people who missed it, which means that either the peer reviewers did pick it up and for some reason it didn’t get addressed (unlikely), or this was straight-up pay-to-play and whoever runs that particular bit of the racket for Elsevier fucked up.


  • JoBo@feddit.uk to Science Memes@mander.xyz · degree in bamf
    8 months ago

    Because the 'splaining phenomenon is about perceived but unearned superiority, which leads the 'splainer to 'splain to someone who knows a great deal more than they do; crucially, someone who the 'splainer ought to realise knows more than they do, but doesn’t, because of the illusion created by the society they live in.

    I’d have added “(born) middle-class” because that’s an important part of it too.


  • JoBo@feddit.uk to Science Memes@mander.xyz · A modern paper
    8 months ago

    That’s why you need the appendices, so that you can check the details behind what is in the paper.

    Journals have word limits, both because of the restrictions of print and because a 200-page paper is too much for most readers. But some readers will need some or all of those 200 pages (which are usually a shed load of tables and figures, with not much text apart from protocols etc.).

    The quality of the research, and the way it was written up, cannot be assessed by those readers unless all the information is published. And the research cannot be implemented in practice unless it is described in full. There are thousands of papers out there that test a new treatment but don’t give enough detail about the treatment for anyone else to deliver it. Or develop a new measurement scale but don’t publish the scale. Or use a psychometric instrument but don’t publish the instrument. This research is largely useless (especially if the details were never archived properly and there’s no one still about who knows how to fill the gaps).

    We don’t (or should not) publish papers for CV points. We publish them so that other researchers know what research has been done and how to build on it. These days we don’t just publish all the summary tables and all the analyses, we ideally make the data available too. Not because we expect every reader to want to reanalyse it but because we know some of them will need to.



  • JoBo@feddit.uk to Science Memes@mander.xyz · Knowledge
    8 months ago

    Imposter Syndrome is just the flip side of Dunning-Kruger. You must not let it paralyse you, but you should know that it is a good thing: you can’t get better unless you believe there is room to get better. And there is always room to get better, so you’ve been on the right track from the get-go.



  • JoBo@feddit.uk to FoodPorn@lemmy.world · I made shepards pie but with steak!
    8 months ago

    This is not like those other things.

    We have shepherd’s pie (lamb), cottage pie (beef) and fisherman’s pie (fish). They’re all constructed in much the same way, but the name refers to the contents.

    It’s just a misunderstanding and it’s not important but there is a real non-obsolete reason these dishes have the names they have.




  • JoBo@feddit.uk to Science Memes@mander.xyz · how is pragent formed?
    11 months ago

    I’m going to have to object. We don’t use “false positive” and “false negative” as synonyms for Type I and Type II error because they’re not the same thing. The difference is at the heart of the misuse of p-values by so many researchers, and the root of the so-called replication crisis.

    Type I error is the risk of falsely concluding that the quantities being compared are meaningfully different when they are not, in fact, meaningfully different. Type II error is the risk of falsely concluding that they are essentially equivalent when they are not, in fact, essentially equivalent. Both are conditional probabilities: you can only get a Type I error when the things are, in truth, essentially equivalent, and you can only get a Type II error when they are, in truth, meaningfully different. We define Type I and Type II errors as part of the design of a trial. We cannot calculate the risk of a false positive or a false negative without also knowing the probability that the two things are meaningfully different (the base rate).

    This may be a little easier to follow with an example:

    Let’s say we have designed an RCT to compare two treatments with Type I error of 0.05 (95% confidence) and Type II error of 0.1 (90% power). Let’s also say that this is the first large phase 3 trial of a promising drug and we know from experience with thousands of similar trials in this context that the new drug will turn out to be meaningfully different from control around 10% of the time.

    So, in 1000 trials of this sort, 100 trials will be comparing drugs which are meaningfully different and we will get a false negative for 10 of them (because we only have 90% power). 900 trials will be comparing drugs which are essentially equivalent and we will get a false positive for 45 of them (because we only have 95% confidence).

    The false positive rate is therefore 45 false positives out of 135 positive results in total (45 false + 90 true): 45/135 = 33.3%, nowhere near the 5% Type I error we designed the trial with.
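
    The same arithmetic in a few lines of Python (the function is my own generalisation of the 1000-trial example, not from any of the linked papers):

    ```python
    # Share of 'significant' results that are false positives, given the
    # Type I error (alpha), power (1 - Type II error), and the base rate of
    # truly effective drugs. Matches the 1000-trial example above.

    def false_positive_share(alpha: float, power: float, base_rate: float) -> float:
        true_positives = base_rate * power          # 0.10 * 0.90 -> 90 per 1000
        false_positives = (1 - base_rate) * alpha   # 0.90 * 0.05 -> 45 per 1000
        return false_positives / (true_positives + false_positives)

    print(false_positive_share(alpha=0.05, power=0.90, base_rate=0.10))   # 0.333
    print(false_positive_share(alpha=0.005, power=0.90, base_rate=0.10))  # ~0.048
    ```

    Note how dropping alpha to the 0.005 proposed in the paper linked at the end cuts that 33% to under 5%.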

    Statisticians are awful at naming things. But there is a reason we don’t give these error rates the nice, intuitive names you’d expect. Unfortunately we’re also awful at explaining things properly, so the misunderstanding has persisted anyway.

    This is a useful page which runs through much the same ideas as the paper linked above but in simpler terms: The p value and the base rate fallacy

    And this paper tries to rescue p-values from oblivion by calling for 0.005 to replace the usual 0.05 threshold for alpha: Redefine statistical significance.