Not all null results are created equal.
There are interesting null results that get published and are well known. For example, Card & Krueger (1994) was a null-result paper showing that increasing the minimum wage had a null effect on employment rates. This result went against the then-common assumption that increasing wages would decrease employment.
Other null results are either dirty (e.g., big standard errors) or compromised by process problems (e.g., experimental failure). These are more difficult to publish because it's hard to learn anything new from them.
The challenge is that researchers do not know in advance whether they are going to get a "good" null or a "bad" one. Most of the time, you have to invest significant effort and time into a project, only to get a null result at the end. These results are difficult to publish in most cases; that can end a career for someone pre-tenure, and cause funding problems for anyone.
Is that actually a null result though? That sounds like a standard positive result: "We managed to show that minimum wage has no effect on employment rate".
A null result would have been: "We tried to apply Famous Theory to showing that minimum wage has no effect on employment rate but we failed because this and that".
Looking at the definition in the article, it definitely is a null result. However, the example does illustrate that 'a null result' probably isn't a very interesting category to talk about, because it covers too many types of result. I think what people on HN actually want to track is something more like 'a boring null result'. The real question is whether there is a process that is reliably followed and leads to research that matches reality (where a statistically significant result suggests something is real), or whether scientific publishing is highly biased towards odd results (where studies that muck up or get lucky with statistical noise are over-represented).
In this case we would expect some studies of the minimum wage to show it increases employment regardless of what the effect of wage rises is in the general case - e.g., some official raised the minimum wage while a sector went into a boom for unrelated and coincidental reasons.
From a related Nature article (https://www.nature.com/articles/d41586-024-02383-9): "null or negative results — those that fail to find a relationship between variables or groups, or that go against the preconceived hypothesis." According to this definition, I think both examples you provided are null results, particularly here, where the context is the file-drawer problem.
> Is that actually a null result though?
The above is a good point but I would extend it further. I mean, philosophically, you get a positive result from a negative (null) result by merely changing your hypotheses (e.g., something should not cause something else).
If you are doing p-value testing, this isn't the case. A positive result is a much stronger assertion.
Sure, and of course you shouldn't change your hypotheses after the fact. That's kind of the whole point of pre-registration, at a meta level.
No, because in theory a minimum wage increase could decrease the unemployment rate. If it does neither, that’s a null result.
That's sort of like saying "I measured acceleration due to gravity by dropping a bowling ball off the Tower of Pisa, and got a null result: the ball simply hovered in midair."
If the result is very surprising and contradicts established theory, I wouldn't consider it a null result, even if some parameter numerically measures as zero.
The question being asked is, "what is the correlation between these two variables": is it positive, negative, or zero (null). The null hypothesis is the baseline you start from because the overwhelming majority of variables from physical observations are uncorrelated (e.g. what is the correlation between "how many people in the treatment group of this clinical trial made a full recovery" and "the price of eggs in Slovakia").
Measurements of some physical quantity are a different kind of experiment, you cannot phrase it as a question about the correlation between two variables. Instead you take measurements and put error bars on them (unless what you're measuring is an indirect proxy for the actual quantity, in which case the null hypothesis and p-value testing does become relevant again).
In fact, you can express a measurement of g as a linear correlation between y - y0 and (t - t0)^2.
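As a minimal sketch of that framing (simulated, hypothetical drop data; the numbers and noise level are assumptions for illustration), regressing drop distance on squared time recovers g as twice the slope:

```python
import numpy as np

# Simulated free-fall drops: distance fallen (m) vs. time (s), with measurement noise.
rng = np.random.default_rng(0)
g_true = 9.81
t = np.linspace(0.2, 1.0, 20)
y = 0.5 * g_true * t**2 + rng.normal(0, 0.01, size=t.size)

# Regress drop distance on t^2: the slope estimates g/2, so g_hat = 2 * slope.
x = t**2
slope, intercept = np.polyfit(x, y, 1)
g_hat = 2 * slope

# Crude standard error on the slope from the residuals, for an error bar on g.
resid = y - (slope * x + intercept)
se_slope = np.sqrt(np.sum(resid**2) / (len(x) - 2) / np.sum((x - x.mean()) ** 2))
print(f"g ≈ {g_hat:.3f} ± {2 * se_slope:.3f} m/s^2")
```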
Null means nothing, zero. In the context of scientific articles, a null result means that the difference is zero. What difference? Well, it depends. It could be the difference between doing something and not doing it, the difference between before and after some intervention, or perhaps the difference between two different actions.
In this case the difference between before and after raising the minimum wage.
Furthermore, the thing with a null result is that it's always dependent on how sensitive your study is. A null result is always of the form "we can't rule out that it's 0". If the study is powerful, then a null result will rule out a large difference, but there is always the possibility that there is a difference too small to detect.
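A minimal sketch of that point (hypothetical two-group data; the group sizes and spread are assumptions): the confidence interval around a "null" difference shows which effect sizes the study can and cannot rule out.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(100, 15, size=40)  # hypothetical outcome, control group
treated = rng.normal(100, 15, size=40)  # hypothetical outcome, treated group (true effect = 0)

diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
t_crit = stats.t.ppf(0.975, df=len(treated) + len(control) - 2)
lo, hi = diff - t_crit * se, diff + t_crit * se

# A "null result": the interval contains 0, but it only rules out true effects
# outside (lo, hi); smaller effects remain compatible with the data.
print(f"difference = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```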
Null means something doesn't exist, and proving that something doesn't exist can at times, especially in certain sciences, be as valuable as proving that something does exist. Especially if it stops the rest of the field chasing the pot of gold at the end of the rainbow.
What matters for publication is a surprising result, not whether it confirms the main hypothesis or the null one.
The "psychological null hypothesis" is that which follows the common assumption, whether that assumption states that there is a relationship between the variables or that there is not.
One of my most celebrated recent "null results" was published in Biological Psychiatry, a journal with an impact factor of about 13 last I checked. Even though it was a null result, it was a decently powered longitudinal study that provided evidence inconsistent with the prevailing view in autism research. Even if there exists a difference that we failed to detect, it is probably much smaller than what others had expected on the basis of second-grade evidence (cross-sectional group differences).
I agree. More succinctly, it conflates two types of results: a null result due to statistical noise (big error bars, experimental failure) and a null result where the null model is more likely (the effect actually doesn't exist).
Like many things in statistics, this is solved by Bayesian analysis: instead of asking if we can reject the null hypothesis, the question should be which model is more likely, the null model or the alternate model.
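A minimal sketch of that Bayesian framing (hypothetical two-group data; the BIC approximation to the Bayes factor is just one of several ways to compare a null model against an alternative):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=50)  # hypothetical group A
b = rng.normal(0.2, 1.0, size=50)  # hypothetical group B (small true effect)

def bic(residuals, n_params):
    """BIC for a Gaussian model, given its residuals and number of free parameters."""
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)  # maximum-likelihood estimate of the noise variance
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return n_params * np.log(n) - 2 * log_lik

pooled = np.concatenate([a, b])
bic_null = bic(pooled - pooled.mean(), n_params=2)                       # one shared mean + variance
bic_alt = bic(np.concatenate([a - a.mean(), b - b.mean()]), n_params=3)  # two means + variance

# BIC-approximated Bayes factor in favour of the null model (> 1 favours the null).
bf_null_vs_alt = np.exp((bic_alt - bic_null) / 2)
print(f"approximate Bayes factor, null vs. alternative: {bf_null_vs_alt:.2f}")
```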
> Card & Krueger (1994) was a null-result paper showing that increasing the minimum wage had a null effect on employment rates. This result went against the then-common assumption that increasing wages would decrease employment.
I had to look that up, because more precisely, it showed that a particular minimum wage increase in NJ from $4.25 to $5.05 didn't reduce employment at 410 particular fast food joints in 1992 - https://davidcard.berkeley.edu/papers/njmin-aer.pdf - not that "increasing the minimum wage has a null effect on employment rates" at all, ever, no matter what. It's not as if increasing the minimum wage to something impossible to afford, like $100 trillion, wouldn't force everyone to get laid off, but nobody generally cares about a limiting case like that, as it's relatively unlikely.
The interesting part is non-linearity in the response, seeing where and how much employment rates might change given the magnitude of a particular minimum wage increase, what sort of profit margins the affected industries have, elasticity of the demand for labor and other adaptations by businesses in response to increases, not whether it's a thing that can happen at all.
And we're seeing a lot more such adaptations these days. There are McDonald's around here with kiosks in the lobby where you input your order yourself, and I've had drones deliver my food instead of DoorDash drivers. That kind of thing was not yet practical back then (I remember using an Apple IIGS), and it's not clear that findings like this should be relied upon too heavily, given that some of the adaptations available to businesses now were not practical at the time, especially when technology may change even more in the future.
Also, doesn't a null result mean "we weren't able to measure an effect" rather than "there is no effect"? That's a pretty big difference in how important something is.
Sure. But publishing this result with your experimental methodology may help others to refine the experiment and get a better answer.
It is absolutely shameful that negative results are almost never published. I am sure that a lot of money and effort is wasted on repeating the same dead-end experiments by many research groups, just because there is no paper that says: "we tried it this way, it didn't work".
1. Papers are written to invite replication — the most important part of the scientific process. It is already difficult to compel replication even when you only put the most promising research in people's faces. Now you want them to have to also sift through thousands of entrants that have almost no reason for replication attempts?
2. It's easy to call it shameful when it isn't you who has to do the work. If you are like most other normally functioning people, you no doubt perform little experiments every day that end up going nowhere. How many have you written papers for? I can confidently say zero. I've got better things to do.
They avoid mentioning the elephant in the room: jobs and tenure. When you can get hired for a tenure-track job based on your null-result publications, and can get tenure for your null-result publications, then people will publish null results. Until then, they won't hit the mainstream.
There are a few different realities here. First, it's not really about whether you can get tenure with the publications, because almost none of the major respected journals accept simple null/negative results for publication. It's too "boring". Now, they do occasionally publish "surprising" null/negative results, but that's usually due to rivalry or scandal.
The counterexample, to some extent, is medical/drug controlled trials, but those are pharma-driven and government-published, though an academic could be on the paper, and it might find its way onto a tenure review.
Second, in the beginning there is funding. If you don't have a grant for it, you don't do the research. Most grants are for "discoveries" and those only come about from "positive" scientific results. So the first path to this is to pay people to run the experiments (that nobody wants to see "fail"). Then, you have to trust that the people running them don't screw up the actual experiment, because there are an almost infinite number of ways to do things wrong, and only experts can even make things work at all for difficult modern science. Then you hope that the statistics are done well and not skewed, and hope a major journal publishes.
Third, could a Journal of Negative Results that only published well-run experiments, by respected experts, with good statistics and minimal bias, be profitable? I don't know; a few exist, but I think it would take government or charity to get it off the ground, and a few big names to get people reading it for prestige. Otherwise, we're just talking about something on par with arXiv.org. It can't just be a journal that publishes every negative result, or reviewers would somehow have to be experts in everything, since properly reviewing negative results from many fields is a HUGE challenge.
My experience writing, and getting grants/research funded, is that there's a lot of bootstrapping where you use some initial funding to do research on some interesting topics and get some initial results, before you then propose to "do" that research (which you have high confidence will succeed) so that you can get funding to finish the next phase of research (and confirm the original work) to get the next grant. It's a cycle, and you don't dare break it, because if you "fail" to get "good" results from your research, and you don't get published, then your proposals for the next set of grants will be viewed very negatively!
This. And the incentives can be even more perverse: If you find a null result you might not want to let your competitors know, because they'll get stuck in the same sand trap.
If people are so interested, they'd presumably read and cite null-result publications, and their authors would get the same boons as if they had published a positive result.
There are some issues, though. Firstly, how do you enforce citing negative results? In the case of positive results, reviewers can ask that prior work be cited if it had already introduced things present in the article. This is because a publication is a claim to originality.
But how do you define originality in not following a given approach? Anyone can not have the idea of doing something. You can't well cite all the paths not followed in your work, considering you might not even be aware of a negative result publication regarding these ideas you discarded or didn't have. Bibliography is time consuming enough as it is, without having to also cite all things irrelevant.
Another issue is that the effort to write an article and get it published and, on the other side, to review it, makes it hard to justify publishing negative results. I'd say an issue is rather that many positive results are already not getting published... There's a lot of informal knowledge, as people don't have time to write 100 page papers with all the tricks and details regularly, nor reviewers to read them.
Also, I could see a larger acceptance of negative result publications bringing perverse incentives. Currently, you have to get somewhere eventually. If negative results become legitimate publications, what would e.g. PhD theses become? Oh, we tried to reinvent everything but nothing worked, here's 200 pages of negative results no-one would have reasonably tried anyways. While the current state of affairs favours incremental research, I think that is still better than no serious research at all.
> If people are so interested, they'd presumably read and cite null-result publications,
The thing is, people mostly cite work they're building upon, and it's often difficult to build much on a null result.
If I'm an old-timey scientist trying to invent the first lightbulb, and I try a brass filament and it doesn't work, then I try a steel filament and it doesn't work, then I try an aluminium filament and it doesn't work - will anyone be interested in that?
On the other hand, if I tested platinum or carbonised paper or something else that actually works? Well, there's a lot more to build on there, and a lot more to be interested in.
I think the failed efforts merit at least a mention, if not a whole publication, otherwise everyone will wonder if they can save money on platinum filaments by switching to aluminum. All the lightbulb companies will continuously be re-confirming that null result internally as part of various cost-reduction R&D efforts.
It's fascinating how utterly dominated science is by economics. Even truth itself needs an angle.
The influence of money really does hold back scientific progress and is often specifically used to prevent some truths from being known or to reduce the confidence we have in those truths.
Obviously it takes money to do pretty much anything in our society, but it does seem like it has way more influence than is necessary. Greed seems to corrupt everything, and even though we can identify areas where things can be improved, nobody seems to be willing or able to change course.
I don't see any way to get around the fact that science is expensive. Researching the boundaries of human knowledge requires access to rare or novel materials, which requires people to invest time and effort in producing them, which of course requires money. If your research requires access to vats of liquid oxygen, or high powered lasers, or ultra pure chemicals, or 500 pure bred rats, you'll need people working full time to produce those things for you, and they need to be able to put food on their table.
I mean, science has always been funded this way in one form or another. All the 'scientists' of olde were either wealthy or given some kind of grant by those who were. Science itself won't be exempt from the freeloader problem either.
Not that I'm saying all science has to serve economic purposes.
> Even truth itself needs an angle.
"When even truth itself needs an angle ...
... every lie looks like a viable alternative".-
Imagine being an economist... You can't get away from it.
I studied physics in university, and found it challenging to find null-result publications to cite, which can be useful when proposing a new experiment or as background info for a non-null paper.
I promised myself if I became ultra-wealthy I would start a "Journal of Null Science" to collect these publications. (this journal still doesn't exist)
https://www.jasnh.com , introduced in 2002: https://web.archive.org/web/20020601214717/https://www.apa.o...
This is really really so necessary ...
If really pro science, some non-profit should really fund this sort of research.-
PS. Heck, if nothing else, it'd give synthetic intellection systems somewhere to not go with their research, and their agency and such ...
Before tackling that, a non-profit should fund well-designed randomized controlled trials in areas where none exist. Which is most of them. Commit to funding and publishing the trial, regardless of outcome, once a cross-disciplinary panel of disinterested experts on trial statistics approve the pre-registered methodology. If there are too many qualified studies to fund, choose randomly.
This alone would probably kill off a lot of fraudulent science in areas like nutrition and psychology. It's what the government should be doing with NIH and NSF funding, but is not.
If you manage to get a good RCT through execution & publication, that should make your career, regardless of outcome.
> should fund well-designed randomized controlled trials in areas where none exist.
Indeed. That is the "baseline"-setting science, you are much correct.-
https://journal.trialanderror.org/
http://arjournals.com/
Could just be online for a start; then it's just a matter of the organization you'd need. Sounds like a fun project, to be honest.
There is zero incentive for the researcher personally to publish null results.
Null results are the foundations on which “glossy” results are produced. Researchers would be wasting time giving away their competitive advantage by publishing null results.
What do you mean glossy results? Wouldn't it be to your advantage to take down another researcher? Or do you mean they use null results to construct a better theory for more credit?
Glossy results are memeable stories that editors of journals would like to have in their next edition.
There is very little incentive to publicly criticise a paper; there is incentive to show why others are wrong and why your new, technically superior method solves the problem and finds new insights for the field.
You could publish them as a listicle: "10 falsehoods organic chemists believe!" Because behind almost every null result was a hypothesis that sounded like it was probably true. Most likely, it would sound probably true to most people in the field, so publishing the result is of real value to others.
The problem arises that null results are cheap and easy to "find" for things no-one thinks sound plausible, and therefore a trivial way to game the publish or perish system. I suspect that this alone explains the bias against publishing null results.
It's amazing to see this on the front page of HN as it came up in a discussion with my partner early in our relationship. I was saying something about how a lot of people don't understand the robustness of peer review and replication. I was gushing about how it's the most perfect system of knowledge advancement and she replied, "I mean, it's not perfect though," and then said, pretty much verbatim, the title of this article.
The problem specifically isn't so much that null results don't get published, it's that they get published as a positive result in something the researchers weren't studying - they have to change their hypothesis retroactively to make it look like a positive result. Worse, this leads to studies that are designed to attempt to study as many things as possible, to hedge their bets. These studies suffer from quality problems because of course you can't really study something deeply if you're controlling a lot of variables.
I won't say it doesn't happen. I'll just say that I haven't seen it happen in my science career on any of the projects I have worked on.
That's crazy, the serendipity.-
PS. Good point about the "shotgun" studies.-
Never worked in academia, but in industry null results are really valuable: “we tried that, it didn’t work, don’t waste your time”. Sometimes very intuitive things just don’t work. Sometimes people are less inclined to share those things more publicly.
In a perfect world, there’s still a forcing function to get researchers to publish null results. Maybe the head of a department publishes the research they tried but didn’t work out. I wonder how much money has been lost on repeatedly trying the same approaches that don’t work.
The Michelson-Morley experiment is probably one of the most famous null results in all of science.
https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_exper...
It would have been a LOT more famous if it hadn’t been null. But, yes — they provided giant shoulders for Einstein to stand on.
Journals could fix that. They could create a category for null results and dedicate a fixed share of pages to it (like 20%). Researchers want to be in journals; if this category didn't have a lot of submissions, it would be much easier to get published in.
This is great idea.-
Heck, "we tried, and did not get there but ..." should be a category unto itself.-
Journals are mainly interested in profit, not fixing anything.
Surely publishing a result is not in itself costly. But I guess the peer review is.
So journals could have a section (the grey pages?) for "unsellable results" that they don't give peer review. They would of course need to assess them in some other way, to ensure a reasonable level of quality.
Onlynulls, the subscription-only service you’d be too ashamed to have your supervisor find you on.
We just need a site for posting null result reports.
With really good keyword and search functionality.
Much lighter formal requirements than regular papers; make it easy, but with some common-sense guidelines.
And public upvotes and commenting, so contributors get some feedback love, and failures can attract potentially helpful turnaround ideas.
And of course, annual awards. For humor and not-so-serious (because, why so serious?) street cred, but with the serious mission of raising awareness that negative results are not some narrow bit of information: attempts and results, bad or not, are rich sources of new ideas.
Ironic. Isn't Nature the worst offender?
They could just publish them on arxiv or their own websites. Publishing is easy. What they mean is, they struggle to get attention to them in a way that boosts their own career by pumping the metrics their administrators care about. Not the same thing at all.
There really should be a journal dedicated to this.