...because epistemology evolves continually, even in medicine:
*'''P-value''': In medicine, for example, we rely on statistical inference to confirm experimental results, specifically on the {{Tooltip|P-value|2=The p-value is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis <math> H_0 </math> is true. It should not be used as a binary criterion (e.g., <math> p < 0.05 </math>) for scientific decisions, as values near the threshold require additional verification, such as cross-validation. ''p-hacking'' (repeating tests until one reaches significance) inflates the false-positive rate; rigorous experimental design and transparency about all tests conducted can mitigate this risk. The Type I error rate also grows with multiple testing: for <math> N </math> independent tests at threshold <math> \alpha </math>, the Family-Wise Error Rate is <math> FWER = 1 - (1 - \alpha)^N </math>. The Bonferroni correction divides the threshold by the number of tests, <math> p < \frac{\alpha}{N} </math>, but can increase false negatives. The Benjamini-Hochberg procedure instead controls the False Discovery Rate (FDR); it is less conservative, allowing more true discoveries at the cost of an acceptable proportion of false positives. The Bayesian approach combines prior knowledge with the observed data in a posterior distribution, offering a valid alternative to the p-value. To combine p-values from multiple studies, meta-analysis uses methods such as Fisher's, <math> \chi^2 = -2 \sum \ln(p_i) </math>, with <math> 2N </math> degrees of freedom. In summary, the p-value remains useful when contextualized and integrated with other measures, such as confidence intervals and Bayesian approaches. A short computational sketch of these corrections follows this list.}}, a "significance test" used to judge whether experimental results are compatible with the null hypothesis. Yet even this entrenched concept is now being challenged. A recent study highlighted a campaign in the journal "Nature" against the use of the P-value.<ref name=":1">{{cita libro | autore = Amrhein V | autore2 = Greenland S }} 73, 1–19.</ref> Signed by over 800 scientists, this campaign marks a "silent revolution" in statistical inference, encouraging a reflective and modest approach to significance.<ref name=":2" /><ref name=":3" /><ref name=":4" /> The American Statistical Association contributed to this discussion with a special issue of "The American Statistician" titled "Statistical Inference in the 21st Century: A World Beyond p < 0.05", which offers new ways to express research significance and embraces uncertainty.<ref name="wasser" />
*'''Interdisciplinarity''': Solving science-based problems increasingly demands interdisciplinary research (IDR), as underscored by the European Union's Horizon 2020 programme.<ref>European Union, ''[https://ec.europa.eu/programmes/horizon2020/en/h2020-section/societal-challenges Horizon 2020]''</ref> Yet IDR poses cognitive challenges, partly because the dominant "Physics Paradigm of Science" limits its recognition. The "Engineering Paradigm of Science" has been proposed as an alternative, focusing on technological tools and collaboration. Researchers need "metacognitive scaffolds": tools that enhance interdisciplinary communication and knowledge construction.<ref name=":0">{{cita libro
| autore = Boon M
| autore2 = Van Baalen S
| titolo = Epistemology for interdisciplinary research - shifting philosophical paradigms of science
| url = https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6383598/
| opera = Eur J Philos Sci
| anno = 2019
| DOI = 10.1007/s13194-018-0242-4
}} 9(1):16.</ref><ref>{{cita libro
| autore = Boon M
| titolo = An engineering paradigm in the biomedical sciences: Knowledge as epistemic tool
| url = https://www.ncbi.nlm.nih.gov/pubmed/28389261
| opera = Prog Biophys Mol Biol
| anno = 2017
| DOI = 10.1016/j.pbiomolbio.2017.04.001
}} Oct;129:25-39.</ref>
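The corrections summarised in the P-value note above can be made concrete with a small numerical sketch. The example below is written in plain Python using only the standard library; the five p-values are invented purely for illustration, and the functions are minimal textbook implementations of the naive per-test threshold, the Bonferroni bound <math> \frac{\alpha}{N} </math>, the Benjamini-Hochberg step-up procedure, and Fisher's combination statistic <math> \chi^2 = -2 \sum \ln(p_i) </math>, not code drawn from any of the cited studies.

<syntaxhighlight lang="python">
import math


def family_wise_error_rate(alpha: float, n_tests: int) -> float:
    """Probability of at least one false positive over N independent tests
    each run at level alpha: FWER = 1 - (1 - alpha)^N."""
    return 1.0 - (1.0 - alpha) ** n_tests


def bonferroni_rejections(p_values, alpha=0.05):
    """Reject H0 only where p <= alpha / N (controls the FWER, may raise false negatives)."""
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]


def benjamini_hochberg_rejections(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure controlling the False Discovery Rate:
    sort the p-values, find the largest rank k with p_(k) <= (k/N) * alpha,
    and reject the hypotheses with the k smallest p-values."""
    n = len(p_values)
    order = sorted(range(n), key=lambda i: p_values[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / n * alpha:
            k_max = rank
    rejected = [False] * n
    for idx in order[:k_max]:
        rejected[idx] = True
    return rejected


def fisher_method(p_values):
    """Fisher's method for combining p-values: chi2 = -2 * sum(ln p_i), with 2N
    degrees of freedom. For an even number of degrees of freedom 2k, the
    chi-squared survival function is exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    chi2 = -2.0 * sum(math.log(p) for p in p_values)
    k = len(p_values)  # degrees of freedom = 2k
    half = chi2 / 2.0
    combined_p = math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))
    return chi2, combined_p


if __name__ == "__main__":
    # Hypothetical p-values from five independent tests (illustrative only).
    p_values = [0.001, 0.012, 0.031, 0.048, 0.20]
    alpha = 0.05

    print("FWER over 5 tests at alpha=0.05:", round(family_wise_error_rate(alpha, len(p_values)), 3))
    print("Naive rejections (p <= alpha):  ", [p <= alpha for p in p_values])
    print("Bonferroni rejections:          ", bonferroni_rejections(p_values, alpha))
    print("Benjamini-Hochberg rejections:  ", benjamini_hochberg_rejections(p_values, alpha))
    chi2, combined_p = fisher_method(p_values)
    print(f"Fisher's method: chi2 = {chi2:.2f} (df = {2 * len(p_values)}), combined p = {combined_p:.3g}")
</syntaxhighlight>

With these invented values, the naive threshold declares four results significant, the Bonferroni correction keeps only the smallest, and Benjamini-Hochberg keeps the two smallest: a small illustration of how the choice of correction changes what is called "significant".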
==Interdisciplinarity==
A superficial view might suggest a conflict between the disciplinarity of the "Physics Paradigm of Science" (which highlights anomalies) and the interdisciplinarity of the "Engineering Paradigm of Science" (focused on metacognitive scaffolds). However, these perspectives are not in conflict; they are complementary and drive "Paradigmatic Innovation" in science.