Do these efforts try to replicate any work that comes out, or only the ones that look funky?
A new marketing / consumer research scandal in sight?
-
Yes, that is upsetting and makes me question whether I want my name associated with this field anymore... I looked up to DA before starting my PhD, and he was one of the reasons why I wanted to do behavioral research. And now, it feels like a gut punch to confirm the whispers I have been hearing.
That said, back to the QA love... if someone is preaching how to do research, they need to show that they can do research, right? A couple of papers do not make you a confirmed 'researcher' in my honest opinion... but this may be biased by our prior expectations about how prolific someone needs to be in CB.
-
I don't know that much about the QA love per se, but I am one who is most impressed by ECRs who adopt open science practices and show themselves to be good analysts--not by lengthy CVs that are often built on a career of noise mining.
-
Research is not a gospel, so I don't think anyone can "preach" how to do it. People write papers about theories and methods, and either they make sense or they don't. People should read the papers and form their own opinions.
Take Nick Brown: when he argued that Barbara Fredrickson's "Positivity Ratio" was built on utter nonsense, he was a complete nobody with zero publications, and Fredrickson was a full prof with thousands of citations, but it did not matter: Brown's analysis was convincing. See here: https://www.theguardian.com/science/2014/jan/19/mathematics-of-happiness-debunked-nick-brown
Reputations are overblown. And yes, DA is also a painful reminder of that: you can apparently have more than 28k Google Scholar cites and yet can't be arsed to check the distribution of your dependent variable.
-
A little of both, but the most important thing is the numerator (1). One marketing study has replicated. One.
-
Asymmetric dominance replicates, just to negate your point, but there are plenty of other effects that replicate with interesting boundary conditions.
-
So, let me recap here: anyone who is not really able to do research can make a career out of telling people who publish how to do research? That seems simple enough; I am actually tempted to go that route coming out of this discussion.
-
Imagine a world in which "False-Positive Psychology" had been written not by Simonsohn, Simmons, and Nelson, but by a graduate student. Would that make the arguments of the paper less valid?
My only point is that papers stand on their own, regardless of who wrote them. Whether a paper is good or bad, makes sense or doesn't, isn't a function of the h-index of the people who put their names on it. If you identify things that are problematic in the way people do research, you should write a paper about it and submit it to peer review.
-
Heck, Andrew Gelman built a career out of it (I'm kidding, btw).
-
Hello. Although I am not suggesting anything (I am merely asking questions), I am putting some publicly available information here. Can someone provide more information about this?
1 - A study by Patti Williams, Nicole Verrochi Coleman, Andrea C. Morales, and Ludovica Cesareo (2018) - "Connections to Brands That Help Others versus Help the Self: The Impact of Incidental Awe and Pride on Consumer Relationships with Social-Benefit and Luxury Brands" has been retracted. Here is the retraction info: https://retractionwatch.com/2020/06/24/consumer-research-study-is-retracted-for-unexplained-anomalies/
Summary: the 1st, 3rd, and 4th authors retracted the paper. All data were collected and analyzed by the 2nd author.
2 - Patti Williams was Nicole Coleman's committee chair/advisor. Sources: https://faculty.wharton.upenn.edu/wp-content/uploads/2016/11/pw-vitae-April-2017.pdf, and the JCR paper that was a co-winner of the Ferber Award (an award for papers based on doctoral dissertations): http://www.ejcr.org/ferberaward.htm (see year 2014 and the paper).
3 - Nicole Coleman's faculty page (https://www.business.pitt.edu/people/nicole-verrochi-coleman) lists several publications (and working papers) with Patti Williams and Andrea Morales - examples include https://academic.oup.com/jcr/article/44/2/283/2939533, https://academic.oup.com/jcr/article/46/1/99/5049929. The retracted article is still on her page.
4 - When you go to Andrea C. Morales's updated CV or page (https://wpcarey.asu.edu/people/profile/837523), NO research with Nicole Coleman is listed. Nothing. Just do a Ctrl+F and search for Coleman. The same is true for Morales's Google Scholar page: https://scholar.google.com/citations?user=_eZDZvYAAAAJ&hl=en. There is no mention of the publications or working papers.
5 - The same is true for Patti Williams's page (https://marketing.wharton.upenn.edu/profile/pattiw/#research). The Ferber Award paper is there, but nothing else. All disappeared. http://scholar.google.com/citations?user=Rh121jwAAAAJ&hl=en
6 - I couldn't access Coleman's Google Scholar Citations page. -
Wow! I'm gonna read the procedure and try to prime myself. My dream is to fly like Superman!
Has anyone replicated the superhero priming effect by Leif Nelson?
Last year I located 92% of the participants of LN's study. The superhero priming effect was still found among them after all those years.