Check out the work of HD from UCLA. Weird results going on.
There’s no HD in marketing. There’s an AD (whose empirical work is almost all 2010?) and an HH.
MAKE PUBLISHING LESS (BUT IMPACTFUL SHIT) GREAT AGAIN
I don't know that much about the QA love per se, but I am one who is most impressed by ECRs who adopt open science practices and show themselves to be good analysts--not by lengthy CVs that are often built on a career of noise mining.
Yes, that is upsetting and makes me question whether I want my name associated with this field anymore... I looked up to DA before starting my PhD, and he was one of the reasons why I wanted to do behavioral research. And now, it feels like a gut punch to confirm the whispers I have been hearing.
That said, back to the QA love... if someone is preaching how to do research, they need to show that they can do research, right? A couple of papers do not make you a confirmed 'researcher' in my honest opinion... but this may be biased by our prior expectations about how prolific someone needs to be in CB.
First Milkman and now Dai. Are people not buying the fresh start effect? Having never tested it firsthand, I personally don't find it an unbelievable effect.
Guess it's Hengchen Dai (HD)?
Can we please use people's given names instead of this AA, BB, CC nonsense? It's creating confusion without helping the person being written about (AA, BB, etc.) or the person writing AA, BB, etc.
Katy Milkman is god
Hello. Although I am not suggesting anything (I am merely asking questions), I am putting some publicly available information here. Can someone provide more information about this?
1 - A study by Patti Williams, Nicole Verrochi Coleman, Andrea C. Morales, and Ludovica Cesareo (2018) - "Connections to Brands That Help Others versus Help the Self: The Impact of Incidental Awe and Pride on Consumer Relationships with Social-Benefit and Luxury Brands" has been retracted. Here is the retraction info: https://retractionwatch.com/2020/06/24/consumer-research-study-is-retracted-for-unexplained-anomalies/
Summary: the 1st, 3rd, and 4th authors retracted the paper. All data were collected and analyzed by the 2nd author.
2 - Patti Williams was Nicole Coleman's committee chair/advisor. Sources: https://faculty.wharton.upenn.edu/wp-content/uploads/2016/11/pw-vitae-April-2017.pdf, and the JCR paper that was a co-winner of the Ferber Award (an award for papers based on doctoral dissertations): http://www.ejcr.org/ferberaward.htm (see the year 2014 and the paper).
3 - Nicole Coleman's faculty page (https://www.business.pitt.edu/people/nicole-verrochi-coleman) lists several publications (and working papers) with Patti Williams and Andrea Morales - examples include https://academic.oup.com/jcr/article/44/2/283/2939533, https://academic.oup.com/jcr/article/46/1/99/5049929. The retracted article is still on her page.
4 - When you go to Andrea C. Morales' updated CV or page (https://wpcarey.asu.edu/people/profile/837523), NO research with Nicole Coleman is listed. Nothing. Just do a Ctrl+F and search for Coleman. The same is true for Morales' Google Scholar page: https://scholar.google.com/citations?user=_eZDZvYAAAAJ&hl=en. There is no mention of the publications or working papers.
5 - The same is true for Patti Williams' page (https://marketing.wharton.upenn.edu/profile/pattiw/#research). The Ferber Award paper is there, but nothing else; everything else has disappeared. http://scholar.google.com/citations?user=Rh121jwAAAAJ&hl=en
6 - I couldn't access Coleman's Google Scholar Citations page.
Man, what did you think when you first heard "The Emperor's New Clothes?" That the kid calling out the emperor was a nobody and should have been ignored?
People contribute to knowledge in different ways. I'm glad we have guys like QA and NB around to help the field raise its standards. Would we really be better off if they were desperately chasing flashy pubs instead? The reason people listen to them is because their critiques are valid and convincing, not because of their H-indices.
Maybe, just maybe, there's also a reason that the super prolific TED-talkers and the rigorous methodologists tend not to be the same people?
So, let me recap here: anyone who is not really able to do research can make a career out of telling people who publish how to do research? That seems simple enough; I am actually tempted to go that route after this discussion.
The problem with these people is that if they don't do experimental research and publish their findings, they don't truly understand the entire system. It's blaming and shaming people for imperfect decisions (p-hacking, not fraud) while ignoring the broader systems that are the root cause of the issue. They pick on people who have no power and are working within a system that other people built, while making excuses for themselves. They claim the superhero priming effect doesn't count because it was "different times" - but was it? Have things changed such that you can now get a job and tenure by not responding to reviewers who ask for studies to be dropped or additional analyses to be run? Have things changed so that I should care about p-hacking more than about taking care of my wife and son? DC, QA, NB, AC, and the like don't or won't see the bigger picture, which is what makes them hypocrites.
What is wrong with you people? You complain that there is a bunch of cr@p out there, that people p-hack their way into publishing too many impossible papers. Then... well, then you judge someone who knows more methods than all of these people combined by the number of pubs he has. Don't you see the irony? QA is already very respected and will keep climbing despite the gatekeepers. And guess what, he will not have a superhero or a 5-way interaction in the closet. Don't you think this is decent progress, given the pace of change in academia?
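And since the "p-hack your way into impossible papers" line keeps coming up, here is a minimal, purely illustrative simulation sketch (my own, in Python; it is not anything DataColada published, and the exact numbers are ballpark). It shows how just one mild QRP, peeking at the data and adding participants until p < .05, inflates the false-positive rate even when the true effect is exactly zero:

```python
# Illustrative sketch: optional stopping with no true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_phacked_study(n_start=20, n_max=100, step=10):
    """Two-group 'study' with zero true effect: test, then keep adding
    `step` subjects per group and re-testing until p < .05 or n_max."""
    a = list(rng.normal(size=n_start))
    b = list(rng.normal(size=n_start))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05 or len(a) >= n_max:
            return p < 0.05
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))

rate = sum(one_phacked_study() for _ in range(2000)) / 2000
print(f"False-positive rate with optional stopping: {rate:.0%}")
# A single fixed-n test would sit near 5%; repeated peeking pushes it well above that.
```

Even that mild peeking typically lands the false-positive rate in the mid-teens or higher; stack a few extra DVs, covariates, or outlier rules on top and it climbs further. That is the whole point of the critique.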
Research is not a gospel, so I don't think anyone can "preach" how to do it. People write papers about theories and methods, and either they make sense or they don't. People should read the papers and forge their own opinion.
Take Nick Brown: when he argued that Barbara Fredrickson's "Positivity Ratio" was built on utter nonsense, he was a complete nobody with zero publications and Fredrickson was a full prof with thousands of citations, but it did not matter: Brown's analysis was convincing. See here: https://www.theguardian.com/science/2014/jan/19/mathematics-of-happiness-debunked-nick-brown
Reputations are overblown. And yes, DA is also a painful reminder of that: You can apparently have more than 28k Google Scholar cites, and yet can't be arsed to check the distribution of your dependent variable.
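And for what it's worth, the kind of sanity check being alluded to is not exotic. A minimal sketch in Python (the file name and column name are hypothetical placeholders; this is not the actual dataset or the actual analysis):

```python
# Illustrative sketch: look at the raw distribution of your DV before trusting a result.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("study_data.csv")   # hypothetical file name
dv = df["miles_driven"]              # hypothetical dependent variable

print(dv.describe())                 # min, max, quartiles: any impossible values?
plt.hist(dv, bins=50)
plt.xlabel("Dependent variable")
plt.ylabel("Count")
plt.title("Raw distribution of the dependent variable")
plt.show()
```

Real-world measures are almost never flat; a near-uniform histogram, impossible values, or duplicated rows are all reasons to dig further before publishing, let alone citing.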
DC's grad students (Josh Lewis, Celia Gaertig, Rob Mislavsky...) publish experimental research, and their research is pre-registered with open data and methods. They are practicing what they preach, so I don't really see a reason to call DC hypocrites. I don't know about the other names you are mentioning.
I also disagree that DC "blame and shame" individual researchers. The only blaming and shaming I've seen on DataColada was targeted at journals that still refuse to require open data and methods, and still drag their feet about pre-registration. Check their conclusion about the Ariely case, or the "pre-registration" paper that they published at JCP: they call on journals to change their practices.
Of course the system encourages p-hacking and HARKing, and favors fraudsters over honest people, but I don't see how that can lead anyone to the conclusion that "business as usual" is fine. We have to put food on the table, but we're also scientists, and we have a broader responsibility toward other researchers, companies, consumers...
The QRP crew want to think of themselves as heroes and saviors. They're not. Your example of the emperor's new clothes is a perfect example of the narcissism and narrow thinking I'm talking about. They are not the brave, innocent boy pointing out that the emperor is naked. They're outsiders to the village, who maybe never even set foot in it - complaining that the villagers didn't fight the emperor as quickly or fiercely as the outsiders would have liked.
It's not true. They're pretty protective of their students, and that behavior would hurt their students on the market.
If they have a problem, it's admitting when they're wrong.
More info on this please
DC love to spread "gossip" about people based on their suspicions, and that gossip has denied people jobs. These people were later cleared by investigations.
I'm failing to see how identifying QRPs, explaining their impact on false-positives, and suggesting ways of correcting them is a sign of "narcissism and narrow thinking".
I'm also failing to see how they're "outsiders": They have published, and continue to publish, behavioral research. They have trained, and continue to train, behavioral researchers. Hell, they have even engaged in QRPs, recognized it, and clearly acknowledged that it makes it so much easier to publish papers... but that the resulting papers are crap.
Finally, they are right to point out that marketing is lagging behind. Psych journals have adopted open data, open methods, and embraced pre-registration at a much faster pace than marketing journals. The replication rate of marketing findings pales in comparison to the replication rate of psych findings.
My question for you: what is your argument against reform? Are there benefits to QRPs that I'm missing? Do you think that the field is better off thanks to them?