
S1E31: Interview with Rajeev Dehejia, Professor at NYU and Economist

Duration: 1:02:08
 

Background stuff about causal inference

Josh Angrist once quipped (on my podcast!) that a paper he wishes he had written was written by his classmate, Bob LaLonde. It was LaLonde's job market paper, later published in the AER, that arguably helped bring broader attention to some of the empirical problems around causal inference in applied labor at the time. It was very ingenious, too. LaLonde took a job training program that had been run as an RCT, showed that the causal effect of the program was around $800-900, and then dropped the experimental control group. He then pulled in six comparison datasets from nationally representative surveys of Americans (three from the Current Population Survey, three from the Panel Study of Income Dynamics) and reran the analysis with and without covariate adjustment. Unsurprisingly to modern readers, the estimates were severely biased. Not only were the magnitudes off; much of the time the estimates came out negative. This matters because the RCT had established the ground truth of the training program: it caused an average earnings gain of $800-900. If the typical observational methods couldn't even get close to that, that was a problem. They didn't get close, and one more of the many sparks that lit the fuse of the credibility revolution had been struck.
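If you want to see the shape of LaLonde's exercise, here is a minimal sketch in Python. The file names and column names are hypothetical stand-ins (the real NSW, CPS, and PSID extracts differ), and the regressions are the simplest possible versions of "with and without covariate adjustment," not LaLonde's exact specifications.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical files: an experimental sample (treated + randomized controls)
# and a survey comparison group. Column names (treat, re78, etc.) are assumed.
nsw = pd.read_csv("nsw_experimental.csv")
cps = pd.read_csv("cps_controls.csv")

# 1. Experimental benchmark: difference in mean post-program earnings.
benchmark = (nsw.loc[nsw.treat == 1, "re78"].mean()
             - nsw.loc[nsw.treat == 0, "re78"].mean())

# 2. Drop the randomized controls and swap in the survey comparison group.
nonexp = pd.concat([nsw[nsw.treat == 1], cps.assign(treat=0)],
                   ignore_index=True)

# 3. Re-estimate the "effect" with and without covariate adjustment.
naive = smf.ols("re78 ~ treat", data=nonexp).fit()
adjusted = smf.ols("re78 ~ treat + age + educ + black + hisp + re75",
                   data=nonexp).fit()

print(f"experimental benchmark:          {benchmark:.0f}")
print(f"non-experimental, no covariates: {naive.params['treat']:.0f}")
print(f"non-experimental, adjusted:      {adjusted.params['treat']:.0f}")
```

In LaLonde's paper, the last two numbers wandered far from the first, often flipping sign, which is the whole point of the exercise.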

LaLonde's paper was published in 1986. Angrist graduated in 1989 and took a job at Harvard, where he met Guido Imbens. During their time together at Harvard in the 1990s, Imbens and Angrist began meeting with Don Rubin, the head of the statistics department, and between the three of them several breakthrough contributions to instrumental variables were born.

Rajeev Dehejia Revisits LaLonde

In the midst of this period, when Angrist, Imbens, and Rubin were all at Harvard, there was a young graduate student in the economics department named Rajeev Dehejia. One semester Rubin and Imbens co-taught an innovative new class on causal inference, and Rajeev was one of the students who took it that year. After the class concluded, he and his classmate Sadek Wahba decided not so much to replicate LaLonde as to extend his analysis using the more up-to-date methods they had learned from Imbens and Rubin. They chose the propensity score and published two papers reevaluating the LaLonde data: one in JASA in 1999 and one in the Review of Economics and Statistics in 2002. The propensity score analysis ultimately did much better than LaLonde's analysis had. A lot of the gains came simply from recognizing the serious common support violations rampant in all six of those comparison datasets. One value of the propensity score is, after all, the dimension reduction you get from taking, say, ten covariates and collapsing them into a single scalar (the propensity score). Once they did that, they saw how severe the negative selection was. A huge share of the non-experimental controls had propensity scores with so many zeroes after the decimal that it was as if the data were saying, "these people in the CPS wouldn't appear in that treatment group in a million years!"
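Here is a sketch of that common support diagnostic in the same hypothetical setup as above: collapse the covariates into one scalar with a logit propensity score, then count how many survey "controls" fall outside the range of scores among the treated. This illustrates the idea, not Dehejia and Wahba's exact procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Same hypothetical files and column names as in the sketch above.
nsw = pd.read_csv("nsw_experimental.csv")
cps = pd.read_csv("cps_controls.csv")
nonexp = pd.concat([nsw[nsw.treat == 1], cps.assign(treat=0)],
                   ignore_index=True)

# Collapse the covariates into one scalar: Pr(treat = 1 | X) from a logit.
ps = smf.logit("treat ~ age + educ + black + hisp + re75",
               data=nonexp).fit(disp=0)
nonexp["pscore"] = ps.predict(nonexp)

# Common support check: which controls fall outside the treated units'
# range of scores? These are the survey observations with scores near zero.
lo = nonexp.loc[nonexp.treat == 1, "pscore"].min()
hi = nonexp.loc[nonexp.treat == 1, "pscore"].max()
off_support = nonexp[(nonexp.treat == 0) & ~nonexp.pscore.between(lo, hi)]
print(f"controls off the common support: {len(off_support)}")
```

In data like the CPS, the off-support count is typically enormous, which is exactly the negative selection described above.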

That's how I knew of Dehejia for years: as the author of two papers showing that propensity score analysis might hold promise for program evaluation even with deep negative selection baked into the data. I saw him as one of the earliest researchers of the broader credibility revolution, trained by that next wave of people connected to the Princeton Industrial Relations Section, like Angrist, as well as by Imbens and Rubin, who were reshaping our applied practices paper after paper. So it is a great pleasure to introduce him to you this week on my podcast, The Mixtape with Scott.

Get full access to Scott's Substack at causalinf.substack.com/subscribe
