LW - Interested in Cognitive Bootcamp? by Raemon
Archived series ("Inactive feed" status)
When? This feed was archived on October 23, 2024 10:10.
Why? Inactive feed status. Our servers were unable to retrieve a valid podcast feed for a sustained period.
What now? You might be able to find a more up-to-date version using the search function. This series will no longer be checked for updates. If you believe this to be in error, please check whether the publisher's feed link below is valid, and contact support to request that the feed be restored or with any other concerns.
Manage episode 440885345 series 3337129
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interested in Cognitive Bootcamp?, published by Raemon on September 20, 2024 on LessWrong.
I'm running more 4-day "Cognitive Bootcamps" over the next couple months (during Lighthaven Eternal September season). DM me if you're potentially interested (either as an individual, or as a team).
The workshop is most valuable to people who:
control their decisionmaking process (i.e. you decide what projects you or a team work on, rather than working at a day-job on someone else's vision)
are either a) confused about planmaking, or have a vague sense that they aren't as strategically ambitious as they could be,
and/or b) are at a place where it's natural to spend a few days thinking big-picture thoughts before deciding on their next project.
There's a secondary[1] focus on "practice solving confusing problems", which IMO is time well spent, but requires more followup practice to pay off.
I wrote about the previous workshop here. Participants said on average they'd have been willing to pay $850 for it, and would have paid $5000 for the ideal, perfectly-tailored-for-them version. My plan is to charge $500/person for the next workshop, and then $1000 for the next one.
I'm most excited to run this for teams, who can develop a shared skillset and accompanying culture. I plan to tailor the workshops for the needs of whichever people show up.
The dates are not scheduled yet (depends somewhat on when a critical mass of participants are available). DM me if you are interested.
The skills being taught will be similar to the sort of thing listed in Skills from a year of Purposeful Rationality Practice and the Feedbackloop-first Rationality sequence. My default curriculum aims to teach several interrelated skills you can practice over four days, which build into a coherent metaskill of "ambitious planning, at multiple timescales."
1. ^
I started this project oriented around "find better feedbackloops for solving confusing problems", and later decided that planmaking was the highest leverage part of the skill tree to focus on.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
1851 episodes