Languages: English
Commit c121374 by mrblobby123
Parent(s): 1a5d459

Upload aisafety.info.jsonl with huggingface_hub
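The commit message describes a programmatic upload. Below is a minimal sketch of how such an upload can be done with the `huggingface_hub` client; the repository id is a placeholder, since the target repo is not named on this page.

```python
# Sketch only: repo_id is a placeholder, not taken from this commit page.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` by default
api.upload_file(
    path_or_fileobj="aisafety.info.jsonl",   # local JSONL file to upload
    path_in_repo="aisafety.info.jsonl",      # destination path inside the repo
    repo_id="your-org/your-dataset",         # hypothetical dataset repo id
    repo_type="dataset",
    commit_message="Upload aisafety.info.jsonl with huggingface_hub",
)
```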

Files changed (1):
  1. aisafety.info.jsonl +1 -0
aisafety.info.jsonl CHANGED
@@ -313,3 +313,4 @@
313
  {"source": "aisafety.info", "source_type": "markdown", "url": "https://aisafety.info?state=88FN", "title": "What is reinforcement learning from human feedback (RLHF)?", "authors": ["Stampy aisafety.info"], "date_published": "2023-07-18T09:10:02Z", "text": "[Reinforcement learning from human feedback](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback) (RLHF) is a method developed by [OpenAI](https://openai.com/) and is an essential part of their [strategy](https://openai.com/blog/our-approach-to-ai-safety) for building AIs that are safe and aligned with human values. A prime example of an AI trained with RLHF is OpenAI’s [ChatGPT](https://openai.com/blog/chatgpt).\n\nIt is often hard to specify what exactly we want AIs to do. Say we want to make an AI do a [backflip](https://openai.com/research/learning-from-human-preferences); how can we accurately describe what a backflip is and what it isn’t? RLHF solves this the following way: We first show a human two examples of an AI’s backflip attempts, then let the human decide which one looks more backflippy, and finally update the AI correspondingly. Repeat this a thousand times, and we get the AI close to doing [actual backflips](https://images.openai.com/blob/cf6fdf49-ea9e-489d-a1f1-9753291cd09e/humanfeedbackjump.gif)!\n\nWe also want safe and helpful AI assistants, but like with backflips, it’s hard to specify exactly what this entails. The main candidates for AI assistants are [language models](http://aisafety.info?state=8161), which are trained to predict the next words on large datasets. They are not trained to be safe or helpful; their outputs can be toxic, offensive, dangerous, or plain useless. Again, using RLHF can bring us closer to our goal.\n\n[Training language models with RLHF](https://openai.com/research/instruction-following) (to be safe and helpful) works as follows:\n\n- Step 1: We have a dataset of prompts, and use labelers to write (safe and helpful) responses to these prompts. The responses are used to [fine-tune](https://en.wikipedia.org/wiki/Fine-tuning_(machine_learning)) a language model to produce outputs more like those given by the labelers.\n\n- Step 2: We input the prompts to our language model and sample several responses for every prompt. This time the labelers rank the responses from best to worst (according to how safe and helpful they are). A different AI model, the “reward model”, is trained to predict which responses labelers prefer.\n\n- Step 3: Using reinforcement learning, the reward model from step 2 is used to again fine-tune the language model from step 1. The language model is trained to generate responses that the reward model predicts labelers will prefer (safer and more helpful responses).\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/6dc80ffb-2780-42f1-71b4-3837c3e7ea00/public)\n\nSteps for reinforcement learning from human feedback. Source: [OpenAI](https://openai.com/research/instruction-following)\n\nIf we do it right, we end up with a language model that responds to our questions with mostly safe and helpful answers. 
In reality, though, RLHF still has [many problems](https://www.lesswrong.com/posts/d6DvuCKH5bSoT62DB/compendium-of-problems-with-rlhf).\n\n- Firstly, it’s not [robust](https://www.lesswrong.com/posts/RYcoJdvmoBbi5Nax7/jailbreaking-chatgpt-on-release-day): language models trained using RLHF can and do still produce harmful content.\n\n- Secondly, RLHF has limited scalability: it becomes ineffective when tasks grow so complex that humans can’t give useful feedback anymore. This is where [scalable oversight](https://arxiv.org/abs/2211.03540) methods such as [Iterated Distillation and Amplification](http://aisafety.info?state=897J) could help out in the future.\n\n- Lastly, by optimizing the AI for getting good feedback from humans, we incentivize things such as [deception](https://www.alignmentforum.org/posts/8whGos5JCdBzDbZhH/framings-of-deceptive-alignment).\n\nGiven the limitations of RLHF and the fact that it does not address many of the [hard problems in AI safety](http://aisafety.info?state=8163), we need [better strategies](http://aisafety.info?state=6479) to make AI safe and aligned.\n\n[^kix.vzhw8exxpu5y]: [https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/)\n[^kix.t5zv2pkpghwt]: [https://openai.com/blog/fine-tuning-gpt-2/](https://openai.com/blog/fine-tuning-gpt-2/)\n[^kix.ls0zryxxqolb]: Also sometimes called Reinforcement Learning on Constitutional AI (RL CAI)", "id": "393eaf965c70a2307e525ce05c8dbfc4", "summary": []}
314
  {"source": "aisafety.info", "source_type": "markdown", "url": "https://aisafety.info?state=7640", "title": "How and why should I form my own views about AI safety?", "authors": ["Stampy aisafety.info"], "date_published": "2023-07-19T02:43:56Z", "text": "\n\nAs with most things, collecting data and standing on the shoulders of giants by reading other people's ideas can be very helpful, while noticing disagreements and frames to compare while forming your own perspective.\n\nA key part of forming inside views is taking time to think carefully about how you think things will unfold, and how different approaches might affect this. Investing in having well-grounded models that you can bounce off other people’s to collaboratively figure out what to do is worthwhile if you want to steer the future reliably.\n\nReading through different researchers’ [threat models](https://www.alignmentforum.org/tag/threat-models) and [success stories](https://www.alignmentforum.org/tag/ai-success-models) can be particularly valuable. Some other points to explore:\n\n- The Machine Intelligence Research Institute’s [\"Why AI safety?\" info page](https://intelligence.org/why-ai-safety/) contains links to relevant research.\n\n- The Effective Altruism Forum has an article called [\"How I formed my own views on AI safety\"](https://forum.effectivealtruism.org/posts/xS9dFE3A6jdooiN7M/how-i-formed-my-own-views-about-ai-safety), which could be helpful.\n\n- Below is a Robert Miles YouTube video that can be a good place to start.\n\n- There is also [this article from Vox](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment). \n\n<iframe src=\"https://www.youtube.com/embed/pYXy-A4siMw\" title=\"Intro to AI Safety, Remastered\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen></iframe>\n\n", "id": "70c71f87ac7d9c1a8cc176e2a0382037", "summary": []}
315
  {"source": "aisafety.info", "source_type": "markdown", "url": "https://aisafety.info?state=5632", "title": "What is an agent?", "authors": ["Stampy aisafety.info"], "date_published": "2023-07-19T04:43:33Z", "text": "Intuitively, we think of an \"agent\" as something whose actions can be understood as [directed at achieving a goal](https://www.alignmentforum.org/posts/jHSi6BwDKTLt5dmsG/grokking-the-intentional-stance).\n\nOne way of identifying agency is to ask how a system would behave if its environment changed. For example, imagine a system which gives advice, but imagine that (for whatever reason) people start to consistently do the opposite of what it suggests. If the system is non-agentic, it will continue to give the same advice, but if it is an agent with the goal of causing people to be successful, it will try to find other ways of achieving that goal. For example, it might start giving the *opposite* advice.\n\nSome researchers use a more restricted definition of agency, focusing on the system’s ability to act in the world on its own. For example, [Richard Ngo argues](https://www.alignmentforum.org/posts/bz5GdmCWj8o48726N/agi-safety-from-first-principles-goals-and-agency) that there are six factors which contribute to a system being agent-like:\n\n> 1. Self-awareness: it understands that it’s a part of the world, and that its behavior impacts the world;\n\n> 2. Planning: it considers a wide range of possible sequences of behaviors (let’s call them “plans”), including long plans;\n\n> 3. Consequentialism: it decides which of those plans is best by considering the value of the outcomes that they produce;\n\n> 4. Scale: its choice is sensitive to the effects of plans over large distances and long time horizons;\n\n> 5. Coherence: it is internally unified towards implementing the single plan it judges to be best;\n\n> 6. Flexibility: it is able to adapt its plans flexibly as circumstances change, rather than just continuing the same patterns of behavior.\n\nThese features are distinct properties of human beings that collectively describe human agency. From this perspective, agency isn’t binary; rather, it exists as a spectrum along each of these dimensions, and different systems could have any combination of them.\n\nIn addition to disagreements over which properties are part of agency, there are also disagreements over whether there is an objective distinction between agents and non-agents: is everything in the world either one or the other, or is agency (merely) a \"[stance](https://www.alignmentforum.org/posts/jHSi6BwDKTLt5dmsG/grokking-the-intentional-stance)\" that an observer can take towards a system as a way of understanding and predicting its behavior? As an example of agency as a stance, we might *think of* another human being *as* an agent because we don’t know all of the psychological factors which go into their decisions; that is, we might interpret their behavior through understanding the goals they are pursuing. However, a superintelligence that was able to fully predict their behavior based on a mechanistic understanding of their psyche would not relate to them as an agent; it wouldn’t need to posit \"goals\" in order to understand their behavior.\n\n[The agent foundations research agenda](http://aisafety.info?state=7782) focuses on defining and clarifying the properties of agents and identifying those properties in systems. 
However, because the term \"agent\" has a variety of definitions across various fields, some people[^kix.x0ir6f5oi875] think that a focus on \"agents\" *per se* can be misleading and instead focus on more crisply-defined entities like \"[optimizers](http://aisafety.info?state=8C7W).\"\n\n[^kix.6dwbc43iy3c8]: transcribe the tweet\n[^kix.x0ir6f5oi875]: [This includes the originators of the agent foundations research program.](https://twitter.com/robbensinger/status/1648522190760067076)", "id": "b71d4466732b257f42b9c47c5c31002f", "summary": []}
 
 
316
+ {"source": "aisafety.info", "source_type": "markdown", "url": "https://aisafety.info?state=MEME", "title": "AI Safety Memes Wiki", "authors": ["Stampy aisafety.info"], "date_published": "2023-07-22T01:32:00Z", "text": "**This is a database of memes relevant to AI existential safety, especially ones which respond to common misconceptions.**\n\n# AGI won’t be powerful enough to destroy humanity\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/93db1f6d-f05d-493b-693b-253d82bb0e00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/4305140a-ce6c-458e-de21-fa4a817b8800/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/1842050b-cf21-4ce4-050e-b00222143200/public)\n\n# AI is “just” x (math, matrix multiplication, etc.)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/cdbeb049-4ed9-43ee-51ca-9bb4c60a6900/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/c7cc17d9-5fa6-40d5-d9a8-f041aefab800/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/5b932ba2-077e-4d2f-adfc-ab1877320d00/public)\n\n# Doubt on AGI capabilities (too far away, impossible, etc)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/45587690-9cf6-4ec3-c9d7-7a2f78a81b00/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/e9539d9a-a735-419c-4fb6-0fc25d1d9000/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/6c247983-618a-4367-eed0-5ed5a4242a00/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/fa910858-69c2-48b2-0b3c-30e968c49500/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/34355cb1-c5ed-4e06-86a9-b20feb468b00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/f6d00353-b9b1-4585-6160-4ff87a550700/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/6bf28d9c-3735-40ff-8c5a-fa6ee1643600/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/8b89f617-d6ab-4dcd-62cf-6033268ba800/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/2cb71bd9-bb78-48ce-a8d6-9ac1247cf800/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/c15bdbc6-6084-483d-1541-a5ddd0500800/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/f54af0d7-81a3-49b6-3a1c-68cbf26b6c00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/484d9d9f-33d7-4ad6-fd9a-6944b870e100/public)\n\n(animated: [https://research.aimultiple.com/wp-content/uploads/2017/08/wow.gif?fbclid=IwAR2JDaaf9eWWp9NnyidpNZI9L_9yf1eVvEFpno2hox83Oq6yii2qnasfFCQ](https://research.aimultiple.com/wp-content/uploads/2017/08/wow.gif?fbclid=IwAR2JDaaf9eWWp9NnyidpNZI9L_9yf1eVvEFpno2hox83Oq6yii2qnasfFCQ))\n\n# Goalpost-moving (still not AGI!)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/03b6e5db-073c-49a6-ec2d-104ae5b94300/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/32e9da92-a847-4f7d-6a8d-a75b5ca71000/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/967779a6-9144-4015-1d0f-9ca8a7538b00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/31fbc738-4835-4a2a-0ff7-859090049300/public)\n\n# Why would AI want to be evil?\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/96f4d130-a728-46cd-8296-5fa85b04f400/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/669c5b7e-0700-4331-b449-19ee59799200/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/033c683c-8afd-4789-5f57-219a0694fc00/public)\n\n# 
\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/da0edf22-e22d-44d8-ac27-972176863b00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/5b39ed55-e20c-4035-35d0-404521b59700/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/947e3ca1-5216-493b-fc42-cab334f14100/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/c7c23d97-0376-4823-6fc6-5b9af3da6000/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/08e9d377-a3c2-4396-df8a-a54736216f00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/72c6a1d3-9375-4c61-ce99-fe665657a300/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/3e8356fa-ff10-4ba9-4691-fafda9103500/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/20c33d9e-f4cd-4ac5-171f-2f6959457e00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/f855a91c-4ab4-457c-f97d-a73f9a8cf800/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/2672ee61-ea94-4811-fb39-d91e968b6100/public)\n\n# Whatever AI does is good since they’re way smarter\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/56fad389-9c6f-450d-c7da-9acae290ce00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/5aef89cf-b9b0-4526-4303-dc52f6723200/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/b0a00519-d372-48a7-1f19-1489bfc65f00/public)\n\n# AI alignment is a fringe worry\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/91b8bc39-2054-4c8c-255f-352de2694500/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/0f6cc8d4-9fc9-4caf-bb6b-91220c6ae400/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/2b0ee1a5-0a84-4947-9e25-5f68383f2400/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/04b10d82-2612-498c-55ad-dc1170e2ab00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/57f7941b-6da7-4261-e6dd-f63cb1fc6400/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/3c37a59b-b1d5-4f84-815d-c20089892800/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/6aef158c-da96-4835-2d52-cf2935a6a200/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/f0cfb2ef-8fee-4316-e523-5bef7c4afa00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/93d6a153-8745-45a7-71fd-945c89066c00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/929b1d78-f99d-4e74-bcc5-b1829c8ab000/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/97a03513-eae4-4171-605b-c11391c97600/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/3984e541-b95a-43c3-6eea-e0bdabd3f300/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/14baa677-4fed-4a4a-d639-fbd5d212be00/public)\n\n# We can’t just regulate AI development\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/c9fa905b-b656-479a-1286-fe82e6c4f600/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/99f2f521-b2d1-42ad-85c6-522285b35e00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/074fcf55-aa86-448c-512d-92831e0b5300/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/2b1f3505-bcd5-4b49-e93c-72f89f27ad00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/b9d00924-93e5-40e8-f629-4cb352181000/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/db21a335-99e8-48c4-aaa8-4fcb28939600/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/1f98d9f6-8be5-4eb5-7a1b-aadac678d200/public)\n\n# AI regulation is 
bad\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/7ca7d100-3fd2-4969-be84-612a17a11800/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/7d727416-7026-460c-0594-5c279fe86400/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/35fe7ada-3d70-4da8-1f11-eaf51a5f8e00/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/aaf8df9b-b2cf-4889-5d3e-f028699d9700/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/9317531c-ba6a-4751-82c3-22d8f52f3400/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/ccf38beb-2575-45be-1fbe-02017366e200/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/15eff5cd-f3a6-47d5-1f97-151699e84300/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/56c93750-470b-424a-9daf-6bc7aa6bc400/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/54a031e1-1a67-4916-0530-6c4d6d944d00/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/c907f17c-2bd7-4347-4ee6-a7b41e13b000/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/27366602-0f4a-41bd-ee74-7d3761917600/public)\n\n# Alignment will be solved, don’t worry\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/ab7d7048-f451-43ca-96fb-907ffdb63000/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/65f65b1e-62f6-40c0-8889-8d93f7677100/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/9cb79d9c-0376-416d-5dd1-2ba0ef6abe00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/e27600f6-0062-4374-6cba-22151bf2b400/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/ab554b8b-8dd5-4d97-e31e-42ca443e3b00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/4efb1b2f-3eec-421b-24e6-8668d890e900/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/4c123f92-8378-4804-bc1c-aef6e3509900/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/b08c7a01-49ab-4392-8bb5-dd7627ec9c00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/bafeff91-7cdd-4209-9fee-7f2c2bd5f800/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/e3bb8fce-9389-4677-700c-312be38eab00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/35d20dc5-6a79-4f65-d0da-1074c916a600/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/d2fe4f94-b2d4-483f-3878-1b53c6282e00/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/f055d42e-b65c-443c-0f41-c083a8136a00/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/53110c7e-4530-46b3-1eae-8c97d34c9700/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/6fc8b71d-e744-48f2-b9b4-d4d7b313f100/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/3e6f0567-0e77-49e2-dc92-8aa59347a300/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/fb3fe1f7-4e66-4446-20c6-55dd2e87c000/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/a894156c-d384-405e-2157-05c1ebb9e400/public)\n\n# The real danger is from modern AI, not ASI (myopia)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/090c6029-3f59-49d5-e2eb-53da078d8b00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/11632fb0-b4c4-46cc-2137-64b86ad3d900/public)\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/354c88d5-cd6a-425f-3b42-7a753b152d00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/f7ee0e1c-e3a4-4c8e-943d-4f4241adf300/public)\n\n# You’re saying other risks aren’t 
important\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/45834d9c-f17c-40bb-28b4-0903891d8700/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/0c5821cc-819f-4345-7886-c532ce563900/public)\n\n# Debates abt consciousness, sentience, qualia, etc.\n\n# \n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/9163ca9a-0bfb-4acd-2318-462f20128700/public)\n\n# F*ck Moloch!\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/2b0be628-c4cf-447f-bdcf-2212ba696400/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/bde4d4e4-ddb3-4c07-a853-2c6d358d4100/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/aabe3637-ef9d-4d88-5195-ec5cb2df1f00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/2256f6af-9e51-471c-f672-eb09f6f2d900/public)\n\n# Misc other takes\n\n## “AI acceleration is good”\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/bd5f31dc-47a9-4aec-830a-1b08ed7f8800/public)\n\n## “Yudkowskian scenarios are unlikely”\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/6bad4ce8-17ad-4c8a-56a2-9fa14a33ce00/public)\n\n## “Death is inevitable”\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/408626e4-a507-46c0-be13-d448bf35fa00/public)\n\n## “Start more AI labs”\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/f1ccfea4-654a-4381-58a8-492c24365300/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/3dc1e295-ec65-46d9-40eb-f8a4dffe8500/public)\n\n## “AI is like electricity”\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/c7d90016-db2d-4eb5-da79-588be7b8e600/public)\n\n## “I only thought abt it for 5 secs”\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/6b0241f7-498d-4c5b-3734-0777429da800/public)\n\n## “AI safety is a Pascal’s Wager!”\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/ed396ed0-a726-4ebb-7eea-ca67ea50cb00/public)\n\n## “AI safety is an infohazard!”\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/eab25f70-fb6b-4121-e77d-66bad053bc00/public)\n\n## “You can’t predict the future exactly”\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/b56655fe-a365-4615-60f0-27b07b78bd00/public)\n\n# Rapid-fire bad solutions (bro just…)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/05645061-5790-4dd5-adff-9d62a3496d00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/ace0c387-f52a-4528-7b2a-2a373dfee200/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/e6b78f8d-6c19-435f-7fe5-baf265631d00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/fc42c84e-e85c-4973-a10c-a31627699a00/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/b25360ba-22ca-431c-5130-84da32f58d00/public)\n\nBad Yann takes\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/d1b70e97-257a-4705-1f8f-92f4611d2a00/public)\n\nJust train multiple AIs\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/e52f1da9-14cd-4c94-f9ec-e8b0e641d700/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/222afa8f-9f68-4e6d-b7cf-67062e16d000/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/df49d1f8-8ef4-4121-14bc-00ee9aa58400/public)\n\n## Just box the AI\n\n## Just turn it off if it turns against us\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/32c19577-5c93-4222-53b3-db67d285d000/public)\n\n![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/d196a3c8-c7d2-4b81-9348-51e8d31c1000/public)\n\n\t*Just use Asimov’s three laws*\n\n## Just don’t give AI access to the “real world”\n\n## Just merge with the AIs\n\n\t*Just legally 
mandate that AIs must be aligned*\n\n*Just raise the AI like a child*\n\n", "id": "848cd377c6814446332f4ee0cd1b7aad", "summary": []}
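Each record in the hunk above is one JSON object per line, with fields such as `source`, `source_type`, `url`, `title`, `authors`, `date_published`, `text`, `id`, and `summary`. A minimal sketch of loading and inspecting the file with the Python standard library (only the file name is taken from the commit; everything else is illustrative):

```python
import json

# Read the JSONL file from this commit: one JSON object per line.
with open("aisafety.info.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

# Print a few of the fields visible in the diff above.
for rec in records[:3]:
    print(rec["title"], "|", rec["url"], "|", rec["date_published"])
```

The same file should also load with `datasets.load_dataset("json", data_files="aisafety.info.jsonl")`, if the Hugging Face `datasets` library is preferred.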