{"id": "04b09902b8cab6d441495c747393855d", "title": "Adversarial Examples Are Not Bugs, They Are Features", "url": "http://gradientscience.org/adv/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Andrew Ilyas*", "Shibani Santurkar*", "Dimitris Tsipras*", "Logan Engstrom*", "Brandon Tran", "Aleksander Madry"], "summaries": ["_Distill published a discussion of this paper. This highlights section will cover the full discussion; all of these summaries and opinions are meant to be read together._\n\nConsider two possible explanations of adversarial examples. First, they could be caused because the model \"hallucinates\" a signal that is not useful for classification, and it becomes very sensitive to this feature. We could call these \"bugs\", since they don't generalize well. Second, they could be caused by features that _do_ generalize to the test set, but _can_ be modified by an adversarial perturbation. We could call these \"non-robust features\" (as opposed to \"robust features\", which can't be changed by an adversarial perturbation). The authors argue that at least some adversarial perturbations fall into the second category of being informative but sensitive features, based on two experiments.\n\nIf the \"hallucination\" explanation were true, the hallucinations would presumably be caused by the training process, the choice of architecture, the size of the dataset, **but not by the type of data**. So one thing to do would be to see if we can construct a dataset such that a model trained on that dataset is _already_ robust, without adversarial training. The authors do this in the first experiment. They take an adversarially trained robust classifier, and create images whose features (final-layer activations of the robust classifier) match the features of some unmodified input. The generated images only have robust features because the original classifier was robust, and in fact models trained on this dataset are automatically robust.\n\nIf the \"non-robust features\" explanation were true, then it should be possible for a model to learn on a dataset containing only non-robust features (which will look nonsensical to humans) and **still generalize to a normal-looking test set**. In the second experiment (henceforth WrongLabels), the authors construct such a dataset. Their hypothesis is that adversarial perturbations work by introducing non-robust features of the target class. So, to construct their dataset, they take an image x with original label y, adversarially perturb it towards some class y' to get image x', and then add (x', y') to their dataset (even though to a human x' looks like class y). They have two versions of this: in RandLabels, the target class y' is chosen randomly, whereas in DetLabels, y' is chosen to be y + 1. For both datasets, if you train a new model on the dataset, you get good performance **on the original test set**, showing that the \"non-robust features\" do generalize."], "venue": "arXiv", "opinion": "I buy this hypothesis. It explains why adversarial examples occur (\"because they are useful to reduce loss\"), and why they transfer across models (\"because different models can learn the same non-robust features\"). In fact, the paper shows that architectures that did worse in ExpWrongLabels (and so presumably are bad at learning non-robust features) are also the ones to which adversarial examples transfer the least. 
I'll leave the rest of my opinion to the opinions on the responses.", "highlight": true, "read_more": "[Paper](https://arxiv.org/abs/1905.02175) and [Author response](https://distill.pub/2019/advex-bugs-discussion/original-authors/)", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #62", "newsletter_category": "Adversarial examples"}
{"id": "79c0b71f1e02e89762a97f2b096ed062", "title": "Response: Learning from Incorrectly Labeled Data", "url": "https://distill.pub/2019/advex-bugs-discussion/response-6/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Eric Wallace"], "summaries": ["This response notes that all of the experiments are of the form: create a dataset D that is consistent with a model M; then, when you train a new model M' on D you get the same properties as M. Thus, we can interpret these experiments as showing that [model distillation](https://arxiv.org/abs/1503.02531) can work even with data points that we would naively think of \"incorrectly labeled\". This is a more general phenomenon: we can take an MNIST model, select _only_ the examples for which the top prediction is incorrect (labeled with these incorrect top predictions), and train a new model on that -- and get nontrivial performance on the original test set, even though the new model has never seen a \"correctly labeled\" example."], "venue": "Distill", "opinion": "I definitely agree that these results can be thought of as a form of model distillation. I don't think this detracts from the main point of the paper: the _reason_ model distillation works even with incorrectly labeled data is probably because the data is labeled in such a way that it incentivizes the new model to pick out the same features that the old model was paying attention to.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #62", "newsletter_category": "Adversarial examples"}
{"id": "7eebe02362046b0c42a6a7e4bed5c860", "title": "Response: Robust Feature Leakage", "url": "https://distill.pub/2019/advex-bugs-discussion/response-2/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Gabriel Goh"], "summaries": ["This response investigates whether the datasets in WrongLabels could have had robust features. Specifically, it checks whether a linear classifier over provably robust features trained on the WrongLabels dataset can get good accuracy on the _original_ test set. This shouldn't be possible since WrongLabels is meant to correlate only non-robust features with labels. It finds that you _can_ get some accuracy with RandLabels, but you don't get much accuracy with DetLabels.\n\nThe original authors can actually explain this: intuitively, you get accuracy with RandLabels because it's less harmful to choose labels randomly than to choose them explicitly incorrectly. With random labels on unmodified inputs, robust features should be completely uncorrelated with accuracy. However, with random labels _followed by an adversarial perturbation towards the label_, there can be some correlation, because the adversarial perturbation can add \"a small amount\" of the robust feature. However, in DetLabels, the labels are _wrong_, and so the robust features are _negatively correlated_ with the true label, and while this can be reduced by an adversarial perturbation, it can't be reversed (otherwise it wouldn't be robust)."], "venue": "Distill", "opinion": "The original authors' explanation of these results is quite compelling; it seems correct to me.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #62", "newsletter_category": "Adversarial examples"}
{"id": "ef575914b23c57b754435803fed6ef30", "title": "Response: Adversarial Examples are Just Bugs, Too", "url": "https://distill.pub/2019/advex-bugs-discussion/response-5/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Preetum Nakkiran"], "summaries": ["The main point of this response is that adversarial examples can be bugs too. In particular, if you construct adversarial examples that explicitly _don't_ transfer between models, and then run ExpWrongLabels with such adversarial perturbations, then the resulting model doesn't perform well on the original test set (and so it must not have learned non-robust features).\n\nIt also constructs a data distribution where **every useful feature _of the optimal classifer_ is guaranteed to be robust**, and shows that we can still get adversarial examples with a typical model, showing that it is not just non-robust features that cause adversarial examples.\n\nIn their response, the authors clarify that they didn't intend to claim that adversarial examples could not arise due to \"bugs\", just that \"bugs\" were not the only explanation. In particular, they say that their main thesis is “adversarial examples will not just go away as we fix bugs in our models”, which is consistent with the point in this response."], "venue": "Distill", "opinion": "Amusingly, I think I'm more bullish on the original paper's claims than the authors themselves. It's certainly true that adversarial examples can arise from \"bugs\": if your model overfits to your data, then you should expect adversarial examples along the overfitted decision boundary. The dataset constructed in this response is a particularly clean example: the optimal classifier would have an accuracy of 90%, but the model is trained to accuracy 99.9%, which means it must be overfitting.\n\nHowever, I claim that with large and varied datasets with neural nets, we are typically not in the regime where models overfit to the data, and the presence of \"bugs\" in the model will decrease. (You certainly _can_ get a neural net to be \"buggy\", e.g. by randomly labeling the data, but if you're using real data with a natural task then I don't expect it to happen to a significant degree.) Nonetheless, adversarial examples persist, because the features that models use are not the ones that humans use.\n\nIt's also worth noting that this experiment strongly supports the hypothesis that adversarial examples transfer because they are real features that generalize to the test set.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #62", "newsletter_category": "Adversarial examples"}
{"id": "52278be76970607c43300c9031406e8d", "title": "Response: Adversarial Example Researchers Need to Expand What is Meant by ‘Robustness’", "url": "https://distill.pub/2019/advex-bugs-discussion/response-1/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Justin Gilmer", "Dan Hendrycks"], "summaries": ["This response argues that the results in the original paper are simply a consequence of a generally accepted principle: \"models lack robustness to distribution shift because they latch onto superficial correlations in the data\". This isn't just about L_p norm ball adversarial perturbations: for example, one [recent paper](https://arxiv.org/abs/1906.08988) shows that if the model is only given access to high frequency features of images (which look uniformly grey to humans), it can still get above 50% accuracy. In fact, when we do adversarial training to become robust to L_p perturbations, then the model pays attention to different non-robust features and becomes more vulnerable to e.g. [low-frequency fog corruption](http://arxiv.org/abs/1903.12261). The authors call for adversarial examples researchers to move beyond L_p perturbations and think about the many different ways models can be fragile, and to make them more robust to distributional shift."], "venue": "Distill", "opinion": "I strongly agree with the worldview behind this response, and especially the principle they identified. I didn't know this was a generally accepted principle, though of course I am not an expert on distributional robustness.\n\nOne thing to note is what is meant by \"superficial correlation\" here. I interpret it to mean a correlation that really does exist in the dataset, that really does generalize to the test set, but that _doesn't_ generalize out of distribution. A better term might be \"fragile correlation\". All of the experiments so far have been looking at within-distribution generalization (aka generalization to the test set), and are showing that non-robust features _do_ generalize within-distribution. By my understanding, this response is arguing that there are many such non-robust features that will generalize within-distribution but will not generalize under distributional shift, and we need to make our models robust to all of them, not just L_p adversarial perturbations.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #62", "newsletter_category": "Adversarial examples"}
{"id": "2f392649093bbc5931f1b0fabc1000f9", "title": "Response: Two Examples of Useful, Non-Robust Features", "url": "https://distill.pub/2019/advex-bugs-discussion/response-3/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Gabriel Goh"], "summaries": ["This response studies linear features, since we can analytically compute their usefulness and robustness. It plots the singular vectors of the data as features, and finds that such features are either robust and useful, or non-robust and not useful. However, you can get useful, non-robust features by ensembling or contamination (see response for details)."], "venue": "Distill", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #62", "newsletter_category": "Adversarial examples"}
{"id": "d41f30b986e62f21a4a7c710ef19442e", "title": "Response: Adversarially Robust Neural Style Transfer", "url": "https://distill.pub/2019/advex-bugs-discussion/response-4/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Reiichiro Nakano"], "summaries": ["The original paper showed that adversarial examples don't transfer well to VGG, and that VGG doesn't tend to learn similar non-robust features as a ResNet. Separately, VGG works particularly well for style transfer. Perhaps since VGG doesn't capture non-robust features as well, the results of style transfer look better to humans? This response and the author's response investigate this hypothesis in more detail and find that it seems broadly supported, but there are still finnicky details to be worked out."], "venue": "Distill", "opinion": "This is an intriguing empirical fact. However, I don't really buy the theoretical argument that style transfer works because it doesn't use non-robust features, since I would typically expect that a model that doesn't use L_p-fragile features would instead use features that are fragile or non-robust in some other way.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #62", "newsletter_category": "Adversarial examples"}
{"id": "293197788c11c8c0a9f84bec182f6b80", "title": "Introducing the Unrestricted Adversarial Examples Challenge", "url": "https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tom B. Brown and Catherine Olsson", "Reserach Engineers", "Google Brain Team"], "summaries": ["There's a new adversarial examples contest, after the one from NIPS 2017. The goal of this contest is to figure out how to create a model that never confidently makes a mistake on a very simple task, even in the presence of a powerful adversary. This leads to many differences from the previous contest. The task is a lot simpler -- classifiers only need to distinguish between bicycles and birds, with an option of saying \"ambiguous\". Instead of using the L-infinity norm ball to define what an adversarial example is, attackers are allowed to supply any image whatsoever, as long as a team of human evaluators agrees unanimously on the classification of the image. The contest has no time bound, and will run until some defense survives for 90 days without being broken even once. A defense is not broken if it says \"ambiguous\" on an adversarial example. Any submitted defense will be published, which means that attackers can specialize their attacks to that specific model (i.e. it is white box)."], "venue": "Google AI Blog", "opinion": "I really like this contest format, it seems like it's actually answering the question we care about, for a simple task. If I were designing a defense, the first thing I'd aim for would be to get a lot of training data, ideally from different distributions in the real world, but data augmentation techniques may also be necessary, especially for eg. images of a bicycle against an unrealistic textured background. The second thing would be to shrink the size of the model, to make it more likely that it generalizes better (in accordance with Occam's razor or the minimum description length principle). After that I'd think about the defenses proposed in the literature. I'm not sure how the verification-based approaches will work, since they are intrinsically tied to the L-infinity norm ball definition of adversarial examples, or something similar -- you can't include the human evaluators in your specification of what you want to verify.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #24", "newsletter_category": "Adversarial examples"}
{"id": "d56532e0c18820c5a5305293e321fe06", "title": "Physically Realistic Attacks on Deep Reinforcement Learning", "url": "https://bair.berkeley.edu/blog/2020/03/27/attacks/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Adam Gleave"], "summaries": ["This is a blog post for a previously summarized paper, <@Adversarial Policies: Attacking Deep Reinforcement Learning@>."], "venue": "BAIR Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #93", "newsletter_category": "Adversarial examples"}
{"id": "6ca630fea4e47fd30cddbca2d6d0f94d", "title": "Robustness beyond Security: Representation Learning", "url": "http://gradientscience.org/robust_reps/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Brandon Tran", "Aleksander Madry"], "summaries": ["Earlier this year, a <@provocative paper@>(@Adversarial Examples Are Not Bugs, They Are Features@) out of MIT claimed that adversarial perturbations weren’t just spurious correlations, but were, at least in some cases, features that generalize to the test set. A subtler implied point of the paper was that robustness to adversarial examples wasn’t a matter of resolving the model’s misapprehensions, but rather one of removing the model’s sensitivity to features that would be too small for a human to perceive. If we do this via adversarial training, we get so-called “robust representations”. The same group has now put out another paper, asking the question: are robust representations also human-like representations?\n\nTo evaluate how human-like the representations are, they propose the following experiment: take a source image, and optimize it until its representations (penultimate layer activations) match those of some target image. If the representations are human-like, the result of this optimization should look (to humans) very similar to the target image. (They call this property “invertibility”.) Normal image classifiers fail miserably at this test: the image looks basically like the source image, making it a classic adversarial example. Robust models on the other hand pass the test, suggesting that robust representations usually are human-like. They provide further evidence by showing that you can run feature visualization without regularization and get meaningful results (existing methods result in noise if you don’t regularize)."], "venue": "arXiv", "opinion": "I found this paper clear, well-written, and straightforward in its empirical examination of how the representations learned by standard and robust models differ. I also have a particular interest in this line of research, since I have thought for a while that we should be more clear about the fact that adversarially-susceptible models aren’t wrong in some absolute sense, but relative to human perception in particular.\n\n**Rohin’s opinion:** I agree with Cody above, and have a few more thoughts.\n\nMost of the evidence in this paper suggests that the learned representations are “human-like” in the sense that two images that have similar representations must also be perceptually similar (to humans). That is, by enforcing that “small change in pixels” implies “small change in representations”, you seem to get for free the converse: “small change in representations” implies “small change in pixels”. This wasn’t obvious to me: a priori, each feature could have corresponded to 2+ “clusters” of inputs.\n\nThe authors also seem to be making a claim that the representations are semantically similar to the ones humans use. I don’t find the evidence for this as compelling. For example, they claim that when putting the “stripes” feature on a picture of an animal, only the animal gets the stripes and not the background. 
However, when I tried it myself in the interactive visualization, it looked like a lot of the background was also getting stripes.\n\nOne typical regularization for [feature visualization](https://distill.pub/2017/feature-visualization/) is to jitter the image while optimizing it, which seems similar to selecting for robustness to imperceptible changes, so it makes sense that using robust features helps with feature visualization. That said, there are several other techniques for regularization, and the authors didn’t need any of them, which is very interesting. On the other hand, their visualizations don't look as good to me as those from other papers.", "highlight": false, "read_more": "Paper: Adversarial Robustness as a Prior for Learned Representations", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #68", "newsletter_category": "Adversarial examples"}
{"id": "0f2aaf6122d19ecee8d67fb6ce593788", "title": "Robustness beyond Security: Computer Vision Applications", "url": "http://gradientscience.org/robust_apps/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Shibani Santurkar*", "Dimitris Tsipras*", "Brandon Tran*", "Andrew Ilyas*", "Logan Engstrom*", "Aleksander Madry"], "summaries": ["Since a robust model seems to have significantly more \"human-like\" features (see post above), it should be able to help with many of the tasks in computer vision. The authors demonstrate results on image generation, image-to-image translation, inpainting, superresolution and interactive image manipulation: all of which are done simply by optimizing the image to maximize the probability of a particular class label or the value of a particular learned feature."], "venue": "arXiv", "opinion": "This provides more evidence of the utility of robust features, though all of the comments from the previous paper apply here as well. In particular, looking at the results, my non-expert guess is that they are probably not state-of-the-art (but it's still interesting that one simple method is able to do well on all of these tasks).", "highlight": false, "read_more": "Paper: Image Synthesis with a Single (Robust) Classifier", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #68", "newsletter_category": "Adversarial examples"}
{"id": "8986bded724db4c382024ac101f2b4a6", "title": "Natural Adversarial Examples", "url": "http://arxiv.org/abs/1907.07174", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Dan Hendrycks", "Kevin Zhao", "Steven Basart", "Jacob Steinhardt", "Dawn Song"], "summaries": ["This paper introduces a new dataset to evaluate the worst-case performance of image classifiers. ImageNet-A consists of unmodified natural images that are consistently misclassified by popular neural-network architectures trained on ImageNet. Based on some concrete misclassifications, like a dragonfly on a yellow plastic shovel being classified as a banana, the authors hypothesize that current classifiers rely too much on color, texture and background cues. Neither classical adversarial training nor training on a version of ImageNet designed to reduce the reliance on texture helps a lot, but modifying the network architecture can increase the accuracy on ImageNet-A from around 5% to 15%."], "venue": "arXiv", "opinion": "This seems to show that current methods and/or training sets for image classification are still far away from allowing for robust generalization, even in naturally occuring scenarios. While not too surprising, the results might convince those who have heavily discounted the evidence provided by classical adversarial examples due to the reliance on artificial perturbations.\n\n**Rohin's opinion:** I'm particularly excited about this dataset because it seems like a significantly better way to evaluate new techniques for robustness: it's much closer to a \"real world\" test of the technique (as opposed to e.g. introducing an artificial perturbation that classifiers are expected to be robust to).", "highlight": false, "read_more": "", "summarizer": "Flo Dorner", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #64", "newsletter_category": "Adversarial examples"}
{"id": "f1d0ab5dbdf095afbe24821d6b4e7bf4", "title": "Testing Robustness Against Unforeseen Adversaries", "url": "https://openai.com/blog/testing-robustness/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Daniel Kang", "Yi Sun", "Dan Hendrycks", "Tom Brown", "Jacob Steinhardt"], "summaries": ["This paper demonstrates that adversarially training on just one type or family of adversarial distortions fails to provide general robustness against different kinds of possible distortions. In particular, they show that adversarial training against L-p norm ball distortions transfer reasonably well to other L-p norm ball attacks, but provides little value, and can in fact reduce robustness, when evaluated on other families of attacks, such as adversarially-chosen Gabor noise, \"snow\" noise, or JPEG compression. In addition to proposing these new perturbation types beyond the typical L-p norm ball, the paper also provides a \"calibration table\" with epsilon sizes they judge to be comparable between attack types, by evaluating them according to how much they reduce accuracy on either a defended or undefended model. (Because attacks are so different in approach, a given numerical value of epsilon won't correspond to the same \"strength\" of attack across methods) "], "venue": "arXiv", "opinion": "I didn't personally find this paper hugely surprising, given the past pattern of whack-a-mole between attack and defense suggesting that defenses tend to be limited in their scope, and don't confer general robustness. That said, I appreciate how centrally the authors lay this lack of transfer as a problem, and the effort they put in to generating new attack types and calibrating them so they can be meaningfully compared to existing L-p norm ball ones. \n\n**Rohin's opinion:** I see this paper as calling for adversarial examples researchers to stop focusing just on the L-p norm ball, in line with <@one of the responses@>(@Response: Adversarial Example Researchers Need to Expand What is Meant by ‘Robustness’@) to the last newsletter's highlight, <@Adversarial Examples Are Not Bugs, They Are Features@>.", "highlight": false, "read_more": "Testing Robustness Against Unforeseen Adversaries", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #63", "newsletter_category": "Adversarial examples"}
{"id": "ef6e8174ec9d5ce3e266b58e0e203704", "title": "Towards the first adversarially robust neural network model on MNIST", "url": "http://arxiv.org/abs/1805.09190", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Lukas Schott", "Jonas Rauber", "Matthias Bethge", "Wieland Brendel"], "summaries": ["This recent pre-print claims to make MNIST classifiers more adversarially robust to different L-p perturbations, while the previous paper only worked for L-infinity perturbations. The basic building block in their approach is a variational autoencoder, one for each MNIST class. Each variational autoencoder computes the likelihood of the input sample, and this information is used for classification. The authors also demonstrate that binarizing MNIST images can serve as strong defense against some perturbations. They evaluate against strong attacks and not just the fast gradient sign method."], "venue": "arXiv", "opinion": "This paper has generated considerable excitement among my peers. Yet inference time with this approach is approximately 100,000 times that of normal inference (10^4 samples per VAE * 10 VAEs). Also unusual is that the L-infinity \"latent descent attack\" result is missing. It is not clear why training a single VAE does not work. Also, could results improve by adversarially training the VAEs? As with all defense papers, it is prudent to wait for third-party reimplementations and analysis, but the range of attacks they consider is certainly thorough.", "highlight": false, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #27", "newsletter_category": "Adversarial examples"}
{"id": "f094e38918f07bd286a47d68c90f5053", "title": "Pixels still beat text: Attacking the OpenAI CLIP model with text patches and adversarial pixel perturbations", "url": "https://stanislavfort.github.io/2021/03/05/OpenAI_CLIP_stickers_and_adversarial_examples.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Stanislav Fort"], "summaries": ["Typographic adversarial examples demonstrate that [CLIP](https://openai.com/blog/clip/) can be significantly affected by text in an image. How powerfully does text affect CLIP, and how does it compare to more traditional attack vectors like imperceptible pixel changes? This blog post seeks to find out, through some simple tests on CIFAR-10.\n\nFirst, to see how much text can affect CLIP’s performance, we add a handwritten label to each of the test images that spells out the class (so a picture of a deer would have overlaid a picture of a handwritten sticker of the word “deer”). This boosts CLIP’s zero-shot performance on CIFAR-10 from 87.37% to literally 100% (not a single mistake), showing that text really is quite powerful in affecting CLIP’s behavior.\n\nYou might think that since text can boost performance so powerfully, CLIP would at least be more robust against pixel-level attacks when the sticker is present. However, this does _not_ seem to be true: even when there is a sticker with the true class, a pixel-level attack works quite well (and is still imperceptible).\n\nThis suggests that while the text is powerful, pixel-level changes are more powerful still. To test this, we can try adding another, new sticker (with the same label). It turns out that this _does_ successfully switch the label back to the original correct label. In general, you can keep iterating the text sticker attack and the pixel-change attack, and the attacks keep working, with CLIP’s classification being determined by whichever attack was performed most recently.\n\nYou might think that the model's ability to read text is fairly brittle, and that's what's being changed by pixel-level attacks, hence adding a fresh piece of text would switch it back. Unfortunately, it doesn't seem like anything quite that simple is going on. The author conducts several experiments where only the sticker can be adversarially perturbed, or everything but the sticker can be adversarially perturbed, or where the copy-pasted sticker is one that was previously adversarially perturbed; unfortunately the results don't seem to tell a clean story."], "venue": "Author's Website", "opinion": "This is quite an interesting phenomenon, and I'm pretty curious to understand what's going on here. Maybe that's an interesting new challenge for people interested in Circuits-style interpretability? My pretty uneducated guess is that it seems difficult enough to actually stress our techniques, but not so difficult that we can't make any progress.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #142", "newsletter_category": "Adversarial examples"}
{"id": "99b0bdd0176bdc2e541a787cd3fd96fb", "title": "Adversarial examples for the OpenAI CLIP in its zero-shot classification regime and their semantic generalization", "url": "https://stanislavfort.github.io/2021/01/12/OpenAI_CLIP_adversarial_examples.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Stanislav Fort"], "summaries": ["[CLIP](https://openai.com/blog/clip/) is a model that was trained on a vast soup of image-caption data, and as a result can perform zero-shot image classification (for example, it gets 87% accuracy on CIFAR-10 out of the box). Does it also have adversarial examples within the image classification regime? This post shows that the answer is yes, and in fact these adversarial examples are easy to find.\n\nMore interestingly though, these adversarial examples persist if you change the labels in a semantically meaningful way. For example, if you take an image X that is correctly classified as a cat and imperceptibly modify it to Y which is now classified as a dog, if you change the class names to “kitty” and “hound”, then the same X will now be classified as a kitty while the same Y will be classified as a hound. This even works (though not as well) for labels like “domesticated animal which barks and is best friend”. The author takes this as evidence that the adversarial image actually looks like the adversarial class to the neural net, rather than being a peculiar consequence of the specific label."], "venue": "Author's Website", "opinion": "This seems like further validation of the broad view put forth in <@Adversarial Examples Are Not Bugs, They Are Features@>.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #136", "newsletter_category": "Adversarial examples"}
{"id": "b337b9965ce3120e7d71cdff07ae846e", "title": "AXRP 1: Adversarial Policies", "url": "https://axrp.net/episode/2020/12/11/episode-1-adversarial-policies-adam-gleave.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Daniel Filan and Adam Gleave"], "summaries": ["The first part of this podcast describes the <@adversarial policies paper@>(@Adversarial Policies: Attacking Deep Reinforcement Learning@); see the summary for details about that. (As a reminder, this is the work which trained an adversarial goalie, that by spasming in a random-looking manner, causes the kicker to completely fail to even kick the ball towards the goal.)\n\nLet’s move on to the more speculative thoughts discussed in this podcast (and not in the paper). One interesting thing that the paper highlights is that the space of policies is very non-transitive: it is possible, perhaps even common, that policy A beats policy B, which beats policy C, which beats policy A. This is clear if you allow arbitrary policies -- for example, the policy “play well, unless you see your opponent make a particular gesture; if you see that gesture then automatically lose” will beat many policies, but can be beaten by a very weak policy that knows to make the particular gesture. You might have thought that in practice, the policies produced by deep RL would exclude these weird possibilities, and so could be ranked by some notion of “competence”, where more competent agents would usually beat less competent agents (implying transitivity). The results of this paper suggest that isn’t the case.\n\nThe conversation then shifts to the research community and how to choose what research to do. The motivation behind this work was to improve the evaluation of policies learned by deep RL: while the freedom from the lack of theoretical guarantees (as in control theory) has allowed RL to make progress on previously challenging problems, there hasn’t been a corresponding uptick in engineering-based guarantees, such as testing. The work has had a fairly positive reception in the AI community, though unfortunately it seems this is probably due in part to its flashy results. Other papers that Adam is equally excited about have not had as good a reception."], "venue": "AXRP Podcast", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #130", "newsletter_category": "Adversarial examples"}
{"id": "b89880888b9e2774abc3dbcdc2f2a5e1", "title": "Theoretically Principled Trade-off between Robustness and Accuracy", "url": "http://arxiv.org/abs/1901.08573", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P. Xing", "Laurent El Ghaoui", "Michael I. Jordan"], "summaries": ["This paper won the NeurIPS 2018 Adversarial Vision Challenge. For robustness on CIFAR-10 against l_infinity perturbations (epsilon = 8/255), it improves over the Madry et al. adversarial training baseline from 45.8% to 56.61%, making it [almost](https://arxiv.org/pdf/1901.09960.pdf) state-of-the-art. However, it does decrease clean set accuracy by a few percent, despite using a deeper network than Madry et al. Their technique has many similarities to Adversarial Logit Pairing, which is not cited, because they encourage the network to embed a clean example and an adversarial perturbation of a clean example similarly. I now describe Adversarial Logit Pairing. During training, ALP teaches the network to classify clean and adversarially perturbed points; added to that loss is an l_2 loss between the logit embeddings of clean examples and the logits of the corresponding adversarial examples. In contrast, in place of the l_2 loss from ALP, this paper uses the KL divergence from the softmax of the clean example to the softmax of an adversarial example. Yet the softmax distributions are given a high temperature, so this loss is not much different from an l_2 loss between logits. The other main change in this paper is that adversarial examples are generated by trying to maximize the aforementioned KL divergence between clean and adversarial pairs, not by trying to maximize the classification log loss as in ALP. This paper then shows that some further engineering to adversarial logit pairing can improve adversarial robustness on CIFAR-10."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #44", "newsletter_category": "Adversarial examples"}
{"id": "e869ed4d9f48f5da70cca0bdcb668ee6", "title": "Adversarial Vision Challenge", "url": "http://arxiv.org/abs/1808.01976", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Wieland Brendel", "Jonas Rauber", "Alexey Kurakin", "Nicolas Papernot", "Veliqi", "Marcel Salathé", "Sharada P. Mohanty", "Matthias Bethge"], "summaries": ["There will be a competition on adversarial examples for vision at NIPS 2018."], "venue": "NIPS 2018", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #19", "newsletter_category": "Adversarial examples"}
{"id": "cda17d2ffece0392abe18b82a8da8295", "title": "Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations", "url": "http://arxiv.org/abs/1807.01697", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Dan Hendrycks", "Thomas G. Dietterich"], "summaries": ["See [Import AI](https://jack-clark.net/2018/07/09/import-ai-102-testing-ai-robustness-with-imagenet-c-militarycivil-ai-development-in-china-and-how-teamwork-lets-ai-beat-humans/)."], "venue": "ICLR 2019", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "Adversarial examples"}
{"id": "9123e6d80e6ed3a54ee452691f619b43", "title": "Avoiding textual adversarial examples", "url": "https://nitter.cc/NoaNabeshima/status/1368662246885265409", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Noa Nabeshima"], "summaries": ["Last week I speculated that CLIP might \"know\" that a textual adversarial example is a \"picture of an apple with a piece of paper saying an iPod on it\" and the zero-shot classification prompt is preventing it from demonstrating this knowledge. Gwern Branwen [commented](https://www.alignmentforum.org/posts/JGByt8TrxREo4twaw/an-142-the-quest-to-understand-a-network-well-enough-to?commentId=keW4DuE7G4SZn9h2r) to link me to this Twitter thread as well as this [YouTube video](https://youtu.be/Rk3MBx20z24) in which better prompt engineering significantly reduces these textual adversarial examples, demonstrating that CLIP does \"know\" that it's looking at an apple with a piece of paper on it."], "venue": "Twitter", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #143", "newsletter_category": "Adversarial examples"}
{"id": "baafe25523c51f73e1e9a455b494bf49", "title": "Finite Factored Sets sequence", "url": "https://www.alignmentforum.org/s/kxs3eeEti9ouwWFzr", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Scott Garrabrant"], "summaries": ["This newsletter is a combined summary + opinion for the [Finite Factored Sets sequence](https://www.alignmentforum.org/s/kxs3eeEti9ouwWFzr) by Scott Garrabrant. I (Rohin) have taken a lot more liberty than I usually do with the interpretation of the results; Scott may or may not agree with these interpretations.\n\n## Motivation\n\nOne view on the importance of deep learning is that it allows you to automatically _learn_ the features that are relevant for some task of interest. Instead of having to handcraft features using domain knowledge, we simply point a neural net at an appropriate dataset and it figures out the right features. Arguably this is the _majority_ of what makes up intelligent cognition; in humans it seems very analogous to [System 1](https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow), which we use for most decisions and actions. We are also able to infer causal relations between the resulting features.\n\nUnfortunately, [existing models](https://en.wikipedia.org/wiki/The_Book_of_Why) of causal inference don’t model these learned features -- they instead assume that the features are already given to you. Finite Factored Sets (FFS) provide a theory which can talk directly about different possible ways to featurize the space of outcomes and still allows you to perform causal inference. This sequence develops this underlying theory and demonstrates a few examples of using finite factored sets to perform causal inference given only observational data.\n\nAnother application is to <@embedded agency@>(@Embedded Agents@): we would like to think of “agency” as a way to featurize the world into an “agent” feature and an “environment” feature, that together interact to determine the world. In <@Cartesian Frames@>, we worked with a function A × E → W, where pairs of (agent, environment) together determined the world. In the finite factored set regime, we’ll think of A and E as features, the space S = A × E as the set of possible feature vectors, and S → W as the mapping from feature vectors to actual world states.\n\n## What is a finite factored set?\n\nGeneralizing this idea to apply more broadly, we will assume that there is a set of possible worlds Ω, a set S of arbitrary elements (which we will eventually interpret as feature vectors), and a function f : S → Ω that maps feature vectors to world states. Our goal is to have some notion of “features” of elements of S. Normally, when working with sets, we identify a feature value with the set of elements that have that value. For example, we can identify “red” as the set of all red objects, and in [some versions of mathematics](https://en.wikipedia.org/wiki/Set-theoretic_definition_of_natural_numbers#Frege_and_Russell), we define “2” to be the class of all sets that have exactly two elements. So, we define a feature to be a _partition_ of S into subsets, where each subset corresponds to one of the possible feature values. 
We can also interpret a feature as a _question_ about items in S, and the values as possible _answers_ to that question; I’ll be using that terminology going forward.\n\nA finite factored set is then given by (S, B), where B is a set of **factors** (questions), such that if you choose a particular answer to every question, that uniquely determines an element in S (and vice versa). We’ll put aside the set of possible worlds Ω; for now we’re just going to focus on the theory of these (S, B) pairs.\n\nLet’s look at a contrived example. Consider S = {chai, caesar salad, lasagna, lava cake, sprite, strawberry sorbet}. Here are some possible questions for this S:\n\n- **FoodType**: Possible answers are Drink = {chai, sprite}, Dessert = {lava cake, strawberry sorbet}, Savory = {caesar salad, lasagna}\n- **Temperature**: Possible answers are Hot = {chai, lava cake, lasagna} and Cold = {sprite, strawberry sorbet, caesar salad}.\n- **StartingLetter**: Possible answers are “C” = {chai, caesar salad}, “L” = {lasagna, lava cake}, and “S” = {sprite, strawberry sorbet}.\n- **NumberOfWords**: Possible answers are “1” = {chai, lasagna, sprite} and “2” = {caesar salad, lava cake, strawberry sorbet}.\n\nGiven these questions, we could factor S into {FoodType, Temperature}, or {StartingLetter, NumberOfWords}. We _cannot_ factor it into, say, {StartingLetter, Temperature}, because if we set StartingLetter = L and Temperature = Hot, that does not uniquely determine an element in S (it could be either lava cake or lasagna).\n\nWhich of the two factorizations should we use? We’re not going to delve too deeply into this question, but you could imagine that if you were interested in questions like “does this need to be put in a glass” you might be more interested in the {FoodType, Temperature} factorization.\n\nJust to appreciate the castle of abstractions we’ve built, here’s the finite factored set F with the factorization {FoodType, Temperature}:\n\nF = ({chai, caesar salad, lasagna, lava cake, sprite, strawberry sorbet}, {{{chai, sprite}, {lava cake, strawberry sorbet}, {caesar salad, lasagna}}, {{chai, lava cake, lasagna}, {sprite, strawberry sorbet, caesar salad}}})\n\nTo keep it all straight, just remember: a **factorization** B is a set of **questions** (factors, partitions) each of which is a set of **possible answers** (parts), each of which is a set of elements in S.\n\n## A brief interlude\n\nSome objections you might have about stuff we’ve talked about so far:\n\n**Q.** Why do we bother with the set S -- couldn’t we just have the set of questions B, and then talk about answer vectors of the form (a1, a2, … aN)?\n\n**A.** You could in theory do this, as there is a bijection between S and the Cartesian product of the sets in B. However, the problem with this framing is that it is hard to talk about other derived features. For example, the question “what is the value of B1+B2” has no easy description in this framing. When we instead directly work with S, the B1+B2 question is just another partition of S, just like B1 or B2 individually.\n\n**Q.** Why does f map S to Ω? Doesn’t this mean that a feature vector uniquely determines a world state, whereas it’s usually the opposite in machine learning?\n\n**A.** This is true, but here the idea is that the set of features together captures _all_ the information within the setting we are considering. 
You could think of feature vectors in deep learning as only capturing an important subset of all of the features (which we’d have to do in practice since we only have bounded computation), and those features are not enough to determine world states.\n\n## Orthogonality in Finite Factored Sets\n\nWe’re eventually going to use finite factored sets similarly to Pearlian causal models: to infer which questions (random variables) are conditionally independent of each other. However, our analysis will apply to arbitrary questions, unlike Pearlian models, which can only talk about independence between the predefined variables from which the causal model is built.\n\nJust like Pearl, we will talk about _conditioning on evidence_: given evidence e, a subset of S, we can “observe” that we are within e. In the formal setup, this looks like erasing all elements that are not in e from all questions, answers, factors, etc.\n\nYou might think that \"factors\" are analogous to nodes or random variables in a Pearlian model. However, this isn't right, since we’re going to assume that all of our factors are independent from each other, which is usually not the case in a Pearlian model. For example, you might have a Pearlian model with two binary variables, e.g. “Variable Rain causes Variable Wet Sidewalk”; these are obviously not independent. The corresponding finite factored set would have _three_ factors: “did it rain?”, “if it rained did the sidewalk get wet?” and “if it didn’t rain did the sidewalk get wet?” This way all three factors can be independent of each other. We will still be able to ask whether Wet Sidewalk is independent of Rain, since Wet Sidewalk is just another question about the set S -- it just isn’t one of the underlying factors anymore.\n\nThe point of this independence is to allow us to reason about _counterfactuals_: it should be possible to say “imagine the element s, except with underlying factor b2 changed to have value v”. As a result, our definitions will include clauses that say “and make sure we can still take counterfactuals”. For example, let’s talk about the “history” of a question X, which for now you can think of as the “factors relevant to X”. The _history_ of X given e is the smallest set of factors such that:\n\n1) if you know the answers to these factors, then you can infer the answer to X, and\n2) any factors that are _not_ in the history are independent of X. As suggested above, we can think of this as being about counterfactuals -- we’re saying that for any such factor, we can counterfactually change its answer and this will remain consistent with the evidence e.\n\n(A technicality on the second point: we’ll never be able to counterfactually change a factor to a value that is never found in the evidence; this is fine and doesn’t prevent things from being independent.)\n\nTime for an example! Consider the set S = {000, 001, 010, 011, 100, 101, 110, 111} and the factorization {X, Y, Z}, where X is the question “what is the first bit”, Y is the question “what is the second bit”, and Z is the question “what is the third bit”. Consider the question Q = “when interpreted as a binary number, is the number >= 2?” In this case, the history of Q given no evidence is {X, Y} because you can determine the answer to Q with the combination of X and Y. (You can still counterfact on anything, since there is no evidence to be inconsistent with.)\n\nLet’s consider an example with evidence. Suppose we observe that all the bits are equal, that is, e = {000, 111}. 
Now, what is the history of X? If there wasn’t any evidence, the history would just be {X}; you only need to know X in order to determine the value of X. However, suppose we learned that X = 0, implying that our element is 000. We can’t counterfact on Y or Z, since that would produce 010 or 001, both of which are inconsistent with the evidence. So given this evidence, the history of X is actually {X, Y, Z}, i.e. the entire set of factors! If we’d only observed that the first two bits were equal, so e = {000, 001, 110, 111}, then we _could_ counterfact on Z and the history of X would be {X, Y}.\n\n(Should you want more examples, here are two [relevant](https://www.alignmentforum.org/posts/qGjCt4Xq83MBaygPx/a-simple-example-of-conditional-orthogonality-in-finite) [posts](https://www.alignmentforum.org/posts/GFGNwCwkffBevyXR2/a-second-example-of-conditional-orthogonality-in-finite).)\n\nGiven this notion of “history”, it is easy to define orthogonality: X is orthogonal to Y given evidence e if the history of X given e has no overlap with the history of Y given e. Intuitively, this means that the factors relevant to X are completely separate from those relevant to Y, and so there cannot be any entanglement between X and Y. For a _question_ Z, we say that X is orthogonal to Y given Z if X is orthogonal to Y given z, for every possible answer z in Z.\n\nNow that we have defined orthogonality, we can state the _Fundamental Theorem of Finite Factored Sets_. Given some questions X, Y, and Z about a finite factored set F, X is orthogonal to Y given Z if and only if in every probability distribution on F, X is conditionally independent of Y given Z, that is, P(X, Y | Z) = P(X | Z) * P(Y | Z).\n\n(I haven’t told you how you put a probability distribution on F. It’s exactly what you would think -- you assign a probability to every possible answer in every factor, and then the probability of an individual element is defined to be the product of the probabilities of its answers across all the factors.)\n\n(I also haven’t given you any intuition about why this theorem holds. Unfortunately I don’t have great intuition for this; the proof has multiple non-trivial steps, each of which I locally understand and have intuition for... but globally it’s just a sequence of non-trivial steps to me. Here’s an attempt, which isn’t very good: we specifically defined orthogonality to capture *all* the relevant information for a question, in particular by having that second condition requiring that we be able to counterfact on other factors, and so it intuitively makes sense that if the relevant information doesn’t overlap, then there can’t be a way for the probability distribution to have interactions between the variables.)\n\nThe fundamental theorem is in some sense a _justification_ for calling the property “orthogonality” -- if we determine just by studying the structure of the finite factored set that X is orthogonal to Y given Z, then we know that this implies conditional independence in the “true” probability distribution, whatever it ends up being. Pearlian models have a similar theorem, where the graphical property of d-separation implies conditional independence.\n\n## Foundations of causality and time\n\nYou might be wondering why we have been calling the minimal set of relevant factors “history”. The core philosophical idea is that, if you have the right factorization, then “time” or “causality” can be thought of as flowing in the direction of larger histories. 
Specifically, we say that X is “before” Y if the history of X is a subset of the history of Y. (We then call it “history” because every factor in the history of X will be “before” X by this definition.)\n\nOne intuition pump for this is that in physics, if an event A causes an event B, then the past light cone of A is a subset of the past light cone of B, and A happens before B in every possible reference frame.\n\nBut perhaps the best argument for thinking of this as causality is that we can actually use this notion of “time” or “causality” to perform causal inference. Before I talk about that, let’s see what this looks like in Pearlian models.\n\nStrictly speaking, in Pearlian models, the edges do not _have_ to correspond to causality: formally they only represent conditional independence assumptions on a probability distribution. However, consider the following Cool Fact: for some Pearlian models, if you have observational data that is generated from that model, you can recover the exact graphical structure of the generating model just by looking at the observational data. In this case, you really are inferring cause-and-effect relationships from observational data! (In the general case where the data is generated by an arbitrary model, you can recover a lot of the structure of the model but be uncertain about the direction of some of the edges, so you are still doing _some_ causal inference from observational data.)\n\nWe will do something similar: we’ll use our notion of “before” to perform causal inference given observational data.\n\n## Temporal inference: the three dependent bits\n\nYou are given statistical (i.e. observational) data for three bits: X, Y and Z. You quickly notice that it is always the case that Z = X xor Y (which implies that X = Y xor Z, and Y = Z xor X). Clearly, there are only two independent bits here and the other bit is derived as the xor of the two independent bits. From the raw statistical data, can you tell which bits are the independent ones, and which one is the derived one, thus inferring which one was _caused_ by the other two? It turns out that you can!\n\nSpecifically, you want to look for which two bits are _orthogonal_ to each other, that is, you want to check whether we approximately have P(X, Y) = P(X) P(Y) (and similarly for other possible pairings). In the world where two of the bits were generated by a biased coin, you will find exactly one pair that is orthogonal in this way. (The case where the bits are generated by a fair coin is special; the argument won’t work there, but it’s in some sense “accidental” and happens because the probability of 0.5 is very special.)\n\nLet’s suppose that the orthogonal pair was (X, Z). In this case, we can _prove_ that in _every_ finite factored set that models this situation, X and Z come “before” Y, i.e. their histories are strict subsets of Y’s history. Thus, we’ve inferred causality using only observational data! (And unlike with Pearlian models, we did this in a case where one “variable” was a deterministic function of two other “variables”, which is a type of situation that Pearlian models struggle to handle.)\n\n## Future work\n\nRemember that motivation section, a couple thousand words ago? We talked about how we can do causal inference with learned featurizations and apply it to embedded agency. Well, we actually haven’t done that yet, beyond a few examples of causal inference (as in the example above). 
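As a concrete sketch of that check on observational data (the sampling process, sample size, and the rule of picking the pair with the smallest deviation are illustrative assumptions, not taken from the sequence):

```python
# Illustrative check for the three-bit example: find the pair closest to independent.
import itertools
import random

def sample(n=100_000, p=0.8):
    """Two independent biased coins X and Y; Z is derived as X xor Y."""
    data = []
    for _ in range(n):
        x, y = int(random.random() < p), int(random.random() < p)
        data.append((x, y, x ^ y))
    return data

def dependence(data, i, j):
    """Max deviation |P(a, b) - P(a) P(b)| over the values of bits i and j."""
    n = len(data)
    dev = 0.0
    for a, b in itertools.product([0, 1], repeat=2):
        p_ab = sum(1 for s in data if s[i] == a and s[j] == b) / n
        p_a = sum(1 for s in data if s[i] == a) / n
        p_b = sum(1 for s in data if s[j] == b) / n
        dev = max(dev, abs(p_ab - p_a * p_b))
    return dev

data = sample()
scores = {pair: dependence(data, *pair) for pair in [(0, 1), (0, 2), (1, 2)]}
# The pair with the smallest deviation is (approximately) orthogonal; the remaining
# bit is inferred to be the derived one, i.e. to come "after" the other two.
print(min(scores, key=scores.get))  # expect (0, 1): X and Y are independent, Z is derived
```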
There is a lot of future work to be done in applying it to the case that motivated it in the first place. The author wrote up potential future work [here](https://www.alignmentforum.org/s/kxs3eeEti9ouwWFzr/p/yGFiw23pJ32obgLbw), which has categories for both causal inference and embedded agency, and also adds a third one: generalizing the theory to infinite sets. If you are interested in this framework, there are many avenues for pushing it forward.\n"], "venue": "Alignment Forum", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #163", "newsletter_category": "Agent foundations"}
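(A minimal sketch of the temporal-inference check described in the summary above, assuming data generated from two independent biased coins with the third bit derived as their xor; the code and all names in it are mine, not from the sequence. It looks for the one pair whose joint distribution approximately factors, and infers that the remaining bit is the derived one.)

```python
import random
from itertools import combinations

random.seed(0)

def sample_world(n=100_000, p_x=0.7, p_y=0.3):
    """X and Y are independent biased coins; Z is derived as X xor Y."""
    data = []
    for _ in range(n):
        x = int(random.random() < p_x)
        y = int(random.random() < p_y)
        data.append({"X": x, "Y": y, "Z": x ^ y})
    return data

def dependence(data, a, b):
    """Max deviation |P(a,b) - P(a)P(b)| over value pairs; near 0 means orthogonal."""
    n = len(data)
    dev = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_joint = sum(1 for d in data if d[a] == va and d[b] == vb) / n
            p_a = sum(1 for d in data if d[a] == va) / n
            p_b = sum(1 for d in data if d[b] == vb) / n
            dev = max(dev, abs(p_joint - p_a * p_b))
    return dev

data = sample_world()
for a, b in combinations("XYZ", 2):
    print(a, b, round(dependence(data, a, b), 3))
# Only the (X, Y) pair comes out approximately independent, so in this toy setup
# we infer that X and Y come "before" Z, i.e. Z is the derived bit.
```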
{"id": "a91ece1860aa50c63e1e7c217eaed1c6", "title": "Infra-Bayesianism sequence", "url": "https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Diffractor and Vanessa Kosoy"], "summaries": ["I have finally understood this sequence enough to write a summary about it, thanks to [AXRP Episode 5](https://www.alignmentforum.org/posts/FkMPXiomjGBjMfosg/axrp-episode-5-infra-bayesianism-with-vanessa-kosoy). Think of this as a combined summary + highlight of the sequence and the podcast episode.\n\nThe central problem of <@embedded agency@>(@Embedded Agents@) is that there is no clean separation between an agent and its environment: rather, the agent is _embedded_ in its environment, and so when reasoning about the environment it is reasoning about an entity that is “bigger” than it (and in particular, an entity that _contains_ it). We don’t have a good formalism that can account for this sort of reasoning. The standard Bayesian account requires the agent to have a space of precise hypotheses for the environment, but then the true hypothesis would also include a precise model of the agent itself, and it is usually not possible to have an agent contain a perfect model of itself.\n\nA natural idea is to reduce the precision of hypotheses. Rather than requiring a hypothesis to assign a probability to every possible sequence of bits, we now allow the hypotheses to say “I have no clue about this aspect of this part of the environment, but I can assign probabilities to the rest of the environment”. The agent can then limit itself to hypotheses that don’t make predictions about the part of the environment that corresponds to the agent, but do make predictions about other parts of the environment.\n\nAnother way to think about it is that it allows you to start from the default of “I know nothing about the environment”, and then add in details that you do know to get an object that encodes the easily computable properties of the environment you can exploit, while not making any commitments about the rest of the environment.\n\nOf course, so far this is just the idea of using [Knightian uncertainty](https://en.wikipedia.org/wiki/Knightian_uncertainty). The contribution of infra-Bayesianism is to show how to formally specify a decision procedure that uses Knightian uncertainty while still satisfying many properties we would like a decision procedure to satisfy. You can thus think of it as an extension of the standard Bayesian account of decision-making to the setting in which the agent cannot represent the true environment as a hypothesis over which it can reason.\n\nImagine that, instead of having a probability distribution over hypotheses, we instead have two “levels”: first are all the properties we have Knightian uncertainty over, and then are all the properties we can reason about. For example, imagine that the environment is an infinite sequence of bits and we want to say that all the even bits come from flips of a possibly biased coin, but we know nothing about the odd coin flips. Then, at the top level, we have a separate branch for each possible setting of the odd coin flips. At the second level, we have a separate branch for each possible bias of the coin. 
At the leaves, we have the hypothesis “the odd bits are as set by the top level, and the even bits are generated from coin flips with the bias set by the second level”.\n\n(Yes, there are lots of infinite quantities in this example, so you couldn’t implement it the way I’m describing it here. An actual implementation would not represent the top level explicitly and would use computable functions to represent the bottom level. We’re not going to worry about this for now.)\n\nIf we were using orthodox Bayesianism, we would put a probability distribution over the top level, and a probability distribution over the bottom level. You could then multiply that out to get a single probability distribution over the hypotheses, which is why we don’t do this separation into two levels in orthodox Bayesianism. (Also, just to reiterate, the _whole point_ is that we can’t put a probability distribution at the top level, since that implies e.g. making precise predictions about an environment that is bigger than you are.)\n\nInfra-Bayesianism says, “what if we just… don't put a probability distribution over the top level?” Instead, we have a set of probability distributions over hypotheses, and Knightian uncertainty over which distribution in this set is the right one. A common suggestion for Knightian uncertainty is to do _worst-case_ reasoning, so that’s what we’ll do at the top level. Lots of problems immediately crop up, but it turns out we can fix them.\n\nFirst, let’s say your top level consists of two distributions over hypotheses, A and B. You then observe some evidence E, which A thought was 50% likely and B thought was 1% likely. Intuitively, you want to say that this makes A “more likely” relative to B than we previously thought. But how can you do this if you have Knightian uncertainty and are just planning to do worst-case reasoning over A and B? The solution here is to work with _unnormalized_ probability distributions at the second level. Then, in the case above, we can just scale the “probabilities” in both A and B by the likelihood assigned to E. We _don’t_ normalize A and B after doing this scaling.\n\nBut now what exactly do the numbers mean if we’re going to leave these distributions unnormalized? Regular probabilities only really make sense if they sum to 1. We can take a different view on what a “probability distribution” is -- instead of treating it as an object that tells you how _likely_ various hypotheses are, treat it as an object that tells you how much we _care_ about particular hypotheses. (See [related](https://www.lesswrong.com/posts/J7Gkz8aDxxSEQKXTN/what-are-probabilities-anyway) <@posts@>(@An Orthodox Case Against Utility Functions@).) So scaling down the “probability” of a hypothesis just means that we care less about what that hypothesis “wants” us to do.\n\nThis would be enough if we were going to take an average over A and B to make our final decision. However, our plan is to do worst-case reasoning at the top level. This interacts horribly with our current proposal: when we scale hypotheses in A by 0.5 on average, and hypotheses in B by 0.01 on average, the minimization at the top level is going to place _more_ weight on B, since B is now _more_ likely to be the worst case. Surely this is wrong?\n\nWhat’s happening here is that B gets most of its expected utility in worlds where we observe different evidence, but the worst-case reasoning at the top level doesn’t take this into account. 
Before we update, since B assigned 1% to E, the expected utility of B is given by 0.99 * expected utility given not-E + 0.01 * expected utility given E. After the update, the second part remains but the first part disappears, which makes the worst-case reasoning wonky. So what we do is we keep track of the first part as well and make sure that our worst-case reasoning takes it into account.\n\nThis gives us **infradistributions**: sets of (m, b) pairs, where m is an unnormalized probability distribution and b corresponds to “the value we would have gotten if we had seen different evidence”. When we observe some evidence E, the hypotheses within m are scaled by the likelihood they assign to E, and b is updated to include the value we would have gotten in the world where we saw anything other than E. Note that it is important to specify the utility function for this to make sense, as otherwise it is not clear how to update b. To compute utilities for decision-making, we do worst-case reasoning over the (m, b) pairs, where we use standard expected values within each m. We can prove that this update rule satisfies _dynamic consistency_: if initially you believe “if I see X, then I want to do Y”, then after seeing X, you believe “I want to do Y”.\n\nSo what can we do with infradistributions? Our original motivation was to talk about embedded agency, so a natural place to start is with decision-theory problems in which the environment contains a perfect predictor of the agent, such as in Newcomb’s problem. Unfortunately, we can’t immediately write this down with infradistributions because we have no way of (easily) formally representing “the environment perfectly predicts my actions”. One trick we can use is to consider hypotheses in which the environment just spits out some action, without the constraint that it must match the agent’s action. We then modify the utility function to give infinite utility when the prediction is incorrect. Since we do worst-case reasoning, the agent will effectively act as though this situation is impossible. With this trick, infra-Bayesianism performs similarly to UDT on a variety of challenging decision problems."], "venue": "Alignment Forum", "opinion": "This seems pretty cool, though I don’t understand it that well yet. While I don’t yet feel like I have a better philosophical understanding of embedded agency (or its subproblems), I do think this is significant progress along that path.\n\nIn particular, one thing that feels a bit odd to me is the choice of worst-case reasoning for the top level -- I don’t really see anything that _forces_ that to be the case. As far as I can tell, we could get all the same results by using best-case reasoning instead (assuming we modified the other aspects appropriately). The obvious justification for worst-case reasoning is that it is a form of risk aversion, but it doesn’t feel like that is really sufficient -- risk aversion in humans is pretty different from literal worst-case reasoning, and also none of the results in the post seem to depend on risk aversion.\n\nI wonder whether the important thing is just that we don’t do expected value reasoning at the top level, and there are in fact a wide variety of other kinds of decision rules that we could use that could all work. If so, it seems interesting to characterize what makes some rules work while others don’t. 
I suspect that would be a more philosophically satisfying answer to “how should agents reason about environments that are bigger than them”.", "highlight": true, "read_more": "AXRP Episode 5 - Infra-Bayesianism", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #143", "newsletter_category": "Agent foundations"}
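(The following is a toy numerical sketch, mine rather than the authors', of the (m, b) bookkeeping as described in the summary above: weights are rescaled by the likelihood of the evidence, the value of the not-E branch is banked into b, and the final value is a worst case over the pairs. It fixes a single policy and uses two point "distributions" A and B, so it only illustrates the update arithmetic, not the real formalism.)

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    p_evidence: float           # probability this hypothesis assigns to the evidence E
    utility_given_E: float      # expected utility of our fixed policy if E holds
    utility_given_not_E: float  # expected utility of our fixed policy if E does not hold

def update(pairs, hypotheses):
    """Update the (m, b) pairs on observing E: rescale each weight by P(E | h)
    and bank the expected utility of the not-E branch into b."""
    updated = []
    for m, b in pairs:
        new_m = {h: w * hypotheses[h].p_evidence for h, w in m.items()}
        new_b = b + sum(w * (1 - hypotheses[h].p_evidence) * hypotheses[h].utility_given_not_E
                        for h, w in m.items())
        updated.append((new_m, new_b))
    return updated

def value(pairs, hypotheses):
    """Worst case over (m, b) pairs of b + sum_h m(h) * E[utility | h, E]."""
    return min(b + sum(w * hypotheses[h].utility_given_E for h, w in m.items())
               for m, b in pairs)

hypotheses = {
    "A": Hypothesis(p_evidence=0.50, utility_given_E=1.0, utility_given_not_E=0.2),
    "B": Hypothesis(p_evidence=0.01, utility_given_E=0.1, utility_given_not_E=0.9),
}
# Knightian uncertainty between "all weight on A" and "all weight on B".
pairs = [({"A": 1.0, "B": 0.0}, 0.0), ({"A": 0.0, "B": 1.0}, 0.0)]

print(round(value(update(pairs, hypotheses), hypotheses), 3))  # 0.6 -- A is the binding worst case
print(round(min(sum(w * hypotheses[h].utility_given_E for h, w in m.items())
                for m, _ in update(pairs, hypotheses)), 3))    # 0.001 -- dropping b wrongly makes B dominate
```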
{"id": "e62654d0cada7979bd92df822af610e5", "title": "Cartesian Frames", "url": "https://www.alignmentforum.org/s/2A7rrZ4ySx6R8mfoT", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Scott Garrabrant"], "summaries": ["The <@embedded agency sequence@>(@Embedded Agents@) hammered in the fact that there is no clean, sharp dividing line between an agent and its environment. This sequence proposes an alternate formalism: Cartesian frames. Note this is a paradigm that helps us _think about agency_: you should not be expecting some novel result that, say, tells us how to look at a neural net and find agents within it.\n\nThe core idea is that rather than _assuming_ the existence of a Cartesian dividing line, we consider how such a dividing line could be _constructed_. For example, when we think of a sports team as an agent, the environment consists of the playing field and the other team; but we could also consider a specific player as an agent, in which case the environment consists of the rest of the players (on both teams) and the playing field. Each of these are valid ways of carving up what actually happens into an “agent” and an “environment”, they are _frames_ by which we can more easily understand what’s going on, hence the name “Cartesian frames”.\n\nA Cartesian frame takes **choice** as fundamental: the agent is modeled as a set of options that it can freely choose between. This means that the formulation cannot be directly applied to deterministic physical laws. It instead models what agency looks like [“from the inside”](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside). _If_ you are modeling a part of the world as capable of making choices, _then_ a Cartesian frame is appropriate to use to understand the perspective of that choice-making entity.\n\nFormally, a Cartesian frame consists of a set of agent options A, a set of environment options E, a set of possible worlds W, and an interaction function that, given an agent option and an environment option, specifies which world results. Intuitively, the agent can “choose” an agent option, the environment can “choose” an environment option, and together these produce some world. You might notice that we’re treating the agent and environment symmetrically; this is intentional, and means that we can define analogs of all of our agent notions for environments as well (though they may not have nice philosophical interpretations).\n\nThe full sequence uses a lot of category theory to define operations on these sorts of objects and show various properties of the objects and their operations. I will not be summarizing this here; instead, I will talk about their philosophical interpretations.\n\nFirst, let’s look at an example of using a Cartesian frame on something that isn’t typically thought of as an agent: the atmosphere, within the broader climate system. The atmosphere can “choose” whether to trap sunlight or not. Meanwhile, in the environment, either the ice sheets could melt or they could not. If sunlight is trapped and the ice sheets melt, then the world is Hot. If exactly one of these is true, then the world is Neutral. Otherwise, the world is Cool.\n\n(Yes, this seems very unnatural. That’s good! The atmosphere shouldn’t be modeled as an agent! I’m choosing this example because its unintuitive nature makes it more likely that you think about the underlying rule, rather than just the superficial example. 
I will return to more intuitive examples later.)\n\n**Controllables**\n\nA _property_ of the world is something like “it is neutral or warmer”. An agent can _ensure_ a property if it has some option such that no matter what environment option is chosen, the property is true of the resulting world. The atmosphere could ensure the warmth property above by “choosing” to trap sunlight. Similarly, the agent can _prevent_ a property if it can guarantee that the property will not hold, regardless of the environment option. For example, the atmosphere can prevent the property “it is hot” by “choosing” not to trap sunlight. The agent can _control_ a property if it can both ensure and prevent it. In our example, there is no property that the atmosphere can control.\n\n**Coarsening or refining worlds**\n\nWe often want to describe reality at different levels of abstraction. Sometimes we would like to talk about the behavior of various companies; at other times we might want to look at an individual employee. We can do this by having a function that maps low-level (refined) worlds to high-level (coarsened) worlds. In our example above, consider the possible worlds {YY, YN, NY, NN}, where the first letter of a world corresponds to whether sunlight was trapped (Yes or No), and the second corresponds to whether the ice sheets melted. The worlds {Hot, Neutral, Cool} that we had originally are a coarsened version of this, where we map YY to Hot, YN and NY to Neutral, and NN to Cool.\n\n**Interfaces**\n\nA major upside of Cartesian frames is that given the set of possible worlds that can occur, we can choose how to divide it up into an “agent” and an “environment”. Most of the interesting aspects of Cartesian frames are in the relationships between different ways of doing this division, for the same set of possible worlds.\n\nFirst, we have interfaces. Given two different Cartesian frames (A, E, W) and (B, F, W) with the same set of worlds W, an interface allows us to interpret the agent A as being used in place of the agent B. Specifically, if A would choose an option a, the interface maps this to one of B’s options b. This is then combined with the environment option f (from F) to produce a world w.\n\nA valid interface also needs to be able to map the environment option f to e, and then combine it with the agent option a to get the world. This alternate way of computing the world must always give the same answer.\n\nSince A can be used in place of B, all of A’s options must have equivalents in B. However, B could have options that A doesn’t. So the existence of this interface implies that A is “weaker” in a sense than B. (There are a bunch of caveats here.)\n\n(Relevant terms in the sequence: _morphism_)\n\n**Decomposing agents into teams of subagents**\n\nThe first kind of subagent we will consider is a subagent that can control “part of” the agent’s options. Consider, for example, a coordination game, where there are N players who each individually can choose whether or not to press a Big Red Button. There are only two possible worlds: either the button is pressed, or it is not pressed. For now, let’s assume there are two players, Alice and Bob.\n\nOne possible Cartesian frame is the frame for the entire team. 
In this case, the team has perfect control over the state of the button -- the agent options are either to press the button or not to press the button, and the environment does not have any options (or more accurately, it has a single “do nothing” option).\n\nHowever, we can also decompose this into separate Alice and Bob _subagents_. What does a Cartesian frame for Alice look like? Well, Alice also has two options -- press the button, or don’t. However, Alice does not have perfect control over the result: from her perspective, Bob is part of the environment. As a result, for Alice, the environment also has two options -- press the button, or don’t. The button is pressed if Alice presses it _or_ if the environment presses it. (The Cartesian frame for Bob is identical, since he is in the same position that Alice is in.)\n\nNote however that this decomposition isn’t perfect: given the Cartesian frames for Alice and Bob, you cannot uniquely recover the original Cartesian frame for the team. This is because both Alice and Bob’s frames say that the environment has some ability to press the button -- _we_ know that this is just from Alice and Bob themselves, but given just the frames we can’t be sure that there isn’t a third person Charlie who also might press the button. So, when we combine Alice and Bob back into the frame for a two-person team, we don’t know whether or not the environment should have the ability to press the button. This makes the mathematical definition of this kind of subagent a bit trickier though it still works out.\n\nAnother important note is that this is relative to how coarsely you model the world. We used a fairly coarse model in this example: only whether or not the button was pressed. If we instead used a finer model that tracked which subset of people pressed the button, then we _would_ be able to uniquely recover the team’s Cartesian frame from Alice and Bob’s individual frames.\n\n(Relevant terms in the sequence: _multiplicative subagents, sub-tensors, tensors_)\n\n**Externalizing and internalizing**\n\nThis decomposition isn’t just for teams of people: even a single “mind” can often be thought of as the interaction of various parts. For example, hierarchical decision-making can be thought of as the interaction between multiple agents at different levels of the hierarchy.\n\nThis decomposition can be done using _externalization_. Externalization allows you to take an existing Cartesian frame and some specific property of the world, and then construct a new Cartesian frame where that property of the world is controlled by the environment.\n\nConcretely, let’s imagine a Cartesian frame for Alice that represents her decision on whether to cook a meal or eat out. If she chooses to cook a meal, then she must also decide which recipe to follow. If she chooses to eat out, she must decide which restaurant to eat out at.\n\nWe can externalize the high-level choice of whether Alice cooks a meal or eats out. This results in a Cartesian frame where the environment chooses whether Alice is cooking or eating out, and the agent must then choose a restaurant or recipe as appropriate. This is the Cartesian frame corresponding to the low-level policy that must pursue whatever subgoal is chosen by the high-level planning module (which is now part of the environment). The agent of this frame is a subagent of Alice.\n\nThe reverse operation is called internalization, where some property of the world is brought under the control of the agent. 
In the above example, if we take the Cartesian frame for the low-level policy, and then internalize the cooking / eating out choice, we get back the Cartesian frame for Alice as a unified whole.\n\nNote that in general externalization and internalization are _not_ inverses of each other. As a simple example, if you externalize something that is already “in the environment” (e.g. whether it is raining, in a frame for Alice), that does nothing, but when you then internalize it, that thing is now assumed to be under the agent’s control (e.g. now the “agent” in the frame can control whether or not it is raining). We will return to this point when we talk about observability.\n\n**Decomposing agents into disjunctions of subagents**\n\nOur subagents so far have been “team-based”: the original agent could be thought of as a supervisor that got to control all of the subagents together. (The team agent in the button-pressing game could be thought of as controlling both Alice and Bob’s actions; in the cooking / eating out example Alice could be thought of as controlling both the high-level subgoal selection as well as the low-level policy that executes on the subgoals.)\n\nThe sequence also introduces another decomposition into subagents, where the superagent can be thought of as a supervisor that gets to choose _which_ of the subagents gets to control the overall behavior. Thus, the superagent can do anything that either of the subagents could do.\n\nLet’s return to our cooking / eating out example. We previously saw that we could decompose Alice into a high-level subgoal-choosing subagent that chooses whether to cook or eat out, and a low-level subgoal-execution subagent that then chooses which recipe to make or which restaurant to go to. We can also decompose Alice as being the choice of two subagents: one that chooses which restaurant to go to, and one that chooses which recipe to make. The union of these subagents is an agent that first chooses whether to go to a restaurant or to make a recipe, and then uses the appropriate subagent to choose the restaurant or recipe: this is exactly a description of Alice.\n\n(Relevant terms in the sequence: _additive subagents, sub-sums, sums_)\n\n**Committing and assuming**\n\nOne way to think about the subagents of the previous example is that they are the result of Alice _committing_ to a particular subset of choices. If Alice commits to eating out (but doesn’t specify at what restaurant), then the resulting frame is equivalent to the restaurant-choosing subagent.\n\nSimilarly to committing, we can also talk about _assuming_. Just as commitments restrict the set of options available to the agent, assumptions restrict the set of options available to the environment.\n\nJust as we can union two agents together to get an agent that gets to choose between two subagents, we can also union two environments together to get an environment that gets to choose between two subenvironments. (In this case the agent is more constrained: it must be able to handle the environment regardless of which way the environment chooses.)\n\n(Relevant terms in the sequence: _product_)\n\n**Observables**\n\nThe most interesting (to me) part of this sequence was the various equivalent definitions of what it means for something to be observable. 
The overall story is similar to the one in [Knowledge is Freedom](https://www.alignmentforum.org/posts/b3Bt9Cz4hEtR26ANX/knowledge-is-freedom): an agent is said to “observe” a property P if it is capable of making different decisions based on whether P holds or not.\n\nThus we get our first definition of observability: **a property P of the world is _observable_ if, for any two agent options a and b, the agent also has an option that is equivalent to “if P then a else b”.**\n\nIntuitively, this is meant to be similar to the notion of “inputs” to an agent. Intuitively, a neural net should be able to express arbitrary computations over its inputs, and so if we view the neural net as “choosing” what computation to do (by “choosing” what its parameters are), then the neural net can have its outputs (agent options) depend in arbitrary ways on the inputs. Thus, we say that the neural net “observes” its inputs, because what the neural net does can depend freely on the inputs.\n\nNote that this is a very black-or-white criterion: we must be able to express _every_ conditional policy on the property for it to be observable; if even one such policy is not expressible then the property is not observable.\n\nOne way to think about this is that an observable property needs to be completely under the control of the environment, that is, the environment option should completely determine whether the resulting world satisfies the property or not -- nothing the agent does can matter (for this property). To see this, suppose that there was some environment option e that didn’t fully determine a property P, so that there are agent options a and b such that the world corresponding to (a, e) satisfies P but the one corresponding to (b, e) does not. Then our agent cannot implement the conditional policy “if P then b else a”, because it would lead to a self-referential contradiction (akin to “this sentence is false”) when the environment chooses e. Thus, P cannot be observable.\n\nThis is not equivalent to observability: it is possible for the environment to fully control P, while the agent is still unable to always condition on P. So we do need something extra. Nevertheless, this intuition suggests a few other ways of thinking about observability. The key idea is to identify a decomposition of the agent based on P that should only work if the environment has all the control, and then to identify a union step that puts the agent back together, that automatically adds in all of the policies that are conditional on P. I’ll describe these definitions here; the sequence proves that they are in fact equivalent to the original definition above.\n\nFirst, recall that externalization and internalization are methods that allow us to “transfer” control of some property from the agent to the environment and vice versa. Thus, if all the control of P is in the environment, one would hope that internalization followed by externalization just transfers the control back and forth. In addition, when we externalize P, the externalization process will enforce that the agent can condition on P arbitrarily (this is how it is defined). This suggests the definition: **P is observable if and only if internalizing P followed by externalizing P gives us back the original frame.**\n\nSecond, if the environment has all of the control over P, then we should be able to decompose the agent into two parts: one that decides what to do when P is true, and one that decides what to do when P is false. 
We can achieve this using _assumptions_, that is, the first agent is the original agent under the assumption that P is true, and the second is under the assumption that P is false. Note that if the environment didn’t have perfect control over P, this would not work, as the environment options where P is not guaranteed to be true or false would simply be deleted, and could not be reconstructed from the two new agents.\n\nWe now need to specify how to put the agents back together, in a way that includes all the conditional policies on P. There are actually two variants in how we can do this:\n\nIn the first case, we combine the agents by unioning the environments, which lets the environment choose whether P is true or not. Given how this union is defined, the new agent is able to specify both what to do given the environment’s choice, _as well as_ what it would have done in the counterfactual case where the environment had decided P differently. This allows it to implement all conditional policies on P. So, **P is observable if and only if decomposing the frame using assumptions on P, and then unioning the environments of the resulting frames gives back the original frame.**\n\nIn the second case, after getting agents via assumption on P, you extend each agent so that in the case where its assumption is false, it is as though it takes a noop action. Intuitively, the resulting agent is an agent that is hobbled so that it has no power in worlds where P comes out differently than was assumed. These agents are then combined into a team. Intuitively, the team selects an option of the form “the first agent tries to do X (which only succeeds when P is true) and the second agent tries to do Y (which only succeeds when P is false)”. Like the previous decomposition, this specifies both what to do in whatever actual environment results, as well as what would have been done in the counterfactual world where the value of P was reversed. Thus, this way of combining the agents once again adds in all conditional policies on P. So, **P is observable if and only if decomposing the frame using assumptions on P, then hobbling the resulting frames in cases where their assumptions are false, and then putting the agents back in a team, is equivalent to the original frame.**\n\n**Time**\n\nCartesian frames do not have an intrinsic notion of time. However, we can still use them to model sequential processes, by having the agent options be _policies_ rather than actions, and having the worlds be histories or trajectories rather than states.\n\nTo say useful things about time, we need to broaden our notion of observables. So far I’ve been talking about whether you can observe binary properties P that are either true or false. In fact, all of the definitions can be easily generalized to n-ary properties P that can take on one of N values. We’ll be using this notion of observability here.\n\nConsider a game of chess where Alice plays as white and Bob as black. Intuitively, when Alice is choosing her second move, she can observe Bob’s first move. However, the property “Bob’s first move” would not be observable in Alice’s Cartesian frame, because Alice’s _first_ move cannot depend on Bob’s first move (since Bob hasn’t made it yet), and so when deciding the first move we can’t implement policies that condition on what Bob’s first move is.\n\nReally, we want some way to say “after Alice has made her first move, from the perspective of the rest of her decisions, Bob’s first move is observable”. 
But we know how to remove some control from the agent in order to get the perspective of “everything else” -- that’s externalization! In particular, in Alice’s frame, if we externalize the property “Alice’s first move”, then the property “Bob’s first move” _is_ observable in the new frame.\n\nThis suggests a way to define a sequence of frames that represent the passage of time: we define the Tth frame as “the original frame, but with the first T moves externalized”, or equivalently as “the T-1th frame, but with the Tth move externalized”. Each of these frames are subagents of the original frame, since we can think of the full agent (Alice) as the team of “the agent that plays the first T moves” and “the agent that plays the T+1th move and onwards”. As you might expect, as “time” progresses, the agent loses controllables and gains observables. For example, by move 3 Alice can no longer control her first two moves, but she can now observe Bob’s first two moves, relative to Alice at the beginning of the game."], "venue": "Alignment Forum", "opinion": "I like this way of thinking about agency: we’ve been talking about “where to draw the line around the agent” for quite a while in AI safety, but there hasn’t been a nice formalization of this until now. In particular, it’s very nice that we can compare different ways of drawing the line around the agent, and make precise various concepts around this, such as “subagent”.\n\nI’ve also previously liked the notion that “to observe P is to be able to change your decisions based on the value of P”, but I hadn’t really seen much discussion about it until now. This sequence makes some real progress on conceptual understanding of this perspective: in particular, the notion that observability requires “all the control to be in the environment” is not one I had until now. (Though I should note that this particular phrasing is mine, and I’m not sure the author would agree with the phrasing.)\n\nOne of my checks for the utility of foundational theory for a particular application is to see whether the key results can be explained without having to delve into esoteric mathematical notation. I think this sequence does very well on this metric -- for the most part I didn’t even read the proofs, yet I was able to reconstruct conceptual arguments for many of the theorems that are convincing to me. (They aren’t and shouldn’t be as convincing as the proofs themselves.) However, not all of the concepts score so well on this -- for example, the generic subagent definition was sufficiently unintuitive to me that I did not include it in this summary.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #127", "newsletter_category": "Agent foundations"}
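(Here is a small sketch, mine and not from the sequence, of a finite Cartesian frame as a data structure: agent options, environment options, and an interaction function, with checks for ensure/prevent/control and one way of formalizing the “if P then a else b” observability condition; there are several equivalent formulations in the sequence, and this is just the one that was easiest to write down. The atmosphere example from the summary is the test case.)

```python
class Frame:
    """A finite Cartesian frame: agent options, environment options, and a map
    from (agent option, environment option) to a world."""
    def __init__(self, agent, env, world_fn):
        self.agent, self.env, self.world = agent, env, world_fn

    def ensures(self, prop):
        """Some agent option makes prop hold no matter what the environment does."""
        return any(all(prop(self.world(a, e)) for e in self.env) for a in self.agent)

    def prevents(self, prop):
        return self.ensures(lambda w: not prop(w))

    def controls(self, prop):
        return self.ensures(prop) and self.prevents(prop)

    def observes(self, prop):
        """For every pair (a, b), some option c behaves like a whenever prop holds
        of the resulting world and like b whenever it does not (one way to encode
        "if prop then a else b")."""
        def has_conditional(a, b):
            return any(all((self.world(c, e) == self.world(a, e) and prop(self.world(c, e)))
                           or (self.world(c, e) == self.world(b, e) and not prop(self.world(c, e)))
                           for e in self.env)
                       for c in self.agent)
        return all(has_conditional(a, b) for a in self.agent for b in self.agent)

# The atmosphere example: the agent "chooses" whether to trap sunlight,
# the environment "chooses" whether the ice sheets melt.
def climate(trap, melt):
    return {2: "Hot", 1: "Neutral", 0: "Cool"}[trap + melt]

atmosphere = Frame(agent=[0, 1], env=[0, 1], world_fn=climate)
print(atmosphere.ensures(lambda w: w != "Cool"))   # True: trap sunlight
print(atmosphere.controls(lambda w: w != "Cool"))  # False: it cannot also prevent it
print(atmosphere.observes(lambda w: w == "Hot"))   # False: the agent partly controls "Hot"
```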
{"id": "95d2b2d6e63d3e294f1cd2f4dae45df0", "title": "Theory of Ideal Agents, or of Existing Agents?", "url": "https://www.alignmentforum.org/posts/zQZcWkvEA8DLjKR7C/theory-of-ideal-agents-or-of-existing-agents", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["John Wentworth"], "summaries": ["There are at least two ways in which a theoretical understanding of agency can be useful: On one hand, such understanding can enable the **design** of an artificial agent with certain properties. On the other hand, it can be used to **describe** existing agents. While both perspectives are likely needed for successfully aligning AI, individual researchers face a tradeoff: either they focus their efforts on existence results concerning strong properties, which helps with design (e.g. most of <@MIRI's work on embedded agency@>(@Embedded Agents@)), or they work on proving weaker properties for a broad class of agents, which helps with description (e.g. [all logical inductors can be described as markets](https://www.alignmentforum.org/posts/WmNeCipNwg9CmGy3T/markets-are-universal-for-logical-induction), summarized next). The prioritization of design versus description is a likely crux in disagreements about the correct approach to developing a theory of agency."], "venue": "Alignment Forum", "opinion": "To facilitate productive discussions it seems important to disentangle disagreements about goals from disagreements about means whenever we can. I liked the clear presentation of this attempt to identify a common source of disagreements on the (sub)goal level.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #66", "newsletter_category": "Agent foundations"}
{"id": "fa071e144ac45ad395d40338df1d01b9", "title": "The Accumulation of Knowledge", "url": "https://www.alignmentforum.org/s/H6kiZXJwYgxZubtmD", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Alex Flint"], "summaries": ["Probability theory can tell us about how we ought to build agents that have knowledge (start with a prior and perform Bayesian updates as evidence comes in). However, this is not the only way to create knowledge: for example, humans are not ideal Bayesian reasoners. As part of our quest to <@_describe_ existing agents@>(@Theory of Ideal Agents, or of Existing Agents?@), could we have a theory of knowledge that specifies when a particular physical region within a closed system is “creating knowledge”? We want a theory that <@works in the Game of Life@>(@Agency in Conway’s Game of Life@) as well as the real world.\n\nThis sequence investigates this question from the perspective of defining the accumulation of knowledge as increasing correspondence between [a map and the territory](https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation), and concludes that such definitions are not tenable. In particular, it considers four possibilities and demonstrates counterexamples to all of them:\n\n1. Direct map-territory resemblance: Here, we say that knowledge accumulates in a physical region of space (the “map”) if that region of space looks more like the full system (the “territory”) over time. \n\nProblem: This definition fails to account for cases of knowledge where the map is represented in a very different way that doesn’t resemble the territory, such as when a map is represented by a sequence of zeros and ones in a computer.\n\n2. Map-territory mutual information: Instead of looking at direct resemblance, we can ask whether there is increasing mutual information between the supposed map and the territory it is meant to represent.\n\nProblem: In the real world, nearly _every_ region of space will have high mutual information with the rest of the world. For example, by this definition, a rock accumulates lots of knowledge as photons incident on its face affect the properties of specific electrons in the rock giving it lots of information.\n\n3. Mutual information of an abstraction layer: An abstraction layer is a grouping of low-level configurations into high-level configurations such that transitions between high-level configurations are predictable without knowing the low-level configurations. For example, the zeros and ones in a computer are the high-level configurations of a digital abstraction layer over low-level physics. Knowledge accumulates in a region of space if that space has a digital abstraction layer, and the high-level configurations of the map have increasing mutual information with the low-level configurations of the territory. \n\nProblem: A video camera that constantly records would accumulate much more knowledge by this definition than a human, even though the human is much more able to construct models and act on them.\n\n4. Precipitation of action: The problem with our previous definitions is that they don’t require the knowledge to be _useful_. So perhaps we can instead say that knowledge is accumulating when it is being used to take action. To make this mechanistic, we say that knowledge accumulates when an entity’s actions become more fine-tuned to a specific environment configuration over time. 
(Intuitively, they learned more about the environment and so could condition their actions on that knowledge, which they previously could not do.)\n\nProblem: This definition requires the knowledge to actually be used to count as knowledge. However, if someone makes a map of a coastline, but that map is never used (perhaps it is quickly destroyed), it seems wrong to say that during the map-making process knowledge was not accumulating. "], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #156", "newsletter_category": "Agent foundations"}
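(To make definition 2 above concrete, here is a short sketch, mine rather than the author's, that estimates map-territory mutual information from joint samples; the toy "maps" and their parameters are made up. It also illustrates the stated problem, since a passive recorder scores at least as well as anything that plausibly "knows".)

```python
import math
import random
from collections import Counter

random.seed(0)

def mutual_information(pairs):
    """Estimate I(map; territory) in bits from joint samples of (map_state, territory_state)."""
    n = len(pairs)
    joint = Counter(pairs)
    p_m = Counter(m for m, _ in pairs)
    p_t = Counter(t for _, t in pairs)
    return sum((c / n) * math.log2((c / n) / ((p_m[m] / n) * (p_t[t] / n)))
               for (m, t), c in joint.items())

# Toy "territory": a random bit. A "camera" map copies it perfectly; a noisy
# map copies it 60% of the time; an unrelated map ignores it entirely.
territory = [random.randint(0, 1) for _ in range(50_000)]
camera = territory[:]
noisy = [t if random.random() < 0.6 else 1 - t for t in territory]
blank = [random.randint(0, 1) for _ in territory]

for name, m in [("camera", camera), ("noisy", noisy), ("blank", blank)]:
    print(name, round(mutual_information(list(zip(m, territory))), 3))
# The camera gets essentially the full 1 bit, which is exactly the problem the
# post points out: high mutual information does not distinguish a recorder from a knower.
```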
{"id": "ec56d2285962bf210d0e7b40b75d7066", "title": "Computational complexity of RL with traps", "url": "https://www.alignmentforum.org/posts/3YYChdX29SMG6kQf6/computational-complexity-of-rl-with-traps", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Vadim Kosoy"], "summaries": ["A post asking about complexity theoretic results around RL, both with (unknown) deterministic and stochastic dynamics."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #22", "newsletter_category": "Agent foundations"}
{"id": "c215dda59133794cd8ab5c8c8121ef87", "title": "Countable Factored Spaces", "url": "https://www.alignmentforum.org/posts/QEfbg6vbjGgfFzJM4/countable-factored-spaces", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Diffractor"], "summaries": ["This post generalizes the math in <@Finite Factored Sets@>(@Finite Factored Sets sequence@) to (one version of) the infinite case. Everything carries over, except for one direction of the fundamental theorem. (The author suspects that direction is true, but was unable to prove it.)"], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #164", "newsletter_category": "Agent foundations"}
{"id": "3f268139ed7bffc2e1df039bf331b803", "title": "Robust program equilibrium", "url": "https://foundational-research.org/robust-program-equilibrium/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Caspar Oesterheld"], "summaries": ["In a prisoner's dilemma where you have access to an opponent's source code, you can hope to achieve cooperation by looking at how the opponent would perform against you. Naively, you could simply simulate what the opponent would do given your source code, and use that to make your decision. However, if your opponent also tries to simulate you, this leads to an infinite loop. The key idea of this paper is to break the infinite loop by introducing a small probability of guaranteed cooperation (without simulating the opponent), so that eventually after many rounds of simulation the recursion \"bottoms out\" with guaranteed cooperation. They explore what happens when applying this idea to the equivalents of FairBot/Tit-for-Tat strategies when you are simulating the opponent."], "venue": "FRI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #39", "newsletter_category": "Agent foundations"}
{"id": "c863d536aff62982b7f5b25e1ae4f540", "title": "Counterfactuals, thick and thin", "url": "https://www.lesswrong.com/posts/YSH3RFSFESzsa5Nrg/counterfactuals-thick-and-thin", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Nisan"], "summaries": ["There are many different ways to formalize counterfactuals (the post suggests three such ways). Often, for any given way of formalizing counterfactuals, there are many ways you could take a counterfactual, which give different answers. When considering the physical world, we have strong causal models that can tell us which one is the \"correct\" counterfactual. However, there is no such method for logical counterfactuals yet."], "venue": "LessWrong", "opinion": "I don't think I understood this post, so I'll abstain on an opinion.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #18", "newsletter_category": "Agent foundations"}
{"id": "c1427105f5e44b4747c93627c92e3ce2", "title": "Conceptual problems with utility functions, second attempt at explaining", "url": "https://www.lesswrong.com/posts/QmeguSp4Pm7gecJCz/conceptual-problems-with-utility-functions-second-attempt-at", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Dacyn"], "summaries": ["Argues that there's a difference between object-level fairness (which sounds to me like fairness as a terminal value) and meta-level fairness (which sounds to me like instrumental fairness), and that this difference is not captured with single-player utility function maximization."], "venue": "LessWrong", "opinion": "I still think that the difference pointed out here is accounted for by traditional multiagent game theory, which has utility maximization for each player. For example, I would expect that in a repeated Ultimatum game, fairness would arise naturally, similarly to how tit-for-tat is a good strategy in an iterated prisoner's dilemma.", "highlight": false, "read_more": "Conceptual problems with utility functions", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #17", "newsletter_category": "Agent foundations"}
{"id": "3257584043f6eaba30790a6bc2eab1c8", "title": "Exorcizing the Speed Prior?", "url": "https://www.lesswrong.com/posts/Say4sCQ2g22HGsbRT/exorcizing-the-speed-prior", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Abram Demski"], "summaries": ["Intuitively, in order to find a solution to a hard problem, we could either do an uninformed brute force search, or encode some domain knowledge and then do an informed search. Roughly, we should expect each additional bit of information to cut the required search roughly in half. The speed prior trades off a bit of complexity against a doubling of running time, so we should expect the informed and uninformed searches to be equally likely in the speed prior. So, uninformed brute force searches that can find weird edge cases (aka daemons) are only equally likely, not more likely."], "venue": "LessWrong", "opinion": "As the post acknowledges, this is extremely handwavy and just gesturing at an intuition, so I'm not sure what to make of it yet. One counterconsideration is that a lot of intelligence that is not just search, that still is general across domains (see [this comment](https://www.lesswrong.com/posts/Say4sCQ2g22HGsbRT/exorcizing-the-speed-prior#CeLGp6Cje4id5RFb2) for examples).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #17", "newsletter_category": "Agent foundations"}
{"id": "69cad8648648d303e201a7111905de18", "title": "Musings on Exploration", "url": "https://agentfoundations.org/item?id=1786", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Alex Appel"], "summaries": ["Decision theories require some exploration in order to prevent the problem of spurious conterfactuals, where you condition on a zero-probability event. However, there are problems with exploration too, such as unsafe exploration (eg. launching a nuclear arsenal in an exploration step), and a sufficiently strong agent seems to have an incentive to self-modify to remove the exploration, because the exploration usually leads to suboptimal outcomes for the agent."], "venue": "Agent Foundations", "opinion": "I liked the linked [post](https://agentfoundations.org/item?id=92) that explains why conditioning on low-probability actions is not the same thing as a counterfactual, but I'm not knowledgeable enough to understand what's going on in this post, so I can't really say whether or not you should read it.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #1", "newsletter_category": "Agent foundations"}
{"id": "bb6373daf140b96bd106d847f50346e6", "title": "An Agent is a Worldline in Tegmark V", "url": "https://www.lesswrong.com/posts/brQYmeX4HFrPbs4XP/an-agent-is-a-worldline-in-tegmark-v", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["komponisto"], "summaries": ["Tegmark IV consists of all possible consistent mathematical structures. Tegmark V is an extension that also considers \"impossible possible worlds\", such as the world where 1+1=3. Agents are reasoning at the level of Tegmark V, because counterfactuals are considering these impossible possible worlds."], "venue": "LessWrong", "opinion": "I'm not really sure what you gain by thinking of an agent this way.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "Agent foundations"}
{"id": "ec838bf1d0e492293b6d37a57d24e710", "title": "GovAI 2019 Annual Report", "url": "https://www.fhi.ox.ac.uk/govai/govai-2019-annual-report/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Allan Dafoe"], "summaries": ["This is exactly what it sounds like."], "venue": "FHI Website", "opinion": "I generally find governance papers quite illuminating for thinking about how all this technical stuff we do is meant to interact with the broader society and actually have an impact on the world. That said, I usually don't highlight such papers, despite liking them a lot, because the primary audience I have in mind are people trying to solve the technical alignment problem in which you want to ensure a powerful AI system is not adversarially optimizing against you. So instead I've collected a bunch of them in this newsletter and just highlighted the annual report.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #88", "newsletter_category": "AI governance"}
{"id": "19c25830b5289957ccc9d8b98b6d2907", "title": "80K podcast with Allan Dafoe", "url": "https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Allan Dafoe and Rob Wiblin"], "summaries": ["A long interview with Allan Dafoe about the field of AI policy, strategy and governance. It discusses challenges for AI policy that haven't arisen before (primarily because AI is a dual use technology), the rhetoric around arms races, and autonomous weapons as a means to enable authoritarian regimes, to give a small sampling. One particularly interesting tidbit (to me) was that Putin has said that Russia will give away its AI capabilities to the world, because an arms race would be dangerous."], "venue": "80,000 Hours", "opinion": "Overall this is a great introduction to the field, I'd probably recommend people interested in the area to read this before any of the more typical published papers. I do have one disagreement -- Allan claims that even if we stopped Moore's law, and stopped algorithmic scientific improvement in AI, there could be some extreme systematic risks that emerge from AI -- mass labor displacement, creating monopolies, mass surveillance and control (through robot repression), and strategic stability. I would be very surprised if current AI systems would be able to lead to mass labor displacement and/or control through robot repression. We are barely able to get machines to do anything in the real world right now -- _something_ has to improve quite drastically, and if it's neither compute nor algorithms, then I don't know what it would be. The other worries seem plausible from the technical viewpoint.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #7", "newsletter_category": "AI governance"}
{"id": "ddfb8f4985e0be08abbba0d8aed6223a", "title": "The new 30-person research group in DC investigating how emerging technologies could affect national security", "url": "https://80000hours.org/podcast/episodes/helen-toner-on-security-and-emerging-technology/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Rob Wiblin and Helen Toner"], "summaries": ["This 80,000 Hours podcast with Helen Toner dives into details of AI policy, China and the new Center for Security and Emerging Technology (CSET). I'm only summarizing the parts I found most relevant.\n\nMany of the analogies for AI are quite broken. AI is a very broad set of software technologies, unlike nuclear weapons which are very discrete. It's not feasible to use export controls to keep \"AI\" within the US. In addition, AI will affect war far more fundamentally than just creating lethal autonomous weapons -- Helen thinks that the biggest military impact might be on logistics. It's also weird to compare data to oil, because oil is a rival good (two people can't use the same oil), whereas data can easily be copied. In addition, one barrel of oil can replace any other barrel, but data is very specific to the particular application. Helen's preferred analogy is thinking of AI as electricity -- a very general purpose tool that will transform lots of aspects of society. However, this analogy can also break down -- for example, the AI research community seems pretty important, but there was no analog for electricity.\n\nAnd now for a few random points, in no particular order. China \"exports\" around 50,000 inventors (patent holders) every year, while the US imports 190,000, far more than any other country, suggesting that the US is a global hub for talent. AI is hard to define, because many of its properties lie on a continuum -- for example, is a landmine a lethal autonomous weapon? The way to affect policy is to make small, targeted changes in proposed policies so that the government makes slightly better decisions -- it's far too difficult to execute on a grand plan to get the government to do some big thing. The main skills for engaging with government on technology issues: be able to speak both to scientists as well as bureaucrats, and be able to navigate the DC setting -- knowing what people are doing, what their incentives are, and how to get your thing done given their different incentives."], "venue": "80000 Hours Podcast", "opinion": "I enjoyed the section on how analogies for AI are broken -- I don't usually think much about them, but they always felt a bit off, and Helen makes it very clear what the issues are. It was also interesting seeing how the perspectives on AI are quite different from those of us thinking about AGI accident risk -- we often think about single, generally intelligent AGI systems, whereas Helen emphasized how current technologies can be easily deployed in many application-specific contexts. While data for current systems is very application-specific as Helen mentioned, if you believe the unsupervised learning story data may be more interchangeable for AGI systems.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #61", "newsletter_category": "AI governance"}
{"id": "1a30f3616d8dd6a28edca4c7dc30f6cc", "title": "AI Alignment Podcast: On the Governance of AI", "url": "https://futureoflife.org/2019/07/22/on-the-governance-of-ai-with-jade-leung/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lucas Perry and Jade Leung"], "summaries": ["Jade makes a lot of points in this podcast, some of which I've summarized here in no particular order.\n\nGovAI works on lots of research topics, including analysis of the inputs to AI, understanding historical cases of competition, looking at the relationship between firms and governments, and understanding public opinion.\n\nGovernance is particularly difficult because in the current competitive environment it's hard to implement any form of \"ideal\" governance; we can only make changes on the margin. As a result, it is probably better if we could get to a state where we could take a long time to deliberate about what ideal governance would look like, without having to worry about competitive pressures.\n\nThe biggest risk for governments is that they will make hasty, ill-informed regulation. However, given how uncertain we are, it's hard to recommend any concrete actions right now -- but governance will happen anyway; it won't wait for more research. One useful action we can take is to correct or add nuance to inaccurate memes and information, such as the \"race\" between the US and China, or the performance-safety tradeoff. Plausibly we should engage with government more -- we may have been biased towards working with private organizations because they are more nimble and familiar to us.\n\nInstead of thinking about short term vs. long term, we should be thinking about the stakes. Some issues, such as privacy or job loss, can be thought of as \"short term\" but their stakes could scale to be huge in the long term. Those would be good areas to think about."], "venue": "FLI Website", "opinion": "I don't have any particular thoughts on these topics, but I am glad for both this and the previous podcast, which give more of a birds-eye view of the AI governance landscape, which is hard to get from any single paper.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #61", "newsletter_category": "AI governance"}
{"id": "678f86966a14d1c6cfdebd84348e6c40", "title": "Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development", "url": "https://www.fhi.ox.ac.uk/standards-technical-report/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Peter Cihon"], "summaries": ["This technical report argues that we can have an outsized impact on the future of AI by influencing standards on AI so that they help ensure that AI systems are safe and beneficial, in addition to making the deployment of AI more efficient. A standard here could be a product like Tensorflow or Gym, or a process like [this list](https://ai.google/education/responsible-ai-practices). It's particularly useful to focus on international standards: since corporations can simply leave the country to escape national regulations, there is a race to the bottom on the stringency of national standards, and so they can't effect as much change.\n\nIt may be particularly valuable to influence existing organizations that set standards because they are very responsive to expert opinion. It is also possible to develop a standard privately, and then \"convert\" it into an international standard. (This happened with the C programming language and the PDF file format.) Such influence can be used to change the culture around AI development, e.g. to put safety more at the forefront."], "venue": "FHI Website", "opinion": "I would guess that the most influential standards are \"network standards\" like Tensorflow: they make it easier for everyone to develop AI systems. However, the benefit here is in having any standard at all, and so it seems unlikely that such standards could also effect a change in culture that's unrelated to the efficiency aspect of the standard. That said, the report convinced me that \"enforced standards\" are also impactful: even if the standard requires active enforcement to prevent organizations from ignoring it, organizations will often choose to comply with the standard in order to get a certification that builds consumer trust in them.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #55", "newsletter_category": "AI governance"}
{"id": "d9dd7f24ce9624b1c74e54ab0cbf6967", "title": "80K podcast: How can policy keep up with AI advances?", "url": "https://80000hours.org/podcast/episodes/openai-askell-brundage-clark-latest-in-ai-policy-and-strategy/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Rob Wiblin", "Jack Clark", "Miles Brundage and Amanda Askell"], "summaries": ["OpenAI policy researchers Jack Clark, Amanda Askell and Miles Brundage cover a large variety of topics relevant to AI policy, giving an outside-view perspective on the field as a whole. A year or two ago, the consensus was that the field required [disentanglement research](https://forum.effectivealtruism.org/posts/RCvetzfDnBNFX7pLH/personal-thoughts-on-careers-in-ai-policy-and-strategy); now, while disentanglement research is still needed, there are more clearly defined important questions that can be tackled independently. People are now also taking action in addition to doing research, mainly by accurately conveying relevant concepts to policymakers. A common thread across policy is the framing of the problem as a large coordination problem, for which an important ingredient of the solution is to build _trust_ between actors.\n\nAnother thread was the high uncertainty over specific details of scenarios in the future, but the emergence of some structural properties that allow us to make progress anyway. This implies that the goal of AI policy should be aiming for _robustness_ rather than _optimality_. Some examples:\n - The [malicious use of AI report](https://maliciousaireport.com/) was broad and high level because each individual example is different and the correct solution depends on the details; a general rule will not work. In fact, Miles thinks that they probably overemphasized how much they could learn from other fields in that report, since the different context means that you quickly hit diminishing returns on what you can learn.\n - None of them were willing to predict specific capabilities over more than a 3-year period, especially due to the steep growth rate of compute, which means that things will change rapidly. Nonetheless, there are structural properties that we can be confident will be important: for example, a trained AI system will be easy to scale via copying (which you can't do with humans).\n - OpenAI's strategy is to unify the fields of capabilities, safety and policy, since ultimately these are all facets of the overarching goal of developing beneficial AI. They aim to either be the main actor developing beneficial AGI, or to help the main actor, in order to be robust to many different scenarios.\n - Due to uncertainty, OpenAI tries to have policy institutions that make sense over many different time horizons. They are building towards a world with formal processes for coordinating between different AI labs, but use informal relationships and networking for now.\n\nAI policy is often considered a field where it is easy to cause harm. They identify two (of many) ways this could happen: first, you could cause other actors to start racing (which you may not even realize, if it manifests as a substantial increase in some classified budget), and second, you could build coordination mechanisms that aren't the ones people want and that work fine for small problems but break once they are put under a lot of stress. Another common one people think about is information hazards. 
While they consider info hazards all the time, they also think that (within the AI safety community) these worries are overblown. Typically people overestimate how important or controversial their opinion is. Another common reason for not publishing is not being sure whether the work meets high intellectual standards, but in this case the conversation will be dominated by people with lower standards.\n\nMiscellaneous other stuff:\n - Many aspects of races can make them much more collaborative, and it is not clear that AI corresponds to an adversarial race. In particular, large shared benefits make races much more collaborative.\n - Another common framing is to treat the military as an adversary, and try to prevent them from gaining access to AI. Jack thinks this is mistaken, since then the military will probably end up developing AI systems anyway, and you wouldn't have been able to help them make it safe.\n - There's also a lot of content at the end about career trajectories and working at OpenAI or the US government, which I won't get into here."], "venue": "80,000 Hours Website", "opinion": "It does seem like building trust between actors is a pretty key part of AI policy. That said, there are two kinds of trust that you can have: first, trust that the statements made by other actors are true, and second, trust that other actors are aligned enough with you in their goals that their success is also your success. The former can be improved by mechanisms like monitoring, software verification, etc., while the latter cannot. The former is often maintained using processes that impose a lot of overhead, while the latter usually does not require much overhead once established. The former can scale to large groups comprising thousands or millions of people, while the latter is much harder to scale. I think it's an open question in AI policy to what extent we need each of these kinds of trust to exist between actors. This podcast seems to focus particularly on the latter kind.\n\nOther miscellaneous thoughts:\n - I think a lot of these views are conditioned on a gradual view of AI development, where there isn't a discontinuous jump in capabilities, and there are many different actors all deploying powerful AI systems.\n - Conditional on the military eventually developing AI systems, it seems worth it to work with them to make their AI systems safer. However, it's not inconceivable that AI researchers could globally coordinate to prevent military AI applications. This wouldn't prevent it from happening eventually, but could drastically slow it down, and let defense scale faster than offense. In that case, working with the military can also be seen as a defection in a giant coordination game with other AI researchers.\n - One of my favorite lines: \"I would recommend everyone who has calibrated intuitions about AI timelines spend some time doing stuff with real robots and it will probably … how should I put this? … further calibrate your intuitions in quite a humbling way.\" (Not that I've worked with real robots, but many of my peers have.)", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #50", "newsletter_category": "AI governance"}
{"id": "8db287f0a8ce67df11d9d18c6eea307b", "title": "Thinking About Risks From AI: Accidents, Misuse and Structure", "url": "https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Remco Zwetsloot", "Allan Dafoe"], "summaries": ["The authors argue that in addition to risk from misuse of AI and \"accidents\", we should pay attention to the structural perspective: how AI changes the broader environment and incentives of various actors. Possible examples include creating winner-take-all competition or creating overlap between offensive and defensive actions. In the face of these effects, even competent and well-intentioned decision-makers might be pressured into making risky choices. To ameliorate this problem, more people should focus on AI policy, particularly social scientists and historians; and we should think hard about creating collective norms and institutions for AI."], "venue": "Lawfare", "opinion": "This post makes an important point in a clear and concise way. My only concern is that \"structural problems\" is such a broad heading that practically anything can be included, making it more difficult to specifically direct attention towards existential threats (the same is true for the term \"accidents\", which to me doesn't properly reflect the threat of adversarial behaviour from AI). I don't know how to best handle this tradeoff, but think it's a point worth raising.\n\n*Rohin's opinion*: I just wanted to add a note on why we've highlighted this piece. While many of the particular concrete examples have been explained before, the underlying system for thinking about AI is new and useful. I particularly liked the distinction made between focusing on _agency_ in AI (which leads you to think about accidents and misuse) vs. thinking about incentives and structure (which leads you to think about the entire causal chain leading up to the moment where an agent causes something bad to happen).", "highlight": true, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #46", "newsletter_category": "AI governance"}
{"id": "7ea82fedac84a0cb782d6f535f09b72c", "title": "OpenAI Charter", "url": "https://blog.openai.com/openai-charter/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Open AI"], "summaries": ["In their words, this is \"a charter that describes the principles we use to execute on OpenAI’s mission\"."], "venue": "OpenAI Blog", "opinion": "I'm very excited by this charter, it's a good sign suggesting that we can get the important actors to cooperate in building aligned AI, and in particular to avoid a competitive race. Key quote: \"if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project\".", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #2", "newsletter_category": "AI governance"}
{"id": "f030223f8e6e81ca47a0917aaeed35a3", "title": "‘Solving for X?’ Towards a problem-finding framework to ground long-term governance strategies for artificial intelligence", "url": "https://www.researchgate.net/profile/Matthijs_Maas/publication/342774816_'Solving_for_X'_Towards_a_problem-finding_framework_to_ground_long-term_governance_strategies_for_artificial_intelligence/links/5fbbd04592851c933f517ad3/Solving-for-X-Towards-a-problem-finding-framework-to-ground-long-term-governance-strategies-for-artificial-intelligence.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Hin-Yan Liu", "Matthijs M. Maas"], "summaries": ["The typical workflow in governance research might go something like this: first, choose an existing problem to work on; second, list out possible governance mechanisms that could be applied to the problem; third, figure out which of these is best. We might call this the _problem-solving_ approach. However, such an approach has several downsides:\n\n1. Such an approach will tend to use existing analogies and metaphors used for that problem, even when they are no longer appropriate.\n2. If there are problems which aren’t obvious given current frameworks for governance, this approach won’t address them.\n3. Usually, solutions under this approach build on earlier, allegedly similar problems and their solutions, leading to path-dependencies in what kind of solutions are being sought. This makes it harder to identify and/or pursue new classes of solutions\n4. It is hard to differentiate between problems that are symptoms vs. problems that are root causes in such a framework, since not much thought is put into comparisons across problems\n5. Framing our job as solving an existing set of problems lulls us into a false sense of security, as it makes us think we understand the situation better than we actually do (“if only we solved these problems, we’d be done; nothing else would come up”).\n\nThe core claim of this paper is that we should also invest in a _problem-finding_ approach, in which we do not assume that we even know what the problem is, and are trying to figure it out in advance before it arises. This distinction between problem-solving and problem-finding is analogous to the distinction between normal science and paradigm-changing science, between exploitation and exploration, and between “addressing problems” and “pursuing mysteries”. Including a problem-finding approach in our portfolio of research techniques helps mitigate the five disadvantages listed above. 
One particularly nice advantage is that it can help avoid the [Collingridge dilemma](https://en.wikipedia.org/wiki/Collingridge_dilemma): by searching for problems in advance, we can control them before they get entrenched in society (when they would be harder to control).\n\nThe authors then propose a classification of governance research, where levels 0 and 1 correspond to problem-solving and levels 2 and 3 correspond to problem-finding:\n\n- **Business as usual** (level 0): There is no need to change the existing governance structures; they will naturally handle any problems that arise.\n- **Puzzle-solving** (level 1): Aims to solve the problem at hand (something like deepfakes), possibly by changing the existing governance structures.\n- **Disruptor-finding** (level 2): Searches for properties of AI systems that would be hard to accommodate with the existing governance tools, so that we can prepare in advance.\n- **Charting macrostrategic trajectories** (level 3): Looks for crucial considerations about how AI could affect the trajectory of the world.\n\nThese are not just meant to apply to AGI. For example, autonomous weapons may make it easier to predict and preempt conflict, in which case rather than very visible drone strikes we may instead have “invisible” high-tech wars. This may lessen the reputational penalties of war, and so we may need to increase scrutiny of, and accountability for, this sort of “hidden violence”. This is a central example of a level 2 consideration.\n\nThe authors note that we could extend the framework even further to cases where governance research fails: at level -1, governance stays fixed and unchanging in its current form, either because reality is itself not changing, or because the governance got locked in for some reason. Conversely, at level 4, we are unable to respond to governance challenges, either because we cannot see the problems at all, or because we cannot comprehend them, or because we cannot control them despite understanding them."], "venue": "Futures 2021", "opinion": "One technique I like a lot is backchaining: starting from the goal you are trying to accomplish, and figuring out what actions or intermediate subgoals would most help accomplish that goal. I’ve spent a lot of time doing this sort of thing with AI alignment. This paper feels like it is advocating the same for AI governance, but also gives a bunch of concrete examples of what this sort of work might look like. I’m hoping that it inspires a lot more governance work of the problem-finding variety; this does seem quite neglected to me right now.\n\nOne important caveat to all of this is that I am not a governance researcher and don’t have experience actually trying to do such research, so it’s not unlikely that even though I think this sounds like good meta-research advice, it is actually missing the mark in a way I failed to see.\n\nWhile I do recommend reading through the paper, I should warn you that it is rather dense and filled with jargon, at least from my perspective as an outsider.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #138", "newsletter_category": "AI governance"}
{"id": "5c75cc582aa294b92357361bdf057a23", "title": "AGI Strategy - List of Resources", "url": "https://docs.google.com/spreadsheets/d/1ojSJFrDsBpLj0_snavF3AQjDImTElYLq33WUiB5x5lg/edit", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["Exactly what it sounds like."], "venue": "Google Docs", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #11", "newsletter_category": "AI governance"}
{"id": "92fa7e7a5cb2a67bbeb915b5b3967863", "title": "Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority", "url": "https://www.cnas.org/publications/reports/technology-roulette", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Richard Danzig"], "summaries": ["US policy so far has been to pursue technological superiority in order to stay ahead of its adversaries and to prevent conflict through deterrence. This paper argues that policymakers should shift some attention to preparing for other risks, such as accidents, emergent effects, sabotage and proliferation (where other actors get and use the technology, without the same safety standards as the US). There were several interesting sections, but the one I was particularly interested in was the section arguing that keeping a human in the loop would not be sufficient. In military situations, decisions must often be made in time-sensitive, high-stress situations, and in such scenarios humans are not very good at making decisions. For example, if an AI system detects an incoming missile, it must autonomously aim and fire to prevent the missile from hitting its target -- there is not enough time for a human to be in the loop. The biggest issue though is that while a human may be part of the decision-making process, they are reliant on various machine readings and calculations in order to reach their decision, and so a human in the loop doesn't provide an independent check on the answer, and so is of limited utility. And as AI systems get better, humans will become less useful for checking the AI's decisions, making this a temporary solution at best."], "venue": "CNAS", "opinion": "I found the paper to be quite compelling, especially the comments on the human-in-the-loop solution. This feels relevant to problems in technical AI alignment, though I'm not exactly sure how. One question that it suggests -- how can we learn human preferences, when the human answers may themselves depend on the AI's actions? Stuart Armstrong has [pointed out](https://agentfoundations.org/item?id=1678) this problem as well.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #10", "newsletter_category": "AI governance"}
{"id": "0ce1229bb4244b4b9a890766fc472ae1", "title": "AI Governance in 2019 - A Year in Review: Observations from 50 Global Experts", "url": "https://www.aigovernancereview.com/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Shi Qian*", "Li Hui*", "Brian Tse*", "others"], "summaries": ["This report contains short essays from 50 experts reviewing progress in AI governance. I’ll describe a few themes here rather than try to summarize each essay. \n\nThe first is a strong emphasis on issues of bias, privacy, deception, and safety. Bias can occur both due to biases of programmers designing algorithms as well as bias that exists in the data. Deception includes deepfakes as well as online accounts that impersonate humans, a subset of which were made illegal in California this year. \n\nThe benefit of international collaborations and conferences and getting broad agreement from many stakeholders both in government and companies was frequently highlighted throughout. One example is the [OECD Principles on AI](https://www.oecd.org/going-digital/ai/principles/), which were later adopted by the G20 including both the US and China, but there were many working groups and committees organized as well, both within industry and governments. \n\nThe other shift in 2019 was moving from broad principles towards more specific sets of requirements and policy decisions. The principles agreed to have been quite similar, but the specific implementations vary significantly by country. There were individual essays describing the regional challenges in Europe, the UK, Japan, Singapore, India, and East Asia. Many essays also highlighted the debate around <@publication norms@>(@GPT-2: 1.5B Release@), which garnered a lot of attention in 2019 following OpenAI’s staged release of GPT-2. "], "venue": "Author's Website", "opinion": "I am very impressed by the number and diversity of experts that contributed to this report. I think it is quite valuable to get people with such different backgrounds and areas of expertise to collaborate on how we should be using AI ahead of time. I was also pleasantly surprised to hear that there was broad international agreement on principles so far, particularly given an overall political trend against global institutions that has occurred recently. I’m definitely interested to know what the key factors were in managing that and how we can make sure these things continue. \n\nAnother piece that jumped out at me is the overlap between longer-term issues of safety and shorter-term issues of bias and privacy. For technical safety work, I think the problems are largely distinct and it is important for safety researchers to remain focused on solving problems with major long-term consequences. However, in the governance context, the problems seem to have much more in common and require many similar institutions / processes to address. So I hope that these communities continue to work together and learn from each other.", "highlight": false, "read_more": "", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #99", "newsletter_category": "AI governance"}
{"id": "7ca92c9ea65f99df2b2eb75e3d0cf7f4", "title": "The Windfall Clause: Distributing the Benefits of AI", "url": "https://www.fhi.ox.ac.uk/windfallclause/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Cullen O’Keefe", "Peter Cihon", "Ben Garfinkel", "Carrick Flynn", "Jade Leung", "Allan Dafoe"], "summaries": ["The Windfall Clause is a proposed policy lever for improving outcomes from transformative AI. Corporations can voluntarily agree to be bound by the clause, in which case they must donate some proportion of _windfall profits_ (profits in excess of e.g. 1% of world GDP) for the benefit of humanity. Since such a scenario is exceedingly unlikely, it can be in corporations' interests to be bound by this clause, in order to reap the benefits of improved public relations. If the scenario actually occurs, we can then use the donations to solve many societal problems that would likely arise, e.g. job loss, inequality, etc."], "venue": "FHI Website", "opinion": "While there are certainly major benefits to the Windfall Clause in the case of an actual windfall, it seems to me like there are benefits even when windfalls do not occur (a point mentioned but not emphasized in the full report). For example, in a world in which everyone has agreed to the Windfall Clause, the incentives to \"win an economic race\" decrease: even if it is possible for e.g. one company to \"win\" via a monopoly on AI, at least a portion of their \"winnings\" must be distributed to everyone else, plausibly decreasing incentives to race, and increasing the likelihood that companies pay attention to safety. (This of course assumes that the clause remains binding even after \"winning\", which is not obviously true.)", "highlight": false, "read_more": "EA Forum summary", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #88", "newsletter_category": "AI governance"}
{"id": "80880e45663225af83fc9a7dc345b354", "title": "How does the offense-defense balance scale?", "url": "https://www.tandfonline.com/doi/full/10.1080/01402390.2019.1631810", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Ben Garfinkel", "Allan Dafoe"], "summaries": ["The offense-defense balance that characterises how easy it is to successfully attack others can affect what kinds of conflicts break out and how often that happens. This paper analyses how growing capabilities on both sides affect that balance. For example, consider an idealized model of cyber defense with a fixed set of vulnerabilities that are discovered independently by attackers and defenders. The attacker will initially be able to use almost all of the vulnerabilities they found. This is because, with only a small percentage of vulnerabilities discovered by both sides, the defender is unlikely to have found the same ones as the attacker. Marginal increases of the defender's capabilities are unlikely to uncover vulnerabilities used by the attacker in this regime, such that attacks become easier as both sides invest resources. Once most vulnerabilities have been found by both sides, this effect reverses as marginal investments by the attacker become unlikely to uncover vulnerabilities the defender has not fixed yet. \n\nThis pattern, where increasingly growing capabilities first favour offense but lead to defensive stability in the long run, dubbed **OD-scaling** seems to be common and can be expected to be found whenever there are **multiple attack vectors**, the attacker only needs to break through on some of them and the defender enjoys **local defense superiority**, meaning that with sufficient coverage by the defender for a given attack vector, it is almost impossible for the attacker to break through. \n\nBecause the use of digital and AI systems can be scaled up quickly, scale-dependent shifts of the offense-defense balance are going to increase in importance as these systems become ubiquitous. "], "venue": "Journal of Strategic Studies", "opinion": "I found it quite surprising that the paper mentions a lack of academic consensus about whether or not offensive advantage is destabilizing. Assuming that it is, OD-scaling might provide a silver lining concerning cybersecurity, provided things can be scaled up sufficiently. These kinds of dynamics also seem to put a natural ceiling on arms races: above a certain threshold, gains in capabilities provide advantage to both sides such that resources are better invested elsewhere.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #77", "newsletter_category": "AI governance"}
{"id": "6faaa6a04b50319537f8c548ab420589", "title": "Why Responsible AI Development Needs Cooperation on Safety", "url": "https://openai.com/blog/cooperation-on-safety/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Amanda Askell", "Miles Brundage", "Jack Clark", "Gillian Hadfield"], "summaries": ["AI systems are increasingly being developed by companies, and as such it is important to understand how competition will affect the safety and robustness of these systems. This paper models companies as agents engaging in a cooperate-defect game, where cooperation represents responsible development, and defection represents a failure to develop responsibly. This model yields five factors that increase the likelihood of companies cooperating on safety. Ideally, companies will have **high trust** that others cooperate on safety, large benefits from mutual cooperation (**shared upside**), large costs from mutual defection (**shared downside**), not much incentive to defect when others cooperate (**low advantage**), and not be harmed too much if others defect when they cooperate (**low exposure**).\n\nThey then suggest four concrete strategies that can help improve norms today. First, companies should help promote accurate beliefs about the benefits of safety. Second, companies should collaborate on research and engineering. Third, companies should be transparent and allow for proper oversight and feedback. Fourth, the community should incentivize adhering to high safety standards by rewarding safety work and penalizing unsafe behavior."], "venue": "OpenAI Blog", "opinion": "Given that much of current AI progress is being driven by increases in computation power, it seems likely to me that companies will soon become more significant players in the AI space. As a result, I appreciate that this paper tries to determine what we can do now to make sure that the competitive landscape is conducive to taking proper safety precautions. I do, however, believe that the single step cooperate-defect game which they use to come up with their factors seems like a very simple model for what will be a very complex system of interactions. For example, AI development will take place over time, and it is likely that the same companies will continue to interact with one another. Iterated games have very different dynamics, and I hope that future work will explore how this would affect their current recommendations, and whether it would yield new approaches to incentivizing cooperation.", "highlight": false, "read_more": "The Role of Cooperation in Responsible AI Development", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #66", "newsletter_category": "AI governance"}
{"id": "7efe2f536c1449810c330a8954b8ac63", "title": "AI Alignment Podcast: China’s AI Superpower Dream", "url": "https://futureoflife.org/2019/08/16/chinas-ai-superpower-dream-with-jeffrey-ding/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lucas Perry and Jeffrey Ding"], "summaries": ["See also <@these@>(@Rationally Speaking #231 - Helen Toner on \"Misconceptions about China and artificial intelligence\"@) <@three@>(@The new 30-person research group in DC investigating how emerging technologies could affect national security@) <@podcasts@>(@FLI Podcast: Beyond the Arms Race Narrative: AI and China@)."], "venue": "FLI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #64", "newsletter_category": "AI governance"}
{"id": "f558fba647271a20ef153a2bb3ca525c", "title": "AGI will drastically increase economies of scale", "url": "https://alignmentforum.org/posts/Sn5NiiD5WBi4dLzaB/agi-will-drastically-increase-economies-of-scale-1", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Wei Dai"], "summaries": ["Economies of scale would normally mean that companies would keep growing larger and larger. With human employees, the coordination costs grow superlinearly, which ends up limiting the size to which a company can grow. However, with the advent of AGI, many of these coordination costs will be removed. If we can align AGIs to particular humans, then a corporation run by AGIs aligned to a single human would at least avoid principal-agent costs. As a result, the economies of scale would dominate, and companies would grow much larger, leading to more centralization."], "venue": "Alignment Forum", "opinion": "This argument is quite compelling to me under the assumption of human-level AGI systems that can be intent-aligned. Note though that while the development of AGI systems removes principal-agent problems, it doesn't remove issues that arise due to different agents having different (non-value-related) information.\n\nThe argument probably doesn't hold with <@CAIS@>(@Reframing Superintelligence: Comprehensive AI Services as General Intelligence@), where each AI service is optimized for a particular task, since there would be principal-agent problems between services.\n\nIt seems like the argument should mainly make us more worried about stable authoritarian regimes: the main effect based on this argument is a centralization of power in the hands of the AGI's overseers. This is less likely to happen with companies, because we have institutions that prevent companies from gaining too much power, though perhaps competition between countries could weaken such institutions. It could happen with government, but if long-term governmental power still rests with the people via democracy, that seems okay. So the risky situation seems to be when the government gains power, and the people no longer have effective control over government. (This would include scenarios with e.g. a government that has sufficiently good AI-fueled propaganda that they always win elections, regardless of whether their governing is actually good.)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #59", "newsletter_category": "AI governance"}
{"id": "4189e00b7e26dd8565d6797dbf9fa933", "title": "Beijing AI Principles", "url": "https://baip.baai.ac.cn/en", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["These principles are a collaboration between Chinese academia and industry, and hit upon many of the problems surrounding AI discussed today, including fairness, accountability, transparency, diversity, job automation, responsibility, ethics, etc. Notably for long-termists, it specifically mentions control risks, AGI, superintelligence, and AI races, and calls for international collaboration in AI governance."], "venue": "", "opinion": "", "highlight": false, "read_more": "Beijing publishes AI ethical standards, calls for int'l cooperation", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #57", "newsletter_category": "AI governance"}
{"id": "f87168fda63259c123bc87c4303b3a0f", "title": "How Sure are we about this AI Stuff?", "url": "https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/how-sure-are-we-about-this-ai-stuff", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Ben Garfinkel"], "summaries": ["Ben outlines four broad arguments for prioritising work on superintelligent AGI: that AI will have a big influence over the long-term future, and more specifically that it might cause instability, lock-in or large-scale \"accidents\". He notes the drawbacks of each line of argument. In particular, the \"AI is a big deal\" argument doesn't show that we have useful leverage over outcomes (compare a Victorian trying to improve the long-term effects of the industrial revolution). He claims that the next two arguments have simply not been researched thoroughly enough to draw any conclusions. And while the argument from accidents has been made by Bostrom and Yudkowsky, there hasn't been sufficient elaboration or criticism of it, especially in light of the recent rise of deep learning, which reframes many ideas in AI."], "venue": "EA Forum", "opinion": "I find this talk to be eminently reasonable throughout. It highlights a concerning lack of public high-quality engagement with the fundamental ideas in AI safety over the last few years, relative to the growth of the field as a whole (although note that in the past few months this has been changing, with three excellent sequences released on the Alignment Forum, plus Drexler's technical report). This is something which motivates me to spend a fair amount of time writing about and discussing such ideas.\n\nOne nitpick: I dislike the use of \"accidents\" as an umbrella term for AIs behaving in harmful ways unintended by their creators, since it's misleading to describe deliberately adversarial behaviour as an \"accident\" (although note that this is not specific to Ben's talk, since the terminology has been in use at least since the Concrete problems paper).", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #45", "newsletter_category": "AI governance"}
{"id": "a12eb6a007068fc1a20a045dc94ad07a", "title": "AI Index 2018 Report", "url": "http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Yoav Shoham", "Raymond Perrault", "Erik Brynjolfsson", "Jack Clark", "James Manyika", "Juan Carlos Niebles", "Terah Lyons", "John Etchemendy", "Barbara Grosz"], "summaries": ["Lots of data about AI. The report highlights how AI is global, the particular improvement in natural language understanding over the last year, and the limited gender diversity in the classroom. We also see the expected trend of huge growth in AI, both in terms of interest in the field as well as in performance metrics."], "venue": "AI Index website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #39", "newsletter_category": "AI governance"}
{"id": "36f59d35c8c295be2a104a274980b4d8", "title": "AI Now 2018 Report", "url": "https://ainowinstitute.org/AI_Now_2018_Report.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Meredith Whittaker", "Kate Crawford", "Roel Dobbe", "Genevieve Fried", "Elizabeth Kaziunas", "Varoon Mathur", "Sarah Myers West", "Rashida Richardson", "Jason Schultz", "Oscar Schwartz"], "summaries": ["See [Import AI](https://jack-clark.net/2018/12/31/import-ai-127-why-language-ai-advancements-may-make-google-more-competitive-coco-image-captioning-systems-dont-live-up-to-the-hype-and-amazon-sees-3x-growth-in-voice-shopping-via-alexa/)"], "venue": "AI Now website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #39", "newsletter_category": "AI governance"}
{"id": "cd2b2cebdeceb6e730a38891a93b3095", "title": "AI development incentive gradients are not uniformly terrible", "url": "https://www.lesswrong.com/posts/bkG4qj9BFEkNva3EX/ai-development-incentive-gradients-are-not-uniformly", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["rk"], "summaries": ["This post considers a model of AI development somewhat similar to the one in [Racing to the precipice](https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf) paper. It notes that under this model, assuming perfect information, the utility curves for each player are _discontinuous_. Specifically, the models predict deterministically that the player that spent the most on something (typically AI capabilities) is the one that \"wins\" the race (i.e. builds AGI), and so there is a discontinuity at the point where the players are spending equal amounts of money. This results in players fighting as hard as possible to be on the right side of the discontinuity, which suggests that they will skimp on safety. However, in practice, there will be some uncertainty about which player wins, even if you know exactly how much each is spending, and this removes the discontinuity. The resulting model predicts more investment in safety, since buying expected utility through safety now looks better than increasing the probability of winning the race (whereas before, it was compared against changing from definitely losing the race to definitely winning the race)."], "venue": "LessWrong", "opinion": "The model in [Racing to the precipice](https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf) had the unintuitive conclusion that if teams have _more_ information (i.e. they know their own or other’s capabilities), then we become _less_ safe, which puzzled me for a while. Their explanation is that with maximal information, the top team takes as much risk as necessary in order to guarantee that they beat the second team, which can be quite a lot of risk if the two teams are close. While this is true, the explanation from this post is more satisfying -- since the model has a discontinuity that rewards taking on risk, anything that removes the discontinuity and makes it more continuous will likely improve the prospects for safety, such as not having full information. I claim that in reality these discontinuities mostly don't exist, since (1) we're uncertain about who will win and (2) we will probably have a multipolar scenario where even if you aren't first-to-market you can still capture a lot of value. This suggests that it likely isn't a problem for teams to have more information about each other on the margin.\n\nThat said, these models are still very simplistic, and I mainly try to derive qualitative conclusions from them that my intuition agrees with in hindsight.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "Racing to the precipice: a model of artificial intelligence development", "converted_with": "python", "newsletter_number": "AN #33", "newsletter_category": "AI governance"}
{"id": "00aa3ea643e9ca4e7250a6985179dacc", "title": "The Future of Surveillance", "url": "https://www.effectivealtruism.org/articles/ea-global-2018-the-future-of-surveillance/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Ben Garfinkel"], "summaries": ["While we often think of there being a privacy-security tradeoff and an accountability-security tradeoff with surveillance, advances in AI and cryptography can make advances on the Pareto frontier. For example, automated systems could surveil many people but only report a few suspicious cases to humans, or they could be used to redact sensitive information (eg. by blurring faces), both of which improve privacy and security significantly compared to the status quo. Similarly, automated ML systems can be applied consistently to every person, can enable collection of good statistics (eg. false positive rates), and are more interpretable than a human making a judgment call, all of which improve accountability."], "venue": "EA Global 2018", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #28", "newsletter_category": "AI governance"}
{"id": "88f26c28228569969b49bcf49474ab4c", "title": "Debunking the AI Arms Race Theory", "url": "https://tnsr.org/2021/06/debunking-the-ai-arms-race-theory/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Paul Scharre"], "summaries": ["This article, published recently in the Texas National Security Review, argues that various national trends of military spending on AI do not meet the traditional definition of an 'arms race'. However, the current situation can be termed a _security dilemma_, a \"more generalized competitive dynamic between states.\" The article identifies two ways in which race-style dynamics in AI competition towards the aims of national security might create new risks: (i) a need for increasingly rapid decision-making might leave humans with diminished control or 'out of the loop'; and (ii) the pressure to quickly improve military AI capabilities could result in sacrificing supplementary goals like robustness and reliability, leading to unsafe systems being deployed. \n\nThe article offers the following strategies as panaceas to such dynamics. Competing nations should institute strong internal processes to ensure their systems are robust and secure, and that human control can be maintained. Further, nations should encourage other countries to take similar steps to mitigate these risks within their own militaries. Finally, nations should cooperate in regulating the conduct of war to avoid mutual harm. It concludes after citing several sources that advocate for the US to adopt these strategies."], "venue": "Texas National Security Review", "opinion": "I think the headline was chosen by the editor and not the author: the AI arms race 'debunking' is less than a fourth of the whole article, and it's not even an important beat of the piece; instead, the article is about how use of technology/AI/deep learning for military applications in multipolar geopolitics *can actually result* in arms-race-style dynamics and tangible risks.\n\nEven so, I'm not convinced that the traditional definition of 'arms race' isn't met. The author invokes percentage _growth_ in military spending of more than 10% over the previous year as a qualifying criterion for an arms race, but then compares this with the actual spending of 0.7% of the US military budget on AI in 2020 to make their case that there is no arms race. These two are not comparable; at the very least, we would need to know the actual spending on AI by the military across two years to see at what rate this spending changed, and whether or not it then qualifies to be an arms race.", "highlight": false, "read_more": "", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #155", "newsletter_category": "AI governance"}
{"id": "494bd648093104d6b9e36155ec790ea1", "title": "NSCAI Final Report", "url": "https://reports.nscai.gov/final-report/table-of-contents/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Eric Schmidt", "Robert Work", "Safra Catz", "Eric Horvitz", "Steve Chien", "Andrew Jassy", "Mignon Clyburn", "Gilman Louie", "Chris Darby", "William Mark", "Kenneth Ford", "Jason Matheny", "José-Marie Griffiths", "Katharina McFarland", "Andrew Moore"], "summaries": ["In the US, the National Security Commission on AI released their report to Congress. The [full pdf](https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf) is over 750 pages long, so I have not read it myself, and instead I’m adding in some commentary from others. In their [newsletter](https://cset.georgetown.edu/newsletters/), CSET says that highlights include:\n\n- A warning that the U.S. military could be at a competitive disadvantage within the next decade if it does not accelerate its AI adoption. The report recommends laying the foundation for widespread AI integration by 2025, comprising a DOD-wide digital ecosystem, a technically literate workforce, and more efficient business practices aided by AI.\n- A recommendation that the White House establish a new “Technology Competitiveness Council,” led by the vice president, to develop a comprehensive technology strategy and oversee its implementation.\n- A recommendation that the U.S. military explore using autonomous weapons systems, provided their use is authorized by human operators.\n- A proposal to establish a new Digital Service Academy and a civilian National Reserve to cultivate domestic AI talent.\n- A call to provide $35 billion in federal investment and incentives for domestic semiconductor manufacturing.\n- A recommendation to double non-defense AI R&D funding annually until it reaches $32 billion per year, and to triple the number of National AI Research Institutes.\n- A call for reformed export controls, coordinated with allies, on key technologies such as high-end semiconductor manufacturing equipment.\n- A recommendation that Congress pass a second National Defense Education Act and reform the U.S. immigration system to attract and retain AI students and workers from abroad.\n\nWhile none of the report’s recommendations are legally binding, it has [reportedly been well-received by key members of both parties](https://apnews.com/article/ai-panel-urges-us-boost-tech-skills-95b210543d4a42bd6cd5347a46cb74d6).\n\nMatthew van der Merwe also summarizes the recommendations in [Import AI](https://jack-clark.net/2021/03/08/import-ai-239-china-trains-a-massive-10b-model-vicarious-does-pickplace-the-gchq-publishes-some-of-its-thoughts-on-ai/); this has a lot of overlap with the CSET summary so I won't copy it here.\n\nJeff Ding adds in [ChinAI #134](https://chinai.substack.com/p/chinai-134-weaponized-interdependence):\n[I]f you make it past the bluster in the beginning — or take it for what it is: obligatory marketing to cater to a DC audience hooked on a narrow vision of national security — there’s some smart moderate policy ideas in the report (e.g. chapter 7 on establishing justified confidence in AI systems).\n\nIn email correspondence, Jon Rodriguez adds some commentary on the safety implications:\n\n1. The report acknowledges the potential danger of AGI, and specifically calls for value alignment research to take place (pg. 36). 
To my knowledge, this is one of the first times a leading world government has called for value alignment.\n2. The report makes a clear statement that the US prohibits AI from authorizing the launch of nuclear weapons (pg. 98).\n3. The report calls for dialogues with China and Russia to ensure that military decisions made by military AI at \"machine speed\" does not lead to out-of-control conflict escalation which humans would not want (pg. 97)."], "venue": "National Security Commission on Artificial Intelligence", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #141", "newsletter_category": "AI governance"}
{"id": "1652691964d18c3848b45bec7a923087", "title": "Why those who care about catastrophic and existential risk should care about autonomous weapons", "url": "https://forum.effectivealtruism.org/posts/oR9tLNRSAep293rr5/why-those-who-care-about-catastrophic-and-existential-risk-2", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Anthony Aguirre"], "summaries": ["This post argues for a focus on autonomous weapons systems (AWs) for three main reasons:\n\n**AWs Provide a Trial Run for AGI governance**. Governance of AWs shares many properties with AGI safety. Preventing an AW arms race would require international cooperation that would provide a chance to understand and improve AI governance institutions. As with any AI system, AWs have the potential to be *effective* without necessarily being aligned with human values, and accidents could quickly lead to deadly consequences. Public opinion and the vast majority of AI researchers oppose AW arms races, so there is an opportunity for global coordination on this issue. \n\n**Some AWs can directly cause catastrophic risk**. Cheap drones could potentially be created at scale that are easy to transport and hard to detect. This could enable an individual to kill many people without the need to convince many others that it is justified. They can discriminate targets better than other WMDs and cause less environmental damage. This has the potential to make war less harmful, but also makes it easier to justify.\n\n**AWs increase the likelihood and severity of conflict** by providing better tools for terrorists and assassins, lowering the threshold for violence between and within states, upsetting the relative power balance of current militaries, and increasing the likelihood of accidental escalation. In particular, AWs that are being used to counter other AWs might intentionally be made hard to understand and predict, and AWs may react to each other at timescales that are too quick for humans to intervene or de-escalate. \n\nAn international agreement governing autonomous weapons could help to alleviate the above concerns. In particular, some classes of weapons could be banned, and others could be tracked and subjected to regulations. This would hopefully lead us to an equilibrium where offensive AWs are prohibited, but defended against in a stable way."], "venue": "EA Forum", "opinion": "I agree completely with the first two points. Much of technical safety work has been based around solving currently existing analogs of the alignment problem. Governance does seem to have less of these, so autonomous weapon governance could provide a great opportunity to test and build credibility for AI governance structures. The ability for autonomous weapons to cause catastrophic risk seems hard to argue against. With powerful enough AI, even accidents can pose catastrophic risk, but I would expect military use to only increase those.\n\nFor the third point, I agree with the reasons provided, but I think there are also ways in which AWs may reduce the likelihood and severity of war. For instance, currently soldiers bear most of the risk in wars, whereas decision-makers are often protected. Targeted AW attacks may increase the relative risk for those making decisions and thus disincentivize them from declaring war. An equilibrium of AW mutually assured destruction might also be attained if we can find reliable ways to attribute AW attacks and selectively retaliate. 
I’d be interested to see a more extensive analysis of how these and other factors trade off as I am unsure of the net effect.\n\nThe piece that gives me the most doubt that this is an area for the x-risk community to focus on is tractability. An international agreement runs the risk of weakening the states that sign on without slowing the rate of AW development in countries that don’t. Getting all actors to sign on seems intractable to me. As an analogy, nuclear weapons proliferation has been a challenge and nuclear weapons development is much more complex and visible than development of AWs.\n\n**Rohin's opinion:** I particularly liked this piece because it actually made the case for work on autonomous weapons -- I do not see such work as obviously good (see for example [this post](https://forum.effectivealtruism.org/posts/vdqBn65Qaw77MpqXz/on-ai-weapons) that I liked, for the perspective against banning autonomous weapons). I still feel pretty uncertain overall, but I think this post meaningfully moved the debate forward.", "highlight": false, "read_more": "", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #132", "newsletter_category": "AI governance"}
{"id": "28aa91b903779ad0ec6da7cd760de42d", "title": "Society-in-the-loop: programming the algorithmic social contract", "url": "https://link.springer.com/content/pdf/10.1007/s10676-017-9430-8.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Iyad Rahwan"], "summaries": ["Earlier in this newsletter we saw arguments that we should not build AI systems that are maximizing “humanity’s aggregated preferences”. Then how else are we supposed to build AI systems that work well for _society as a whole_, rather than an individual human? When the goal of the system is uncontested (e.g. “don’t crash”), we can use human-in-the-loop (HITL) algorithms where the human provides oversight; this paper proposes that for contested goals (e.g. “be fair”) we should put society in the loop (SITL), through _algorithmic social contracts_.\n\nWhat is a social contract? A group of stakeholders with competing interests have a (non-algorithmic) social contract when they “agree” to allow use of force or social pressure to enforce some norm that guards people’s rights and punishes violators. For example, we have a social contract against murder, which legitimates the use of force by the government in order to punish violators.\n\nIn an algorithmic social contract, the norms by which the AI system operates, and the goals which it pursues, are determined through typical social contracts amongst the group of stakeholders that care about the AI system’s impacts. Notably, these goals and norms can change over time, as the stakeholders see what the AI system does. Of course, this all happens on relatively long timescales; more immediate oversight and control of the AI system would have to be done by specific humans who are acting as _delegates_ of the group of stakeholders.\n\nThe paper then goes into many open challenges for creating such algorithmic social contracts: How does society figure out what goals the AI system should pursue? How do we deal with externalities and tradeoffs? How can these fuzzy values be translated into constraints on the AI system? It provides an overview of some approaches to these problems."], "venue": "", "opinion": "I really like the notion of an algorithmic social contract: it much better captures my expectation of how AI systems will be integrated into society. With this vocabulary, I would put technical AI alignment research squarely in the last category, of how we translate fuzzy values that society agrees on into constraints on the AI system’s behavior.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #131", "newsletter_category": "AI governance"}
{"id": "3b14ccb897002378cbe622aa626e5ea4", "title": "AI Benefits", "url": "https://cullenokeefe.com/ai-benefits-index", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Cullen O'Keefe"], "summaries": ["This sequence of posts investigates _AI Benefits_: how a benefactor can leverage advanced AI systems to benefit humanity. It focuses on what can be done by a single benefactor, outside of what we might think of as the “norm” -- in particular, the sequence ignores benefits that would be provided by default market incentives. This is relevant to OpenAI (where the author works) given their focus on ensuring AI is beneficial to humanity.\n\nNote that AI Benefits is distinct from AI alignment. Sometimes AI alignment is defined broadly enough to encompass AI Benefits, but often it is not, e.g. if the notion of being “aligned” depends on an AI system being aligned with some principal, that would not be AI Benefits, since AI Benefits are meant to accrue to all of humanity. While it is about maximizing well-being by default, it should also have secondary goals of equality, autonomy, democratization, and epistemic modesty.\n\nThe obvious approach to AI Benefits is the _direct_ approach: figuring out how to apply advanced AI to directly generate benefits for humanity, e.g. by producing electricity more efficiently to mitigate climate change. However, it is important to also consider the indirect approach of making money using AI, and then donating the surplus to a different organization that can better produce benefits.\n\nGiven the massive number of potential ways to benefit humanity and our uncertainty about their efficacy, it is important to have a portfolio approach to AI Benefits, rather than scaling up a single intervention. In addition, since any given intervention will probably primarily benefit some subset of humanity, a portfolio approach should help lead to more equal distribution of benefits.\n\nThere are many outstanding questions on how AI Benefits should be done in practice. Should the benefactor pursue a direct or indirect approach? To what extent should they explore potential approaches for generating benefits, relative to exploiting approaches that we know work? Should they generate benefits now, or invest in the ability to generate benefits later? Should they focus on global (supranational) approaches, or allocate resources to each nation that can be used in a manner specialized to their citizens?\n\nThere are many questions on the governance side as well. We will presumably want some Benefits Group involving external experts to help distribute benefits optimally. When should such a group get democratic input? How do we evaluate such a group to ensure they are actually benefiting humanity optimally? To what extent will we also need internal governance within the group and benefactor, and how can this be done?"], "venue": "Author's Website", "opinion": "AI Benefits is effectively asking how we can answer the question of how to do the most good in the future, and as such many of the considerations also come up in effective altruism, especially at the current high level of abstraction. Nonetheless, there are differences in the situation, which will matter: for example, the effective altruism community does not currently need to plan for the situation where they control a majority of the world’s resources; a sufficiently ambitious and optimistic AI company may need to. 
Such a situation vastly increases the importance of e.g. democratic input, portfolio approaches, and information value. I’m glad that these questions are being tackled now and look forward to seeing more details in the future.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #110", "newsletter_category": "AI governance"}
{"id": "e423cbbe1c992d0789f0eebabfede5b4", "title": "Machine Learning and Artificial Intelligence: how do we make sure technology serves the open society?", "url": "https://www.ditchley.com/events/past-events/2010-2019/2017/machine-learning-and-artificial-intelligence-how-do-we-make-sure", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["A report from a workshop held in the UK in December 2017, where a diverse group of people (\"omputer scientists, entrepreneurs, business leaders, politicians, journalists, researchers and a pastor\") talked about the \"impact of AI on societies, governments and the relations between states, and between states and companies\". I'm glad to see more thought being put into AI policy, and especially what the effects of AI may be on world stability."], "venue": "", "opinion": "As we might expect, the report raises more questions than answers. If you think a lot about this space, it is worth reading -- it proposes a classification of problems that's different from ones I've seen before -- but I really only expect it to be useful to people actively doing research in this space.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "Recon #5", "newsletter_category": "AI governance"}
{"id": "28433ca94ac6857ea6a36f7cab78a08d", "title": "Commoditisation of AI, digital forgery and the end of trust: how we can fix it", "url": "https://giorgiop.github.io/posts/2018/03/17/AI-and-digital-forgery/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Giorgio Patrini", "Simone Lini", "Hamish Ivey-Law and Morten Dahl."], "summaries": ["This blog post talks about the many issues that could arise as digital forgery of audio and video gets better, and also talks about potential solutions and their pitfalls, such as digital signatures, and training ML models that can distinguish between real and fake media. He even has a simple but well-done experiment that looks at the ability of an ML model to see whether or not a face swap has happened."], "venue": "", "opinion": "If you haven't thought about digital forgery and its implications before, I strongly recommend it; otherwise I'd recommend you only read the section titled \"A weekend experiment.\"", "highlight": false, "read_more": "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "Recon #5", "newsletter_category": "AI governance"}
{"id": "35935727f83b8cf577ecd7cc11542b42", "title": "‘Skynet’ Revisited: The Dangerous Allure of Nuclear Command Automation", "url": "https://www.armscontrol.org/act/2020-04/features/skynet-revisited-dangerous-allure-nuclear-command-automation", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Michael T. Klare"], "summaries": ["While I won't summarize this article in full here, I found it useful to see how some academics are thinking about the risks of automation in the military, as well as to get a picture of what current automation efforts actually look like. One quote I found particularly interesting:\n\n“You will find no stronger proponent of integration of AI capabilities writ large into the Department of Defense,” said Lieutenant General Jack Shanahan, director of the Joint Artificial Intelligence Center (JAIC), at a September 2019 conference at Georgetown University, “but there is one area where I pause, and it has to do with nuclear command and control.” Referring to [an] article’s assertion that an automated U.S. nuclear launch ability is needed, he said, “I read that. And my immediate answer is, ‘No. We do not.’”"], "venue": "Arms Control Today", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #96", "newsletter_category": "AI governance"}
{"id": "2121748096d3222eee99847cb3cb51d2", "title": "Activism by the AI Community: Analysing Recent Achievements and Future Prospects", "url": "https://www.cser.ac.uk/resources/activism-ai-community-analysing-recent-achievements-and-future-prospects/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Haydn Belfield"], "summaries": ["The AI community has been surprisingly effective at activism: it has led to discussions of a ban on lethal autonomous weapons systems (LAWS), created several initiatives on safety and ethics, and has won several victories through organizing (e.g. Project Maven). What explains this success, and should we expect it to continue in the future? This paper looks at this through two lenses.\n\nFirst, the AI community can be considered an _epistemic community_: a network of knowledge-based experts with coherent beliefs and values on a relevant topic. This seems particularly relevant for LAWS: the AI community clearly has relevant expertise to contribute, and policymakers are looking for good technical input. From this perspective, the main threats to future success are that the issues (such as LAWS) become less novel, that the area may become politicized, and that the community beliefs may become less cohesive.\n\nSecond, the AI community can be modeled as organized labor (akin to unions): since there is high demand for AI researchers, and their output is particularly important for company products, and the companies are more vulnerable to public pressure, AI researchers wield a lot of soft power when they are united. The main threat to this success is the growing pool of talent that will soon be available (given the emphasis on training experts in AI today), which will reduce the supply-demand imbalance, and may reduce how commited the AI community as a whole is to collective action.\n\nOverall, it seems that the AI community has had good success at activism so far, but it is unclear whether it will continue in the future."], "venue": "CSER Website", "opinion": "I think the ability of the AI community to cause things to happen via activism is quite important: it seems much more likely that if AI x-risk concerns are serious, we will be able to convince the AI community of them, rather than say the government, or company executives. This mechanism of action seems much more like the \"epistemic community\" model used in this paper: we would be using our position as experts on AI to convince decision makers to take appropriate precautions with sufficiently powerful AI systems. Applying the discussion from the paper to this case, we get the perhaps unsurprising conclusion that it is primarily important that we build consensus amongst AI researchers about how risky any particular system is.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #90", "newsletter_category": "AI governance"}
{"id": "28f65768628b10530baace1fedbbeb7c", "title": "Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society", "url": "https://www.cser.ac.uk/resources/beyond-near-long-term/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Carina Prunkl and Jess Whittlestone"], "summaries": ["This paper argues that the existing near-term / long-term distinction conflates four different axes on which research could differ: the capability level of AI systems (current pattern-matching systems vs. future intelligent systems), the impacts of AI systems (impacts that are being felt now like fairness vs. ones that will be felt in the future like x-risks), certainty (things that will definitely be problems vs. risks that are more speculative) and extremity (whether to prioritize particularly extreme risks). While there are certainly correlations across these axes, they are not the same thing, and discourse would be significantly improved by disambiguating the axes. For example, both authors of the paper see their work as considering the medium-to-long-term impacts of near-to-medium-term AI capabilities."], "venue": "CSER Website", "opinion": "I definitely agree that near-term and long-term often seem to mean many different things, and I certainly support efforts to be more precise in our language.\n\nWhile we're talking about near-term and long-term, I'll add in my own gripe: \"long-term\" implies that the effects will be felt only in the far future, even though many people focused on such effects are doing so because there's a significant probability of such effects being felt in only a few decades.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #90", "newsletter_category": "AI governance"}
{"id": "b0c3b6ac118d1d5da5b660d88a9c7adf", "title": "How a Pentagon Contract Became an Identity Crisis for Google", "url": "https://www.nytimes.com/2018/05/30/technology/google-project-maven-pentagon.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Scott Shane", "Cade Metz and Daisuke Wakabayashi"], "summaries": ["After Google accepted a share of the contract for the Maven program run by the Defense Department, Google has been internally fractured, with many employees strongly opposing the use of AI for military applications."], "venue": "New York Times", "opinion": "Stories like this make me optimistic that we can actually coordinate AI researchers to take appropriate safety precautions when developing advanced AI systems, even if the economic incentives point in the other direction (and I'm not sure they do).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #9", "newsletter_category": "AI governance"}
{"id": "014e1793c7e9a19e117a61c9a9fe8f6e", "title": "AI Alignment Podcast: On the Long-term Importance of Current AI Policy", "url": "https://futureoflife.org/2020/02/17/on-the-long-term-importance-of-current-ai-policy-with-nicolas-moes-and-jared-brown/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Lucas Perry", "Nicolas Moës and Jared Brown"], "summaries": ["While this podcast focuses both on the details of current policy as well as the long-term impact of engaging in policy today, I'm mostly interested in the latter, and so will simply quote Lucas's summary of points for that part:\n\n1) Experience gained on short-term AI policy issues is important to be considered a relevant advisor on long-term AI policy issues coming up in the future.\n2) There are very few people that care about AGI safety currently in government, politics or in policy communities.\n3) There are opportunities to influence current AI policy decisions in order to provide a fertile ground for future policy decisions or, better but rarer, to be directly shaping AGI safety policy today through evergreen texts. Future policy that is implemented is path dependent on current policy that we implement today. What we do now is precedent setting.\n4) There are opportunities today to develop a skillset useful for other policy issues and causes.\n5) Little resource is being spent on this avenue for impact, so the current return on investment is quite good."], "venue": "FLI Website", "opinion": "I think quite a lot about points 1 and 3, which I think also apply to technical safety research, not just policy. For our research to have an impact, it is necessary that either the research or its authors have enough credibility to actually influence decision-makers. In addition, the problems we will face in the future could depend on technical work done today: for example, if we were convinced that (say) AIs trained via evolution are too risky, we could push for AI to be developed in other ways now.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #89", "newsletter_category": "AI governance"}
{"id": "7edb97ad9316d993f7586b072aa7e022", "title": "FLI Podcast: Distributing the Benefits of AI via the Windfall Clause", "url": "https://futureoflife.org/2020/02/28/distributing-the-benefits-of-ai-via-the-windfall-clause-with-cullen-okeefe/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Lucas Perry and Cullen O’Keefe"], "summaries": ["[Last week](https://mailchi.mp/9d279b575b1a/an-88-how-the-principal-agent-literature-relates-to-ai-risk), we had a brief summary of the [Windfall Clause](https://www.fhi.ox.ac.uk/windfallclause/) paper. This podcast goes into more depth about the potential benefits and objections to this clause: it's in some sense a more accessible and conversational elaboration of many of the points made in the paper."], "venue": "FLI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #89", "newsletter_category": "AI governance"}
{"id": "6d706e3e2d2c1442b851d566aedfa8f2", "title": "Who owns artificial intelligence? A preliminary analysis of corporate intellectual property strategies and why they matter", "url": "https://www.fhi.ox.ac.uk/wp-content/uploads/Patents_-FHI-Working-Paper-Final-.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Nathan Calvin", "Jade Leung"], "summaries": ["This paper analyzes intellectual property (IP) considerations as they relate to AI. They identify two main incentives for companies: first, to publish AI research openly in order to attract top talent, and second, holding enough patents that they can credibly threaten to sue other companies for patent infringement. This second criterion lets companies stay in a mutually-assured-destruction (MAD) scenario, where if any one company litigates for patent infringement, they will quickly be met with a countersuit, and so the (fragile) equilibrium is to avoid litigation. They also identify two incentives for governments: first, to provide patents as a financial incentive for innovation in order to incentivize research, and second, to allow their own national security apparatus to use state of the art research while keeping it secret from perceived rivals.\n\nBased on this analysis, they propose three scenarios that could unfold in the future. First, the status quo continues, in which companies keep acquiring patents in order to maintain the MAD equilibrium. Second, the equilibrium breaks, with one company litigating that then causes all the other companies to also litigate. This could result in most research becoming secret, in order to ensure that other companies can't \"steal\" the work and get a patent first. Similarly, contributions to open-source research might decrease, as it would be particularly easy to use such contributions as evidence of patent infringement. Third, more \"patent pools\" get created, in which multiple companies pool their patents together, to reduce the risk of litigation. Such patent pools could also be used to enforce other principles: with a sufficiently large patent pool, it could be the case that in order to remain competitive actors must license from the patent pool, and such licensing agreements could enforce specific ethical principles (although it would have to be careful to avoid violating antitrust law)."], "venue": "FHI Website", "opinion": "I enjoyed this paper; it seems good to have a better picture of the potential future of openness in AI research, for the reasons given in [Strategic Implications of Openness in AI Development](https://www.nickbostrom.com/papers/openness.pdf). You could also imagine patent pools as a vehicle for safety, as they are one possible way by which companies can cooperate to ensure a shared commitment to safety (along the lines of <@OpenAI's charter@>(@OpenAI Charter@)): they could tie competitiveness (which requires use of the research protected by the patent pool) to safety (the conditions involved in licensing the research in the patent pool).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #88", "newsletter_category": "AI governance"}
{"id": "c2f72407b563487d36479a7d2134f693", "title": "France, China, and the EU All Have an AI Strategy. Shouldn’t the US?", "url": "https://www.wired.com/story/the-us-needs-an-ai-strategy/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["John K. Delaney"], "summaries": ["See [Import AI](https://jack-clark.net/2018/05/22/import-ai-95-learning-to-predict-and-avoid-internet-arguments-with-deep-learning-white-house-announces-select-committee-on-ai-and-bmw-trains-cars-to-safely-change-lanes/)"], "venue": "Wired", "opinion": "", "highlight": false, "read_more": "FUTURE of AI Act", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #8", "newsletter_category": "AI governance"}
{"id": "745c1b7c60e74940c5bcc5ad6d3aa0ac", "title": "2018 White House Summit on Artificial Intelligence for American Industry", "url": "https://www.whitehouse.gov/wp-content/uploads/2018/05/Summary-Report-of-White-House-AI-Summit.pdf?latest", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["White House OSTP"], "summaries": ["See [Import AI](https://jack-clark.net/2018/05/22/import-ai-95-learning-to-predict-and-avoid-internet-arguments-with-deep-learning-white-house-announces-select-committee-on-ai-and-bmw-trains-cars-to-safely-change-lanes/)"], "venue": "", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #8", "newsletter_category": "AI governance"}
{"id": "8f0973a94a5fec863582974d3edc9171", "title": "AI Alignment Podcast: Machine Ethics and AI Governance", "url": "https://futureoflife.org/2019/11/15/machine-ethics-and-ai-governance-with-wendell-wallach/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lucas Perry and Wendell Wallach"], "summaries": ["Machine ethics has aimed to figure out how to embed ethical reasoning in automated systems of today. In contrast, AI alignment starts from an assumption of intelligence, and then asks how to make the system behave well. Wendell expects that we will have to go through stages of development where we figure out how to embed moral reasoning in less intelligent systems before we can solve AI alignment.\n\nGenerally in governance, there's a problem that technologies are easy to regulate early on, but that's when we don't know what regulations would be good. Governance has become harder now, because it has become very crowded: there are more than 53 lists of principles for artificial intelligence and lots of proposed regulations and laws. One potential mitigation would be **governance coordinating committees**: a sort of issues manager that keeps track of a field, maps the issues and gaps, and figures out how they could be addressed.\n\nIn the intermediate term, the worry is that AI systems are giving increasing power to those who want to manipulate human behavior. In addition, job loss is a real issue. One possibility is that we could tax corporations relative to how many workers they laid off and how many jobs they created.\n\nThinking about AGI, governments should probably not be involved now (besides perhaps funding some of the research), since we have so little clarity on what the problem is and what needs to be done. We do need people monitoring risks, but there’s a pretty robust existing community doing this, so government doesn't need to be involved."], "venue": "FLI Website", "opinion": "I disagree with Wendell that current machine ethics will be necessary for AI alignment -- that might be the case, but it seems like things change significantly once our AI systems are smart enough to actually understand our moral systems, so that we no longer need to design special procedures to embed ethical reasoning in the AI system.\n\nIt does seem useful to have coordination on governance, along the lines of governance coordinating committees; it seems a lot better if there's only one or two groups that we need to convince of the importance of an issue, rather than 53 (!!).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #75", "newsletter_category": "AI governance"}
{"id": "80f29f11663f6cd0b832b5000bdb4e1c", "title": "GPT-2: 1.5B Release", "url": "https://openai.com/blog/gpt-2-1-5b-release/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Irene Solaiman", "Jack Clark", "Miles Brundage"], "summaries": ["Along with the release of the last and biggest GPT-2 model, OpenAI explains their findings with their research in the time period that the staged release bought them. While GPT-2 can produce reasonably convincing outputs that are hard to detect and can be finetuned for e.g. generation of synthetic propaganda, so far they have not seen any evidence of actual misuse."], "venue": "OpenAI Blog", "opinion": "While it is consistent to believe that OpenAI was just generating hype since GPT-2 was predictably not going to have major misuse applications, and this has now been borne out, I'm primarily glad that we started thinking about publication norms _before_ we had dangerous models, and it seems plausible to me that OpenAI was also thinking along these lines.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #73", "newsletter_category": "AI governance"}
{"id": "08ab34df6391d5441e1b13dcd7171cfb", "title": "Policy Researcher", "url": "https://jobs.lever.co/openai/638c06a8-4058-4c3d-9aef-6ee0528fb3bf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["OpenAI"], "summaries": ["There is a job opportunity at OpenAI as a policy researcher, which does not seem to have any formal requirements."], "venue": "Lever", "opinion": "It seems like a lot of the best policy work is happening at OpenAI (see for example the [OpenAI charter](https://blog.openai.com/openai-charter/)), I strongly encourage people to apply!", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #7", "newsletter_category": "AI governance"}
{"id": "63305aaeb01ab606ece10e9d6d857c0b", "title": "Grover: A State-of-the-Art Defense against Neural Fake News", "url": "https://rowanzellers.com/grover/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Rowan Zellers", "Ari Holtzman", "Hannah Rashkin", "Yonatan Bisk", "Ali Farhadi", "Franziska Roesner", "Yejin Choi"], "summaries": ["Could we use ML to detect fake news generated by other ML models? This paper suggests that models that are used to generate fake news will also be able to be used to _detect_ that same fake news. In particular, they train a GAN-like language model on news articles, that they dub GROVER, and show that the generated articles are _better_ propaganda than those generated by humans, but they can at least be detected by GROVER itself.\n\nNotably, they do plan to release their models, so that other researchers can also work on the problem of detecting fake news. They are following a similar release strategy as with <@GPT-2@>(@Better Language Models and Their Implications@): they are making the 117M and 345M parameter models public, and releasing their 1.5B parameter model to researchers who sign a release form."], "venue": "arXiv", "opinion": "It's interesting to see that this group went with a very similar release strategy, and I wish they had written more about why they chose to do what they did. I do like that they are on the face of it \"cooperating\" with OpenAI, but eventually we need norms for _how_ to make publication decisions, rather than always following the precedent set by someone prior. Though I suppose there could be a bit more risk with their models -- while they are the same size as the released GPT-2 models, they are better tuned for generating propaganda than GPT-2 is.", "highlight": false, "read_more": "Defending Against Neural Fake News", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #58", "newsletter_category": "AI governance"}
{"id": "592247bd3173944a1055fbfb30a715b4", "title": "Google’s brand-new AI ethics board is already falling apart", "url": "https://www.vox.com/future-perfect/2019/4/3/18292526/google-ai-ethics-board-letter-acquisti-kay-coles-james", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Kelsey Piper"], "summaries": ["Google [announced](https://www.blog.google/technology/ai/external-advisory-council-help-advance-responsible-development-ai/) an ethical advisory council, that quickly became controversial, and was then [cancelled](https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board). The author makes the point that the council was not well-placed to actually advise on ethics -- it would only meet four times a year, and could only give recommendations. This committee, and others at Facebook and Microsoft, seem to be more about PR and less about AI ethics. Instead, an AI ethics council should include both insiders and outsiders, should be able to make formal, specific, detailed recommendations, and would publicly announce whether the recommendations were followed. **Key quote:** \"The brouhaha has convinced me that Google needs an AI ethics board quite badly — but not the kind it seems to want to try to build.\"\n\nIn a [tweetstorm](https://twitter.com/KelseyTuoc/status/1113544870625308673), the author holds OpenAI up as a large organization that is at least *trying* to engage deeply with AI ethics, as evidenced by their safety and policy team, their <@charter@>(@OpenAI Charter@), and <@GPT-2@>(@Better Language Models and Their Implications@). They make public, contentful statements that are weird, controversial and seem bad from a PR perspective. The arguments they make and hear about AI ethics and policy lead to real decisions with consequences."], "venue": "Vox", "opinion": "I broadly agree with this article -- I can't imagine how a council that meets four times a year could properly provide advice on Google's AI projects. I'm not sure if the solution is more powerful and intensive ethics councils whose primary power is public accountability. I expect that making good decisions about AI ethics requires either a technical background, or a long, detailed conversation with a person with that background, neither of which are possible with the public. This could mean that an ethics board could struggle to raise a legitimate issue, or that they could cause outrage about an issue that is upon closer examination not an issue at all. I would feel better about a board with some more formal power, such as the ability to create investigations that could lead to fines, the ability to sue Google, specific whistleblowing affordances, etc. (I have no idea how feasible any of those suggestions are, even assuming Google was okay with them.)\n\nOn the tweetstorm about OpenAI, I'm not sure if I've said it before in this newsletter, but I generally trust OpenAI to be trying to do the right thing, and this is one of the reasons for that. Of course, I also know and trust many people who work there.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #55", "newsletter_category": "AI governance"}
{"id": "00f948592b458232091e24c5830abad0", "title": "Rationally Speaking #231 - Helen Toner on \"Misconceptions about China and artificial intelligence\"", "url": "http://www.rationallyspeakingpodcast.org/show/rs-231-helen-toner-on-misconceptions-about-china-and-artific.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Julia Galef and Helen Toner"], "summaries": ["In this podcast Helen talks about AI policy, China, and the Center for Security and Emerging Technology, where she is the director of strategy. Some of her opinions that stood out to me:\n\n - While Baidu is a huge tech company and is the main search engine, it's a bit misleading to call it the Google of China, since it doesn't have the same diversity of products that Google does.\n - While the social credit score story seems overblown, the reporting on the Uighur situation seems to be basically accurate.\n - Based on a very small sample of AI researchers in China, it seems like Chinese researchers are less interested in thinking about the real-world effects of the technology they're building, relative to Western researchers.\n - Since people in government have so little time to think about so many issues, they have simple versions of important ideas. For example, it's easy to conclude that China must have an intrinsic advantage at data since they have more people and fewer privacy controls. However, there's a lot of nuance: for example, the entire Internet is in English, which seems like a big advantage for the US.\n - The incentives in China can be quite different: in at least one case, a professor's salary depended on the number of papers published.\n - A particularly interesting question: \"how does it help the US geopolitically if an American company is developing powerful AI?\""], "venue": "Rationally Speaking", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #55", "newsletter_category": "AI governance"}
{"id": "cf5e4eb0ba6732cf45ee0cb426f53cef", "title": "A Survey of the EU's AI Ecosystem", "url": "https://www.charlottestix.com/european-union-ai-ecosystem", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Charlotte Stix"], "summaries": ["This report analyzes the European AI ecosystem. The key advantage that Europe has is a strong focus on ethical AI, as opposed to the US and China that are more focused on capabilities research. However, Europe does face a significant challenge in staying competitive with AI, as it lacks both startup/VC funding as well as talented researchers (who are often going to other countries). While there are initiatives meant to help with this problem, it is too early to tell whether they will have an impact. The report also recommends having large multinational projects, along the lines of CERN and the Human Brain Project. See also [Import AI](https://jack-clark.net/2019/03/25/making-better-healthcare-ai-systems-via-audio-de-identification-teaching-drones-to-help-humans-fight-fires-and-why-language-models-could-be-smarter-than-you-think/)."], "venue": "Charlotte Stix's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #50", "newsletter_category": "AI governance"}
{"id": "9d69e586ca0d86c9c9cd9b856df37d71", "title": "Toward AI Security: Global Aspirations for a More Resilient Future", "url": "https://cltc.berkeley.edu/TowardAISecurity/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jessica Cussins Newman"], "summaries": ["This report analyzes various AI security risks (including both near-term and long-term concerns) and categorizes them, and then analyzes how different national strategies and policies have engaged with these risks. Most interestingly (to me) it comes to the conclusion that most national AI strategies are focused on very different areas and often ignore (in the sense of not mentioning) risks that other countries have highlighted, though there are still some areas for cooperation, such as improving the transparency and accountability of AI systems."], "venue": "CLTC Website", "opinion": "It's pretty strange to me that different governments would take such different approaches to AI - this suggests that either academics, think tanks, policy analysts etc. do not agree on the risks, or that there isn't enough political pressure for some of the risks to make it into the strategies. It seems like the AI community would have a significant opportunity to shape policy in the latter case -- I'd imagine for example that an open letter signed by thousands of researchers could be quite helpful in creating political will. (Of course, creating a comprehensive open letter that most researchers will approve of might be quite hard to do.)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #46", "newsletter_category": "AI governance"}
{"id": "4a2099238ba82033a3824fac24f5f428", "title": "Risk factors for s-risks", "url": "http://s-risks.org/risk-factors-for-s-risks/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Tobias Baumann"], "summaries": ["This post discusses four risk factors for creating extreme disvalue in the universe (s-risks): advanced technology, lack of effort to avoid those outcomes, inadequate security and law enforcement, and polarisation and divergence of values. Tobias notes that he's most worried about cases where most of these factors occur, because the absence of any of them mitigates the threat posed by the others."], "venue": "S-risks Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #46", "newsletter_category": "AI governance"}
{"id": "037626e0f11881dea2f0ec3c8b82921c", "title": "FLI Podcast- Artificial Intelligence: American Attitudes and Trends", "url": "https://futureoflife.org/2019/01/24/public-opinion-on-artificial-intelligence/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Ariel Conn and Baobao Zhang"], "summaries": ["This is a podcast about [The American Public’s Attitudes Concerning Artificial Intelligence](https://www.fhi.ox.ac.uk/aipublic2019/) ([AN #41](https://mailchi.mp/8c3f02cabccd/alignment-newsletter-41)), you can see my very brief summary of that."], "venue": "FLI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #43", "newsletter_category": "AI governance"}
{"id": "630dfe388858a8daef9bb58bf8479057", "title": "The American Public’s Attitudes Concerning Artificial Intelligence", "url": "https://www.fhi.ox.ac.uk/aipublic2019/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Baobao Zhang and Allan Dafoe"], "summaries": ["This presents results from a survey of Americans about their attitudes towards AI. There's not a compelling objective story I can tell, so you might as well look at the executive summary, which presents a few interesting highlights. One interesting fact: the median person thinks that we're more likely than not to have \"high-level machine intelligence\" within 10 years! You could also read [Vox's take](https://www.vox.com/future-perfect/2019/1/9/18174081/fhi-govai-ai-safety-american-public-worried-ai-catastrophe), which emphasizes that the public is concerned about long-term AI risk."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #41", "newsletter_category": "AI governance"}
{"id": "32956a2be316acadd92c9f0f831bfa08", "title": "Countering Superintelligence Misinformation", "url": "http://gcrinstitute.org/countering-superintelligence-misinformation/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Seth D. Baum"], "summaries": ["Two ways to have better discussions about superintelligence are correcting misconceptions, and preventing misinformation from being spread in the first place. The latter might be achieved by educating prominent voices, creating reputational costs to misinformers (both individuals and companies), focusing media attention, etc. Research suggests the former is very difficult; strategies include addressing pre-existing motivations for believing misinformation and using advance warnings to 'inoculate' people against false claims."], "venue": "GCRI Website", "opinion": "I'm glad to see this systematic exploration of an issue that the AI safety community has consistently had to grapple with. I would have liked to see a more nuanced definition of misinformation than \"information that is already clearly false\", since it's not always obvious what qualifies as clearly false, and since there are many varieties of misinformation.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "Superintelligence Skepticism as a Political Tool", "converted_with": "python", "newsletter_number": "AN #27", "newsletter_category": "AI governance"}
{"id": "b97f1609126fa1adc21159dc2f11990a", "title": "Podcast: Artificial Intelligence – Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins", "url": "https://futureoflife.org/2018/08/30/podcast-artificial-intelligence-global-governance-national-policy-and-public-trust-with-allan-dafoe-and-jessica-cussins/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Allan Dafoe", "Jessica Cussins", "and Ariel Conn"], "summaries": ["Topics discussed include the difference between AI governance and AI policy, externalities and solving them through regulation, whether governments and bureaucracies can keep up with AI research, the extent to which the US' policy of not regulating AI may cause citizens to lose trust, labor displacement and inequality, and AI races."], "venue": "FLI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #23", "newsletter_category": "AI governance"}
{"id": "bbd76f602559459703f3c82a42778326", "title": "The Advent of Huang's Law", "url": "http://rbharath.github.io/the-advent-of-huangs-law/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Bharath Ramsundar"], "summaries": ["See [Import AI](https://jack-clark.net/2018/04/09/import-ai-89-chinese-facial-recognition-startup-raises-600-million-why-gpus-could-alter-ai-progress-and-using-contest-to-deal-with-language-ambiguity/)"], "venue": "Author's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #2", "newsletter_category": "AI governance"}
{"id": "9036199018547cc51df9ad307749a536", "title": "China Now Has the Most Valuable AI Startup in the World", "url": "https://www.bloomberg.com/news/articles/2018-04-09/sensetime-snags-alibaba-funding-at-a-record-3-billion-valuation", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["See [Import AI](https://jack-clark.net/2018/04/09/import-ai-89-chinese-facial-recognition-startup-raises-600-million-why-gpus-could-alter-ai-progress-and-using-contest-to-deal-with-language-ambiguity/)"], "venue": "Bloomberg News", "opinion": "It's a short, interesting piece, and it's got some actual numbers and quotes from Xu Li (one of the co-founders of the startup, SenseTime), so you should read it.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #2", "newsletter_category": "AI governance"}
{"id": "27dd64cb7435376103ed2911682ef6ab", "title": "The Deep Roots and Long Branches of Chinese Technonationalism", "url": "https://macropolo.org/deep-roots-long-branches-chinese-technonationalism/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Evan Feigenbaum"], "summaries": ["See [Import AI](https://jack-clark.net/2018/04/09/import-ai-89-chinese-facial-recognition-startup-raises-600-million-why-gpus-could-alter-ai-progress-and-using-contest-to-deal-with-language-ambiguity/)"], "venue": "Macro Polo", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #2", "newsletter_category": "AI governance"}
{"id": "408c4c51c44e7cd5bb23ddf9290fd203", "title": "Collective Action on Artificial Intelligence: A Primer and Review", "url": "http://gcrinstitute.org/collective-action-on-artificial-intelligence-a-primer-and-review/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Robert de Neufville", "Seth D. Baum"], "summaries": ["This paper reviews much of the work in AI governance (specifically, work on AI races and other collective action problems)."], "venue": "Technology in Society", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #159", "newsletter_category": "AI governance"}
{"id": "fdee6d512cc029872ae306170b481be3", "title": "AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries", "url": "http://gcrinstitute.org/ai-certification-advancing-ethical-practice-by-reducing-information-asymmetries/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Peter Cihon", "Moritz J. Kleinaltenkamp", "Jonas Schuett", "Seth D. Baum"], "summaries": ["_Certification_ is a method of reducing information asymmetries: it presents credible information about a product to an audience that they couldn’t have easily gotten otherwise. With AI systems, certification could be used to credibly share information between AI actors, which could promote trust amongst competitors, or to share safety measures to prevent a race to the bottom on safety, caused by worrying that “the other guys would be even more unsafe”. Certification is at its best when there is _demand_ from an audience to see such certificates; public education about the need for credible information can help generate such demand.\n\nHowever, certification often runs into problems. _Symbol-substance decoupling_ happens when certificates are issued to systems that don’t meet the standards for certification. For example, in “ethics washing”, companies advertise a self-certificate in which their products are approved by ethics boards, but those ethics boards have no real power. _Means-ends decoupling_ happens when the standards for certification don’t advance the goals for which the certificate was designed. For example, a certificate might focus on whether a system was tested, rather than on what test was conducted, leading applicants to use easy-to-pass tests that don’t actually provide a check on whether the method is safe.\n\nEffective certification for future AI systems needs to be responsive to changes in AI technology. This can be achieved in a few ways: first, we can try to test the underlying goals which are more likely to remain stable; for example, we could certify ethical principles that will likely remain the same in the future. Second, we can match the certification to the types of people and institutions, that is, our certifications talk about the executives, citizens, or corporations (rather than e.g. specific algorithms that may be replaced in the future). Third, the certification system can build in mechanisms for updating the certification criteria periodically.\n\nThe paper then analyzes seven existing certification systems for AI systems; you’ll have to read the paper for details."], "venue": "GCRI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #156", "newsletter_category": "AI governance"}
{"id": "83ee808fe5f8d0393dba26089e5d2ece", "title": "Shaping economic incentives for collaborative AGI", "url": "https://www.lesswrong.com/posts/FkZCM4DMprtEp568s/shaping-economic-incentives-for-collaborative-agi", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Kaj Sotala"], "summaries": ["This post considers how to encourage a culture of cooperation among AI researchers. Then, when researchers try to create AGI, this culture of cooperation may make it more likely that AGI is developed collaboratively, instead of with race dynamics, making it more likely to be safe. It specifically poses the question of what external economic or policy incentives could encourage such cooperation."], "venue": "LessWrong", "opinion": "I am optimistic about developing AGI collaboratively, especially through AI researchers cooperating. I'm not sure whether external incentives from government are the right way to achieve this -- it seems likely that such regulation would be aimed at the wrong problems if it originated from government and not from AI researchers themselves. I'm more optimistic about some AI researchers developing guidelines and incentive structures themselves, that researchers buy into voluntarily, that maybe later get codified into law by governments, or adopted by companies for their AI research.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #14", "newsletter_category": "AI governance"}
{"id": "03feee0637e178dafe85a83e34585522", "title": "An Overview of National AI Strategies", "url": "https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tim Dutton"], "summaries": ["A short reference on the AI policies released by various countries."], "venue": "Medium", "opinion": "Reading through this, it seems that countries are taking quite different approaches towards AI. I don't know what to make of this -- are they acting close to optimally given their geopolitical situation (which must then vary a lot by country), or does no one know what's going on and as a result all of the strategies are somewhat randomly chosen? (Here by \"randomly chosen\" I mean that the strategies that one group of analysts would select with is only weakly correlated with the strategies another group would select.) It could also be that the approaches are not actually that different.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #14", "newsletter_category": "AI governance"}
{"id": "fd6b95da78cb226fb6144f2b2342ae01", "title": "Bridging the Gap: The Case for an ‘Incompletely Theorized Agreement’ on AI Policy.", "url": "https://link.springer.com/article/10.1007%2Fs43681-020-00037-w", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Charlotte Stix", "Matthijs M. Maas"], "summaries": ["Like <@several@>(@Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society@) <@past@>(@Bridging near- and long-term concerns about AI@) <@papers@>(@Medium-Term Artificial Intelligence and Society@), this paper argues that the differences between the “near-term” and “long-term” communities are probably exaggerated. Collaboration between these communities would be particularly beneficial, since it could prevent the field of AI policy from becoming fragmented and ineffectual, which is especially important now while the field is nascent and there is political will for AI policy progress.\n\nThe authors propose the notion of an “incompletely theorized agreement” in order to foster this sort of collaboration. In an incompletely theorized agreement, the parties agree to suspend disagreement on some thorny theoretical question, in order to coordinate action towards a shared pragmatic purpose. Such agreements could be used to set aside relatively unimportant disagreements between the two communities, in favor of pursuing goals that both communities care about. For example, we could imagine that such an agreement would allow both communities to push for more and better reflection by AI researchers on the impacts of the systems that they build, or to enable action that ensures we preserve the integrity of public discourse and informed decision-making (e.g. by regulating AI-enabled disinformation)."], "venue": "AI and Ethics", "opinion": "I’m certainly on board with the goal of working together towards shared goals. That being said, I don't fully understand what's being proposed here: how exactly is an incompletely theorized agreement supposed to be made? Is this more of a “shared ethos” that gets spread by word of mouth, or is there a document that people sign on to? If there is a document, what goes into it, who would agree to it, and how binding is it? I’d be excited to see more work fleshing out these concrete details, or even better, actually causing such an agreement to exist in practice.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #135", "newsletter_category": "AI governance"}
{"id": "80c0efb835d58fac049135c09c4eccbe", "title": "Fragmentation and the Future: Investigating Architectures for International AI Governance", "url": "https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12890", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Peter Cihon", "Matthijs M. Maas", "Luke Kemp"], "summaries": ["Should AI governance be done centrally, through an international body, or in a fragmented, decentralized fashion? This paper identifies various considerations pointing in different directions:\n\n1. Centralized institutions can have more political power when designed well: their regulations can have more “teeth”.\n2. Centralized institutions can be more efficient from the participant’s perspectives: if there is only one set of regulations, it is much easier for each participant to adhere to those regulations.\n3. A centralized institution will typically be slower to act, as there are many more parties with a larger stake in the outcome. This can make it brittle, especially when the pace of technological change outpaces that of regulatory change.\n4. Centralized institutions face a breadth vs. depth dilemma: if the regulations are too stringent, then some actors (i.e. nations, companies, etc) won’t participate (there is depth but not breadth), and similarly, to get everyone to participate the regulations must often be quite weak (breadth but not depth). In contrast, with decentralized approaches, the depth of the regulations can be customized to each participant.\n5. With more fragmented approaches, actors can “forum shop” for the regulations which they think are best. It is unclear whether this is helpful or harmful for AI governance.\n6. It is unclear which approach leads to more coordination. While a centralized approach ensures that everyone has the same policies, leading to policy _coherence_, it does not necessarily mean that those policies are good. A decentralized approach could lead to faster adaptation leading to better policies that are then copied by others, leading to more effective coordination overall."], "venue": "Global Policy", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #131", "newsletter_category": "AI governance"}
{"id": "fd91b9301bf688b1f693acfea6cf08ab", "title": "Future Indices: How Crowd Forecasting Can Inform the Big Picture", "url": "https://cset.georgetown.edu/research/future-indices/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Michael Page", "Catherine Aiken", "Dewey Murdick"], "summaries": ["This paper explains the methodology behind CSET’s recent forecasting project, Foretell. We would like to know which of several potential geopolitical scenarios might happen in the next 3-7 years. We can get some insight into this by asking relevant experts for their opinions, but often many experts will disagree, making it hard to know what to conclude.\n\nWe’d like to mitigate this by leveraging the wisdom of the crowds. Unfortunately, this would require us to have a clear and precise operationalization of our scenarios; the scenarios we’re interested in are rarely amenable to such operationalization. Instead, we can find a number of _predictors_ that would argue for a specific scenario, and identify one or more _metrics_ which are themselves clear and precise and give us information about some predictor. We can get forecasts for these metrics using the wisdom of the crowds. We can then compute the deviations between crowd forecasts and simple trend extrapolations of historical data, and use the observed trend directions as arguments for or against particular scenarios.\n\nThe paper illustrates this in the case of potential scenarios involving the US, China, and AI. An example of an important predictor is “US-China tensions”. Associated metrics include the amount of US-China trade, the number of Chinese O visas, etc. In this case, the crowd predictions suggested trend deviations in the metrics that argued for increasing US-China tensions."], "venue": "CSET Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #123", "newsletter_category": "AI governance"}
{"id": "94ae6f37ad1cdbb6fb557d8dc28430a5", "title": "AI Nationalism", "url": "https://www.ianhogarth.com/blog/2018/6/13/ai-nationalism", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Ian Hogarth"], "summaries": ["As AI becomes more important in the coming years, there will be an increasing amount of \"AI nationalism\". AI policy will be extremely important and governments will compete on keeping AI talent. For example, they are likely to start blocking company takeovers and acquisitions that cross national borders -- for example, the UK could have been in a much stronger position had they blocked the acquisition of DeepMind (which is UK-based) by Google (which is US-based)."], "venue": "Author's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #12", "newsletter_category": "AI governance"}
{"id": "31b49ed58fccdefe6116be296f685bc2", "title": "Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance", "url": "https://link.springer.com/article/10.1007/s13347-020-00402-x", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Seán S. ÓhÉigeartaigh", "Jess Whittlestone", "Yang Liu", "Yi Zeng", "Zhe Liu"], "summaries": ["This paper argues that it is important that AI ethics and governance is cross-cultural, and provides a few recommendations towards this goal:\n1. Develop AI ethics and governance research agendas requiring cross-cultural cooperation\n2. Translate key papers and reports\n3. Alternate continents for major AI research conferences and ethics and governance conferences\n4. Establish joint and/or exchange programmes for PhD students and postdocs"], "venue": "Philosophy & Technology", "opinion": "", "highlight": false, "read_more": "Longer summary from MAIEI", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #111", "newsletter_category": "AI governance"}
{"id": "fe68f04b56407b425d376e26e2a201ee", "title": "Antitrust-Compliant AI Industry Self-Regulation", "url": "https://cullenokeefe.com/blog/antitrust-compliant-ai-industry-self-regulation", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Cullen O'Keefe"], "summaries": ["One way to reduce the risk of unsafe AI systems is to have agreements between corporations that promote risk reduction measures. However, such agreements may run afoul of antitrust laws. This paper suggests that this sort of self-regulation could be done under the “Rule of Reason”, in which a learned profession (such as “AI engineering”) may self-regulate in order to correct a market failure, as long as the effects of such a regulation promote rather than harm competition.\n\nIn the case of AI, if AI engineers self-regulate, this could be argued as correcting the information asymmetry between the AI engineers (who know about risks) and the users of the AI system (who don’t). In addition, since AI engineers arguably do not have a monetary incentive, the self-regulation need not be anticompetitive. Thus, this seems like a plausible method by which AI self-regulation could occur without running afoul of antitrust law, and so is worthy of more investigation."], "venue": "Author's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #108", "newsletter_category": "AI governance"}
{"id": "cbc9287ce484b15485af221b01b9632b", "title": "AI at Google: our principles", "url": "https://blog.google/topics/ai/ai-principles/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Sundar Pichai"], "summaries": ["Following the outcry over the Maven program, Google has written a blog post detailing the principles they will follow for AI."], "venue": "Google Blog", "opinion": "I found this line particularly interesting: \"We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research.\" It sounds like the time is ripe for someone to write a \"best practices\" paper!", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #10", "newsletter_category": "AI governance"}
{"id": "0d2b164b5a3bef2b0a557b508d832403", "title": "Institutionalizing ethics in AI through broader impact requirements", "url": "https://www.nature.com/articles/s42256-021-00298-y.epdf?sharing_token=IDmLZKsYFmHzNVVvdbwmJ9RgN0jAjWel9jnR3ZoTv0OmYmmcvbkNbs5E_6ePZaVvz4b0vI_5Un7qhqYQjW4wT1BaPg_DR_6Etp4UDSY5uLzi_asvKZRlspXOTZMHjqPA4sVV9jS2EtSiUKMku_AWDKr1jBHBoWIpCPfViTUWbkQ%3D", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Carina E. A. Prunkl", "Carolyn Ashurst", "Markus Anderljung", "Helena Webb", "Jan Leike", "Allan Dafoe"], "summaries": ["This short perspective analyzes the policy implemented by NeurIPS last year in which paper submissions were required to have a section discussing the broader impacts of the research. Potential benefits include _anticipating_ potential impacts of research, _acting_ to improve these impacts, _reflecting_ on what research to do given the potential impacts, and improving _coordination_ across the community. However, the policy may also lead to _trivialization_ of ethics and governance (thinking that all the relevant thinking about impacts can be done in this single statement), _negative attitudes_ towards the burden of writing such statements or responsible research in general, a _false sense of security_ that the ethics are being handled, and a _perception_ of ethics as something to be done as an afterthought.\n\nThe main challenges that can cause these sorts of negative effects are:\n1. Analyzing broader impacts can be difficult and complex,\n2. There are not yet any best practices or guidance,\n3. There isn’t a clear explanation of the purpose of the statements, or transparency into how they will be evaluated,\n4. It’s tempting to focus on the research that determines whether or not your paper is published, rather than the broader impacts statement which mostly does not affect decisions,\n5. Researchers may have incentives to emphasize the beneficial impacts of their work and downplay the negative impacts.\n6. Biases like motivated reasoning may affect the quality and comprehensiveness of impact statements.\n\nTo mitigate these challenges, the authors recommend improving _transparency_, setting _expectations_, providing _guidance_ on how to write statements, improving _incentives_ for creating good impact statements, and learning from experience through _community deliberation_. To improve incentives in particular, broader impact statements could be made an explicit part of peer review which can affect acceptance decisions. These reviews could be improved by involving experts in ethics and governance. Prizes could also be given for outstanding impact statements, similarly to best paper awards."], "venue": "Nature Machine Intelligence", "opinion": "I’ve been pretty skeptical of the requirement to write a broader impacts statement. My experience of it was primarily one of frustration, for a few reasons:\n1. Forecasting the future is hard. I don’t expect a shallow effort to forecast to be all that correlated with the truth. There were lots of simple things I could say that “sound” right but that I don’t particularly expect to be true, such as “improving cooperation in multiagent RL will help build cooperative, helpful personal assistants”. It’s a lot harder to say things that are actually true; a real attempt to do this would typically be a paper in itself.\n2. 
To the extent that the statement does affect reviews, I expect that reviewers want to hear the simple things that sound right; and if I _don’t_ write them, it would probably be a strike against the paper.\n3. Even if I did write a good statement, I don’t expect anyone to read it or care about it.\n\nFrom a birds-eye view, I was also worried that if such statements do become popular, they’ll tend to ossify and build consensus around fairly shallow views that people come up with after just a bit of thought.\n\nI do think many of the proposals in this paper would help quite a bit, and there probably is a version of these statements that I would like and endorse.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "AI governance"}
{"id": "a7bbcd83d6d13b493ff7cd6277d9f50e", "title": "Exploring AI Futures Through Role Play", "url": "https://www.cser.ac.uk/resources/ai-futures-role-play/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Shahar Avin", "Ross Gruetzemacher", "James Fox"], "summaries": ["This paper argues that role playing (akin to the \"wargames\" used in the military) is a good way to explore possible AI futures, especially to discover unusual edge cases, in a 10-30 year time horizon. Each player is assigned a role (e.g. director of AI at Tencent, or president of the US) and asked to play out their role faithfully. Each game turn covers 2 simulated years, in which players can negotiate and take public and private actions. The game facilitator determines what happens in the simulated world based on these actions. While early games were unstructured, recent games have had an AI \"tech tree\", that determines what AI applications can be developed.\n\nFrom the games played so far, the authors have found a few patterns:\n- Cooperation between actors on AI safety and (some) restriction on destabilizing uses of AI seem to both be robustly beneficial.\n- Even when earlier advances are risky, or when current advances are of unclear value, players tend to pursue AI R&D quite strongly.\n- Many possible kinds of coalitions are possible, e.g. between governments, between corporations, between governments and corporations, and between sub-roles within a corporation."], "venue": "CSER Website", "opinion": "It makes sense that role playing can help find extreme, edge case scenarios. I'm not sure how likely I should find such scenarios -- are they plausible but unlikely (because forecasting is hard but not impossible), or are they implausible (because it would be very hard to model an _entire government_, and no one person is going to do it justice)? Note that according to the paper, the prior literature on role playing is quite positive (though of course it's talking about role playing in other contexts, e.g. business and military contexts). Still, this seems like quite an important question that strongly impacts how seriously I take the results of these role playing scenarios.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #90", "newsletter_category": "AI governance"}
{"id": "460d3d405b9c9eb53fddeebceba4825d", "title": "An Interview with Ben Garfinkel", "url": "https://thepolitic.org/an-interview-with-ben-garfinkel-governance-of-ai-program-researcher/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Joshua Monrad", "Mojmír Stehlík and Ben Garfinkel"], "summaries": ["AI seems poised to be a very big deal, possibly through the development of AGI, and it's very hard to forecast what would happen next. However, looking at history, we can see a few very large trajectory shifts, such as the Agricultural Revolution and Industrial Revolution, where everything changed radically. We shouldn't assume that such change must be for the better. Even though it's hard to predict what will happen, we can still do work that seems robustly good regardless of the specific long-term risk. For example, Ben is optimistic about research into avoiding adversarial dynamics between different groups invested in AI, research into how groups can make credible commitments, and better forecasting. However, credible commitments are probably less tractable for AI than with nukes or biological weapons because AI systems don't leave a large physical footprint, can easily proliferate, and are not a clear category that can be easily defined."], "venue": "The Politic", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #61", "newsletter_category": "AI governance"}
{"id": "35b89d9bae6f933d70cff21f0d0bdf76", "title": "The Hacker Learns to Trust", "url": "https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Connor Leahy"], "summaries": ["An independent researcher attempted to replicate <@GPT-2@>(@Better Language Models and Their Implications@) and was planning to release the model. However, he has now decided not to release, because releasing would set a bad precedent. Regardless of whether or not GPT-2 is dangerous, at some point in the future, we will develop AI systems that really are dangerous, and we need to have adequate norms then that allow researchers to take their time and evaluate the potential issues and then make an informed decision about what to do. **Key quote:** \"sending a message that it is ok, even celebrated, for a lone individual to unilaterally go against reasonable safety concerns of other researchers is not a good message to send\"."], "venue": "Medium", "opinion": "I quite strongly agree that the most important impact of the GPT-2 decision was that it has started a discussion about what appropriate safety norms should be, whereas before there were no such norms at all. I don't know whether or not GPT-2 is dangerous, but I am glad that AI researchers have started thinking about whether and how publication norms should change.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #58", "newsletter_category": "AI governance"}
{"id": "93d0f4cbab06e14e6fbbb197ce6e833c", "title": "When Is It Appropriate to Publish High-Stakes AI Research?", "url": "https://www.partnershiponai.org/when-is-it-appropriate-to-publish-high-stakes-ai-research/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Claire Leibowicz", "Steven Adler", "and Peter Eckersley"], "summaries": ["Following the <@GPT-2 controversy@>(@Better Language Models and Their Implications@), the Partnership on AI held a dinner with OpenAI and other members of the AI community to discuss the tension between the norm of openness and the desire to mitigate potential unintended consequences and misuse risks of AI research. The post discusses some of the relevant considerations, and highlights a key conclusion: while there is not yet a consensus on on review norms for AI research, there *is* a consensus that **whatever the review norms are, they should be standardized across the AI community**."], "venue": "PAI Website", "opinion": "I definitely agree that having everyone follow the same review norms is important: it doesn't do much good to hold back from publishing something problematic if a different group will publish all of the details a few weeks later. However, getting everyone to agree on a change to the existing norms seems incredibly hard to do, though it might be feasible if it was limited to only the largest actors who can engage deeply in the debate of what these norms should be.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #55", "newsletter_category": "AI governance"}
{"id": "de4b79c1dde4fc1e50b76925fcb923cf", "title": "Global AI Talent Report 2019", "url": "https://jfgagne.ai/talent-2019/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jean-Francois Gagné"], "summaries": ["This report has a lot of statistics on the growth of the field of AI over the last year."], "venue": "Author's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #54", "newsletter_category": "AI governance"}
{"id": "6a77d0bf5b49ff9db2c4cac3a2623be9", "title": "Building an AI World", "url": "https://www.cifar.ca/docs/default-source/ai-society/buildinganaiworld_eng.pdf?sfvrsn=fb18d129_4", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tim Dutton", "Brent Barron", "Gaga Boskovic"], "summaries": ["This report summarises the AI strategies released by 18 different countries and regions. In particular, it rates how much emphasis they put on each of 8 areas. Broadly speaking, countries were most focused on Industrial strategy, Research, and AI talent (in that order), moderately focused on Ethics and Data, and least focused on Future of work, AI in governments, and Inclusion."], "venue": "CIFAR", "opinion": "Since this report discusses neither its methodology nor the implications of its findings, it's difficult to know what conclusions to draw from it. The overall priorities seem to be roughly what I would have expected, except that I'm positively surprised by how much emphasis was placed on ethics.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #41", "newsletter_category": "AI governance"}
{"id": "dd57b8e64548f24ccc1a3f7e80ab9f67", "title": "Computational Power and the Social Impact of Artificial Intelligence", "url": "http://arxiv.org/abs/1803.08971", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tim Hwang"], "summaries": ["This paper contains a lot of introductory material about hardware in ML, plus details on where it's being made. It discusses the competition between China and the US to dominate hardware production. Hwang notes that the trend towards more specialised hardware may decrease the price of implementing ML but also decrease flexibility after deployment. He points out that simulation learning, self-play and meta-learning reduce the need for data at the expense of increased compute, which may increase hardware's importance going forward."], "venue": "arXiv", "opinion": "This may be useful for AI policy researchers, since it explores which hardware is being made by which companies in which locations, and some of the geopolitical implications. While it's a long paper, AI researchers could probably skip the first half without missing much.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #26", "newsletter_category": "AI governance"}
{"id": "8e7ee30611d65cef1bb14a058033f597", "title": "Medium-Term Artificial Intelligence and Society", "url": "https://www.mdpi.com/2078-2489/11/6/290/htm", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Seth D. Baum"], "summaries": ["Like a <@previously summarized paper@>(@Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society@), this paper aims to find common ground between near-term and long-term priorities in medium-term concerns. This can be defined along several dimensions of an AI system: when it chronologically appears, how feasible it is to build it, how certain it is that we can build it, how capable the system is, how impactful the system is, and how urgent it is to work on it.\n\nThe paper formulates and evaluates the plausibility of the _medium term AI hypothesis_: that there is an intermediate time period in which AI technology and accompanying societal issues are important from both presentist and futurist perspectives. However, it does not come to a strong opinion on whether the hypothesis is true or not."], "venue": "Information", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #105", "newsletter_category": "AI governance"}
{"id": "9fc51f7b2e59b93f049c21115d167cc5", "title": "France's AI strategy", "url": "https://www.aiforhumanity.fr/pdfs/MissionVillani_Summary_ENG.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["See [Import AI's summary](https://jack-clark.net/2018/04/02/importai-88-nato-designs-a-cyber-defense-ai-object-detection-improves-with-yolov3-france-unveils-its-national-ai-strategy/)."], "venue": "", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #1", "newsletter_category": "AI governance"}
{"id": "6ae8115ee1ca14f2c0aa17d655182af2", "title": "A framework for thinking about wireheading", "url": "https://www.lesswrong.com/posts/MbWDqFgojfnwhxRxr/a-framework-for-thinking-about-wireheading", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["theotherotheralex"], "summaries": ["Humans don't wirehead (take heroin, which gives huge positive reward) because it does not further their current goals. Maybe analogously we could design an AI that realizes that wireheading would not help it achieve its current goals and so wouldn't wirehead."], "venue": "LessWrong", "opinion": "I think this is anthropomorphizing the AI too much. To the extent that a (current) reinforcement learning system can be said to \"have goals\", the goal is to maximize reward, so wireheading actually is furthering its current goal. It might be that in the future the systems we design are more analogous to humans and then such an approach might be useful.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "b812e9d4135039943eceaaf069cf090c", "title": "An environment for studying counterfactuals", "url": "https://www.lesswrong.com/posts/hhNH3knNHgdkonAKB/an-environment-for-studying-counterfactuals", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Nisan"], "summaries": ["Proposes a class of environments in which the agent is tasked with predicting the utility of every action, in addition to maximizing expected utility. It is evaluated on the utility achieved as well as correctly predicting the utility it gets. Epsilon-exploration is required, so for every action there is always some chance that the agent will be tested on predicting the utility of that action. The agent is also provided a prior P about the world, including what the agent will do (which exists due to a fixed-point theorem)."], "venue": "LessWrong", "opinion": "I'm confused (I'm not an expert in this field), but I'm not sure what I'm confused about. Is there a dynamics model? Given that the agent gets access to a prior, can it find Pr(U | o, a) and choose the a with maximum expected utility? Why are we including reflection? There are often many fixed points, which one do we pick?", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "70d8d45311c93f6db10d2db22c6f4ab5", "title": "Choosing to Choose?", "url": "https://www.lesswrong.com/posts/B6SstsM3cdKTaicbj/choosing-to-choose", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Whispermute"], "summaries": ["If it is possible for your utility function to change, then should you optimize for your current utility function, or your expected future utility function? The post gives an argument for both sides, and ultimately says that you should optimize for your current utility function, but notes some problems with the proposed argument for it."], "venue": "LessWrong", "opinion": "I think that it is correct to optimize for your current utility function, and I didn't find the argument for the other side convincing (and wrote a comment on the post with more details).", "highlight": false, "read_more": "Self-Modification of Policy and Utility Function in Rational Agents", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "708e200ca4f3b6ded4973eab73d4cf65", "title": "Conditioning, Counterfactuals, Exploration, and Gears", "url": "https://www.lesswrong.com/posts/uQHAJ7TdBbweRR5iS/conditioning-counterfactuals-exploration-and-gears", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Diffractor"], "summaries": ["One way that you can think about counterfactuals is to _condition_ on some low probability state, and then look at the probability distribution that implies. This seems like the most general version of counterfactuals, but it doesn't match what we intuitively mean by counterfactuals, which is more like \"suppose that by fiat this constraint were met, but don't consider what would have caused it, now predict the consequences\". This sort of imputing only works because there are very simple rules governing our universe, so that there are strong correlations between different experiences and so it actually is possible to generalize to very new situations. It seems very important to use this idea in order to advance beyond epsilon-exploration for new situations."], "venue": "LessWrong", "opinion": "I agree that this is an important idea, and it has arisen elsewhere -- in ML, this is part of the thinking on the problem of generalization. There are no-free-lunch theorems that say you cannot do well in arbitrary environments, where the constructions typically violate the \"strong correlation between different experiences\" heuristic. In philosophy, this is the problem of induction.", "highlight": false, "read_more": "Don't Condition on no Catastrophes", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "9dea308ffa960667d23735766402c741", "title": "Conditions under which misaligned subagents can (not) arise in classifiers", "url": "https://www.lesswrong.com/posts/xmzNAoWcYQfMv3j6J/conditions-under-which-misaligned-subagents-can-not-arise-in", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["anon1"], "summaries": ["Agents or subagents with \"goals\" are only likely to arise when you are considering tasks where it is important to keep state/memory, because past inputs are informative about future inputs. So, unaligned subagents are unlikely to arise for eg. classification tasks where it is not necessary to model how things change over time."], "venue": "LessWrong", "opinion": "I do think that classifiers with a bounded task that run for a bounded amount of time are unlikely to develop unaligned subagents with memory. However, I still feel very unclear on the term \"unaligned subagent\", so I'm not very confident in this assessment.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "d28e0fa7ed40148eeaba4ff4ffae802d", "title": "Dependent Type Theory and Zero-Shot Reasoning", "url": "https://www.lesswrong.com/posts/Xfw2d5horPunP2MSK/dependent-type-theory-and-zero-shot-reasoning", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["evhub"], "summaries": ["Humans can do zero-shot reasoning (in the sense of writing down proofs) by \"running a type checker in their head\" (analogous to theorem provers like Lean). The post gives an example of this, using Lean syntax. However, humans seem to have very different ways of thinking -- for example, you could either generate ideas for solutions to a problem, see if they work, and iterate, or you could start proving some facts about the problem, and keep on proving things until you have proved a solution. These feel like many-shot reasoning and zero-shot reasoning respectively, even though they are both attempting a zero-shot task. This is one way to understand the difference between Iterated distillation and amplification, and Agent foundations -- the former is many-shot and the latter is zero-shot, even though both are attempting a zero-shot task."], "venue": "LessWrong", "opinion": "I found the part about how people prove things to be the most interesting part of the post, because my own method seems different from both. I usually alternate between searching for solutions, counterexamples to solutions, and proving that solutions must satisfy some property.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "934e0fe5a2cca2eb0619f8d5a71db33c", "title": "Mathematical Mindset", "url": "https://www.lesswrong.com/posts/wxBBRzR4FS7nGBjbD/mathematical-mindset", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["komponisto"], "summaries": ["Introduces a new term, \"mathematical mindset\", which is about finding good _definitions_ or _models_ that make it easier for you to reason about them. For example, you expect proofs with a newer definition to be shorter or more general. Key quote: \"Having a “mathematical mindset” means being comfortable with words being redefined. This is because it means being comfortable with models being upgraded -- in particular, with models being related and compared to each other: the activity of theorization.\""], "venue": "LessWrong", "opinion": "I'm all for having better definitions that make things clearer and easier to reason about. I don't know if \"ease of proofs\" is the right thing to aim for -- \"ease of reasoning\" is closer to what I care about, even if it's informal reasoning.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "ccb80c0b0c8cad1dc57f66d093d4cfdb", "title": "Monk Treehouse: some problems defining simulation", "url": "https://www.lesswrong.com/posts/o7FBQkwgKJPDjDKnh/monk-treehouse-some-problems-defining-simulation", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["dranorter"], "summaries": ["Some approaches to AI alignment require you to identify copies of programs in the environment, and it is not clear how to do this in full generality. Proposals so far have attempted to define two programs to be equivalent if they do the same thing now and would also do the same thing in counterfactual worlds. This post argues that such definitions don't work using an analogy where there are monks computing by moving heavy stones in a treehouse, that could unbalance it. In this setting, there are lots of checks and balances to make sure that the program does one and only one thing; any counterfactual you specify would lead to weird results (like the treehouse falling over from unbalanced stones, or monks noticing that something is off and correcting the result, etc.) and so it wouldn't be considered equivalent to the same program on a silicon-based computer."], "venue": "LessWrong", "opinion": "I don't know where a proposed definition is supposed to be used so it's hard for me to comment on how relevant this objection is.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "92c2a037fb45b0975f16f5dec5f1c0af", "title": "No, I won't go there, it feels like you're trying to Pascal-mug me", "url": "https://www.lesswrong.com/posts/o7MXZgx3SGpqSxHYZ/no-i-won-t-go-there-it-feels-like-you-re-trying-to-pascal", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Rupert"], "summaries": ["One explanation for why [Pascal's mugging](https://en.wikipedia.org/wiki/Pascal%27s_mugging) feels intuitively wrong is that if we were to pay the mugger, we would open ourselves up to exploitation by any other agent. [Logical induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/) puts uncertainties on statements in such a way that it isn't exploitable by polynomial-time traders. Perhaps there is a connection here that can help us create AIs that don't get mugged."], "venue": "LessWrong", "opinion": "Non-exploitability is my preferred resolution to Pascal's mugging. However, it seems like such an obvious solution, yet there's very little discussion of it, which makes me think that there's some fatal flaw that I'm not seeing.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "c13cb614140741bdc180b480c4d5d786", "title": "A universal score for optimizers", "url": "https://www.lesswrong.com/posts/vWfFDahpnY3tnCLrp/a-universal-score-for-optimizers", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["levin"], "summaries": ["We can measure the optimization power of an agent as the log probability that a random agent matches the outcome that the agent achieves."], "venue": "LessWrong", "opinion": "Seems like a reasonable starting point to measure optimization power. As Alex Mennen [notes](https://www.lesswrong.com/posts/vWfFDahpnY3tnCLrp/a-universal-score-for-optimizers#e3Hsaqkc2xuWsHY2t), it's dependent on the specific action set chosen, and doesn't take into account the strength of preferences, only their ranking.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "1118547fef6e996e35245909bb647c19", "title": "Alignment problems for economists", "url": "https://www.lesswrong.com/posts/SdKiq2jmZ3LD9G6CK/alignment-problems-for-economists", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Chavam"], "summaries": ["What AI alignment problems could we outsource to economists? There are some who would be interested in working on alignment, but don't because it would be too much of a career risk."], "venue": "LessWrong", "opinion": "Unfortunately, the \"desirable properties\" for these problems all seem to conspire to make any particular problem fairly low impact.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "73e259c883bf508e0abef400afb503cb", "title": "On the Role of Counterfactuals in Learning", "url": "https://www.lesswrong.com/posts/MeYeLEr4RNGreJZcB/on-the-role-of-counterfactuals-in-learning", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Max Kanwal"], "summaries": ["This post hypothesizes that since humans are computationally bounded, we infer causal models using approximate inference (eg. Gibbs sampling), as opposed to a full Bayesian update. However, approximate inference algorithms depend a lot on choosing a good initialization. Counterfactuals fill this role."], "venue": "LessWrong", "opinion": "I think I've summarizes this post badly, because I didn't really understand it. In particular, I didn't understand the jump from \"humans do approximate inference over the space of models\" to \"counterfactuals form the initialization\".", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "987effeee73254fcb50882f139f43173", "title": "The Intentional Agency Experiment", "url": "https://www.lesswrong.com/posts/gBeEt7YmaHd8dmif7/the-intentional-agency-experiment", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Self-Embedded Agent"], "summaries": ["In order to determine whether an agent has some intention, we can check to see whether the agent would take actions that achieve the intent under a wide range of circumstances (either counterfactuals, or actual changes to the environment). For example, to show that an ant has agency and intends to find sugar, we could block its route to the sugar and notice that it finds a path around the obstacle."], "venue": "LessWrong", "opinion": "The motivation was to use this to deduce the intentions of a superintelligent AI system, but it seems that such an AI system could figure out it is being tested and respond in the \"expected\" way.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "68fd7f96b676d117e2d4c500e2d12adc", "title": "Two agents can have the same source code and optimise different utility functions", "url": "https://www.lesswrong.com/posts/zMHK9gFY6t48Exqup/two-agents-can-have-the-same-source-code-and-optimise", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Joar Skalse"], "summaries": ["Even if you have two agents with identical source code, their goals are in relation to themselves, so each agent will, for example, try to gain resources for itself. Since the two agents are now competing, they clearly have different utility functions."], "venue": "LessWrong", "opinion": "I'm somewhat confused -- I'm not sure what the point is here.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "AISFP blog posts"}
{"id": "1c8fb96cd98788a2e526b5237a62e217", "title": "AlphaFold: Using AI for scientific discovery", "url": "https://deepmind.com/blog/alphafold/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Andrew Senior", "John Jumper and Demis Hassabis"], "summaries": ["This post briefly describes AlphaFold, a system that does well at the protein folding problem. They train neural networks that can be used to evaluate how good a particular proposed protein structure is. This can be used to guide an evolutionary search that repeatedly replaces pieces of a protein structure with new protein fragments from a generative model. Alternatively, it can be used to construct a loss function for entire proteins, which allows us to use gradient descent to optimize the protein structure."], "venue": "DeepMind Blog", "opinion": "The approach here is to learn heuristics that can guide a top-level search algorithm, which is the sort of thing that I think deep learning is particularly well poised to improve right now. Note that gradient descent is a top-level search algorithm here, because a separate loss function is constructed _for every protein_, rather than having a single loss function that is used to train a network that works on all proteins. However, unlike other applications such as SMT solvers, the top-level search algorithm does not have some sort of \"correctness\" guarantee.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #36", "newsletter_category": "Applications"}
{"id": "577f809c9853aee1ad72e76a8f3bccc9", "title": "A major milestone for the treatment of eye disease", "url": "https://deepmind.com/blog/moorfields-major-milestone/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Mustafa Suleyman"], "summaries": ["DeepMind's partnership with Moorfields Eye Hospital has resulted in an AI system that can recognize features of eye disease and recommend treatment. Interestingly, in order to get interpretability, they train two networks instead of one, where one predicts the features of eye disease for all of the tissue (eg. haemorrhages, lesions and irregular fluids), and the other then makes a recommendation for treatment. This required them to label a subset of the dataset with feature markers in order to train the first model."], "venue": "DeepMind Blog", "opinion": "As interpretability goes, using a modular model with human-interpretable intermediate representations seems quite good -- it decouples the problem of understanding the model's output into two smaller problems. The big downside is that it requires a lot more labeling (877 segmented images in this case), and that the human-interpretable representation may not be the best one for the job. For example, if there are other visual cues besides the specific features DeepMind used that help with recommending treatment, this model will not be able to take advantage of them, while an end-to-end trained system could.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #20", "newsletter_category": "Applications"}
{"id": "41a33db6bf6f67bc5ce0ebfd1afc8128", "title": "The Machine Learning Behind Android Smart Linkify", "url": "https://ai.googleblog.com/2018/08/the-machine-learning-behind-android.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Lukas Zilka"], "summaries": ["Android now has Smart Linkify technology, which allows it to automatically find pieces of text that should link to another app (for example, addresses should link to Maps, dates and times to Calendar, etc). There are a lot of interesting details on what had to be done to get this to actually work in the real world. The system has two separate nets -- one which generates candidate entities, and another which says what kind of entity each one is. In between these two nets, we have a regular program that takes the set of proposed entities, and prunes it so that no two entities overlap, and then sends it off to the entity classification net. There are a few tricks to get the memory requirements down, and many dataset augmentation tricks to get the nets to learn particular rules that it would not otherwise have learned."], "venue": "Google AI Blog", "opinion": "I take this as an example of what advanced AI systems will look like -- a system of different modules, each with its own job, passing around information appropriately in order to perform some broad task. Some of the modules could be neural nets (which can learn hard-to-program functions), while others could be classic programs (which generalize much better and are more efficient). OpenAI Five also has elements of this -- the drafting system is a classic program operating on the win probabilities from the neural net. It's also interesting how many tricks are required to get Smart Linkify to work -- I don't know whether to think that this means generally intelligent AI is further away, or that the generally intelligent AI that we build will rely on these sorts of tricks.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #19", "newsletter_category": "Applications"}
{"id": "f3f48b155ae4fc220d43b6f510071628", "title": "Tackling Climate Change with Machine Learning", "url": "http://arxiv.org/abs/1906.05433", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["David Rolnick", "Priya L. Donti", "Lynn H. Kaack", "Kelly Kochanski", "Alexandre Lacoste", "Kris Sankaran", "Andrew Slavin Ross", "Nikola Milojevic-Dupont", "Natasha Jaques", "Anna Waldman-Brown", "Alexandra Luccioni", "Tegan Maharaj", "Evan D. Sherwin", "S. Karthik Mukkavilli", "Konrad P. Kording", "Carla Gomes", "Andrew Y. Ng", "Demis Hassabis", "John C. Platt", "Felix Creutzig", "Jennifer Chayes", "Yoshua Bengio"], "summaries": ["See [Import AI](https://jack-clark.net/2019/06/17/import-ai-151-us-army-trains-starcraft-ii-ai-teaching-drones-to-dodge-thrown-objects-and-fighting-climate-change-with-machine-learning/)."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #65", "newsletter_category": "Applications"}
{"id": "dd57595d2ca632adf64778eb07889168", "title": "Artificial Intelligence — The Revolution Hasn’t Happened Yet", "url": "https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Michael Jordan"], "summaries": ["There is a lot of hype at the moment around AI, particularly around creating AI systems that have human intelligence, since the thrill (and fear) of creating human intelligence in silicon causes overexuberance and excessive media attention. However, we _actually_ want to create AI systems that can help us improve our lives, often by doing things that humans are not capable of. In order to accomplish this, it is likely better to work directly on these problems, since human-like intelligence is neither necessary nor sufficient to build such systems. However, as with all new technologies, there are associated challenges and opportunities with these AI systems, and we are currently at risk of not seeing these because we are too focused on human intelligence in particular."], "venue": "Medium", "opinion": "There certainly is a lot of hype both around putting human intelligence in silicon, as well as the risks that surround such an endeavor. Even though I focus on such risks, I agree with Jordan that these are overhyped in the media and we would benefit from having more faithful coverage of them. I do disagree on some specific points. For example, he says that human-imitative AI is not sufficient to build some AI systems such as self-driving cars, but why couldn't an AI with human intelligence just do whatever humans would do to build self-driving cars? (I can think of answers, such as \"we don't know how to give the AI system access to all the data that humans have access to\", but I wish he had engaged more with this argument.) I do agree with the overall conclusion that in the near future humans will make progress on building such systems, and not by trying to give the systems \"human intelligence\". I also suspect that we disagree either on how close we are to human-imitative AI, or at what point it is worth it to start thinking about the associated risks, but it's hard to tell more from the article.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #3", "newsletter_category": "Critiques (AI)"}
{"id": "c11d4f7343d8625c34eb265446949d36", "title": "To Build Truly Intelligent Machines, Teach Them Cause and Effect", "url": "https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Kevin Hartnett interviewing Judea Pearl"], "summaries": ["An interview with Judea Pearl about causality, deep learning, and where the field is going."], "venue": "Quanta", "opinion": "This is fairly superficial, if you've read any of the other things that Pearl himself has written about deep learning, you'll know all of this already.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #7", "newsletter_category": "Critiques (AI)"}
{"id": "a41d18cc21ac2d618271c132d8aa12db", "title": "Ben Garfinkel on scrutinising classic AI risk arguments", "url": "https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Howie Lempel and Ben Garfinkel"], "summaries": ["In this podcast, Ben Garfinkel goes through several reasons why he is skeptical of classic AI risk arguments (some previously discussed <@here@>(@How Sure are we about this AI Stuff?@)). The podcast has considerably more detail and nuance than this summary.\n\nBen thinks that historically, it has been hard to affect transformative technologies in a way that was foreseeably good for the long-term-- it's hard e.g. to see what you could have done around the development of agriculture or industrialization that would have an impact on the world today. He thinks some potential avenues for long-term influence could be through addressing increased political instability or the possibility of lock-in, though he thinks that it’s unclear what we could do today to influence the outcome of a lock-in, especially if it’s far away.\n\nIn terms of alignment, Ben focuses on the standard set of arguments outlined in Nick Bostrom’s Superintelligence, because they are broadly influential and relatively fleshed out. Ben has several objections to these arguments:\n- He thinks it isn't likely that there will be a sudden jump to extremely powerful and dangerous AI systems, and he thinks we have a much better chance of correcting problems as they come up if capabilities grow gradually.\n- He thinks that making AI systems capable and making AI systems have the right goals are likely to go together.\n- He thinks that just because there are many ways to create a system that behaves destructively doesn't mean that the engineering process creating that system is likely to be attracted to those destructive systems; it seems like we are unlikely to accidentally create systems that are destructive enough to end humanity.\n\nBen also spends a little time discussing <@mesa-optimization@>(@Risks from Learned Optimization in Advanced Machine Learning Systems@), a much newer argument for AI risk. He largely thinks that the case for mesa-optimization hasn’t yet been fleshed out sufficiently. He also thinks it’s plausible that learning incorrect goals may be a result of having systems that are insufficiently sophisticated to represent goals appropriately. With sufficient training, we may in fact converge to the system we want.\n\nGiven the current state of argumentation, Ben thinks that it's worth EA time to flesh out newer arguments around AI risk, but also thinks that EAs who don't have a comparative advantage in AI-related topics shouldn't necessarily switch into AI. Ben thinks it's a moral outrage that we have spent less money on AI safety and governance than the 2017 movie 'The Boss Baby', starring Alec Baldwin."], "venue": "80,000 Hours", "opinion": "This podcast covers a really impressive breadth of the existing argumentation. A lot of the reasoning is similar to <@that I’ve heard from other researchers@>(@Takeaways from safety by default interviews@). 
I’m really glad that Ben and others are spending time critiquing these arguments; in addition to showing us where we’re wrong, it helps us steer towards more plausible risky scenarios.\n\nI largely agree with Ben’s criticisms of the Bostrom AI model; I think mesa-optimization is the best current case for AI risk and am excited to see more work on it. The parts of the podcast where I most disagreed with Ben were:\n- I think even in the absence of solid argumentation, I feel good about a prior where AI has a non-trivial chance of being existentially threatening, partially because I think it’s reasonable to put AI in the reference class of ‘new intelligent species’ in addition to ‘new technology’.\n- I’m not sure that institutions will address failures sufficiently, <@even if progress is gradual and there are warnings@>(@Possible takeaways from the coronavirus pandemic for slow AI takeoff@).\n\n**Rohin's opinion:** I recommend listening to the full podcast, as it contains a lot of detail that wouldn't fit in this summary. Overall I agree pretty strongly with Ben. I do think that some of the counterarguments are coming from a different frame than the classic arguments. For example, a lot of the counterarguments involve an attempt to generalize from current ML practice to make claims about future AI systems. However, I usually imagine that the classic arguments are basically ignoring current ML, and instead claiming that if an AI system is superintelligent, then it must be goal-directed and have convergent instrumental subgoals. If current ML systems don't lead to goal-directed behavior, I expect that proponents of the classic arguments would say that they also won't lead to superintelligent AI systems. I'm not particularly sold on this intuition either, but I can see its appeal.", "highlight": true, "read_more": "", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #108", "newsletter_category": "Critiques (Alignment)"}
{"id": "8b0170d6dcc4f15d01941e0a09745a76", "title": "We Shouldn’t be Scared by ‘Superintelligent A.I.’", "url": "https://www.nytimes.com/2019/10/31/opinion/superintelligent-artificial-intelligence.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Melanie Mitchell"], "summaries": ["This review of <@Human Compatible@>(@Human Compatible: Artificial Intelligence and the Problem of Control@) argues that people worried about superintelligent AI are making a mistake by assuming that an AI system \"could surpass the generality and flexibility of human intelligence while seamlessly retaining the speed, precision and programmability of a computer\". It seems likely that human intelligence is strongly integrated, such that our emotions, desires, sense of autonomy, etc. are all _necessary_ for intelligence, and so general intelligence can't be separated from so-called \"irrational\" biases. Since we know so little about what intelligence actually looks like, we don't yet have enough information to create AI policy for the real world."], "venue": "New York Times", "opinion": "The only part of this review I disagree with is the title -- every sentence in the text seems quite reasonable. I in fact do not want policy that advocates for particular solutions now, precisely because it's not yet clear what the problem actually is. (More \"field-building\" type policy, such as increased investment in research, seems fine.)\n\nThe review never actually argues for its title -- you need some additional argument, such as \"and therefore, we will never achieve superintelligence\", or \"and since superintelligent AI will be like humans, they will be aligned by default\". For the first one, while I could believe that we'll never build ruthlessly goal-pursuing agents for the reasons outlined in the article, I'd be shocked if we couldn't build agents that were more intelligent than us. For the second one, I agree with the outside view argument presented in _Human Compatible_: while humans might be aligned with each other (debatable, but for now let's accept it), humans are certainly not aligned with gorillas. We don't have a strong reason to say that our situation with superintelligent AI will be different from the gorillas' situation with us. (Obviously, we get to design AI systems, while gorillas didn't design us, but this is only useful if we actually have an argument why our design for AI systems will avoid the gorilla problem, and so far we don't have such an argument.)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #76", "newsletter_category": "Critiques (Alignment)"}
{"id": "a46fe27da153db29668639e2eaab591b", "title": "The seven deadly sins of AI predictions", "url": "https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2017-01-01T00:00:00Z", "authors": ["Rodney Brooks"], "summaries": ["This is an older article I was sent recently, that argues against AI risk and the idea that we will have AGI soon. It generally argues that AGI proponents are mistaken about current capabilities of AI and how long it will take to make progress in AGI research."], "venue": "", "opinion": "This article is aimed at refuting the superintelligent perfectly-rational agent model of AGI, and so feels to me like it's attacking a strawman of the argument for AI risk, but it does seem to me that many people do have beliefs similar to the ones he's arguing against. I partially agree with some of his criticisms and disagree with others, but overall I think most of the arguments are reasonable ones and worth knowing about.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #4", "newsletter_category": "Critiques (Alignment)"}
{"id": "6c192cc23dcca12291c37ea12975f61d", "title": "ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters", "url": "https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Rangan Majumder", "Junhua Wang"], "summaries": ["This paper introduces ZeRO and DeepSpeed, system optimizations that enable training significantly larger models than we have before.\n\n_Data parallelism_ is a way of splitting data across multiple machines to increase training throughput. Instead of training a model sequentially on one dataset, the dataset is split and models are trained in parallel. Resulting gradients on every machine are combined centrally and then used for back propagation. Previously, data parallelism approaches were memory-constrained because the entire model still had to fit on each GPU, which becomes infeasible for billion to trillion-parameter models.\n\nInstead of replicating each model on each machine, ZeRO partitions each model across machines and shares states, resulting in a per-machine memory reduction that is linear with the number of machines. (E.g., splitting across 64 GPUs yields a 64x memory reduction).\n\nIn addition to ZeRO, Microsoft is releasing DeepSpeed, a library which offers ZeRO as well as several other performance optimizations in an easy-to-use library for PyTorch, a popular open-source machine learning framework. They purport that their library allows for models that are 10x bigger, up to 5x faster to train, and up to 5x cheaper. They use DeepSpeed to train a [17-billion-parameter language model](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft) which exceeds state-of-the-art results in natural language processing."], "venue": "Microsoft Research Blog", "opinion": "I think this is a significant step in machine learning performance which may not be used heavily until average model sizes in general increase. The technique itself is pretty straightforward, which makes me think that as model sizes increase there may be a lot of similar \"low-hanging fruit\" that yield large performance gains.", "highlight": true, "read_more": "", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #87", "newsletter_category": "Deep learning"}
{"id": "8710bbc928716c1352b688d7d4328303", "title": "Deep Double Descent", "url": "https://openai.com/blog/deep-double-descent/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever"], "summaries": ["This blog post provides empirical evidence for the existence of the _double descent_ phenomenon, proposed in an earlier paper summarized below. Define the _effective model complexity_ (EMC) of a training procedure and a dataset to be the maximum size of training set such that the training procedure achieves a _train_ error of at most ε (they use ε = 0.1). Let's suppose you start with a small, underparameterized model with low EMC. Then initially, as you increase the EMC, the model will achieve a better fit to the data, leading to lower test error. However, once the EMC is approximately equal to the size of the actual training set, then the model can \"just barely\" fit the training set, and the test error can increase or decrease. Finally, as you increase the EMC even further, so that the training procedure can easily fit the training set, the test error will once again _decrease_, causing a second descent in test error. This unifies the perspectives of statistics, where larger models are predicted to overfit, leading to increasing test error with higher EMC, and modern machine learning, where the common empirical wisdom is to make models as big as possible and test error will continue decreasing.\n\nThey show that this pattern arises in a variety of simple settings. As you increase the width of a ResNet up to 64, you can observe double descent in the final test error of the trained model. In addition, if you fix a large overparameterized model and change the number of epochs for which it is trained, you see another double descent curve, which means that simply training longer can actually _correct overfitting_. Finally, if you fix a training procedure and change the size of the dataset, you can see a double descent curve as the size of the dataset decreases. This actually implies that there are points in which _more data is worse_, because the training procedure is in the critical interpolation region where test error can increase. Note that most of these results only occur when there is _label noise_ present, that is, some proportion of the training set (usually 10-20%) is given random incorrect labels. Some results still occur without label noise, but the resulting double descent peak is quite small. The authors hypothesize that label noise leads to the effect because double descent occurs when the model is misspecified, though it is not clear to me what it means for a model to be misspecified in this context."], "venue": "OpenAI Blog", "opinion": "While I previously didn't think that double descent was a real phenomenon (see summaries later in this email for details), these experiments convinced me that I was wrong and in fact there is something real going on. Note that the settings studied in this work are still not fully representative of typical use of neural nets today; the label noise is the most obvious difference, but also e.g. ResNets are usually trained with higher widths than studied in this paper. So the phenomenon might not generalize to neural nets as used in practice, but nonetheless, there's _some_ real phenomenon here, which flies in the face of all of my intuitions. 
\n\nThe authors don't really suggest an explanation; the closest they come is speculating that at the interpolation threshold there's only ~one model that can fit the data, which may be overfit, but then as you increase further the training procedure can \"choose\" from the various models that all fit the data, and that \"choice\" leads to better generalization. But this doesn't make sense to me, because whatever is being used to \"choose\" the better model applies throughout training, and so even at the interpolation threshold the model should have been selected throughout training to be the type of model that generalized well. (For example, if you think that regularization is providing a simplicity bias that leads to better generalization, the regularization should also help models at the interpolation threshold, since you always regularize throughout training.)\n\nPerhaps one explanation could be that in order for the regularization to work, there needs to be a \"direction\" in the space of model parameters that doesn't lead to increased training error, so that the model can move along that direction towards a simpler model. Each training data point defines a particular direction in which training error will increase. So, when the number of training points is equal to the number of parameters, the training points just barely cover all of the directions, and then as you increase the number of parameters further, that starts creating new directions that are not constrained by the training points, allowing the regularization to work much better. (In fact, the [original paper](https://arxiv.org/abs/1812.11118), summarized below, _defined_ the interpolation threshold as the point where number of parameters equals the size of the training dataset.) However, while this could explain model-wise double descent and training-set-size double descent, it's not a great explanation for epoch-wise double descent.", "highlight": true, "read_more": "Paper: Deep Double Descent: Where Bigger Models and More Data Hurt", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #77", "newsletter_category": "Deep learning"}
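The ResNet experiments above can't be reproduced in a few lines, but the same qualitative phenomenon shows up in a toy setting that matches the original paper's definition of the interpolation threshold (number of parameters equal to the number of training points): minimum-norm regression on random ReLU features. The task, noise level, and feature counts below are made up for illustration, and the exact numbers depend on the seed, though the test error typically peaks near 40 features (the training set size) and falls again well past it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task with label noise.
def make_data(n, noise=0.3):
    x = rng.uniform(-1, 1, size=(n, 1))
    y = np.sin(3 * x[:, 0]) + noise * rng.normal(size=n)
    return x, y

x_train, y_train = make_data(40)
x_test, y_test = make_data(500)

def random_relu_features(x, n_features, seed=1):
    # Fixed random features so train and test use the same featurization.
    w = np.random.default_rng(seed).normal(size=(x.shape[1] + 1, n_features))
    x1 = np.hstack([x, np.ones((x.shape[0], 1))])  # add a bias column
    return np.maximum(x1 @ w, 0.0)

# Sweep the number of random features through the interpolation threshold
# (40 training points), fitting with minimum-norm least squares (pseudo-inverse).
for n_features in [5, 10, 20, 40, 80, 200, 1000]:
    phi_train = random_relu_features(x_train, n_features)
    phi_test = random_relu_features(x_test, n_features)
    coef = np.linalg.pinv(phi_train) @ y_train
    test_mse = np.mean((phi_test @ coef - y_test) ** 2)
    print(f"{n_features:5d} features: test MSE = {test_mse:.3f}")
```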
{"id": "dcbb1c73c78d1559e6907ecd36c481d1", "title": "Are Deep Neural Networks Dramatically Overfitted?", "url": "https://lilianweng.github.io/lil-log/2019/03/14/are-deep-neural-networks-dramatically-overfitted.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lilian Weng"], "summaries": ["The concepts of underfitting and overfitting, and their relation to the bias-variance tradeoff, are fundamental to standard machine learning theory. Roughly, for a fixed amount of data, there is an optimal model complexity for learning from that data: any less complex and the model won't be able to fit the data, and any more complex and it will overfit to noise in the data. This means that as you increase model complexity, training error will go down to zero, but validation error will go down and then start turning back up once the model is overfitting.\n\nWe know that neural networks are much more expressive than the theory would predict is optimal, both from theorems showing that neural networks can learn any function (including one that provides a rather tight bound on number of parameters), as well as a [paper](https://arxiv.org/abs/1611.03530) showing that neural nets can learn random noise. Yet they work well in practice, achieving good within-distribution generalization.\n\nThe post starts with a brief summary of topics that readers of this newsletter are probably familiar with: Occam's razor, the Minimum Description Length principle, Kolmogorov Complexity, and Solomonoff Induction. If you don't know these, I strongly recommend learning them if you care about understanding within-distribution generalization. The post then looks at a few recent informative papers, and tries to reproduce them.\n\nThe [first one](https://arxiv.org/abs/1812.11118) is the most surprising: they find that as you increase the model complexity, your validation error goes down and then back up, as expected, but then at some point it enters a new regime and goes down again. However, the author notes that you have to set up the experiments just right to get the smooth curves the paper got, and her own attempts at reproducing the result are not nearly as dramatic.\n\nAnother [paper](https://arxiv.org/abs/1804.08838) measures the difficulty of a task based on its \"intrinsic dimension\", which Cody has summarized separately in this newsletter.\n\nThe [last paper](https://arxiv.org/abs/1902.01996) looks at what happens if you (a) reset some layer's parameters to the initial parameters and (b) randomize some layer's parameters. They find that randomizing always destroys performance, but resetting to initial parameters doesn't make much of a difference for later layers, while being bad for earlier layers. This was easy to reproduce, and the findings reemerge very clearly."], "venue": "Author's Website", "opinion": "I'm very interested in this problem, and this post does a great job of introducing it and summarizing some of the recent work. I especially appreciated the attempts at reproducing the results.\n\nOn the papers themselves, a regime where you already have ~zero training error but validation error goes _down_ as you increase model expressivity is exceedingly strange. 
Skimming the paper, it seems that the idea is that in the normal ML regime, you are only minimizing training error -- but once you can get the training error to zero, you can then optimize for the \"simplest\" model with zero training error, which by Occam's Razor-style arguments should be the best one and lead to better validation performance. This makes sense in the theoretical model that they use, but it's not clear to me how this applies to neural nets, where you aren't explicitly optimizing for simplicity after getting zero training error. (Techniques like regularization don't result in one-after-the-other optimization -- you're optimizing for both simplicity and low training error simultaneously, so you wouldn't expect this critical point at which you enter a new regime.) So I still don't understand these results. That said, given the difficulty with reproducing them, I'm not going to put too much weight on these results now.\n\nI tried to predict the results of the last paper and correctly predicted that randomizing would always destroy performance, but predicted that resetting to initialization would be okay for _early_ layers instead of later layers. I had a couple of reasons for the wrong prediction. First, there had been a few papers that showed good results even with random features, suggesting the initial layers aren't too important, and so maybe don't get updated too much. Second, the gradient of the loss w.r.t later layers requires only a few backpropagation steps, and so probably provides a clear, consistent direction moving it far away from the initial configuration, while the gradient w.r.t earlier layers factors through the later layers which may have weird or wrong values and so might push in an unusual direction that might get cancelled out across multiple gradient updates. I skimmed the paper and it doesn't really speculate on why this happens, and my thoughts still seem reasonable to me, so this is another fact that I have yet to explain.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #53", "newsletter_category": "Deep learning"}
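Here is a minimal sketch of the reset-vs-randomize probe from that last paper, on a synthetic task with a small MLP. The dataset, architecture, and training budget are all made up for illustration, so there is no guarantee this toy reproduces the early-vs-late layer asymmetry.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic classification task so the sketch is self-contained.
X = torch.randn(2000, 20)
y = (X[:, :5].sum(dim=1) > 0).long()

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
init_state = copy.deepcopy(model.state_dict())  # snapshot at initialization

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(500):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

def accuracy(m):
    return (m(X).argmax(dim=1) == y).float().mean().item()

trained_state = copy.deepcopy(model.state_dict())
print("trained accuracy:", accuracy(model))

# For each linear layer, either (a) reset it to its initial parameters or
# (b) re-randomize it, keeping all other layers at their trained values.
for idx in [0, 2, 4]:  # indices of the Linear layers in the Sequential
    for mode in ["reset_to_init", "re-randomize"]:
        model.load_state_dict(trained_state)
        layer = model[idx]
        if mode == "reset_to_init":
            layer.weight.data.copy_(init_state[f"{idx}.weight"])
            layer.bias.data.copy_(init_state[f"{idx}.bias"])
        else:
            nn.init.kaiming_uniform_(layer.weight)
            nn.init.zeros_(layer.bias)
        print(f"layer {idx}, {mode}: accuracy = {accuracy(model):.3f}")
```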
{"id": "6ceacc7842a52cb36e8b49be07a4b706", "title": "Better Language Models and Their Implications", "url": "https://blog.openai.com/better-language-models/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Alec Radford", "Jeffrey Wu", "Dario Amodei", "Daniela Amodei", "Jack Clark", "Miles Brundage", "and Ilya Sutskever"], "summaries": ["OpenAI has trained a scaled up GPT model using unsupervised learning (specifically, predicting the next word given a very large context) on a very large dataset with presumably very large compute. The resulting language model can produce impressive language samples (with some cherry-picking) that to my eye are particularly good at handling long-range dependencies, which makes sense since it is based on the [Transformer](https://arxiv.org/abs/1706.03762) (see Transformer-XL entry in [AN #44](https://mailchi.mp/6bfac400a0c3/alignment-newsletter-44)). It sets new state of the art performance on 7 out of 8 language modeling tasks, including difficult datasets such as [LAMBADA](https://arxiv.org/abs/1606.06031), _without using the training data for those tasks_. It can also be used for more structured tasks by providing a particular context -- for example, to summarize a document, you can provide the document followed by \"TL;DR:\" in order to induce GPT-2 to \"predict\" a summary. (They use a different prediction algorithm in order to improve summarization results, but I suspect even with regular prediction you'd get something in the right ballpark.) On these more structured tasks, it doesn't get anywhere near the state of the art set by specialized systems -- but again, this is without any finetuning for the specific task that we are testing.\n\nThe [paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) argues that in order to get generally capable AI systems, we will need to train them on many different tasks, as in meta-learning. However, we might expect that we need hundreds of thousands of tasks in order to learn something general, just as we need hundreds of thousands of examples in order to develop good classifiers. Prediction of the next word in natural language is particularly good for this, because in order to predict well across a huge variety of text, you need to become good at many different tasks such as question answering, summarization, and even translation. The biggest challenge is in creating a dataset that has sufficient diversity -- they do this by scraping all outbound links from Reddit with at least 3 karma.\n\nUnusually for research, but in accordance with [its charter](https://blog.openai.com/openai-charter/) ([AN #2](https://mailchi.mp/14782876a85d/alignment-newsletter-2)), OpenAI has decided not to release the model publicly, citing the possibility of malicious uses of the model. This has been controversial, with the debate raging for days on Twitter. I haven't paid enough attention to the debate to give a reasonable summary so you'll have to rely on other sources for that."], "venue": "OpenAI Blog", "opinion": "These are some pretty impressive results. I'm surprised that all of this came from a single order of magnitude more data and model size, I would have expected it to take more than that. I think this lends a lot of support to the hypothesis that unsupervised learning with sufficient amounts of compute and diverse data can lead to generally capable AI systems. 
(See this [SlateStarCodex post](https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-general-intelligence/) for a more detailed version of this take.) This is also some evidence that we will have AI systems that can pass the Turing Test before we have general AI systems, that is, the Turing Test is not AI-complete.", "highlight": true, "read_more": "Language Models are Unsupervised Multitask Learners", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #46", "newsletter_category": "Deep learning"}
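As a concrete illustration of the "TL;DR:" prompting trick mentioned above, here is a sketch using the publicly released small GPT-2 checkpoint via the `transformers` library. The library choice, decoding settings, and example document are my own placeholders; the blog post's exact decoding procedure differs.

```python
from transformers import pipeline

# Any GPT-2-style checkpoint works for the illustration; the small "gpt2"
# model was publicly released, unlike the full model discussed above.
generator = pipeline("text-generation", model="gpt2")

document = "(text of the article you want summarized goes here)"
prompt = document + "\nTL;DR:"

# Whatever the model generates after "TL;DR:" is treated as its "summary".
out = generator(prompt, max_new_tokens=40, do_sample=True, top_k=40)
print(out[0]["generated_text"][len(prompt):])
```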
{"id": "97935b701241c7bc252cf6bfdc43d963", "title": "Transformer-XL: Unleashing the Potential of Attention Models", "url": "http://ai.googleblog.com/2019/01/transformer-xl-unleashing-potential-of.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Zihang Dai", "Zhilin Yang and \nQuoc Le", "Google AI"], "summaries": ["[Transformer](https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html) architectures have become all the rage recently, showing better performance on many tasks compared to CNNs and RNNs. This post introduces Transformer-XL, an improvement on the Transformer architecture for very long sequences.\n\nThe key idea with the original Transformer architecture is to use self-attention layers to analyze sequences instead of something recurrent like an RNN, which has problems with vanishing and exploding gradients. An attention layer takes as input a query q and key-value pairs (K, V). The query q is \"compared\" against every key k, and that is used to decide whether to return the corresponding value v. In their particular implementation, for each key k, you take the dot product of q and k to get a \"weight\", which is then used to return the weighted average of all of the values. So, you can think of the attention layer as taking in a query q, and returning the \"average\" value corresponding to keys that are \"similar\" to q (since dot product is a measure of how aligned two vectors are). Typically, in an attention layer, some subset of Q, K and V will be learned. With _self-attention_, Q, K and V are all sourced from _the same place_ -- the result of the previous layer (or the input if this is the first layer). Of course, it's not exactly the output from the previous layer -- if that were the case, there would be no parameters to learn. They instead learn three _linear projections_ (i.e. matrices) that map from the output of the previous layer to Q, K and V respectively, and then feed the generated Q, K and V into a self-attention layer to compute the final output. And actually, instead of having a single set of projections, they have multiple sets that each contain three learned linear projections, that are all then used for attention, and then combined together for the next layer by another learned matrix. This is called _multi-head attention_.\n\nOf course, with attention, you are treating your data as a set of key-value pairs, which means that the order of the key value pairs does not matter. However, the order of words in a sentence is obviously important. To allow the model to make use of position information, they augment each word and add position information to it. You could do this just by literally appending a single number to each word embedding representing its absolute position, but then it would be hard for the neural net to ask about a word that was \"3 words prior\". To make this easier for the net to learn, they create a vector of numbers to represent the absolute position based on sinusoids such that \"go back 3 words\" can be computed by a linear function, which should be easy to learn, and add _(not concatenate!)_ it elementwise to the word embedding.\n\nThis model works great when you are working with a single sentence, where you can attend over the entire sentence at once, but doesn't work as well when you are working with eg. entire documents. 
So far, people have simply broken up documents into segments of a particular size N and trained Transformer models over these segments. Then, at test time, for each word, they use the past N - 1 words as context and run the model over all N words to get the output. This cannot model any dependencies that have range larger than N. The Transformer-XL model fixes this issue by taking the segments that vanilla Transformers use, and adding recurrence. Now, in addition to the normal output predictions we get from segments, we also get as output a new hidden state, that is then passed in to the next segment's Transformer layer. This allows for arbitrarily far long-range dependencies. However, this screws up our position information -- each word in each segment is augmented with _absolute_ position information, but this doesn't make sense across segments, since there will now be multiple words at (say) position 2 -- one for each segment. At this point, we actually want _relative_ positions instead of absolute ones. They show how to do this -- it's quite cool but I don't know how to explain it without going into the math and this has gotten long already. Suffice it to say that they look at the interaction between arbitrary words x_i and x_j, see the terms that arise in the computation when you add absolute position embeddings to each of them, and then change the terms so that they only depend on the difference j - i, which is a relative position.\n\nThis new model is state of the art on several tasks, though I don't know what the standard benchmarks are here so I don't know how impressed I should be."], "venue": "Google AI Blog", "opinion": "It's quite interesting that even though the point of Transformer was to get away from recurrent structures, adding them back in leads to significant improvements. Of course, the recurrent structure is now at the higher level of segments, rather than at the word or character level. This reminds me a lot of hierarchy -- it seems like we're using the Transformer as a basic building block that works on the ~sentence level so that our RNN-like structure can deal with a higher level of abstraction (which of course also helps with vanishing/exploding gradients).\n\nThere's an interesting pattern where hierarchy and structure seem to be a good inductive bias, that let you get good performance with limited compute and data, but as those limits subside, you're better off doing something that has less bias. This would predict that as we get more data and compute, we would want larger Transformer models (i.e. longer segments) and less recurrence. It would be interesting to see if that actually holds.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #44", "newsletter_category": "Deep learning"}
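A minimal NumPy sketch of the vanilla attention described above (a single head with learned Q/K/V projections and sinusoidal absolute positions added to the inputs), not the Transformer-XL relative-position scheme. The 1/sqrt(d) scaling inside the softmax is a standard detail not mentioned in the summary, and all sizes are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(h, Wq, Wk, Wv):
    """Single-head self-attention: Q, K, V are all produced from the previous
    layer's output h via learned linear projections."""
    Q, K, V = h @ Wq, h @ Wk, h @ Wv
    weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # how similar is each query to each key?
    return weights @ V  # weighted average of values for each position

def sinusoidal_positions(seq_len, d_model):
    """Absolute position vectors built from sinusoids (added, not concatenated)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000 ** (2 * i / d_model))
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16
h = rng.normal(size=(seq_len, d_model)) + sinusoidal_positions(seq_len, d_model)
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(3))
print(self_attention(h, Wq, Wk, Wv).shape)  # (8, 16)
```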
{"id": "62a4d2a19e9d522cf93335a20593d415", "title": "Reptile: A Scalable Meta-Learning Algorithm", "url": "https://blog.openai.com/reptile/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Alex Nichol and John Schulman"], "summaries": ["I somehow forgot to include this in past emails, so I'm including it now. Reptile is an algorithm for meta-learning, and in this paper is applied to few-shot classification, where given a few examples of different classes, you must learn a good classification algorithm for those classes. The authors show using a Taylor expansion that [MAML](https://arxiv.org/abs/1703.03400) and Reptile have very similar gradients to first order in alpha, the step size. Their evaluation shows that for the few-shot classification case, Reptile and MAML perform similarly (though they do not evaluate on reinforcement learning tasks, as in the MAML paper)."], "venue": "OpenAI Blog", "opinion": "This seems like an important advance in meta-learning, as it is much more computationally efficient than MAML while still achieving similar levels of performance.", "highlight": true, "read_more": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #4", "newsletter_category": "Deep learning"}
{"id": "e320387c08a5bdbd92a8368ee3f59d2a", "title": "Relational inductive biases, deep learning, and graph networks", "url": "http://arxiv.org/abs/1806.01261", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Peter W. Battaglia", "Jessica B. Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner", "Caglar Gulcehre", "Francis Song", "Andrew Ballard", "Justin Gilmer", "George Dahl", "Ashish Vaswani", "Kelsey Allen", "Charles Nash", "Victoria Langston", "Chris Dyer", "Nicolas Heess", "Daan Wierstra", "Pushmeet Kohli", "Matt Botvinick", "Oriol Vinyals", "Yujia Li", "Razvan Pascanu"], "summaries": ["\"Part position paper, part review, and part unification\", this paper emphasises the importance of combinatorial generalisation, which is key to how humans understand the world. It argues for approaches which perform computation over discrete entities and the relations between them, such as graph networks. The authors claim that CNNs and RNNs are so successful due to relational inductive biases - for example, the bias towards local structure induced by convolutional layers. Graph networks are promising because they can express arbitrary relational biases: any nodes can be connected with any others depending on the structure of the problem. Further, since graph networks learn functions which are reused for all nodes and edges, each one can be applied to graphs of any shape and size: a form of combinatorial generalisation.\n\nIn this paper's framework, each 'graph block' does computations over an input graph and returns an output graph. The relevant part of the output might be the values of edges, or those of nodes, or 'global' properties of the overall graph. Graph blocks can be implemented by standard neural network architectures or more unusual ones such as message-passing neural networks or non-local neural networks. The authors note some major open questions: how to generate the graphs in the first place, and how to adaptively modify them during the course of computation."], "venue": "ICLR 2018", "opinion": "This paper is an excellent holistic discussion of graph networks and reasons to think they are promising. I'm glad that it also mentioned the open problems, though, since I think they're pretty crucial to using graphs in deep learning, and current approaches in this area (e.g. capsule networks' dynamic control flow) aren't satisfactory.", "highlight": true, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #30", "newsletter_category": "Deep learning"}
{"id": "ae89d3d88b84d585f44112bc1b5a3fae", "title": "Seedbank — discover machine learning examples", "url": "https://medium.com/tensorflow/seedbank-discover-machine-learning-examples-2ff894542b57", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Michael Tyka"], "summaries": ["Seedbank provides interactive machine learning examples in Colab notebooks (think Jupyter notebooks in the cloud). This makes it really easy to just run example code without any setup, and even to modify it to play around with it. Google even provides a free GPU to make the training and inference faster!"], "venue": "Medium", "opinion": "I haven't explored it yet, but this seems great, especially if you want to learn ML. I have used Colab notebooks before and recommend them highly for small projects (maybe even large ones, I'm not sure), especially if you're familiar with Jupyter notebooks.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "Deep learning"}
{"id": "a42cd54c53d5f48282cfd8625d9d2a38", "title": "The Scaling Hypothesis", "url": "https://www.gwern.net/Scaling-hypothesis", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Gwern Branwen"], "summaries": ["This post centers around the **scaling hypothesis**:\n\n_Once we find a scalable architecture which can be applied fairly uniformly, we can simply train ever larger networks and ever more sophisticated behavior will emerge naturally as the easiest way to optimize for all the tasks and data. More powerful NNs are “just” scaled-up weak NNs, in much the same way that human brains look much like scaled-up primate brains._\n\nImportantly, we can get this sophisticated behavior just by training on simple objectives, such as “predict the next word”, as long as the data is sufficiently diverse. So, a priori, why might we expect the scaling hypothesis to be true?\n\nThe core reason is that optimal (or human-level) prediction of text really does require knowledge, reasoning, causality, etc. If you don’t know how to perform addition, you are probably not going to be able to predict the next word in the sentence “Though he started with six eggs, he found another fourteen, bringing his total to \\_\\_\\_\\_”. However, since any specific fact is only useful in a tiny, tiny number of cases, it only reduces the expected loss by a tiny amount. So, you’ll only see models learn this sort of behavior once they have exhausted all the other “easy wins” for predicting text; this will only happen when the models and dataset are huge.\n\nConsider a model tasked with predicting characters in text with a set of 64 characters (52 uppercase and lowercase letters, along with some punctuation). Initially it outputs random characters, assigning a probability of 1/64 to the correct character, resulting in a loss of 6 bits. Once you start training, the easiest win is to simply notice how frequent each character is; just noticing that uppercase letters are rare, spaces are common, vowels are common, etc. could get your error down to 4-5 bits. After this, it might start to learn what words actually exist; this might take 10^5 - 10^6 samples since each word is relatively rare and there are thousands of words to learn, but this is a drop in the bucket given our huge dataset. After this step, it may have also learned punctuation along the way, and might now be down to 3-4 bits. At this point, if you sample from the model, you might get correctly spelled English words, but they won’t make any sense.\n\nWith further training the model now has to pick up on associations between adjacent words to make progress. Now it needs to look at things 10 characters ago to predict the next character -- a far cry from our initial letter frequencies where it didn’t even need to look at other characters! For example, it might learn that “George W” tends to be followed by “ashington”. It starts to learn grammar, being able to correctly put verbs in relation to subjects and objects (that are themselves nouns). It starts to notice patterns in how words like “before” and “after” are used; these can then be used to better predict words in the future; at this point it’s clear that the model is starting to learn semantics. Now the loss is around 2 bits per character. 
A little more training and your model starts to produce sentences that sound human-like in isolation, but don’t fit together: a model might start a story about a very much alive protagonist and then talk about how she is dead in the next sentence. Training is now about fixing errors like these and each such fix gains a tiny amount of accuracy -- think ten thousandths of a bit. Every further 0.1 bits you gain represents the model learning a huge amount of relevant knowledge (and correspondingly each subsequent 0.1 bits takes a much larger amount of training and data). The final few fractions of a bit are the most important and comprise most of what we call “intelligence”.\n\n(The human baseline is a loss of 0.7 bits, with lots of uncertainty on that figure.)\n\nSo far this is a clever argument, but doesn’t really establish that this will work _in practice_ -- for example, maybe your model has to have 10^100 parameters to learn all of this, or maybe existing models and algorithms are not sophisticated enough to _find_ the right parameters (and instead just plateau at, say, 2 bits of loss). But recent evidence provides strong support for the scaling hypothesis:\n\n1. The <@scaling laws@>(@Scaling Laws for Neural Language Models@) line of work demonstrated that models could be expected to reach the interesting realm of loss at amounts of compute, data, and model capacity that seemed feasible in the near future.\n2. Various projects have trained large models and demonstrated that this allows them to solve tasks that they weren’t explicitly trained for, often in a more human-like way and with better performance than a more supervised approach. Examples include <@GPT-3@>(@Language Models are Few-Shot Learners@), [Image GPT](https://openai.com/blog/image-gpt/), [BigGAN](https://arxiv.org/abs/1809.11096), <@AlphaStar@>(@AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning@), etc. (The full post has something like 25 examples.)\n\nThe author then argues that it seems like most researchers seem to be completely ignoring this phenomenon. OpenAI is the only actor that really has the conviction needed to put a large amount of resources behind a project based on the scaling hypothesis (such as GPT-3); DeepMind seems to believe in a weaker version where we need to build a bunch of “modules” similar to those in the human brain, but that those modules can then be scaled up indefinitely. Other actors seem to not take either scaling hypothesis very seriously."], "venue": "Author's Website", "opinion": "In my view, the scaling hypothesis is easily the most important hypothesis relevant to AI forecasting and AI development models, and this is the best public writeup of it that I know of. (For example, it seems to be an implicit assumption in the <@bio anchors framework@>(@Draft report on AI timelines@).) I broadly agree with the author that it’s a bit shocking how few people seem to be taking it seriously after OpenAI Five, AlphaStar, GPT-3, Copilot, etc.\n\nI *think* this includes the AI safety space, where as far as I can tell the primary effect has been that it is even more fashionable to have shorter timelines, whereas it hasn’t affected AI safety research very much. 
However, I do know around 3-4 researchers who changed what they were working on based on changing their mind about the scaling hypothesis, so it’s possible there are several others I don’t know about.\n\nAs a simple example of how the scaling hypothesis affects AI safety research, it suggests that the training objective (“predict the next word”) is relatively unimportant in determining properties of the trained agent; in contrast, the dataset is much more important. This suggests that analyses based on the “reward function used to train the agent” are probably not going to be very predictive of the systems we actually build.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #156", "newsletter_category": "Deep learning"}
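The bits-per-character arithmetic in the summary above is easy to check directly. The text sample below is a toy stand-in, so the unigram number is only indicative of the "easy wins" the post describes.

```python
import math
from collections import Counter

text = ("though he started with six eggs he found another fourteen "
        "bringing his total to twenty")

# A model that assigns 1/64 to every character: loss = log2(64) = 6 bits/char.
print(math.log2(64))  # 6.0

# Just learning character frequencies already buys a lot: cross-entropy of a
# unigram model evaluated on this (tiny, illustrative) sample.
counts = Counter(text)
total = sum(counts.values())
unigram_bits = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(round(unigram_bits, 2))  # roughly 4 bits/char here, versus 6 for uniform
```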
{"id": "af219135f62fae6ab1bf9a9867dd8976", "title": "Feature-wise transformations", "url": "https://distill.pub/2018/feature-wise-transformations/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Vincent Dumoulin", "Ethan Perez", "Nathan Schucher", "Florian Strub", "Harm de Vries", "Aaron Courville and Yoshua Bengio"], "summaries": ["This Distill article is about transformations on features using FiLM (feature-wise linear modulation). A FiLM layer is used to \"condition\" a neural network on auxiliary information, which just means providing the input to the neural network in a way that it can use it effectively. This can be used to integrate multiple sources of information -- for example, in visual question answering (VQA), the main part of the network can be an image processing pipeline, and FiLM can be used to turn the natural language question about the image into a task representation and integrate it into the pipeline, and the full network can be trained end-to-end. The FiLM layer works by first using a subnetwork to turn the auxiliary information (such as the question in VQA) into a \"task representation\" (a new representation chosen by the neural network), which is then used as the parameters for an affine transformation of the features in the main pipeline. Importantly, each feature is treated independently of other features, so the FiLM layer can't create interactions between features. Yet, this still works well in many different contexts.\n\nSince it is a Distill paper, it then goes into a ton of detail about lots of interesting details, such as how architectures in a variety of ML tasks can be thought of as FiLM, how FiLM relates to other ideas such as attention, how we can often interpolate between different auxiliary information by taking a weighted combination of the corresponding task information, how conditioning through concatenation is equivalent to FiLM with only a bias and no scaling, etc."], "venue": "Distill", "opinion": "I really enjoy Distill articles, they are consistently far more readable and understandable than typical papers (or even blog posts), even without including the interactive visualizations. This article is no exception. I didn't have particularly strong opinions on how to condition neural nets before, but now I think I will think about FiLM and how it could apply.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "Deep learning"}
{"id": "141dc2001d69af4cca914e29374e3a36", "title": "Pretrained Transformers as Universal Computation Engines", "url": "https://bair.berkeley.edu/blog/2021/03/23/universal-computation/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Kevin Lu", "Aditya Grover", "Pieter Abbeel", "Igor Mordatch"], "summaries": ["We’ve seen some very impressive few-shot learning results from <@GPT-3@>(@Language Models are Few-Shot Learners@) and [CLIP](https://openai.com/blog/clip/). These work by training a large Transformer model on a giant pile of data in a particular modality (such as language or images), and then we express tasks within that modality (e.g. summarization for a language model). This paper asks the question: could those models also help with tasks in a _different_ modality? Surprisingly, the answer seems to be yes!\n\nSpecifically, the authors take the pretrained GPT-2 models and finetune on very different tasks, changing only the following parameters (which make up just ~0.1% of the model):\n1. Input layer: This is a linear layer that transforms the input tokens before they go through the attention layers.\n2. Output layer: This is a linear layer that uses the final representations to solve some downstream tasks.\n3. Layer norm: These parameters are meant to mimic the statistics of the data distribution, and so need to be finetuned.\n4. Positional embeddings. (They say that it only makes a slight difference to finetune these.)\n\nFor downstream tasks, they consider tasks like memorizing bit sequences, computing XORs, MNIST and CIFAR (where each image is represented as a sequence of 64 tokens, and each token is a 4x4 patch of the image), and protein folding. None of these tasks involve any use of natural language -- the input modality is completely different.\n\nThe headline result: these sorts of models tend to achieve similar performance as Transformer models trained from scratch on the same tasks, and better performance than models initialized with random weights and then finetuned using the method above. This suggests that _even for new data modalities_ the GPT-2 pretraining helps, suggesting that the model has learned some “universal computation” in its attention layers (hence the title). Note though that the differences from the random initialization are not that large (2-6 percentage points, except 25 percentage points in Bit Memory), suggesting that a lot of this might be the inductive bias of the Transformer architecture itself.\n\nThe rest of the paper delves into this more, running several experiments to learn more empirical facts. For example:\n1. If the Transformers are pretrained on images instead of language, you do better on image tasks like CIFAR, but not as well on the other tasks.\n2. Transformers do a _lot_ better than LSTMs.\n3. Pretrained Transformers also learn significantly faster than randomly initialized Transformers."], "venue": "arXiv", "opinion": "This is a pretty cool result. 
I’m not sure what I would have predicted ahead of time -- the gains are small enough that I could believe I might have predicted them on a general basis of “probably training on realistic data gives you slightly better patterns of thought, so probably if you try hard enough you can find a small set of parameters to finetune that would work well”.\n\nHowever, another possible line of reasoning would be “the attention heuristics learned for language would probably throw away lots of information if we applied them directly to the input tokens, and the input linear layer may not be enough to handle this issue, so probably this just destroys any good performance of the model”. I could see myself being convinced by that too.", "highlight": true, "read_more": "Paper: Pretrained Transformers as Universal Computation Engines", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #144", "newsletter_category": "Deep learning"}
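Here is a rough sketch of the finetuning setup described above, using the HuggingFace `gpt2` checkpoint (my choice of library and checkpoint, not necessarily the authors' exact code): freeze everything except the layer norms and positional embeddings, and add new input and output layers for the non-language task. The 48-dimensional tokens follow the summary's description of 4x4 image patches (for an RGB image).

```python
import torch
import torch.nn as nn
from transformers import GPT2Model

gpt2 = GPT2Model.from_pretrained("gpt2")

# Freeze everything, then unfreeze only layer norms and position embeddings
# (parameter names containing "ln" or "wpe" in this checkpoint).
for name, param in gpt2.named_parameters():
    param.requires_grad = "ln" in name or "wpe" in name

# New (trainable) input and output layers for the non-language task,
# e.g. CIFAR-10 with each 4x4 image patch flattened into a 48-d token.
input_proj = nn.Linear(48, gpt2.config.n_embd)
output_head = nn.Linear(gpt2.config.n_embd, 10)

patches = torch.randn(2, 64, 48)                 # (batch, 64 patches, 4*4*3)
hidden = gpt2(inputs_embeds=input_proj(patches)).last_hidden_state
logits = output_head(hidden.mean(dim=1))         # pool over the sequence
print(logits.shape)                              # torch.Size([2, 10])
```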
{"id": "616d00a138b2e33d957b7facd0c9d22b", "title": "Fast and Easy Infinitely Wide Networks with Neural Tangents", "url": "https://ai.googleblog.com/2020/03/fast-and-easy-infinitely-wide-networks.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Roman Novak*", "Lechao Xiao*", "Samuel S. Schoenholz*", "Jiri Hron", "Jaehoon Lee", "Alexander A. Alemi", "Jascha Sohl-Dickstein"], "summaries": ["The success of Deep Learning has led researchers to explore why they're such effective function approximators. One key insight is that increasing the width of the network layers makes it *easier* to understand. More precisely, as the width is sent to infinity the network's learning dynamics can be approximated with a Taylor expansion and become a kernel problem. This kernel has an exact form in the limit and is referred to as the neural tangent kernel (NTK). Ultimately, this allows us to model the network with a simpler model known as a Gaussian process. Unfortunately, showing this analytically is difficult and creating efficient implementations is cumbersome. **The authors address this problem by introducing \"Neural Tangents\", a library that makes creating infinite-width networks as easy as creating their finite counterparts with libraries such as PyTorch or TensorFlow.** They include support for convolutions with full-padding, residual-connections, feed-forward networks, and support for a variety of activation functions. Additionally, there is out-of-the-box support for CPU, GPU, and TPU. Moreover, uncertainty comparisons with finite ensembles are possible via exact Bayesian inference."], "venue": "ICLR 2020", "opinion": "I took a look at the repository and found there to be ample documentation available making it easy for me to try training my own infinite-width network. The authors derive a practical way to compute the exact convolutional NTK which I find impressive and which seems to be the main technical contribution of this paper. While the authors note that there are some conditions necessary to enter the so-called \"kernel regime\", in practice it seems as though you can often get away with merely large network widths. If for nothing else, I'd recommend at least perusing the notebooks they have available or taking a look at the visualization they present of a neural network converging to a Gaussian process, which relies on a subtle application of the law of large numbers. ", "highlight": false, "read_more": "Paper: Neural Tangents: Fast and Easy Infinite Neural Networks in Python", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #96", "newsletter_category": "Deep learning"}
{"id": "eb124824e5560c771af9e63a620e7d35", "title": "Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization", "url": "http://arxiv.org/abs/2002.10657", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Satrajit Chatterjee"], "summaries": ["Deep neural networks trained with gradient descent do well at generalizing from their training set, but the field currently has relatively little understanding of why that is. Large networks have enough parameters to fully memorize the training set and can do so even if trained on data with entirely random labels. This allows for many functions that would fit the training set well, but not generalize, a phenomenon known as overfitting. The question is how gradient descent picks out one of a small subset of functions that will generalize well. \n\nThe *Coherent Gradients* hypothesis, introduced here and tested further in [this paper](http://arxiv.org/abs/2003.07422), is that this results from per-example gradients being averaged during gradient descent. For each example, some of the gradient points in a direction that is idiosyncratic to that example, but some of it points towards a more general solution. When the average is taken across these gradients, the more general directions reinforce each other while the example-specific directions cancel out. As a result, the training process moves faster towards more general directions.\n\nIn order to test this hypothesis, they run two experiments. First they use varying amounts of label noise (corrupting a fraction of the dataset to have random labels). They predict and find that:\n1. More label noise leads to slower learning.\n2. The uncorrupted examples will be learned faster.\n\nThe next experiment tests a novel form of regularization, called winsorization, where they clip the gradients on a per-example and per-parameter basis to prevent a single example from dominating the gradient, effectively curtailing the component of the gradient that is example-specific. Since the computation of per-example gradients is expensive, when scaling this up to larger networks, they instead use the median of 3 mini-batches to address outliers. Theexperiments suggest that winsorization reduces overfitting and in particular prevents neural nets from learning randomly labeled data."], "venue": "ICLR 2020", "opinion": "The hypothesis makes sense to me and the experiments do seem to bear out their conclusions. However, none of the results of the experiments were surprising to me and seem to me like they could be consistent with other explanations for generalization. I would be more convinced if the Coherent Gradients hypothesis made predictions that were different from other leading theories and then those turned out to be true.", "highlight": false, "read_more": "Explaining Memorization and Generalization: A Large-Scale Study with Coherent Gradients", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #93", "newsletter_category": "Deep learning"}
{"id": "0828a85816699f27508bfd7f2d13b665", "title": "SLIDE : In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems", "url": "http://arxiv.org/abs/1903.03129", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Beidi Chen", "Tharun Medini", "James Farwell", "Sameh Gobriel", "Charlie Tai", "Anshumali Shrivastava"], "summaries": ["This paper presents an algorithmic technique called SLIDE (Sub-LInear Deep learning Engine) which takes advantage of sparsity in inputs and activations to speed up the training of large neural networks.\n\nSuppose that activations at layer k are a_k. Then, the ith element of a_{k+1} is given by the dot product of a_k and w_i for some weight vector w_i. Call w_i the ith neuron of layer k + 1. The largest activations in a_{k+1} are the ones for whom w_i has high magnitude and points in the same direction as a_k. The core proposal of SLIDE is to only compute the largest elements of a_{k+1}, which they call the “activated neurons”, and approximate all of the others are zero, allowing us to avoid a lot of computation.\n\nIn order to do this, we maintain a data structure called a _locality-sensitive hash table_, which when given an activation a_k can tell us which neurons (w_is) are most similar. We can then compute the outputs for just those neurons to get a_{k+1}. In this way, we can effectively ‘sparsify’ the network, calculating the activations and updating the weights of only a small subset of the neurons. This is what gives us our computational gains.\n\nSLIDE randomly initializes weights in the network and generates the locality-sensitive hash table that maps activations to activated neurons. To take a gradient step on an input, it calculates the activated neurons in a forward pass, then backpropagates through the activated neurons, and then updates the locality-sensitive hash table. The hash table update is computationally expensive, and SLIDE uses several mechanisms to make it less costly, such as updating hash tables less frequently later in the training process since gradients are likely to change less then. Due to the sparsity, the gradients for different inputs are often changing different neurons, and so SLIDE asynchronously parallelizes gradient updates without worrying about race conditions, allowing for much better scaling with additional cores.\n\nThe paper evaluates SLIDE on large multi-label classification tasks, which must run on neural networks with extremely wide final layers. It finds that the CPUs running SLIDE are 1.8 times faster in clock-time than the GPU on the Delicious 200k dataset, and 2.7 times faster than the GPU on the Amazon-670K dataset, with an additional ~1.3x speed-up after performing cache optimization on SLIDE. Scalability tests suggest that the SLIDE CPUs beat GPU performance even when using only 8 cores. The paper claims that SLIDE’s computational benefits come because the number of neurons sampled in the wide final layer is extremely small-- fewer than 0.5% of active neurons."], "venue": "MLSys 2020", "opinion": "The tasks they test on are _extremely_ sparse: since there are hundreds of thousands of possible labels, even if you take the top ~thousand predictions in the final layer (which corresponds to most of the computation), that’s only 1% of the total number of predictions, saving you 99% of the arithmetic you would have had to do. 
The input features are also very sparse: in both datasets, less than 0.06% (yes, percent) of features are non-zero. It’s cool that under such conditions you can design an algorithm that is ~an order of magnitude better on cost, but it’s not going to be “the death of NVIDIA” or anything like that — without further optimizations, SLIDE will be worse than regular Tensorflow on GPU for something like ImageNet.\n\nI'm also not sure I agree with the 'thesis' of the paper that smart algorithms beat hardware acceleration-- it seems to me like there are large gains from investing in the combination of the two. Even if GPUs aren't optimized to run SLIDE, I can imagine specialized hardware optimized for SLIDE creating even bigger performance gains.", "highlight": false, "read_more": "", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #92", "newsletter_category": "Deep learning"}
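A toy illustration of the core SLIDE mechanism, using random-hyperplane (SimHash) locality-sensitive hashing to pick the "activated neurons" of a single wide layer (my sketch with made-up sizes and a single hash table; the real system uses several tables, rebuilds them during training, and runs in multithreaded C++):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
d_in, d_out, n_bits = 128, 10000, 8
W = rng.standard_normal((d_out, d_in))        # one row per neuron of the wide layer
H = rng.standard_normal((n_bits, d_in))       # random hyperplanes for SimHash

def simhash(v):
    return tuple((H @ v > 0).astype(int))     # sign pattern = bucket id

buckets = defaultdict(list)
for i in range(d_out):
    buckets[simhash(W[i])].append(i)          # index neurons by their hash bucket

def sparse_forward(a):
    active = buckets.get(simhash(a), [])      # the "activated neurons" for this input
    out = np.zeros(d_out)
    out[active] = W[active] @ a               # compute only the selected dot products
    return out, active

out, active = sparse_forward(rng.standard_normal(d_in))
```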
{"id": "b4a06c2c258be0378e71d7ed1f85e8b1", "title": "A new model and dataset for long-range memory", "url": "https://deepmind.com/blog/article/A_new_model_and_dataset_for_long-range_memory", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Jack W. Rae", "Anna Potapenko", "Siddhant M. Jayakumar", "Timothy P. Lillicrap"], "summaries": ["A central challenge in language modeling is capturing long-range dependencies. For example, a model needs to be able to identify the antecedent of a pronoun even if it is much earlier in the text. Existing datasets consist of news and Wikipedia articles, where articles have average lengths ranging from 27 to 3,600 words. This paper introduces a dataset of Project Gutenberg books, PG-19, where each book has a much longer average length of 69,000 words. This benchmark enables comparison of how well algorithms can make use of information that is spread out across a much larger context.\n\nThey then introduce the *Compressive Transformer*, which builds on the <@*TransformerXL*@>(@Transformer-XL: Unleashing the Potential of Attention Models@). The *TransformerXL* saves old activations into a FIFO queue, discarding them when the queue is full. The *Compressive Transformer* instead has two FIFO queues: the first stores the activations just like *TransformerXL*, but when activations are ejected, they are compressed and added to the second queue. This functions as a sort of long-term memory, storing information from a longer period of time but in a compressed format. \n\nThey try a number of types of compression function and find that it is best to use a 1D convolutional compression function with an auxiliary loss that leads to lossy compression, where information that is not attended to can be removed. The compression network and the Transformer optimize independent losses without any mixing. \n\nThey find that the *Compressive Transformer* improves on *TransformerXL* on their new PG-19 dataset and is state of the art on the already existing WikiText-103 and Enwik8 benchmarks. They also inspect where the network attends to and find that more attention is paid to the compressed memory than the oldest activations in regular memory, showing that the network is preserving some valuable information."], "venue": "DeepMind Blog", "opinion": "I like the idea of saving long-term memory in a more efficient but lower-dimensional format than short-term memory. The current <@trend@>(@Scaling Laws for Neural Language Models@) in language modelling is that more computation leads to better results, so I think that algorithms that target computation on the most relevant information are promising. I’d be interested to see (and curious if the authors tried) more continuous variants of this where older information is compressed at a higher rate than newer information, since it seems rather arbitrary to split into two FIFO queues where one has a fixed compression rate. \n\nI’m not well calibrated on the meaning of the evaluation metrics for NLP, so I don’t have a sense of how much of an improvement this is over the *TransformerXL*. 
I looked through some of the example text they gave in the blog post and thought it was impressive but has clear room for improvement.", "highlight": false, "read_more": "Paper: Compressive Transformers for Long-Range Sequence Modelling", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #91", "newsletter_category": "Deep learning"}
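A rough sketch of the two-queue memory update described above (shapes and names are my assumptions, not DeepMind's code; `mem` and `cmem` start out as empty tensors of shape `[0, batch, d_model]`, and the compression rate of 3 is in the spirit of the paper's strided-convolution variant):

```python
import torch
import torch.nn as nn

class CompressiveMemory(nn.Module):
    def __init__(self, d_model, mem_len=512, cmem_len=512, rate=3):
        super().__init__()
        self.mem_len, self.cmem_len, self.rate = mem_len, cmem_len, rate
        # strided 1D convolution: compresses `rate` old activations into one slot
        self.compress = nn.Conv1d(d_model, d_model, kernel_size=rate, stride=rate)

    def update(self, mem, cmem, new_acts):
        # mem, cmem, new_acts: [seq, batch, d_model]
        mem = torch.cat([mem, new_acts], dim=0)
        overflow, mem = mem[:-self.mem_len], mem[-self.mem_len:]   # evict oldest activations
        if overflow.size(0) >= self.rate:
            x = overflow.permute(1, 2, 0)                          # [batch, d_model, seq]
            compressed = self.compress(x).permute(2, 0, 1)         # back to [seq', batch, d_model]
            cmem = torch.cat([cmem, compressed], dim=0)[-self.cmem_len:]
        return mem, cmem
```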
{"id": "28d28365b1466383c6e0336e31607887", "title": "The Quiet Semi-Supervised Revolution", "url": "https://towardsdatascience.com/the-quiet-semi-supervised-revolution-edec1e9ad8c", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Vincent Vanhoucke"], "summaries": ["Historically, semi-supervised learning that uses small amounts of labelled data combined with a lot of unlabeled data only helped when there was very little labelled data available. In this regime, both supervised and semi-supervised learning were too inaccurate to be useful. Furthermore, approaches like using a representation learnt by an autoencoder for classification empirically limited asymptotic performance. This is strange because using more data should not lead to worse performance.\n\nRecent trends suggest that this might change soon: semi-supervised systems have begun to outperform supervised systems by larger and larger margins in the low data regime and their advantage now extends into regimes with more and more data. An important driver of this trend is the idea of using data augmentation for more consistent self-labelling.\n\nBetter semi-supervised learning might for example be useful for federated learning which attempts to respect privacy by learning locally on (labelled) user data and sending the models trained by different users to be combined in a central server. One problem with this approach is that the central model might memorize some of the private models' idiosyncracies such that inference about the private labels is possible. Semi-supervised learning makes this harder by reducing the amount of influence private data has on the aggregate model."], "venue": "Towards Data Science", "opinion": "Because the way humans classify things are strongly influenced by our priors about how classes \"should\" behave, learning with limited data most likely requires some information about these priors. Semi-supervised learning that respects that data augmentation does not change the correct classification might be an efficient and scalable way to force some of these priors onto a model. Thus it seems likely that more diverse and sophisticated data augmentation could lead to further improvements in the near term. On the other hand, it seems like a lot of our priors would be very hard to capture only using automatic data augmentation, such that other methods to transfer our priors are still important.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #78", "newsletter_category": "Deep learning"}
{"id": "6b4caacc6afd204bbfe26c3f3c7f4f48", "title": "Uniform convergence may be unable to explain generalization in deep learning", "url": "https://locuslab.github.io/2019-07-09-uniform-convergence/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Vaishnavh Nagarajan"], "summaries": ["This post argues that existing generalization bounds cannot explain the empirical success of neural networks at generalizing to the test set.\n\n\"What?\", you say if you're like me, \"didn't we already know this? Generalization bounds depend on your hypothesis space being sufficiently small, but [neural nets can represent any reasonable function](https://en.wikipedia.org/wiki/Universal_approximation_theorem)? And even if you avoid that by considering the size of the neural net, we know that empirically [neural nets can learn randomly labeled data](https://arxiv.org/abs/1611.03530), which can never generalize; surely this means that you can't explain generalization without reference to some property of the dataset, which generalization bounds typically don't do?\"\n\nIt turns out that the strategy has been to prove generalization bounds that depend on the _norm of the weights of the trained model_ (for some norm that depends on the specific bound), which gets around both these objections, since the resulting bounds are independent of the number of parameters, and depend on the trained model (which itself depends on the dataset). However, when these bounds are evaluated on a simple sphere-separation task, they _increase_ with the size of the training dataset, because the norms of the trained models increase.\n\nOkay, but can we have a stronger argument than mere empirical results? Well, all of these bounds depend on a _uniform convergence bound_: a number that bounds the absolute difference between the train and test error for _any_ model in your hypothesis space. (I assume the recent generalization bounds only consider the hypothesis space \"neural nets with norms at most K\", or some suitable overapproximation of that, and this is how they get a not-obviously-vacuous generalization bound that depends on weight norms. However, I haven't actually read those papers.)\n\nHowever, no matter what hypothesis space these bounds choose, to get a valid generalization bound the hypothesis space must contain (nearly) all of the models that would occur by training the neural net on a dataset sampled from the underlying distribution. What if we had the actual smallest such hypothesis space, which only contained the models that resulted from an actual training run? The authors show that, at least on the sphere-separation task, the uniform convergence bound is still extremely weak. Let's suppose we have a training dataset S. Our goal is now to find a model in the hypothesis space which has a high absolute difference between actual test error, and error in classifying S. (Recall that uniform convergence requires you to bound the absolute difference for _all_ models in your hypothesis class, not just the one trained on S.) The authors do so by creating an \"adversarial\" training dataset S' that also could have been sampled from the underlying distribution, and training a model on S'. This model empirically gets S almost completely wrong. 
Thus, this model has low test error, but high error in classifying S, which forces the uniform convergence bound to be very high."], "venue": "NeurIPS 2019", "opinion": "I enjoyed this blog post a lot (though it took some time to digest it, since I know very little about generalization bounds). It constrains the ways in which we can try to explain the empirical generalization of neural networks, which I for one would love to understand. Hopefully future work will explore new avenues for understanding generalization, and hit upon a more fruitful line of inquiry.", "highlight": false, "read_more": "Paper", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #77", "newsletter_category": "Deep learning"}
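For readers who want the object under discussion written out: a uniform convergence bound is, roughly and in my notation (not the post's), a guarantee that with probability at least $1 - \delta$ over the draw of the $m$-sample training set $S$,

$$\sup_{h \in \mathcal{H}} \big| \operatorname{err}_{\mathcal{D}}(h) - \operatorname{err}_{S}(h) \big| \;\le\; \epsilon_{\mathrm{unif}}(\mathcal{H}, m, \delta),$$

where $\operatorname{err}_{\mathcal{D}}$ is population (test) error and $\operatorname{err}_{S}$ is error on $S$. The adversarial construction exhibits a trained model in $\mathcal{H}$ with low population error but near-total error on $S$, so the left-hand side, and hence any valid $\epsilon_{\mathrm{unif}}$, must be large even for the smallest possible $\mathcal{H}$.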
{"id": "0b9504a4891ab034c06e7d44cc162745", "title": "Understanding the generalization of ‘lottery tickets’ in neural networks", "url": "https://ai.facebook.com/blog/understanding-the-generalization-of-lottery-tickets-in-neural-networks", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Ari Morcos", "Yuandong Tian"], "summaries": ["The <@lottery ticket hypothesis@>(@The Lottery Ticket Hypothesis at Scale@) states that a randomly initialized dense or convolutional neural network contains (sparse) subnetworks, called \"winning tickets\", which can be trained to achieve performance similar to the trained base network while requiring a lot less compute. \n\nThe blogpost summarizes facebook AI's recent investigations of the generalization of winning tickets and the generality of the hypothesis. Because winning tickets are hard to find, we would like to reuse the ones we have found for similar tasks. To test whether this works, the authors trained classifiers, pruned and reset them to obtain winning tickets on different image datasets and then trained these on other datasets. Winning tickets derived from similar datasets relevantly outperform random subnetworks after training and ones derived from larger or more complex datasets generalize better. For example, tickets from ImageNet are consistently among the best and tickets from CIFAR-100 generalize better than those from CIFAR-10.\n\nExperiments in natural language processing and reinforcement learning suggest that the lottery ticket hypothesis is not just a peculiarity of image classification: for example, the performance of a large transformer model could be recovered from a winning ticket with just a third of the original weights, whereas random tickets with that amount of weights performed quite a bit worse. The analysis of simple shallow neural networks in a student-teacher setting is used as a toy model: when a larger student network is trained to mimic a smaller teacher with the same amount of layers, **student specialization** happens: some of the student's neurons learn to imitate single neurons of the teacher. This can be seen to happen more often and faster if the student neuron is already close to the teacher neuron at initialization. If the student network is large enough, every teacher neuron will be imitated by some student neuron and these student neurons collectively form a winning ticket. "], "venue": "FAIR Blog", "opinion": "I enjoyed reading this blogpost and like the idea of using winning tickets for transfer learning. I would have been quite surprised if they had found that the lottery ticket hypothesis was specific to image classification, as similar to pretraining, winning tickets seem to provide an inductive bias constraining the set of features that can be learnt during training to more useful ones. I do not think that further research into that direction will directly help with quickly training models for novel tasks unless the tickets can be identified very efficiently which seems like a harder optimization problem than just training a network by gradient descent.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #77", "newsletter_category": "Deep learning"}
{"id": "fc5dd55ca905e6787e6bc0341cc8adad", "title": "SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver", "url": "http://arxiv.org/abs/1905.12149", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Po-Wei Wang", "Priya L. Donti", "Bryan Wilder", "Zico Kolter"], "summaries": ["Historically, deep learning architectures have struggled with problems that involve logical reasoning, since they often impose non-local constraints that gradient descent has a hard time learning. This paper presents a new technique, SATNet, which allows neural nets to solve logical reasoning problems by encoding them explicitly as MAXSAT-solving neural network layers. A MAXSAT problem provides a large set of logical constraints on an exponentially large set of options, and the goal is to find the option that satisfies as many logical constraints as possible. Since MaxSAT is NP-complete, the authors design a layer that solves a relaxation of the MaxSAT problem in its forward pass (that can be solved quickly, unlike MaxSAT), while the backward pass computes gradients as usual.\n\nIn experiment, SATNet is given bit representations of 9,000 9 x 9 Sudoku boards which it uses to learn the logical constraints of Sudoku, then presented with 1,000 test boards to solve. SATNet vastly outperforms traditional convolutional neural networks given the same training / test setup, achieving 98.3% test accuracy where the convolutional net achieves 0%. It performs similarly well on a \"Visual\" Sudoku problem where the trained network consists of initial layers that perform digit recognition followed by SATNet layers, achieving 63.2% accuracy where the convolutional net achieves 0.1%."], "venue": "arXiv", "opinion": "My impression is this is a big step forward in being able to embed logical reasoning in current deep learning techniques. From an engineering perspective, it seems extremely useful to be able to train systems that encorporate these layers end-to-end. It's worth being clear that in systems like these, a lot of generality is lost since part of the network is explicitly carved out for solving a particular problem of logical constraints-- it would be hard to use the same network to learn a different problem.", "highlight": false, "read_more": "", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #75", "newsletter_category": "Deep learning"}
{"id": "244f4014cec25abac1babd5a944ae27c", "title": "the transformer … “explained”?", "url": "https://nostalgebraist.tumblr.com/post/185326092369/the-transformer-explained", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nostalgebraist"], "summaries": ["This is an excellent explanation of the intuitions and ideas behind self-attention and the [Transformer](https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html) <@architecture@>(@Transformer-XL: Unleashing the Potential of Attention Models@)."], "venue": "Tumblr", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #58", "newsletter_category": "Deep learning"}
{"id": "50a55e63e9502db263a4a5801b1208ec", "title": "Do we still need models or just more data and compute?", "url": "https://staff.fnwi.uva.nl/m.welling/wp-content/uploads/Model-versus-Data-AI-1.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Max Welling"], "summaries": ["This is a response to <@The Bitter Lesson@>, that emphasizes the importance of data in addition to compute. It brings up a number of considerations that seem important to me, and is worth reading if you want to better understand my position on the bitter lesson."], "venue": "University of Amsterdam Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #54", "newsletter_category": "Deep learning"}
{"id": "925d57c0dd8a5ed1fff67fcb85d120c7", "title": "Semantic Image Synthesis with Spatially-Adaptive Normalization", "url": "https://arxiv.org/pdf/1903.07291.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Taesung Park", "Ming-Yu Liu", "Ting-Chun Wang and Jun-Yan Zhu"], "summaries": ["This paper shows how to create somewhat realistic images specified by semantic segmentation maps. They accomplish this by modifying batch normalization. Batch normalization modifications can be quite powerful for image generation, even enough to [control style](https://arxiv.org/abs/1703.06868). Their modification is that normalization is a direct function of the semantic segmentation map throughout the network, so that the semantic segmentation map is readily available to each ResBlock. Visualizations produced by this method are [here](https://nvlabs.github.io/SPADE/)."], "venue": "CVPR 2019", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #54", "newsletter_category": "Deep learning"}
{"id": "34fb4b35bdbeb589729a9ab799b8fad2", "title": "Measuring the Intrinsic Dimension of Objective Landscapes", "url": "https://eng.uber.com/intrinsic-dimension/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Chunyuan Li", "Rosanne Liu", "and Jason Yosinski"], "summaries": ["This paper proposes and defines a quantity called \"intrinsic dimension\", a geometrically-informed metric of how many degrees of freedom are actually needed to train a given model on a given dataset. They calculate this by picking a set of random directions that span some subspace of dimension d, and taking gradient steps only along that lower-dimensional subspace. They consider the intrinsic dimension of a model and a dataset to be the smallest value d at which performance reaches 90% of a baseline, normally trained model on the dataset. The geometric intuition of this approach is that the dimensionality of parameter space can be, by definition, split into intrinsic dimension and its codimension, the dimension of the solution set. In this framing, higher solution set dimension (and lower intrinsic dimension) corresponds to proportionally more of the search space containing reasonable solution points, and therefore a situation where a learning agent will be more likely to find such a solution point. There are some interesting observations here that correspond with our intuitions about model trainability: on MNIST, intrinsic dimensionality for a CNN is lower than for a fully connected network, but if you randomize pixel locations, CNN's intrinsic dimension shoots up above FC, matching the intuition that CNNs are appropriate when their assumption of local structure holds. "], "venue": "Uber Engineering Blog", "opinion": "Overall, I find this an interesting and well-articulated paper, and am curious to see future work that addresses some of the extrapolations and claims implied by this paper, particularly their claim, surprising relative to my intuitions, that increasing n_parameters will, maybe monotonically, reduce difficulty of training, because it simply increases the dimensionality of the solution set. I'm also not sure how to feel about their simply asserting that a solution exists when a network reaches 90% of baselines performance, since we may care about that \"last mile\" performance and it might also be the harder to reach.", "highlight": false, "read_more": "Paper", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #53", "newsletter_category": "Deep learning"}
{"id": "5695d1fb76ea477a326487ba0bb013cb", "title": "Measuring the Limits of Data Parallel Training for Neural Networks", "url": "https://ai.googleblog.com/2019/03/measuring-limits-of-data-parallel.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Chris Shallue and George Dahl"], "summaries": ["Consider the relationship between the size of a single batch and the number of batches needed to reach a specific performance bound when using deep learning. If all that mattered for performance was the total number of examples that you take gradient steps on (i.e. the product of these two numbers), then you would expect a perfect inverse relationship between these two quantities, which would look like a line with negative slope on a log-log plot. In this case, we could scale batch sizes up arbitrarily far, and distribute them across as many machines as necessary, in order to reduce wall clock training time. A 2x increase in batch size with twice as many machines would lead to a 2x decrease in training time. However, as you make batch sizes really large, you face the problem of stale gradients: if you had updated on the first half of the batch and then computed gradients on the second half of the batch, the gradients for the second half would be \"better\", because they were computed with respect to a better set of parameters. When this effect becomes significant, you no longer get the nice linear scaling from parallelization.\n\nThis post studies the relationship empirically across a number of datasets, architectures, and optimization algorithms. They find that universally, there is initially an era of perfect linear scaling as you increase batch size, followed by a region of diminishing marginal returns that ultimately leads to an asymptote where increasing batch size doesn't help at all with reducing wall-clock training time. However, the transition points between these regimes vary wildly, suggesting that there may be low hanging fruit in the design of algorithms or architectures that explicitly aim to achieve very good scaling."], "venue": "Google AI Blog", "opinion": "OpenAI <@found@>(@How AI Training Scales@) that the best predictor of the maximum useful batch size was how noisy the gradient is. Presumably when you have noisy gradients, a larger batch size helps \"average out\" the noise across examples. Rereading their post, I notice that they mentioned the study I've summarized here and said that their results can help explain why there's so much variance in the transition points _across datasets_. However, I don't think it can explain the variance in transition points _across architectures_. Noisy gradients are typically a significant problem, and so it would be weird if the variance in transition points across architectures were explained by the noisiness of the gradient: that would imply that two architectures reach the same final performance even though one had the problem of noisy gradients while the other didn't. So there seems to be something left to explain here.\n\nThat said, I haven't looked in depth at the data, so the explanation could be very simple. For example, maybe the transition points don't vary much across architecture and vary much more across datasets, and the variance across architecture is small enough that its effect on performance is dwarfed by all the other things that can affect the performance of deep learning systems. 
Or perhaps while the noisiness of the gradient is a good predictor of the maximum batch size, it still only explains say 40% of the effect, and so variance across architectures is totally compatible with factors other than the gradient noise affecting the maximum batch size.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #51", "newsletter_category": "Deep learning"}
{"id": "23d045e5af75242c13ee9bdde1ed3641", "title": "How AI Training Scales", "url": "https://blog.openai.com/science-of-ai/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Sam McCandlish", "Jared Kaplan and Dario Amodei"], "summaries": ["OpenAI has done an empirical investigation into the performance of AI systems, and found that the maximum useful batch size for a particular task is strongly influenced by the noise in the gradient. (Here, the noise in the gradient comes from the fact that we are using _stochastic_ gradient descent -- any difference in the gradients across batches counts as \"noise\".) They also found some preliminary results showing the more powerful ML techniques tend to have more gradient noise, and even a single model tends to have increased gradient noise over time as they get better at the task."], "venue": "OpenAI Blog", "opinion": "While OpenAI doesn't speculate on why this relationship exists, it seems to me that as you get larger batch sizes, you are improving the gradient by reducing noise by averaging over a larger batch. This predicts the results well: as the task gets harder and the noise in the gradients gets larger, there's more noise to get rid of by averaging over data points, and so there's more opportunity to have _even larger_ batch sizes.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #37", "newsletter_category": "Deep learning"}
{"id": "dd23fc0d5e1b8acfc033c0c3c246a320", "title": "Relational Deep Reinforcement Learning", "url": "http://arxiv.org/abs/1806.01830", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart", "Murray Shanahan", "Victoria Langston", "Razvan Pascanu", "Matthew Botvinick", "Oriol Vinyals", "Peter Battaglia"], "summaries": ["This paper uses the self-attention mechanism discussed in 'Relational recurrent neural networks' to compute relationships between entities extracted from input data. The system was tested on the Box-World environment, in which an agent needs to use keys to open boxes in a certain order. It generalised very well to test environments which required much longer sequences of actions than any training examples, and improved slightly on a baseline for Starcraft mini-games."], "venue": "NIPS 2018", "opinion": "Getting neural networks to generalise to longer versions of training problems is often surprisingly difficult, so I'm impressed by the Box-World results; I would have liked to see what happened on even longer problems.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #30", "newsletter_category": "Deep learning"}
{"id": "5351a1cf8c622035f94524e93f76205e", "title": "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets", "url": "https://mathai-iclr.github.io/papers/papers/MATHAI_29_paper.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Alethea Power", "Yuri Burda", "Harri Edwards", "Igor Babuschkin", "Vedant Misra"], "summaries": ["This paper presents an interesting empirical phenomenon with deep learning: **grokking**. \n\nConsider tasks of the form “a ◦ b = ?”, where “◦” is some operation on modular arithmetic. For example, in the task of addition mod 97, an example problem would be “32 + 77 = ?”. There are exactly 97 possible operands, each of which gets its own token, and so there are 97^2 possible problems that are defined by pairs of tokens. We will train a neural net on some fraction of all possible problems and then ask how well it performs on the remaining problems it didn’t see: that is, we’re asking it to fill in the missing entries in the 97x97 table that defines addition mod 97.\n\nIt turns out that in these cases, the neural net memorizes the training dataset pretty quickly (in around 10^3 updates), at which point it has terrible generalization performance. However, if you continue to train it all the way out to 10^6 updates, then it will often hit a phase transition where you go from random chance to perfect generalization almost immediately. Intuitively, at the point of the phase transition, the network has “grokked” the function and can run it on new inputs as well. Some relevant details about grokking:\n\n1. It isn’t specific to group or ring operations: you also see grokking for tasks like “a/b if b is odd, otherwise a − b”.\n2. It is quite sensitive to the choice of hyperparameters, especially learning rate; the learning rate can only vary over about a single order of magnitude.\n3. The time till perfect generalization is reduced by weight decay and by adding noise to the optimization process.\n4. When you have 25-30% of possible examples as training data, a decrease of 1 percentage point leads to an increase of 40-50% in the median time to generalization.\n5. As problems become more intuitively complicated, time till generalization increases (and sometimes generalization doesn’t happen at all). For example, models failed to grok the task x^3 + xy^2 + y (mod 97) even when provided with 95% of the possible examples as training data.\n6. Grokking mostly still happens even when adding 1,000 “outliers” (points that could be incorrectly labeled), but mostly stops happening at 2,000 “outliers”."], "venue": "1st Mathematical Reasoning in General Artificial Intelligence Workshop, ICLR 2021", "opinion": "Another interesting fact about neural net generalization! Like <@double descent@>(@Deep Double Descent@), this can’t easily be explained by appealing to the diversity model. I don’t really have a good theory for either of these phenomena, but one guess for grokking is that:\n\n1. Functions that perfectly memorize the data without generalizing (i.e. probability 1 on the true answer and 0 elsewhere) are very complicated, nonlinear, and wonky. The memorizing functions learned by deep learning don’t get all the way there and instead assign a probability of (say) 0.95 to the true answer.\n2. The correctly generalizing function is much simpler and for that reason can be easily pushed by deep learning to give a probability of 0.99 to the true answer.\n3. 
Gradient descent quickly gets to a memorizing function, and then moves mostly randomly through the space, but once it hits upon the correctly generalizing function (or something close enough to it), it very quickly becomes confident in it, getting to probability 0.99 and then never moving very much again.\n\nA similar theory could explain deep double descent: the worse your generalization, the more complicated, nonlinear and wonky you are, and so the more you explore to find a better generalizing function. The biggest problem with this theory is that it suggests that making the neural net larger should primarily advantage the memorizing functions, but in practice I expect it will actually advantage the correctly generalizing function. You might be able to rescue the theory by incorporating aspects of the <@lottery ticket hypothesis@>(@The Lottery Ticket Hypothesis at Scale@).", "highlight": false, "read_more": "Reddit commentary", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #159", "newsletter_category": "Deep learning"}
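For concreteness, the kind of dataset used in these experiments can be generated in a few lines (a sketch with assumed details; the paper tokenizes the full equation and trains a small transformer on it):

```python
import itertools
import random

p = 97
pairs = list(itertools.product(range(p), repeat=2))      # all 97^2 problems "a ◦ b = ?"
examples = [((a, b), (a + b) % p) for a, b in pairs]     # here ◦ is addition mod 97

random.seed(0)
random.shuffle(examples)
split = int(0.3 * len(examples))                         # train on 30% of the table
train, test = examples[:split], examples[split:]
# A small transformer trained on `train` memorizes it within ~10^3 updates, but
# test accuracy only jumps from chance to ~100% much later (~10^5-10^6 updates).
```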
{"id": "5c98cb006890420eae90356074b74162", "title": "Branch Specialization", "url": "https://distill.pub/2020/circuits/branch-specialization/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Chelsea Voss", "Gabriel Goh", "Nick Cammarata", "Michael Petrov", "Ludwig Schubert", "Chris Olah"], "summaries": ["Neural network architectures sometimes have different “branches”, where later features can depend on earlier features _within the same branch_, but cannot depend on features in parallel branches. This post presents evidence showing that in these architectures, branches often tend to specialize in particular types of features. For example:\n\n1. The first two layers in AlexNet are split into two branches. In one branch, the first layer tends to learn black and white Gabor filters, while in the other branch, the first layer tends to learn low-frequency color detectors. This persists across retraining, or even training on a different dataset of natural images, such as Places (rather than ImageNet).\n2. All 9 of the black and white vs. color detectors in mixed3a are in mixed3a_5x5 (p < 1e-8). All 30 of the curve-related features in mixed3b are in mixed3b_5x5 (p < 1e-20). There are confounds here, but also good reasons to expect that it is in fact branch specialization.\n\nGiven that branch specialization seems to be robust and consistent even across datasets, a natural hypothesis is that it is reflecting a structure that already exists. Even if you didn’t have branching, it seems likely that the model would still learn very similar neurons, and it seems plausible that e.g. the weights connecting the first-layer black-and-white Gabor filters to the second-layer color detectors are effectively zero. With branching, you learn the same features in such a way that all the weights that previously were effectively zero now don’t exist because they would be crossing branches. This would look like having the Gabor filters in one branch and the color detectors in the other branch."], "venue": "Distill", "opinion": "I find the hypothesis the authors propose quite compelling (and this is very similar to the hypothesis that neural networks tend to be modular, which we discuss more below). Partly, this is because it has a common-sense explanation: when designing an organization, you want to put related functions in the same group to minimize the communication across groups. Here, the full network is the organization, the branches are an explicit constraint on communication, and so you want to put related functions in the same branch.\n\nAt the end of the article, the authors also suggest that there could be a connection with the way that different regions of the brain are specialized to particular tasks. I’ll go further than the authors in my speculation: it seems plausible to me that this specialization is simply the result of the brain’s learning algorithm reflecting the structure of the world through specialization. (Though it seems likely that the different areas of the brain must at least have different “architectures”, in order for the same tasks to be routed to the same brain regions across humans.) 
But the case of AlexNet demonstrates that in theory, the only thing you need for specialization to arise is a restriction on the communication between one part of the architecture and the other.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #147", "newsletter_category": "Deep learning"}
{"id": "f0aac17d93fc2d403f6ba6802a24ebea", "title": "Multimodal Neurons in Artificial Neural Networks", "url": "https://openai.com/blog/multimodal-neurons/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Gabriel Goh", "Nick Cammarata", "Chelsea Voss", "Shan Carter", "Michael Petrov", "Ludwig Schubert", "Alec Radford", "Chris Olah"], "summaries": ["[CLIP](https://openai.com/blog/clip/) is a large model that was trained to learn a separate embedding for images and text, such that the embedding for an image is maximally similar to the embedding for its caption. This paper uses feature visualization and dataset examples to analyze the vision side of the model, and shows that there are many _multimodal_ neurons. For example, there is a Spiderman neuron that responds not just to pictures of Spiderman, but also to sketches of Spiderman, and even the word “spider” (written in the image, not the caption). The neurons are quite sophisticated, activating not just on instances of the concept, but also things that are _related_ to the concept. For example, the Spiderman neuron is also activated by images of other heroes and villains from the Spiderman movies and comics. There are lots of other neurons that I won’t go into, such as neurons for famous people, regions, facial emotions, religions, holidays, abstract concepts, numbers, and text.\n\nUnsurprisingly, many of these neurons encode stereotypes that we might consider problematic: for example, there is an immigration neuron that responds to Latin America, and a terrorism neuron that responds to the Middle East.\n\nThe concepts learned by CLIP also have some notion of hierarchy and abstraction. In particular, the authors find that when they train a sparse linear classifier on top of the CLIP features, the resulting classifier has a “hierarchy” that very approximately matches the hierarchy used to organize the ImageNet classes in the first place -- despite the fact that CLIP was never trained on ImageNet at all. (I’m not sure how approximate this match is.)\n\nAs mentioned before, the neurons can respond to text in the image, and in a few cases they can even respond to text in different languages. For example, a “positivity” neuron responds to images of English “Thank You”, French “Merci”, German “Danke”, and Spanish “Gracias”. The fact that the model is so responsive to text in images means that it is actually very easy to influence its behavior. If we take an apple (originally correctly classified as a Granny Smith) and tape on a piece of paper with the word “iPod” written on it, it will now be classified as an iPod with near certainty. This constitutes a new and very easy to execute “typographic” adversarial attack.\n\nHowever, not everything that CLIP is capable of can be explained with our current interpretability techniques. For example, CLIP is often able to tell whether an image is from San Francisco (and sometimes even what region within San Francisco), but the authors were not able to find a San Francisco neuron, nor did it look like there was a computation like “California + city”."], "venue": "Distill", "opinion": "The “typographic adversarial attack” is interesting as a phenomenon that happens, but I’m not that happy about the phrasing -- it suggests that CLIP is dumb and making an elementary mistake. 
It’s worth noting here that what’s happening is that CLIP is being asked to look at an image of a Granny Smith apple with a piece of paper saying “iPod” on it, and then to complete the caption “an image of ???” (or some other similar zero-shot prompt). It is quite possible that CLIP “knows” that the image contains a Granny Smith apple with a piece of paper saying “iPod”, but when asked to complete the caption with a single class from the ImageNet classes, it ends up choosing “iPod” instead of “Granny Smith”. I’d caution against saying things like “CLIP thinks it is looking at an iPod”; this seems like too strong a claim given the evidence that we have right now.", "highlight": false, "read_more": "Distill paper", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #142", "newsletter_category": "Deep learning"}
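For context, the zero-shot setup in which the typographic attack arises looks roughly like the following, using the open-source `clip` package from the CLIP repository (the file name, prompt template, and two-class label set are illustrative):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["Granny Smith", "iPod"]
text = clip.tokenize([f"a photo of a {c}" for c in labels]).to(device)
image = preprocess(Image.open("apple_with_ipod_label.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
# Taping a paper saying "iPod" onto the apple shifts most of the probability
# mass from "Granny Smith" to "iPod" in this forced-choice setting.
```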
{"id": "894de7834ee40a5493ac71c242f16317", "title": "AlphaFold: a solution to a 50-year-old grand challenge in biology", "url": "https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["The AlphaFold team", "John Jumper", "Richard Evans", "Alexander Pritzel", "Tim Green", "Michael Figurnov", "Kathryn Tunyasuvunakool", "Olaf Ronneberger", "Russ Bates", "Augustin Žídek", "Alex Bridgland", "Clemens Meyer", "Simon A A Kohl", "Anna Potapenko", "Andrew J Ballard", "Andrew Cowie", "Bernardino Romera-Paredes", "Stanislav Nikolov", "Rishub Jain", "Jonas Adler", "Trevor Back", "Stig Petersen", "David Reiman", "Martin Steinegger", "Michalina Pacholska", "David Silver", "Oriol Vinyals", "Andrew W Senior", "Koray Kavukcuoglu", "Pushmeet Kohli", "Demis Hassabis"], "summaries": ["The newest results from <@AlphaFold@>(@AlphaFold: Using AI for scientific discovery@) on the CASP-14 assessment give it a median score of 92.4 GDT across all targets, where a score of 90 is informally considered to be competitive with results obtained from experimental methods. The system also shows some signs of real-world usability: for example, it was used earlier this year to predict the structure of two COVID proteins, which were later borne out by experimental results (that took several months to obtain, if I understand correctly)."], "venue": "DeepMind Website", "opinion": "Obviously this is an astounding accomplishment for DeepMind (conflict of interest notice: I work at DeepMind). I feel like I should have some opinion on what this means for the future of AI systems, but unfortunately I think I don’t know enough about protein folding to have any interesting takes.\n\nFrom an outside view perspective, it seems like this is an example of deep learning crushing a task that a) humans put a lot of effort into and b) humans weren’t evolutionarily designed for. This is exactly what we saw with Go, Dota and StarCraft, and so this isn’t much of an update for me. Yes, this is a case of it being used in a real-world problem rather than a synthetic game, but that doesn’t seem particularly relevant.\n\n**Asya's opinion:** I think this is particularly interesting because this model is closer to being a source of revenue than solutions to other problems. This makes me think machine learning research might actually solve enough important problems to pay for itself in the near future.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #128", "newsletter_category": "Deep learning"}
{"id": "4d931ad4f2a4f2f973dadee7f4b42872", "title": "Identifying Statistical Bias in Dataset Replication", "url": "http://gradientscience.org/data_rep_bias/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Logan Engstrom", "Andrew Ilyas", "Aleksander Mądry", "Shibani Santurkar", "Jacob Steinhardt", "Dimitris Tsipras"], "summaries": ["One way of dealing with finite and fixed test sets and the resulting possibility of overfitting on the test set is dataset replication, where one tries to closely mimic the original process of dataset creation to obtain a larger test set. This can lead to bias if the difficulty of the new test images is distributed differently than in the original test set. A previous attempt at [dataset replication on ImageNet](https://arxiv.org/abs/1902.10811) tried to get around this by measuring how often humans under time pressure correctly answered a yes/no question about an image's class (dubbed selection frequency), which can be seen as a proxy for classification difficulty. \n\nThis data was then used to sample candidate images for every class which match the distribution of difficulty in the original test set. Still, all tested models performed worse on the replicated test set than on the original. Parts of this bias can be explained by noisy measurements combined with disparities in the initial distribution of difficulty, which are likely as the original ImageNet data was prefiltered for quality. Basically, the more noisy our estimates for the difficulty are, the more the original distribution of difficulty matters. As an extreme example, imagine a class for which all images in the original test set have a selection frequency of 100%, but 90% of candidates in the new test set have a selection frequency of 50%, while only 10% are as easy to classify as the images in the original test set. Then, if we only use a single human annotator, half of the difficult images in the candidate pool are indistinguishable from the easy ones, such that most images ending up in the new test set are more difficult to classify than the original ones, even after the adjustment.\n\nThe authors then replicate the ImageNet dataset replication with varying amounts of annotators and find that the gap in accuracy between the original and the new test set progressively shrinks with reduced noise from 11.7% with one annotator to 5.7% with 40. Lastly, they discuss more sophisticated estimators for accuracy to further lower bias, which additionally decreases the accuracy gap down to around 3.5%."], "venue": "Gradient Science", "opinion": "This was a pretty interesting read and provides evidence against large effects of overfitting on the test set. On the other hand, results like this also seem to highlight how benchmarks are mostly useful for model comparison, and how nonrobust they can be to fairly benign distributional shift. ", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #103", "newsletter_category": "Deep learning"}
{"id": "593648a231793a739f4537e3f5edbca7", "title": "More Efficient NLP Model Pre-training with ELECTRA", "url": "https://ai.googleblog.com/2020/03/more-efficient-nlp-model-pre-training.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Kevin Clark", "Minh-Thang Luong", "Quoc V. Le", "Christopher D. Manning"], "summaries": ["There are two main approaches to pretraining for NLP, language models (LMs) which iteratively predict the next word in a given incomplete sentence, and masked language models (MLMs), which predict the identities of a few masked words in an otherwise complete sentence. While not just looking at the previous words (bidirectionality) can be advantageous, MLMs only learn to predict the masked words, which reduces how much is learnt from a given sentence. \n\nThe authors present an alternative approach, ELECTRA, that outperforms RoBERTa while requiring less than a third of the compute. This is achieved by changing the form of the pretraining task from predicting words to discriminating fake words: Instead of masking, some words are replaced by words generated by an MLM and the trained model has to classify these as fake. This way, we get bidirectionality, but also a more dense signal, as the model has to produce an output for every single word, not just the masked ones. While this looks similar to GANs, the generator is only trained on the usual MLM loss and is not incentivized to fool the discriminator, as GANs don't seem to work well on sequence data."], "venue": "ICLR 2020", "opinion": "I found it a bit surprising that replacing word prediction with fake discrimination would help that much, but from the ablations, it seems like this is really mostly an instrument to get a loss signal for every single word, which is a cool idea. On a more zoomed-out perspective, results like this seem to show that gains in <@algorithmic efficiency@>(@AI and Efficiency@) are not fundamentally slowing down. ", "highlight": false, "read_more": "Paper: ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #102", "newsletter_category": "Deep learning"}
{"id": "6b18f1e34e52d92790591b3ef59a198f", "title": "Growing Neural Cellular Automata", "url": "https://distill.pub/2020/growing-ca/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Alexander Mordvintsev", "Ettore Randazzo", "Eyvind Niklasson", "Michael Levin"], "summaries": ["The process of an organism's shape development (morphogensis) is an active area of research. One central problem is determining how cells decide how to grow and when to stop. One popular model for investigating this is Cellular Automata (CA). These model cells as living on a grid and interacting with each other via rules generated by looking at their nearest neighbors. The authors contribute to this research direction by introducing rule-sets that depend continuously on their local surroundings. The central insight connecting CA and deep learning is that because the rule-sets are constant the update rules work similarly to a convolutional filter. This allows the authors to take advantage of methods available to train neural networks to simulate CA. Using this insight, the authors train CA that can form into images that are resistant to perturbations and deletions. In other words, the CA are capable of regeneration."], "venue": "Distill", "opinion": "The main relevance of an approach like this is that it provides proof-of-concept that complex goals, such as shape formation, can be programmed in an embarrassingly parallel fashion amenable to deep learning methodology. This naturally has implications in multi-agent settings where communication is expensive. I'd recommend checking out the main web app which allows you to watch and interact with the CA while they're growing. They also have a [code repository](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/notebooks/growing_ca.ipynb) that is easily adaptable to training on your own patterns. For example, I grew a regenerating Patrick Star [here](https://colab.research.google.com/drive/1BuE-0ceBP7ebTmX7pP_urP-FJGvmjCpb?usp=sharing). ", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #100", "newsletter_category": "Deep learning"}
{"id": "bcad6050535efe38ccd15316e498b45e", "title": "AutoML-Zero: Evolving Machine Learning Algorithms From Scratch", "url": "http://arxiv.org/abs/2003.03384", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Esteban Real*", "Chen Liang*", "David R. So", "Quoc V. Le"], "summaries": ["Most previous work in the area of automated machine learning, or AutoML, has focussed on narrow search spaces that are restricted to specific parts of the machine learning pipeline, e.g. the architecture of a neural network, or the optimizer in meta-learning. These spaces are often so constrained by the hand-engineered components around them that architectures and algorithms discovered, say, by evolutionary search (ES), are only slightly better than random search (RS). This work aims to set up the problem with very weak constraints and a wide search space: a) a machine learning program has three component functions, _Setup_, _Predict_, and _Learn_, which start out empty, and b) are populated by RS or ES with procedural operations from over 50 arithmetic, trigonometric, linear algebra, probability, and pre-calculus operators.\n\nThey demonstrate that with such a vast search space, RS fares very poorly in comparison to ES. They also report that ES finds several procedures that are recognizable as useful for machine learning, such as a simple neural network, gradient descent, gradient normalization, multiplicative interactions, noise augmentation, noisy dropout and learning rate decay."], "venue": "arXiv", "opinion": "This work empirically demonstrates that we now have sufficient methods and tricks in our ES toolkit that enable us to evolve machine learning algorithms from scratch. Additionally, this process produces computer code, which itself may yield to theoretical analysis furthering our knowledge of learning algorithms. I think that powerful AI systems of the future may employ such techniques to discover solutions.\n\n**Rohin's opinion:** It’s cool to see automation of even the ML algorithms themselves. However, note that there is no deep learning or function approximation in this system: it is a simple search algorithm that is commonly used in program synthesis. In fact, though the paper is presented as an ML paper, it seems to me much more like a classic program synthesis paper that happens to be working in the domain of ML algorithms.\n\nWithout any learned heuristics, these algorithms have a hard time scaling: they can only synthesize relatively short snippets of code, since the search space grows exponentially as the code gets longer. Good results depend very strongly on being able to create a DSL in which the correct program is short. In this case, it looks like strong programs are about ~20 straight-line instructions, which seems on the more impressive side given the simplicity of the algorithm and the DSL, though they did throw a huge amount of compute at the problem.", "highlight": false, "read_more": "", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #95", "newsletter_category": "Deep learning"}
{"id": "dbfb4c12b416d183d555a2d215b88d14", "title": "Speeding Up Transformer Training and Inference By Increasing Model Size", "url": "https://bair.berkeley.edu/blog/2020/03/05/compress/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Zhuohan Li*", "Eric Wallace*", "Sheng Shen*", "Kevin Lin*", "Kurt Keutzer", "Dan Klein", "Joseph E. Gonzalez"], "summaries": ["This blog post and associated paper confirm the findings from <@Scaling Laws for Neural Language Models@> that the most efficient way to train Transformer-based language models is to train very large models and stop before convergence, rather than training smaller models to convergence."], "venue": "BAIR Blog", "opinion": "", "highlight": false, "read_more": "Paper: Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #90", "newsletter_category": "Deep learning"}
{"id": "773bf528a5fd3eacc0bce0dbce65bb32", "title": "Generative Modeling with Sparse Transformers", "url": "https://openai.com/blog/sparse-transformer/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Rewon Child and Scott Gray"], "summaries": ["I see this paper as trying to interpolate the space between convolution (fixed receptive field, number of layers needed to gain visibility to the whole sequence grows with sequence length) and attention (visibility to the entire sequence at each operation, but n^2 memory and compute scaling with sequence length, since each new element needs to query and be queried by each other element). This is done by creating chains of operations that are more efficient, and can offer visibility to the whole sequence in k steps rather than k=1 steps, as with normal attention. An example of this is one attention step that pulls in information from the last 7 elements, and then a second that pulls in information from each 7th element back in time (the \"aggregation points\" of the first operation). "], "venue": "OpenAI Blog", "opinion": "I find this paper really clever and potentially quite high-impact, since Transformers are *so* widely used, and this paper could offer a substantial speedup without much theoretical loss of information. I also just enjoyed having to think more about the trade-offs between convolutions, RNNs, and transformers, and how to get access to different points along those tradeoff curves.", "highlight": false, "read_more": "", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #56", "newsletter_category": "Deep learning"}
{"id": "d422344e7f3310665c3da1dcc6a544aa", "title": "Introducing Translatotron: An End-to-End Speech-to-Speech Translation Model", "url": "https://ai.googleblog.com/2019/05/introducing-translatotron-end-to-end.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Ye Jia and Ron Weiss"], "summaries": ["This post introduces Translatotron, a system that takes speech (not text!) in one language and translates it to another language. This is in contrast to most current \"cascaded\" systems, which typically go from speech to text, then translate to the other language, and then go back from text to speech. While Translatotron doesn't beat current systems, it demonstrates the feasibility of this approach."], "venue": "Google AI Blog", "opinion": "Machine translation used to be done in multiple stages (involving parse trees as an intermediate representation), and then it was done better using end-to-end training of a deep neural net. This looks like the beginning of the same process for speech-to-speech translation. I'm not sure how much people care about speech-to-speech translation, but **if it's an important problem, I'd expect the direct speech-to-speech systems to outperform the cascaded approach relatively soon**. I'm particularly interested to see whether you can \"bootstrap\" by using the cascaded approach to generate training data for the end-to-end approach, and then finetune the end-to-end approach on the direct speech-to-speech data that's available to improve performance further.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #56", "newsletter_category": "Deep learning"}
{"id": "fac847cd0c254c480c9bbdae1f7d7913", "title": "A Recipe for Training Neural Networks", "url": "https://karpathy.github.io/2019/04/25/recipe/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Andrej Karpathy"], "summaries": ["This is a great post detailing how to train neural networks in practice when you want to do anything more complicated than training the most common architecture on the most common dataset. For all of you readers who are training neural nets, I strongly recommend this post; the reason I'm not summarizing it in depth is because a) it would be a really long summary and b) it's not that related to AI alignment."], "venue": "Author's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #56", "newsletter_category": "Deep learning"}
{"id": "5feb26914a7b9a7f3be8c6ea62ccef6e", "title": "MLPerf", "url": "https://mlperf.org/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["From their overview: \"The MLPerf effort aims to build a common set of benchmarks that enables the machine learning (ML) field to measure system performance for both training and inference from mobile devices to cloud services.\" They have a track to measure the performance of hardware and software systems that support ML models, as well as a track that aims to advance the state-of-the-art in ML models. They consider a broad set of problems (though it seems like they are all problems where some deep learning technique is state-of-the-art)."], "venue": "MLPerf Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #5", "newsletter_category": "Deep learning"}
{"id": "ec8b8bac9ec31dfdfe52b4476ae5256c", "title": "A Conservative Human Baseline Estimate for GLUE: People Still (Mostly) Beat Machines", "url": "https://woollysocks.github.io/assets/GLUE_Human_Baseline.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Nikita Nangia", "Samuel R. Bowman"], "summaries": ["[BERT](https://arxiv.org/abs/1810.04805) tremendously improves performance on several NLP datasets, such that it has \"taken over\" NLP. GLUE represents performance of NLP models across a broad range of NLP datasets. Now GLUE has human performance measurements. According to the [current GLUE leaderboard](https://gluebenchmark.com/leaderboard), the gap between human performance and models fine-tuned on GLUE datasets is a mere 4.7%. Hence many current NLP datasets are nearly \"solved.\""], "venue": "Github", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #45", "newsletter_category": "Deep learning"}
{"id": "28660c116ee5d80d4872c16b076ddfe3", "title": "Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet", "url": "https://openreview.net/pdf?id=SkfMWhAqYQ", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Anonymous"], "summaries": ["This paper proposes a bag-of-features model using patches as features, and they show that this can obtain accuracy similar to VGGNet architectures. They classify each patch and produce the final classification by a majority vote; Figure 1 of the paper tells all. In some ways this model is more interpretable than other deep architectures, as it is clear which regions activated which class. They attempt to show that, like their model, VGGNet does not use global shape information but instead uses localized features."], "venue": "OpenReview", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #33", "newsletter_category": "Deep learning"}
{"id": "19486309a971fceb8433e5fa07aca7da", "title": "Relational recurrent neural networks", "url": "http://arxiv.org/abs/1806.01822", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Adam Santoro", "Ryan Faulkner", "David Raposo", "Jack Rae", "Mike Chrzanowski", "Theophane Weber", "Daan Wierstra", "Oriol Vinyals", "Razvan Pascanu", "Timothy Lillicrap"], "summaries": ["This paper introduces the Relational Memory Core, which allows interactions between memories stored in memory-based neural networks. It does so using a \"self-attention mechanism\": each memory updates its contents by attending to all other memories via several \"attention heads\" which focus on different features. This leads to particularly good performance on the nth-farthest task, which requires the ranking of pairwise distances between a set of vectors (91% accuracy, compared with baseline 30%), and the Mini-Pacman task."], "venue": "CVPR 2018", "opinion": "While performance is good on small problems, comparing every memory to every other doesn't scale well (a concern the authors also mention in their discussion). It remains to be seen how pruning older memories affects performance.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #30", "newsletter_category": "Deep learning"}
{"id": "74e55e7683c3a9e6ca4664585af29301", "title": "DAWNBench", "url": "https://dawn.cs.stanford.edu/benchmark/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["This is a collection of statistics for time and compute costs, both for training and inference, for various common models and benchmarks."], "venue": "Dawn Lab Website", "opinion": "It's worth skimming through the page to get a sense of concrete numbers for various benchmarks used in the ML community.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #3", "newsletter_category": "Deep learning"}
{"id": "731263edcfd8fc42d9f4e326010bfeb9", "title": "Neural Guided Constraint Logic Programming for Program Synthesis", "url": "http://arxiv.org/abs/1809.02840", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Lisa Zhang", "Gregory Rosenblatt", "Ethan Fetaya", "Renjie Liao", "William E. Byrd", "Matthew Might", "Raquel Urtasun", "Richard Zemel"], "summaries": ["In program synthesis from examples, we want to find a program consistent with a given set of input-output examples. One classic approach is to use logic programming. In logic programming, instead of writing functions that compute output = f(input), we write rules to compute relations. To encode standard functions, we would write the relation (f, i, o), which is interpreted as \"computing f(i) gives o\". In logic programming, you can let any variable be unknown, and the language will search for a solution. Using this you can eg. invert a function f on a specific output o, using the query (f, ?, o). To apply logic programming to program synthesis, we write an interpreter eval for the language we want to synthesize in, and pose the query (eval, ?, i, o). They consider the lambda calculus with pairs and lists as their language.\n\nThe algorithm that falls out is a recursive descent search over the possible structure of the program, that generates and checks partial constraints over the partial programs implied by the input-output examples during the search. The search has branching points where it must choose, for some as-yet-unknown part of the program, what language construct it should use (if, cons, variable, etc.) This paper attempts to use a neural net to predict what choice the search should make to find a solution, replacing some simple hand-tuned heuristics. It can be trained either using reinforcement learning (where the search choices are actions, the partial search trees are states, and the goal is to find a complete program), or through supervised learning since they know for training programs what choices are optimal. They also use a curriculum and experience replay. They evaluate against classical symbolic approaches (λ2, Escher, Myth) and RobustFill, and show that their method generalizes better to finding longer programs not seen in the training dataset."], "venue": "Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering", "opinion": "It always makes me happy to read a paper about making symbolic approaches faster using neural nets to learn heuristics. That said, I'm concerned about the evaluation in this paper -- their programs are fairly strange, often involving a huge mess of cons (make-pair), car (first) and cdr (second), and not including recursion. The symbolic approaches they evaluate against are aiming to synthesize recursive functions similar to what people write, and I wouldn't be surprised if they had heuristics that actively discouraged these big messes of cars and cdrs, since normal programs don't look like that. 
The programs are also primarily taking pieces of data out of an input, and then recombining them in some way -- this feels like a significantly easier task than most synthesis problems (in the sense that I could probably write a handcoded solution that performs very well on this domain only).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #24", "newsletter_category": "Deep learning"}
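To give a feel for what "search for the unknown program given an evaluator" means, here is a deliberately tiny sketch that swaps the paper's constraint-logic machinery for brute-force enumeration over a made-up three-operation DSL. It is an analogy to the search problem, not the paper's method; the DSL, the examples, and the depth bound are all invented for illustration.

```python
from itertools import product

# A micro-DSL of unary expressions over one variable x.
def eval_expr(expr, inp):
    if expr == "x":
        return inp
    op, sub = expr
    val = eval_expr(sub, inp)
    return {"double": val * 2, "inc": val + 1, "square": val * val}[op]

def programs(max_depth):
    """Enumerate all expressions up to a depth bound."""
    yield "x"
    if max_depth == 0:
        return
    for op, sub in product(("double", "inc", "square"), programs(max_depth - 1)):
        yield (op, sub)

# Input-output examples consistent with square(inc(x)).
examples = [(1, 4), (2, 9), (3, 16)]
solutions = [p for p in programs(3)
             if all(eval_expr(p, i) == o for i, o in examples)]
print(solutions[0])   # ('square', ('inc', 'x'))
```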
{"id": "bd88b545b9bfab2ea9a89144b9cc7c8d", "title": "When Recurrent Models Don't Need to be Recurrent", "url": "http://bair.berkeley.edu/blog/2018/08/06/recurrent/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["John Miller"], "summaries": ["Recurrent neural networks (RNNs) are able to use and update a hidden state over an entire sequence, which means that in theory it is possible for them to learn very long term dependencies in a sequence, that a feedforward model would not be able to do. For example, it would be easy to assign weights to an RNN so that on input x_n it outputs n (the length of the sequence so far), whereas a feedforward model could not learn this function. Despite this, in practice feedforward methods match and exceed the performance of RNNs on sequence modeling tasks. This post argues that this is because of gradient descent -- any stable gradient descent on RNNs can be well approximated by gradient descent on a feedforward model (both at training and inference time)."], "venue": "BAIR Blog", "opinion": "The post doesn't really explain why this is the case, instead referencing the theory in their paper (which I haven't read). It does sound like a cool result explaining a phenomenon that I do find confusing, since RNNs should be more expressive than feedforward models. It does suggest that gradient descent is not actually good at finding the optimum of a function, if that optimum involves lots of long-term dependencies.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #19", "newsletter_category": "Deep learning"}
{"id": "4538f675664b35280672ca04731ee1e7", "title": "Objects that Sound", "url": "https://deepmind.com/blog/objects-that-sound/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Relja Arandjelović", "Andrew Zisserman"], "summaries": ["The key idea behind this blog post is that there is a rich source of information in videos -- the alignment between the video frames and audio frames. We can leverage this by creating a proxy task that will force the neural net to learn good representations of the video, which we can then use for other tasks. In particular, we can consider the proxy task of deciding whether a short (~1 second) video clip and audio clip are aligned or not. We don't care about this particular task, but by designing our neural net in the right way, we can ensure that the net will learn good representations of video and audio. We pass the video clip through a convolutional net, the audio clip through another convolutional net, and take the resulting vectors and use the distance between them as a measure of how dissimilar they are. There is no way for video to affect the audio or vice versa before the distance -- so the net is forced to learn to map each of them to a shared space where the distance is meaningful. Intuitively, we would expect that this shared space would have to encode the cause of both the audio and video. Once we have these embeddings (and the neural nets that generate them), we can use them for other purposes. For example, their audio encoder sets the new state-of-the-art on two audio classification benchmarks. In addition, by modifying the video encoder to output embeddings for different regions in the image, we can compute the distance between the audio embedding and the video embedding at each region, and the regions where this is highest correspond to the object that is making the sound."], "venue": "DeepMind Blog", "opinion": "Another great example of using unsupervised learning to learn good embeddings. Also, a note -- you might wonder why I'm calling this unsupervised learning even though there's a task, with a yes/no answer, a loss function, and an iid dataset, which are hallmarks of supervised learning. The difference is that the labels for the data did not require any human annotation, and we don't care about the actual task that we're learning -- we're after the underlying embeddings that it uses to solve the task. In the previous paper on learning actionable representations, time was used to define an unsupervised learning signal in a similar way.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #19", "newsletter_category": "Deep learning"}
{"id": "48726de857f4f8d945418f864150d378", "title": "MnasNet: Towards Automating the Design of Mobile Machine Learning Models", "url": "https://ai.googleblog.com/2018/08/mnasnet-towards-automating-design-of.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Mingxing Tan"], "summaries": ["Mobile phones have strong resource constraints (memory, power usage, available compute), which makes it hard to put neural nets on them. Previously, for image classification, researchers hand designed MobileNetV2 to be fast while still achieving good accuracy. Now, using neural architecture search, researchers have found a new architecture, MnasNet, which is 1.5x faster with the same accuracy. Using the [squeeze-and-excitation](https://arxiv.org/abs/1709.01507) optimization improves it even further."], "venue": "Google AI Blog", "opinion": "Neural architecture search is diversifying, focusing on computation time in addition to accuracy now. It seems possible that we'll run into the same problems with architecture search soon, where the reward functions are complex enough that we don't get them right on the first try. What would it look like to learn from human preferences here? Perhaps we could present two models from the search to humans, along with statistics about each, and see which ones the researchers prefer? Perhaps we could run tests on the model, and then have humans provide feedback on the result? Maybe we could use feature visualization to provide feedback on whether the network is learning the \"right\" concepts?", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #19", "newsletter_category": "Deep learning"}
{"id": "83d7fd47fe2d5b3f57303717eefeb32c", "title": "Deep Learning in the Wild", "url": "http://arxiv.org/abs/1807.04950", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Thilo Stadelmann", "Mohammadreza Amirian", "Ismail Arabaci", "Marek Arnold", "Gilbert François Duivesteijn", "Ismail Elezi", "Melanie Geiger", "Stefan Lörwald", "Benjamin Bruno Meier", "Katharina Rombach", "Lukas Tuggener"], "summaries": ["Describes how deep learning is used to solve real-world problems (eg. in industry)."], "venue": "ANNPR 2018: Artificial Neural Networks in Pattern Recognition", "opinion": "The conclusions (section 8) contain a nice list of lessons learned from their case studies, emphasizing problems such as the difficulty of getting good data, the importance of reward shaping, etc.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "Deep learning"}
{"id": "13211e1a6089657c3dcfe7116351c2c3", "title": "Glow: Better Reversible Generative Models", "url": "https://blog.openai.com/glow/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Prafulla Dhariwal and Durk Kingma"], "summaries": ["A generative model here means something that models the data distribution, including any underlying structure. For example, a generative model for images would let you generate new images that you hadn't seen during training. While we normally here of GANs and VAEs for current generative models, this work builds on reversible or flow-based generative models. Similarly to word vectors, we can find directions in the learned embedding space corresponding to natural categories (such as \"hair color\"), and manipulate an image by first encoding to the embedding space, then adding one of these directions, and then decoding it back to the manipulated image."], "venue": "OpenAI Blog", "opinion": "This seems cool but I'm not very familiar with this area so I don't have a strong opinion. The algorithm seemed weirdly complicated to me but I think it's based on previous work, and I only spent a couple of minutes looking at it.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "Deep learning"}
{"id": "3930aaa87a989d64ddf90bf31fe6e0da", "title": "Weight Banding", "url": "https://distill.pub/2020/circuits/weight-banding/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Michael Petrov", "Chelsea Voss", "Ludwig Schubert", "Nick Cammarata", "Gabriel Goh", "Chris Olah"], "summaries": ["Empirically, when training neural networks on ImageNet, we can commonly observe “weight banding” in the final layer. In other words, the neurons in the final layer pay very strong attention to the vertical position of features, and ignore the horizontal position of features. This holds across InceptionV1, ResNet50, and VGG19, though it doesn’t hold for AlexNet.\n\nIf you rotate the training data by 90 degrees, then the phenomenon changes to have vertical striping, that is, we now pay strong attention to the horizontal position of features. This suggests that this phenomenon is being driven somehow by the ImageNet data.\n\nThe authors hypothesize that this is caused by the neural network needing to recover some spatial information that was reduced by the previous average pooling layer (which is not present in AlexNet). They try removing this layer, which causes the effect to go away in Inception, but not in VGG19. They seem to think that it also goes away in ResNet50, but when I look at the results, it seems like the phenomenon is still there (though not as strongly as before).\n\nThey try a bunch of other architectural interventions on a simplified architecture and find that weight banding persists across all of these."], "venue": "Distill", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #147", "newsletter_category": "Deep learning"}
{"id": "1cbed7a6d4b777fbc65a45a0b8d4204e", "title": "Transformers for Image Recognition at Scale", "url": "https://ai.googleblog.com/2020/12/transformers-for-image-recognition-at.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Alexey Dosovitskiy*", "Lucas Beyer*", "Alexander Kolesnikov*", "Dirk Weissenborn*", "Xiaohua Zhai*", "Neil Houlsby*", "Thomas Unterthiner", "Mostafa Dehghani", "Matthias Minderer", "Georg Heigold", "Sylvain Gelly", "Jakob Uszkoreit"], "summaries": ["This paper applies transformers to image classification in a fairly straightforward way: First, an input image is divided into 16x16 pixel patches on a grid. Then, a linear projection of the patch is combined with a learnt positional embedding and fed into a standard transformer pipeline. Lastly, a standard MLP head is applied on top of the transformer for the classification. When trained on ImageNet, this architecture overfits and does not reach SOTA performance. However, it can compete with the previous SOTA on the larger ImageNet-21k (14M images) and outcompete it on JFT (300M images), while needing four times less compute for training. By finetuning the JFT model on ImageNet, the transformer narrowly outperforms the previous best ImageNet classifier. \n\nThe positional embeddings learnt by the model look meaningful in that each is most similar to others in the same row or column. Also, some of the attention heads in early layers attend to multiple distant patches, while others are a lot more local. This means that some heads in the early layers have a wide receptive field, which is something that convolution kernels cannot achieve. Overall, given enough data, the transformer seems to be able to learn inductive biases used by CNNs without being limited to them."], "venue": "arXiv", "opinion": "Intuitively, inductive biases become less and less useful the more training data we have, but I would have thought that in the current regime CNNs have too weak rather than too strong inductive biases, so the results are surprising. What is even more surprising is how simple the model is: It does not seem to use any data augmentation, unsupervised pretraining or other tricks like noisy student-teacher training, such that there are many promising avenues for immediate improvements. Also, I would imagine that using something more sophisticated than a linear projection to embed the 16x16 patches could go a long way.", "highlight": false, "read_more": "Paper: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #128", "newsletter_category": "Deep learning"}
{"id": "781fd8d05641c07615533dae54035f94", "title": "GPT-3 Creative Fiction", "url": "https://www.gwern.net/GPT-3", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Gwern Branwen and GPT-3"], "summaries": ["In Gwern's words, this is \"creative writing by OpenAI’s GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling\"."], "venue": "Author's Website", "opinion": "I often find it's very useful to stare directly at raw data in order to understand how something works, in addition to looking at summary statistics and graphs that present a very high-level view of the data. While this isn't literally raw data (Gwern heavily designed the prompts, and somewhat curated the outputs), I think it provides an important glimpse into how GPT-3 works that you wouldn't really get from reading the <@paper@>(@Language Models are Few-Shot Learners@).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #108", "newsletter_category": "Deep learning"}
{"id": "82400c121208556dfea8ac415f8037f8", "title": "Beijing Academy of Artificial Intelligence announces 1,75 trillion parameters model, Wu Dao 2.0", "url": "https://www.lesswrong.com/posts/2dG7vXDZjd6crkdLa/beijing-academy-of-artificial-intelligence-announces-1-75", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["There’s a good chance you’ve heard of the new Wu Dao 2.0 language model, with over 1 trillion parameters. Unfortunately, as far as I know there is no technical writeup describing this model, so I’m going to refrain from commenting on it. You can see other people’s takes in the linked LessWrong post, on [ChinAI](https://chinai.substack.com/p/chinai-145-enlightenment-via-large), and on [policy.ai](https://cset.georgetown.edu/newsletter/june-10-2021/)."], "venue": "", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #154", "newsletter_category": "Deep learning"}
{"id": "41d1033a50de140d52288aedd64753e2", "title": "Reinforcement Learning with Prediction-Based Rewards", "url": "https://blog.openai.com/reinforcement-learning-with-prediction-based-rewards/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Yuri Burda and Harri Edwards"], "summaries": ["Researchers at OpenAI have beaten average human performance on Montezuma's Revenge using a prediction-based curiosity technique called Random Network Distillation. A network with fixed random weights evaluates each state; another network with the same architecture is trained to predict the random network's output, given its input. The agent receives an additional reward proportional to the predictor's error on its current state. The idea behind the technique is that the predictor's error will be higher on states different from those it's been trained on, and so the agent will be rewarded for exploring them.\n\nThis paper follows from their [study on curiosity](https://arxiv.org/abs/1808.04355) ([AN #20](https://mailchi.mp/d92bd0fefc83/alignment-newsletter-20)) in which a predictor was trained to predict the next state directly, and the agent was rewarded when its error was high. However, this led to high reward on states that were unpredictable due to model limitations or stochasticity (e.g. the noisy TV problem). By contrast, Random Network Distillation only requires the prediction of a deterministic function which is definitely within the class of functions representable by the predictor (since it has the same architecture as the random network)."], "venue": "OpenAI Blog", "opinion": "This is an important step forward for curiosity-driven agents. As the authors note in the paper, RND has the additional advantages of being simple to implement and flexible.", "highlight": true, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #31", "newsletter_category": "Exploration"}
{"id": "2bad9400b24f6fc05d9eb81af0c6074a", "title": "Making Efficient Use of Demonstrations to Solve Hard Exploration Problems", "url": "https://deepmind.com/research/publications/Making-Efficient-Use-of-Demonstrations-to-Solve-Hard-Exploration-Problems", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Caglar Gulcehre*", "Tom Le Paine*", "Bobak Shahriari", "Misha Denil", "Matt Hoffman", "Hubert Soyer", "Richard Tanburn", "Steven Kapturowski", "Neil Rabinowitz", "Duncan Williams", "Gabriel Barth-Maron", "Ziyu Wang", "Nando de Freitas", "Worlds Team"], "summaries": ["This paper combines ideas from existing techniques to construct an architecture (R2D3) capable of learning to solve hard exploration problems with a small number (N~100) of demonstrations. R2D3 has two primary architectural features: its use of a recurrent head to learn Q values, and its strategy of sampling trajectories from separate pools of agent and demonstrator experience, with sampling prioritized by highest-temporal-difference-error transitions within each pool. \n\nAs the authors note, this approach is essentially an extension of an earlier paper, [Deep Q-Learning from Demonstrations](https://arxiv.org/abs/1704.03732), to use a recurrent head rather than a feed-forward one, allowing it to be more effectively deployed on partial-information environments. The authors test on 8 different environments that require long sequences of task completion to receive any reward, and find that their approach is able to reach human level performance on four of the tasks, while their baseline comparisons essentially never succeed on any task. Leveraging demonstrations can be valuable for solving these kinds of difficult exploration tasks, because demonstrator trajectories provide examples of how to achieve reward in a setting where the trajectories of a randomly exploring agent would rarely ever reach the end of the task to find positive reward. "], "venue": "arXiv", "opinion": "For all that this paper's technique is a fairly straightforward merging of existing techniques (separately-prioritized demonstration and agent pools, and the off-policy SotA R2D2), its results are surprisingly impressive: the tasks tested on require long and complex chains of correct actions that would be challenging for a non-imitation based system to discover, and high levels of environment stochasticity that make a pure imitation approach difficult.", "highlight": false, "read_more": "", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #65", "newsletter_category": "Exploration"}
{"id": "d6206e2ab825aecda739799842424901", "title": "Exploration Strategies in Deep Reinforcement Learning", "url": "https://lilianweng.github.io/lil-log/2020/06/07/exploration-strategies-in-deep-reinforcement-learning.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Lilian Weng"], "summaries": ["A good exploration strategy is critical for fast reinforcement learning. This blog post presents two key problems and a wide array of strategies that have been proposed to deal with them. The **hard-exploration problem** is about sparse or deceptive rewards which make occasional random exploration next to useless. **The noisy-TV problem** is about a pitfall of directly rewarding agents for seeking novel experience: If there was a TV with unpredictable noise outputs in the environment, the agent would be rewarded for sitting in front of the TV and might not learn anything new. \n\nMost of the discussed strategies are intrinsic reward schemes, where an additional reward is given to the agent for exploring new states. One way of doing this is count-based exploration, where the bonus reward depends on how often a state has been visited before. This can be extended to high-dimensional state spaces using density models or discretization. Another way is based on learning a predictor for features of the next state and rewarding the agent proportional to the <@predictor's error@>(@Reinforcement Learning with Prediction-Based Rewards@). An alternative is to learn multiple predictors and rewarding the agent for <@reaching states where they disagree@>(@Self-Supervised Exploration via Disagreement@). One problem with learnt predictors is that they only update slowly. This can be circumvented by combining the approach with episodic memory and a second intrinsic reward based on the distance (either euclidean or based on <@reachability@>(@Episodic Curiosity through Reachability@)) from states that were previously visited in the same episode. <@Agent57@>(@Agent57: Outperforming the human Atari benchmark@) combined this idea with a population of policies with different hyperparameters for the intrinsic reward and a meta-controller for prioritization of the most promising exploration policy. \n\nOther strategies include basing exploration on uncertainty in Q-value estimates, learning options or \"skills\" that encode a wide range of different behaviours <@Variational Option Discovery Algorithms@> or using either an explicit memory or a <@goal-conditioned policy@>(@Learning Actionable Representations with Goal-Conditioned Policies@) to reach informative states and start random exploration from there. "], "venue": "Author's Website", "opinion": "I enjoyed reading the article and think it is a good starting point for people who want to learn more about exploration. Sadly, safe exploration where potential negative consequnces of some explorative actions are taken into account was outside of the article's scope.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #113", "newsletter_category": "Exploration"}
{"id": "1dcea99e7a9898b1d89a92e5c177f3b0", "title": "Self-Supervised Exploration via Disagreement", "url": "http://arxiv.org/abs/1906.04161", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Deepak Pathak*", "Dhiraj Gandhi*", "Abhinav Gupta"], "summaries": ["For researchers who want to build a reinforcement learning system that can learn to explore its environment without explicit rewards, a common approach is to have the agent learn a model of the world, and incentivize it to explore places where its model has the highest error, under the theory that these represent places where it needs to interact more to collect more data and improve its world model. However, this approach suffers in cases when the environment is inherently stochastic, since in a stochastic environment (think: sitting in front of a static TV and trying to predict the next frame), prediction error can never be brought to zero, and the agent will keep interacting even when its world model has collected enough data to converge as much as it can. This paper proposes an alternative technique: instead of exploring in response to prediction error, learn an ensemble of bootstrapped next-state prediction models and explore in response to variance or disagreement between the models. This has a few nice properties. One is that, in cases of inherent stochasticity, all models will eventually converge to predicting the mean of the stochastic distribution, and so even though they've not brought error down to zero, the variance among models will be low, and will correctly incentivize our agent to not spend more time trying to learn. Another benefit is that since the reward is purely a function of the agent's models, it can be expressed analytically as a function of the agent's choices and trained via direct backpropogation rather than \"black box reward\" RL, making it more efficient. "], "venue": "ICML 2019", "opinion": "I found this approach really elegant and clever as a way of addressing the \"static TV\" problem in curiosity literature. I'd be curious to see more work that introduces even stronger incentives towards diversity among the ensemble models (different architectures, even more different datasets they're trained on), to see if that amplifies the cases of model disagreement.", "highlight": false, "read_more": "", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #61", "newsletter_category": "Exploration"}
{"id": "a32b6417f38780f7a3397bdabcdfb33e", "title": "Curiosity and Procrastination in Reinforcement Learning", "url": "https://ai.googleblog.com/2018/10/curiosity-and-procrastination-in.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Nikolay Savinov and Timothy Lillicrap"], "summaries": ["This blog post explains [Episodic Curiosity through Reachability](http://arxiv.org/abs/1810.02274), discussed in [AN #28](https://mailchi.mp/df2e472140b6/alignment-newsletter-28). As a reminder, this method trains a neural net to predict whether two observations were close in time to each other. Recent observations are stored in memory, and the agent is rewarded for reaching states that are predicted to be far away from any observations in memory."], "venue": "Google AI Blog", "opinion": "This is easier to read than the paper and more informative than our summaries, so I'd recommend it if you were interested in the paper.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #30", "newsletter_category": "Exploration"}
{"id": "701fa8756a1fa7dccb85378369aa9dca", "title": "Episodic Curiosity through Reachability", "url": "http://arxiv.org/abs/1810.02274", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Nikolay Savinov", "Anton Raichuk", "Raphaël Marinier", "Damien Vincent", "Marc Pollefeys", "Timothy Lillicrap", "Sylvain Gelly"], "summaries": ["This paper addresses the \"couch potato\" problem for intrinsic curiousity - the fact that, if you reward an agent for observing novel or surprising states, it prefers to sit in front of a TV and keep changing channels rather than actually exploring. It proposes instead rewarding states which are difficult to reach from already-explored states (stored in episodic memory). Their agent has a separate network to estimate reachability, which is trained based on the agent's experiences (where observations few steps apart are negative examples and those many steps apart are positive examples). This method significantly outperforms the previous state of the art curiousity method on VizDoom and DMLab environments."], "venue": "IJCAI 2018", "opinion": "This paper is a useful advance which does help address the couch potato problem, but it seems like it might still fail on similar problems. For example, suppose an agent were given a piece of paper on which it could doodle. Then states with lots of ink are far away from states with little ink, and so it might be rewarded for doodling forever (assuming a perfect model of reachability). My guess is that a model-based metric for novelty will be necessary to counter such problems - but it's also plausible that we end up using combinations of techniques like this one.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #28", "newsletter_category": "Exploration"}
{"id": "3c1d92adec360aeed24be494c648e2f6", "title": "Planning to Explore via Self-Supervised World Models", "url": "http://arxiv.org/abs/2005.05960", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Ramanan Sekar*", "Oleh Rybkin*", "Kostas Daniilidis", "Pieter Abbeel", "Danijar Hafner", "Deepak Pathak"], "summaries": ["<@PlaNet@>(@Learning Latent Dynamics for Planning from Pixels@) learns a latent world model which can be used for planning, and <@Dreamer@>(@Dream to Control: Learning Behaviors by Latent Imagination@) extends the idea by performing RL within the learned latent world model instead of requiring interaction with the environment. However, we still need to efficiently explore the real environment to obtain training data for the world model.\n\nThe authors propose to augment Dreamer with a novel exploration strategy. In addition to the learned latent world model, an ensemble of simpler one-step world models is trained and the magnitude of disagreement within the ensemble for a state is used as a proxy for the information gain for reaching that state. This is used as a (dynamically changing) intrinsic reward that can guide planning. By training Dreamer on this intrinsic reward, we can identify informative states in the real environment without having to first visit similar states as would be the case with e.g. curiosity, where the intrinsic reward is computed in retrospect.\n\nThe resulting system achieves state of the art zero-shot learning on a variety of continuous control tasks, and often comes close to the performance of agents that were trained for the specific task."], "venue": "arXiv", "opinion": "Planning to reach states where a lot of information is gained seems like a very promising strategy for exploration. I am not sure whether building sufficiently precise world models is always as feasible as model-free RL. If it was, misspecified rewards and similar problems would probably become easier to catch, as rollouts of a policy using a precise world model can help us predict what kind of worlds this policy produces without deployment. On the other hand, the improved capabilities for transfer learning could lead to more ubiquitous deployment of RL systems and amplify remaining failure modes, especially those stemming from <@multiagent interactions@>(@Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence@).", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #106", "newsletter_category": "Exploration"}
{"id": "69c7efc8ff55d6c34c1e344beeef51f5", "title": "Montezuma’s Revenge Solved by Go-Explore, a New Algorithm for Hard-Exploration Problems", "url": "http://eng.uber.com/go-explore/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Adrien Ecoffet", "Joost Huizinga", "Joel Lehman", "Kenneth O. Stanley", "and Jeff Clune"], "summaries": ["This blog post showcases an agent which achieves high scores in Montezuma’s Revenge and Pitfall by keeping track of a frontier of visited states (and the trajectories which led to them). In each training episode, a state is chosen from the frontier, the environment is reset to that state, and then the agent randomly explores further and updates the frontier. The authors argue that this addresses the tendency of intrinsic motivation algorithms to forget about promising areas they've already explored. To make state storage tractable, each state is stored as a downsampled 11x8 image.\n\nThe authors note that this solution exploits the determinism of the environment, which makes it brittle. So they then use imitation learning to learn a policy from demonstrations by the original agent. The resulting agents score many times higher than state-of-the-art on Montezuma’s Revenge and Pitfall."], "venue": "Uber Engineering", "opinion": "I’m not particularly impressed by this result, for a couple of reasons. Firstly, I think that exploiting determinism by resetting the environment (or even just memorising trajectories) fundamentally changes the nature of the problem posed by hard Atari games. Doing so allows us to solve them in the same ways as any other search problem - we could, for instance, just use the AlphaZero algorithm to train a value network. In addition, the headline results are generated by hand-engineering features like x-y coordinates and room number, a technique that has been eschewed by most other attempts. When you take those features away, their agent’s total reward on Pitfall falls back to 0.", "highlight": false, "read_more": "Quick Opinions on Go-Explore", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #35", "newsletter_category": "Exploration"}
{"id": "22c8ae006607e2220ca266e54a6bba4e", "title": "A new course to teach people about fairness in machine learning", "url": "https://www.blog.google/technology/ai/new-course-teach-people-about-fairness-machine-learning/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Sanders Kleinfeld"], "summaries": ["Google has added a short section on fairness to their Machine Learning Crash Course (MLCC)."], "venue": "Google Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #29", "newsletter_category": "Fairness and bias"}
{"id": "0d3466e53f846d00907044e8a68d8475", "title": "Delayed Impact of Fair Machine Learning", "url": "http://bair.berkeley.edu/blog/2018/05/17/delayed-impact/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Lydia T. Liu", "Sarah Dean", "Esther Rolf", "Max Simchowitz", "Moritz Hardt"], "summaries": ["Consider a bank that has to choose which loan applications should be approved based on a credit score. Typically, fairness in this setting is encoded by saying that there should be some sort of parity between groups (and different criteria have been proposed for what actually should be the same). However, if you model the actual outcomes that come from the decision (namely, profit/loss to the bank _and_ changes in credit score to the applicant), you can see that standard fairness criteria lead to suboptimal outcomes. As a result, in general you want to look at the delayed impact of ML models."], "venue": "BAIR Blog", "opinion": "This actually feels quite related to the value alignment problem -- in general, we care about things besides fairness, and if we try to optimize directly for fairness, then we'll be giving up good outcomes on other dimensions. It's another case of Goodhart's law, where \"fairness\" was a proxy for \"good for disadvantaged groups\".", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #7", "newsletter_category": "Fairness and bias"}
{"id": "51e5bf95385f3b40892050676109d108", "title": "Introducing the Inclusive Images Competition", "url": "https://ai.googleblog.com/2018/09/introducing-inclusive-images-competition.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tulsee Doshi"], "summaries": ["The authors write, \"this competition challenges you to use Open Images, a large, multilabel, publicly-available image classification dataset that is majority-sampled from North America and Europe, to train a model that will be evaluated on images collected from a different set of geographic regions across the globe\". The results will be presented at NIPS 2018 in December."], "venue": "Google AI Blog", "opinion": "I'm really interested in the techniques and results here, since there's a clear, sharp distribution shift from the training set to the test set, which is always hard to deal with. Hopefully some of the entries will have general solutions which we can adapt to other settings.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #23", "newsletter_category": "Fairness and bias"}
{"id": "338fe10075873cd7934fe933a1295d7f", "title": "The case for building expertise to work on US AI policy, and how to do it", "url": "https://80000hours.org/articles/us-ai-policy/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Niel Bowerman"], "summaries": ["This in-depth career review makes the case for working on US AI policy. It starts by making a short case for why AI policy is important; and then argues that US AI policy roles in particular can be very impactful (though they would still recommend a policy position in an AI lab like DeepMind or OpenAI over a US AI policy role). It has tons of useful detail; the only reason I'm not summarizing it is because I suspect that most readers are not currently considering career choices, and if you are considering your career, you should be reading the entire article, not my summary. You could also check out [Import AI's summary](https://jack-clark.net/2019/02/04/import-ai-132-can-your-algorithm-outsmart-the-obstacle-tower-cross-domain-nlp-with-biobert-and-training-on-faceforensics-to-spot-deepfakes/)."], "venue": "80000 Hours", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #44", "newsletter_category": "Field building"}
{"id": "fb93b793bfaadefb79c66bd2a9eefa02", "title": "FAQ: Advice for AI Alignment Researchers", "url": "https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Rohin Shah"], "summaries": ["I've written an FAQ answering a broad range of AI alignment questions that people entering the field tend to ask me. Since it's a meta post, i.e. about how to do alignment research rather than about alignment itself, I'm not going to summarize it here."], "venue": "Author's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #148", "newsletter_category": "Field building"}
{"id": "846f302d69c112d83a3fc9bfd9b5906c", "title": "Critch on career advice for junior AI-x-risk-concerned researchers", "url": "https://www.lesswrong.com/posts/7uJnA3XDpTgemRH2c/critch-on-career-advice-for-junior-ai-x-risk-concerned", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Andrew Critch", "via Rob Bensinger"], "summaries": ["A common piece of advice for aspiring AI x-risk researchers is to work on AI capabilities research in order to skill up so they can later contribute to safety. However, Critch is worried that such researchers will rationalize their work as being \"relevant to safety\", leading to a false sense of security since AI researchers are now surrounded by people who are \"concerned about safety\", but _aren't actually doing safety research_. Note that Critch would still advise young researchers to get into grad school for AI, but to be aware of this effect and not feel any pressure to do safety research and to avoid rationalizing whatever research they are doing."], "venue": "LessWrong", "opinion": "I feel pretty unqualified to have an opinion here on how strong this effect is -- it's pretty far outside of my experience. At the very least it's a consideration we should be aware about, and Critch supports it better in the full post, so I'd recommend you read it.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #7", "newsletter_category": "Field building"}
{"id": "cc1f2ffbf8170a1555b63edc66d19b2b", "title": "AI and Efficiency", "url": "https://openai.com/blog/ai-and-efficiency/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Danny Hernandez", "Tom Brown"], "summaries": ["Given the <@exponential increase@>(@AI and Compute@) in compute used for state-of-the-art results in ML, one might come to think that there has been little algorithmic progress. This paper presents strong evidence against that hypothesis. We can roughly measure algorithmic progress by tracking the compute needed to achieve a concrete performance benchmark over time. Doing so yields doubling times in efficiency (time until only half of the initial compute was needed for the same performance) of around 16 months for ImageNet, which is faster than Moore's law. Other tasks like translation as well as playing Go and Dota 2 exhibit even faster doubling times over short periods. As making a task feasible for the first time arguably presents more algorithmic progress than improving the efficiency of solving an already feasible task, actual progress might be even faster than these numbers suggest. However, the amount of data points is quite limited and it is unclear if these trends will persist and whether they will generalize to other domains. Still, the authors conjecture that similar trends could be observed for tasks that received large amounts of investment and have seen substantial gains in performance. \n\nCombining these results with the increased available compute over time, the authors estimate that the effective training compute available to the largest AI experiments has increased by a factor of 7.5 million (!) in 2018 relative to 2012. \n\nA focus on efficiency instead of top performance allows actors with limited amounts of compute to contribute. Furthermore, models that reach a particular benchmark quickly seem like strong candidates for scaling up. This way, more efficient algorithms might act as a catalyst for further progress. There is a public [git repository](https://github.com/openai/ai-and-efficiency) to keep better track of algorithmic efficiency. "], "venue": "OpenAI Blog", "opinion": "Even though access to compute has surely helped with increased efficiency in ways that I would not really label as algorithmic progress (for example by enabling researchers to try more different hyperparameters), the aggregated numbers seem surprisingly high. This suggests that I either had not correctly internalized what problems AI is able to solve these days, or underestimated the difficulty of solving these problems. It would be quite interesting to see whether there are similar improvements in the sample efficiency of deep reinforcement learning, as I expect this to be a major bottleneck for the application of agentic AIs in the absence of accurate simulators for real-world decision making.", "highlight": true, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #99", "newsletter_category": "Forecasting"}
{"id": "0a33009d1dcb87183a05c154b6263e4b", "title": "AI and Compute", "url": "https://blog.openai.com/ai-and-compute/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Dario Amodei and Danny Hernandez"], "summaries": ["Since 2012, when the deep learning revolution began with AlexNet, the amount of compute used in the largest-scale experiments has been doubling _every 3.5 months_. Initially, people started to use GPUs to scale up, but there wasn't a huge amount o f interest. In 2014-16, as interest in deep learning really began to take off, people started to use a lot of compute to get good results -- but parallelism stopped helping beyond a certain point (~100 GPUs) because the parameter updates from the data were becoming too stale. Since then, we've had algorithmic improvements that allow us to take advantage of more parallelism (huge batch sizes, architecture search, expert iteration), and this has let us scale up the amount of compute thrown at the problem."], "venue": "OpenAI Blog", "opinion": "I did know that the amount of compute used was growing fast, but a 3.5 month doubling time _for 6 years running_ is _huge_, and there's no reason to expect that it will stop now. It's also interesting to see what made it onto the graph -- there's image classification, machine translation, and neural architecture search (all of which have clear economic incentives), but some of the largest ones are by projects aiming to build AGI (AlphaGo Zero, AlphaZero, and Dota). Notably, deep reinforcement learning just barely makes it on the graph, with DQN two orders of magnitude lower than any other point on the graph. I'm really curious what deep RL could solve given AlphaGo levels of compute.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #7", "newsletter_category": "Forecasting"}
{"id": "bc2a75ea87f81634d704b1de2dbf5326", "title": "Danny Hernandez on forecasting and the drivers of AI progress", "url": "https://80000hours.org/podcast/episodes/danny-hernandez-forecasting-ai-progress/?utm_campaign=podcast__danny-hernandez&utm_source=80000+Hours+Podcast&utm_medium=podcast", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Arden Koehler and Danny Hernandez"], "summaries": ["This podcast is a great introduction to the practice of forecasting and measurement in AI, and why it is important. I won't summarize everything in the podcast, but here are some of the points made.\n\nDanny talks about the <@AI and Compute@> and <@AI and Efficiency@> work that he did at OpenAI. The former shows that the compute devoted to the largest-scale experiments has increased by a factor of 300,000 from 2012 to 2018, and the latter suggests that algorithms have been able to achieve similar performance with 25x less compute over the same time period (later updated to 44x from 2012 to 2019).\n\nOne thing I didn’t realize earlier was that the 25x / 44x factor should be thought of as a loose lower bound: in other areas such as language modeling, the factor looks higher. But more importantly, the methodology used doesn’t allow us to model the effects of an algorithm allowing us to do something we couldn’t do before (which we could interpret as something we could do, but with way more compute). Possibly this algorithmic progress should be thought of as a 100x or even 1000x improvement in efficiency. Overall, Danny sees both algorithmic progress and increase in compute as pretty big factors in predicting how AI will go in the future.\n\nUnfortunately, it’s hard to draw strong implications from these measurements for the downstream things we care about -- should we think that AI progress is “slow” or “fast”, or “linear” or “exponential”, based on these results? It’s important to be specific about the units you’re using when thinking about such a question. Danny thinks the economic impact of AI is an important lens here. It seems to him that neural nets were having very little impact back in (say) 2008, but since then they have been having a lot more impact, e.g. by making ~15% of Google searches better (by using a new language model). To his eye, this trend looks exponential.\n\nIn any case, Danny thinks that this sort of rigorous measurement and forecasting work is important, because it provides concrete inputs that can allow decision makers to perform their job better. This is at least one reason why OpenAI’s communication policy involves blog posts that deliberately target a wide audience: any decision maker can read these posts and get value out of them (unlike e.g. research papers).\n\nThis work is part of the broader work done by the Foresight team at OpenAI (which is hiring for research engineers): other work includes <@Scaling Laws for Neural Language Models@> and <@How AI Training Scales@>.\n\nDanny thinks work in AI hardware is promising and under-explored by the community: it seems like it will be a particularly important field in the future, as it will drive some of the progress in increased compute, and as a result having some influence in the area could be quite helpful. 
For example, perhaps one could advocate for a <@windfall clause@>(@The Windfall Clause: Distributing the Benefits of AI@) at AI hardware companies."], "venue": "80000 Hours Podcast", "opinion": "This measurement and forecasting work seems great; it constrains how we should expect future AI systems to look, and also improves our understanding of the impacts of AI, which probably helps us develop plans for deployment.\n\nI was not very convinced by the reasoning about economic impact. I would believe that the economic impact of neural nets has grown exponentially, but it seems like we should be analyzing trends in machine learning (ML) overall, not just neural nets, and it seems much less likely to me that we see an exponential growth in that. Any time you see a new, better version of a previous technology (as with neural nets in relation to ML), you’re going to see an exponential trend as the new technology is adopted; this doesn’t mean that the exponential will keep going on and lead to transformative impact.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #101", "newsletter_category": "Forecasting"}
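The 25x (2012-2018) and 44x (2012-2019) efficiency figures can be converted back into a doubling time, which lines up with the ~16-month ImageNet figure from AI and Efficiency. A small sketch, treating the periods as exactly 72 and 84 months (the true endpoints are fuzzier than that):

```python
import math

def implied_doubling_time(months_elapsed, total_factor):
    """Doubling time (in months) implied by an observed total efficiency gain."""
    return months_elapsed / math.log2(total_factor)

print(implied_doubling_time(72, 25))  # 2012-2018, 25x -> ~15.5 months
print(implied_doubling_time(84, 44))  # 2012-2019, 44x -> ~15.4 months
# Both are consistent with the ~16-month efficiency doubling time cited above.
```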
{"id": "bcf2bf44217bddebd5d87ed2bbbec151", "title": "Openness Norms in AGI Development", "url": "https://www.alignmentforum.org/posts/RvrTZ3qKWpg9aiFqZ/openness-norms-in-agi-development", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Sublation"], "summaries": ["This post summarizes two papers that provide models of why scientific research tends to be so open, and then applies it to the development of powerful AI systems. The [first](http://www.strevens.org/research/scistruc/Communicans.pdf) models science as a series of discoveries, in which the first academic group to reach a discovery gets all the credit for it. It shows that for a few different models of info-sharing, info-sharing helps everyone reach the discovery sooner, but doesn't change the probabilities for who makes the discovery first (called _race-clinching probabilities_): as a result, sharing all information is a better strategy than sharing none (and is easier to coordinate on than the possibly-better strategy of sharing just some information).\n\nHowever, this theorem doesn't apply when info sharing compresses the discovery probabilities _unequally_ across actors: in this case, the race-clinching probabilities _do_ change, and the group whose probability would go down is instead incentivized to keep information secret (which then causes everyone else to keep their information secret). This could be good news: it suggests that actors are incentivized to share safety research (which probably doesn't affect race-clinching probabilities) while keeping capabilities research secret (thereby leading to longer timelines).\n\nThe [second paper](http://philsci-archive.pitt.edu/13452/1/Heesen%202017%20Communism%20and%20the%20Incentive%20to%20Share%20in%20Science%20preprint.pdf) assumes that scientists are competing to complete a k-stage project, and whenever they publish, they get credit for all the stages they completed that were not yet published by anyone else. It also assumes that earlier stages have a higher credit-to-difficulty ratio (where difficulty can be different across scientists). It finds that under this setting scientists are incentivized to publish whenever possible. For AI development, this seems not to be too relevant: we should expect that with powerful AI systems, most of the \"credit\" (profit) comes from the last few stages, where it is possible to deploy the AI system to earn money."], "venue": "Alignment Forum", "opinion": "I enjoyed this post a lot; the question of openness in AI research is an important one, that depends both on the scientific community and industry practice. The scientific community is extremely open, and the second paper especially seems to capture well the reason why. In contrast industry is often more secret (plausibly due to <@patents@>(@Who owns artificial intelligence? A preliminary analysis of corporate intellectual property strategies and why they matter@)). To the extent that we would like to change one community in the direction of the other, a good first step is to understand their incentives so that we can try to then change those incentives.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #94", "newsletter_category": "Forecasting"}
{"id": "2da647a068b7d597d6eb44702561b650", "title": "Coordination Surveys: why we should survey to organize responsibilities, not just predictions", "url": "https://www.lesswrong.com/posts/Lds9opZsAMbjuZp7h/coordination-surveys-why-we-should-survey-to-organize", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Andrew Critch"], "summaries": ["This post suggests that when surveying researchers about the future impact of their technology, we should specifically ask them about their beliefs about what actions other people will take, and what they personally are going to do, rather than just predicting total impact. (For example, we could ask how many people will invest in safety.) Then, by aggregating across survey respondents, we can see whether or not the researchers beliefs about what others will do match the empirical distribution of what researchers are planning to do. This can help mitigate the effect where everyone thinks that everyone else will deal with a problem, and the effect where everyone tries to solve a problem because they all think no one else is planning to solve it. Critch has offered to provide suggestions on including this methodology in any upcoming surveys; see the post for details."], "venue": "LessWrong", "opinion": "This is a cool idea, and seems worth doing to me. I especially like that the survey would simply reveal problems by collecting two sources of information from people and checking their consistency with each other: there isn't any particular argument being made; you are simply showing inconsistency in people's own beliefs to them, if and only if such inconsistency exists. In practice, I'm sure there will be complications -- for example, perhaps the set of researchers taking the survey is different from the set of \"others\" whose actions and beliefs they are predicting -- but it still seems worth at least trying out.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #65", "newsletter_category": "Forecasting"}
{"id": "63ad49c3c7053ca70b0600240e613fe3", "title": "Musings on Cumulative Cultural Evolution and AI", "url": "https://www.lesswrong.com/posts/K686EFdXysfRBdob2/musings-on-cumulative-cultural-evolution-and-ai", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["calebo"], "summaries": ["A [recent paper](https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1006504&type=printable) develops a conceptual model that retrodicts human social learning. They assume that asocial learning allows you adapt to the current environment, while social learning allows you to copy the adaptations that other agents have learned. Both can be increased by making larger brains, at the cost of increased resource requirements. What conditions lead to very good social learning?\n\nFirst, we need high transmission fidelity, so that social learning is effective. Second, we need some asocial learning, in order to bootstrap -- mimicking doesn't help if the people you're mimicking haven't learned anything in the first place. Third, to incentivize larger brains, the environment needs to be rich enough that additional knowledge is actually useful. Finally, we need low _reproductive skew_, that is, individuals that are more adapted to the environment should have only a slight advantage over those who are less adapted. (High reproductive skew would select too strongly for high asocial learning.) This predicts pair bonding rather than a polygynous mating structure.\n\nThis story cuts against the arguments in [Will AI See Sudden Progress?](https://www.lesswrong.com/posts/AJtfNyBsum6ZzWxKR/will-ai-see-sudden-progress) and [Takeoff speeds](https://sideways-view.com/2018/02/24/takeoff-speeds/): it seems like evolution \"stumbled upon\" high asocial and social learning and got a discontinuity in reproductive fitness of species. We should potentially also expect discontinuities in AI development.\n\nWe can also forecast the future of AI based on this story. Perhaps we need to be watching for the perfect combination of asocial and social learning techniques for AI, and once these components are in place, AI intelligence will develop very quickly and autonomously."], "venue": "LessWrong", "opinion": "As the post notes, it is important to remember that this is one of many plausible accounts for human success, but I find it reasonably compelling. It moves me closer to the camp of \"there will likely be discontinuities in AI development\", but not by much.\n\nI'm more interested in what predictions about AI development we can make based on this model. I actually don't think that this suggests that AI development will need both social and asocial learning: it seems to me that in this model, the need for social learning arises because of the constraints on brain size and the limited lifetimes. Neither of these constraints apply to AI -- costs grow linearly with \"brain size\" (model capacity, maybe also training time) as opposed to superlinearly for human brains, and the AI need not age and die. So, with AI I expect that it would be better to optimize just for asocial learning, since you don't need to mimic the transmission across lifetimes that was needed for humans.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #60", "newsletter_category": "Forecasting"}
{"id": "d099aa2df9824828c8585c35b888744f", "title": "Signup form for AI Metaculus", "url": "https://docs.google.com/forms/d/e/1FAIpQLSduBjn3W_MpHHjsKUEhzV6Krkup78ujE5-8bpNJ5HDE7GGnmA/viewform?usp=sf_link", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jacob Lagerros and Ben Goldhaber"], "summaries": ["Recently, forecasting platform Metaculus launched a new instance dedicated specifically to AI in order to get good answers for empirical questions (such as AGI timelines) that can help avoid situations like [info-cascades](https://www.lesswrong.com/posts/2uDBJWCksvzhDzHGf/understanding-information-cascades). While most questions don’t have that many predictions, the current set of beta-users were invited based on forecasting track-record and AI domain-expertise, so the signal of the average forecast should be high.\n\nSome interesting predictions include:\n - By end of 2019, will there be an agent at least as good as AlphaStar using non-controversial, human-like APM restrictions? _[mean: 58%, median: 66%, n = 26]_\n - When will there be a superhuman Starcraft II agent with no domain-specific hardcoded knowledge, trained using <=$10,000 of publicly available compute? _[50%: 2021 to 2037, with median 2026, n = 35]_\n\nThis forecast is supported by a [Guesstimate model](https://www.getguesstimate.com/models/12709), which estimates current and future sample efficiency of Starcraft II algorithms, based on current performance, algorithmic progress, and the generalization of Moore’s law. For algorithmic progress, they look at the improvement in sample efficiency on Atari, and find a doubling time of roughly a year, via DQN → DDQN → Dueling DDQN → Prioritized DDQN → PPO → Rainbow → IMPALA.\n\nOverall, there are 50+ questions, including on malicious use of AI, publishing norms, conference attendance, MIRI’s research progress, the max compute doubling trend, OpenAI LP, nationalisation of AI labs, whether financial markets expect AGI, and more. You can sign-up to join [here](https://docs.google.com/forms/d/e/1FAIpQLSduBjn3W_MpHHjsKUEhzV6Krkup78ujE5-8bpNJ5HDE7GGnmA/viewform?usp=sf_link)."], "venue": "Google Forms", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #49", "newsletter_category": "Forecasting"}
{"id": "e99321d41a6c2b126e58cc8401db9454", "title": "Semi-informative priors over AI timelines", "url": "https://www.openphilanthropy.org/blog/report-semi-informative-priors", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Tom Davidson"], "summaries": ["This report aims to analyze outside view evidence for AI timelines. In this setting, “outside view” roughly means that we take into account when AI research started, and how its inputs (data, compute, researcher time) have changed over time, but nothing else. The report considers four potential reference classes from which an outside view can be formed.\n\nFor each reference class, we’re going to use it to estimate how hard we would have thought AGI would be before we had tried to build AGI at all, and then we’re going to update that probability based on the observation that we’ve tried for some amount of calendar time / researcher time / compute, and haven’t yet gotten AGI. The report uses a simple generalization of Laplace’s Rule to actually synthesize it all together; I’m not going to go into that here.\n\nI found the reference classes most interesting and will summarize them here. Note that the author says that the main contribution is in the framework, and that the individual reference classes are much less well done (there are several suggestions on other reference classes to investigate in the future). With that caveat, in order of the weight assigned to each, the four references classes are:\n\n1. **STEM goal:** AGI is a highly ambitious but feasible technology that a serious STEM field is explicitly trying to develop. Looking at other such examples, the author suggests putting between 5% and 50% on developing AGI in 50 years.\n2. **Transformative technology:** AGI is a technological development that would have a transformative effect on the nature of work and society. While these have been incredibly rare, we might expect that their probability increases with more technological development, making it more likely to occur now. Based on this, the author favors an upper bound of 1% per year on AGI.\n3. **Futurism goal:** AGI is a high-impact technology that a serious STEM field is trying to build in 2020. There are a lot of such technologies, but we probably shouldn’t expect too many high-impact technologies to work out. The author suggests this should put it at below 1% per year.\n4. **Math conjecture:** AGI is kinda sorta like a notable math conjecture. AI Impacts [investigated](https://aiimpacts.org/resolutions-of-mathematical-conjectures-over-time/) ([AN #97](https://mailchi.mp/a2b5efbcd3a7/an-97-are-there-historical-examples-of-large-robust-discontinuities)) the rate at which notable math conjectures are resolved, and their results imply 1/170 chance per year of a conjecture being resolved.\n\nAggregating these all together, the author favors assigning 0.1% - 1% per year at the beginning of AI research in 1956, with a point estimate of 0.3%. After updating on the fact that we don’t yet have AGI, the framework gives 1.5% - 9% for AGI by 2036 and 7% - 33% for AGI by 2100.\n\nWe can also run the same analysis where you get a new “chance” to develop AGI every time you increase the researcher pool by a constant fraction. (This is almost like having a log uniform prior on how many researcher hours are needed to get AGI.) 
Since there have been a few large booms in AI, this gives somewhat higher probabilities than the previous method, getting to 2% - 15% for AGI by 2036. Doing the same thing for compute gets 2% - 22% for AGI by 2036.\n\nA weighted aggregation of all of the methods together (with weights set by intuition) gives 1% - 18% for AGI by 2036, and 5% - 35% for AGI by 2100."], "venue": "Open Phil Website", "opinion": "This seems like a good quantification of what the outside view suggests for AI timelines. Unfortunately, I have never really spent much time figuring out how best to combine outside view and inside view evidence, because research generally requires you to think about a detailed, gearsy, inside-view model, and so outside views feel pretty irrelevant to me. (They’re obviously relevant to Open Phil, who have to make funding decisions based on AI timelines, and so really do benefit from having the better estimates of timelines.) So I will probably continue to act based on the <@bio anchors framework@>(@Draft report on AI timelines@).\n\nThis is also why I haven’t highlighted this particular piece, despite the content being excellent. I generally highlight things that would be valuable for technical alignment researchers to read; my guess is that timelines are actually _not_ that important for researchers to have good beliefs about (though inside-view models that predict timelines are important).\n\nSome feedback on the report takes issue with the use of Laplace’s Rule because it models each “attempt” to make AGI as independent, which is obviously false. I’m not too worried about this; while the model might be obviously wrong, I doubt that a more sophisticated model would give very different results; most of the “oomph” is coming from the reference classes.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #145", "newsletter_category": "Forecasting"}
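To make the update step concrete, here is a minimal sketch of a Laplace-style rule, assuming a Beta(1, 1/p1 - 1) prior over the per-year success probability (so that the first-trial probability is p1); the report's actual generalization may differ in its details, but numbers in this ballpark fall inside the ranges quoted above.

```python
def p_agi_within(k_years, p_first_trial, years_of_failure):
    """P(AGI within the next k_years | no AGI so far), under the Beta sketch above."""
    b = 1 / p_first_trial - 1     # Beta(1, b) prior has first-trial probability p_first_trial
    m = b + years_of_failure      # posterior after years_of_failure failures is Beta(1, m)
    return 1 - m / (m + k_years)  # P(no success in next k trials) = m / (m + k)

p1 = 0.003                        # the report's 0.3%/year point estimate for 1956
n = 2021 - 1956                   # years of "failure" observed so far
print(p_agi_within(2036 - 2021, p1, n))  # ~0.04, inside the quoted 1.5% - 9% range
print(p_agi_within(2100 - 2021, p1, n))  # ~0.17, inside the quoted 7% - 33% range
```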
{"id": "0690367549107d31158317d62baaf2e2", "title": "Measuring Progress in Deep Reinforcement Learning Sample Efficiency", "url": "https://openreview.net/forum?id=_QdvdkxOii6", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Anonymous"], "summaries": ["This paper measures historic increases in sample efficiency by looking at the number of samples needed to reach some fixed performance level on Atari games and virtual continuous control tasks. The authors find exponential progress in sample efficiency, with estimated doubling times of 10 to 18 months on Atari, 5 to 24 months on state-based continuous control, and 4 to 9 months on pixel-based continuous control, depending on the specific task and performance level. They find that these gains were mainly driven by improvements in off-policy and model-based deep RL learning approaches, as well as the use of auxiliary learning objectives to speed up representation learning, and not by model size improvements. The authors stress that their study is limited in studying only the published training curves for only three tasks, not accounting for the extent to which hyperparameter tuning may have been responsible for historic gains."], "venue": "OpenReview", "opinion": "Following in the footsteps of <@AI and Efficiency@>(@AI and Efficiency@), here we have a paper showing exponential gains in sample efficiency in particular. I'm really glad someone did this analysis-- I think I'm surprised by how fast progress is, though as the paper notes it's unclear exactly how to relate historic improvements on fixed task performance to a sense of overall improvement in continuous control (though several of the main contributors listed in the appendix seem fairly general). I also really appreciate how thorough the full paper is in listing limitations to this work.\n\nSince these papers are coming up in the same newsletter, I'll note the contrast between the data-unlimited domains explored in the scaling laws paper and the severely data-limited domain of real-world robotics emphasized in this paper. In robotics, it seems we are definitely still constrained by algorithmic progress that lets us train on fewer samples (or do better <@transfer from simulations@>(@Let's Discuss OpenAI's Rubik's Cube Result@)). Of course, maybe progress in data-unlimited domains will ultimately result in AIs that make algorithmic progress in data-limited domains faster than humans ever could.", "highlight": false, "read_more": "", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #125", "newsletter_category": "Forecasting"}
{"id": "93c3caa13364e800ef3b7e4f0f8b675c", "title": "How Much Computational Power It Takes to Match the Human Brain", "url": "https://www.openphilanthropy.org/blog/new-report-brain-computation", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Joseph Carlsmith"], "summaries": ["In this blog post, Joseph Carlsmith gives a summary of his longer report estimating the number of floating point operations per second (FLOP/s) which would be _sufficient_ to perform any cognitive task that the human brain can perform. He considers four different methods of estimation.\n\nUsing *the mechanistic method*, he estimates the FLOP/s required to model the brain’s low-level mechanisms at a level of detail adequate to replicate human task-performance. He does this by estimating that ~1e13 - 1e17 FLOP/s is enough to replicate what he calls “standard neuron signaling” — neurons signaling to each other via using electrical impulses (at chemical synapses) — and learning in the brain, and arguing that including the brain’s other signaling processes would not meaningfully increase these numbers. He also suggests that various considerations point weakly to the adequacy of smaller budgets.\n\nUsing *the functional method*, he identifies a portion of the brain whose function we can approximate with computers, and then scales up to FLOP/s estimates for the entire brain. One way to do this is by scaling up models of the human retina: Hans Moravec's estimates for the FLOP/s of the human retina imply 1e12 - 1e15 FLOP/s for the entire brain, while recent deep neural networks that predict retina cell firing patterns imply 1e16 - 1e20 FLOP/s.\n\nAnother way to use the functional method is to assume that current image classification networks with known FLOP/s requirements do some fraction of the computation of the human visual cortex, adjusting for the increase in FLOP/s necessary to reach robust human-level classification performance. Assuming somewhat arbitrarily that 0.3% to 10% of what the visual cortex does is image classification, and that the EfficientNet-B2 image classifier would require a 10x to 1000x increase in frequency to reach fully human-level image classification, he gets 1e13 - 3e17 implied FLOP/s to run the entire brain. Joseph holds the estimates from this method very lightly, though he thinks that they weakly suggest that the 1e13 - 1e17 FLOP/s estimates from the mechanistic method are not radically too low.\n\nUsing *the limit method*, Joseph uses the brain’s energy budget, together with physical limits set by Landauer’s principle, which specifies the minimum energy cost of erasing bits, to upper-bound required FLOP/s to ~7e21. He notes that this relies on arguments about how many bits the brain erases per FLOP, which he and various experts agree is very likely to be > 1 based on arguments about algorithmic bit erasures and the brain's energy dissipation.\n\nLastly, Joseph briefly describes *the communication method*, which uses the communication bandwidth in the brain as evidence about its computational capacity. Joseph thinks this method faces a number of issues, but some extremely preliminary estimates suggest 1e14 FLOP/s based on comparing the brain to a V100 GPU, and 1e16 - 3e17 FLOP/s based on estimating the communication capabilities of brains in traversed edges per second (TEPS), a metric normally used for computers, and then converting to FLOP/s using the TEPS to FLOP/s ratio in supercomputers. 
\n\nOverall, Joseph thinks it is more likely than not that 1e15 FLOP/s is enough to perform tasks as well as the human brain (given the right software, which may be very hard to create). And he thinks it's unlikely (<10%) that more than 1e21 FLOP/s is required. For reference, an NVIDIA V100 GPU performs up to 1e14 FLOP/s (although FLOP/s is not the only metric which differentiates two computational systems.)"], "venue": "Open Philanthropy Website", "opinion": "I really liked this post, although I haven't gotten a chance to get through the entire full-length report. I found the reasoning extremely legible and transparent, and there's no place where I disagree with Joseph's estimates or conclusions. See also [Import AI's summary](https://jack-clark.net/2020/09/14/import-ai-214-nvidias-40bn-arm-deal-a-new-57-subject-nlp-test-ai-for-plant-disease-detection/).", "highlight": false, "read_more": "Full Report: How Much Computational Power Does It Take to Match the Human Brain?", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #118", "newsletter_category": "Forecasting"}
{"id": "6c080e9395e0c3c8aa5e69b134fed470", "title": "Does Economic History Point Toward a Singularity?", "url": "https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Ben Garfinkel"], "summaries": ["One important question for the long-term future is whether we can expect accelerating growth in total GDP in the near future (see e.g. this <@recent report@>(@Modeling the Human Trajectory@)). For AI alignment in particular, the answer to this question could have a significant impact on AI timelines: if some arguments suggested that it would be very unlikely for us to have accelerating growth soon, we should probably be more skeptical that we will develop transformative AI soon.\n\nSo far, the case for accelerating growth relies on one main argument that the author calls the _Hyperbolic Growth Hypothesis_ (HGH). This hypothesis posits that the growth _rate_ rises in tandem with the population size (intuitively, a higher population means more ideas for technological progress which means higher growth rates). This document explores the _empirical_ support for this hypothesis.\n\nI’ll skip the messy empirical details and jump straight to the conclusion: while the author agrees that growth rates of total GDP (rather than GDP per capita) have been increasing since the Industrial Revolution up till a few decades ago, he does not see much support for the HGH prior to the modern era. The data seems very noisy and hard to interpret, and even when using this noisy data it seems that models with constant growth rates fit the pre-modern era better than hyperbolic models. Thus, we should be uncertain between the HGH and the hypothesis that the industrial revolution triggered a one-off transition to increasing growth rates that have now stabilized."], "venue": "EA Forum", "opinion": "I’m glad to know that the empirical support for the HGH seems mostly limited to the modern era, and may be weakly disconfirmed by data from the pre-modern era. I’m not entirely sure how I should update -- it seems that both hypotheses would be consistent with future accelerating growth, though HGH predicts it more strongly. It also seems plausible to me that we should still assign more credence to HGH because of its theoretical support and relative simplicity -- it doesn’t seem like there is strong evidence suggesting that HGH is false, just that the empirical evidence for it is weaker than we might have thought. See also [Paul Christiano’s response](https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity?commentId=j9BymthAthZQ6dnGp).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #116", "newsletter_category": "Forecasting"}
{"id": "3da620c1272a13fe9fa96f0ef3080017", "title": "Amplified forecasting: What will Buck's informed prediction of compute used in the largest ML training run before 2030 be?", "url": "https://www.metaculus.com/questions/4732/amplified-forecasting-what-will-bucks-informed-prediction-of-compute-used-in-the-largest-ml-training-run-before-2030-be/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Ought"], "summaries": ["[Ought](https://ought.org/) has recently run experiments on how to amplify expert reasoning, to produce better answers than a time-limited expert could produce themselves. This experiment centers on the question of how much compute will be used in the largest ML training run before 2030. Rather than predict the actual answer, participants provided evidence and predicted what Buck’s posterior would be after reading through the comments and evidence.\n\nBuck’s quick [prior](https://elicit.ought.org/builder/aFElCFp8E) was an extrapolation of the trend identified in <@AI and Compute@>, and suggested a median of around 10^13 petaflop/s-days. Commenters pointed out that the existing trend relied on a huge growth rate in the amount of money spent on compute, that seemed to lead to implausible amounts of money by 2030 (a point previously made <@here@>(@Interpreting AI Compute Trends@)). Buck’s updated [posterior](https://elicit.ought.org/builder/2yV4pA-Wc) has a median of around 10^9 petaflop/s-days, with a mode of around 10^8 petaflop/s-days (estimated to be 3,600 times larger than AlphaStar)."], "venue": "Metaculus", "opinion": "The updated posterior seems roughly right to me -- looking at the reasoning of the prize-winning comment, it seems like a $1 trillion training run in 2030 would be about 10^11 petaflop/s-days, which seems like the far end of the spectrum. The posterior assigns about 20% to it being even larger than this, which seems too high to me, but the numbers above do assume a “business-as-usual” world, and if you assign a significant probability to getting AGI before 2030, then you probably should have a non-trivial probability assigned to extreme outcomes.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #111", "newsletter_category": "Forecasting"}
{"id": "d30ce6985e3a83ada82980f226e9670e", "title": "Addendum to AI and Compute", "url": "https://openai.com/blog/ai-and-compute/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Girish Sastry", "Jack Clark", "Greg Brockman", "Ilya Sutskever"], "summaries": ["Last year, OpenAI <@wrote@>(@AI and Compute@) that since 2012, the amount of compute used in the largest-scale experiments has been doubling every 3.5 months. This addendum to that post analyzes data from 1959-2012, and finds that during that period the trend was a 2-year doubling time, approximately in line with Moore's Law, and not demonstrating any impact of previous \"AI winters\"."], "venue": "OpenAI Blog", "opinion": "Note that the post is measuring compute used to _train_ models, which was less important in past AI research (e.g. it doesn't include Deep Blue), so it's not too surprising that we don't see the impact of AI winters.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #73", "newsletter_category": "Forecasting"}
{"id": "6b86eff0b088efe02a2da31dd793f203", "title": "Two explanations for variation in human abilities", "url": "https://www.lesswrong.com/posts/ZwSrTsP3YkgnmHWnJ/two-explanations-for-variability-in-human-abilities", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Matthew Barnett"], "summaries": ["How quickly might AI exceed human capabilities? One piece of evidence is the variation of intelligence within humans: if there isn’t much variation, we might expect AI not to stay at human level intelligence for long. It has been argued that variation in human cognitive abilities is small compared to such variation for arbitrary agents. However, the variation of human ability in games like chess seems to be quite pronounced, and it took chess computers more than forty years to transition from beginner level to beating the best humans. The blog post presents two arguments to reconcile these perspectives:\n\nFirst, **similar minds could have large variation in learning ability**: If we break a random part of a complex machine, it might perform worse or stop working altogether, even if the broken machine is very similar to the unbroken one. Variation in human learning ability might be mostly explainable by lots of small \"broken parts\" like harmful mutations. \n\nSecond, **small variation in learning ability** can be consistent with **large variation in competence**, if the latter is explained by variation in another factor like practice time. For example, a chess match is not very useful to determine who's smarter, if one of the players has played a lot more games than the other. This perspective also reframes AlphaGo's superhumanity: the version that beat Lee Sedol had played around 2000 times as many games as him."], "venue": "LessWrong", "opinion": "I liked this post and am glad it highlighted the distinction between learning ability and competence that seems to often be ignored in debates about AI progress. I would be excited to see some further exploration of the \"broken parts\" model and its implication about differing variances in cognitive abilities between humans and arbitrary intelligences.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #72", "newsletter_category": "Forecasting"}
{"id": "932c390187e156795045977083d1fd0a", "title": "Thoughts on short timelines", "url": "http://s-risks.org/thoughts-on-short-timelines/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tobias Baumann"], "summaries": ["This post argues that the probability of AGI in the next ten years is very low, perhaps 1-2%. The primary argument is that to get AGI that quickly, we would need to be seeing research breakthroughs frequently, and empirically this is not the case. This might not be true if we expect that progress will accelerate in the future, but there's no reason to expect this -- we won't get recursive self-improvement before AGI and there won't be a huge increase in resources devoted to AI (since there is already so much excitement). We might also say that we are so clueless that we should assign at least 10% to AGI in ten years, but it doesn't seem we are that ignorant, and in any case it's not obvious that a prior should assign 10% to this outcome. Expert surveys estimate non-negligible probability on AGI in ten years, but in practice it seems the predominant opinion is to confidently dismiss a short timelines scenario."], "venue": "S-risks website", "opinion": "I do think that the probability of AGI in ten years is larger than 1-2%. I suspect my main disagreement is with the conception of what counts as groundbreaking progress. Tobias gives the example of transfer from one board game to many other board games; I think that AGI wouldn't be able to solve this problem from scratch, and humans are only capable of this because of [good priors](https://rach0012.github.io/humanRL_website/) from all the other learning we've done throughout life, _especially_ since games are designed to be human-understandable. If you make a sufficiently large neural net and give it a complex enough environment, some simple unsupervised learning rewards, and the opportunity to collect as much data as a human gets throughout life, maybe that does result in AGI. (I'd guess not, because it does seem like we have some good priors from birth, but I'm not very confident in that.)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #30", "newsletter_category": "Forecasting"}
{"id": "4311a88737138ee0debf6ab948d4d33f", "title": "What 2026 looks like", "url": "https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like-daniel-s-median-future", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Daniel Kokotajlo"], "summaries": ["This post describes the author’s median expectations around AI from now until 2026. It is part I of an attempt to write a detailed plausible future trajectory in chronological order, i.e. incrementally adding years to the story rather than writing a story with the end in mind. The hope is to produce a nice complement to the more abstract discussions about timelines and takeoff that usually occur. For example, there are discussions about how AI tools are used by nations for persuasion, propaganda and censorship."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #160", "newsletter_category": "Forecasting"}
{"id": "d3c771fb8979db775ec2f8da1764409f", "title": "Deep limitations? Examining expert disagreement over deep learning", "url": "https://link.springer.com/article/10.1007/s13748-021-00239-1", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Carla Zoe Cremer"], "summaries": ["This paper reports on the results of a qualitative survey of 25 experts, conducted in 2019 and early 2020, on the possibility of deep learning leading to high-level machine intelligence (HLMI), defined here as an “algorithmic system that performs like average adults on cognitive tests that evaluate the cognitive abilities required to perform economically relevant tasks”. Experts disagreed strongly on whether deep learning could lead to HLMI. Optimists tended to focus on the importance of scale, while pessimists tended to emphasize the need for additional insights.\n\nBased on the interviews, the paper gives a list of 40 limitations of deep learning that some expert pointed to, and a more specific list of five areas that both optimists and pessimists pointed to as in support of their views (and thus would likely be promising areas to resolve disagreements). The five areas are (1) abstraction; (2) generalization; (3) explanatory, causal models; (4) emergence of planning; and (5) intervention."], "venue": "Progress in Artificial Intelligence", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #154", "newsletter_category": "Forecasting"}
{"id": "95079da4a32f0a4d9eaceca9b70da3d3", "title": "Three reasons to expect long AI timelines", "url": "https://www.alignmentforum.org/posts/Z5gPrKTR2oDmm6fqJ/three-reasons-to-expect-long-ai-timelines", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Matthew Barnett"], "summaries": ["This post outlines and argues for three reasons to expect long AI timelines that the author expects are not taken into account in current forecasting efforts:\n\n1. **Technological deployment lag:** Most technologies take decades between when they're first developed and when they become widely impactful.\n2. **Overestimating the generality of AI technology:** Just as people in the 1950s and 1960s overestimated the impact of solving chess, it seems likely that current people are overestimating the impact of recent progress, and how far it can scale in the future.\n3. **Regulation will slow things down,** as with [nuclear energy](https://rootsofprogress.org/devanney-on-the-nuclear-flop), for example.\n\nYou might argue that the first and third points don’t matter, since what we care about is when AGI is _developed_, as opposed to when it becomes widely deployed. However, it seems that we continue to have the opportunity to intervene until the technology becomes widely impactful, and that seems to be the relevant quantity for decision-making. You could have some specific argument like “the AI goes FOOM and very quickly achieves all of its goals” that then implies that the development time is the right thing to forecast, but none of these seem all that obvious."], "venue": "Alignment Forum", "opinion": "I broadly agree that (1) and (3) don’t seem to be discussed much during forecasting, despite being quite important. (Though see e.g. [value of the long tail](https://www.lesswrong.com/posts/Nbcs5Fe2cxQuzje4K/value-of-the-long-tail).) I disagree with (2): while it is obviously possible that people are overestimating recent progress, or are overconfident about how useful scaling will be, there has at least been a lot of thought put into that particular question -- it seems like one of the central questions tackled by <@bio anchors@>(@Draft report on AI timelines@). See more discussion in this [comment thread](https://www.alignmentforum.org/posts/Z5gPrKTR2oDmm6fqJ/three-reasons-to-expect-long-ai-timelines?commentId=F7FNee8Bpa8hemQkd).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #148", "newsletter_category": "Forecasting"}
{"id": "ca799d672b8dc277a771a47900ed72f2", "title": "2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy", "url": "http://gcrinstitute.org/2020-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["McKenna Fitzgerald", "Aaron Boddy", "Seth D. Baum"], "summaries": ["This is a survey of AGI research and development (R&D) projects, based on public information like publications and websites. The survey finds 72 such projects active in 2020 compared to 70 projects active in 2017. This corresponds to 15 new projects and 13 projects that shut down since 2017. Almost half of the projects are US-based (and this is fewer than in 2017!), and most of the rest is based in US-allied countries. Around half of the projects publish open-source code. Many projects are interconnected via shared personnel or joint projects and only a few have identifiable military connections (fewer than in 2017). All of these factors might facilitate cooperation around safety. \n\nThe projects form three major clusters: 1) corporate projects active on AGI safety 2) academic projects not active on AGI safety and 3) small corporations not active on AGI safety. Most of the projects are rather small and project size varies a lot, with the largest projects having more than 100 times as many employees as the smallest ones. While the share of projects with a humanitarian focus has increased to more than half, only a small but growing number is active on safety. Compared to 2017, the share of corporate projects has increased, and there are fewer academic projects. While academic projects are more likely to focus on knowledge expansion rather than humanitarian goals, corporate projects seem more likely to prioritize profit over public interest and safety. Consequently, corporate governance might be especially important."], "venue": "GCRI Website", "opinion": "These kinds of surveys seem important to conduct, even if they don't always deliver very surprising results. That said, I was surprised by the large amount of small AGI projects (for which I expect the chances of success to be tiny) and the overall small amount of Chinese AGI projects. ", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #138", "newsletter_category": "Forecasting"}
{"id": "6d40bb9786d31e9b460d4b13b8fcd5ff", "title": "How The Hell Do We Create General-Purpose Robots?", "url": "https://howthehell.substack.com/p/general-purpose-robots", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Sergey Alexashenko"], "summaries": ["A **general-purpose robot** (GPR) is one that can execute simple commands like “unload the dishwasher” or “paint the wall”. This post outlines an approach to get to such robots, and estimates how much it would cost to get there.\n\nOn the hardware side, we need to have hardware for the body, sensors, and brain. The body is ready; the Spot robot from Boston Dynamics seems like a reasonable candidate. On sensors, we have vision, hearing and lidar covered; however, we don’t have great sensors for touch yet. That being said, it seems possible to get by with bad sensors for touch, and compensate with vision. Finally, for the brain, even if we can’t put enough chips on the robot itself, we can use more compute via the cloud.\n\nFor software, in principle a large enough neural network should suffice; all of the skills involved in GPRs have already been demonstrated by neural nets, just not as well as would be necessary. (In particular, we don’t need to posit AGI.) The big issue is that we don’t know how to train such a network. (We can’t train in the real world, as that is way too slow.)\n\nWith a big enough investment, it seems plausible that we could build a simulator in which the robot could learn. The simulator would have to be physically realistic and diverse, which is quite a challenge. But we don’t have to write down physically accurate models of all objects: instead, we can _virtualize_ objects. Specifically, we interact with an object for a couple of minutes, and then use the resulting data to build a model of the object in our simulation. (You could imagine an AlphaFold-like system that does this very well.)\n\nThe author then runs some Fermi estimates and concludes that it might cost around $42 billion for the R&D in such a project (though it may not succeed), and concludes that this would clearly be worth it given the huge economic benefits."], "venue": "How The Hell Newsletter", "opinion": "This outline seems pretty reasonable to me. There are a lot of specific points to nitpick with; for example, I am not convinced that we can just use cloud compute. It seems plausible that manipulation tasks require quick, iterative feedback, where the latency of cloud compute would be unacceptable. (Indeed, the quick, iterative feedback of touch is exactly why it is such a valuable sensor.) Nonetheless, I broadly like the outlined plan and it feels like these sorts of nitpicks are things that we will be able to solve as we work on the problem.\n\nI am more skeptical of the cost estimate, which seems pretty optimistic to me. The author basically took existing numbers and then multiplied them by some factor for the increased hardness; I think that those factors are too low (for the AI aspects, idk about the robot hardware aspects), and I think that there are probably lots of other significant “invisible” costs that aren’t being counted here.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #138", "newsletter_category": "Forecasting"}
{"id": "0ef1b2d77d1cd3dbee48876467a970d5", "title": "Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain", "url": "https://www.alignmentforum.org/posts/HhWhaSzQr6xmBki8F/birds-planes-brains-and-ai-against-appeals-to-the-complexity", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Daniel Kokotajlo"], "summaries": ["This post argues against a particular class of arguments about AI timelines. These arguments have the form: “The brain has property X, but we don’t know how to make AIs with property X. Since it took evolution a long time to make brains with property X, we should expect it will take us a long time as well”. The reason these are not compelling is because humans often use different approaches to solve problems than evolution did, and so humans might solve the overall problem without ever needing to have property X. To make these arguments more convincing, you need to argue 1) why property X really is _necessary_ and 2) why property X won’t follow quickly once everything else is in place.\n\nThis is illustrated with a hypothetical example of someone trying to predict when humans would achieve heavier-than-air flight: in practice, you could have made decent predictions just by looking at the power to weight ratios of engines vs. birds. Someone who argued that we were far away because “we don't even know how birds stay up for so long without flapping their wings” would have made incorrect predictions."], "venue": "Alignment Forum", "opinion": "This all seems generally right to me, and is part of the reason I like the <@biological anchors approach@>(@Draft report on AI timelines@) to forecasting transformative AI.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #134", "newsletter_category": "Forecasting"}
{"id": "9ec5dfc38b3ca844f48265cc91de422f", "title": "Automating reasoning about the future at Ought", "url": "https://ought.org/updates/2020-10-09-forecasting", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Ought"], "summaries": ["Roughly speaking, we can think of an axis of reasoning, spanning from high-confidence statistical reasoning with lots of data to general quantitative reasoning to very qualitative reasoning. Ought is now building Elicit to help with _judgmental forecasting_, which is near but not at the end of the spectrum. In judgmental forecasting, we might take a complicated question such as “Will Roe v. Wade be overturned if Trump nominates a new justice”, decompose it into subquestions, estimate probabilities for those subquestions, and then combine them to get to a final forecast. Crucially, this requires relying on people’s judgment: we cannot just look at the historical rate at which landmark Supreme Court decisions are overturned, since the situation has rarely arisen before.\n\nCurrently, Elicit has several features that help with the quantitative aspects of judgmental forecasting, for example by enabling users to input complex distributions and visualizing these distributions. However, in the long-term, the hope is to also help with the more qualitative aspects of judgmental forecasting as well, for example by proposing an important subquestion for answering the current question under consideration, or recommending a source of data that can answer the question at hand.\n\nOught is now working on adding these sorts of features by using large language models (GPT-3 in particular). They are currently [looking for beta testers](https://www.lesswrong.com/posts/WBgT4jAqTrPN7qh3Z/beta-test-gpt-3-based-research-assistant) for these features!"], "venue": "Ought Website", "opinion": "This seems like a pretty interesting direction, though I’m not totally clear on how it relates to AI alignment (assuming it is supposed to). It does seem quite related to iterated amplification, which also relies on this sort of decomposition of questions.\n\nI found the video mock ups and demos in the blog post to be particularly interesting, both in the original post and in the [call for beta testers](https://www.lesswrong.com/posts/WBgT4jAqTrPN7qh3Z/beta-test-gpt-3-based-research-assistant); I think they were much better at showcasing the potential value than the abstract description, and I recommend you all watch them.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #129", "newsletter_category": "Forecasting"}
{"id": "eed419750240df60c80209b6963539c1", "title": "AGI Predictions", "url": "https://www.lesswrong.com/posts/YMokuZdoY9tEDHjzv/agi-predictions", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Amanda Ngo", "Ben Pace"], "summaries": ["A collection of interesting questions relevant to AI safety, as well as aggregated predictions from readers of the post."], "venue": "LessWrong", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #128", "newsletter_category": "Forecasting"}
{"id": "e87a8f631a78ebe2032a37bcf8218864", "title": "Roadmap to a Roadmap: How Could We Tell When AGI is a ‘Manhattan Project’ Away? ", "url": "http://dmip.webs.upv.es/EPAI2020/papers/EPAI_2020_paper_11.pdf?fbclid=IwAR15Z0CMX4rBBUJEHhn6NdcMK2ZCF07pPpkcmfD36_oEI9WhV310bRkbaiQ", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["John-Clark Levin", "Matthijs M. Maas"], "summaries": ["The key hypothesis of this paper is that once there is a clear “roadmap” or “runway” to AGI, it is likely that state actors could invest a large number of resources into achieving it, comparably to the Manhattan project. The fact that we do not see signs of such investment now does not imply that it won’t happen in the future: currently, there is so little “surface area” on the problem of AGI that throwing vast amounts of money at the problem is unlikely to help much.\n\nIf this were true, then once such a runway is visible, incentives could change quite sharply: in particular, the current norms of openness may quickly change to norms of secrecy, as nations compete (or perceive themselves to be competing) with other nations to build AGI first. As a result, it would be good to have a good measure of whether we have reached the point where such a runway exists."], "venue": "EPAI 2020", "opinion": "", "highlight": false, "read_more": "Import AI summary", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #115", "newsletter_category": "Forecasting"}
{"id": "df60a5cfb3745a0fa5188f7543f30ab1", "title": "My AI Timelines Have Sped Up", "url": "https://www.alexirpan.com/2020/08/18/ai-timelines.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Alex Irpan"], "summaries": ["Alex Irpan updates his predictions of AGI sooner to:\n\n10% chance by 2035 (previously 2045)\n50% chance by 2045 (previously 2050)\n90% chance by 2070\n\nThe main reasons why are:\n\n- Alex is now more uncertain because research pace over the past five years have been more surprising than expected, faster in some domains, but slower than others. \n- Accounting for improvements in tooling. New libraries like TensorFlow and PyTorch have accelerated progress. Even CNNs can be used as a “tool” that provides features for downstream tasks like robotic control. \n- He previously thought that labeled data might be a bottleneck, based on scaling laws showing that data needs might increase faster than compute; however, semi- and unsupervised learning have improved significantly, GPT-3 being the latest example of this. \n- Alex now believes that compute will play a larger role and that compute can scale faster than algorithms because there is large worldwide consumer demand. \n\nThe post ends with a hypothetical description of how AGI may happen soon that I will leave out of the summary but recommend reading. "], "venue": "Sorta Insightful", "opinion": "My personal opinion on timelines is that I think it is much more informative to draw out the full CDF/PDF of when we will get to AGI instead of percentages by different years. It isn’t included in the post, but you can find Alex’s [here](https://elicit.ought.org/builder/BsNkKzJoc). I end up placing higher likelihood on AGI happening sooner than Alex does, but I largely agree with his reasoning. \n\nMore uncertainty than the original prediction seems warranted to me; the original prediction had a very high likelihood of AGI between 2045-2050 that I didn’t understand. Of the rest of the arguments, I agree most strongly with the section on tooling providing a speedup. I’d even push the point farther to say that there are many inputs into current ML systems, and all of them seem to be improving at a rapid clip. Hardware, software tools, data, and the number of ML researchers all seem to be on track to improve significantly over the next decade.", "highlight": false, "read_more": "", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #114", "newsletter_category": "Forecasting"}
{"id": "b8c49e3db78688bd7f985851ab97406f", "title": "AI Forecasting Dictionary", "url": "https://www.lesswrong.com/posts/8y7DcSF4eAkXoru4u/ai-forecasting-dictionary-forecasting-infrastructure-part-1-2", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jacob Lagerros and Ben Goldhaber"], "summaries": ["One big challenge with forecasting the future is operationalizing key terms unambiguously, so that a question can be resolved when the future actually arrives. Since we'll probably need to forecast many different questions, it's crucial that we make it as easy as possible to create and answer well-operationalized questions. To that end, the authors have created and open-sourced an AI Forecasting Dictionary, which gives precise meanings for important terms, along with examples and non-examples to clarify further."], "venue": "LessWrong", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #65", "newsletter_category": "Forecasting"}
{"id": "8324a3db6958258b609e172eaca15625", "title": "AI Forecasting Resolution Council ", "url": "https://www.lesswrong.com/posts/9G6CCNXkA7JZoorpY/ai-forecasting-resolution-council-forecasting-infrastructure", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jacob Lagerros and Ben Goldhaber"], "summaries": ["Even if you operationalize forecasting questions well, often the outcome is determined primarily by factors other than the one you are interested in. For example, progress on a benchmark might be determined more by the number of researchers who try to beat the benchmark than by improvements in AI capabilities, even though you were trying to measure the latter. To deal with this problem, an AI Forecasting Resolution Council has been set up: now, forecasters can predict what the resolution council will say at some particular time in the future. This allows for questions that get at what we want: in the previous case, we could now forecast how the resolution council will answer the question \"would current methods be able to beat this benchmark\" in 2021."], "venue": "LessWrong", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #65", "newsletter_category": "Forecasting"}
{"id": "dd12fb6dd6e2c78ea0a2a6b8336a878f", "title": "How to write good AI forecasting questions + Question Database ", "url": "https://www.lesswrong.com/posts/yy3FCmdAbgSLePD7H/how-to-write-good-ai-forecasting-questions-question-database", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jacob Lagerros and Ben Goldhaber"], "summaries": ["As discussed above, operationalization of forecasting questions is hard. This post collects some of the common failure modes, and introduces a database of 76 questions about AI progress that have detailed resolution criteria that will hopefully avoid any pitfalls of operationalization."], "venue": "LessWrong", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #65", "newsletter_category": "Forecasting"}
{"id": "c5e5fa579fdb9079c1fd9095c1e797fb", "title": "Why Is the Human Brain So Efficient?", "url": "http://nautil.us/issue/59/connections/why-is-the-human-brain-so-efficient", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Liqun Luo"], "summaries": ["Overall point for this audience is that, despite how slow and imprecise neuron signals are, the human brain beats computers because of how massively parallel it is."], "venue": "Nautilus", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #7", "newsletter_category": "Forecasting"}
{"id": "13142f5b0e37dffb0d147174f7ecb53b", "title": "Shaping Safer Goals", "url": "https://www.alignmentforum.org/s/boLPsyNwd6teK5key", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Richard Ngo"], "summaries": ["Much of safety research focuses on a single agent that is directly incentivized by a loss/reward function to take particular actions. This sequence instead considers safety in the case of multi-agent systems interacting in complex environments. In this situation, even simple reward functions can yield complex and highly intelligent behaviors that are only indirectly related. For example, evolution led to humans who can learn to play chess, despite the fact that the ancestral environment did not contain chess games. In these situations, the problem is not how to construct an aligned reward function, the problem is how to shape the experience that the agent gets at training time such that the final agent policy optimizes for the goals that we want. This sequence lays out some considerations and research directions for safety in such situations.\n\nOne approach is to teach agents the generalizable skill of obedience. To accomplish this, one could design the environment to incentivize specialization. For instance, if an agent A is more powerful than agent B, but can see less of the environment than B, A might be incentivized to obey B’s instructions if they share a goal. Similarly we can increase the ease and value of coordination through enabling access to a shared permanent record or designing tasks that require large-scale coordination.\n\nA second approach is to move agents to simpler and safer training regimes as they develop more intelligence. The key assumption here is that we may require complex regimes such as competitive multi-agent environments to jumpstart intelligent behavior, but may be able to continue training in a simpler regime such as single-task RL later. This is similar to current approaches for training a language model via supervised learning and then finetuning with RL, but going in the opposite direction to increase safety rather than capabilities.\n\nA third approach is specific to a collective AGI: an AGI that is composed of a number of separate general agents trained on different objectives that learn to cooperatively solve harder tasks. This is similar to how human civilization is able to accomplish much more than any individual human. In this regime, the AGI can be effectively sandboxed by either reducing the population size or by limiting communication channels between the agents. One advantage of this approach to sandboxing is that it allows us to change the effective intelligence of the system at test-time, without going through a potentially expensive retraining phase."], "venue": "Alignment Forum", "opinion": "I agree that we should put more emphasis on the safety of multi-agent systems. We already have <@evidence@>(@Emergent Tool Use from Multi-Agent Interaction@) that complex behavior can arise from simple objectives in current systems, and this seems only more likely as systems become more powerful. Two-agent paradigms such as GANs, self-play, and debate, are already quite common in ML. Lastly, humans evolved complex behavior from the simple process of evolution so we have at least one example of this working. 
I also think this is an interesting area where there is lots to learn from other fields, such as game theory and evolutionary biology.\n\nFor any empirically-minded readers of this newsletter, I think this sequence opens up a lot of potential for research. The development of safety benchmarks for multi-agent systems and then the evaluation of these approaches seems like it would make many of the considerations discussed here more concrete. I personally would find them much more convincing with empirical evidence to back up that they work with current ML. \n\n**Rohin's opinion:** The AGI model here, in which powerful AI systems arise through multiagent interaction, is an important and plausible one, and I'm excited to see some initial thoughts about it. I don't particularly expect any of these ideas to be substantially useful, but I'm also not confident that they won't be useful, and given the huge amount of uncertainty about how multiagent interaction shapes agents, that may be the best we can hope for currently. I'd be excited to see empirical results testing some of these ideas out, as well as more conceptual posts suggesting more ideas to try.", "highlight": true, "read_more": "", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #119", "newsletter_category": "Handling groups of agents"}
{"id": "8ca378a3f855d7efaac5eb46de1dabd3", "title": "TanksWorld: A Multi-Agent Environment for AI Safety Research", "url": "http://arxiv.org/abs/2002.11174", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Corban G. Rivera", "Olivia Lyons", "Arielle Summitt", "Ayman Fatima", "Ji Pak", "William Shao", "Robert Chalmers", "Aryeh Englander", "Edward W. Staley", "I-Jeng Wang", "Ashley J. Llorens"], "summaries": ["This paper presents TanksWorld, a simulation environment that attempts to illustrate three important aspects of real-world AI safety challenges: competing performance objectives, human-machine learning, and multi-agent competition. TanksWorld consists of two teams of N vs. N tanks. Tanks move and shoot while navigating in a closed arena with obstacles. Tanks are rewarded for killing opponent tanks and penalized for killing neutral and allied tanks according to a specified reward function. Each tank is controlled by either its own AI or a special policy meant to mimic a 'human' teammate. Each individual tank can only see a small portion of its environment, and must communicate with other teammates to gain more information. The following parameters can be varied to emphasize different research challenges:\n- The communication range between tanks -- meant to represent environmental uncertainty.\n- The number of neutral tanks and obstacles -- meant to represent the extent to which tanks must care about 'safety', i.e. avoid collateral damage.\n- The control policies of teammates -- meant to represent the variability of human-machine teams."], "venue": "arXiv", "opinion": "I am generally excited about more work on demonstrating safety challenges; I think it helps to seed and grow the field in concrete directions. I am particularly excited about the possibility for TanksWorld to demonstrate multi-agent safety problems with agents in direct competition. I feel unsure about whether TanksWorld will be a good demonstration of general problems with human-machine interaction-- intuitively, that seems to me like it would be very difficult to capture and require more complex real-world modeling.", "highlight": false, "read_more": "", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #91", "newsletter_category": "Handling groups of agents"}
{"id": "48c93fc18f1fa41d478f497e78419d9b", "title": "Collaborating with Humans Requires Understanding Them", "url": "https://bair.berkeley.edu/blog/2019/10/21/coordination/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Micah Carroll", "Rohin Shah", "Mark K. Ho", "Thomas L. Griffiths", "Sanjit A. Seshia", "Pieter Abbeel", "Anca Dragan"], "summaries": ["_Note: I am second author on this paper._ Self-play agents (like those used to play <@Dota@>(@OpenAI Five@) and <@Starcraft@>(@AlphaStar: Mastering the Real-Time Strategy Game StarCraft II@)) are very good at coordinating with _themselves_, but not with other agents. They \"expect\" their partners to be similar to them; they are unable to predict what human partners would do. In competitive games, this is fine: if the human deviates from optimal play, even if you don't predict it you will still beat them. (Another way of saying this: the minimax theorem guarantees a minimum reward _regardless_ of the opponent.) However, in cooperative settings, things are not so nice: a failure to anticipate your partner's plan can lead to arbitrarily bad outcomes. We demonstrate this with a simple environment that requires strong coordination based on the popular game Overcooked. We show that agents specifically trained to play alongside humans perform much better than self-play or population-based training when paired with humans, both in simulation and with a real user study."], "venue": "NeurIPS 2019", "opinion": "I wrote a short [blog post](https://www.alignmentforum.org/posts/dBMC63hjkc5wPqTC7/human-ai-collaboration) talking about the implications of the work. Briefly, there are three potential impacts. First, it seems generically useful to understand how to coordinate with an unknown agent. Second, it is specifically useful for scaling up <@assistance games@>(@Cooperative Inverse Reinforcement Learning@), which are intractable to solve optimally. Finally, it can lead to more ML researchers focusing on solving problems with real humans, which may lead to us finding and solving other problems that will need to be solved in order to build aligned AI systems.", "highlight": false, "read_more": "Paper: On the Utility of Learning about Humans for Human-AI Coordination", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #70", "newsletter_category": "Handling groups of agents"}
{"id": "32630b0b68d7f3f11928999f5507cb40", "title": "Social choice ethics in artificial intelligence", "url": "https://link.springer.com/content/pdf/10.1007/s00146-017-0760-1.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Seth D Baum"], "summaries": ["If we want to program ethics into an AI system, should we do so by aggregating the ethical views of existing humans? This is often justified on procedural grounds: “everyone gets to affect the outcome”, or by abstention: “AI designers don’t have to think about ethics; the AI will deal with that”. (There is also a wisdom of the crowds justification, though this presupposes that there is some notion of “better” ethics independent of humans; which is out of scope for the paper.)\n\nHowever, actually implementing an aggregative procedure requires three major design decisions: 1) _standing_, that is, whose views should be aggregated, 2) _measurement_, that is, how we determine what their ethical views are, and 3) _aggregation_, that is, how the views are put together into a whole. All of these are challenging.\n\nFor standing, we have to determine whom to include. Should we include children, psychopaths, non-human animals, ecosystems, future generations, and other AI systems? We must determine this ahead of time, since once we have decided on a social choice system, that system will then determine whose preferences are counted -- we can’t just modify it later.\n\nFor measurement, we have to back out human values somehow, which is quite a challenge given that humans have all sorts of cognitive biases and give different answers depending on the context. (See also <@ambitious value learning@>(@What is ambitious value learning?@) and subsequent posts in the sequence.)\n\nFor aggregation, the problems are well known and studied in the field of social choice theory. Some famous impossibility results include [Arrow’s theorem](https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem) and the [Gibbard-Satterthwaite theorem](https://en.wikipedia.org/wiki/Gibbard%E2%80%93Satterthwaite_theorem)."], "venue": "", "opinion": "I see this paper as a well-organized literature review of the many reasons why you _don’t_ want to handle AI alignment by finding the “true human utility function” or the “aggregated preferences of humanity” and then encoding them into the AI: there’s a myriad of challenges in even finding such an object. (A separate objection, out of scope for this paper, is that even if we did have such an object, we don’t know how to encode that goal into an AI system.)\n\nYou might then reasonably ask what we should be doing instead. I see the goal of AI _alignment_ as figuring out how, given a fuzzy but relatively well-specified task, to build an AI system that is reliably pursuing that task, in the way that we intended it to, but at a capability level beyond that of humans. This does not give you the ability to leave the future in the AI’s hands, but it would defuse the central (to me) argument for AI risk: that an AI system might be adversarially optimizing against you. (Though to be clear, there are still <@other risks@>(@The Main Sources of AI Risk?@) to consider.)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #131", "newsletter_category": "Handling groups of agents"}
{"id": "7590d040f640aa9fb6478f77a4d2fa50", "title": "AXRP 3: Negotiable Reinforcement Learning", "url": "https://axrp.net/episode/2020/12/11/episode-3-negotiable-reinforcement-learning-andrew-critch.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Daniel Filan and Andrew Critch"], "summaries": ["This podcast centers on <@negotiable RL@>(@Negotiable Reinforcement Learning for Pareto Optimal Sequential Decision-Making@), which studies how to aggregate preferences of multiple principals (humans) who have _different beliefs_. In the case where the principals have the same beliefs, Harsanyi’s utilitarianism theorem tells us that any reasonable method of aggregating preferences will end up optimizing some linear combination of the principals’ utility functions. In the case of differing beliefs, this paper proves that every Pareto optimal policy must be optimizing some linear combination of the principals’ utility functions, except that over time the weights are modified based on how well the principals’ beliefs model reality. Intuitively, the principals are both agreeing to the contract “the AI will optimize more for the person whose beliefs are more correct”; since each principal believes their own beliefs, they are both happy with this contract.\n\nMost of the podcast is about the motivation and reason for writing this paper. Critch envisions a world in which people and AI systems must cooperate rather than fight, and this paper can be thought of as a study in how people can maximize cooperation. Unfortunately, it turns out that the cooperation-maximizing approach ends up being _unfair_: people whose beliefs are incorrect end up getting penalized (in terms of actual outcomes, rather than what they believe will happen).\n\nMore broadly, Critch hopes that this will spur more research into how parties with different beliefs can share control of AI systems: this seems important for AI to go well in the future."], "venue": "AXRP Podcast", "opinion": "I really liked this podcast: I definitely hadn’t understood Critch’s full reasons for doing this work. I didn’t include all the points in the summary, so I recommend you listen to it in addition to this summary.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #130", "newsletter_category": "Handling groups of agents"}
{"id": "52b5535cfa22bf971b497714ea15eb92", "title": "What counts as defection?", "url": "https://www.alignmentforum.org/posts/8LEPDY36jBYpijrSw/formalizing-game-theoretic-defection", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Alex Turner"], "summaries": ["We often talk about cooperating and defecting in general-sum games. This post proposes that we say that a player P has defected against a coalition C (that includes P) currently playing a strategy S when P deviates from the strategy S in a way that increases his or her own personal utility, but decreases the (weighted) average utility of the coalition. It shows that this definition has several nice intuitive properties: it implies that defection cannot exist in common-payoff games, uniformly weighted constant-sum games, or arbitrary games with a Nash equilibrium strategy. A Pareto improvement can also never be defection. It then goes on to show the opportunity for defection can exist in the Prisoner’s dilemma, Stag hunt, and Chicken (whether it exists depends on the specific payoff matrices)."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #109", "newsletter_category": "Handling groups of agents"}
{"id": "b412839d34e15c68a6c6535bd9bef9dc", "title": "Negotiable Reinforcement Learning for Pareto Optimal Sequential Decision-Making", "url": "https://papers.nips.cc/paper/7721-negotiable-reinforcement-learning-for-pareto-optimal-sequential-decision-making.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Nishant Desai", "Andrew Critch", "and Stuart Russell"], "summaries": ["As AI systems are deployed in the real world, they will have to navigate value differences between different users. Regardless of our beliefs on how they should trade off between users, we at least want our AI systems to be Pareto optimal -- if there is some way to help one user without harming any other user, it should do it. A classic result from social choice theory, Harsanyi's theorem, states that a Pareto optimal agent must be optimizing a weighted linear combination of the users' utility functions. (The weights determine how to trade off between users.) However, this assumes that the users all have the same _beliefs_. When users have different beliefs, we can get even better outcomes if the users bet on their beliefs. If I'm sure that our robot will make a chocolate chip cookie, while you're sure its going to be a brownie, rather than splitting whatever the robot brings, we can agree that if it's a cookie I get it, whereas if it's a brownie you get it, and this makes both of us better off by our own beliefs. This paper shows that a Pareto optimal agent helping users with different beliefs must be optimizing a weighted linear combination of the user's utility functions, where the weights are proportional to how accurate the user's beliefs turn out to be. It's as if the users bet with each other about their beliefs, with the stakes being the assistance of the agent."], "venue": "NeurIPS 2018", "opinion": "In practice, I expect users are unlikely to want this, for similar reasons to why we don't bet on every belief difference we have with other people, even though it's net positive in expectation. But we do need to handle this somehow. An interesting scenario to consider is when a regular person has different beliefs from an expert. Does the regular person have to defer exactly to the judgments of the expert to avoid being exploited? What if there's a regular person and _two_ experts who disagree with each other?", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "Handling groups of agents"}
{"id": "0a333b2c4ec4b4b13732c947e6adc555", "title": "Reinforcement Learning under Threats", "url": "http://arxiv.org/abs/1809.01560", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Víctor Gallego", "Roi Naveiro", "David Ríos Insua"], "summaries": ["Due to lack of time, I only skimmed this paper for 5 minutes, but my general sense is that it takes MDPs and turns them into two player games by positing the presence of an adversary. It modifies the Bellman update equations to handle the adversary, but runs into the usual problems of simulating an adversary that simulates you. So, it formalizes level-k thinking (simulating an opponent that thinks about you at level k-1), and evaluates this on matrix games and the friend-or-foe environment from [AI safety gridworlds](https://deepmind.com/blog/specifying-ai-safety-problems/)."], "venue": "arXiv", "opinion": "I'm not sure what this is adding over two-player game theory (for which we can compute equilibria) but again I only skimmed the paper so it's quite likely that I missed something.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #23", "newsletter_category": "Handling groups of agents"}
{"id": "e1b50971e451c59507a0a5acc8394f86", "title": "Multi-Agent Generative Adversarial Imitation Learning", "url": "http://arxiv.org/abs/1807.09936", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Jiaming Song", "Hongyu Ren", "Dorsa Sadigh", "Stefano Ermon"], "summaries": ["This paper generalizes [GAIL](http://www.jonathanho.me/files/HoErmon_NIPS2016.pdf) (which was covered [last week](https://mailchi.mp/ad852629e45a/alignment-newsletter-17)) to the multiagent setting, where we want to imitate a group of interacting agents. They want to find a Nash equilibrium in particular. They formalize the Nash equilibrium constraints and use this to motivate a particular optimization problem for multiagent IRL, that looks very similar to their optimization problem for regular IRL in GAIL. After that, it is quite similar to GAIL -- they use a regularizer ψ for the reward functions, show that the composition of multiagent RL and multiagent IRL can be solved as a single optimization problem involving the convex conjugate of ψ, and propose a particular instantiation of ψ that is data-dependent, giving an algorithm. They do have to assume in the theory that the multiagent RL problem has a unique solution, which is not typically true, but may not be too important. As before, to make the algorithm practical, they structure it like a GAN, with discriminators acting like reward functions. What if we have prior information that the game is cooperative or competitive? In this case, they propose changing the regularizer ψ, making it keep all the reward functions the same (if cooperative), making them negations of each other (in two-player zero-sum games), or leaving it as is. They evaluate in a variety of simple multiagent games, as well as a plank environment in which the environment changes between training and test time, thus requiring the agent to learn a robust policy, and find that the correct variant of MAGAIL (cooperative/competitive/neither) outperforms both behavioral cloning and single-agent GAIL (which they run N times to infer a separate reward for each agent)."], "venue": "arXiv", "opinion": "Multiagent settings seem very important (since there does happen to be more than one human in the world). This looks like a useful generalization from the single agent case to the multiagent case, though it's not clear to me that this deals with the major challenges that come from multiagent scenarios. One major challenge is that there is no longer a single optimal equilibrium when there are multiple agents, but they simply assume in their theoretical analysis that there is only one solution. Another one is that it seems more important that the policies take history into account somehow, but they don't do this. (If you don't take history into account, then you can't learn strategies like tit-for-tat in the iterated prisoner's dilemma.) But to be clear I think this is the standard setup for multiagent RL -- it seems like field is not trying to deal with this issue yet (even though they could using eg. a recurrent policy, I think?)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #18", "newsletter_category": "Handling groups of agents"}
{"id": "430cfb3d9aa12d1a5427e37b450d1b2f", "title": "Modeling Friends and Foes", "url": "http://arxiv.org/abs/1807.00196", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Pedro A. Ortega", "Shane Legg"], "summaries": ["Multiagent scenarios are typically modeled using game theory. However, it is hard to capture the intuitive notions of \"adversarial\", \"neutral\" and \"friendly\" agents using standard game theory terminology. The authors propose that we model the agent and environment as having some prior mixed strategy, and then allow them to \"react\" by changing the strategies to get a posterior strategy, but with a term in the objective function for the change (as measured by the KL divergence). The sign of the environment's KL divergence term determines whether it is friendly or adversarial, and the magnitude determines the magnitude of friendliness or adversarialness. They show that there are always equilibria, and give an algorithm to compute them. They then show some experiments demonstrating that the notions of \"friendly\" and \"adversarial\" they develop actually do lead to behavior that we would intuitively call friendly or adversarial.\n\nSome notes to understand the paper: while normally we think of multiagent games as consisting of a set of agents, in this paper there is an agent that acts, and an environment in which it acts (which can contain other agents). The objective function is neither minimized nor maximized -- the sign of the environment's KL divergence changes whether the stationary points are maxima or minima (which is why it can model both friendly and adversarial environments). There is only one utility function, the agent's utility function -- the environment is only modeled as responding to the agent, rather than having its own utility function."], "venue": "arXiv", "opinion": "This is an interesting formalization of friendly and adversarial behavior. It feels somewhat weird to model the environment as having a prior strategy that it can then update. This has the implication that a \"somewhat friendly\" environment is unable to change its strategy to help the agent, even though it would \"want\" to, whereas when I think of a \"somewhat friendly\" environment, I think of a group of agents that share some of your goals but not all of them, so a limited amount of cooperation is possible. These feel quite different.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #14", "newsletter_category": "Handling groups of agents"}
{"id": "02e1300f7800a5fd5a00fa3c591889ce", "title": "Multi-agent Social Reinforcement Learning Improves Generalization", "url": "http://arxiv.org/abs/2010.00581", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Kamal Ndousse", "Douglas Eck", "Sergey Levine", "Natasha Jaques"], "summaries": ["We’ve previously seen that in sparse reward settings where exploration is hard, it’s very useful to have expert demonstrations to avoid having to do all the exploration yourself (<@1@>(@Learning Montezuma’s Revenge from a Single Demonstration@), <@2@>(@Making Efficient Use of Demonstrations to Solve Hard Exploration Problems@), <@3@>(@Playing hard exploration games by watching YouTube@)). However, this assumes that the demonstrator is “external” to the environment, whereas really we’d like to model them as part of the environment, as in <@assistance games@>(@Cooperative Inverse Reinforcement Learning@). This then looks like _social learning_, in which agents learn how to perform tasks by looking at cues from other agents within the environment.\n\nBut how can we do this in high-dimensional environments? This paper looks at one approach: adding an auxiliary loss in which the agent must predict the next state of the environment. Since the environment itself contains experts that do useful things, the agent implicitly must learn what those experts are doing and what effects their actions have.\n\nThey find that such agents learn to follow the cues of the experts and thus achieve significantly improved reward relative to agents that are trained in isolation. In fact, these agents can be transferred to novel environments, where they continue to follow expert cues to achieve high reward. However, this means that they don’t learn how to act when experts aren’t present, and so fail in the solo setting. This can be fixed by training on a mixture of solo settings and settings with experts present."], "venue": "arXiv", "opinion": "I’m a big fan of moving towards modeling humans as part of the environment, since we will eventually have AI systems working with and interacting with humans -- they won’t be “external to the AI’s universe” as it is often modeled currently.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #123", "newsletter_category": "Handling groups of agents"}
{"id": "d9856847985c051fd46b386a87f783f2", "title": "Theory of Minds: Understanding Behavior in Groups Through Inverse Planning", "url": "http://arxiv.org/abs/1901.06085", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Michael Shum", "Max Kleiman-Weiner", "Michael L. Littman", "Joshua B. Tenenbaum"], "summaries": ["This paper introduces Composable Team Hierarchies (CTH), a representation designed for reasoning about how agents reason about each other in collaborative and competitive environments. CTH uses two \"planning operators\": the Best Response operator returns the best policy in a single-agent game, and the Joint Planning operator returns the best team policy when all agents are cooperating. Competitive policies can then be derived via recursive application of those operations to subsets of agents (while holding the policies of other agents fixed). CTH draws from ideas in level-K planning (in which each agent assumes all other agents are at level K-1) and cooperative planning, but is more powerful than either approach.\n\nThe authors experiment with using CTH to probabilistically infer policies and future actions of agents participating in the stag-hunt task; they find that these judgements correlate well with human data."], "venue": "AAAI 2019", "opinion": "This is a cool theoretical framework. Its relevance depends on how likely you think it is that social cognition will be a core component of AGI, as opposed to just another task to be solved using general-purpose reasoning. I imagine that most AI safety researchers lean towards the latter, but there are some reasons to give credence to the former.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #43", "newsletter_category": "Handling groups of agents"}
{"id": "2e3fc245ff64f4c723a31dfd98b963f6", "title": "Multi-Agent Overoptimization, and Embedded Agent World Models", "url": "https://www.alignmentforum.org/posts/dfZLLEfFvkrMwmiMw/multi-agent-overoptimization-and-embedded-agent-world-models", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["David Manheim"], "summaries": ["This post and the associated [paper](https://arxiv.org/abs/1810.10862) argue for the complexity of multiagent settings, where you must build a model of how other agents act, even though they have models of how you act. While game theory already deals with this setting, it only does so by assuming that the agents are perfectly rational, an assumption that doesn't hold in practice and doesn't grapple with the fact that your model of the opponent cannot be perfect. The paper lists a few failure modes. Accidental steering happens when one agent takes action without the knowledge of what other agents are doing. Coordination failures are exactly what they sound like. Adversarial misalignment happens when one agent chooses actions to mislead a victim agent into taking actions that benefit the first agent. Input spoofing and filtering happen when one agent doctors the training data for a victim agent. Goal co-option occurs when one agent takes control over the other agent (possibly by modifying their reward function)."], "venue": "Alignment Forum", "opinion": "It's great to see work on the multiagent setting! This setting does seem quite a bit more complex, and hasn't been explored very much from the AI safety standpoint. One major question I have is how this relates to the work already done in academia for different settings (typically groups of humans instead of AI agents). Quick takes on how each failure mode is related to existing academic work: Accidental steering is novel to me (but I wouldn't be surprised if there has been work on it), coordination failures seem like a particular kind of (large scale) prisoner's dilemma, adversarial misalignment is a special case of the principal-agent problem, input spoofing and filtering and goal co-option seem like special cases of adversarial misalignment (and are related to ML security as the paper points out).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #32", "newsletter_category": "Handling groups of agents"}
{"id": "941e18ae2b5c3a45eb38b58ceb749c0c", "title": "Reinforcement Learning with Competitive Ensembles of Information-Constrained Primitives", "url": "http://arxiv.org/abs/1906.10667", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Anirudh Goyal", "Shagun Sodhani", "Jonathan Binas", "Xue Bin Peng", "Sergey Levine", "Yoshua Bengio"], "summaries": ["Learning policies that generalize to new environments is a fundamental challenge in reinforcement learning. In particular, humans seem to be adept at learning skills and understanding the world in a way that is compositional, hinting at the source of the discrepancy. Hierarchical reinforcement learning (HRL) has partially addressed the discrepancy by decomposing policies into options/primitives/subpolicies that a top-level controller selects from. However, generalization is limited because the top-level policy must work for all states.\n\n**In this paper, the authors explore a novel decentralized approach where policies are still decomposed into primitives, but without a top-level controller.** The key idea is to incentivize each primitive to work on a different cluster of states. Every primitive has a variational information bottleneck between the state and predicted action, that allows us to quantify how much information about the state the primitive uses in selecting actions. Intuitively, a primitive that knows how to open gates is going to extract a lot of information about gates from the state to choose an appropriate action, and won’t extract much information in states without gates. So, our high-level controller can just be: check which primitive is using the most state information, and let that primitive choose the action.\n\nThe reward R from a trajectory is split amongst the primitives in proportion to how likely each primitive was to be chosen. This is what incentivizes the primitives to use information from the state. The primitives also get a cost in proportion to how much information they use, incentivizing them to specialize to a particular cluster of states. Finally, there is a regularization term that also incentivizes specialization, and in particular prevents a collapse where a single primitive is always active.\n\nTo demonstrate effectiveness, the authors compare the baseline HRL methods option-critic and [Meta-learning Shared Hierarchy](https://blog.openai.com/learning-a-hierarchy/) to their method in grid-world and motion imitation transfer tasks. They show that using an ensemble of primitives can outperform more traditional HRL methods in generalization across tasks."], "venue": "arXiv", "opinion": "**Overall, this paper is compelling because the method presented is both promising and provides natural ideas for future work.** The method presented here is arguably simpler than HRL and the ability to generalize to new environments is simple to implement. The idea of introducing competition at an information theoretic level seems natural and the evidence for better generalization capability is compelling. It'd be interesting to see what would happen if more complex primitives were used.", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #66", "newsletter_category": "Hierarchical RL"}
{"id": "ceaafb32872d61dda6b8a521e2b535c9", "title": "Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions", "url": "https://bair.berkeley.edu/blog/2020/07/11/auction/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Michael Chang", "Sidhant Kaushik", "S. Matthew Weinberg", "Thomas L. Griffiths", "Sergey Levine"], "summaries": ["Increasing the scalability of learning systems is a central challenge to machine learning. One framework is to organize RL agents as ‘super’ agents, large collections of simpler agents that each make decisions according to their own incentives. If it were possible to get the incentives correct, the dominant equilibria would be identical to the optimal solution for the original RL problem.\n\nIn this paper, the authors introduce a framework for decentralizing decision-making by appealing to auction theory. There is a separate simple agent for each action. At every a timestep, a Vickrey auction is run in which each agent can bid for the superagent executing their particular action. The trick is that when an agent successfully wins a bid and acts on a state, it then ‘owns’ the produced next state, and ‘earns’ the result of the auction in the next round. (At the end of an episode, the owner of the state earns the reward of the trajectory.) Intuitively, the agent wants to bid on states in which it can make progress towards earning the final reward, as those will be states that other agents want to buy. The authors show that this scheme incentivizes each agent to bid the Q-value of their action in the given state, which would then lead to an optimal policy.\n\nThe authors test out this approach with some simple MDPs. They also investigate a task where they try to get the agents to rotate MNIST images so that a classifier will recognize them. Finally, they investigate task transfer by training agents on simple sub-tasks and then reusing those agents to learn a related task making use of both sub-tasks."], "venue": "ICML 2020", "opinion": "Imagine [Twitch plays](https://www.twitch.tv/directory/game/Twitch%20Plays), but you use a reputation to buy and sell your actions. The actual idea in the paper is slightly more mundane than this because the primitives are bidders. <@Reinforcement Learning with Competitive Ensembles of Information-Constrained Primitives@> is a similar piece of work that also uses primitives as the basic level of selection. However, their incentive mechanism is different: agents pay according to how much information from the environment they use and then get a reward back for their actions. However, there’s good reason to think options could work as well since in both of these papers there’s evidence that primitives that learn sub-tasks are useful in new tasks.", "highlight": false, "read_more": "Paper: Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #116", "newsletter_category": "Hierarchical RL"}
{"id": "b8e5b88e29152d47ad3ecaab5f362a53", "title": "DADS: Unsupervised Reinforcement Learning for Skill Discovery", "url": "https://ai.googleblog.com/2020/05/dads-unsupervised-reinforcement.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Archit Sharma", "Shixiang Gu", "Sergey Levine", "Vikash Kumar", "Karol Hausman"], "summaries": ["Reinforcement learning in robotics typically plans directly on low-level actions. However, it sure seems like there are a simple set of _primitives_ like walking, running, shuffling, etc. that are inherent to the robot morphology. What if we could learn these primitives, and then plan using those primitives? This paper introduces a method for learning these primitives _without a reward function_. They simply optimize skills for _predictability_ and _diversity_ (by optimizing the mutual information between the current state and next state, conditioned on which skill is being executed).\n\nThey can then use these primitives for _model-based planning_ for a downstream task. You can think of this as a regular RL problem, except that an action in their \"action space\" takes the form \"execute skill X for T timesteps\". They use _model-predictive control_ (MPC), in which you sample a bunch of trajectories, and execute the first action of the trajectory that gets the highest reward. Since each of their high-level actions determines the policy for T timesteps, they can scale to much longer horizon tasks than MPC can usually be used for. They show that this approach is competitive with regular model-based RL."], "venue": "arXiv", "opinion": "I think unsupervised learning is likely to be key in getting more powerful and general AI systems without requiring a truly staggering amount of expert data, and this is a great example of what that might look like. Note though that the learned primitives are certainly not what you'd expect of a human: for example, the humanoid learns to vaguely shuffle in a direction, rather than walking. In addition, they did require specifying an \"x-y prior\" that required skills to be diverse based on x-y coordinates, which is why the skills learned navigation primitives, as opposed to e.g. distinct types of flailing.", "highlight": false, "read_more": "Paper: Dynamics-Aware Unsupervised Discovery of Skills", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #102", "newsletter_category": "Hierarchical RL"}
{"id": "0df8f9bc3d13e85e39e32b030b4b7155", "title": "Exploring Neural Networks with Activation Atlases", "url": "https://distill.pub/2019/activation-atlas/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Shan Carter", "Zan Armstrong", "Ludwig Schubert", "Ian Johnson", "and Chris Olah"], "summaries": ["Previous work by this group of people includes [The Building Blocks of Interpretability](https://distill.pub/2018/building-blocks/) and [Feature Visualization](https://distill.pub/2017/feature-visualization/), both of which apparently came out before this newsletter started so I don't have a summary to point to. Those were primarily about understanding what individual neurons in an image classifer were responding to, and the key idea was to \"name\" each neuron with the input that would maximally activate that neuron. This can give you a global view of what the network is doing.\n\nHowever, such a global view makes it hard to understand the interaction between neurons. To understand these, we can look at a specific input image, and use techniques like attribution. Rather than attribute final classifications to the input, you could attribute classifications to neurons in the network, and then since individual neurons now had meanings (roughly: \"fuzzy texture neuron\", \"tennis ball neuron\", etc) you can gain insight to how the network is making decisions _for that specific input_.\n\nHowever, ideally we would like to see how the network uses interactions between neurons to make decisions in general; not on a single image. This motivates activation atlases, which analyze the activations of a network on a _large dataset_ of inputs. In particular, for each of a million images, they randomly choose a non-border patch from the image, and compute the activation vector at a particular layer of the network at that patch. This gives a dataset of a million activation vectors. They use standard dimensionality reduction techniques to map each activation vector into an (x, y) point on the 2D plane. They divide the 2D plane into a reasonably sized grid (e.g. 50x50), and for each grid cell they compute the average of all the activation vectors in the cell, visualize that activation vector using feature visualization, and put the resulting image into the grid cell. This gives a 50x50 grid of the \"concepts\" that the particular neural network layer we are analyzing can reason about. They also use attribution to show, for each grid cell, which class that grid cell most supports.\n\nThe paper then goes into a lot of detail about what we can infer from the activation atlas. For example, we can see that paths in activation vector space can correspond to human-interpretable concepts like the number of objects in an image, or moving from water to beaches to rocky cliffs. If we look at activation atlases for different layers, we can see that the later layers seem to get much more specific and complex, and formed of combinations of previous features (e.g. combining sand and water features to get a single sandbar feature).\n\nBy looking at images for specific classes, we can use attribution to see which parts of an activation atlas are most relevant for the class. By comparing across classes, we can see how the network makes decisions. For example, for fireboats vs. streetcars, the network looks for windows for both, crane-like structures for both (though less than windows), and water for fireboats vs. buildings for streetcars. 
This sort of analysis can also help us find mistakes in reasoning -- e.g. looking at the difference between grey whales and great white sharks, we can see that the network looks for the teeth and mouth of a great white shark, including an activation that looks suspiciously like a baseball. In fact, if you take a grey whale and put a patch of a baseball in the top left corner, this becomes an adversarial example that fools the network into thinking the grey whale is a great white shark. They run a bunch of experiments with these human-found adversarial examples and find they are quite effective."], "venue": "Distill", "opinion": "While the authors present this as a method for understanding how neurons interact, it seems to me that the key insight is about looking at and explaining the behavior of the neural network _on data points in-distribution_. Most possible inputs are off-distribution, and there is not much to be gained by understanding what the network does on these points. Techniques that aim to gain a global understanding of the network are going to be \"explaining\" the behavior of the network on such points as well, and so will be presenting data that we won't be able to interpret. By looking specifically at activations corresponding to in-distribution images, we can ensure that the data we're visualizing is in-distribution and is expected to make sense to us.\n\nI'm pretty excited that interpretability techniques have gotten good enough that they allow us to construct adversarial examples \"by hand\" -- that seems like a clear demonstration that we are learning something real about the network. It feels like the next step would be to use interpretability techniques to enable us to actually fix the network -- though admittedly this would require us to also develop methods that allow humans to \"tweak\" networks, which doesn't really fit within interpretability research as normally defined.", "highlight": true, "read_more": "[OpenAI blog post](https://openai.com/blog/introducing-activation-atlases/) and [Google AI blog post](https://ai.googleblog.com/2019/03/exploring-neural-networks.html)", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #49", "newsletter_category": "Interpretability"}
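As a rough sketch of the atlas construction described in the summary: collect many activation vectors, map them to 2D, bin them into a grid, and average the activations in each cell (the averaged vectors would then be rendered with feature visualization, which is only indicated by a comment here). PCA stands in for the dimensionality reduction used in the paper, and the random data is a placeholder for real activations.

```python
import numpy as np
from sklearn.decomposition import PCA  # stand-in for the reduction method used in the paper

def build_atlas(activations, grid_size=50):
    coords = PCA(n_components=2).fit_transform(activations)
    coords -= coords.min(axis=0)                       # normalize coordinates into [0, 1]
    coords /= coords.max(axis=0) + 1e-8
    cells = np.floor(coords * (grid_size - 1e-6)).astype(int)

    atlas = {}
    for gx, gy in {tuple(c) for c in cells}:
        mask = (cells[:, 0] == gx) & (cells[:, 1] == gy)
        atlas[(gx, gy)] = activations[mask].mean(axis=0)  # feed this to feature visualization
    return atlas

atlas = build_atlas(np.random.randn(10_000, 512), grid_size=50)
print(len(atlas), "non-empty grid cells")
```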
{"id": "504cb4483c5bdf2cfc7c3678c0f7ac9f", "title": "Interpretability and Post-Rationalization", "url": "https://medium.com/@vanhoucke/interpretability-and-post-rationalization-b812eda13783", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Vincent Vanhoucke"], "summaries": ["Neuroscience suggests that most explanations that we humans give for a decision are post-hoc rationalizations, and don't reflect the messy underlying true reasons for the decision. It turns out that decision making, perception, and all the other tasks we're hoping to outsource to neural nets are inherently complex and difficult, and are not amenable to easy explanation. We can aim for \"from-without\" explanations, which post-hoc rationalize the decisions a neural net makes, but \"from-within\" explanations, which aim for a mechanistic understanding, are intractable. We could try to design models that are more interpretable (in the \"from-within\" sense), but this would lead to worse performance on the actual task, which would hurt everyone, including the people calling for more accountability."], "venue": "Medium", "opinion": "I take a pretty different view from this post -- I've highlighted it because I think this is an important disagreement that's relevant for alignment. In particular, it's not clear to me that \"from-within\" interpretability is doomed -- while I agree that humans basically only do \"from-without\" rationalizations, we also aren't able to inspect a human brain in the same way that we can inspect a neural net. For example, we can't see the output of each individual neuron, we can't tell what input would each neuron would respond maximally to, and we can't pose counterfactuals with slightly different inputs to see what changes. In fact, I think that \"from-within\" interpretability techniques, such as [Building Blocks of Interpretability]() have already seen successes in identifying biases that image classifiers suffer from, that we wouldn't have known about otherwise.\n\nWe could also consider whether post-hoc rationalization is sufficient for alignment. Consider a thought experiment where a superintelligent AI is about to take a treacherous turn, but there is an explainer AI system that post-hoc rationalizes the output of the AI that could warn us in advance. If the explainer AI only gets access to the output of the superintelligent AI, I'm very worried -- it seems way too easy to come up with some arbitrary rationalization for an action that makes it seem good, you'd have to be have a much more powerful explainer AI to have a hope. On the other hand, if the explainer AI gets access to all of the weights and activations that led to the output, it seems more likely that this could work -- as an analogy, I think a teenager could tell if I was going to betray them, if they could constantly eavesdrop on my thoughts.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #29", "newsletter_category": "Interpretability"}
{"id": "3f0557c230a3a15ec092c42069d0e038", "title": "The What-If Tool: Code-Free Probing of Machine Learning Models", "url": "https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["James Wexler"], "summaries": ["When you train an ML model, it is often hard to understand what your model is doing and why. This post introduces the What-If tool, which allows you to ask counterfactual queries about the decision rule implemented by your final trained model, for classification and regression tasks. For example, you can take a particular data point, edit it slightly, and see how that changes the model prediction. Or you can graph the data points by L2 distance from a particular point. For classification tasks, you can find the \"closest counterfactual\", that is, the data point closest to the current point where the decision of the model is reversed. I played around with some of the demos, and apparently for a particular person and a particular model trained on census data, the probability that they had a salary of over $50k depended much more strongly on their marital status than their age, which was the opposite of my prediction. I figured this out by choosing a point, finding the closest counterfactual, and then making each of the changes in the delta individually and seeing which affected the model probability most."], "venue": "Google AI Blog", "opinion": "I'm guessing this is limited to tasks where your data points have a reasonable number of features (< 1000, I'd guess) and you are only analyzing a small set of test data points (around tens of thousands), due to computational constraints. That said, for those tasks, this seems incredibly useful to actually get a good model that you can debug and eventually deploy.\n\nIt's worth noting that this is an engineering achievement. Researchers are considering even stronger (but more computationally difficult) techniques, such as finding which part of the training set most influenced a particular decision, whereas the What-If tool doesn't talk about the training set and training process at all, instead only allowing you to ask queries about the final trained model.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #24", "newsletter_category": "Interpretability"}
{"id": "97df1bb84f57782963d44d0d008c714c", "title": "Differentiable Image Parameterizations", "url": "https://distill.pub/2018/differentiable-parameterizations/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Alexander Mordvintsev", "Nicola Pezzotti", "Ludwig Schubert", "and Chris Olah"], "summaries": ["There are lots of techniques for generating images using neural nets. A common approach is to take a neural net trained to classify images, and then use gradient descent to optimize _the input image_ instead of the weights of the neural net. You might think that the only way to affect the generated input image would be to change the loss function on which you run gradient descent, but in reality the way in which you represent the image makes a huge difference. They describe why this might be the case, and go through several examples:\n\n1. Suppose you want to see how two neurons interact. You could optimize an image to maximize the sum of the activations of the neurons. Even better, you could create an animation of how the image changes as you trade off how much you care about each neuron. Done naively, this doesn't look good, because there's a lot of randomness that changes between each image in the animation, which swamps out the differences we actually care about. To fix this, we can generate each frame in the animation as the sum of two images, one shared across all frames, and one that is frame-specific. Despite changing neither the loss function nor the space of input images, this is sufficient to remove the randomness between frames.\n\n2. You've probably seen [style transfer](https://medium.com/data-science-group-iitr/artistic-style-transfer-with-convolutional-neural-network-7ce2476039fd) before, but did you know it only works with the VGG architecture? We can get it to work with other architectures by representing images in Fourier-space instead of pixel-space, again without any change in the loss function or expressible space of images.\n\n3. If you generate the pixel-space representation of an image from a lower-dimensional representation using a Compositional Pattern Producing Network (CPPN), then gradient descent will optimize the lower-dimensional representation. It turns out that this produces images vaguely reminiscent of light-paintings. (I believe in this case, while the loss function doesn't change, the space of expressible images does change.)\n\n4. Often when we see the feature visualization for a neuron, there are a lot of areas of the image that don't actually matter for the neuron's activation. So, we can add transparency, and add a term in the loss function that encourages transparency. We also have to change the representation of the image to include a transparency channel in addition to the normal RGB channels. Then, the generated image will be transparent wherever the pixels don't matter, but will still have the visualization wherever it does matter for activating the neuron.\n\n5+6. We can even use a representation of 3D objects, and then write a (differentiable) algorithm that converts that into a 2D image that then goes through the standard image classifier neural net. 
This lets us optimize over the 3D object representation itself, letting us do both feature visualization and style transfer on 3D objects."], "venue": "Distill", "opinion": "While [OpenAI Five](https://blog.openai.com/openai-five/) suggests that the main thing we need to do is think of a reward function and an exploration strategy, this suggests that ML requires not just a good loss function, but lots of other things in order to work well. We have particular examples where changing things other than the loss function leads to different results. (This is probably also true for OpenAI Five, but the variations may not matter much, or OpenAI hasn't talked about the ML engineering behind the scenes -- I'm not sure.) These generally seem to be changing the inductive bias of the neural nets encoding the images. I think that if you expect to get very capable AI systems within the current paradigm, you will have to think about how inductive bias will affect what your AI system will do (and consequently its safety).\n\nAlso, the paper is very clear and approachable, and filled with great visualizations, as I've come to expect from Distill. I almost forgot to mention this, because I take it as a given for any Distill paper.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "Feature Visualization", "converted_with": "python", "newsletter_number": "AN #17", "newsletter_category": "Interpretability"}
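To make the role of the parameterization concrete, here is a minimal PyTorch sketch of optimizing an image represented in Fourier space rather than pixel space. The tiny random conv stack is a stand-in for a pretrained classifier, and the real Lucid implementation additionally scales frequencies and decorrelates color channels:

```python
# Minimal sketch (not the authors' code): optimize an image represented in
# Fourier space rather than pixel space. The "network" is a tiny random
# stand-in; in practice you would maximize a neuron of a pretrained classifier.
import torch

def fourier_image(params, size):
    spectrum = torch.view_as_complex(params)          # (3, H, W//2+1) complex spectrum
    img = torch.fft.irfft2(spectrum, s=(size, size))  # back to pixel space
    return torch.sigmoid(img)                         # squash into [0, 1]

size = 128
params = (torch.randn(3, size, size // 2 + 1, 2) * 0.01).requires_grad_()
opt = torch.optim.Adam([params], lr=0.05)

net = torch.nn.Sequential(                            # stand-in feature extractor
    torch.nn.Conv2d(3, 16, 5, stride=2), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 5, stride=2), torch.nn.ReLU(),
)

for _ in range(256):
    img = fourier_image(params, size).unsqueeze(0)    # (1, 3, H, W)
    loss = -net(img)[:, 7].mean()                     # maximize "channel 7" activation
    opt.zero_grad()
    loss.backward()
    opt.step()
```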
{"id": "73c632d621083299e965ec1ff22d46b3", "title": "Debuggable Deep Networks: Usage and Evaluation", "url": "https://gradientscience.org/debugging/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Eric Wong*", "Shibani Santurkar*", "Aleksander Madry"], "summaries": ["One simple approach to make neural nets more understandable is to make just the final layer sparse. Neurons in the penultimate layer can be visualized using [existing techniques](https://distill.pub/2017/feature-visualization/), and the sparsity of the final layer means that it is relatively easy to understand how they are combined together to make predictions. For example, in ImageNet, the final logit for an individual class becomes a weighted combination of around 20 features, instead of 2048 as you would get with a dense model. The authors' core claim is that this makes the model more understandable and debuggable, at the cost of a small drop in performance (about 1-5 percentage points). They show this using several experiments, many with real humans:\n\n1. The most basic test is simulation: can humans predict what the model would say (regardless of whether or not it is correct)? Unfortunately, if you show people a picture of an airplane, they are probably going to predict that the model says “airplane”, on priors. To avoid this sort of prior knowledge, they first sample a class like “airplane” that they _don’t_ reveal. Instead, they reveal feature visualizations of five randomly chosen features that the model uses to identify images of that class. They then choose three images and ask humans which of the three images will have the highest probability of being assigned to that class, according to the model. They find that when using a sparse final layer, humans have non-trivial performance (72% when the best image really is from the sampled class, and 57% when the best image is from some different class), whereas with a dense final layer they are only slightly better than random chance (44% and 31%, where random chance would be 33%).\n\n2. They can study biases and spurious correlations in models. For example, Toxic-BERT identifies toxic sentences, but does so by searching for identity groups like “christianity”. Debiased-BERT was meant to solve this, but by looking at the feature visualizations (word clouds) below a sparse decision layer, they find that it simply learns a strong _negative_ weight for identity groups. Thus, they are able to fool the model into thinking a toxic comment is non-toxic simply by adding an identity group like “christianity” somewhere in the sentence. (This also applies to the version that uses a dense final layer.)\n\n3. The identified biases or spurious correlations can then be used to generate counterfactuals: for example, in a sentiment analysis system, they can visualize word clouds that represent positive and negative influences on the final sentiment reported by the model. Then, by simply exchanging a positive word for a negative word (or vice versa), they can flip the label that the model assigns to the sentence. (Usually this is correct behavior – if you change “a _marvel_ like you’ve never seen” to “a _failure_ like you’ve never seen”, the sentiment really is different. The point is that the sparse model allows you to create these examples automatically.)\n\n4. In cases where the model makes a mistake, can humans identify why the model made a mistake? 
The authors note that over 30% of misclassifications can be explained by a single problematic feature, i.e. if you intervene to set that feature to zero, then the model no longer makes a mistake. So one way to check human understanding is to see whether they can reproduce this misclassification. Specifically, we take some image whose true label is y\\* but which the model incorrectly labels as y’. We then take the highest-activating feature in support of y\\* and the corresponding feature for y’, and ask humans which of the two features is more present in the image. They find that annotators prefer the feature for y’ 60% of the time – more than random chance (50%). Since the annotators don’t know which feature corresponds to the ground truth and which corresponds to the incorrect model prediction, they probably were not using prior knowledge in answering this question. Thus, doing better than random suggests that even according to humans the feature that the model picked up on really was present in the image."], "venue": "arXiv", "opinion": "I liked this paper especially for its experimental design; it seems like it does a good job of keeping human priors from influencing the results. The results themselves are very much a first step, showing that you’ve gotten at least some understanding and interpretability, but ideally we’d do much much better on these axes. For example, if we “understand” the model, one would hope that we’d be able to get scores of 95+% on the simulation experiment (bullet point 1 above), rather than the current 72% / 57%. It might be interesting to have benchmarks that use these sorts of experiments as their evaluation method. Given that this method just uses feature visualization on the penultimate layer, it seems like there should be room for improvement by studying other layers as well.\n\n_Editorial note:_ I summarized this work because I saw and liked the blog post about it. I don't generally follow the interpretability literature (it's huge), and so it's plausible that there are lots of more useful papers that I happen to not have seen. Most of the time, the highlighted papers can at least be understood as \"this is what Rohin thinks is most useful for alignment researchers to read within this field\"; that's not the case here.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #151", "newsletter_category": "Interpretability"}
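As a rough illustration of the basic recipe (the authors use a more careful elastic-net solver on features from a frozen ImageNet network; the data below is a random placeholder), one can fit an L1-regularized linear head on penultimate-layer features:

```python
# Sketch of the "sparse decision layer" idea with scikit-learn (not the authors'
# code): freeze the deep features and fit an L1-regularized linear head, so each
# class logit depends on only a few (visualizable) features.
import numpy as np
from sklearn.linear_model import LogisticRegression

feats = np.random.randn(1000, 2048).astype(np.float32)   # placeholder penultimate features
labels = np.random.randint(0, 10, size=1000)             # placeholder class labels

sparse_head = LogisticRegression(
    penalty="l1", solver="saga", C=0.05, max_iter=2000
).fit(feats, labels)

# How many features each class actually uses after sparsification.
print((sparse_head.coef_ != 0).sum(axis=1))
```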
{"id": "ebe348dd4224417bfb94d3278f4390d6", "title": "Neural Networks seem to follow a puzzlingly simple strategy to classify images", "url": "https://medium.com/bethgelab/neural-networks-seem-to-follow-a-puzzlingly-simple-strategy-to-classify-images-f4229317261f", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Wieland Brendel and Matthias Bethge"], "summaries": ["This is a blog post explaining the paper [Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet](https://openreview.net/pdf?id=SkfMWhAqYQ), which was summarized in [AN #33](https://mailchi.mp/b6dc636f6a1b/alignment-newsletter-33)."], "venue": "ICLR 2019", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #45", "newsletter_category": "Interpretability"}
{"id": "14ac7868725dfa8b0402a094b098be93", "title": "Visualizing Neural Networks with the Grand Tour", "url": "https://distill.pub/2020/grand-tour/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Mingwei Li", "Zhenge Zhao", "Carlos Scheidegger"], "summaries": ["Visualizing a complete dataset instead of single input examples is helpful when we want to analyze the relationships between different input examples and how their classification changes during training, as we can do so by looking at a single video. \n\nThe authors use an example on MNIST in which the network learns to classify the numbers 1 and 7 in an almost discrete fashion during particular epochs to compare different methods for visualizing how the dataset is classified. They find that one problem with nonlinear dimensionality reduction like t-SNE and UMAPs is that changes to a subset of the dataset can strongly affect how unchanged data points are represented. Then they compare this to the Grand Tour, a classical technique that projects the data into two dimensions from varying points of view. As projections are linear in the input variables, it is rather easy to reason about how changes in the data affect this visualization and the times the classes 1 and 7 are learnt are indeed quite salient in their example. Another advantage of this method is that confusion between two specific classes can be identified more easily, as the corresponding data points will be projected onto the line connecting the clusters for these classes. A similar approach can be taken on a network's hidden layers to identify the layer in which different classes become clearly distinguishable. They find that they can identify adversarial examples generated by FGSM by looking at the second to last layer, where the adversarial examples form a cluster distinct from the real images. \n\nAs the Grand Tour involves varying rotations, it is basically unaffected by rotations of the data. The authors argue that this is a feature, as rotations are small changes to the data and should not have a large effect on the visualization."], "venue": "Distill", "opinion": "The dataset perspective on visualization seems pretty useful as a quick diagnostic tool for practitioners, but less useful than feature visualization for a detailed understanding of a model. While I think that it is good to highlight invariances, I am not convinced that rotational invariance is actually desirable for visualizing intermediate layers of a neural network, as most nonlinearities are strongly affected by rotations. ", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #95", "newsletter_category": "Interpretability"}
{"id": "b7f868bd1204392984d3c3170f35f49b", "title": "Visualizing memorization in RNNs", "url": "https://distill.pub/2019/memorization-in-rnns/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Andreas Madsen"], "summaries": ["This is a short Distill article that showcases a visualization tool that demonstrates how contextual information is used by various RNN units (LSTMs, GRUs, and nested LSTMs). The method is very simple: for each character in the context, they highlight the character in proportion to the gradient of the logits with respect to that character. Looking at this visualization allows us to see that GRUs are better at using long-term context, while LSTMs perform better for short-term contexts."], "venue": "Distill", "opinion": "I'd recommend you actually look at and play around with the visualization, it's very nice. The summary is short because the value of the work is in the visualization, not in the technical details.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #51", "newsletter_category": "Interpretability"}
{"id": "5f872330f1025e27129bbe3fa1cf9569", "title": "Techniques for Interpretable Machine Learning", "url": "http://arxiv.org/abs/1808.00033", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Mengnan Du", "Ninghao Liu", "Xia Hu"], "summaries": ["This paper summarizes work on interpretability, providing a classification of different ways of achieving interpretability. There are two main axes -- first, whether you are trying to gain insight into the entire model, or its classification of a particular example; and second, whether you try to create a new model that is inherently interpretable, or whether you are post-hoc explaining the decision made by an uninterpretable model. The whole paper is a summary of techniques, so I'm not going to summarize it even further."], "venue": "AGI 2018: Artificial General Intelligence", "opinion": "This seems like a useful taxonomy that hits the kinds of interpretability research I know about, though the citation list is relatively low for a summary paper, and there are a few papers I expected to see that weren't present. On the other hand, I'm not actively a part of this field, so take it with a grain of salt.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #19", "newsletter_category": "Interpretability"}
{"id": "8d6ff5825a83e460f9eb6d7608047fff", "title": "What mechanisms drive agent behaviour?", "url": "https://medium.com/@deepmindsafetyresearch/what-mechanisms-drive-agent-behaviour-e7b8d9aee88", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Grégoire Déletang", "Jordi Grau-Moya", "Miljan Martic", "Tim Genewein", "Tom McGrath", "Vladimir Mikulik", "Markus Kunesch", "Shane Legg", "Pedro A. Ortega"], "summaries": ["A common challenge when understanding the world is that it is very hard to infer causal structure from only observational data. Luckily, we aren’t limited to observational data in the case of AI systems: we can intervene on either the environment the agent is acting in, or the agent itself, and see what happens. In this paper, the authors present an “agent debugger” that helps with this, which has all the features you’d normally expect in a debugger: you can set breakpoints, step forward or backward in the execution trace, and set or monitor variables.\n\nLet’s consider an example where an agent is trained to go to a high reward apple. However, during training the location of the apple is correlated with the floor type (grass or sand). Suppose we now get an agent that does well in the training environment. How can we tell if the agent looks for the apple and goes there, rather than looking at the floor type and going to the location where the apple was during training?\n\nWe can’t distinguish between these possibilities with just observational data. However, with the agent debugger, we can simulate what the agent would do in the case where the floor type and apple location are different from how they were in training, which can then answer our question.\n\nWe can go further: using the data collected from simulations using the agent debugger, we can also build a causal model that explains how the agent makes decisions. We do have to identify the features of interest (i.e. the nodes in the causal graph), but the probability tables can be computed automatically from the data from the agent debugger. The resulting causal model can then be thought of as an “explanation” for the behavior of the agent."], "venue": "arXiv", "opinion": "I very much like the general idea that we really can look at counterfactuals for artificial agents, given that we can control their inputs and internal state. This is the same idea underlying <@cross-examination@>(@Writeup: Progress on AI Safety via Debate@), as well as various other kinds of interpretability research.\n\nIn addition, one nice aspect of causal models as your form of “explanation” is that you can modulate the size of the causal model based on how many nodes you add to the graph. The full causal model for e.g. GPT-3 would be way too complex to understand, but perhaps we can get a high-level understanding with a causal model with higher-level concepts. I’d be very interested to see research tackling these sorts of scaling challenges.", "highlight": false, "read_more": "Paper: Causal Analysis of Agent Behavior for AI Safety", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #141", "newsletter_category": "Interpretability"}
{"id": "42db5e0f267fc512fdd547a67bdf4567", "title": "Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems", "url": "http://glassmanlab.seas.harvard.edu/papers/bucinca_iui20_proxy.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Zana Buçinca*", "Phoebe Lin*", "Krzysztof Z. Gajos", "Elena L. Glassman"], "summaries": ["As humans and AI systems have different strengths, it might make sense to combine them into human+AI teams for decision-making tasks. However, this does not always work well: if the human puts too little trust in a competent AI, the AI is of little use, and if they put too much trust in an incompetent AI, they might make worse decisions than had they been on their own. A lot of explainability research has focused on instilling more trust in AI systems without asking how much trust would be appropriate, even though there is research showing that hiding model bias instead of truthfully revealing it can increase trust in an AI system.\n\nThe authors conduct two experiments using an AI system that predicts nutrition information from pictures of food. In the first experiment, participants were asked to predict the AI's decision based on the ground truth and one of two types of explanations. In the inductive condition, the explanation consisted of a series of images the AI had identified as similar. In the deductive condition, subjects were shown a list of main ingredients identified by the AI. Subjects put more trust in the inductive explanations but were equally good at predicting the system's output in both cases. In the second experiment, a new set of subjects was asked to predict nutritional values with the help of the AI's predictions. Overall, access to the AI strongly improved the subjects' accuracy from below 50% to around 70%, which was further boosted to a value slightly below the AI's accuracy of 75% when users also saw explanations. This time, subjects put more trust in the AI when given deductive explanations, but performed better when given inductive explanations, as they were more likely to go against the AI's wrong decisions in that case. \n\nThe authors hypothesize that the between-task difference in which explanations are trusted more is connected to the cognitive effort required by the tasks and for understanding the explanations, combined with human reluctance to exert mental effort. They suggest to pay more attention to the exact form of the human-AI interaction and recommend to view AI-based decision aids as sociotechnical systems that are to be evaluated by their usefulness for actual decision making, rather than trust."], "venue": "IUI 2020", "opinion": "I am not sure whether the authors used an actual AI system or just handcrafted the input-prediction-explanation tuples, and how that might affect the correlation between explanations and the system's outputs, which can influence trust. Overall, the study reinforces my prior that trust induced by explanations is not a good predictor of an AI system's usefulness, but I am more sceptical that the differences between inductive and deductive explanations will be the same in different contexts. ", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #128", "newsletter_category": "Interpretability"}
{"id": "1fc7668fe778733f6cd207c9d11d4277", "title": "Towards Interpretable Reinforcement Learning Using Attention Augmented Agents", "url": "http://papers.nips.cc/paper/9400-towards-interpretable-reinforcement-learning-using-attention-augmented-agents", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Alexander Mott", "Daniel Zoran", "Mike Chrzanowski", "Daan Wierstra", "Danilo Jimenez Rezende"], "summaries": ["In this paper the authors train a reinforcement learning agent with a soft attention module built into it. The attention module forms a bottleneck between the visual input and the network choosing the next action, which forces the model to learn to attend to only important parts of the scene. This means they can visualise which parts of the input the model thinks are important, as those are the parts of the input that the model is attending to. The queries to the attention model are determined by a top level recurrent network, without input from the current image, so act as a form of \"top down\" attention, where the top controller can be imagined to be querying the processed image for various locations and objects.\n\nHaving trained this agent (which still gets competitive performance with SOTA RL models on a fair few ATARI games), they qualitatively evaluate the attention visualisation on a variety of games. They find several common strategies in the attention schemes, such as the agents attending to specific points until an object crosses the point (\"Tripwires\"). The attention is computed over both regular pixels, as well as Fourier-based positional encoding. Thanks to this and other aspects of their architecture, the authors can check whether the queries are focused on pixel values (i.e. looking for a specific pattern of pixels anywhere) or on location features (i.e. asking what pixels are present at a specific location). For example, they find that the agent often queries the location where the score is displayed, presumably because it is useful for calculating the value function. They also compare their method with self-attention based models, and with other saliency methods.\n\nThe best way to get a feel for the visualisations is to go to the paper's website and watch the example videos."], "venue": "NeurIPS 2019", "opinion": "This paper isn't revolutionary in its approach, but it's interesting to see work on interpreting RL agents, and the fact that the interpretability is built-in is interesting: it gives us a harder guarantee that this visualisation is actually showing us the parts of the input that the model thinks of as important, as they actually are important in its processing. It's promising to see that the in-built interpretability also didn't seem to penalise the performance much - it would be interesting to see this method applied to other, stronger kinds of models and see whether it still produces useful visualisations and how it affects their performance.", "highlight": false, "read_more": "The paper's website", "summarizer": "Robert", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #100", "newsletter_category": "Interpretability"}
{"id": "a35d426423daaa7683ee6df4b40bc0a0", "title": "Attention is not Explanation", "url": "http://arxiv.org/abs/1902.10186", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Sarthak Jain", "Byron C. Wallace"], "summaries": ["This paper explores the usefulness of attention weights in interpreting neural networks' performance on NLP tasks. The authors present two findings: firstly, that attention weights are only weakly correlated with other metrics of word importance; and secondly, that there often exist adversarially-generated attention weights which are totally different from the learned weights, but which still lead to the same outputs. They conclude that these results undermine the explanatory relevance of attention weights."], "venue": "arXiv", "opinion": "I like this type of investigation, but don't find their actual conclusions compelling. In particular, it doesn't matter whether \"meaningless\" adversarial attention weights can lead to the same classifications, as long as the ones actually learned by the system are interpretable. Also, the lack of correlation between attention weights and other methods could be explained either by attention weights being much worse than the other methods, or much better, or merely useful for different purposes.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #53", "newsletter_category": "Interpretability"}
{"id": "b4a020e67a915547528c3787d9ab382d", "title": "Stakeholders in Explainable AI", "url": "http://arxiv.org/abs/1810.00184", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Alun Preece", "Dan Harborne", "Dave Braines", "Richard Tomsett", "Supriyo Chakraborty"], "summaries": ["There are at least four groups for whom \"explainable\" AI is relevant: developers (who want AI to be easier to work with), theorists (who want to understand fundamental properties of AI), ethicists (who want AI to behave well) and users (who want AI to be useful). This has complicated work on explanability/interpretability: the first two groups focus on understanding how a system functions internally (described in this paper as \"verification\"), while the latter two focus on understanding what the system does (\"validation\"). The authors propose an alternative framing of interpretability, based on known knowns, unknown knowns, etc."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #27", "newsletter_category": "Interpretability"}
{"id": "bc1b4ab4e1783274926a2ce66d2fc42f", "title": "Contrastive Explanations for Reinforcement Learning in terms of Expected Consequences", "url": "http://arxiv.org/abs/1807.08706", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Jasper van der Waa", "Jurriaan van Diggelen", "Karel van den Bosch", "Mark Neerincx"], "summaries": ["This paper aims to provide contrastive explanations for the behavior of an RL agent, meaning that they contrast why the RL agent used one policy instead of another policy. They do this by computing the expected outcomes under the alternate policy, and then describing the difference between the two. (An outcome is a human-interpretable event -- they assume that they are given a function that maps states to outcomes.)"], "venue": "XAI workshop on the IJCAI conference 2018", "opinion": "I wish that they had let users choose the questions in their user study, rather than just evaluating questions that had been generated by their method where they wrote the alternative policy using template policies they had written. I'd be pretty excited and think it was a good step forward in this area if end users (i.e. not ML researchers) could ask novel contrastive questions (perhaps in some restricted class of questions).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #17", "newsletter_category": "Interpretability"}
{"id": "a7e6dd8fc84a4405dbe10aa2e2be1a7f", "title": "Challenges to Christiano’s capability amplification proposal", "url": "https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Eliezer Yudkowsky"], "summaries": ["A list of challenges faced by iterated distillation and amplification. First, a collection of aligned agents interacting does not necessarily lead to aligned behavior. (Paul's response: That's not the reason for optimism, it's more that there is no optimization pressure to be unaligned.) Second, it's unclear that even with high bandwidth oversight, that a collection of agents could reach arbitrary levels of capability. For example, how could agents with an understanding of arithmetic invent Hessian-free optimization? (Paul's response: This is an empirical disagreement, hopefully it can be resolved with experiments.) Third, while it is true that exact imitation of a human would avoid the issues of RL, it is harder to create exact imitation than to create superintelligence, and as soon as you have any imperfection in your imitation of a human, you very quickly get back the problems of RL. (Paul’s response: He's not aiming for exact imitation, he wants to deal with this problem by having a strong overseer aka informed oversight, and by having techniques that optimize worst-case performance.) Fourth, since Paul wants to use big unaligned neural nets to imitate humans, we have to worry about the possibility of adversarial behavior. He has suggested using large ensembles of agents and detecting and pruning the ones that are adversarial. However, this would require millions of samples per unaligned agent, which is prohibitively expensive. (Paul's response: He's no longer optimistic about ensembles and instead prefers the techniques in [this post](https://ai-alignment.com/techniques-for-optimizing-worst-case-performance-39eafec74b99), but he could see ways of reducing the sample complexity further.)"], "venue": "LessWrong", "opinion": "Of all of these, I'm most worried about the second and third problems. I definitely have a weak intuition that there are many important tasks that we care about that can't easily be decomposed, but I'm optimistic that we can find out with experiments. For the point about having to train a by-default unaligned neural net to imitate aligned agents, I'm somewhat optimistic about informed oversight with strong interpretability techniques, but I become a lot less optimistic if we think that won't be enough and need to use other techniques like verification, which seem unlikely to scale that far. In any case, I'd recommend reading this post for a good explanation of common critiques of IDA.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #7", "newsletter_category": "Iterated amplification"}
{"id": "718ba8216be30de2eacc004cbddd36cc", "title": "AI safety via debate", "url": "https://blog.openai.com/debate/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Geoffrey Irving and Dario Amodei"], "summaries": ["At a high level, a major issue with building superintelligent AI is that humans would not be able to provide strong oversight for the AI. Amplification solves this by using the AI as a tool that can help the human (in particular, if the human can break a task down into subtasks, the AI can solve the subtasks). Debate also provides the AI as a tool for human overseer, but in a different way -- now, in order to train the AI, we have the AI debate against itself in order to convince a human of the answer to some target question. Given some question whose answer is too hard to directly judge, the human can look at the arguments and counterarguments to figure out whether or not the answer is actually correct.\n\nThe paper describes debate in a lot more depth and has an initial experiment involving MNIST. I can't possibly do it justice here -- I encourage you to simply read the full paper. You probably have an intuition right now of why this wouldn't work, such as \"but humans believe what they want to hear, not what is true\". The paper spends 5 (!) pages listing ten such problems and analyzing them, so go read it."], "venue": "arXiv", "opinion": "It's great to see another approach that directly tackles the problem of defining a training signal that if optimized well would lead to an aligned AI. There are a lot of empirical questions that would influence whether or not debate actually works in practice, and I'm excited to see what experiments find.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #5", "newsletter_category": "Iterated amplification"}
{"id": "8f139c99e249dde838f6b36b5e87919c", "title": "Learning Complex Goals with Iterated Amplification", "url": "https://blog.openai.com/amplifying-ai-training/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Paul Christiano and Dario Amodei"], "summaries": ["This blog post and the accompanying [paper](https://arxiv.org/abs/1810.08575) introduces iterated amplification, focusing on how it can be used to define a training signal for tasks that humans cannot perform or evaluate, such as designing a transit system. The key insight is that humans are capable of decomposing even very difficult tasks into slightly simpler tasks. So, in theory, we could provide ground truth labels for an arbitrarily difficult task by a huge tree of humans, each decomposing their own subquestion and handing off new subquestions to other humans, until questions are easy enough that a human can directly answer them.\n\nWe can turn this into an efficient algorithm by having the human decompose the question only once, and using the current AI system to answer the generated subquestions. If the AI isn't able to answer the subquestions, then the human will get nonsense answers. However, as long as there are questions that the human + AI system can answer but the AI alone cannot answer, the AI can learn from the answers to those questions. To reduce the reliance on human data, another model is trained to predict the decomposition that the human performs. In addition, some tasks could refer to a large context (eg. evaluating safety for a specific rocket design), so they model the human as being able to access small pieces of the context at a time.\n\nThey evaluate on simple algorithmic tasks like distance between nodes in a graph, where they can program an automated human decomposition for faster experiments, and there is a ground truth solution. They compare against supervised learning, which trains a model on the ground truth answers to questions (which iterated amplification does not have access to), and find that they can match the performance of supervised learning with only slightly more training steps."], "venue": "OpenAI Blog", "opinion": "This is my new favorite post/paper for explaining how iterated amplification works, since it very succinctly and clearly makes the case for iterated amplification as a strategy for generating a good training signal. I'd recommend reading the [paper](https://arxiv.org/abs/1810.08575) in full, as it makes other important points that I haven't included in the summary.\n\nNote that it does not explain a lot of Paul's thinking. It explains one particular training method that allows you to train an AI system with a more intelligent and informed overseer.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #30", "newsletter_category": "Iterated amplification"}
{"id": "dd918e448c3a22c0f064b101cf12d917", "title": "Understanding Iterated Distillation and Amplification: Claims and Oversight", "url": "https://www.lesswrong.com/posts/yxzrKb2vFXRkwndQ4/understanding-iterated-distillation-and-amplification-claims", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["William_S"], "summaries": ["The post introduces a distinction between flavors of iterated distillation and amplification -- whether the overseer is low bandwidth or high bandwidth. Let's think of IDA as building a deliberation tree out of some basic overseer. In the high bandwidth case, we can think of the overseer as a human who can think about a problem for 15 minutes, without access to the problem's context. However, there could be \"attacks\" on such overseers. In order to solve this problem, we can instead use low-bandwidth overseers, who only look at a sentence or two of text, and verify through testing that there are no attacks on such overseers. However, it seems much less clear that such an overseer would be able to reach high levels of capability."], "venue": "LessWrong", "opinion": "This is an excellent post that improved my understanding of Paul Christiano's agenda, which is not something I usually say about posts not written by Paul himself. I definitely have not captured all of the important ideas in my summary, so you should read it.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "Iterated Distillation and Amplification", "converted_with": "python", "newsletter_number": "AN #3", "newsletter_category": "Iterated amplification"}
{"id": "0ba632a85cb326756701f5254ce462bd", "title": "Factored Cognition (old)", "url": "https://ought.org/presentations/factored-cognition-2018-05", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Andreas Stuhlmuller"], "summaries": ["This is a presentation that Andreas has given a few times on Factored Cognition, a project by [Ought](https://ought.org/) that is empirically testing one approach to amplification on humans. It is inspired by [HCH](https://ai-alignment.com/strong-hch-bedb0dc08d4e) and [meta-execution](https://ai-alignment.com/meta-execution-27ba9b34d377). These approaches require us to break down complex tasks into small, bite-sized pieces that can be solved separately by copies of an agent. So far Ought has built a web app in which there are workspaces, nodes, pointers etc. that can allow humans to do local reasoning to answer a big global question."], "venue": "", "opinion": "It is unclear whether most tasks can actually be decomposed as required for iterated distillation and amplification, so I'm excited to see experiments that can answer that question! The questions that Ought is trying seem quite hard, so it should be a good test of breaking down reasoning. There's a lot of detail in the presentation that I haven't covered, I encourage you to read it.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #12", "newsletter_category": "Iterated amplification"}
{"id": "f937debd7ca77c5b72492fdc666fa551", "title": "Ought Progress Update October 2019", "url": "https://ought.org/updates/2019-10-28-progress-update", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jungwon Byun and Andreas Stuhlmüller"], "summaries": ["While this update provides details about Ought as a whole, I will focus only on the research they've done. As a reminder, Ought's goal is to understand how we can delegate hard questions to machine and human experts. They initially focused on <@Factored Cognition@>, where each question was recursively decomposed into subquestions that would help _find_ the answer. They now call this \"Factored Generation\", and use \"Factored Cognition\" as the umbrella term for any approach that helps answer hard questions using some kind of decomposition.\n\nWhile Ought has run some ML experiments aimed at automating decompositions, they have primarily worked on human experiments for <@Factored Evaluation@>(@Delegating open-ended cognitive work@), where the decompositions are used to help _evaluate_ answers. They are focusing on evaluating answers instead of finding answers for a few reasons. First, there may already exist experts whose answers we can use: this can be directly plugged into Factored Evaluation, but not Factored Generation. Second, evaluation is often easier than generation. Third, factored evaluation is more amenable to time-saving tricks like only focusing on a single branch of the tree (as in <@AI safety via debate@>). Finally, when training ML systems, if a model is unable to exactly imitate the experts, we need something that can distinguish between better and worse behavior, which is possible with Factored Evaluation but not Factored Generation. (See also [Against Mimicry](https://ai-alignment.com/against-mimicry-6002a472fc42).)\n\nThe experiments involve a judge trying to decide which of two answers to a question is best, where one answer is provided by an honest expert and one by a malicious expert. (Subquestions may also get different answers from the two experts.) The goal is to find a setup where the judge reliably picks the honest answer, regardless of how powerful the malicious expert is: if this could be established, it would be strong evidence that the setup incentivizes honest answers. Of course, this requires us to quantify over all malicious strategies, but humans may not find the best malicious strategies. To mitigate this, the experiments provide the malicious expert with lots of advantages that the honest one doesn't have, to simulate very powerful malicious experts.\n\nThey have already learned from their experiments. Initially, they hoped participants would develop good meta-level strategies for determining the truth. (Meta-level here means that the strategies would generalize to other domains, e.g. a heuristic of always splitting an answer into separate claims and asking for the evidence for each claim separately.) They found that these strategies _don't_ emerge organically, and so are planning to spend concentrated staff time on finding good strategies. They also found that malicious experts sometimes won due to avoidable mistakes, and are hoping to eliminate this by ensembling work from multiple people for increased robustness."], "venue": "Ought Website", "opinion": "This is distinct progress since the last update, though I think the experiments are still exploratory enough that it's hard to have any big takeaways. 
The difficulty of generating good strategies suggests that it's particularly important that we train our human overseers (as suggested in <@AI Safety Needs Social Scientists@>) to provide the right kind of feedback, for example if we would like them to reward only <@corrigible reasoning@>(@Corrigibility@). I'm particularly excited for the next update, where we could see experiments powerful enough to come to more solid conclusions.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #72", "newsletter_category": "Iterated amplification"}
{"id": "ee938b8d85af15efd8b71952f3648f6f", "title": "AI Alignment Podcast: AI Alignment through Debate", "url": "https://futureoflife.org/2019/03/06/ai-alignment-through-debate-with-geoffrey-irving/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lucas Perry and Geoffrey Irving"], "summaries": ["We want AI safety solutions to scale to very intelligent agents; debate is one scalability technique. It's formulated as a two player zero-sum perfect information game in which agents make arguments in natural language, to be evaluated by a human judge. Whether or not such debates are truth-conducive is an empirical question which we can try to evaluate experimentally; doing so will require both technical and social science expertise (as discussed in a <@previous post@>(@AI Safety Needs Social Scientists@))."], "venue": "FLI Website", "opinion": "I think one of the key questions underlying Debate is how efficiently natural language can summarise reasoning about properties of the world. This question is subject to some disagreement (at one extreme, Facebook's [roadmap towards machine intelligence](https://research.fb.com/publications/a-roadmap-towards-machine-intelligence/) describes a training environment which is \"entirely linguistically defined\") and probably deserves more public discussion in the context of safety.\n\n**Rohin's note:** If you've read the previous posts on debate, the novel parts of this podcast are on the relation between iterated amplification and debate (which has been discussed before, but not in as much depth), and the reasons for optimism and pessimism about debate.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #52", "newsletter_category": "Iterated amplification"}
{"id": "6535cb2eb53f36f4ec68ee98ce7027a8", "title": "(When) is Truth-telling Favored in AI debate?", "url": "https://medium.com/@RyanCarey/new-paper-when-is-truth-telling-favored-in-ai-debate-8f58f14562e5", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Vojtěch Kovařík", "Ryan Carey"], "summaries": ["<@Debate@>(@AI safety via debate@) aims to train an AI system using self-play to win \"debates\" which aim to convincingly answer a question, as evaluated by a human judge. The main hope is that the equilibrium behavior of this game is for the AI systems to provide true, useful information. This paper studies this in a simple theoretical setting called _feature debates_. In this environment, a \"world\" is sampled from some distribution, and the agents (who have perfect information) are allowed to make claims about real-valued \"features\" of the world, in order to answer some question about the features of the world. The judge is allowed to check the value of a single feature before declaring a winner, but otherwise knows nothing about the world.\n\nIf either agent lies about the value of a feature, the other agent can point this out, which the judge can then check; so at the very least the agents are incentivized to honestly report the values of features. However, does this mean that they will try to answer the full question truthfully? If the debate has more rounds than there are features, then it certainly does: either agent can unilaterally reveal every feature, which uniquely determines the answer to the question. However, shorter debates need not lead to truthful answers. For example, if the question is whether the first K features are all 1, then if the debate length is shorter than K, there is no way for an agent to prove that the first K features are all 1."], "venue": "arXiv", "opinion": "While it is interesting to see what doesn't work with feature debates, I see two problems that make it hard to generalize these results to regular debate. First, I see debate as being truth-seeking in the sense that the answer you arrive at is (in expectation) more accurate than the answer the judge would have arrived at by themselves. However, this paper wants the answers to actually be _correct_. Thus, they claim that for sufficiently complicated questions, since the debate can't reach the right answer, the debate isn't truth-seeking -- but in these cases, the answer is still in expectation more accurate than the answer the judge would come up with by themselves.\n\nSecond, feature debate doesn't allow for decomposition of the question during the debate, and doesn't allow the agents to challenge each other on particular questions. I think this limits the \"expressive power\" of feature debate to P, while regular debate reaches PSPACE, and is thus able to do much more than feature debate. See this [comment](https://www.alignmentforum.org/posts/RQoSCs9SePDMLJvfz/new-paper-when-is-truth-telling-favored-in-ai-debate#gCeKuJ62HmLtPB9C9) for more details.", "highlight": false, "read_more": "Paper: (When) Is Truth-telling Favored in AI Debate?", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #83", "newsletter_category": "Iterated amplification"}
{"id": "7f47c04a72dac0dbc668d7b2f0489d4a", "title": "Delegating open-ended cognitive work", "url": "https://ought.org/presentations/delegating-cognitive-work-2019-06", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Andreas Stuhlmüller"], "summaries": ["This is the latest explanation of the approach Ought is experimenting with: Factored Evaluation (in contrast to <@Factored Cognition@>). With Factored Cognition, the idea was to recursively decompose a high-level task until you reach subtasks that can be directly solved. Factored Evaluation still does recursive decomposition, but now it is aimed at _evaluating_ the work of experts, along the same lines as <@recursive reward modeling@>(@Scalable agent alignment via reward modeling@).\n\nThis shift means that Ought is attacking a very natural problem: how to effectively delegate work to experts while avoiding principal-agent problems. In particular, we want to design incentives such that untrusted experts under the incentives will be as helpful as experts intrinsically motivated to help. The experts could be human experts or advanced ML systems; ideally our incentive design would work for both.\n\nCurrently, Ought is running experiments with reading comprehension on Wikipedia articles. The experts get access to the article while the judge does not, but the judge can check whether particular quotes come from the article. They would like to move to tasks that have a greater gap between the experts and the judge (e.g. allowing the experts to use Google), and to tasks that are more subjective (e.g. whether the judge should get Lasik surgery)."], "venue": "EA Global", "opinion": "The switch from Factored Cognition to Factored Evaluation is interesting. While it does make it more relevant outside the context of AI alignment (since principal-agent problems abound outside of AI), it still seems like the major impact of Ought is on AI alignment, and I'm not sure what the difference is there. In <@iterated amplification@>(@Learning Complex Goals with Iterated Amplification@), when decomposing tasks in the Factored Cognition sense, you would use imitation learning during the distillation step, whereas with Factored Evaluation, you would use reinforcement learning to optimize the evaluation signal. The switch would be useful if you expect the reinforcement learning to work significantly better than imitation learning.\n\nHowever, with Factored Evaluation, the agent that you train iteratively is one that must be good at evaluating tasks, and then you'd need another agent that actually performs the task (or you could train the same agent to do both). In contrast, with Factored Cognition you only need an agent that is performing the task. If the decompositions needed to perform the task are different from the decompositions needed to evaluate the task, then Factored Cognition would presumably have an advantage.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #62", "newsletter_category": "Iterated amplification"}
{"id": "8c6402af4a535f00b735414d497d18ab", "title": "Making AI Safe through Debate", "url": "https://towardsdatascience.com/making-ai-safe-through-debate-935fe8a0ec5", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Jeremie Harris and Ethan Perez"], "summaries": ["This hour-long podcast is a good introduction to iterated amplification and debate, from a more ML perspective than most other explanations."], "venue": "Towards Data Science", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #146", "newsletter_category": "Iterated amplification"}
{"id": "361dfeed3815991b862b35ef368b9f03", "title": "Factored Cognition sequence", "url": "https://www.lesswrong.com/s/xezt7HYfpWR6nwp7Z", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Rafael Harth"], "summaries": ["The <@Factored Cognition Hypothesis@>(@Factored Cognition@) informally states that any task can be performed by recursively decomposing the task into smaller and smaller subtasks until eventually the smallest tasks can be done by a human. This sequence aims to formalize the hypothesis to the point that it can be used to argue for the outer alignment of (idealized versions of) <@iterated amplification@>(@Supervising strong learners by amplifying weak experts@) and <@debate@>(@AI safety via debate@).\n\nThe key concept is that of an _explanation_ or _decomposition_. An explanation for some statement **s** is a list of other statements **s1, s2, … sn** along with the statement “(**s1** and **s2** and … and **sn**) implies **s**”. A _debate tree_ is a tree in which for a given node **n** with statement **s**, the children of **n** form an explanation (decomposition) of **s**. The leaves of the tree should be statements that the human can verify. (Note that the full formalism has significantly more detail, e.g. a concept of the “difficulty” for the human to verify any given statement.)\n\nWe can then define an idealized version of debate, in which the first debater must produce an answer with associated explanation, and the second debater can choose any particular statement to expand further. The judge decides the winner based on whether they can confidently verify the final statement or not. Assuming optimal play, the correct (honest) answer is an equilibrium as long as:\n\n**Ideal Debate Factored Cognition Hypothesis:** For every question, there exists a debate tree for the correct answer where every leaf can be verified by the judge.\n\nThe idealized form of iterated amplification is <@HCH@>(@Humans Consulting HCH@); the corresponding Factored Cognition Hypothesis is simply “For every question, HCH returns the correct answer”. Note that the _existence_ of a debate tree is not enough to guarantee this, as HCH must also _find_ the decompositions in this debate tree. If we imagine that HCH gets access to a decomposition oracle that tells it the right decomposition to make at each node, then HCH would be similar to idealized debate. (HCH could of course simply try all possible decompositions, but we are ignoring that possibility: the decompositions that we rely on should reduce or hide complexity.)\n\nIs the HCH version of the Factored Cognition Hypothesis true? The author tends to lean against (more specifically, that HCH would not be superintelligent), because it seems hard for HCH to find good decompositions. In particular, humans seem to improve their decompositions over time as they learn more, and also seem to improve the concepts by which they think over time, all of which are challenging for HCH to do. On the other hand, the author is cautiously optimistic about debate."], "venue": "LessWrong", "opinion": "I enjoyed this sequence: I’m glad to see more analysis of what is and isn’t necessary for iterated amplification and debate to work, as well as more theoretical models of debate. 
I broadly agreed with the conceptual points made, with one exception: I’m not convinced that we should not allow brute force for HCH, and for similar reasons I don’t find the arguments that HCH won’t be superintelligent convincing. In particular, the hope with iterated amplification is to approximate a truly massive tree of humans, perhaps a tree containing around 2^100 (about 1e30) base agents / humans. At that scale (or even at just a measly billion (1e9) humans), I don’t expect the reasoning to look anything like what an individual human does, and approaches that are more like “brute force” seem a lot more feasible.\n\nOne might wonder why I think it is possible to approximate a tree with more base agents than there are grains of sand in the Sahara desert. Well, a perfect binary tree of depth 99 would have 1e30 nodes; thus we can roughly say that we’re approximating 99-depth-limited HCH. If we had perfect distillation, this would take 99 rounds of iterated amplification and distillation, which seems quite reasonable. Of course, we don’t have perfect distillation, but I expect that to be a relatively small constant factor on top (say 100x), which still seems pretty reasonable. (There’s more detail about how we get this implicit exponential-time computation in <@this post@>(@Factored Cognition@).)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #135", "newsletter_category": "Iterated amplification"}
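As a concrete companion to the debate-tree definition above, here is a minimal sketch of an explanation tree and the check that every leaf is judge-verifiable (the Ideal Debate Factored Cognition Hypothesis for a single tree). The Node class, the "[obvious]" tag, and the example statements are my own toy encoding, not the sequence's notation.

```python
# Toy encoding of a debate tree: a node is a statement plus child statements
# that jointly imply it; leaves must be directly verifiable by the judge,
# here crudely modeled by an "[obvious]" tag.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    statement: str
    children: List["Node"] = field(default_factory=list)

def judge_can_verify(statement: str) -> bool:
    return "[obvious]" in statement          # stand-in for the human judge

def tree_supports_answer(node: Node) -> bool:
    """Ideal-debate check for one tree: whichever leaf the second debater
    challenges, the judge must be able to verify it."""
    if not node.children:
        return judge_can_verify(node.statement)
    return all(tree_supports_answer(child) for child in node.children)

tree = Node("247 is not prime",
            [Node("13 * 19 = 247 [obvious]"),
             Node("a number with a nontrivial factorization is not prime [obvious]")])
print(tree_supports_answer(tree))            # True
```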
{"id": "79d7807bf0ba52484f99c4991d87a2ff", "title": "AI Safety Debate and Its Applications", "url": "https://alignmentforum.org/posts/5Kv2qNfRyXXihNrx2/ai-safety-debate-and-its-applications", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Vojta Kovarik"], "summaries": ["This post defines the components of a <@debate@>(@AI safety via debate@) game, lists some of its applications, and defines truth-seeking as the property that we want. Assuming that the agent chooses randomly from the possible Nash equilibria, the truth-promoting likelihood is the probability that the agent picks the actually correct answer. The post then shows the results of experiments on MNIST and Fashion MNIST, seeing comparable results to the original paper."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #83", "newsletter_category": "Iterated amplification"}
{"id": "fe069ff259fc6ad9d2fcb921cf7a8f37", "title": "Capability amplification", "url": "https://www.alignmentforum.org/posts/t3AJW5jP3sk36aGoC/capability-amplification", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2016-01-01T00:00:00Z", "authors": ["Paul Christiano"], "summaries": ["Capability amplification is the problem of taking some existing policy and producing a better policy, perhaps using much more time and compute. It is a particularly interesting problem to study because it could be used to define the goals of a powerful AI system, and it could be combined with [reward engineering](https://www.alignmentforum.org/posts/4nZRzoGTqg8xy5rr8/the-reward-engineering-problem) above to create a powerful aligned system. (Capability amplification and reward engineering are analogous to amplification and distillation respectively.) In addition, capability amplification seems simpler than the general problem of \"build an AI that does the right thing\", because we get to start with a weak policy A rather than nothing, and were allowed to take lots of time and computation to implement the better policy. It would be useful to tell whether the \"hard part\" of value alignment is in capability amplification, or somewhere else.\n\nWe can evaluate capability amplification using the concepts of reachability and obstructions. A policy C is _reachable_ from another policy A if there is some chain of policies from A to C, such that at each step capability amplification takes you from the first policy to something at least as good as the second policy. Ideally, all policies would be reachable from some very simple policy. This is impossible if there exists an _obstruction_, that is a partition of policies into two sets L and H, such that it is impossible to amplify any policy in L to get a policy that is at least as good as some policy in H. Intuitively, an obstruction prevents us from getting to arbitrarily good behavior, and means that all of the policies in H are not reachable from any policy in L.\n\nWe can do further work on capability amplification. With theory, we can search for challenging obstructions, and design procedures that overcome them. With experiment, we can study capability amplification with humans (something which [Ought](https://ought.org/) is now doing)."], "venue": "Alignment Forum", "opinion": "There's a clear reason for work on capability amplification: it could be used as a part of an implementation of iterated amplification. However, this post also suggests another reason for such work -- it may help us determine where the \"hard part\" of AI safety lies. Does it help to assume that you have lots of time and compute, and that you have access to a weaker policy?\n\nCertainly if you just have access to a weaker policy, this doesn't make the problem any easier. If you could take a weak policy and amplify it into a stronger policy efficiently, then you could just repeatedly apply this policy-improvement operator to some very weak base policy (say, a neural net with random weights) to solve the full problem. (In other variants, you have a much stronger aligned base policy, eg. the human policy with short inputs and over a short time horizon; in that case this assumption is more powerful.) The more interesting assumption is that you have lots of time and compute, which does seem to have a lot of potential. 
I feel pretty optimistic that a human thinking for a long time could reach \"superhuman performance\" by our current standards; capability amplification asks if we can do this in a particular structured way.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #42", "newsletter_category": "Iterated amplification sequence"}
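The reachability/obstruction definitions above can be phrased operationally; the sketch below is a toy rendition in which a "policy" is just a number standing in for capability and `amplify` is a made-up improvement operator. It only shows what checking reachability by chaining amplification steps looks like, not a model of real policies.

```python
def amplify(policy: float) -> float:
    # Hypothetical amplification operator with diminishing returns per step.
    return policy + 1.0 / (1.0 + policy)

def reachable(a: float, c: float, max_steps: int = 10_000) -> bool:
    """Is policy c reachable from policy a by iterating amplification?"""
    current = a
    for _ in range(max_steps):
        if current >= c:
            return True
        current = amplify(current)
    return False   # not reached within the budget; hints at (but doesn't prove) an obstruction

print(reachable(0.0, 5.0))    # True: a dozen or so amplification steps suffice
print(reachable(0.0, 200.0))  # False: growth is too slow within this budget
```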
{"id": "91d048f286f20036fb7916dad23c7386", "title": "Benign model-free RL", "url": "https://www.alignmentforum.org/posts/PRaxzmDJdvie46ahL/benign-model-free-rl", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Paul Christiano"], "summaries": ["This post is very similar to the previous one, just with different language: distillation is now implemented through reward modeling with robustness. The point of robustness is to ensure that the distilled agent is benign even outside of the training distribution (though it can be incompetent). There's also an analysis of the costs of the scheme. One important note is that this approach only works for model-free RL systems -- we'll need something else for eg. model-based RL, if it enables capabilities that we can't get with model-free RL."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #35", "newsletter_category": "Iterated amplification sequence"}
{"id": "b3a0c9ff96b617fc0693be0fdcfdfdf7", "title": "Iterated Distillation and Amplification", "url": "https://www.alignmentforum.org/posts/HqLxuZ4LhaFhmAHWk/iterated-distillation-and-amplification", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Ajeya Cotra"], "summaries": ["This is the first in a series of four posts describing the iterated amplification framework in different ways. This post focuses on the repetition of two steps. In amplification, we take a fast aligned agent and turn it into a slow but more capable aligned agent, by allowing a human to coordinate many copies of the fast agent in order to make better decisions. In distillation, we take a slow aligned agent and turn it a fast aligned agent (perhaps by training a neural net to imitate the judgments of the slow agent). This is similar to AlphaGoZero, in which MCTS can be thought of as amplification, while distillation consists of updating the neural net to predict the outputs of the MCTS.\n\nThis allows us to get both alignment and powerful capabilities, whereas usually the two trade off against each other. High capabilities implies a sufficiently broad mandate to search for good behaviors, allowing our AI systems to find novel behaviors that we never would have thought of, which could be bad if the objective was slightly wrong. On the other hand, high alignment typically requires staying within the realm of human behavior, as in imitation learning, which prevents the AI from finding novel solutions.\n\nIn addition to distillation and amplification robustly preserving alignment, we also need to ensure that given a human as a starting point, iterated distillation and amplification can scale to arbitrary capabilities. We would also want it be about as cost-efficient as alternatives. This seems to be true at test time, when we are simply executing a learned model, but it could be that training is much more expensive."], "venue": "Alignment Forum", "opinion": "This is a great simple explanation of the scheme. I don't have much to say about the idea since I've talked about iterated amplification so much in this newsletter already.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #35", "newsletter_category": "Iterated amplification sequence"}
{"id": "e55fda77e6fe14dacdc376345290d4f5", "title": "Learning to Interactively Learn and Assist", "url": "https://interactive-learning.github.io", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Mark Woodward", "Chelsea Finn", "Karol Hausman"], "summaries": ["[Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137) proposed a model in which an AI assistant would help a human principal, where only the principal knows the task reward. This paper explores this idea in the context of deep reinforcement learning. In their grid-world environment, two agents move around and pick up lemons or plums. The principal is penalized for moving, but is the only one who knows whether plums or lemons should be picked up. The authors hypothesize that simply by jointly training the two agents to maximize rewards, they will automatically learn to interact in order for the assistant to learn the task, rather than requiring an explicit mechanism like comparisons or demonstrations.\n\nRecurrent Q-networks are used for the agents, which are then trained via deep Q-learning. The authors run several experiments that show emergent interaction. In the first experiment, when the principal is penalized for moving it learns to demonstrate the task to the assistant, and then let the assistant finish the job. In the second experiment, when the assistant has a restricted field of view, it learns to follow the principal until it can infer whether the principal wants plums or lemons. In the third, they tell the assistant the task 50% of the time, and so the principal is initially unsure whether the agent needs any direction (and due to the motion cost, the principal would rather not do anything). When the agent knows the task, it performs it. When the agent doesn't know the task, it moves closer to the principal, in effect \"asking\" what the reward is, and the principal moves until it can see the object, and then \"answers\" by either moving towards the object (if it should be collected) or doing nothing (if not). Finally, the authors run an experiment using pixels as input. While they had to switch to dueling DQNs instead of vanilla DQNs, they show that the joint reward is competitive with the grid approach. They also run an experiment with human principals and show that the human/assistant pair outperforms the solo-human setup."], "venue": "arXiv", "opinion": "Overall, I found the idea expressed in this paper to be well-articulated. While I think that the grid-world environment is a bit simplistic, their results are interesting. Being able to learn intent in an online manner is an important problem to solve if we’re interested in robust collaboration between humans and autonomous agents. However, the authors point out that training on pixel input fails in the majority of cases, 64% of the time, which raises concerns about how well the method would generalize to non-trivial environments.\n\n**Rohin's opinion:** I'm excited that the ideas from [CIRL](https://arxiv.org/abs/1606.03137) are making their way to deep RL. 
Ultimately I expect we'll want an agent that takes all of its sensory data as evidence about \"what the human wants\", rather than relying on a special reward channel, or a special type of data called \"comparisons\" or \"demonstrations\", and this work takes that sort of approach.\n\nFor these simple environments, an agent trained to perform well with another artificial agent will generalize reasonably well to real humans, because there's only a few reasonable strategies for the principal to take. However, with more complex environments, when there are many ways to interact, we can't expect such generalization. (I'll have a paper and blog post coming out soon about this phenomenon.) ", "highlight": true, "read_more": "", "summarizer": "Zachary Robertson", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #64", "newsletter_category": "Learning human intent"}
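A minimal sketch of the observation and reward asymmetry described in the summary (the names, costs, and action encoding are mine, not the paper's): both agents are trained on a shared task reward, only the principal observes the task, and only the principal pays a movement cost, which is what pushes it to hand the work off once the assistant has figured out the task.

```python
# Toy version of the observation/reward asymmetry: one shared task reward, but
# only the principal sees the task, and only the principal pays a movement cost.
import random

def step(task, principal_action, assistant_action, move_cost=0.1):
    """task is "plum" or "lemon"; an action is ("move", None) or ("pick", fruit)."""
    reward = 0.0
    for actor, (kind, fruit) in (("principal", principal_action),
                                 ("assistant", assistant_action)):
        if kind == "pick":
            reward += 1.0 if fruit == task else -1.0
        if actor == "principal" and kind == "move":
            reward -= move_cost
    obs_principal = {"task": task}   # the principal knows what to collect
    obs_assistant = {}               # the assistant must infer it from behaviour
    return reward, obs_principal, obs_assistant

random.seed(0)
task = random.choice(["plum", "lemon"])
print(step(task, ("move", None), ("pick", "plum")))
```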
{"id": "4b29b65a9478d8f5307e0a1503d96128", "title": "Learning Preferences by Looking at the World", "url": "https://bair.berkeley.edu/blog/2019/02/11/learning_preferences/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Rohin Shah and Dmitrii Krasheninnikov"], "summaries": ["The key idea with this project of mine is that the state of the world is already optimized for our preferences, and so simply by looking at the world we can infer these preferences. Consider the case where there is a vase standing upright on the table. This is an unstable equilibrium -- it's very easy to knock over the vase so it is lying sideways, or is completely broken. The fact that this hasn't happened yet suggests that we care about vases being upright and intact; otherwise at some point we probably would have let it fall.\n\nSince we have optimized the world for our preferences, the natural approach is to model this process, and then invert it to get the preferences. You could imagine that we could consider all possible reward functions, and put probability mass on them in proportion to how likely they make the current world state if a human optimized them. Basically, we are simulating the past in order to figure out what must have happened and why. With the vase example, we would notice that in any reward function where humans wanted to break vases, or were indifferent to broken vases, we would expect the current state to contain broken vases. Since we don't observe that, it must be the case that we care about keeping vases intact.\n\nOur algorithm, Reward Learning by Simulating the Past (RLSP), takes this intuition and applies it in the framework of [Maximum Causal Entropy IRL](http://www.cs.cmu.edu/~bziebart/publications/maximum-causal-entropy.pdf) ([AN #12](https://mailchi.mp/bcb2c6f1d507/alignment-newsletter-12)), where you assume that the human was acting over T timesteps to produce the state that you observe. We then show a few gridworld environments in which applying RLSP can fix a misspecified reward function."], "venue": "BAIR Blog", "opinion": "In addition to this blog post and the [paper](https://openreview.net/forum?id=rkevMnRqYQ), I also wrote a [post](https://www.alignmentforum.org/posts/7f6DNZhracD7RvxMr/learning-preferences-by-looking-at-the-world) on the Alignment Forum expressing opinions about the work. There are too many disparate opinions to put in here, so I'd recommend reading the post itself. I guess one thing I'll mention is that to infer preferences with a single state, you definitely need a good dynamics model, and a good set of features. While this may seem difficult to get, it's worth noting that dynamics are empirical facts about the world, and features might be, and there is already lots of work on learning both dynamics and features.", "highlight": true, "read_more": "Paper: Preferences Implicit in the State of the World", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #45", "newsletter_category": "Learning human intent"}
{"id": "4d0bc86b85a157365d9e4014df283bf7", "title": "AI Alignment Podcast: Cooperative Inverse Reinforcement Learning", "url": "https://futureoflife.org/2019/01/17/cooperative-inverse-reinforcement-learning-with-dylan-hadfield-menell/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lucas Perry and Dylan Hadfield-Menell"], "summaries": ["Dylan puts forward his conception of Cooperative Inverse Reinforcement Learning as a definition of what it means for a human-AI system to be rational, given the information bottleneck between a human's preferences and an AI's observations. He notes that there are some clear mismatches between this problem and reality, such as the CIRL assumption that humans have static preferences, and how fuzzy the abstraction of \"rational agents with utility functions\" becomes in the context of agents with bounded rationality. Nevertheless, he claims that this is a useful unifying framework for thinking about AI safety.\n\nDylan argues that the process by which a robot learns to accomplish tasks is best described not just as maximising an objective function but instead in a way which includes the system designer who selects and modifies the optimisation algorithms, hyperparameters, etc. In fact, he claims, it doesn't make sense to talk about how well a system is doing without talking about the way in which it was instructed and the type of information it got. In CIRL, this is modeled via the combination of a \"teaching strategy\" and a \"learning strategy\". The former can take many forms: providing rankings of options, or demonstrations, or binary comparisons, etc. Dylan also mentions an extension of this in which the teacher needs to learn their own values over time. This is useful for us because we don't yet understand the normative processes by which human societies come to moral judgements, or how to integrate machines into that process."], "venue": "FLI Website", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #42", "newsletter_category": "Learning human intent"}
{"id": "2e2b23bd46997464d93edaed100460cc", "title": "Scalable agent alignment via reward modeling", "url": "https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Jan Leike"], "summaries": ["This blog post and the [associated paper](https://arxiv.org/abs/1811.07871) outline a research direction that DeepMind's AGI safety team is pursuing. The key idea is to learn behavior by learning a reward and a policy simultaneously, from human evaluations of outcomes, which can scale to superhuman performance in tasks where evaluation is easier than demonstration. However, in many cases it is hard for humans to evaluate outcomes: in this case, we can train simpler agents using reward modeling that can assist the human in evaluating outcomes for the harder task, a technique the authors call recursive reward modeling. For example, if you want to train an agent to write a fantasy novel, it would be quite expensive to have a human evaluate outcomes, i.e. rate how good the produced fantasy novels are. We could instead use reward modeling to train agents that can produce plot summaries, assess prose quality and character development, etc. which allows a human to assess the fantasy novels. There are several research challenges, such as what kind of feedback to get, making it sufficiently sample efficient, preventing reward hacking and unacceptable outcomes, and closing the reward-result gap. They outline several promising approaches to solving these problems."], "venue": "DeepMind Safety Blog", "opinion": "The proposal sounds to me like a specific flavor of narrow value learning, where you learn reward functions to accomplish particular tasks, rather than trying to figure out the \"true human utility function\". The recursive aspect is similar to [iterated amplification](https://blog.openai.com/amplifying-ai-training/) and [debate](https://blog.openai.com/debate/). Iterated amplification and debate can be thought of as operating on a tree of arguments, where each node is the result of considering many child nodes (the considerations that go into the argument). Importantly, the child nodes are themselves arguments that can be decomposed into smaller considerations. Iterated amplification works by learning how to compose and decompose nodes from children, while debate works by having humans evaluate a particular path in the argument tree. Recursive reward modeling instead uses reward modeling to train agents that can help _evaluate outcomes_ on the task of interest. This seems less recursive to me, since the subagents are used to evaluate outcomes, which would typically be a different-in-kind task than the task of interest. This also still requires the tasks to be fast -- it is not clear how to use recursive reward modeling to eg. train an agent that can teach math to children, since it takes days or months of real time to even produce outcomes to evaluate. These considerations make me a bit less optimistic about recursive reward modeling, but I look forward to seeing future work that proves me wrong.\n\nThe post also talks about how reward modeling allows us to separate what to do (reward) from how to do it (policy). I think it is an open question whether this is desirable. 
[Past work](https://arxiv.org/abs/1806.01946) found that the reward generalized somewhat (whereas policies typically don't generalize at all), but this seems relatively minor. For example, rewards inferred using deep variants of inverse reinforcement learning often don't generalize. Another possibility is that the particular structure of \"policy that optimizes a reward\" provides a useful inductive bias that makes things easier to learn. It would probably also be easier to inspect a specification of \"what to do\" than to inspect learned behavior. However, these advantages are fairly speculative and it remains to be seen whether they pan out. There are also practical advantages: any advances in deep RL can immediately be leveraged, and reward functions can often be learned much more sample efficiently than behavior, reducing requirements on human labor. On the other hand, this design \"locks in\" that the specification of behavior must be a reward function. I'm not a fan of reward functions because they're so unintuitive for humans to work with -- if we could have agents that work with natural language, I suspect I do not want the natural language to be translated into a reward function that is then optimized.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #34", "newsletter_category": "Learning human intent"}
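The basic (non-recursive) reward-modeling step that the recursive scheme builds on can be sketched in a few lines: fit a reward model to human comparisons of outcomes with the standard Bradley-Terry logistic loss, then hand the learned reward to an RL algorithm. The architecture, feature dimension, and dummy data below are illustrative assumptions, not details from the paper.

```python
# Sketch of the basic reward-modeling step: fit a reward model to human
# comparisons of outcomes with the Bradley-Terry logistic loss.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(preferred, rejected):
    r_pref = reward_model(preferred).squeeze(-1)
    r_rej = reward_model(rejected).squeeze(-1)
    # Model: P(human prefers A over B) = sigmoid(r_A - r_B)
    return -torch.nn.functional.logsigmoid(r_pref - r_rej).mean()

preferred, rejected = torch.randn(32, 16), torch.randn(32, 16)  # dummy outcome features
for _ in range(100):
    optimizer.zero_grad()
    loss = preference_loss(preferred, rejected)
    loss.backward()
    optimizer.step()
print(float(loss))  # the learned reward would then be optimized by an RL agent
```

The recursive part of the scheme would then reuse agents trained this way to help the human produce the comparisons for harder tasks.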
{"id": "dfe34bcce0af4b133f6ad24b015f8944", "title": "Deep Imitative Models for Flexible Inference, Planning, and Control", "url": "http://arxiv.org/abs/1810.06544", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Nicholas Rhinehart", "Rowan McAllister", "Sergey Levine"], "summaries": ["It's hard to apply deep RL techniques to autonomous driving, because we can't simply collect a large amount of experience with collisions in order to learn. However, imitation learning is also hard, because as soon as your car deviates from the expert trajectories that you are imitating, you are out of distribution, and you could make more mistakes, leading to accumulating errors until you crash. Instead, we can model the expert's behavior, so that we can tell when we are moving out of distribution, and take corrective action.\n\nThey split up the problem into three different stages. First, they generate a set of _waypoints_ along the path to be followed, which are about 20m away from each other, by using A* search on a map. Next, they use model-based planning using an imitative model to generate a plan (sequence of states) that would take the car to the next waypoint. Finally, they use a simple PID controller to choose low-level actions that keep the car on target towards the next state in the plan.\n\nThe key technical contribution is with the imitative model, which is a probabilistic model P(s_{1:T}, G, φ), where φ is the current observation (eg. LIDAR), s_{1:T} is the planned trajectory, and G is a goal. We can learn P(s_{1:T} | φ) from expert demonstrations. The goal G can be anything for which you can write down a specification P(G | s_{1:T}, φ). For example, if you simply want to reach a waypoint, you can use the normal distribution on the distance between the final state s_T and the waypoint. You can also incorporate a hand-designed cost on each state.\n\nThey evaluate in simulation on a static world (so no pedestrians, for example). They show decent transfer from one map to a second map, and also that they can avoid artificially introduced potholes at test time (despite not seeing them at training time), simply by adding a cost on states over a pothole (which they can take into account because they are performing model-based planning)."], "venue": "arXiv", "opinion": "I really like this paper, it showcases the benefits of both model-based planning and imitation learning. Since the problem has been decomposed into a predictive model, a goal G, and a planner, we can edit G directly to get new behavior at test time without any retraining (as they demonstrate with the pothole experiment). At the same time, they can get away with not specifying a full reward function, as many features of good driving, like passenger comfort and staying in the correct lane, are learned simply by imitating an expert.\n\nThat said, they initially state that one of their goals is to learn from offline data, even though offline data typically has no examples of crashes, and \"A model ignorant to the possibility of a crash cannot know how to prevent it\". I think the idea is that you never get into a situation where you could get in a crash, because you never deviate from expert behavior since that would have low P(s_{1:T} | φ). This is better than model-based planning on offline data, which would consider actions that lead to a crash and have no idea what would happen, outputting garbage. 
However, it still seems that situations could arise where a crash is imminent, which don't arise much (if at all) in the training data, and the car fails to swerve or brake hard, because it hasn't seen enough data.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #29", "newsletter_category": "Learning human intent"}
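The planning step described above (pick the trajectory that is both expert-like and goal-reaching) amounts to maximizing log p(trajectory | observation) + log p(goal | trajectory). The sketch below uses stand-in scoring functions for the learned imitative model and the hand-specified goal likelihood; a pothole penalty would simply be another additive term in `log_p_goal`.

```python
# Sketch of the plan-scoring rule: among candidate trajectories, pick the one
# maximizing log p(trajectory | observation) + log p(goal | trajectory).
import numpy as np

def log_p_expert(trajectory, observation):
    # Toy stand-in for the learned model (ignores the observation): prefer smooth plans.
    return -np.sum(np.diff(trajectory, axis=0) ** 2)

def log_p_goal(trajectory, waypoint, sigma=1.0):
    # Gaussian likelihood of ending near the next waypoint.
    return -np.sum((trajectory[-1] - waypoint) ** 2) / (2 * sigma ** 2)

def plan(candidates, observation, waypoint):
    scores = [log_p_expert(t, observation) + log_p_goal(t, waypoint) for t in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
waypoint = np.array([5.0, 0.0])
candidates = [np.cumsum(rng.normal(scale=0.5, size=(10, 2)), axis=0) for _ in range(64)]
print(plan(candidates, observation=None, waypoint=waypoint)[-1])  # endpoint of the selected plan, pulled toward the waypoint
```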
{"id": "bf49e4e7da63d0686aaf1ac5ddbbd9ce", "title": "Learning Generalizable Robotic Reward Functions from \"In-The-Wild\" Human Videos", "url": "https://sites.google.com/view/dvd-human-videos", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Annie S. Chen", "Suraj Nair", "Chelsea Finn"], "summaries": ["This work demonstrates a method that learns a _generalizable multi-task reward function_ in the context of robotic manipulation; at deployment, this function can be conditioned on a human demonstration of an unseen task to generate reward signals for the robot, even in new environments.\n\nA key insight here was to train a discriminative model that learned whether two given video clips were performing the same actions. These clips came from both a (large) dataset of human demonstrations and a relatively smaller set of robot expert trajectories, and each clip was labelled with a task-id. This training pipeline thus leveraged huge quantities of extant human behaviour from a diversity of viewpoints to learn a metric of 'functional similarity' between pairs of videos, independent of whether they were executed by human or machine.\n\nOnce trained, this model (called the 'Domain-agnostic Video Discriminator' or DVD) can determine if a candidate robotic behaviour is similar to a desired human-demonstrated action. Such candidates are drawn from an action-conditioned video predictor, and the best-scoring action sequence is selected for execution on the (simulated or real) robot."], "venue": "RSS 2021", "opinion": "Performance increased with the inclusion of human data, even that from unrelated tasks, so one intuition I updated on was \"More data is better, even if it's not perfect\". This also feels related to \"Data as regularization\": to some extent, noisy data combats model overconfidence, and perhaps this would play an important role in aligning future systems.\n\nAnother thing I like about such pipeline papers is the opportunity to look for where systems might break. For example, in this work, the robot does actually need (prior) experience in the test environments with which to train the video predictor to be able to generate candidate solutions at test time. So in spite of the given result -- that DVD itself needs limited robot trajectories and no data from the test environments -- there's a potential point-of-failure far sooner in the pipeline, where if the robot did not have sufficient _background_ experience with diverse situations, it might not provide _any_ feasible candidate actions for DVD's evaluation.", "highlight": true, "read_more": "Paper", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #161", "newsletter_category": "Learning human intent"}
{"id": "e64fbde96f52d655cff132525f13aefa", "title": "BASALT: A Benchmark for Learning from Human Feedback", "url": "https://bair.berkeley.edu/blog/2021/07/08/basalt/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Rohin Shah", "Cody Wild", "Steven H. Wang", "Neel Alex", "Brandon Houghton", "William Guss", "Sharada Mohanty", "Anssi Kanervisto", "Stephanie Milani", "Nicholay Topin", "Pieter Abbeel", "Stuart Russell", "Anca Dragan"], "summaries": ["A typical argument for AI risk, given in <@Human Compatible@>(@Human Compatible: Artificial Intelligence and the Problem of Control@), is that current AI systems treat their specifications as definite and certain, even though they are typically misspecified. This state of affairs can lead to the agent pursuing <@instrumental subgoals@>(@The Basic AI Drives@). To solve this, we might instead build AI systems that continually learn the objective from human feedback. This post and paper (on which I am an author) present the MineRL BASALT competition, which aims to promote research on algorithms that learn from human feedback.\n\nBASALT aims to provide a benchmark with tasks that are realistic in the sense that (a) it is challenging to write a reward function for them and (b) there are many other potential goals that the AI system “could have” pursued in the environment. Criterion (a) implies that we can’t have automated evaluation of agents (otherwise that could be turned into a reward function) and so suggests that we use human evaluation of agents as our ground truth. Criterion (b) suggests choosing a very “open world” environment; the authors chose Minecraft for this purpose. They provide task descriptions such as “create a waterfall and take a scenic picture of it”; it is then up to researchers to create agents that solve this task using any method they want. Human evaluators then compare two agents against each other and determine which is better. Agents are then given a score using the [TrueSkill system](https://en.wikipedia.org/wiki/TrueSkill).\n\nThe authors provide a number of reasons to prefer the BASALT benchmark over more traditional benchmarks like Atari or MuJoCo:\n\n1. In Atari or MuJoCo, there are often only a few reasonable goals: for example, in Pong, you either hit the ball back, or you die. If you’re testing algorithms that are meant to learn what the goal is, you want an environment where there could be many possible goals, as is the case in Minecraft.\n2. There’s lots of Minecraft videos on YouTube, so you could test a “GPT-3 for Minecraft” approach.\n3. The “true reward function” in Atari or MuJoCo is often not a great evaluation: for example, a Hopper policy trained to stand still using a constant reward gets 1000 reward! Human evaluations should not be subject to the same problem.\n4. Since the tasks were chosen to be inherently fuzzy and challenging to formalize, researchers are allowed to take whatever approach they want to solving the task, including “try to write down a reward function”. In contrast, for something like Atari or MuJoCo, you need to ban such strategies. The only restriction is that researchers cannot extract additional state information from the Minecraft simulator.\n5. 
Just as we’ve <@overestimated few-shot learning capabilities@>(@True Few-Shot Learning with Language Models@) by tuning prompts on large datasets of examples, we might also be overestimating the performance of algorithms that learn from human feedback because we tune hyperparameters on the true reward function. Since BASALT doesn’t have a true reward function, this is much harder to do.\n6. Since Minecraft is so popular, it is easy to hire Minecraft experts, allowing us to design algorithms that rely on expert time instead of just end user time.\n7. Unlike Atari or MuJoCo, BASALT has a clear path to scaling up: the tasks can be made more and more challenging. In the long run, we could aim to deploy agents on public multiplayer Minecraft servers that follow instructions or assist with whatever large-scale project players are working on, all while adhering to the norms and customs of that server."], "venue": "NeurIPS 2021 Competition Track", "opinion": "You won’t be surprised to hear that I’m excited about this benchmark, given that I worked on it. While we listed a bunch of concrete advantages in the post above, I think many (though not all) of the advantages come from the fact that we are trying to mimic the situation we face in the real world as closely as possible, so there’s less opportunity for Goodhart’s Law to strike. For example, later in this newsletter we’ll see that synthetically generated demos are not a good proxy for human demos. Even though this is the norm for existing benchmarks, and we didn’t intentionally try to avoid this problem, BASALT (mostly) avoids it. With BASALT you would have to go pretty far out of your way to get synthetically generated demos, because by design the tasks are hard to complete synthetically, and so you _have_ to use human demos.\n\nI’d encourage readers to [participate in the competition](https://www.aicrowd.com/challenges/neurips-2021-minerl-basalt-competition), because I think it’s especially good as a way to get started with ML research. It’s a new benchmark, so there’s a lot of low-hanging fruit in applying existing ideas to the benchmark, and in identifying new problems not present in previous benchmarks and designing solutions to them. It’s also pretty easy to get started: the BC baseline is fairly straightforward and takes a couple of hours to be trained on a single GPU. (That’s partly because BC doesn’t require environment samples; something like <@GAIL@>(@Generative Adversarial Imitation Learning@) would probably take a day or two to train instead.)", "highlight": true, "read_more": "Paper: The MineRL BASALT Competition on Learning from Human Feedback", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #155", "newsletter_category": "Learning human intent"}
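On the evaluation side, BASALT turns pairwise human judgments into per-agent scores with TrueSkill. As a self-contained stand-in, the sketch below uses a simple Elo update instead, which shares the key property of aggregating win/loss comparisons into scalar ratings; the agent names and judgments are made up.

```python
# Stand-in for the score aggregation step: BASALT uses TrueSkill on pairwise
# human judgments; a simple Elo update is used here to keep the sketch
# dependency-free.
def elo_update(rating_winner, rating_loser, k=32.0):
    expected_win = 1.0 / (1.0 + 10 ** ((rating_loser - rating_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return rating_winner + delta, rating_loser - delta

ratings = {"agent_a": 1000.0, "agent_b": 1000.0, "agent_c": 1000.0}
judgments = [("agent_a", "agent_b"), ("agent_a", "agent_c"), ("agent_b", "agent_c")]
for winner, loser in judgments:        # each tuple: (winner, loser) on one task
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])
print(ratings)
```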
{"id": "3be61dd12b2b7474211392ba68ff335a", "title": "One-Shot Imitation from Watching Videos", "url": "http://bair.berkeley.edu/blog/2018/06/28/daml/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tianhe Yu and Chelsea Finn"], "summaries": ["Can we get a robot to learn a task by watching a human do it? This is very different from standard imitation learning. First, we want to do it with a single demonstration, and second, we want to do it by _watching a human_ -- that is, we're learning from a video of a human, not a trajectory where the robot actions are given to us. Well, first consider how we could do this if we have demonstrations from a teleoperated robot. In this case, we do actually have demonstrations in the form of trajectories, so normal imitation learning techniques (behavioral cloning in this case) work fine. We can then take this loss function and use it with [MAML](http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/) to learn from a large dataset of tasks and demonstrations how to perform a new task given a single demonstration. But this still requires the demonstration to be collected by teleoperating the robot. What if we want to learn from a video of a human demonstrating? They propose learning a _loss function_ that given the human video provides a loss from which gradients can be calculated to update the policy. Note that at training time there are still teleoperation demonstrations, so the hard task of learning how to perform tasks is done then. At test time, the loss function inferred from the human video is primarily used to identify which objects to manipulate."], "venue": "BAIR Blog", "opinion": "This is cool, it actually works on a real robot, and it deals with the issue that a human and a robot have different action spaces.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "Some form of meta-learning (ideally [MAML](http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/)).", "converted_with": "python", "newsletter_number": "AN #14", "newsletter_category": "Learning human intent"}
{"id": "d01b237715799bd06817ae929bd906d8", "title": "Grounding Language in Play", "url": "https://language-play.github.io/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Corey Lynch", "Pierre Sermanet"], "summaries": ["This paper presents a new approach to learning to follow natural language human instruction in a robotics setting. It builds on similar ideas to <@Learning Latent Plans from Play@>, in that it uses unsupervised \"play\" data (trajectories of humans playing on the robot with no goal in mind).\n\nThe paper combines several ideas to enable training a policy which can follow natural language instructions with only limited human annotations. \n* In *Hindsight Instruction Pairing*, human annotators watch small trajectories from the play data, and label them with the instruction which is being completed in the clip. This instruction can take any form, and means we don't need to choose the instructions and ask humans to perform specific tasks.\n* *Multicontext Imitation Learning* is a method designed to allow goal-conditioned policies to be learned with multiple different types of goals. For example, we can have lots of example trajectories where the goal is an end state image (as these can be generated automatically without humans), and just a small amount of example trajectories where the goal is a natural language instruction (gathered using *Hindsight Instruction Pairing*). The approach is to learn a goal embedding network for each type of goal specification, and a single shared policy which takes the goal embedding as input.\n\nCombining these two methods enables them to train a policy and embedding networks end to end using imitation learning from a large dataset of (trajectory, image goal) pairs and a small dataset of (trajectory, natural language goal) pairs. The policy can follow very long sequences of natural language instructions in a fairly complex grasping environment with a variety of buttons and objects. Their method performs better than the Learning from Play (LfP) method, even though LfP uses a goal image as the goal conditioning, instead of a natural language instruction.\n\nFurther, they propose that instead of learning the goal embedding for the natural language instructions, they use a pretrained large language model to produce the embeddings. This improves the performance of their method over learning the embedding from scratch, which the authors claim is the first example of the knowledge in large language models being transferred and improving performance in a robotics domain. This model also performs well when they create purposefully out of distribution natural language instructions (i.e. with weird synonyms, or google-translated from a different language)."], "venue": "arXiv", "opinion": "I think this paper shows two important things:\n\n1. Embedding the natural language instructions in the same space as the image conditioning works well, and is a good way of extending the usefulness of human annotations.\n\n2. 
Large pretrained language models can be used to improve the performance of language-conditioned reinforcement learning (in this case imitation learning) algorithms and policies.\n\nMethods which enable us to scale human feedback to complex settings are useful, and this method seems like it could scale well, especially with the use of pretrained large language models which might reduce the amount of language annotations needed further.", "highlight": true, "read_more": "", "summarizer": "Robert", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #103", "newsletter_category": "Learning human intent"}
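The multicontext structure described above (one encoder per goal type feeding a shared latent goal space, and a single goal-conditioned policy) is easy to sketch; all dimensions and module choices below are illustrative assumptions rather than the paper's architecture.

```python
# Sketch of multicontext imitation learning: per-goal-type encoders into a
# shared latent goal space, plus one shared goal-conditioned policy.
import torch
import torch.nn as nn

class MulticontextPolicy(nn.Module):
    def __init__(self, obs_dim=64, img_goal_dim=64, lang_goal_dim=512,
                 latent_dim=32, act_dim=8):
        super().__init__()
        self.image_goal_encoder = nn.Linear(img_goal_dim, latent_dim)
        # In the paper's best variant the language embedding comes from a
        # pretrained language model; a trainable projection stands in here.
        self.language_goal_encoder = nn.Linear(lang_goal_dim, latent_dim)
        self.policy = nn.Sequential(nn.Linear(obs_dim + latent_dim, 128),
                                    nn.ReLU(), nn.Linear(128, act_dim))

    def forward(self, obs, goal, goal_type):
        encoder = (self.image_goal_encoder if goal_type == "image"
                   else self.language_goal_encoder)
        z = encoder(goal)
        return self.policy(torch.cat([obs, z], dim=-1))

model = MulticontextPolicy()
obs = torch.randn(4, 64)
print(model(obs, torch.randn(4, 64), "image").shape)      # large auto-labelled goal-image batch
print(model(obs, torch.randn(4, 512), "language").shape)  # small human-labelled instruction batch
```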
{"id": "15170740017c41bfa5c12e0b5da89bfb", "title": "Pitfalls of learning a reward function online", "url": "http://arxiv.org/abs/2004.13654", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Stuart Armstrong", "Jan Leike", "Laurent Orseau", "Shane Legg"], "summaries": ["It can be dangerous to learn the metric that you are trying to optimize: if you don't set it up correctly, you may end up incentivizing the agent to \"update in a particular direction\" in the metric learning for the sake of future optimization (a point previously made in [Towards Interactive Inverse Reinforcement Learning](https://jan.leike.name/publications/Towards%20Interactive%20Inverse%20Reinforcement%20Learning%20-%20Armstrong,%20Leike%202016.pdf)). This paper analyzes the problems that can arise when an agent simultaneously learns a reward function, and optimizes that reward function.\n\nThe agent may have an incentive to \"rig\" the reward learning process, such that it finds a reward that is easy to optimize. For example, consider a student Sandra who must figure out the deadline and evaluation criteria for a project from a teacher Trisha. Sandra expects that if she asks Trisha when the deadline is, she will say that the deadline is later this week. So, Sandra might cleverly ask, \"Is the project due next week, or the week after\", to which Trisha might respond \"next week\". In this way, Sandra can rig the deadline-learning process in order to obtain a more favorable deadline.\n\nWorse, in such scenarios the need to rig the learning process can destroy value for _every_ reward function you are considering. For example, let's suppose that if Trisha couldn't be manipulated, Sandra's optimal policy would be to start the project today, _regardless_ of when the actual deadline is. However, given that Trisha _can_ be manipulated, Sandra will spend today manipulating Trisha into setting a later deadline -- an action that seems clearly suboptimal from the perspective of any fixed deadline. The paper describes this as _sacrificing reward with certainty_.\n\nTo avoid such situations, we need _unriggable_ learning processes, that is, ones where at all times, the expected final learned reward (deadline) is independent of the agent's (Sandra's) policy. This unriggability property is nearly equivalent to the property of _uninfluencability_, in which we must be able to posit some background variables in the environment such that the learning process can be said to be \"learning\" these variables. Technically, an unriggable process need not be uninfluenceable, though it usually is (see the paper for details).\n\nHowever, these properties only constrain the _expectation over environments_ of the final reward distribution: it doesn't prevent the agent from somehow shuffling around reward functions to be matched with suitable environments. For example, without knowing which projects are easy or hard, Sandra could manipulate Trisha into giving early deadlines for easy projects, and late deadlines for hard projects, in a manner that preserved the _distribution_ over early and late deadlines; this would satisfy the unriggable property (and probably also the uninfluenceable property, depending on the exact formalization).\n\nThe authors demonstrate these problems in a simple gridworld example. 
They also point out that there's a simple way to make any learning process uninfluenceable: choose a specific policy π that gathers information about the reward, and then define the new learning process to be \"whatever the original learning process would have said if you executed π\"."], "venue": "arXiv", "opinion": "I would explain this paper's point somewhat differently than the paper does. Consider an AI system in which we build in a prior over rewards and an update rule, and then have it act in the world. At the end of the trajectory, it is rewarded according to the expected reward of the trajectory under the inferred posterior over rewards. Then, the AI system is incentivized to choose actions under which the resulting posterior is easy to maximize.\n\nThis doesn't require the reward function to be ambiguous; it just requires that the update rule isn't perfect. For example, imagine that Alice has a real preference for apples over bananas, and you use the update rule \"if Alice eats an apple, infer that she likes apples; if Alice eats a banana, infer that she likes bananas\". The robot finds it easier to grasp the rigid apple, and so can get higher expected reward in the worlds where Alice likes apples. If you train a robot in the manner above, then the robot will learn to throw away the bananas, so that Alice's only choice is an apple (that we assume she then eats), allowing the robot to \"infer\" that Alice likes apples, which it can then easily maximize. This sort of problem could happen in most current reward learning setups, if we had powerful enough optimizers.\n\nIt seems to me that the problem is that you are training the actor, but not training the update rule, and so the actor learns to \"trick\" the update rule. Instead, it seems like we should train both. This is kind of what happens with <@assistance games / CIRL@>(@Cooperative Inverse Reinforcement Learning@), in which you train a policy to maximize expected reward under the _prior_, and so the policy is incentivized to take the best information gathering actions (which, if you squint, is like \"training to update well\"), and to maximize what it thinks is the true reward. Of course, if your prior / update rule within the game are misspecified, then bad things can happen. See also Stuart's reactions [here](https://www.alignmentforum.org/posts/gbuwgyYG9WvtsErki/how-should-ais-update-a-prior-over-human-preferences) and [here](https://www.alignmentforum.org/posts/EYEkYX6vijL7zsKEt/reward-functions-and-updating-assumptions-can-hide-a), as well as my comments on those posts.", "highlight": true, "read_more": "Blog post: Learning and manipulating learning", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #100", "newsletter_category": "Learning human intent"}
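The apple/banana story in the opinion above can be turned into a small calculation; the numbers below are mine. The point is that a training signal defined as "return under the inferred reward" scores the manipulative policy higher than the policy that respects Alice's choice.

```python
# Toy numbers for the apple/banana story: the training signal is "return under
# the reward inferred by a naive update rule", which rewards steering the
# inference toward whichever preference is easiest to satisfy.
SERVE_SUCCESS = {"apple": 0.9, "banana": 0.5}   # apples are easier to grasp
TRUE_PREFERENCE = "banana"                       # what Alice actually likes
HORIZON = 10

def episode(robot_policy):
    if robot_policy == "respect_choice":
        eaten = TRUE_PREFERENCE                  # Alice picks what she likes
    else:                                        # "throw_away_bananas"
        eaten = "apple"                          # the only option left
    inferred_preference = eaten                  # the naive update rule
    training_reward = HORIZON * SERVE_SUCCESS[inferred_preference]
    true_reward = HORIZON * SERVE_SUCCESS[eaten] if eaten == TRUE_PREFERENCE else 0.0
    return training_reward, true_reward

for policy in ("respect_choice", "throw_away_bananas"):
    print(policy, episode(policy))
# The manipulative policy wins on the training signal (9.0 vs 5.0) while being
# worse for Alice -- exactly the incentive the paper is pointing at.
```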
{"id": "3e47cba4e64f0052104d80ec30d12e7c", "title": "Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition", "url": "http://arxiv.org/abs/1805.11686", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Justin Fu", "Avi Singh", "Dibya Ghosh", "Larry Yang", "Sergey Levine"], "summaries": ["For reinforcement learning, we can create a probabilistic model in which there are events for the state the agent is in and the action the agent takes. We can also add events e_t corresponding roughly to \"the agent achieved something good in timestep t\". We set P(e_t = 1 | s_t, a_t) to be exp(R(s_t, a_t)). Then, we can simply set all of the e_t to 1, and infer the likely state-action pairs that would have led to that. This leads to maximum entropy reinforcement learning, which in the setting of deterministic dynamics is equivalent to soft Q-learning. The authors then note that in this setup, the reward corresponds to the log probability of event e_t happening. So, instead of specifying a reward function, we can instead define binary events that we care about, model their probability of occurring, and then find the actions that maximize the likelihood of the event occurring. The authors derive backup equations for three kinds of queries -- ALL (the event must happen every timestep), AT (the event happens at a particular timestep), and ANY (the event happens on some timestep).\n\nIn this setup, specifying a reward function corresponds to explicitly writing down probabilities P(e | s, a). Of course, we can learn these probabilities from data using standard ML techniques, and this now corresponds to learning a reward function! If we use the ALL query, this corresponds to inverse reinforcement learning. However, by using the AT or ANY query instead, we only require examples of the event e_t for a single s_t and a_t -- for example, images that represent a goal state. They derive an algorithm for this query and show experimentally that this framework can learn event probabilities that lead to good behavior on Mujoco environments."], "venue": "NIPS 2018", "opinion": "I like this framework for a couple of reasons. First, it allows for multiple kinds of queries, which correspond to different ways of specifying tasks, increasing the number of types of inputs we can give in order to communincate our intent to an AI. Concretely, the framework can handle both demonstrations (as in IRL) and examples of goal states. Second, it reduces learning a reward function to learning the probabilities of events, which has been studied in much more depth in the machine learning community and so will hopefully work better.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #10", "newsletter_category": "Learning human intent"}
{"id": "c1032759fa5dc671f365ab7ff38c554e", "title": "Meta-Inverse Reinforcement Learning with Probabilistic Context Variables", "url": "http://arxiv.org/abs/1909.09314", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lantao Yu*", "Tianhe Yu*", "Chelsea Finn", "Stefano Ermon"], "summaries": ["This work explores improving performance on multi-task inverse reinforcement learning in a single-shot setting by extending <@Adversarial Inverse Reinforcement Learning@>(@Learning Robust Rewards with Adversarial Inverse Reinforcement Learning@) with \"latent context variables\" that condition the learned reward function. The paper makes two notable contributions: 1) It details an algorithm to simultaneously learn a flexible reward function and a conditional policy with competitive few-shot generalization abilities from expert demonstrations of multiple related tasks _without_ task specifications or identifiers; 2) The authors empirically demonstrate strong performance of a policy trained on the inferred reward of a structurally similar task with modified environmental dynamics, claiming that in order to succeed \"the agent must correctly infer the underlying goal of the task instead of simply mimicking the demonstration\"."], "venue": "NeurIPS 2019", "opinion": "Since this work \"integrates ideas from context-based meta-learning, deep latent variable generative models, and maximum entropy inverse RL\" and covers the relevant mathematics, it is an involved, if rewarding, study into multi-task IRL. I am convinced that this is a big step forward for IRL, but I'd be interested in seeing comparisons on setups that are more complicated.\n\n'Data efficiency' is implied as a desirable quality, and the paper makes a case that they learn from a limited number demonstrations at meta-test time. However, it does not specify how many demonstrations were required for each task during _meta-training_. Additionally, for two environments, _tens of millions_ of environment interactions were required, which is entirely infeasible for real systems.", "highlight": false, "read_more": "", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #87", "newsletter_category": "Learning human intent"}
{"id": "91b51c0af7fd9a173a1c6da2c51f7102", "title": "Deep Bayesian Reward Learning from Preferences", "url": "http://arxiv.org/abs/1912.04472", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Daniel S. Brown", "Scott Niekum"], "summaries": ["Bayesian inverse reinforcement learning (IRL) is ideal for safe imitation learning since it allows uncertainty in the reward function estimator to be quantified. This approach requires thousands of likelihood estimates for proposed reward functions. However, each likelihood estimate requires training an agent according to the hypothesized reward function. Predictably, such a method is computationally intractable for high dimensional problems.\n\n**In this paper, the authors propose Bayesian Reward Extrapolation (B-REX), a scalable preference-based Bayesian reward learning algorithm.** They note that in this setting, a likelihood estimate that requires a loop over all demonstrations is much more feasible than an estimate that requires training a new agent. So, they assume that they have a set of _ranked_ trajectories, and evaluate the likelihood of a reward function by its ability to reproduce the preference ordering in the demonstrations. To get further speedups, they fix all but the last layer of the reward model using a pretraining step: the reward of a trajectory is then simply the dot product of the last layer with the features of the trajectory as computed by all but the last layer of the net (which can be precomputed and cached once).\n\nThe authors test B-REX on pixel-level Atari games and show competitive performance to <@T-REX@>(@Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations@), a related method that only computes the MAP estimate. Furthermore, the authors can create confidence intervals for performance since they can sample from the reward distribution."], "venue": "Workshop on Safety and Robustness in Decision Making at NeurIPS 2019", "opinion": "The idea of using preference orderings (Bradley-Terry) to speed up the posterior probability calculation was ingenious. While B-REX isn't strictly better than T-REX in terms of rewards achieved, the ability to construct confidence intervals for performance is a major benefit. My takeaway is that Bayesian IRL is getting more efficient and may have good potential as a practical approach to safe value learning.", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #86", "newsletter_category": "Learning human intent"}
{"id": "1a65d4be0e62cd164f2a63406c11c3db", "title": "Learning human objectives by evaluating hypothetical behaviours", "url": "https://deepmind.com/blog/article/learning-human-objectives-by-evaluating-hypothetical-behaviours", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Siddharth Reddy", "Anca D. Dragan", "Sergey Levine", "Shane Legg", "Jan Leike"], "summaries": ["[Deep RL from Human Preferences](https://deepmind.com/blog/learning-through-human-feedback/) updated its reward model by collecting human comparisons on on-policy trajectories where the reward model ensemble was most uncertain about what the reward should be. However, we want our reward model to be accurate off policy as well, even in unsafe states. To this end, we would like to train our reward model on _hypothetical_ trajectories. This paper proposes learning a generative model of trajectories from some dataset of environment dynamics, such as safe expert demonstrations or rollouts from a random policy, and then finding trajectories that are \"useful\" for training the reward model. They consider four different criteria for usefulness of a trajectory: _uncertain rewards_ (which intuitively are areas where the reward model needs training), _high rewards_ (which could indicate reward hacking), _low rewards_ (which increases the number of unsafe states that the reward model is trained on), and _novelty_ (which covers more of the state space). Once a trajectory is generated, they have a human label it as good, neutral, or unsafe, and then train the reward model on these labels.\n\nThe authors are targeting an agent that can _explore safely_: since they already have a world model and a reward model, they use a model-based RL algorithm to act in the environment. Specifically, to act, they use gradient descent to optimize a trajectory in the latent space that maximizes expected rewards under the reward model and world model, and then take the first action of that trajectory. They argue that the world model can be trained on a dataset of safe human demonstrations (though in their experiments they use rollouts from a random policy), and then since the reward model is trained on hypothetical behavior and the model-based RL algorithm doesn't need any training, we get an agent that acts without us ever getting to an unsafe state."], "venue": "arXiv", "opinion": "I like the focus on integrating active selection of trajectory queries into reward model training, and especially the four different kinds of active criteria that they consider, and the detailed experiments (including an ablation study) on the benefits of these criteria. These seem important for improving the efficiency of reward modeling.\n\nHowever, I don't buy the argument that this allows us to train an agent without visiting unsafe states. In their actual experiments, they use a dataset gathered from a random policy, which certainly will visit unsafe states. If you instead use a dataset of safe human demonstrations, your generative model will only place probability mass on safe demonstrations, and so you'll never generate trajectories that visit unsafe states, and your reward model won't know that they are unsafe. (_Maybe_ your generative model will generalize properly to the unsafe states, but that seems unlikely to me.) 
Such a reward model will either be limited to imitation learning (sticking to the same trajectories as in the demonstrations, and never finding something like AlphaGo's move 37), or it will eventually visit unsafe states.", "highlight": false, "read_more": "Paper: Learning Human Objectives by Evaluating Hypothetical Behavior", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #79", "newsletter_category": "Learning human intent"}
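As a concrete illustration of one of the four criteria, here is a sketch of the "uncertain rewards" query (the functions below are hypothetical stand-ins, not DeepMind's code): candidate trajectories come from some learned generative model, each is scored by the disagreement of a reward-model ensemble, and the most uncertain one is sent to the human for a good / neutral / unsafe label.

```python
import numpy as np

# Hypothetical sketch of the "uncertain rewards" selection criterion.
rng = np.random.default_rng(0)

def sample_candidate_trajectories(n=64, horizon=10, obs_dim=6):
    # Stand-in for decoding latent trajectories from the learned generative model.
    return rng.normal(size=(n, horizon, obs_dim))

def reward_ensemble(trajectories, n_models=5):
    # Stand-in for an ensemble of reward models; returns per-model trajectory returns.
    obs_dim = trajectories.shape[-1]
    weights = rng.normal(size=(n_models, obs_dim))
    return trajectories.sum(axis=1) @ weights.T        # shape (n_traj, n_models)

candidates = sample_candidate_trajectories()
returns = reward_ensemble(candidates)
uncertainty = returns.std(axis=1)                      # ensemble disagreement
best = int(np.argmax(uncertainty))                     # trajectory to show the human
print("querying candidate", best, "with disagreement", round(float(uncertainty[best]), 3))
```

The other three criteria (high reward, low reward, novelty) swap out the scoring line while keeping the same generate-then-label loop.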
{"id": "b42e5fdb3f0f4988d7d78037de9edd73", "title": "Norms, Rewards, and the Intentional Stance: Comparing Machine Learning Approaches to Ethical Training", "url": "https://hrilab.tufts.edu/publications/kasenbergetal18aies.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Daniel Kasenberg", "Thomas Arnold", "Matthias Scheutz"], "summaries": ["This paper argues that _norm inference_ is a plausible alternative to inverse reinforcement learning (IRL) for teaching a system what people want. Existing IRL algorithms rely on the _Markov assumption_: that the next state of the world depends only on the previous state of the world and the action that the agent takes from that state, rather than on the agent’s entire history. In cases where information about the past matters, IRL will either fail to infer the right reward function, or will be forced to make challenging guesses about what past information to encode in each state. By contrast, _norm inference_ tries to infer what (potentially temporal) propositions encode the reward of the system, keeping around only past information that is relevant to evaluating potential propositions. The paper argues that norm inference results in more interpretable systems that generalize better than IRL -- systems that use norm inference can successfully model reward-driven agents, but systems that use IRL do poorly at learning temporal norms."], "venue": "AIES 2018", "opinion": "This paper presents a interesting novel alternative to inverse reinforcement learning and does a good job of acknowledging potential objections. Deciding whether and how to store information about the past seems like an important problem that inverse reinforcement learning has to reckon with. My main concern with norm inference, which the paper mentions, is that optimizing over all possible propositions is in practice extremely slow. I don't anticipate that norm inference will be a performance-tractable strategy unless a lot of computation power is available.\n\n**Rohin's opinion:** The idea of \"norms\" used here is very different from what I usually imagine, as in e.g. <@Following human norms@>. Usually, I think of norms as imposing a constraint upon policies rather than defining an optimal policy, (often) specifying what not to do rather than what to do, and being a property of groups of agents, rather than of a single agent. (See also [this comment](https://www.alignmentforum.org/posts/eBd6WvzhuqduCkYv3/following-human-norms#ujma2pWoH7ibhdog2).) The \"norms\" in this paper don't satisfy any of these properties: I would describe their norm inference as performing IRL with history-dependent reward functions, with a strong inductive bias towards \"logical\" reward functions (which comes from their use of Linear Temporal Logic). Note that some inductive bias is necessary, as without inductive bias history-dependent reward functions are far too expressive, and nothing could be reasonably learned. I think despite how it's written, the paper should be taken not as a denouncement of IRL-the-paradigm, but a proposal for better IRL algorithms that are quite different from the ones we currently have.", "highlight": false, "read_more": "", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #72", "newsletter_category": "Learning human intent"}
{"id": "7dc601cb690562678edb794da9cb1a9c", "title": "Leveraging Human Guidance for Deep Reinforcement Learning Tasks", "url": "http://arxiv.org/abs/1909.09906", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Ruohan Zhang", "Faraz Torabi", "Lin Guan", "Dana H. Ballard", "Peter Stone"], "summaries": ["A core problem in RL is the communication of our goals and prior knowledge to an agent. One common approach to this is imitation learning: the human provides example demonstrations of a task, and the agent learns to mimic them. However, there are some limitations to this approach, such as requiring the human to be capable of the task. This paper outlines five different modalities from which agents can learn: evaluations, preferences, hierarchical feedback, observations, and attention (for example, where humans are looking while solving a task). It then suggests future research directions.\n\nFor this summary, I will focus on the future research directions, but you can read the full paper to understand existing approaches. The first issue is that datasets of human guidance are difficult to capture and depend on many specific factors of the individuals providing guidance. As a result, the paper suggests creating standard datasets to save effort and enable fair comparisons. The second direction is to better understand how humans should teach agents. The literature currently emphasizes progress in learning methods, but improved teaching methods may be just as valuable when learning from human guidance. The last is unifying learning across different input modalities; ideally an agent would be able to learn from many different types of human guidance over different phases of its learning."], "venue": "IJCAI 2019", "opinion": "I think the problem of providing human guidance to agents is a core problem in alignment, and I am glad to see more discussion of that problem. I generally think that this type of broad overview is very valuable for communicating research to those who just want a broad overview of the field and don’t need to know the individual details of each paper. However, I would appreciate if there were more quantitative comparisons of the tradeoffs between different paradigms. The introduction mentions sample efficiency and the large effort required for human labelling, which made me hope for theoretical or empirical comparisons of the different methods with regards to sample efficiency and labelling effort. Since this was lacking, it also left me unclear on what motivated their suggested research directions. Personally, I would be much more excited to pursue a research direction if there were quantitative results showing particular failure modes or negative characteristics of current approaches that motivated that particular approach.\n\n**Rohin's opinion:** This seems like a great survey paper and I like their proposed future directions, especially on learning from different kinds of human guidance, and on improving methods of teaching. While it does seem useful to have datasets of human guidance in order to compare algorithms, this prevents researchers from making improvements by figuring out new forms of guidance not present in the dataset. As a result, I'd be more excited about benchmarks that are evaluated by how much time it takes for Mechanical Turkers to train an agent to complete the task. 
Admittedly, it would be costlier in both time and money for researchers to do such an evaluation.", "highlight": false, "read_more": "", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #71", "newsletter_category": "Learning human intent"}
{"id": "70412ebc51463c4328cda62d01ffb8e7", "title": "Fine-Tuning GPT-2 from Human Preferences", "url": "https://openai.com/blog/fine-tuning-gpt-2/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Daniel M. Ziegler", "Nisan Stiennon", "Jeffrey Wu", "Tom B. Brown", "Alec Radford", "Dario Amodei", "Paul Christiano", "Geoffrey Irving"], "summaries": ["This blog post and its [associated paper](https://arxiv.org/abs/1909.08593) describes the results of several text generation/continuation experiments, where human feedback on initial/older samples was used in the form of a reinforcement learning reward signal to finetune the base 774-million parameter <@GPT-2 language model@>(@Better Language Models and Their Implications@). The key motivation here was to understand whether interactions with humans can help algorithms better learn and adapt to human preferences in natural language generation tasks.\n\nThey report mixed results. For the tasks of continuing text with positive sentiment or physically descriptive language, they report improved performance above the baseline (as assessed by external examiners) after fine-tuning on only 5,000 human judgments of samples generated from the base model. The summarization task required 60,000 samples of _online_ human feedback to perform similarly to a simple baseline, lead-3 - which returns the first three sentences as the summary - as assessed by humans.\n\nSome of the lessons learned while performing this research include 1) the need for better, less ambiguous tasks and labelling protocols for sourcing higher quality annotations, and 2) a reminder that \"bugs can optimize for bad behaviour\", as a sign error propagated through the training process to generate \"not gibberish but maximally bad output\". The work concludes on the note that it is a step towards scalable AI alignment methods such as debate and amplification."], "venue": "OpenAI Blog", "opinion": "It is good to see research on mainstream NLProc/ML tasks that includes discussions on challenges, failure modes and relevance to the broader motivating goals of AI research.\n\nThe work opens up interesting avenues within OpenAI's alignment agenda, for example learning a diversity of preferences (A OR B), or a hierarchy of preferences (A AND B) sequentially without catastrophic forgetting.\n\nIn order to scale, we would want to generate automated labelers through semi-supervised reinforcement learning, to derive the most gains from every piece of human input. The robustness of this needs further empirical and conceptual investigation before we can be confident that such a system can work to form a hierarchy of learners, e.g. in amplification.\n\n**Rohin's opinion:** One thing I particularly like here is that the evaluation is done by humans. This seems significantly more robust as an evaluation metric than any automated system we could come up with, and I hope that more people use human evaluation in the future.", "highlight": false, "read_more": "Paper: Fine-Tuning Language Models from Human Preferences", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #67", "newsletter_category": "Learning human intent"}
{"id": "84bab206dc082b9b39a4b3ec6831a1df", "title": "Cognitive Model Priors for Predicting Human Decisions", "url": "http://arxiv.org/abs/1905.09397", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["David D. Bourgin*", "Joshua C. Peterson*", "Daniel Reichman", "Thomas L. Griffiths", "Stuart J. Russell"], "summaries": ["Human decision making is notoriously difficult to predict, being a combination of expected value calculation and likely-not-fully-enumerated cognitive biases. Normally we could predict well using a neural net with a ton of data, but data about human decision making is expensive and scarce. This paper proposes that we pretrain a neural net on lots of data simulated from theoretical models of human decision making and then finetune on the small real dataset. In effect, we are using the theoretical model as a kind of prior, that provides the neural net with a strong inductive bias. The method achieves better performance than existing theoretical or empirical methods, without requiring feature engineering, both on existing datasets and a new, larger dataset collected via Mechanical Turk."], "venue": "ICML 2019", "opinion": "I am a little cautious to make a strong statement about the importance of this paper, since I don't have as much domain knowledge in cognitive science as I do in machine learning, but overall this \"treat your theoretical model like a generative model and sample from it\" idea seems like an elegant and plausibly more broadly extensible way of incorporating theoretical priors alongside real data.", "highlight": false, "read_more": "", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #59", "newsletter_category": "Learning human intent"}
{"id": "899dfcd86043e108c7d543501270d10b", "title": "Deep Reinforcement Learning from Policy-Dependent Human Feedback", "url": "http://arxiv.org/abs/1902.04257", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Dilip Arumugam", "Jun Ki Lee", "Sophie Saskin", "Michael L. Littman"], "summaries": ["One obvious approach to human-in-the-loop reinforcement learning is to have humans provide an external reward signal that the policy optimizes. [Previous work](https://arxiv.org/abs/1701.06049) noted that humans tend to _correct_ existing behavior, rather than providing an \"objective\" measurement of how good the behavior is (which is what a reward function is). They proposed Convergent Actor-Critic by Humans (COACH), where instead of using human feedback as a reward signal, they use it as the _advantage function_. This means that human feedback is modeled as specifying how good an action is relative to the \"average\" action that the agent would have chosen from that state. (It's an average because the policy is stochastic.) Thus, as the policy gets better, it will no longer get positive feedback on behaviors that it has successfully learned to do, which matches how humans give reinforcement signals.\n\nThis work takes COACH and extends it to the deep RL setting, evaluating it on Minecraft. While the original COACH had an eligibility trace that helps \"smooth out\" human feedback over time, deep COACH requires an eligibility replay buffer. For sample efficiency, they first train an autoencoder to learn a good representation of the space (presumably using experience collected with a random policy), and feed these representations into the control policy. They reward entropy so that the policy doesn't commit to a particular behavior, making it responsive to feedback, but select actions by always picking the action with maximal probability (rather than sampling from the distribution) in order to have interpretable, consistent behavior for the human trainers to provide feedback on. They evaluate on simple navigation tasks in the complex 3D environment of Minecraft, including a task where the agent must patrol the perimeter of a room, which cannot be captured by a state-based reward function."], "venue": "arXiv", "opinion": "I really like the focus on figuring out how humans actually provide feedback in practice; it makes a lot of sense that we provide reinforcement signals that reflect the advantage function rather than the reward function. That said, I wish the evaluation had more complex tasks, and had involved human trainers who were not authors of the paper -- it might have taken an hour or two of human time instead of 10-15 minutes, but would have been a lot more compelling.\n\nBefore continuing, I recommend reading about Simulated Policy Learning in Video Models below. As in that case, I think that you get sample efficiency here by getting a lot of \"supervision information\" from the pixels used to train the VAE, though in this case it's by learning useful features rather than using the world model to simulate trajectories. (Importantly, in this setting we care about sample efficiency _with respect to human feedback_ as opposed to environment interaction.) I think the techniques used there could help with scaling to more complex tasks. 
In particular, it would be interesting to see a variant of deep COACH that alternated between training the VAE with the learned control policy, and training the learned control policy with the new VAE features. One issue would be that as you retrain the VAE, you would invalidate your previous control policy, but you could probably get around that (e.g. by also training the control policy to imitate itself while the VAE is being trained).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #51", "newsletter_category": "Learning human intent"}
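The heart of COACH is easy to state in code: the human's +1/-1 signal is plugged in where the advantage estimate would normally go in a policy-gradient update, so feedback on behavior the policy has already mastered contributes nothing in expectation. Below is a toy sketch on a three-action policy with a hypothetical feedback function; it omits all of the deep COACH machinery (eligibility replay buffer, VAE features, entropy bonus).

```python
import numpy as np

# Toy sketch of the COACH update (not the paper's code): human feedback on the action
# just taken is treated as the advantage in a policy-gradient step.
rng = np.random.default_rng(0)
n_actions = 3
logits = np.zeros(n_actions)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def human_feedback(action):
    # Stand-in for the human trainer: +1 if the action is the desired one, else -1.
    return 1.0 if action == 2 else -1.0

for _ in range(500):
    probs = softmax(logits)
    action = rng.choice(n_actions, p=probs)
    advantage = human_feedback(action)           # feedback interpreted as the advantage
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0                   # d log pi(a) / d logits
    logits += 0.1 * advantage * grad_log_pi

print("final policy:", softmax(logits).round(3))  # mass concentrates on action 2
```

In deep COACH this same update is applied to a network over learned VAE features and smoothed over time with the eligibility replay buffer described above.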
{"id": "566812437ca8caa858fc4883e628dee2", "title": "Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow", "url": "http://arxiv.org/abs/1810.00821", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Xue Bin Peng", "Angjoo Kanazawa", "Sam Toyer", "Pieter Abbeel", "Sergey Levine"], "summaries": ["Adversarial learning techniques require a delicate balance between the generator and the discriminator. If the discriminator is too weak, it cannot tell the difference between generated samples and true samples, and it cannot provide a learning signal for the generator. If the discriminator is too strong, small changes to the generator are not going to fool the discriminator, and so again the gradient is uninformative. This paper proposes to control the power of the discriminator using an _information bottleneck_.\n\nInstead of providing data points directly to the discriminator, the data points are first encoded into a new representation, and the discriminator must work with the new representation. The representation is learned to be helpful for the discriminator under the constraint of an upper bound on the mutual information between the representation and the original data points. The choice of upper bound determines how much information the discriminator is allowed to access, which in turn determines how powerful the discriminator is.\n\nThey apply this idea to imitation learning (GAIL), inverse reinforcement learning (AIRL), and image generation (GANs), and find that it improves results."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #38", "newsletter_category": "Learning human intent"}
{"id": "43165596fa16caf38646b48fafa9bca7", "title": "Guiding Policies with Language via Meta-Learning", "url": "http://arxiv.org/abs/1811.07882", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["John D. Co-Reyes"], "summaries": ["The authors train an agent to perform tasks specified in natural language, with a \"correction\" after each attempt (also in natural language). They formulate this as a meta-learning problem: for each instruction, several attempt-correction cycles are allowed. Each attempt takes into account previous attempts to achieve the same instruction by passing each previous trajectory and its corresponding correction through a CNN, then using the mean of all outputs as an input to a policy module.\n\nIn their experiments, all instructions and corrections are generated automatically, and test-time performance is evaluated as a function of how many corrections are allowed. In one experiment, the tasks is to navigate rooms to reach a goal, where the correction is the next subgoal required. Given 4 corrections, their agent outperforms a baseline which was given all 5 subgoals at the beginning of the task. In another experiment, the task is to move a block to an ambiguously-specified location, and the corrections narrow down the target area; their trained agent scores 0.9, as opposed to 0.96 for an agent given the exact target location."], "venue": "arXiv", "opinion": "This paper explores an important idea: correcting poorly-specified instructions using human-in-the-loop feedback. The second task in particular is a nice toy example of iterative preference clarification. I'm not sure whether their meta-learning approach is directly relevant to safety, particularly because each correction is only \"in scope\" for a single episode, and also only occurs after a bad attempt has finished. However, the broad idea of correction-based learning seems promising.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #35", "newsletter_category": "Learning human intent"}
{"id": "1ba1ec87629e210f885b66e6dd0cf226", "title": "Prompting: Better Ways of Using Language Models for NLP Tasks", "url": "https://thegradient.pub/prompting/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Tianyu Gao"], "summaries": ["Since the publication of <@GPT-3@>(@Language Models are Few-Shot Learners@), many papers have been written about how to select the best prompt for large language models to have them solve particular tasks of interest. This post gives an overview of this literature. The papers can be roughly divided into two approaches: first, we have discrete prompts, where you search for a sequence of words that forms an effective prompt; these are “discrete” since words are discrete. Second, we have soft prompts, where you search within the space of embeddings of words for an embedding that forms an effective prompt; since embeddings are vectors of real numbers they are continuous (or “soft”) and can be optimized through gradient descent (unlike discrete prompts)."], "venue": "The Gradient", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #155", "newsletter_category": "Learning human intent"}
{"id": "840f255d22cf9738d40ee9b17da96f1f", "title": "Bayesian Inverse Reinforcement Learning", "url": "https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-416.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2007-01-01T00:00:00Z", "authors": ["Deepak Ramachandran", "Eyal Amir"], "summaries": ["Unlike many other methods, [Bayesian Inverse Reinforcement Learning](https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-416.pdf) produces a _posterior distribution_ over the reward functions that would explain the observed demonstrations. This distribution can be used for e.g. planning in a risk-averse manner. It works by starting with some randomly chosen reward function, and then repeating the following steps:\n\n1. Perturb the reward function randomly\n2. Solve for the optimal policy for that reward function\n3. Use the learned policy to see how likely the demonstrations would be for the reward function\n4. Use the likelihood to determine whether to take this new reward function, or return to the old one.\n\n(This is the application of a standard MCMC sampling algorithm to the likelihood model used in IRL.)"], "venue": "IJCAI 2007", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #132", "newsletter_category": "Learning human intent"}
{"id": "43cff2308f4460d6e6043189bc8f8c55", "title": "AXRP 2: Learning Human Biases", "url": "https://axrp.net/episode/2020/12/11/episode-2-learning-human-biases-rohin-shah.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Daniel Filan and Rohin Shah"], "summaries": ["After talking about <@my paper on learning biases@>(@Learning biases and rewards simultaneously@) (for which I refer you to the linked blog post and past AN summary), Daniel and I talked about the implications of inverse reinforcement learning for AI safety, and in particular how we would want AI systems to be architected at a high level.\n\nMy position was that we want intelligent AI systems to be trying to help their users: they are explicitly interacting with humans in order to clarify what they should do, perhaps by explicitly asking questions, or by watching other human decisions and making inferences about what humans must care about. (However, this isn’t the vast majority of what they do; it is probably significantly less than one-fifth of “everything they do”.)\n\nIn contrast, Daniel would prefer for a superintelligent AI system to be pursuing a well-defined task, such as “build a thriving city”. He has three reasons for this:\n\n1. When our goal is to build AI systems that can pursue a relatively well-defined task, it is much easier for us to tell whether we are succeeding, and we can be much clearer about what it is we are trying to accomplish.\n2. We can increase the difficulty of well-specified tasks over time, rising in tandem with the capabilities of AI systems. In contrast, if our AI system is supposed to generically make our life better, that seems like a fixed task that is fairly difficult and requires quite a high minimum threshold of capabilities.\n3. It seems easier to tell whether your AI system has built a good city, than to tell whether an AI system has generically improved your life.\n\nIn the podcast, I don’t think I really engaged properly with the first two points, so I’ll talk about that in the opinion. I did disagree with the third point -- I don’t see why it should be harder to evaluate whether my life has been generically improved; for example, I expect that we are capable of telling apart good and bad personal assistants.\n\nDaniel also asked why it helps to aim for “AI systems that are trying to help you” -- how has that made the problem any simpler? It seems to me that the notion of “helpfulness” is domain-independent: once you have the concept of being helpful, it can be applied in different domains. One hopes that we could then train lots of AI systems that are specialized to particular domains, but all of them are still trying to be helpful."], "venue": "AXRP Podcast", "opinion": "I think I broadly agree with Daniel’s first two points in support of the task-based approach, and I was somewhat talking past him during the podcast. I generally _do_ agree that individual AI systems should be specialized to particular tasks or domains, and should not be “generically improving one’s life”. 
I agree with Daniel that at least outwardly it seems like most of the AI alignment field seems to be about building AI systems that can generically optimize your entire life, or even more ambitiously, the lot of humanity; I also agree that this is weird and probably not the right thing to do.\n\nMy optimism about helpfulness is not predicated on an idea that we’ll build AI systems that are generically trying to make all aspects of your life better: I do think that we still want our AI systems to be domain-specific, such as (say) a financial advisor AI system. The idea is more that if we can design domain-general _techniques_ that allow us to train domain-specific _systems_ that are trying to be helpful, that seems like it would be a solution to the AI alignment problem (the problem of how to prevent an AI from adversarially optimizing against its user).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #130", "newsletter_category": "Learning human intent"}
{"id": "d8672ab77a4d2e16affb48807f180a98", "title": "Imitation Learning in the Low-Data Regime", "url": "https://ai.googleblog.com/2020/09/imitation-learning-in-low-data-regime.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Robert Dadashi", "Léonard Hussenot", "Matthieu Geist", "Olivier Pietquin"], "summaries": ["<@Non-Adversarial Imitation Learning@>(@Non-Adversarial Imitation Learning and its Connections to Adversarial Methods@) has become more popular recently due to the fact that GAN style architectures can be notoriously unstable during training. This paper makes a contribution by introducing an imitation learning strategy that relies on minimizing an upper bound on the Wasserstein distance between the imitator and expert state visitation distributions. The Wasserstein distance can be understood using the 'Earth Mover's Analogy'. In this interpretation, we view the distance as the cost of the most efficient transport strategy to move probability mass from the imitator distribution to the expert distribution. The advantage of such an approach is that the metric can be calculated in an offline way. If we calculate the distance for partial rollouts then we can create a dense, albeit non-stationary, reward for the imitator. In experiments, agents trained using the Wasserstein distance are able to learn control tasks using only a single trajectory. "], "venue": "arXiv", "opinion": "With this paper, I conclude that IRL works for Mujoco-style control tasks. The performance of this method is similar to offline GAIL but is better justified and more stable. However, ultimately, I'm a bit skeptical of their claim that the method will generalize to other tasks. Results for GAIL/DAC are quite poor in Atari-like environments whereas pair-wise reward modeling seems to perform quite well. This would suggest a reward modeling approach would scale much better in more complicated settings.", "highlight": false, "read_more": "Paper: Primal Wasserstein Imitation Learning", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #124", "newsletter_category": "Learning human intent"}
{"id": "b46771aba2dcec3f9417664581779454", "title": "Learning to Summarize with Human Feedback", "url": "https://openai.com/blog/learning-to-summarize-with-human-feedback/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Nisan Stiennon*", "Long Ouyang*", "Jeff Wu*", "Daniel M. Ziegler*", "Ryan Lowe", "Chelsea Voss", "Alec Radford", "Dario Amodei", "Paul Christiano"], "summaries": ["OpenAI has been working on <@finetuning language models from human preferences@>(@Fine-Tuning GPT-2 from Human Preferences@). This blog post and paper show the progress they have made on text summarization in particular since their last release.\n\nAs a reminder, the basic setup is similar to that of [Deep RL from Human Preferences](https://deepmind.com/blog/learning-through-human-feedback/): we get candidate summaries by executing the policy, have humans compare which of two summaries is better, and use this feedback to train a reward model that can then be used to improve the policy. The main differences in this paper are:\n\n1. They put in a lot of effort to ensure high data quality. Rather than having MTurk workers compare between summaries, they hire a few contractors who are paid a flat hourly rate, and they put a lot of effort into communicating what they care about to ensure high agreement between labelers and researchers.\n2. Rather than collecting preferences in an online training setup, they collect large batches at a time, and run a relatively small number of iterations of alternating between training the reward model and training the policy. My understanding is that this primarily makes it simpler from a practical perspective, e.g. you can look at the large batch of data you collected from humans and analyze it as a unit.\n3. They initialize the policy from a model that is first pretrained in an unsupervised manner (as in <@GPT-3@>(@Language Models are Few-Shot Learners@)) and then finetuned on the reference summaries using supervised learning.\n\nOn the Reddit task they train on, their summaries are preferred over the reference summaries (though since the reference summaries have varying quality, this does not imply that their model is superhuman). They also transfer the policy to summarize CNN / DailyMail news articles and find that it still outperforms the supervised model, despite not being trained at all for this setting (except inasmuch as the unsupervised pretraining step saw CNN / DailyMail articles).\n\nAn important ingredient to this success is that they ensure their policy doesn’t overoptimize the reward, by adding a term to the reward function that penalizes deviation from the supervised learning baseline. They show that if they put a very low weight on this term, the model overfits to the reward model and starts producing bad outputs."], "venue": "OpenAI Blog", "opinion": "This paper is a great look at what reward learning would look like at scale. The most salient takeaways for me were that data quality becomes very important and having very large models does not mean that the reward can now be optimized arbitrarily.", "highlight": false, "read_more": "Paper: Learning to summarize from human feedback", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #116", "newsletter_category": "Learning human intent"}
{"id": "9dbb98a1389a3e6e60cce92f3d906737", "title": "Multi-Principal Assistance Games", "url": "http://arxiv.org/abs/2007.09540", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Arnaud Fickinger", "Simon Zhuang", "Dylan Hadfield-Menell", "Stuart Russell"], "summaries": ["So far the work in the <@assistance games framework@>(@Human Compatible: Artificial Intelligence and the Problem of Control@) (previously called CIRL) has focused on the case where there is a single human and a single AI assistant. Once we have multiple humans (or _principals_, as the paper calls them), things get much trickier.\n\nOne problem is that we don’t know how to aggregate the values across different principals. Rather than taking a stance on the problem, this paper assumes that we have some mechanism that can combine reward functions in some reasonable way. It instead focuses on a second problem: while previously we could trust the human to report their preferences accurately (as the human and agent were aligned), when there are multiple principals whose preference will be aggregated, the principals have an incentive to misrepresent their preferences (which we’ll call non-straightforward play).\n\nLet’s consider the case where the principals provide demonstrations, _and get reward for those demonstrations_. For now our agent will assume that the principals are playing straightforwardly, and so the agent simply infers their preferences, aggregates them, and optimizes the results. In this setting, if the agent will act far more often than the principals provide demonstrations (so that the reward of the demonstrations is almost irrelevant), we can apply the Gibbard-Satterthwaite theorem to show that any non-trivial mechanism will be vulnerable to non-straightforward play. In contrast, if the principals provide lots of demonstrations, while the agent only acts for a short period of time, then optimal principals primarily want to ensure their demonstrations are good, and so will be straightforward most of the time (provably). In the middle, the fact that principals get rewarded for demonstrations does help reduce non-straightforward play, but does not eliminate it.\n\nNow let’s consider the case where the agent can design a mechanism. Here, when the principals are providing demonstrations, the agent can override their action choice with one of its own (a setting considered <@previously@>(@The Assistive Multi-Armed Bandit@)). Roughly speaking, the algorithm only executes a proposed human action if it hasn’t executed it before. By doing so, it incentivizes the principals to report second-best actions, and so on, giving the agent more information about the principals' utility functions. The mechanism incentivizes straightforward play, and is approximately efficient (i.e. there is an upper bound on the worst case social welfare achieved)."], "venue": "Workshop on Incentives in Machine Learning, ICML 2020", "opinion": "According to me, the main insight of this paper is that it is both necessary and difficult to design mechanisms that incentivize principals to report not just the best thing to do, but a comparison amongst different alternatives. 
Within the formalism of paper, this is done by overriding a principal’s action unless it is a novel action, but I expect in practice we’ll do this in some other way (it seems rather unusual to imagine the agent overriding a human, I’d be surprised if that was how we ended up building our AI systems).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #110", "newsletter_category": "Learning human intent"}
{"id": "b0e3a67bee278921c06b783f75f6ef95", "title": "Adversarial Soft Advantage Fitting: Imitation Learning without Policy Optimization", "url": "http://arxiv.org/abs/2006.13258", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Paul Barde*", "Julien Roy*", "Wonseok Jeon*", "Joelle Pineau", "Christopher Pal", "Derek Nowrouzezahrai"], "summaries": ["This work aims to simplify algorithms for adversarial imitation learning by using a _structured_ discriminator, which is parameterised by the current generator and a learned policy. They prove that if so formulated, the policy that yields the optimal discriminator is exactly the same as the policy that generated the expert data, which is also precisely what we hope the generator will learn. As long as the discriminator's learned policy is parameterised correctly such that it can be sampled and evaluated, this eliminates the need for a reinforcement learning outer loop for policy improvement, as this learned policy can be substituted in for the generator's policy in the next training iteration. They empirically show the competitiveness of their method with state-of-the-art algorithms across a small but increasingly complex suite of tasks."], "venue": "arXiv", "opinion": "Since their theoretical results are only for optimal values, it's unclear whether starting from random initial policies will necessarily converge to these optimal values -- indeed, they make this point themselves, that they do not train to convergence as gradient descent cannot hope to find the global optimum for GAN-like non-convex loss functions. In light of that, it's not evident *why* their algorithms outperform the competition. Additionally, they do not report computational speed-up or wall-clock comparisons, which to me felt like the broad motivation behind this work. Nonetheless, the work illuminates new territory in adversarial imitation learning, provides positive evidence for a novel technique, and raises interesting questions for future work, such as how to learn robust reward functions via this method, or what kind of convergence properties can be expected.", "highlight": false, "read_more": "", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #110", "newsletter_category": "Learning human intent"}
{"id": "515a160d055298b71cb443398dea4410", "title": "Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences", "url": "http://arxiv.org/abs/2002.09089", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Daniel S. Brown", "Russell Coleman", "Ravi Srinivasan", "Scott Niekum"], "summaries": ["Bayesian reward learning would allow for rigorous safety analysis when performing imitation learning. However, Bayesian reward learning methods are typically computationally expensive to use. This is because a separate MDP needs to be solved for each reward hypothesis. The main contribution of this work is a proposal for a more efficient reward evaluation scheme called Bayesian REX (see also an <@earlier version@>(@Deep Bayesian Reward Learning from Preferences@)). It works by pre-training a low-dimensional feature encoding of the observation space which allows reward hypotheses to be evaluated as a linear combination over the learned features. Demonstrations are ranked using pair-wise preference which is relativistic and thus conceptually easier for a human to evaluate. Using this method, sampling and evaluating reward hypotheses is extremely fast: 100,000 samples in only 5 minutes using a PC. Moreover, Bayesian REX can be used to play Atari games by finding a most likely or mean reward hypothesis that best explains the ranked preferences and then using that hypothesis as a reward function for the agent."], "venue": "arXiv", "opinion": "It's worth emphasizing that this isn't quite a pure IRL method. They use preferences over demonstrations in addition to the demonstrations themselves and so they have more information than would be available in a pure IRL context. However, it’s also worth emphasizing that (as the authors show) pixel-level features make it difficult to use IRL or GAIL to learn an imitation policy, which means I wasn’t expecting a pure IRL approach to work here. Conceptually, what's interesting about the Bayesian approach is that uncertainty in the reward distribution translates into confidence intervals on expected performance. This means that Bayesian REX is fairly robust to direct attempts at reward hacking due to the ability to directly measure overfitting to the reward function as high variance in the expected reward.", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "T-REX", "converted_with": "python", "newsletter_number": "AN #105", "newsletter_category": "Learning human intent"}
{"id": "03d0bd59041c048dc9947c803dbf575a", "title": "Showing versus doing: Teaching by demonstration", "url": "http://papers.nips.cc/paper/6412-showing-versus-doing-teaching-by-demonstration", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2016-01-01T00:00:00Z", "authors": ["Mark K. Ho", "Michael Littman", "James MacGlashan", "Fiery Cushman", "Joseph L. Austerweil"], "summaries": ["This paper creates and validates a model of _pedagogy_ as applied to reward learning. Typically, inverse reinforcement learning (IRL) algorithms assume access to a set of demonstrations that are created from an approximately _optimal_ policy. However, in practice, when people are asked to _show_ a task, they don't give the optimal trajectory; they give the trajectory that helps the learner best _disambiguate_ between the possible tasks. They formalize this by creating a model in two steps:\n\n1. A literal or IRL robot is one which learns rewards under the model that the demonstrator is Boltzmann rational.\n2. The pedagogic human shows trajectories in proportion to how likely a literal robot would think the true reward is upon seeing the trajectory.\n\nThey validate this model with user studies and find that it predicts human demonstrations well."], "venue": "NIPS 2016", "opinion": "", "highlight": false, "read_more": "<@Literal or Pedagogic Human? Analyzing Human Model Misspecification in Objective Learning@>", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #101", "newsletter_category": "Learning human intent"}
{"id": "fc058496cfee441a93ae72272cb80eb6", "title": "Active Preference-Based Learning of Reward Functions", "url": "http://people.eecs.berkeley.edu/~anca/papers/RSS17_comparisons.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2017-01-01T00:00:00Z", "authors": ["Dorsa Sadigh", "Anca D. Dragan", "Shankar Sastry", "and Sanjit A. Seshia"], "summaries": ["In a continuous, dynamical system such as an autonomous car, how can we learn good behavior without hardcoding a reward function? It may be too hard to get demonstrations (humans could only show how they drive a car, not how they want the car to drive). The authors propose presenting the human with two trajectories and asking her to choose which one is better, and using this information to infer the reward."], "venue": "RSS 2017", "opinion": "This paper is a good example of reward learning, and it's interesting to compare and contrast this more principled method to Deep RL from Human Preferences. For example, while they maintain a distribution over rewards and choose the most useful query, deep RL from human preferences works with higher-dimensional reward functions where this would be too expensive, and so they instead train an ensemble of reward predictors and use disagreement between the reward predictors as a measurement of uncertainty.", "highlight": false, "read_more": "Deep Reinforcement Learning from Human Preferences", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "Recon #5", "newsletter_category": "Learning human intent"}
{"id": "4b41c55a4ac39b9d41672466529fd07d", "title": "Imitation Learning via Off-Policy Distribution Matching", "url": "https://openreview.net/forum?id=Hyg-JC4FDr", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Ilya Kostrikov", "Ofir Nachum", "Jonathan Tompson"], "summaries": ["One way to view imitation learning is as a distribution matching problem. In other words, the agent is rewarded based on how well it can imitate the state-distribution induced by the expert. In recent years, distribution matching via adversarial methods such as GAIL has become a popular approach to imitation learning. However, one weakness of these methods is that they require on-policy samples which means they require the agent to interact with the environment. In this paper, the authors present an off-policy method for distribution matching which can work without environment interaction. They do this by building on the prior work of DualDICE, a policy-agnostic method to estimate distribution ratios between agent and expert which can then be used to provide a reward to the agent. This allows the optimal policy to be estimated directly from demonstrations without any need for agent interaction. The authors run a few experiments and show that the method has comparable performance to behavioral cloning in the off-policy setting and adversarial methods in the on-policy setting."], "venue": "ICLR 2020", "opinion": "This is a cool application of density-estimation via DualDICE. While the experiments are a bit weak, the fact that an off-policy method exists to do distribution-matching is interesting in its own right. Moreover, the method seems able to compete with both BC and GAIL-like methods which is intriguing.", "highlight": false, "read_more": "GAIL", "summarizer": "Zach", "prerequisites": "DualDICE", "converted_with": "python", "newsletter_number": "AN #98", "newsletter_category": "Learning human intent"}
{"id": "5fb9e22983aea772109cd9a6d242b30f", "title": "State-only Imitation with Transition Dynamics Mismatch", "url": "http://arxiv.org/abs/2002.11879", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Tanmay Gangwani", "Jian Peng"], "summaries": ["Most existing imitation learning algorithms rely on the availability of expert demonstrations that come from the *same* MDP as the one the imitator will be evaluated in. With the advent of <@adversarial inverse reinforcement learning (AIRL)@>(@Learning Robust Rewards with Adversarial Inverse Reinforcement Learning@), it has become possible to learn general behaviors. However, algorithms such as <@GAIL@>(@Generative Adversarial Imitation Learning@) are capable of learning with just state-information, something that AIRL was not designed for. In this paper, the authors introduce indirect-imitation learning (I2L) to try and merge the benefits of both GAIL and AIRL. The basic sketch of the algorithm is to first use a generalization of AIRL to imitate demonstrations via a buffer distribution and then focus on moving that buffer closer to the expert's demonstration distribution using a Wasserstein critic, a smoother way to train GAN networks. By combining these two approaches, agents trained with I2L learn how to control Ant in regular gravity and can *generalize* to perform in simulations with differing parameters for gravity. For the suite of Gym continuous domains, they show consistent advantages for I2L over other algorithms such as GAIL, BCO, and AIRL when parameters such as friction, density, and gravity are changed. "], "venue": "ICLR 2020", "opinion": "The main contribution in this paper seems to be deriving a new bound so that AIRL can handle state-only imitation learning. The use of indirection via a buffer is also interesting and seems to be a good idea to provide stability in training. However, they did not do an ablation. Overall, it's aesthetically interesting that this paper is borrowing tricks, such as buffering and Wasserstein critic. Finally, the results seem promising, particularly for the sim-to-real problem. It would be interesting to see a follow-up to gauge whether or not I2L can help bridge this gap.", "highlight": false, "read_more": "Paper: Learning Robust Rewards With Adversarial Inverse Reinforcement Learning", "summarizer": "Zach", "prerequisites": "Wasserstein GAN", "converted_with": "python", "newsletter_number": "AN #94", "newsletter_category": "Learning human intent"}
{"id": "530615cacc6697f67df7b66984781bac", "title": "The MineRL Competition on Sample-Efficient Reinforcement Learning Using Human Priors: A Retrospective", "url": "http://arxiv.org/abs/2003.05012", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Stephanie Milani", "Nicholay Topin", "Brandon Houghton", "William H. Guss", "Sharada P. Mohanty", "Oriol Vinyals", "Noboru Sean Kuno"], "summaries": ["This paper reports on the results of the <@MineRL competition@>(@NeurIPS 2019 Competition: The MineRL Competition on Sample Efficient Reinforcement Learning using Human Priors@), in which participants had to train agents to obtain a diamond in Minecraft using a limited amount of compute, environment interactions, and human demonstrations. While no team achieved this task, one team did make it to the penultimate milestone: obtaining an iron pickaxe.\n\nThe top nine teams all used some form of action reduction: that is, they constrained their agents to only take a subset of all available actions, shaping the space in which the agent had to learn and explore. The top four teams all used some form of hierarchy in order to learn longer \"options\" that could then be selected from. The second place team used pure imitation learning (and so required _no_ environment interactions), while the eighth and ninth place teams used pure reinforcement learning (and so required _no_ human demonstrations)."], "venue": "arXiv", "opinion": "I was surprised to see pure RL solutions rank in the leaderboard, given the limitations on compute and environment interactions. Notably though, while the second place team (pure imitation) got 42.41 points, the eighth place team (pure RL) only got 8.25 points.\n\nMore generally, I was excited to see an actual benchmark for techniques using human demonstrations: so far there hasn't been a good evaluation of such techniques. It does seem like Minecraft benefits a lot from hierarchy and action pruning, which we may not care about when evaluating algorithms.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #94", "newsletter_category": "Learning human intent"}
{"id": "b94af7c7efe2e97b7ed2533b117e3b2b", "title": "Learning Safe Policies with Expert Guidance", "url": "http://arxiv.org/abs/1805.08313", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Jessie Huang", "Fa Wu", "Doina Precup", "Yang Cai"], "summaries": ["Expert demonstrations can be consistent with many possible reward functions. Instead of simply trying to mimic the demonstration, the authors consider all possible rewards that are consistent with the demonstration, and then maximize the worst reward, leading to safe behavior."], "venue": "arXiv", "opinion": "This is very related to [Inverse Reward Design](https://arxiv.org/abs/1711.02827), where instead of maxmin planning we use risk-averse planning, and instead of considering all rewards compatible with an expert demonstration we consider all reward functions that are probable based on which reward function the designer wrote down.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #9", "newsletter_category": "Learning human intent"}
{"id": "466bc0e124a71fc40fc478ab464c79ca", "title": "Learning to Imitate Human Demonstrations via CycleGAN", "url": "https://bair.berkeley.edu/blog/2019/12/13/humans-cyclegan/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Laura Smith", "Nikita Dhawan", "Marvin Zhang", "Pieter Abbeel", "Sergey Levine"], "summaries": ["Most methods for imitation learning, where robots learn from a demonstration, assume that the actions of the demonstrator and robot are the same. This means that expensive techniques such as teleoperation have to be used to generate demonstrations. **This paper presents a method to engage in automated visual instruction-following with demonstrations (AVID) that works by translating video demonstrations done by a human into demonstrations done by a robot.** To do this, the authors use [CycleGAN](https://junyanz.github.io/CycleGAN/), a method to translate an image from one domain to another domain using unpaired images as training data. CycleGAN allows them to translate videos of humans performing the task into videos of the robot performing the task, which the robot can then imitate. In order to make learning tractable, the demonstrations had to be divided up into 'key stages' so that the robot can learn a sequence of more manageable tasks. In this setup, the robot only needs supervision to ensure that it's copying each stage properly before moving on to the next one. To test the method, the authors have the robot retrieve a coffee cup and make coffee. AVID significantly outperforms other imitation learning methods and can achieve 70% / 80% success rate on the tasks, respectively."], "venue": "BAIR Blog", "opinion": "In general, I like the idea of 'translating' demonstrations from one domain into another. It's worth noting that there do exist methods for translating visual demonstrations into latent policies. I'm a bit surprised that we didn't see any comparisons with other adversarial methods like [GAIfO](https://arxiv.org/pdf/1807.06158.pdf), but I understand that those methods have high sample complexity so perhaps the methods weren't useful in this context. It's also important to note that these other methods would still require demonstration translation. Another criticism is that AVID is not fully autonomous since it relies on human feedback to progress between stages. However, compared to kinetic teaching or teleoperation, sparse feedback from a human overseer is a minor inconvenience. ", "highlight": false, "read_more": "Paper: AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #82", "newsletter_category": "Learning human intent"}
{"id": "28aca3bf787b3b0b186b43a4690faa68", "title": "AI Alignment Podcast: Synthesizing a human’s preferences into a utility function", "url": "https://futureoflife.org/2019/09/17/synthesizing-a-humans-preferences-into-a-utility-function-with-stuart-armstrong/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lucas Perry and Stuart Armstrong"], "summaries": ["Stuart Armstrong's <@agenda@>(@Research Agenda v0.9: Synthesising a human's preferences into a utility function@) involves extracting partial preferences from a human and synthesizing them together into an _adequate_ utility function. Among other things, this podcast goes into the design decisions underlying the agenda:\n\nFirst, why even have a utility function? In practice, there are [many pressures](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities) suggesting that maximizing expected utility is the \"right\" thing to do -- if you aren't doing this, you're leaving value on the table. So any agent that isn't maximizing a utility function will want to self-modify into one that is using a utility function, so we should just use a utility function in the first place.\n\nSecond, why not defer to a long reflection process, as in [Indirect Normativity](https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/), or some sort of reflectively stable values? Stuart worries that such a process would lead to us prioritizing simplicity and elegance, but losing out on something of real value. This is also why he focuses on _partial preferences_: that is, our preferences in \"normal\" situations, without requiring such preferences to be extrapolated to very novel situations. Of course, in any situation where our moral concepts break down, we will have to extrapolate somehow (otherwise it wouldn't be a utility function) -- this presents the biggest challenge to the research agenda."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "Stuart Armstrong Research Agenda Online Talk", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #73", "newsletter_category": "Learning human intent"}
{"id": "5b88e2204bc3fe0725d66108a1e03616", "title": "Learning from Observations Using a Single Video Demonstration and Human Feedback", "url": "http://arxiv.org/abs/1909.13392", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Sunil Gandhi", "Tim Oates", "Tinoosh Mohsenin", "Nicholas Waytowich"], "summaries": ["Designing rewards can be a long and consuming process, even for experts. One common method to circumvent this problem is through demonstration. However, it might be difficult to record demonstrations in a standard representation, such as joint positions. **In this paper, the authors propose using human feedback to circumvent the discrepancy between how demonstrations are recorded (video) and the desired standard representation (joint positions).** First, humans provide similarity evaluations of short clips of an expert demonstration to the agent's attempt and a similarity function is learned by the agent. Second, this similarity function is used to help train a policy that can imitate the expert. Both functions are learned jointly. The algorithm can learn to make a Hopper agent back-flip both from a Hopper demonstration of a back-flip, and from a YouTube video of a human backflipping. Ultimately, the authors show that their method improves over another method that uses human feedback without direct comparison to desired behavior."], "venue": "arXiv", "opinion": "This paper seems like a natural extension of prior work. The imitation learning problem from observation is well-known and difficult. Introducing human feedback with a structured state space definitely seems like a viable way to get around a lot of the known difficulties with other methods such as a GAIL.", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #70", "newsletter_category": "Learning human intent"}
{"id": "03feb5526ff9d6ee08edddb10b27e0f9", "title": "Incorrigibility in the CIRL Framework", "url": "http://arxiv.org/abs/1709.06275", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2017-01-01T00:00:00Z", "authors": ["Ryan Carey"], "summaries": ["This paper demonstrates that when the agent has an _incorrect_ belief about the human's reward function, then you no longer get the benefit that the agent will obey shutdown instructions. It argues that since the purpose of a shutdown button is to function as a safety measure of last resort (when all other measures have failed), it should not rely on an assumption that the agent's belief about the reward is correct."], "venue": "arXiv", "opinion": "I certainly agree that if the agent is wrong in its beliefs about the reward, then it is quite likely that it would not obey shutdown commands. For example, in the off switch game, if the agent is incorrectly certain that u is positive, then it will take action a, even though the human would want to shut it down. See also <@these@>(@Latent Variables and Model Mis-Specification@) <@posts@>(@Model Mis-specification and Inverse Reinforcement Learning@) on model misspecification and IRL. For a discussion of how serious the overall critique is, both from HC's perspective and mine, see the opinion on the next post.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #69", "newsletter_category": "Learning human intent"}
{"id": "93f6e34d98dfcbc1bc8e444c1656bba1", "title": "Problem of fully updated deference", "url": "https://arbital.com/p/updated_deference/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2017-01-01T00:00:00Z", "authors": ["Eliezer Yudkowsky"], "summaries": ["This article points out that even if you have an agent with uncertainty over the reward function, it will acquire information and reduce its uncertainty over the reward, until eventually it can't reduce uncertainty any more, and then it would simply optimize the expectation of the resulting distribution, which is equivalent to optimizing a known objective, and has the same issues (such as disabling shutdown buttons)."], "venue": "Arbital", "opinion": "As with the previous paper, this argument is only really a problem when the agent's belief about the reward function is _wrong_: if it is correct, then at the point where there is no more information to gain, the agent should already know that humans don't like to be killed, do like to be happy, etc. and optimizing the expectation of the reward distribution should lead to good outcomes. Both this and the previous critique are worrisome when you can't even put a reasonable _prior_ over the reward function, which is quite a strong claim.\n\nHC's response is that the agent should never assign zero probability to any hypothesis. It suggests that you could have an expandable hierarchical prior, where initially there are relatively simple hypotheses, but as hypotheses become worse at explaining the data, you \"expand\" the set of hypotheses, ultimately bottoming out at (perhaps) the universal prior. I think that such an approach could work in principle, and there are two challenges in practice. First, it may not be computationally feasible to do this. Second, it's not clear how such an approach can deal with the fact that human preferences _change_ over time. (HC does want more research into both of these.)\n\nFully updated deference could also be a problem if the observation model used by the agent is incorrect, rather than the prior. I'm not sure if this is part of the argument.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #69", "newsletter_category": "Learning human intent"}
{"id": "26ee8c47603e78bda034da8cbed27ef4", "title": "Some Comments on Stuart Armstrong's \"Research Agenda v0.9\"", "url": "https://alignmentforum.org/posts/GHNokcgERpLJwJnLW/some-comments-on-stuart-armstrong-s-research-agenda-v0-9?_ga=2.216737811.48011077.1562349688-943761554.1470242885", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Charlie Steiner"], "summaries": ["This post makes two main critiques of the research agenda in the previous entry. First, the research agenda involves a lot of human-designed features and modules, but <@The Bitter Lesson@> is that machine learning tends to shine with highly abstract large models that can make use of a lot of compute. Second, the symbol grounding part of the agenda requires the AI system to develop representations of the world that match the representations that humans use, and we have no idea how to do that, or even what it would mean to \"match human representations\" when the AI is more intelligent than humans. The post also includes some more specific comments that I'm not summarizing."], "venue": "Alignment Forum", "opinion": "I agree with both of these critiques, especially the one about the bitter lesson. It seems like Stuart's approach imposes a particular structure or algorithm for how to synthesize the utility function; I am generally skeptical of such approaches. Also, as you might already know, I think it is neither necessary nor sufficient for AI alignment to find a utility function or \"goal\" that the AI can safely optimize. Since this promises to be a very difficult enterprise (Section 0.2 notes that it aims to \"solve at least 5 major open problems in philosophy, to a level rigorous enough that we can specify them in code\"), I prefer to look into other approaches that seem more tractable.\n\nI do think that the problems that motivate the various aspects of the agenda are important and useful to think about, and I am happy that they have all been put into this single post. I also like the fact that the research agenda is directly aiming for a full solution to AI alignment.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #60", "newsletter_category": "Learning human intent"}
{"id": "6acb64911efe8b2b334019fd8ca9aea3", "title": "Batch Active Preference-Based Learning of Reward Functions", "url": "http://iliad.stanford.edu/blog/2018/10/06/batch-active-preference-based-learning-of-reward-functions/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Erdem Bıyık and Dorsa Sadigh"], "summaries": ["This paper builds on a trend of recent papers that try to learn human preferences, not through demonstrations of optimal behavior, but through a human expressing a preference over two possible trajectories, which has both pragmatic advantages (re limits of human optimality) and theoretic ones (better ability to extrapolate a reward function). Here, the task is framed as: we want to send humans batches of paired trajectories to rank, but which ones? Batch learning is preferable to single-sample active learning because it's more efficient to update a network after a batch of human judgments, rather than after each single one. This adds complexity to the problem because you'd prefer to not have a batch of samples that are individually high-expected-information, but which are redundant with one another. The authors define an information criterion (basically the examples about which we're most uncertain of the human's judgment) and then pick a batch of examples based on different heuristics for getting a set of trajectories with high information content that are separated from each other in feature space."], "venue": "CoRL 2018", "opinion": "This is an elegant paper that makes good use of the toolkit of active learning for human preference solicitation, but it's batch heuristics are all very reliant on having a set of high level trajectory features in which Euclidean distance between points is a meaningful similarity metric, which feels like a not impossible to generalize but still somewhat limiting constraint.", "highlight": false, "read_more": "", "summarizer": "Cody", "prerequisites": "<@Active Preference-Based Learning of Reward Functions@>", "converted_with": "python", "newsletter_number": "AN #56", "newsletter_category": "Learning human intent"}
{"id": "f247a753d51e1c2417dfec20f78f8876", "title": "End-to-End Robotic Reinforcement Learning without Reward Engineering", "url": "https://sites.google.com/view/reward-learning-rl/home", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Avi Singh", "Larry Yang", "Kristian Hartikainen", "Chelsea Finn", "Sergey Levine"], "summaries": ["This paper demonstrates an approach that can learn to perform real world robotics tasks based not on example trajectories (states and actions) but just a small number (10) of pixel-level images of goal states showing successful task completion. Their method learns a GAN-like classifier to predict whether a given image is a success, continually adding data sampled from the still-learning policy to the set of negative examples, so the model at each step needs to further refine its model of success. The classifier, which is used as the reward signal in learning the policy, also makes use of a simple active learning approach, choosing the state its classifier is most confident is success and querying a human about it on fixed intervals, ultimately using less than 75 queries in all cases."], "venue": "arXiv", "opinion": "This is a result I find impressive, primarily because of its interest in abiding by sensible real-world constraints: it's easier for humans to label successful end states than to demonstrate a series of actions, and the number of queries made was similarly pragmatically low.", "highlight": false, "read_more": "BAIR blog post", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #54", "newsletter_category": "Learning human intent"}
{"id": "f5fcde405d7bc5ce8f5a140ac114fc70", "title": "Conditional revealed preference", "url": "https://unstableontology.com/2019/04/04/conditional-revealed-preference/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jessica Taylor"], "summaries": ["When backing out preferences by looking at people's actions, you may find that even though they say they are optimizing for X, their actions are better explained as optimizing for Y. This is better than relying on what they say, at least if you want to predict what they will do in the future. However, all such inferences are specific to the current context. For example, you may infer that schools are \"about\" dealing with authoritarian work environments, as opposed to learning -- but maybe this is because everyone who designs schools doesn't realize what the most effective methods of teaching-for-learning are, and if they were convinced that some other method was better for learning they would switch to that. So, in order to figure out what people \"really want\", we need to see not only what they do in the current context, but also what they would do in a range of alternative scenarios."], "venue": "Author's Website", "opinion": "The general point here, which comes up pretty often, is that any information you get about \"what humans want\" is going to be specific to the context in which you elicit that information. This post makes that point when the information you get is the actions that people take. Some other instances of this point:\n\n - [Inverse Reward Design](https://arxiv.org/abs/1711.02827) notes that a human-provided reward function should be treated as _specific to the training environment_, instead of as a description of good behavior in all possible environments.\n - [CP-Nets](https://www.cs.toronto.edu/~cebly/Papers/CPnets.pdf) are based on the point that when a human says \"I want X\" it is not a statement that is meant to hold in all possible contexts. They propose very weak semantics, where \"I want X\" means \"holding every other aspect of the world constant, it would be better for X to be present than for it not to be present\".\n - [Wei Dai's point](https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety) ([AN #37](https://mailchi.mp/6eb8190e723f/alignment-newsletter-37)) that humans likely have adversarial examples, and we should not expect preferences to generalize under distribution shift.\n - Stuart Armstrong and Paul Christiano have made or addressed this point in many of their posts.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #53", "newsletter_category": "Learning human intent"}
{"id": "ae28166e6706578508df2899e33935ac", "title": "AI Alignment Podcast: Human Cognition and the Nature of Intelligence", "url": "https://futureoflife.org/2019/02/21/human-cognition-and-the-nature-of-intelligence-with-joshua-greene/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lucas Perry and Joshua Greene"], "summaries": ["Joshua Greene's lab has two research directions. The first is how we combine concepts to form thoughts: a process which allows us to understand arbitrary novel scenarios (even ones we don't think ever occurred). He discusses some of his recent reseach, which uses brain imaging to infer what's happening when humans think about compound concepts. While Joshua considers the combinatorial nature of thought to be important, he argues that to build AGI, it's necessary to start with \"grounded cognition\" in which representations are derived from perception and physical action, rather than just learning to manipulate symbols (like language).\n\nJoshua also works on the psychology and neuroscience of morality. He discusses his recent work in which participants are prompted to consider Rawls' Veil of Ignorance argument (that when making decisions affecting many people, we should do so as if we don't know which one we are) and then asked to evaluate moral dilemmas such as trolley problems. Joshua argues that the concept of impartiality is at the core of morality, and that it pushes people towards more utilitarian ideas (although he wants to rebrand utilitarianism as \"deep pragmatism\" to address its PR problems)."], "venue": "FLI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #47", "newsletter_category": "Learning human intent"}
{"id": "2d387f374bc95dd94a464c3839dbd118", "title": "Learning from Demonstration in the Wild", "url": "http://arxiv.org/abs/1811.03516", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Feryal Behbahani", "Kyriacos Shiarlis", "Xi Chen", "Vitaly Kurin", "Sudhanshu Kasewa", "Ciprian Stirbu", "João Gomes", "Supratik Paul", "Frans A. Oliehoek", "João Messias", "Shimon Whiteson"], "summaries": ["This paper learns traffic trajectories from unsupervised data by converting traffic camera footage into a Unity scene simulation, using that simulation to generate pseudo-LIDAR readings for each \"expert trajectory\", and then training an agent to imitate them using a variant of generative adversarial imitation learning (GAIL)."], "venue": "arXiv", "opinion": "This is a cool example of how huge amounts of existing unlabeled video data might be utilised. The task they attempt is significantly more complex than those in other similar work (such as [this paper](https://arxiv.org/abs/1805.11592) which learns to play Atari games from Youtube videos); however, this also makes it difficult to judge how well the learned policy performed, and how much potential it has to transfer into the real world.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #32", "newsletter_category": "Learning human intent"}
{"id": "802c6cbd17ce63c2e4429288e722d6e3", "title": "Shared Autonomy via Deep Reinforcement Learning", "url": "http://bair.berkeley.edu/blog/2018/04/18/shared-autonomy/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Siddharth Reddy"], "summaries": ["In shared autonomy, an AI system assists a human to complete a task. The authors implement shared autonomy in a deep RL framework by simply extending the state with the control input from the human, and then learning a policy that chooses actions given the extended state. They show that the human-AI team performs better than either one alone in the Lunar Lander environment."], "venue": "BAIR Blog", "opinion": "Shared autonomy is an interesting setting because the human is still necessary in order to actually perform the task, whereas in typical reward learning settings, once you have learned the reward function and the AI is performing well, the human does not need to be present in order to execute a good policy.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #3", "newsletter_category": "Learning human intent"}
{"id": "1c54f1ac33b32deee236673850a77171", "title": "Addressing Sample Inefficiency and Reward Bias in Inverse Reinforcement Learning", "url": "http://arxiv.org/abs/1809.02925", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Ilya Kostrikov", "Kumar Krishna Agrawal", "Debidatta Dwibedi", "Sergey Levine", "Jonathan Tompson"], "summaries": ["Deep IRL algorithms typically work by training a discriminator that distinguishes between states and actions from the expert from states and actions from the learned policy, and extracting a reward function from the discriminator. In any environment where the episode can end after a variable number of timesteps, this assumes that the reward is zero after the episode ends. The reward function from the discriminator often takes a form where it must always be positive, inducing a survival incentive, or a form where it must always be negative, inducing a living cost. For example, [GAIL](https://arxiv.org/abs/1606.03476)'s reward is always positive, giving a survival incentive. As a result, _without any reward learning at all_ GAIL does better on Hopper than behavioral cloning, and fails to learn on a reaching or pushing task (where you want to do the task as quickly as possible, so you want the living cost). To solve this, they learn an \"absorbing state reward\", which is a reward given after the episode ends -- this allows the algorithm to learn for itself whether it should have a survival incentive or living cost.\n\nThey also introduce a version that keeps a replay buffer of experience and uses an off-policy algorithm to learn from the replay buffer in order to improve sample efficiency."], "venue": "arXiv", "opinion": "The key insight that rewards are _not_ invariant to additions of a constant when you have variable-length episodes is useful and I'm glad that it's been pointed out, and a solution proposed. However, the experiments are really strange -- in one case (Figure 4, HalfCheetah) their algorithm outperforms the expert (which has access to the true reward), and in another (Figure 5, right) the blue line implies that using a uniformly zero reward lets you achieve around a third of expert performance (!!).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #29", "newsletter_category": "Learning human intent"}
{"id": "2a80d34e9e3ecd7908f6dde626ee9b01", "title": "Risk-Sensitive Generative Adversarial Imitation Learning", "url": "http://arxiv.org/abs/1808.04468", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Jonathan Lacotte", "Mohammad Ghavamzadeh", "Yinlam Chow", "Marco Pavone"], "summaries": ["This paper extends GAIL to perform imitation learning where we try to optimize a policy for the mean reward collected under the constraint that the policy is no more risky than the expert policy. Since we don't know the true cost function, we have to approximate this problem with another problem where we infer the cost function as well, and evaluate the risk profile relative to the inferred cost function. The algorithm ends up looking very similar to the original GAIL algorithm, where the gradient updates change in order to include terms dependent on the conditional value-at-risk (CVaR). They evaluate against GAIL and RAIL (another risk-sensitive imitation learning algorithm) and find that their method performs the best on the Hopper and Walker Mujoco environments."], "venue": "NIPS 2018", "opinion": "I only skimmed through the math, so I don't understand the paper well enough to have a good opinion on it. The overall objective of having more risk-sensitivity seems useful for safety. That said, I do find the VNM utility theorem compelling, and it suggests that risk aversion is a bad strategy. I currently resolve this by saying that while the VNM theorem is true, if you want to optimize expected reward over a long time horizon in an environment with high-downside actions but not high-upside actions, even if you are maximizing expected utility you would not take low-probability-of-high-downside actions. (Here a high-downside action is one that causes something like death/episode termination.) Since humans are (probably) scope-insensitive with respect to time, it becomes important for humans to have a heuristic of risk aversion in order to actually maximize expected utility in practice. I'd be interested in seeing experiments with current (risk neutral) RL algorithms in long-horizon environments with actions with high downside, and see if they automatically learn behavior that we would call \"risk-averse\".\n\nTake this with a grain of salt -- it's a lot more speculative than most of my opinions, which can already be quite speculative. Most of the steps in that argument are handwavy intuitions I have that aren't based on any research that's been done (though I haven't looked for any such research). Though you can think of the argument for focusing on long-term AI safety at all as an instance of this idea, where the argument is that our risk-aversion heuristic is only sufficient for timescales on the orders of human lifetimes, not for cosmic timescales, and so we should explicitly be more risk-averse and focus on reducing existential risk.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #20", "newsletter_category": "Learning human intent"}
{"id": "64a4025016e6e90557a5fc30ed3237a2", "title": "Inverse Decision Modeling: Learning Interpretable Representations of Behavior", "url": "http://proceedings.mlr.press/v139/jarrett21a/jarrett21a.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Daniel Jarrett*", "Alihan Hüyük*", "Mihaela van der Schaar"], "summaries": ["There’s lots of work on learning preferences from demonstrations, which varies in how much structure they assume on the demonstrator: for example, we might consider them to be <@Boltzmann rational@>(@Modeling Interaction via the Principle of Maximum Causal Entropy@) or [risk sensitive](https://rss2017.lids.mit.edu/static/papers/62.pdf), or we could try to <@learn their biases@>(@Learning biases and rewards simultaneously@). This paper proposes a framework to encompass all of these choices: the core idea is to model the demonstrator as choosing actions according to a _planner_; some parameters of this planner are fixed in advance to provide an assumption on the structure of the planner, while others are learned from data. This also allows them to separate beliefs, decision-making, and rewards, so that different structures can be imposed on each of them individually.\n\nThe paper provides a mathematical treatment of both the forward problem (how to compute actions in the planner given the reward, think of algorithms like value iteration) and the backward problem (how to compute the reward given demonstrations, the typical inverse reinforcement learning setting). They demonstrate the framework on a medical dataset, where they introduce a planner with parameters for flexibility of decision-making, optimism of beliefs, and adaptivity of beliefs. In this case they specify the desired reward function and then run backward inference to conclude that, with respect to this reward function, clinicians appear to be significantly less optimistic when diagnosing dementia in female and elderly patients."], "venue": "ICML 2021", "opinion": "One thing to note about this paper is that it is an incredible work of scholarship; it fluently cites research across a variety of disciplines, including AI safety, and provides a useful organizing framework for many such papers. If you need to do a literature review on inverse reinforcement learning, this paper is a good place to start.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #169", "newsletter_category": "Learning human intent"}
{"id": "3ea79de3bc365dd19539cb02c7f1bdbf", "title": "B-Pref: Benchmarking Preference-Based Reinforcement Learning", "url": "https://openreview.net/forum?id=ps95-mkHF_", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Kimin Lee", "Laura Smith", "Anca Dragan", "Pieter Abbeel"], "summaries": ["Deep RL has become a powerful method to solve a variety of sequential decision tasks using a known reward function for training. However, in practice, rewards are hard to specify making it hard to scale Deep RL for many applications. Preference-based RL provides an alternative by allowing a teacher to indicate preferences between a pair of behaviors. Because the teacher can interactively give feedback to an agent, preference-based RL has the potential to help address this limitation of Deep RL. Despite the advantages of preference-based RL it has proven difficult to design useful benchmarks for the problem. This paper introduces a benchmark (B-Pref) that is useful for preference-based RL in various locomotion and robotic manipulation tasks.\n\nOne difficulty with designing a useful benchmark is that teachers may have a variety of irrationalities. For example, teachers might be myopic or make mistakes. The B-Pref benchmark addresses this by emphasizing measuring performance under a variety of teacher irrationalities. They do this by providing various performance metrics to introduce irrationality into otherwise deterministic reward criteria. While previous approaches to preference-based RL work well when the teacher responses are consistent, experiments show they are not robust to feedback noise or teacher mistakes. Experiments also show that how queries are selected has a major impact on performance. With these results, the authors identify these two problems as areas for future work."], "venue": "NeurIPS 2021 Track Datasets and Benchmarks", "opinion": "While the authors do a good job advocating for the problem of preference-based RL, I'm less convinced their particular benchmark is a large step forward. In particular, it seems the main contribution is not a suite of tasks, but rather a collection of different ways to add irrationality to the teacher oracle. The main takeaway of this paper is that current algorithms don't seem to perform well when the teacher can make mistakes, but this is quite similar to having a misspecified reward function. Beyond that criticism, the experiments support the areas suggested for future work. ", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #166", "newsletter_category": "Learning human intent"}
{"id": "924996f5ada7db793a9c1b8c073a57c6", "title": "VILD: Variational Imitation Learning with Diverse-quality Demonstrations", "url": "http://arxiv.org/abs/1909.06769", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Voot Tangkaratt", "Bo Han", "Mohammad Emtiyaz Khan", "Masashi Sugiyama"], "summaries": ["We saw in the previous summary that existing methods struggle to cope with datasets of demonstrations of mixed quality. This paper aims to tackle exactly this problem. They consider a model in which there are k demonstrators with varying levels of quality. Each demonstrator is modeled as computing an action Boltzmann-rationally and then applying some Gaussian noise; the standard deviation of the Gaussian noise differs across the demonstrators (with higher standard deviation corresponding to lower quality).\n\nThey use variational inference to derive an algorithm for this problem that infers the reward function as well as an optimal policy to go along with it. In addition, they oversample data from the demonstrations that the model thinks are high quality in order to get more informative gradients. (They use an importance sampling correction in order to keep the gradient estimate unbiased.)\n\nTheir experiments on machine-generated data show significant improvement over existing imitation learning algorithms, both in the case where we synthetically add Gaussian noise (matching the model) and when we add time-signal-dependent (TSD) noise (in which case the model is misspecified)."], "venue": "arXiv", "opinion": "This seems like a reasonable approach. It has a similar ethos as Boltzmann rationality. In Boltzmann rationality, it seems like all you need to do is model the demonstrator as having some noise but still being more likely to choose higher-reward actions, and that’s enough to get decent performance; similarly here you just need to model different demonstrators as applying different amounts of Gaussian noise to the optimal policy and that’s enough to distinguish good from bad.\n\nNote that, while the experimental results are good, the paper doesn’t have experiments with real human demonstrations; as we saw in the previous summary these can often be quite different (in ways that matter) from machine-generated demonstrations.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #161", "newsletter_category": "Learning human intent"}
{"id": "b66326df8bb9b314e5bbbc631608dcb7", "title": "Reward Identification in Inverse Reinforcement Learning", "url": "http://proceedings.mlr.press/v139/kim21c/kim21c.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Kuno Kim", "Kirankumar Shiragur", "Shivam Garg", "Stefano Ermon"], "summaries": ["As mentioned in the previous summary, a major challenge with inverse reinforcement learning is that rewards are unidentifiable: even given perfect knowledge of the policy, we cannot recover the reward function that produces it. This is partly for boring reasons like “you can add a constant to a reward function without changing anything”, but even if you exclude those kinds of reasons, others remain. For example, since every policy is optimal for the constant reward function, the zero reward function can rationalize any policy.\n\nFor this reason, the authors instead focus on the case where we assume the policy is a solution to the maximum entropy RL objective (you can think of this as Boltzmann rationality, if you’re more familiar with that). The solution to MaxEnt RL for a zero reward is a uniformly random policy, so the zero reward no longer rationalizes every policy. Perhaps rewards are identifiable in this case?\n\n(You might have noticed that I neglected the question of whether the MaxEnt RL model was better than the regular RL model in cases that we care about. As far as I can tell the paper doesn’t address this. But if they did so, perhaps they might say that in realistic situations we are dealing with boundedly-rational agents, and Boltzmann rationality / MaxEnt RL is a common model in such situations.)\n\nWell, we still need to deal with the “additive constant” argument. To address this, the authors define two reward functions to be equivalent if they agree up to an additive constant. There are actually two versions of this: “trajectory equivalence” means that they agree on the rewards for all feasible trajectories, while “state-action equivalence” means that they agree on the rewards for all state-action pairs. Correspondingly, “weak identifiability” means that you can identify rewards up to trajectory equivalence, while “strong identifiability” means you can identify them up to state-action equivalence. Strong identifiability implies weak identifiability, since if you know the rewards on state-action pairs, that determines the reward for any given trajectory.\n\nAll deterministic MDPs are weakly identifiable under the MaxEnt RL model, since in this case a trajectory τ is selected with probability p(τ) proportional to exp(r(τ)), so the probability p(τ) can then be inverted to get r(τ). However, stochastic MDPs need not be weakly identifiable. Imagine an MDP in which no matter what you do, you are teleported to a random state. In such an MDP, the agent has no control over the trajectory, and so the MaxEnt RL objective will choose a uniformly random policy, no matter what the reward is, and so the reward must be unidentifiable.\n\nNow the question is, assuming you have weak identifiability (i.e. you can infer r(τ)), when do you also have strong identifiability (i.e. you can infer r(s, a))? Intuitively, there needs to be a sufficient “diversity” of feasible trajectories τ, that cover a wide variety of possible (s, a) pairs, so that you can use the r(τ) values to infer the r(s, a) values. 
The authors prove a sufficient condition called “coverage”: there exists some timestep T, such that for every state there is some feasible trajectory that reaches that state at timestep T. (They also require the horizon to be at least 2T.) Coverage can be a fairly easy property to have; for example, if you can get to any state from any other state in some number of steps, then all you need is a single self-loop somewhere in the MDP that allows you to “waste time” so that you reach the desired state at exactly timestep T (instead of reaching too early)."], "venue": "ICML 2021", "opinion": "", "highlight": false, "read_more": "[Identifiability in inverse reinforcement learning](https://arxiv.org/abs/2106.03498) has the same motivation and studies a very similar setting, but has a few different results. It's also easier to read if you're not as familiar with MaxEnt methods.", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #161", "newsletter_category": "Learning human intent"}
{"id": "cff7ca63b20ce936fced610b0a65eead", "title": "Exploring Hierarchy-Aware Inverse Reinforcement Learning", "url": "http://arxiv.org/abs/1807.05037", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Chris Cundy", "Daniel Filan"], "summaries": ["One heuristic that humans use to deal with bounded computation is to make plans hierarchically, building long-term plans out of slightly smaller building blocks. How can we incorporate this knowledge into an IRL algorithm? This paper extends [Bayesian IRL](https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-416.pdf) to the setting where the demonstrator has access to a set of _options_, which are (to a first approximation) policies that can be used to achieve some subgoal. Now, when you are given a trajectory of states and actions, it is no longer clear which options the demonstrator was using to generate that trajectory. The authors provide an algorithm that can enumerate all the options that are consistent with the trajectory, and assign probabilities to them according to the Boltzmann-rational model. They evaluate on a taxi driver gridworld often used in hierarchical planning, as well as on real human data from a game called Wikispeedia."], "venue": "GoalsRL 2018", "opinion": "Hierarchy seems to be a very important tool that humans use, so I'm glad to see work on it. Currently, the algorithm is very computationally expensive, and can only be applied in small domains right now, and requires the options to be specified ahead of time, but it does lead to a benefit on the environments they consider, despite the inevitable misspecification from having to hardcode the options. I would be very interested to see an extension to high-dimensional data where the options are learned (analogous to [Meta-Learning Shared Hierarchies](https://blog.openai.com/learning-a-hierarchy/) for hierarchical RL). Not only would this be more realistic, it could perform better because the options would be learned, not hardcoded.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "Learning human intent"}
{"id": "f109b9ff34fee6bba142d7ca203fdef3", "title": "IBM researchers train AI to follow code of ethics", "url": "https://venturebeat.com/2018/07/16/ibm-researchers-train-ai-to-follow-code-of-ethics/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Ben Dickson"], "summaries": ["Parents want movie recommendation systems not to recommend particular kinds of movies to children, but we would also like the recommendation system to suggest movies that the children will actually like. Researchers solved this problem by first learning a model for what kinds of movies should not be recommended, and then combined that with a contextual bandit model that learns online from the child's data to provide good suggestions that follow the parent's constraints."], "venue": "Venture Beat", "opinion": "We can look at this from an alignment perspective -- the child is giving the AI system a misspecified reward, relative to the parent's goal of \"provide good suggestions that do not have inappropriate content\". While the researchers solve it using contextual bandits, it could be interesting to consider how AI alignment approaches could deal with this situation.", "highlight": false, "read_more": "Using Contextual Bandits with Behavioral Constraints for Constrained Online Movie Recommendation", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "Learning human intent"}
{"id": "55fcf1ab3009a025f44d288cc1fd83da", "title": "Learning What To Do by Simulating the Past", "url": "https://bair.berkeley.edu/blog/2021/05/03/rlsp/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["David Lindner", "Rohin Shah", "Pieter Abbeel", "Anca Dragan"], "summaries": ["Since the state of the world has already been optimized for human preferences, it can be used to infer those preferences. For example, it isn’t a coincidence that vases tend to be intact and on tables. An agent with an understanding of physics can observe that humans haven’t yet broken a particular vase, and infer that they care about vases not being broken.\n\n<@Previous work@>(@Learning Preferences by Looking at the World@) provides an algorithm, RLSP, that can perform this type of reasoning, but it is limited to small environments with known dynamics and features. In this paper (on which I am an author), we introduce a deep variant of the algorithm, called Deep RLSP, to move past these limitations. While RLSP assumes known features, Deep RLSP learns a feature function using self-supervised learning. While RLSP computes statistics for all possible past trajectories using dynamic programming, deep RLSP learns an inverse dynamics model and inverse policy to _simulate_ the most likely past trajectories, which serve as a good approximation for the necessary statistics. \n\nWe evaluate the resulting algorithm on a variety of Mujoco tasks, with promising results. For example, given a single state of a HalfCheetah balancing on one leg, Deep RLSP is able to learn a (noisy) policy that somewhat mimics this balancing behavior. (These results can be seen [here](https://sites.google.com/view/deep-rlsp).)"], "venue": "ICLR 2021", "opinion": "", "highlight": false, "read_more": "[Paper: Learning What To Do by Simulating the Past](https://arxiv.org/abs/2104.03946)\n\n[Thesis: Extracting and Using Preference Information from the State of the World](https://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-210.html)", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #149", "newsletter_category": "Learning human intent"}
{"id": "3990021e04a7b8fe16c2b8f7667f31a1", "title": "Recursive Classification: Replacing Rewards with Examples in RL", "url": "https://ai.googleblog.com/2021/03/recursive-classification-replacing.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Benjamin Eysenbach", "Sergey Levine", "Ruslan Salakhutdinov"], "summaries": ["Previous work has suggested learning a reward model from examples of successfully solving the task. This paper suggests that rather than a two stage process of learning a reward model and then optimizing it using RL, we can instead directly learn a policy from the examples by building an equivalent of Bellman backups that apply directly to examples (rather than having to go through intermediate rewards). Their experiments show that this works well."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "Paper: Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #147", "newsletter_category": "Learning human intent"}
{"id": "d3bf531b38cacd70a86159fec7112fc5", "title": "Four Motivations for Learning Normativity", "url": "https://www.alignmentforum.org/posts/oqghwKKifztYWLsea/four-normativity-motivations", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Abram Demski"], "summaries": ["We’ve <@previously seen@>(@Learning Normativity: A Research Agenda@) desiderata for agents that learn normativity from humans: specifically, we would like such agents to:\n\n1. **Learn at all levels:** We don’t just learn about uncertain values, we also learn how to learn values, and how to learn to learn values, etc. There is **no perfect loss function** that works at any level; we assume conservatively that Goodhart’s Law will always apply. In order to not have to give infinite feedback for the infinite levels, we need to **share feedback between levels**.\n2. **Learn to interpret feedback:** Similarly, we conservatively assume that there is **no perfect feedback**; so rather than fixing a model for how to interpret feedback, we want feedback to be **uncertain** and **reinterpretable**.\n3. **Process-level feedback:** Rather than having to justify all feedback in terms of the consequences of the agent’s actions, we should also be able to provide feedback on the way the agent is reasoning. Sometimes we’ll have to judge the entire chain of reasoning with **whole-process feedback**.\n\nThis post notes that we can motivate these desiderata from multiple different frames:\n\n1. _Outer alignment:_ The core problem of outer alignment is that any specified objective tends to be wrong. This applies at all levels, suggesting that we need to **learn at all levels**, and also **learn to interpret feedback** for the same reason. **Process-level feedback** is then needed because not all decisions can be justified based on consequences of actions.\n2. _Recovering from human error:_ Another view that we can take is that humans don’t always give the right feedback, and so we need to be robust to this. This motivates all the desiderata in the same way as for outer alignment.\n3. _Process-level feedback:_ We can instead view process-level feedback as central, since having agents doing the right type of _reasoning_ (not just getting good outcomes) is crucial for inner alignment. In order to have something general (rather than identifying cases of bad reasoning one at a time), we could imagine learning a classifier that detects whether reasoning is good or not. However, then we don’t know whether the reasoning of the classifier is good or not. Once again, it seems we would like to **learn at all levels**.\n4. _Generalizing learning theory:_ In learning theory, we have a distribution over a set of hypotheses, which we update based on how well the hypotheses predict observations. **Process-level feedback** would allow us to provide feedback on an individual hypothesis, and this feedback could be **uncertain**. **Reinterpretable feedback** on the other hand can be thought of as part of a (future) theory of meta-learning."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #143", "newsletter_category": "Learning human intent"}
{"id": "097f952d677632309ee7ed23dc2c7e37", "title": "Learning Montezuma’s Revenge from a Single Demonstration", "url": "https://blog.openai.com/learning-montezumas-revenge-from-a-single-demonstration/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tim Salimans and Richard Chen"], "summaries": ["Montezuma's Revenge is widely considered to be one of the hardest Atari games to learn, because the reward is so sparse -- it takes many actions to reach the first positive reward, and if you're using random exploration, it will take exponentially many actions (in N, the number of actions till the first reward) to find any reward. A human demonstration should make the exploration problem much easier. In particular, we can start just before the end of the demonstration, and train the RL agent to get as much score as the demonstration. Once it learns that, we can start it at slightly earlier in the demonstration, and do it again. Repeating this, we eventually get an agent that can perform the whole demonstration from start to finish, and it takes time linear in the length of the demonstration. Note that the agent must be able to generalize a little bit to states \"around\" the human demonstration -- when it takes random actions it will eventually reach a state that is similar to a state it saw earlier, but not exactly the same, and it needs to generalize properly. It turns out that this works for Montezuma's Revenge, but not for other Atari games like Gravitar and Pitfall."], "venue": "OpenAI Blog", "opinion": "Here, the task definition continues to be the reward function, and the human demonstration is used to help the agent effectively optimize the reward function. Such agents are still vulnerable to misspecified reward functions -- in fact, the agent discovers a bug in the emulator that wouldn't have happened if it was trying to imitate the human. I would still expect the agent to be more human-like than one trained with standard RL, since it only learns the environment near the human policy.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #14", "newsletter_category": "Learning human intent"}
{"id": "a73d4d908cd2b88960a1649b09d132be", "title": "Learning Rewards from Linguistic Feedback", "url": "http://arxiv.org/abs/2009.14715", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Theodore R. Sumers", "Mark K. Ho", "Robert D. Hawkins", "Karthik Narasimhan", "Thomas L. Griffiths"], "summaries": ["This paper proposes another approach to reinforcement learning using natural language. After the agent plays an episode, we can ask a human for feedback in natural language. We then take their response, figure out what features of the environment the response mentions, and then use sentiment analysis to determine how to update the weights on the features. For sentiment analysis we can use an off-the-shelf classifier; the hard part is in determining the relevant environment feature vectors:\n\n1. **Evaluative** feedback is feedback about the trajectory the agent produced, for example “good job”, so we can just use the features of this trajectory.\n2. **Imperative** feedback specifies what the agent should have done, e.g. “you should have gone to the top right corner”. In this case, we must find the features consistent with the given instruction.\n3. **Descriptive** feedback provides feedback directly about the reward, for example “yellow objects are bad”. In this case, we use a feature vector that has a 1 for every feature mentioned (in this case, the feature for yellow objects) and 0 everywhere else.\n\nTypes 2 and 3 require some domain knowledge in order to write down programs that map language to the relevant features. The environment the authors used was simple enough that they were able to do this.\n\nOnce we have the feature vector f and the sentiment s, we perform a Bayesian update on our weight distribution. This is similar to the way we perform Bayesian updates on the reward distribution upon seeing a human action as evidence, as in <@Bayesian IRL@>(@Bayesian Inverse Reinforcement Learning@) or <@reward-rational implicit choice@>(@Reward-rational (implicit) choice: A unifying formalism for reward learning@).\n\nThis model so far performs reasonably well. By adding a couple of heuristics inspired by pragmatics (e.g. assuming that features that aren’t mentioned aren’t decision-relevant), they reach approximately human-level performance."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #136", "newsletter_category": "Learning human intent"}
{"id": "7559ed24b69ce491fef3d1c82e5a012e", "title": "Non-Adversarial Imitation Learning and its Connections to Adversarial Methods", "url": "http://arxiv.org/abs/2008.03525", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Oleg Arenz", "Gerhard Neumann"], "summaries": ["Viewing imitation learning as a distribution matching problem has become more popular in recent years (see <@Value-Dice@>(@Imitation Learning via Off-Policy Distribution Matching@) / <@I2L@>(@State-only Imitation with Transition Dynamics Mismatch@)). However, the authors in this paper argue that such methods are unstable due to their formulation as saddle-point problems which means they have weak convergence guarantees due to the assumption that the policy is slowly updated. In this paper, the authors reformulate <@Adversarial IRL@>(@Learning Robust Rewards with Adversarial Inverse Reinforcement Learning@) as a non-adversarial problem allowing for much stronger convergence guarantees to be proved. In particular, the authors derive a lower-bound on the discrimination reward which allows for larger policy updates and then introduce a method to iteratively tighten this bound. They also build on prior work for value-dice and derive a soft actor-critic algorithm (ONAIL) that they evaluate on a variety of control tasks. "], "venue": "arXiv", "opinion": "The experiments in this paper are a bit underwhelming. While they run a large number of experiments, ONAIL only occasionally outperforms value-dice consistently in the HalfCheetah environment. The authors justify this by noting that ONAIL wasn't regularized. Additionally, the policies are initialized with behavior cloning, something that value-dice doesn't require. However, the theoretical insight on iterative tightening is interesting, and together with the recent work on value-dice indicates that the design space of imitation learning algorithms is far from being exhausted.", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #119", "newsletter_category": "Learning human intent"}
{"id": "567d69015161f1e50e625091b63a9c0f", "title": "An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning", "url": "http://arxiv.org/abs/1806.03820", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Dhruv Malik", "Malayandi Palaniappan", "Jaime F. Fisac", "Dylan Hadfield-Menell", "Stuart Russell", "Anca D. Dragan"], "summaries": ["Previously, Cooperative Inverse Reinforcement Learning (CIRL) games were solved by reducing them to a POMDP with an exponentially-sized action space, and then solving with POMDP algorithms that are exponential in the size of the action space, leading to a doubly-exponential algorithm. This paper leverages the fact that the human has perfect information to create a modified Bellman update that still computes the optimal policy, but no longer requires an exponential action space. The modified Bellman update works with the human's policy, and so we can now swap in more accurate models of the human, including eg. noisy rationality (whereas previously the human had to be exactly optimal). They show huge speedups in experiments, and discuss some interesting qualitative behavior that arises out of CIRL games -- for example, sometimes the human _waits_ instead of making progress on the task, because it is a good signal to the robot of what the human wants."], "venue": "arXiv", "opinion": "I'm excited by this improvement, since now we can actually solve non-trivial CIRL games -- one of the games they solve has around 10 billion states. With this we can run experiments with real humans, which seems really important, and the paper does mention a very preliminary pilot study run with real humans.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "Cooperative Inverse Reinforcement Learning", "converted_with": "python", "newsletter_number": "AN #11", "newsletter_category": "Learning human intent"}
{"id": "fdb3a1dbd5c9e709e7d4d68c8c1be83e", "title": "Learning a Prior over Intent via Meta-Inverse Reinforcement Learning", "url": "http://arxiv.org/abs/1805.12573", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Kelvin Xu", "Ellis Ratner", "Anca Dragan", "Sergey Levine", "Chelsea Finn"], "summaries": ["For complex rewards, such as reward functions defined on pixels, standard IRL methods require a large number of demonstrations. However, many tasks are very related, and so we should be able to leverage demonstrations from one task to learn rewards for other tasks. This naturally suggests that we use meta learning. The authors adapt [MAML](https://arxiv.org/abs/1703.03400) to work with maximum entropy IRL (which requires differentiating through the MaxEnt IRL gradient). They evaluate their approach, called MandRIL, on a navigation task whose underlying structure is a gridworld, but the state is represented as an image so that the reward function is nonlinear and requires a convnet."], "venue": "ICLR 2019", "opinion": "In one of the experiments, the baseline of running IRL from scratch performed second best, beating out two other methods of meta-learning. I'd guess that this is because both MandRIL and standard IRL benefit from assuming the maxent IRL distribution over trajectories (which I believe is how the demonstrations were synthetically generated), whereas the other two meta learning baselines do not have any such assumption, and must learn this relationship.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #11", "newsletter_category": "Learning human intent"}
{"id": "49be9c830c78358ed357c72dc2303365", "title": "Imitation Learning from Video by Leveraging Proprioception", "url": "http://arxiv.org/abs/1905.09335", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Faraz Torabi", "Garrett Warnell", "Peter Stone"], "summaries": ["Recent work into imitation learning from observation (IfO) allows agents to perform a task from visual demonstrations that do not include state and action information. In this paper the authors are interested in leveraging proprioception information, knowledge of internal states, to create an efficient IfO algorithm. As opposed to GAIfO, which typically uses only the observation vector, this algorithm only allows images to be used for discrimination but lets the agent make use of internal states to generate actions. They test their proposed technique on several MujoCo domains and show that it outperforms other imitation from observation algorithms. The authors note that in practice occlusion and fast movement in environments like Walker2d and HalfCheetah make it difficult to learn directly from images which partly explains the success of using proprioceptive features."], "venue": "arXiv", "opinion": "I think it's easy to forget that observations aren't necessarily equivalent to state representations. This paper did a good job of reminding me that using state features on the MujoCo tasks is different from using images to train imitation learning agents. In practice, trying to learn just from images can fail because of partial observability, but introducing proprioception is a natural solution here. I broadly agree with the authors' conclusion that resolving embodiment mismatch and viewpoint mismatch are natural next steps for this kind of research. ", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #107", "newsletter_category": "Learning human intent"}
{"id": "2bcc8173be8fc9a2d51c8f065b1520ee", "title": "Imitation Learning from Observations by Minimizing Inverse Dynamics Disagreement", "url": "http://arxiv.org/abs/1910.04417", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Chao Yang*", "Xiaojian Ma*", "Wenbing Huang", "Fuchun Sun", "Huaping Liu", "Junzhou Huang", "Chuang Gan"], "summaries": ["Learning from observation (LfO) focuses on imitation learning in situations where we want to learn from state-only demonstrations. This contrasts with learning from demonstration (LfD) which needs both state and action information. In practice, LfO is the more common situation due to the prevalence of unannotated data, such as video. In this paper, the authors show that the gap between LfO and LfD comes from the disagreement of inverse dynamics models between the imitator and the expert. If the inverse dynamics model is perfect, then state transitions can be labeled with actions and LfD can be performed on the result. However, it's often the case that many actions can generate the same state transition. They then show that optimizing an upper-bound on this gap leads to improved performance as compared to other LfO methods such as GAIfO (GAIL extended to LfO). "], "venue": "NeurIPS 2019", "opinion": "The main value of this paper is that the difference between LfO and LfD is clarified by introducing the notion of inverse disagreement. Related to this analysis, the authors note that GAIfO has the same objective as the inverse disagreement model if we replace KL with JS divergence. This makes me suspect that there's a general LfO [divergence minimization perspective](https://arxiv.org/abs/1911.02256) relating all of these methods together. In other words, the fact that the objectives for LfO and LfD can be related via KL/JS divergence indicates that there is an entire class of methods underlying this approach to LfO. Specifically, I'd hypothesize that regularized inverse reinforcement learning from observation followed by reinforcement learning would be equivalent to a divergence minimization problem. ", "highlight": false, "read_more": "[divergence minimization perspective](https://arxiv.org/abs/1911.02256)", "summarizer": "Zach", "prerequisites": "[GAIfO](https://arxiv.org/abs/1807.06158) and [Recent Advances in LfO](https://arxiv.org/pdf/1905.13566.pdf)", "converted_with": "python", "newsletter_number": "AN #101", "newsletter_category": "Learning human intent"}
{"id": "3f56b48fb4ed6d3b42081c3692c1bfb8", "title": "On the Foundations of Expected Expected Utility", "url": "http://www.cs.toronto.edu/~cebly/Papers/foundations.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2003-01-01T00:00:00Z", "authors": ["Craig Boutilier"], "summaries": ["Suppose you want to understand someone's preferences over a set of outcomes, and lotteries over those outcomes. The VNM theorem shows that under certain reasonable assumptions about those preferences, we can represent the preferences using a utility function over outcomes (_not_ lotteries), such that the person prefers lottery A to lottery B if and only if the expected utility of lottery A is greater than the expected utility of lottery B. Furthermore, the utility function is unique up to an affine transformation.\n\nWhat if we want to have uncertainty over the utility function? The natural approach would be to maintain a probability distribution over utility functions, and then say that the person prefers lottery A to lottery B if and only if the expected expected utility of lottery A exceeds the expected expected utility of lottery B. Here, the first expectation is taken over the probability distribution over utility functions, and the second expectation is taken over the probabilities in the lottery. This decision rule is called Maximum Expected Expected Utility (MEEU).\n\nHowever, the MEEU decision rule gives different answers if you transform one of the utility functions in an affine way. This isn't great, since affine transformations of utility functions encode the same preferences, and so you'd like your decision rule to be invariant to them. This is the standard problem that utility functions are not by default _commensurable_.\n\nThis paper shows that everything is fine if you assume that all the utility functions agree on the best and worst outcomes. In this case, any utility function can be uniquely characterized by stating for each state s, for what p are you indifferent between s and {p: s_best, (1-p): s_worst}. We can then define the base decision rule as follows: when presented with a lottery A = {p_i : s_i, ...}, you convert it into a lottery over utility functions: {q_U : A_U, ...}, where for each utility function, you define A_U as A, but replacing all the s_i with the corresponding {p: s_best, (1-p): s_worst} lotteries as defined by U. So now, we have a compound lottery of the form {q_U : {p_i : {p: s_best, (1-p): s_worst}, ...}, ...}, which can be reduced to a simple lottery of the form {x: s_best, (1-x): s_worst}. When comparing two such lotteries, you prefer lottery A to lottery B if and only if x_A > x_B.\n\nSo that's what you do without using utility functions, and only using preference orderings. They show that the MEEU decision rule is identical to this base decision rule, justifying the use of MEEU, as long as you normalize all of your utility functions so that they all assign the same value to the best and worst outcomes."], "venue": "IJCAI 2003", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "Learning human intent"}
{"id": "8c20ec18d3b94cb0a2bd10d7179d5f1b", "title": "Sample Efficient Reinforcement Learning through Learning from Demonstrations in Minecraft", "url": "http://arxiv.org/abs/2003.06066", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Christian Scheller", "Yanick Schraner", "Manfred Vogel"], "summaries": ["This paper explains the technique used by the 3rd place team in the MineRL competition (summarized above). They used behavior cloning to train their neural net on human demonstrations, and then used reinforcement learning (specifically, IMPALA) with experience replay and advantage clipping to improve. There are more details about their architecture and design choices in the paper."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #94", "newsletter_category": "Learning human intent"}
{"id": "146eba30616d2d0d42eb706fa54871a0", "title": "From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following", "url": "http://arxiv.org/abs/1902.07742", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Justin Fu", "Anoop Korattikara", "Sergey Levine", "Sergio Guadarrama"], "summaries": ["Rewards and language commands are more generalizable than policies: \"pick up the vase\" would make sense in any house, but the actions that navigate to and pick up a vase in one house would not work in another house. Based on this observation, this paper proposes that we have a dataset where for several (language command, environment) pairs, we are given expert demonstrations of how to follow the command in that environment. For each data point, we can use IRL to infer a reward function, and use that to train a neural net that can map from the language command to the reward function. Then, at test time, given a language command, we can convert it to a reward function, after which we can use standard deep RL techniques to get a policy that executes the command.\n\nThe authors evaluate on a 3D house domain with pixel observations, and two types of language commands: navigation and pick-and-place. During training, when IRL needs to be done, since deep IRL algorithms are computationally expensive they convert the task into a small, tabular MDP with known dynamics for which they can solve the IRL problem exactly, deriving a gradient that can then be applied in the observation space to train a neural net that given image observations and a language command predicts the reward. Note that this only needs to be done at training time: at test time, the reward function can be used in a new environment with unknown dynamics and image observations. They show that the learned rewards generalize to novel combinations of objects within a house, as well as to entirely new houses (though to a lesser extent)."], "venue": "ICLR 2019", "opinion": "I think the success at generalization comes primarily because of the MaxEnt IRL during training: it provides a lot of structure and inductive bias that means that the rewards on which the reward predictor is trained are \"close\" to the intended reward function. For example, in the navigation tasks, the demonstrations for a command like \"go to the vase\" will involve trajectories through the state of many houses that end up in the vase. For each demonstration, MaxEnt IRL \"assigns\" positive reward to the states in the demonstration, and negative reward to everything else. However, once you average across demonstrations in different houses, the state with the vase gets a huge amount of positive reward (since it is in all trajectories) while all the other states are relatively neutral (since they will only be in a few trajectories, where the agent needed to pass that point in order to get to the vase). So when this is \"transferred\" to the neural net via gradients, the neural net is basically \"told\" that high reward only happens in states that contain vases, which is a strong constraint on the learned reward. On the other hand, for \"move X to Y\" tasks, while the same argument suggests that you will have high reward on reaching X, the rest of the trajectories from X to Y will all be the same and MaxEnt IRL will learn a shaped reward that doesn't transfer well. 
So, I predict that on \"move X to Y\" tasks at test time, the agent will successfully pick up the object X, but fail to then move it to Y.\n\nTo be clear, this is not meant as a critique of the paper: indeed, I think when you want out-of-distribution generalization, you _have_ to do it by imposing structure/inductive bias, and this is a new way to do it that I hadn't seen before.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #51", "newsletter_category": "Learning human intent"}
{"id": "82ba4b4d80241554f816e858119a62a9", "title": "Risk-Aware Active Inverse Reinforcement Learning", "url": "http://arxiv.org/abs/1901.02161", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Daniel S. Brown", "Yuchen Cui", "Scott Niekum"], "summaries": ["This paper presents an algorithm that actively solicits demonstrations on states where it could potentially behave badly due to its uncertainty about the reward function. They use Bayesian IRL as their IRL algorithm, so that they get a distribution over reward functions. They use the most likely reward to train a policy, and then find a state from which that policy has high risk (because of the uncertainty over reward functions). They show in experiments that this performs better than other active IRL algorithms."], "venue": "AISTATS 2019", "opinion": "I don't fully understand this paper -- how exactly are they searching over states, when there are exponentially many of them? Are they sampling them somehow? It's definitely possible that this is in the paper and I missed it, I did skim it fairly quickly.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #42", "newsletter_category": "Learning human intent"}
{"id": "d070eb030f3b64f4ad1db400d6fe93c5", "title": "Adversarial Imitation via Variational Inverse Reinforcement Learning", "url": "http://arxiv.org/abs/1809.06404", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Ahmed H. Qureshi", "Byron Boots", "Michael C. Yip"], "summaries": ["A short history of deep IRL algorithms: [GAIL](https://arxiv.org/abs/1606.03476) introduced the idea of training a policy that fools a discriminator that tries to distinguish a policy from expert demonstrations, [GAN-GCL](https://arxiv.org/abs/1611.03852) showed how to recover a reward function from the discriminator, and [AIRL](https://arxiv.org/abs/1710.11248) ([AN #17](https://mailchi.mp/ad852629e45a/alignment-newsletter-17)) trains on (s, a, s') tuples instead of trajectories to reduce variance, and learns a reward shaping term separately so that it transfers better to new environments. This paper proposed that the reward shaping term be the _empowerment_ of a state. The empowerment of a state is the maximum mutual information between a sequence of actions from a state, and the achieved next state. Intuitively, this would lead to choosing to go to states from which you can reach the most possible future states. Their evaluation shows that they do about as well as AIRL in learning to imitate an expert, but perform much better in transfer tasks (where the learned reward function must generalize to a new environment)."], "venue": "ICLR 2019", "opinion": "I'm confused by this paper, because they only compute the empowerment for a _single action_. I would expect that in most states, different actions lead to different next states, which suggests that the empowerment will be the same for all states. Why then does it have any effect? And even if the empowerment was computed over longer action sequences, what is the reason that this leads to learning generalizable rewards? My normal model is that IRL algorithms don't learn generalizable rewards because they mostly use the reward to \"memorize\" the correct actions to take in any given state, rather than learning the underlying true reward. I don't see why empowerment would prevent this from happening. Yet, their experiments show quite large improvements, and don't seem particularly suited to empowerment.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #28", "newsletter_category": "Learning human intent"}
{"id": "362bf9cb2337ebc72f1d00dc858a9de2", "title": "Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization", "url": "https://papers.nips.cc/paper/2020/file/2bba9f4124283edd644799e0cecd45ca-Paper.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Sreejith Balakrishnan", "Quoc Phong Nguyen", "Bryan Kian Hsiang Low", "Harold Soh"], "summaries": ["In the description of Bayesian IRL above, Step 2 is a very expensive step, as it requires solving a full RL problem. Can we improve any of the other steps to reduce the amount of times we have to run step 2? This paper aims to improve step 1: rather than choosing the next reward _randomly_, we can choose one that we think will be most informative. The authors apply the framework of Bayesian optimization to put this into practice. I won’t explain it more here since the details are fairly technical and involved (and I didn’t read the paper closely enough to understand it myself). They did have to introduce a new kernel in order to handle the fact that reward functions are invariant to the addition of a potential function."], "venue": "NeurIPS 2020", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #132", "newsletter_category": "Learning human intent"}
{"id": "53f1923fa4293fb3ce99bca2863784a0", "title": "Explanation Augmented Feedback in Human-in-the-Loop Reinforcement Learning", "url": "http://arxiv.org/abs/2006.14804", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Lin Guan*", "Mudit Verma*", "Subbarao Kambhampati"], "summaries": ["This paper starts from a similar position as the highlighted paper: that we can improve on algorithms by having humans provide different kinds of feedback that help with learning. They ask humans to provide “explanations” to improve sample efficiency in deep RL, which in this case means asking a human to segment parts of the image observation that are important (similar to a saliency map). They use this to define auxiliary losses that incentivize the agent to be invariant to augmentations of the irrelevant parts of the image. Their empirical evaluation shows improvements in sample efficiency relative to simple good/bad evaluative feedback."], "venue": "Human in the Loop Learning Workshop at ICML 2020", "opinion": "The idea is cool, but the empirical results are not great. On Taxi, training with the reward signal and binary good/bad evaluative feedback takes 180k environment steps, and adding in explanations for a quarter of the steps brings it down to 130k environment steps. However, this seems like it would increase the human effort required by an order of magnitude or more, which seems way too high for the benefit provided.\n\nIt does seem to me that saliency explanations could contain a fair amount of information, and so you should be able to do better -- maybe a future algorithm will do so.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #110", "newsletter_category": "Learning human intent"}
{"id": "b70a304645220bcb58ce4cab78ee0a98", "title": "Learning a Behavioral Repertoire from Demonstrations", "url": "http://arxiv.org/abs/1907.03046", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Niels Justesen*", "Miguel Gonzalez Duque*", "Daniel Cabarcas Jaramillo", "Jean-Baptiste Mouret", "Sebastian Risi"], "summaries": ["They extend vanilla Imitation Learning, adding a behaviour encoding as an input to the policy to prevent it from learning an 'average' behaviour, but instead to learn different strategies with a single policy. Training data includes 7,777 human demonstrations of Terran army build-orders v/s the Zerg in StarCraft 2, for which build-order strategies are first extracted in a high-dimension semantically-meaningful space, and then reduced to two dimensions using PCA. At training time, each game's 2D code _b_ is augmented to the state, and supervised learning is applied to the policy: π(s, b) = a, where _a_ is the action following state _s_ in the human demonstration."], "venue": "arXiv", "opinion": "This is a neat, straightforward extension to vanilla Imitation Learning that learns a single policy capable of exhibiting a diversity of behaviours. It offers yet another example of how to create and exploit clusters in some specification space; here the specification encodes desired build order rather than a particular task. However, their empirically-motivated choice of PCA (over t-SNE) for dimensionality reduction did not illuminate how best to cluster for behavioural diversity.\n\nThe evaluation also demonstrates how one can use the UCB1 algorithm to make better behaviour choices over a series of episodes. However, while the algorithm permits changing the behaviour _during_ an episode, it is unclear how to make such a choice or whether the agent will perform well under such a circumstance. The work also doesn't compare against previous approaches, e.g. these approaches from [2017](http://papers.nips.cc/paper/7116-robust-imitation-of-diverse-behaviors.pdf) and [2018](https://arxiv.org/pdf/1802.09564.pdf), making it difficult to determine the value of this approach without a deep-dive.", "highlight": false, "read_more": "", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "Learning human intent"}
{"id": "41da1bc656d95b65085dd6271f51698f", "title": "A Framework and Method for Online Inverse Reinforcement Learning", "url": "http://arxiv.org/abs/1805.07871", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Saurabh Arora", "Prashant Doshi", "Bikramjit Banerjee"], "summaries": ["This paper introduces Incremental Inverse Reinforcement Learning (I2RL), where the agent continually gets new demonstrations from an expert, and has to update the estimate of the reward function in real time. The running example is a robot that has to navigate to a goal location without being seen by two guards that are patrolling. The robot needs to infer the rewards of the two guards in order to predict what they will do and plan around them. Since the guards are sometimes out of sight, we get demonstrations _with occlusion_, that is, some of the states in the demonstrations are hidden.\n\nIn the batch setting, this is solved with Latent Maximum Entropy IRL. To deal with occluded states Z, we define a probability distribution Pr(Z | Y, theta), where Y is the visible states and theta is the reward weights. Then, you can use expectation maximization to find theta -- in the expectation step, you compute feature expectations of the demonstrations (taking an expectation over hidden states Z), and in the maximization step, you compute the reward weights using the feature expectations as in standard maximum entropy IRL. The authors show how to extend this algorithm to the incremental setting where you only keep the reward weights, the feature expectations, and the number of past demonstrations as statistics. They show some convergence guarantees and evaluate on their running example of a robot that must evade guards."], "venue": "arXiv", "opinion": "IRL algorithms are often more computationally expensive than state-of-the-art RL algorithms, so I'm happy to see work that's trying to make it more realistic. That said, this paper focuses on settings where IRL is used to infer other agent's preferences so we can plan around them (as opposed to imitation learning) -- this setting seems not very important for AI alignment. I'm also very confused by the experiments -- it seems in Figure 2 that if you ignore previous optimization and initialize the reward with random weights, it does better. (It isn't ignoring all previous data, because it still has access to past feature expectations.) They don't comment on this in the paper, but my guess is that they ran more iterations of expectation maximization (which is why the learning duration is higher) and that's why they got better performance.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #8", "newsletter_category": "Learning human intent"}
{"id": "81187021e298c1c237b63b98164584ef", "title": "Syntax vs semantics: alarm better example than thermostat", "url": "https://www.alignmentforum.org/posts/bbw6c9as5STvWXAgB/syntax-vs-semantics-alarm-better-example-than-thermostat", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Stuart Armstrong"], "summaries": ["This post gives a new example that more clearly illustrates the points made in a [previous post](https://www.alignmentforum.org/posts/EEPdbtvW8ei9Yi2e8/bridging-syntax-and-semantics-empirically) ([AN #26](https://mailchi.mp/1ecd1b775703/alignment-newsletter-26))."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "Bridging syntax and semantics, empirically", "converted_with": "python", "newsletter_number": "AN #48", "newsletter_category": "Learning human intent"}
{"id": "0c29836dfb5fff04993c31b5909bdd85", "title": "Expert-augmented actor-critic for ViZDoom and Montezumas Revenge", "url": "http://arxiv.org/abs/1809.03447", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Michał Garmulewicz", "Henryk Michalewski", "Piotr Miłoś"], "summaries": ["The authors augment ACKTR (a natural gradient RL algorithm) with an additional term in the loss function which depends on expert data. In particular, policies which choose different actions from samples of 14 expert trajectories are penalised, with a coefficient that depends on the expert's advantage over the critic's current expectations. This allows the agent to perform well on Montezuma's Revenge and a ViZDoom maze, sometimes beating the experts it was trained on. It also discovered a new bug in Montezuma's Revenge which increases its score by a factor of 40."], "venue": "arXiv", "opinion": "I'm not convinced that this paper's method of utilising expert data is an improvement on other approaches, such as [this paper](https://arxiv.org/abs/1805.11592) in which an agent learns to play Montezuma's revenge from watching a Youtube video. However, it does seem to learn faster than most others, probably due to using ACKTR. I'd also expect it to be overfitting to the expert trajectories, but can't determine the extent to which this is the case (the authors claim that their agent can continue gameplay into the second world of Montezuma's Revenge despite only having expert trajectories for the first world, but don't provide metrics of success in the second world).", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #24", "newsletter_category": "Learning human intent"}
{"id": "e8c197acb4f520d624d892c0519e6735", "title": "Cycle-of-Learning for Autonomous Systems from Human Interaction", "url": "http://arxiv.org/abs/1808.09572", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Nicholas R. Waytowich", "Vinicius G. Goecks", "Vernon J. Lawhern"], "summaries": ["We've developed many techniques for learning behaviors from humans in the last few years. This paper categorizes them as learning from demonstrations (think imitation learning and IRL), learning from intervention (think [Safe RL via Human Intervention](https://arxiv.org/abs/1707.05173)), and learning from evaluation (think [Deep RL from Human Preferences](https://arxiv.org/abs/1706.03741)). They propose running these techniques in sequence, followed by pure RL, to train a full system. Intuitively, demonstrations are used to jumpstart the learning, getting to near-human performance, and then intervention and evaluation based learning allow the system to safely improve beyond human-level, since it can learn behaviors that humans can't perform themselves but can recognize as good, and then RL is used to improve even more."], "venue": "AI-HRI AAAI-FSS, 2018", "opinion": "The general idea makes sense, but I wish they had actually implemented it and seen how it worked. (They do want to test in robotics in future work.) For example, they talk about inferring a reward with IRL from demonstrations, and then updating it during the intervention and evaluation stages. How are they planning to update it? Does the format of the reward function have to be the same in all stages, and will that affect how well each method works?\n\nThis feels like a single point in the space of possible designs, and doesn't include all of the techniques I'd be interested in. What about active methods, combined with exploration methods in RL? Perhaps you could start with a hand-specified reward function, get a prior using [inverse reward design](https://arxiv.org/abs/1711.02827), start optimizing it using RL with curiosity, and have a human either intervene when necessary (if you want safe exploration) or have the RL system actively query the human at certain states, where the human can respond with demonstrations or evaluations.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #23", "newsletter_category": "Learning human intent"}
{"id": "b7e8809a453f45a876006a7005f01be2", "title": "Directed Policy Gradient for Safe Reinforcement Learning with Human Advice", "url": "http://arxiv.org/abs/1808.04096", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Hélène Plisnier", "Denis Steckelmacher", "Tim Brys", "Diederik M. Roijers", "Ann Nowé"], "summaries": ["One way that you could get advice from humans for RL would be to have the human provide a policy, which can be treated as a suggestion. In this paper, the authors propose to take such a policy, and incorporate it into a policy gradient algorithm by simply multiplying it with the policy chosen by the neural net to get a new policy that is in between the two. You can then run any on-policy RL algorithms using that policy."], "venue": "European Workshop on Reinforcement Learning 2018", "opinion": "I'm annoyed at some claims that this paper makes. First, they say that the algorithm can ignore wrong advice that the human gives, but in the deterministic case, it does not ignore the advice, it just learns that if it gets into situations where it has to follow the advice bad things happen, and so it avoids getting into such situations. (The stochastic case is a bit better, in that at convergence the agent will ignore the advice, but it will take much longer to converge, if at all.) Second, their experiment involves a gridworld with 5 macro-actions, and they call this a \"complicated environment with sparse rewards\" -- yet if you had a uniformly random policy, in expectation it would take 5^3 = 125 episodes before you found the optimal trajectory, which would then be strongly reinforced getting quick convergence.\n\nI do like the idea of providing advice by shaping the policy towards parts of the space that are better -- this would lead to better sample efficiency and safer exploration. I'd be pretty excited to see a paper that ran with this idea and had a more compelling story for how to get the advice policy from a human (specifying a policy is hard!) and better experiments that test the feasibility of the idea in a more complex environment.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #20", "newsletter_category": "Learning human intent"}
{"id": "56f0d822b9969e3dbb4dc64214de2a30", "title": "Beyond Winning and Losing: Modeling Human Motivations and Behaviors Using Inverse Reinforcement Learning", "url": "http://arxiv.org/abs/1807.00366", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Baoxiang Wang", "Tongfang Sun", "Xianjun Sam Zheng"], "summaries": ["How could you perform IRL without access to a simulator, or a model of the dynamics of the game, or the full human policy (only a set of demonstrations)? In this setting, as long as you have a large dataset of diverse human behavior, you can use Q-learning on the demonstrations to estimate separate Q-function for each feature, and then for a given set of demonstrations you can infer the reward for that set of demonstrations using a linear program that attempts to make all of the human actions optimal given the reward function. They define (manually) five features for World of Warcraft Avatar History (WoWAH) that correspond to different motivations and kinds of human behavior (hence the title of the paper) and infer the weights for those rewards. It isn't really an evaluation because there's no ground truth."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #14", "newsletter_category": "Learning human intent"}
{"id": "9ec0dbe8b6a0b350d26599b2266ab7e4", "title": "Policy Approval", "url": "https://www.lesswrong.com/posts/TeYro2ntqHNyQFx8r/policy-approval", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Abram Demski"], "summaries": ["Argues that even if we had the true human utility function (assuming it exists), an AI that optimizes it would still not be aligned. It also sketches out an idea for learning policies instead of utility functions that gets around these issues."], "venue": "LessWrong", "opinion": "I disagree with the post but most likely I don't understand it. My strawman of the post is that it is arguing for imitation learning instead of inverse reinforcement learning (which differ when the AI and human know different things), which seems wrong to me.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #13", "newsletter_category": "Learning human intent"}
{"id": "4b684b28e8679c703faf7901f0426014", "title": "Reinforcement Learning Under Moral Uncertainty", "url": "http://arxiv.org/abs/2006.04734", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Adrien Ecoffet", "Joel Lehman"], "summaries": ["Given that we don’t have a perfect ethical theory ready to load into an AI system, and we don’t seem poised to get one any time soon, it seems worth looking into approaches that can deal with _moral uncertainty_. Drawing on the literature on moral uncertainty in philosophy, the authors consider several methods by which multiple moral theories can be aggregated, such as averaging over the theories, making decisions through a voting system, and having the theories compete to control the agent’s overall actions. They implement several of these in RL agents, and test them on simple gridworld versions of various trolley problems. They find that all of the methods have advantages and disadvantages."], "venue": "arXiv", "opinion": "The central challenge here is that normalizing different moral theories so that they are comparable is <@difficult@>(@Research Agenda v0.9: Synthesising a human's preferences into a utility function@) (see Section 2.3). This issue plagues even computationally intractable idealizations like <@assistance games@>(@Cooperative Inverse Reinforcement Learning@) that can perform full Bayesian updating on different moral theories. I’d love to see better theoretical solutions for this challenge.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #116", "newsletter_category": "Machine ethics"}
{"id": "df2eb424a8bd416455a67e1021509d2a", "title": "Tech firms move to put ethical guard rails around AI", "url": "https://www.wired.com/story/tech-firms-move-to-put-ethical-guard-rails-around-ai", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tom Simonite"], "summaries": ["A description of the ethics boards that tech companies are putting up."], "venue": "Wired", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #7", "newsletter_category": "Machine ethics"}
{"id": "85e3640502b8db32d54b4a5da66864b7", "title": "How would you teach AI to be kind?", "url": "https://www.herox.com/EthicsNet", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Nell Watson"], "summaries": ["The EthicsNet Guardians Challenge is looking for suggestions on how to create a dataset that could be used to teach prosocial behavior. This is not aimed to answer difficult philosophical questions, but to teach an AI system general, simple prosocial behaviors, such as alerting someone who dropped their wallet but didn't notice. They have some ideas for how to achieve this, but are looking for more ideas before they actually start collecting a dataset."], "venue": "HeroX", "opinion": "One of the things I think about now is how to learn \"common sense\", and this seems very related (though not exactly the same). One of the hardest things to do with novel AI research is to collect a good dataset (if you don't have a simulator, anyway), so this seems like a great opportunity to get a good dataset for projects trying to tackle these sorts of issues, especially for somewhat fleshed out projects where you know what kind of dataset you'll need.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "Machine ethics"}
{"id": "556d1833be355b956a956ce4118cf79c", "title": "Fluid Annotation: An Exploratory Machine Learning–Powered Interface for Faster Image Annotation", "url": "https://ai.googleblog.com/2018/10/fluid-annotation-exploratory-machine.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Jasper Uijlings and Vittorio Ferrari"], "summaries": ["This post describes a system that can be used to help humans label images to generate labels for segmentation. The post summarizes it well: \"Fluid Annotation starts from the output of a strong semantic segmentation model, which a human annotator can modify through machine-assisted edit operations using a natural user interface. Our interface empowers annotators to choose what to correct and in which order, allowing them to effectively focus their efforts on what the machine does not already know.\""], "venue": "Google AI Blog", "opinion": "I'm excited about techniques like this that allow us to scale up AI systems with less human effort, by focusing human effort on the aspects of the problem that AI cannot yet solve, while using existing AI systems to do the low-level work (generating a shortlist of potential segmentations, in this case). This is an example of the paradigm of using AI to help humans more effectively create better AI, which is one of the key ideas underlying iterated amplification. (Though iterated amplification focuses on how to use existing AI systems to allow the human to provide a training signal for tasks _that humans cannot perform or evaluate themselves_.)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #30", "newsletter_category": "Machine learning"}
{"id": "d5ef26aefc97119ffd14c203d6973880", "title": "Meta-Learning MCMC Proposals", "url": "https://papers.nips.cc/paper/7669-meta-learning-mcmc-proposals.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tongzhou Wang", "Yi Wu", "David A. Moore", "and Stuart J. Russell"], "summaries": ["Probabilistic programming offers the potential to encode our domain knowledge about a particular area into structured probabilistic models. However, algorithms that answer queries about these models can be very slow, even if we are only looking for an approximate answer. MCMC algorithms are approximate algorithms that _propose_ values for variables, which are then either _accepted_ or _rejected_. They only work well if the proposer is able to suggest high probability values, which allows for quick convergence. So, in practice researchers use hand-tuned proposers with these algorithms. This paper suggests that we could instead use a neural net as a proposer. The net is trained to give good proposals on small sections of models (called _motifs_), with a range of possible parameter values that affect what should be proposed. Then, faced with a new model, the net is able to look at the motifs in the new model and propose good values for those motifs."], "venue": "NeurIPS 2018", "opinion": "This is another example of how we can leverage the messy power of deep learning to improve more structured algorithms that have guarantees.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "Machine learning"}
{"id": "89ba8f2ace4da531fbeb0289935d86f4", "title": "A Theory of Universal Learning", "url": "https://web.math.princeton.edu/~rvan/tri201106.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Olivier Bousquet", "Steve Hanneke", "Shay Moran", "Ramon van Handel", "Amir Yehudayoff"], "summaries": ["In machine learning, algorithms are presented with labeled examples of categories from a training dataset and the objective is to output a classifier that distinguishes categories on a validation dataset. The generalization ability of the classifier is usually measured by calculating the error rate of the classifications on the validation set. One popular way to display generalization capability as a function of training set size is to plot a learning curve. A learning curve is a function that outputs the performance of a learning algorithm as a function of the data distribution and training sample size. A faster decay rate for a learning curve indicates a better ability to generalize with fewer data.\n\n**In this paper, the authors characterize the conditions for a learning algorithm to have learning curves with a certain decay rate.** A learning curve is produced from the decay rate according to the formula 1/rate. The authors show that there are only three universal rates: exponential, linear, and arbitrarily slow decay. Moreover, the authors show there are problem classes that can be learned quickly in each instance but are slow to learn in the worst-case. This stands in contrast to classical results which analyze only the worst-case performance of learning algorithms. This produces pessimistic bounds because the guarantee must hold for all possible data distributions. This is often stronger than what is necessary for practice. Thus, by looking at rates instead of the worst-case learning curve, the authors show that it is possible to learn more efficiently than what is predicted by classical theory. "], "venue": "Author's Website", "opinion": "This paper is mathematically sophisticated, but full of examples to illustrate the main points of the theory. More generally, work towards non-uniform bounds has become a popular topic recently as a result of classical generalization theory's inability to explain the success of deep learning and phenomena such as double-descent. These results could allow for progress in explaining the generalization capability of over-parameterized models, such as neural networks. Additionally, the theory presented here could lead to more efficient algorithms that take advantage of potential speedups over empirical risk minimization proved in the paper.", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "Machine learning"}
{"id": "81f7bb046e36aedb8a57d1439fbf63ea", "title": "Introducing TensorFlow Probability", "url": "https://medium.com/tensorflow/introducing-tensorflow-probability-dca4c304e245", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Josh Dillon", "Mike Shwe", "and Dustin Tran"], "summaries": ["Tensorflow now also supports probabilistic programming."], "venue": "Medium", "opinion": "Probabilistic programming is becoming more and more important in machine learning, and is in some sense a counterpart to deep learning -- it lets you have probability distributions over parameters (as opposed to the point estimates provided by neural nets), but inference is often intractable and must be performed approximately, and even then you are often limited to smaller models than with deep learning. It's interesting to have both of these provided by a single library -- hopefully we'll see applications that combine both approaches to get the best of both worlds. In particular, probabilistic programming feels more principled and amenable to theoretical analysis, which may make it easier to reason about safety.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #3", "newsletter_category": "Machine learning"}
{"id": "1dcbc11d304d7bc384eb07d1d6276b0c", "title": "Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI", "url": "https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Lucas Perry and Evan Hubinger"], "summaries": ["This podcast covers a lot of topics, with special focus on <@Risks from Learned Optimization in Advanced Machine Learning Systems@> and <@An overview of 11 proposals for building safe advanced AI@>."], "venue": "FLI Website", "opinion": "My summary is light on detail because many of the topics have been highlighted before in this newsletter, but if you aren’t familiar with them the podcast is a great resource for learning about them.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #107", "newsletter_category": "Mesa optimization"}
{"id": "4f4b0b67097db0f8b20bd230b25b5a62", "title": "AXRP #4 - Risks from Learned Optimization", "url": "https://axrp.net/episode/2021/02/17/episode-4-risks-from-learned-optimization-evan-hubinger.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Daniel Filan and Evan Hubinger"], "summaries": ["This podcast delves into a bunch of questions and thoughts around <@mesa optimization@>(@Risks from Learned Optimization in Advanced Machine Learning Systems@). Here are some of the points that stood out to me (to be clear, many of these have been covered in this newsletter before, but it seemed worth it to state them again):\n\n- A model is a mesa optimizer if it is a _mechanistic_ optimizer, that is, it is executing an algorithm that performs search for some objective.\n- We need to focus on mechanistic optimizers instead of things that behave as though they are optimizing for some goal, because those two categories can have very different generalization behavior, and we are primarily interested in how they will generalize.\n- Humans do seem like mesa optimizers relative to evolution (though perhaps not a central example). In particular, it seems accurate to say that humans look at different possible strategies and select the ones which have good properties, and thus we are implementing a mechanistic search algorithm.\n- To reason about whether machine learning will result in these mechanistic optimizers, we need to reason about the _inductive biases_ of machine learning. We mostly don’t yet know how likely they are.\n- Evan expects that in powerful neural networks there will exist a combination of neurons that encode the objective, which we might be able to find with interpretability techniques.\n- Even if training on a myopic base objective, we might expect the mesa objective to be non-myopic, as the non-myopic objective \"pursue X\" is simpler than the myopic objective \"pursue X until time T\".\n- We can’t rely on generalization bounds to guarantee performance, since in practice there is always some distribution shift (which invalidates those bounds).\n- Although it is usually phrased in the train/test paradigm, mesa optimization is still a concern in an online learning setup, since at every time we are interested in whether the model will generalize well to the next data point it sees.\n- We will probably select for simple ML models (in the sense of short description length) but not for low inference time, such that mechanistic optimizers are more likely than models that use more space (the extreme version being lookup tables).\n- If you want to avoid mesa optimizers entirely (rather than aligning them), you probably need to have a pretty major change from the current practice of AI, as with STEM AI and Microscope AI (explained <@here@>(@An overview of 11 proposals for building safe advanced AI@)).\n- Even in a <@CAIS scenario@>(@Reframing Superintelligence: Comprehensive AI Services as General Intelligence@) where we have (say) a thousand models doing different tasks, each of those tasks will still likely be complex enough to lead to the models being mesa optimizers.\n- There are lots of mesa objectives which would lead to deceptive alignment relative to corrigible or internalized alignment, and so we should expect deceptive alignment a priori."], "venue": "AXRP Podcast", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", 
"newsletter_number": "AN #139", "newsletter_category": "Mesa optimization"}
{"id": "2ad9dfeb3bbc332a581374978e9c3571", "title": "Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning", "url": "http://arxiv.org/abs/1910.10897", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Tianhe Yu*", "Deirdre Quillen*", "Zhanpeng He*", "Ryan Julian", "Karol Hausman", "Chelsea Finn", "Sergey Levine"], "summaries": ["\"Meta-learning\" or \"learning to learn\" refers to the problem of transferring insight and skills from one set of tasks to be able to quickly perform well on new tasks. For example, you might want an algorithm that trains on some set of platformer games to pick up general skills that it can use to quickly learn new platformer games.\n\nThis paper introduces a new benchmark, \"Meta World\", for evaluating meta-learning algorithms. The benchmark consists of 50 simulated robotic manipulation tasks that require a robot arm to do a combination of reaching, pushing and grasping. The benchmark tests the ability of algorithms to learn to do a single task well, learn one multi-task policy that trains and performs well on several tasks at once, and adapt to new tasks after training on a number of other tasks. The paper argues that unlike previous meta-learning evaluations, the task distribution in this benchmark is very broad while still having enough shared structure that meta-learning is possible.\n\nThe paper evaluates existing multi-task learning and meta-learning algorithms on this new benchmark. In meta-learning, it finds that different algorithms do better depending on how much training data they're given. In multi-task learning, it finds that the algorithm that performs best uses multiple \"heads\", or ends of neural networks, one for each task. It also finds that algorithms that are \"off-policy\"-- that estimate the value of actions other than the one that the network is currently planning to take-- perform better on multi-task learning than \"on-policy\" algorithms."], "venue": "arXiv", "opinion": "I really like the idea of having a standardized benchmark for evaluating meta-learning algorithms. There's a lot of room for improvement in performance on the benchmark tasks and it would be cool if this incentivized algorithm development. As with any benchmark, I worry that it is too narrow to capture all the nuances of potential algorithms; I wouldn't be surprised if some meta-learning algorithm performed poorly here but did well in some other domain.", "highlight": false, "read_more": "", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #74", "newsletter_category": "Meta learning"}
{"id": "5e7e094e7ba8fa62a5cc646db5f3d734", "title": "Meta-Learning: A Survey", "url": "http://arxiv.org/abs/1810.03548", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Joaquin Vanschoren"], "summaries": ["This taxonomy of meta-learning classifies approaches by the main type of meta-data they learn from:\n1. Evaluations of other models on related tasks\n2. Characterisations of the tasks at hand (and a similarity metric between them)\n3. The structures and parameters of related models\nVanschoren explores a number of different approaches in each category."], "venue": "AAMAS 2019", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #29", "newsletter_category": "Meta learning"}
{"id": "12b635c2c01e7cac359e96d328eadd2b", "title": "Understanding meta-trained algorithms through a Bayesian lens", "url": "https://medium.com/@deepmindsafetyresearch/understanding-meta-trained-algorithms-through-a-bayesian-lens-5042a1acc1c2", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Vladimir Mikulik*", "Grégoire Delétang*", "Tom McGrath*", "Tim Genewein*", "Miljan Martic", "Shane Legg", "Pedro A. Ortega"], "summaries": ["The previous paper suggested that meta-learning can implement optimal reasoning processes in theory. Does it work in practice? This paper sets out to answer this question by studying some simple prediction and decision-making tasks.\n\nFor prediction, we consider agents that are trained on a family of distributions (e.g. Bernoulli distributions whose parameter is chosen from a Beta distribution) to predict the probability distribution after seeing a sample generated from it. For decision-making, we consider two-armed bandit problems (where again there is a distribution over the parameters of the problem). These problems were chosen because their optimal solutions can be calculated analytically.\n\nThe authors train neural nets with memory to perform well on these tasks (as discussed in the previous paper) and find that they do indeed behave optimally, achieving effectively the best possible performance. They then try to investigate whether they are implementing the same reasoning algorithm as the analytic Bayes-optimal solution. To do this, they see whether they can train a second neural net to map the hidden states (memory) of the agent to the states in the Bayes-optimal solution, and vice versa. (One way to think of this: can you simulate the Bayes-optimal algorithm using the observation encodings from the RNN, and vice versa?)\n\nThey find that they _can_ learn a good mapping from agent states to Bayes-optimal states, but _cannot_ learn a good mapping from Bayes-optimal states to agent states. It seems likely that the agent has states that encode more information than is necessary, and so the minimal information stored by the Bayes-optimal algorithm is insufficient to reconstruct the agent states."], "venue": "NeurIPS 2020", "opinion": "I suspect that in these simple tasks the posterior distribution over the parameters θ maintained by the Bayes-optimal algorithm is a _minimal_ sufficient statistic, that is, _any_ optimal policy must have states that are sufficient to reconstruct the information stored by the Bayes-optimal algorithm. So it makes sense that, for an agent with optimal behavior, the agent’s states could be used to simulate the Bayes-optimal states. I don’t think this tells us that much about the algorithm the network is implementing.\n\nNote that I am quite happy to see work investigating the sorts of reasoning processes that neural networks have learned. While I don’t think the specific results in this paper have told us that much, I’m excited to see this line of work scaled up to more complex tasks, where agents may not reach optimal behavior, or might do so by learning heuristics that _don’t_ encode all of the information that the Bayes-optimal algorithm would use.", "highlight": false, "read_more": "Paper: Meta-trained agents implement Bayes-optimal agents", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #139", "newsletter_category": "Meta learning"}
{"id": "eeccecc0ced4c43e57db6ade1e0420a8", "title": "ML Writing Month May 2018", "url": "https://docs.google.com/document/d/1SnJe07u0oESoAT2BPdj_0t6xaIRYmk0RN0DlrHyqMEo/edit", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Cody Wild"], "summaries": ["The author wrote up a summary of an ML paper every day in May, which have all been collected in this doc."], "venue": "Google Docs", "opinion": "These summaries seem really good to me (probably higher quality than a typical summary that I write), but are often on topics I'm not an expert in (eg. GANs) so it's hard for me to evaluate. The one paper I knew well ([Inverse Reward Design](https://arxiv.org/abs/1711.02827)) had a good summary.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #24", "newsletter_category": "Miscellaneous (AI)"}
{"id": "a52eb34abc23e0db1a34259f11c26e0d", "title": "Unreproducible Research is Reproducible", "url": "http://proceedings.mlr.press/v97/bouthillier19a.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Xavier Bouthillier", "Cesar Laurent", "Pascal Vincent"], "summaries": ["This paper argues that despite the growing popularity of sharing code, machine learning research has a problem with reproducibility. It makes the distinction between the reproducibility of **methods/results**, which can be achieved by fixing random seeds and sharing code, and the reproducibility of **findings/conclusions**, which requires that different experimental setups (or at least random seeds) lead to the same conclusion.\n\nSeveral popular neural network architectures are trained on several image classification datasets several times with different random seeds determining the weight initialization and sampling of data. The relative rankings of the architectures with respect to the test accuracy are found to vary relevantly with the random seed for all data sets, as well as between data sets.\n\nThe authors then argue that while the reproducibility of methods can help with speeding up **exploratory research**, the reproducibility of findings is necessary for **empirical research** from which robust conclusions can be drawn. They claim that exploratory research that is not based on robust findings can get inefficient, and so call for the machine learning community to do more empirical research."], "venue": "ICML 2019", "opinion": "I really like that this paper not just claims that there is a problem with reproducibility, but demonstrates this more rigorously using an experiment. More robust empirical findings seem quite important for getting to a better understanding of machine learning systems in the medium term. Since this understanding is especially important for safety relevant research, where exploratory research seems more problematic by default, I am excited for a push in that direction.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #66", "newsletter_category": "Miscellaneous (AI)"}
{"id": "b9b049c73f398d2012265baf74aa47b2", "title": "2021 AI Index Report", "url": "https://aiindex.stanford.edu/report/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Daniel Zhang", "Saurabh Mishra", "Erik Brynjolfsson", "John Etchemendy", "Deep Ganguli", "Barbara Grosz", "Terah Lyons", "James Manyika", "Juan Carlos Niebles", "Michael Sellitto", "Yoav Shoham", "Jack Clark", "Raymond Perrault"], "summaries": ["The AI Index Report is a project to track and distill data related to artificial intelligence. One central theme the report focuses on is the effects of COVID on AI research direction. The report highlights significant increases in spending on drug development, 4.5 times that in 2019. The report also focuses a spotlight on the relative lack of AI ethics benchmarks. This could pose a significant problem as surveillance technologies become an increasingly mature technology. Beyond these broad themes, there's data on publication trends, politics, diversity, and more in the 222-page report. Additionally, a significant amount of data is publicly available or interactive. "], "venue": "AI Index Website", "opinion": "This is well presented and you can glean a lot from looking at the introductory sections. If you choose to dive into a particular topic, charts and methodology are presented in a clear manner with nice hyperlinking to make navigation relatively painless. There is also an [interactive](https://aiindex.stanford.edu/vibrancy/) visualization that allows for cross-country comparison according to user-defined metrics. Once again, very well presented. ", "highlight": false, "read_more": "Full report PDF", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #144", "newsletter_category": "Miscellaneous (AI)"}
{"id": "7985e00ad2d95e0bc9a2179c9a48a02b", "title": "Explainable AI, Sparse Representations, and Signals", "url": "https://www.notion.so/Explainable-AI-Sparse-Representations-and-Signals-fedf1522aff4415d8f156e1f94bb80c5", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["So far, we have built AI systems that store knowledge _symbolically_ or in a _distributed fashion_ (with neural nets being the latter). While the distributed form allows us to learn knowledge and rules automatically, it is much harder to understand and interpret than symbolically represented knowledge. This post argues that the main difference is in the **sparsity** of the learned knowledge. Of course, with more \"sparse\" knowledge, it should be easier for us to understand the internal workings of the AI system, since we can ignore the pruned connections. However, the author also argues that sparse knowledge will help 'guide the search for models and agents that can be said to \"learn\" but also \"reason\"'. Given that AGI will likely involve finding good representations for the world (in the sense of unsupervised learning), then sparse learning can be thought of as a bias towards finding better [bases](https://en.wikipedia.org/wiki/Basis_(linear_algebra)) for world models, that are more likely to be conceptually clean and more in line with Occam's razor.\n\nIn a postscript, the author considers arguments for AI risk. Notably, there isn't any consideration of goal-directedness or alignment failures; the worry is that we will start applying superhuman AI systems to superhuman tasks, and we won't know how to deal with these situations."], "venue": "Notion", "opinion": "Sparsity seems like a good objective to shoot for in order to ensure explainability. I'm less convinced that it's worthwhile for representation learning: I doubt humans have any sort of \"sparse learning\" bias; I think sparsity of knowledge is a natural consequence of having to understand a very complex world with a very small brain. (Whereas current ML systems only have to understand much simpler environments.)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #61", "newsletter_category": "Miscellaneous (AI)"}
{"id": "56d6a2775040a06f31f33372f7e6a976", "title": "Making it easier to discover datasets", "url": "https://www.blog.google/products/search/making-it-easier-discover-datasets/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Natasha Noy"], "summaries": ["Google has launched Dataset Search, a tool that lets you search for datasets that you could then use in research."], "venue": "Google Blog", "opinion": "I imagine that this is primarily targeted at data scientists aiming to learn about the real world, and not ML researchers, but I wouldn't be surprised if it was helpful for us as well. MNIST and ImageNet are both present, and a search for \"self-driving cars\" turned up some promising-looking links that I didn't investigate further.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #23", "newsletter_category": "Miscellaneous (AI)"}
{"id": "cccaa1f1439fa41d185e3b7d4980645e", "title": "State of AI Report 2021", "url": "https://www.stateof.ai/2021-report-launch.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Nathan Benaich and Ian Hogarth"], "summaries": ["As with <@past@>(@State of AI@) <@reports@>(@State of AI Report 2020@), I’m not going to summarize the entire thing; instead you get the high-level themes that the authors identified:\n\n1. AI is stepping up in more concrete ways, including in mission critical infrastructure.\n2. AI-first approaches have taken biology by storm (and we aren’t just talking about AlphaFold).\n3. Transformers have emerged as a general purpose architecture for machine learning in many domains, not just NLP.\n4. Investors have taken notice, with record funding this year into AI startups, and two first ever IPOs for AI-first drug discovery companies, as well as blockbuster IPOs for data infrastructure and cybersecurity companies that help enterprises retool for the AI-first era.\n5. The under-resourced AI-alignment efforts from key organisations who are advancing the overall field of AI, as well as concerns about datasets used to train AI models and bias in model evaluation benchmarks, raise important questions about how best to chart the progress of AI systems with rapidly advancing capabilities.\n6. AI is now an actual arms race rather than a figurative one, with reports of recent use of autonomous weapons by various militaries.\n7. Within the US-China rivalry, China's ascension in research quality and talent training is notable, with Chinese institutions now beating the most prominent Western ones.\n8. There is an emergence and nationalisation of large language models."], "venue": "State of AI Website", "opinion": "In <@last year’s report@>(@State of AI Report 2020@), I said that their 8 predictions seemed to be going out on a limb, and that even 67% accuracy woud be pretty impressive. This year, they scored their predictions as 5 “Yes”, 1 “Sort of”, and 2 “No”. That being said, they graded “The first 10 trillion parameter dense model” as “Yes”, I believe on the basis that Microsoft had run a couple of steps of training on a 32 trillion parameter dense model. I definitely interpreted the prediction as saying that a 10 trillion parameter model would be trained _to completion_, which I do not think happened publicly, so I’m inclined to give it a “No”. Still, this does seem like a decent track record for what seemed to me to be non-trivial predictions. This year's predictions seem similarly \"out on a limb\" as last year's.\n\nThis year’s report included one-slide summaries of many papers I’ve summarized before. I only found one major issue -- the slide on <@TruthfulQA@>(@TruthfulQA: Measuring How Models Mimic Human Falsehoods@) implies that larger language models are less honest _in general_, rather than being more likely to imitate human falsehoods. This is actually a pretty good track record, given the number of things they summarized where I would have noticed if there were major issues.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #167", "newsletter_category": "Miscellaneous (AI)"}
{"id": "17833ea0ef3f6e11aaba9047422459ae", "title": "State of AI Report 2020", "url": "https://www.stateof.ai/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Nathan Benaich and Ian Hogarth"], "summaries": ["The third <@State of AI@> report is out! I won’t go into details here since there is really quite a lot of information, but I recommend scrolling through the presentation to get a sense of what’s been going on. I was particularly interested in their 8 predictions for the next year: most of them seemed like they were going out on a limb, predicting something that isn’t just “the default continues”. On last year’s 6 predictions, 4 were correct, 1 was wrong, and 1 was technically wrong but quite close to being correct; even this 67% accuracy would be pretty impressive on this year’s 8 predictions. (It does seem to me that last year’s predictions were more run-of-the-mill, but that might just be hindsight bias.)"], "venue": "", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #120", "newsletter_category": "Miscellaneous (AI)"}
{"id": "477b88414ed31386f4827a829631285d", "title": "Predicting Slow Judgments", "url": "https://ought.org/projects/judgments", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Andreas Stuhlmueller", "Owain Evans", "Tom McGrath", "Zac Kenton", "Chris Cundy", "Ryan Carey", "Andrew Schreiber", "Neal Jean", "Girish Sastry"], "summaries": ["A joint project between Ought and FHI. The goal is to predict judgments that humans make after considering a question for many hours. However, that data is expensive to collect, so they also collect a lot of labels of how humans make judgments in limited time, and use those as noisy labels for the slow, deliberative judgments. They have just released Think Again (https://thinkagain.ought.org/), where you can make fast and slow judgments on Fermi estimates, political statements, or ML papers. They are especially looking for people to make judgments on ML papers. This will take an hour or two, but they'll recommend papers that they think you'll like based on your judgments of the papers they showed you."], "venue": "", "opinion": "I've summarized the post pretty well, I think -- mainly I'd encourage you to play the Think Again game, it's pretty fun and you generate useful data for them.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "Recon #4", "newsletter_category": "Miscellaneous (alignment)"}
{"id": "cab18afc3835291f90fbd85561a7e69d", "title": "The Precipice: Existential Risk and the Future of Humanity", "url": "https://www.amazon.com/Precipice-Existential-Risk-Future-Humanity/dp/0316484911", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Toby Ord"], "summaries": ["This book argues that humanity is in a special stage of its development: it is on the _precipice_, a narrow time during which we have enough power to destroy ourselves, but not enough wisdom to have mitigated such risks. It first argues that existential risk would be very important to reduce (for all the standard reasons), and then considers many different kinds of existential risks, finding that natural ones (asteroids, supervolcanoes, stellar explosions) are small relative to anthropogenic risks, both current (nuclear war, climate change, environmental destruction) and future (engineered pandemics, unaligned AI, dystopian scenarios). I'll focus primarily on the part about AI risk, as well as some of the comments on existential risk in general.\n\nThe AI risk presentation in the book was similar to that in [Superintelligence](https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742): it argues for risk from goal-directed AI systems (though the terminology used in the book is different). It first demonstrates the strong progress in deep learning, and then notes that expert surveys estimate that AGI is more likely than not to arrive in the next century. It then notes that we don't know how to specify a reward function for an AI system (even with e.g. inverse reinforcement learning), and to the extent that we get it wrong, it pits us in competition against a superintelligent adversary. Ideas like switching off the AI system wouldn't work, due to convergent instrumental subgoals like survival.\n\nIt also considers some obvious objections, including the very reasonable objection that \"AI researchers won't build something that will kill them\". However, Toby is still worried, citing that due to the unilateralist curse unaligned AGI might still be built by the most optimistic researchers, and in any case the personal benefits to the researchers might justify the risk of misalignment to them personally (though it would not be justified for the world as a whole).\n\nThe book then spends some time discussing _risk factors_, which are things that do not _directly_ lead to existential risks, but indirectly _exacerbate_ other existential risks, making them more likely. 
For example, great power war seems like a risk factor: it isn't going to cause an existential catastrophe by itself, but it increases the likelihood that we use risky technologies like bioweapons and AI that could then cause an existential catastrophe.\n\nThe book also has lots of useful insights about existential risks in general, which then also apply to AI risk: for example, risks that strike sooner should be prioritized (since the later risks can be dealt with later), risks that are more sudden will be more important to focus on (since we won't be able to build support as the risk gradually comes in), and risks that are \"sharper\" will be more neglected since there won't be as many \"warning shots\"."], "venue": "Amazon", "opinion": "I enjoyed this book more than I thought I would: it had a lot of novel content for me, and I liked the explanations and comparisons across different kinds of existential risks (something that I hadn't really seen a single unified perspective on), and I especially liked the constant focus on what we do and don't know -- it felt more like a research paper (albeit in a conversational style) than a popular book, and was similarly information-dense.\n\nOn the AI part specifically, I liked that one of the endnotes cashed out powerful AI systems using model-based RL: this indeed seems like the thing that is closest to the classic expected utility maximizer, so the conclusions make a bit more sense. You still have to wonder how exactly the model is learned, and how exactly the AI system becomes good at using the model to find good actions, but at least under those two assumptions you would have all the standard convergent instrumental subgoals. In contrast, with model-free RL, the default expectation is that the RL agent needs to try things multiple times before it can learn to do them again, so it's less clear how it starts doing novel things. It seems that model-based and model-free RL are pretty similar so the distinction doesn't matter in practice, but at least conceptually it's a lot easier to reason about the model-based system (at least in the context of AI risk).\n\nToby gives a 1 in 10 chance of existential catastrophe from AI in the next century (more than half of his total of 1 in 6), which decomposes into a 1 in 2 chance of AGI this century, and 1 in 5 of it leading to existential catastrophe. This is a bit more pessimistic than Paul's <@estimate@>(@Conversation with Paul Christiano@) of 10% EV loss (which was over all time, not just this century), which is in turn a bit more pessimistic than the 1 in 10 chance that I <@estimated@>(@Conversation with Rohin Shah@) (and am now forever anchored on), which was over all time _and_ conditional on no additional effort from longtermists. But I wouldn't read too much into this -- 10 is a nice round number, and that probably played a big role in why I chose it. I certainly don't feel calibrated enough to easily tell the difference between 1 in 5 and 1 in 20 on a question of this complexity.\n\nI am very happy about this trend of people actually stating numbers: it's a lot easier to narrow down on the important disagreements when people put down numbers, even if they're completely made up. I'd really like to see numbers from people who have larger disagreements (as I expect would be the case with e.g. 
MIRI researchers).", "highlight": true, "read_more": "FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #93", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "e26b0fb14a80b74b005d245cb3cd5f9c", "title": "Human Compatible: Artificial Intelligence and the Problem of Control", "url": "https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Stuart Russell"], "summaries": ["_Since I am aiming this summary for people who are already familiar with AI safety, my summary is substantially reorganized from the book, and skips large portions of the book that I expect will be less useful for this audience. If you are not familiar with AI safety, **note that I am skipping many arguments and counterarguments in the book that are aimed for you**. I'll refer to the book as \"HC\" in this newsletter._\n\nBefore we get into details of impacts and solutions to the problem of AI safety, it's important to have a model of how AI development will happen. Many estimates have been made by figuring out the amount of compute needed to run a human brain, and figuring out how long it will be until we get there. HC doesn't agree with these; it suggests the bottleneck for AI is in the algorithms rather than the hardware. We will need several conceptual breakthroughs, for example in language or common sense understanding, cumulative learning (the analog of cultural accumulation for humans), discovering hierarchy, and managing mental activity (that is, the metacognition needed to prioritize what to think about next). It's not clear how long these will take, and whether there will need to be more breakthroughs after these occur, but these seem like necessary ones.\n\nWhat could happen if we do get beneficial superintelligent AI? While there is a lot of sci-fi speculation that we could do here, as a weak lower bound, it should at least be able to automate away almost all existing human labor. Assuming that superintelligent AI is very cheap, most services and many goods would become extremely cheap. Even many primary products such as food and natural resources would become cheaper, as human labor is still a significant fraction of their production cost. If we assume that this could bring up everyone's standard of life up to that of the 88th percentile American, that would result in nearly a _tenfold_ increase in world GDP per year. Assuming a 5% discount rate per year, this corresponds to $13.5 _quadrillion_ net present value. Such a giant prize removes many reasons for conflict, and should encourage everyone to cooperate to ensure we all get to keep this prize.\n\nOf course, this doesn't mean that there aren't any problems, even with AI that does what its owner wants. Depending on who has access to powerful AI systems, we could see a rise in automated surveillance, lethal autonomous weapons, automated blackmail, fake news and behavior manipulation. Another issue that could come up is that once AI is better than humans at all tasks, we may end up delegating everything to AI, and lose autonomy, leading to _human enfeeblement_.\n\nThis all assumes that we are able to control AI. However, we should be cautious about such an endeavor -- if nothing else, we should be careful about creating entities that are more intelligent than us. After all, the gorillas probably aren't too happy about the fact that their habitat, happiness, and existence depends on our moods and whims. 
For this reason, HC calls this the _gorilla problem_: specifically, \"the problem of whether humans can maintain their supremacy and autonomy in a world that includes machines with substantially greater intelligence\". Of course, we aren't in the same position as the gorillas: we get to _design_ the more intelligent \"species\". But we should probably have some good arguments explaining why our design isn't going to succumb to the gorilla problem. This is especially important in the case of a fast intelligence explosion, or _hard takeoff_, because in that scenario we do not get any time to react and solve any problems that arise.\n\nDo we have such an argument right now? Not really, and in fact there's an argument that we _will_ succumb to the gorilla problem. The vast majority of research in AI and related fields assumes that there is some definite, known _specification_ or _objective_ that must be optimized. In RL, we optimize the _reward function_; in search, we look for states matching a _goal criterion_; in statistics, we minimize _expected loss_; in control theory, we minimize the _cost function_ (typically deviation from some desired behavior); in economics, we design mechanisms and policies to maximize the _utility_ of individuals, _welfare_ of groups, or _profit_ of corporations. This leads HC to propose the following standard model of machine intelligence: _Machines are intelligent to the extent that their actions can be expected to achieve their objectives._ However, if we put in the wrong objective, the machine's obstinate pursuit of that objective would lead to outcomes we won't like.\n\nConsider for example the content selection algorithms used by social media, typically maximizing some measure of engagement, like click-through. Despite their lack of intelligence, such algorithms end up changing the user's preference so that they become more predictable, since more predictable users can be given items they are more likely to click on. In practice, this means that users are pushed to become more extreme in their political views. Arguably, these algorithms have already caused much damage to the world.\n\nSo the problem is that we don't know how to put our objectives inside of the AI system so that when it optimizes its objective, the results are good for us. Stuart calls this the \"King Midas\" problem: as the legend goes, King Midas wished that everything he touched would turn to gold, not realizing that \"everything\" included his daughter and his food, a classic case of a <@badly specified objective@>(@Specification gaming examples in AI@). In some sense, we've known about this problem for a long time, both from King Midas's tale, and in stories about genies, where the characters inevitably want to undo their wishes.\n\nYou might think that we could simply turn off the power to the AI, but that won't work, because for almost any definite goal, the AI has an incentive to stay operational, just because that is necessary for it to achieve its goal. This is captured in what may be Stuart's most famous quote: _you can't fetch the coffee if you're dead_. This is one of a few worrisome [convergent instrumental subgoals](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf).\n\nWhat went wrong? The problem was the way we evaluated machine intelligence, which doesn't take into account the fact that machines should be useful for _us_. HC proposes: _Machines are **beneficial** to the extent that **their** actions can be expected to achieve **our** objectives_. 
But with this definition, instead of our AI systems optimizing a definite, wrong objective, they will _also_ be uncertain about the objective, since we ourselves don't know what our objectives are. HC expands on this by proposing three principles for the design of AI systems, that I'll quote here in full:\n\n1. _The machine’s only objective is to maximize the realization of human preferences._\n2. _The machine is initially uncertain about what those preferences are._\n3. _The ultimate source of information about human preferences is human behavior._\n\n[Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137) provides a formal model of an _assistance game_ that showcases these principles. You might worry that an AI system that is uncertain about its objective will not be as useful as one that knows the objective, but actually this uncertainty is a feature, not a bug: it leads to AI systems that are deferential, that ask for clarifying information, and that try to learn human preferences. [The Off-Switch Game](https://arxiv.org/abs/1611.08219) shows that because the AI is uncertain about the reward, it will let itself be shut off. These papers are discussed later in this newsletter.\n\nSo that's the proposed solution. You might worry that the proposed solution is quite challenging: after all, it requires a shift in the entire way we do AI. What if the standard model of AI can deliver more results, even if just because more people work on it? Here, HC is optimistic: the big issue with the standard model is that it is not very good at learning our preferences, and there's a huge economic pressure to learn preferences. For example, I would pay a lot of money for an AI assistant that accurately learns my preferences for meeting times, and schedules them completely autonomously.\n\nAnother research challenge is how to actually put principle 3 into practice: it requires us to connect human behavior to human preferences. [Inverse Reward Design](https://arxiv.org/abs/1711.02827) and <@Preferences Implicit in the State of the World@>(@Learning Preferences by Looking at the World@) are example papers that tackle portions of this. However, there are _lots_ of subtleties in this connection. We need to use _Gricean semantics_ for language: when we say X, we do not mean the literal meaning of X: the agent must also take into account the fact that we bothered to say X, and that we didn't say Y. For example, I'm only going to ask for the agent to buy a cup of coffee if I believe that there is a place to buy reasonably priced coffee nearby. If those beliefs happen to be wrong, the agent should ask for clarification, rather than trudge hundreds of miles or pay hundreds of dollars to ensure I get my cup of coffee.\n\nAnother problem with inferring preferences from behavior is that humans are nearly always in some deeply nested plan, and many actions don't even occur to us. Right now I'm writing this summary, and not considering whether I should become a fireman. I'm not writing this summary because I just ran a calculation showing that this would best achieve my preferences, I'm doing it because it's a subpart of the overall plan of writing this bonus newsletter, which itself is a subpart of other plans. The connection to my preferences is very far up. How do we deal with that fact?\n\nThere are perhaps more fundamental challenges with the notion of \"preferences\" itself. 
For example, our _experiencing self_ and our _remembering self_ may have different preferences -- if so, which one should our agent optimize for? In addition, our preferences often change over time: should our agent optimize for our current preferences, even if it knows that they will predictably change in the future? This one could potentially be solved by learning _meta-preferences_ that dictate what kinds of preference change processes are acceptable.\n\nAll of these issues suggest that we need work across many fields (such as AI, cognitive science, psychology, and neuroscience) to reverse-engineer human cognition, so that we can put principle 3 into action and create a model that shows how human behavior arises from human preferences.\n\nSo far, we've been talking about the case with a single human. But of course, there are going to be multiple humans: how do we deal with that? As a baseline, we could imagine that every human gets their own agent that optimizes for their preferences. However, this will differentially benefit people who care less about other people's welfare, since their agents have access to many potential plans that wouldn't be available to an agent for someone who cared about other people. For example, if Harriet was going to be late for a meeting with Ivan, her AI agent might arrange for Ivan to be even later.\n\nWhat if we had laws that prevented AI systems from acting in such antisocial ways? It seems likely that superintelligent AI would be able to find loopholes in such laws, so that they do things that are strictly legal but still antisocial, e.g. line-cutting. (This problem is similar to the problem that we can't just write down what we want and have AI optimize it.)\n\nWhat if we made our AI systems utilitarian (assuming we figured out some acceptable method of comparing utilities across people)? Then we get the \"Somalia problem\": agents will end up going to Somalia to help the worse-off people there, and so no one would ever buy such an agent.\n\nOverall, it's not obvious how we deal with the transition from a single human to multiple humans. While HC focuses on a potential solution for the single human / single agent case, there is still much more to be said and done to account for the impact of AI on all of humanity. To quote HC, \"There is really no analog in our present world to the relationship we will have with beneficial intelligent machines in the future. It remains to be seen how the endgame turns out.\""], "venue": "Book", "opinion": "I enjoyed reading this book; I don't usually get to read a single person's overall high-level view on the state of AI, how it could have societal impact, the argument for AI risk, potential solutions, and the need for AI governance. It's nice to see all of these areas I think about tied together into a single coherent view. While I agree with much of the book, especially the conceptual switch from the standard model of intelligent machines to Stuart's model of beneficial machines, I'm going to focus on disagreements in this opinion.\n\nFirst, the book has an implied stance towards the future of AI research that I don't agree with: I could imagine that powerful AI systems end up being created by learning alone without needing the conceptual breakthroughs that Stuart outlines. This has been proposed in e.g. <@AI-GAs@>(@AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence@)), and seems to be the implicit belief that drives OpenAI and DeepMind's research agendas. 
This leads to differences in risk analysis and solutions: for example, the <@inner alignment problem@>(@Risks from Learned Optimization in Advanced Machine Learning Systems@) only applies to agents arising from learning algorithms, and I suspect would not apply to Stuart's view of AI progress.\n\nThe book also gives the impression that to solve AI safety, we simply need to make sure that AI systems are optimizing the right objective, at least in the case where there is a single human and a single robot. Again, depending on how future AI systems work, that could be true, but I expect there will be other problems that need to be solved as well. I've already mentioned inner alignment; other graduate students at CHAI work on e.g. [robustness](https://adversarialpolicies.github.io/) and transparency.\n\nThe proposal for aligning AI requires us to build a model that relates human preferences to human behavior. This sounds extremely hard to get completely right. Of course, we may not need a model that is completely right: since reward uncertainty makes the agent amenable to shutdowns, it seems plausible that we can correct mistakes in the model as they come up. But it's not obvious to me that this is sufficient.\n\nThe sections on multiple humans are much more speculative and I have more disagreements there, but I expect that is simply because we haven't done enough research yet. For example, HC worries that we won't be able to use laws to prevent AIs from doing technically legal but still antisocial things for the benefit of a single human. This seems true if you imagine that a single human suddenly gets access to a superintelligent AI, but when everyone has a superintelligent AI, then the current system where humans socially penalize each other for norm violations may scale up naturally. The overall effect depends on whether AI makes it easier to violate norms, or to detect and punish norm violations.", "highlight": true, "read_more": "[Max Tegmark's summary](https://www.amazon.com/gp/customer-reviews/RVSAD5GWSLQ42/ref=cm_cr_dp_d_rvw_ttl?ie=UTF8&ASIN=B07N5J5FTS), [Alex Turner's thoughts](https://www.alignmentforum.org/posts/FuGDYNvA6qh4qyFah/thoughts-on-human-compatible)", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #69", "newsletter_category": "Miscellaneous (Alignment)"}
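The off-switch intuition mentioned in the Human Compatible entry above can be shown with a small calculation: a robot that is uncertain about the utility U of its proposed action compares acting unilaterally, switching itself off, and deferring to a human who vetoes exactly when U < 0. This is a simplified toy version of the Off-Switch Game setup (the paper also analyzes less-than-rational humans); the Gaussian belief and its parameters are illustrative assumptions.

```python
# Toy, simplified version of the Off-Switch Game idea: deferring to a rational
# human who blocks the action exactly when U < 0 is worth E[max(U, 0)], which is
# never worse than acting (E[U]) or switching off (0). Belief parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(loc=-0.2, scale=1.0, size=1_000_000)  # robot's belief over its action's utility

value_act = U.mean()                    # execute the action unilaterally
value_off = 0.0                         # switch itself off
value_defer = np.maximum(U, 0).mean()   # propose the action and accept a human veto

print(f"act: {value_act:.3f}  switch off: {value_off:.3f}  defer: {value_defer:.3f}")
# Here defer (~0.31) > switch off (0.0) > act (-0.2): uncertainty about the
# objective is exactly what makes leaving the off switch usable attractive.
```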
{"id": "ebe2b82882c526ea45d2cc734037015b", "title": "AI Alignment Podcast: Human Compatible: Artificial Intelligence and the Problem of Control", "url": "https://futureoflife.org/2019/10/08/ai-alignment-podcast-human-compatible-artificial-intelligence-and-the-problem-of-control-with-stuart-russell/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lucas Perry and Stuart Russell"], "summaries": ["This podcast covers some of the main ideas from the book, which I'll ignore for this summary. It also talks a bit about the motivations for the book. Stuart has three audiences in mind. He wants to explain to laypeople what AI is and why it matters. He wants to convince AI researchers that they should be working in this new model of beneficial AI that optimizes for our objectives, rather than the standard model of intelligent AI that optimizes for its objectives. Finally, he wants to recruit academics in other fields to help connect human behavior to human preferences (principle 3), as well as to figure out how to deal with multiple humans.\n\nStuart also points out that his book has two main differences from Superintelligence and Life 3.0: first, his book explains how existing AI techniques work (and in particular it explains the standard model), and second, it proposes a technical solution to the problem (the three principles)."], "venue": "FLI Website", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #69", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "99c291a9bc7c0da28ad0481c41c2c11a", "title": "AI Safety Needs Social Scientists", "url": "https://blog.openai.com/ai-safety-needs-social-scientists/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Geoffrey Irving and Amanda Askell"], "summaries": ["One approach to AI safety is to \"ask humans a large number of questions about what they want, train an ML model of their values, and optimize the AI system to do well according to the learned values\". However, humans give answers that are limited, biased and often in disagreement with each other, and so AI safety needs social scientists to figure out how to improve this data - which eventually may be gathered from thousands or millions of people. Of particular importance is the ability to design rigorous experiments, drawing from an interdisciplinary understanding of human cognition and behaviour. The authors discuss [Debate](https://blog.openai.com/debate/) ([AN #5](https://mailchi.mp/0ae5d69de63b/alignment-newsletter-5)) as a case study of a safety technique whose success depends on empirical questions such as: how skilled are humans as judges by default? Can we train people to be better judges? Are there ways to restrict debate to make it easier to judge?\n\nThere are a couple of key premises underlying this argument. The first is that, despite human biases, there are correct answers to questions about human values - perhaps defined as the answer we would endorse if given all relevant information and unlimited time to think. However, it’s not necessary for AIs to always find those answers, as long as they are able to recognise cases in which they’re uncertain and do nothing (while there are some cases in which inaction can cause harm, such as a self-driving car ceasing to steer mid-journey, it seems that the most worrying long-term catastrophes can be avoided by inaction). Another reason for optimism is that even incomplete or negative results from social science experiments may be useful in informing technical safety research going forward. However, in some cases the systems we're trying to reason about are very different from anything we can test now - for example, AI debaters that are much stronger than humans."], "venue": "Distill", "opinion": "This post, and its accompanying paper, seems very sensible to me. While I have some doubts about how informative human debate data will be about superhuman debaters, it certainly seems worth trying to gain more empirical information. Note that while the paper primarily discusses Debate, I think that many of its arguments are applicable to any human-in-the-loop safety methods (and probably others too). Currently I think Ought is the safety group focusing most on collecting human data, but I look forward to seeing other researchers doing so.", "highlight": true, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #47", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "e9f1fa179b53b732a802c2907d930234", "title": "80K podcast with Katja Grace", "url": "https://80000hours.org/podcast/episodes/katja-grace-forecasting-technology/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Katja Grace and Rob Wiblin"], "summaries": ["Rob Wiblin interviewed Katja Grace of AI Impacts about her work predicting the future of AI. My main takeaway was that there are many important questions in this space that almost no one is trying to answer, and that we haven't made a good enough attempt yet to conclude that it's too hard to do, so we should put more time into it. If you haven't seen AI Impacts' work before, you can get some of the most interesting results (at a high level) from listening to this podcast. There's a ton of detail in the podcast -- too much for me to summarize here."], "venue": "80,000 Hours", "opinion": "I don't currently think very much about timelines, intelligence explosions, and other questions that AI Impacts thinks about, but it seems very plausible to me that these could be extremely important. (I do think about discontinuities in progress and am very glad I read the [AI Impacts post](https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/) on the subject.) One point that the interview brings up is that there are very few (perhaps two?) full time equivalents working on predicting the future of AI, while there are many people working on technical AI safety, so the former is more neglected. I'm not sure I agree with this -- the number of full time equivalents doing technical AI alignment research seems quite small (on the order of 50 people). However, I do see _many_ people who are trying to skill up so that they can do technical AI alignment research, and none who want to do better prediction, and that seems clearly wrong. I would guess that there are several readers of this newsletter who want to do technical AI alignment research, but who would have more impact if they worked in an adjacent area, such as prediction as at AI Impacts, or policy and strategy work, or in better tools and communication. Even though I'm well-placed to do technical research, I still think that common knowledge of research is a big enough bottleneck that I spend a lot of time on this newsletter. It seems likely that there is someone else who would do a better job than me, but who is set on technical safety research even though they wouldn't be as good. So I guess if you are still trying to figure out how to best help with AI alignment, or are about to start training up to do technical research, please do listen to this podcast and consider that alternative route, and various others as well. The goal is not to figure out which question is the most important, so that you can try to solve it. You'll likely do better by considering the field as a whole, and asking which area you would be in if someone optimally assigned people in the field to tasks.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #21", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "790942f2e1b4f986e37c3da1a9dd8454", "title": "The \"most important century\" series", "url": "https://www.cold-takes.com/most-important-century/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Holden Karnofsky"], "summaries": ["In some sense, it is really weird for us to claim that there is a non-trivial chance that in the near future, we might build [transformative AI](https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence) and either (1) go extinct or (2) exceed a growth rate of (say) 100% per year. It feels like an extraordinary claim, and thus should require extraordinary evidence. One way of cashing this out: if the claim were true, this century would be the most important century, with the most opportunity for individuals to have an impact. Given the sheer number of centuries there are, this is an extraordinary claim; it should really have extraordinary evidence. This series argues that while the claim does seem extraordinary, _all_ views seem extraordinary -- there isn’t some default baseline view that is “ordinary” to which we should be assigning most of our probability.\n\nSpecifically, consider three possibilities for the long-run future:\n1. **Radical:** We will have a productivity explosion by 2100, which will enable us to become technologically mature. Think of a civilization that sends spacecraft throughout the galaxy, builds permanent settlements on other planets, harvests large fractions of the energy output from stars, etc.\n2. **Conservative:** We get to a technologically mature civilization, but it takes hundreds or thousands of years. Let’s say even 100,000 years to be ultra conservative.\n3. **Skeptical:** We never become technologically mature for some reason. Perhaps we run into fundamental technological limits, or we choose not to expand into the galaxy, or we’re in a simulation, etc.\nIt’s pretty clear why the radical view is extraordinary. What about the other two?\n\nThe conservative view implies that we are currently in the most important 100,000-year period. Given that life is billions of years old, and would presumably continue for billions of years to come once we reach a stable galaxy-wide civilization, that would make this the most important 100,000 year period out of tens of thousands of such periods. Thus the conservative view is also extraordinary, for the same reason that the radical view is extraordinary (albeit it is perhaps only half as extraordinary as the radical view).\n\nThe skeptical view by itself does not seem obviously extraordinary. However, while you could assign 70% probability to the skeptical view, it seems unreasonable to assign 99% probability to such a view -- that suggests some very strong or confident claims about what prevents us from colonizing the galaxy, which we probably shouldn’t have given our current knowledge. So, we need to have a non-trivial chunk of probability on the other views, which still opens us up to critique of having extraordinary claims.\n\nOkay, so we’ve established that we should at least be willing to say something as extreme as “there’s a non-trivial chance we’re in the most important 100,000-year period”. Can we tighten the argument, to talk about the most important _century_? 
In fact, we can, by looking at the economic growth rate.\n\nYou are probably aware that the US economy grows around 2-3% per year (after adjusting for inflation), so a business-as-usual, non-crazy, default view might be to expect this to continue. You are probably also aware that exponential growth can grow _very_ quickly. At the lower end of 2% per year, the economy would double every ~35 years. If this continued for 8200 years, **we'd need to be sustaining multiple economies as big as today's entire world economy _per atom in the galaxy_**. While this is not a priori impossible, it seems quite unlikely to happen. This suggests that we’re in one of fewer than 82 centuries that will have growth rates at 2% or larger, making it far less “extraordinary” to claim that we’re in the most important one, especially if you believe that growth rates are well correlated with change and ability to have impact.\n\nThe actual radical view that the author places non-trivial probability on is one we’ve seen before in this newsletter: it is one in which there is automation of science and technology through advanced AI or whole brain emulations or other possibilities. This allows technology to substitute for human labor in the economy, which produces a positive feedback loop as the output of the economy is ploughed back into the economy creating superexponential growth and a “productivity explosion”, where the growth rate increases far _beyond_ 2%. The series summarizes and connects together <@many@>(@Modeling the Human Trajectory@), <@past@>(@Could Advanced AI Drive Explosive Economic Growth?@), <@Open@>(@Draft report on AI timelines@), <@Phil@>(@How Much Computational Power It Takes to Match the Human Brain@) <@analyses@>(@Semi-informative priors over AI timelines@), which I won't be summarizing here (since we've summarized these analyses previously). While this is a more specific and “extraordinary” claim than even the claim that we live in the most important century, it seems like it should not be seen as so extraordinary given the arguments above.\n\nThis series also argues for a few other points important to longtermism, which I’ll copy here:\n1. **The long-run future is radically unfamiliar.** Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.\n2. **The long-run future could come much faster than we think**, due to a possible AI-driven productivity explosion. (I briefly mentioned this above, but the full series devotes much more space and many more arguments to this point.)\n3. We, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, **we aren't ready for this.**"], "venue": "Author's Website", "opinion": "I especially liked this series for the argument that 2% economic growth very likely cannot last much longer, providing quite a strong argument for the importance of this century, without relying at all on controversial facts about AI. At least personally I was previously uneasy about how “grand” or “extraordinary” AGI claims tend to be, and whether I should be far more skeptical of them as a result. 
I feel significantly more comfortable with these claims after seeing this argument.\n\nNote though that it does not defuse all such uneasiness -- you can still look at how early we appear to be (given the billions of years of civilization that could remain in the future), and conclude that the simulation hypothesis is true, or that there is a Great Filter in our future that will drive us extinct with near-certainty. In such situations there would be no extraordinary impact to be had today by working on AI risk.", "highlight": true, "read_more": "80,000 Hours podcast on the topic", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #166", "newsletter_category": "Miscellaneous (Alignment)"}
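The growth arithmetic in the summary above is easy to check directly. A minimal sketch, taking the 2% figure at face value and assuming roughly 10^67 atoms in the Milky Way (the atom count is an order-of-magnitude assumption, not a number from the series):

```python
import math

growth_rate = 0.02      # "business as usual" 2% annual growth
years = 8200            # 82 centuries
atoms_in_galaxy = 1e67  # rough order-of-magnitude assumption for the Milky Way

# Doubling time at 2% growth (~35 years, as stated in the summary).
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"doubling time: ~{doubling_time:.0f} years")

# How much bigger the economy would be after 8200 years of 2% growth,
# measured in multiples of today's world economy.
growth_factor = (1 + growth_rate) ** years
print(f"growth factor: ~10^{math.log10(growth_factor):.1f}")

# Copies of today's entire world economy per atom in the galaxy.
print(f"economies per atom: ~{growth_factor / atoms_in_galaxy:.0f}")
```

Eighty-two centuries of 2% growth multiplies the economy by a factor of about 10^70.5, i.e. a few thousand copies of today's entire world economy per atom in the galaxy, which is the sense in which growth rates this fast cannot continue for more than a few dozen centuries.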
{"id": "6711e5d503364cd0cc32dfc37babda2a", "title": "AGI safety from first principles", "url": "https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Richard Ngo"], "summaries": ["This sequence presents the author’s personal view on the current best arguments for AI risk, explained from first principles (that is, without taking any previous claims for granted). The argument is a specific instantiation of the _second species argument_ that sufficiently intelligent AI systems could become the most intelligent species, in which case humans could lose the ability to create a valuable and worthwhile future.\n\nWe should clarify what we mean by superintelligence, and how it might arise. The author considers intelligence as quantifying simply whether a system “could” perform a wide range of tasks, separately from whether it is motivated to actually perform those tasks. In this case, we could imagine two rough types of intelligence. The first type, epitomized by most current AI systems, trains an AI system to perform many different tasks, so that it is then able to perform all of those tasks; however, it cannot perform tasks it has not been trained on. The second type, epitomized by human intelligence and <@GPT-3@>(@Language Models are Few-Shot Learners@), trains AI systems in a task-agnostic way, such that they develop general cognitive skills that allow them to solve new tasks quickly, perhaps with a small amount of training data. This second type seems particularly necessary for tasks where data is scarce, such as the task of being a CEO of a company. Note that these two types should be thought of as defining a spectrum, not a binary distinction, since the type of a particular system depends on how you define your space of “tasks”.\n\nHow might we get AI systems that are more intelligent than humans? Besides improved algorithms, compute and data, we will likely also see that _interactions_ between AI systems will be crucial to their capabilities. For example, since AI systems are easily replicated, we could get a _collective_ superintelligence via a collection of replicated AI systems working together and learning from each other. In addition, the process of creation of AI systems will be far better understood than that of human evolution, and AI systems will be easier to directly modify, allowing for AI systems to recursively improve their own training process (complementing human researchers) much more effectively than humans can improve themselves or their children.\n\nThe second species argument relies on the argument that superintelligent AI systems will gain power over humans, which is usually justified by arguing that the AI system will be goal-directed. 
Making this argument more formal is challenging: the EU maximizer framework <@doesn’t work for this purpose@>(@Coherent behaviour in the real world is an incoherent concept@) and applying the [intentional stance](https://en.wikipedia.org/wiki/Intentional_stance) only helps when you have some prior information about what goals the AI system might have, which begs the question.\n\nThe author decides to instead consider a more conceptual, less formal notion of agency, in which a system is more goal-directed the more its cognition has the following properties: (1) self-awareness, (2) planning, (3) judging actions or plans by their consequences, (4) being sensitive to consequences over large distances and long time horizons, (5) internal coherence, and (6) flexibility and adaptability. (Note that this can apply to a single unified model or a collective AI system.) It’s pretty hard to say whether current training regimes will lead to the development of these capabilities, but one argument for it is that many of these capabilities may end up being necessary prerequisites to training AI agents to do intellectual work.\n\nAnother potential framework is to identify a goal as some concept learned by the AI system, that then generalizes in such a way that the AI system pursues it over longer time horizons. In this case, we need to predict what concepts an AI system will learn and how likely it is that they generalize in this way. Unfortunately, we don’t yet know how to do this.\n\nWhat does alignment look like? The author uses <@intent alignment@>(@Clarifying \"AI Alignment\"@), that is, the AI system should be “trying to do what the human wants it to do”, in order to rule out the cases where the AI system causes bad outcomes through incompetence where it didn’t know what it was supposed to do. Rather than focusing on the outer and inner alignment decomposition, the author prefers to take a holistic view in which the choice of reward function is just one (albeit quite important) tool in the overall project of choosing a training process that shapes the AI system towards safety (either by making it not agentic, or by shaping its motivations so that the agent is intent aligned).\n\nGiven that we’ll be trying to build aligned systems, why might we still get an existential catastrophe? First, a failure of alignment is still reasonably likely, since (1) good behavior is hard to identify, (2) human values are complex, (3) influence-seeking may be a useful subgoal during training, and thus incentivized, (4) it is hard to generate training data to disambiguate between different possible goals, (5) while interpretability could help it seems quite challenging. Then, given a failure of alignment, the AI systems could seize control via the mechanisms suggested in <@What failure looks like@> and [Superintelligence](https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742). How likely this is depends on factors like (1) takeoff speed, (2) how easily we can understand what AI systems are doing, (3) how constrained AI systems are at deployment, and (4) how well humanity can coordinate."], "venue": "Alignment Forum", "opinion": "I like this sequence: I think it’s a good “updated case” for AI risk that focuses on the situation in which intelligent AI systems arise through training of ML models. 
The points it makes are somewhat different from the ones I would make if I were writing such a case, but I think they are still sufficient to show that humanity has work to do if we are to ensure that the AI systems we build are aligned.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #122", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "17bc164478458a7acec43eb9c4f91a95", "title": "The Alignment Problem", "url": "https://www.amazon.com/Alignment-Problem-Machine-Learning-Values/dp/153669519X", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": null, "authors": ["Brian Christian"], "summaries": ["This book starts off with an explanation of machine learning and problems that we can currently see with it, including detailed stories and analysis of:\n\n- The [gorilla misclassification incident](https://twitter.com/jackyalcine/status/615329515909156865)\n- The [faulty reward in CoastRunners](https://openai.com/blog/faulty-reward-functions/)\n- The [gender bias in language models](https://arxiv.org/abs/1607.06520)\n- The [failure of facial recognition models on minorities](https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms)\n- The [COMPAS](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing) [controversy](https://www.documentcloud.org/documents/2998391-ProPublica-Commentary-Final-070616.html) (leading up to [impossibility results in fairness](https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/slides.pdf))\n- The [neural net that thought asthma reduced the risk of pneumonia](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/06/KDD2015FinalDraftIntelligibleModels4HealthCare_igt143e-caruanaA.pdf)\n\nIt then moves on to agency and reinforcement learning, covering from a more historical and academic perspective how we have arrived at such ideas as temporal difference learning, reward shaping, curriculum design, and curiosity, across the fields of machine learning, behavioral psychology, and neuroscience. While the connections aren't always explicit, a knowledgeable reader can connect the academic examples given in these chapters to the ideas of <@specification gaming@>(@Specification gaming: the flip side of AI ingenuity@) and <@mesa optimization@>(@Risks from Learned Optimization in Advanced Machine Learning Systems@) that we talk about frequently in this newsletter. Chapter 5 especially highlights that agent design is not just a matter of specifying a reward: often, rewards will do ~nothing, and the main requirement to get a competent agent is to provide good _shaping rewards_ or a good _curriculum_. Just as in the previous part, Brian traces the intellectual history of these ideas, providing detailed stories of (for example):\n\n- BF Skinner's experiments in [training pigeons](https://psycnet.apa.org/record/1961-01933-001)\n- The invention of the [perceptron](https://psycnet.apa.org/record/1959-09865-001)\n- The success of [TD-Gammon](https://www.aaai.org/Papers/Symposia/Fall/1993/FS-93-02/FS93-02-003.pdf), and later [AlphaGo Zero](https://deepmind.com/blog/article/alphago-zero-starting-scratch)\n\nThe final part, titled \"Normativity\", delves much more deeply into the alignment problem. While the previous two parts are partially organized around AI capabilities -- how to get AI systems that optimize for _their_ objectives -- this last one tackles head on the problem that we want AI systems that optimize for _our_ (often-unknown) objectives, covering such topics as imitation learning, inverse reinforcement learning, learning from preferences, iterated amplification, impact regularization, calibrated uncertainty estimates, and moral uncertainty."], "venue": "", "opinion": "I really enjoyed this book, primarily because of the tracing of the intellectual history of various ideas. 
While I knew of most of these ideas, and sometimes also who initially came up with the ideas, it's much more engaging to read the detailed stories of _how_ that person came to develop the idea; Brian's book delivers this again and again, functioning like a well-organized literature survey that is also fun to read because of its great storytelling. I struggled a fair amount in writing this summary, because I kept wanting to somehow communicate the writing style; in the end I decided not to do it and to instead give a few examples of passages from the book in [this post](https://www.alignmentforum.org/posts/gYfgWSxCpFdk2cZfE/the-alignment-problem-machine-learning-and-human-values).", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #120", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "0debf12fb989f9f9e773aa4341647354", "title": "Engineering a Safer World", "url": "https://static1.squarespace.com/static/53b78765e4b0949940758017/t/57d87eb6d2b8571af3501b26/1473898764674/Engineering_a_Safer_World+Nancy+Leveson.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2011-01-01T00:00:00Z", "authors": ["Nancy G. Leveson"], "summaries": ["I recently read [Engineering a Safer World](https://static1.squarespace.com/static/53b78765e4b0949940758017/t/57d87eb6d2b8571af3501b26/1473898764674/Engineering_a_Safer_World+Nancy+Leveson.pdf) by Nancy G. Leveson, at [Joshua Achiam’s recommendation](https://twitter.com/jachiam0/status/1285449712573665281), and really enjoyed it, so get ready for another book summary! I’m not very happy with the summary I have -- it feels less compelling than the book, partly because the book provides a ton of examples that I don’t have the space to do -- but hopefully it is enough to get the key points across.\n\nThe main motivation of this book is to figure out how we can improve safety engineering. Its primary thesis is that the existing methods used in engineering are insufficient for the current challenges, and must be replaced by a method the author favors called STAMP. Note that the book is primarily concerned with mechanical systems that may also have computerized automation (think aerospace, chemical, and mechanical engineering); the conclusions should not be expected to apply directly to AI.\n\n**The standard model of safety engineering and its deficiencies**\n\nHistorically, safety engineering has been developed as a reaction to the high level of accidents we had in the past, and as a result focused on the easiest gains first. In particular, there were a lot of gains to be had simply by ensuring that _machines didn’t break_. _(Rohin’s note: I’m editorializing a bit here, the author doesn’t explicitly say this but I think she believes it.)_ This led to a focus on _reliability_: given a specification for how a machine should operate, we aim to decrease the probability that the machine fails to meet that specification. For example, the specification for a water tank would be to contain the water up to a given pressure, and one way to improve the reliability of the tank would be to use a stronger material or make a thicker tank to make it less likely that the tank ruptures.\n\nUnder this model, an accident happens when a machine fails to meet its specification. So, we can analyze the accident by looking at what went wrong, and tracing back the physical causes to the first point at which a specification was not met, giving us a _root cause_ that can show us what we need to fix in order to prevent similar accidents in the future. We can call this sort of analysis an _event chain_ analysis.\n\nHowever, in the last few decades there have been quite a few changes that make this model worse than it once was. The pace of technological change has risen, making it harder to learn from experience. The systems we build have become complex enough that there is a lot more _coupling_ or interaction effects between parts of the system that we could fail to account for. Relatedly, the risks we face are getting large enough that we aren’t willing to tolerate even a _single_ accident. Human operators (e.g. factory workers) are no longer able to rely on easily understood and predictable mechanical systems, instead having to work with computerized automation which they cannot understand as well. 
At this point, event chain analysis and safety-via-reliability are no longer sufficient for safety engineering.\n\nConsider for example the [Flight 965](https://en.wikipedia.org/wiki/American_Airlines_Flight_965) accident. In this case, the pilots got clearance to fly towards the Rozo waypoint in their descent, listed as (R) on their (paper) approach charts. One of the pilots pressed R in the flight management system (FMS), which brought up a list of waypoints that did _not_ include Rozo, and executed the first one (presumably believing that Rozo, being the closest waypoint, would show up first). As a result, the plane turned towards the selected waypoint, and crashed into a mountain.\n\nThe accident report for this incident placed the blame squarely on the pilots, firstly for not planning an appropriate path, and secondly for not having situational awareness of the terrain and that they needed to discontinue their approach. But most interestingly, the report blames the pilots for not reverting to basic radio navigation when the FMS became confusing. The author argues that the design of the automation was also flawed in this case, as the FMS stopped displaying the intermediate fixes to the chosen route, and the FMS’s navigational information used a different naming convention than the one in the approach charts. Surely this also contributed to the loss? In fact, in lawsuit appeals, the software manufacturer was held to be 17% liable.\n\nHowever, the author argues that this is the exception, not the rule: typically event chain analysis proceeds until a human operator is found who did something unexpected, and then the blame can safely be placed on them. Operators are expected to “use common sense” to deviate from procedures when the procedures are unsafe, but when an accident happens, blame is placed on them for deviating from procedures. This is often very politically convenient, and is especially easy to justify thanks to hindsight bias, where we can identify exactly the right information and cues that the operator “should have” paid attention to, ignoring that in the moment there were probably many confusing cues and it was far from obvious which information to pay attention to. My favorite example has to be this quote from an accident report:\n\n“Interviews with operations personnel did not produce a clear reason why the response to the [gas] alarm took 31 minutes. The only explanation was that there was not a sense of urgency since, in their experience, previous [gas] alarms were attributed to minor releases that did not require a unit evacuation.”\n\nIt is rare that I see such a clear example of a self-refuting paragraph. In the author’s words, “this statement is puzzling, because the statement itself provides a clear explanation for the behavior, that is, the previous experience”. It definitely sounds like the investigators searched backwards through the causal chain, found a situation where a human deviated from protocol, and decided to assign blame there.\n\nThis isn’t just a failure of the accident investigation -- the entire premise of some “root cause” in an event chain analysis implies that the investigators must end up choosing some particular point to label as The Root Cause, and such a decision is inevitably going to be determined more by the particular analysts involved rather than by features of the accident.\n\n**Towards a new approach**\n\nHow might we fix the deficiencies of standard safety engineering? 
The author identifies several major changes in assumptions that are necessary for a new approach:\n\n1. Blame is the enemy of safety. Safety engineering should focus on system behavior as a whole, where interventions can be made at many points on different levels, rather than seeking to identify a single intervention point.\n\n2. Reliability (having machines meet their specifications) is neither necessary nor sufficient for safety (not having bad outcomes). Increased reliability can lead to decreased safety: if we increase the reliability of the water tank by making it using a stronger material, we may decrease the risk of rupture, but we may dramatically increase the harm when a rupture occurs since the water will be at a much higher pressure. This applies to software as well: highly reliable software need not be safe, as its specifications may not be correct.\n\n3. Accidents involve the entire sociotechnical system, for which an event chain model is insufficient. Interventions on the sociological level (e.g. make it easy for low-level operators to report problems) should be considered part of the remit of safety engineering.\n\n4. Major accidents are not caused by simultaneous occurrence of random chance events. Particularly egregious examples come from probabilistic risk analysis, where failures of different subsystems are often assumed to be independent, neglecting the possibility of a common cause, whether physical (e.g. multiple subsystems failing during a power outage) or sociological (e.g. multiple safety features being disabled as part of cost-cutting measures). In addition, systems tend to migrate towards higher risk over time, because environmental circumstances change, and operational practices diverge from the designed practices as they adapt to the new circumstances, or simply to be more efficient.\n\n5. Operator behavior is a product of the environment in which it occurs. To improve safety, we must change the environment rather than the human. For example, if an accident occurs and an operator didn’t notice a warning light that could have let them prevent it, the solution is not to tell the operators to “pay more attention” -- that approach is doomed to fail.\n\n**A detour into systems theory**\n\nThe new model proposed by the author is based on systems theory, so let’s take a moment to describe it. Consider possible systems that we may want to analyze:\n\nFirst, there are some systems with _organized simplicity_, in which it is possible to decompose the system into several subsystems, analyze each of the subsystems independently, and then combine the results relatively easily to reach overall conclusions. We might think of these as systems in which analytic reduction is a good problem-solving strategy. _Rohin’s note: Importantly, this is different from the philosophical question of whether there exist phenomena that cannot be reduced to e.g. physics: that is a question about whether reduction is in principle possible, whereas this criterion is about whether such reduction is an effective strategy for a computationally bounded reasoner._ Most of physics would be considered to have organized simplicity.\n\nSecond, there are systems with _unorganized complexity_, where there is not enough underlying structure for analytic reduction into subsystems to be a useful tool. 
However, in such systems the behavior of individual elements of the system is sufficiently random (or at least, well-modeled as random) that statistics can be applied to it, and then the law of large numbers allows us to understand the system as an aggregate. A central example would be statistical mechanics, where we cannot say much about the motion of individual particles in a gas, but we can say quite a lot about the macroscopic behavior of the gas as a whole.\n\nSystems theory deals with systems that have _organized complexity_. Such systems have enough organization and structure that we cannot apply statistics to it (or equivalently, the assumption of randomness is too incorrect), and are also sufficiently complex that analytic reduction is not a good technique (e.g. perhaps any potential decomposition into subsystems would be dominated by combinatorially many interaction effects between subsystems). Sociological systems are central examples of such systems: the individual components (humans) are very much not random, but neither are their interactions governed by simple laws as would be needed for analytic reduction. While systems theory cannot provide nearly the same level of precision as statistics or physics, it does provide useful concepts for thinking about such systems.\n\nThe first main concept in systems theory is that of _hierarchy and emergence_. The idea here is that systems with organized complexity can be decomposed into several hierarchical levels, with each level built “on top of” the previous one. For example, companies are built on top of teams which are built on top of individual employees. The behavior of components in a particular layer is described by some “language” that is well-suited for that layer. For example, we might talk about individual employees based on their job description, their career goals, their relationship with their manager, and so on, but we might talk about companies based on their overall direction and strategy, the desires of their customer base, the pressures from regulators, and so on.\n\n_Emergence_ refers to the phenomenon that there can be properties of higher levels arising from lawful interactions at lower levels that nonetheless are meaningless in the language appropriate for the lower levels. For example, it is quite meaningful to say that the pressures on a company from government regulation caused them to (say) add captions to their videos, but if we look at the specific engineer who integrated the speech recognition software into the pipeline, we would presumably say “she integrated the speech recognition into the pipeline because she had previously worked with the code” rather than “she integrated it because government regulations told her to do so”. As another example, safety is an emergent system property, while reliability is not.\n\nThe second main concept is that of _control_. We are usually not satisfied with just understanding the behavior of systems; we also want to make changes to it (as in the case of making them safer). In systems theory, this is thought of as _control_, where we impose some sort of _constraint_ on possible system behavior at some level. For example, employee training is a potential control action that could aim to enforce the constraint that every employee knows what to do in an emergency. 
An effective controller requires a goal, a set of actions to take, a model of the system, and some way to sense the state of the system.\n\n**STAMP: A new model underlying safety engineering**\n\nThe author then introduces a new model called Systems-Theoretic Accident Model and Processes (STAMP), which aims to present a framework for understanding how accidents occur (which can allow us to prevent them and/or learn from them). It contains three main components:\n\n_Safety constraints_: In systems theory, a constraint is the equivalent of a specification, so these are just the safety-relevant specifications. Note that such specifications can be found at all levels of the hierarchy.\n\n_Hierarchical safety controllers_: We use _controllers_ to enforce safety constraints at any given level. A control algorithm may be implemented by a mechanical system, a computerized system, or humans, and can exist at any level of the hierarchy. A controller at level N will typically depend on constraints at level N - 1, and thus the design of this controller influences which safety constraints are placed at level N - 1.\n\n_Process models_: An effective controller must have a model of the process it is controlling. Many accidents are the result of a mismatch between the actual process and the process model of the controller.\n\nThis framework can be applied towards several different tasks, and in all cases the steps are fairly similar: identify the safety constraints you want, design or identify the controllers enforcing those constraints, and then do some sort of generic reasoning with these components.\n\nIf an accident occurs, then at the highest level, either the control algorithm(s) failed to enforce the safety constraints, or the control actions were sent correctly but were not followed. In the latter case, the controllers at the lower level should then be analyzed to see why the control actions were not followed. Ultimately, this leads to an analysis on multiple levels, which can identify several things that went wrong rather than one Root Cause, that can all be fixed to improve safety in the future.\n\n**Organizational safety**\n\nSo far we’ve covered roughly chapters 1-4 of the book. I’ll now jump straight to chapter 13, which seems particularly important and relevant, as it deals with how organizational structure and management should be designed to support safety.\n\nOne major point that the author makes is that safety _is_ cost-effective for _long-term_ performance as long as it is designed into the system from the start, rather than added on at the last minute. Performance pressure on the other hand inevitably leads to cuts in safety.\n\nIn order to actually get safety designed into the system from the start, it is crucial that top management demonstrates a strong commitment to safety, as without this employees will inevitably cut corners on safety as they will believe it is in their incentives to do so. Other important factors include a concrete corporate safety policy, as well as a strong corporate safety culture. It is important that safety is part of the design process, rather than tacked on at the end. In the author’s words, _putting safety into the quality assurance organization is the worst place for it. [...] It sets up the expectation that safety is an after-the-fact or auditing activity only._\n\nIn addition, it is important that information can flow well. 
Going from the bottom to the top, it should be possible for low-level operators to report potential problems in a way that they are actually acted on and the relevant information reaches top management. From the top to the bottom, safety information and training should be easily available and accessible to employees when they need it.\n\nIt is also important to have controls to prevent the general tendency of systems to migrate towards higher risk, e.g. by relaxing safety requirements as time passes without any incidents. The next chapter describes SUBSAFE, the author’s example of a well-run safety program, in which the control is for everyone to periodically watch a video reminding them of the importance of their particular safety work (in particular, the video shows the loss of the USS Thresher, an event that caused SUBSAFE to be created).\n\nPerhaps obviously, it is important for an organization to have a dedicated safety team. This is in contrast to making everyone responsible for safety. In the author’s words: _While, of course, everyone should try to behave safely and to achieve safety goals, someone has to be assigned responsibility for ensuring that the goals are achieved._\n\nIf you start by designing for safety, it is cost-effective, not opposed to long-term money-maximizing. Once there is performance pressure, then you see cuts in safety. Also sometimes people fix symptoms instead of underlying causes, and then they just keep seeing symptoms forever and conclude they are inevitable.\n\n**Miscellaneous notes**\n\nThe remaining chapters of the book apply STAMP in a bunch of different areas with many examples, including an entire chapter devoted to the STAMP treatment of a friendly fire accident. I also really liked the discussion of human factors in the book, but decided not to summarize it as this has already gotten quite long.\n\n**Summary of the summary**\n\nI’ll conclude with a quote from the book’s epilogue:\n\n_What seems to distinguish those experiencing success is that they:_\n_1. Take a systems approach to safety in both development and operations_\n_2. Have instituted a learning culture where they have effective learning from events_\n_3. Have established safety as a priority and understand that their long-term success depends on it_\n\n**Relationship to AI safety**\n\nA primary motivation for thinking about AI is that it would be very impactful for our society, and very impactful technologies need not have good impacts. “Society” clearly falls into the “organized complexity” class of systems, and so I expect that the ideas of safety constraints and hierarchical control algorithms will be useful ways to think about possible impacts of AI on society. For example, if we want to think about the possibility of AI systems differentially improving technical progress over “wisdom”, such that we get dangerous technologies before we’re ready for them, we may want to sketch out hierarchical “controllers” at the societal level that could solve this problem. Ideally these would eventually turn into constraints on the AI systems that we build, e.g. “AI systems should report potentially impactful new technologies to such-and-such committee”. I see the AI governance field as doing this sort of work using different terminology.\n\nTechnical AI alignment (in the sense of <@intent alignment@>(@Clarifying \"AI Alignment\"@)) does not seem to benefit as much from this sort of an approach. 
The main issue is that we are often considering a fairly unitary system (such as a neural net, or the mathematical model of expected utility maximization) to which the hierarchical assumption of systems theory does not really apply.\n\nTo be clear, I _do_ think that there in fact is some hierarchy. For example, in image classifiers, low levels involve edge detectors while high levels involve dog-face detectors. However, we do not have the language to talk about these hierarchies, nor the algorithms to control the intermediate layers. While <@Circuits@>(@Thread: Circuits@) illustrates this hierarchy for image classifiers, it does not give us a language that we can (currently) use to talk about advanced AI systems. As a result, we are reduced to focusing on the incentives we provide to the AI system, or speculating on the levels of hierarchy that might be internal to advanced AI systems, neither of which seems particularly conducive to good work.\n\nIn the language of this book, I work on intent alignment because I expect that the ability to enforce the constraint “the AI system tries to do what its operator wants” will be a very useful building block for enforcing whatever societal safety constraints we eventually settle on, and it seems possible to make progress on it today. There are several arguments for risk that this ignores (see e.g. <@here@>(@The Main Sources of AI Risk?@) and <@here@>(@AI Research Considerations for Human Existential Safety@)); for some of these other risks, the argument is that we can handle them using mechanisms similar to the ones we have used before (e.g. governance, democracy, police, etc.), _as long as_ we have handled intent alignment."], "venue": "", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #112", "newsletter_category": "Miscellaneous (Alignment)"}
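To make the STAMP vocabulary in the summary above concrete, here is a minimal, hypothetical sketch (not an example from the book): a safety constraint, an actual process, and a controller with a goal, a sensor, actions, and a process model, where every component behaves exactly as specified and yet the constraint is violated because the process model does not match the actual process.

```python
# Hypothetical toy example (not from the book): the actual process has a
# one-step actuation delay, but the controller's process model assumes its
# actions take effect immediately.

PRESSURE_LIMIT = 100.0  # safety constraint: tank pressure must stay below this

class Tank:
    """The actual process: a vent command only takes effect on the next step."""
    def __init__(self):
        self.pressure = 90.0
        self._pending_vent = 0.0

    def step(self, vent):
        self.pressure += 8.0                 # constant inflow each step
        self.pressure -= self._pending_vent  # last step's vent acts only now
        self._pending_vent = vent

class Controller:
    """Goal: keep pressure below the limit with a 5-unit margin.
    Its process model (wrongly) assumes venting is instantaneous."""
    def act(self, sensed_pressure):
        predicted = sensed_pressure + 8.0    # model: inflow, then instant vent
        return max(0.0, predicted - (PRESSURE_LIMIT - 5.0))

tank, controller = Tank(), Controller()
for t in range(5):
    vent = controller.act(tank.pressure)     # sense the state, choose an action
    tank.step(vent)
    ok = tank.pressure < PRESSURE_LIMIT
    print(f"t={t}  pressure={tank.pressure:5.1f}  constraint {'ok' if ok else 'VIOLATED'}")
```

Both components are perfectly "reliable" relative to their own specifications, yet the safety constraint is violated; the thing to fix is the model-process mismatch, which is the kind of multi-level analysis STAMP is meant to support.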
{"id": "9c0e3ed085c6d1cc56706a9a42bdca9f", "title": "Clarifying some key hypotheses in AI alignment", "url": "https://alignmentforum.org/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Ben Cottier", "Rohin Shah"], "summaries": ["This post (that I contributed to) introduces a diagram that maps out important and controversial hypotheses for AI alignment. The goal is to help researchers identify and more productively discuss their disagreements."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #62", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "db51e87e690270efda524375bba4c516", "title": "On the alignment problem", "url": "https://80000hours.org/podcast/episodes/brian-christian-the-alignment-problem/?utm_campaign=Feed%3A+80000HoursPodcast+%2880%2C000+Hours+Podcast+with+Rob+Wiblin%29&utm_source=feedburner&utm_medium=feed", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Rob Wiblin and Brian Christian"], "summaries": ["This 80,000 Hours podcast goes over many of the examples from Brian’s book, <@The Alignment Problem@>. I recommend listening to it if you aren’t going to read the book itself; the examples and stories are fascinating. (Though note I only skimmed through the podcast.)"], "venue": "80,000 Hours Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #141", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "e1ae8b2c921a40bfe74d5bb231d88826", "title": "User-Agent Value Alignment ", "url": "https://www.aaai.org/Papers/Symposia/Spring/2002/SS-02-07/SS02-07-002.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2002-01-01T00:00:00Z", "authors": ["Daniel Shapiro", "Ross Shachter"], "summaries": ["This paper **from 2002** investigates what it would take to align an artificial agent with a human principal, under the assumption that the human utility function is known, but that the agent reward and human utility might be computed from different feature sets fA and fH. In this case, it is possible that the agent reward cannot capture all of the effects that the human cares about, leading to misalignment.\n\nThey introduce the concept of _graphical value alignment_, in which the only way that the agent’s actions can affect fH is through fA. In this case, we can establish _functional value alignment_ (in which the agent’s optimal policy also maximizes human utility), by setting the agent's reward for any specific fA to be the expectation (over fH) of the utility of fH, given fA. Note that the graphical criterion is _very strong_: it requires that _none_ of the agent’s unobserved effects matter at all to the human.\n\nThey suggest two methods for establishing alignment. First, we can define additional agent features (perhaps requiring additional sensors), until all of the effects on fH are captured by fA. However, this would be very difficult, if not impossible. Second, we can include all agent actions and observations as agent features, since any effect of the agent’s choice of policy on fH depends only on the observations made and actions taken. Of course, to achieve functional value alignment we would then have to have a good understanding of the expected human utility for every action given any observation, which is also hard.\n\nThey also briefly discuss the relationship between aligned agents and capable agents: a stone is aligned with you (per their definition), but also entirely useless. An interesting quote: _“Note that it might be harder to establish alignment with more competent agents because their skills afford many more pathways for adverse effects. This is a somewhat troubling thought.”_"], "venue": "AAAI 2002", "opinion": "It’s interesting how much of the alignment problem manifests itself even when you assume that the human utility function is known, but the feature sets used by the human and agent are different. The only piece of the argument missing from this paper is that with sufficiently capable agents, the agent will actually be _adversarial_ towards the human because of [convergent instrumental subgoals](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf), and that argument can be made in this framework.\n\nUnfortunately, both of their methods for producing alignment don’t scale well, as they admit in the paper. (The second method in particular is kind of like hardcoding the policy, similarly to the construction <@here@>(@Coherence arguments do not imply goal-directed behavior@).)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #101", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "5e88d9704bd664929720948514adefbc", "title": "If I were a well-intentioned AI", "url": "https://www.alignmentforum.org/s/knbhjv252HshMSwpt/p/gzWb5kWwzhdaqmyTt", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Stuart Armstrong"], "summaries": ["This sequence takes on the perspective of an AI system that is well-intentioned, but lacking information about what humans want. The hope is to find what good AI reasoning might look like, and hopefully use this to derive insights for safety. The sequence considers Goodhart problems, adversarial examples, distribution shift, subagent problems, etc."], "venue": "Alignment Forum", "opinion": "I liked this sequence. Often when presented with a potential problem in AI safety, I ask myself why the problem doesn't also apply to humans, and how humans have managed to solve the problem. This sequence was primarily this sort of reasoning, and I think it did a good job of highlighting how with sufficient conservatism it seems plausible that many problems are not that bad if the AI is well-intentioned, even if it has very little information, or finds it hard to communicate with humans, or has the wrong abstractions.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #93", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "a19c916af44441475a2afad6e6f7cba0", "title": "The Incentives that Shape Behaviour", "url": "https://medium.com/@RyanCarey/new-paper-the-incentives-that-shape-behaviour-d6d8bb77d2e4", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Ryan Carey*", "Eric Langlois*", "Tom Everitt", "Shane Legg"], "summaries": ["This post and [paper](https://arxiv.org/abs/2001.07118) introduce a method for analyzing the safety properties of a system using a _causal theory of incentives_ (<@past@>(@Understanding Agent Incentives with Causal Influence Diagrams@) <@papers@>(@Modeling AGI Safety Frameworks with Causal Influence Diagrams@)). An _incentive_ is something an agent must do to best achieve its goals. A _control incentive_ exists when an agent must control some component of its environment in order to maximize its utility, while a _response incentive_ is present when the agent's decision must be causally responsive to some component of its environment. These incentives can be analyzed formally by drawing a _causal influence diagram_, which represents a decision problem as a graph where each variable depends on the values of its parents.\n\nFor example, consider the case where a recommender algorithm decides what posts to show to maximize clicks. In the causal influnce diagram representing this system, we can include that we have control over the node 'posts to show', which has a direct effect on the node we want to maximize, 'clicks'. However, 'posts to show' may also have a direct effect on the node 'influenced user opinions', which itself affects 'clicks'. In the system as it stands, in addition to there being a desirable control incentive on 'clicks', there is also an undesirable control incentive on 'influenced user opinions', since they themselves influence 'clicks'. To get rid of the undesirable incentive, we could reward the system for _predicted clicks_ based on a model of the original user opinions, rather than for actual clicks."], "venue": "arXiv", "opinion": "I really like this formalization of incentives, which come up frequently in AI safety work. It seems like some people are <@already@>(@Asymptotically Benign AGI@) <@using@>(@Designing agent incentives to avoid reward tampering@) this framework, and this seems low-cost enough that it's easy to imagine a world where this features in the safety analysis of algorithm designers.", "highlight": false, "read_more": "Paper: The Incentives that Shape Behaviour", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #87", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "b2e6aa49dbce1bb88f98f2568a37271b", "title": "Vox interview with Stuart Russell", "url": "https://www.vox.com/future-perfect/2019/10/26/20932289/ai-stuart-russell-human-compatible", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Kelsey Piper"], "summaries": ["Kelsey talked with Stuart Russell about his new book, <@Human Compatible@>(@Human Compatible: Artificial Intelligence and the Problem of Control@)."], "venue": "Vox", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #71", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "26cefad712c11bd475395f03b30f5791", "title": "Partial preferences needed; partial preferences sufficient", "url": "https://www.alignmentforum.org/posts/sEqu6jMgnHG2fvaoQ/partial-preferences-needed-partial-preferences-sufficient", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Stuart Armstrong"], "summaries": ["I'm not sure I fully understand this post, but my understanding is that it is saying that alignment proposals must rely on some information about human preferences. Proposals like impact measures and corrigibility try to formalize a property that will lead to good outcomes; but any such formalization will be denoting some policies as safe and some as dangerous, and there will always exist a utility function according to which the \"safe\" policies are catastrophic. Thus, you need to also define a utility function (or a class of them?) that safety is computed with respect to; and designing this is particularly difficult."], "venue": "Alignment Forum", "opinion": "This seems very similar to the problem I have with impact measures, but I wouldn't apply that argument to corrigibility. I think the difference might be that I'm thinking of \"natural\" things that agents might want, whereas Stuart is considering the entire space of possible utility functions. I'm not sure what drives this difference.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #49", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "d227157f24d4625a586a61315f242884", "title": "Value Alignment Map", "url": "https://futureoflife.org/valuealignmentmap/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": null, "authors": ["FLI"], "summaries": ["This is a gigantic graph of many of the concepts in the AI risk space. Each concept has a description and links to existing literature, and by clicking around in the map I found several interesting links I hadn't seen before."], "venue": "FLI Website", "opinion": "This map is so large that I can't actually use it to get a birds-eye view of the entire space, but it seems quite useful for looking at a local region and as a starting point to explore one particular aspect more deeply.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #4", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "1d66f85dffefa7d78f05bf3ee9b072ad", "title": "Existential Risk, Creativity & Well-Adapted Science", "url": "https://www.cser.ac.uk/resources/xrisk-creativity/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Adrian Currie"], "summaries": ["From a brief skim, it seems like this paper defines \"creativity\" in scientific research, and argues that existential risk research needs to be creative. Research is creative if it is composed of \"hot\" searches, where we jump large distances from one proposed solution to another, with broad differences between these solutions, as opposed to \"cold\" searches, in which we primarily make incremental improvements, looking over a small set of solutions clustered in the neighborhood of existing solutions. The paper argues that research on existential risk needs to be creative, because many aspects of such research make it hard to analyze in a traditional way -- we can't perform controlled experiments of extinction, nor of the extreme circumstances under which it is likely; there are many interdependent parts that affect each other (since existential risks typically involve effects on many aspects of society), and there is likely to be a huge amount of uncertainty due to lack of evidence. As a result, we want to change the norms around existential risk research from the standard academic norms, which generally incentivize conservatism and \"cold\" searches. Table 1 provides a list of properties of academia that lead to conservatism, and asks that future work think about how we could mitigate these."], "venue": "CSER Website", "opinion": "While I'm not sure I agree with the reasons in this paper, I do think we need creativity and \"hot\" searches in technical AI safety, simply based on the level of confusion and uncertainty that we (or at least I) have currently. The properties in Table 1 seem particularly good as an initial list of things to target if we want to make creative research more likely.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #27", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "f35d4464bf717d0351b7a22f74d6c673", "title": "Do what we mean vs. do what we say", "url": "https://www.alignmentforum.org/posts/8Q5h6hyBXTEgC6EZf/do-what-i-mean-vs-do-what-i-say", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Rohin Shah"], "summaries": ["I wrote a post proposing that we define a \"do what we mean\" system to be one in which the thing being optimized is latent (in the sense that it is not explicitly specified, not that it has a probability distribution over it). Conversely, a \"do what we say\" system explicitly optimizes something provided as an input. A lot of AI safety arguments can be understood as saying that a pure \"do what we say\" AI will lead to catastrophic outcomes. However, this doesn't mean that a \"do what we mean\" system is the way to go -- it could be that we want a \"do what we mean\" core, along with a \"do what we say\" subsystem that makes sure that the AI always listens to eg. shutdown commands."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #22", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "08a15c42b0918fae260b3a0e4ac57fbe", "title": "Discontinuity from the Eiffel Tower", "url": "https://aiimpacts.org/discontinuity-from-the-eiffel-tower/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Beth Barnes and Katja Grace"], "summaries": ["The Eiffel tower represented a 54-year discontinuity in the trend for \"height of the tallest existing structure\", and an 8000-year discontinuity in the trend for \"height of the tallest structure ever\". It's unclear what the cause of this discontinuity is, though the authors provide some speculation."], "venue": "AI Impacts", "opinion": "I'm not sure if I should update without knowing the cause of the discontinuity, or how the search for discontinuities was conducted. If you're searching for discontinuities, I do expect you'll find some, even if in general I expect discontinuities not to arise, so it doesn't feel like strong evidence that discontinuities are probable.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "[Discontinuous progress investigation](https://aiimpacts.org/discontinuous-progress-investigation/) or [Likelihood of discontinuous progress around the development of AGI](https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/)", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "c3caaaf545d97e7f75ee78ad4a97a8e7", "title": "Compact vs. Wide Models", "url": "https://www.lesswrong.com/posts/JkCPkMxuftohieb8B/compact-vs-wide-models", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Vaniver"], "summaries": ["A compact model is one which is very general, and easy to prove things about, but doesn't inherently capture the messiness of the real world inside the model. Examples include Turing machines and utility functions. A wide model is one which still has a conceptually crisp core, but these crisp core units must then be combined in a complicated way in order to get something useful. Examples include the use of transistors to build CPUs, and the hierarchical control model of human psychology. The nice thing about wide models is that they start to engage with the messiness of the real world, and so make it clearer where the complexity is being dealt with. This is a useful concept to have when evaluating a proposal for alignment -- it asks the question, \"where does the complexity reside?\""], "venue": "LessWrong", "opinion": "I definitely support having models that engage more with the messiness of the real world. I'm not sure if I would have used \"wide models\" -- it seems like even the assumption of a crisp core makes it not as capable of handling messiness as I want. But if you're trying to get formal guarantees and you need to use some model, a wide model seems probably useful to use.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "4232c4fd2ed3a7f85290b00aab0251a7", "title": "Putting the humanity into inhuman systems: How human factors and ergonomics can be used to manage the risks associated with artificial general intelligence", "url": "https://onlinelibrary.wiley.com/doi/10.1002/hfm.20883", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Paul M. Salmon", "Tony Carden", "Peter A. Hancock"], "summaries": ["This paper argues that the methods of Human Factors and Ergonomics (HFE) should be applied to AGI safety. They list fifteen different methods from the field, typically used to analyze the performance of humans in systems, which could be applied to AGI instead (on the assumption that AGI will be more like humans than like machines in today’s systems). They then give examples of how these might be applied to the Prometheus story in the prologue of [Life 3.0](https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598)."], "venue": "Human Factors and Ergonomics in Manufacturing", "opinion": "I’m not very familiar with this field, but among other techniques the paper mentions STAMP and STPA which we’ve previously seen in <@Engineering a Safer World@>. It does seem to me like these techniques would be useful to apply to the entire sociotechnical system, of which an AGI system is just one part (and this is what the paper’s examples do). It is less clear to me whether it makes sense to take techniques designed for humans and apply them to AGI: perhaps we’ll have enough understanding of the differences between humans and AGI that we could do this in a reasonable way, but I think there is a real risk that the methods give incorrect conclusions simply because they make incorrect assumptions about how AGI works (given that they were designed for humans). Nonetheless, I do agree with the core claim of this paper that HFE is worth exploring.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #137", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "8cebae76fc2bf85621345244e82e523d", "title": "Mapping the Conceptual Territory in AI Existential Safety and Alignment", "url": "https://jbkjr.com/posts/2020/12/mapping_conceptual_territory_AI_safety_alignment/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Jack Koch"], "summaries": ["There are a bunch of high-level overviews and research agendas, not all of which agree with each other. This post attempts to connect and integrate several of these, drawing heavily on <@Paul Christiano’s overview@>(@Current Work in AI Alignment@), [my](https://futureoflife.org/2019/04/11/an-overview-of-technical-ai-alignment-with-rohin-shah-part-1/) [overview](https://futureoflife.org/2019/04/25/an-overview-of-technical-ai-alignment-with-rohin-shah-part-2/), and the <@ARCHES agenda@>(@AI Research Considerations for Human Existential Safety@), but also including a lot of other work. It serves as a good way of connecting these various perspectives; I recommend reading it for this reason. (Unfortunately, it is rather hard to summarize, so I haven’t done so.)"], "venue": "Author's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #131", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "2064bee91034f1ae16f7c61797433c0f", "title": "When AI Systems Fail: Introducing the AI Incident Database", "url": "https://www.partnershiponai.org/aiincidentdatabase/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Sean McGregor"], "summaries": ["One obvious way to improve safety is to learn from past mistakes, and not repeat them. This suggests that it would be particularly valuable to have a repository of past incidents that have occurred, so that people can learn from them; indeed both aviation and cybersecurity have their own incident databases. The AI Incidents Database aims to fill this gap within AI. The database currently has over a thousand incidents covering a wide range of potential issues, including self-driving car accidents, wrongful arrests due to bad facial recognition or machine translation, and algorithm-driven “flash crash”."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "[AI Incident Database](https://incidentdatabase.ai/)\n\n[Paper: Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database](https://arxiv.org/abs/2011.08512)", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #129", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "98f138ad01908118d7d29505efae8217", "title": "Foundational Philosophical Questions in AI Alignment", "url": "https://futureoflife.org/2020/09/03/iason-gabriel-on-foundational-philosophical-questions-in-ai-alignment/?utm_source=feedly&utm_medium=rss&utm_campaign=iason-gabriel-on-foundational-philosophical-questions-in-ai-alignment", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Lucas Perry and Iason Gabriel"], "summaries": ["This podcast starts with the topic of the paper <@Artificial Intelligence, Values and Alignment@> and then talks about a variety of different philosophical questions surrounding AI alignment."], "venue": "FLI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #117", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "3ac993ca8ac893c0b330d5ffb2dcd085", "title": "State of AI Ethics", "url": "https://montrealethics.ai/wp-content/uploads/2020/06/State-of-AI-Ethics-June-2020-report.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Abhishek Gupta", "Marianna Ganapini", "Renjie Butalid", "Camylle Lanteigne", "Allison Cohen", "Mo Akif", "Tania De Gasperis", "Victoria Heath", "Erick Galinkin"], "summaries": ["This report from the Montreal AI Ethics Institute has a wide variety of summaries on many different topics in AI ethics, quite similarly to this newsletter in fact."], "venue": "MAIEI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #115", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "af794e0e8ccffd673665917c76eb0916", "title": "To Trust Or Not To Trust A Classifier", "url": "http://arxiv.org/abs/1805.11783", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Heinrich Jiang", "Been Kim", "Melody Y. Guan", "Maya Gupta"], "summaries": ["The confidence scores given by a classifier (be it logistic regression, SVMs, or neural nets) are typically badly calibrated, and so it is hard to tell whether or not we should trust our classifier's prediction. The authors propose that we compute a _trust score_ to tell us how much to trust the classifier's prediction, computed from a training set of labeled datapoints. For every class, they filter out some proportion of the data points, which removes outliers. Then, the trust score for a particular test point is the ratio of (distance to nearest non-predicted class) to (distance to predicted class). They have theoretical results showing that a high trust score means that the classifier likely agrees with the Bayes-optimal classifier, as well as empirical results showing that this method does better than several baselines for determining when to trust a classifier. One cool thing about this method is that it can be done with any representation of the input data points -- they find that working with the activations of deeper layers of a neural net improves the results."], "venue": "NeurIPS 2018", "opinion": "I'm a big fan of trying to understand when our AI systems work well, and when they don't. However, I'm a little confused by this -- ultimately the trust score is just comparing the given classifier with a nearest neighbor classifier. Why not just use the nearest neighbor classifier in that case? This paper is a bit further out of my expertise than I'd like to admit, so perhaps there's an obvious answer I'm not seeing.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #11", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "cf6bcecc56f84fd3bb394f9a0fefef23", "title": "From ImageNet to Image Classification", "url": "https://gradientscience.org/benchmarks/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Andrew Ilyas", "Aleksander Madry"], "summaries": ["ImageNet was crowdsourced by presenting images to MTurk workers who had to select images that contain a given class from a pool of images obtained via search on the internet. This is problematic, as an image containing multiple classes will basically get assigned to a random suitable class which can lead to deviations between ImageNet performance and actual capability to recognize images. The authors used MTurk and allowed workers to select multiple classes, as well as one main class for a given image in a pool of 10000 ImageNet validation images. Around 20% of the images seem to contain objects representing multiple classes and the average accuracy for these images was around 10% worse than average for a wide variety of image classifiers. While this is a significant drop, it is still way better than predicting a random class that is in the image. Also, advanced models were still able to predict the ImageNet label in cases where it does not coincide with the main class identified by humans, which suggest that they exploit biases in the dataset generation. While the accuracy of model predictions with respect to the newly identified main class still increased with better accuracy in predicting labels, the accuracy gap seems to grow and we might soon hit a point where gains in ImageNet accuracy don't correspond to improved image classification. "], "venue": "Gradient Science", "opinion": "I generally find these empiricial tests of whether ML systems actually do what they are assumed to do quite useful for better calibrating intuitions about the speed of AI progress, and to make failure modes more salient. While we have the latter, I am confused about what this means for AI progress: on one hand, this supports the claim that improved benchmark progress does not necessarily translate to better real world applicability. On the other hand, it seems like image classification might be easier than exploiting the dataset biases present in ImageNet, which would mean that we would likely be able to reach even better accuracy than on ImageNet for image classification with the right dataset. ", "highlight": false, "read_more": "Paper: From ImageNet to Image Classification: Contextualizing Progress on Benchmarks", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #103", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "e41c8540e99121b3bf6915be465cc934", "title": "A Guide to Writing the NeurIPS Impact Statement", "url": "https://medium.com/@operations_18894/a-guide-to-writing-the-neurips-impact-statement-4293b723f832", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Carolyn Ashurst", "Markus Anderljung", "Carina Prunkl", "Jan Leike", "Yarin Gal", "Toby Shevlane", "Allan Dafoe"], "summaries": ["NeurIPS 2020 requires paper submissions to include a statement on the broader impact of their work. This post provides a guide for how to write an effective impact statement. They recommend focusing on the most significant, neglected, and tractable impacts, both positive and negative, while also conveying the uncertainties involved. They also suggest integrating this into the research process by reading the tech governance literature and building institutional structures, and including this information in introductions.\n\nTheir guide then recommends considering 3 questions:\nHow could your research affect ML applications?\nWhat are the societal implications of these applications?\nWhat research or other initiatives could improve social outcomes?\n\nThere is more information in the guide on how to go about answering those questions, along with some examples. "], "venue": "Medium", "opinion": "I am definitely in favor of considering the impacts of ML research before conducting or publishing it. I think the field is currently either at or near a threshold where papers will start having significant real world effects. While I don’t think this requirement will be sufficient for ensuring positive outcomes, I am glad NeurIPS is trying it out. \n\nI think the article makes very strong points and will improve the quality of the impact statements that get submitted. I particularly liked the point about communicating uncertainty, which is a norm that I think the ML community would benefit from greatly. One thing I would add here is that giving explicit probabilities is often more helpful than vague words like “might” or “could”. ", "highlight": false, "read_more": "", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #100", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "91115bd4ca9db9bd81a9ba2ae0066c6e", "title": "Disambiguating \"alignment\" and related notions", "url": "https://www.lesswrong.com/posts/FTpPC4umEiREZMMRu/disambiguating-alignment-and-related-notions", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["capybaralet"], "summaries": ["Distinguishes between several kinds of alignment. Some focus on _terminal values_ from the AI, such as holistic alignment (the AI has the same terminal values as us) and parochial alignment (which I don't really understand, check the post). Sufficient alignment focuses on _outcomes_ (no X-event happens, or X-risk is sufficiently low). Finally, others focus on the _motivations_ of the AI, including intentional alignment (the AI tries to do what H wants it to do) and benign AI (R doesn't try to do what H doesn't want it to do)."], "venue": "LessWrong", "opinion": "It is definitely worth keeping these distinctions in mind whenever talking about alignment. I personally tend to think about the motivation-based definitions, because those seem to be the most tractable definitions to work on, mainly because I don't have to worry about the AI being incompetent (eg. an AI launching nukes accidentally while exploring its action space). It seems possible to get strong arguments for intentional alignment and then use that with improved capabilities to argue for sufficient alignment.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #10", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "9c1cdd9a41e8546e9861817292cc7722", "title": "On Strong Artificial Intelligence", "url": "https://docs.google.com/document/d/1RP_bWfC1waWQaLwunQBN_R0yRNlDjVOOE4rhmqm8JSA/edit", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": null, "authors": ["Zhou Zhihua", "translated by Jeffrey Ding"], "summaries": ["This article, written by a professor from China, argues that the AI community has never been focused on \"strong AI\", and we have no real path forward to building \"strong AI\", and that it would be so dangerous that we should never do research around it. The concept of \"strong AI\" here is a bit different from what we are used to -- I would probably call it human-like AGI, in that it would have consciousness, self-awareness, and emotions, and be as capable as a human."], "venue": "", "opinion": "This is an interesting position I haven't seen much in the West -- both that we can't build AGI, and that we shouldn't build it anyway. It's actually quite heartening to see an emphatic claim that we shouldn't build strong AI -- it seems like AI researchers as a group may in fact be able to coordinate to develop AI safely. Of course, this is a single viewpoint and is not representative of all AI researchers in China.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #10", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "74418b4a0c8ff2c488a40d19cb267c0f", "title": "Distance Functions are Hard", "url": "https://alignmentforum.org/posts/YuJNoCEgeWJfBtdtQ/distance-functions-are-hard-1", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Grue_Slinky"], "summaries": ["Many ideas in AI alignment require some sort of distance function. For example, in [Functional Decision Theory](https://arxiv.org/abs/1710.05060), we'd like to know how \"similar\" two algorithms are (which can influence whether or not we think we have \"logical control\" over them). This post argues that defining such distance functions is hard, because they rely on human concepts that are not easily formalizable, and the intuitive mathematical formalizations usually have some flaw."], "venue": "Alignment Forum", "opinion": "I certainly agree that *defining* \"conceptual\" distance functions is hard. It has similar problems to saying \"write down a utility function that captures human values\" -- it's possible in theory but in practice we're not going to think of all the edge cases. However, it seems possible to learn distance functions rather than defining them; this is already done in perception and state estimation.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #63", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "479563737771e9217cd6680155413350", "title": "Existential Risks: A Philosophical Analysis", "url": "https://www.tandfonline.com/doi/abs/10.1080/0020174X.2019.1658626", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Phil Torres"], "summaries": ["The phrase \"existential risk\" is often used in different ways. This paper considers the pros and cons of five different definitions."], "venue": "Inquiry: An Interdisciplinary Journal of Philosophy", "opinion": "While this doesn't mention AI explicitly, I think it's useful to read anyway, because often which of the five concepts you use will affect what you think the important risks are.", "highlight": false, "read_more": "PDF", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #59", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "6ee021e4ba864500106de5a6424eb838", "title": "AI Alignment Podcast: Moral Uncertainty and the Path to AI Alignment", "url": "https://futureoflife.org/2018/09/17/moral-uncertainty-and-the-path-to-ai-alignment-with-william-macaskill/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Lucas Perry and William MacAskill"], "summaries": ["Initially, Will articulates arguments for moral realism (the idea that there are objectively true moral facts) and moral uncertainty (the idea that we should assign credences to different moral theories being correct). Later, the discussion turns to the relevance of these views to AI safety. Will distinguishes the control problem (ensuring AIs do what we say), from the problem of aligning AI with human values, from the problem of aligning AI with moral truth. Observing humans isn't sufficient to learn values, since people can be self-destructive or otherwise misguided. Perhaps AI could extrapolate the values an idealised version of each person would endorse; however, this procedure seems under-defined.\n\nOn the moral truth side, Will worries that most educated people are moral relativists or subjectivists and so they won't sufficiently prioritise aligning AI with moral truth. He advocates for a period of long philosophical reflection once we've reduced existential risk to near zero, to figure out which future would be best. Careful ethical reasoning during this period will be particularly important since small mistakes might be magnified massively when implemented on an astronomical scale; however, he acknowledges that global dynamics make such a proposal unlikely to succeed. On a brighter note, AGI might make great advances in ethics, which could allow us to make the future much more morally valuable."], "venue": "FLI Website", "opinion": "I think moral uncertainty is an important and overdue idea in ethics. I also agree that the idea of extrapolating an idealised form of people's preferences is not well-defined. However, I'm very skeptical about Will's arguments about moral realism. In particular, I think that saying that nothing matters at all without moral realism is exactly the sort of type error which Eliezer argued against [here](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside).\n\nI'm more sympathetic to the idea that we should have a period of long reflection before committing to actions on an astronomical scale; this seems like a good idea if you take moral uncertainty at all seriously.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #25", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "e8b3287f587361b794734f9568d368b0", "title": "Key Concepts in AI Safety", "url": "https://cset.georgetown.edu/research/key-concepts-in-ai-safety-an-overview/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Tim G. J. Rudner", "Helen Toner"], "summaries": ["This overview from CSET gives a brief introduction to AI safety using the <@specification, robustness, and assurance (SRA) framework@>(@Building safe artificial intelligence: specification, robustness, and assurance@). Follow-up reports cover [interpretability](https://cset.georgetown.edu/research/key-concepts-in-ai-safety-interpretability-in-machine-learning/) and [adversarial examples / robustness](https://cset.georgetown.edu/research/key-concepts-in-ai-safety-robustness-and-adversarial-examples/). I don’t expect these to be novel to readers of this newsletter -- I include them in case anyone wants a brief overview, as well as to provide links to AI safety reports that will likely be read by government officials."], "venue": "CSET Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #143", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "2c40d38742892793ce7e2c5b8cd7cf59", "title": "Formal Metaethics and Metasemantics for AI Alignment", "url": "http://www.metaethical.ai/v20-1/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["June Ku"], "summaries": ["This website presents in great detail a process by which an agent might use data from human brains in order to infer a utility function for a single human (also spelling out what assumptions need to be made along the way), and then how it could combine the utility functions from different humans to arrive at \"a fully technical ethical goal function\". Emphasis is placed on solving the philosophical problems of metaethics and mental content. Quoting the website, they \"suppose that unlimited computation and a complete low-level causal model of the world and the adult human brains in it are available\"."], "venue": "Author's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #98", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "c710c1f1f237891e226567e8f0ffdc4b", "title": "Approaches to Deploying a Safe Artificial Moral Agent", "url": "https://montrealethics.ai/approaches-to-deploying-a-safe-artificial-moral-agent/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Olivier Couttolenc"], "summaries": ["This post investigates which of the current moral theories would most reduce existential risk if we programmed it into an AI system, and settles on Aristotelian virtue ethics (over utilitarianism and Kant's categorical imperative)."], "venue": "Montreal AI Ethics Institute Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #98", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "8f929649c409dc80afd03e70650c21af", "title": "Understanding Agent Incentives with Causal Influence Diagrams", "url": "https://medium.com/@deepmindsafetyresearch/understanding-agent-incentives-with-causal-influence-diagrams-7262c2512486", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Tom Everitt"], "summaries": ["This post and associated paper model an agent's decision process using a causal influence diagram -- think of a Bayes net, and then imagine that you add nodes corresponding to actions and utilities. A major benefit of Bayes nets is that the criterion of d-separation can be used to determine whether two nodes are conditionally independent. Once we add actions and utilities, we can also analyze whether observing or intervening on nodes would lead the agent to achieve higher expected utility. The authors derive criteria resembling d-separation for identifying each of these cases, which they call observation incentives (for nodes whose value the agent would like to know) and intervention incentives (for nodes whose value the agent would like to change). They use observation incentives to show how to analyze whether a particular decision is fair or not (that is, whether it depended on a sensitive feature that should not be used, like gender). Intervention incentives are used to establish the security of [counterfactual oracles](https://arxiv.org/abs/1711.05541) more simply and rigorously."], "venue": "DeepMind Safety Blog", "opinion": "These criteria are theoretically quite nice, but I'm not sure how they relate to the broader picture. Is the hope that we will be able to elicit the causal influence diagram an AI system is using, or something like it? Or perhaps that we will be able to create a causal influence diagram of the environment, and these criteria can tell us which nodes we should be particularly interested in? Maybe the goal was simply to understand agent incentives better, with the expectation that more knowledge would help in some as-yet-unknown way? None of these seem very compelling to me, but the authors might have something in mind I haven't thought of.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #49", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "e428fa7920109873cbbaa785706ed3ed", "title": "Exploring AI Safety in Degrees: Generality, Capability and Control", "url": "https://www.cser.ac.uk/resources/exploring-ai-safety-degrees-generality-capability-and-control/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["John Burden", "José Hernández-Orallo"], "summaries": ["This paper argues that we should decompose the notion of “intelligence” in order to talk more precisely about AI risk, and in particular suggests focusing on _generality_, _capability_, and _control_. We can think of capability as the expected performance of the system across a wide variety of tasks. For a fixed level of capability, generality can be thought of as how well the capability is distributed across different tasks. Finally, control refers to the degree to which the system is reliable and deliberate in its actions. The paper qualitatively discusses how these characteristics could interact with risk, and shows an example quantitative definition for a simple toy environment."], "venue": "SafeAI 2020", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #117", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "ab6bd048953692b8b284bb6e481875e4", "title": "AI Alignment Podcast: On Becoming a Moral Realist", "url": "https://futureoflife.org/2018/10/18/on-becoming-a-moral-realist-peter-singer/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Lucas Perry and Peter Singer"], "summaries": ["There's a fair amount of complexity in this podcast, and I'm not an expert on moral philosophy, but here's an *oversimplified* summary anyway. First, in the same way that we can reach mathematical truths through reason, we can also arrive at moral truths through reason, which suggests that they are true facts about the universe (a moral realist view). Second, preference utilitarianism has the problem of figuring out which preferences you want to respect, which isn't a problem with hedonic utilitarianism. Before and after the interview, Lucas argues that moral philosophy is important for AI alignment. Any strategic research \"smuggles\" in some values, and many technical safety problems, such as preference aggregation, would benefit from a knowledge of moral philosophy. Most importantly, given our current lack of consensus on moral philosophy, we should be very wary of locking in our values when we build powerful AI."], "venue": "FLI Website", "opinion": "I'm not convinced that we should be thinking a lot more about moral philosophy. While I agree that locking in a set of values would likely be quite bad, I think this means that researchers should not hardcode a set of values, or create an AI that infers some values and then can never change them. It's not clear to me why studying more moral philosophy helps us with this goal. For the other points, it seems not too important to get preference aggregation or particular strategic approaches exactly perfect as long as we don't lock in values -- as an analogy, we typically don't argue that politicians should be experts on moral philosophy, even though they aggregate preferences and have large impacts on society.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #29", "newsletter_category": "Miscellaneous (Alignment)"}
{"id": "89cf969d3d19bc3a63c58dc46f62342a", "title": "How artificial intelligence is changing science", "url": "https://news.stanford.edu/2018/05/15/how-ai-is-changing-science/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Nathan Collins"], "summaries": ["AI is being used in many different projects across many different fields at Stanford. This post has a list of a whole bunch of scientific projects that AI is helping with."], "venue": "Stanford Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #7", "newsletter_category": "Miscellaneous (Capabilities)"}
{"id": "a56ed0649f03025c2d871e254dfb1a5b", "title": "Talk to Books", "url": "https://books.google.com/talktobooks/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["See [Import AI](https://jack-clark.net/2018/04/16/import-ai-90-training-massive-networks-via-codistillation-talking-to-books-via-a-new-google-ai-experiment-and-why-the-acm-thinks-researchers-should-consider-the-downsides-of-research/)."], "venue": "Google Books", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #3", "newsletter_category": "Miscellaneous (Capabilities)"}
{"id": "17af8e74c462c2ca960ff2fc8dfef044", "title": "Winner's Curse?", "url": "https://openreview.net/pdf?id=rJWF0Fywf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["D. Sculley", "Jasper Snoek", "Ali Rahimi", "Alex Wiltschko"], "summaries": ["A short paper arguing that we need more empirical rigor in ML, identifying some structural incentives that push against this and suggesting solutions."], "venue": "OpenReview", "opinion": "While this isn't very relevant to technical alignment, it does seem important to have more rigor in ML, since ML researchers are likely to be the ones building advanced AI.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #21", "newsletter_category": "Miscellaneous (Capabilities)"}
{"id": "6985dbe05b08c8669fa9879907119a08", "title": "Generally capable agents emerge from open-ended play", "url": "https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Open-Ended Learning Team", "Adam Stooke", "Anuj Mahajan", "Catarina Barros", "Charlie Deck", "Jakob Bauer", "Jakub Sygnowski", "Maja Trebacz", "Max Jaderberg", "Michael Mathieu", "Nat McAleese", "Nathalie Bradley-Schmieg", "Nathaniel Wong", "Nicolas Porcel", "Roberta Raileanu", "Steph Hughes-Fitt", "Valentin Dalibard", "Wojciech Marian Czarnecki"], "summaries": ["Artificial intelligence agents have become successful at games when trained for each game separately. However, it has proven challenging to build agents that can play _previously unseen_ games. This paper makes progress on this challenge in three primary areas: creating rich simulated environments and tasks, training agents with attention mechanisms over internal states, and evaluating agents over a variety of games. The authors show that agents trained with goal-based attention in their proposed environment (XLand) succeed at a range of novel, unseen tasks with no additional training required. Moreover, such agents appear to use general tactics such as decision-making, tool use, and experimentation during game-play episodes.\n\nThe authors argue that training-data generation is a central challenge to training general RL agents (an argument we’ve seen before with <@POET@>(@Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions@) and <@PAIRED@>(@Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design@)). They propose the training environment XLand to address this. XLand includes many multiplayer games within consistent, human-relatable 3D worlds and allows for dynamic agent learning through the procedural generation of tasks which are split into three components: world, agents, and goals. The inclusion of other agents makes this a partially observable environment. Goals are defined with Boolean formulas. Each goal is a combination of options and every option is a combination of atomic predicates. For example, in hide-and-seek one player has the goal see(me,opponent) and the other player not(see(opponent,me)). The space of worlds and games are shown to be both vast and smooth, which supports training.\n\nThe agents themselves are trained using deep-RL combined with a goal-attention module (GOAT). The per-timestep observations of the agent are ego-centric RGB images, proprioception values indicating forces, and the goal of the agent. The GOAT works by processing this information with a recurrent network and then using a goal-attention module to select hidden states that are most relevant to achieving a high return. This is determined by estimating the expected return if the agent focused on an option until the end of the episode.\n\nAs with many other major deep-RL projects, it is important to have a good curriculum, where more and more challenging tasks are introduced over time. The obvious method of choosing tasks with the lowest reward doesn’t work, because the returns from different games are non-comparable. To address this, an iterative notion of improvement is proposed, and scores are given as percentiles relative to a population. 
This is similar in spirit to the <@AlphaStar League@>(@AlphaStar: Mastering the Real-Time Strategy Game StarCraft II@). Following this, game-theoretic notions such as Pareto dominance can be used to compare agents and to determine how challenging a task is, which can then be used to create a curriculum.\n\nFive generations of agents are trained, each of which is used in the next generation to create opponents and relative comparisons for defining the curriculum. Early in training, the authors find that adding an intrinsic reward based on self-play is important to achieve good performance. This encourages agents to achieve non-zero rewards in as many games as possible, which the authors call “participation”. The authors also conduct an ablation study and find that dynamic task generation, population-based methods, and the GOAT module have a significant positive impact on performance.\n\nThe agents produced during training have desirable generalization capability. They can compete in games that were not seen before in training. Moreover, fine-tuning dramatically improves the performance of agents in tasks where training from scratch completely fails. A number of case studies are also presented to explore emergent agent behavior. In one experiment, an agent is asked to match a colored shape and another environment feature such as a shape or floor panel. At the start of the episode, the agent decides to carry a black pyramid to an orange floor, but then after seeing a yellow sphere changes options and places the two shapes together. This shows that the agent has robust option evaluation capability. In other experiments, the agents show the capacity to create ramps to move to higher levels in the world environment. Additionally, agents seem capable of experimentation. In one instance, the agent is tasked with producing a specific configuration of differently colored cube objects. The agent demonstrates trial-and-error and goes through several different configurations until it finds one it evaluates highly.\n\nThere are limitations to the agent capabilities. While agents can use ramps in certain situations they fail to use ramps more generally. For example, they frequently fail to use ramps to cross gaps. Additionally, agents generally fail to create more than a single ramp. Agents also struggle to play cooperative games involving following not seen during training. This suggests that experimentation does not extend to co-player behavior. More broadly, whether or not co-player agents decide to cooperate is dependent on the population the agents interacted with during training. In general, the authors find that agents are more likely to cooperate when both agents have roughly equal performance or capability."], "venue": "arXiv", "opinion": "This is a fairly complicated paper, but the authors do a reasonable job of organizing the presentation of results. In particular, the analysis of agent behavior and their neural representations is well done. At a higher level, I found it interesting that the authors partially reject the idea of evaluating agents with just expected returns. I broadly agree with the authors that the evaluation of agents across multi-player tasks is an open problem without an immediate solution. With respect to agent capability, I found the section on experimentation to be most interesting. 
In particular, I look forward to seeing more research on how attention mechanisms catalyze such behavior.\n\n**Rohin's opinion:** One of my models about deep learning is “Diversity is all you need”. Suppose you’re training for some task for which there’s a relevant feature F (such as the color of the goal pyramid). If F only ever takes on a single value in your training data (you only ever go to yellow pyramids), then the learned model can be specialized to that particular value of F, rather than learning a more general computation that works for arbitrary values of F. Instead, you need F to vary a lot during training (consider pyramids that are yellow, blue, green, red, orange, black, etc) if you want your model to generalize to new values of F at test time. That is, your model will be zero-shot robust to changes in a feature F if and only if your training data was diverse along the axis of feature F. (To be clear, this isn’t literally true, it is more like a first-order main effect.)\n\nSome evidence supporting this model:\n- The approach in this paper explicitly has diversity in the objective and the world, and so the resulting model works zero-shot on new objectives of a similar type and can be finetuned quickly.\n- In contrast, the similar <@hide and seek project@>(@Emergent Tool Use from Multi-Agent Interaction@) did not have diversity in the objective, had distinctly less diversity in the world, and instead got diversity from emergent strategies for multiagent interaction (but there were fewer than 10 such strategies). Correspondingly, the resulting agents could not be quickly finetuned.\n- My understanding is that in image recognition, models trained on larger, more diverse datasets become significantly more robust.\n\nBased on this model, I would make the following predictions about agents in XLand:\n- They will not generalize to objectives that can’t be expressed in the predicate language used at training time, such as “move all the pyramids near each other”. (In some sense this is obvious, since the agents have never seen the word “all” and so can’t know what it means.)\n- They will not work in any environment outside of XLand (unless that environment looks very very similar to XLand).\nIn particular, I reject the idea that these agents have learned “general strategies for problem solving” or something like that, such that we should expect them to work in other contexts as well, perhaps with a little finetuning. I think they have learned general strategies for solving a specific class of games in XLand.\n\nYou might get the impression that I don’t like this research. That’s not the case at all — it is interesting and impressive, and it suggests that we could take the same techniques and apply them in broader, more realistic domains where the resulting agents could be economically useful. Rather, I expect my readership to overupdate on this result and think that we’ve now reached agents that can do “general planning” or some such, and I want to push against that.", "highlight": true, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #159", "newsletter_category": "Multiagent RL"}
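The goal representation described in the summary (each goal a disjunction of "options", each option a conjunction of atomic predicates) is easy to make concrete. Below is a toy sketch of that representation and how a goal could be checked against a world state; the predicate names follow the hide-and-seek example above, while the data structures and the `state` dictionary are my own simplification rather than DeepMind's actual code. My understanding is that in the paper the agent is rewarded at timesteps where its goal formula holds.

```python
# A goal is a disjunction of options; an option is a conjunction of atomic
# predicates; a predicate is (name, args, negated), evaluated on a world state.

def holds(predicate, state):
    name, args, negated = predicate
    value = state.get((name, args), False)
    return (not value) if negated else value

def option_satisfied(option, state):   # conjunction of predicates
    return all(holds(p, state) for p in option)

def goal_satisfied(goal, state):       # disjunction of options
    return any(option_satisfied(o, state) for o in goal)

# Hide-and-seek: the seeker wants see(me, opponent), the hider wants
# not(see(opponent, me)).
seeker_goal = [[("see", ("me", "opponent"), False)]]
hider_goal = [[("see", ("opponent", "me"), True)]]

state = {("see", ("me", "opponent")): True,
         ("see", ("opponent", "me")): False}
print(goal_satisfied(seeker_goal, state))  # True
print(goal_satisfied(hider_goal, state))   # True
```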
{"id": "bbc17d6b7fbfd39aa67a7664ded6ce79", "title": "Announcement: AI alignment prize round 2 winners and next round", "url": "https://www.lesswrong.com/posts/SSEyiHaACSYDHcYZz/announcement-ai-alignment-prize-round-2-winners-and-next", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["cousin_it"], "summaries": ["The winners of the second round of the AI alignment prize have been announced! All of the winners have already been sent out in this newsletter, except for the first place winner, \"[The Alignment Problem for History-Based Bayesian Reinforcement Learners](http://www.tomeveritt.se/papers/alignment.pdf)\". The deadline for the next iteration of the AI alignment prize is June 30, 2018."], "venue": "LessWrong", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #3", "newsletter_category": "News"}
{"id": "904b3cc9cb995011cc9a90b6726e6fe6", "title": "Request for proposals for projects in AI alignment that work with deep learning systems", "url": "https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/request-for-proposals-for-projects-in-ai-alignment-that-work-with-deep-learning-systems", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Nick Beckstead and Asya Bergal"], "summaries": ["Open Philanthropy is seeking proposals for AI safety work in four major areas related to deep learning, each of which I summarize below. Proposals are due January 10, and can seek up to $1M covering up to 2 years. Grantees may later be invited to apply for larger and longer grants."], "venue": "Open Philanthropy Website", "opinion": "Overall, I like these four directions and am excited to see what comes out of them! I'll comment on specific directions below.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #168", "newsletter_category": "News"}
{"id": "2b2725d848adeec339310eedd559c424", "title": "AI Safety Papers", "url": "https://ai-safety-papers.quantifieduncertainty.org/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Ozzie Gooen"], "summaries": ["AI Safety Papers (announced [here](https://www.alignmentforum.org/posts/GgusnG2tiPEa4aYFS/ai-safety-papers-an-app-for-the-tai-safety-database)) is an app to interactively explore a previously collected <@database of AI safety work@>(@TAI Safety Bibliographic Database@). I believe it contains every article in this newsletter (at least up to a certain date; it doesn’t automatically update) along with their summaries, so you may prefer to use that to search past issues of the newsletter instead of the [spreadsheet I maintain](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit#gid=0)."], "venue": "Alignment Forum", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #162", "newsletter_category": "News"}
{"id": "c79717954845808738e582a8f1f9824f", "title": "DeepMind hiring Research Scientist, Safety", "url": "https://deepmind.com/careers/979620/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["Career opportunity!"], "venue": "DeepMind Website", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "News"}
{"id": "64c9ef7ec3040cbd789c8917bf48af98", "title": "BERI seeking new university collaborators", "url": "http://existence.org/new-collaborator-applications", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Sawyer Bernath"], "summaries": ["[BERI](http://existence.org/faq) is expanding its offerings to provide free services to a wider set of university-affiliated groups and projects, and they’re now accepting applications from groups and individuals interested in receiving their support. If you’re a member of a research group, or an individual researcher, working on long-termist projects, you can [apply here](http://existence.org/apply)."], "venue": "BERI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #99", "newsletter_category": "News"}
{"id": "5598bad7d1bf62852fea931a39b1cec9", "title": "FHI Summer Research Fellowship", "url": "https://www.fhi.ox.ac.uk/summer-research-fellowship/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["This six week summer fellowship allows fellows to take the lead on a project relevant to the long-term future, working with an FHI Research Scholar. Application deadline is March 22."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #89", "newsletter_category": "News"}
{"id": "8ab26b798727f557a68e7e61198eddaa", "title": "Microsoft invests in and partners with OpenAI to support us building beneficial AGI", "url": "https://openai.com/blog/microsoft/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Greg Brockman"], "summaries": ["After moving to a <@capped-profit investment model@>(@OpenAI LP@), Microsoft has invested $1 billion in OpenAI. This allows OpenAI to keep their focus on developing and sharing beneficial AGI: instead of having to create a product to cover costs, they can license their pre-AGI technologies, likely through Microsoft."], "venue": "OpenAI Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #61", "newsletter_category": "News"}
{"id": "d91fa7a4114f619918ef01b569c91390", "title": "Funding for Study and Training Related to AI Policy Careers", "url": "https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Open Philanthropy Project has launched an AI policy scholarships program; the deadline for the first round is October 15."], "venue": "Open Phil Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #59", "newsletter_category": "News"}
{"id": "50ef2740b2d51f10384a5b60c4d7f34a", "title": "SafeML Workshop: Accepted Papers", "url": "https://sites.google.com/view/safeml-iclr2019/accepted-papers", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The camera-ready papers from the SafeML workshop are now available! There are a lot of good papers on robustness, adversarial examples, and more that will likely never make it into this newsletter (there's only so much I can read and summarize), so I encourage you to browse through it yourself."], "venue": "ICLR", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #54", "newsletter_category": "News"}
{"id": "ddfe63ec5b1497a2962b4b67e3696b9b", "title": "OpenAI LP", "url": "https://openai.com/blog/openai-lp/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["OpenAI"], "summaries": ["OpenAI is transitioning to a new structure, consisting of a capped-profit company (OpenAI LP) controlled by the original OpenAI nonprofit organisation. The nonprofit is still dedicated to its charter, which OpenAI LP has a legal duty to prioritise. All investors must agree that generating profits for them is a secondary goal, and that their overall returns will be capped at 100x their investment (with any excess going back to the nonprofit)."], "venue": "OpenAI Blog", "opinion": "Given the high cost of salaries and compute for machine learning research, I don't find this a particularly surprising development. I'd also note that, in the context of investing in a startup, a 100x return over a timeframe of decades is not actually that high.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #52", "newsletter_category": "News"}
{"id": "480733b6fa79a0365df7b16533e4b799", "title": "Q&A with Jason Matheny, Founding Director of CSET", "url": "https://www.georgetown.edu/news/q-and-a-with-cset-founding-director-jason-matheny", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jason Matheny"], "summaries": ["The [Center for Security and Emerging Technology](https://cset.georgetown.edu/) has been announced, with a [$55 million grant from the Open Philanthropy Project](https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology), and is [hiring](https://cset.georgetown.edu/careers/). While the center will work on emerging technologies generally, it will initially focus on AI, since demand for AI policy analysis has far outpaced supply.\n\nOne area of focus is the implications of AI on national and international security. Current AI systems are brittle and can easily be fooled, implying several safety and security challenges. What are these challenges, and how important are they? How can we make systems that are more robust and mitigate these problems?\n\nAnother area is how to enable effective competition on AI in a global environment, while also cooperating on issues of safety, security and ethics? This will likely require measurement of investment flows, publications, data and hardware across countries, as well as management of talent and knowledge workflows.\n\nSee also [Import AI](https://jack-clark.net/2019/03/04/import-ai-136-what-machine-learning-power-infrastructure-means-for-humanity-new-gca-benchmarkdataset-challenges-image-captioning-systems-and-google-uses-frankenrl-to-create-more-mobile-robot/)."], "venue": "Georgetown University Website", "opinion": "It's great to see a center for AI policy that's run by a person who has wanted to consume AI policy analysis in the past (Jason Matheny was previously the director of IARPA). It's interesting to see the areas he focuses on in this Q&A -- it's not what I would have expected given my very little knowledge of AI policy.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #48", "newsletter_category": "News"}
{"id": "72eadefeb5b20554a561fb4a9ca245de", "title": "Governance of AI Fellowship", "url": "https://www.fhi.ox.ac.uk/govai-fellowship/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Markus Anderljung"], "summaries": ["The Center for the Governance of AI is looking for a few fellows to work for around 3 months on AI governance research. They expect that fellows will be at the level of PhD students or postdocs, though there are no strict requirements. The first round application deadline is Feb 28, and the second round application deadline is Mar 28."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #45", "newsletter_category": "News"}
{"id": "66423d154b05203c1000661e588dd6d0", "title": "SafeML ICLR 2019 Call for Papers", "url": "https://sites.google.com/view/safeml-iclr2019/cfp", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Victoria Krakovna"], "summaries": ["The SafeML workshop has a paper submission deadline of Feb 22, and is looking for papers on specification, robustness and assurance (based on [Building safe artificial intelligence: specification, robustness, and assurance](https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1) ([AN #26](https://mailchi.mp/1ecd1b775703/alignment-newsletter-26)))."], "venue": "ICLR 2019", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #43", "newsletter_category": "News"}
{"id": "05e4166f94418cbf74d6977b6aebaf89", "title": "Olsson to Join the Open Philanthropy Project", "url": "https://twitter.com/catherineols/status/1085702568494301185", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["Catherine Olsson, a researcher at Google Brain who was previously at OpenAI, will be joining the Open Philanthropy Project to focus on grant making for reducing x-risk from advanced AI. Given her first-hand research experience, she has knowledge of the dynamics of research groups and a nuanced understanding of various safety subproblems. Congratulations to both her and OpenPhil."], "venue": "Twitter", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #42", "newsletter_category": "News"}
{"id": "99d4b9a6683facaa93a51d73ecc4296c", "title": "GovAI Summer 2022 Fellowships", "url": "https://www.governance.ai/opportunities/fellowships", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["Applications are now open for the GovAI 2022 Summer Fellowship! This is an opportunity for early-career individuals to spend three months working on an AI governance research project, learning about the field, and making connections with other researchers and practitioners. Application deadline is Jan 1."], "venue": "GovAI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #170", "newsletter_category": "News"}
{"id": "46be8f847814b0a8ad0c2eabd9ddaf47", "title": "Foundations of Cooperative AI Lab", "url": "https://www.andrew.cmu.edu/user/coesterh/FOCAL/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["This new lab at CMU aims to create foundations of game theory appropriate for advanced, autonomous AI agents -- think of work on agent foundations and <@cooperative AI@>(@Open Problems in Cooperative AI@). Apply for a PhD [here](https://csd.cmu.edu/academics/doctoral/admissions) (deadline Dec 9) or for a postdoc [here](https://apply.interfolio.com/98450)."], "venue": "FOCAL Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #170", "newsletter_category": "News"}
{"id": "b0814075c98935f80c1ea1b8d6f0051b", "title": "OpenAI hiring Software Engineer, Alignment", "url": "https://boards.greenhouse.io/openai/jobs/4143981004?gh_src=4972b51c4us", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["Exactly what it sounds like: OpenAI is hiring a software engineer to work with the Alignment team."], "venue": "OpenAI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #169", "newsletter_category": "News"}
{"id": "85bbd3c083c6cb714f534955d53db179", "title": "CHAI Internships 2022", "url": "https://humancompatible.ai/jobs", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["CHAI internships are open once again! Typically, an intern will execute on an AI safety research project proposed by their mentor, resulting in a first-author publication at a workshop. The early deadline is November 23rd and the regular deadline is December 13th."], "venue": "CHAI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #167", "newsletter_category": "News"}
{"id": "a73cfb4a2f69dd83e7d391cdda5ceddf", "title": "Announcing the Vitalik Buterin Fellowships in AI Existential Safety", "url": "https://grants.futureoflife.org/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Daniel Filan"], "summaries": ["FLI is launching a fellowship for incoming PhD students and postdocs who are focused on AI existential safety. The application deadline is October 29 for the PhD fellowship, and November 5 for the postdoc fellowship."], "venue": "FLI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #165", "newsletter_category": "News"}
{"id": "1fadb7b2d191d8b06af8c9ffa318f303", "title": "The Open Phil AI Fellowship (Year 5)", "url": "https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["Applications are now open for the fifth cohort of the <@Open Phil AI Fellowship@>! They are also due October 29."], "venue": "Open Philanthropy Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #165", "newsletter_category": "News"}
{"id": "6ff47c1b13f7a8a40531f9571a8b1232", "title": "Research Scientist, Long-term Strategy & Governance ", "url": "https://deepmind.com/careers/jobs/3402612", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["DeepMind (my employer) is hiring for several Research Scientist positions on the Long-term Strategy and Governance Team, across a wide range of backgrounds and skills. (Though note that you do need a PhD, or equivalent experience.) See also this [EA Forum post](https://forum.effectivealtruism.org/posts/atbonGDAFegfeDbTF/deepmind-is-hiring-long-term-strategy-and-governance)."], "venue": "DeepMind Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #164", "newsletter_category": "News"}
{"id": "a77afd1dc8013e1b127c8973cf2a9a79", "title": "Cooperative AI Workshop 2021", "url": "https://www.cooperativeai.com/neurips-2021/workshop-information", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The <@Cooperative AI@>(@Open Problems in Cooperative AI@) <@NeurIPS workshop@>(@Cooperative AI Workshop@) is running again this year! The paper submission deadline is September 25."], "venue": "NeurIPS 2021", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #161", "newsletter_category": "News"}
{"id": "38b9e75872517a42beb65d4602a67851", "title": "You can now apply to EA Funds anytime! (LTFF & EAIF only)", "url": "https://forum.effectivealtruism.org/posts/oz4ZWh6xpgFheJror/you-can-now-apply-to-ea-funds-anytime-ltff-and-eaif-only", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Jonas Vollmer"], "summaries": ["The Long-Term Future Fund (LTFF) has funding available for people working on AI alignment. I’m told that the LTFF is constrained by high-quality applications, and that applying only takes a few hours, so it is probably best to err on the side of applying. The LTFF has removed its previous round-based system and now accepts applications anytime."], "venue": "EA Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #153", "newsletter_category": "News"}
{"id": "084e77ac9341f2f9f5b19058d73dac52", "title": "Stanford Existential Risks Conference", "url": "https://www.sericonference.org/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["SERI"], "summaries": ["This conference on existential risks will run April 17-18. Applications to attend close April 12. There will be no charge to attend the conference."], "venue": "SERC Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #144", "newsletter_category": "News"}
{"id": "363a3088e2a20f97dc8c3409e7cc02d8", "title": "Research Engineer, Safety (OpenAI)", "url": "https://jobs.lever.co/openai/2cbafe18-54f7-43c1-b306-9877b36efb44", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Applied Safety team at OpenAI is looking to hire a research engineer, and explicitly states that the job is about safety of general-purpose AI systems (as opposed to narrow AI systems like autonomous vehicles)."], "venue": "Lever", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #144", "newsletter_category": "News"}
{"id": "107819fce0e71441b03af1b6036e33ca", "title": "Chinese translation of Human Compatible", "url": "https://www.sohu.com/a/427998491_464088.", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Chinese translation of <@Human Compatible@>(@Human Compatible: Artificial Intelligence and the Problem of Control@) came out in October and the first chapter is [here](https://cread.jd.com/read/startRead.action?bookId=30675029&readType=1)."], "venue": "Sohu", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #143", "newsletter_category": "News"}
{"id": "9cc31031088f1f43112fc21b36e66014", "title": "DPhil Scholarships Applications Open", "url": "https://www.fhi.ox.ac.uk/dphil-scholarships-applications-open/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Ben Gable"], "summaries": ["FHI will be awarding up to six scholarships for the 2021/22 academic year for DPhil students starting at the University of Oxford whose research aims to answer crucial questions for improving the long-term prospects of humanity. Applications are due Feb 14."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #134", "newsletter_category": "News"}
{"id": "32e23659595f0f6cf979b1ae924d30bc", "title": "CHAI Internship", "url": "https://humancompatible.ai/jobs", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Martin Fukui"], "summaries": ["<@CHAI internships@>(@CHAI 2020 Internships@) are open once again! The deadline for applications is December 13."], "venue": "CHAI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #126", "newsletter_category": "News"}
{"id": "26e6a94761fc976bef49a94c349d8f2d", "title": "The Open Phil AI Fellowship", "url": "https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["We’re now at the fourth cohort of the <@Open Phil AI Fellowship@>! Applications are due October 22."], "venue": "Open Philanthropy Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #118", "newsletter_category": "News"}
{"id": "737fbb1f3546ee7a501ffe6aff053e58", "title": "Navigating the Broader Impacts of AI Research", "url": "https://nbiair.com/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["This is a workshop at NeurIPS; the title tells you exactly what it's about. The deadline to submit is October 12."], "venue": "NBIAIR Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #118", "newsletter_category": "News"}
{"id": "c878144dbdf25d8573c80e5195000dd6", "title": "FHI is hiring Researchers, Research Fellows, and Senior Research Fellows", "url": "https://www.fhi.ox.ac.uk/researcher-hiring-2020/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Anne Le Roux"], "summaries": ["FHI is hiring for researchers across a wide variety of topics, including technical AI safety research and AI governance. The application deadline is October 19."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #117", "newsletter_category": "News"}
{"id": "4eca1babae6ab982fc78ceb089c19a75", "title": "Cooperative AI Workshop", "url": "https://www.cooperativeai.com/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["This NeurIPS workshop has the goal of improving the _cooperation_ skills of AI systems (whether with humans or other machines), which encompasses a _very_ wide range of research topics. The deadline to submit is September 18."], "venue": "NeurIPS 2020", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #116", "newsletter_category": "News"}
{"id": "72940727125e865ff6e24e0752aaee95", "title": "Senior Systems Safety Engineer", "url": "https://jobs.lever.co/openai/994b4b81-d2ef-4d74-ae80-5cdb9b6e2dfa", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["OpenAI"], "summaries": ["OpenAI is hiring for a senior systems safety engineer. From my read of the job description, it seems like the goal is to apply the principles from <@Engineering a Safer World@> to AI development."], "venue": "Lever Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #116", "newsletter_category": "News"}
{"id": "4c66f9aad71eceffc1709def54636828", "title": "Early-career funding for individuals interested in improving the long-term future", "url": "https://www.openphilanthropy.org/focus/other-areas/early-career-funding-individuals-interested-improving-long-term-future?fbclid=IwAR3bA_4piJVHwSREGaH6g0O3CReNw3SlLNpd7jMAQTygSeMrkwyRfoPRbcA", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["This Open Philanthropy program aims to provide support for people who want to focus on improving the long-term future. The primary form of support would be funding for graduate school, though other one-off activities that build career capital also count. They explicitly say that people interested in working on AI policy or risks from transformative AI should apply to this program (possibly in addition to their <@AI fellowship@>(@Open Phil AI Fellowship@)). The stage 1 deadline is January 1, but if you submit earlier they aim to respond within 10 working days."], "venue": "Open Philanthropy", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #116", "newsletter_category": "News"}
{"id": "4e4d849b52ba1d8088c510741243eba6", "title": "OpenAI API", "url": "https://openai.com/blog/openai-api/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["OpenAI"], "summaries": ["OpenAI has released a commercial API that gives access to natural language completions via <@GPT-3@>(@Language Models are Few-Shot Learners@), allowing users to specify tasks in English that GPT-3 can then (hopefully) solve."], "venue": "OpenAI Blog", "opinion": "This is notable since this is (to my knowledge) OpenAI’s first commercial application.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #104", "newsletter_category": "News"}
{"id": "db1566bd42a8810dbd71b779a820268a", "title": "Worldbuilding Contest", "url": "https://worldbuild.ai/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["FLI invites individuals and teams to compete for a prize purse worth $100,000+ by designing visions of a plausible, aspirational future including artificial general intelligence. The deadline for submissions is April 15."], "venue": "", "opinion": "", "highlight": false, "read_more": "FLI launches Worldbuilding Contest with $100,000 in prizes", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "News"}
{"id": "028bfffea5b945a74601cf2654cb9625", "title": "CLR Open Positions: Researchers and Summer Research Fellows", "url": "https://longtermrisk.org/work-with-us/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Center on Long-Term Risk is looking for researchers and summer research fellows to work on high-quality research relevant to s-risks, including on (among other areas) multiagent systems. The application deadline is May 13."], "venue": "CLR Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #98", "newsletter_category": "News"}
{"id": "8b8c584a677ddc31054083904a9fcf82", "title": "Careers at the Joint AI Center", "url": "https://www.ai.mil/careers.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Joint AI Center is searching for ML experts for a variety of roles."], "venue": "JAIC Website", "opinion": "You might be wondering why I've included these jobs in the newsletter, given that I don't do very many promotions. I think that it is reasonably likely that the US government (and the military in particular) will be a key player in the future of AI, and that there could be a lot to learn from their testing, evaluation, validation & verification (TEV&V) framework (which often seems more risk-averse to me than many alignment schemes are). As a result, I would be excited if readers of this newsletter interested in how the military thinks about AI filled these positions: it seems great to have a flow of ideas between the two communities (so that the government learns about alignment concerns, and so that we learn about TEV&V).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #92", "newsletter_category": "News"}
{"id": "aa4b46a155c474127ad9f360a8ae5368", "title": "TAISU - Technical AI Safety Unconference", "url": "https://www.lesswrong.com/events/BPTzfeQeZZ6chHvtr/taisu-technical-ai-safety-unconference-1", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Linda Linsefors"], "summaries": ["This unconference on technical AI safety will be held May 14th-17th; application deadline is February 23."], "venue": "LessWrong", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #86", "newsletter_category": "News"}
{"id": "6b3b564482f6939a1fe9aa22b8268998", "title": "AI Alignment Visiting Fellowship", "url": "https://www.fhi.ox.ac.uk/fellows/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["This fellowship would support 2-3 applicants to visit FHI for three or more months to work on human-aligned AI. The application deadline is Feb 28."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #86", "newsletter_category": "News"}
{"id": "1219d15bb03a45ba71179d3490d602b9", "title": "AI Safety Unconference 2019", "url": "https://aisafetyunconference.info/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["David Krueger", "Orpheus Lummis", "and Gretchen Krueger"], "summaries": ["Like last year, there will be an AI safety unconference alongside NeurIPS, on Monday Dec 9 from 10am to 6pm. While the website suggests a registration deadline of Nov 25, the organizers have told me it's a soft deadline, but you probably should [register](https://docs.google.com/forms/d/e/1FAIpQLSfOXyo2P0Wv6bxyNyzzdMnzSL8_wGa4pMIDTh1tQwYeZmMebw/viewform) now to secure a place."], "venue": "", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #75", "newsletter_category": "News"}
{"id": "a93432fd1e4a793abd8274c3352bd389", "title": "CHAI 2020 Internships", "url": "https://humancompatible.ai/jobs", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["CHAI (the lab where I work) is currently accepting applications for its 2020 internship program. The deadline to apply is **Dec 15**."], "venue": "CHAI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #74", "newsletter_category": "News"}
{"id": "a2503ce4e363065fdd7a2764e8b14255", "title": "FHI DPhil Scholarships", "url": "https://www.fhi.ox.ac.uk/scholarships/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Future of Humanity Institute will be awarding up to two DPhil scholarships for the 2020/21 academic year, open to students beginning a DPhil at the University of Oxford whose research aims to answer crucial questions for improving the long-term prospects of humanity. Applications will open around January or February, and decisions will be made in April."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #70", "newsletter_category": "News"}
{"id": "3aa1d336d1fc407e8a13e88f23c62d32", "title": "Open Phil AI Fellowship", "url": "https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Open Phil AI Fellowship is seeking applications for its third cohort. Applications are due by October 25. The fellowship is open to current and incoming PhD students, including those with pre-existing funding sources. It provides up to 5 years of support with a stipend of $40,000 and a travel allocation of $10,000."], "venue": "Open Phil Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #66", "newsletter_category": "News"}
{"id": "abb664fcdab0898e89a01452e0538309", "title": "Join our rapidly growing research teams", "url": "https://www.fhi.ox.ac.uk/researcher-positions/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Tanya Singh"], "summaries": ["The Future of Humanity Institute is hiring researchers across a wide range of topics, including AI safety and strategy. The deadline to apply is midday August 16."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #60", "newsletter_category": "News"}
{"id": "2995bd53a9fed8ddf6218c67ff08f474", "title": "Offer of collaboration and/or mentorship", "url": "https://www.alignmentforum.org/posts/bSWavBThj6ebB62gD/offer-of-collaboration-and-or-mentorship", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Vanessa Kosoy"], "summaries": ["This is exactly what it sounds like. You can find out more about Vanessa's research agenda from <@The Learning-Theoretic AI Alignment Research Agenda@>, and I've summarized two of her recent posts in this newsletter."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #57", "newsletter_category": "News"}
{"id": "58e34f65f99162dd85f23100691a23e2", "title": "Human-aligned AI Summer School", "url": "http://humanaligned.ai/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jan Kulveit", "Tomáš Gavenčiak", "Jan Romportl"], "summaries": ["The second Human-aligned AI Summer School will be held in Prague from July 25-28, with a focus on \"optimization and decision-making\". Applications are due June 15."], "venue": "Human-aligned AI Summer School Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #57", "newsletter_category": "News"}
{"id": "42a02531b19ae994b0e49cd3f554f8a2", "title": "AI Safety workshop at IJCAI 2019", "url": "https://www.ai-safety.org/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Huáscar Espinoza", "Han Yu", "Xiaowei Huang", "Freddy Lecue", "Cynthia Chen", "José Hernández-Orallo", "Seán Ó hÉigeartaigh", "and Richard Mallah"], "summaries": ["There will be a workshop on AI safety at IJCAI 2019 in Macao, China; the paper submission deadline is April 12. In addition to the standard submissions (technical papers, proposals for technical talks, and position papers), they are seeking papers for their \"AI safety landscape\" initiative, which aims to build a single document identifying the core knowledge and needs of the AI safety community."], "venue": "AI Safety Workshop website", "opinion": "", "highlight": false, "read_more": "EasyChair website", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #49", "newsletter_category": "News"}
{"id": "6eb1044ba8081dfb02bc7bbfd5593d1f", "title": "MIRI Summer Fellows Program", "url": "http://rationality.org/workshops/apply-msfp", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Colm Ó Riain"], "summaries": ["CFAR and MIRI are running the MIRI Summer Fellows Program from August 9-24. Applications are due March 31."], "venue": "CFAR Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #47", "newsletter_category": "News"}
{"id": "be8e2259e640afe7a99cc08e7e62f450", "title": "FHI DPhil Scholarships", "url": "http://www.fhi.ox.ac.uk/scholarships/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Rose Hadshar"], "summaries": ["The Future of Humanity Institute is accepting applications for scholarships for candidates beginning a DPhil programme."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #46", "newsletter_category": "News"}
{"id": "67d25d59e30d69ca3f9fd36fb6910524", "title": "PAI Fellowship Program Call For Applications", "url": "https://www.partnershiponai.org/call-for-applications-the-partnership-on-ai-fellowship-program/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Partnership on AI is opening applications for Research Fellows who will \"conduct groundbreaking multi-disciplinary research\"."], "venue": "PAI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #44", "newsletter_category": "News"}
{"id": "5da9601ee6ae7e17a489cfe7f64f0b1c", "title": "Summit on Machine Learning meets Formal Methods", "url": "http://www.floc2018.org/summit-on-machine-learning/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["This is a one-day summit on July 13 that is part of the Federated Logic Conference. This seems like an unusually good venue to think about how to apply formal methods to AI systems -- in particular I'm impressed by the list of speakers, which includes a variety of experts in both fields."], "venue": "FLOC Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #4", "newsletter_category": "News"}
{"id": "f8b4449f31903d166358373ee1b565bc", "title": "The Open Philanthropy Project AI Fellows Program", "url": "https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-philanthropy-project-ai-fellows-program", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The second Open Phil AI Fellowship has been announced, open to AI and ML PhD students, and people applying to PhD programs starting in 2019. Even if you aren't looking for a fellowship, you may want to read through their example research topics, which are split into three main categories -- reward learning, reliability, and interpretability."], "venue": "Open Phil website", "opinion": "I like the breakdown of research topics, though personally I would have made them broader. I think I would want the \"reward learning\" category to include anything that aims to provide a specification for the AIs behavior, such as natural language instructions (that are mapped directly to policies with no reward in between). The \"reliability\" section is then about successfully meeting the specification, while the third section would include anything that allows the operator to empirically verify whether and enforce that the previous two sections are working correctly, including \"interpretability\". Actually, having written this, it's pretty similar to the three categories in the DeepMind blog post covered in the highlights.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #26", "newsletter_category": "News"}
{"id": "4ce91c6f7ad58c1dd050fb0c51099e4d", "title": "80,000 Hours Job Board: AI/ML safety research", "url": "https://80000hours.org/job-board/ai-ml-safety-research/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["80,000 Hours recently updated their job board, including the section on technical safety research. The [AI strategy and governance](https://80000hours.org/job-board/ai-strategy-governance/) section is probably also of interest."], "venue": "80,000 Hours", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #22", "newsletter_category": "News"}
{"id": "3a5a38c81f0331a586263874f0c66de0", "title": "DeepMind job: Science Writer", "url": "https://deepmind.com/careers/1294000/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["According to the job listing, the role would involve creating content for the blog, videos, presentations, events, etc. and would require a reasonably technical background and strong writing skills. Vishal Maini at DeepMind notes that this person would likely have a significant impact on how AI research is communicated to various key strategic audiences around the world -- from the technical community to the broader public -- and would spend some of their time engaging with AI alignment research, among other areas."], "venue": "DeepMind", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #21", "newsletter_category": "News"}
{"id": "2f1290b4605a39d05d8a1497b69f34d8", "title": "Internship: The Future Society", "url": "https://www.facebook.com/groups/AISafetyCareers/permalink/1083973108416655/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Caroline Jeanmaire"], "summaries": ["An internship which will focus on AI policy research as well as support to organize two large AI governance events. To apply, send a CV and a short letter explaining ‘why you?’ to caroline.jeanmaire@thefuturesociety.org."], "venue": "Facebook", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #21", "newsletter_category": "News"}
{"id": "05434a642bd5a09addf87fd2819520cf", "title": "Public reports are now optional for EA Funds grantees", "url": "https://forum.effectivealtruism.org/posts/LKdtHdETxSYAXwoW6/public-reports-are-now-optional-for-ea-funds-grantees", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Asya Bergal and Jonas Vollmer"], "summaries": ["This is your regular reminder that you can apply to the Long-Term Future Fund (and the broader EA Funds) for funding for a wide variety of projects. They have now removed the requirement for public reporting of your grant. They encourage you to apply if you have a preference for private funding."], "venue": "EA Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #170", "newsletter_category": "News"}
{"id": "6add225a65320f9e97d90c081ff59e5b", "title": "$2 Million Donated to Keep Artificial General Intelligence Beneficial and Robust", "url": "https://futureoflife.org/2018/07/25/2-million-donated-to-keep-artificial-general-intelligence-beneficial-and-robust/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Ariel Conn"], "summaries": ["The next round of FLI grants have been announced! There are fewer grants than in their [first round](https://futureoflife.org/first-ai-grant-recipients/) and the topics seem more focused on AGI safety."], "venue": "FLI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #17", "newsletter_category": "News"}
{"id": "33350cd6272e3b62b12795e1eb908441", "title": "AI Safety Camp Virtual 2022", "url": "http://www.aisafety.camp/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["Applications are open for this remote research program, where people from various disciplines come together to research an open problem under the mentorship of an established AI-alignment researcher. Deadline to apply is December 1st."], "venue": "AI Safety Camp Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #169", "newsletter_category": "News"}
{"id": "5ada876ac119cff7ae6441cf6e68426d", "title": "Q&A Panel on Applying for Grad School", "url": "https://www.aisafetysupport.org/events/grad", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["In this event run by AI Safety Support on November 7, current PhD students will share their experiences navigating the application process and AI Safety research in academia. RSVP [here](evt.to/dhaimisw)."], "venue": "AI Safety Support", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #168", "newsletter_category": "News"}
{"id": "bc010be881c42861afbb6ffb1d2f1f20", "title": "SafeAI Workshop 2022", "url": "https://safeai.webs.upv.es/index.php/submissions/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The SafeAI workshop at AAAI is now accepting paper submissions, with a deadline of Nov 12."], "venue": "SafeAI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #168", "newsletter_category": "News"}
{"id": "45d201b1f642b2e2b208dba3d4174a59", "title": "FLI's $25M Grants Program for Existential Risk Reduction", "url": "https://www.youtube.com/watch?v=JPcPayJiWm8", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["This podcast talks about FLI's recent grants program for x-risk reduction. I've previously mentioned the <@fellowships@>(@Announcing the Vitalik Buterin Fellowships in AI Existential Safety@) they are running as part of this program. As a reminder, the application deadline is October 29 for the PhD fellowship, and November 5 for the postdoc fellowship."], "venue": "Lucas Perry and Max Tegmark", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #168", "newsletter_category": "News"}
{"id": "9afc0bc3c00ba907ed6360aa98d08184", "title": "[Job ad] Research important longtermist topics at Rethink Priorities!", "url": "https://forum.effectivealtruism.org/posts/3vXXthjBKhNo8sgFv/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Linch Zhang"], "summaries": ["Of particular interest to readers, there are roles available in AI governance and strategy. The application deadline is Oct 24."], "venue": "EA Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #166", "newsletter_category": "News"}
{"id": "ccacb52dbc35606de01e70a76f5bb525", "title": "2022 IEEE Conference on Assured Autonomy", "url": "https://iaa.jhu.edu/icaa/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The ICAA conference seeks contributions on all aspects of AI safety, security, and privacy in autonomous systems. The paper submission deadline is October 18 and the conference itself will take place March 22-24."], "venue": "Author's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #164", "newsletter_category": "News"}
{"id": "efde2772e87fb8b399b052d34e8a56f2", "title": "CSER Job Posting: Academic Programme Manager", "url": "https://www.jobs.cam.ac.uk/job/31055/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["CSER is searching for a candidate for a relatively senior role that combines academic, management and administrative responsibilities. The application deadline is September 20."], "venue": "Cambridge Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #164", "newsletter_category": "News"}
{"id": "25842811401d34959ccaa65d934cb6d2", "title": "NIST AI Risk Management Framework", "url": "https://www.nist.gov/itl/ai-risk-management-framework", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The National Institute of Standards and Technology (NIST) has put out a formal Request For Information (RFI) in the process of developing an AI Risk Management Framework that is intended for voluntary use in order to improve trustworthiness and mitigate risks of AI systems. According to the [legislative mandate](https://www.congress.gov/116/bills/hr6395/BILLS-116hr6395enr.pdf#page=1151), aspects of trustworthiness include “explainability, transparency, safety, privacy, security, robustness, fairness, bias, ethics, validation, verification, interpretability, and other properties related to artificial intelligence systems that are common across all sectors”. Multiple AI safety organizations are submitting responses to the RFI and would like additional AI safety researchers to engage with it. Responses are due September 15; if you'd like to help out, email Tony Barrett at tbambarrett@gmail.com."], "venue": "NIST Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #161", "newsletter_category": "News"}
{"id": "d91e6c756f86b7066bcaea4eab59cab1", "title": "Introducing the AI Objectives Institute", "url": "https://ai.objectives.institute/blog/ai-and-the-transformation-of-capitalism", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Peter Eckersley"], "summaries": ["For years people have been talking about corporations and capitalism as an example of superintelligence that we have failed to align so far. This new institute plans to take this correspondence seriously and transfer insights between the two. In particular, we can (a) examine how proposed problems with AI are already taking place with capitalism, (b) use tools and ideas from AI safety to improve upon capitalism, and (c) use lessons from capitalism to assist in the project of building a safely aligned AI."], "venue": "AI Objectives Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #160", "newsletter_category": "News"}
{"id": "e2792754119a1b9fbde0409f4601ec59", "title": "ML Engineer Position at Preamble", "url": "https://docs.google.com/document/d/1jr92v2Xt6znq6_otCXZ-5JyklofpjytR0N7v-08R76E/edit", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Dylan Hadfield-Menell"], "summaries": ["[Preamble](https://www.preamble.com/) is a seed-stage company aiming to build middleware for AI ethics and safety, with a current focus on recommender systems. They have an early prototype for Twitter users, implemented as a browser extension. They are currently trying to hire an ML engineer to push forward their work."], "venue": "Preamble Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #160", "newsletter_category": "News"}
{"id": "346a0973d3220610da946cd1c5f7a6ff", "title": "Ought's Progress Update July 2018", "url": "https://ought.org/blog/2018-07-16-progress-update", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Andreas Stuhlmüller"], "summaries": ["A lot of organizational updates that I won't summarize here. There's a retrospective about the Predicting Slow Judgments project, and some updates on the Factored Cognition project. Two particularly interesting points -- first, they have not yet run into questions where it seemed impossible to make progress by decomposing the problem, making them slightly more optimistic; and second, they are now more confident that decomposition will take a large amount of work, such that experiments will require some amount of automation using ML in order to be feasible."], "venue": "Ought Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "News"}
{"id": "8240436656116450028f14b5749613f2", "title": "Political Economy of Reinforcement Learning (PERLS) Workshop", "url": "https://perls-workshop.github.io/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Stuart Russell", "Thomas Gilbert", "Tom Zick", "Aaron Snoswell", "Michael Dennis"], "summaries": ["The deadline for submissions to this NeurIPS 2021 workshop is Sep 18. From the website: \"The aim of this workshop will be to establish a common language around the state of the art of RL across key societal domains. From this examination, we hope to identify specific interpretive gaps that can be elaborated or filled by members of our community. Our ultimate goal will be to map near-term societal concerns and indicate possible cross-disciplinary avenues towards addressing them.\""], "venue": "", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #159", "newsletter_category": "News"}
{"id": "d70df335fb29e03d55d22d947262f765", "title": "Building and Evaluating Ethical Robotic Systems", "url": "https://ers-workshop.com/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Justin Svegliato*", "Samer Nashed*", "Alan Winfield", "Dylan Hadfield-Menell", "Louise Dennis", "Paul Bello"], "summaries": ["This workshop at IROS 2021 asks for work on ethical robotic systems, including value alignment as a subtopic. Notably, they also welcome researchers from disciplines beyond robotics, including philosophy, psychology, sociology, and law. The paper submission deadline is August 13."], "venue": "ERS Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #158", "newsletter_category": "News"}
{"id": "d89476c582865799a6c790a80baa71d5", "title": "Open Philanthropy Technology Policy Fellowship", "url": "https://www.openphilanthropy.org/focus/global-catastrophic-risks/technology-policy-fellowship", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Luke Muehlhauser"], "summaries": ["Open Philanthropy is seeking applicants for a US policy fellowship program focused on high-priority emerging technologies, especially AI and biotechnology. Application deadline is September 15."], "venue": "Open Philanthropy", "opinion": "", "highlight": false, "read_more": "EA Forum post", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #157", "newsletter_category": "News"}
{"id": "c6cb6fd9f4dd5716d66177a42166e4c5", "title": "Hypermind forecasting contest on AI", "url": "https://mailchi.mp/hypermind/new-high-stakes-forecasting-challenge-will-ai-surprise-us?e=edb1b4a37c", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["Hypermind is running a forecasting contest on the evolution of artificial intelligence with a $30,000 prize over four years. The questions ask both about the growth of compute and about performance on specific benchmarks such as the <@MATH suite@>(@Measuring Mathematical Problem Solving With the MATH Dataset@)."], "venue": "Hypermind", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #155", "newsletter_category": "News"}
{"id": "8d091fa8f50bc67be36056f68d6a148e", "title": "Research Fellow- AI TEV&V", "url": "https://cset.georgetown.edu/job/research-fellow-tevv/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["CSET is currently seeking a Research Fellow to focus on the safety and risk of deployed AI systems."], "venue": "CSET Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #152", "newsletter_category": "News"}
{"id": "1d2d62de2dda799b316ab0e466cc50de", "title": "Deputy Director (CSER)", "url": "https://www.jobs.cam.ac.uk/job/29900/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Centre for the Study of Existential Risk (CSER) is looking to hire a Deputy Director."], "venue": "Cambridge Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #152", "newsletter_category": "News"}
{"id": "340193a277815f257b908e5109b59404", "title": "Open Call for Advisees and Collaborators, May 2021", "url": "http://gcrinstitute.org/open-call-for-advisees-and-collaborators-may-2021/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["GCRI Website"], "summaries": ["GCRI is open to inquiries from potential collaborators or advisees, regardless of background, career point, or geographic location, about any aspect of global catastrophic risk. Participation can consist of a short email exchange to more extensive project work."], "venue": "McKenna Fitzgerald", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #151", "newsletter_category": "News"}
{"id": "77241f7f63a9e71549e6e8f373abea3a", "title": "BERI Seeking New University Collaborators", "url": "https://existence.org/2021/04/29/new-university-collaborators.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Sawyer Bernath"], "summaries": ["[BERI](https://existence.org/faq) is seeking applications for new collaborators. They offer free services to university groups. If you’re a member of a research group, or an individual researcher, working on long-termist projects, you can [apply here](http://existence.org/apply). Applications are due June 20th."], "venue": "BERI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #149", "newsletter_category": "News"}
{"id": "fd8468a5b9e672b28172ce421e2c584a", "title": "FLI Job Postings", "url": "https://futureoflife.org/job-postings/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The [Future of Life Institute](https://futureoflife.org/) has 3 new job postings for full-time equivalent remote policy focused positions. They're looking for a Director of European Policy, a Policy Advocate, and a Policy Researcher, all primarily focused on AI policy and governance. Additional policy areas of interest may include lethal autonomous weapons, synthetic biology, nuclear weapons policy, and the management of existential and global catastrophic risk. Applications are accepted on a rolling basis."], "venue": "FLI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #146", "newsletter_category": "News"}
{"id": "63beed9ef779c9b08713b73d67d1a2c2", "title": "Postdoc role at CHAI", "url": "https://humancompatible.ai/jobs", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["CHAI"], "summaries": ["The Center for Human-Compatible AI (where I did my PhD) is looking for postdocs. Apply [here](https://forms.gle/8w9Jfjr3X86osAvTA)."], "venue": "CHAI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #138", "newsletter_category": "News"}
{"id": "63aa4ac4ed1fc2bb204600667482ae44", "title": "Apply to EA Funds now", "url": "https://forum.effectivealtruism.org/posts/NfkdSooNiHcdCBSJs/apply-to-ea-funds-now-1", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Jonas Vollmer"], "summaries": ["EA Funds applications are open until the deadline of March 7. This includes the Long-Term Future Fund (LTFF), which often provides grants to people working on AI alignment. I’m told that LTFF is constrained by high-quality applications, and that applying only takes a few hours, so it is probably best to err on the side of applying."], "venue": "EA Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #138", "newsletter_category": "News"}
{"id": "d0f66a065069e6d665ba4fb04aa10f4c", "title": "AISU 2021", "url": "https://www.aisu.io/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The third AI safety unconference will take place online from April 23rd to April 28th, 2021. The registration deadline is April 13th."], "venue": "AISU Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #136", "newsletter_category": "News"}
{"id": "b5fdf86409f7306c1280196f048895e6", "title": "MIT Postdoc Role", "url": "https://www.researchgate.net/job/947492_MIT_Postdoc-Economics_Computer_Science_AI", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["[Neil Thompson](http://www.neil-t.com/), who works on forecasting progress in AI (see for example [The Computational Limits of Deep Learning](https://arxiv.org/abs/2007.05558)), is looking for a postdoc in economics and computer science to (1) understand the key innovation trends in computing and artificial intelligence, and (2) analyze the economic and policy implications of these trends. The application deadline is Jan 3."], "venue": "ResearchGate", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #129", "newsletter_category": "News"}
{"id": "8aab0ed3f172a24b20190ee9cdaf7ac4", "title": "Metaculus AI Progress Tournament", "url": "https://www.metaculus.com/ai-progress-tournament/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["Metaculus is running an AI forecasting tournament, with up to $50,000 in prizes. The tournament starts December 14, and will continue till around mid-June, and will involve forecasting targets on a 6-24 month timescale. You can pre-register to forecast now."], "venue": "Metaculus", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #128", "newsletter_category": "News"}
{"id": "1f81f2f494bcb3dd4be038d775f6eb00", "title": "AI Safety Camp virtual edition 2021", "url": "https://aisafety.camp/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Remmelt Ellen", "Rebecca Baron", "Richard Möhn", "Max Chiswick", "Nicholas Goldowsky-Dill"], "summaries": ["The second virtual AI Safety Camp will take place over the first half of 2021. Applications will close on December 15."], "venue": "Author's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #126", "newsletter_category": "News"}
{"id": "562003a736a47eed1677a4e7657b580e", "title": "PhD Studentships in Safe and Trusted Artificial Intelligence", "url": "https://safeandtrustedai.org/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence is offering 12 fully funded PhD Studentships. They focus on the use of symbolic AI techniques for ensuring the safety and trustworthiness of AI systems. There are multiple application periods; the application deadline for the first round is November 22."], "venue": "Safe and Trusted AI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #122", "newsletter_category": "News"}
{"id": "bb5a6f702f6b0c01f78899d160ab368b", "title": "OpenAI Licenses GPT-3 Technology to Microsoft", "url": "https://openai.com/blog/openai-licenses-gpt-3-technology-to-microsoft/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["In the <@initial announcement of Microsoft’s investment in OpenAI@>(@Microsoft invests in and partners with OpenAI to support us building beneficial AGI@), OpenAI suggested that they would likely license pre-AGI technologies to Microsoft in order to get enough capital to run high-compute experiments. This has now happened with the <@GPT-3 API@>(@OpenAI API@)."], "venue": "OpenAI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #119", "newsletter_category": "News"}
{"id": "d6537c37a6eab8818c2068d44362b2c2", "title": "AI Governance Project Manager", "url": "https://www.fhi.ox.ac.uk/ai-governance-project-manager/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Markus Anderljung"], "summaries": ["The Centre for the Governance of AI is hiring for a project manager role. The deadline to apply is September 30."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #115", "newsletter_category": "News"}
{"id": "e517387086a253d69c506d216dd45fc7", "title": "FHI Research Scholars Programme -- Applications Open", "url": "https://www.fhi.ox.ac.uk/fhis-research-scholars-programme-applications-open/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Anne Le Roux"], "summaries": ["The Future of Humanity Institute’s Research Scholars Programme is hiring a second cohort of research scholars, likely to start in Spring 2021. The application deadline is September 14."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #113", "newsletter_category": "News"}
{"id": "649fc7091b43101efb0c2309f92e0892", "title": "Research Scholars Programme", "url": "https://www.fhi.ox.ac.uk/rsp/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["From the website: \"The Future of Humanity Institute is launching a Research Scholars Programme, likely to start in October 2018. It is a selective, two-year research programme, with lots of latitude for exploration as well as significant training and support elements. We will offer around six salaried positions to early-career researchers who aim to answer questions that shed light on the big-picture questions critical to humanity’s wellbeing. We are collecting formal applications to the programme from now until 11 July, 2018.\""], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #11", "newsletter_category": "News"}
{"id": "433b1b0c2b38a55898d21f7d2742a0fc", "title": "Announcing the 2018 AI Fellows", "url": "https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/announcing-2018-ai-fellows", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Open Philanthropy Project has chosen seven out of 180 applicants as the first class of AI fellows."], "venue": "Open Philanthropy Project website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #10", "newsletter_category": "News"}
{"id": "ba7c9821e664d527a4ef89d282bf51be", "title": "OpenAI Fellows—Fall 2018", "url": "https://blog.openai.com/openai-fellows/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Larissa Schiavo", "Igor Mordatch", "and John Schulman"], "summaries": ["The OpenAI Fellows program is accepting applications until July 8 for positions starting in September. The program is aimed at people who want to transition into doing AI research, but they do want evidence of interest in AI, either through past projects or self-study."], "venue": "OpenAI Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #10", "newsletter_category": "News"}
{"id": "be5110e7786fcc6a3a09412a0add14bc", "title": "New Seminar Series and Call For Proposals On Cooperative AI", "url": "https://www.cooperativeai.com/seminars", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2022-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Cooperative AI Foundation (CAIF) will be hosting a new fortnightly seminar series in which leading thinkers offer their vision for research on Cooperative AI. The first talk, 'AI Agents May Cooperate Better If They Don’t Resemble Us’, was given on Thursday (Jan 20) by Vincent Conitzer (Duke University, University of Oxford). You can find more details and submit a proposal for the seminar series [here](https://www.cooperativeai.com/seminars)."], "venue": "CAIF Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "News"}
{"id": "438b0f674cd511cea53e4271c04f7547", "title": "AI Risk Management Framework Concept Paper", "url": "https://www.nist.gov/system/files/documents/2021/12/14/AI%20RMF%20Concept%20Paper_13Dec2021_posted.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["After their <@Request For Information last year@>(@NIST AI Risk Management Framework@), NIST has now posted a concept paper detailing their current thinking around the AI Risk Management Framework that they are creating, and are soliciting comments by Jan 25. As before, if you're interested in helping with a response, email Tony Barrett at anthony.barrett@berkeley.edu."], "venue": "", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "News"}
{"id": "8c7d109ab1e631184641047a9fe953e4", "title": "Junior Research Assistant and Project Manager role at GCRI", "url": "http://gcrinstitute.org/job-posting-junior-research-assistant-and-project-manager/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["This job is available immediately, and could be full-time or part-time. GCRI also currently has a [call](http://gcrinstitute.org/call-for-advisees-and-collaborators-for-select-ai-projects-january-2020/) for advisees and collaborators."], "venue": "GCRI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #85", "newsletter_category": "News"}
{"id": "77e759dc3236558ecd6cf3cf93228d71", "title": "AI Safety Camp Toronto", "url": "https://aisafetycamp.com/ai-safety-camp-toronto/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The next <@AI safety camp@>(@The first AI Safety Camp and onwards@) will be held in early May, in Toronto. Apply [here](https://docs.google.com/forms/d/e/1FAIpQLSfZdu--EII061-KwWDSK6hZ5rtLCpBarKszw9btMs1dO1NOFA/viewform) by Jan 5."], "venue": "AI Safety Camp Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #77", "newsletter_category": "News"}
{"id": "745bd3459f81db0a683ea8baf15a1c48", "title": "Post-Doctoral Fellowship on Ethically Aligned Artificial Intelligence", "url": "https://mila.quebec/en/2019/06/call-for-application-post-doctoral-fellowship-on-ethically-aligned-artificial-intelligence/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["Mila is looking for a postdoctoral fellow starting in Fall 2020 who would work on ethically aligned learning machines, towards building machines which can achieve specific goals while acting in a way consistent with human values and social norms. Applications are already being processed, and will continue to be processed until the position is filled."], "venue": "MILA Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #70", "newsletter_category": "News"}
{"id": "37ad929f25fab21828cd7edaad0837c8", "title": "Research Associate in Paradigms of Artificial General Intelligence and Their Associated Risk", "url": "https://www.cser.ac.uk/about-us/careers/research-associate-paradigms-artificial-general-in/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["José Hernández-Orallo"], "summaries": ["CSER is hiring a post-doctoral research assistant to inform the AGI safety agenda by looking at existing and possible kinds of agents; the deadline is August 26."], "venue": "CSER Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #61", "newsletter_category": "News"}
{"id": "2f7ea0da801d0b9b0d95bdd258ef581d", "title": "Research Scholars Project Coordinator", "url": "https://www.fhi.ox.ac.uk/project-coordinator/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Rose Hadshar"], "summaries": ["FHI is looking to hire a coordinator for the Research Scholars Programme. Application deadline is July 10."], "venue": "FHI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #59", "newsletter_category": "News"}
{"id": "d4b49a98d2da5c9b5e0a52b0ce5be77b", "title": "Open Phil AI Fellowship — 2019 Class", "url": "https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-class", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Open Phil AI Fellows for this year have been announced! Congratulations to all of the fellows :)"], "venue": "Open Phil Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #57", "newsletter_category": "News"}
{"id": "28fe596a9bbcd0c8d768a5085349b48e", "title": "AAAS Policy Fellowship", "url": "https://80000hours.org/2018/09/aaas-science-technology-policy-fellowship/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Niel Bowerman"], "summaries": ["The AAAS Science & Technology Fellowship is open to Americans with a science PhD or 3 years of industry experience and a CS Masters. 80,000 Hours thinks this is one of the best ways into US Government AI policy careers. Application deadline is Nov 1."], "venue": "80,000 Hours", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #26", "newsletter_category": "News"}
{"id": "4c1f89a1bf3cf2284077cad778d1294d", "title": "BERI hiring ML Software Engineer", "url": "https://existence.org/jobs/ml-engineer-seldonian", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Sawyer Bernath"], "summaries": ["BERI is hiring a remote ML Engineer as part of their collaboration with the [Autonomous Learning Lab](https://all.cs.umass.edu/) at UMass Amherst. The goal is to create a software library that enables easy deployment of the ALL's Seldonian algorithm framework for safe and aligned AI."], "venue": "BERI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #169", "newsletter_category": "News"}
{"id": "63601a22006243f50e36c749bd6cb995", "title": "Political Economy of Reinforcement Learning schedule", "url": "https://perls-workshop.github.io/schedule.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The date for the <@PERLS workshop@>(@Political Economy of Reinforcement Learning (PERLS) Workshop@) at NeurIPS has been set for December 14, and the schedule and speaker list are now available on the website."], "venue": "PERLS Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #169", "newsletter_category": "News"}
{"id": "ecb457f9ee731c33ece8db7eba0626ec", "title": "Survey: classifying AI systems used in response to the COVID-19 pandemic", "url": "https://www.oecd.ai/wonk/pandemic", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Samuel Curtis", "Adriana Bora", "Nicolas Miailhe", "Rui Wang"], "summaries": ["A team at The Future Society aims to build a living database of AI systems used to respond to COVID, classified using the [OECD framework](https://www.oecd.ai/classification). I think this is an interesting example of building capacity for effective AI governance. If you were involved in developing an AI system used in the COVID response, they ask that you take [this survey](https://survey.oecd.org/index.php?r=survey/index&sid=145657&lang=en) by August 2nd."], "venue": "OECD AI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #158", "newsletter_category": "News"}
{"id": "8b14e3ce93511bf106548c6e91361149", "title": "AI Safety Career Bottlenecks Survey", "url": "https://www.guidedtrack.com/programs/n8cydtu/run", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["AI Safety Support"], "summaries": ["[AI Safety Support](https://www.aisafetysupport.org/home) have released a career bottlenecks survey that they will use to guide their work. You can take the survey [here](https://www.guidedtrack.com/programs/n8cydtu/run)."], "venue": "AISS Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #136", "newsletter_category": "News"}
{"id": "e5e36066029d137ac90334f75f140a07", "title": "Formal Methods for the Informal Engineer", "url": "https://fmie2021.github.io/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Gopal Sarma", "Jimmy Koppel", "Ramana Kumar", "Eric Drexler", "Patrick Schultz", "Gregory Malecha"], "summaries": ["This online workshop will teach engineers how to use verification tools like Z3 and Coq, and then discuss how formal verification can be applied in many different areas of software engineering (including robust machine learning). The organizers tell me they plan to produce a white-paper with high-level recommendations following the workshop. You can register [here](https://docs.google.com/forms/d/e/1FAIpQLSf2SVrwHFz-5oDj3LT9ks5moUjF6VFzNrUfnTGYshDw2XYnfg/viewform)."], "venue": "FMIE Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #130", "newsletter_category": "News"}
{"id": "6f2ce07eeae8cd5a1fe22e2364aff299", "title": "S-Risk Intro Seminar", "url": "https://longtermrisk.org/intro-seminar/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Stefan Torges"], "summaries": ["The first intro seminar to s-risks will take place on the weekend of February 20 & 21, 2021. It is targeted at people who are at least seriously thinking about addressing s-risks as part of their career, and who have not yet spent a lot of time interacting with the Center on Long-Term Risk."], "venue": "CLR Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #129", "newsletter_category": "News"}
{"id": "c91d0cd70ebc82e33a684917ad90d6fc", "title": "Action: Help expand funding for AI Safety by coordinating on NSF response", "url": "https://www.lesswrong.com/posts/vq6ztCgFczuH53f4Y/action-help-expand-funding-for-ai-safety-by-coordinating-on", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2022-01-01T00:00:00Z", "authors": ["Evan R. Murphy"], "summaries": ["The National Science Foundation (NSF) has put out a Request for Information relating to topics they will be funding in 2023 as part of their NSF Convergence Accelerator program. The author and others are coordinating responses to increase funding to AI safety, and ask that you fill out this [short form](https://airtable.com/shrk0bAxm0EeJbyPC) if you are willing to help out with a few small, simple actions."], "venue": "LessWrong", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "News"}
{"id": "907e1beac0d22c8d709549f166ec92df", "title": "Researcher / Writer job", "url": "https://www.convergenceanalysis.org/get-involved/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["This full-time researcher / writer position would involve half the time working with [Convergence](https://www.convergenceanalysis.org/) on x-risk strategy research and the other half with [Normative](https://normative.io/) on environmental and climate change analysis documents."], "venue": "Convergence Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #73", "newsletter_category": "News"}
{"id": "8416899bc2b6c948ec9c711cb7d18d55", "title": "Self-driving cars are here", "url": "https://medium.com/@andrewng/self-driving-cars-are-here-aea1752b1ad0", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Andrew Ng"], "summaries": ["Drive.ai will offer a self-driving car service for public use in Frisco, Texas starting in July, 2018. The post goes into details of how the cars will be rolled out, and some plans for how to make them easier for humans to interact with."], "venue": "Medium", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #6", "newsletter_category": "News"}
{"id": "52ac933924b56a5ba984a5696a864d4a", "title": "Facebook Open Sources ELF OpenGo", "url": "https://research.fb.com/facebook-open-sources-elf-opengo/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Yuandong Tian and Larry Zitnick"], "summaries": ["Facebook has created an open-source AI bot that has beaten world champion professional Go players in matches where the professional player was allowed unlimited time to think."], "venue": "Facebook Research", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #5", "newsletter_category": "News"}
{"id": "c0cf916fb77dc79d871cc6ac4e6935ea", "title": "Introducing Stanford's Human-Centered AI Initiative", "url": "https://news.stanford.edu/2019/03/18/stanford_university_launches_human-centered_ai/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Fei-Fei Li and John Etchemendy"], "summaries": ["Stanford will house the Human-centered AI Initiative (HAI), which will take a multidisciplinary approach to understand how to develop and deploy AI so that it is robustly beneficial to humanity."], "venue": "HAI website", "opinion": "It's always hard to tell from these announcements what exactly the initiative will do, but it seems to be focused on making sure that AI does not make humans obsolete. Instead, AI should allow us to focus more on the creative, emotional work that we are better at. Given this, it's probably not going to focus on AI alignment, unlike the similarly named Center for Human-Compatible AI (CHAI) at Berkeley. My main question for the author would be what she would do if we could develop AI systems that could replace all human labor (including creative and emotional work). Should we not develop such AI systems? Is it never going to happen?", "highlight": false, "read_more": "How to Make A.I. That’s Good for People", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #29", "newsletter_category": "News"}
{"id": "f2d95e5833d57cb47f09b0c666548424", "title": "Human-Aligned AI Summer School: A Summary", "url": "https://www.lesswrong.com/posts/bXLi3n2jrfqRwoSTH/human-aligned-ai-summer-school-a-summary", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Michaël Trazzi"], "summaries": ["A summary of the talks at the summer school that just happened, from one of the attendees, that covers value learning, agent foundations, bounded rationality, and side effects. Most of the cited papers have been covered in this newsletter, with the notable exceptions of Bayesian IRL and information-theoretic bounded rationality."], "venue": "LessWrong", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #19", "newsletter_category": "News"}
{"id": "5eeec61767ae202fb2939d68d04eb7fd", "title": "Conference on Fairness, Accountability, and Transparency (FAT*)", "url": "https://fatconference.org/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["... will be held early 2019 in Atlanta, Georgia. Abstract pre-registration deadline is August 16."], "venue": "FAT* Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "News"}
{"id": "199904cea44947b644c8290d94de20be", "title": "RAISE is hiring", "url": "http://aisafety.camp/2018/07/11/were-looking-for-full-time-content-developers/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Toon"], "summaries": ["... for full-time content developers, to work at the [EA Hotel](http://effective-altruism.com/ea/1pc/ea_hotel_with_free_accommodation_and_board_for/) in Blackpool."], "venue": "RAISE Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "News"}
{"id": "94bb6c4733c9242a781691f9e78c0933", "title": "Announcing the second AI Safety Camp", "url": "http://effective-altruism.com/ea/1px/announcing_the_second_ai_safety_camp/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Anne Wissemann"], "summaries": ["I forgot to mention last week that the second AI safety camp will be held Oct 4-14 in Prague."], "venue": "EA Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #11", "newsletter_category": "News"}
{"id": "565d73d6eb885827f1cddf7a8ebefde9", "title": "The first AI Safety Camp and onwards", "url": "https://aisafetycamp.com/2018/06/06/the-first-ai-safety-camp-onwards/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Remmelt Ellen and Linda Linsefors"], "summaries": ["The first AI safety camp was held in April, in which people interested in AI safety gathered to work on research within groups. Everyone prepared for the camp over the six weeks leading up to it, and then spent 10 days focusing on a particular research question. There were five teams of around four people, and each team wrote up some notes on the results of their project at the end of the camp."], "venue": "AI Safety Camp Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #10", "newsletter_category": "News"}
{"id": "f642a8df951b5dea8c6e5226941c5d8e", "title": "BERI Project Grants Program", "url": "http://existence.org/project-grants-1/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Rebecca Raible"], "summaries": ["BERI is offering grants of up to $300,000 per year for work relating to their mission, with the application deadline of June 30. In their words, \"We are open to any ideas you have, as long as you can explain how the project will contribute to improving human civilization’s long-term prospects for survival and flourishing.\""], "venue": "BERI website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #10", "newsletter_category": "News"}
{"id": "190fc04e440b78f1ac4760f8c301bf82", "title": "Our essay competitions for young people", "url": "https://www.economist.com/open-future/2018/04/16/our-essay-competitions-for-young-people", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["There is an essay competition for people between 16 and 25 years old, where one of the topics is \"Do the benefits of artificial intelligence outweigh the risks?\" Winning essays will be published on The Economist’s Open Future website and the author will be invited to attend one of the three Open Future Festival events. The deadline is July 15th."], "venue": "The Economist", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #10", "newsletter_category": "News"}
{"id": "b1aafa36554094f8eec682a617b15d55", "title": "Reframing Impact - Part 2", "url": "https://www.alignmentforum.org/s/7CdoznhJaLEKHwvJW", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Alex Turner"], "summaries": ["In <@part 1@>(@Reframing Impact - Part 1@) of this sequence, we saw that an event is _impactful_ if it _changes our ability to get what we want_. This part takes this understanding and applies it to AI alignment.\n\nIn the real world, there are many events that cause _objective_ negative impacts: they reduce your ability to pursue nearly any goal. An asteroid impact that destroys the Earth is going to be pretty bad for you, whether you want to promote human flourishing or to make paperclips. Conversely, there are many plans that produce objective positive impacts: for many potential goals, it's probably a good idea to earn a bunch of money, or to learn a lot about the world, or to command a perfectly loyal army. This is particularly exacerbated when the environment contains multiple agents: for goals that benefit from having more resources, it is objectively bad for you if a different agent seizes your resources, and objectively good for you if you seize other agents' resources.\n\nBased on this intuitive (but certainly not ironclad) argument, we get the **Catastrophic Convergence Conjecture (CCC)**: \"Unaligned goals tend to have catastrophe-inducing optimal policies because of power-seeking incentives\".\n\nLet's now consider a _conceptual_ version of <@Attainable Utility Preservation (AUP)@>(@Towards a New Impact Measure@): the agent optimizes a primary (possibly unaligned) goal, but is penalized for changing its \"power\" (in the intuitive sense). Intuitively, such an agent no longer has power-seeking incentives, and so (by the [contrapositive](https://en.wikipedia.org/wiki/Contraposition) of the CCC) it will not have a catastrophe-inducing optimal policy -- exactly what we want! This conceptual version of AUP also avoids thorny problems such as ontology identification and butterfly effects, because the agent need only reason about its own beliefs, rather than having to reason directly about the external world."], "venue": "Alignment Forum", "opinion": "This was my favorite part of the sequence, as it explains the conceptual case for AUP clearly and concisely. I especially liked the CCC: I believe that we should be primarily aiming to prevent an AI system \"intentionally\" causing catastrophe, while not attempting to guarantee an absence of \"accidental\" mistakes (<@1@>(@Clarifying \"AI Alignment\"@), <@2@>(@Techniques for optimizing worst-case performance@)), and the CCC is one way of cashing out this intuition. It's a more crisp version of the idea that [convergent instrumental subgoals](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf) are in some sense the \"source\" of AI accident risk, and if we can avoid instrumental subgoals we will probably have solved AI safety.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #91", "newsletter_category": "Preventing bad behavior"}
{"id": "a8c422acc4404f0865854797719d92d2", "title": "Reframing Impact - Part 3", "url": "https://www.alignmentforum.org/s/7CdoznhJaLEKHwvJW", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Alex Turner"], "summaries": ["The final section of the sequence turns to an actual implementation of AUP, and deals with problems in how the implementation deviates from the conceptual version of AUP. We measure power by considering a set of auxiliary rewards, and measuring the change in attainable utilities of this auxiliary set as impact, and penalizing the agent for that. The first post presents some empirical results, many of which <@we've covered before@>(@Penalizing Impact via Attainable Utility Preservation@), but I wanted to note the new results on [SafeLife](https://www.partnershiponai.org/safelife/) (summarized below). On the high-dimensional world of SafeLife, the authors train a VAE to find a good latent representation, and choose a single linear reward function on the latent representation as their auxiliary reward function: it turns out this is enough to avoid side effects in at least some cases of SafeLife.\n\nWe then look at some improvements that can be made to the original AUP implementation. First, according to CCC, we only need to penalize _power_, not _impact_: as a result we can just penalize _increases_ in attainable utilities, rather than both increases and decreases as in the original version. Second, the auxiliary set of rewards only provides a _proxy_ for impact / power, which an optimal agent could game (for example, by [creating subagents](https://www.alignmentforum.org/posts/mdQEraEZQLg7jtozn/subagents-and-impact-measures-full-and-fully-illustrated), summarized below). So instead, we can penalize increases in attainable utility for the _primary_ goal, rather than using auxiliary rewards. There are some other improvements that I won't go into here."], "venue": "Alignment Forum", "opinion": "I think the plan \"ensure that the AI systems we build don't seek power\" is pretty reasonable and plausibly will be an important part of AI alignment. However, the implementation of AUP is trying to do this under the threat model of optimal agents with potentially unaligned primary goals. I think this is probably going to do something quite different from the conceptual version of AUP, because impact (as defined in this sequence) occurs only when the agent's beliefs _change_, which doesn't happen for optimal agents in deterministic environments. The current implementation of AUP tries to get around this using proxies for power (but these can be gamed) or by defining \"dumber\" beliefs against which power is measured (but this fails to leverage the AI system's understanding of the world). See [this comment](https://www.alignmentforum.org/posts/wAAvP8RG6EwzCvHJy/reasons-for-excitement-about-impact-of-impact-measure?commentId=s48grPhMbuBEXNtyc) for more details.\n\nNote that the author himself is more [excited](https://www.alignmentforum.org/s/7CdoznhJaLEKHwvJW/p/wAAvP8RG6EwzCvHJy) about AUP as deconfusion, rather than as a solution to AI alignment, though he is more optimistic about the implementation of AUP than I am.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #91", "newsletter_category": "Preventing bad behavior"}
{"id": "e3aef1d7e19e96c6ae8798f2bd5068a1", "title": "Introducing SafeLife: Safety Benchmarks for Reinforcement Learning", "url": "https://www.partnershiponai.org/safelife/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Carroll Wainwright", "Peter Eckersley"], "summaries": ["So far, techniques to avoid negative side effects have only been tested on <@simple@>(@Measuring and avoiding side effects using relative reachability@) <@gridworlds@>(@Penalizing Impact via Attainable Utility Preservation@) <@or@>(@Learning Preferences by Looking at the World@) <@hypotheticals@>(@Test Cases for Impact Regularisation Methods@). SafeLife aims to provide a high-dimensional environment in which negative side effects are likely. It is based on Conway's Game of Life, which allows for complex effects arising out of relatively simple rules. An agent is given the ability to move, create life in an adjacent cell, or destroy life in an adjacent cell. With the specified reward function, the agent must build desired patterns, remove undesired patterns, and navigate to the exit.\n\nThe challenge comes when there are additional \"neutral\" patterns in the environment. In this case, we want the agent to leave those patterns alone, and not disrupt them, even if doing so would allow it to complete the main task faster. The post shows several examples of agents attempting these levels. Vanilla RL agents don't avoid side effects at all, and so unsurprisingly they do quite badly. An agent with a naive impact measure that simply says to preserve the initial state can correctly solve levels where all of the \"neutral\" patterns are static, but has much more trouble when the existing patterns are dynamic (i.e. they oscillate over time)."], "venue": "2019 NeurIPS Safety and Robustness in Decision Making Workshop", "opinion": "I am a big fan of benchmarks; they seem to be a prerequisite to making a lot of quantitative progress (as opposed to more conceptual progress, which seems more possible to do without benchmarks). This benchmark seems particularly nice to me because the \"side effects\" which need to be avoided haven't been handcoded into the benchmark, but instead arise from some simple rules that produce complex effects.", "highlight": true, "read_more": "Paper: SafeLife 1.0: Exploring Side Effects in Complex Environments", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #91", "newsletter_category": "Preventing bad behavior"}
{"id": "affbbddff778d3ed6847b6dc572ab652", "title": "Safety Gym", "url": "https://openai.com/blog/safety-gym/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Alex Ray*", "Joshua Achiam*", "Dario Amodei"], "summaries": ["Safety gym contains a set of tasks with varying difficulty and complexity focused on safe exploration. In the tasks, one of three simulated robots has to move to a series of goals, push buttons or move a box to a target location, while avoiding costs incurred by hitting randomized obstacles. This is formalized as a **constrained reinforcement learning** problem: in addition to maximizing the received reward, agents also have to respect constraints on a **safety cost function**. For example, we would like self-driving cars to learn how to navigate from A to B as quickly as possible while respecting traffic regulations and safety standards. While this could in principle be solved by adding the safety cost as a penalty to the reward, constrained RL gets around the need to correctly quantify tradeoffs between safety and performance. \n\nMeasures of safety are expected to become important criteria for evaluating algorithms' performance and the paper provides first benchmarks. Constrained policy optimization, a trust-region algorithm that tries to prevent updates from breaking the constraint on the cost is compared to new lagrangian versions of TRPO/PPO that try to maximize the reward, minus an adaptive factor times the cost above the threshold. Interestingly, the lagrangian methods incur a lot less safety cost during training than CPO and satisfy constraints more reliably at evaluation. This comes at the cost of reduced reward. For some of the tasks, none of the tested algorithms is able to gain nontrivial rewards while also satisfying the constraints. \n\nLastly, the authors propose to use safety gym for investigating methods for learning cost functions from human inputs, which is important since misspecified costs could fail to prevent unsafe behaviour, and for transfer learning of constrained behaviour, which could help to deal with distributional shifts more safely."], "venue": "OpenAI Blog", "opinion": "I am quite excited about safety gym. I expect that the crisp formalization, as well as the availability of benchmarks and ready-made environments, combined with OpenAI's prestige, will facilitate broader engagement of the ML community with this branch of safe exploration. As pointed out in the paper, switching from standard to constrained RL could merely shift the burden of correct specification from the reward to the cost and it is not obvious whether that helps with alignment. Still, I am somewhat optimistic because it seems like humans often think in terms of constrained and fuzzy optimization problems rather than specific tradeoffs and constrained RL might capture our intuitions better than pure reward maximization. Lastly, I am curious whether an increased focus on constrained RL will provide us with more concrete examples of \"nearest unblocked strategy\" failures, as the rising popularity of RL arguably did with more general examples of specification gaming.\n\n**Rohin's opinion:** Note that at initialization, the policy doesn't \"know\" about the constraints, and so it must violate constraints during exploration in order to figure out what the constraints even are. As a result, in this framework we could never get down to zero violations. 
A zero-violations guarantee would require some other source of information, typically some sort of overseer (see <@delegative RL@>(@Delegative Reinforcement Learning@), [avoiding catastrophes via human intervention](https://arxiv.org/abs/1707.05173), and [shielding](https://arxiv.org/abs/1708.08611)).\n\nIt's unclear to me how much this matters for long-term safety, though: usually I'm worried about an AI system that is plotting against us (because it has different goals than we do), as opposed to one that doesn't know what we don't want it to do.", "highlight": true, "read_more": "Github repo", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #76", "newsletter_category": "Preventing bad behavior"}
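The Lagrangian recipe mentioned in the summary (maximize reward minus an adaptive factor times the cost above the threshold) can be sketched as follows. This is an illustration of the general idea under assumed names and a simple multiplier update, not the Safety Gym benchmark code.

```python
from typing import List, Tuple


def lagrangian_objective(episode_return: float, episode_cost: float,
                         cost_limit: float, lam: float) -> float:
    """Objective the policy ascends: reward minus lam times the cost above the limit."""
    return episode_return - lam * (episode_cost - cost_limit)


def update_multiplier(lam: float, episode_cost: float, cost_limit: float,
                      lr: float = 0.05) -> float:
    """Adapt the multiplier: it grows while the constraint is violated and shrinks
    (but never below zero) once the cost is under the limit."""
    return max(0.0, lam + lr * (episode_cost - cost_limit))


# Usage sketch with made-up numbers: after each batch of episodes, update the policy
# to increase `lagrangian_objective`, then adjust the multiplier.
cost_limit, lam = 20.0, 0.0
batches: List[Tuple[float, float]] = [(10.0, 30.0), (9.0, 22.0), (8.5, 18.0)]
for episode_return, episode_cost in batches:
    objective = lagrangian_objective(episode_return, episode_cost, cost_limit, lam)
    lam = update_multiplier(lam, episode_cost, cost_limit)
```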
{"id": "440915a5202f2111a6608439a0fd30e9", "title": "Designing agent incentives to avoid reward tampering", "url": "https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Tom Everitt", "Ramana Kumar", "and Marcus Hutter"], "summaries": ["Reward tampering occurs when a reinforcement learning agent actively changes its reward function. The post uses <@Causal Influence Diagrams@>(@Modeling AGI Safety Frameworks with Causal Influence Diagrams@) to analyze the problem in a simple grid world where an agent can easily change the definition of its reward. The proposed solution is **current-RF optimization**: Instead of maximizing the sum of rewards that would be given after each action (where the reward signal can dynamically change over time), the agent searches for and executes a plan of actions that would maximize the current, unchanged reward signal. The agent would then not be incentivized to tamper with the reward function since the current reward is not maximized by such tampering. There are two different flavours to this: time-inconsistency-aware agents account for future changes in their own behaviour due to modified reward signals, while TI-unaware agents ignore this in their planning. TI-aware agents have an incentive to preserve their reward signal and are therefore potentially incorrigible. "], "venue": "DeepMind Safety Blog", "opinion": "I enjoyed this application of causal diagrams and think that similar detailed analyses of the interactions between failure modes like wireheading, instrumental goals like reward preservation and the specific implementation of an agent would be quite valuable. That said, I am less excited about the feasibility of the proposed solution since it seems to require detailed knowledge of the agent about counterfactual rewards. Also, I expect the distinction between changes in the reward signal and changes in the state that happen to also affect the reward to be very fuzzy in real problems and current-RF optimization seems to require a very sharp boundary.\n\n**Rohin's opinion:** I agree with Flo's opinion above, and I think the example in the blog post shows how the concept \"what affects the reward\" is fuzzy: in their gridworld inspired by the game Baba Is You, they say that moving the word \"Reward\" down to make rocks rewarding is \"tampering\", whereas I would have called that a perfectly legitimate way to play given my knowledge of Baba Is You.", "highlight": true, "read_more": "Paper: Reward Tampering Problems and Solutions in Reinforcement Learning: A Causal Influence Diagram Perspective", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #71", "newsletter_category": "Preventing bad behavior"}
{"id": "9a781545fe3131c76931a5947de6c65e", "title": "Reframing Impact - Part 1", "url": "https://www.alignmentforum.org/s/7CdoznhJaLEKHwvJW", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Alex Turner"], "summaries": ["_This sequence has exercises that **will be spoiled by this summary**, so take a moment to consider whether you want to read the sequence directly._\n\nThis first part of the sequence focuses on identifying what we mean by impact, presumably to help design an impact measure in the future. The punch line: an event is **impactful to an agent** if it changes the agent's **ability to get what it wants**. This is _Attainable Utility (AU) theory_. To quote the sequence: \"How could something possibly be a big deal to us if it _doesn't_ change our ability to get what we want? How could something _not_ matter to us if it _does_ change our ability to get what we want?\"\n\nSome implications and other ideas:\n- Impact is _relative to an agent_: a new church is more impactful if you are a Christian than if not.\n- Some impact is _objective_: getting money is impactful to almost any agent that knows what money is.\n- Impact is _relative to expectations_: A burglar robbing your home is impactful to you (you weren't expecting it) but not very impactful to the burglar (who had planned it out). However, if the burglar was unsure if the burglary would be successful, than success/failure would be impactful to them.\n\nWhile this may seem obvious, <@past work@>(@Measuring and avoiding side effects using relative reachability@) has talked about impact as being caused by changes in state. While of course any impact does involve a change in state, this is the wrong level of abstraction to reason about impact: fundamentally, impact is related to what we care about."], "venue": "Alignment Forum", "opinion": "To quote myself from a discussion with Alex, \"you're looking at the optimal Q-function for the optimal utility function and saying 'this is a good measure of what we care about' and of course I agree with that\". (Although this is a bit inaccurate -- it's not the optimal Q-function, but the Q-function relative to what we expect and know.)\n\nThis may be somewhat of a surprise, given that I've been [pessimistic](https://www.alignmentforum.org/posts/kCY9dYGLoThC3aG7w/best-reasons-for-pessimism-about-impact-of-impact-measures#qAy66Wza8csAqWxiB) about impact measures in the past. However, my position is that it's difficult to simultaneously get three desiderata: value-agnosticism, avoidance of catastrophes, and usefulness. This characterization of impact is very explicitly dependent on values, and so doesn't run afoul of that. (Also, it just makes intuitive sense.)\n\nThis part of the sequence did change some of my thinking on impact measures as well. In particular, the sequence makes a distinction between _objective_ impact, which applies to all (or most) agents, and _value_ impact. This is similar to the idea of [convergent instrumental subgoals](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf), and the idea that <@large-scale multiagent training@>(@Emergent Tool Use from Multi-Agent Interaction@) can lead to generally useful behaviors that can be applied to novel tasks. It seems plausible to me that we could make value-agnostic impact measures that primarily penalize this objective impact, and this might be enough to avoid catastrophes. 
This would prevent us from using AI for big, impactful tasks, but could allow for AI systems that pursue small, limited tasks. I suspect we'll see thoughts along these lines in the next parts of this sequence.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #68", "newsletter_category": "Preventing bad behavior"}
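The AU-theory definition above admits a one-line formalization: the impact of an event, for a particular agent, is the change in that agent's expected attainable utility as judged by its own beliefs before and after the event. All names in this sketch are illustrative.

```python
from typing import Callable

Beliefs = object  # the agent's beliefs about the world; the representation is left unspecified


def impact(attainable_utility: Callable[[Beliefs], float],
           beliefs_before: Beliefs,
           beliefs_after: Beliefs) -> float:
    """How big a deal an event is *to this agent*: the change in its ability to get
    what it wants, relative to what it expected before the event."""
    return abs(attainable_utility(beliefs_after) - attainable_utility(beliefs_before))
```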
{"id": "0359834e5c31eff9c6e0fbbc28b9f1d8", "title": "Asymptotically Benign AGI", "url": "https://www.alignmentforum.org/posts/pZhDWxDmwzuSwLjou/asymptotically-benign-agi", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Michael Cohen"], "summaries": ["I'm a bit out of my depth with this summary, but let's give it a shot anyway. The setting: we are not worried about how much compute we use (except that it should be finite), and we would like to build a powerful AI system that can help us with tasks but does not try to influence the world. We'll assume that we can construct a box from which no signals can pass through, except by pressing a specific button that opens a door.\n\nFirst, the simple version of BoMAI (Boxed Myopic AI). We'll put the AI system **and the operator** in the box, and the operator and the AI system can talk via text message, and the operator can enter rewards. Each episode has a maximum length (hence myopic), and if the operator ends the episode early, all future rewards are set to zero. BoMAI maximizes episodic reward in a manner similar to [AIXI](http://www.hutter1.net/ai/). It has a distribution (initially a speed prior) over all possible time-bounded Turing Machines as possible models that predict observations and rewards. BoMAI uses the maximum a posteriori (MAP) Turing Machine to predict future observations and rewards given actions, searches over all possible sequences of actions for the best one, and executes the first action of that sequence. (See [this comment](https://www.alignmentforum.org/posts/pZhDWxDmwzuSwLjou/asymptotically-benign-agi#ToDYj27wQZWHqWnH6) and its parents for the reason to use the MAP model.)\n\nIntuitively, BoMAI has no incentive to affect anything outside the box: information can only leave the box if the episode ends, but if that happens, reward is guaranteed to be exactly zero. It might deceive the operator into giving it high reward, but any such deception will only _incidentally_ affect the outside world, it hasn't been _optimized_ to cause any such effect. However, **this is all assuming that the MAP world model is accurate**, which may not be true.\n\nObviously at initialization our world model will not be accurate: we need to explore. Rather than using something like epsilon-exploration, BoMAI delegates to a human explorer for safety, and reduces exploration over time.\n\nMore insidiously, a world model could simulate the world for a year, and then report the value in the computer's memory for \"the reward from a year ago\". This will usually be a perfect prediction of the reward the operator will give, but if BoMAI tries to _optimize_ for it, it has an incentive to try to e.g. cause the operator to build an unboxed AI whose goal is to change BoMAI's memory cell to the maximum value.\n\nThese sorts of bad plans can only happen finitely often. If BoMAI executes such a plan, it will then get the actual reward from the operator, see that it differs from the predicted reward, and update against the world model for making a bad prediction. Eventually the correct model will become more likely. But ideally this sort of thing wouldn't happen at all.\n\nIntuitively, such world models would have to simulate the world past the end of the episode, incurring a speed penalty relative to the correct model. However, they could use that simulation to save on the computation needed for future predictions. 
To make sure that they do incur a speed penalty, we need to make it so that sometimes their simulations are wrong and can't be reused. So, at the end of each episode, we sample from a source of true randomness, and use that to determine whether or not the operator is given a drug that causes retrograde amnesia. In that case, world models that simulate the future will sometimes be wrong about how that decision plays out, and any simulation they did in the wrong branch is useless and leads to a speed penalty.\n\nWith a few non-trivial assumptions, we can prove that asymptotically, BoMAI will do at least as well as the human explorer at accumulating reward, and the MAP world model's rewards do not incentivize BoMAI to affect the outside world."], "venue": "Alignment Forum", "opinion": "I think the idea of putting the operator in the box with the AI system is very interesting: with previous attempts at boxing, the human operator talking to the AI system was an obvious glaring hole in the box. In this setting, the only information escaping from the box is the fact that the operator has not yet chosen to end the episode.\n\nI am generally skeptical of intuitive reasoning about what can or can't be done by Turing Machines using extreme amounts of computation. There are _lots_ of comments on the post that debate specifics of this. This usually cashes out as a debate about the assumptions in the proof. But it's also worth noting that the theorem is asymptotic, and allows for arbitrarily bad behavior early on. We might still expect good behavior early on for the reasons laid out in the proof, but it's not implied by the theorem, even if the assumptions hold.", "highlight": true, "read_more": "Paper: Asymptotically Unambitious Artificial General Intelligence", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #54", "newsletter_category": "Preventing bad behavior"}
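A heavily simplified sketch of the planning loop described above: pick the maximum a posteriori world model, exhaustively score every action sequence for the rest of the episode under that model, and execute the first action of the best sequence. The finite model class, tiny horizon, and model interface are illustrative assumptions; the actual construction uses a speed prior over time-bounded Turing machines.

```python
from itertools import product
from typing import Callable, Dict, Sequence, Tuple

# A "world model" here is just a function from an action history to the predicted
# reward after the latest action in that history.
WorldModel = Callable[[Sequence[int]], float]


def map_model(posterior: Dict[str, float], models: Dict[str, WorldModel]) -> WorldModel:
    """Select the maximum a posteriori world model."""
    return models[max(posterior, key=posterior.get)]


def bomai_first_action(model: WorldModel, actions: Sequence[int], horizon: int) -> int:
    """Search all action sequences up to the end of the episode (horizon >= 1) and
    return the first action of the sequence with the highest predicted episodic reward."""
    def episodic_reward(plan: Tuple[int, ...]) -> float:
        return sum(model(plan[:t + 1]) for t in range(len(plan)))
    best_plan = max(product(actions, repeat=horizon), key=episodic_reward)
    return best_plan[0]
```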
{"id": "4012b8f787c755560b5ea55e539e06ad", "title": "Safety-first AI for autonomous data centre cooling and industrial control", "url": "https://deepmind.com/blog/safety-first-ai-autonomous-data-centre-cooling-and-industrial-control/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Chris Gamble and Jim Gao"], "summaries": ["Two years ago, DeepMind built an AI recommendation system that provided suggestions on how best to cool Google's data centers, leading to efficiency gains. Nine months ago, the AI was given autonomous control to take actions directly, rather than going through human operators, and it has been improving ever since, going from 12% savings at deployment to 30% now.\n\nOf course, such a system must be made extremely reliable, since a failure could result in Google's data centers going down. They implemented several safety measures. They throw out any actions that the AI is not confident about. All actions are verified against a set of hand-coded safety rules, both when the actions are generated in the cloud, and at each local data center, for reliability through redundancy. There are human operators monitoring the AI to make sure nothing goes wrong, who can take over control whenever they want to. There is also an automated system that will fall back to the original system of heuristics and rules if the safety conditions are ever violated."], "venue": "DeepMind Blog", "opinion": "This is a remarkable number of safety precautions, though in hindsight it makes total sense given how bad a failure could be. None of the precautions would stop a superintelligent agent in the classical sense (that is, the sort of superintelligent agent in paperclip maximizer stories), but they seem like a really good set of precautions for anything task-based. I am curious how they chose the threshold for when to discard actions that the AI is not confident enough in (especially since AI uncertainty estimates are typically not calibrated), and how they developed the safety rules for verification (since that is a form of specification, which is often easy to get wrong).", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #21", "newsletter_category": "Preventing bad behavior"}
{"id": "05a4c58afbd6416807e457c07d57b660", "title": "Designing agent incentives to avoid side effects", "url": "https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Victoria Krakovna", "Ramana Kumar", "Laurent Orseau", "Alexander Turner"], "summaries": ["This blog post provides details about the recent update to the [relative reachability paper](https://arxiv.org/abs/1806.01186) ([AN #10](https://mailchi.mp/d1a19c140226/alignment-newsletter-10)), which is now more a paper about the design choices available with impact measures. There are three main axes that they identify:\n\nFirst, what baseline is impact measured relative to? A natural choice is to compare against the starting state, but this will penalize the agent for environment effects, such as apples growing on trees. We can instead compare against an inaction baseline, i.e. measuring impact relative to what would have happened if the agent did nothing. Unfortunately, this leads to offsetting behavior: the agent first makes a change to get reward, and then undoes the change in order to not be penalized for impact. This motivates the stepwise inaction baseline, which compares each action against what would have happened if the agent did nothing _from that step onwards_.\n\nSecond, we need a measure by which to compare states. The unreachability measure measures how hard it is to reach the baseline from the current state. However, this \"maxes out\" as soon as the baseline is unreachability, and so there is no incentive to avoid further irreversible actions. This motivates relative reachability, which computes the set of states reachable from the baseline, and measures what proportion of those states are reachable from the state created by the agent. [Attainable utility](https://www.alignmentforum.org/posts/yEa7kwoMpsBgaBCgb/towards-a-new-impact-measure) ([AN #25](https://mailchi.mp/0c5eeec28f75/alignment-newsletter-25)) generalizes this to talk about the _utility_ that could be achieved from the baseline for a wide range of utility functions. (This is equivalent to relative reachability when the utility functions are of the form \"1 if state s is ever encountered, else 0\".)\n\nFinally, we need to figure how to penalize changes in our chosen measure. Penalizing decreases in the measure allows us to penalize actions that make it harder to do things (what the AUP post calls \"opportunity cost\"), while penalizing increases in the measure allows us to penalize convergent instrumental subgoals (which almost by definition increase the ability to satisfy many different goals or reach many different states)."], "venue": "DeepMind Safety Blog", "opinion": "Since the AUP post was published about half a year ago, I've been watching this unification of AUP and relative reachability slowly take form, since they were phrased very differently initially. I'm glad to see this finally explained clearly and concisely, with experiments showing the effect of each choice. I do want to put special emphasis on the insight of AUP that the pursuit of convergent instrumental subgoals leads to large _increases_ in \"ability to do things\", and thus that penalizing increases can help avoid such subgoals. 
This point doesn't typically make it into the academic writings on the subject but seems quite important.\n\nOn the topic of impact measures, I'll repeat what I've said before: I think that it's hard to satisfy the conjunction of three desiderata -- objectivity (no dependence on human values), safety (preventing any catastrophic outcomes) and usefulness (the AI system is still able to do useful things). Impact measures are very clearly aiming for the first two criteria, but usually don't have much to say about the third one. My expectation is that there is a strong tradeoff between the first two criteria and the third one, and impact measures have not dealt with this fact yet, but will have to at some point.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #49", "newsletter_category": "Preventing bad behavior"}
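As a rough sketch of the "measure" axis above, the two deviation measures can be written as follows. The notation is illustrative rather than taken verbatim from the papers: R(s, x) is some reachability measure of state x from state s (e.g. discounted by shortest-path length), U is a set of auxiliary utility functions with attainable values V_u, s_t is the agent's state and s'_t the baseline state at time t.

```latex
% Relative reachability: average loss in reachability of states x,
% relative to the baseline (only decreases are penalized here):
d_{RR}(s_t, s'_t) = \frac{1}{|S|} \sum_{x \in S} \max\big( R(s'_t, x) - R(s_t, x),\; 0 \big)

% Attainable utility: average change in attainable value over the set U
% of auxiliary utility functions (penalizing both decreases and increases):
d_{AU}(s_t, s'_t) = \frac{1}{|U|} \sum_{u \in U} \big| V_u(s'_t) - V_u(s_t) \big|
```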
{"id": "141e7304d663b4438b7c7daad5759a20", "title": "Shielding Atari Games with Bounded Prescience", "url": "http://arxiv.org/abs/2101.08153", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Mirco Giacobbe", "Mohammadhosein Hasanbeig", "Daniel Kroening", "Hjalmar Wijk"], "summaries": ["In order to study agents trained for Atari, the authors write down several safety properties using the internals of the ALE simulator that agents should satisfy. They then test several agents trained with deep RL algorithms to see how well they perform on these safety properties. They find that the agents only successfully satisfy 4 out of their 43 properties all the time, whereas for 24 of the properties, all agents fail at least some of the time (and frequently they fail on every single rollout tested).\n\nThis even happens for some properties that should be easy to satisfy. For example, in the game Assault, the agent loses a life if its gun ever overheats, but avoiding this is trivial: just don’t use the gun when the display shows that the gun is about to overheat.\n\nThe authors implement a “bounded shielding” approach, which basically simulates actions up to N timesteps in the future, and then only takes actions from the ones that don’t lead to an unsafe state (if that is possible). With N = 1 this is enough to avoid the failure described above with Assault."], "venue": "arXiv", "opinion": "I liked the analysis of what safety properties agents failed to satisfy, and the fact that agents sometimes fail the “obvious” or “easy” safety properties suggests that the bounded shielding approach can actually be useful in practice. Nonetheless, I still prefer the approach of finding an <@inductive safety invariant@>(@Neurosymbolic Reinforcement Learning with Formally Verified Exploration@), as it provides a guarantee of safety throughout the episode, rather than only for the next N timesteps.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #138", "newsletter_category": "Preventing bad behavior"}
{"id": "bd05fec2d06f1a0a4bcc90c40c75cfb6", "title": "Learning to be Safe: Deep RL with a Safety Critic", "url": "http://arxiv.org/abs/2010.14603", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Krishnan Srinivasan", "Benjamin Eysenbach", "Sehoon Ha", "Jie Tan", "Chelsea Finn"], "summaries": ["While there has been a lot of work on verifying logical specifications and avoiding constraint violations, I’ve said before that the major challenge is in figuring out what specifications or constraints to use in the first place. This paper takes a stab at this problem by _learning_ safety constraints and transferring them to new situations.\n\nIn particular, we assume that we have some way of telling whether a given trajectory violates a constraint (e.g. a human looks at it and says whether or not a violation has happened). We also assume access to a safe environment in which constraint violations are acceptable. For example, for robots, our constraint could be that the robot never crashes, and in the safe environment the robot could be constrained to only move slowly, so that if they do crash there is no permanent damage. We then want to train the agent to perform some task in the true training environment (e.g. with no restrictions on speed), such that we avoid constraint violations with high probability _even during training_.\n\nThe key idea is to pretrain a _safety Q-function_ in the safe environment, that is, a function **Qsafe(s, a)** that specifies the probability of eventually violating a constraint if we take action **a** in state **s**. We have the agent choose actions that are estimated to be on the verge of being too risky, in order to optimize for getting more information about the constraints.\n\nOnce we have this safety Q-function, we can use it as a shield (<@1@>(@Neurosymbolic Reinforcement Learning with Formally Verified Exploration@), <@2@>(@Shielded Decision-Making in MDPs@)). Specifically, any actions whose risk is above some threshold ε have their probabilities set to zero. Using this shield, we can then train for the true task in the (unsafe) training environment using RL, while only behaving safely. Of course, this depends on the safety Q-function successfully generalizing to the new environment. We also add the safety Q-function as part of the RL objective to disincentivize constraint violations.\n\nTheir experiments show that this approach significantly reduces the number of constraint violations during training, though in absolute numbers there are often still hundreds of constraint violations (or about 1% of the number of training steps)."], "venue": "arXiv", "opinion": "I’m glad to see more work on this: robustness techniques seem particularly important to get working with learned specifications, and this paper (like the next one) takes a real shot at this goal. In some sense it isn’t that clear what we gain from an approach like this -- now, instead of requiring robustness from the agent, we require robustness from the safety Q-function (since we transfer it from the safe environment to the training environment). Nonetheless, we might hope the safety Q-function is easier to learn and more likely to transfer between the two environments, since it could be simpler than a full policy.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #126", "newsletter_category": "Preventing bad behavior"}
{"id": "cf4423798ce60c541a2ad6ffe718331a", "title": "Recovery RL: Safe Reinforcement Learning with Learned Recovery Zones", "url": "http://arxiv.org/abs/2010.15920", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Brijen Thananjeyan*", "Ashwin Balakrishna*", "Suraj Nair", "Michael Luo", "Krishnan Srinivasan", "Minho Hwang", "Joseph E. Gonzalez", "Julian Ibarz", "Chelsea Finn", "Ken Goldberg"], "summaries": ["This paper introduces Recovery RL, which tackles the same problem as the previous paper: given a dataset of constraint violations (presumably collected from some safe environment), train an agent to perform some task while exploring safely. Like the previous paper, it starts by learning a safety Q-function from the dataset of constraint violations.\n\nThe difference is in how this safety Q-function is used. The previous paper uses it as a shield on a policy, and also uses it in the RL objective to get a policy that is performant and safe. Recovery RL instead splits these into two separate policies: there is a task policy that only optimizes for performance, and a recovery policy that only optimizes for safety. During training, the safety Q-function is used to monitor the likelihood of a constraint violation, and if a violation is sufficiently likely, control is handed over to the recovery policy which then tries to make the probability of constraint violation as low as possible.\n\nExperiments show that the method performs significantly better than other baselines (including SQRL, the method from the previous paper). Note however that the environments on which the methods were tested were not the same as in the previous paper."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #126", "newsletter_category": "Preventing bad behavior"}
{"id": "7dead5823908ea3a9228df36e46cfa7d", "title": "Attainable utility has a subagent problem", "url": "https://www.alignmentforum.org/posts/sYjCeZTwA84pHkhBJ/attainable-utility-has-a-subagent-problem", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Stuart Armstrong"], "summaries": ["This post argues that regularizing an agent's impact by <@attainable utility@>(@Towards a New Impact Measure@) can fail when the agent is able to construct subagents. Attainable utility regularization uses auxiliary rewards and penalizes the agent for changing its ability to get high expected rewards for these to restrict the agent's power-seeking. More specifically, the penalty for an action is the absolute difference in expected cumulative auxiliary reward between the agent either doing the action or nothing for one time step and then optimizing for the auxiliary reward. \n\nThis can be circumvented in some cases: If the auxiliary reward does not benefit from two agents instead of one optimizing it, the agent can just build a copy of itself that does not have the penalty, as doing this does not change the agent's ability to get a high auxiliary reward. For more general auxiliary rewards, an agent could build another more powerful agent, as long as the powerful agent commits to balancing out the ensuing changes in the original agent's attainable auxiliary rewards. "], "venue": "Alignment Forum", "opinion": "I am confused about how much the commitment to balance out the original agent's attainable utility would constrain the powerful subagent. Also, in the presence of subagents, it seems plausible that attainable utility mostly depends on the agent's ability to produce subagents of different generality with different goals: If a subagent that optimizes for a single auxiliary reward was easier to build than a more general one, building a general powerful agent could considerably decrease attainable utility for all auxiliary rewards, such that the high penalty rules out this action.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #86", "newsletter_category": "Preventing bad behavior"}
{"id": "2625063efbc6bf15373bc69422661c15", "title": "Vehicle Automation Report", "url": "https://assets.documentcloud.org/documents/6540547/629713.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["NTSB"], "summaries": ["Last week, the NTSB released a report on the Uber automated driving system (ADS) that hit and killed Elaine Herzberg. The pedestrian was walking across a two-lane street with a bicycle. However, the car didn't slow down before impact. Moreover, even though the environment was dark, the car was equipped with LIDAR sensors which means that the car was able to fully observe the potential for collision. The report takes a closer look at how Uber had set up their ADS and notes that in addition to not considering the possibility of jay-walkers, \"...if the perception system changes the classification of a detected object, the tracking history of that object is no longer considered when generating new trajectories\". Additionally, in the final few seconds leading up to the crash the vehicle engaged in *action suppression*, which is described as \"a one-second period during which the ADS suppresses planned braking while the (1) system verifies the nature of the detected hazard and calculates an alternative path, or (2) vehicle operator takes control of the vehicle\". The reason cited for implementing this was concerns of false alarms which could cause the vehicle to engage in unnecessary extreme maneuvers. Following the crash, Uber suspended its ADS operations and made several changes. They now use onboard safety features of the Volvo system that were previously turned off, action suppression is no longer implemented, and path predictions are held across object classification changes."], "venue": "Government Report", "opinion": "**While there is a fair amount of nuance regarding the specifics of how Uber's ADS was operating it does seem as though there was a fair amount of incompetence in how the ADS was deployed.** Turning off Volvo system fail-safes, not accounting for jaywalking, and trajectory reseting seem like unequivocal *mistakes*. A lot of people also seem upset that Uber was engaging in action suppression. However, given that randomly engaging in extreme maneuvering in the presence of other vehicles can *indirectly cause* accidents I have a small amount of sympathy for why such a feature existed in the first place. Of course, the feature was removed and it's worth noting that \"there have been no unintended consequences—increased number of false alarms\". ", "highlight": false, "read_more": "Jeff Kaufman writes a [post](https://www.lesswrong.com/posts/tTg4bn5rxHYqQJXhD) summarizing both the original incident and the report. Wikipedia is also rather thorough in their reporting on the factual information. Finally, *[Planning and Decision-Making for Autonomous Vehicles](https://www.annualreviews.org/doi/pdf/10.1146/annurev-control-060117-105157)* gives an overview of recent trends in the field and provides good references for people interested in safety concerns.", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #74", "newsletter_category": "Preventing bad behavior"}
{"id": "f7122059c46f874d138eee5e23abf4c5", "title": "Bridging Hamilton-Jacobi Safety Analysis and Reinforcement Learning", "url": "http://files.davidqiu.com/research/papers/2019_fisac_Bridging%20Hamilton-Jacobi%20Safety%20Analysis%20and%20Reinforcement%20Learning%20%5BRL%5D%5BConstraints%5D.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jaime F. Fisac*", "Neil F. Lugovoy*", "Vicenc Rubies-Royo", "Shromona Ghosh", "Claire J. Tomlin"], "summaries": ["Reinforcement learning is not great at enforcing constraints that hold at all times, because the agent would violate a constraint now if it would lead to higher reward later. In robust optimal control theory, we maximize the **minimum** of the constraint reward over time to avoid this. We can do this in the Bellman equation by taking a minimum between the current reward and estimated future value (instead of summing), but this does not uniquely define a fixed point. Just as in regular RL, we can use discounting to avoid the problem: in particular, if we interpret the discount as the probability that the episode continues, we can derive a Safety Bellman equation for which Q-learning is guaranteed to converge. They demonstrate their method in classic control environments as well as half-cheetah, with a range of RL algorithms including soft actor-critic (SAC)."], "venue": "ICRA 2019", "opinion": "I really like how simple the change is here -- it should be a one-line change for many deep RL algorithms. Previously, we had to choose between unconstrained agents for high dimensional problems, or constrained agents for low dimensional problems -- I like that this work is making progress on constrained agents for high dimensional problems, similarly to [Constrained Policy Optimization](https://arxiv.org/abs/1705.10528). While this work doesn't involve a performance reward, you could use the resulting safe policy in order to guide a process of safe exploration to learn a policy that safely optimizes a performance metric. Of course, this is all assuming a specification for the constraint to satisfy.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #61", "newsletter_category": "Preventing bad behavior"}
{"id": "78137515c77616a0e5ed599eaa7abd35", "title": "Shielded Decision-Making in MDPs", "url": "http://arxiv.org/abs/1807.06096", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Nils Jansen", "Bettina Könighofer", "Sebastian Junges", "Roderick Bloem"], "summaries": ["Given a model of an MDP, we can compute a _shield_, which restricts the actions available to an RL agent to only the ones that can achieve at least some fraction of the optimal value. This results in safe exploration (since catastrophes would fall under the level that the shield guarantees), and also improves sample efficiency, since you no longer have tons of episodes in which the agent gets a large negative reward which only serve to teach it what not to do. They evaluate their approach on Pacman."], "venue": "arXiv", "opinion": "They require quite a lot of modeling in order to do this -- I think that it's specific to a particular kind of MDP, where there is an agent, and adversaries (the ghosts in Pacman), that are traversing a graph (the maze), which can have tokens (the food pellets). In theory, you should just solve the MDP and not use RL at all. Also in theory, shielding would actually require you to do this (in order to calculate the optimal values of actions), in which case it seems pointless (just use the optimal policy instead). In practice, the shield is only computed over a few timesteps. So you can think of this as a way of combining explicit, computationally-expensive forward reasoning (as in value iteration, for example) with RL, which learns from experience and can scale to much longer time horizons.\n\nFrom the perspective of safety, I would be a lot more interested in approaches based on formal verification if they could work with learned features, rather than requiring that the human accurately formally model the world. This seems doable using a framework similar to [Trial without Error: Towards Safe Reinforcement Learning via Human Intervention](https://arxiv.org/abs/1707.05173), except by getting a formal safety specification iteratively instead of learning to mimic the human shield with neural nets.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "Preventing bad behavior"}
{"id": "f176d0cf3caaa2f491bc74b6795cc272", "title": "Overcoming Clinginess in Impact Measures", "url": "https://www.lesswrong.com/posts/DvmhXysefEyEvXuXS/overcoming-clinginess-in-impact-measures", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["TurnTrout"], "summaries": ["In their [previous post](https://www.lesswrong.com/posts/H7KB44oKoSjSCkpzL/worrying-about-the-vase-whitelisting), TurnTrout proposed a whitelisting approach, that required the AI not to cause side effects not on the whitelist. One criticism was that it made the AI _clingy_, that is, the AI would also prevent any other agents in the world from causing non-whitelisted effects. In this post, they present a solution to the clinginess problem. As long as the AI knows all of the other agents in the environment, and their policies, the AI can be penalized for the _difference_ of effects between its behavior, and what the human(s) would have done. There's analysis in a few different circumstances, where it's tricky to get the counterfactuals exactly right. However, this sort of impact measure means that while the AI is punished for causing side effects itself, it _can_ manipulate humans to perform those side effects on its behalf with no penalty. This appears to be a tradeoff in the impact measure framework -- either the AI will be clingy, where it prevents humans from causing prohibited side effects, or it could cause the side effects through manipulation of humans."], "venue": "LessWrong", "opinion": "With any impact measure approach, I'm worried that there is no learning of what humans care about. As a result I expect that there will be issues that won't be handled properly (similarly to how we don't expect to be able to write down a human utility function). In the previous post, this manifested as a concern for generalization ability, which I'm still worried about. I think the tradeoff identified in this post is actually a manifestation of this worry -- clinginess happens when your AI overestimates what sorts of side effects humans don't want to happen in general, while manipulation of humans happens when your AI underestimates what side effects humans don't want to happen (though with the restriction that only humans can perform these side effects).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "Worrying about the Vase: Whitelisting", "converted_with": "python", "newsletter_number": "AN #14", "newsletter_category": "Preventing bad behavior"}
{"id": "5b4147d156787380cebdd1eedeb0deae", "title": "Worrying about the Vase: Whitelisting", "url": "https://www.lesswrong.com/posts/H7KB44oKoSjSCkpzL/worrying-about-the-vase-whitelisting", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["TurnTrout"], "summaries": ["It's really hard to avoid negative side effects because explicitly listing out all possible side effects the agent should avoid would be far too expensive. The issue is that we're trying to build a blacklist of things that can't be done, and that list will never be complete, and so some bad things will still happen. Instead, we should use whitelists, because if we forget to add something to the whitelist, that only limits the agent, it doesn't lead to catastrophe. In this proposal, we assume that we have access to the agent's ontology (in current systems, this might be the output of an object detection system), and we operationalize an \"effect\" as the transformation of one object into another (i.e. previously the AI believed an object was most likely an A, and now it believes it is most likely a B). We then whitelist allowed transformations -- for example, it is allowed to transform a carrot into carrot slices. If the agent causes any transformations not on the whitelist (such as \"transforming\" a vase into a broken vase), it incurs a negative reward. We also don't have to explicitly write down the whitelist -- we can provide demonstrations of acceptable behavior, and any transitions in these demonstrations can be added to the whitelist. The post and paper have a long list of considerations on how this would play out in a superintelligent AI system."], "venue": "LessWrong", "opinion": "Whitelisting seems like a good thing to do, since it is safe by default. (Computer security has a similar principle of preferring to whitelist instead of blacklist.) I was initially worried that we'd have the problems of symbolic approaches to AI, where we'd have to enumerate far too many transitions for the whitelist in order to be able to do anything realistic, but since whitelisting could work on learned embedding spaces, and the whitelist itself can be learned from demonstrations, this could be a scalable method. I'm worried that it presents generalization challenges -- if you are distinguishing between different colors of tiles, to encode \"you can paint any tile\" you'd have to whitelist transitions (redTile -> blueTile), (blueTile -> redTile), (redTile -> yellowTile) etc. Those won't all be in the demonstrations. If you are going to generalize there, how do you _not_ generalize (redLight -> greenLight) to (greenLight -> redLight) for an AI that controls traffic lights? On another note, I personally don't want to assume that we can point to a part of the architecture as the AI's ontology. I hope to see future work address these challenges!", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #11", "newsletter_category": "Preventing bad behavior"}
{"id": "e0baea1a60ebe2c588152da1c78e47b0", "title": "Formal Language Constraints for Markov Decision Processes", "url": "http://arxiv.org/abs/1910.01074", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Eleanor Quint", "Dong Xu", "Haluk Dogan", "Zeynep Hakguder", "Stephen Scott", "Matthew Dwyer"], "summaries": ["Within the framework of RL, the authors propose using constraints defined by DFAs (deterministic finite automata) in order to eliminate safety failures, or to prevent agents from exploring clearly ineffective policies (which would accelerate learning). Constraints can be defined on any auxiliary information that can be computed from the \"base\" MDP. A constraint could either restrict the action space, forcing the agent to take an action that doesn't violate the constraint, which they term \"hard\" constraints; or a constraint could impose a penalty on the agent, thus acting as a form of reward shaping, which they term a \"soft\" constraint. They consider two constraints: one that prevents the agent from \"dithering\" (going left, then right, then left, then right), and one that prevents the agent from \"overactuating\" (going in the same direction four times in a row). They evaluate their approach with these constraints on Atari games and Mujoco environments, and show that they lead to increased reward and decreased constraint violations."], "venue": "NeurIPS 2019 Workshop on Safety and Robustness in Decision Making", "opinion": "This method seems like a good way to build in domain knowledge about what kinds of action sequences are unlikely to work in a domain, which can help accelerate learning. Both of the constraints in the experiments do this. The paper also suggests using the technique to enforce safety constraints, but the experiments don't involve any safety constraints, and conceptually there do seem to be two big obstacles. First, the constraints will depend on state, but it is very hard to write such constraints given access only to actions and high-dimensional pixel observations. Second, you can only prevent constraint violations by removing actions one timestep before the constraint is violated: if there is an action that will inevitably lead to a constraint violation in 10 timesteps, there's no way in this framework to not take that action. (Of course, you can use a soft constraint, but this is then the standard technique of reward shaping.)\n\nIn general, methods like this face a major challenge: how do you specify the safety constraint that you would like to avoid violating? I'd love to see more research on how to create specifications for formal analysis.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #68", "newsletter_category": "Preventing bad behavior"}
{"id": "66f4b6e4eb33917ea74f113819c5232f", "title": "Safe Option-Critic: Learning Safety in the Option-Critic Architecture", "url": "http://arxiv.org/abs/1807.08060", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Arushi Jain", "Khimya Khetarpal", "Doina Precup"], "summaries": ["Let's consider an RL agent in the options framework (one way of doing hierarchical reinforcement learning). One way in which we could make such an agent safer would be to make it risk-averse. The authors define the controllability of a (state, option) pair to be the negative expected variance of the TD error. Intuitively, the controllability is higher when the value of the (state, option) pair is more predictable to the agent, and so by optimizing for controllability we can encourage risk-aversion. They derive the policy gradient when the objective is to maximize the reward and the controllability of the initial (state, option) pair, and use this to create the Safe-A2OC algorithm (a safe version of A2OC, which itself is a version of A2C for options). They test this out on a four-rooms gridworld problem, Cartpole, and three games from the Arcade Learning Environment (ALE)."], "venue": "arXiv", "opinion": "I'm very excited to see a paper tackling safety in hierarchical reinforcement learning -- that seems like a really important area to consider, and doesn't have many safety people working on it yet. That said, this paper feels weird to me, because in order to learn that a particular (state, option) pair is bad, the RL agent must experience that pair somewhat often, so it will have done the risky thing. It's not clear to me where this would be useful. One upside could be that we do risky things less often, so our RL agent learns faster from its mistakes, and doesn't make them as often. (And in fact they find that this leads to faster learning in three games in the ALE.) Perhaps we could also use this to train a risk-averse agent in simulation, that then never makes a mistake when deployed in the real world.\n\nI also wonder whether we should be trying to make our agents risk-averse. The \"right\" answer seems to me to a combination of two things: First, some things are actually very bad and have very large negative reward, and so they should be avoided with high probability. Second, when you are acting over a long period of time, even a small probability of failure at every time step compounds and leads to a near-guaranteed failure. If these are actually the reasons underlying risk aversion, it seems like we want to be able to imbue our RL agent with the underlying reasons, rather than flat risk aversion.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #17", "newsletter_category": "Preventing bad behavior"}
{"id": "6e0b7305ba374b423eff65df3092b403", "title": "Model Reconstruction from Model Explanations", "url": "http://arxiv.org/abs/1807.05185", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Smitha Milli", "Ludwig Schmidt", "Anca D. Dragan", "Moritz Hardt"], "summaries": ["Back in [AN #16](https://mailchi.mp/f2950ed2ac4b/alignment-newsletter-16), I said that one way to prevent model reconstruction from gradient-based explanations was to add noise to the gradients. Smitha pointed out that the experiments with SmoothGrad are actually of this form, and it still is possible to recover the full model, so even adding noise may not help. I don't really understand SmoothGrad and it's relationship with noise (which is chosen to make a saliency map look nice, if I understand correctly) so I don't know exactly what to think here."], "venue": "Fourteenth Artificial Intelligence and Interactive Digital Entertainment Conference", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #23", "newsletter_category": "Previous newsletters"}
{"id": "e6155e401f8f68a2b2faa8c5dea5a350", "title": "Does GPT-2 Know Your Phone Number?", "url": "https://bair.berkeley.edu/blog/2020/12/20/lmmem/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Nicholas Carlini", "Florian Tramer", "Eric Wallace", "Matthew Jagielski", "Ariel Herbert-Voss", "Katherine Lee", "Adam Roberts", "Tom Brown", "Dawn Song", "Ulfar Erlingsson", "Alina Oprea", "Colin Raffel"], "summaries": ["This post and associated paper demonstrate that large language models memorize rare training data, and (some of) that training data can then be extracted through an automated attack. The key idea is to sample text that is _unusually_ high likelihood. Given a high likelihood sample from a language model, we can check whether the likelihood is especially high by comparing the likelihood to:\n\n1. The likelihood assigned by other (especially smaller) language models. Presumably these models would not have memorized the same content, especially if the content was rare (which is the content we are most interested in).\n2. The length of the text when compressed by (say) zlib. Existing compression algorithms are pretty good at compressing regular English text, and so it is notable when a language model assigns high likelihood but the compression algorithm can’t compress it much.\n3. The likelihood assigned to the same text, but lowercase. Often, memorized content is case-sensitive, and likelihood drops significantly when the case is changed.\n\nThe authors generate a lot of samples from GPT-2, use the metrics above to rank them in order of how likely they are to be memorized from the training set, and then investigate the top 1800 manually. They find that 604 of them are directly from the training set. While many are unobjectionable (such as news headlines), in some cases GPT-2 has memorized personal data (and the authors have extracted it simply by prompting GPT-2). In their most objectionable example, they extract the name, email, phone number, work address, and fax of a single person."], "venue": "arXiv", "opinion": "I really liked the paper: it contains a lot of empirical detail that didn’t make it into the blog post, that gave me a much better sense of the scope of the problem. I don’t really have the space to summarize it here, so I recommend reading the paper.", "highlight": false, "read_more": "[Blog post: Privacy Considerations in Large Language Models](https://ai.googleblog.com/2020/12/privacy-considerations-in-large.html)\n[Paper: Extracting Training Data from Large Language Models](https://arxiv.org/abs/2012.07805)", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #132", "newsletter_category": "Privacy and security"}
{"id": "8d17083214d0f9ab935cc14cb94e16e6", "title": "Privacy and machine learning: two unexpected allies?", "url": "http://www.cleverhans.io/privacy/2018/04/29/privacy-and-machine-learning.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Nicolas Papernot and Ian Goodfellow"], "summaries": ["Differential privacy provides guarantees on how much information you can obtain by making queries of a specific type of a dataset. Normally, in order to achieve such guarantees, you must add in randomness to the input data that can change the decision, so that there is a plausible explanation for any decision. Unsurprisingly, this tends to degrade performance. However, in deep learning, we often have the problem of our models overfitting to specific details in the training set instead of generalizing appropriately, so we might expect that differential privacy could actually _help_ with performance (as well as privacy). Private Aggregation of Teacher Ensembles (PATE) demonstrates that this is the case. It works by training several teacher models on different datasets to solve the task at hand. Then, by aggregating the results across the ensemble with some random noise, we can answer queries and put bounds on the amount of information that is leaked. However, with each query we use up more of our \"privacy budget\", so it can't be used arbitrarily long. To solve this, we can make a fixed number of queries to label some unlabelled data, use those labels to train a student model, and use the student model to make predictions forever after. An adversary could at worst infer the entire training dataset of the student model -- but that training set was designed to be private."], "venue": "cleverhans", "opinion": "I would have been excited by work that randomizes the inputs to a deep learning technique in order to get better generalizability. It's cool that this goal dovetails so beautifully with the desire for differential privacy.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #5", "newsletter_category": "Privacy and security"}
{"id": "abfff1b6645c44856029390773f3d3bf", "title": "Seeking Power is Provably Instrumentally Convergent in MDPs", "url": "https://www.alignmentforum.org/posts/6DuJxY8X45Sco4bS2/seeking-power-is-provably-instrumentally-convergent-in-mdps", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Alex Turner", "Logan Smith"], "summaries": ["[The Basic AI Drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf) argues that it is _instrumentally convergent_ for an agent to collect resources and gain power. This post and [associated paper](https://arxiv.org/abs/1912.01683) aim to formalize this argument. Informally, an _action_ is _instrumentally convergent_ if it is helpful for many goals, or equivalently, an action is instrumentally convergent to the extent that we expect an agent to take it, if we do not know what the agent’s goal is. Similarly, a _state_ has high power if it is easier to achieve a wide variety of goals from that state.\n\nA natural formalization is to assume we have a distribution over the agent's goal, and define power and instrumental convergence relative to this distribution. We can then define power as the expected value that can be obtained from a state (modulo some technical caveats), and instrumental convergence as the probability that an action is optimal, _from our perspective of uncertainty_: of course, the _agent_ knows its own goal, and acts optimally in pursuit of that goal.\n\nYou might think that optimal agents would provably seek out states with high power. However, this is not true. Consider a decision faced by high school students: should they take a gap year, or go directly to college? Let’s assume college is necessary for (100-ε)% of careers, but if you take a gap year, you could focus on the other ε% of careers or decide to go to college after the year. Then in the limit of farsightedness, taking a gap year leads to a more powerful state, since you can still achieve all of the careers, albeit slightly less efficiently for the college careers. However, if you know which career you want, then it is (100-ε)% likely that you go to college, so going to college is very strongly instrumentally convergent even though taking a gap year leads to a more powerful state.\n\nNonetheless, there are things we can prove. In environments where the only cycles are states with a single action leading back to the same state, and apart from that every action leads to a new state, and many states have more than one action, farsighted agents are more likely to choose trajectories that spend more time navigating to a cycle before spending the rest of the time in the cycle. For example, in Tic-Tac-Toe where the opponent is playing optimally according to the normal win condition, but the agent's reward for each state is drawn independently from some distribution on [0, 1], the agent is much more likely to play out to a long game where the entire board is filled. This is because the number of states that can be reached grows exponentially in the horizon, and so agents have more control by taking longer trajectories. Equivalently, the cycle with maximal reward is much more likely to be at the end of a longer trajectory, and so the optimal possibility is more likely to be a long trajectory."], "venue": "Alignment Forum", "opinion": "I like the formalizations of power and instrumental convergence. 
I think in practice there will be a lot of complexity in a) the reward distribution that power and instrumental convergence are defined relative to, b) the structure of the environment, and c) how powerful AI systems actually work (since they won't be perfectly optimal, and won't know the environment structure ahead of time). Nonetheless, results with specific classes of reward distributions, environment structures, and agent models can still provide useful intuition.", "highlight": true, "read_more": "[Clarifying Power-Seeking and Instrumental Convergence](https://www.alignmentforum.org/posts/cwpKagyTvqSyAJB7q/clarifying-power-seeking-and-instrumental-convergence), [Paper: Optimal Farsighted Agents Tend to Seek Power](https://arxiv.org/abs/1912.01683)", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #78", "newsletter_category": "Problems"}
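A rough sketch of the two definitions in symbols, for a distribution D over reward functions; this ignores the normalization and other technical caveats mentioned above, and the notation is illustrative rather than the paper's exact formulation.

```latex
\mathrm{POWER}(s) = \mathbb{E}_{R \sim \mathcal{D}}\big[ V^{*}_{R}(s) \big]
\qquad
\mathrm{IC}(a \mid s) = \Pr_{R \sim \mathcal{D}}\big[\, a \in \operatorname{arg\,max}_{a'} Q^{*}_{R}(s, a') \,\big]
```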
{"id": "03c548607c446b51da103e96fa570485", "title": "Consequences of Misaligned AI", "url": "https://proceedings.neurips.cc/paper/2020/file/b607ba543ad05417b8507ee86c54fcb7-Paper.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Simon Zhuang", "Dylan Hadfield-Menell"], "summaries": ["One intuition for why powerful AI systems might lead to bad consequences goes as follows:\n\n1) Humans care about many attributes of the world and we would likely forget some of these when trying to list them all.\n2) Improvements along these attributes usually require resources, and gaining additional resources often requires sacrifices along some attributes.\n3) Because of 1), naively deployed AI systems would only optimize some of the attributes we care about, and because of 2) this would lead to bad outcomes along the other attributes.\n\nThis paper formalizes this intuition in a model, identifies conditions for when deploying AIs can reduce true utility within the model and proposes two mitigation strategies, impact minimization and interactivity.\n\nWe assume that the world state consists of L attributes, all of which the human cares about having more of, that is, true utility is strictly increasing in each of the attributes. Each attribute has some minimum value, and can be increased from that minimum value through the use of a fixed, finite resource (which you could think of as money, if you want); this allows us to formalize (2) above. To formalize (1), we assume that the proxy utility optimized by the AI is only allowed to depend on J 0, and 4) utility has diminishing marginal returns in each attribute (and marginal returns tend to zero as the attribute increases).\n\nRegarding mitigation, impact minimization requires that the AI keep all attributes that are omitted by the proxy constant. In this case, any gains in proxy utility must also be gains in true utility.\n\nMeanwhile, in the interactive condition, the human gets to regularly select a new proxy (still only specifying J, [Formalizing convergent instrumental goals](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/))\n2. Given that perfect information is impossible, interactivity becomes important (<@Human-AI Interaction@>, <@Incomplete Contracting and AI Alignment@>).\n3. Conservatism (in this case through impact regularization) can be helpful (see the many blog posts and papers on mild optimization, low impact, and conservatism).", "highlight": true, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #131", "newsletter_category": "Problems"}
{"id": "1f95a7b889aa6e3b428a0e47d13ec6c1", "title": "Inaccessible information", "url": "https://alignmentforum.org/posts/ZyWyAJbedvEgRT2uF/inaccessible-information", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Paul Christiano"], "summaries": ["One way to think about the problem of AI alignment is that we only know how to train models on information that is _accessible_ to us, but we want models that leverage _inaccessible_ information.\n\nInformation is accessible if it can be checked directly, or if an ML model would successfully transfer to provide the information when trained on some other accessible information. (An example of the latter would be if we trained a system to predict what happens in a day, and it successfully transfers to predicting what happens in a month.) Otherwise, the information is inaccessible: for example, “what Alice is thinking” is (at least currently) inaccessible, while “what Alice will say” is accessible. The post has several other examples.\n\nNote that while an ML model may not directly say exactly what Alice is thinking, if we train it to predict what Alice will say, it will probably have some internal model of what Alice is thinking, since that is useful for predicting what Alice will say. It is nonetheless inaccessible because there’s no obvious way of extracting this information from the model. While we could train the model to also output “what Alice is thinking”, this would have to be training for “a consistent and plausible answer to what Alice is thinking”, since we don’t have the ground truth answer. This could incentivize bad policies that figure out what we would most believe, rather than reporting the truth.\n\nThe argument for risk is then as follows: we care about inaccessible information (e.g. we care about what people _actually_ experience, rather than what they _say_ they experience) but can’t easily make AI systems that optimize for it. However, AI systems will be able to infer and use inaccessible information, and would outcompete ones that don’t. AI systems will be able to plan using such inaccessible information for at least some goals. Then, the AI systems that plan using the inaccessible information could eventually control most resources. Key quote: “The key asymmetry working against us is that optimizing flourishing appears to require a particular quantity to be accessible, while danger just requires anything to be accessible.”\n\nThe post then goes on to list some possible angles of attack on this problem. Iterated amplification can be thought of as addressing gaps in speed, size, experience, algorithmic sophistication etc. between the agents we train and ourselves, which can limit what inaccessible information our agents can have that we won’t. However, it seems likely that amplification will eventually run up against some inaccessible information that will never be produced. As a result, this could be a “hard core” of alignment."], "venue": "Alignment Forum", "opinion": "I think the idea of inaccessible information is an important one, but it’s one that feels deceptively hard to reason about. For example, I often think about solving alignment by approximating “what a human would say after thinking for a long time”; this is effectively a claim that human reasoning transfers well when iterated over long periods of time, and “what a human would say” is at least somewhat accessible. 
Regardless, it seems reasonably likely that AI systems will inherit the same property of transferability that I attribute to human reasoning, in which case the argument for risk applies primarily because the AI system might apply its reasoning towards a different goal than the ones we care about, which leads us back to the <@intent alignment@>(@Clarifying \"AI Alignment\"@) formulation.\n\nThis [response](https://www.alignmentforum.org/posts/A9vvxguZMytsN3ze9/reply-to-paul-christiano-s-inaccessible-information) views this post as a fairly general argument against black box optimization, where we only look at input-output behavior, as then we can’t use inaccessible information. It suggests that we need to understand how the AI system works, rather than relying on search, to avoid these problems.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #104", "newsletter_category": "Problems"}
{"id": "27c0d0033f680eb807d7fd3b348ca7bc", "title": "Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence", "url": "https://www.mdpi.com/2504-2289/3/2/21/htm", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["David Manheim"], "summaries": ["While [Categorizing Variants of Goodhart’s Law](https://arxiv.org/abs/1803.04585) explains failure modes that occur when a single agent’s proxy becomes decoupled from the true goal, this paper aims to characterize failures involving multiple agents:\n\n **Accidental steering** happens when the combined actions of multiple agents facilitate single-agent failures. For example, catching more fish now is usually positively correlated with a fisherman's long term goals, but this relationship inverts once there are lots of fishermen optimizing for short term gains and the fish population collapses.\n\n**Coordination Failure** occurs when agents with mutually compatible goals fail to coordinate. For example, due to incomplete models of other agent's goals and capabilities, two agents sharing a goal might compete for a resource even though one of them is strictly better at converting the resource into progress towards their goal. \n\n**Adversarial optimization** is when an agent **O** steers the world into states where **V**'s proxy goal is positively correlated with **O**'s goal. For example, one could exploit investors who use short term volatility as a proxy for risk by selling them instruments that are not very volatile but still risky.\n\n**Input Spoofing** is the act of one agent manipulating another learning agent's model, either by manufacturing false evidence or by filtering the received evidence systematically, as arguably happened with [Microsoft’s Tay](https://en.wikipedia.org/wiki/Tay_(bot)).\n\nFinally, **Goal co-option** happens when agent **O** has (partial) control over the hardware agent **V** runs or relies on. This way, **O** can either modify the reward signal **V** receives to change what **V** optimizes for, or it can directly change **V**'s outputs. \n\nThe difficulties in precisely modelling other sophisticated agents and other concerns related to embedded agency make it hard to completely avoid these failure modes with current methods. Slowing down the deployment of AI systems and focussing on the mitigation of the discussed failure modes might prevent limited near term catastrophes, which in turn might cause a slowdown of further deployment and prioritization of safety."], "venue": "Big Data and Cognitive Computing", "opinion": "I like that this paper subdivides failure modes that can happen in multiparty optimization into several clear categories and provides various models and examples for each of them. I am unsure about the conclusion: on one hand, slowing down deployment to improve the safety of contemporary systems seems very sensible. On the other hand, it seems like there would be some failures of limited scope that are hard to reproduce \"in the lab\". Widely deployed AI systems might provide us with valuable empirical data about these failures and improve our understanding of the failure modes in general. I guess ideally there would be differential deployment with rapid deployment in noncritical areas like managing local parking lots, but very slow deployment for critical infrastructure.\n\n**Rohin's opinion:** I'm particularly interested in an analysis of how these kinds of failures affect existential risk. 
I'm not sure if David believes they are relevant for x-risk, but even if so the arguments aren't presented in this paper.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #70", "newsletter_category": "Problems"}
{"id": "982f7a501b7f304ec2cbc4555fd4ea26", "title": "The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models", "url": "https://openreview.net/forum?id=JYtwGwIL7ye", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Anonymous"], "summaries": ["Reward hacking occurs when RL agents exploit the difference between a true reward and a proxy. Reward hacking has been <@observed in practice@>(@Specification gaming examples in AI@), and as reinforcement learning agents are trained with better algorithms, more data, and larger policies, they are at increased risk of overfitting their proxy objectives. However, reward hacking has not yet been systematically studied.\n\nThis paper fills this gap by constructing four example environments with a total of nine proxy rewards to investigate how reward hacking changes as a function of optimization power. They increase optimization power in several different ways, such as increasing the size of the neural net, or providing the model with more fine-grained observations.\n\nOverall, the authors find that reward hacking occurs in five of the nine cases. Moreover, the authors observed phase transitions in four of these cases. These are stark transitions where a moderate increase in optimization power leads to a drastic increase in reward hacking behavior. This poses a challenge in monitoring the safety of ML systems. To address this the authors suggest performing anomaly detection to notice reward hacking and offer several baselines. "], "venue": "OpenReview", "opinion": "It is good to see an attempt at formalizing reward hacking. The experimental contributions are interesting and the anomaly detection method seems reasonable. However, the proxy rewards chosen to represent reward hacking are questionable. In my opinion, these rewards are obviously 'wrong' so it is less surprising that they result in undesired behavior. I look forward to seeing more comprehensive experiments on this subject.\n\n**Rohin’s opinion:** Note that on OpenReview, the authors say that one of the proxy rewards (maximize average velocity for the driving environment) was actually the default and they only noticed it was problematic after they had trained large neural nets on that environment. I do agree that future proxy objectives will probably be less clearly wrong than most of the ones in this paper.", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #170", "newsletter_category": "Problems"}
{"id": "0e9aad0a88b01d63083016ab470f6b4a", "title": "Clarifying “What failure looks like” (part 1)", "url": "https://www.alignmentforum.org/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like-part-1", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Sam Clarke"], "summaries": ["The first scenario outlined in <@What failure looks like@> stems from a failure to specify what we actually want, so that we instead build AI systems that pursue proxies of what we want instead. As AI systems become responsible for more of the economy, human values become less influential relative to the proxy objectives the AI systems pursue, and as a result we lose control over the future. This post aims to clarify whether such a scenario leads to _lock in_, where we are stuck with the state of affairs and cannot correct it to get “back on course”. It identifies five factors which make this more likely:\n\n1. _Collective action problems:_ Many human institutions will face competitive (short-term) pressures to deploy AI systems with bad proxies, even if it isn’t in humanity’s long-term interest.\n2. _Regulatory capture:_ Influential people (such as CEOs of AI companies) may benefit from AI systems that optimize proxies, and so oppose measures to fix the issue (e.g. by banning such AI systems).\n3. _Ambiguity:_ There may be genuine ambiguity about whether it is better to have these AI systems that optimize for proxies, even from a long-term perspective, especially because all clear and easy-to-define metrics will likely be going up (since those can be turned into proxy objectives).\n4. _Dependency:_ AI systems may become so embedded in society that society can no longer function without them.\n5. _Opposition:_ The AI systems themselves may oppose any fixes we propose.\n\nWe can also look at historical precedents. Factors 1-3 have played an important role in climate change, though if it does lead to lock in, this will be “because of physics”, unlike the case with AI. The agricultural revolution, which arguably made human life significantly worse, still persisted thanks to its productivity gains (factor 1) and the loss of hunter-gathering skills (factor 4). When the British colonized New Zealand, the Maori people lost significant control over their future, because each individual chief needed guns (factor 1), trading with the British genuinely made them better off initially (factor 3), and eventually the British turned to manipulation, confiscation and conflict (factor 5).\n\nWith AI in particular, we might expect that an increase in misinformation and echo chambers exacerbates ambiguity (factor 3), and that due to its general-purpose nature, dependency (factor 4) may be more of a risk.\n\nThe post also suggests some future directions for estimating the _severity_ of lock in for this failure mode."], "venue": "Alignment Forum", "opinion": "I think this topic is important and the post did it justice. I feel like factors 4 and 5 (dependency and opposition) capture the reasons I expect lock in, with factors 1-3 as less important but still relevant mechanisms. I also really liked the analogy with the British colonization of New Zealand -- it felt like it was in fact quite analogous to how I’d expect this sort of failure to happen.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #120", "newsletter_category": "Problems"}
{"id": "13101d8f57fc70fb8274a643af6b3b9b", "title": "Problems in AI Alignment that philosophers could potentially contribute to", "url": "https://alignmentforum.org/posts/rASeoR7iZ9Fokzh7L/problems-in-ai-alignment-that-philosophers-could-potentially", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Wei Dai"], "summaries": ["Exactly what it says. The post is short enough that I'm not going to summarize it -- it would be as long as the original."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #62", "newsletter_category": "Problems"}
{"id": "18ac30c573c4a8351f1d838d5c29fa25", "title": "Agency Failure AI Apocalypse?", "url": "http://www.overcomingbias.com/2019/04/agency-failure-ai-apocalypse.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Robin Hanson"], "summaries": ["This is a response to <@More realistic tales of doom@>, arguing that the scenarios described in the post are unrealistic given what we know about principal-agent problems. In a typical principal-agent problem, the principal doesn't know everything about the agent, and the agent can use this fact to gain \"agency rents\" where it can gain extra value for itself, or there could be an \"agency failure\" where the principal doesn't get as much as they want. For example, an employee might spend half of their day browsing the web, because their manager can't tell that that's what they are doing. Our economic literature on principal-agent problems suggests that agency problems get harder with more information asymmetry, more noise in outcomes, etc. but not with smarter agents, and in any case we typically see limited agency rents and failures. So, it's unlikely that the case for AI will be any different, and while it's good to have a couple of people keeping an eye on the problem, it's not worth the large investment of resources from future-oriented people that we currently see."], "venue": "Overcoming Bias", "opinion": "I have a bunch of complicated thoughts on this post, many of which were said in Paul's comment reply to the post, but I'll say a few things. Firstly, I think that if you want to view the AI alignment problem in the context of the principal-agent literature, the natural way to think about it is with the principal being less rational than the agent. I claim that it is at least conceivable that an AI system could make humans worse off, but the standard principal-agent model cannot accommodate such a scenario because it assumes the principal is rational, which means the principal always does at least as well as not ceding any control to the agent at all. More importantly, although I'm not too familiar with the principal-agent literature, I'm guessing that the literature assumes the presence of norms, laws and institutions that constrain both the principal and the agent, and in such cases it makes sense that the loss that the principal could incur would be bounded -- but it's not obvious that this would hold for sufficiently powerful AI systems.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #56", "newsletter_category": "Problems"}
{"id": "6c0f70341da62d949a87931567ca2da2", "title": "Imitation learning considered unsafe?", "url": "https://www.lesswrong.com/posts/whRPLBZNQm3JD5Zv8/imitation-learning-considered-unsafe", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["capybaralet"], "summaries": ["We might hope that using imitation learning to mimic a corrigible human would be safe. However, this would involve mimicking the human's planning process. It seems fairly likely that slight errors in the imitation of this process could lead to the creation of a goal-directed planning process that does dangerous long-term optimization."], "venue": "LessWrong", "opinion": "This seems pretty similar to the problem of inner optimizers, in which while searching for a good policy for some task T on training distribution D, you end up finding a consequentialist agent that is optimizing some utility function that leads to good performance on D. That agent will have all the standard dangers of goal-directed optimization out of distribution.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #40", "newsletter_category": "Problems"}
{"id": "57d8502835678bfeb3d1b14fba2259fb", "title": "\"Unsupervised\" translation as an (intent) alignment problem", "url": "https://www.alignmentforum.org/posts/saRRRdMnMPXXtQBNi/unsupervised-translation-as-a-safety-problem", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Paul Christiano"], "summaries": ["We have previously seen that a major challenge for alignment is that our models may learn <@inaccessible information@>(@Inaccessible information@) that we cannot extract from them, because we do not know how to provide a learning signal to train them to output such information. This post proposes unsupervised translation as a particular concrete problem to ground this out.\n\nSuppose we have lots of English text, and lots of Klingon text, but no translations from English to Klingon (or vice versa), and no bilingual speakers. If we train GPT on the text, it will probably develop a good understanding of both English and Klingon, such that it “should” have the ability to translate between the two (at least approximately). How can we get it to actually (try to) do so? Existing methods (both in unsupervised translation and in AI alignment) do not seem to meet this bar.\n\nOne vague hope is that we could train a helper agent such that a human can perform next-word prediction on Klingon with the assistance of the helper agent, using a method like the one in [Learning the prior](https://www.alignmentforum.org/posts/SL9mKhgdmDKXmxwE4/learning-the-prior) ([AN #109](https://mailchi.mp/ee62c1c9e331/an-109teaching-neural-nets-to-generalize-the-way-humans-would))."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #120", "newsletter_category": "Problems"}
{"id": "46390b0c1783c9815e0fdac05219600f", "title": "Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI", "url": "https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Lucas Perry", "Steven Pinker and Stuart Russell"], "summaries": ["Despite their disagreements on AI risk, Stuart and Steven agree on quite a lot. They both see the development of AI as depending on many historical ideas. They are both particularly critical of the idea that we can get general intelligence by simply scaling up existing deep learning models, citing the need for reasoning, symbol manipulation, and few-shot learning, which current models mostly don’t do. They both predict that we probably won’t go extinct from superintelligent AI, at least in part because we’ll notice and fix any potential failures, either via extensive testing or via initial failures that illustrate the problem.\n\nOn the AI risk side, while they spent a lot of time discussing it, I’ll only talk about the parts where it seems to me that there is a real disagreement, and not mention anything else. Steven’s position against AI risk seems to be twofold. First, we are unlikely to build superintelligent AI soon, and so we should focus on other clear risks like climate change. In contrast, Stuart thinks that superintelligent AI is reasonably likely by the end of the century and thus worth thinking about. Second, the idea of building a super-optimizer that focuses on a single goal is so obviously bad that AI researchers will obviously not build such a thing. In contrast, Stuart thinks that goal-directed systems are our default way of modeling and building intelligent systems. It seemed like Steven was particularly objecting to the especially simplistic goals used in examples like maximizing paperclips or curing cancer, to which Stuart argued that the problem doesn’t go away if you have multiple goals, because there will always be some part of your goal that you failed to specify.\n\nSteven also disagrees with the notion of intelligence that is typically used by AI risk proponents, saying “a super-optimizer that pursued a single goal is self-evidently unintelligent, not superintelligent”. I don’t get what he means by this, but it seems relevant to his views."], "venue": "FLI Website", "opinion": "Unsurprisingly I agreed with Stuart’s responses, but nevertheless I found this illuminating, especially in illustrating the downsides of examples with simplistic goals. I did find it frustrating that Steven didn’t respond to the point about multiple goals not helping, since that seemed like a major crux, though they were discussing many different aspects and that thread may simply have been dropped by accident.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #104", "newsletter_category": "Problems"}
{"id": "f5c8aa1d505474bf94c13028f1dc4e90", "title": "Metaphilosophical competence can't be disentangled from alignment", "url": "https://www.lesswrong.com/posts/CCgvJHpbvc7Lm8ZS8/metaphilosophical-competence-can-t-be-disentangled-from", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["zhukeepa"], "summaries": ["Would you be comfortable taking a single human, and making them a quadrillion times more powerful?"], "venue": "LessWrong", "opinion": "I am curious to see people's answers to this, I think it might be a good question to reveal major differences in worldviews between optimistic and pessimistic safety researchers.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #1", "newsletter_category": "Problems"}
{"id": "1e8a5f276a85c5f2fa50ad12eb8f97fa", "title": "Specification gaming: the flip side of AI ingenuity", "url": "https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Victoria Krakovna", "Jonathan Uesato", "Vladimir Mikulik", "Matthew Rahtz", "Tom Everitt", "Ramana Kumar", "Zac Kenton", "Jan Leike", "Shane Legg"], "summaries": ["This post on the DeepMind website explains the concept of <@specification gaming@>(@Specification gaming examples in AI@), and illustrates three problems that arise within it. First and most obviously, we need to capture the human concept of a given task in a reward function. Second, we must design agents without introducing any mistaken implicit assumptions (e.g. that the physics simulation is accurate, when it isn't). Finally, we need to ensure that agents don't tamper with their reward functions."], "venue": "DeepMind Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #97", "newsletter_category": "Problems"}
{"id": "b029833484ab4ef5e3ecceba18419615", "title": "How the Enlightenment Ends", "url": "https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Henry A. Kissinger"], "summaries": ["This is an article about the dangers of AI written by a non-technologist, hitting some points that are relatively familiar."], "venue": "Atlantic", "opinion": "While there are many points that I disagree with (eg. \"what [AIs] do uniquely is not thinking as heretofore conceived and experienced. Rather, it is unprecedented memorization and computation\"), overall there was a surprising amount of familiar material said in a different way (such as explainability and unintended consequences).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #8", "newsletter_category": "Problems"}
{"id": "14aee1c6374808855701dbb7e1803f39", "title": "When Bots Teach Themselves How To Cheat", "url": "https://www.wired.com/story/when-bots-teach-themselves-to-cheat", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tom Simonite"], "summaries": ["A media article about specification gaming in AI that I actually just agree with, and it doesn't even have a Terminator picture!"], "venue": "Wired", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #19", "newsletter_category": "Problems"}
{"id": "028ba71be8aff848331840c5a17bd563", "title": "Aligning AI to Human Values means Picking the Right Metrics", "url": "https://medium.com/@PartnershipAI/aligning-ai-to-human-values-means-picking-the-right-metrics-855859e6f047", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Jonathan Stray"], "summaries": ["There has been a lot of attention recently on the flaws of recommender systems, especially when optimizing for simple metrics like engagement -- an example of what we might call \"narrow value alignment\". This post reconstructs how Facebook and YouTube have been incorporating better metrics into their algorithms from 2015 and 2017 respectively. For example, Facebook found that academic research suggested that well-being was improved by \"meaningful social interactions\", but worsened by passive consumption of content. As a result, they changed the metric for the recommendation algorithm to better track this. How did they measure it? It seems that they simply asked a survey of thousands of people what the most meaningful content was (both on and off Facebook), and used this to train a model to predict \"meaningful interactions\". They estimated that this resulted in a 5% decrease in time spent on Facebook, at least in the short term. The story with YouTube is similar, though sparser on details (and it's not clear if there was input from end users in YouTube's case).\n\nThe author then contrasts this sort of narrow value alignment with AGI alignment. His main take is that narrow alignment should be easier to address, since we can learn from how existing systems behave in the real world, and the insights we gain may be critical for AGI alignment. I'll end with a quote from the conclusion: \"My argument is not so much that one should use AI to optimize for well-being. Rather, we live in a world where large-scale optimization is already happening. We can choose not to evaluate or adjust these systems, but there is little reason to imagine that ignorance and inaction would be better.\""], "venue": "PAI Blog", "opinion": "Even though I often feel like an <@optimist@>(@Conversation with Rohin Shah@) about incentives towards alignment, even I was surprised to see the amount of effort that it seems Facebook has put into trying to align its recommendation algorithm with well-being. To the extent that the recommendation algorithm is still primarily harmful (which might be true or false, idk), this suggests to me that it might just be really hard to give good recommendations given the sparse feedback you get. Of course, there are more cynical explanations, e.g. Facebook just wants to look like they care about well-being, but if they really cared they could do way better. I lean towards the first explanation, but it's very hard to distinguish between these hypotheses.\n\nWhile this post claimed that narrow value alignment should be easier than AGI alignment, I'm actually not so sure. With AGI alignment, you have the really powerful assumption that the AI system you are trying to align is _intelligent_: this could plausibly help a lot. For example, maybe the recommender systems that Facebook is using are just incapable of predicting what will and won't improve human well-being, in which case narrow alignment is doomed. This wouldn't be the case with an AGI (depending on your definition of AGI) -- it should be capable of doing at least as well as humans do. 
The challenge is in ensuring that the AI systems are actually <@motivated@>(@Clarifying \"AI Alignment\"@) to do so, not whether they are capable of doing so; with narrow alignment you need to solve both problems.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #96", "newsletter_category": "Recommender systems"}
{"id": "4837d273d545659e5835b6a36bada735", "title": "How Much Do Recommender Systems Drive Polarization?", "url": "https://jsteinhardt.stat.berkeley.edu/blog/recsys-deepdive", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Jacob Steinhardt"], "summaries": ["There is a common worry that social media (and recommender systems in particular) are responsible for increased polarization in recent years. This post delves into the evidence for this claim. By “polarization”, we mostly mean _affective polarization_, which measures how positive your feelings towards the opposing party are (though we also sometimes mean _issue polarization_, which measures the correlation between your opinions on e.g. gun control, abortion, and taxes). The main relevant facts are:\n\n1. Polarization in the US has increased steadily since 1980 (i.e. pre-internet), though arguably there was a slight increase from the trend around 2016.\n2. Since 2000, polarization has only increased in some Western countries, even though Internet use has increased relatively uniformly across countries.\n3. Polarization in the US has increased most in the 65+ age group (which has the least social media usage).\n\n(2) could be partly explained by social media causing polarization only in two-party systems, and (3) could be explained by saying that social media changed the incentives of more traditional media (such as TV) which then increased polarization in the 65+ age group. Nevertheless, overall it seems like social media is probably not the main driver of increased polarization. Social media may have accelerated the process (for instance by changing the incentives of traditional media), but the data is too noisy to tell one way or the other."], "venue": "Author's Website", "opinion": "I’m glad to see a simple summary of the evidence we currently have on the effects of social media on polarization. I feel like for the past year or two I’ve constantly heard people speculating about massive harms and even existential risks based on a couple of anecdotes or armchair reasoning, without bothering to check what has actually happened; whenever I talk to someone who seems to have actually studied the topic in depth, it seems they think that there are problems with recommender systems, but they are different from what people usually imagine. (The post also notes a reason to expect our intuitions to be misguided: we are unusual in that we get most of our news online; apparently every age group, starting from 18-24, gets more news from television than online.)\n\nNote that there have been a few pieces arguing for these harms; I haven't sent them out in the newsletter because I don't find them very convincing, but you can find links to some of them along with my thoughts [here](https://www.alignmentforum.org/posts/TmHRACaxXrLbXb5tS/rohinmshah-s-shortform?commentId=EAKEfPmP8mKbEbERv).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #159", "newsletter_category": "Recommender systems"}
{"id": "0071392917bd97eaa0e2dcee01dd34e8", "title": "Beyond Engagement: Aligning Algorithmic Recommendations With Prosocial Goals", "url": "https://www.partnershiponai.org/beyond-engagement-aligning-algorithmic-recommendations-with-prosocial-goals/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Jonathan Stray"], "summaries": ["To decide what item to show a user, a recommender system needs to have some metric by which to rank items. Since this metric must usually be automated, it often contains in large part some operationalization of “engagement”. Unfortunately, this metric may not be able to differentiate between clickbait or extremist content on the one hand, and actually valuable posts on the other. In a workshop on the topic, participants brainstormed four main approaches for improvement:\n\n1. **Build better controls:** Offer users more and better ways to control their feed.\n2. **Develop standardized survey-based metrics:** Surveys should be able to get a significantly higher quality signal to optimize than engagement.\n3. **Pay users for better data,** such as survey data.\n4. **Recommend feeds, not items:** If we rank items individually, it is quite likely that all the posts of the same type (e.g. controversial posts) will get high scores. By ranking entire feeds, we can also optimize for the diversity of items within the feed.\n5. **Incentivize the creation of different feeds,** so that users can choose which ones they prefer all things considered."], "venue": "PAI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #137", "newsletter_category": "Recommender systems"}
{"id": "32b101ba86292de9da57381c5909128e", "title": "Playing hard exploration games by watching YouTube", "url": "http://arxiv.org/abs/1805.11592", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Yusuf Aytar", "Tobias Pfaff", "David Budden", "Tom Le Paine", "Ziyu Wang", "Nando de Freitas"], "summaries": ["There are many YouTube videos demonstrating how to play levels of eg. Montezuma's Revenge. Can we use these demonstrations to solve the hard exploration tasks in Atari? One challenge is that the videos have slightly different visual properties (like color and resolution). They propose to learn a shared feature space by using an auxiliary loss where the network must predict the number of timesteps between two frames of a video, or to predict the delay between a video and audio clip from the same trajectory. Using this shared feature space, they can define a reward function that encourages the agent to take trajectories whose features match those of the demonstrations. In experiments they exceed human performance on Atari games with hard exploration problems."], "venue": "arXiv", "opinion": "It seems to me that this is how we'll have to solve exploration in practice if we don't want to have a huge sample complexity, though I know other researchers are optimistic about solving exploration using curiosity or diversity. It's pretty exciting that they could use a source of data that was already present in the real world.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #9", "newsletter_category": "Reinforcement learning"}
{"id": "e59418ad4cba61d5f5ee330cdd65a4c4", "title": "Dota 2 with Large Scale Deep Reinforcement Learning", "url": "http://arxiv.org/abs/1912.06680", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["OpenAI", "Christopher Berner", "Greg Brockman", "Brooke Chan", "Vicki Cheung", "Przemysław Dębiak", "Christy Dennison", "David Farhi", "Quirin Fischer", "Shariq Hashme", "Chris Hesse", "Rafal Józefowicz", "Scott Gray", "Catherine Olsson", "Jakub Pachocki", "Michael Petrov", "Henrique Pondé de Oliveira Pinto", "Jonathan Raiman", "Tim Salimans", "Jeremy Schlatter", "Jonas Schneider", "Szymon Sidor", "Ilya Sutskever", "Jie Tang", "Filip Wolski", "Susan Zhang"], "summaries": ["In April, <@OpenAI Five@>(@How to Train Your OpenAI Five@) defeated the world champion Dota 2 team, OG. This paper describes its training process. OpenAI et al. hand-engineered the reward function as well as some features, actions, and parts of the policy. The rest of the policy was trained using PPO with an LSTM architecture at a massive scale. They trained this in a distributed fashion as follows:\n\n- The *Controller* receives and distributes the updated parameters.\n- The *Rollout Worker CPUs* simulate the game, send observations to the *Forward Pass GPUs* and publish samples to the *Experience Buffer*.\n- The *Forward Pass GPUs* determine the actions to use and send them to the *Rollout Workers*.\n- The *Optimizer GPUs* sample experience from the *Experience Buffer*, calculate gradient updates, and then publish updated parameters to the *Controller*.\n\nThe model trained over 296 days. In that time, OpenAI needed to adapt it to changes in the code and game mechanics. This was done via model “surgery”, in which they would try to initialize a new model to maintain the same input-output mapping as the old one. When this was not possible, they gradually increased the proportion of games played with the new version over time."], "venue": "arXiv", "opinion": "I feel similarly to my opinion on <@AlphaStar@>(@AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning@) here. The result is definitely impressive and a major step up in complexity from shorter, discrete games like chess or go. However, I don’t see how the approach of just running PPO at a large scale brings us closer to AGI because we can’t run massively parallel simulations of real world tasks. Even for tasks that can be simulated, this seems prohibitively expensive for most use cases (I couldn’t find the exact costs, but I’d estimate this model cost tens of millions of dollars). I’d be quite excited to see an example of deep RL being used for a complex real world task without training in simulation.", "highlight": true, "read_more": "", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #82", "newsletter_category": "Reinforcement learning"}
{"id": "98f6f58383ff6916f51211bdb2b09bc8", "title": "Learning Latent Plans from Play", "url": "https://learning-from-play.github.io/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Corey Lynch", "Mohi Khansari", "Ted Xiao", "Vikash Kumar", "Jonathan Tompson", "Sergey Levine", "Pierre Sermanet"], "summaries": ["This paper collects unsupervised data of humans playing with robotic control systems, and uses that data to thread a needle between two problems in learning. One problem is that per-task demonstration data is costly, especially as number of tasks grows; the other is that randomly sampled control actions will rarely stumble across complex motor tasks in ways that allow robots to learn. The authors argue that human play data is a good compromise because humans at play tend to explore different ways of manipulating objects in ways that give robots nuggets of useful information like \"how do I move this block inside a drawer\", which can be composed into more complicated and intentional tasks.\n\nThe model works by learning to produce vectors that represent plans (or sequences of actions), and jointly learning to decode those vectors into action sequences. This architecture learns to generate plan vectors by using an autoencoder-like structure that uses KL divergence to align (1) a distribution of plan vectors predicted from the start and end state of a window of play data, and (2) a distribution of plan vectors predicted by looking back at all the actions taken in that window. Because we're jointly learning to unroll the (2) lookback-summarized vector such that it matches the actions actually taken, we'll ideally end up with a system that can take in a given plan vector and produce a sequence of actions to execute that plan. And, because we're learning to predict a vector that aligns with actions successfully taken to get to an end state from a starting one, the model at test time should be able to produce a play vector corresponding to feasible actions that will get it from its current state to a goal state we'd like it to reach. The authors found that their Play-trained model was able to outperform single-task models on a range of manipulation tasks, even though those single-task models were trained with explicit demonstrations of the task."], "venue": "arXiv", "opinion": "I really liked this paper: it was creative in combining conceptual components from variational methods and imitation learning, and it was pragmatic in trying to address the problem of how to get viable human-demonstration data in a way that avoids having to get distinct datasets for a huge set of different discrete tasks. ", "highlight": true, "read_more": "Arxiv paper", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #65", "newsletter_category": "Reinforcement learning"}
{"id": "2770ec5b3e56102fc6cd73a09ed13f3f", "title": "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", "url": "https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["The AlphaStar team"], "summaries": ["The AlphaStar system from DeepMind has beaten top human pros at StarCraft. You can read about the particular details of the matches in many sources, such as the blog post itself, this [Vox article](https://www.vox.com/future-perfect/2019/1/24/18196177/ai-artificial-intelligence-google-deepmind-starcraft-game), or [Import AI](https://jack-clark.net/2019/01/28/import-ai-131-ibm-optimizes-ai-with-ai-via-neunets-amazon-reveals-its-scout-delivery-robot-google-releases-300k-natural-questions-dataset/). The quick summary is that while there are some reasons you might not think it is conclusively superhuman yet (notably, it only won when it didn't have to manipulate the camera, and even then it may have had short bursts of very high actions per minute that humans can't do), it is clearly extremely good at StarCraft, both at the technically precise micro level and at the strategic macro level.\n\nI want to focus instead on the technical details of how AlphaStar works. The key ideas seem to be a) using imitation learning to get policies that do something reasonable to start with and b) training a population of agents in order to explore the full space of strategies and how to play against all of them, without any catastrophic forgetting. Specifically, they take a dataset of human games and train various agents to mimic humans. This allows them to avoid the particularly hard exploration problems that happen when you start with a random agent. Once they have these agents to start with, they begin to do population-based training, where they play agents against each other and update their weights using an RL algorithm. The population of agents evolves over time, with well-performing agents splitting into two new agents that diversify a bit more. Some agents also have auxiliary rewards that encourage them to explore different parts of the strategy space -- for example, an agent might get reward for building a specific type of unit. Once training is done, we have a final population of agents. Using their empirical win probabilities, we can construct a Nash equilibrium of these agents, which forms the final AlphaStar agent. _(Note: I'm not sure if at the beginning of the game, one of the agents is chosen according to the Nash probabilities, or if at each timestep an action is chosen according to the Nash probabilities. I would expect the former, since the latter would result in one agent making a long-term plan that is then ruined by a different agent taking some other action, but the blog post seems to indicate the latter -- with the former, it's not clear why the compute ability of a GPU restricts the number of agents in the Nash equilibrium, which the blog posts mentions.)_\n\nThere are also a bunch of interesting technical details on how they get this to actually work, which you can get some information about in this [Reddit AMA](https://www.reddit.com/r/MachineLearning/comments/ajgzoc/we_are_oriol_vinyals_and_david_silver_from/). 
For example, \"we included a policy distillation cost to ensure that the agent continues to try human-like behaviours with some probability throughout training, and this makes it much easier to discover unlikely strategies than when starting from self-play\", and \"there are elements of our research (for example temporally abstract actions that choose how many ticks to delay, or the adaptive selection of incentives for agents) that might be considered “hierarchical”\". But it's probably best to wait for the journal publication (which is currently in preparation) for the full details.\n\nI'm particularly interested by this [Balduzzi et al paper](https://arxiv.org/abs/1901.08106) that gives some more theoretical justification for the population-based training. In particular, the paper introduces the concept of \"gamescapes\", which can be thought of as a geometric visualization of which strategies beat which other strategies. In some games, like \"say a number between 1 and 10, you get reward equal to your number - opponent's number\", the gamescape is a 1-D line -- there is a scalar value of \"how good a strategy is\", and a better strategy will beat a weaker strategy. On the other hand, rock-paper-scissors is a cyclic game, and the gamescape looks like a triangle -- there's no strategy that strictly dominates all other strategies. Even the Nash strategy of randomizing between all three actions is not the \"best\", in that it fails to exploit suboptimal strategies, eg. the strategy of always playing rock. With games that are even somewhat cyclic (such as StarCraft), rather than trying to find the Nash equilibrium, we should try to explore and map out the entire strategy space. The paper also has some theoretical results supporting this that I haven't read through in detail."], "venue": "DeepMind Blog", "opinion": "I don't care very much about whether AlphaStar is superhuman or not -- it clearly is very good at StarCraft at both the micro and macro levels. Whether it hits the rather arbitrary level of \"top human performance\" is not as interesting as the fact that it is anywhere in the ballpark of \"top human performance\".\n\nIt's interesting to compare this to [OpenAI Five](https://blog.openai.com/openai-five/) ([AN #13](https://mailchi.mp/8234356e4b7f/alignment-newsletter-13)). While OpenAI solved the exploration problem using a combination of reward shaping and domain randomization, DeepMind solved it by using imitation learning on human games. While OpenAI relied primarily on self-play, DeepMind used population-based training in order to deal with catastrophic forgetting and in order to be robust to many different strategies. It's possible that this is because of the games they were playing -- it's plausible to me that StarCraft has more rock-paper-scissors-like cyclic mechanics than Dota, and so it's more important to be robust to many strategies in StarCraft. But I don't know either game very well, so this is pure speculation.\n\nExploring the full strategy space rather than finding the Nash equilibrium seems like the right thing to do, though I haven't kept up with the multiagent RL literature so take that with a grain of salt. That said, it doesn't seem like the full solution -- you also want some way of identifying what strategy your opponent is playing, so that you can choose the optimal strategy to play against them.\n\nI often think about how you can build AI systems that _cooperate_ with humans. 
This can be significantly harder: in competitive games, if your opponent is more suboptimal than you were expecting, you just crush them even harder. However, in a cooperative game, if you make a bad assumption about what your partner will do, you can get significantly worse performance. (If you've played Hanabi, you've probably experienced this.) Self-play does not seem like it would handle this situation, but this kind of population-based training could potentially handle it, if you also had a method to identify how your partner is playing. (Without such a method, you would play some generic strategy that would hopefully be quite robust to playstyles, but would still not be nearly as good as being able to predict what your partner does.)", "highlight": true, "read_more": "[Open-ended Learning in Symmetric Zero-sum Games](https://arxiv.org/abs/1901.08106), [AMA with AlphaStar creators and pro players](https://www.reddit.com/r/MachineLearning/comments/ajgzoc/we_are_oriol_vinyals_and_david_silver_from), and [Vox: StarCraft is a deep, complicated war strategy game. Google’s AlphaStar AI crushed it.](https://www.vox.com/future-perfect/2019/1/24/18196177/ai-artificial-intelligence-google-deepmind-starcraft-game)", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #43", "newsletter_category": "Reinforcement learning"}
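The blog post doesn't spell out exactly how the final Nash mixture over the agent population is computed, but given an empirical win-rate matrix one standard approach is fictitious play on the zero-sum meta-game. A minimal sketch (my illustration, not AlphaStar's code), using a rock-paper-scissors-style cyclic game like the one in the gamescape discussion as the payoff matrix:

```python
import numpy as np

def nash_fictitious_play(payoff, iters=20_000):
    """Approximate the Nash mixture of a symmetric zero-sum meta-game.

    payoff[i, j] is agent i's expected payoff against agent j (e.g. win
    probability minus 0.5). Fictitious play: repeatedly best-respond to the
    empirical mixture of past strategies; the time-average converges to a
    Nash equilibrium in zero-sum games."""
    n = payoff.shape[0]
    counts = np.ones(n)
    for _ in range(iters):
        mixture = counts / counts.sum()
        counts[np.argmax(payoff @ mixture)] += 1
    return counts / counts.sum()

# Toy cyclic meta-game (rock-paper-scissors), as in the gamescape example.
payoff = np.array([
    [ 0.0, -1.0,  1.0],   # rock
    [ 1.0,  0.0, -1.0],   # paper
    [-1.0,  1.0,  0.0],   # scissors
])
print(nash_fictitious_play(payoff))   # approximately [1/3, 1/3, 1/3]
```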
{"id": "de3f3d030d4b70be590b690648ea0329", "title": "Spinning Up in Deep RL", "url": "https://blog.openai.com/spinning-up-in-deep-rl/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Joshua Achiam"], "summaries": ["OpenAI has released an educational resource aimed to help software engineers become skilled at deep reinforcement learning. It includes simple implementations of many deep RL algorithms (as opposed to the relatively complex, highly optimized implementations in Baselines), educational exercises, documentation, and tutorials. OpenAI will host a workshop on the topic at their headquarters on Feb 2nd, and are also planning to hold a workshop at CHAI some time in early 2019."], "venue": "OpenAI Blog", "opinion": "I know that a lot of effort has gone into this project, and I expect that as a result this is probably the best educational resource on deep RL out there. The main other resource I know of is the [deep RL bootcamp](https://sites.google.com/view/deep-rl-bootcamp/), which probably supplements this resource nicely, especially with the lectures (though it is a year out of date).", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #32", "newsletter_category": "Reinforcement learning"}
{"id": "637ef78a15b2a2946f33ca723a86140b", "title": "Preserving Outputs Precisely while Adaptively Rescaling Targets", "url": "https://deepmind.com/blog/preserving-outputs-precisely-while-adaptively-rescaling-targets/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Matteo Hessel and Hado van Hasselt"], "summaries": ["When an agent is trained on multiple tasks across which the sizes of rewards vary greatly, it usually focuses on tasks which provide the largest or most frequent rewards at the expense of performance on others. Previous work dealt with this by clipping rewards outside a certain range, but this changes the optimal policy (eg. in Pacman, eating pellets is just as rewarding as eating ghosts). This paper uses PopArt (introduced in [this 2016 paper](https://arxiv.org/abs/1602.07714)) to normalise rewards from each task before using them to update the policy in an actor-critic RL algorithm. The authors use PopArt to train a single IMPALA agent which can play all 57 Atari games, achieving a median performance slightly higher than human performance.\n\nTo delve into more detail about PopArt, let's consider training a policy with an actor-critic algorithm. In this case, we need a critic that produces estimates of values V, and an actor that produces probabilities of actions. Both of these networks are trained by taking gradients of their outputs, and weighting them based on the observed rewards. Now, a key empirical fact about deep learning is that it works better if all the things are normalized, especially the gradients. (If nothing else, it makes it easier to choose the learning rate.) For the actor, this is easy -- probabilities are already normalized, and the weight of gradient is proportional to the reward, so we can just rescale the weight of the gradient based on the mean and standard deviation of the rewards we have observed so far. This is a bit harder for the critic, since it has to predict values, so we have to normalize both the outputs and the gradient weights. We can normalize the gradient weights in the same way as before. However, normalizing the outputs is tricky, because as time goes on the means and standard deviations change. To do this, at every timestep we modify the weights of the critic that is equivalent to unnormalizing based on the old statistics and then normalizing based on the new statistics. This gives the PopArt method.\n\nHere's a simple example where I butcher types, ignore the difference between states and trajectories, and throw away the standard deviation. Suppose the first reward we see is 10, so we say that our mean is 10 and train our net to output a normalized reward of 0 for this state and action. Then, we see a reward of 100, so we update our mean to 55. On our previous (state, action) pair, we still output a normalized reward of 0, which now corresponds to a real reward of 55, even though it should correspond to 10! We then do the unnormalize-and-renormalize trick. After unnormalization, the critic would output 10, and after renormalization, the network would output -45, which when combined with the mean of 55 would give us the desired 10 reward."], "venue": "Deepmind Blog", "opinion": "This is an impressive result, since it's the first time a single agent has performed so well on a range of Atari games. 
It doesn't seem to have required any novel techniques except for a straightforward extension of PopArt to the multi-task setting, but this is still interesting since the results from the previous PopArt paper were very mixed (performance increased and decreased dramatically on different games, with the average remaining roughly stable).\n\nOne confusing aspect was that PopArt still benefitted slightly from being trained with reward clipping (110% vs. 101% in the unclipped case), even though the point of PopArt was to normalize rewards so that clipping wasn't necessary. I'm assuming the clipping happens after PopArt normalization, since if it happens before then you lose information as in the Pacman example. In this case, maybe it's that the reward distribution is fat-tailed, and so even after normalization there could be some extreme rewards that after normalization are still large enough that they would cause updates that are too large, and clipping alleviates this problem.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #24", "newsletter_category": "Reinforcement learning"}
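A minimal NumPy sketch of the "unnormalize-and-renormalize" trick described in the summary above, applied to the critic's last linear layer: whenever the running statistics of the targets change, the layer's weights and bias are rescaled so the unnormalized value prediction is preserved. The running-statistics update rate and the toy usage at the end are my own choices, not the paper's.

```python
import numpy as np

class PopArtHead:
    """Last linear layer of the critic plus running statistics of the targets.

    The unnormalized value prediction is sigma * (w . x + b) + mu. Whenever
    (mu, sigma) are updated, w and b are rescaled so that this prediction is
    unchanged -- the "unnormalize with old stats, renormalize with new stats"
    trick from the summary above."""

    def __init__(self, in_dim, beta=0.1):
        self.w = np.zeros(in_dim)
        self.b = 0.0
        self.mu, self.nu, self.beta = 0.0, 1.0, beta  # nu tracks E[target^2]

    @property
    def sigma(self):
        return np.sqrt(max(self.nu - self.mu ** 2, 1e-8))

    def value(self, features):
        return self.sigma * (self.w @ features + self.b) + self.mu

    def update_stats(self, target):
        old_mu, old_sigma = self.mu, self.sigma
        self.mu = (1 - self.beta) * self.mu + self.beta * target
        self.nu = (1 - self.beta) * self.nu + self.beta * target ** 2
        # Preserve outputs precisely while adaptively rescaling targets:
        self.w *= old_sigma / self.sigma
        self.b = (old_sigma * self.b + old_mu - self.mu) / self.sigma

head = PopArtHead(in_dim=4)
features = np.ones(4)
before = head.value(features)
head.update_stats(target=100.0)   # a much larger return arrives
after = head.value(features)
print(np.isclose(before, after))  # True: the unnormalized prediction is preserved
```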
{"id": "0e0bce02d0fadc1d89322b76267fce6f", "title": "Visual Reinforcement Learning with Imagined Goals", "url": "https://bair.berkeley.edu/blog/2018/09/06/rig/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Vitchyr Pong and Ashvin Nair"], "summaries": ["This is a blog post explaining a paper by the same name that I covered in [AN #16](https://mailchi.mp/f2950ed2ac4b/alignment-newsletter-16). It's particularly clear and well-explained, and I continue to think the idea is cool and interesting. I've recopied my summary and opinion here, but you should read the blog post, it explains it very well.\n\n[Hindsight Experience Replay](https://blog.openai.com/ingredients-for-robotics-research/) (HER) introduced the idea of accelerating learning with sparse rewards, by taking trajectories where you fail to achieve the goal (and so get no reward, and thus no learning signal) and replacing the actual goal with an \"imagined\" goal chosen in hindsight such that you actually achieved that goal, which means you get reward and can learn. This requires that you have a space of goals such that for any trajectory, you can come up with a goal such that the trajectory achieves that goal. In practice, this means that you are limited to tasks where the goals are of the form \"reach this goal state\". However, if your goal state is an image, it is very hard to learn how to act in order to reach any possible image goal state (even if you restrict to realistic ones), since the space is so large and unstructured. The authors propose to first learn a structured latent representation of the space of images using a variational autoencoder (VAE), and then use that structured latent space as the space of goals which can be achieved. They also use Q-learning instead of DDPG (which is what HER used), so that they can imagine any goal with a minibatch (s, a, s') and learn from it (whereas HER/DDPG is limited to states on the trajectory)."], "venue": "BAIR Blog", "opinion": "This is a cool example of a relatively simple yet powerful idea -- instead of having a goal space over all states, learn a good latent representation and use that as your goal space. This enables unsupervised learning in order to figure out how to use a robot to generally affect the world, probably similarly to how babies explore and learn.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #23", "newsletter_category": "Reinforcement learning"}
{"id": "426543c85fe324b29a498ccf35ac2b52", "title": "Recurrent World Models Facilitate Policy Evolution", "url": "http://arxiv.org/abs/1809.01999", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["David Ha", "Jürgen Schmidhuber"], "summaries": ["I read the [interactive version](https://worldmodels.github.io/) of the paper. The basic idea is to do model-based reinforcement learning, where the model is composed of a variational auto-encoder that turns a high-dimensional state of pixels into a low-dimensional representation, and a large RNN that predicts how the (low-dimensional) state will evolve in the future. The outputs of this model are fed into a very simple linear controller that chooses actions. Since the controller is so simple, they can train it using a black box optimization method (an evolutionary strategy) that doesn't require any gradient information. They evaluate on a racing task and on Doom, and set new state-of-the-art results. There are also other interesting setups -- for example, once you have a world model, you can train the controller completely within the world model without interacting with the outside world at all (using the number of timesteps before the episode ends as your reward function, since the world model doesn't predict standard rewards, but does predict whether the episode ends). There are a lot of cool visualizations that let you play with the models trained with their method."], "venue": "NIPS 2018", "opinion": "I agree with [Shimon Whiteson's take](https://twitter.com/shimon8282/status/979344417961250817), which is that this method gets improvements by creating a separation of concerns between modelling the world and learning a controller for the model, and evaluating on environments where this separation mostly holds. A major challenge in RL is learning the features that are important for the task under consideration, and this method instead learns features that allow you to reconstruct the state, which could be very different, but happen to not be different in their environments. That said, I really like the presentation of the paper and the fact that they did ablation studies.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #23", "newsletter_category": "Reinforcement learning"}
{"id": "d7ba8f149998688ac5c933e3c23ac10c", "title": "Large-Scale Study of Curiosity-Driven Learning", "url": "https://pathak22.github.io/large-scale-curiosity/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Yuri Burda", "Harri Edwards", "Deepak Pathak", "Amos Storkey", "Trevor Darrell and Alexei A. Efros"], "summaries": ["One major challenge in RL is how to explore the environment sufficiently in order to find good rewards to learn from. One proposed method is curiosity, in which the agent generates an internal reward for taking any transition where the outcome was surprising, where surprisal is measured as the negative log probability assigned to the outcome by the agent. In this paper, a neural net that takes as input observation features φ(x) and action a, and predicts the features of the next state observation. The mean squared error with the actual features of the next state is then a measure of the surprisal, and is used as the curiosity reward. This is equivalent to treating the output of the neural net as the mean of a Gaussian distribution with fixed variance, and defining the reward to be the negative log probability assigned to the actual next state.\n\nThis still leaves the feature function φ undetermined. They consider using pixels directly, using a CNN with randomly chosen fixed weights, learned CNN features using a variational autoencoder (VAE) (which optimize for features that are useful for reconstructing the observation), and learned CNN features using inverse dynamics (IDF) (which optimize for features that are useful for reconstructing the action, biasing the features towards aspect of the environment that the agent can control). As you might expect, pixels don't work very well. However, random features do work quite well, often beating the VAE and IDF. This can happen because the random features stay fixed, leading to more stable learning, whereas with the VAE and IDF methods the features are changing over time, and the environment distribution is changing over time (as the agent explores more of it), leading to a harder learning problem.\n\nTypically, curiosity is combined with an external reward. In this paper, the authors evaluate how well an agent can do with _only_ curiosity and no external reward. Intuitively, in game environments designed by humans, the designer sets up a good curriculum for humans to learn, which would align well with a curiosity reward. In fact, this is what happens, with a curiosity based reward leading to great performance (as measured by the external reward) on Atari games, Super Mario, Unity mazes, and Roboschool Pong, when using random features or IDF features. (The VAE features sometimes work well but were very unstable.) They evaluate transfer between levels in Super Mario, and find that the learned features transfer in more cases than random ones. Looking at the graphs, this seems like a very small effect to me -- I'm not sure if I'd agree with the claim, but I'd want to look at the behavior in videos and what the reward function rewards before making that claim strongly. They also investigate Pong with both players being driven by curiosity, and the players become so good at rallying that they crash the emulator.\n\nFinally, they note one downside -- in any stochastic environment, or any environment where there will be lots of uncertainty about what will happen (eg. 
in multiagent settings), at convergence the reward for any action will be equal to the entropy of the next state distribution. While they don't demonstrate this flaw in particular, they show a related one -- if you add a TV to a Unity maze, and an action to change the channel, then the agent learns to stand in front of the TV and change the channel forever, rather than solving the maze."], "venue": "2nd Workshop on Meta-Learning at NeurIPS 2018", "opinion": "I really like these empirical papers that compare different methods and show their advantages and disadvantages. I was pretty surprised to see random features do as well as they did, especially to see that they transferred as well as learned features in one of the two cases they studied. There was of course a neural net that could learn how to use the arbitrary representation induced by the features, but then why couldn't it do the same for pixels? Perhaps the CNN was useful primarily for reducing the dimensionality of the pixels by combining nearby pixels together, and it didn't really matter how that was done since it still retains all the important information, but in a smaller vector?\n\nI'm glad that the paper acknowledges that the good performance of curiosity is limited to environments that human designers have created. In a real world task, such as a house-cleaning robot, there are many other sources of uncertainty in the world that are unrelated to the task, and you need some form of specification to focus on it -- curiosity alone will not be enough.", "highlight": true, "read_more": "Arxiv paper", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #20", "newsletter_category": "Reinforcement learning"}
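A short PyTorch sketch of the curiosity reward described in the summary above, using the random-fixed-features variant: a frozen feature network φ, a learned forward model that predicts φ(next state) from φ(state) and the action, and an intrinsic reward equal to the prediction error. The feature/observation dimensions and network sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS, ACT, FEAT = 64, 4, 32   # hypothetical sizes

# Random, *fixed* feature network phi -- the variant that worked surprisingly well.
phi = nn.Linear(OBS, FEAT)
for p in phi.parameters():
    p.requires_grad_(False)

# Learned forward model: predicts phi(next_obs) from phi(obs) and the action.
forward_model = nn.Sequential(
    nn.Linear(FEAT + ACT, 128), nn.ReLU(), nn.Linear(128, FEAT)
)
opt = torch.optim.Adam(forward_model.parameters(), lr=1e-3)

def curiosity_reward(obs, action_onehot, next_obs):
    """Intrinsic reward = forward-model prediction error in feature space."""
    with torch.no_grad():
        pred = forward_model(torch.cat([phi(obs), action_onehot], -1))
        return F.mse_loss(pred, phi(next_obs), reduction="none").mean(-1)

def train_forward_model(obs, action_onehot, next_obs):
    pred = forward_model(torch.cat([phi(obs), action_onehot], -1))
    loss = F.mse_loss(pred, phi(next_obs))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

obs, next_obs = torch.randn(8, OBS), torch.randn(8, OBS)
actions = F.one_hot(torch.randint(0, ACT, (8,)), ACT).float()
print(curiosity_reward(obs, actions, next_obs))   # one intrinsic reward per transition
train_forward_model(obs, actions, next_obs)
```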
{"id": "3bcb7b82f7b80ca817866060fc263ec8", "title": "Lessons Learned Reproducing a Deep Reinforcement Learning Paper", "url": "http://amid.fish/reproducing-deep-rl", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Matthew Rahtz"], "summaries": ["It's exactly what the title says. There were a lot of points that I can't easily summarize, but some highlights:\n- Deep RL is very fiddly and tricky, and most of his time was spent debugging.\n- Since each run takes a long time, it's very important to put a lot of effort into figuring out what hypotheses to test.\n- The project took ~$850 of compute (~9,000 CPU hours and ~450 GPU hours)"], "venue": "Amid Fish", "opinion": "If you do deep RL research regularly, you probably won't get too much out of it (though you might still get some handy tips on things you can do with Tensorflow), but I think everyone else should read it to get a more concrete sense of what deep RL research actually looks like and to be able to communicate more effectively with deep RL researchers.", "highlight": true, "read_more": "Deep Reinforcement Learning Doesn’t Work Yet", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #2", "newsletter_category": "Reinforcement learning"}
{"id": "2d26a91e9b93162674450e557c2950f7", "title": "OpenAI Five Benchmark: Results", "url": "https://blog.openai.com/openai-five-benchmark-results/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["OpenAI's Dota Team"], "summaries": ["The OpenAI Five benchmark happened last Sunday, where OpenAI Five won two matches against the human team, and lost the last one when their draft was adversarially selected. They are now planning to play at The International in a couple of weeks (dates to be finalized). That will be a harder challenge, since they will be playing against teams that play and train professionally, and so will be better at communication and coordination than the human team here.\n\nBlitz (one of the human players) [said](https://www.reddit.com/r/MachineLearning/comments/9533g8/n_openai_five_benchmark_results/e3qrnbt/): \"The only noticeable difference in the mechanical skill aspect was the hex from the Lion, but even that was sorta irrelevant to the overall game flow. Got outdrafted and outmaneuvered pretty heavily, and from a strategy perspective it was just better then us. Even with the limitations in place it still 'felt' like a dota game, against a very good team. It made all the right plays I'd expect most top tier teams to make.\"\n\nOn the technical side, OpenAI implemented a brute-force draft system. With a pool of 18 heroes, you get some combinatorial explosion, but there are still only ~11 million possible matchups. You can then do a simple tree search over which hero to draft, where at the leaves (when you have a full draft) you choose which leaf you want based on the win probability (which OpenAI Five already outputs). Seeing this in action, it seems to me like it's a vanilla minimax algorithm, probably with alpha-beta pruning so that they don't have to evaluate all ~159 billion nodes in the tree. (Or they could have done the full search once, hardcoded the action it comes up with for the first decision, and run the full search for every subsequent action, which would have under 10 billion nodes in the tree.)\n\nBesides the win probabilities, there are other ways to get insight into what the model is \"thinking\" -- for example, by asking the model to predict where the hero will be in 6 seconds, or by predicting how many last hits / denies / kills / deaths it will have.\n\nThe model that played the benchmark has been training since June 9th. Of course, in that time they've changed many things about the system (if for no other reason than to remove many of the restrictions in the original post). This is not a thing that you can easily do -- typically you would change your model architecture, which means your old parameters don't map over to the new architecture. I've been pretty curious about how they handle this, but unfortunately the blog post doesn't go into much detail, beyond saying that they can in fact handle these kinds of \"surgery\" issues.\n\nThey estimate that this particular model has used 190 petaflop/s-days of compute, putting it [just below AlphaZero](https://blog.openai.com/ai-and-compute/)."], "venue": "OpenAI Blog", "opinion": "I think this finally fell within my expectations, after two instances where I underestimated OpenAI Five. 
I expected that they would let the human team choose heroes in some limited way (~80%), that OpenAI Five would not be able to draft using just gradients via PPO (~60%), and (after having seen the first two games) that the human team would win after an adversarial draft (~70%). Of course, a draft did happen, but it was done by a tree search algorithm, not an algorithm learned using PPO. \n\nThe games themselves were pretty interesting (though I have not played Dota so take this with a grain of salt). It seemed to me like OpenAI Five had learned a particularly good strategy that plays to the advantages of computers, but hadn't learned some of the strategies and ideas that human players use to think about Dota. Since it uses the same amount of computation for each decision, it makes good decisions on all timescales, including situations where something surprising has occurred and humans would need some time to react and coordinate. For example, as soon as a human hero entered within range of the bots (just to look and retreat), all of the bots would immediately unleash a barrage of attacks, killing the hero -- a move that humans could not execute, because of slower reaction times and worse communication and teamwork. Similarly, one common tactic in human gameplay is to teleport into a group of heroes and unleash an area-of-effect ability, but when they tried this against OpenAI Five, one of the bots hexed the hero as soon as he teleported in, rendering him unable to cast the spell. (That felt like the decisive moment in the first game.) On the other hand, there were some clear issues with the bots. At one point, two OpenAI bots were chasing Blitz, and Blitz used an ability that made him invisible while standing still. Any human player would have spammed area attacks, but the bots simply became confused and eventually left. Similarly, I believe (if I understood the commentary correctly) that a bot once used an ability multiple times, wasting mana, even though all uses after the first had no additional effect.\n\nOther articles would have you believe that the games weren't even close, and if you look at the kill counts, that would seem accurate. I don't think that's actually right -- from what I understand, kills aren't as important as experience and gold, and you could see this in the human gameplay. OpenAI Five would often group most of its heroes together to push forward, which meant they got less experience and gold. The human team continued to keep their heroes spread out over the map to collect resources -- and even though OpenAI Five got way more kills, the overall net worth of the two teams' heroes remained about equal for most of the early game. The big difference seemed to be that when the inevitable big confrontation between the two teams happened, OpenAI Five always came out on top. I'm not sure how; my Dota knowledge isn't good enough for that. Based on Blitz's comment, my guess is that OpenAI Five is particularly good at fights between heroes, and the draft reflects that. But I'd still guess that if you had pro human players who ceded control to OpenAI Five whenever a fight was about to happen, they would beat OpenAI Five (~70%). I used to put 80% on that prediction, but Blitz's comment updated me away from that.\n\nOne interesting thing was that the win probability seemed to be very strongly influenced by the draft, which in hindsight seems obvious. 
Dota is a really complicated game that is constantly tweaked to keep it balanced for humans, and even then the draft is very important. When you now introduce a new player (OpenAI Five) with very different capabilities (such as very good decision making under time pressure) and change the game conditions (such as a different pool of heroes), you should expect the game to become very imbalanced, with some teams far outshining others. And in fact we did see that Lion (the hero with the hexing ability) was remarkably useful (against humans, at least).", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #19", "newsletter_category": "Reinforcement learning"}
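The draft procedure described above (a brute-force tree search over hero picks, scored at the leaves by the learned win probability) is simple enough to sketch. The following is a minimal illustration, not OpenAI's implementation: the `win_probability` stand-in, the strictly alternating pick order, and the integer hero IDs are all assumptions made so the example can run on its own.

```python
import random

HERO_POOL = tuple(range(18))   # stand-in IDs for the 18-hero pool

def win_probability(team_a, team_b):
    """Stand-in for OpenAI Five's learned win-probability output. In the
    real system this would be a forward pass of the trained model on the
    drafted lineup; here it is a deterministic pseudo-random number so
    the sketch runs on its own."""
    key = (tuple(sorted(team_a)), tuple(sorted(team_b)))
    return random.Random(hash(key)).random()

def draft_value(team_a, team_b, remaining, a_to_pick, team_size=5, alpha=0.0, beta=1.0):
    """Minimax with alpha-beta pruning over alternating hero picks.
    Returns team A's win probability under best play by both sides."""
    if len(team_a) == team_size and len(team_b) == team_size:
        return win_probability(team_a, team_b)
    best = alpha if a_to_pick else beta
    for hero in remaining:
        rest = tuple(h for h in remaining if h != hero)
        if a_to_pick:
            best = max(best, draft_value(team_a + (hero,), team_b, rest,
                                         False, team_size, best, beta))
            if best >= beta:    # prune: team B already has a better line elsewhere
                break
        else:
            best = min(best, draft_value(team_a, team_b + (hero,), rest,
                                         True, team_size, alpha, best))
            if best <= alpha:
                break
    return best

def best_next_pick(team_a, team_b, remaining, team_size=5):
    """Team A's next pick: the hero that maximizes the minimax win probability."""
    return max(remaining, key=lambda h: draft_value(
        team_a + (h,), team_b, tuple(x for x in remaining if x != h),
        a_to_pick=False, team_size=team_size))

# Toy demo with a 6-hero pool and 2-a-side teams so it finishes instantly.
print(best_next_pick((), (), HERO_POOL[:6], team_size=2))
```

With the full 18-hero, 5-a-side draft this is exactly the ~159 billion (or, after fixing the first pick, ~10 billion) node search discussed above, so a real implementation would rely heavily on pruning and would memoize on the set of drafted heroes.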
{"id": "ad14a2cd98a07fc41b65fce1666e99d0", "title": "Variational Option Discovery Algorithms", "url": "http://arxiv.org/abs/1807.10299", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Joshua Achiam", "Harrison Edwards", "Dario Amodei", "Pieter Abbeel"], "summaries": ["We can hope to do hierarchical reinforcement learning by first discovering several useful simple policies (or \"options\") by just acting in the environment without any reward function, and then using these options as primitive actions in a higher level policy that learns to do some task (using a reward function). How could we learn the options without a reward function though? Intuitively, we would like to learn behaviors that are different from each other. One way to frame this would be to think of this as an encoder-decoder problem. Suppose we want to learn K options. Then, we can give the encoder a number in the range [1, K], have it \"encode\" the number into a trajectory τ (that is, our encoder is a policy), and then have a decoder take τ and recover the original number. We train the encoder/policy and decoder jointly, optimizing them to successfully recover the original number (called a _context_). Intuitively, the encoder/policy wants to have very different behaviors for each option, so that it easy for decoder to figure out the context from the trajectory τ. However, a simple solution would be for the encoder/policy to just take a particular series of actions for each context and then stop, and the decoder learns an exact mapping from final states to contexts. To avoid this, we can decrease the capacity of the decoder (i.e. don't give it too many layers), and we also optimize for the _entropy_ of the encoder/policy, which encourages the encoder/policy to be more stochastic, and so it is more likely to learn overall behaviors that can still have some stochasticity, while still allowing the decoder to decode them. It turns out that this optimization problem has a one-to-one correspondence with variational autoencoders, motivating the name \"variational option discovery\". To stabilize training, they start with a small K, and increase K whenever the decoder becomes powerful enough. They evaluate in Gym environments, a simulated robotic hand, and a new \"Toddler\" environment. They find that the scheme works well (in terms of maximizing the objective) in all environments, but that the learned behaviors no longer look natural in the Toddler environment (which is the most complex). They also show that the learned policies can be used for hierarchical RL in the AntMaze problem.\n\nThis is very similar to the recent [Diversity Is All You Need](https://arxiv.org/abs/1802.06070). DIAYN aims to decode the context from _every state_ along a trajectory, which incentivizes it to find behaviors of the form \"go to a goal state\", whereas VALOR (this work) decodes the context from the entire trajectory (without actions, which would make the decoder's job too easy), which allows it to learn behaviors with motion, such as \"go around in a circle\"."], "venue": "IEEE Transactions on Games", "opinion": "It's really refreshing to read a paper with a negative result about their own method (specifically, that the learned behaviors on Toddler do not look natural). It makes me trust the rest of their paper so much more. (A very gameable instinct, I know.) 
While they were able to find a fairly diverse set of options, and could interpolate between them, their experiments found that using this for hierarchical RL was about as good as training hierarchical RL from scratch. I guess I'm just saying things they've already said -- I think they've done such a great job writing this paper that they've already told me what my opinion about the topic should be, so there's not much left for me to say.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #18", "newsletter_category": "Reinforcement learning"}
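To make the encoder/decoder framing above concrete, here is a minimal sketch in the spirit of variational option discovery: a context-conditioned policy generates a trajectory, a low-capacity decoder tries to recover the context from the visited states, and the policy is rewarded (via a plain REINFORCE update plus an entropy bonus) for making that recovery easy. The toy environment, network sizes, and the use of REINFORCE rather than the paper's actual policy optimizer are all assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 8          # number of contexts / options (grown over training in the paper)
OBS_DIM = 4    # toy observation size
ACT_DIM = 2    # toy discrete action count
T = 16         # trajectory length

class Policy(nn.Module):
    """'Encoder': maps (observation, context) to a distribution over actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM + K, 64), nn.Tanh(),
                                 nn.Linear(64, ACT_DIM))

    def forward(self, obs, context_onehot):
        return torch.distributions.Categorical(
            logits=self.net(torch.cat([obs, context_onehot], dim=-1)))

class Decoder(nn.Module):
    """Low-capacity decoder: recovers the context from the state trajectory
    (states only -- giving it the actions would make its job too easy)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(OBS_DIM, 32, batch_first=True)
        self.head = nn.Linear(32, K)

    def forward(self, states):            # states: (batch, T, OBS_DIM)
        _, h = self.rnn(states)
        return self.head(h[-1])           # logits over the K contexts

policy, decoder = Policy(), Decoder()
opt = torch.optim.Adam(list(policy.parameters()) + list(decoder.parameters()), lr=3e-4)
entropy_coef = 0.01

def rollout(context):
    """Toy 'environment': the next observation is random noise; a real
    implementation would step an actual environment here."""
    obs, c = torch.randn(OBS_DIM), F.one_hot(context, K).float()
    states, logps, entropies = [], [], []
    for _ in range(T):
        dist = policy(obs, c)
        action = dist.sample()
        logps.append(dist.log_prob(action))
        entropies.append(dist.entropy())
        obs = torch.randn(OBS_DIM)        # stand-in dynamics
        states.append(obs)
    return torch.stack(states), torch.stack(logps), torch.stack(entropies)

for _ in range(10):
    context = torch.randint(K, ())
    states, logps, entropies = rollout(context)
    decoder_logp = F.log_softmax(decoder(states.unsqueeze(0)), dim=-1)[0, context]
    # The decoder is trained to recover the context; the policy is rewarded
    # (REINFORCE) for making that recovery easy, plus an entropy bonus that
    # keeps it from collapsing onto a single deterministic action sequence.
    policy_loss = -decoder_logp.detach() * logps.sum() - entropy_coef * entropies.sum()
    decoder_loss = -decoder_logp
    opt.zero_grad()
    (policy_loss + decoder_loss).backward()
    opt.step()
```

In the paper the number of contexts K is also grown over training once the decoder becomes accurate enough; here K is fixed for brevity.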
{"id": "bc4f46dbf2a6482ae694dda52a9fe87c", "title": "Learning Dexterity", "url": "https://blog.openai.com/learning-dexterity/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Many people at OpenAI"], "summaries": ["Most current experiments with robotics work on relatively small state spaces (think 7 degrees of freedom, each a real number) and are trained in simulation. If we could throw a lot of compute at the problem, could we do significantly better? Yes! Using the same general approach as with [OpenAI Five](https://blog.openai.com/openai-five/), OpenAI has built a system called Dactyl, which allows a physical real-world dexterous hand to manipulate a block. It may not seem as impressive as the videos of humanoids running through obstacle courses, but this is way harder than your typical Mujoco environment, especially since they aim to get it working on a real robot. As with OpenAI Five, they only need a reward function (I believe not even a shaped reward function in this case), a simulator, and a good way to explore. In this setting though, \"exploration\" is actually domain randomization, where you randomly set parameters that you are uncertain about (such as the coefficient of friction between two surfaces), so that the learned policy is robust to distribution shift from the simulator to the real world. (OpenAI Five also used domain randomization, but in that case it was not because we were uncertain about the parameters in the simulator, but because the policy was too specialized to the kinds of characters and heroes it was seeing, and randomizing those properties exposed it to a wider variety of scenarios so it had to learn more general policies.) They use 6144 CPU cores and 8 GPUs, which is _much_ less than for OpenAI Five, but _much_ more than for a typical Mujoco environment.\n\nThey do separate the problem into two pieces -- first, they learn how to map from camera pictures to a 3D pose (using convolutional nets), and second, they use RL to choose actions based on the 3D pose. They can also get better estimates of the 3D pose using motion tracking. They find that the CNN is almost as good as motion tracking, and that the domain randomization is crucial for getting the system to actually work.\n\nThey also have a couple of sections on surprising results and things that didn't work. Probably the most interesting part was that they didn't need to use the tactile sensors to get these results. They couldn't get these sensors in simulation, so they just did without and it seems to have worked fine. It also turns out that the robot's reaction time wasn't too important -- there wasn't a big difference in changing from 80ms reaction time to 40ms reaction time; in fact, this just increased the required training time without much benefit.\n\nProbably the most interesting part of the post is the last paragraph (italics indicates my notes): \"This project completes a full cycle of AI development that OpenAI has been pursuing for the past two years: we’ve developed a new learning algorithm _(PPO)_, scaled it massively to solve hard simulated tasks _(OpenAI Five)_, and then applied the resulting system to the real world _(this post)_. 
Repeating this cycle at increasing scale is the primary route we are pursuing to increase the capabilities of today’s AI systems towards safe artificial general intelligence.\""], "venue": "OpenAI Blog", "opinion": "This is pretty exciting -- transferring a policy from simulation to the real world is notoriously hard, but it turns out that as long as you use domain randomization (and 30x the compute) it actually is possible to transfer the policy. I wish they had compared the success probability in simulation to the success probability in the real world -- right now I don't know how well the policy transferred. (That is, I want to evaluate how well domain randomization solved the distribution shift problem.) Lots of other exciting things too, but they are pretty similar to the exciting things about OpenAI Five, such as the ability to learn higher level strategies like finger pivoting and sliding (analogously, fighting over mid or 5-man push).", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #18", "newsletter_category": "Reinforcement learning"}
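Domain randomization as described here is conceptually simple; a minimal sketch (with made-up parameter names, ranges, and a toy stand-in simulator, none of which come from the Dactyl system) looks like this:

```python
import random

# Illustrative ranges for physical parameters we are uncertain about
# (the real system randomizes many more, including visual properties).
RANDOMIZATION_RANGES = {
    "friction":  (0.5, 1.5),   # multiplier on nominal friction
    "mass":      (0.8, 1.2),   # multiplier on nominal mass
    "damping":   (0.7, 1.3),
    "obs_noise": (0.0, 0.02),  # std of added observation noise
}

def sample_params(rng):
    """One simulator configuration, sampled uniformly from each range."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in RANDOMIZATION_RANGES.items()}

class ToySim:
    """Stand-in for the physics simulator; only the interface matters."""
    def __init__(self, params):
        self.params, self.t = params, 0

    def reset(self):
        self.t = 0
        return 0.0

    def step(self, action):
        self.t += 1
        obs = action * self.params["friction"] + random.gauss(0, self.params["obs_noise"])
        return obs, -abs(obs), self.t >= 50   # observation, reward, done

def collect_episode(policy, rng):
    """Every episode gets a freshly randomized simulator, so the learned
    policy cannot overfit to one set of physics constants and is more likely
    to transfer to the real robot's (unknown) constants."""
    sim = ToySim(sample_params(rng))
    obs, done, trajectory = sim.reset(), False, []
    while not done:
        action = policy(obs)
        obs, reward, done = sim.step(action)
        trajectory.append((obs, action, reward))
    return trajectory

trajectory = collect_episode(policy=lambda obs: 0.1, rng=random.Random(0))
```

The point is only the structure: because every episode sees different physics constants, the policy cannot latch onto any single simulator configuration, which is what lets it tolerate the gap between simulation and the real robot.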
{"id": "b2f1221b0f02990ed7744952de359a9f", "title": "Capture the Flag: the emergence of complex cooperative agents", "url": "https://deepmind.com/blog/capture-the-flag/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Max Jaderberg", "Wojciech M. Czarnecki", "Iain Dunning", "Luke Harris", "and Thore Graepel"], "summaries": ["DeepMind has trained FTW (For The Win) agents that can play Quake III Arena Capture The Flag from raw pixels, given _only_ the signal of whether they win or not. They identify three key ideas that enable this -- population based training (instead of self play), learning an internal reward function, and operating at two timescales (enabling better use of memory). Their ablation studies show that all of these are necessary, and in particular it even outperforms population based training with manual reward shaping. The trained agents can cooperate and compete with a wide range of agents (thanks to the population based training), including humans.\n\nBut why are these three techniques so useful? This isn't as clear, but I can speculate. Population based training works well because the agents are trained against a diversity of collaborators and opponents, which can fix the issue of instability that afflicts self-play. Operating at two timescales gives the agent a better inductive bias. They say that it enables the agent to use memory more effectively, but my story is that it lets it do something more hierarchical, where the slow RNN makes \"plans\", while the fast RNN executes on those plans. Learning an internal reward function flummoxed me for a while, it really seemed like that should not outperform manual reward shaping, but then I found out that the internal reward function is computed from the game points screen, not from the full trajectory. This gives it a really strong inductive bias (since the points screen provides really good features for defining reward functions) that allows it to quickly learn an internal reward function that's more effective than manual reward shaping. It's still somewhat surprising, since it's still learning this reward function from the pixels of the points screen (I assume), but more believable."], "venue": "DeepMind Blog", "opinion": "This is quite impressive, since they are learning from the binary win-loss reward signal. I'm surprised that the agents generalized well enough to play alongside humans -- I would have expected that to cause a substantial distributional shift preventing good generalization. They only had 30 agents in their population, so it seems unlikely a priori that this would induce a distribution that included humans. Perhaps Quake III is simple enough strategically that there aren't very many viable strategies, and most strategies are robust to having slightly worse allies? That doesn't seem right though.\n\nDeepMind did a _lot_ of different things to analyze what the agents learned and how they are different from humans -- check out the [paper](https://arxiv.org/pdf/1807.01281.pdf) for details. For example, they showed that the agents are much better at tagging (shooting) at short ranges, while humans are much better at long ranges.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #14", "newsletter_category": "Reinforcement learning"}
{"id": "0906ad06959863e718efb1e551673080", "title": "OpenAI Five", "url": "https://blog.openai.com/openai-five/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Many people at OpenAI"], "summaries": ["OpenAI has trained a team of five neural networks to play a particular set of Dota heroes in a mirror match (playing against the same set of heroes) with a few restrictions, and have started to beat amateur human players. They are aiming to beat a team of top professionals at The International in August, with the same set of five heroes, but without any other restrictions. Salient points:\n- The method is remarkably simple -- it's a scaled up version of PPO with training data coming from self-play, with reward shaping and some heuristics for exploration, where each agent is implemented by an LSTM.\n- There's no human data apart from the reward shaping and exploration heuristics.\n- Contrary to most expectations, they didn't need anything fundamentally new in order to get long-term strategic planning. I was particularly surprised by this. Some interesting thoughts from OpenAI researchers in [this thread](https://news.ycombinator.com/item?id=17392700) -- in particular, assuming good exploration, the variance of the gradient should scale linearly with the duration, and so you might expect you only need linearly more samples to counteract this.\n- They used 256 dedicated GPUs and 128,000 preemptible CPUs. A [Hacker News comment](https://news.ycombinator.com/item?id=17394150) estimates the cost at $2500 per hour, which would put the likely total cost in the millions of dollars.\n- They simulate 900 years of Dota every day, which is a ratio of ~330,000:1, suggesting that each CPU is running Dota ~2.6x faster than real time. In reality, it's probably running many times faster than that, but preemptions, communication costs, synchronization etc. all lead to inefficiency.\n- There was no explicit communication mechanism between agents, but they all get to observe the full Dota 2 state (_not_ pixels) that any of the agents could observe, so communication is not really necessary.\n- A version of the code with a serious bug was still able to train to beat humans. Not encouraging for safety.\n- Alex Irpan covers some of these points in more depth in [Quick Opinions on OpenAI Five](https://www.alexirpan.com/2018/06/27/dota-2-five.html).\n- Gwern [comments](https://www.reddit.com/r/reinforcementlearning/comments/8tqzvq/openai_dota_update_ppo_lstm_reaches_amateurlevel/) as well."], "venue": "OpenAI Blog", "opinion": "I might be more excited by an approach that was able to learn from human games (which are plentiful), and perhaps finetune with RL, in order to develop an approach that could generalize to more tasks in the future, where human data is available but a simulator is not. (Given the ridiculous sample complexity, pure RL with PPO can only be used in tasks with a simulator.) On the other hand, an approach that leveraged human data would necessarily be at least somewhat specific to Dota. A dependence on human data is unlikely to get us to _general_ intelligence, whereas this result suggests that we can solve tasks that have a simulator, exploration strategy, and a dense reward function, which really is pushing the boundary on generality. 
This seems to be [gdb's take](https://news.ycombinator.com/item?id=17392802): \"We are very encouraged by the algorithmic implication of this result — in fact, it mirrors closely the story of deep learning (existing algorithms at large scale solve otherwise unsolvable problems). If you have a very hard problem for which you have a simulator, our results imply there is a _real, practical path_ towards solving it. This still needs to be proven out in real-world domains, but it will be very interesting to see the full ramifications of this finding.\"", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #13", "newsletter_category": "Reinforcement learning"}
{"id": "16e509b0ec73cfebe6483842e0e5dfae", "title": "A Reinforcement Learning Potpourri", "url": "https://www.alexirpan.com/2020/05/07/rl-potpourri.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Alex Irpan"], "summaries": ["This blog post summarizes several recent papers in RL (including the data augmentation papers I summarized above, as well as [First Return Then Explore](https://arxiv.org/abs/2004.12919), the successor to <@Go-Explore@>(@Montezuma’s Revenge Solved by Go-Explore, a New Algorithm for Hard-Exploration Problems@)."], "venue": "Sorta Insightful", "opinion": "The whole blog post is worth reading, but I particularly agree with his point that data augmentation generally seems like a no-brainer, since you can think of it either as increasing the size of your dataset by some constant factor, or as a way of eliminating spurious correlations that your model might otherwise learn.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #99", "newsletter_category": "Reinforcement learning"}
{"id": "0c026abbca1d984988a3d2638a9ffbec", "title": "Agent57: Outperforming the human Atari benchmark", "url": "https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Adrià Puigdomènech Badia*", "Bilal Piot*", "Steven Kapturowski*", "Pablo Sprechmann*", "Alex Vitvitskyi", "Daniel Guo", "Charles Blundell"], "summaries": ["This blogpost and its associated [arxiv publication](https://arxiv.org/abs/2003.13350) present Agent57, DeepMind's latest RL agent created for the purpose of achieving human-level performance in a suite of 57 Atari games. Notably, Agent57 is the first agent that is able to surpass average human performance, as measured by Human Normalized Score or HNS, on every individual game in the suite, with the same set of hyperparameters. The blogpost details the evolution of DeepMind's Atari agents from DQN up to Agent57, and the paper elaborates on the improvements made in Agent57.\n\nSpecifically, Agent57 builds on a recent agent 'Never Give Up' (NGU), which itself augments R2D2 with episodic memory for curiosity-driven exploration. Agent57 introduces (i) a new parameterization of state-action value function that decomposes into intrinsic and extrinsic rewards, and (ii) a meta-controller which selects which of its numerous distributed policies to prioritize during learning, allowing the agent to control the exploration/exploitation trade-off."], "venue": "arXiv", "opinion": "On the one hand, this work feels like the achievement of an important milestone in DeepMind's ongoing research agenda towards building more general agents. On the other hand, it has the flavour of engineered sophistry: a remarkable collection of building blocks arranged together to patch specific known weaknesses, but lacking in core insights about how to make agents more general, without, say, making them more complex.\n\nThe work is well presented and accessible, especially the blogpost that contains a snapshot of the functional development of deep reinforcement learning capabilities over time. There are several open questions from here on out; personally, I hope this progresses to a single instance of an agent that is proficient at multiple games, and to the design of agents that do not require extensive hyperparameter tuning. The scale of DeepMind's experiments continues to grow, with 256 actors, and 10s of billions of frames, suggesting that, for now, this work is only suitable for simulated environments.", "highlight": false, "read_more": "Paper: Agent57: Outperforming the Atari Human Benchmark", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #95", "newsletter_category": "Reinforcement learning"}
{"id": "e43ec6320e98a6d75e679de308d60809", "title": "On Catastrophic Interference in Atari 2600 Games", "url": "http://arxiv.org/abs/2002.12499", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["William Fedus*", "Dibya Ghosh*", "John D. Martin", "Marc G. Bellemare", "Yoshua Bengio", "Hugo Larochelle"], "summaries": ["One common worry with deep learning is the possibility of _catastrophic interference_: as the model uses gradients to learn a new behaviour, those same gradients cause it to forget past behaviours. In model-free deep RL, this would be particularly harmful in long, sequential tasks as in hard exploration problems like Montezuma’s Revenge: after the model learns how to do the first few subtasks, as it is trying to learn the next subtask, it would “forget” the first subtasks, degrading performance. The authors set out to test this hypothesis.\n\nIf this hypothesis were true, there would be an easy way to improve performance: once you have learned to perform the first subtask, just create a brand new neural net for the next subtask, so that training for this next subtask doesn’t interfere with past learning. Since the new agent has no information about what happened in the past, and must just “pick up” from wherever the previous agent left off, it is called the Memento agent (a reference to the movie of the same name). One can then solve the entire task by executing each agent in sequence.\n\nIn practice, they train an agent until its reward plateaus. They train a new Memento agent starting from the states that the previous agent reached, and note that it reliably makes further progress in hard exploration games like Montezuma’s Revenge, and not in “steady-state” games like Pong (where you wouldn’t expect as much catastrophic interference). Of course, with the Memento agent, you get both twice the training time and twice the model size, which could explain the improvement. They compare against giving the original agent twice the compute and model capacity, and find that Memento still does significantly better. They also present some fine-grained experiments which show that for a typical agent, training on specific contexts adversely affects performance on other contexts that are qualitatively different."], "venue": "arXiv", "opinion": "I think this is pretty strong evidence that catastrophic interference is in fact a problem with the Atari games. On the other hand, <@OpenAI Five@> also has many, many subtasks, that in theory should interfere with each other, and it still seems to train well. Some guesses at how to reconcile these facts:\n\n1) the tasks in Dota are more correlated than in (say) Montezuma’s Revenge, and so interference is less of a problem (seems plausible)\n2) the policy in OpenAI Five was large enough that it could easily allocate separate capacity for various subtasks (seems unlikely, I believe the policy was relatively small), or\n3) with sufficiently large-scale training, there is more “exploration” in weight-space until a configuration is found where interference doesn’t happen (seems unlikely given that large batch sizes help, since they tend to reduce weight-space exploration).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #91", "newsletter_category": "Reinforcement learning"}
{"id": "af1c78e21779fbfc16dce0701efece9b", "title": "Reward-Conditioned Policies", "url": "http://arxiv.org/abs/1912.13465", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Aviral Kumar", "Xue Bin Peng", "Sergey Levine"], "summaries": ["Standard RL algorithms create a policy that maximizes a reward function; the *Reward-Conditioned Policy* algorithm instead creates a policy that can achieve a particular reward value passed in as an input. This allows the policy to be trained via supervised regression on a dataset. Each example in the dataset consists of a state, action, and either a return or an advantage, referred to as *Z*. The network then predicts the action based on the state and *Z*. The learned model is able to generalize to policies for larger returns. During training, the target value is sampled from a distribution that gradually increases so that it continues to learn higher rewards. \n\nDuring evaluation, they then feed in the state and a high target value of *Z* (set one standard deviation above the average in their paper.) This enables them to achieve solid - but not state of the art - performance on a variety of the OpenAI Gym benchmark tasks. They also run ablation studies showing, among other things, that the policy is indeed accurate in achieving the target reward it aims for. "], "venue": "arXiv", "opinion": "One of the dangers of training powerful AI to maximize a reward function is that optimizing the function to extreme values may no longer correlate with what we want, as in the classic paperclip maximizer example. I think RCP provides an interesting solution to that problem; if we can instead specify a good, but reasonable, value, we may be able to avoid those extreme cases. We can then gradually increase the desired reward without retraining while continuously monitoring for issues. I think there are likely flaws in the above scheme, but I am optimistic in general about the potential of finding alternate ways to communicate goals to an agent. \n\nOne piece I am still curious about is whether the policy remembers how to achieve lower rewards as its training dataset updates towards higher rewards. They show in a heatmap that the target and actual rewards do match up well, but the target rewards are all sampled quite near each other; it would be interesting to see how well the final policy generalizes to the entire spectrum of target rewards. ", "highlight": false, "read_more": "", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #83", "newsletter_category": "Reinforcement learning"}
{"id": "5f4c30ef10697712e85cc0a52f4e0d47", "title": "Procgen Benchmark", "url": "https://openai.com/blog/procgen-benchmark/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Karl Cobbe", "Christopher Hesse", "Jacob Hilton", "John Schulman"], "summaries": ["Existing game-based benchmarks for reinforcement learners suffer from the problem that agents constantly encounter near-identical states, meaning that the agents may be overfitting and memorizing specific trajectories rather than learning a general set of skills. In an attempt to remedy this, in this post OpenAI introduces Procgen Benchmark, 16 procedurally-generated video game environments used to measure how quickly a reinforcement learning agent learns generalizable skills. \n\nThe authors conduct several experiments using the benchmark. Notably, they discover that:\n- Agents strongly overfit to small training sets and need access to as many as 10,000 levels to generalize appropriately.\n- After a certain threshold, training performance improves as the training set grows, counter to trends in other supervised learning tasks.\n- Using a fixed series of levels for each training sample (as other benchmarks do) makes agents fail to generalize to randomly generated series of levels at test time.\n- Larger models improve sample efficiency and generalization."], "venue": "OpenAI Blog", "opinion": "This seems like a useful benchmark. I find it particularly interesting that their experiment testing non-procedurally generated levels as training samples implies huge overfitting effects in existing agents trained in video-game environments.", "highlight": false, "read_more": "Paper: Leveraging Procedural Generation to Benchmark Reinforcement Learning", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #79", "newsletter_category": "Reinforcement learning"}
{"id": "85a3e090e04cb005431f176f2c175915", "title": "Learning to Predict Without Looking Ahead: World Models Without Forward Prediction", "url": "https://learningtopredict.github.io/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["C. Daniel Freeman", "Luke Metz", "David Ha"], "summaries": ["One [critique](https://twitter.com/shimon8282/status/979344417961250817) of the <@World Models@>(@Recurrent World Models Facilitate Policy Evolution@) paper was that in any realistic setting, you only want to learn the features that are important for the task under consideration, while the VAE used in the paper would learn features for state reconstruction. This paper instead studies world models that are trained directly from reward, rather than by supervised learning on observed future states, which should lead to models that only focus on task-relevant features. Specifically, they use _observational dropout_ on the environment percepts, where the true state is passed to the policy with a peek probability _p_, while a neural network, **M**, generates a proxy state with probability _1 - p_. At the next time-step, **M** takes the same input as the policy, plus the policy's action, and generates the next proxy state, which then may get passed to the controller, again with probability _1 - p_.\n\nThey investigate whether the emergent 'world model' **M** behaves like a good forward predictive model. They find that even with very low peek probability e.g. _p_ = 5%, **M** learns a good enough world model that enables the policy to perform reasonably well. Additionally, they find that world models thus learned can be used to train policies that sometimes transfer well to the real environment. They claim that the world model only learns features that are useful for task performance, but also note that interpretability of those features depends on inductive biases such as the network architecture."], "venue": "NeurIPS 2019", "opinion": "This work warrants a visit for the easy-to-absorb animations and charts. On the other hand, they make a few innocent-sounding observations that made me uncomfortable because they weren't rigourously proved nor labelled as speculation, e.g. a) \"At higher peek probabilities, the learned dynamics model is not needed to solve the task thus is never learned.\", and b) \"Here, the world model clearly only learns reliable transition maps for moving down and to the right, which is sufficient.\"\n\nWhile this is a neat bit of work well presented, it is nevertheless still unlikely this (and most other current work in deep model-based RL) will scale to more complex alignment problems such as <@Embedded World-Models@>; these world models do not capture the notion of an agent, and do not model the agent as an entity making long-horizon plans in the environment.", "highlight": false, "read_more": "", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #75", "newsletter_category": "Reinforcement learning"}
{"id": "55a6198869d90218721445be499691a6", "title": "Superhuman AI for multiplayer poker", "url": "https://science.sciencemag.org/content/early/2019/07/10/science.aay2400.full", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Noam Brown", "Tuomas Sandholm"], "summaries": ["In July, this paper presented the first AI that can play six-player no-limit Texas hold’em poker better than professional players. Rather than using deep learning, it works by precomputing a blueprint strategy using a novel variant of Monte Carlo linear counterfactual regret minimization, an iterative self-play algorithm. To traverse the enormous game tree, the AI buckets moves by abstracting information in the game. During play, the AI adapts its strategy by modifying its abstractions according to how the opponents play, and by performing real-time search through the game tree. It used the equivalent of $144 of cloud compute to calculate the blueprint strategy and two server grade CPUs, which was much less hardware than what prior AI game milesones required."], "venue": "Science", "opinion": "From what I understand, much of the difficulty of poker lies in being careful not to reveal information. For decades, computers have already had an upper hand in being silent, computing probabilities, and choosing unpredictable strategies, which makes me a bit surprised that this result took so long. Nonetheless, I found it interesting how little compute was required to accomplish superhuman play.", "highlight": false, "read_more": "Let's Read: Superhuman AI for multiplayer poker", "summarizer": "Matthew", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #74", "newsletter_category": "Reinforcement learning"}
{"id": "f54de9367d43e56c8035c4bbc1dfe868", "title": "AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning", "url": "https://deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["AlphaStar Team"], "summaries": ["<@AlphaStar@>(@AlphaStar: Mastering the Real-Time Strategy Game StarCraft II@), DeepMind’s StarCraft II AI, has now defeated a top professional player and is better than 99.8% of players. While previous versions were limited to only a subset of the game, it now plays the full game and has limitations on how quickly it can take actions similar to top human players. It was trained initially via supervised learning on human players and then afterwards trained using RL.\n\nA challenge in learning StarCraft via self-play is that strategies exhibit non-transitivity: Stalker units beat Void Rays, Void Rays beat Immortals, but Immortals beat Stalkers. This can lead to training getting stuck in cycles. In order to avoid this, they set up a League of exploiter agents and main agents. The exploiter agents train only against the current iteration of main agents, so they can learn specific counter-strategies. The main agents then train against a mixture of current main agents, past main agents, and exploiters, prioritizing opponents that they have a lower win rate against."], "venue": "DeepMind Blog", "opinion": "I think this is a very impressive display of how powerful current ML methods are at a very complex game. StarCraft poses many challenges that are not present in board games such as chess and go, such as limited visibility, a large state and action space, and strategies that play out over very long time horizons. I found it particularly interesting how they used imitation learning and human examples to avoid trying to find new strategies by exploration, but then attained higher performance by training on top of that. \n\nI do believe progress on games is becoming less correlated with progress on AGI. Most of the key innovations in this paper revolve around the League training, which seems quite specific to StarCraft. In order to continue making progress towards AGI, I think we need to focus on being able to learn in the real world on tasks that are not as easy to simulate.", "highlight": false, "read_more": "Paper: Grandmaster level in StarCraft II using multi-agent reinforcement learning", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #73", "newsletter_category": "Reinforcement learning"}
{"id": "25c02387423a3ef9bafe2b605fec29f3", "title": "Deep Dynamics Models for Dexterous Manipulation", "url": "http://bair.berkeley.edu/blog/2019/09/30/deep-dynamics/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Anusha Nagabandi", "Kurt Konoglie", "Sergey Levine", "Vikash Kumar"], "summaries": ["For hard robotic tasks like manipulating a screwdriver, model-free RL requires large amounts of data that are hard to generate with real-world hardware. So, we might want to use the more sample-efficient model-based RL, which has the additional advantage that the model can be reused for similar tasks with different rewards. This paper uses an ensemble of neural networks to predict state transitions, and plans by sampling trajectories for different policies. With this, they train a real anthropomorphic robot hand to be able to rotate two balls in its hand somewhat reliably within a few hours. They also trained for the same task in a simulation and were able to reuse the resulting model to move a single ball to a target location."], "venue": "arXiv", "opinion": "The videos look impressive, even though the robot hand still has some clunkiness to it. My intuition is that model-based approaches can be very useful in robotics and similar domains, where the randomness in transitions can easily be approximated by Gaussians. In other tasks where transitions follow more complicated, multimodal distributions, I am more sceptical. ", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #73", "newsletter_category": "Reinforcement learning"}
{"id": "b85676dd736d4b069afbc268a373b2c3", "title": "Let's Discuss OpenAI's Rubik's Cube Result", "url": "https://www.alexirpan.com/2019/10/29/openai-rubiks.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Alex Irpan"], "summaries": ["This post makes many points about <@OpenAI's Rubik's cube result@>(@Solving Rubik’s Cube with a Robot Hand@), but I'm only going to focus on two. First, the result is a major success for OpenAI's focus on design decisions that encourage long-term research success. In particular, it relied heavily on the engineering-heavy model surgery and policy distillation capabilities that allow them to modify e.g. the architecture in the middle of a training run (which we've seen with <@OpenAI Five@>(@OpenAI Five Benchmark: Results@)). Second, the domain randomization doesn't help as much as you might think: OpenAI needed to put a significant amount of effort into improving the simulation to get these results, tripling the number of successes on a face rotation task. Intuitively, we still need to put in a lot of effort to getting the simulation to be \"near\" reality, and then domain randomization can take care of the last little bit needed to robustly transfer to reality. Given that domain randomization isn't doing that much, it's not clear if the paradigm of zero-shot sim-to-real transfer is the right one to pursue. To quote the post's conclusion: _I see two endgames here. In one, robot learning reduces to building rich simulators that are well-instrumented for randomization, then using ludicrous amounts of compute across those simulators. In the other, randomization is never good enough to be more than a bootstrapping step before real robot data, no matter what the compute situation looks like. Both seem plausible to me, and we’ll see how things shake out._"], "venue": "Author's Website", "opinion": "As usual, Alex's analysis is spot on, and I have nothing to add beyond strong agreement.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #72", "newsletter_category": "Reinforcement learning"}
{"id": "c90283ad92c2c2e71ec3cef7a56f956e", "title": "Solving Rubik’s Cube with a Robot Hand", "url": "https://openai.com/blog/solving-rubiks-cube/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["OpenAI"], "summaries": ["Historically, researchers have had limited success making general purpose robot hands. Now, OpenAI has successfully trained a pair of neural networks to solve a Rubik's cube with a human-like robot hand (the learned portion of the problem is manipulating the hand -- solving the Rubik's cube is specified via a classical algorithm). The hand is able to solve the Rubik's cube even under a variety of perturbations, including having some of its fingers tied together, or having its view of the cube partially occluded. The primary innovation presented is a new method called _Automatic Domain Randomization_ (ADR). ADR automatically generates progressively more difficult environments to train on in simulation that are diverse enough to capture the physics of the real world. ADR performs better than existing domain randomization methods, which require manually specifying randomization ranges. The post speculates that ADR is actually leading to _emergent meta-learning_, where the network learns a learning algorithm that allows itself to rapidly adapt its behavior to its environment."], "venue": "OpenAI Blog", "opinion": "My impression is that this is a very impressive robotics result, largely because the problem of transferring training in simulation to real life (\"sim2real\") is extremly difficult. I also think it's quite novel if as the authors hypothesize, the system is exhibiting emergent meta-learning. It's worth noting that the hand is still not quite at human-level -- in the hardest configurations, it only succeeds 20% of the time, and for most experiments, the hand gets some of the state of the cube via Bluetooth sensors inside the cube, not just via vision.", "highlight": false, "read_more": "Vox: Watch this robot solve a Rubik’s Cube one-handed", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #70", "newsletter_category": "Reinforcement learning"}
{"id": "baf9bb62f3c3f31d2ee938066b89d865", "title": "Emergent Tool Use from Multi-Agent Interaction", "url": "https://openai.com/blog/emergent-tool-use/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Bowen Baker", "Ingmar Kanitscheider", "Todor Markov", "Yi Wu", "Glenn Powell", "Bob McGrew", "Igor Mordatch"], "summaries": ["We have such a vast diversity of organisms and behaviors on Earth because of evolution: every time a new strategy evolved, it created new pressures and incentives for other organisms, leading to new behaviors. The multiagent competition led to an _autocurriculum_. This work harnesses this effect: they design a multiagent environment and task, and then use standard RL algorithms to learn several interesting behaviors. Their task is hide-and-seek, where the agents are able to move boxes, walls and ramps, and lock objects in place. The agents find _six_ different strategies, each emerging from incentives created by the previous strategy: seekers chasing hiders, hiders building shelters, seekers using ramps to get into shelters, hiders locking ramps away from seekers, seekers surfing boxes to hiders, and hiders locking both boxes and ramps.\n\nThe hope is that this can be used to learn general skills that can then be used for specific tasks. This makes it a form of unsupervised learning, with a similar goal as e.g. <@curiosity@>(@Large-Scale Study of Curiosity-Driven Learning@). We might hope that multiagent autocurricula would do better than curiosity, because they automatically tend to use features that are important for control in the environment (such as ramps and boxes), while intrinsic motivation methods often end up focusing on features we wouldn't think are particularly important. They empirically test this by designing five tasks in the environment and checking whether finetuning the agents from the multiagent autocurricula learns faster than direct training and finetuning curiosity-based agents. They find that the multiagent autocurricula agents do best, but only slightly. To explain this, they hypothesize that the learned skill representations are still highly entangled and so are hard to finetune, whereas learned feature representations transfer more easily."], "venue": "OpenAI Blog", "opinion": "This is somewhat similar to <@AI-GAs@>(@AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence@): both depend on _environment design_, which so far has been relatively neglected. However, AI-GAs are hoping to create _learning algorithms_, while multiagent autocurricula leads to _tool use_, at least in this case. Another point of similarity is that they both require vast amounts of compute, as discovering new strategies can take significant exploration. That said, it seems that we might be able to drastically decrease the amount of compute needed by solving the exploration problem using e.g. human play data or demonstrations (discussed in two different papers above).\n\nMore speculatively, I hypothesize that it will be useful to have environments where you need to identify _what strategy your opponent is using_. In this environment, each strategy has the property that it beats _all_ of the strategies that preceded it. As a result, it was fine for the agent to undergo catastrophic forgetting: even though it was trained against past agents, it only needed to learn the current strategy well; it didn't need to remember previous strategies. 
As a result, it may have forgotten prior strategies and skills, which might have reduced its ability to learn new tasks quickly.", "highlight": false, "read_more": "[Paper: Emergent Tool Use from Multi-Agent Autocurricula](https://d4mucfpksywv.cloudfront.net/emergent-tool-use/paper/Multi_Agent_Emergence_2019.pdf), [Vox: Watch an AI learn to play hide-and-seek](https://www.vox.com/future-perfect/2019/9/20/20872672/ai-learn-play-hide-and-seek)", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #65", "newsletter_category": "Reinforcement learning"}
{"id": "bfb65c0f58cb035ef10268bf9400e10d", "title": "A Survey of Reinforcement Learning Informed by Natural Language", "url": "http://arxiv.org/abs/1906.03926", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jelena Luketina", "Nantas Nardelli", "Gregory Farquhar", "Jakob Foerster", "Jacob Andreas", "Edward Grefenstette", "Shimon Whiteson", "Tim Rocktäschel"], "summaries": ["Humans use language as a way of efficiently storing knowledge of the world and instructions for handling new scenarios; this paper is written from the perspective that it would be potentially hugely valuable if RL agents could leverage information stored in language in similar ways. They look at both the case where language is an inherent part of the task (example: the goal is parameterized by a language instruction) and where language is used to give auxiliary information (example: parts of the environment are described using language). Overall, the authors push for more work in this area, and, in particular, more work using external-corpus-pretrained language models and with research designs that use human-generated rather than synthetically-generated language; the latter is typically preferred for the sake of speed, but the former has particular challenges we'll need to tackle to actually use existing sources of human language data. "], "venue": "IJCAI 2019", "opinion": "This article is a solid and useful version of what I would expect out of a review article: mostly useful as a way to get thinking in the direction of the intersection of RL and language, and makes me more interested in digging more into some of the mentioned techniques, since by design this review didn't go very deep into any of them. ", "highlight": false, "read_more": "", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #58", "newsletter_category": "Reinforcement learning"}
{"id": "91f4f487548f5dda6035c52b64101129", "title": "How to Train Your OpenAI Five", "url": "https://openai.com/blog/how-to-train-your-openai-five/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["OpenAI"], "summaries": ["<@OpenAI Five@> has now beaten the Dota world champions 2-0, after training for 8x longer, for a total of 800 petaflop/s-days or 45000 years of Dota self-play experience. During this insanely long training run, OpenAI grew the LSTM to 4096 units, added buybacks to the game, and switched versions twice. Interestingly, they found it hard to add in new heroes: they could bring a few new heroes up to 95th percentile of humans, but it didn't look like they would train fast enough to reach pro level. This could be because the other heroes were already so capable that it was too hard to learn, since the new heroes would constantly be beaten. The resulting team was also able to play cooperatively with humans, even though they had never been trained with humans.\n\nAs usual, I like [Alex Irpan's thoughts](https://www.alexirpan.com/2019/04/14/openai-finals.html). On the Dota side, he found Five's reaction times more believable, but was disappointed by the limited hero pool. He also predicted that with [OpenAI Five Arena](https://arena.openai.com/), which allowed anyone to play either alongside Five, or against Five, at least one of the _many_ teams would figure out a strategy that could reliably beat Five. He was right: while Five had a 99.4% win rate, one team was able to beat it [10 times in a row](https://arena.openai.com/#/results), another beat it thrice in a row, and two teams beat it twice in a row."], "venue": "OpenAI Blog", "opinion": "In this era of scaling up compute via parallelism, it was quite surprising to see OpenAI scaling up compute simply by training for almost a year. That feels like one of the last resorts to scale up compute, so maybe we're seeing the limits of the trend identified in <@AI and Compute@>?\n\nBack when OpenAI Five beat a strong team in their <@Benchmark@>(@OpenAI Five Benchmark: Results@), I and a few others predicted that the team would be able to beat Five after playing a few games against it. I think this prediction has been somewhat validated, given that four teams figured out how to beat a much stronger version of the bot. Of course, humans played over 7000 games against Five, not just a few, so this could be that enough random search finds a weakness. Still, I'd expect pros to be able to do this in tens, maybe hundreds of games, and probably this would have been much easier at the time of the Benchmark.\n\nThe underlying model here is that Dota has an extremely large space of strategies, and neither Five nor humans have explored it all. However, pros have a better (lower-dimensional) representation of strategy space (concepts like \"split-push\") that allow them to update quickly when seeing a better opponent. I don't know what it would take to have AI systems learn these sorts of low-dimensional representations, but it seems key to having AI systems that can adapt quickly like humans can.", "highlight": false, "read_more": "Vox: AI triumphs against the world’s top pro team in strategy game Dota 2", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #54", "newsletter_category": "Reinforcement learning"}
{"id": "dddbc82893317e1ff99727cba27a63d8", "title": "Eighteen Months of RL Research at Google Brain in Montreal", "url": "http://www.marcgbellemare.info/blog/eighteen-months-of-rl-research-at-google-brain-in-montreal/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Marc Bellemare"], "summaries": ["One approach to reinforcement learning is to predict the entire distribution of rewards from taking an action, instead of predicting just the expected reward. Empirically, this works better, even though in both cases we choose the action with highest expected reward. This blog post provides an overview of work at Google Brain Montreal that attempts to understand this phenomenon. I'm only summarizing the part that most interested me.\n\nFirst, they found that in theory, distributional RL performs on par with or worse than standard RL when using either a tabular representation or linear features. They then tested this empirically on Cartpole, and found similar results: distributional RL performed worse when using tabular or linear representations, but better when using a deep neural net. This suggests that distributional RL \"learns better representations\". So, they visualize representations for RL on the four-room environment, and find that distributional RL captures more structured representations. Similarly this [paper](https://arxiv.org/abs/1902.0686) showed that predicting value functions for multiple discount rates is an effective way to produce auxiliary tasks for Atari."], "venue": "Marc Bellemare's Website", "opinion": "This is a really interesting mystery with deep RL, and after reading this post I have a story for it. Note I am far from an expert in this field and it's quite plausible that if I read the papers cited in this post I could tell this story is false, but here's the story anyway. As we saw with PreQN earlier in this issue, one of the most important aspects of deep RL is how information about one (s, a) pair is used to generalize to other (s, a) pairs. I'd guess that the benefit from distributional RL is primarily that you get \"good representations\" that let you do this generalization well. With a tabular representation you don't do any generalization, and with a linear feature space the representation is hand-designed by humans to do this generalization well, so distributional RL doesn't help in those cases.\n\nBut why does distributional RL learn good representations? I claim that it provides stronger supervision given the same amount of experience. With normal expected RL, the final layer of the neural net need only be useful for predicting the expected reward, but with distributional RL they must be useful for predicting all of the quantiles of the reward distribution. There may be \"shortcuts\" or \"heuristics\" that allow you to predict expected reward well because of spurious correlations in your environment, but it's less likely that those heuristics work well for all of the quantiles of the reward distribution. As a result, having to predict more things enforces a stronger constraint on what representations your neural net must have, and thus you are more likely to find good representations. 
This perspective also explains why predicting value functions for multiple discount rates helps with Atari, and why adding auxiliary tasks is often helpful (as long as the auxiliary task is relevant to the main task).\n\nThe important aspect here is that all of the quantiles are forcing the same neural net to learn good representations. If you instead have different neural nets predicting each quantile, each neural net has roughly the same amount of supervision as in expected RL, so I'd expect that to work about as well as expected RL, maybe a little worse since quantiles are probably harder to predict than means. If anyone actually runs this experiment, please do let me know the result!", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #51", "newsletter_category": "Reinforcement learning"}
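The "more supervision" argument above lends itself to a small worked example. The sketch below is not from the post or the cited papers (all numbers are made up); it contrasts the two heads on the same stream of sampled returns: an expectation head fits a single scalar with squared error, while a distributional head must fit several quantiles at once via the pinball (quantile-regression) loss, so the same data imposes more constraints on the representation feeding both heads.

```python
# Toy comparison: one target for an expectation head vs. K targets for a
# distributional (quantile) head, fit on the same stream of sampled returns.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(loc=1.0, scale=2.0, size=1000)   # sampled returns for one (s, a)

mean_estimate = 0.0                      # expected-value head: a single scalar
K = 5
taus = (np.arange(K) + 0.5) / K          # quantile fractions 0.1, 0.3, ..., 0.9
quantile_estimates = np.zeros(K)         # distributional head: K quantile estimates

lr = 0.01
for g in returns:
    # squared-error update for the mean
    mean_estimate += lr * (g - mean_estimate)
    # pinball-loss update for each quantile estimate q: q += lr * (tau - 1{g < q})
    indicator = (g < quantile_estimates).astype(float)
    quantile_estimates += lr * (taus - indicator)

print("mean estimate:     ", mean_estimate)
print("quantile estimates:", quantile_estimates)
print("true quantiles:    ", np.quantile(returns, taus))
```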
{"id": "ba4f574020b4df108e2e312995812539", "title": "An Overdue Post on AlphaStar", "url": "https://www.alexirpan.com/2019/02/22/alphastar-part2.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Alex Irpan"], "summaries": ["The [first post](https://www.alexirpan.com/2019/02/22/alphastar.html) in this two-parter talks about the impact of [AlphaStar](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/) ([AN #43](https://mailchi.mp/768a8130013f/alignment-newsletter-43)) on the StarCraft community and broader public. I'm focusing on the second one, which talks about AlphaStar's technical details and implications. Some of this post overlaps with my summary of AlphaStar, but those parts are better fleshed out and have more details.\n\nFirst, imitation learning is a surprisingly good base policy, getting to the level of a Gold player. It's surprising because you might expect the [DAgger](https://www.ri.cmu.edu/pub_files/2011/4/Ross-AISTATS11-NoRegret.pdf) problem to be extreme: since there are so many actions in a StarCraft game, your imitation learning policy will make some errors, and those errors will then compound over the very long remainder of the episode as they take the policy further away from normal human play into states that the policy wasn't trained on.\n\nSecond, population-based training is probably crucial and will be important in the future, because it allows for exploring the full strategy space.\n\nThird, the major challenge is making RL achieve okay performance, and after that they very quickly become great. It took years of research to get Dota and StarCraft bots reach decent play, and then a few days of more training got them to be world class. Fun quote: \"although OpenAI’s DotA 2 agent lost against a pro team, [they were able to beat their old agent 80% of the time with 10 days of training](https://twitter.com/openai/status/1037765547427954688)\".\n\nFourth, there were a lot of research results that went into AlphaStar. This suggests that there are large gains to be had by throwing a lot of techniques together and seeing how well they work, which doesn't happen very much currently. There are good reasons for this: it's much easier to evaluate a technique if its built upon a simple, standard algorithm rather than having to consider all of its interactions with other techniques which you may or may not be able to properly compare against. Still, there are going to be some cool results that we could do now if we just threw the right things together, and this sort of work also lets us test techniques in new settings to see which ones actually work in general, as opposed to only in the original evaluation."], "venue": "Sorta Insightful", "opinion": "I really like this post, and agree with almost everything in it. On the imitation learning point, I also found it surprising how well imitation learning worked. Alex suggests that it could be that human data has enough variation that the agent can learn how to recover from incorrect decisions it could make. I think this is a partial explanation at best -- there is a huge combinatorial explosion, it's not clear why you don't need a much larger dataset to cover the entire space. 
Maybe there are \"natural\" representations in any realistic complex environment that you start to accurately learn at the level of compute that they're using, and once you have those then imitation learning with sufficient variation can work well.\n\nOn the last point about tossing techniques together, I think this might sometimes be worth doing but often may not be. It makes sense to do this with any real task, since that's a test of the technique against reality. (Here StarCraft counts as a \"real\" task while Atari does not; the criterion is something like \"if the task is successfully automated we are impressed regardless of how it is solved\".) I'm less keen on tossing techniques together for artificial benchmarks. I think typically these techniques improve the sample efficiency by a constant multiplicative factor by adding something akin to a good inductive bias; in that case throwing them together may let us solve the artificial benchmark sooner but it doesn't give us great evidence that the \"inductive bias\" will be good for realistic tasks. I think I don't actually disagree with Alex very much on the object-level recommendations, I would just frame them differently.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #47", "newsletter_category": "Reinforcement learning"}
{"id": "41910408d163ea1776d3c707e5ffad28", "title": "Off-Policy Deep Reinforcement Learning without Exploration", "url": "http://arxiv.org/abs/1812.02900", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Scott Fujimoto", "David Meger", "Doina Precup"], "summaries": ["This paper discusses off-policy batch reinforcement learning, in which an agent is trying to learn a policy from data which is not based on its own policy, and without the opportunity to collect more data during training. The authors demonstrate that standard RL algorithms do badly in this setting because they give unseen state-action pairs unrealistically high values, and lack the opportunity to update them. They proposes to address this problem by only selecting actions from previously seen state-action pairs; they prove various optimality results for this algorithm in the MDP setting. To adapt this approach to the continuous control case, the authors train a generative model to produce likely actions (conditional on the state and the data batch) and then only select from the top n actions. Their batch-conditional q-learning algorithm (BCQ) consists of that generative model, a perturbation model to slightly alter the top actions, and a value network and critic to perform the selection. When n = 0, BCQ resembles behavioural cloning, and when n -> ∞, it resembles Q-learning. BCQ with n=10 handily outperformed DQN and DDPG on some Mujoco experiments using batch data."], "venue": "NIPS 2018", "opinion": "This is an interesting paper, with a good balance of intuitive motivations, theoretical proofs, and empirical results. While it's not directly safety-related, the broad direction of combining imitation learning and reinforcement learning seems like it might have promise. Relatedly, I wish the authors had discussed in more depth what assumptions can or should be made about the source of batch data. For example, BCQ would presumably perform worse than DQN when data is collected from an expert trying to minimise reward, and (from the paper’s experiments) performs worse than behavioural cloning when data is collected from an expert trying to maximise reward. Most human data an advanced AI might learn from is presumably somewhere in between those two extremes, and so understanding how well algorithms like BCQ would work on it may be valuable.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #37", "newsletter_category": "Reinforcement learning"}
{"id": "81c0d791b37135108f6014afe3da3110", "title": "Visual Model-Based Reinforcement Learning as a Path towards Generalist Robots", "url": "https://bair.berkeley.edu/blog/2018/11/30/visual-rl/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Frederik Ebert", "Chelsea Finn", "Sudeep Dasari", "Annie Xie", "Alex Lee", "and Sergey Levine"], "summaries": ["How can we get general robots that can perform a diverse array of tasks? We could collect a lot of data from robots acting randomly, train a dynamics model on pixels, and then use model-predictive control to plan. The dynamics model is a neural net trained to predict the next image given the current image and action. It helps to use temporal skip connections, because this allows the robot to get some object permanence since it can now \"remember\" objects it saw in the past that are currently blocked by something else. Model predictive control then samples sequences of actions (called plans), predicts the final image achieved, chooses the plan that best achieves the goal, and takes the first action of that plan. This is then repeated to choose subsequent actions. (Their method is slightly more sophisticated but this is the basic idea.) We can specify the goal by choosing a particular pixel and asking that the object at that pixel be moved to some other pixel. Alternatively, [Few-Shot Goal Inference for Visuomotor Learning and Planning](https://arxiv.org/abs/1810.00482) ([AN #27](https://mailchi.mp/0212425e5544/alignment-newsletter-27)) trains a classifier that can take a few demonstrations and output a goal."], "venue": "BAIR Blog", "opinion": "This is probably the easiest way to get a robot to do interesting things, since you just need it to collect experience autonomously with very little human involvement, you don't need to have good object detection, and in many cases goal specification can be done without too much effort. I'm surprised that using random actions is enough -- how does the robot get enough examples of picking up an object with random actions? Maybe the robot's random strategy is actually coded up in such a way that it is particularly likely to do interesting things like picking up an object.\n\nIt does seem like this approach will need something else in order to scale to more advanced capabilities, especially hierarchical tasks -- for example, you'll never have an example of picking up a napkin, getting it wet, and wiping down a table. But perhaps we can iterate the process, where after we learn how to grasp and push, we start collecting data again using grasping and pushing instead of random low-level actions. Safe exploration would become more of a concern here.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #36", "newsletter_category": "Reinforcement learning"}
{"id": "20d74e06a845c7622e66fe28d3bc4d2c", "title": "AlphaZero: Shedding new light on the grand games of chess, shogi and Go", "url": "https://deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["David Silver", "Thomas Hubert", "Julian Schrittwieser", "and Demis Hassabis"], "summaries": ["If you didn't already believe that AlphaZero is excellent at Go, Chess and Shogi, this post and the associated paper show it more clearly with a detailed evaluation. A few highlights:\n- AlphaZero can beat Stockfish starting from common human openings, suggesting that it generalizes well\n- The amount of computation given to AlphaZero to choose a move has a larger effect on the win probability than I was expecting\n- I always wondered why they use MCTS and not alpha-beta search. They speculate that alpha-beta search with a neural net evaluation function succumbs to the [winner's curse](https://www.investopedia.com/terms/w/winnerscurse.asp) since alpha-beta involves a lot of maxes and mins, whereas MCTS averages over evaluations and so is more robust. In contrast, evaluation functions designed by humans are much more likely to generalize well, and alpha-beta outperforms MCTS."], "venue": "DeepMind Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #36", "newsletter_category": "Reinforcement learning"}
{"id": "e6490475b80bafa48ad4c095e38cdaa4", "title": "Open Sourcing Active Question Reformulation with Reinforcement Learning", "url": "http://ai.googleblog.com/2018/10/open-sourcing-active-question.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Michelle Chen Huebscher and Rodrigo Nogueira"], "summaries": ["Given a question-answering (QA) system, we can get better performance by reformulating a question into a format that is better processed by that system. (A real-world example is [google-fu](https://en.wiktionary.org/wiki/Google-fu), especially several years ago when using the right search terms was more important.) This blog post and accompanying paper consider doing this using reinforcement learning -- try a question reformulation, see if gives a good answer, and if so increase the probability of generating that reformulation. For this to work at all, the neural net generating reformulations has to be pretrained to output sensible questions (otherwise it is an _extremely_ sparse reward problem). They do this by training an English-English machine translation system. The generated reformulations are quite interesting -- 99.8% start with \"what is name\", and many of them repeat words. Presumably the repetition of words is meant to tell the underlying QA system that the word is particularly important."], "venue": "Google AI Blog", "opinion": "I like how this demonstrates the faults of our current QA systems -- for example, instead of understanding the semantic content of a question, they instead focus on terms that are repeated multiple times. In fact, this might be a great way to tell whether our systems are \"actually understanding\" the question (as opposed to, say, learning a heuristic of searching for sentences with similar words and taking the last noun phrase of that sentence and returning it as the answer). For a good QA system, one would hope that the optimal question reformulation is just to ask the same question again. However, this won't work exactly as stated, since the RL system could learn the answers itself, which could allow it to \"reformulate\" the question such that the answer is obvious, for example reformulating \"In what year did India gain independence?\" to \"What is 1946 + 1?\" Unless the QA system is perfectly optimal, there will be some questions where the RL system could memorize the answer this way to improve performance.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #28", "newsletter_category": "Reinforcement learning"}
{"id": "3a364c9ad8055cfc2823f399c16a04da", "title": "Near-Optimal Representation Learning for Hierarchical Reinforcement Learning ", "url": "http://arxiv.org/abs/1810.01257", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine"], "summaries": ["This paper discusses the use of learned representations in hierarchical RL. In the setting where a higher-level policy chooses goals which lower-level policies are rewarded for reaching, how bad is it when the goal representation isn't able to express all possible states? The authors define a metric for a representation's lossiness based on how close to optimal the policies which can be learned using that representation are, and prove that using a certain objective function, representations with bounded lossiness can be learned. They note a similarity between this objective function and those of mutual information estimators.\n\nThe authors test their learner on the MuJoCo Ant Maze environment, achieving compelling results."], "venue": "NIPS 2018", "opinion": "This is a fairly mathematical paper and I didn't entirely follow the proofs, so I'm not sure how dependent they are on the particular choice of objective function. However, the empirical results using that objective seem very impressive, and significantly outperform alternative methods of learning representations.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #27", "newsletter_category": "Reinforcement learning"}
{"id": "453f6712ea926d34c2b43a81cd1ef48e", "title": "Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems", "url": "http://arxiv.org/abs/2005.01643", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu"], "summaries": ["The authors in this paper give an overview of offline-reinforcement learning with the aim that readers gain enough familiarity to start thinking about how to make contributions in this area. The utility of a fully offline RL framework is significant: just as supervised learning methods have been able to utilize data for generalizable and powerful pattern recognition, offline RL methods could enable data to be funneled into decision-making machines for applications such as health-care, robotics, and recommender systems. The organization of the article is split into a section on formulation and another on benchmarks, followed by a section on applications and a general discussion.\n\nIn the formulation portion of the review, the authors give an overview of the offline learning problem and then discuss a number of approaches. Broadly speaking, the biggest challenge is the need for counterfactual reasoning because the agent must learn using data by another agent. Thus, the agent is forced to reason about what would happen if a different decision was used. Importance sampling, approximate dynamic programming, and offline model-based approaches are discussed as possible approaches to this counterfactual reasoning problem. In the benchmarks section, the authors review evaluation techniques for offline RL methods. While the authors find that there are many domain-specific evaluations, general benchmarking is less well established. A major issue in creating benchmarks is deciding whether or not to use diverse trajectories/replay buffer data, or only the final expert policy.\n\nIn the discussion, the authors argue that while importance sampling and dynamic programming work on low-dimensional and short-horizon tasks, they struggle to integrate well with function approximators. On the other hand, the authors see approaches that constrain the space of policies to be near the dataset as a promising direction to mitigate the effects of distributional shift. However, the authors acknowledge that it may ultimately take more systematic datasets to push the field forward. "], "venue": "arXiv", "opinion": "This was a great overview of the state of the field. A recurring theme that the authors highlight is that offline RL requires counterfactual reasoning which may be fundamentally difficult to achieve because of distributional shift. Some results shown in the paper suggest that offline RL may just be fundamentally hard. However, I find myself sharing optimism with the authors on the subject of policy constraint techniques and the inevitable importance of better datasets.", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #120", "newsletter_category": "Reinforcement learning"}
{"id": "36f10063dfaaee3ae4aa33e8c35f3004", "title": "The Animal-AI Testbed and Competition", "url": "http://proceedings.mlr.press/v123/crosby20a/crosby20a.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Matthew Crosby", "Benjamin Beyret", "Murray Shanahan", "José Hernández-Orallo", "Lucy Cheke", "Marta Halina"], "summaries": ["The Animal-AI testbed tests agents on the ability to solve the sorts of tasks that are used to test animal cognition: for example, is the agent able to reach around a transparent obstacle in order to obtain the food inside. This has a few benefits over standard RL environments:\n\n1. The Animal-AI testbed is designed to test for specific abilities, unlike environments based on existing games like Atari.\n2. A single agent is evaluated on multiple hidden tasks, preventing overfitting. In contrast, in typical RL environments the test setting is identical to the train setting, and so overfitting would count as a valid solution.\n\nThe authors ran a competition at NeurIPS 2019 in which submissions were tested on a wide variety of hidden tasks. The winning submission used an iterative method to design the agent: after using PPO to train an agent with the current reward and environment suite, the designer would analyze the behavior of the resulting agent, and tweak the reward and environments and then continue training, in order to increase robustness. However, it still falls far short of the perfect 100% that the author can achieve on the tests (though the author is not seeing the tests for the first time, as the agents are)."], "venue": "NeurIPS 2019 Competition and Demonstration Track", "opinion": "I’m not sure that the path to general intelligence needs to go through replicating embodied animal intelligence. Nonetheless, I really like this benchmark, because its evaluation setup involves new, unseen tasks in order to prevent overfitting, and because of its focus on learning multiple different skills. These features seem important for RL benchmarks regardless of whether we are replicating animal intelligence or not.", "highlight": false, "read_more": "Building Thinking Machines by Solving Animal Cognition Tasks", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #117", "newsletter_category": "Reinforcement learning"}
{"id": "1b11e8a1542b393e941362555b966e25", "title": "Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey", "url": "http://arxiv.org/abs/2003.04960", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Sanmit Narvekar", "Bei Peng", "Matteo Leonetti", "Jivko Sinapov", "Matthew E. Taylor", "Peter Stone"], "summaries": ["For a variety of learning problems, the training process is organized so that new concepts and tasks leverage previously learned information. This can serve as a broad definition of curriculum learning. This paper gives an overview of curriculum learning and a framework to organize various approaches to the curriculum learning problem. One central difficulty is that there is a broad class of methods that can be considered curricula. At one extreme, we have curricula where new tasks are created to speed up learning. At another extreme, some curricula simply reorder experience samples. For example, the prioritized replay buffer is one such reordering method. Thus, to cover as much of the literature as possible the authors outline a framework for curriculum learning and then use that structure to classify various approaches. In general, the definition, learning, construction, and the evaluation of curricula are all covered in this work. This is done by breaking the curriculum learning problem into three steps: task generation, sequencing, and transfer learning. Using this problem decomposition the authors give an overview of work addressing each component."], "venue": "arXiv", "opinion": "Before I read this, I thought of curricula as 'hacks' used to improve training. However, the authors' presentation of connections with transfer learning and experience replay has significantly changed my opinion. In particular, the phrasing of curriculum learning as a kind of 'meta-MDP seems particularly interesting to me. Moreover, there seem to be interesting challenges in this field. One such challenge is that there does not seem to be a great amount of theory about *why* curricula work which could indicate a point of departure for people interested in safety research. Knowing more about theory could help answer safety questions. For example, how do we design curricula so that we can guarantee/check the agent is behaving correctly at each step?", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #114", "newsletter_category": "Reinforcement learning"}
{"id": "d040869cf6c0af7824b12c6437f9769b", "title": "Mastering Complex Control in MOBA Games with Deep Reinforcement Learning", "url": "http://arxiv.org/abs/1912.09729", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Deheng Ye", "Zhao Liu", "Mingfei Sun", "Bei Shi", "Peilin Zhao", "Hao Wu", "Hongsheng Yu", "Shaojie Yang", "Xipeng Wu", "Qingwei Guo", "Qiaobo Chen", "Yinyuting Yin", "Hao Zhang", "Tengfei Shi", "Liang Wang", "Qiang Fu", "Wei Yang", "Lanxiao Huang"], "summaries": ["This paper presents an AI system that can play the Multi-player Online Battle Arena (MOBA) game _Honor of Kings_. They are inspired by <@OpenAI Five@> (and Honor of Kings sounds quite similar to Dota, though it is 1v1 instead of 5v5), and have a similar learning setup: reinforcement learning using PPO. Their architecture requires an off-policy algorithm (I’m not sure why, maybe they have stale parameters across their rollout servers), so they add an importance sampling correction to the PPO objective, as well as an additional type of gradient clipping. The input is a combination of the image and underlying game state info. The resulting agents are able to beat top human players, and in an event with the public, the AI system lost only 4 out of 2100 matches. Unlike OpenAI Five, this required only around 100 hours to train (though it’s unclear how much compute was used)."], "venue": "AAAI 2020", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #102", "newsletter_category": "Reinforcement learning"}
{"id": "f71ea4a1a9a474fff62f5c7f9311a515", "title": "\"Other-Play\" for Zero-Shot Coordination", "url": "http://arxiv.org/abs/2003.02979", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Hengyuan Hu", "Adam Lerer", "Alex Peysakhovich", "Jakob Foerster"], "summaries": ["How can we build AI systems that can _coordinate_ with humans? While <@past@>(@Collaborating with Humans Requires Understanding Them@) <@work@>(@Learning Existing Social Conventions via Observationally Augmented Self-Play@) has assumed access to some amount of human data, this paper aims to coordinate _without any human data at all_, which they call _zero-shot coordination_. In order to develop an algorithm, they assume that their partner is also \"trained\" for zero-shot coordination.\n\nTheir key idea is that in zero-shot coordination, since you can't break symmetries by agreeing upon a protocol in advance (i.e. you can't agree on things like \"we'll drive on the left, not the right\"), you need a policy that is _robust to relabelings that preserve these symmetries_. This is easy to train for: you just train in self-play, but randomly relabel the states, actions and observations separately for each side in a way that preserves the MDP structure (i.e. uses one of the symmetries). Thus, each side must play a policy that works well _without knowing how the other agent's observations and actions have been relabeled_. In practice, for an N-player game you only need to randomize N-1 of the relabelings, and so in the two player games they consider they only randomly relabel one side of the self-play.\n\nThey evaluate this in Hanabi (where the game is invariant to relabeling of the colors), and show that the resulting agents are better at playing with other agents trained on different seeds or with slightly different architectures, and also that they play better with humans, achieving an average score of 15.75 with non-expert human players, compared to 9.15 for agents trained via regular self-play."], "venue": "arXiv", "opinion": "For comparison, I think I get around 17-22 when playing with new players, out of a max of 25, so 15.75 is quite a healthy score given that it doesn't use _any_ human data. That being said, it seems hard to use this method in other settings -- even in the relatively simple <@Overcooked environment@>(@Collaborating with Humans Requires Understanding Them@), there aren't any obvious symmetries to use for such training. Perhaps future work will allow us to find approximate symmetries in games somehow, that we can then train to be robust to?", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #100", "newsletter_category": "Reinforcement learning"}
{"id": "d445bc3bd4b5a054fce04c8dc7e56bf6", "title": "Building AI that can master complex cooperative games with hidden information", "url": "https://ai.facebook.com/blog/building-ai-that-can-master-complex-cooperative-games-with-hidden-information/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Adam Lerer", "Hengyuan Hu", "Jakob Foerster", "Noam Brown"], "summaries": ["This paper improves on the state of the art for AI agents playing <@Hanabi@>(@The Hanabi Challenge: A New Frontier for AI Research@), a cooperative multiplayer game that is challenging because of distributed hidden information and restricted communication. \n\nThe approach works by improving a baseline policy using search. In the simplest case, only one agent performs search while all other agents follow a fixed policy, such that the problem is reduced to search in a POMDP. This alone leads to relevant improvements, even when the search is very shallow. The fixed policies help because they allow the searching agent to correctly update its belief about hidden information when it sees other agents behaving (as it knows how other agents would behave given different states of the hidden information). This idea can be generalized to the case where all agents perform search by letting the agents simulate each other's search process. This can get expensive quickly as agent A's beliefs in the second round also depend on agent B's search process in counterfactual scenarios in the first round, such that agent B's search in round two also has to simulate these counterfactuals. A computation budget is introduced to make this computationally feasible and all agents know that the other agents will only use search in a turn if the cost of this is below the budget. \n\nAs search can be performed on top of any policy and allows to leverage compute during inference, not just training, it nicely complements more direct approaches using deep RL, which is a theme that has also been observed in Go and Poker."], "venue": "arXiv", "opinion": "This solution seems stunningly obvious in retrospect. While the authors informally report that their approach improves robustness to replacing other agents by humans, the example they give seems to indicate that this is because search prevents obvious mistakes in novel situations induced by human behaviour. Thus, I still expect (implicit) <@human models@>(@Thoughts on Human Models@) to be a vital component of human-machine cooperation. ", "highlight": false, "read_more": "Paper: Improving Policies via Search in Cooperative Partially Observable Games", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #100", "newsletter_category": "Reinforcement learning"}
{"id": "7158d09a9cc4252cd98e7add55e30016", "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning", "url": "http://arxiv.org/abs/2004.04136", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Aravind Srinivas*", "Michael Laskin*", "Pieter Abbeel"], "summaries": ["This paper applies contrastive learning (discussed above) to reinforcement learning. In RL, rather than training in an initial unsupervised phase, the contrastive learning happens alongside the RL training, and so serves as an auxiliary objective to speed up learning. They use random crops for their data augmentation."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #99", "newsletter_category": "Reinforcement learning"}
{"id": "82226549e85661c911c60f282093f2ef", "title": "The Ingredients of Real World Robotic Reinforcement Learning", "url": "https://bair.berkeley.edu/blog/2020/04/27/ingredients/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Henry Zhu*", "Justin Yu*", "Abhishek Gupta*", "Dhruv Shah", "Kristian Hartikainen", "Avi Singh", "Vikash Kumar", "Sergey Levine"], "summaries": ["Suppose we wanted to train a robot to perform a task in the real world, and we didn't want to deal with the headache of sim-to-real transfer. Typically, since all of our experience must be collected in the real world, we would need a human to reset the robot to its initial state. The key idea of this paper is that the point of resets is to ensure that the robot explores a diversity of states causing it to learn a robust policy; this can be achieved by learning a _perturbation policy_ whose objective is to take the robot to states it hasn't visited before. They then combine this with representation learning (so that they can learn from pixels) and use a classifier that distinguishes goal states from non-goal states as the reward function, to get a fully automated setup where once you start the robot's training, it trains itself without any human in the loop."], "venue": "ICLR 2020", "opinion": "This is a cool proof of concept, but the learned perturbation policy can only take you so far -- no learned perturbation policy is going to allow you to e.g. pick up an object after it is dropped, as you would want if you're training a robot to <@manipulate a Rubik's cube@>(@Solving Rubik’s Cube with a Robot Hand@). It seems hard to overcome this sort of problem in a fully automated and learned way (though perhaps you could use more classical techniques to have a \"hardcoded\" but still automated reset policy).", "highlight": false, "read_more": "Paper: The Ingredients of Real World Robotic Reinforcement Learning", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #98", "newsletter_category": "Reinforcement learning"}
{"id": "87868505fe811d5f145903314a31ac6a", "title": "Massively Scaling Reinforcement Learning with SEED RL", "url": "https://ai.googleblog.com/2020/03/massively-scaling-reinforcement.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Lasse Espeholt*", "Raphaël Marinier*", "Piotr Stanczyk*", "Ke Wang", "Marcin Michalski"], "summaries": ["Deep learning has <@historically@>(@AI and Compute@) seen many improvements as a result of scaling to larger models with larger amounts of computation, as with the months-long training of <@OpenAI Five@>(@Dota 2 with Large Scale Deep Reinforcement Learning@) and <@AlphaStar@>(@AlphaStar: Mastering the Real-Time Strategy Game StarCraft II@). SEED RL redesigns the architecture of distributed RL to enable better machine utilization and communication and achieves an order of magnitude improvement in training speed. \n\nCurrent distributed architectures typically separate machines into *actors* and *learners*. *Actors* are typically CPUs that simulate the environment, and run inference to predict agent actions. They then send *trajectories* to the *learners*. *Learners* are typically accelerators (GPUs or TPUs), which are responsible for training the model. They then send the updated model parameters to the *actors*. \n\nSEED RL addresses 3 main issues in this setup:\n1. Inference could benefit from specialized accelerators\n2. Sending model parameters and states requires high bandwidth.\n3. Environment simulation and inference are very different tasks and having them on the same machine makes it hard to utilize the resource efficiently. \n\nThe solution is to instead have actors **only** simulate the environment. After each step, they send the resulting observation to the *learner*, which is responsible for both training and inference, possibly split on separate hardware). It then sends back just the actions to the environment. This enables each piece of hardware to be used for its designed purpose. Since they now need to communicate at each step, they use gRPC to minimize latency."], "venue": "ICLR 2020", "opinion": "Given how compute intensive deep RL is, I think it is quite useful to enable cheaper and faster training before these algorithms can be broadly useful. Their claimed speedup is quite impressive, and I like how well they can separate the training and inference from the simulation. I expect that specialized hardware for both training and inference will soon become the norm and SEED RL seems like it will scale well as those accelerators become faster. One thing to note is that this architecture seems very specifically tuned to the problem of games where CPUs can efficiently simulate the environment and it does not improve the sample efficiency for situations where we can’t run lots of simulations.\n\n**Rohin's opinion:** It was quite surprising to me that this worked as well as it did: this model requires communication across machines _at every timestep of the environment_, which intuitively means that latency should be a major bottleneck, while the standard model only requires communication once per batch of trajectories.", "highlight": false, "read_more": "Paper: SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #95", "newsletter_category": "Reinforcement learning"}
{"id": "1e73c3ce193ddd177b3ccd78d2ec7c6d", "title": "Robots Learning to Move like Animals", "url": "https://bair.berkeley.edu/blog/2020/04/03/laikago/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Xue Bin Peng", "Erwin Coumans", "Tingnan Zhang", "Tsang-Wei Lee", "Jie Tan", "Sergey Levine"], "summaries": ["<@Previous work@>(@Learning Acrobatics by Watching YouTube@) has suggested that we can get good policies by estimating and imitating poses. This work takes this idea and tries to make it work with sim-to-real transfer. Domain randomization would result in a policy that must be robust to all the possible values of the hidden parameters (such as friction). To make the problem easier, they do domain randomization, but give the agent access to (a latent representation of) the hidden parameters, so that its policy can depend on the hidden parameters. Then, to transfer to the real world, they simply need to search over the latent representation of the hidden parameters in order to find one where the policy actually works in the real world. In practice, they can adapt to the real world with just 8 minutes of real world data."], "venue": "arXiv", "opinion": "This is a cool improvement to domain randomization: it seems like it should be distinctly easier to learn a policy that is dependent on the hidden parameters, and that seems to come at the relatively low cost of needing just a little real world data.", "highlight": false, "read_more": "Paper: Learning Agile Robotic Locomotion Skills by Imitating Animals", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #94", "newsletter_category": "Reinforcement learning"}
{"id": "168e305914579c01ddd2a3c940b63303", "title": "Planning with Goal-Conditioned Policies", "url": "http://arxiv.org/abs/1911.08453", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Soroush Nasiriany*", "Vitchyr H. Pong*", "Steven Lin", "Sergey Levine"], "summaries": ["Reinforcement learning can learn complex skills by interacting with the environment. However, temporally extended or long-range decision-making problems require more than just well-honed reactions. **In this paper, the authors investigate whether or not they can obtain the benefits of action planning found in model-based RL without the need to model the environment at the lowest level.** The authors propose a model-free planning framework that learns low-level goal-conditioned policies that use their value functions as implicit models. Goal-conditioned policies are policies that can be trained to reach a goal state provided as an additional input. Given a goal-conditioned policy, the agent can then plan over intermediate subgoals (goal states) using a goal-conditioned value function to estimate reachability. Since the state space is large, the authors propose what they call latent embeddings for abstracted planning (LEAP), which is able to find useful subgoals by first searching a much smaller latent representation space and then planning a sequence of reachable subgoals that reaches the target state. In experiments, LEAP significantly outperforms prior algorithms on 2D navigation and push/reach tasks. Moreover, their method can get a quadruped ant to navigate around walls which is difficult because much of the planning happens in configuration space. This shows that LEAP is able to be extended to non-visual domains."], "venue": "NeurIPS 2019", "opinion": "The presentation of the paper is clear. In particular, the idea of planning a sequence of maximally feasible subgoals seems particularly intuitive. In general, I think that LEAP relies on the clever idea of reusing trajectory data to augment the data-set for the goal-conditioned policy. As the authors noted, the question of exploration was mostly neglected. I wonder how well the idea of reusing trajectory data generalizes to the general exploration problem.\n\n**Rohin's opinion:** The general goal of inferring hierarchy and using this to plan more efficiently seems very compelling but hard to do well; this is the goal in most hierarchical RL algorithms and <@Learning Latent Plans from Play@>.", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #83", "newsletter_category": "Reinforcement learning"}
{"id": "fe2db00a7c3cf3a07976614d43dd682a", "title": "Dream to Control: Learning Behaviors by Latent Imagination", "url": "http://arxiv.org/abs/1912.01603", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Danijar Hafner", "Timothy Lillicrap", "Jimmy Ba", "Mohammad Norouzi"], "summaries": ["In the past year or so, the idea of learning a transition model in a latent space has gained traction, motivated by the hope that such an approach could combine the best of the worlds of model-free and model-based learning. The central appeal of learning a latent transition model is that it allows you to imagine future trajectories in a potentially high-dimensional, structured observation space without actually having to generate those high-dimensional observations.\n\nDreamer builds on a prior model by the same authors, <@PlaNet@>(@Learning Latent Dynamics for Planning from Pixels@), which learned a latent representation of the observations, p(s|o), trained both through a VAE-style observation reconstruction loss, and also a transition model q(s-next|s, a), which is trained to predict the state at the next step given only the state at the prior one, with no next-step observation data. Together, these two models allow you to simulate action-conditioned trajectories through latent state space. If you then predict reward from state, you can use this to simulate the value of trajectories. Dreamer extends on this by also training an Actor Critic-style model on top of states to predict action and value, forcing the state representation to not only capture next-step transition information, but also information relevant to predicting future rewards. The authors claim this extension makes their model more able to solve long-horizon problems, because the predicted value function can capture far-future rewards without needing to simulate the entire way there. Empirically, there seems to be reasonable evidence that this claim plays out, at least within the fairly simple environments the model is tested in."], "venue": "arXiv", "opinion": "The extension from PlaNet (adding actor-critic rather than direct single-step reward prediction) is relatively straightforward, but I think latent models are an interesting area - especially if they eventually become at all possible to interpret - and so I'm happy to see more work in this area.", "highlight": false, "read_more": "", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #83", "newsletter_category": "Reinforcement learning"}
{"id": "7c4a584bba95330784959e6fade1f9e9", "title": "Gym Retro, again", "url": "https://blog.openai.com/gym-retro/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Vicki Pfau et al"], "summaries": ["OpenAI is releasing the full version of Gym Retro, with over a thousand games, and a tool for integrating new games into the framework. And of course we see new games in which RL agents find infinite loops that give them lots of reward -- Cheese Cat-Astrophe and Blades of Vengeance."], "venue": "OpenAI Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #8", "newsletter_category": "Reinforcement learning"}
{"id": "2bdf1081dfd4aad7129b1ce36ea08e53", "title": "Adaptive Online Planning for Continual Lifelong Learning", "url": "http://arxiv.org/abs/1912.01188", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Kevin Lu", "Igor Mordatch", "Pieter Abbeel"], "summaries": ["Lifelong learning is distinct from standard RL benchmarks because\n 1. The environment is *sequential* rather than *episodic*; it is never reset to a new start state.\n 2. The current *transition* and *reward* function are given, but they change over time.\n\nGiven this setup, there are two basic approaches: first, run model-free learning on simulated future trajectories and rerun it every time the dynamics change, and second, run model-based planning on the current model. If you ignore computational constraints, these should be equivalent; however, in practice, the second option tends to be more computationally efficient. The contribution of this work is to make this more efficient, rather than improving final performance, by starting with the second option and then using model-free learning to “distill” the knowledge produced by the model-based planner allowing for more efficient planning in the future. \n\nSpecifically, Adaptive Online Planning (AOP) balances between the model-based planner MPPI (a variant of MPC) and the model-free algorithm TD3. MPPI uses the given model to generate a trajectory up to a horizon and then uses an ensemble of value functions to estimate the cumulative reward. This knowledge is then distilled into TD3 for later use as a prior for MPPI. During future rollouts, the variance and Bellman error of the value function ensemble are used to determine how long the horizon should be, and therefore how much computation is used."], "venue": "NeurIPS Deep RL 2019", "opinion": "I agree that episodic training and fixed world dynamics seem like unlikely conditions for most situations we would expect agents to encounter in the real world. Accounting for them seems particularly important to ensure safe exploration and robustness to distributional shift, and I think that these environments could serve as useful benchmarks for these safety problems as well.", "highlight": false, "read_more": "", "summarizer": "Nicholas", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #79", "newsletter_category": "Reinforcement learning"}
{"id": "539351102e3553b8c63b0d257e8c577e", "title": "Model-Based Reinforcement Learning: Theory and Practice", "url": "https://bair.berkeley.edu/blog/2019/12/12/mbpo/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine"], "summaries": ["This post provides a broad overview of model-based reinforcement learning, and argues that a learned (explicit) model allows you to generate sample trajectories from the current policy at arbitrary states, correcting for off-policy error, at the cost of introducing model bias. Since model errors compound as you sample longer and longer trajectories, the authors propose an algorithm in which the model is used to sample short trajectories from states in the replay buffer, rather than sampling trajectories from the initial state (which are as long as the task's horizon)."], "venue": "BAIR Blog", "opinion": "", "highlight": false, "read_more": "Paper: When to Trust Your Model: Model-Based Policy Optimization", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #78", "newsletter_category": "Reinforcement learning"}
{"id": "3801172332f4993aeb5c3238a636e411", "title": "Stabilizing Transformers for Reinforcement Learning", "url": "http://arxiv.org/abs/1910.06764", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Emilio Parisotto", "H. Francis Song", "Jack W. Rae", "Razvan Pascanu", "Caglar Gulcehre", "Siddhant M. Jayakumar", "Max Jaderberg", "Raphael Lopez Kaufman", "Aidan Clark", "Seb Noury", "Matthew M. Botvinick", "Nicolas Heess", "Raia Hadsell"], "summaries": ["Transformers have been incredibly successful in domains with sequential data. Naturally, one might expect transformers to be useful in partially observable RL problems. However, transformers have complex implementations making them difficult to use in an already challenging domain for learning. In this paper, the authors explore a novel transformer architecture they call Gated Transformer-XL (GTrXL) that can be used in the RL setting. The authors succeed in stabilizing training with a reordering of the layer normalization coupled with the addition of a new gating mechanism located at key points in the submodules of the transformer. The new architecture is tested on DMlab-30, a suite of RL tasks including memory, and shows improvement over baseline transformer architectures and the neural computer architecture MERLIN. Furthermore, GTrXL learns faster and is more robust than a baseline transformer architecture. "], "venue": "arXiv", "opinion": "This is one of those 'obvious' ideas that turns out to be very difficult to put into practice. I'm glad to see a paper like this simply because the authors do a good job at explaining why a naive execution of the transformer idea is bound to fail. Overall, the architecture seems to be a solid improvement over the TrXL variant. I'd be curious whether or not the architecture is also better in an NLP setting. ", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #76", "newsletter_category": "Reinforcement learning"}
{"id": "7358484d1a6a9776bd0300f8eec97831", "title": "Learning to Learn with Probabilistic Task Embeddings", "url": "https://bair.berkeley.edu/blog/2019/06/10/pearl/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Kate Rakelly*", "Aurick Zhou*", "Deirdre Quillen", "Chelsea Finn", "Sergey Levine"], "summaries": ["This paper proposes a solution to off-policy meta reinforcement learning, an appealing problem because on-policy RL is so sample-intensive, and meta-RL is even worse because it needs to solve a distribution over RL problems. The authors' approach divides the problem into two subproblems: infer an embedding, z, of the current task given context, and learning an optimal policy q function conditioned on that task embedding. At the beginning of each task, z is sampled from the (Gaussian) prior, and as the agent gains more samples of that particular task, it updates its posterior over z, which can be thought of as refining its guess as to which task it's been dropped into this time. The trick here is that this subdividing of the problem allows it to be done mostly off-policy, because you only need to use on-policy learning for the task inference component (predicting z given current task transitions), and can learn the Actor-Critic model conditioned on z with off-policy data. The method works by alternating between these two learning modes."], "venue": "ICML 2019", "opinion": "I enjoyed this; it's a well-written paper that uses a few core interesting ideas (posterior sampling over a task distribution, representation of a task distribution as a distribution of embedding vectors passed in to condition Q functions), and builds them up to make a method that achieves some impressive empirical results. ", "highlight": false, "read_more": "Efficient Off-Policy Meta-RL via Probabilistic Context Variables", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #63", "newsletter_category": "Reinforcement learning"}
{"id": "8c660af979cab342ad692f68e8e53c5e", "title": "Diagnosing Bottlenecks in Deep Q-learning Algorithms", "url": "http://arxiv.org/abs/1902.10250", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Justin Fu", "Aviral Kumar", "Matthew Soh", "Sergey Levine"], "summaries": ["While the PreQN paper used a theoretical approach to tackle Deep Q-Learning algorithms, this one takes an empirical approach. Their results:\n - Small neural nets cannot represent Q*, and so have undesired bias that results in worse performance. However, they also have convergence issues, where the Q-function they actually converge to is significantly worse than the best Q-function that they could express. Larger architectures mitigate both of these problems.\n - When there are more samples, we get a lower validation loss, showing that we are overfitting. Despite this, larger architectures are better, because the performance loss from overfitting is not as bad as the performance loss from having a bad bias. A good early stopping criterion could help with this.\n - To study how non-stationarity affects DQL algorithms, they study a variant where the Q-function is a moving average of the past Q-functions (instead of the full update), which means that the target values don't change as quickly (i.e. it is closer to a stationary target). They find that non-stationarity doesn't matter much for large architectures.\n - To study distribution shift, they look at the difference between the expected Bellman error before and after an update to the parameters. They find that distribution shift doesn't correlate much with performance and so is likely not important.\n - Algorithms differ strongly in the distribution over (s, a) pairs that the DQL update is computed over. They study this in the absence of sampling (i.e. when they simply weight all possible (s, a) pairs, rather than just the ones sampled from a policy) and find that distributions that are \"close to uniform\" perform best. They hypothesize that this is the reason that experience replay helps -- initially an on-policy algorithm would take samples from a single policy, while experience replay adds samples from previous versions of the policy, which should increase the coverage of (s, a) pairs.\n\nTo sum up, the important factors are using an expressive neural net architecture, and designing a good sampling distribution. Inspired by this, they design Adversarial Feature Matching (AFM), which like Prioritized Experience Replay (PER) puts more weight on samples that have high Bellman error. However, unlike PER, AFM does not try to reduce distribution shift via importance sampling, since their experiments found that this was not important."], "venue": "arXiv", "opinion": "This is a great experimental paper, there's a lot of data that can help understand DQL algorithms. I wouldn't take the results too literally, since insights on simple environments may not generalize to more complex environments. For example, they found overfitting to be an issue in their environments -- it's plausible to me that with more complex environments (think Dota/StarCraft, not Mujoco) this reverses and you end up underfitting the data you have. 
Nonetheless, I think data like this is particularly valuable for coming up with an intuitive theory of how deep RL works, if not a formal one.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #51", "newsletter_category": "Reinforcement learning"}
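As an illustration of the sampling-distribution point, here is a minimal tabular sketch of weighting replay samples by Bellman error without the importance-sampling correction that PER uses; the environment-free replay buffer and the update constants are made up, and this is not the AFM algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9
Q = np.zeros((n_states, n_actions))

# Made-up replay buffer of (s, a, r, s') tuples for illustration only.
buffer = [(rng.integers(n_states), rng.integers(n_actions),
           rng.normal(), rng.integers(n_states)) for _ in range(200)]

for _ in range(50):
    # Bellman error of every stored transition under the current Q.
    errors = np.array([abs(r + gamma * Q[sp].max() - Q[s, a])
                       for s, a, r, sp in buffer])
    # Sample proportionally to error, with no importance-sampling correction,
    # which is the aspect of PER the paper found unnecessary.
    probs = (errors + 1e-3) / (errors + 1e-3).sum()
    idx = rng.choice(len(buffer), size=32, p=probs)
    for i in idx:
        s, a, r, sp = buffer[i]
        target = r + gamma * Q[sp].max()
        Q[s, a] += 0.1 * (target - Q[s, a])
```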
{"id": "f30a257a7ce55d8494b4a386e322103f", "title": "Simulated Policy Learning in Video Models", "url": "https://ai.googleblog.com/2019/03/simulated-policy-learning-in-video.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Miłos", "Błażej Osiński", "Roy H Campbell", "Konrad Czechowski", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine", "Ryan Sepassi", "George Tucker and Henryk Michalewski"], "summaries": ["This blog post and the associated [paper](https://arxiv.org/abs/1903.00374) tackle model-based RL for Atari. The recent <@world models@>(@Recurrent World Models Facilitate Policy Evolution@) paper proposed first learning a model of the world by interacting with the environment using a random policy, and then using the model to simulate the environment and training a control policy using those simulations. (This wasn't it's main point, but it was one of the things it talked about.) The authors take this idea and put it in an iterative loop: they first train the world model using experience from a random policy, then train a policy using the world model, retrain the world model with experience collected using the newly trained policy, retrain the policy, and so on. This allows us to correct any mistakes in the world model and let it adapt to novel situations that the control policy discovers. This allows them to train agents that can play Atari with only 100K interactions with the environment (corresponding to about two hours of real-time gameplay), though the final performance is lower than the state-of-the-art achieved with model-free RL. See [Import AI](https://jack-clark.net/2019/04/01/import-ai-140-surveilling-a-city-via-the-cityflow-dataset-25000-images-of-chinese-shop-signs-and-the-seven-traps-of-ai-ethics/) for more details."], "venue": "Google AI Blog", "opinion": "This work follows the standard pattern where model-based RL is more sample efficient but reaches worse final performance compared to model-free RL. Let's try to explain this using the same story as in the rest of this newsletter.\n\nThe sample efficiency comes from the fact that they learn a world model that can predict the future, and then use that model to solve the control problem (which has zero sample cost, since you are no longer interacting with the environment). It turns out that predicting the future is \"easier\" than selecting the optimal action, and so the world model can be trained in fewer samples than it would take to solve the control problem directly. Why is the world model \"easier\" to learn? One possibility is that solving the control problem requires you to model the world anyway, and so must be a harder problem. If you don't know what your actions are going to do, you can't choose the best one. I don't find this very compelling, since there are lots of aspects of world modeling that are irrelevant to the control problem -- you don't need to know exactly how the background art will change in order to choose what action to take, but world modeling requires you to do this. I think the real reason is that world modeling benefits from much more supervision -- rather than getting a sparse reward signal over a trajectory, you get a full grid of pixels every timestep that you were supposed to predict. This gives you many orders of magnitude more \"supervision information\" per sample, and so it makes it easier to learn. 
(This is basically the same argument as in [Yann Lecun's cake analogy](https://medium.com/syncedreview/yann-lecun-cake-analogy-2-0-a361da560dae).)\n\nWhy does it lead to worse performance overall? The policy is now being trained using rollouts that are subtly wrong, and so instead of specializing to the true Atari dynamics it will be specialized to the world model dynamics, which is going to be somewhat different and should lead to a slight dip in performance. (Imagine a basketball player having to shoot a ball that was a bit heavier than usual -- she'll probably still be good, but not as good as with a regular basketball.) In addition, since the world model is supervised by pixels, any small objects are not very important to the world model (i.e. getting them wrong does not incur much loss), even if they are very important for control. In fact, they find that bullets tend to disappear in Atlantis and Battle Zone, which is not good if you want to learn to play those games.\n\nI'm not sure if they shared weights between the world model and the control policy. If they did, then they would also have the problem that the features that are useful for predicting the future are not the same as the features that are useful for selecting actions, which would also cause a drop in performance. My guess is that they didn't share weights for precisely this reason, but I'm not sure.", "highlight": false, "read_more": "Model-Based Reinforcement Learning for Atari", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #51", "newsletter_category": "Reinforcement learning"}
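A minimal sketch of the alternating loop described in the summary, with `collect_experience`, `train_world_model`, and `train_policy_in_model` as hypothetical placeholders; the actual system trains a video-prediction model and a PPO policy, and the step counts here are only indicative.

```python
def collect_experience(policy, env_steps):
    """Placeholder: interact with the real environment and return transitions."""
    return [("obs", "action", "reward", "next_obs") for _ in range(env_steps)]

def train_world_model(model, experience):
    """Placeholder: fit a next-frame prediction model to real transitions."""
    return model

def train_policy_in_model(policy, model, rollouts):
    """Placeholder: run RL entirely inside the learned model (zero real samples)."""
    return policy

model, policy = None, lambda obs: "noop"
experience = collect_experience(policy, env_steps=6400)  # random policy at first

for iteration in range(15):
    model = train_world_model(model, experience)
    policy = train_policy_in_model(policy, model, rollouts=1000)
    # Fresh real data from the improved policy corrects model errors in the
    # novel situations that the policy now reaches.
    experience += collect_experience(policy, env_steps=6400)
```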
{"id": "1d5e584d924dfc0784d1fe6f4548a359", "title": "Unifying Physics and Deep Learning with TossingBot", "url": "https://ai.googleblog.com/2019/03/unifying-physics-and-deep-learning-with.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Andy Zeng"], "summaries": ["TossingBot is a system that learns how to pick up and toss objects into bins using deep RL. The most interesting thing about it is that instead of using neural nets to directly predict actions, they are instead used to predict _adjustments_ to actions that are computed by a physics-based controller. Since the physics-based controller generalizes well to new situations, TossingBot is also able to generalize to new tossing locations."], "venue": "Google AI Blog", "opinion": "This is a cool example of using structured knowledge in order to get generalization while also using deep learning in order to get performance. I also recently came across [Residual Reinforcement Learning for Robot Control](https://arxiv.org/abs/1812.03201), which seems to have the same idea of combining deep RL with conventional control mechanisms. I haven't read either of the papers in depth, so I can't compare them, but a _very_ brief skim suggests that their techniques are significantly different.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #51", "newsletter_category": "Reinforcement learning"}
{"id": "4d2adbffbfc5d94060d2d09d3facb924", "title": "Assessing Generalization in Deep Reinforcement Learning (blog post)", "url": "https://bair.berkeley.edu/blog/2019/03/18/rl-generalization/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Charles Packer and Katelyn Guo"], "summaries": ["This is a blog post summarizing <@Assessing Generalization in Deep Reinforcement Learning@>."], "venue": "BAIR Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #50", "newsletter_category": "Reinforcement learning"}
{"id": "7f60cefde83434d6eeaa83984f6a3bb7", "title": "TDM: From Model-Free to Model-Based Deep Reinforcement Learning", "url": "http://bair.berkeley.edu/blog/2018/04/26/tdm/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Vitchyr Pong"], "summaries": ["In many tasks, we have hierarchical structure where we want to plan at a high level, but to execute the low-level actions we want to rely on learning through experience. For example, when biking from UC Berkeley to the Golden Gate Bridge, you definitely want to plan in advance the route you'll take (as opposed to learning through trial-and-error), but you want to learn how to bike through trial-and-error. Temporal Difference Models allow you to do model-based planning at the high level, and model-free learning at the low level. Specifically, you learn a function Q(s1, a, s2, T), which intuitively says \"if I start from state s1, taking action a, and running for T steps, how close can I get to state s2\". It turns out that this can be thought of as a Q function and so can be trained using standard model-free RL techniques. Note that the constraint Q(s1, a, s2, T) = 0 says that it is possible to get from s1 to s2 in T steps after first taking action a.\n\nOne standard way to solve model-based RL is to search for a sequence of states and actions (s0, a0, s1, a1, ...) that is feasible (agrees with the dynamics) and maximizes the reward, and then take the first action from that sequence. Using TDMs, we can now search for the sequence (s0, a0, sK, aK, s2K, a2k, ...) that is feasible and maximizes reward. The feasibility requirement is expressed by the constraint Q(s0, a0, sK, K) = 0."], "venue": "BAIR Blog", "opinion": "Firstly, the blog post is very readable and provides a great introduction (it's much more friendly than my summary).\n\nThis technique does require that we can reinterpret any state as a goal state, similar to the assumption in [Hindsight Experience Replay](https://arxiv.org/abs/1707.01495) (HER). They do compare to HER, and find that HER doesn't do very well, which I was quite surprised by. Clicking through to the paper, it turns out the authors were surprised as well, but then realized that this is because HER is designed to work with sparse reward problems, whereas they were evaluating on problems with relatively shaped rewards.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #5", "newsletter_category": "Reinforcement learning"}
{"id": "fb4b51f33b45eab0be3ba8617b970f23", "title": "Long-Range Robotic Navigation via Automated Reinforcement Learning", "url": "https://ai.googleblog.com/2019/02/long-range-robotic-navigation-via.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Aleksandra Faust and Anthony Francis"], "summaries": ["How can we get robots that successfully navigate in the real world? One approach is to use a high-level route planner that uses a learned control policy over very short distances (10-15 meters). The control policy is learned using deep reinforcement learning, where the network architecture and reward shaping is also learned via neural architecture search (or at least something very similar). The simulations have enough noise that the learned control policy transfers well to new environments. Given this policy as well as a floorplan of the environment we want the robot to navigate in, we can build a graph of points on the floorplan, where there is an edge between two points if the robot can safely navigate between the two points using the learned controller (which I _think_ is checked in simulation). At execution time, we can find a path to the goal in this graph, and move along the edges using the learned policy. They were able to build a graph for the four buildings at the Google main campus using 300 workers over 4 days. They find that the robots are very robust in the real world. See also [Import AI](https://jack-clark.net/2019/03/04/import-ai-136-what-machine-learning-power-infrastructure-means-for-humanity-new-gca-benchmarkdataset-challenges-image-captioning-systems-and-google-uses-frankenrl-to-create-more-mobile-robot/)."], "venue": "Google AI Blog", "opinion": "This is a great example of a pattern that seems quite common: once we automate tasks using end-to-end training that previously required more structured approaches, new more complex tasks will arise that will use the end-to-end trained systems as building blocks in a bigger structured approach. In this case, we can now train robots to navigate over short distances using end-to-end training, and this has been used in a structured approach involving graphs and waypoints to create robots that can traverse larger distances.\n\nIt's also an example of what you can do when you have a ton of compute: for the learned controller, they learned both the network architecture and the reward shaping. About the only thing that had to be explicity specified was the sparse true reward. (Although I'm sure in practice it took a lot of effort to get everything to actually work.)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #48", "newsletter_category": "Reinforcement learning"}
{"id": "03f131c18ebf2b38fb00ad6f1e1ba077", "title": "Neural MMO", "url": "https://blog.openai.com/neural-mmo/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["OpenAI"], "summaries": ["Neural MMO is \"a massively multiagent game environment for reinforcement learning agents\". It was designed to be persistent (with concurrent learning and no environment resets), large-scale, efficient and expandable. Agents need to traverse an environment to obtain food and water in order to survive for longer (the metric for which they are rewarded), and are also able to engage in combat with other agents. Agents trained within a larger population explore more and consistently outperform those trained in smaller populations (when evaluated together). The authors note that multiagent training is a curriculum magnifier, not a curriculum in itself, and that the environment must facilitate adaptive pressures by allowing a sufficient range of interactions."], "venue": "OpenAI blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #48", "newsletter_category": "Reinforcement learning"}
{"id": "14b7117db16ebb3bb8ff1c779ee2a4da", "title": "The Hanabi Challenge: A New Frontier for AI Research", "url": "http://arxiv.org/abs/1902.00506", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Nolan Bard", "Jakob N. Foerster", "Sarath Chandar", "Neil Burch", "Marc Lanctot", "H. Francis Song", "Emilio Parisotto", "Vincent Dumoulin", "Subhodeep Moitra", "Edward Hughes", "Iain Dunning", "Shibl Mourad", "Hugo Larochelle", "Marc G. Bellemare", "Michael Bowling"], "summaries": ["The authors propose the cooperative, imperfect-information card game Hanabi as a target for AI research, due to the necessity of reasoning about the beliefs and intentions of other players in order to win. They identify two challenges: firstly, discovering a policy for a whole team that allows it to win (the self-play setting); and secondly, discovering an individual policy that allows an agent to play with an ad-hoc team without previous coordination. They note that successful self-play policies are often very brittle in the ad-hoc setting, which makes the latter the key problem. The authors provide an open-source framework, an evaluation benchmark and the results of existing RL techniques."], "venue": "arXiv", "opinion": "I endorse the goals of this paper, but my guess is that Hanabi is simple enough that agents can solve it using isolated heuristics rather than general reasoning about other agents' beliefs.\n\n*Rohin's opinion:* I'm particularly excited to see more work on ad hoc teamwork, since it seems like very similar to the setting we are in, where we would like to deploy AI system among groups of humans and have things go well. See [Following human norms](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/eBd6WvzhuqduCkYv3) ([AN #42](https://mailchi.mp/f6488137d76c/alignment-newsletter-42)) for more details.", "highlight": false, "read_more": "A cooperative benchmark: Announcing the Hanabi Learning Environment", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #45", "newsletter_category": "Reinforcement learning"}
{"id": "a1d9ddf94e5867dae12770502126c85e", "title": "Recurrent Experience Replay in Distributed Reinforcement Learning", "url": "https://openreview.net/forum?id=r1lyTjAqYX", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Steven Kapturowski", "Georg Ostrovski", "John Quan", "Remi Munos", "Will Dabney"], "summaries": ["See [Import AI](https://jack-clark.net/2018/10/01/import-ai-114-synthetic-images-take-a-big-leap-forward-with-biggans-us-lawmakers-call-for-national-ai-strategy-researchers-probe-language-reasoning-via-hotspotqa/)."], "venue": "ICLR 2019", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #45", "newsletter_category": "Reinforcement learning"}
{"id": "469edd6c700a0118d3f09e69f3684084", "title": "Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions", "url": "http://arxiv.org/abs/1901.01753", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Rui Wang", "Joel Lehman", "Jeff Clune", "Kenneth O. Stanley"], "summaries": ["The POET algorithm uses evolutionary strategies to evolve a population of pairs of tasks and agents. During each iteration, it first generates a new environment by perturbing an existing environment, then optimises each agent for its paired environment, then attempts to transfer agents between existing environments to improve performance (in case one environment turns out to be a useful \"stepping stone\" towards another). New environments are kept if they are neither too hard nor too easy for the current population of agents. This algorithm was tested using the Bipedal Walker environment, where it significantly outperformed standard evolutionary search."], "venue": "CoRL 2018", "opinion": "I think that the \"problem problem\" is going to become increasingly important in RL, and that this is a promising approach. Note that this paper's contribution seems to be mainly that it combines ideas from previous papers on minimal criteria coevolution and innovation engines.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #41", "newsletter_category": "Reinforcement learning"}
{"id": "63f8079093f7555aebe3bfd792e42433", "title": "Natural Environment Benchmarks for Reinforcement Learning", "url": "http://arxiv.org/abs/1811.06032", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Amy Zhang", "Yuxin Wu", "Joelle Pineau"], "summaries": ["This paper notes that RL performance tends to be measured in simple artificial environments - unlike other areas of ML in which using real-world data such as images or text is common. The authors propose three new benchmarks to address this disparity. In the first two, an agent is assigned to a random location in an image, and can only observe parts of the image near it. At every time step, it is able to move in one of the cardinal directions, unmasking new sections of the image, until it can classify the image correctly (task 1) or locate a given object (task 2). The third type of benchmark is adding natural video as background to existing Mujoco or Atari tasks. In testing this third category of benchmark, they find that PPO and A2C fall into a local optimum where they ignore the observed state when deciding the next action."], "venue": "NIPS 2018", "opinion": "While I agree with some of the concerns laid out in this paper, I'm not sure that these benchmarks are the best way to address them. The third task in particular is mainly testing for ability to ignore the \"natural data\" used, which doesn't seem very useful. I think a better alternative would be to replace Atari with tasks in procedurally-generated environments with realistic physics engines. However, this paper's benchmarks do benefit from being much easier to produce and less computationally demanding.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #34", "newsletter_category": "Reinforcement learning"}
{"id": "38164f43050c82814dd392d8608e6ca1", "title": "Woulda, Coulda, Shoulda: Counterfactually-Guided Policy Search", "url": "http://arxiv.org/abs/1811.06272", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Lars Buesing", "Theophane Weber", "Yori Zwols", "Sebastien Racaniere", "Arthur Guez", "Jean-Baptiste Lespiau", "Nicolas Heess"], "summaries": ["This paper aims to alleviate the data inefficiency of RL by using a model to synthesise data. However, even when environment dynamics can be modeled accurately, it can be difficult to generate data which matches the true distribution. To solve this problem, the authors use a Structured Causal Model trained to predict the outcomes which would have occurred if different actions had been taken from previous states. Data is then synthesised by rolling out from previously-seen states. The authors test performance in a partially-observable version of SOKOBAN, in which their system outperforms other methods of generating data."], "venue": "arXiv", "opinion": "This is an interesting approach which I can imagine becoming useful. It would be nice to see more experimental work in more stochastic environments, though.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #34", "newsletter_category": "Reinforcement learning"}
{"id": "92426f0a5821bf78ee2245836a4de653", "title": "Learning Latent Dynamics for Planning from Pixels", "url": "http://arxiv.org/abs/1811.04551", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson"], "summaries": ["The authors introduce PlaNet, an agent that learns an environment's dynamics from pixels and then chooses actions by planning in latent space. At each step, it searches for the best action sequence under its Recurrent State Space dynamics model, then executes the first action and replans. The authors note that having a model with both deterministic and stochastic transitions is critical to learning a good policy. They also use a technique called variational overshooting to train the model on multi-step predictions, by generalising the standard variational bound for one-step predictions. PlaNet approaches the performance of top model-free algorithms even when trained on 50x fewer episodes."], "venue": "arXiv", "opinion": "This paper seems like a step forward in addressing the instability of using learned models in RL. However, the extent to which it's introducing new contributions, as opposed to combining existing ideas, is a little unclear.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #33", "newsletter_category": "Reinforcement learning"}
{"id": "c918acde709b4d3f9b78aac81bf26eb2", "title": "Evolved Policy Gradients", "url": "https://blog.openai.com/evolved-policy-gradients/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Rein Houthooft et al"], "summaries": ["In this meta-learning approach for reinforcement learning, the outer optimization loop proposes a new _loss function_ for the inner loop to optimize (in contrast to eg. MAML, where the outer optimization leads to better initializations for the policy parameters). The outer optimization is done using evolution strategies, while the inner optimization is stochastic gradient descent. The authors see good results on generalization to out-of-distribution tasks, which other algorithms such as RL2 don't achieve."], "venue": "OpenAI Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #3", "newsletter_category": "Reinforcement learning"}
{"id": "557e8517aef45fc25edf5328555b711e", "title": "Open sourcing TRFL: a library of reinforcement learning building blocks", "url": "https://deepmind.com/blog/trfl/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Matteo Hessel", "Miljan Martic", "Diego de Las Casas and Gabriel Barth-Maron"], "summaries": ["DeepMind is open-sourcing a Tensorflow library of \"key algorithmic components\" used in their RL agents. They hope that this will allow less buggy RL code."], "venue": "DeepMind Blog", "opinion": "This continues the trend of being able to easily implement deep learning at higher and higher levels of abstraction. I'm looking forward to using it.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #29", "newsletter_category": "Reinforcement learning"}
{"id": "73a709d21905f32f366b5f75f9fd97ee", "title": "Learning Acrobatics by Watching YouTube", "url": "http://bair.berkeley.edu/blog/2018/10/09/sfv/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Xue Bin (Jason) Peng and Angjoo Kanazawa"], "summaries": ["To imitate human behavior in videos, it is sufficient to estimate the human pose for each frame, to smooth the poses across frames to eliminate any jittery artifacts or mistakes made by the pose estimator, and then to train the robot to match the motion exactly. This results in really good performance that looks significantly better than corresponding deep RL approaches, but of course it relies on having labeled poses to train the pose estimator in addition to the simulator."], "venue": "BAIR Blog", "opinion": "It's quite remarkable how some supervision (poses in this case) can lead to such large improvements in the task. Of course, the promise of deep RL is to accomplish tasks with very little supervision (just a reward function), so this isn't a huge breakthrough, but it's still better than I expected. Intuitively, this works so well because the \"reward\" during the imitation phase is extremely dense -- the reference motion provides feedback after each action, so you don't have to solve the credit assignment problem.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #28", "newsletter_category": "Reinforcement learning"}
{"id": "305b766e2036b9ae9aab0e577081c92f", "title": "Introducing a New Framework for Flexible and Reproducible Reinforcement Learning Research", "url": "https://ai.googleblog.com/2018/08/introducing-new-framework-for-flexible.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Pablo Samuel Castro and Marc G. Bellemare"], "summaries": ["Researchers at Google have released Dopamine, a small framework for RL research on Atari games, with four built-in agents -- DQN, C51, a simplified version of Rainbow, and the recent Implicit Quantile Network. There's a particular emphasis on reproducibility, by providing logs from training runs, training data, etc."], "venue": "Google AI Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #22", "newsletter_category": "Reinforcement learning"}
{"id": "0666221c93207ceb06a787f600c0f9e4", "title": "The International 2018: Results", "url": "https://blog.openai.com/the-international-2018-results/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["OpenAI"], "summaries": ["Two human teams beat OpenAI Five at The International. The games seemed much more like regular Dota, probably because there was now only one vulnerable courier for items instead of five invulnerable ones. This meant that OpenAI Five's strategy of a relentless team attack on the enemy was no longer as powerful, because they couldn't get the health regeneration items they needed to constantly stay alive to continue the attack. It's also possible (but less likely to me) that the matches were more normal because the teams were more even, or because the human teams knew about Five's strategy this time and were countering it in ways that I don't understand."], "venue": "OpenAI Blog", "opinion": "There are still some things that the bots do that seem like bad decisions. You can interpret this a few ways. Five could have learned a large number of heuristics that make it good enough to beat almost all humans, but that break down in edge cases. In this story, Five is not good at learning logical or abstract reasoning, but can compensate for that in the average case with the sheer number of heuristics it can learn. Another interpretation is that Five learns a good representation of Dota which lets it come up with new, novel insights into the game, which we can't see or understand because the representation is alien to us. However, the representation makes it harder to come up with other insights about Dota that we have using our representations of Dota, and as a result Five makes some mistakes that humans can easily recognize as mistakes. I lean towards the first interpretation, but not very strongly.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #21", "newsletter_category": "Reinforcement learning"}
{"id": "75efe27fac4db28929824799733680fa", "title": "Learning Actionable Representations from Visual Observations", "url": "http://arxiv.org/abs/1808.00928", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Debidatta Dwibedi", "Jonathan Tompson", "Corey Lynch", "Pierre Sermanet"], "summaries": ["Prior work on Time Contrastive Networks (TCN)s showed that you can use time as an unsupervised learning signal, in order to learn good embeddings of states that you can then use in other tasks. This paper extends TCNs to work with multiple frames, so that it can understand motion as well. Consider any two short videos of a task demonstration. If they were taken at different times, then they should be mapped to different embedding vectors (since they correspond to different \"parts\" of the task). On the other hand, if they were taken at the same time (even if from different viewpoints), they should be mapped to the same embedding vector. The loss function based on this encourages the network to learn an embedding for these short videos that is invariant to changes in perspective (which are very large changes in pixel-space), but _is_ different for changes in time (which may be very small changes in pixel-space). They evaluate with a bunch of different experiments."], "venue": "arXiv", "opinion": "Unsupervised learning seems like the way forward to learn rich models of the world, because of the sheer volume of data that you can use.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #19", "newsletter_category": "Reinforcement learning"}
{"id": "d8697df0341453d1ea4a97ec72ab97fd", "title": "Learning Plannable Representations with Causal InfoGAN", "url": "http://arxiv.org/abs/1807.09341", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Thanard Kurutach", "Aviv Tamar", "Ge Yang", "Stuart Russell", "Pieter Abbeel"], "summaries": ["Hierarchical reinforcement learning aims to learn a hierarchy of actions that an agent can take, each implemented in terms of actions lower in the hierarchy, in order to get more efficient planning. Another way we can achieve this is to use a classical planning algorithm to find a sequence of _waypoints_, or states that the agent should reach that will allow it to reach its goal. These waypoints can be thought of as a high-level plan. You can then use standard RL algorithms to figure out how to go from one waypoint to the next. However, typical planning algorithms that can produce a sequence of waypoints require very structured state representations, that were designed by humans in the past. How can we learn them directly from data? This paper proposes Causal InfoGAN. They use a GAN where the generator creates adjacent waypoints in the sequence, while the discriminator tries to distinguish between waypoints from the generator and pairs of points sampled from the true environment. This incentivizes the generator to generate waypoints that are close to each other, so that we can use an RL algorithm to learn to go from one waypoint to the next. However, this only lets us generate adjacent waypoints. In order to use this to make a sequence of waypoints that gets from a start state to a goal state, we need to use some classical planning algorithm. In order to do that, we need to have a structured state representation. GANs do not do this by default. InfoGAN tries to make the latent representation in a GAN more meaningful by providing the generator with a \"code\" (a state in our case) and maximizing the mutual information of the code and the output of the generator. In this setting, we want to learn representations that are good for planning, so we want to encode information about _transitions_ between states. This leads to the Causal InfoGAN objective, where we provide the generator with a pair of abstract states (s, s'), have it generate a pair of observations (o, o') and maximize the mutual information between (s, s') and (o, o'), so that s and s' become good low-dimensional representations of o and o'. They show that Causal InfoGAN can create sequences of waypoints in a rope manipulation task, that previously had to be done manually."], "venue": "arXiv", "opinion": "We're seeing more and more work combining classical symbolic approaches with the current wave of statistical machine learning from big data, that gives them the best of both worlds. While the results we see are not general intelligence, it's becoming less and less true that you can point to a broad swath of capabilities that AI cannot do yet. I wouldn't be surprised if a combination of symbolic and stastical AI techniques led to large capability gains in the next few years.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #18", "newsletter_category": "Reinforcement learning"}
{"id": "acbc8c467c7c54b69efd86c7f37fa611", "title": "Learning Heuristics for Automated Reasoning through Deep Reinforcement Learning", "url": "http://arxiv.org/abs/1807.08058", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Gil Lederman", "Markus N. Rabe", "Edward A. Lee", "Sanjit A. Seshia"], "summaries": ["The formal methods community uses SAT solvers all the time in order to solve complex search-based problems. These solvers use handtuned heuristics in order to drastically improve performance. The heuristics only affect the choices in the search process, not the correctness of the algorithm overall. Obviously, we should consider using neural nets to learn these heuristics instead. However, neural nets take a long time to run, and SAT solvers have to make these decisions very frequently, so it's unlikely to actually be helpful -- the neural net would have to be orders of magnitude better than existing heuristics. So, they instead do this for QBF (quantified boolean formulas) -- these are PSPACE complete, and the infrastructure needed to support the theory takes more time, so it's more likely that neural nets can actually help. They implement this using a graph neural network and engineer some simple features for variables and clauses. (Feature engineering is needed because there can hundreds of thousands of variables, so you can only have ~10 numbers to describe the variable.) It works well, doing better than the handcoded heuristics."], "venue": "arXiv", "opinion": "For over a year now people keep asking me whether something like this is doable, since it seems like an obvious win combining PL and ML, and why no one has done it yet. I've mentioned the issue about neural nets being too slow, but it still seemed doable, and I was really tempted to do it myself. So I'm really excited that it's finally been done!\n\nOh right, AI alignment. Yeah, I do actually think this is somewhat relevant -- this sort of work could lead to much better theorem provers and formal reasoning, which could make it possible to create AI systems with formal guarantees. I'm not very optimistic about this approach myself, but I know others are.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #17", "newsletter_category": "Reinforcement learning"}
{"id": "eaac8b623998134bcd41013428919f70", "title": "Visual Reinforcement Learning with Imagined Goals", "url": "http://arxiv.org/abs/1807.04742", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Ashvin Nair", "Vitchyr Pong", "Murtaza Dalal", "Shikhar Bahl", "Steven Lin", "Sergey Levine"], "summaries": ["[Hindsight Experience Replay](https://blog.openai.com/ingredients-for-robotics-research/) (HER) introduced the idea of accelerating learning with sparse rewards, by taking trajectories where you fail to achieve the goal (and so get no reward, and thus no learning signal) and replacing the actual goal with an \"imagined\" goal chosen in hindsight such that you actually achieved that goal, which means you get reward and can learn. This requires that you have a space of goals such that for any trajectory, you can come up with a goal such that the trajectory achieves that goal. In practice, this means that you are limited to tasks where the goals are of the form \"reach this goal state\". However, if your goal state is an image, it is very hard to learn how to act in order to reach any possible image goal state (even if you restrict to realistic ones), since the space is so large and unstructured. The authors propose to first learn a structured latent representation of the space of images using a variational autoencoder (VAE), and then use that structured latent space as the space of goals which can be achieved. They also use Q-learning instead of DDPG (which is what HER used), so that they can imagine any goal with a minibatch (s, a, s') and learn from it (whereas HER/DDPG is limited to states on the trajectory)."], "venue": "arXiv", "opinion": "This is a cool example of a relatively simple yet powerful idea -- instead of having a goal space over all states, learn a good latent representation and use that as your goal space. This enables unsupervised learning in order to figure out how to use a robot to generally affect the world, probably similarly to how babies explore and learn.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "Reinforcement learning"}
{"id": "bd87194bcf55242a9f211a6495ec6829", "title": "OpenAI Five Benchmark", "url": "https://blog.openai.com/openai-five-benchmark/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The benchmark match for OpenAI Five will be a best-of-three match on August 5 at 2pm. They have already removed many of the restrictions on gameplay, including the two most important ones (wards and Roshan), as well as widening the pool of heroes to choose from 5 to 18."], "venue": "OpenAI Blog", "opinion": "I wonder if they are planning to play a game where both sides draft heroes, or where both sides get a randomly chosen team of 5 heroes. Previously I would have expected that they were choosing randomly, since it seems very difficult to learn solely from experience whether your team choice works well, given that the number of possible drafts is combinatorially large, and the way that the draft affects outcome is very complicated and long term and so hard to capture in a gradient. Now, I'm pretty uncertain -- if deep RL was enough to get this far, it could be good enough to deal with that as well. And it's possible that you can actually do well at drafting with some relatively simple heuristics -- I don't know Dota well enough to say.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "Reinforcement learning"}
{"id": "eafd402e4459f64059e10153a4062b6a", "title": "The Pursuit of (Robotic) Happiness: How TRPO and PPO Stabilize Policy Gradient Methods", "url": "https://towardsdatascience.com/the-pursuit-of-robotic-happiness-how-trpo-and-ppo-stabilize-policy-gradient-methods-545784094e3b", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Cody Marie Wild"], "summaries": ["I barely looked at this -- I think it's an introduction to policy gradient methods for reinforcement learning. It assumes very little background (less than I assume in these summaries)."], "venue": "Towards Data Science", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "Reinforcement learning"}
{"id": "50de92da12e5bbdc50ee440bd8aa5b6a", "title": "Retro Contest: Results", "url": "https://blog.openai.com/first-retro-contest-retrospective/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["John Schulman", "Vicki Pfau", "Alex Nichol", "Christopher Hesse", "Oleg Klimov and Larissa Schiavo"], "summaries": ["OpenAI has announced the results of the [Retro Contest](https://blog.openai.com/retro-contest/). The winning submissions were modified versions of existing algorithms like joint PPO and Rainbow, without any Sonic-specific parts."], "venue": "OpenAI Blog", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #13", "newsletter_category": "Reinforcement learning"}
{"id": "ace4a49db23755dd2008fc0d3100640f", "title": "Fast reinforcement learning through the composition of behaviours", "url": "https://deepmind.com/blog/article/fast-reinforcement-learning-through-the-composition-of-behaviours", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["André Barreto", "Shaobo Hou", "Diana Borsa", "David Silver", "Doina Precup"], "summaries": ["While model-based RL agents can easily adapt their policy to changed rewards on the same environment, planning is expensive and learning good models can be challenging for many tasks. On the other hand, it is challenging to get model-free agents to adapt their policy to a new reward without extensive retraining. An intermediate solution is to use so-called successor features: Instead of a value function **V(π,s)** representing the expected discounted reward for a policy **π** starting in state **s**, successor features are a vector-valued value function **ψ(π,s)** representing an expected discounted feature vector **ϕ**. If our reward equals **r = w ⋅ ϕ** for some weight vector **w**, we can easily obtain the original value function by taking the scalar product of the successor features and the weight vector: **V(π,s) = w ⋅ ψ(π,s)**. Successor features thus allow us to evaluate a fixed policy **π** for all rewards that are linear in **ϕ**, which is called *generalized policy evaluation*.\n\nNow that we can evaluate policies for different preferences, we would like to efficiently find a good policy for a given novel preference. Inspired by human learning that often combines previously learned skills, we employ *generalized policy improvement*. In vanilla policy improvement, we improve upon a policy **π** we can evaluate by choosing the action that maximizes the immediate reward plus the discounted value **V(π,s')** of following **π** starting in the next state **s'**. In generalized policy improvement, we have multiple policies and choose the action that maximizes the reward plus the discounted value of following the best of these policies starting in the next state **s'**. To obtain a policy for the new preference, we \"stitch together\" all policies we learnt for previous preferences and the resulting policy performs at least as good as all of the old policies with respect to the new preference. As generalized policy improvement does not require any additional environment samples, it enables zero-shot transfer to new preferences. Empirically, even if the weight vector **w** has to be learnt from reward signals, generalized policy improvement is very sample efficient. Additional samples can then be used to further improve the policy using standard RL."], "venue": "DeepMind Blog", "opinion": "I really like the idea of successor features. Similar to model-based systems, they allow us to evaluate policies for many different rewards, which can be useful for anticipating problematic behaviour before deploying a system. 
However, note that we still need to execute the policy we obtained by generalized policy improvement to evaluate it for different rewards: The only guarantee we have is that it is better than the previous policies for the reward for which the improvement step was carried out (and potentially some weaker bounds based on the similarity of different rewards).", "highlight": false, "read_more": "Fast reinforcement learning with generalized policy updates", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #125", "newsletter_category": "Reinforcement learning"}
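A minimal tabular sketch of generalized policy evaluation and improvement with successor features, using made-up numbers: evaluating a stored policy for a new preference w is just a dot product w · ψ, and the GPI action maximizes over both stored policies and actions. No environment interaction is needed for this step, which is why the transfer is zero-shot.

```python
import numpy as np

rng = np.random.default_rng(0)
n_policies, n_states, n_actions, feat_dim = 3, 4, 2, 5

# psi[i, s, a] = expected discounted feature vector when taking action a in
# state s and then following stored policy i (made-up values for illustration).
psi = rng.uniform(0, 1, size=(n_policies, n_states, n_actions, feat_dim))

def gpi_action(state, w):
    """Generalized policy improvement: best action under the best stored policy,
    evaluated for the new preference w via Q_i(s, a) = w . psi_i(s, a)."""
    q_values = psi[:, state] @ w            # shape: (n_policies, n_actions)
    return int(q_values.max(axis=0).argmax())

w_new = rng.normal(size=feat_dim)           # a new task, linear in the features
print("GPI action in state 0:", gpi_action(0, w_new))
```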
{"id": "1338289dc9435b96ec9a4b3a5ea85698", "title": "Does On-Policy Data Collection Fix Errors in Off-Policy Reinforcement Learning?", "url": "https://bair.berkeley.edu/blog/2020/03/16/discor/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Aviral Kumar", "Abhishek Gupta", "Sergey Levine"], "summaries": ["Q-learning finds the optimal **Q**-function **Q*** by updating our estimate **Q(s,a)** for a state-action pair **(s,a)** to get closer to the immediate reward plus the discounted **Q**-value for the best action **a'** in the next state **s'**. To generate samples, we usually pick actions corresponding to high **Q**-values. In bandit problems where **s'** is always terminal and thus has all **Q**-values at zero, this leads to **corrective feedback**: If we overestimated an actions value, we will pick this action again soon and are quickly able to correct our misconception. In general MDPs, corrective feedback can be a lot weaker as our update of **Q(s,a)** also depends on the **Q**-values for the next state: To get corrective feedback, we need somewhat correct **Q**-values for the next state, but to get these we likely needed good values for the second to next state, etc. This is particularly problematic with function approximation as updating the current state's **Q**-value might lead to a worse estimate for values down the chain. Consequently, we might see convergence to suboptimal **Q**-functions, instable learning, or problems with sparse or noisy rewards.\n\nTo deal with this, we would like to first prioritize correct estimates for states near the end of the chain. But in many branching problems, we actually observe these states with the least frequency such that their values are influenced disproportionally by other states' values when function approximation is used. The authors' approach, dubbed DisCor, reweighs the data distribution to account for this: We would like to preferentially sample states for which we expect **Q** to be close to **Q*** after the update and thus give more weight to state-action pairs when we expect the error **|Q*-Q|** to already be small. As we don't know **Q***, we rely on a bound for the error at a state-action pair **(s,a)** equal to the sum of the magnitudes of previous updates down the chain plus the initial error, discounted by the usual discount rate **γ** as we move back in time. Thus, the error in the next state one step ago is discounted by **γ**, the error in the second to next state two steps ago is discounted by **γ** squared and the initial error is discounted by **γ** to the **k**. This bound can be approximated by a neural network using a SARSA-like update rule, for which the influence of the unknown initial error fades for large **k** due to the discounting.\n\nDisCor is evaluated on MetaWorld tasks in both the single and multi-task setting and SAC augmented with DisCor clearly outperforms SAC in many settings. Similar improvements can be observed for DQN on Atari."], "venue": "arXiv", "opinion": "Putting less weight on updating values with fluctuating targets seems like a good idea. 
As the approach does not require much additional compute if weights are shared for the **Q**-network and the network estimating the bound, and as it seems quite orthogonal to previous improvements to methods based on **Q**-functions, I would not be surprised if it became somewhat widely used.", "highlight": false, "read_more": "Paper: DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #124", "newsletter_category": "Reinforcement learning"}
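A minimal sketch of the reweighting step, assuming we already have per-transition estimates of the error bound at the next state-action pair (in DisCor this comes from a separate network trained with a SARSA-like rule); the numbers and the temperature are made up.

```python
import numpy as np

def discor_weights(delta_next, gamma=0.99, temperature=10.0):
    """Downweight transitions whose bootstrap targets are likely still wrong.

    delta_next: estimated error bound at (s', a') for each transition in the
    batch (here just an array we made up; DisCor trains a network for it).
    """
    weights = np.exp(-gamma * np.asarray(delta_next) / temperature)
    return weights / weights.sum()

delta_next = [0.1, 2.0, 0.5, 5.0]          # hypothetical bound estimates
batch_td_errors = np.array([1.0, 1.0, 1.0, 1.0])
weights = discor_weights(delta_next)
weighted_loss = float((weights * batch_td_errors ** 2).sum())
print("per-sample weights:", weights, "weighted loss:", weighted_loss)
```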
{"id": "be57c75b9a64d6bcb3f051f51d32f0a6", "title": "Generalized Hindsight for Reinforcement Learning", "url": "http://arxiv.org/abs/2002.11708", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Alexander C. Li", "Lerrel Pinto", "Pieter Abbeel"], "summaries": ["[Hindsight Experience Replay](https://arxiv.org/abs/1707.01495) (HER) introduced the idea of _relabeling_ trajectories in order to provide more learning signal for the algorithm. Intuitively, if you stumble upon the kitchen while searching for the bedroom, you can’t learn much about the task of going to the bedroom, but you can learn a lot about the task of going to the kitchen. So even if the original task was to go to the bedroom, we can simply pretend that the trajectory got rewards as if the task was to go to the kitchen, and then update our kitchen-traversal policy using an off-policy algorithm.\n\nHER was limited to goal-reaching tasks, in which a trajectory would be relabeled as attempting to reach the state at the end of the trajectory. What if we want to handle other kinds of goals? The key insight of this paper is that trajectory relabeling is effectively an inverse RL problem: we want to find the task or goal for which the given trajectory is (near-)optimal. This allows us to generalize hindsight to arbitrary spaces of reward functions.\n\nThis leads to a simple algorithm: given a set of N possible tasks, when we get a new trajectory, rank how well that trajectory does relative to past experience for each of the N possible tasks, and then relabel that trajectory with the task for which it is closest to optimal (relative to past experience). Experiments show that this is quite effective and can lead to significant gains in sample efficiency. They also experiment with other heuristics for relabeling trajectories, which are less accurate but more computationally efficient."], "venue": "arXiv", "opinion": "Getting a good learning signal can be a key challenge with RL. I’m somewhat surprised it took this long for HER to be generalized to arbitrary reward spaces -- it seems like a clear win that shouldn’t have taken too long to discover (though I didn’t think of it when I first read HER).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #117", "newsletter_category": "Reinforcement learning"}
{"id": "eeacd2bda2d109f04e0cbd43942cc9ad", "title": "Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement", "url": "http://arxiv.org/abs/2002.11089", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Benjamin Eysenbach*", "Xinyang Geng*", "Sergey Levine", "Ruslan Salakhutdinov"], "summaries": ["This paper was published at about the same time as the previous one, and has the same key insight. There are three main differences with the previous paper:\n\n1. It shows theoretically that MaxEnt IRL is the “optimal” (sort of) way to relabel data if you want to optimize the multitask MaxEnt RL objective.\n2. In addition to using the relabeled data with an off-policy RL algorithm, it also uses the relabeled data with behavior cloning.\n3. It focuses on fewer environments and only uses a single relabeling strategy (MaxEnt IRL relabeling)."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #117", "newsletter_category": "Reinforcement learning"}
{"id": "8a02846f67855e33c9dfc4371af13e5a", "title": "Using Selective Attention in Reinforcement Learning Agents", "url": "https://ai.googleblog.com/2020/06/using-selective-attention-in.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Yujin Tang", "Duong Nguyen", "David Ha"], "summaries": ["Recently winning a best paper award at GECCO 2020, this work marks a leap forward in the performance capabilities learned by small agents via evolutionary methods. Specifically, it shows that by jointly learning which small fraction of input to attend to, agents with only thousands of free parameters can be trained by an evolutionary strategy to achieve state-of-the-art performance in vision-based control tasks. \n\nThe key pieces include self-attention over input patches, non-differentiable top-K patch selection that affect 'inattentional blindness', and training via CMA-ES. By design, the agent is interpretable as the top-K patches that are selected can be examined. Empirically, the agent has 1000x fewer weights than a competing neural architecture, and the method shows robustness to changes in task-irrelevant inputs, as the agent learns to focus only on task-relevant patches."], "venue": "GECCO 2020", "opinion": "The parallelism afforded by evolutionary methods and genetic algorithms might be valuable in an environment where weak compute is plentiful, so it's exciting to see evidence of such methods besting GPU-hungry deep neural networks. However, I wonder how this would do on sparse reward tasks, where the fitness function is almost always uninformative. Finally, while it generalises to settings where there are task-irrelevant distractions, its deliberately sharp self-attention likely leaves it vulnerable to even simple adversarial attacks. ", "highlight": false, "read_more": "Paper: Neuroevolution of Self-Interpretable Agents", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #109", "newsletter_category": "Reinforcement learning"}
{"id": "a9687f24bb2ace9f3793f4ff6eddeb85", "title": "Improving Sample Efficiency in Model-Free Reinforcement Learning from Images", "url": "http://arxiv.org/abs/1910.01741", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Denis Yarats", "Amy Zhang", "Ilya Kostrikov", "Brandon Amos", "Joelle Pineau", "Rob Fergus"], "summaries": ["Sample efficiency in RL can be improved by using off-policy methods that can reuse the same sample multiple times and by using self-supervised auxiliary losses that help with representation learning, especially when rewards are sparse. This work combines both approaches by proposing to learn a latent state representation using an autoencoder while jontly training an agent on that latent representation using <@SAC@>(@Soft Actor-Critic: Deep Reinforcement Learning for Robotics@). Previous work in the on-policy case shows a positive effect from propagating Actor-Critic gradients through the encoder to improve the usefulness of the encoding for policy learning. However, this destabilizes training in the off-policy case, as changing the encoding to facilitate the actor also changes the Q-function estimate, which in turn changes the actor's goal and can introduce nonstationarity. This problem is circumvented by only propagating the Q-network's gradients through the encoder while blocking the actor's gradients.\n\nThe method strongly outperforms SAC trained on pixels. It also matches the previous state-of-the-art set by model-based approaches on an image-based continuous control task and outperforms them for noisy observations (as these make dynamics models hard to learn). The authors also find that the learnt encodings generalize between tasks to some extent and that reconstructing the true environment state is easier using their latent representation than using a representation obtained by training SAC on pixels directly. "], "venue": "arXiv", "opinion": "Methods like this that can benefit from seeing a lot of action-independent environment observations might be quite important for applying RL to the real world, as this type of data is a lot cheaper to generate. For example, we can easily generate a ton of observations from a factory by equipping workers with cameras, but state-action-next-state triples from a robot interacting with the factory are very costly to obtain.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #109", "newsletter_category": "Reinforcement learning"}
{"id": "a7a55ece2ee347bf465516e919a191c3", "title": "Learning to Play No-Press Diplomacy with Best Response Policy Iteration", "url": "http://arxiv.org/abs/2006.04635", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Thomas Anthony*", "Tom Eccles*", "Andrea Tacchetti", "János Kramár", "Ian Gemp", "Thomas C. Hudson", "Nicolas Porcel", "Marc Lanctot", "Julien Pérolat", "Richard Everett", "Satinder Singh", "Thore Graepel", "Yoram Bachrach"], "summaries": ["Diplomacy is a game with simple rules where 7 players simultaneously move units every turn to capture territory. Units are evenly matched by default, so winning relies on getting support from some players against others. 'No-Press' Diplomacy limits communication between players to only orders submitted to units, removing the complex verbal negotiations that characterize traditional gameplay.\n\nPrevious state-of-the-art No-Press Diplomacy methods were trained to imitate human actions after collecting a dataset of 150,000 human Diplomacy games. This paper presents a new algorithmic method for playing No-Press Diplomacy using a policy iteration approach initialized with human imitation. To find better policies, their methods use \"best response\" calculations, where the best response policy for some player is the policy that maximizes the expected return for that player against opponent policies. Diplomacy is far too large for exact best response calculation, so the paper introduces an approximation, \"Sampled Best Response\", which\n- Uses Monte-Carlo sampling to estimate opponents' actions each turn\n- Only considers a small set of actions sampled from each candidate best response policy\n- Only tries to make a single-turn improvement to its policy (rather than trying to optimize for the whole rest of the game)\nSimilar to other policy iteration methods, the paper creates a dataset of games every iteration using its Sampled Best Response method, then trains neural networks to create policy and value functions that predict the actions chosen by Sampled Best Response. To remedy issues where Sampled Best Response continually cycles through the best strategy for the last iteration, the paper tries several variants of a technique called \"Fictitious Play\". In the best-performing variant, the policy network is trained to predict the latest Sampled Best Response given explicitly averaged _historical_ opponent and player policies, rather than just the latest policies.\n\nThe paper's methods outperform existing algorithmic methods for No-Press Diplomacy on a variety of metrics, but are still fairly few-shot _exploitable_-- at the end of training, the strongest (non-human) exploiter of the final policy wins 48% of the time. They also find that the strongest exploit doesn't change much through training, though few-shot exploitability does decrease from the beginning of training to the end."], "venue": "arXiv", "opinion": "This paper represented real progress in automated Diplomacy, but is still far from human-level. I’ll be pretty interested to see whether we can reach human-level by creating improved self-play algorithms, like the one presented in this paper, and the ones used for Poker and Go, or if we will have to wait for novel, more general reasoning algorithms applied to Diplomacy. Unlike Poker, Diplomacy against multiple human players involves collusion and implicit signalling, even with No Press. 
It seems possible to me that it is very difficult to become good at modeling those dynamics through self-play alone. If we did get to human-level through self-play, it would make me more optimistic about the extent to which training is likely to be a bottleneck in other domains which require sophisticated models of human behavior.", "highlight": false, "read_more": "", "summarizer": "Asya", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #106", "newsletter_category": "Reinforcement learning"}
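To make the Sampled Best Response approximation concrete, here is a rough Python sketch of the idea under stated assumptions: `policies[p].sample`, `state.players`, `state.step`, and `value_fn` are hypothetical interfaces used only for illustration, and the paper's actual implementation (batching, the learned value network, candidate filtering) differs.

```python
def sampled_best_response(player, state, policies, value_fn,
                          num_candidates=8, num_opponent_samples=16):
    # Sample a few candidate actions from the player's own policy, estimate each
    # candidate's value against Monte-Carlo samples of the opponents' joint action,
    # and return the best one. Note this only looks one turn ahead: the value
    # function, not deeper search, accounts for the rest of the game.
    candidates = [policies[player].sample(state) for _ in range(num_candidates)]
    best_action, best_value = None, float("-inf")
    for action in candidates:
        total = 0.0
        for _ in range(num_opponent_samples):
            joint = {p: policies[p].sample(state) for p in state.players if p != player}
            joint[player] = action
            total += value_fn(player, state.step(joint))
        if total / num_opponent_samples > best_value:
            best_action, best_value = action, total / num_opponent_samples
    return best_action
```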
{"id": "15e023ebd8910b7400bf515bfdd69c86", "title": "Retro Contest", "url": "https://blog.openai.com/retro-contest/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Christopher Hesse", "John Schulman", "Vicki Pfau", "Alex Nichol", "Oleg Klimov", "and Larissa Schiavo"], "summaries": ["OpenAI has released Gym Retro, providing an interface to work with video games from SEGA Genesis, which are more complex than the ones from Atari. They want to use these environments to test transfer learning in particular, where the agent may be pretrained on initial levels for as long as desired, and then must learn how to complete a new test level with only 1 million timesteps (~18 hours) of gameplay. (Humans do well with 2 hours of pretraining and 1 hour of play on the test level.)"], "venue": "OpenAI Blog", "opinion": "If you want to keep track of progress in deep RL, probably -- this seems quite likely to become the new set of benchmarks that researchers work on. There's also another example of specification gaming in the post.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #1", "newsletter_category": "Reinforcement learning"}
{"id": "847c6d0c3126b07346c6fd5ddb7afbae", "title": "Mastering Atari with Discrete World Models", "url": "https://ai.googleblog.com/2021/02/mastering-atari-with-discrete-world.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Danijar Hafner", "Timothy Lillicrap", "Mohammad Norouzi", "Jimmy Ba"], "summaries": ["Model-based reinforcement learning can have better sample efficiency, allows for smarter exploration strategies, and facilitates generalization between different tasks. Still, previous attempts at model-based RL on the Atari Benchmark like <@Dreamer@>(@Dream to Control: Learning Behaviors by Latent Imagination@) and <@SimPLe@>(@Simulated Policy Learning in Video Models@) were unable to compete with model-free algorithms in terms of final performance. This paper presents DreamerV2, a model-based algorithm that outperforms DQN and its variants -- including Rainbow -- in terms of both median human- or gamer-normalized performance and on mean world-record normalized performance on Atari after 200M environment steps, achieving roughly 35% on the latter (25% if algorithm performance is clipped to max out at 100% for each game). \n\nDreamerV2 learns a recurrent state-space model that stochastically encodes frames and a hidden state into a latent variable and uses the hidden state to predict the next value of the latent variable. Frames and reward are then reconstructed using both the hidden state and the latent variable. A policy is obtained by actor-critic training on the latent state space, leveraging parallelization to train on 468B imagined samples. As DreamerV2 does not use MCTS, it requires 8x less wall clock time to train than the more complicated but better performing <@MuZero Reanalyze@>(@Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model@). Unlike earlier approaches, DreamerV2 uses a vector of categorical latent variables rather than gaussians to enable better model predictions for dynamics with multiple distinct modes, as well as KL-balancing (scaling up the importance of the transition loss compared to the entropy regularizer on the latent variable). Ablations confirm that the image reconstruction loss is crucial for DreamerV2's performance and that both the use of discrete latent variables and KL-balancing lead to significant improvements. Interestingly, preventing the gradients for reward prediction from affecting the world model does not affect performance at all."], "venue": "arXiv", "opinion": "It is worth noting that the authors use the <@Dopamine@>(@Introducing a New Framework for Flexible and Reproducible Reinforcement Learning Research@) framework for evaluating the model-free baselines, meaning that a slightly stunted version of Rainbow is used on an evaluation protocol different from the original publication without retuning hyperparameters. That said, DreamerV2 definitely performs at a level similar to Rainbow, which is significant progress in model-based RL. In particular, the fact that the reward can be inferred from the world model even without gradients flowing back from the reward suggests transferability of the world models to different tasks with the same underlying dynamics. ", "highlight": false, "read_more": "Paper: Mastering Atari with Discrete World Models", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "Reinforcement learning"}
{"id": "724c78a4fc8882dfe115a9dd4ce63bb4", "title": "What Can Learned Intrinsic Rewards Capture?", "url": "http://arxiv.org/abs/1912.05500", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Zeyu Zheng*", "Junhyuk Oh*", "Matteo Hessel", "Zhongwen Xu", "Manuel Kroiss", "Hado van Hasselt", "David Silver", "Satinder Singh"], "summaries": ["This paper studies whether a learned reward function can serve as a locus of knowledge about the environment, that can be used to accelerate training of new agents. In particular, such a learned intrinsic reward can help with test-time adaptation: in a novel environment, the intrinsic reward can quickly \"tell\" the agent e.g. where it should explore -- even if in the new environment the agent has a different action space, or uses a different learning algorithm (situations that meta learning would typically not be able to handle).\n\nThe authors create an algorithm that learns an intrinsic reward function, that when used to train a new agent over a “lifetime” (which consists of multiple episodes), leads to the best cumulative reward over the lifetime, using a meta-gradient approach. Experiments on gridworlds demonstrate that these learned intrinsic rewards: 1. switch between early exploration and later exploitation, 2. explore only for information that is relevant for optimal behavior, 3. capture invariant causal relationships, and 4. can anticipate and adapt to changes in the extrinsic reward within a lifetime."], "venue": "arXiv", "opinion": "A common intuition that many researchers have is that specifying _what_ to do (the reward function) should be easier than specifying _how_ to do it (the policy). In practice, this _doesn't_ seem to be the case for deep learning, where imitation via inverse reinforcement learning (inferring a reward function and optimizing it) seems to be similar to imitation learning via behavior cloning (\"copying\" the policy). Similarly, this method seems broadly similar to meta learning algorithms like MAML and RL^2, though it does outperform them on one (probably carefully designed) transfer learning task.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #89", "newsletter_category": "Reinforcement learning"}
{"id": "c448c3659c91d23c087f6f9052027597", "title": "Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Sparse Reward Environments", "url": "http://arxiv.org/abs/1910.04281", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Vinicius G. Goecks", "Gregory M. Gremillion", "Vernon J. Lawhern", "John Valasek", "Nicholas R. Waytowich"], "summaries": ["This paper contributes to the effort of combining imitation and reinforcement learning to train agents more efficiently. The current difficulty in this area is that imitation and reinforcement learning proceed under rather different objectives which presents a significant challenge to updating a policy learned from a pure demonstration. A major portion of this difficulty stems from the use of so-called \"on-policy\" methods for training which require a significant number of environment interactions to be effective. In this paper, the authors propose a framework dubbed \"Cycle-of-Learning\" (CoL) that allows for the off-policy combination of imitation and reinforcement learning. This allows the two approaches to be combined much more directly which grounds the agent's policy in the expert demonstrations while simultaneously allowing for RL to fine-tune the policy. The authors show that CoL is an improvement over the current state of the art by testing their algorithm in several environments and performing an ablation study. "], "venue": "arXiv", "opinion": "At first glance, it would seem as though the idea of using an off-policy method to combine imitation and reinforcement learning is obvious. However, the implementation is complicated by the fact that we want the value functions being estimated by our agent to satisfy the optimality condition for the Bellman equation. Prior work, such as [Hester et al. 2018](https://aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16976/16682) uses n-step returns to help pre-training and make use of on-policy methods when performing RL. What I like about this paper is that they perform an ablation study and show that simple sequencing of imitation learning and RL algorithms isn't enough to get good performance. This means that combining the imitation and reinforcement objectives into a single loss function is providing a significant improvement over other methods.", "highlight": false, "read_more": "", "summarizer": "Zach", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #73", "newsletter_category": "Reinforcement learning"}
{"id": "3c1079b0248f9c89b8e614079bd719ad", "title": "On Inductive Biases in Deep Reinforcement Learning", "url": "http://arxiv.org/abs/1907.02908", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Matteo Hessel*", "Hado van Hasselt*", "Joseph Modayil", "David Silver"], "summaries": ["The fewer inductive biases we use, the more general our algorithms will be. But how much does it really help to have fewer inductive biases? This paper replaces several hand-engineered components of an A2C agent with generic or adaptive variants.\n\nSpecifically, they compared: 1) reward clipping vs. reward normalization via <@PopArt@>(@Preserving Outputs Precisely while Adaptively Rescaling Targets@), 2) handpicked discount factor vs. online adaptive discounting via meta-learning, 3) fixed action repeats vs. learned action-commitment, and 4) standard Atari observation preprocessing vs. passing raw observations to a recurrent network. Over 57 Atari tasks, they found that the tuned algorithm outperformed the adaptive method only in (1). Performance was similar for (2) and (3), and the proposed method outperformed the baseline for (4). When the fully adaptive agent was compared to the vanilla agent (with heuristics designed for Atari) over 28 unseen continuous control tasks, the adaptive agent performed better in 14 of them, worse in one, and about the same in the rest, providing evidence that fewer inductive biases do lead to more general agents."], "venue": "arXiv", "opinion": "On net, I am quite happy to see work which argues in favour of reducing time spent hand-tuning and hand-crafting parts of a complex pipeline, and demonstrates the alternatives that currently exist to do so.\n\nHowever, I feel the work did not fully compare the trade-off between tuning hyperparameters, and increasing the complexity of the pipeline by adding the adaptive components. I agree, though, that the latter is a one-time effort (per inductive bias), and is thus far more scalable than the former which needs to be repeated for each bias for every new task.\n\nIt would also be interesting to see how adaptive agents fare on problems where we care more about failures than successes, or if they are better/worse suited for safe exploration than baseline agents. My intuition is that adaptive internals of the agent cause it behave more noisily/unpredictably, and it may not fare as well as our current efforts for such problems.\n\n**Rohin's opinion:** While it's certainly true that fewer inductive biases imply more general agents, it also usually means more compute and data requirements. For action repetition and learned discount factors, only one new parameter has to be learned, so it doesn't make much of a difference either way (and in fact performance on Atari doesn't change much). Clipped rewards do in fact learn faster than PopArt. I don't know why a recurrent network improves upon standard observation preprocessing for Atari -- perhaps initially RNNs were hard to train, and it became a de facto standard to use observation preprocessing, and no one checked about using recurrent networks later when RNNs became easier to train?", "highlight": false, "read_more": "", "summarizer": "Sudhanshu Kasewa", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #64", "newsletter_category": "Reinforcement learning"}
{"id": "28814d3bae7e823a36dc137eaea1c191", "title": "Are Deep Policy Gradient Algorithms Truly Policy Gradient Algorithms?", "url": "http://arxiv.org/abs/1811.02553", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Andrew Ilyas", "Logan Engstrom", "Shibani Santurkar", "Dimitris Tsipras", "Firdaus Janoos", "Larry Rudolph", "Aleksander Madry"], "summaries": ["This paper argues that policy gradient algorithms are very dependent on additional optimisations (such as value function clipping, reward scaling, etc), and that they operate with poor estimates of the gradient. It also demonstrates that the PPO objective is unable to enforce a trust region, and that the algorithm's empirical success at doing so is due to the additional optimisations."], "venue": "arXiv", "opinion": "While the work in this paper is solid, the conclusions don't seem particularly surprising: everyone knows that deep RL is incredibly sample intensive (which straightforwardly implies inaccurate gradient estimates) and relies on many implementation tricks. I'm not familiar enough with PPO to know how surprising their last result is.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #32", "newsletter_category": "Reinforcement learning"}
{"id": "80e98e0ff15a1096f6774755322e40ca", "title": "Assessing Generalization in Deep Reinforcement Learning", "url": "http://arxiv.org/abs/1810.12282", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Charles Packer", "Katelyn Gao", "Jernej Kos", "Philipp Krähenbühl", "Vladlen Koltun", "Dawn Song"], "summaries": ["This paper aims to create a benchmark for measuring generalisation in reinforcement learning. They evaluate a range of standard model-free algorithms on OpenAI Gym and Roboschool environments; the extent of generalisation is measured by varying environmental parameters at test time (note that these tasks are intended for algorithms which do not update at test time, unlike many transfer and multi-task learners). They distinguish between two forms of generalisation: interpolation (between values seen during training) and extrapolation (beyond them). The latter, which is typically much harder for neural networks, is measured by setting environmental parameters to more extreme values in testing than in training."], "venue": "arXiv", "opinion": "I agree that having standard benchmarks is often useful for spurring progress in deep learning, and that this one will be useful. I'm somewhat concerned that the tasks the authors have selected (CartPole, HalfCheetah, etc) are too simple, and that the property they're measuring is more like robustness to peturbations than the sort of combinatorial generalisation discussed in [this paper] (http://arxiv.org/abs/1806.01261) from [last week's newsletter](https://mailchi.mp/c1f376f3a12e/alignment-newsletter-30). The paper would benefit from more clarity about what they mean by \"generalisation\".", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #31", "newsletter_category": "Reinforcement learning"}
{"id": "8dc089d485665acf66334c4a3312a03e", "title": "CURIOUS: Intrinsically Motivated Multi-Task, Multi-Goal Reinforcement Learning", "url": "http://arxiv.org/abs/1810.06284", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Cédric Colas", "Pierre Fournier", "Olivier Sigaud", "Mohamed Chetouani", "Pierre-Yves Oudeyer"], "summaries": ["This paper presents an intrinsically-motivated algorithm (an extension of Universal Value Function Approximators) which learns to complete multiple tasks, each parameterised by multiple “goals” (e.g. the locations of targets). It prioritises replays of tasks which are neither too easy nor too hard, but instead allow maximal learning progress; this also help prevent catastrophic forgetting by refocusing on tasks which it begins to forget."], "venue": "arXiv", "opinion": "While I don’t think this paper is particularly novel, it usefully combines several ideas and provides easily-interpretable results.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #29", "newsletter_category": "Reinforcement learning"}
{"id": "9cf51cc50bd624ea32c8ab8588a38834", "title": "Reinforcement Learning for Improving Agent Design", "url": "https://designrl.github.io/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["David Ha"], "summaries": ["This paper explores what happens when you allow an RL agent to modify aspects of the environment; in this case, the agent's body. This allows you to learn asymmetric body designs that are better suited for the task at hand. There's another fun example of specification gaming -- the agent makes its legs so long that it simply falls forward to reach the goal."], "venue": "Custom Website", "opinion": "", "highlight": false, "read_more": "Arxiv paper", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #28", "newsletter_category": "Reinforcement learning"}
{"id": "8e2c592829ff32778782a9f59ef3fec2", "title": "TStarBots: Defeating the Cheating Level Builtin AI in StarCraft II in the Full Game", "url": "http://arxiv.org/abs/1809.07193", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Peng Sun", "Xinghai Sun", "Lei Han", "Jiechao Xiong", "Qing Wang", "Bo Li", "Yang Zheng", "Ji Liu", "Yongsheng Liu", "Han Liu", "Tong Zhang"], "summaries": ["This paper showcases an RL agent which is able to defeat the built-in Starcraft AI (roughly at the level of the 30th-50th percentile of players). It do so by choosing between 165 hand-coded macro actions, which each correspond to an elementary task like producing a certain building. This avoids the necessity of learning unimportant details like exactly where the building should be placed, as well as difficult rules like the prerequisites for each building. The authors create a second agent which uses an expert system to choose actions in a hierarchical fashion, which performs at a similar level to the first. See also [Import AI](https://jack-clark.net/2018/09/25/import-ai-113-why-satellitesai-gives-us-a-global-eye-industry-pays-academia-to-say-sorry-for-strip-mining-it-and-kindred-researchers-seek-robot-standardization/)."], "venue": "arXiv", "opinion": "I find Starcraft more interesting as a test-bed for deep RL than as a goal in itself. While the results of this paper are cool, I doubt that its methods will scale well - in general, approaches which rely on a lot of human knowledge being hard-coded in don't tend to.\n\nNote the similarity between the macros in this paper and the way that OpenAI Five could choose between different hand-coded combinations of items. However, the latter is only a small part of the game, whereas the former is much more extensive.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #26", "newsletter_category": "Reinforcement learning"}
{"id": "1cc3a85231953d720ed8961a28038e73", "title": "Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark", "url": "http://arxiv.org/abs/1809.02206", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Akshat Agarwal", "Ryan Hope", "Katia Sycara"], "summaries": ["The authors note that most existing RL benchmarks (like Atari games) lack sharp context-dependence and temporal sensitivity. The former requires an agent to sometimes change strategies abruptly; the latter requires an agent's strategy to vary over time. Space Fortress is an arcade-style game which does have these properties, and which cannot be solved by standard RL algorithms, even when rewards are made dense in a naive way. However, when the authors shape the rewards to highlight the context changes, their agent achieves superhuman performance."], "venue": "arXiv", "opinion": "The two properties that this paper highlights do seem important, and the fact that they can be varied in Space Fortress makes it a good benchmark for them.\n\nI'm not convinced that the experimental work is particularly useful, though. It seems to reinforce the well-known point that shaped rewards can work well when they're shaped in sensible ways, and much less well otherwise.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #25", "newsletter_category": "Reinforcement learning"}
{"id": "f4d6f5a87fb9ae496644b2e479dafda2", "title": "Learning Invariances for Policy Generalization", "url": "http://arxiv.org/abs/1809.02591", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Remi Tachet des Combes", "Philip Bachman", "Harm van Seijen"], "summaries": ["This paper compares three ways to induce generalisation in deep RL agents: data augmentation, meta-learning and adversarial training. They find that data augmentation boosts performance on a simple task from 2.8% to 99.8%, whereas the other two don't improve on the baseline."], "venue": "arXiv", "opinion": "This paper feels incomplete; it has very little discussion of the reasons why meta-learning and adversarial training failed, and the data augmentation result is rather perfunctory.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "Reinforcement learning"}
{"id": "a9dcd4bcf4d3c0237887be827e003476", "title": "The use of embeddings in OpenAI Five", "url": "https://neuro.cs.ut.ee/the-use-of-embeddings-in-openai-five/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tambet Matiisen"], "summaries": ["This blog post discusses some information which has been released by OpenAI about the structure of the OpenAI Five networks. Starting from Dota API information, each unit, ability, item and modifier is converted to an embedding. During processing, there's some surprising use of max-pooling over those embeddings even when it doesn't seem appropriate. Actions are chosen depending on the dot product similarity between their embedding and the final LSTM output. In a previous version, the target for the chosen action was picked without conditioning on what the action actually was; this has changed in more recent versions."], "venue": "University of Tartu Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "", "newsletter_category": "Reinforcement learning"}
{"id": "fa92cc6a67915cf4574426716d333aef", "title": "Why we want unbiased learning processes", "url": "https://www.lesserwrong.com/posts/KT4Nau2XhuNejkXQR/why-we-want-unbiased-learning-processes", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Stuart Armstrong"], "summaries": ["Suppose we have a reward learning agent where we have designed the reward space so that \"ask the human whether to do X\" always has higher reward than \"do X\". The agent is now considering whether to ask the human to try heroin, or just give them heroin. If the agent gives them heroin, it will see their look of ecstasy and will update to have the reward function \"5 for giving the human heroin, 7 for asking the human\". If the agent asks the human, then the human will say \"no\", and the agent will update to have the reward function \"-1 for giving the human heroin, 1 for asking the human\". In both cases asking the human is the optimal action, yet the agent will end up giving the human heroin, since that gets reward 5, while asking the human gets reward 1. The post doesn't have this example, it has a formal model and an abstract example in that model that is isomorphic to the example I have."], "venue": "LessWrong", "opinion": "If you found my example interesting and worth thinking about, it is worth reading the post to see an attempt at formalizing it.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "Recon #5", "newsletter_category": "Reward learning theory"}
{"id": "6777d526371f5c838901319d8da0c432", "title": "Hierarchical system preferences and subagent preferences", "url": "https://www.alignmentforum.org/posts/iutXWSDd56ieAiyTi/hierarchical-system-preferences-and-subagent-preferences", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Stuart Armstrong"], "summaries": ["Often, when looking at a hierarchical system, you can ascribe preferences to the system as a whole, as well as to individual parts or subagents of the system. Often, any divergence between the parts can be either interpreted as a difference between the goals of the parts, or a failure of rationality of one of the parts. The post gives a particular algorithm, and shows how based on the structure of the code of the subagent, we could either infer that the subagent is mistaken, or that it has different goals.\n\nNow, we could infer meta-preferences by seeing how the system tends to self-modify -- perhaps we notice that it tends to amplify the \"goals\" of one particular subagent, in which case we can infer that it has a meta-preference for those goals. But without that, there's no correct answer to what the true goals are. In the post's own words: \"In the absence of some sort of meta-preferences, there are multiple ways of establishing the preferences of a hierarchical system, and many of them are equally valid.\""], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #41", "newsletter_category": "Reward learning theory"}
{"id": "36164966dee7a2ed65873825efa90bbd", "title": "Figuring out what Alice wants: non-human Alice", "url": "https://www.alignmentforum.org/posts/YfQGZderiaGv3kBJ8/figuring-out-what-alice-wants-non-human-alice", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Stuart Armstrong"], "summaries": ["We know that if we have a potentially irrational agent, then inferring their preferences is [impossible](https://arxiv.org/abs/1712.05812) without further assumptions. However, in practice we can infer preferences of humans quite well. This is because we have very specific and narrow models of how humans work: we tend to agree on our judgments of whether someone is angry, and what anger implies about their preferences. This is exactly what the theorem is meant to prohibit, which means that humans are making some strong assumptions about other humans. As a result, we can hope to solve the value learning problem by figuring out what assumptions humans are already making and using those assumptions."], "venue": "Alignment Forum", "opinion": "The fact that humans are quite good at inferring preferences should give us optimism about value learning. In the [framework](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/h9DesGT3WT9u2k7Hr) of rationality with a mistake model, we are trying to infer the mistake model from the way that humans infer preferences about other humans. This sidesteps the impossibility result by focusing on the _structure_ of the algorithm that generates the policy. However, it still seems like we have to make some assumption about how the structure of the algorithm leads to a mistake model, or a model for what values are. Though perhaps we can get an answer that is principled enough or intuitive enough that we believe it.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #37", "newsletter_category": "Reward learning theory"}
{"id": "c5da3c0dd1343589a7e1a695b120a6fa", "title": "Designing robust & reliable AI systems and how to succeed in AI", "url": "https://80000hours.org/podcast/episodes/pushmeet-kohli-deepmind-safety-research/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Rob Wiblin and Pushmeet Kohli"], "summaries": ["(As is typical for large content, I'm only summarizing the most salient points, and ignoring entire sections of the podcast that didn't seem as relevant.)\n\nIn this podcast, Rob delves into the details of Pushmeet's work on making AI systems _robust_. Pushmeet doesn't view AI safety and AI capabilities as particularly distinct -- part of building a good AI system is ensuring that the system is safe, robust, reliable, and generalizes well. Otherwise, it won't do what we want, so why would we even bother using it. He aims to improve robustness by actively searching for behaviors that violate the specification, or by formally verifying particular properties of the neural net. That said, he also thinks that one of the major challenges here is in figuring out the specification of what to verify in the first place.\n\nHe sees the problems in AI as being similar to the ones that arise in programming and computer security. In programming, it is often the case that the program that one writes down does not accurately match the intended specification, leading to bugs. Often we simply accept that these bugs happen, but for security critical systems such as traffic lights we can use techniques like testing, fuzzing, symbolic execution, and formal verification that allow us to find these failures in programs. We now need to develop these techniques for machine learning systems.\n\nThe analogy can go much further. Static analysis involves understanding properties of a program separately from any inputs, while dynamic analysis involves understanding a program with a specific input. Similarly, we can have \"static\" interpretability, which understands the model as a whole (as in [Feature visualization](https://distill.pub/2017/feature-visualization/)), or \"dynamic\" interpretability, which explains the model's output for a particular input. Another example is that the technique of abstract interpretation of programs is analogous to a particular method for verifying properties of neural nets.\n\nThis analogy suggests that we have faced the problems of AI safety before, and have made substantial progress on them; the challenge is now in doing it again but with machine learning systems. That said, there are some problems that are unique to AGI-type systems; it's just not the specification problem. For example, it is extremely unclear how we should communicate with such a system, which may have its own concepts and models that are very different from those of humans. We could try to use natural language, but if we do we need to ground the natural language in the way that humans do, and it's not clear how we could do that, though perhaps we could test if the learned concepts generalize to new settings. We could also try to look at the weights of our machine learning model and analyze whether it has learned the concept -- but only if we already have a formal specification of the concept, which seems hard to get."], "venue": "80000 Hours", "opinion": "I really like the analogy between programming and AI; a lot of my thoughts have been shaped by thinking about this analogy myself. 
I agree that the analogy implies that we are trying to solve problems that we've attacked before in a different context, but I do think there are significant differences now. In particular, with long-term AI safety we are considering a setting in which mistakes can be extremely costly, _and_ we can't provide a formal specification of what we want. Contrast this to traffic lights, where mistakes can be extremely costly but I'm guessing we can provide a formal specification of the safety constraints that need to be obeyed. To be fair, Pushmeet acknowledges this and highlights specification learning as a key area of research, but to me it feels like a qualitative difference from previous problems we've faced, whereas I think Pushmeet would disagree with that (but I'm not sure why).", "highlight": true, "read_more": "<@Towards Robust and Verified AI: Specification Testing, Robust Training, and Formal Verification@>", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #57", "newsletter_category": "Robustness"}
{"id": "15d585d97336c60224f78990dfb1f732", "title": "The Conditional Entropy Bottleneck", "url": "http://arxiv.org/abs/2002.05379", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Ian Fischer"], "summaries": ["While I've categorized this paper under robustness because it can apply to most forms of training, I'll talk about it specifically in the context of unsupervised learning (and in particular its relation to Contrastive Predictive Coding (CPC), summarized in the highlights).\n\nOne potential problem with deep learning is that there might be too _much_ information in the input, causing the model to learn spurious correlations that do not actually generalize well (see <@Causal Confusion in Imitation Learning@> as an example). The idea with CEB is to penalize the model for learning irrelevant information, using a form of _information bottleneck_.\n\nWe consider a setting where we want to learn a representation **Z** of some input data **X** in order to predict some downstream data **Y**. In CPC, **X** would be the inputs from time 1 to t, **Z** would be the latent representation **z_t**, and **Y** would be the future data **x_{t+k}**. Then, we want **Z** to capture the **minimum necessary information** needed for **Z** to predict **Y** as best as possible. The _necessary_ information is **I(Y; Z)**, that is, the mutual information between **Z** and **Y**: we want to maximize this to maximize our accuracy at predicting **Y**. Since **Y** depends on **X** and **Z** is computed from **X**, any information about **Y** must come through mutual information between **X** and **Z**. Maximizing just this **I(Y; Z)** term gives us Contrastive Predictive Coding.\n\nHowever, we don't want to capture any extra irrelevant information (the minimality criterion), which means that **Z** shouldn't capture any _more_ information about **X** beyond what it captured to maximize **I(Y; Z)**. In information-theoretic terms, we want to _minimize_ **I(X; Z | Y)**. Thus, we have the CEB objective: minimizing **I(X; Z | Y) - γ I(Y; Z)**, where **γ** is a hyperparameter controlling the tradeoff between the two terms. The authors then use some fairly straightforward math to reduce the objective to simpler terms which can be bounded using variational approximations, leading to an algorithm that can work in practice. \n\nThe authors perform experiments on Fashion MNIST and CIFAR10 (where Y corresponds to the labels for the images, so we're in the supervised learning setting). Since the main benefit of CEB is to remove unnecessary information from the model, they evaluate adversarial robustness and out-of-distribution detection in addition to standard performance checks. They find that models trained with CEB perform better than ones trained with a variational information bottleneck, or ones trained with vanilla SGD."], "venue": "arXiv", "opinion": "While I'm not sure to what extent models learn truly irrelevant information (see <@Adversarial Examples Are Not Bugs, They Are Features@>), it seems good to add an incentive against learning information that won't be useful for a downstream task, and the empirical results (especially of the next paper) suggest that it is providing some benefit.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #92", "newsletter_category": "Robustness"}
{"id": "27526782b7ae5a09db131d8a6d0b2775", "title": "CEB Improves Model Robustness", "url": "http://arxiv.org/abs/2002.05380", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Ian Fischer", "Alexander A. Alemi"], "summaries": ["This empirical paper finds that ImageNet classifiers trained with the CEB objective (summarized above) are already somewhat adversarially robust, without having any decrease in accuracy, and without any adversarial training. Notably, since CEB does not rely on knowing the attack method ahead of time, its adversarial robustness generalizes to multiple kinds of attacks, whereas models that were adversarially trained tend to be fragile in the face of previously unseen attacks."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #92", "newsletter_category": "Robustness"}
{"id": "cc2ad9817d175ecb34bbc27cabf90f9f", "title": "Call for Papers: ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning", "url": "https://sites.google.com/view/udlworkshop2019/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["Topics of this workshop include out-of-distribution detection, calibration, robustness to corruptions, robustness to adversaries, etc. Submissions are due April 30th."], "venue": "Workshop's Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #50", "newsletter_category": "Robustness"}
{"id": "29c2c697f902c7dfc7bed5d36701ab95", "title": "Evaluating the Robustness of Collaborative Agents", "url": "http://arxiv.org/abs/2101.05507", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Paul Knott", "Micah Carroll", "Sam Devlin", "Kamil Ciosek", "Katja Hofmann", "A. D. Dragan", "Rohin Shah"], "summaries": ["Assuming a well-specified reward function, we would like to evaluate robustness of an agent by looking at the average reward it obtains on a wide scenario of plausible test time inputs that it might get. However, the key challenge of robustness is that it is hard to specify the test distribution in advance, and we must work with the training distribution instead.\n\nThis paper (on which I am an author) proposes _measuring_ robustness using a suite of hand-designed _unit tests_. Just as a function is tested by having the programmer write down potential edge cases and checking for the expected behavior, AI developers can come up with a set of potential “edge case” situations (especially ones not likely to arise during training) and check whether the agent’s behavior on these situations works well or not. Intuitively, since these unit tests are created separately from the training process, they may not have the same spurious correlations that could be present in the training data. Thus, they can serve as an evaluation of the robustness of the agent.\n\nThe authors built a test suite for <@Overcooked@>(@Collaborating with Humans Requires Understanding Them@), and use it to evaluate several techniques aimed to improve the robustness of agents trained to collaborate with humans.\n\nFor example, one technique is to start each episode from a state sampled randomly from a dataset of human-human gameplay, so that the agents learn how to handle a broader diversity of states. This technique _decreases_ the average _validation_ reward, and if that’s all we look at, we would conclude that it did not work. However, the technique also _increases_ performance on the unit test suite, suggesting that in reality the technique does increase robustness, though it comes at the cost of reduced performance when playing with the particular set of partners that make up the validation distribution."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #135", "newsletter_category": "Robustness"}
{"id": "10278085101f16a99bab20be4906bfaf", "title": "Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks", "url": "https://arxiv.org/pdf/1903.11680.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Mingchen Li", "Mahdi Soltanolkotabi and \nSamet Oymak"], "summaries": ["Previous [empirical](https://arxiv.org/pdf/1901.09960.pdf#page=2&zoom=100,0,81) [papers](https://papers.nips.cc/paper/8094-generalized-cross-entropy-loss-for-training-deep-neural-networks-with-noisy-labels.pdf) have shown that finding ways to decrease training time greatly improves robustness to label corruptions, but to my knowledge this is the first theoretical treatment."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #53", "newsletter_category": "Robustness"}
{"id": "952d6728e7caf2a5987d5b82bcce3bb1", "title": "Towards Robust and Verified AI: Specification Testing, Robust Training, and Formal Verification", "url": "https://medium.com/@deepmindsafetyresearch/towards-robust-and-verified-ai-specification-testing-robust-training-and-formal-verification-69bd1bc48bda", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Pushmeet Kohli", "Krishnamurthy (Dj) Dvijotham", "Jonathan Uesato", "Sven Gowal", "and the Robust & Verified Deep Learning group"], "summaries": ["This post highlights three areas of current research towards making robust AI systems. First, we need better evaluation metrics: rather than just evaluating RL systems on the environments they were trained on, we need to actively search for situations in which they fail. Second, given a specification or constraint that we would like to ensure, we can develop new training techniques that can ensure that the specifications hold. Finally, given a specification, we can use formal verification techniques to ensure that the model obeys the specification on all possible inputs. The authors also list four areas of future research that they are excited about: leveraging AI capabilities for evaluation and verification, developing publicly available tools for evaluation and verification, broadening the scope of adversarial examples beyond the L-infinity norm ball, and learning specifications."], "venue": "DeepMind Safety Blog", "opinion": "The biggest challenge I see with this area of research, at least in its application to powerful and general AI systems, is how you get the specification in the first place, so I'm glad to see \"learning specifications\" as one of the areas of interest.\n\nIf I take the view from this post, it seems to me that techniques like domain randomization, and more generally training on a larger distribution of data, would count as an example of the second type of research: it is a change to the training procedure that allows us to meet the specification \"the agent should achieve high reward in a broad variety of environments\". Of course, this doesn't give us any provable guarantees, so I'm not sure if the authors of the post would include it in this category.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #52", "newsletter_category": "Robustness"}
{"id": "458e0817fe92c4d4f3c6c623cb1df9bc", "title": "Trustworthy Deep Learning Course", "url": "https://berkeley-deep-learning.github.io/cs294-131-s19/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jacob Steinhardt", "Dawn Song", "Trevor Darrell"], "summaries": ["This underway course covers topics in AI Safety topics for current deep learning systems. The course includes slides and videos."], "venue": "Berkeley", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #45", "newsletter_category": "Robustness"}
{"id": "72b93e2cf9a0bcd093e47ffba81bd4f9", "title": "AI Alignment Podcast: The Byzantine Generals’ Problem, Poisoning, and Distributed Machine Learning", "url": "https://futureoflife.org/2019/01/30/ai-alignment-podcast-the-byzantine-problem-poisoning-and-distributed-machine-learning-with-el-mahdi-el-mahmdi-beneficial-agi-2019/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lucas Perry and El Mahdi El Mahmdi"], "summaries": ["Byzantine resilience is the ability of a system to operate successfully when some of its components have been corrupted, even if it's unclear which ones they are. In the context of machine learning, this is relevant to poisoning attacks in which some training data is altered to affect the batch gradient (one example being the activity of fake accounts on social media sites). El Mahdi explains that when data is very high-dimensional, it is easy to push a neural network into a bad local minimum by altering only a small fraction of the data. He argues that his work on mitigating this is relevant to AI safety: even superintelligent AGI will be vulnerable to data poisoning due to time constraints on computation, and the fact that data poisoning is easier than resilient learning."], "venue": "FLI Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #45", "newsletter_category": "Robustness"}
{"id": "6d42fdee775ddb5fb94043741095f6e8", "title": "An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods", "url": "http://www.gatsby.ucl.ac.uk/~balaji/udl2019/accepted-papers/UDL2019-paper-21.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Sanghyuk Chun", "Seong Joon Oh", "Sangdoo Yun", "Dongyoon Han", "Junsuk Choe", "Youngjoon Yoo"], "summaries": ["There are several small tricks to improve classification performance such as label smoothing, dropout-like regularization, mixup, and so on. However, this paper shows that many of these techniques have mixed and often negative effects on various notions of robustness and uncertainty estimates."], "venue": "", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #63", "newsletter_category": "Robustness"}
{"id": "4bc8a5587d4c673c664aa98ff2210e9f", "title": "Maximum Entropy Inverse Reinforcement Learning", "url": "http://www.cs.cmu.edu/~bziebart/publications/maxentirl-bziebart.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2008-01-01T00:00:00Z", "authors": ["Brian D. Ziebart", "Andrew Maas", "J.Andrew Bagnell", "and Anind K. Dey"], "summaries": ["While matching empirical feature counts helps to deal with the ambiguity of the reward functions, exactly matching featuer counts will typically require policies to be stochastic, in which case there are many stochastic policies that get the right feature counts. How do you pick among these policies? We should choose the distribution using the [principle of maximum entropy](https://en.wikipedia.org/wiki/Principle_of_maximum_entropy), which says to pick the stochastic policy (or alternatively, a probability distribution over trajectories) that has maximum entropy (and so the least amount of information). Formally, we’re trying to find a function P(ζ) that maximizes H(P), subject to E[features(ζ)] = empirical feature counts, and that P(ζ) is a probability distribution (sums to 1 and is non-negative for all trajectories). For the moment, we’re assuming deterministic dynamics.\n\nWe solve this constrained optimization problem using the method of Lagrange multipliers. With simply analytical methods, we can get to the standard MaxEnt distribution, where P(ζ | θ) is proportional to exp(θ f(ζ)). But where did θ come from? It is the Lagrange multiplier for constraint on expected feature counts. So we’re actually not done with the optimization yet, but this intermediate form is interesting in and of itself, because we can identify the Lagrange multiplier θ as the reward weights. Unfortunately, we can’t finish the optimization analytically -- however, we can compute the gradient for θ, which we can then use in a gradient descent algorithm. This gives the full MaxEnt IRL algorithm for deterministic environments. When you have (known) stochastic dynamics, we simply tack on the probability of the observed transitions to the model P(ζ | θ) and optimize from there, but this is not as theoretically compelling.\n\nOne warning -- when people say they are using MaxEnt IRL, they are usually actually talking about MaxCausalEnt IRL, which we'll discuss next."], "venue": "AAAI 2008", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #12", "newsletter_category": "Summary: Inverse Reinforcement Learning"}
{"id": "c5da42b33ec62f9dc59b311a3a7bb564", "title": "Modeling Interaction via the Principle of Maximum Causal Entropy", "url": "http://www.cs.cmu.edu/~bziebart/publications/maximum-causal-entropy.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2010-01-01T00:00:00Z", "authors": ["Brian D. Ziebart", "J. Andrew Bagnell", "and Anind K. Dey"], "summaries": ["When we have stochastic dynamics, MaxEnt IRL does weird things. It is basically trying to maximize the entropy H(A1, A2, ... | S1, S2, ...), subject to matching the feature expectations. However, when you choose the action A1, you don’t know what the future states are going to look like. What you really want to do is maximize the causal entropy, that is, you want to maximize H(A1 | S1) + H(A2 | S1, S2) + ..., so that each action’s entropy is only conditioned on the previous states, and not future states. You can then run through the same machinery as for MaxEnt IRL to get the MaxCausalEnt IRL algorithm."], "venue": "CMU Website", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #12", "newsletter_category": "Summary: Inverse Reinforcement Learning"}
{"id": "5ecf3501dbba66dbcaf679b68e2ba66f", "title": "A Survey of Inverse Reinforcement Learning: Challenges, Methods and Progress", "url": "http://arxiv.org/abs/1806.06877", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Saurabh Arora", "Prashant Doshi"], "summaries": ["This is a comprehensive survey of IRL that should be useful to researchers, or students looking to perform a deep dive into IRL. It's particularly useful because it can compare and contrast across many different IRL algorithms, whereas each individual IRL paper only talks about their method and a few particular weaknesses of other methods. If you want to learn a lot about IRL, I would start with the previous readings, then read this one, and perhaps after that read individual papers that interest you."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #12", "newsletter_category": "Summary: Inverse Reinforcement Learning"}
{"id": "c81268a97104aa09fc1d79e3219a525a", "title": "Learning from humans: what is inverse reinforcement learning?", "url": "https://thegradient.pub/learning-from-humans-what-is-inverse-reinforcement-learning/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Jordan Alexander"], "summaries": ["This article introduces and summarizes the first few influential papers on inverse reinforcement learning. [Algorithms for IRL](http://ai.stanford.edu/~ang/papers/icml00-irl.pdf) attacked the problem by formulating it as a linear program, assuming that the given policy or demonstrations is optimal. However, there are many possible solutions to this problem -- for example, the zero reward makes any policy or demonstration optimal. [Apprenticeship Learning via IRL](http://people.eecs.berkeley.edu/~russell/classes/cs294/s11/readings/Abbeel+Ng:2004.pdf) lets you learn from an expert policy that is near-optimal. It assumes that the reward function is a weighted linear combination of _features_ of the state. In this case, given some demonstrations, we only need to match the feature expectations of the demonstrations in order to achieve the same performance as the demonstrations (since the reward is linear in the features). So, they do not need to infer the underlying reward function (which may be ambiguous)."], "venue": "The Gradient", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #12", "newsletter_category": "Summary: Inverse Reinforcement Learning"}
{"id": "6cc62babca224de6c8a9d4ef70098acb", "title": "AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019", "url": "https://futureoflife.org/2020/04/15/an-overview-of-technical-ai-alignment-in-2018-and-2019-with-buck-shlegeris-and-rohin-shah/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Lucas Perry", "Buck Shlegeris and Rohin Shah"], "summaries": ["This podcast with Buck and me is loosely structured around the [review I wrote](https://www.alignmentforum.org/posts/dKxX76SCfCvceJXHv/ai-alignment-2018-19-review) ([AN #84](https://mailchi.mp/1af38085edc5/an-84-reviewing-ai-alignment-work-in-2018-19)), but with a lot more debate and delving into specific points of pessimism and optimism. I suspect that every reader will have some section they're interested in. Since much of the discussion was itself meant to be a summary, I'm not going to try and summarize even further. Here's the list of topics covered:\n\nOur optimism and pessimism about different approaches to aligned AI\nTraditional arguments for AI as an x-risk\nModeling agents as expected utility maximizers\nAmbitious value learning and specification learning/narrow value learning\nAgency and optimization\nRobustness\nScaling to superhuman abilities\nUniversality\nImpact regularization\nCausal models, oracles, and decision theory\nDiscontinuous and continuous takeoff scenarios\nProbability of AI-induced existential risk\nTimelines for AGI\nInformation hazards"], "venue": "FLI Website", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #96", "newsletter_category": "Technical agendas and prioritization"}
{"id": "c4605c32da05e1b773a1941b2fd76d59", "title": "AI Alignment Podcast: On DeepMind, AI Safety, and Recursive Reward Modeling", "url": "https://futureoflife.org/2019/12/16/ai-alignment-podcast-on-deepmind-ai-safety-and-recursive-reward-modeling-with-jan-leike/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Lucas Perry and Jan Leike"], "summaries": ["While Jan originally worked on theory (specifically AIXI), DQN, AlphaZero and others demonstrated that deep RL was a plausible path to AGI, and so now Jan works on more empirical approaches. In particular, when selecting research directions, he looks for techniques that are deeply integrated with the current paradigm, that could scale to AGI and beyond. He also wants the technique to work for agents in general, rather than just question answering systems, since people will want to build agents that can act, at least in the digital world (e.g. composing emails). This has led him to work on <@recursive reward modeling@>(@Scalable agent alignment via reward modeling@), which tries to solve the specification problem in the <@SRA framework@>(@Building safe artificial intelligence: specification, robustness, and assurance@).\n\nReward functions are useful because they allow the AI to find novel solutions that we wouldn't think of (e.g. AlphaGo's move 37), but often are incorrectly specified, leading to reward hacking. This suggests that we should do _reward modeling_, where we learn a model of the reward function from human feedback. Of course, such a model is still likely to have errors leading to reward hacking, and so to avoid this, the reward model needs to be updated online. As long as it is **easier to evaluate behavior than to produce behavior**, reward modeling should allow AIs to find novel solutions that we wouldn't think of.\n\nHowever, we would eventually like to apply reward modeling to tasks where evaluation is also hard. In this case, we can decompose the evaluation task into smaller tasks, and recursively apply reward modeling to train AI systems that can perform those small helper tasks. Then, assisted by these helpers, the human should be able to evaluate the original task. This is essentially forming a \"tree\" of reward modeling agents that are all building up to the reward model for the original, hard task. While currently the decomposition would be done by a human, you could in principle also use recursive reward modeling to automate the decomposition. Assuming that we can get regular reward modeling working robustly, we then need to make sure that the tree of reward models doesn't introduce new problems. In particular, it might be the case that as you go up the tree, the errors compound: errors in the reward model at the leaves lead to slightly worse helper agents, which lead to worse evaluations for the second layer, and so on.\n\nHe recommends that rather than spending a lot of time figuring out the theoretically optimal way to address a problem, AI safety researchers should alternate between conceptual thinking and trying to make something work. The ML community errs on the other side, where they try out lots of techniques, but don't think as much about how their systems will be deployed in the real world. Jan also wants the community to focus more on clear, concrete technical explanations, rather than vague blog posts that are difficult to critique and reason about. 
This would allow us to more easily build on past work, rather than reasoning from first principles and reinventing the wheel many times.\n\nMore broadly, DeepMind is taking a portfolio approach to AI safety: they are trying many different lines of attack, and hoping that some of them will pan out. Currently, there are teams for agent alignment (primarily recursive reward modeling), incentive theory, trained agent analysis, policy, and ethics. They have also spent some time thinking about AI safety benchmarks, as in [AI Safety Gridworlds](https://arxiv.org/abs/1711.09883), since progress in machine learning is driven by benchmarks, though Jan does think it is quite hard to create a well-made benchmark."], "venue": "FLI Website", "opinion": "I've become more optimistic about recursive reward modeling since the <@original paper@>(@Scalable agent alignment via reward modeling@), primarily (I think) because I now see more value in approaches that can be used to perform specific tasks (relative to approaches that try to infer \"human values\").\n\nI also appreciated the recommendations for the AI safety community, and agree with them quite a lot. Relative to Jan, I see more value in conceptual work described using fuzzy intuitions, but I do think that more effort should be put into exposition of that kind of work.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #79", "newsletter_category": "Technical agendas and prioritization"}
{"id": "3e3ae5dd7ee9ed2592ec061a835334ef", "title": "AI alignment landscape", "url": "https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Paul Christiano"], "summaries": ["This post presents the following decomposition of how to make AI go well:\n\nIMAGE HERE"], "venue": "AI Alignment Blog", "opinion": "Here are a few points about this decomposition that were particularly salient or interesting to me.\n\nFirst, at the top level, the problem is decomposed into alignment, competence, and coping with the impacts of AI. The \"alignment tax\" (extra technical cost for safety) is only applied to alignment, and not competence. While there isn't a tax in the \"coping\" section, I expect that is simply due to a lack of space; I expect that extra work will be needed for this, though it may not be technical. I broadly agree with this perspective: to me, it seems like the major technical problem which _differentially_ increases long-term safety is to figure out how to get powerful AI systems that are _trying_ to do what we want, i.e. they have the right [motivation](https://www.alignmentforum.org/posts/ZeE7EKHTFMBs8eMxn/clarifying-ai-alignment#3ECKoYzFNW2ZqS6km) ([AN #33](https://mailchi.mp/b6dc636f6a1b/alignment-newsletter-33)). Such AI systems will hopefully make sure to check with us before taking unusual irreversible actions, making e.g. robustness and reliability less important. Note that <@techniques like verification, transparency, and adversarial training@>(@Techniques for optimizing worst-case performance@) may still be needed to ensure that the _alignment_ itself is robust and reliable (see the inner alignment box); the claim is just that robustness and reliability of the AI's _capabilities_ is less important.\n\nSecond, strategy and policy work here is divided into two categories: improving our ability to pay technical taxes (extra work that needs to be done to make AI systems better), and improving our ability to handle impacts of AI. Often, generically improving coordination can help with both categories: for example, the <@publishing concerns around GPT-2@>(@Better Language Models and Their Implications@) have allowed researchers to develop synthetic text detection (the first category) as well as to coordinate on when not to release models (the second category).\n\nThird, the categorization is relatively agnostic to the details of the AI systems we develop -- these only show up in level 4, where Paul specifies that he is mostly thinking about aligning learning, and not planning and deduction. It's not clear to me to what extent the upper levels of the decomposition make as much sense if considering other types of AI systems: I wouldn't be surprised if I thought the decomposition was not as good for risks from e.g. powerful deductive algorithms, but it would depend on the details of how deductive algorithms become so powerful. I'd be particularly excited to see more work presenting more concrete models of powerful AGI systems, and reasoning about risks in those models, as was done in <@Risks from Learned Optimization@>(@Risks from Learned Optimization in Advanced Machine Learning Systems@).", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #74", "newsletter_category": "Technical agendas and prioritization"}
{"id": "a0bcf7d25b0d7b2433d38b1a12c72880", "title": "AI Alignment Research Overview", "url": "https://docs.google.com/document/d/1FbTuRvC4TFWzGYerTKpBU7FJlyvjeOvVYF2uYNFSlOc/edit", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jacob Steinhardt"], "summaries": ["It has been over three years since _[Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565)_. Since that time we have learned more about the structure of the safety problem. This document represents an updated taxonomy of problems relevant for AI alignment. Jacob Steinhardt decomposes the remaining technical work into “technical alignment (the overcoming of conceptual or engineering issues needed to create aligned AI), detecting failures (the development of tools for proactively assessing the safety/alignment of a system or approach), methodological understanding (best practices backed up by experience), and system-building (how to tie together the three preceding categories in the context of many engineers working on a large system).”\n\nThe first topic under “technical alignment” is “Out-of-Distribution Robustness,” which receives more emphasis than it did in Concrete Problems. Out-of-Distribution Robustness is in part motivated by the fact that transformative AI will lead to substantial changes to the real world, and we should like our systems to perform well even under these large and possibly rapid data shifts. Specific subproblems include _some_ work on adversarial examples and out-of-distribution detection. Next, the problem of Reward Learning is described. For this, there are challenges including learning human values and ensuring those lossily represented human values can remain aligned under extreme optimization. While we have attained more conceptual clarity about reward learning since _Concrete Problems_, reward learning still remains largely “uncharted,” and it is still not clear “how approach the problem.” The next section on Scalable Reward Generation points out that, in the future, labeling meaning or providing human oversight will prove increasingly difficult. Next, he proposes that we ought to study how to make systems “act conservatively,” such as endowing systems with the ability to activate a conservative fallback routine when they are uncertain. The final topic under technical alignment is Counterfactual Reasoning. Here one possible direction is generating a family of simulated environments to generate counterfactuals.\n\nThe “technical alignment” section is the majority of this document. Later sections such as “Detecting Failures in Advance” highlight the importance of deep neural network visualization and recent model stress-test datasets. “Methodological Understanding” suggests that we are more likely to build aligned AI systems if we improve our best practices for building and evaluating models, and “System Building” speculates about how to do this for future multi-faceted ML systems."], "venue": "Google Docs", "opinion": "This is a welcome update to _Concrete Problems_ since it is slightly more concrete, current, and discusses improving safety in both deep learning and RL rather than mostly RL. While the document mentions many problems, the set of problems retains precision and fortunately does not include every capabilities concern that may possibly one day impact safety. 
A takeaway is that value learning and model transparency still need groundwork, but fortunately other problems including out-of-distribution robustness are more concretized and mostly need time and continued effort.\n\n**Rohin's opinion:** One thing I particularly like about this agenda is that the connection to AI _alignment_ is significantly clearer than in _Concrete Problems_.", "highlight": true, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #72", "newsletter_category": "Technical agendas and prioritization"}
{"id": "492adc714e37b5e7368de83f71e5f093", "title": "AI Alignment Podcast: Inverse Reinforcement Learning and the State of AI Alignment", "url": "https://futureoflife.org/2018/12/17/inverse-reinforcement-learning-and-the-state-of-ai-alignment-with-rohin-shah/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Lucas Perry and Rohin Shah"], "summaries": ["Lucas interviewed me and we talked about a bunch of different topics. Some quick highlights, without the supporting arguments:\n- If we want to use inverse reinforcement learning (IRL) to infer a utility function that we then optimize, we would have to account for systematic biases, and this is hard, and subject to an impossibility result.\n- Humans do seem to be good at inferring goals of other humans, probably because we model them as planning in a similar way that we ourselves plan. It's reasonable to think that IRL could replicate this. However, humans have very different ideas on how the future should go, so this seems not enough to get a utility function that can then be optimized over the long term.\n- Another issue with having a utility function that is optimized over the long term is that it would have to somehow solve a whole lot of very difficult problems like the nature of identity and population ethics and metaphilosophy.\n- Since human preferences seem to change as the environment changes, we could try to build AI systems whose goals are constantly changing by continuously running IRL. This sort of approach is promising but we don't know how to get it working yet.\n- IRL, agency and optimization all seem to require a notion of counterfactuals.\n- One view of agency is that it is about how a search process thinks of itself, or about other search processes. This gives it a feeling of \"choice\", even though the output of the search process is determined by physics. This can explain the debates over whether evolution is an optimization process -- on the one hand, it can be viewed as a search process, but on the other, we understand it well enough to think of it as a \"deterministic\" procedure.\n- One way to view the AI alignment problem is to view it as a human-AI interaction problem, so that we get an AI that evolves over time along with us.\n- Rather than building a function maximizer, we could aim to build an AI system that is corrigible, or one that follows norms.- Both iterated amplification and debate operate on an exponential deliberation tree, though in different ways, using reasoning learned from humans. If a human would have some desirable property (such as good epistemics), so too should their amplification.- Both iterated amplification and debate are based on _explicit_ human reasoning, as opposed to intuitive reasoning.\n- Value drift in the literal sense can be both positive and negative -- I certainly expect and want my stated preferences to change as I become more knowledgeable in the future.\n- We only want the combined human-AI system to have a goal, which allows for a space of possibilities where the AI is not optimizing a goal.\n- One of the problems that seems most troubling is the issue of inner optimizers, which will hopefully be described in a sequence soon."], "venue": "FLI Website", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #38", "newsletter_category": "Technical agendas and prioritization"}
{"id": "d2a174931860f75be6a41e06033d6110", "title": "Realism about rationality", "url": "https://www.lesswrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Richard Ngo"], "summaries": ["In the same way that moral realism claims that there is one true morality (even though we may not know it yet), rationality realism is the claim that there is one \"correct\" algorithm for rationality or intelligence. This post argues that many disagreements can be traced back to differences on how much one identifies with the rationality realism mindset. For example, people who agree with rationality realism are more likely to think that there is a simple theoretical framework that captures intelligence, that there is an \"ideal\" decision theory, that certain types of moral reasoning are \"correct\", that having contradictory preferences or beliefs is really bad, etc. The author's skepticism about this mindset also makes them skeptical about agent foundations research."], "venue": "LessWrong", "opinion": "This does feel like an important generator of many disagreements I've had. I'd split rationality realism into two subcases -- whether you expect that there is a simple \"correct\" algorithm for computation-bounded rationality, and whether you expect there is only a simple \"correct\" algorithm for rationality given infinite compute, but the bounded computation case may be a lot messier. (I'm guessing almost all rationality realists fall in the latter category, but I'm not sure.)\n\nI'd expect most of the people working on reducing existential risk from AI to be much more realist about rationality, since we often start working on this based on astronomical waste arguments and utilitarianism, which seems very realist about preferences. (At least, this was the case for me.) This is worrying -- it seems plausible to me that there isn't a \"correct\" rationality or intelligence algorithm (even in the infinite compute case), but that we wouldn't realize this because people who believe that also wouldn't want to work on AI alignment.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #25", "newsletter_category": "Technical agendas and prioritization"}
{"id": "2ed2d551032d80e1c85425d34c9199cc", "title": "RFP: Measuring and forecasting risks", "url": "https://docs.google.com/document/d/1cPwcUSl0Y8TyZxCumGPBhdVUN0Yyyw9AR1QshlRI3gc/edit", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Jacob Steinhardt"], "summaries": ["Measurement and forecasting is useful for two reasons. First, it gives us empirical data that can improve our understanding and spur progress. Second, it can allow us to quantitatively compare the safety performance of different systems, which could enable the creation of safety standards. So what makes for a good measurement?\n\n1. **Relevance to AI alignment:** The measurement exhibits a failure mode that becomes worse as models become larger, or tracks a potential capability that may emerge with further scale (which in turn could enable deception, hacking, resource acquisition, etc).\n2. **Forward-looking:** The measurement helps us understand _future_ issues, not just those that exist today. Isolated examples of a phenomenon are good if we have nothing else, but we’d much prefer to have a systematic understanding of when a phenomenon occurs and how it tends to quantitatively increase or decrease with various factors. See for example <@scaling laws@>(@Scaling Laws for Neural Language Models@).\n3. **Rich data source:** Not all trends in MNIST generalize to CIFAR-10, and not all trends in CIFAR-10 generalize to ImageNet. Measurements on data sources with rich factors of variation are more likely to give general insights.\n4. **Soundness and quality:** This is a general category for things like “do we know that the signal isn’t overwhelmed by the noise” and “are there any reasons that the measurement might produce false positives or false negatives”.\n\nWhat sorts of things might you measure?\n\n1. As you scale up task complexity, how much do you need to scale up human-labeled data to continue to maintain good performance and avoid reward hacking? If you fail at this and there are imperfections in the reward, how bad does this become?\n2. What changes do we observe based on changes in the _quality_ of the human feedback (e.g. getting feedback from amateurs vs experts)? This could give us information about the acceptable “difference in intelligence” between a model and its supervisor.\n3. What happens when models are pushed out of distribution along a factor of variation that was not varied in the pretraining data?\n4. To what extent do models provide wrong or undesired outputs in contexts where they are capable of providing the right answer?"], "venue": "Open Philanthropy Website", "opinion": "Measurements generally seem great. One story for impact is that we have a measurement that we think is strongly correlated with x-risk, and we use that measurement to select an AI system that scores low on such a metric. This seems distinctly good and I think would in fact reduce x-risk! But I want to clarify that I don’t think it would convince me that the system was safe with high confidence. The conceptual arguments against high confidence in safety seem quite strong and not easily overcome by such measurements. 
(I’m thinking of <@objective robustness failures@>(@2-D Robustness@) of the form “the model is trying to pursue a simple proxy, but behaves well on the training distribution until it can execute a treacherous turn”.)\n\nYou can also tell stories where the measurements reveal empirical facts that then help us have high confidence in safety, by allowing us to build better theories and arguments, which can rule out the conceptual arguments above.\n\nSeparately, these measurements are also useful as a form of legible evidence about risk to others who are more skeptical of conceptual arguments.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #168", "newsletter_category": "Technical agendas and prioritization"}
{"id": "5edf61c3f942798aabcaf22ae879dbc6", "title": "RFP: Techniques for enhancing human feedback", "url": "https://docs.google.com/document/d/1uPOQikvqhxANvejgFfnzH-vNX3tMap4uFL3KCYqPeSg/edit?usp=sharing", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Ajeya Cotra"], "summaries": ["Consider a topic previously analyzed in <@aligning narrowly superhuman models@>(@The case for aligning narrowly superhuman models@): how can we use human feedback to train models to do what we want in cases where the models are _more_ knowledgeable than the humans providing the feedback? A variety of techniques have been proposed to solve this problem, including <@iterated amplification@>(@Supervising strong learners by amplifying weak experts@), <@debate@>(@AI safety via debate@), <@recursive reward modeling@>(@Scalable agent alignment via reward modeling@), <@market making@>(@AI safety via market making@), and [generalizing from short deliberations to long deliberations](https://ai-alignment.com/turning-reflection-up-to-11-1bd6171afd21). This RFP solicits proposals that aim to test these or other mechanisms on existing systems. There are a variety of ways to set up the experiments so that the models are more knowledgeable than the humans providing the feedback, for example:\n\n1. Train a language model to accurately explain things about a field that the feedback providers are not familiar with.\n2. Train an RL agent to act well in an environment where the RL agent can observe more information than the feedback providers can.\n3. Train a multilingual model to translate between English and a foreign language that the feedback providers do not know."], "venue": "Open Philanthropy Website", "opinion": "", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #168", "newsletter_category": "Technical agendas and prioritization"}
{"id": "f69b1e0532890b6580ebcf2ada7599b6", "title": "RFP: Interpretability", "url": "https://docs.google.com/document/d/1PB58Fx3fmahx8vutW7TY6sG3-Ho0qfd1RwA1gP2yzWg/edit?usp=sharing", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Chris Olah"], "summaries": ["The author provides this one sentence summary: _We would like to see research building towards the ability to “reverse engineer\" trained neural networks into human-understandable algorithms, enabling auditors to catch unanticipated safety problems in these models._\n\nThis RFP is primarily focused on an aspirational “intermediate” goal: to fully reverse engineer some modern neural network, such as an ImageNet classifier. (Despite the ambition, it is only an “intermediate” goal because what we would eventually need is a general method for _cheaply_ reverse engineering _any_ neural network.) The proposed areas of research are primarily inspired by the <@Circuits line of work@>(@Circuits Thread@):\n\n1. **Discovering Features and Circuits:** This is the most obvious approach to the aspirational goal. We simply “turn the crank” using existing tools to study new features and circuits, and this fairly often yields an interesting result that makes progress towards reverse engineering a neural network.\n2. **Scaling Circuits to Larger Models:** So far the largest example of reverse engineering is [curve circuits](https://distill.pub/2020/circuits/curve-circuits/), with 50K parameters. Can we find examples of structure in the neural networks that allow us to drastically reduce the amount of effort required per parameter? (As examples, see [equivariance](https://distill.pub/2020/circuits/equivariance/) and [branch specialization](https://distill.pub/2020/circuits/branch-specialization/).)\n3. **Resolving Polysemanticity:** One of the core building blocks of the circuits approach is to identify a neuron with a concept, so that connections between neurons can be analyzed as connections between concepts. Unfortunately, some neurons are _polysemantic_, that is, they encode multiple different concepts. This greatly complicates analysis of the connections and circuits between these neurons. How can we deal with this potential obstacle?"], "venue": "Open Philanthropy Website", "opinion": "The full RFP has many, many more points about these topics; it’s 8 pages of remarkably information-dense yet readable prose. If you’re at all interested in mechanistic interpretability, I recommend reading it in full.\n\nThis RFP also has the benefit of having the most obvious pathway to impact: if we understand what algorithm neural networks are running, there’s a much better chance that we can catch any problems that arise, especially ones in which the neural network is deliberately optimizing against us. It’s one of the few areas where nearly everyone agrees that further progress is especially valuable.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #168", "newsletter_category": "Technical agendas and prioritization"}
{"id": "6d50c7040b1cc6a1f6b156ad323c9217", "title": "RFP: Truthful and honest AI", "url": "https://docs.google.com/document/d/186GGXoi_g0ML_YRKnppfLNxZIdvHTpLakOyg6GQENi4/edit?usp=sharing", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Owain Evans"], "summaries": ["This RFP outlines research projects on [Truthful AI](https://arxiv.org/abs/2110.06674) (summarized below). They fall under three main categories:\n\n1. Increasing clarity about “truthfulness” and “honesty”. While there are some tentative definitions of these concepts, there is still more precision to be had: for example, how do we deal with statements with ambiguous meanings, or ones involving figurative language? What is the appropriate standard for _robustly_ truthful AI? It seems too strong to require the AI system to never generate a false statement; for example it might misunderstand the meaning of a newly coined piece of jargon.\n2. Creating benchmarks and tasks for Truthful AI, such as <@TruthfulQA@>(@TruthfulQA: Measuring How Models Mimic Human Falsehoods@), which checks for imitative falsehoods. This is not just meant to create a metric to improve on; it may also simply perform as a measurement. For example, we could <@experimentally evaluate whether honesty generalizes@>(@Experimentally evaluating whether honesty generalizes@), or explore how much truthfulness is reduced when adding in a task-specific objective.\n3. Improving the truthfulness of models, for example by finetuning models on curated datasets of truthful utterances, finetuning on human feedback, using <@debate@>(@AI safety via debate@), etc.\n\nBesides the societal benefits from truthful AI, building truthful AI systems can also help with AI alignment:\n1. A truthful AI system can be used to supervise its own actions, by asking it whether its selected action was good.\n2. A robustly truthful AI system could continue to do this after deployment, allowing for ongoing monitoring of the AI system.\n3. Similarly, we could have a robustly truthful AI system supervise its own actions in hypothetical scenarios, to make it more robustly aligned."], "venue": "Open Philanthropy Website", "opinion": "While I agree that making AI systems truthful would then enable many alignment strategies, I’m actually more interested in the _methods_ by which we make AI systems truthful. Many of the ideas suggested in the RFP are ones that would apply to alignment more generally and aren’t particularly specific to truthful AI. So it seems like whatever techniques we used to build truthful AI could then be repurposed for alignment. In other words, I expect that the benefit to AI alignment of working on truthful AI is that it serves as a good test case for methods that aim to impose constraints upon an AI system. In this sense, it is a more challenging, larger version of the <@”never describe someone getting injured” challenge@>(@Redwood Research’s current project@). Note that I am only talking about how this helps AI alignment; there are also beneficial effects on society from pursuing truthful AI that I haven’t talked about here.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #168", "newsletter_category": "Technical agendas and prioritization"}
{"id": "4b757f6939e481c708e287d6ead4e0bd", "title": "Andrew Critch on AI Research Considerations for Human Existential Safety", "url": "https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/?utm_source=feedly&utm_medium=rss&utm_campaign=andrew-critch-on-ai-research-considerations-for-human-existential-safety", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Lucas Perry and Andrew Critch"], "summaries": ["This podcast discusses the recent <@ARCHES@>(@AI Research Considerations for Human Existential Safety@) document, and several thoughts surrounding it. There’s a lot in here that I won’t summarize, including a bunch of stuff that was in the summary of ARCHES. I’m going to focus primarily on the (substantial) discussion of how to prioritize within the realm of possible risks related in some way to AI systems.\n\nFirstly, let’s be clear about the goal: ensuring existential safety, that is, making sure human extinction never happens. Note the author means literal extinction, as opposed to something like “the loss of humanity’s long-term potential”, because the former is clearer. While it is not always clear whether something counts as “extinction” (what if we all become uploads?), it is a lot clearer than whether a scenario counts as a loss of potential.\n\nTypical alignment work focuses on the “single-single” case, where a single AI system must be aligned with a single human, as in e.g. <@intent alignment@>(@Clarifying \"AI Alignment\"@). However, this isn’t ultimately what we care about: we care about multi-multi existential safety, that is, ensuring that when multiple AI systems act in a world with multiple humans, extinction does not happen. There are pretty significant differences between these: in particular, it’s not clear whether multi-multi “alignment” even has meaning, since it is unclear whether it makes sense to view humanity as an agent to which an AI system could be “aligned”.\n\nNonetheless, single-single alignment seems like an important subproblem of multi-multi existential safety: we will be delegating to AI systems in the future; it seems important that we know how to do so. How do we prioritize between single-single alignment, and the other subproblems of multi-multi existential safety? A crucial point is that single-single work will not be neglected, because companies have strong incentives to solve single-single alignment (both in the sense of optimizing for the right thing, and for being robust to distributional shift). In contrast, in multi-multi systems, it is often the case that there is a complex set of interacting effects that lead to some negative outcome, and there is no one actor to blame for the negative outcome, and as a result it doesn’t become anybody’s job to prevent that negative outcome.\n\nFor example, if you get a huge medical bill because the necessary authorization forms hadn’t been filled out, whose fault is it? Often in such cases there are many people to blame: you could blame yourself for not checking the authorization, or you could blame the doctor’s office for not sending the right forms or for not informing you that the authorization hadn’t been obtained, etc. Since it’s nobody’s job to fix such problems, they are and will remain neglected, and so work on them is more impactful.\n\nSomething like transparency is in a middle ground: it isn’t profitable yet, but probably will be soon. 
So, if someone were indifferent between a bunch of areas of research, the author would advise for e.g. multi-stakeholder delegation over transparency over robustness. However, the author emphasizes that it’s far more important that people work in some area of research that they find intellectually enriching and relevant to existential safety.\n\nThe podcast has lots of other points, here is an incomplete quick selection of them:\n\n- In a multi-multi world, without good coordination you move the world in a “random” direction. There are a lot of variables which have to be set just right for humans to survive (temperature, atmospheric composition, etc) that are not as important for machines. So sufficiently powerful systems moving the world in a “random” direction will lead to human extinction.\n- One response to the multi-multi challenge is to have a single group make a powerful AI system and “take over the world”. This approach is problematic since many people will oppose such a huge concentration of power. In addition, it is probably not desirable even if possible, since it reduces robustness by creating a single point of failure.\n- Another suggestion is to create a powerful AI system that protects humanity (but is still uncontrollable in that humanity cannot stop its operation). The author does not like the solution much, because if we get it wrong and deploy a misaligned uncontrollable AI system, then we definitely die. The author prefers that we instead always have control over the AI systems we deploy."], "venue": "FLI Website", "opinion": "Both this and the previous summary illustrate an increasingly common perspective:\n\n1. The world is not going to look like “today’s world plus a single AGI agent”: instead, we will likely have a proliferation of many different AI systems specialized for different purposes.\n2. In such a world, there are a lot of different challenges that aren’t standard intent alignment.\n3. We should focus on these other challenges because [a variety of reasons].\n\n**If you have technical CS skills**, how should you prioritize between this perspective and the more classical intent alignment perspective?\n\n**Importance.** I’ve <@estimated@>(@Conversation with Rohin Shah@) a 10% chance of existential catastrophe via a failure of intent alignment, absent intervention from longtermists to address intent alignment. Estimates vary quite a lot, even among people who have thought about the problem a lot; I’ve heard as low as < 1% and as high as 80% (though these usually don’t assume “no intervention from longtermists”).\n\nIt’s harder to estimate the importance of structural risks and extinction risks highlighted in the two summaries above, but the arguments in the previous two posts seem reasonably compelling and I think I’d be inclined to assign a similar importance to it (i.e. similar probability of causing an existential catastrophe).\n\nNote that this means I’m disagreeing with Critch: he believes that we are far more likely to go extinct through effects unique to multi-multi dynamics; in contrast I find the argument less persuasive because we do have governance, regulations, national security etc. that would already be trying to mitigate issues that arise in multi-multi contexts, especially things that could plausibly cause extinction.\n\n**Neglectedness.** I’ve already taken into account neglectedness outside of EA in estimating the probabilities for importance. 
Within EA there is already a huge amount of effort going into intent alignment, and much less in governance and multi-multi scenarios -- perhaps a difference of 1-2 orders of magnitude; the difference is even higher if we only consider people with technical CS skills.\n\n**Tractability.** I buy the argument in Dafoe’s article that for AI governance due to our vast uncertainty we need a “metropolis” model where field-building is quite important; I think that implies that solving the full problem (at today's level of knowledge) would require a lot of work and building of expertise. In contrast, with intent alignment, we have a single technical problem with significantly less uncertainty. As a result, I expect that currently in expectation a single unit of work goes further to solving intent alignment than to solving structural risks / multi-multi problems, and so intent alignment is more tractable.\n\nI also expect technical ideas to be a bigger portion of \"the full solution\" in the case of intent alignment -- as Dafoe argues, I expect that for structural risks the solution looks more like \"we build expertise and this causes various societal decisions to go better\" as opposed to \"we figure out how to write this piece of code differently so that it does better things\". This doesn't have an obvious impact on tractability -- if anything, I'd guess it argues in favor of the tractability of work on structural risks, because it seems easier to me to create prestigious experts in particular areas than to make progress on a challenging technical problem whose contours are still uncertain since it arises primarily in the future.\n\nI suspect that I disagree with Critch here: I think he is more optimistic about technical solutions to multi-multi issues themselves being useful. In the past I think humanity has resolved such issues via governance and regulations and it doesn’t seem to have relied very much on technical research; I’d expect that trend to continue.\n\n**Personal fit.** This is obviously important, but there isn’t much in general for me to say about it.\n\nOnce again, I should note that this is all under the assumption that you have technical CS skills. I think overall I end up pretty uncertain which of the two areas I’d advise going in (assuming personal fit was equal in both areas). However, if you are more of a generalist, I feel much more inclined to recommend choosing some subfield of AI governance, again subject to personal fit, and Critch agrees with this.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #118", "newsletter_category": "Technical agendas and prioritization"}
{"id": "d78395716743310a5ad7a053cea9a626", "title": "The Alignment Problem for Bayesian History-Based Reinforcement Learners", "url": "https://www.tomeveritt.se/papers/alignment.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tom Everitt", "Marcus Hutter"], "summaries": ["After forgetting its existence for quite a while, I've finally read through this technical report (which won first place in <@round 2 of the AI alignment prize@>(@Announcement: AI alignment prize round 2 winners and next round@)). It analyzes the alignment problem from an AIXI-like perspective, that is, by theoretical analysis of powerful Bayesian RL agents in an online POMDP setting.\n\nIn this setup, we have a POMDP environment, in which the environment has some underlying state, but the agent only gets observations of the state and must take actions in order to maximize rewards. The authors consider three main setups: 1) rewards are computed by a preprogrammed reward function, 2) rewards are provided by a human in the loop, and 3) rewards are provided by a _reward predictor_ which is trained interactively from human-generated data.\n\nFor each setup, they consider the various objects present in the formalism, and ask how these objects could be corrupted, misspecified, or misleading. This methodology allows them to identify several potential issues, which I won't get into as I expect most readers are familiar with them. (Examples include wireheading and threatening to harm the human unless they provide maximal reward.)\n\nThey also propose several tools that can be used to help solve misalignment. In order to prevent reward function corruption, we can have the agent _simulate_ the future trajectory, and _evaluate_ this future trajectory with the current reward, removing the incentive to corrupt the reward function. (This was later developed into <@current-RF optimization@>(@Designing agent incentives to avoid reward tampering@).)\n\nSelf-corruption awareness refers to whether or not the agent is aware that its policy can be modified. A self-corruption _unaware_ agent is one that behaves as though it's current policy function will never be changed, effectively ignoring the possibility of corruption. It is not clear which is more desirable: while a self-corruption unaware agent will be more corrigible (in the [MIRI sense](https://intelligence.org/files/Corrigibility.pdf)), it also will not preserve its utility function, as it believes that even if the utility function changes the policy will not change.\n\nAction-observation grounding ensures that the agent only optimizes over policies that work on histories of observations and actions, preventing agents from constructing entirely new observation channels (\"delusion boxes\") which mislead the reward function into thinking everything is perfect.\n\nThe interactive setting in which a reward predictor is trained based on human feedback offers a new challenge: that the human data can be corrupted or manipulated. One technique to address this is to get _decoupled_ data: if your corruption is determined by the current state s, but you get feedback about some different state s', as long as s and s' aren't too correlated it is possible to mitigate potential corruptions.\n\nAnother leverage point is how we decide to use the reward predictor. We could consider the _stationary_ reward function, which evaluates simulated trajectories with the _current_ reward predictor, i.e. 
assuming that the reward predictor will never be updated again. If we combine this with self-corruption unawareness (so that the policy also never expects the policy to change), then the incentive to corrupt the reward predictor's data is removed. However, the resulting agent is _time-inconsistent_: it acts as though its reward never changes even though it in practice does, and so it can make a plan and start executing it, only to switch over to a new plan once the reward changes, over and over again.\n\nThe _dynamic_ reward function avoids this pitfall by evaluating the kth timestep of a simulated trajectory by also taking an expectation over future data that the reward predictor will get. This agent is no longer time-inconsistent, but it now incentivizes the agent to manipulate the data. This can be fixed by building a single integrated Bayesian agent, which maintains a single model that jointly predicts the environment dynamics and the reward data. The resulting agent is time-consistent, utility-preserving, and has no direct incentive to manipulate the data. (This is akin to the setup in <@assistance games / CIRL@>(@Cooperative Inverse Reinforcement Learning@).)\n\nOne final approach is to use a _counterfactual_ reward function, in which the data is simulated in a counterfactual world where the agent executed some known safe default policy. This no longer depends on the current time, and is not subject to data corruption since the data comes from a hypothetical that is independent of the agent's actual policy. However, it requires a good default policy that does the necessary information-gathering actions, and requires the agent to have the ability to simulate human feedback in a counterfactual world."], "venue": "", "opinion": "This paper is a great organization and explanation of several older papers (that haven't been summarized in this newsletter because they were published before 2018 and I read them before starting this newsletter), and I wish I had read it sooner. It seems to me that the integrated Bayesian agent is the clear winner -- the only downside is the computational cost, which would be a bottleneck for any of the models considered here.\n\nOne worry I have with this sort of analysis is that the guarantees you get out of it depend quite a lot on how you model the situation. For example, let's suppose that after I sleep I wake up refreshed and more capable of intellectual work. Should I model this as \"policy corruption\", or as a fixed policy that takes as an input some information about how rested I am?", "highlight": true, "read_more": "Tom Everitt's PhD thesis", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #114", "newsletter_category": "Technical agendas and prioritization"}
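As a toy illustration of the current-RF idea from the summary above (a sketch of my own construction, not from the report): an agent has a "hack" action that corrupts its reward function, and we compare a naive planner that scores each simulated step with whatever reward function will be in effect at that time against a current-RF planner that scores the whole simulated trajectory with the current, uncorrupted reward function.

```python
import itertools

# Toy sketch (my own construction): "work" earns +1 under the true reward; "hack" replaces
# the reward function with a corrupted one that returns +10 for everything thereafter.

def true_reward(action):
    return 1.0 if action == "work" else 0.0

def corrupted_reward(action):
    return 10.0

def simulate(actions):
    """Simulate a plan, tracking which reward function is in effect at each step."""
    reward_fn = true_reward
    trace = []
    for a in actions:
        if a == "hack":
            reward_fn = corrupted_reward
        trace.append((a, reward_fn))
    return trace

def naive_value(actions):
    # Each step is scored by the (possibly corrupted) reward function in effect at that time.
    return sum(fn(a) for a, fn in simulate(actions))

def current_rf_value(actions):
    # Current-RF optimization: the whole simulated trajectory is scored by the current reward.
    return sum(true_reward(a) for a, _ in simulate(actions))

plans = list(itertools.product(["work", "hack"], repeat=3))
print("naive planner prefers:     ", max(plans, key=naive_value))        # a plan that starts with "hack"
print("current-RF planner prefers:", max(plans, key=current_rf_value))   # all "work"
```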
{"id": "ab714f6f5b5d2a16b145af996beddb1a", "title": "AI Research Considerations for Human Existential Safety", "url": "http://acritch.com/papers/arches.pdf", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Andrew Critch", "David Krueger"], "summaries": ["This research agenda out of CHAI directly attacks the problem longtermists care about: **how to prevent AI-related existential catastrophe**. This is distinctly different from the notion of being \"provably beneficial\": a key challenge for provable beneficence is defining what we even mean by \"beneficial\". In contrast, there are avenues for preventing AI-caused human extinction that do not require an understanding of \"beneficial\": most trivially, we could coordinate to never build AI systems that could cause human extinction.\n\nSince the focus is on the _impact_ of the AI system, the authors need a new phrase for this kind of AI system. They define a **prepotent AI system** to be one that cannot be controlled by humanity **and** has the potential to transform the world in a way that is at least as impactful as humanity as a whole. Such an AI system need not be superintelligent, or even an AGI; it may have powerful capabilities in a narrow domain such as technological autonomy, replication speed, or social acumen that enable prepotence.\n\nBy definition, a prepotent AI system is capable of transforming the world drastically. However, there are a lot of conditions that are necessary for continued human existence, and most transformations of the world will not preserve these conditions. (For example, consider the temperature of the Earth or the composition of the atmosphere.) As a result, human extinction is the _default_ outcome from deploying a prepotent AI system, and can only be prevented if the system is designed to preserve human existence with very high precision relative to the significance of its actions. They define a misaligned prepotent AI system (MPAI) as one whose deployment leads to human extinction, and so the main objective is to avert the deployment of MPAI.\n\nThe authors break down the risk of deployment of MPAI into five subcategories, depending on the beliefs, actions and goals of the developers. The AI developers could fail to predict prepotence, fail to predict misalignment, fail to coordinate with other teams on deployment of systems that aggregate to form an MPAI, accidentally (unilaterally) deploy MPAI, or intentionally (unilaterally) deploy MPAI. There are also hazardous social conditions that could increase the likelihood of risks, such as unsafe development races, economic displacement of humans, human enfeeblement, and avoidance of talking about x-risk at all.\n\nMoving from risks to solutions, the authors categorize their research directions along three axes based on the setting they are considering. First, is there one or multiple humans; second, is there one or multiple AI systems; and third, is it helping the human(s) comprehend, instruct, or control the AI system(s). So, multi/single instruction would involve multiple humans instructing a single AI system. While we will eventually need multi/multi, the preceding cases are easier problems from which we could gain insights that help solve the general multi/multi case. 
Similarly, comprehension can help with instruction, and both can help with control.\n\nThe authors then go on to list 29 different research directions, which I'm not going to summarize here."], "venue": "Author's Website", "opinion": "I love the abstract and introduction, because of their directness at actually stating what we want and care about. I am also a big fan of the distinction between provably beneficial and reducing x-risk, and the single/multi analysis.\n\nThe human fragility argument, as applied to generally intelligent agents, is a bit tricky. One interpretation is that the \"hardness\" stems from the fact that you need a bunch of \"bits\" of knowledge / control in order to keep humans around. However, it seems like a generally intelligent AI should easily be able to keep humans around \"if it wants\", and so the bits already exist in the AI. (As an analogy: we make big changes to the environment, but we could easily preserve deer habitats if we wanted to.) Thus, it is really a question of what \"distribution\" you expect the AI system is sampled from: if you think we'll build AI systems that try to do what humanity wants, then we're probably fine, but if you think that there will be multiple AI systems that each do what their users want, but the users have conflicts, the overall system seems more \"random\" in its goals, and so more likely to fall into the \"default\" outcome of human extinction. \n\nThe research directions are very detailed, and while there are some suggestions that don't seem particularly useful to me, overall I am happy with the list. (And as the paper itself notes, what is and isn't useful depends on your models of AI development.)", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #103", "newsletter_category": "Technical agendas and prioritization"}
{"id": "b2afc71636a91465b53fcab0830bd2c6", "title": "Summary of the Technical Safety Workshop", "url": "https://www.youtube.com/watch?v=Tqu4cwne1vA", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["David Krueger"], "summaries": ["David identifies two broad types of AI safety work: human in the loop approaches, and theory approaches. A notable subset of the former category is methods which improve our ability to give advanced systems meaningful feedback - this includes debate, IDA, and recursive reward modeling. CIRL and CAIS are also human-in-the-loop. Meanwhile the theory category includes MIRI's work on agent foundations; side effect metrics; and verified boxing."], "venue": "BAGI 2019", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #53", "newsletter_category": "Technical agendas and prioritization"}
{"id": "35194cd581ab9b7f7e0e67e5983d9908", "title": "Measurement, Optimization, and Take-off Speed", "url": "https://jsteinhardt.stat.berkeley.edu/blog/measurement-and-optimization", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Jacob Steinhardt"], "summaries": ["In this blogpost, the author argues that \"trying to measure pretty much anything you can think of is a good mental move that is heavily underutilized in machine learning\". He motivates the value of measurement and additional metrics by (i) citing evidence from the history of science, policy-making, and engineering (e.g. x-ray crystallography contributed to rapid progress in molecular biology), (ii) describing how, conceptually, \"measurement has several valuable properties\" (one of which is to act as interlocking constraints that help to error-check theories), and (iii) providing anecdotes from his own research endeavours where such approaches have been productive and useful (see, e.g. <@Rethinking Bias-Variance Trade-off@>(@Rethinking Bias-Variance Trade-off for Generalization of Neural Networks@)). \n\nHe demonstrates his proposal by applying it to the notion of _optimization power_ -- an important idea that has not been measured or even framed in terms of metrics. Two metrics are offered: (a) the change (typically deterioration) of performance when trained with a perturbed objective function with respect to the original objective function, named _Outer Optimization_, and (b) the change in performance of agents during their own lifetime (but without any further parameter updates), such as the log-loss on the next sentence for a language model after it sees X number of sequences at test time, or _Inner Adaptation_. Inspired by these, the article includes research questions and possible challenges.\n\nHe concludes with the insight that take-off would depend on these two continuous processes, Outer Optimization and Inner Adaptation, that work on very different time-scales, with the former being, at this time, much quicker than the latter. However, drawing an analogy from evolution, where it took billions of years of optimization to generate creatures like humans that were exceptional at rapid adaptation, we might yet see a fast take-off were Inner Adaptation turns out to be an exponential process that dominates capabilities progress. He advocates for early, sensitive measurement of this quantity as it might be an early warning sign of imminent risks."], "venue": "Author's Website", "opinion": "Early on, this post reminded me of [Twenty Billion Questions](https://arxiv.org/pdf/1705.10720.pdf); even though they are concretely different, these two pieces share a conceptual thread. They both consider the measurement of multiple quantities essential for solving their problems: 20BQ for encouraging AIs to be low-impact, and this post for productive framings of ill-defined concepts and as a heads-up about potential catastrophes.\n\nMeasurement is important, and this article poignantly argues why and illustrates how. It volunteers potential ideas that can be worked on today by mainstream ML researchers, and offers up a powerful toolkit to improve one's own quality of analysis. It would be great to see more examples of this technique applied to other contentious, fuzzy concepts in ML and beyond. 
I'll quickly note that while there seems to be minimal interest in this from academia, measurement of optimization power has been discussed earlier in several ways, e.g. [Measuring Optimization Power](https://www.lesswrong.com/posts/Q4hLMDrFd8fbteeZ8/measuring-optimization-power), or <@the ground of optimization@>(@The ground of optimization@).\n\n**Rohin's opinion:** I broadly agree with the perspective in this post. I feel especially optimistic about the prospects of measurement for (a) checking whether our theoretical arguments hold in practice and (b) convincing others of our positions (assuming that the arguments do hold in practice).", "highlight": false, "read_more": "", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #157", "newsletter_category": "Technical agendas and prioritization"}
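To make the two proposed metrics concrete, here is a minimal sketch of how one might compute them, assuming the relevant evaluation numbers have already been gathered; the function names, toy values, and the exact form of each metric are illustrative assumptions on my part, not taken from the post.

```python
import numpy as np

def outer_optimization_gap(perf_on_original, perf_after_perturbed_training):
    """Illustrative Outer Optimization metric: how much performance on the
    original objective degrades when the system was instead trained on a
    perturbed objective (both numbers assumed to be measured already)."""
    return perf_on_original - perf_after_perturbed_training

def inner_adaptation_curve(next_sentence_log_losses):
    """Illustrative Inner Adaptation metric: improvement of a *frozen* model
    over its own 'lifetime', e.g. next-sentence log-loss as a function of how
    much context it has seen at test time (no parameter updates)."""
    losses = np.asarray(next_sentence_log_losses, dtype=float)
    return losses[0] - losses  # improvement relative to the start

# Toy usage with made-up numbers.
print(outer_optimization_gap(0.91, 0.74))
print(inner_adaptation_curve([3.2, 2.9, 2.7, 2.6, 2.55]))
```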
{"id": "2b8fa224f65aa996afea0ff21da88bf6", "title": "The Learning-Theoretic AI Alignment Research Agenda", "url": "https://agentfoundations.org/item?id=1816", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Vadim Kosoy"], "summaries": ["This agenda aims to create a general abstract theory of intelligence (in a manner similar to [AIXI](https://en.wikipedia.org/wiki/AIXI), but with some deficiencies removed). In particular, once we use the framework of reinforcement learning, regret bounds are a particular way of provably quantifying an agent's intelligence (though there may be other ways as well). Once we have this theory, we can ground all other AI alignment problems within it. Specifically, alignment would be formalized as a value learning protocol that achieves some regret bound. With this formalization, we can solve hard metaphilosophy problems such as \"What is imperfect rationality?\" through the intuitions gained from looking at the problem through the lens of value learning protocols and universal reinforcement learning."], "venue": "Agent Foundations", "opinion": "This agenda, like others, is motivated by the scenario where we need to get alignment right the first time, without empirical feedback loops, both because we might be facing one-shot success or failure, and because the stakes are so high that we should aim for high reliability subject to time constraints. I put low probability on the first reason (alignment being one-shot), and it seems much less tractable, so I mostly ignore those scenarios. I agree with the second reason, but aiming for this level of rigor seems like it will take much longer than the time we actually have. Given this high level disagreement, it's hard for me to evaluate the research agenda itself.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #13", "newsletter_category": "Technical agendas and prioritization"}
{"id": "b0713b6cce1c4ffd5334cc5f8c4bbf68", "title": "Conceptual issues in AI safety: the paradigmatic gap", "url": "http://www.foldl.me/2018/conceptual-issues-ai-safety-paradigmatic-gap/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Jon Gauthier"], "summaries": ["Lots of current work on AI safety focuses on what we can call \"mid-term safety\" -- the safety of AI systems that are more powerful and more broadly deployed than the ones we have today, but work using relatively similar techniques as the ones we use today. However, it seems plausible that there will be a paradigm shift in how we build AI systems, and if so it's likely that we will have a new, completely different set of mid-term concerns, rendering the previous mid-term work useless. For example, at the end of the 19th century, horse excrement was a huge public health hazard, and \"mid-term safety\" would likely have been about how to remove the excrement. Instead, the automobile was developed and started replacing horses, leading to new set of mid-term concerns (eg. pollution, traffic accidents), and any previous work on removing horse excrement became near-useless."], "venue": "Foldl", "opinion": "I focus almost exclusively on mid-term safety (while thinking about long-term safety), not because I disagree with this argument, but in spite of it. I think there is a good chance that any work I do will be useless for aligning superintelligent AI because of a paradigm shift, but I do it anyway because it seems very important on short timelines, which are easier to affect; and I don't know of other approaches to take that would have a significantly higher probability of being useful for aligning superintelligent AI.", "highlight": false, "read_more": "A possible stance for AI control research", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #13", "newsletter_category": "Technical agendas and prioritization"}
{"id": "8a35595079296e759aa492f350a9ce18", "title": "The flaws that make today’s AI architecture unsafe and a new approach that could fix it", "url": "https://80000hours.org/podcast/episodes/stuart-russell-human-compatible-ai/?utm_campaign=podcast__stuart-russell&utm_source=80000+Hours+Podcast&utm_medium=podcast", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Rob Wiblin and Stuart Russell"], "summaries": ["This podcast delves into many of the ideas in Stuart’s book <@Human Compatible@>(@Human Compatible: Artificial Intelligence and the Problem of Control@). Rob especially pushes on some aspects that are less talked about in the AI safety community, like the enfeeblement problem and whether we’d be locking in suboptimal values. They also discuss Stuart’s response to some counterarguments."], "venue": "80,000 Hours Website", "opinion": "One of the counterarguments the podcast talks about is <@my position@>(@Conversation with Rohin Shah@) that we’ll probably learn from smaller catastrophes in order to avoid actual extinction. I just want to note that while it might sound like I disagree with Stuart on this point, I don’t think we actually do. I was arguing against the position that extinction is the default outcome (> 50% probability) while Stuart is arguing against the position that extinction is near-impossible (~0% probability). I ended up around 10%; I’d guess that if Stuart were forced to, he’d give a number similar to mine, for similar reasons as me.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #106", "newsletter_category": "Technical agendas and prioritization"}
{"id": "eea51fba09f8aeee14c1caea95332872", "title": "Cooperation, Conflict, and Transformative Artificial Intelligence: A Research Agenda", "url": "https://www.lesswrong.com/s/p947tK8CoBbdpPtyK", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Jesse Clifton"], "summaries": ["This agenda by the [Effective Altruism Foundation](https://ea-foundation.org/) focuses on risks of astronomical suffering (s-risks) posed by <@Transformative AI@>(@Defining and Unpacking Transformative AI@) (TAI) and especially those related to conflicts between powerful AI agents. This is because there is a very clear path from extortion and executed threats against altruistic values to s-risks. While especially important in the context of s-risks, cooperation between AI systems is also relevant from a range of different viewpoints. The agenda covers four clusters of topics: strategy, credibility and bargaining, current AI frameworks, as well as decision theory.\n\nThe extent of cooperation failures is likely influenced by how power is distributed after the transition to TAI. At first glance, it seems like widely distributed scenarios (as <@CAIS@>(@Reframing Superintelligence: Comprehensive AI Services as General Intelligence@)) are more problematic, but related literature from international relations paints a more complicated picture. The agenda seeks a better understanding of how the distribution of power affects catastrophic risk, as well as potential levers to influence this distribution. Other topics in the strategy/governance cluster include the identification and analysis of realistic scenarios for misalignment, as well as case studies on cooperation failures in humans and how they can be affected by policy.\n\nTAI might enable unprecedented credibility, for example by being very transparent, which is crucial for both contracts and threats. The agenda aims at better models of the effects of credibility on cooperation failures. One approach to this is open-source game theory, where agents can see other agents' source codes. Promising approaches to prevent catastrophic cooperation failures include the identification of peaceful bargaining mechanisms, as well as surrogate goals. The idea of surrogate goals is for an agent to commit to act as if it had a different goal, whenever it is threatened, in order to protect its actual goal from threats. \n\nAs some aspects of contemporary AI architectures might still be present in TAI, it can be useful to study cooperation failure in current systems. One concrete approach to enabling cooperation in social dilemmas that could be tested with contemporary systems is based on bargaining over policies combined with punishments for deviations. Relatedly, it is worth investigating whether or not multi-agent training leads to human-like bargaining by default. This has implications on the suitability of behavioural vs classical game theory to study TAI. The behavioural game theory of human-machine interactions might also be important, especially in human-in-the-loop scenarios of TAI.\n\nThe last cluster discusses the implications of bounded computation on decision theory as well as the decision theories (implicitly) used by current agent architectures. 
Another focus lies on acausal reasoning and in particular the possibility of [acausal trade](https://wiki.lesswrong.com/wiki/Acausal_trade), where different correlated AI systems cooperate without any causal links between them."], "venue": "LessWrong", "opinion": "I am broadly sympathetic to the focus on preventing the worst outcomes and it seems plausible that extortion could play an important role in these, even though I worry more about distributional shift plus incorrigibility. Still, I am excited about the focus on cooperation, as this seems robustly useful for a wide range of scenarios and most value systems.\n\n**Rohin's opinion:** Under a suffering-focused ethics under which s-risks far overwhelm x-risks, I think it makes sense to focus on this agenda. There don't seem to be many plausible paths to s-risks: by default, we shouldn't expect them, because it would be quite surprising for an amoral AI system to think it was particularly useful or good for humans to _suffer_, as opposed to not exist at all, and there doesn't seem to be much reason to expect an immoral AI system. Conflict and the possibility of carrying out threats are the most plausible ways by which I could see this happening, and the agenda here focuses on neglected problems in this space.\n\nHowever, under other ethical systems (under which s-risks are worse than x-risks, but do not completely dwarf x-risks), I expect other technical safety research to be more impactful, because other approaches can more directly target the failure mode of an amoral AI system that doesn't care about you, which seems both more likely and more amenable to technical safety approaches (to me at least). I could imagine work on this agenda being quite important for _strategy_ research, though I am far from an expert here.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #86", "newsletter_category": "Technical agendas and prioritization"}
{"id": "8fc0111b18ca2289836210245d999e07", "title": "Just Imitate Humans?", "url": "https://www.alignmentforum.org/posts/LTFaD96D9kWuTibWr/just-imitate-humans", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Michael Cohen"], "summaries": ["This post asks whether it is safe to build AI systems that just imitate humans. The comments have a lot of interesting debate."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #82", "newsletter_category": "Technical agendas and prioritization"}
{"id": "b93ff09ee9c6d9a88c70da877b7882d6", "title": "Four Ways An Impact Measure Could Help Alignment", "url": "https://alignmentforum.org/posts/wJK944YqvFwjdbqCP/four-ways-an-impact-measure-could-help-alignment", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Matthew Barnett"], "summaries": ["Much <@recent@>(@Towards a New Impact Measure@) <@work@>(@Designing agent incentives to avoid side effects@) has focused on quantifying the effect an AI has on the world, aka measuring impact, though some are [skeptical](https://www.alignmentforum.org/posts/kCY9dYGLoThC3aG7w/best-reasons-for-pessimism-about-impact-of-impact-measures). This post presents four potential ways impact measures could help with AI alignment. _First_, impact could act as a **regularizer**: an untrained AI attempting to do value learning could have an impact penalty that prevents it from taking dangerous actions before it is confident it has learned the right utility function. _Second_, impact could act as a **safety protocol**: if our training process is dangerous, e.g. due to <@mesa optimization@>(@Risks from Learned Optimization in Advanced Machine Learning Systems@), we can penalize impact during training to safely test models that may be misaligned. _Third_, impact could act as an **influence-limiter**: impact measures could help us construct AIs with intentionally limited scope that won’t heavily optimize the world as a side effect. _Fourth_, impact could help us with **deconfusion**: even if impact measures themselves aren't used, conceptual clarity about impact could help us gain conceptual clarity about other important concepts such as corrigibility, mild optimization, etc."], "venue": "Alignment Forum", "opinion": "I am most excited about impact as a **regularizer** and impact as a **safety protocol**. I feel like AIs that are impact-limited at runtime (the **influence-limiter** case) are unlikely to be competitive with other AIs that have no impact penalty (this is discussed in the post). I found the argument that impact could be particularly useful for **deconfusion** uncompelling.\n\n**Rohin's opinion**: It seems to me like the safety protocol argument is for limited actions at training time, while the influence limiter argument is for limited actions at test time. I don't really get how the regularizer is supposed to be different from these two cases -- perhaps the idea is that it is a regularizer specifically on the distribution over utility functions that the AI is optimizing? This is still confusing, I would have expected the influence limiter case to also be a change to the utility function. Like Asya, I am worried about competitiveness: see the post about [reversible changes](https://www.alignmentforum.org/posts/zrunBA8B5bmm2XZ59/reversible-changes-consider-a-bucket-of-water) below.", "highlight": false, "read_more": "", "summarizer": "Asya Bergal", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #64", "newsletter_category": "Technical agendas and prioritization"}
{"id": "713b1732cc7a758e2fa36630205b3d43", "title": "Unsolved research problems vs. real-world threat models", "url": "https://medium.com/@catherio/unsolved-research-problems-vs-real-world-threat-models-e270e256bc9e", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Catherine Olsson"], "summaries": ["Papers on adversarial examples often suggest that adversarial examples can lead to real world problems as their motivation. As we've <@seen@>(@Motivating the Rules of the Game for Adversarial Example Research@) <@previously@>(@Introducing the Unrestricted Adversarial Examples Challenge@), many adversarial example settings are not very realistic _threat models_ for any real world problem. For example, adversarial \"stickers\" that cause vision models to fail to recognize stop signs could cause an autonomous vehicle to crash... but an adversary could also just knock over the stop sign if that was their goal.\n\nThere are more compelling reasons that we might care about imperceptible perturbation adversarial examples. First, they are a proof of concept, demonstrating that our ML models are not robust and make \"obvious\" mistakes and so cannot be relied on. Second, they form an unsolved research problem, in which progress can be made more easily than in real settings, because it can be formalized straightforwardly (unlike realistic settings). As progress is made in this toy domain, it can be used to inform new paradigms that are closer to realistic settings. But it is _not_ meant to mimic real world settings -- in the real world, you need a threat model of what problems can arise from the outside world, which will likely suggest much more basic concerns than the \"research problems\", requiring solutions involving sweeping design changes rather than small fixes."], "venue": "Medium", "opinion": "I strongly agree with the points made in this post. I don't know to what extent researchers themselves agree with this point -- it seems like there is _a lot_ of adversarial examples research that is looking at the imperceptible perturbation case and many papers that talk about new types of adversarial examples, without really explaining why they are doing this or giving a motivation that is about unsolved research problems rather than real world settings. It's possible that researchers do think of it as a research problem and not a real world problem, but present their papers differently because they think that's necessary in order to be accepted.\n\nThe distinction between research problems and real world threat models seem to parallel the distinction between theoretical or conceptual research and engineering in AI safety. The former typically asks questions of the form \"how could we do this in principle, making simplifying assumptions X, Y and Z\", even though X, Y and Z are known not to hold in the real world, for the sake of having greater conceptual clarity that can later be leveraged as a solution to a real world problem. Engineering work on the other hand is typically trying to scale an approach to a more complex environment (with the eventual goal of getting to a real world problem).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #50", "newsletter_category": "Technical agendas and prioritization"}
{"id": "8c8f51e2d7421f1ceeaad016f03d5aa2", "title": "FLI Podcast: AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy", "url": "https://futureoflife.org/2019/01/31/fli-podcast-ai-breakthroughs-and-challenges-in-2018-with-david-krueger-and-roman-yampolskiy/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Ariel Conn", "David Krueger and Roman Yampolskiy"], "summaries": ["David and Roman review AI progress in 2018 and speculate about its implications. Roman identified a pattern where we see breakthroughs like [AlphaZero](https://deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/) ([AN #36](https://mailchi.mp/6751e45fbb48/alignment-newsletter-36)), [AlphaStar](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/) ([AN #43](https://mailchi.mp/768a8130013f/alignment-newsletter-43)) and [AlphaFold](https://deepmind.com/blog/alphafold/) ([AN #36](https://mailchi.mp/6751e45fbb48/alignment-newsletter-36)) so frequently now that it no longer seems as impressive when a new one comes out. David on the other hand sounded less impressed by progress on Dota and StarCraft, since both AI systems were capable of executing actions that humans could never do (fast reaction times for Dota and high actions-per-minute for StarCraft). He also thought that these projects didn't result in any clear general algorithmic insights the way AlphaZero did.\n\nOn the deep RL + robotics side, David identified major progress in [Dactyl](https://blog.openai.com/learning-dexterity/) ([AN #18](https://mailchi.mp/51717855ea27/alignment-newsletter-18)) and [QT-Opt](https://ai.googleblog.com/2018/06/scalable-deep-reinforcement-learning.html) (which I remember reading and liking but apparently I failed to put in the newsletter). He also cited GANs as having improved significantly, and talked about feature-wise transformations in particular. Roman noted the improving performance of evolutionary algorithms.\n\nDavid also noted how a lot of results were obtained by creating algorithms that could scale, and then using a huge amount of compute for them, quoting [AI and Compute](https://blog.openai.com/ai-and-compute/) ([AN #7](https://mailchi.mp/3e550712419a/alignment-newsletter-7)), [Interpreting AI Compute Trends](https://aiimpacts.org/interpreting-ai-compute-trends/) ([AN #15](https://mailchi.mp/4920e52dd61b/alignment-newsletter-15)) and [Reinterpreting AI and Compute](https://aiimpacts.org/reinterpreting-ai-and-compute/) ([AN #38](https://mailchi.mp/588354e4b91d/alignment-newsletter-38)).\n\nOn the policy side, they talked about deep fakes and the general trend that AI may be progressing to fast for us to keep up with its security implications. 
They do find it promising that researchers are beginning to accept that their research does have safety and security implications.\n\nOn the safety side, David noted that the main advance seemed to be with approaches using [superhuman feedback](https://www.lesswrong.com/posts/naccwaCQEEBXK7hiJ/my-use-of-the-phrase-super-human-feedback), including [debate](https://blog.openai.com/debate/) ([AN #5](https://mailchi.mp/0ae5d69de63b/alignment-newsletter-5)), [iterated amplification](https://blog.openai.com/amplifying-ai-training/) (discussed frequently in this newsletter, but that paper was in [AN #30](https://mailchi.mp/c1f376f3a12e/alignment-newsletter-30)) and [recursive reward modeling](https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84) ([AN #34](https://mailchi.mp/f1947668b183/alignment-newsletter-34)). He also identified [unrestricted adversarial examples](https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html) ([AN #24](https://mailchi.mp/d7b5059d64ed/alignment-newsletter-24)) as an area to watch in the future."], "venue": "FLI Website", "opinion": "I broadly agree with the areas of AI progress identified here, though I would probably also throw in NLP, e.g. [BERT](https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html). I disagree on the details -- for example, I think that [OpenAI Five](https://blog.openai.com/openai-five/) ([AN #13](https://mailchi.mp/8234356e4b7f/alignment-newsletter-13)) was much better than I would have expected at the time and the same would have been true of AlphaStar if I hadn't already seen OpenAI Five, and the fact that they did a few things that humans can't do barely diminishes the achievement at all. (My take is pretty similar to Alex Irpan's take in his [post on AlphaStar](https://www.alexirpan.com/2019/02/22/alphastar.html).)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #47", "newsletter_category": "Technical agendas and prioritization"}
{"id": "17819470ad89843cd69578ab9ee838c1", "title": "Inverse Reinforcement Learning and Inferring Human Preference with Dylan Hadfield-Menell", "url": "https://futureoflife.org/2018/04/04/podcast-ai-systems-learning-human-preferences/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Lucas Perry and Dylan Hadfield-Menell"], "summaries": ["A few weeks ago, Lucas Perry interviewed Dylan Hadfield-Menell on the FLI podcast about his research (which includes papers like [Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137), [The Off-Switch Game](https://arxiv.org/abs/1611.08219), and [Inverse Reward Design](https://arxiv.org/abs/1711.02827)). They discussed a variety of topics including the motivations behind Dylan's research, future directions, thoughts on hard problems such as corrigibility and preference aggregation, etc."], "venue": "FLI", "opinion": "This is probably most useful for understanding the motivations behind many of Dylan's papers and how they all tie into each other, which can be hard to glean just from reading the papers. There were also a lot of framings of problems that felt useful to me that I haven't seen elsewhere.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #4", "newsletter_category": "Technical agendas and prioritization"}
{"id": "73d4cf7cf17deba771ce3fa7336281e7", "title": "Why I expect successful alignment", "url": "http://s-risks.org/why-i-expect-successful-alignment/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tobias Baumann"], "summaries": ["This post gives three arguments that we will likely solve the narrow alignment problem of having an AI system do what its operators intend it to do. First, advanced AI systems may be developed in such a way that the alignment problem doesn't even happen, at least as we currently conceive of it. For example, under the comprehensive AI services model, there are many different AI services that are superintelligent at particular tasks that can work together to accomplish complex goals, but there isn't a single unified agent to \"align\". Second, if it becomes obvious that alignment will be a serious problem, then we will devote a lot of resources to tackling the problem. We already see reward hacking in current systems, but it isn't sufficiently dangerous yet to merit the application of a lot of resources. Third, we have already come up with some decent approaches that seem like they could work."], "venue": "S-risks website", "opinion": "I generally agree with these arguments and the general viewpoint that we will probably solve alignment in this narrow sense. The most compelling argument to me is the second one, that we will eventually devote significant resources to the problem. This does depend on the crux that we see examples of these problems and how they could be dangerous before it is too late.\n\nI also agree that it's much less clear whether we will solve other related problems, such as how to deal with malicious uses of AI, issues that arise when multiple superintelligent AI systems aligned with different humans start to compete, and how to ensure that humans have \"good\" values. I don't know if this implies that _on the margin_ it is more useful to work on the related problems. It could be that these problems are so hard that there is not much that we can do. (I'm neglecting [importance of the problem](https://concepts.effectivealtruism.org/concepts/importance-neglectedness-tractability/) here.)", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #39", "newsletter_category": "Technical agendas and prioritization"}
{"id": "e53361530456feac764a6effc106ebad", "title": "Mechanism design for AI", "url": "http://s-risks.org/mechanism-design-for-ai/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tobias Baumann"], "summaries": ["One cause of outcomes worse than extinction could be escalating conflicts between very capable AI systems (that could eg. threaten to simulate suffering beings). It is worth studying how we could have AI systems implement mechanism design in order to guide such systems into more cooperative behavior."], "venue": "S-risks website", "opinion": "", "highlight": false, "read_more": "Adaptive Mechanism Design: Learning to Promote Cooperation", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #16", "newsletter_category": "Technical agendas and prioritization"}
{"id": "ca0013e7610852ef0f922bffa285145c", "title": "A Summary of Concrete Problems in AI Safety", "url": "https://futureoflife.org/2018/06/26/a-summary-of-concrete-problems-in-ai-safety/?cn-reloaded=1&utm_source=The+EA+Newsletter&utm_campaign=89f7c4aee5-EMAIL_CAMPAIGN_2018_07_11_11_31&utm_medium=email&utm_term=0_51c1df13ac-89f7c4aee5-309672937", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Shagun Sodhani"], "summaries": ["A nice summary of [Concrete Problems in AI Safety](https://blog.openai.com/concrete-ai-safety-problems/) that's a lot quicker to read than the original paper."], "venue": "FLI Blog", "opinion": "I like it -- I think I will send this to newer researchers as a precursor to the full paper.", "highlight": false, "read_more": "Concrete Problems in AI Safety", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "Technical agendas and prioritization"}
{"id": "cb596a4fe6a90bbdaa8d83902c5fdcb9", "title": "Mechanistic Transparency for Machine Learning", "url": "https://www.lesswrong.com/posts/3kwR2dufdJyJamHQq/mechanistic-transparency-for-machine-learning", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Daniel Filan"], "summaries": ["One useful thread of alignment research would be to figure out how to take a neural net, and distill parts or all of it into pseudocode or actual code that describes how the neural net actually works. This could then be read and analyzed by developers to make sure the neural net is doing the right thing. Key quote: \"I'm excited about this agenda because I see it as giving the developers of AI systems tools to detect and correct properties of their AI systems that they see as undesirable, without having to deploy the system in a test environment that they must laboriously ensure is adequately sandboxed.\""], "venue": "LessWrong", "opinion": "I would be really excited to see good work on this agenda, it would be a big step forward on how good our design process for neural nets is.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #15", "newsletter_category": "Technical agendas and prioritization"}
{"id": "4e56757ec195c2a5c2c5e38f2cc37e3b", "title": "Alignment of Language Agents", "url": "https://medium.com/@deepmindsafetyresearch/alignment-of-language-agents-9fbc7dd52c6c", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2021-01-01T00:00:00Z", "authors": ["Zachary Kenton", "Tom Everitt", "Laura Weidinger", "Iason Gabriel", "Vladimir Mikulik", "Geoffrey Irving"], "summaries": ["This paper analyzes the various problems we consider in AI alignment from the perspective of language agents. Problems covered include <@specification gaming@>(@Specification gaming examples in AI@), <@whom and what to align to@>(@Artificial Intelligence, Values and Alignment@), <@intent alignment@>(@Clarifying \"AI Alignment\"@), <@removing tampering incentives@>(@Avoiding Tampering Incentives in Deep RL via Decoupled Approval@), and <@inner alignment@>(@Risks from Learned Optimization in Advanced Machine Learning Systems@). These can be categorized as different kinds of misspecification, namely misspecification in the _training data_, the _training process_, and the _behavior under distributional shift_.\n\nWhile the conceptual problems are similar to the ones already considered for embodied RL agents, the ways they manifest are different. In particular, the authors highlight the possibility that language agents will _deceive us_, _manipulate us_, or _produce harmful content_. The authors review some existing definitions of deception and manipulation that are purely behavioral (that is, the definitions do not require an _intent_ to deceive or manipulate). A signaller **deceives** a receiver if the signaller transmits (or suggestively doesn’t transmit) a signal that causes the receiver to believe some false claim that benefits the signaller. **Manipulation** is similar, except rather than causing the receiver to believe a false claim, it causes the receiver to take some action that benefits the signaller, that in some sense the receiver “shouldn’t” have taken. We could cash out “the receiver ‘shouldn’t’ have taken the action” just as \"the action is harmful to the receiver\", but from a safety / security mindset, the authors prefer a broader definition that aims to identify bad _means_ of influencing the receiver, instead of only focusing on whether the _ends_ were bad.\n\nSome other miscellaneous points:\n- Since the “action space” is just language, it seems like it should be easier (though still requires work) to prevent language agents from causing physical harm.\n- It will hopefully be easier to train language agents to be explainable, since they have native fluency in natural language with which they can explain their behavior."], "venue": "arXiv", "opinion": "", "highlight": false, "read_more": "Paper: Alignment of Language Agents", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #144", "newsletter_category": "Technical agendas and prioritization"}
{"id": "3ba7508d939a3891f784e4a33c9743ad", "title": "An introduction to worst-case AI safety", "url": "http://s-risks.org/an-introduction-to-worst-case-ai-safety/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Tobias Baumann"], "summaries": ["Argues that people with suffering-focused ethics should focus on \"worst-case AI safety\", which aims to find technical solutions to risks of AIs creating vast amounts of suffering (which would be much worse than extinction)."], "venue": "s-risks.org", "opinion": "If you have strongly suffering-focused ethics (unlike me), this seems mostly right. The post claims that suffering-focused AI safety should be more tractable than AI alignment, because it focuses on a subset of risks and only tries to minimize them. However, it's not necessarily the case that focusing on a simpler problem makes it easier to solve. It feels easier to me to figure out how to align an AI system to humans, or how to enable human control of an AI system, than to figure out all the ways in which vast suffering could happen, and solve each one individually. You can make an analogy to mathematical proofs and algorithms -- often, you want to try to prove a _stronger_ statement than the one you are looking at, because when you use induction or recursion, you can rely on a stronger inductive hypothesis.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #14", "newsletter_category": "Technical agendas and prioritization"}
{"id": "4b622dde8345fdbbcbe0e927743dae11", "title": "AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues", "url": "https://www.cser.ac.uk/resources/ai-paradigms-and-ai-safety-mapping-artefacts-and-techniques-safety-issues/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Jose Hernandez-Orallo", "Fernando Martinez-Plumed", "Shahar Avin", "Jess Whittlestone", "Seán Ó hÉigeartaigh"], "summaries": ["What should prioritization within the field of AI safety look like? Ideally, we would proactively look for potential issues that could arise with many potential AI technologies, making sure to cover the full space of possibilities rather than focusing on a single area. What does prioritization look like in practice? This paper investigates, and finds that it is pretty different from this ideal.\n\nIn particular, they define a set of 14 categories of AI _techniques_ (examples include neural nets, planning and scheduling, and combinatorial optimization), and a set of 10 kinds of AI _artefacts_ (examples include agents, providers, dialoguers, and swarms). They then analyze trends in the amount of attention paid to each technique or artefact, both for AI safety and AI in general. Note that they construe AI safety very broadly by including anything that addresses potential real-world problems with AI systems.\n\nWhile there are a lot of interesting trends, the main conclusion is that there is an approximately 5-year delay between the emergence of an AI paradigm and safety research into that paradigm. In addition, safety research tends to neglect non-dominant paradigms."], "venue": "ECAI 2020", "opinion": "One possible conclusion is that safety research should be more diversified across different paradigms and artefacts, in order to properly maximize expected safety. However, this isn’t obvious: it seems likely that if the dominant paradigm has 50% of the research, it will also have, say, 80% of future real-world deployments, and so it could make sense to have 80% of the safety research focused on it. Rather than try to predict which paradigm will become dominant (a very difficult task), it may be more efficient to simply observe which paradigm becomes dominant and then redirect resources at that time (even though that process takes 5 years to happen).", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #117", "newsletter_category": "Technical agendas and prioritization"}
{"id": "5dd0ca1bc85a4874a1c6749d1b3025d7", "title": "Multi-agent minds and AI alignment", "url": "https://www.lesswrong.com/posts/3fkBWpE4f9nYbdf7E/multi-agent-minds-and-ai-alignment", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Jan Kulveit"], "summaries": ["This post argues against the model of humans as optimizing some particular utility function, instead favoring a model based on predictive processing. This leads to several issues with the way standard value learning approaches like inverse reinforcement learning work. There are a few suggested areas for future research. First, we could understand how hierarchical models of the world work (presumably for better value learning). Second, we could try to invert game theory to learn objectives in multiagent settings. Third, we could learn preferences in multiagent settings, which might allow us to better infer norms that humans follow. Fourth, we could see what happens if we take a system of agents, infer a utility function, and then optimize it -- perhaps one of the agents' utility functions dominates? Finally, we can see what happens when we take a system of agents and give it more computation, to see how different parts scale. On the non-technical side, we can try to figure out how to get humans to be more self-aligned (i.e. there aren't \"different parts pulling in different directions\")."], "venue": "LessWrong", "opinion": "I agree with the general point that figuring out a human utility function and then optimizing it is unlikely to work, but for different reasons (see the first chapter of the [Value Learning sequence](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc)). I also agree that humans are complex and you can’t get away with modeling them as Boltzmann rational and optimizing some fixed utility function. I wouldn’t try to make the model more accurate (eg. a model of a bunch of interacting subagents, each with their own utility function), I would try to make the model less precise (eg. a single giant neural net), because that reduces the chance of model misspecification. However, given the [impossibility result](https://arxiv.org/abs/1712.05812) saying that you must make assumptions to make this work, we probably have to give up on having some nice formally specified meaning of “values”. I think this is probably fine -- for example, iterated amplification doesn’t have any explicit formal value function.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #37", "newsletter_category": "Technical agendas and prioritization"}
{"id": "26bea6a1f1f1c4979f1536b22af0568c", "title": "ICML Uncertainty and Robustness Workshop Accepted Papers", "url": "https://sites.google.com/view/udlworkshop2019/accepted-papers", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["nan"], "summaries": ["The Uncertainty and Robustness Workshop accepted papers are available. Topics include out-of-distribution detection, generalization to stochastic corruptions, label corruption robustness, and so on."], "venue": "ICML", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Dan H", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #58", "newsletter_category": "Uncertainty"}
{"id": "62b5a8aa36ead102fe6cec80654a061e", "title": "Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data", "url": "http://arxiv.org/abs/1912.07768", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Felipe Petroski Such", "Aditya Rawal", "Joel Lehman", "Kenneth O. Stanley", "Jeff Clune"], "summaries": ["The Generative Teaching Networks (GTN) paper breaks new ground by training generators that produce synthetic data that can enable learner neural networks to learn faster than when training on real data. The process is as follows: The generator produces synthetic training data by transforming some sampled noise vector and label; a newly-initialized learner is trained on this synthetic data and evaluated on real data; the error signal from this evaluation is backpropagated to the generator via meta-gradients, to enable it to produce synthetic samples that will train the learner networks better. They also demonstrate that their curriculum learning variant, where the input vectors and their order are learned along with generator parameters, is especially powerful at teaching learners with few samples and few steps of gradient descent.\n\nThey apply their system to neural architecture search, and show an empirical correlation between performance of a learner on synthetic data and its eventual performance when trained on real data. In this manner, they make the argument that data from a trained GTN can be used to cheaply assess the likelihood of a given network succeeding to learn on the real task, and hence GTN data can tremendously speed up architecture search."], "venue": "arXiv", "opinion": "I really like this paper; I think it shines a light in an interesting new direction, and I look forward to seeing future work that builds on this in theoretical, mechanistic, and applied manners. On the other hand, I felt they did gloss over how exactly they do curriculum learning, and their reinforcement learning experiment was a little unclear to me.\n\nI think the implications of this work are enormous. In a future where we might be limited by the maturity of available simulation platforms or inundated by deluges of data with little marginal information, this approach can circumvent such problems for the selection and (pre)training of suitable student networks.", "highlight": false, "read_more": "Blog post", "summarizer": "Sudhanshu", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #85", "newsletter_category": "Unsupervised learning"}
{"id": "7aa33f522f39e562dc7e923026d0c30a", "title": "Unsupervised learning: the curious pupil", "url": "https://deepmind.com/blog/unsupervised-learning/", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Alexander Graves and Kelly Clancy"], "summaries": ["A high-level but well-written explanation of why many believe unsupervised learning will be key to achieving general intelligence, touching on the approaches of GANs and autoregressive models as examples. "], "venue": "DeepMind Blog", "opinion": "This is a clean, clear summary, but one without any real technical depth or detail; this would be a good writeup to hand someone without any machine learning background who wanted to get an intuitive grasp for unsupervised learning as a field. ", "highlight": false, "read_more": "", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #55", "newsletter_category": "Unsupervised learning"}
{"id": "0d9a18ca8d7579a8e8e500590cdf7f2f", "title": "Evaluating the Unsupervised Learning of Disentangled Representations", "url": "https://ai.googleblog.com/2019/04/evaluating-unsupervised-learning-of.html", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Olivier Bachem"], "summaries": ["This blog post and paper describe a Google-scale comparative study of different representation learning methods designed to learn \"disentangled\" representations, where the axes of the representation are aligned with the true underlying factors generating the data. The paper's claims are a sobering result for the field, both theoretically and empirically. Theoretically, they show that in an unsupervised context, it's not possible to find a disentangled representation without embedding some form of inductive bias into your model. Empirically, they present evidence suggesting that variation between random seeds for a given hyperparameter setting (in particular, regularization strength) matters as much or more than variation between that hyperparameter's values. Finally, they run experiments that call into question whether disentangled representations actually support transfer learning, or can be identified as in fact being disentangled without using a metric that relies on having ground truth factors of variation to begin with, making it difficult to evaluate on the many realistic contexts where these aren't available."], "venue": "Google AI Blog", "opinion": "This strikes me as a really valuable injection of empirical realism, of the kind that tends to be good for research fields to have periodically, even if it can be a bit painful or frustrating. I appreciate in particular the effort and clarity that this paper puts into articulating the implicit assumptions of how disentanglement can be used or evaluated, and trying to test those assumptions under more real-world settings, such as the one where you don't have any ground truth factors of variation, since the real world doesn't tend to just hand out the Correct factorized model of itself.", "highlight": false, "read_more": "", "summarizer": "Cody", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #55", "newsletter_category": "Unsupervised learning"}
{"id": "6bf2d5520046526e71aaedde3be366ec", "title": "Unsupervised Learning via Meta-Learning", "url": "http://arxiv.org/abs/1810.02334", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Kyle Hsu", "Sergey Levine", "Chelsea Finn"], "summaries": ["This paper trains a meta-learner on tasks which were generated using unsupervised learning. This is done by first learning an (unsupervised) embedding for a dataset, then clustering in that embedding space using k-means. Clustering is done many times with random scaling on each dimension; each meta-learning task is then based on one set of clusters. The resulting meta-learner is then evaluated on the actual task for that dataset, performing better than approaches based just on embeddings, and sometimes getting fairly close to the supervised-learning equivalent."], "venue": "arXiv", "opinion": "This is a cool technique; I like the combination of two approaches (meta-learning and unsupervised learning) aimed at making deep learning applicable to many more real-world datasets. I can imagine promising follow-ups - e.g. randomly scaling embedding dimensions to get different clusters seems a bit hacky to me, so I wonder if there's a better approach (maybe learning many different embeddings?). It's interesting to note that their test-time performance is sometimes better than their training performance, presumably because some of the unsupervised training clusterings are \"nonsensical\", so there is room to improve here.", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #28", "newsletter_category": "Unsupervised learning"}
{"id": "7d3810082a528c58a4508c3c61c4b8d3", "title": "Understanding View Selection for Contrastive Learning", "url": "https://ai.googleblog.com/2020/08/understanding-view-selection-for.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blogspot%2FgJZg+%28Google+AI+Blog%29", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola"], "summaries": ["<@Contrastive multiview learning@>(@Representation Learning with Contrastive Predictive Coding@) is a self-supervised approach to pretraining classifiers in which different views of data points are created and an encoder is trained to minimize the distance between encodings of views corresponding to data points with the same label while maximizing the distance between encodings of views with different labels. \n\nThe efficacy of this approach depends on the choice of views as well as the downstream task the neural network is going to be trained for. To find the most promising views, the authors propose the Infomin principle: all views should keep task-relevant information while the mutual information between views is minimized. The principle is supported by various observations: Firstly, earlier approaches to contrastive learning in the image domain that use data augmentation to preserve object identity while creating diverse views can be seen as an implicit application of the Infomin principle. Secondly, varying the mutual information between views (for example by changing the distance between two cropped views of the same image) creates an inverted U-curve for downstream performance corresponding to poor performance if there is too much or too little mutual information between the views. Lastly, the authors also find an inverted U-curve in performance for different colour spaces when using channels as views and the Lab colour space which was built to mimic human colour perception is close to the optimum, meaning that human colour perception might be near-optimal for self-supervised representation learning. \n\nThe authors then use the Infomin principle to select image augmentations for contrastive pretraining and improve the state of the art in linear readout on ImageNet from 69.3% to 73% for Top-1 accuracy and from 89% to 91.1% for Top-5 accuracy."], "venue": "arXiv", "opinion": "While the Infomin principle seems powerful and their results look impressive, I am not really convinced that the principle actually played an important role in finding the image augmentations they ended up using, as there is little description of how that happened and the augmentations rather look like the result of combining previously used approaches and doing some hyperparameter optimization.", "highlight": false, "read_more": "What makes for good views for contrastive learning", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #116", "newsletter_category": "Unsupervised learning"}
{"id": "f5a87315479ef14de9c94a1f01048df0", "title": "The human side of interaction", "url": "https://www.alignmentforum.org/posts/eD9T4kiwB6MHpySGE/the-human-side-of-interaction", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Rohin Shah"], "summaries": ["The lens of [human-AI interaction](https://www.alignmentforum.org/posts/4783ufKpx8xvLMPc6/human-ai-interaction) ([AN #41](https://mailchi.mp/8c3f02cabccd/alignment-newsletter-41)) also suggests that we should focus on what the _human_ should do in AI alignment.\n\nAny feedback that the AI system gets must be interpreted using some assumption. For example, when a human provides an AI system a reward function, it shouldn't be interpreted as a description of optimal behavior in every possible situation (which is what we currently do implicitly). [Inverse Reward Design](https://arxiv.org/abs/1711.02827) (IRD) suggests an alternative, more realistic assumption: the reward function is likely to the extent that it leads to high true utility in the training environment. Similarly, in inverse reinforcement learning (IRL) human demonstrations are often interpreted under the assumption of Boltzmann rationality.\n\nAnalogously, we may also want to train humans to give feedback to AI systems in the manner that they are expecting. With IRD, the reward designer should make sure to test the reward function extensively in the training environment. If we want our AI system to help us with long-term goals, we may want the overseers to be much more cautious and uncertain in their feedback (depending on how such feedback is interpreted). Techniques that learn to reason like humans, such as iterated amplification and debate, would by default learn to interpret feedback the way humans do. Nevertheless it will probably be useful to train humans to provide useful feedback: for example, in debate, we want humans to judge which side provided more true and useful information."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #43", "newsletter_category": "Value learning sequence"}
{"id": "56815c21d9c4e45522803dfed21a1f9e", "title": "Reward uncertainty", "url": "https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/ZiLLxaLB5CCofrzPp", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Rohin Shah"], "summaries": ["Given that we need human feedback for the AI system to stay \"on track\" as the environment changes, we might design a system that keeps an estimate of the reward, chooses actions that optimize that reward, but also updates the reward over time based on feedback. This has a few issues: it typically assumes that the human Alice knows the true reward function, it makes a possibly-incorrect assumption about the meaning of Alice's feedback, and the AI system still looks like a long-term goal-directed agent where the goal is the current reward estimate.\n\nThis post takes the above AI system and considers what happens if you have a distribution over reward functions instead of a point estimate, and during action selection you take into account future updates to the distribution. (This is the setup of [Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137).) While we still assume that Alice knows the true reward function, and we still require an assumption about the meaning of Alice's feedback, the resulting system looks less like a goal-directed agent.\n\nIn particular, the system no longer has an incentive to disable the system that learns values from feedback: while previously it changed the AI system's goal (a negative effect from the goal's perspective), now it provides more information about the goal (a positive effect). In addition, the system has more of an incentive to let itself be shut down. If a human is about to shut it down, it should update strongly that whatever it was doing was very bad, causing a drastic update on reward functions. It may still prevent us from shutting it down, but it will at least stop doing the bad thing. Eventually, after gathering enough information, it would converge on the true reward and do the right thing. Of course, this is assuming that the space of rewards is well-specified, which will probably not be true in practice."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #42", "newsletter_category": "Value learning sequence"}
{"id": "88bc16f0dfaa2ff9fb905b3386fb8e2f", "title": "Coherence arguments do not imply goal-directed behavior", "url": "https://www.alignmentforum.org/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-imply-goal-directed-behavior", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Rohin Shah"], "summaries": ["In this post, Rohin argues against the claim that \"simply knowing that an agent is intelligent lets us infer that it is goal-directed\". He points out that all behaviour can be rationalized as expected utility maximisation over world-histories - but this may not meet our criteria for goal-directed behaviour, and slightly misspecifying such a utility function may well be perfectly safe. What's more interesting - and dangerous - is expected utility maximisation over world-states - but he claims that we shouldn't assume that advanced AI will have this sort of utility function, unless we have additional information (e.g. that it has a utility function simple enough to be explicitly represented). There are plenty of intelligent agents which aren't goal-directed - e.g. ones which are very good at inference but only take trivial actions."], "venue": "Alignment Forum", "opinion": "I broadly agree with Rohin's points in this and the previous post, and am glad that he's making these arguments explicit. However, while goal-directedness is a tricky property to reason about, I think it's still useful to consider it a property of an agent rather than a property of our model of that agent. It's true that when we have a detailed explanation of how an agent works, we're able to think of cases in which its goal-directedness breaks down (e.g. adversarial examples). However, when these examples are very rare, they don't make much practical difference (e.g. knowing that AlphaGo has a blind spot in certain endgames might not be very helpful in beating it, because you can't get to those endgames). ", "highlight": false, "read_more": "", "summarizer": "Richard", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #35", "newsletter_category": "Value learning sequence"}
{"id": "4d6adf863495dc661614d72cc34d52fd", "title": "Neurosymbolic Reinforcement Learning with Formally Verified Exploration", "url": "http://arxiv.org/abs/2009.12612", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Greg Anderson", "Abhinav Verma", "Isil Dillig", "Swarat Chaudhuri"], "summaries": ["A typical approach to formally verified safe exploration in RL is to compute a _shield_, which identifies a safe set of states and actions. After this shield is computed, it is “wrapped” around the environment to ensure that if a potentially unsafe action is about to be taken, it is replaced with a safe one. Then, a policy learning algorithm is applied as normal to learn a good policy.\n\nThe key insight of this paper is to compute shields for specific _policies_, rather than creating a one-time shield that must apply to the entire state space. Since any given policy will only visit a small fraction of the state space, the shields are easier to compute and can be more permissive.\n\nThey assume access to a _worst-case dynamics model_, which given a state and action outputs a _set_ of states that could be visited. Given a policy π, an _inductive safety invariant_ is a set of safe states that includes all possible initial states and is closed under worst-case transitions: if you start at a state in the set, for any action that π suggests and for any state from the worst-case transition dynamics, that new state will still be in the set. Our algorithm will ensure that any policy we execute will have a corresponding inductive safety invariant.\n\nFormal verification techniques allow us to find inductive safety invariants for restricted classes of policies. This paper uses the space of deterministic, piecewise linear policies as its set of symbolic policies. But how do we apply this to neural nets? The key idea is to start with a safe symbolic policy, convert it to a neurosymbolic policy, take a neural net gradient step, convert back to a safe symbolic policy, and repeat until done. Let’s go over each of these steps.\n\nFirst, let’s suppose we have a symbolic policy g with inductive safety invariant ø. Then for any neural net f, we construct the policy h = “f(s) if no matter what we stay within ø, otherwise g(s)”. It is easy to see that ø is also an inductive safety invariant for h. Which f should we use to create h? The authors train a neural net to imitate g, and use that as their f. (Note that imitating g only requires executing g in the environment, and we know that g is safe.)\n\nNow that we have our neurosymbolic policy h, we need to take gradient steps on it. We collect data in the environment using h, but then for the gradient we ignore the symbolic part, and take a gradient step as though the data were collected using f. (It seems they used an on-policy algorithm for this, introducing bias; I am not sure why they didn’t simply use an off-policy algorithm.) This produces a new neurosymbolic policy h’ that is still safe (since g and ø are unchanged, and that’s what guarantees safety).\n\nFinally, we need to convert h’ back into a symbolic policy g’. This is done by a version of imitation learning that works in the symbolic policy space, where a new inductive safety invariant for g’ is found using formal verification techniques.\n\nTo start off the whole process, we need an initial symbolic policy, which must be constructed by hand. 
The authors show using experiments in simple continuous control environments that this method can learn high-reward policies without ever having a safety violation."], "venue": "arXiv", "opinion": "I really like this as an example of combining the performance of neural networks with the robustness of symbolic approaches. I especially like the fact that the shield is specialized to the current policy and updated over time: I think ML scales so well partly because it only deals with a tiny portion of the input space and can completely ignore the vast majority of possible inputs, and so if you want to add anything on top of ML you need to ensure you preserve this property to ensure scalability. Previous approaches required a shield that is correct across all possible states, failing to preserve this property; in contrast, this approach only requires a shield that is correct for the sequence of learned policies (on whichever states they visit).\n\nI should note that a large portion of why I like this paper is that it feels like it elegantly fits in _both_ the formal verification _and_ the ML fields. (I used to work in programming languages, of which formal verification is a subfield.) On the formal verification side, the guarantees are clean and simple, and the techniques used are canonical. On the ML side, I mentioned above why I like the fact that the shield is policy-specific and updated over time.\n\nAs I’ve said before, I think the real challenge in formal verification for AI alignment is how to handle fuzzy specifications. I think this paper shows a path forward: since the safety is established by an inductive invariant that can change over time, we could potentially use human feedback to establish these inductive invariants and update them over time, without requiring a human to fully specify at the outset exactly what is safe and what isn’t. You could think of it as an expanding whitelist of states which the policy is allowed to visit.", "highlight": true, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #124", "newsletter_category": "Verification"}
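A minimal sketch of the shielded-policy construction in the record above: h falls back to the symbolic policy g whenever the neural policy's proposed action could, under worst-case dynamics, leave the invariant φ. The invariant, worst-case model, and policies below are illustrative stand-ins, not the paper's benchmark environments or verifier.

```python
import numpy as np

def phi(state):
    """Hypothetical inductive safety invariant, given as a predicate on states."""
    return abs(state) <= 1.0

def worst_case_successors(state, action):
    """Hypothetical worst-case dynamics: the SET of states that might result."""
    nominal = state + action
    return [nominal - 0.1, nominal, nominal + 0.1]

def g(state):
    """Safe symbolic policy with invariant phi (here: push back toward 0)."""
    return -0.5 * state

def make_shielded_policy(f):
    """h(s) = f(s) if every worst-case successor stays inside phi, else g(s).
    Because g preserves phi, h preserves phi too, for ANY neural policy f."""
    def h(state):
        proposed = f(state)
        if all(phi(s2) for s2 in worst_case_successors(state, proposed)):
            return proposed
        return g(state)
    return h

# Any f -- even an untrained or adversarial one -- yields a safe policy to explore with:
reckless_f = lambda s: 10.0
h = make_shielded_policy(reckless_f)
state = 0.0
for _ in range(5):
    state = np.random.choice(worst_case_successors(state, h(state)))
    assert phi(state)   # the invariant is maintained throughout execution
```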
{"id": "d70d0e497d120eb8ad6538e3267aae01", "title": "An Inductive Synthesis Framework for Verifiable Reinforcement Learning", "url": "http://arxiv.org/abs/1907.07273", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["He Zhu", "Zikang Xiong", "Stephen Magill", "Suresh Jagannathan"], "summaries": ["This older paper has a pretty similar idea to the one in the highlighted paper. In order to compute a safety shield for a neural network RL agent, we first transform the neural network into a simpler more symbolic policy, prove safety of the symbolic policy, and then use the generated inductive safety invariant as a shield. This paper also uses deterministic piecewise linear policies as its space of symbolic policies. It only proves safety of the final learned RL policy, and so only guarantees safety at deployment, not at training time. (In other words, it does not guarantee safe exploration, and instead assumes that you are training in simulation so that safety is not a concern.)"], "venue": "PLDI 2019", "opinion": "Since this paper was published at PLDI, it is both longer and goes into a lot more of the details of how to actually perform each of these steps, as well as showing it with a running example on the inverted pendulum (where safety is defined as not going beyond a certain angle). I’m not going to summarize them here but anyone interested in these technical details should check out this paper before the highlighted one (which is constrained by ML page limits and can’t explain the techniques very well).\n\nJust as a reminder that learning programs does not automatically confer interpretability, I present to you the symbolic policy learned by their method for the inverted pendulum:\n\n&&IMAGE HERE&&", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #124", "newsletter_category": "Verification"}
{"id": "83ec9e8d72aeacb6bb4520b9a204338e", "title": "Ethical Mission Definition and Execution for Maritime Robots Under Human Supervision", "url": "https://calhoun.nps.edu/handle/10945/61086", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Don Brutzman", "Curtis L. Blais", "Duane T. Davis", "Robert B. McGhee"], "summaries": ["While underwater robots can perform missions that humans cannot, they cannot be held liable for their actions. Our society requires that someone be responsible for (and can be held liable for) the actions of any such robot, leading to a form of the specification problem: how do we program robots such that it is reasonable to hold their operators accountable for their actions?\n\nThis paper divides mission execution into three main parts: the execution level (hardware control), the tactical level (low-level behaviors), and the strategic level (what the robot should do). It proposes that, at the strategic level, we use formal methods to specify what the robot should do. The language should be expressive enough to be useful, while still keeping it sufficiently limited to allow exhaustive testing. They propose using state machines augmented with constraints. The constraints can be used to specify things like \"the robot must stay at least 10m away from obstacles\". The state machine decides which behaviors to execute, and each such behavior can have three results: success, failure, or exception (in the case that a constraint would have been violated had the behavior continued operating)."], "venue": "IEEE Journal of Oceanic Engineering", "opinion": "It's interesting to see other groups also aiming to have what are essentially robustness guarantees, but motivated instead from the perspective of responsibility and liability. The actual method seems reasonable for the impoverished systems we have today, where we must specify everything that we want the system to do.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #98", "newsletter_category": "Verification"}
{"id": "6d3f88b5e03a02fd36827c7820f7bf46", "title": "Verification and Transparency", "url": "https://alignmentforum.org/posts/n3YRDJYCnQcDAw29G/verification-and-transparency", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2019-01-01T00:00:00Z", "authors": ["Daniel Filan"], "summaries": ["This post points out that verification and transparency have similar goals. Transparency produces an artefact that allows the user to answer questions about the system under investigation (e.g. \"why did the neural net predict that this was a tennis ball?\"). Verification on the other hand allows the user to pose a question, and then automatically answers that question (e.g. \"is there an adversarial example for this image?\")."], "venue": "Alignment Forum", "opinion": "", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #76", "newsletter_category": "Verification"}
{"id": "136c85894bb9ca52c1efc3fb754debfa", "title": "Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability", "url": "http://arxiv.org/abs/1809.03008", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Kai Y. Xiao", "Vincent Tjeng", "Nur Muhammad Shafiullah", "Aleksander Madry"], "summaries": ["The idea behind verification is to consider all possible inputs at the same time, and show that no matter what the input is, a particular property is satisfied. In ML, this is typically applied to adversarial examples, where inputs are constrained to be within the L-infinity norm ball of dataset examples. Prior papers on verification (covered in [AN #19](https://mailchi.mp/4b19d2caa5a9/alignment-newsletter-19)) solve a computationally easier relaxation of the verification problem, that gives a lower bound on the performance of the classifier. This paper aims to use exact verification, since it can compute the exact adversarial performance of the classifier on the test set, and to figure out how to improve its performance.\n\nOne easy place to start is to encourage weights to be zero, since these can be pruned from the problem fed in to the constraint solver. (Or more likely, they feed it in anyway, but the constraint solver immediately gets rid of them -- constraint solvers are pretty smart.) This can be done using L1 regularization and pruning small weights. This already gives two orders of magnitude of speedup, making it able to verify that there is no adversarial attack with ϵ = 0.1 on a particular MNIST digit in 11 seconds on average.\n\nNext, they note that verification with linear constraints and functions is easy -- the challenging aspect is the Relu units that force the verifier to branch into two cases. (Since relu(x) = max(x, 0), it is the identity function when x is positive, and the zero function otherwise.) So why not try to ensure that the Relu units are also linear? Obviously we can't just make all the Relu units linear -- the whole point of them is to introduce nonlinearity to make the neural net more expressive. But as a start, we can look at the behavior of the Relu units on the examples we have, and if they are almost always active (inputs are positive) or almost always inactive (inputs are negative), then we replace them with the corresponding linear function (identity and zero, respectively), which is easier to verify. This gets another ~2x speedup.\n\nBut what if we could also change the training procedure? Maybe we could augment the loss so that the Relu units are either decisively active or decisively inactive on any dataset example. They propose that _during training_ we consider the L-infinity norm ball around each example, use that to create intervals that each pixel must be in, and then make a forward pass through the neural net using interval arithmetic (which is fast but inexact). Then, we add a term to the loss that incentivizes the interval for the input to each Relu to exclude zero (so that the Relu is either always active or always inactive). They call this the Relu Stability loss, or RS loss.\n\nThis leads to a further 4-13x speedup with similar test set accuracy. They then also test on MNIST with ϵ = 0.2, 0.3 and CIFAR with ϵ = 2/255, 8/255. It leads to speedup in all cases, with similar test set accuracy on MNIST but reduced accuracy on CIFAR. 
The provable accuracy goes up, but this is probably because when there's no RS loss, more images time out in verification, not because the network becomes better at classification. Other verification methods do get better provable accuracies on CIFAR, even though in principle they could fail to detect that a safe example is safe. This could be because their method times out frequently, or because their method degrades the neural net classifier -- it's hard to tell since they don't report the number of timeouts."], "venue": "ICLR 2019", "opinion": "As with the previous papers on verification, I'm excited about the improvement in our capability to prove things about neural nets. I do think that the more important problem is how to even state properties that we care about in a way that we could begin to prove them. For example, [last week](https://mailchi.mp/d7b5059d64ed/alignment-newsletter-24) we saw the unrestricted adversarial examples challenge, where humans are the judge of what a legal example is -- how can we formalize that for a verification approach?\n\nOn this paper specifically, I wish they had included the number of timeouts that their method has -- it's hard to interpret the provable accuracy numbers without that. Based on the numbers in the paper, I'm guessing this method is still much more computationally expensive than other methods. If so, I'm not sure what benefit it gives over them -- presumably it's that we can compute the exact adversarial accuracy, but if we don't have enough compute, such that other methods can prove better lower bounds anyway, then it doesn't seem worth it.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #25", "newsletter_category": "Verification"}
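A sketch of the interval-arithmetic forward pass and a ReLU-stability penalty of the kind described in the summary above. The layer sizes, epsilon, and exact penalty form are illustrative stand-ins; the paper adds a term of this flavor to its training loss rather than using it alone.

```python
import torch
import torch.nn as nn

def interval_linear(l, u, W, b):
    """Propagate an elementwise input interval [l, u] through x -> W @ x + b."""
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def relu_stability_penalty(model, x, eps):
    """Sum over hidden ReLUs of -tanh(1 + l*u); minimizing it encourages l*u > 0,
    i.e. each unit is either always active or always inactive on the eps-ball around x."""
    l, u = x - eps, x + eps
    penalty = x.new_zeros(())
    layers = [m for m in model if isinstance(m, nn.Linear)]
    for i, layer in enumerate(layers):
        l, u = interval_linear(l, u, layer.weight, layer.bias)
        if i < len(layers) - 1:                      # hidden (pre-ReLU) layers only
            penalty = penalty + (-torch.tanh(1.0 + l * u)).sum()
            l, u = l.clamp(min=0), u.clamp(min=0)    # interval image of the ReLU
    return penalty

model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.rand(784)
loss = relu_stability_penalty(model, x, eps=0.1)     # added to the usual training loss
loss.backward()
```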
{"id": "1c800b4e40f150bfaf225281107311e5", "title": "Towards Mixed Optimization for Reinforcement Learning with Program Synthesis", "url": "http://arxiv.org/abs/1807.00403", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Surya Bhupatiraju", "Kumar Krishna Agrawal", "Rishabh Singh"], "summaries": ["This paper proposes a framework in which policies are represented in two different ways -- as neural nets (the usual way) and as programs. To go from neural nets to programs, you use _program synthesis_ (as done by [VIPER](http://arxiv.org/abs/1805.08328) and [PIRL](https://arxiv.org/abs/1804.02477), both summarized in previous newsletters). To go from programs to neural nets, you use _distillation_ (basically use the program to train the neural net with supervised training). Given these transformations, you can then work with the policy in either space. For example, you could optimize the policy in both spaces, using standard gradient descent in neural-net-space, and _program repair_ in program-space. Having a program representation can be helpful in other ways too, as it makes the policy more interpretable, and more amenable to formal verification of safety properties."], "venue": "arXiv", "opinion": "It is pretty nice to have a program representation. This paper doesn't delve into specifics (besides a motivating example worked out by hand), but I'm excited to see an actual instantiation of this framework in the future!", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #14", "newsletter_category": "Verification"}
{"id": "ebbb4d5a10e68f93333423aaf6d300b0", "title": "Certified Adversarial Robustness for Deep Reinforcement Learning", "url": "http://arxiv.org/abs/2004.06496", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2020-01-01T00:00:00Z", "authors": ["Michael Everett*", "Bjorn Lutjens*", "Jonathan P. How"], "summaries": ["<@Certified adversarial robustness@>(@Certified Defenses against Adversarial Examples@) provides guarantees about the effects of small perturbations on a neural network’s outputs. This paper uses that approach to make reinforcement learning more robust by training a DQN and acting by choosing the action with the best worst-case Q-value under adversarial perturbations (called the robust-optimal action) estimated from the certificate bounds, instead of the action with the highest Q-value.\n\nThe approach is evaluated on Cartpole and a navigation task that requires avoiding collisions, with an adversary perturbing observations in both cases. For small perturbations, this technique actually increases performance, but as perturbations get large the agent’s conservatism can lead to a large degradation in performance."], "venue": "arXiv", "opinion": "While the approach is straightforward and will certainly increase robustness in many cases, it seems worth mentioning two serious issues. First, they assume that the initial DQN training learns the perfect Q function. Second, the provided certificates are about individual actions, not policy performance: the Q-values approximated in DQN assume optimal performance starting from the next action, which is not a given here. I am a bit concerned that these limitations were not really discussed, while the paper claims that “the resulting policy comes with a certificate of solution quality”.", "highlight": false, "read_more": "", "summarizer": "Flo", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #107", "newsletter_category": "Verification"}
{"id": "2781e84d2d07a47d744a89a4675d1482", "title": "Verifiable Reinforcement Learning via Policy Extraction", "url": "http://arxiv.org/abs/1805.08328", "source": "alignment_newsletter", "source_type": "google-sheets", "text": null, "date_published": "2018-01-01T00:00:00Z", "authors": ["Osbert Bastani", "Yewen Pu", "Armando Solar-Lezama"], "summaries": ["Since it is hard to verify properties of neural nets, we can instead first train a decision tree policy to mimic the policy learned by deep RL, and then verify properties about that. The authors generalize [DAGGER](https://www.cs.cmu.edu/~sross1/publications/Ross-AIStats11-NoRegret.pdf) to take advantage of the Q-function and extract decision tree policies. They then prove a correctness guarantee for a toy version of Pong (where the dynamics are known), a robustness guarantee for Pong (with symbolic states, not pixels) (which can be done without known dynamics), and stability of cartpole."], "venue": "27th USENIX Security Symposium", "opinion": "Many people believe that ultimately we will need to prove theorems about the safety of our AIs. I don't understand yet what kind of theorems they have in mind, so I don't really want to speculate on how this relates to it. It does seem like the robustness guarantee is the most relevant one, since in general we won't have access to a perfect model of the dynamics.", "highlight": false, "read_more": "", "summarizer": "Rohin", "prerequisites": "", "converted_with": "python", "newsletter_number": "AN #8", "newsletter_category": "Verification"}