Most effective altruists, especially those who work in AI safety, major in computer science or math. However, do you think effective altruists who work in AI safety should spend time learning the basics (university introductory course level) of multiple other subjects, such as physics, chemistry, biology, and the social sciences? I think mastering computer science doesn't require learning the natural sciences. However, some argue that regardless of what you major in, you should study the basics of every subject, since that knowledge will be helpful someday (for example, Brian Tomasik's article "Education matters for effective altruism" takes this position). Moreover, we are different from people who only aim to earn money: we want to do altruistic things (which are often quite unusual), so our knowledge needs may differ from theirs. Do you think EA people should be generalists and spend time on courses like General Physics, General Chemistry, and General Biology? Or do we not need to spend any time on subjects that are irrelevant to the issues we work on?
A few questions that you might find helpful for thinking this through:
• What are your AI timelines?
• Even if you think AI will arrive by year X, perhaps you should target a timeline of Y-Z years because you think you're unlikely to be able to make a contribution by X
• What research agendas are you most optimistic about? Do you think none of them is promising and that what we need are outside ideas? What skills would you need to work on these agendas?
• Are you likely to be the kind of person who creates their own agenda or contributes to someone else's?
• How enthusiastic are you about these subjects? Are you likely to be any good at them? Many people make contributions without drawing on anything outside computer science, but sometimes it takes a person with outside knowledge to really push things forward to the next level.