
defun

Software Engineer @ Sabbatical
684 karma · Joined · Working (6-15 years)


How I can help others

Keywords: software engineering, startups, web development, html, js, css, React, Ruby on Rails, Django, mobile apps, Android.

Comments (54)

Meta has just released Llama 3.1 405B. It's open-source and in many benchmarks it beats GPT-4o and Claude 3.5 Sonnet:

Zuck's letter "Open Source AI Is the Path Forward".

Thanks again for the comment.

You think the primary value of the paper is its usefulness for forecasting, right?

In that case, do you think it would be fair to ask expert forecasters if this paper is useful or not?

Thanks for the comment @aogara <3. I agree this paper seems very good from an academic point of view.

My main question: how does this research help in preventing existential risks from AI?


Other questions:

  • What are the practical implications of this paper?
  • What insights does this model provide regarding text-based task automation using LLMs?
  • Consider one of the main computer vision tasks: self-driving cars. What insights does their model provide? (Tesla is probably ~3 years away from self-driving cars, and this won't require any hardware update, so there's no additional cost.)

Hi calebp.

If you have time to read the papers, let me know if you think they are actually useful.

Thanks a lot for giving more context. I really appreciate it.

These were not “AI Safety” grants

These grants come from Open Philanthropy's focus area "Potential Risks from Advanced AI". I think it's fair to say they are "AI Safety" grants.

Importantly, the awarded grants were to be disbursed over several years for an academic institution, so much of the work which was funded may not have started or been published. Critiquing old or unrelated papers doesn't accurately reflect the grant's impact.

Fair point. I agree that old papers might not accurately reflect the grant's impact, but they're still correlated with it.

Your criticisms of the papers lack depth ... Do you do research in this area, ...

I totally agree. That's why I shared this post as a question. I'm not an expert in the area and I wanted an expert to give me context.

Could you please update your post to address these issues and provide a more accurate representation of the grants and the lab's work?

I added an update linking to your answer.


Overall, I'm concerned about Open Philanthropy's grantmaking. I have nothing against Thompson or his lab's work.

Sorry, I should have attached this in my previous message.

where does it say that he is a guest author?

Here.

This paper is from Epoch. Thompson is a "Guest author".

I think this paper and this article are interesting but I'd like to know why you think they are "pretty awesome from an x-risk perspective".


Epoch AI has received much less funding from Open Philanthropy ($9.1M), yet they are producing world-class work that is widely read, used, and shared.

Agree. OP's hits-based giving approach might justify the 2020 grant, but not the 2022 and 2023 grants.
