I’m looking for podcasts, papers, or reviews on fish sentience.
Specifically:
Long-form interviews about the moral weight of fish.
Papers that estimate their moral weight.
Information on the long-term damage of fish hooks, on being out of water, and on the moral harm of fishing.
I would also like to know whether there are practical methods to reduce the amount of harm done if you are fishing.
Rethink Priorities’ moral weights report placed salmon at 0.056, but I’m not sure I completely understood what that figure meant. Does it mean they have roughly 5.6% of the sentience of a human? I don’t know how to interpret that number.
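For context on how such a figure typically gets used (this is my reading, which may be wrong: 0.056 is Rethink Priorities’ median estimate of salmon’s welfare range relative to a human’s, not a direct percentage of sentience), here is a toy calculation in which every intervention number is invented for illustration:

```python
# Toy use of a welfare-range multiplier. The 0.056 figure is Rethink
# Priorities' median salmon estimate; everything else is made up.
SALMON_WELFARE_RANGE = 0.056  # relative to a human welfare range of 1.0

def human_equivalent_welfare(n_animals: float, welfare_range: float,
                             fraction_of_range_affected: float) -> float:
    """Convert an effect on animals into 'human-equivalent' welfare units."""
    return n_animals * welfare_range * fraction_of_range_affected

# e.g. sparing 1,000 salmon an experience spanning 10% of their welfare range
effect = human_equivalent_welfare(1_000, SALMON_WELFARE_RANGE, 0.10)
print(round(effect, 3))  # 5.6 human-equivalent units
```

On this reading, the number scales how much a given experience can matter, which is why small per-animal multipliers can still dominate when the number of animals is large.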
I would like to read or listen to academic discussion about the ethics of catch and release fishing.
There actually was an EA-adjacent podcast about this in 2018, from Future Perfect. It discusses a Japanese method called ikejime, which kills the fish instantly and renders it immobile.
Alternatively, just get into (legal) magnet fishing instead. No harm done.
Quick research shows that some people use Velcro to catch the fish’s teeth and a net to land it. You can keep the fish submerged in the net while you remove the Velcro if it’s caught. I don’t know how much suffering this involves, but if you’re into fishing it might be a solution.
If there are any EAs who fish, do you have any practices for making the experience as painless as possible for the fish?
Are there any advancements being made to create a fishing experience which is nearly completely pain free?
I would love to hear from someone who has thought this through.
Here’s what I know can reduce the suffering of the fish if you do decide to fish:
Remove the barbs from the hook.
Choose a small hook, and use a single hook rather than a three-pronged (treble) hook.
Minimize the fish’s time out of the water, and handle it gently.
But if I could, I would remove the hook from the experience entirely and move toward something that gently reels in the fish. I would love to hear if there’s technology that does this.
Actually giving the fish food could also reduce the unpleasantness and even give them pleasure when caught.
But there’s probably still some risk of longer-term injury and pain. It might also condition them to be less risk-averse and make them more prone to being caught by less humane fishers.
I trout fish, and I can assure you that the fish I have caught are far too stressed to eat, so that wouldn’t work for trout fishing at least.
What are your thoughts about catch and release fishing on bigger game fish? Have you seen any methods for doing it that seem safe?
Carrick Flynn lost the nomination, and over $10 million from EA-aligned individuals went to support it.
So these questions may sound pointed:
There was surely a lot of expected value in having an EA-aligned thinker in Congress supporting pandemic preparedness, but there were a lot of bottlenecks he would have had to get through to make a change.
He would have been one of hundreds of congresspeople. He would have had to win enough votes to make it past the primary, then get bills passed and have his policies churned through the bureaucratic agencies, and it’s not entirely clear that any bill he supported would have kept its form through that process.
What can we learn from the political gambling done in this situation? Should we try this again? What are the long-term side effects of aligning EA with any political side, or of making EA a political topic?
Could that $10+ million wasted on Flynn have been better used in just trying to get EA or longtermist bureaucrats into the CDC or other important decision-making institutions?
We know the path individuals take to get these positions, and we know which people usually get selected to run pandemic preparedness for the government, so why not spend $10 million on gaining the attention of bureaucrats, or on placing bureaucrats in federal agencies?
Should we consider political gambling in the name of EA a type of intervention meant to give us warm fuzzies rather than do the most good?
I think seeing the attacks that he’s captured by crypto interests was useful, in that future EA political forays will know that attack is coming and be able to fend it off better. Was that worth $11 million in itself? Probably not. But the expected value was already pretty high (a decent probability of having someone in Congress who could champion bills that no one disagrees with but no one wants to spend time and effort on), so this information is helpful: it might make future campaigns more successful, or alternatively dissuade future spending in this area. Definitely good to try once; we’ll see how it plays out in the long run. We didn’t know he’d lose until he lost!
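The expected-value framing above can be made concrete with a toy model. Every probability and dollar figure below is invented purely for illustration; none comes from the actual campaign:

```python
# Toy expected-value comparison of two uses of the same money.
# All numbers are invented for illustration, not real estimates.

def campaign_ev(p_win_primary: float, p_win_general: float,
                value_if_elected: float) -> float:
    """EV of funding a candidate who must win a primary, then a general."""
    return p_win_primary * p_win_general * value_if_elected

def placement_ev(p_placed: float, value_if_placed: float) -> float:
    """EV of funding efforts to place aligned experts in agencies."""
    return p_placed * value_if_placed

print(f"campaign:  ${campaign_ev(0.35, 0.90, 60_000_000):,.0f}")
print(f"placement: ${placement_ev(0.60, 25_000_000):,.0f}")
# A bet can be positive-EV ex ante and still lose ex post.
```

The point of the sketch is only that "he lost" and "it was a bad bet" are separate questions: the comparison depends entirely on the probabilities and values you plug in.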
Has anyone ever considered the possibility that, if the cyclic theory of the universe is correct, agents could have an opportunity to encode messages into the big bang in the seconds before it occurs?
Kind of a bizarre idea there, but I wrote up something quick and choppy to get the idea out. It must have been thought of before. This is all almost certainly incorrect.
A question for the community
Is there a web page that gives rewards for solving problems in philanthropy? I’m imagining something like a Fiverr, or a website that lists bounties for problems.
If this hasn’t been made yet, it should be! Imagine someone with a lot of free time, or an organization looking for a new project. They could go to the CharityBounty website and select a problem to work on.
You could have problems as small as “build a website for our new charity” or as large as “create a medicine that cures this neglected tropical disease”.
Donors would contribute to project ideas like “XYZ Foundation has attached a $300 million bounty for any team that cures malaria” or “XYZ individual is offering $5,000 to whoever can make a spreadsheet that identifies regions likely to be hit by an earthquake”.
Bounty hunters would go to this website and work on projects to solve problems and earn money. They could sort problems by difficulty or reward amount.
Metaculus bettors could bet on the likelihood that each bounty would be solved, or on when it would be solved.
The likelihood that a project would be completed would increase as more money was donated towards the award.
The website could list all of the people actively working on a project, so teams could form in real time (and work together to get the money).
Everyday people could become grant makers and people who are passionate about a particular cause area could target the cause they find most important.
Forums could list and organize knowledge related to each bounty, encouraging people and lowering the barriers for anyone who might end up solving the problem.
There could be grand challenges. Maybe Open Philanthropy creates a major-problems list: “$100 million questions” like “cure Alzheimer’s” or something similarly ambitious. Researchers could then list the minor problems their field sees as important steps toward solving a major problem (e.g., find the mechanism that causes Alzheimer’s).
There could even be a bounty attached to creating the bounty list.
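The board described above can be sketched in a few lines of code; every class name and field here is my own invention for illustration, not a description of any existing service:

```python
from dataclasses import dataclass, field

@dataclass
class Bounty:
    title: str
    difficulty: int  # e.g. 1 (small task) .. 5 (grand challenge)
    pledges: list = field(default_factory=list)  # individual donations

    @property
    def reward(self) -> float:
        # the posted reward is just the pool of pledges so far
        return sum(self.pledges)

    def pledge(self, amount: float) -> None:
        self.pledges.append(amount)

board = [
    Bounty("Build a website for our new charity", difficulty=1,
           pledges=[5_000]),
    Bounty("Cure malaria", difficulty=5, pledges=[300_000_000]),
]
board[0].pledge(2_500)  # anyone can top up an existing bounty

# a bounty hunter sorting by reward, largest first
for b in sorted(board, key=lambda b: b.reward, reverse=True):
    print(f"${b.reward:,.0f}  {b.title}")
```

The design choice worth noting is that the reward is derived from the pledge pool rather than stored directly, which is what lets "everyday people become grant makers" by topping up any open bounty.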
You might be interested in Impact Certificates, Markets for Altruism and https://www.super-linear.org/
Thank you for your help Lorenzo
Forgive me if this has been thought of before.
Has anyone considered the idea of setting up independent arbitration courts for artificial intelligence research labs to keep them ethically aligned? Here is the problem and a possible (poorly thought out) solution:
US courts would probably be overly punitive if they were policing AI research. They would undoubtedly be unintentionally regressive, limit the growth of the field, and create a situation in which worse actors in other localities get AGI first. They would probably move AI research offshore. The strong hammer of the law is not well equipped or agile enough to thread this small needle. In the worst case, federal regulation of AI would be dangerous and harmful; at best, it would be clumsy. The rules to govern AI labs would also have to be international and agreeable to all parties.
Private arbitration courts work well for many corporations. There is a lot of precedent here. Both parties voluntarily enter into the arbitration court and agree on the outcome.
Imagine independent courts established to ensure AI safety measures are taken. Individuals from different labs could peek into other labs, “view the Soviet warheads”, and ensure everyone is behaving responsibly. If one lab raised repeated ethics concerns, all major AI labs or corporations could agree not to work with that lab, raise the concern with their local governments, or make the lab forfeit an agreed-upon deposit for accumulating too many ethics violations.
Individual AI labs would benefit from sharing research, promoting a healthy community, and working toward the common good of a safe launch of AGI, if it happens.
There are many out-of-the-box ways to run this: all AI labs could pool money for courts in return for shared research, Open Phil could establish grants for smaller nonprofits to create courts, or grant-making organizations could withhold money for ethics violations. There’s a rich history of creative ways for individuals to voluntarily combat collective-action or externality problems, often more effectively than top-down monopolistic legal systems.
There could be strong incentives to take part in the courts for public-image reasons. It would tarnish Google’s image if DeepMind were the only major AI lab that didn’t play by the community’s rules, and activists could draw attention to this.
Just an idea I had
Marketing AI reform:
You might be able to have a big impact on AI reform by changing the framing. Right now, framing it as “AI alignment” sells the idea that there will be computers with agency, or something like free will, or that they will choose to act like a human.
It could instead be marketed as something like preventing “automated weapons” or “computational genocide”.
By emphasizing that a large part of the reason we work on this problem is that humans could use computers to systematically cleanse populations, we could win people to our side.
Proposal: change the framing from “Computers might choose to kill us” to “Humans will use computers to kill us” regardless of whether either potential outcome is more likely than the other.
You could probably get more funding, more serious attention, and better reception by just marketing the idea in a better way.
Who knows, maybe some previously unsympathetic billionaire or government would be willing to commit hundreds of millions to this area just by changing the way we talk about it.
EA's greatest strength, in my mind, is our epistemic ability - our willingness to weigh the evidence and carefully think through problems. All of the billions of dollars and thousands of people working on the world's most pressing problems came from that, and we should continue to have that as our top priority.
Thus, I'm not comfortable with sentences like "Proposal: change the framing from “Computers might choose to kill us” to “Humans will use computers to kill us” regardless of whether either potential outcome is more likely than the other." We shouldn't be misleading people, including by misrepresenting our beliefs. Plus, remember - if you tell one lie, the truth is forever after your enemy. What if I'm a new EA engaging with AI safety arguments, and you use that argument on me, and I push back? Maybe I say something like "Well, if the problem is that humans will use computers to kill us, why not give the computer enough agency that, if the humans tell it to kill us, the computer tells us to shove it?"
This would obviously be a TERRIBLE idea, but it’s not obvious how you could argue against it within the framework you’ve just constructed, where humans are the real danger. Every good argument against it comes from the idea that agentic AIs are super dangerous, which contradicts the claim you just made. If the danger is humans using these weapons to kill each other, giving AIs more agency might be a good idea; if the danger is computers choosing to kill humans, giving AIs more agency is a terrible idea. I’m sure you could come up with a way of reconciling these examples, but you’ll notice that it sounds a bit forced, and I bet there are more sophisticated arguments I couldn’t come up with in two minutes that would further separate these two worlds.
We have to be able to think clearly about these problems to solve them, especially AI alignment, which is such a difficult problem to even properly comprehend. I feel like this would be both counterproductive and just not the direction EA should be going. Accuracy is super important - it's what brought EA from a few people wanting to find the world's best charities to what we have today.
I agree with you here.