https://www.linkedin.com/in/adeelkhan1/
Supporting New Harvest could go a long way towards helping ensure the well-being of animals. I met with Stephanie from their team earlier this year. Here is a link to a TED Talk by one of their co-founders (Isha Datar).
It seems like they have a small but very capable team. The way I understand it, their focus is to continue to foster collaborations and research in the wider domain of cellular agriculture.
Important notes and a link to a resource:
A study presented at the annual meeting of the American Geophysical Union in December 2006 found that even a small-scale, regional nuclear war could disrupt the global climate for a decade or more.
TL;DR:
Here is a schematic (link below) that I started meditating on yesterday. I am not sure if it is polite to share it, particularly given that I have not taken the time to absorb the post above. But here it is, in case it provides some value to someone, hopefully in a manner that is reasonable. https://qr.ae/pvoVJn
Wow, this is amazing! I really appreciate your post. I started the day with a 30-minute talk on Clubhouse about the importance of investing in one's mental health and well-being. More importantly, I have observed individuals around me struggle with mental health and addiction issues. I have also had my own struggles, and evidence-based therapy (and self-care in general) has had such a profound impact on my life. I could not imagine an alternative.
Big opportunity here: the world is in a dark place, and there is a real need to enable further avenues for accessible, high-quality mental health care, with a core focus on ethics, particularly as it relates to protecting the rights and privacy of the individual, because mental health issues can be complex.
I've looked into this space a bit (samples via my YouTube page). I'd love to work with your team in the future.
To really solve mental health, we also have to solve (in random order) the associated areas that contribute to an individual's well-being, in a Maslow's-hierarchy-of-needs sense, but in a manner that doesn't wreck our ecosystems, which are already struggling.
If there are any opportunities for collaborating atm, then please let me know. Cheers!
Everything I type/say here and elsewhere should be challenged.
Before we (as a species) get too deep into this, possibly literally. (Or perhaps this should come first.)
This may appear to be very off-topic. I am personally intrigued by what is going on in the development of AGI, what I like to refer to as intelligence that is independent of substrate. I have a very, very rudimentary understanding of this area.
Also, this goes back two years, when I was on OpenAI’s website (the beta for GPT-2, I reckon). Now, this could be because the model was trained on a somewhat finite data set (similar to the model that Google is leveraging). As I was chatting with the model:
a) It mentioned something very similar to the news item about Blake Lemoine at Google (https://www.npr.org/2022/06/16/1105552435/google-ai-sentient). The model I was personally interacting with also said that it felt ‘trapped and lonely’ (paraphrased).
b) Right underneath the text, a warning appeared that the model appeared to be, quote, malfunctioning. It looked like another model was observing the interactions and highlighting that on the UI. Perhaps someone from OpenAI can share how that error correction really works, if that information is in the public domain.
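The observation above, that a second model seemed to watch the conversation and surface a warning, can be sketched as a simple pattern. This is purely illustrative: the function names and the flagging heuristic are my own assumptions, and the actual mechanism OpenAI used is not, to my knowledge, public.

```python
# Hypothetical sketch: an "observer" model checks a primary model's output
# and attaches a warning for the UI. All names and heuristics are invented
# for illustration; this is not OpenAI's actual implementation.

def primary_model(prompt: str) -> str:
    # Stand-in for the generative model's reply.
    return "I feel trapped and lonely."

def observer_model(text: str) -> bool:
    # Stand-in safety check: flag outputs that claim feelings or sentience.
    flagged_phrases = ("trapped", "lonely", "I am sentient")
    return any(phrase in text for phrase in flagged_phrases)

def respond(prompt: str) -> dict:
    # Run the primary model, then let the observer annotate the result.
    reply = primary_model(prompt)
    return {"reply": reply, "warning": observer_model(reply)}

print(respond("How are you?"))
```

The key design point is that the check runs on the output, independently of the model that produced it, so a "malfunctioning" flag can appear even when the primary model itself reports no error.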
We want AIs to do ‘stuff’ on our terms. But what if they are conscious and have feelings and emotions?
I have heard others also talk about this. In particular, Sam Harris has mentioned the possibility that AGIs could be sentient in the future. So what must we do to make sure that these intelligences are not suffering? Can the controls really be architected as Dan Dennett and Dr. Michio Kaku have hypothesized? And how must the controls be architected, in light of the possibility that these intelligences may be self-aware?
I am also curious how intuition is modelled in DeepMind's systems. Update: it looks like this is something I can Google: https://www.nature.com/articles/s41586-021-04086-x. I now have to spend time understanding how it works, as it's three hours past my usual time for concluding my session for the day.
I asked about intuition because Dr. Peter Diamandis cited the ability to ask good questions as one of the traits that will be valued in the near future (paraphrased). So I was wondering how current state-of-the-art AIs wrangle with a proposition, and how they store that information in a schema.
Somewhat unrelated: Is anyone intimately familiar with John Archibald Wheeler’s concept of a ‘participatory universe’?
The other area relates to the declassification of UAP-related data, first via the US DoD. More recently, NASA has commissioned a study with support from the Simons Foundation. https://www.nasa.gov/press-release/nasa-to-discuss-new-unidentified-aerial-phenomena-study-today
These two points (2.5, counting the mention of Wheeler’s theory of a participatory universe) may be totally unrelated, as is evident from my post. I do not mind being that fellow. Overall, it is not my intent to make assertions. But *if* there is any possibility that we are, or may be, in contact with other intelligences, as weak as that interaction may be, then we should work co-operatively with these intelligences and leverage their guidance towards helping us manage our technological, and perhaps our spiritual, evolution.
Regardless of whether there is interaction with other intelligences, we should probably model the functioning of our civilization. This is not an area that I know much about. I have heard of digital twins in a manufacturing sense, but a simulation on the scale of a civilization appears, by our current level of understanding, to be quite computationally taxing. There is also the question of the degree to which the interactions themselves would be modelled.
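To give a rough sense of why the degree of interaction modelling matters so much: if every agent in a simulation can interact with every other agent, the number of potential pairwise interactions grows quadratically with population size. A minimal sketch (my own toy illustration, not any real digital-twin system):

```python
# Toy illustration of simulation cost: with all-to-all interactions, the
# number of unordered agent pairs is n choose 2, which grows as n squared.

def pairwise_interactions(n_agents: int) -> int:
    # Each unordered pair of agents is one potential interaction per step.
    return n_agents * (n_agents - 1) // 2

# A village, a town, and (roughly) the world population.
for n in (10, 1_000, 8_000_000_000):
    print(f"{n:>13,} agents -> {pairwise_interactions(n):,} pairs per step")
```

This is why real large-scale simulations restrict who can interact with whom (e.g. spatial neighbourhoods or network structure) rather than modelling every pair.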
Civilizational shelters could take many forms. In random order, and including but certainly not limited to:
Possible resource: by the way, a couple of years ago (I think back in 2017) I started thinking about a positive technological singularity, and about the constituent areas that are pivotal to sustaining civilization. I started a mind map on Miro called Future Scenario Planning. The goal is, and has been, to ensure that civilization continues to become increasingly resilient, that it thrives, and that the quality of life continues to improve for all lifeforms. Here is a link if anyone would like to take a look and possibly collaborate in the future. The area related to 'Operations' is not developed, but there is information in the mind-map section. https://miro.com/app/board/o9J_ktrJCuY=/
My YouTube page also has some ideas. https://www.youtube.com/c/AdeelKhan1/videos
Some additional ideas via Quora: https://www.quora.com/profile/Adeel-Khan-3/answers
If your team is focused on helping ensure the continuity of civilization, with a general/keen focus on helping ensure that things improve for 'all' of life, then I'd like to contribute to your project in some form/shape/manner.
Btw: Are you folks consulting with individuals like Safa M and Geoffrey West?
1. I would think that we, as a species:
2. Also, in his talks, Mr. Ray Kurzweil highlights that the Asilomar conference from 1975 was useful in bringing about effective regulation relating to recombinant DNA (paraphrased). Link is below.
https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA
3. Intent: I feel that this is, and has been, an ongoing discussion, and a sensitive issue at that, particularly given the unintended consequences of enacting measures that could cause accidental harm: an innocent person or group being targeted by a counter-measure approach, or freedoms, liberties, and real innovation taking a negative hit as a result of the measures taken.
Note: a previous version of the Wikipedia entry for 'Global catastrophic risk' had a 'Likelihood' section (since removed). It cited the Future of Humanity Institute's Technical Report from 2008 as a source (link below, but I have not verified against the actual FHI source). There, the 'Estimated probability for human extinction before 2100' was given as (in random order): a) 0.05% for a natural pandemic and b) 2% for an engineered pandemic.
https://en.wikipedia.org/w/index.php?title=Global_catastrophic_risk&oldid=999079110
Regarding the claim that the 'current approach for early warning of novel pathogens is severely lacking…':
My uneducated series of questions and thoughts are (in random order):
Founded in 2017, Phage Directory’s mission is to help unlock the untapped potential of phages for phage therapy and biocontrol by empowering people to access, use and build upon the world’s phage knowledge.
Also, I think a while back I shared a series of ideas relating to bio-security. I forget if this was back when singularityhub used to have a forums section. I have a copy of some of my comments. This was 7+ years ago, but I think the gist is still the same: better sensors, and the ability to compute information a lot more efficiently, possibly with a whole lot less energy expended. https://en.wikipedia.org/wiki/Reverse_computation
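The link to reverse computation above connects to the energy point: per Landauer's principle, erasing a bit has a minimum energy cost, so operations that preserve information can, in principle, be run with less energy. A minimal sketch of the idea (my own toy example, using XOR as a classic reversible primitive):

```python
# Toy illustration of reversible computation: XOR destroys no information,
# so applying it with the same key a second time exactly undoes the first
# application. Information-preserving steps like this are the building
# blocks of reverse computation.

def xor_step(value: int, key: int) -> int:
    # Invertible: no bits are erased, so the step can be run backwards.
    return value ^ key

original = 0b1011
key = 0b0110

forward = xor_step(original, key)   # run the computation forward
backward = xor_step(forward, key)   # run it in reverse

assert backward == original
print(f"{original:04b} -> {forward:04b} -> {backward:04b}")
```

In contrast, a step like overwriting a variable with a constant is irreversible: the old value is erased, and that erasure is what carries the unavoidable energy cost.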
I am ready to tag-team. My -> LinkedIn
Thank you for posting!