Elon Musk’s Final Warning
Dear Friends & Neighbors,
Elon Musk gives his final warning about many things that people are doing wrong. This motivational and educational video offers a different perspective on many topics, including the need to regulate AI (artificial intelligence) so that it can be developed safely, given the rate of advancement in the field. You will hear much advice that is very important. Check it out in the video, published on April 15, 2021, “Elon Musk – My Final Warning“, below:
Elon Musk reminds us, “The danger of AI is much greater than the danger of nuclear warheads, by a lot. And nobody would suggest that we allow anyone to just build nuclear warheads if they want; that would be insane. Mark my words: AI is far more dangerous than nukes. So why do we have no regulatory oversight? This is insane! I’m not really all that worried about the short-term stuff; things like narrow AI are not a species-level risk. It will result in dislocation and lost jobs, and that kind of thing, but it is not a fundamental species-level risk, whereas digital superintelligence is. So it’s really all about laying the groundwork to make sure that if humanity collectively decides that creating digital superintelligence is the right move, then we should do so very, very carefully. This is the most important thing that we could possibly do. There are other things on a longer time scale, obviously, the things that I believe in, like extending life beyond Earth, making life multiplanetary. And I’m a big believer in Asimov’s Foundation Series, or the principle in it; I recommend reading the Foundation Series. If there’s likely to be another Dark Ages, and my guess is there probably will be at some point (I’m not predicting that we’re about to enter a dark age, but there’s some probability that we will, particularly if there’s a third world war), then we want to make sure that there’s enough of us, a seed of human civilization somewhere else, to bring civilization back and perhaps shorten the length of the dark ages. I think that’s why it’s important to get a self-sustaining base, ideally on Mars, because Mars is far enough away from Earth that if there’s a war on Earth, a Mars base is more likely to survive than a Moon base. But I think a Moon base and a Mars base that could perhaps help regenerate life back here on Earth would be really important, and to get that done before a possible World War 3.
You know, last century we had two massive world wars, three if you count the Cold War. I think it’s unlikely that we’ll never have another world war; there probably will be one again at some point. I’m not predicting this, but if you ask, given enough time, is it likely? Given enough time, this has been our pattern in the past. So I think sustainable energy is also obviously really important, almost tautologically: if it’s not sustainable, it’s unsustainable. I think the core technologies are there, with wind and solar, with batteries. The fundamental problem is that there’s an unpriced externality in the cost of CO2. Market economics works very well if things are priced correctly. But when things are not priced correctly, or something that has a real cost carries zero cost, that’s where you get distortions in the market that inhibit the progress of other technologies. So essentially, anything that puts carbon into the atmosphere, which includes rockets, by the way, so I’m not excluding rockets from this, there has to be a price. You can start off with a low price, and then, depending upon whether that price has any effect on the concentration of CO2 in the atmosphere, you can adjust that price up or down. But in the absence of a price, we sort of pretend that digging trillions of tons of fossil fuels from deep under the earth and putting it into the atmosphere has no probability of a bad outcome. And the entire scientific community is saying, obviously it does. It’s going to have a bad outcome, obviously. You’re changing the chemical constituency of the atmosphere. So it’s really up to people and governments to put a price on carbon, and then automatically the right thing happens. It’s really straightforward.
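The adjust-up-or-down carbon price mechanism described above is essentially a feedback loop: start with a low price per ton, then raise or lower it depending on whether atmospheric CO2 is still trending upward. As a purely illustrative sketch, with hypothetical function names and numbers that do not come from any real policy:

```python
def adjust_carbon_price(price, co2_trend_ppm_per_year,
                        target_trend=0.0, step=5.0, floor=10.0):
    """Raise the price per ton if CO2 is still rising faster than the
    target trend; lower it (never below a floor) once the trend is at
    or below target. All values are hypothetical, for illustration."""
    if co2_trend_ppm_per_year > target_trend:
        return price + step          # emissions still rising: price goes up
    return max(floor, price - step)  # trend under control: ease the price off

# Example: start low, observe a rising trend for three review periods,
# then a flat trend for one.
price = 10.0
for trend in [2.5, 2.4, 2.2, 0.0]:
    price = adjust_carbon_price(price, trend)
print(price)  # 20.0 after three raises and one reduction
```

This is only a toy model of the idea that the price itself need not be known in advance; what matters is that it exists and responds to measured outcomes.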
It sounds like I’m backtracking, but there’s actually an argument that more carbon in the atmosphere is good, up to a point. We might arguably have been a little carbon-starved: if you go back 200 years, you had about 280 or 290 parts per million of carbon, so we were probably a little carbon-starved. Now we’re at about 400, just passed the 400 mark, I think, somewhere in the 400s, which is probably okay; we don’t have to worry about sequestering carbon or anything like that. But if this momentum keeps going and we start going up to 600, 800, 1,000, 1,500, that’s where things get really squirrely, and the sheer momentum of the world’s energy infrastructure is leading us in that direction. So it’s just very important that the public and the government push to ensure the correct price of carbon is paid. That will be the thing that matters. Right now, the only things that are really stressing me out in a big way are AI, obviously. There’s somebody, I can’t remember his name, who had a good suggestion for what the optimization of the AI should be, what its utility function is. You have to be careful about this, because if you say “maximize happiness” and the AI concludes that happiness is a function of dopamine and serotonin, so it captures all humans and jacks your brain with large amounts of dopamine and serotonin…like…okay…that’s not what we meant. I think AI should try to maximize the freedom of action of humanity…maximize the freedom of action…maximize freedom, essentially. I like that definition.”
In essence, Elon Musk reminds us:
- The need to regulate AI and to define its prime objective as maximizing the freedom of action of humanity
- The need to price carbon correctly in order to steer the momentum of the world’s energy infrastructure in the right direction, away from dangerous levels of carbon in Earth’s atmosphere.
Gathered, written, and posted by Windermere Sun-Susan Sun Nunamaker. More about the community at www.WindermereSun.com
~Let’s Help One Another~
Please also get into the habit of checking these sites below for more on solar energy topics: