Elon Musk among experts urging a halt to AI training https://www.bbc.com/news/technology-65110030 Musk is the headliner, but there are over 1,000 signatories here. Glad to see this happening. The implications of what it can become are very scary.
They probably just want to slow everyone down so they can catch up. But on the surface they also appear to be making a societal morality play, even though it's really a cloak for their own desire for personal profit.
I mean, that's kinda dumb, right? "Hey guys, let's all agree to STOP developing new technologies..." Good luck with that.
I agree with it in principle, I just don't think it will be adhered to by people with that level of influence. If an army of internet hustlebros can affect the world like they have in the past year or so, what can Elon Musk do with that tech?
The bigger a lead guys like Musk have on this, the faster this type of AI takes over white-collar jobs. This letter also assumes, I think, that Musk and the others are the ones I need to listen to when it comes to containing an AI apocalypse. I do think it'll be very important to create a regulatory system for this kind of AI. The internet and hardware both follow industry regulations/regulators to a certain degree; this should be no different. It's a brand new type of tech.
North Korea is not going to sign the pledge you create to do this. Neither will Iran or Pakistan or anybody competing with them in a big way. The genie is out now and the only question is how long until it puts us in the bottle. Could be a few years or it could be a thousand years but it is definitely going to happen.
Musk is an asshole. First Tesla was a battery company with the car as a proof of concept, then it was a car company, and now that the stock is crashing it's an AI tech company. I don't believe a word that guy says. I was just in China, and the BYD electric cars are so much better than Teslas: way nicer inside, and the automated driving/parking actually works. Also, I am more hopeful about AI. Rather than the beginning of the end for humankind, it may be the end of the beginning. It may free us from CEOs and bankers, bringing real rationality to the economy and allowing many to pursue deeper things with more of their time. I think the cat's out of the bag with AI already. It won't take too much more work to create AI that makes perpetually better AI.
IDK about all the areas of use, but in terms of data analysis and insights, AI adds nothing beyond what a trained data scientist can do--all it looks for is what's programmed into it--it cannot come up with new ideas or insights, nor can it reason about the future. I wonder how much of this is just scare tactics, like so many other tactics meant to gain influence and power.
My main worries about AI are more personally affecting than humanity affecting. My company is already rolling out AI companions, and the decision makers' understanding of the subject seems really basic. The expectation is that employees will have to decide whether the AI companion is giving them poor or even completely incorrect data... in a large corporation where competence and accountability are at a premium, this is just asking for an unending series of minor catastrophes that snowball into negative effects on large institutions in this country. People don't want to work, so they will embrace something they think does the work for them. And company boards of directors are really inept at seeing things from the ground floor that affect the bottom line.
The execs are always going to test new systems, usually to the detriment of the workers'/employees' current system: putting the employee at risk, making the job harder, etc. This seems like one of those cases--it could just as easily have been something like "keep a log with justification for all your actions"--of course it's gonna suck. What I've found from experience is that you take the temperature of the organization: does it want this to work? Does it want to be shown that there's a long way to go? Once you decide what the decision makers want to hear, give them that with the addendum that, of course, anything could happen, etc. Stay safe lol.
That's the kind of approach in big companies that makes me laugh. Users manually reporting breaks in an automated system usually yields minimal results. That's supposed to be what Test and Acceptance of these systems accomplishes, but many companies' project managers now approach it like Microsoft does: push the product through and let the bugs be found in the production environment. It's idiotic.
Unrelated to this specific discussion, this is a worrisome article from 2018: https://www.fanaticalfuturist.com/2...weapon-by-injecting-viruses-into-neural-nets/ And then there's the US Navy asking for AI-managed kamikaze drone swarms....
That's the real rub. The Pentagon will never stop developing AI no matter what public figures want. Neither will the intel agencies.
Hackett could certainly use some AI-generated plays. Saleh could use AI to call plays too, but that wouldn't be needed as long as Aaron is playing. I wonder if the NFL would allow it.
If there were an OC that could be replaced with the simplest of algorithms, it's this guy. ChatGPT for OC!!!
I like it already. Consider this: I posed the following question to ChatGPT: "It is 3rd and 7 from midfield in the 4th quarter, the NY Jets are playing the Miami Dolphins in Miami and are trailing by 4 points. Which play should they run?" Coach ChatGPT gave me multiple options, but option 1 was a refreshing answer that Hackett could learn from: "Considering it's 3rd and 7, they need a play that has a high probability of gaining at least 7 yards. They could opt for a passing play, preferably one that targets a receiver beyond the first-down marker." Hire this man. Or woman. Or machine, I guess.
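If anyone wants to script the same experiment instead of using the chat window, here's a minimal sketch assuming the OpenAI Python client; the model name and the "offensive coordinator" system prompt are just illustrative assumptions, not what the poster actually ran.

# Minimal sketch of asking an LLM for a play call (assumes the OpenAI Python client
# and an OPENAI_API_KEY in the environment; model name is an assumption).
from openai import OpenAI

client = OpenAI()

prompt = (
    "It is 3rd and 7 from midfield in the 4th quarter, the NY Jets are playing "
    "the Miami Dolphins in Miami and are trailing by 4 points. "
    "Which play should they run?"
)

response = client.chat.completions.create(
    model="gpt-4",  # use whatever model you have access to
    messages=[
        {"role": "system", "content": "You are an NFL offensive coordinator."},
        {"role": "user", "content": prompt},
    ],
)

# Print the suggested play call
print(response.choices[0].message.content)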