As long as it isn't a five fingered hand

Can AI produce a 3D model based on a description? That would be handy.
Ehh, history will be the decider of that.

Scammy scammies will be scammy.
The byproduct of emerging technologies.
Let's just hope AI doesn't become a dirty word like it has with blockchain or crypto. No respectable business dares utter those words anymore.
Huang has a practical mind-set, dislikes speculation, and has never read a science-fiction novel. He reasons from first principles about what microchips can do today, then gambles with great conviction on what they will do tomorrow. “I do everything I can not to go out of business,” he said at breakfast. “I do everything I can not to fail.” Huang believes that the basic architecture of digital computing, little changed since it was introduced by I.B.M. in the early nineteen-sixties, is now being reconceptualized. “Deep learning is not an algorithm,” he said recently. “Deep learning is a method. It’s a new way of developing software.” The evening before our breakfast, I’d watched a video in which a robot, running this new kind of software, stared at its hands in seeming recognition, then sorted a collection of colored blocks. The video had given me chills; the obsolescence of my species seemed near. Huang, rolling a pancake around a sausage with his fingers, dismissed my concerns. “I know how it works, so there’s nothing there,” he said. “It’s no different than how microwaves work.” I pressed Huang—an autonomous robot surely presents risks that a microwave oven does not. He responded that he has never worried about the technology, not once. “All it’s doing is processing data,” he said. “There are so many other things to worry about.”
Following the interview, Huang took questions from the audience, including one about the potential risks of A.I. “There’s the doomsday A.I.s—the A.I. that somehow jumped out of the computer and consumes tons and tons of information and learns all by itself, reshaping its attitude and sensibility, and starts making decisions on its own, including pressing buttons of all kinds,” Huang said, pantomiming pressing the buttons in the air. The room grew very quiet. “No A.I. should be able to learn without a human in the loop,” he said. One architect asked when A.I. might start to figure things out on its own. “Reasoning capability is two to three years out,” Huang said. A low murmur went through the crowd.
As described in the OP, "aye eye" is nothing more than a glorified tape recorder.

Another claim is that the generated content consists of verbatim excerpts from NYT articles, meaning the publication is losing viewers and paying customers to the likes of ChatGPT.
That’s an interesting argument. Is AI using paid-for content, which then undermines the person making said content? What should an AI-based news company pay for this content? Certainly more than an individual, because it is selling the content it acquired.

The New York Times files copyright lawsuit against OpenAI and Microsoft
It's no secret that LLMs use swaths of information from the internet as training data, but the NYT claims in its copyright infringement lawsuit that its content...
www.techspot.com
AI is literally crypto/NFT all over again: exploit first, ask questions later.
As described in the OP, "aye eye" is nothing more than a glorified tape recorder.
His daughter is not too fond of it. I can see why.

George Carlin is coming back to life in new AI-generated comedy special
Comedy legend George Carlin died in 2008, but a new AI-generated special is bringing him back to life with commentary on current events.
www.usatoday.com
jfc, just stop this shit
Once an AI has been trained on a set of data, does it still require access to the original data, or does it create metadata that it accesses?
Unless AI is fed with extremely esoteric knowledge before being pushed out, chatgpt at least is quite capable of reaching out to gather new information when asked to do so. I'm not sure what differentiates product documentation ('please explain the design intent of software blah blah') vs live feed data ('what should I invest $100 in today?'), but it's definitely not an entirely static feed of information.

Deep Learning is a better description than AI.
DL trains on input data together with the desired outcomes. In between, a network of values gets created.

Each training item updates the network of values; these are kind of analogous to human neurons that change as we learn. Hence the "Learning" part.

The end product is the Neural Network of values. When you're done training, you don't need the input data anymore. Now you are running the network instead of training it: you give it a new input and it creates an output based on that trained network.
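To make that train-then-run split concrete, here is a tiny sketch of my own (not from any of the linked articles; it assumes PyTorch is installed). The loop nudges the network's values using example input/output pairs, and afterwards the training data isn't needed at all, only the learned weights:

```python
import torch
import torch.nn as nn

# Tiny network: learn y = 2x from a handful of example pairs.
model = nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# "Training": inputs plus desired outcomes repeatedly update the network's values (weights).
x = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y = 2 * x
for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# "Running": the original data is out of the picture; only the trained
# network is used to map a new input to an output.
with torch.no_grad():
    print(model(torch.tensor([[10.0]])))  # roughly 20.0
```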
I spent most of my career writing software, and I'm very impressed by what you can do with DL. It does things that you could theoretically program with people, but that would never really work in practice.
Imagine having 10,000 chest X-rays and the analyses from highly trained experts. Try to get a programmer to build a program from that data to read X-rays and it will fail. But with DL, you just feed the data and outcomes through a training network, and now you have a Neural Network that reads X-rays like a human expert.
Or even use chest X-rays to detect things humans can't:
Deep-learning model uses chest X-rays to detect heart disease – Physics World
Artificial intelligence classifies cardiac functions and valvular heart diseases from widely available chest radiographs
physicsworld.com
There is a lot of hype, but there is also enormous potential. This is no crypto-coin boondoggle.
Unless AI is fed with extremely esoteric knowledge before being pushed out, chatgpt at least is quite capable of reaching out to gather new information when asked to do so. I'm not sure what differentiates product documentation ('please explain the design intent of software blah blah') vs live feed data ('what should I invest $100 in today?'), but it's definitely not an entirely static feed of information.
So that's the reason the forums have been crawling...

ChatGPT is kind of a special case. Initially its answers were from its trained network only. It's likely an EXTREMELY large network, so it can contain a LOT of knowledge.
More recently it can also pull information off the internet, but I expect that's more about the network acting as an agent that fetches live data, which is different from the network-synthesis approach (rough sketch below the link).
ChatGPT can now access the whole internet — this is big
ChatGPT just got a whole lot more powerful
www.tomsguide.com
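To illustrate what that agent-style lookup could look like (purely a toy sketch of my own; the function names are placeholders, not ChatGPT's real tooling or API), the flow is roughly: the model decides live data is needed, a separate tool fetches it, and the network then answers from its weights plus the fetched context:

```python
# Toy sketch of the "network acting as an agent" idea. Every function here is a
# placeholder standing in for a piece of a real system.

def needs_live_data(question: str) -> bool:
    # In a real system the model itself decides whether to call a tool;
    # here we fake that decision with a keyword check.
    return "today" in question.lower()

def web_search(query: str) -> str:
    # Placeholder for a live web/search call; returns canned text for illustration.
    return "stub search result: markets were mixed today"

def answer_from_network(question: str, context: str = "") -> str:
    # Placeholder for the trained network synthesizing an answer, optionally
    # grounded in freshly fetched context instead of only its frozen training.
    source = context if context else "trained weights only"
    return f"[answer to {question!r} based on: {source}]"

question = "what should I invest $100 in today?"
if needs_live_data(question):
    print(answer_from_network(question, web_search(question)))
else:
    print(answer_from_network(question))
```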
Hey Cortana, is that post "aye eye"(tm)?
The crucial question arises: can finding a balance help humanity in collaborating with AI? After all, AI continually learns, while individuals, for the most part, seem to lose a certain level of inventiveness and adaptability (not everyone, but the majority). The path to successful collaboration likely lies in the ability to strike a balance between the contributions of both parties, establishing a reliable mechanism of interaction where humans and artificial intelligence leverage their unique advantages to achieve shared goals. This will require not only technological advancements but also wise utilization and control by humans to ensure the harmonious development of both sides in the era of increasing AI influence.