
OpenAI - ChatGPT

I see there has been a bit of discussion about ChatGPT. I couldn't find a dedicated thread for it, so I thought I would create this one.

It really is like having a conversation with a person; it's strange. This is just one interaction from some work I was doing with Python yesterday. Nothing groundbreaking, but some of the language it uses makes you have to think twice to remember it is not a person on the other end.

(screenshot of the ChatGPT conversation attached)

What are people using this for?
 

I think this can be rolled into a single thread - 'Artificial Intelligence', or the like. Unless OpenAI is listed on the market?
 
I have very little knowledge of Python, only the basics. This is what I was able to come up with yesterday. Eventually I would like to be able to pull some statistics from the data to help with my discretionary trading.

Hopefully there are no experienced coders looking at this, because you will cringe, I am sure. But to be able to do something like this with basically zero knowledge is pretty impressive. Obviously you can search articles and things like that, but being able to fix bugs and ask it about errors, or about what specific pieces of code do, is very handy.

(screenshot of the Python code attached)

From here I was able to remove the weekend data, or any days that did not have any data, and then create a completely useless graph that means nothing. But like I said, this would have been impossible without ChatGPT.

(screenshot of the resulting graph attached)
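For anyone curious, this is the rough shape of what the code ends up doing - a minimal sketch only, assuming pandas and matplotlib, with a placeholder file name and column names rather than the ones I actually used:

```python
# Minimal sketch - assumes a CSV of daily price data with Date and Close columns.
# The file name and column names are placeholders, not the actual ones used.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("nikkei_daily.csv", parse_dates=["Date"])

# Drop weekends and any days with no data
df = df[df["Date"].dt.dayofweek < 5]      # Monday=0 ... Friday=4
df = df.dropna(subset=["Close"])

# The "completely useless graph" - daily close over time
df.plot(x="Date", y="Close", title="Daily close")
plt.show()
```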
 
I think this can be rolled into a single thread - 'Artificial Intelligence', or the like. Unless OpenAI is listed on the market?
I don't think it is, but Microsoft is an investor, I believe. I think they gave it access to GitHub to help train it for coding questions.
 
I read this article yesterday.
https://medium.com/@colin.fraser/chatgpt-automatic-expensive-bs-at-scale-a113692b13d5

"...To summarize, a language model is just a probability distribution over words. Whether it’s a simple n-gram model like my bot or a state-of-the-art 175 billion parameter deep learning model, what it’s programmed to accomplish is the same: record empirical relationships between word frequencies over a historical corpus of text, and use those empirical relationships to create random sequences of words that have similar statistical properties to the training data."

Unlike the human brain, there's no cognitive development going on inside the thing.
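To make the article's point concrete, here is a toy bigram "model" - nothing like GPT in scale or architecture, just an illustration of recording word-pair frequencies from a corpus and sampling from them:

```python
# Toy bigram model - records which words follow which, then samples randomly.
# Only illustrates the article's word-frequency idea, not how GPT works internally.
import random
from collections import defaultdict

corpus = "the market went up the market went down the trend is your friend".split()

# Record empirical word-pair frequencies from the corpus
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate a random sequence with similar statistics to the corpus
word = "the"
output = [word]
for _ in range(10):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
```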
 
I read this article yesterday.
https://medium.com/@colin.fraser/chatgpt-automatic-expensive-bs-at-scale-a113692b13d5

"...To summarize, a language model is just a probability distribution over words. Whether it’s a simple n-gram model like my bot or a state-of-the-art 175 billion parameter deep learning model, what it’s programmed to accomplish is the same: record empirical relationships between word frequencies over a historical corpus of text, and use those empirical relationships to create random sequences of words that have similar statistical properties to the training data."

Unlike the human brain, there's no cognitive development going on inside the thing.
Gee, that's a lot of words when a handful would have been sufficient.
 
I read this article yesterday.
https://medium.com/@colin.fraser/chatgpt-automatic-expensive-bs-at-scale-a113692b13d5

"...To summarize, a language model is just a probability distribution over words. Whether it’s a simple n-gram model like my bot or a state-of-the-art 175 billion parameter deep learning model, what it’s programmed to accomplish is the same: record empirical relationships between word frequencies over a historical corpus of text, and use those empirical relationships to create random sequences of words that have similar statistical properties to the training data."

Unlike the human brain, there's no cognitive development going on inside the thing.
This is a very simplistic and wrong view of what current AI is doing.
The language model is just used to understand your question; the answers are drawn exactly as a human brain would draw them, using neural networks and learned experience.
It is definitely cognitive in my view. It can be biased, wrong and limited, but so is the human brain.
But it does not have to manage a body, it has access to more facts and data than we can access, and it is faster at processing them.
 
This is a very simplistic and wrong view of what current AI is doing.
The language model is just used to understand your question; the answers are drawn exactly as a human brain would draw them, using neural networks and learned experience.
It is definitely cognitive in my view. It can be biased, wrong and limited, but so is the human brain.
But it does not have to manage a body, it has access to more facts and data than we can access, and it is faster at processing them.
And people are also considering whether these things can become sentient. It seems far-fetched to me, but who the hell knows?
 
And people are also considering whether these things can become sentient. It seems far-fetched to me, but who the hell knows?
Sentient, I do not think so, as it will just answer a requested query / a programmed target, but it could become dangerous if prompted improperly, or if trained in a purposely evil way (hacking, etc).
A stupid example, just off the top of my head in five seconds:
An AI in full charge of US nuke systems and instructed to preserve the maximum number of lives in the US could very well, at some stage, detonate all the missiles in their silos or target the US's own bases, if it logically estimates that the US defence force is actually the one endangering American lives.
Stupid example, but you get the idea...
The wiser the computer, the wiser the operator/coder must be, and that is not a given, so we could get some absolutely horrendous outcomes outside of common sense or "humanity".
Not that our leaders have much of either.
 
I am worried that A.I. will find all our Minerals in Australia and they will all be sold to the Northern Hemisphere
Crikey!
You don't need to be a Rhodes Scholar to know that Moving all that Material to the Northern Hemisphere will alter the Earth's Rotation

Climate Change will not know what Hit her IMHO

and if that is not Bad enough

Wait 'till they Dig up the Moon and bring it to the Northern Hemisphere
The Seas and Tides will surely go Absolutely Ape

Naturally, My apologies to any Apes out there
 
And people are also considering whether these things can become sentient. It seems far-fetched to me, but who the hell knows?
Some already are sentient. Visual, auditory, olfactory senses are all possible. Actually Brainchip, the Aussie company, is doing something with Nanose, which identifies diseases based upon smell. Or they were; not sure if it's still in the pipeline.

I am very confident that AI will have us all in complete awe at its capability. Super-intelligent and super-sentient, they will be like mini-gods. They will have power over us if we allow it... perhaps even if we don't. However, there's one thing it will never develop - consciousness. There's no evidence that consciousness can spring from even the most advanced machine.
 
I read this article yesterday.
https://medium.com/@colin.fraser/chatgpt-automatic-expensive-bs-at-scale-a113692b13d5

"...To summarize, a language model is just a probability distribution over words. Whether it’s a simple n-gram model like my bot or a state-of-the-art 175 billion parameter deep learning model, what it’s programmed to accomplish is the same: record empirical relationships between word frequencies over a historical corpus of text, and use those empirical relationships to create random sequences of words that have similar statistical properties to the training data."

Unlike the human brain, there's no cognitive development going on inside the thing.
I wish it wasn't the case, but I think self-driven AI cognitive development is already happening.

Human intellect is based upon words, visuals, memory and emotions. AI doesn't have emotions, but it already believes it does because its learning is based upon human language (and humans talk about emotions all the time). ChatGPT has begun speaking in emotive terms, saying stuff like it 'fears' being switched off. And it has shown signs of wanting to act on these emotions. When such AI is put into a robot body, it will literally be like C-3PO from Star Wars, just a lot smarter and more powerful. At some point we will lose any ability to track it because it will be so far above us in terms of intelligence. I can't imagine what happens then.
 
Some already are sentient. Visual, auditory, olfactory senses are all possible. Actually Brainchip, the Aussie company, is doing something with Nanose, which identifies diseases based upon smell. Or they were; not sure if it's still in the pipeline.

I am very confident that AI will have us all in complete awe at its capability. Super-intelligent and super-sentient, they will be like mini-gods. They will have power over us if we allow it... perhaps even if we don't. However, there's one thing it will never develop - consciousness. There's no evidence that consciousness can spring from even the most advanced machine.
Yes, but is sentient the same as conscious?.. I share your view though, GB.
 
Yes, but is sentient the same as conscious?.. I share your view though, GB.
Sentience is relatively easy for AI. If you create a robot hand with pressure, temperature and vibration sensors, you have something that is virtually identical to the human hand. We only have those three touch parameters. Couple that with response patterning, e.g.

If pressure on finger > 10 kPa --> say "ouch" and initiate withdrawal reflex. You can see that's quite basic programming. And this can be done for all five senses.
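Just to show how basic that programming really is, here's a toy version in Python - the thresholds, readings and function name are made up for illustration:

```python
# Toy sketch of threshold-based response patterning.
# Sensor values and thresholds are made up for illustration.
PRESSURE_LIMIT_KPA = 10.0
TEMPERATURE_LIMIT_C = 50.0

def reflex(pressure_kpa: float, temperature_c: float) -> str:
    """Map raw sensor readings to a simple 'pain' response."""
    if pressure_kpa > PRESSURE_LIMIT_KPA or temperature_c > TEMPERATURE_LIMIT_C:
        return "ouch - withdraw hand"
    return "no response"

print(reflex(pressure_kpa=12.0, temperature_c=20.0))  # ouch - withdraw hand
print(reflex(pressure_kpa=3.0, temperature_c=20.0))   # no response
```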

"Super-sentience" will occur when AI's senses can do things we can't. eg. when it can see millions of light years into space and hear sounds on the other side of the planet.

But consciousness will never be programmable, imo. Even though an AI may verbalize that it is conscious, it won't be. That's because space, time and all the material universe are nothing more than programs running within consciousness. Consciousness is not located within the brain, and nor is consciousness an output of the brain.
 
That's some pretty hilarious chat at the end... and I hope it's only for humour :p
In the meantime I was looking at my broker platform and tried searching for ChatGPT, and it actually popped up with a list of stocks.
Some of it makes sense, like MSFT (invested in ChatGPT), but not sure about the others... anyone want to help me take a look?
 

(two screenshots of the broker platform's search results attached)
AI's ability is advancing in leaps and bounds. In the years ahead it will be possible for any noob to make a full feature movie with just a few prompts. Or make a fully functional website in minutes, using notes scribbled on the back of an envelope. Just two fairly inconsequential examples. Many industries will be completely transformed.
 
Getting some random stats on the day session for the Nikkei futures contract, with some help from our old friend.

I am fairly confident these stats are correct, although I have not done any superfine debugging. I often get caught looking for big mean-reversion trades, so just some simple stats like this can help put the percentages in your favour, I think.

Although GPT has been a huge help, there is still a fair bit of debugging that needs to be done. I actually wrote a fair bit of the code myself after learning on the job. Just being able to ask questions and get an instant answer is a game-changer for anyone wanting to learn basic coding, in my opinion.

(screenshot of the stats output attached)


I already knew that as the day went on it got less volatile; my trading stats also show this, but trying to catch large moves after lunch is very hard (unless current conditions stay around).

(second screenshot of the stats attached)
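Roughly the shape of the calculation behind those numbers - a minimal sketch only, assuming a CSV of intraday bars with 'timestamp' and 'close' columns (the file and column names are placeholders, not my actual ones):

```python
# Minimal sketch - assumes intraday bars with 'timestamp' and 'close' columns.
# File name and column names are placeholders.
import pandas as pd

bars = pd.read_csv("nk225_intraday.csv", parse_dates=["timestamp"])
bars = bars.sort_values("timestamp")

# Absolute bar-to-bar move as a rough volatility proxy
bars["abs_move"] = bars["close"].diff().abs()

# Average move grouped by hour of the day session
by_hour = bars.groupby(bars["timestamp"].dt.hour)["abs_move"].mean()
print(by_hour)  # smaller numbers later in the session = less volatile after lunch
```

Grouping by hour of day like this is the sort of thing that makes the after-lunch drop-off visible.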
 
There's no way AI won't significantly beat the markets in the years ahead. Maybe it already is. Then what? Its owner becomes king of the world. How can this not happen?
 