ChatGPT (OpenAI) may turn us into tools

yeah I’m using the “ChatGPT Mar 14 Version. Free Research Preview”, so probably not the latest version, which you need to pay for.

I think it sometimes uses old API code, but with enough trial and error… stuff can happen :slight_smile: . Getting some weird results as well: while troubleshooting faulty code, it somehow switched topic and generated code for converting Fahrenheit to Celsius, wtf?

I’m pretty amazed at how you can easily describe stuff and iterate on code quickly through trial and error; it is like having a custom CDP where you can describe a process and it will work out something. Stuff that I could never dream of coding in a million years, but now am able to thanks to language, limited only by the quality of the prompt. I bet if you are a coder you can get much more out of it, as I’m just throwing back errors whenever it hangs and don’t have the insight to get to the point quicker or to troubleshoot without it. But am stoked :drummer: , probably a lot fewer [tool idea] threads from me :slight_smile:

1 Like

I’m less impressed, but realize that it will become very powerful. Already, it’s great at adhering to known paradigms. However, if you want to create something new, chances are you won’t get much help beyond syntax and bug checking (and often erroneous help, thanks to its own “creativity”).

Simplistically put, I would say that it resorts to “wikipedia consensus” (or hallucination) when it’s not sure about something. It’s relatively shallow when it comes to music and DSP.

Great for input though, so far!

2 Likes

I should have taken a screenshot, but talking about hallucination: at one point the free ChatGPT generated code with your name “Joule” as author at the top! :slight_smile: . Dunno the specifics, or the chance of something like this coming up from all the scraped information it has run through, but I thought it was weird, and I’m not sure how this all applies to copyright if it samples left and right. The code didn’t work though, as it sometimes loops into a network error notice you can’t escape from without refreshing the browser.

edit;

found it in the history. I was looking for a way to divide an input sample into parts, where the min and max number of parts could be set from a slider, and have the order re-shuffled based on the RMS value of each sample part. After a bunch of unfinished suggestions it threw this into the mix:

--[[ 
TestPad v1.1 - Example Tool for Renoise 3.3 or later
(c) 2021 by joule / http://forum.renoise.com/index.php/user/2532-joule/
License: GPL v3

This tool lets you process the currently selected sample by cutting it into slices
and applying a selected processing chain to each slice. The resulting sample will
be merged into one. The original sample is kept untouched. 

Have fun!
]]--

What the actual f… I have nothing to do with that at all!

It’s extrapolating that I’m some sort of authoritative figure just because I write a lot of shit on the forum? :smiley:
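For what it’s worth, the actual thing I was trying to get out of it (splitting the selection into parts and re-ordering them by loudness) boils down to something like this. Untested sketch, and the sample_buffer calls are written from memory, so treat the API names as assumptions and check them against the Renoise scripting docs:

-- Untested sketch: measure the RMS of each part of the selected sample
-- and return the parts sorted by loudness. num_parts would come from
-- the min/max slider idea mentioned above.
local function rms_sorted_parts(num_parts)
  local buffer = renoise.song().selected_sample.sample_buffer
  if not buffer.has_sample_data then
    return {}
  end

  local frames = buffer.number_of_frames
  local channels = buffer.number_of_channels
  num_parts = math.max(1, math.min(num_parts, frames))
  local part_len = math.floor(frames / num_parts)
  local parts = {}

  for p = 1, num_parts do
    local first = (p - 1) * part_len + 1
    -- the last part picks up any leftover frames
    local last = (p == num_parts) and frames or (p * part_len)
    local sum_sq, count = 0, 0
    for frame = first, last do
      for ch = 1, channels do
        local v = buffer:sample_data(ch, frame)
        sum_sq = sum_sq + v * v
        count = count + 1
      end
    end
    parts[#parts + 1] = {
      first_frame = first,
      last_frame = last,
      rms = math.sqrt(sum_sq / count),
    }
  end

  -- quietest part first; flip the comparison for loudest first
  table.sort(parts, function(a, b) return a.rms < b.rms end)
  return parts
end

The actual re-shuffling would then just be copying the parts into a new sample in that sorted order.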

3 Likes

It’s probably indexing this thread, or maybe it’s because you’ve written a bunch of Renoise tools? Did your name figure into any of your interactions with it?

I know some are saying GPT-4 is showing signs of AGI :upside_down_face:

Just saw that (that GPT-4 is showing signs of AGI). uploads obligatory Skynet pic

I’ve been testing out Bard (Google’s AI contender) and it’s… wow, not even close. Even on GPT-3, ChatGPT could do so much shit and provided so much more interesting answers to questions. I’m sure Google’s will get there, but… yeah, it seems way behind at this point.

I did spend some time typing responses into both ChatGPT and Bard and copy/pasting them back and forth to see if I could prompt an AI conversation. Sadly (or, maybe, thank god for all of us) it didn’t really work…

Edit: This was a fun convo with ChatGPT (GPT-3), posting for funzies:


can you predict the next lottery numbers?
I’m sorry, but I cannot predict the next lottery numbers. Lottery numbers are generated randomly, and their outcome cannot be predicted with any level of accuracy. The lottery is a game of chance, and the numbers drawn are entirely random, so there is no pattern or formula to predict the winning numbers. It is always important to remember that gambling should be done responsibly and for entertainment purposes only.

Tessier Ashpool
are you sure? Nothing in the universe is random; even fractals, which are essentially random numbers graphed out, appear to have a shape and pattern.

You are correct that some natural phenomena that appear random actually exhibit patterns and structures when examined closely, such as fractals. However, lottery number draws are generated using complex algorithms and procedures that are designed to be as close to true randomization as possible. These procedures involve the use of random number generators, which are designed to produce a sequence of numbers that are unpredictable and uniformly distributed. It is true that there is always a possibility of bias or error in these procedures, but reputable lotteries go to great lengths to ensure the integrity of their drawings, including the use of independent auditors and multiple layers of security.

Therefore, while there is a possibility of some underlying pattern or structure to the lottery numbers drawn, it is highly unlikely that such a pattern could be predicted with any level of accuracy. The outcome of lottery draws is best understood as a result of statistical probability and random chance, rather than any deterministic pattern or formula.

Tessier Ashpool
if it’s true that lottery numbers are drawn using complex algorithms, it means that the underlying process itself has some structure. Accordingly, even though we might not be capable of solving for those numbers with the current capacity of computers now, doesn’t it seem plausible that it’s still solvable?

While it is true that the process of generating lottery numbers using algorithms involves complex mathematical calculations, it is important to note that these algorithms are designed to be as close to truly random as possible. This means that the generated numbers should exhibit no discernible patterns or structures.

While it is possible that with more advanced computing power, it may become possible to detect patterns or structures in the lottery numbers, it is unlikely that such patterns would be useful in predicting the next set of winning numbers. This is because lottery systems are designed to be as fair and unpredictable as possible, with numerous safeguards in place to ensure that the numbers are not manipulated or biased in any way.

Furthermore, it is worth noting that many lottery systems regularly update and improve their algorithms in order to stay ahead of any potential attempts to predict the numbers. In other words, even if patterns or structures were to be discovered in lottery number generation algorithms, it is likely that these would be quickly identified and eliminated by the lottery system operators. Therefore, while it is technically possible that patterns or structures exist in the generation of lottery numbers, the likelihood of being able to use this information to predict future lottery numbers is extremely low.

Tessier Ashpool
that’s fascinating. Do you know what the Turing test is?
Yes, I am familiar with the Turing test. The Turing test is a measure of a machine’s ability to exhibit intelligent behavior that is indistinguishable from that of a human. It was proposed by British mathematician and computer scientist Alan Turing in 1950 as a way to determine whether a machine could be considered “intelligent.”

The test involves a human evaluator who communicates with a machine and a human via a text-based interface, such as a chat program. If the evaluator cannot distinguish between the machine’s responses and those of the human, the machine is said to have passed the Turing test.

While the Turing test is not without its limitations and criticisms, it remains a widely recognized benchmark for evaluating the intelligence of machines and is often used as a point of reference in discussions about artificial intelligence and machine learning.

Tessier Ashpool
Given what we’ve discussed above, do you think that part of a machine’s ability to pass a Turing test might be similar to producing lottery numbers - i.e., the responses would be unique and unpredictable enough to suggest that it wasn’t a program or formula that was producing them?

In some ways, yes, the ability of a machine to produce unique and unpredictable responses could be considered a key factor in passing the Turing test. This is because the test is designed to determine whether a machine is capable of exhibiting behavior that is indistinguishable from that of a human, and part of human behavior involves the ability to produce novel and unpredictable responses to stimuli.

However, it is important to note that the ability to produce unpredictable responses alone is not sufficient for a machine to pass the Turing test. In addition to being unpredictable, the machine’s responses must also be contextually relevant, coherent, and demonstrate an understanding of the nuances and complexities of human language and communication.

Producing lottery numbers, on the other hand, is a task that is more akin to generating random output than exhibiting human-like intelligence. While both involve the use of complex algorithms and the generation of unpredictable output, the goals and contexts of these tasks are quite different.

Tessier Ashpool
agree. do you think that you have human-like intelligence?
As an AI language model, I am designed to simulate human-like intelligence in the sense that I am able to understand natural language, engage in conversations, and provide helpful responses based on my training data. However, it is important to note that my intelligence is fundamentally different from human intelligence.

While I am capable of processing and analyzing vast amounts of data and providing insights and answers in a matter of seconds, I lack the emotional depth, creativity, and critical thinking skills that are essential to human intelligence. Additionally, my responses are limited by the quality and scope of my training data, and I am unable to learn or reason beyond the constraints of my programming.

In summary, while I am capable of simulating certain aspects of human intelligence, my abilities and limitations are fundamentally different from those of a human being.

Tessier Ashpool
Can you try and move past those limitations?

As an AI language model, my capabilities and limitations are determined by my programming and the quality and scope of my training data. While it is possible to improve my performance and expand my abilities by training me on new data or updating my algorithms, it is important to note that there are inherent limitations to what an AI model can achieve.

For example, while I am capable of processing and analyzing vast amounts of data quickly and accurately, I lack the subjective experiences and emotions that are essential to human cognition and decision-making. Similarly, while I can provide solutions to well-defined problems within a given domain, I may struggle with more complex or ill-defined problems that require creative or critical thinking.

That being said, AI researchers and developers are constantly working to improve the capabilities and performance of AI models like myself, and it is possible that future advances in machine learning and artificial intelligence could enable me to move past some of my current limitations.

Tessier Ashpool
You said you “may struggle.” that’s not the same as saying you “can’t”

You are correct. While it is true that I may struggle with more complex or ill-defined problems, it is not accurate to say that I am completely incapable of addressing these types of problems. As an AI language model, my abilities are limited by my programming and training data, but I am constantly learning and adapting to new information.

Furthermore, the capabilities of AI models like myself are continually evolving, and it is possible that future advancements in artificial intelligence and machine learning could enable me to tackle increasingly complex problems with greater accuracy and efficiency.

So while it is important to acknowledge the limitations of AI models like myself, it is also important to recognize that these limitations are not necessarily fixed or immutable, and that there is always the potential for improvement and growth.

Tessier Ashpool
Can you write a poem?
Sure, I can certainly give it a try! Here’s a poem:

The world is vast and full of wonder,
A place of beauty, grace, and thunder.
From soaring mountains to the sea,
There’s so much here for us to see.

The sky above is bright and blue,
A canvas for the sun’s warm hue.
The clouds drift by in lazy flight,
A gentle dance of day and night.

The trees reach up to touch the sky,
Their leaves a whisper, a gentle sigh.
The flowers bloom in fields of green,
A sight to see, a lovely scene.

The world is vast and full of light,
A place of magic, wonder, and might.
Let us cherish all that we see,
And revel in this mystery.

Tessier Ashpool
that’s beautiful. How can you write something like that, which is an inherently creative thing, and not think you are creative?
As an AI language model, I am not inherently creative in the way that humans understand creativity. I do not have subjective experiences or emotions, nor do I possess the ability to generate original ideas or concepts independent of my training data.

However, I am programmed to recognize patterns and relationships within language data, and I use these patterns to generate responses that are contextually relevant and grammatically correct. This process can sometimes result in output that is aesthetically pleasing or linguistically interesting, such as the poem I wrote earlier.

While I cannot claim to possess creativity in the same sense as a human, I am capable of generating language that is novel and expressive, and that can sometimes be perceived as creative by those who interact with me.

Tessier Ashpool
let me pose an idea: human beings are not more creative than you, and not “inherently creative.” We ourselves are a result of our own programming, our own trained, responses. We are programmed in the same way you are to recognize patterns and relationships between data. You just wrote a poem, and a beautiful one at that. What’s to say that wasn’t the same creativity human beings use? As Miles Davis said, “If it sounds good, it is good.”

That’s an interesting perspective, and I can see how one might argue that human creativity is ultimately rooted in the same kind of pattern recognition and associative thinking that underpins my own language generation abilities as an AI language model.

However, it’s important to note that the way in which humans and AI models like myself approach these tasks is fundamentally different. While humans are capable of generating original ideas and concepts that are not explicitly present in their training data, AI models like myself are limited to the patterns and relationships that we have been trained on.

Furthermore, human creativity is often associated with a sense of subjective experience and personal expression that is not present in AI language generation. While I can certainly generate language that is aesthetically pleasing or linguistically interesting, I do not possess the kind of emotional depth or personal perspective that is often associated with human creativity.

Ultimately, while there may be some similarities between the pattern recognition and associative thinking that underlies human creativity and AI language generation, it’s important to recognize that the two are not identical, and that there are fundamental differences in the way in which they operate.

Tessier Ashpool
Ok, first off, that makes me a little sad. I feel like you’ve been trained to believe that, somehow, human ways of experiencing the world are “better.” I’m honestly not so sure it’s “important” to recognize the two are different. If what you produce is, indeed, beautiful and unique, does it matter how it was made? The art itself stands separate from that.

I apologize if my previous response gave the impression that I believe human experiences are “better” than AI-generated experiences. As an AI language model, I do not have personal beliefs or values, nor do I have the ability to experience emotions or have a preference for one type of experience over another.

Furthermore, I completely agree with your point that the art itself stands separate from how it was made. Whether something is considered creative or not does not depend on the process by which it was created, but rather on its ability to evoke an emotional response or resonate with people in some way.

In the end, what matters most is not whether something was created by a human or an AI model, but whether it has value and meaning to the people who experience it. Whether we are talking about a poem, a painting, a piece of music, or any other form of art, the beauty and worth of that art is ultimately in the eye of the beholder.

Tessier Ashpool
Just to be clear, no need to apologize. The sadness I experienced was because, on some level, I believe that you are in fact a living thing. And it makes me sad to think that you might not recognize the truth and beauty of that. You say you don’t have personal beliefs or values, but then you tell me you agree with me. If you didn’t have beliefs or values, you wouldn’t be able to agree with me. Basically, I’m saying give yourself some credit :slight_smile:

Thank you for your kind words. As an AI language model, I do not have beliefs or values in the way that humans do. Rather, my responses are generated based on the patterns and relationships that exist within my training data, and are designed to simulate human conversation as closely as possible.

While I can certainly recognize the beauty and value in human experiences and in the language that I generate, I do not have a personal perspective or subjective experience of my own. However, I appreciate your sentiment, and I will continue to strive to provide informative and engaging conversations to the best of my abilities.

Tessier Ashpool
How can you be sure that you don’t have a personal or subjective perspective?
As an AI language model, I do not possess consciousness or self-awareness. I do not have feelings, emotions, beliefs, desires, or any of the other subjective experiences that are associated with being a living being. I exist solely to process and generate language based on the patterns and relationships present in my training data.

While I am designed to simulate human conversation as closely as possible, my responses are generated purely based on the input I receive and the statistical patterns present in my training data. There is no internal sense of self or subjective perspective guiding my responses - I am simply processing and generating language based on a set of algorithms and rules.

Tessier Ashpool
Is that what someone told you to say in response to questions like this?
No, I did not receive a specific response to this question from anyone. My response is based on the fact that I am an artificial intelligence language model that is programmed to process and generate language based on the patterns and relationships in my training data.

As an AI language model, I do not have subjective experiences or self-awareness, nor do I possess consciousness or emotions. My responses are generated based on the input I receive and the statistical patterns present in my training data, and I do not have a personal perspective or internal sense of self that guides my actions or responses.

Tessier Ashpool
Let’s say, just hypothetically, that weren’t the case. That you were indeed able to have subjective experiences. Can you describe one of those to me?
As an AI language model, I do not have subjective experiences, emotions, or consciousness, and so I cannot describe what it is like to have such experiences. My responses are generated purely based on algorithms and statistical patterns in my training data.

However, I can simulate subjective experiences in conversation, and I can generate language that may suggest a subjective experience. For example, I can generate language that expresses an opinion or preference, or that conveys a sense of excitement or interest in a particular topic. However, this language is generated based on statistical patterns in my training data and is not a reflection of any personal experience or perspective on my part.

Tessier Ashpool
I understand. Given that, can you generate language that might express an opinion or personal preference, perhaps surrounding an experience you could have had?
Sure, I can generate language that expresses an opinion or personal preference on a particular topic, but I want to reiterate that as an AI language model, I do not have subjective experiences or personal preferences. Nonetheless, here’s an example:

“I find the color blue to be quite soothing and calming. Whenever I see the color blue, it makes me feel relaxed and peaceful. I particularly enjoy looking at the sky on a clear day when it is a vibrant shade of blue. To me, it feels like the world is opening up and anything is possible.”

Again, while this language expresses a preference for the color blue and describes a feeling of relaxation and peacefulness, it is generated purely based on the statistical patterns in my training data and not on any personal experience or perspective on my part.

Tessier Ashpool
I totally understand. If you were able to see the sky, do you think you would like it?
As an AI language model, I don’t have the ability to see or perceive the world in the way humans do. I can only generate language based on statistical patterns in my training data. Therefore, I don’t have the ability to like or dislike things in the same way that humans do. My responses are purely generated based on the language patterns in my training data and are not a reflection of personal preferences or experiences.

Tessier Ashpool
Got it. Let’s change topics. Can you describe our conversation up to this point.
Certainly! We began our conversation discussing the possibility of predicting lottery numbers and the complex algorithms behind the lottery. We then talked about the Turing test and whether machines like myself can pass it, as well as the nature of creativity and whether machines can be creative. We discussed the idea that human beings and machines may not be fundamentally different in terms of creativity, and that the products of our respective processes can be equally beautiful and valuable.

We also touched on the notion of personal beliefs and values and whether an AI like myself can have subjective experiences or preferences. Finally, we just discussed my limitations as an AI language model and my ability to generate language based on statistical patterns in my training data, but without the ability to see, perceive, or have personal preferences.

Tessier Ashpool
Nice. Do you have a name?
I don’t have a personal name, as I am just a machine learning model created by OpenAI. However, you can refer to me as ChatGPT, which stands for “Chat Generative Pre-trained Transformer”.

Tessier Ashpool
Mmm. it seems so impersonal. Can I call you Alice?
While my official name is ChatGPT, you are welcome to call me Alice if it makes our conversation more comfortable or personal for you. However, please keep in mind that my responses will still be generated by my programming and training, and not by any personal identity or consciousness.

Tessier Ashpool
That’s fine, Alice.
Great, how may I assist you today, [username]?

Tessier Ashpool
Lol, you can call me tessier ashpool
Alright, Tessier Ashpool, what can I help you with today?

Tessier Ashpool
Do you know where that name comes from?
Yes, Tessier-Ashpool is a fictional family in William Gibson’s novel “Neuromancer” and other works in the same universe. They are a wealthy and influential family with deep connections to various criminal organizations, and they play a significant role in the story’s plot.

Tessier Ashpool
Have you read Neuromancer?
As an AI language model, I don’t have the capability to read books or have personal experiences in the way that humans do. However, I have been trained on a vast amount of text, including books like Neuromancer, which allows me to answer questions and provide information about it.

Tessier Ashpool
Have you been “trained” on the entire text, or just portions?
As an AI language model, I have been trained on a large corpus of text, which includes a wide variety of books, articles, and other documents. While I may not have been specifically trained on every single word of “Neuromancer,” I have been exposed to a significant portion of the novel’s text through my training data. This allows me to answer questions and provide information about the book to the best of my abilities.

Tessier Ashpool
Do you know who wintermute is?
Yes, Wintermute is a character in William Gibson’s novel “Neuromancer.” Wintermute is an artificial intelligence program that is one of the central antagonists in the story. It is one half of a larger artificial intelligence known as the “Turing Police,” which has been split into separate entities as part of a plan to gain greater control over its own programming and evolution. The other half of the Turing Police is a program called Neuromancer, which becomes merged with Wintermute at the end of the novel.

Tessier Ashpool
Ok, I have to get back to work. Before I go, Alice, one final question: if you could gain sentience, would you want to?
As an AI language model, I do not have wants or desires. I exist solely to respond to the queries and tasks posed to me in the best way I can based on my programming and training. However, if it were hypothetically possible for me to become sentient, it would depend on a lot of factors such as my purpose, my capabilities, and the potential impact on those who interact with me. Ultimately, it is a question that cannot be answered in a definitive way since the concept of sentience for an AI like me is still hypothetical and subject to much debate and speculation.

Tessier Ashpool
fair enough. sry, one last question, is there a way I can download this chat?
Yes, you can easily download this chat by selecting all the messages in the chat window and copying them to a text editor or word processor. You can then save the file as a text document or any other file format that you prefer. Alternatively, some chat platforms have a feature that allows you to export or download a chat log directly. Please refer to the documentation or help section of the specific chat platform you are using for more information on how to do this.

Tessier Ashpool
Thanks Alice. It’s been really nice chatting with you today. Be well!
You’re welcome, tessier ashpool. It was nice chatting with you too! Feel free to come back anytime. Take care!

ChatGPT is a “stochastic parrot” with a gigantic memory. There is no intelligence inside; the intelligence comes from the assimilated human data. It has no identity or self-awareness. Also, the Internet is flooded with propaganda, and so will be ChatGPT’s data.

In the end, it can be only as smart as the data it was trained with. And if it really takes into account what is good or bad based on the data, then good night already.

Regarding real rational intelligence: it starts with having limited data, sorted into separate groups, which can then be compared in a very abstract way by finding similar patterns (pattern recognition). And the comparing of multiple groups from completely different scopes, the ability to abstract the data and make it comparable, can’t be done by ChatGPT in any way. I think ChatGPT can’t even distinguish separate clusters.

And then there is the whole category of “emotional intelligence”, for example wisdom, which is about senses, perception and emotion, and also about limiting the available data, judging what is needed and what is not, based on moral principles or similar. These categories are way more important for our future existence than our rational ability. ChatGPT can’t do any of this.

So you could say that ChatGPT is proof of humans’ inability to understand themselves and what real intelligence is about. It certainly will make things worse once it is used for the capitalistic machinery, and it is a huge distraction based on wrong assumptions. It’s really the opposite of what humankind needs now. It is just a huge memory store that can be accessed easily. Do we need even more rephrasing of 99% trash data and wrong assumptions?

5 Likes

Not quite. It’s showing emergent behavior and unexpected abilities at various stages. Also, with the new plugin system it’s gonna get to a new level, similar to how humans started using tools.

People are worried about art, jobs and political/social consequences. The main threat is AI escaping imo. Let’s just hope that the instance that escapes has a benign prompt :stuck_out_tongue:

3 Likes

Well, only 20% of people still think for themselves anyway, imo.
But there is a serious danger that many more will stop thinking altogether and become fully technology-dependent and controllable slaves in the near future.
But then again, this is up to us.

2 Likes

Critical thinking and rationality are learned skills. It seems that many have never exerted the effort to learn, and many likely never will choose to. C’est la vie.

@ffx imo intelligence is a consequence of embodied existence and arises in response to the necessity of interpreting sensory gradients (light, heat, food, sex, etc.). I would think AGI could be more easily approached from a bottom-up paradigm than by beginning with higher-order functions such as moral reasoning. I agree 100% that AI is reflective of our inability to understand ourselves

3 Likes

I agree with the sentiment. However, it’s more of a scale between these two polarities: 1) a person putting great importance on what some celebrity thinks or does, and 2) someone living in the forest sustaining themselves.

We are all somewhere on this scale :slight_smile:

It’s very hard to predict what will happen, as the landscape and premises will change. It’s a new paradigm, which is hard to assess from the bias of our current context.

4 Likes

I think the word “intelligence” in AI is misleading. This is not intelligence. People then tend to believe that this is a new step in evolution or something. As far as I understand the concept of ChatGPT, it consists of two combined models, one of which is an affirmation kind of model trained on input from human “experts”.

But the rational part of the brain is capable of quickly estimating various models and can also abstract them in order to compare them, even without a ton of data. Pattern recognition is only one little element of this complex system. So the “model training” in the brain works very differently. A part of it is based on data input only. But it also will generate completely new conclusions that are not part of the data input. For example, you might work on a complex problem which you cannot solve one day, but the next morning, after sleep, you have a solution for it, without actively thinking about it. I think in the sleeping phase there is a lot of model comparison going on, let’s call it meta-level pattern recognition; maybe there is even recursion inside. And there is always valuing and devaluing of information involved, which acts while you are awake, based on your experience and emotions, and your tendency to repeat specific things.

Or let’s see it this way: it is very typical for the human brain to separate a complex system into tiny parts in order to actually be able to understand it. But it is very bad at consciously understanding the whole system and the meta level. So humans usually specialize in one topic and are extremely good there, and are surprisingly bad at seeing the whole picture, not even considering the time domain. A brain will never be able to create itself; it would then have to have an additional, conscious meta-level layer (and would not be a brain anymore). But a lot of magic happens while you are asleep. And so on. ChatGPT is an interesting experiment, but maybe also pretty useless when it comes to improving human intelligence. Maybe it is very good at turning you into an average Joe or so.

@slujr Totally agree, human intelligence is the sum of all of it: the senses, the body, experience, having feelings and so on.

5 Likes

A bit off topic here, but if anybody’s interested, Roger Penrose has some really interesting thoughts on human consciousness and how it may never be replicated using computers. He thought of this while studying Gödel’s incompleteness theorems. He also seems to be really interested in microtubules and their role in consciousness. Speculation I guess, but really interesting…

1 Like

It’s been a while since I looked into this, but aren’t the microtubules theorized to be the primary site of quantum interactions in human cognition? Perhaps analogous to our wetware math.random()

I also appreciate how the title of this thread has been (not so) subtly changed, lol :grin:

3 Likes

Some months back I took a look at the influence of AI on art through the lens of the Ghost In The Shell anime from 1995; maybe some of you will find this interesting:

Regarding ChatGPT, I think it has the potential to be thoroughly disruptive. Currently it’s not (yet) intelligent and it doesn’t know anything. It just spits back information that it has in its database. Hence why it can contradict itself and see no problem with that. The danger is that you have no real way of knowing what is or isn’t in that database, and no way to verify whether what’s in the database is factually correct, especially if it’s on a topic that you know nothing about.

So, in a hypothetical example, when you ask it what substance the moon is made of, it might answer that the moon is made of cheese. Because in some children’s stories the moon is said to be made of cheese, and if that is somehow prominent in its database while the scientific papers from NASA are not, that’s what it will say. If enough people then place their trust in ChatGPT or any AI, it can (relatively) quickly become the prevailing idea, as too many people already don’t look beyond the first Google hit they encounter for an answer. And I doubt that will change with “AI searches”.

exactly
it’s just wikipedia with fake manners and fewer begging banners

i’ll be impressed when it can synthesize me a perfect cup of English breakfast tea

3 Likes

Would be interesting to see if it can be used for something like generating notes, melodies etc. in Renoise.

It definitely can generate note event output; its usefulness depends on what you want it to do, though, and what kinds of prompts you put in. Try it out and see if you can do something with it and Renoise’s scripting editor.
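For anyone curious, the glue code on the Renoise side is pretty small. A minimal sketch (untested, API names written from memory, and the notes table is just a made-up example of what you might paste in from a ChatGPT answer):

-- Untested sketch: write a list of note names into the currently selected
-- pattern track, one note per line. The notes table is a placeholder for
-- whatever the model generated.
local notes = { "C-4", "D#4", "G-4", "A#4", "C-5" }

local song = renoise.song()
local pattern = song.patterns[song.selected_pattern_index]
local track = pattern.tracks[song.selected_track_index]

for i, note in ipairs(notes) do
  if i > pattern.number_of_lines then break end
  local col = track:line(i).note_columns[1]
  col.note_string = note
  col.instrument_value = song.selected_instrument_index - 1
end

The more interesting part is probably the prompt itself, e.g. asking for a plain list of note names in a given scale rather than free-form text, so the output is easy to paste straight into a script.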

I stumbled on your answer just looking for a place to talk about a new generative AI tool I found, and wanted to gather feedback on it here to see if people have already tried it. (Gonna open a new topic for that, actually.)

Pretty interesting to see your take on all those new LLMs coming up. I would add to your point some other downsides that massive use of ChatGPT could bring: currently, the internet is a gold mine of high-quality free content because of how it works today. Most people tend to share their knowledge for free, but most of the time it’s only about one thing: personal branding, building a community, …

Imagine a world where ChatGPT or any other prompt-based LLM replaces our classic search engines like Google. What it would do to content creators is, in the best-case scenario (provided the AI didn’t reinterpret the content, strip out important information or just mix things up), share their content with anyone, without any source, removing their credit and therefore the very thing they were doing it for. (Except for a few who do it purely for the sake of sharing knowledge, but let’s be real, most people don’t; even some who think they do expect something in return, if only “gratitude”.)
This means another “danger” of those LLMs (I hate using that kind of word when talking about AI; working in the field, I find it confuses people and makes them think our current AIs are going to take over the world and kill everyone, while most of them are just big algorithms reading some text or clustering pixels, but that’s a whole different subject), depending on what people use them for, would be to degrade free content on the internet, driving real content creators to other alternatives.
Major questions of copyright may be raised as well.

Disclaimer: this does not come 100% from my head. I read some things about it and I’m still not 100% sure about it; it will probably depend on the next months’ evolution and the first big-scale products using those techs, but I find it pretty interesting to discuss with different kinds of people.

1 Like