ChatGPT hallucinated a nonexistent feature, sent a flood of users to try it, and forced the developer to actually build it.

36kr · 07-08

You can't make this up: ChatGPT has caused quite a mess!

An AI hallucination fabricated a product feature out of thin air, misleading users into flooding in, until the developer had no choice but to actually build the fictional feature.

The victim is a sheet-music scanning website, which recently started receiving large numbers of uploaded screenshots of ASCII guitar tablature, all of it generated by ChatGPT.

The website developer was stunned:

WTF? We absolutely do not support scanning ASCII guitar tablature???

It wasn't until the developer tried ChatGPT himself that he discovered what was going on.

After generating ASCII guitar tablature, ChatGPT would helpfully recommend that users visit the site to hear the music or keep working on it.

The problem: the site scans traditional staff notation, and it did not support the niche ASCII guitar tablature format at all...
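For reference, ASCII guitar tablature is plain text: six lines, one per string, with fret numbers placed along runs of dashes. The sketch below is a hypothetical, minimal reading of that format in Python (it is not Soundslice's actual parser, and the helper name `parse_ascii_tab` is an assumption for illustration):

```python
# A toy ASCII guitar tab: six strings (high e to low E),
# digits mark which fret is played at which time column.
TAB = """\
e|---0---3---|
B|---1---0---|
G|---0---0---|
D|---2---0---|
A|---3---2---|
E|-------3---|"""

def parse_ascii_tab(tab: str):
    """Return a list of (string_name, column, fret) tuples.

    Hypothetical sketch: real tab parsers must also handle
    hammer-ons, bends, slides, and multi-bar layouts.
    """
    notes = []
    for line in tab.splitlines():
        string_name, _, body = line.partition("|")
        col = 0
        while col < len(body):
            if body[col].isdigit():
                start = col
                # Frets can be two digits (e.g. 10, 12), so consume the run.
                while col < len(body) and body[col].isdigit():
                    col += 1
                notes.append((string_name, start, int(body[start:col])))
            else:
                col += 1
    return notes

notes = parse_ascii_tab(TAB)
# Each tuple reads as: play this string, at this time column, at this fret.
```

Even in this toy form, the format is unambiguous enough that users plausibly expected a scanner to accept screenshots of it.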

Funnier still, so many users tried the feature that the developer was put on the spot: refusing to support it would disappoint users who arrived with high expectations and make the site look bad.

So the developer was forced to rush the feature into existence.

Forced to develop a new feature by ChatGPT

The sheet-music scanning website is called Soundslice. Its scanner digitizes music from images and photos, so you can listen to it, edit it, and practice with it.


Interestingly, some netizens took a different angle, arguing that ChatGPT's hallucinations can actually be harnessed for development:

I've found this to be one of the most practical ways to program with GPT-4.

Instead of explaining how my API works, I give it an example code snippet as a starting point and let it "guess" how to add new features. Sometimes it comes up with solutions more elegant than what I had in mind. Then I adjust the API to make its code run.

Conversely, I also throw existing code at it and ask what the code does. If it misunderstands, that tells me my API design is confusing, and I can see exactly which parts invite confusion.

This plays to what neural networks do best: not reproducing precise information, but earnestly "fabricating" content that seems remarkably plausible, i.e., hallucination. It runs on creativity, not logic.

This closely resembles an old human-computer interaction design method called the "Wizard of Oz" technique: a human operator secretly stands in for a not-yet-built application, which is very useful for exploring new features.

Other netizens found the situation amusing because:

Shipping the new feature is easier than getting OpenAI to patch ChatGPT so it stops pretending the feature exists (I'm not even sure how they would do that; surely they can't block every mention of SoundSlice).

Reference Links:

[1]https://www.holovaty.com/writing/chatgpt-fake-feature/

[2]https://news.ycombinator.com/item?id=44491071

This article is from the WeChat public account "Quantum Bit", author: West Wind, published with authorization from 36kr.
