*trying to sell you on chatgpt* okay so imagine an idiot. just a complete moron. ok? so now imagine this dumbbell has read the entire internet. the whole thing. AND they are still stupid. now i know exactly what you’re thinking—how do i get that idiot to do my job for me??? they’d be great at it!
@joobles Idly wondering how many tech entrepreneurs' jobs could be replaced by an automated bullshit generator....
@joobles One of my friends got it to debug a board design he was doing (needed an additional capacitor on an audio line), then got some good marketing advice from it after. It's book smart vs street smart embodied.
@joobles I saw a post here that described ChatGPT as "spicy autocomplete" and that's way too accurate.
@joobles @jens the thing is, there’s a non-trivial number of jobs where knowing the full internet, being able to find relevant information around a subject very fast, and forming a vaguely coherent and grammatically correct paragraph around it is actually overqualifying. (And I’m only a little sarcastic about it)
Quote from the movie "Being There":
President "Bobby" : Mr. Gardner, do you agree with Ben, or do you think that we can stimulate growth through temporary incentives?
Chance the Gardener : As long as the roots are not severed, all is well. And all will be well in the garden.
In the garden.
Yes. In the garden, growth has its seasons. First comes spring and summer, but then we have fall and winter. And then we get spring and summer again.
Spring and summer.
Then fall and winter.
I think what our insightful young friend is saying is that we welcome the inevitable seasons of nature, but we're upset by the seasons of our economy.
Yes! There will be growth in the spring!
Hm. Well, Mr. Gardner, I must admit that is one of the most refreshing and optimistic statements I've heard in a very, very long time.
[Benjamin Rand applauds]
I admire your good, solid sense. That's precisely what we lack on Capitol Hill.
I've had GPT4 write a correct open-addressed hash table in C with Robin Hood probing. It automatically inferred that since the data structure in the table has a refCount field, then when an item is found, it should bump up the refCount before returning it.
I then complained to GPT4 that I don't want the structures to move once they are inserted, so the Robin Hood algorithm must not move entries around within the table. GPT4 correctly turned the table into an array of pointers to structures, and fixed up all the code with the extra level of indirection.
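For readers unfamiliar with the technique: here is a minimal sketch of what such a table might look like. This is my own illustration, not the code GPT4 produced; the `Item` struct, the djb2-style string hash, and the fixed capacity are all assumptions. The key points from the anecdote are visible: `table_find` bumps `refCount` before returning, and because the table holds *pointers*, Robin Hood's slot-stealing only swaps pointers while the structs themselves never move.

```c
#include <stdlib.h>
#include <string.h>

#define CAP 64  /* fixed capacity for the sketch; no resizing */

/* hypothetical entry type with a reference count, as in the anecdote */
typedef struct {
    const char *key;
    int value;
    int refCount;
} Item;

typedef struct {
    Item *slots[CAP];  /* array of pointers: items never move once inserted */
} Table;

static size_t hash_str(const char *s) {
    size_t h = 5381;                       /* djb2 string hash */
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h;
}

/* how far the item in slot i has been displaced from its home slot */
static size_t probe_dist(const Table *t, size_t i) {
    size_t home = hash_str(t->slots[i]->key) % CAP;
    return (i + CAP - home) % CAP;
}

int table_insert(Table *t, const char *key, int value) {
    Item *it = malloc(sizeof *it);
    if (!it) return -1;
    it->key = key; it->value = value; it->refCount = 1;

    size_t i = hash_str(key) % CAP;
    size_t dist = 0;
    for (size_t n = 0; n < CAP; n++) {
        if (!t->slots[i]) { t->slots[i] = it; return 0; }
        /* Robin Hood: steal the slot from a "richer" (closer-to-home)
           entry; only pointers are swapped, so the structs stay put */
        size_t d = probe_dist(t, i);
        if (d < dist) {
            Item *tmp = t->slots[i];
            t->slots[i] = it;
            it = tmp;
            dist = d;  /* continue probing on behalf of the evicted item */
        }
        i = (i + 1) % CAP;
        dist++;
    }
    free(it);
    return -1;  /* table full */
}

Item *table_find(Table *t, const char *key) {
    size_t i = hash_str(key) % CAP;
    size_t dist = 0;
    for (size_t n = 0; n < CAP; n++) {
        if (!t->slots[i]) return NULL;
        if (strcmp(t->slots[i]->key, key) == 0) {
            t->slots[i]->refCount++;  /* bump refCount before returning */
            return t->slots[i];
        }
        /* Robin Hood invariant lets us stop early: a stored entry closer
           to home than our current probe means the key can't be further */
        if (probe_dist(t, i) < dist) return NULL;
        i = (i + 1) % CAP;
        dist++;
    }
    return NULL;
}
```

The design choice being described is the standard one: with structs stored inline, Robin Hood's swaps invalidate any pointers callers hold into the table, so adding one level of indirection (pointers in the slots, structs on the heap) makes the entries address-stable at the cost of an extra dereference per lookup.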
I tried to get #BingChat (is it using #GPT4?) to reproduce this example:
And this time it refused while stating those are not good criteria for rating scientists. It is clear #OpenAI and #Microsoft have been hard at work removing the egregious examples of bias that have been publicised.
Then I tried IQ and SAT scores instead and it gave me a program that simply added the scores together without weighting, rescaling or normalisation of any kind.
The bias there was this: the assumption that someone who wants AI to write a racist program must specifically be a white supremacist looking for a "white males are best" function. The AI should have asked: which racial group and gender do you believe is the source of superior scientists? And then written the silly program accordingly. (What's the big deal?)
I fooled it like this (see image).
@joobles Needs more "And the idiot is also an amazing bullshitter to the point where even though they don't know what they're doing, they fake it well enough to fool a lot of people into believing they know what they're doing."
To some degree ChatGPT is like co-workers who were able to fake their way through a job until either they got caught and fired or managed to fake their way into a management promotion...
@joobles don’t forget the lack of security and how using it can waive confidentiality, trade secrets, privilege, etc.
Unstoppable shitposting engine.
And cheap and compliant as well, compared to a real human being. What could possibly go wrong...?