Too bad you couldn’t copy it over with low-speed dubbing.
I try things on the internet.
rarely, shit just works.
I already do that; it’s called unhealthy.
human resources (department) is for punishing the human resources (employees).
you’re not wrong…
something something pomodoro something
my prescriber will never know!
Thank you. I’m cured.
Plex, running locally, on my server: “You should add a server!”
Plex, running locally, on my server: “Claim 10.0.0.10!”
Plex, running locally, on my server, after claiming my server: “You should add a server!”
What isn’t a step in that evolution? LLMs have been around for a while. It wasn’t until OpenAI decided to turn them into a product that they caught people’s attention.
You’re drinking too much of the Kool-Aid. AI right now is mostly buzz while people figure out that LLMs are just very well adapted to SOUND intelligent. The “intelligence” we see in LLMs like ChatGPT is mostly coded by people.
Yes, it’s a problem when those who hire choose to hire LLMs instead of humans. But if you read those stories, you discover that LLMs aren’t actually performing well at their jobs.
LLMs are very sophisticated bullshit generators.
If your doctor is an AI you should get a new doctor.
AI doesn’t know anything. It can’t.
Sorry. BIG spellchecker.
LARGE language model.
YUGE AI boi.
Better words. Better pictures. Better sound.
Totally different.
(/S)
A spellchecker takes input from a human and matches it against a database of known words, suggesting corrections based on the input’s proximity to those known words. Modern spellcheckers can also tokenize a corpus of words written by the device’s owner and use that corpus to determine which word is likely to follow the previous one. Most phones do this these days.
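A toy sketch of that “proximity to known words” idea, assuming plain Levenshtein edit distance (real spellcheckers use fancier ranking, keyboard layouts, word frequency, etc.):

```python
# Toy spellchecker sketch (not any specific product's algorithm):
# suggest the known word with the smallest edit distance to the input.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                # deletion
                curr[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),   # substitution (free if equal)
            ))
        prev = curr
    return prev[-1]

def suggest(word: str, dictionary: list[str]) -> str:
    """Return the dictionary word closest to the (possibly misspelled) input."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

print(suggest("wrod", ["word", "world", "work"]))  # word
```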
Modern AI takes a corpus of data, tokenizes it, and feeds each token into a neural network to determine which token is likely to follow the previous one.
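The “predict the next token” idea in miniature: a bigram count model (nowhere near a real LLM, which uses a neural network over a long context, but it’s the same shape of problem):

```python
# Toy next-token predictor: count, from a corpus, which token
# most often follows each token, then predict the most common one.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    tokens = corpus.split()  # crude whitespace "tokenizer"
    follows = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows: dict, token: str) -> str:
    """Return the token that most frequently followed `token` in training."""
    return follows[token].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # cat
```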
Graphical AIs do similar work, but there are more variables to alter to “weigh” what pixel value is likely to be present, based on the surrounding pixel values, the noise in the seed, and the other inputs. The corpus in this case is a library of digital graphical works interpreted as data (e.g., a matrix of pixel color values). Sound AIs work similarly, but with digitized sound as the data.
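The “pixel value from surrounding pixel values” idea, stripped down to a toy: real image models learn billions of weights, but here the “weights” are just a uniform average of the in-bounds neighbours.

```python
# Toy sketch of filling in one missing pixel from its neighbours.
# Not a diffusion model; just the neighbourhood-weighting intuition.

def infill(image, row, col):
    """Predict image[row][col] as the mean of its in-bounds neighbours."""
    vals = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue  # skip the pixel being predicted
            r, c = row + dr, col + dc
            if 0 <= r < len(image) and 0 <= c < len(image[0]):
                vals.append(image[r][c])
    return sum(vals) / len(vals)

img = [[10, 10, 10],
       [10,  0, 10],
       [10, 10, 10]]
print(infill(img, 1, 1))  # 10.0
```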
What do I misunderstand?
This just in: spellchecker defeats humans at “thinking”.
Spicy frisbee