Underrated comment
Everyone’s conspiring, folks. What’s hard to measure is who’s conspiring.
I wish more guys just said they didn’t know something instead of clearly not knowing what they’re talking about and running their mouths based on vibes.
25% of Reddit comments are ChatGPT trash, if not worse. It used to be an excellent open source intelligence tool, but now it’s just a bunch of fake-supportive and/or politically biased bots.
I will miss Reddit’s extremely niche communities, but I believe Lemmy has hit the inflection point and will eventually reach the same level of niche communities.
Don’t tell him. If too many people get ad blockers, they’re just going to keep evolving.
I’ll look into LN more. I’m familiar with the centralization concerns (but still think they can be mitigated until more upgrades land), though I’m not familiar with the costs you’re bringing up. Fee estimators notoriously round up; I’ve never spent more than a dollar, but that’s anecdotal.
BCH is still an attempt at centralization by Bitmain, a company which literally installed kill switches in their miners without telling anyone and ran botting attacks on /r/Bitcoin and /r/BTC during that fiasco. The hard fork they created is absolutely more centralized than Bitcoin.
There will be a time for something as risky as a hard fork for a block size upgrade, but doing something that serious for the sake of just one upgrade doesn’t make sense to me. If a hard fork must happen, it might as well include other BIPs that necessitate a hard fork, like Drivechain.
Soft fork upgrades that enable more efficient algorithms, like Schnorr and SegWit, have meanwhile scaled TPS without wasting block space. BCH is cheap because there’s no demand or usage.
Fiat makes itself obsolete
Bitcoin Cash was an attempt at centralized control by Jihan Wu. Just because the block size is bigger doesn’t mean it’s better for decentralization. In fact, the increased cost of running a node just makes it harder for people in (typically poorer) oppressive countries to self-verify.
They are still increasing the TPS. The Lightning Network isn’t perfect, but it can scale beyond Visa until more upgrades are implemented.
Ollama (plus a web UI if you like, but `ollama serve &` followed by `ollama run <model>` is all you need), then compare and contrast the various models. I’ve had luck with Mistral, for example.
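If you want to do the compare-and-contrast step programmatically, here’s a minimal sketch using the `ollama` Python client (`pip install ollama`); the model names and prompt are placeholders, and it assumes `ollama serve` is already running:

```python
# Minimal sketch: send the same prompt to a few local models and eyeball
# the differences. Assumes the `ollama` Python client is installed and
# `ollama serve` is running in the background.
import ollama

PROMPT = "Explain the tradeoffs between soft forks and hard forks."  # placeholder

for model in ("mistral", "llama2"):  # swap in whatever models you've pulled
    reply = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(reply["message"]["content"])
```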
Russia (allegedly) has elections too, however.
We might as well change the baseline for ADHD since technology has hammered everyone’s dopamine receptors
I doubt you could put him in prison; he’s still technically a former president. Where would you put the Secret Service, for example? Lots of undefined legal gray area here.
Unfortunately, it’s for the best. If you’re serious about research, you have to present yourself. Especially if you’re the first person to discover something, you’re the most qualified (possibly the only qualified) person to talk about it.
Part of scientific communication is giving elevator talks. You have to be able to argue for funding.
Not to mention, if you never develop those skills, you’re just setting yourself up for worse financial compensation for the same amount of work.
I wanted to be a hacker as a kid, so I had some experience with BackTrack 5. A prof said that if you want to be a cowboy coder, do everything in your terminal. That was good advice; I’ve learned a lot about OSes from it.
Your OS is basically a set of drivers that let you leverage your hardware, plus a package manager for your software and a system for managing services (at startup or on some event trigger).
I’m an advanced user, but NixOS has been an excellent OS: all the fun of tuning Arch with less elbow grease. I was a KDE Neon (Ubuntu base + KDE Plasma desktop environment) user before.
Thanks for the feedback! I also asked a similar question on the AI Stack Exchange and got some helpful answers there.
It was a great project for brushing up on seq2seq modeling, but I decided to shelve it since someone released a polished website doing the same thing.
The idea was that the vocabulary of music composition is chords, and the sentences and paragraphs are the measures: sequences of chords form measures, and sequences of measures form a piece.
I think it’s a great project because the limited vocab size and max sequence length are much smaller than what’s typical for transformers applied to LLM tasks, like digesting novels. So on consumer-grade hardware (12GB VRAM) it’s feasible to train a couple of different model architectures in tandem.
Additionally, nothing sounds bad in music composition; it’s up to the musician to find a creative way to make it sound good. So even if the model is poorly trained, as long as it doesn’t output EOS immediately after BOS and the sequences are unique enough, it’s pretty hard to get output that doesn’t still work.
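To make the framing concrete, here’s a minimal sketch of chords-as-tokens with a tiny transformer; the vocabulary, chords, and model sizes are illustrative assumptions, not the actual pyRealFakeProducer code:

```python
# Minimal sketch of the chord-as-token idea. The vocabulary, chords, and
# model sizes below are illustrative assumptions, not the repo's code.
import torch
import torch.nn as nn

# Tiny vocabulary: special tokens plus a handful of chord symbols.
vocab = ["<bos>", "<eos>", "<pad>", "Cmaj7", "Dm7", "G7", "Am7", "Fmaj7"]
stoi = {tok: i for i, tok in enumerate(vocab)}

def encode(progression):
    """Wrap a chord progression in BOS/EOS and map it to token ids."""
    return [stoi["<bos>"]] + [stoi[c] for c in progression] + [stoi["<eos>"]]

# A vocab this small and sequences this short keep the model tiny compared
# to text LLMs, which is why 12GB of VRAM can train several architectures.
embed = nn.Embedding(len(vocab), 128)
model = nn.Transformer(
    d_model=128, nhead=4,
    num_encoder_layers=2, num_decoder_layers=2,
    dim_feedforward=256, batch_first=True,
)
to_logits = nn.Linear(128, len(vocab))

src = embed(torch.tensor([encode(["Dm7", "G7", "Cmaj7"])]))  # (1, src_len, 128)
tgt = embed(torch.tensor([encode(["Am7", "Dm7"])]))          # (1, tgt_len, 128)
logits = to_logits(model(src, tgt))  # (1, tgt_len, vocab_size) next-chord scores
print(logits.shape)
```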
It’s also fairly easy to gather data from a site like iRealPro
The repo is still disorganized, but if you’re curious, the main script is scrape.py
https://github.com/Yanall-Boutros/pyRealFakeProducer
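If you don’t want to dig through scrape.py, the gist is something like this hedged sketch; the page URL is a hypothetical placeholder, and the assumption that charts are shared as irealb:// links is mine, not necessarily how the script actually works:

```python
# Hedged sketch of gathering shared iReal Pro charts. The URL is a
# hypothetical placeholder; the real logic lives in scrape.py.
import re
import requests

PAGE_URL = "https://www.example.com/shared-playlists"  # hypothetical page of shared charts

html = requests.get(PAGE_URL, timeout=30).text
# Shared songs typically appear as irealb:// links whose payload encodes
# the chord progression; grab every such link on the page.
charts = re.findall(r'irealb://[^"\']+', html)
print(f"found {len(charts)} chart links")
```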