• 1 Post
  • 21 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • I’m also dealing with this exact issue: I want to run multiple instances of Lidarr (one for an MP3 library, one for FLAC) behind Gluetun. I don’t have an answer (I haven’t put in the time to try to solve it yet), so apologies if I got your hopes up. I’m just here to confirm that others have this issue too!

    Edit: Regarding that documentation, it doesn’t seem to say that changing the port breaks things, just that both sides of the mapping have to be set to the same value. The default is 8080, so instead of 8080:8080 you’d map 8081:8081. That’s how I’m reading it, anyway.
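
    So, as I read it, the mapping just has to be identical on both sides, e.g. (numbers are the docs’ example values, nothing specific to my setup):

        ports:
          - 8081:8081  # host side and container side kept the same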

    I should also mention that the closest I got to fixing this was to boot up my 2nd Lidarr container separately, set its port to something different in the Lidarr WebUI (8687, for example), and then attach it to my Gluetun docker compose file (see the sketch below). I did a docker compose pull to update my stack, then docker compose up -d. You might try this approach and tinker with it; I just haven’t had time to really play with this “solution”.
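
    In case it helps, here’s a rough sketch of what that ends up looking like in compose form. The service names and volume paths are just guesses at a typical setup (adjust for yours); the key bits are that both Lidarr containers route through Gluetun via network_mode, so the port mappings live on the Gluetun service, and the second instance’s port (8687) has to be changed in its WebUI beforehand:

        services:
          gluetun:
            image: qmcgaw/gluetun
            cap_add:
              - NET_ADMIN
            ports:
              - 8686:8686   # Lidarr #1 on its default port
              - 8687:8687   # Lidarr #2, after changing its port in the WebUI
            # ...your usual VPN provider environment variables go here...

          lidarr-mp3:
            image: lscr.io/linuxserver/lidarr
            network_mode: "service:gluetun"   # no ports here; Gluetun owns them
            volumes:
              - ./lidarr-mp3:/config

          lidarr-flac:
            image: lscr.io/linuxserver/lidarr
            network_mode: "service:gluetun"
            volumes:
              - ./lidarr-flac:/config         # separate config dirs keep the instances apart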

    Edit 2: Played some more with the solution I mentioned, which LifeBandit666 also found. We both arrived at the same fix, and it does seem to work. Just don’t be a dumbass and forget to do the application configuration for the new container (unlike me, who, after adding the container to my Gluetun docker compose file, skipped the app configuration and then wondered why Lidarr was throwing a bunch of errors).

  • They said “it just repeats words that simulate human responses,” and I’d say that concisely answers your question.

    Anthropomorphizing inanimate objects and machines is fine for offering a rough explanation of what’s happening, but when you’re trying to critically evaluate something, you probably want a more rigorous understanding.

    In this case, it might be fair to tell a child that the AI is lying to us, and that it’s wrong. But if you want a more serious discussion of what GPT is doing, you’re going to have to drop the simple explanation. You can’t ascribe ethics to what GPT is doing here: lying is an ethical decision, one that GPT doesn’t make.