Sorry about that :) But you get the credit for spotting the problem! Thanks for that!
Thanks, I have taken @sugar_in_your_tea@sh.itjust.works's suggestion and added "create".
With the SimpleLogin integration, Proton still does PGP encryption, because effectively all emails are forwarded through a SimpleLogin address. I have just tested it to be sure, and I can confirm this is the case. I agree, though, that this only protects "my side", which is why I said it doesn't provide all the PGP features.
Publishing your PGP public key next to your email doesn’t require “wasting a domain” or anything like that
It does if I don't have any key that I use for email. My keys are bound to the Proton account with the other domains I use, so for this domain I would need to either add it (back) to Proton (the easier option, but it "wastes" a domain) or generate and manage a key myself, which I could then even add manually to Proton; I just haven't bothered doing that yet. I am not going to use any other public key I have, because I specifically wanted to keep this domain separate from my identity.
I just thought it was amusing that you didn’t seem to actually follow your own advice.
FWIW, I do follow the described setup for everything personal, which is what matters to me. As I said, ~1-2 months ago I did have a PGP key published, because I had enrolled the domain in Proton, which if anything is a testament to how annoying it is to manage keys myself (which I already do for signing commits etc.). Maybe I will spend some time polishing the setup, eventually.
I don’t think so, does it sound weird? Not a native speaker, so maybe it does :)
Yep, I am aware of the contradiction. I used to, but since then I have moved to an alias, as it was not worth wasting a domain for a single address. I may eventually spend the time to set up PGP for the alias itself, but I just haven't. It's a Proton alias, so I get PGP encryption anyway (obviously without all the features, but good enough for the near-zero volume I currently have).
Not that I know of, which is the reason I essentially didn't consider those threats relevant to my personal threat model. However, it's also possible it happened and was never discovered. The point is that there are risks associated with the same provider having access to both the emails (and the operations around them) and the keys/crypto operations.
The cost of stealthily compromising a secure email company is simply disproportionate compared to the gain from accessing my emails. Likewise, it's unrealistic to think some sophisticated attacker would target me specifically, to the point of discovering and then compromising the specific tooling I use to access/encrypt/decrypt emails. Also, a $5 wrench could probably achieve the same goal more quickly and cheaply.
If I were a Snowden-level person, I would probably consider that, though, as it's possible that the US government would try to coerce, say, Proton into serving bad JS code to user X. For most people, I argue these are theoretical attacks that do not pose a concrete risk.
Thanks, I will go and double check, I am sure there are more typos!
I honestly didn't think at all about the use of checkmarks/crosses and the fact that they can be misinterpreted; I will add a disclaimer.
A bigger issue IMO is how you describe email encryption in transit as a matter of fact, but according to Google's transparency report [1] there are still domains that do not support in-transit encryption, and, what's worse, when you send an email you can't tell whether it will be encrypted or not.
You are right. The reason I took that for granted is that I assumed the scenario in which people use the "mainstream" providers. I was looking at data, and I think Outlook and Gmail alone make up more than 50% of the market share. I made an assumption I considered fair, as 99%+ of users do not need to worry about this at all. However, this is interesting data and I might add a note about it as well, so thanks!
Thanks!
Can you make the images clickable? They’re impossible to read at that size.
I will look into it; there might be a Zola option for it. If there is, sure!
This paragraph should probably mention that this won’t work if the provider uses E2EE
That paragraph is in the context of what I call "transparent encryption", which means E2EE works only as long as the provider is not compromised; once the provider is compromised, the E2EE is effectively broken by delivering malicious software or disclosing the key. E2EE is only as resilient as the security of the provider, which is why picking a trusted one is important. Of course, compromising the provider and breaking the E2EE is quite complex.
Thanks a lot! Hopefully at least someone finds it helpful!
But in a new window I don't have the 10-20 pinned tabs that I jump to very often; tab groups help in this regard.
Well, I did not mean replacement (in fact, most orgs run in clouds, which use VMs), but I meant that a lot of orgs moved from VMs to containers/Kubernetes as the way to slice their compute. Often the technologies are combined, so you are right.
but that also shows that most modern software is poorly written
Does it? I mean, this is especially annoying with old software, maybe dynamically linked, or PHP, or stuff like that. Modern tools (Go, Rust) don't even have this problem. Dependencies are annoying in general; I don't think it's a property of modern software.
Yes, that's exactly my point. There are many options, yet people stick with Docker and DockerHub (which is anything but open).
Who are these people? There are tons of registries that people use: GitHub has its own, quay.io, etc. You can also simply publish Dockerfiles and let people build the images themselves. Of course Docker has the edge because it was the first mainstream tool, and it's still a great choice for single-machine deployments, but it's far from the only one in use. Kubernetes dropped Docker as its default runtime years ago, for example... who are you referring to?
Yes… maybe we just need some automation/orchestration tool for that. This is like saying that it's way too hard to download the rootfs of some distro, unpack it, and then use unshare to launch a shell in an isolated namespace… Docker, as you said, provides a convenient API, but it doesn't mean we can't do the same for systemd.
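For illustration, a minimal sketch of doing exactly that by hand (the Alpine rootfs URL and version are placeholders, not a recommendation):

```sh
# Fetch and unpack a tiny rootfs (URL/version are illustrative placeholders).
mkdir rootfs
curl -L https://dl-cdn.alpinelinux.org/alpine/v3.19/releases/x86_64/alpine-minirootfs-3.19.1-x86_64.tar.gz \
  | tar -xz -C rootfs
# Launch a shell with its own mount, PID, UTS and network namespaces,
# using the unpacked tree as its root (util-linux unshare).
sudo unshare --mount --pid --fork --uts --net --root=rootfs /bin/sh
```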
But systemd also uses unshare, chroot, etc. They are at the same level of abstraction. Docker (and container runtimes) are simply specialized tools, while systemd is not. Why wouldn't I use a tool that is meant for this when it's available? I suppose bubblewrap (used by Flatpak) does something similar too, and I am sure there are more.
Completely proprietary… like QEMU/libvirt? :P
Right, because organizations generally run QEMU, not VMware, Nutanix, and a handful of other proprietary platforms… :)
Most of the pro-Docker arguments go around security
Actually, Docker and the success of containers are mostly due to the ease of shipping code that carries its own dependencies and can be run anywhere. Security is a side effect and definitely not the reason containers picked up.
1) systemd can provide as much isolation as Docker containers, and 2) there are other container solutions that are at least as safe as Docker, and nobody cares about them.
Yes, and it’s much harder to achieve the same.
In systemd you need to use 30 different options to get what containers give you almost instantly and with much less hassle. I gave an example on my blog where I decided to run blocky in systemd rather than in Docker. It's just less convenient and accessible, harder to debug, and it relies on each individual user to do it, while with containers a lot gets packed into the image and is therefore harder to mess up.
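To give an idea, this is roughly the kind of unit you end up writing by hand. The directives are real systemd options; the service itself is just a sketch along the lines of my blocky example, with an illustrative binary path:

```ini
[Service]
# Illustrative path; each hardening option below has to be researched and added one by one.
ExecStart=/usr/local/bin/blocky
DynamicUser=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
SystemCallFilter=@system-service
```

A plain `docker run`, by comparison, gives you namespace isolation by default without any of this per-service tuning.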
Docker isn’t totally proprietary
There are many container runtimes (CRI-O, podman, Mirantis, containerd, etc.). Docker is just a convenient API; containers are fully implemented with native Linux features (namespaces, seccomp, capabilities, cgroups), and images follow an open standard (OCI).
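As a quick illustration of that interchangeability, the same OCI image runs unmodified under different tools (the image tag here is just an example):

```sh
# Identical image, two independent runtimes/CLIs.
docker run --rm alpine:3.19 cat /etc/os-release
podman run --rm alpine:3.19 cat /etc/os-release
```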
I will avoid commenting on what looks like a rant, but I want to simply remind you that containers are the successor of VMs ("virtualize everything!"), platforms that were completely proprietary and in the hands of a handful of vendors, while containers use only native OS features and are therefore a step towards openness.
I wouldn't say that namespaces are virtualization either. Containers don't virtualize anything; namespaces are all inherited from the root namespaces and are therefore completely visible from the host (with the right privileges). It's just a completely different technology.
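You can see this from the host directly; a small sketch (container name and image are illustrative):

```sh
# Start a throwaway container, then inspect it from the host side.
docker run -d --name ns-demo alpine:3.19 sleep 300
ps aux | grep '[s]leep 300'   # the "containerized" process shows up in the host's process table
sudo lsns --type pid          # its PID namespace is listed right from the host
```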
I used to run systemd units that just start docker-compose files; that's also a thing, I suppose. Also, it's generally easy to manage the container directly (killing/restarting it) without the lifecycle a systemd unit imposes, I would say.
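Roughly what those wrapper units looked like, if anyone is curious (paths and the project name are illustrative):

```ini
[Unit]
Description=my-stack (docker-compose)
Requires=docker.service
After=docker.service

[Service]
# Directory containing the docker-compose.yml for the stack.
WorkingDirectory=/opt/my-stack
ExecStart=/usr/bin/docker-compose up
ExecStop=/usr/bin/docker-compose down
Restart=on-failure

[Install]
WantedBy=multi-user.target
```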
Yeah, and it also requires quite a few options, some with hard-to-predict outcomes. For example, RootDirectory can be used to effectively chroot the process, but that carries implications such as the application no longer having access to CA certificates, which is generally a solved problem in containers.
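A sketch of that pitfall and the usual workaround (the paths and service are illustrative; RootDirectory and BindReadOnlyPaths are real systemd directives):

```ini
[Service]
ExecStart=/opt/app/bin/server
RootDirectory=/var/lib/app-root
# Without the next line, TLS verification fails inside the new root,
# because the system CA store doesn't exist there:
BindReadOnlyPaths=/etc/ssl/certs
```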
I would also add security, or at least accessible security. Containers provide a number of isolation features out of the box, or make them extremely easy to configure, which other systems require far more effort to achieve, or can't achieve at all.
Ironically, after some conversation on the topic here on Lemmy I compiled a blog post about it.
citizen
Actually I believe it’s “residents”. You don’t need to be a citizen.
Not to the level I can get with rofi and i3. The closest you can get is to use yabai, which needs SIP disabled to offer somewhat comparable features.
The biggest items on the graph are all out-of-bounds accesses, use-after-free bugs, and overflows. It is undeniable that memory-safe languages help reduce vulnerabilities; we have known for decades that memory-corruption vulnerabilities are both the most common and the most severe in programs written in memory-unsafe languages.
Unsafe Rust also does not turn off every safety feature, and it's much better to have clearly highlighted and isolated parts of the code that are unsafe, which can be more easily reviewed and tested, than to have everything suffer from those problems.
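A minimal Rust sketch of what I mean (a hypothetical example, not from any particular codebase): the unsafe part is a small, clearly marked island that is easy to audit, while the rest of the code keeps all the usual guarantees.

```rust
// One small unsafe block, justified in a SAFETY comment and easy to review.
fn first_byte(data: &[u8]) -> Option<u8> {
    if data.is_empty() {
        return None;
    }
    // SAFETY: we just checked that `data` is non-empty, so index 0 is in bounds.
    let b = unsafe { *data.get_unchecked(0) };
    Some(b)
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```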
I don't think there is a debate here: rewriting is a huge effort, but it is a fact that C is prone to memory-corruption vulnerabilities and that memory-safe languages are better in that regard.