Yes, as a true Brit you have to have your emergency beanz can with you at all times.
Exactly! I add a random string to each email address, too, so you can’t just guess other addresses. So, it’s usually something similar to lemmy-r4nd0m@mydomain.me. And, whenever a breach happens, I’ll generate a new random part, set that as my email address, and invalidate the old one. Until the next breach. (Looking at you, LinkedIn…)
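A scheme like this could be sketched in a few lines of Python (the function name, token length, and domain here are my own illustrative choices, not the commenter’s exact setup):

```python
import secrets

def make_alias(service: str, domain: str = "mydomain.me") -> str:
    """Generate a per-service email alias with a hard-to-guess random suffix,
    e.g. 'lemmy-a1b2c3d4@mydomain.me'. After a breach, call this again to
    rotate the random part and retire the old alias."""
    token = secrets.token_hex(4)  # 8 hex characters of cryptographic randomness
    return f"{service}-{token}@{domain}"

print(make_alias("lemmy"))
```

Using `secrets` rather than `random` matters here: the whole point is that the suffix can’t be guessed from other aliases.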
Be aware that some countries make you liable for what people post on your forum.
Also, have you looked at Discourse? There are some nice apps that work with it and make the experience on mobile slightly better.
My email provider allows for unlimited aliases. So, while I have 600+ email addresses, emails to them all end up in the same mailbox.
The accounts for all the websites and services (with their specific email address) are in a KeePass database and they all have random passwords, too.
The only small issue is when I have to contact some service’s support. Then I have to configure that specific email address in my client so they can match it to my account with them. But most email clients allow multiple sender addresses without having to fiddle with the rest of the settings.
I don’t remember whether it was some news article or a discussion thread. But other people also suggested this might help during therapy and/or rehab. And they had the same argument in that nobody gets harmed in creating these.
As for uses outside of controlled therapy, I’d be afraid it might make people want the “real thing” at some point. And, as others already pointed out: Good luck proving to your local police that those photos on your laptop are all “fake”.
This vulnerability made it possible to collect user data simply by knowing someone’s email address or phone number.
Another example of where it pays off to have separate email addresses/aliases for every website/service you use.
If I interpret this toot correctly, there wasn’t a direct commit from a sanctioned region, but one developer was in one of those regions for a short while quite some time ago. And he may have been flagged because of this.
That’s why I self-host SearXNG, with several other “underdog” search engines like Mojeek and Marginalia enabled. On my devices, I use Redirect Web for Safari to reroute any search request bound for Ecosia (the search engine configured in my Safari) to my SearXNG instance. Works great for me!
Ok, so your prediction won’t be perfect; it’ll be a fraction of a percent off one way or another, a figure that’s statistically irrelevant. Flip a coin ten thousand times and you’re not likely to get exactly 5,000 heads and 5,000 tails. You’ll get a bit over five thousand of one and a bit under five thousand of the other.
And here’s the kicker: climate models work by predicting the next timeframe based on the previous one. Because of this, your “statistically irrelevant” error grows with each prediction, since every new prediction is built on the small errors of the last one, plus its own further “irrelevant” errors. And so on… after a few iterations, some values in these climate predictions get so far out of line that the models contain actual routines to force them back into realistic ranges. And then the next prediction gets calculated based on this out-of-realistic-but-forced-to-reasonable-range value. The outcome of this kind of calculation is what all these climate researchers want to sell to us as the bitter truth.
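The compounding argument itself is easy to illustrate with a toy calculation (the 0.1% per-step error is an arbitrary illustrative figure, not taken from any real model):

```python
# Toy illustration: a small relative error compounding over iterations.
# Each step multiplies in another 0.1% of relative error.
per_step = 1.001

for steps in (10, 100, 1000):
    compounded = per_step ** steps
    print(f"{steps:>4} steps: error factor {compounded:.3f}")
```

After 100 steps the error is already around 10%, and after 1000 it has roughly tripled; whether real models behave this way is exactly what the thread is arguing about.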
The rapidity is the issue as much or more than the change itself. The speed means plants and animals can’t migrate to areas that are better suited to them climatically, let alone give time for evolutionary based adaptations.
There were similar rapid temperature rises in the past, the latest one around 800–1000 AD (if you want to trust the GISP2 data) or around 7970 BC (if you want to go with the multi-core reconstruction method). Plants and animals are still here, aren’t they? Also, these graphs show that temperatures were way over +2.0 °C in the past.
We have tens of thousands of years of ice core data and hundreds to thousands of years of tree ring data.
Ice core data is just calculated from the oxygen isotope ratios in that ice. However, you can’t properly draw conclusions from a single ice core, so later models use the data from multiple cores. And even there, they had to “tune” the data to make it fit.
Yet that prediction will only work for a theoretical, perfectly even, perfectly flat and hermetically sealed roulette table with a perfectly round ball. Because you can never predict any microscopic material defects or any other influences on the ball in your model. Any piece of dust on the table will change the outcome.
And if you read up on the models usually used to “calculate” future climate, you’ll learn that they need extra helper functions to, e.g., declare water in mountain lakes at -10 ℃ as “ice”, because the model doesn’t work that out properly on its own. It’s not that much better than tasseography.
We’re coming out of an ice age, all the exact data we have to train those models is from the past 150-200 years. And even that data is questionable in parts. Of course, they’ll predict temperatures rising indefinitely, because they rose in the past 150-200 years. But nobody knows exactly when it’ll stop and where. So, how are the models supposed to predict that properly?
Was Earth hotter than now before? Sure, why else do we find mummified animals and perfectly preserved roads and settlements under the melting ice! Will temperatures rise indefinitely and kill us all? Probably not.
They can’t properly predict weather 3 days in advance and yet here they’re trying to predict climate in 45 years?
It seems to use the file system - which basically IS a database. 😉
They don’t have to lose full control. They could be following too closely and swerve onto the sidewalk to avoid a collision with a car and end up striking a pedestrian.
This is a matter of the human factor - and you can never make that disappear. There will always be the odd idiot driver.
The reduced speed limit should also be accompanied by lane narrowing, speed humps, and other traffic calming techniques.
This is totally fine for residential areas, but definitely not for through-roads. There’s no one-size-fits-all solution, in the same way our bodies don’t have only one size of blood vessel.
If these drivers don’t obey the rules now, what makes you think they will obey them if you lower the speed limit?
And you don’t just lose control of your car at 30mph or even 50. Especially not in today’s cars with all their safety features.
Shooting someone or throwing someone off a cliff is a deliberate act to hurt/kill someone else. No driver wants to kill someone. (Well, apart from these extremists that occasionally drive into German Christmas markets…)
People who mindlessly walk into traffic, because that funny video on Instagram is more important than watching their surroundings, are the problem.
Don’t worry, the intelligent ones will survive.
I’m all for survival of the fittest. If people are too stupid to stay on the pavement, it’s on them. Why let drivers suffer to protect those idiots that blindly run into traffic?
Maybe we should ban ALL cars to get traffic related injuries to 0… 🤦‍♂️
Until someone redoes the test, we will not know.
If you’ve just enabled Apple Intelligence, it’ll also go through your entire Photos library and detect objects, faces, pets, POIs, settings, etc. Depending on the number of photos, this can take a few days to complete. (It should only happen while the phone is connected to a charger, though.) During that time, some sluggishness is to be expected. However, after a few days, the phone should be snappy again.
At least I don’t notice any such issues on my 16 Pro. And I’ve been using AI since it became available in the Betas here in the UK.
Why skulls, though?