• 0 Posts
  • 110 Comments
Joined 5 years ago
Cake day: February 15th, 2021

  • Ah, I see. Sorry, the text was too long and I’m not Dutch, so it was hard to spot that for me too.

    But I interpret that part differently. I don’t think that them saying there’s an ambiguous section about risks necessarily means the ambiguity concerns the responsibility of those who choose not to implement the detection… it could be the opposite: risks related to the detection mechanism itself, when a service has chosen to add it.

    I think we would need to actually see the text of the proposal to find where that vague expression she’s referring to is used.


    Thanks for the link, and the clarification (I didn’t know about April 2026)… although it’s still confusing, to be honest. In your link they seem to allude to this just being a way to maintain a voluntary detection that is “already part of the current practice”…

    If that were the case, then at which point does it become true that “the new law forces [chat providers] to have systems in place to catch or have data for law enforcement”? Will services like Signal, SimpleX, etc. really be forced to monitor the contents of the chats?

    I don’t find any discussion in the link about situations in which providers would be forced to do chat detection. My understanding from reading that transcript is that there’s no forced requirement on the providers to do this, or am I misunderstanding?

    Just for reference, below is the relevant section translated (emphasis mine).

    In what form does voluntary detection by providers take place, she asks. The exception to the e-Privacy Directive makes it possible for services to detect online sexual images and grooming on their services. The choice to do this lies with the providers of services themselves. They need to inform users in a clear, explicit and understandable way about the fact that they are doing this. This can be done, for example, through the general terms and conditions that must be accepted by the user. This is the current practice. Many platforms are already doing this and investing in improving detection techniques. For voluntary detection, think of Apple Child Safety — which is built into every iPhone by default — Instagram Teen Accounts and the protection settings for minors built into Snapchat and other large platforms. We want services to take responsibility themselves. That is an important starting point. According to the current proposal, this possibility would be made permanent.

    My impression from reading the Dutch is that they are opposing this because of the “periodic review” power the EU would lose if this voluntary detection were made permanent. So they aren’t worried about services like Signal/SimpleX, which wouldn’t do detection anyway, but about the services that might opt to actually do detection but do so without proper care for privacy/security… or that will use detection for purposes that don’t warrant it. At least that’s what I understand from the statement below:

    Nevertheless, the government sees an important risk in making this voluntary detection permanent. By making the voluntary detection permanent, the periodic review of the balance between the purpose of the detection and privacy and security considerations disappears. That is a concern for the cabinet. As a result, we as the Netherlands cannot fully support the proposal.


  • Where is this explained? The article might be wrong then, because it does state the opposite:

    scanning is now “voluntary” for individual EU states to decide upon

    It makes it sound like it’s each state/country that decides, and that the reason “companies can still be pressured to scan chats to avoid heavy fines or being blocked in the EU” is that those countries are forcing them.

    Who’s the one deciding what is needed to reduce “the risks of the chat app”? If it’s each country deciding this, then it’s each country that can opt to enforce chat scanning… so to me that means the former, not the latter.

    In fact, isn’t the latter already a thing? …I believe companies can already scan chats voluntarily, as long as they include this in their terms, and many do. A clear example is AI chats.




    1. The Pixel is easily unlockable, so one can install custom firmware without being a “pro”; its hardware is (or was reverse-engineered to be) compatible enough to make the experience seamless, with a whole firmware project / community exclusively dedicated to that specific range of hardware devices, making it a target for anyone looking for a phone to install custom Android firmware on.

    But I’d bet it’s a mix of 2 and 3.



  • Yea, but he’s (intentionally?) misrepresenting things… people are not “unimpressed” by AI; what they are is not interested in MS’s “agentic OS”. These are not the same thing.

    It’s irresponsible to hand over control of your machine to an AI integrated that deeply into the OS, particularly when it’s designed to be tethered to the network and is privately owned and managed by human entrepreneurs who have the company’s interests as their first and main priority.



  • Ferk@lemmy.ml to Gaming@lemmy.ml · Steam Hardware Announcement

    I’m afraid of the price… this looks much more capable and powerful than the Index, which was quite expensive, so I suspect it might end up in a similar price range, if not higher. But let’s hope.

    Interestingly, it seems to be using a Snapdragon ARM-based unit, which means it requires another layer of emulation/translation for running Steam games standalone. It’s said to use FEX (https://fex-emu.com/), probably combined/integrated with Proton.


  • Is the database publicly accessible somewhere? Is it limited to an extension, or can we simply browse it?

    This looks like it could work better if developed in the open / collaboratively. Though from their FAQ it looks like they are still working on an open-source platform:

    Our wonderful devs are currently working on an open-source website to replace and improve our current and temporary platform.

    In the meantime, we will continue to add and verify European brands to the database.



  • The question is where the difference is going.

    I wish these kinds of graphs and analyses dug deeper into the distribution of the profits, instead of showing a generic “productivity” that’s typically just a measure of GDP (which is the overall value of products, affected by wages in production but also by taxes, benefits / welfare arrangements, intermediaries, etc.)… it’s also affected by imports/exports… (GDP = C + I + G + (X − M), where X and M are exports and imports respectively).

    Another thing these graphs usually don’t include is the value in stocks or the wages of the highest earners (typically when they show “wages” they exclude the top 20%), nor the bonuses and benefits people might receive (both top earners and workers)… so we don’t see exactly where the profits are going or how much this actually affects inequality (and btw, they should show the profits as well… which imho are more directly relevant than the GDP)… the graph is a simplification that makes it very hard to see what’s happening and what needs fixing.

    My guess is that the reason this is not done is that the data is not as clear… GDP figures and (taxed) wages are relatively easy to collect data on; they’re more transparent. The problem here is the lack of transparency… I actually believe that this is the biggest issue in society in general… if things were truly transparent it would make everything so much easier. Easier to denounce injustice and easier to smack exactly where you need to. They insist on installing surveillance and vigilance systems, but they put them in the wrong places; it’s money/value exchanges that should be more transparent, not private conversations.


  • The thing is that vi and emacs have existed since long before those other new editors came around.

    What you want is possible by configuring your ~/.inputrc (see the readline manual page for details); it’s just that the defaults are different because they come from a time when many keyboards didn’t even have arrow keys (and the ones that did had them in non-standard positions), so most of the shortcuts that became standard in those days are completely different from the ones common today. Given that the terminal is meant to emulate old-style DEC VT100 terminals (that’s why it’s called a terminal “emulator”), it made sense to keep the defaults people had grown used to.
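
    For example, if what you’re after is the usual Home/End/Delete behavior, something along these lines in your ~/.inputrc should do it (just a rough sketch; the escape sequences assume an xterm/VT-style terminal, and you can check what yours actually sends by pressing Ctrl+v followed by the key):

      # Home / End / Delete keys (xterm and VT-style sequences)
      "\e[H": beginning-of-line
      "\e[1~": beginning-of-line
      "\e[F": end-of-line
      "\e[4~": end-of-line
      "\e[3~": delete-char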

    Personally, I’ve grown used to Ctrl+a, Ctrl+k, Ctrl+w, Ctrl+e and Ctrl+y… I don’t have to reach for wherever the Home key is on whatever keyboard I happen to be using at the moment (especially with the modern 75%/60%-sized keyboards of today), or use a combination that also requires Shift and holding so many keys together. In fact, I went the opposite direction and customized my PowerShell profile to keep many of those old shortcuts in the Windows pwsh terminal as well, for when I’m on Windows.




  • I’ve commented it in the other post, but in my opinion, the issue with the “nothing to hide” -> “no worry in showing” statement is that, between the lines (especially in the context in which it’s used), it seems to imply that having something to hide must be something rare or perhaps wrong… as if it were not possible to want to hide things that are good for society to keep hidden.

    This isn’t a formal logical fallacy, but an informal one: https://en.wikipedia.org/wiki/Informal_fallacy

    From a perspective free of presuppositions and biases, I don’t think the logic of the argument in itself is wrong, because of course I wouldn’t be worried about my privacy if I had no interest in keeping my private information hidden… but the premise isn’t true here! The context in which the argument is used is the problem… not the logic of it.

    It’s not incorrect to say: “nothing to hide” -> “no worry in showing” …what’s incorrect is assuming that the “nothing to hide” antecedent is true for all law-abiding citizens …as if people didn’t have an interest in keeping perfectly legal and legitimate things hidden and safe from as many prying eyes as possible. The fallacy is in the way it’s used: they pretend this means people shouldn’t be worried, when in fact it means the opposite, since everyone does, in fact, have information that should remain hidden. For our own safety and the safety of our society! …so everyone should in fact be worried about breaches of privacy.
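
    Roughly, the argument has this shape (my own formalization, just to make explicit which part fails):

      P1: ∀x ( NothingToHide(x) → NoWorry(x) )
      P2: NothingToHide(x), assumed for every law-abiding citizen x
      ∴   NoWorry(x)

    P1 is a perfectly fine conditional; it’s the unstated P2 that is false, and that’s where the fallacy hides.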


  • In my opinion, this looks more like an informal fallacy; the problem is in the context and the intent behind the statement, not so much in the logic of it.

    The postulate has some ambiguity… because between the lines it seems to imply that having something to hide must be something rare or perhaps wrong… as if it were not possible to want to hide things that are good for society to keep hidden.

    This isn’t a formal logical fallacy, but an informal one: https://en.wikipedia.org/wiki/Informal_fallacy

    From a perspective free of presuppositions and biases, I don’t think the logic of the argument in itself is wrong, because of course I wouldn’t be worried about my privacy if I had no interest in keeping my private information hidden… but that premise isn’t true here! The context in which the argument is used is the problem… not the logic of it.

    It’s not incorrect to say: “nothing to hide” -> “no worry in showing it” …what’s incorrect is assuming that the “nothing to hide” antecedent is true for all law-abiding citizens …as if people didn’t have an interest in keeping perfectly legal and legitimate things hidden. So it’s not that the statement isn’t logically sound; the fallacy is in the way it’s used: they pretend this means people shouldn’t be worried, when in fact it means the opposite, since everyone does, in fact, have information that should remain hidden. For our own safety and the safety of our society!



  • Yes! I mean, blame those who post AI-generated translations as if they were their own, or blame the AI scrapers that use those poorly generated pages for training, but it makes no sense to blame Wikipedia when the only thing it has done is exist and offer a platform for knowledge sharing.

    In fact, this problem is hardly exclusive to Wikipedia; every platform with crowdsourced content is to some level susceptible to AI poisoning, which ultimately ends up feeding other AIs… the loop exists on all platforms. Though I understand wanting to particularly highlight the risk to endangered languages, which are more vulnerable to this since they have less content available, so the AI models have a smaller dataset, which makes them worse and more sensitive to bad data.