• 0 Posts
  • 8 Comments
Joined 5 months ago
Cake day: November 30th, 2024

  • pcalau12i@lemmygrad.ml to Science Memes@mander.xyz · ETERNAL TORMENT
    4 days ago

    There are no “paradoxes” of quantum mechanics. QM is a perfectly internally consistent theory. Most so-called “paradoxes” are just caused by people not understanding it.

    QM is both probabilistic and, in its own very unique way, relative. Probability on its own isn’t confusing: if the world were just fundamentally random, you could still describe it in the language of classical probability theory, and it wouldn’t be that difficult. If it were just relative, it could still be a bit of a mind-bender, like special relativity with its own faux paradoxes (like the twin “paradox”) that people struggle with, but ultimately people digest it and move on.

    But QM is both probabilistic and relative, and for most people this combination becomes very confusing. It means a particle can take on a physical value in one perspective while not having taken on a physical value in another (called the relativity of facts in the literature). Not only that, but because it’s fundamentally random, if you apply a transformation to mathematically place yourself in another perspective, you don’t get definite values but only probabilistic ones, albeit not a superposition of states.

    For example, the famous “Wigner’s friend paradox” claims there is a “paradox” because you can set up an experiment whereby Wigner’s friend would assign a particle a real physical value, whereas Wigner would be unable to from his perspective and would have to assign an entangled superposition of states to his friend and the particle taken together, which has no clear physical meaning.

    However, what the supposed “paradox” misses is that it’s not paradoxical at all, it’s just relative. Wigner can apply a transformation in Hilbert space to compute the perspective of his friend, and what he would get out of that is a description of the particle that is probabilistic but not in a superposition of states. It’s still random because nature is fundamentally random, so he cannot predict what his friend would see with absolute certainty, but he can predict it probabilistically, and since this probability distribution is not a superposition of states but what’s called a maximally mixed state, it is basically a classical probability distribution.
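
    The maximally mixed state can be made concrete in a few lines of numpy (a sketch of the standard textbook computation, with the Bell state standing in for the friend–particle pair; none of this is from the original comment):

```python
import numpy as np

# Bell state |Φ+> = (|00> + |11>)/sqrt(2): think of it as the friend and
# the particle, entangled from Wigner's outside perspective.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)  # joint density matrix

# The particle on its own is described by the reduced density matrix,
# obtained by tracing out the other subsystem.
rho_particle = np.einsum("ijkj->ik", rho.reshape(2, 2, 2, 2))

# The result is the maximally mixed state I/2: no off-diagonal terms,
# so no superposition, just a classical 50/50 probability distribution.
print(np.allclose(rho_particle, np.eye(2) / 2))  # True
```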

    But you only get those classical distributions after applying the transformation to the correct perspective where such a distribution is to be found. That is, what the mathematics of the theory literally implies is that only under some perspectives (defined in terms of any physical system at all, kind of like a frame of reference, nothing to do with human observers) are the physical properties of the system actually realized, while under other perspectives the properties just aren’t physically there.

    The Schrodinger’s cat “paradox” is another example of a faux paradox. People repeat it as if it is meant to explain how “weird” QM is, but when Schrodinger put it forward in his paper “The Present Situation in Quantum Mechanics,” he was using it to mock the idea of particles literally being in two states at once, by pointing out that if you believe this, then a chain reaction caused by that particle would force you to conclude cats can be in two states at once, which, to him, was obviously silly.

    If the properties of particles only exist in some perspectives and aren’t absolute, then a particle can’t meaningfully have “individuality,” that is to say, you can’t define it in complete isolation. In his book “Science and Humanism,” Schrodinger talks about how, in classical theory, we like to imagine particles as having their own individual existence, moving around from interaction to interaction, carrying their properties with themselves at all times. But, as Schrodinger points out, you cannot actually empirically verify this.

    If you believe particles have continued existence in between interactions, this is only possible if the existence of their properties is not relative, so that they can be meaningfully considered to continue to exist even when entirely isolated. Yet, if they are isolated, then by definition they are not interacting with anything, including a measuring device, so you can never actually empirically verify that they have this kind of autonomous individual existence.

    Schrodinger pointed out that many of the paradoxes in QM carry over from this Newtonian way of thinking, that particles move through space with their own individual properties like billiard balls flying around. If this were to be the case, then it should be possible to assign a complete “history” to the particle, that is to say, what its individual properties are at all moments in time without any gaps, yet, as he points out in that book, any attempt to fill in the “gaps” leads to contradiction.

    One of these contradictions is the famous “delayed choice” paradox, whereby if you imagine what the particle is doing “in flight” when you change your measurement settings, you have to conclude the particle somehow went back in time to rewrite the past to change what it was doing. However, if we apply Schrodinger’s perspective, this is not a genuine “paradox” but just a flaw in interpreting the particle as having a Newtonian-style autonomous existence, of having “individuality” as he called it.

    He also points out in that book that when he originally developed the Schrodinger equation, the purpose was precisely to “fill in the gaps.” But he realized later that interpreting the evolution of the wave function according to the Schrodinger equation as a literal physical description of what’s going on is a mistake, because all you are doing is pushing the “gap” from the ones that exist between interactions in general to the ones that exist between measurements, and he saw no reason why “measurement” should play an important role in the theory.

    Given that it is possible to make all the same predictions without using the wave function (using a mathematical formalism called matrix mechanics), you don’t have to reify the wave function because it’s just a result of an arbitrarily chosen mathematical formalism, and so Schrodinger cautioned against reifying it, because it leads directly to the measurement problem.

    The EPR “paradox” is a metaphysical “paradox.” We know for certain QM is empirically local due to the no-communication theorem, which proves that no interaction a particle could undergo could ever cause an observable alteration on its entangled pair. Hence, if there is any nonlocality, it must be invisible to us, i.e. entirely metaphysical and not physical. The EPR paper reaches the “paradox” through a metaphysical criterion it states very clearly on the first page, which is to equate the ontology of a system to its eigenstates (to “certainty”). This makes it seem like the theory is nonlocal because entangled particles are not in eigenstates, but if you measure one, both are suddenly in eigenstates, which makes it seem like they both undergo an ontological transition simultaneously, transforming from not having a physical state to having one at the same time, regardless of distance.
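
    The no-communication theorem is easy to check numerically for a single entangled pair (a two-qubit sketch assuming an arbitrary rotation on one side, not a general proof): whatever operation one party applies to her half, the other party’s reduced state, which is everything he can observe locally, is unchanged.

```python
import numpy as np

# Bell state |Φ+> shared between Alice and Bob.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

def bobs_state(r):
    # Partial trace over Alice's qubit (the first tensor factor):
    # the reduced density matrix describing all of Bob's local observations.
    return np.einsum("ijik->jk", r.reshape(2, 2, 2, 2))

# Alice applies an arbitrary local rotation to her qubit alone.
theta = 1.2345
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
UA = np.kron(U, np.eye(2))  # acts as U on Alice, identity on Bob
rho_after = UA @ rho @ UA.T

# Bob's reduced state is identical before and after: nothing Alice does
# causes any observable alteration on his side.
print(np.allclose(bobs_state(rho), bobs_state(rho_after)))  # True
```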

    However, if particles only have properties relative to what they are physically interacting with, from that perspective, then ontology should be assigned to interaction, not to eigenstates. Indeed, assigning it to “certainty” as the EPR paper claims is a bit strange. If I flip a coin, even if I can predict the outcome with absolute certainty by knowing all of its initial conditions, that doesn’t mean the outcome actually already exists in physical reality. To exist in physical reality, the outcome must actually happen, i.e. the coin must actually land. Just because I can predict the particle’s state at a distance if I were to travel there and interact with it doesn’t mean it actually has a physical state from my perspective.

    I would recommend checking out this paper here, which shows how a relative ontology avoids the “paradox” in EPR. I also wrote my own blog post here; if you go to the second half, it has some tables that walk through how the ontology differs between EPR and a relational ontology, and how the former is clearly nonlocal while the latter is clearly local.

    Some people frame Bell’s theorem as a paradox that proves some sort of “nonlocality,” but if you understand the mathematics, it’s clear that Bell’s theorem only implies nonlocality for hidden variable theories. QM isn’t a hidden variable theory. It’s only a difficulty that arises in alternative theories like pilot wave theory, which, due to their nonlocal nature, have to come up with a new theory of spacetime because they aren’t compatible with special relativity’s speed-of-light limit. QM on its own, without hidden variables, is indeed compatible with special relativity, which forms the foundation of quantum field theory. This isn’t just my opinion: if you read Bell’s own paper where he introduces the theorem, he is blatantly clear in the conclusion, in plain English, that it only implies nonlocality for hidden variable theories, not for orthodox QM.
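
    What Bell’s theorem actually does is put a numerical bound on hidden variable theories, which QM then violates. A quick CHSH check with numpy (assumed standard measurement angles; this is the textbook computation, not anything from the comment itself):

```python
import numpy as np

# Singlet state |ψ-> = (|01> - |10>)/sqrt(2).
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)

def spin(angle):
    # Spin measurement along an axis in the x-z plane:
    # cos(angle)*σz + sin(angle)*σx, with eigenvalues ±1.
    return np.array([[np.cos(angle),  np.sin(angle)],
                     [np.sin(angle), -np.cos(angle)]])

def E(a, b):
    # Correlation <ψ| A(a) ⊗ B(b) |ψ> between the two outcomes.
    return psi @ np.kron(spin(a), spin(b)) @ psi

a1, a2 = 0, np.pi / 2              # Alice's two settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

# Local hidden variable theories require |S| <= 2; QM reaches 2*sqrt(2).
print(abs(S))  # ≈ 2.8284
```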

    Some “paradoxes” are just much more difficult to catch because they are misunderstandings of the mathematics, which can get hairy at times. The famous Frauchiger–Renner “paradox,” for example, stems from incorrect reasoning across incompatible bases, a very subtle point lost in all the math. The Cheshire cat “paradox” tries to show particles can dissociate from their properties, but those properties only “dissociate” across different experiments, meaning in no single experiment are they observed to dissociate.

    I ran out of charact-


    That’s more religion than pseudoscience. Pseudoscience pretends to be science and tricks a lot of people into thinking it is legitimate science, whereas religion just makes proclamations and claims any evidence that debunks them must be wrong. Pseudoscience is a lot more sneaky, and it has become more prevalent in academia itself ever since people were infected by the disease of Popperism.

    Popperites believe something is “science” as long as it can in principle be falsified, so if you invent a theory that could in principle be tested, then you have proposed a scientific theory. Pseudoscientists thus come up with the most ridiculous nonsense ever, based on literally nothing, and then insist everyone must take it seriously because it could in theory be tested one day, even though that test is always just out of reach.

    Since it is “testable,” and the brain disease of Popperism that has permeated academia leads people to be tricked by this sophistry, sometimes these pseudoscientists can even secure funding to test it, especially if they can get a big name in physics to endorse it. If it’s being tested at some institution somewhere, if there are at least a couple of papers published of someone looking into it, it must be genuine science, right?

    Meanwhile, while they create this air of legitimacy, a smokescreen around their ideas, they reach out to a lay audience by publishing books, doing documentaries on television, or publishing videos to YouTube, talking about woo nuttery like how we’re all trapped inside a giant “cosmic consciousness” and we all feel each other’s vibrations through quantum entanglement, and how science somehow proves the existence of gods.

    As they make immense dough off of the lay audience they grift from, if anyone points out that their claims are based on nothing, they can just deflect to the smokescreen they created through academia.


  • pcalau12i@lemmygrad.ml to Memes@lemmygrad.ml · Be Honest
    27 days ago

    does the trump base even care about the economy? they seem to just care about the wokes in their video games or whatever. i even saw a fox news segment saying that the economic crash making you poor or lose your retirement is patriotic because it’s like giving up your wealth for the war effort during WW2.


  • pcalau12i@lemmygrad.ml to Memes@lemmy.ml · Americans and socialism
    2 months ago

    I have the rather controversial opinion that the failure of communist parties doesn’t come down to the failure of crafting the perfect rhetoric or argument in the free marketplace of ideas.

    Ultimately facts don’t matter because if a person is raised around thousands of people constantly telling them a lie and one person telling them the truth, they will believe the lie nearly every time. What matters really is how much you can propagate an idea rather than how well crafted that idea is.

    How much you can propagate an idea depends upon how much wealth you have to buy and control media institutions, and how much wealth you control depends upon your relations to production. I.e. in capitalist societies capitalists control all wealth and thus control the propagation of ideas, so arguing against them in the “free marketplace of ideas” is ultimately always a losing battle. It is thus pointless to even worry too much about crafting the perfect and most convincing rhetoric.

    Control over the means of production translates directly to political influence and power, yet communist parties not in power don’t control any, and thus have no power. Many communist parties just hope one day to get super lucky to take advantage of a crisis and seize power in a single stroke, and when that luck never comes they end up going nowhere.

    Here is where my controversial take comes in. If we want a strategy that is more consistently successful, it has to rely less on luck, meaning there needs to be some way to gradually increase the party’s power without relying on a big jump in power during a crisis. Even if there is a crisis, the party will be better positioned to take advantage of it if it has already gradually built up a base of power.

    Yet, if power comes from control over the means of production, this necessarily means the party must make strides to acquire means of production in the interim period before revolution. This leaves us with the inevitable conclusion that communist parties must engage in economics even long prior to coming to power.

    The issue however is that to engage in economics in a capitalist society is to participate in it, and most communists at least here in the west see participation as equivalent to an endorsement and thus a betrayal of “communist principles.”

    The result of this mentality is that communist parties are simply incapable of gradually increasing their base of power, and their only hope is to wait for a crisis for sudden gains. Yet even during crises, their limited power often makes it difficult to take advantage of the moment, so they rarely gain much of anything and are stuck in a perpetual cycle of being eternal losers.

    Most communist parties just want to go from zero to one hundred in a single stroke, which isn’t impossible, but it would require very pristine conditions and all the right social elements aligning perfectly. If you want a more consistent strategy for getting communist parties into power, you need something that doesn’t rely on such a stroke of luck, on any sort of sudden leap in the political power of the party, but is capable of growing it gradually over time. This requires the party to engage in economics, and there is simply no way around this conclusion.


  • pcalau12i@lemmygrad.ml to Memes@lemmy.ml · Americans and socialism
    2 months ago

    You people have good luck with this? I haven’t. I don’t find that you can just “trick” people into believing in socialism by changing the words. The moment it becomes obvious you’re criticizing free markets and the rich and advocating public ownership, they will catch on.



  • As I said, they will likely come to the home in the form of cloud computing, which is how advanced AI comes to the home. You can run some AI models at home, but they’re nowhere near as advanced as cloud-based services and so not as useful. I’m not sure why, if we ever have AGI, it would need to be run at home. It doesn’t need to be. It would be nice if it could be run entirely at home, but that’s not a necessity, just a convenience. Maybe your personal AGI robot who does all your chores for you only works when the WiFi is on. That would not prevent people from buying it; those Amazon Fire TVs are selling like hot cakes and they only work when the WiFi is on. There also already exist some AI products that require a constant internet connection.

    It is kind of similar with quantum computing: there actually do exist consumer-end home quantum computers, such as Triangulum, but it only does 3 qubits, so it’s more of a toy than a genuinely useful computer. For useful tasks, it will all be cloud-based in all likelihood. The NMR technology Triangulum is based on is not known to be scalable, so the only other possibility for quantum computers making it to the home in a non-cloud-based fashion would be optical quantum computing. There could be a breakthrough there, you can’t rule it out, but I wouldn’t keep my fingers crossed. If quantum computers become useful for regular people in the next few decades, I would bet it will all be through cloud-based services.


  • If quantum computers ever actually make significant progress to the point that they’re useful (big if), they would definitely be able to have positive benefits for the little guy. It is unlikely you will have a quantum chip in your smartphone (although maybe it could happen if optical quantum chips ever make a significant breakthrough, but that’s even more unlikely), but you will still be able to access them cheaply over the cloud.

    I mean, IBM spends billions on its quantum computers and gives cloud access to anyone who wants to experiment with them, completely free. That’s how I first learned quantum computing: running algorithms on IBM’s cloud-based quantum computers. I’m sure that if demand picks up once they stop being experimental and actually become useful, they’ll probably start charging a fee, but the fact that it is free now makes me suspect it will not be very much.

    I think a comparison can be made with LLMs, such as with OpenAI. It takes billions to train those giant LLMs as well, and they can only be trained on extremely expensive computers, yet a single query costs less than a penny, and there are still free versions available. Cloud access will likely always be incredibly cheap; it’s a great way to bring super expensive hardware to regular people.

    That’s likely what the future of quantum computing will be for regular people: quantum computing through cloud access. Even if you never run software that can benefit from it, you may get benefits indirectly, such as if someone uses a quantum computer to help improve a medicine you later need.