I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.

They send me documents they “put together” that are clearly ChatGPT-generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.

I feel like I am living in a nightmare.

  • Ephera@lemmy.ml · 3 days ago

    I find it annoying, because the hype means that if you’re not building a solution that involves AI in some way, you practically can’t get funding. Many vital projects are being cancelled due to a lack of funding and tons of bullshit projects get spun up, where they just slap AI onto a problem for which the current generation of AI is entirely ill-suited.

    Basically, if you don’t care for building useful stuff, if you’re an opportunistic scammer, then the hype is fucking excellent. If you do care, then prepare for pain.

  • Mrkawfee@lemmy.world · 4 days ago

We have Copilot at my corporate job and I use it every day for summarising email chains, reviewing documents, and research. It’s a huge time saver.

It’s good, not perfect. It makes mistakes, and because of hallucination risks I have to double-check sources. I don’t see it taking my job; it’s more like having an assistant whose output you have to sense-check. It’s made me much more productive.

  • Brutticus@midwest.social · 4 days ago

    I work in social work; I would say about 60 percent of what I do is paperwork. My agency has told us not to use LLMs, as that would be a massive HIPAA nightmare. That being said, we use “secure” corporate email, which runs on the Microsoft 365 office suite, which is Copilot-enabled. That gets us TL;DRs at the top before you even look at the email, predictive text… and not much else.

    Would I love a bot that could spit out a plan based on my notes or specifications? Absolutely. Do I trust them not to make shit up? Absolutely not.

  • theparadox@lemmy.world · 5 days ago

    I am very, very concerned at how widely it is used by my superiors.

    We have an AI committee. When ChatGPT went down, I overheard people freaking out about it. When our paid subscription had a glitch, IT sent out emails very quickly to let them know they were working to resolve it ASAP.

    It’s a bit upsetting because many of them are using it to basically automate their job (writing reports and emails). I do a lot of work to ensure that our data, which comes from manual entry by a lot of people, is accurate… and they just toss it into an LLM to convert it into an email… and they make like 30k more than me.

  • Our company is forcing us to do everything with AI. Hell, they developed a “tool” that generates simple apps with AI for our customers to use in their enterprise applications, and we’re forced to generate a minimum of two a week to “experiment” with the constant new features the dev teams behind it keep adding (but we’re basically training it).

    The director uses AI spy bots to tell him who read and who didn’t read his emails.

    Can’t even commit code to our corporate GitHub unless Copilot gives it the thumbs up, and it’s constantly nitpicking things like how we wrote our comments and asking us to replace pronouns or write them a different way, which I always reply to with “no” because the code is what matters here.

    We are told to be positive about AI and to push the use of AI into every facet of our work lives, but I just feel my career as a developer ending because of AI. We’re a bespoke software company, and our customers are starting to ask if they should use AI to build their software instead of paying us, which means I then have to spend hours explaining to them why that would be a disaster due to the sheer complexity of what they want to build.

    Most, if not all, of the executives I talk to are idiots who don’t understand web development; shit, some don’t even understand the basics of technology, but they think they can design an app.

    After being a senior dev and writing code for 15 years I’m starting to look at other careers to switch to… Maybe becoming an AI evangelist? I hear companies are desperately looking for them… Lol, what a fucking disaster this shit is becoming.

  • Rayquetzalcoatl@lemmy.world · 5 days ago

    The head of my agency is a gullible rube who is terrified of being “left behind”, and the head of my department is a grown-up with a family and a career who spends his days off sending AI videos and memes into the work chat.

    I’ve been called into meetings and told I have to be positive about AI. I’ve been told to stop coding and generate (very important) things with AI.

    It’s disheartening. My career is over, because I have no interest in generating mountains of no-intention code rather than putting in the effort to build reliable, good, useful things for our clients. AI dorks can’t fathom human effort and hard work being important.

    I’m working to pay off my debts, and then I’m done. I strongly want to get a job that allows me to be offline.

  • Bunbury@feddit.nl · 4 days ago

    I’m in an environment with various levels of sensitive data, including some that is very sensitive. Think GDPR-type stuff you really don’t want to accidentally leak.

    One day when we started up our company laptops, Copilot was just there, installed and auto-launching on startup. Nobody warned us. No indication of how to use it, or not use it. That lasted about three months. Eventually they limited some ways of using it and gave a little bit of guidance about not putting the most sensitive data in there. Then they enabled Copilot in most of the apps we use to actually process the sensitive data. It’s in everything. We are actively encouraged to learn more about it and use it.

    I overheard a colleague recently trying to get it to create a whole PowerPoint presentation. From what I heard, the results were less than ideal.

    The scary thing is that I’m in a notoriously risk averse industry. Yet they do this. It’s… a choice.

  • owsei@programming.dev · 4 days ago

    A higher-up really likes AI for simple proofs of concept, but at times they get so big they’re unusable. With the people on my team, however, bad code is synonymous with AI usage or stupidity (same thing).

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 4 days ago

    We had a discussion about AI at work. Our consensus was that it doesn’t matter how you want to do your work. What matters is the result, not the process. Are you writing clean code and finishing tasks on time? That’s the metric. How you get there is up to you.

  • Godnroc@lemmy.world · 5 days ago

    So far it’s a glorified search engine, and it’s mildly competent at that. It just speeds up collecting the information I would have gathered anyway, so I can get to sorting the useful from the useless faster.

    That said, I’ve seen emails from people that were written with AI, and it instantly makes me less likely to take them seriously. Just tell me what the end goal is and we can discuss how best to get there, instead of regurgitating some slop that wouldn’t get us there in the first place!

  • muxika@lemmy.world · 5 days ago

    I feel like giving AI our information on a regular basis is just training AI to do our jobs.

    I’m a teacher and we’re constantly encouraged to use Copilot for creating questions, feedback, writing samples, etc.

    You can even use AI to grade papers. That sure as shit shouldn’t happen.

  • Passerby6497@lemmy.world · 5 days ago

    I use ChatGPT when I care to, and while I was given a subscription by work, I’m not actively encouraged to use it. I really only use it for researching problems that Google search is too SEO-poisoned to help me with, or for debugging scripts. Past that it doesn’t have much professional use for me, given how much time I spend validating output and insulting the AI for hallucinations and for just generally being terrible at moderate tasks.

    Basic data interpretation can be pretty great though. I’ve had it find a couple problems I missed after having it parse log files.

  • Lettuce eat lettuce@lemmy.ml · 5 days ago

    I work in IT, and many of the managers are pushing it. Nothing draconian; there are a few true believers, but the general vibe is that everybody pushes it because they feel they’ll be judged if they don’t.

    Two of my coworkers are true believers in the slop; one of them is constantly saying he’s been “consulting with ChatGPT,” like it’s an oracle or something. Ironically, he’s the least productive member of the team. It takes him days to do stuff that takes us a few hours.

  • GissaMittJobb@lemmy.ml · 5 days ago

    We get encouraged to try out AI tools for various purposes to see where we can find value out of them, if any. There are some use-cases where the tech makes sense when wielded correctly, and in those cases I make use of it. In other cases, I don’t.

    So far, I suspect we may be striking a decent balance. I have, however, noticed a concerning trend of people copy-pasting unfiltered slop as a response to various scenarios, which is obviously not helpful.