updates: the headphones, the snoring coworker, and more

It’s “where are you now?” month at Ask a Manager, and all December I’m running updates from people who had their letters here answered in the past. Here are four updates from past letter-writers.

1. Can I make my spouse wear noise-canceling headphones at home? (#3 at the link)

We do have a resolution, and it has absolutely nothing to do with headphones.

I didn’t put all the pieces together, but at the time I sent the letter, my youngest child was struggling and we were all pretty sleep-deprived and stressed. Not long after I wrote to you we saw the pediatrician and started some medical and therapeutic interventions that have been an absolute game changer for our family. My kid is so much happier and as a result, we are too.

My husband… still does not wear headphones. He doesn’t like the way they make him look so it isn’t really an issue of how effective they are. And at this point, we’ve all accepted that if he won’t wear them, he’s going to have to deal with standard family noise. Now that our kid is doing better he is much more flexible and tolerant about it.

Looking back, this was clearly more about what was going on with our family and our own well-being, and I really appreciated the commenters who pointed in that direction! (And there was some great advice about headphones, so I might pick up a pair for myself.)

2. Diplomatically criticizing AI in an interview (#3 at the link)

Thank you so much for your advice with my question!

My writing sample got me the next interview. At that stage my interviewer hardly talked about my writing at all—other than to say it was great!—and didn’t ask about my experience with the AI. Based on your suggestions, I asked about their experiences so far with AI as a writing tool. I learned they were still in the very early stages of exploring how AI could support the team and that the person they would hire for the role would get to take the lead on that exploration.

That person is now me! So far, we haven’t had a lot of success in getting the AI machine to learn our style and tone, and it has a tendency to make stuff up that’s not necessarily obvious at a quick glance (deceptive word salad). For now, writing from scratch is still the fastest and most accurate way to get the job done.

We’ll keep experimenting with the writing, but for now I’ve been using AI in other ways. It’s been a great help in brainstorming solutions to data challenges the team faces. After I learned how to formulate questions in a way the machine understands, it’s written Excel formulas that speed up my workflows. I’ve even had it write Excel scripts, and teach me how to use them, since that’s something I’ve never done before.

I’m cautiously optimistic about the potential uses for AI in the future in this role. I’ll be sure to keep you posted.

3. My coworker jokes about suicide (#3 at the link)

First off, I’d like to thank you for posting my question. You and the readers have been really kind, and it helped a lot to know that it bothers other people too and it’s a legitimate thing to be bothered by.

I tried your advice and said, “Please don’t joke about suicide” or “That’s not funny,” but he would just respond that it was just a joke and keep on doing it.

I decided to follow the readers’ advice, thinking that maybe the jokes were a cry for help (and I feel bad for not thinking of that earlier; I was so focused on wanting the jokes to stop that it never occurred to me to think about why they were happening). Unfortunately, I’m not in a good enough place to have a face-to-face conversation about this, so I grabbed a suicide prevention flyer and left it on his desk.

I saw when he read the flyer, and I don’t know whether he needed help or just realized the jokes could be taken seriously. But it’s been a week and he hasn’t made any kind of joke, so I consider it a win.

4. I’m sharing a hotel room with a coworker but I snore

I took your advice and said I’d just head home in the evenings to take care of family responsibilities … and then we got hit with a massive ice storm during the conference. We ended up stuck at the hotel for five days! I did have to share a room with a colleague, but there was enough other stuff going on that I decided to just roll with the punches, and it was fine.

My husband and I moved about five hours away from our old city this year. The 2024 annual conference is in our new city, just about 10 minutes from our house. I’ve made sure everyone knows well in advance that I’m going to spend the nights at home.

And yes, my snoring is much better!


  1. Junior Dev*

    I love 3 because if he’s struggling and needs help, it gives him resources; but if he’s joking because he thinks it’s not serious, it reminds him that suicide is a real problem that affects real people. Win-win.

    1. Stuckinacrazyjob*

      I agree. This letter reminded me of a coworker who ruined a meeting by making a suicide joke in response to the regional manager asking what we were doing on the weekend.

      1. allathian*

        Yikes on bikes! I hope the regional manager reprimanded him.

        Thankfully my immediate family hasn’t been affected by it, but a close coworker lost an adult stepchild a couple years ago and he’s still affected by that loss and probably will be for the foreseeable future. One of my friends lost the estranged father of her kids about 10 years ago, and now her teenage daughter is on 24/7 suicide watch in a mental hospital and has been for more than a year. Suicide is a tragedy for everyone who’s affected by it and I don’t think it’s an appropriate subject to joke about, ever.

        1. new old friend*

          I tend to fall into agreeing with you, and I think it’s true even if the person in question is struggling; in part because reinforcing that suicidality with jokes is really bad, but also because, like you say, a lot of people have been impacted by suicide and you can’t tell at a glance who falls into that group!

    2. ferrina*

      I love what OP 3 did! I think putting the flyer on his desk was really smart- OP didn’t have to do something that they felt uncomfortable or unsafe doing, and the coworker got info about available resources. Whether or not he needed them, I think it’s a great message that we take these kinds of things seriously.

  2. Falling Diphthong*

    It has a tendency to make stuff up that’s not necessarily obvious at a quick glance.
    This seems to be a big thing with AI–it’s plausible on topics you know nothing about, but on anything where you have some knowledge, it’s obvious that it’s making things up.

    1. whingedrinking*

      I’m a proofreader for academic/scientific papers, and while it’s certainly possible that I’m editing papers written by AI without knowing it, some very obviously were. (That’s without getting into the ones where someone very obviously stuck their original paper, which may or may not have been written well to begin with, into Google Translate. Those are just word salad.)

      Reading journal articles about a subject you’re not familiar with is boring as hell, but a reasonably literate person should be able to more or less understand them. Even when they’ve been written by someone struggling with the language, meaning tends to break down in different ways: the *author* knows what she wants to say, she just doesn’t have the tools to express it. Whereas AI theoretically does have the tools but doesn’t know what the sentence is supposed to mean; it just knows which words people tend to stick together. The result is weirdly flat sentences that don’t connect and don’t go anywhere. They might not contain anything a proofreader could look at and say, “That’s wrong” (at least from a grammar and spelling POV), but you can still read them several times and say, “This doesn’t make any sense.”

      1. CoffeeIsMyFriend*

        Yup. I’ve noticed this with students who use AI to answer questions. I also love how often it doesn’t actually answer the given question; it just ends up with lots of words that relate to the topic. Technically, using AI when you were supposed to write it yourself is grounds for a zero, but it’s very hard to prove. The other day I had a student who I’m 90% sure used AI end up with less than 45% on a very easy paper, because the AI didn’t actually answer the given questions; it just wrote things that related to the topic without giving any conclusive statements.

      2. Myrin*

        I read an article yesterday which I’m 90% sure was AI-generated. Either that or it was written by one of the most scatterbrained people on the planet. It kept repeating the same information in slightly different ways (like “In 2018, Jane wanted to buy a golden llama. Jane wanted to buy a golden llama. Jane bought a golden llama in 2018 and wanted to do it.”) and the sentences really didn’t connect to each other at all. I felt like I was wading through some sort of literary twilight zone.

      3. Yoyoyo*

        What gets me is that people evidently don’t even take the time to read through the article they had AI write because in a lot of cases it is so glaringly obvious if you actually read the text. My funniest example is when I was postpartum and trying to figure out how to get the pacifier to stay in my child’s mouth so we could all get some sleep. I was googling and clicked on an article that told me to put marbles in the corners of the room and stick the pacifier in my child’s ear!

      4. Pizza Rat*

        Whereas AI theoretically does have the tools but doesn’t know what the sentence is supposed to mean; it just knows which words people tend to stick together.

        I wish more people understood this.

    2. Spiders Everywhere*

      It’s frustrating because the tech is genuinely good for some things, just…not any of the things these companies want to use it for. It can help with grammar-level stuff, like when you’re struggling to make a sentence work, and the way it understands natural language can sometimes let it answer questions that would be much harder to answer with a search engine – but only if it’s something where you can check its work! Recently I got ChatGPT to identify a movie I’d forgotten the name of from a fairly minor detail I remembered, albeit after it straight up lied to me about the detail being in two other movies.

      1. Freya*

        I’m a bookkeeper; I’ve had employees of contractors get unhappy because their pay didn’t go through on a public holiday, since the person processing it (me) wasn’t working due to the public holiday, and suggest as a solution that AI could do it.

        a) no, no AI is touching something that if you get it sufficiently wrong the director of the company can go to jail,
        b) pays have historically gone through early sometimes because that’s better than late, so going through on a Tuesday instead of a public holiday Monday is still on time, especially since timesheets due on Friday didn’t land in my inbox until Tuesday morning, and
        c) that won’t help because the banks won’t do the transfer on a public holiday anyway!

    3. Beth*

      Yeah, it’s definitely the big weakness that I’ve seen. Even humans aren’t always good at evaluating sources, determining what’s reliable, and incorporating the results into their writing. An AI that has an entire internet of sources (including the good, the bad, and the very very bad) at its disposal? It messes up on that a lot.

      I’ve seen AI be very successful when used as a tool to assist a human with writing. Both approaches can work: 1) an expert writes a first draft to ensure the info is good and the AI edits for grammar and style, or 2) the AI writes a first draft and an expert edits for content. (Whether either is faster than just writing the copy yourself depends on your level of writing experience!) But we’re not at a point where you can reliably have an AI write everything and assume it’s OK.

      1. Wilbur*

        I’ve seen it work really well as a first draft for some cover letters, but you have to give it a few prompts and then edit it yourself. I think it can be a big help for people who struggle to get started. When I tried it, I definitely hit a wall where it just couldn’t provide better output. I wonder if this is something we’re going to see being taught in schools, just like typing/computers were taught.

      2. Banana Pyjamas*

        I was curious how well AI could work, so I experimented by asking about my field. I started my question as broadly as possible, and the third iteration ended with: Define assessment equity as it pertains to property tax assessments.

        The overview was decent, so I decided to drill down. New prompts were: Tell me more about [copy and paste from original result]. Half the time it was able to elaborate, and half the time it regurgitated the same information.

        It never expressly mentioned the Price Related Differential, which is a primary measure of equity. New prompt: Explain the price related differential as it pertains to assessment equity. Three points were spot on, one point was objectively wrong. You absolutely do not use the PRD to calculate adjustment factors; you use the median, mean, or weighted mean depending on the procedure set by your state.

        My final takeaway was that if you know what to ask and how to ask it, you can use ChatGPT as a starting point. The overview will probably be fine. There will be a lot of redundancy, and you need to look for errors/inaccurate information. If you don’t know the subject matter, you definitely should not use it.

    4. Indolent Libertine*

      I have a friend who is a researcher on a medical school faculty. Published articles in journals, etc. He asked ChatGPT to write a bio of himself. The first paragraph was accurate; the rest was completely false and random. Sort of like the situation where the attorney had it write a brief and it cited cases which simply didn’t exist.

    5. Rainsomg*

      LegalEagle (awesome lawyer YouTuber) has a great video about a lawyer who used Chat GPT to prepare a case filing and it wound up inventing cases (and citations!) that didn’t exist and he didn’t validate them and sent them on to the court.

      It’s a hard video to watch because you have such second hand embarrassment, but it’s also very interesting.

      1. Kesnit*

        I came to the comments to mention this. (LegalEagle is great!)

        And yes, it is a very difficult video to watch. I could not stop cringing. (I say that as a lawyer who has written many briefs.)

    6. Irish Teacher*

      I tested out ChatGPT with some of my students and…whoa. I asked it questions about the endings of some books, and in one case it said the villain was a character who wasn’t even in that book. In another case, it made up an ending completely, using some elements vaguely related to the real one. It was actually quite hilarious, something about the murderer killing somebody by poisoning their dog to shock them into a heart attack, or something along those lines. I don’t remember the details, just “x killed her by poisoning her dog.” Clearly, that is not what happened.

      Another very interesting result came when we tested its bias by asking it for the top twenty writers: either 13 or 14 of them were from the US or the UK (the “either” is because one was James Joyce, and technically Ireland was part of the UK for at least part of the time he was writing). Either way, those two countries alone had two-thirds of all the results. And yeah, they are large countries, but…I don’t think they have as many well-known authors between them as the entire rest of the world.

      1. Phryne*

        Speaking of bias: I’ve also seen examples where AI is very, very hard to convince that, e.g., an expert is a woman. As in, it contorts sentences into impossible loops to keep the doctor and the male pronoun together. (I’ll post a link in next comment.)
        Like any other data pool, the output is never that much better than the input, and in the world of techbros focused solely on what can be done, the input is rarely good.

    7. Dr. BOM*

      It’s the Gell-Mann amnesia effect all over again. It’s why I get so frustrated any time I see upper management touting how great AI is going to be and how it’s going to 10x everything.

    8. Flor*

      Bing’s chatbot tried to poison me last week!

      I was looking for a replacement for something like 25g of coconut flour in a recipe that was to be eaten raw. I specified it was to be eaten raw in my initial query. Bing suggested almond flour and cassava flour. I asked if cassava flour could be eaten raw. Bing was quite adamant that cassava flour could NOT be eaten raw because it contains cyanide. I then asked Bing again what I could use as a substitute for coconut flour, and again it suggested cassava flour.

      In a less horrifying but equally odd error that I think encapsulates one of the issues with generative AI, Bing was suggesting vastly different quantities of almond flour and cassava flour instead of 25g. I think it suggested 100g of cassava flour instead of the 25g (and 7.5g of almond flour!). What I believe happened here is that Bing read an article that mentioned a 4:1 ratio, but didn’t realise this was for *volume* not *weight* (I checked sources that said things like 1 cup of cassava for 1/4 cup of coconut flour).

      I think this is a good example of how the AI doesn’t really “understand” what it’s doing and is just collating information. It might be clear to a human that the article is talking about volume, because it says a 4:1 ratio and then 1 cup for 1/4 cup, but Bing just gets the 4:1 ratio, reads the quantity I requested, and then extrapolates.

      1. JustaTech*

        I also had a chatbot possibly try to poison me – I was looking for a sous vide recipe for BBQ chuck roast and on the Anova website (the folks who make my immersion circulator) they had a recipe that looked perfectly reasonable until I got to the very bottom where it said “this recipe was written by AI, please test it out for us!”

        This would be annoying for any average recipe, but with sous vide you’re leaving meat at a specific temperature for many, many hours, and getting it wrong could create the perfect environment to grow dangerous bacteria. So I don’t want to use a recipe where the temperatures and times haven’t been checked!
        I’m a biologist and I grow cells for a living and I know how easy it can be to grow some bacteria, and that temperature is a major factor. I want to know I’m going for a safe cooking temp!

      2. Kara*

        Note: don’t use chickpea/garbanzo flour raw either. I’ve never actually tried it, but I’m told it tastes awful unless fully cooked!

    9. Fishsticks*

      I work in health care, and our leadership is constantly trying to push automation/AI and we keep pushing back and pointing out that AI giving medical advice sounds like a nightmare scenario for everyone involved.

      To which we get, “Well, this study showed that AI was able to pinpoint (illness) in people listing certain combinations of symptoms, so you’re wrong.”

      Right, but. That isn’t giving medical -advice-. That’s doctors using a program that helps consolidate their existing knowledge to give them an idea of where to look next. What you want us to use AI for will explicitly be giving people bad medical advice!

      1. Slartibartfast*

        AI giving a list of possible rule outs based on an input of symptoms, yeah that could be beneficial.

        AI listening to the exam room conversation and writing the chart notes, that sounds like a nightmare.

        AI writing a treatment plan? That’s going to kill people.

        1. Fishsticks*

          That’s what I said, right out loud in the middle of a meeting. I was just so tired, it was the fourth time leadership had pushed AI on us, and I flat out said, “Allowing AI to give out medical advice is going to open us up to serious legal liabilities when someone gets hurt following that advice, and someone WILL get hurt.”

          I was told I worry too much and that leadership is sure the people designing the AI programs have “thought of that”. Which made me laugh.

          1. JustaTech*

            “the people designing the AI programs have ‘thought of that.’”

            Lol sob.

            There’s optimistic and then there’s Pollyanna-ish.
            Just let the AI do what it’s good at, like drug interaction checking and stuff like that.

          2. Kara*

            Can you show them the news articles about the lawyers who got sanctioned after they used an AI to write i believe a brief, and it made up a bunch of cases and citations?

      2. Daisy-dog*

        Seems to me that AI would likely start making up conditions if it can’t identify a straightforward possible diagnosis.

      3. Wilbur*

        I already can’t see my primary care physician because she’s too busy. I guess I can look forward to the APNs being overbooked too and getting shunted off to a CNA using AI in the future. Maybe they’ll stop raising premiums every year?

    10. hmmmmm*

      I’ve had Canva’s AI presentation maker make me two different presentations, as an experiment. The one it managed, barely, was an answer to the question “Are large glass spheres edible?” The presentation stayed on topic and answered that correctly, although it used a lot of photos of bamboo, for some reason. When I asked it to make a presentation on “local resources for small business owners,” it did well at generating professional-looking slides and pulled good stock photos, but the text was either re-stating the prompt or making up organizations that don’t exist.

      I’ll probably use Canva’s AI presentation maker again, if only because “Are large glass spheres edible?” was a great icebreaker at a Powerpoint Party for my friends.

    11. Elizabeth West*

      I refuse to use it for writing or graphics. All my indie stuff is made by me entirely, including my (admittedly not perfect) covers. If I decide to outsource cover design, I want to make sure I’m not paying someone to use it there either, even if it costs more. Oh, and I paid a human actor to do VO work for trailers and will continue to do so. No artificial voices — if I can’t afford him later, I’ll do it myself!

      There are just too many issues around AI and creative work for me to be comfortable with it.

  3. Always Tired*

    #2, love the update! I work in HR and the very trendy thing right now is to suggest having ChatGPT write your job descriptions, and I keep saying writing from scratch is easier, but managers love hopping on the bandwagon. Then they read some AI job descriptions and shut up. I’ve also been getting quite a few AI cover letters that applicants didn’t bother editing. I’m sure there will eventually be decent public use language models, but right now the only semi-decent ones are proprietary and trained in house on specific data sets.

    1. Lady Kelvin*

      My husband actually successfully used ChatGPT to write a job description. He took three job descriptions from similar positions in the company and asked ChatGPT to find the common duties and roles, and highlight the unique ones. Then he used those notes to write a new job description for the position he was hiring for. I think it has its uses for first drafts and research stages, but you can’t just ask it to do something from scratch and expect a good result. I use it a lot in coding when I can’t remember how to do something relatively complex, which has saved me so much time and has made my code cleaner and more efficient.

      1. Mongrel*

        I think it’s important to recognize that “Written by AI” and “Assisted by AI” are two very different things. Assisted means you can feed it some examples and with the result either use that as a jumping off point for yourself or start an iterative process to refine the end result.
        The mistake people make is not knowing (or caring) enough about the material to know when it’s just outputted a meandering non-answer or, worse, a nonsensical answer.

  4. I Have RBF*

    #4, I hope that you got a CPAP or appliance to help with the snoring. (I used to snore, and had insomnia. A CPAP helped with the snoring, but not much with the insomnia.)

    The storm and room sharing sound like a nightmare, but I guess that “emergency conditions” would make it more tolerable.

    Congratulations on your move, hope the conference is great this year.

  5. Jaybeetee*

    LW1: It sounds as though your husband’s issues with noise were an outgrowth of other stressors you were both dealing with. I’m glad the situation has improved and that you found a solution that seems to work for everyone.

    I went back to read the previous comments, and I want to commend the commenters here for actually discussing potential solutions and cutting the husband slack that even if he was behaving unreasonably, his stress was likely genuine and it was a problem that needed solving. It gets tiring with relationship columns how often an “offending spouse” is transmuted into an abusive monster in comment sections, and I found the comments on this original letter quite refreshing in that regard.

    1. Pennyworth*

      I know there are many reasons for behavior we find irritating, but a spouse refusing to wear headphones because he didn’t like the way they made him look made my eyes bug out.

      1. Shiara*

        In fairness, when you’re the only one in a zoom meeting with massive over ear headphones it can make you feel awkward and out of step. (Speaking from experience). It can also read as more “gamer” and less professional.

        This shouldn’t matter, but might, depending on the company.

      2. Coverage Associate*

        Yes. Doesn’t top of the line active noise canceling come in earbuds now? They have microphones too and would barely show up on a video call. (Not that the family needs top of the line, just that advanced technology can be very small)

        1. Weaponized Pumpkin*

          It’s not what he looks like to family, the concern is colleagues on video calls. At my company they would be fine — it isn’t very common but he would not be the only one wearing them.

          My eyes bugged out at that as well. As long as the outcome is that he doesn’t complain about family noise, it’s reasonable as a personal choice. But not if he insisted on no headphones AND shushing!

      3. speed boost*

        I put it down to this: sometimes you just hate something for no real solid reason at all, right? Like, I refuse to buy long zippered coats because I hate having to hunch over to zip them; it feels toddler-ish. This is objectively a non-problem, but I still hardline don’t buy coats I’ll have to hunch over to zip.

        I just figured he felt something similar about wearing over-ear headphones, and other stressful stuff going on in the household can really magnify that kind of trivial thing by association.

      4. coffee*

    I do still find it really surprising that he thought “I will just get my children and entire family to be very quiet for hours” was a) achievable and b) reasonable over any other solution.

  6. nodramalama*

    I have never heard of someone refusing to wear headphones because of the way they look. How do they look?!?!?!

    1. CommentKoi*

      I’m guessing OP’s husband was only considering over-ear headphones with a mic, or maybe wired earbuds with an obvious cord? But that just made me think about how there are AirPods and countless other Bluetooth earbuds that have perfectly good mics, are still decently noise-cancelling (EPOS comes to mind, though they’re not cheap), and don’t look like much of anything, so I’m wondering if OP’s husband hadn’t considered those.

      1. KateM*

        Makes me so happy to work for a company where over-ear headphones with a mic are The Way People Usually Look.

      2. Billy Preston*

        I have a pair of Anker Soundcore P3 and they’re fantastic at noise cancelling. I assume he didn’t look at any of these either.

        I wear wired, over-the-ear headphones for work because I need something comfortable with a reliable connection. Many other coworkers do the same. I never thought about it looking odd.

        1. CommentKoi*

          Me neither, for what it’s worth! I love my over-ear headset. I don’t think it looks odd at all, but I could see some people thinking it looks bulky or messes up their hair. Everyone’s got their preferences I guess.

    2. Mermaid of the Lunacy*

      I never even thought about how my over-ear headphones look, but now I’m even more determined to wear them if they might be perceived as garish, because I have this weird personality trait that sometimes wants to do the exact opposite of what is normal. LOL

    3. lilsheba*

      Yeah, I don’t care how headphones look (although could people stop putting them over their hair? Move that out of the way and put them over your ears!). But I still won’t wear them myself because I hate the way they FEEL, and I don’t like being shut off from the natural sounds of the house. It’s still a speaker for me.

      1. whingedrinking*

        Yeah, I discovered at a certain point that I can only have so many things on my head and face before it starts to really bug me. Glasses, hair clip, headset, mask/face shield – one of those things has to go.

    4. Gumby*

      I did have a classmate who didn’t bring her books to class because they wouldn’t fit in her purse. And wouldn’t bring a backpack instead because it would clash with her outfit.

      We were in junior high.

  7. DameB*

    I’m a writer, and we just underwent an exhaustive test to prove (to the tech side of the business) that we can write faster than ChatGPT, assuming you take into account all the errors it puts in and the time it takes us to edit those. Never mind the failure to adhere to a basic style guide, the DEI errors (OMG, the DEI errors), and the wacky turns of phrase.

  8. new old friend*

    The comment about AI being really good at Excel is interesting to me, and lines up with what I’ve found too: ChatGPT is good at code! (Well, not *great*, but if it produces something wrong, it’s very easy to check and have it try again.)
    I suspect that much of the hype around AI taking everyone’s jobs comes from techy sorts (of which I am one) who *can* use that technology to supplement and speed up their workflow. And because tech bros are the way they are, they assume that this one tool will be just as effective at things they view as “simpler,” like writing and making art. But when you can’t pop the output into a compiler and see if it works, it takes a lot more time to troubleshoot.

  9. Ama*

    That’s because unfortunately what most of these companies want to use AI for is cheap labor; if you still have to pay a human to input the right prompts and check its work (which you do if you want to use it effectively, as you note) that goes out the window.

  10. Lurker*

    LW2: I have read articles and posts around the internet stating that AI frequently lies or gives out false information when it can’t determine the answer. Glad you kept an eye out, and good luck with your efforts!

    1. Procedure Publisher*

      I’ve read articles on AI as well, but they are specific to my field of technical writing. It’s more likely that AI will be used for improving and maintaining existing documentation, while the humans do all of the new stuff.
