Ask a Manager in the media

Here’s some coverage of Ask a Manager in the media recently:

I talked to Money about whether you should use ChatGPT to write your cover letter.

I talked to Vox about what to do before, during, and after a layoff.

Quartz covered the AAM letter about the boss who was pushing people to share how they’re doing emotionally at team meetings.

I talked to Dr. NerdLove about how to handle a coworker’s problematic crush.


  1. CharlieBrown*

    I kind of feel that if companies are going to use technology to screen applicants, using something like ChatGPT is only fair.

    What’s good for the goose…

    1. Ask a Manager* Post author

      My point in that article was that, ethics totally aside, it’s a bad choice as a candidate because it won’t produce the kind of cover letter you need. You’ll end up with the same sort of generic, crappy cover letter that does nothing to help your candidacy.

      1. CharlieBrown*

        True, but it’s in its infancy now, and AI in its infancy is never great. Letter-writing AI will get better, and we will eventually reach a point where you can drop in your resume and the job ad and it will kick out a fairly good cover letter, which you can then edit as needed.

        At least it eliminates the horror of the blank page.

        1. Emily (she/hers)*

          My cover letters are usually about why I’m specifically interested in the position/organization and how I approach my work, as opposed to the list of responsibilities/achievements on my resume. No matter how good the AI is, simply inputting my resume and the position description would not get there.

        2. I should really pick a name*

          But a solid cover letter includes info that isn’t in your resume.

          I could see the benefit of using it to create a rough outline, but there would be significant things to add, which may end up making the time savings insignificant.

      2. David's Skirt-Pants*

        I received the weirdest, dumbest resume yesterday that I immediately assumed was AI-generated. The email that came with it was weird and dumb as well but I might have overlooked that if the resume hadn’t been so terrible.

        The self-proclaimed “mogul” and “guru” will not be hired by my company in the low-level capacity they were gunning for, sorry.

    2. Warrior Princess Xena*

      The only problem I have with this isn’t actually an ethical one; it’s that it will result in new hoops. One of the reasons companies use technology to screen applicants is that they get too many applications to feasibly screen them by hand. Cover letters, while nerve-wracking to write, are frequently a way to distinguish ‘actual person applying to a job with actual interest’ from ‘job hunter sending out 60+ applications a day to anything with an opening’. If ChatGPT starts pumping out cover letters, companies will start adding a new barrier (such as submitting a personalized video – nightmare) that job hunters will have to get through.

      1. rayray*

        On that last part about the personalized video: many companies are implementing one-way video interviews as part of the screening process. It’s awful for so many reasons.

        1. Warrior Princess Xena*

          I know; I specifically excluded several companies from my job search a few years back because their interview process involved a live, AI-reviewed, one-way video. Nightmare.

          1. rayray*

            I am certain that these videos only serve to discriminate against candidates based on their appearance, or to exclude those who may come off as awkward or odd. I would really be interested to meet anyone who has actually “passed” one of these video interviews.

            1. MigraineMonth*

              I only had to do one of these once as part of an application, and I did not “pass”. Which may have been partly because I couldn’t help laughing at one of the questions which had the approximate difficulty of “please spell the word ‘hat'”. I think people *not* in my field would have gotten the answer correct.

    3. learnedthehardway*

      The real issue with ChatGPT is that it is never available – I have been trying to test it out for weeks now, and it always comes up as busy.

    4. fine tipped pen aficionado*

      I don’t especially care about the ethics, and I agree with Alison that the issue is more the quality of the output. But the problem with AI as it exists right now is that you have no idea what dataset it’s trained on or whether it had the rights to use that data, and giving up your resume to a bot just seems like a privacy nightmare of an idea.

      1. Caramel & Cheddar*

        I don’t think the idea is that you give it your resume to extract details from so much as you ask it for a “cover letter for senior llama groomer job” and it spits one out based on other data it has.

    5. Observer*

      I kind of feel that if companies are going to use technology to screen applicants, using something like ChatGPT is only fair.

      The ethical concerns with ChatGPT are not about whether or not someone is using technology.

      Also, as Alison and others point out, the ethics are not even the biggest issue. Ultimately, that cover letter could cause you more trouble than it’s worth.

  2. Aggretsuko*

    I already hate ChatGPT. I swear it’s just a bunch of regurgitated barf text every time I read something it “writes.”

    That said, I can kind of see the idea behind it, because I’ve written some incredibly dull/generic cover letters when I was applying for a job I didn’t really want or like or fit the listing for (i.e., I had nothing really to “sell” myself with for the position), and a ChatGPT-written letter probably wouldn’t be much different from the generic boring letters I’ve turned out in those cases.

    1. Falling Diphthong*

      I’m ever perplexed about the ChatGPT hype because the stuff written by it isn’t good. It’s impressive for a program, but not as the thing itself.

      1. Lilas*

        I think it’s mostly dumb technofetishism. It’s the same reason there’s so little outcry over tech dudes unleashing AI on the community with no regulation, Jurassic Parking us into a world of deepfakes and murderous “driverless” cars. They’re being allowed to use the whole world as guinea pigs because people are so used to tech being assumed cool and novel by default until its deep, irrevocable societal harms can be thoroughly proven, by which point it is usually too late.

      2. ecnaseener*

        Seriously! There are plenty of better GPT-based bots that can actually write good, interesting text – why so much hype over the one that’s locked into a super-boring “personal assistant” voice?

      3. I am Emily's failing memory*

        There’s a wide range of quality that can be output, and crafting a good prompt that will produce a high-quality result is itself a skill – and one that economists are predicting is going to be increasingly in demand. Much as you will get better, more relevant search results typing something like “historical global population statistics data tables” into a search engine than “How have global population levels fluctuated over time?”, the same type of skill is needed when prompting these new generative AI tools.

        As more companies start using generative AI tools and begin to see that some workers can get much better results from them – results that need less revision and perform better than what other employees get – creative professionals can likely expect employers to start looking for people who have developed that skill in the coming years.

      4. Warrior Princess Xena*

        I can think of half a dozen examples over the last two months where having something/someone come in behind me and insert relatively boilerplate language into sections of work I’d done (along the lines of ‘we have considered this problem – see our documentation of it at Y’) would have been useful. I think that’s where ChatGPT and its friends will eventually find a niche, especially since automated tools are good if you need them to do things like go through a document and find every spot where there are two spaces and replace it with one space. But that’s not writing so much as it is documentation.
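
        (And honestly, that last kind of mechanical cleanup doesn’t even need AI – here’s a rough, untested Python sketch of what I mean, just as an illustration, not anything any particular tool actually does:)

            import re

            def collapse_spaces(text: str) -> str:
                # Replace any run of two or more spaces with a single space.
                return re.sub(r" {2,}", " ", text)

            print(collapse_spaces("This  sentence  has   stray double spaces."))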

      5. Splendid Colors*

        I wish I had a dollar for every time the small business advisors have told me to use AI for any writing for my company website or online marketplaces: my About page, my social media posts, blog posts, and of course item names and descriptions.

        I have not been impressed by the samples they provide. They’re phony and generic, with peculiar word choices that remind me of when my 8th grade class was introduced to the thesaurus. I don’t see how being phony and generic will improve my brand image as a detail-oriented creator of original designs. I don’t have copywriting training, just technical writing experience, but I don’t want to feature writing on my site/media that makes me cringe because it’s full of buzzwords that I don’t think fit and fluff about how this item is essential for a happy modern life.

        According to the instructors, the software knows what kind of language sells products, but when I read a bunch of marketing fluff it just pisses me off. Maybe that’s because I’m Autistic and my mother brought me up to be suspicious of marketing hype. I don’t feel right making empty claims about my goods.

        I’d like to stand out in a field full of generic nonsense for getting to the point. One of the benefits I have from in-person popups is that I take mental notes of what shoppers like about my stuff, even if they can’t afford it that day. That gives me insight into what to say in product descriptions. That AI wasn’t at the show with me and doesn’t know what features appeal to prospective buyers. It’s just throwing in words that it sees in a lot of product descriptions and ad copy to generate ad copy just like everyone else uses.

        I can see how people who struggle with writing might benefit from a tool to overcome that barrier. They might not have the resources to have someone read the results and fix the weird bits, but I find the results off-putting. I used to suspect that sites with really strange writing were plagiarizing other people’s ad copy to sell knockoffs of their items and just substituting synonyms to get around searches for plagiarism. Now that I’m suddenly hearing about AI for copywriting, AI seems like the more likely explanation. It still doesn’t reassure me that a company is legitimate if their website reads like a Mad Libs filled in by 8th graders with a thesaurus.

    2. fine tipped pen aficionado*

      It really seems like having a template with some standard verbiage and structure that you customize is way more efficient and effective than screwing around with AI that will keep any data you feed it forever.

  3. Falling Diphthong*

    I was so thrilled that Dr Nerdlove reached out. So often the comment sections of advice blogs cite Alison as the obvious person to ask when you have a work problem.

    1. Grace*

      You often see it over on Slate columns – the actual answer ranges from decent to bad, and then the comments are just “WHY was this not submitted to AAM instead?!”

    2. MigraineMonth*

      I thought Dr Nerdlove’s answer was a bit awkward in the way it centered his experience dealing with a boundary-crossing ‘mentor’ figure. The LW thought the coworker had a crush on her and thought the incidents pointed to sexism; I think he should have trusted her to judge that correctly.

      (The LW provided updates in the comments, and it sounds like she wasn’t the first young woman he targeted for this type of attention, which reinforces my opinion that LW judged it correctly.)

      1. Sherm*

        ChatGPT is overhyped for sure. I mean, I don’t want to be dismissive of all the work and achievement it represents, but it’s just the latest step in the years-long advance of artificial intelligence. It seems great for a middle schooler asking basic questions, but not so great if you need a deeper dive into a sub-topic of interest.

        1. Splendid Colors*

          What bothered me the most was when it regurgitated false information from the internet, and when it fabricated references.

  4. Keymaster of Gozer*

    The original poster gave an update on the Dr Nerdlove letter in the comments and holy WOW was that guy at work being creepy.

    I sincerely hope he stops being a creep, or OP gets him far away from her in whatever manner.

    1. Observer*

      Really.

      I don’t have PTSD, but this comment in response to the LW really resonated:

      “Holy cow, the ‘didn’t need to be scary’ thing just triggered some serious PTSD.”

      Like REALLY, that’s BAD.

      And, yes, I’m yelling on the LW’s behalf.

    2. bratschegirl*

      Whoa, he sure was, and the boss was pretty much no help. I am SOOOOO OVER the whole “he’s socially awkward/he’s lonely/he doesn’t mean any harm” excuse. None of that noise means anything. He is NOT entitled to her time and attention and friendship just because he wants it and isn’t an axe murderer (as far as we know). She said straight up that she had no desire to get to know him outside of work and his response was “…but I’d like to get to know you better!!1!” He clearly will not choose on his own to respect her boundaries; HR will have to use the ban hammer and I really hope they do.

      1. MigraineMonth*

        LW: “I don’t want to see you.”

        Creepy coworker: “But I want to see you, and my desires are more important than your needs.”

        I hope HR does a better job dealing with him than LW’s manager, who seemed to want to bury her head in the sand.

        1. Splendid Colors*

          “Socially awkward” stops being a valid excuse after the first time someone asks you to stop doing something. Yes, it’s possible to simply not know how a behavior looks to others. But after they let you know that, you are choosing to violate a boundary if you don’t stop or if you try to negotiate not stopping.

  5. Putting the Dys in Dysfunction*

    The Dr. Nerdlove problematic crush post was an interesting (and sad) one.

    The co-worker with the crush seems like someone who trades in statements that, to him, appear to give plausible deniability when questioned. One way to deal with such statements is to act as if you don’t comprehend and make him spell out what he means:

    Him: Okay. Sick…I see what that really means.

    Nerdlove OP: I don’t understand what you’re saying. What do you think “sick” really means?

    Him: It means you don’t have time for me.

    Nerdlove OP: I don’t understand. We work together. We regularly talk about and collaborate on [whatever].

    Him: You don’t have any time for me outside those interactions.

    Nerdlove OP: Of course I don’t chat with coworkers outside of work hours. I leave work behind when I leave for the day/weekend.

    Him: I thought I was more than a coworker. I thought I was a friend.

    Nerdlove OP (putting things gently): We’re friendly at work, and I appreciate that. But friends and coworkers are different beasts.

    This is the point where the co-worker either gets the message or reveals more about what he really wants. At that point deniability has disappeared, and Nerdlove OP can gently shut things down. This doesn’t necessarily prevent the co-worker from continuing, but it provides clear evidence for HR of what this person is doing.

  6. Anne Wentworth*

    In re: the Money article:
    As someone who has been hiring for a small team lately, I find the claim that we’re going to ignore the cover letter absurd. In a sea of resumes, cover letters helped us identify which candidates were seriously interested in our position and had the skills we needed. Maybe large companies that use algorithms to sort resumes don’t need them, but we sure do.

    1. Hiring Mgr*

      Cover letters seem to really vary by industry. I’m in technology sales and have never sent or received one, but in other businesses they’re apparently crucial.

      1. Susan Calvin*

        I’m in tech services/delivery, and I wouldn’t call them crucial, but they can really give you a leg up with some hiring managers.

        When I was hiring, I wish I had gotten more good cover letters – maybe 10% of applications had one, but 90% of those were worse than none because they made me wish I had spent the last 5 minutes doing literally anything other than reading them.

  7. BellyButton*

    I know companies and managers mean well with all the mindfulness and emotional support, but it is usually done so poorly that it becomes invasive. I coach my leaders to talk to employees about the things we can realistically do: workload distribution, burnout (which can be exacerbated by life, but we can balance the work part of it), career pathing, skill/knowledge development that will help them in their work, and the resources we offer through benefits. Those are within the boundaries of work relationships and support, and they can actually help people manage the stressors of work. We will also do a quick pulse reading with an anonymous “above or below the line” check-in on how people are feeling that day.

    1. Snow Globe*

      I…really didn’t care for that article. It extrapolated from survey results showing that employees want mental health “support” to a discussion of how managers can talk to employees about mental health, which is not the same thing. Then saying that three hours of training is enough to help managers navigate this? Oof.

      1. rollcake*

        I also wasn’t impressed with the article; it still seemed to suggest that discussing mental health was the only way to address the issue, when seemingly “unrelated” solutions could also alleviate mental stress. Workers could be burned out because the workload is too high and the employer can’t or won’t hire more people. They could be stressed because they have a tiny joint pool of PTO and have spent all of it on sick days, leaving none for a proper vacation to disconnect from work. They could be frustrated by a lack of flexibility in setting their own schedules to accommodate appointments, childcare, etc. There are absolutely kinds of mental stress and anxiety that need to be treated with therapy and medication, but I think there are also stresses that could be alleviated with better actual working conditions! A manager pointing me to the EAP when what I really need is to not have to work tons of OT will not help.

        1. Splendid Colors*

          I was sent to EAP when the actual solution would have been for the new manager to stop trying to bully me into quitting. Apparently they needed to reduce headcount but didn’t want to lay people off because then they’d qualify for unemployment.

      2. MigraineMonth*

        Yeah, pretty cringy that their takeaway from Alison’s excellent advice on respecting boundaries was “but employees don’t want boundaries!”

        There are so many ways to help people’s mental health that don’t involve training managers (and coworker peers!) on how to have boundary-crossing conversations about mental health. The only three-hour course I can imagine being helpful is three hours of repeating “You are not a psychologist, stay in your lane.”

  8. BellyButton*

    Holy updates from the Nerdlove letter in the comments section. I hope HR or the manager has a talk with him. This is too much!

    Final update, yesterday he told me he was hurt because he heard me laughing and having fun with other coworkers but I wasn’t talking to him. This was really the nail in the coffin for me. That’s creepy and possessive.

  9. Vancouver*

    @Alison: The Vox article repeats the claim that “…most jobs are never publicly advertised…” and seems to cite you as the source for that information. I know you’ve written in the past about the lack of evidence for that statement, so I’m wondering if this is just clunky editing on their part or if something has changed your mind?

    1. MigraineMonth*

      Looked like clunky editing to me. It wasn’t clear which part of the paragraph came from Alison.

  10. Don't kneel in front of me*

    What actual reasons are there for claiming AI writing is unethical, immoral, wrong, dishonest, etc? It seems to me that the only arguments against it are rooted in puritanical “hard work” values, and they are the same ones we have over and over again whenever there is new technology. Historically technology has taken away repetitive and algorithmic tasks (we used to employ people to physically type 100 copies of a memo prior to the photocopier, for example), but now technology is reaching the point where it can take away menial mental tasks too.

    If the end result is good* then why does it matter how we got there? Why should we spend time and energy doing something that a computer can automate and execute immediately?

    *I get that it’s currently not great, but it won’t stay that way.

    1. M*

      I think the issue Warrior Princess Xena brought up above is probably the best one. If cover letters are meant to convey an applicant’s individual motivations, thought processes, etc. (basically that they’re a human thinking through the qualifications and their career path) beyond just their resume, then an AI-generated one goes against that purpose. This is especially vital for field changes and resumes that aren’t intuitively compatible.

      Of course, if you don’t care to put in that effort for whatever reason (which I don’t judge – some job postings don’t necessarily require that level of work, or you’re really just trying to get a job ASAP), then that’s fine, but you have to be fine with a substandard product or something you might have to spend just as much time tweaking.

    2. AnotherSarah*

      The reason for me (I’m a prof who grades a lot of writing) is that students need to be demonstrating skills in writing, editing, and choosing the right way to express their ideas (form/content/etc.). You might then ask, okay, but do they need to be doing that, isn’t it like teaching them to add by hand when there are calculators…except that yeah, people do need to be able to add by hand. Maybe not every day, but from where I stand, we need to be able to assess *why* a particular piece of writing is good/effective, and fix it if it’s not. IMO the best way to do that is to know how to write.

      For the calculator analogy – sure, I use a calculator to do most of my budgeting, etc. But if there’s an error – on my bank statement, or because I pressed a wrong number, or anything like that – and I don’t know how to add by hand/by looking, I’m not going to catch it.

      I don’t know that AI writing is always wrong – but claiming it’s your writing is a lie. In jobs where originality and voice matter, that’s a problem.

      1. Don't kneel in front of me*

        We’re talking about cover letters here. The question of AI writing in school is a whole different discussion.

        1. AnotherSarah*

          I did understand the issue–and think there’s a link between how we talk about writing for school and how we want professionals to write.

    3. hellohello*

      My biggest issue with AI-generated content (art, writing, etc.) right now is that there’s extremely rarely any transparency about what data it’s been trained on and how much of that data it’s using in the generated content. For one, a lot of the writing/art/music/etc. that they train on is added without permission or even knowledge of the original creator, which is a nightmare in terms of copyright and the general rights of artists. For two, if the AI spits out content and you use it without really knowing how it was created and sourced, you’re at real risk of plagiarizing without knowing it. If I write a cover letter with ChatGPT and it pulls sentences or full paragraphs from the material it was trained on, and I submit it to a company without knowing it, the company may recognize a familiar line, run it through plagiarism detection software, or even just google bits of it and find out it’s not my work.

    4. Snell*

      Lots of the ethics issues concern where credit is due. Even in the article, it’s recommended that if you use AI writing, you should disclose that openly. And as Alison said, especially if you were hired for your writing skills, submitting a sample of your writing – one that you didn’t actually write yourself – is a bad idea all around. Also, AI only knows what it’s trained on. What kind of material was the AI trained on? Did the person who owns the rights to that material give permission for that use? These aren’t questions that society has taken a conclusive position on.

      This is one of those new situations where the technology advanced so quickly that we don’t have rules on how to handle it. A few years ago, there was no legal recourse for victims of revenge porn, because new technology was involved, and there weren’t laws in place to address these situations. Right now, AI is a big question mark. It’s new and newly accessible. Of course there are ethics questions, and the fact that you couldn’t see that kinda gives me the impression that you only showed up here to unilaterally defend AI and didn’t even read the article posted by the woman whose blog you’re on.

      1. Don't kneel in front of me*

        I did read the article in Money, and it doesn’t answer my question at all. So far all the criticisms have been about the quality of the work or that AI is prone to plagiarism. The former is irrelevant, and the latter is not a question of the ethics of using AI writing – rather, it is a question of using a specific engine/service.
        Again: why should we care if someone used AI to write a cover letter? Why should we bother disclosing it? I just don’t understand how it matters where it came from.

        1. Falling Diphthong*

          I’m just mystified that the quality of the work is irrelevant.

          Are there AIs that have not stumbled into plagiarism? Like I agree with your basic point, but it sounds like this will become an easy “Well I didn’t write the code, and it’s too complicated to check.”

          For writing cover letters, I’d say that if you really want the job, you want something that isn’t random BS with factual errors. If you are sending out 60 applications to jobs you don’t care about, sure, use this to slap on cover letters–but the reason to do it is that you don’t really care about the quality of the results, just need to ship out 60 resumes with something vaguely cover letterish on the front.

        2. Snell*

          Maybe you didn’t take the same message from the article as I did, and maybe the article and the comments on this particular AAM post haven’t discussed it as prominently, but in the greater discussion there absolutely are criticisms of AI beyond the quality of the work (which, as Falling Diphthong says, why would /quality/ be irrelevant?) or that AI is prone to plagiarism…but you gloss over PLAGIARISM. And no, it’s not a question of using a specific service. As hellohello says, there is a lack of transparency about how content is generated and which resources were drawn on in the process. As I mentioned, there aren’t any established rules/norms at the moment, so people are doing what they want, to get away with what they can. If that means committing a type of plagiarism that hasn’t been outlawed, people will do it.

          Like, at this point, yes, you are excusing plagiarism. You said “Why should we bother disclosing it?” What do you think plagiarism is? You didn’t write the cover letter. And for what it’s worth, your line of thinking there is out of step with ChatGPT itself, which, in the article you read, recommends disclosure if you submit an AI-generated cover letter.

    5. Falling Diphthong*

      Re asterisk: It’s not impossible that it will become great, but it’s certainly not a given. When people talked about the promise of AI 20 years ago, they didn’t say “In 20 years, we’ll have an AI that can convincingly BS its way through a wrong answer” – this really is moving the goalposts.

      I am just mystified at why we wanted a program that could give you the wrong answer to a word problem, delivered with a confident-sounding layer of BS. Like, if my cat could do that, it would be impressive because she is a cat–but not because it was a correct answer that was well supported by evidence. (This might tie to the astrology thread from earlier–I think we are astonishingly terrible at judging what is good supporting evidence. Not sure if that’s a modern thing or something that became much more visible with the information explosion, but I suspect the latter.)

      For the morals part, I would come down on hellohello’s “If the computer plagiarizes and then I cut and paste what it gave me, why would I be innocent of plagiarism?”

      But mostly I’m scratching my head at how bad the prose and art are. Like, it used to be you needed a human using Photoshop to stick Lt. Sulu on a kangaroo on Europa, and now a machine can do that–maybe that analogizes to automating some task, but the kangaroo art is not, objectively speaking, compelling unless you want to give it to a friend who is really into Lt. Sulu, kangaroos, and Europa and so will get a kick out of it.

  11. Kyrielle*

    Now I’m tempted to go ask ChatGPT to write some cover letters, just to find out how hilariously awful the results would be.

      1. MigraineMonth*

        I would be very interested in what prompt they gave ChatGPT that resulted in a letter that far off the rails. I can’t believe that was the result from just “coverletter professional dog food taster” or similar.
