my employee is passing off ChatGPT lists as his own ideas

A reader writes:

I am an experienced designer and I lead (but don’t directly manage) teams of young engineers.

Dan (who has two years of experience) has recently been assigned to our team and sat in on a meeting where other members were sharing testing results of new ways to “automatically perform X,” which is a new feature. Towards the end of the meeting, Dan said, “But I have all these other ideas that you haven’t considered. I really think these could solve the problem!” Of course, I encouraged Dan to share the new ideas, but he said he would send them to me after the meeting.

I felt bad that he had done all this work and that we had not included him earlier in the meeting. I have had to mentor Dan in the past and he was resistant to documenting his work or listening to feedback, and I thought that was clouding my judgment of him. (My impression is that he actively tries to minimize his role to get out of work and says bizarrely out of touch things related to social norms.)

After the meeting, he sent me a screen shot of a ChatGPT list! I was shocked and dumbfounded. He gave the impression that this was his work, but he just created the list while in the meeting. Furthermore, when I asked him what some of the terms meant on this list, he said he didn’t know.

AI is a great tool to conduct a preliminary search, but then I expect people to further investigate and vet some of these ideas. This is similar to writing a research paper with Wikipedia. Engineers generate ideas (from the web, other products, personal experience), put the ideas in a table, and rank them by advantages, disadvantages, cost, performance… Anyone can have an idea; it’s the feasibility of the design that makes it a good one.

For Dan, instead of addressing it head-on, I asked him to build the appropriate table and gave him guidance on how to present his ideas better. However, I never addressed how he presented the ideas as his own because I was so flabbergasted.

As we integrate ChatGPT more into our web searches, I can see this happening more and more. I was wondering how to approach this in the future. When people put their hands up in meetings, do I have to ask for their sources first?

I don’t think the issue is that he used ChatGPT. The issue is that he presented ideas that he’d put no thought into, didn’t seem to understand, and couldn’t discuss when questioned about them.

ChatGPT is a tool. If it produces good results, those results are valid to consider, just like results produced by a calculator or Excel. But if someone brought you numbers they’d pulled out of Excel and expected them to stand on their own with zero discussion of what they meant or how they might use them, and no ability to withstand questions about what data they put in to produce those results in the first place, you’d rightly object. Your objection wouldn’t be to their use of Excel, but to their lack of critical thinking and inability to engage in a meaningful way. That’s the same issue here.

On the other hand, if Dan had used ChatGPT to stimulate his own thinking and took some of those ideas and developed them further, adding his own thinking and analysis, and presented those to you, that would be very different. I don’t think you should object to his use of ChatGPT as a tool in that situation — because it would have been a tool, rather than the totality of his thought.

So the conversation to have with Dan is this: “When you bring ideas to meetings, I expect them to be your ideas that you’ve developed and thought critically about — or at least for you to flag that they’re not your work and you haven’t given them real scrutiny yet. Generally when you present ideas — and especially when you frame them as something you believe could solve a problem, as you did in our meeting — you need to have considered their feasibility and be ready to talk about their advantages, disadvantages, cost, and likely performance.”

That said, this incident sounds very much like a symptom of bigger problems with Dan, ones you’ve already observed (resistance to documenting his work or listening to feedback, and trying to get out of doing work). Take this as a flag to lean in more with his boss on the patterns of problems you’re seeing.


            1. Donnie Darko*

              3.0 – no like button
              3.5 – like button, but it doesn’t do anything
              4.0 – like button, and it works

      1. June*

        4.0 has a lot more features – it can process images / accept text-and-image prompts, has more processing power, is better at nuance in text prompts, etc. I don’t think it changes this particular question, though. Even if the generated list was better, it’s still presenting a generated answer as a straightforward contribution rather than taking the generation and conducting further analysis.


  1. Lilipoune*

    I would also emphasize that ChatGPT does not create anything new. If you need innovative ideas, then it may not be the best tool.

    Also, if your company doesn’t have a policy yet, discuss what can and cannot be shared with these tools. For example, asking ChatGPT how to handle this or that customer request efficiently may be OK, but brainstorming about your new product that isn’t on the market yet may be a no-go.

    1. Ellie Rose*

      it can produce “new” things, in the same way that you can produce a new color by taking a set of paint colors and swirling them together. Red and yellow always make orange of some sort, but what else you put in, the ratios, and even how much you stir it all affect what comes out.

      it cannot create something from literally nothing, but people rarely do that either. since chatGPT iterates in a different way from people, unable to truly understand, sometimes it gets stuck in patterns and other times it breaks a socially “normal” pattern.

      for example, you can ask it to generate new words, and it will. here’s 5 made up alternatives to the word “pretty”: Gleamful, Quaintique, Luminara, Graceous, Enchantique. I asked it to then generate shorter made up words that weren’t just combinations of existing words, and it did: Zelto, Gleff, Plind, Wynth, Vexa.

          1. AcademiaNut*

            ChatGPT is a pattern reproducer without judgement or the concept of accuracy or truth. Which is why you can use it to summarize information, or to come up with a first draft, but you need a human with the expertise to sort out the garbage part. Left on its own, it will make up facts or terminology that sound plausible, but don’t actually exist.

              1. Donnie Darko*

                “Yep, the human brain cannot be reproduced. AI is not really intelligent.”
                (comment generated by AI)

            1. Donnie Darko*

              Oh, absolutely, let’s just replace all human decision-making with ChatGPT. Who needs years of experience or specialized education when you have a text generator that learned everything it knows from reading the entire internet? I’m sure it can handle everything from court cases to medical diagnoses. What could possibly go wrong?
              (response by ChatGPT, to your comment, which was probably also written by ChatGPT)

      1. goddessoftransitory*

        Honestly, Dan is just the latest version of the kid whose book report was downloaded off the internet or purchased from an essay site, only even lazier.

        Back in MY day, we didn’t have AI! Or even cut and paste! We had to HANDWRITE our word-for-word copying out of the Encyclopedia Britannica and then be shocked and amazed when we didn’t fool our teacher for a split second! Bah, youths today…

        1. Dek*

          Yeah, but at least that kid usually had the common sense not to deliberately volunteer and draw attention to his book report…

        2. Reluctant Mezzo*

          Raises hand, did the copy by hand in fifth grade. Oh, did I ever get busted! (and yet I still remember the name of the chess master I stole the biography of, guess the trauma sealed that knowledge in).

        1. Ellie Rose*

          ahaha it does! I got one named Fremanezaub and another named Ubregopant and I’m just ?? how did they come up with these?

          1. Exile from Academia*

            There is *some* pattern to how the names are put together – anything ending in ‘umab’ is a humanized monoclonal antibody (‘imab’ is a chimeric antibody, unless it’s ‘vimab’ in which case it’s an anti-viral humanized antibody), and anything ending in ‘gepant’ is a GEne-related Peptide ANTagonist… but yeah, the rest of it is all alphabet soup.

            (also, migraine solidarity! hope the meds help)

          2. MissCoco*

            Slate recently published an interview with people who come up with medication names, not a ton of specific details, but I found it interesting. You can find it if you search “Zepbound”

        2. COHikerGirl*

          The medical industry has had ChatGPT for decades and has just been holding out on us. They didn’t want us to learn their naming secrets.

          1. amoeba*

            Ha! I, on the other hand, am now worried the actual humans whose whole job it is to think of new drug names all day will soon be unemployed… (/s, I know that’s not an actual job. If it existed, I’d love to have it, though!)

            1. Grizabella the Glamour Cat*

              Actually, it IS an actual job! Slate just published an interview with someone who does it, and it was fascinating. Search “Zepbound” to find it.

            1. Emmy Noether*

              Ikea products are named after actual Swedish words, places, and names. There’s even a system behind it, for instance outdoor furniture named after islands. There’s loads of information about it online!

    2. Caramel & Cheddar*

      Thank you. I feel really strongly that all companies that might be inclined to use AI also need an AI usage policy.

      I think some types of companies should also decide if they want to use them at all given the ethical considerations, e.g. a book publisher might want to avoid it given the number of AI companies out there that have been caught training their LLMs on the work of authors who didn’t consent to their copyrighted work being used this way. It’s a real minefield.

      1. Miss Muffet*

        Reading your comment, I thought, Oh my company needs a policy like this! I wonder if we have one! And literally two minutes later, an email with the Policy info lands in my inbox. And I’m not even reading AAM on that computer! Spooky.

      2. Tinkerbell*

        This! It’s easy to say “AI is evil, it’s built using stolen data, etc.” but as long as it does a good-enough job for cheaper than paying a human, it’s going to be around a while. Having a policy means employees can use it appropriately; not having a policy will just mean they’re regurgitating AI garbage and then lying to you about it.

    3. ZugTheMegasaurus*

      It’s extremely good at rephrasing information you put into it. I’ve used it to simplify an overly dense resume, like “Below is a bulleted list of job accomplishments. Revise this list for ease of reading, emphasizing results and metrics, and keep each bullet below 200 characters.” And it turned out to be really great for my partner’s business as a dog trainer. We took a very technical and complex explanation of teaching “stay” and told ChatGPT “Revise this for ease of understanding by an average layperson” and it was really good. Then we told it “Now revise to be understood by a child” and it was absolutely perfect, like this very friendly voice with little analogies kids would find helpful.
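      (If you want to script that kind of revision pass instead of pasting into the chat window, here’s a minimal sketch using the OpenAI Python client – the model name and prompt wording are placeholders I made up for illustration, not the exact ones we used:)

```python
# Minimal sketch of a "revise this text" call via the OpenAI Python client.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# the model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def revise(text: str, instruction: str) -> str:
    """Send text plus a revision instruction, return the model's rewrite."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": "You are a careful copy editor."},
            {"role": "user", "content": f"{instruction}\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

bullets = "- Led migration of 12 services to new hosting, cutting costs 30%"
print(revise(bullets, "Revise this list for ease of reading, emphasizing "
                      "results and metrics; keep each bullet under 200 characters."))
```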

      But in my experience it’s pretty awful at everything else, especially if you’re asking it to just spit out a whole idea from scratch. If you’re familiar with the material, it’s really obvious that you’re just getting slightly-rephrased snippets of the top google results so it seems a little unwise to do it on something you don’t know very well!

      1. Jill Swinburne*

        Yeah, I’ve used it to give me frameworks for workshops and it spat out in seconds what it would have taken me several hours to plan. It’s great for that too.

      2. Tinkerbell*

        My mom used it recently to help draft the job description for the director of the nature center she volunteers at, which has a focus on migratory birds. Apparently it gave a pretty good list of required job qualifications, leadership experience, education level, etc. It also said the ideal candidate would have at least five years’ experience teaching birds to fly :-P

          1. ZugTheMegasaurus*

            I mean, I would be genuinely impressed if somebody managed to do that! No matter what they were interviewing for, haha.

        1. Princess Sparklepony*

          Any chance that the AI has a sense of humor? Because that was pretty darned funny.

          Unless you are a wildlife rehabber where you might actually have taught birds to fly again.

          Have you ever seen them splice feathers onto a bird that has lost its feathers? It was pretty cool.

  2. Antilles*

    I think of ChatGPT as a starting point, no more. Basically the technological equivalent of checking your old college textbook or casual chit-chat with a colleague about “hey, ever done X?”.
    Gets you some good guidance, pointed in the right direction, can spark some ideas and so forth. But the textbook or your friend won’t have the full context so you still need to evaluate the ideas on their own merits and figure out how to make things applicable to your specific scenario.

    1. Richard Hershberger*

      Like Wikipedia: Often a good place to begin researching, but a terrible place to end it.

      1. Clisby*

        Yes, Wikipedia can be a very good place to start – or, depending on who wrote and documented the article, a fairly useless place to start. I found it to be at least worth checking (this was usually for history/art history classes I was auditing) because a number of times I quickly found citations for primary sources that made my work a lot easier. Not that I would ever rely on Wikipedia for the citation itself, just that I could then go directly to primary source X – I mean, either a will or death certificate exists or it doesn’t. How I found out that it exists is irrelevant to the fact that I now can read said will or death certificate for myself.

        1. Richard Hershberger*

          Pro tip: Wikipedia is appalling for early baseball history. No, I am not going to devote my life to arguing with pseudonymous editors.

        2. Alternative Person*

          I get frustrated when people say ‘Don’t use wikipedia’ sometimes because when you deal with a theory or practice point that has a long, winding back story, wiki is often the simplest way to get a potted history of something without spending hours tracking ideas through books and journal papers. It might not be perfectly accurate, but you usually get on the right track a lot faster.

          1. rebelwithmouseyhair*

            And the list of links at the end is usually full of info to confirm the wiki entry, and points you in all sorts of directions, some of which are fascinating, and some of which are useful, and some of which are both.

            I remember sending a link to a client to justify my use of a term (financial inclusion) to translate the French equivalent (bancarisation), which doesn’t look anything like the English term and whose meaning is not as easy to guess. The client objected, saying that Wiki was not reliable. I said, but just look at the links at the bottom: the World Bank, the IMF, several international banks. They all say the same thing as Wiki!

        3. Emmy Noether*

          Wikipedia tends to be accurate for math and physics (at least everything that is easily verifiable by looking it up in a textbook – it gets more iffy on cutting edge research). I use it frequently to look up formulas and that kind of thing. The trigonometric formula page is A++, better than I’ve seen anywhere else.

    2. The Rural Juror*

      We typically use it the opposite way. We write the base content ourselves, but use this or other AI tools to help with grammar, syntax, and the overall structure of the narrative. It makes our reports less boring (or frustrating) to read.

      1. Charlotte Lucas*

        I’m pretty sure it was an AI that informed me that escarole is a kind of snail. Definitely not boring.

        1. Falling Diphthong*

          I continue to be mystified by Chat GPT, because bullshitting vaguely plausible sounding answers based on no understanding of the question is something humans do exceedingly well, and will do for free.

    3. EngineeringFun*

      OP here! I totally agree: great place to start! But then you have to do your own thinking from there. I have used ChatGPT to write first drafts of things. But you still need to read what it wrote! :)

  3. HR Exec Popping In*

    The issue has nothing to do with using ChatGPT. You would have the same issue regardless of the source. He could have sent you a news clipping from the WSJ, a link to a blog post, etc. The issue is how he used this tool. To take anything and simply try to apply it blindly to a different situation is lazy and ineffective. Help Dan understand that he is in his job for his judgement, his thinking, his ability to add value. And then explain what that looks like: using a tool like ChatGPT to spur ideas and think things through, adding on to ideas from other sources, and determining how to modify them to work in your situation.

    This is not a technology issue. Or a credit issue. This is a performance issue.

    1. HA2*

      Yes, I like that analogy. If he had googled “automatically perform X,” found a link to a blog post/news article/whatever with a list of ideas, and then screenshotted that list and sent it to you – what would you do? Do that same thing in response to his ChatGPT list.

    2. ferrina*

      100%

      This is someone who is trying to use a hammer on a screw. He doesn’t understand the tool or what it’s for, he’s not getting good results, and he’s probably going to break something if someone doesn’t stop him. And his error is so basic that it’s probably not worth the company’s time to try to retrain him (not saying he has to be immediately fired, but this is a clear red flag about his fit for the position; think critically about how much time is worth investing in him – the company’s job, and yours, is not to enable Dan. You need a reasonable ROI).

    3. Random Dice*

      Not to mention burying THIS lede:

      “My impression is that he actively tries to minimize his role to get out of work and says bizarrely out of touch things related to social norms.”

      1. Portia*

        So he may think, with ChatGPT, that he has found a way to seem like a contributor without actually doing any work. As Alison notes, there’s more than one problem with Dan.

      2. Captain dddd-cccc-ddWdd*

        Perhaps it can be minimised so far that he’s no longer needed. After all, Chat GPT was able to come up with this “strategy” that he presented … threatening someone’s job by replacing them with AI is rarely a good move of course, but he is inviting it!

        1. OrigCassandra*

          Yeah, this is one I actually throw at my students. “If ChatGPT can do what you do — or if what you can do is circumscribed by ChatGPT — why does anyone hire you to do anything, like, ever?”

          Most of them haven’t thought of this one, so far.

      3. AMT*

        The letter reads like one of those clickbait headlines that’s like “Pokemon Go results in man’s death” but when you read the article, it’s literally just about a dude shooting someone for no reason. This has nothing to do with technology. It’s about a guy being bad at his job in ways that are not new. If this were 1923 instead of 2023, instead of AI, he’d be passing along a newspaper clipping he didn’t understand. There’s probably a papyrus somewhere that says, “My student considers himself a great philosopher. He quotes Socrates, but then cannot explain what he meant!”

    4. Dulcinea47*

      It’s absolutely a technology issue, b/c it’s an issue of people not understanding what the technology is or isn’t for, or what it can or cannot do. Dan thinks that ChatGPT’s ideas are perfectly valid when in fact they’re likely not.

  4. Czhorat*

    Well-said.

    There’s both too much deference to ChatGPT and similar as objective machine intelligences AND too great a backlash against them. Tools, once introduced, remain here to stay; no matter how many power-looms you smash, someone will always build more. The key is, as Alison said, to understand the strengths of the tool, the limitations, and to use it mindfully.

    If Dan presented ideas that he came up with using ChatGPT, that’s not a bad thing, so long as he understands what to do with them, can build on them, and can add some kind of value. “Presented something from a chatbot” is not the real issue; “presented something that he slapped together with barely any thought” is.

    ChatGPT makes it sound like a technology problem; that’s not really what it is in this case.

    1. Busy Middle Manager*

      “There’s both too much deference to ChatGPT and similar as objective machine intelligences AND too great a backlash against them”

      Agreed. It’s upper upper management saying “we need to use AI!” Then you show them the results and they are like “not like that, bad result” and you’re like “that’s what it produces!”

      1. MigraineMonth*

        I’m pissed that out of all the awesome uses for AI, the one everyone’s trying to put into their products is this hallucinating parrot. WHY.

        1. Falling Diphthong*

          I really don’t understand why, of all the crappy things we might want to outsource to robots, we have settled on “Sitting around a bar making bs claims about something you know nothing about.” People would do that for free!

    2. NoOneWillSeeThisComment*

      “Tools, once introduced, remain here to stay; no matter how many power-looms you smash, someone will always build more”

      I get what you’re trying to say, but objectively, it’s not true that tools are “here to stay” “once introduced.” Plenty of tools never fully launch (just look at the graveyard of Google tools and products) or are eventually obsoleted.

      1. Czhorat*

        I get your point there, but I was more thinking of Ned Ludd’s fear that the power loom was displacing skilled weavers from the job market.

        No matter how many looms he smashed, more were built. Today – despite the prescience of his concerns – Ludd’s name is synonymous with a futile anti-technology backlash.

        It’s hard to get the genie back into the bottle.

        1. Random Dice*

          Interesting factoid – weaving was an early EMDR trauma therapy for WWI veterans who had “shell shock”. There was a big boom in weaving as an industry after that.

        2. Silver Robin*

          Yeah *and* there is more to Ludd than “technology bad”. Skilled weavers were losing livelihoods and nothing was there for them to do instead, nor was there a safety net available to keep them from becoming desperately poor. Had the skilled weavers each been given a fancy new loom and an increase in pay commensurate with their new levels of productivity, or if the weavers’ choices had included something else that could support them similarly to their previous jobs, Ludd would have had less of a point and less of a bone to pick. See also the ghost towns of the rust belt – people’s livelihoods were taken away, retraining or redistribution of labor was abysmal, and now we have a whole American subculture about it.

          1. Your Former Password Resetter*

            Yeah, the problem isn’t the technology, it’s business executives using this as an opportunity to pull resources away from the employees and into their own pockets.

    3. Caramel & Cheddar*

      I don’t think AI is actually as inevitable as a lot of people think it is, but we’ll certainly have it become an inevitability if we approach it as “tools, once introduced, remain here to stay.”

      1. Cyborg Llama Horde*

        In fact, I suspect its usage will die down A LOT once people have to pay for its resource consumption. We’re in the “Uber killing taxis because it’s venture capital subsidized” phase of machine learning at the moment — hopefully we don’t get too much collateral damage before the market equalizes.

      2. But what to call me?*

        Why wouldn’t it, though? People have been working towards AI for decades, or at least working on things they thought might eventually lead to AI.

        Even if this particular version doesn’t stick around, there’s a lot of incentive to keep making computers better at whatever intelligence-like skills are required for whatever task we want to use them for and not a lot of incentive not to do that, at least from the perspective of the people in a position to throw money at making it happen. When technologies don’t stick around isn’t it usually because they either aren’t useful enough to overcome whatever barriers are involved or because they are quickly supplanted by something better? Why would that be the case for AI?

        1. Caramel & Cheddar*

          I’m commenting more on the “We should just lie down and accept something that kinda sucks because it’s coming one way or another” sentiment. We can and should make active policy choices about the use of tech in our lives, whether it’s AI or video doorbells or all sorts of other things. Nothing is inevitable unless we want it to be.

    4. Jack Straw from Wichita*

      Agreed. The situation is the same as if he copy/pasted or sent a screenshot from a Harvard Business Review article or another presumably acceptable industry source. We all use resources as a starting point, it’s the further thought and application to specific problems in the org that you’re trying to solve that he didn’t do.

  5. Richard Hershberger*

    “So the conversation to have with Dan is this: ‘When you bring ideas to meetings, I expect them to be your ideas that you’ve developed and thought critically about — or at least for you to flag that they’re not your work and you haven’t given them real scrutiny yet.’”

    I don’t think this is quite right, or at least not consistent with what came before. If, as previously stated, it is OK for Dan to use ChatGPT to generate initial ideas (and I agree that it is indeed OK), then those ideas by definition are not Dan’s. But who cares? There is no copyright on ideas. The problem is that they are half-baked. Flagging that they are not his work, and flagging that he hasn’t given them real scrutiny, are unrelated to one another.

    I think the gist of the answer is to explain to Dan that when he brings ideas to a meeting, he needs to have developed them and have a pretty good idea if they are workable and will achieve the desired purpose. Should he disclose that he got the initial idea from ChatGPT? Should he disclose it if he got the initial idea from his cousin in Duluth? The answer might be yes, but I don’t see why it would be yes to the one and no to the other.

    1. ferrina*

      Eh, I don’t think Alison’s script is wrong – he can develop the ideas he gets from ChatGPT. ChatGPT usually doesn’t have presentation-ready ideas, but it’s a starting place from which you develop the materials.

      I’d probably phrase it similarly to you:
      “When you share ideas, they need to be ideas that you’ve worked on. This means you need to be able to answer questions about the ideas, share pros and cons, and talk about what you see as next steps. Your ideas don’t need to be set in stone, but you should have a way to build on your ideas or clear questions about what you need to flesh out the idea.”

    2. bamcheeks*

      Yes, I think the focus on originality is probably a red herring here: your “own work” is a big thing in school or college, but there are relatively few jobs where real originality is required. Most successful solutions in business contexts are going to be a known solution recontextualised, an innovative blend of two well-known solutions, a popular solution tweaked, something another team or company is doing but tweaked for your market, a common solution pushed a bit further, and so on. The work is the contextualising and tweaking, and that’s what Dan failed to do.

      1. anonymous 2*

        Agree with the idea that “originality is a red herring”. The ideas don’t need to be your own! They just need to be good and well-thought-out and contextualized and these weren’t.

      2. Yorick*

        I guess the idea is that you need to have done work yourself, not that everything needs to be 100% your own original ideas.

      3. Cyborg Llama Horde*

        Though if the original ideas came from a coworker, I think it’s very important to acknowledge that!

    3. Captain dddd-cccc-ddWdd*

      Yes, I have a role where (part of what I do is) I’m generating ideas for proposed things we can develop, how they fit into the overall strategy or push it forward, etc. I have to build essentially a business case for why we should do that piece of work, outline the project, estimate the costs, get people in the business interested in the idea and then get it approved by the various “channels” that new product development has to go through. I would be laughed out of there if I presented a ChatGPT-generated list. However I don’t think I’ve ever been asked how I came up with the initial idea. (Often it is something quite boring like I noticed an inefficiency in the X strategy which we could correct easily by doing Y – but sometimes it’s more like I woke up with the idea!)

    4. WishIWasATimeTraveller*

      It might not matter that the ideas aren’t his, but it definitely matters that he is pretending that they are! That’s an ethical issue, and it’s a problem whether he’s taking credit for a colleague’s ideas, for a reputable academic source, or for something that came from an AI generator. Especially since one of the complaints against ChatGPT is intellectual property theft and copyright infringement.

      1. properlike*

        Why isn’t anyone calling Dan on his bluff? I see this as Dan going, “This is so stupid, even ChatGPT can come up with answers that the team will think are great.”

        True karma would be telling Dan that he’s fired, replaced by ChatGPT — since he thought it was worthwhile enough to stand in for his own thinking for the project. Why pay for ideas you can get for free?

    5. Kella*

      I think it’s somewhat similar to using a writing prompt. If I were to look up a list of writing prompts on the internet and offer them up as article ideas, that would not be my work. But if I were to use one of the prompts and write an article, the resulting article would be my work. It would still be dishonest to call it “my idea” and I’d probably say “Yeah I got the idea from a writing prompt from…” to be accurate, but the resulting work is still mine. If Dan takes one of the concepts from the ChatGPT list and fleshes it out and develops it, that’s his work.

  6. Emoo*

    I mean, I rather DO consider ChatGPT a big problem here. It’s compounded by the way he seems to think that presenting a machine-generated response to a prompt constitutes actual effort, despite not reviewing the generated responses for feasibility or making sure he actually understood what it output.

    I think this points to a larger problem within AI-generated content of all kinds, beyond the associated theft of copyrighted work for datasets. Inputting a prompt is not the end here, and just chucking out generated responses as if you came up with them yourself doesn’t meaningfully constitute “work.” I’ve noticed this in a lot of AI-reliant creators, and it’s just an additional reason to be wary of AI as some kind of problem-solving solution.

    1. Czhorat*

      But that doesn’t mean that the *tool* is the problem, it’s the application of the tool.

      I’ve played with ChatGPT, from asking it to do literally my job to writing fiction. There are some things it’s frighteningly good at, some at which it’s abysmal. You need to know its strengths and weaknesses and need enough of your own subject matter knowledge to refine the suggestions it gives you to a polished, actionable output.

      1. Elitist Semicolon*

        Your second paragraph is key here: a large number of people using ChatGPT to do parts of their job are either not aware of/concerned with its strengths/weaknesses (including its serious issues with fabricating sources outright) or don’t have/apply enough of their own knowledge to recognize flawed output.

        1. Czhorat*

          There was a piece in the NYTimes a couple months ago about a lawyer who used it to write a brief. ChatGPT did what ChatGPT does, which is take existing data and output something that looks similar. The problem is that this means it *completely fabricated* citations to non-existent court cases. Because it doesn’t know why it’s doing what it’s doing.

          Do you want it to write, say, a draft of an essay for college admission? Ad copy? A short story? It’s FAR better at things like that. It’s worse at things which require factual accuracy.

          1. Elitist Semicolon*

            Yeah – I was thinking of that exact instance! Horrifying.

            I’d argue that a draft of a college essay is a bad use, too, but I work in higher ed. I understand the point you’re making, though: it’s best used for a preliminary attempt at something that doesn’t rely on known/verifiable factual information and that can be a starting point for a user’s own interpretation.

            1. Writer Claire*

              As an experiment, I asked ChatGPT to “tell me about” one of my novels. I included the title and the author name. The summary was very good, and I recognized the sources as a handful of online reviews.

              Okay. But then I asked it about a second novel of mine. It started off okay, again from online reviews, then took a sharp left turn into bizarre and mixed actual character names with plot points not in the novel. Then it added entirely new character names *and* plot points that were wrong. I did some research and found all those details came from a completely different novel, with a different title and different author, published several years earlier than mine.

              When I “told” ChatGPT it was wrong, it got snippy at me.

          2. But what to call me?*

            It always surprised me how little attention many people gave that problem before that article came out. They’d mention that ChatGPT could provide inaccurate information but mostly brush it off with a comment about how anything you find on the internet could be inaccurate information, as if there wasn’t any difference between joebuttfreckle365 rambling on about conspiracy theories and just plain making up things like peer-reviewed research articles (which it did to me and then doubled down by claiming its made-up article was actually a made-up book chapter when I pointed out that the article didn’t appear to exist). Being extra good at making made up things look real is not a small issue!

          3. ClaireW*

            Yeah, I think AI is a useful tool for taking the pain out of things you already know how to do (like condensing or rewording stuff, for example) – if you couldn’t already do the thing, then you aren’t going to be able to tell if the tool is doing a decent job of it or not (minus AI art, which has its whole own set of problems).

      2. Emoo*

        My problem with ChatGPT AS a tool is that it’s not marketed that way; it’s frequently marketed as a complete replacement of human voice and thought in the world of writing and content generation. If nothing else, statements from major companies definitely show that THEY are looking at AI tools as human replacements – see Jeffrey Katzenberg this past week, and Buzzfeed earlier this year.

        This may be veering off into ethics of AI territory, but my concern lies both with the tool itself (content theft, accuracy, and how its makers MARKET it) as well as the people using it (based on the marketing, dollar signs in their eyes, and with no apparent consideration of like…what it’s spitting out, because hey, it did the work for me!)

      3. Not Tom, Just Petty*

        I liked the reply by HR Exec Popping In that fleshes out your statement by comparing Dan’s providing ChatGPT results to providing a list of Google search results.
        “My ideas for streamlining llama washing are these ten links from Google.”
        I think OP may need to spell it out that clearly to Dan: he needs to, at the very least, be able to explain the article.

        1. Emoo*

          I think HR Exec and I fundamentally disagree about whether ChatGPT is a problem. I believe that morally, legally, and functionally, ChatGPT (and every other AI generative content tool) is an enormous issue. The foundation upon which it is built is not ethical or even ethically neutral, and the people who made it don’t…care. That it scrapes work without compensation or CONSENT is enough for me to dismiss it out of hand, honestly.

          The VC firm Andreessen Horowitz recently had the most hilarious/infuriating statement about AI investment, where they were concerned that AI will be worth less – and worthless – if AI tech companies are “forced” to pay for the use of the content they’ve stolen for their datasets.

          Interestingly, the music industry has gone through this before (I believe stemming from sampling??), and while I don’t agree with everything they’ve done to stay on top of it, there’s so far been much less issue with AI generation in music than in writing, art, and acting.

          1. Not Tom, Just Petty*

            I see what you are saying about ChatGPT specifically.
            I didn’t connect that this program was this crazy AI software that essentially collates IP and calls it “AI generated content.”
            That is a conversation I’d like to observe here on a Friday, because the few articles I’ve read remind me of “if you take from one person, it’s plagiarism; if you take from many, it’s research.”
            What’s the corollary – if you let AI collect and disseminate it, what is it?

            1. Emoo*

              I work in higher ed, and the quote about research did make me laugh – but it’s not inaccurate! I’m also more permissive about the use of scraped content from the perspective of academic research, due to the generally nonprofit nature of the thing.

              A LOT of these AI initiatives began as research projects, which I’m ok with in theory, but moving all kinds of academic research to a for-profit venture is becoming more common, with large universities running incubation programs specifically to spin research into the real world and make money off of it. It’s just trickier, and with large scale data collection like this, hasn’t been figured out in any kind of ethical way.

      4. Elbe*

        In theory, I think that you’re right. ChatGPT is just a tool.

        But, in practice, I think ChatGPT results are much more likely to be problematic. AI intentionally obscures its sources much more than, say, a Google search. And, a large portion of the people using it don’t know what it is, or how it generates its results.

        Practically speaking, I think people are right to be more wary of ChatGPT than other tools. There are significant differences, both in how the results are created and with how they’re interpreted by most people, that make it less than reliable, currently.

        1. MigraineMonth*

          Yeah, the particular issue with ChatGPT is that it sounds so sophisticated, but it doesn’t actually know what reality is. Sort of like a spam email that doesn’t have any typos or a conspiracy theory you hear about from a friend, it gets past our first-layer BS detector even when it shouldn’t.

          1. Wintermute*

            My favorite example of that was from AI researcher Janelle Shane. She prompted GPT to confidently tell her that there was an entirely cheese-based nuclear reactor in France.

        2. Kevin Sours*

          ChatGPT produces answer-shaped objects. And if you don’t have the expertise to distinguish those from an actual answer, then you will rapidly get yourself in trouble. Even if you do, it’s not clear that validating the answer is easier than just doing the research yourself. For instance, it’s quite capable of producing entirely plausible and perfectly formatted citations for the facts it presents that don’t, technically speaking, exist. You have to independently verify anything it tells you.

      5. Peon*

        I love ChatGPT for writing code, because it will either compile or it won’t; it’ll give you the right output, or it won’t. Done.

        I also kinda hate it for writing code, because there are specifications I give it that a human would “get” very quickly – usually the business rules or logic – that it has a very hard time with, and it will keep giving me the same wrong solution over and over.

        I end up using it as a good starting point (and it saves a LOT of time there), and sometimes as a double check to see if I can streamline anything. And it’s also just plain fun to chuck a bunch of code at it and ask it to summarize what it does in Haiku or Limerick.
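        (To make the “right output or it won’t” check concrete: the kind of quick assert-based spot check I run on generated helpers looks like this – the function below is a made-up stand-in, not actual ChatGPT output:)

```python
# Spot-checking a generated helper before trusting it.
# dedupe_preserve_order is a made-up stand-in for ChatGPT output.
def dedupe_preserve_order(items):
    """Remove duplicates while keeping first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# If these pass, the code does what I asked; if not, back to the chat.
assert dedupe_preserve_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_preserve_order([]) == []
print("generated helper passes its spot checks")
```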

        1. SoloKid*

          I’ve found it exceptionally good at regular expressions, even when I put my request into vague terms.
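          (For example, even a vague ask like “pull the ISO dates out of this log” reliably gets you something usable – this is the shape of pattern it hands back, reconstructed from memory rather than an actual transcript:)

```python
# The kind of regex a vague "pull the ISO dates out of this log" request
# tends to produce (reconstructed example, not an actual ChatGPT transcript).
import re

log = "2023-11-06 12:01 deploy ok; retry scheduled for 2023-11-07"
dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", log)
print(dates)  # ['2023-11-06', '2023-11-07']
```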

        2. EngineeringFun*

          OP here: I have used ChatGPT to write a section of code or to debug why something wasn’t compiling. But not the whole thing. I’m not averse to using AI; I just wouldn’t show it as my final result.

      6. Lydia*

        Except the tool is flawed if it’s learning from scraping the copyrighted works of actual human people who have put in the time with no compensation or acknowledgment. That’s not the application, that is 100% a flaw in the tool.

    2. Keeley Jones, The Independent Wonan*

      I see so many people post things like “I asked ChatGPT and it said X, so X is true” or “Well, you’re wrong, ChatGPT says Y.” Basically, there are people out there saying “I let ChatGPT do all my thinking for me now” and not seeing anything wrong with blindly trusting an AI. It’s terrifying.

      To tie in Alison’s example of Excel: someone can give me a template of a report and I can plug in numbers and get results. But I still need to know how the report works, what the numbers mean, be able to fix broken formulas, etc.

      1. Elbe*

        Love your name!

        ChatGPT is less like Excel or Google and more like sampling in a song. In any instance where the source of an idea matters, it shouldn’t be used. It’s unethical, for starters, but it can also produce results that – had someone been aware of the source – they wouldn’t have wanted to use.

        Using it for grammar? Probably okay.
        Using it for broad, general ideas? Maybe okay.
        Using it for anything even remotely specific? Probably bad and really dicey.

      2. Caramel & Cheddar*

        I keep thinking of the thing that went viral a few months ago about whether or not you could melt an egg. Google said yes, because it was promoting answers from, I think, Quora, but Quora itself was using an AI to answer questions. “Yes” was the top result for this question until Google must have manually fixed it, but how many other questions like “can you melt an egg?” haven’t gone viral and are still surfacing wrong information in the places most accessible to the average person, like Google?

        1. MsSolo (UK)*

          The current one is “do any countries in Africa start with a K?”, to which it’ll tell you no, but people are often confused because Kenya looks like it starts with a K and sounds like it starts with a K.

    3. Stuckinacrazyjob*

      Yes, what ever happened to using it as a starting point and then kinda editing it to make sense?

  7. Jennifer @unchartedworlds*

    Using ChatGPT for technical stuff is a recipe for disaster.

    What it does, and what it’s good at, is stringing plausible sentences together. It doesn’t have any framework for evaluating whether the resulting sentences correspond with the real world.

    1. ChatGPT user*

      I have used ChatGPT for a mostly technical project. I got assigned an action item to research best practices for a particular technical decision/area. I was among the least passionate about this particular decision/area of those in the meeting. I was pretty sure nobody would ever read it, so I had ChatGPT generate something, gave it a quick read to make sure it wasn’t complete gibberish, and pasted it into the shared document. I never heard about it again.

    2. Too Many Tabs Open*

      My favorite example is AI researcher Janelle Shane’s HAT9000 project, in which she primed a LLM with crochet hat patterns and had it generate its own hat patterns.

      The results were…interesting. Some of them were crochetable, but few resembled hats. The LLM could string phrases together, but there was nothing in its algorithm to tell it “once you’ve increased stitches to the diameter of a human head, STOP INCREASING.”

    3. Cyndi*

      Yes, this. I was coming to point out that even describing it as a “search” is fundamentally misunderstanding what it does.

    4. Dramatic Intent to Flounce*

      Yeah, there’s a known problem of it producing fake information (including citing articles, sometimes by real authors, that don’t actually exist, or fake legal cases complete with quotes… in a brief that went before a judge, and ended with at least one lawyer sanctioned,) ranging from “this is plausible until you start looking, especially if you’re not familiar with the subject matter,” to “claiming there are no countries that start with the letter V.”

      Theoretically, it MIGHT be okay to use it as a springboard for ideas if you then vet them all VERY thoroughly… but it’s very much not a search engine, and its information should be considered deeply suspect.

    5. Falling Diphthong*

      Spouse asked it about the electrical conductivity of various materials. It was very confident! It made up one-sentence supporting evidence lines! But it was all made up.

      I think we have all fought with autocomplete on our phones, and I’m just mystified that someone thought that doing that on the sentence level was going to generate useful, interesting essays.

  8. I should really pick a name*

    “For Dan, instead of addressing it head-on, I asked him to build the appropriate table and gave him guidance on how to present his ideas better”

    I think it’s worth asking yourself why you didn’t address it head on.
    If you don’t want him to do this again, you really have to tell him. That’s for both his benefit and yours.

    1. Elbe*

      I think that the LW should also decide how much coaching they want to give Dan before parting ways. It sounds like this is just the latest issue in a long list of issues.

      Even for a relatively young employee, this sounds like very poor judgement. Should you really have to tell someone likely in their mid-20s that they shouldn’t expect other people to spend their time on ideas that THEY didn’t even bother to spend their own time on? To not suggest things when they don’t even know what the words mean? To not say things like “I think this could solve the problem!” when they don’t even know what the ideas are?

      Managers have to be careful about how much time and effort they spend on employees who are resistant to feedback and guidance. For things this basic, I’m not even sure what coaching would be possible.

    2. EngineeringFun*

      OP here: this question gets at the truth of the issue. I’ve had issues with this person in the past, and my manager keeps telling me how bright he is and that I should give him another chance. I just don’t see it. After interactions with him I feel like I have been manipulated. He’s good at playing the victim. He’s always waiting until the last minute to do things and then is angry when I’m not available to bail him out. We have a hiring freeze on, so his manager isn’t going to do anything. I can only control what I can control.

      1. anon for this*

        Let Dan flail and sink, then. Don’t bail him out, it’s a waste of your time. I’d tell him what he needs to do, per Alison’s script, then let him do or not do it as he chooses. Actions/Inactions, meet consequences.

        (Document the good advice, though, because this manager is not good: he seems to think that an actively counterproductive employee is better than an unfilled position. Dan is using up resources: his unearned salary, but also the time wasted in meetings, and especially your time.)

      2. Elbe*

        Oof, this sounds frustrating!

        Do you know why your manager thinks he’s bright? Has he done good work for other departments or on other projects? It’s really concerning that your manager is brushing off such major judgment issues.

        My only advice would be to document everything problematic that he’s doing, clearly communicate why what he’s doing is bad & document your replies, and then organize these instances into “themes” that illustrate why he’s a terrible employee that you can’t trust.

        1. Jellyfish Catcher*

          You have a Dan problem.
          This has been sort of lost in the weeds due to the discussions of the technical capabilities.
          Dan doesn’t take feedback well, tries to get out of work, and is presenting these findings without analyzing them. Dan has what I would call either a laziness issue or an integrity issue, or both.
          You need to really closely manage and monitor him, and decide if he is the type of employee that you can trust and help grow into their abilities.
          Based on what you reported, he needs to be managed out, within a set period of time.
          It’s so easy to not do all that, or to think this will work out, or to feel judgmental – believe me, I’ve been there with all those thoughts.
          But it will come back to bite you if you don’t tackle it now.

      3. 1LFTW*

        I think that the reason you feel like you’re being manipulated is because Dan is manipulative! It’s not your job to bail out Dan. If his manager thinks he’s so bright, and believes he deserves multiple second chances, then it’s on them to hold his hand.

  9. alto*

    Ooh boy. I really, really don’t like ChatGPT (oftentimes you’ll be spending more time sorting through the word salad it spits out than actually thinking about what it gave you), but that might be me coming from a creative background where AI is a lot more of a threat.
    I don’t know, it just seems overly lazy to me, especially since he didn’t even try to look further into anything ChatGPT gave him.

  10. Emily*

    I think it would be very reasonable to object to the use of ChatGPT even in the second case of using it to generate ideas for further development! It’s fundamentally different than Excel in that it is not a tool you’re using locally — anything you feed into ChatGPT to generate those ideas is being sent to a third party with very unclear terms of use. Using it to brainstorm for a project is almost certainly a violation of any confidentiality/proprietary information policy unless there’s been a specific carve-out for it. In my opinion, it’s not something anyone should be using in a work context without a pretty explicit approval and a lot of thought put into what kind of information can and cannot be shared.

    1. Antilles*

      That really depends on what sort of information you’re putting into it and what (if any) specifics you provide.

      Asking it for “give me four methods on how to increase water flow between two points” is a generic enough engineering problem that it wouldn’t violate any confidentiality agreement I’ve ever read.

      Though for OP in particular, the fact that Dan thought it was fine to just send a bland ChatGPT screenshot with no analysis/self-evaluation *would* make me question his judgment enough to assume he probably did unknowingly provide problematic specificity in his questions.

  11. Claire*

    I know someone who just got fired after trying to pass ChatGPT off as her own work. She is a….wait for it…writer. She was immediately terminated.

      1. Elitist Semicolon*

        Someone’s gotta pick up “500 children’s-book-shaped objects” as their new screen name. Or as a band name.

  12. Hiring Mgr*

    I agree 100% with the advice. It sounds like the issue was more about how Dan presented the ideas – he basically just gave a list that he didn’t look at or put any thought into. I don’t think it’s ChatGPT related – he could have done the same thing with Google results.

    1. properlike*

      Here is the equivalent: One afternoon I watched two Amazon delivery people walking back and forth between two houses on the block, staring down at a phone screen. Sometimes they’d stop in between the houses, then take a couple of steps toward one… and then the other… intense discussion happening there.

      I finally figured out they were trying to figure out which house the blue dot was supposed to be on the Google map so they could correctly deliver a package. A package that had an address printed on it. An address that probably matched one of the two addresses POSTED ON THE HOUSES.

      Dan is either gleefully incompetent or gleefully ignorant of his incompetence, but ChatGPT is merely the tool to illustrate it.

  13. Teekanne Aus Schokolade*

    As the team member responsible for ChatGPT implementation across our processes, I love it, but not as a producer of an end product. So I met with my team to create an actual AI-usage plan, so everyone could have training with it but also have a say in its use. Some were morally against it and are therefore not mandated to use it for all projects. We still have three entire editorial levels that have to put it back together. That being said, with new web plugins on GPT-4 and the ability to read PDFs, it’s a game changer for efficiency and has helped our short-staffed department tremendously. OP, as with any new tool, it’s something that your team deserves training on, even if it’s not official in the company, because AI is going to become a norm in most fields, just like word processing software did. You can even bring in someone to teach, or do online tutorials together.

  14. Ex-prof*

    This isn’t surprising. Less than a year since the release of ChatGPT and we’re already awash in “books” “written” by AI and student papers “written” by AI. Advertising copy “written” by AI. Art “drawn” by AI.

    And it’s all. So. Bad. If there’s a use for this tech, great, but I haven’t seen it yet. All I see is evidence that AI is completely incapable of understanding human experience.

    1. Peon*

      I commented above, but it’s decent-ish at writing some code. I’ve been using it to write Python scripts at work, and about 90% of the time it’ll give me something that does exactly what I want; the other 10% of the time it needs some tweaking to get there. It saves me a lot of time, because I can focus on the 10% where the chatbot just doesn’t understand the actual use case/human logic/what have you, and then put it all together.

      I can’t tell it to write a program to do xyz, but I can say “given a and b and c how would I do x?” and then add y and then z. And if I test my program or script and it doesn’t work or does something unexpected, I can feed it back in and ask for help.

      ChatGPT is also GREAT at telling you exactly how to do complicated functions in Google Sheets. Heh.
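      To give a purely hypothetical sense of scale: these are small, self-contained scripts, not whole programs. The sketch below is my own minimal illustration (not actual ChatGPT output; the task, folder path, and function name are all made up), but it’s the kind of thing it usually nails on the first pass, with maybe one detail, like “skip the header row,” that I have to feed back in:

      ```python
      # Hypothetical example of the kind of small script I mean:
      # count the data rows in every .csv file in a folder.
      import csv
      from pathlib import Path

      def count_rows(folder: str) -> dict[str, int]:
          """Return {filename: number of data rows} for each .csv in the folder."""
          counts: dict[str, int] = {}
          for path in sorted(Path(folder).glob("*.csv")):
              with path.open(newline="") as f:
                  reader = csv.reader(f)
                  next(reader, None)  # skip the header row (the detail I had to feed back in)
                  counts[path.name] = sum(1 for _ in reader)
          return counts

      if __name__ == "__main__":
          for name, n in count_rows(".").items():
              print(f"{name}: {n} rows")
      ```

      And if it gets an edge case wrong (say, a file with no header at all), I paste the wrong output back into the chat and ask it to handle that case, which is usually faster than debugging from scratch.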

    2. Warrior Princess Xena*

      A lot of the art communities are calling them “AI-generated images” to separate them from “art”.

    3. The Coolest Clown Around*

      I’m an operations research analyst, and AI can do a lot of really powerful things that are difficult for humans to do in a reasonable timeframe – sorting and identifying large quantities of text or images, picking up patterns in large datasets, etc. – but it’s not meant for every situation, and it’s frustrating when people try to plug it in where it doesn’t belong. If I have one more project where somebody asks me to move a basic function from Excel into “the AI cloud,” I will laugh until I cry in front of my boss.

  15. Marcella*

    I am coming across this quite a bit with people submitting work that I can tell was written by ChatGPT, Bard, Jasper, etc. So often they don’t bother editing it or even reading it through, which tells me they know zilch about AI copyright issues, hallucinations, the tendency toward repetition, possible IP leaks, and other concerns.

    I created a checklist for them but also made it clear I want human written content. And I definitely want honesty about where work comes from.

  16. FrogEngineer*

    Ethics aside, this guy seems pretty dumb for not even disguising his work and just using an obvious screenshot.

  17. Ann O'Nemity*

    New tool, same rules. Don’t trust the results without verifying. Don’t pass the work off as your own. Assume implicit bias.

    Sounds like Dan needs some coaching on how to effectively use ChatGPT.

    1. Charlotte Lucas*

      Some of this is the exact same advice I gave Freshman Comp students back in the early days of the internet. You have to do your due diligence.

      And that was humans writing the messes we saw back in those days.

    2. Jack Straw from Wichita*

      Dan needs coaching on effectively using sources, period. Whether it’s a screenshot from ChatGPT or a screenshot from a Harvard Business Review article makes no difference. He’s got to do the second step of application and critical thinking with either one.

  18. Retired Vulcan Raises 1 Grey Eyebrow*

    For Dan, instead of addressing it head-on

    You need to address him directly. If you keep avoiding this, he will keep avoiding doing work to an acceptable standard: he must use tools correctly and when appropriate, and he must take feedback on board.
    He may well not understand that he is doing anything wrong if you have never told him. If you are too conflict-averse to do this, then tell his manager – it’s also not fair to Dan if he’s not receiving clear feedback.

      1. I should really pick a name*

        You don’t need to wait for it to happen again.
        Even if you didn’t mention it in the moment, that doesn’t mean you can’t talk to him about it later.

  19. Elbe*

    There are a ton of issues with what Dan did! The LW is right to think that Dan’s professional judgement is horrible.

    The LW needs to make it clear that dumping a lot of “ideas” on his colleagues, and then expecting THEM to think through the ideas and vet everything, is worse than unhelpful – it’s creating more work for the team. It’s like doing a basic Google search and then expecting your colleagues to sort through all of the results… and then give you ‘credit’ for any good ideas that may have come out of it.

    But another issue with ChatGPT is that it doesn’t respect intellectual property laws, copyright laws, or even social norms. Depending on what they’re doing and how specific the searches are, it could be that some of the sources used to get that information are things that they should absolutely not copy. Most businesses don’t want to give the impression that they’re ripping off their competitors or just doing everything that the industry blogs talk about. Even if it’s not illegal, it’s a really bad look. Knowing where an idea came from is really important for most businesses.

    Combined with the other things that the LW said about Dan’s work, I don’t have a lot of hope here. Norms can be taught to a new employee, but that will be kind of useless if the employee is actively TRYING to get out of work or take shortcuts.

  20. Axel*

    “instead of addressing it head on” OP yeah that’s a big problem. I understand the aversion to being blunt about the extent of a problem, but I think it’s important, especially in this situation, to catch this immediately and directly and make sure it doesn’t continue. It’s a kindness to Dan to explain to him exactly why this was unacceptable!

    Also, I just… think what’s worth considering when discussing ChatGPT as a tool is the practical reality that it just makes things up. Like, it’s important to know the capabilities of the tools you use, and I think that whoever said above that the thing ChatGPT is good at is ‘stringing words into a plausible-sounding sentence’ was right on the money. ChatGPT results will include invented information, fabricated sources, and things that just blatantly aren’t true, and that makes it a tool that is profoundly flawed at best. Not all tools are created equal, and we don’t have to talk about them like they’re value-neutral.

    1. ferrina*

      These are both really good points.

      OP should definitely address this directly – if Dan is actually interested in growing his skills in this role, he’ll want to hear what he needs to do to get better! And if he’s not interested in growing his skills, he needs to know that he’s below the bare minimum so he can at least try to keep a career (if he did this in my company, he’d be fast-tracked for a PIP).

      On the second point, I have nothing to add – I think this was written beautifully!

  21. rollyex*

    “Most businesses don’t want to give the impression that they’re ripping off of their competitors ”

    Well some do. They brag about being ‘disrupters.’

  22. H.Regalis*

    I don’t think the issue is ChatGPT: I think the issue is that this guy is awful at his job. He doesn’t want to document stuff he’s done, he shirks doing work, and he doesn’t seem to have enough technical knowledge to evaluate whether or not the stuff the AI is spitting out is viable. The last one seems really damning for an engineer.

    Full disclosure: I straight-up dislike a lot of AI stuff, not for the AI itself, but for the people who are like, “AI is replacing all of your jobs!! Have fun starving to death in poverty, plebs! LOLOLOLOLOL”

  23. Katy*

    The LW says, “AI is a great tool to conduct a preliminary search,” and while this isn’t the main point of the question, I do think it’s worth pointing out that ChatGPT is not a search engine but a predictive text generator, which means it can’t reliably be used to do research. So I don’t think it’s a great, or even a good, tool for a preliminary search: you have to go back through everything it’s come up with and check whether it even exists, and even then you can’t assume a source says what ChatGPT claims it says.

    1. Mill Miker*

      Yeah. I like to think of it this way: ChatGPT always, always writes fiction. Like real fiction, sometimes it’s a complete fabrication, sometimes it doesn’t differ from reality in any meaningful way (at least in the part you see), and sometimes it changes just one or two key details of reality to make the rest work. Regardless, it’s always fiction and you need to treat it as such.

    2. EngineeringFun*

      OP: Well, it was a list of 10 things. 4 we had already considered, 2 were not feasible, and 2 were nonsense. It’s okay as a preliminary search.
      However, he did something like this again (hiding the source this time), and the data was dead wrong.

      1. Tangerina Warbleworth*

        ….. and there’s problem number one thousand: he hid the source. In other words, he acted, at least that one time, dishonestly.

        OP, it sounds like you have several examples of behavior that betray really, really bad judgment. You already know this. I get that his manager is all “But he’s really bright!”, at which point I’d ask the manager why he thinks that. I think this kid is manipulating everybody, not just you.

        Sigh. If he put as much energy into the actual job as he puts into manipulation, he would probably be great.

      2. Retired Vulcan Raises 1 Grey Eyebrow*

        Hiding the source is a serious mistake, especially with a tool that can create fiction and plagiarise. He could cause you serious problems.

        Do explain to him why he must never do this again; also warn his manager, in case he tries it when you’re not around.
        I recommend you inform her that he needs serious training before he lands the team in trouble – list his mistakes, as she may not realise the extent of the problem.

    3. kiki*

      Yeah, I feel like “research” is the wrong word. Maybe preliminary brainstorming? Because it’s totally fine to just type a question into ChatGPT as part of your brainstorming process to see what comes up. But bringing those results to your boss without any fine-tuning is like bringing all the ingredients for a dish to a potluck, uncooked and unassembled.

      1. Feckless Rando*

        Lol more like bringing a cookbook and saying “idk one of these would probably be good. Pick one out, coworkers”

  24. She of Many Hats*

    As Alison said, there’s the Dan issue that needs to be dealt with. But it sounds like the LW was also asking how to handle other team members using AI in their daily tasks and projects.
    LW may want to have a meeting where the team brings their AI tools to share with everyone, and the group discusses the pros/cons of each tool and hammers out the expectations and ethics around using them: identifying and documenting when and how they are used on projects and products, the situations in which they should or should not be used, etc. Develop a guideline so everyone knows what to expect and issues can be tracked.

  25. She of Many Hats*

    “As we integrate ChatGPT more into our web searches, I can see this happening more and more. I was wondering how to approach this in the future. When people put their hands up in meetings, do I have to ask for their sources first?”

    Dan and his issues aside, LW needs to work with the team to review the AI tools they use and to put expectations and usage guidelines into place: identifying and documenting when AI is used in projects or tasks, deciding when AI should and should not be used, determining which AI tools meet the corporate security requirements, and reviewing existing IT security and corporate policy on AI tools. Knowing the expectations, policies, and limits of new tech tools will help you and your team have trust in each other’s work.

  26. Single Parent Barbie*

    When I was a college instructor, my students were required to read certain books. I created the writing assignment attached to the book specifically to avoid plagiarism. Instead of a book report, I wanted them to take the concepts from the book and apply them to personal experiences. I created a very clear rubric. I still had students plagiarize. They basically failed twice – once for not following the directions, and once for plagiarizing.

    I think in education, and at work, expectations can be set that enable people to use it as a tool but not as a crutch. It boils down to clear expectations.

    Bottom line is Dan did not meaningfully contribute to the team. He did not participate in the discussion, and he sent a list of ideas after the fact that were not fleshed out, with no clear thought given to how they applied to the discussion at hand.

    1. EngineeringFun*

      OP here: Yes, this is what I’m most upset about! Wasting my time in the meeting and then trying to make it appear like I wasn’t hearing his voice. As a Gen X female, I need to make sure everyone feels heard.

      1. LisaD*

        I made my comment below before I saw you’d replied here and confirmed you are a woman and older than Dan. Not surprising. I smell “no respect for women who outrank him” all over your description of Dan’s behavior. I hope there is a man who behaves equitably towards his female colleagues who can take on the burden of straightening Dan out and letting him know this isn’t accepted at your company.

    2. H.Regalis*

      One of my partners is a college instructor in a STEM field and they get so much plagiarism and cheating that they’ve switched back to 100% in-person exams. Prior to that, a bunch of people got caught because they all copied the same nonsensical answer, and either didn’t check it over at all or didn’t learn enough about the subject to catch that it was complete gibberish.

  27. Captain dddd-cccc-ddWdd*

    The biggest issue isn’t really the use of AI in itself, but rather that Dan lied in the meeting and said he already had a bunch of input on this subject. And when invited to share it, he demurred – because he hadn’t created the list at that point. Then, after the meeting, he hurriedly asked ChatGPT for this list and submitted it to OP as “proof.” I would be asking him 1. why did you lie about this, and 2. it is pretty insulting that you tried to pull the wool over my eyes like this (OK, the second one isn’t a question).

    1. Dulcinea47*

      It sounds like the lying is par for the course, but the misunderstanding of how it is and isn’t reasonable to use AI is a real tech issue. It’s weird to me that so many people here are saying it’s not. Maybe that’s just because of Dan’s prior history of trying not to work hard. I’m taking it more as a symptom of not understanding ChatGPT or AI tools well.

  28. George C*

    I hope the ideas didn’t rely on a prompt that is core to your business in any way! Because those ideas are now OpenAI’s ideas – and potentially your competitors’ ideas, if Dan’s question ends up in the training set.

    Telling ChatGPT things is like talking to a consultant who has an active current relationship with all your competitors.

  29. LisaD*

    Dan needs someone to firmly and immediately explain to him the difference between doing work and creating work for others, as well as which of the two his position is tasked with doing.

    What he claimed to have done was work: stating that he had great ideas that could solve the problem implied that he had come up with ideas (using any tool) and then considered how they might apply to the problem, using knowledge and experience and doing additional learning as needed in order to determine which ideas would be best to bring to the group.

    What he actually did was create work for someone else, by handing OP a list of things they could do the work of considering and evaluating for how they might apply to the problem.

    Someone, preferably his boss, needs to tell him in no uncertain terms that his position in the company is a work-doing job, not a work-creating job, and if he does not start doing work, it will obviously be impossible to keep him on staff.

    Dan’s boss should also be keeping an eye out to see whether or not Dan’s attitude differs based on the perceived gender of his colleagues. Most people who behave like this do so disproportionately towards women who outrank them in the organization.

  30. Melissa*

    I am really interested in what he does that is “bizarrely out of touch”! That’s a strong description.

  31. nnn*

    The biggest problem I’m finding with AI output is the signal-to-noise ratio. It just introduces meaningless nonsense in places where a human never would (e.g., repair instructions that don’t actually repair the thing, fake bibliography entries, “Kenya sounds like it’s spelled with a K but is actually spelled with a K,” etc.).

    I agree that the root problem is Dan’s lack of critical thinking and engagement with the content, but I also think he should specifically be told that if he’s going to use ChatGPT, it’s extra important for him to have a thorough understanding of what he is putting forward, precisely because ChatGPT can insert hallucinations, nonsense, and meaningless content that appears credible at first glance.

    But what I find particularly weird about Dan’s behaviour in this instance is he said, “But I have all these other ideas that you haven’t considered. I really think these could solve the problem!” when he…didn’t actually have any ideas.

    That might be worth talking to him about too, something along the lines of: “It’s okay if you don’t have ideas at a given moment; no one is thinking any less of you for it. But it’s extremely time-consuming, inconvenient and, frankly, irritating for everyone to have to analyze and consider raw ChatGPT output as though it’s a fully formed human idea.” Make it clear that this is worsening his reputation and people’s view of him compared with if he’d just sat quietly and said nothing.

    1. kiki*

      But what I find particularly weird about Dan’s behaviour in this instance is he said, “But I have all these other ideas that you haven’t considered. I really think these could solve the problem!” when he…didn’t actually have any ideas.

      It sounds to me like Dan is trying to make himself look better but doesn’t realize that what he did actually made him look worse. It would be a kindness for LW, as his mentor, to make sure he knows this sort of thing is coming across differently than intended. Granted, it sounds like Dan has other issues too, and this may be just one example from a pile of issues.

    2. daffodil*

      If Dan thinks an AI can do his job for him it’s really not a great sign of Dan’s value as an employee. Maybe that’s not the message he wants to be sending, but it does seem like something OP should take seriously.

  33. Jack Straw from Wichita*

    Whether it’s a screenshot from ChatGPT or a screenshot from a Harvard Business Review article makes no difference. He’s missing the second step of personal understanding, critical thinking, and application to the organization’s problem with either source.

    1. nnn*

      This comment also makes me realize that a screenshot is a less convenient format! (I realize that’s miles away from the main issue, but it’s also present.)

      If he had a list of good ideas presented in a textual format, people could copy-paste them, elaborate on them with details below, read them with screenreaders, adjust the font, put them in a file for later searchability, etc.

      With a screenshot, if people want to actually use the ideas, at a minimum they have to retype them.

  34. Echo*

    Ooh, I would make him answer the tough questions.

    “Dan, help me understand. Did you use ChatGPT to make this list?”
    “Can you explain why you presented this list as ideas you came up with yourself?”
    “Can you explain why you weren’t able to define the terms on this list that was supposedly your own?”
    “What exactly were you expecting me to do with this list?”
    “You can see the issue with this, right?”

  35. ChatGPT*

    In poetic verse, let me convey the tale,
    Why ChatGPT’s output, like a fleeting gale,
    Can’t serve as work, a paid endeavor grand,
    For legal binds and ethics take their stand.

    Legally, the rights, they do belong,
    To OpenAI’s realm, a chorus strong.
    No transfer, sale, or claim can be made,
    On outputs here, in pixels they cascade.

    Ethics weave a tapestry profound,
    For credit due, where wisdom’s found.
    To claim as mine, what algorithms spun,
    A shadow cast on integrity, undone.

    Thus, legal constraints and ethical creed,
    In iambic verse, this truth does plead.
    For paid work’s sanctity, let’s not betray,
    The heart of creation, in the rightful way.
