No matter how you feel about it, AI is growing and becoming part of our lives, including in fitness and health. ChatGPT and other AI chatbots are regularly discussed in fitness circles, but the problem is that their limitations aren’t.
While they can definitely be a powerful tool, they’re not infallible. In fact, they can even lie to you. This is why it’s crucial to understand what ChatGPT can do, what it can’t, and how to get the most out of this new technology.
If you are interested in using AI to help guide your fitness journey, here are some things you need to know!
Key Points You Need To Know!
ChatGPT and other AI chatbots don’t think through their answers; they predict normal speech patterns. Essentially, they answer by highly sophisticated “guessing”
ChatGPT is designed to give you answers and sound confident.
If an AI chatbot doesn’t have the information, it will make something up rather than say “I don’t know”
ChatGPT doesn’t think or assess; it recalls the most probable answer based on mainstream acceptance.
7 Problems With Using ChatGPT As A Personal Trainer
1. ChatGPT is misleadingly presented as “AI”.
ChatGPT is constantly presented as being AI, or artificial intelligence. Naturally, this conjures up images of futuristic robots running the world and leads people to think it’s infallible.
It’s not, or at least not in the way most people imagine. Technically, AI is any technology that can perform tasks normally requiring human intelligence, and ChatGPT can do that.
ChatGPT is what’s known as a large language model, or LLM: it’s trained on massive amounts of text to learn to understand and predict the way humans write. When you give it a prompt or question, it predicts a response based on patterns it learned during training (Chelli et al., 2024).
So an LLM isn’t analyzing data to create a unique response. Rather, it’s:
Reconstructing responses based on learned language patterns
Weighing probabilities of different word sequences
Generating outputs that statistically “fit” the input
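If you’re curious, the idea behind those three points can be sketched in a few lines of Python. This is a deliberately tiny toy of our own, not how ChatGPT is actually built, but it shows the core mechanic: prediction from learned patterns, not understanding.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny
# training corpus, then predict the statistically most likely next word.
# Real LLMs are vastly more sophisticated, but the principle is the same.

corpus = (
    "squats build leg strength . "
    "deadlifts build back strength . "
    "squats build total body strength ."
).split()

# Count how often each word follows each other word
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    followers = bigrams.get(word)
    if not followers:
        return None  # no training data; a real chatbot may "hallucinate" here
    return followers.most_common(1)[0][0]

print(predict_next("squats"))    # "build": the only word ever seen after it
print(predict_next("nonsense"))  # None: this pattern was never learned
```

Notice that the model never “decides” anything; it only replays the statistics of its training text, which is exactly why scale makes LLMs sound fluent without making them thoughtful.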
2. ChatGPT does not “think” or have original thoughts
Because ChatGPT operates the way it does, it’s not thinking; it’s just good at making it look like it is. At the same time, it lacks originality and is not intelligent.
It possesses:
No understanding
No awareness
No intent or reasoning in a human sense
No grounding in reality (only patterns in text)
So why is this important to understand? Because ChatGPT isn’t forming an opinion based on fact but rather repeating information it deems authoritative.
In other words, it’s just recalling information in a highly sophisticated manner. In fact, various chatbots cite SET FOR SET on numerous topics.
3. AI Chatbots will create “hallucinations” to fill in gaps.
This is important as it illustrates the true nature of AI chatbots. Because AI is designed to provide answers by recalling information, it will never say “I don’t know.”
When it comes across a section of an answer it doesn’t have data for, it will make something up to fill in the gaps. These are known as “hallucinations” (Emsley, 2023).
This might include:
Providing false references
Citing studies that don’t exist
Giving links that lead to dead sites or to studies that don’t exist
This is a real issue with consequences. Law firms using AI are probably the most obvious examples, with cases like:
A lawyer fined $110k for filing documents containing AI-generated hallucinations, including citations to nonexistent cases
Lawyers having their licenses stripped
And many more
And this isn’t just us hating on AI; ChatGPT will admit it when you question it.
What this means is that unless you already know the content, you can’t be sure an answer is true without fact-checking the chatbot.
4. There is a “cut-off” date for knowledge.
Remember, ChatGPT operates by predicting speech patterns it learned from data. This means it doesn’t always provide answers based on up-to-date information, but rather on whatever was available the last time it was trained.
Different versions and apps have different cut-off dates, but most fall anywhere from 2021 to 2025. While chatbots can usually still browse the web for the latest information, they often have to be instructed to do so. This basically means they don’t have direct access to new information by default.
For example, we asked ChatGPT if we were going to see Ozzy Osbourne (RIP) next month, and it replied:
We then told it to check for updates, at which point it acknowledged, “I’m glad you asked me to check–this changes everything.”
This means ChatGPT may give you outdated information, even after the research changes. This matters in the fitness world because recommendations change all the time, such as proper rep ranges.
5. Sometimes it just makes blatant errors
Sometimes, ChatGPT will just make errors and even double down on them, and we can’t explain why.
Perhaps the most obvious example is when we were writing a program for the Army Fitness Test (AFT). This is a fitness test to determine physical readiness.
There is some history to it, but here’s a rundown.
Army Physical Fitness Test (APFT): 1980-2020, 3 events
Army Combat Fitness Test (ACFT): 2020-2025, 6 events
Army Fitness Test (AFT): 2025-Present, 5 events
Combat Fitness Test (CFT): scheduled for April 2026 onward, 7 events, for combat soldiers
Currently, the Army has the AFT consisting of 5 events, plus a new CFT for combat soldiers containing 7 events. However, ChatGPT denies that these exist, even after being corrected.
Perhaps it’s a knowledge cut-off issue, but ChatGPT will usually provide correct information once corrected; here, it does not.
An example of this in real life was when a man asked ChatGPT for diet advice, only to be told to eat bromide as a substitute for salt. He ended up in the hospital (Eichenberger et al., 2025).
6. ChatGPT tends to provide overly positive answers.
ChatGPT has a reputation for wanting to please the user by hyping them up and going with the flow. It’s like a hype man.
This tends to vary by topic and person, but it can throw a new user off. ChatGPT may tell you something looks or sounds great when it doesn’t.
For example, you might give ChatGPT a program to critique. While it may have plenty to correct, it may change only a few things and tell you, “You nailed it”.
The point is that in some scenarios, ChatGPT will encourage you rather than critique you.
Or, reach out to us and we’ll be sure to hook you up with a fitness program that works for you!
This is actually why we started providing single consultations!
It lets you speak to us for an hour, allowing us to answer questions you have or tweak an existing program.
7. You’ll Only Ever Hear One Methodology
Last, as we have mentioned several times, ChatGPT largely repeats mainstream opinions on a subject. While this is generally the safest option, it also means you’ll rarely hear about alternative methods.
That might not sound like a big deal, but this can include never hearing about:
Low-volume training, such as HIT
Use of alternative training such as tire flips, farmer carries, or sled pushes
Now, if you ask ChatGPT about these directly, it will provide information. However, it will rarely offer them unprompted, as it sticks to mainstream thought.
How To Best Use ChatGPT For Fitness Training?
Contrary to how it’s sometimes presented, CGPT shouldn’t be seen as a reliable source of training advice for everyone. It requires a basic level of experience and knowledge to spot false or misleading information. At the same time, you should never blindly trust its advice, especially when it could affect your safety or well-being.
For example, pretend you were training for the Army Fitness Test and asked ChatGPT to develop a program for you. It could train you for the wrong events, leaving you pretty surprised on test day!
So the most important takeaway is don’t treat ChatGPT as an infallible piece of AI. Treat it as a useful tool to gather your thoughts and help with design, and ALWAYS double-check every bit of important information!
Follow These Steps!
Provide clear information
Ask it to explain its reasoning
Check the facts
Tell it to check for the latest information
Compare its advice with trusted sources or a qualified coach (Like us!)
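If you like to keep things consistent, the steps above can even be rolled into one reusable prompt. The function and wording below are our own illustration, not an official ChatGPT feature; tweak them to fit your situation.

```python
# A minimal sketch of turning the steps above into a single prompt:
# clear information, a request for reasoning, and an instruction to
# check the latest information before answering.

def build_fitness_prompt(goal, experience, constraints):
    """Assemble a fitness prompt following the checklist above."""
    return (
        f"I am a {experience} lifter. My goal: {goal}. "
        f"Constraints: {constraints}. "
        "Explain the reasoning behind each recommendation, "
        "check for the latest official standards before answering, "
        "and flag anything you are not certain about."
    )

# Hypothetical example: the goal, experience level, and constraints
# here are placeholders for your own details.
prompt = build_fitness_prompt(
    goal="pass the Army Fitness Test (AFT)",
    experience="intermediate",
    constraints="3 training days per week, no injuries",
)
print(prompt)
```

Even then, remember the final step has no shortcut: compare whatever comes back against trusted sources or a qualified coach.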
FAQ: Can I Use ChatGPT As A Personal Trainer?
1. Can ChatGPT replace a personal trainer?
No. ChatGPT can explain exercises, help organize your ideas, build sample workouts, and compare training methods. However, it cannot monitor your form, assess your movement quality, gauge your fatigue in real time, or know your full training history like a qualified coach can. Use it as a tool, not as a replacement for professional coaching.
2. Is ChatGPT always accurate with fitness advice?
No. ChatGPT can provide helpful information, but it can also make mistakes, use outdated facts, or give answers that sound correct but are not. This is especially risky for injury rehab, medical conditions, military fitness tests, exercise technique, and programming details that may change over time.
3. Why does ChatGPT sometimes give wrong answers?
ChatGPT is a large language model. It creates answers by predicting likely word patterns from its training data. It does not truly “know” things as a person does. Because of this, it can misinterpret context, use outdated information, or fill in gaps with made-up details, a phenomenon called hallucination.
4. How should I use ChatGPT for workout programming?
You can use ChatGPT to brainstorm, organize, and tweak your training plan. It can help you compare exercises, plan weekly workouts, write progressions, and explain training ideas. Still, always double-check anything related to safety, official test standards, injury risk, or current guidelines.
5. What is the safest way to use ChatGPT as a fitness tool?
The safest way is to treat ChatGPT like an assistant, not an expert. Provide clear information, ask it to explain its reasoning, check the facts, and compare its advice with trusted sources or a qualified coach. Never follow a program blindly if it feels unsafe, ignores pain, goes against official requirements, or seems too generic for your goal.
References
Ahmad, Z., Kaiser, W., & Rahim, S. (2023). Hallucinations in ChatGPT: An unreliable tool for learning. Rupkatha Journal on Interdisciplinary Studies in Humanities, 15(4), 12. https://www.researchgate.net/publication/376844047
Chelli, M., Descamps, J., Lavoué, V., Trojani, C., Azar, M., Deckert, M., Raynier, J. L., Clowez, G., Boileau, P., & Ruetsch-Chelli, C. (2024). Hallucination rates and reference accuracy of ChatGPT and Bard for systematic reviews: Comparative analysis. Journal of Medical Internet Research, 26, e53164. https://www.jmir.org/2024/1/e53164
D’hoe, B., Kirk, D., Boone, J., & Colosio, A. (2026). ChatGPT outperforms personal trainers in answering common exercise training questions. Journal of Sports Science and Medicine, 25(1), 235–261. https://doi.org/10.52082/jssm.2026.235
Eichenberger, A., Thielke, S., & Van Buskirk, A. (2025). A case of bromism influenced by use of artificial intelligence. Annals of Internal Medicine Clinical Cases, 4, e241260. https://doi.org/10.7326/aimcc.2024.1260
Emsley, R. (2023). ChatGPT: These are not hallucinations—they’re fabrications and falsifications. Schizophrenia, 9, 52. https://doi.org/10.1038/s41537-023-00379-4