Potential roadblocks for AI coaching for powerlifting
In light of the advent of so-called artificially intelligent powerlifting training plans, I began thinking about some of the most relevant problems such systems need to tackle in order to better serve athletes. At its best, a true AI system for powerlifting would change lifting prescriptions in the optimal way, at the right time, potentially including nutritional, load, exercise, frequency, and training-volume adjustments, all to serve the athlete best.
First, I think it's important to state what AI is versus what algorithms are. I'm no expert, but I've taken courses on AI and ethics and on AI in complex social science systems, and my casual reading has taken me past topics of consciousness, AI, robotics, neural networks, genetic algorithms, and complexity more generally. An algorithm is a set of instructions about what to do with a set of inputs. Microsoft Excel does this all the time, and algorithms can range from very simple to very complex. The more inputs there are, or the more conditions on how to modify the data, the more complex the outcomes can look. "If the box is less than 10 inches in any direction, send it down the left conveyor belt" is a potential algorithm for sorting packages on an assembly line. "If the box is less than 10 inches in any direction and weighs less than 1lb, apply a thank-you note to the outside and send it down conveyor belt C" is another, more complex algorithm. Note that no AI is present: nothing is learned, the system merely checks input values against fixed rules.
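To make the distinction concrete, here's that package-sorting rule set as a minimal Python sketch (the thresholds and belt names are just the ones from the example above):

```python
def sort_package(width_in, height_in, depth_in, weight_lb):
    """Fixed rules, no learning: every branch was spelled out by a programmer."""
    small = max(width_in, height_in, depth_in) < 10
    if small and weight_lb < 1:
        return "conveyor belt C (attach thank-you note)"
    if small:
        return "left conveyor belt"
    return "default belt"

print(sort_package(8, 6, 4, 0.5))  # -> conveyor belt C (attach thank-you note)
```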
"AI" as a broad term encapsulates machine learning to mimic or improve on human functioning. AIs can do many things, from predicting the text you're likely to write in a new Gmail function, to suggesting your next purchases in Amazon, to learning to drive a car by assessing data from sensors, attempting to reduce commute times by optimizing infrastructure, and on and on. Unlike algorithms, machine learning (which I'll use interchangeably with AI going forward) take alot of data, are given a task, and given time to run solution patterns.
AI solutions to powerlifting coaching (or athlete coaching in general) face some problems they share with human coaches and some that are unique to AI, and I'd like to get into both now.
Problems AI shares with human-based coaching:
1. On what is optimal
If a goal of AI is to find the best training approach for an individual athlete as fast as possible, we need to know what "best" looks like. In other words, we're searching for optimal. Unfortunately, you don't know what optimal is, because you can't compare what is going on right now to all possible training styles. You can only compare to what the athlete has previously done, because other frames of reference are far less relevant given how much people vary, and because what happens at the individual level matters more than what happens to populations generally. This is an epistemic problem about the kinds of things we're even able to know. If we don't know what optimal looks like, we can't accurately tell whether we're heading toward it. Of course, progress is progress. But I think athletes would prefer a 30lb PR over a 10lb PR.
Even if you could, the same training prescription given at different points in an athlete's development will have starkly different outcomes. At some points I've given an athlete almost identical training approaches that worked well for a while and then stopped working, showing that the sets/reps/loads/exercises an athlete is doing, even if optimal at one point, are not optimal at another. "Optimal" isn't a permanent quality of a training plan.
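Here's a toy simulation of both issues, with entirely made-up numbers: we only ever observe the outcome of the program we actually ran, and the payoff of each program drifts as the athlete develops.

```python
import random

def true_gain(program, month):
    """Hypothetical ground truth no coach or AI ever sees in full:
    each program's payoff drifts over the athlete's development."""
    base = {"high volume": 4.0, "high intensity": 2.0}[program]
    drift = -0.1 * month if program == "high volume" else 0.1 * month
    return base + drift + random.gauss(0, 0.5)

# We only observe the branch we actually ran; there is no counterfactual log.
history = [("high volume", m, true_gain("high volume", m)) for m in range(24)]
# By month 20, "high intensity" would have been the better choice, but
# nothing in `history` can tell us that.
```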
2. Self-reporting errors causing shifts in programming
For example, athletes can report @7 RPEs when they are actually hitting @9 RPEs. Coaches can assess video to correct for the error (and use it as a point to educate the athlete), but AI is at present incapable of analyzing much more than how fast the barbell is moving (short of 3D motion-tracking software). The AI (or the coach) can only make changes based on the data available (plus, for coaches, intuition). If the athlete inaccurately reports how training is affecting them, the AI or coach will come to inaccurate conclusions.
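A velocity-based cross-check is one plausible partial fix, sketched below. The velocity-to-RPE mapping here is invented for illustration; real velocity-RPE relationships vary by lift and by athlete.

```python
def rpe_from_velocity(mean_velocity_ms):
    """Very rough, made-up mapping from mean concentric bar speed to RPE."""
    if mean_velocity_ms > 0.50:
        return 7.0
    if mean_velocity_ms > 0.35:
        return 8.0
    return 9.0

def flag_misreport(reported_rpe, mean_velocity_ms, tolerance=1.0):
    """Flag sets where the reported RPE disagrees with what the bar speed implies."""
    implied = rpe_from_velocity(mean_velocity_ms)
    return abs(reported_rpe - implied) > tolerance

print(flag_misreport(reported_rpe=7, mean_velocity_ms=0.22))  # True: that looked like a 9
```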
Problems unique to AI:
Admittedly, these are some of the same problems that plague AI research generally in both computation and ethics, and are being worked on by some of the brightest minds in the world.
1. The Black Box problem
A trained AI comes up with its own way of manipulating data to arrive at outcomes. In training, that might mean seeing a high fatigue level, a decrease in performance, and training notes that include the words "bad" or "poor", and responding by dropping future training loads by 5%. The problem is that even the programmers who created the AI can't see how solutions were reached from the inputs, because in actual AI, solutions are formed by crunching large amounts of data toward a specific end task. A better example: after looking at 50,000 athlete training plans spanning two years of training each, the system finds a correlation between the word "bad" and high RPEs, and learns to adjust when that combination appears. In a classic example involving neural networks, a programmer might look at how specific book recommendations arrive on Amazon.com. You get recommended a paperback copy of Gone With the Wind. Why this specific book? A programmer can only speculate; the reality is that a complex system of weighted nodes and connections, shaped by hundreds of reviews, clicked links, page hovers, and purchases, influences the outcome.
What's the big deal? You can't see inside the box: you can't tell why an athlete's training plan was modified the way it was. And for humans, that's a big deal. We want to be able to reproduce outcomes and know why an athlete is doing a specific training plan. Indeed, if we don't know how solutions are reached, we don't know whether a system needs to be debugged or modified or whether it is working correctly. This is tied to the inability to reverse engineer solutions.
Plus, solution values are different each time you train an AI. You might end up with a different solution grid that spits out different outcomes than the first time you constructed it. This is the problem of reproducibility: given the same initial data, we can't be sure the same outcome will be reached. As examples, here are two problems (a toy demonstration follows them):
1) An AI tells you that today you're doing 5x20 on squats. Is this the correct training for you? How do you know? Or do you have to trust the system?
2) You fire up your AI training twice. With the exact same initial data, the first time it tells you that you're doing paused squats on Day 2 and the second time it says you're doing pin squats on Day 3 and not on Day 2. Which is correct? How do you know? Why are they different?
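Here's a toy demonstration of the second problem, using scikit-learn and fabricated data: two identically configured networks, trained on identical data, differ only in their random initialization, yet give different answers for the same athlete. Inspecting either network's weight matrices (`model.coefs_`) explains neither.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 4))   # fabricated athlete features
y = rng.random(200) * 10   # fabricated target: "optimal weekly squat sets"

for seed in (1, 2):
    # Same data, same architecture; only the random initialization differs.
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=seed).fit(X, y)
    print(f"seed {seed}: {model.predict(X[:1])[0]:.2f} sets")  # same athlete, different answer
```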
2. AI systems inherit the biases of their programmers or the world
In both algorithmic and AI approaches, the system can only produce solutions that reflect the world it was shown. If the world or the programmers are biased, so too are the solutions. As an example, a police department tried to develop an AI to decide where officers should patrol within their jurisdiction, aiming to maximize the number of police in areas where crimes were likely to happen. An unfortunate byproduct was that cops were sent to patrol predominantly lower-class, more ethnically diverse areas, perpetuating a cycle of mistrust and profiling. Was the AI wrong? No; it sent cops to areas where more crimes had happened in the past, but it perpetuated exactly the problem the police department was trying to avoid.
In powerlifting, it might be the case that AIs select only from exercises the programmer is aware of, only training constraints the programmer allows for, only frequencies that seem to make sense based on what we as humans know. This can lead to blind spots and limitations on what athletes actually experience. More generally, it's likely that there is no "golden ticket to gains" in powerlifting, and that some exercises can be swapped for each other with near-identical outcomes for the athlete. What's the difference between 12 reps and 13 reps over three years of training, or between pause squats and pin squats if loads are matched? Given the ambiguity of single-path solutions and the idea that many roads lead to Rome in strength training, how does the AI select one system of training over another?
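Sketched as code, the blind spot is easy to see: the system can only pick from whatever list the programmer wrote down, and when candidates predict nearly identical outcomes, the "choice" is really a tie-break. (All names and numbers below are invented.)

```python
# The search space is whatever the programmer enumerated.
EXERCISES = ["back squat", "pause squat", "pin squat"]  # safety-bar squat? never considered

predicted_gain_kg = {"back squat": 4.8, "pause squat": 5.0, "pin squat": 5.0}

# With identical top predictions, max() settles the tie by list order;
# an arbitrary decision dressed up as a recommendation.
best = max(EXERCISES, key=predicted_gain_kg.get)
print(best)  # -> pause squat, only because it was listed before pin squat
```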
3. Good AIs require massive amounts of high-quality data to build predictive power
The amount of data needed to accurately assess exactly how things like age, body weight, bone structure, training age, and distance to competition affect the training plan is absolutely massive. I haven't seen anything like it accessible in any population of lifters I know of. It's hard enough to get 20 people together for a scientific study, let alone to assemble a well-formatted dataset of 20,000 athletes and their training plans to feed into an AI. The less data we have, the less confident we can be that the model accurately represents reality.
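A back-of-envelope illustration of why quantity matters (fabricated numbers): the uncertainty in an estimated average effect shrinks roughly with the square root of the sample size, so 20 lifters versus 20,000 is the difference between noise and a usable estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect_kg = 2.0  # pretend true average benefit of some programming tweak

for n in (20, 20_000):
    responses = rng.normal(true_effect_kg, 5.0, size=n)  # noisy individual responses
    sem = responses.std(ddof=1) / n ** 0.5               # standard error of the mean
    print(f"n={n}: estimated effect {responses.mean():.2f} kg +/- {sem:.2f}")
```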
4. Trained AIs are highly task-specific and unable to apply solutions to new problems
Programmers take massive amounts of data and feed it into an AI model to solve a specific task. Maybe that's navigating a dot through a maze, or rotating a block to a specific orientation with a robotic arm. The problem is that these solutions aren't generalizable. The AI can't navigate a square through a new maze, or rotate a sphere to a new orientation, without re-learning from the beginning, even though the problems are similar.
If I change my competition date, if I injure myself, if I take a week off, how does this affect the AI?
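In machine-learning terms, this is a distribution-shift problem: the model has only seen situations like the ones in its training data, and anything outside that range gets a confident but meaningless answer. A fabricated sketch:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented pattern learned only from 8-16 week meet preps
weeks_out = np.arange(8, 17).reshape(-1, 1)
top_set_fraction = 0.97 - 0.01 * weeks_out.ravel()  # made-up relationship
model = LinearRegression().fit(weeks_out, top_set_fraction)

# The athlete postpones the meet: 40 weeks out is unlike anything the model saw.
print(model.predict([[40]]))  # extrapolates confidently to 57% of max, which is absurd
```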
5. The problem of fatigue
Fatigue is sometimes a good thing, even necessary for athletes to progress. It's sometimes a bad thing: too much for an athlete to effectively recover from, or too much to execute training lifts with good technique. How does a system separate beneficial fatigue from detrimental fatigue, and how sensitive is that dial? More generally, what is actual signal versus noise?
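One standard crude tool is smoothing, for example an exponential moving average over estimated 1RMs. The sketch below (fabricated numbers) shows the catch: a smoother can damp the noise, but it can't tell you whether a dip is productive fatigue or a real downturn; that judgment has to come from somewhere else.

```python
def ema(values, alpha=0.2):
    """Exponential moving average: one crude way to pull a trend out of noise."""
    smoothed, s = [], values[0]
    for v in values:
        s = alpha * v + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

# Fabricated daily e1RM estimates (kg). Is the mid-week dip beneficial fatigue
# masking fitness, or a detrimental downturn? The smoother only delays the question.
e1rm = [180, 182, 178, 175, 174, 176, 181, 183]
print([round(x, 1) for x in ema(e1rm)])
```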
6. Analyzing technique to prescribe specific movements
An AI system can't (currently, at least) look at an athlete's technique to suggest training interventions, which means at least one of the six major training manipulations gets made in relative blindness. (The six major training manipulations are sets, reps, load, exercise, frequency, and training days per microcycle.)
7. Dials on how fast to push progress
How does a system weight progress? As coaches, we could agree with an athlete to be aggressive with training volume over a shorter period of time for large, short-term progress, or to go slower with less training volume for more consistent, longer-term progress. How does an AI system weight the difference between these, or is the decision left to the athlete? Essentially, how are sustainability, or even enjoyment, factored into a computational model?
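However it's implemented, some weighting has to be chosen. A hypothetical scoring function makes that explicit: every coefficient below is a value judgment someone (the programmer? the athlete?) has to make, not a fact the AI can discover.

```python
def program_score(expected_gain_kg, injury_risk, burnout_risk, aggressiveness=0.5):
    """One invented way to expose the 'dial': a weighted trade-off.
    aggressiveness=1.0 chases gains; 0.0 protects sustainability.
    All weights here are value judgments, not discovered facts."""
    return (aggressiveness * expected_gain_kg
            - (1 - aggressiveness) * (10 * injury_risk + 5 * burnout_risk))

# The same candidate program scores very differently depending on the dial.
print(program_score(7.5, injury_risk=0.15, burnout_risk=0.30, aggressiveness=0.8))
print(program_score(7.5, injury_risk=0.15, burnout_risk=0.30, aggressiveness=0.2))
```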
Conclusion
While there are problems, I'm optimistic; we're still in the early stages of this new frontier of training athletes. Some of these sample problems extend to the limitations coaches face in what we can do with data and how we do it. Coaches contend with limited data and a lack of computational and predictive resources on how athletes work, but I hope we can head forward, together with AI.
We might one day, while programming for athletes, see a pop-up that says "adding 2 sets on squats has resulted in favorable outcomes for 80% of athletes like Susan". That kind of information would be insanely helpful. Or, "Be careful adding 5 sets for Jim. Only 3% of athletes like Jim have accommodated that much training volume without injury."
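That feature is, at heart, a similarity query; "athletes like Susan" is essentially a nearest-neighbor lookup. A minimal sketch with fabricated data:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Fabricated athlete features: [age, bodyweight (kg), training age (yrs), weekly squat sets]
athletes = np.array([[28, 75, 5, 14], [34, 82, 8, 12], [27, 74, 4, 16], [45, 90, 15, 10]])
responded_well = np.array([1, 0, 1, 0])  # did adding 2 squat sets help? (made up)

susan = np.array([[29, 76, 5, 15]])
nn = NearestNeighbors(n_neighbors=3).fit(athletes)
_, idx = nn.kneighbors(susan)
print(f"{responded_well[idx[0]].mean():.0%} of athletes like Susan responded well")
```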
All of these might just be early-stage problems anyhow, and human coaches and AI might not be so far apart. As someone once said at an AI convention, "you don't need to run an analysis on the synapses of an accountant to know if he is trustworthy, just look at the numbers."