Using NFL data to learn, fail, and try again - building skills through passion projects
Fall is in the air, which means the NFL season is upon us. As a die-hard Washington fan, it’s actually nice to feel some excitement again. For 30 years it was broken dreams and missed expectations, but Jayden Daniels feels like the real deal. His sophomore campaign will tell us if we’re contenders for the next 15 years or just grinding toward the next rebuild.
But this story isn’t about Jayden or Washington; they were just the spark! To get my kids more involved in the NFL, and hopefully convert them into lifelong fans of the hometown team, I figured it was time to introduce Fantasy Football. A few neighborhood emails later, I realized there was a lot of interest in a kid/parent fantasy league.
This was a good problem: more interest than a single league could hold. So, I brainstormed with a few dads to find the solution: a super fantasy league! 24 teams, inter-league play/prizes, cross-league play for bragging rights, t-shirts, and a super fantasy bowl trophy for the ultimate champion!
The concept was solid, but execution required some technical creativity.
I built a randomized cross-league schedule, but then I needed to pull real ESPN data into a data repository so we could create the synthetic matchups.
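For the curious, the scheduling logic is simple enough to sketch. My real version lives in R, but here is the idea in Python: shuffle one league once, then rotate it each week so no cross-league pairing repeats until every opponent has been played. The team names, league sizes, and 14-week season here are placeholders, not my actual setup:

```python
import random

def cross_league_schedule(league_a, league_b, weeks, seed=None):
    """Pair every team in league_a with a team in league_b each week.

    Rotating a randomly shuffled league_b by one slot per week
    (round-robin style) guarantees no pairing repeats until each
    team has faced every cross-league opponent once.
    """
    assert len(league_a) == len(league_b)
    rng = random.Random(seed)  # seeded so the schedule is reproducible
    order = league_b[:]
    rng.shuffle(order)
    n = len(order)
    schedule = []
    for week in range(weeks):
        rotated = order[week % n:] + order[:week % n]
        schedule.append(list(zip(league_a, rotated)))
    return schedule

# Two leagues of 12, 14-week regular season (illustrative numbers)
league_a = [f"A{i}" for i in range(1, 13)]
league_b = [f"B{i}" for i in range(1, 13)]
schedule = cross_league_schedule(league_a, league_b, weeks=14, seed=42)
```

Each entry of `schedule` is that week's list of (league A, league B) matchups, which is exactly what the downstream data needs to score.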
What it does: Pulls real-time fantasy data every 5 minutes during games, runs nightly batch jobs for comprehensive NFL stats, and stores everything in a queryable format.
How it’s built: R scripts deployed on Lambda with Docker containers, EventBridge for scheduling, S3 for storage, Glue for processing, and Athena for querying. The data flows from ESPN APIs and other NFL sources into a normalized data lake.
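To make the pipeline concrete, here is a rough Python sketch of the shape of the 5-minute ingest job. My actual scripts are in R, and the ESPN call and S3 writer are stubbed out as injected functions here, so every name below is illustrative rather than the real code:

```python
import datetime
import json

def build_s3_key(league_id, now):
    """Partitioned key layout so Glue/Athena can prune by date."""
    return (f"espn/scoreboard/league={league_id}/"
            f"dt={now:%Y-%m-%d}/{now:%H%M}.json")

def handler(event, context, fetch=None, put_object=None):
    """EventBridge-triggered ingest: pull the ESPN scoreboard and
    land the raw JSON in S3.

    `fetch` and `put_object` are injected (hypothetical) callables so
    the logic can be exercised without AWS credentials; in a real
    Lambda they would wrap the ESPN API client and boto3's S3 client.
    """
    league_id = event["league_id"]
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = fetch(league_id)
    key = build_s3_key(league_id, now)
    put_object(key, json.dumps(payload))
    return {"key": key, "teams": len(payload.get("teams", []))}
```

The date-partitioned key layout is the one design choice worth copying: it lets Athena scan only the partitions a query actually touches.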
With 24 teams across two leagues, people needed an easy way to track their matchups. Half the league didn’t even read my witty emails introducing the concept, and only a few realized before the season started that they’d be playing two games each week. So I needed to make this as easy as possible to capture their attention and have it work. This required a web app.
What it does: Shows your complete season schedule across both inter-league and cross-league play. Updates every 5 minutes with fantasy data from ESPN.
How it’s built: AWS Amplify hosts a React app that uses API Gateway to access the S3 data lake storing the real-time ESPN data, refreshed every 5 minutes.
Ten years ago I took my first online programming course in R. To practice, I started building fantasy rosters that maximized points per dollar in weekly fantasy leagues. I was not very good: the code was messy, I didn’t understand much of it, and I found it hard to write. However, my interest in the NFL and fantasy prompted enough curiosity to keep trying.
Today, R has become the backbone of my technical foundation (and yes, it’s better than Python for data). I went from finance, Excel, and product guy to tech, cloud, and software dev guy. That’s a big shift, but it was also the right move. I sense a similar opportunity now in GenAI (lol, I’m not the only one).
I have the data already stored. I have been playing around a ton with AI. Why not follow a similar process as 10 years ago and let my passion for the NFL build a new skill?
What it’s supposed to do: Generate game recaps from input files and answer complex statistical questions like “how many passes did Jayden Daniels throw 20+ yards downfield last year, not counting yards after the catch?”
Why I built it: Same reason I built those clunky R scripts 10 years ago - to learn something new that might be important for my career.
How it’s built: Bedrock AgentCore with two specialized AI personalities, MCP Gateway for routing, Lambda microservices, Cognito authentication, and S3 vectors for knowledge base integration. React frontend with streaming responses.
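Stripped of the AI layer, that statistical question above is really just a filter over play-by-play data. A hypothetical Python sketch (field names invented, not the real ESPN schema), where air yards measure how far the ball traveled downfield and so exclude yards after the catch by definition:

```python
def deep_pass_attempts(plays, passer, min_air_yards=20):
    """Count pass attempts thrown at least `min_air_yards` downfield.

    `plays` is assumed to be a list of play-by-play dicts with
    "passer", "play_type", and "air_yards" keys (illustrative schema).
    """
    return sum(
        1
        for p in plays
        if p["passer"] == passer
        and p["play_type"] == "pass"
        and p["air_yards"] >= min_air_yards
    )
```

The hard part isn’t this filter; it’s getting the model to translate a fuzzy question into it reliably instead of inventing an answer.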
The result is not pretty. The data needs significant cleaning before AI can use it effectively. The interface is painfully slow. The game recap specialist makes up more “facts” than my 4-year-old when explaining what planet we live on.
This feels like those early R scripts all over again. The code was messy, the results were inconsistent, but I was learning something valuable.
While it’s still early in the game, the AI tools are improving rapidly. The infrastructure patterns are becoming standardized. The gap between “interesting experiment” and “production-ready system” feels smaller than it did even 12 months ago.
The super league works because it solves a real problem with proven technology. The AI tool struggles because I’m still figuring out how to build reliable systems on top of unpredictable components. When a database query fails, you get an error message. When an AI model fails, you get confident-sounding nonsense.
But that’s exactly why it’s worth building. The stakes are low - if the AI generates a ridiculous game recap, people laugh. If it can’t answer a statistical question correctly, we check the data manually. It’s a safe space to experiment with tools that might reshape how we work with data.
Something tells me that, just like 10 years ago, this project will be the start of a new career journey. Sure, there’s always the risk that AI could replace all of us, but based on what I have seen so far working with these tools every day, they are immensely powerful but still need someone at the steering wheel.
For attribution, please cite this work as
Trevisan (2025, Nov. 4). ALT Analytics: Building a Neighborhood Super Fantasy League (and Learning AI Along the Way). Retrieved from https://www.altanalyticsllc.com/posts/2025-11-04-nfl-super-fantasy-ai/
BibTeX citation
@misc{trevisan2025building,
  author = {Trevisan, Tony},
  title = {ALT Analytics: Building a Neighborhood Super Fantasy League (and Learning AI Along the Way)},
  url = {https://www.altanalyticsllc.com/posts/2025-11-04-nfl-super-fantasy-ai/},
  year = {2025}
}