Written by Ken Mendoza · Fibromyalgia research advocate. Toni's partner.

I'm going to tell you about the dumbest smart thing I've ever built.

It started because I kept asking Toni the same question after her deck sessions: "How was it?" And she kept giving me the same answer: "Good." Or: "Weird." Or: "I don't know, but different." Which, for someone trying to understand what's actually happening out there, is like trying to debug a program that returns only "something."

I needed data. Toni needed me to stop asking her to quantify her pain at 4 AM when she's finally feeling better. We compromised: I'd build a system that tracked the things she couldn't remember to track, and she'd rate her pain before and after each session. Fifteen seconds of effort on her end. Everything else on mine.

Six months later, I have 147 session records, a weather station on the deck railing, a spreadsheet that would make a data scientist wince, and the beginning of something that might actually be useful.

Medical Disclaimer

This blog describes a personal tracking system for one person's chronic pain. It is not a clinical tool, a validated instrument, or a substitute for medical care. N-of-1 data is personal and not generalizable. It's useful for the individual patient and their care team, not as evidence for anyone else's treatment decisions. Always consult your healthcare provider.

Why Track at All?

The standard answer is "so you can see patterns." That's true but incomplete. The real reason to track pain (and I didn't understand this until we were six weeks in) is that tracking changes the relationship between the person and the pain.

A 2018 pilot trial of PainTracker, a self-management platform for chronic pain, found that patients who systematically tracked their pain showed significant improvements in pain self-efficacy (their confidence in their ability to manage their own condition) compared to usual care. They also showed improvements in activity engagement and pain interference. Not because tracking changed their pain. Because tracking changed their sense of agency.1

There's a term for this in research: Ecological Momentary Assessment (EMA). A 2014 study tested EMA specifically with fibromyalgia patients using smartphones versus paper diaries. The smartphone method produced more accurate and more complete data, even in patients with low familiarity with technology. But the finding that mattered most: patients who used the real-time tracking method reported that it helped them understand their pain better.2

Toni put it differently. She said: "When I started rating my pain before and after, I stopped being a person who hurts and started being a person who's collecting information about hurting. Same pain. Different relationship."

Toni's Reality Check

Ken makes this sound tidy. It wasn't. The first two weeks, I forgot to rate my pain before going outside about half the time. I'd get to the deck, settle in with Kona, and twenty minutes later Ken would text me "before score?" and I'd have to guess what I was feeling before I came out, which defeats the entire purpose.

We solved it with a Post-it note on the back door. It says "NUMBER?" with a Sharpie drawing of a thermometer. I see it every time I reach for the door handle. Annoying. Effective. That's our design philosophy.

What I Track (and What I Don't)

I went through three versions of the tracking system before landing on one that Toni would actually use. The first version was 14 fields. She abandoned it after one session. The second was 8 fields. She used it for a week and then started leaving blanks. The current version is 6 fields for Toni and everything else is automated or recorded by me.

DECK SESSION RECORD (v3.2)

// ── Toni's inputs (pre and post) ──
pain_before: 0-10 // NRS at back door
pain_after: 0-10 // NRS coming back in
mood_before: "low | flat | okay | good"
mood_after: "low | flat | okay | good"
sleep_prior: hours // previous night
notes: "free text" // optional, usually empty

// ── Automated / Ken-recorded ──
date: "YYYY-MM-DD"
time_start: "HH:MM" // from motion sensor on door
time_end: "HH:MM"
duration_min: integer
temp_f: float // Davis Vantage Vue station
humidity_pct: float
baro_pressure: float // inHg
wind_mph: float
wind_dir: "N|NE|E|SE|S|SW|W|NW"
cloud_cover: "clear | partial | overcast | fog"
moon_phase: "new | wax_cres | first_q | wax_gib | full | wan_gib | last_q | wan_cres"
moon_visible: "yes | no | partial"
tide_state: "incoming | slack_high | outgoing | slack_low"
kona_present: "yes | no" // always yes so far
kona_position: "pressed | near | roaming"
pain_delta: auto-calc // after - before; negative = improvement
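The auto-calculated field is the only logic in the schema. Here's a minimal Python sketch of a session record; the class name and example values are mine, and most of the automated fields are omitted for brevity:

```python
from dataclasses import dataclass, field

@dataclass
class DeckSession:
    """A trimmed session record; field names match the schema above."""
    date: str            # "YYYY-MM-DD"
    pain_before: int     # 0-10 NRS at the back door
    pain_after: int      # 0-10 NRS coming back in
    duration_min: int    # derived from the door-sensor timestamps
    temp_f: float        # from the weather station
    pain_delta: int = field(init=False)  # auto-calc; negative = improvement

    def __post_init__(self) -> None:
        # A session that helps drives pain_after below pain_before.
        self.pain_delta = self.pain_after - self.pain_before

session = DeckSession(date="2024-12-05", pain_before=6, pain_after=4,
                      duration_min=35, temp_f=41.2)
print(session.pain_delta)  # -2
```

Keeping the delta computed rather than hand-entered is what lets the spreadsheet analysis stay consistent across 147 rows.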

The weather station was the best $300 I've spent on this project. It updates every 2.5 seconds and logs to a console I can export. Before that, I was manually checking weather apps, which are accurate for the region but not for our specific microclimate. Alsea Bay has its own weather, especially at night. The fog, the wind patterns, the temperature near the water versus 50 yards inland: it all differs from what NOAA reports for Waldport.

What 147 Sessions Told Us

I want to be careful here. This is an N-of-1 dataset with no blinding, no randomization, and a subject who knows what outcome she's hoping for. Every bias in the book applies. I know this. I'm a researcher. I can hear the methodologists in my head, and they're right.

And yet.

147 sessions is more data points than most fibromyalgia intervention studies collect from a single participant. The environmental variables are objectively measured by an instrument, not self-reported. And the pain ratings, while subjective, are collected in the moment, not recalled days later. That's exactly what the EMA literature recommends.2,3

Here's what the numbers show:

| Variable | Finding | Effect |
| --- | --- | --- |
| Mean pain delta | −1.8 points (after − before) | Average drop across all sessions |
| Best conditions | 38-44°F, incoming tide, clear or fog, wind < 5 mph | Mean delta: −2.7 points |
| Worst conditions | Wind > 12 mph, regardless of other variables | Mean delta: −0.4 points (barely moved) |
| Duration threshold | Under 20 min: −0.9 avg; over 30 min: −2.3 avg | Something happens between 20 and 30 minutes |
| Barometric pressure | Falling pressure (pre-storm): baseline pain 0.8 points higher | Matches the fibromyalgia weather literature |
| Moon phase | No correlation with pain delta | The moon doesn't care about fibro |
| Kona pressed vs. near | Pressed: −2.1 avg; near but not touching: −1.4 avg | Contact matters: a 0.7-point difference |
| Prior sleep < 4 hrs | Baseline pain 1.2 points higher, but pain delta similar | Sleep-deprived nights start worse but respond equally |
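The buckets in that table come from nothing fancier than grouped averages over the spreadsheet. A sketch of the analysis in pandas, using a few invented rows in place of the real 147 sessions (the wind cutoffs mirror the table; the data values are made up):

```python
import pandas as pd

# Toy rows standing in for the real session spreadsheet (values invented).
df = pd.DataFrame({
    "pain_before":  [6, 5, 7, 6, 4],
    "pain_after":   [3, 4, 6, 3, 3],
    "temp_f":       [41.0, 39.5, 52.0, 43.0, 40.0],
    "wind_mph":     [3.0, 4.5, 14.0, 2.0, 6.0],
    "duration_min": [35, 40, 15, 45, 25],
})
df["pain_delta"] = df["pain_after"] - df["pain_before"]  # negative = improvement

# Bucket sessions by wind, using the same < 5 / > 12 mph cutoffs as the table.
df["conditions"] = pd.cut(df["wind_mph"], bins=[0, 5, 12, 100],
                          labels=["calm", "breezy", "windy"])

# Mean pain delta per condition bucket -- the core of the findings table.
print(df.groupby("conditions", observed=True)["pain_delta"].mean())
```

Every row in the findings table is a variation on this one groupby, swapping the bucketing column.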

The barometric pressure finding deserves its own note. A 2019 study of 48 fibromyalgia patients (tracked three times daily for 30 consecutive days, with weather data collected without the patients' knowledge) found that lower barometric pressure and higher humidity were significantly associated with increased pain. And stress moderated the effect: patients under higher stress showed stronger pain responses to pressure drops.4

Our data matches. On falling-pressure days, Toni's baseline pain is higher. But the deck sessions still produce roughly the same delta. The starting point moves up, but the intervention effect holds. The star bathing doesn't prevent the weather from affecting her; it works on top of whatever the weather is doing.
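For the record, "falling pressure" in our spreadsheet is just a short-window trend over the station's recent readings. A hedged sketch of how I flag it; the function name and the 0.02 inHg/hour threshold are my illustrative choices, not a meteorological standard:

```python
def pressure_tendency(readings_inhg, threshold=0.02):
    """Classify a barometric trend from hourly readings (inHg, oldest first).

    A sustained drop over the window is what gets a session tagged as a
    falling-pressure (pre-storm) night. The threshold is illustrative.
    """
    if len(readings_inhg) < 2:
        return "unknown"
    # Average change per hour across the window.
    change_per_hr = (readings_inhg[-1] - readings_inhg[0]) / (len(readings_inhg) - 1)
    if change_per_hr <= -threshold:
        return "falling"
    if change_per_hr >= threshold:
        return "rising"
    return "steady"

print(pressure_tendency([30.10, 30.05, 29.98]))  # falling
```

The baseline-pain comparison then splits sessions on this label rather than on raw pressure values.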

Ken's Research Notes

A 2021 Bayesian multilevel analysis of the "Cloudy with a Chance of Pain" smartphone study (2,658 participants tracking pain daily) confirmed what we're seeing: the association between weather and pain is real but heterogeneous. Some patients respond strongly to pressure drops. Some respond to humidity. Some respond to temperature. A few respond to the opposite of what most patients report.5

This is why N-of-1 tracking matters. The group-level effect of weather on pain is statistically significant but small. The individual-level effect can be large. Toni is a strong barometric responder. We only know this because we tracked her specifically, not because a group study would have predicted it.

The N-of-1 Trial: What It Is and Why It Matters

What we're running on the deck is, technically, a form of N-of-1 trial: a structured self-experiment designed to determine what works for one specific person.

In 2019, the PREEMPT study randomized 215 chronic musculoskeletal pain patients: half received usual care, half participated in mobile-device-assisted N-of-1 trials co-designed with their clinicians. The N-of-1 group showed better medication-related shared decision-making. Patients who engaged with their own data made more informed treatment choices.6

A 2021 review of N-of-1 trials in chronic pain management concluded that these individualized trials are "good candidates to assess" new interventions because they can be completed by a single patient, avoid long-term placebo exposure, and can be aggregated through meta-analysis when multiple individuals run similar protocols.7

I'm not running a formal N-of-1 trial. We don't have alternating treatment periods. We don't have washout phases. We don't have randomization of conditions (the Oregon coast randomizes those for us, but not in a way that would satisfy a review board). What we have is the skeleton of one: systematic data collection, environmental measurement, and a single subject whose responses are tracked over time.

It's enough to inform Toni's decisions. It's not enough to prove anything to anyone else. I'm comfortable with that distinction. The gap between "this works for me and here's the data" and "this works and everyone should do it" is the gap between personal science and clinical evidence. We live in the first space. We're honest about it.

Toni's Reality Check

Ken says "I'm comfortable with that distinction" like it's a conclusion he arrived at calmly. What actually happened: he spent three weeks trying to figure out how to blind me to my own weather conditions. He considered a blindfold. I told him I would still feel the wind. He considered earplugs. I told him I would still feel the cold. He considered running sessions during the day and at night to compare. I told him I was not going outside at 2 PM to lie on a hot deck to satisfy his experimental design.

He settled for "unblinded observational study" and I'm pretty sure it still bothers him.

Version History

The tracking system has evolved. I'm including the version history because it tells the real story of how personal science actually works: messily, iteratively, and with most of the good ideas coming from the patient, not the researcher.

📋 Deck Lab: Version History

v1.0
October. 14-field spreadsheet. Pain (0-10), location, quality, mood, anxiety, fatigue, sleep hours, sleep quality, medications, food, exercise, weather (self-reported), time, duration. Abandoned after 3 sessions. Toni: "I came outside to feel better, not to fill out a form."
v2.0
Late October. 8 fields. Dropped location, quality, food, exercise. Kept the rest. Paper form on a clipboard on the deck. Better. Lasted 9 days. Failed because: pen doesn't work well in cold, damp air; paper gets wet; clipboard blew off the railing once and scared Kona.
v2.5
November. Moved to phone. Google Form with 8 fields. Toni's feedback: "I'm not unlocking my phone at 3 AM. The screen light ruins everything." She was right. The light destroyed her dark adaptation in seconds. Abandoned.
v3.0
Late November. The breakthrough. Split the system: Toni does 4 things (pain before, pain after, mood before, mood after) verbally or via Post-it at the door. I handle everything else. Weather station installed. Duration tracked by door sensor. Everything else recorded by me from the console in the morning. First version she used consistently.
v3.1
December. Added sleep_prior and free-text notes field. Added Kona position tracking (pressed / near / roaming) because Toni mentioned "Kona nights" were different and I wanted to quantify it. She was right: a 0.7-point difference.
v3.2
January. Added tide_state after Toni's bay sound article. Added moon_visible (separate from moon_phase; a full moon behind clouds is different from a full moon you can see). Current version. Stable. 92% completion rate on Toni's fields.

The version history is the most honest thing in this article. Six iterations. Three failures. The breakthrough came from accepting that the system serves the patient, not the researcher. Every field Toni doesn't fill in is a field that shouldn't exist.

The Quantified Self Problem

I need to talk about the risks of what we're doing, because the quantified self movement has a shadow side that nobody in chronic illness communities talks about enough.

Tracking pain can become its own form of hypervigilance. You start paying more attention to your pain in order to rate it, which means you're thinking about your pain more, which can amplify the pain signal. The thing you're using to understand your condition can make the condition louder.

The personal science literature acknowledges this. A 2021 systematic review of self-tracking for health found that while tracking generally improves outcomes, the mechanism isn't always the data itself; it's the feedback loops that tracking creates. And feedback loops can be positive (more understanding → better choices → less pain) or negative (more attention → more distress → more pain).8

We've managed this by keeping Toni's interaction minimal. She doesn't analyze the data. She doesn't look at the spreadsheet. She rates her pain at the door, twice, and forgets about it. I do the analysis. I tell her patterns when I find them. This separation matters. She's the patient. I'm the analyst. When those roles merge, when the person in pain is also the person staring at data about their pain, it gets complicated fast.

The best thing I did wasn't building the system. It was building it so Toni barely notices it's there.

What I'd Tell Someone Building Their Own

CEO Quarterly Data Review

I have reviewed the human's data collection system. My observations:

The system does not track any feline variables. There is no field for samba_position, samba_mood, or samba_approval_level. This is a significant oversight. My position relative to the humans correlates strongly with household well-being and I have the lap-time data to prove it.

I note that the system does track the dog's position. The dog has three options: pressed, near, or roaming. I have proposed adding a fourth: "in the way." This suggestion was not implemented.

The human called Ken spends increasing time at the glowing rectangle looking at what he calls "the numbers." The numbers appear to make him alternately excited and frustrated. I sit on the keyboard when the frustration reaches concerning levels. This intervention has a 100% success rate at stopping the data analysis, though the human does not seem grateful.

Recommendation: Add samba_keyboard_intervention as a tracked variable. I predict it will correlate with researcher well-being.

- Samba, CEO & Chief Data Officer
(Currently on the keyboard. Analysis halted. You're welcome.)

What Comes Next

We're at 147 sessions. By summer, we'll have a full year of data across all four Oregon coast seasons. That's when the real patterns should emerge: seasonal effects, temperature range effects, the interaction between day length and session timing.

I'm also working on a simple dashboard that Toni's rheumatologist can access. Not the raw spreadsheet, but a filtered view showing pain trends, weather correlations, and session frequency. Something a doctor can look at in three minutes and use to ask better questions.

The citizen science movement (people tracking their own conditions and sharing what they learn) is growing. A 2019 paper on "Citizen Health Science" argued that self-experimentation gives individuals the tools to acquire knowledge that informs action, rather than relying solely on professional researchers.9 Our deck lab is a small example of that. It's not going to change pain medicine. But it's changed how Toni interacts with her own pain, how I interact with her pain, and how her doctors interact with both of us.

That's enough. Honestly, some weeks, that's everything.

Sources

  1. Sullivan MD, et al. (2018). "A controlled pilot trial of PainTracker Self-Manager, a web-based platform combined with patient coaching, to support patients' self-management of chronic pain." The Journal of Pain, 19(9), 996-1005. N=99; PTSM group showed significant improvements in pain self-efficacy and satisfaction with pain treatment. https://pmc.ncbi.nlm.nih.gov/articles/PMC6119625/
  2. Garcia-Palacios A, et al. (2014). "Ecological momentary assessment for chronic pain in fibromyalgia using a smartphone." European Journal of Pain, 18(6), 862-872. Smartphone EMA produced more accurate and complete ratings than paper diaries in fibromyalgia patients, even those with low tech familiarity. https://pubmed.ncbi.nlm.nih.gov/24921074/
  3. May M, et al. (2018). "Ecological Momentary Assessment methodology in chronic pain research: a systematic review." The Journal of Pain, 19(7), 699-716. Comprehensive review of EMA methods in pain research; recommends momentary measurement over recall-based approaches. https://pmc.ncbi.nlm.nih.gov/articles/PMC6026050/
  4. Bøe Lunde LK, et al. (2019). "Blame it on the weather? The association between pain in fibromyalgia, relative humidity, temperature and barometric pressure." PLOS ONE. 48 FM patients tracked 3x daily for 30 days; lower barometric pressure and higher humidity associated with increased pain; significant individual differences. https://pmc.ncbi.nlm.nih.gov/articles/PMC6510434/
  5. Barnett AG, et al. (2022). "Heterogeneity in the association between weather and pain severity among patients with chronic pain: a Bayesian multilevel regression analysis." Pain. 2,658 participants; weather sensitivity confirmed but highly heterogeneous across individuals. https://pmc.ncbi.nlm.nih.gov/articles/PMC8759613/
  6. Barr C, et al. (2019). "Effect of mobile device-assisted N-of-1 trial participation on analgesic prescribing for chronic pain: randomized controlled trial." Journal of Medical Internet Research. 215 patients; N-of-1 participants showed improved shared decision-making. https://pmc.ncbi.nlm.nih.gov/articles/PMC6957655/
  7. He W, et al. (2021). "Status of N-of-1 trials in chronic pain management." Pain Research and Management. Review concluding N-of-1 trials are good candidates for assessing new chronic pain interventions due to efficiency and individual applicability. https://pmc.ncbi.nlm.nih.gov/articles/PMC8586287/
  8. Meyerowitz-Katz G, et al. (2021). "How self-tracking and the quantified self promote health and well-being: systematic review." Journal of Medical Internet Research. 11 research themes identified; tracking improves outcomes through feedback loops, but loops can be positive or negative. https://pubmed.ncbi.nlm.nih.gov/34546176/
  9. Wolf G, et al. (2019). "Citizen Health Science: foundations of a new data science arena." Argued that self-experimentation emphasizes knowledge acquisition over outcomes, giving individuals tools to inform their own health decisions. https://pmc.ncbi.nlm.nih.gov/articles/PMC7299478/