TECHIN 510 — Spring 2026

From Sensor to Dashboard

Week 3: Sensor Data Visualization
University of Washington • Global Innovation Exchange
01 / 25
What You Will Learn

Learning Objectives

1
Specify system requirements Define accuracy, freshness, and availability targets for a sensor dashboard before writing code
2
Name the Pipes and Filters pattern Identify the canonical architecture pattern in the sensor-to-dashboard pipeline
3
Write a format contract Specify field names, types, ranges, and update rate between subsystems
4
Trace error propagation Follow sensor accuracy → quantization → system-level guarantee through the pipeline
02 / 25
What You Will Learn

Learning Objectives (continued)

5
Apply PRIMM to AI-generated firmware and serial parser code Predict behavior before running; investigate after
6
Distinguish structural from domain validation Implement range checks at system boundaries where data enters
7
Separate I/O from logic Create testable code without hardware dependencies by isolating pure functions
8
Add observability to a live dashboard Freshness indicator, failure counting, staleness detection

Today: discussion + demo, then a hands-on lab

PRIMM for firmware + parser code

Predict what the next serial line should look like before you run it.
Run the Serial Monitor or the script.
Investigate mismatches at the stage boundary (wrong columns? wrong type?).
Modify one filter at a time, then re-run.
Make a one-line contract change if the hardware reality changed.
(The full lab walkthrough uses the same loop.)

03 / 25
The Guiding Question

"Should I go study in the GIX room right now? Is it comfortable? Is someone already there?"

A few sensors and some Python code can answer that question. Functionally, it is the same question a building management system asks 10,000 times per hour.

04 / 25
Before We Build

What Does the System Need to Do?

Before jumping to code, define three measurable requirements for the GIX lounge monitor:

A
Accuracy Temperature within ±2°C of ground truth. Why 2°C? That is the threshold where humans notice comfort changes.
F
Freshness Data on the dashboard no older than 10 seconds. A “live” dashboard showing 5-minute-old data is lying.
R
Availability System recovers without restart after 3 consecutive sensor errors. The DHT22 fails ~20% of reads — the system must tolerate this.

These three numbers drive every architectural decision in the rest of this lecture. This is the 40% planning in the 40/20/40 principle.

05 / 25

Streamlit

Agentic walkthrough: prompts → a small dashboard
06 / 25
Example 1

Prompt 1 — Scaffold app.py

Your prompt to the agent
Create a Streamlit web app. The app should set a page title about a “GIX lounge comfort preview”, add a short markdown subtitle, and display a line of plain text saying hello.
P
Predict Before you run: what three pieces of text should appear, and in what order?
R
Run Save the file, run the command the agent gave you (should be streamlit run app.py), confirm the browser opens.
V
Verify Change the title string, save, and confirm the UI hot-reloads.

Typical reply includes st.title, st.markdown, st.write (or similar). Read the code before you run it.

07 / 25
Example 2

Prompt 2 — Widgets and rerun

Your prompt to the agent
In the same app.py for the GIX lounge comfort preview, keep the page title, subtitle, and hello line from Prompt 1. Below that, add a slider for target comfort from 0 to 100 (0 = chilly, 100 = cozy) with default 50, and show the chosen value below the slider using st.write or st.metric. In a one-line comment in the code, note that Streamlit reruns the whole script top-to-bottom when the comfort slider moves.
P
Predict When you drag the comfort slider, which lines of code run again?
R
Run Move the comfort slider; watch the value under it update.
I
Investigate Ask the agent what happens if you read the comfort slider’s value above the line where the slider is created. Try the broken version once, then fix it.
08 / 25
Example 2b

Prompt 2b — More UI widgets

Your prompt to the agent
In the same GIX lounge comfort preview app, keep the title, subtitle, hello, and comfort slider from Prompts 1–2. Add three more widgets in the main area: st.checkbox labeled “Simulate occupancy” (default False); st.radio for lighting mood with options Warm glow, Neutral, Cool white; and st.text_input for an optional short note (placeholder like “Note for this preview”). Show the current values under the widgets with st.write or st.caption. Add a short comment that changing any widget triggers a rerun.
P
Predict Which widgets return a value on the first run before the user touches them?
R
Run Toggle the checkbox, change the radio, type in the text field; confirm the displayed values update.
V
Verify Comfort slider position is preserved when you change other widgets; no errors from widget order.
09 / 25
Example 3

Prompt 3 — Layout

Your prompt to the agent
Refactor the same GIX lounge comfort preview app.py: keep the page title and markdown subtitle from Prompt 1, plus the hello line, comfort slider, and the checkbox/radio/text widgets from Prompts 2–2b. Use st.columns: in the left column show a metric “Feels like (demo °F)” with a number (placeholder or a simple function of the comfort slider); in the right column show “Humidity (demo %)” with a number. Put st.selectbox in the sidebar for “Quiet zone” vs “Collaborative” (labels only). Keep the hello line, comfort slider, and extra widgets below the columns or in a sensible place.
P
Predict Changing the sidebar zone will rerun the script — should the comfort slider value reset? (Try it.)
V
Verify Both lounge metrics visible; zone selectbox triggers a rerun; comfort slider value behaves as you predicted; columns and sidebar behave as containers only.
10 / 25
Example 4

Prompt 4 — DataFrame and chart

Your prompt to the agent
Build a small DataFrame with fake timestamps and a temperature column (10–20 rows). The temperature column should depend on the slider (e.g. base room temp plus a slider-driven offset or small noise). Display st.line_chart for temperature vs index or time. Optionally show st.dataframe with head() if it still fits on screen.
V
Verify Dragging the slider changes the series shape or level; chart updates without errors.

Same chart APIs apply once real readings live in a DataFrame. Next: the pipeline that gets them there.

11 / 25
How Streamlit runs

Reruns, st.empty, st.rerun, and caching

Widget reruns

Each time you interact with a widget (slider, sidebar, checkbox, …), Streamlit runs your script from top to bottom again. That is the normal “refresh”: a new run, same file, new widget state.

st.rerun() triggers the same kind of full rerun programmatically — for example after you update st.session_state and want the page to redraw without waiting for another click.

Live-looking updates

st.empty() reserves a placeholder. You can replace only that slot (e.g. with placeholder.container(): or placeholder.write(...)) when data arrives on a timer or stream — so the dashboard can feel live without relying on a widget event every time.

Contrast: a plain while True that redraws everything can fight Streamlit’s rerun model; prefer updating a bounded buffer (e.g. deque(maxlen=20)) inside a controlled loop paired with a placeholder.
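The bounded-buffer half of that advice can be sketched without any UI code. In the real app, each append would be followed by redrawing a st.empty placeholder; the names below are illustrative:

```python
# Bounded live buffer: old readings fall off the left automatically
from collections import deque

buffer: deque[float] = deque(maxlen=20)

def on_new_reading(value: float) -> None:
    """Append one reading; in the app, follow this with placeholder.line_chart(...)."""
    buffer.append(value)  # when full, the oldest reading is silently dropped

for i in range(25):       # simulate 25 readings arriving over time
    on_new_reading(20.0 + i * 0.1)

# Only the newest 20 survive; the first five are gone for good.
```

Because the buffer is bounded, redraw cost stays constant no matter how long the dashboard runs.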

Illustrative app.py fragment
import streamlit as st
import pandas as pd

@st.cache_data
def load_rows(csv_path: str) -> pd.DataFrame:
    return pd.read_csv(csv_path)  # slow I/O; skipped when path unchanged

if "ticks" not in st.session_state:
    st.session_state.ticks = 0

live = st.empty()
df = load_rows("sample.csv")
live.metric("Cached rows", len(df))

if st.button("Increment + rerun"):
    st.session_state.ticks += 1
    st.rerun()  # restarts script from top (like a widget-driven rerun)

st.caption(f"Session ticks: {st.session_state.ticks}")

Cache status: decorate slow work with @st.cache_data (serializable return values) or @st.cache_resource (connections, models). On rerun, Streamlit skips the cached body when inputs are unchanged. Call st.cache_data.clear() to invalidate if the file behind csv_path changes on disk.

12 / 25

The Data Pipeline

9 stages from physical world to browser pixels
13 / 25
Pipeline Architecture

The 9-Stage Data Pipeline

PHYSICAL WORLD
1. SENSOR: Physical → Signal
2. MICROCONTROLLER: Signal → Float (e.g. 22.5°C as a float)
3. FORMATTER: Float → CSV String
HOST COMPUTER
4. TRANSPORT: String → Bytes over USB (e.g. b"2.1,22.4,45.1\n")
5. PARSE (decode): Bytes → text + fields
6. VALIDATE + STRUCTURE: Fields → typed DataFrame
BROWSER
7. VISUALIZE: DataFrame → Figure
8. RENDER: Figure → HTML/JS
9. DELIVERY: HTTP → Pixels (SVG + JSON → pixels)

Same stages, two vocabularies: elsewhere we say “parse & validate” — that work spans Stage 5 (decode bytes, split CSV fields) and Stage 6 (check ranges/types, build a DataFrame). When debugging, ask which of those two filters broke the contract.

14 / 25
Architecture Pattern

The Pattern Has a Name: Pipe-and-Filter Architecture

This 9-stage pipeline is a typical pipe-and-filter architecture: independent processing steps connected by explicit data contracts.

1
Filters do one job each A filter is a small stage with a single responsibility: read one input format, apply one transformation, emit one output format. Example: sensor read → float, formatter → CSV, decoder → dict.
2
Pipes carry data between filters A pipe is the handoff channel plus contract: USB serial bytes, function return values, queues, or HTTP payloads. Medium changes; interface discipline stays the same.
3
Why teams use it: modularity + reuse You can replace one filter without rewriting the system, reuse filters across projects, and compose new workflows by rearranging stages.
4
Tradeoffs to manage Every handoff adds overhead and potential latency. Good observability and error handling are required so one bad stage does not silently corrupt downstream output.
5
You already use this pattern Unix pipelines (cat data.csv | grep "22" | awk -F, '{print $2}'), Express middleware chains, and ETL flows all follow the same idea.

Debug mindset: Trace stage-by-stage. In our course pipeline: sensor → formatter → parse (Stage 5) → validate (Stage 6) → chart. Ask “which filter broke the contract?”, not just “why is the chart wrong?”
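As a minimal sketch, here is the same pattern in plain Python: each filter is one small function, and each "pipe" is just a return value handed to the next stage. Stage numbers mirror the course pipeline; the sample values are invented:

```python
# Pipe-and-filter in miniature: format -> transport -> parse -> validate
def format_csv(t: float, temp: float, humid: float) -> str:   # Stage 3
    return f"{t:.1f},{temp:.1f},{humid:.1f}\n"

def transport(line: str) -> bytes:                            # Stage 4 (stand-in for USB)
    return line.encode("utf-8")

def parse(raw: bytes) -> list[str]:                           # Stage 5
    return raw.decode("utf-8").strip().split(",")

def validate(fields: list[str]) -> dict:                      # Stage 6
    t, temp, humid = (float(f) for f in fields)
    if not -40 <= temp <= 80:
        raise ValueError(f"temp out of range: {temp}")
    return {"timestamp": t, "temp_c": temp, "humid_pct": humid}

record = validate(parse(transport(format_csv(2.1, 22.4, 45.1))))
```

Swapping one filter (say, JSON instead of CSV in the formatter) leaves every other stage untouched; that isolation is the modularity payoff.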

15 / 25
Physics of Imperfection

Why Does 22.5°C Keep Flickering?

Even successful readings fluctuate: thermal noise, ADC quantization, acoustic interference. This embedded dashboard is intentionally “alive” to show that micro-variation is normal.

Live Room Monitor updated 2.0s ago
22.5°C +0.1°C vs avg

This is not a bug. This is physics. Software response:

df['temp_smoothed'] = df['temperature_c'].rolling(window=5).mean()
Error Budget

±0.5°C (sensor) + ±0.05°C (quantization) ≈ ±0.6°C, so the dashboard cannot claim better than ±0.6°C

“22.5°C” really means [21.9, 23.1]

Design Decision

window=5 at 2s intervals = 10-second smoothing. Is that too slow for a sudden spike? You decide.
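A self-contained version of the smoothing line, useful for experimenting with the window tradeoff. The readings are fake, with one deliberate spike:

```python
# Rolling mean: trades responsiveness for stability
import pandas as pd

df = pd.DataFrame({"temperature_c": [22.4, 22.6, 22.5, 25.0, 22.5, 22.4, 22.6]})
df["temp_smoothed"] = df["temperature_c"].rolling(window=5).mean()

# The first window-1 rows are NaN; the spike at index 3 is damped, not removed.
```

At one reading every 2 s, window=5 means a real change takes up to 10 s to show at full height, which is exactly the tradeoff the design decision asks you to weigh.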

16 / 25
Critical Boundaries

Three Key Boundaries

PHYSICAL-TO-DIGITAL: Between World and Stage 1. "The world becomes data." The only place physics enters the system.
DEVICE-TO-HOST: Between Stage 4 and Stage 5. "Bytes cross the cable." The format contract must match on both sides.
DATA-TO-PIXEL: Between Stage 7 and Stage 8. "Numbers become visuals." The last chance to catch unit errors before the user sees them.
17 / 25
Systems Engineering

Writing a Format Contract

The CSV “format contract” deserves a written specification, not a passing mention:

Field 1: timestamp   — float, seconds since boot, [0, ∞), monotonic
Field 2: temp_c      — float, °C, [-40, 80], resolution 0.1
Field 3: humid_pct   — float, %, [0, 100], resolution 0.1
Field 4: dist_cm     — float, cm, [2, 400], -1.0 = sensor error

Delimiter: comma  |  Terminator: \n  |  Encoding: UTF-8
Rate: 1 line / 2s ±100ms

This contract is the primary artifact.

Interface Contract: This CSV spec is the first of many contracts: DataFrame asserts (W4), Zod schemas (W6), tool input_schema (W7). Same idea: define what crosses a boundary, verify it.

18 / 25
Verification Points

Three Verification Points

After Stage 1 — Sensor vs. Reality Tool: thermometer, weather app, Serial Monitor. Question: "Does the number the sensor reports match the physical world?"
After Stage 3 — Raw CSV String Tool: Serial Monitor. Question: "Right number of columns? Right delimiter? Reasonable values?"
After Stage 6 — DataFrame Statistics Tool: df.describe(), df.head(). Question: "Temperature mean near room temp? Distance values never negative?"

A chart that passes all three verification points is a chart you can trust. A chart that skips them is a chart that merely looks trustworthy.

19 / 25
The Dangerous Assumption

"A chart that renders without error is NOT the same as a chart that shows correct data. This is one of the most dangerous assumptions beginners make."

The pipeline model is your defense against this.

20 / 25

Live Dashboard

Streamlit + Plotly — Stages 7 through 9
21 / 25
Observability

Is the Pipeline Still Running?

The hardest failure: system looks healthy but shows stale data.

# Freshness: set once before the loop; refresh only on successful reads
# (assumes `import time`, `import streamlit as st`, and a get_latest_reading() helper)
last_updated = time.time()
while True:
    reading = get_latest_reading()
    if reading:
        last_updated = time.time()
        # ... render metrics ...
    age = time.time() - last_updated
    color = "green" if age < 10 else "red"
    st.markdown(f":{color}[Last update: {age:.0f}s ago]")
    time.sleep(2)  # match the 1-line-per-2s contract; avoid a busy spin
1
Heartbeat Is data arriving? The freshness indicator answers this. Meets the 10-second requirement from Slide 5.
2
Range check Is the data physically plausible? Use a domain check such as is_physically_plausible() at the parse/validate boundary.
3
Throughput count How many readings per minute? deque(maxlen=20) is a data retention decision — old readings are gone forever, not just hidden.
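A minimal sketch of the domain check named in point 2, using the ranges from the format contract (the thresholds are this lecture's sensor specs, not universal constants):

```python
def is_physically_plausible(temp_c: float, humid_pct: float) -> bool:
    """Domain validation: only accept values a real room sensor could produce."""
    return -40 <= temp_c <= 80 and 0 <= humid_pct <= 100
```

Call it at the parse/validate boundary, right after fields become floats; structural checks (column count, types) catch malformed lines, and this catches well-formed nonsense.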

Three failure strategies in 6 lines: silent (pass), logged (logging.warning), counted with threshold. Choose the most observable strategy your system can support.
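The counted-with-threshold strategy, sketched with the other two as comments; the threshold of 3 matches the availability requirement from Slide 5 (the function and variable names are invented):

```python
import logging

consecutive_failures = 0  # strategy 3: counted with a threshold

def handle_bad_reading(raw: bytes) -> None:
    global consecutive_failures
    # Strategy 1 (silent): `pass` here -- failures vanish; worst observability.
    # Strategy 2 (logged): at least leave a trace:
    logging.warning("dropped unparseable reading: %r", raw)
    # Strategy 3 (counted): escalate after 3 consecutive errors, then recover.
    consecutive_failures += 1
    if consecutive_failures >= 3:
        logging.error("3 consecutive sensor errors: reconnecting serial port")
        consecutive_failures = 0  # recover without restarting the app

for bad_line in [b"??", b"", b"nan,"]:
    handle_bad_reading(bad_line)
```

The counter resets on recovery (and would also reset on any good reading), so the system tolerates the DHT22's ~20% failure rate without a restart.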

22 / 25

Physical Validation

The move from Understand to Evaluate on Bloom's Taxonomy
23 / 25
Verification

Hardware Dashboard Verification Checklist

Does the chart render? Smoke test. Necessary but not sufficient.
Do the numbers match physical reality? Compare to a second source: weather app, ruler, multimeter.
Are the units correct? °C not °F. cm not inches. Check firmware AND parsing code.
Does the chart update when the physical world changes? Wave your hand near the sensor. See a spike? Good.
Can you trace ONE data point end-to-end? Serial Monitor → terminal output → chart value. If those three match, the pipeline has integrity.
24 / 25
Up Next in Lab

Preview: Lab 3 — Makerspace hardware

In Lab 3 you will work with physical sensors and a microcontroller. Visit the makerspace and talk with the crew there to check out the hardware you need.

Your choice: Which sensors you use is up to you — pick something that fits what you want to measure.

Plan ahead: know what you want to measure (or a short list of options) before you go, so checkout goes smoothly.

25 / 25