I build the robots I ship specs for.
UCLA Anderson MBA · 5+ years across AI product management, infrastructure PM, and hands-on robotics. Most PMs translate between product and engineering — I already speak both. When I write a spec, I've already debugged the code.
I'm a Robotics Product Manager who codes. I build the systems I ship — so the specs I write and the trade-offs I negotiate are grounded in what actually has to work on the robot.
Most PMs write requirements. Most engineers write code. I do both — which means when I talk about perception latency, planning horizons, or the cost of model predictive control, I'm not repeating jargon. I've debugged it at 2am on real hardware.
Closing the loop — applying product thinking to the field I love most. Building autonomous systems end-to-end so the specs I write and the trade-offs I negotiate come from real hardware, not slides.
Full-time MBA in the Technology Management track. Crossed from engineering leadership into product strategy — market sizing, customer research, roadmap trade-offs, GTM.
First dedicated product role — shipped an AI customer-service chatbot extension on the Zendesk platform. Hands-on LLM fine-tuning, CRM integration, and cross-functional delivery.
5 years managing large-scale public infrastructure — underground railways, high-rises, LNG terminals. Learned to ship where failure has real, physical consequences.
Six autonomous driving algorithms, built solo from scratch on a 1/10-scale race car.
From reactive control to model predictive control — running ROS 2 on NVIDIA Jetson. Most PMs write requirements. I wrote these six algorithms, so the roadmap trade-offs I make come from actually debugging them at 2am — not from a textbook.

// Parameters tuned on real hardware, not simulation defaults
// Priority-based Ackermann mux ensures AEB (priority 200) overrides any autonomous algorithm (priority 10) in case of imminent collision.
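The priority arbitration is simple to sketch. Below is a minimal, hypothetical mux in Python (class name, timeout, and message payloads are illustrative; the car runs the actual ROS Ackermann mux, not this): the highest-priority input that has published within its timeout wins, so a priority-200 brake always overrides a priority-10 planner.

```python
import time

class AckermannMux:
    """Minimal sketch: highest-priority input published within `timeout` wins."""

    def __init__(self, timeout=0.2):
        self.timeout = timeout
        self.last = {}  # priority -> (timestamp, command)

    def publish(self, priority, command, now=None):
        self.last[priority] = (time.time() if now is None else now, command)

    def output(self, now=None):
        now = time.time() if now is None else now
        # Keep only inputs that are still fresh
        live = [(p, cmd) for p, (t, cmd) in self.last.items()
                if now - t <= self.timeout]
        # Highest-priority fresh command wins; None if everything timed out
        return max(live, key=lambda pc: pc[0])[1] if live else None
```

Stale inputs dropping out of `live` is the point: a crashed planner stops publishing and silently loses arbitration instead of steering the car.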
Vectorized iTTC across every LiDAR beam.
Computes instantaneous Time-to-Collision per beam with NumPy. Publishes brake commands to Ackermann mux (priority 200) AND directly to VESC as redundant failsafe.
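The per-beam check can be sketched in a few lines (function name and threshold are illustrative): each beam's range is divided by its closing speed, v·cos(θ), and the car brakes when the minimum iTTC drops below a threshold.

```python
import numpy as np

def should_brake(ranges, angles, v, ttc_threshold=1.0):
    """Vectorized instantaneous Time-to-Collision over all LiDAR beams.

    ranges: (N,) beam distances in m
    angles: (N,) beam angles in rad, relative to heading
    v:      forward speed in m/s
    """
    closing = v * np.cos(angles)               # closing speed along each beam
    ittc = ranges / np.maximum(closing, 1e-6)  # avoid divide-by-zero
    ittc[closing <= 0.0] = np.inf              # opening beams can't collide
    return bool(np.min(ittc) < ttc_threshold)
```

No loops: one cosine, one divide, one min over the whole scan, which is what keeps the failsafe cheap enough to run every tick.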
Two-beam wall angle estimate + PD controller.
Uses 90° and 45° LiDAR beams to estimate wall angle α. 1 m lookahead projects lateral error → steering.
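One way the two-beam geometry works out, assuming beams at 90° and 45° (function names, gains, and setpoint are illustrative): the angle between the beams gives the wall angle α, and the lookahead projects where the car will be, not where it is.

```python
import numpy as np

def wall_error(a, b, theta=np.radians(45), lookahead=1.0, desired=1.0):
    """Two-beam wall angle estimate.

    b: range of the beam perpendicular to the car (90 degrees)
    a: range of the beam theta closer to the heading (here 45 degrees)
    """
    alpha = np.arctan2(a * np.cos(theta) - b, a * np.sin(theta))  # wall angle
    d_now = b * np.cos(alpha)                     # current distance to wall
    d_ahead = d_now + lookahead * np.sin(alpha)   # distance at 1 m lookahead
    return desired - d_ahead

def pd_steer(err, prev_err, kp=0.8, kd=0.05, dt=0.02):
    """PD controller on the projected error (gains illustrative)."""
    return kp * err + kd * (err - prev_err) / dt
```

Projecting the error ahead is what keeps the PD loop from oscillating: by the time a raw 90° reading shows drift, the car has already committed to the wrong heading.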
Map-free reactive planner in steerable FOV.
Clip + smooth ranges, find closest obstacle in ±24° FOV, zero out dynamic safety bubble (scales with proximity), steer toward midpoint of widest gap.
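The pipeline above can be sketched as follows (smoothing omitted for brevity; function name, clip distance, and bubble size are illustrative): mask the scan to the FOV, zero a bubble around the closest return, then steer at the midpoint of the widest surviving run of free beams.

```python
import numpy as np

def gap_follow_steer(ranges, angles, fov=np.radians(24), bubble=0.15):
    """Reactive follow-the-gap: bubble out the closest obstacle,
    steer toward the midpoint of the widest remaining gap."""
    mask = np.abs(angles) <= fov          # restrict to steerable FOV
    r = np.clip(ranges[mask], 0.0, 3.0)   # clip far returns
    a = angles[mask]
    i_min = int(np.argmin(r))
    spacing = a[1] - a[0]
    # Safety bubble's angular half-width grows as the obstacle gets closer
    half = int(np.arctan(bubble / max(r[i_min], 1e-3)) / spacing)
    r[max(0, i_min - half): i_min + half + 1] = 0.0
    # Widest run of nonzero ranges = widest gap
    free = r > 0.0
    best_len = best_start = run = start = 0
    for i, f in enumerate(free):
        if f:
            if run == 0:
                start = i
            run += 1
            if run > best_len:
                best_len, best_start = run, start
        else:
            run = 0
    mid = best_start + best_len // 2
    return float(a[mid])                  # steering angle toward gap midpoint
```

Scaling the bubble with proximity is the key design point: a distant obstacle blocks a few beams, a near one blocks a wide cone, so the car commits to a detour earlier the closer the threat is.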
Geometric path tracker over CSV waypoints.
Finds first waypoint ≥ Ld ahead, transforms to vehicle frame, applies pure pursuit law: γ = 2y/Ld², δ = atan(L·γ). Speed modulated by curvature.
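Those steps fit in one short function (function name, Ld, and wheelbase values are illustrative): pick the first waypoint at least Ld away, rotate it into the vehicle frame, then apply the law from the card.

```python
import numpy as np

def pure_pursuit_steer(waypoints, pose, Ld=1.5, wheelbase=0.33):
    """Pure pursuit: gamma = 2y / Ld^2, delta = atan(L * gamma).

    waypoints: (N, 2) map-frame xy from CSV; pose: (x, y, yaw).
    """
    x, y, yaw = pose
    d = np.hypot(waypoints[:, 0] - x, waypoints[:, 1] - y)
    # First waypoint at least Ld away; falls back to index 0 if none is
    idx = int(np.argmax(d >= Ld))
    dx = waypoints[idx, 0] - x
    dy = waypoints[idx, 1] - y
    # Rotate goal into the vehicle frame (x forward, y left)
    y_v = -np.sin(yaw) * dx + np.cos(yaw) * dy
    gamma = 2.0 * y_v / Ld**2             # curvature of the arc to the goal
    return float(np.arctan(wheelbase * gamma))
```

Only the lateral offset y_v enters the law: a goal dead ahead gives zero curvature, and the further the goal sits off-axis, the harder the commanded arc.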
Sampling-based replanning over occupancy grid.
LiDAR → 0.1 m/cell occupancy grid (9×10 m vehicle frame). RRT* with asymptotic optimality via rewiring. Path tracked by Pure Pursuit.
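RRT* itself is too long to excerpt, but the front end, turning a scan into the 0.1 m/cell vehicle-frame grid, can be sketched (function name and exact grid framing are illustrative):

```python
import numpy as np

def lidar_to_grid(ranges, angles, res=0.1, length=10.0, width=9.0):
    """Rasterize LiDAR hits into a vehicle-frame occupancy grid:
    `length` m ahead (rows) by `width` m across (cols), `res` m/cell."""
    rows, cols = int(round(length / res)), int(round(width / res))
    grid = np.zeros((rows, cols), dtype=np.uint8)
    xs = ranges * np.cos(angles)          # hit points in the vehicle frame
    ys = ranges * np.sin(angles)
    r_idx = np.floor(xs / res).astype(int)                   # forward axis
    c_idx = np.floor((ys + width / 2.0) / res).astype(int)   # lateral, centered
    valid = (r_idx >= 0) & (r_idx < rows) & (c_idx >= 0) & (c_idx < cols)
    grid[r_idx[valid], c_idx[valid]] = 1  # mark occupied cells
    return grid
```

The sampler then draws candidate states from the zero cells, and each rewire's collision check reduces to indexing along the edge, which is why the grid is rebuilt in the vehicle frame every scan.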
Receding-horizon QP with CVXPY. Warm-started.
State [x, y, v, yaw] · Input [accel, δ_rate]. Horizon: 8 steps × 0.1 s = 0.8 s lookahead. Cost Q = diag([13.5, 13.5, 5.5, 13.0]) weights position and heading over velocity. Each solve warm-starts from the previous solution, so the convex QP converges in a handful of OSQP iterations instead of solving from scratch every tick.
# Lab 8 — MPC cost with warm-start
import cvxpy
import numpy as np

Q = np.diag([13.5, 13.5, 5.5, 13.0])  # state weights: x, y, v, yaw
R = np.diag([0.01, 100.0])            # input weights: accel, steering_rate

cost = 0
for k in range(HORIZON):              # 8 × 0.1 s = 0.8 s lookahead
    state_err = x[:, k] - ref[:, k]
    cost += cvxpy.quad_form(state_err, Q)
    cost += cvxpy.quad_form(u[:, k], R)

# Warm-start from previous solution → faster convergence
prob.solve(solver=cvxpy.OSQP, warm_start=True)
The fun part of robotics is not writing the algorithm — it's deciding which one, tuned how, at what latency cost. Three calls I had to make:
Plain RRT finds a path fast but rarely an optimal one. The rewire step in RRT* gives asymptotic optimality for a bounded cost per iteration — worth it on a 1/10 car where compute isn't the bottleneck but path quality matters for lap time.
Longer horizons look smarter on paper but explode solver time and over-commit to a stale prediction in a reactive environment. 8 × 100 ms hits the sweet spot: long enough to anticipate corners, short enough that the warm-start is still relevant next tick.
The Ackermann mux is clean in theory but adds a failure point. If the mux crashes or lags, the car keeps whatever velocity it last had. Publishing the brake directly to VESC gives a second independent path — safety-critical code shouldn't have a single point of failure.
// Side projects at the intersection of AI, geometry, and tooling
RhinoMCP · AI-driven parametric geometry in Rhino 3D
Built an MCP integration that lets architects generate parametric buildings and bridges from natural-language prompts, bridging LLM reasoning with Rhino's Python API.
From real-time control loops on embedded Linux to product discovery decks — I work across the whole vertical.
The complete record — experience, education, projects, and publications.
Building something in robotics, autonomy, or AI product? I want to hear about it. Usually respond within 24 hours.