The Reviewer’s Notebook
No. 01 · ML edition

Machine learning resume examples, marked up by hand.

Five real-shaped resumes — FAANG ML engineer, MLOps founder, applied scientist, new grad, staff platform — read the way a senior reviewer reads them. Bullets that work get a thin margin note. Bullets that don’t get rewritten in front of you.

Every other page ranking for this query shows full resumes with a hundred words of generic praise. We do the opposite: fewer examples, deeper marks. You can read the whole thing without signing up for anything. The linter at the bottom is for the part where you turn around and do the same thing to your own resume — or start a fresh page in the same Jake’s format.

Chapter 0

Four bullets that read the same to every senior reviewer

The lines below are not strawmen. They appear, almost verbatim, on most ML resumes I’ve screened. Each one fails for a different reason.

  1. Trained a CNN for image classification (84% accuracy)

    Accuracy without a baseline is a number that lives in a vacuum. Was 0.84 a lift over a 0.32 popularity baseline, or a regression from the 0.87 the previous team shipped? The reader cannot tell. The architecture is named, but the rest of the work is invisible. The sketch after this list shows how cheap the baseline check is to run.

  2. Deployed ML model to production

    This sentence covers a Streamlit demo running on a free-tier instance and a system serving 10K req/s with on-call rotation. Without a runtime, traffic, latency, and ownership tenure, the recruiter assumes the weaker version. The bullet is doing harm.

  3. Optimized model inference for production

    Production ML is judged on three things: latency, throughput, and cost. A bullet that names none of them signals the work was conceptual. A senior reviewer reads it and moves on, and the rest of the resume gets a small credibility tax by association.

  4. PyTorch, TensorFlow, JAX, ONNX, Keras, scikit-learn, XGBoost

    A list of frameworks tied to nothing is a vocabulary list. The fix is not to delete tools; it is to attach two of them to a measurable outcome. “PyTorch + Triton, 240ms p95” is a bullet. The line above is decoration.
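
A concrete version of failure #1, since the baseline check costs almost nothing to run: a minimal sketch in Python, with made-up numbers, showing how a reported 0.84 accuracy can sit below the majority-class floor of an imbalanced eval set.

    import numpy as np

    def majority_baseline_accuracy(y_true):
        """Accuracy of always predicting the most frequent class."""
        _, counts = np.unique(y_true, return_counts=True)
        return counts.max() / len(y_true)

    # Hypothetical eval labels: ~86% of examples belong to one class.
    rng = np.random.default_rng(0)
    y_true = (rng.random(10_000) < 0.86).astype(int)

    baseline = majority_baseline_accuracy(y_true)  # ~0.86
    reported = 0.84                                # the bullet's unanchored number
    print(f"majority-class baseline: {baseline:.2f}")
    print(f"reported accuracy:       {reported:.2f}  (below the baseline)")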

Chapters I – III

Five resumes, read closely

Each résumé is rendered the way it would be sent: Jake’s template, single page, compressed. The notes in the margin are mine. Bullets that work get a brief acknowledgement — there’s no need to flatter them, just to point at why they work. Bullets that don’t are rewritten in front of you.

Senior ML engineer at FAANG

Five years post-PhD, two production launches, one platform-level call. The bullets that landed this resume share a pattern: every metric has a baseline, every system has a serving stack, every claim names what was owned.

Priya Iyer

priya.iyer@email.com | linkedin.com/in/piyer | github.com/piyer-ml

Education

Stanford University
PhD, Computer Science (Information Retrieval) | 2021

Experience

Quanta (Search Ranking) | 2023 – Present
Senior ML Engineer | Mountain View, CA
  • Owned the L2 ranker for the global search surface; raised nDCG@10 from 0.41 to 0.49 on a 280M-query offline eval and +1.7% revenue per session at p<0.01 in a 7-week A/B across 1.4B sessions.
  • Trained a 350M-parameter cross-encoder on 1.2B click pairs in PyTorch with FlashAttention-2; cut training cost from 12k to 3.8k A100-hours through gradient checkpointing and mixed-precision FSDP.
  • Cut p99 ranker latency from 240ms to 92ms on 8K req/s by quantizing the model to int8 with TensorRT and adding a feature-cache hot-shard; held quality regression to <0.4% nDCG.
  • Led the offline-online gap investigation across 4 quarters; identified label leakage from query-rewrite features that inflated offline gains by ~30% and shipped a holdout protocol now used by 6 ranking teams.
  • Mentored 3 mid-level engineers through full launch cycles and authored the team's launch-quality review doc adopted by all 12 ranking projects in 2024.
  • Built and open-sourced an internal eval-harness library used by 12 ranking projects in 2024; cut new-experiment setup time from 3 days to 2 hours and standardized 4 quality regression checks across the org.
  • Drove the 2024 ranking-stack consolidation: merged 3 forks of the L1 retrieval model into a single training pipeline; reduced training-engineering on-call load by 41% while preserving model-team independence.
  • Built the team's counterfactual replay framework on top of the search log archive; replaced 3 weeks of bake-in testing with 4 hours of replay across 12M production queries, now used by 5 ranking teams.
  • Drove the L1 retrieval switch from BM25 to a learned sparse retriever (SPLADE-v2); +0.6% nDCG@10 on the global eval and a 14% reduction in query-time index size.
Quanta (Ads ML) | 2021 – 2023
Machine Learning Engineer | Mountain View, CA
  • Recalibrated the ads click model's serving-path probabilities with isotonic regression; cut calibration error from 4.1% to 1.3% across 20 ad verticals and retired the weekly manual bid-adjustment pass.
  • Built a multi-task learning architecture sharing embeddings across click and conversion heads; +0.9% revenue at p<0.05 on 480M daily impressions, with 2.1x training throughput vs the prior two-head split-tower.
  • Owned weekly retraining cadence for 3 production models across 14 months with zero rollbacks attributable to training-pipeline issues.
  • Built feature-freshness monitoring for 9 conversion features across the funnel; cut feature drift incidents from 6/month to 1/month over 8 months and removed the manual freshness audit from the launch checklist.
  • Drove the migration from a TensorFlow 1.x training stack to PyTorch + Ray; cut wall-clock training time on the click model from 14h to 4.2h on the same A100 footprint and unblocked 3 follow-on architectural changes.
  • Built the team's first online learning pipeline that incrementally retrained the click model every 6 hours; replaced the weekly manual retrain and cut staleness-related accuracy drift by ~40% on the offline replay set.

Technical Skills

Frameworks: PyTorch, TensorRT, FlashAttention, FSDP
Infra: Ray, Kubeflow, Vertex Pipelines, Argo
Languages: C++, Python, Triton, CUDA

Takeaway

Senior ML resumes do not need to list more bullets. They need bullets that name the thing that was owned, the metric that moved, and the scope it moved across.
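
The headline bullet above leans on nDCG@10 twice: the 0.41-to-0.49 lift and the <0.4% regression budget. For readers who want the metric rather than the resume line, here is a minimal sketch of nDCG@10 on toy relevance labels. The numbers are illustrative, and the ideal ordering is taken from the same top-10 labels, a simplification of the full judged-pool IDCG.

    import numpy as np

    def dcg_at_k(relevances, k=10):
        rel = np.asarray(relevances, dtype=float)[:k]
        discounts = np.log2(np.arange(2, len(rel) + 2))  # log2(rank + 1)
        return float(np.sum(rel / discounts))

    def ndcg_at_k(relevances, k=10):
        # Linear-gain form; ideal ordering computed from the same labels.
        ideal = np.sort(np.asarray(relevances))[::-1]
        idcg = dcg_at_k(ideal, k)
        return dcg_at_k(relevances, k) / idcg if idcg > 0 else 0.0

    # Graded relevance of the top 10 results for one query, under the old
    # ranker and the new one: same judged documents, better ordering.
    old = [3, 0, 1, 0, 2, 0, 0, 1, 0, 0]
    new = [3, 2, 0, 1, 1, 0, 0, 0, 0, 0]

    print(f"baseline nDCG@10: {ndcg_at_k(old):.2f}")
    print(f"new      nDCG@10: {ndcg_at_k(new):.2f}")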

MLOps engineer at AI startup

An MLOps resume is judged on infrastructure ownership, not model accuracy. The bullets here lead with cycle times, deployment cadence, and on-call ownership. Every model accuracy number is contextualized as something a model team produced and the platform served.

Daniel Wong

daniel.wong@email.com | linkedin.com/in/dwong | github.com/dwong-mlops

Education

University of Michigan
BS, Computer Engineering | 2018

Experience

Liminal AI | 2024 – Present
MLOps Engineer (founding platform) | Remote
  • Built the model registry, training orchestration, and online serving stack from zero on Kubeflow + MLflow + Triton; 22 models in production by month 9, with weekly deploy cadence and zero unplanned model rollbacks.
  • Cut model training-to-serving cycle from 9 days to 36 hours by replacing a manual handoff with a CI-gated workflow (gitops + Argo Rollouts); freed two model engineers to ship 4 additional launches per quarter.
  • Set up drift, freshness, and quality monitoring across 22 models with PagerDuty integration; caught a 12% feature distribution shift on the fraud model 6 hours before the model team and prevented a known-bad rollout.
  • Standardized model telemetry in Datadog: per-model latency, error-rate, and prediction-volume dashboards generated from the registry, so every new model ships with monitoring wired on day one.
  • Owned on-call for the model platform across 4 quarters; mean time to detect 6 minutes, mean time to mitigate 28 minutes across 31 model-related incidents.
  • Wrote the post-incident review template adopted by the eng org; ran 7 reviews across 2024 that produced 14 platform fixes shipped within 2 weeks of each incident.
  • Designed and rolled out a shadow-deploy framework on top of Argo Rollouts; standardized canary criteria across 22 models and removed the per-team rollout playbook entirely.
Photon Logistics | 2021 – 2024
Senior Software Engineer (ML Infra) | New York, NY
  • Migrated 9 forecasting models from a manual SageMaker setup to a Vertex Pipelines + Feature Store stack; cut feature drift incidents from 2.4 per month to 0.3 per month.
  • Built a shared eval harness with reproducible holdout splits and automated regression alerts; adopted by 4 model teams and now blocks 100% of model promotions.
  • Owned on-call for 9 forecasting models across 4 quarters; mean time to mitigate 22 minutes across 18 model-related incidents, with 0 customer-facing outages attributable to the model platform.
  • Migrated the model registry from S3-backed manifests to MLflow with drift-aware promotion gates; blocked 6 known-bad rollouts in the first quarter and removed the manual promotion review from the release process.
  • Coached 4 model engineers through the platform onboarding flow; cut their first-deploy time from 11 days to 3 days on average and authored the runbook now used by all incoming model teams.

Technical Skills

Platform: Kubeflow, MLflow, Argo, Triton, Vertex AI
Languages: Go, Python, Bash, Terraform
Data: Postgres, Snowflake, Kafka, Datadog

Takeaway

MLOps bullets describe pipelines, registries, and incidents. If you find yourself listing accuracy numbers, you are writing an ML engineer bullet, not an MLOps bullet.
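
One bullet above credits the platform with catching a 12% feature distribution shift hours before the model team. The page doesn't say how Liminal's monitor works; a common implementation of that kind of check is the population stability index, sketched here on synthetic data with the conventional 0.2 alert threshold.

    import numpy as np

    def psi(reference, live, bins=10):
        """Population stability index of one feature between two samples."""
        edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range values
        ref_frac = np.histogram(reference, edges)[0] / len(reference) + 1e-6
        live_frac = np.histogram(live, edges)[0] / len(live) + 1e-6
        return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, 50_000)   # feature as seen at training time
    live = rng.normal(0.6, 1.0, 5_000)         # drifted live traffic

    score = psi(reference, live)
    print(f"PSI {score:.3f}", "-> page on-call" if score > 0.2 else "-> ok")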

Intermission

Same review, your résumé

The marks on the right are written by the same reviewer that flags the bullets above — line by line, on every line of your résumé.

Click any of the three bullets to expand it. No rewrites. No keyword games. Just the missing pieces.

resume.md — three weak bullets

  line 11 · Improved model accuracy by 15%
  line 12 · Deployed ML model to production
  line 13 · Optimized model inference for production

Applied scientist / research scientist

Applied scientist resumes balance two halves: production and publications. The split is explicit here. Production bullets get the same metric-method-scope discipline as an ML engineer resume. Publications get venue, year, authorship position, and a downstream signal.

Marisol Reyes

m.reyes@email.com | scholar.google.com/citations?user=mreyes | github.com/mreyes-ml

Education

Stanford University
PhD, Computer Science (NLP / Calibration) | 2023

Experience

Polaris (Conversational AI) | 2023 – Present
Applied Scientist II | Seattle, WA
  • Designed and shipped the safety classifier for the consumer assistant: 12-class taxonomy, fine-tuned 1.3B-parameter encoder, F1 0.87 vs 0.71 prior production model, deployed at 4K req/s with p95 220ms.
  • Reduced false-positive rate from 6.4% to 1.9% via two-stage gating with calibrated thresholds; preserved recall at 0.93 on a stratified 180k-example holdout reviewed by 4 human raters.
  • Ran the human-rated response-quality eval for the assistant's v3 decoding rollout; 58% win rate over v2 on a 4k-prompt stratified eval, the gating criterion for the ramp to 100% of traffic.
  • Authored the team's evaluation rubric for safety releases; now blocking criterion for 100% of assistant launches across 3 product surfaces.
  • Trained the v2 reward model on 42k pairwise human preferences; preferred-response rate 67% to 72% on a held-out 6k-pair eval, with calibration error <2.1% across 12 intent buckets.
  • Owned weekly safety regression review across 4 model teams; flagged and held back 2 launches with hidden FP-rate spikes that the team's automated eval missed.
  • Mentored 2 incoming applied scientists through their first launch cycle; both shipped models within 14 weeks (vs the team median of 22 weeks for new hires).
  • Designed the team's red-team prompt suite for safety releases; 1.4k adversarial prompts across 12 risk categories, expanded every release cycle and now a required input for every assistant launch.
  • Wrote the assistant's response-streaming safety spec covering token-level filtering; cut over-refusal rate from 8.7% to 3.2% on a 14k-prompt benign-baseline eval without regressing the safety classifier's flagging rate.
Stanford NLP Lab | 2018 – 2023
PhD Researcher | Stanford, CA
  • Dissertation: calibration of large language model classifiers under distribution shift. Released 3 benchmarks now used by 14 follow-up papers.
  • Published 7 papers (2 first-author at NeurIPS, 1 first-author at ACL); 1,200+ citations.
  • First-author NeurIPS 2023 paper “Calibrated thresholds for LLM safety classifiers”; 410 citations; calibration code adopted by the Anthropic safety team.
  • First-author ACL 2022 paper “Distribution-shift-aware fine-tuning for instruction-tuned models”; 280 citations; cited in the production calibration spec at Polaris.
  • Co-authored NeurIPS 2024 “Mixture-of-experts routing under bandit feedback” (2nd of 4); routing logic shipped at Polaris with +12% throughput on long-tail intents.
  • Built and open-sourced the calibench evaluation harness for LLM calibration; 4.1k GitHub stars, used by 9 industry labs, cited in 22 follow-up papers.
  • Co-organized the NeurIPS 2022 Workshop on Distribution Shift in NLP (160 submissions, 320 attendees); served as area chair for the calibration track and curated the best-paper shortlist.

Technical Skills

Frameworks: PyTorch, JAX, HuggingFace, vLLM
Research areas: Calibration, distribution shift, RLHF evaluation

Takeaway

If your dissertation work shipped, write it as a production bullet. If it did not, write it as a publication line. Do not blur the two with hand-waving like 'research that informed product.'

Entry-level / new-grad ML engineer

With under 2 years of experience, you cannot fake production scale. What you can do: be the most specific candidate in the stack. Name architectures, dataset sizes, baselines beaten, and ranks. A Kaggle bronze with team size and a class project that beat a public benchmark are worth more than five vague internships.

Aarav Mehta

aarav.mehta@email.com | linkedin.com/in/aaravm | github.com/aaravm-ml

Education

University of Toronto
BS, Computer Science (Machine Learning specialization) | 2025

Experience

Ladera (Computer Vision) | Summer 2024
ML Engineering Intern | Toronto, ON
  • Fine-tuned a YOLOv8-m detector on 64k internal warehouse images; mAP@0.5 0.79 vs 0.62 ImageNet-pretrained baseline; shipped the model to a 200-camera staging fleet for shadow evaluation.
  • Cut detector inference latency from 38ms to 14ms per frame on Jetson Orin via int8 PTQ with TensorRT; <1.2% mAP regression on the validation split.
  • Packaged the detector training pipeline into the team's CI-driven retraining workflow; nightly retrains on newly labeled batches ran unattended for the final 6 weeks of the internship.
  • Built the evaluation pipeline that compared 3 candidate detector backbones on the staging fleet's 14-day shadow log (1.8M frames); produced the comparison memo that drove the v1 model selection.
  • Wrote the team's first labeled-data quality script catching duplicate, mislabeled, and out-of-distribution images; flagged 4.2% of the seed dataset for review and shipped a fixed split before the v1 training run.

Projects

kg-rerank | Open source · github.com/aaravm-ml/kg-rerank
  • Two-tower retriever + cross-encoder reranker on the BEIR FiQA benchmark; nDCG@10 0.41 vs 0.36 SOTA reported in BEIR paper; 800 GitHub stars, used by 2 downstream papers.
  • Reproducible training in 2.5 hours on a single A100 with Hydra configs; CI runs the full eval on every PR and posts a regression comment if nDCG drops more than 0.5%.
  • Wrote the project's launch post (HN front page, 1.2k newsletter signups in 2 weeks); follow-up post on hard-negative mining cited in 2 grad-level course readings.
Otto Recommender (Kaggle, 2023) | Top 4% (rank 178 of 4,217 teams)
  • Solo bronze on a 12.9M-session dataset; co-visitation matrix + GBM reranker; competition metric (recall@20) 0.581 vs 0.566 public-leaderboard median.
  • Open-sourced the full pipeline (feature engineering, training, inference) on GitHub; 240 stars and 5 forks within 8 weeks of competition close.
uw-cv-bench | Course capstone, CSC2547
  • Benchmarked 6 detection-head architectures on the UTokyo street-scene dataset; reproduced and beat the published Faster-RCNN baseline (mAP 0.39 vs 0.36) using a tuned RetinaNet variant.
  • Selected as the top capstone of the cohort; presented at the department research showcase to 80+ attendees.

Technical Skills

Frameworks: PyTorch, JAX, TensorRT, Hydra
Eval: Recall@k, mAP, nDCG, ROUGE

Takeaway

Lead with measurable wins, even if they are validation-set wins. Avoid 'exposure to' or 'assisted with' phrasing entirely. The strongest junior bullets read like the project section of a senior resume.

Senior MLOps / platform ML

Senior platform-ML resumes need to show that you owned an organization-level decision. Tool choice, eval framework, model registry design. The bullets here describe systems that other ML engineers depend on, not models the writer trained.

Halima Osman

halima.osman@email.com | linkedin.com/in/halimaosman | github.com/halima-platform

Education

Imperial College London
MEng, Computing | 2017

Experience

Solstice Bank | 2022 – Present
Staff ML Platform Engineer | London, UK
  • Owned the design and rollout of the bank-wide ML platform: Vertex AI Pipelines + an in-house feature store + a Triton-based serving fleet. Onboarded 9 model teams and 31 production models in 18 months.
  • Authored the model-risk eval framework adopted by the model risk management committee; now blocking criterion for 100% of model promotions across consumer credit, fraud, and AML.
  • Cut average onboarding time for a new model team from 6 weeks to 5 days via templated training + serving scaffolds; survey CSAT 8.7/10 across 9 onboarded teams.
  • Reduced GPU spend 38% (£2.1M annualized) by migrating the fraud model serving fleet from p4d to g5 instances and adding feature-cache hot-shards; held p99 latency at 110ms with no quality regression.
  • Standardized the platform's infrastructure as shared Terraform modules across all 9 model teams; new-environment provisioning went from 2 days of hand-rolled config to a 30-minute templated apply.
  • Mentored 5 senior MLOps engineers across 4 teams; 3 promoted to staff or principal track.
  • Negotiated and led procurement of the bank's first dedicated GPU cluster (24× H100); designed the chargeback model that allocated capacity across 9 model teams with zero quarter-on-quarter contention escalations.
  • Authored the bank's ML model-risk policy v2 (approved by the model risk management committee); defined documentation, validation, and post-deployment review requirements now enforced across all three lines of defense.
Ribbon (formerly DataMint) | 2018 – 2022
Senior MLOps Engineer | London, UK
  • Built the company's first model registry and CI/CD path; reduced incident-related rollbacks from 7 in 2019 to 1 in 2021 across 14 production models.
  • Owned on-call for the model platform during the company's Series C scale-up (4x model count, 6x request volume); zero customer-facing model incidents in the highest-growth quarter.
  • Designed the company's first model deployment SLO ladder (P0/P1/P2 tiers tied to revenue impact); used to prioritize on-call response across 14 production models.
  • Built the offline-online drift dashboard now monitored by 3 ML teams; caught 4 silent model regressions in 2021 that no automated alert had flagged.
  • Authored the company's incident post-mortem template; adopted by infra and platform orgs across 3 BUs and used in 22 reviews through 2022.
  • Built the company's first feature store on Postgres + dbt; eliminated feature duplication across 4 teams and cut average feature time-to-production from 2 weeks to 3 days.
  • Led the migration of 8 production models from a hand-rolled Flask serving stack to KServe; cut median model deploy time from 4 days to 6 hours and standardized canary rollout across the org.

Technical Skills

Platform: Vertex AI, Triton, Argo, Ray, Kubeflow
Languages: Go, Python, Terraform, SQL
Governance: Model risk, eval frameworks, regulator-facing audits

Takeaway

If your bullets read 'I trained X' instead of 'X teams shipped Y because of the platform I owned,' you are competing for ML engineer roles, not platform-ML staff roles.

Chapter IV

Patterns that hold up

The seven things that appear in every annotated example above. If your bullets miss two or three of these, that is the rewrite list. The same frame applies, line by line, to any machine learning resume bullet.

  1. Metric, method, scope

    A defensible bullet has a named metric (with a baseline), a method (the model class, the validation strategy, the serving stack), and a scope (dataset size, request volume, ownership tenure). Miss any one and the line is challengeable in interview.

  2. Baseline framing

    Numbers without baselines are noise. F1 0.84 is great if the baseline is 0.62 and embarrassing if the baseline is 0.91. Always show the before-to-after on the same eval set, even if the lift is small.

  3. Scale annotation

    Dataset size, training compute, request volume, ownership tenure. Scope is what makes a metric credible. A bullet that names “320K images” or “1.4M DAU” or “5,000 A100-hours” lands; one without scope reads as a homework assignment.

  4. Latency, throughput, cost

    Production ML lives or dies on these three. Bullets that disclose p95 latency, req/s, or dollars-per-1K-inferences signal the work served real traffic. Bullets that omit them invite the assumption that the work never left a notebook.

  5. Architecture name credit

    “Fine-tuned ResNet-50,” “trained a 350M-parameter encoder,” “gradient-boosted ranker on 12M rows” all show the writer understood the choice. “Trained a model” is the weakest verb a candidate can put on the page.

  6. Quality regression named

    When a latency or compression win discloses the regression budget — <0.4% nDCG drop, <0.6 perplexity drift — the bullet earns trust. Latency wins that hide a quality drop are the most common bait-and-switch on ML resumes, and senior readers look for the disclosure.

  7. Publications and production, separately

    Applied scientists need an explicit split. Publications get venue, year, authorship position, and a downstream signal. Production gets the same metric-method-scope discipline as an ML engineer bullet. Blurring the two reads as padding.

A worked example

“AUC 0.71 to 0.84 on 2.3M holdout examples via gradient boosting + hand-engineered features.”

Metric — AUC 0.71 to 0.84, named with a baseline. Method — gradient boosting with hand-engineered features, defensible against the obvious follow-up about feature design. Scope — 2.3M holdout examples, large enough to mean something. The same frame applies in an adjacent role: a software engineer resume bullet swaps the model class for a system, but the structure is the same.
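
Here is the same shape end to end, on synthetic data: a linear baseline and a gradient-boosted model scored on one shared holdout. Everything below is illustrative (the data generator is deliberately rigged with an interaction and a quadratic term the linear model cannot represent), but the printed line is the bullet, produced directly by the eval code.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20_000, 6))
    # Nonlinear ground truth so the boosted trees have something to find.
    logits = X[:, 0] * X[:, 1] + X[:, 2] - 0.5 * (X[:, 3] ** 2 - 1)
    y = (logits + rng.normal(scale=0.5, size=len(X)) > 0).astype(int)

    X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
    baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    auc_before = roc_auc_score(y_hold, baseline.predict_proba(X_hold)[:, 1])
    auc_after = roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1])
    print(f"AUC {auc_before:.2f} -> {auc_after:.2f} "
          f"on {len(y_hold):,} holdout examples via gradient boosting")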

WHAT PEOPLE SAY

Real feedback from engineers who used it

From Reddit threads, LinkedIn posts, and DMs.

Akhil Ajith
Data Scientist - Senior Consultant @KPMG Ireland
LinkedIn · 1w

This is super helpful! Thank you for letting me try it. Usually we neglect some low-quality bullets and keep them anyway, and eventually they hit you back in interviews. This tool has helped me refine those points by asking the right questions, and it's definitely not another AI slop. PS: the review is harsh and there's no magic rewrite, so prepare to put some time into it.

Asif Hassam
Founder @ Ziao Coding Bootcamp | Cape Town
LinkedIn · 1w

Absolutely brilliant. I run coding bootcamps and recommended this tool to all my students. The ones who use it always score more interviews than the ones who don't. It's a great tool for any developer who is job hunting.

Reddit · r/resumes
u/Sea-Cranberry-2440d · 5 YOE Backend Engineer | US · 1 day ago

I just tried it. I really like the idea of providing feedback and suggested changes. The feedback seems pretty sensible as well. I think adding an optional rewrite option to show an example of what each line should look like would be nice. I'll update my resume based on the suggested feedback to see if I can get better results. Thank you.

Reddit · r/learnmachinelearning
u/rival-bixb · 3 YOE Machine Learning Engineer | Canada · 4 days ago

Not gonna lie, I was using this tool as a resume builder because it does clean Jake's resume template exports for free, but I gave the linting feature a go a month back, and it's probably the best thing in this tool. Sure, any LLM can generate feedback, but very few do localised ones, and on top of that it makes you correct the point by asking you to justify it. The only downside is this process takes time, but honestly, it really helped me get my resume sorted.

Prajjwal Patel
AI automation Engineer | TCS
LinkedIn · 1w

I used to use Claude and Overleaf: every time there was a new opening, I would generate LaTeX code, move it to Overleaf, and track it via a Notion page. Now I do all of that in Rejectless. The linting feature is probably the highlight because it really makes you think; the feedback is very specific and the fix is only applied after it validates, which is really good.

Chapter V

Questions

Do I need a PhD for an ML engineer resume?

No. A PhD goes in education, not in your bullets. What matters is what you shipped or measured. If your dissertation produced a public benchmark, a paper, or a deployed system, write that as a project bullet with the same metric and scope you would use for industry work.

Should I list Kaggle competitions?

Only if you placed top 10% in a competition with at least 1,000 teams, or if Kaggle is your only proof of model work (junior or transitioning). Always include rank, team size, and the metric you optimized for. A bare 'Kaggle participant' line hurts more than it helps.

Where should the GitHub link go?

In the contact header, next to email and LinkedIn. Make sure the linked profile has at least one project with a real readme, results, and a runnable notebook or training script. A dead GitHub link is worse than no link.

One page or two for senior ML?

One page if you have under 8 years of experience or fewer than two production ML systems shipped. Two pages only if you have a long publication record or have led multiple cross-team ML launches. Recruiters skim. Density beats length.

How do I quantify research papers vs production work?

List up to five papers in a 'Selected publications' section with venue, year, your authorship position, and one short outcome line (citations, downstream use, shipped product). Production bullets stay in the experience section with baselines, dataset size, latency, and scale. Do not blur the two.

What's the difference between an ML engineer and applied scientist resume?

ML engineer resumes lead with shipped systems, latency, throughput, and serving stack. Applied scientist resumes lead with experimental rigor, baselines beaten, and (often) publications. Same metric-method-scope frame, different emphasis. The applied scientist example above shows the split.

How do I quantify model improvements without production metrics?

Use validation metrics with baselines: F1, AUC, nDCG, RMSE, perplexity. Always show before to after, dataset size, and validation method (k-fold, time-based holdout, stratified split). If you have no live traffic, name the offline benchmark and what beat it.
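
A minimal sketch of the time-based holdout named above, on a hypothetical event log (the DataFrame and its columns are made up). The split size it prints is exactly the scope number the bullet should cite.

    import numpy as np
    import pandas as pd

    def time_based_split(df, holdout_frac=0.2):
        """Train on the past, evaluate on the most recent slice.
        No shuffling, so the future cannot leak into training."""
        df = df.sort_values("timestamp")
        cutoff = int(len(df) * (1 - holdout_frac))
        return df.iloc[:cutoff], df.iloc[cutoff:]

    # Hypothetical event log: one labeled example per row, in time order.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "timestamp": pd.date_range("2024-01-01", periods=10_000, freq="min"),
        "feature": rng.random(10_000),
        "label": rng.integers(0, 2, 10_000),
    })

    train_df, holdout_df = time_based_split(df)
    print(f"train {len(train_df):,} / holdout {len(holdout_df):,}")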

Should I name specific frameworks like PyTorch, TensorFlow, or JAX?

Yes, but only the two or three you used in shipped or deployed work. Tie each to a measurable outcome. 'PyTorch + Triton, 240ms p95' is a bullet. 'PyTorch, TensorFlow, JAX, ONNX, Keras, scikit-learn, XGBoost' is a buzzword cloud.
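
And for the latency half of a line like 'PyTorch + Triton, 240ms p95', a minimal sketch of where a p95 number comes from: time many single-request inferences after a warmup and take the 95th percentile. The model here is a stand-in, not a claim about any stack named on this page; on a GPU you would also synchronize before reading the clock.

    import time
    import numpy as np
    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(512, 512), torch.nn.ReLU(), torch.nn.Linear(512, 1)
    ).eval()
    x = torch.randn(1, 512)

    with torch.no_grad():
        for _ in range(50):                  # warmup: first calls pay one-time costs
            model(x)
        latencies_ms = []
        for _ in range(1_000):
            start = time.perf_counter()
            model(x)                         # add torch.cuda.synchronize() on GPU
            latencies_ms.append((time.perf_counter() - start) * 1e3)

    print(f"p50 {np.percentile(latencies_ms, 50):.2f} ms | "
          f"p95 {np.percentile(latencies_ms, 95):.2f} ms")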

Further reading

If this was useful, the same review approach lives on software engineer resumes, backend and frontend pages, and a dedicated junior SWE review. If you’d rather start from a clean template, Jake’s resume builder gives you the format every example on this page uses, without LaTeX.

Coda

Two ways to start — your turn.

Paste your résumé and get the same line-by-line marks the examples got — no rewrites, no ATS games, no AI voice. Or start a fresh one in the same Jake’s format if the page you have is past saving.

© 2026 Rejectless