— Methodology · The Trust Document

How Considered Gear works.

Most gear coverage on the internet falls into two buckets: paid affiliate listicles written from press releases, and individual reviewers writing about the small slice of gear they personally own. We sit between those.

We read the public record on a category — long-form ownership posts, multi-year revisits, organic forum threads, platform reviews — and we synthesize what owners actually say. We rank the contenders on a transparent rubric. We tell you what's worth your money and what isn't.

We have not personally tested most of the gear we write about. We've read the people who have, sorted the signal from the marketing, and stack-ranked accordingly. When a piece is a personal review of gear we own, it's marked clearly at the top.

That's the whole model. The rest of this page is the math and the rules behind it.

What we do

  • Collect public review and ownership content on a category from multiple sources
  • Weight long-term ownership content (1+ years) above first-impression reviews
  • Weight organic forum and long-form posts above platform-native reviews
  • Discount anything that reads like vendor or affiliate copy
  • Score each contender on four axes (below) and combine them into a composite
  • Publish a stack-ranked shortlist with the math visible and the sources counted
  • Refresh each article on a schedule and date the last refresh
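The weighting rules above reduce to a multiplier applied to each source before it enters the scoring. A minimal sketch follows; the specific multiplier values are illustrative assumptions, not our published constants:

```python
def source_weight(source_type: str, ownership_years: float, reads_like_ad: bool) -> float:
    """Relative weight of one piece of review content.

    Multiplier values are illustrative placeholders, not published constants.
    """
    weight = 1.0
    if ownership_years >= 1:              # long-term ownership beats first impressions
        weight *= 2.0
    if source_type in ("forum", "blog"):  # organic long-form beats platform-native reviews
        weight *= 1.5
    if reads_like_ad:                     # discount vendor- or affiliate-sounding copy
        weight *= 0.25
    return weight
```

So a two-year ownership post on a forum would count six times as heavily as a week-one platform review that reads like ad copy, under these example multipliers.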

What we don't do

  • Accept payment for placement, ranking, or inclusion
  • Take review units from manufacturers
  • Rank products we couldn't find sufficient organic evidence on
  • Write about a category without a clear point of view on what makes gear in that category good
  • Publish AI-generated copy. AI accelerates research and synthesis. The final piece is shaped by a human and meets a published voice standard. Drafts that read like AI get rewritten.

How we score

Each contender gets a composite score out of 100, built from four weighted signals.

Sentiment 30%

Net positive sentiment across the source set. We don't take star ratings at face value; we read the language. Five-star reviews praising packaging are worth less than four-star reviews praising the second year of ownership.

Longevity 30%

Frequency of multi-year ownership mentions and the language used to describe how the gear ages. A wallet praised at year three matters more than a wallet praised in week one. Categories where longevity isn't relevant get this weight redistributed to sentiment.

Failure rate 25%, inverted

Frequency of failure-mode mentions. Common failure points (a specific seam, a specific hinge) hurt the score more than scattered one-off complaints. The top-ranked product is rarely the one with no critics — it's the one whose critics agree on a small, specific issue rather than a structural one.

Repair-and-keep 15%

Frequency of mentions of repair, refurbishment, resoling, restitching, or hand-down. This signal correlates with the kind of gear we care about — stuff worth fixing instead of replacing.

Each signal is scored on a 0–100 scale; the composite is their weighted average, rounded to the nearest whole number.
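As a sketch, the four-axis composite above might be computed like this, assuming each signal arrives on a 0–100 scale and applying the longevity-weight redistribution described earlier:

```python
def composite_score(sentiment: float, longevity: float,
                    failure_rate: float, repair_and_keep: float,
                    longevity_relevant: bool = True) -> int:
    """Weighted composite out of 100. Inputs assumed on a 0-100 scale."""
    w_sentiment, w_longevity = 0.30, 0.30
    if not longevity_relevant:
        # Redistribute the longevity weight to sentiment
        w_sentiment += w_longevity
        w_longevity = 0.0
    score = (w_sentiment * sentiment
             + w_longevity * longevity
             + 0.25 * (100 - failure_rate)   # failure rate is inverted
             + 0.15 * repair_and_keep)
    return round(score)
```

For example, a contender scoring 80 on sentiment, 70 on longevity, 20 on failure-rate frequency, and 60 on repair-and-keep would land at a composite of 74.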

Where the data comes from

Today, our research engine pulls from Reddit (subreddit search and long-form ownership posts), Amazon reviews (surfaced and weighted, not taken at face value), and long-form blog posts and ownership reviews fetched via search.

In progress: YouTube transcripts of long-term review videos, manufacturer-independent forums (Buyforlife, MalePatternBoldness, Watchuseek, etc.), and longitudinal blog reviews where a writer revisits gear on a multi-year cadence.

We do not synthesize private community content. Anything we read is publicly accessible.

How often we update

  • Heritage and slow-moving categories — refreshed every 6 months, or when a major release shifts the shortlist
  • Faster-moving categories — refreshed every 90 days
  • Active deals or seasonal categories — refreshed monthly during peak season, dormant otherwise

Every aggregator article shows its last refresh date. If you're reading a piece older than the cadence above, treat the rankings as provisional.
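The cadence above amounts to a simple staleness check. The category labels and day counts here mirror the list, but the labels themselves are illustrative:

```python
from datetime import date, timedelta

# Refresh cadence in days per category type; labels are illustrative
CADENCE = {
    "heritage": 180,  # slow-moving: every 6 months
    "fast": 90,       # faster-moving: every 90 days
    "seasonal": 30,   # monthly during peak season
}

def rankings_provisional(last_refresh: date, category: str, today: date) -> bool:
    """True if the article is older than its category's refresh cadence."""
    return today - last_refresh > timedelta(days=CADENCE[category])
```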

What disqualifies a product

  • Fewer than 50 organic ownership mentions across our source set
  • All available reviews dated within the last 12 months (no longevity data possible)
  • A pattern of vendor-driven content with no organic counterweight
  • Active recalls or unresolved safety issues at the time of publication
  • A failure-mode mention rate above 30% past the one-year mark

We name disqualifications publicly in the "What we excluded and why" section of each article. We don't pretend a product doesn't exist.
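The disqualification rules above can be sketched as a single predicate. The parameter names are hypothetical, chosen for readability:

```python
from datetime import date, timedelta

def disqualified(mention_count: int,
                 oldest_review: date,
                 vendor_dominated: bool,
                 active_recall: bool,
                 failure_rate_after_year_one: float,
                 today: date) -> bool:
    """True if any disqualification rule applies.

    failure_rate_after_year_one is the share of post-year-one mentions
    that describe a failure mode, as a fraction (0.0-1.0).
    """
    return (
        mention_count < 50                              # too few organic mentions
        or oldest_review > today - timedelta(days=365)  # no review older than 12 months
        or vendor_dominated                             # vendor copy, no organic counterweight
        or active_recall                                # unresolved safety issue
        or failure_rate_after_year_one > 0.30           # failure mentions above 30% past year one
    )
```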

Affiliate links

Some links on Considered Gear are affiliate links, meaning we earn a small commission if you buy through them. An affiliate relationship has no effect on rankings. The composite score is computed from the source data before affiliate links are added, and we don't choose contenders based on affiliate availability. If a higher-ranked product has no affiliate program, it still ranks above a lower-ranked product that does.

When affiliate links are present, they're disclosed at the top of the article in a single sentence. We don't run banner ads, sponsorships, or "in partnership with" content.

How to challenge a ranking

If you've owned a product on (or off) one of our shortlists and your experience contradicts our scoring, we want to hear it. Email hello@consideredgear.com with the product, the issue, and the basis (years owned, what failed or held up). We don't promise to change a ranking based on a single email — that would be the opposite of the methodology — but we read every one, and patterns that show up in multiple corrections feed back into the next refresh.

Who runs this

Considered Gear is an independent publication. It's not venture-backed, not part of a media network, and not a side project of a larger commerce business. It earns its keep on a small affiliate revenue base and a newsletter, and it stays small on purpose.

Methodology v1.0 — initial publication. Four-axis composite, four-source set, 6-month / 90-day / 30-day refresh cadence by category type. Material changes to scoring, sources, or weights will be noted here with a date and a one-line description.