How iFixit Scores Repairability

What's a repairability score, and how is that calculated?

Everyone wants their favorite phone or laptop to get an awesome repairability score. So do we! Unfortunately, we don’t make the rules.

Wait a minute—we do make the rules! At iFixit we’ve been wrenching on gadgets of all kinds for two decades, so we’ve developed strong opinions about what makes something easy to fix, or not. The iFixit repairability score, first introduced all the way back in 2011, is a reflection of those values.

We’ve often hinted at all the things that make or break a repairability score—modular components good, reliance on adhesives bad, etc.—but as keen observers have noted, it’s a long and winding trans-dimensional hyperspace service tunnel from there to actual hard numbers on the iFixit scorecard. 

Well, strap in and hold onto your propeller cap, because today we’re going all the way down the tunnel. Let’s explore how iFixit repairability scoring actually works.

What Does “Repairable” Mean?

Critical Components: How sad are you if this breaks? And how often does this break? (PS: Search your feelings—you know this device.)

Before we can score anything, we have to define it. Under our working definition, a highly repairable device:

  1. Is straightforward to disassemble and reassemble, nondestructively and reversibly
  2. Requires only inexpensive, widely available tools for common fixes, and
  3. Gives priority access to critical components—those most crucial to device function and/or most likely to require service.

Those three things can get you pretty far on your way to a successful repair. But most people also need instructions and access to replacement parts. These last two things help complete the repair ecosystem—the environment needed for repairs to survive and thrive. Ideally, they’d be provided by the original equipment manufacturer (OEM).

But creating a robust repair ecosystem is easier said than done. iFixit arose because over the years, many OEMs struggled (or even refused) to provide the instructions and parts customers needed to keep their stuff working. So we figured out how to do it without them, or even for them, as often as possible. (Happily, we’re now regularly teaming up.)

And that’s still the core philosophy of repairability at iFixit: To be truly repairable, a device should be fixable by anyone—not just the manufacturer or its “authorized” technicians. Almost everything is technically “repairable” if you can ship it to a professional repair depot, equipped with highly trained technicians, expensive custom tools, and unlimited support from the manufacturer. While that’s a nice option to have, it doesn’t help if you need a repair quickly. Or if you’re in a far-flung location, or stuck at home during a pandemic. Or you have security/privacy concerns. Or it’s just prohibitively expensive. Or the OEM stopped supporting your device. A phone may be easier to ship than a tractor, but the principle remains the same—you should be able to fix it in the field.

Why Repairability Scoring Matters

Repair saves money, makes the most of our finite resources, and is better than recycling. The concept is simple enough to fit on a poster, but its implications are profound.

That said: judging repairability is complex, and most consumers don’t have the means to evaluate it for themselves before making a purchasing decision. So providing a repairability score is iFixit’s way of helping you choose more confidently when plonking down your coin.

What the Scores Mean

We assign a score for each device on a 10-point scale to represent its overall repairability:

  • 10/10 = Best in class. A very repairable device, with instructions and parts to match. We love to see it.
  • 5/10 or above is a respectable score. We try to calibrate our scorecard so that below 5/10 is about where things start to tip from “You can probably fix this yourself” to “You might wanna consider calling a pro.” At 4/10 maybe you can still fix it, but it’s tougher than it needs to be.
  • 1/10 = Most difficult (though not necessarily impossible) to repair. Practically or economically speaking, many repairs probably aren’t viable.
  • 0/10 usually means disassembly is catastrophically destructive or impossible. Don’t get your hopes up. You’ll probably do more breaking than fixing.

Scoring Breakdown

To calculate a score, our engineers have to fully disassemble the device and record the results, using a rubric that accounts for every action, tool, and obstacle in the process. So, there’s a lot rolled into that little number—let’s break it down.

Our baseline score considers three things: the service manual, availability of replacement parts, and the design of the product itself, a.k.a. design for repairability. We’ll take these in order from simplest to most complex.

The Service Manual

10% of the iFixit score—that is, one point out of ten overall—is reserved for the OEM service manual. Successful repairs need instructions, and those should be provided by (or with help from) the team that makes your device and knows it best.

Although it’s tempting to judge a service manual’s quality, our scorecard simply checks for completeness. We wouldn’t want any OEM to withhold a service manual out of concern that it’s not polished enough for general consumption.

So, the first requirement is simply to publish something. To get any credit at all, the service manual must be available to the public, and free of charge—either on the OEM support site, or prominently linked therefrom. It can’t be hidden away, require registration to access, or be locked behind a paywall. If the OEM has contracted a third party to write and/or host the service manual, that’s fine—but it should be easy to find, and carry the OEM’s seal of approval.

Beyond simply existing, for maximum credit this documentation needs to include all the traditional elements of a service manual:

  • Replacement procedures for all critical components
  • A list of required tools
  • A parts list, with unique part numbers (or other means of checking compatibility)
  • An exploded diagram, to aid in visually identifying said parts
  • Troubleshooting procedures
  • Schematics (board diagrams) 

Each of these elements is weighted, and we award partial credit even when some sections are missing. But taken together, these documentation elements sum up to one full point. In other words, without a free public service manual of some kind, the maximum achievable iFixit score for any device is 9.0/10.
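As a rough sketch of how this partial-credit model works, the documentation point can be thought of as a weighted checklist. The element weights below are illustrative assumptions for demonstration only, not iFixit's actual rubric:

```python
# Sketch of the one-point documentation score as a weighted checklist.
# These weights are made-up assumptions, not iFixit's real numbers.
DOC_WEIGHTS = {
    "replacement_procedures": 0.30,
    "tool_list": 0.10,
    "parts_list": 0.20,
    "exploded_diagram": 0.10,
    "troubleshooting": 0.15,
    "schematics": 0.15,
}

def documentation_score(published_elements: set[str]) -> float:
    """Return 0.0-1.0 points; zero if no free public manual exists at all."""
    if not published_elements:
        return 0.0
    return sum(w for name, w in DOC_WEIGHTS.items() if name in published_elements)
```

A manual containing every element earns the full point; one with, say, only a tool list and parts list earns partial credit; no public manual earns nothing, capping the device at 9.0/10.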

Replacement Parts

Tough repairs can still be worth it—if you can find a replacement part.

Another 10% of the overall score represents the availability of replacement parts from the OEM. 

Just like the service manual, we’re looking for parts offered directly from the OEM (or via a third party with a clear link and endorsement from the OEM support site). Manufacturers are responsible for telling customers where to get trusted parts. We love aftermarket parts—sometimes they’re better than the originals—but OEMs need official, publicly available parts to get credit.

Robust selection is also important. If the only replacement part offered is a battery, that’s better than nothing—but for a high score, more is required. Ideally, you could assemble an entire functioning device using only the components offered for sale by the OEM. 

Replacement parts must also be reasonably priced for repair to be viable. Studies show that if the cost of repair exceeds about a third of the price of a new product, many people won’t bother fixing it. (For consumer electronics, that threshold can be even lower.) And you need some margin for cost of labor, tools, and/or consumables. So, we primarily award points for replacement parts priced at 25% of MSRP or less (exclusive of tax and shipping, because those vary regionally). Any part costing more than 25% scores a little better than a part that’s not for sale at all—but, only a little.
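The 25%-of-MSRP rule can be sketched as a simple threshold function. The exact credit values below are assumptions chosen to illustrate the "a little better than nothing" distinction, not iFixit's internal figures:

```python
# Illustrative sketch of the 25%-of-MSRP pricing rule described above.
# The specific credit values (1.0 and 0.1) are assumptions for demonstration.
def part_price_credit(part_price, device_msrp):
    """Fractional credit for one replacement part, 0.0-1.0."""
    if part_price is None:          # part not sold by the OEM at all
        return 0.0
    if part_price <= 0.25 * device_msrp:
        return 1.0                  # reasonably priced: full credit
    return 0.1                      # for sale, but only a little better than nothing
```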

As a historical aside, although we’ve had our eye on this for a while, we only formally added replacement parts to the iFixit scorecard fairly recently—it’s factored in for many (but not all) of the devices we scored during the past year. We’re updating how repairability scores are displayed to help make that distinction clear. 

Also, it’s important to note that parts distribution is complex and varies by region, and it’s not uncommon for 3-6 months to elapse after a device first goes on sale before replacement parts make their way through the supply chain and become available to buy. When appropriate, we may give provisional credit for replacement parts that are not yet available, based on each OEM’s track record and/or commitments[1]. If our trust proves to be misplaced, we’ll adjust. The main thing is, we want the score to reflect what you, as a consumer, should expect when the time comes for a repair.

Design for Repair

The remaining 80% of the iFixit score is all about product design—that is, design for repairability. This is where many would-be repairs go awry. A skilled tinkerer can complete many repairs without instructions, or salvage replacement parts from other nonworking devices. But if the product itself resists your efforts, or can’t even be disassembled without damage, it’s probably destined for early retirement.

This is a devilishly tricky thing to score. Here’s how we do it.

First, we disassemble the entire device and draw a simplified map of how it comes apart—a disassembly tree. Here’s a fictional example of a really bad one:

A flow chart showing a long, completely linear repair path to a battery in a fictional device.
A highly linear disassembly/reassembly sequence complicates repairs.

Why is this bad? Well, just look at the path to the battery: it’s at the bottom of the stack, so in order to replace it, you have to remove one component to get to the next, and the next, until the entire device is disassembled and only the battery remains. Such a highly linear disassembly sequence means more time, more tools, and more chances for something to go wrong. It also greatly complicates troubleshooting—if you finish your repair and the device won’t boot, where is the fault? In this example, it could be almost anything. 

Instead, the best devices tend to have independent access to major components—a “shallow” disassembly tree:

A flow chart showing simple, independent repair paths pointing directly to all components.
A “flat” disassembly tree with independent access to critical components is ideal.

In this idealized example, after removing just the back cover, you can immediately replace the battery. Or, leave the battery alone and just replace the display. Or, ignore both of those and just replace the fan. Lots of repairs are possible with minimal disassembly, and you don’t have to risk damaging good components by removing them unnecessarily.

This is by no means easy to design for. In the real world, OEMs have a host of competing priorities to factor into their design—repair being just one. So most products look like a mix of the above examples: 

A more realistic flow chart showing straightforward repair paths to two components, with the others requiring longer paths.
Optimizing paths to critical components is a design challenge for OEMs.

Here, once the back cover comes off, you have immediate and independent access to two major components—the display and fan—but all other repairs require additional disassembly. The path to the battery runs through the fan, storage, and motherboard—not quite so bad as in the first example, but still not very well optimized.
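One way to picture this mixed example is to model the disassembly tree as a child-to-parent map ("what must come off before this part?") and walk it to find each repair path. The component layout below mirrors the fictional example above:

```python
# A disassembly tree modeled as child -> parent ("what must come off first").
# Mirrors the mixed fictional example: display and fan sit right under the
# back cover, while the battery is buried beneath fan, storage, and motherboard.
TREE = {
    "back_cover": None,
    "display": "back_cover",
    "fan": "back_cover",
    "storage": "fan",
    "motherboard": "storage",
    "battery": "motherboard",
}

def removal_path(component):
    """Everything that must be removed, outermost part first, ending at the component."""
    path = []
    node = component
    while node is not None:
        path.append(node)
        node = TREE[node]
    return list(reversed(path))
```

Here `removal_path("display")` is just two steps, while `removal_path("battery")` drags in the fan, storage, and motherboard along the way; in a "flat" tree, nearly every path would be two steps long.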

Since it’s usually not possible to optimize for all repairs equally, we assign weights to each component based on its overall importance for repair. Consumables like batteries—where replacement is inevitable, and the device won’t work without it—get weighted more heavily than, say, mechanical buttons. (Sorry, buttons.) The selection of critical components and their weights is unique for each product category—so all smartphones get evaluated on the same terms, with a different set of components and weights for evaluating laptops, etc.

Once we have our disassembly tree and weights for the most critical components, we map the path through the disassembly tree for each repair, noting the actions and tools required for each step, and begin the work of converting it all to numbers. 

The same repair path shown in the previous image, now converted into a numerical representation on a spreadsheet.

This part is time-based: how long does it take our teardown engineers, on average, to turn a screw, disconnect a cable, or separate and reapply adhesive. To keep the scores consistent from one device to the next, we use an extensive table of preassigned time intervals (known as proxy times) for common actions and design features. This way the timing we assign stays consistent regardless of the individual technician’s pace, skill level, or caffeine content. Add up all the proxy times on the path to any component, and you get a number that represents the overall difficulty of the repair.

Importantly, you can’t cheat by using a fancy tool to save time. Each tool used comes with its own scaling factor based on its cost, availability, required skill level, safety risks, and other considerations. When recording each action, we multiply its proxy time by the tool scaling factor.

This provides dual benefits:

  • Devices that can be disassembled with basic tools or even your bare hands will achieve a better score.
  • Devices that require proprietary tools for ordinary actions like turning a screw incur a penalty. (It may take the same amount of time to turn a Phillips screw vs. a pentalobe, but since most people don’t keep a pentalobe driver in the kitchen drawer, the scaling factor for a pentalobe is higher.)

We also record a small time penalty every time the repair procedure requires a tool change. The fewer tools required overall, and the fewer times you have to swap between them, the more time saved, and the better the score.
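Putting the last few paragraphs together, the difficulty of a repair path could be sketched as proxy times scaled by per-tool factors, plus a penalty per tool swap. Every number below (the tool factors, the 5-second swap penalty) is a made-up stand-in for iFixit's internal tables:

```python
# Hedged sketch of the time-based difficulty model: each action's proxy time
# is multiplied by its tool's scaling factor, plus a penalty per tool change.
# All values here are illustrative assumptions, not iFixit's actual tables.
TOOL_FACTOR = {"fingers": 1.0, "phillips": 1.2, "pentalobe": 1.8, "heat_and_pick": 2.5}
TOOL_CHANGE_PENALTY = 5  # assumed seconds added each time you swap tools

def repair_difficulty(actions):
    """actions: list of (tool, proxy_time_seconds), covering disassembly AND reassembly."""
    total = 0.0
    previous_tool = None
    for tool, proxy_time in actions:
        if previous_tool is not None and tool != previous_tool:
            total += TOOL_CHANGE_PENALTY
        total += proxy_time * TOOL_FACTOR[tool]
        previous_tool = tool
    return total
```

Note how the same 10-second screw-turning action costs more with a pentalobe driver than a Phillips, purely because of the scaling factor, and how alternating tools adds swap penalties on top.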

Finally, it’s important to note that we trace the entire repair path, including reassembly. It’s not enough to take something apart and then stop; to put it back in working order, you may need additional tools, time to reapply adhesives, etc. Some fasteners take longer to tease apart than to put back together, or vice versa. “Just do the reverse of disassembly” is good guidance for many repairs, but for repairability scoring it can give misleading results. 

In summary: to score design for repairability, we trace the path for replacing each critical component, note each disassembly and reassembly action taken, multiply its proxy time by the scaling factor for any tool required, and sum those products. The result falls within a predefined range corresponding to a specific score, which we then multiply by the weight representing the component’s overall importance; the sum of those weighted component scores makes up 80% of the overall score. (Tony Stark did this in a cave with a box of scraps, but we’re sorry, we’re not Tony Stark.)
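The summary above can be sketched end to end: map each component's difficulty total into a banded 0-10 sub-score, weight it by importance, and scale to the 8 points (80%) that design is worth. The difficulty bands and component weights below are illustrative assumptions only:

```python
# Sketch of the final aggregation step. Bands and weights are assumptions
# for illustration; iFixit's real ranges and weights differ per category.
DIFFICULTY_BANDS = [(60, 10), (120, 8), (300, 6), (600, 4), (1200, 2)]  # (max seconds, sub-score)
COMPONENT_WEIGHTS = {"battery": 0.4, "display": 0.3, "motherboard": 0.2, "buttons": 0.1}

def banded_score(difficulty):
    """Map a component's total difficulty (seconds) onto a 0-10 sub-score."""
    for ceiling, score in DIFFICULTY_BANDS:
        if difficulty <= ceiling:
            return score
    return 0  # beyond the worst band: no credit

def design_score(difficulties):
    """Weighted average of component sub-scores, scaled to the 8 available points."""
    weighted = sum(COMPONENT_WEIGHTS[c] * banded_score(d) for c, d in difficulties.items())
    return weighted * 0.8  # 10-point weighted average -> 80% of the overall score
```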

X Factors

If you’ve somehow read all the way to this point and find yourself thinking, “Well, that seems oversimplified,” you could not be more correct. Repair is a moving target, and we update our scoring criteria at least once annually in response to new designs, materials, technologies—and obstacles.

A major current concern is software calibration and parts pairing. If you install a perfect, genuine OEM replacement part and it doesn’t work because you can’t get access to the calibration or pairing software, that’s the same as no repair at all. We strongly believe that calibration and other software tools should be publicly available, and the latest version of our scorecard reflects that.

Parts pairing also defeats a common workaround for intrepid fixers who can’t get parts directly from the OEM, namely, piecing together a working device from other non-working ones. Our scoring penalty for this repair-stopper is severe, and it’s a global scorecard penalty—meaning any points gained in other areas like documentation or design may be lost entirely if enough routine repairs won’t actually work outside of the OEM’s “authorized” repair scenario.

(Careful testing and recording of parts pairing results adds a good deal of time to our scoring process, and in most cases we won’t be doing that retroactively. This is another area where we’re updating how repairability scores are displayed to help clarify differences in scope.)

Another repair-blocker raising eyebrows around the iFixit campus with increasing regularity is the lack of any straightforward identifying marks on many modern products. This may sound silly, but if you’re struggling to pinpoint the precise make and model of whatever widget in front of you needs fixing, it can be a real challenge to track down compatible parts and instructions. It’s nice if you can just boot into a settings menu and find your hardware configuration there—but in any repair scenario, the ability to do that is by no means a given. 

Like parts pairing, the presence of a unique product identifier has no effect on the score if it’s done right, and can only drag the score down if obscured or missing. This one isn’t a global penalty, but points awarded for documentation and parts availability may suffer slightly if those resources are difficult to locate because the device looks like an artifact from an Arthur C. Clarke novel.

And while it’s not yet integrated into our scorecard as of this writing, lately we’ve been preoccupied with software updates. Even if you have everything else you need, repair may not be worthwhile if the OEM stopped supporting your device with OS or security updates after only a couple years. 

We could continue, but you get the gist: Answering a simple question like “Can this be fixed or not?” is often not so simple. Trying to quantify that answer and reliably represent the degree of fix-ability on a 10-point scale—even less simple. We’ll continue refining our scorecard to capture as many aspects of the repair experience as possible. (And if you’re concerned about some other aspect you don’t see mentioned here, give us a shout.)

This One Goes to Eleven

Even though we’ve been doing repairability scoring since before it was cool, we’re gratified that in recent years other organizations have developed repairability scoring systems of their own. See for example the French Repairability Index, the JRC scoring system for repairability, or even the prEN45554 standard (that our own staff helped develop—and which is itself a prescription for how to make a really good repairability scoring system of your own).

Is the iFixit scoring system the best? Well, yes—and no. Repairability is a surprisingly difficult thing to capture with numbers, and we don’t think any scoring system does it perfectly—ours included. 

In the absence of any exact science of repair, a repairability scoring system is just an expression of the values and priorities of the people who created it. Even though many of the other scoring systems out there evaluate essentially the same things, they often weight or aggregate the results in very different ways, so the scores they produce are not comparable.

We’ve learned a lot in 20+ years of taking things apart and designing the tools to do so. Our scoring system aligns with that knowledge, is fiercely independent and DIY-oriented, and sets the bar high for OEMs—just how we like it.


[1]: See Understanding Provisional Repairability Scores for more details.