this post was submitted on 07 Mar 2024

My laptop is working just fine. It's from 2018 and has an NVMe drive.

It has an EFI boot partition and another partition with LUKS and LVM on top of that.

Since this week I've been seeing these log messages from time to time:

Mar 07 17:31:14 almendra kernel: pcieport 0000:00:1d.6: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
Mar 07 17:31:14 almendra kernel: pcieport 0000:00:1d.6:   device [8086:34b6] error status/mask=00000001/00002000
Mar 07 17:31:14 almendra kernel: pcieport 0000:00:1d.6:    [ 0] RxErr                  (First)
Mar 07 17:31:14 almendra kernel: pcieport 0000:00:1d.6: AER:   Error of this Agent is reported first
Mar 07 17:31:14 almendra kernel: nvme 0000:02:00.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
Mar 07 17:31:14 almendra kernel: nvme 0000:02:00.0:   device [8086:0975] error status/mask=00000001/00002000
Mar 07 17:31:14 almendra kernel: nvme 0000:02:00.0:    [ 0] RxErr                  (First)

The devices are:

$ lspci -vv | grep 1d.6
00:1d.6 PCI bridge: Intel Corporation Device 34b6 (rev 30) (prog-if 00 [Normal decode])

$ lspci -vv | grep 02:00.0
02:00.0 Non-Volatile memory controller: Intel Corporation Optane NVME SSD H10 with Solid State Storage [Teton Glacier] (prog-if 02 [NVM Express])

The laptop works like always, but I have the impression that the NVMe drive is telling me something is wrong.

It happens from time to time:

$ journalctl --since yesterday | grep -c "nvme 0000:02:00.0: PCIe Bus Error: severity=Corrected, type=Physical"
9

Do you know what it means?

[–] Limonene@lemmy.world 31 points 8 months ago (1 children)

The good news is: the error shown there was a PCIe bus error, which means the error is somewhere between the NVME controller and your processor's PCIe interface. Also good news: the errors you experienced were fully corrected, so you probably lost no data.
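
If you want to keep an eye on how often these corrected errors pile up, recent kernels also expose per-device AER counters in sysfs. The device addresses below come straight from your log output; the attribute is only there if your kernel was built with AER statistics:

$ cat /sys/bus/pci/devices/0000:00:1d.6/aer_dev_correctable   # root port, counters per error type (RxErr, BadTLP, ...)
$ cat /sys/bus/pci/devices/0000:02:00.0/aer_dev_correctable   # same counters for the NVMe device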

So the flash memory in the drive isn't failing. That's good because if the flash memory starts failing, it's probably only going to fail more. In this case, your errors may be correctable: by replacing the motherboard, by replacing the processor, by reseating the NVME drive in its slot, by verifying that your power supply is reliable...

However, if your NVME controller actually does fail, it will be little consolation to tell you that your data is all still there on the flash chips, but with no way to get it. So now might be a good time to make a backup. Any time is a good time to make a backup, but now is an especially good time.

If you keep getting these errors at the same rate, then you probably don't need to do anything, since the errors are being corrected. If you're worried, you could use BTRFS and enable checksumming of data.
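
For example, if the filesystem on top of the LUKS/LVM stack happens to be btrfs (an assumption; none of this applies to ext4 or xfs), a scrub re-reads everything and verifies the data checksums, so silent corruption would show up:

$ sudo btrfs scrub start /    # re-read all data/metadata and verify checksums in the background
$ sudo btrfs scrub status /   # show progress and any checksum or read errors found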

[–] vsis@feddit.cl 10 points 8 months ago (1 children)

[...] by replacing the motherboard, by replacing the processor, by reseating the NVME drive in its slot, by verifying that your power supply is reliable…

I will start with the cheapest option 😅

I assume the power supply is reliable. Having a battery should make it more stable I guess.

[–] h3ndrik@feddit.de 6 points 8 months ago* (last edited 8 months ago) (2 children)

And maybe clean the insides of your laptop; that's probably the first thing that could solve the issue. See if all cables are still locked into their connectors. Maybe take out the SSD, clean the contacts, and use compressed air to clean the socket.

But be careful, you want to do it right or you might cause damage. No dampness or water: it has to be either isopropyl alcohol or dry. Don't use a rag that builds up static electricity, and no workshop air compressor; something like a paintbrush is better suited. And don't just shove the vacuum in. I've done that, and it can dislodge small components or key caps and suck them in, and it's a major annoyance to get them out of the vacuum cleaner bag 😆 Just be a bit careful.

But I have already seen things like loose connectors/components cause random errors, especially in equipment that gets moved around or dropped occasionally. After 5 years you might also find some dust inside. At least it used to be that way; it seems to be less of a problem with modern laptops, and more and more stuff gets soldered anyway.

And don't do too much if you're not comfortable with that. IMHO the SSD should be a safe thing to touch for most people. But it's really easy to break or bend some tiny contacts from other components or ribbon cables. And there are consumer devices that aren't really meant to be serviced. I wouldn't disassemble such a model without prior experience. If it's still working you might also leave it as is. Do backups. Storage devices often fail even without prior warning.

[–] vsis@feddit.cl 3 points 8 months ago* (last edited 8 months ago)

I opened it. All cables looked fine. I used a hand blower to clean the dust, took out the SSD, and blew out the socket and everything around it.

Now I'm going to monitor if it keeps happening.

$ journalctl --since yesterday  | grep -c "nvme 0000:02:00.0: PCIe Bus Error: severity=Corrected, type=Physical"
16
[–] vsis@feddit.cl 2 points 8 months ago (1 children)

OK. I'll use a dust blower for photography gear. Thanks. Let's see if it works.

[–] possiblylinux127@lemmy.zip 1 points 8 months ago

Just don't use a powerful one and keep the device powered off while you clean it.

[–] MangoPenguin@lemmy.blahaj.zone 10 points 8 months ago

Regardless of what it is, make sure your backups are working and running often (daily or better), and test your restore process fully.
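
For example, with restic (just one option among many; the repository path and the backed-up path below are placeholders), a few commands cover both halves of that:

$ restic -r /mnt/backup/repo backup /home                               # take a snapshot
$ restic -r /mnt/backup/repo check                                      # verify repository integrity
$ restic -r /mnt/backup/repo restore latest --target /tmp/restore-test  # test a restore into a scratch dir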

[–] waigl@lemmy.world 7 points 8 months ago (1 children)

Smartctl works on NVMe drives. Use it.
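
Something like this, assuming the drive shows up as /dev/nvme0 as in your journal (the long test can take quite a while):

$ sudo smartctl -t short /dev/nvme0      # quick self-test
$ sudo smartctl -t long /dev/nvme0       # extended self-test
$ sudo smartctl -l selftest /dev/nvme0   # read the self-test log once they finish
$ sudo smartctl -a /dev/nvme0            # full SMART/health report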

[–] vsis@feddit.cl 4 points 8 months ago* (last edited 8 months ago) (1 children)

I did a short and a long test. It looks good:

$ sudo smartctl -l selftest /dev/nvme0
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.6-arch1-2] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF SMART DATA SECTION ===
Self-test Log (NVMe Log 0x06)
Self-test status: No self-test in progress
Num  Test_Description  Status                       Power_on_Hours  Failing_LBA  NSID Seg SCT Code
 0   Extended          Completed without error                6334            -     -   -   -    -
 1   Short             Completed without error                6334            -     -   -   -    -
[–] bizdelnick@lemmy.ml 6 points 8 months ago (1 children)

Check also sudo smartctl -a /dev/nvme0

[–] vsis@feddit.cl 1 points 8 months ago

sudo smartctl -a /dev/nvme0

$ sudo smartctl -a /dev/nvme0
[sudo] password for ****:
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.6-arch1-2] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       INTEL HBRPEKNX0202A
Serial Number:                      BTTE95101RQM512B-1
Firmware Version:                   G002
PCI Vendor/Subsystem ID:            0x8086
IEEE OUI Identifier:                0x5cd2e4
Controller ID:                      1
NVMe Version:                       1.3
Number of Namespaces:               1
Namespace 1 Size/Capacity:          512,110,190,592 [512 GB]
Namespace 1 Formatted LBA Size:     512
Local Time is:                      Fri Mar  8 12:09:53 2024 CET
Firmware Updates (0x14):            2 Slots, no Reset required
Optional Admin Commands (0x0016):   Format Frmw_DL Self_Test
Optional NVM Commands (0x005f):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x0f):         S/H_per_NS Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
Maximum Data Transfer Size:         32 Pages
Warning  Comp. Temp. Threshold:     77 Celsius
Critical Comp. Temp. Threshold:     80 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     3.50W       -        -    0  0  0  0        0       0
 1 +     2.70W       -        -    1  1  1  1        0       0
 2 +     2.00W       -        -    2  2  2  2        0       0
 3 -   0.0250W       -        -    3  3  3  3     2000    5000
 4 -   0.0040W       -        -    4  4  4  4     5000    9000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        30 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    32%
Data Units Read:                    6,877,173 [3.52 TB]
Data Units Written:                 9,397,485 [4.81 TB]
Host Read Commands:                 54,359,124
Host Write Commands:                239,213,047
Controller Busy Time:               2,412
Power Cycles:                       536
Power On Hours:                     6,350
Unsafe Shutdowns:                   62
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0

Error Information (NVMe Log 0x01, 16 of 256 entries)
No Errors Logged

Self-test Log (NVMe Log 0x06)
Self-test status: No self-test in progress
Num  Test_Description  Status                       Power_on_Hours  Failing_LBA  NSID Seg SCT Code
 0   Extended          Completed without error                6334            -     -   -   -    -
 1   Short             Completed without error                6334            -     -   -   -    -
[–] MiltownClowns@lemmy.world 7 points 8 months ago* (last edited 8 months ago)

I'm not knowledgeable enough to tell you whether the drive is failing or not, but I just want to double-check that you have rolling backups of this drive right now. I might just be an idiot, but to me that drive looks unreliable.

[–] rotopenguin@infosec.pub 7 points 8 months ago* (last edited 8 months ago) (3 children)

Given that it's just an interface error, you could try turning it all off, taking the drive out, and hitting its contacts with electronics contact cleaner (I guess the CRC brand is as good as any). Work it in a little and let it dry before putting it all back together.

Another possibility is that power management is being naughty. Fiddle with ASPM or APST.
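
A simple way to test that theory is to temporarily disable the relevant power saving via kernel parameters and see whether the corrected errors stop. These are the usual suspects, not something specific to this machine, and they are blunt instruments:

# Add to the kernel command line (e.g. GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub),
# then regenerate the config (grub-mkconfig -o /boot/grub/grub.cfg) and reboot:
pcie_aspm=off                             # disable PCIe Active State Power Management
nvme_core.default_ps_max_latency_us=0     # keep the NVMe controller out of its deeper APST power states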

Oh and do a btrfs/zfs scrub to check that your data is correct.

[–] possiblylinux127@lemmy.zip 2 points 8 months ago* (last edited 8 months ago) (1 children)

Doing a scrub on bad hardware will make corruption worse in many cases. When you have faulty hardware, freeze everything.

This person has had the same device for 6 years. If the drive was used heavily, it probably just failed due to age.

[–] rotopenguin@infosec.pub 1 points 8 months ago

Yeah, you're probably right. I'm thinking in terms of "not a raid, no redundant copies available" scrub, where the main output would be a sanity check of data checksums.

[–] vsis@feddit.cl 1 points 8 months ago* (last edited 8 months ago)

I used a hand dust blower intended for photography gear. I opened the laptop, blew out the dust, disconnected the SSD, and blew out the socket and its surroundings.

Now I will monitor the logs and see if it helps.

Thanks.

[–] mvirts@lemmy.world 1 points 8 months ago

Don't forget to blow on it.

[–] possiblylinux127@lemmy.zip 7 points 8 months ago

Chances are it is. Always keep good backups.

Honestly it's good practice to replace your drives every 5 years. That's not always necessary, but it can save you some headaches.

[–] ryannathans@aussie.zone 4 points 8 months ago

Look at the SMART errors.