Linux software RAID stops responding after a disk is removed from the server

I'm running CentOS 7 (stock kernel:

3.10.0-327.36.3.el7.x86_64

) with software RAID-10 across 16 SSDs of 1 TB each. (More precisely, there are two RAID arrays on the disks; one of them provides the host's swap partition.) Last week one of the SSDs failed:

13:18:07 kvm7 kernel: sd 1:0:2:0: attempting task abort! scmd(ffff887e57b916c0)
13:18:07 kvm7 kernel: sd 1:0:2:0: [sdk] CDB: Write(10) 2a 08 02 55 20 08 00 00 01 00
13:18:07 kvm7 kernel: scsi target1:0:2: handle(0x000b), sas_address(0x4433221102000000), phy(2)
13:18:07 kvm7 kernel: scsi target1:0:2: enclosure_logical_id(0x500304801c14a001), slot(2)
13:18:10 kvm7 kernel: sd 1:0:2:0: task abort: SUCCESS scmd(ffff887e57b916c0)
13:18:11 kvm7 kernel: sd 1:0:2:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
13:18:11 kvm7 kernel: sd 1:0:2:0: [sdk] Sense Key : Not Ready [current]
13:18:11 kvm7 kernel: sd 1:0:2:0: [sdk] Add. Sense: Logical unit not ready, cause not reportable
13:18:11 kvm7 kernel: sd 1:0:2:0: [sdk] CDB: Write(10) 2a 08 02 55 20 08 00 00 01 00
13:18:11 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 39133192
13:18:11 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 39133192
13:18:11 kvm7 kernel: md: super_written gets error=-5, uptodate=0
13:18:11 kvm7 kernel: md/raid10:md3: Disk failure on sdk3, disabling device.#012md/raid10:md3: Operation continuing on 15 devices.
13:19:27 kvm7 kernel: sd 1:0:2:0: device_blocked, handle(0x000b)
13:19:29 kvm7 kernel: sd 1:0:2:0: [sdk] Synchronizing SCSI cache
13:19:29 kvm7 kernel: md: md3 still in use.
13:19:29 kvm7 kernel: sd 1:0:2:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
13:19:29 kvm7 kernel: mpt3sas1: removing handle(0x000b), sas_addr(0x4433221102000000)
13:19:29 kvm7 kernel: md: md2 still in use.
13:19:29 kvm7 kernel: md/raid10:md2: Disk failure on sdk2, disabling device.#012md/raid10:md2: Operation continuing on 15 devices.
13:19:29 kvm7 kernel: md: unbind<sdk3>
13:19:29 kvm7 kernel: md: export_rdev(sdk3)
13:19:29 kvm7 kernel: md: unbind<sdk2>
13:19:29 kvm7 kernel: md: export_rdev(sdk2)


/proc/mdstat

looked as expected (one faulty member); there were no problems at all and the VMs kept running.

md3 : active raid10 sdp3[15] sdb3[2] sdg3[12] sde3[8] sdn3[11] sdl3[7] sdm3[9] sdf3[10] sdi3[1] sdk3[5](F) sdc3[4] sdd3[6] sdh3[14] sdo3[13] sda3[0] sdj3[3]
7844052992 blocks super 1.2 128K chunks 2 near-copies [16/15] [UUUUU_UUUUUUUUUU]

The failed SSD had to be temporarily replaced with a larger one, since no 1 TB SSD was available; we did that, the rebuild started, and everything was fine. Today the "right" SSD arrived, so a technician at the data center simply pulled the tray holding the SSD mentioned above, and the system stopped responding within a few seconds. While the host itself kept working normally on its separate RAID array, the VMs could not perform any I/O. Load climbed to > 800. I was still able to run

mdadm --detail /dev/md3

which showed a degraded (but active/clean) array, so from that point of view the system was perfectly fine. I then tried to remove the failed/missing disk from the array, which of course failed ("no such device"), and suddenly even

mdadm --detail /dev/md3

no longer produced any output; it just hung, and I had to kill the SSH session to get rid of it. After that I decided to hard-reset the machine, since I did not even know how else to get this failed disk out of the array. The box came back up correctly; of course the RAID was still degraded and needed a resync, but apart from that:

no problems.
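
Incidentally, mdadm can address members whose device node has already vanished by keyword rather than by name. A minimal sketch of what I would try next time before resorting to a reboot (assuming the md thread is not already hung, which it apparently was here):

mdadm /dev/md2 --fail detached     # mark any member whose device is gone as faulty
mdadm /dev/md2 --remove detached   # then drop those members from the array
mdadm /dev/md3 --fail detached
mdadm /dev/md3 --remove detached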

I'm almost certain that I should have removed the disk via mdadm, after marking it with --set-faulty, before the tray was pulled from the rack, although that alone doesn't explain this mdraid behavior. As far as I can tell we merely "simulated" an ordinary disk failure, so: does anyone have an idea what caused this problem, and how can I make sure that the next ordinary disk failure does not lead to the same situation? (The removal sequence I have in mind is sketched below.)
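
For the record, the pre-removal sequence I have in mind (a sketch under my assumptions; sdk is the member being replaced here):

mdadm /dev/md2 --fail /dev/sdk2        # --fail is the same as --set-faulty
mdadm /dev/md3 --fail /dev/sdk3
mdadm /dev/md2 --remove /dev/sdk2      # detach the partitions from both arrays
mdadm /dev/md3 --remove /dev/sdk3
echo 1 > /sys/block/sdk/device/delete  # have the kernel forget the disk before the tray is pulled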

The kernel logged a few messages that seem interesting to me: the new device showed up as sdq, although the pulled device was called sdk. So I assume sdk was never properly removed from the array. When the first SSD failed last week I did not observe this behavior, and back then the replacement drive also came up as sdk.
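
For completeness, putting the replacement into service would look roughly like this (a sketch; sdq is how the new disk showed up here, and sda stands in for any healthy member):

sfdisk -d /dev/sda | sfdisk /dev/sdq   # clone the partition layout (MBR; use sgdisk -R for GPT)
mdadm /dev/md2 --add /dev/sdq2         # re-add both members; the resync starts automatically
mdadm /dev/md3 --add /dev/sdq3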

The log also shows 7 minutes between pulling the failed SSD and inserting the new one, so I don't think the problem described at
https://superuser.com/question ... nding
applies here. Besides, the VMs stalled immediately, not after 7 minutes. So, any ideas on this? I'd really appreciate it :)

11:45:36 kvm7 kernel: sd 1:0:8:0: device_blocked, handle(0x000b)
11:45:37 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 0
11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069640
11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069648
11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069656
11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069664
11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069672
11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069680
11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069688
11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069696
11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069704
11:45:37 kvm7 kernel: md/raid10:md3: sdk3: rescheduling sector 4072069712
11:45:37 kvm7 kernel: sd 1:0:8:0: [sdk] FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
11:45:37 kvm7 kernel: sd 1:0:8:0: [sdk] CDB: Read(10) 28 00 20 af f7 08 00 00 08 00
11:45:37 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 548402952
11:45:37 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 0
11:45:37 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 39133192
11:45:37 kvm7 kernel: md: super_written gets error=-5, uptodate=0
11:45:37 kvm7 kernel: md/raid10:md3: Disk failure on sdk3, disabling device.#012md/raid10:md3: Operation continuing on 15 devices.
11:45:37 kvm7 kernel: md: md2 still in use.
11:45:37 kvm7 kernel: md/raid10:md2: Disk failure on sdk2, disabling device.#012md/raid10:md2: Operation continuing on 15 devices.
11:45:37 kvm7 kernel: blk_update_request: I/O error, dev sdk, sector 39133264
11:45:37 kvm7 kernel: md: super_written gets error=-5, uptodate=0
11:45:37 kvm7 kernel: sd 1:0:8:0: [sdk] Synchronizing SCSI cache
11:45:37 kvm7 kernel: sd 1:0:8:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
11:45:37 kvm7 kernel: mpt3sas1: removing handle(0x000b), sas_addr(0x4433221102000000)
11:45:37 kvm7 kernel: md: unbind<sdk2>
11:45:37 kvm7 kernel: md: export_rdev(sdk2)
11:48:00 kvm7 kernel: INFO: task md3_raid10:1293 blocked for more than 120 seconds.
11:48:00 kvm7 kernel: "echo 0 &gt; /proc/sys/kernel/hung_task_timeout_secs" disables this message.
11:48:00 kvm7 kernel: md3_raid10 D ffff883f26e55c00 0 1293 2 0x00000000
11:48:00 kvm7 kernel: ffff887f24bd7c58 0000000000000046 ffff887f212eb980 ffff887f24bd7fd8
11:48:00 kvm7 kernel: ffff887f24bd7fd8 ffff887f24bd7fd8 ffff887f212eb980 ffff887f23514400
11:48:00 kvm7 kernel: ffff887f235144dc 0000000000000001 ffff887f23514500 ffff8807fa4c4300
11:48:00 kvm7 kernel: Call Trace:
11:48:00 kvm7 kernel: [<ffffffff8163bb39>] schedule+0x29/0x70
11:48:00 kvm7 kernel: [<ffffffffa0104ef7>] freeze_array+0xb7/0x180 [raid10]
11:48:00 kvm7 kernel: [<ffffffff810a6b80>] ? wake_up_atomic_t+0x30/0x30
11:48:00 kvm7 kernel: [<ffffffffa010880d>] handle_read_error+0x2bd/0x360 [raid10]
11:48:00 kvm7 kernel: [<ffffffff812c7412>] ? generic_make_request+0xe2/0x130
11:48:00 kvm7 kernel: [<ffffffffa0108a1d>] raid10d+0x16d/0x1440 [raid10]
11:48:00 kvm7 kernel: [<ffffffff814bb785>] md_thread+0x155/0x1a0
11:48:00 kvm7 kernel: [<ffffffff810a6b80>] ? wake_up_atomic_t+0x30/0x30
11:48:00 kvm7 kernel: [<ffffffff814bb630>] ? md_safemode_timeout+0x50/0x50
11:48:00 kvm7 kernel: [<ffffffff810a5b8f>] kthread+0xcf/0xe0
11:48:00 kvm7 kernel: [<ffffffff810a5ac0>] ? kthread_create_on_node+0x140/0x140
11:48:00 kvm7 kernel: [<ffffffff81646a98>] ret_from_fork+0x58/0x90
11:48:00 kvm7 kernel: [<ffffffff810a5ac0>] ? kthread_create_on_node+0x140/0x140
11:48:00 kvm7 kernel: INFO: task qemu-kvm:26929 blocked for more than 120 seconds.

[several messages for stuck qemu-kvm processes]

11:52:42 kvm7 kernel: scsi 1:0:9:0: Direct-Access ATA KINGSTON SKC400S 001A PQ: 0 ANSI: 6
11:52:42 kvm7 kernel: scsi 1:0:9:0: SATA: handle(0x000b), sas_addr(0x4433221102000000), phy(2), device_name(0x4d6b497569a68ba2)
11:52:42 kvm7 kernel: scsi 1:0:9:0: SATA: enclosure_logical_id(0x500304801c14a001), slot(2)
11:52:42 kvm7 kernel: scsi 1:0:9:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
11:52:42 kvm7 kernel: scsi 1:0:9:0: qdepth(32), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
11:52:42 kvm7 kernel: sd 1:0:9:0: Attached scsi generic sg10 type 0
11:52:42 kvm7 kernel: sd 1:0:9:0: [sdq] 2000409264 512-byte logical blocks: (1.02 TB/953 GiB)
11:52:42 kvm7 kernel: sd 1:0:9:0: [sdq] Write Protect is off
11:52:42 kvm7 kernel: sd 1:0:9:0: [sdq] Write cache: enabled, read cache: enabled, supports DPO and FUA
11:52:42 kvm7 kernel: sdq: unknown partition table
11:52:42 kvm7 kernel: sd 1:0:9:0: [sdq] Attached SCSI disk


Answer from 莫问:


Judging from the kernel stack trace, the md driver apparently tried to freeze the array (freeze_array+0xb7/0x180 [raid10]) in order to fully remove the failed member, but the operation never completed. This is confirmed by the absence of a

md: unbind<sdk3>

line.

This looks to me like a deadlock/livelock, so the root cause is probably a software bug. You should really file a report on the linux-raid mailing list:
http://vger.kernel.org/vger-lists.html#linux-raid
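
If it happens again, capturing the stacks of all blocked tasks while the machine is hung would strengthen the report (a suggestion; the magic SysRq interface must be enabled):

echo 1 > /proc/sys/kernel/sysrq   # enable the magic SysRq interface
echo w > /proc/sysrq-trigger      # dump blocked (D-state) task stacks to the kernel log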
