memtest86plus/app/badram.c
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2020 Martin Whitaker.
//
// Derived from memtest86+ patn.c:
//
// MemTest86+ V1.60 Specific code (GPL V2.0)
// By Samuel DEMEULEMEESTER, sdemeule@memtest.org
// http://www.x86-secret.com - http://www.memtest.org
// ----------------------------------------------------
// Pattern extension for memtest86
//
// Generates patterns for the Linux kernel's BadRAM extension that avoids
// allocation of faulty pages.
//
// Released under version 2 of the Gnu Public License.
//
// By Rick van Rein, vanrein@zonnet.nl
//
// What it does:
// - Keep track of a number of BadRAM patterns in an array;
// - Combine new faulty addresses with it whenever possible;
// - Keep masks as selective as possible by minimising resulting faults;
// - Print a new pattern only when the pattern array is changed.
#include <stdbool.h>
#include <stdint.h>
#include "display.h"
#include "badram.h"
//------------------------------------------------------------------------------
// Constants
//------------------------------------------------------------------------------
#define MAX_PATTERNS 10
#define PATTERNS_SIZE (MAX_PATTERNS + 1)
// DEFAULT_MASK covers a uintptr_t, since that is the testing granularity.
#ifdef __x86_64__
#define DEFAULT_MASK (UINTPTR_MAX << 3)
#else
#define DEFAULT_MASK (UINTPTR_MAX << 2)
#endif
//------------------------------------------------------------------------------
// Types
//------------------------------------------------------------------------------
typedef struct {
    uintptr_t addr;
    uintptr_t mask;
} pattern_t;
//------------------------------------------------------------------------------
// Private Variables
//------------------------------------------------------------------------------
static pattern_t patterns[PATTERNS_SIZE];

static int num_patterns = 0;
//------------------------------------------------------------------------------
// Private Functions
//------------------------------------------------------------------------------
// New mask is 1 where both masks were 1 (b & d) and the addresses were equal ~(a ^ c).
// If addresses were unequal the new mask must be 0 to allow for both values.
#define COMBINE_MASK(a,b,c,d) ((b & d) & ~(a ^ c))
/*
* Combine two addr/mask pairs to one addr/mask pair.
*/
static void combine(uintptr_t addr1, uintptr_t mask1, uintptr_t addr2, uintptr_t mask2, uintptr_t *addr, uintptr_t *mask)
{
    *mask = COMBINE_MASK(addr1, mask1, addr2, mask2);
    *addr = addr1 | addr2;
    *addr &= *mask;  // Normalise to ensure sorting on .addr will work as intended.
}
/*
* Count the number of addresses covered with a mask.
*/
static uintptr_t addresses(uintptr_t mask)
{
    uintptr_t ctr = 1;
    int i = 8 * sizeof(uintptr_t);
    while (i-- > 0) {
        if (!(mask & 1)) {
            ctr += ctr;
        }
        mask >>= 1;
    }
    return ctr;
}
/*
* Count how many more addresses would be covered by addr1/mask1 when combined
* with addr2/mask2.
*/
static uintptr_t combi_cost(uintptr_t addr1, uintptr_t mask1, uintptr_t addr2, uintptr_t mask2)
{
    uintptr_t cost1 = addresses(mask1);
    uintptr_t tmp, mask;
    combine(addr1, mask1, addr2, mask2, &tmp, &mask);
    return addresses(mask) - cost1;
}
/*
 * Determine if pattern is already covered by an existing pattern.
 * Return true if that's the case, else false.
 */
static bool is_covered(pattern_t pattern)
{
    for (int i = 0; i < num_patterns; i++) {
        if (combi_cost(patterns[i].addr, patterns[i].mask, pattern.addr, pattern.mask) == 0) {
            return true;
        }
    }
    return false;
}
/*
 * Find the pair of entries that would be the cheapest to merge.
 * Assumes patterns is sorted by .addr in ascending order and that for each
 * index i, the cheapest entry to merge with is at i-1 or i+1.
 * Returns -1 if fewer than two patterns exist, else the index of the first
 * entry of the pair (the other being that index + 1).
 */
static int cheapest_pair(void)
{
    // This is guaranteed to be overwritten with >= 0 as long as num_patterns > 1.
    int merge_idx = -1;
    uintptr_t min_cost = UINTPTR_MAX;
    for (int i = 0; i < num_patterns - 1; i++) {
        uintptr_t tmp_cost = combi_cost(
            patterns[i].addr,
            patterns[i].mask,
            patterns[i+1].addr,
            patterns[i+1].mask
        );
        if (tmp_cost <= min_cost) {
            min_cost = tmp_cost;
            merge_idx = i;
        }
    }
    return merge_idx;
}
/*
 * Remove the entries at idx and idx+1.
 */
static void remove_pair(int idx)
{
    for (int i = idx; i < num_patterns - 2; i++) {
        patterns[i] = patterns[i + 2];
    }
    patterns[num_patterns - 1].addr = 0u;
    patterns[num_patterns - 1].mask = 0u;
    patterns[num_patterns - 2].addr = 0u;
    patterns[num_patterns - 2].mask = 0u;
    num_patterns -= 2;
}
/*
Change how BadRAM patterns are aggregated to minimize the number of covered addresses (#178) * BadRAM: Rename pattern -> patterns * BadRAM: Refactor COMBINE_MASK and add clarifying comment * BadRAM: Extract DEFAULT_MASK into variable * BadRAM: Add is_covered() for checking if pattern is already covered by one of the existing patterns * BadRAM: Initialize patterns to 0 * BadRAM: Change how addr/masks are merged to minimize number of addresses covered by badram Prior to this patch, a list of up to MAX_PATTERNS (=10) addr/mask tuples (aka. pattern) were maintained, adding failing addresses one by one to the list until it was full. When full, space was created by forcing a merge of the new address with the existing pattern that would grow the least (with regards to number of addresses covered by the pattern) by merging it with the new address. This can lead to a great imbalance in the number of addresses covered by the patterns. Consider the following: MAX_PATTERNS=4 (for illustrative purposes). The following addresses are faulted and added to patterns: 0x00, 0x10, 0x20, 0x68, 0xa0, 0xb0, 0xc0, 0xd0 This is the end result with the implementation prior to this commit: patterns = [ (0x00, 0xe8), (0x00, 0x18), (0x68, 0xf8), (0x90, 0x98) ] Total addresses covered: 120. This commit changes how the merges are done, not only considering a merge between the new address and existing patterns, but also between existing patterns. It keeps the patterns in ascending order (by .addr) in patterns, and a new address is always inserted into patterns (even if num_patterns == MAX_PATTERNS, patterns is of MAX_PATTERNS+1 size). Then, if num_patterns > MAX_PATTERNS, we find the pair of patterns (only considering neighbours, assuming for any pattern i, i-1 or i+1 will be the best candidate for a merge) that would be the cheapest to merge (using the same metric as prior to this patch), and merge those. 
With this commit, this is the result of the exact same sequence of addresses as above: [ (0x00, 0xe0), (0x68, 0xf8), (0xa0, 0xe8), (0xc0, 0xe8) ] Total addresses covered: 72. A drawback of the current implementation (as compared to the prior) is that it does not make any attempt at merging patterns until num_patterns == MAX_PATTERNS, which can lead to having several patterns that could've been merged into one at no additional cost. I.e.: patterns = [ (0x00, 0xf8), (0x08, 0xf8) ] can appear, even if patterns = [ (0x00, 0xf0) ] represents the exact same addresses with one pattern instead of two. * fixup! BadRAM: Change how addr/masks are merged to minimize number of addresses covered by badram Co-authored-by: Anders Wenhaug <anders.wenhaug@solutionseeker.no>
2023-01-03 23:12:47 +01:00
* Get the combined entry of idx1 and idx2.
2020-05-24 21:30:55 +01:00
*/
static pattern_t combined_pattern(int idx1, int idx2)
{
pattern_t combined;
combine(
patterns[idx1].addr,
patterns[idx1].mask,
patterns[idx2].addr,
patterns[idx2].mask,
&combined.addr,
&combined.mask
);
return combined;
}
/*
* Insert pattern at index idx, shifting entries at and after idx one slot towards the end.
2020-05-24 21:30:55 +01:00
*/
static void insert_at(pattern_t pattern, int idx)
{
// Move all entries >= idx one index towards the end to make space for the new entry
for (int i = num_patterns - 1; i >= idx; i--) {
patterns[i + 1] = patterns[i];
}
patterns[idx] = pattern;
num_patterns++;
}
/*
* Insert entry (addr, mask) into patterns at an index i such that patterns[i-1].addr < patterns[i].addr
* NOTE: Assumes patterns is already sorted by .addr asc!
*/
static void insert_sorted(pattern_t pattern)
{
// Normalise to ensure sorting on .addr will work as intended
pattern.addr &= pattern.mask;
// Find index to insert entry into
int new_idx = num_patterns;
for (int i = 0; i < num_patterns; i++) {
if (pattern.addr < patterns[i].addr) {
new_idx = i;
break;
}
}
insert_at(pattern, new_idx);
}
//------------------------------------------------------------------------------
// Public Functions
//------------------------------------------------------------------------------
void badram_init(void)
{
num_patterns = 0;
for (int idx = 0; idx < PATTERNS_SIZE; idx++) {
patterns[idx].addr = 0u;
patterns[idx].mask = 0u;
}
}
bool badram_insert(uintptr_t addr)
{
pattern_t pattern = {
.addr = addr,
.mask = DEFAULT_MASK
};
// If the address is already covered by an existing pattern, return immediately
if (is_covered(pattern)) {
return false;
}
// Add entry in order sorted by .addr asc
insert_sorted(pattern);
// If we now hold more patterns than the maximum, we need to force a merge
if (num_patterns > MAX_PATTERNS) {
// Find the pair that is the cheapest to merge
// merge_idx will be -1 if num_patterns < 2, but that would mean MAX_PATTERNS == 0, which is not a valid state anyway
int merge_idx = cheapest_pair();
pattern_t combined = combined_pattern(merge_idx, merge_idx + 1);
// Remove the source pair so that order is maintained, as the combined pattern does not necessarily belong at merge_idx
remove_pair(merge_idx);
insert_sorted(combined);
}
return true;
}
void badram_display(void)
{
if (num_patterns == 0) {
return;
}
check_input();
clear_message_area();
display_pinned_message(0, 0, "BadRAM Patterns");
display_pinned_message(1, 0, "---------------");
scroll();
display_scrolled_message(0, "badram=");
int col = 7;
for (int i = 0; i < num_patterns; i++) {
if (i > 0) {
display_scrolled_message(col, ",");
col++;
}
int text_width = 2 * (TESTWORD_DIGITS + 2) + 1;
if (col > (SCREEN_WIDTH - text_width)) {
scroll();
col = 7;
}
display_scrolled_message(col, "0x%0*x,0x%0*x",
TESTWORD_DIGITS, patterns[i].addr,
TESTWORD_DIGITS, patterns[i].mask);
col += text_width;
}
}