Mitigate MMIO stale data vulnerability #98126
Changes from 4 commits
ab3a2a0
531752f
6f7d193
a27aace
6a6910e
@@ -1,13 +1,16 @@
#![allow(unused)]

use crate::arch::asm;
use crate::cell::UnsafeCell;
use crate::cmp;
use crate::convert::TryInto;
use crate::mem;
use crate::ops::{CoerceUnsized, Deref, DerefMut, Index, IndexMut};
use crate::ptr::{self, NonNull};
use crate::slice;
use crate::slice::SliceIndex;

use super::super::mem::is_user_range;
use super::super::mem::{is_enclave_range, is_user_range};
use fortanix_sgx_abi::*;

/// A type that can be safely read from or written to userspace.
@@ -210,7 +213,9 @@ where
        unsafe {
            // Mustn't call alloc with size 0.
            let ptr = if size > 0 {
                rtunwrap!(Ok, super::alloc(size, T::align_of())) as _
                // `copy_to_userspace` is more efficient when data is 8-byte aligned
                let alignment = cmp::max(T::align_of(), 8);
                rtunwrap!(Ok, super::alloc(size, alignment)) as _
            } else {
                T::align_of() as _ // dangling pointer ok for size 0
            };
@@ -225,13 +230,9 @@ where
    /// Copies `val` into freshly allocated space in user memory.
    pub fn new_from_enclave(val: &T) -> Self {
        unsafe {
            let ret = Self::new_uninit_bytes(mem::size_of_val(val));
            ptr::copy(
                val as *const T as *const u8,
                ret.0.as_ptr() as *mut u8,
                mem::size_of_val(val),
            );
            ret
            let mut user = Self::new_uninit_bytes(mem::size_of_val(val));
            user.copy_from_enclave(val);
            user
        }
    }

@@ -304,6 +305,100 @@ where
    }
}

/// Copies `len` bytes of data from enclave pointer `src` to userspace `dst`
///
/// This function mitigates stale data vulnerabilities:
/// https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00615.html
///
/// # Panics
/// This function panics if:
///
/// * The `src` pointer is null
/// * The `dst` pointer is null
/// * The `src` memory range is not in enclave memory
/// * The `dst` memory range is not in user memory
pub(crate) unsafe fn copy_to_userspace(src: *const u8, dst: *mut u8, len: usize) {
    unsafe fn copy_bytewise_to_userspace(src: *const u8, dst: *mut u8, len: usize) {
        unsafe {
            let seg_sel: u16 = 0;
            for off in 0..len {
                asm!("
                    mov %ds, ({seg_sel})
                    verw ({seg_sel})
                    movb {val}, ({dst})
                    mfence
                    lfence
                    ",
                    val = in(reg_byte) *src.offset(off as isize),
                    dst = in(reg) dst.offset(off as isize),
                    seg_sel = in(reg) &seg_sel,
                    options(nostack, att_syntax)
                );
            }
        }
    }

Review comments on the `mov %ds, ({seg_sel})` line:

This is writing to …

Good catch! I've updated the reference to be …

In general I'd agree that moving to a scratch register would be cleaner. Here I'm more hesitant, as so much happens at a microarchitectural level that I feel much more at ease following the Intel example code. The performance overhead is negligible anyway.

According to Intel, you have to use the memory-operand version of `verw`.

    unsafe fn copy_aligned_quadwords_to_userspace(src: *const u8, dst: *mut u8, len: usize) {
        unsafe {
            asm!(
                "rep movsq (%rsi), (%rdi)",
                inout("rcx") len / 8 => _,
                inout("rdi") dst => _,
                inout("rsi") src => _,
                options(att_syntax, nostack, preserves_flags)
            );
        }
    }

    assert!(!src.is_null());
    assert!(!dst.is_null());
    assert!(is_enclave_range(src, len));
    assert!(is_user_range(dst, len));
    assert!(len < isize::MAX as usize);
    assert!(!(src as usize).overflowing_add(len).1);
    assert!(!(dst as usize).overflowing_add(len).1);

    if len < 8 {
        // Can't align on 8 byte boundary: copy safely byte per byte
        unsafe {
            copy_bytewise_to_userspace(src, dst, len);
        }
    } else if len % 8 == 0 && dst as usize % 8 == 0 {
        // Copying 8-byte aligned quadwords: copy quad word per quad word
        unsafe {
            copy_aligned_quadwords_to_userspace(src, dst, len);
        }
    } else {
        // Split copies into three parts:
        // +--------+
        // | small0 | Chunk smaller than 8 bytes
        // +--------+
        // |  big   | Chunk 8-byte aligned, and size a multiple of 8 bytes
        // +--------+
        // | small1 | Chunk smaller than 8 bytes
        // +--------+

        unsafe {
            // Copy small0
            let small0_size = (8 - dst as usize % 8) as u8;
            let small0_src = src;
            let small0_dst = dst;
            copy_bytewise_to_userspace(small0_src as _, small0_dst, small0_size as _);

            // Copy big
            let small1_size = ((len - small0_size as usize) % 8) as u8;
            let big_size = len - small0_size as usize - small1_size as usize;
            let big_src = src.offset(small0_size as _);
            let big_dst = dst.offset(small0_size as _);
            copy_aligned_quadwords_to_userspace(big_src as _, big_dst, big_size);

            // Copy small1
            let small1_src = src.offset(big_size as isize + small0_size as isize);
            let small1_dst = dst.offset(big_size as isize + small0_size as isize);
            copy_bytewise_to_userspace(small1_src, small1_dst, small1_size as _);
        }
    }
}

#[unstable(feature = "sgx_platform", issue = "56975")]
impl<T: ?Sized> UserRef<T>
where
@@ -352,7 +447,7 @@ where
    pub fn copy_from_enclave(&mut self, val: &T) {
        unsafe {
            assert_eq!(mem::size_of_val(val), mem::size_of_val(&*self.0.get()));
            ptr::copy(
            copy_to_userspace(
                val as *const T as *const u8,
                self.0.get() as *mut T as *mut u8,
                mem::size_of_val(val),
@@ -0,0 +1,30 @@
use super::alloc::copy_to_userspace;
use super::alloc::User;

#[test]
fn test_copy_function() {
    let mut src = [0u8; 100];
    let mut dst = User::<[u8]>::uninitialized(100);

    for i in 0..src.len() {
        src[i] = i as _;
    }

    for size in 0..48 {
        // For all possible alignments
        for offset in 0..8 {
            // Overwrite the complete dst
            dst.copy_from_enclave(&[0u8; 100]);

            // Copy src[0..size] to dst + offset
            unsafe { copy_to_userspace(src.as_ptr(), dst.as_mut_ptr().offset(offset), size) };

            // Verify the copy
            for byte in 0..size {
                unsafe {
                    assert_eq!(*dst.as_ptr().offset(offset + byte as isize), src[byte as usize]);
                }
            }
        }
    }
}
I think we need at least a summary of how the mitigation works. I don't have domain knowledge about SGX, but I see the bytewise function is using segments and fences while the quadword one just copies, which seems very mysterious. I also wonder why the alignment matters for `dst`, while `src` alignment is not checked at all. The link does not make any of this clear either, though maybe it would if I dug deeper... but even so it would be good to have some local explanation.
This link makes it a little clearer that the vulnerability is specifically about partial writes ("Any write that is smaller than 8 bytes or is not 8-byte aligned."), which explains why we're only looking at `dst` alignment, and why the aligned chunks remain so simple. Could we let the aligned part go back to a normal `ptr::copy`?

https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/processor-mmio-stale-data-vulnerabilities.html
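A minimal sketch of that rule, for illustration only (the helper name `fits_quadword_path` is made up and is not in the PR): a copy can skip the bytewise mitigation only when every store it performs is a full, 8-byte-aligned quadword write, and that depends solely on `dst` and `len`, which is why `src` alignment is never checked.

```rust
// Illustrative sketch only: restates the "smaller than 8 bytes or not
// 8-byte aligned" rule from Intel's MMIO stale data guidance as it applies
// to a memcpy-style copy of `len` bytes to `dst`.
fn fits_quadword_path(dst: *const u8, len: usize) -> bool {
    // Every store can then be a full, 8-byte-aligned quadword write.
    dst as usize % 8 == 0 && len % 8 == 0
}

fn main() {
    assert!(fits_quadword_path(0x1000 as *const u8, 64)); // aligned, multiple of 8
    assert!(!fits_quadword_path(0x1003 as *const u8, 64)); // misaligned destination
    assert!(!fits_quadword_path(0x1000 as *const u8, 20)); // trailing partial write
}
```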
Actually, I guess that's a bad idea, because we need to guarantee that all of those writes are full 8 bytes at a time.
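To make that guarantee concrete, here is a small worked example (not code from the PR) of the small0/big/small1 split that `copy_to_userspace` performs when the destination is not 8-byte aligned: the `big` chunk starts on an 8-byte boundary and its size is a multiple of 8, so `rep movsq` emits only full quadword stores, and only the two short edges go through the mitigated bytewise copy.

```rust
// Illustrative sketch of the split arithmetic in `copy_to_userspace`;
// the address and length are arbitrary example values.
fn main() {
    let dst = 0x1003usize; // destination address, not 8-byte aligned
    let len = 20usize;     // total bytes to copy

    let small0 = 8 - dst % 8;        // 5 bytes, copied bytewise (mitigated)
    let small1 = (len - small0) % 8; // 7 bytes, copied bytewise (mitigated)
    let big = len - small0 - small1; // 8 bytes, full aligned quadword stores

    assert_eq!((small0, big, small1), (5, 8, 7));
    // The `big` chunk starts at an 8-byte boundary...
    assert_eq!((dst + small0) % 8, 0);
    // ...and its size is a whole number of quadwords.
    assert_eq!(big % 8, 0);
}
```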
Agreed! I've extended the documentation and added the reference.