===========================================
Control Flow Integrity Design Documentation
===========================================

This page documents the design of the :doc:`ControlFlowIntegrity` schemes
supported by Clang.

Forward-Edge CFI for Virtual Calls
==================================

This scheme works by allocating, for each static type used to make a virtual
call, a region of read-only storage in the object file holding a bit vector
that maps onto the region of storage used for those virtual tables. Each
set bit in the bit vector corresponds to the `address point`_ for a virtual
table compatible with the static type for which the bit vector is being built.

For example, consider the following three C++ classes:

.. code-block:: c++

  struct A {
    virtual void f1();
    virtual void f2();
    virtual void f3();
  };

  struct B : A {
    virtual void f1();
    virtual void f2();
    virtual void f3();
  };

  struct C : A {
    virtual void f1();
    virtual void f2();
    virtual void f3();
  };

The scheme will cause the virtual tables for A, B and C to be laid out
consecutively:

.. csv-table:: Virtual Table Layout for A, B, C
  :header: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14

  A::offset-to-top, &A::rtti, &A::f1, &A::f2, &A::f3, B::offset-to-top, &B::rtti, &B::f1, &B::f2, &B::f3, C::offset-to-top, &C::rtti, &C::f1, &C::f2, &C::f3

The bit vectors for static types A, B and C will look like this:

.. csv-table:: Bit Vectors for A, B, C
  :header: Class, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14

  A, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0
  B, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0
  C, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0

Bit vectors are represented in the object file as byte arrays. By loading
from indexed offsets into the byte array and applying a mask, a program can
test bits from the bit set with a relatively short instruction sequence. Bit
vectors may overlap so long as they use different bits. For the full details,
see the `ByteArrayBuilder`_ class.

In this case, assuming A is laid out at offset 0 in bit 0, B at offset 0 in
bit 1 and C at offset 0 in bit 2, the byte array would look like this:

.. code-block:: c++

  char bits[] = { 0, 0, 1, 0, 0, 0, 0, 3, 0, 0, 0, 0, 5, 0, 0 };
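
A membership test then reduces to indexing the byte array at the candidate
address point's slot and masking with the querying type's bit. As a minimal
illustrative sketch (the wrapper function and its name are hypothetical, not
part of the generated code):

.. code-block:: c++

  // Illustrative only: B owns bit 1 (mask 0x2), so a vtable address point at
  // slot 7 is valid for static type B because bits[7] == 3 and (3 & 0x2) != 0.
  bool IsValidAddressPointForB(unsigned Slot) {
    return (bits[Slot] & 0x2) != 0;
  }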

To emit a virtual call, the compiler will assemble code that checks that
the object's virtual table pointer is in-bounds and aligned and that the
relevant bit is set in the bit vector.

For example on x86 a typical virtual call may look like this:

.. code-block:: none

  ca7fbb:       48 8b 0f                mov    (%rdi),%rcx
  ca7fbe:       48 8d 15 c3 42 fb 07    lea    0x7fb42c3(%rip),%rdx
  ca7fc5:       48 89 c8                mov    %rcx,%rax
  ca7fc8:       48 29 d0                sub    %rdx,%rax
  ca7fcb:       48 c1 c0 3d             rol    $0x3d,%rax
  ca7fcf:       48 3d 7f 01 00 00       cmp    $0x17f,%rax
  ca7fd5:       0f 87 36 05 00 00       ja     ca8511
  ca7fdb:       48 8d 15 c0 0b f7 06    lea    0x6f70bc0(%rip),%rdx
  ca7fe2:       f6 04 10 10             testb  $0x10,(%rax,%rdx,1)
  ca7fe6:       0f 84 25 05 00 00       je     ca8511
  ca7fec:       ff 91 98 00 00 00       callq  *0x98(%rcx)
  [...]
  ca8511:       0f 0b                   ud2
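
In C++-like pseudocode, the sequence above performs approximately the
following. The rotate folds the range and alignment checks into a single
unsigned comparison; all names here are illustrative, not compiler output:

.. code-block:: c++

  #include <cstdint>

  // Sketch of the check emitted above, assuming 8-byte vtable slots.
  // RegionStart, BitArray and Trap() stand in for the lea'd constants
  // and the ud2 trap; they are not real symbols.
  extern const uint64_t RegionStart;
  extern const char BitArray[];
  [[noreturn]] void Trap();

  void CheckVTable(void *Obj) {
    uint64_t VTable = *(uint64_t *)Obj;              // mov (%rdi),%rcx
    uint64_t Diff = VTable - RegionStart;            // lea + sub
    // rol $0x3d == rotate left by 61 == rotate right by 3: the three low
    // (alignment) bits wrap into the high bits, so a misaligned pointer
    // always fails the unsigned range comparison below.
    uint64_t Index = (Diff << 61) | (Diff >> 3);
    if (Index > 0x17f)                               // cmp $0x17f; ja
      Trap();                                        // ud2
    if (!(BitArray[Index] & 0x10))                   // testb $0x10,(%rax,%rdx,1)
      Trap();
    // callq *0x98(%rcx): proceed with the virtual call
  }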

The compiler relies on co-operation from the linker in order to assemble
the bit vectors for the whole program. It currently does this using LLVM's
`type metadata`_ mechanism together with link-time optimization.

.. _address point: http://itanium-cxx-abi.github.io/cxx-abi/abi.html#vtable-general
.. _type metadata: http://llvm.org/docs/TypeMetadata.html
.. _ByteArrayBuilder: http://llvm.org/docs/doxygen/html/structllvm_1_1ByteArrayBuilder.html

Optimizations
-------------

The scheme as described above is the fully general variant of the scheme.
Most of the time we are able to apply one or more of the following
optimizations to improve binary size or performance.

In fact, if you try the above example with the current version of the
compiler, you will probably find that it will not use the described virtual
table layout or machine instructions. Some of the optimizations we are about
to introduce cause the compiler to use a different layout or a different
sequence of machine instructions.

Stripping Leading/Trailing Zeros in Bit Vectors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a bit vector contains leading or trailing zeros, we can strip them from
the vector. The compiler will emit code to check if the pointer is in range
of the region covered by ones, and perform the bit vector check using a
truncated version of the bit vector. For example, the bit vectors for our
example class hierarchy will be emitted like this:

.. csv-table:: Bit Vectors for A, B, C
  :header: Class, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14

  A, , , 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, ,
  B, , , , , , , , 1, , , , , , ,
  C, , , , , , , , , , , , , 1, ,

Short Inline Bit Vectors
~~~~~~~~~~~~~~~~~~~~~~~~

If the vector is sufficiently short, we can represent it as an inline constant
on x86. This saves us a few instructions when reading the correct element
of the bit vector.

If the bit vector fits in 32 bits, the code looks like this:

.. code-block:: none

  dc2:       48 8b 03                mov    (%rbx),%rax
  dc5:       48 8d 15 14 1e 00 00    lea    0x1e14(%rip),%rdx
  dcc:       48 89 c1                mov    %rax,%rcx
  dcf:       48 29 d1                sub    %rdx,%rcx
  dd2:       48 c1 c1 3d             rol    $0x3d,%rcx
  dd6:       48 83 f9 03             cmp    $0x3,%rcx
  dda:       77 2f                   ja     e0b <main+0x9b>
  ddc:       ba 09 00 00 00          mov    $0x9,%edx
  de1:       0f a3 ca                bt     %ecx,%edx
  de4:       73 25                   jae    e0b <main+0x9b>
  de6:       48 89 df                mov    %rbx,%rdi
  de9:       ff 10                   callq  *(%rax)
  [...]
  e0b:       0f 0b                   ud2

Or if the bit vector fits in 64 bits:

.. code-block:: none

  11a6:       48 8b 03                mov    (%rbx),%rax
  11a9:       48 8d 15 d0 28 00 00    lea    0x28d0(%rip),%rdx
  11b0:       48 89 c1                mov    %rax,%rcx
  11b3:       48 29 d1                sub    %rdx,%rcx
  11b6:       48 c1 c1 3d             rol    $0x3d,%rcx
  11ba:       48 83 f9 2a             cmp    $0x2a,%rcx
  11be:       77 35                   ja     11f5 <main+0xb5>
  11c0:       48 ba 09 00 00 00 00    movabs $0x40000000009,%rdx
  11c7:       04 00 00
  11ca:       48 0f a3 ca             bt     %rcx,%rdx
  11ce:       73 25                   jae    11f5 <main+0xb5>
  11d0:       48 89 df                mov    %rbx,%rdi
  11d3:       ff 10                   callq  *(%rax)
  [...]
  11f5:       0f 0b                   ud2
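
The ``movabs``/``bt`` pair is, in effect, testing one bit of a 64-bit
immediate. A hedged C++ rendering of this variant (the function and its name
are illustrative only):

.. code-block:: c++

  #include <cstdint>

  // 0x40000000009 has bits 0, 3 and 42 set: the three valid address points.
  // The cmp $0x2a above bounds Index to at most 42 before the bit test.
  bool IsValidIndex(uint64_t Index) {
    return Index <= 0x2a && ((0x40000000009ULL >> Index) & 1);
  }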

If the bit vector consists of a single bit, there is only one possible
virtual table, and the check can consist of a single equality comparison:

.. code-block:: none

  9a2:       48 8b 03                mov    (%rbx),%rax
  9a5:       48 8d 0d a4 13 00 00    lea    0x13a4(%rip),%rcx
  9ac:       48 39 c8                cmp    %rcx,%rax
  9af:       75 25                   jne    9d6 <main+0x86>
  9b1:       48 89 df                mov    %rbx,%rdi
  9b4:       ff 10                   callq  *(%rax)
  [...]
  9d6:       0f 0b                   ud2

Virtual Table Layout
~~~~~~~~~~~~~~~~~~~~

The compiler lays out classes of disjoint hierarchies in separate regions
of the object file. At worst, bit vectors in disjoint hierarchies only
need to cover their disjoint hierarchy. But the closer that classes in
sub-hierarchies are laid out to each other, the smaller the bit vectors for
those sub-hierarchies need to be (see "Stripping Leading/Trailing Zeros in Bit
Vectors" above). The `GlobalLayoutBuilder`_ class is responsible for laying
out the globals efficiently to minimize the sizes of the underlying bitsets.

.. _GlobalLayoutBuilder: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Transforms/IPO/LowerTypeTests.h?view=markup

Alignment
~~~~~~~~~

If all gaps between address points in a particular bit vector are multiples
of powers of 2, the compiler can compress the bit vector by strengthening
the alignment requirements of the virtual table pointer. For example, given
this class hierarchy:

.. code-block:: c++

  struct A {
    virtual void f1();
    virtual void f2();
  };

  struct B : A {
    virtual void f1();
    virtual void f2();
    virtual void f3();
    virtual void f4();
    virtual void f5();
    virtual void f6();
  };

  struct C : A {
    virtual void f1();
    virtual void f2();
  };

The virtual tables will be laid out like this:

.. csv-table:: Virtual Table Layout for A, B, C
  :header: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15

  A::offset-to-top, &A::rtti, &A::f1, &A::f2, B::offset-to-top, &B::rtti, &B::f1, &B::f2, &B::f3, &B::f4, &B::f5, &B::f6, C::offset-to-top, &C::rtti, &C::f1, &C::f2

Notice that each address point for A is separated by 4 words. This lets us
emit a compressed bit vector for A that looks like this:

.. csv-table::
  :header: 2, 6, 10, 14

  1, 1, 0, 1

At call sites, the compiler will strengthen the alignment requirements by
using a different rotate count. For example, on a 64-bit machine where the
address points are 4-word aligned (as in A from our example), the ``rol``
instruction may look like this:

.. code-block:: none

  dd2:       48 c1 c1 3b             rol    $0x3b,%rcx

A rotate left by 59 (``0x3b``) is a rotate right by 5: 3 bits for the 8-byte
word size plus 2 bits for the 4-word alignment. The low alignment bits wrap
into the high bits, so a misaligned pointer fails the subsequent range check.

Padding to Powers of 2
~~~~~~~~~~~~~~~~~~~~~~

Of course, this alignment scheme works best if the address points are
in fact aligned correctly. To make this more likely to happen, we insert
padding between virtual tables that in many cases aligns address points to
a power of 2. Specifically, our padding aligns virtual tables to the next
highest power of 2 bytes; because address points for specific base classes
normally appear at fixed offsets within the virtual table, this normally
has the effect of aligning the address points as well.

This scheme introduces tradeoffs between decreased space overhead for
instructions and bit vectors and increased overhead in the form of padding. We
therefore limit the amount of padding so that we align to no more than 128
bytes. This number was found experimentally to provide a good tradeoff.

Eliminating Bit Vector Checks for All-Ones Bit Vectors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the bit vector is all ones, the bit vector check is redundant; we simply
need to check that the address is in range and well aligned. This is more
likely to occur if the virtual tables are padded.

Forward-Edge CFI for Indirect Function Calls
============================================

Under forward-edge CFI for indirect function calls, each unique function
type has its own bit vector, and at each call site we need to check that the
function pointer is a member of the function type's bit vector. This scheme
works in a similar way to forward-edge CFI for virtual calls, the distinction
being that we need to build bit vectors of function entry points rather than
of virtual tables.

Unlike when re-arranging global variables, we cannot re-arrange functions
in a particular order and base our calculations on the layout of the
functions' entry points, as we have no idea how large a particular function
will end up being (the function sizes could even depend on how we arrange
the functions). Instead, we build a jump table, which is a block of code
consisting of one branch instruction for each of the functions in the bit
set that branches to the target function, and redirect any taken function
addresses to the corresponding jump table entry. In this way, the distance
between function entry points is predictable and controllable. In the object
file's symbol table, the symbols for the target functions also refer to the
jump table entries, so that addresses taken outside the module will pass
any verification done inside the module.

In more concrete terms, suppose we have three functions ``f``, ``g``,
``h`` which are all of the same type, and a function ``foo`` that returns
their addresses:

.. code-block:: none

  f:
  mov 0, %eax
  ret

  g:
  mov 1, %eax
  ret

  h:
  mov 2, %eax
  ret

  foo:
  mov f, %eax
  mov g, %edx
  mov h, %ecx
  ret

Our jump table will (conceptually) look like this:

.. code-block:: none

  f:
  jmp .Ltmp0 ; 5 bytes
  int3       ; 1 byte
  int3       ; 1 byte
  int3       ; 1 byte

  g:
  jmp .Ltmp1 ; 5 bytes
  int3       ; 1 byte
  int3       ; 1 byte
  int3       ; 1 byte

  h:
  jmp .Ltmp2 ; 5 bytes
  int3       ; 1 byte
  int3       ; 1 byte
  int3       ; 1 byte

  .Ltmp0:
  mov 0, %eax
  ret

  .Ltmp1:
  mov 1, %eax
  ret

  .Ltmp2:
  mov 2, %eax
  ret

  foo:
  mov f, %eax
  mov g, %edx
  mov h, %ecx
  ret

Because the addresses of ``f``, ``g``, ``h`` are evenly spaced at a power of
2, and function types do not overlap (unlike class types with base classes),
we can normally apply the `Alignment`_ and `Eliminating Bit Vector Checks
for All-Ones Bit Vectors`_ optimizations, thus simplifying the check at each
call site to a range and alignment check.
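
With those optimizations applied, a sketch of the per-call-site logic might
look like this (all names are placeholders; the real check is emitted inline
by the compiler, not as a helper function):

.. code-block:: c++

  #include <cstdint>

  extern const uint64_t JumpTableStart; // start of this type's jump table
  extern const uint64_t NumEntries;     // number of functions of this type
  [[noreturn]] void Trap();

  void CheckedIndirectCall(void (*FnPtr)()) {
    uint64_t Diff = (uint64_t)FnPtr - JumpTableStart;
    uint64_t Index = (Diff << 61) | (Diff >> 3); // 8-byte jump table entries
    if (Index >= NumEntries)                     // range + alignment in one test
      Trap();
    FnPtr();                                     // validated indirect call
  }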

Shared library support
======================

**EXPERIMENTAL**

The basic CFI mode described above assumes that the application is a
monolithic binary; at least that all possible virtual/indirect call
targets and the entire class hierarchy are known at link time. The
cross-DSO mode, enabled with **-f[no-]sanitize-cfi-cross-dso**, relaxes
this requirement by allowing virtual and indirect calls to cross the
DSO boundary.

Assume the following setup: the binary consists of several
instrumented and several uninstrumented DSOs. Some of them may be
dlopen-ed/dlclose-d periodically, even frequently.

- Calls made from uninstrumented DSOs are not checked and just work.
- Calls inside any instrumented DSO are fully protected.
- Calls between different instrumented DSOs are also protected, with
  a performance penalty (in addition to the monolithic CFI
  overhead).
- Calls from an instrumented DSO to an uninstrumented one are
  unchecked and just work, with a performance penalty.
- Calls from an instrumented DSO outside of any known DSO are
  detected as CFI violations.

In the monolithic scheme a call site is instrumented as

.. code-block:: none

  if (!InlinedFastCheck(f))
    abort();
  call *f

In the cross-DSO scheme it becomes

.. code-block:: none

  if (!InlinedFastCheck(f))
    __cfi_slowpath(CallSiteTypeId, f);
  call *f

CallSiteTypeId
--------------

``CallSiteTypeId`` is a stable process-wide identifier of the
call-site type. For a virtual call site, the type in question is the class
type; for an indirect function call it is the function signature. The
mapping from a type to an identifier is an ABI detail. In the current,
experimental, implementation the identifier of type T is calculated as
follows:

- Obtain the mangled name for "typeinfo name for T".
- Calculate MD5 hash of the name as a string.
- Reinterpret the first 8 bytes of the hash as a little-endian
  64-bit integer.
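
A sketch of this computation, using OpenSSL's ``MD5()`` purely for
illustration and assuming the mangled "typeinfo name for T" string is
already in hand:

.. code-block:: c++

  #include <openssl/md5.h>
  #include <cstdint>
  #include <string>

  // Hypothetical helper, not the compiler's implementation.
  uint64_t CallSiteTypeIdFor(const std::string &MangledTypeInfoName) {
    unsigned char Digest[MD5_DIGEST_LENGTH];
    MD5(reinterpret_cast<const unsigned char *>(MangledTypeInfoName.data()),
        MangledTypeInfoName.size(), Digest);
    uint64_t Id = 0;
    for (int I = 7; I >= 0; --I) // first 8 bytes, read as little-endian
      Id = (Id << 8) | Digest[I];
    return Id;
  }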

It is possible, but unlikely, that collisions in the ``CallSiteTypeId``
hashing will result in weaker CFI checks; such checks would still be
conservatively correct.

CFI_Check
---------

In the general case, only the target DSO knows whether the call to
function ``f`` with type ``CallSiteTypeId`` is valid or not. To
export this information, every DSO implements

.. code-block:: none

  void __cfi_check(uint64 CallSiteTypeId, void *TargetAddr, void *DiagData)

This function provides external modules with access to CFI checks for
the targets inside this DSO. For each known ``CallSiteTypeId``, this
function performs an ``llvm.type.test`` with the corresponding type
identifier. It reports an error if the type is unknown, or if the
check fails. Depending on the values of compiler flags
``-fsanitize-trap`` and ``-fsanitize-recover``, this function may
print an error, abort and/or return to the caller. ``DiagData`` is an
opaque pointer to the diagnostic information about the error, or
``null`` if the caller does not provide this information.

The basic implementation is a large switch statement over all values
of ``CallSiteTypeId`` supported by this DSO, where each case is similar to
the ``InlinedFastCheck()`` in the basic CFI mode.
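
Conceptually, the generated function has roughly this shape (the constants
and helper names below are invented for illustration; the real function is
synthesized during LTO):

.. code-block:: c++

  #include <cstdint>

  bool FastCheckForTypeA(void *Addr); // hypothetical inlined fast checks
  bool FastCheckForTypeB(void *Addr);
  void FailCheck(uint64_t TypeId, void *Addr, void *DiagData);

  extern "C" void __cfi_check(uint64_t CallSiteTypeId, void *Addr,
                              void *DiagData) {
    switch (CallSiteTypeId) {
    case 0x7abf3c1d5e024a9bULL:       // made-up CallSiteTypeId for type A
      if (FastCheckForTypeA(Addr)) return;
      break;
    case 0x11d2b8a0c4f7e365ULL:       // made-up CallSiteTypeId for type B
      if (FastCheckForTypeB(Addr)) return;
      break;
    default:                          // unknown CallSiteTypeId
      break;
    }
    // Trap, print and/or return, per -fsanitize-trap / -fsanitize-recover.
    FailCheck(CallSiteTypeId, Addr, DiagData);
  }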

CFI Shadow
----------

To route CFI checks to the target DSO's ``__cfi_check`` function, a
mapping from possible virtual/indirect call targets to the
corresponding ``__cfi_check`` functions is maintained. This mapping is
implemented as a sparse array of 2 bytes for every possible page (4096
bytes) of memory. The table is kept read-only most of the time.

There are 3 types of shadow values:

- Address in a CFI-instrumented DSO.
- Unchecked address (a "trusted" non-instrumented DSO). Encoded as
  value 0xFFFF.
- Invalid address (everything else). Encoded as value 0.

For a CFI-instrumented DSO, a shadow value encodes the address of the
``__cfi_check`` function for all call targets in the corresponding memory
page. If ``Addr`` is the target address, and ``V`` is the shadow value, then
the address of ``__cfi_check`` is calculated as

.. code-block:: none

  __cfi_check = AlignUpTo(Addr, 4096) - (V + 1) * 4096

This works as long as ``__cfi_check`` is aligned by 4096 bytes and located
below any call targets in its DSO, but not more than 256MB apart from
them.

CFI_SlowPath
------------

The slow path check is implemented in a runtime support library as

.. code-block:: none

  void __cfi_slowpath(uint64 CallSiteTypeId, void *TargetAddr)
  void __cfi_slowpath_diag(uint64 CallSiteTypeId, void *TargetAddr, void *DiagData)

These functions load a shadow value for ``TargetAddr``, find the
address of ``__cfi_check`` as described above and call
it. ``DiagData`` is an opaque pointer to diagnostic data which is
passed verbatim to ``__cfi_check``; ``__cfi_slowpath`` passes
``nullptr`` instead.
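
Putting the shadow lookup and the address reconstruction together, a
simplified sketch of the slow path follows. ``CfiShadow`` and ``Abort()``
are placeholders, and a real runtime must also synchronize with
dlopen()/dlclose():

.. code-block:: c++

  #include <cstdint>

  extern const uint16_t *CfiShadow; // hypothetical: one entry per 4096-byte page
  [[noreturn]] void Abort();

  extern "C" void __cfi_slowpath_diag(uint64_t CallSiteTypeId, void *Ptr,
                                      void *DiagData) {
    uintptr_t Addr = (uintptr_t)Ptr;
    uint16_t V = CfiShadow[Addr / 4096];
    if (V == 0xFFFF) return; // unchecked: trusted, non-instrumented DSO
    if (V == 0) Abort();     // invalid address: CFI violation
    // __cfi_check = AlignUpTo(Addr, 4096) - (V + 1) * 4096
    uintptr_t Check =
        ((Addr + 4095) & ~(uintptr_t)4095) - ((uintptr_t)V + 1) * 4096;
    ((void (*)(uint64_t, void *, void *))Check)(CallSiteTypeId, Ptr, DiagData);
  }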

The compiler-rt library contains reference implementations of slowpath
functions, but they have unresolvable issues with correctness and
performance in the handling of dlopen(). It is recommended that
platforms provide their own implementations, usually as part of libc
or libdl.

Position-independent executable requirement
-------------------------------------------

Cross-DSO CFI mode requires that the main executable be built as PIE.
In non-PIE executables the address of an external function (taken from
the main executable) is the address of that function's PLT record in
the main executable. This would break the CFI checks.

Backward-edge CFI for return statements (RCFI)
==============================================

This section is a proposal. As of March 2017 it is not implemented.

Backward-edge control flow (`RET` instructions) can be hijacked
via overwriting the return address (`RA`) on stack.
Various mitigation techniques (e.g. `SafeStack`_, `RFG`_, `Intel CET`_)
try to detect or prevent `RA` corruption on stack.

RCFI enforces the expected control flow in several different ways described below.
RCFI heavily relies on LTO.

Leaf Functions
--------------

If `f()` is a leaf function (i.e. it has no calls
except maybe no-return calls) it can be called using a special calling convention
that stores `RA` in a dedicated register `R` before the `CALL` instruction.
`f()` does not spill `R` and does not use the `RET` instruction,
instead it uses the value in `R` to `JMP` to `RA`.

This flavour of CFI is *precise*, i.e. the function is guaranteed to return
to the point exactly following the call.

An alternative approach is to
copy `RA` from stack to `R` in the first instruction of `f()`,
then `JMP` to `R`.
This approach is simpler to implement (does not require changing the caller)
but weaker (there is a small window when `RA` is actually stored on stack).

Functions called once
---------------------

Suppose `f()` is called in just one place in the program
(assuming we can verify this in LTO mode).
In this case we can replace the `RET` instruction with a `JMP` instruction
with the immediate constant for `RA`.
This will *precisely* enforce the return control flow no matter what is stored on stack.

Another variant is to compare `RA` on stack with the known constant and abort
if they don't match; then `JMP` to the known constant address.

Functions called in a small number of call sites
------------------------------------------------

We may extend the above approach to cases where `f()`
is called more than once (but still a small number of times).
With LTO we know all possible values of `RA` and we check them
one-by-one (or using binary search) against the value on stack.
If a match is found, we `JMP` to the known constant address, otherwise abort.

This protection is *near-precise*, i.e. it guarantees that the control flow will
be transferred to one of the valid return addresses for this function,
but not necessarily to the point of the most recent `CALL`.
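
As a conceptual sketch only (RCFI is a proposal, and the real check would be
emitted as machine code, not C++), the near-precise multi-site check could
look like this, with placeholder addresses:

.. code-block:: c++

  #include <cstdint>

  [[noreturn]] void Abort();
  [[noreturn]] void JumpTo(uint64_t Target); // hypothetical direct jump

  // LTO-known return addresses for f(); the values are placeholders.
  static const uint64_t ValidRAs[] = {0x4011a0, 0x4023b8, 0x40387c};

  [[noreturn]] void CheckReturn(uint64_t RAOnStack) {
    for (uint64_t RA : ValidRAs)
      if (RAOnStack == RA)
        JumpTo(RA); // jump to the known constant, not the stack value
    Abort();        // no match: control-flow violation
  }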

General case
------------

For functions called multiple times a *return jump table* is constructed
in the same manner as jump tables for indirect function calls (see above).
The correct jump table entry (or its index) is passed by `CALL` to `f()`
(as an extra argument) and then spilled to stack.
The `RET` instruction is replaced with a load of the jump table entry,
a jump table range check, and a `JMP` to the jump table entry.

This protection is also *near-precise*.

Returns from functions called indirectly
----------------------------------------

If a function is called indirectly, the return jump table is constructed for the
equivalence class of functions instead of a single function.

Cross-DSO calls
---------------

Consider two instrumented DSOs, `A` and `B`. `A` defines `f()` and `B` calls it.
This case will be handled similarly to the cross-DSO scheme using the slow path callback.

Non-goals
---------

RCFI does not protect `RET` instructions:

* in non-instrumented DSOs,
* in instrumented DSOs for functions that are called from non-instrumented DSOs,
* embedded into other instructions (e.g. `0f4fc3 cmovg %ebx,%eax`).

.. _SafeStack: https://clang.llvm.org/docs/SafeStack.html
.. _RFG: http://xlab.tencent.com/en/2016/11/02/return-flow-guard
.. _Intel CET: https://software.intel.com/en-us/blogs/2016/06/09/intel-release-new-technology-specifications-protect-rop-attacks

Hardware support
================

We believe that the above design can be efficiently implemented in hardware.
A single new instruction added to an ISA would make it possible to perform
the forward-edge CFI check with fewer bytes per check (smaller code size
overhead) and potentially more efficiently. The current software-only
instrumentation requires at least 32 bytes per check (on x86_64);
a hardware instruction could probably be encoded in roughly 12 bytes or fewer.
Such an instruction would check that the argument pointer is in-bounds
and properly aligned, and if the checks fail it would either trap (in the
monolithic scheme) or call the slow path function (cross-DSO scheme).
The bit vector lookup is probably too complex for a hardware implementation.

.. code-block:: none

  // This instruction checks that 'Ptr'
  //  * is aligned by (1 << kAlignment) and
  //  * is inside [kRangeBeg, kRangeBeg+(kRangeSize<<kAlignment))
  // and if the check fails it jumps to the given target (slow path).
  //
  // 'Ptr' is a register, pointing to the virtual function table
  //    or to the function which we need to check. We may require an explicit
  //    fixed register to be used.
  // 'kAlignment' is a 4-bit constant.
  // 'kRangeSize' is a ~20-bit constant.
  // 'kRangeBeg' is a PC-relative constant (~28 bits)
  //    pointing to the beginning of the allowed range for 'Ptr'.
  // 'kFailedCheckTarget': is a PC-relative constant (~28 bits)
  //    representing the target to branch to when the check fails.
  //    If kFailedCheckTarget==0, the process will trap
  //    (monolithic binary scheme).
  //    Otherwise it will jump to a handler that implements `CFI_SlowPath`
  //    (cross-DSO scheme).
  CFI_Check(Ptr, kAlignment, kRangeSize, kRangeBeg, kFailedCheckTarget) {
     if (Ptr < kRangeBeg ||
         Ptr >= kRangeBeg + (kRangeSize << kAlignment) ||
         Ptr & ((1 << kAlignment) - 1))
       Jump(kFailedCheckTarget);
  }

An alternative and more compact encoding would not use `kFailedCheckTarget`,
and would trap on check failure instead.
This would allow us to fit the instruction into **8-9 bytes**.
The cross-DSO checks would then be performed by a trap handler, and
performance-critical ones would have to be blacklisted and checked using the
software-only scheme.

Note that such a hardware extension would be complementary to checks
at the callee side, such as e.g. **Intel ENDBRANCH**.
Moreover, CFI would have two benefits over ENDBRANCH: a) precision and b)
ability to protect against invalid casts between polymorphic types.