
======================================================
Kaleidoscope: Conclusion and other useful LLVM tidbits
======================================================

.. contents::
   :local:

Tutorial Conclusion
===================

Welcome to the final chapter of the "`Implementing a language with
LLVM <index.html>`_" tutorial. In the course of this tutorial, we have
grown our little Kaleidoscope language from being a useless toy, to
being a semi-interesting (but probably still useless) toy. :)

It is interesting to see how far we've come, and how little code it has
taken. We built the entire lexer, parser, AST, code generator, and an
interactive run-loop (with a JIT!) by-hand in under 700 lines of
(non-comment/non-blank) code.

Our little language supports a couple of interesting features: it
supports user defined binary and unary operators, it uses JIT
compilation for immediate evaluation, and it supports a few control flow
constructs with SSA construction.

Part of the idea of this tutorial was to show you how easy and fun it
can be to define, build, and play with languages. Building a compiler
need not be a scary or mystical process! Now that you've seen some of
the basics, I strongly encourage you to take the code and hack on it.
For example, try adding:

-  **global variables** - While global variables have questionable value
   in modern software engineering, they are often useful when putting
   together quick little hacks like the Kaleidoscope compiler itself.
   Fortunately, our current setup makes it very easy to add global
   variables: just have value lookup check to see if an unresolved
   variable is in the global variable symbol table before rejecting it.
   To create a new global variable, make an instance of the LLVM
   ``GlobalVariable`` class (see the sketch after this list).

-  **typed variables** - Kaleidoscope currently only supports variables
   of type double. This gives the language a very nice elegance, because
   only supporting one type means that you never have to specify types.
   Different languages have different ways of handling this. The easiest
   way is to require the user to specify types for every variable
   definition, and record the type of the variable in the symbol table
   along with its Value\*.

-  **arrays, structs, vectors, etc** - Once you add types, you can start
   extending the type system in all sorts of interesting ways. Simple
   arrays are very easy and are quite useful for many different
   applications. Adding them is mostly an exercise in learning how the
   LLVM `getelementptr <../LangRef.html#getelementptr-instruction>`_ instruction
   works: it is so nifty/unconventional, it `has its own
   FAQ <../GetElementPtr.html>`_! If you add support for recursive types
   (e.g. linked lists), make sure to read the `section in the LLVM
   Programmer's Manual <../ProgrammersManual.html#TypeResolve>`_ that
   describes how to construct them.

-  **standard runtime** - Our current language allows the user to access
   arbitrary external functions, and we use it for things like "printd"
   and "putchard". As you extend the language to add higher-level
   constructs, often these constructs make the most sense if they are
   lowered to calls into a language-supplied runtime. For example, if
   you add hash tables to the language, it would probably make sense to
   add the routines to a runtime, instead of inlining them all the way.

-  **memory management** - Currently we can only access the stack in
   Kaleidoscope. It would also be useful to be able to allocate heap
   memory, either with calls to the standard libc malloc/free interface
   or with a garbage collector. If you would like to use garbage
   collection, note that LLVM fully supports `Accurate Garbage
   Collection <../GarbageCollection.html>`_ including algorithms that
   move objects and need to scan/update the stack.

-  **debugger support** - LLVM supports generation of `DWARF Debug
   info <../SourceLevelDebugging.html>`_ which is understood by common
   debuggers like GDB. Adding support for debug info is fairly
   straightforward. The best way to understand it is to compile some
   C/C++ code with "``clang -g -O0``" and take a look at what it
   produces.

-  **exception handling support** - LLVM supports generation of `zero
   cost exceptions <../ExceptionHandling.html>`_ which interoperate with
   code compiled in other languages. You could also generate code by
   implicitly making every function return an error value and checking
   it. You could also make explicit use of setjmp/longjmp. There are
   many different ways to go here.

-  **object orientation, generics, database access, complex numbers,
   geometric programming, ...** - Really, there is no end of crazy
   features that you can add to the language.

-  **unusual domains** - We've been talking about applying LLVM to a
   domain that many people are interested in: building a compiler for a
   specific language. However, there are many other domains that can use
   compiler technology that are not typically considered. For example,
   LLVM has been used to implement OpenGL graphics acceleration,
   translate C++ code to ActionScript, and many other cute and clever
   things. Maybe you will be the first to JIT compile a regular
   expression interpreter into native code with LLVM?

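As a concrete starting point for the global-variable idea above, here is a
minimal sketch using the OCaml bindings. It assumes the tutorial's existing
``context``, ``the_module``, ``builder``, ``double_type`` and ``named_values``
definitions from codegen.ml; the ``global_values`` table and the two helper
functions are hypothetical names introduced only for illustration, and the
``build_load`` signature shown is the one from the older bindings this
tutorial targets (newer bindings also take the loaded type).

.. code-block:: ocaml

    (* A second symbol table, holding globals rather than function-local
       values. *)
    let global_values : (string, Llvm.llvalue) Hashtbl.t = Hashtbl.create 10

    (* Create a new global: Llvm.define_global builds an LLVM GlobalVariable
       with the given initializer and registers it in the module. *)
    let codegen_global name init =
      let g =
        Llvm.define_global name (Llvm.const_float double_type init) the_module
      in
      Hashtbl.add global_values name g;
      g

    (* Variable lookup: try the locals first, then fall back to the globals.
       A global is a pointer to its storage, so it must be loaded before use. *)
    let codegen_variable name =
      match Hashtbl.find_opt named_values name with
      | Some v -> v
      | None ->
          match Hashtbl.find_opt global_values name with
          | Some g -> Llvm.build_load g name builder
          | None -> failwith "unknown variable name"

The important point is the shape of the lookup: the local symbol table is
consulted first, and only unresolved names fall through to the globals.
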
Have fun - try doing something crazy and unusual. Building a language
like everyone else always has is much less fun than trying something a
little crazy or off the wall and seeing how it turns out. If you get
stuck or want to talk about it, feel free to email the `llvm-dev mailing
list <http://lists.llvm.org/mailman/listinfo/llvm-dev>`_: it has lots
of people who are interested in languages and are often willing to help
out.

Before we end this tutorial, I want to talk about some "tips and tricks"
for generating LLVM IR. These are some of the more subtle things that
may not be obvious, but are very useful if you want to take advantage of
LLVM's capabilities.

Properties of the LLVM IR
=========================

We have a couple of common questions about code in the LLVM IR form -
let's just get these out of the way right now, shall we?

Target Independence
-------------------

Kaleidoscope is an example of a "portable language": any program written
in Kaleidoscope will work the same way on any target that it runs on.
Many other languages have this property, e.g. lisp, java, haskell,
javascript, python, etc (note that while these languages are portable,
not all their libraries are).

One nice aspect of LLVM is that it is often capable of preserving target
independence in the IR: you can take the LLVM IR for a
Kaleidoscope-compiled program and run it on any target that LLVM
supports, even emitting C code and compiling that on targets that LLVM
doesn't support natively. You can trivially tell that the Kaleidoscope
compiler generates target-independent code because it never queries for
any target-specific information when generating code.

The fact that LLVM provides a compact, target-independent,
representation for code gets a lot of people excited. Unfortunately,
these people are usually thinking about C or a language from the C
family when they are asking questions about language portability. I say
"unfortunately", because there is really no way to make (fully general)
C code portable, other than shipping the source code around (and of
course, C source code is not actually portable in general either - ever
port a really old application from 32- to 64-bits?).

The problem with C (again, in its full generality) is that it is heavily
laden with target specific assumptions. As one simple example, the
preprocessor often destructively removes target-independence from the
code when it processes the input text:

.. code-block:: c

    #ifdef __i386__
      int X = 1;
    #else
      int X = 42;
    #endif

While it is possible to engineer more and more complex solutions to
problems like this, it cannot be solved in full generality in a way that
is better than shipping the actual source code.

That said, there are interesting subsets of C that can be made portable.
If you are willing to fix primitive types to a fixed size (say int =
32-bits, and long = 64-bits), don't care about ABI compatibility with
existing binaries, and are willing to give up some other minor features,
you can have portable code. This can make sense for specialized domains
such as an in-kernel language.

Safety Guarantees
-----------------

Many of the languages above are also "safe" languages: it is impossible
for a program written in Java to corrupt its address space and crash the
process (assuming the JVM has no bugs). Safety is an interesting
property that requires a combination of language design, runtime
support, and often operating system support.

It is certainly possible to implement a safe language in LLVM, but LLVM
IR does not itself guarantee safety. The LLVM IR allows unsafe pointer
casts, use after free bugs, buffer over-runs, and a variety of other
problems. Safety needs to be implemented as a layer on top of LLVM and,
conveniently, several groups have investigated this. Ask on the `llvm-dev
mailing list <http://lists.llvm.org/mailman/listinfo/llvm-dev>`_ if
you are interested in more details.

Language-Specific Optimizations
-------------------------------

One thing about LLVM that turns off many people is that it does not
solve all the world's problems in one system (sorry 'world hunger',
someone else will have to solve you some other day). One specific
complaint is that people perceive LLVM as being incapable of performing
high-level language-specific optimization: LLVM "loses too much
information".

Unfortunately, this is really not the place to give you a full and
unified version of "Chris Lattner's theory of compiler design". Instead,
I'll make a few observations:

First, you're right that LLVM does lose information. For example, as of
this writing, there is no way to distinguish in the LLVM IR whether an
SSA-value came from a C "int" or a C "long" on an ILP32 machine (other
than debug info). Both get compiled down to an 'i32' value and the
information about what it came from is lost. The more general issue here
is that the LLVM type system uses "structural equivalence" instead of
"name equivalence". Another place this surprises people is if you have
two types in a high-level language that have the same structure
(e.g. two different structs that have a single int field): these types
will compile down into a single LLVM type and it will be impossible to
tell what it came from.

Second, while LLVM does lose information, LLVM is not a fixed target: we
continue to enhance and improve it in many different ways. In addition
to adding new features (LLVM did not always support exceptions or debug
info), we also extend the IR to capture important information for
optimization (e.g. whether an argument is sign or zero extended,
information about pointers aliasing, etc). Many of the enhancements are
user-driven: people want LLVM to include some specific feature, so they
go ahead and extend it.

Third, it is *possible and easy* to add language-specific optimizations,
and you have a number of choices in how to do it. As one trivial
example, it is easy to add language-specific optimization passes that
"know" things about code compiled for a language. In the case of the C
family, there is an optimization pass that "knows" about the standard C
library functions. If you call "exit(0)" in main(), it knows that it is
safe to optimize that into "return 0;" because C specifies what the
'exit' function does.

In addition to simple library knowledge, it is possible to embed a
variety of other language-specific information into the LLVM IR. If you
have a specific need and run into a wall, please bring the topic up on
the llvm-dev list. At the very worst, you can always treat LLVM as if it
were a "dumb code generator" and implement the high-level optimizations
you desire in your front-end, on the language-specific AST.

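One common way to carry extra language-level facts through the IR is custom
metadata. The following is a minimal sketch using the OCaml bindings and the
tutorial's ``context`` from codegen.ml; the kind name "kaleidoscope.note" is
made up purely for illustration, and a language-aware pass would have to look
for it explicitly.

.. code-block:: ocaml

    (* Attach a string of language-specific information to an instruction
       as custom metadata, so a later pass that knows about the language
       can find it again. *)
    let tag_instruction instr note =
      let kind = Llvm.mdkind_id context "kaleidoscope.note" in
      let md = Llvm.mdnode context [| Llvm.mdstring context note |] in
      Llvm.set_metadata instr kind md

Standard LLVM passes simply ignore metadata kinds they do not understand, so
information recorded this way degrades gracefully rather than blocking
optimization.
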
Tips and Tricks
===============

There is a variety of useful tips and tricks that you come to know after
working on/with LLVM that aren't obvious at first glance. Instead of
letting everyone rediscover them, this section talks about some of these
issues.

Implementing portable offsetof/sizeof
-------------------------------------

One interesting thing that comes up, if you are trying to keep the code
generated by your compiler "target independent", is that you often need
to know the size of some LLVM type or the offset of some field in an
LLVM structure. For example, you might need to pass the size of a type
into a function that allocates memory.

Unfortunately, this can vary widely across targets: for example the
width of a pointer is trivially target-specific. However, there is a
`clever way to use the getelementptr
instruction <http://nondot.org/sabre/LLVMNotes/SizeOf-OffsetOf-VariableSizedStructs.txt>`_
that allows you to compute this in a portable way.

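The trick is to index off a null pointer and turn the resulting address into
an integer: because getelementptr is a pure address computation, the result
is a target-independent constant expression that the backend folds to the
right value. Here is a minimal sketch using the OCaml bindings and the
tutorial's ``context``; note that the ``const_gep`` signature shown matches
the older (typed-pointer) bindings this tutorial was written against, and
newer bindings also take the element type.

.. code-block:: ocaml

    (* sizeof(ty): the address of "element 1" of a null (ty*) pointer. *)
    let portable_sizeof ty =
      let one = Llvm.const_int (Llvm.i32_type context) 1 in
      let end_ptr =
        Llvm.const_gep (Llvm.const_null (Llvm.pointer_type ty)) [| one |]
      in
      Llvm.const_ptrtoint end_ptr (Llvm.i64_type context)

    (* offsetof(struct_ty, field_idx): the address of that field in a null
       (struct_ty*) pointer. *)
    let portable_offsetof struct_ty field_idx =
      let i32 n = Llvm.const_int (Llvm.i32_type context) n in
      let field_ptr =
        Llvm.const_gep (Llvm.const_null (Llvm.pointer_type struct_ty))
          [| i32 0; i32 field_idx |]
      in
      Llvm.const_ptrtoint field_ptr (Llvm.i64_type context)

The bindings also expose ``Llvm.size_of``, which builds the same kind of
constant expression for you.
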
Garbage Collected Stack Frames
------------------------------

Some languages want to explicitly manage their stack frames, often so
that they are garbage collected or to allow easy implementation of
closures. There are often better ways to implement these features than
explicit stack frames, but `LLVM does support
them <http://nondot.org/sabre/LLVMNotes/ExplicitlyManagedStackFrames.txt>`_,
if you want. It requires your front-end to convert the code into
`Continuation Passing
Style <http://en.wikipedia.org/wiki/Continuation-passing_style>`_ and
the use of tail calls (which LLVM also supports).
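
To see what that conversion means, here is a minimal sketch of
continuation-passing style in plain OCaml (not part of the Kaleidoscope
compiler): every function takes an explicit continuation and finishes with a
tail call, which is exactly the shape a front-end emitting explicitly managed
stack frames would need to produce.

.. code-block:: ocaml

    (* Direct style: the pending multiplication lives on the implicit
       call stack. *)
    let rec fact n = if n = 0 then 1 else n * fact (n - 1)

    (* CPS: the pending work lives in the continuation [k] instead, so
       every call is a tail call and no implicit stack growth is needed. *)
    let rec fact_cps n k =
      if n = 0 then k 1
      else fact_cps (n - 1) (fun r -> k (n * r))

    let () =
      assert (fact 5 = 120);
      Printf.printf "%d\n" (fact_cps 5 (fun r -> r))   (* prints 120 *)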