//===- MemorySanitizer.cpp - detector of uninitialized reads --------------===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
/// \file
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, and report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
///
/// But there are differences too. The first and the major one:
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But this brings the major issue
/// as well: msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded so races are not a
/// concern there. Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
///
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
///
/// Origin tracking.
///
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for 4 bytes
/// of application memory. Propagation of origins is basically a bunch of
/// "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
///
/// Every aligned, consecutive 4 bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from 2 different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely in
/// practice.
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origin to memory when a fully initialized value is stored.
/// This way it avoids needlessly overwriting the origin of the 4-byte region
/// on a short (i.e. 1-byte) clean store, and it is also good for performance.
///
/// Atomic handling.
///
/// Ideally, every atomic store of application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, atomic store
/// of two disjoint locations cannot be done without severe slowdown.
///
/// Therefore, we implement an approximation that may err on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, shadow store and load are correctly
/// ordered such that the load will get either the value that was stored, or
/// some later value (which is always clean).
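///
/// As an illustrative sketch (%shadow_ptr stands for the mapped shadow
/// address of %ptr; the shadow stored for an atomic application store is
/// always clean), the instrumented sequence for an atomic store is roughly:
///   store i32 0, i32* %shadow_ptr              ; clean shadow, stored first
///   store atomic i32 %v, i32* %ptr release     ; the application store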
///
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store by storing a
/// clean shadow.
//
//===----------------------------------------------------------------------===//

#include "llvm/ADT/APInt.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Constant.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"
#include "llvm/IR/InstVisitor.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/Pass.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Instrumentation.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>
#include <tuple>

using namespace llvm;

#define DEBUG_TYPE "msan"

static const unsigned kOriginSize = 4;
static const unsigned kMinOriginAlignment = 4;
static const unsigned kShadowTLSAlignment = 8;

// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;
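// For example, an 8-byte access maps to index log2(8) == 3, the last valid
// slot of the MaybeWarningFn/MaybeStoreOriginFn callback arrays declared
// below.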

/// Track origins of uninitialized values.
///
/// Adds a section to MemorySanitizer report that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins("msan-track-origins",
       cl::desc("Track origins (allocation sites) of poisoned memory"),
       cl::Hidden, cl::init(0));

static cl::opt<bool> ClKeepGoing("msan-keep-going",
       cl::desc("keep going after reporting a UMR"),
       cl::Hidden, cl::init(false));

static cl::opt<bool> ClPoisonStack("msan-poison-stack",
       cl::desc("poison uninitialized stack variables"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall("msan-poison-stack-with-call",
       cl::desc("poison uninitialized stack variables with a call"),
       cl::Hidden, cl::init(false));

static cl::opt<int> ClPoisonStackPattern("msan-poison-stack-pattern",
       cl::desc("poison uninitialized stack variables with the given pattern"),
       cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
       cl::desc("poison undef temps"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClHandleICmp("msan-handle-icmp",
       cl::desc("propagate shadow through ICmpEQ and ICmpNE"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClHandleICmpExact("msan-handle-icmp-exact",
       cl::desc("exact handling of relational integer ICmp"),
       cl::Hidden, cl::init(false));

// When compiling the Linux kernel, we sometimes see false positives related to
// MSan being unable to understand that inline assembly calls may initialize
// local variables.
// This flag makes the compiler conservatively unpoison every memory location
// passed into an assembly call. Note that this may cause false negatives.
// Because it's impossible to figure out the array sizes, we can only unpoison
// the first sizeof(type) bytes for each type* pointer.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,
    cl::init(false));

// This flag controls whether we check the shadow of the address
// operand of load or store. Such bugs are very rare, since load from
// a garbage address typically results in SEGV, but they still happen
// (e.g. only the lower bits of the address are garbage, or the access happens
// early at program startup where malloc-ed memory is more likely to
// be zeroed). As of 2012-08-28 this flag adds 20% slowdown.
static cl::opt<bool> ClCheckAccessAddress("msan-check-access-address",
       cl::desc("report accesses through a pointer which has poisoned shadow"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClDumpStrictInstructions("msan-dump-strict-instructions",
       cl::desc("print out instructions with default strict semantics"),
       cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));

// This is an experiment to enable handling of cases where shadow is a non-zero
// compile-time constant. For some unexplainable reason they were silently
// ignored in the instrumentation.
static cl::opt<bool> ClCheckConstantShadow("msan-check-constant-shadow",
       cl::desc("Insert checks for constant shadow values"),
       cl::Hidden, cl::init(false));

// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool> ClWithComdat("msan-with-comdat",
       cl::desc("Place MSan constructors in comdat sections"),
       cl::Hidden, cl::init(false));

// These options allow specifying custom memory map parameters.
// See MemoryMapParams for details.
static cl::opt<unsigned long long> ClAndMask("msan-and-mask",
       cl::desc("Define custom MSan AndMask"),
       cl::Hidden, cl::init(0));

static cl::opt<unsigned long long> ClXorMask("msan-xor-mask",
       cl::desc("Define custom MSan XorMask"),
       cl::Hidden, cl::init(0));

static cl::opt<unsigned long long> ClShadowBase("msan-shadow-base",
       cl::desc("Define custom MSan ShadowBase"),
       cl::Hidden, cl::init(0));

static cl::opt<unsigned long long> ClOriginBase("msan-origin-base",
       cl::desc("Define custom MSan OriginBase"),
       cl::Hidden, cl::init(0));

static const char *const kMsanModuleCtorName = "msan.module_ctor";
static const char *const kMsanInitName = "__msan_init";

namespace {

// Memory map parameters used in application-to-shadow address calculation.
// Offset = (Addr & ~AndMask) ^ XorMask
// Shadow = ShadowBase + Offset
// Origin = OriginBase + Offset
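// For example, with the default x86_64 Linux parameters below (AndMask 0,
// XorMask 0x500000000000, ShadowBase 0, OriginBase 0x100000000000), a
// hypothetical application address 0x7fff80001234 yields
//   Offset = Shadow = 0x7fff80001234 ^ 0x500000000000 = 0x2fff80001234
//   Origin = 0x100000000000 + 0x2fff80001234 = 0x3fff80001234 (4-byte aligned)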
struct MemoryMapParams {
  uint64_t AndMask;
  uint64_t XorMask;
  uint64_t ShadowBase;
  uint64_t OriginBase;
};

struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace

// i386 Linux
static const MemoryMapParams Linux_I386_MemoryMapParams = {
  0x000080000000,  // AndMask
  0,               // XorMask (not used)
  0,               // ShadowBase (not used)
  0x000040000000,  // OriginBase
};

// x86_64 Linux
static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
  0x400000000000,  // AndMask
  0,               // XorMask (not used)
  0,               // ShadowBase (not used)
  0x200000000000,  // OriginBase
#else
  0,               // AndMask (not used)
  0x500000000000,  // XorMask
  0,               // ShadowBase (not used)
  0x100000000000,  // OriginBase
#endif
};

// mips64 Linux
static const MemoryMapParams Linux_MIPS64_MemoryMapParams = {
  0,               // AndMask (not used)
  0x008000000000,  // XorMask
  0,               // ShadowBase (not used)
  0x002000000000,  // OriginBase
};

// ppc64 Linux
static const MemoryMapParams Linux_PowerPC64_MemoryMapParams = {
  0xE00000000000,  // AndMask
  0x100000000000,  // XorMask
  0x080000000000,  // ShadowBase
  0x1C0000000000,  // OriginBase
};

// aarch64 Linux
static const MemoryMapParams Linux_AArch64_MemoryMapParams = {
  0,               // AndMask (not used)
  0x06000000000,   // XorMask
  0,               // ShadowBase (not used)
  0x01000000000,   // OriginBase
};

// i386 FreeBSD
static const MemoryMapParams FreeBSD_I386_MemoryMapParams = {
  0x000180000000,  // AndMask
  0x000040000000,  // XorMask
  0x000020000000,  // ShadowBase
  0x000700000000,  // OriginBase
};

// x86_64 FreeBSD
static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
  0xc00000000000,  // AndMask
  0x200000000000,  // XorMask
  0x100000000000,  // ShadowBase
  0x380000000000,  // OriginBase
};

// x86_64 NetBSD
static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
  0,               // AndMask
  0x500000000000,  // XorMask
  0,               // ShadowBase
  0x100000000000,  // OriginBase
};

static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {
  &Linux_I386_MemoryMapParams,
  &Linux_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams = {
  nullptr,
  &Linux_MIPS64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams = {
  nullptr,
  &Linux_PowerPC64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams = {
  nullptr,
  &Linux_AArch64_MemoryMapParams,
};

static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams = {
  &FreeBSD_I386_MemoryMapParams,
  &FreeBSD_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams = {
  nullptr,
  &NetBSD_X86_64_MemoryMapParams,
};

namespace {

/// An instrumentation pass implementing detection of uninitialized
/// reads.
///
/// MemorySanitizer: instrument the code in a module to find
/// uninitialized reads.
class MemorySanitizer : public FunctionPass {
public:
  // Pass identification, replacement for typeid.
  static char ID;

  MemorySanitizer(int TrackOrigins = 0, bool Recover = false)
      : FunctionPass(ID),
        TrackOrigins(std::max(TrackOrigins, (int)ClTrackOrigins)),
        Recover(Recover || ClKeepGoing) {}

  StringRef getPassName() const override { return "MemorySanitizer"; }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetLibraryInfoWrapperPass>();
  }

  bool runOnFunction(Function &F) override;
  bool doInitialization(Module &M) override;

private:
  friend struct MemorySanitizerVisitor;
  friend struct VarArgAMD64Helper;
  friend struct VarArgMIPS64Helper;
  friend struct VarArgAArch64Helper;
  friend struct VarArgPowerPC64Helper;

  void initializeCallbacks(Module &M);

  /// Track origins (allocation points) of uninitialized values.
  int TrackOrigins;
  bool Recover;

  LLVMContext *C;
  Type *IntptrTy;
  Type *OriginTy;

  /// Thread-local shadow storage for function parameters.
  GlobalVariable *ParamTLS;

  /// Thread-local origin storage for function parameters.
  GlobalVariable *ParamOriginTLS;

  /// Thread-local shadow storage for function return value.
  GlobalVariable *RetvalTLS;

  /// Thread-local origin storage for function return value.
  GlobalVariable *RetvalOriginTLS;

  /// Thread-local shadow storage for in-register va_arg function
  /// parameters (x86_64-specific).
  GlobalVariable *VAArgTLS;

  /// Thread-local shadow storage for va_arg overflow area
  /// (x86_64-specific).
  GlobalVariable *VAArgOverflowSizeTLS;

  /// Thread-local space used to pass origin value to the UMR reporting
  /// function.
  GlobalVariable *OriginTLS;

  /// The run-time callback to print a warning.
  Value *WarningFn = nullptr;

  // These arrays are indexed by log2(AccessSize).
  Value *MaybeWarningFn[kNumberOfAccessSizes];
  Value *MaybeStoreOriginFn[kNumberOfAccessSizes];

  /// Run-time helper that generates a new origin value for a stack
  /// allocation.
  Value *MsanSetAllocaOrigin4Fn;

  /// Run-time helper that poisons stack on function entry.
  Value *MsanPoisonStackFn;

  /// Run-time helper that records a store (or any event) of an
  /// uninitialized value and returns an updated origin id encoding this info.
  Value *MsanChainOriginFn;

  /// MSan runtime replacements for memmove, memcpy and memset.
  Value *MemmoveFn, *MemcpyFn, *MemsetFn;

  /// Memory map parameters used in application-to-shadow calculation.
  const MemoryMapParams *MapParams;

  /// Custom memory map parameters used when -msan-shadow-base or
  /// -msan-origin-base is provided.
  MemoryMapParams CustomMapParams;

  MDNode *ColdCallWeights;

  /// Branch weights for origin store.
  MDNode *OriginStoreWeights;

  /// An empty volatile inline asm that prevents callback merge.
  InlineAsm *EmptyAsm;

  Function *MsanCtorFunction;
};

} // end anonymous namespace

char MemorySanitizer::ID = 0;

INITIALIZE_PASS_BEGIN(
    MemorySanitizer, "msan",
    "MemorySanitizer: detects uninitialized reads.", false, false)
INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
INITIALIZE_PASS_END(
    MemorySanitizer, "msan",
    "MemorySanitizer: detects uninitialized reads.", false, false)

FunctionPass *llvm::createMemorySanitizerPass(int TrackOrigins, bool Recover) {
  return new MemorySanitizer(TrackOrigins, Recover);
}

/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. The runtime uses the first 4 bytes of the string to store
/// the frame ID, so the string needs to be mutable.
static GlobalVariable *createPrivateNonConstGlobalForString(Module &M,
                                                            StringRef Str) {
  Constant *StrConst = ConstantDataArray::getString(M.getContext(), Str);
  return new GlobalVariable(M, StrConst->getType(), /*isConstant=*/false,
                            GlobalValue::PrivateLinkage, StrConst, "");
}

/// Insert extern declaration of runtime-provided functions and globals.
void MemorySanitizer::initializeCallbacks(Module &M) {
  // Only do this once.
  if (WarningFn)
    return;

  IRBuilder<> IRB(*C);
  // Create the callback.
  // FIXME: this function should have "Cold" calling conv,
  // which is not yet implemented.
  StringRef WarningFnName = Recover ? "__msan_warning"
                                    : "__msan_warning_noreturn";
  WarningFn = M.getOrInsertFunction(WarningFnName, IRB.getVoidTy());

  for (size_t AccessSizeIndex = 0; AccessSizeIndex < kNumberOfAccessSizes;
       AccessSizeIndex++) {
    unsigned AccessSize = 1 << AccessSizeIndex;
    std::string FunctionName = "__msan_maybe_warning_" + itostr(AccessSize);
    MaybeWarningFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8),
        IRB.getInt32Ty());

    FunctionName = "__msan_maybe_store_origin_" + itostr(AccessSize);
    MaybeStoreOriginFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8),
        IRB.getInt8PtrTy(), IRB.getInt32Ty());
  }

  MsanSetAllocaOrigin4Fn = M.getOrInsertFunction(
      "__msan_set_alloca_origin4", IRB.getVoidTy(), IRB.getInt8PtrTy(),
      IntptrTy, IRB.getInt8PtrTy(), IntptrTy);
  MsanPoisonStackFn =
      M.getOrInsertFunction("__msan_poison_stack", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy);
  MsanChainOriginFn = M.getOrInsertFunction(
      "__msan_chain_origin", IRB.getInt32Ty(), IRB.getInt32Ty());
  MemmoveFn = M.getOrInsertFunction(
      "__msan_memmove", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IRB.getInt8PtrTy(), IntptrTy);
  MemcpyFn = M.getOrInsertFunction(
      "__msan_memcpy", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IRB.getInt8PtrTy(), IntptrTy);
  MemsetFn = M.getOrInsertFunction(
      "__msan_memset", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IRB.getInt32Ty(),
      IntptrTy);

  // Create globals.
  RetvalTLS = new GlobalVariable(
      M, ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_retval_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);
  RetvalOriginTLS = new GlobalVariable(
      M, OriginTy, false, GlobalVariable::ExternalLinkage, nullptr,
      "__msan_retval_origin_tls", nullptr, GlobalVariable::InitialExecTLSModel);

  ParamTLS = new GlobalVariable(
      M, ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_param_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);
  ParamOriginTLS = new GlobalVariable(
      M, ArrayType::get(OriginTy, kParamTLSSize / 4), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_param_origin_tls",
      nullptr, GlobalVariable::InitialExecTLSModel);

  VAArgTLS = new GlobalVariable(
      M, ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_va_arg_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);
  VAArgOverflowSizeTLS = new GlobalVariable(
      M, IRB.getInt64Ty(), false, GlobalVariable::ExternalLinkage, nullptr,
      "__msan_va_arg_overflow_size_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);
  OriginTLS = new GlobalVariable(
      M, IRB.getInt32Ty(), false, GlobalVariable::ExternalLinkage, nullptr,
      "__msan_origin_tls", nullptr, GlobalVariable::InitialExecTLSModel);

  // We insert an empty inline asm after __msan_report* to avoid callback merge.
  EmptyAsm = InlineAsm::get(FunctionType::get(IRB.getVoidTy(), false),
                            StringRef(""), StringRef(""),
                            /*hasSideEffects=*/true);
}

/// Module-level initialization.
///
/// Inserts a call to __msan_init into the module's constructor list.
bool MemorySanitizer::doInitialization(Module &M) {
  auto &DL = M.getDataLayout();

  bool ShadowPassed = ClShadowBase.getNumOccurrences() > 0;
  bool OriginPassed = ClOriginBase.getNumOccurrences() > 0;
  // Check the overrides first.
  if (ShadowPassed || OriginPassed) {
    CustomMapParams.AndMask = ClAndMask;
    CustomMapParams.XorMask = ClXorMask;
    CustomMapParams.ShadowBase = ClShadowBase;
    CustomMapParams.OriginBase = ClOriginBase;
    MapParams = &CustomMapParams;
  } else {
    Triple TargetTriple(M.getTargetTriple());
    switch (TargetTriple.getOS()) {
      case Triple::FreeBSD:
        switch (TargetTriple.getArch()) {
          case Triple::x86_64:
            MapParams = FreeBSD_X86_MemoryMapParams.bits64;
            break;
          case Triple::x86:
            MapParams = FreeBSD_X86_MemoryMapParams.bits32;
            break;
          default:
            report_fatal_error("unsupported architecture");
        }
        break;
      case Triple::NetBSD:
        switch (TargetTriple.getArch()) {
          case Triple::x86_64:
            MapParams = NetBSD_X86_MemoryMapParams.bits64;
            break;
          default:
            report_fatal_error("unsupported architecture");
        }
        break;
      case Triple::Linux:
        switch (TargetTriple.getArch()) {
          case Triple::x86_64:
            MapParams = Linux_X86_MemoryMapParams.bits64;
            break;
          case Triple::x86:
            MapParams = Linux_X86_MemoryMapParams.bits32;
            break;
          case Triple::mips64:
          case Triple::mips64el:
            MapParams = Linux_MIPS_MemoryMapParams.bits64;
            break;
          case Triple::ppc64:
          case Triple::ppc64le:
            MapParams = Linux_PowerPC_MemoryMapParams.bits64;
            break;
          case Triple::aarch64:
          case Triple::aarch64_be:
            MapParams = Linux_ARM_MemoryMapParams.bits64;
            break;
          default:
            report_fatal_error("unsupported architecture");
        }
        break;
      default:
        report_fatal_error("unsupported operating system");
    }
  }

  C = &(M.getContext());
  IRBuilder<> IRB(*C);
  IntptrTy = IRB.getIntPtrTy(DL);
  OriginTy = IRB.getInt32Ty();

  ColdCallWeights = MDBuilder(*C).createBranchWeights(1, 1000);
  OriginStoreWeights = MDBuilder(*C).createBranchWeights(1, 1000);

  std::tie(MsanCtorFunction, std::ignore) =
      createSanitizerCtorAndInitFunctions(M, kMsanModuleCtorName, kMsanInitName,
                                          /*InitArgTypes=*/{},
                                          /*InitArgs=*/{});
  if (ClWithComdat) {
    Comdat *MsanCtorComdat = M.getOrInsertComdat(kMsanModuleCtorName);
    MsanCtorFunction->setComdat(MsanCtorComdat);
    appendToGlobalCtors(M, MsanCtorFunction, 0, MsanCtorFunction);
  } else {
    appendToGlobalCtors(M, MsanCtorFunction, 0);
  }

  if (TrackOrigins)
    new GlobalVariable(M, IRB.getInt32Ty(), true, GlobalValue::WeakODRLinkage,
                       IRB.getInt32(TrackOrigins), "__msan_track_origins");

  if (Recover)
    new GlobalVariable(M, IRB.getInt32Ty(), true, GlobalValue::WeakODRLinkage,
                       IRB.getInt32(Recover), "__msan_keep_going");

  return true;
}

namespace {

/// A helper class that handles instrumentation of VarArg
/// functions on a particular platform.
///
/// Implementations are expected to insert the instrumentation
/// necessary to propagate argument shadow through VarArg function
/// calls. Visit* methods are called during an InstVisitor pass over
/// the function, and should avoid creating new basic blocks. A new
/// instance of this class is created for each instrumented function.
struct VarArgHelper {
  virtual ~VarArgHelper() = default;

  /// Visit a CallSite.
  virtual void visitCallSite(CallSite &CS, IRBuilder<> &IRB) = 0;

  /// Visit a va_start call.
  virtual void visitVAStartInst(VAStartInst &I) = 0;

  /// Visit a va_copy call.
  virtual void visitVACopyInst(VACopyInst &I) = 0;

  /// Finalize function instrumentation.
  ///
  /// This method is called after visiting all interesting (see above)
  /// instructions in a function.
  virtual void finalizeInstrumentation() = 0;
};

struct MemorySanitizerVisitor;

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor);

static unsigned TypeSizeToSizeIndex(unsigned TypeSize) {
  if (TypeSize <= 8) return 0;
  return Log2_32_Ceil((TypeSize + 7) / 8);
}
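// For example: TypeSizeToSizeIndex(8) == 0 (1-byte access),
// TypeSizeToSizeIndex(32) == 2 (4-byte access) and
// TypeSizeToSizeIndex(64) == 3 (8-byte access), matching the
// log2(AccessSize) indexing of the callback arrays above.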

namespace {

/// This class does all the work for a given function. Store and Load
/// instructions store and load corresponding shadow and origin
/// values. Most instructions propagate shadow from arguments to their
/// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if it's
/// non-zero.
struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
  Function &F;
  MemorySanitizer &MS;
  SmallVector<PHINode *, 16> ShadowPHINodes, OriginPHINodes;
  ValueMap<Value*, Value*> ShadowMap, OriginMap;
  std::unique_ptr<VarArgHelper> VAHelper;
  const TargetLibraryInfo *TLI;
  BasicBlock *ActualFnStart;

  // The following flags disable parts of MSan instrumentation based on
  // blacklist contents and command-line options.
  bool InsertChecks;
  bool PropagateShadow;
  bool PoisonStack;
  bool PoisonUndef;
  bool CheckReturnValue;

  struct ShadowOriginAndInsertPoint {
    Value *Shadow;
    Value *Origin;
    Instruction *OrigIns;

    ShadowOriginAndInsertPoint(Value *S, Value *O, Instruction *I)
        : Shadow(S), Origin(O), OrigIns(I) {}
  };
  SmallVector<ShadowOriginAndInsertPoint, 16> InstrumentationList;
  SmallVector<StoreInst *, 16> StoreList;

  MemorySanitizerVisitor(Function &F, MemorySanitizer &MS)
      : F(F), MS(MS), VAHelper(CreateVarArgHelper(F, MS, *this)) {
    bool SanitizeFunction = F.hasFnAttribute(Attribute::SanitizeMemory);
    InsertChecks = SanitizeFunction;
    PropagateShadow = SanitizeFunction;
    PoisonStack = SanitizeFunction && ClPoisonStack;
    PoisonUndef = SanitizeFunction && ClPoisonUndef;
    // FIXME: Consider using SpecialCaseList to specify a list of functions that
    // must always return fully initialized values. For now, we hardcode "main".
    CheckReturnValue = SanitizeFunction && (F.getName() == "main");
    TLI = &MS.getAnalysis<TargetLibraryInfoWrapperPass>().getTLI();

    MS.initializeCallbacks(*F.getParent());
    ActualFnStart = &F.getEntryBlock();

    LLVM_DEBUG(if (!InsertChecks) dbgs()
               << "MemorySanitizer is not inserting checks into '"
               << F.getName() << "'\n");
  }

  Value *updateOrigin(Value *V, IRBuilder<> &IRB) {
    if (MS.TrackOrigins <= 1) return V;
    return IRB.CreateCall(MS.MsanChainOriginFn, V);
  }

  Value *originToIntptr(IRBuilder<> &IRB, Value *Origin) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    if (IntptrSize == kOriginSize) return Origin;
    assert(IntptrSize == kOriginSize * 2);
    Origin = IRB.CreateIntCast(Origin, MS.IntptrTy, /* isSigned */ false);
    return IRB.CreateOr(Origin, IRB.CreateShl(Origin, kOriginSize * 8));
  }
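  // For example, on a 64-bit target (IntptrSize == 8) originToIntptr turns
  // origin 0x12345678 into 0x1234567812345678, i.e. the 4-byte origin value
  // replicated into both halves of an intptr-sized word.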

  /// Fill memory range with the given origin value.
  void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                   unsigned Size, unsigned Alignment) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrAlignment = DL.getABITypeAlignment(MS.IntptrTy);
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    assert(IntptrAlignment >= kMinOriginAlignment);
    assert(IntptrSize >= kOriginSize);

    unsigned Ofs = 0;
    unsigned CurrentAlignment = Alignment;
    if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
      Value *IntptrOrigin = originToIntptr(IRB, Origin);
      Value *IntptrOriginPtr =
          IRB.CreatePointerCast(OriginPtr, PointerType::get(MS.IntptrTy, 0));
      for (unsigned i = 0; i < Size / IntptrSize; ++i) {
        Value *Ptr = i ? IRB.CreateConstGEP1_32(MS.IntptrTy, IntptrOriginPtr, i)
                       : IntptrOriginPtr;
        IRB.CreateAlignedStore(IntptrOrigin, Ptr, CurrentAlignment);
        Ofs += IntptrSize / kOriginSize;
        CurrentAlignment = IntptrAlignment;
      }
    }

    for (unsigned i = Ofs; i < (Size + kOriginSize - 1) / kOriginSize; ++i) {
      Value *GEP =
          i ? IRB.CreateConstGEP1_32(nullptr, OriginPtr, i) : OriginPtr;
      IRB.CreateAlignedStore(Origin, GEP, CurrentAlignment);
      CurrentAlignment = kMinOriginAlignment;
    }
  }
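  // For example, on a typical 64-bit target, painting Size == 16 bytes of
  // origin at 8-byte alignment issues two 8-byte stores of the replicated
  // origin; at 4-byte alignment it instead issues four 4-byte stores.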

  void storeOrigin(IRBuilder<> &IRB, Value *Addr, Value *Shadow, Value *Origin,
                   Value *OriginPtr, unsigned Alignment, bool AsCall) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
    unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
    if (Shadow->getType()->isAggregateType()) {
      paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                  OriginAlignment);
    } else {
      Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
      Constant *ConstantShadow = dyn_cast_or_null<Constant>(ConvertedShadow);
      if (ConstantShadow) {
        if (ClCheckConstantShadow && !ConstantShadow->isZeroValue())
          paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                      OriginAlignment);
        return;
      }

      unsigned TypeSizeInBits =
          DL.getTypeSizeInBits(ConvertedShadow->getType());
      unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
      if (AsCall && SizeIndex < kNumberOfAccessSizes) {
        Value *Fn = MS.MaybeStoreOriginFn[SizeIndex];
        Value *ConvertedShadow2 = IRB.CreateZExt(
            ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
        IRB.CreateCall(Fn, {ConvertedShadow2,
                            IRB.CreatePointerCast(Addr, IRB.getInt8PtrTy()),
                            Origin});
      } else {
        Value *Cmp = IRB.CreateICmpNE(
            ConvertedShadow, getCleanShadow(ConvertedShadow), "_mscmp");
        Instruction *CheckTerm = SplitBlockAndInsertIfThen(
            Cmp, &*IRB.GetInsertPoint(), false, MS.OriginStoreWeights);
        IRBuilder<> IRBNew(CheckTerm);
        paintOrigin(IRBNew, updateOrigin(Origin, IRBNew), OriginPtr, StoreSize,
                    OriginAlignment);
      }
    }
  }

  void materializeStores(bool InstrumentWithCalls) {
    for (StoreInst *SI : StoreList) {
      IRBuilder<> IRB(SI);
      Value *Val = SI->getValueOperand();
      Value *Addr = SI->getPointerOperand();
      Value *Shadow = SI->isAtomic() ? getCleanShadow(Val) : getShadow(Val);
      Value *ShadowPtr, *OriginPtr;
      Type *ShadowTy = Shadow->getType();
      unsigned Alignment = SI->getAlignment();
      unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ true);

      StoreInst *NewSI = IRB.CreateAlignedStore(Shadow, ShadowPtr, Alignment);
      LLVM_DEBUG(dbgs() << "  STORE: " << *NewSI << "\n");

      if (ClCheckAccessAddress)
        insertShadowCheck(Addr, NewSI);

      if (SI->isAtomic())
        SI->setOrdering(addReleaseOrdering(SI->getOrdering()));

      if (MS.TrackOrigins && !SI->isAtomic())
        storeOrigin(IRB, Addr, Shadow, getOrigin(Val), OriginPtr,
                    OriginAlignment, InstrumentWithCalls);
    }
  }

  /// Helper function to insert a warning at IRB's current insert point.
  void insertWarningFn(IRBuilder<> &IRB, Value *Origin) {
    if (!Origin)
      Origin = (Value *)IRB.getInt32(0);
    if (MS.TrackOrigins) {
      IRB.CreateStore(Origin, MS.OriginTLS);
    }
    IRB.CreateCall(MS.WarningFn, {});
    IRB.CreateCall(MS.EmptyAsm, {});
    // FIXME: Insert UnreachableInst if !MS.Recover?
    // This may invalidate some of the following checks and needs to be done
    // at the very end.
  }

  void materializeOneCheck(Instruction *OrigIns, Value *Shadow, Value *Origin,
                           bool AsCall) {
    IRBuilder<> IRB(OrigIns);
    LLVM_DEBUG(dbgs() << "  SHAD0 : " << *Shadow << "\n");
    Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
    LLVM_DEBUG(dbgs() << "  SHAD1 : " << *ConvertedShadow << "\n");

    Constant *ConstantShadow = dyn_cast_or_null<Constant>(ConvertedShadow);
    if (ConstantShadow) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue()) {
        insertWarningFn(IRB, Origin);
      }
      return;
    }

    const DataLayout &DL = OrigIns->getModule()->getDataLayout();

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes) {
      Value *Fn = MS.MaybeWarningFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      IRB.CreateCall(Fn, {ConvertedShadow2, MS.TrackOrigins && Origin
                                                ? Origin
                                                : (Value *)IRB.getInt32(0)});
    } else {
      Value *Cmp = IRB.CreateICmpNE(ConvertedShadow,
                                    getCleanShadow(ConvertedShadow), "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, OrigIns,
          /* Unreachable */ !MS.Recover, MS.ColdCallWeights);

      IRB.SetInsertPoint(CheckTerm);
      insertWarningFn(IRB, Origin);
      LLVM_DEBUG(dbgs() << "  CHECK: " << *Cmp << "\n");
    }
  }

  void materializeChecks(bool InstrumentWithCalls) {
    for (const auto &ShadowData : InstrumentationList) {
      Instruction *OrigIns = ShadowData.OrigIns;
      Value *Shadow = ShadowData.Shadow;
      Value *Origin = ShadowData.Origin;
      materializeOneCheck(OrigIns, Shadow, Origin, InstrumentWithCalls);
    }
    LLVM_DEBUG(dbgs() << "DONE:\n" << F);
  }

  /// Add MemorySanitizer instrumentation to a function.
  bool runOnFunction() {
    // In the presence of unreachable blocks, we may see Phi nodes with
    // incoming nodes from such blocks. Since InstVisitor skips unreachable
    // blocks, such nodes will not have any shadow value associated with them.
    // It's easier to remove unreachable blocks than deal with missing shadow.
    removeUnreachableBlocks(F);

    // Iterate all BBs in depth-first order and create shadow instructions
    // for all instructions (where applicable).
    // For PHI nodes we create dummy shadow PHIs which will be finalized later.
    for (BasicBlock *BB : depth_first(ActualFnStart))
      visit(*BB);

    // Finalize PHI nodes.
    for (PHINode *PN : ShadowPHINodes) {
      PHINode *PNS = cast<PHINode>(getShadow(PN));
      PHINode *PNO = MS.TrackOrigins ? cast<PHINode>(getOrigin(PN)) : nullptr;
      size_t NumValues = PN->getNumIncomingValues();
      for (size_t v = 0; v < NumValues; v++) {
        PNS->addIncoming(getShadow(PN, v), PN->getIncomingBlock(v));
        if (PNO) PNO->addIncoming(getOrigin(PN, v), PN->getIncomingBlock(v));
      }
    }

    VAHelper->finalizeInstrumentation();

    bool InstrumentWithCalls = ClInstrumentationWithCallThreshold >= 0 &&
                               InstrumentationList.size() + StoreList.size() >
                                   (unsigned)ClInstrumentationWithCallThreshold;

    // Delayed instrumentation of StoreInst.
    // This may add new checks to be inserted later.
    materializeStores(InstrumentWithCalls);

    // Insert shadow value checks.
    materializeChecks(InstrumentWithCalls);

    return true;
  }

  /// Compute the shadow type that corresponds to a given Value.
  Type *getShadowTy(Value *V) {
    return getShadowTy(V->getType());
  }

  /// Compute the shadow type that corresponds to a given Type.
  Type *getShadowTy(Type *OrigTy) {
    if (!OrigTy->isSized()) {
      return nullptr;
    }
    // For integer type, shadow is the same as the original type.
    // This may return weird-sized types like i1.
    if (IntegerType *IT = dyn_cast<IntegerType>(OrigTy))
      return IT;
    const DataLayout &DL = F.getParent()->getDataLayout();
    if (VectorType *VT = dyn_cast<VectorType>(OrigTy)) {
      uint32_t EltSize = DL.getTypeSizeInBits(VT->getElementType());
      return VectorType::get(IntegerType::get(*MS.C, EltSize),
                             VT->getNumElements());
    }
    if (ArrayType *AT = dyn_cast<ArrayType>(OrigTy)) {
      return ArrayType::get(getShadowTy(AT->getElementType()),
                            AT->getNumElements());
    }
    if (StructType *ST = dyn_cast<StructType>(OrigTy)) {
      SmallVector<Type*, 4> Elements;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Elements.push_back(getShadowTy(ST->getElementType(i)));
      StructType *Res = StructType::get(*MS.C, Elements, ST->isPacked());
      LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST << " ===> " << *Res << "\n");
      return Res;
    }
    uint32_t TypeSize = DL.getTypeSizeInBits(OrigTy);
    return IntegerType::get(*MS.C, TypeSize);
  }
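  // For example, getShadowTy(<4 x float>) == <4 x i32>,
  // getShadowTy({i64, i8}) == {i64, i8}, and getShadowTy(double) == i64.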

  /// Flatten a vector type.
  Type *getShadowTyNoVec(Type *ty) {
    if (VectorType *vt = dyn_cast<VectorType>(ty))
      return IntegerType::get(*MS.C, vt->getBitWidth());
    return ty;
  }

  /// Convert a shadow value to its flattened variant.
  Value *convertToShadowTyNoVec(Value *V, IRBuilder<> &IRB) {
    Type *Ty = V->getType();
    Type *NoVecTy = getShadowTyNoVec(Ty);
    if (Ty == NoVecTy) return V;
    return IRB.CreateBitCast(V, NoVecTy);
  }

  /// Compute the integer shadow offset that corresponds to a given
  /// application address.
  ///
  /// Offset = (Addr & ~AndMask) ^ XorMask
  Value *getShadowPtrOffset(Value *Addr, IRBuilder<> &IRB) {
    Value *OffsetLong = IRB.CreatePointerCast(Addr, MS.IntptrTy);

    uint64_t AndMask = MS.MapParams->AndMask;
    if (AndMask)
      OffsetLong =
          IRB.CreateAnd(OffsetLong, ConstantInt::get(MS.IntptrTy, ~AndMask));

    uint64_t XorMask = MS.MapParams->XorMask;
    if (XorMask)
      OffsetLong =
          IRB.CreateXor(OffsetLong, ConstantInt::get(MS.IntptrTy, XorMask));
    return OffsetLong;
  }

  /// Compute the shadow and origin addresses corresponding to a given
  /// application address.
  ///
  /// Shadow = ShadowBase + Offset
  /// Origin = (OriginBase + Offset) & ~3ULL
  std::pair<Value *, Value *> getShadowOriginPtrUserspace(
      Value *Addr, IRBuilder<> &IRB, Type *ShadowTy, unsigned Alignment,
      Instruction **FirstInsn) {
    Value *ShadowOffset = getShadowPtrOffset(Addr, IRB);
    Value *ShadowLong = ShadowOffset;
    uint64_t ShadowBase = MS.MapParams->ShadowBase;
    *FirstInsn = dyn_cast<Instruction>(ShadowLong);
    if (ShadowBase != 0) {
      ShadowLong =
          IRB.CreateAdd(ShadowLong,
                        ConstantInt::get(MS.IntptrTy, ShadowBase));
    }
    Value *ShadowPtr =
        IRB.CreateIntToPtr(ShadowLong, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = nullptr;
    if (MS.TrackOrigins) {
      Value *OriginLong = ShadowOffset;
      uint64_t OriginBase = MS.MapParams->OriginBase;
      if (OriginBase != 0)
        OriginLong = IRB.CreateAdd(OriginLong,
                                   ConstantInt::get(MS.IntptrTy, OriginBase));
      if (Alignment < kMinOriginAlignment) {
        uint64_t Mask = kMinOriginAlignment - 1;
        OriginLong =
            IRB.CreateAnd(OriginLong, ConstantInt::get(MS.IntptrTy, ~Mask));
      }
      OriginPtr =
          IRB.CreateIntToPtr(OriginLong, PointerType::get(IRB.getInt32Ty(), 0));
    }
    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr, IRBuilder<> &IRB,
                                                 Type *ShadowTy,
                                                 unsigned Alignment,
                                                 bool isStore) {
    Instruction *FirstInsn = nullptr;
    std::pair<Value *, Value *> ret =
        getShadowOriginPtrUserspace(Addr, IRB, ShadowTy, Alignment, &FirstInsn);
    return ret;
  }

  /// Compute the shadow address for a given function argument.
  ///
  /// Shadow = ParamTLS+ArgOffset.
  Value *getShadowPtrForArgument(Value *A, IRBuilder<> &IRB,
                                 int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.ParamTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(getShadowTy(A), 0),
                              "_msarg");
  }

  /// Compute the origin address for a given function argument.
  Value *getOriginPtrForArgument(Value *A, IRBuilder<> &IRB,
                                 int ArgOffset) {
    if (!MS.TrackOrigins) return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.ParamOriginTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_o");
  }

  /// Compute the shadow address for a retval.
  Value *getShadowPtrForRetval(Value *A, IRBuilder<> &IRB) {
    return IRB.CreatePointerCast(MS.RetvalTLS,
                                 PointerType::get(getShadowTy(A), 0),
                                 "_msret");
  }

  /// Compute the origin address for a retval.
  Value *getOriginPtrForRetval(IRBuilder<> &IRB) {
    // We keep a single origin for the entire retval. Might be too optimistic.
    return MS.RetvalOriginTLS;
  }
1175  /// Set SV to be the shadow value for V.
1176  void setShadow(Value *V, Value *SV) {
1177  assert(!ShadowMap.count(V) && "Values may only have one shadow");
1178  ShadowMap[V] = PropagateShadow ? SV : getCleanShadow(V);
1179  }
1180 
1181  /// Set Origin to be the origin value for V.
1182  void setOrigin(Value *V, Value *Origin) {
1183  if (!MS.TrackOrigins) return;
1184  assert(!OriginMap.count(V) && "Values may only have one origin");
1185  LLVM_DEBUG(dbgs() << "ORIGIN: " << *V << " ==> " << *Origin << "\n");
1186  OriginMap[V] = Origin;
1187  }
1188 
1189  Constant *getCleanShadow(Type *OrigTy) {
1190  Type *ShadowTy = getShadowTy(OrigTy);
1191  if (!ShadowTy)
1192  return nullptr;
1193  return Constant::getNullValue(ShadowTy);
1194  }
1195 
1196  /// Create a clean shadow value for a given value.
1197  ///
1198  /// Clean shadow (all zeroes) means all bits of the value are defined
1199  /// (initialized).
1200  Constant *getCleanShadow(Value *V) {
1201  return getCleanShadow(V->getType());
1202  }
1203 
1204  /// Create a dirty shadow of a given shadow type.
1205  Constant *getPoisonedShadow(Type *ShadowTy) {
1206  assert(ShadowTy);
1207  if (isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy))
1208  return Constant::getAllOnesValue(ShadowTy);
1209  if (ArrayType *AT = dyn_cast<ArrayType>(ShadowTy)) {
1210  SmallVector<Constant *, 4> Vals(AT->getNumElements(),
1211  getPoisonedShadow(AT->getElementType()));
1212  return ConstantArray::get(AT, Vals);
1213  }
1214  if (StructType *ST = dyn_cast<StructType>(ShadowTy)) {
1216  for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
1217  Vals.push_back(getPoisonedShadow(ST->getElementType(i)));
1218  return ConstantStruct::get(ST, Vals);
1219  }
1220  llvm_unreachable("Unexpected shadow type");
1221  }
1222 
1223  /// Create a dirty shadow for a given value.
1224  Constant *getPoisonedShadow(Value *V) {
1225  Type *ShadowTy = getShadowTy(V);
1226  if (!ShadowTy)
1227  return nullptr;
1228  return getPoisonedShadow(ShadowTy);
1229  }
1230 
1231  /// Create a clean (zero) origin.
1232  Value *getCleanOrigin() {
1233  return Constant::getNullValue(MS.OriginTy);
1234  }

  /// Get the shadow value for a given Value.
  ///
  /// This function either returns the value set earlier with setShadow,
  /// or extracts it from ParamTLS (for function arguments).
  Value *getShadow(Value *V) {
    if (!PropagateShadow) return getCleanShadow(V);
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanShadow(V);
      // For instructions the shadow is already stored in the map.
      Value *Shadow = ShadowMap[V];
      if (!Shadow) {
        LLVM_DEBUG(dbgs() << "No shadow: " << *V << "\n" << *(I->getParent()));
        (void)I;
        assert(Shadow && "No shadow for a value");
      }
      return Shadow;
    }
    if (UndefValue *U = dyn_cast<UndefValue>(V)) {
      Value *AllOnes = PoisonUndef ? getPoisonedShadow(V) : getCleanShadow(V);
      LLVM_DEBUG(dbgs() << "Undef: " << *U << " ==> " << *AllOnes << "\n");
      (void)U;
      return AllOnes;
    }
    if (Argument *A = dyn_cast<Argument>(V)) {
      // For arguments we compute the shadow on demand and store it in the map.
      Value **ShadowPtr = &ShadowMap[V];
      if (*ShadowPtr)
        return *ShadowPtr;
      Function *F = A->getParent();
      IRBuilder<> EntryIRB(ActualFnStart->getFirstNonPHI());
      unsigned ArgOffset = 0;
      const DataLayout &DL = F->getParent()->getDataLayout();
      for (auto &FArg : F->args()) {
        if (!FArg.getType()->isSized()) {
          LLVM_DEBUG(dbgs() << "Arg is not sized\n");
          continue;
        }
        unsigned Size =
            FArg.hasByValAttr()
                ? DL.getTypeAllocSize(FArg.getType()->getPointerElementType())
                : DL.getTypeAllocSize(FArg.getType());
        if (A == &FArg) {
          bool Overflow = ArgOffset + Size > kParamTLSSize;
          Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset);
          if (FArg.hasByValAttr()) {
            // ByVal pointer itself has clean shadow. We copy the actual
            // argument shadow to the underlying memory.
            // Figure out maximal valid memcpy alignment.
            unsigned ArgAlign = FArg.getParamAlignment();
            if (ArgAlign == 0) {
              Type *EltType = A->getType()->getPointerElementType();
              ArgAlign = DL.getABITypeAlignment(EltType);
            }
            Value *CpShadowPtr =
                getShadowOriginPtr(V, EntryIRB, EntryIRB.getInt8Ty(), ArgAlign,
                                   /*isStore*/ true)
                    .first;
            if (Overflow) {
              // ParamTLS overflow.
              EntryIRB.CreateMemSet(
                  CpShadowPtr, Constant::getNullValue(EntryIRB.getInt8Ty()),
                  Size, ArgAlign);
            } else {
              unsigned CopyAlign = std::min(ArgAlign, kShadowTLSAlignment);
              Value *Cpy = EntryIRB.CreateMemCpy(CpShadowPtr, CopyAlign, Base,
                                                 CopyAlign, Size);
              LLVM_DEBUG(dbgs() << "  ByValCpy: " << *Cpy << "\n");
              (void)Cpy;
            }
            *ShadowPtr = getCleanShadow(V);
          } else {
            if (Overflow) {
              // ParamTLS overflow.
              *ShadowPtr = getCleanShadow(V);
            } else {
              *ShadowPtr =
                  EntryIRB.CreateAlignedLoad(Base, kShadowTLSAlignment);
            }
          }
          LLVM_DEBUG(dbgs()
                     << "  ARG:    " << FArg << " ==> " << **ShadowPtr << "\n");
          if (MS.TrackOrigins && !Overflow) {
            Value *OriginPtr =
                getOriginPtrForArgument(&FArg, EntryIRB, ArgOffset);
            setOrigin(A, EntryIRB.CreateLoad(OriginPtr));
          } else {
            setOrigin(A, getCleanOrigin());
          }
        }
        ArgOffset += alignTo(Size, kShadowTLSAlignment);
      }
      assert(*ShadowPtr && "Could not find shadow for an argument");
      return *ShadowPtr;
    }
    // For everything else the shadow is zero.
    return getCleanShadow(V);
  }
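  // For example, for a function f(i32 %a, i64 %b), %a's shadow is read from
  // __msan_param_tls at offset 0 and %b's at offset 8, since each argument's
  // slot is rounded up to kShadowTLSAlignment bytes.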

  /// Get the shadow for i-th argument of the instruction I.
  Value *getShadow(Instruction *I, int i) {
    return getShadow(I->getOperand(i));
  }

  /// Get the origin for a value.
  Value *getOrigin(Value *V) {
    if (!MS.TrackOrigins) return nullptr;
    if (!PropagateShadow) return getCleanOrigin();
    if (isa<Constant>(V)) return getCleanOrigin();
    assert((isa<Instruction>(V) || isa<Argument>(V)) &&
           "Unexpected value type in getOrigin()");
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanOrigin();
    }
    Value *Origin = OriginMap[V];
    assert(Origin && "Missing origin");
    return Origin;
  }

  /// Get the origin for i-th argument of the instruction I.
  Value *getOrigin(Instruction *I, int i) {
    return getOrigin(I->getOperand(i));
  }

  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning at runtime if the shadow value is not 0.
  void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) {
    assert(Shadow);
    if (!InsertChecks) return;
#ifndef NDEBUG
    Type *ShadowTy = Shadow->getType();
    assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy)) &&
           "Can only insert checks for integer and vector shadow types");
#endif
    InstrumentationList.push_back(
        ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns));
  }

  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning at runtime if the value is not fully defined.
  void insertShadowCheck(Value *Val, Instruction *OrigIns) {
    assert(Val);
    Value *Shadow, *Origin;
    if (ClCheckConstantShadow) {
      Shadow = getShadow(Val);
      if (!Shadow) return;
      Origin = getOrigin(Val);
    } else {
      Shadow = dyn_cast_or_null<Instruction>(getShadow(Val));
      if (!Shadow) return;
      Origin = dyn_cast_or_null<Instruction>(getOrigin(Val));
    }
    insertShadowCheck(Shadow, Origin, OrigIns);
  }

  AtomicOrdering addReleaseOrdering(AtomicOrdering a) {
    switch (a) {
      case AtomicOrdering::NotAtomic:
        return AtomicOrdering::NotAtomic;
      case AtomicOrdering::Unordered:
      case AtomicOrdering::Monotonic:
      case AtomicOrdering::Release:
        return AtomicOrdering::Release;
      case AtomicOrdering::Acquire:
      case AtomicOrdering::AcquireRelease:
        return AtomicOrdering::AcquireRelease;
      case AtomicOrdering::SequentiallyConsistent:
        return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }

  AtomicOrdering addAcquireOrdering(AtomicOrdering a) {
    switch (a) {
      case AtomicOrdering::NotAtomic:
        return AtomicOrdering::NotAtomic;
      case AtomicOrdering::Unordered:
      case AtomicOrdering::Monotonic:
      case AtomicOrdering::Acquire:
        return AtomicOrdering::Acquire;
      case AtomicOrdering::Release:
      case AtomicOrdering::AcquireRelease:
        return AtomicOrdering::AcquireRelease;
      case AtomicOrdering::SequentiallyConsistent:
        return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }

  // ------------------- Visitors.
  using InstVisitor<MemorySanitizerVisitor>::visit;
  void visit(Instruction &I) {
    if (!I.getMetadata("nosanitize"))
      InstVisitor<MemorySanitizerVisitor>::visit(I);
  }

  /// Instrument LoadInst
  ///
  /// Loads the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the load address is fully defined.
  void visitLoadInst(LoadInst &I) {
    assert(I.getType()->isSized() && "Load type must have size");
    assert(!I.getMetadata("nosanitize"));
    IRBuilder<> IRB(I.getNextNode());
    Type *ShadowTy = getShadowTy(&I);
    Value *Addr = I.getPointerOperand();
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = I.getAlignment();
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I, IRB.CreateAlignedLoad(ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);

    if (I.isAtomic())
      I.setOrdering(addAcquireOrdering(I.getOrdering()));

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
        setOrigin(&I, IRB.CreateAlignedLoad(OriginPtr, OriginAlignment));
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
  }

  /// Instrument StoreInst
  ///
  /// Stores the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the store address is fully defined.
  void visitStoreInst(StoreInst &I) {
    StoreList.push_back(&I);
  }

  void handleCASOrRMW(Instruction &I) {
    assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I));

    IRBuilder<> IRB(&I);
    Value *Addr = I.getOperand(0);
    Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, I.getType(),
                                          /*Alignment*/ 1, /*isStore*/ true)
                           .first;

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // Only test the conditional argument of cmpxchg instruction.
    // The other argument can potentially be uninitialized, but we cannot
    // detect this situation reliably without possible false positives.
    if (isa<AtomicCmpXchgInst>(I))
      insertShadowCheck(I.getOperand(1), &I);

    IRB.CreateStore(getCleanShadow(&I), ShadowPtr);

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitAtomicRMWInst(AtomicRMWInst &I) {
    handleCASOrRMW(I);
    I.setOrdering(addReleaseOrdering(I.getOrdering()));
  }

  void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) {
    handleCASOrRMW(I);
    I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering()));
  }

  // Vector manipulation.
  void visitExtractElementInst(ExtractElementInst &I) {
    insertShadowCheck(I.getOperand(1), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1),
                                           "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitInsertElementInst(InsertElementInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }

  void visitShuffleVectorInst(ShuffleVectorInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }

  // Casts.
  void visitSExtInst(SExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitZExtInst(ZExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitTruncInst(TruncInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitBitCastInst(BitCastInst &I) {
    // Special case: if this is the bitcast (there is exactly 1 allowed) between
    // a musttail call and a ret, don't instrument. New instructions are not
    // allowed after a musttail call.
    if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
      if (CI->isMustTailCall())
        return;
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitPtrToIntInst(PtrToIntInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_ptrtoint"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitIntToPtrInst(IntToPtrInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_inttoptr"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitFPToSIInst(CastInst& I) { handleShadowOr(I); }
  void visitFPToUIInst(CastInst& I) { handleShadowOr(I); }
  void visitSIToFPInst(CastInst& I) { handleShadowOr(I); }
  void visitUIToFPInst(CastInst& I) { handleShadowOr(I); }
  void visitFPExtInst(CastInst& I) { handleShadowOr(I); }
  void visitFPTruncInst(CastInst& I) { handleShadowOr(I); }

1592  /// Propagate shadow for bitwise AND.
1593  ///
1594  /// This code is exact, i.e. if, for example, a bit in the left argument
1595  /// is defined and 0, then neither the value nor the definedness of the
1596  /// corresponding bit in B affects the resulting shadow.
1597  void visitAnd(BinaryOperator &I) {
1598  IRBuilder<> IRB(&I);
1599  // "And" of 0 and a poisoned value results in an unpoisoned value.
1600  // 1&1 => 1; 0&1 => 0; p&1 => p;
1601  // 1&0 => 0; 0&0 => 0; p&0 => 0;
1602  // 1&p => p; 0&p => 0; p&p => p;
1603  // S = (S1 & S2) | (V1 & S2) | (S1 & V2)
1604  Value *S1 = getShadow(&I, 0);
1605  Value *S2 = getShadow(&I, 1);
1606  Value *V1 = I.getOperand(0);
1607  Value *V2 = I.getOperand(1);
1608  if (V1->getType() != S1->getType()) {
1609  V1 = IRB.CreateIntCast(V1, S1->getType(), false);
1610  V2 = IRB.CreateIntCast(V2, S2->getType(), false);
1611  }
1612  Value *S1S2 = IRB.CreateAnd(S1, S2);
1613  Value *V1S2 = IRB.CreateAnd(V1, S2);
1614  Value *S1V2 = IRB.CreateAnd(S1, V2);
1615  setShadow(&I, IRB.CreateOr(S1S2, IRB.CreateOr(V1S2, S1V2)));
1616  setOriginForNaryOp(I);
1617  }
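  // A worked example of the formula above (4-bit values for illustration):
  // let V1 = 0b1100 be fully defined (S1 = 0b0000) and V2 fully poisoned
  // (S2 = 0b1111). Then
  //   S = (S1 & S2) | (V1 & S2) | (S1 & V2)
  //     = 0b0000    | 0b1100    | 0b0000    = 0b1100,
  // i.e. only the bits where the defined operand is 1 stay poisoned; where it
  // is 0 the result bit is provably 0 regardless of V2.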
1618 
1619  void visitOr(BinaryOperator &I) {
1620  IRBuilder<> IRB(&I);
1621  // "Or" of 1 and a poisoned value results in an unpoisoned value.
1622  // 1|1 => 1; 0|1 => 1; p|1 => 1;
1623  // 1|0 => 1; 0|0 => 0; p|0 => p;
1624  // 1|p => 1; 0|p => p; p|p => p;
1625  // S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
1626  Value *S1 = getShadow(&I, 0);
1627  Value *S2 = getShadow(&I, 1);
1628  Value *V1 = IRB.CreateNot(I.getOperand(0));
1629  Value *V2 = IRB.CreateNot(I.getOperand(1));
1630  if (V1->getType() != S1->getType()) {
1631  V1 = IRB.CreateIntCast(V1, S1->getType(), false);
1632  V2 = IRB.CreateIntCast(V2, S2->getType(), false);
1633  }
1634  Value *S1S2 = IRB.CreateAnd(S1, S2);
1635  Value *V1S2 = IRB.CreateAnd(V1, S2);
1636  Value *S1V2 = IRB.CreateAnd(S1, V2);
1637  setShadow(&I, IRB.CreateOr(S1S2, IRB.CreateOr(V1S2, S1V2)));
1638  setOriginForNaryOp(I);
1639  }
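  // This is the De Morgan dual of visitAnd: a defined 1 bit in either operand
  // pins the result bit to 1. With the same example values, V1 = 0b1100 fully
  // defined (S1 = 0b0000) against a fully poisoned V2 gives
  //   S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2) = 0b0011,
  // leaving only the bits where the defined operand is 0 poisoned.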
1640 
1641  /// Default propagation of shadow and/or origin.
1642  ///
1643  /// This class implements the general case of shadow propagation, used in all
1644  /// cases where we don't know and/or don't care about what the operation
1645  /// actually does. It converts all input shadow values to a common type
1646  /// (extending or truncating as necessary), and bitwise OR's them.
1647  ///
1648  /// This is much cheaper than inserting checks (i.e. requiring inputs to be
1649  /// fully initialized), and less prone to false positives.
1650  ///
1651  /// This class also implements the general case of origin propagation. For a
1652  /// Nary operation, result origin is set to the origin of an argument that is
1653  /// not entirely initialized. If there is more than one such argument, the
1654  /// rightmost of them is picked. It does not matter which one is picked if all
1655  /// arguments are initialized.
1656  template <bool CombineShadow>
1657  class Combiner {
1658  Value *Shadow = nullptr;
1659  Value *Origin = nullptr;
1660  IRBuilder<> &IRB;
1661  MemorySanitizerVisitor *MSV;
1662 
1663  public:
1664  Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
1665  : IRB(IRB), MSV(MSV) {}
1666 
1667  /// Add a pair of shadow and origin values to the mix.
1668  Combiner &Add(Value *OpShadow, Value *OpOrigin) {
1669  if (CombineShadow) {
1670  assert(OpShadow);
1671  if (!Shadow)
1672  Shadow = OpShadow;
1673  else {
1674  OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
1675  Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
1676  }
1677  }
1678 
1679  if (MSV->MS.TrackOrigins) {
1680  assert(OpOrigin);
1681  if (!Origin) {
1682  Origin = OpOrigin;
1683  } else {
1684  Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
1685  // No point in adding something that might result in 0 origin value.
1686  if (!ConstOrigin || !ConstOrigin->isNullValue()) {
1687  Value *FlatShadow = MSV->convertToShadowTyNoVec(OpShadow, IRB);
1688  Value *Cond =
1689  IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow));
1690  Origin = IRB.CreateSelect(Cond, OpOrigin, Origin);
1691  }
1692  }
1693  }
1694  return *this;
1695  }
1696 
1697  /// Add an application value to the mix.
1698  Combiner &Add(Value *V) {
1699  Value *OpShadow = MSV->getShadow(V);
1700  Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr;
1701  return Add(OpShadow, OpOrigin);
1702  }
1703 
1704  /// Set the current combined values as the given instruction's shadow
1705  /// and origin.
1706  void Done(Instruction *I) {
1707  if (CombineShadow) {
1708  assert(Shadow);
1709  Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I));
1710  MSV->setShadow(I, Shadow);
1711  }
1712  if (MSV->MS.TrackOrigins) {
1713  assert(Origin);
1714  MSV->setOrigin(I, Origin);
1715  }
1716  }
1717  };
1718 
1719  using ShadowAndOriginCombiner = Combiner<true>;
1720  using OriginCombiner = Combiner<false>;
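  // A sketch of the IR the shadow-and-origin combiner produces for a binary
  // operation %r = op T %a, %b (register names here are illustrative only):
  //   %sa      = <shadow of %a>
  //   %sbc     = <shadow of %b, cast to the type of %sa>
  //   %_msprop = or %sa, %sbc                    ; combined shadow of %r
  //   ; with origin tracking, the rightmost dirty argument wins:
  //   %dirty   = icmp ne %sb_flat, 0
  //   %origin  = select i1 %dirty, i32 %ob, i32 %oa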
1721 
1722  /// Propagate origin for arbitrary operation.
1723  void setOriginForNaryOp(Instruction &I) {
1724  if (!MS.TrackOrigins) return;
1725  IRBuilder<> IRB(&I);
1726  OriginCombiner OC(this, IRB);
1727  for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
1728  OC.Add(OI->get());
1729  OC.Done(&I);
1730  }
1731 
1732  size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) {
1733  assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) &&
1734  "Vector of pointers is not a valid shadow type");
1735  return Ty->isVectorTy() ?
1736  Ty->getVectorNumElements() * Ty->getScalarSizeInBits() :
1737  Ty->getPrimitiveSizeInBits();
1738  }
1739 
1740  /// Cast between two shadow types, extending or truncating as
1741  /// necessary.
1742  Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy,
1743  bool Signed = false) {
1744  Type *srcTy = V->getType();
1745  size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy);
1746  size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy);
1747  if (srcSizeInBits > 1 && dstSizeInBits == 1)
1748  return IRB.CreateICmpNE(V, getCleanShadow(V));
1749 
1750  if (dstTy->isIntegerTy() && srcTy->isIntegerTy())
1751  return IRB.CreateIntCast(V, dstTy, Signed);
1752  if (dstTy->isVectorTy() && srcTy->isVectorTy() &&
1753  dstTy->getVectorNumElements() == srcTy->getVectorNumElements())
1754  return IRB.CreateIntCast(V, dstTy, Signed);
1755  Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits));
1756  Value *V2 =
1757  IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed);
1758  return IRB.CreateBitCast(V2, dstTy);
1759  // TODO: handle struct types.
1760  }
1761 
1762  /// Cast an application value to the type of its own shadow.
1763  Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) {
1764  Type *ShadowTy = getShadowTy(V);
1765  if (V->getType() == ShadowTy)
1766  return V;
1767  if (V->getType()->isPtrOrPtrVectorTy())
1768  return IRB.CreatePtrToInt(V, ShadowTy);
1769  else
1770  return IRB.CreateBitCast(V, ShadowTy);
1771  }
1772 
1773  /// Propagate shadow for arbitrary operation.
1774  void handleShadowOr(Instruction &I) {
1775  IRBuilder<> IRB(&I);
1776  ShadowAndOriginCombiner SC(this, IRB);
1777  for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
1778  SC.Add(OI->get());
1779  SC.Done(&I);
1780  }
1781 
1782  // Handle multiplication by constant.
1783  //
1784  // Handle a special case of multiplication by constant that may have one or
1785  // more zeros in the lower bits. This makes the corresponding number of lower bits
1786  // of the result zero as well. We model it by shifting the other operand
1787  // shadow left by the required number of bits. Effectively, we transform
1788  // (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
1789  // We use multiplication by 2**N instead of shift to cover the case of
1790  // multiplication by 0, which may occur in some elements of a vector operand.
1791  void handleMulByConstant(BinaryOperator &I, Constant *ConstArg,
1792  Value *OtherArg) {
1793  Constant *ShadowMul;
1794  Type *Ty = ConstArg->getType();
1795  if (Ty->isVectorTy()) {
1796  unsigned NumElements = Ty->getVectorNumElements();
1797  Type *EltTy = Ty->getSequentialElementType();
1798  SmallVector<Constant *, 16> Elements;
1799  for (unsigned Idx = 0; Idx < NumElements; ++Idx) {
1800  if (ConstantInt *Elt =
1801  dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) {
1802  const APInt &V = Elt->getValue();
1803  APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
1804  Elements.push_back(ConstantInt::get(EltTy, V2));
1805  } else {
1806  Elements.push_back(ConstantInt::get(EltTy, 1));
1807  }
1808  }
1809  ShadowMul = ConstantVector::get(Elements);
1810  } else {
1811  if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) {
1812  const APInt &V = Elt->getValue();
1813  APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
1814  ShadowMul = ConstantInt::get(Ty, V2);
1815  } else {
1816  ShadowMul = ConstantInt::get(Ty, 1);
1817  }
1818  }
1819 
1820  IRBuilder<> IRB(&I);
1821  setShadow(&I,
1822  IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst"));
1823  setOrigin(&I, getOrigin(OtherArg));
1824  }
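  // For instance, multiplying by the constant 24 = 3 * 2**3 (three trailing
  // zero bits) turns the shadow of the other operand into Sx * 8, i.e.
  // Sx << 3: the low three result bits are always 0 and therefore defined,
  // while the remaining shadow bits move up with the value bits. A constant 0
  // has bitwidth-many trailing zeros, so the shadow is multiplied by 0 and
  // the result is fully defined, as expected.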
1825 
1826  void visitMul(BinaryOperator &I) {
1827  Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
1828  Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
1829  if (constOp0 && !constOp1)
1830  handleMulByConstant(I, constOp0, I.getOperand(1));
1831  else if (constOp1 && !constOp0)
1832  handleMulByConstant(I, constOp1, I.getOperand(0));
1833  else
1834  handleShadowOr(I);
1835  }
1836 
1837  void visitFAdd(BinaryOperator &I) { handleShadowOr(I); }
1838  void visitFSub(BinaryOperator &I) { handleShadowOr(I); }
1839  void visitFMul(BinaryOperator &I) { handleShadowOr(I); }
1840  void visitAdd(BinaryOperator &I) { handleShadowOr(I); }
1841  void visitSub(BinaryOperator &I) { handleShadowOr(I); }
1842  void visitXor(BinaryOperator &I) { handleShadowOr(I); }
1843 
1844  void handleIntegerDiv(Instruction &I) {
1845  IRBuilder<> IRB(&I);
1846  // Strict on the second argument.
1847  insertShadowCheck(I.getOperand(1), &I);
1848  setShadow(&I, getShadow(&I, 0));
1849  setOrigin(&I, getOrigin(&I, 0));
1850  }
1851 
1852  void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); }
1853  void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); }
1854  void visitURem(BinaryOperator &I) { handleIntegerDiv(I); }
1855  void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); }
1856 
1857  // Floating point division is side-effect free. We cannot require that the
1858  // divisor is fully initialized and must propagate shadow. See PR37523.
1859  void visitFDiv(BinaryOperator &I) { handleShadowOr(I); }
1860  void visitFRem(BinaryOperator &I) { handleShadowOr(I); }
1861 
1862  /// Instrument == and != comparisons.
1863  ///
1864  /// Sometimes the comparison result is known even if some of the bits of the
1865  /// arguments are not.
1866  void handleEqualityComparison(ICmpInst &I) {
1867  IRBuilder<> IRB(&I);
1868  Value *A = I.getOperand(0);
1869  Value *B = I.getOperand(1);
1870  Value *Sa = getShadow(A);
1871  Value *Sb = getShadow(B);
1872 
1873  // Get rid of pointers and vectors of pointers.
1874  // For ints (and vectors of ints), types of A and Sa match,
1875  // and this is a no-op.
1876  A = IRB.CreatePointerCast(A, Sa->getType());
1877  B = IRB.CreatePointerCast(B, Sb->getType());
1878 
1879  // A == B <==> (C = A^B) == 0
1880  // A != B <==> (C = A^B) != 0
1881  // Sc = Sa | Sb
1882  Value *C = IRB.CreateXor(A, B);
1883  Value *Sc = IRB.CreateOr(Sa, Sb);
1884  // Now dealing with i = (C == 0) comparison (or C != 0, does not matter now)
1885  // Result is defined if one of the following is true
1886  // * there is a defined 1 bit in C
1887  // * C is fully defined
1888  // Si = !(C & ~Sc) && Sc
1889  Value *Zero = Constant::getNullValue(Sc->getType());
1890  Value *MinusOne = Constant::getAllOnesValue(Sc->getType());
1891  Value *Si =
1892  IRB.CreateAnd(IRB.CreateICmpNE(Sc, Zero),
1893  IRB.CreateICmpEQ(
1894  IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero));
1895  Si->setName("_msprop_icmp");
1896  setShadow(&I, Si);
1897  setOriginForNaryOp(I);
1898  }
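  // Example: comparing A = 0b10?? (low two bits poisoned, Sa = 0b0011) with a
  // defined B = 0b0000 gives Sc = 0b0011 and a defined 1 in bit 3 of C, so
  // C & ~Sc = 0b1000 != 0 and Si = 0: A == B is known to be false no matter
  // what the poisoned bits hold. For A = 0b00?? instead, C & ~Sc == 0 while
  // Sc != 0, and the comparison result is reported as poisoned.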
1899 
1900  /// Build the lowest possible value of V, taking into account V's
1901  /// uninitialized bits.
1902  Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
1903  bool isSigned) {
1904  if (isSigned) {
1905  // Split shadow into sign bit and other bits.
1906  Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
1907  Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
1908  // Maximize the undefined shadow bit, minimize other undefined bits.
1909  return
1910  IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)), SaSignBit);
1911  } else {
1912  // Minimize undefined bits.
1913  return IRB.CreateAnd(A, IRB.CreateNot(Sa));
1914  }
1915  }
1916 
1917  /// Build the highest possible value of V, taking into account V's
1918  /// uninitialized bits.
1919  Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
1920  bool isSigned) {
1921  if (isSigned) {
1922  // Split shadow into sign bit and other bits.
1923  Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
1924  Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
1925  // Minimize the undefined shadow bit, maximize other undefined bits.
1926  return
1927  IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)), SaOtherBits);
1928  } else {
1929  // Maximize undefined bits.
1930  return IRB.CreateOr(A, Sa);
1931  }
1932  }
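  // For an unsigned A = 0b1?0? (Sa = 0b0101) these two helpers bound the
  // possible values as
  //   a0 = A & ~Sa = 0b1000   (every poisoned bit assumed 0)
  //   a1 = A | Sa  = 0b1101   (every poisoned bit assumed 1).
  // In the signed case the sign bit is handled in reverse, since setting it
  // makes the value smaller, not larger.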
1933 
1934  /// Instrument relational comparisons.
1935  ///
1936  /// This function does exact shadow propagation for all relational
1937  /// comparisons of integers, pointers and vectors of those.
1938  /// FIXME: output seems suboptimal when one of the operands is a constant
1939  void handleRelationalComparisonExact(ICmpInst &I) {
1940  IRBuilder<> IRB(&I);
1941  Value *A = I.getOperand(0);
1942  Value *B = I.getOperand(1);
1943  Value *Sa = getShadow(A);
1944  Value *Sb = getShadow(B);
1945 
1946  // Get rid of pointers and vectors of pointers.
1947  // For ints (and vectors of ints), types of A and Sa match,
1948  // and this is a no-op.
1949  A = IRB.CreatePointerCast(A, Sa->getType());
1950  B = IRB.CreatePointerCast(B, Sb->getType());
1951 
1952  // Let [a0, a1] be the interval of possible values of A, taking into account
1953  // its undefined bits. Let [b0, b1] be the interval of possible values of B.
1954  // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0).
1955  bool IsSigned = I.isSigned();
1956  Value *S1 = IRB.CreateICmp(I.getPredicate(),
1957  getLowestPossibleValue(IRB, A, Sa, IsSigned),
1958  getHighestPossibleValue(IRB, B, Sb, IsSigned));
1959  Value *S2 = IRB.CreateICmp(I.getPredicate(),
1960  getHighestPossibleValue(IRB, A, Sa, IsSigned),
1961  getLowestPossibleValue(IRB, B, Sb, IsSigned));
1962  Value *Si = IRB.CreateXor(S1, S2);
1963  setShadow(&I, Si);
1964  setOriginForNaryOp(I);
1965  }
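  // Continuing the interval view: the code computes
  //   S1 = (a0 cmp b1), S2 = (a1 cmp b0), Si = S1 xor S2,
  // which is 0 exactly when the two extreme pairings agree, i.e. when no
  // assignment of the uninitialized bits can flip the comparison.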
1966 
1967  /// Instrument signed relational comparisons.
1968  ///
1969  /// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest
1970  /// bit of the shadow. Everything else is delegated to handleShadowOr().
1971  void handleSignedRelationalComparison(ICmpInst &I) {
1972  Constant *constOp;
1973  Value *op = nullptr;
1974  CmpInst::Predicate pre;
1975  if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) {
1976  op = I.getOperand(0);
1977  pre = I.getPredicate();
1978  } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) {
1979  op = I.getOperand(1);
1980  pre = I.getSwappedPredicate();
1981  } else {
1982  handleShadowOr(I);
1983  return;
1984  }
1985 
1986  if ((constOp->isNullValue() &&
1987  (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) ||
1988  (constOp->isAllOnesValue() &&
1989  (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) {
1990  IRBuilder<> IRB(&I);
1991  Value *Shadow = IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op),
1992  "_msprop_icmp_s");
1993  setShadow(&I, Shadow);
1994  setOrigin(&I, getOrigin(op));
1995  } else {
1996  handleShadowOr(I);
1997  }
1998  }
1999 
2000  void visitICmpInst(ICmpInst &I) {
2001  if (!ClHandleICmp) {
2002  handleShadowOr(I);
2003  return;
2004  }
2005  if (I.isEquality()) {
2006  handleEqualityComparison(I);
2007  return;
2008  }
2009 
2010  assert(I.isRelational());
2011  if (ClHandleICmpExact) {
2012  handleRelationalComparisonExact(I);
2013  return;
2014  }
2015  if (I.isSigned()) {
2016  handleSignedRelationalComparison(I);
2017  return;
2018  }
2019 
2020  assert(I.isUnsigned());
2021  if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) {
2022  handleRelationalComparisonExact(I);
2023  return;
2024  }
2025 
2026  handleShadowOr(I);
2027  }
2028 
2029  void visitFCmpInst(FCmpInst &I) {
2030  handleShadowOr(I);
2031  }
2032 
2033  void handleShift(BinaryOperator &I) {
2034  IRBuilder<> IRB(&I);
2035  // If any of the S2 bits are poisoned, the whole thing is poisoned.
2036  // Otherwise perform the same shift on S1.
2037  Value *S1 = getShadow(&I, 0);
2038  Value *S2 = getShadow(&I, 1);
2039  Value *S2Conv = IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)),
2040  S2->getType());
2041  Value *V2 = I.getOperand(1);
2042  Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2);
2043  setShadow(&I, IRB.CreateOr(Shift, S2Conv));
2044  setOriginForNaryOp(I);
2045  }
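  // For example, for %r = shl i32 %x, %n this emits roughly (illustrative
  // names):
  //   %s2c = sext (icmp ne i32 %Sn, 0) to i32  ; all-ones if %n is poisoned
  //   %sh  = shl i32 %Sx, %n                   ; shift %x's shadow in lockstep
  //   %Sr  = or i32 %sh, %s2c
  // so a poisoned shift amount poisons the entire result, while a defined one
  // moves the shadow bits exactly as it moves the value bits.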
2046 
2047  void visitShl(BinaryOperator &I) { handleShift(I); }
2048  void visitAShr(BinaryOperator &I) { handleShift(I); }
2049  void visitLShr(BinaryOperator &I) { handleShift(I); }
2050 
2051  /// Instrument llvm.memmove
2052  ///
2053  /// At this point we don't know if llvm.memmove will be inlined or not.
2054  /// If we don't instrument it and it gets inlined,
2055  /// our interceptor will not kick in and we will lose the memmove.
2056  /// If we instrument the call here, but it does not get inlined,
2057  /// we will memmove the shadow twice, which is bad in case
2058  /// of overlapping regions. So, we simply lower the intrinsic to a call.
2059  ///
2060  /// Similar situation exists for memcpy and memset.
2061  void visitMemMoveInst(MemMoveInst &I) {
2062  IRBuilder<> IRB(&I);
2063  IRB.CreateCall(
2064  MS.MemmoveFn,
2065  {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2066  IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
2067  IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2068  I.eraseFromParent();
2069  }
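  // MS.MemmoveFn is bound earlier in this file to the runtime's
  // __msan_memmove, whose interceptor moves the application bytes together
  // with their shadow (and origins), so the single lowered call keeps value
  // and shadow in sync with no risk of double-copying overlapping shadow.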
2070 
2071  // Similar to memmove: avoid copying shadow twice.
2072  // This is somewhat unfortunate as it may slow down small constant memcpys.
2073  // FIXME: consider doing manual inline for small constant sizes and proper
2074  // alignment.
2075  void visitMemCpyInst(MemCpyInst &I) {
2076  IRBuilder<> IRB(&I);
2077  IRB.CreateCall(
2078  MS.MemcpyFn,
2079  {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2080  IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
2081  IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2082  I.eraseFromParent();
2083  }
2084 
2085  // Same as memcpy.
2086  void visitMemSetInst(MemSetInst &I) {
2087  IRBuilder<> IRB(&I);
2088  IRB.CreateCall(
2089  MS.MemsetFn,
2090  {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2091  IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
2092  IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2093  I.eraseFromParent();
2094  }
2095 
2096  void visitVAStartInst(VAStartInst &I) {
2097  VAHelper->visitVAStartInst(I);
2098  }
2099 
2100  void visitVACopyInst(VACopyInst &I) {
2101  VAHelper->visitVACopyInst(I);
2102  }
2103 
2104  /// Handle vector store-like intrinsics.
2105  ///
2106  /// Instrument intrinsics that look like a simple SIMD store: writes memory,
2107  /// has 1 pointer argument and 1 vector argument, returns void.
2108  bool handleVectorStoreIntrinsic(IntrinsicInst &I) {
2109  IRBuilder<> IRB(&I);
2110  Value* Addr = I.getArgOperand(0);
2111  Value *Shadow = getShadow(&I, 1);
2112  Value *ShadowPtr, *OriginPtr;
2113 
2114  // We don't know the pointer alignment (could be unaligned SSE store!).
2115  // Have to assume the worst case.
2116  std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
2117  Addr, IRB, Shadow->getType(), /*Alignment*/ 1, /*isStore*/ true);
2118  IRB.CreateAlignedStore(Shadow, ShadowPtr, 1);
2119 
2120  if (ClCheckAccessAddress)
2121  insertShadowCheck(Addr, &I);
2122 
2123  // FIXME: factor out common code from materializeStores
2124  if (MS.TrackOrigins) IRB.CreateStore(getOrigin(&I, 1), OriginPtr);
2125  return true;
2126  }
2127 
2128  /// Handle vector load-like intrinsics.
2129  ///
2130  /// Instrument intrinsics that look like a simple SIMD load: reads memory,
2131  /// has 1 pointer argument, returns a vector.
2132  bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
2133  IRBuilder<> IRB(&I);
2134  Value *Addr = I.getArgOperand(0);
2135 
2136  Type *ShadowTy = getShadowTy(&I);
2137  Value *ShadowPtr, *OriginPtr;
2138  if (PropagateShadow) {
2139  // We don't know the pointer alignment (could be unaligned SSE load!).
2140  // Have to assume the worst case.
2141  unsigned Alignment = 1;
2142  std::tie(ShadowPtr, OriginPtr) =
2143  getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
2144  setShadow(&I, IRB.CreateAlignedLoad(ShadowPtr, Alignment, "_msld"));
2145  } else {
2146  setShadow(&I, getCleanShadow(&I));
2147  }
2148 
2149  if (ClCheckAccessAddress)
2150  insertShadowCheck(Addr, &I);
2151 
2152  if (MS.TrackOrigins) {
2153  if (PropagateShadow)
2154  setOrigin(&I, IRB.CreateLoad(OriginPtr));
2155  else
2156  setOrigin(&I, getCleanOrigin());
2157  }
2158  return true;
2159  }
2160 
2161  /// Handle (SIMD arithmetic)-like intrinsics.
2162  ///
2163  /// Instrument intrinsics with any number of arguments of the same type,
2164  /// equal to the return type. The type should be simple (no aggregates or
2165  /// pointers; vectors are fine).
2166  /// Caller guarantees that this intrinsic does not access memory.
2167  bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) {
2168  Type *RetTy = I.getType();
2169  if (!(RetTy->isIntOrIntVectorTy() ||
2170  RetTy->isFPOrFPVectorTy() ||
2171  RetTy->isX86_MMXTy()))
2172  return false;
2173 
2174  unsigned NumArgOperands = I.getNumArgOperands();
2175 
2176  for (unsigned i = 0; i < NumArgOperands; ++i) {
2177  Type *Ty = I.getArgOperand(i)->getType();
2178  if (Ty != RetTy)
2179  return false;
2180  }
2181 
2182  IRBuilder<> IRB(&I);
2183  ShadowAndOriginCombiner SC(this, IRB);
2184  for (unsigned i = 0; i < NumArgOperands; ++i)
2185  SC.Add(I.getArgOperand(i));
2186  SC.Done(&I);
2187 
2188  return true;
2189  }
2190 
2191  /// Heuristically instrument unknown intrinsics.
2192  ///
2193  /// The main purpose of this code is to do something reasonable with all
2194  /// random intrinsics we might encounter, most importantly - SIMD intrinsics.
2195  /// We recognize several classes of intrinsics by their argument types and
2196  /// ModRefBehaviour and apply special instrumentation when we are reasonably
2197  /// sure that we know what the intrinsic does.
2198  ///
2199  /// We special-case intrinsics where this approach fails. See llvm.bswap
2200  /// handling as an example of that.
2201  bool handleUnknownIntrinsic(IntrinsicInst &I) {
2202  unsigned NumArgOperands = I.getNumArgOperands();
2203  if (NumArgOperands == 0)
2204  return false;
2205 
2206  if (NumArgOperands == 2 &&
2207  I.getArgOperand(0)->getType()->isPointerTy() &&
2208  I.getArgOperand(1)->getType()->isVectorTy() &&
2209  I.getType()->isVoidTy() &&
2210  !I.onlyReadsMemory()) {
2211  // This looks like a vector store.
2212  return handleVectorStoreIntrinsic(I);
2213  }
2214 
2215  if (NumArgOperands == 1 &&
2216  I.getArgOperand(0)->getType()->isPointerTy() &&
2217  I.getType()->isVectorTy() &&
2218  I.onlyReadsMemory()) {
2219  // This looks like a vector load.
2220  return handleVectorLoadIntrinsic(I);
2221  }
2222 
2223  if (I.doesNotAccessMemory())
2224  if (maybeHandleSimpleNomemIntrinsic(I))
2225  return true;
2226 
2227  // FIXME: detect and handle SSE maskstore/maskload
2228  return false;
2229  }
2230 
2231  void handleBswap(IntrinsicInst &I) {
2232  IRBuilder<> IRB(&I);
2233  Value *Op = I.getArgOperand(0);
2234  Type *OpType = Op->getType();
2235  Function *BswapFunc = Intrinsic::getDeclaration(
2236  F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
2237  setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
2238  setOrigin(&I, getOrigin(Op));
2239  }
2240 
2241  // Instrument vector convert intrinsic.
2242  //
2243  // This function instruments intrinsics like cvtsi2ss:
2244  // %Out = int_xxx_cvtyyy(%ConvertOp)
2245  // or
2246  // %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
2247  // Intrinsic converts \p NumUsedElements elements of \p ConvertOp to the same
2248  // number of \p Out elements, and (if it has 2 arguments) copies the rest of the
2249  // elements from \p CopyOp.
2250  // In most cases the conversion involves a floating-point value which may trigger a
2251  // hardware exception when not fully initialized. For this reason we require
2252  // \p ConvertOp[0:NumUsedElements] to be fully initialized and trap otherwise.
2253  // We copy the shadow of \p CopyOp[NumUsedElements:] to \p
2254  // Out[NumUsedElements:]. This means that intrinsics without \p CopyOp always
2255  // return a fully initialized value.
2256  void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements) {
2257  IRBuilder<> IRB(&I);
2258  Value *CopyOp, *ConvertOp;
2259 
2260  switch (I.getNumArgOperands()) {
2261  case 3:
2262  assert(isa<ConstantInt>(I.getArgOperand(2)) && "Invalid rounding mode");
2263  LLVM_FALLTHROUGH;
2264  case 2:
2265  CopyOp = I.getArgOperand(0);
2266  ConvertOp = I.getArgOperand(1);
2267  break;
2268  case 1:
2269  ConvertOp = I.getArgOperand(0);
2270  CopyOp = nullptr;
2271  break;
2272  default:
2273  llvm_unreachable("Cvt intrinsic with unsupported number of arguments.");
2274  }
2275 
2276  // The first *NumUsedElements* elements of ConvertOp are converted to the
2277  // same number of output elements. The rest of the output is copied from
2278  // CopyOp, or (if not available) filled with zeroes.
2279  // Combine shadow for elements of ConvertOp that are used in this operation,
2280  // and insert a check.
2281  // FIXME: consider propagating shadow of ConvertOp, at least in the case of
2282  // int->any conversion.
2283  Value *ConvertShadow = getShadow(ConvertOp);
2284  Value *AggShadow = nullptr;
2285  if (ConvertOp->getType()->isVectorTy()) {
2286  AggShadow = IRB.CreateExtractElement(
2287  ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
2288  for (int i = 1; i < NumUsedElements; ++i) {
2289  Value *MoreShadow = IRB.CreateExtractElement(
2290  ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i));
2291  AggShadow = IRB.CreateOr(AggShadow, MoreShadow);
2292  }
2293  } else {
2294  AggShadow = ConvertShadow;
2295  }
2296  assert(AggShadow->getType()->isIntegerTy());
2297  insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I);
2298 
2299  // Build result shadow by zero-filling parts of CopyOp shadow that come from
2300  // ConvertOp.
2301  if (CopyOp) {
2302  assert(CopyOp->getType() == I.getType());
2303  assert(CopyOp->getType()->isVectorTy());
2304  Value *ResultShadow = getShadow(CopyOp);
2305  Type *EltTy = ResultShadow->getType()->getVectorElementType();
2306  for (int i = 0; i < NumUsedElements; ++i) {
2307  ResultShadow = IRB.CreateInsertElement(
2308  ResultShadow, ConstantInt::getNullValue(EltTy),
2309  ConstantInt::get(IRB.getInt32Ty(), i));
2310  }
2311  setShadow(&I, ResultShadow);
2312  setOrigin(&I, getOrigin(CopyOp));
2313  } else {
2314  setShadow(&I, getCleanShadow(&I));
2315  setOrigin(&I, getCleanOrigin());
2316  }
2317  }
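  // Concretely, for a cvtsd2ss-style call
  //   %out = call <4 x float> @llvm.x86.sse2.cvtsd2ss(<4 x float> %copy,
  //                                                   <2 x double> %conv)
  // element 0 of %conv is checked (converting an uninitialized FP value could
  // trap), element 0 of %out's shadow is set to clean, and elements 1..3 keep
  // the shadow of %copy.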
2318 
2319  // Given a scalar or vector, extract lower 64 bits (or less), and return all
2320  // zeroes if it is zero, and all ones otherwise.
2321  Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
2322  if (S->getType()->isVectorTy())
2323  S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true);
2324  assert(S->getType()->getPrimitiveSizeInBits() <= 64);
2325  Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
2326  return CreateShadowCast(IRB, S2, T, /* Signed */ true);
2327  }
2328 
2329  // Given a vector, extract its first element, and return all
2330  // zeroes if it is zero, and all ones otherwise.
2331  Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
2332  Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0);
2333  Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1));
2334  return CreateShadowCast(IRB, S2, T, /* Signed */ true);
2335  }
2336 
2337  Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) {
2338  Type *T = S->getType();
2339  assert(T->isVectorTy());
2340  Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
2341  return IRB.CreateSExt(S2, T);
2342  }
2343 
2344  // Instrument vector shift intrinsic.
2345  //
2346  // This function instruments intrinsics like int_x86_avx2_psll_w.
2347  // Intrinsic shifts %In by %ShiftSize bits.
2348  // %ShiftSize may be a vector. In that case the lower 64 bits determine shift
2349  // size, and the rest is ignored. Behavior is defined even if shift size is
2350  // greater than register (or field) width.
2351  void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) {
2352  assert(I.getNumArgOperands() == 2);
2353  IRBuilder<> IRB(&I);
2354  // If any of the S2 bits are poisoned, the whole thing is poisoned.
2355  // Otherwise perform the same shift on S1.
2356  Value *S1 = getShadow(&I, 0);
2357  Value *S2 = getShadow(&I, 1);
2358  Value *S2Conv = Variable ? VariableShadowExtend(IRB, S2)
2359  : Lower64ShadowExtend(IRB, S2, getShadowTy(&I));
2360  Value *V1 = I.getOperand(0);
2361  Value *V2 = I.getOperand(1);
2362  Value *Shift = IRB.CreateCall(I.getCalledValue(),
2363  {IRB.CreateBitCast(S1, V1->getType()), V2});
2364  Shift = IRB.CreateBitCast(Shift, getShadowTy(&I));
2365  setShadow(&I, IRB.CreateOr(Shift, S2Conv));
2366  setOriginForNaryOp(I);
2367  }
2368 
2369  // Get an X86_MMX-sized vector type.
2370  Type *getMMXVectorTy(unsigned EltSizeInBits) {
2371  const unsigned X86_MMXSizeInBits = 64;
2372  return VectorType::get(IntegerType::get(*MS.C, EltSizeInBits),
2373  X86_MMXSizeInBits / EltSizeInBits);
2374  }
2375 
2376  // Returns a signed counterpart for an (un)signed-saturate-and-pack
2377  // intrinsic.
2378  Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) {
2379  switch (id) {
2380  case Intrinsic::x86_sse2_packsswb_128:
2381  case Intrinsic::x86_sse2_packuswb_128:
2382  return Intrinsic::x86_sse2_packsswb_128;
2383 
2384  case Intrinsic::x86_sse2_packssdw_128:
2385  case Intrinsic::x86_sse41_packusdw:
2386  return Intrinsic::x86_sse2_packssdw_128;
2387 
2388  case Intrinsic::x86_avx2_packsswb:
2389  case Intrinsic::x86_avx2_packuswb:
2390  return Intrinsic::x86_avx2_packsswb;
2391 
2392  case Intrinsic::x86_avx2_packssdw:
2393  case Intrinsic::x86_avx2_packusdw:
2394  return Intrinsic::x86_avx2_packssdw;
2395 
2396  case Intrinsic::x86_mmx_packsswb:
2397  case Intrinsic::x86_mmx_packuswb:
2398  return Intrinsic::x86_mmx_packsswb;
2399 
2400  case Intrinsic::x86_mmx_packssdw:
2401  return Intrinsic::x86_mmx_packssdw;
2402  default:
2403  llvm_unreachable("unexpected intrinsic id");
2404  }
2405  }
2406 
2407  // Instrument vector pack intrinsic.
2408  //
2409  // This function instruments intrinsics like x86_mmx_packsswb, that
2410  // packs elements of 2 input vectors into half as many bits with saturation.
2411  // Shadow is propagated with the signed variant of the same intrinsic applied
2412  // to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer).
2413  // EltSizeInBits is used only for x86mmx arguments.
2414  void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) {
2415  assert(I.getNumArgOperands() == 2);
2416  bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
2417  IRBuilder<> IRB(&I);
2418  Value *S1 = getShadow(&I, 0);
2419  Value *S2 = getShadow(&I, 1);
2420  assert(isX86_MMX || S1->getType()->isVectorTy());
2421 
2422  // SExt and ICmpNE below must apply to individual elements of input vectors.
2423  // In case of x86mmx arguments, cast them to appropriate vector types and
2424  // back.
2425  Type *T = isX86_MMX ? getMMXVectorTy(EltSizeInBits) : S1->getType();
2426  if (isX86_MMX) {
2427  S1 = IRB.CreateBitCast(S1, T);
2428  S2 = IRB.CreateBitCast(S2, T);
2429  }
2430  Value *S1_ext = IRB.CreateSExt(
2431  IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T);
2432  Value *S2_ext = IRB.CreateSExt(
2433  IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T);
2434  if (isX86_MMX) {
2435  Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C);
2436  S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy);
2437  S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy);
2438  }
2439 
2440  Function *ShadowFn = Intrinsic::getDeclaration(
2441  F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID()));
2442 
2443  Value *S =
2444  IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack");
2445  if (isX86_MMX) S = IRB.CreateBitCast(S, getShadowTy(&I));
2446  setShadow(&I, S);
2447  setOriginForNaryOp(I);
2448  }
2449 
2450  // Instrument sum-of-absolute-differences intrinsic.
2451  void handleVectorSadIntrinsic(IntrinsicInst &I) {
2452  const unsigned SignificantBitsPerResultElement = 16;
2453  bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
2454  Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType();
2455  unsigned ZeroBitsPerResultElement =
2456  ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement;
2457 
2458  IRBuilder<> IRB(&I);
2459  Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
2460  S = IRB.CreateBitCast(S, ResTy);
2461  S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
2462  ResTy);
2463  S = IRB.CreateLShr(S, ZeroBitsPerResultElement);
2464  S = IRB.CreateBitCast(S, getShadowTy(&I));
2465  setShadow(&I, S);
2466  setOriginForNaryOp(I);
2467  }
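  // psadbw sums eight absolute byte differences into each 64-bit result
  // element, which fits in 16 bits. After the OR and sign-extension above,
  // the LShr clears the top 48 shadow bits: those result bits are always 0
  // and hence always defined.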
2468 
2469  // Instrument multiply-add intrinsic.
2470  void handleVectorPmaddIntrinsic(IntrinsicInst &I,
2471  unsigned EltSizeInBits = 0) {
2472  bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
2473  Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType();
2474  IRBuilder<> IRB(&I);
2475  Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
2476  S = IRB.CreateBitCast(S, ResTy);
2477  S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
2478  ResTy);
2479  S = IRB.CreateBitCast(S, getShadowTy(&I));
2480  setShadow(&I, S);
2481  setOriginForNaryOp(I);
2482  }
2483 
2484  // Instrument compare-packed intrinsic.
2485  // Basically, an or followed by sext(icmp ne 0) to end up with all-zeros or
2486  // all-ones shadow.
2487  void handleVectorComparePackedIntrinsic(IntrinsicInst &I) {
2488  IRBuilder<> IRB(&I);
2489  Type *ResTy = getShadowTy(&I);
2490  Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
2491  Value *S = IRB.CreateSExt(
2492  IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy);
2493  setShadow(&I, S);
2494  setOriginForNaryOp(I);
2495  }
2496 
2497  // Instrument compare-scalar intrinsic.
2498  // This handles both cmp* intrinsics which return the result in the first
2499  // element of a vector, and comi* which return the result as i32.
2500  void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) {
2501  IRBuilder<> IRB(&I);
2502  Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
2503  Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I));
2504  setShadow(&I, S);
2505  setOriginForNaryOp(I);
2506  }
2507 
2508  void handleStmxcsr(IntrinsicInst &I) {
2509  IRBuilder<> IRB(&I);
2510  Value* Addr = I.getArgOperand(0);
2511  Type *Ty = IRB.getInt32Ty();
2512  Value *ShadowPtr =
2513  getShadowOriginPtr(Addr, IRB, Ty, /*Alignment*/ 1, /*isStore*/ true)
2514  .first;
2515 
2516  IRB.CreateStore(getCleanShadow(Ty),
2517  IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo()));
2518 
2519  if (ClCheckAccessAddress)
2520  insertShadowCheck(Addr, &I);
2521  }
2522 
2523  void handleLdmxcsr(IntrinsicInst &I) {
2524  if (!InsertChecks) return;
2525 
2526  IRBuilder<> IRB(&I);
2527  Value *Addr = I.getArgOperand(0);
2528  Type *Ty = IRB.getInt32Ty();
2529  unsigned Alignment = 1;
2530  Value *ShadowPtr, *OriginPtr;
2531  std::tie(ShadowPtr, OriginPtr) =
2532  getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false);
2533 
2534  if (ClCheckAccessAddress)
2535  insertShadowCheck(Addr, &I);
2536 
2537  Value *Shadow = IRB.CreateAlignedLoad(ShadowPtr, Alignment, "_ldmxcsr");
2538  Value *Origin =
2539  MS.TrackOrigins ? IRB.CreateLoad(OriginPtr) : getCleanOrigin();
2540  insertShadowCheck(Shadow, Origin, &I);
2541  }
2542 
2543  void handleMaskedStore(IntrinsicInst &I) {
2544  IRBuilder<> IRB(&I);
2545  Value *V = I.getArgOperand(0);
2546  Value *Addr = I.getArgOperand(1);
2547  unsigned Align = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue();
2548  Value *Mask = I.getArgOperand(3);
2549  Value *Shadow = getShadow(V);
2550 
2551  Value *ShadowPtr;
2552  Value *OriginPtr;
2553  std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
2554  Addr, IRB, Shadow->getType(), Align, /*isStore*/ true);
2555 
2556  if (ClCheckAccessAddress) {
2557  insertShadowCheck(Addr, &I);
2558  // Uninitialized mask is kind of like uninitialized address, but not as
2559  // scary.
2560  insertShadowCheck(Mask, &I);
2561  }
2562 
2563  IRB.CreateMaskedStore(Shadow, ShadowPtr, Align, Mask);
2564 
2565  if (MS.TrackOrigins) {
2566  auto &DL = F.getParent()->getDataLayout();
2567  paintOrigin(IRB, getOrigin(V), OriginPtr,
2568  DL.getTypeStoreSize(Shadow->getType()),
2569  std::max(Align, kMinOriginAlignment));
2570  }
2571  }
2572 
2573  bool handleMaskedLoad(IntrinsicInst &I) {
2574  IRBuilder<> IRB(&I);
2575  Value *Addr = I.getArgOperand(0);
2576  unsigned Align = cast<ConstantInt>(I.getArgOperand(1))->getZExtValue();
2577  Value *Mask = I.getArgOperand(2);
2578  Value *PassThru = I.getArgOperand(3);
2579 
2580  Type *ShadowTy = getShadowTy(&I);
2581  Value *ShadowPtr, *OriginPtr;
2582  if (PropagateShadow) {
2583  std::tie(ShadowPtr, OriginPtr) =
2584  getShadowOriginPtr(Addr, IRB, ShadowTy, Align, /*isStore*/ false);
2585  setShadow(&I, IRB.CreateMaskedLoad(ShadowPtr, Align, Mask,
2586  getShadow(PassThru), "_msmaskedld"));
2587  } else {
2588  setShadow(&I, getCleanShadow(&I));
2589  }
2590 
2591  if (ClCheckAccessAddress) {
2592  insertShadowCheck(Addr, &I);
2593  insertShadowCheck(Mask, &I);
2594  }
2595 
2596  if (MS.TrackOrigins) {
2597  if (PropagateShadow) {
2598  // Choose between PassThru's and the loaded value's origins.
2599  Value *MaskedPassThruShadow = IRB.CreateAnd(
2600  getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy));
2601 
2602  Value *Acc = IRB.CreateExtractElement(
2603  MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
2604  for (int i = 1, N = PassThru->getType()->getVectorNumElements(); i < N;
2605  ++i) {
2606  Value *More = IRB.CreateExtractElement(
2607  MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i));
2608  Acc = IRB.CreateOr(Acc, More);
2609  }
2610 
2611  Value *Origin = IRB.CreateSelect(
2612  IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())),
2613  getOrigin(PassThru), IRB.CreateLoad(OriginPtr));
2614 
2615  setOrigin(&I, Origin);
2616  } else {
2617  setOrigin(&I, getCleanOrigin());
2618  }
2619  }
2620  return true;
2621  }
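  // The origin choice above OR-reduces the masked PassThru shadow into one
  // flag: if any of those lanes is poisoned, the result keeps PassThru's
  // origin, otherwise it takes the origin loaded from the origin shadow map.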
2622 
2623 
2624  void visitIntrinsicInst(IntrinsicInst &I) {
2625  switch (I.getIntrinsicID()) {
2626  case Intrinsic::bswap:
2627  handleBswap(I);
2628  break;
2629  case Intrinsic::masked_store:
2630  handleMaskedStore(I);
2631  break;
2632  case Intrinsic::masked_load:
2633  handleMaskedLoad(I);
2634  break;
2635  case Intrinsic::x86_sse_stmxcsr:
2636  handleStmxcsr(I);
2637  break;
2638  case Intrinsic::x86_sse_ldmxcsr:
2639  handleLdmxcsr(I);
2640  break;
2641  case Intrinsic::x86_avx512_vcvtsd2usi64:
2642  case Intrinsic::x86_avx512_vcvtsd2usi32:
2643  case Intrinsic::x86_avx512_vcvtss2usi64:
2644  case Intrinsic::x86_avx512_vcvtss2usi32:
2645  case Intrinsic::x86_avx512_cvttss2usi64:
2646  case Intrinsic::x86_avx512_cvttss2usi:
2647  case Intrinsic::x86_avx512_cvttsd2usi64:
2648  case Intrinsic::x86_avx512_cvttsd2usi:
2649  case Intrinsic::x86_avx512_cvtusi2ss:
2650  case Intrinsic::x86_avx512_cvtusi642sd:
2651  case Intrinsic::x86_avx512_cvtusi642ss:
2652  case Intrinsic::x86_sse2_cvtsd2si64:
2653  case Intrinsic::x86_sse2_cvtsd2si:
2654  case Intrinsic::x86_sse2_cvtsd2ss:
2655  case Intrinsic::x86_sse2_cvttsd2si64:
2656  case Intrinsic::x86_sse2_cvttsd2si:
2657  case Intrinsic::x86_sse_cvtss2si64:
2658  case Intrinsic::x86_sse_cvtss2si:
2659  case Intrinsic::x86_sse_cvttss2si64:
2660  case Intrinsic::x86_sse_cvttss2si:
2661  handleVectorConvertIntrinsic(I, 1);
2662  break;
2663  case Intrinsic::x86_sse_cvtps2pi:
2664  case Intrinsic::x86_sse_cvttps2pi:
2665  handleVectorConvertIntrinsic(I, 2);
2666  break;
2667 
2668  case Intrinsic::x86_avx512_psll_w_512:
2669  case Intrinsic::x86_avx512_psll_d_512:
2670  case Intrinsic::x86_avx512_psll_q_512:
2671  case Intrinsic::x86_avx512_pslli_w_512:
2672  case Intrinsic::x86_avx512_pslli_d_512:
2673  case Intrinsic::x86_avx512_pslli_q_512:
2674  case Intrinsic::x86_avx512_psrl_w_512:
2675  case Intrinsic::x86_avx512_psrl_d_512:
2676  case Intrinsic::x86_avx512_psrl_q_512:
2677  case Intrinsic::x86_avx512_psra_w_512:
2678  case Intrinsic::x86_avx512_psra_d_512:
2679  case Intrinsic::x86_avx512_psra_q_512:
2680  case Intrinsic::x86_avx512_psrli_w_512:
2681  case Intrinsic::x86_avx512_psrli_d_512:
2682  case Intrinsic::x86_avx512_psrli_q_512:
2683  case Intrinsic::x86_avx512_psrai_w_512:
2684  case Intrinsic::x86_avx512_psrai_d_512:
2685  case Intrinsic::x86_avx512_psrai_q_512:
2686  case Intrinsic::x86_avx512_psra_q_256:
2687  case Intrinsic::x86_avx512_psra_q_128:
2688  case Intrinsic::x86_avx512_psrai_q_256:
2689  case Intrinsic::x86_avx512_psrai_q_128:
2690  case Intrinsic::x86_avx2_psll_w:
2691  case Intrinsic::x86_avx2_psll_d:
2692  case Intrinsic::x86_avx2_psll_q:
2693  case Intrinsic::x86_avx2_pslli_w:
2694  case Intrinsic::x86_avx2_pslli_d:
2695  case Intrinsic::x86_avx2_pslli_q:
2696  case Intrinsic::x86_avx2_psrl_w:
2697  case Intrinsic::x86_avx2_psrl_d:
2698  case Intrinsic::x86_avx2_psrl_q:
2699  case Intrinsic::x86_avx2_psra_w:
2700  case Intrinsic::x86_avx2_psra_d:
2701  case Intrinsic::x86_avx2_psrli_w:
2702  case Intrinsic::x86_avx2_psrli_d:
2703  case Intrinsic::x86_avx2_psrli_q:
2704  case Intrinsic::x86_avx2_psrai_w:
2705  case Intrinsic::x86_avx2_psrai_d:
2706  case Intrinsic::x86_sse2_psll_w:
2707  case Intrinsic::x86_sse2_psll_d:
2708  case Intrinsic::x86_sse2_psll_q:
2709  case Intrinsic::x86_sse2_pslli_w:
2710  case Intrinsic::x86_sse2_pslli_d:
2711  case Intrinsic::x86_sse2_pslli_q:
2712  case Intrinsic::x86_sse2_psrl_w:
2713  case Intrinsic::x86_sse2_psrl_d:
2714  case Intrinsic::x86_sse2_psrl_q:
2715  case Intrinsic::x86_sse2_psra_w:
2716  case Intrinsic::x86_sse2_psra_d:
2717  case Intrinsic::x86_sse2_psrli_w:
2718  case Intrinsic::x86_sse2_psrli_d:
2719  case Intrinsic::x86_sse2_psrli_q:
2720  case Intrinsic::x86_sse2_psrai_w:
2721  case Intrinsic::x86_sse2_psrai_d:
2722  case Intrinsic::x86_mmx_psll_w:
2723  case Intrinsic::x86_mmx_psll_d:
2724  case Intrinsic::x86_mmx_psll_q:
2725  case Intrinsic::x86_mmx_pslli_w:
2726  case Intrinsic::x86_mmx_pslli_d:
2727  case Intrinsic::x86_mmx_pslli_q:
2728  case Intrinsic::x86_mmx_psrl_w:
2729  case Intrinsic::x86_mmx_psrl_d:
2730  case Intrinsic::x86_mmx_psrl_q:
2731  case Intrinsic::x86_mmx_psra_w:
2732  case Intrinsic::x86_mmx_psra_d:
2733  case Intrinsic::x86_mmx_psrli_w:
2734  case Intrinsic::x86_mmx_psrli_d:
2735  case Intrinsic::x86_mmx_psrli_q:
2736  case Intrinsic::x86_mmx_psrai_w:
2737  case Intrinsic::x86_mmx_psrai_d:
2738  handleVectorShiftIntrinsic(I, /* Variable */ false);
2739  break;
2740  case Intrinsic::x86_avx2_psllv_d:
2741  case Intrinsic::x86_avx2_psllv_d_256:
2742  case Intrinsic::x86_avx512_psllv_d_512:
2743  case Intrinsic::x86_avx2_psllv_q:
2744  case Intrinsic::x86_avx2_psllv_q_256:
2745  case Intrinsic::x86_avx512_psllv_q_512:
2746  case Intrinsic::x86_avx2_psrlv_d:
2747  case Intrinsic::x86_avx2_psrlv_d_256:
2748  case Intrinsic::x86_avx512_psrlv_d_512:
2749  case Intrinsic::x86_avx2_psrlv_q:
2750  case Intrinsic::x86_avx2_psrlv_q_256:
2751  case Intrinsic::x86_avx512_psrlv_q_512:
2752  case Intrinsic::x86_avx2_psrav_d:
2753  case Intrinsic::x86_avx2_psrav_d_256:
2754  case Intrinsic::x86_avx512_psrav_d_512:
2755  case Intrinsic::x86_avx512_psrav_q_128:
2756  case Intrinsic::x86_avx512_psrav_q_256:
2757  case Intrinsic::x86_avx512_psrav_q_512:
2758  handleVectorShiftIntrinsic(I, /* Variable */ true);
2759  break;
2760 
2761  case Intrinsic::x86_sse2_packsswb_128:
2762  case Intrinsic::x86_sse2_packssdw_128:
2763  case Intrinsic::x86_sse2_packuswb_128:
2764  case Intrinsic::x86_sse41_packusdw:
2765  case Intrinsic::x86_avx2_packsswb:
2766  case Intrinsic::x86_avx2_packssdw:
2767  case Intrinsic::x86_avx2_packuswb:
2768  case Intrinsic::x86_avx2_packusdw:
2769  handleVectorPackIntrinsic(I);
2770  break;
2771 
2772  case Intrinsic::x86_mmx_packsswb:
2773  case Intrinsic::x86_mmx_packuswb:
2774  handleVectorPackIntrinsic(I, 16);
2775  break;
2776 
2777  case Intrinsic::x86_mmx_packssdw:
2778  handleVectorPackIntrinsic(I, 32);
2779  break;
2780 
2781  case Intrinsic::x86_mmx_psad_bw:
2782  case Intrinsic::x86_sse2_psad_bw:
2783  case Intrinsic::x86_avx2_psad_bw:
2784  handleVectorSadIntrinsic(I);
2785  break;
2786 
2787  case Intrinsic::x86_sse2_pmadd_wd:
2788  case Intrinsic::x86_avx2_pmadd_wd:
2789  case Intrinsic::x86_ssse3_pmadd_ub_sw_128:
2790  case Intrinsic::x86_avx2_pmadd_ub_sw:
2791  handleVectorPmaddIntrinsic(I);
2792  break;
2793 
2794  case Intrinsic::x86_ssse3_pmadd_ub_sw:
2795  handleVectorPmaddIntrinsic(I, 8);
2796  break;
2797 
2798  case Intrinsic::x86_mmx_pmadd_wd:
2799  handleVectorPmaddIntrinsic(I, 16);
2800  break;
2801 
2802  case Intrinsic::x86_sse_cmp_ss:
2803  case Intrinsic::x86_sse2_cmp_sd:
2804  case Intrinsic::x86_sse_comieq_ss:
2805  case Intrinsic::x86_sse_comilt_ss:
2806  case Intrinsic::x86_sse_comile_ss:
2807  case Intrinsic::x86_sse_comigt_ss:
2808  case Intrinsic::x86_sse_comige_ss:
2809  case Intrinsic::x86_sse_comineq_ss:
2810  case Intrinsic::x86_sse_ucomieq_ss:
2811  case Intrinsic::x86_sse_ucomilt_ss:
2812  case Intrinsic::x86_sse_ucomile_ss:
2813  case Intrinsic::x86_sse_ucomigt_ss:
2814  case Intrinsic::x86_sse_ucomige_ss:
2815  case Intrinsic::x86_sse_ucomineq_ss:
2816  case Intrinsic::x86_sse2_comieq_sd:
2817  case Intrinsic::x86_sse2_comilt_sd:
2818  case Intrinsic::x86_sse2_comile_sd:
2819  case Intrinsic::x86_sse2_comigt_sd:
2820  case Intrinsic::x86_sse2_comige_sd:
2821  case Intrinsic::x86_sse2_comineq_sd:
2822  case Intrinsic::x86_sse2_ucomieq_sd:
2823  case Intrinsic::x86_sse2_ucomilt_sd:
2824  case Intrinsic::x86_sse2_ucomile_sd:
2825  case Intrinsic::x86_sse2_ucomigt_sd:
2826  case Intrinsic::x86_sse2_ucomige_sd:
2827  case Intrinsic::x86_sse2_ucomineq_sd:
2828  handleVectorCompareScalarIntrinsic(I);
2829  break;
2830 
2831  case Intrinsic::x86_sse_cmp_ps:
2832  case Intrinsic::x86_sse2_cmp_pd:
2833  // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function
2834  // generates reasonably looking IR that fails in the backend with "Do not
2835  // know how to split the result of this operator!".
2836  handleVectorComparePackedIntrinsic(I);
2837  break;
2838 
2839  default:
2840  if (!handleUnknownIntrinsic(I))
2841  visitInstruction(I);
2842  break;
2843  }
2844  }
2845 
2846  void visitCallSite(CallSite CS) {
2847  Instruction &I = *CS.getInstruction();
2848  assert(!I.getMetadata("nosanitize"));
2849  assert((CS.isCall() || CS.isInvoke()) && "Unknown type of CallSite");
2850  if (CS.isCall()) {
2851  CallInst *Call = cast<CallInst>(&I);
2852 
2853  // For inline asm, do the usual thing: check argument shadow and mark all
2854  // outputs as clean. Note that any side effects of the inline asm that are
2855  // not immediately visible in its constraints are not handled.
2856  if (Call->isInlineAsm()) {
2857  if (ClHandleAsmConservative)
2858  visitAsmInstruction(I);
2859  else
2860  visitInstruction(I);
2861  return;
2862  }
2863 
2864  assert(!isa<IntrinsicInst>(&I) && "intrinsics are handled elsewhere");
2865 
2866  // We are going to insert code that relies on the fact that the callee
2867  // will become a non-readonly function after it is instrumented by us. To
2868  // prevent this code from being optimized out, mark that function
2869  // non-readonly in advance.
2870  if (Function *Func = Call->getCalledFunction()) {
2871  // Clear out readonly/readnone attributes.
2872  AttrBuilder B;
2873  B.addAttribute(Attribute::ReadOnly)
2874  .addAttribute(Attribute::ReadNone);
2875  Func->removeAttributes(AttributeList::FunctionIndex, B);
2876  }
2877 
2878  maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI);
2879  }
2880  IRBuilder<> IRB(&I);
2881 
2882  unsigned ArgOffset = 0;
2883  LLVM_DEBUG(dbgs() << " CallSite: " << I << "\n");
2884  for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
2885  ArgIt != End; ++ArgIt) {
2886  Value *A = *ArgIt;
2887  unsigned i = ArgIt - CS.arg_begin();
2888  if (!A->getType()->isSized()) {
2889  LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << I << "\n");
2890  continue;
2891  }
2892  unsigned Size = 0;
2893  Value *Store = nullptr;
2894  // Compute the Shadow for arg even if it is ByVal, because
2895  // in that case getShadow() will copy the actual arg shadow to
2896  // __msan_param_tls.
2897  Value *ArgShadow = getShadow(A);
2898  Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset);
2899  LLVM_DEBUG(dbgs() << " Arg#" << i << ": " << *A
2900  << " Shadow: " << *ArgShadow << "\n");
2901  bool ArgIsInitialized = false;
2902  const DataLayout &DL = F.getParent()->getDataLayout();
2903  if (CS.paramHasAttr(i, Attribute::ByVal)) {
2904  assert(A->getType()->isPointerTy() &&
2905  "ByVal argument is not a pointer!");
2906  Size = DL.getTypeAllocSize(A->getType()->getPointerElementType());
2907  if (ArgOffset + Size > kParamTLSSize) break;
2908  unsigned ParamAlignment = CS.getParamAlignment(i);
2909  unsigned Alignment = std::min(ParamAlignment, kShadowTLSAlignment);
2910  Value *AShadowPtr = getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
2911  Alignment, /*isStore*/ false)
2912  .first;
2913 
2914  Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr,
2915  Alignment, Size);
2916  } else {
2917  Size = DL.getTypeAllocSize(A->getType());
2918  if (ArgOffset + Size > kParamTLSSize) break;
2919  Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase,
2920  kShadowTLSAlignment);
2921  Constant *Cst = dyn_cast<Constant>(ArgShadow);
2922  if (Cst && Cst->isNullValue()) ArgIsInitialized = true;
2923  }
2924  if (MS.TrackOrigins && !ArgIsInitialized)
2925  IRB.CreateStore(getOrigin(A),
2926  getOriginPtrForArgument(A, IRB, ArgOffset));
2927  (void)Store;
2928  assert(Size != 0 && Store != nullptr);
2929  LLVM_DEBUG(dbgs() << " Param:" << *Store << "\n");
2930  ArgOffset += alignTo(Size, 8);
2931  }
2932  LLVM_DEBUG(dbgs() << " done with call args\n");
2933 
2934  FunctionType *FT =
2935  cast<FunctionType>(CS.getCalledValue()->getType()->getContainedType(0));
2936  if (FT->isVarArg()) {
2937  VAHelper->visitCallSite(CS, IRB);
2938  }
2939 
2940  // Now, get the shadow for the RetVal.
2941  if (!I.getType()->isSized()) return;
2942  // Don't emit the epilogue for musttail call returns.
2943  if (CS.isCall() && cast<CallInst>(&I)->isMustTailCall()) return;
2944  IRBuilder<> IRBBefore(&I);
2945  // Until we have full dynamic coverage, make sure the retval shadow is 0.
2946  Value *Base = getShadowPtrForRetval(&I, IRBBefore);
2947  IRBBefore.CreateAlignedStore(getCleanShadow(&I), Base, kShadowTLSAlignment);
2948  BasicBlock::iterator NextInsn;
2949  if (CS.isCall()) {
2950  NextInsn = ++I.getIterator();
2951  assert(NextInsn != I.getParent()->end());
2952  } else {
2953  BasicBlock *NormalDest = cast<InvokeInst>(&I)->getNormalDest();
2954  if (!NormalDest->getSinglePredecessor()) {
2955  // FIXME: this case is tricky, so we are just conservative here.
2956  // Perhaps we need to split the edge between this BB and NormalDest,
2957  // but a naive attempt to use SplitEdge leads to a crash.
2958  setShadow(&I, getCleanShadow(&I));
2959  setOrigin(&I, getCleanOrigin());
2960  return;
2961  }
2962  // FIXME: NextInsn is likely in a basic block that has not been visited yet.
2963  // Anything inserted there will be instrumented by MSan later!
2964  NextInsn = NormalDest->getFirstInsertionPt();
2965  assert(NextInsn != NormalDest->end() &&
2966  "Could not find insertion point for retval shadow load");
2967  }
2968  IRBuilder<> IRBAfter(&*NextInsn);
2969  Value *RetvalShadow =
2970  IRBAfter.CreateAlignedLoad(getShadowPtrForRetval(&I, IRBAfter),
2971  kShadowTLSAlignment, "_msret");
2972  setShadow(&I, RetvalShadow);
2973  if (MS.TrackOrigins)
2974  setOrigin(&I, IRBAfter.CreateLoad(getOriginPtrForRetval(IRBAfter)));
2975  }
2976 
2977  bool isAMustTailRetVal(Value *RetVal) {
2978  if (auto *I = dyn_cast<BitCastInst>(RetVal)) {
2979  RetVal = I->getOperand(0);
2980  }
2981  if (auto *I = dyn_cast<CallInst>(RetVal)) {
2982  return I->isMustTailCall();
2983  }
2984  return false;
2985  }
2986 
2987  void visitReturnInst(ReturnInst &I) {
2988  IRBuilder<> IRB(&I);
2989  Value *RetVal = I.getReturnValue();
2990  if (!RetVal) return;
2991  // Don't emit the epilogue for musttail call returns.
2992  if (isAMustTailRetVal(RetVal)) return;
2993  Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB);
2994  if (CheckReturnValue) {
2995  insertShadowCheck(RetVal, &I);
2996  Value *Shadow = getCleanShadow(RetVal);
2997  IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
2998  } else {
2999  Value *Shadow = getShadow(RetVal);
3000  IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
3001  if (MS.TrackOrigins)
3002  IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB));
3003  }
3004  }
3005 
3006  void visitPHINode(PHINode &I) {
3007  IRBuilder<> IRB(&I);
3008  if (!PropagateShadow) {
3009  setShadow(&I, getCleanShadow(&I));
3010  setOrigin(&I, getCleanOrigin());
3011  return;
3012  }
3013 
3014  ShadowPHINodes.push_back(&I);
3015  setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(),
3016  "_msphi_s"));
3017  if (MS.TrackOrigins)
3018  setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(),
3019  "_msphi_o"));
3020  }
3021 
3022  void visitAllocaInst(AllocaInst &I) {
3023  setShadow(&I, getCleanShadow(&I));
3024  setOrigin(&I, getCleanOrigin());
3025  IRBuilder<> IRB(I.getNextNode());
3026  const DataLayout &DL = F.getParent()->getDataLayout();
3027  uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType());
3028  Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize);
3029  if (I.isArrayAllocation())
3030  Len = IRB.CreateMul(Len, I.getArraySize());
3031  if (PoisonStack && ClPoisonStackWithCall) {
3032  IRB.CreateCall(MS.MsanPoisonStackFn,
3033  {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
3034  } else {
3035  Value *ShadowBase = getShadowOriginPtr(&I, IRB, IRB.getInt8Ty(),
3036  I.getAlignment(), /*isStore*/ true)
3037  .first;
3038 
3039  Value *PoisonValue = IRB.getInt8(PoisonStack ? ClPoisonStackPattern : 0);
3040  IRB.CreateMemSet(ShadowBase, PoisonValue, Len, I.getAlignment());
3041  }
3042 
3043  if (PoisonStack && MS.TrackOrigins) {
3044  SmallString<2048> StackDescriptionStorage;
3045  raw_svector_ostream StackDescription(StackDescriptionStorage);
3046  // We create a string with a description of the stack allocation and
3047  // pass it into __msan_set_alloca_origin.
3048  // It will be printed by the run-time if stack-originated UMR is found.
3049  // The first 4 bytes of the string are set to '----' and will be replaced
3050  // by the runtime with the origin id at the first call.
3051  StackDescription << "----" << I.getName() << "@" << F.getName();
3052  Value *Descr =
3053  createPrivateNonConstGlobalForString(*F.getParent(),
3054  StackDescription.str());
3055 
3056  IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn,
3057  {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
3058  IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()),
3059  IRB.CreatePointerCast(&F, MS.IntptrTy)});
3060  }
3061  }
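// A worked example with hypothetical values: for '%p = alloca [4 x i32], i64 %n',
// getTypeAllocSize([4 x i32]) is 16, so Len = 16 * %n. That many shadow bytes
// are either handed to the MsanPoisonStackFn runtime call or memset to the
// msan-poison-stack-pattern byte (0xff by default) at the shadow of %p.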
3062 
3063  void visitSelectInst(SelectInst& I) {
3064  IRBuilder<> IRB(&I);
3065  // a = select b, c, d
3066  Value *B = I.getCondition();
3067  Value *C = I.getTrueValue();
3068  Value *D = I.getFalseValue();
3069  Value *Sb = getShadow(B);
3070  Value *Sc = getShadow(C);
3071  Value *Sd = getShadow(D);
3072 
3073  // Result shadow if condition shadow is 0.
3074  Value *Sa0 = IRB.CreateSelect(B, Sc, Sd);
3075  Value *Sa1;
3076  if (I.getType()->isAggregateType()) {
3077  // To avoid "sign extending" i1 to an arbitrary aggregate type, we just do
3078  // an extra "select". This results in much more compact IR.
3079  // Sa = select Sb, poisoned, (select b, Sc, Sd)
3080  Sa1 = getPoisonedShadow(getShadowTy(I.getType()));
3081  } else {
3082  // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ]
3083  // If Sb (condition is poisoned), look for bits in c and d that are equal
3084  // and both unpoisoned.
3085  // If !Sb (condition is unpoisoned), simply pick one of Sc and Sd.
3086 
3087  // Cast arguments to shadow-compatible type.
3088  C = CreateAppToShadowCast(IRB, C);
3089  D = CreateAppToShadowCast(IRB, D);
3090 
3091  // Result shadow if condition shadow is 1.
3092  Sa1 = IRB.CreateOr(IRB.CreateXor(C, D), IRB.CreateOr(Sc, Sd));
3093  }
3094  Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select");
3095  setShadow(&I, Sa);
3096  if (MS.TrackOrigins) {
3097  // Origins are always i32, so any vector conditions must be flattened.
3098  // FIXME: consider tracking vector origins for app vectors?
3099  if (B->getType()->isVectorTy()) {
3100  Type *FlatTy = getShadowTyNoVec(B->getType());
3101  B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy),
3102  ConstantInt::getNullValue(FlatTy));
3103  Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy),
3104  ConstantInt::getNullValue(FlatTy));
3105  }
3106  // a = select b, c, d
3107  // Oa = Sb ? Ob : (b ? Oc : Od)
3108  setOrigin(
3109  &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()),
3110  IRB.CreateSelect(B, getOrigin(I.getTrueValue()),
3111  getOrigin(I.getFalseValue()))));
3112  }
3113  }
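// A worked example of the non-aggregate rule on hypothetical 4-bit values:
//   c = 0b1100, d = 0b1010, Sc = 0b0001, Sd = 0b0000
//   Sa1 = (c ^ d) | Sc | Sd = 0b0110 | 0b0001 = 0b0111
// Bit 3 of Sa1 stays clean: c and d agree there and both shadows are clean,
// so that result bit is known even when the condition itself is poisoned.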
3114 
3115  void visitLandingPadInst(LandingPadInst &I) {
3116  // Do nothing.
3117  // See https://github.com/google/sanitizers/issues/504
3118  setShadow(&I, getCleanShadow(&I));
3119  setOrigin(&I, getCleanOrigin());
3120  }
3121 
3122  void visitCatchSwitchInst(CatchSwitchInst &I) {
3123  setShadow(&I, getCleanShadow(&I));
3124  setOrigin(&I, getCleanOrigin());
3125  }
3126 
3127  void visitFuncletPadInst(FuncletPadInst &I) {
3128  setShadow(&I, getCleanShadow(&I));
3129  setOrigin(&I, getCleanOrigin());
3130  }
3131 
3132  void visitGetElementPtrInst(GetElementPtrInst &I) {
3133  handleShadowOr(I);
3134  }
3135 
3136  void visitExtractValueInst(ExtractValueInst &I) {
3137  IRBuilder<> IRB(&I);
3138  Value *Agg = I.getAggregateOperand();
3139  LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n");
3140  Value *AggShadow = getShadow(Agg);
3141  LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n");
3142  Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices());
3143  LLVM_DEBUG(dbgs() << " ResShadow: " << *ResShadow << "\n");
3144  setShadow(&I, ResShadow);
3145  setOriginForNaryOp(I);
3146  }
3147 
3148  void visitInsertValueInst(InsertValueInst &I) {
3149  IRBuilder<> IRB(&I);
3150  LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n");
3151  Value *AggShadow = getShadow(I.getAggregateOperand());
3152  Value *InsShadow = getShadow(I.getInsertedValueOperand());
3153  LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n");
3154  LLVM_DEBUG(dbgs() << " InsShadow: " << *InsShadow << "\n");
3155  Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices());
3156  LLVM_DEBUG(dbgs() << " Res: " << *Res << "\n");
3157  setShadow(&I, Res);
3158  setOriginForNaryOp(I);
3159  }
3160 
3161  void dumpInst(Instruction &I) {
3162  if (CallInst *CI = dyn_cast<CallInst>(&I)) {
3163  errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n";
3164  } else {
3165  errs() << "ZZZ " << I.getOpcodeName() << "\n";
3166  }
3167  errs() << "QQQ " << I << "\n";
3168  }
3169 
3170  void visitResumeInst(ResumeInst &I) {
3171  LLVM_DEBUG(dbgs() << "Resume: " << I << "\n");
3172  // Nothing to do here.
3173  }
3174 
3175  void visitCleanupReturnInst(CleanupReturnInst &CRI) {
3176  LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n");
3177  // Nothing to do here.
3178  }
3179 
3180  void visitCatchReturnInst(CatchReturnInst &CRI) {
3181  LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n");
3182  // Nothing to do here.
3183  }
3184 
3185  void visitAsmInstruction(Instruction &I) {
3186  // Conservative inline assembly handling: check for poisoned shadow of
3187  // asm() arguments, then unpoison the result and all the memory locations
3188  // pointed to by those arguments.
3189  CallInst *CI = dyn_cast<CallInst>(&I);
3190 
3191  for (size_t i = 0, n = CI->getNumOperands(); i < n; i++) {
3192  Value *Operand = CI->getOperand(i);
3193  if (Operand->getType()->isSized())
3194  insertShadowCheck(Operand, &I);
3195  }
3196  setShadow(&I, getCleanShadow(&I));
3197  setOrigin(&I, getCleanOrigin());
3198  IRBuilder<> IRB(&I);
3199  IRB.SetInsertPoint(I.getNextNode());
3200  for (size_t i = 0, n = CI->getNumOperands(); i < n; i++) {
3201  Value *Operand = CI->getOperand(i);
3202  Type *OpType = Operand->getType();
3203  if (!OpType->isPointerTy())
3204  continue;
3205  Type *ElType = OpType->getPointerElementType();
3206  if (!ElType->isSized())
3207  continue;
3208  Value *ShadowPtr, *OriginPtr;
3209  std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
3210  Operand, IRB, ElType, /*Alignment*/ 1, /*isStore*/ true);
3211  Value *CShadow = getCleanShadow(ElType);
3212  IRB.CreateStore(
3213  CShadow,
3214  IRB.CreatePointerCast(ShadowPtr, CShadow->getType()->getPointerTo()));
3215  }
3216  }
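// For illustration, given a hypothetical 'asm("..." : "=r"(x) : "r"(y), "m"(*p))'
// lowered to a call: the shadow of every sized operand is checked before the
// asm, and afterwards the result and the memory pointed to by pointer
// operands such as p are marked clean, since the pass cannot model what the
// assembly actually wrote.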
3217 
3218  void visitInstruction(Instruction &I) {
3219  // Everything else: stop propagating and check for poisoned shadow.
3220  if (ClDumpStrictInstructions)
3221  dumpInst(I);
3222  LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n");
3223  for (size_t i = 0, n = I.getNumOperands(); i < n; i++) {
3224  Value *Operand = I.getOperand(i);
3225  if (Operand->getType()->isSized())
3226  insertShadowCheck(Operand, &I);
3227  }
3228  setShadow(&I, getCleanShadow(&I));
3229  setOrigin(&I, getCleanOrigin());
3230  }
3231 };
3232 
3233 /// AMD64-specific implementation of VarArgHelper.
3234 struct VarArgAMD64Helper : public VarArgHelper {
3235  // An unfortunate workaround for asymmetric lowering of va_arg stuff.
3236  // See a comment in visitCallSite for more details.
3237  static const unsigned AMD64GpEndOffset = 48; // AMD64 ABI Draft 0.99.6 p3.5.7
3238  static const unsigned AMD64FpEndOffset = 176;
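// These constants follow the System V AMD64 register save area layout:
// 6 general-purpose argument registers * 8 bytes = 48, then 8 vector
// registers * 16 bytes = 128, so the FP area ends at 48 + 128 = 176.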
3239 
3240  Function &F;
3241  MemorySanitizer &MS;
3242  MemorySanitizerVisitor &MSV;
3243  Value *VAArgTLSCopy = nullptr;
3244  Value *VAArgOverflowSize = nullptr;
3245 
3246  SmallVector<CallInst*, 16> VAStartInstrumentationList;
3247 
3248  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };
3249 
3250  VarArgAMD64Helper(Function &F, MemorySanitizer &MS,
3251  MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}
3252 
3253  ArgKind classifyArgument(Value* arg) {
3254  // A very rough approximation of X86_64 argument classification rules.
3255  Type *T = arg->getType();
3256  if (T->isFPOrFPVectorTy() || T->isX86_MMXTy())
3257  return AK_FloatingPoint;
3258  if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
3259  return AK_GeneralPurpose;
3260  if (T->isPointerTy())
3261  return AK_GeneralPurpose;
3262  return AK_Memory;
3263  }
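// For example, under this approximation a hypothetical double or <4 x float>
// argument classifies as AK_FloatingPoint, an i64 or a pointer as
// AK_GeneralPurpose, and an i128 or a by-value struct as AK_Memory.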
3264 
3265  // For VarArg functions, store the argument shadow in an ABI-specific format
3266  // that corresponds to va_list layout.
3267  // We do this because Clang lowers va_arg in the frontend, and this pass
3268  // only sees the low-level code that deals with va_list internals.
3269  // A much easier alternative (provided that Clang emits va_arg instructions)
3270  // would have been to associate each live instance of va_list with a copy of
3271  // MSanParamTLS, and extract the shadow at each va_arg() call, in
3272  // argument-list order.
3273  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
3274  unsigned GpOffset = 0;
3275  unsigned FpOffset = AMD64GpEndOffset;
3276  unsigned OverflowOffset = AMD64FpEndOffset;
3277  const DataLayout &DL = F.getParent()->getDataLayout();
3278  for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
3279  ArgIt != End; ++ArgIt) {
3280  Value *A = *ArgIt;
3281  unsigned ArgNo = CS.getArgumentNo(ArgIt);
3282  bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
3283  bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
3284  if (IsByVal) {
3285  // ByVal arguments always go to the overflow area.
3286  // Fixed arguments passed through the overflow area will be stepped
3287  // over by va_start, so don't count them towards the offset.
3288  if (IsFixed)
3289  continue;
3290  assert(A->getType()->isPointerTy());
3291  Type *RealTy = A->getType()->getPointerElementType();
3292  uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
3293  Value *ShadowBase =
3294  getShadowPtrForVAArgument(RealTy, IRB, OverflowOffset);
3295  OverflowOffset += alignTo(ArgSize, 8);
3296  Value *ShadowPtr, *OriginPtr;
3297  std::tie(ShadowPtr, OriginPtr) =
3298  MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), kShadowTLSAlignment,
3299  /*isStore*/ false);
3300 
3301  IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr,
3302  kShadowTLSAlignment, ArgSize);
3303  } else {
3304  ArgKind AK = classifyArgument(A);
3305  if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset)
3306  AK = AK_Memory;
3307  if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset)
3308  AK = AK_Memory;
3309  Value *ShadowBase;
3310  switch (AK) {
3311  case AK_GeneralPurpose:
3312  ShadowBase = getShadowPtrForVAArgument(A->getType(), IRB, GpOffset);
3313  GpOffset += 8;
3314  break;
3315  case AK_FloatingPoint:
3316  ShadowBase = getShadowPtrForVAArgument(A->getType(), IRB, FpOffset);
3317  FpOffset += 16;
3318  break;
3319  case AK_Memory:
3320  if (IsFixed)
3321  continue;
3322  uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
3323  ShadowBase =
3324  getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset);
3325  OverflowOffset += alignTo(ArgSize, 8);
3326  }
3327  // Take fixed arguments into account for GpOffset and FpOffset,
3328  // but don't actually store shadows for them.
3329  if (IsFixed)
3330  continue;
3331  IRB.CreateAlignedStore(MSV.getShadow(A), ShadowBase,
3332  kShadowTLSAlignment);
3333  }
3334  }
3335  Constant *OverflowSize =
3336  ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset);
3337  IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
3338  }
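// A worked example on a hypothetical call 'printf("%d %f", i, d)': the fixed
// format string advances GpOffset from 0 to 8 without a store, the shadow of
// i is stored at va_arg TLS offset 8 (the second GP slot), and the shadow of
// d is stored at offset 48 (the first FP slot). Nothing spills past the
// register area, so VAArgOverflowSizeTLS receives 0.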
3339 
3340  /// Compute the shadow address for a given va_arg.
3341  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
3342  int ArgOffset) {
3343  Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
3344  Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
3345  return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
3346  "_msarg");
3347  }
3348 
3349  void unpoisonVAListTagForInst(IntrinsicInst &I) {
3350  IRBuilder<> IRB(&I);
3351  Value *VAListTag = I.getArgOperand(0);
3352  Value *ShadowPtr, *OriginPtr;
3353  unsigned Alignment = 8;
3354  std::tie(ShadowPtr, OriginPtr) =
3355  MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
3356  /*isStore*/ true);
3357 
3358  // Unpoison the whole __va_list_tag.
3359  // FIXME: magic ABI constants.
3360  IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
3361  /* size */ 24, Alignment, false);
3362  // We shouldn't need to zero out the origins, as they're only checked for
3363  // nonzero shadow.
3364  }
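// For reference, the 24 bytes cover the System V AMD64 va_list layout:
//   struct __va_list_tag {
//     unsigned gp_offset;      // 4 bytes
//     unsigned fp_offset;      // 4 bytes
//     void *overflow_arg_area; // 8 bytes
//     void *reg_save_area;     // 8 bytes
//   };
// The offsets 8 and 16 used in finalizeInstrumentation below reach the
// overflow_arg_area and reg_save_area fields of this struct.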
3365 
3366  void visitVAStartInst(VAStartInst &I) override {
3367  if (F.getCallingConv() == CallingConv::Win64)
3368  return;
3369  VAStartInstrumentationList.push_back(&I);
3370  unpoisonVAListTagForInst(I);
3371  }
3372 
3373  void visitVACopyInst(VACopyInst &I) override {
3374  if (F.getCallingConv() == CallingConv::Win64) return;
3375  unpoisonVAListTagForInst(I);
3376  }
3377 
3378  void finalizeInstrumentation() override {
3379  assert(!VAArgOverflowSize && !VAArgTLSCopy &&
3380  "finalizeInstrumentation called twice");
3381  if (!VAStartInstrumentationList.empty()) {
3382  // If there is a va_start in this function, make a backup copy of
3383  // va_arg_tls somewhere in the function entry block.
3384  IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
3385  VAArgOverflowSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
3386  Value *CopySize =
3387  IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset),
3388  VAArgOverflowSize);
3389  VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
3390  IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
3391  }
3392 
3393  // Instrument va_start.
3394  // Copy va_list shadow from the backup copy of the TLS contents.
3395  for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
3396  CallInst *OrigInst = VAStartInstrumentationList[i];
3397  IRBuilder<> IRB(OrigInst->getNextNode());
3398  Value *VAListTag = OrigInst->getArgOperand(0);
3399 
3400  Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
3401  IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
3402  ConstantInt::get(MS.IntptrTy, 16)),
3403  Type::getInt64PtrTy(*MS.C));
3404  Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrPtr);
3405  Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
3406  unsigned Alignment = 16;
3407  std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
3408  MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
3409  Alignment, /*isStore*/ true);
3410  IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
3411  AMD64FpEndOffset);
3412  Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
3413  IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
3414  ConstantInt::get(MS.IntptrTy, 8)),
3415  Type::getInt64PtrTy(*MS.C));
3416  Value *OverflowArgAreaPtr = IRB.CreateLoad(OverflowArgAreaPtrPtr);
3417  Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
3418  std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
3419  MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
3420  Alignment, /*isStore*/ true);
3421  Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
3422  AMD64FpEndOffset);
3423  IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
3424  VAArgOverflowSize);
3425  }
3426  }
3427 };
3428 
3429 /// MIPS64-specific implementation of VarArgHelper.
3430 struct VarArgMIPS64Helper : public VarArgHelper {
3431  Function &F;
3432  MemorySanitizer &MS;
3433  MemorySanitizerVisitor &MSV;
3434  Value *VAArgTLSCopy = nullptr;
3435  Value *VAArgSize = nullptr;
3436 
3437  SmallVector<CallInst*, 16> VAStartInstrumentationList;
3438 
3439  VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
3440  MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}
3441 
3442  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
3443  unsigned VAArgOffset = 0;
3444  const DataLayout &DL = F.getParent()->getDataLayout();
3445  for (CallSite::arg_iterator ArgIt = CS.arg_begin() +
3446  CS.getFunctionType()->getNumParams(), End = CS.arg_end();
3447  ArgIt != End; ++ArgIt) {
3448  Triple TargetTriple(F.getParent()->getTargetTriple());
3449  Value *A = *ArgIt;
3450  Value *Base;
3451  uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
3452  if (TargetTriple.getArch() == Triple::mips64) {
3453  // Adjust the shadow for arguments with size < 8 to match the placement
3454  // of bits in a big-endian system.
3455  if (ArgSize < 8)
3456  VAArgOffset += (8 - ArgSize);
3457  }
3458  Base = getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset);
3459  VAArgOffset += ArgSize;
3460  VAArgOffset = alignTo(VAArgOffset, 8);
3461  IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
3462  }
3463 
3464  Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
3465  // We reuse VAArgOverflowSizeTLS to hold the total size of all varargs
3466  // (acting as VAArgSizeTLS), which avoids adding a new class member.
3467  IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
3468  }
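// A worked example of the big-endian adjustment: a hypothetical variadic i32
// on mips64 has ArgSize = 4, so VAArgOffset is first advanced by 8 - 4 = 4
// and the shadow lands in the upper-address half of the 8-byte slot, matching
// where a 4-byte value's bits actually sit within a big-endian 64-bit slot.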
3469 
3470  /// Compute the shadow address for a given va_arg.
3471  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
3472  int ArgOffset) {
3473  Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
3474  Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
3475  return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
3476  "_msarg");
3477  }
3478 
3479  void visitVAStartInst(VAStartInst &I) override {
3480  IRBuilder<> IRB(&I);
3481  VAStartInstrumentationList.push_back(&I);
3482  Value *VAListTag = I.getArgOperand(0);
3483  Value *ShadowPtr, *OriginPtr;
3484  unsigned Alignment = 8;
3485  std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
3486  VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
3487  IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
3488  /* size */ 8, Alignment, false);
3489  }
3490 
3491  void visitVACopyInst(VACopyInst &I) override {
3492  IRBuilder<> IRB(&I);
3493  VAStartInstrumentationList.push_back(&I);
3494  Value *VAListTag = I.getArgOperand(0);
3495  Value *ShadowPtr, *OriginPtr;
3496  unsigned Alignment = 8;
3497  std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
3498  VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
3499  IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
3500  /* size */ 8, Alignment, false);
3501  }
3502 
3503  void finalizeInstrumentation() override {
3504  assert(!VAArgSize && !VAArgTLSCopy &&
3505  "finalizeInstrumentation called twice");
3506  IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
3507  VAArgSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
3508  Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
3509  VAArgSize);
3510 
3511  if (!VAStartInstrumentationList.empty()) {
3512  // If there is a va_start in this function, make a backup copy of
3513  // va_arg_tls somewhere in the function entry block.
3514  VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
3515  IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
3516  }
3517 
3518  // Instrument va_start.
3519  // Copy va_list shadow from the backup copy of the TLS contents.
3520  for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
3521  CallInst *OrigInst = VAStartInstrumentationList[i];
3522  IRBuilder<> IRB(OrigInst->getNextNode());
3523  Value *VAListTag = OrigInst->getArgOperand(0);
3524  Value *RegSaveAreaPtrPtr =
3525  IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
3526  Type::getInt64PtrTy(*MS.C));
3527  Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrPtr);
3528  Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
3529  unsigned Alignment = 8;
3530  std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
3531  MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
3532  Alignment, /*isStore*/ true);
3533  IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
3534  CopySize);
3535  }
3536  }
3537 };
3538 
3539 /// AArch64-specific implementation of VarArgHelper.
3540 struct VarArgAArch64Helper : public VarArgHelper {
3541  static const unsigned kAArch64GrArgSize = 64;
3542  static const unsigned kAArch64VrArgSize = 128;
3543 
3544  static const unsigned AArch64GrBegOffset = 0;
3545  static const unsigned AArch64GrEndOffset = kAArch64GrArgSize;
3546  // Make VR space aligned to 16 bytes.
3547  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
3548  static const unsigned AArch64VrEndOffset = AArch64VrBegOffset
3549  + kAArch64VrArgSize;
3550  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;
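// In other words: 8 general registers (x0-x7) * 8 bytes = 64 for the GR
// area, then 8 vector registers (v0-v7) * 16 bytes = 128 for the VR area,
// so the register portion of the va_arg TLS ends at 64 + 128 = 192.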
3551 
3552  Function &F;
3553  MemorySanitizer &MS;
3554  MemorySanitizerVisitor &MSV;
3555  Value *VAArgTLSCopy = nullptr;
3556  Value *VAArgOverflowSize = nullptr;
3557 
3558  SmallVector<CallInst*, 16> VAStartInstrumentationList;
3559 
3560  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };
3561 
3562  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
3563  MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}
3564 
3565  ArgKind classifyArgument(Value* arg) {
3566  Type *T = arg->getType();
3567  if (T->isFPOrFPVectorTy())
3568  return AK_FloatingPoint;
3569  if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
3570  || (T->isPointerTy()))
3571  return AK_GeneralPurpose;
3572  return AK_Memory;
3573  }
3574 
3575  // The instrumentation stores the argument shadow in a non-ABI-specific
3576  // format because it does not know which argument is named (since Clang,
3577  // as in the x86_64 case, lowers va_arg in the frontend and this pass only
3578  // sees the low-level code that deals with va_list internals).
3579  // The first eight GR registers are saved in the first 64 bytes of the
3580  // va_arg TLS array, followed by the first 8 FP/SIMD registers, and then
3581  // the remaining arguments.
3582  // Using a constant offset within the va_arg TLS array allows a fast copy
3583  // in finalizeInstrumentation.
3584  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
3585  unsigned GrOffset = AArch64GrBegOffset;
3586  unsigned VrOffset = AArch64VrBegOffset;
3587  unsigned OverflowOffset = AArch64VAEndOffset;
3588 
3589  const DataLayout &DL = F.getParent()->getDataLayout();
3590  for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
3591  ArgIt != End; ++ArgIt) {
3592  Value *A = *ArgIt;
3593  unsigned ArgNo = CS.getArgumentNo(ArgIt);
3594  bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
3595  ArgKind AK = classifyArgument(A);
3596  if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
3597  AK = AK_Memory;
3598  if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
3599  AK = AK_Memory;
3600  Value *Base;
3601  switch (AK) {
3602  case AK_GeneralPurpose:
3603  Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset);
3604  GrOffset += 8;
3605  break;
3606  case AK_FloatingPoint:
3607  Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset);
3608  VrOffset += 16;
3609  break;
3610  case AK_Memory:
3611  // Don't count fixed arguments in the overflow area - va_start will
3612  // skip right over them.
3613  if (IsFixed)
3614  continue;
3615  uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
3616  Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset);
3617  OverflowOffset += alignTo(ArgSize, 8);
3618  break;
3619  }
3620  // Count Gp/Vr fixed arguments to their respective offsets, but don't
3621  // bother to actually store a shadow.
3622  if (IsFixed)
3623  continue;
3624  IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
3625  }
3626  Constant *OverflowSize =
3627  ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
3628  IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
3629  }
3630 
3631  /// Compute the shadow address for a given va_arg.
3632  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
3633  int ArgOffset) {
3634  Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
3635  Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
3636  return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
3637  "_msarg");
3638  }
3639 
3640  void visitVAStartInst(VAStartInst &I) override {
3641  IRBuilder<> IRB(&I);
3642  VAStartInstrumentationList.push_back(&I);
3643  Value *VAListTag = I.getArgOperand(0);
3644  Value *ShadowPtr, *OriginPtr;
3645  unsigned Alignment = 8;
3646  std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
3647  VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
3648  IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
3649  /* size */ 32, Alignment, false);
3650  }
3651 
3652  void visitVACopyInst(VACopyInst &I) override {
3653  IRBuilder<> IRB(&I);
3654  VAStartInstrumentationList.push_back(&I);
3655  Value *VAListTag = I.getArgOperand(0);
3656  Value *ShadowPtr, *OriginPtr;
3657  unsigned Alignment = 8;
3658  std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
3659  VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
3660  IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
3661  /* size */ 32, Alignment, false);
3662  }
3663 
3664  // Retrieve a va_list field of 'void*' size.
3665  Value* getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) {
3666  Value *SaveAreaPtrPtr =
3667  IRB.CreateIntToPtr(
3668  IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
3669  ConstantInt::get(MS.IntptrTy, offset)),
3670  Type::getInt64PtrTy(*MS.C));
3671  return IRB.CreateLoad(SaveAreaPtrPtr);
3672  }
3673 
3674  // Retrieve a va_list field of 'int' size.
3675  Value* getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) {
3676  Value *SaveAreaPtr =
3677  IRB.CreateIntToPtr(
3678  IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
3679  ConstantInt::get(MS.IntptrTy, offset)),
3680  Type::getInt32PtrTy(*MS.C));
3681  Value *SaveArea32 = IRB.CreateLoad(SaveAreaPtr);
3682  return IRB.CreateSExt(SaveArea32, MS.IntptrTy);
3683  }
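// For reference, the offsets used below index into the AAPCS64 va_list:
//   struct __va_list {
//     void *__stack;   // offset 0
//     void *__gr_top;  // offset 8
//     void *__vr_top;  // offset 16
//     int __gr_offs;   // offset 24
//     int __vr_offs;   // offset 28
//   };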
3684 
3685  void finalizeInstrumentation() override {
3686  assert(!VAArgOverflowSize && !VAArgTLSCopy &&
3687  "finalizeInstrumentation called twice");
3688  if (!VAStartInstrumentationList.empty()) {
3689  // If there is a va_start in this function, make a backup copy of
3690  // va_arg_tls somewhere in the function entry block.
3691  IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
3692  VAArgOverflowSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
3693  Value *CopySize =
3694  IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset),
3695  VAArgOverflowSize);
3696  VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
3697  IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
3698  }
3699 
3700  Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize);
3701  Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize);
3702 
3703  // Instrument va_start, copy va_list shadow from the backup copy of
3704  // the TLS contents.
3705  for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
3706  CallInst *OrigInst = VAStartInstrumentationList[i];
3707  IRBuilder<> IRB(OrigInst->getNextNode());
3708 
3709  Value *VAListTag = OrigInst->getArgOperand(0);
3710 
3711  // The variadic ABI for AArch64 creates two areas to save the incoming
3712  // argument registers (one for the 64-bit general registers x0-x7 and
3713  // another for the 128-bit FP/SIMD registers v0-v7).
3714  // We then need to propagate the shadow arguments to both regions,
3715  // 'va::__gr_top + va::__gr_offs' and 'va::__vr_top + va::__vr_offs'.
3716  // The remaining arguments are saved in the shadow for 'va::stack'.
3717  // One caveat is that only the non-named arguments need to be
3718  // propagated, whereas the call site instrumentation saves 'all' the
3719  // arguments. So to copy the shadow values from the va_arg TLS array
3720  // we need to adjust the offset for both the GR and VR fields based on
3721  // the __{gr,vr}_offs value (since they are stored based on incoming
3722  // named arguments).
3723 
3724  // Read the stack pointer from the va_list.
3725  Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);
3726 
3727  // Read both the __gr_top and __gr_off and add them up.
3728  Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
3729  Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);
3730 
3731  Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);
3732 
3733  // Read both the __vr_top and __vr_off and add them up.
3734  Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
3735  Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);
3736 
3737  Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);
3738 
3739  // The pass does not know how many named arguments are being used and,
3740  // at the call site, all the arguments were saved. Since __gr_offs is
3741  // defined as '0 - ((8 - named_gr) * 8)', the idea is to propagate only
3742  // the variadic arguments by skipping the shadow bytes of named arguments.
3743  Value *GrRegSaveAreaShadowPtrOff =
3744  IRB.CreateAdd(GrArgSize, GrOffSaveArea);
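// A worked example with two hypothetical named GP arguments: __gr_offs is
// 0 - ((8 - 2) * 8) = -48, so GrRegSaveAreaShadowPtrOff = 64 + (-48) = 16.
// The copy below starts 16 bytes into the GR portion of VAArgTLSCopy,
// skipping exactly the two named arguments' slots, and GrCopySize is
// 64 - 16 = 48 bytes, covering the six possibly-variadic registers.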
3745 
3746  Value *GrRegSaveAreaShadowPtr =
3747  MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
3748  /*Alignment*/ 8, /*isStore*/ true)
3749  .first;
3750 
3751  Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
3752  GrRegSaveAreaShadowPtrOff);
3753  Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);
3754 
3755  IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, 8, GrSrcPtr, 8, GrCopySize);
3756 
3757  // Again, but for FP/SIMD values.
3758  Value *VrRegSaveAreaShadowPtrOff =
3759  IRB.CreateAdd(VrArgSize, VrOffSaveArea);
3760 
3761  Value *VrRegSaveAreaShadowPtr =
3762  MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
3763  /*Alignment*/ 8, /*isStore*/ true)
3764  .first;
3765 
3766  Value *VrSrcPtr = IRB.CreateInBoundsGEP(
3767  IRB.getInt8Ty(),
3768  IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
3769  IRB.getInt32(AArch64VrBegOffset)),
3770  VrRegSaveAreaShadowPtrOff);
3771  Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);
3772 
3773  IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, 8, VrSrcPtr, 8, VrCopySize);
3774 
3775  // And finally for remaining arguments.
3776  Value *StackSaveAreaShadowPtr =
3777  MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(),
3778  /*Alignment*/ 16, /*isStore*/ true)
3779  .first;
3780 
3781  Value *StackSrcPtr =
3782  IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
3783  IRB.getInt32(AArch64VAEndOffset));
3784 
3785  IRB.CreateMemCpy(StackSaveAreaShadowPtr, 16, StackSrcPtr, 16,
3786  VAArgOverflowSize);
3787  }
3788  }
3789 };
3790 
3791 /// PowerPC64-specific implementation of VarArgHelper.
3792 struct VarArgPowerPC64Helper : public VarArgHelper {
3793  Function &F;
3794  MemorySanitizer &MS;
3795  MemorySanitizerVisitor &MSV;
3796  Value *VAArgTLSCopy = nullptr;
3797  Value *VAArgSize = nullptr;
3798 
3799  SmallVector<CallInst*, 16> VAStartInstrumentationList;
3800 
3801  VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS,
3802  MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}
3803 
3804  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
3805  // For PowerPC, we need to deal with alignment of stack arguments -
3806  // they are mostly aligned to 8 bytes, but vectors and i128 arrays
3807  // are aligned to 16 bytes, byvals can be aligned to 8 or 16 bytes,
3808  // and QPX vectors are aligned to 32 bytes. For that reason, we
3809  // compute the current offset from the stack pointer (which is always
3810  // properly aligned) and the offset of the first vararg, then subtract them.
3811  unsigned VAArgBase;
3812  Triple TargetTriple(F.getParent()->getTargetTriple());
3813  // The parameter save area starts at 48 bytes from the frame pointer for
3814  // ABIv1, and at 32 bytes for ABIv2. This is usually determined by target
3815  // endianness, but in theory could be overridden by a function attribute.
3816  // For simplicity, we ignore it here (it'd only matter for QPX vectors).
3817  if (TargetTriple.getArch() == Triple::ppc64)
3818  VAArgBase = 48;
3819  else
3820  VAArgBase = 32;
3821  unsigned VAArgOffset = VAArgBase;
3822  const DataLayout &DL = F.getParent()->getDataLayout();
3823  for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
3824  ArgIt != End; ++ArgIt) {
3825  Value *A = *ArgIt;
3826  unsigned ArgNo = CS.getArgumentNo(ArgIt);
3827  bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
3828  bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
3829  if (IsByVal) {
3830  assert(A->getType()->isPointerTy());
3831  Type *RealTy = A->getType()->getPointerElementType();
3832  uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
3833  uint64_t ArgAlign = CS.getParamAlignment(ArgNo);
3834  if (ArgAlign < 8)
3835  ArgAlign = 8;
3836  VAArgOffset = alignTo(VAArgOffset, ArgAlign);
3837  if (!IsFixed) {
3838  Value *Base = getShadowPtrForVAArgument(RealTy, IRB,
3839  VAArgOffset - VAArgBase);
3840  Value *AShadowPtr, *AOriginPtr;
3841  std::tie(AShadowPtr, AOriginPtr) = MSV.getShadowOriginPtr(
3842  A, IRB, IRB.getInt8Ty(), kShadowTLSAlignment, /*isStore*/ false);
3843 
3844  IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr,
3845  kShadowTLSAlignment, ArgSize);
3846  }
3847  VAArgOffset += alignTo(ArgSize, 8);
3848  } else {
3849  Value *Base;
3850  uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
3851  uint64_t ArgAlign = 8;
3852  if (A->getType()->isArrayTy()) {
3853  // Arrays are aligned to element size, except for long double
3854  // arrays, which are aligned to 8 bytes.
3855  Type *ElementTy = A->getType()->getArrayElementType();
3856  if (!ElementTy->isPPC_FP128Ty())
3857  ArgAlign = DL.getTypeAllocSize(ElementTy);
3858  } else if (A->getType()->isVectorTy()) {
3859  // Vectors are naturally aligned.
3860  ArgAlign = DL.getTypeAllocSize(A->getType());
3861  }
3862  if (ArgAlign < 8)
3863  ArgAlign = 8;
3864  VAArgOffset = alignTo(VAArgOffset, ArgAlign);
3865  if (DL.isBigEndian()) {
3866  // Adjust the shadow for arguments with size < 8 to match the placement
3867  // of bits in a big-endian system.
3868  if (ArgSize < 8)
3869  VAArgOffset += (8 - ArgSize);
3870  }
3871  if (!IsFixed) {
3872  Base = getShadowPtrForVAArgument(A->getType(), IRB,
3873  VAArgOffset - VAArgBase);
3874  IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
3875  }
3876  VAArgOffset += ArgSize;
3877  VAArgOffset = alignTo(VAArgOffset, 8);
3878  }
3879  if (IsFixed)
3880  VAArgBase = VAArgOffset;
3881  }
3882 
3883  Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(),
3884  VAArgOffset - VAArgBase);
3885  // We reuse VAArgOverflowSizeTLS to hold the total size of all varargs
3886  // (acting as VAArgSizeTLS), which avoids adding a new class member.
3887  IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
3888  }
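// A worked example of the alignment handling: a hypothetical variadic
// <4 x i32> argument gets ArgAlign = getTypeAllocSize = 16, so VAArgOffset
// is rounded up to a 16-byte boundary before its shadow is stored, while a
// plain i32 keeps ArgAlign = 8 and, on big-endian targets, also gets the
// 8 - 4 = 4 byte placement adjustment described above.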
3889 
3890  /// Compute the shadow address for a given va_arg.
3891  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
3892  int ArgOffset) {
3893  Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
3894  Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
3895  return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
3896  "_msarg");
3897  }
3898 
3899  void visitVAStartInst(VAStartInst &I) override {
3900  IRBuilder<> IRB(&I);
3901  VAStartInstrumentationList.push_back(&I);
3902  Value *VAListTag = I.getArgOperand(0);
3903  Value *ShadowPtr, *OriginPtr;
3904  unsigned Alignment = 8;
3905  std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
3906  VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
3907  IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
3908  /* size */ 8, Alignment, false);
3909  }
3910 
3911  void visitVACopyInst(VACopyInst &I) override {
3912  IRBuilder<> IRB(&I);
3913  Value *VAListTag = I.getArgOperand(0);
3914  Value *ShadowPtr, *OriginPtr;
3915  unsigned Alignment = 8;
3916  std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
3917  VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
3918  // Unpoison the whole __va_list_tag.
3919  // FIXME: magic ABI constants.
3920  IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
3921  /* size */ 8, Alignment, false);
3922  }
3923 
3924  void finalizeInstrumentation() override {
3925  assert(!VAArgSize && !VAArgTLSCopy &&
3926  "finalizeInstrumentation called twice");
3927  IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
3928  VAArgSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
3929  Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
3930  VAArgSize);
3931 
3932  if (!VAStartInstrumentationList.empty()) {
3933  // If there is a va_start in this function, make a backup copy of
3934  // va_arg_tls somewhere in the function entry block.
3935  VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
3936  IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
3937  }
3938 
3939  // Instrument va_start.
3940  // Copy va_list shadow from the backup copy of the TLS contents.
3941  for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
3942  CallInst *OrigInst = VAStartInstrumentationList[i];
3943  IRBuilder<> IRB(OrigInst->getNextNode());
3944  Value *VAListTag = OrigInst->getArgOperand(0);
3945  Value *RegSaveAreaPtrPtr =
3946  IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
3947  Type::getInt64PtrTy(*MS.C));
3948  Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrPtr);
3949  Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
3950  unsigned Alignment = 8;
3951  std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
3952  MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
3953  Alignment, /*isStore*/ true);
3954  IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
3955  CopySize);
3956  }
3957  }
3958 };
3959 
3960 /// A no-op implementation of VarArgHelper.
3961 struct VarArgNoOpHelper : public VarArgHelper {
3962  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
3963  MemorySanitizerVisitor &MSV) {}
3964 
3965  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {}
3966 
3967  void visitVAStartInst(VAStartInst &I) override {}
3968 
3969  void visitVACopyInst(VACopyInst &I) override {}
3970 
3971  void finalizeInstrumentation() override {}
3972 };
3973 
3974 } // end anonymous namespace
3975 
3976 static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
3977  MemorySanitizerVisitor &Visitor) {
3978  // VarArg handling is implemented for AMD64, MIPS64, AArch64 and PowerPC64.
3979  // On other platforms the no-op helper is used, so false positives are possible.
3980  Triple TargetTriple(Func.getParent()->getTargetTriple());
3981  if (TargetTriple.getArch() == Triple::x86_64)
3982  return new VarArgAMD64Helper(Func, Msan, Visitor);
3983  else if (TargetTriple.isMIPS64())
3984  return new VarArgMIPS64Helper(Func, Msan, Visitor);
3985  else if (TargetTriple.getArch() == Triple::aarch64)
3986  return new VarArgAArch64Helper(Func, Msan, Visitor);
3987  else if (TargetTriple.getArch() == Triple::ppc64 ||
3988  TargetTriple.getArch() == Triple::ppc64le)
3989  return new VarArgPowerPC64Helper(Func, Msan, Visitor);
3990  else
3991  return new VarArgNoOpHelper(Func, Msan, Visitor);
3992 }
3993 
3994 bool MemorySanitizer::runOnFunction(Function &F) {
3995  if (&F == MsanCtorFunction)
3996  return false;
3997  MemorySanitizerVisitor Visitor(F, *this);
3998 
3999  // Clear out readonly/readnone attributes.
4000  AttrBuilder B;
4001  B.addAttribute(Attribute::ReadOnly)
4002  .addAttribute(Attribute::ReadNone);
4003  F.removeAttributes(AttributeList::FunctionIndex, B);
4004 
4005  return Visitor.runOnFunction();
4006 }
Value * CreateInBoundsGEP(Value *Ptr, ArrayRef< Value *> IdxList, const Twine &Name="")
Definition: IRBuilder.h:1393
Type * getVectorElementType() const
Definition: Type.h:371
uint64_t CallInst * C
Return a value (possibly void), from a function.
User::op_iterator arg_iterator
The type of iterator to use when looping over actual arguments at this call site. ...
Definition: CallSite.h:213
SymbolTableList< Instruction >::iterator eraseFromParent()
This method unlinks &#39;this&#39; from the containing basic block and deletes it.
Definition: Instruction.cpp:68
unsigned Log2_32_Ceil(uint32_t Value)
Return the ceil log base 2 of the specified value, 32 if the value is zero.
Definition: MathExtras.h:552
A parsed version of the target data layout string in and methods for querying it. ...
Definition: DataLayout.h:111
constexpr char Align[]
Key for Kernel::Arg::Metadata::mAlign.
const std::string & getTargetTriple() const
Get the target triple which is a string describing the target host.
Definition: Module.h:238
static const MemoryMapParams Linux_PowerPC64_MemoryMapParams
Function * getCalledFunction() const
Return the function called, or null if this is an indirect function invocation.
Value * CreateICmp(CmpInst::Predicate P, Value *LHS, Value *RHS, const Twine &Name="")
Definition: IRBuilder.h:1846
Value * CreateConstGEP1_32(Value *Ptr, unsigned Idx0, const Twine &Name="")
Definition: IRBuilder.h:1432
static Constant * getString(LLVMContext &Context, StringRef Initializer, bool AddNull=true)
This method constructs a CDS and initializes it with a text string.
Definition: Constants.cpp:2515
bool isAllOnesValue() const
Return true if this is the value that would be returned by getAllOnesValue.
Definition: Constants.cpp:100
raw_ostream & errs()
This returns a reference to a raw_ostream for standard error.
void addIncoming(Value *V, BasicBlock *BB)
Add an incoming value to the end of the PHI list.
This instruction extracts a struct member or array element value from an aggregate value...
Value * CreateBinOp(Instruction::BinaryOps Opc, Value *LHS, Value *RHS, const Twine &Name="", MDNode *FPMathTag=nullptr)
Definition: IRBuilder.h:1246
GCNRegPressure max(const GCNRegPressure &P1, const GCNRegPressure &P2)
This class represents an incoming formal argument to a Function.
Definition: Argument.h:30
static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams
AllocaInst * CreateAlloca(Type *Ty, unsigned AddrSpace, Value *ArraySize=nullptr, const Twine &Name="")
Definition: IRBuilder.h:1292
Base class for instruction visitors.
Definition: InstVisitor.h:81
Value * getAggregateOperand()
Value * CreateICmpNE(Value *LHS, Value *RHS, const Twine &Name="")
Definition: IRBuilder.h:1740
Atomic ordering constants.
NodeTy * getNextNode()
Get the next node, or nullptr for the list tail.
Definition: ilist_node.h:289
LLVM_ATTRIBUTE_NORETURN void report_fatal_error(Error Err, bool gen_crash_diag=true)
Report a serious error, calling any installed error handler.
Definition: Error.cpp:115
Compute iterated dominance frontiers using a linear time algorithm.
Definition: AllocatorList.h:24
BinaryOps getOpcode() const
Definition: InstrTypes.h:555
unsigned getParamAlignment(unsigned ArgNo) const
Extract the alignment for a call or parameter (0=unknown).
Definition: CallSite.h:406
bool isAtomic() const
Return true if this instruction has an AtomicOrdering of unordered or higher.
Constant * getOrInsertFunction(StringRef Name, FunctionType *T, AttributeList AttributeList)
Look up the specified function in the module symbol table.
Definition: Module.cpp:142
Value * CreateXor(Value *LHS, Value *RHS, const Twine &Name="")
Definition: IRBuilder.h:1148
A Module instance is used to store all the information related to an LLVM module. ...
Definition: Module.h:63
bool isSized(SmallPtrSetImpl< Type *> *Visited=nullptr) const
Return true if it makes sense to take the size of this type.
Definition: Type.h:265
LLVM_ATTRIBUTE_ALWAYS_INLINE size_type size() const
Definition: SmallVector.h:137
void setOrdering(AtomicOrdering Ordering)
Sets the ordering constraint of this rmw instruction.
Definition: Instructions.h:766
static const MemoryMapParams Linux_I386_MemoryMapParams
Same, but only replaced by something equivalent.
Definition: GlobalValue.h:54
an instruction that atomically checks whether a specified value is in a memory location, and, if it is, stores a new value there.
Definition: Instructions.h:518
static const MemoryMapParams NetBSD_X86_64_MemoryMapParams
static cl::opt< bool > ClPoisonStackWithCall("msan-poison-stack-with-call", cl::desc("poison uninitialized stack variables with a call"), cl::Hidden, cl::init(false))
This class represents zero extension of integer types.
static const unsigned kRetvalTLSSize
Value * CreateICmpSLT(Value *LHS, Value *RHS, const Twine &Name="")
Definition: IRBuilder.h:1768
This class represents a function call, abstracting a target machine&#39;s calling convention.
static PointerType * getInt32PtrTy(LLVMContext &C, unsigned AS=0)
Definition: Type.cpp:228
void setOrdering(AtomicOrdering Ordering)
Sets the ordering constraint of this load instruction.
Definition: Instructions.h:243
static PointerType * get(Type *ElementType, unsigned AddressSpace)
This constructs a pointer to an object of the specified type in a numbered address space...
Definition: Type.cpp:617
const Value * getTrueValue() const
AtomicOrdering getOrdering() const
Returns the ordering constraint of this load instruction.
Definition: Instructions.h:237
Like Internal, but omit from symbol table.
Definition: GlobalValue.h:57
static VarArgHelper * CreateVarArgHelper(Function &Func, MemorySanitizer &Msan, MemorySanitizerVisitor &Visitor)
This instruction constructs a fixed permutation of two input vectors.
static cl::opt< bool > ClWithComdat("msan-with-comdat", cl::desc("Place MSan constructors in comdat sections"), cl::Hidden, cl::init(false))
Externally visible function.
Definition: GlobalValue.h:49
bool hasFnAttribute(Attribute::AttrKind Kind) const
Return true if the function has the attribute.
Definition: Function.h:307
A raw_ostream that writes to an SmallVector or SmallString.
Definition: raw_ostream.h:504
This class wraps the llvm.memset intrinsic.
static bool isEquality(Predicate P)
Return true if this predicate is either EQ or NE.
Value * CreateSExt(Value *V, Type *DestTy, const Twine &Name="")
Definition: IRBuilder.h:1560
Metadata node.
Definition: Metadata.h:862
static unsigned TypeSizeToSizeIndex(unsigned TypeSize)
F(f)
This class represents a sign extension of integer types.
uint64_t alignTo(uint64_t Value, uint64_t Align, uint64_t Skew=0)
Returns the next integer (mod 2**64) that is greater than or equal to Value and is a multiple of Alig...
Definition: MathExtras.h:685
CallInst * CreateMemSet(Value *Ptr, Value *Val, uint64_t Size, unsigned Align, bool isVolatile=false, MDNode *TBAATag=nullptr, MDNode *ScopeTag=nullptr, MDNode *NoAliasTag=nullptr)
Create and insert a memset to the specified pointer and the specified value.
Definition: IRBuilder.h:404
An instruction for reading from memory.
Definition: Instructions.h:168
AttrBuilder & addAttribute(Attribute::AttrKind Val)
Add an attribute to the builder.
an instruction that atomically reads a memory location, combines it with another value, and then stores the result back.
Definition: Instructions.h:681
Hexagon Common GEP
bool isVectorTy() const
True if this is an instance of VectorType.
Definition: Type.h:230
#define op(i)
bool isMustTailCall() const
static Type * getX86_MMXTy(LLVMContext &C)
Definition: Type.cpp:171
static cl::opt< unsigned long long > ClXorMask("msan-xor-mask", cl::desc("Define custom MSan XorMask"), cl::Hidden, cl::init(0))
Use * op_iterator
Definition: User.h:225
bool isPPC_FP128Ty() const
Return true if this is powerpc long double.
Definition: Type.h:159
static const MemoryMapParams Linux_AArch64_MemoryMapParams
static cl::opt< bool > ClHandleICmp("msan-handle-icmp", cl::desc("propagate shadow through ICmpEQ and ICmpNE"), cl::Hidden, cl::init(true))
op_iterator op_begin()
Definition: User.h:230
static PointerType * getInt64PtrTy(LLVMContext &C, unsigned AS=0)
Definition: Type.cpp:232
static Constant * get(ArrayType *T, ArrayRef< Constant *> V)
Definition: Constants.cpp:960
unsigned getBitWidth() const
Return the number of bits in the APInt.
Definition: APInt.h:1502
bool onlyReadsMemory() const
Determine if the call does not access or only reads memory.
static Constant * getNullValue(Type *Ty)
Constructor to create a &#39;0&#39; constant of arbitrary type.
Definition: Constants.cpp:268
Value * CreateNot(Value *V, const Twine &Name="")
Definition: IRBuilder.h:1282
StoreInst * CreateAlignedStore(Value *Val, Value *Ptr, unsigned Align, bool isVolatile=false)
Definition: IRBuilder.h:1346
IntegerType * getInt32Ty()
Fetch the type representing a 32-bit integer.
Definition: IRBuilder.h:347
unsigned countTrailingZeros() const
Count the number of trailing zero bits.
Definition: APInt.h:1625
static cl::opt< int > ClPoisonStackPattern("msan-poison-stack-pattern", cl::desc("poison uninitialized stack variables with the given pattern"), cl::Hidden, cl::init(0xff))
ArrayRef< unsigned > getIndices() const
AnalysisUsage & addRequired()
#define INITIALIZE_PASS_DEPENDENCY(depName)
Definition: PassSupport.h:51
bool isSigned() const
Definition: InstrTypes.h:1054
static cl::opt< bool > ClDumpStrictInstructions("msan-dump-strict-instructions", cl::desc("print out instructions with default strict semantics"), cl::Hidden, cl::init(false))
This class represents the LLVM &#39;select&#39; instruction.
Type * getPointerElementType() const
Definition: Type.h:376
const DataLayout & getDataLayout() const
Get the data layout for the module&#39;s target platform.
Definition: Module.cpp:361
unsigned getAlignment() const
Return the alignment of the memory that is being allocated by the instruction.
Definition: Instructions.h:113
IntegerType * getInt64Ty()
Fetch the type representing a 64-bit integer.
Definition: IRBuilder.h:352
This is the base class for all instructions that perform data casts.
Definition: InstrTypes.h:592
ArrayRef< T > makeArrayRef(const T &OneElt)
Construct an ArrayRef from a single element.
Definition: ArrayRef.h:451
&#39;undef&#39; values are things that do not have specified contents.
Definition: Constants.h:1275
This class wraps the llvm.memmove intrinsic.
Class to represent struct types.
Definition: DerivedTypes.h:201
LLVMContext & getContext() const
Get the global data context.
Definition: Module.h:242
static cl::opt< bool > ClCheckAccessAddress("msan-check-access-address", cl::desc("report accesses through a pointer which has poisoned shadow"), cl::Hidden, cl::init(true))
PointerType * getPointerTo(unsigned AddrSpace=0) const
Return a pointer to the current type.
Definition: Type.cpp:639
IterTy arg_end() const
Definition: CallSite.h:575
bool isUnsigned() const
Definition: InstrTypes.h:1060
static cl::opt< unsigned long long > ClAndMask("msan-and-mask", cl::desc("Define custom MSan AndMask"), cl::Hidden, cl::init(0))
bool isIntegerTy() const
True if this is an instance of IntegerType.
Definition: Type.h:197
This provides a uniform API for creating instructions and inserting them into a basic block: either a...
Definition: IRBuilder.h:731
IntegerType * getIntPtrTy(const DataLayout &DL, unsigned AddrSpace=0)
Fetch the type representing a pointer to an integer value.
Definition: IRBuilder.h:390
This file contains the simple types necessary to represent the attributes associated with functions a...
static cl::opt< unsigned long long > ClOriginBase("msan-origin-base", cl::desc("Define custom MSan OriginBase"), cl::Hidden, cl::init(0))
InstrTy * getInstruction() const
Definition: CallSite.h:92
Value * CreateAdd(Value *LHS, Value *RHS, const Twine &Name="", bool HasNUW=false, bool HasNSW=false)
Definition: IRBuilder.h:962
void setName(const Twine &Name)
Change the name of the value.
Definition: Value.cpp:295
The C convention as implemented on Windows/x86-64 and AArch64.
Definition: CallingConv.h:154
static cl::opt< bool > ClHandleAsmConservative("msan-handle-asm-conservative", cl::desc("conservative handling of inline assembly"), cl::Hidden, cl::init(false))
static StructType * get(LLVMContext &Context, ArrayRef< Type *> Elements, bool isPacked=false)
This static method is the primary way to create a literal StructType.
Definition: Type.cpp:336
This file implements a class to represent arbitrary precision integral constant values and operations...
Type * getVoidTy()
Fetch the type representing void.
Definition: IRBuilder.h:380
This class represents a cast from a pointer to an integer.
AtomicOrdering
Atomic ordering for LLVM&#39;s memory model.
StoreInst * CreateStore(Value *Val, Value *Ptr, bool isVolatile=false)
Definition: IRBuilder.h:1321
Value * CreateIntToPtr(Value *V, Type *DestTy, const Twine &Name="")
Definition: IRBuilder.h:1624
ValTy * getCalledValue() const
Return the pointer to function that is being called.
Definition: CallSite.h:100
bool isNullValue() const
Return true if this is the value that would be returned by getNullValue.
Definition: Constants.cpp:85
Class to represent function types.
Definition: DerivedTypes.h:103
Value * CreateBitCast(Value *V, Type *DestTy, const Twine &Name="")
Definition: IRBuilder.h:1629
Type * getType() const
All values are typed, get the type of this value.
Definition: Value.h:245
This represents the llvm.va_start intrinsic.
static const char *const kMsanInitName
std::string itostr(int64_t X)
Definition: StringExtras.h:229
AtomicOrdering getSuccessOrdering() const
Returns the success ordering constraint of this cmpxchg instruction.
Definition: Instructions.h:572
#define T
static cl::opt< int > ClTrackOrigins("msan-track-origins", cl::desc("Track origins (allocation sites) of poisoned memory"), cl::Hidden, cl::init(0))
Track origins of uninitialized values.
Class to represent array types.
Definition: DerivedTypes.h:369
This instruction compares its operands according to the predicate given to the constructor.
static bool isStore(int Opcode)
bool isVarArg() const
Definition: DerivedTypes.h:123
This class represents a no-op cast from one type to another.
bool paramHasAttr(unsigned ArgNo, Attribute::AttrKind Kind) const
Return true if the call or the callee has the given attribute.
Definition: CallSite.h:377
MDNode * getMetadata(unsigned KindID) const
Get the metadata of given kind attached to this Instruction.
Definition: Instruction.h:200
Value * getInsertedValueOperand()
SmallString - A SmallString is just a SmallVector with methods and accessors that make it work better...
Definition: SmallString.h:26
Value * CreateSub(Value *LHS, Value *RHS, const Twine &Name="", bool HasNUW=false, bool HasNSW=false)
Definition: IRBuilder.h:979
An instruction for storing to memory.
Definition: Instructions.h:310
bool removeUnreachableBlocks(Function &F, LazyValueInfo *LVI=nullptr, DeferredDominance *DDT=nullptr)
Remove all blocks that can not be reached from the function&#39;s entry.
Definition: Local.cpp:2207
bool isIntOrIntVectorTy() const
Return true if this is an integer type or a vector of integer types.
Definition: Type.h:203
static const unsigned kParamTLSSize
Value * CreateZExt(Value *V, Type *DestTy, const Twine &Name="")
Definition: IRBuilder.h:1556
static cl::opt< bool > ClPoisonStack("msan-poison-stack", cl::desc("poison uninitialized stack variables"), cl::Hidden, cl::init(true))
Function * getDeclaration(Module *M, ID id, ArrayRef< Type *> Tys=None)
Create or insert an LLVM Function declaration for an intrinsic, and return it.
Definition: Function.cpp:1007
This class represents a truncation of integer types.
void SetInsertPoint(BasicBlock *TheBB)
This specifies that created instructions should be appended to the end of the specified block...
Definition: IRBuilder.h:127
Value * getOperand(unsigned i) const
Definition: User.h:170
static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams
bool isRelational() const
Return true if the predicate is relational (not EQ or NE).
bool isCall() const
Return true if a CallInst is enclosed.
Definition: CallSite.h:87
Value * CreateOr(Value *LHS, Value *RHS, const Twine &Name="")
Definition: IRBuilder.h:1130
static const unsigned kMinOriginAlignment
Constant * getAggregateElement(unsigned Elt) const
For aggregates (struct/array/vector) return the constant that corresponds to the specified element if...
Definition: Constants.cpp:338
bool doesNotAccessMemory() const
Determine if the call does not access memory.
Type * getScalarType() const
If this is a vector type, return the element type, otherwise return &#39;this&#39;.
Definition: Type.h:304
Value * getOperand(unsigned i_nocapture) const
bool isZeroValue() const
Return true if the value is negative zero or null value.
Definition: Constants.cpp:65
bool isVoidTy() const
Return true if this is &#39;void&#39;.
Definition: Type.h:141
const BasicBlock & getEntryBlock() const
Definition: Function.h:626
an instruction for type-safe pointer arithmetic to access elements of arrays and structs ...
Definition: Instructions.h:841
LoadInst * CreateLoad(Value *Ptr, const char *Name)
Provided to resolve &#39;CreateLoad(Ptr, "...")&#39; correctly, instead of converting the string to &#39;bool&#39; fo...
Definition: IRBuilder.h:1305
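A minimal sketch of the load/modify/store pattern these Create* helpers support, assuming the LLVM-7-era typed-pointer API; the helper is illustrative, not taken from the pass:

#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Hypothetical helper: OR some bits into the integer stored at Ptr.
static Value *orInBits(IRBuilder<> &IRB, Value *Ptr, Value *Bits) {
  LoadInst *Old = IRB.CreateLoad(Ptr, "old"); // loaded type comes from Ptr
  Value *New = IRB.CreateOr(Old, Bits, "new");
  IRB.CreateStore(New, Ptr);
  return New;
}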
static cl::opt< unsigned long long > ClShadowBase("msan-shadow-base", cl::desc("Define custom MSan ShadowBase"), cl::Hidden, cl::init(0))
static bool runOnFunction(Function &F, bool PostInlining)
initializer< Ty > init(const Ty &Val)
Definition: CommandLine.h:410
unsigned getNumOperands() const
This instruction inserts a single (scalar) element into a VectorType value.
The landingpad instruction holds all of the information necessary to generate correct exception handling.
const Instruction * getFirstNonPHI() const
Returns a pointer to the first instruction in this block that is not a PHINode instruction.
Definition: BasicBlock.cpp:189
static GCRegistry::Add< OcamlGC > B("ocaml", "ocaml 3.10-compatible GC")
const_iterator getFirstInsertionPt() const
Returns an iterator to the first instruction in this block that is suitable for inserting a non-PHI instruction.
Definition: BasicBlock.cpp:218
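SetInsertPoint and getFirstInsertionPt are commonly combined to start emitting code at the top of a block while skipping PHI nodes; a hedged sketch, assuming the function's first argument is a sub-64-bit integer:

#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Hypothetical helper: widen F's first argument at the earliest legal point.
static void widenFirstArg(Function &F) {
  BasicBlock &Entry = F.getEntryBlock();
  IRBuilder<> IRB(F.getContext());
  IRB.SetInsertPoint(&Entry, Entry.getFirstInsertionPt());
  Value *Wide = IRB.CreateZExt(&*F.arg_begin(), IRB.getInt64Ty(), "wide");
  (void)Wide;
}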
const BasicBlock * getSinglePredecessor() const
Return the predecessor of this block if it has a single predecessor block.
Definition: BasicBlock.cpp:235
INITIALIZE_PASS_BEGIN(MemorySanitizer, "msan", "MemorySanitizer: detects uninitialized reads.", false, false) INITIALIZE_PASS_END(MemorySanitizer
LLVM Basic Block Representation.
Definition: BasicBlock.h:59
The instances of the Type class are immutable: once they are created, they are never changed. Also note that only one instance of a particular type is ever created.
Definition: Type.h:46
std::pair< Function *, Function * > createSanitizerCtorAndInitFunctions(Module &M, StringRef CtorName, StringRef InitName, ArrayRef< Type *> InitArgTypes, ArrayRef< Value *> InitArgs, StringRef VersionCheckName=StringRef())
Creates sanitizer constructor function, and calls sanitizer's init function from it.
This is an important class for using LLVM in a threaded context.
Definition: LLVMContext.h:69
const char * getOpcodeName() const
Definition: Instruction.h:128
This is an important base class in LLVM.
Definition: Constant.h:42
Resume the propagation of an exception.
This file contains the declarations for the subclasses of Constant, which represent the different flavors of constant values that live in LLVM.
Value * CreateSelect(Value *C, Value *True, Value *False, const Twine &Name="", Instruction *MDFrom=nullptr)
Definition: IRBuilder.h:1901
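A hedged sketch of CreateSelect paired with an integer compare; the names and the non-zero-means-dirty convention are illustrative assumptions:

#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Hypothetical helper: keep TagA when ShadowA is non-zero, else take TagB.
static Value *pickTag(IRBuilder<> &IRB, Value *ShadowA, Value *TagA,
                      Value *TagB) {
  Value *Dirty = IRB.CreateICmpNE(
      ShadowA, Constant::getNullValue(ShadowA->getType()), "dirty");
  return IRB.CreateSelect(Dirty, TagA, TagB, "tag");
}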
bool isPointerTy() const
True if this is an instance of PointerType.
Definition: Type.h:224
unsigned getNumParams() const
Return the number of fixed parameters this function type requires.
Definition: DerivedTypes.h:139
Represent the analysis usage information of a pass.
op_iterator op_end()
Definition: User.h:232
Value * CreateNeg(Value *V, const Twine &Name="", bool HasNUW=false, bool HasNSW=false)
Definition: IRBuilder.h:1256
static const PlatformMemoryMapParams Linux_X86_MemoryMapParams
This instruction compares its operands according to the predicate given to the constructor.
Predicate
This enumeration lists the possible predicates for CmpInst subclasses.
Definition: InstrTypes.h:885
FunctionPass class - This class is used to implement most global optimizations.
Definition: Pass.h:285
static const unsigned kShadowTLSAlignment
static FunctionType * get(Type *Result, ArrayRef< Type *> Params, bool isVarArg)
This static method is the primary way of constructing a FunctionType.
Definition: Type.cpp:297
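FunctionType::get is typically paired with Module::getOrInsertFunction when declaring runtime callbacks; a minimal sketch, where the name "__example_hook" is made up for illustration:

#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Hypothetical declaration: void __example_hook(i8 *Addr).
static Constant *declareHook(Module &M) {
  LLVMContext &C = M.getContext();
  FunctionType *FT = FunctionType::get(
      Type::getVoidTy(C), {Type::getInt8PtrTy(C)}, /*isVarArg=*/false);
  return M.getOrInsertFunction("__example_hook", FT); // Constant * in LLVM 7
}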
static Constant * get(StructType *T, ArrayRef< Constant *> V)
Definition: Constants.cpp:1021
Value * getPointerOperand()
Definition: Instructions.h:274
bool isX86_MMXTy() const
Return true if this is X86 MMX.
Definition: Type.h:182
Value * CreateICmpEQ(Value *LHS, Value *RHS, const Twine &Name="")
Definition: IRBuilder.h:1736
self_iterator getIterator()
Definition: ilist_node.h:82
Class to represent integer types.
Definition: DerivedTypes.h:40
IntegerType * getIntNTy(unsigned N)
Fetch the type representing an N-bit integer.
Definition: IRBuilder.h:360
Value * CreateExtractElement(Value *Vec, Value *Idx, const Twine &Name="")
Definition: IRBuilder.h:1921
This class represents a cast from an integer to a pointer.
const Value * getCondition() const
static Constant * getAllOnesValue(Type *Ty)
Definition: Constants.cpp:322
static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams
static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams
Comdat * getOrInsertComdat(StringRef Name)
Return the Comdat in the module with the specified name.
Definition: Module.cpp:474
const Value * getArraySize() const
Get the number of elements allocated.
Definition: Instructions.h:93
Value * CreateExtractValue(Value *Agg, ArrayRef< unsigned > Idxs, const Twine &Name="")
Definition: IRBuilder.h:1963
PointerType * getInt8PtrTy(unsigned AddrSpace=0)
Fetch the type representing a pointer to an 8-bit integer value.
Definition: IRBuilder.h:385
MDNode * createBranchWeights(uint32_t TrueWeight, uint32_t FalseWeight)
Return metadata containing two branch weights.
Definition: MDBuilder.cpp:38
AtomicOrdering getOrdering() const
Returns the ordering constraint of this rmw instruction.
Definition: Instructions.h:761
INITIALIZE_PASS_END(RegBankSelect, DEBUG_TYPE, "Assign register bank of generic virtual registers", false, false) RegBankSelect
#define llvm_unreachable(msg)
Marks that the current location is not supposed to be reachable.
Value * CreateMul(Value *LHS, Value *RHS, const Twine &Name="", bool HasNUW=false, bool HasNSW=false)
Definition: IRBuilder.h:996
Value * CreateTrunc(Value *V, Type *DestTy, const Twine &Name="")
Definition: IRBuilder.h:1552
Type * getAllocatedType() const
Return the type that is being allocated by the instruction.
Definition: Instructions.h:106
Triple - Helper class for working with autoconf configuration names.
Definition: Triple.h:44
signed greater than
Definition: InstrTypes.h:912
unsigned first
bool isInvoke() const
Return true if an InvokeInst is enclosed.
Definition: CallSite.h:90
Intrinsic::ID getIntrinsicID() const
Return the intrinsic ID of this intrinsic.
Definition: IntrinsicInst.h:51
PHINode * CreatePHI(Type *Ty, unsigned NumReservedValues, const Twine &Name="")
Definition: IRBuilder.h:1866
bool isPtrOrPtrVectorTy() const
Return true if this is a pointer type or a vector of pointer types.
Definition: Type.h:227
static IntegerType * get(LLVMContext &C, unsigned NumBits)
This static method is the primary way of constructing an IntegerType.
Definition: Type.cpp:240
Type * getSequentialElementType() const
Definition: Type.h:358
Iterator for intrusive lists based on ilist_node.
unsigned getNumOperands() const
Definition: User.h:192
See the file comment.
Definition: ValueMap.h:86
This is the shared class of boolean and integer constants.
Definition: Constants.h:84
iterator end()
Definition: BasicBlock.h:266
unsigned getScalarSizeInBits() const LLVM_READONLY
If this is a vector type, return the getPrimitiveSizeInBits value for the element type; otherwise return the getPrimitiveSizeInBits value for this type.
Definition: Type.cpp:130
CallingConv::ID getCallingConv() const
getCallingConv()/setCallingConv(CC) - These methods get and set the calling convention of this function.
Definition: Function.h:199
IterTy arg_begin() const
Definition: CallSite.h:571
Value * CreateIntCast(Value *V, Type *DestTy, bool isSigned, const Twine &Name="")
Definition: IRBuilder.h:1698
This is a 'vector' (really, a variable-sized array), optimized for the case when the array is small.
Definition: SmallVector.h:861
Module.h This file contains the declarations for the Module class.
Value * CreateInsertElement(Value *Vec, Value *NewElt, Value *Idx, const Twine &Name="")
Definition: IRBuilder.h:1934
Provides information about what library functions are available for the current target.
unsigned getABITypeAlignment(Type *Ty) const
Returns the minimum ABI-required alignment for the specified type.
Definition: DataLayout.cpp:722
bool isAggregateType() const
Return true if the type is an aggregate type.
Definition: Type.h:258
signed less than
Definition: InstrTypes.h:914
TerminatorInst * SplitBlockAndInsertIfThen(Value *Cond, Instruction *SplitBefore, bool Unreachable, MDNode *BranchWeights=nullptr, DominatorTree *DT=nullptr, LoopInfo *LI=nullptr)
Split the containing block at the specified instruction - everything before SplitBefore stays in the old basic block, and the rest of the instructions in the BB are moved to a new block.
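A hedged sketch of the usual pattern around SplitBlockAndInsertIfThen: split before an instruction, mark the new edge cold via createBranchWeights, and emit a call on the then-path (the callee and the weights are assumptions):

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"

using namespace llvm;

// Hypothetical helper: call ReportFn when Cond is true, just before `Before`.
static void emitGuardedCall(Instruction *Before, Value *Cond, Value *ReportFn) {
  MDBuilder MDB(Before->getContext());
  TerminatorInst *Then = SplitBlockAndInsertIfThen(
      Cond, Before, /*Unreachable=*/false,
      MDB.createBranchWeights(/*TrueWeight=*/1, /*FalseWeight=*/100000));
  IRBuilder<> IRB(Then); // insert before the branch back to the tail block
  IRB.CreateCall(ReportFn);
}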
CallInst * CreateMaskedStore(Value *Val, Value *Ptr, unsigned Align, Value *Mask)
Create a call to Masked Store intrinsic.
Definition: IRBuilder.cpp:491
CHAIN = SC CHAIN, Imm128 - System call.
ConstantInt * getInt32(uint32_t C)
Get a constant 32-bit value.
Definition: IRBuilder.h:307
static IntegerType * getIntNTy(LLVMContext &C, unsigned N)
Definition: Type.cpp:180
StringRef str()
Return a StringRef for the vector contents.
Definition: raw_ostream.h:529
static GCRegistry::Add< StatepointGC > D("statepoint-example", "an example strategy for statepoint")
CallInst * CreateMemCpy(Value *Dst, unsigned DstAlign, Value *Src, unsigned SrcAlign, uint64_t Size, bool isVolatile=false, MDNode *TBAATag=nullptr, MDNode *TBAAStructTag=nullptr, MDNode *ScopeTag=nullptr, MDNode *NoAliasTag=nullptr)
Create and insert a memcpy between the specified pointers.
Definition: IRBuilder.h:446
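Note that this LLVM-7-era CreateMemCpy takes one alignment per pointer rather than a single value; a minimal, illustrative use:

#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Hypothetical helper: copy N bytes with no alignment assumptions.
static void copyBytes(IRBuilder<> &IRB, Value *Dst, Value *Src, uint64_t N) {
  IRB.CreateMemCpy(Dst, /*DstAlign=*/1, Src, /*SrcAlign=*/1, N);
}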
This class wraps the llvm.memcpy intrinsic.
static Constant * get(Type *Ty, uint64_t V, bool isSigned=false)
If Ty is a vector type, return a Constant with a splat of the given value.
Definition: Constants.cpp:621
void appendToGlobalCtors(Module &M, Function *F, int Priority, Constant *Data=nullptr)
Append F to the list of global ctors of module M with the given Priority.
Definition: ModuleUtils.cpp:84
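appendToGlobalCtors and createSanitizerCtorAndInitFunctions (listed above) are normally used together; a minimal sketch, where the ctor and init names are made up:

#include "llvm/IR/Module.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"
#include <tuple>

using namespace llvm;

// Hypothetical setup: create a ctor that calls __example_init() and register
// it to run at program start-up.
static void installModuleCtor(Module &M) {
  Function *Ctor;
  std::tie(Ctor, std::ignore) = createSanitizerCtorAndInitFunctions(
      M, /*CtorName=*/"example.module_ctor", /*InitName=*/"__example_init",
      /*InitArgTypes=*/{}, /*InitArgs=*/{});
  appendToGlobalCtors(M, Ctor, /*Priority=*/0);
}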
CallInst * CreateMaskedLoad(Value *Ptr, unsigned Align, Value *Mask, Value *PassThru=nullptr, const Twine &Name="")
Create a call to Masked Load intrinsic.
Definition: IRBuilder.cpp:470
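Together, the two masked builders express lane-predicated copies; a hedged sketch (the alignment and the <N x i1> mask shape are assumptions):

#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Hypothetical helper: copy only the lanes whose mask bit is set.
static Value *copyMaskedLanes(IRBuilder<> &IRB, Value *Src, Value *Dst,
                              Value *Mask) {
  Value *V = IRB.CreateMaskedLoad(Src, /*Align=*/8, Mask); // PassThru: undef
  IRB.CreateMaskedStore(V, Dst, /*Align=*/8, Mask);
  return V;
}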
unsigned getNumIncomingValues() const
Return the number of incoming edges.
static const size_t kNumberOfAccessSizes
static GlobalVariable * createPrivateNonConstGlobalForString(Module &M, StringRef Str)
Create a non-const global initialized with the given string.
static cl::opt< bool > ClKeepGoing("msan-keep-going", cl::desc("keep going after reporting a UMR"), cl::Hidden, cl::init(false))
raw_ostream & dbgs()
dbgs() - This returns a reference to a raw_ostream for debugging messages.
Definition: Debug.cpp:133
unsigned getVectorNumElements() const
Definition: DerivedTypes.h:462
signed less or equal
Definition: InstrTypes.h:915
Class to represent vector types.
Definition: DerivedTypes.h:393
const Module * getModule() const
Return the module owning the function this instruction belongs to, or nullptr if the function does not have a module.
Definition: Instruction.cpp:56
Class for arbitrary precision integers.
Definition: APInt.h:69
IntegerType * getInt8Ty()
Fetch the type representing an 8-bit integer.
Definition: IRBuilder.h:337
Value * CreateShl(Value *LHS, Value *RHS, const Twine &Name="", bool HasNUW=false, bool HasNSW=false)
Definition: IRBuilder.h:1051
const Value * getFalseValue() const
Value * CreatePointerCast(Value *V, Type *DestTy, const Twine &Name="")
Definition: IRBuilder.h:1675
uint64_t getTypeSizeInBits(Type *Ty) const
Size examples:
Definition: DataLayout.h:560
static cl::opt< ITMode > IT(cl::desc("IT block support"), cl::Hidden, cl::init(DefaultIT), cl::ZeroOrMore, cl::values(clEnumValN(DefaultIT, "arm-default-it", "Generate IT block based on arch"), clEnumValN(RestrictedIT, "arm-restrict-it", "Disallow deprecated IT based on ARMv8"), clEnumValN(NoRestrictedIT, "arm-no-restrict-it", "Allow IT blocks based on ARMv7")))
uint64_t getTypeAllocSize(Type *Ty) const
Returns the offset in bytes between successive objects of the specified type, including alignment padding.
Definition: DataLayout.h:428
void removeAttributes(unsigned i, const AttrBuilder &Attrs)
removes the attributes from the list of attributes.
Definition: Function.cpp:403
Predicate getPredicate() const
Return the predicate for this instruction.
Definition: InstrTypes.h:959
bool isInlineAsm() const
Check if this call is an inline asm statement.
static cl::opt< bool > ClHandleICmpExact("msan-handle-icmp-exact", cl::desc("exact handling of relational integer ICmp"), cl::Hidden, cl::init(false))
unsigned getAlignment() const
Return the alignment of the access that is being performed.
Definition: Instructions.h:230
LLVM_NODISCARD bool empty() const
Definition: SmallVector.h:62
unsigned getNumArgOperands() const
Return the number of call arguments.
StringRef getName() const
Return a constant reference to the value's name.
Definition: Value.cpp:224
#define I(x, y, z)
Definition: MD5.cpp:58
#define N
static const MemoryMapParams Linux_X86_64_MemoryMapParams
static ArrayType * get(Type *ElementType, uint64_t NumElements)
This static method is the primary way to construct an ArrayType.
Definition: Type.cpp:568
LLVM_NODISCARD std::enable_if<!is_simple_type< Y >::value, typename cast_retty< X, const Y >::ret_type >::type dyn_cast(const Y &Val)
Definition: Casting.h:323
This instruction extracts a single (scalar) element from a VectorType value.
void maybeMarkSanitizerLibraryCallNoBuiltin(CallInst *CI, const TargetLibraryInfo *TLI)
Given a CallInst, check if it calls a string function known to CodeGen, and mark it with NoBuiltin if so.
Definition: Local.cpp:2710
static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams
Value * getReturnValue() const
Convenience accessor. Returns null if there is no return value.
static InlineAsm * get(FunctionType *Ty, StringRef AsmString, StringRef Constraints, bool hasSideEffects, bool isAlignStack=false, AsmDialect asmDialect=AD_ATT)
InlineAsm::get - Return the specified uniqued inline asm string.
Definition: InlineAsm.cpp:43
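A minimal sketch of InlineAsm::get building an empty, side-effecting asm blob, a common optimization barrier (illustrative only):

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"

using namespace llvm;

// Hypothetical helper: emit `call void asm sideeffect "", ""()`.
static void emitAsmBarrier(IRBuilder<> &IRB) {
  FunctionType *FT =
      FunctionType::get(IRB.getVoidTy(), /*Params=*/{}, /*isVarArg=*/false);
  InlineAsm *IA = InlineAsm::get(FT, /*AsmString=*/"", /*Constraints=*/"",
                                 /*hasSideEffects=*/true);
  IRB.CreateCall(IA);
}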
static cl::opt< bool > ClCheckConstantShadow("msan-check-constant-shadow", cl::desc("Insert checks for constant shadow values"), cl::Hidden, cl::init(false))
static const MemoryMapParams Linux_MIPS64_MemoryMapParams
Value * CreateAnd(Value *LHS, Value *RHS, const Twine &Name="")
Definition: IRBuilder.h:1112
Value * CreatePtrToInt(Value *V, Type *DestTy, const Twine &Name="")
Definition: IRBuilder.h:1619
static const unsigned kOriginSize
static const MemoryMapParams FreeBSD_I386_MemoryMapParams
bool isFPOrFPVectorTy() const
Return true if this is a FP type or a vector of FP.
Definition: Type.h:185
iterator_range< df_iterator< T > > depth_first(const T &G)
This represents the llvm.va_copy intrinsic.
bool isArrayAllocation() const
Return true if there is an allocation size parameter to the allocation instruction that is not 1.
size_type count(const KeyT &Val) const
Return 1 if the specified key is in the map, 0 otherwise.
Definition: ValueMap.h:158
assert(ImpDefSCC.getReg()==AMDGPU::SCC &&ImpDefSCC.isDef())
Value * getArgOperand(unsigned i) const
getArgOperand/setArgOperand - Return/set the i-th call argument.
LoadInst * CreateAlignedLoad(Value *Ptr, unsigned Align, const char *Name)
Provided to resolve 'CreateAlignedLoad(Ptr, Align, "...")' correctly, instead of converting the string to 'bool' for the isVolatile parameter.
Definition: IRBuilder.h:1328
static cl::opt< int > ClInstrumentationWithCallThreshold("msan-instrumentation-with-call-threshold", cl::desc("If the function being instrumented requires more than " "this number of checks and origin stores, use callbacks instead of " "inline checks (-1 means never use callbacks)."), cl::Hidden, cl::init(3500))
unsigned getPrimitiveSizeInBits() const LLVM_READONLY
Return the basic size of this type if it is a primitive type.
Definition: Type.cpp:115
void setSuccessOrdering(AtomicOrdering Ordering)
Sets the success ordering constraint of this cmpxchg instruction.
Definition: Instructions.h:577
ArrayRef< unsigned > getIndices() const
Module * getParent()
Get the module that this global value is contained inside of.
Definition: GlobalValue.h:565
LLVM Value Representation.
Definition: Value.h:73
uint64_t getTypeStoreSize(Type *Ty) const
Returns the maximum number of bytes that may be overwritten by storing the specified type.
Definition: DataLayout.h:411
FunctionType * getFunctionType() const
Definition: CallSite.h:320
constexpr char Size[]
Key for Kernel::Arg::Metadata::mSize.
static VectorType * get(Type *ElementType, unsigned NumElements)
This static method is the primary way to construct a VectorType.
Definition: Type.cpp:593
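The static get() factories above uniquify types per LLVMContext; a short illustrative sketch combining IntegerType::get, ArrayType::get, and VectorType::get (element counts are arbitrary):

#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"

using namespace llvm;

// Illustrative: build i32, [8 x i32], and <4 x i32> in a given context.
static void buildDerivedTypes(LLVMContext &C) {
  IntegerType *I32 = IntegerType::get(C, 32);
  ArrayType *ArrTy = ArrayType::get(I32, 8);   // [8 x i32]
  VectorType *VecTy = VectorType::get(I32, 4); // <4 x i32>
  (void)ArrTy;
  (void)VecTy;
}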
#define LLVM_FALLTHROUGH
LLVM_FALLTHROUGH - Mark fallthrough cases in switch statements.
Definition: Compiler.h:238
std::underlying_type< E >::type Mask()
Get a bitmask with 1s in all places up to the high-order bit of E's largest value.
Definition: BitmaskEnum.h:81
unsigned getArgumentNo(Value::const_user_iterator I) const
Given a value use iterator, returns the argument that corresponds to it.
Definition: CallSite.h:199
BasicBlock::iterator GetInsertPoint() const
Definition: IRBuilder.h:122
FunctionPass * createMemorySanitizerPass(int TrackOrigins=0, bool Recover=false)
Value * CreateLShr(Value *LHS, Value *RHS, const Twine &Name="", bool isExact=false)
Definition: IRBuilder.h:1072
const Value * getCalledValue() const
Get a pointer to the function that is invoked by this instruction.
static cl::opt< bool > ClPoisonUndef("msan-poison-undef", cl::desc("poison undef temps"), cl::Hidden, cl::init(true))
ConstantInt * getInt8(uint8_t C)
Get a constant 8-bit value.
Definition: IRBuilder.h:297
StringRef - Represent a constant reference to a string, i.e. a character array and a length, which need not be null terminated.
Definition: StringRef.h:49
Predicate getSwappedPredicate() const
For example, EQ->EQ, SLE->SGE, ULT->UGT, OEQ->OEQ, ULE->UGE, OLT->OGT, etc.
Definition: InstrTypes.h:999
static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams
Type * getArrayElementType() const
Definition: Type.h:365
Value * CreateInsertValue(Value *Agg, Value *Val, ArrayRef< unsigned > Idxs, const Twine &Name="")
Definition: IRBuilder.h:1971
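CreateExtractValue and CreateInsertValue address aggregates by index paths; a hedged sketch that swaps out member 0 (the index path is arbitrary):

#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Hypothetical helper: return Agg with its first member replaced by NewElt.
static Value *replaceField0(IRBuilder<> &IRB, Value *Agg, Value *NewElt) {
  Value *Old = IRB.CreateExtractValue(Agg, {0}, "old0");
  (void)Old; // a real pass might combine Old with NewElt first
  return IRB.CreateInsertValue(Agg, NewElt, {0}, "agg0");
}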
bool isBigEndian() const
Definition: DataLayout.h:222
#define LLVM_DEBUG(X)
Definition: Debug.h:119
static IntegerType * getInt8Ty(LLVMContext &C)
Definition: Type.cpp:174
static const char *const kMsanModuleCtorName
static Constant * get(ArrayRef< Constant *> V)
Definition: Constants.cpp:1056
iterator_range< arg_iterator > args()
Definition: Function.h:675
signed greater or equal
Definition: InstrTypes.h:913
bool isArrayTy() const
True if this is an instance of ArrayType.
Definition: Type.h:221
Type * getContainedType(unsigned i) const
This method is used to implement the type iterator (defined at the end of the file).
Definition: Type.h:333
A wrapper class for inspecting calls to intrinsic functions.
Definition: IntrinsicInst.h:44
const BasicBlock * getParent() const
Definition: Instruction.h:67
an instruction to allocate memory on the stack
Definition: Instructions.h:60
This instruction inserts a struct field or array element value into an aggregate value.
CallInst * CreateCall(Value *Callee, ArrayRef< Value *> Args=None, const Twine &Name="", MDNode *FPMathTag=nullptr)
Definition: IRBuilder.h:1871