Isolation
=========

Recall: isolated execution as a building block for security.
  Many examples of isolation:
    Entire virtual machines on some server in a data center (cloud provider).
    Applications on a mobile phone.
    Web sites visited by a user in their web browser.
  Intuitive model: a separate computer for each isolated box.
    But that is often too expensive and cumbersome.
    Don't want N mobile devices to run N smartphone applications.
    Don't want N servers in the cloud to run N customer applications.
    Don't want N laptops to visit N websites.
  Goal: implement multiple isolation domains ("boxes") on a single computer.

What does isolation actually mean?  What's common in these examples?
  Code from one box cannot tamper with the state of another box.
    "Tamper" could mean reading or writing.
  Integrity: cannot write data outside of the box.
    Two boxes, A (adversary) and V (victim).
    State of V (S_V) should be unchanged after running A.
      Doesn't matter what the state of A (S_A) is, or what code A runs.
    Powerful statement: e.g., A cannot corrupt the password database in V!
  Confidentiality: how to define "cannot read data"?
    Hard to pin down by talking about what happens in a single run of A.
      Suppose the victim's secret was 5, and some adversary outputs 5; is that ok?
    Need to talk about two potential executions of A.
      Consider two worlds: one with S0_V and one with S1_V, both with the same S_A.
      Running A in either world should result in the same S'_A.
      In other words: A's execution does not depend on S_V.
    Strong statement that does not depend on how A might try to get V's data.
    This is sometimes called "non-leakage".
      See section 3.1 of https://unsat.cs.washington.edu/papers/nelson-ni.pdf
  Many systems come pretty close to achieving these definitions.
    Prevent the adversary from reading/writing some data.

One potential problem: these definitions do not take the victim's execution into account.
  Confidentiality: what if the victim's execution influences the adversary's execution?
  Integrity: what if the adversary can nudge the victim's execution in some way?

Strengthening confidentiality.
  Run A and V in some interleaved pattern, leading to S'_A and S'_V.
  Should be the same S'_A as if we simply omitted V's execution and ran just A.
  I.e., the adversary's execution is independent of the victim's execution.
  "Non-interference".
    Many different variations, with subtle details.
    E.g., see section 3.1 of https://unsat.cs.washington.edu/papers/nelson-ni.pdf

How to strengthen integrity?
  Run A and V in some interleaved pattern, leading to S'_A and S'_V.
  Should be the same S'_V as if we simply omitted A's execution and ran just V.
  Symmetric: the victim's execution is independent of the adversary's execution.
  "Non-interference" again.

Non-interference is extremely powerful and general.
  Unifies confidentiality and integrity.
  In particular, the non-interference view of integrity means data cannot leak out!
    E.g., can give sensitive data to an adversary box, and be sure it does not escape.

Unfortunately, strong non-interference is often expensive to achieve.
  Effectively what goes wrong is that there's a lot of state outside of S_A and S_V.
  Simple example: resource allocation.
    Victim's state has an integer; the victim allocates that many bytes of memory.
    Adversary can also try to allocate memory, and observe when allocation fails.
    A can infer how many bytes V allocated.
    Often called "covert channels".
  Trickier example: execution time.
    Victim runs a computation that takes longer depending on the secret.
    Adversary observes how long its own code takes to finish executing.
    A can infer how much time the victim's computation is taking.
    Often called "timing channels".
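To make timing channels concrete, here is a minimal sketch in C; the secret-dependent workload and all names are hypothetical, and a real attack would measure far subtler effects (cache state, scheduler delays, etc.):

    /* Sketch of a timing channel: the victim's running time depends on
     * its secret, so anyone who can observe elapsed time learns
     * something about the secret. */
    #include <stdio.h>
    #include <time.h>

    static void victim_compute(int secret) {
        volatile unsigned long sink = 0;
        /* Hypothetical workload: running time proportional to the secret. */
        for (long i = 0; i < (long)secret * 10000000; i++)
            sink += i;
    }

    int main(void) {
        int secret = 5;   /* part of S_V; A never reads it directly */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        victim_compute(secret);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        /* The adversary's view: elapsed time alone correlates with the
         * secret, so non-interference is violated. */
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("observed %.1f ms (longer => larger secret)\n", ms);
        return 0;
    }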
  Avoiding these channels requires partitioning the resources (memory, cycles, ...) between A and V.
    Sometimes done (e.g., hard partitioning between secure and insecure worlds).
    Not a great fit for dynamic use cases: smartphone apps, web sites, etc.
  Or requires a careful design that isolates the adversary in a restrictive environment.
    Do not expose dynamic resources, do not expose time, etc.
    In the end, it is hard to hide the overall execution time of the entire system...

How to achieve isolation?
  Identify the state that belongs to the isolation domain.
  Identify the operations that read/write state during execution.
    Or, more broadly, identify the in/out dependencies of every operation.
  Ensure these operations can only read/write that domain's state.

Why is isolation hard?
  Broadly, because we want to achieve both high performance and isolation.
  For performance, want to allow the isolated environment to run close to the real hardware.
    Ideally, 1 instruction in the isolated environment is 1 instruction in hardware.
  But also need to still ensure isolation.
    Every instruction must be limited to accessing the isolated domain's state.
    But that might mean more instructions to check the state being accessed.
    Efficiently and safely eliminating these checks is hard.

Example: virtual machines / OS processes.
  State: virtual memory and virtual CPU (registers).
  Each virtual machine has disjoint memory and its own separate virtual CPU.
  For performance, want to run the VM's instructions directly on the main CPU.

Technique 1: naming / translation.
  Memory of the VM lives in physical memory.
  Hardware provides page-table machinery.
    Translates the virtual address named by an instruction into a physical address.
  Two important properties of virtual memory:
    The VM's instructions see the VM's virtual memory.
    Physical memory pages not present in the page table are not accessible.
  (A sketch of this translation appears at the end of these notes.)

Technique 2: time-multiplexing.
  Save / restore state when switching between isolation domains.
  Avoids the need for a naming mechanism to interpose.
  E.g., load CPU registers before running a VM, save them before switching away.
    Guarantees that a VM sees only its own state in the registers.
    Cannot corrupt any other VM's registers (assuming their memory is not mapped).
    Cannot read any other VM's registers (not loaded; again assuming not mapped).

Technique 3: explicit checks on trap / "trap and emulate".
  Hardware has various control registers (e.g., the page-table control register).
  What happens if a VM tries to access one of these control registers?
  Common plan: use hardware support to "trap" the execution of such instructions.
    Execute the instruction in software, with whatever security checks are appropriate.

Remaining problem: implicit state that is hard to interpose on.
  Timestamp counter register.  Performance counters.
  Seemingly benign, but they indirectly leak information about other VMs.
  Hard to hide performance.
    Even if we somehow disable access to the clock, can reconstruct a clock w/ threads.

Processes and containers look a lot like virtual machines.
  More complex state: not just memory, but logical things like files, pipes, etc.
  With processes, sharing of resources is baked in, so it's not just pure isolation.
  Containers are mostly about setting a policy that ensures files are disjoint, etc.

Other "end" of the design space for isolation: language runtimes.
  Javascript, WebAssembly, Java, Native Client, ...
  The execution runtime looks somewhat different from the underlying hardware.
    E.g., in Javascript: AST of the program syntax, stack, objects accessed via references.
    E.g., in WebAssembly: structured opcodes, stack, array of memory.
    E.g., in Native Client: x86 code, but with constraints on the instructions.
  Need an efficient interpreter that runs this code on real hardware.
    Somehow represent the logical state in physical memory.
    Translate the AST/code/... onto hardware instructions.

Technique 4: compilation / software interposition.
  Add software checks to the generated code as needed.
  Translated code must be sure it only accesses state belonging to this module.
    Different invariants to enforce this, as we will discuss next.
  Translated code must keep executing other translated code.
    Jumps / calls must ensure they continue running code that maintains the invariants.
  Sometimes a complex interplay between these two.
    Especially if code appears at runtime, or code lives in accessible memory.

Javascript: can only access objects via a "reference", much like a pointer.
  The runtime type system carefully ensures a "reference" cannot be corrupted.
    E.g., cannot turn an integer into a reference pointer.
    Relatively complex: structures contain references, etc.
  Code integrity: structures can also contain function pointers / closures.
    The runtime type system similarly needs to enforce integrity for code references.
  Generated code must follow the runtime type system's rules.
    Somewhat expensive at runtime.
    E.g., when invoking "a.b()", check: is "a.b" an integer or a function, etc.

WebAssembly: a single range of contiguous memory; every address must be in-bounds.
  Can use lightweight range checks (0 <= addr < memsize); see the sketch below.
  Can use virtual memory support (same as for VMs) too, with almost no overhead.
    32-bit WebAssembly on a 64-bit machine: reserve an 8GB range.
    The memory base is fixed; offsets can only be +/- 4GB.
    Map the accessible portion of memory, unmap everything else.
  Code is static, and not part of the memory region.
    Can translate and instrument the code once.
    No need to worry about mutating code, reading code, etc.
  Computed calls: index into a table of all legal jump targets.
    Just need to check bounds (and also the type signature).
  More amenable to high-performance isolation than Javascript.
    More direct translation from wasm opcodes to hardware instructions.
    Lightweight isolation checks for memory (bounds check and/or virtual memory).
    Lightweight isolation checks for code integrity (bounds check on computed jumps).
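A minimal sketch in C of the two wasm-style software checks just described; the runtime layout and names are hypothetical, not any particular engine's:

    /* Sketch of software interposition, wasm-style: every memory access
     * is bounds-checked against a single contiguous region, and computed
     * calls go through a table of legal targets. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MEM_SIZE   (1u << 16)         /* module's linear memory size */
    #define TABLE_SIZE 4                  /* number of legal call targets */

    static uint8_t linear_mem[MEM_SIZE];  /* the module's only memory */

    typedef int32_t (*func_t)(int32_t);
    static func_t call_table[TABLE_SIZE]; /* all legal jump targets */

    static void trap(const char *why) {
        fprintf(stderr, "trap: %s\n", why);
        exit(1);
    }

    /* Emitted before every load/store: 0 <= addr < memsize. */
    static uint8_t mem_load8(uint32_t addr) {
        if (addr >= MEM_SIZE)
            trap("out-of-bounds load");
        return linear_mem[addr];
    }

    /* Computed calls are bounds-checked indexes into the table (a real
     * engine also checks the callee's type signature). */
    static int32_t call_indirect(uint32_t idx, int32_t arg) {
        if (idx >= TABLE_SIZE || call_table[idx] == NULL)
            trap("bad indirect call");
        return call_table[idx](arg);
    }

    static int32_t double_it(int32_t x) { return 2 * x; }

    int main(void) {
        call_table[0] = double_it;
        linear_mem[100] = 42;
        printf("load: %d, call: %d\n", mem_load8(100), call_indirect(0, 21));
        /* mem_load8(MEM_SIZE) or call_indirect(7, 0) would trap rather
         * than touch state outside this module. */
        return 0;
    }

The virtual-memory variant replaces the explicit check in mem_load8 with the 8GB-reservation trick above: out-of-bounds accesses hit unmapped pages and fault in hardware.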
Variant of software interposition: software fault isolation (SFI).
  Instead of compiling + adding checks, just verify that the checks are already present.
  Benefit: high performance.
    The application can pre-compile to machine code as needed.
  Benefit: possibly faster to load.
    Instead of compiling, just verify that the required checks are present.
  Downside: not portable.
    Machine code is for a specific architecture (like x86 or ARM).
  Example SFI system: Google's Native Client.
    https://en.wikipedia.org/wiki/Google_Native_Client
    Not widely used anymore.

Summary.
  Isolation is the key building block, as we saw earlier.
  Defining isolation is tricky: integrity vs. confidentiality.
    Non-interference is the strongest definition; hard to achieve.
    Non-leakage is more practical.
    Side channels: covert channels, timing channels.
  Several techniques for implementing isolation:
    Naming / translation.
    Time-multiplexing.
    Trap and emulate.
    Compilation / software instrumentation.
    Software fault isolation.
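Finally, the promised sketch of the naming / translation technique: a hypothetical single-level page table in C (real MMUs walk multi-level tables in hardware; this only illustrates the idea that unmapped pages are simply unreachable):

    /* Sketch of naming / translation: instructions name virtual
     * addresses; the page table decides which physical memory (if any)
     * those names can reach. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NUM_PAGES  1024            /* toy address-space size */

    typedef struct {
        bool     present;              /* is this virtual page mapped? */
        bool     writable;
        uint64_t phys_page;            /* physical page number */
    } pte_t;

    static pte_t page_table[NUM_PAGES];  /* one table per isolation domain */

    /* Pages not present in the table are not accessible: this is the
     * property that gives us isolation. */
    static bool translate(uint64_t vaddr, bool is_write, uint64_t *paddr) {
        uint64_t vpn = vaddr >> PAGE_SHIFT;
        if (vpn >= NUM_PAGES || !page_table[vpn].present)
            return false;              /* fault: address not mapped */
        if (is_write && !page_table[vpn].writable)
            return false;              /* fault: read-only page */
        *paddr = (page_table[vpn].phys_page << PAGE_SHIFT)
               | (vaddr & (PAGE_SIZE - 1));
        return true;
    }

    int main(void) {
        /* Map virtual page 2 (read-only) onto physical page 7. */
        page_table[2] = (pte_t){ .present = true, .writable = false,
                                 .phys_page = 7 };
        uint64_t pa;
        if (translate(0x2123, false, &pa))
            printf("vaddr 0x2123 -> paddr 0x%llx\n", (unsigned long long)pa);
        if (!translate(0x3000, false, &pa))
            printf("vaddr 0x3000 is unmapped: fault\n");
        return 0;
    }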