
Security Lessons from Exploits and Attacks

Prabhaker Mateti

Wright State University

www.cs.wright.edu/~pmateti


1 Abstract

The decades-old "buffer overflow" attacks have taught us that programmers must master array index arithmetic, that code may be fetched from the stack, and that static code analysis of software about to be deployed is a must.

What are we learning from the latest? StackClash, StageFright, HeartBleed, ShellShock, WannaCry, Petya (to name just a few).

We suggest a few (surprising?) lessons. Why are we not discovering bugs that could become security exploits before the bad guys remind us? For the long-term future of secure systems, we should focus on better education, not just on patching so that known attacks are mitigated.

2 This Talk

  1. This talk is not about "results". It is mostly about "issues" in "whitehat security research".

2.1 Exploits/ Attack Descriptions

  1. CVE classification
  2. An analysis of its effects and internal code structure.
  3. A conceptually deep description of the technique?
  4. Prevention, Mitigation, Detection, Repair
  5. Side Effects of the above

2.2 What Should/ Do We Learn from Exploits?

  1. Who is "we"?
  2. "Them" is the computer-using public.
  3. "We" is cyber security students, teachers, and researchers.

2.3 Obsession (?!) with Terminology

  1. Programs vs Processes
  2. Viruses vs Worms
  3. Bugs, Vulnerabilities vs Exploits and Attacks
  4. Prevention, Mitigation, Detection, Repair
  5. Security Aware Design

3 Buffer Overflows

The decades-old attacks of "buffer overflows" have taught us that

  1. we rename the attacks we study several times:
    1. Stack Smashing
    2. Arbitrary Code Execution
    3. Code Injection
    4. None of the new names is better than the original

3.1 Buffer Overflows have taught us that #2

  1. programmers must master array index arithmetic

    // % g++ -pedantic -Wall -std=c++14 ptr-arith.C -o ptr-arith
    // Compiles cleanly: a[2] and 2[a] name the same element, because
    // E1[E2] is defined as *(E1 + E2); both pointers print the same address.

    #include <stdio.h>

    int main() {
      int a[10];
      int * p = & a[2];
      int * q = & 2[a];
      printf("p %p q %p\n", (void *) p, (void *) q);
      return 0;
    }
    

3.2 Buffer Overflows have taught us that #3

  1. The CPU architecture and the OS permit code to be fetched from the stack and from other segments (see the sketch below).
  2. Context, in virtual memory terms: segments, pages, frames; permissions: rwx.
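
To make the rwx permission bits concrete, here is a small Linux-only sketch (ours, not part of the original slides) that dumps the process's own memory map. The [stack] and [heap] lines show each region's r/w/x flags; on current systems the stack is normally mapped without x, which is why the classic execute-code-on-the-stack attack needs extra tricks today.

    /* Linux-only sketch: print this process's memory segments and their
       rwx permissions; look for the [stack] and [heap] lines. */
    #include <stdio.h>

    int main(void) {
      FILE *maps = fopen("/proc/self/maps", "r");
      if (maps == NULL) { perror("fopen /proc/self/maps"); return 1; }
      char line[512];
      while (fgets(line, sizeof line, maps) != NULL)
        fputs(line, stdout);
      fclose(maps);
      return 0;
    }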

3.3 Buffer Overflows have taught us that #4

  1. Static code analysis of program source is a must (see the example below).
  2. http://frama-c.com/ FOSS, C/ C++
  3. https://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis
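
As a tiny illustration of what such tools catch (the snippet is ours, not from the slides), the loop below overruns buf by one element. Frama-C's value analysis reports the out-of-bounds write, and recent compilers can warn about it too; the point is that the bug is findable before the program is ever run.

    /* Off-by-one: the loop writes buf[0] .. buf[N], but buf has only N
       elements.  Static analysis flags the final write without executing
       the program. */
    #include <stdio.h>

    #define N 10

    int main(void) {
      int buf[N];
      for (int i = 0; i <= N; i++)   /* bug: <= should be < */
        buf[i] = i;
      printf("%d\n", buf[0]);
      return 0;
    }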

3.4 Buffer Overflows have taught us that #5

  1. It can take 10+ years to bring it under control
  2. [Anecdotal; Opinion]

4 Recent Exploits and Attacks

What are we learning from the latest? E.g.,

  1. StackClash, Jun 2017
  2. StageFright, Jul 2015
  3. HeartBleed, Apr 2014
  4. ShellShock, Sep 2014
  5. WannaCry, May 2017
  6. Petya, Jul 2017

5 StackClash

  1. Arbitrary Code Injection and Execution
  2. CVE-2017-1000364 for the Linux kernel
  3. CVE-2017-1000366 for glibc
  4. https://blog.qualys.com/securitylabs/2017/06/19/the-stack-clash

5.1 Unix Process Memory Model

  1. Text Segment. Instruction segment: code + constant data. Read-only.
  2. Data Segment. Contiguous (in a virtual sense) with the text segment.
    1. initialized data
    2. uninitialized (or 0-initialized) BSS (Block Started By Symbol)
  3. Heap Segment. Managed by malloc/new and free/delete, which use the brk and sbrk system calls.
  4. Stack Segment. Method-local variables and function call information. The stack grows towards the uninitialized data segment.
  5. The U Area. A per-process kernel area (including the kernel stack): open files, current directory, signal actions, accounting information.
  6. extern int etext, edata, end; mark the ends of the text, data, and BSS segments (see the sketch below).
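
The short Linux/ELF sketch below (ours) makes the layout visible: it prints the linker-provided symbols etext, edata, and end alongside one address each from the data segment, the BSS, the heap, and the stack. Symbol availability and the exact layout are toolchain- and ASLR-dependent, so treat the output as illustrative.

    /* Print the classic segment boundary symbols and one sample address
       from each region (Linux/ELF; glibc provides etext, edata, end). */
    #include <stdio.h>
    #include <stdlib.h>

    extern int etext, edata, end;   /* end of text, initialized data, BSS */

    int initialized_global = 42;    /* data segment */
    int uninitialized_global;       /* BSS */

    int main(void) {
      int on_stack = 0;
      int *on_heap = (int *) malloc(sizeof *on_heap);
      printf("etext %p  edata %p  end %p\n",
             (void *) &etext, (void *) &edata, (void *) &end);
      printf("data  %p  bss %p\n",
             (void *) &initialized_global, (void *) &uninitialized_global);
      printf("heap  %p  stack %p\n", (void *) on_heap, (void *) &on_stack);
      free(on_heap);
      return 0;
    }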

5.2 What is a Stack Guard Gap?

  1. Access to the stack guard page triggers a trap. The guard page therefore serves as a divider between the stack region and the other memory regions in the process address space, so that sequential stack access cannot be fluently (!!) transformed into access to another memory region adjacent to the stack, and vice versa. (A conceptual sketch follows this list.)
  2. https://access.redhat.com/security/vulnerabilities/stackguard The do_anonymous_page function in mm/memory.c in the Linux kernel before versions … does not properly separate the stack and the heap, which allows context-dependent attackers to execute arbitrary code by writing to the bottom page of a shared memory segment
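
A conceptual sketch of the Stack Clash primitive (ours, heavily simplified; it is not the Qualys proof of concept, and clash_step is a made-up name): a single stack allocation larger than the guard gap moves the stack pointer past the guard page without ever touching it, so writes into the low end of the buffer can land in whatever mapping sits next to the stack. Compiling with -fstack-clash-protection makes the compiler probe the frame page by page, which does hit the guard page; the kernel fix for CVE-2017-1000364 also enlarged the default gap from one page to 1 MB.

    /* Conceptual sketch only: one oversized stack frame can step over a
       guard gap that is smaller than the frame.  Real exploitation also
       needs an adjacent mapping and attacker-controlled sizes and data. */
    #include <string.h>

    void clash_step(const char *attacker_data, size_t len) {
      char frame[2u * 1024 * 1024];       /* larger than a 1 MB guard gap */
      memcpy(frame, attacker_data, len);  /* low offsets may lie beyond the
                                             guard page, in the next mapping */
    }

    int main(void) {
      char small[16] = {0};
      clash_step(small, sizeof small);    /* harmless here: writes stay inside frame */
      return 0;
    }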

6 StageFright

  1. Stagefright is the name given to a group of bugs in the Android multimedia library component called "Stagefright". Two exploits have been published. It has not become an attack.
  2. CVE-2015-1538 CVE-2015-1539 CVE-2015-3824 CVE-2015-3826 .. -3829 CVE-2015-4480 CVE-2015-6602 CVE-2016-2814
  3. Effect: Remote code execution and privilege escalation.

6.1 StageFright Coding Error

  1. Integer overflow, undetected because of where the type coercion appears in the expression.
  2. A size calculation in the stagefright library could overflow without being detected. Many media-based services and applications previewed or opened media files using this library, often without alerting the user. A cleverly constructed media file turns the bug into a vulnerability, and because the media service ran with high enough privileges, the vulnerability became an exploit.

6.2 StageFright Coding Error

  1. Overflow Error Undetected

    uint64_t allocSize = mTimeToSampleCount * 2 * sizeof(uint32_t);

  2. Corrected Version (a runnable comparison of the two follows)

    uint64_t allocSize = mTimeToSampleCount * 2 * (uint64_t) sizeof(uint32_t);
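
A self-contained comparison of the two expressions (our harness; the cast to uint32_t stands in for the 32-bit size_t of the 32-bit Android devices the library ran on): multiplying in 32 bits wraps around before the value is widened, so a large sample count yields a tiny allocation size, and later writes overflow the undersized buffer.

    /* Why the (uint64_t) cast matters: force the multiplication into 32 bits
       (as with a 32-bit size_t) and the product wraps modulo 2^32. */
    #include <stdint.h>
    #include <stdio.h>

    int main() {
      uint32_t mTimeToSampleCount = 0x40000000u;   /* attacker-controlled count */

      /* Buggy: multiply in 32 bits, widen too late. */
      uint64_t buggy = (uint32_t) (mTimeToSampleCount * 2 * (uint32_t) sizeof(uint32_t));

      /* Fixed: widen first, then multiply in 64 bits. */
      uint64_t fixed = mTimeToSampleCount * 2 * (uint64_t) sizeof(uint32_t);

      printf("buggy allocSize = %llu\nfixed allocSize = %llu\n",
             (unsigned long long) buggy, (unsigned long long) fixed);
      return 0;
    }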

7 HeartBleed

  1. Heartbleed is a security bug in the OpenSSL cryptography library.
  2. CVE-2014-0160
  3. Cause: incomplete input validation (a missing bounds check) in the implementation of the TLS heartbeat extension; a "buffer over-read" (sketched below).
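
A stripped-down model of the bug class (ours; the function and parameter names are invented, and this is not OpenSSL's actual code): the responder trusts the length claimed in the peer's heartbeat message instead of the number of bytes actually received, so memcpy reads past the record and copies adjacent heap memory into the reply. The entire fix is the one bounds check shown in the comment.

    /* Simplified model of the Heartbleed buffer over-read (not OpenSSL code).
       record_len is how many payload bytes really arrived; claimed_len is
       what the peer's heartbeat header says. */
    #include <stdlib.h>
    #include <string.h>

    unsigned char *build_heartbeat_reply(const unsigned char *payload,
                                         size_t record_len, size_t claimed_len) {
      /* The fix: if (claimed_len > record_len) return NULL; */
      (void) record_len;                      /* unused because the check is missing */
      unsigned char *reply = (unsigned char *) malloc(claimed_len);
      if (reply == NULL) return NULL;
      memcpy(reply, payload, claimed_len);    /* over-read: copies past the record */
      return reply;
    }

    int main(void) {
      unsigned char record[8] = { 'b', 'i', 'r', 'd' };   /* 8 bytes actually arrived */
      unsigned char *reply = build_heartbeat_reply(record, sizeof record, 64);
      /* reply now also holds 56 bytes of whatever happened to follow record */
      free(reply);
      return 0;
    }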

8 ShellShock

  1. Shellshock [aka Bashdoor] is a "family of security bugs" in the FOSS Bash shell. One bug causes Bash to parse and execute the string concatenated to the end of a function definition passed in an environment variable (see the test harness below).
  2. CVE-2014-6277, CVE-2014-6278, CVE-2014-7169, CVE-2014-7186, CVE-2014-7187, …
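
The classic one-line test, wrapped in a small C harness (ours) so it stays in the same language as the other examples; it assumes a POSIX system with bash on the PATH. An unpatched bash executes the command appended to the exported "function" x while importing the environment and prints "vulnerable"; a patched bash prints only the test line.

    /* Runs the well-known Shellshock probe against the installed bash.
       Only "shellshock test" should appear on a patched system. */
    #include <stdlib.h>

    int main(void) {
      setenv("x", "() { :;}; echo vulnerable", 1);
      return system("bash -c 'echo shellshock test'");
    }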

9 WannaCry

  1. The WannaCry ransomware targeted Windows machines, encrypting files and demanding ransom in Bitcoin.
  2. CVE-2017-0144
  3. A worm: it propagated using the NSA-developed EternalBlue exploit of the SMB protocol.

10 Petya

  1. Petya targeted Windows machines. It infects the master boot record with a payload that encrypts the file system table (MFT) and displays the ransom note, preventing the machine from booting. The original Petya does not encrypt files one by one; variants encrypt the first MB of each file.
  2. It spread via a compromised software update of the Ukrainian MeDoc software, then across the same LAN, possibly using EternalBlue.

11 Analysis

  1. The next few slides are analysis, and lessons.
  2. Main Question: Have we fixed the past exploits so they cannot become attacks again?

12 Bugs, Vulnerabilities, and Exploits

  1. Guess: 90+% of security incidents are traceable ultimately to bugs.
  2. Bugs are software errors made in the implementation, design, specs, requirements
  3. Status Report: We cannot make bugs go away.

12.1 Exploits

  1. Is every bug a (security) vulnerability?
  2. Can every bug be built into an exploit?
  3. Can every exploit be built into an attack?

12.2 Attacks

  1. How to rigorously label them as coding errors, design errors, etc.
  2. Case studies: StackClash, StageFright, HeartBleed, ShellShock (Bash), WannaCry, Petya
  3. CVE classification? Basis? How do we relate variations?

12.3 Rigor?

  1. How to label "bugs" as coding errors, design errors, etc. cf: Knuth on TeX Errors
  2. Bugs, vulnerabilities, exploits, attacks: Describe Rigorously
  3. Define/ Detect "malware" [before community labeling]
  4. Define/ Detect "ransomware" [before community labeling]

13 Possibilities for Building Secure Systems

  1. Bug-Free Development of Software
  2. Crowd Sourced Trust
  3. A Security-Aware CPU Architecture
  4. A Security-Aware OS
  5. Encryption at the Core
  6. Security-Aware Libraries

14 Re-Doing the OS

  1. The main functionality of an OS: Given the pathname of a program file, create a process.
  2. Must be enhanced with "Is it OK to create a process?"
  3. Status Report: Linux, Windows, … have 1000+ Bugs in every release.
  4. Sources?: Web search on "kernel vulnerabilities" and subscribe to kernel developer mailing lists. [Take it as opinion, if you wish.]

14.1 SysCall Collection

  1. An OS can be defined as a collection of system calls.
  2. Slowly growing.
  3. Never shrinking.
  4. A typical OS has 350+ syscalls.
  5. Re-evaluate the need for all these syscalls (see the sketch below).
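
One concrete way to act on item 5 today (a Linux-specific sketch of ours, not something the slides prescribe): seccomp's strict mode shrinks a process's syscall surface to read, write, exit, and sigreturn, which is enough for many compute-only worker processes and makes every other syscall fatal.

    /* Linux-only: after entering strict seccomp, only read(2), write(2),
       exit(2), and sigreturn(2) are permitted; any other syscall (even
       exit_group, which glibc's exit path uses) kills the process. */
    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
      if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
        return 1;
      const char msg[] = "still allowed: write()\n";
      write(1, msg, sizeof msg - 1);
      syscall(SYS_exit, 0);     /* raw exit(2); returning from main would
                                   call exit_group() and be killed */
      return 0;                 /* not reached */
    }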

14.2 Provenance

  1. Pedigree, Ancestry
  2. Code Signing
  3. Crowd Sourced Trust

14.3 Sandboxing

  1. Virtual Machines
  2. Container Technology
  3. New Approaches: MirageOS, QubesOS, …

15 Bug-Free Development of Software

  1. Semantic checking – not just syntax and nice GUI
  2. Hundreds of tools exist – scant usage
  3. Education?!

15.1 Development of Better Software

Sad but true: a typical CS graduate cannot answer:

  1. What is an Abstract Syntax Tree?
  2. What is a Class Invariant? (see the sketch after this list)
  3. What conditions must exist for Deadlocks or Livelocks to happen?
  4. What are ASLR and ROP? [not just acronym expansion]
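
To make one of these questions concrete, here is a minimal, purely illustrative class invariant in C++ (BoundedBuffer is our made-up example, not from any particular course): every public operation must preserve items.size() <= capacity, and the private check() asserts that at each public boundary.

    // Illustrative only: a class invariant stated explicitly and checked
    // at the boundaries of every public operation.
    #include <cassert>
    #include <cstddef>
    #include <vector>

    class BoundedBuffer {
      std::vector<int> items;
      std::size_t capacity;

      // Invariant: items.size() <= capacity.  Established by the constructor,
      // preserved by every public member function.
      void check() const { assert(items.size() <= capacity); }

    public:
      explicit BoundedBuffer(std::size_t cap) : capacity(cap) { check(); }

      bool push(int v) {
        check();
        if (items.size() == capacity) return false;  // refuse rather than break the invariant
        items.push_back(v);
        check();
        return true;
      }

      std::size_t size() const { check(); return items.size(); }
    };

    int main() {
      BoundedBuffer b(2);
      b.push(1); b.push(2);
      return b.push(3) ? 1 : 0;   // third push is refused; the invariant holds
    }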

16 Crowd Sourced Trust

  1. Is program P to be trusted? What say you?
  2. Resolving conflicting details
  3. Algorithmically checkable facts
  4. Tracking ancestry and forking of programs
  5. Assumption: The number of attacks is not that large [< 2^64 ?!]

17 Parting Thoughts

  1. Computing has become a must-know topic; cyber security should be too.
  2. Security and Privacy are fundamental human rights.

18 References

  1. CVE Common Vulnerabilities and Exposures https://cve.mitre.org/
  2. Ross Anderson, Security Engineering – The Book, 1080pp, 2008, http://www.cl.cam.ac.uk/~rja14/book.html
  3. Privacy and Human Rights. An International Survey of Privacy Laws and Practice. http://gilc.org/privacy/survey/intro.html
  4. Security of person. https://en.wikipedia.org/wiki/Security_of_person
  5. Prabhaker Mateti, Android libStageFright Vulnerabilities, Work-in-Progress, 18pp, July 2017.

19 End


Copyright © 2017 www.wright.edu/~pmateti • 2017-08-21