
Defending against Hackers: Exploit Mitigations and Attacks on Arm Cortex-A Devices

Maria "Azeria" Markstedter

With the proliferation of Arm-based mobile and IoT devices, exploit mitigations are often the front line in defending these devices from hackers. For this reason, it is important to understand how they work and what their limitations are. This talk looks at the most common exploit mitigations available for C/C++ programs running on the Arm Cortex-A architecture and how these mitigations work under the hood to block certain categories of memory-corruption-based exploits. The aim of this talk is to educate developers on how hackers can bypass individual mitigations, and on the importance of combining them to increase the level of security on these devices.


lramage
Score: 0 | 4 months ago | no reply

Thank you for presenting! As an embedded software engineer with advanced security training, I would have appreciated a more in-depth approach to some of these exploits specifically regarding embedded devices. However, this was a great overview of vulnerability mitigations in general; something that would benefit embedded developers who are unfamiliar with some of these concepts.

mrichardson
Score: 0 | 5 months ago | 2 replies

So, I have not attempted any kind of exploit so this is unfamiliar territory for me. All of the concepts regarding memory corruption and exploitation make sense. But, how exactly do you know what address to jump to in order to execute shell code?

AzeriaSpeaker
Score: 0 | 5 months ago | 1 reply

If you mean the part without mitigations where we execute shellcode on the stack: in the slides you see that we use the address of a BX SP instruction, which branches to the location SP points to (the next position on the stack after that gadget) and starts executing the instructions at that location. This is only possible if XN is not enabled, which means the stack is executable.

The address of that gadget can be derived by looking up the base address of the library plus the offset of the exact position of this instruction inside that library. Exploit devs use so-called ROP gadget hunters for that, which let you search for gadgets inside a binary and give you their offsets.
Now, the reason we can just statically use the base address of the library is that ASLR is not enabled, which means the location of that library in memory stays the same. When ASLR is enabled, the base addresses change and require the hacker to predict the base address (either with brute force, an info leak, or the other attacks I mentioned).

MSARUL
Score: 0 | 4 months ago | 1 reply

Sorry, I still didn't quite get it. Assume XN is not enabled: the only thing the exploiter can do is overwrite the LR. He can overwrite it with some address value, but he doesn't know the location SP points to. So in order to execute shellcode on the stack he needs to execute a BX SP instruction, right? But where will the BX SP instruction be located? How does he know the address of that instruction so that he can place it in LR? Could you please elaborate?

AzeriaSpeaker
Score: 1 | 4 months ago | no reply

Again, the location of SP is predictable and the attacker DOES know its location relative to the gadget placed on the stack. He also knows the address of the gadget if ASLR is disabled, because it is STATIC and can be looked up. Even with ASLR there are still ways to predict the base address of libraries.

unknown
Score: 0 | 5 months ago | no reply

If no ASLR is used, the addresses are fixed and you can find a good target by debugging or static reverse engineering. Even with ASLR there are techniques like info leaks, slide brute-forcing, or NOP sleds that can allow exploitation even when you don't know the exact target.

AndyNPR
Score: 0 | 5 months ago | no reply

This presentation was an eye opener for me as a developer on ARM Cortex-M* processors for 10+ years. While the number of hackers who understand these processors may be smaller than for MMU-capable processors, products built on them are becoming more capable and internet-connected, to the point where they can become part of very large DDoS botnets (hello, hackers listening to me type from at least 3 nearby Amazon Echos & Shows in my house), and there is less security to overcome because firmware rarely runs with restricted permissions on non-MMU processors (you're in == you're root).

One common theme I saw in the exploit development was already having the firmware images to locate address offsets of useful routines in libc and other libraries. A question I have is: how are these images obtained? Is JTAG or another local mechanism (e.g. reading an external flash chip) used to extract firmware images, or are firmware upgrade files decrypted and disassembled? Which method is more common? There have been some examples of using special timing to access a disabled JTAG connection, but that requires fairly specialized knowledge and tools. Are there other methods to extract firmware from target processors (feel free to just mention if this is covered in your book)?

Score: 0 | 5 months ago | no reply

Thanks for the presentation and the knowledge sharing, it does help in understanding to see it all come together.
That must have been a lot of work to prepare and then share in this format.
Even while I know that you've done that a "few" times already, it is an art to actually do it well.
Also thanks for sharing the access code on twitter. Wouldn't have been able to watch this otherwise. Much appreciated.
Have a great weekend!

Bruce_Lueckenhoff
Score: -1 | 5 months ago | 1 reply

Something didn't sit right after watching this presentation. After a few minutes away, I realized what it is:

Early in the presentation, the presenter shows a table in which embedded RTOSes almost completely lack mitigations (No-eXecute, Address Space Layout Randomization, Stack Canaries) found in full-blown operating systems (Linux, MS-Windows, and so on).

The presenter never addressed how the volume of known-exploits, volume of knowledgeable attackers, and so on, is one million times smaller for these RTOS's versus full OS's like Linux/MS-Windows. Furthermore, any attacks developed for one such RTOS are unlikely to be effective on another (independently developed) RTOS.

This smaller user base and the diversity of implementations found among different RTOS's makes devices using such RTOS's less vulnerable, not more, despite the full OS's ostensible advantages.

Executive Summary: A world with a bunch of idiosyncratic RTOS's, each separately developed, is more secure, than a monoculture of "just embedded Linux". This is true despite the fact that the full-blown OS has, on paper, superior defenses.

AzeriaSpeaker
Score: 1 | 5 months ago | 1 reply

Hi Bruce, I'm not sure I understand your complaint. This was not an RTOS presentation. The talk was focused on Arm Cortex-A devices with a rich OS (mobile phones, routers, cameras, etc.), and I clearly stated that in the abstract and the talk itself.

Second, you expect me to state the volume of exploits against Cortex-M and Cortex-R devices versus Cortex-A? That was not the topic of my talk.

Third, you are wrong about RTOS devices being more secure and "a million times less likely to be exploited". In fact, there have been many attacks against critical infrastructure using RTOSes, e.g. Industrial Control Systems, Smart Grids, and IIoT, making them valuable and high-impact targets.

MSARUL
Score: 0 | 4 months ago | no reply

As the majority of RTOS-based devices don't have a console, is it possible to exploit vulnerabilities in those devices?

Mel
Score: 2 | 5 months ago | 1 reply

Thanks, Azeria :)

AzeriaSpeaker
Score: 0 | 5 months ago | no reply

Thanks Mel :)

blair
Score: 0 | 5 months ago | 1 reply

Very nice, thank you. I am building from the ground up in Rust (caveat: non-ARM). Its memory safety would, in theory (and assuming proper wrapping of "unsafe" code), obviate the traditional buffer-overflow entry vector. Is that a safe assumption, or does the fact that the toolchain employs gcc to compile to the same MIPS assembly leave open non-overflow-related PC-hijacking techniques? Also, the beginning of the presentation alluded to a whole slew of fun-sounding exploits too in-depth to exhaustively describe; I was curious to what extent those would be thwarted by a language / DSL / etc. that properly keeps memory in its place?

AzeriaSpeaker
Score: 0 | 5 months ago | no reply

Hi Blair, you're completely right, I wasn't able to cover this topic in depth given the time constraints.
It's always recommended to use memory-safe languages that make memory corruption vulnerabilities less likely. Though I don't have the context of the software you are building, it is likely that you will have code bases in your device environment that have been written in other languages. This can come in the form of standard components, or even the underlying operating system you are building your software on. Even if your code base is entirely written in a memory-safe language, there are many more vulnerability classes which aren't memory-corruption based: race conditions or logic bugs, for example. When you develop in Rust, you need to be careful not to get a false sense of security from its memory safety, since other bug classes still need to be considered.

markonweb
Score: 3 | 5 months ago | 1 reply

Thanks Azeria - fantastic presentation! Your coverage of XN, ASLR, and canaries was accessible and well-organized. As a CISO who was a dev a million years ago, I really appreciated how you helped tie both of those worlds together. Looking forward to your two books!

AzeriaSpeaker
Score: 0 | 5 months ago | no reply

Thank you, Mark! I really appreciate your feedback and I'm happy you liked it. :)

alexanderentinger
Score: 1 | 5 months ago | 3 replies

Interesting talk and a good overview ;) One thing that always irks me when interacting with the security community is that there seems to be a looking-down (or gotcha) attitude toward the ignorant developer who once again used functions known to be unsafe. It appears to me that most of these folks have never worked inside a real embedded product development team, where one has to reach specific goals with constrained resources (time- and budget-wise). I feel that this makes the relationship between security engineer/researcher and embedded engineer needlessly antagonistic. What I would love to see from the security community are automated tools and processes that allow for automatic checking of potential security violations, hooked up with some form of CI. Because let's be honest - those mistakes will continue to be made.

AzeriaSpeaker
Score: 1 | 5 months ago | no reply

Hi Alexander, thanks for your feedback! I completely understand your concerns about the disparity in communication between the developer and security communities. Like in every community, there are different groups of people with different attitudes and opinions. I agree that there are people with the looking-down attitude; however, I know that the majority of security engineers and researchers are eager to help developers and vendors fix the bugs they find and improve their security. Most often, security researchers are upset with the vendor for not fixing bugs or responding to bug reports, not with the developers themselves. In my opinion, we can't expect all developers to be self-taught security experts, which is why I talk about exploit mitigations that make existing vulnerabilities harder to exploit. I regularly give workshops to developers who want to learn about vulnerabilities and exploit mitigations, and I hope that I can make these concepts more accessible to the developer community through my upcoming books.

Regarding your mention of automated tools: they do exist. There are many different tools that automatically scan for unsafe functions, either in source code or via fuzzing. As other comments have mentioned, there are various scanning tools that not only look for unsafe functions in code, but also check for common misconfigurations and missing security features.

asuchy
Score: 0 | 5 months ago | no reply

There are quite a few CI plugins already that look for unsafe functions, and I have been working on scripts for my developers to check the exploit mitigation settings. Security scanning tools that check for vulnerabilities range in price from free to very expensive. A lot depends on who works with the security teams and how. I personally don't like teams that appear to be cutting corners (pushing for sign-off without completing the review) or teams that wait until the last minute before engaging the security team. It is much easier for a security team to help fix and correct issues at the start of a project than at the end, since in the media it is the security team that takes the blame, and that can destroy our careers.

Cameron
Score: 0 | 5 months ago | no reply

You should check out Polyspace. It's a good tool for checking code in place, without needing a compiler link-up. It works great in a CI environment, and also allows for continual re-checks whenever a new commit gets pushed.

vandanasalve
Score: 3 | 5 months ago | 1 reply

Great presentation, lots of learning and exploring. Thanks for the talk!!

AzeriaSpeaker
Score: 0 | 5 months ago | no reply

Thank you! Glad you enjoyed it!

Marcio
Score: 2 | 5 months ago | 2 replies

Very cool presentation. She explains things as clearly as possible for a 1-hour presentation. The only issue is that the video has some cuts between sections.

AzeriaSpeaker
Score: 0 | 5 months ago | no reply

Hey Marcio, I agree, the transition cuts are awful. I'm not very good at video editing. :)

Justin
Score: 0 | 5 months ago | no reply

Yeah, I was about to comment on this too. Too bad, because she was making an excellent point near 26:30.
There is a segmentation fault / the overflow flag in the status register becomes '1'. Depending on the ISA, the next instruction can point to a state where the user now has unfettered control (or the PC just resets to 0x00000000), either of which is an undesired security breach. A beautiful exposition deprived of its rightful denouement!!

Lionel
Score: 5 | 5 months ago | 2 replies

Slides?

AzeriaSpeaker
Score: 0 | 5 months ago | no reply

You can find them on the left (they were uploaded in the middle of the talk)

Bruce
Score: 0 | 5 months ago | no reply

There's a link to download a pdf of the slides on the left hand side of the page.

DanR
Score: 1 | 5 months ago | 2 replies

Would be interested in a copy of your slides. Thank you so much. I've been developing at system level for years -- a former compiler developer, so the explanations were a little slow paced for my taste. But very enlightening. Thanks!

AzeriaSpeaker
Score: 0 | 5 months ago | no reply

Hi Dan, thanks for your feedback. Glad to hear you were able to pick up the concepts quickly. The audience of this conference has different levels of technical experience, which is why I decided to explain it in a way that most people can understand, even if they lack knowledge in assembly. I hope you were still able to get something out of this talk. :)

Michael-K
Score: 0 | 5 months ago | no reply

I feel dumb. I wouldn't have been able to understand it if it were faster. Great talk! Looking forward to the slides.

Michael-K
Score: 4 | 5 months ago | 2 replies

I'm loving this talk. The only mitigation I knew about is XN. Never thought I'd understand the mechanics behind exploiting an overflow and what the mitigations actually protect against. Thank you for this educational talk. I'm learning a lot here.
EDIT: will the slides be available somewhere?

Bruce
Score: 0 | 5 months ago | 1 reply

There's a link to download a pdf of the slides on the left hand side of the page.

unknown
Score: 0 | 5 months ago | no reply

Not for this talk (yet?)

AzeriaSpeaker
Score: 0 | 5 months ago | no reply

Thanks! You can find the slides on the left.

piersh
Score: 2 | 5 months ago | 2 replies

I presume adding mitigations is in addition to removing as many buffer overflows as possible!
A copy of your slides would be really useful as I'd like more time to think through some of the details more carefully.
Advice on best practices for younger / time pressured software devs should be more widely available - will you cover this area in your books?

AzeriaSpeaker
Score: 0 | 5 months ago | no reply

Hi Piersh,
Yes, my first book will be a deep dive into understanding assembly and reverse engineering compiled binaries. The second book will cover various vulnerability classes, including unsafe functions (in both C and assembly view) and their safer versions. It will also contain an in-depth overview of Arm exploit mitigations: how they work, what they protect against, and how hackers bypass them. The target audience for these books is both security researchers and embedded systems developers.

Stephane.Boucher
Score: 0 | 5 months ago | no reply

You can find the slides on the left hand side

Cameron
Score: 0 | 5 months ago | 1 reply

Hi! Thanks for the talk, very informative. I was wondering what application you are using to break up your blocks of assembly code? I'm referring to the slide at the 20:04 timestamp.

AzeriaSpeaker
Score: 0 | 5 months ago | no reply

Hi Cameron, the tool you are seeing in that slide is called IDA Pro. It's a disassembler for translating compiled binaries into assembly/disassembly. If you are looking for a disassembler, I would suggest you get Ghidra (free) since IDA Pro is very expensive.

Andrew
Score: 1 | 5 months ago | no reply

Very interesting presentation.
