Hacker summer camp 2021: my notes

Unfortunately, again this year, I wasn’t able to attend BSides LV / BlackHat US / DEFCON in person.

I did however try to watch a few ICS-related talks, and here are my thoughts. Please be aware that this is not an exhaustive list of all interesting talks!


As a trainer, I received a pass to “attend” the conference. Since all trainings happened remotely, I stayed in France and watched a few talks. They’re all available for replay on SwapCard if you registered (and paid 😉 ) for BlackHat.

A Broken Chain: Discovering OPC UA Attack Surface and Exploiting the Supply Chain

In this talk, Eran Jacob from OTORIO explains how a large number of OPC-UA implementations rely on very few software libraries, mostly provided by the OPC Foundation.

The research was then focused on these stacks, and several vulnerabilities were identified:

  • CVE-2021-27432: sending large XML data, which happens pre-authentication, can crash the OPC-UA server
  • CVE-2021-27434: an XXE vulnerability in the XML processing, allowing access to sensitive data on the OPC-UA server or, more broadly, on the OPC-UA server’s subnet
  • CVE-2021-32994: Denial of Service in Softing OPC-UA C++ SDK
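
The XXE issue (CVE-2021-27434) is a classic XML-processing pitfall. The affected stacks are not written in Python, but a minimal sketch of both the attack pattern and a defensive check looks like this (payload path and element names are illustrative):

```python
import xml.etree.ElementTree as ET

# Illustrative XXE payload: if the parser resolves external entities,
# &xxe; expands to the content of a local file and leaks it to the attacker.
XXE_PAYLOAD = """<?xml version="1.0"?>
<!DOCTYPE data [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<data>&xxe;</data>"""

def parse_untrusted_xml(text: str) -> ET.Element:
    # Defense in depth: refuse any document declaring a DTD, which is
    # where external entities (the XXE vector) are defined.
    if "<!DOCTYPE" in text:
        raise ValueError("DTD / external entities not allowed")
    return ET.fromstring(text)
```

Rejecting DTDs outright is cruder than what a full OPC-UA stack would do, but it blocks both XXE and "billion laughs" style payloads in one check.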

The author also recommended the following research:


As mentioned in my training, OPC-UA clearly is one of the ICS protocols of the future and provides long-awaited security features. However, it is a very complex protocol, and using it in an insecure way (like security=none, as I’ve seen in assessments) is actually worse than using Modbus.

In this talk, we learn about vulnerability inheritance across the different software components used by most vendors. This is another call to start working on SBOMs (Software Bills of Materials).

Lastly, the vulnerabilities identified probably could have been found by performing a security review of the code. As an integrator/vendor relying on external software components, it is also part of your duty to ensure that this external code complies with your security requirements, and the ones from your customers.

Hacking a Capsule Hotel – Ghost in the Bedrooms

While not strictly related to ICS, I liked this presentation by Kya Supa from LEXFO, during which he explained how he was able to get access to most connected devices (lights, curtains, fans…) in all “rooms” of a capsule hotel.

Your Software IS/NOT Vulnerable: CSAF, VEX, and the Future of Advisories

Vulnerability management is a very difficult job, and quite frankly, we’re mostly bad at it. Why? It’s still a largely manual job of reading and analyzing vulnerabilities and comparing them to your asset inventory (or delegating this job to an external provider).

In this talk, Allan Friedman from NTIA and Thomas Schmidt from BSI present two important concepts/tools:

  • CSAF (Common Security Advisory Framework): a standard defining a way to automate security advisories by making them available in a machine-readable format
  • VEX: a sort of “negated” security advisory, i.e. a way to tell customers that your product is not affected by a specific vulnerability. This was added to CSAF as a specific profile.
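
To make this concrete, here is a hedged sketch of what consuming machine-readable advisories could look like. The record layout below is a simplification, not the actual CSAF 2.0 / VEX JSON schema; the product names and CVE IDs are illustrative:

```python
# Simplified, VEX-like statements (illustrative only; the real CSAF 2.0
# schema is far richer): each record says whether a product is affected.
vex_statements = [
    {"vulnerability": "CVE-2021-44228", "product": "historian-3.2",
     "status": "not_affected", "justification": "vulnerable_code_not_present"},
    {"vulnerability": "CVE-2021-44228", "product": "scada-hmi-5.1",
     "status": "affected"},
]

def actionable(statements, inventory):
    """Keep only vulnerabilities affecting products we actually run."""
    return [s for s in statements
            if s["status"] == "affected" and s["product"] in inventory]
```

The whole point of VEX is the not_affected records: tooling can silently discard the non-issues instead of a human triaging each one.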


As software dependencies are everywhere, using SBOM and CSAF seems like a great way to achieve more efficiency while reducing the manual labor and alert-fatigue from security teams.

What can you do?

  • As an end client, require your vendors to provide all advisories in machine-readable format
  • As a vendor, doing so will reduce friction with clients. Being able to explicitly state that your product is not vulnerable to the latest vulnerability in one of your software components will likely reduce the workload for your clients, and for your support team.

A Hole in the Tube: Uncovering Vulnerabilities in Critical Infrastructure of Healthcare Facilities

In this talk, Ben Seri and Barak Hadad from Armis showcase their security research into PTS (Pneumatic Tube Systems), which are used, for example, in a lot of US hospitals.

They identified numerous, very classical, vulnerabilities in the embedded devices used in such systems.


Fortifying ICS – Hardening and testing

The topic of hardening systems is rarely mentioned in ICS cybersecurity, but plays a very important role in preventing attacks or slowing them down.


  • You can use GPO (Group Policy objects) to harden a Windows machine that is not domain-joined: it’s called LGPO (Local GPO)
  • Network interfaces tend to keep their default bindings on Windows
  • Expect things to break. Virtualized systems are easier to harden as you can use snapshots to easily revert if a setting is breaking normal operations
  • Hardening can happen during FAT and SAT. Those are specific moments in a project lifecycle during which you can perform in-depth testing without impacting production. Performing hardening tests during FAT/SAT also allows vendors/integrators/asset owners to get a better understanding of what we’re trying to achieve, and sometimes of how their product works 😉
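
As a sketch of the LGPO idea: Microsoft’s LGPO.exe (from the Security Compliance Toolkit) can import a GPO backup on a stand-alone, non-domain-joined machine. The wrapper below is a hypothetical illustration; check the LGPO documentation for the exact flags before relying on it:

```python
import subprocess
from pathlib import Path

def apply_local_gpo(backup_dir: str, dry_run: bool = True):
    """Apply a GPO backup to a non-domain-joined Windows machine via
    LGPO.exe. Sketch only: the /g flag (import a GPO backup) is taken
    from the LGPO documentation; verify it for your toolkit version."""
    cmd = ["LGPO.exe", "/g", str(Path(backup_dir))]
    if dry_run:
        return cmd  # let the caller review the command before running it
    return subprocess.run(cmd, check=True)
```

A dry-run mode like this is handy in ICS: you want to review (and snapshot, per the bullet above) before anything touches the machine.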

You can also take a look at Cutaway’s SAWH (Stand-Alone Windows Hardening), which was recently published on GitHub. It’s a first step to quickly secure your machines. I’m more in favor of integrating cybersecurity as part of the project requirements, and asking the vendor to either:

  • Use your own hardened template OS
  • Comply with your hardening guide

and document exceptions. But I understand it’s not always possible.

Bottom-Up and Top-Down: Exploiting Vulnerabilities In the OT Cloud Era

Sharon Brizinov and Uri Katz from Claroty take a look at vulnerabilities in cloud-connected automation.

This new paradigm implies that all communication from/to the PLCs and remote I/Os goes through a cloud platform of some sort.

For this study, they took a deeper look at the Codesys solution for cloud automation, centered around Codesys PLCs (Wago PFC 100/200 in this example) and the Codesys automation server, which allows central management of all PLCs.

They managed to identify two attack paths:


By exploiting overflows in a configuration service running on TCP port 6626 on the PLC, they could then exploit the lack of anti-CSRF protection in the web management console to add a new user there.
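
For readers unfamiliar with CSRF: the missing protection is typically a synchronizer token tied to the session. A minimal, hypothetical sketch of the kind of check the web console lacked (function and field names are made up for illustration):

```python
import hmac
import secrets

def new_session():
    # A fresh, unguessable token is issued per session and embedded
    # in every legitimate form the console serves.
    return {"csrf_token": secrets.token_hex(16)}

def handle_add_user(session, form):
    # A forged cross-site request cannot know the session's token,
    # so this comparison is what stops the "add a new user" attack.
    submitted = form.get("csrf_token", "")
    if not hmac.compare_digest(submitted, session["csrf_token"]):
        raise PermissionError("CSRF token missing or invalid")
    return f"user {form['username']} created"
```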


By creating a malicious Codesys package and publishing it to the Codesys store, an attacker could execute code on the workstations on which the package was installed. This package could steal the credentials of the legitimate cloud server user and send them to the attackers. They would then be able to perform all kinds of admin tasks on all the connected PLCs, up to and including running arbitrary code on the devices.

Takeaways / thoughts

  • Centrally managing all your PLCs is an appealing idea, but clearly comes with challenges. Defining the perimeter we’re comfortable centralizing in a single platform needs to include cybersecurity considerations.
  • It reminds me of the work on the PLCnext platform by Kelly Leuschner: https://cs3sthlm.se/program/presentations/kelly-leuschner/
  • Codesys has a pretty bad track record for security, and this seems to extend to Codesys automation.

Security Coding Practices for PLCs

This is a project that I know quite well, since I watched the starting point at S4 conference in Miami last year.

PLC code security is probably not the first topic to address in a cybersecurity program, but will be a logical next step once you cover the basics. I hope to be able to talk in more depth about this project in a dedicated blog post.

In the meantime, check https://www.plc-security.com/, and contribute if you can!

Remote training: the tools I use

I decided to write a short blog post describing the technical solutions I used during my BlackHat US training. All BlackHat trainings happened remotely this year, and this doesn’t come without challenges. The first one is of course attendee engagement, and the second is the labs, since they rely on real ICS hardware. I publish this blog post hoping it can help/inspire other trainers facing the same challenges.

Attendee engagement

This was not my first remote training, but I was surprised to see that this time, attendees were not turning their webcams on, nor were they participating vocally. Most of the questions/comments happened in Zoom’s chat, and that felt a bit strange to me. “Reading the room” is important when training to make sure people are understanding what you’re trying to explain; it’s usually quite easy to tell visually 🙂

To make sure attendees were getting the core messages, I used a platform called Beekast. Beekast allows you to create polls, quizzes, and all kinds of user interactions.

Below are a few examples of quizzes I used at the end of each module:

During the Capture the Flag event, it is also difficult to know who is following the instructions. I set up a CTFd server, not for the competition, but to have a way of knowing who’s able to complete the tasks, and set the pace accordingly.

Example of CTFd scoreboard allowing the trainer to see who’s following or not

I also used a shared Google Doc, in which it’s easier to share command lines:

Technical setup for the CTF

I usually come to conferences with my ICS training setup, which fits in a suitcase, and use a small WiFi access point to let attendees connect, using the virtual machines I provide. Remotely, it’s unfortunately not so easy.

I decided to try something new for BlackHat, by hosting the virtual machines for attendees in the cloud.

I used Guacamole to allow attendees to access the VMs directly from a standard web browser. I was worried about the graphical performance, but a very standard server with no graphics card worked perfectly.

I rented a dedicated server from OVH, hosted in Canada (since I expected most attendees to be in the US region). I installed VMware ESXi to manage the VMs.

I created 2 template VMs for the attendees: one Windows 10 and one Kali Linux. I then used this script to deploy 20 copies of each one.

I used a 24-core server with 256 GB of RAM, which was more than enough to run the ~45 VMs (CPU usage was around 40% most of the time, peaking at 70-80%).

Most of the exercises are performed directly on the virtual machines, but for the CTF we also need to access the specific hardware, which was temporarily installed in my home. So I set up a site-to-site VPN between the cloud server in Canada and my local ESXi server, which is itself connected to the PLCs and an additional laptop hosting virtual machines.

The whole setup architecture looks like this:

I also set up a Twitch stream that allowed attendees to see in “real time” the impact of their attacks on the hardware setup. I used OBS with 3 webcams to capture the setup and the screens of the SCADA VMs. It looked like this:

Hope this will be useful to some!

Certified PLCs, secure PLCs?

Programmable Logic Controllers (PLCs) are often seen as one of the major reasons Industrial Control Systems are insecure. These devices, even today, are indeed riddled with critical vulnerabilities. Even worse, they have vulnerabilities by design, also known as forever-days.

While securing ICS is definitely possible even though we use very insecure devices, it is a legitimate worry, and manufacturers need to step up their product security game.

S4x20: A write-up

This year, I attended the S4 conference in Miami South Beach for the second time. It is a great event, one of the very few cybersecurity events focused on ICS. I will try in this post to mention some of the talks I attended, as well as give the main takeaways I go back home with.
Most of the slides are unavailable at the time of writing, so a lot of this is from memory and Twitter searches; please excuse any imprecision.
Also, this is just a write-up of “my” S4, so there are plenty of things I will not cover (On Ramp, sponsor stage, most of the main stage, the Pwn2Own competition, 3rd floor activities, cabana sessions…).


14 Hours and an Electric Grid (Jason Larsen)

The goal of this talk was to try to challenge some of our assumptions regarding the offensive capabilities of our adversaries.
To do so, Jason took us through the story of the first 14 hours of a security assessment he recently did.
The target of this assessment was a Moxa serial-to-Ethernet converter, very similar to the ones that were attacked in Ukraine in 2015.
In a nutshell, below are the steps Jason took in 14 hours:

  • Perform a port scan to identify exposed services
  • Analyze the legitimate packets during a firmware update
  • Analyze the firmware
  • Modify the firmware to incorporate additional features, like altering the values received from / sent to the controllers connected over serial

While it’s certainly impressive, the main takeaway is that the 2015 Ukraine attackers only needed step 2: sending an invalid firmware that would crash the device or make it unusable.

Key takeaway: real-world attacks on ICS are much simpler than most of the research work published by the security community, and we tend to overestimate the effort required to perform such attacks.

Exploitable Vulnerabilities Hidden Deep in OT (Mark Carrigan)

In this presentation, the focus shifted towards “configuration-related” vulnerabilities, as opposed to “patchable” vulnerabilities.
Industrial Control Systems insecurity does not stop at the network or system level. Very common misconfigurations or abuses of configuration can be leveraged by attackers to propagate and cause havoc in the network. As the slides are not published, I cannot go into much more detail about this talk.

Key takeaway: after securing the network and the systems, there is still work to do at the application layer, and this will be even harder to fix.

Critical Infrastructure As Code (Matthew Backes)

Infrastructure-as-code is a concept in which all the infrastructure is documented and version-controlled, for example using a tool like Terraform. In this talk, Matthew details the experiment performed by the MIT Lincoln Laboratory to build critical-infrastructure-as-code for a small electrical substation and generation control system.
They started with Ansible, and were able to create an intermediate layer to interface with several types of devices:

  • HTTP scraping for devices that only have a web interface for configuration
  • SSH for network devices
  • A specific DLL provided by the vendor for the HMI
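
As an illustration of the first bullet, before an infrastructure-as-code layer can diff and push settings on a web-only device, it has to scrape the current configuration out of the device’s pages. A hedged, self-contained sketch using Python’s stdlib (the form fields and values are made up):

```python
from html.parser import HTMLParser

class ConfigFormScraper(HTMLParser):
    """Collect name/value pairs from <input> fields on a hypothetical
    device configuration page, so they can be diffed against the
    desired state kept under version control."""
    def __init__(self):
        super().__init__()
        self.settings = {}

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            if "name" in a:
                self.settings[a["name"]] = a.get("value", "")

# In practice this HTML would come from an HTTP GET to the device.
page = ('<form><input name="ip" value="192.168.0.10">'
        '<input name="ntp" value="10.0.0.1"></form>')
scraper = ConfigFormScraper()
scraper.feed(page)
```

It is fragile by nature (any firmware update can break the scraping), which is exactly why the talk’s call for standard configuration APIs matters.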

Key takeaway: while infrastructure-as-code is not ICS-specific, it is interesting to see that this project was able to use it in a realistic ICS environment by developing the right interfaces. Solid configuration management is a key asset in cybersecurity defense. We should push vendors to provide standard configuration APIs to help with this approach.

Running A Factory: A Realistic, High Interaction ICS Honeynet (Stephen Hilt)

Stephen Hilt and his team created a very realistic honeypot environment to try to detect what kind of actions attackers would perform once they got access. For example, they left an unauthenticated VNC service open on one of the exposed workstations (a situation that unfortunately still exists on the Internet today). They even created a fake company, website and personnel to make it look more real. The PDF is very interesting for people who want to perform similar activities.

Key takeaways: while significant effort was put into creating a very realistic ICS honeypot, the attacks observed ranged from classical (ransomware) to absurd, but nothing advanced was witnessed. Advanced ICS attacks only make sense if you have a goal: destabilization, etc. Even a very realistic honeypot didn’t catch anything noteworthy.


What To Patch When … Automating and Replacing the CVSS (Art Manion)

This is something I was looking forward to! Last year, on the same stage, a new methodology for vulnerability patch triage, called TEMSL, was presented. Unfortunately, not a lot of details were provided and no further resources were available at the time.

This year, a paper was published by the Software Engineering Institute at Carnegie Mellon detailing this new approach.

The main difference with CVSS is that this methodology (SSVC, for Stakeholder-Specific Vulnerability Categorization) does not grade vulnerabilities but works with a decision tree:

One new thing is that the methodology defines different outcomes for the decision tree, based on whether you’re the developer of the application for which there is a new vulnerability, or a user of the product.

Four outcomes are defined:

Considering the difficulty of applying security patches in ICS environments, it is really interesting to limit the actions. I even prefer last year’s version, which only had: NOW, NEXT, NEVER. But this version is probably more applicable to mature environments.

Only 4 criteria are used to navigate the decision tree and reach a decision for the patch:

  • Exploitation: is the vulnerability actively exploited?
  • Exposure: what is the level of network exposure (internet, internal network…) of the affected devices?
  • Mission impact: how bad would exploitation of the vulnerability be?
  • Safety: whether successful exploitation of the vulnerability could have an impact on safety
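
To show the shape of the idea, here is a hedged sketch of how these four criteria might combine into an outcome. The real SSVC tree in the paper has more branches, and the outcome names below are illustrative, not quoted from it:

```python
def patch_decision(exploited: bool, exposure: str,
                   mission_impact: str, safety_impact: bool) -> str:
    """Toy SSVC-style triage: walk a few branches and return an
    action instead of a numeric score."""
    if exploited and (safety_impact or mission_impact == "high"):
        return "immediate"        # drop everything, patch now
    if exploited or exposure == "internet":
        return "out-of-cycle"     # patch ahead of the normal schedule
    if safety_impact or mission_impact == "high":
        return "scheduled"        # next maintenance window
    return "defer"                # document and revisit later
```

The appeal for ICS is obvious: the output is directly an action ("patch in the next window"), not a 7.5 that still needs interpreting.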

Below is an extract of the decision tree if there is no known exploit at the moment:

More details can be found directly in the paper, which I strongly encourage you to read; maybe even run a test of the methodology yourself.

Key takeaway: there is now a paper and no more excuses not to test this promising methodology.

Special Access Features on PLCs (Tobias Scharnowski)

In this talk, a Siemens S7-1200 PLC was analyzed from a hardware point of view.

A small chip containing the bootloader was identified, and by reversing the code the authors were able to identify a “special access feature” giving access to privileged functions like dumping and loading arbitrary firmware, and finally to play Tic-Tac-Toe on the device.

Key takeaway: securing the boot process of PLCs will limit the risk of a supply chain attack replacing the valid firmware or installing a rootkit, but will also reduce security analysts’ ability to perform deep analysis of the firmware.

PLC Secure Coding Practices (and the consequences of not following these practices) (Jake Brodsky)

This talk was very interesting as it focused on an uncommon topic: secure development for PLCs.

Funny thing: some IT coding best practices also apply to PLCs, like VALIDATING THE INPUTS!
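
To illustrate, input validation in PLC logic is the same idea as in IT code: reject anything outside the physically plausible range before acting on it. Sketched here in Python rather than IEC 61131-3 structured text, with hypothetical limits:

```python
# Hypothetical safe operating range for a pump, in rpm. In real PLC
# logic these limits would come from the process engineering data.
PUMP_SPEED_MIN = 0
PUMP_SPEED_MAX = 1450

def validated_setpoint(raw: float) -> float:
    """Accept a setpoint received from the HMI/SCADA only if it lies
    within the physically safe range; otherwise refuse it rather than
    drive the actuator to an impossible value."""
    if not (PUMP_SPEED_MIN <= raw <= PUMP_SPEED_MAX):
        raise ValueError(f"setpoint {raw} outside safe range")
    return raw
```

Whether to reject, clamp, or alarm on an out-of-range value is itself an engineering decision; the point is that the logic must not blindly trust what arrives from the network.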

The talk also dove a bit into cyber-physical attacks, mentioning the example of a motor/actuator being restarted or moved too frequently, which could cause a physical impact in a pipe. Unfortunately, the talk was fast-paced and I wasn’t able to remember everything in detail; hopefully the slides will be released without waiting for the video.

In the meantime, we have limericks to study:

Key takeaway: I need to dig into PLC logic vulnerabilities and try to add some examples of exploitation to my ICS setups used in my trainings.


Using Rust, a Secure Programming Language, in Embedded & Safety Critical Systems (Adam Crain)

We still develop using the same toolset as 60 years ago, even though new technologies exist that allow developers to produce more secure code.
Adam exposed a few of the security-related advantages of Rust, which I’m not going to detail (mostly because I am a poor developer and didn’t really grasp everything).

Key takeaway: Rust is there, secure, fast, and now there is a Modbus library developed by Adam (with his colleague, a beer-pong tournament winner) so use it!

Tuning ICS Security Alerts: An Alarm Management Approach (Chris Sistrunk)

This talk is about not reinventing the wheel: engineers have been engineering and tweaking alarm systems for ICS for a long time, and OT security monitoring faces similar challenges: how to make sure critical alerts are handled, how to prevent alert fatigue…

A high-level methodology was given throughout the talk:

  • Know your systems
  • Define your alert philosophy and start small
  • Tune the alerts & reduce the noise
  • Create playbooks and run them in crisis exercises

Kudos to Chris for publishing the slides immediately!

An interesting discussion emerged during the Q&A regarding the relationship between SOC operators and ICS operators: how do you provide valuable information to ICS operators without turning them into level 1 SOC analysts? I understand that operators shouldn’t be in charge of cybersecurity monitoring, but I also think they should somehow be informed that the SCADA/BPCS may be untrustworthy at some point if suspicious activity is ongoing.

Key takeaway: don’t reinvent the wheel; start looking at alarm management standards (ISA-18.2 & EEMUA 191) to improve your ICS cybersecurity alerting.

Designing A More Secure ICS Protocol Chip (Andrew Zonenberg)

In this talk, Andrew detailed why and how he created a security chip dedicated to securing ICS protocols. In order to keep things as simple as possible, he used the SSP21 protocol and decided to implement it in hardware, using an FPGA. This approach reduces the overall risks, especially the risk of a vulnerability in an SSP21 software implementation being exploited to compromise the device. With a pure FPGA implementation, there is no memory to corrupt and no way to persist.

Key takeaway: there is a protocol called SSP21, and a hardware implementation integrated directly into PLC hardware could allow for an easier and far less vulnerable implementation.

Unsolicited Response

Unsolicited Response is a fun format where attendees can rant about a specific topic for 5 minutes. I’ll only mention two of them that really struck me:

Ron Fabela: urging us to stop thinking we’re the heroes, and rather to act as guides for our clients

Selena Larson: urging us to be more inclusive in this field, whether for non-technical people or for people of different genders/races.

I’m not even detailing Reid’s rant; he apparently totally lost it 😉 [probably for fear of seeing Rust used more widely in PLCs]

Global conclusion

My impression just at the end of S4 was very similar to the one I felt last year: mixed feelings. It is a great event, but I was disappointed/not really interested in a large part of the content. I spent most of my time on stage 2, which offered content more related to my expectations.

After writing this post, I realize that the content I liked, even if it’s only 50% of the program, was worth the travel (and ticket cost). My favorite talks were definitely the ones that will allow me to dig deeper in the coming months: PLC programming best practices, ICS security alert management, and SSVC to replace CVSS.

Also, much of the value of S4 comes from the people attending. Last year, I was able to meet for the first time some very talented individuals I’ve followed on Twitter and admired for years (and continue to do so). BEER-ISAC meetings also allowed me to meet a lot of people.

However, I feel S4 capitalizes too much on its attendees, on the fact that “this is the conference to go” for ICS cybersecurity. As the conference is growing, perhaps having a diverse program committee would make it feel less like a one man show.