
By Jason Yuan (Engineer, Automotive)
Modern phishing doesn’t always start with a suspicious email or link. It increasingly hides in plain sight – inside legitimate-looking software installers. In this blog, we examine how attackers used a signed Windows package to deploy the RedLine Stealer malware via multiple layers of evasion that bypass traditional detection methods.
We’ll walk through how such an attack unfolds, why it poses a unique threat to automotive manufacturers (OEMs), and what lessons can be drawn from a breach that begins long before code reaches a vehicle.
When phishing goes beyond email
When people hear the term “phishing,” they usually think of fake login pages or suspicious links in emails. But today’s attackers have moved beyond that. Phishing can now take the form of a seemingly legitimate installer, bundled with malicious code. This method is known as installer phishing.
In the automotive industry, this tactic poses a particularly serious risk. Imagine a calibration engineer downloading a vendor-supplied CAN viewer 3.4 setup.exe from a trusted supplier portal. It’s a routine task, repeated many times throughout a development cycle. The engineer installs the tool as usual, unaware that the file has been silently tampered with upstream and replaced with a malicious version. Within minutes, malware such as RedLine Stealer can exfiltrate sensitive credentials, Git tokens, and even code-signing keys stored on the machine. From there, attackers may gain access to over-the-air (OTA) signing systems or firmware build environments, setting the stage for far-reaching supply chain compromise.
This kind of compromise is not hypothetical. In the 2018 ASUS ShadowHammer campaign, attackers modified a trusted software update tool and even signed it using ASUS’s own certificate. Nearly a million systems received the Trojanized installer. More recently, in 2023, threat actors used spoofed Zoom and MSIX installers to deliver malware that appeared completely legitimate. These campaigns bypassed conventional detection and provided attackers with a stealthy path into enterprise environments.
What makes installer phishing especially dangerous is its familiarity – it blends seamlessly into routine workflows. For security teams, it’s challenging to defend against something that looks like business as usual. And in a fast-paced development or support setting, a single well-timed double-click is all it takes for an attacker to gain a foothold.
How RedLine Stealer was embedded in an Inno Setup package
If you’ve ever installed a small engineering utility on Windows, such as a CAN bus viewer, an electronic control unit (ECU) flasher, or a configuration tool for in-vehicle networks, there’s a good chance it was bundled using Inno Setup. First released in 1997, this installer system remains widely popular due to its simplicity, scriptability, and ability to bundle everything into a single executable.
In a recent campaign uncovered by Splunk, attackers exploited Inno Setup’s flexibility to deliver a deeply layered, evasive malware chain. On the surface, the installer appeared legitimate: a signed executable with no malicious behavior. But as soon as the user launched it, a silent multi-stage deployment began beneath the interface.
The attack chain started with the installer’s embedded Pascal script. Instead of merely extracting application files, the script quietly wrote additional components to a hidden directory on the user’s system. One of these was a scheduled task, created to ensure persistence by triggering on every reboot. Another was an executable, taskshostw.exe, a slightly modified and renamed version of a legitimate Qt application. Its filename mimicked the native Windows binary taskhostw.exe, differing by only a single character.
The executable wasn’t overtly malicious on its own. But when it ran, it automatically loaded a locally bundled DLL, QtGuid4.dll. That DLL had been replaced with a weaponized version, crafted to decrypt an encrypted blob stored in a sidecar file, periphyton.ics, and emit shellcode directly in memory. That shellcode launched HijackLoader, a staging component responsible for unpacking and injecting the final payload: RedLine Stealer.
RedLine wasn’t executed directly. Instead, it was injected into MSBuild.exe, a trusted Microsoft development tool commonly present on engineering machines. From there, it began its real task: quietly exfiltrating browser cookies, VPN profiles, and other sensitive data from the compromised machine.
Figure 1. Execution path of a weaponized installer
The installer appeared to be signed. The dropped files resembled ordinary developer tools. There were no obvious exploits or glaring red flags. Each stage of the attack chain was subtle enough to pass as legitimate on its own. Traditional antivirus scanners, which often rely on signature matching or static heuristics, failed to detect any malicious activity. That’s because the final payload — the actual data-stealing malware — was encrypted and only decrypted in memory, well beyond the reach of static analysis.
And in case the malware was being observed – it checked. Before executing its core logic, it scanned the host environment for telltale signs of a sandbox or automated analysis tool. If it detected unusual hardware configurations, limited system resources, fake user profiles, or common AV sandbox artifacts, it simply held back and behaved like a harmless installer. In doing so, it bypassed both static and dynamic detection, waiting for a real user on a real machine before activating the next stage of the attack.
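The exact checks vary by sample, but the pattern is easy to model. The sketch below is illustrative only; the thresholds and the username list are assumptions, not values recovered from the RedLine sample:

```python
# Illustrative model (not the actual malware code) of the environment checks
# evasive samples run before detonating. All thresholds are assumptions.

SUSPICIOUS_USERS = {"sandbox", "virus", "malware", "analyst"}  # common VM profiles

def looks_like_sandbox(cpu_count: int, ram_gb: float,
                       uptime_min: float, username: str) -> bool:
    """Return True if the host resembles an analysis VM, not a real machine."""
    if cpu_count < 2:       # analysis VMs are often given a single vCPU
        return True
    if ram_gb < 4:          # minimal RAM is another virtualization tell
        return True
    if uptime_min < 10:     # a machine booted moments ago suggests automation
        return True
    if username.lower() in SUSPICIOUS_USERS:
        return True
    return False
```

If any check trips, the sample behaves like a benign installer and exits; on a well-provisioned engineer workstation, every check passes and the next stage fires.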
In automotive environments, a compromised installer doesn’t just affect a single machine — it can ripple across vehicles, production lines, and the entire software supply chain, making it a critical concern in automotive cybersecurity.
A malware-laced utility on a development workstation can steal Git credentials or code-signing certificates, opening the door to rogue firmware signed under your organization’s name. These trusted systems are integral to OTA infrastructure, so an upstream breach can silently spread into production builds, potentially reaching vehicles on the road.
Dealer and workshop laptops, often used for diagnostics and firmware updates, are direct touchpoints to vehicles in the field. If compromised, they can inject unauthorized changes during routine service, allowing the infection to spread vehicle by vehicle.
Even the manufacturing floor is vulnerable. Calibration tools and MES terminals often run on standard Windows PCs. An infected tool in this space can halt production, corrupt ECU programming, or leak sensitive factory data, all triggered by a single installer that looked legitimate.
What trust assumes – and why it shouldn’t
The attack didn’t start in the vehicle. It began when a developer downloaded a tool and trusted it. That is the quiet truth behind modern supply chain threats: they do not exploit the final product; they compromise the systems and tools that build it. Once inside, they move without resistance, not because they are invisible, but because they blend in with everything already trusted.
This is where zero trust – a security model that grants no implicit trust, whether a connection originates inside or outside the network – must take hold. Mitigation must begin before trust is granted. Every installer, support utility, and diagnostic package should be treated as a deliverable – never assumed safe, but continuously verified through automated, reproducible checks. If a package includes components it should not, or deviates from what the build system declared, it should be flagged and blocked long before it reaches development or production.
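A minimal form of such an automated check is comparing each installer’s SHA-256 digest against a manifest published out-of-band by the build system. The manifest format and file names below are assumptions for illustration:

```python
import hashlib
from pathlib import Path

# Minimal sketch of "never assumed safe, continuously verified": an installer
# is accepted only if its SHA-256 digest matches a manifest entry published
# out-of-band by the build system. Manifest shape is an assumed example.

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large installers don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_installer(path: Path, manifest: dict[str, str]) -> bool:
    """True only if the file is listed in the manifest and its digest matches."""
    expected = manifest.get(path.name)
    return expected is not None and sha256_of(path) == expected.lower()
```

In practice this sits alongside Authenticode signature validation; a hash manifest catches upstream tampering even when the attacker manages to obtain a valid signing certificate.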
This is how compromise is prevented without relying on hope or reaction time. Trust should never be assumed. It must be earned.