Hi, just saw this discussion. It is an interesting finding indeed (albeit an expected one). In the DFIR world this has always been an issue during analysis when the script is too big; check for example Matthew's block-parser, and see https://cloud.google.com/blog/topics/threat-intelligence/greater-visibility/ for reference. From a Sigma perspective this is not necessarily an issue, but rather a question of which log you apply the rule to. Most utilities collect individual entries and then apply rules on those entries, but nothing prevents a tool from merging script blocks and treating them as a single "event" (except for size issues, of course). While your blog highlights an issue that is often forgotten, I don't really consider it a true issue in the end: you can't truly control the split mechanism that the PowerShell engine uses (to my knowledge). And while statistically we could be blinded during certain executions, the issue is solved at a tool level, not at a Sigma level. But thanks for opening the discussion, as it is a great topic.
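To illustrate the tool-level merging mentioned above: Windows Event ID 4104 records carry ScriptBlockId, MessageNumber, and MessageTotal fields, so fragments of one script can be regrouped before rules are applied. The snippet below is a minimal sketch; the event dictionaries are a hypothetical pre-parsed form, not the output of any specific collector:

```python
from collections import defaultdict

# Hypothetical pre-parsed 4104 events (real events carry these field names,
# but the values and parsing step here are invented for illustration).
events = [
    {"ScriptBlockId": "a1", "MessageNumber": 2, "MessageTotal": 2,
     "ScriptBlockText": "ew ..."},
    {"ScriptBlockId": "a1", "MessageNumber": 1, "MessageTotal": 2,
     "ScriptBlockText": "... PowerVi"},
]

def merge(events):
    """Group fragments by ScriptBlockId and rejoin them in MessageNumber order."""
    groups = defaultdict(list)
    for e in events:
        groups[e["ScriptBlockId"]].append(e)
    merged = {}
    for sbid, parts in groups.items():
        parts.sort(key=lambda e: e["MessageNumber"])
        merged[sbid] = "".join(p["ScriptBlockText"] for p in parts)
    return merged

print(merge(events)["a1"])  # fragments rejoined, "PowerView" is whole again
```

A rule engine that runs on the merged text instead of the raw fragments sidesteps the split problem, at the cost of buffering fragments until a block is complete.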
Hello everyone,
It is known that PowerShell Script Block Logging breaks scripts into multiple fragments when they exceed a certain length. By loading the well-known PowerView script multiple times (as an example of a large script), we can easily show that the number of script block fragments differs between repeated loads. A total of 10 runs of loading PowerView resulted in 39 to 76 script block fragments, with an average of 56.7 fragments, which is quite a significant difference.
Now, when using Sigma rules that operate on single logged ScriptBlockText values, the number of generated alerts may differ because the number of logged block fragments differs. More specifically, the number of generated alerts usually increases with an increasing number of fragments, because the malicious/suspicious strings are found in more block fragments.
In some cases, however, we observed that the number of generated alerts decreased even though the number of script block fragments increased. This happens when multiple occurrences of malicious/suspicious strings randomly land in the same fragment, thus raising fewer alerts. These findings show that there is a certain inconsistency when applying Sigma rules to PowerShell script block logs, although so far they are not of a critical nature.
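The variation described above can be reproduced with a small simulation. The snippet below is only a sketch: the actual PowerShell splitting logic is opaque, so it cuts a toy script containing three copies of a hypothetical marker string at random points and counts, per run, how many fragments a per-fragment "contains"-style match would fire on:

```python
import random

# Toy "script" with three occurrences of a hypothetical suspicious marker
# (stand-in for e.g. "PowerView"; not a real detection string).
MARKER = "Invoke-Demo"
script = ("x" * 50 + MARKER + "y" * 50) * 3

def fragment(text, rng, min_len=30, max_len=120):
    """Split text into fragments of random length, mimicking (loosely)
    the engine's opaque script block splitting."""
    out, i = [], 0
    while i < len(text):
        n = rng.randint(min_len, max_len)
        out.append(text[i:i + n])
        i += n
    return out

def alerts(fragments):
    # One alert per fragment containing the marker (per-fragment 'contains').
    return sum(MARKER in f for f in fragments)

counts = [alerts(fragment(script, random.Random(seed))) for seed in range(10)]
print(counts)  # at most 3 markers exist, but hits can be fewer when one is split
```

Because a cut can fall inside the marker, individual runs can report fewer alerts than the three occurrences actually present, matching the observation above.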
However, among the script block fragments created during the 10 test runs, we found one concerning case: the rule Malicious PowerShell Commandlets - ScriptBlock detects various strings inside script blocks, e.g., "PowerView". In one of the runs the PowerView script was fragmented in such a way that a string that should have been detected no longer was: "PowerView" was split into "PowerVi" and "ew". This shows that, depending on the fragmentation of script blocks, one can indeed lose alerts and miss contents of scripts that should be detected. Furthermore, rules like Execute Invoke-command on Remote Host that detect multiple strings in a single script block (ScriptBlockText|contains|all) might miss a detection when one of those strings randomly ends up in a different block fragment. To conclude, PowerShell script block fragmentation causes serious problems when using Sigma rules on PowerShell script block logs. The number of generated alerts varies significantly, and one might even completely miss detections.
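Both failure modes are easy to reproduce deterministically. The toy fragments below are hand-crafted to mimic an unlucky split, and the two helper functions approximate how per-fragment "contains" and "contains|all" matching behave (a simplification of real Sigma evaluation):

```python
def contains(fragments, needle):
    """Per-fragment 'contains': fires if any single fragment has the string."""
    return any(needle in f for f in fragments)

def contains_all(fragments, needles):
    """Per-fragment 'contains|all': fires only if ONE fragment holds all strings."""
    return any(all(n in f for n in needles) for f in fragments)

# 1) "PowerView" split at a fragment boundary -> no per-fragment match.
frags = ["...; PowerVi", "ew -Domain demo.local ..."]
print(contains(frags, "PowerView"))        # False: the string spans two fragments
print("PowerView" in "".join(frags))       # True once fragments are merged

# 2) A 'contains|all' rule whose strings land in different fragments.
frags2 = ["Invoke-Command -ComputerName srv01 ", "-ScriptBlock { whoami }"]
needles = ["Invoke-Command", "-ScriptBlock"]
print(contains_all(frags2, needles))             # False: strings never co-occur
print(contains_all(["".join(frags2)], needles))  # True on the merged block
```

In both cases the detection succeeds on the reassembled script and fails on the raw fragments, which is exactly the inconsistency described above.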
The described findings were observed on a Windows 10 host with PowerShell version 5.1.
For more information and further details on the analysis see this blog post.
I am looking forward to any answers and opinions on the described findings.
Best regards,
L015H4CK
Note: I did not find any issues, questions, or blog posts about this problem, so I thought this might be the right place to start a discussion. Please let me know if this is an already known problem.