All Posts


Use an empty alternative in each capture group, so the group can match an empty string: | rex field=MESSAGE "aaa(?<FIELD1>bbb|)" | rex field=MESSAGE "ccc(?<FIELD2>ddd|)"
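Applied to the original query, that would look like this (a sketch of the same technique; note the rex still has to match overall, so FIELD1 only becomes an empty string when "aaa" is present without "bbb" - if "aaa" is missing entirely, FIELD1 stays null and the stats still drops the event):

index=abc | rex field=MESSAGE "aaa(?<FIELD1>bbb|)" | rex field=MESSAGE "ccc(?<FIELD2>ddd|)" | stats count by FIELD1, FIELD2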
You can use '| append [ | noop ]' as a workaround, presumably because the append forces the tail of the pipeline to run on the local search head rather than on the remote one: | from federated <> | append [ | noop ] | outputlookup <>.csv
Let's say I have the following SPL query. Ignore the regexes, they're not important for the example:

index=abc | rex field=MESSAGE "aaa(?<FIELD1>bbb)" | rex field=MESSAGE "ccc(?<FIELD2>ddd)" | stats count by FIELD1, FIELD2

Right now, the query doesn't return a result unless both fields match, but I still want to return a result if only one field matches. I just want to return an empty string in the field that doesn't match. Is there a way to do this? Thanks!
Careful, the linked documentation page says not to modify that app in any way.
I wrote a custom alert action with a bash script that sends the values of an SPL query to TheHive. The script creates a case on TheHive, but with empty fields. alert_actions.conf:

[alert_to_thehive]
is_custom = 1
disabled = 0
label = Alert to TheHive
description = Custom alert action to send alerts to TheHive
icon_path = alert_icon.png
payload_format = json
ttl = 10
# Command to execute
alert.execute.cmd = alert_to_thehive.sh
# Arguments passed to the script
alert.execute.cmd.arg.1 = $result.Image$
alert.execute.cmd.arg.2 = $result.CommandLine$
(index=hcp_system OR index=hcp_logging) namespace=$env_dd$
| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>[^,]+),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>[^,]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#"
| eval IID=if("$interface_dd$"!="", "$interface_dd$", IID), STEP=if("$step_dd$"!="", "$step_dd$", STEP), PKEY=if(isnull("$record_id$") OR "$record_id$"="", PKEY, "*" . "$record_id$" . "*"), STATE=if("$state_dd$"!="", "$state_dd$", STATE), MSG0=if(isnull("$message_1$") OR "$message_1$"="", MSG0, "*" . "$message_1$" . "*"), PROPS=if(isnull("$properties$") OR "$properties$"="", PROPS, "*" . "$properties$" . "*")
| search (IID=* OR isnull(IID)) (STEP=* OR isnull(STEP)) (PKEY=* OR isnull(PKEY)) (STATE=* OR isnull(STATE)) (MSG0=* OR isnull(MSG0)) (PROPS=* OR isnull(PROPS))
| table IID STEP PKEY STATE MSG0 PROPS

How can I make the table show the values selected in the dropdowns (DD), and, when a text field (TF) input (PKEY, MSG0 and PROPS in my case) is empty, show whatever the rex pattern (e.g. PKEY:\s*(?P<PKEY>[^,]+)) extracts? The current behavior is as follows.

Inputs:
- DD IID: SF
- DD STEP: RECEIVE_FROM_KAFKA
- DD STATE: IN_PROGRESS
- TF PKEY, MSG0 and PROPS are empty

Msg1: "#HLS# IID:SF, STEP:RECEIVE_FROM_KAFKA, PKEY:456, STATE:IN_PROGRESS, MSG0:Success, PROPS:YES #HLE#"
Msg2: "#HLS# IID:SAP, STEP:SEND_TO_KAFKA, PKEY:52345345, STATE:IN_PROGRESS, MSG0:MOO, PROPS:FOO #HLE#"

Extracted table:

STEP               | PKEY     | STATE       | MSG0 | PROPS
RECEIVE_FROM_KAFKA | 52345345 | IN_PROGRESS | MOO  | YES

To summarize: when the text field inputs are empty, the result row mixes column values from different messages. How can I extract all messages matching this log pattern and then filter them based on the dropdowns or text fields?
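One common pattern that avoids overwriting the extracted fields (a sketch, not your original approach: it assumes the dropdown tokens can be given a default value of * and the text-field tokens default to empty) is to extract first and then filter with wildcarded tokens:

(index=hcp_system OR index=hcp_logging) namespace=$env_dd$
| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>[^,]+),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>[^,]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#"
| search IID="$interface_dd$" STEP="$step_dd$" STATE="$state_dd$" PKEY="*$record_id$*" MSG0="*$message_1$*" PROPS="*$properties$*"
| table IID STEP PKEY STATE MSG0 PROPS

With the dropdowns defaulting to * and the text fields empty, every row matching the log pattern is kept, and each input only narrows the result instead of replacing the extracted values.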
Hi all, I don't know if anyone else has run into this issue: using version 9.3.0 for the first time, I tried to customize an app menu bar. I then found that if I use the app with my language (it-IT) the menu doesn't change; if instead I run it with the default English interface (en-US) it works correctly. Ciao. Giuseppe
Hi, I’ve created some scheduled Splunk reports with inline tables in the email body. We're sending these reports to a Slack channel via email, but the URLs appear as plain text in Slack, while they are hyperlinked in Gmail. Is there a workaround to ensure the URLs are clickable in Slack? Also, how can I enable hyperlinks for URLs in a report (not a dashboard)? @ITWhisperer @gcusello @PickleRick
I wonder whether the app is compatible with Python 3.9, since Splunk Enterprise 9.3 is hardcoded to Python 3.9; it's also not a Splunk-supported app. Are you seeing any Python-related errors coming from this app?
Your initial search is filtering for "user has declined" events - you should probably extend this with an OR to include the events where the user accepts the MFA. Then you will be able to get the user's events and work out the timing and counts between the first decline and the eventual accept. Without seeing your (anonymised) events, it is difficult to speculate any further.
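For example, a sketch of that extension over the original search, keeping its 10-minute bucketing; errorCode=0 for an accepted sign-in is an assumption here, so substitute whatever your accept events actually contain:

index=cloud_entraid category=SignInLogs operationName="Sign-in activity" (properties.status.errorCode=500121 OR properties.status.errorCode=0)
| rename properties.* as *
| eval outcome=if('status.errorCode'=500121, "denied", "approved")
| bucket span=10m _time
| stats count(eval(outcome="denied")) as denied_count count(eval(outcome="approved")) as approved_count min(_time) as firstTime max(_time) as lastTime by _time, user, appDisplayName, user_agent
| where denied_count > 4 AND approved_count > 0

The where clause then keeps only the 10-minute windows that contain both the run of denials and at least one accept.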
It's not Splunk Cloud, but Splunk Enterprise 9.3.
Hello all, I'm implementing some routing at the moment in order to forward a subset of data to a third-party syslog system. However, I'm running into issues with the Windows logs. They look like this at syslog-ng:

Dec 29 07:47:18 12/29/2014 02:47:17 AM
Dec 29 07:47:18 LogName=Security
Dec 29 07:47:18 SourceName=Microsoft Windows security auditing.
Dec 29 07:47:18 EventCode=4689
Dec 29 07:47:18 EventType=0

I believe this is because of the \r\n line breaks in the non-XML Windows events. How can I get the Splunk heavy forwarder to treat each Windows event as one line and then send it through?

Architecture = UF - HF - Third Party System/Splunk Cloud

Thanks
I am currently working on creating an alert for a possible MFA fatigue attack from our Entra ID sign-in logs. The logic would be to find sign-in events where a user received x number of MFA requests within a given timeframe and denied them all, and then on the 5th one, for example, approved the MFA request, for our SOC to investigate. I have some of the logic written out below, but I am struggling to figure out how to add in the last piece: an approved MFA request after the x denied MFA attempts by the same user. Has anyone had any luck creating this and if so, how did you go about it? Any help is greatly appreciated. Thank you!

index=cloud_entraid category=SignInLogs operationName="Sign-in activity" properties.status.errorCode=500121 properties.status.additionalDetails="MFA denied; user declined the authentication"
| rename properties.* as *
| bucket span=10m _time
| stats count min(_time) as firstTime max(_time) as lastTime by user, status.additionalDetails, appDisplayName, user_agent
| where count > 4
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
Similar issue. There are no error logs per se. The search log shows that the output appears to be happening on the remote SH:

Results written to file '/opt/splunk/etc/apps/search/lookups/mylookup.csv' on serverName=<<remoteServerName>>

In other words, if I log in to my local search head and run this, I get an output of 100 entries:

| federated from:my report | outputlookup mylookup.csv

Then if I run this (again on the local search head), it is empty:

| inputlookup mylookup.csv
Hello adrifesa95. Are you using the Splunk Add-on for Check Point Log Exporter, or the older Splunk Add-on for Check Point OPSEC LEA? If the newer one, there is a section in the docs on troubleshooting when it's not parsing due to the depth limit, and how to increase it: https://docs.splunk.com/Documentation/AddOns/released/CheckPointLogExporter/Troubleshoot
Thanks so much for your attention. Your feedback really means a lot to me. I totally agree that there are different ways to reach the same goal. I'll definitely try to use your suggestions, but honestly, if I were to implement everything you mentioned, it would pretty much turn into a whole new project with a different approach. Using Python was a great idea, but for some reason I just didn't end up using it! Let me explain a bit about some of the points you brought up. The main thing that made the code a bit complicated is all the logging that's happening. I needed to log every single event in the project, and the reason I used process IDs was to track everything from start to finish. Since the code is open source, anyone can tweak it to fit their needs. The task might seem simple (deleting frozen buckets based on a limit), but as you know, once you start working on a project you run into all sorts of issues. Writing this took me a few weeks, and without ChatGPT it would have taken even longer. I've mentioned in the README that I got some help from ChatGPT. As for the hardcoded paths, your idea is a good one, and I'm hoping someone will contribute that to the project. Lastly, I tested this script on 40TB of frozen data with a daily log volume of 5TB, and at least for me there weren't any performance issues. Deleting directly (from the shell) was just as fast as using the script. I hope you get a chance to test it out and let me know how it goes. I'd be really happy to use your feedback to improve the project even more.
| fieldsummary | search values=*\"value\":\"<the exact value you want to check>\"* | table field
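For example, to list every field whose summarized values include the (made-up) value admin, run it after your base search:

<your base search> | fieldsummary | search values=*\"value\":\"admin\"* | table field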
You can infer from the search itself which fields you need present. You need "dest" and "country" fields in the sse_host_to_country lookup and "user" and "countries" fields in the gdpr_user_category lookup (and the "countries" field can contain multiple values separated with the pipe character).
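For example, a minimal sketch of the two files, with made-up hosts, users, and countries:

sse_host_to_country.csv:
dest,country
host01.example.com,Germany
host02.example.com,France

gdpr_user_category.csv:
user,countries
alice,Germany|France
bob,United States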
Could you please give me the format of that lookup?
Hello FelixL, I have the same problem as you. Did you find out why it happened and how to fix it? If you change the locale in the URL, it will sometimes start working, for example from en-US to en-GB.