All Posts

Thank you so much for your response. I have checked the link; the queries discussed in that answer are helpful for tracking the status of a notable event, such as when it is new, when it is picked up, and when it is closed. However, this is not exactly what I'm looking for. I apologise if my question wasn't clear.

What I need is to calculate the time difference between when the notable event was triggered and the time of the raw log that caused it. This will help me assess how long my correlation search took to detect the anomaly. The goal is to fine-tune the correlation searches, as not all of them are running in real time.

Let me explain with an example: suppose I have a rule that triggers when there are 50 failed login attempts within a 20-minute window. If this condition was true from 9:00 AM to 9:20 AM, but due to a delay (either from the ES server or some other reason) the search didn't run until 9:30 AM, then I've lost 10 minutes before my SOC team was alerted. If I can have a dashboard that shows the exact time difference between the raw event and the notable trigger, I can better optimise my correlation searches to minimise such delays.
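A dashboard panel along these lines might be a starting point. This is only a minimal sketch, assuming your notable events carry an orig_time field holding the epoch timestamp of the originating raw event (field names vary by correlation search, so verify against your own notable index):

index=notable
| eval detection_lag_sec = _time - orig_time
| stats avg(detection_lag_sec) as avg_lag_sec max(detection_lag_sec) as max_lag_sec by search_name

search_name is the standard field on notable events holding the correlation search name, so the panel would show the average and worst-case detection delay per rule.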
Hi Team, as per a business requirement, I need to get the details below from the same Autosys batch, with the corresponding outputs displayed on a single row in a table:

1. Last execution time
2. Execution time of a specific search keyword, i.e. "Completed invokexPressionJob and obtained queue id ::"
3. Number of times the "ERROR" keyword is present

index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
| stats latest(_time) as latest_time
| convert ctime(latest_time)
| append [search index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
    | search "Completed invokexPressionJob and obtained queue id ::"
    | stats latest(_time) as last_success_time
    | convert ctime(last_success_time)]
| append [search index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
    | rex field=_raw "\s(?P<level>[^\/]+)\s\[main\]"
    | stats count(level) by level
    | where level IN ("ERROR")]
| append [| makeresults | eval job_name="Print Job"]
| table latest_time last_success_time count(level) job_name
| stats list(*) as *

The query above works fine. From a query-performance perspective, am I achieving the output the right way? Is there a better way to achieve it? I need to apply a similar set of queries to 10 other batch jobs inside the Splunk dashboard. Kindly suggest!
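For reference, a hedged sketch of one common alternative: collapse the three appends into a single pass over the data with conditional aggregates. This is untested, reuses the placeholders from the query above, and assumes the literal keyword string appears in _raw:

index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
| rex field=_raw "\s(?P<level>[^\/]+)\s\[main\]"
| eval is_success=if(like(_raw, "%Completed invokexPressionJob and obtained queue id ::%"), 1, 0)
| stats latest(_time) as latest_time
        latest(eval(if(is_success=1, _time, null()))) as last_success_time
        count(eval(level="ERROR")) as error_count
| convert ctime(latest_time) ctime(last_success_time)
| eval job_name="Print Job"

Because it scans the index only once instead of four times, a pattern like this usually scales better when repeated across many batch-job panels in a dashboard.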
My apologies for such a noob question. I literally got dropped into a Splunk environment and I know little to nothing about it. I have an index (foo as an example) and I'm told it's based on Oracle audit logs. However, the index was built for us by the admin, and all I get is blank looks when I ask what exactly is IN the index. So my question is: how can I interrogate the index to find out what is in it?

I ran across these commands:

| metadata type=sourcetypes index="foo"
| metadata type=hosts index="foo"

This is a start, so now I have some sourcetype "keywords" (is that right?) and I can see some hosts. But I suspect that's just the tip of the iceberg, given the index itself is pretty darn big.

I'm an Oracle guy, and if I wanted to get familiar with an Oracle schema I would start by looking at the table structures, note the fields in all the tables, and get a diagram if one was available. I don't have that option here. I don't have the rights to "manage" the index or even create my own. So I have an index and no real clue as to what is in it...
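For what it's worth, a couple of read-only searches can help map an unfamiliar index. A minimal sketch; the sourcetype value is a placeholder you would swap for one returned by the metadata search:

| tstats count where index="foo" by sourcetype host source

index="foo" sourcetype="<one_of_the_sourcetypes>" earliest=-24h
| fieldsummary maxvals=5
| table field count distinct_count values

The tstats search shows how the index breaks down by sourcetype, host, and source, and fieldsummary lists the fields Splunk extracts at search time for a given sourcetype, which is the closest analogue to reading an Oracle table definition.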
Hello, I just upgraded my Splunk Enterprise from 9.2.1 to 9.2.2, and I saw that the bundled OpenSSL is version 1.0.2zj. This version is vulnerable to the critical vulnerability CVE-2024-5535. Is there a future patch for Splunk Enterprise 9.2.x which upgrades the embedded OpenSSL? Best regards, LAIRES Jordan
| spath "properties.appliedConditionalAccessPolicies{}" output=appliedConditionalAccessPolicies | mvexpand appliedConditionalAccessPolicies | where json_extract_exact(appliedConditionalAccessPolicies... See more...
| spath "properties.appliedConditionalAccessPolicies{}" output=appliedConditionalAccessPolicies | mvexpand appliedConditionalAccessPolicies | where json_extract_exact(appliedConditionalAccessPolicies,"result") != "notApplied"
One small hint for the future - if you paste search code, use a preformatted paragraph or code block. It makes it easier to read and prevents accidental interpretation of some character sequences as emojis or the like.

But to the point. Your search is a bit flawed conceptually.

1. Your JSON gets parsed into separate multivalued fields. There is no guarantee that subsequent values of each of those multivalued fields correspond with each other, especially after additional processing. A simple run-anywhere example to illustrate my point:

| makeresults
| eval _raw="[{\"a\":\"a\",\"b\":\"b\"},{\"a\":\"c\"},{\"b\":\"d\"}]"
| spath

As you can see, the event consists of an array of three structures, with the fields from the second and third being completely unrelated to one another. After parsing, the multivalued fields "suggest" that the "a" field with value "c" matches the "b" field with value "d". And if you wanted to reorder those pairs (even assuming you can know for sure that for your particular data the order does match both fields) so they stay properly aligned... that's very ugly and inefficient.

So I'd advise to separately parse out the whole properties.appliedConditionalAccessPolicies{} array, then do mvexpand so that the policies get into separate results (maybe cutting out all other fields if you don't need them, so they don't get dragged along and fill memory unnecessarily), and then parse the values from the resulting JSON "substructures". Then you can simply filter with where, or do whatever you want - see the sketch after this list.

2. Be careful with dedup - it keeps just the first event (or n events if you specify a limit) for each value, or combination of values, of the given field(s). It doesn't take the other fields into account, so you won't necessarily capture all of their values. That might not be what you want.
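A minimal sketch of that approach, assuming the field names from the question (result, displayName) and the same base search; adjust to your actual data:

index=db_azure_entraid sourcetype="azure:monitor:aad" command="Sign-in activity" category=SignInLogs
| spath "properties.appliedConditionalAccessPolicies{}" output=policy
| fields _time policy
| mvexpand policy
| spath input=policy
| where result != "notApplied"
| table _time displayName result

Each applied policy ends up as its own result row, so the result and displayName values always belong together, and the where clause drops only the notApplied entries.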
Not quite sure what you are looking for. VMware has different components, but you will find a range of preconfigured KPIs you can adapt to your requirements for ESX, virtual machines, storage, etc. in the ITSI Content Pack for VMware.
It's not truncating as such. It's just that by default Splunk's key-value pair extraction works up to a delimiter - in this case a space, unless the string is quoted, IIRC. Since you don't have any custom extractions defined and use default settings, Splunk simply extracts from key=value pairs.

As I said, there is at least one app for ingesting CEF data (I think there were more of them, but some might be archived). But since it's ugly - the format is not very well specified - unless you have a very good reason for sticking with CEF, I'd suggest you go to the console and change the notification format. To make things even more interesting, as I see on "my" HX, the default (and actually the only available) format for notifications straight from the box is JSON. Is this a notification from the CM about an alert from HX?
Good day, I have a query that checks my Entra logs to see which Conditional Access policies get hit. It returns results like the table below, but I would like it to display only the policies whose result was success or applied, not the ones that were not applied.

CA                           CAName
success failure failure      CA-Office-MFA  CA-Signin-LocationBased  CA-HybridJoined
notApplied success failure   CA-Office-MFA  CA-Signin-LocationBased  CA-HybridJoined
notApplied success success   CA-Office-MFA  CA-Signin-LocationBased  CA-HybridJoined

What I want instead:

success failure failure      CA-Office-MFA  CA-Signin-LocationBased  CA-HybridJoined
success success              CA-Signin-LocationBased  CA-HybridJoined
success failure              CA-Signin-LocationBased  CA-HybridJoined

index=db_azure_entraid sourcetype="azure:monitor:aad" command="Sign-in activity" category=SignInLogs "properties.clientAppUsed"!=null NOT app="Windows Sign In"
| spath "properties.appliedConditionalAccessPolicies{}.result"
| search "properties.appliedConditionalAccessPolicies{}.result"=notApplied
| rename "properties.appliedConditionalAccessPolicies{}.result" as CA
| rename "properties.appliedConditionalAccessPolicies{}.displayName" as CAName
| dedup CA
| table CA CAName
OK. Let's back up a little.

1. How are the events ingested? Read from files with a monitor input, or some other way (like an HEC input or a modular input)? You mention a UF so I suspect monitor input(s), but I want to be sure.

2. I assume you meant props.conf, not propes.conf - that was just a typo here, right?

3. Line breaking is _not_ happening on the UF. You need to have your LINE_BREAKER defined on the first heavy component that the event passes through (if you're sending from the UF directly to indexers, you need this setting on the indexers) - see the sketch below.
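A minimal sketch of what that would look like, reusing the sourcetype name from the original config and assuming the UF sends straight to the indexers (so the stanza goes in props.conf on the indexers, not on the UF):

[zone_files]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)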
We are developing a Splunk app that uses an authenticated external API. In order to support the Cloud Platform, we need to pass the manual check for the cloud tag, but the following error occurred and we couldn't pass:

================
[ manual_check ] check_for_secret_disclosure - Check for passwords and secrets.
details: [ FAILED ] key1 value is being passed in the url which gets exposed in the network. Kindly add sensitive data in the headers to make the network communications secure.
================

Code:

req = urllib.request.Request(f"https://api.docodoco.jp/v6/search?key1={self.apikeys['apikey1']}...
req.add_header('Authorization', self.apikeys['apikey2'])

We understand that confidential information should be transmitted via HTTP headers or a POST body and should not be included in URLs. However, since "key1" is not confidential information, we believe there should be no issue with including it in the URL. Due to the external API's specifications, "key1" must always be included in the URL, so we are looking for a way to pass this manual check.

For example, if there is a support desk, we would like to explain that there is no issue with the part flagged in the manual check. Does anyone know of such a support channel? Alternatively, if there is a way to provide additional information to the reviewers conducting this manual review, we would like to know (for example, adding comments to the source code, etc.).
Thank you for your answers. It turned out I had to trust the SSL certificate.
Hi, thanks for the response. Sample logs (these are coming in as a single event, as mentioned in the screenshot):

zowin.exposed. 3600 in ns ns1.dyna-ns.net.
zowin.exposed. 3600 in ns ns2.dyna-ns.net.
zuckerberg.exposed. 3600 in ns ns1.afternic.com.
zuckerberg.exposed. 3600 in ns ns2.afternic.com.
zwiebeltvde.exposed. 3600 in ns docks13.rzone.de.
zwiebeltvde.exposed. 3600 in ns shades01.rzone.de

I am applying this in the UF config (/etc/system/local/propes.conf):

[zone_files]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
Hi All, I am trying to pantag search results to a dynamic address group, but I am getting the error below. Please help if anyone has come across the same.

External search command 'pantag' returned error code 2. Script output = "ERROR URLError: code: 401 reason: Key Expired: LUFR...dHc9 has expired."
I know. You are receiving what they send you. But you can often just talk with the sending party.

Anyway, since it looks like there is something ELK-like in the middle, it could be worthwhile to check the ingestion process architecture - why are there middlemen? Are we ingesting into multiple destinations from a single source? Maybe we could drop the extra stuff and not only lower our license consumption but also make our data compatible with existing TAs.

So the short-term solution is of course to extract the string from one field of the JSON and run spath on it (there is no way I know of to do it automatically, unless you want to get messy with regexes - another reason for getting your data tidy). But the long-term solution IMO is to get the data right.
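For the short-term route, a minimal sketch; the field name wrapped_payload is a placeholder for whichever field in the outer JSON actually holds the original event as a string:

index=<your_index> sourcetype=<your_sourcetype>
| spath input=wrapped_payload
| table _time host *

spath with input= parses the embedded JSON string into regular search-time fields, which at least makes the data usable until the ingestion path is cleaned up.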
When we try to run a report on the deployment server to get the hosts that are reporting to Splunk, it gives the errors below:

Unable to determine response format from HTTP Header
Connection failed with Read Timeout
The REST request on the endpoint URI /services/deployment/server/clients?count=0 returned HTTP 'status not OK': code=502, Read Timeout.

Can anyone please suggest a workaround?
This is a sample event:

<149>Jul 23 18:54:24 fireeye.mps.test cef[5159]: CEF:0|fireeye|HX|4.8.0|IOC Hit Found|IOC Hit Found|10|rt=Jul 23 2019 16:54:24 UTC dvchost=fireeye.mps.test categoryDeviceGroup=/IDS categoryDeviceType=Forensic Investigation categoryObject=/Host cs1Label=Host Agent Cert Hash cs1=fwvqcmXUHVcbm4AFK01cim dst=192.168.1.172 dmac=00-00-5e-00-53-00 dhost=test-host1 dntdom=test deviceCustomDate1Label=Agent Last Audit deviceCustomDate1=Jul 23 2019 16:54:22 UTC cs2Label=FireEye Agent Version cs2=29.7.0 cs5Label=Target GMT Offset cs5=+PT2H cs6Label=Target OS cs6=Windows 10 Pro 17134 externalId=17688554 start=Jul 23 2019 16:53:18 UTC categoryOutcome=/Success categorySignificance=/Compromise categoryBehavior=/Found cs7Label=Resolution cs7=ALERT cs8Label=Alert Types cs8=exc act=Detection IOC Hit msg=Host test-host1 IOC compromise alert categoryTupleDescription=A Detection IOC found a compromise indication. cs4Label=IOC Name cs4=SVCHOST SUSPICIOUS PARENT PROCESS

I need to do field extractions and make the event display all the data without truncating inside the event details.
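One possible approach, sketched with an assumed sourcetype name (fireeye:hx:cef) and a lookahead-based regex, since default key=value extraction stops at the first space in unquoted values; adjust the stanza name to whatever sourcetype your data actually uses:

props.conf:
[fireeye:hx:cef]
TRUNCATE = 0
KV_MODE = none
REPORT-cef_extensions = cef_kv_pairs

transforms.conf:
[cef_kv_pairs]
REGEX = (\w+)=(.*?)(?=\s+\w+=|$)
FORMAT = $1::$2
MV_ADD = true

TRUNCATE is an index-time setting and belongs on the indexers or heavy forwarder, while KV_MODE and REPORT- are search-time settings that belong on the search head.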
Hi all, I tried to use this visualization to display a process tree and it runs, but I have an issue: some leaves of the tree aren't displayed. I have only around 1,900 rows, so I'm not hitting the 250,000-row limit, nor the 1,000-level limit, because I have at most 5 levels. What could the issue be? Thank you for your help. Ciao. Giuseppe
Hi Peter,

Could you please check the event queues? Event queue backlog: check whether event queues on the forwarders are building up (seen in metrics.log). This can happen if there's too much data being processed at once.

Another thing to monitor is the network: while the logs stop, are there any changes in network utilisation (on both the receiver's side and the forwarder's end)?

Also check the following settings on the forwarder side (this worked in my case, but results may vary in your setup):

useACK = false
autoBatch = false
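For reference, a minimal sketch of where those settings live, assuming a standard outputs.conf on the forwarder; the target-group name primary_indexers is a placeholder for your own group:

[tcpout:primary_indexers]
useACK = false
autoBatch = false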
Hi @aab1, good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated