All Posts

Your outputs.conf will need to match the pass4SymmKey set on the CM and indexer layer. Since you are trying to reduce existing logs I want to assume that was already done, but I'm not certain based on your explanation of the error message. Since the metrics logs are abundant, and it's hard to imagine metrics collection at a 30-second frequency affecting HF performance, I would recommend changing the collection interval and keeping the rest if possible.
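For reference, a minimal sketch of where that shared key lives when indexer discovery is in use (the group name, URI, and key below are placeholders, not values from this thread):

# outputs.conf on the forwarder (sketch; values are placeholders)
[indexer_discovery:cluster1]
pass4SymmKey = <same key as on the cluster manager>
master_uri = https://cm.example.com:8089

[tcpout:primary_indexers]
indexerDiscovery = cluster1

# server.conf on the cluster manager must carry the matching key:
# [indexer_discovery]
# pass4SymmKey = <same key>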
If you have a large organization with a large number of identities in your AD, you will want to consider reviewing the default cache size. Increasing the cache size helps avoid the additional CPU cycles spent replacing the Windows unique ID with a human-readable format.

evt_ad_cache_disabled = <boolean>
* Enables or disables the AD object cache.
* Default: false (enabled)

evt_ad_cache_exp = <integer>
* The expiration time, in seconds, for AD object cache entries.
* This setting is optional.
* Default: 3600 (1 hour)

evt_ad_cache_exp_neg = <integer>
* The expiration time, in seconds, for negative AD object cache entries.
* This setting is optional.
* Default: 10

evt_ad_cache_max_entries = <integer>
* The maximum number of AD object cache entries.
* This setting is optional.
* Default: 1000
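As a rough illustration only (the stanza, channel, and numbers below are assumptions, not recommendations from the original post), these settings would sit alongside AD object resolution in the Windows event log input stanza in inputs.conf:

[WinEventLog://Security]
# Resolve AD objects (SIDs/GUIDs) to readable names
evt_resolve_ad_obj = 1
# Enlarge the AD object cache and keep entries longer for a large directory
evt_ad_cache_max_entries = 10000
evt_ad_cache_exp = 7200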
You can override the default timeout inside a rest call - add timeout=300 and see if that helps. You are more likely to run up against a different error, but getting past the timeout issue is the first step toward finding the underlying problem.
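Assuming this refers to the SPL rest command (the endpoint below is just a placeholder for whichever one you are calling), the argument goes straight into the search:

| rest /services/data/indexes timeout=300
| table title currentDBSizeMB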
Hello, I have a table with several fields that I display in a dashboard. One column is the violation_details field, which contains XML data. Note that I don't want to parse anything from this field, because depending on the violation the tags won't be the same. Here is an example of a value for this field:

<?xml version='1.0' encoding='UTF-8'?><BAD_MSG><violation_masks><block>58f7c3e96a0c279b-7e3f5f28b0000040</block><alarm>5cf7c3e97b0c6fdb-7e3f5f28b0000040</alarm><learn>5cf2c1e9730c2f5b-3d3c000830000000</learn><staging>0-0</staging></violation_masks><response_violations><violation><viol_index>56</viol_index><viol_name>VIOL_HTTP_RESPONSE_STATUS</viol_name><response_code>500</response_code></violation></response_violations></BAD_MSG>

How could I make this more readable, like this:

<?xml version='1.0' encoding='UTF-8'?>
<BAD_MSG>
  <violation_masks>
    <block>58f7c3e96a0c279b-7e3f5f28b0000040</block>
    <alarm>5cf7c3e97b0c6fdb-7e3f5f28b0000040</alarm>
    <learn>5cf2c1e9730c2f5b-3d3c000830000000</learn>
    <staging>0-0</staging>
  </violation_masks>
  <response_violations>
    <violation>
      <viol_index>56</viol_index>
      <viol_name>VIOL_HTTP_RESPONSE_STATUS</viol_name>
      <response_code>500</response_code>
    </violation>
  </response_violations>
</BAD_MSG>

I've seen this post, XML-to-display-in-a-proper-format-with-tag, but it seems to use a deprecated method. Is there a better way?
How can I check which apps/add-ons running in Splunk Cloud depend on Python versions < 3.9?
This is much more helpful. Running:

index=<name> | fieldsummary

gives me 2.4 million+ events and 261 statistics. I presume then that the 261 would be the sum total of disparate fields available to any of my queries. Should that be true, then I need only investigate each one to see what the heck they are and figure out if they are of any use. Not a small task, but I know more now than I did 30 minutes ago.
TY 4 that... when I run that first command it returns just north of 2.5 million events and 17 statistics. So I see bandwidth, cpu, df, df_metric, exec, interfaces, iostat, lsof, netstat, openPorts, package, protocol, ps, top, uptime, vmstat, and who. For all of these, the sourcetype = source, with one exception: exec is broken out to 3 .sh files in a splunkforwarder folder structure. I do not know if this is correct or not. For instance, I discovered there is a fields link within Settings where I can get to Field Aliases, trim the list to "oracle", and see stuff reporting from Oracle Audit, Oracle Database, Oracle Listener, Oracle Instance, Oracle Session, Oracle SysPerf, etc. My understanding is the Splunk index (this is a file?) is used by Splunk in searching for keywords (are these fields?). Thus, if the index contains ONLY the source / sourcetype information, then I'm gold and I simply need to define what those 17 stats are actually from/for. However, I also know that cannot be true, as I can search on Host=<something>, which is not in that list. I do hope that makes sense.
I agree with @sjringo but offer this faster query to find the information:

| tstats count where index=foo by sourcetype

Splunk doesn't store data in tables so there's no equivalent to a SQL table dump. You can use the fieldsummary command to see what fields are in the index along with their values.

index=foo | fieldsummary
I would say the first thing to look at is what the different sourcetypes in the index are:

index=foo | stats count by sourcetype

That will give you some kind of idea of what is being ingested into the index you have. Then, if the sourcetype name indicates which logs it covers, you can look at the sources:

index=foo | stats count by sourcetype,source

This will give you an idea of what is in the index.
Thank you so much for your response. I have checked the link; the queries discussed in that answer are helpful for tracking the status of a notable event, such as when it is new, when it is picked up, and when it is closed. However, this is not exactly what I'm looking for. I apologise if my question wasn't clear.

What I need is to calculate the time difference between when the notable event was triggered and the time of the raw log that caused it. This will help me assess how long my correlation search took to detect the anomaly. The goal is to fine-tune the correlation searches, as not all of them are running in real time.

Let me explain with an example: suppose I have a rule that triggers when there are 50 failed login attempts within a 20-minute window. If this condition was true from 9:00 AM to 9:20 AM, but due to a delay - either from the ES server or some other reason - the search didn't run until 9:30 AM, then I've lost 10 minutes before my SOC team was alerted. If I can have a dashboard that shows the exact time difference between the raw event and the notable trigger, I can better optimise my correlation searches to minimise such delays.
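As a rough sketch of one way to surface that delay (the field name orig_time is an assumption - whether it exists, and whether it holds the raw event's timestamp, depends on what your correlation searches write into the notable events):

index=notable orig_time=*
| eval detection_lag_seconds = _time - orig_time
| stats avg(detection_lag_seconds) as avg_lag max(detection_lag_seconds) as max_lag by search_name

Here _time is the notable's creation time and orig_time would be the timestamp of the contributing raw event; if your notables carry that timestamp in a different field, swap it in.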
Hi Team,

As per the business requirement, I need to get the details below from the same AutoSys batch and have the corresponding outputs displayed on a single row in a table:

1. Last execution time
2. Execution time of a specific search keyword, i.e., "Completed invokexPressionJob and obtained queue id ::"
3. Number of times the "ERROR" keyword is present

index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
| stats latest(_time) as latest_time
| convert ctime(latest_time)
| append
    [search index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
    | search "Completed invokexPressionJob and obtained queue id ::"
    | stats latest(_time) as last_success_time
    | convert ctime(last_success_time)]
| append
    [search index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
    | rex field=_raw "\s(?P<level>[^\/]+)\s\[main\]"
    | stats count(level) by level
    | where level IN ("ERROR")]
| append
    [| makeresults
    | eval job_name="Print Job"]
| table latest_time last_success_time count(level) job_name
| stats list(*) as *

The query above works fine. From a query performance perspective, am I achieving the output the right way? Is there a better way to achieve it? I need to apply a similar set of queries to 10 other batch jobs inside the Splunk dashboard. Kindly suggest!!
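For context on the performance question, one commonly used alternative pattern (a rough sketch only, reusing the same placeholder index/host/source values, not something proposed in this thread) is to scan the data once and derive all three values with conditional aggregations instead of multiple appends:

index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
| rex field=_raw "\s(?P<level>[^\/]+)\s\[main\]"
| stats latest(_time) as latest_time,
        latest(eval(if(like(_raw, "%Completed invokexPressionJob and obtained queue id ::%"), _time, null()))) as last_success_time,
        count(eval(level="ERROR")) as error_count
| convert ctime(latest_time) ctime(last_success_time)
| eval job_name="Print Job"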
My apologies for such a noob question. I literally got dropped into a Splunk environment and I know little to nothing about it.

I have an index (foo as an example) and I'm told it's based on Oracle audit logs. However, the index was built for us by the Admin, and all I get is blank looks when I ask what exactly is IN the index. So my question is... how can I interrogate the index to find out what is in it? I ran across these commands:

| metadata type=sourcetypes index="foo"
| metadata type=hosts index="foo"

This is a start, so now I have some sourcetype "keywords" (is that right?) and I can see some hosts. But I suspect that's just the tip of the iceberg, as it were, given the index itself is pretty darn big.

I'm an Oracle guy, and if I wanted to get familiar w/ an Oracle structure I would start w/ looking at the table structures, note the fields in all the tables, and get a diagram if one was available. I don't have that option here. I don't have the rights to "manage" the index or even create my own. So I have an index and no real clue as to what is in it...
Hello,

I just upgraded my Splunk Enterprise from 9.2.1 to 9.2.2, and I saw that the embedded OpenSSL is version 1.0.2zj. This version is vulnerable to the critical vulnerability CVE-2024-5535. Is there a future patch for Splunk Enterprise 9.2.x which upgrades the embedded OpenSSL?

Best regards,
LAIRES Jordan
| spath "properties.appliedConditionalAccessPolicies{}" output=appliedConditionalAccessPolicies | mvexpand appliedConditionalAccessPolicies | where json_extract_exact(appliedConditionalAccessPolicies... See more...
| spath "properties.appliedConditionalAccessPolicies{}" output=appliedConditionalAccessPolicies | mvexpand appliedConditionalAccessPolicies | where json_extract_exact(appliedConditionalAccessPolicies,"result") != "notApplied"
One small hint for the future - if you paste search code, use a preformatted paragraph or code block. It makes it easier to read and prevents accidental interpretation of some character sequences as emojis or the like.

But to the point. Your search is a bit flawed conceptually.

1. Your JSON gets parsed into multivalued fields. Separate ones. There is no guarantee that subsequent values of each of those multivalued fields correspond with each other, especially after additional processing. A simple run-anywhere example to illustrate my point:

| makeresults
| eval _raw="[{\"a\":\"a\",\"b\":\"b\"},{\"a\":\"c\"},{\"b\":\"d\"}]"
| spath

As you can see, the event consists of an array of three structures, with the fields from the second and third being completely unrelated to one another. After parsing, the multivalued fields "suggest" that the "a" field with value "c" matches field "b" with value "d". And if you wanted to reorder those pairs (even assuming you can know for sure that for your particular data the order does match both fields well) so they keep in proper order... that's very ugly and inefficient.

So I'd advise you to separately parse out the whole properties.appliedConditionalAccessPolicies{}, then do mvexpand so that the policies get into separate results (maybe cutting out all other fields if you don't need them, so they don't get dragged along and fill memory unnecessarily), and then parse the values from the resulting JSON "substructures". Then you can simply filter with where or do whatever you want.

2. Be careful with dedup - it leaves just the first event (or n events if you specify a limit) for each value(s) of the given field(s). It doesn't matter that other fields do not change and you capture all their values. So that might not be what you want.
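A minimal sketch of that advised approach, assuming the field names from the question's Entra ID data (result and displayName inside each policy object) and appended to whatever base search you use:

... | spath "properties.appliedConditionalAccessPolicies{}" output=policy
| fields policy
| mvexpand policy
| spath input=policy
| where result!="notApplied"
| table displayName result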
Not quite sure what you are looking for? VMware has different components, but you will find a range of pre-configured KPIs you can adapt to your requirements for ESX, virtual machines, storage, etc. in the ITSI Content Pack for VMware.
It's not truncating as such. It's just that by default Splunk's key-value pair extraction works up to a delimiter - in this case a space, unless the string is quoted, IIRC. Since you don't have any custom extractions defined and use default settings, Splunk simply extracts from key=value pairs.

As I said - there is at least one app for ingesting CEF data (I think there were more of them, but some might be archived). But since it's ugly, because the format is not very well specified, unless you have a very good reason for sticking with CEF I'd suggest you go to the console and change the notification format. To make things even more interesting, as I see on "my" HX, the default (and actually the only available) format for notifications straight from the box is JSON. Is this a notification from the CM about an alert from HX?
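A quick run-anywhere illustration of that delimiter behaviour (the sample message below is made up, not taken from your data):

| makeresults
| eval _raw="msg=device quarantine requested sev=high"
| extract

With default automatic key-value extraction, msg ends up holding only "device" because the unquoted value stops at the first space - which is exactly the "truncation" you're seeing.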
Good day,

I have a query to check my Entra logs to see which Conditional Access policies get hit. It returns results like the ones below, but I would like it to display only the policies that were success or applied, and not the ones that were not applied.

CA                            CAName
success failure failure       CA-Office-MFA CA-Signin-LocationBased CA-HybridJoined
notApplied success failure    CA-Office-MFA CA-Signin-LocationBased CA-HybridJoined
notApplied success success    CA-Office-MFA CA-Signin-LocationBased CA-HybridJoined

What I want instead:

success failure failure       CA-Office-MFA CA-Signin-LocationBased CA-HybridJoined
success success               CA-Signin-LocationBased CA-HybridJoined
success failure               CA-Signin-LocationBased CA-HybridJoined

index=db_azure_entraid sourcetype="azure:monitor:aad" command="Sign-in activity" category=SignInLogs "properties.clientAppUsed"!=null NOT app="Windows Sign In"
| spath "properties.appliedConditionalAccessPolicies{}.result"
| search "properties.appliedConditionalAccessPolicies{}.result"=notApplied
| rename "properties.appliedConditionalAccessPolicies{}.result" as CA
| rename "properties.appliedConditionalAccessPolicies{}.displayName" as CAName
| dedup CA
| table CA CAName
OK. Let's back up a little.

1. How are the events ingested? Read from files with a monitor input, or in some other way (like a HEC input or a modular input)? You mention a UF, so I suspect monitor input(s), but I want to be sure.

2. I assume you meant props.conf, not propes.conf - that was just a typo here, right?

3. Line breaking is _not_ happening on the UF. You need to have your LINE_BREAKER defined on the first heavy component that the event passes through (if you're sending from the UF directly to indexers, you need this setting on the indexers) - see the sketch after this list.
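A minimal props.conf sketch of what point 3 refers to, assuming single-line events and a placeholder sourcetype name (adjust both to your data):

# props.conf on the indexers (or the first heavy forwarder in the path)
[your:sourcetype]
# Break events at newlines and don't try to merge lines back together
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false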
We are developing a Splunk app that uses an authenticated external API. In order to support the Cloud Platform, we need to pass the manual check for the cloud tag, but the following error occurred and we couldn't pass:

================
[ manual_check ] check_for_secret_disclosure - Check for passwords and secrets.
details: [ FAILED ] key1 value is being passed in the url which gets exposed in the network. Kindly add sensitive data in the headers to make the network communications secure.
================

Code:

req = urllib.request.Request(f"https://api.docodoco.jp/v6/search?key1={self.apikeys['apikey1']}...
req.add_header('Authorization', self.apikeys['apikey2'])

We understand that confidential information should be transmitted via HTTP headers or POST and should not be included in URLs. Since "key1" is not confidential information, we believe there should be no issue with including it in the URL. Due to the external API's specifications, "key1" must always be included in the URL, so we are looking for a way to pass this manual check.

For example, if there is a support desk, we would like to explain that there is no issue with the part flagged in the manual check. Does anyone know of such a support channel? Alternatively, if there is a way to provide additional information to the reviewers conducting this manual review (for example, adding comments to the source code), we would like to know.