All Posts

Using append is almost never the right solution - you are performing the same search three times and just collecting bits of info each time - this can be done in one search:

index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
| eval success_time=if(searchmatch("Completed invokexPressionJob and obtained queue id ::"), _time, null())
| rex field=_raw "\s(?P<level>[^\/]+)\s\[main\]"
| stats latest(_time) as latest_time latest(success_time) as success_time sum(eval(if(level="ERROR",1, 0))) as errors
| convert ctime(latest_time)
| convert ctime(success_time)

success_time is set when the event matches the wanted criteria, and errors are counted when the level is ERROR. Not sure what you're trying to do with the final append that puts Print Job on a new row.
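If the intent of that final append is simply to label the row with the job name, a plain eval tacked onto the single-search version is enough. A minimal sketch, reusing the placeholders from the original question:

index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
| eval success_time=if(searchmatch("Completed invokexPressionJob and obtained queue id ::"), _time, null())
| rex field=_raw "\s(?P<level>[^\/]+)\s\[main\]"
| stats latest(_time) as latest_time latest(success_time) as success_time sum(eval(if(level="ERROR",1, 0))) as errors
| eval job_name="Print Job"
| convert ctime(latest_time) ctime(success_time)

That keeps everything on one row and avoids the extra passes over the same events.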
Splunk add-on for Google Cloud Platform: how to add logs/a new input for Kubernetes Pod status? What are the steps? How do I add a new input so that Kubernetes Pod status (highlighted in the GCP screenshot of Pods below) gets into Splunk?
Hi @Fadil.CK, Thanks for asking your question on the Community. We had some spam issues, so the community has been in read-only mode for the past few days, not allowing other members to reply. Did you happen to find a solution you can share here? If you still need help, you can reach out to AppD Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.
Hi @Dhinagar.Devapalan, Thanks for asking your question on the Community. We had some spam issues, so the community has been in read-only mode for the past few days, not giving other members a chance to reply. Did you happen to find a solution you can share here? If you still need help, you can reach out to AppD Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.
Hi @Jeffrey.Escamilla, I know we had some issues with community access last week. You can come back to comment and create content again. With that in mind, if Mario's answer helped you out, please click the 'Accept as Solution' button on the reply that helped. If you need more help, reply back to keep the conversation going. 
Run the following on a single-instance server or, in a distributed installation, on the Monitoring Console instance. The rest call SPL can be a massive help if the CMD line option is not authenticating you.

| rest splunk_server=* /services/kvstore/status

In my experience, for anything that is a search head or search head cluster you do want to have a KVStore backup in case of any corruption. A lot of apps are switching from lookup tables and opting for the better-performing KVStore.
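As a follow-up, the output of that rest call can be narrowed to the columns that usually matter. This is only a sketch: the exact field names (such as current.status and current.backupRestoreStatus) vary a bit between Splunk versions, so treat them as assumptions and check the raw output first.

| rest splunk_server=* /services/kvstore/status
| fields splunk_server current.status current.backupRestoreStatus current.replicationStatus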
Put "$combined_token$" in the title or description of the first panel.  You can then see what is populating the token you depend upon.  Also I'm curious about your current eval, I would have opted to... See more...
Put "$combined_token$" in the title or description of the first panel.  You can then see what is populating the token you depend upon.  Also I'm curious about your current eval, I would have opted to make the spaces literal characters or only put in a single concatenation character between tokens.   <eval "combined">$token1$. .$token2$. .$token3$. .$token4$. $token5$</eval> I would try <eval "combined">$token1$." ".$token2$." ".$token3$." ".$token4$." ".$token5$</eval> or <eval "combined">$token1$.$token2$.$token3$.$token4$.$token5$</eval>  
Try something along these lines:

| makeresults
| eval origData="<?xml version='1.0' encoding='UTF-8'?><BAD_MSG><violation_masks><block>58f7c3e96a0c279b-7e3f5f28b0000040</block><alarm>5cf7c3e97b0c6fdb-7e3f5f28b0000040</alarm><learn>5cf2c1e9730c2f5b-3d3c000830000000</learn><staging>0-0</staging></violation_masks><response_violations><violation><viol_index>56</viol_index><viol_name>VIOL_HTTP_RESPONSE_STATUS</viol_name><response_code>500</response_code></violation></response_violations></BAD_MSG>"
| rex mode=sed field=origData "s/<(?<!)/ </g s/>(?=<)/> /g"
| rex max_match=0 field=origData "(?m)(?<line>^.+$)"
| fields - origData
| streamstats count as row
| mvexpand line
| eval open=if(match(line,"^\<(?!.*\/)"),1,null())
| eval undent=if(match(line,"^<\/"),-1,null())
| streamstats window=1 current=f values(open) as indent by row global=f
| streamstats sum(indent) as space by row global=f
| streamstats sum(undent) as unspace by row global=f
| fillnull value=0 unspace space
| eval spaces=space+unspace+len(line)
| eval line=printf("%".spaces."s",line)
| stats list(line) as line by row
| eval line=mvjoin(line," ")
| fields - row

You could add some additional tweaking to deal with the initial xml declaration line if you are certain it is always there.
Your outputs.conf will need to match the pass4SymmKey set on the CM and IDX layer. Since you are trying to reduce existing logs, I want to assume that was already done, but I'm not certain based on your explanation of the error message. Since the metrics logs are abundant, and it's hard to believe HF performance matters at a 30-second frequency, I would recommend changing the collection interval and keeping the rest if possible.
If you have a large organization with a large number of identities in your AD, you will want to consider reviewing the default cache size. Increasing the cache size helps avoid the extra CPU cycles spent replacing the Windows unique ID with a human-readable format.

evt_ad_cache_disabled = <boolean>
* Enables or disables the AD object cache.
* Default: false (enabled)

evt_ad_cache_exp = <integer>
* The expiration time, in seconds, for AD object cache entries.
* This setting is optional.
* Default: 3600 (1 hour)

evt_ad_cache_exp_neg = <integer>
* The expiration time, in seconds, for negative AD object cache entries.
* This setting is optional.
* Default: 10

evt_ad_cache_max_entries = <integer>
* The maximum number of AD object cache entries.
* This setting is optional.
* Default: 1000
You can override the default timeout inside a rest call - add timeout=300 and see if that helps. You are more likely to run up against a different error afterwards, but getting past the timeout is the first step to finding the underlying issue.
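For example, the timeout argument (in seconds) goes directly on the rest search command. The endpoint below is just a placeholder for whichever endpoint you are actually querying:

| rest /services/data/indexes timeout=300
| table title currentDBSizeMB maxTotalDataSizeMB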
Hello, I have a table with several fields that I display in a dashboard. One column is the violation_details field, which contains XML data. Note that I don't want to parse anything from this field, because depending on the violations the tags won't be the same. Here is an example of a value for this field:

<?xml version='1.0' encoding='UTF-8'?><BAD_MSG><violation_masks><block>58f7c3e96a0c279b-7e3f5f28b0000040</block><alarm>5cf7c3e97b0c6fdb-7e3f5f28b0000040</alarm><learn>5cf2c1e9730c2f5b-3d3c000830000000</learn><staging>0-0</staging></violation_masks><response_violations><violation><viol_index>56</viol_index><viol_name>VIOL_HTTP_RESPONSE_STATUS</viol_name><response_code>500</response_code></violation></response_violations></BAD_MSG>

How could I make this more readable, like this:

<?xml version='1.0' encoding='UTF-8'?>
<BAD_MSG>
  <violation_masks>
    <block>58f7c3e96a0c279b-7e3f5f28b0000040</block>
    <alarm>5cf7c3e97b0c6fdb-7e3f5f28b0000040</alarm>
    <learn>5cf2c1e9730c2f5b-3d3c000830000000</learn>
    <staging>0-0</staging>
  </violation_masks>
  <response_violations>
    <violation>
      <viol_index>56</viol_index>
      <viol_name>VIOL_HTTP_RESPONSE_STATUS</viol_name>
      <response_code>500</response_code>
    </violation>
  </response_violations>
</BAD_MSG>

I've seen this post, XML-to-display-in-a-proper-format-with-tag, but it seems to use a deprecated method. Is there a better way?
How can I check which apps/add-ons running in Splunk Cloud are dependent on Python versions < 3.9?
This is much more helpful. Running:

index=<name> | fieldsummary

gives me 2.4 million+ events and 261 statistics. I presume then that the 261 is the sum total of disparate fields available to any of my queries. Should that be true, then I need only investigate each one to see what the heck they are and figure out if they are of any use. Not a small task, but I know more now than I did 30 minutes ago.
TY 4 that...when I run that first command it returns just north of 2.5 million events and 17 statistics. So I see bandwidth, cpu, df, df_metric, exec, interfaces, iostat, lsof, netstat, openPorts, package, protocol, ps, top, uptime, vmstat, and who. For all of these, the sourcetype = source, with one exception: exec is broken out into 3 .sh files in a splunkforwarder folder structure. I do not know if this is correct or not. For instance, I discovered there is a fields link within Settings, and I can get to Field Aliases, trim the list to "oracle", and I see stuff reporting from Oracle Audit, Oracle Database, Oracle Listener, Oracle Instance, Oracle Session, Oracle SysPerf, etc... My understanding is the Splunk index (is this a file?) is used by Splunk in searching for keywords (are these fields?). Thus, if the index contains ONLY the source / sourcetype information, then I'm gold and I simply need to define what those 17 stats are actually from/for. However, I also know that cannot be true, as I can search on Host=<something>, which is not in that list. I do hope that makes sense.
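If you want to enumerate those field aliases from a search rather than the Settings UI, the knowledge-object REST endpoints can be queried. This is only a sketch: the data/props/fieldaliases endpoint and the columns shown are assumptions based on the standard REST API reference, and reading it may require elevated permissions in your environment.

| rest /services/data/props/fieldaliases
| search title="*oracle*"
| table title stanza eai:acl.app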
I agree with @sjringo but offer this faster query to find the information:

| tstats count where index=foo by sourcetype

Splunk doesn't store data in tables so there's no equivalent to a SQL table dump. You can use the fieldsummary command to see what fields are in the index along with their values.

index=foo | fieldsummary
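If the full fieldsummary output is too much to wade through, its standard options and a table can trim it down. A small sketch:

index=foo
| fieldsummary maxvals=5
| table field count distinct_count values

That keeps just the field name, how many events carry it, how many distinct values it has, and a short sample of those values.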
I would say the first thing to look at is what the different sourcetypes in the index are:

index=foo | stats count by sourcetype

That will give you some kind of idea of what is being ingested for the index you have. Then, if the sourcetype name indicates what kind of logs they are, you can also look at the sources:

index=foo | stats count by sourcetype,source

This would give you an idea of what is in the index.
Thank you so much for your response. I have checked the link; the queries discussed in that answer are helpful for tracking the status of a notable event, such as when it is new, when it is picked up, and when it is closed. However, this is not exactly what I'm looking for. I apologise if my question wasn't clear. What I need is to calculate the time difference between when the notable event was triggered and the time of the raw log that caused it. This will help me assess how long my correlation search took to detect the anomaly. The goal is to fine-tune the correlation searches, as not all of them are running in real time. Let me explain with an example: suppose I have a rule that triggers when there are 50 failed login attempts within a 20-minute window. If this condition was true from 9:00 AM to 9:20 AM, but due to a delay (either from the ES server or some other reason) the search didn't run until 9:30 AM, then I've lost 10 minutes before my SOC team was alerted. If I can have a dashboard that shows the exact time difference between the raw event and the notable trigger, I can better optimise my correlation searches to minimise such delays.
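One way such a dashboard search could look, as a hedged sketch: it assumes the notables in index=notable carry orig_time (the timestamp of the contributing raw event) or at least the info_max_time of the triggering search window. Field availability varies per correlation search, so treat those field names as assumptions and verify them against your own notables first.

index=notable
| eval raw_event_time=coalesce(orig_time, info_max_time)
| eval detection_lag_sec=_time - raw_event_time
| stats avg(detection_lag_sec) as avg_lag_sec max(detection_lag_sec) as max_lag_sec by search_name

Charting avg_lag_sec and max_lag_sec by search_name would show which correlation searches are falling furthest behind their raw events.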
Hi Team, As per the business requirement, I need to get the below details from the same autosys batch and display the corresponding outputs on a single row in a table:
1. Last execution time
2. Execution time of a specific search keyword, i.e., Completed invokexPressionJob and obtained queue id ::
3. Number of times the "ERROR" keyword is present

index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
| stats latest(_time) as latest_time
| convert ctime(latest_time)
| append [search index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
    | search "Completed invokexPressionJob and obtained queue id ::"
    | stats latest(_time) as last_success_time
    | convert ctime(last_success_time)]
| append [search index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
    | rex field=_raw "\s(?P<level>[^\/]+)\s\[main\]"
    | stats count(level) by level
    | WHERE level IN ("ERROR")]
| append [| makeresults
    | eval job_name="Print Job"]
| table latest_time last_success_time count(level) job_name
| stats list(*) as *

The above query works fine. From a query performance perspective, am I achieving the output the right way? Is there any better way to achieve it? I need to apply a similar set of queries to 10 other batch jobs inside the Splunk dashboard. Kindly suggest!!
My apologies for such a noob question. I literally got dropped into a Splunk environment and I know little to nothing about it. I have an index (foo as an example) and I'm told it's based on Oracle audit logs. However, the index was built for us by the Admin, and all I got was blank looks when I asked what exactly is IN the index. So my question is...how can I interrogate the index to find out what is in it? I ran across these commands:

| metadata type=sourcetypes index="foo"
| metadata type=hosts index="foo"

This is a start, so now I have some sourcetype "keywords" (is that right?) and I can see some hosts. But I suspect that's just the tip of the iceberg, as it were, given the index itself is pretty darn big. I'm an Oracle guy, and if I wanted to get familiar w/ an Oracle structure I would start w/ looking at the table structures, note the fields in all the tables, and get a diagram if one was available. I don't have that option here. I don't have the rights to "manage" the index or even create my own. So I have an index and no real clue as to what is in it...
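For completeness, the same metadata command also lists the sources in the index, which pairs well with the two searches above:

| metadata type=sources index="foo"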