So it turns out the SQL doesn't write the entire event at once, and Splunk therefore only reads part of the event. It worked in our TEST environment because I dumped the log file, so the entire events were there. The solution was:

multiline_event_extra_waittime = true
time_before_close = 10
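For context, both of those settings live in props.conf on the instance that monitors the file. A minimal sketch of the stanza, assuming a hypothetical sourcetype name (the name is illustrative, not from the original post):

# props.conf on the forwarder that reads the file
[sql:applog]
# delay the event-boundary decision until the file is closed,
# so partially written multiline events are not split
multiline_event_extra_waittime = true
# keep the file open for 10 seconds after EOF so late-written lines are picked up
time_before_close = 10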
Hi Ryan, thanks for checking in. I am still trying to figure out the issue.

yyyyyyyy000:/opt/appdynamics/machine-agent/jre/bin # ./keytool -printcert -sslserver cxxxxx.saas.appdynamics.com:443
keytool error: java.lang.Exception: No certificate from the SSL server

Support suggested checking with the network team for any block; I am currently looking into it.
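If it helps to cross-check keytool, a minimal sketch of the same handshake test with openssl (assuming openssl is installed on the host; the hostname is the one from the post above):

# prints the certificate chain the server presents during the TLS handshake
openssl s_client -connect cxxxxx.saas.appdynamics.com:443 -servername cxxxxx.saas.appdynamics.com </dev/null

If this also comes back with no certificate, the block is likely at the network/proxy layer rather than in the JRE trust store.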
@PickleRick Your comments helped. I was applying this at the UF level; changing it to the indexers made it work. Thanks!
Hi @jmartens, I just checked. Yes, for the 9.3.x branch, the fix is in version 9.3.1. Hope it helps!
Trying to use Splunk Cloud, I get: "The connection has timed out. An error occurred during a connection to prd-p-xauy6.splunkcloud.com." It seems to be an SSL cert error because of strict checking. Is there a solution?
Using append is almost never the right solution - you are performing the same search three times and just collecting bits of info each time. This can be done in one search:

index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
| eval success_time=if(searchmatch("Completed invokexPressionJob and obtained queue id ::"), _time, null())
| rex field=_raw "\s(?P<level>[^\/]+)\s\[main\]"
| stats latest(_time) as latest_time latest(success_time) as success_time sum(eval(if(level="ERROR",1,0))) as errors
| convert ctime(latest_time)
| convert ctime(success_time)

success_time is set only when the event matches the wanted criteria, and errors counts the events whose level is ERROR. I'm not sure what you're trying to do with the final append that puts Print Job on a new row.
Splunk Add-on for Google Cloud Platform: how do I add logs/a new input to get Kubernetes Pod Status?

What are the steps? How do I add a new input to bring Kubernetes Pod Status (highlighted in the GCP picture of Pods below) into Splunk?
Hi @Fadil.CK, thanks for asking your question on the Community. We had some spam issues, so the community has been in read-only mode for the past few days, not allowing other members to reply. Did you happen to find a solution you can share here? If you still need help, you can reach out to AppD Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.
Hi @Dhinagar.Devapalan, thanks for asking your question on the Community. We had some spam issues, so the community has been in read-only mode for the past few days, not giving other members a chance to reply. Did you happen to find a solution you can share here? If you still need help, you can reach out to AppD Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.
Hi @Jeffrey.Escamilla, I know we had some issues with community access last week. You can come back to comment and create content again. With that in mind, if Mario's answer helped you out, please click the 'Accept as Solution' button on the reply that helped. If you need more help, reply back to keep the conversation going. 
Run the following on a single-instance server or on the Monitoring Console instance of a distributed installation. The REST call SPL can be a massive help if the command-line option is not authenticating you:

| rest splunk_server=* /services/kvstore/status

In my experience, anything that is a search head or search head cluster should have a KV store backup in case of any corruption. A lot of apps are switching from lookup tables and opting for a better-performing KV store instead.
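On the backup itself, a minimal sketch of the CLI call (assuming a reasonably recent Splunk Enterprise version; the archive name is illustrative):

# run on the search head; the archive lands under $SPLUNK_HOME/var/lib/splunk/kvstorebackup
$SPLUNK_HOME/bin/splunk backup kvstore -archiveName kvstore_backup_2024

There is a matching splunk restore kvstore command, so it is worth test-restoring an archive before you depend on it.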
Put "$combined_token$" in the title or description of the first panel.  You can then see what is populating the token you depend upon.  Also I'm curious about your current eval, I would have opted to... See more...
Put "$combined_token$" in the title or description of the first panel.  You can then see what is populating the token you depend upon.  Also I'm curious about your current eval, I would have opted to make the spaces literal characters or only put in a single concatenation character between tokens.   <eval "combined">$token1$. .$token2$. .$token3$. .$token4$. $token5$</eval> I would try <eval "combined">$token1$." ".$token2$." ".$token3$." ".$token4$." ".$token5$</eval> or <eval "combined">$token1$.$token2$.$token3$.$token4$.$token5$</eval>  
Try something along these lines:

| makeresults
| eval origData="<?xml version='1.0' encoding='UTF-8'?><BAD_MSG><violation_masks><block>58f7c3e96a0c279b-7e3f5f28b0000040</block><alarm>5cf7c3e97b0c6fdb-7e3f5f28b0000040</alarm><learn>5cf2c1e9730c2f5b-3d3c000830000000</learn><staging>0-0</staging></violation_masks><response_violations><violation><viol_index>56</viol_index><viol_name>VIOL_HTTP_RESPONSE_STATUS</viol_name><response_code>500</response_code></violation></response_violations></BAD_MSG>"
| rex mode=sed field=origData "s/<(?<!)/ </g s/>(?=<)/> /g"
| rex max_match=0 field=origData "(?m)(?<line>^.+$)"
| fields - origData
| streamstats count as row
| mvexpand line
| eval open=if(match(line,"^\<(?!.*\/)"),1,null())
| eval undent=if(match(line,"^<\/"),-1,null())
| streamstats window=1 current=f values(open) as indent by row global=f
| streamstats sum(indent) as space by row global=f
| streamstats sum(undent) as unspace by row global=f
| fillnull value=0 unspace space
| eval spaces=space+unspace+len(line)
| eval line=printf("%".spaces."s",line)
| stats list(line) as line by row
| eval line=mvjoin(line," ")
| fields - row

You could add some additional tweaking to deal with the initial XML line if you are certain it is always there.
Your outputs.conf will need to match the pass4SymmKey set on the CM and IDX layer. Since you are trying to reduce existing logs, I want to assume that was already done, but I'm not certain based on your explanation of the error message. Since the metrics logs are abundant, and it's hard to imagine that HF performance matters at a 30-second frequency, I would recommend changing the collection interval and keeping the rest if possible.
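For reference, a minimal sketch of where that key sits on the forwarder side, assuming indexer discovery is in use (the stanza names and URI are illustrative):

# outputs.conf on the HF; pass4SymmKey must match the one configured on the CM
[indexer_discovery:cluster1]
pass4SymmKey = <same key as the CM and indexers>
manager_uri = https://cm.example.local:8089

[tcpout:cluster1_group]
indexerDiscovery = cluster1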
If you have a large organization with a large number of identities in your AD, you will want to consider reviewing the default cache size. Increasing the cache size will help avoid the additional CPU cycles spent replacing the Windows unique ID with a human-readable format.

evt_ad_cache_disabled = <boolean>
* Enables or disables the AD object cache.
* Default: false (enabled)

evt_ad_cache_exp = <integer>
* The expiration time, in seconds, for AD object cache entries.
* This setting is optional.
* Default: 3600 (1 hour)

evt_ad_cache_exp_neg = <integer>
* The expiration time, in seconds, for negative AD object cache entries.
* This setting is optional.
* Default: 10

evt_ad_cache_max_entries = <integer>
* The maximum number of AD object cache entries.
* This setting is optional.
* Default: 1000
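A minimal sketch of how that can look in inputs.conf, assuming a standard Security event log input with AD object resolution enabled (the numbers are illustrative, not recommendations):

# inputs.conf on the Windows host
[WinEventLog://Security]
disabled = 0
# resolve SIDs/GUIDs to human-readable names
evt_resolve_ad_obj = 1
# cache more entries, for longer, so fewer AD lookups repeat
evt_ad_cache_max_entries = 10000
evt_ad_cache_exp = 7200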
You can override the default timeout inside a rest call - add timeout=300 and see if that helps. You are more likely to run up against a different error, but getting past the timeout issue is the first step to finding the underlying one.
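For example, a minimal sketch (the endpoint is illustrative; the rest command's default timeout is 60 seconds):

| rest /services/data/indexes timeout=300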
Hello, I have a table with several fields that I display in a dashboard. One column comes from the violation_details field, which contains XML data. Note that I don't want to parse anything from this field, because depending on the violations the tags won't be the same. Here is an example of a value for this field:

<?xml version='1.0' encoding='UTF-8'?><BAD_MSG><violation_masks><block>58f7c3e96a0c279b-7e3f5f28b0000040</block><alarm>5cf7c3e97b0c6fdb-7e3f5f28b0000040</alarm><learn>5cf2c1e9730c2f5b-3d3c000830000000</learn><staging>0-0</staging></violation_masks><response_violations><violation><viol_index>56</viol_index><viol_name>VIOL_HTTP_RESPONSE_STATUS</viol_name><response_code>500</response_code></violation></response_violations></BAD_MSG>

How could I make this more readable, like this:

<?xml version='1.0' encoding='UTF-8'?>
<BAD_MSG>
  <violation_masks>
    <block>58f7c3e96a0c279b-7e3f5f28b0000040</block>
    <alarm>5cf7c3e97b0c6fdb-7e3f5f28b0000040</alarm>
    <learn>5cf2c1e9730c2f5b-3d3c000830000000</learn>
    <staging>0-0</staging>
  </violation_masks>
  <response_violations>
    <violation>
      <viol_index>56</viol_index>
      <viol_name>VIOL_HTTP_RESPONSE_STATUS</viol_name>
      <response_code>500</response_code>
    </violation>
  </response_violations>
</BAD_MSG>

I've seen this post, XML-to-display-in-a-proper-format-with-tag, but it seems to use a deprecated method. Is there a better way?
How can I check whether any apps/add-ons running in Splunk Cloud are dependent on Python versions < 3.9?
This is much more helpful. Running:

index=<name> | fieldsummary

gives me 2.4 million+ events and 261 statistics. I presume, then, that the 261 would be the sum total of disparate fields available to any of my queries. Should that be true, I need only investigate each one to see what the heck it is and figure out whether it is of any use. Not a small task, but I know more now than I did 30 minutes ago.
TY for that... when I run that first command it returns just north of 2.5 million events and 17 statistics. So I see bandwidth, cpu, df, df_metric, exec, interfaces, iostat, lsof, netstat, openPorts, package, protocol, ps, top, uptime, vmstat, and who. For all of these, sourcetype = source, with one exception: exec is broken out into 3 .sh files in a splunkforwarder folder structure. I do not know whether this is correct or not. For instance, I discovered there is a Fields link within Settings; I can get to Field Aliases, trim the list to "oracle", and see things reporting from Oracle Audit, Oracle Database, Oracle Listener, Oracle Instance, Oracle Session, Oracle SysPerf, etc. My understanding is that the Splunk index (is this a file?) is used by Splunk in searching for keywords (are these fields?). Thus, if the index contains ONLY the source/sourcetype information, then I'm gold, and I simply need to define what those 17 stats are actually from/for. However, I also know that cannot be true, as I can search on host=<something>, which is not in that list. I do hope that makes sense.
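As a side note, a quick way to see which hosts and sourcetypes actually live in the index is a tstats search like this sketch (the index name is a placeholder, as in the searches above):

| tstats count where index=<name> by host sourcetype

host, source, and sourcetype are metadata fields stored with every event, which is why you can search host=<something> even though it does not appear in your sourcetype list.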