All Posts

Hi @Kenny_splunk  I think the best place to start here is by checking the _audit index to see who is using/searching against the index in question. Start off with the following query and take it from there:

index=_audit search="*<yourIndexName>*" info=completed action=search

It's important to remember, however, that some people might search for index=* in order to access a particular index, which might not come up in the above search. They might also use something like win* instead of win_events. People can use index="yourName", index=yourName, index IN (yourName,anotherName), etc., which is why I included the wildcards on either side in the sample query above. You might want to tune it to your environment as you see fit! In these logs you should find a number of useful fields, such as "search" (what they ran) and "user" (who ran it), amongst other things like event_count and result_count. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
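For a quick summary of who is hitting the index and how often, a rollup along these lines can be appended to the audit search (an illustrative sketch using the standard _audit fields mentioned above; tune the base search the same way):

index=_audit search="*<yourIndexName>*" info=completed action=search
| stats count as searches max(_time) as last_searched by user
| convert ctime(last_searched)
| sort - searches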
If you want a list of the top 5 hosts reporting into each index then I would look to use the following search:

| tstats count where index=* by host, index
| sort - count
| streamstats count as n by index
| search n<=5
| stats values(host) by index

This Splunk search starts by using tstats to efficiently count events for each host and index, retrieving data across all indexes. It then sorts the results in descending order by event count so that the most active hosts appear first. The streamstats command assigns a running count (n) to each record within its respective index, effectively numbering the hosts within each index. The search n<=5 step filters the results to include only the top 5 hosts per index based on event count. Finally, stats values(host) by index consolidates the results to display the top 5 hosts for each index in a clean format. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @pflaher  I wonder if you could share an example event that you are searching across, as I don't have access to an example dataset for this? One thing you could try, which I have had success with, is using TERM, like this:

index=firewall sourcetype=cp_log:syslog source=checkpoint:firewall dest="172.24.245.210" TERM(*172.24.245.210*)

The wildcards are less than ideal but could help speed up your searches (I have found TERM can give 10x faster searches). Depending on the data you might be able to do TERM(dest=172.24.245.210) - you could try either. Does this give you a faster response? It would be worth comparing the job inspector for the two searches to see if this improves your response time, fingers crossed! Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
@gcusello I never got a chance to do it today but will try tomorrow and report back. 
well TIL… thanks @SanjayReddy 
When I run this query to give me results for the last 24 hours, it takes hours to complete. I would like to run it for, say, 30 days, but the time it takes would be unreasonable.

index=firewall sourcetype=cp_log:syslog source=checkpoint:firewall dest="172.24.245.210"
| fields dest, src
| dedup dest, src
| table dest, src

I am looking to identify any front-end application server that connects to this 172.24.245.210 server.
@kiran_panchavat we are facing a similar issue - any chance you can share the py script you received from PP?
The command you're looking for is eval.

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" ERROR=ERROR OR ERROR=WARN
| eval StatusMsg = case(<<some expression>>, "Task threw an uncaught and unrecoverable exception",
    <<some other expression>>, "Ignoring await stop request for non-present connector",
    ...,
    <<a different expression>>, "Connection refused",
    1==1, "Unknown")
| table host connName StatusMsg

The trick is in selecting the appropriate status message. You'll need to key off some field(s) in the results.
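If the message text appears in the raw event, one way to build those expressions is with match() against _raw. A sketch covering a few of the messages (extend the pattern list to the full set, and swap _raw for a dedicated message field if one is extracted):

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" ERROR=ERROR OR ERROR=WARN
| eval StatusMsg = case(
    match(_raw, "uncaught and unrecoverable exception"), "Task threw an uncaught and unrecoverable exception",
    match(_raw, "Ignoring await stop request for non-present connector"), "Ignoring await stop request for non-present connector",
    match(_raw, "Failed to start connector"), "Failed to start connector",
    match(_raw, "Connection refused"), "Connection refused",
    1==1, "Unknown")
| table host connName StatusMsg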
There is no option in tstats or values to limit the number of values. You can, however, expand the host field and then limit the number displayed.

| tstats values(host) as host where index=* by index
| mvexpand host
| dedup 5 index
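If you want the output folded back into one row per index (e.g. for a cleaner .csv export), a final stats could be appended, something like:

| stats values(host) as host by index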
Hi  I am looking to extract some key-value pairs for each event. I have data that always has resourceSpans{}.scopeSpans{}.spans{}.attributes{}.key but it might have resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.doubleValue or resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue. I want to run stats commands on them, so I was looking to extract each

Key | doubleValue or stringValue

pair and then use them. This is some of the data I have. We can see that doubleValue and stringValue are mixed and can pop up at any time. I have tried the following, but there is an issue:

source="trace_Marketing_Bench_31032016_17_cff762901d1eff01766119738a9218e2.jsonl" host="TEST1" index="murex_logs" sourcetype="Market_Risk_DT" "**strategy**" 920e1021406277a9
| spath "resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue"
| spath "resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.doubleValue"
| spath "resourceSpans{}.scopeSpans{}.spans{}.attributes{}.key"
| eval output=mvzip('resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue','resourceSpans{}.scopeSpans{}.spans{}.attributes{}.key')
| table output

The order is not coming out correctly. In red, we can see that WARNING is paired with mr_batch_status, not mr_batch_compute_cpu_time - that is because the fields are extracted independently and are not synced to each other. How do I get them to extract in sync? Some raw data:

{"resourceSpans":[{"resource":{"attributes":[{"key":"telemetry.sdk.version","value":{"stringValue":"1.12.0"}},{"key":"telemetry.sdk.name","value":{"stringValue":"opentelemetry"}},{"key":"telemetry.sdk.language","value":{"stringValue":"cpp"}},{"key":"service.instance.id","value":{"stringValue":"00vptl2h"}},{"key":"service.namespace","value":{"stringValue":"MXMARKETRISK.SERVICE"}},{"key":"service.name","value":{"stringValue":"MXMARKETRISK.ENGINE.MX"}}]},"scopeSpans":[{"scope":{"name":"murex::tracing_backend::otel","version":"v1"},"spans":[{"traceId":"cff762901d1eff01766119738a9218e2","spanId":"71d94e8ebb30a3d5","parentSpanId":"920e1021406277a9","name":"fullreval_task","kind":"SPAN_KIND_INTERNAL","startTimeUnixNano":"1716379123221825454","endTimeUnixNano":"1716379155367858727","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_batchId","value":{"stringValue":"440"}},{"key":"mr_batchType","value":{"stringValue":"Full Revaluation"}},{"key":"mr_bucketName","value":{"stringValue":"imccBucket#ALL_10_Reduced"}},{"key":"mr_jobDomain","value":{"stringValue":"Market Risk"}},{"key":"mr_jobId","value":{"stringValue":"Marketing_Bench | 31/03/2016 | 17"}},{"key":"mr_strategy","value":{"stringValue":"typo_Bond"}},{"key":"mr_uuid","value":{"stringValue":"b1ed4d3a-0e4d-4afa-ad39-7cf6a07c36a9"}},{"key":"mrb_batch_affinity","value":{"stringValue":"Marketing_Bench_run_Batch|Marketing_Bench|2016/03/31|17_FullReval0_00029"}},{"key":"mr_batch_compute_cpu_time","value":{"doubleValue":31.586568}},{"key":"mr_batch_compute_time","value":{"doubleValue":31.777}},{"key":"mr_batch_load_cpu_time","value":{"doubleValue":0.0}},{"key":"mr_batch_load_time","value":{"doubleValue":0.0}},{"key":"mr_batch_status","value":{"stringValue":"WARNING"}},{"key":"mr_batch_total_cpu_time","value":{"doubleValue":31.912966}},{"key":"mr_batch_total_time","value":{"doubleValue":32.14}}],"status":{}}]}]}]}
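One pattern that keeps each key aligned with its own value is to expand the attributes array and read both from the same JSON object, rather than extracting the fields independently. A sketch, assuming Splunk 8.1+ for the json_extract eval function, with paths taken from the sample event above:

index="murex_logs" sourcetype="Market_Risk_DT" "**strategy**" 920e1021406277a9
| spath path=resourceSpans{}.scopeSpans{}.spans{}.attributes{} output=attrs
| mvexpand attrs
| eval key=json_extract(attrs, "key"),
       value=coalesce(json_extract(attrs, "value.stringValue"), json_extract(attrs, "value.doubleValue"))
| table key value

Each expanded row carries one attribute object, so key and value always come from the same element, and coalesce picks whichever of stringValue or doubleValue is present.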
Hi @Praz_123  You can use a REST call in the Monitoring Console to get information about the search heads and indexers:

| rest splunk_server=<server name> services/server/sysinfo
| eval "RAM GB"=round(physicalMemoryMB/1024)
| table os_name os_build cpu_arch "RAM GB" numberOfCores numberOfVirtualCores transparent_hugepages.defrag transparent_hugepages.enabled transparent_hugepages.effective_state ulimits*
| rename os_name as "Operating System" os_build as "OS Build" cpu_arch as "OS Arch" numberOfCores as "Physical Cores" numberOfVirtualCores as "Virtual Cores" transparent_hugepages.defrag as "THP Defrag" transparent_hugepages.enabled as "THP enabled"

You can run the following REST call from the CLI as well:

curl -k -u admin:changeme https://localhost:8089/services/server/sysinfo
Hello I'm looking to modify this search I found and am using. I like the result set but would like to limit the host count to just five for each index it reports to. The .csv export of the original search is really messy and just unusable. My SPL skills are limited at the moment so any help is much appreciated.

| tstats values(host) as host where index=* by index
I haven't found any new information, so I'll contact support. Thanks!
Hi @Drew .Gingerich, Thanks for asking your question on the Community. It's been a few days with no reply from the Community. Did you find any additional information or a solution you can share here? If you still need help, you can contact AppDynamics Support: How do I open a case with AppDynamics Support? 
Hi @Rakshit.Patki, Thanks for asking your question on the community. It's been a few days with no reply from the Community. Did you happen to find any additional information or a solution you can share? If you still need help, you can contact AppDynamics Support: How do I open a case with AppDynamics Support? 
Hi @Lukas.Holub, Since it's been a bit with no reply from the community, did you happen to find any additional information or a solution you can share? If you still need help you can contact AppDynamics Support: How do I open a case with AppDynamics Support? 
I have the following values that will go in a field titled StatusMsg:

"Task threw an uncaught and unrecoverable exception"
"Ignoring await stop request for non-present connector"
"Graceful stop of task"
"Failed to start connector"
"Error while starting connector"
"Ignoring error closing connection"
"failed to publish monitoring message"
"Ignoring error closing connection"
"restart failed"
"disconnected"
"Communications link failure during rollback"
"Exception occurred while closing reporter"
"Connection to node"
"Unexpected exception sending HTTP Request"
"Ignoring stop request for unowned task"
"failed on invocation of onPartitionsAssigned for partitions"
"Ignoring stop request for unowned connector"
"Ignoring await stop request for non-present connector"
"Connection refused"

I am not certain how to do this. This is the base search:

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" ERROR=ERROR OR ERROR=WARN

I want to create the field on the fly and have it pick up the appropriate CASE value. I would then put it in a table with host connName StatusMsg. Any assist would be greatly appreciated.
@Praz_123  There are a few ways you can check your ulimit settings. The monitoring console includes a health check for ulimits. See https://docs.splunk.com/Documentation/Splunk/9.4.0/DMC/Customizehealthcheck

Each time the Splunk Enterprise service is started or restarted, it will report on the ulimits. You can search the internal logs for the report using:
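A search along these lines should surface the startup ulimit messages (a sketch; the component field assumes the default splunkd sourcetype extraction - verify in your environment):

index=_internal sourcetype=splunkd component=ulimit
| table _time host _raw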
Thanks @livehybrid . I'll ask our customers to try out your suggestions and will report back. I really appreciate your help!
@Kenny_splunk
Find sourcetypes that are consuming a lot of data, especially unnecessary logs.
Reduce retention or delete them if they are no longer needed.
If multiple indexes contain similar data, consolidate where possible.
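As a starting point for finding the heavy sourcetypes, a search over the internal license usage log is commonly used (a sketch; st, idx, and b are the sourcetype, index, and bytes fields in license_usage.log, and the figures reflect licensed ingest rather than disk usage):

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by st, idx
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB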