All Posts

There is no option in tstats or the values() function to limit the number of values. You can, however, expand the host field and then limit the number displayed:

| tstats values(host) as host where index=* by index
| mvexpand host
| dedup 5 index
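The per-index limiting this answer is after can be sketched outside SPL as well. Here is a minimal Python illustration (the row data is hypothetical) of the idea: keep the first five host values seen for each index and drop the rest.

```python
from collections import defaultdict

def limit_hosts_per_index(rows, limit=5):
    """Keep at most `limit` (index, host) rows per index value,
    mirroring the "first N per group" effect of the dedup step."""
    kept = defaultdict(int)
    out = []
    for index, host in rows:
        if kept[index] < limit:
            kept[index] += 1
            out.append((index, host))
    return out
```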
Hi, I am looking to extract some key-value pairs for each event. I have data that always has resourceSpans{}.scopeSpans{}.spans{}.attributes{}.key, but it might have resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.doubleValue or resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue. I want to run stats commands on them, so I was looking to extract each key with its doubleValue or stringValue and then use them.

This is some of the data I have. We can see that doubleValue and stringValue are mixed and can pop up at any time. I have tried the following, but there is an issue:

source="trace_Marketing_Bench_31032016_17_cff762901d1eff01766119738a9218e2.jsonl" host="TEST1" index="murex_logs" sourcetype="Market_Risk_DT" "**strategy**" 920e1021406277a9
| spath "resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue"
| spath "resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.doubleValue"
| spath "resourceSpans{}.scopeSpans{}.spans{}.attributes{}.key"
| eval output=mvzip('resourceSpans{}.scopeSpans{}.spans{}.attributes{}.value.stringValue','resourceSpans{}.scopeSpans{}.spans{}.attributes{}.key')
| table output

The order is not coming out correctly. In red, we can see that WARNING is paired with mr_batch_status, not mr_batch_compute_cpu_time. That is because the two fields are extracted independently and are not synced to each other. How do I get them to extract in sync?
Some raw data:

{"resourceSpans":[{"resource":{"attributes":[{"key":"telemetry.sdk.version","value":{"stringValue":"1.12.0"}},{"key":"telemetry.sdk.name","value":{"stringValue":"opentelemetry"}},{"key":"telemetry.sdk.language","value":{"stringValue":"cpp"}},{"key":"service.instance.id","value":{"stringValue":"00vptl2h"}},{"key":"service.namespace","value":{"stringValue":"MXMARKETRISK.SERVICE"}},{"key":"service.name","value":{"stringValue":"MXMARKETRISK.ENGINE.MX"}}]},"scopeSpans":[{"scope":{"name":"murex::tracing_backend::otel","version":"v1"},"spans":[{"traceId":"cff762901d1eff01766119738a9218e2","spanId":"71d94e8ebb30a3d5","parentSpanId":"920e1021406277a9","name":"fullreval_task","kind":"SPAN_KIND_INTERNAL","startTimeUnixNano":"1716379123221825454","endTimeUnixNano":"1716379155367858727","attributes":[{"key":"market_risk_span","value":{"stringValue":"true"}},{"key":"mr_batchId","value":{"stringValue":"440"}},{"key":"mr_batchType","value":{"stringValue":"Full Revaluation"}},{"key":"mr_bucketName","value":{"stringValue":"imccBucket#ALL_10_Reduced"}},{"key":"mr_jobDomain","value":{"stringValue":"Market Risk"}},{"key":"mr_jobId","value":{"stringValue":"Marketing_Bench | 31/03/2016 | 17"}},{"key":"mr_strategy","value":{"stringValue":"typo_Bond"}},{"key":"mr_uuid","value":{"stringValue":"b1ed4d3a-0e4d-4afa-ad39-7cf6a07c36a9"}},{"key":"mrb_batch_affinity","value":{"stringValue":"Marketing_Bench_run_Batch|Marketing_Bench|2016/03/31|17_FullReval0_00029"}},{"key":"mr_batch_compute_cpu_time","value":{"doubleValue":31.586568}},{"key":"mr_batch_compute_time","value":{"doubleValue":31.777}},{"key":"mr_batch_load_cpu_time","value":{"doubleValue":0.0}},{"key":"mr_batch_load_time","value":{"doubleValue":0.0}},{"key":"mr_batch_status","value":{"stringValue":"WARNING"}},{"key":"mr_batch_total_cpu_time","value":{"doubleValue":31.912966}},{"key":"mr_batch_total_time","value":{"doubleValue":32.14}}],"status":{}}]}]}]}
Hi @Praz_123, you can use a REST call in the Monitoring Console to get information about the search heads and indexers:

| rest splunk_server=<server name> services/server/sysinfo
| eval "RAM GB"=round(physicalMemoryMB/1024)
| table os_name os_build cpu_arch "RAM GB" numberOfCores numberOfVirtualCores transparent_hugepages.defrag transparent_hugepages.enabled transparent_hugepages.effective_state ulimits*
| rename os_name as "Operating System" os_build as "OS Build" cpu_arch as "OS Arch" numberOfCores as "Physical Cores" numberOfVirtualCores as "Virtual Cores" transparent_hugepages.defrag as "THP Defrag" transparent_hugepages.enabled as "THP enabled"

You can make the same REST call from the CLI as well:

curl -k -u admin:changeme https://localhost:8089/services/server/sysinfo
Hello, I'm looking to modify this search I found and am using. I like the result set, but would like to limit the host count to just five for each index reported. The .csv export of the original search is really messy and just unusable. My SPL skills are limited at the moment, so any help is much appreciated.

| tstats values(host) as host where index=* by index
I haven't found any new information, so I'll contact support. Thanks!
Hi @Drew.Gingerich, Thanks for asking your question on the Community. It's been a few days with no reply from the Community. Did you find any additional information or a solution you can share here? If you still need help, you can contact AppDynamics Support: How do I open a case with AppDynamics Support?
Hi @Rakshit.Patki, Thanks for asking your question on the community. It's been a few days with no reply from the Community. Did you happen to find any additional information or a solution you can share? If you still need help, you can contact AppDynamics Support: How do I open a case with AppDynamics Support? 
Hi @Lukas.Holub, Since it's been a bit with no reply from the community, did you happen to find any additional information or a solution you can share? If you still need help you can contact AppDynamics Support: How do I open a case with AppDynamics Support? 
I have the following values that will go in a field titled StatusMsg:

"Task threw an uncaught and unrecoverable exception"
"Ignoring await stop request for non-present connector"
"Graceful stop of task"
"Failed to start connector"
"Error while starting connector"
"Ignoring error closing connection"
"failed to publish monitoring message"
"restart failed"
"disconnected"
"Communications link failure during rollback"
"Exception occurred while closing reporter"
"Connection to node"
"Unexpected exception sending HTTP Request"
"Ignoring stop request for unowned task"
"failed on invocation of onPartitionsAssigned for partitions"
"Ignoring stop request for unowned connector"
"Connection refused"

I am not certain how to do this. This is the base search:

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" ERROR=ERROR OR ERROR=WARN

I want to create the field on the fly and have it pick up the appropriate CASE value. I would then put it in a table with host connName StatusMsg. Any assistance would be greatly appreciated.
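In SPL this kind of mapping is usually done with an eval such as | eval StatusMsg=case(match(_raw, "pattern1"), "label1", ...). The matching logic itself can be sketched in Python; the pattern list below is a hypothetical subset of the values above:

```python
import re

# Hypothetical subset of the StatusMsg values listed above.
STATUS_PATTERNS = [
    "Task threw an uncaught and unrecoverable exception",
    "Failed to start connector",
    "Connection refused",
]

def status_msg(raw_event):
    """Return the first listed pattern found in the event, else None.
    This mirrors SPL's case(match(_raw, "..."), "...", ...) evaluation,
    which also returns the first matching branch."""
    for pattern in STATUS_PATTERNS:
        if re.search(re.escape(pattern), raw_event):
            return pattern
    return None
```

Like case() in SPL, the first matching branch wins, so order the most specific patterns first.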
@Praz_123 There are a few ways you can check your ulimit settings. The Monitoring Console includes a health check for ulimits; see https://docs.splunk.com/Documentation/Splunk/9.4.0/DMC/Customizehealthcheck

Each time the Splunk Enterprise service is started or restarted, it reports on the ulimits. You can find that report by searching the internal logs (index=_internal).
Thanks @livehybrid . I'll ask our customers to try out your suggestions and will report back. I really appreciate your help!
@Kenny_splunk Find sourcetypes that are consuming a lot of data, especially unnecessary logs. Reduce retention or delete them if they are no longer needed. If multiple indexes contain similar data, consolidate where possible.
@Kenny_splunk Use the tstats command to track index usage over time; this will help you identify peaks and patterns in data usage. Review and adjust your index retention policies to ensure that data is stored only for as long as needed, which can help reduce storage costs. Review saved searches and reports to ensure they are still relevant and being used, and disable or delete those that are not. Optimize your searches by using efficient search commands and avoiding unnecessary subsearches. Use summary indexing and data models for faster results.

Index Usage Over Time:
@Joseph.McNellage were you ever able to resolve this issue? We are encountering the exact same thing and it appears to be a problem with ElasticSearch. I'm wondering if upgrading to the newest version might resolve the issue.
In the investigation panel for an incident in Splunk SOAR, there is a comment or command field under Activity.  If you copy and paste multiple lines of text that include blank lines in between sections of text in the comment field, all formatting is lost and the text is all bunched together. However, if you select an incident from  the queue and select the Edit button, and paste the same lines of text in the "Add comment" field, the formatting is preserved. Is there any way to add a new line character or line break to the text to maintain the blank lines or prevent the text from bunching up?
@jiaminyun When an index in Splunk becomes full, indexing will stop. It's important to monitor your index capacity to prevent it from getting full, as this can impact overall performance. You can use the rest command to check the index size, or the eventcount command; the DMC also shows index size details.
So we are starting a new project soon, and basically our boss is personally sending me an index (not internal) to investigate, as far as usage goes. We are trying to optimize the environment and cut what's not being used, or check what is being overused: knowledge objects (KOs), data intake, etc. Any good practices, processes, or tips you can lend? This would be the most perfect learning opportunity. I'm excited, but nervous.
How much syntax has changed from splunklib (which ran on Python 2.x) to splunk-sdk (which runs on Python 3.x)? Just seems like a lot of the tutorials and info on Splunk API is super outdated. Is nobody doing this anymore? Currently mainly interested in running a search and getting results into Pandas using Python. Also breaking up a search into multiple smaller time spans if the time period is too long and/or the return data set too large.   I have old code from the splunklib Python 2.0 days but basically just starting over and using it as reference.  
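For what it's worth, the current splunk-sdk package for Python 3 still exposes the splunklib module name, so much old code ports with modest changes. The time-span splitting part is plain Python and independent of the SDK; here is a minimal sketch (the window size is a placeholder, and each window would typically be submitted as its own job, e.g. with earliest_time/latest_time on a search):

```python
from datetime import datetime, timedelta

def chunk_time_range(earliest, latest, span):
    """Split [earliest, latest) into consecutive windows no longer than
    `span`. Each (start, end) pair can then drive one smaller search job,
    keeping individual result sets manageable."""
    windows = []
    start = earliest
    while start < latest:
        end = min(start + span, latest)
        windows.append((start, end))
        start = end
    return windows
```

A partial final window is kept as-is rather than padded, so the union of windows exactly covers the requested range. The per-window results can then be collected into one Pandas DataFrame.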
Can you please check whether the syntax and everything else is correct? I have used the same thing in my terminal, and after this I am running the following in my Search & Reporting search bar:

index="mycloud" sourcetype="httpevent" | table message

props.conf

[source::http:LogStash]
sourcetype = httpevent
TRANSFORMS-00 = securelog_set_default_metadata
TRANSFORMS-01 = securelog_override_raw

transforms.conf

[securelog_set_default_metadata]
INGEST_EVAL = host = json_extract(_raw, "host.name")

[securelog_override_raw]
INGEST_EVAL = _raw = json_extract(_raw, "message")
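As a sanity check of what the two INGEST_EVAL rules should produce, here is a small Python sketch that mimics the json_extract calls on a hypothetical LogStash-style event: the host comes from host.name, and _raw is replaced by the message field.

```python
import json

def apply_ingest_evals(raw):
    """Mimic the two transforms:
      host = json_extract(_raw, "host.name")
      _raw = json_extract(_raw, "message")
    Returns (host, new_raw); either may be None if a field is absent."""
    event = json.loads(raw)
    host = event.get("host", {}).get("name")
    new_raw = event.get("message")
    return host, new_raw
```

If new_raw comes out as None for your events, the message field is missing or nested differently, which would explain an empty table in the search above.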
Hi @rahulkumar, as I said, the message field contains the original _raw field, in other words the original event. So you have to restore the original event by deleting the additional fields in the JSON structure; otherwise the standard add-ons don't read it in a correct way. The configurations I hinted at make this restore: they extract metadata from the JSON fields and restore the original event into _raw. You cannot use spath because the parsers work on the _raw field; for this reason you have to configure the restore of the original event using props.conf and transforms.conf. Ciao. Giuseppe