All Posts

Hi @PoojaChand02 , It seems the screenshots are from different Splunk platforms: the first one is Splunk Enterprise, but the second one is Splunk Cloud. Splunk Cloud does not have a "Data Summary" button. You can see a similar data summary using the query below for host data. (You can also use the other types, "sources" or "sourcetypes"; please do not forget to adjust the rename command accordingly. You can also look at indexes other than main.)
| metadata index=main type=hosts
| eval lastSeen = strftime(lastTime, "%x %l:%M:%S %p")
| rename host AS Host, totalCount AS Count, lastSeen AS "Last Update"
| table Host, Count, "Last Update"
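For example, the "sourcetypes" variant would look something like this (just a sketch - swap in whichever index you want to inspect):
| metadata index=main type=sourcetypes
| eval lastSeen = strftime(lastTime, "%x %l:%M:%S %p")
| rename sourcetype AS Sourcetype, totalCount AS Count, lastSeen AS "Last Update"
| table Sourcetype, Count, "Last Update"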
Have you added the dropdown - what is the problem you are facing? Simply add the dropdown, set the 8 static options, and then in your search use index=bla host=*$my_host_token$* where my_host_token is the token for your dropdown. Assuming the table below is the finite list of hosts you will have, this should work - there are of course other ways to do this, but this is the simplest.
Thanks @scelikok. No, I don't just want the orderID; I want to manually create the RESTful API routing pattern. For "path=/order/123456", I want "route=/order/{orderID}". Basically, I am trying to use regex to replace the value and create a new field in this way: if the value matches \/order\/\d{12}, then convert it to /order/{orderID}. I have other examples like:
path=/user/jason@sample.com/orders
route=/user/{userID}/orders
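To make it concrete, the kind of eval I am picturing is something like this sketch (using \d+ just for illustration, chaining one replace per routing pattern, with path being the field from my earlier rex extraction):
| eval route = replace(path, "^/order/\d+$", "/order/{orderID}")
| eval route = replace(route, "^/user/[^/]+/orders$", "/user/{userID}/orders")
But I am not sure whether a chain of replace() calls like this is the right way to scale to many patterns.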
https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchTutorial/Aboutthesearchapp
The Data Summary option does not exist in Splunk Cloud.
Thanks, I will check out your advice in a bit. Yes, I agree that the data structure is not ideal for parsing. Unfortunately, this is output from an OpenTelemetry collector following the OpenTelemetry standard (which Splunk also embraces, though we don't have native parsing for it yet in Splunk Enterprise), so if this takes off as the cross-vendor standard for pushing telemetry, then we are going to have to deal with ingesting this format more and more. Or maybe it is an opportunity to suggest formatting changes to the standard to CNCF.
It is probably because your field looks like it has come from JSON, and based on the link provided by @isoutamo, that means the field extraction is happening at stage 4, whereas your REPORT extraction happens at stage 3; therefore the field does not exist at that point. You could try creating a calculated field using an eval replace expression to remove the non-domain part. You can try this in standard SPL by experimenting with your regex using
| eval domain=replace('event.url', "(?:https?:\/\/)?(?:www[0-9]*\.)?(?)([^\n:\/]+)", "\1")
That is NOT correct as written - I am not sure what the replacement token \1 should be with all the brackets and capturing/non-capturing groups - but you can experiment with regex101.com.
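Purely as an illustration of the shape of such an expression (an untested sketch - check it against your real event.url values on regex101.com first), something like this might leave just the domain part:
| eval domain=replace('event.url', "^(?:https?:\/\/)?(?:www[0-9]*\.)?([^\/:?#\s]+).*$", "\1")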
Let me clarify: when you say "color", you are talking about converting the percentage text to a string representation of a color code, not about coloring the percentage text in the e-mail alert. Correct? In other words, you want something like
Name  color  percentage
A     red    71
B     amber  90
C     red    44
D     amber  88
E     red    78
I ask because, while coloring the text itself is potentially doable (and would likely involve a custom command you need to develop externally), Splunk doesn't provide a function to color text used in e-mail alerts. If the table above is the correct requirement, look up the documentation for case. I further assume that your "percentage" field doesn't come with a percent sign (%); if you want the % in the e-mail, you can always add it back after the color mapping.
| eval color = case(percentage < 80, "red", percentage < 95, "amber", true(), "green")
Here is a data emulation you can play with and compare with real data:
| makeresults format=csv data="Name, percentage
A, 71%
B, 90%
C, 44%
D, 88%
E, 78%"
| eval percentage = replace(percentage, "%", "") ``` data emulation above ```
Hope this helps.
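P.S. If you do want the percent sign back in the e-mail after the color mapping, a one-liner like this (a trivial sketch) would do it:
| eval percentage = percentage . "%"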
Hi @codewarrior, If I got it correctly, you need to extract a new field named "route" containing the value after "orders/". You can capture it in your rex command; please try the below:
level=info request.elapsed=(?<duration>.*) request.method=(?<method>.*) request.path=(?<path>.+orders\/(?<route>.+)) request_id=(?<request_id>.*) response.status=(?<statusCode>.*)
Before I rant, thank you for sharing valid mock data in text. This said, this is the second time in as many consecutive days that I feel like screaming at lazy developers who make such terrible use of JSON arrays. (The developer might be you. But the rant stands.) Your data would have much cleaner, self-evident semantics had the developer simply used this:

[
 { "attributes": {"host.name":{"stringValue":"myname1"},"telemetry.sdk.name":{"stringValue":"my_sdk"}},
   "metrics": {"hw.host.energy":{"dataPoints":[{"timeUnixNano":"1712951030986039000","asDouble":359}]},"hw.host.power":{"dataPoints":[{"timeUnixNano":"1712951030986039000","asDouble":26}]}} },
 { "attributes": {"host.name":{"stringValue":"myname2"},"telemetry.sdk.name":{"stringValue":"my_sdk"}},
   "metrics": {"hw.host.energy":{"dataPoints":[{"timeUnixNano":"1712951030987780000","asDouble":211}]}} }
]

In other words, only two JSON arrays in the original data are used correctly. resourceMetrics.resource.attributes[] and resourceMetrics.scopeMetrics.metrics[] are a total abomination of the intent of JSON arrays. Speak to your developers to see if they could change the data structure, not just for Splunk, but for future maintainers of their own code and any other downstream team as well.

Now that this is off my chest, I understand that it will take more than one day for developers to change code even if you convince them on day one. Here is the SPL that I use to tabulate your data like the following:

host.name.stringValue  hw.host.energy{}.asDouble  hw.host.energy{}.timeUnixNano  hw.host.power{}.asDouble  hw.host.power{}.timeUnixNano  sdk.name.stringValue
myname1                359                        1712951030986039000            26                        1712951030986039000           my_sdk
myname2                211                        1712951030987780000                                                                    my_sdk

In this form, I have assumed that dataPoints[] is the only node of interest under resourceMetrics[].scopeMetrics[].metrics.gauge.

| spath path=resourceMetrics{}
| fields - _* resourceMetrics{}.*
| mvexpand resourceMetrics{}
| spath input=resourceMetrics{} path=resource.attributes{}
| spath input=resourceMetrics{} path=scopeMetrics{}
| spath input=scopeMetrics{} path=metrics{}
| fields - resourceMetrics{} scopeMetrics{}
| foreach resource.attributes{} mode=multivalue [eval key = mvappend(key, json_extract(<<ITEM>>, "key"))]
| eval idx = mvrange(0, mvcount(key))
| eval attributes_good = json_object()
| foreach idx mode=multivalue [eval attribute = mvindex('resource.attributes{}', <<ITEM>>), attributes_good = json_set_exact(attributes_good, json_extract(attribute, "key"), json_extract(attribute, "value"))]
| fields - key attribute resource.attributes{}
| foreach metrics{} mode=multivalue [eval name = mvappend(name, json_extract(<<ITEM>>, "name"))]
| eval name = if(isnull(name), json_extract('metrics{}', "name"), name)
| eval idx = mvrange(0, mvcount(name))
| eval metrics_good = json_object()
| foreach idx mode=multivalue [eval metric = mvindex('metrics{}', <<ITEM>>), metrics_good = json_set_exact(metrics_good, json_extract(metric, "name"), json_extract(metric, "gauge.dataPoints"))]
``` the above assumes that gauge.dataPoints is the only subnode of interest ```
| fields - idx name metric metrics{}
``` the above transforms array-laden JSON into easily understandable JSON ```
| spath input=attributes_good
| spath input=metrics_good
| fields - *_good
``` the following is only needed if dataPoints[] actually contains multiple values. This is the only code requiring prior knowledge about data fields ```
| mvexpand hw.host.energy{}.timeUnixNano
| mvexpand hw.host.power{}.timeUnixNano

(The fields - xxx commands are not essential; they just declutter the view.) Hope this helps.

This is an emulation you can play with and compare with real data:

| makeresults
| eval _raw = "{ \"resourceMetrics\": [ { \"resource\": { \"attributes\": [ { \"key\": \"host.name\", \"value\": { \"stringValue\": \"myname1\" } }, { \"key\": \"telemetry.sdk.name\", \"value\": { \"stringValue\": \"my_sdk\" } } ] }, \"scopeMetrics\": [ { \"metrics\": [ { \"name\": \"hw.host.energy\", \"gauge\": { \"dataPoints\": [ { \"timeUnixNano\": \"1712951030986039000\", \"asDouble\": 359 } ] } }, { \"name\": \"hw.host.power\", \"gauge\": { \"dataPoints\": [ { \"timeUnixNano\": \"1712951030986039000\", \"asDouble\": 26 } ] } } ] } ] }, { \"resource\": { \"attributes\": [ { \"key\": \"host.name\", \"value\": { \"stringValue\": \"myname2\" } }, { \"key\": \"telemetry.sdk.name\", \"value\": { \"stringValue\": \"my_sdk\" } } ] }, \"scopeMetrics\": [ { \"metrics\": [ { \"name\": \"hw.host.energy\", \"gauge\": { \"dataPoints\": [ { \"timeUnixNano\": \"1712951030987780000\", \"asDouble\": 211 } ] } } ] } ] } ] }"
| spath
``` data emulation above ```

Final thoughts about data structure with self-evident semantics: if my speculation about dataPoints[] being the only node of interest under resourceMetrics[].scopeMetrics[].metrics.gauge stands, good data could be further simplified to

[
 { "attributes": {"host.name":{"stringValue":"myname1"},"telemetry.sdk.name":{"stringValue":"my_sdk"}},
   "metrics": {"hw.host.energy":[{"timeUnixNano":"1712951030986039000","asDouble":359}],"hw.host.power":[{"timeUnixNano":"1712951030986039000","asDouble":26}]} },
 { "attributes": {"host.name":{"stringValue":"myname2"},"telemetry.sdk.name":{"stringValue":"my_sdk"}},
   "metrics": {"hw.host.energy":[{"timeUnixNano":"1712951030987780000","asDouble":211}]} }
]

I do understand that listing hw.host.energy and hw.host.power as coexisting columns is different from your illustrated output and may not suit your needs. But presentation can easily be adapted. Bad data structure remains bad.
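Just to underline how much lighter the parsing burden would become: if each element of that simplified array arrived as its own event, a bare | spath would extract everything; and even if the whole array arrived as one event, a sketch along these lines (assuming the simplified structure above as _raw) should be all that is needed:
| spath path={} output=entry
| mvexpand entry
| spath input=entry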
Hi @Gustavo.Marconi, I reached out to a few people and Anderson B. should be getting in touch with you about this. 
Hi @SplunkExplorer, You are right, that is the Deployment Server log, but it should show the client IP address too. You can use the below search to check the deployment steps on the client;
index=_internal host=YourClientHost sourcetype=splunkd (DeployedApplication OR ApplicationManager OR "Restarting Splunkd")
You should see events similar to these in the client host's logs;
INFO DeployedApplication - Checksum mismatch 0 <> 18281318892102154454 for app=your_app_name. Will reload from='x.x.x.x:8089/services/streams/deployment?name=default:your_serverclass_name:your_app_name'
INFO DeployedApplication - Downloaded url=x.x.x.x:8089/services/streams/deployment?name=default:your_serverclass_name:your_app_name to file='C:\Program Files\SplunkUniversalForwarder\var\run\your_serverclass_name\your_app_name-1711990721.bundle' sizeKB=xx
INFO DeployedApplication - Installing app=your_app_name to='C:\Program Files\SplunkUniversalForwarder\etc\apps\your_app_name'
INFO ApplicationManager - Detected app creation:your_app_name
WARN DC:DeploymentClient - Restarting Splunkd...
If everything looks OK in these logs, we can assume the problem is in the provided path/filename.
Here are a few questions from the session (get the full Q&A deck and live recording in the #office-hours Slack channel):

Q1: Since when has the evolution of Splunk APM started, and why are companies starting to realize its benefit so late?
We've been doing this since the beginning - it just wasn't called Observability! Splunk APM launched in 2020 and is now recognized in the Leaders Quadrant in the Gartner Magic Quadrant for APM. Join us at .conf24 to learn about new capabilities, see product demos, and more!

Q2: Can you show the latest method of OTel installation and instrumentation?
Opt into auto instrumentation with the OpenTelemetry Collector installation (Java, Node.js, .NET): Demo
Documentation: Typical Instrumentation Steps, OpenTelemetry Zero Configuration Auto Instrumentation

Q3: How do I understand which microservice is having an issue?
Docs: Investigating the root cause of an error with the Service Map
Lantern: Implementing Distributed Tracing
Blog: How to investigate a reported problem
Lantern: Troubleshooting a service latency issue related to a database query
Lantern: Using Business Workflows in Splunk APM

Q4: Can I have some guidance for monitoring AKS environments?
Deploying the OpenTelemetry Collector Helm chart in AKS
OpenTelemetry Zero Configuration Auto Instrumentation
Typical Instrumentation Steps

Other questions from the session (solutions in the #office-hours Slack channel):
What is the best practice for enabling APM on an app running in Azure App Service as a Java Web App?
What do we mean by troubleshooting metrics vs. monitoring metrics?
I'm starting from scratch for my team to instrument our microservices and lambdas with OTel in Golang.
Does Splunk Observability Cloud have a dependency on Splunk Enterprise/Cloud? Or is it an independent product?
Can we still ingest logs directly to Splunk Observability Cloud?
How is SOC licensed? I expect we can now ingest only metrics and traces.
Does the "Splunk OpenTelemetry Collector for Kubernetes" (Helm chart) support sending telemetry to Splunk APM and logs to Splunk Enterprise at the same time?
What is the best practice for instrumenting services running as an App Service in Azure? I've seen some stellar documentation covering ECS and K8s use cases, but I'm a little unsure what the best process is for Azure App Services.
How does Splunk Observability link services between APM and Infrastructure?
Is this demonstration with custom instrumentation? Could you show an example with custom code instrumentation, for example with attributes / custom metrics?
Can I write custom queries for the traces I ingest, for the purposes of debugging, alerting, reporting, etc.?
How is a service different from a Business Workflow? Is a BW a collection of services?
{"id":"0","severity":"Information","message":"[{\"TARGETSYSTEM\":\"SEQ\",\"ARUNAME\":\"CPW_02170\",\"TOTAL\":437330,\"PROCESSED\":436669,\"REMAINING\":661,\"ERROR\":0,\"SKIPPED\":112},{\"TARGETSYSTEM... See more...
{"id":"0","severity":"Information","message":"[{\"TARGETSYSTEM\":\"SEQ\",\"ARUNAME\":\"CPW_02170\",\"TOTAL\":437330,\"PROCESSED\":436669,\"REMAINING\":661,\"ERROR\":0,\"SKIPPED\":112},{\"TARGETSYSTEM\":\"SEQ\",\"ARUNAME\":\"CPW_02171\",\"TOTAL\":78833,\"PROCESSED\":78832,\"REMAINING\":1,\"ERROR\":0,\"SKIPPED\":35},{\"TARGETSYSTEM\":\"SEQ\",\"ARUNAME\":\"CPW_02169H\",\"TOTAL\":100192,\"PROCESSED\":100192,\"REMAINING\":0,\"ERROR\":0,\"SKIPPED\":20016},{\"TARGETSYSTEM\":\"CPW\",\"ARUNAME\":\"CPW_00061\",\"TOTAL\":7,\"PROCESSED\":0,\"REMAINING\":7,\"ERROR\":0,\"SKIPPED\":0},{\"TARGETSYSTEM\":\"CPW\",\"ARUNAME\":\"CPW_01015\",\"TOTAL\":9,\"PROCESSED\":0,\"REMAINING\":9,\"ERROR\":0,\"SKIPPED\":0},{\"TARGETSYSTEM\":\"CPW\",\"ARUNAME\":\"CPW_00011H\",\"TOTAL\":17,\"PROCESSED\":0,\"REMAINING\":17,\"ERROR\":0,\"SKIPPED\":0},{\"TARGETSYSTEM\":\"CPW\",\"ARUNAME\":\"CPW_00079\",\"TOTAL\":0,\"PROCESSED\":0,\"REMAINING\":0,\"ERROR\":0,\"SKIPPED\":0},{\"TARGETSYSTEM\":\"CPW\",\"ARUNAME\":\"CPW_02191\",\"TOTAL\":0,\"PROCESSED\":0,\"REMAINING\":0,\"ERROR\":0,\"SKIPPED\":0},{\"TARGETSYSTEM\":\"CPW\",\"ARUNAME\":\"CPW_02184\",\"TOTAL\":0,\"PROCESSED\":0,\"REMAINING\":0,\"ERROR\":0,\"SKIPPED\":0},{\"TARGETSYSTEM\":\"CPW\",\"ARUNAME\":\"CPW_07009CS\",\"TOTAL\":0,\"PROCESSED\":0,\"REMAINING\":0,\"ERROR\":0,\"SKIPPED\":0},{\"TARGETSYSTEM\":\"CPW\",\"ARUNAME\":\"CPW_00304\",\"TOTAL\":1318,\"PROCESSED\":1318,\"REMAINING\":0,\"ERROR\":0,\"SKIPPED\":24},{\"TARGETSYSTEM\":\"CPW\",\"ARUNAME\":\"CPW_00314\",\"TOTAL\":6188,\"PROCESSED\":6188,\"REMAINING\":0,\"ERROR\":0,\"SKIPPED\":1},{\"TARGETSYSTEM\":\"CPW\",\"ARUNAME\":\"CPW_00355\",\"TOTAL\":505,\"PROCESSED\":462,\"REMAINING\":43,\"ERROR\":0,\"SKIPPED\":11},{\"TARGETSYSTEM\":\"CPW\",\"ARUNAME\":\"CPW_00364\",\"TOTAL\":12934,\"PROCESSED\":2804,\"REMAINING\":10130,\"ERROR\":0,\"SKIPPED\":1},{\"\":\"EAS\",\"ARUNAME\":\"CPW_02130\",\"TOTAL\":0,\"PROCESSED\":0,\"REMAINING\":0,\"ERROR\":0,\"SKIPPED\":0}]"} I want below two views from same data First View: Second view:
Yes you could exclude successful responses by adding a filter. Assuming that all errors have an errorCode and all non-errors do not, then you could do it like this:
index = xxx sourcetype=xxx "Publish message on SQS" bulkDelete
| rex field=_raw "message=(?<message>{.*}$)"
| spath input=message
| search "errors{}.errorCode" = *
| spath input=errors{}.errorDetails
| table eventSource statusCode statusText
As per the above screenshot, I am unable to view the Data Summary tab in our Splunk search environment.
Have a go with:
| stats count values(srcip) as srcip values(dstip) as dstip by title
This should produce three rows and therefore 3 alerts, where the srcip and dstip are multi-value fields.
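If it helps to see the shape of the output, here is a quick emulation to play with (the IP and title values are made up; only the field names srcip, dstip and title come from your search):
| makeresults format=csv data="title, srcip, dstip
alert_one, 10.0.0.1, 192.168.1.10
alert_one, 10.0.0.2, 192.168.1.10
alert_two, 10.0.0.3, 192.168.1.20
alert_three, 10.0.0.4, 192.168.1.30"
| stats count values(srcip) as srcip values(dstip) as dstip by title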
On the cluster master, in one of the $SPLUNK_HOME/etc/master-apps/<app-name>/local/indexes.conf files, I set remote.s3.access_key and remote.s3.secret_key to the same access_key and secret_key used with s3cmd. However, after applying the cluster bundle, the indexes.conf is updated and both key values are replaced. The new set of keys replaces not only the ones under the [default] stanza, but also those in each index stanza. Where do the new keys come from? Is it expected that the keys are overwritten?
I have a log stream in this format:
level=info request.elapsed=100 request.method=GET request.path=/orders/123456 request_id=2ca011b5-ad34-4f32-a95c-78e8b5b1a270 response.status=500
I have extracted the fields using regex:
| rex field=message "level=info request.elapsed=(?<duration>.*) request.method=(?<method>.*) request.path=(?<path>.*) request_id=(?<request_id>.*) response.status=(?<statusCode>.*)"
I want to manually build a new field called route based on the extracted field path. For example, for "path=/order/123456", I want to create a new field "route=/order/{orderID}", so I can group by route rather than by path; the path contains a real parameter value, which means I cannot group on path. How can I achieve this? Thanks.
Hi everybody, I was doing an internal demo presentation with demo1, and someone noticed that on one server the memory usage was high (87.8%). When we checked the processes that were running, there was no process consuming memory except for about 3% from the machine agent, so we don't understand why it is showing that 87.8% peak.
Memory usage 87.8% (screenshot), but no process is consuming memory except for the machine agent.
In another example the opposite happens: the memory usage on the server is 34.6%, yet the sum of the processes adds up to way more than 100%.
Memory usage is 34.6% (screenshot); sum of processes is way more than 100% (screenshot).
Is this an interpretation problem on our side, or just an issue with demo1? In other demos the sum is correct according to each process. Thanks in advance. Hope you're having a great day.