All Posts

Hi folks, I’m preparing to present a large-scale Splunk design to stakeholders and want to make it interactive. To achieve this, I’m considering using Mermaid, which allows us to start with code, iterate incrementally, and improve easily over time. Below is my initial draft of the design.

I’d appreciate your input on two points:
1. Are there any obvious mistakes in this draft? (It’s been a while since I last worked on Splunk design, especially after transitioning to Splunk Cloud.)
2. Are you aware of any pre-existing Mermaid or Draw.io diagrams for large-scale Splunk clusters that we could adapt or reuse?

Thanks in advance for your feedback!
How many levels deep can the parent/child relationship be? Anyway, you can do this with a couple of lines of SPL. See this example, which creates some dummy data; the final two lines then create the sum of time per USER/TRANNO:

| makeresults count=4
| eval TRANNO=random()
| eval USRCPUT_MICROSEC=random() % 10000
| streamstats c as USER
| eval USER="TASK ".USER
| appendpipe
    [ | eval children=mvrange(1,5,1)
      | mvexpand children
      | rename TRANNO as PHTRANNO
      | eval USRCPUT_MICROSEC=random() % 1000000 ]
``` The above is just creating some example data ```
| eval TRANNO=coalesce(TRANNO, PHTRANNO)
| stats sum(USRCPUT_MICROSEC) as USRCPUT_MICROSEC by USER TRANNO

If this is not what your data looks like, please post some anonymised examples of your data.
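If the goal from the original question is the average and maximum CPU per task type (USER), one possible follow-up (a sketch building on the per-transaction sums above, not something from the thread) is to add a second stats pass; note that if your real children carry both their own TRANNO and a PHTRANNO, and you want child CPU attributed to the parent transaction, the coalesce order may need to be coalesce(PHTRANNO, TRANNO):

```
| eval TRANNO=coalesce(TRANNO, PHTRANNO)
| stats sum(USRCPUT_MICROSEC) as USRCPUT_MICROSEC by USER TRANNO
| stats avg(USRCPUT_MICROSEC) as avg_cpu max(USRCPUT_MICROSEC) as max_cpu by USER
```

The first stats produces one CPU total per USER/TRANNO combination; the second then aggregates those totals by task type to give the average and max the question asked for.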
@Braagi In your drilldown use an <eval> token setter, i.e.

```
<drilldown>
  <eval token="drill_time_start">$click.value$-300</eval>
  <eval token="drill_time_end">$click.value$+300</eval>
</drilldown>
```
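A possible way to consume those tokens in the target panel's search (a sketch; the index and the timechart itself are placeholders, not from the original thread). Since $click.value$ on a timechart is an epoch time, the evaluated tokens can be dropped straight into earliest/latest:

```
index=my_index earliest=$drill_time_start$ latest=$drill_time_end$
| timechart span=1m count by host
```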
I have an index with a list of transactions. Transactions in the system start as one process with a transaction number (TRANNO), and that transaction can start a number of sub-tasks; each sub-task has its own transaction number (TRANNO) but also carries its parent's transaction number (PHTRANNO). All the tasks (parents and children) have an amount of CPU consumed (USRCPUT_MICROSEC), and all tasks have an id field (USER) which tells us what type of task it was. I want to create a report listing all the types of tasks (USER) and the average and max CPU consumed (USRCPUT_MICROSEC). I've managed to create the report with the sum of the parents' CPU, but most of the CPU is consumed by the children. Any suggestions on how to do this? I've been searching and trying things for hours and I'm not getting anywhere.
Do you have some raw data you can share with us? I'm wondering if, in your case, it would be better to do the splitting before indexing the data, if possible, so that you are not relying on mvexpand. It isn't easy (or efficient, as you've found) to expand events into multiple events at search time. Happy to try and help you index this as separate events if that would help though! Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
Hi @avi123 It's getting late here so this might not be optimal, but I think this should work!

```
| makeresults
| eval json_data="{\"BCD_AB_UY_01\":1,\"BCD_AB_UY_02\":0,\"BCD_BC_01\":1,\"BCD_BC_02\":0,\"BCD_CD_01\":1,\"BCD_CD_02\":1,\"BCD_CD_03\":0,\"BCD_KPI_01\":1,\"BCD_KPI_02\":1,\"BCD_KPI_03\":0,\"BCD_MY_01\":1,\"BCD_MY_02\":1,\"BCD_RMO_PZ_01\":1,\"BCD_RMO_PZ_02\":1,\"BCD_RMO_PZ_03\":0,\"BCD_RMO_PZ_04\":0,\"BCD_RSTA_01\":1,\"BCD_RSTA_02\":1,\"BCD_RSTA_03\":0,\"BCD_SHY_01\":1,\"BCD_SHY_02\":1,\"BCD_UK_01\":1,\"BCD_UK_02\":1,\"BCD_UK_03\":1,\"BCD_UK_04\":1,\"BCD_UK_05\":1,\"BCD_UK_06\":1,\"BCD_UK_07\":1,\"BCD_UK_08\":0,\"BCD_UK_09\":0,\"BCD_UK_10\":0,\"BCD_UK_11\":0,\"BCD_UK_12\":0}"
| eval _raw=json_extract(json_data,"")
| eval host="Testing", service="MySerivceName"
| spath
| foreach * [| eval fields=mvappend(fields, IF(<<FIELD>> >= 0, json_object("<<FIELD>>",<<FIELD>>),null()))]
| table _time host service fields
| mvexpand fields
| eval fieldObj=json_array_to_mv((json_entries(fields)))
| eval fieldName=json_extract(fieldObj, "key")
| eval value=json_extract(fieldObj, "value")
| eval friendlyTime=strftime(_time,"%d/%m/%Y %H:%M:%S")
| search value=0
| eval metricLabel="URGENT !! Labware - ".service." has been stopped in Server"
| eval metricValue="Hello Application Support team, The below service has been stopped in the server, Service name : ".service." Timestamp : ".friendlyTime." Server : ".host." Please take the required action to resume the service. Thank you. Regards, Background Service Check Automation Bot"
| eval querypattern="default"
| eval assignmentgroup="PTO ABC Lab - Operatives"
| eval business_service="LIME Business Service"
| eval serviceoffering="LIME"
| eval Interface="CLMTS"
| eval urgency=2
| eval impact=1
```

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
You're not actually showing us how you are using mvexpand. There is also no 1:1 relationship between parentSpanId and the other MV fields. The general way to expand multiple MV fields in an event is to create a composite field and then expand that, or to use stats by that field, but we'd need a better idea of what you're trying to end up with.
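To illustrate the composite-field approach (a sketch only, reusing the span field names from the related question in this thread; it assumes the multivalue fields really do line up one-to-one and in the same order, which the reply above questions):

```
| eval combined=mvzip(mvzip(mvzip(spanId, parentSpanId), start), end)
| mvexpand combined
| eval parts=split(combined, ",")
| eval spanId=mvindex(parts, 0), parentSpanId=mvindex(parts, 1), start=mvindex(parts, 2), end=mvindex(parts, 3)
| table traceId spanId parentSpanId start end
```

Because mvzip joins values with a comma by default, this only works cleanly if the values themselves contain no commas.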
My organization ran into a similar issue when we were initially deploying SOAR - we think it was due to our notable events having too many characters and then the entire message getting truncated. We ended up creating "One Notable to Rule them All" - just a notable that looks for other notables - it also removes fields we don't care about, ignores all suppressed events, and a few other pieces that make sense for our environment. We then also set the alert actions on this notable itself.
From what I can tell it's just a straightforward field and single value from a JSON feed. Here's an event example from the search:

```
{
  action: Block
  asset: {
    id: xxxxx-xxxx-xxxxx-xxxx-xxxxxxxx
    kind: Endpoint
    name: Vxxxxx3
  }
  dataType: Event
  guid: 0xxxxxx
  id: 7c332exxxxb24e
  kind: TrustDowngrade
  occurredAt: 2025-03-18T14:56:13.748Z
  primaryProcess: { [+] }
  processes: [ [+] ]
  summary: { [+] }
  tenantId: xxxx-xxxxx-xxxx-xxxx-xxxxxxxx
```

It's JSON data, so the field name for the thing I want ends up being asset.name, and the data does show Vxxxxx3. I thought maybe it was a search timing thing, i.e. the field wasn't there when the search was run in the alert (seemed unlikely, but I'm having no luck so far in figuring this out), so I did a rex on the raw data to pull the value I wanted:

```
| rex field=_raw "\"name\":\"(?<hostname>[^\"]*)\""
```

and again, I now see a hostname field with the Vxxxxx3 data in it, like I'd expect. But when I put $result.hostname$ into the message of the alert, all I get is a blank email.
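One thing that may be worth checking (a hedged suggestion, not something from the original post): $result.fieldname$ is only populated from fields that are present in the final result rows of the alert's search, so explicitly keeping the field at the end of the search can help. A minimal sketch, where the index, sourcetype, and extra columns are placeholders:

```
index=my_index sourcetype=my_json_feed
| spath path=asset.name output=hostname
| table _time hostname action kind
```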
You obviously have some not very pretty json structures. As @ITWhisperer said - show us a sample because for now you're extracting some fields which - as you say - are apparently multivalued. But the values in each of them are unrelated to the values in other fields. So you're losing any connections between the values you might have had in the original json. Another issue is that if you have four 3-valued fields and you do mvexpand on each of them you'll get the cartesian product of those fields - 3^4=81 separate result rows. I'm not sure that's what you want.
So, I have a timechart with multiple streams. Call them X, Y, and Z. Run the panel for a 4h timeframe. I want to click a peak or valley on one of the lines, take the name of that line (got this part done) and the exact time that was clicked on (I think this is click.value), and pass them to another panel in the same dashboard. The click.value should be an epoch time, i.e. a number, so I should be able to add or subtract, say, 300 from that number and use the results as the earliest and latest variables for a search. Effectively I want to use (click.value - 300) for earliest and (click.value + 300) for latest on another panel, making it a 10-minute window with the point that was clicked on being the mid-point.

I have tried in-line:

```
<set token="Drill_time_1">$click.value$ - 300</set>
<set token="Drill_time_2">$click.value$ + 300</set>
```

I have tried in-search:

```
earliest=$Drill_time_1$-300 latest=$Drill_time_2$+300
```

...and various combinations thereof. All to no avail. Anyone have an idea?
Thank you. I did it already with the help of a custom service.
I am trying to instrument a Java Spring Boot application for OpenTelemetry. I am following the instructions from here: Instrument your Java application for Splunk Observability Cloud — Splunk Observability Cloud documentation

But when I start the application, I get this error:

```
java -javaagent:./splunk-otel-javaagent.jar -jar my-app/target/my-app-0.0.1-SNAPSHOT.jar
Unexpected error (103) returned by AddToSystemClassLoaderSearch
Unable to add ./splunk-otel-javaagent.jar to system class path - the system class loader does not define the appendToClassPathForInstrumentation method or the method failed
FATAL ERROR in native method: processing of -javaagent failed, appending to system class path failed
```

From my `mvn -v`:

```
Java version: 21.0.2, vendor: Oracle Corporation, runtime: C:\Users\****\apps\openjdk21\current
Default locale: en_US, platform encoding: UTF-8
OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
```

How do I correctly start the application with the javaagent?
Please share some anonymised sample events (in a code block </>, not as a picture) and a description of what you are trying to do.
Your second dashboard doesn't have a global_time input - probably waiting for that to be set?
Hi, I have the following data. I am looking to get one row per span, so I can work with the data better. If I use mvexpand I hit memory limits, as I need to do it on all the fields. Is there another way? Or perhaps I just need to increase the mvexpand memory limits!

```
host="PMC_Sample_Data" index="murex_logs" sourcetype="Market_Risk_DT"
| spath "resourceSpans{}.scopeSpans{}.spans{}.spanId"
| rename resourceSpans{}.scopeSpans{}.spans{}.spanId as spanId
| spath "resourceSpans{}.scopeSpans{}.spans{}.parentSpanId"
| rename "resourceSpans{}.scopeSpans{}.spans{}.parentSpanId" as parentSpanId
| spath "resourceSpans{}.scopeSpans{}.spans{}.startTimeUnixNano"
| rename resourceSpans{}.scopeSpans{}.spans{}.startTimeUnixNano as start
| spath "resourceSpans{}.scopeSpans{}.spans{}.endTimeUnixNano"
| rename resourceSpans{}.scopeSpans{}.spans{}.endTimeUnixNano as end
| spath resourceSpans{}.scopeSpans{}.spans{}.traceId
| rename resourceSpans{}.scopeSpans{}.spans{}.traceId as traceId
| table traceId spanId parentSpanId start end
```

Thanks in advance
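One possible alternative (a sketch, assuming Splunk 8.1+ for json_extract; it is not from the original thread): extract the whole spans array once, expand that single field, and then pull the individual values out of each span object. mvexpand is still used, but only on one field rather than several parallel ones, which also keeps related values together.

```
host="PMC_Sample_Data" index="murex_logs" sourcetype="Market_Risk_DT"
| spath path=resourceSpans{}.scopeSpans{}.spans{} output=span
| mvexpand span
| eval traceId=json_extract(span, "traceId"),
       spanId=json_extract(span, "spanId"),
       parentSpanId=json_extract(span, "parentSpanId"),
       start=json_extract(span, "startTimeUnixNano"),
       end=json_extract(span, "endTimeUnixNano")
| table traceId spanId parentSpanId start end
```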
You can "destream" your pipeline by inserting the table command before your lookup (either with a strict set of fields or just a wildcard).
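A minimal illustration of that idea (the field and lookup names are placeholders, not from the original thread): list the fields you need explicitly, or use `| table *`, then run the lookup.

```
| table host src_ip dest_ip
| lookup my_lookup dest_ip OUTPUT dest_owner
```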
Hi, I believe you can monitor Redis Enterprise by including it as a custom service, based on this documentation: https://docs.splunk.com/observability/en/gdi/integrations/cloud-azure.html I believe this to be true because Redis Enterprise would be a "root resource type" (not a nested sub-component) and its metrics would be exposed via Azure Monitor.
I am assuming @SeanO_VA is referring to the Postgres binaries (the pg_* binaries, although there may be more) in the $SPLUNK_HOME/bin directory, although for me none are running on my 9.4.1 instance. In terms of use in future versions of Splunk, I suspect it is highly likely that the patched versions will be included unless there is a good reason not to; if they aren't, it would be time to discuss directly with your Support/Account team to determine relevant mitigations.
Hi, I think I’m understanding the question, but if I’m off base, just let me know. I think what I’m hearing is that you’re moving from Elastic to Splunk Observability Cloud and you’re wanting to understand how logs are exported, stored, and used in dashboards. Here is an overview of where observability data is stored and how it’s all integrated together.

- Splunk Observability Cloud is where application metrics and traces are ingested and stored.
- Splunk Cloud or Splunk Enterprise is where logs are ingested and stored.
- Splunk Observability Cloud uses an integration called Log Observer Connect to read logs from Splunk Cloud/Enterprise and correlate them to your metrics and traces. The logs are not stored in Splunk Observability Cloud; they’re just visible through this integration.
- Dashboards with logs can be created in either Splunk Observability Cloud or Splunk Cloud/Enterprise. The choice is yours and depends on your use case and what you want to include on the dashboards.
- You may also choose to pull metrics and APM data into Splunk Cloud/Enterprise from Splunk Observability Cloud using the Splunk Infrastructure Monitoring TA. This is helpful if you want to build your dashboards in Splunk Cloud/Enterprise and include application metrics, Real User Monitoring metrics, or Synthetics test metrics.
- As for getting logs into Splunk from your application, you have options:
  - For Kubernetes environments, I would recommend using our OpenTelemetry Helm chart. You can export logs to a Splunk HEC endpoint on Splunk Cloud/Enterprise. You can also use OpenTelemetry pipelines to control that data any way you want.
  - For traditional server environments, you can simply use the Universal Forwarder to read your application logs from disk and forward them to Splunk Cloud/Enterprise.