All Posts

Hello Splunkers, I've been having problems with Dashboard Studio recently, and it has been bothering me for a long time. It would be great if you could give me some suggestions. I want to assign different colors according to different field values. I have made the following configuration, but it hasn't taken effect.

{
    "type": "splunk.map",
    "options": {
        "center": [
            24.007647480837704,
            107.43997967141127
        ],
        "zoom": 2.3155822324586683,
        "showBaseLayer": true,
        "layers": [
            {
                "type": "bubble",
                "latitude": "> primary | seriesByName('latitude')",
                "longitude": "> primary | seriesByName('longitude')",
                "bubbleSize": "> primary | frameWithoutSeriesNames('_geo_bounds_east', '_geo_bounds_west', '_geo_bounds_north', '_geo_bounds_south', 'latitude', 'longitude') | frameBySeriesTypes('number')",
                "dataColors": " > primary | seriesByName('status') | matchValue('colorMatchConfig')"
            }
        ]
    },
    "dataSources": {
        "primary": "ds_PHhx1Fxi"
    },
    "context": {
        "colorMatchConfig": [
            { "match": "high", "value": "#FF0000" },
            { "match": "low", "value": "#00FF00" },
            { "match": "critical", "value": "#0000FF" }
        ]
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
Please create a new question for this item.
Hi @gcusello

{
    "type": "splunk.map",
    "options": {
        "center": [
            24.007647480837704,
            107.43997967141127
        ],
        "zoom": 2.3155822324586683,
        "showBaseLayer": true,
        "layers": [
            {
                "type": "bubble",
                "latitude": "> primary | seriesByName('latitude')",
                "longitude": "> primary | seriesByName('longitude')",
                "bubbleSize": "> primary | frameWithoutSeriesNames('_geo_bounds_east', '_geo_bounds_west', '_geo_bounds_north', '_geo_bounds_south', 'latitude', 'longitude') | frameBySeriesTypes('number')",
                "dataColors": " > primary | seriesByName('status') | matchValue('colorMatchConfig')"
            }
        ]
    },
    "dataSources": {
        "primary": "ds_PHhx1Fxi"
    },
    "context": {
        "colorMatchConfig": [
            { "match": "high", "value": "#FF0000" },
            { "match": "low", "value": "#00FF00" },
            { "match": "critical", "value": "#0000FF" }
        ]
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
Hi @gcusello Thank you for your support, you are my hero! I've been having problems with Dashboard Studio recently and it has been bothering me for a long time. It would be great if you could give me some suggestions. I want to assign different colors according to different field values. I have made the configurations above, but they haven't taken effect. Can you help me check it?
According to the documentation, you should package any dependencies your app needs in its /<appname>/bin directory.
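For illustration, a sketch of what that layout can look like (the app and file names here are hypothetical, not from the post); third-party Python packages are copied under bin/ next to the scripts that use them:

myapp/
    bin/
        my_modular_input.py
        splunklib/            <- bundled dependency, e.g. the Splunk SDK for Python
    default/
        app.conf

Because Python puts the running script's own directory on sys.path, a package copied alongside the script under bin/ can be imported with a plain "import splunklib".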
Worked perfectly. Exactly what I needed. Thanks so much for your quick response!
Hi @mattt
I'm a little confused as to why requestId= is still present in the second event example. If you want to run the regex extraction against the "requestID" field, then you need to add "in <fieldName>" to your extract:

EXTRACT-<class> = <regex> in <src_field>

See the docs here. For example:

EXTRACT-requestId = (requestId=)?(?<field_requestId>[a-f0-9\-]{36})
EXTRACT-Response = Response:\s(?<field_response>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))
EXTRACT-Request = Request:\s(?<field_request>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
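To make the "in" form concrete, a minimal props.conf sketch (assuming the already-extracted source field is named requestID, as described in the question):

# run the regex against the requestID field instead of _raw
EXTRACT-requestId_from_field = (requestId=)?(?<field_requestId>[a-f0-9\-]{36}) in requestID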
thank you, I will try this also.
Hi @dtamburin
Cribl will be sending data which is already parsed, therefore the proposed props/transforms will not work; instead you can use Ingest Actions:

== props.conf ==
[cribl]
RULESET-ruleset_cribl = _rule:ruleset_cribl:set_index:eval:is31lica
RULESET_DESC-ruleset_cribl =

== transforms.conf ==
[_rule:ruleset_cribl:set_index:eval:is31lica]
INGEST_EVAL = index=IF(match(_raw,"(?i)vpxa"),"vmware", index)

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Data from Cribl is "cooked", meaning it has already been processed, so props and transforms on the indexers will not process it further. You should change the index name in Cribl.
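As a sketch of what that change could look like in a Cribl Stream pipeline (my assumption, not from the thread; Cribl uses a JavaScript-like expression syntax), an Eval function could rewrite the index field:

Function: Eval
Evaluate fields:
    index = _raw.match(/vpxa/i) ? 'vmware' : index

Events whose raw text contains "vpxa" (case-insensitive) would then be routed to the vmware index, while everything else keeps its current index.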
You must select between AWS, Azure, or GCP. Personally, I always select AWS with the Victoria experience. With SCP you don't need to manage the base infrastructure, like indexers and search heads, at the OS and hardware level, but you must still manage some configuration such as users, roles, apps, indexes, etc. Usually some nodes remain on-prem, like a DS, some HFs for modular inputs, and IHFs/IUFs/HEC if you need to modify inputs. Most apps and inputs can be installed directly into SCP and used there, but some are better kept on-prem. As you can see, there are still some administrative tasks left to you even when the core is in SCP. At least I have seen that this combination works quite well and is much easier for admins than running everything yourself, with better cost efficiency too.
Hi @chrisludke
Try the following to eval a percentage; note that I've named the fields in the stats command so they are easier to reference.

| stats max(valueA) as max_valueA, max(valueB) as max_valueB by host
| eval percentage = round((max_valueB / max_valueA) * 100, 2)
| table host, max_valueA, max_valueB, percentage

Here is a sample query:

| makeresults
| eval valueA=1000, valueB=932, host="Test"
| stats max(valueA) as max_valueA, max(valueB) as max_valueB by host
| eval percentage = round((max_valueB / max_valueA) * 100, 2)
| table host, max_valueA, max_valueB, percentage

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Brand new to Splunk, inherited a slightly configured system. I want to move certain Cribl events to an index called vmware. I added this...

props.conf
[sourcetype::cribl]
TRANSFORMS-index = route_to_vmware

transforms.conf
[route_to_vmware]
REGEX = (?i)vpxa
DEST_KEY = _MetaData:Index
FORMAT = vmware

Created an index in Splunk. Example of an event ending up in the main index... any help would be appreciated. Thank you. I did restart Splunk from the GUI after the changes were made.
Looking for assistance in adding a percentage to an existing chart result. I have the following Splunk search, which charts the maximum values found for valueA and valueB by host. valueA is the maximum count found (let's say the total number of objects); valueB is the maximum observed usage of valueA. I do not use a bin or time reference directly in the search; rather, I use Splunk's pre-built time range picker on demand (for example, "last 24 hours" when executing the search).

index=indextype sourcetype=sourcetype "search_string" | chart max(valueA) max(valueB) by host
Good morning,
I'm experiencing an issue with the following log:

15:41:41,341 2025-05-13 15:41:41,340 DEBUG [org.jbo.res.rea.cli.log.DefaultClientLogger] (vert.x-eventloop-thread-1) requestId=31365aee-0e03-43bc-9ccd-fd465aa7a4ca Request: GET http://something.com/something/else Headers[Accept=application/json If-Modified-Since=Tue, 13 May 2025 04:00:27 GMT User-Agent=Quarkus REST Client], Empty body

2025-05-13 15:41:39,970 DEBUG [org.jbo.res.rea.cli.log.DefaultClientLogger] (vert.x-eventloop-thread-1) requestId=95a1a839-2967-4ab8-8302-f5480106adb6 Response: GET http://something.com/something/else, Status[304 Not Modified], Headers[access-control-allow-credentials=true access-control-allow-headers=content-type, accept, authorization, cache-control, pragma access-control-allow-methods=OPTIONS,HEAD,POST,GET access-control-allow-origin=* cache-control=no-cache server-timing=intid;desc=4e7d2996fd2b9cc9 set-cookie=d81b2a11fe1ca01805243b5777a6e906=abae4222185903c47a832e0c67618490; path=/; HttpOnly]

A bit of context that may be relevant: these logs are shipped using Splunk OTEL collectors. In the _raw logs, I see the following field values:

Field: requestID
Value: 95a1a839-2967-4ab8-8302-f5480106adb6 Response: GET http://something.com/something/else

Field: requestID
Value: requestId=31365aee-0e03-43bc-9ccd-fd465aa7a4ca Request: GET http://something.com/something/else

What I want is for the requestID and the Request or Response parts to be extracted into separate fields. I've already added the following to my props.conf:

[sourcetype*]
EXTRACT-requestId = requestId=(?<field_request>[a-f0-9\-]+)
EXTRACT-Response = Response:\s(?<field_response>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))
EXTRACT-Request = Request:\s(?<field_request>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))

I verified on regex101 that the regex matches correctly, but it's not working in Splunk. Could the issue be that the log shows Response: instead of Response= and Splunk doesn't treat it as a proper field delimiter? Unfortunately, I'm unable to modify the source logs. What else can I check? Do I need to modify the .yml configuration for the Splunk OTEL collector, or should I stick to using props.conf and transforms.conf?

Thank you in advance,
Best Regards,
Matteo
I have taken a rather long query and condensed it down to the following to remove any possibility that something was filtering results out.

| ldapsearch search="(&(cn=*userhere*))"

That will output all of the available data for the user, including memberOf. memberOf skips "Domain Users" but seems to display every other group. I am currently running 3.0.8 of the Splunk Supporting Add-on for Active Directory. The release notes do not mention this issue and no one else seems to be reporting it, but I have confirmed it happening on two completely independent instances, both on 3.0.8.
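One avenue worth checking (my assumption, not from the post): in Active Directory a user's primary group, which is usually Domain Users, is recorded in the primaryGroupID attribute rather than in memberOf, so its absence may be expected directory behavior rather than an add-on bug. A sketch to compare the two attributes, assuming the add-on's ldapsearch command accepts an attrs option:

| ldapsearch search="(&(cn=*userhere*))" attrs="memberOf,primaryGroupID"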
Is this just one event, or two, or four? If four, how do you relate a Request END to its corresponding Request BEGIN? What fields, if any, do you already have extracted?
Hi All,
I have a log file like below:

[Request BEGIN] Session ID - 1234gcy6789rtcd, Request ID - 2605, Source IP - 123.245.7.66, Source Host - 123.245.7.98, Source Port - 78690, xyz Engine - XYZS_BPM_Service, PNO - 1234, Service ID - abc12nf [Request END] Success :

[Request BEGIN] Session ID - 1234gcy6789rtcd, Request ID - 2605, Source IP - 123.245.7.66, Source Host - 123.245.7.98, Source Port - 78690, xyz Engine - XYZS_BPM_Service, PNO - 1234, Service ID - abc12nf [Request END] Success :  Details about the failure

Along with the above there are a lot of other details in the log, but these are the details I need to create a dashboard. Can anyone please help me with how to extract all of the above fields, and how I can create a dashboard showing how many requests are successful, along with details of the successful requests like IP and service name, etc.? Thanks a lot in advance.
Regards,
AKM
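A minimal rex sketch for events shaped like the sample above (the output field names are my own invention, and it assumes each [Request BEGIN] ... [Request END] pair arrives as a single event, which the reply above asks about):

| rex "Session ID - (?<session_id>\w+), Request ID - (?<request_id>\d+), Source IP - (?<src_ip>[\d\.]+), Source Host - (?<src_host>[^,]+), Source Port - (?<src_port>\d+), xyz Engine - (?<engine>\S+), PNO - (?<pno>\d+), Service ID - (?<service_id>\w+)"
| rex "\[Request END\]\s+(?<status>\w+)"
| stats count by status, service_id, src_ip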
I have tried the suggestions below, but it is still not working, though the throttling and timeout errors are cleared. I have also checked the permissions on Azure for the client we are using. The error logs now are:

05-13-2025 12:07:30.690 +0000 INFO ExecProcessor [326053 ExecProcessor] - Removing status item "/opt/splunk/etc/apps/TA-MS-AAD/bin/MS_AAD_signins.py (MS_AAD_signins://SignInDetails) (isModInput=yes)

05-13-2025 06:41:33.207 +0000 ERROR UiAuth [46016 TcpChannelThread] - Request from 122.169.17.168 to "/en-US/splunkd/__raw/servicesNS/nobody/TA-MS-AAD/TA_MS_AAD_MS_AAD_signins/SignInDetails?output_mode=json" failed CSRF validation -- expected key "[REDACTED]8117" and header had key "10508357373912334086"
I'm trying to enable SignalFx AlwaysOn Profiling for my Java application. The app is already instrumented to:
- send metrics directly to the ingest endpoint, and
- send traces via a Collector agent running on the host.

I have a couple of questions:
1. Can the ingest endpoint also be used for profiling, similar to how it's used for metrics? If yes, could you please share the exact endpoint format or a link to the relevant documentation?
2. I attempted to enable profiling by pointing to the same Collector endpoint used for tracing. The logs indicate that the profiler is enabled, but I'm also seeing a message saying "Exporter failed", without a specific reason for the failure. Could you help me troubleshoot this issue?

Here are the relevant log entries:

com.splunk.opentelemetry.profiler.ConfigurationLogger - -----------------------
com.splunk.opentelemetry.profiler.ConfigurationLogger - Profiler configuration:
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.enabled : true
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.directory : /tmp
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.recording.duration : 20s
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.keep-files : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.logs-endpoint : http://<host_ip>:4318
com.splunk.opentelemetry.profiler.ConfigurationLogger - otel.exporter.otlp.endpoint : http://<host_ip>:4318
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.memory.enabled : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.tlab.enabled : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.memory.event.rate : 150/s
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.include.internal.stacks : false
com.splunk.opentelemetry.profiler.ConfigurationLogger - splunk.profiler.tracing.stacks.only : false
com.splunk.opentelemetry.profiler.JfrActivator - Profiler is active.
com.splunk.opentelemetry.profiler.EventProcessingChain - In total handled 151 events in 32ms
io.opentelemetry.sdk.logs.export.SimpleLogRecordProcessor - Exporter failed

Any help in understanding the root cause and resolving the export failure would be appreciated.
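For reference, a minimal sketch of the JVM flags that match the configuration log above (the property names are taken verbatim from that log; the endpoint is a placeholder for the Collector address, since whether the ingest endpoint can be used instead is exactly what question 1 asks):

java -javaagent:./splunk-otel-javaagent.jar \
     -Dsplunk.profiler.enabled=true \
     -Dsplunk.profiler.logs-endpoint=http://<collector_host>:4318 \
     -Dotel.exporter.otlp.endpoint=http://<collector_host>:4318 \
     -jar your-app.jar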