On the other hand, you have no control over some of the settings (for some, you can engage Support to set them for you). You have limited control over the size of your environment. Your options are limited in terms of handling frozen data. You can't integrate authentication with your on-prem LDAP... So there are pros and cons, as I said.
@LexSplunker I think that's normal behavior, not an add-on issue. Did you already try something like this? I always specify the attributes I need, because of the special handling and for performance.

| ldapsearch search="(&(cn=*userhere*))" attrs="cn,memberOf,primaryGroupID"
| eval primaryGroupName=if(primaryGroupID="513","Domain Users","Other Primary Group")

Refer: https://docs.splunk.com/Documentation/SA-LdapSearch/3.0.2/User/UseSA-ldapsearchtotroubleshootproblems

If this reply helps, please upvote.
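If you later need more than one mapping, a case() variant might look like this (a minimal sketch; the 512/"Domain Admins" pairing is illustrative and not from the original post):

| ldapsearch search="(&(cn=*userhere*))" attrs="cn,memberOf,primaryGroupID"
| eval primaryGroupName=case(primaryGroupID="513", "Domain Users", primaryGroupID="512", "Domain Admins", true(), "Other Primary Group")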
@Sarvesh_Fenix The first one is just an info message about the SignInDetails input status. The second one shows a CSRF validation failure in the Splunk web interface. Did you try restarting after configuring the inputs?
Hi, thanks for the reply. Yes, it's one event, and it starts with "begin". I want to extract fields like xyz Engine, service id, ip, PNO. Regards, AKM
@isoutamo Can you help me check it? Thanks!
Hello Splunkers, I've been having problems with Dashboard Studio recently and it has been bothering me for a long time. It would be great if you could give me some suggestions. I want to assign different colors according to different field values. I have made the following configurations, but they haven't taken effect.

{
    "type": "splunk.map",
    "options": {
        "center": [
            24.007647480837704,
            107.43997967141127
        ],
        "zoom": 2.3155822324586683,
        "showBaseLayer": true,
        "layers": [
            {
                "type": "bubble",
                "latitude": "> primary | seriesByName('latitude')",
                "longitude": "> primary | seriesByName('longitude')",
                "bubbleSize": "> primary | frameWithoutSeriesNames('_geo_bounds_east', '_geo_bounds_west', '_geo_bounds_north', '_geo_bounds_south', 'latitude', 'longitude') | frameBySeriesTypes('number')",
                "dataColors": " > primary | seriesByName('status') | matchValue('colorMatchConfig')"
            }
        ]
    },
    "dataSources": {
        "primary": "ds_PHhx1Fxi"
    },
    "context": {
        "colorMatchConfig": [
            { "match": "high", "value": "#FF0000" },
            { "match": "low", "value": "#00FF00" },
            { "match": "critical", "value": "#0000FF" }
        ]
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
Please create a new question for this item.
Hi @gcusello

{
    "type": "splunk.map",
    "options": {
        "center": [
            24.007647480837704,
            107.43997967141127
        ],
        "zoom": 2.3155822324586683,
        "showBaseLayer": true,
        "layers": [
            {
                "type": "bubble",
                "latitude": "> primary | seriesByName('latitude')",
                "longitude": "> primary | seriesByName('longitude')",
                "bubbleSize": "> primary | frameWithoutSeriesNames('_geo_bounds_east', '_geo_bounds_west', '_geo_bounds_north', '_geo_bounds_south', 'latitude', 'longitude') | frameBySeriesTypes('number')",
                "dataColors": " > primary | seriesByName('status') | matchValue('colorMatchConfig')"
            }
        ]
    },
    "dataSources": {
        "primary": "ds_PHhx1Fxi"
    },
    "context": {
        "colorMatchConfig": [
            { "match": "high", "value": "#FF0000" },
            { "match": "low", "value": "#00FF00" },
            { "match": "critical", "value": "#0000FF" }
        ]
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
Hi @gcusello Thank you for your support, you are my hero! I've been having problems with Dashboard Studio recently, and it has been bothering me for a long time. It would be great if you could give me some suggestions. I want to assign different colors according to different field values. I have made the following configurations, but they haven't taken effect. Can you help me check it?
According to the documentation, you should package the dependencies your app needs in its /<appname>/bin directory.
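For illustration, a bundled layout might look like this (a minimal sketch; the app and file names are placeholders, not from the documentation):

my_app/
    bin/
        my_script.py        # modular input or custom command
        splunklib/          # vendored copy of the Splunk Python SDK
        requests/           # any other third-party dependency, bundled here
    default/
        app.conf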
Worked perfectly. Exactly what I needed. Thanks so much for your quick response!
Hi @mattt I'm a little confused as to why requestId= is still present in the second event example. If you want to run the regex extraction against the "requestID" field then you need to add "in <fieldName>" to your extract: <regex> in <src_field>. See the docs here. For example:

EXTRACT-requestId = (requestId=)?(?<field_requestId>[a-f0-9\-]{36})
EXTRACT-Response = Response:\s(?<field_response>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))
EXTRACT-Request = Request:\s(?<field_request>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
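For instance, a minimal sketch of the "in <src_field>" form, reusing the requestID field from the question's field table (the extraction class name and simplified regex are illustrative):

EXTRACT-response_from_requestID = Response:\s(?<field_response>[^\r\n]+) in requestID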
Thank you, I will try this also.
Hi @dtamburin Cribl will be sending data which is already parsed, therefore the proposed props/transforms will not work; instead you can use Ingest Actions:

== props.conf ==
[cribl]
RULESET-ruleset_cribl = _rule:ruleset_cribl:set_index:eval:is31lica
RULESET_DESC-ruleset_cribl =

== transforms.conf ==
[_rule:ruleset_cribl:set_index:eval:is31lica]
INGEST_EVAL = index=if(match(_raw,"(?i)vpxa"),"vmware", index)

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
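For a quick search-time sanity check of the match logic only (a minimal sketch; the sample event text is made up, and INGEST_EVAL itself runs at index time so it cannot be tested this way):

| makeresults
| eval _raw="2025-05-13 10:00:00 vpxa[12345]: sample event text"
| eval target_index=if(match(_raw,"(?i)vpxa"),"vmware","main")
| table _raw target_index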
Data from Cribl is "cooked", meaning it has already been processed, so props and transforms on the indexers will not process it further. You should change the index name in Cribl.
You must select between AWS, Azure or GCP. Personally, I always select AWS with the Victoria experience. With SCP you don't need to manage the base infra like indexers, search heads etc. at the OS and HW level. But you must manage some configurations like users, roles, apps, indexes etc. Usually there are some nodes in your on-prem environment like a DS, some HFs for modular inputs, and IHF/IUFs/HEC if you need to do some modifications to inputs. Most apps and inputs can be installed directly into SCP and used there, but some are better to put on-prem. As you can see, there are still some administrative tasks left to you even when the core is in SCP. At least I have seen that this combination works quite well and is much easier for admins than running everything yourself. It also has better cost efficiency than running everything yourself.
Hi @chrisludke Try the following to eval a percentage; note that I've named the fields on the stats so they are easier to reference.

| stats max(valueA) as max_valueA, max(valueB) as max_valueB by host
| eval percentage = round((max_valueB / max_valueA) * 100, 2)
| table host, max_valueA, max_valueB, percentage

Here is a sample query:

| makeresults
| eval valueA=1000, valueB=932, host="Test"
| stats max(valueA) as max_valueA, max(valueB) as max_valueB by host
| eval percentage = round((max_valueB / max_valueA) * 100, 2)
| table host, max_valueA, max_valueB, percentage

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Brand new to Splunk, inherited a slightly configured system. I want to move certain Cribl events to an index called vmware. I added this...

props.conf
[sourcetype::cribl]
TRANSFORMS-index = route_to_vmware

transforms.conf
[route_to_vmware]
REGEX = (?i)vpxa
DEST_KEY = _MetaData:Index
FORMAT = vmware

Created an index in Splunk. Example of event, ending up in main index... Any help would be appreciated. Thank you. I did restart Splunk from the GUI after the changes were made.
Looking for assistance in adding a percentage to an existing chart result. I have the following Splunk search that is able to chart the maximum value found for ValueA and ValueB by host. ValueA is the maximum count found (let's say the total number of objects). ValueB is the maximum observed usage of ValueA. I do not use a bin or time reference directly in the search; rather I use Splunk's pre-built on-demand time reference (for example, "last 24 hours" when executing the search).

index=indextype sourcetype=sourcetype "search_string"
| chart max(valueA) max(valueB) by host
Good morning, I'm experiencing an issue with the following log:

15:41:41,341 2025-05-13 15:41:41,340 DEBUG [org.jbo.res.rea.cli.log.DefaultClientLogger] (vert.x-eventloop-thread-1) requestId=31365aee-0e03-43bc-9ccd-fd465aa7a4ca Request: GET http://something.com/something/else Headers[Accept=application/json If-Modified-Since=Tue, 13 May 2025 04:00:27 GMT User-Agent=Quarkus REST Client], Empty body

2025-05-13 15:41:39,970 DEBUG [org.jbo.res.rea.cli.log.DefaultClientLogger] (vert.x-eventloop-thread-1) requestId=95a1a839-2967-4ab8-8302-f5480106adb6 Response: GET http://something.com/something/else, Status[304 Not Modified], Headers[access-control-allow-credentials=true access-control-allow-headers=content-type, accept, authorization, cache-control, pragma access-control-allow-methods=OPTIONS,HEAD,POST,GET access-control-allow-origin=* cache-control=no-cache server-timing=intid;desc=4e7d2996fd2b9cc9 set-cookie=d81b2a11fe1ca01805243b5777a6e906=abae4222185903c47a832e0c67618490; path=/; HttpOnly]

A bit of context that may be relevant: these logs are shipped using Splunk OTEL collectors. In the _raw logs, I see the following field values:

Field: requestID
Value: 95a1a839-2967-4ab8-8302-f5480106adb6 Response: GET http://something.com/something/else

Field: requestID
Value: requestId=31365aee-0e03-43bc-9ccd-fd465aa7a4ca Request: GET http://something.com/something/else

What I want is for the requestID and the Request or Response parts to be extracted into separate fields. I've already added the following to my props.conf:

[sourcetype*]
EXTRACT-requestId = requestId=(?<field_request>[a-f0-9\-]+)
EXTRACT-Response = Response:\s(?<field_response>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))
EXTRACT-Request = Request:\s(?<field_request>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))

I verified on regex101 that the regex matches correctly, but it's not working in Splunk. Could the issue be that the log shows Response: instead of Response= and Splunk doesn't treat it as a proper field delimiter? Unfortunately, I'm unable to modify the source logs. What else can I check? Do I need to modify the .yml configuration for the Splunk OTEL collector, or should I stick to using props.conf and transforms.conf?

Thank you in advance, Best Regards. Matteo