All Posts

The reason for using this is to be able to create a list of all the groups a user is in. The query above evaluates, and memberOf still does not show "Domain Users" but shows every other group. The documentation makes no mention that the primary group will not show up. Unfortunately, on the network I am currently on, I am unable to add a test user, assign its primary group to something else, and remove it from Domain Users, but I can't see how it would be normal functionality to exclude the primary group the user is a member of. I have just never in my career seen something that could list group memberships and would intentionally skip the primary group, or "Domain Users", whichever is true in this scenario. I tested with Domain Computers as well and had the same results. It still seems weird.
@Gopikrishnan_Ra did you already try editing the source code and adding explicit font settings like "fontSize": "medium", "fontType": "proportional" to each visualization? It could be a rendering issue. What version are you on? If this helps, please upvote.
On the other hand, you have no control over some of the settings (for some you can engage Support to set them for you). You have limited control over the size of your environment. Your options are limited in terms of handling frozen data. You can't integrate authentication with your on-prem LDAP... So there are pros and cons, as I said.
@LexSplunker I think that's normal behavior, not an add-on issue. Did you already try something like this? I always specify the attributes I need because of the special handling and performance.

| ldapsearch search="(&(cn=*userhere*))" attrs="cn,memberOf,primaryGroupID"
| eval primaryGroupName=if(primaryGroupID="513","Domain Users","Other Primary Group")

Refer to: https://docs.splunk.com/Documentation/SA-LdapSearch/3.0.2/User/UseSA-ldapsearchtotroubleshootproblems If this reply helps, please upvote.
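Building on that, one way to fold the primary group back into the membership list is to map the well-known RIDs and append the result to memberOf. This is only a sketch against the fields returned above; the case() branches cover the standard well-known RIDs (513 = Domain Users, 515 = Domain Computers), and any other RID would need its own mapping or lookup:

| ldapsearch search="(&(cn=*userhere*))" attrs="cn,memberOf,primaryGroupID"
| eval primaryGroupName=case(primaryGroupID="513","Domain Users",
    primaryGroupID="515","Domain Computers",
    true(),"RID ".primaryGroupID)
| eval allGroups=mvappend(memberOf, primaryGroupName)
| table cn, allGroups

Here mvappend() simply concatenates the multivalue memberOf field with the resolved primary group name, so allGroups ends up as the complete list of memberships the thread is after.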
@Sarvesh_Fenix The first one is just an info message about the SignInDetails input status. The second one shows a CSRF validation failure in the Splunk web interface. Did you try restarting after configuring the inputs?
Hi, thanks for the reply. Yes, it's one event; it starts with "begin". I want to extract fields like xyz Engine, service id, ip, PNO. Regards, AKM
@isoutamo Can you help me check it? Thanks!
Hello Splunkers, I've been having problems with Dashboard Studio recently and it has been bothering me for a long time. It would be great if you could give me some suggestions. I want to assign different colors according to different field values. I have made the following configurations, but they haven't taken effect.

{
    "type": "splunk.map",
    "options": {
        "center": [24.007647480837704, 107.43997967141127],
        "zoom": 2.3155822324586683,
        "showBaseLayer": true,
        "layers": [
            {
                "type": "bubble",
                "latitude": "> primary | seriesByName('latitude')",
                "longitude": "> primary | seriesByName('longitude')",
                "bubbleSize": "> primary | frameWithoutSeriesNames('_geo_bounds_east', '_geo_bounds_west', '_geo_bounds_north', '_geo_bounds_south', 'latitude', 'longitude') | frameBySeriesTypes('number')",
                "dataColors": " > primary | seriesByName('status') | matchValue('colorMatchConfig')"
            }
        ]
    },
    "dataSources": {
        "primary": "ds_PHhx1Fxi"
    },
    "context": {
        "colorMatchConfig": [
            { "match": "high", "value": "#FF0000" },
            { "match": "low", "value": "#00FF00" },
            { "match": "critical", "value": "#0000FF" }
        ]
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
Please create a new question for this item.
Hi @gcusello

{
    "type": "splunk.map",
    "options": {
        "center": [24.007647480837704, 107.43997967141127],
        "zoom": 2.3155822324586683,
        "showBaseLayer": true,
        "layers": [
            {
                "type": "bubble",
                "latitude": "> primary | seriesByName('latitude')",
                "longitude": "> primary | seriesByName('longitude')",
                "bubbleSize": "> primary | frameWithoutSeriesNames('_geo_bounds_east', '_geo_bounds_west', '_geo_bounds_north', '_geo_bounds_south', 'latitude', 'longitude') | frameBySeriesTypes('number')",
                "dataColors": " > primary | seriesByName('status') | matchValue('colorMatchConfig')"
            }
        ]
    },
    "dataSources": {
        "primary": "ds_PHhx1Fxi"
    },
    "context": {
        "colorMatchConfig": [
            { "match": "high", "value": "#FF0000" },
            { "match": "low", "value": "#00FF00" },
            { "match": "critical", "value": "#0000FF" }
        ]
    },
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
Hi @gcusello, thank you for your support, you are my hero! I've been having problems with Dashboard Studio recently and it has been bothering me for a long time. It would be great if you could give me some suggestions. I want to assign different colors according to different field values. I have made the following configurations, but they haven't taken effect. Can you help me check it?
According to the documentation, you should package the dependencies your app needs in its /<appname>/bin directory.
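As a rough sketch of what that layout can look like (the app name and file names here are placeholders, not from the thread):

myapp/
├── bin/
│   ├── my_script.py
│   └── lib/            (bundled third-party Python packages)
└── default/
    └── app.conf

The point is that anything your scripts import beyond the Python standard library ships inside the app itself rather than being installed separately on the host.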
Worked perfectly. Exactly what I needed. Thanks so much for your quick response!
Hi @mattt

I'm a little confused as to why requestId= is still present in the second event example. If you want to run the regex extraction against the "requestId" field then you need to add "in <fieldName>" to your extract: <regex> in <src_field>. See the docs here. For example:

EXTRACT-requestId = (requestId=)?(?<field_requestId>[a-f0-9\-]{36})
EXTRACT-Response = Response:\s(?<field_response>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))
EXTRACT-Request = Request:\s(?<field_request>([A-Z]+)\s([^\s,]+(?:[^\r\n]+)))

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
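As a side note, the 36-character class in the requestId extraction lines up with a standard UUID (32 hex digits plus 4 hyphens), so you can sanity-check the pattern outside Splunk with plain Python. This is just an illustration against a made-up log line; note that Python spells named groups (?P<name>...), while Splunk's PCRE also accepts (?<name>...):

```python
import re

# Same pattern as the EXTRACT above, with Python-style named group syntax.
pattern = re.compile(r"(?:requestId=)?(?P<field_requestId>[a-f0-9\-]{36})")

# Hypothetical event line for testing the capture.
line = "requestId=550e8400-e29b-41d4-a716-446655440000 Response: GET /api/v1/items"

match = pattern.search(line)
print(match.group("field_requestId"))  # 550e8400-e29b-41d4-a716-446655440000
```

The optional (?:requestId=)? group means the capture itself contains only the UUID, which matches the behavior described above.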
Thank you, I will try this also.
Hi @dtamburin

Cribl will be sending data which is already parsed, so the proposed props/transforms will not work. Instead you can use Ingest Actions:

== props.conf ==
[cribl]
RULESET-ruleset_cribl = _rule:ruleset_cribl:set_index:eval:is31lica
RULESET_DESC-ruleset_cribl =

== transforms.conf ==
[_rule:ruleset_cribl:set_index:eval:is31lica]
INGEST_EVAL = index=IF(match(_raw,"(?i)vpxa"),"vmware", index)

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Data from Cribl is "cooked", meaning it has already been processed, so props and transforms on the indexers will not process it further. You should change the index name in Cribl.
You must select between AWS, Azure or GCP. Personally I always select AWS with the Victoria experience. With SCP you don't need to manage base infrastructure like indexers, search heads etc. at the OS and HW level. But you must manage some configurations like users, roles, apps, indexes etc. Usually there are some nodes left in your on-prem environment, like a DS, some HFs for modular inputs, and IHF/IUFs/HEC if you need to make some modifications to inputs. Most apps and inputs can be installed directly into SCP and used there, but some are better placed on-prem. As you can see, there are still some administrative tasks left to you even when the core is in SCP. At least I have seen that this combination works quite well and is much easier for admins than running everything yourself. It also has better cost efficiency than running everything yourself.
Hi @chrisludke

Try the following to eval a percentage; note that I've named the fields on the stats so they are easier to reference.

| stats max(valueA) as max_valueA, max(valueB) as max_valueB by host
| eval percentage = round((max_valueB / max_valueA) * 100, 2)
| table host, max_valueA, max_valueB, percentage

Here is a sample query:

| makeresults
| eval valueA=1000, valueB=932, host="Test"
| stats max(valueA) as max_valueA, max(valueB) as max_valueB by host
| eval percentage = round((max_valueB / max_valueA) * 100, 2)
| table host, max_valueA, max_valueB, percentage

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Brand new to Splunk, inherited a slightly configured system. I want to move certain Cribl events to an index called vmware. I added this...

props.conf
[sourcetype::cribl]
TRANSFORMS-index = route_to_vmware

transforms.conf
[route_to_vmware]
REGEX = (?i)vpxa
DEST_KEY = _MetaData:Index
FORMAT = vmware

Created an index in Splunk. Example of event, ending up in the main index... Any help would be appreciated, thank you. I did restart Splunk from the GUI after the changes were made.