All Posts

I would generally recommend setting the token on the submitted token model as well as the default, i.e. var submittedTokenModel = mvc.Components.getInstance('submitted'); and submittedTokenModel.set('clickedButtonValue', value); I'm also not entirely sure how the <dashboard> or <form> structure of a dashboard changes how tokens are managed, because the token models affect how tokens are used when clicking submit buttons in a dashboard, and a dashboard with a submit button will always be a <form> dashboard. So, first change the dashboard to <form> and then try the changed JS - hopefully one of the two will make the difference.
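For reference, a minimal sketch of that pattern in a dashboard JS extension - the token name clickedButtonValue comes from the thread, and the value variable is a placeholder for whatever your click handler provides:

require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function(mvc) {
    // Get both token models so the token is set for immediate use and for submitted searches
    var defaultTokenModel = mvc.Components.getInstance('default');
    var submittedTokenModel = mvc.Components.getInstance('submitted');
    var value = 'example';  // placeholder value from your click handler
    defaultTokenModel.set('clickedButtonValue', value);
    submittedTokenModel.set('clickedButtonValue', value);
});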
What is the x-axis you need? You have 3 fields output in your search | table StatisticalId Value Unit and there is a lot of mvexpand logic going on... and it seems like that is going to multiply your data significantly, as there's no correlation between each of the MV values you are expanding. That aside, the basic command to create the chart would be something like | chart max(Value) over Unit by StatisticalId which would put Unit on the x-axis. Swap Unit and StatisticalId to make StatisticalId the x-axis, as in the sketch below.
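A minimal SPL sketch of the two options, using the field names from your search (max() is just an example aggregation):

| chart max(Value) over Unit by StatisticalId          ``` Unit on the x-axis, one series per StatisticalId ```

| chart max(Value) over StatisticalId by Unit          ``` StatisticalId on the x-axis, one series per Unit ```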
Thank you, @akouki_splunk! That's it. The on-prem Splunk instance uses SAML authentication so I get automatically assigned both "admin" and "user" roles from my group memberships. The "user" role was in the "blacklisted_roles" list, which caused the error. Thank you for the quick response!  
The CSV is not structured as a lookup table. The structure should be that, given a value for CPU1 (e.g. "process_a"), what are the (first matching) values for CPU2 ("process_b") and CPU3 ("process_c"). What you seem to be looking for is, given a value for some CPU (e.g. "process_a"), to which CPU category it belongs ("CPU1"). Are you able to restructure the test.csv to be more like:

Process      CPU Class
process_a    CPU1
process_b    CPU2
process_c    CPU3
process_d    CPU1
process_e    CPU2
process_f    CPU3
process_g    CPU1
process_h    CPU2
process_i    CPU3

If you can't restructure that file, something like this would work:

| makeresults
| eval CPU=mvappend("process_a","process_a","process_b","process_a","process_c","process_a","process_b","process_d","process_a","process_e","process_a","process_b","process_c","process_a","process_a","process_b","process_d","process_a","process_c","process_a","process_b","process_e","process_a")
| mvexpand CPU
``` The above generates sample data and can be ignored in your SPL ```
``` Uncomment the line below and notice the change from CPU1 to CPU ```
``` index=custom | eval SEP=split(_raw,"|") | eval CPU=trim(mvindex(SEP,1)) ```
``` These two lines create aliases to map in the CPU group for each class in turn ```
| eval myCPU1=CPU
| eval myCPU2=CPU
``` These next lines assume that a process will only appear once in the test.csv file. ```
``` If that is the case, then CPU2 and CPU3 will be non-null when CPU1 matches, ```
``` otherwise that process does not belong to CPU1 (and ditto for the CPU2 case.) ```
| lookup community CPU1 as myCPU1
| eval myCPU1=if(NOT isnull(CPU2),CPU,null())
| lookup community CPU2 as myCPU2
| eval myCPU2=if(NOT isnull(CPU1),CPU,null())
``` Now create your stats on the two CPU classes. ```
| bin _time span=1m
| stats count(myCPU1) as CPU1_COUNT count(myCPU2) as CPU2_COUNT by _time
Hi @ww9rivers , I'm @akouki_splunk , the developer of the Content Manager App. It seems you are having an issue with the blacklisted roles or users. Do you have access to the app configuration files? If so, please open the etc/apps/appcontentmanager/default/acms_settings.conf file and clear the blacklisted_roles and blacklisted_users attributes. The file content should look like this after the modification:

[settings]
blacklisted_apps = alert_logevent,alert_webhook,appsbrowser,introspection_generator_addon,launcher,learned,legacy,logd_input,python_upgrade_readiness_app,sample_app,splunk_assist,splunk_gdi,splunk_httpinput,splunk_ingest_actions,splunk_instrumentation,splunk_internal_metrics,splunk_metrics_workspace,splunk_monitoring_console,splunk_secure_gateway,SplunkForwarder,SplunkLightForwarder,splunk-dashboard-studio
blacklisted_conffiles = server,limits,app,passwords
blacklisted_stanzas =
blacklisted_roles =
blacklisted_users =
theme = light
is_configured = 0
default_owner = nobody
Did you want cids to contain that GUID? Try

| rex field=log ".*customers\s(?<cids>.*)"

Alternatively, if the GUID is always at the end, following a space, you can even drop the "customers" part:

| rex field=log "(?<cids>\S+$)"

Your example appears to be creating a capture group named "cids" that captures nothing (the first set of parentheses), and then a second, unnamed group that matches what you want (the second set of parentheses). This document might help explain capture groups in more detail: https://docs.splunk.com/Documentation/SCS/current/Search/AboutSplunkregularexpressions#Capture_groups_in_regular_expressions
I want to extract a string "GUID" from the log right after "customers". This regex expression works in https://regex101.com/ but not in Splunk. My field name is log:

2023-06-19 15:28:01.726 ERROR [communication-service,6e72370er2368b08,6e723709fd368b08] [,,,] 1 --- [container-0-C-1] c.w.r.acc.commservice.sink.ReminderSink : Reminder Message processed, no linked customers aaf60d69-99a9-41f5-a081-032224284066

| rex field=log "(?<cids>).*customers\s(.*)"
Before you do your eval statement, test that your extraction works. In your query, use a rex statement to test this:

... | rex field=<your_field> "\"path\"\:\"auth\/(abc|xyz)\/login\/(?<User>[\w\_]+)" ...

Then, once you confirm you are extracting your User field values, add the eval statement to the query. Once you confirm that works, you can go back to your sourcetype and modify your extract and eval lines (see the sketch after this reply).

--- If this reply helps you, Karma would be appreciated.
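A rough props.conf sketch of that last step, reusing the regex above - the stanza name my_sourcetype is a placeholder, and the EVAL line is only illustrative since the original eval isn't shown in this thread:

# props.conf (hypothetical sourcetype stanza)
[my_sourcetype]
EXTRACT-user = \"path\"\:\"auth\/(abc|xyz)\/login\/(?<User>[\w\_]+)
# EVAL-<field> = <your existing eval expression goes here>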
Hello, I am trying to change the email address of my Splunk community account. I went to My settings > Personal > Email and set the new email address. I got the verification email and verified the new email address. The new email address was then displayed under My settings. However, when I logged out and then logged back in, the old email address was shown again. Is this a known issue?
I believe that your scenario could be accomplished with Ingest Actions: https://docs.splunk.com/Documentation/Splunk/9.2.1/Data/DataIngest This should support cloning data and applying different filtering rules and routing to the two streams.
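If you end up implementing this with classic index-time props/transforms instead of the Ingest Actions UI, a rough sketch of clone-then-filter-then-route could look like this - all stanza, sourcetype, and index names below are placeholders:

# props.conf
[my_sourcetype]
TRANSFORMS-clone = clone_stream

[my_sourcetype_copy]
TRANSFORMS-routefilter = drop_unwanted, route_copy

# transforms.conf
[clone_stream]
# Clone every event into a second sourcetype so each stream can get its own rules
REGEX = .
CLONE_SOURCETYPE = my_sourcetype_copy

[drop_unwanted]
# In the cloned stream only, discard events matching this pattern
REGEX = pattern_to_drop
DEST_KEY = queue
FORMAT = nullQueue

[route_copy]
# Route the remaining cloned events to a different index
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = second_index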
Thank you! Just like that it works and only in 1 line
| eval fruit=mvappend(fruit1,if(fruit2!="NULL",fruit2,null()))  ``` merge fruit1 and fruit2 into one multivalue field, ignoring literal "NULL" values in fruit2 ```
| stats count by fruit  ``` count events per fruit value ```
A little update: I now got to a point where I have the following situation:

Fruit_1   count   Fruit_2   count
Apple     5       null      null
Orange    10      null      null
Pear      5       Apple     5
Melon     10      Orange    10

How do I get it so that the amounts of apples and oranges from columns Fruit_1 and Fruit_2 are combined into 1 big fruit list and 1 count list? So the result should be:

Fruit    Count
Apple    10
Orange   20
Pear     5
Melon    5
Hi @richgalloway, even though the TCP connection to the indexer and its port is set up and no firewall is blocking it, still no events are being returned on search.
Hi All,

We have Splunk Security ENT 6.6.2 - EOL, I know! Our admin guys are working on upgrading.

My problem: We created 2 new user groups, Team A and Team B. We gave Team A total access to data in half the indexes (role restrictions on indexes). We gave Team B total access to data in the other half of the indexes (role restrictions on indexes). The outcome was as expected: Team A can only see data from the indexes for their role, and likewise for Team B.

This is where we have a problem. Both teams need to use the Incident Review dashboard, and both teams need to assign notable events to users within their own team as owners. However, they cannot, and the system gives errors. If we take the role restriction off, so both teams can see all data, then they can assign notable events. Our internal Splunk admin says it is a bug in this version and the system needs to be upgraded.

My questions: Has anyone experienced similar? Is there a bug and, if so, is there any reference that can be found on the bug? Are there any workarounds for this problem? We have 2 teams that need to use Incident Review to respond to alerts; however, these teams need to be independent and should not be able to see data within indexes that belong to the other team.

Thanks for any advice.
I have a few questions on how Splunk sees and displays the license warning counts. Yes, if you go over your pool size then that equals a warning count. However, in several instances I see some conflicting information. For example, when I add a new license that is bigger than the previous one, I would think the warning count would reset, but it doesn't. I also have a search that looks at the license_usage.log and shows me how many times I have gone over my size in the last 30 days. This also has different counts than what is shown in the warning count section. The final weird issue I see is when I had a server warning count of 44, but a week later, without any changes, the number decreased to 37. What causes so many different numbers with the Splunk licenses?
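For reference, a sketch of the kind of license_usage.log search mentioned above - it assumes the daily RolloverSummary events carry b (bytes indexed that day) and poolsz (pool quota in bytes), and that it runs against the license manager's _internal index:

index=_internal source=*license_usage.log* type=RolloverSummary earliest=-30d@d
| eval over_quota=if(b > poolsz, 1, 0)
| stats sum(over_quota) as days_over_quota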
Since moving to 9.2.1, my df.sh events are now a single event when searching. I also notice the format is bad when running the script compared to the built-in df. Novice Linux guy here looking to see if anyone else has come across this. Thanks!

(Screenshots: splunk df, linux df, splunk event)
Hello all,

I need to configure SAML/SSO with Splunk but I'm having the following issues:

- I have 3 search heads in a cluster (without a load balancer)
=> I can create a dedicated SAML config for each search head and disable the replication of authentication.conf
- We have many tenants and we have users connecting from the different tenants to Splunk (currently we have multiple LDAP configurations)
=> I understand that Splunk only accepts one identity provider with SAML, so users from other tenants will not be able to access Splunk with SSO.
- Ideally, we must have some users connecting with LDAP, but Splunk doesn't allow enabling both LDAP and SAML simultaneously, or it is possible but requires a custom script.

Questions:
1- Has anyone worked on a script to enable LDAP and SAML?
2- Any idea about the best config from Azure AD regarding the multi-tenants and the B2B collaboration?
3- Any advice in general on how to better approach this issue?

Best
Thanks @harsmarvania57, my bad. It worked as well. I want to write another script that uses the Splunk SDK, so that it does not depend on the Splunk lib or have to run on the Splunk server. Anyway, I have nearly finished my script using the SDK. Thanks for your help - your script helped me a lot!
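For anyone following along, a minimal sketch of that kind of SDK-based script using the Splunk SDK for Python (splunklib) - the host, credentials, and search string are placeholders:

import splunklib.client as client
import splunklib.results as results

# Connect to the Splunk management port (8089), not the web port
service = client.connect(host="localhost", port=8089, username="admin", password="changeme")

# Run a blocking one-shot search and iterate over the JSON results
reader = results.JSONResultsReader(service.jobs.oneshot("search index=_internal | head 5", output_mode="json"))
for item in reader:
    if isinstance(item, dict):  # skip diagnostic Message objects
        print(item)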
Thank you very much. It works.