All Posts

Well, I did change one thing from your last example. Here is the final version that worked as required, for those that read this later.

<input id="app_nodes_multiselect" type="multiselect" depends="$app_fm_app_id$" token="app_fm_entity_id" searchWhenChanged="true">
  <label>Nodes</label>
  <delimiter> </delimiter>
  <fieldForLabel>entity_name</fieldForLabel>
  <fieldForValue>internal_entity_id</fieldForValue>
  <search>
    <query>
      | inputlookup aix_kv_apm_comps WHERE entity_type!=$app_fm_group_nodes$
      | search [| makeresults | eval search="internal_parent_id=(".mvjoin($app_fm_app_id$, " OR internal_parent_id=").")" | return $search]
      | table entity_name, internal_entity_id
      | sort entity_name
    </query>
  </search>
  <choice value="*">All</choice>
  <default>*</default>
  <change>
    <condition match="$form.app_fm_entity_id$=&quot;*&quot;">
      <set token="app_net_fm_entity_id">_all</set>
      <set token="condition">1</set>
    </condition>
    <condition>
      <set token="condition">2</set>
      <eval token="app_net_fm_entity_id">case(mvcount($form.app_fm_entity_id$)="2" AND mvindex($form.app_fm_entity_id$,0)="*", mvindex($form.app_fm_entity_id$,1), mvfind($form.app_fm_entity_id$,"^\\*$$")=mvcount($form.app_fm_entity_id$)-1, "_all", true(), $form.app_fm_entity_id$)</eval>
      <set token="app_net_fm_entity_id">$app_fm_entity_id$</set>
    </condition>
  </change>
</input>
Thank you sooo much!!!  That worked perfectly!!!  
Hi @Jonathan.Wang, Thank you for following up. Since the community has not jumped in yet either, I think the best next step is to contact Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM); read on to learn how to manage your cases. If you contact Support or find a solution on your own, please share your learnings as a reply to this post.
The transaction command provides a duration field for the difference in times. Is this not sufficient for your needs?
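For example, a minimal sketch reusing the Message_Id extraction from your search (the index name here is a placeholder, not from your environment):

index=my_index "Starting execution for request" OR "Successfully completed execution"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| transaction Message_Id startswith="Starting execution for request" endswith="Successfully completed execution"
| table Message_Id duration

The duration field is the number of seconds between the first and last event in each transaction.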
Hi All, I have 20+ panels in a Studio dashboard. As per customer requirement, they want only 5 panels per page. Could you please help with the JSON code to segregate the panels? For example, if I click on the 1st dot it should display only the first 5 panels, if I click on the next dot it should display the next 5 panels, and so on.
Essentially, the mvrange and mvexpand give you two events, one with row equal to zero and one with row equal to one. If you use these to calculate how far back you want the send event to be, based on the difference between info_min_time and info_max_time (which are returned by addinfo), you can modify the calculation for earliest and latest appropriately. Hopefully that makes sense; a rough sketch of the mechanics follows.
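This assumes the search runs over a bounded time range (over All Time, info_max_time is not meaningful), and the shift arithmetic is a placeholder to adapt to your own offset:

| makeresults
| addinfo
| eval row=mvrange(0,2)
| mvexpand row
| eval window=info_max_time-info_min_time
| eval earliest=info_min_time-(row*window), latest=info_max_time-(row*window)

Here row=0 keeps the original window and row=1 shifts it back by one window length; the computed earliest and latest can then feed the search for the send event.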
@isoutamo Thank you so much. How can I estimate the time required for replicating the data?
Those are OK steps. If you are updating those *_load values, you should remember to decrease them when everything is ready.
Hi, thanks for pointing that out. The "by bookmark_status_display" was indeed unneeded, as I'm specifying which status it is in the query, hence the actual query should be:

| sseanalytics 'bookmark'
| where bookmark_status="bookmarked"
| stats count(bookmark_status_display) AS "Bookmark Status"

Once taking that into consideration, I was able to use the following for the result:

| rest /services/saved/searches
| search alert_type!="always" AND action.email.to="production@email.com" AND title!="*test*"
| stats count(action.email.to) AS "Count"
| appendcols [| sseanalytics 'bookmark' | where bookmark_status="successfullyImplemented" | stats count(bookmark_status_display) AS "Bookmark Status"]
| eventstats values(Count) as Count
| eval diff = 'Bookmark Status' - Count
| table diff

Thank you 100!
Good afternoon everyone! Helping a client set up Splunk SAML for the first time. We have confirmed that the SAML IDP is successfully sending all necessary attributes in the assertion and Splunk is consuming it. We are getting the following error though. I've included all logs leading up to the final ERROR:

08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use RSA-SHA256, RSA-SHA384, or RSA-SHA512 for 'signatureAlgorithm' rather than 'RSA-SHA1'
08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use RSA-SHA256, RSA-SHA384, or RSA-SHA512 for 'inboundSignatureAlgorithm' rather than 'RSA-SHA1'
08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use SHA256, SHA384, or SHA512 for 'inboundDigestMethod' rather than 'SHA1'
08-19-2024 15:43:55.859 +0000 INFO SAMLConfig [25929 webui] - Skipping :idpCert.pem because it does not begin with idpCertChain_ when populating idpCertChains
08-19-2024 15:43:55.859 +0000 INFO SAMLConfig [25929 webui] - No valid value for 'saml_negative_cache_timeout'. Defaulting to 3600
08-19-2024 15:43:55.860 +0000 INFO SAMLConfig [25929 webui] - Both AQR and AuthnExt are disabled, setting _shouldCacheSAMLUserInfotoDisk=true
08-19-2024 15:43:55.860 +0000 INFO AuthenticationProviderSAML [25929 webui] - Writing to persistent storage for user= name=splunktester@customerdomain.com email=splunktester@customerdomain.com roles=user stanza=userToRoleMap_SAML
08-19-2024 15:43:55.860 +0000 ERROR ConfPathMapper [25929 webui] - /opt/splunk/etc/system/local: Setting /nobody/system/authentication/userToRoleMap_SAML = user::splunktester@customerdomain.com::splunktester@customerdomain.com: Unsupported path or value
08-19-2024 15:43:55.873 +0000 ERROR HttpListener [25929 webui] - Exception while processing request from 10.10.10.10:58723 for /saml/acs: Data could not be written: /nobody/system/authentication/userToRoleMap_SAML: user::splunktester@customerdomain.com::splunktester@customerdomain.com trace="[0x0000556C45CBFC98] "? (splunkd + 0x1E9CC98)";[0x0000556C48F53CBE] "_ZN10TcpChannel11when_eventsE18PollableDescriptor + 606 (splunkd + 0x5130CBE)";[0x0000556C48EF74FE] "_ZN8PolledFd8do_eventEv + 126 (splunkd + 0x50D44FE)";[0x0000556C48EF870A] "_ZN9EventLoop3runEv + 746 (splunkd + 0x50D570A)";[0x0000556C48F4E46D] "_ZN19Base_TcpChannelLoop7_do_runEv + 29 (splunkd + 0x512B46D)";[0x0000556C467D457C] "_ZN17SplunkdHttpServer2goEv + 108 (splunkd + 0x29B157C)";[0x0000556C48FF85EE] "_ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x51D55EE)";[0x0000556C48FF86FB] "_ZN6Thread8callMainEPv + 139 (splunkd + 0x51D56FB)";[0x00007F4744F58EA5] "? (libpthread.so.0 + 0x7EA5)";[0x00007F4743E83B0D] "clone + 109 (libc.so.6 + 0xFEB0D)""

The web page displays:

Data could not be written: /nobody/system/authentication/userToRoleMap_SAML: user::splunktester@customerdomain.com::splunktester@customerdomain.com The server had an unexpected

I haven't been able to find anything online about this. Some posts have hinted at permission errors on .conf files. I know this can be caused by either the splunk service not running as the correct user and/or the .conf file not having the correct perms.
Hi, how do I get the difference in the timestamps? I want to know the difference between the starting timestamp and the completed timestamp.

"My base query"
| rex "status:\s+(?<Status>.*)\"}"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| rex "Path\:\s+(?<ResourcePath>.*)\""
| eval timestamp_s = timestamp/1000
| eval human_readable_time = strftime(timestamp_s, "%Y-%m-%d %H:%M:%S")
| transaction Message_Id startswith="Starting execution for request" endswith="Successfully completed execution"

RAW_LOG
8/19/24 9:56:05.113 AM
{"id":"38448254623555555", "timestamp":1724079365113, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Startingexecutionforrequest:f34444-22222-44444-999999-0888888"}
{"id":"38448254444444444", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Methodcompletedwithstatus:200"}
{"id":"38448222222222222", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Successfullycompletedexecution"}
{"id":"38417111111111111", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) AWS Integration Endpoint RequestId :f32222-22222-44444-999999-0888888"}
Hi, I get this error in our Splunk dashboards since I migrated Splunk to version 9.2.2. I was able to bypass the error in the dashboards by updating internal_library_settings and unrestricting the library requirements, but this increases the risk/vulnerability of the environment.

require(['jquery', 'underscore', 'splunkjs/mvc', 'util/console'], function($, _, mvc, console) {
    function setToken(name, value) {
        console.log('Setting Token %o=%o', name, value);
        var defaultTokenModel = mvc.Components.get('default');
        if (defaultTokenModel) {
            defaultTokenModel.set(name, value);
        }
        var submittedTokenModel = mvc.Components.get('submitted');
        if (submittedTokenModel) {
            submittedTokenModel.set(name, value);
        }
    }
    $('.dashboard-body').on('click', '[data-set-token],[data-unset-token],[data-token-json]', function(e) {
        e.preventDefault();
        var target = $(e.currentTarget);
        var setTokenName = target.data('set-token');
        if (setTokenName) {
            setToken(setTokenName, target.data('value'));
        }
        var unsetTokenName = target.data('unset-token');
        if (unsetTokenName) {
            setToken(unsetTokenName, undefined);
        }
        var tokenJson = target.data('token-json');
        if (tokenJson) {
            try {
                if (_.isObject(tokenJson)) {
                    _(tokenJson).each(function(value, key) {
                        if (value == null) {
                            // Unset the token
                            setToken(key, undefined);
                        } else {
                            setToken(key, value);
                        }
                    });
                }
            } catch (e) {
                console.warn('Cannot parse token JSON: ', e);
            }
        }
    });
});

The above code is the one that I have for one of the dashboards. I'm not sure how to check which jQuery version the code targets, but if there is some help to fix the above code so that it meets the requirements of jQuery 3.5, I'll implement the same for the other code as well.

Thanks,
Pravin
@yuanliu, I see the whole event in a single line when I search for that event, and on the indexer I have this in props:

[load_server]
TRUNCATE=999999
I have the collect search working with eval _raw="field1","field2", ... (see Conversion functions - Splunk Documentation). Thank you for pointing me in the right direction, and well done @PickleRick
Is there anything I need to configure on the host of the remote server I am monitoring? And how can I see/configure which JMX metrics I can collect or visualise?
Appreciate the clarification. I have 30%+ headroom with my license, so a couple of one-time events should not be an issue.
Hi @yuanliu How do I get the difference of the timestamps? I want the difference of the starting timestamp and the completed timestamp.

"My base query"
| rex "status:\s+(?<Status>.*)\"}"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| rex "Path\:\s+(?<ResourcePath>.*)\""
| eval timestamp_s = timestamp/1000
| eval human_readable_time = strftime(timestamp_s, "%Y-%m-%d %H:%M:%S")
| transaction Message_Id startswith="Starting execution for request" endswith="Successfully completed execution"

RAW_LOG
8/19/24 9:56:05.113 AM
{"id":"38448254623555555", "timestamp":1724079365113, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Startingexecutionforrequest:f34444-22222-44444-999999-0888888"}
{"id":"38448254444444444", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Methodcompletedwithstatus:200"}
{"id":"38448222222222222", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Successfullycompletedexecution"}
{"id":"38417111111111111", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) AWS Integration Endpoint RequestId :f32222-22222-44444-999999-0888888"}
That's pretty normal for Windows events. Not every log has every field. And not every field has a reasonable value each time. This is from my home lab.  
Looks like the app version 1.0.15 was submitted back in May '24 and failed vetting due to a minor issue with the version of Add-on Builder:

check_for_addon_builder_version
The Add-on Builder version used to create this app (4.1.3) is below the minimum required version of 4.2.0. Re-generate your add-on using at least Add-on Builder 4.2.0.
File: default/addon_builder.conf
Line Number: 4
Once these issues are remedied you can resubmit your app for review.

We have people who would like to use this app in Splunk Cloud, and if the developer could update the vetting, that would be great.

Best,
I mean you use up additional license amount for indexing additional data with the collect command unless you use the stash or stash_hec sourcetypes. So each event you first index into index A and then search, transform, and collect into index B will cost you twice (roughly, depending on what you do with it in terms of processing before collecting) the license usage it incurs just by being indexed into index A. Whether you're within your license limits or not depends, of course, on the overall amount of ingested data and your license size. A rough illustration is below.
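The index names and search terms here are made up; the point is that collect defaults to the unmetered stash sourcetype, so this summary lands in index B without additional license usage:

index=indexA error
| stats count by host
| collect index=indexB

Explicitly setting a different sourcetype on collect (for example sourcetype=mydata) is what makes the collected events count against the license again.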