All Posts

This app now exists and does a better job of PDF production: https://splunkbase.splunk.com/app/7171
Hey PickleRick, I see. I was not aware that using a sourcetype other than stash would double license usage, so thank you for pointing that out. So the only options for restricting search access based on filters are to create separate apps or to process the data before ingestion. I didn't want separate apps because of congestion, especially since they would differ by only one line in the search filter. The comment about speeding up searches was in reference to summary indexing; performance is not a concern. I was wondering why summary indexing wouldn't work, since filtering the search to only superheroes/villains would speed it up, which is exactly what summary indexing is meant to help with. The main purpose was always access restriction. Thanks,
Try this:

[hecpaloalto_in]
INGEST_EVAL = index=if(match(sourcetype, "pan:logs"), "palo_alto", "aws")
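For context, a minimal sketch of how that stanza is wired up: INGEST_EVAL lives in transforms.conf and is attached to incoming data via a TRANSFORMS- setting in props.conf. The props stanza name below is an assumption; it should match whatever source or sourcetype your HEC input actually assigns to both the Palo Alto and AWS events.

```ini
# transforms.conf -- the index-routing transform from the reply above
[hecpaloalto_in]
INGEST_EVAL = index=if(match(sourcetype, "pan:logs"), "palo_alto", "aws")

# props.conf -- attach the transform to the incoming HEC data
# ([source::http:my_hec_input] is illustrative; use your own input's
# source or sourcetype so the transform sees every event to be routed)
[source::http:my_hec_input]
TRANSFORMS-route_index = hecpaloalto_in
```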
@marcoscala were you able to fix the Palo Alto Splunk app throwing JS errors?
@shawno were you able to fix the error?
FYI, converting to Dashboard Studio fixes the diagrams, but truncates the tables. yay.
Same here, for as long as I can remember (don't ask me the versions), and it's still an issue with 9.2.2. Funny thing is, I have about 9 graphs, and three work OK. I've tried all kinds of tactics: putting each graph on its own line, putting them all together, changing the order, trying landscape vs. letter, changing the paper type, "plain text", "HTML & plain text"...
Hello, is there a way to add third-party Python modules to the Add-on Builder? I am trying to create a Python script in the Add-on Builder, but it looks like I need a module that is not included with it. Thanks for any help on this. Tom
The transaction command creates a field called "duration" that is the difference between the _time values of the first and last events of the transaction. That should fill this need, assuming _time is set by properly extracting the "timestamp" value at index time. The transaction command is not very performant, however. A more efficient way to do it uses stats (note the named capture group in the timestamp rex, which the original was missing):

"My base query" ("Starting execution for request" OR "Successfully completed execution")
| rex "status:\s+(?<Status>.*)\"}"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| rex "Path\:\s+(?<ResourcePath>.*)\""
| rex "timestamp\\\":(?<timestamp>\d+)"
| stats min(timestamp) as startTime, max(timestamp) as endTime by Message_Id
| eval duration = endTime - startTime
| eval end_timestamp_s = endTime/1000, start_timestamp_s = startTime/1000
| eval human_readable_etime = strftime(end_timestamp_s, "%Y-%m-%d %H:%M:%S"), human_readable_stime = strftime(start_timestamp_s, "%Y-%m-%d %H:%M:%S"), duration = tostring(duration, "duration")
| table Message_Id human_readable_stime human_readable_etime duration
Well, I did change one thing from your last example. Here is the final version that worked as required, for those who read this later.

<input id="app_nodes_multiselect" type="multiselect" depends="$app_fm_app_id$" token="app_fm_entity_id" searchWhenChanged="true">
  <label>Nodes</label>
  <delimiter> </delimiter>
  <fieldForLabel>entity_name</fieldForLabel>
  <fieldForValue>internal_entity_id</fieldForValue>
  <search>
    <query>
      | inputlookup aix_kv_apm_comps WHERE entity_type!=$app_fm_group_nodes$
      | search [| makeresults | eval search="internal_parent_id=(".mvjoin($app_fm_app_id$, " OR internal_parent_id=").")" | return $search]
      | table entity_name, internal_entity_id
      | sort entity_name
    </query>
  </search>
  <choice value="*">All</choice>
  <default>*</default>
  <change>
    <condition match="$form.app_fm_entity_id$=&quot;*&quot;">
      <set token="app_net_fm_entity_id">_all</set>
      <set token="condition">1</set>
    </condition>
    <condition>
      <set token="condition">2</set>
      <eval token="app_net_fm_entity_id">case(mvcount($form.app_fm_entity_id$)="2" AND mvindex($form.app_fm_entity_id$,0)="*", mvindex($form.app_fm_entity_id$,1), mvfind($form.app_fm_entity_id$,"^\\*$$")=mvcount($form.app_fm_entity_id$)-1, "_all", true(), $form.app_fm_entity_id$)</eval>
      <set token="app_net_fm_entity_id">$app_fm_entity_id$</set>
    </condition>
  </change>
</input>
Thank you sooo much!!!  That worked perfectly!!!  
Hi @Jonathan.Wang, Thank you for following up. Since the community has not jumped in yet either, I think the best next step is to contact Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM); read on to learn how to manage your cases. If you contact Support or find a solution on your own, please share your learnings as a reply to this post.
The transaction command provides a duration field for the difference in times. Is this not sufficient for your needs?
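For illustration, a minimal sketch of using that duration field (the base query, extraction, and startswith/endswith strings are copied from the question in this thread; adjust them to your data):

```spl
"My base query"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| transaction Message_Id startswith="Starting execution for request" endswith="Successfully completed execution"
| eval duration_hms = tostring(duration, "duration")
| table Message_Id duration duration_hms
```

Here duration is the difference in seconds between the first and last _time values of each transaction, and tostring(..., "duration") renders it as HH:MM:SS.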
Hi all, I have 20+ panels in a Studio dashboard. Per customer requirement, they want only 5 panels per page. Could you please help with the JSON code to segregate the panels? For example, clicking the first dot should display only the first 5 panels, clicking the next dot should display the next 5 panels, and so on.
Essentially, the mvrange and mvexpand give you two events, one with row equal to zero and one with row equal to one. If you can use these to calculate how far back you want the second event's window to be, based on the difference between info_min_time and info_max_time (which are returned by addinfo), you can modify the calculation for earliest and latest appropriately. Hopefully that makes sense.
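A rough sketch of that pattern, assuming the goal is to shift the second row's window back by the span of the original time range (the final earliest/latest arithmetic is illustrative and should be adapted to the offset you actually want):

```spl
<your search>
| addinfo
| eval row=mvrange(0,2)
| mvexpand row
| eval span = info_max_time - info_min_time
| eval earliest = info_min_time - row * span, latest = info_max_time - row * span
```

Row 0 keeps the original window; row 1 produces a window shifted back by one full span, which can then feed the earlier/later comparison.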
@isoutamo Thank you so much. How can I estimate the time required for replicating the data?
Those are OK steps. If you are updating those *_load values, you should remember to decrease them when everything is ready.
Hi, thanks for pointing that out. The "by bookmark_status_display" was indeed unneeded, as I'm specifying which status it is in the query; hence the actual query should be:

| sseanalytics 'bookmark'
| where bookmark_status="bookmarked"
| stats count(bookmark_status_display) AS "Bookmark Status"

Once taking that into consideration, I was able to use the following for the result:

| rest /services/saved/searches
| search alert_type!="always" AND action.email.to="production@email.com" AND title!="*test*"
| stats count(action.email.to) AS "Count"
| appendcols [sseanalytics 'bookmark' | where bookmark_status="successfullyImplemented" | stats count(bookmark_status_display) AS "Bookmark Status"]
| eventstats values(Count) as Count
| eval diff = 'Bookmark Status' - Count
| table diff

Thank you 100!
Good afternoon everyone! Helping a client set up Splunk SAML for the first time. We have confirmed that the SAML IDP is successfully sending all necessary attributes in the assertion and Splunk is consuming them. We are getting the following error, though. I've included all logs leading up to the final ERROR:

08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use RSA-SHA256, RSA-SHA384, or RSA-SHA512 for 'signatureAlgorithm' rather than 'RSA-SHA1'
08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use RSA-SHA256, RSA-SHA384, or RSA-SHA512 for 'inboundSignatureAlgorithm' rather than 'RSA-SHA1'
08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use SHA256, SHA384, or SHA512 for 'inboundDigestMethod' rather than 'SHA1'
08-19-2024 15:43:55.859 +0000 INFO SAMLConfig [25929 webui] - Skipping :idpCert.pem because it does not begin with idpCertChain_ when populating idpCertChains
08-19-2024 15:43:55.859 +0000 INFO SAMLConfig [25929 webui] - No valid value for 'saml_negative_cache_timeout'. Defaulting to 3600
08-19-2024 15:43:55.860 +0000 INFO SAMLConfig [25929 webui] - Both AQR and AuthnExt are disabled, setting _shouldCacheSAMLUserInfotoDisk=true
08-19-2024 15:43:55.860 +0000 INFO AuthenticationProviderSAML [25929 webui] - Writing to persistent storage for user= name=splunktester@customerdomain.com email=splunktester@customerdomain.com roles=user stanza=userToRoleMap_SAML
08-19-2024 15:43:55.860 +0000 ERROR ConfPathMapper [25929 webui] - /opt/splunk/etc/system/local: Setting /nobody/system/authentication/userToRoleMap_SAML = user::splunktester@customerdomain.com::splunktester@customerdomain.com: Unsupported path or value
08-19-2024 15:43:55.873 +0000 ERROR HttpListener [25929 webui] - Exception while processing request from 10.10.10.10:58723 for /saml/acs: Data could not be written: /nobody/system/authentication/userToRoleMap_SAML: user::splunktester@customerdomain.com::splunktester@customerdomain.com trace="[0x0000556C45CBFC98] "? (splunkd + 0x1E9CC98)";[0x0000556C48F53CBE] "_ZN10TcpChannel11when_eventsE18PollableDescriptor + 606 (splunkd + 0x5130CBE)";[0x0000556C48EF74FE] "_ZN8PolledFd8do_eventEv + 126 (splunkd + 0x50D44FE)";[0x0000556C48EF870A] "_ZN9EventLoop3runEv + 746 (splunkd + 0x50D570A)";[0x0000556C48F4E46D] "_ZN19Base_TcpChannelLoop7_do_runEv + 29 (splunkd + 0x512B46D)";[0x0000556C467D457C] "_ZN17SplunkdHttpServer2goEv + 108 (splunkd + 0x29B157C)";[0x0000556C48FF85EE] "_ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x51D55EE)";[0x0000556C48FF86FB] "_ZN6Thread8callMainEPv + 139 (splunkd + 0x51D56FB)";[0x00007F4744F58EA5] "? (libpthread.so.0 + 0x7EA5)";[0x00007F4743E83B0D] "clone + 109 (libc.so.6 + 0xFEB0D)""

The web page displays:

Data could not be written: /nobody/system/authentication/userToRoleMap_SAML: user::splunktester@customerdomain.com::splunktester@customerdomain.com The server had an unexpected

I haven't been able to find anything online about this. Some posts have hinted at permission errors on .conf files. I know this can be caused by either the splunk service not running as the correct user and/or the .conf file not having the correct perms.
Hi, how do I get the difference between the timestamps? I want to know the difference between the starting timestamp and the completed timestamp.

"My base query"
| rex "status:\s+(?<Status>.*)\"}"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| rex "Path\:\s+(?<ResourcePath>.*)\""
| eval timestamp_s = timestamp/1000
| eval human_readable_time = strftime(timestamp_s, "%Y-%m-%d %H:%M:%S")
| transaction Message_Id startswith="Starting execution for request" endswith="Successfully completed execution"

RAW_LOG
8/19/24 9:56:05.113 AM
{"id":"38448254623555555", "timestamp":1724079365113, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Startingexecutionforrequest:f34444-22222-44444-999999-0888888"}
{"id":"38448254444444444", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Methodcompletedwithstatus:200"}
{"id":"38448222222222222", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Successfullycompletedexecution"}
{"id":"38417111111111111", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) AWS Integration Endpoint RequestId :f32222-22222-44444-999999-0888888"}