All Topics

Hello everyone,

I have created a dashboard that shows total log volumes for different sources across 7 days, using a line chart with trellis. As shown in the picture, I want to add the median/average log volume as a horizontal red line. Is there a way to achieve this? The final aim is to be able to observe the pattern and the median/average log volume of a certain week, which ultimately helps define a baseline log volume for each source.

Below is the SPL I am using:

| tstats count as log_count where index=myindex AND hostname="colla" AND source=* earliest=-7d@d latest=now by _time, source
| timechart span=1d sum(log_count) by source

Any suggestions would be highly appreciated. Thanks
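One common approach (not the only one) is to compute the average alongside the daily totals and render it as a chart overlay. A minimal sketch for a single source, reusing the index and hostname from the search above; the source value "/var/log/app.log" and the field names daily_volume and weekly_avg are placeholders:

| tstats count as log_count where index=myindex AND hostname="colla" AND source="/var/log/app.log" earliest=-7d@d latest=now by _time span=1d
| timechart span=1d sum(log_count) as daily_volume
| eventstats avg(daily_volume) as weekly_avg

Selecting weekly_avg as a chart overlay (Format > Chart Overlay) draws it as a flat horizontal line across the week; with a trellis split by source, the averaging step would need to be done per source, which is a bit more involved.
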
Hi, I am trying to get a list of all users that hit our AI rule and see whether this increases or decreases over a timespan of 90 days. I want to see the application they use, with the last three months displayed as columns and a count of users per month. Example below:

Applications    June (Month 1)    July (Month 2)    August (Month 3)
chatGPT         213               233               512

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
| table user, app, date_month
```| dedup user, app, date_month```
| stats count by date_month, app
| sort date_month, app 0
| rename count as "Number of Users"
| table date_month, app, "Number of Users"
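A minimal sketch of one way to pivot the months into columns, assuming the same index/sourcetype/rule and that distinct users (rather than raw event counts) are what should be counted per month; the 90-day window and the month format string are illustrative:

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access earliest=-90d@d
| eval month=strftime(_time, "%Y-%m")
| chart dc(user) over app by month
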
I've been out of touch with core Splunk for some time, so I'm just checking whether there are options for the requirement below. The organisation is preparing an RFP for various big data products and needs:

- A multi-cloud design for various applications. Applications (and thus data) reside in AWS/Azure/GCP, in multiple regions within Europe.
- Low egress cost, so aggregating data into the one cloud where Splunk is predominantly installed is out of the question.
- 'Data nodes' (indexer clusters or data clusters) in each cloud provider where the application/data resides.
- A search head cluster (cross-cloud search) spun up in the main provider (e.g. AWS), which can then search ALL of these remote 'data nodes'.

Is this design feasible in Splunk? (I understand the Mothership add-on, but my last encounter with it at enterprise scale was not that great.) Looking for something like below, with low latency.
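Not a judgement on the egress or latency constraints, but for reference, a central search head can search remote indexers in other environments through ordinary distributed search by adding them as search peers; a sketch of the CLI form, with placeholder host names and credentials:

splunk add search-server https://idx1.azure-eu.example.com:8089 -auth admin:localpassword -remoteUsername admin -remotePassword remotepassword

Federated Search is the other mechanism usually discussed for this topology; whether either meets the cross-cloud latency goal would have to be tested.
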
I have tried using the prompt with my email. The email says: "To execute the requested action, deny or delegate, click here https://10.250.74.118:8443/approval/14". Acting on it requires entering the web UI and finding that particular prompt. If I have 10,000 prompts, I cannot quickly find the event related to the email. Is it possible to use the REST API to post a prompt decision to SOAR for a particular event?
Hello,

I am having an issue getting the Windows performance "Velocity SD Service Counters" logs. I used:

[perform://Velocity SD Service Counters]
counter=*
disable==0
instances=*
object=Velocity SD Service Counters
mode=multikv
showZeroValue=1
index=windows

But I am not getting events. Any recommendation will be highly appreciated!
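For comparison, a sketch of what a working perfmon stanza in inputs.conf typically looks like; the object name is assumed to be exactly as it appears in Windows Performance Monitor and the interval value is illustrative. Note the spelling differences from the stanza above (perfmon:// rather than perform://, counters and disabled rather than counter and disable==):

[perfmon://Velocity SD Service Counters]
object = Velocity SD Service Counters
counters = *
instances = *
interval = 60
mode = multikv
showZeroValue = 1
disabled = 0
index = windows
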
Hello,

Is there a way to add third-party Python modules to the Add-on Builder? I am trying to create a Python script in the Add-on Builder, but it looks like I need to use a module that is not included with it. Thanks for any help on this.

Tom
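One pattern that often comes up (a sketch, not an official Add-on Builder feature): copy the package into a lib folder under the add-on's bin directory and put that folder on sys.path before importing it. The package name requests and the lib path are examples only:

# e.g. pip install -t $SPLUNK_HOME/etc/apps/<your_addon>/bin/lib requests
import os
import sys

# Make the bundled lib directory importable before the third-party import.
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "lib"))

import requests  # resolved from <your_addon>/bin/lib
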
Hi All,

I have 20+ panels in a Studio dashboard. Per a customer requirement, they want only 5 panels per page. Could you please help with the JSON code to segregate the panels? For example, clicking the first dot should display only the first 5 panels, clicking the next dot should display the next 5 panels, and so on.
Good afternoon everyone!

I'm helping a client set up Splunk SAML for the first time. We have confirmed that the SAML IdP is successfully sending all necessary attributes in the assertion and Splunk is consuming them. We are getting the following error, though. I've included all logs leading up to the final ERROR:

08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use RSA-SHA256, RSA-SHA384, or RSA-SHA512 for 'signatureAlgorithm' rather than 'RSA-SHA1'
08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use RSA-SHA256, RSA-SHA384, or RSA-SHA512 for 'inboundSignatureAlgorithm' rather than 'RSA-SHA1'
08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use SHA256, SHA384, or SHA512 for 'inboundDigestMethod' rather than 'SHA1'
08-19-2024 15:43:55.859 +0000 INFO SAMLConfig [25929 webui] - Skipping :idpCert.pem because it does not begin with idpCertChain_ when populating idpCertChains
08-19-2024 15:43:55.859 +0000 INFO SAMLConfig [25929 webui] - No valid value for 'saml_negative_cache_timeout'. Defaulting to 3600
08-19-2024 15:43:55.860 +0000 INFO SAMLConfig [25929 webui] - Both AQR and AuthnExt are disabled, setting _shouldCacheSAMLUserInfotoDisk=true
08-19-2024 15:43:55.860 +0000 INFO AuthenticationProviderSAML [25929 webui] - Writing to persistent storage for user= name=splunktester@customerdomain.com email=splunktester@customerdomain.com roles=user stanza=userToRoleMap_SAML
08-19-2024 15:43:55.860 +0000 ERROR ConfPathMapper [25929 webui] - /opt/splunk/etc/system/local: Setting /nobody/system/authentication/userToRoleMap_SAML = user::splunktester@customerdomain.com::splunktester@customerdomain.com: Unsupported path or value
08-19-2024 15:43:55.873 +0000 ERROR HttpListener [25929 webui] - Exception while processing request from 10.10.10.10:58723 for /saml/acs: Data could not be written: /nobody/system/authentication/userToRoleMap_SAML: user::splunktester@customerdomain.com::splunktester@customerdomain.com trace="[0x0000556C45CBFC98] "? (splunkd + 0x1E9CC98)";[0x0000556C48F53CBE] "_ZN10TcpChannel11when_eventsE18PollableDescriptor + 606 (splunkd + 0x5130CBE)";[0x0000556C48EF74FE] "_ZN8PolledFd8do_eventEv + 126 (splunkd + 0x50D44FE)";[0x0000556C48EF870A] "_ZN9EventLoop3runEv + 746 (splunkd + 0x50D570A)";[0x0000556C48F4E46D] "_ZN19Base_TcpChannelLoop7_do_runEv + 29 (splunkd + 0x512B46D)";[0x0000556C467D457C] "_ZN17SplunkdHttpServer2goEv + 108 (splunkd + 0x29B157C)";[0x0000556C48FF85EE] "_ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x51D55EE)";[0x0000556C48FF86FB] "_ZN6Thread8callMainEPv + 139 (splunkd + 0x51D56FB)";[0x00007F4744F58EA5] "? (libpthread.so.0 + 0x7EA5)";[0x00007F4743E83B0D] "clone + 109 (libc.so.6 + 0xFEB0D)""

The web page displays:

Data could not be written: /nobody/system/authentication/userToRoleMap_SAML: user::splunktester@customerdomain.com::splunktester@customerdomain.com The server had an unexpected

I haven't been able to find anything online about this. Some posts have hinted at permission errors on .conf files. I know this can be caused by the splunk service not running as the correct user and/or the .conf file not having the correct permissions.
Hi, how do I get the difference between the timestamps? I want to know the difference between the starting timestamp and the completed timestamp.

"My base query"
| rex "status:\s+(?<Status>.*)\"}"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| rex "Path\:\s+(?<ResourcePath>.*)\""
| eval timestamp_s = timestamp/1000
| eval human_readable_time = strftime(timestamp_s, "%Y-%m-%d %H:%M:%S")
| transaction Message_Id startswith="Starting execution for request" endswith="Successfully completed execution"

RAW_LOG

8/19/24 9:56:05.113 AM {"id":"38448254623555555", "timestamp":1724079365113, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Startingexecutionforrequest:f34444-22222-44444-999999-0888888"}
{"id":"38448254444444444", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Methodcompletedwithstatus:200"}
{"id":"38448222222222222", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Successfullycompletedexecution"}
{"id":"38417111111111111", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) AWS Integration Endpoint RequestId :f32222-22222-44444-999999-0888888"}
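A minimal sketch of one alternative, assuming the embedded epoch-millisecond timestamp field is the one that matters and that Message_Id is extracted as above (note that transaction also produces a duration field automatically, based on _time):

"My base query"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| eval timestamp_s = timestamp/1000
| stats min(timestamp_s) as start_time, max(timestamp_s) as end_time by Message_Id
| eval elapsed_seconds = end_time - start_time
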
Hi,

I get this error in our Splunk dashboards since I migrated Splunk to version 9.2.2. I was able to bypass the error in the dashboards by updating internal_library_settings and unrestricting the library requirements, but this increases the risk/vulnerability of the environment.

require(['jquery', 'underscore', 'splunkjs/mvc', 'util/console'], function($, _, mvc, console) {
    function setToken(name, value) {
        console.log('Setting Token %o=%o', name, value);
        var defaultTokenModel = mvc.Components.get('default');
        if (defaultTokenModel) {
            defaultTokenModel.set(name, value);
        }
        var submittedTokenModel = mvc.Components.get('submitted');
        if (submittedTokenModel) {
            submittedTokenModel.set(name, value);
        }
    }
    $('.dashboard-body').on('click', '[data-set-token],[data-unset-token],[data-token-json]', function(e) {
        e.preventDefault();
        var target = $(e.currentTarget);
        var setTokenName = target.data('set-token');
        if (setTokenName) {
            setToken(setTokenName, target.data('value'));
        }
        var unsetTokenName = target.data('unset-token');
        if (unsetTokenName) {
            setToken(unsetTokenName, undefined);
        }
        var tokenJson = target.data('token-json');
        if (tokenJson) {
            try {
                if (_.isObject(tokenJson)) {
                    _(tokenJson).each(function(value, key) {
                        if (value == null) {
                            // Unset the token
                            setToken(key, undefined);
                        } else {
                            setToken(key, value);
                        }
                    });
                }
            } catch (e) {
                console.warn('Cannot parse token JSON: ', e);
            }
        }
    });
});

The above code is what I have for one of the dashboards. I'm not sure how to check the jQuery version used by the code, but if there is some way to fix the above code so it meets the requirements of jQuery 3.5, I'll implement the same for the other code as well.

Thanks,
Pravin
It looks like app version 1.0.15 was submitted back in May '24 and failed vetting due to a minor issue with the version of Add-on Builder:

check_for_addon_builder_version
The Add-on Builder version used to create this app (4.1.3) is below the minimum required version of 4.2.0. Re-generate your add-on using at least Add-on Builder 4.2.0.
File: default/addon_builder.conf
Line Number: 4
Once these issues are remedied you can resubmit your app for review.

We have people who would like to use this app in Splunk Cloud, and if the developer could update the vetting that would be great.

Best,
The Splunk Add-on for AWS is not working for CloudWatch logs. I have the Splunk Add-on for AWS installed on my Splunk search head. I am able to authenticate to CloudWatch and pull logs. It was working fine, but for the last couple of days I am not getting logs. I see no errors in the logs, and I can see that events are being stored with an old timestamp when I compare _indextime vs _time. Earlier this was not the case; it was up to date. I don't see any errors related to lag or similar.

Splunk version: 9.2.1
Splunk Add-on for AWS: 7.3.0

I checked that this add-on version is compatible with Splunk 9.2.1. Sharing a snapshot which displays the _indextime and _time difference. I tried disabling/enabling the inputs, but that also didn't help.

What props are being used for aws:cloudwatchlogs, and what is the standard for CloudWatch? Will it have an impact if someone has defined a random format or custom timestamp for their Lambda or Glue job CloudWatch events?
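For reference, if the _time vs _indextime gap turns out to come from events whose bodies carry a non-standard timestamp, one option is a local props.conf override that pins timestamp recognition for the sourcetype; a sketch only, with an illustrative TIME_FORMAT that would have to match the actual Lambda/Glue log layout:

[aws:cloudwatchlogs]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
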
Hi,  I need to update an sso_error HTML file in Splunk, but I'm not sure of the best approach. Could anyone provide guidance on how to do this? Thanks in advance for your assistance. 
The task guide for the Forage job sim states this: "For example, to add 'Count by category' to your dashboard, type out sourcetype="fraud_detection.csv" | top category in the search field. This act…"

Yet I am guessing Splunk has been updated since the task guide was created, because the search doesn't register the command. I have tried others but am not receiving the desired results. Does anyone know about this, or a different command that would give me a valid bar chart in the Visualization tab?
Dear Members,

I'm new to Splunk. I'm trying to forward the RHEL logs to the indexer. I've done all the necessary configuration to forward the logs, but they are not received on the indexer. When I checked the status of the forward-server using the ./splunk list forward-server command, it was showing inactive, because of some file ownership issue:

Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"

So I ran the command below to change the file ownership:

sudo chown -R splunkfwd:splunkfwd /opt/splunkforwarder

After executing the above command, I'm still not receiving the logs. Moreover, when I try to run "./splunk list forward-server" again to check the status of the forwarder, it asks me to enter a username and password, and when I enter them it shows login failed.

NOTE: I've tried to log in using both the root and splunk users, but neither worked. Please help me out with this; what should I do to make it work?

Thank you.
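A small sketch of how the CLI is often run once ownership has been changed to splunkfwd; it assumes the credentials being asked for are the forwarder's own admin account (set when the forwarder was first started), not an OS user:

sudo -u splunkfwd /opt/splunkforwarder/bin/splunk restart
sudo -u splunkfwd /opt/splunkforwarder/bin/splunk list forward-server -auth admin:<forwarder_admin_password>
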
Hi All,

I wanted to know whether AppDynamics can monitor Salesforce in any way at all. I saw some posts mentioning a manual injector for End User Monitoring on the Salesforce frontend, but are there any more details we can capture from Salesforce? Please share your experience if anyone has tried any custom monitoring. I am looking for ways to get close to APM-style metrics.
I am using HEC to receive various logs from Firehose. The HEC token is allowed to use the index names aws and palo_alto, and the default index is set to aws. All the logs coming in through HEC are assigned to the default index aws and the default sourcetype aws:firehose.

I am using the config below to change the sourcetype and index name of the logs:

props.conf
[source::syslog:dev/syslogng/*]
TRANSFORMS-hecpaloalto = hecpaloalto, hecpaloalto_in
disabled = false

transforms.conf
[hecpaloalto]
REGEX = (.*)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pan:log

[hecpaloalto_in]
REGEX = (.*)
DEST_KEY = _MetaData:Index
FORMAT = palo_alto

The sourcetype has changed to pan:log as intended, but the index name is still showing as aws instead of changing to palo_alto. The HEC config has the default index aws, and the selected indexes are aws and palo_alto. Is there anything wrong in my config?
We are trying to see why my application is taking so long, as the end-to-end latency time in business transaction snapshots shows a high value. We are unable to drill down into the latency time when viewing the details. Is there a way to see a full drill-down view of all the calls involved? We can only see the execution time, not the latency time, in AppDynamics. Screenshots attached.
Here is my sample log:

2024-07-08T04:43:32.468537+00:00 dxx1-dbxxxs.xxx.net MSSQLSERVER[0] {"EventTime":"2024-07-08 04:43:32","Hostname":"dx1-dbxxxs.xxx.net","Keywords":45035996273704960,"EventType":"AUDIT_SUCCESS","SeverityValue":2,"Severity":"INFO","EventID":44444,"SourceName":"MSSQLSERVER","Task":5,"RecordNumber":1234343410,"ProcessID":0,"ThreadID":0,"Channel":"Application","Message":"Audit event:lkjfd:sdfkjhf:Askjhdfsdf","Category":"None","EventReceivedTime":"2024-07-08 04:43:32","SourceModuleName":"default-inputs","SourceModuleType":"im_msvistalog"}#015

Here is my config:

props.conf
[dbtest:test] #mysourcetype
TRANSFORMS-extract_kv_pairs = extract_json_data

transforms.conf
[extract_json_data]
REGEX = "(\w+)":"?([^",}]+)"?
FORMAT = $1::$2
WRITE_META = true

The same regex works in regex101; here is the test link: https://regex101.com/r/rt3bly/1. I am not sure why it's not working for my log extraction. Any help is highly appreciated. Thanks
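In case search-time field extraction is what's actually wanted (the post doesn't say whether index-time metadata or search-time fields are the goal, so this is an assumption), the REPORT- form of the same transform is the usual search-time equivalent; a sketch:

props.conf
[dbtest:test]
REPORT-extract_json_data = extract_json_data

transforms.conf
[extract_json_data]
REGEX = "(\w+)":"?([^",}]+)"?
FORMAT = $1::$2
MV_ADD = true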