Congratulations are due to the winners of Splunk's first-ever Community Dashboard Challenge! Read on for the details of these winning dashboards.

Alexander Romanauskas

This dashboard monitors the productivity of chickens in the coop. It includes daily weather and sunrise/sunset data input via REST API, along with records of the daily amounts of food and water consumed and the number of eggs collected.

Martin Hettervik

These dashboards are part of an app that Martin created to visualize Nessus security scans in Splunk. Inspired by the Tenable App for Splunk, Martin aimed to enhance visualizations for better data comprehension and navigation. The first dashboard provides an overview of all vulnerability scans, using color coding to differentiate severity levels. It highlights environments and hosts with the most vulnerabilities, shows the types of prevalent vulnerabilities, the period of scan data, and the number of scanned networks. The drilldown dashboards offer detailed views of vulnerabilities per host, with clickable tables for more information about specific hosts and direct links to the Tenable website for detailed vulnerability information. Users can filter vulnerabilities by severity and other criteria. The dashboards integrate with the Splunk ES asset list for sorting vulnerabilities by business group or environment and include a lookup feature for "ignored vulnerabilities," allowing users to exclude specific vulnerabilities from the dashboards.

Vijeta Galani

This dashboard monitors critical cyber and infrastructure cloud applications using website pings and RSS feeds. It gives an overview of application status and includes a drilldown action that shows a detailed timeline for each application. The first panel shows whether a website is up and running; it also captures slow responses and displays them under the Warning count. If a website is down or returns a status other than 200, it is counted under Error. The second panel displays the RSS feed within the given timeframe for applications that are critical to day-to-day operations. Clicking a non-zero entry in the website status count displays details of the applications in error/warning/OK status, along with the response code and a trendline.

Chris Kaye

This dashboard builds on the Splunk tutorial data set and uses pan and zoom to set timeframes for a drilldown into the state of request events, allowing the user to easily investigate anomalies in response data, showing and hiding drilldown panels as the user pursues their investigation. It also uses hidden capabilities of standard charts to help visualise recent data compared to historic data. CSS is also used to enhance the visual impact of the dashboard.

Mike Wang

The Risky Signin Analytic Dashboard presents risky sign-in events from Azure AD (now known as Entra ID) in an easily understandable visual display. It correlates multiple valuable data points in a risky sign-in event investigation, lists common sign-in attributes for comparison with rare sign-in attributes, and describes threat activities as clearly and systematically as possible. For example, highlighting risky accounts with both sign-in failures and successes and ordering sign-in events by time can help identify potential impossible-travel activity. The Risky Signin Analytic Dashboard can be used to:

Monitor for unusual risky account sign-in activities.
Analyze risky account events with both successful and failed sign-ins.
Serve as an auxiliary tool for Entra ID incident/alert event analysis.
Strengthen the security posture of identity verification by reviewing risky sign-in events, for example by optimizing the settings of Conditional Access Policy and checking whether the account has multi-factor authentication enabled.

Our Thanks & Feedback

Congratulations to all the winners! Your outstanding work has not only earned you recognition but also contributed to the growth and knowledge of the entire Splunk Community. We would also like to extend our thanks to our panel of judges for their time and effort in evaluating the entries, as well as to all community members for their support and engagement. For folks who participated, as well as those who would like to participate in future challenges, submit your feedback here.
Good afternoon everyone!

Helping a client set up Splunk SAML for the first time. We have confirmed that the SAML IdP is successfully sending all necessary attributes in the assertion and that Splunk is consuming them. We are getting the following error, though. I've included all logs leading up to the final ERROR:

08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use RSA-SHA256, RSA-SHA384, or RSA-SHA512 for 'signatureAlgorithm' rather than 'RSA-SHA1'
08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use RSA-SHA256, RSA-SHA384, or RSA-SHA512 for 'inboundSignatureAlgorithm' rather than 'RSA-SHA1'
08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use SHA256, SHA384, or SHA512 for 'inboundDigestMethod' rather than 'SHA1'
08-19-2024 15:43:55.859 +0000 INFO SAMLConfig [25929 webui] - Skipping :idpCert.pem because it does not begin with idpCertChain_ when populating idpCertChains
08-19-2024 15:43:55.859 +0000 INFO SAMLConfig [25929 webui] - No valid value for 'saml_negative_cache_timeout'. Defaulting to 3600
08-19-2024 15:43:55.860 +0000 INFO SAMLConfig [25929 webui] - Both AQR and AuthnExt are disabled, setting _shouldCacheSAMLUserInfotoDisk=true
08-19-2024 15:43:55.860 +0000 INFO AuthenticationProviderSAML [25929 webui] - Writing to persistent storage for user= name=splunktester@customerdomain.com email=splunktester@customerdomain.com roles=user stanza=userToRoleMap_SAML
08-19-2024 15:43:55.860 +0000 ERROR ConfPathMapper [25929 webui] - /opt/splunk/etc/system/local: Setting /nobody/system/authentication/userToRoleMap_SAML = user::splunktester@customerdomain.com::splunktester@customerdomain.com: Unsupported path or value
08-19-2024 15:43:55.873 +0000 ERROR HttpListener [25929 webui] - Exception while processing request from 10.10.10.10:58723 for /saml/acs: Data could not be written: /nobody/system/authentication/userToRoleMap_SAML: user::splunktester@customerdomain.com::splunktester@customerdomain.com trace="[0x0000556C45CBFC98] "? (splunkd + 0x1E9CC98)";[0x0000556C48F53CBE] "_ZN10TcpChannel11when_eventsE18PollableDescriptor + 606 (splunkd + 0x5130CBE)";[0x0000556C48EF74FE] "_ZN8PolledFd8do_eventEv + 126 (splunkd + 0x50D44FE)";[0x0000556C48EF870A] "_ZN9EventLoop3runEv + 746 (splunkd + 0x50D570A)";[0x0000556C48F4E46D] "_ZN19Base_TcpChannelLoop7_do_runEv + 29 (splunkd + 0x512B46D)";[0x0000556C467D457C] "_ZN17SplunkdHttpServer2goEv + 108 (splunkd + 0x29B157C)";[0x0000556C48FF85EE] "_ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x51D55EE)";[0x0000556C48FF86FB] "_ZN6Thread8callMainEPv + 139 (splunkd + 0x51D56FB)";[0x00007F4744F58EA5] "? (libpthread.so.0 + 0x7EA5)";[0x00007F4743E83B0D] "clone + 109 (libc.so.6 + 0xFEB0D)""

The web page displays:

Data could not be written: /nobody/system/authentication/userToRoleMap_SAML: user::splunktester@customerdomain.com::splunktester@customerdomain.com The server had an unexpected

I haven't been able to find anything online about this. Some posts have hinted at permission errors on .conf files. I know this can be caused either by the Splunk service not running as the correct user and/or by the .conf file not having the correct permissions.
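Since the error points at a failed write under /opt/splunk/etc/system/local, an ownership check is a reasonable first step. A minimal sketch, assuming the default install path and that splunkd should run as the OS user splunk (adjust user/group to your environment):

# verify who owns the conf file Splunk is trying to write
ls -l /opt/splunk/etc/system/local/authentication.conf
# check which OS user splunkd is actually running as
ps -o user= -p $(pgrep -o splunkd)
# if ownership is wrong, fix it recursively and restart (assumes user/group 'splunk')
sudo chown -R splunk:splunk /opt/splunk/etc
sudo -u splunk /opt/splunk/bin/splunk restart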
Hi, how do I get the difference between the timestamps? I want to know the difference between the starting timestamp and the completed timestamp.

"My base query"
| rex "status:\s+(?<Status>.*)\"}"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| rex "Path\:\s+(?<ResourcePath>.*)\""
| eval timestamp_s = timestamp/1000
| eval human_readable_time = strftime(timestamp_s, "%Y-%m-%d %H:%M:%S")
| transaction Message_Id startswith="Starting execution for request" endswith="Successfully completed execution"

RAW_LOG

8/19/24 9:56:05.113 AM
{"id":"38448254623555555", "timestamp":1724079365113, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Startingexecutionforrequest:f34444-22222-44444-999999-0888888"}
{"id":"38448254444444444", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Methodcompletedwithstatus:200"}
{"id":"38448222222222222", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Successfullycompletedexecution"}
{"id":"38417111111111111", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) AWS Integration Endpoint RequestId :f32222-22222-44444-999999-0888888"}
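For what it's worth, the transaction command already computes a duration field (the seconds between the first and last event in each transaction), so no manual subtraction is needed. A minimal sketch, reusing the Message_Id extraction above; note that the raw messages contain no spaces, so the startswith/endswith strings are assumed to need that same run-together form:

"My base query"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| transaction Message_Id startswith="Startingexecutionforrequest" endswith="Successfullycompletedexecution"
| eval duration_readable = tostring(duration, "duration")
| table Message_Id duration duration_readable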
Hi,

I get this error in our Splunk dashboards since I migrated Splunk to version 9.2.2. I was able to bypass the error in the dashboards by updating internal_library_settings and unrestricting the library requirements, but this increases the risk/vulnerability of the environment.

require(['jquery', 'underscore', 'splunkjs/mvc', 'util/console'], function($, _, mvc, console) {
    function setToken(name, value) {
        console.log('Setting Token %o=%o', name, value);
        var defaultTokenModel = mvc.Components.get('default');
        if (defaultTokenModel) {
            defaultTokenModel.set(name, value);
        }
        var submittedTokenModel = mvc.Components.get('submitted');
        if (submittedTokenModel) {
            submittedTokenModel.set(name, value);
        }
    }
    $('.dashboard-body').on('click', '[data-set-token],[data-unset-token],[data-token-json]', function(e) {
        e.preventDefault();
        var target = $(e.currentTarget);
        var setTokenName = target.data('set-token');
        if (setTokenName) {
            setToken(setTokenName, target.data('value'));
        }
        var unsetTokenName = target.data('unset-token');
        if (unsetTokenName) {
            setToken(unsetTokenName, undefined);
        }
        var tokenJson = target.data('token-json');
        if (tokenJson) {
            try {
                if (_.isObject(tokenJson)) {
                    _(tokenJson).each(function(value, key) {
                        if (value == null) {
                            // Unset the token
                            setToken(key, undefined);
                        } else {
                            setToken(key, value);
                        }
                    });
                }
            } catch (err) {
                console.warn('Cannot parse token JSON: ', err);
            }
        }
    });
});

The above code is what I have for one of the dashboards. I'm not sure how to check the jQuery version used by the code, but if there is a way to fix the above code so that it meets the requirements of jQuery 3.5, I'll implement the same for the other code as well.

Thanks,
Pravin
Looks like app version 1.0.15 was submitted back in May '24 and failed vetting due to a minor issue with the version of Add-on Builder.

check_for_addon_builder_version
The Add-on Builder version used to create this app (4.1.3) is below the minimum required version of 4.2.0. Re-generate your add-on using at least Add-on Builder 4.2.0.
File: default/addon_builder.conf
Line Number: 4

Once these issues are remedied you can resubmit your app for review.

We have people who would like to use this app in Splunk Cloud, and if the developer could update the vetting, that would be great.

Best,
Splunk Add-on for AWS not working for CloudWatch logging. I have the Splunk Add-on for AWS installed on my Splunk search head. I am able to authenticate to CloudWatch and pull logs. It was working fine, but for the last couple of days we have not been getting logs. I see no errors in the logs, and events are being stored with an old timestamp if I compare _indextime vs _time. Earlier this was not the case; it was up to date. I don't see any errors related to lag or the like.

Splunk version: 9.2.1
Splunk Add-on for AWS: 7.3.0

I checked that this add-on version is compatible with Splunk 9.2.1. Sharing a snapshot which displays the _indextime and _time difference. I tried disabling/enabling the inputs, but that also didn't help. What are the props being used for aws:cloudwatchlogs, and what is the standard from CloudWatch? Will it have an impact if someone has defined a random format or custom timestamp for their Lambda or Glue job CloudWatch events?
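One way to quantify the backlog described here is to chart the gap between index time and event time per source. A minimal SPL sketch, assuming the events land in an index named aws (adjust index and sourcetype to your environment):

index=aws sourcetype=aws:cloudwatchlogs
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag by source
| sort - max_lag

A steadily growing max_lag points at an ingestion delay upstream; a large but constant lag is more consistent with timestamp extraction picking the wrong field.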
Hi,  I need to update an sso_error HTML file in Splunk, but I'm not sure of the best approach. Could anyone provide guidance on how to do this? Thanks in advance for your assistance. 
The task guide for the Forage job sim states this:

For example, to add “Count by category” to your dashboard, type out sourcetype="fraud_detection.csv" | top category in the search field. This action counts the number in each category.

Yet I am guessing Splunk has been updated since the task guide was created, because the search doesn't register the command. I have tried others but am not receiving the desired results. Does anyone know about this, or a different command that gives me a valid bar chart in the visualization?
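A stats/sort pair produces essentially the same bar-chart-friendly output as top and is a useful cross-check when top appears not to work. A sketch, assuming the sourcetype name from the task guide is correct for your data:

sourcetype="fraud_detection.csv"
| stats count by category
| sort - count

If this also returns nothing, the problem is likely the sourcetype value or the time range rather than the top command itself.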
Dear Members,

I'm new to Splunk. I'm trying to forward the RHEL logs to the indexer. I've done all the necessary configuration to forward the logs, but they weren't received on the indexer. When I checked the status of the forward server using the ./splunk list forward-server command, it was showing inactive because of some file ownership issue:

Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"

So I ran the command below to change the file ownership:

sudo chown -R splunkfwd:splunkfwd /opt/splunkforwarder

After executing the above command, I'm still not receiving the logs. Moreover, when I again try to run ./splunk list forward-server to check the status of the forwarder, it asks me to enter a username and password again, and when I enter the username and password it shows login failed.

NOTE: I've tried to log in using both the root and splunk users, but neither of these worked. Please help me out with this; what should I do to make it work?

Thank you.
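A minimal troubleshooting sketch, assuming the default forwarder path and that the service should run as splunkfwd. Note that the login prompt here is for the forwarder's local Splunk admin account, not an OS user, so root/splunk OS credentials won't work:

# run the CLI as the same OS user that owns the installation
sudo -u splunkfwd /opt/splunkforwarder/bin/splunk restart
sudo -u splunkfwd /opt/splunkforwarder/bin/splunk list forward-server

If the local admin password is unknown, the usual recovery is to stop the forwarder, move $SPLUNK_HOME/etc/passwd aside, and seed a new password via a user-seed.conf before restarting; check the docs for your version before doing this.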
Hi All, I wanted to know whether AppDynamics can monitor Salesforce in any way at all. I saw some posts mentioning a manual injector for end-user monitoring on the Salesforce frontend, but are there any more details we can capture from Salesforce? Please share your experience if anyone has tried out any custom monitoring. I am looking for ways to get close to APM-style monitoring metrics.
I am using HEC to receive various logs from Firehose. The HEC token is allowed to use the index names aws and palo_alto, and the default index is set to aws. All the logs coming in through HEC are assigned the default index aws and the default sourcetype aws:firehose.

I am using the config below to change the sourcetype and index name of the logs.

props.conf

[source::syslog:dev/syslogng/*]
TRANSFORMS-hecpaloalto = hecpaloalto, hecpaloalto_in
disabled = false

transforms.conf

[hecpaloalto]
REGEX = (.*)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pan:log

[hecpaloalto_in]
REGEX = (.*)
DEST_KEY = _MetaData:Index
FORMAT = palo_alto

The sourcetype has changed to pan:log as intended, but the index name is still displayed as aws instead of changing to palo_alto. The HEC config has the default index aws, and the selected indexes are aws and palo_alto. Is there anything wrong in my config?
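One way to confirm the stanza is actually being picked up on the instance that parses the data (index-time transforms only fire where parsing happens) is btool; a sketch, assuming shell access to a heavy forwarder or on-prem instance carrying the same config:

$SPLUNK_HOME/bin/splunk btool props list "source::syslog:dev/syslogng/*" --debug
$SPLUNK_HOME/bin/splunk btool transforms list hecpaloalto_in --debug

The --debug flag shows which app each setting is resolved from, which makes precedence conflicts (such as another transform rewriting the index later) easier to spot.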
We are trying to see why my application is taking so long, as the end-to-end latency time in the business transaction snapshots shows a high value. We are unable to drill down into the latency time when viewing its details. Is there a way to see a full drill-down view of all the calls involved? We can only see the execution time, not the latency time, in AppDynamics. Screenshots attached.
Here is my sample log:

2024-07-08T04:43:32.468537+00:00 dxx1-dbxxxs.xxx.net MSSQLSERVER[0] {"EventTime":"2024-07-08 04:43:32","Hostname":"dx1-dbxxxs.xxx.net","Keywords":45035996273704960,"EventType":"AUDIT_SUCCESS","SeverityValue":2,"Severity":"INFO","EventID":44444,"SourceName":"MSSQLSERVER","Task":5,"RecordNumber":1234343410,"ProcessID":0,"ThreadID":0,"Channel":"Application","Message":"Audit event:lkjfd:sdfkjhf:Askjhdfsdf","Category":"None","EventReceivedTime":"2024-07-08 04:43:32","SourceModuleName":"default-inputs","SourceModuleType":"im_msvistalog"}#015

Here is my config:

props.conf

[dbtest:test]
# my sourcetype
TRANSFORMS-extract_kv_pairs = extract_json_data

transforms.conf

[extract_json_data]
REGEX = "(\w+)":"?([^",}]+)"?
FORMAT = $1::$2
WRITE_META = true

The same regex is working in regex101; here is the test link: https://regex101.com/r/rt3bly/1. I am not sure why it's not working in my log extraction. Any help is highly appreciated. Thanks.
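Worth noting when reproducing this: TRANSFORMS- stanzas are index-time, so they only take effect on the instance that parses the data (indexer or heavy forwarder) and only for data indexed after the restart. A quick way to validate the regex itself, independent of the parsing pipeline, is a search-time REPORT- variant; a sketch reusing the same transform, assuming it is deployed to the search head:

props.conf (search-time sketch)

[dbtest:test]
REPORT-extract_json = extract_json_data

transforms.conf

[extract_json_data]
REGEX = "(\w+)":"?([^",}]+)"?
FORMAT = $1::$2

If the fields appear at search time but not with the index-time version, the regex is fine and the problem is where the index-time config lives.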
Hello Guys, We are using Splunk Cloud and have created multiple HEC tokens for different products. We noticed that events coming in through HEC always have "xxx.splunkcloud.com" as the value of the host field. Is there a way to assign different hostnames to different products? Thanks & Regards, Iris
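The HEC event endpoint accepts a per-event host field in the JSON payload, which overrides the default host configured on the token. A minimal sketch, with a hypothetical token, stack name, and hostname:

curl -k "https://http-inputs-xxx.splunkcloud.com/services/collector/event" \
  -H "Authorization: Splunk <your-token>" \
  -d '{"host": "product-a", "sourcetype": "my:sourcetype", "event": {"msg": "hello"}}'

If the sender (Firehose or another product) cannot set the field per event, a separate HEC token per product with a different default host serves the same purpose.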
Hello everyone, I installed and configured the Splunk Forwarder on a machine. While the logs are being forwarded to Splunk, I’ve noticed that some data is missing from the logs that are coming through. Could this issue be related to specific configurations that need to be adjusted on the forwarder, or is it possible that the problem is coming from the machines themselves? If anyone has experienced something similar or has insights on how to address this, I would greatly appreciate your advice. Thank you in advance for your help! Best regards,
Hi all, How can this be fixed? Thanks for your help on this,
Hello Guys, We have Palo Alto firewalls with different timezone settings. For the ones that are not in the same timezone as Splunk, their logs are treated as logs from the future and hence cannot be searched in Splunk in a timely manner. I cannot fix this by specifying a timezone on the source types provided by the Palo Alto TA, since that cannot accommodate multiple time zones at the same time. I wonder if you have experienced a similar problem; if yes, would you please share your experience handling this kind of issue? Thanks much for your help in advance! Regards, Iris
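Splunk's TZ setting can be scoped to host patterns rather than source types, which sidesteps the one-timezone-per-sourcetype limitation. A props.conf sketch with hypothetical firewall hostnames (deploy it where parsing happens; host:: stanzas take precedence over sourcetype stanzas for the same setting):

[host::pa-fw-emea-*]
TZ = Europe/Berlin

[host::pa-fw-apac-*]
TZ = Asia/Singapore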
I've got a data set which collects data every day, but for my graph I'd like to compare the time selected to the same duration 24 hours before.

I can get the query to do the comparison, but I want to be able to show only the timeframe selected in the timepicker, i.e. the last 30 mins, rather than the full -48 hours etc.

Below is the query I've used:

index=naming version=2.2.* metric="playing" earliest=-36h latest=now
| dedup _time, _raw
| timechart span=1h sum(value) as value
| timewrap 1d
| rename value_latest_day as "Current 24 Hours", value_1day_before as "Previous 24 Hours"
| foreach * [eval <<FIELD>>=round(<<FIELD>>, 0)]

This is the base query I've used. For a different version I have done a join; however, that takes a bit too long. Ideally I want to be able to filter the above data (as it's quite quick to load) but only for the time picked in the time picker.

Thanks,
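In a dashboard, one pattern is to keep the wide hardcoded window for the timewrap computation and then trim the displayed rows using the time picker's earliest token. A sketch, assuming a time input with the hypothetical token name time_tok whose earliest value is a relative modifier such as -30m (it would need extra handling if the picker returns an epoch value):

index=naming version=2.2.* metric="playing" earliest=-48h latest=now
| dedup _time, _raw
| timechart span=1h sum(value) as value
| timewrap 1d
| rename value_latest_day as "Current 24 Hours", value_1day_before as "Previous 24 Hours"
| foreach * [eval <<FIELD>>=round(<<FIELD>>, 0)]
| where _time >= relative_time(now(), "$time_tok.earliest$")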
Hi all,

I'm trying to use the app by Baboon - Monitoring of Java Virtual Machines with JMX. I get an error when I click on data inputs:

Oops. Page not found! Click here to return to Splunk homepage.

Would I need to activate the app first?
Hi,

For a few days now, my Splunk Dashboard shortcut has been displaying an error when I connect with the administrator account. But when I use another account with fewer privileges via LDAP authentication, I don't get this error; the page displays fine.

Do you have any idea what the problem is? Thanks for your help.