All Posts


@isoutamo Thank you so much. How can I estimate the time required for replicating the data?
Those steps are fine. If you are updating those *_load values, remember to decrease them again when everything is ready.
Hi, thanks for pointing that out. The "by bookmark_status_display" was indeed unneeded, since I'm specifying which status it is in the query, hence the actual query should be:

| sseanalytics 'bookmark'
| where bookmark_status="bookmarked"
| stats count(bookmark_status_display) AS "Bookmark Status"

Once I took that into consideration, I was able to use the following for the result:

| rest /services/saved/searches
| search alert_type!="always" AND action.email.to="production@email.com" AND title!="*test*"
| stats count(action.email.to) AS "Count"
| appendcols [| sseanalytics 'bookmark' | where bookmark_status="successfullyImplemented" | stats count(bookmark_status_display) AS "Bookmark Status"]
| eventstats values(Count) as Count
| eval diff = 'Bookmark Status' - Count
| table diff

Thank you!
Good afternoon everyone! I'm helping a client set up Splunk SAML for the first time. We have confirmed that the SAML IDP is successfully sending all necessary attributes in the assertion and that Splunk is consuming them. We are getting the following error, though. I've included all logs leading up to the final ERROR:

08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use RSA-SHA256, RSA-SHA384, or RSA-SHA512 for 'signatureAlgorithm' rather than 'RSA-SHA1'
08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use RSA-SHA256, RSA-SHA384, or RSA-SHA512 for 'inboundSignatureAlgorithm' rather than 'RSA-SHA1'
08-19-2024 15:43:55.859 +0000 WARN SAMLConfig [25929 webui] - Use SHA256, SHA384, or SHA512 for 'inboundDigestMethod' rather than 'SHA1'
08-19-2024 15:43:55.859 +0000 INFO SAMLConfig [25929 webui] - Skipping :idpCert.pem because it does not begin with idpCertChain_ when populating idpCertChains
08-19-2024 15:43:55.859 +0000 INFO SAMLConfig [25929 webui] - No valid value for 'saml_negative_cache_timeout'. Defaulting to 3600
08-19-2024 15:43:55.860 +0000 INFO SAMLConfig [25929 webui] - Both AQR and AuthnExt are disabled, setting _shouldCacheSAMLUserInfotoDisk=true
08-19-2024 15:43:55.860 +0000 INFO AuthenticationProviderSAML [25929 webui] - Writing to persistent storage for user= name=splunktester@customerdomain.com email=splunktester@customerdomain.com roles=user stanza=userToRoleMap_SAML
08-19-2024 15:43:55.860 +0000 ERROR ConfPathMapper [25929 webui] - /opt/splunk/etc/system/local: Setting /nobody/system/authentication/userToRoleMap_SAML = user::splunktester@customerdomain.com::splunktester@customerdomain.com: Unsupported path or value
08-19-2024 15:43:55.873 +0000 ERROR HttpListener [25929 webui] - Exception while processing request from 10.10.10.10:58723 for /saml/acs: Data could not be written: /nobody/system/authentication/userToRoleMap_SAML: user::splunktester@customerdomain.com::splunktester@customerdomain.com trace="[0x0000556C45CBFC98] "? (splunkd + 0x1E9CC98)";[0x0000556C48F53CBE] "_ZN10TcpChannel11when_eventsE18PollableDescriptor + 606 (splunkd + 0x5130CBE)";[0x0000556C48EF74FE] "_ZN8PolledFd8do_eventEv + 126 (splunkd + 0x50D44FE)";[0x0000556C48EF870A] "_ZN9EventLoop3runEv + 746 (splunkd + 0x50D570A)";[0x0000556C48F4E46D] "_ZN19Base_TcpChannelLoop7_do_runEv + 29 (splunkd + 0x512B46D)";[0x0000556C467D457C] "_ZN17SplunkdHttpServer2goEv + 108 (splunkd + 0x29B157C)";[0x0000556C48FF85EE] "_ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x51D55EE)";[0x0000556C48FF86FB] "_ZN6Thread8callMainEPv + 139 (splunkd + 0x51D56FB)";[0x00007F4744F58EA5] "? (libpthread.so.0 + 0x7EA5)";[0x00007F4743E83B0D] "clone + 109 (libc.so.6 + 0xFEB0D)""

The web page displays:

Data could not be written: /nobody/system/authentication/userToRoleMap_SAML: user::splunktester@customerdomain.com::splunktester@customerdomain.com The server had an unexpected

I haven't been able to find anything online about this. Some posts have hinted at permission errors on .conf files. I know this can be caused either by the splunk service not running as the correct user and/or by the .conf file not having the correct permissions.
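To narrow down whether the failed write is part of a broader permissions problem, it may help to look for other ConfPathMapper failures in the internal logs. A minimal sketch using standard _internal fields (adjust the time range to your environment):

```
index=_internal sourcetype=splunkd log_level=ERROR (ConfPathMapper OR "Data could not be written")
| stats count latest(_time) AS last_seen by component, host
| convert ctime(last_seen)
```

If these errors cluster on one host, the usual next step is to compare the account splunkd runs as against the owner and mode of $SPLUNK_HOME/etc/system/local and the .conf files inside it.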
Hi, how do I get the difference between the timestamps? I want to know the difference between the starting timestamp and the completed timestamp.

"My base query"
| rex "status:\s+(?<Status>.*)\"}"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| rex "Path\:\s+(?<ResourcePath>.*)\""
| eval timestamp_s = timestamp/1000
| eval human_readable_time = strftime(timestamp_s, "%Y-%m-%d %H:%M:%S")
| transaction Message_Id startswith="Starting execution for request" endswith="Successfully completed execution"

RAW_LOG
8/19/24 9:56:05.113 AM
{"id":"38448254623555555", "timestamp":1724079365113, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Startingexecutionforrequest:f34444-22222-44444-999999-0888888"}
{"id":"38448254444444444", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Methodcompletedwithstatus:200"}
{"id":"38448222222222222", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) Successfullycompletedexecution"}
{"id":"38417111111111111", "timestamp":1724079365126, "message":"(fghhhhhh-244933333-456789-rrrrrrrrrr) AWS Integration Endpoint RequestId :f32222-22222-44444-999999-0888888"}
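One thing worth noting: transaction itself computes a duration field, the difference in seconds between the first and last event of each transaction, so the answer may already be in the results. A minimal sketch building on the query above (assuming the epoch-millisecond timestamp field is auto-extracted from the JSON; also note that in the sample raw log the messages contain no spaces, e.g. "Startingexecutionforrequest", so the startswith/endswith strings must match the actual event text):

```
"My base query"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| eval _time = timestamp/1000
| transaction Message_Id startswith="Startingexecutionforrequest" endswith="Successfullycompletedexecution"
| table Message_Id duration
```

Overriding _time from the embedded timestamp before the transaction gives millisecond precision; with the index-assigned timestamps alone, events falling in the same second would report a duration of 0. An equivalent without transaction is | stats range(_time) AS duration by Message_Id.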
Hi,

I get this error in our Splunk dashboards after I migrated Splunk to version 9.2.2. I was able to bypass the error in the dashboards by updating internal_library_settings and unrestricting the library requirements, but this increases the risk/vulnerability of the environment.

require(['jquery', 'underscore', 'splunkjs/mvc', 'util/console'], function($, _, mvc, console) {
    function setToken(name, value) {
        console.log('Setting Token %o=%o', name, value);
        var defaultTokenModel = mvc.Components.get('default');
        if (defaultTokenModel) {
            defaultTokenModel.set(name, value);
        }
        var submittedTokenModel = mvc.Components.get('submitted');
        if (submittedTokenModel) {
            submittedTokenModel.set(name, value);
        }
    }
    $('.dashboard-body').on('click', '[data-set-token],[data-unset-token],[data-token-json]', function(e) {
        e.preventDefault();
        var target = $(e.currentTarget);
        var setTokenName = target.data('set-token');
        if (setTokenName) {
            setToken(setTokenName, target.data('value'));
        }
        var unsetTokenName = target.data('unset-token');
        if (unsetTokenName) {
            setToken(unsetTokenName, undefined);
        }
        var tokenJson = target.data('token-json');
        if (tokenJson) {
            try {
                if (_.isObject(tokenJson)) {
                    _(tokenJson).each(function(value, key) {
                        if (value == null) {
                            // Unset the token
                            setToken(key, undefined);
                        } else {
                            setToken(key, value);
                        }
                    });
                }
            } catch (e) {
                console.warn('Cannot parse token JSON: ', e);
            }
        }
    });
});

The above code is the one that I have for one of the dashboards. I'm not sure how to check the jQuery version the code targets, but if there is some help to fix the above code so it meets the requirements of jQuery 3.5, I'll implement the same for the other code as well.

Thanks, Pravin
@yuanliu, I see the whole event on a single line when I search for that event, and on the indexer I have this in props.conf:

[load_server]
TRUNCATE=999999
I have the collect search working with eval _raw="field1","field2", ... (Conversion functions - Splunk Documentation). Thank you for pointing me in the right direction, and well done @PickleRick.
Is there anything I need to configure on the host of the remote server I am monitoring? And how can I see/configure which JMX metrics I can collect or visualise?
Appreciate the clarification. I have 30%+ headroom with my license, so a couple of one-time events should not be an issue.
That's pretty normal for Windows events. Not every log has every field. And not every field has a reasonable value each time. This is from my home lab.  
Looks like app version 1.0.15 was submitted back in May '24 and failed vetting due to a minor issue with the version of Add-on Builder:

check_for_addon_builder_version
The Add-on Builder version used to create this app (4.1.3) is below the minimum required version of 4.2.0. Re-generate your add-on using at least Add-on Builder 4.2.0.
File: default/addon_builder.conf
Line Number: 4
Once these issues are remedied you can resubmit your app for review.

We have people who would like to use this app in Splunk Cloud, and it would be great if the developer could update the app and resubmit it for vetting. Best,
I mean that you use up additional license for indexing additional data with the collect command, unless you use the stash or stash_hec sourcetypes. So each event you first index into index A and then search, transform, and collect into index B will cost you roughly twice the license usage of just indexing it into index A (depending on what you do with it in terms of processing before collecting). Whether you're within your license limits or not depends, of course, on the overall amount of ingested data and your license size.
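For completeness, a sketch of the variant that avoids the second license hit (the index and sourcetype names here are hypothetical). Omitting the sourcetype option on collect leaves its default of stash, which is not counted against the license:

```
index=index_a sourcetype=my:data group=B
| collect index=index_b
```

Specifying any other sourcetype on collect (e.g. sourcetype=my:data) makes the collected copy count as newly indexed data, which is the double charge described above.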
Of course you _can_ do search & collect. It's just not something that's typically done, since you'd have to first ingest the data "normally" and then split it with a search into another two indexes (since you don't want group A to see index B and vice versa). And if you wanted to use the original sourcetype (or any sourcetype other than stash or stash_hec), you'd double your license usage. If there is not much data that might be acceptable, but typically it's a waste of perfectly good license, a waste of resources to search, split, and collect, and additional lag on ingest. So that's why you don't typically do it this way. And I don't get why you would want separate apps. Anyway, now you're saying that you want to speed up searches, whereas before you said it was due to access restrictions. And there is definitely something to work on with your data format if you indeed have a mix of various formats within one JSON structure which might or might not be an array; that seems to call for some sanitization process on ingest.
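If the split is really only about access control, the usual alternative to search-and-collect is routing at ingest time with a transform, so each event is indexed exactly once into the right index. A sketch with hypothetical sourcetype, pattern, and index names (this goes on the first full Splunk instance in the ingest path, i.e. a heavy forwarder or the indexers):

```
# props.conf
[my:sourcetype]
TRANSFORMS-route_group_b = route_to_index_b

# transforms.conf
[route_to_index_b]
REGEX = "group"\s*:\s*"B"
DEST_KEY = _MetaData:Index
FORMAT = index_b
```

Events matching the regex land in index_b; everything else stays in the index configured on the input, with no extra license usage and no search-time lag.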
Splunk Add-on for AWS not working for CloudWatch logging. I have the Splunk Add-on for AWS installed on my Splunk search head. I am able to authenticate to CloudWatch and pull logs. It was working fine, but for the last couple of days I have not been getting logs. I see no errors in the logs, and events are being stored with an old timestamp if I compare _indextime vs _time. Earlier this was not the case; it was up to date. I don't see any error related to lag or the like.

Splunk version: 9.2.1
Splunk Add-on for AWS: 7.3.0 (I checked that this version is compatible with Splunk 9.2.1.)

Sharing a snapshot which displays the _indextime & _time difference. I tried disabling/enabling inputs, but that also didn't help. What props are being used for aws:cloudwatchlogs, and what is the standard from CloudWatch? Will it have an impact if someone has defined a random format or custom timestamp for their Lambda or Glue job CloudWatch events?
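To quantify the delay, comparing _indextime against _time over the affected data can show when it started (the index name here is hypothetical):

```
index=my_aws_index sourcetype=aws:cloudwatchlogs
| eval lag_sec = _indextime - _time
| timechart span=1h avg(lag_sec) max(lag_sec)
```

A lag that grows steadily suggests the input is falling behind; a large, roughly constant offset suggests timestamp extraction is picking up the wrong field, which is where a custom timestamp format in a Lambda or Glue job's log lines could indeed matter.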
Hi @BRFZ,
I don't think that's a Splunk issue: check the logs as they were generated. If it were a Splunk issue you might see a truncated event, but not a missing internal part of the event, unless you have a masking policy.
Ciao.
Giuseppe
I was not aware of the licensing implications. Thank you, and I'll stay in compliance.
Hi,  I need to update an sso_error HTML file in Splunk, but I'm not sure of the best approach. Could anyone provide guidance on how to do this? Thanks in advance for your assistance. 
For example, in some events we have the IP address, while in others we just see a dash ("-") or 0, even for the same event ID. Example:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Service Control Manager' Guid='{555908d1-a6d7-4695-8e1e-26931d2012f4}' EventSourceName='Service Control Manager'/><EventID> 4624 </EventID><Version>0</Version><Level>4</Level><Task>0</Task><Opcode>0</Opcode><Keywords>0x8080000000000000</Keywords><TimeCreated SystemTime='2014-04-24T18:38:37.868683300Z'/><EventRecordID>412598</EventRecordID><Correlation/><Execution ProcessID='192' ThreadID='210980'/><Channel>System</Channel> <Computer>TEST</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>S18</Data><Data Name='SubjectUserName'>BOB</Data><Data Name='SubjectDomainName'>GOZ</Data><Data Name='SubjectLogonId'>x0</Data><Data Name='TargetUserSid'>s20</Data><Data Name='TargetUserName'>BOBT</Data><Data Name='TargetDomainName'>TESTTGT</Data><Data Name='TargetLogonId'>x0</Data><Data Name='LogonType'>x</Data><Data Name='LogonProcessName'>usr </Data><Data Name='AuthenticationPackageName'>Negotiate</Data><Data Name='WorkstationName'>tst</Data><Data Name='LogonGuid'>{845152}</Data><Data Name='TransmittedServices'>-</Data><Data Name='LmPackageName'>-</Data><Data Name='KeyLength'>0</Data><Data Name='ProcessId'>mspam</Data><Data Name='ProcessName'>test.ee</Data><Data Name='IpAddress'>x.x.x.x</Data><Data Name='IpPort'>0</Data><Data Name='ImpersonationLevel'>%%1833</Data><Data Name='RestrictedAdminMode'>mlmpknnn</Data><Data Name='TargetOutboundUserName'>-</Data><Data </EventData></Event>

In this example, it's the IP address and port that vary. In some cases we have a specific IP address, while in others it's just a dash ("-"). Similarly for the port: sometimes it shows a dash ("-"), other times a 0, and sometimes the port is correctly specified.
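At search time, those placeholder values can be normalized away so that statistics don't mix real addresses and ports with "-" and 0. A sketch, assuming the fields are extracted as IpAddress and IpPort (the index name is hypothetical):

```
index=wineventlog EventCode=4624
| eval src_ip = if(IpAddress="-" OR IpAddress="0", null(), IpAddress)
| eval src_port = if(IpPort="-" OR IpPort="0", null(), IpPort)
| stats count by src_ip, src_port
```

Note that events where the field held only a placeholder drop out of the by-clause once the value is nulled; use fillnull with an explicit marker such as "unknown" if you would rather keep them.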