All Posts


OK, maybe it is too much Splunk today. Whatever it is, I cannot for the life of me remember how to do this. I am doing a basic search on some logs, and I want to show the search term in the table results. The term is being queried out of _raw:

index=myindex sourcetype=mystuff Environment=thisone "THE_TERM"
| top Environment by userid
| table Environment, userid

Where and how do I add "THE_TERM" to the table results?
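A minimal sketch of one way to do this, assuming the literal term is known at search time: add it back as a field with eval after the top, then include it in the table.

index=myindex sourcetype=mystuff Environment=thisone "THE_TERM"
| top Environment by userid
``` the term is a literal here, so re-attach it as a field ```
| eval search_term="THE_TERM"
| table Environment, userid, search_term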
I still get the rollback error.
Hello, we have two Splunk platforms and we are using _TCP_ROUTING to forward logs. System logs from the 1st platform's indexers are currently indexed locally on those indexers. We also want to receive the system logs from the 1st platform's indexers on our 2nd platform, but there is no default tcpout group on the 1st platform's indexers. So should we create a default outputs.conf on the 1st platform's indexers to continue indexing local system logs? Thanks for your help.
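A sketch of what that could look like on the 1st-platform indexers, assuming the 2nd platform's receivers listen on 9997 (hostnames and the group name are placeholders). indexAndForward keeps indexing locally while also forwarding a copy:

# outputs.conf on the 1st-platform indexers (sketch; hosts/ports are placeholders)
[tcpout]
defaultGroup = platform2
# keep indexing local system logs while forwarding a copy
indexAndForward = true

[tcpout:platform2]
server = idx2a.example.com:9997,idx2b.example.com:9997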
Hi All, thanks for your time. I have a query for getting the number of errors for each client/customer, api_name, time, etc.:

index=index_api
| stats count by customer, api_name, _time

If I have a dataset like the one below, how do I take a snapshot of it and compare it against the next 30-minute dataset?

Client/customer   api_name         _time                count
Abc               Validation_V2    2024 Oct 29 10:30    10
Xyz               Testing_V2       2024 Oct 29 10:30    15
TestCust          Testing_V3       2024 Oct 29 10:30    20

Assuming these are for the last 30 minutes: when I get to the next run, say after 30 minutes, I want to see whether the same dataset is repeated, so that I can get a consecutive error count. Any guidance or helpful suggestions?
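One possible approach, as a sketch (the lookup name api_error_snapshot.csv is a placeholder): write each run's stats to a lookup, and on the next run compare the current window against that snapshot.

index=index_api earliest=-30m@m latest=@m
| stats count AS current_count by customer, api_name
``` pull in the previous run's snapshot and line it up per customer/api ```
| append
    [| inputlookup api_error_snapshot.csv
     | rename count AS prev_count]
| stats values(current_count) AS current_count values(prev_count) AS prev_count by customer, api_name
| eval repeated=if(isnotnull(current_count) AND isnotnull(prev_count), "yes", "no")

A second scheduled search then refreshes the snapshot for the next comparison: index=index_api earliest=-30m@m latest=@m | stats count by customer, api_name | outputlookup api_error_snapshot.csv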
Hello @ITWhisperer,

I would like to pass a base search to panels in a dashboard:

<search id="base_search_1">
  <query>index=$siteid$ sourcetype=log* values IN (Ax01, Ms09) ..... | table *</query>
  <earliest>$time_token.earliest$</earliest>
  <latest>$time_token.latest$</latest>
</search>
<search id="base_search_2">
  <query>index=$siteid$ sourcetype=log* Values IN (*) ..... | table *</query>
  <earliest>$time_token.earliest$</earliest>
  <latest>$time_token.latest$</latest>
</search>

I need to pass base_search_1 to the panel when the input dropdown is set to "All"; when other values are selected in the dropdown, it needs to pass base_search_2. Thanks!

The reason I chose this approach: we have an input dropdown field which may be empty at times, and we are filtering only the first 10,000 records as per need. When the dropdown is set to "All", we have no issues whether the field has values or is empty. But when the dropdown has specific field values to filter, empty field values do not give proper results, so instead of head 10000 we need to filter 10k non-empty values rather than the first 10k. Please also suggest another, more efficient way to do this. Thanks!
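One pattern that avoids trying to switch the base= attribute at runtime (which Simple XML does not support): keep two panels, each bound to its own base search, and toggle them with depends tokens set from the dropdown's <change> handler. A sketch, with token and choice names as placeholders:

<input type="dropdown" token="filter_value">
  <label>Values</label>
  <choice value="ALL">All</choice>
  <change>
    <condition value="ALL">
      <set token="show_all">true</set>
      <unset token="show_filtered"></unset>
    </condition>
    <condition>
      <set token="show_filtered">true</set>
      <unset token="show_all"></unset>
    </condition>
  </change>
</input>

<panel depends="$show_all$">
  <table>
    <search base="base_search_1">
      <query>| table *</query>
    </search>
  </table>
</panel>

<panel depends="$show_filtered$">
  <table>
    <search base="base_search_2">
      <query>| search values=$filter_value$ | table *</query>
    </search>
  </table>
</panel>

Only the visible panel's post-process search runs against its base, so the hidden branch costs nothing extra.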
Came in handy here in Germany after the winter time change.
Hello, I need help creating a search query that catches when our log file shows the same error line on every row. This error code also appears on other days in the same log file, but I don't want those to show up. If nothing other than this error appears in the log file, our app is failing and I need to catch that.

c.q.s.c.StoreHourSyncRestController : *** Sync Busy ***

Please assist. Thank you!
Andie Medalla
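A sketch of one way to alert on that, with index, sourcetype, and window as placeholder assumptions: count the events in the window and fire only when every event is the Sync Busy line.

index=your_index sourcetype=your_sourcetype earliest=-15m@m latest=@m
``` flag the error line, then compare it against the total event count ```
| eval is_busy=if(like(_raw, "%Sync Busy%"), 1, 0)
| stats count AS total sum(is_busy) AS busy_count
| where total > 0 AND total = busy_count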
Are the search heads in the same time zone, and are they configured for the same time zone? Are the user profiles set to the appropriate time zone? There are a lot of factors at play here, mostly to do with local configuration, which you haven't confirmed yet.
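If it helps narrow things down, a quick sketch for comparing per-user time zone settings, run locally on each search head:

| rest /services/authentication/users splunk_server=local
| table title tz roles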
Hello,

I would like to send data from two different indexes, one to an indexer and the other to an intermediate forwarder. How does the configuration need to be updated on the Universal Forwarder? Thanks
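A sketch of the usual pattern: set _TCP_ROUTING per input stanza in inputs.conf and point each at its own tcpout group in outputs.conf (paths, group names, and hosts below are placeholders).

# inputs.conf
[monitor:///var/log/app_a]
index = index_a
_TCP_ROUTING = indexer_group

[monitor:///var/log/app_b]
index = index_b
_TCP_ROUTING = intermediate_group

# outputs.conf
[tcpout:indexer_group]
server = indexer.example.com:9997

[tcpout:intermediate_group]
server = intermediate_fwd.example.com:9997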
What do you mean by "pass the two base search"? Pass them where? How are you trying to use base searches? Please provide more specific examples of what you are trying to do, as your current question is too ill-defined to allow a meaningful answer.
Try something like this

| makeresults format=csv data="no,item
1,A
2,B
3,C
4,D
5,E"
| append
    [| makeresults format=csv data="date,item
2024/10/1,A
2024/10/1,B
2024/10/1,C"]
| append
    [| makeresults format=csv data="date,item
2024/10/2,C
2024/10/2,D"]
``` The lines above represent your sample data appended together ```
| chart count by item date
| fields - NULL
| untable item date count
The join command is very inefficient and not always necessary.  Try this query using a subsearch.

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity
Good morning, I need help. I have three search head servers. Two of them (SH1 and SH2) present the _time field correctly. However, the third server (SH3) presents the _time field three hours ahead of the other two. How do I resolve this?

SH2 - Normal
SH3 - Three hours ahead (same search range)
Take a look at any TLS certificates that get issued between the ALB and the proxy.
I missed it as well. The content of the Splunkbase page is created by the developer, so you can ask them to add a note about availability.
We have an on-prem Splunk Enterprise, version 9.0.4.1. We updated the IdP URL in the SAML configuration, and after uploading the new IdP certificate to all search heads under .../auth/idpCerts, it worked for about an hour and then stopped with the error: "Verification of SAML assertion using IDP's cert failed. Unknown signer of SAML response". We logged in to the search heads and noticed that the updated idpCert.pem file is breaking authentication; we are currently investigating whether this is a system-related issue. Is this a known issue?
Same error. In essence, it doesn't recognise splunk_hec as a possible exporter. I'm on the latest version of the OpenTelemetry Collector.
I made a couple more bookmarklets to help: 1. SID Only: Strip all URL parameters except the SID to have the search parameters loaded from the saved job (only works if the SID is still saved)   javascript&colon; window.location.href = window.location.href.replace(/\?.*?(\bsid=[^&]+).*/, '?$1');   2. Show Search: Show the search after the error message   javascript&colon; query_str = decodeURIComponent(window.location.href.replace(/.*?\bq=([^&]+).*/, '$1')); document.body.innerHTML += `<pre>${query_str}</pre>`;   3. Strip off different parameters until it works. 1st click removes the display fields list, 2nd click collapses repeated spaces, and 3rd click truncates the query to 3500 characters.   javascript&colon;(function(){if (location.href.indexOf('display.events.fields')>=0) {window.location.href = window.location.href.replace(/\b(display\.events\.fields=[^&]+)/, '');}else if (location.href.indexOf('%'+'0A')>=0) {window.location.href = window.location.href.replaceAll(/(%(20|0A))+/g, ' ');}else{window.location.href = window.location.href.replace(/(\bq=[^&]{100,3500})[^&]*(.*)/, '$1$2');}})();   Again,  replace the "&colon;" in the blocks above with the colon character.
@yuanliu Good to know. If I may ask again, how did you know the cost associated with each SPL command? Thanks!!
Your services and receivers, according to the documentation, seem more designed for logs and not metrics. Here is another sample from the documentation that seems more suited for metrics.

pipelines:
  metrics:
    receivers: [prometheus]
    processors: [batch]
    exporters: [splunk_hec/metrics]

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 5s
          static_configs:
            - targets: ['<container_name>:<container_port>']

exporters:
  splunk_hec/metrics:
    # Splunk HTTP Event Collector token.
    token: "00000000-0000-0000-0000-0000000000000"
    # URL to a Splunk instance to send data to.
    endpoint: "https://splunk:8088/services/collector"
    # Optional Splunk source: https://docs.splunk.com/Splexicon:Source
    source: "app"
    # Optional Splunk source type: https://docs.splunk.com/Splexicon:Sourcetype
    sourcetype: "jvm_metrics"
    # Splunk index, optional name of the Splunk index targeted.
    index: "metrics"