Hello, I would like to send data for two different indexes: one to an indexer and the other to an intermediate forwarder. How does the configuration need to be updated on the Universal Forwarder? Thanks
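One possible approach, sketched below with hypothetical group names, hosts, ports and monitor paths: define two tcpout groups in outputs.conf on the Universal Forwarder and route each input to its group with _TCP_ROUTING in inputs.conf.

# outputs.conf on the Universal Forwarder (host names and ports are placeholders)
[tcpout]
defaultGroup = indexer_group

[tcpout:indexer_group]
server = indexer1.example.com:9997

[tcpout:intermediate_group]
server = intermediate-fwd.example.com:9997

# inputs.conf - send each monitored source (and its index) to a different output group
[monitor:///var/log/app_a]
index = index_a
_TCP_ROUTING = indexer_group

[monitor:///var/log/app_b]
index = index_b
_TCP_ROUTING = intermediate_group

After editing both files, restart the forwarder so the new routing takes effect.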
What do you mean by "pass the two base search"? Pass them where? How are you trying to use base searches? Please provide more specific examples of what you are trying to do, as your current question is too ill-defined to be able to provide a meaningful answer.
Try something like this

| makeresults format=csv data="no,item
1,A
2,B
3,C
4,D
5,E"
| append
    [| makeresults format=csv data="date,item
2024/10/1,A
2024/10/1,B
2024/10/1,C"]
| append
    [| makeresults format=csv data="date,item
2024/10/2,C
2024/10/2,D"]
``` The lines above represent your sample data appended together ```
| chart count by item date
| fields - NULL
| untable item date count
The join command is very inefficient and not always necessary.  Try this query using a subsearch.

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
    | fields user
    | dedup user
    | eval email=user, extensionAttribute10=user, extensionAttribute11=user
    | fields email extensionAttribute10 extensionAttribute11
    | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity
Good morning, I need help. I have three Search Head servers. Two of them (SH1 and SH2) are presenting the "_time" field correctly. However, the other server (SH3) is presenting the "_time" field three hours ahead of the other two (SH1 and SH2). How do I resolve this? SH2 - normal; SH3 - three hours ahead (with the same search range).
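One hedged thing to check, assuming the offset comes from how _time is rendered rather than from the data itself: compare the per-user time-zone preference on each search head (the username and zone below are illustrative).

# $SPLUNK_HOME/etc/users/<username>/user-prefs/local/user-prefs.conf on each search head
[general]
tz = America/Sao_Paulo   # Splunk Web renders _time in this zone; a missing value falls back to the OS time zone

If the user preference matches on SH1, SH2 and SH3, compare the operating-system time zone of SH3 with the other two.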
Take a look at any TLS certificates that get issued between ALB and Proxy.
I missed it as well.  The content of the splunkbase page is created by the Developer so you can ask them to add a note about availability.
We have an on-prem Splunk Enterprise, version 9.0.4.1. We updated the IDP URL in the SAML configuration and, after uploading the new IDP certificate to all search heads under .../auth/idpCerts, it was working for about an hour and then it stopped working with the error: "Verification of SAML assertion using IDP's cert failed. Unknown signer of SAML response". We logged in to the search heads and noticed that the updated idpCert.pem is breaking authentication; we are currently investigating whether this is a system-related issue. Is this a known issue?
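A hedged configuration check, with an illustrative stanza name and path: confirm that authentication.conf still points at the certificate location you replaced, and that every certificate in the IdP's signing chain is present and readable there.

# $SPLUNK_HOME/etc/system/local/authentication.conf (stanza name is an example)
[saml]
idpCertPath = /opt/splunk/etc/auth/idpCerts
# If idpCertPath is a directory, all certificates in the IdP's signing chain must be in it,
# and the newly uploaded idpCert.pem must match the certificate the IdP is actually signing with.

If the IdP rotated its signing certificate again after your upload, this error would reappear even though the path is correct.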
Same error. In essence, it doesn't recognise splunk_hec as a possible exporter. I'm on the latest version of the OpenTelemetry Collector.
I made a couple more bookmarklets to help:

1. SID Only: Strip all URL parameters except the SID to have the search parameters loaded from the saved job (only works if the SID is still saved)

javascript&colon; window.location.href = window.location.href.replace(/\?.*?(\bsid=[^&]+).*/, '?$1');

2. Show Search: Show the search after the error message

javascript&colon; query_str = decodeURIComponent(window.location.href.replace(/.*?\bq=([^&]+).*/, '$1')); document.body.innerHTML += `<pre>${query_str}</pre>`;

3. Strip off different parameters until it works. 1st click removes the display fields list, 2nd click collapses repeated spaces, and 3rd click truncates the query to 3500 characters.

javascript&colon;(function(){if (location.href.indexOf('display.events.fields')>=0) {window.location.href = window.location.href.replace(/\b(display\.events\.fields=[^&]+)/, '');}else if (location.href.indexOf('%'+'0A')>=0) {window.location.href = window.location.href.replaceAll(/(%(20|0A))+/g, ' ');}else{window.location.href = window.location.href.replace(/(\bq=[^&]{100,3500})[^&]*(.*)/, '$1$2');}})();

Again, replace the "&colon;" in the blocks above with the colon character.
@yuanliu  Good to know. If I may ask again, how did you know the cost associated with each SPL command? Thanks!!
Your service and receivers, according to the documentation, seem more designed for logs and not metrics.  Here is another sample from the documentation that seems more suited for metrics.

pipelines:
  metrics:
    receivers: [prometheus]
    processors: [batch]
    exporters: [splunk_hec/metrics]

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 5s
          static_configs:
            - targets: ['<container_name>:<container_port>']

exporters:
  splunk_hec/metrics:
    # Splunk HTTP Event Collector token.
    token: "00000000-0000-0000-0000-0000000000000"
    # URL to a Splunk instance to send data to.
    endpoint: "https://splunk:8088/services/collector"
    # Optional Splunk source: https://docs.splunk.com/Splexicon:Source
    source: "app"
    # Optional Splunk source type: https://docs.splunk.com/Splexicon:Sourcetype
    sourcetype: "jvm_metrics"
    # Splunk index, optional name of the Splunk index targeted.
    index: "metrics"
Hi everyone,

I'm currently working on integrating Trellix ePolicy Orchestrator (ePO) logs into Splunk for better monitoring and analysis.

I would like to know the best approach to configure Splunk to collect and index logs from the Trellix ePO server. Specifically, I'm looking for details on:

- Recommended methods (e.g., syslog, API, or other tools/add-ons)
- Any Splunk add-ons or apps that facilitate ePO log ingestion
- Best practices for configuration and parsing these logs in Splunk

Any guidance or references to documentation would be greatly appreciated!

Thank you!
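For illustration only: if the syslog route is chosen, a common pattern is to have ePO forward syslog to a heavy forwarder (or a syslog server that writes to disk) and define a network input for it. The port, index and sourcetype below are placeholders, not an official recommendation.

# inputs.conf on a heavy forwarder / syslog collection tier (illustrative values)
[tcp-ssl://6514]
index = epo
sourcetype = trellix:epo:syslog
disabled = 0

# A [SSL] stanza with a server certificate is also required for a tcp-ssl input,
# since ePO typically forwards syslog over TLS.

Whichever method is used, it is worth checking Splunkbase for a Trellix/McAfee ePO add-on first, since an add-on usually ships the sourcetypes and field extractions.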
hostname.csv

     ip          mac          hostname              location   description
1.   x.x.x.x                  abc_01                NYC        null mac
2.               00:00:00     def_02                DC         null ip
3.   x.x.x.y     00:00:11     ghi_03                Chicago    no update
4.                            jkl_04                LA         null mac & ip
5.                            Hostname_not_in_idx   Seatle     not match

I would like to search index=* host=* ip=* mac=* in Splunk and compare the host field with the hostname column of the lookup file "hostname.csv". If host matches a hostname, I would like to append the ip and mac values from the index to that row of hostname.csv. If host does not match a hostname, that row of hostname.csv should not be altered. (I don't want to overwrite hostname.csv; I only want to append the ip and mac values from the index to the hostname.csv file.) The result should look like this. The base search doesn't have a location field, so I would like to keep the location column as it is.

new hostname.csv file

     ip          mac          hostname              location       description
1.   x.x.x.x     00:new:mac   abc_01                NYC_orig       append mac
2.   x.x.y.new   00:00:00     def_02                DC_orig        append ip
3.   x.x.x.y     00:00:11     ghi_03                Chicago_orig   no update
4.   new.ip      new:mac      jkl_04                LA_orig        append ip & mac
5.                            Hostname_not_in_idx   Seatle         no update

Thank you for your help
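Not an authoritative answer, but one way to sketch this without overwriting the values already in the lookup is to merge the index results with the current lookup rows and write the merged set back. It assumes the lookup columns are exactly ip, mac, hostname, location, description, that host in the index matches hostname, and that empty cells are truly null (if they are empty strings, clear them with nullif first). Test against a copy of the lookup before writing back.

index=* host=* ip=* mac=*
``` most recent ip/mac seen for each host in the index ```
| stats latest(ip) as idx_ip latest(mac) as idx_mac by host
| rename host as hostname
``` add the current rows of the lookup to the result set ```
| inputlookup append=true hostname.csv
``` collapse the index row and the lookup row for the same hostname into one row ```
| stats values(idx_ip) as idx_ip values(idx_mac) as idx_mac values(ip) as ip values(mac) as mac values(location) as location values(description) as description by hostname
``` keep the existing lookup value; only fill ip/mac from the index when the lookup cell is empty ```
| eval ip=coalesce(ip, idx_ip), mac=coalesce(mac, idx_mac)
| table hostname ip mac location description
| outputlookup hostname.csv

Note that hosts seen in the index but missing from the lookup would be added as new rows with an empty location; filter them out before the outputlookup if that is not wanted.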
Thanks for the feedback - I did send the developer an email inquiry.  It appears the app is only available in the European markets... Unless I missed it in the Splunkbase documentation / website, it would be nice to have that listed there.
And what vulnerability is that? Did your vulnerability management team actually bother to read through the description, or is it just a blindly copy-pasted "finding" from Nessus?
Hello. I'm trying to transfer metrics collected from Prometheus to my cloud instance.  According to https://docs.splunk.com/observability/en/gdi/opentelemetry/components/splunk-hec-exporter.html I should use the splunk_hec exporter.  The configuration for OpenTelemetry looks like

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'prometheus'
          scrape_interval: 10s
          static_configs:
            - targets: ['localhost:9090']
exporters:
  splunk_hec:
    token: "xxxxxxxx"
    endpoint: "https://http-inputs-xxxx.splunkcloud.com/services/collector"
    source: "lab"
    sourcetype: "lab"
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [splunk_hec]

but I'm receiving an error that splunk_hec is not accepted as an exporter:

error decoding 'exporters': unknown type: "splunk_hec" for id: "splunk_hec" (valid values: [nop otlp kafka zipkin debug otlphttp file opencensus prometheus prometheusremotewrite])

Do you have to use any intermediate solution to achieve this goal? Thanks. Sz
Good day,

I want to join two indexes to show all the email addresses of the user that signed in.

This queries my mimecast signin logs:

index=db_mimecast splunkAccountCode=* mcType=auditLog
| dedup user
| table _time, user
| sort _time desc

Let's say it returns a user@domain.com that signed in. I want to then join this to show all the info from:

index=collect_identities sourcetype=ldap:query
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

I tried an inner join but I do not have anything that matches, since my results come back like this for my second query:

identity email extensionAttribute10 extensionAttribute11 first last
USurname user@domain.com userT1@domain.com user@domain.com user@domain.com user@another.com user@domain.com user Surname
We are hosting Splunk Enterprise on AWS EC2 instances; the flow goes as follows: ALB > Apache reverse proxies > ALB > SHC <> Indexers. After a period of time (mostly days) we start to experience 504 gateway time-outs, which disappear when we restart our proxies, and then we go for another round, and so on. Any clues on how to troubleshoot this? We adjusted the timeout parameters on the application and the application load balancers, but the problem still persists.
Hi @JandrevdM , good for you, see next time! let us know if we can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors