All Posts



Hi everyone, I'm currently working on integrating Trellix ePolicy Orchestrator (ePO) logs into Splunk for better monitoring and analysis. I would like to know the best approach to configure Splunk to collect and index logs from the Trellix ePO server. Specifically, I'm looking for details on:

- Recommended methods (e.g., syslog, API, or other tools/add-ons)
- Any Splunk add-ons or apps that facilitate ePO log ingestion
- Best practices for configuring and parsing these logs in Splunk

Any guidance or references to documentation would be greatly appreciated! Thank you!
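For the syslog route, a minimal sketch of what the receiving side could look like — a hedged example, not the official ePO add-on configuration; the port, index name, and sourcetype below are placeholders to adapt:

```
# inputs.conf on the forwarder that receives syslog from the ePO server
# (port 6514, index "epo", and sourcetype "trellix:epo:syslog" are illustrative)
[tcp://6514]
index = epo
sourcetype = trellix:epo:syslog
disabled = 0
```

In practice, a dedicated syslog server (e.g., syslog-ng or rsyslog) writing to files that a forwarder monitors is often preferred over a direct TCP input, since it survives Splunk restarts without dropping events.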
hostname.csv

   ip         mac        hostname             location   description
1. x.x.x.x               abc_01               NYC        null mac
2.            00:00:00   def_02               DC         null ip
3. x.x.x.y    00:00:11   ghi_03               Chicago    no update
4.                       jkl_04               LA         null mac & ip
5.                       Hostname_not_in_idx  Seatle     not match

I would like to search in Splunk with index=* host=* ip=* mac=* and compare host against the hostname column from the lookup file "hostname.csv". If it matches, I would like to append the ip and mac values from the index to the hostname.csv file. If host doesn't match hostname, it should not alter hostname.csv. (I don't want to overwrite hostname.csv; I only want to append the ip and mac values from the index to it.) The base search doesn't have a location field, so I would like to keep the location column as is. The new hostname.csv file should look like this:

   ip          mac          hostname             location      description
1. x.x.x.x     00:new:mac   abc_01               NYC_orig      append mac
2. x.x.y.new   00:00:00     def_02               DC_orig       append ip
3. x.x.x.y     00:00:11     ghi_03               Chicago_orig  no update
4. new.ip      new:mac      jkl_04               LA_orig       append ip & mac
5.                          Hostname_not_in_idx  Seatle        no update

Thank you for your help.
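A sketch of one way to do this in SPL, assuming host in the events corresponds to the hostname column. Note that outputlookup always rewrites the whole file, but because the search starts from the existing lookup, unmatched rows pass through unchanged and coalesce only fills cells that were empty:

```
| inputlookup hostname.csv
| join type=left hostname
    [ search index=* host=* ip=* mac=*
      | stats latest(ip) AS new_ip latest(mac) AS new_mac BY host
      | rename host AS hostname ]
| eval ip=coalesce(ip, new_ip), mac=coalesce(mac, new_mac)
| fields ip mac hostname location description
| outputlookup hostname.csv
```

The location and description columns are never touched, so existing values survive the rewrite.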
Thanks for the feedback - I did send the developer an email inquiry. It appears the app is only available in the European markets... If I didn't miss it in the Splunkbase documentation / website, it would be nice to have that listed there.
And what vulnerability is that? Did your vulnerability management team actually bother to read through the description, or is it just a blindly copy-pasted "finding" from Nessus?
Hello. I'm trying to transfer metrics collected from Prometheus to my cloud instance. According to https://docs.splunk.com/observability/en/gdi/opentelemetry/components/splunk-hec-exporter.html I should use the splunk_hec exporter. The configuration for OpenTelemetry looks like:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'prometheus'
          scrape_interval: 10s
          static_configs:
            - targets: ['localhost:9090']
exporters:
  splunk_hec:
    token: "xxxxxxxx"
    endpoint: "https://http-inputs-xxxx.splunkcloud.com/services/collector"
    source: "lab"
    sourcetype: "lab"
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [splunk_hec]

but I'm receiving an error that splunk_hec is not accepted as an exporter:

error decoding 'exporters': unknown type: "splunk_hec" for id: "splunk_hec" (valid values: [nop otlp kafka zipkin debug otlphttp file opencensus prometheus prometheusremotewrite])

Do I have to use any intermediate solution to achieve this goal? Thanks. Sz
Good day, I want to join two indexes to show all the email addresses of the users that signed in. This queries my Mimecast sign-in logs:

index=db_mimecast splunkAccountCode=* mcType=auditLog
| dedup user
| table _time, user
| sort _time desc

Let's say it returns a user@domain.com that signed in. I then want to join this to show all the info from:

index=collect_identities sourcetype=ldap:query
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

I tried an inner join but nothing matches, since my results for the second query come back like this:

identity: USurname
email: user@domain.com userT1@domain.com
extensionAttribute10 / extensionAttribute11: user@domain.com user@domain.com user@another.com user@domain.com
first: user
last: Surname
We are hosting Splunk Enterprise on AWS EC2 instances. The flow goes as follows: ALB > Apache reverse proxies > ALB > SHC <> Indexers. After a period of time (mostly days) we start to experience 504 Gateway Time-out errors, which disappear when we restart our proxies, and then we go for another round, and so on. Any clues on how to troubleshoot this? We adjusted the timeout parameters on the application and on the application load balancers, but the problem still persists.
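One common culprit in a chain like this is a keep-alive/timeout mismatch between the ALB and Apache: AWS generally recommends that the backend's keep-alive timeout be longer than the ALB idle timeout, otherwise the backend can close a connection the ALB is about to reuse. A hedged httpd.conf fragment to illustrate (values are placeholders, not a verified fix for this setup):

```
# Backend keep-alive should outlive the ALB idle timeout (default 60s)
KeepAlive On
KeepAliveTimeout 120
# How long mod_proxy waits for the upstream (SHC) before returning 504
ProxyTimeout 300
```

Comparing Apache's error logs against the ALB access logs at the moment a 504 occurs usually shows which hop is closing the connection first.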
Hi @JandrevdM , good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
| tstats ... | inputlookup append=t ... | stats values(*) as * by host  
I do not know the max number of emails, but I believe the first answer will be sufficient and I will try to work around it in my join.
Well, you probably can, using the foreach command and the {} notation (or alternatively some mv* magic with the kv command in the end), but the question is - what for? It's usually the other way around that's the problem - normalizing your data when you have multiple fields holding "the same" data.
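For reference, a minimal sketch of the {} (dynamic field name) idea mentioned above, assuming a multivalue field email keyed by identity - it fans the values out into numbered email_1, email_2, ... columns:

```
| mvexpand email
| streamstats count AS n BY identity
| eval fname="email_".n
| eval {fname}=email
| stats values(email_*) AS email_* BY identity
```

The {fname} eval creates a field whose name is the value of fname, which is what makes the numbered columns possible without knowing the count in advance.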
Hi @JandrevdM , do you know the max number of emails? If it is fixed (always the same quantity), you could use nomv and then a regex to split the emails. Ciao. Giuseppe
Hello Splunkers, I would like to use two base searches with an input dropdown: when the dropdown is set to All, I need to run one base search; when any value other than All is selected, it needs to run a different base search. Thanks!
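One common Simple XML pattern for this (a sketch - the token names, search ids, and queries below are hypothetical): have the dropdown's change handler set one of two tokens, and give each base search a depends attribute so only the matching one runs:

```xml
<input type="dropdown" token="scope">
  <label>Scope</label>
  <choice value="*">All</choice>
  <change>
    <condition value="*">
      <set token="use_all">true</set>
      <unset token="use_specific"></unset>
    </condition>
    <condition>
      <set token="use_specific">true</set>
      <unset token="use_all"></unset>
    </condition>
  </change>
</input>

<search id="base_all" depends="$use_all$">
  <query>index=main | stats count by host</query>
</search>
<search id="base_specific" depends="$use_specific$">
  <query>index=main host=$scope$ | stats count by host</query>
</search>
```

Panels then reference base="base_all" or base="base_specific"; a search whose depends token is unset never dispatches.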
Instead of having

email
user@domain.com userT1@domain.com

I would like to split it:

email1            email2
user@domain.com   userT2@domain.com
Hi @PotatoDataUser , try using the lookup command (https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchReference/Lookup):

| tstats count where index=my_index by host
| lookup my_lookup.csv server_name

Ciao. Giuseppe
Hi @JandrevdM , what do you mean by "split"? If you have a multivalue field, you could try mvexpand to have a row for each value, or nomv to have all the values in one row. Ciao. Giuseppe
Although removing through REST probably works, I find it easier to do it this way:

1. Edit the configuration file SPLUNK_INSTALL_DIR\etc\system\local\authentication.conf
2. Navigate to Settings > Authentication methods > Reload authentication configuration
Hi Guys, I have one master list that includes all items, and I want to consolidate two other time-related tables into a single chart, as shown in the example below: [master list] [time-related table 1] [time-related table 2] [result chart]. Also, could I use the chart to produce a pivot chart in Splunk?
Hello, We have been facing a weird error: suddenly, our production Splunk Cloud Enterprise Security Incident Review dashboard isn't showing the drilldown searches in any of the triggered notables. For all of them, a "Something went wrong" message is shown. I tried changing the roles to ess_admin and tried multiple drilldown searches, but nothing helped. I am wondering if this is an app backend problem, but I just wanted to make sure I am not missing anything before opening a support ticket. Any help would be greatly appreciated.
Thanks! Is there any way to split it? I tried this but it is not working:

index=collect_identities sourcetype=ldap:query
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity
| eval email=split(email, "")
| eval extensionAttribute10=split(extensionAttribute10, "")
| eval extensionAttribute11=split(extensionAttribute11, "")
| eval first=split(first, "")
| eval last=split(last, "")
| mvexpand email
| mvexpand extensionAttribute10
| mvexpand extensionAttribute11
| mvexpand first
| mvexpand last
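A sketch of an alternative that avoids both the empty-delimiter split and the chained mvexpand calls (which would produce a cross-product of all the fields): after stats, a multivalue field can be unpacked into numbered columns with mvindex. This assumes at most two emails per identity:

```
index=collect_identities sourcetype=ldap:query
| dedup email
| eval identity=lower(replace(replace(identity, "Adm0", ""), "Adm", ""))
| stats values(email) AS email values(first) AS first values(last) AS last BY identity
| eval email1=mvindex(email, 0), email2=mvindex(email, 1)
| fields - email
```

mvindex returns null when the index is out of range, so identities with a single email simply get an empty email2.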