All Topics
Team, I am new to Splunk Cloud and need someone's help to get started. I have the Splunk Cloud instance up and running; now I want to onboard production logs from an on-prem Sophos physical appliance firewall into Splunk. I would appreciate it if you could help me with a step-by-step method to achieve this goal. Likewise, I also need to onboard AV logs; please provide step-by-step methods for that as well.
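(For context, a minimal sketch of the common pattern: the appliance sends syslog to an on-prem forwarder, which then forwards to Splunk Cloud. The port, index, and sourcetype below are illustrative assumptions, not values from the question.)

# inputs.conf on the on-prem (heavy or universal) forwarder
[tcp://5514]
index = netfw
sourcetype = sophos:firewall
disabled = 0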
We are looking to use the VMware Monitoring Extension. I need some clarification: does this extension use vCenter, or does host info need to be captured in the config file for each VM and host managed by vCenter? The documentation is not very clear on that, or on the specific permissions required for the user that does the authentication. See the image below for reference. What is the difference between the "host" in the servers section at the top of the config and the "host" in the hostConfig section further down? Which one do we use if our client's vCenter is managing 200 hosts? config.yml
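(A hedged sketch of how the two "host" keys typically differ in such a config.yml; all key names and values below are placeholders based on the description above, not the verified extension schema, so check the extension docs before use.)

servers:
  - host: "vcenter01.example.com"   # the vCenter endpoint the extension authenticates to
    username: "readonly-svc"
    password: "..."
hostConfig:
  - host: "esxi-host-01"            # an ESXi host managed by that vCenter
    vms: ["vm1", "vm2"]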
Hi All, We have some remote sites with limited bandwidth that may be offline for periods of time due to their location. I need a way to make sure that when they come back online, the forwarders don't saturate the sites' bandwidth trying to send all of the data that has built up while offline. I looked at the maxKBps option, but this looks like it is just the processed throughput: if the site is offline it will keep processing, and it will not limit the output once connected again. Is my view of maxKBps correct, and if so, is there an option to limit output? Thanks in advance, Leon
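(For reference, the forwarder-side throughput cap the question refers to lives in limits.conf; the value below is only an example.)

# limits.conf on the forwarder
[thruput]
maxKBps = 256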
URGENT In the above screenshot I have fields called Ids and System. In the 3rd event, Ids represents the username, and in the 2nd event the Ids value is equal to the System value. I want to pass the users field values to the second event if the Ids and System values match. How can I do it? Thanks in advance.
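(Without the screenshot the exact layout is unclear, but a generic hedged pattern for copying a field across events that share a key uses eventstats; this sketch assumes a users field exists on the events where Ids holds a username.)

... base search ...
| eval match_key=coalesce(System, Ids)
| eventstats values(users) as users by match_key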
Hi all, One of the panels in my dashboard displays events based on an input token (tkn.folder); I'll call this panel panel_A. In panel_A, the token settings are: Token = tkn.folder, Token Prefix = Folder like ", Token Suffix = ". I hope to use this token value, which is the input for panel_A, in another panel in the same dashboard, named panel_B, as a subsearch. However, it didn't work. I use this code in panel_B as a subsearch to restrict the LOG_ID I need to analyze:

[| search index=my_index
 | lookup folder_list.csv LOG_ID OUTPUT Folder
 | where ($tkn.folder$)
 | fields + LOG_ID]

After I input a string for that token, panel_A works normally and generates its output based on the input, but panel_B keeps displaying "Search is waiting for input...". Is there any way to make panel_B (or more panels) capture the input for panel_A? Thank you very much.
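(A hedged sketch of one way panel_B's subsearch could reference the token directly, bypassing the prefix/suffix expansion; this assumes the raw folder name is what is wanted, and the input must still be set for dependent panels to run.)

[| search index=my_index
 | lookup folder_list.csv LOG_ID OUTPUT Folder
 | search Folder="$tkn.folder$"
 | fields LOG_ID]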
Hi, I use the REST API to send search commands to Splunk and get search results, and it works normally. But when I send "| script python mypython" via the REST API, "mypython" does not run. I am sure "mypython" is set up as a custom command, and "| script python mypython" works when I execute it myself. The REST API account is admin. Please help me solve this problem, thanks.
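(For comparison, a typical way to submit such a search over REST with curl; the host, credentials, and oneshot mode here are illustrative assumptions.)

curl -k -u admin:changeme https://splunk-host:8089/services/search/jobs \
  -d exec_mode=oneshot \
  -d output_mode=json \
  --data-urlencode search="| script python mypython"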
Hi Splunkers, today I'm here not for an issue, or better, not yet, but to "pull all together" the components of my task, which is forwarding Splunk data from a HF to another system, an Exabeam UEBA in my case. I'm trying to prevent possible errors I could make in changing the required files, so I want to perform a check here with you to understand whether I got all I need from the docs. Let me give you more context and introduce the current state:

- The Splunk environment installation and setup was not performed by my team, but by another one.
- That team did not create an outputs.conf file in $SPLUNK_HOME/etc/system/local; they created each outputs.conf they required in a separate folder under $SPLUNK_HOME/etc/apps. So at this time we have a lot of outputs.conf files, but none under $SPLUNK_HOME/etc/system/local. Likewise, no props.conf or transforms.conf is present under $SPLUNK_HOME/etc/system/local.
- We must forward only a subset of data via syslog, and we have to filter it by sourcetype.
- We have 2 destination syslog servers balanced by a load balancer, so we have to send data to the LB VIP.
- We are using syslog but, for some reason, we will not use the default UDP; we are going to use syslog over TCP.
- I have no direct access to the Splunk HF; the task is going to be performed with colleagues who have this access. I'm in charge of editing the required files and passing them to my colleagues, who will upload them to the HF.

Which documentation did we use? These: Forward data to third-party systems; Route and filter data. Plus I searched other similar topics here on the community and tried to get some results. So, putting all the data together, we concluded that, because the outputs.conf, props.conf and transforms.conf files are not in $SPLUNK_HOME/etc/system/local, we must: create outputs.conf, props.conf and transforms.conf under the $SPLUNK_HOME/etc/system/local folder, and populate them following the docs.

If the above assumptions are right, I have some doubts about the files, because some points in the docs are not completely clear to me. So, suppose we want to start forwarding only a subset of Windows EventIDs via syslog over TCP; are the conf files below OK?

outputs.conf:

[syslog:syslogToExabeamGroup]
type = tcp
server = <ipaddress>:<port>

Note that, because I have to forward only a subset of data, I omitted the defaultGroup setting, unlike the sample in the Forward data to third-party systems docs.

props.conf:

[<windows_sourcetype_name>]
TRANSFORMS-routing1 = syslog_from_win_to_exabeam

Here I used the sourcetype name directly and not the syntax sourcetype::<sourcetype_name>; is that correct? Also, even though the Forward data to third-party systems docs show a syntax like TRANSFORMS-whatever_you_want, I followed what is stated in Route and filter data and used a syntax like TRANSFORMS-routingX.

transforms.conf:

[syslog_from_win_to_exabeam]
REGEX = EventID\>(4624|4625|4648|4672|4720|4722|4723|4724|4725|4726|4728|4729|4732|4733|4740|4756|4757|4767|4768|4769|4770|4771|4776|4780|1102|4611|4663|4673|4674|4688|4697|4698|4719|4778|4779|4780|4800|4801|5136|5137|5138|5139|5140|5141|5145|6272)
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogToExabeamGroup

The regex was built based on our logs (we receive them in XML format). It all seems OK, but I'm not sure whether I've forgotten or misconfigured something.
Hi Team, I want to calculate the p-value of a t-test from a Splunk query. Any suggestions?
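(SPL has no built-in t-distribution CDF, so the p-value itself usually needs a custom search command or an external lookup table; a hedged sketch of computing the Welch t statistic for two groups in SPL, where the index, field, and group names are assumptions.)

index=my_data
| stats avg(metric) as mean, var(metric) as var, count as n by group
| stats first(mean) as m1, last(mean) as m2, first(var) as v1, last(var) as v2, first(n) as n1, last(n) as n2
| eval t=(m1-m2)/sqrt(v1/n1 + v2/n2)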
Hi Team, I am facing an issue with a panel table in a Splunk XML dashboard. Below is a snippet of our panel table. We tried to change the background color of each column of the table to white, but we are unable to do it. We even tried the formatting option by selecting the paint icon, but we still cannot change the background color. Could you kindly suggest what is needed in order to change the background color?
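(A hedged Simple XML sketch of forcing one column's cell background to a single color via a format element inside the table element; the field name is a placeholder, this is repeated per column, and it applies most cleanly to numeric columns.)

<format type="color" field="my_column">
  <colorPalette type="minMidMax" minColor="#FFFFFF" maxColor="#FFFFFF"></colorPalette>
  <scale type="minMidMax"></scale>
</format>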
Hi Splunkers, We have a ton of bookmarked content in the Splunk Security Essentials app on one of our dev Splunk search heads. Now I want to move that to the Enterprise Security search head. Is that possible?
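(If the bookmarks live in a KV store collection, one hedged approach is exporting and re-importing via the KV store REST endpoints; the collection name "bookmark" below is a guess, not verified, as are the hosts and credentials.)

curl -k -u admin:pass "https://dev-sh:8089/servicesNS/nobody/Splunk_Security_Essentials/storage/collections/data/bookmark" > bookmarks.json
curl -k -u admin:pass -H "Content-Type: application/json" \
  "https://es-sh:8089/servicesNS/nobody/Splunk_Security_Essentials/storage/collections/data/bookmark/batch_save" \
  -d @bookmarks.json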
I'm seeing these error lines when receiving IPFIX messages, and no data is shown either. NetFlow (v9) seems to be working fine. What could possibly be wrong here? INFO netflow_utils:228 - No matching ipfix app found, terminating the process of pulling configuration from vendor app... Below is the version info: Splunk Stream 8.1.0, Splunk_TA_stream_wire_data 8.1.0, Splunk_TA_stream 8.1.0, Splunk Enterprise Version: 9.0.4, Build: de405f4a7979, Products: hadoop
I am trying to get the values from one JSON object using the keys from another JSON array.

| makeresults
| eval limits=json_object("process1", json_array(123), "process2", json_array(234), "process3", json_array(0.12)), total=0
| eval processes=json_array("process1", "process2")
| eval new_data_limits=json_object()
| foreach processes [ | eval new_data_limits = json_set(new_data_limits, <<FIELD>>, json_extract(limits, <<FIELD>>)) ]

1) How do I capture the limits into the new_data_limits object?
2) If there are multiple events similar to 'limits', how do I get the average for each process (i.e. "process1", "process2")?

TIA....
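(Two hedged sketches, assuming a Splunk version where foreach supports mode=json_array; note the <<ITEM>> substitution is textual, so string uses need quotes. For 1), iterating the array items rather than field names:)

| foreach mode=json_array processes
    [ eval new_data_limits=json_set(new_data_limits, "<<ITEM>>", json_extract(limits, "<<ITEM>>")) ]

(For 2), one way is to expand each event's keys into rows and average per process; field names below are assumptions.)

... base search producing limits ...
| eval proc=json_array_to_mv(json_keys(limits))
| mvexpand proc
| eval limit=tonumber(json_extract(limits, proc."{0}"))
| stats avg(limit) as avg_limit by proc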
Hi, I need assistance with writing a regex that extracts all characters up to the "_" (underscore) character. So, the data could look like this:

field1:
ABCD_1234234
EFG_12349879
HIJK_12349850

And I would like to only see:
ABCD
EFG
HIJK

I tried this, however it is not doing the trick: | regex field1 = "^.*?(?=\_)". regex101 seems to show it working, but I must be missing something when converting it into Splunk. Any help would be appreciated. Thanks,
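(For reference: in SPL the regex command only filters events, while extraction is done with rex. A minimal sketch with a capture group; the output field name "prefix" is arbitrary.)

| rex field=field1 "^(?<prefix>[^_]+)_"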
I have a question about the CSV lookup app for Splunk. I recently installed the app on one of our dev search heads and it works great. By default, it stores and finds lookups in /export/opt/splunk/etc/apps/<App-Name>/lookups, and it works great for lookups in that directory. In our production environment, we have lookups spread around the environment. I couldn't quite find the answer in the app docs, but I'm assuming the app works alongside the built-in lookups tab in Splunk: if Splunk itself sees a lookup file, no matter where it is, whether under /apps/<appname>/ or under /users/<username>, then the CSV lookup app can also read and edit it. My question is: is there a file within the app where we have to specify where the lookup files are and where to point to find them, or does the app automatically seek out and find all .csv files that are lookups? Thanks for any help
My team has duplicate events in our index (~600 GB). We have fixed the duplicate source and now need to remove the existing duplicates from the index. What are the best practices for managing duplicates over a large index? So far we've explored two options:

- Create a summary index with duplicates removed. It's a large compute load to run this deduplication job and populate a new index all at once; how can we do this efficiently and prevent our job from auto-cancelling? We would also like to be able to update the new index from the one containing duplicates on ingest. Are there best practices for doing this reliably?
- Delete duplicate events from the current index. This is less attractive, due to permanent deletion.
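(A hedged sketch of the summary-index route, run as a scheduled search over small time windows so no single job has to cover the whole 600 GB; index names and the one-day window are placeholders, and dedup criteria may need to be narrower than _raw.)

index=my_index earliest=-30d@d latest=-29d@d
| dedup _raw
| collect index=my_index_dedup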
Does Splunk have a certificate store I can add a local CA certificate to?  I have a TA on a HF that's trying to pull events over https, and it's complaining about the certificate because it's signed by our local CA.
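(For reference: splunkd's own bundle for inter-Splunk TLS is $SPLUNK_HOME/etc/auth/cacert.pem, but a Python-based TA making outbound HTTPS calls often validates against the certifi bundle shipped with Splunk's Python instead. A common, upgrade-fragile workaround is appending the local CA there; the python3.x path below varies by Splunk version.)

cat local-ca.pem >> $SPLUNK_HOME/lib/python3.7/site-packages/certifi/cacert.pem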
Hello Everyone, Looking for a little clarity on how to best use the "wait time after violation" option when constructing a health rule. Our thought was that we did not want teams to receive repeated alerts for the same issue, so we set the value to 1440 (one day) so the team would only get one alert per day until its resolution. I think this may be causing an issue, though: we had a brief violation of a rule yesterday evening at 5 PM CST, but then the value went back under the threshold (no longer violating). Even though it's been 22 hours since the violation, the health rule is still showing that it's violating. I believe that with "wait time after violation" set to 24 hours, it is not allowing the health rule to re-evaluate its status. Is that how it works? I am just trying to gain confidence in how it is meant to operate. Any input is greatly appreciated! (Images attached for reference: last violation - 22 hours ago; last 1 hour - well under threshold; wait time after violation)
Hello Splunkies, Having some issues with getting ES dashboards to populate. Query for the Network Traffic dashboard panel titled "Traffic Search":

| tstats `summariesonly` max(_time) as _time,values(All_Traffic.action) as action,values(All_Traffic.src_port) as src_port,count from datamodel=Network_Traffic.All_Traffic where * $action_dm$ $src_dm$ $dest_dm$ $transport_dm$ $dest_port_dm$ by All_Traffic.src,All_Traffic.dest,All_Traffic.transport,All_Traffic.dest_port
| `drop_dm_object_name("All_Traffic")`
| sort - count
| fields _time,action,src,src_port,dest,transport,dest_port,count

Error I get when launching: Error in 'TsidxStats': WHERE clause is not an exact query. Thanks in advance for the help!
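(A hedged guess: the bare * in the where clause is a plausible trigger for that error. A sketch that swaps it for an exact predicate so the where clause stays exact even when the tokens are empty; whether this matches the dashboard's token defaults is an assumption.)

| tstats `summariesonly` max(_time) as _time, values(All_Traffic.action) as action, values(All_Traffic.src_port) as src_port, count
  from datamodel=Network_Traffic.All_Traffic
  where nodename=All_Traffic $action_dm$ $src_dm$ $dest_dm$ $transport_dm$ $dest_port_dm$
  by All_Traffic.src, All_Traffic.dest, All_Traffic.transport, All_Traffic.dest_port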
Hello, I am trying to figure out how to enable users to maintain their saved searches, reports and alerts in version control. Application teams log in to the Splunk UI and create their own reports, alerts, etc., which get written to savedsearches.conf files, private or shared. So application owners want their portion of the configuration to be available to them to maintain in, say, a Git repo. We are using a clustered search head deployment and also have a separate SHC deployer system. I thought about setting up a cron job to git-clone /opt/splunk/etc/users/<user-id>/search/local/savedsearches.conf, but I am not sure how that would work; it seems clunky. Is there any other way for teams (standard users) to download, track and update their report and alert configuration without using the GUI? Thanks.
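(One GUI-free option is pulling saved searches over the REST API and committing the output to Git on a schedule; a sketch where the host, credentials, and namespace wildcards are illustrative.)

curl -k -u svc_user:pass \
  "https://sh01:8089/servicesNS/-/-/saved/searches?output_mode=json&count=0" \
  > savedsearches.json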
Is it possible to make a dashboard that will show only a specific error? For example: the dashboard is plain, with general business transaction metrics, but when an error occurs all of the graphs focus on that error, so all other business transactions fade away and only the business transaction in error remains.