All Topics

Splunk Connect for Syslog

This is Splunk's preferred method of ingesting high volumes of data. Details can be found here → https://splunk-connect-for-syslog.readthedocs.io/en/latest/

TCP Data Input

Navigate to Settings -> Data Inputs -> TCP (Add new). This brings you to the following screen. In this step, we will configure Splunk to listen on TCP port 514. NSS only supports TCP, but the destination port is configurable. Most administrators use port 514, as it is the default port for UDP-based syslog. After configuring the SIEM port, click Next.

We are following the steps above but are unable to find the TCP/UDP input to configure the port to sync with Zscaler NSS. I'm logging in to the Splunk Cloud portal as a trial member. Could this be a restriction/privilege on my trial account? Please advise.

Thanks,
Asif
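For reference, the equivalent raw TCP input configured directly in inputs.conf on a (heavy) forwarder would be a sketch like the one below; the index and sourcetype names are assumptions. Splunk Cloud generally does not expose raw TCP/UDP inputs in its UI, so NSS data is typically sent to a customer-managed forwarder or to Splunk Connect for Syslog instead:

# inputs.conf sketch on a (heavy) forwarder; index/sourcetype are placeholders
[tcp://514]
index = zscaler
sourcetype = zscalernss-web
connection_host = ip
disabled = 0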
The above is the table, and the query for it is:

index="wtqlty" source=pdf-fc-002-rh sourcetype="release_pdf_results_json"
| table pdf_name pdf_url pdf_state

Please help.
I have two CSV files. One file has the names of the accounts and the servers where the accounts are added. In the second CSV file I have a lookup breaking down the group members. The Name field is common to both CSV files. E.g.:

Accounts01.CSV
Class  Domain   Hostname   Name
User   robotic  ROB-SVR01  Administrator
Group  robotic  ROB-SVR01  Advanced_users_IT
Group  robotic  ROB-SVR01  Advanced_users_HR

E.g.: GroupMembers.CSV
Name               member
Advanced_users_IT  user_IT_01 user_IT_02 user_IT_03
Advanced_users_HR  user_HR_01 user_HR_02 user_IT_01

Is there any way to combine both files to match the names, adding a new column showing the members, so the result looks like this?

Class  Domain   Hostname   Name               Members
User   robotic  ROB-SVR01  Administrator      User Account
Group  robotic  ROB-SVR01  Advanced_users_IT  user_IT_01 user_IT_02 user_IT_03
Group  robotic  ROB-SVR01  Advanced_users_HR  user_HR_01 user_HR_02 user_IT_01
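A minimal sketch of one way to do this with inputlookup + lookup, assuming both CSVs are configured as lookup table files under those names and that GroupMembers.CSV keeps the members in a single field called member:

| inputlookup Accounts01.csv
| lookup GroupMembers.csv Name OUTPUT member as Members
| table Class Domain Hostname Name Members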
Hello everyone, I am trying to create queries to show the max and average values of inbound and outbound network traffic (unit: Gbps) of my forwarders. I have already configured the Splunk Add-on for Unix and Linux on my forwarders, but I don't know which script to enable to collect the data needed. I also installed the Pavo Network Traffic App for Splunk, but I don't know how to configure it. For info, my Splunk server is a single-instance deployment. Any ideas?

Thanks!
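In case it helps: the add-on's interfaces.sh scripted input (sourcetype=interfaces) appears to report per-interface byte counters. A sketch of a throughput query under that assumption; field names vary by TA version, so verify RXbytes/TXbytes with | fieldsummary first:

sourcetype=interfaces
| sort 0 host Name _time
| streamstats current=f last(RXbytes) as prev_rx last(TXbytes) as prev_tx last(_time) as prev_time by host, Name
| eval in_gbps  = (RXbytes - prev_rx) * 8 / (_time - prev_time) / pow(10,9)
| eval out_gbps = (TXbytes - prev_tx) * 8 / (_time - prev_time) / pow(10,9)
| stats max(in_gbps) as max_in_gbps, avg(in_gbps) as avg_in_gbps, max(out_gbps) as max_out_gbps, avg(out_gbps) as avg_out_gbps by host

Counter wraps/resets will show up as negative values, which you may want to filter out with | where in_gbps>=0.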
Hi All, we have two Splunk 8.2 environments, and I am in charge of both. On the first environment, everything works fine. I'm trying to configure the same thing on the second for another infrastructure. To explain a little: I have two heavy forwarders behind a VIP, and logs are then forwarded to an indexer cluster of two indexers. I had configured some syslog forwarding for network devices, and everything works fine in TCP or UDP; my logs show up in the search head, in the right index. But for Firepower, I cannot get the data indexed. I use TCP 10000, and with a tcpdump on the HF I see the logs arrive, but after that, nothing... I've checked splunkd.log (and nearly every log file on the servers); there are no errors on the HF or the indexers, and nothing is blocked by a firewall or the network. I don't use the eStreamer add-on or the Firepower add-on, and the first infrastructure works fine with the same configuration. Here are some configurations:

On the indexers:

[fw_ext]
homePath = volume:primary/fw_ext/db
coldPath = volume:primary/fw_ext/colddb
thawedPath = $SPLUNK_DB/fw_ext/thaweddb
repFactor = auto
maxHotBuckets = 10

On the HF:

[tcp://10000]
index = fw_ext
sourcetype = cisco:asa
connection_host = ip
disabled = 0

Paths are empty for the index fw_ext, so I presume the logs don't arrive at the indexers. On the Firepower side, the configuration for the first and second infrastructures is identical, so I presume the problem doesn't come from the device either. The forwarding configuration is good too, since it works for every other network device and server.

Have you guys any ideas? Thanks in advance.
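Since tcpdump shows the traffic arriving, one way to see whether splunkd itself is accepting it and where it stalls is to look at the HF's internal queue metrics. A sketch (the host filter is a placeholder):

index=_internal source=*metrics.log* host=my_hf group=queue
| timechart span=5m max(current_size) by name

A queue that sits at its maximum (often parsingqueue or the tcpout queue) suggests blockage downstream of the input rather than a listener problem.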
I would like to send data (output) from Splunk to an external server/cloud/DB; please suggest the best way. It is around 10-15k records per day, and I would like to use that data in another analytics tool, for example Power BI.
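At that volume, one common approach is pulling results out over the REST export endpoint on a schedule. A sketch (host, credentials, and the search are placeholders):

curl -k -u admin:changeme https://splunk.example.com:8089/services/search/jobs/export \
  --data-urlencode search="search index=main earliest=-1d@d latest=@d | table _time host status" \
  -d output_mode=csv > daily_export.csv

The resulting CSV (or output_mode=json) can then be loaded into Power BI or pushed into a database by the same script.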
Hi Team, I have a .NET application that has N number of threads. I am using the AppDynamics .NET Profiler to validate the thread measures, with thread correlation enabled. While observing the data in the controller, I have a doubt about how the thread samples are collected. Is it through instrumentation or through a sampling technique? @Ryan.Paredez, could you please let me know? Thanks in advance.
I have a query where I get "STARTED" and "FINISHED" status events for the same methods. E.g.:

index IN (private public) sourcetype IN (X Y) log_entry=method_status method=getCusips status=STARTED
| rename _time as start_time
| table sourcetype method start_time status
| sort start_time

For this query I get several, let's say 3, results where everything is the same for each event except the event _time. I also get the "FINISHED" events with the same query, only with FINISHED:

index IN (private public) sourcetype IN (X Y) log_entry=method_status method=getCusips status=FINISHED
| rename _time as end_time
| table sourcetype method end_time status
| sort end_time

I will always get the same number of events for both queries. Since they are sorted, I would need to pair the first STARTED with the first FINISHED, the second STARTED with the second FINISHED, and so on, and get the duration (end_time - start_time), but how?

So what I would like to see, say I have 2 STARTED and 2 FINISHED events (and, as mentioned, only the time differs between the 2 STARTED events, so I cannot pair on anything else), is:

sourcetype  method     start_time  end_time  duration
X           getCusips  12          16        4
X           getCusips  18          20        2

I was thinking to iterate over the events somehow and map the 1st to the 1st, 2nd to the 2nd, but I have no idea if this is even doable. Hope I have explained it clearly.
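A sketch of one way to pair them positionally with streamstats, assuming STARTED/FINISHED counts really do match per method (names taken from your query):

index IN (private public) sourcetype IN (X Y) log_entry=method_status method=getCusips status IN (STARTED FINISHED)
| sort 0 _time
| streamstats count as pair_id by method, status
| eval start_time=if(status="STARTED", _time, null())
| eval end_time=if(status="FINISHED", _time, null())
| stats values(sourcetype) as sourcetype, min(start_time) as start_time, min(end_time) as end_time by method, pair_id
| eval duration=end_time-start_time
| table sourcetype method start_time end_time duration

The pair_id counter numbers the nth STARTED and nth FINISHED per method, and the stats then collapses each numbered pair into one row.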
Hello. I would like to ingest data from a FireEye HX, viewing the data either in the FireEye app or through our own dashboards. However, although the data is being indexed, the fields are not being extracted/labelled in a useful way. I am running Splunk 8.2.2 on Linux. I have an indexer cluster and an SH cluster, and I am using the latest app version, 3.8.8. The FireEye HX is sending data via TCP in CEF format. On the CM:

# cat etc/master-apps/_cluster/local/inputs.conf
<snip>
[tcp://:1234]
index = fe_data
sourcetype = hx_ce_syslog

# ls etc/master-apps/FireEye_v3
appserver bin default lookups metadata README.md static

I created local versions of props.conf and transforms.conf. In props.conf I uncommented this line as instructed (as we want the data in our own index):

# Uncomment the next line to send FireEye data to a separate index called "fireeye"
TRANSFORMS-updateFireEyeIndex = fix_FireEye_CEF_in, fix_FireEye_CSV_in, fix_FireEye_XML_in, fix_FireEye_JSON_st, fix_HX_CEF_in, fix_HX2_CEF_in

In transforms.conf I changed entries like this to use our index:

[fix_HX_CEF_in]
REGEX=.*:\sCEF\:\d\|mandiant\|mso\|
DEST_KEY=_MetaData:Index
FORMAT=fe_data

Q: Did I need to change FORMAT to use our index if I have specified the index in inputs.conf?
Q: Am I right in thinking I don't need the FireEye app installed on the SHC if I don't want to use the app there? I.e., it is enough for the indexers to use the app's configuration to parse the data.
Q: If the above is correct, does anyone know why the fields are not being extracted as, for example, cef_name?
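One way to narrow this down is to confirm what configuration the indexers actually see for your sourcetype. A sketch using btool on an indexer (sourcetype and stanza names taken from your configs above):

$SPLUNK_HOME/bin/splunk btool props list hx_ce_syslog --debug
$SPLUNK_HOME/bin/splunk btool transforms list fix_HX_CEF_in --debug

The --debug flag prints which file each setting came from, which quickly shows whether your local overrides under master-apps were distributed and are winning over the app defaults. Also note that field extractions like cef_name are typically search-time, so the app's props/transforms generally do need to be present on the search heads even if you don't use the app's UI there.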
I have this output in a field, with a lot of blank spaces. What would be the best way to convert this data into a table, or maybe a regex to parse it out better?

Start TREATMENTING ROUTES. TREATMENTS IS: GNCT   1 T12023   2 LKOUTWDF   POSITIONS ROUTES. POSITIONS IS: TOPS   1 CGHBRAB21053TBX   N S3T55NS   End.
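A hedged rex sketch, assuming the literal markers "TREATMENTS IS:", "POSITIONS IS:", and "End." are stable, and that the field is called my_field (rename to match yours):

| rex field=my_field "(?s)TREATMENTS IS:\s+(?<treatments>.+?)\s+POSITIONS ROUTES"
| rex field=my_field "(?s)POSITIONS IS:\s+(?<positions>.+?)\s+End\."
| table treatments positions

From there, | makemv delim="   " treatments could split each block on the runs of spaces, if the spacing is consistent.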
Hello, I'm having a problem with mvexpand in Splunk. I'm getting the following error:

command.mvexpand: output will be truncated at 1103400 results due to excessive memory usage. Memory threshold of 500MB as configured in limits.conf / [mvexpand] / max_mem_usage_mb has been reached.

Doing some searching here on Answers, I came across this previous answer: https://answers.splunk.com/answers/98620/mvexpand-gives-mvexpand-output-will-be-truncated-due-to-exc... Although that solution seemed to help a lot of people, it did not help me, and I don't see a fix anywhere else. If anyone has some advice it would be most helpful. Thanks! Following on from that question: is it possible to improve this limit? Here is my search:

index=_raw UserName=* timeformat="%d-%m-%YT%H:%M:%S" earliest="01-12-2021T00:00:00" latest="02-12-2021T23:59:00"
| stats values(_time) as Time by UserName
| eval i = mvrange(0,20)
| mvexpand i
| eval reconnection=if(UserName==UserName, tonumber(mvindex(Time,i+1))-tonumber(mvindex(Time,i)), "falha")
| where reconnection>0 AND reconnection<1200
| eval reconnection=tostring(reconnection, "duration")
| chart count by reconnection
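Rather than raising max_mem_usage_mb, the mvexpand can be avoided entirely by computing the gap between consecutive events per user with streamstats. A sketch, under the assumption that you want the time between consecutive events for each UserName:

index=_raw UserName=* timeformat="%d-%m-%YT%H:%M:%S" earliest="01-12-2021T00:00:00" latest="02-12-2021T23:59:00"
| sort 0 UserName _time
| streamstats current=f last(_time) as prev_time by UserName
| eval reconnection=_time-prev_time
| where reconnection>0 AND reconnection<1200
| eval reconnection=tostring(reconnection, "duration")
| chart count by reconnection

streamstats works event by event, so it sidesteps the per-result memory limit that mvexpand hits.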
Need help with trimming the month name in a field.
E.g. input: November 29, 2021 2:02:33 PM
Output: Nov 29, 2021 2:02:33 PM
(for all months in the field)
Thanks
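A sketch with strptime/strftime, assuming the field is called my_time (note %I pads the hour with a leading zero, so 2:02:33 PM comes back as 02:02:33 PM):

| eval ts=strptime(my_time, "%B %d, %Y %I:%M:%S %p")
| eval my_time=strftime(ts, "%b %d, %Y %I:%M:%S %p")

An even simpler option, since only the month name changes, is a regex replace:

| eval my_time=replace(my_time, "^([A-Za-z]{3})[A-Za-z]*", "\1")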
We have a couple of critical batch jobs running every night, and we need a way to monitor them. The jobs are doing life-cycle management, so it's very important that we are sure the jobs have actually been started, but of course also that they ended without issues. Does anyone have experience with setting up monitoring and alarms for this in AppD?
Hi, we have a situation: while trying to POST a request to an external API from JavaScript, we are getting a timeout error. While trying to hit the same URL through curl, we get the SSL certificate error below:

"curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above."

I have attached the error snip and the related JS here. JS snippet:

const userAction = async (pid) => {
  const url = 'https://xyz.com:443/PID?ppid=' + pid;
  // note: each .then must return a value for the next one to receive it
  const response = await fetch(url, { method: 'POST', mode: 'cors' })
    .then(response => response.text())
    .then(xmlString => console.log($.parseXML(xmlString)))
    .catch(error => console.log(error));
};

Has anyone come across the above issue? Any help is appreciated. Thanks.
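For the curl side specifically, error 60 usually means curl does not trust the CA that issued the server's certificate. A sketch, assuming you can obtain the issuing CA's certificate from your PKI or server team (the path and ppid value are placeholders):

curl --cacert /path/to/issuing-ca.pem -X POST "https://xyz.com:443/PID?ppid=123"

If that succeeds, it suggests the server's chain is incomplete or the CA is private; fixing the chain on the server (or trusting the CA on clients) is the usual underlying fix.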
Hi All, in our environment, servers are put under maintenance (servers are shut down) at a particular time of day, so we need to disable the correlation searches during this period to avoid incidents getting created. How can we disable/enable correlation searches during a particular time? Please let me know if you have any suggestions.

Thanks and Regards
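Correlation searches are saved searches underneath, so one option is to toggle them over REST from a job scheduled outside Splunk. A sketch (host, credentials, app namespace, and search name are all placeholders; ES correlation searches often live in the SplunkEnterpriseSecuritySuite or DA-ESS-* app namespaces):

# disable at the start of the maintenance window
curl -k -u admin:changeme \
  "https://sh.example.com:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches/My%20Correlation%20Search" \
  -d disabled=1

# re-enable afterwards
curl -k -u admin:changeme \
  "https://sh.example.com:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches/My%20Correlation%20Search" \
  -d disabled=0

Run from cron (or another scheduler) at the window boundaries.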
1. I have installed the universal forwarder and have a Splunk Cloud account.
2. On the laptop, in the universal forwarder, I downloaded the file and executed the command: /opt/splunkforwarder/bin/splunk install app /tmp/splunkclouduf.spl
3. I restarted the Splunk process.

No data went in; may I know why?

Note: I am trying to forward the Windows event log from the same host where I installed the Splunk universal forwarder.
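The credentials app only tells the forwarder where to send data; you still need to configure inputs, and [WinEventLog://] inputs only exist on a Windows forwarder (the /opt path above suggests a Linux/macOS one, which cannot read Windows event logs). A sketch of the connectivity check and of the input stanza, assuming a Windows UF:

# verify the forwarder is connected to Splunk Cloud
/opt/splunkforwarder/bin/splunk list forward-server

# inputs.conf sketch, valid only on a Windows universal forwarder
[WinEventLog://Security]
disabled = 0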
Two concerns come up when moving on-prem data to the cloud:

1. Data sensitivity: what if confidential data is lost (in transit or at rest)?
2. Authentication: logging in to the cloud, there is no 2FA or anything, just a username and password, and the user can just log in like this.

I would like to ask cloud users: how do you manage to overcome these two concerns when shifting your data to Splunk Cloud?
After creating a dashboard with 6 panels, all the jobs are getting queued. Also, the search lag health status is yellow:

Search Lag Root Cause(s): The percentage of non-high-priority searches lagged (50%) over the last 24 hours is very high and exceeded the yellow threshold (40%) on this Splunk instance. Total searches that were part of this percentage=2. Total lagged searches=1.

Please help me resolve these issues.
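One common cause is that each panel runs its own search, so a 6-panel dashboard can exhaust the per-user concurrent search limit at load time. A sketch of the usual mitigation, a shared base search in Simple XML (the query and field names are placeholders; the base search must be a transforming search for post-processing to work):

<dashboard>
  <search id="base">
    <query>index=main sourcetype=my_data | stats count by host, status</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <search base="base">
          <query>| stats sum(count) as total by host</query>
        </search>
      </chart>
    </panel>
    <!-- remaining panels post-process the same base search -->
  </row>
</dashboard>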
I have created a search which is working fine. It sends an email when the alert condition is met. My question is: is there any way I can add/update the email address in my alert using a curl command? Also, can I update my alert's search query using a curl command?

Thanks, Regards
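Yes, alerts are saved searches underneath, so both can be edited over the REST API. A sketch (host, credentials, owner/app namespace, alert name, and the new values are all placeholders):

# update the email recipients
curl -k -u admin:changeme \
  "https://splunk.example.com:8089/servicesNS/admin/search/saved/searches/My%20Alert" \
  -d action.email.to="first@example.com,second@example.com"

# update the search string itself
curl -k -u admin:changeme \
  "https://splunk.example.com:8089/servicesNS/admin/search/saved/searches/My%20Alert" \
  --data-urlencode search="index=main ERROR | stats count"

Note the saved search must live in (or be visible from) the owner/app namespace used in the URL.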
I would like to get the values below from Splunk into a shell script. I am creating an alert for the values below and using a webhook to invoke a shell script. I am using the webhook link below to trigger the script, but I don't know how to get those Splunk search results into the shell script. Can someone suggest which command/code should be used to capture the values from Splunk?
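For context: Splunk's webhook alert action POSTs a JSON payload to the configured URL, including the first result row under "result" and metadata such as "search_name" and "sid"; whatever HTTP endpoint receives it has to hand that payload to your script. A sketch of a handler reading the payload from stdin, assuming jq is installed and the field of interest is called my_field:

#!/bin/sh
# read the JSON body Splunk's webhook POSTed (field name is an assumption)
payload=$(cat)
value=$(printf '%s' "$payload" | jq -r '.result.my_field')
name=$(printf '%s' "$payload" | jq -r '.search_name')
echo "alert $name fired with my_field=$value"

If you need all result rows rather than just the first, a custom alert action script (which receives the full results file) is usually a better fit than a webhook.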