I want to configure crcSalt, but I am not quite sure how to write it in inputs.conf. The monitored path on Splunk looks like this: /home/csaops/csasec/NFV/KPG_MIO_HC_Logs_2021-11-10-10.txt How do I write this configuration?
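A sketch of how this typically looks in inputs.conf, assuming a monitor input over that directory (the index and sourcetype values are placeholders to adjust):

[monitor:///home/csaops/csasec/NFV]
# <SOURCE> is a literal keyword: it mixes the full file path into the
# CRC calculation, so files that begin with identical content (e.g.
# hourly logs sharing a header) are still read as distinct files.
crcSalt = <SOURCE>
index = your_index
sourcetype = your_sourcetype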
I have a dashboard using Simple XML. The dashboard has 5 rows, each of which contains 2 panels. The first panel is a small table and the second panel is a timechart. I would like the small table to be 20% of the width, and the timechart to be 80% of the width. I've created a JavaScript file so that the dashboard runs it automatically, but the JavaScript only works for the first 2 rows. Here's the js file. Any help appreciated.

require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function($) {
    // Grab the DOM for the panel dashboard row
    var panelRow = $('.dashboard-row').first();
    // Get the dashboard cells (which are the parent elements of the actual panels and define the panel size)
    var panelCells = $(panelRow).children('.dashboard-cell');
    // Adjust the cells' width
    $(panelCells[0]).css('width', '20%');
    $(panelCells[1]).css('width', '80%');

    panelRow = $('.dashboard-row').next();
    panelCells = $(panelRow).children('.dashboard-cell');
    $(panelCells[0]).css('width', '20%');
    $(panelCells[1]).css('width', '80%');

    panelRow = $('.dashboard-row').next();
    panelCells = $(panelRow).children('.dashboard-cell');
    $(panelCells[2]).css('width', '20%');
    $(panelCells[3]).css('width', '80%');

    $(window).trigger('resize');
});
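A possible fix, as a sketch: $('.dashboard-row').next() returns the following sibling of every matched row rather than stepping to "the next row", and each row only has two .dashboard-cell children, so panelCells[2] and panelCells[3] are undefined. Iterating over all rows with .each() avoids both problems:

require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function($) {
    // Apply the same 20%/80% split to every dashboard row
    $('.dashboard-row').each(function() {
        var panelCells = $(this).children('.dashboard-cell');
        $(panelCells[0]).css('width', '20%');
        $(panelCells[1]).css('width', '80%');
    });
    $(window).trigger('resize');
});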
Hi, I want to find all the dashboards that could potentially use a base search to save computing resources. As you know, a base search can populate multiple panels. I want a way to automatically check all the dashboards and see whether their panels run duplicate searches, so that I can guide users to implement base searches. Thanks in advance!
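A sketch of one possible approach via the REST endpoint that exposes dashboard XML (the rex pattern assumes Simple XML <query> elements; queries that differ only in whitespace would not be matched as duplicates):

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| fields title eai:acl.app eai:data
| rex field=eai:data max_match=0 "<query>(?<query>[^<]+)</query>"
| mvexpand query
| stats count by title eai:acl.app query
| where count > 1

This lists dashboards in which the same literal query text appears in more than one panel, i.e. candidates for a base search.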
How can I keep just the account name? I tried with replace, but that didn't work the way I want. Here is the search that I am using:

| makeresults
| eval Member=" CN=Domain Admins,OU=Users,DC=Lab,DC=com CN=Account Report,OU=Users,DC=Lab,DC=com CN=Report,OU=Users,DC=Lab,DC=com CN=HelpDesk,OU=Users,DC=Lab,DC=com "
| eval change=replace(Member,"CN=","")
| table Member,change

My goal is to keep only the name of the account, like:
Domain Admins
Account Report
Report
HelpDesk

Thanks in advance,
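A sketch of one way to do it: replace() only removed the literal "CN=" and left the rest of each distinguished name in place, whereas rex with max_match=0 can capture every CN value into a multivalue field:

| makeresults
| eval Member=" CN=Domain Admins,OU=Users,DC=Lab,DC=com CN=Account Report,OU=Users,DC=Lab,DC=com CN=Report,OU=Users,DC=Lab,DC=com CN=HelpDesk,OU=Users,DC=Lab,DC=com "
| rex field=Member max_match=0 "CN=(?<account>[^,]+),"
| table Member, account

Here account becomes a multivalue field containing Domain Admins, Account Report, Report, and HelpDesk.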
I have a dashboard which should show buckets with the number of machines by span of time.

Machines A to F were used for 2 mins
Machines D to T were used for 2 hrs
Machines S to Z were used for more than 4 hrs

So my graph should show the buckets with a standard set of time ranges.
X-axis: <5 mins, 5-30 mins, 30 mins - 2 hrs, 2-4 hrs, > 4 hrs
Y-axis: number of machines logged on for each of those ranges.

Logon Time        Logoff Time       MachineName  SessionTimeinMins
12/1/2021 19:33   12/1/2021 19:36   A            3
12/1/2021 16:46   12/1/2021 17:04   B            18
12/1/2021 15:35   12/1/2021 15:38   C            3
12/1/2021 11:35   12/1/2021 11:38   D            120
12/1/2021 16:35   12/1/2021 21:35   E            300

Base Search
| bucket SessionTimeinMins span=20
| chart count(MachineName) by sessionSpan

But this does not help me achieve what I want. How do I set my X-axis to show standard buckets like <2 min, 30 min-1 hr, and bring the counts into those buckets? Any help is much appreciated.

Thanks
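A sketch of one way to build fixed buckets with case() instead of bucket/span (the boundaries follow the ranges in the question; adjust as needed):

| eval sessionSpan=case(
    SessionTimeinMins < 5, "1: < 5 min",
    SessionTimeinMins < 30, "2: 5-30 min",
    SessionTimeinMins < 120, "3: 30 min - 2 hr",
    SessionTimeinMins <= 240, "4: 2-4 hr",
    true(), "5: > 4 hr")
| chart count(MachineName) by sessionSpan

The numeric prefixes are only there because chart sorts the labels alphabetically; they keep the buckets in chronological order on the X-axis.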
Hello, I need to run a search for MAC OUI matches against a .csv file containing 1000+ MAC OUIs. Can anyone provide an example? I have not had any luck building a search using inputlookup that works. Thank you! Splunkster21
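A sketch under assumed names (a lookup file mac_ouis.csv with columns oui and vendor, and events carrying a mac_address field): normalize the address, take the first six hex digits, then look them up:

| eval oui=upper(substr(replace(mac_address, "[:.-]", ""), 1, 6))
| lookup mac_ouis.csv oui OUTPUT vendor
| where isnotnull(vendor)

The key point is making the OUI format in the events match the format stored in the CSV before calling lookup.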
Hi,

Does a dashboard with a setting to refresh every 2 minutes mean a new search is launched every 2 minutes? Isn't that a resource drain, especially when users keep setting their dashboards to refresh without any legitimate reason? How can I find all the dashboards that use this setting, so that I can chase those users to remove the refresh setting if it is not really needed?
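A sketch of one way to find them through the dashboard XML (the search string assumes the setting appears as a refresh attribute in your Simple XML):

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*refresh=*"
| table title eai:acl.app eai:acl.owner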
I need to extract the contents of the message field, but the leading strings must be ignored; I need everything from after the stdout field. Any ideas how to do this? Examples:

message: 2021-12-02T20:06:11.541111542Z stdout F 2021-12-02 17:06:11,540 Completed 200 OK
message: 2021-12-02T20:06:11.540863953Z stdout F contract: txt (truncated)...]
message: 2021-12-02T20:06:11.540857713Z stdout F clientDocument: txt
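A sketch with rex, capturing everything after the "stdout F " marker (the field name payload is arbitrary):

| rex field=message "stdout F (?<payload>.+)"
| table payload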
I have an alert set up to run every hour and look for any latency over 45 minutes; if a host is over that, send a "Please Investigate" message.

Index=...
| stats count max(_time) as lastTime by host
| eval now=now()
| eval timedelta=round((now-lastTime)/60/60,2)
| eval timedelta=if(timedelta > .75,"Please Investigate", timedelta)
| convert ctime(lastTime) ctime(now)
| sort - timedelta

The problem is that I get this alert email even when the latency is 0.00. What I really need is for the alert to trigger only when it sees the phrase "Please Investigate". I have been unsuccessful in setting this up as a trigger in the Splunk alert GUI.
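One way to gate the email, as a sketch: in the alert's Trigger Conditions choose Custom, and use a secondary search over the results so the alert fires only when at least one row carries the flag:

search timedelta="Please Investigate"

Alternatively, appending | where timedelta="Please Investigate" to the search itself and triggering on "number of results > 0" achieves the same effect.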
Hello, I am trying to export the results of an API search. Currently I am using this curl command:

curl -k -u user:pass https://hostname:8089/services/search/jobs/export?search=$NewQ -o Output-file.csv

I can see that the search completed in the Splunk web client, but I am not able to find the output CSV file that should result from this command. I have checked the $SPLUNK_HOME/var/run/splunk/csv folder after each attempt at using this command, and there has never been a file created there (which to my understanding is where this file is supposed to be created). Any help is greatly appreciated, thank you.
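A sketch of the same call with the pieces that commonly trip this up: -o writes the file to the directory where curl runs (not to $SPLUNK_HOME/var/run/splunk/csv, which Splunk uses for its own exports, not for REST calls), the search string needs URL-encoding and a leading "search" keyword, and output_mode selects CSV (the example query is a placeholder):

curl -k -u user:pass "https://hostname:8089/services/search/jobs/export" \
     --data-urlencode search="search index=_internal | head 10" \
     -d output_mode=csv \
     -o Output-file.csv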
Hello, how would I implement an inline or "Uses Transform" field extraction (please see screenshot below) for the following event (please see sample event below)? Any help will be highly appreciated, thank you so much.

Screenshot (inline field extraction option)

One Sample Event
{"log":"\u001b[0m\u001b[0m05:14:09,516 INFO  [stdout] (default task-4193) 2021-12-02 05:14:09,516 INFO  [tltest.logging.TltestEventWriter] \u003cMODTRANSAUDTRL\u003e\u003cEVENTID\u003e1210VIEW\u003c/EVENTID\u003e\u003cEVENTTYPE\u003eDATA_INTERACTION\u003c/EVENTTYPE\u003e\u003cSRCADDR\u003e192.131.8.1\u003c/SRCADDR\u003e\u003cRETURNCODE\u003e00\u003c/RETURNCODE\u003e\u003cSESSIONID\u003etfYU4-AEPnEzZg\u003c/SESSIONID\u003e\u003cSYSTEM\u003eTLCATS\u003c/SYSTEM\u003e\u003cTIMESTAMP\u003e20211202051409\u003c/TIMESTAMP\u003e\u003cUSERID\u003eAX3BLNB\u003c/USERID\u003e\u003cUSERTYPE\u003eAdmin\u003c/USERTYPE\u003e\u003cVARDATA\u003eCASE NUMBER, CASE NAME;052014011348000,BANTAM LLC\u003c/VARDATA\u003e\u003c/MODTRANSAUDTRL\u003e\n","stream":"stdout","time":"2021-12-02T05:14:09.517228451Z"}
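A sketch of the inline (EXTRACT in props.conf) route, assuming the \u003c / \u003e escapes appear literally in the indexed _raw rather than as decoded < and > characters (the sourcetype name and the choice of fields are placeholders):

[your:sourcetype]
# In _raw each tag reads \u003cEVENTID\u003e...\u003c/EVENTID\u003e, so the
# pattern matches a literal backslash (\\) followed by u003c or u003e.
EXTRACT-eventid = \\u003cEVENTID\\u003e(?<EVENTID>.+?)\\u003c/EVENTID\\u003e
EXTRACT-userid = \\u003cUSERID\\u003e(?<USERID>.+?)\\u003c/USERID\\u003e

The same patterns can be tested first at search time with | rex before committing them to props.conf.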
Splunk Connect for Syslog
This is Splunk's preferred method of ingesting high volumes of data. Details can be located here → https://splunk-connect-for-syslog.readthedocs.io/en/latest/

TCP Data Input
Navigate to Settings -> Data Inputs -> TCP (Add new). In this step, we will configure Splunk to listen on TCP using port 514. NSS only supports TCP, but the destination port is configurable. Most administrators use port 514, as it is the default port for UDP-based syslog. After configuring the SIEM port, click Next.

We're following the above steps but are not able to find the TCP/UDP data input to configure the port to sync with Zscaler NSS. I'm logged into the Splunk Cloud portal as a trial member. Could it be a restriction/privilege issue with my trial account? Please advise.

Thanks
Asif
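For context, a sketch of what that listener would look like in inputs.conf on a customer-managed heavy forwarder (index and sourcetype values are placeholders). Splunk Cloud does not expose direct TCP/UDP data inputs in its UI, so syslog senders such as NSS are normally pointed at a forwarder or an SC4S instance that then forwards to Cloud:

[tcp://514]
index = zscaler
sourcetype = zscalernss-web
connection_host = ip
disabled = 0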
The above is the table (screenshot not included), and the query for it is:

index="wtqlty" source=pdf-fc-002-rh sourcetype="release_pdf_results_json"
| table pdf_name pdf_url pdf_state

Please help.
I have two CSV files. One file has the names of the accounts and the servers where the accounts are added. In the second CSV file I have a lookup breaking down the group members. The Name field is common to both CSV files. e.g.:

Accounts01.CSV

Class  Domain   Hostname   Name
User   robotic  ROB-SVR01  Administrator
Group  robotic  ROB-SVR01  Advanced_users_IT
Group  robotic  ROB-SVR01  Advanced_users_HR

GroupMembers.CSV

Name               member
Advanced_users_IT  user_IT_01 user_IT_02 user_IT_03
Advanced_users_HR  user_HR_01 user_HR_02 user_IT_01

Is there any way to combine both files to match the names, adding a new column showing the members, so the result looks like this?

Class  Domain   Hostname   Name               Members
User   robotic  ROB-SVR01  Administrator      User Account
Group  robotic  ROB-SVR01  Advanced_users_IT  user_IT_01 user_IT_02 user_IT_03
Group  robotic  ROB-SVR01  Advanced_users_HR  user_HR_01 user_HR_02 user_IT_01
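A sketch, assuming both files are uploaded as lookup files under those names:

| inputlookup Accounts01.csv
| lookup GroupMembers.csv Name OUTPUT member AS Members
| table Class Domain Hostname Name Members

lookup joins on the shared Name field and leaves Members empty for rows (like Administrator) that have no match in GroupMembers.csv.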
Hello everyone, I am trying to create queries to show the max and average values of inbound and outbound network traffic (unit: Gbps) of my forwarders. I have already configured the Splunk Add-on for Unix and Linux on my forwarders but don't know which script to enable to collect the data needed. I also installed the Pavo Network Traffic App for Splunk, but don't know how to configure it. For info, my Splunk server is a single-instance deployment. Any ideas?

Thanks!
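For the first part, a sketch of the scripted input in the Splunk Add-on for Unix and Linux that usually provides per-interface throughput counters; enable it in the add-on's local/inputs.conf (verify the script name and default sourcetype against your add-on version; interval and index here are placeholders):

[script://./bin/interfaces.sh]
interval = 60
sourcetype = interfaces
index = os
disabled = 0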
Hi All, we have two Splunk 8.2 environments, and I am in charge of both. On the first environment everything works fine. I'm trying to configure the same thing on the second for another infrastructure. To explain a little: I have two heavy forwarders behind a VIP, and logs are then forwarded to an indexer cluster of two indexers. I have configured syslog forwarding for network devices, and everything works fine over TCP or UDP; my logs show up in the search head, in the right index. But for Firepower it is impossible to index data. I use TCP 10000, and with a tcpdump on the HF I see the logs arrive, but after that nothing. I've checked splunkd.log (and nearly every log file on the servers); there are no errors on the HF or the indexers, and nothing is blocked by a firewall or the network. I don't use the eStreamer add-on or the Firepower add-on, and the first infrastructure works fine with the same configuration. Here are some configurations:

On the indexers:

[fw_ext]
homePath = volume:primary/fw_ext/db
coldPath = volume:primary/fw_ext/colddb
thawedPath = $SPLUNK_DB/fw_ext/thaweddb
repFactor = auto
maxHotBuckets = 10

On the HF:

[tcp://10000]
index = fw_ext
sourcetype = cisco:asa
connection_host = ip
disabled = 0

Paths are empty for the index fw_ext, so I presume the logs don't arrive at the indexers. On the Firepower side, the configuration for the first and second infrastructures is identical, so I presume the problem doesn't come from the device either. The forwarding configuration is good too, since it works for every other network device and server.

Have you guys any ideas? Thanks in advance.
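A sketch of one check that can narrow down where the events stop (the host value is a placeholder): the HF's own metrics show whether data is moving through its pipeline queues:

index=_internal host=your_hf source=*metrics.log* group=queue
| timechart span=5m avg(current_size) by name

A queue that sits at its maximum size points at the stage just after it; for example, a full tcpout queue would suggest the HF cannot deliver to the indexers.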
I would like to send data (output) from Splunk to an external server/cloud/DB; please suggest the best way. It is around 10-15k records every day, and I would like to use that data in another analytics tool, for example Power BI.
Hi Team, I have a .NET application that has N number of threads. I am using the AppDynamics .NET profiler to validate the thread measures by enabling thread correlation. While observing the data in the controller, I have a doubt about how the thread samples are collected: is it through instrumentation or through a sampling technique? @Ryan.Paredez, could you please let me know? Thanks in advance.

^ Post edited by @Ryan.Paredez for formatting
I have a query where I get "STARTED" and "FINISHED" status events for the same methods, e.g.:

index IN (private public) sourcetype IN (X Y) log_entry=method_status method=getCusips status=STARTED
| rename _time as start_time
| table sourcetype method start_time status
| sort start_time

For this query I get, let's say, 3 results where everything is the same for the event except the event _time. I would also like to get the "FINISHED" events, so the same query, only with FINISHED:

index IN (private public) sourcetype IN (X Y) log_entry=method_status method=getCusips status=FINISHED
| rename _time as end_time
| table sourcetype method end_time status
| sort end_time

I will always get the same number of events for both queries. Since the results are sorted, I would need to pair the first STARTED with the first FINISHED, the second STARTED with the second FINISHED, and so on, and get the duration (end_time - start_time), but how? So what I would like to see, if I have 2 STARTED and 2 FINISHED events (and as mentioned, only the time differs between the 2 STARTED events, so I cannot use anything else):

sourcetype  method     start_time  end_time  duration
X           getCusips  12          16        4
X           getCusips  18          20        2

I was thinking to iterate over the events somehow and map them, the 1st to the 1st, 2nd to 2nd, but I have no idea if this is even doable. Hope I have explained it clearly.
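A sketch of one way to pair them with streamstats: number the STARTED events and the FINISHED events independently in time order, then join on that sequence number (the field name pair is arbitrary):

index IN (private public) sourcetype IN (X Y) log_entry=method_status method=getCusips status IN (STARTED FINISHED)
| sort 0 _time
| streamstats count as pair by sourcetype method status
| eval start_time=if(status="STARTED", _time, null())
| eval end_time=if(status="FINISHED", _time, null())
| stats min(start_time) as start_time min(end_time) as end_time by sourcetype method pair
| eval duration=end_time-start_time
| table sourcetype method start_time end_time duration

Because both status streams are counted separately, the nth STARTED and the nth FINISHED share the same pair value and collapse into one row in the stats step.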
Hello. I would like to ingest data from a FireEye HX, viewing the data either in the FireEye app or through our own dashboards. However, although the data is being indexed, the fields are not being extracted/labelled in a useful way. I am running Splunk 8.2.2 on Linux. I have an indexer cluster and an SH cluster. I am using the latest app version, 3.8.8. The FireEye HX is sending data via TCP in CEF format.

On the CM:

# cat etc/master-apps/_cluster/local/inputs.conf
<snip>
[tcp://:1234]
index = fe_data
sourcetype = hx_ce_syslog

# ls etc/master-apps/FireEye_v3
appserver  bin  default  lookups  metadata  README.md  static

I created local versions of props.conf and transforms.conf. In props.conf I uncommented this line as instructed (as we want the data in our own index):

# Uncomment the next line to send FireEye data to a separate index called "fireeye"
TRANSFORMS-updateFireEyeIndex = fix_FireEye_CEF_in, fix_FireEye_CSV_in, fix_FireEye_XML_in, fix_FireEye_JSON_st, fix_HX_CEF_in, fix_HX2_CEF_in

In transforms.conf I changed entries like this to use our index:

[fix_HX_CEF_in]
REGEX=.*:\sCEF\:\d\|mandiant\|mso\|
DEST_KEY=_MetaData:Index
FORMAT=fe_data

Q: Did I need to change FORMAT to use our index if I have specified the index in inputs.conf?
Q: Am I right in thinking I don't need the FireEye app installed on the SHC if I don't want to use the app there? i.e. it is enough for the indexers to use the app's configuration to parse the data.
Q: If the above is correct, does anyone know why the fields are not being extracted as, for example, cef_name?
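One quick check, as a sketch: on an indexer, btool shows which props/transforms stanzas actually apply to the sourcetype, which confirms whether the local overrides are being picked up (the path assumes a default install):

$SPLUNK_HOME/bin/splunk btool props list hx_ce_syslog --debug

Also worth noting: index-time transforms such as the index rewrite run on the indexers, but search-time field extractions like cef_name are applied by the search head, so the app's search-time configuration generally does need to be present on the SHC.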