All Topics

I need help getting a step-by-step process for upgrading a Splunk on-prem heavy forwarder (HF).
Hi, I have CLIENT_CONNECT_AUTH_FAIL log entries in Splunk for different usernames. I would like to send an alert when the count of CLIENT_CONNECT_AUTH_FAIL entries for a specific username exceeds a threshold (say, 10 within the last 5 minutes), and an alert should be generated for every user that exceeded the threshold (one alert per username). To achieve that I've used `| stats count by username` with the trigger condition `search count > 10`, but the results are not as expected. Consider an example where the stats query produces the following results:

username    count
user1       20
user2       15
user3       5

If I set `Trigger` = `Once`, I get an alert only for user1, even though the count of CLIENT_CONNECT_AUTH_FAIL for `user2` also exceeded the threshold. If I set `Trigger` = `For each result`, I get an alert for every username, even though the threshold is not exceeded for `user3`. What is the right way to do this in Splunk?
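For reference, the approach I had in mind (assuming the data sits in an index I'll just call main here) is to move the threshold into the search itself and then trigger per result:

index=main CLIENT_CONNECT_AUTH_FAIL earliest=-5m
| stats count by username
| where count > 10

with `Trigger` = `For each result`, so only the users above the threshold remain in the result set and each of them fires its own alert. Is that the intended pattern, or is there a better way?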
<panel depends="show_panel">
  <title>xyz</title>
  <search base="main_base">
    <progress>
      <condition>
        <set token="show_panel">true</set>
      </condition>
      <condition>
        <unset token="show_panel"></unset>
      </condition>
    </progress>
    <query>| search months="$tok_mon$"</query>
  </search>
</panel>

Can anyone explain what this XML is doing with the token show_panel, and what the <progress> tag does here? This is drilldown XML code.
Hi, I want to create the following Excel-style table using Splunk. The first 3 columns are based on the output of a query, something like this:

<query>index=mfpublic sourcetype=SMF100 IFCID=1 DB2_SHARING_GROUP_NAME=$ssid_tok$ DB2_SUBSYSTEM="DBXH" | table _time DB2_SSID CPU_accumulated </query>

The last column is the result of a math operation between one row and the next. In Excel, column D has the formula C2-C3 in the first row, then C3-C4 in the second, then C4-C5, and so on.

(A) Time           (B) DB2_SSID  (C) CPU_accumulated  (D) Difference
17-1-2022 11:20    DBXH          355363188            19569
17-1-2022 11:19    DBXH          355343619            19437
17-1-2022 11:18    DBXH          355324182            21579
17-1-2022 11:17    DBXH          355302603            22657
17-1-2022 11:16    DBXH          355279946            19793
17-1-2022 11:15    DBXH          355260153            -

Is it possible to do this math operation between columns from different rows to create another column? After having this "column D" I want to create a line chart based on it. Thanks a lot for your help!
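For reference, a minimal sketch of the calculation I am after, assuming the results are sorted oldest-first so each row can see the previous sample (field names taken from my query above):

index=mfpublic sourcetype=SMF100 IFCID=1 DB2_SHARING_GROUP_NAME=$ssid_tok$ DB2_SUBSYSTEM="DBXH"
| table _time DB2_SSID CPU_accumulated
| sort 0 _time
| streamstats current=f window=1 last(CPU_accumulated) as prev_CPU
| eval Difference = CPU_accumulated - prev_CPU
| sort 0 -_time

The last sort is only there to display the table newest-first like in Excel; the Difference column would then feed the line chart.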
Hi All, one of our Windows users contacted us and reported that the Splunk UF agent is failing frequently on his machine. When we investigated, we found the following error details in the Splunk _internal logs.

component=ExecProcessor
01-17-2022 09:22:11.436 +0000 ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" ERROR splunk-winevtlog - WinEventLogChannel::~WinEventLogChannel: Failed to checkpoint for channel='Windows PowerShell'
01-17-2022 09:22:11.436 +0000 ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" ERROR splunk-winevtlog - WinEventLogChannel::saveBookMark: Failed to update Windows Event Log bookmark, channel='Windows PowerShell
01-17-2022 09:22:11.436 +0000 ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" ERROR splunk-winevtlog - WinEventLogChannel::~WinEventLogChannel: Failed to checkpoint for channel='Security'

component=AuthenticationManagerSplunk
01-17-2022 09:22:19.839 +0000 ERROR AuthenticationManagerSplunk - Either password or seed file not found! No users configured!

component=Metrics
01-17-2022 09:22:20.245 +0000 ERROR Metrics - Metric with name thruput:idxSummary already registered

component=TcpOutputFd
01-17-2022 04:45:00.175 +0000 ERROR TcpOutputFd - Read error. An existing connection was forcibly closed by the remote host.

component=PipelineComponent
01-17-2022 05:21:48.213 +0000 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?

component=FileClassifierManager
01-17-2022 09:22:23.780 +0000 WARN FileClassifierManager - The file 'C:\Program Files\SplunkUniversalForwarder\var\log\splunk\C__Program Files_SplunkUniversalForwarder_bin_splunk-winevtlog_exe_crash-2021-08-13-08-22-30.dmp' is invalid. Reason: binary

component=TailReader
01-17-2022 09:22:23.780 +0000 INFO TailReader - Ignoring file 'C:\Program Files\SplunkUniversalForwarder\var\log\splunk\C__Program Files_SplunkUniversalForwarder_bin_splunk-winevtlog_exe_crash-2021-08-13-08-22-30.dmp' due to: binary

component=WatchedFile
01-17-2022 09:22:23.498 +0000 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='C:\Program Files\SplunkUniversalForwarder\var\log\splunk\C__Program Files_SplunkUniversalForwarder_bin_splunk-winevtlog_exe_crash-2021-10-01-08-22-12.log'.

I have checked the TRUNCATE values and found all sourcetypes are within the configured limit (default 10000) except the one below:

WinEventLog:Microsoft-Windows-PowerShell/Operational  21132

The Splunk UF agent version is 7.0 and the Splunk Enterprise indexer version is 8.2.2. Please guide me on what troubleshooting steps need to be taken in order to resolve this issue.
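In case the truncation turns out to be related, this is the kind of props.conf override I was considering (the 30000 limit is just a number I picked as an example, not something from our current config):

[WinEventLog:Microsoft-Windows-PowerShell/Operational]
# hypothetical example: raise the per-event truncation limit for this sourcetype
TRUNCATE = 30000

Is that the right direction, or is the truncation unrelated to the splunk-winevtlog.exe errors above?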
Hello, I have a question regarding the use of join results within an eval if. I have a couple of responses, to which I am joining their preceding requests (written in another source):

index="index1" sourcetype="sourcetype1" Response...
| table rcvTime Command
| join type=left left=response right=request usetime=true earlier=true where response.ID=request.ID
    [search index="index2" sourcetype="sourcetype2" Request ... | table rcvTime Command | sort -_time]

The issue is that I sometimes get a wrong match, i.e. a request that is not connected to the response and happened a few days earlier. They get matched because they share the same device ID. That is why I am trying to add an eval on the time difference. If I use the field request.command within the if, I receive empty results:

index="index1" sourcetype="sourcetype1" Response...
| table rcvTime Command
| join type=left left=response right=request usetime=true earlier=true where response.ID=request.ID
    [search index="index2" sourcetype="sourcetype2" Request ... | table rcvTime Command | sort -_time]
| ... (commands calculating timediff)
| eval request.command=if(timediff<300,request.command,"")

If I first move the value into a field whose name contains no dot, it works properly:

index="index1" sourcetype="sourcetype1" Response...
| table rcvTime Command
| join type=left left=response right=request usetime=true earlier=true where response.ID=request.ID
    [search index="index2" sourcetype="sourcetype2" Request ... | table rcvTime Command | sort -_time]
| ... (commands calculating timediff)
| rename request.command as requestCommand
| eval requestCommand=if(timediff<300,requestCommand,"")

Does anyone have an idea why I cannot use request.command within the eval (while other commands accept it)? Thanks and best regards
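For context, from what I understand about eval treating a bare dot as the concatenation operator, I would expect to have to wrap the dotted field name in single quotes, something like the line below, but I have not confirmed this is the intended way:

| eval 'request.command' = if(timediff < 300, 'request.command', "")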
We got a request to attach 2 CSV files to 1 report. What is the best way to do it?
Hi Experts, can Microsoft Dynamics 365 data be ingested into the Microsoft Security and Compliance portal and, from there, be fed into and extracted by Splunk? I'm looking to implement this and wonder whether it could be a potential solution. Splunk has an add-on for Microsoft Office 365, where we can use the Office 365 Management Activity API to retrieve information (https://docs.splunk.com/Documentation/AddOns/released/MSO365/About). Also, looking at this link (https://docs.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-schema#enum-auditlogrecordtype---type-edmint32), PowerApps/D365 data can be ingested into the Microsoft Security and Compliance Center portal, and from there the data can be extracted over the same APIs into a SIEM (Splunk) via AuditRecordType, e.g. 45 for PowerApps portal events or 21 for Dynamics 365 events. Is my understanding correct? Appreciate your response. Regards, Somnath
Hi folks, does Splunk offer a command-line interface? Using Splunk through the browser is fine when working with one alert, but it becomes cumbersome when managing lots of alerts. If there is indeed a command-line interface, what is required to use it? Is there documentation for it? Thanks, Steve
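To make the question concrete, this is roughly the kind of interaction I am hoping exists (the credentials and paths here are placeholders I made up):

# run an ad-hoc search from the command line
$SPLUNK_HOME/bin/splunk search 'index=_internal | head 5' -auth admin:changeme

# inspect saved searches (which include alerts) over the management port instead of the UI
curl -k -u admin:changeme https://localhost:8089/services/saved/searches

Is managing alerts this way supported, or is the REST API the recommended route?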
Hi, I have events like this:

1900/10/26|1900/10/25|333|CHECKOUT |U |2222|000|00 |14|111111 |000000000 |0000 | |12345678998|123456789987|1236549877896543 |3333333333333 | |1900/10/25|23:47:18|1900/10/25|23:47:19|1900/10/25|23:47:19|00000000000|000000000000|CTT|WQQ| |12345678|000000325585632|AB| | | | | |000000000000| | |000000000000|00000000|00000000|00000000|00000000| | | | | |null|0|IDD1

How can I separate the pipe-delimited fields at search time (without changing transforms or any other config)? Thanks
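For example, something along these lines is what I am imagining, purely at search time (the index, sourcetype, and field names are placeholders I made up):

index=my_index sourcetype=my_sourcetype
| eval parts = split(_raw, "|")
| eval record_date = trim(mvindex(parts, 0)), business_date = trim(mvindex(parts, 1)), txn_type = trim(mvindex(parts, 3))
| table record_date business_date txn_type

Is there a cleaner way than indexing into the split result field by field?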
Looking at an existing alert trigger, I notice the description field includes variables of some sort, e.g. $result.User$ and $result.Message$. Where can I find more information about these variables? What other variables are available to include in the alert description? Thanks, Steve
I have a server where logs are generated on a daily basis in this format:

/ABC/DEF/XYZ/xyz17012022.zip
/ABC/DEF/XYZ/xyz16012022.zip
/ABC/DEF/XYZ/xyz15012022.zip

or

/ABC/DEF/RST/rst17012022.gz
/ABC/DEF/RST/rst16012022.gz
/ABC/DEF/RST/rst15012022.gz

Every time I index a .gz, .tar, or .zip file I get this error: "updated less than 10000ms ago, will not read it until it stops changing; has stopped changing, will read it now." This problem was addressed earlier in this post: https://community.splunk.com/t5/Developing-for-Splunk-Enterprise/gz-file-not-getting-indexed-in-splunk/td-p/313840. As suggested there, I have used crcSalt = <SOURCE>, but I am still seeing similar errors.

inputs.conf:

[monitor:///ABC/DEF/XYZ/xyz*.zip]
index = log_critical
disabled = false
sourcetype = Critical_XYZ
ignoreOlderThan = 2d
crcSalt = <SOURCE>
Hello, I use a dashboard with different post-process searches because I reuse the same index and the same sourcetype:

<search id="erreur">
  <query>index=toto sourcetype=tutu:web:error site=$site$
  | fields web_error_count</query>
  <earliest>$date.earliest$</earliest>
  <latest>$date.latest$</latest>
</search>
<search base="erreur">
  <query>| stats sum(web_error_count) as web_error_count
  | appendpipe [ stats count as _events | where _events = 0 | eval web_error_count = 0 ]</query>
</search>

But sometimes I need to use a given index and sourcetype only once, and in that case I use an inline search in the dashboard. What I need to know is about performance: is it better to use a post-process search or an inline search when we don't have to reuse a specific sourcetype? And when I have two inline searches on the same index with two different sourcetypes, is it better to use a post-process search like this?

<search id="test">
  <query>index=toto (sourcetype=tutu:web:error OR sourcetype=titi:url) site=$site$
  | fields web_error_count</query>
  <earliest>$date.earliest$</earliest>
  <latest>$date.latest$</latest>
</search>

Thanks
Hi everyone, I'm expanding my blacklist and I'm having issues with a seemingly simple blacklist line. Here is my current blacklist:

blacklist1 = EventCode="4688" Message="%%1936|%%1938|TokenElevationTypeDefault|TokenElevationTypeLimited"
blacklist2 = EventCode="4673|4674|5447|4656|4658|4664|4690|5379|4627"
blacklist3 = EventCode="4663|4660|4702|4762|4672|4799|4798|4670" Message="Security\sID:\s+NT\sAUTHORITY\SSYSTEM"
blacklist4 = Eventcode="4624" Message="Logon\sType:\s\t5"

Everything seems to work as expected for #1-3, but when I add blacklist4 the forwarder doesn't seem to filter the event. Searching in Splunk with the exact same regex pulls up all the events I want to filter, and the syntax seems to be exactly like blacklist3, which works as intended. Does anyone have any suggestions? Thanks
Hi, what is the use case for integrating Splunk with ETL tools? Sending Splunk data to an ETL tool? Sending ETL data to Splunk? Any ideas? Thanks
Dear Team, greetings!

I need your help and guidance on the following issue. I keep getting these errors in the notification messages:

Search peer Splunk-idx4 has the following message: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch.

Problem replicating config (bundle) to search peer '10.10.5.106:8089', HTTP response code 500 (HTTP/1.1 500 Error writing to /opt/splunk/var/run/searchpeers/Splunksh01-1642302054.bundle.53d7c4e2bfaedd1d.tmp: No space left on device).

Error writing to /opt/splunk/var/run/searchpeers/Splunksh01-1642302054.bundle.fbc779696ccbf76a.tmp: No space left on device (unknown write error)

Even in Search & Reporting, when I run a query it gives this error:

2 errors occurred while the search was executing. Therefore, search results might be incomplete.
[Splunk id-03] Failed to read size=3307 event(s) from raw data in bucket='nsoc_fw_ahnlab~703~B239BEEE-90FA-43C8-ADDA-620D3FACAB66' path='/opt/splunk_data/indexes/nsoc_fw_ahnlab/db/hot_v1_703'. Rawdata may be corrupt, see search log. Results may be incomplete!
[Splunk id-03] Failed to read size=5030 event(s) from raw data in bucket='nsoc_fw_ahnlab~703~B239BEEE-90FA-43C8-ADDA-620D3FACAB66' path='/opt/splunk_data/indexes/nsoc_fw_ahnlab/db/hot_v1_703'. Rawdata may be corrupt, see search log. Results may be incomplete!

Kindly help and guide me on how to fix the above issue. Thank you in advance!
As shown in the picture below, one workstation has 4 IP addresses (4 NICs) and sends Windows event logs to a Splunk indexer. When I search the logs collected on the indexer, I can see that the source IP address of the logs is chosen seemingly at random from among the 4 addresses. I don't know what criteria determine the source IP address, hence this question. My questions: 1. By what criteria is the source IP address decided? 2. Is there a way to set the source IP address in the Universal Forwarder? For your information, my network is a standalone network without external connections such as the web. Kind regards
Hi, I'm trying to count the number of times a specific value, "not match", appears in a multi-value field, and to search for events where this value appears more than once. For example:

name: aaa-1, bbb-2, ccc-3    Check: not match, match, match        ID: 6564
name: ddd-1, eee-2, fff-3    Check: not match, match, not match    ID: 7875

I want to find the second event, because there the value "not match" appears in Check more than once (>1). I haven't found a suitable command. I would appreciate some help :)
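For reference, the kind of expression I was hoping exists, assuming the multi-value field is literally named Check:

| eval not_match_count = coalesce(mvcount(mvfilter(match(Check, "not match"))), 0)
| where not_match_count > 1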
Hello everyone, I have a problem receiving IPFIX flows from NSX-T 3.1. Here is a summary of what I have done: I checked the firewall settings, and they don't appear to be the problem, because I can see the IPFIX flows with Wireshark on the Splunk server. I use Splunk_TA_stream and splunk_app_stream 8.0.1, and I can receive IPFIX flows sent by an IPFIX generator (Flowalyzer). I changed the Splunk Stream configuration for the IPFIX fields that NSX-T sends, because some of them are not standard. I based the Splunk Stream configuration changes on these links: https://emc.extremenetworks.com/content/oneview/docs/analytics/docs/pur_splunk.htm?Highlight=Splunk and https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.0/nsxt_30_admin.pdf. Does anybody have experience receiving IPFIX flows from NSX-T?