All Topics

Hi, I have events like this:

1900/10/26|1900/10/25|333|CHECKOUT |U |2222|000|00 |14|111111 |000000000 |0000 | |12345678998|123456789987|1236549877896543 |3333333333333 | |1900/10/25|23:47:18|1900/10/25|23:47:19|1900/10/25|23:47:19|00000000000|000000000000|CTT|WQQ| |12345678|000000325585632|AB| | | | | |000000000000| | |000000000000|00000000|00000000|00000000|00000000| | | | | |null|0|IDD1

How can I separate the pipe-delimited fields at search time (without changing transforms.conf or any other config)? Thanks
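One search-time approach (no props.conf or transforms.conf changes) is to split _raw on the pipe character with the `split` and `mvindex` eval functions. A minimal sketch; the index, sourcetype, and field names for the first few columns are placeholders, not taken from the post:

```
index=my_index sourcetype=my_sourcetype
| eval parts = split(_raw, "|")
| eval date_1 = mvindex(parts, 0),
       date_2 = mvindex(parts, 1),
       code   = mvindex(parts, 2),
       action = trim(mvindex(parts, 3))
| table date_1 date_2 code action
```

`trim()` strips the trailing spaces visible in columns like "CHECKOUT "; extend the `mvindex` list for as many columns as the report needs.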
Looking at an existing alert trigger, I notice the description field includes variables of some sort, e.g. $result.User$ and $result.Message$. Where can I find more information about these variables? What other variables are available to include in the alert description? Thanks, Steve
I have a server where logs are generated on a daily basis in this format:

/ABC/DEF/XYZ/xyz17012022.zip  /ABC/DEF/XYZ/xyz16012022.zip  /ABC/DEF/XYZ/xyz15012022.zip

or

/ABC/DEF/RST/rst17012022.gz  /ABC/DEF/RST/rst16012022.gz  /ABC/DEF/RST/rst15012022.gz

Every time I index a .gz, .tar or .zip file I get this error: "updated less than 10000ms ago, will not read it until it stops changing; has stopped changing, will read it now." This problem was addressed earlier in this post: https://community.splunk.com/t5/Developing-for-Splunk-Enterprise/gz-file-not-getting-indexed-in-splunk/td-p/313840. As suggested there I used crcSalt = <SOURCE>, but I am still facing similar errors.

inputs.conf:

[monitor:///ABC/DEF/XYZ/xyz*.zip]
index = log_critical
disabled = false
sourcetype = Critical_XYZ
ignoreOlderThan = 2d
crcSalt = <SOURCE>
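When archive inputs keep reporting CRC-related messages, one setting sometimes worth trying alongside crcSalt is a longer initCrcLength, since daily archives from the same producer can share identical leading bytes. A hedged sketch — the stanza, paths, index, and sourcetype come from the post, but the initCrcLength value is only an example:

```
[monitor:///ABC/DEF/XYZ/xyz*.zip]
index = log_critical
disabled = false
sourcetype = Critical_XYZ
ignoreOlderThan = 2d
crcSalt = <SOURCE>
# Read more leading bytes before computing the file CRC (default is 256)
initCrcLength = 1024
```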
Hello, I use a dashboard with several post-process searches because I reuse the same index and the same sourcetype:

<search id="erreur">
<query>index=toto sourcetype=tutu:web:error site=$site$
| fields web_error_count</query>
<earliest>$date.earliest$</earliest>
<latest>$date.latest$</latest>
</search>
<search base="erreur">
<query>| stats sum(web_error_count) as web_error_count
| appendpipe [ stats count as _events | where _events = 0 | eval web_error_count = 0 ]</query>
</search>

But sometimes I need to use a given index and sourcetype only once, and in that case I use an inline search in the dashboard. What I need to know is about performance: is it better to use a post-process search or an inline search when we don't have to reuse a specific sourcetype? And when I have two inline searches on the same index with two different sourcetypes, is it better to use a post-process search like this?

<search id="test">
<query>index=toto (sourcetype=tutu:web:error OR sourcetype=titi:url) site=$site$
| fields web_error_count</query>
<earliest>$date.earliest$</earliest>
<latest>$date.latest$</latest>
</search>

Thanks
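For the two-sourcetype case, one common pattern is a single base search covering both sourcetypes, with each panel's post-process narrowing to the sourcetype it needs. A sketch using the post's own names (the url field is an assumed placeholder for whatever the second sourcetype's panels use):

```
<search id="test">
  <query>index=toto (sourcetype=tutu:web:error OR sourcetype=titi:url) site=$site$
| fields sourcetype web_error_count url</query>
  <earliest>$date.earliest$</earliest>
  <latest>$date.latest$</latest>
</search>
<search base="test">
  <query>| search sourcetype=tutu:web:error
| stats sum(web_error_count) as web_error_count</query>
</search>
```

The base search must `| fields` every field a post-process will touch, including sourcetype itself, or the post-process filter finds nothing.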
Hi everyone. I'm expanding my blacklist and I'm having issues with a seemingly simple blacklist line. Here is my current blacklist:

blacklist1 = EventCode="4688" Message="%%1936|%%1938|TokenElevationTypeDefault|TokenElevationTypeLimited"
blacklist2 = EventCode="4673|4674|5447|4656|4658|4664|4690|5379|4627"
blacklist3 = EventCode="4663|4660|4702|4762|4672|4799|4798|4670" Message="Security\sID:\s+NT\sAUTHORITY\SSYSTEM"
blacklist4 = Eventcode="4624" Message="Logon\sType:\s\t5"

Everything works as expected for #1-3, but after adding blacklist4 the forwarder doesn't seem to filter the event. Searching in Splunk with the exact same regex pulls up all the events I want to filter, and the syntax seems to be exactly like blacklist3, which works as intended. Does anyone have any suggestions? Thanks
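For comparison, a hedged sketch of how such a stanza is commonly written — note the key names in these blacklist rules are case-sensitive, so EventCode is spelled with a capital C here, and the Logon Type regex is only illustrative:

```
[WinEventLog://Security]
disabled = 0
blacklist4 = EventCode="4624" Message="Logon\sType:\s+5"
```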
Hi, what is the use case for integrating Splunk with ETL tools? Sending Splunk data to an ETL pipeline, or sending ETL data to Splunk? Any ideas? Thanks
Dear Team,

Greetings!

I need your help and guidance on the following issue. I keep getting this error in the notification messages:

Search peer Splunk-idx4 has the following message: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch. Problem replicating config (bundle) to search peer '10.10.5.106:8089', HTTP response code 500 (HTTP/1.1 500 Error writing to /opt/splunk/var/run/searchpeers/Splunksh01-1642302054.bundle.53d7c4e2bfaedd1d.tmp: No space left on device). Error writing to /opt/splunk/var/run/searchpeers/Splunksh01-1642302054.bundle.fbc779696ccbf76a.tmp: No space left on device (unknown write error)

Even in Search & Reporting, when I run a query it gives this error:

2 errors occurred while the search was executing. Therefore, search results might be incomplete. [Splunk id-03] Failed to read size=3307 event(s) from raw data in bucket='nsoc_fw_ahnlab~703~B239BEEE-90FA-43C8-ADDA-620D3FACAB66' path='/opt/splunk_data/indexes/nsoc_fw_ahnlab/db/hot_v1_703'. Rawdata may be corrupt, see search log. Results may be incomplete! [Splunk id-03] Failed to read size=5030 event(s) from raw data in bucket='nsoc_fw_ahnlab~703~B239BEEE-90FA-43C8-ADDA-620D3FACAB66' path='/opt/splunk_data/indexes/nsoc_fw_ahnlab/db/hot_v1_703'. Rawdata may be corrupt, see search log. Results may be incomplete!

Kindly help and guide me on how to fix the above issue.

Thank you in advance!
As shown in the picture below, one workstation has 4 IP addresses (4 NICs) and sends Windows event logs to a Splunk indexer. When I search the logs collected on the indexer, I can see that the source IP address of the logs is chosen seemingly at random from among the 4 addresses. I don't know by what criteria the source IP address is decided, so I'm asking this question. My questions: 1. By what criteria is the source IP address decided? 2. Is there a way to set the source IP address in the Universal Forwarder? For your information, my network is a standalone network without external connections such as the Web. Kind regards
Hi, I'm trying to count the number of times the specific value "not match" appears in a multi-value field, and search for events where this value appears more than once. For example:

name                 Check                           ID
aaa-1 bbb-2 ccc-3    not match, match, match         6564
ddd-1 eee-2 fff-3    not match, match, not match     7875

The second row should be returned, because the value "not match" appears more than once (>1). I couldn't find a suitable command. I would appreciate any help :)
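One way to express this with the multivalue eval functions, assuming the multi-value field is literally named Check and holds the strings "match" / "not match" (the index is a placeholder):

```
index=my_index
| eval not_match_count = mvcount(mvfilter(match(Check, "not match")))
| where not_match_count > 1
```

`mvfilter` keeps only the values where the expression is true, and `mvcount` of the filtered field gives how many times "not match" occurred in that event.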
Hello everyone, I have a problem receiving IPFIX flows from NSX-T 3.1. Here is a summary of what I did: I checked the firewall and it isn't the problem, because I can see the IPFIX flows with Wireshark on the Splunk server. I use Splunk_TA_stream and splunk_app_stream 8.0.1, and I can receive IPFIX flows from an IPFIX generator (Flowalyzer). I changed the Splunk Stream configuration for the IPFIX fields that NSX-T sends, because some of them are not standard, following these links:

https://emc.extremenetworks.com/content/oneview/docs/analytics/docs/pur_splunk.htm?Highlight=Splunk
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.0/nsxt_30_admin.pdf

Does anybody have experience receiving IPFIX flows from NSX-T?
Hi, I use a dashboard with 17 panels (12 single-value panels and 5 table panels) that works in real time. In this case, real time means I can't use scheduled searches, because I need the latest events every time I open the dashboard. By default my time picker is set to the last 24 hours. The index is always the same but I use 10 different sourcetypes, and I must imperatively use real time. Most of the time I use post-process searches in order to avoid querying the same index and sourcetype many times. The problem I have is slow display: sometimes it works almost fine, but most of the time I get the message "waiting for data" or "waiting for queued job to start". I also think there have been slowness issues behind the indexers for the last 2 days, because other dashboards I tested are slow too. What are the best practices for real-time dashboards, please?
Hi Splunkers! I'm receiving this message from Splunk: "Received event for unconfigured/disabled/deleted index=threathunting with source="source::[T1015] Accessibility Features" host="host::hdc-sec01-siem001" sourcetype="sourcetype::stash". So far received events from 1 missing index(es)". Kindly advise. Thank you.
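The sourcetype=stash part suggests a summary-indexing (collect) search is writing to an index that doesn't exist, or is disabled, on the indexers. A minimal indexes.conf sketch that would create it; the paths follow the usual $SPLUNK_DB layout and retention settings are left at defaults:

```
[threathunting]
homePath   = $SPLUNK_DB/threathunting/db
coldPath   = $SPLUNK_DB/threathunting/colddb
thawedPath = $SPLUNK_DB/threathunting/thaweddb
```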
We use Palo Alto, Barracuda, and McAfee WGs. All perform some form of web filtering/blocking, which I'm now being asked to produce a report on: the top 50 blocked categories. The SPL looks something like:

index IN (Palo, Barra, MCWG) vendor_action="Blocked-URL" earliest=-8d@d latest=-1d@d | top limit=50 category

The problem is that I need to filter out links to a site: for instance, type Betfred into Google and I get two blocks although the human never actually went to Betfred. I've also got the dilemma of multiple images being called from a web page, each being blocked. So, how do you interpret web logs to count only unique visits by a human being to a website, rather than Google lookups or multiple returns while visiting another site? I've tried using dedup against user and URL, but that removes repeat attempts throughout the week along with all the image download requests; it's not very accurate or scientific. There has to be a way to work out that the web request is a link click or a URL entry rather than a page lookup, but I'm at a loss.
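One heuristic sketch: collapse hits to one per user, per destination domain, per short time window, so that a burst of blocked image requests from a single page view counts once. Everything after the first line is an assumption about field names (url, category, user) and the window size:

```
index IN (Palo, Barra, MCWG) vendor_action="Blocked-URL" earliest=-8d@d latest=-1d@d
| eval domain = replace(url, "^(?:https?://)?([^/:]+).*$", "\1")
| bin _time span=5m
| stats values(category) as category by _time user domain
| mvexpand category
| top limit=50 category
```

This deliberately does not try to distinguish a link click from a page-embedded lookup — only to deduplicate the obvious multi-request noise within each window.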
I'm trying to do a line graph using this command: source="filename.csv" sourcetype="csv" | stats sum(intake), values(gender) by academic_year Output:   However, I want the total intake to show the total for each gender, male and female so that my line graph will look something like this:    Thank you for the help!    
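A sketch of one way to get a per-gender series, assuming the CSV has one row per gender per academic_year with the count in intake:

```
source="filename.csv" sourcetype="csv"
| chart sum(intake) as total_intake over academic_year by gender
```

`chart ... over ... by ...` produces one column per gender value, which the line-chart visualization renders as separate lines.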
We have a customer who has Splunk as their main security platform, but now they are trying to onboard other datasets for forensic/compliance/data-retention/application purposes. This doesn't need to be in Splunk as such; any searchable tool like OpenSearch or similar would do. Before looking into such extra tools, I wanted to understand whether Splunk offers any provision that would allow data ingestion at a cheaper cost (not counting toward the main license, or a cheaper license option). So the scenario is: (security + compliance + application data) => Splunk Heavy Forwarder -> (A) security data to Splunk && (B) the rest of the data to a log-retention service. Before going down this avenue, I wanted to check whether Splunk provides such a cheaper license option, i.e. for a log-retention mode or non-critical data. (In the future they may have funding to move it into Splunk, but not for at least 6-8 months.)
Hello Community, I have a lookup file policy_search.csv that holds search criteria for finding specific policy events in my data. The file looks like this:

#, policy, search_criteria
1, policyA, (policy="policyA") OR
2, policyB, (policy="policyB" AND (protocol="X" OR protocol="Y")) OR
3, policyC, (policy="policyC" AND channel="ch1") OR

I want to produce a search like the one below, but using the criteria from the lookup:

index=events
| search (policy="policyA") OR (policy="policyB" AND (protocol="X" OR protocol="Y")) OR (policy="policyC" AND channel="ch1")
| table incident policy protocol channel

How could I do that? The idea is to maintain the search criteria in the lookup file and have changes reflected automatically in our reports. I'm looking for something like:

index=events | search [| inputlookup policy_search.csv | stats values(search_criteria)] | table incident policy protocol channel

I really appreciate any help. Thank you very much! Adan Castaneda
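One approach relies on a subsearch returning a field literally named search, whose value is substituted into the outer search as text. A sketch, which strips the trailing OR from each lookup row and rejoins the criteria explicitly, since `stats values()` dedups and re-sorts and would otherwise scramble where the ORs land:

```
index=events
    [| inputlookup policy_search.csv
     | eval criteria = trim(replace(search_criteria, "(?i)\s+OR\s*$", ""))
     | stats values(criteria) as criteria
     | eval search = "(" . mvjoin(criteria, ") OR (") . ")"
     | fields search]
| table incident policy protocol channel
```

Editing the lookup then changes the report's filter without touching the saved search itself.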
Hi, if there are no results returned I need to display 0 in my single-value panel along with the unit, which is "sec". So I need to display "0 sec", with the formatting options applied, even when there are no results. How can I do this, please?

<single>
<title>Bur</title>
<search base="hang">
<query>| stats perc90(hang_duration_sec) as hang_duration_sec
</query>
</search>
<option name="drilldown">none</option>
<option name="height">85</option>
<option name="numberPrecision">0.0</option>
<option name="rangeColors">["0x53a051","0xf8be34","0xf1813f","0xdc4e41"]</option>
<option name="rangeValues">[0,5,10]</option>
<option name="refresh.display">progressbar</option>
<option name="unit">sec</option>
<option name="useColors">1</option>
</single>
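The appendpipe pattern also works here to emit a literal zero row when the base search returns nothing; a sketch against this panel's own query:

```
<query>| stats perc90(hang_duration_sec) as hang_duration_sec
| appendpipe [ stats count as c | where c = 0
               | eval hang_duration_sec = 0 | fields hang_duration_sec ]</query>
```

The subsearch only produces a row when the preceding stats produced none, so the single-value panel then renders 0 and the unit and range-color options apply to it as usual.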
Hi, the search below returns results:

index=tutu sourcetype=toto runq
| search NOT runq=0.0
| table runq host
| join host
    [ search index=tutu sourcetype=toto
      | fields type host cpu_core
      | stats max(cpu_core) as nbcore by host ]
| eval Vel = (runq / nbcore) / 6

but when I add

| table vel

or

| stats avg(Vel) as Vel

at the end of the search, there are no results. What is wrong, please?
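Field names in SPL are case-sensitive, so `| table vel` cannot see a field created as `Vel`. A sketch of the full search with consistent casing; note that `| table runq host` before the join would also drop any other fields a later step might need:

```
index=tutu sourcetype=toto runq
| search NOT runq=0.0
| table runq host
| join host
    [ search index=tutu sourcetype=toto
      | fields type host cpu_core
      | stats max(cpu_core) as nbcore by host ]
| eval Vel = (runq / nbcore) / 6
| stats avg(Vel) as Vel
```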
Should a non-authenticated user be able to access this endpoint (POST request) https://localhost:8089/services/template/realize and create templates? If not, what could the security impact of this be?