All Topics

Dear all, I am trying to filter sender email addresses, keeping only those that do NOT contain three specific subdomains of one domain. For example, given these senders:

user1@aaa.domain.com
user2@bbb.domain.com
user355@ccc.domain.com
userxxx@gmail.com
useryyy@top.domain2.com

I want stats to display only userxxx@gmail.com and useryyy@top.domain2.com. I tried adding:

index=* sourcetype="cisco:esa:textmail" OR sourcetype=MSExchange*
| eventstats values(src) AS cs_ip BY icid
[...]
| where mvcount(recipient) > 5 AND sender!="[\w][\w\-\.]+@(?domain.com)"

and also this:

| rex field=sender "[\w][\w\-\.]+@(?<domain>\w[\w\-\.]+[a-zA-Z]{2,5})"
| stats sum(count) AS count BY domain_detected
| eval domain_detected=mvfilter(domain_detected!="*.domain.com")

without success.
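For reference, a minimal SPL sketch of one way to do this kind of exclusion — the match() alternation and the final stats clause are assumptions built from the example addresses above, not the asker's final query:

```spl
index=* sourcetype="cisco:esa:textmail" OR sourcetype=MSExchange*
| where NOT match(sender, "@(aaa|bbb|ccc)\.domain\.com$")
| stats count BY sender
```

Because the where clause drops any sender whose address ends in one of the three subdomains, addresses such as userxxx@gmail.com and useryyy@top.domain2.com survive to the stats.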
Hi, I'm trying to get this add-on working. I've set up a dummy alert with the MS Teams alert action. Each time the alert triggers, splunkd.log shows:

action=ms_teams_publish_to_channel - Alert action script returned error code=5
I need to extract (at search time) a multivalue field from some JSON data in a manner that will allow me to perform additional, multiple regexes on the resulting field, all at search time. I can do this inline easily using:

| spath output=trigger_name path=triggeredComponents{}.triggeredFilters{}.filterType
| spath output=trigger_value path=triggeredComponents{}.triggeredFilters{}.trigger.value
| eval new_trig=mvzip(trigger_name,trigger_value,":")
| mvexpand new_trig
| rex field=new_trig "^Internal destination device name:(?<dest>.*)$"
| rex field=new_trig "^Destination IP:(?<dest_ip>(?:(?:\d{1,3}\.){3}(?:\d{1,3}))|(?:(?:::)?(?:[\dA-Fa-f]{1,4}:{1,2}){1,7}(?:[\d\%A-Fa-z\.]+)?(?:::)?)|(?:::[\dA-Fa-f\.]{1,15})|(?:::))$"

Thanks in advance.

Edit: I already have KV_MODE=JSON in my props.conf
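For the props.conf side of the question, one hedged sketch: a search-time REPORT extraction with MV_ADD can produce a multivalue field straight from the raw JSON, independently of the spath/mvzip chain above. The stanza name, sourcetype, and regex below are hypothetical illustrations, not a tested configuration:

```ini
# transforms.conf -- hypothetical stanza; the regex targets the raw JSON text
[trigger_dest]
REGEX = Internal destination device name:(?<dest>[^"]+)
MV_ADD = true

# props.conf -- <your_sourcetype> is a placeholder
[<your_sourcetype>]
REPORT-trigger_dest = trigger_dest
```

MV_ADD = true is what lets multiple matches in one event accumulate into a multivalue field rather than only the first match being kept.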
Hi, I need to build a query that fetches results based on a condition:

index=<myindex> host=<myhost>
| rex field=_raw ".*TimeInMs=(?<TimeInMs>\d+)"
| table host, TimeInMs

In my case, I need only those host values where TimeInMs is greater than 120000. I'd appreciate your help with the correct query.
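A sketch of one way to apply that condition — tonumber() is used defensively, since rex extracts string values, and the final stats is an assumption about the desired output shape:

```spl
index=<myindex> host=<myhost>
| rex field=_raw "TimeInMs=(?<TimeInMs>\d+)"
| where tonumber(TimeInMs) > 120000
| stats max(TimeInMs) AS TimeInMs BY host
```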
Hi All, we run searches against logs that return IP addresses as part of the dataset. We basically want to know what network and VLAN a given address belongs to, so I created a CSV file that contains the following:

network       vlan   name
10.1.1.0/24   12     Server Network
10.1.2.0/24   13     Printer Network

I'd like to pull in the CSV data and perform a cidrmatch against it using each IP address the search comes across. If there is no match, I want the fields to just return "No Data" so we can then go and update the CSV with anything missing. As a test I've done the following:

| inputlookup Network_VLAN_Names.csv
| fields network vlan name
| where NOT isnull(network)
| eval dest_ip="10.1.1.21"
| foreach network [eval subnet=if(cidrmatch('<<FIELD>>', dest_ip), <<FIELD>>, "No Match")]
| search subnet!="No Match"
| table _time dest_ip vlan name
| sort _time asc

The first issue is that if there is no match, the row isn't returned at all; I just want particular fields in the returned row to reflect the VLAN and friendly name of the network (if available in the CSV), not for the whole row to disappear. Also, when I've tried:

<base search>
| append [| inputlookup Network_VLAN_Names.csv | fields network vlan name | where NOT isnull(network)]
| foreach network [eval subnet=if(cidrmatch('<<FIELD>>', dest_ip), <<FIELD>>, "No Match")]
| search subnet!="No Match"
| table _time dest_ip vlan name
| sort _time asc

I get nothing back. There's probably a simple solution to this, but I'm not seeing it! Any help would be much appreciated.
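For reference, a hedged sketch of the usual pattern for this problem: define a lookup over the CSV (here hypothetically named network_vlan_names) whose lookup definition sets match_type = CIDR(network), then every event keeps its row and unmatched events can be filled in afterwards:

```spl
<base search>
| lookup network_vlan_names network AS dest_ip OUTPUT vlan name
| fillnull value="No Data" vlan name
| table _time dest_ip vlan name
| sort 0 _time
```

The CIDR matching happens inside the lookup definition rather than in a foreach/cidrmatch loop, and fillnull supplies "No Data" for addresses with no matching subnet.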
Hello, I need to map forti_action to the action field according to the CIM model, but some values are missing. For example:

Pass = allowed
monitored = deferred
dropped = blocked
pba-close = ?
perf-stats = ?
roll-log = ?
Pba-create = ?

Can you help? Thank you.
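Whatever the final mapping turns out to be, it is typically applied with an eval case(). A sketch using only the mappings already listed above — the unresolved values are deliberately left as "unknown" rather than guessed:

```spl
| eval action=case(
    forti_action=="pass",      "allowed",
    forti_action=="monitored", "deferred",
    forti_action=="dropped",   "blocked",
    true(),                    "unknown")
```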
Hi Splunkers! I have a lookup table (lookup_table_1) that periodically pulls external CSV data and updates itself via a script. I need to add a couple of columns that I can update manually alongside the automatically updated data. I was thinking of creating another lookup (lookup_table_2) that is generated by querying lookup_table_1 and adding two extra columns (given that lookup_table_2 already has fields a, b, c, d, e created):

| inputlookup lookup_table_1.csv
| append [| inputlookup lookup_table_2.csv]
| dedup Id
| outputlookup lookup_table_2.csv

However, this removes the values from the manual fields (d, e).

lookup_table_1 (dynamic):

field A        | field B        | field C
dynamic_value  | dynamic_value  | dynamic_value

lookup_table_2 (dynamic + 2 manual fields):

field A        | field B        | field C        | field D       | field E
dynamic_value  | dynamic_value  | dynamic_value  | static_value  | static_value

Is there a way to have a dynamic lookup that updates periodically while preserving two manual columns that I can edit by hand, without them being overwritten each time the lookup updates itself? Thanks in advance! Regards
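One commonly used shape for this, as a sketch: rebuild the table from lookup_table_1 and copy the manual columns across from the previous version of lookup_table_2 before writing it back. It assumes an Id key exists in both files and that the manual columns are named field_D and field_E (placeholders):

```spl
| inputlookup lookup_table_1.csv
| lookup lookup_table_2.csv Id OUTPUT field_D field_E
| outputlookup lookup_table_2.csv
```

Rows new in lookup_table_1 get empty manual columns (ready to be filled in by hand), while existing rows keep whatever was entered manually.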
Hi, I'm trying to check whether any one of the fields has a specific value (here the value is "No") and, if so, create a field with that value. For example:

Field1   Field2   Result field
Yes      No       No
No       Yes      No
No       No       No
Yes      Yes      N/A

Please suggest how I can achieve this.
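A minimal sketch of one way to express this logic, using the field names from the table above:

```spl
| eval Result=if(Field1=="No" OR Field2=="No", "No", "N/A")
```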
Hello, I'm looking for details on the indexed_kv_limit parameter following an upgrade from 7.x to 8.x. After the upgrade, I saw a warning message from my indexers saying:

The search you ran returned a number of fields that exceeded the current indexed field extraction limit. To ensure that all fields are extracted for search, set limits.conf: [kv] / indexed_kv_limit to a number that is higher than the number of fields contained in the files that you index.

When I inspect the job, I notice that many lookups not related to my sourcetype are loaded by AutoLookupDriver. These lookups are configured by various TAs (Palo Alto, Cisco, etc.). After the lookups load, more than 400 fields appear in the "Final required field list". Are the required fields related to the lookups that were loaded? (From my understanding: yes.) However, I don't understand why the search doesn't limit the lookups to the sourcetype of the logs. Is it possible to limit this loading? Regards.
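For reference, the warning itself names the setting it wants changed; a sketch of that change (the value 800 is an arbitrary illustration chosen to exceed the ~400 fields mentioned above, not a recommendation):

```ini
# limits.conf
[kv]
indexed_kv_limit = 800
```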
Hi All, as the "Google Analytics App for Splunk" has been deprecated and is no longer supported, may I please know what apps others are using as a replacement to ingest Google Analytics data into Splunk? Thanks in advance. Kind Regards, AG
Hello, I have the search below, which should graph the count of error messages grouped by criticality; the visualisation is "single value" with trellis split by criticality. It all works as long as values are found. When there are no events for one criticality value, that trellis panel is not displayed; when events for both criticality values are missing, the "no results found" message is displayed. I'm looking for a way to simulate the fillnull function in the case of missing events; I have tried the solutions with makeresults and appendpipe (as described here, here and here), but none worked for me. The goal is to have zeroes for each time period automatically calculated by timechart where events are missing. I guess the count column cannot be initialised as long as there is no value for the selected time period (the "search criticality = ..." filter). Cheers

index=<index> source=<source>
| rex ".\d{3}Z\s(app|batchrun\s-\s\w+)\s(?<loglevel>1|2|3|4|5)\s"
| eval criticality=case(loglevel == "1", "error", loglevel == "2", "warning", loglevel == "3", "info", loglevel == "4", "debug")
| search criticality = error OR criticality = info OR criticality = warning
| timechart count by criticality
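One hedged variant worth trying: name the expected series explicitly in fillnull after the timechart, so the columns are created (filled with zeroes) even when no events of that criticality exist in the time range. A sketch based on the search above:

```spl
index=<index> source=<source>
| rex ".\d{3}Z\s(app|batchrun\s-\s\w+)\s(?<loglevel>1|2|3|4|5)\s"
| eval criticality=case(loglevel=="1","error", loglevel=="2","warning", loglevel=="3","info", loglevel=="4","debug")
| timechart count BY criticality
| fillnull value=0 error warning info
```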
Hello, it looks like upgrading Splunk as root from the tar.gz changed the ownership of our files; is this normal behaviour?

Before:

[root@xhost ~]# ll /OPT/siem/splunk/
total 2396
drwxr-xr-x  4 siem siem    4096 Oct 17  2018 bin
-r--r--r--  1 siem siem      57 Oct 17  2018 copyright.txt
drwxr-xr-x 16 siem siem    4096 Jun  6  2019 etc
drwxr-xr-x  3 siem siem      44 Oct 17  2018 include
drwxr-xr-x  6 siem siem    4096 Oct 17  2018 lib
-r--r--r--  1 siem siem   61779 Oct 17  2018 license-eula.txt
drwxr-xr-x  3 siem siem      58 Oct 17  2018 openssl
-r--r--r--  1 siem siem     844 Oct 17  2018 README-splunk.txt
drwxr-xr-x  3 siem siem      86 Oct 17  2018 share
-r--r--r--  1 siem siem 2365100 Oct 17  2018 splunk-7.1.4-5a7a840afcb3-linux-2.6-x86_64-manifest
lrwxrwxrwx  1 siem siem       9 May 28  2019 var -> /VAR/siem

After:

[root@xhost tmp]# ll /OPT/siem/splunk/
total 4616
drwxr-xr-x  4 10777 xgroup    4096 Jan  8  2020 bin
-r--r--r--  1 10777 xgroup      57 Jan  8  2020 copyright.txt
drwxr-xr-x 16 10777 xgroup    4096 Jan  8  2020 etc
-rw-r--r--  1 10777 xgroup       0 Jan  8  2020 ftr
drwxr-xr-x  3 10777 xgroup      44 Jan  8  2020 include
drwxr-xr-x  7 10777 xgroup    4096 Jan  8  2020 lib
-r--r--r--  1 10777 xgroup   62762 Jan  8  2020 license-eula.txt
drwxr-xr-x  3 10777 xgroup      58 Jan  8  2020 openssl
-r--r--r--  1 10777 xgroup     844 Jan  8  2020 README-splunk.txt
drwxr-xr-x  4 10777 xgroup     108 Jan  8  2020 share
-r--r--r--  1 siem  siem   2365100 Oct 17  2018 splunk-7.1.4-5a7a840afcb3-linux-2.6-x86_64-manifest
-r--r--r--  1 10777 xgroup 2270678 Jan  8  2020 splunk-7.3.4-13e97039fb65-linux-2.6-x86_64-manifest
lrwxrwxrwx  1 siem  siem         9 May 28  2019 var -> /VAR/siem

Thanks.
Hello, I have used the OneClassSVM algorithm for anomaly detection, and after applying the fit command I have a trained model. Now I want to retrain the model every day. All the blog posts say to "schedule the training" or "retrain the model", but I didn't find any explicit option to do so. Do I have to schedule my training search using the "save as alert" option, or is there another way of doing this? Kindly share your ideas.
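For reference, retraining with MLTK is usually just the fit search saved as a scheduled report; a sketch with placeholder names (my_ocsvm_model, the index, and the feature fields are not from the question):

```spl
index=<your_index> earliest=-24h
| fit OneClassSVM <feature_field_1> <feature_field_2> into my_ocsvm_model
```

Each scheduled run overwrites my_ocsvm_model, and any search using | apply my_ocsvm_model then picks up the most recent training.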
When I'm trying to pull data using curl on my Mac with this command:

curl -s -ku admin:admin -o ?Users/Vivek/Desktop/09012020.csv https://localhost:8089/servicesNS/admin/search/search/jobs/export -d search=\"search index=network host=SGC01* OR host=APR01* earliest=09/01/2020:00:00:00 latest=09/01/2020:23:59:59 | rex field=_raw "^[^ \n]* (?P<host>[^ ]+)\s+%(?P<mnemonic>[^ ]+)[^ \n]* \[(?P<fault_code>[^\]]+)[^\[\n]*\[(?P<state>[^\]]+)\]\[(?P<severity>[a-z]+)\]\[(?P<dn_mo>.*)\]" | stats count by host mnemonic fault_code state severity dn_mo\" -d output_mode=csv --data-urlencode -d preview="False"

I get an error pointing at the rex segment:

b"/bin/sh: -c: line 1: syntax error near unexpected token `?P'\n/bin/sh: -c: line 1: `]* (?P<host>[^ ]+)\\s+%(?P<mnemonic>[^ ]+)[^ '\n"

I need help solving this, as the customer has to pull ~10M records of summary stats by various categories. @Ayn @micahkemp @harsmarvania57
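A hedged sketch of one way to sidestep the shell mangling the `(?P<...>)` groups: keep the entire SPL in a single-quoted variable (so the shell never interprets it) and let curl URL-encode it. The endpoint, credentials, and a shortened version of the rex are taken from the question, not verified:

```shell
# Build the search in single quotes: the shell passes ?P<...> through untouched.
SEARCH='search index=network host=SGC01* OR host=APR01* | rex field=_raw "^[^ \n]* (?P<host>[^ ]+)" | stats count by host'

# Hand it to curl via --data-urlencode (line commented out here, since it needs a live Splunk):
# curl -s -ku admin:admin https://localhost:8089/servicesNS/admin/search/search/jobs/export \
#      --data-urlencode "search=$SEARCH" -d output_mode=csv -o 09012020.csv

echo "$SEARCH"
```

The key differences from the failing command: the search string is never exposed to the shell's word splitting, and --data-urlencode is given the actual search payload instead of standing alone.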
I am not able to log in using the IP address.
Can someone help me calculate the percentage of each index's log volume over 24 hours? I have pulled the counts like this:

index=* earliest=-24h@h latest=now | stats count by index

but I need this as a percentage.
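A sketch of one way to turn those counts into percentages — eventstats computes the grand total so each row can be divided by it:

```spl
index=* earliest=-24h@h latest=now
| stats count BY index
| eventstats sum(count) AS total
| eval percent=round(count * 100 / total, 2)
| fields index count percent
```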
Hi Community, I need to find the login hours of a user/employee. Can we see those results in Splunk? Please help me out on this. Thanks.
I am using the Splunk Docker image to start a heavy forwarder with this command:

docker run -d -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=mydummypw" -e "SPLUNK_ROLE=splunk_heavy_forwarder" --name hforwarder splunk/splunk:latest

I would like this heavy forwarder to run with the Forwarder license, but when I check with

splunk list licenser-groups

I see that a Trial license is selected instead of the Forwarder one:

Enterprise
	is_active:0
	stack_ids:
Forwarder
	is_active:0
	stack_ids:
		forwarder
Free
	is_active:0
	stack_ids:
		free
Lite
	is_active:0
	stack_ids:
Lite_Free
	is_active:0
	stack_ids:
Trial
	is_active:1
	stack_ids:
		download-trial

I could of course connect to the container and switch the license group with

splunk edit licenser-groups Forwarder -is_active 1

but this requires a restart, and I would like to achieve this with only parameters to the docker run command. Any idea if this is possible? If I add the SPLUNK_LICENSE_MASTER_URL parameter to make my heavy forwarder a slave to a license server, it works, but I am looking for a way to use the Forwarder license instead.
We are planning to migrate from ArcSight to Splunk, collecting via universal forwarders (UF) and syslog to a heavy forwarder (HF). How many UFs do we need to install? Do we need one UF for each data source?
Hi guys, my search works very well in the search bar as shown below, but when I try to put it under a tab in a Splunk dashboard it does not work. Could anyone please help? P.S. I'm using the latest tabs.js and tabs.css from git. TIA, kavya