All Topics

I am looking to collect Netflow data on a host where I have installed a Splunk UF along with the Stream add-on. I want to send this data to a client's Splunk indexer on port 9997. The docs say to configure the indexer to receive Stream data on port 9997, yet the "Set up data collection on remote machines" section requires HEC tokens for the indexer. Is there a way to configure the add-on to send Netflow data to the indexers on port 9997 instead of via an HEC token?
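For context, this is a minimal outputs.conf sketch of the kind of plain port-9997 forwarding I would like the Stream data to use on the UF (the indexer hostname here is a placeholder):

[tcpout]
defaultGroup = client_indexers

[tcpout:client_indexers]
server = client-indexer.example.com:9997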
Hi All,

The following search has been created to identify insecure communications. I also need to see whether the end-to-end connection actually succeeds over the insecure protocol. For example, some services are configured in F5 with an HTTP redirection profile: you will observe port 80 traffic on the edge firewall, but F5 then redirects it to HTTPS. Could you please help us achieve this?

(index=paloalto OR index=juniper) (dest_port=20 OR dest_port=22 OR dest_port=23 OR dest_port=53 OR dest_port=139 OR dest_port=80 OR dest_port=445 OR dest_port=3389 OR dest_port=21)
| lookup Port_service.csv dest_port as dest_port OUTPUT service
| stats count values(src_ip) by dest_port service dest_ip transport action
| table values(src_ip) dest_port service transport dest_ip count action
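To make the end-to-end idea concrete, this is a rough sketch of the direction I am imagining: joining the port-80 edge traffic with the load balancer's own logs by client and destination IP. The index name f5 is a placeholder, since I do not know yet how our F5 data is indexed:

((index=paloalto OR index=juniper) dest_port=80) OR index=f5
| eval hop=if(index=="f5", "lb", "edge")
| stats values(hop) as hops values(action) as actions by src_ip dest_ip
| where mvcount(hops) > 1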
Hi All,

I am trying to create a dashboard showing the response time between transactions. For example, let's say I have data output as below:

_time                      Direction   ABCD              Transaction_ID
2021-07-13 18:56:58.487    in          abcd.008.001.08   123456789
2021-07-13 18:56:58.603    out         abcd.008.001.08   123456789
2021-07-13 18:56:59.981    in          abcd.002.001.10   123456789
2021-07-13 18:57:00.062    out         abcd.002.001.10   123456789
2021-07-13 18:57:00.565    out         abcd.002.001.10   123456789

From the output above I would like to calculate the time difference between the first and fourth events (i.e. 2021-07-13 18:57:00.062 - 2021-07-13 18:56:58.487) and the time difference between the second and third events (i.e. 2021-07-13 18:56:59.981 - 2021-07-13 18:56:58.603).

Can someone help me with this query? I highly appreciate your help in this context.
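One direction I have considered, as a sketch only (the index name is a placeholder, and I am not sure this pairs the right rows): collect the per-transaction timestamps in order, then subtract by position.

index=myindex Transaction_ID=*
| sort 0 _time
| stats list(_time) as times by Transaction_ID
| eval diff_1_4 = mvindex(times, 3) - mvindex(times, 0)
| eval diff_2_3 = mvindex(times, 2) - mvindex(times, 1)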
Hello guys,

I have a quick question that has been challenging me. I use this SPL to extract some info:

| stats values(*) as * by CLIENTE_OUTPOST

Sometimes I use list, sometimes I use values... and I want to be able to extract all the values in the multivalue field "PROMOS" into a new field called "ADDED". This is an example.

From this:

CLIENT_OUTPOST   PROMOS                             DATE         VOUCHER
LIZZA_90         UIK_IO 87585 A_IDYD 78545 10584    18-05-2021   XX-PO-89

I want this:

CLIENT_OUTPOST   PROMOS                             DATE         VOUCHER    ADDED
LIZZA_90         UIK_IO 87585 A_IDYD 78545 10584    18-05-2021   XX-PO-89   87585 78545 10584

I will be so thankful if you can help me out. Just for reference, the values will either be strings with characters or strings that are all numbers... I have tried mvfilter and rex without any luck.

Thank you so much guys!

Love,

Cindy
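For reference, the mvfilter direction I attempted looked roughly like this, keeping only the all-digit values of the multivalue field and joining them back with spaces (it has not given me the ADDED column I expect yet):

| eval ADDED = mvfilter(match(PROMOS, "^\d+$"))
| eval ADDED = mvjoin(ADDED, " ")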
I have a table of info with action, feature, version, location, and count. Could anyone help me find the last column, "difference"?

action   feature   version   location   count   difference?
A        f1        v1        WA         120     0
A        f1        v1        OR         110     10
A        f1        v1        CA         115     5
B        f1        v1        AZ         120     0
A        f1        v2        WA         14      1
A        f1        v2        OR         10      5
B        f1        v2        AZ         15      0

A group is identified by the same feature and version combination, so in the example table the first four rows (f1+v1) are one group and the last three rows (f1+v2) are the second group. Within each group, difference = count of the B row - count of the A row. For example:

row 1: difference = count(B, f1, v1, AZ) - count(A, f1, v1, WA) = 120 - 120 = 0
row 2: difference = count(B, f1, v1, AZ) - count(A, f1, v1, OR) = 120 - 110 = 10

The difference of the B row against itself is 0.
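In case it helps frame an answer, here is a rough, untested sketch of the direction I imagine: copy each group's B count onto every row with eventstats, then subtract.

| eventstats max(eval(if(action=="B", count, null()))) as count_B by feature version
| eval difference = count_B - count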
Hello, I am trying to rename some fields pre-index using props.conf and it's not working. Props below:

[onelogin:event]
EVAL-app_name = app
EVAL-src_ip = ipaddr

I also tried using FIELDALIAS to no avail. The props file is in the local dir on the HF (/opt/splunk/etc/apps/splunk_ta_onelogin/local). btool debug shows the intended config:

/opt/splunk/etc/apps/splunk_ta_onelogin/local/props.conf   [onelogin:event]
/opt/splunk/etc/apps/splunk_ta_onelogin/local/props.conf   EVAL-app_name = app
/opt/splunk/etc/apps/splunk_ta_onelogin/local/props.conf   EVAL-src_ip = ipaddr

I also tried putting the props file in system/local, with no effect. How do I troubleshoot this? Thanks!
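For completeness, the FIELDALIAS variant I tried was roughly this, in the same stanza of the same local props.conf:

[onelogin:event]
FIELDALIAS-app_name = app AS app_name
FIELDALIAS-src_ip = ipaddr AS src_ip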
Hi, I've upgraded from Splunk 6.6 to 8.2 (single instance) and all my real-time alerts (per result) keep triggering for the same event every 5 minutes (the throttle period, with usermail as the suppressed field). The only way to stop it is restarting Splunk or deactivating the alert.

I deactivated all alerts and saved searches and left only one alert, producing a single event; the alert is still triggered every five minutes for the same result. It is a simple query on a server log filtering only errors.

I've activated the SavedSplunker debug log and the only strange thing is this message, repeated every minute after the event was produced:

DEBUG SavedSplunker - failed to write suppressed results to /opt/splunk/var/run/splunk/dispatch/rt_scheduler_Z2VybWFuLnNhbnRhbmE_aHlkcmEtYWRtaW4__RMD53954c1af0f5d4e15_at_1626209231_1.144/results.csv.gz

Thanks in advance.
I am trying to update the schedule of Splunk saved searches by calling the REST API from a bash script. I read the app, cron schedule, and search title from a CSV file and run a loop. It is working fine partially: it is changing the schedule only for private searches, not global ones.

#!/bin/bash
INPUT=data.csv
OLDIFS=$IFS
IFS=','
[ ! -f "$INPUT" ] && { echo "$INPUT file not found"; exit 99; }
echo "-----------------------------------------------------" >> output.txt
while read app cron search_name
do
    # URL-encode the spaces in the search title
    SEARCH=${search_name// /%20}
    QUERY="https://localhost:8089/servicesNS/admin/$app/saved/searches/$SEARCH"
    echo "$QUERY" >> output.txt
    echo -e "\n---------------------------------------------------------\n"
    echo -e "---Search Name-->$search_name"
    echo -e "---Rest API URI-->$QUERY"
    curl -i -k -u user:password "$QUERY" -d cron_schedule="$cron" -d output_mode=json >> response.txt
done < "$INPUT"
IFS=$OLDIFS
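The only difference I can think of for globally shared searches is the owner segment of the endpoint; I wondered whether it should be nobody rather than admin for objects that are shared app-wide. A sketch of the variant I mean (unverified):

QUERY="https://localhost:8089/servicesNS/nobody/$app/saved/searches/$SEARCH"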
I am creating a dashboard for my team. So far, I've been able to implement chain searches by modifying the source code. However, they are based on a live base search. My goal is to power the base searches off of a report instead of a live search. Is that possible?
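For reference, this is the shape I am hoping is possible in Simple XML: pointing the base search at an existing report by name instead of an inline query. The report name and the chained query below are placeholders:

<search id="base" ref="My Scheduled Report"></search>
<panel>
  <table>
    <search base="base">
      <query>| stats count by host</query>
    </search>
  </table>
</panel>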
Hi there,

I am working on the following search and somehow cannot append cef_ruleid to the "fit DensityFunction" table result produced by the search macro search_macro_smart($cef_ruleid$):

splunk_server="splunk" index="area" source="area1" sourcetype="dsystem_events"
| stats count by cef_ruleid
| sort - count
| head 85
| map search="search `search_macro_smart($cef_ruleid$)`" maxsearches=85
| join [| makeresults | eval current_id=$cef_ruleid$ | stats values(current_id)]

The search macro search_macro_smart($cef_ruleid$) generates 85 rows of outlier data over the past 45 days, and I need to append cef_ruleid to the macro output on the dashboard so we know which cef_ruleid each detected outlier belongs to.

Your help is appreciated,
mason
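One variant I have been toying with instead of the join is tagging each mapped result with the ruleid inside the map search itself, so the id survives into the output; the quoting/escaping here is my guess:

| map search="search `search_macro_smart($cef_ruleid$)` | eval cef_ruleid=\"$cef_ruleid$\"" maxsearches=85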
Hello,

How can I improve my Splunk query so that only one event is counted per unique host over a 30-day span where we have 500,000,000 matching events? This is the query I have so far:

| tstats count WHERE (index=<my_index> sourcetype=json_data earliest=-30d latest=-0h) BY _time span=1mon, host, address, server

This query returns approximately 600,000,000 events, but I only need to count one of these unique events at the host level. Since I'm using the tstats command to retrieve data, I made sure that indexed fields exist for _time, host, address, and server. My problem is that Splunk first retrieves all of the matching events and then removes the duplicates. Is there a way to retrieve only unique events by host, address, and server?

For example, a host could have the following events over the past 30 days:

_time                 host       address       server
2021-07-13 12:55:08   testenv1   10.10.10.10   store1
2021-07-13 12:55:08   testenv1   10.10.10.10   store1
2021-07-13 12:55:08   testenv1   10.10.10.10   store1
2021-07-13 12:55:08   testenv2   10.10.10.11   store2
2021-07-13 12:55:08   testenv2   10.10.10.11   store2
2021-07-13 12:55:08   testenv2   10.10.10.11   store2

And I want my query to do this:

_time      host       address       server
2021-07    testenv1   10.10.10.10   store1
2021-07    testenv2   10.10.10.11   store2

This is just a sample of my data. In several cases, we have unique hosts that repeat 20,000 times over an hour. I need my Splunk query to display each such record just once, without having to retrieve the other 20,000 events. I also tried to use distinct_count like this, but it still retrieves all of the duplicated events under the Events tab:

| tstats distinct_count WHERE (index=<my_index> sourcetype=json_data earliest=-30d latest=-0h) BY _time span=1mon, host, address, server

I've browsed multiple Splunk threads and I'm just stumped. Thank you.
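To make the target shape concrete, this is roughly the one-pass output I am after (same placeholder index name as above; untested):

| tstats count WHERE (index=<my_index> sourcetype=json_data earliest=-30d latest=now) BY _time span=1mon, host, address, server
| eval _time=strftime(_time, "%Y-%m")
| fields _time host address server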
Does anyone know if additional ports need to be opened to add more DMZ servers like FTP, web, etc.?
All of the .splunkrc examples out there show how to specify user and password (unencrypted!) in the file, but our Splunk administrators here issue authorization tokens instead. Can those be specified in the file and/or on the command line, or does the current code not support that?
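What I would like to be able to write is something along these lines; the token key is purely hypothetical, since I have not found anything like it documented:

host=splunk.example.com
port=8089
scheme=https
# hypothetical key -- this is the question: is anything like this supported?
token=<authorization-token-issued-by-our-admins>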
Ok, we had a need to monitor our Isilon clusters. I looked around and, lo and behold, there's an app for that!

I downloaded the Dell EMC Isilon App, v2.5.0, and the Add-on, v2.7.0. All went well; I followed the instructions and had my first of five clusters added, no problem. My second through fourth additions worked flawlessly. Then came my fifth and LAST cluster. All of my clusters have the same userid/password for authorization; the only thing I changed was the IP address. I received the following error: "list index out of range".

What in the wide world of sports is "list index out of range"? I have tried everything. I have stopped and restarted Splunk. I have removed that IP from the config files, stopped and restarted Splunk. And my only response is this message. The isilonappsetup.conf is getting updated with the device; the password.conf is NOT getting the update for the encrypted password.

Where is the fix? Any help at this point would be great!
I am just wondering if others are running into this same issue. I find that some of my sourcetypes mysteriously just stop for a while. They start up again eventually, but we don't really want huge delays in our data.

The azure:aad:signin sourcetype seems to give me the most trouble. Sometimes it may stop for a few hours, but then it will immediately provide data if I bounce the input. During this time, I am not even getting debug logs for source=*ta_ms_aad_MS_AAD_signins.log.

Most recently when I had an issue I noticed an "HTTPError: 504 Server Error: Gateway Timeout for url" for my aad_risk_detection ingest, so I do suspect network issues play a part in the problem. However, that really doesn't address what is happening to the retries...

Microsoft Azure Add-on for Splunk 3.1.1
Splunk Enterprise 8.0.5
Hi,

I was wondering if I could do two things. I am new to Splunk, so please have mercy on me. I am looking for a query that will search inside a mailbox and look for a certain subject. Once it finds that subject, I would like to extract the recipients to a CSV file for the last 40 days. Is this possible?
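Roughly the shape I am imagining, with placeholder index, sourcetype, subject, and field names, since I do not know how our mail data is indexed in Splunk:

index=mail sourcetype=exchange_messagetracking subject="Some subject*" earliest=-40d
| table _time subject recipients
| outputcsv subject_recipients.csv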
Hello,

This is the query that I am working on. It's showing multiple time entries; how do we get it to filter down to a single entry?

(index=xyz source=abc) SMF30JBN=MC2DC03D SMF30JNM=JOB* SMF30STP=5
| table DATETIME SMF30JBN SMF30STP SMF30JNM SMF30STM

Thank you,
Chinmay.
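Would something like dedup be the right direction here? A sketch of what I mean, untested, since I am not sure which fields define "the same entry" for this data:

(index=xyz source=abc) SMF30JBN=MC2DC03D SMF30JNM=JOB* SMF30STP=5
| dedup SMF30JBN SMF30JNM SMF30STP
| table DATETIME SMF30JBN SMF30STP SMF30JNM SMF30STM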
Greetings Splunkers,

I have a dashboard that "broke" over the weekend. When I run any of the dashboard searches I see these errors:

Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/services/storage/investigation/investigation?count=0&all=true&earliest=-700d&latest=-2d&output_mode=xml from server=https://127.0.0.1:8089 - Bad Request

Failed to fetch REST endpoint uri=https://127.0.0.1:8089/services/storage/investigation/investigation?count=0&all=true&earliest=-700d&latest=-2d&output_mode=xml from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API.

The REST request on the endpoint URI /services/storage/investigation/investigation?count=0&all=true&earliest=-700d&latest=-2d&output_mode=xml returned HTTP 'status not OK': code=400, Bad Request.

Looking up similar questions here leads me to believe that it might be an issue with the REST API path for investigations. Does anyone know if there is a different path for investigations in ES 6.0.2? I am sure I am missing something simple, so don't be afraid to "barney" style this.

Related question: https://community.splunk.com/t5/Splunk-Enterprise-Security/Why-do-we-get-errors-on-the-REST-command-in-the-Investigation/m-p/470951
Hello all,

Does anybody have a working solution to integrate Splunk alerts with Zabbix? I already tried this app (https://splunkbase.splunk.com/app/5272/#/details), but there is no description of how the search should look; or rather, the way it is described does not work as expected.

Kind regards,
Peter
Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/services/storage/investigation/investigation?count=0&all=true&earliest=-700d&latest=now&output_mode=xml from server=https://127.0.0.1:8089 - Bad Request

I'm having an issue with understanding and fixing my REST API call. It has worked previously, and there's no upgrade that I'm aware of. If I modify the above search from "latest=now" to "latest=-3d", the data returns fine. No new data is being written to this URI. Yesterday "latest=-2d" returned data; today it does not. I'm probably not explaining this well, but to me it appears that somewhere in the last few days this API URI broke. Any assistance would be appreciated.
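If it helps anyone reproduce this, the same call expressed as a direct curl against the management port should look roughly like this (credentials are placeholders):

curl -k -u admin:changeme "https://127.0.0.1:8089/services/storage/investigation/investigation?count=0&all=true&earliest=-700d&latest=now&output_mode=xml"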