All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I'm trying to analyze WatchGuard firewall logs received by Splunk via syslog on UDP port 514. I found a regex that works well in a search, using the following rex command to extract the fields I need:

* | rex field=_raw ".*\s(?<HOSTNAME>\S+)\s(?<PROCESS>\S+):\s.*\s(?<DISPOSITION>(Allow|Deny))\s(?<SRC_INT>\S+)\s(?<DST_INT>\S+)\s.*(?<PR>(icmp|igmp|tcp|udp)).*\s(?<SRC_IP>[[octet]](?:\.[[octet]]){3})\s(?<DST_IP>[[octet]](?:\.[[octet]]){3})\s(?<SRC_PORT>\d{1,5})\s(?<DST_PORT>\d{1,5})\s.*\((?P<RULE_NAME>.*)?(-00)\)$" | table HOSTNAME,PROCESS,DISPOSITION,SRC_INT,DST_INT,PR,SRC_IP,DST_IP,SRC_PORT,DST_PORT,RULE_NAME

The result is a table, as you can see in the attachment. Now, to optimize all of this, I would like these fields to be extracted automatically, without needing a rex command in every search I run. I tried the Splunk field extraction wizard, both with the automatic regex generator and by copy-pasting my search regex, but with no success. I suppose I missed something somewhere? Thanks for your help, Florent
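One way to make this automatic is a search-time EXTRACT in props.conf, keyed to the data's sourcetype, reusing the regex from the search above. A minimal sketch — the sourcetype name `watchguard:syslog` is an assumption; use whatever your syslog input actually assigns:

```ini
# props.conf (search head) -- sourcetype name below is an assumption
[watchguard:syslog]
EXTRACT-watchguard_fields = .*\s(?<HOSTNAME>\S+)\s(?<PROCESS>\S+):\s.*\s(?<DISPOSITION>(Allow|Deny))\s(?<SRC_INT>\S+)\s(?<DST_INT>\S+)\s.*(?<PR>(icmp|igmp|tcp|udp)).*\s(?<SRC_IP>[[octet]](?:\.[[octet]]){3})\s(?<DST_IP>[[octet]](?:\.[[octet]]){3})\s(?<SRC_PORT>\d{1,5})\s(?<DST_PORT>\d{1,5})\s.*\((?P<RULE_NAME>.*)?(-00)\)$
```

After a restart (or a debug/refresh), the fields should appear in verbose/smart search mode without any rex command.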
Hello everyone, I'm having a little difficulty with merging cells. The idea is that if rows in the table have the same value for JobID, the entries for Start Time and End Time should be merged.

index=MYINDEX host=MYHOST sourcetype=regway:server status=COMPLETED | eval "End Time"=strftime(_time,"%c") | append [ search index=MYINDEX host=MYHOST sourcetype=MYINDEX:server "Created metadata export job with id:" | rex "id: (?<JobID>\w{1,}-\w{1,}-\w{1,}-\w{1,}-\w{1,})" | eval "Start Time"=strftime(_time,"%c")] | sort JobID | table "Start Time", "End Time" , JobID

My result currently looks like this:
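Instead of append plus manual cell-merging, searching both event types at once and aggregating by JobID collapses each job onto one row. A sketch, which assumes the COMPLETED events also contain an "id:" string the same rex can pick up — verify that against your data:

```spl
index=MYINDEX host=MYHOST ((sourcetype=regway:server status=COMPLETED) OR "Created metadata export job with id:")
| rex "id: (?<JobID>\w+-\w+-\w+-\w+-\w+)"
| stats earliest(_time) as start latest(_time) as end by JobID
| eval "Start Time"=strftime(start,"%c"), "End Time"=strftime(end,"%c")
| table JobID, "Start Time", "End Time"
```

With stats, there is one row per JobID by construction, so no cells need merging afterwards.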
Hi, I need help with the below. There is a description column which contains values like:

Description
process_1_details : name : msmg cpu:43% memory:4% disk:67%
process_2_details : name : hefe cpu:0% memory:8% disk:56%

I want a search query that extracts the name, cpu, memory, and disk fields and produces this kind of output:

name    cpu        memory    disk
msmg    msmg43%    msmg4%    msmg67%
hefe    hefe0%     hefe8%    hefe56%

I want the process name to be attached to all the related details.
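One way to get one row per process is to extract all occurrences with `max_match=0`, zip the multivalues together, and expand — a sketch, assuming the field is literally named Description and the whitespace matches the sample above:

```spl
... | rex field=Description max_match=0 "name : (?<name>\S+)\s+cpu:(?<cpu>\d+%)\s+memory:(?<memory>\d+%)\s+disk:(?<disk>\d+%)"
| eval row=mvzip(mvzip(mvzip(name, cpu, "|"), memory, "|"), disk, "|")
| mvexpand row
| eval parts=split(row, "|"),
       name=mvindex(parts,0),
       cpu=mvindex(parts,0).mvindex(parts,1),
       memory=mvindex(parts,0).mvindex(parts,2),
       disk=mvindex(parts,0).mvindex(parts,3)
| table name cpu memory disk
```

The concatenation with `mvindex(parts,0)` is what prefixes each metric with the process name (e.g. msmg43%), matching the desired output.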
Hi all, I am using $results_link$ in an alert.   Something changed in the last few months and when clicking on the link, it brings up a loadjob which only shows stats in a table and no longer displays the events themselves.   Does anyone have any insight as to what might have changed and if there is a way to revert this?
In the network topology diagram, we need to show the status of each service node. How can I use SPL to get this result? For example, we have proxy1, proxy2, server1, server2, db1, and db2.
What is the best way to import Log Analytics logs from Azure into Splunk? Is there any way to do it without using Event Hubs? We are using Splunk Enterprise version 7.3.4; we also have a heavy forwarder on Splunk Enterprise version 8.1.
Hi all, I am using the slack_alerts add-on to send Slack messages. It allows the use of tokens in the message body, as referenced here: https://docs.splunk.com/Documentation/Splunk/8.1.3/Alert/EmailNotificationTokens (e.g. AWS CloudTrail events). I am trying to figure out a way that I can either: a) have various fields renamed to a single field (such as a security group ID or a bucket name rewritten to a field called "resource") so that I can reference that field in my token ($result.resource$), or b) have a dynamic token that looks at various fields. Currently I just have a line in my alert that looks like:

*Resource:* `$result.requestParameters.policyName$ $result.requestParameters.policyArn$ $result.requestParameters.groupId$ $result.responseElements.groupId$ $result.requestParameters.groupDescription$ $result.requestParameters.bucketName$`

but this is messy, and the results show up with spaces before and after the value. I was wondering if strcat is the right search command to use for this; the search runs in real time, so it shouldn't jumble 2 or 3 resources into a single line? Hoping someone will have some useful insight.
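Rather than strcat, `coalesce` fits case (a): it returns the first non-null of its arguments, so each event yields exactly one clean value. A sketch appended to the alert search, using the field names from the token line above:

```spl
... | eval resource=coalesce('requestParameters.policyName', 'requestParameters.policyArn',
        'requestParameters.groupId', 'responseElements.groupId',
        'requestParameters.groupDescription', 'requestParameters.bucketName')
```

The single quotes are needed because the field names contain dots. The Slack message can then reference just `$result.resource$`. If a single event can legitimately carry several of these fields at once, `mvappend` plus `mvjoin` would be the alternative.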
Greetings Splunk Community, I am looking to build a dashboard panel showing the count of incidents which have passed the SLA timelines described below:

Urgency (Count)   SLA Time
Critical          Older than 1 week
High              Older than 2 weeks
Medium            Older than 3 weeks
Low               Older than 4 weeks

It would be nice if this could also show the owner name for these late incidents. I tried something like this, but I am not sure if I am on the right track or not:

| `es_notable_events` | where status_group="New" OR status_group="Open" | stats sum(count) as count by status,urgency,owner | `get_reviewstatuses` | chart sum(count) as count over status_label by urgency | rename status_label as status | `sort_chart`

Your kind assistance is highly appreciated. Best Regards, Izzat.
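One way to express the per-urgency SLA thresholds is a `case` on urgency followed by an age filter — a sketch; the lowercase urgency values are an assumption from typical ES notables, so check them against your data:

```spl
| `es_notable_events`
| search status_group="New" OR status_group="Open"
| eval sla_seconds=case(urgency="critical", 7*86400,
                        urgency="high",    14*86400,
                        urgency="medium",  21*86400,
                        urgency="low",     28*86400)
| where _time < now() - sla_seconds
| stats count by urgency, owner
```

Each incident is kept only if it is older than its urgency's SLA window; the final stats gives the late-incident count broken down by owner, as requested.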
Hi, I cannot find anything similar to this issue; please guide me or forward any related thread to me. Thanks.

my search...... | table product, product_tag | sort product, product_tag

Current output:

product    product_tag
product1   A_tag1
product1   A_tag2
product1   B_tag1
product2   C_tag1
product3   D_tag1
product3   D_tag2

Desired output:

product    product_tag1   product_tag2   product_tag3
product1   A_tag1         A_tag2         B_tag1
product2   C_tag1         NA             NA
product3   D_tag1         D_tag2         NA

Regards, Yu Ming
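One way to pivot the tags into numbered columns is to number each tag within its product with streamstats and then reshape with xyseries — a sketch, assuming the results are already sorted by product as in the search above:

```spl
...
| sort product, product_tag
| streamstats count as seq by product
| eval col="product_tag".seq
| xyseries product col product_tag
| fillnull value="NA"
```

streamstats gives each tag a per-product sequence number (1, 2, 3, ...), xyseries turns those into the product_tag1/product_tag2/... columns, and fillnull fills the gaps with NA.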
How do I create a DataFrame from a Splunk search result? I have tried the code below, and I can create a DataFrame using pandas, but when retrieving each column I am not able to get the desired output:

import pandas as pd
df = pd.DataFrame.from_records(tuple_generator, columns = tuple_fields_name_list)

Could you please help?
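A frequent cause of columns coming out wrong with `from_records` is that the tuple order doesn't match the column list. Building the frame from per-row dicts sidesteps that, since pandas aligns on the field names — a minimal sketch; the hard-coded rows are hypothetical stand-ins for the dicts a Splunk results reader (e.g. splunklib's JSONResultsReader) yields:

```python
import pandas as pd

# Hypothetical rows; in practice, collect these from your search
# job's results reader instead of hard-coding them.
rows = [
    {"host": "web01", "count": "42"},
    {"host": "web02", "count": "17"},
]

df = pd.DataFrame(rows)                   # one column per field name
df["count"] = pd.to_numeric(df["count"])  # Splunk returns strings; cast as needed
```

Because each dict carries its own field names, there is no separate column list to keep in sync with the value order.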
I have a Python script which runs daily and saves its output in a CSV file. For example: if I run the script today, it gets the data from April 1st to today's date (04/21/2021), and if I run it tomorrow, it gets the data from April 1st to tomorrow's date (04/22/2021), with a different file name every time it runs. I want to onboard this data into Splunk without duplicate data. How can we do that? We have a field called start_time, which we use as the time field. For example: start_time field value = 04/21/2021 10.30, or start_time field value = 04/22/2021 10.30. Thanks in advance.
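Since each file re-contains all data from April 1st onward, Splunk would index the overlap again. One option is to change the script to emit only rows newer than the last run, tracked in a checkpoint file — a sketch under the assumption that start_time always uses the "04/21/2021 10.30" layout shown above (the file and field names are illustrative):

```python
import os
from datetime import datetime

CHECKPOINT = "last_run.txt"
FMT = "%m/%d/%Y %H.%M"  # matches values like "04/21/2021 10.30"

def newer_rows(rows, checkpoint_path=CHECKPOINT):
    """Keep only rows whose start_time is after the last checkpoint."""
    last = datetime.min
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            last = datetime.strptime(f.read().strip(), FMT)
    fresh = [r for r in rows if datetime.strptime(r["start_time"], FMT) > last]
    if fresh:
        # Advance the checkpoint to the newest row we emitted.
        newest = max(datetime.strptime(r["start_time"], FMT) for r in fresh)
        with open(checkpoint_path, "w") as f:
            f.write(newest.strftime(FMT))
    return fresh
```

With only the new rows written to the CSV, Splunk never sees the repeated April 1st-onward history, so no search-time dedup is needed.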
In the Splunk Add-on for Infoblox, the record_type field does not always parse correctly, especially in instances where RRSIG records are returned. Here is an instance where the parsing works fine:

Apr 21 08:41:27 xxx.xx.xxx.xx named[10396]: 21-Apr-2021 08:41:27.792 client xx.xxx.xx.xx#60438: UDP: query: self.events.data.microsoft.com IN A response: NOERROR + self.events.data.microsoft.com. 2064 IN CNAME self-events-data.trafficmanager.net.; self-events-data.trafficmanager.net. 6 IN CNAME skypedataprdcolcus14.cloudapp.net.; skypedataprdcolcus14.cloudapp.net. 3 IN A xx.xx.xxx.xxx;

record_type = CNAME
record_type = CNAME
record_type = A

However, here is an instance where it does not work, and where an RRSIG record_type is returned — an extracted timestamp always appears instead:

Apr 21 08:51:12 xxx.xxx.x.xx named[18234]: 21-Apr-2021 08:51:12.351 client xxx.xxx.xx.xx#36237: UDP: query: data.lseg.com IN A response: NOERROR +EDV data.lseg.com. 300 IN A xxx.xxx.x.xx; data.lseg.com. 300 IN RRSIG A 13 3 300 20210422075112 20210420055112 34505 lseg.com. FR6lVgPJ3AI6aLoo+XCebNkTxORPa+pKk6CbFo0bs4Q/hnvCl3nN5E+9N6JRTUKe22XqOYFtoGBv1/9Q89ldaA==;

record_type = A
record_type = RRSIG
record_type = 20210422075112

The Infoblox App version is 2.0.0. Thanks!
Hi Splunk Team, we need to create a Splunk entitlement ID based on a purchase order, applicable to an enterprise license. Kindly suggest how to proceed.
Hi All, based on this query I want to filter out WinEventLog events before they are ingested into Splunk, so that I can save some license. The condition is: for two sourcetypes and for particular event codes (4624, 4634), I want to filter out logs where Account Name is "-" or "*$", for a particular set of hosts:

index=abc sourcetype IN (winev,wind) EventCode IN (4624,4634) Account_Name="-" Account_Name="*$" host=*xyz*

So do we need to write a blacklist stanza in inputs.conf, or do we need to specify props and transforms separately? For all Windows client machines, we ingest WinEventLog via a deployment server, which pushes configurations to all Windows machines, so kindly help with the stanza for this.
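For Windows event log inputs on universal forwarders, the simplest place to drop events is an inputs.conf blacklist, which matches regexes against event fields before the data leaves the forwarder. A sketch — the stanza name and the Message layout are assumptions, so test the regex against your actual events first:

```ini
# inputs.conf (pushed by the deployment server)
[WinEventLog://Security]
# Drop 4624/4634 events whose Account Name is "-" or ends in "$"
blacklist1 = EventCode="4624|4634" Message="Account Name:\s+(?:-|[^\r\n]*\$)"
```

The per-host scoping (host=*xyz*) is handled naturally here: attach this app only to the matching server class in the deployment server, rather than encoding hosts in the stanza. A props/transforms nullQueue approach would also work, but it runs on the indexers/heavy forwarders, after the data has already been sent over the wire.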
Hi @gcusello, we have an app, "Splunk for Netxpress"; when scanning it with the Splunk Platform Upgrade Readiness App, we are getting the following error. Can you please help me get rid of it? Regards, Rahul Gupta
Hello Everyone, I hope everyone is doing well. It turns out I have to find how many times a customer that has made a purchase has contacted the corporate line to complain. I can generate a table that shows me the customers that have made an actual purchase, by ID, and I can also make a table of the customers that have called the line to make a complaint. The first table would look like this:

ID      PRODUCT_BOUGHT
41545   x_98
1428    x_98
4856    x_91
8596    x_91
1254    x_96

and the second table would look like this:

ID      CASE_NUMBER
41545   001
4856    002
4856    003
41545   004
1254    005
1254    006

The issue is that I need to count how many times each ID has called the line, and also bring in the product bought and the case numbers received on the line. BUT I can only think of a multisearch to create the table, and I can't seem to find any documentation on how to do the cross-validation, or even the count, and I feel like I'm hitting my head against a wall. This is the multisearch that I am using:

| multisearch [| search index="pur.ok" | search status="PAY.ok" | fields ID PRODUCT_BOUGHT] [| search index="corp_line" | search in_calls="corp_cx_cases" | fields ID CASE_NUMBER]

but since I am a Python user trying to learn Splunk, I can't seem to find a way to obtain this table. Desired results:

ID      CALLS_ON_THE_LINE   PRODUCT_AND_CASES
41545   2                   x_98-001-004
4856    2                   x_91-002-003
1254    2                   x_96-005-006
1428    0                   x_98
8596    0                   x_91

Thank you a million to everyone that can help me out with guidance or documentation on how to achieve this; from the bottom of my heart, thank you so much!!!!!! I'm sending you a big hug from Texas!
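Since both event sets share the ID field, a single search with stats can do the cross-match without multisearch — a sketch built from the index and field names in the post (verify them against your data):

```spl
(index="pur.ok" status="PAY.ok") OR (index="corp_line" in_calls="corp_cx_cases")
| stats values(PRODUCT_BOUGHT) as product, values(CASE_NUMBER) as cases by ID
| eval CALLS_ON_THE_LINE=coalesce(mvcount(cases), 0)
| eval PRODUCT_AND_CASES=mvjoin(mvappend(product, cases), "-")
| table ID, CALLS_ON_THE_LINE, PRODUCT_AND_CASES
```

For an ID with no calls, `cases` is null, so `mvcount` returns null and `coalesce` turns it into 0; `mvappend` skips the null, leaving just the product, which matches the 1428/8596 rows in the desired output.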
I have set up a scheduled summary report that calculates results every 5 minutes. However, irregularly, about twice a month, the same scheduled run is performed twice and the result is doubled. Looking at the internal log, there is a log showing the same schedule executed twice with a slight time difference.

Splunk version 7.3.5; 3 search heads sh1, sh2, sh3 (clustered); 4 indexers (clustered); an indexer cluster master; a heavy forwarder.

Example:
01:10 summary result 5
01:15 summary result 5
01:20 summary result 5
01:20 summary result 5 (duplicate)
01:25 summary result 5

Is there anything I need to check? Are there any known issues or workarounds for this? I am using dedup to filter.
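The double runs can be confirmed, and attributed to a search head, from the scheduler's own internal log — a sketch; substitute your saved search name:

```spl
index=_internal sourcetype=scheduler savedsearch_name="your_summary_search"
| stats count values(host) as heads by scheduled_time
| where count > 1
```

If two different hosts show up for the same scheduled_time, the search head cluster ran the job on two members at once, which points toward captain election or quorum issues around those times rather than a problem with the search itself.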
Hi, we've noticed an issue after upgrading Splunk from 7.3.2 to version 8.0.5. We're in a clustered environment with 3 indexers and 3 SHs, and we're forcing Python 3.7 on all of the Splunk servers. Since the upgrade, all 3 indexers' indexing queues have been full, as you can see in the screenshot below. There have been no changes to the amount of data we're ingesting since the upgrade; however, a few of the apps did need to be upgraded to be Python 3.7 compatible.

Here is what we've tried:
- Restarting: alleviates the queues for a little bit, but they inevitably get blocked again.
- Increasing the queue sizes: we've increased them from the default to 80 MB, which increased the time until the queues were blocked. Noticeably, one indexer would block first, then the others would get blocked some minutes later.
- Validated all the permissions and ownership.

There have been two things of note that could be related to this issue:
- A graph shows that the indexing pipe directly correlates with the FwdDataReceiverThread. Unfortunately, there doesn't seem to be much info concerning this thread out there. We've been getting the following errors concerning it: ERROR Watchdog - No response received from IMonitoredThread=0x7fb47f7feb50 within 8000 ms. Looks like thread name='FwdDataReceiverThread' tid=6894 is busy !? Starting to trace with 8000 ms interval.
- There have also been a number of crash logs on the indexers since the upgrade. They seem to be related to a particular search, so I'm not sure if this is related to the issue.

Does anyone have any ideas about these items?
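Queue fill levels over time can be charted from the indexers' own metrics, which helps show whether the blockage originates in the indexing queue itself or cascades from upstream queues — a sketch:

```spl
index=_internal source=*metrics.log group=queue name=indexqueue
| timechart span=5m avg(current_size_kb) by host
```

Swapping `name=indexqueue` for parsingqueue, aggqueue, or typingqueue shows each stage of the pipeline; the stage whose queue fills first is usually the bottleneck.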
Does anyone know if this software requires System Administrator or elevated privileges to run?  Not to install, to run?
Hello Everyone, I hope you are safe and sound. I'm extracting values from events that come in JSON format, and after that I want to create a table where I can see each ID and the product they bought from the store, but I always get the same value repeated twice within a single cell, and when I try to do a stats count, it is also counted twice. This is my code:

index=purchase_store_x1 | rex mode=sed "s/^(?i)(?:(?!{).)+//g" | spath | search BodyJson.name="pdone.ok" | rename BodyJson.product.ID as PRODUCT | rename BodyJson.ID.CX.Unique as ID | table ID PRODUCT | sort -ID

So instead of getting the ID associated with the purchased product, I get something like this:

ID            PRODUCT
31254 31254   XUI45 XUI45
54581 54581   XUI8 XUI45
47851 47851   XUIE58 XUI45

How can I get a normal table without the same value repeated twice in the cell? THANK YOU SO MUCH for your help.
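Doubled values like this usually mean the same JSON fields are extracted twice — once by automatic key-value extraction (KV_MODE=json on the sourcetype) and once by the explicit `spath` — so each field becomes multivalued with two identical entries, and stats counts both. One way to collapse them, as a sketch:

```spl
... | eval ID=mvdedup(ID), PRODUCT=mvdedup(PRODUCT)
| table ID PRODUCT
```

Alternatively, if the sourcetype already has KV_MODE=json, dropping the explicit `| spath` from the search removes the duplication at its source instead of patching it afterwards.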