All Topics



I am using the following in a configuration being distributed to several remote syslog servers. It works as expected on all UFs except one. From that single UF, the 'host' field in the indexed events is being reported as "PaloAlto" instead of the 4th path segment as expected. I searched through all of the .conf files on the UF manually and used btool, looking for a stray "host_segment" entry or something hidden in another config that would cause this, but found none. Am I missing something obvious to the rest of you?

[monitor:///app01/logs/ASA]
whitelist = .+\.log$
host_segment = 4
index = syn_netfw
sourcetype = cisco:asa
ignoreOlderThan = 2d
disabled = 0

[monitor:///app01/logs/PaloAlto]
whitelist = .+\.log$
host_segment = 4
index = syn_netfw
sourcetype = pan:log
ignoreOlderThan = 2d
disabled = 0
Hi, we recently deployed IT Essentials Work with the latest Exchange Content Pack. We also deployed the three add-ons for Exchange on the Exchange nodes (including IIS and OWA logs). We are now validating the ITSI dashboards; External Logins Map is one of them, and we realized that the extracted source IP (c_ip field) corresponds to our load balancer (XXX.XXX.XXX.XXX) instead of the remote host (the IP shown at the end of the event). Below is an example of an Exchange event that reaches our Splunk infrastructure:

2021-10-08 12:22:31 XXX.XXX.XXX.XXX POST /Microsoft-Server-ActiveSync/default.eas Cmd=Ping&User=---%5n---&DeviceId=-------&DeviceType=Outlook&CorrelationID=<empty>;&cafeReqId=c586f22d-14cd-4449-be95-fe666b30c92e; 443 -------\----- 192.168.X.X Outlook-iOS-Android/1.0 - 200 0 0 181382 52.98.193.109

We use the official TA-Exchange-2013-Mailbox, TA-Exchange-ClientAccess and TA-Windows-Exchange-IIS add-ons. I found the definition of the c_ip field in transforms.conf and props.conf in TA-Windows-Exchange-IIS, but I don't see any specific regex for its correct extraction. Could someone tell me how to proceed to fix this parsing issue so the dashboards show correct information? Many thanks.
Hi, my regex was like below:

search | rex field=_raw "Status=(?<Status>\"\w+\s+\w+\".*?)," | stats count by Status

My output is like below:

Status          count
"No Service"    250
Service         500

But I need the output as below:

Status          count
No Service      250
Service         500

I need the Status value "No Service" to appear as No Service, without the double quotes, in the output. Please let me know what I am missing here.
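One possible fix (an untested sketch, keeping the original rex unchanged) is to strip the surrounding quotes after extraction with eval trim():

```
search
| rex field=_raw "Status=(?<Status>\"\w+\s+\w+\".*?),"
| eval Status=trim(Status, "\"")
| stats count by Status
```

trim(X, Y) removes any characters in Y from both ends of X, so quoted and unquoted values are normalized before the stats count.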
Hello, we are planning an upgrade of Splunk Enterprise. We currently have 2 nodes on version 8.0.6 and would like to move to version 8.2.2. Among others, we use the following apps:

Splunk Add-on for Blue Coat ProxySG Splunk_TA_bluecoat-proxysg 3.6.0
Splunk Add-on for F5 BIG-IP Splunk_TA_f5-bigip 3.1.0
Pulse Connect Secure Splunk_TA_pulse_connect_secure 1.1.3
WALLIX Bastion TA-WALLIX_Bastion 1.0.4

Should we plan to upgrade these apps along with Splunk Enterprise 8.2? We also use Universal Forwarders on version 7.0. Will they still be compatible with Splunk 8.2? Thanks, Jean-Christophe Hermitte
Hi folks, we have file monitoring on a text file that gets updated once a week; Splunk then reads the data from that file. Today we faced a situation where the log file was updated with today's data but no logs were sent to Splunk. We checked splunkd.log and didn't find any info related to that specific log file; the Splunk UF was connected to the HF, everything was working fine, and other data was flowing to Splunk as usual. However, after a Splunk restart the data was sent to Splunk. I was wondering: if a log file is not updated for some time, will Splunk ignore the file from monitoring until a restart? We also have ignoreOlderThan = 5d in the stanza; could this be related? We are aware that ignoreOlderThan is used to ignore log data older than the specified time; we just wanted to make sure this is not the case here.
I have a task where I am successful in getting result sets from nodes that are present in my Splunk instance. However, I can't find a way in SPL syntax to flag a node that doesn't exist with a Yes or No value in another field, i.e.:

Node         present
Appserver    No
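A common pattern (a sketch, untested; the index, sourcetype, and lookup name expected_nodes.csv are placeholders for whatever holds your full node list) is to append the expected nodes with a zero count and flag anything that contributed no events:

```
index=main sourcetype=node_logs
| stats count by Node
| append [| inputlookup expected_nodes.csv | fields Node | eval count=0]
| stats sum(count) as events by Node
| eval present=if(events > 0, "Yes", "No")
| table Node present
```

Nodes that only appear via the appended lookup row end up with events=0 and therefore present="No".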
Hello all, while connecting to the Splunk Cloud application through an ODBC DSN configuration I am getting an HTTP Protocol Error 404. Can someone suggest whether the error is caused by a firewall issue, or whether the link should use an IP address? NOTE: Through a browser I am able to log in to the application with my credentials.
Hi all, is there any app, method, or guidance for ingesting emails directly from an O365 mailbox? A use case for us would be:

- We have a mailbox which receives phishing reports
- SOAR logs onto the mailbox, downloads the unread mails and turns them into "Events"
- A playbook begins working on these events: checking URLs, checking to/from addresses, maybe further triage based on O365 logs or whatever
- Detonate mail/attachments in a sandbox and capture network/process/file related results, e.g. Cuckoo
- The playbook decides if the mail is okay, suspicious, or phishing (or integrates with another tool to get that information, e.g. Proofpoint)
- All information is made available to the analyst who reviews

In order to kick these off we'd need to be able to INGEST the email to begin with, but I don't see any way to do that at present. If it doesn't exist I will write my own app for it, but I don't want to reinvent the wheel if I don't have to. Thanks!
Hi! I have the following data and would like to check, for records with the same ID, whether one record's CREATED_DATE falls between the CREATED_DATE and RESOLVED_DATE of another. So in the example, the first record was created on 10-04 and resolved on 10-07, and the second record with the same ID was created on 10-05 while the first one was still open. Can we do this kind of check in Splunk?

ID    CREATED_DATE           RESOLVED_DATE
123   2021-10-04 19:30:35    2021-10-07 15:13:16
123   2021-10-05 16:11:25    2021-10-15 12:05:32
456   2021-03-05 10:10:13    2021-05-05 11:05:21

We'd need another column, say CHECK, that says "overlap" when the second record was created within the range of the first one with the same ID. Thank you very much in advance!
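One way to sketch this is with streamstats, carrying the previous record's resolved time per ID. This is an untested sketch: it assumes the dates are strings in the format shown (hence the strptime conversions), and it only compares each record against the one immediately before it, not against all earlier records with the same ID:

```
| eval created=strptime(CREATED_DATE, "%Y-%m-%d %H:%M:%S")
| eval resolved=strptime(RESOLVED_DATE, "%Y-%m-%d %H:%M:%S")
| sort 0 ID created
| streamstats current=f window=1 last(resolved) as prev_resolved by ID
| eval CHECK=if(isnotnull(prev_resolved) AND created <= prev_resolved, "overlap", "")
```

streamstats with current=f and window=1 exposes the previous row's value, so each record can be checked against the preceding record of the same ID.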
Hi all, strange thing: when using mean() and avg() in the same stats command, whichever is written first is empty, while the second one shows the correct result.

... | stats mean(Capacity) avg(Capacity)

mean(Capacity)    avg(Capacity)
                  20.71428

... | stats avg(Capacity) mean(Capacity)

avg(Capacity)    mean(Capacity)
                 20.71428

I know they are basically the same values, but why can't I show them side by side? Each function on its own works fine. Adding any of the other statistical functions is also no problem; just avg() and mean() don't go together. Why? I'm on 8.2.0 at the moment. Thank you very much and kind regards, Gunnar
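In Splunk, mean() is documented as a synonym for avg(), which is presumably why the two columns collide. If both columns are really needed side by side, one workaround (a sketch) is to compute the value once and copy it under a second name:

```
... | stats avg(Capacity) as avg_capacity
    | eval mean_capacity=avg_capacity
```

Since the two functions are defined to return the same value, the eval copy is equivalent and avoids the column collision entirely.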
I am getting two different results in total. Query 1 provides the accurate result; in query 2, as soon as I add | lookup locationdetails.csv City AS City, the total drops below the accurate one. I am using Splunk version 7.3.71.

Query 1:
index=xyz source=xyz
| eval Month=strftime(_time,"%b %Y")
| search Month="Mar 2021"
| search Product IN (Sold,Damaged)
| stats count(Product) as Total

Query 2:
index=xyz source=xyz
| eval Month=strftime(_time,"%b %Y")
| search Month="Mar 2021"
| search Product IN (Sold,Damaged)
| lookup locationdetails.csv City AS City
| stats count(Product) as Total
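One thing worth checking: if locationdetails.csv happens to contain a Product column, the lookup's default behavior is to output every non-key column, overwriting the event's Product field (and leaving it null for rows with no match), which would lower count(Product). Restricting the output fields avoids this — a sketch, where city_region is a placeholder for whatever column you actually need from the CSV:

```
index=xyz source=xyz
| eval Month=strftime(_time,"%b %Y")
| search Month="Mar 2021" Product IN (Sold,Damaged)
| lookup locationdetails.csv City AS City OUTPUTNEW city_region
| stats count(Product) as Total
```

OUTPUTNEW only fills fields that don't already exist, so existing event fields are never clobbered by the lookup.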
Hi, I am new to Splunk and working with parking records. I am calculating the current wait time based on upcoming parking expiry times. Within my monitored data each record has the following fields:

- arrival_time: the time the data was created, which is when the car parked
- permit_expiry: a couple of hours after the creation time
- parking_space: a number between 1 and 99 that doesn't repeat until the permit_expiry has passed

I have the steps I wish to use to display this, but am unsure how to construct a query to achieve the result.

1. Check how many parking_space values are currently in use (which should be a number between 0 and 99):
sourcetype="parking_log" | where permit_expiry > now() | stats count by parking_space
2. Find the next 5 earliest upcoming permit_expiry times and subtract the arrival time from them:
| where permit_expiry > now() limit=5 | for each permit_expiry: num_minutes=permit_expiry-arrival_time
3. If the number of used parking_space values is less than 99, for each parking_space that is free (98, 97, 96, ...) replace the latest permit_expiry time with 0.
4. If count(parking_space) is less than 94, display the average of the five numbers (which may include both the 0s and the calculated num_minutes values).

Many thanks!
Hi, I have production logs as txt files containing many fields that are always in the format $_XXX: YYY, with XXX being the field name and YYY being the field value. All fields together form one set of production data for one device. The complete file has a timestamp ($_Date: ...) somewhere in the text.

I now want the whole file parsed like a CSV file, but with only one row of data, so that the XXXs are my field names and the YYYs are the actual values, like in this picture from a CSV. Whatever I try, Splunk always wants to handle my values as separate events instead of one large single event. Is there a simple way to achieve this?
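One untested approach is to force the whole file into a single event with a never-matching LINE_BREAKER, then turn the $_XXX: YYY pairs into fields with a REPORT transform using dynamic field names. The sourcetype name production_log and the stanza names are placeholders; the timestamp format for $_Date would still need TIME_FORMAT set to match your data:

```
# props.conf
[production_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ((?!))
TRUNCATE = 0
TIME_PREFIX = \$_Date:\s+
REPORT-kv = production_kv

# transforms.conf
[production_kv]
REGEX = \$_(\w+):\s+([^\r\n]+)
FORMAT = $1::$2
```

LINE_BREAKER = ((?!)) is a regex that can never match, so no event breaks occur within the file (TRUNCATE = 0 lifts the size limit), and FORMAT = $1::$2 names each extracted field after its XXX token at search time.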
Hi, I am new to Splunk and working with parking records. Within my events, I have a permit_expiry field, which is a date and time a few hours after the initial data timestamp. How do I display the number of permit_expiry values occurring within the next hour? I understand there is the now() function which returns the current time, but I am unsure how to utilise it. My draft search is below, but I know something is missing in the "now() + 1 hour" part:

sourcetype="parking_log" | where permit_expiry < now() + 1 hour | stats count by permit_expiry

Many thanks!
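now() returns epoch seconds, so "+ 1 hour" can be written as + 3600 once permit_expiry is also an epoch value. A sketch, assuming the field is a string like 2021-10-08 12:22:31 (adjust the strptime format to your actual data; if the field is already epoch, skip the conversion):

```
sourcetype="parking_log"
| eval expiry_epoch=strptime(permit_expiry, "%Y-%m-%d %H:%M:%S")
| where expiry_epoch >= now() AND expiry_epoch < now() + 3600
| stats count
```

The lower bound excludes permits that have already expired; the upper bound keeps only those expiring within the next hour.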
Hi, I hope everyone is fine. I am facing an issue: I am forwarding logs to a third party, i.e. to a port on another system, and I see an error message at that port. I am using the Python third-party library socket.io, and I get the error "code 400, message Bad request version ('nCurrent=0')". With the Python standard library socket it works fine with Splunk; when I use the socket.io library it raises the Bad Request error. Please help me solve this issue.
Hi, I am new to Splunk and working with parking records. I am trying to display parking spaces that are currently not in use. Within my monitored data each record has the following fields:

- the time the data was created, which is when the car parked
- permit_expiry, which is a couple of hours after the creation time
- parking_space, which is a number between 1 and 99 that doesn't repeat until the permit_expiry has passed

I also have a separate lookup table/CSV file called parking_lots of all parking_space values (1-99) and their respective parking_lot. This is what I have come up with so far:

sourcetype="parking_log" | where now() < expiry_time | lookup parking_lots parking_space | *display parking_space that don't appear in the above search (1-99)*

I am struggling to understand how to display the parking spaces, as well as the use of the now() function. Many thanks!
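One way to sketch the "spaces that don't appear" part is to start from the lookup itself and subtract the currently occupied spaces with a NOT [...] subsearch (this assumes permit_expiry is, or has been converted to, epoch seconds so it compares correctly against now()):

```
| inputlookup parking_lots
| search NOT
    [ search sourcetype="parking_log"
      | where permit_expiry > now()
      | dedup parking_space
      | fields parking_space ]
| table parking_space parking_lot
```

The subsearch returns the list of occupied parking_space values as an OR'd filter, and NOT inverts it, leaving only the free spaces from the lookup.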
Hello all, I am extracting a field which comes in multiple formats; however, I found that one of the formats is not working as expected. Details below. Please help me extract all of these formats without affecting the others.

Example 1 (the quoted value is the field I am trying to extract):
APPLICATION-MIB::evtDevice = STRING: "Server2.Application.APP.INTRANET" APPLICATION-MIB::evtComponent =

Example 2 (the quoted value is the field I am trying to extract):
APPLICATION-MIB::evtDevice = STRING: "Server1" APPLICATION-MIB::evtComponent =

Example 3 (the quoted value is the field I am trying to extract):
APPLICATION-MIB::evtDevice = STRING: "SG2-SWMGMT-CAT-001" APPLICATION-MIB::evtComponent =

Regex used:
APPLICATION-MIB::evtDevice\s+=\sSTRING:\s\"(?<source_host>\w+[a-zA-Z0-9-_]\w+)

The above regex works for both example 1 and example 2. However, for example 3 it captures only part of the quoted value rather than the whole string. Please help in getting this to work.
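Since the value is always wrapped in double quotes, a simpler pattern that should cover all three examples is to capture everything up to the closing quote instead of enumerating allowed characters:

```
| rex "APPLICATION-MIB::evtDevice\s+=\s+STRING:\s+\"(?<source_host>[^\"]+)\""
```

The character class [^"]+ matches any run of non-quote characters, so dotted names, single words, and hyphenated names are all captured whole.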
I work for a bio company (Creative Biogene) and I want to know how I can avoid data loss on my work computer.
I am new to Splunk and a bit lost reading the documentation on how to create a dashboard and implement inputs that prompt the user for values to use in searches. The data I have is a series of prices that I compare against an average price for a certain time period (for example, 30 days). My goal is to create a dashboard that lets the user enter a value that will be used in my search, in an equation that pushes some values over this threshold and some below. I already have this query written out. I am not sure how to proceed with creating the dashboard and inputs and then incorporating my already written query. Do I need to make this a saved search? Please help, thank you!
I am trying to produce the following output:

app_name    request_id    time    workload at the time (requests per second)
App1        123           1000    ?
App2        1234          1000    ?

I have two queries that return:

1. A table with the requests taking the most time:

app_name    request_id    time
app1        1             1000

2. A numeric value that returns the requests per second for a given app:

app_name    requests per second
app1        10

How can I join the results from the two different queries to produce the final table above? Thank you!
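One sketch is to join the two result sets on app_name; the placeholder searches below stand in for the two existing queries, and requests_per_sec is an assumed field name for the rate column:

```
<your slow-requests query>
| table app_name request_id time
| join type=left app_name
    [ search <your requests-per-second query>
      | fields app_name requests_per_sec ]
| table app_name request_id time requests_per_sec
```

type=left keeps every slow request even when no rate was computed for that app; appendcols or a shared stats over both datasets are alternatives if join's subsearch limits become a problem.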