All Topics

Hi, I have the following query:

| bin _time span=1d | stats count as ProductCount by applysourcetype, product, _time | where _time=relative_time(now(), "-d@d") or _time=relative_time(now(), "-8d@d") | eval when = if(_time=relative_time(now(), "-d@d"), "(Yesterday)", "(7 Days Ago)") | eval "Products Ordered {when}" = ProductCount | fields - _time ProductCount when | stats values(*) as * by product , applysourcetype

and I'm getting the following output. How can I make the product field one row per unique product?
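If the duplicate rows come from the final stats splitting by both product and applysourcetype, one option is to group by product alone; values(*) will then collect applysourcetype as a multivalue field automatically. A sketch, assuming your existing field names and untested against your data:

```spl
| bin _time span=1d
| stats count as ProductCount by applysourcetype, product, _time
| where _time=relative_time(now(), "-d@d") OR _time=relative_time(now(), "-8d@d")
| eval when = if(_time=relative_time(now(), "-d@d"), "(Yesterday)", "(7 Days Ago)")
| eval "Products Ordered {when}" = ProductCount
| fields - _time ProductCount when
| stats values(*) as * by product
```

With `by product` only, each product gets a single row and applysourcetype becomes a multivalue list of every sourcetype seen for that product.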
I need to ingest Fortinet firewall logs into Splunk Cloud. The logs are being redirected to FortiCloud. There is functionality to forward syslog from the firewall to an IP address or FQDN, but I am not sure how to do that with Splunk Cloud. Any help is much appreciated.
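Splunk Cloud generally cannot receive raw syslog directly; the usual pattern is an on-premises heavy forwarder (or Splunk Connect for Syslog) that listens for the firewall's syslog stream and forwards to the cloud stack using the Universal Forwarder credentials package. A minimal inputs.conf sketch for such a forwarder — the port, index, and sourcetype below are illustrative assumptions (the Fortinet FortiGate Add-On defines its own sourcetypes):

```ini
# inputs.conf on an on-prem heavy forwarder that the firewall sends syslog to
[udp://514]
sourcetype = fortigate
index = network
connection_host = ip
```

The forwarder then ships the parsed events to Splunk Cloud over the standard forwarding credentials app downloaded from your cloud instance.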
Hello - I have a Splunk report that was generated 5 years ago and I am looking for advice: can it be updated to work better? It currently runs, but it seems to take a long time to complete. I tried to improve it but I am not knowledgeable enough to be sure I won't break it, so all insights would be greatly appreciated. I need to make sure the contents of the lookup CSV generate a complete list of jobs; it is important to see when a job is not running. I assume this means a LEFT join from the CSV to the results, using Frequency_mins to do the calculations. It seems the original search is doing additional calculations. I apologize for such a large query.   index="idx_cibca_Application_prod" sourcetype = "tomcat:runtime:log:jpma" AND "lastUpdatedTS" OR "Time taken for" host=Server_1 OR host=Server_2 OR host=Server_3 OR host=Server_4 OR host=Server_5 OR host=Server_6 OR host=Server_7 OR host=Server_8 | eval Job_Thread_Name=case(like(_raw,"%tspPaymentArchiveExecutorIncrementalPoolSizeRange%") ,"tspPaymentArchiveExecutorIncrementalPoolSizeRange",like(_raw,"%completedTxnReplaceIncrementalPoolSizeRange%") ,"completedTxnReplaceIncrementalPoolSizeRange", like(_raw,"%completedTxnBackoutCdIncrementalPoolSizeRange%") ,"completedTxnBackoutCdIncrementalPoolSizeRange", like(_raw,"%completedTxnBackoutPdIncrementalPoolSizeRange%") ,"completedTxnBackoutPdIncrementalPoolSizeRange", like(_raw,"%oneYearCompletedTxnArchivalIncrementalExecutor%") ,"oneYearCompletedTxnArchivalIncrementalExecutor", like(_raw,"%twoYearCompletedTxnArchivalIncrementalExecutor%") ,"twoYearCompletedTxnArchivalIncrementalExecutor", like(_raw,"%fxRateExecutorIncrementalPoolSizeRange%") ,"fxRateExecutorIncrementalPoolSizeRange", like(_raw,"%mfAccountBalancePDExecutorIncrementalPoolSizeRange%") ,"mfAccountBalancePDExecutorIncrementalPoolSizeRange", like(_raw,"%achPaymentExecutorWithPoolSizeRange%") ,"achPaymentExecutorWithPoolSizeRange", like(_raw,"%achtemplateIncrementalExecutorWithPoolSizeRange%")
,"achtemplateIncrementalExecutorWithPoolSizeRange", like(_raw,"%tspPaymentExecutorIncrementalPoolSizeRange%") ,"tspPaymentExecutorIncrementalPoolSizeRange", like(_raw,"%tspTemplateExecutorIncrementalPoolSizeRange%") ,"tspTemplateExecutorIncrementalPoolSizeRange", like(_raw,"%acctEntitlementExecutorIncrementalPoolSizeRange%") ,"acctEntitlementExecutorIncrementalPoolSizeRange", like(_raw,"%achEntitlementExecutorIncrementalPoolSizeRange%") ,"achEntitlementExecutorIncrementalPoolSizeRange", like(_raw,"%acttxEntitlementExecutorIncrementalPoolSizeRange%") ,"acttxEntitlementExecutorIncrementalPoolSizeRange", like(_raw,"%atsAccountExecutorIncrementalPoolSizeRange%") ,"atsAccountExecutorIncrementalPoolSizeRange", like(_raw,"%completedTxnCDExecutorIncrementalPoolSizeRange%") ,"completedTxnCDExecutorIncrementalPoolSizeRange", like(_raw,"%completedTxnPDExecutorIncrementalPoolSizeRange%") ,"completedTxnPDExecutorIncrementalPoolSizeRange", like(_raw,"%accountGroupExecutorIncrementalPoolSizeRange%") ,"accountGroupExecutorIncrementalPoolSizeRange", like(_raw,"%accountNickNameExecutorIncrementalPoolSizeRange%") ,"accountNickNameExecutorIncrementalPoolSizeRange", like(_raw,"%accountRetentionExecutorIncrementalPoolSizeRange%") ,"accountRetentionExecutorIncrementalPoolSizeRange",like(_raw,"%mfAccountBalanceCDExecutorIncrementalPoolSizeRange%") ,"mfAccountBalanceCDExecutorIncrementalPoolSizeRange", like(_raw,"%pymtCutoffExecutorIncrementalPoolSizeRange%") ,"pymtCutoffExecutorIncrementalPoolSizeRange", like(_raw,"%achFileImportsExecutorWithPoolSizeRange%") ,"achFileImportsExecutorWithPoolSizeRange")  | stats latest(_time) as _time , latest(host) as host by Job_Thread_Name | eval Thread_Last_Executed=strftime(_time, "%Y-%m-%d %I:%M:%S %p"), EPOC_Time=(_time) | eval  Lag=round((now()-EPOC_Time)/60) | table Job_Thread_Name, Thread_Last_Executed, host, Lag   | lookup Application_Job_Thread_Name.csv Job_Thread_Name OUTPUTNEW Job_Name Job_Config_Name Frequency_Bucket_in_mins | table  
Job_Name, host, Job_Thread_Name, Job_Config_Name, Frequency_Bucket_in_mins, Thread_Last_Executed, Lag | inputlookup  append=t Application_Job_Thread_Name.csv   | dedup  Job_Name  | table  Job_Name, host, Job_Thread_Name, Job_Config_Name, Frequency_Bucket_in_mins , Thread_Last_Executed, Lag   | eval  Status=if(isnull(Lag), "NOT OK - Job not running", if(Lag<= if(Frequency_Bucket_in_mins>60, Frequency_Bucket_in_mins+10, 70),"OK","NOT OK - Job not running - Lag found")) | join  type=left Job_Config_Name  [ search index="idx_cibca_Application_prod" sourcetype="tomcat:runtime:log:jpma" AND "Job Details job name:" host=Server_1 OR host=Server_2 OR host=Server_3 OR host=Server_4 OR host=Server_5 OR host=Server_6 OR host=Server_7 OR host=Server_8 | rex "Job Details job name:(?<Job_Config_Name>.*) status:(?<JOB_STATUS>.*) timetaken:(?<TIMETAKEN>.*) minutes"  | rex "(?<DATE_TIME>^(\d+)-(\d+)-(\d+)(\s+)(\d+):(\d+):(\d+).(\d+))" | stats  latest(DATE_TIME) AS Job_Status_Logged latest(JOB_STATUS) AS Job_Status, latest(TIMETAKEN) AS TIMETAKEN_IN_MINS by Job_Config_Name]  | rename host as Thread_Host | table  Job_Name, Thread_Host, Job_Thread_Name, Frequency_mins, Thread_Last_Executed,Lag,Status,Job_Status,Job_Status_Logged,TIMETAKEN_IN_MINS  | eval  Job_Status_Logged = if(isnull(Job_Status_Logged),"NA",Job_Status_Logged), Job_Status = if(isnull(Job_Status),"NA",Job_Status), TIMETAKEN_IN_MINS = if(isnull(TIMETAKEN_IN_MINS),"NA",TIMETAKEN_IN_MINS)   CSV File content is: Job_Config_Name,Job_Name,Job_Thread_Name,Frequency_mins ach_payment_incremental_loader_task,ACH Payment,achPaymentExecutorWithPoolSizeRange,1 achTemplateIncrementalLoaderTask,ACH Tempate,achtemplateIncrementalExecutorWithPoolSizeRange,1 tsp_payment_incremental_loader_task,TSP Payments,tspPaymentExecutorIncrementalPoolSizeRange,1 tsp_template_incremental_loader_task,TSP Template,tspTemplateExecutorIncrementalPoolSizeRange,1 acct_entitlement_incremental_loader_task,Account 
Entitlement,acctEntitlementExecutorIncrementalPoolSizeRange,5 ach_entitlement_incremental_loader_task,ACH Entitlement,achEntitlementExecutorIncrementalPoolSizeRange,5 acttx_entitlement_incremental_loader_task,AT Entitlement,acttxEntitlementExecutorIncrementalPoolSizeRange,5 account_incremental_job,Account,atsAccountExecutorIncrementalPoolSizeRange,5 completed_txn_cd_incremental_job,Completed TXN CD,completedTxnCDExecutorIncrementalPoolSizeRange,5 completed_txn_pd_incremental_job,Completed TXN PD,completedTxnPDExecutorIncrementalPoolSizeRange,5 account_group_incremental_loader_task,Account Group,accountGroupExecutorIncrementalPoolSizeRange,1 account_nickname_incremental_job,Nick Name,accountNickNameExecutorIncrementalPoolSizeRange,5 account_retention_incremental_job,Retention,accountRetentionExecutorIncrementalPoolSizeRange,5 oneYearCompletedTxnArchiveIncrementalJob,One Year Retention,oneYearCompletedTxnArchivalIncrementalExecutor,60 twoYearCompletedTxnArchiveIncrementalJob,Two year Retention,twoYearCompletedTxnArchivalIncrementalExecutor,60 account_balance_cd_incremental_job,Balance Cd,mfAccountBalanceCDExecutorIncrementalPoolSizeRange,5 account_balance_pd_incremental_job,Balance PD,mfAccountBalancePDExecutorIncrementalPoolSizeRange,10 fxrate_incremental_loader_task,FX Rate,fxRateExecutorIncrementalPoolSizeRange,30 completed_txn_replace_incremental_loader_task,Replace,completedTxnReplaceIncrementalPoolSizeRange,480 completed_txn_backout_cd_incremental_loader_task,Backout CD,completedTxnBackoutCdIncrementalPoolSizeRange,360 completed_txn_backout_pd_incremental_loader_task,Backout PD,completedTxnBackoutPdIncrementalPoolSizeRange,360 tsp_payment_archive_incremental_loader_task,TSP payment Archive,tspPaymentArchiveExecutorIncrementalPoolSizeRange,10080 payment_cutoff_incremental_loader_task,Payment Cutoff,pymtCutoffExecutorIncrementalPoolSizeRange,1 ach_fileimports_incremental_loader_task,ACH File Import,achFileImportsExecutorWithPoolSizeRange,1  
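Two things that may speed this report up: the giant case() can often be replaced by a single rex that captures the thread name directly, and the trailing join can sometimes be replaced with append + stats. The rex pattern below is an assumption based on the two suffixes that appear in your CSV's thread names, so verify it against real events before relying on it:

```spl
index="idx_cibca_Application_prod" sourcetype="tomcat:runtime:log:jpma"
    ("lastUpdatedTS" OR "Time taken for")
    host IN (Server_1, Server_2, Server_3, Server_4, Server_5, Server_6, Server_7, Server_8)
| rex "(?<Job_Thread_Name>\w+(?:PoolSizeRange|IncrementalExecutor))"
| stats latest(_time) as _time latest(host) as host by Job_Thread_Name
```

Because rex only extracts names that actually appear in events, the lookup-append step still drives the "job not running" logic exactly as before; the case() maintenance burden just goes away.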
One of our teams on-boards PSV logs, and while the data is on-boarded correctly in most cases, sometimes the header is not recognized and field extraction does not happen.

props.conf:
[status_psv]
INDEXED_EXTRACTIONS = PSV
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = rqid
TIME_FORMAT = %s%6Q
MAX_DAYS_HENCE = 5

What are the possible reasons for the header being ignored on some logs for this sourcetype?
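INDEXED_EXTRACTIONS is applied by the first Splunk instance that reads the file (a universal forwarder performs this structured parsing itself), so a common cause is some hosts sending the data along a path where this props stanza is not installed, or through an intermediate forwarder that has already cooked the events. A quick diagnostic sketch, assuming rqid should always be populated when the header was honored (the index name is a placeholder):

```spl
index=your_index sourcetype=status_psv
| eval header_parsed=if(isnull(rqid), "no", "yes")
| stats count by host, header_parsed
```

If "no" clusters on particular hosts, check that those forwarders carry the same props.conf as the ones that work.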
Is there a Splunk add-on available that can provide an Azure/O365 AD group members list in Splunk? E.g., on querying for group1@domain.com it should return member1@domain.com, member2@domain.com. I found a few add-ons, but they seem to be for logging/monitoring purposes, like who changed what and when. I did not find anything like, say, Microsoft Graph via Splunk to list groups, get group members, etc.
I'm trying to run a search against index=_internal, but I do not see this index on my search head. I do see it when I search from the indexers. Any suggestions on where I should look first to fix this?
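Two common causes worth checking first: the search head's distributed-search peers are missing or unhealthy, or your role's allowed indexes don't include _internal. Both can be inspected from the search bar; a sketch (the role name is a placeholder):

```spl
| rest /services/search/distributed/peers splunk_server=local
| table title status
```

and for the role's index access:

```spl
| rest /services/authorization/roles splunk_server=local
| search title=your_role
| table title srchIndexesAllowed srchIndexesDefault
```

If the peers list is empty or a peer shows an error status, the search head is only searching its own (possibly empty) copy of _internal.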
Greetings - I do have the TA for *nix. I spent a couple of hours scouring all my resources and looking through TA_nix for where to insert or turn on an entry for the OS type. On the Linux side I need to know the vendor (CentOS/RHEL) and the version (6, 7, 8). Any input would be appreciated.
Hello friends, thank you so much for your help in advance. I have a field named "ERROR_COLAB" in which a series of responses are concatenated into a single long string. Because of the nature of the errors that can be present, there is no formal, objective, efficient way to split the values in "ERROR_COLAB" to classify the responses concatenated in them. So I was thinking: what if I create a lookup table with the values that I need to extract, and later "parse" them with a regex in order to extract them? To illustrate my idea, let's say I have this lookup table:

code_error | meaning
po_R83 | No_call_bak
?OP | card_nofunds
HOTELARCH78 | overbookings

and I have the following values in "ERROR_COLAB":

?OP_ERR7+JSU8.OIJK1
po_R83_io
IOS_NEVER:300SSSS
HOTELARCH78?123-

I would like to know if the first part of the string is equal to any of the values in the "code_error" field of the lookup table. My desired result would look like this:

ERROR_COLAB | code_error_extracted | meaning
?OP_ERR7+JSU8.OIJK1 | ?OP | card_nofunds
po_R83_io | po_R83 | No_call_bak
IOS_NEVER:300SSSS | N.A | N.A
HOTELARCH78?123- | HOTELARCH78 | overbookings

Thank you so much, guys. Kindly, Cindy
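One documented approach is a wildcard lookup: store each code with a trailing * in the CSV (e.g. po_R83*), declare the lookup in transforms.conf with match_type = WILDCARD, and then look up ERROR_COLAB directly. A sketch, with file and field names assumed from your example (note that a literal ? in a code like ?OP needs testing, since wildcard matching has its own special characters):

```ini
# transforms.conf
[error_codes]
filename = error_codes.csv
match_type = WILDCARD(code_error)
max_matches = 1
```

Then in the search:

```spl
| lookup error_codes code_error as ERROR_COLAB OUTPUT code_error as code_error_extracted, meaning
| fillnull value="N.A" code_error_extracted meaning
```

Rows whose ERROR_COLAB does not start with any known code fall through the lookup and are filled with "N.A", matching your desired output.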
Hello, I searched for a few hours for how to extract the RULE_NAME field from my firewall logs, without success. RULE_NAME is at the end of the log line, between parentheses. It can contain any characters, including spaces, "-" or "_". My problem comes from the fact that RULE_NAME sometimes ends with a 3-character string I need to remove: "-00". Here is my current regex, but it doesn't work for the simple "Internal Policy" RULE_NAME:

.*\s+\((?P<RULE_NAME>.*)?(-00)\)$

Here are the original log lines I need to match:

May 3 16:35:02 10.40.1.254 May 3 16:35:02 MYFIREWALL.mycorp.lan firewall: msg_id="3000-0148" Allow 0-SSL-VPN Firebox 73 udp 20 128 172.XXX.XXX.14 172.XXX.XXX.1 54127 53 src_user="fgi@mycorp.com" (DNS-01-proxy_user.out-00)
May 3 17:39:56 10.40.1.254 May 3 17:39:56 MYFIREWALL.mycorp.lan firewall: msg_id="3000-0148" Allow VLAN1-Lan-Trusted Firebox 69 udp 20 128 172.21.20.26 172.21.20.254 52481 53 msg="DNS Forwarding" src_user="yal@mycorp.lan" record_type="A" question="sync.srv.stackadapt.com" (Internal Policy)
May 3 16:35:02 10.40.1.254 May 3 16:35:02 MYFIREWALL.mycorp.lan firewall: msg_id="3000-0148" Allow 0-SSL-VPN Firebox 73 udp 20 128 172.XXX.XXX.14 172.XXX.XXX.1 54127 53 src_user="fgi@mycorp.com" (My super rule name with space DNS.out)

Any idea how to ignore the "-00" suffix when present? Thanks, Florent
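Making the capture non-greedy and the -00 group optional appears to handle both cases; a sketch:

```spl
| rex "\((?<RULE_NAME>[^)]*?)(?:-00)?\)$"
```

The lazy `[^)]*?` stops as early as possible, so when the name ends in -00 the optional `(?:-00)?` consumes that suffix before the closing parenthesis; for "(Internal Policy)" the optional group simply matches nothing. The one case it cannot distinguish is a rule name that legitimately ends in "-00".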
Hello, I'm trying to get more detailed information about my scheduled saved searches, especially when they complete successfully but contain errors and warnings in the stack trace. I see that all the details are stored in the $SPLUNK_HOME/var/run/splunk/dispatch folder, and I am wondering if this folder can be monitored by Splunk. Is this possible? Thank you and best regards, Andrew
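The dispatch directories are short-lived job artifacts that Splunk reaps automatically, so monitoring them directly is usually discouraged; the scheduler's own telemetry in _internal may already give you what you need. A sketch:

```spl
index=_internal sourcetype=scheduler
| stats count by savedsearch_name, status
```

The status field distinguishes outcomes such as success and skipped; for warnings emitted during the search run itself, the job inspector or splunkd logs around the same timeframe are the usual next stop.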
How can I identify which Dashboards contain a specific saved search?
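Dashboard XML is exposed over REST, so one approach is to grep the eai:data field for the saved-search name (replace my_saved_search with the actual name):

```spl
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*my_saved_search*"
| table title eai:acl.app eai:acl.owner
```

This matches `<search ref="...">` references and any other occurrence of the name embedded in the Simple XML of each dashboard.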
Hi All, can anyone guide me on how to find how much data is being ingested into Splunk from a particular HEC token, and how much data volume a single HEC token can handle? Reason: currently we are using a single token to ingest multiple APIs' data into Splunk, and recently our clients wanted to ingest another API's data. Instead of creating a new token, we are planning to use the same token for that ingestion, but before doing so we want to assess the current data ingestion through this HEC token, to avoid data loss. I need a query to fetch the average data volume consumed by the HEC token per day.
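On recent Splunk versions, per-token HEC metrics are written to the _introspection index. The field names below are assumptions from memory, so verify them with a bare search on that sourcetype before relying on the numbers:

```spl
index=_introspection sourcetype=http_event_collector_metrics
| eval MB=round('data.total_bytes_received'/1024/1024, 2)
| timechart span=1d sum(MB) as MB_per_day by data.token_name
```

As for capacity, a token itself is not the bottleneck; throughput is bounded by the receiving HEC endpoint(s), so the per-day trend above is the useful sizing input.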
I've been having issues with wildcarded input monitoring, in an attempt to adjust for an issue with file path naming across a number of servers.

My original/working stanza (disregard the ellipses; I've shortened the path for this posting):
[monitor://D:\Program Files\Microsoft\...\MessageTracking]

My adjusted/wildcarded stanzas that have not worked for input on any of our servers (again shortened for readability):
[monitor://D:\*\Microsoft\...\MessageTracking]
[monitor://D:\Prog*am Files\Microsoft\...\MessageTracking]
[monitor://D:\*Files\Microsoft\...\MessageTracking]

I don't appear to receive an error message after this change, but logs completely drop off when a wildcard is put into the deployed configuration.
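When a wildcard appears in a monitor path, the forwarder converts the stanza into a base directory plus an implicit whitelist, and the base becomes everything before the first wildcard — here effectively D:\, which can run into traversal and permission problems across the whole drive. One lower-risk workaround is enumerating the known path variants explicitly; "Program Files (x86)" below is only an assumed example of the naming variance you are seeing:

```ini
[monitor://D:\Program Files\Microsoft\...\MessageTracking]
disabled = 0

[monitor://D:\Program Files (x86)\Microsoft\...\MessageTracking]
disabled = 0
```

The `...` recursive wildcard deeper in the path is generally safer than a `*` near the drive root, since the monitored base stays narrow.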
Hi Splunkers, we are trying to integrate our CTI portal with our Splunk ES instance via an intelligence feed. The situation is this: the downloadable file is a CSV retrieved by API, with this structure: <Generic_IoC>, <IoC_Type>, <Timestamp>, <Description>. Note: the Generic_IoC field can be a URL, mail, hash, IP, etc. The file is accessible via a POST API call with the string "id=<RANDOM_LENGTH_TOKEN>" in the body. How can we configure ES to ingest this information? Thank you.
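Splunk ES's threat-intelligence download inputs support POST bodies, so one route is a threatlist input that fetches the CSV and maps its columns into the intel framework. The stanza keys below come from the ES threat-intel framework, but treat the exact URL, field mapping, and column numbers as assumptions to validate against the ES documentation for your version:

```ini
# inputs.conf (Enterprise Security threat intelligence download)
[threatlist://cti_portal_feed]
url = https://cti.example.com/api/iocs
post_args = id="<RANDOM_LENGTH_TOKEN>"
delim_regex = ,
fields = ip:$1,description:$4
type = ip
```

Because Generic_IoC mixes IoC types in a single column, you may need one input per type, or a pre-processing step that splits the CSV by IoC_Type before ES consumes it.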
Hello people, I hope everyone is doing just fine. I have been trying to extract some values from a field without any luck. I work for a hotel company, and whenever a customer uses our transportation services a field named "travel_stops" is recorded and updated. Due to the way it is "programmed", this field will always come in the following format:

.Madrid-plane.taxi$comp$uber.domestic$depart

Please note that the string will always start with a dot (.); in other words, every "stop" is separated by a dot. I want to extract, for each value in this field, the first and last stops, which in this case would be Madrid-plane and domestic$depart. I am trying to use something like this:

| rex field=Stops "(?<First_Stop>[^.]+).(?<Last_Stop>.+)"

I also have another setback: when a customer is awaiting his/her first transportation service, depending on the type of customer, the system may record something like "awaiting", "push", "never", etc., and in those cases I want to leave that record as it is, since I will be doing some extra work on those fields. What I want:

travel_stops | First_Stop | Last_Stop
.Madrid-plane.taxi$comp$uber.domestic$depart | Madrid-plane | domestic$depart
never | never | null
pull | pull | null

But it is not giving me the expected result. Thank you so much to anyone who is willing to help me out; I truly appreciate it. Also, if you have a link to a blog on regex in Splunk, that would be much appreciated, as I will be using more of these in the future. Kindly, Cindy!
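Splitting on the dot and taking the first and last elements sidesteps regex greediness entirely; a sketch, assuming the field is named travel_stops:

```spl
| eval stops=split(travel_stops, ".")
| eval stops=mvfilter(stops!="")
| eval First_Stop=if(like(travel_stops, ".%"), mvindex(stops, 0), travel_stops)
| eval Last_Stop=if(like(travel_stops, ".%"), mvindex(stops, -1), null())
```

Values such as "never" or "pull" don't start with a dot, so the like() guard passes them through unchanged with an empty Last_Stop, matching the desired table.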
Hey! I have a lookup where we manually enter a number for each day. It looks something like this:

Monday | 2
Tuesday | 3
Wednesday | 2
Thursday | 5
Friday | 3

Is there a way to make this into a column chart where the x-axis is the days of the week and the y-axis is the corresponding number? I tried to just put it in a column chart, but it defaults to flipping the x/y axes; I then tried to transpose, but it still doesn't format correctly.
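The column chart uses the first column of the results as the x-axis, and rows render in search-result order, so an explicit weekday sort key before charting usually does it. A sketch with assumed lookup and field names:

```spl
| inputlookup day_counts.csv
| eval order=case(day=="Monday",1, day=="Tuesday",2, day=="Wednesday",3, day=="Thursday",4, day=="Friday",5)
| sort order
| table day value
```

With day as the first column and value as the only other column, the column-chart visualization puts days on x and the number on y, in Monday-to-Friday order.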
Good morning all. I have a little issue with DB Connect and a Postgres DB. I am using the Postgres driver v42.2 to access the DB. In the Data Lab under SQL Explorer, I can connect to the DB and assign the catalog, schema and table. All of the pull-downs populate and are available. I enter this query:

select * from bd_cntl.job_monitor_info where 1=1 and batch_id=to_number(to_char(CURRENT_DATE -3, 'YYYYMMDD'),'99999999')

The system spins and outputs:

org.postgresql.util.PSQLException: ERROR: Could not begin transaction on data node.

To make this even stranger, I installed PostgreSQL on my workstation and it connects without issue and can run the query fine. Do any of you who are familiar with Postgres have any ideas how to work around this issue? Thanks in advance, Rcp
Hi there I have a near real-time interface which utilises SOAP for data transfer.  Can Splunk read in  SOAP messages? Kind Regards Paul.
Hi Team, I have the required data in one of the fields, but the values are not in the same order in every event; how can I extract the required data? Below is an example of how it looks:

row1 --> A B C D
row2 --> D A B C
row3 --> C D A B

I'm trying to extract the data using mvindex:

| eval x= mvindex(split(planname,"\n"),2) | table x

Since the data at index 2 is different for each row, I am unable to view the correct data. Please let me know a query to extract the data. Thanks
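Since the wanted value moves between positions, selecting by content instead of by index is more robust: split the field as you do now, then keep only the element matching a pattern that identifies your target. The "^B" pattern below is a placeholder for whatever actually distinguishes the value you want:

```spl
| eval parts=split(planname, "\n")
| eval x=mvfilter(match(parts, "^B"))
| table x
```

mvfilter evaluates the predicate against each element of the multivalue field, so the position of the wanted value no longer matters.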
Hi, we have installed ESET Security antivirus on our Splunk server and we have many problems; when we disable the antivirus, everything works well. I want to know whether antivirus software affects the performance of Splunk servers. Thanks.