I have two timestamps in milliseconds: start=1710525600000, end=1710532800000. How can I search for logs between those timestamps? Say I want to run this query:

index=my_app | search env=production | search service=my-service

How do I specify the time range in milliseconds for this query?
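A sketch of one approach: `earliest` and `latest` accept epoch time in seconds, so the millisecond values can be divided by 1000 (the index, env, and service names are taken from the question):

```
index=my_app env=production service=my-service earliest=1710525600 latest=1710532800
```

Filtering with `earliest`/`latest` happens at search time and is usually faster than a post-filter such as `| where _time>=1710525600 AND _time<1710532800`, which also works since `_time` is epoch seconds.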
Hi Team, we are using the below query:

[| inputlookup ABCD_Lookup_Blacklist.csv | outputlookup ABCD_Lookup_Blacklist_backup.csv append=false | sendemail to="nandan@cumpass.com" sendresults=false sendcsv=true sendpdf=false inline=true subject="ABCD_Lookup_Blacklist_backup.csv" | rename target as host | eval lookup_name="ABCD_Lookup_Blacklist.csv"]

Currently the CSV attachment arrives named unknown.csv. We want the attachment to be named with the appropriate lookup_name. Please help us. Thank you, NANDAN
Sample Logs: <<< Reporting.logs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.SampleDBinternalexternal:::XII KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 34567d34-1245-4asd-a27f-42345cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters <<< Applicationlogs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.AccountBinding:::XIS KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 7854d34-7623-4asd-a27f-90864cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters <<< IntialLogs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.IntialReortbinding:::XIP KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 12345d34-1288-8asd-a26f-42348cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters <<< PartialReportingLogs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.totalDBinternalexternal:::XII KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 09876d34-6753-3asd-a30f-87654cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters <<< FailedLogs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.SampleDBinternalexternal:::ZII KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 56744d34-1245-4asd-a11f-89765cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! 
company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters <<< Reporting.logs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.notalwayslogs:::PII KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 89765d34-9875-4asd-a2f-87654cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters

I am not sure how to write a rex for this field extraction. Please see the screenshot below; I need a rex for the highlighted fields:
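A hedged sketch of possible extractions, assuming the fields of interest are the KEY, REQS REQUID, and SUB REQUID values (the capture names are illustrative; since the screenshot is not available, adjust the patterns to match whichever fields are actually highlighted):

```
index=my_index sourcetype=my_sourcetype
| rex "KEY::\s+(?<key>\S+)"
| rex "REQS REQUID ::\s+(?<req_uuid>[0-9a-zA-Z-]+)"
| rex "SUB REQUID::\s+(?<sub_req_uuid>[0-9a-zA-Z-]+)"
| table key req_uuid sub_req_uuid
```

Each `rex` anchors on the literal label in the sample logs and captures the non-space token that follows; testing against a handful of real events is the quickest way to verify the boundaries.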
Hi. I found an old article on the subject and followed it, but I do not see overlaid charts. My SPL:

index=firewall sourcetype="collector" fqdn="fw.myorg.com" earliest=-2d@d latest=-1d@d | multikv | eval ReportKey=today | append [search index=firewall sourcetype="collector" fqdn="fw.myorg.com" earliest=-4d@d latest=-3d@d | multikv | eval ReportKey=yesterday | eval _time = _time + 2*86400] | timechart span=1H count by ReportKey

So I expected it to report by ReportKey; instead it shows NULL.
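One likely cause, offered as a hedged guess: `eval ReportKey=today` (unquoted) reads a field named `today`, which does not exist, so ReportKey is null and timechart puts every event into the NULL series. Quoting the string literals may resolve it:

```
index=firewall sourcetype="collector" fqdn="fw.myorg.com" earliest=-2d@d latest=-1d@d
| multikv
| eval ReportKey="today"
| append [search index=firewall sourcetype="collector" fqdn="fw.myorg.com" earliest=-4d@d latest=-3d@d
    | multikv
    | eval ReportKey="yesterday"
    | eval _time = _time + 2*86400]
| timechart span=1H count by ReportKey
```

In `eval`, an unquoted token on the right-hand side is always treated as a field name, never a string.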
Hi All, trust all is good on your end! We recently moved to Splunk Mission Control, and today I stumbled upon an issue. There were a few incidents I worked on the 13th, and on that same day I acknowledged and closed them. Today I checked those incident IDs and found that some are in an "in progress" or "new" state, and the last-updated date shows today's date. I don't understand why there is this discrepancy or how to fix it. Can someone please assist me with this issue? Thanks, Debjit
Hi, I am fairly new to Splunk; thank you in advance if you can help me. :) My goal is to log the service response duration each time an ESService is called. The ESService value can be anything. In the table format below I am able to see which service is being hit and the duration. But in the visualization section, all the events show the same color. Is there any way to show a different color for each ESService? For example, blue for ESBusinessrep, red for ESPerson, etc. (dynamically there can be N service types). Also, when I hover on the bars they show only the time and duration values, not the ESService. How can I achieve this?
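A sketch of one common approach, assuming `duration` and `ESService` are already extracted (both names are taken from the question): split the chart by ESService so each service becomes its own series.

```
index=my_index sourcetype=my_sourcetype
| timechart avg(duration) as avg_duration by ESService
```

With a `by` clause, each ESService value gets its own column, so the chart assigns a distinct color per service automatically and the hover tooltip shows the series (service) name alongside the value.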
How do I assign a value to a list or array and use it in a where condition? Thank you in advance! For example, I tried to check whether the number 4 is in a list of numbers between 0 and 6:

index = test | eval list-var = (0,1,2,3,4,5,6) | eval num = 4 | search num IN list-var
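A hedged sketch: SPL's `eval` has no array literal, but a multivalue field built with `split` can play that role, and `mvfind` can test membership (`list` and `num` are illustrative names based on the question):

```
index=test
| eval list=split("0,1,2,3,4,5,6", ",")
| eval num=4
| where isnotnull(mvfind(list, "^".num."$"))
```

`mvfind` returns the index of the first value matching the regex, or null if none matches. When the set is a fixed literal, the eval function `in()` is simpler: `| where in(num, 0, 1, 2, 3, 4, 5, 6)`.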
I have the following stanza in etc\system\local\inputs.conf. However, I don't see dynamic DNS update events being forwarded to the Splunk server. The local event viewer shows events after "ipconfig /release" followed by "ipconfig /renew". I also tried [WinEventLog://DNS Server] as the stanza name, to no avail. I'd appreciate any insight. Thanks, Billy

[WinEventLog://Microsoft-Windows-DNS-Server/Audit]
disabled = 0
renderXml = 1
whitelist = 519, 520
1. What's the best way to monitor the KV store? 2. What's the best way to monitor errors from a KV store migration?
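A hedged sketch of two starting points: the KV store status REST endpoint, and the mongod log events indexed in _internal (the exact status fields returned vary by version, so treat the field list as an assumption to verify):

```
| rest /services/kvstore/status splunk_server=local

index=_internal sourcetype=mongod (ERROR OR WARN OR error OR warning)
```

These are two separate searches: the first reports current KV store health, and the second surfaces errors and warnings from mongod.log, which is where migration problems would typically be logged.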
Hello, how do I search based on a drop-down condition? Thank you in advance!

index = test
| eval week_or_day_token = "w" (drop-down: "week" = "w", "day" = "d")
| eval day_in_week_token = 1 (drop-down: 0=Sunday, 1=Monday, 2=Tuesday, and so on)

If week_or_day_token is "week", then use day_in_week_token; otherwise, if week_or_day_token is "day", then use all days (*):
| eval day_in_week = if(week_or_day_token="w", day_in_week_token, "*")

Get the day number in the week for each timestamp:
| eval day_no_each_timestamp = strftime(_time, "%" + day_in_week_token)

I searched for timestamps that fall on Monday (day_in_week=1), but I got 0 events:
| search day_no_each_timestamp = day_in_week

If I replaced it with "1", it worked, even though the value of day_in_week is 1:
| search day_no_each_timestamp = "1"
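A hedged explanation of the 0-event result: in SPL, `search field = day_in_week` compares against the literal string "day_in_week", not the value of that field. Field-to-field comparison needs `where` (names taken from the question):

```
| where day_no_each_timestamp = day_in_week
```

If `day_in_week` can hold the wildcard "*", `where` will not expand it, so an explicit branch such as `| where day_in_week="*" OR day_no_each_timestamp=day_in_week` may be needed.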
Is it possible to automate dashboard code management and deployment using GitLab?
Hello Splunkers!! A generic question: there are 40+ dashboards in which the customer is not using any optimization. They run direct index searches across all panels, with no base searches or summary indexes in any of the dashboard panels. Sometimes a single dashboard has 60+ panels, all running index searches. Could anyone help me list all the consequences of this scenario?

Thanks in advance
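To make the base-search alternative concrete, here is a hedged sketch (the index, sourcetype, and fields are illustrative): one base search runs once and several cheap post-process searches reuse its cached results, instead of each panel scanning the index. In Simple XML this maps to a `<search id="...">` element and panels referencing it via `base="..."`.

```
index=web sourcetype=access_combined
| stats count by status, host
```

A panel then post-processes the shared result set, e.g. `| where status>=500 | stats sum(count) as errors by host`, so a 60-panel dashboard can scan the index once rather than 60 times.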
Currently, I have two tables.

Table1
hostnames   vendors   products   versions
host1       vendor1   product1   version1
host2       vendor2   product2   version2
host3       vendor3   product3   version3
host4       vendor4   product4   version4

Table2
device.hostname   device.username
HOST1             user1
HOST2             user2
HOST3             user3
HOST4             user4

The table I want to generate from these two is the following:

Table3
hosts   username   vendors   products   versions
host1   user1      vendor1   product1   version1
host2   user2      vendor2   product3   version4
host3   user3      vendor3   product3   version3
host4   user4      vendor4   product4   version4

The search I tried was the following:

(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2) | rename device.hostname as hostname | rename device.username as username | eval hosts = coalesce(hostnames, hostname) | table hosts, username, vendors, products, versions

The result was the following:

hosts   username   vendors   products   versions
host1              vendor1   product1   version1
host2              vendor2   product3   version4
host3              vendor3   product3   version3
host4              vendor4   product4   version4
HOST1   user1
HOST2   user2
HOST3   user3
HOST4   user4

host1 and HOST1 both reference the same hostname; one index just had the letters capitalized and the other did not. Does anyone have any ideas?
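A hedged sketch of one fix: normalize the case before coalescing, then collapse the rows with `stats` so each host's fields from both indexes merge into one row (the query pieces are taken from the attempt above):

```
(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)
| rename device.hostname as hostname, device.username as username
| eval hosts = lower(coalesce(hostnames, hostname))
| stats values(username) as username values(vendors) as vendors
        values(products) as products values(versions) as versions by hosts
```

`lower()` makes host1 and HOST1 group under the same key, and `stats ... by hosts` joins the per-index rows that previously appeared separately.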
I'm seeing errors similar to the below whenever the Resources -> Subscription input is configured, and the TA doesn't pull data:

ValueError("Parameter 'subscription_id' must not be None.")
ValueError: Parameter 'subscription_id' must not be None.

You can only add the subscription ID in the GUI while creating a new input, and when you save the config it immediately blanks out that field. I wouldn't expect to need a subscription ID to pull data for all subscriptions in a tenant anyway. Either way it's busted. Has anyone gotten the "Subscriptions" input to work successfully? Does anyone know if this is a known bug? (Splunk Add-on for Microsoft Cloud Services)
I was with the NAVSEA team at SKO, and they said that if I provided the Canadian compliance requirements you could add these to the Compliance Essentials app, as you have done for the US and Australia. Here is what I received from my customer, Marine Atlantic:

For compliance, we deal with IMO, SOLAS Chapter 9, sometimes referred to as the ISM code. This is for existing vessels. The new vessel is DNV SP1 (Security Profile 1, or Cyber Secure Essential) compliant (some SP3 systems). You may note, SP0 is IMO. We also use ITSG-33 internally, which is the Canadian version of NIST 800-53 (but not maintained as well; NIST has added some cloud-based controls, for example).

Does the Splunk Compliance Essentials app cover these requirements? Please let me know. Thanks, Alli. RSM PBST Canada.
I have written this query:

index=index_name (log.event=res OR (log.event=tracing AND log.operationName=query_name)) | timechart span=1m avg(log.responseTime) as AvgTimeTaken, min(log.responseTime) as MinTimeTaken, max(log.responseTime) as MaxTimeTaken count by log.operationName

My results look like this:

_time                 AvgTimeTaken: NULL   MaxTimeTaken: NULL   MinTimeTaken: NULL   count: NULL   count: query_name
2024-03-18 13:00:00                                                                  0             0 0

I want to understand what the ": NULL" means, and how I can get the query to display all values. Secondly, the count is being displayed for a query_name that is only similar to the query_name in my query string; I want an exact match on query_name. Can someone please help me with this? Thanks!
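A hedged explanation and sketch: with a `by` clause, timechart creates one column per aggregation per split value, and events where log.operationName is missing (the log.event=res events here) fall into a NULL series, which is where the ": NULL" columns come from. Filtering to the wanted operation first, and using `where` with `==` for an exact comparison, may give the intended shape (names taken from the question):

```
index=index_name log.event=tracing
| where 'log.operationName' == "query_name"
| timechart span=1m avg(log.responseTime) as AvgTimeTaken
    min(log.responseTime) as MinTimeTaken
    max(log.responseTime) as MaxTimeTaken
    count
```

Field names containing dots need single quotes inside `eval`/`where`; without the `by` clause, each aggregation becomes a single plainly named column.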
I run a Splunk query to see events from my web application firewall. I filter out certain violations by name, using a NOT and parentheses to list the violations I don't care to see. My network is subject to attack, and my query, which I use to look for legitimate users being blocked, gets inundated by various IPs generating hundreds of events. How can I table fields so I can see the data I want per event, but also filter out a field if that field's event count is greater than a value? A simple example: an IP from a facility is seen once for a block in the last 15 minutes, while another IP was seen 400 times as part of a scan. I want to see the 1 (or even 10) events from a specific source IP, but not the 400 from another. I know I can exclude the whole IP, or part of it with a wildcard, but that gets messy and can lead to too many IPs in a NOT statement. The current table section of my query:

table _time, event_id, hostname, violation, policy, uri, ip_client | sort - _time

Adding a stats count by ip_client shows only the count and IP, losing the other data, and since the event IDs are always different, counting by them would never exceed 1. It would be nice if I could do something like "| where count ip_client<=10" to remove any source IPs that show up more than 10 times in the results.
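A sketch of one approach using `eventstats`, which adds the per-IP count to every event without collapsing the rows, so the existing table survives (field names are taken from the question):

```
index=waf
| eventstats count as ip_count by ip_client
| where ip_count <= 10
| table _time, event_id, hostname, violation, policy, uri, ip_client
| sort - _time
```

Unlike `stats`, `eventstats` annotates each event with the aggregate, which makes the "keep only IPs seen at most N times" filter a simple `where` on the new field.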
Hi guys, thanks in advance. I am using the transaction command to fetch unique correlationIds, and I have multiple conditions to match. Below is my query. I am getting results, but not in the proper form:

index="mulesoft" (message="API: START: /v1/fin_outbound") OR (message="API: START: /v1/onDemand") OR (message="API: START: /v1/fin_Import") OR (message="API: START: /v1/onDemand") OR (*End of GL-import flow*) OR (tracePoint="EXCEPTION") OR (priority="WARN" AND *GLImport Job Already Running, Please wait for the job to complete*) OR (*End of GL Import process - No files found for import to ISG*)
| transaction correlationId
| search NOT message IN ("API: START: /v1/fin_Zuora_GL_Revpro_Journals_outbound")
| rename content.File.fid as "TransferBatch/OnDemand" content.File.fname as "BatchName/FileName" content.File.fprocess_message as ProcessMsg content.File.fstatus as Status content.File.isg_file_batch_id as OracleBatchID content.File.total_rec_count as "Total Record Count"
| eventstats min(timestamp) AS Start_Time, max(timestamp) AS End_Time by correlationId
| eval JobType=case(like('message',"%API: START: /v1/onDemand%"),"OnDemand",like('message',"%API: START: /v1/onDemand%"),"OnDemand",like('message',"API: START: /v1/fin_Import"),"Scheduled")
| eval Status=case(like('Status' ,"%SUCCESS%"),"SUCCESS", like('Status',"%ERROR%"),"ERROR",like('tracePoint',"%EXCEPTION%"),"ERROR",like('priority',"%WARN%"),"WARN",like('message',"%End of GL Import process - No files found for import to ISG%"),"ERROR")
| eval ProcessMsg= coalesce(ProcessMsg,message)
| eval StartTime=round(strptime(Start_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(End_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")
| rename Logon_Time as Timestamp
| table Status Start_Time JobType "TransferBatch/OnDemand" "BatchName/FileName" ProcessMsg OracleBatchID "Total Record Count" ElapsedTimeInSecs "Total Elapsed Time" correlationId
| fields - ElapsedTimeInSecs
| search Status="*"

A screenshot is attached; I want to show only the yellow-marked values.
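A hedged sketch of an alternative often suggested in place of `transaction`: grouping with `stats by correlationId` collapses each correlation into one row up front, which tends to produce a cleaner final table and scales better (field names are taken from the query above; extend the `values()` list to the other content.File.* fields as needed):

```
index="mulesoft"
| stats min(timestamp) as Start_Time max(timestamp) as End_Time
        values(message) as message values(content.File.fstatus) as Status
        by correlationId
```

The later `eval` and `table` stages can then operate on one row per correlationId instead of one merged mega-event per transaction.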
I have an alert which detects when a log feed has failed. The team the alert goes to has asked that I allow them to suppress it. I have now created a mailto link within the alert email that sends an email with a specifically crafted subject and body; all future alerts detect this and suppress themselves for 12 hours. A simple calculation generates the 12 hours, the epoch timestamp goes in the subject header, and the alert SPL looks at the subject and either suppresses the alert or not. This works perfectly. The technical team have now asked that I vary the suppression as follows:

If the alert came in before 10AM, the suppression remains 12 hours.
If the alert came in after 10AM, the suppression lasts until 10AM the following day.

So, how do you calculate a timestamp for 10AM the following day? It must be simple, but my mind has lost it right now. Something like: if the current hour is past 10AM, then timestamp = tomorrow at 10:00.
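A sketch using `relative_time` (the 12-hour branch is 43200 seconds; `suppress_until` is an illustrative name): "+1d@d" moves forward one day and snaps to midnight, and "+10h" then lands on 10AM tomorrow.

```
| eval suppress_until = if(tonumber(strftime(now(), "%H")) < 10,
      now() + 43200,
      relative_time(now(), "+1d@d+10h"))
```

The same relative-time specifier syntax used for `earliest`/`latest` works inside `relative_time`, which keeps the midnight snap and the hour offset in one expression.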
Does anyone know this issue? I used CentOS Stream 8 to install SOAR 6.2.0 on-prem, but it can't read /etc/redhat-release.

[phantom@10 splunk-soar]$ ./soar-prepare-system --splunk-soar-home /opt/splunk-soar/ --https-port 443
Detailed logs will be located at /opt/splunk-soar/var/log/phantom/phantom_install_log
Preparing system for installation of Splunk SOAR 6.2.0.355
Unable to read CentOS/RHEL version from /etc/redhat-release.
Traceback (most recent call last):
  File "/opt/splunk-soar/./soar-prepare-system", line 93, in main
    pre_installer.run()
  File "/opt/splunk-soar/install/deployments/deployment.py", line 132, in run
    self.run_pre_deploy()
  File "/opt/splunk-soar/usr/python39/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/opt/splunk-soar/install/deployments/deployment.py", line 146, in run_pre_deploy
    plan = DeploymentPlan.from_spec(self.spec, self.options)
  File "/opt/splunk-soar/install/deployments/deployment_plan.py", line 51, in from_spec
    deployment_operations=[_type(options) for _type in deployment_operations],
  File "/opt/splunk-soar/install/deployments/deployment_plan.py", line 51, in <listcomp>
    deployment_operations=[_type(options) for _type in deployment_operations],
  File "/opt/splunk-soar/install/operations/optional_tasks/rpm_packages.py", line 53, in __init__
    self.rpm_checker = RpmChecker(self.get_rpm_packages(), self.shell)
  File "/opt/splunk-soar/install/operations/optional_tasks/rpm_packages.py", line 63, in get_rpm_packages
    if get_os_family() == OsFamilyType.el7:
  File "/opt/splunk-soar/install/install_common.py", line 340, in get_os_family
    os_version = get_os_version()
  File "/opt/splunk-soar/install/install_common.py", line 326, in get_os_version
    return _get_centos_and_rhel_version()
  File "/opt/splunk-soar/install/install_common.py", line 315, in _get_centos_and_rhel_version
    raise InstallError("Unable to read CentOS/RHEL version from /etc/redhat-release.")
install.install_common.InstallError: Unable to read CentOS/RHEL version from /etc/redhat-release.
Pre-install failed.

But when I open /etc/redhat-release:

[phantom@10 splunk-soar]$ cat /etc/redhat-release
NAME="CentOS stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"

Any suggestions are welcome.