All Topics

Hello. Did anything change in the SAML SSO configuration in the new Splunk v8.2.4? We have an IdP configured for SSO, and it works in Splunk v8.1.1. We recently upgraded to v8.2.4 and copied over the same authentication.conf from v8.1.1. We are seeing the error below in splunkd.log:

relaystate is empty. RelayState may be missing due to IdP-initiated SAML workflow. User=<user>@<DOMAIN1.DOMAIN.COM> domain= does not match default domain. Contact your system administrator for more information about the default domain=saml for this system

Any idea how to fix this error?
Hi, I'm trying to build a query to get the count of opened and resolved incidents in every hour of a day, but the numbers are not tallying. I suspect the issue is that ServiceNow uses GMT, so the dv_opened_at and dv_closed_at fields on every ticket are in GMT, while the _time field is local time, which in my case is EST. I'm using the following query but not getting the correct numbers:

index=xyz
| eval _time = strptime(dv_opened_at,"%Y-%m-%d %H:%M:%S")
| sort 0 - _time
| addinfo
| where _time >= info_min_time AND _time <= info_max_time
| eventstats min(_time) AS earliest_time BY sys_id
| where _time = earliest_time
| timechart span=1h dc(sys_id) AS "Opened Tickets"
| appendcols
    [ search index=xyz
    | eval _time = strptime(dv_resolved_at,"%Y-%m-%d %H:%M:%S")
    | sort 0 - _time
    | addinfo
    | where _time >= info_min_time AND _time <= info_max_time
    | eventstats min(_time) AS earliest_time BY sys_id
    | where _time = earliest_time
    | timechart span=1h dc(sys_id) AS "Closed Tickets"]

Does anyone know how I can fix the query to get the correct number of incidents opened and closed every hour on a specific day?
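A sketch of one possible restructuring (assuming the timestamps really are GMT and that dv_resolved_at is the field to use for closures): parse both timestamps per ticket, expand each ticket into one "opened" and one "closed" event, and run a single timechart, which avoids the row misalignment that appendcols can introduce. Appending " +0000" pins strptime to UTC regardless of the search head's time zone:

```spl
index=xyz
| eval opened_epoch=strptime(dv_opened_at." +0000", "%Y-%m-%d %H:%M:%S %z")
| eval closed_epoch=strptime(dv_resolved_at." +0000", "%Y-%m-%d %H:%M:%S %z")
| eval stamps=mvappend("O|".opened_epoch, "C|".closed_epoch)
| mvexpand stamps
| eval action=mvindex(split(stamps,"|"),0)
| eval _time=tonumber(mvindex(split(stamps,"|"),1))
| timechart span=1h dc(eval(if(action="O",sys_id,null()))) AS "Opened Tickets"
                  dc(eval(if(action="C",sys_id,null()))) AS "Closed Tickets"
```

mvappend silently drops a null member, so tickets that are not yet resolved still contribute an "opened" event.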
After upgrading Splunk Enterprise to 8.2.4, several triggered alerts with tokens are no longer sending out emails. Looking at splunkd.log, there is a warning message concerning the alert:

02-10-2022 10:02:28.244 -0600 WARN Pathname [15448 AlertNotifierWorker-0] - Pathname 'E:\Splunk\bin\Python3.exe E:\Splunk\etc\apps\search\bin\sendemail.py "results_link= "ssname=Password Reset Reminder" "graceful=True" "trigger_time=1644508948" results_file="E:\Splunk\var\run\splunk\dispatch\scheduler__srunyonadm__search__RMD5c5f30383081059ef_at_1644508800_24883\results.csv.gz" "is_stream_malert=False"' larger than MAX_PATH, callers: call_sites=[0xd4d290, 0xd4f001, 0x15d1632, 0x15ce217, 0x1439f53, 0x13c8176, 0x71f406, 0x71ea9e, 0x71e899, 0x6eaeeb, 0x70c3c5]

I am concerned about the "larger than MAX_PATH" message, because the Splunk documentation states: "The Windows API has a path limitation of MAX_PATH, which Microsoft defines as 260 characters including the drive letter, colon, backslash, 256 characters for the path, and a null terminating character. Windows cannot address a file path that is longer than this, and if Splunk software creates a file with a path length that is longer than MAX_PATH, it cannot retrieve the file later. There is no way to change this configuration." What can be done to get this working again? Regards, Scott Runyon
Howdy, I'm trying to come up with a query that charts the most frequently occurring x_forwarded_for and its count in each bin over whatever window. Currently, the query below creates a sorted chart of the most frequently occurring x_forwarded_for values and their counts over the entire lookback window, instead of per bin. I think I need to fit a head 1 in there somewhere. It's likely some or all of the x_forwarded_for values across those bins are repeats, and I'd like that charted, so no unique counts. Any help is appreciated!

index="canvas_*" cluster="*"
| where isnull(user_id)
| bin _time span=5m
| stats count by x_forwarded_for
| sort - count
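One way to get the top value per bin, sketched on top of the query above: group by both _time and x_forwarded_for, sort each bin's rows by descending count, and keep only the first row per bin with dedup:

```spl
index="canvas_*" cluster="*"
| where isnull(user_id)
| bin _time span=5m
| stats count BY _time, x_forwarded_for
| sort 0 _time, -count
| dedup _time
```

dedup keeps the first occurrence in the current sort order, which after the sort is the highest-count x_forwarded_for in each 5-minute bin.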
I have reports Quarter1.csv and Quarter2.csv. After I upload these two CSV reports I get host="***" source="****" sourcetype="***" and these fields: IP_Address, Plugin_Name, Severity, Protocol, Port, Exploit, Synopsis, Description, Solution, See_Also, CVSS_V2_Base_Score, CVE, Plugin. I want 3 reports based on joining on these 6 fields: IP_Address, Plugin_Name, Severity, Protocol, Port, Exploit

| table IP_Address, Plugin_Name, Severity, Protocol, Port, Exploit, Synopsis, Description, Solution, See_Also, CVSS_V2_Base_Score, CVE, Plugin, status

First report: if the events are in both Quarter1.csv and Quarter2.csv, show status as "Active Vulnerability".
Second report: if the events are in Quarter1.csv but not in Quarter2.csv, show status as "Fixed".
Third report: if the events are not in Quarter1.csv but are in Quarter2.csv, show status as "New Active Vulnerability".
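A sketch of one approach (the index name is a placeholder, and it assumes the source field still ends with the uploaded file names): tag each event with its quarter, group on the six join fields, and derive status from which quarters each combination appears in. Filter on status afterwards to split the result into the three reports, e.g. with | search status="Fixed" for the second one:

```spl
index=xyz (source="*Quarter1.csv" OR source="*Quarter2.csv")
| eval quarter=if(like(source,"%Quarter1.csv"),"Q1","Q2")
| stats values(quarter) AS quarters BY IP_Address, Plugin_Name, Severity, Protocol, Port, Exploit
| eval status=case(mvcount(quarters)=2, "Active Vulnerability",
                   quarters="Q1", "Fixed",
                   quarters="Q2", "New Active Vulnerability")
```

The remaining columns (Synopsis, Description, etc.) can be carried through the stats with values() or first() as needed.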
I'm running Splunk Enterprise 8.2.4. When deploying the Universal Forwarder for Windows (version 8.2.4) and selecting to run it under the Local System account, it subsequently asks me to 'create credentials for the administrator account', as per the attached. What is the purpose of this?
Hi. So I'm reading about this Add-on and the instructions seem to be pretty straightforward about getting the Add-on installed on my search head and indexer. What I have are Domain Controllers on a network that is not local. I have a universal forwarder (Ubuntu) on site there which is forwarding Palo Alto logs via syslog-ng.  My question is this. What do I need to install on a Domain Controller on the remote network to get it to gather Active Directory and forward to the indexer either directly or via the universal forwarder? 
Is there a list of which metrics are and are not available in Dash Studio? Currently it looks like custom metrics, service endpoints, and information points are not available. Documenting this in a central place would be helpful, instead of getting halfway through building a dashboard only to find out a metric isn't available.
In the query, _time is already formatted, but when I try to export the data to CSV it shows different formats.

Query:
index="wineventlog" host IN (USMDCKPAP30074) EventCode=6006 OR EventCode="6005" Type=Information
| eval BootUptime = if(EventCode=6005, strftime(_time, "%Y-%d-%m %H:%M:%S"), null())
| table host BootUptime

E.g.:
2022-31-01 10:00:42
2022-29-01 06:40:11
2022-27-01 12:55:56

After exporting:
8/1/2022 4:08
1/1/2022 4:03
2021-25-12 04:03:29
2021-18-12 04:02:54
2021-16-12 10:14:45
2021-16-12 10:08:21
11/12/2021 4:08
4/12/2021 4:11

Please help me resolve this.
Good morning to all, I have a newbie question. I know I'm missing something simple, and I'm wondering if someone could point me in the right direction. I currently use syslog as an input stream and write it to the main index. My Cisco applications appear to be working just fine, but I cannot get data into the same tables for the CIM-type applications to see it.
I'm running Splunk 8.2.2.1 on a MacBook with Apple Silicon and macOS 12.2. I've installed the Splunk Dashboard Examples app (v8.2.2). When I navigate to the app and then to "Examples", all I get is an empty area below the headline "Examples". I've tried Firefox, Safari, and Chromium. Any ideas?
I have a KPI dashboard with all single-value panels. I'm passing the time token from this dashboard to another dashboard using drilldown options, but I'm also looking to get click.value from a single panel to pass as an input token for another dashboard, and it's not working. Is there any other way we can capture the value on the single value panel and pass it to the other dashboard? Any help is highly appreciated. Thanks
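For reference, a minimal Simple XML drilldown sketch (the app, dashboard, and form token names here are made up): a single value panel exposes the clicked value as $click.value$, which can be passed as a form token on the target dashboard's URL alongside the time tokens:

```xml
<single>
  <search><query>... your KPI search ...</query></search>
  <drilldown>
    <link target="_blank">/app/my_app/target_dashboard?form.kpi_value=$click.value$&amp;form.time_tok.earliest=$earliest$&amp;form.time_tok.latest=$latest$</link>
  </drilldown>
</single>
```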
Hi, I am using the following search on Windows Event Viewer System logs that I extracted for testing:

index="503461" host="hp-laptop" "Sleep Time"

The log looks like this:

Information,4.2.2022 г. 12:55:47,Microsoft-Windows-Power-Troubleshooter,1,None,"The system has returned from a low power state.
Sleep Time: ‎2022‎-‎02‎-‎04T10:38:18.391571900Z
Wake Time: ‎2022‎-‎02‎-‎04T10:55:46.701556600Z
Wake Source: Device -USB Composite Device"

I am trying to calculate the total duration between the two timestamps. Can someone help with the search string? Thank you.
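A sketch of one way to do it. Note that the timestamps in this event contain invisible Unicode direction marks between the digit groups, so the sketch strips all non-ASCII characters before extracting; the %9N nanosecond format is an assumption worth verifying against your ingested data:

```spl
index="503461" host="hp-laptop" "Sleep Time"
| eval clean=replace(_raw, "[^[:ascii:]]", "")
| rex field=clean "Sleep Time:\s*(?<sleep_time>\S+)Z"
| rex field=clean "Wake Time:\s*(?<wake_time>\S+)Z"
| eval sleep_epoch=strptime(sleep_time, "%Y-%m-%dT%H:%M:%S.%9N")
| eval wake_epoch=strptime(wake_time, "%Y-%m-%dT%H:%M:%S.%9N")
| eval duration=tostring(wake_epoch - sleep_epoch, "duration")
| table _time, sleep_time, wake_time, duration
```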
Does Splunk have any SPL command like punct? The default punct field captures the pattern of the _raw field. Is there any command I can use to get a similar pattern for a custom field instead of _raw? Example:

description="User: ABC Project: XYZ Company Name: JKLM Short Description: Project is so and so"
description="User: ABC Company Name: JKLM Project: XYZ Employee Level: 7 Short Description: Project is so and so User Designation: Splunk Consultant"
description="User: ABC Project: Jkl Company Name: JKLM Short Description: Project: Automation"

and so on. I cannot use the extract command, because the subfields I want to extract are not in a fixed order and the keys can be 2, 3, 4, or 5 words long. The only key-value delimiter I can see is the colon (:), and sometimes users might also type a colon inside certain subfields.
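As far as I know there is no per-field punct command, but the pattern punct produces can be approximated with replace(): drop the alphanumerics and turn whitespace into underscores, leaving only the punctuation skeleton of the field. A sketch (the index name is a placeholder):

```spl
index=foo
| eval description_punct=replace(description, "[A-Za-z0-9]+", "")
| eval description_punct=replace(description_punct, "\s", "_")
| stats count BY description_punct
```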
Hi, I need your help. I have a standard query like this:

index=a foo

and I need to return only the records that match a list of values in a CSV (or even a lookup table). Please note that index=a does not have a specific field corresponding to the info in the CSV. Example of the CSV:

Name
Andrew
John
Michael

Thanks in advance.
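Since there is no common field to join on, one trick worth trying (a sketch; names.csv stands in for your actual lookup file) is to rename the lookup column to query in a subsearch, which makes Splunk treat each value as a bare search term matched anywhere in _raw:

```spl
index=a foo
    [| inputlookup names.csv
     | fields Name
     | rename Name AS query]
```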
I am looking for something like the following. I have a search string = rubi and want to check this string's presence in a lookup table = metals.csv:

Name         date            region
rubi         12122021        abc
diamond      12122022        def
platinum     12122023        ghi

What would my Splunk query be to show the presence of my search string in the lookup table? I want the result to be something like below. Since in the above example rubi is present in metals.csv, my result table should have an extra column Present with status Yes:

Name     Present
rubi     Yes

If it is not present, say for example searchstring=copper, which is not in metals.csv, then the output table should be:

Name     Present
copper   No

Note: I am entering the search string in a text box of a dashboard and want a result table as above.
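A sketch that works from a dashboard text box (the token name $name_tok$ is made up; substitute whatever your text input sets): generate a single row with makeresults, look the value up in metals.csv, and derive Present from whether any output field came back:

```spl
| makeresults
| eval Name="$name_tok$"
| lookup metals.csv Name OUTPUT date
| eval Present=if(isnotnull(date), "Yes", "No")
| table Name, Present
```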
My query is:

index=windows Type=Disk host IN (abc) FileSystem="*" DriveType="*" Name="*"
| dedup host, Name
| table _time, host, Name
| sort host, Name
| join type=left host
    [| search index=perfmon source="Perfmon:CPU" object=Processor collection=CPU counter="% Processor Time" instance=_Total host IN (abc)
    | convert num(Value) as value num(pctCPU) as value
    | stats avg(value) as "CPUTrend" max(value) as cpu_utz by host
    | eval "Max Peak CPU" = round(cpu_utz, 2)
    | eval "CPUTrend" = round(CPUTrend, 2)
    | fields - cpu_utz
    | sort -"Peak CPU"
    | rename "Max Peak CPU" AS "maxCPUutil"
    | dedup "maxCPUutil"
    | table _time, host, "maxCPUutil"]
| table host, "maxCPUutil", Name

I have this output:

host    maxCPUutil    Name
abc     5.59          c:
abc     5.59          E:
abc     5.59          F:

What I want is:

host    maxCPUutil    Name
abc     5.59          C:
                      E:
                      F:
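One way to get that shape, sketched as an addition to the end of the query above: collapse the drive letters into a single multivalue cell per host, so host and maxCPUutil each appear only once per host:

```spl
| stats list(Name) AS Name BY host, maxCPUutil
| table host, maxCPUutil, Name
```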
I have an application named "TA-training_samaksh_for_splunk". I have to run the following query:

index="training_samaksh" source="/home/devuser/tutorialdata/www1/access.log"
| table ip_address, request_method, time_taken
| outputlookup createinapp=true testwritecsv_lookup

transforms.conf has the following lookup defined:

[testwritecsv_lookup]
filename = test.csv

The "test.csv" file always gets created/updated in "/splunk/etc/apps/search/lookups" or "/splunk/etc/users/<username>/TA-training_samaksh_for_splunk/lookups" and never in "/splunk/etc/apps/TA-training_samaksh_for_splunk/lookups", even though I am running the search within the app. Any solution for this?
We are using SAP Business Technology Platform (Cloud Foundry) as PaaS, and we want to drain application logs to the Splunk Cloud Platform. Please provide implementation steps. Currently we are using the Kibana service for log monitoring on SAP Business Technology Platform (Cloud Foundry). Now we want to drain syslog and application logs from it to the Splunk Cloud Platform, and we need all the necessary steps to set up the integration. We are new to Splunk and want to do a simple PoC with this integration set up.
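For what it's worth, the generic Cloud Foundry side of a log drain looks like the sketch below (service and app names are placeholders). Note that Splunk Cloud normally ingests over the HTTP Event Collector rather than raw syslog, so the drain endpoint usually has to be an intermediary that speaks both, for example a heavy forwarder or the community splunk-firehose-nozzle:

```shell
# Create a user-provided service pointing at the syslog drain endpoint
cf create-user-provided-service splunk-drain -l syslog-tls://<drain-host>:<port>

# Bind it to the app whose logs should be forwarded, then restage
cf bind-service <app-name> splunk-drain
cf restage <app-name>
```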
I am new to Splunk, and my use case is to send a file to Splunk and then have Splunk parse it. Can someone please help me with the code to push a file from my local machine to the Splunk server using the API? I want to automate this task.
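One common pattern is to post the file's contents to Splunk's HTTP Event Collector (HEC). The sketch below builds such a request using only the Python standard library; the host, port, token, and file path are all placeholders you must supply, and it assumes HEC is enabled on the server and a token has been created for it:

```python
import urllib.request


def build_hec_request(host, port, token, filepath, sourcetype="_json"):
    """Build a POST request that ships a local file's contents to a
    Splunk HTTP Event Collector 'raw' endpoint."""
    with open(filepath, "rb") as fh:
        payload = fh.read()
    url = f"https://{host}:{port}/services/collector/raw?sourcetype={sourcetype}"
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Authorization": f"Splunk {token}"},
        method="POST",
    )


# Actually sending it requires a reachable HEC endpoint and a valid token:
# urllib.request.urlopen(build_hec_request("splunk.example.com", 8088, "MY-TOKEN", "data.log"))
```

How the payload is parsed (line breaking, timestamping) is then governed by the sourcetype passed on the URL.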