All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I have installed the universal forwarder on a cloud instance (Linux), and I have installed Splunk Enterprise on my local machine, a laptop running Windows 10. I want to forward logs from the Linux machine to my laptop's Splunk indexer. The problem is: what server IP should I give in the universal forwarder's etc/system/local/outputs.conf?
[tcpout:example]
server=?????
I tried giving my IP ***.***.**.***:9997, but it did not work. On my laptop, Splunk is running at localhost:8000. Please help me with this. Thanks.
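A minimal outputs.conf sketch for the forwarder, stated as an assumption (the group name is arbitrary, and <laptop-ip> is a placeholder for an address that the cloud instance can actually route to):

```
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = <laptop-ip>:9997
```

Note that localhost:8000 is only Splunk Web; forwarded data goes to the receiving port (9997 by convention), which must first be enabled on the indexer under Settings > Forwarding and receiving > Configure receiving. A laptop behind a home router/NAT is generally not reachable from a cloud instance without port forwarding and firewall rules.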
Hi all, I have a table called active_services.csv. One of the fields is called Report_Date; its value is in the format 20220124. The CSV file is updated automatically every week, but the update sometimes fails and requires manual intervention. I need help with a query so I can set up an alert to notify me when the Report_Date value is older than X days. Thank you in advance for your help.
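One possible sketch of an alert search, assuming Report_Date is in %Y%m%d format and using 7 as an example value for X:

```
| inputlookup active_services.csv
| eval report_epoch = strptime(Report_Date, "%Y%m%d")
| stats max(report_epoch) AS latest_report
| eval age_days = floor((now() - latest_report) / 86400)
| where age_days > 7
```

Saved as a scheduled alert that triggers when the number of results is greater than zero, this would fire whenever the newest Report_Date in the file is stale.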
Hello Splunkers! I used the | delete command to delete the data, but to my knowledge the actual data is still in storage. Is it possible to permanently remove the data that I deleted in search? Thank you in advance. 🥸
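For reference, | delete only masks events from search results. Removing data from disk is done with the CLI clean command, which must be run with Splunk stopped and which wipes the entire index, not individual events. A sketch, where <index_name> is a placeholder:

```
./splunk stop
./splunk clean eventdata -index <index_name>
./splunk start
```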
How would I find more than one sAMAccountName? I have tried the boolean operators AND (&) and OR (|) to no avail; currently only a single user works:
| ldapsearch domain=xxxx basedn="DC=xxxx,DC=xxxx" search="(&(objectClass=user)(sAMAccountName=specificuser))"
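A sketch of a filter matching several accounts: in LDAP filter syntax the OR operator is prefix-style and wraps all of its alternatives, i.e. (|(a=1)(a=2)). Here user1 and user2 are placeholder account names:

```
| ldapsearch domain=xxxx basedn="DC=xxxx,DC=xxxx"
    search="(&(objectClass=user)(|(sAMAccountName=user1)(sAMAccountName=user2)))"
```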
Hi all, I am struggling a bit with incorporating a lookup into my searches. I have a lookup file with a single column of IP addresses and a header of TORIP. It should be a pretty basic search, index=* src_ip=*, followed by the lookup. I added the lookup file and the lookup definition, but when I run a search it fails, saying the lookup table doesn't exist.
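For reference, a sketch of how such a lookup is typically applied; the definition name tor_ips is an assumption. Two common causes of "lookup table does not exist" are referencing the file name instead of the lookup definition name, and a permissions/sharing scope that does not cover the app where the search runs:

```
index=* src_ip=*
| lookup tor_ips TORIP AS src_ip OUTPUT TORIP AS tor_match
| where isnotnull(tor_match)
```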
Hi. I've got a search looking for times and dates with index=main host=web1 "/blarg=foo" | table _time. How can I use the results to search index=main host=app1 blarg during the times from the first search?
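One common sketch for this pattern: a subsearch that returns fields named earliest and latest is interpreted by the outer search as a set of time ranges. The ±300 second padding around each matching event is an assumption to illustrate:

```
index=main host=app1 blarg
    [ search index=main host=web1 "/blarg=foo"
      | eval earliest = _time - 300, latest = _time + 300
      | table earliest latest ]
```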
What is the best way to trim a timestamp formatted like 2022-01-06 01:51:23 UTC so that it only reflects the date and hour, like 2022-01-06 01? I need to be able to search for events by just the date and hour.
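One sketch, assuming the timestamp lives as a string in a field called ts: since the format is fixed-width, the date-and-hour prefix is simply the first 13 characters:

```
| eval date_hour = substr(ts, 1, 13)
| search date_hour = "2022-01-06 01"
```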
I use the following to define an icon to display on my dashboard, in the Simple XML for the dashboard:
eval coldImg = "/weatherAssets/apps/ics_analysis/lowTemp.png"
Here is the path for the image: /opt/splunk/etc/apps/ics_analysis/weatherAssets/lowTemp.png, where ics_analysis is the name of the app and weatherAssets is the folder for the icons. It used to display when I had the following:
eval coldImg = "https://image.flaticon.com/icons/png/512/1312/1312331.png"
but now it only shows a broken-image icon. What could be wrong? How can I debug the problem? It's frustrating that I don't know how to find the error message for the issue. Do I have to restart the Splunk server, or bump my dashboard? (I did reload the web page.) Thanks for your help!
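One likely cause, stated as an assumption: Splunk Web only serves an app's static files from its appserver/static directory, at the URL path /static/app/<app_name>/<file>. A sketch of the matching layout:

```
file location on disk:
    /opt/splunk/etc/apps/ics_analysis/appserver/static/lowTemp.png

reference in the Simple XML:
    eval coldImg = "/static/app/ics_analysis/lowTemp.png"
```

After moving the file, visiting http://<host>:8000/en-US/_bump (or restarting Splunk Web) forces the static-asset cache to refresh.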
I am looking for the 6.5 x86 release of Splunk. It is no longer listed under the older downloads. Can anyone help?
Is there any step-by-step guide to setting up a Splunk home lab? I am trying to learn and don't know where to start.
Hello, I want to calculate a ratio between two fields (I know it's supposed to be an easy one, but it looks like I'm missing something). I want to count all the Totals, then count those where Total > 200 as latency. Once I have both counts, I want to check whether the ratio between them is > 0.3.
sourcetype="*user-program*"
| rename AdditionalData.Total as Total
| eval Latency=if(Total>200,Total,null())
| eval Ratio = Total/Latency
This one returns no results.
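A sketch of one way to compute a ratio of counts rather than a per-event division (the original Ratio = Total/Latency is evaluated per event, and is null whenever Latency is null, so it never produces the aggregate ratio described):

```
sourcetype="*user-program*"
| rename AdditionalData.Total as Total
| eval is_latency = if(Total > 200, 1, 0)
| stats count AS total_count, sum(is_latency) AS latency_count
| eval Ratio = latency_count / total_count
| where Ratio > 0.3
```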
Hello, have any changes been made to the SAML SSO configuration in the new Splunk v8.2.4? We have an IdP configured for SSO, and it works in Splunk v8.1.1. We recently upgraded to v8.2.4 and copied over the same authentication.conf from v8.1.1. We are seeing the error below in splunkd.log:
relaystate is empty RelayState may be missing due to IDP-intiated SAML workflow.
User=<user>@<DOMAIN1.DOMAIN.COM> domain= does not match default domain. Contact your syste administrator for more information about the default domain=saml for this system
Any idea how to fix this error?
Hi, I'm trying to build a query to get the count of opened and resolved incidents for every hour in a day, but the numbers do not tally. I'm not sure whether the issue is that ServiceNow uses GMT, so the dv_opened_at and dv_closed_at fields are in GMT while the _time field is local time, which in my case is EST. I'm using the following query but not getting the correct numbers:
index=xyz
| eval _time = strptime(dv_opened_at,"%Y-%m-%d %H:%M:%S")
| sort 0 - _time
| addinfo
| where _time >= info_min_time AND _time <= info_max_time
| eventstats min(_time) AS earliest_time BY sys_id
| where _time = earliest_time
| timechart span=1h dc(sys_id) AS "Opened Tickets"
| appendcols
    [ search index=xyz
    | eval _time = strptime(dv_resolved_at,"%Y-%m-%d %H:%M:%S")
    | sort 0 - _time
    | addinfo
    | where _time >= info_min_time AND _time <= info_max_time
    | eventstats min(_time) AS earliest_time BY sys_id
    | where _time = earliest_time
    | timechart span=1h dc(sys_id) AS "Closed Tickets"]
Does anyone know how I can fix the query to get the correct number of incidents opened and closed every hour on a specific day?
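One sketch of the timezone correction, under the assumption that dv_opened_at is in GMT while strptime interprets a bare timestamp in the search head's local time (EST): append an explicit offset so strptime parses the value as UTC, after which _time and the timechart buckets line up with local wall-clock hours:

```
| eval _time = strptime(dv_opened_at . " +0000", "%Y-%m-%d %H:%M:%S %z")
```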
After upgrading Splunk Enterprise to 8.2.4, several triggered alerts with tokens are no longer sending out emails. Looking at splunkd.log, there is a warning message concerning the alert:
02-10-2022 10:02:28.244 -0600 WARN Pathname [15448 AlertNotifierWorker-0] - Pathname 'E:\Splunk\bin\Python3.exe E:\Splunk\etc\apps\search\bin\sendemail.py "results_link= "ssname=Password Reset Reminder" "graceful=True" "trigger_time=1644508948" results_file="E:\Splunk\var\run\splunk\dispatch\scheduler__srunyonadm__search__RMD5c5f30383081059ef_at_1644508800_24883\results.csv.gz" "is_stream_malert=False"' larger than MAX_PATH, callers: call_sites=[0xd4d290, 0xd4f001, 0x15d1632, 0x15ce217, 0x1439f53, 0x13c8176, 0x71f406, 0x71ea9e, 0x71e899, 0x6eaeeb, 0x70c3c5]
I am concerned by the "larger than MAX_PATH" message, because the Splunk documentation states: "The Windows API has a path limitation of MAX_PATH which Microsoft defines as 260 characters including the drive letter, colon, backslash, 256-characters for the path, and a null terminating character. Windows cannot address a file path that is longer than this, and if Splunk software creates a file with a path length that is longer than MAX_PATH, it cannot retrieve the file later. There is no way to change this configuration."
What can be done to get this working again? Regards, Scott Runyon
Howdy, I'm trying to come up with a query that charts the most-occurring x_forwarded_for and its respective count in each bin over an arbitrary window. Currently, the query below creates a sorted chart of the most-occurring x_forwarded_for values and their respective counts over the entire lookback window, instead of per bin. I think I need to fit head 1 in there somewhere. It's likely that some or all of the x_forwarded_for values across those bins are repeats, and I'd like that charted, so no unique counts. Any help is appreciated!
index="canvas_*" cluster="*"
| where isnull(user_id)
| bin _time span=5m
| stats count by x_forwarded_for
| sort - count
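A sketch of keeping only the top x_forwarded_for per bin: count by both _time and x_forwarded_for so the bins survive the stats, order each bin by descending count, then keep the first row per bin:

```
index="canvas_*" cluster="*"
| where isnull(user_id)
| bin _time span=5m
| stats count BY _time, x_forwarded_for
| sort 0 _time, -count
| dedup _time
```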
I have reports Quarter1.csv and Quarter2.csv. After I upload these two CSV reports I get host="***" source="****" sourcetype="***" and these fields: IP_Address, Plugin_Name, Severity, Protocol, Port, Exploit, Synopsis, Description, Solution, See_Also, CVSS_V2_Base_Score, CVE, Plugin. I want three reports based on joining on these six fields: IP_Address, Plugin_Name, Severity, Protocol, Port, Exploit.
| table IP_Address,Plugin_Name,Severity,Protocol,Port,Exploit,Synopsis,Description,Solution,See_Also,CVSS_V2_Base_Score,CVE,Plugin,status
First report: if the event is in both Quarter1.csv and Quarter2.csv, show status as "Active Vulnerability".
Second report: if the event is in Quarter1.csv but not in Quarter2.csv, show status as "Fixed".
Third report: if the event is not in Quarter1.csv but is in Quarter2.csv, show status as "New Active Vulnerability".
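A sketch of one approach, assuming the quarter each event came from is identifiable via the source field (the wildcarded file names are placeholders): collect the sources per vulnerability key, derive the status, and let each report filter on its status value:

```
source IN ("*Quarter1.csv", "*Quarter2.csv")
| stats values(source) AS sources, latest(Synopsis) AS Synopsis, latest(Solution) AS Solution
    BY IP_Address, Plugin_Name, Severity, Protocol, Port, Exploit
| eval status = case(mvcount(sources) == 2, "Active Vulnerability",
                     match(mvjoin(sources, ","), "Quarter1"), "Fixed",
                     true(), "New Active Vulnerability")
```

The first report would then end with | search status="Active Vulnerability", and so on for the other two.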
I'm running Splunk Enterprise 8.2.4. When deploying the Universal Forwarder for Windows (version 8.2.4) and selecting to run it under the Local System account, it subsequently asks me to 'create credentials for the administrator account', as per the attached screenshot. What is the purpose of this?
Hi. So I'm reading about this Add-on, and the instructions seem to be pretty straightforward about getting the Add-on installed on my search head and indexer. What I have are Domain Controllers on a network that is not local. I have a universal forwarder (Ubuntu) on site there which is forwarding Palo Alto logs via syslog-ng. My question is this: what do I need to install on a Domain Controller on the remote network to get it to gather Active Directory data and forward it to the indexer, either directly or via the universal forwarder?
Is there a list of which metrics are and are not available in Dash Studio? Currently it looks like custom metrics, service endpoints, and information points are not available. Documenting this in a central place would be helpful, instead of getting halfway through building a dashboard only to find out a metric isn't available.
In the query, _time is already formatted, but when I try to export the data to CSV it shows different formats.
Query:
index="wineventlog" host IN (USMDCKPAP30074) EventCode=6006 OR EventCode="6005" Type=Information
| eval BootUptime = if(EventCode=6005,strftime(_time, "%Y-%d-%m %H:%M:%S"),null())
| table host BootUptime
E.g.:
2022-31-01 10:00:42
2022-29-01 06:40:11
2022-27-01 12:55:56
After exporting:
8/1/2022 4:08
1/1/2022 4:03
2021-25-12 04:03:29
2021-18-12 04:02:54
2021-16-12 10:14:45
2021-16-12 10:08:21
11/12/2021 4:08
4/12/2021 4:11
Please help me resolve this.
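Two things worth noting here. First, %Y-%d-%m is year-day-month; the conventional order is %Y-%m-%d. Second, the mixed formats after export are typically a spreadsheet artifact: Excel re-parses any cell that looks like a date (which the day-month swap makes ambiguous), while the CSV itself, viewed in a text editor, still holds the original strings. A sketch with the conventional format:

```
index="wineventlog" host IN (USMDCKPAP30074) EventCode=6006 OR EventCode="6005" Type=Information
| eval BootUptime = if(EventCode == 6005, strftime(_time, "%Y-%m-%d %H:%M:%S"), null())
| table host BootUptime
```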