All Topics

Is it possible to set TLS for only one input? For example:

Checkpoint --> TLS --> SC4S --> Splunk
Cisco ASA --> UDP 514 --> SC4S --> Splunk

So far, I can only find information about enabling TLS for all inputs; I'm wondering if I can set it per source. Thanks!
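If your SC4S version supports it, per-source TLS is typically done by giving just that source a dedicated TLS listening port while everything else stays on the default listeners. A sketch of the env_file follows; the variable names follow SC4S's unique-listening-port pattern but are assumptions here, so verify them against the SC4S source documentation for your release:

```ini
# /opt/sc4s/env_file -- sketch only; confirm variable names in the SC4S docs
SC4S_SOURCE_TLS_ENABLE=yes
# Dedicated TLS port used only for the Check Point log-exporter source (assumed name):
SC4S_LISTEN_CHECKPOINT_SPLUNK_TLS_PORT=6514
# Cisco ASA continues to arrive on the default UDP 514 listener
```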
Hi, I have multiple timecharts with similar search queries sharing the same index; the only difference is that they use different metric names, i.e.:

| mstats max(my_Var) AS my_Var where index=* AND "internal_name"="A1" ... | timechart span=1w sum(Var) AS output
| mstats max(my_Var) AS my_Var where index=* AND "internal_name"="A2" ... | timechart span=1w sum(Var) AS output

... and so on. I would like a panel where the various "output" series are summed into a combined timechart. I have seen similar solutions involving tokens, but I am unfamiliar with how they work, so I hope someone can walk me through what to do; any other solutions would be great too. Thanks
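One possible approach (a sketch, not a confirmed solution): since only internal_name differs, a single mstats can cover all the metric series, and timechart can then sum them. The "..." filters from the original searches would still need to be carried over:

```spl
| mstats max(my_Var) AS my_Var where index=* AND ("internal_name"="A1" OR "internal_name"="A2") BY internal_name span=1w
| timechart span=1w sum(my_Var) AS combined_output
```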
We're using Splunk Cloud 8.2.2202.1 and have a data upload issue. I can upload a CSV using the Add Data button in the Settings menu into index=sandbox, but can't upload to my newly created index=xxx; I get a "supplied index 'xxx' missing" error. After the data is loaded into sandbox (showing I have accounted for adding a timestamp), I can run index=sandbox | collect index=xxx sourcetype=hec testmode=false to put the data into xxx, which seems to prove index=xxx exists despite the contents of the error message.

1. Can anyone suggest what I might be doing wrong in getting my CSV data into the correct index without taking a detour through index=sandbox?
2. I suspect there might be more log information about the failure somewhere, but I've looked in index=_internal and have not seen anything relevant. Is there somewhere else I can look?
3. We have been maddened by a string of silent failures in our use of Splunk in the past weeks, and I wonder if there isn't a logging-verbosity control we could use to make it clearer what is (or is not) happening with our Splunk operations?

Kind Regards, Sean
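A starting point for question 2 (a sketch; the component names emitted vary by version, so treat the filters as assumptions to loosen if nothing matches):

```spl
index=_internal sourcetype=splunkd log_level IN (WARN, ERROR) "xxx"
| table _time component log_level message
```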
I am stuck on an integration.

Scenario: we have a PAS server that runs vulnerability-assessment (VA) scans of the whole environment, and we now need to integrate it with Splunk.

Problem statement: the PAS server runs queries on the data and stores the results in a virtual directory on itself. The client has installed the Windows IIS web server and hosted this directory on an HTTPS URL, so the reports appear at URLs such as url/report1, url/report2, etc. The client has created a service account and password to log in to the website and access the reports.

So we have three things: the HTTPS URL, a username, and a password. Which method of integration should I go with?

Note: the client said no to installing a universal forwarder on the Windows server, and I checked Splunkbase and found no add-on.

Correct me on these possible solutions:
1) Use the built-in REST API input method, but ask for an API key instead of a username and password?
2) If they can't provide an API key, use Add-on Builder to create an add-on with username/password authentication?
3) Use a curl command with the credentials to download the reports to another server, then use a universal forwarder on that server to send them to Splunk. I am not sure of this method due to security concerns.

Has anybody worked on this? Is there any documentation available? I am not sure of these methods; help me out.
Hi, I want to store the earliest and latest times of my search in variables to use in further operations, but I am unable to do so. I am trying the following:

| makeresults
| eval jobEarliestTime = $job.earliestTime$
| eval jobLatestTime = $job.latestTime$

Could anyone help me with this? Thank you.
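One possible approach (a sketch): $job.earliestTime$ is a dashboard token rather than something eval can read inside a search. Within the search itself, the addinfo command exposes the job's time range as fields:

```spl
| makeresults
| addinfo
| eval jobEarliestTime = info_min_time, jobLatestTime = info_max_time
```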
Hi guys, I already have the query below, which gives me a table similar to the one at the bottom. Is there a way to get it to display results only when the count of IP addresses is exactly 2? Meaning show results when the IP address count = 2, otherwise don't show them, so the third entry should not show but the first two should. Please let me know if you have any ideas; I appreciate your help in advance.

index=EventLog source=security EventCode=4771
| stats count values(source) AS IP_Address BY Account_Name EventID Message
| where count > 20

Account_Name  EventID  Message                             Count  IP_Address
SmithA        4771     Kerberos pre-authentication failed  5000   1.1.1.1, 2.2.2.2
JohnsonX      4771     Kerberos pre-authentication failed  6000   3.3.3.3, 4.4.4.4
washingtonZ   4771     Kerberos pre-authentication failed  7000   5.5.5.5
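A sketch of one way to do this: count the distinct IP addresses separately from the event count, then filter on that new field:

```spl
index=EventLog source=security EventCode=4771
| stats count values(source) AS IP_Address dc(source) AS ip_count BY Account_Name EventID Message
| where count > 20 AND ip_count = 2
```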
We have a case where:

index=network_index host=xx.xx.xx.xx
| eval lag_sec = (_indextime - _time)
| stats count by lag_sec

_time is current but _indextime is 37 minutes earlier. What can it be?
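If _indextime is earlier than _time, the events carry timestamps in the future relative to the indexer's clock, which usually points to a timezone or timestamp-parsing mismatch on the source. Comparing both times side by side can help confirm (a sketch):

```spl
index=network_index host=xx.xx.xx.xx
| eval event_time = strftime(_time, "%F %T %z"),
       index_time = strftime(_indextime, "%F %T %z"),
       lag_sec = _indextime - _time
| table _raw event_time index_time lag_sec
```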
Good afternoon! I have a search (code example #1) that looks at the EventData_Xml field for programs installed. I'm creating a report to show what, where, and when. I'm trying to cut out the unneeded data and show just the program name, such as Microsoft Edge, in the "Program Installed" column; example #2 below shows the raw field value. Thank you in advance for any assistance; I appreciate it.

index=wineventlog EventData_Xml="*" AND EventID=11707
| table host _time EventData_Xml
| rename host as "Host", _time as "Time", EventData_Xml as "Program Installed"
| convert ctime(Time)

<Data>Product: Microsoft Edge -- Installation completed successfully.</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data></Data><Binary>7B34443639394544332D333539302D334635352D424638302D3732374546444242313032467D</Binary>
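One possible approach (a sketch, assuming all EventID 11707 events follow the "Product: <name> -- ..." pattern shown in example #2): extract the product name with rex before building the table.

```spl
index=wineventlog EventData_Xml="*" AND EventID=11707
| rex field=EventData_Xml "Product:\s+(?<ProgramInstalled>.+?)\s+--"
| table host _time ProgramInstalled
| rename host AS "Host", _time AS "Time", ProgramInstalled AS "Program Installed"
| convert ctime(Time)
```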
I want to install a custom app, "Splunk Health Care Centre", in Splunk Cloud. I replicated the existing Search & Reporting app and sent it to the cloud team, but the cloud team gets the errors below and is unable to install the custom app. Please help.

====================================================================
#Custom 8.2.6 - SplunkHealthCareCentre

JavaScript file standards
[failure] Check for usages of telemetry metrics in JavaScript
The telemetry operations are not permitted.
Match: splunkUtil.trackEvent
File: appserver/static/get_html_dashboard_count.js
Line Number: 7

Directory structure standards
Ensure that the directories and files in the app adhere to hierarchy standards.
[failure] Check that the file 'local.meta' does not exist. All metadata permissions should be set in 'default.meta'.
Do not supply a local.meta file - put all settings in default.meta.
File: metadata/local.meta
[failure] Check that when decompressed the Splunk app directory name matches the id property in the [package] stanza in app.conf. For Cloud apps, the id property must exist and match the app directory name. For on-premise apps, if the id property exists, it must match the app directory name; if there is no id property, check_for_updates must be set to False in app.conf for the check to pass.
The `app.conf` [package] stanza does not exist. Please disable `check_for_updates` or set the `id` property in the [package] stanza.
File: default/app.conf
==========================================================================
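For the third failure, app.conf needs a [package] stanza whose id matches the app directory name. A sketch, assuming the decompressed directory is named SplunkHealthCareCentre:

```ini
# default/app.conf
[package]
id = SplunkHealthCareCentre
check_for_updates = false
```

The other two failures would still need the splunkUtil.trackEvent call removed from the JavaScript file and the metadata/local.meta file merged into default.meta.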
I have a log file that should be updated every minute, but sometimes the service hangs, so the file doesn't get updated. I need to send an alert if the file hasn't been updated for more than 5 minutes. I tried the following different ways, but the results don't seem accurate. Any ideas? Thanks. I did:

index=abcapp source="abc.log"
| sort _time
| streamstats window=2 range(_time) as timediff
| table timediff _time
| eval alert=if(timediff>=5,1,0)
| where alert=1

OR

index=abcapp source="abc.log"
| sort _time
| delta _time as timediff
| eval alert=if(timediff>5,1,0)
| where alert=1

OR

index=abcapp source="abc.log" earliest=-5m latest=now
| stats count as num
| eval alert=if(num=0,1,0)
| where alert=1
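A sketch of another approach: note that range(_time) and delta return seconds, not minutes, which may explain the inaccurate results from the first two searches. Instead of diffing adjacent events, you could compare the newest event's timestamp to now:

```spl
| tstats latest(_time) AS last_event WHERE index=abcapp source="abc.log"
| eval minutes_since = round((now() - last_event) / 60, 1)
| where minutes_since > 5
```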
Hello! I just set up Splunk Enterprise on-prem this morning, and I was able to connect our Cisco Meraki firewall to Splunk using the UDP data input (syslog). I can see all the inbound/outbound traffic, and I would like to create an alert so that when a network port scan (like Nmap) is conducted internally, I will be notified. I'm a little unsure what I need to enter in the search query. For instance, I tried this search:

index=* sourcetype="syslog" src=172.30.0.0/21 dst=172.30.0.0/21 dport=22

It's pulling in some data from my test Nmap scan, but it only shows up as one scan from my Kali Linux IP, with the destination being the subnet gateway; it's not creating an entry for every host that was scanned. Any suggestions on what I need to do?
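One common sketch for port-scan detection: count distinct destinations and destination ports per source over a short window, and alert when either fans out. The field names src, dst, and dport, and the thresholds, are assumptions to tune against your Meraki data:

```spl
index=* sourcetype="syslog" src=172.30.0.0/21 dst=172.30.0.0/21
| bin _time span=5m
| stats dc(dst) AS hosts_touched dc(dport) AS ports_touched BY src _time
| where hosts_touched > 20 OR ports_touched > 20
```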
This might be impossible, but I thought I would at least ask before giving up! I have created an add-on that uses Python scripts to pull in and index/checkpoint data, and I am already successfully pulling from several of these data input sources. The issue I am running into is that the next API I want to pull data from requires an identifier at the end (a UUID in this case). I don't know this UUID until I do a search on one of the aforementioned inputs. So my question is: is there a way to take a value (variable/token/etc.) and dynamically create another data input from it? Can you define a $var$ in your script that can be dynamically passed in, in order to index data from that endpoint?
Hi everyone, I had a question, and apologies in advance if the topic has already been brought up. We are currently using the Microsoft Office 365 Reporting Mail Add-on for Splunk to ingest message-trace logs, but recently we've been running into consistent 401 Unauthorized errors. We've double- and triple-checked that the account used to query the API is not locked, and we are able to get results when we manually call the URI:

Invoke-RestMethod -Method GET -Uri "https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?" -Credential $cred

Has anyone run into this issue? If so, I would appreciate any feedback on how it was resolved (or at least a pathway to remediation). Thank you again.
| eval hours= if (day="Monday", hours=(a+b), hours)

So basically, if day="Monday", I want hours to add up a+b.
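A sketch of the corrected syntax: eval's if() returns a value, so each branch should be an expression rather than an assignment like hours=(a+b):

```spl
| eval hours = if(day="Monday", a + b, hours)
```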
Ciao! Could you please help me split this string:

DF - R - Emails with words (O365) sent to Personal Account (scored) (Data Exfil)

I need "DF - R - Emails with words (O365) sent to Personal Account", "(scored)", and "(Data Exfil)" all to be split into their own fields.
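A sketch of one interpretation (splitting the trailing parenthesized tags away from the message; the field name message is an assumption to replace with your own):

```spl
| eval message = "DF - R - Emails with words (O365) sent to Personal Account (scored) (Data Exfil)"
| rex field=message "^(?<base_message>.+?)\s*\((?<tag1>[^)]+)\)\s*\((?<tag2>[^)]+)\)\s*$"
```

On the sample string this yields base_message="DF - R - Emails with words (O365) sent to Personal Account", tag1="scored", tag2="Data Exfil"; the (O365) stays in the base because only the final two groups are anchored to the end of the line.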
First-time Splunker here. Can you have an inputs.conf with only:

[default]
host = <fqdn>

in etc/system/local while having custom apps with inputs in etc/apps? Or will the etc/system/local/inputs.conf mess with the custom app inputs?
Basically my data is formatted as a message followed by info in parentheses on the right. Example:

"LL - VPN Activity (list) (monitor)"
"LL - Assess (iCloud) Storage (not conf) (approve)"

etc. What I want to do is, for every message, get rid of the contextual info on the right-hand side. So in these cases it would become:

"LL - VPN Activity"
"LL - Assess (iCloud) Storage"

Any ideas?
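A sketch using rex in sed mode to strip only parenthesized groups anchored at the end of the string, so a mid-string group like (iCloud) survives (the field name message is an assumption):

```spl
| rex field=message mode=sed "s/(\s*\([^)]*\))+\s*$//"
```

On "LL - Assess (iCloud) Storage (not conf) (approve)" this removes only " (not conf) (approve)", leaving "LL - Assess (iCloud) Storage".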
How can I remove alerts in the Messages tab in Splunk Web via curl? Users are reading them and panicking far too often.
I have a collection of log files that I am trying to parse. Quick summary:

From Apache/Tomcat using logback.
I don't have permission to change the layout of the log files at the moment, but I can work on that.
The Event field from the quick parse seems to hold a lot of data that could be separated out into fields via comma separation. I seem to need the 3rd item in Event, if we could parse it by comma.

Is there an easy way to do that in the query syntax? The Event section looks like:

date, something about a filter, THE URL THAT I WANT, other junk, etc.
2022-05-01 23:15:24, calling SSO: null, /topUrl/rest/services/folder/servicename/command?moreinfoEtc, referer: Null, request ip: 10.xxx.xxx.xxx

So I'd love to get statistics on:

topURL
ServiceName

Any help would be great.
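A sketch based on the sample line (the field name Event and the URL path positions are assumptions from the example): split on commas, take the third item, then pull the path segments out of the URL.

```spl
| eval parts = split(Event, ",")
| eval url = trim(mvindex(parts, 2))
| rex field=url "^/(?<topUrl>[^/]+)/rest/services/[^/]+/(?<serviceName>[^/]+)/"
| stats count BY topUrl serviceName
```

On the sample, url becomes "/topUrl/rest/services/folder/servicename/command?moreinfoEtc", giving topUrl="topUrl" and serviceName="servicename".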
The test_new.html file is updated every 4 hours, and the HTML file may or may not have the same number of lines. Data only comes in immediately when I add, say, test data to the HTML file myself; that means the data flow is not an issue. I am expecting it to send me data whenever the timestamp of the file changes. I need your suggestions on this. I have done the configuration below.

On the UF agent I have added inputs.conf as:

[monitor:///root/splunkstorage/test_new.html]
disabled = false
index = test_normal
sourcetype = test:global:testnew:html
crcSalt = <SOURCE>

On the indexer I have props.conf:

[test:global:testnew:html]
DATETIME_CONFIG = CURRENT
CHECK_METHOD = modtime
LINE_BREAKER = (<html><body>)
NO_BINARY_CHECK = true
SEDCMD-addvalues = s/<head>/<html><body>\n<head>/g
TRUNCATE = 0
category = Custom
disabled = false
pulldown_type = true