All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Does anyone have a Splunkbase app that will perform this function? I would like to email a CSV file (possibly zipped) to Splunk so that it gets ingested. Is this possible at all? Thanks! Mike B
Here is my query. I'm doing two searches that are independent of each other. In both searches, I'm restricting the time to a certain hour and then grouping by day.

index="first search"
| eval date_hour=strftime(_time, "%H")
| eval dateday=strftime(_time, "%d")
| search date_hour>=10 date_hour<11
| stats count as totalFail by dateday
| append [search index="second search"
    | eval date_hour=strftime(_time, "%H")
    | search date_hour>=10 date_hour<11
    | eval date_day=strftime(_time, "%d")
    | stats count as totalProcess by date_day
    | eval failureRate = totalFail/totalProcess]
| table dateday, totalFail, totalProcess, failureRate

I'm trying to achieve two things here: 1) getting the data output "correctly" as a table (i.e., data is uniform across rows), and 2) getting a simple percentage calculation to work. Right now the table is not formatted correctly (i.e., 10 rows instead of 5) and the percentage calculation doesn't appear to be working. Here is the desired output:

Day | Fail | Total | Percentage
10  | 1    | 10    | 10%
11  | 2    | 10    | 20%
12  | 0    | 10    | 0%
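A minimal sketch of one way to line the two counts up on the same rows (the index names are placeholders; the key assumptions are that both branches use the same date_day field name and that the failure-rate eval runs only after the appended rows are re-merged by day):

index="first search"
| eval date_hour=tonumber(strftime(_time, "%H")), date_day=strftime(_time, "%d")
| where date_hour>=10 AND date_hour<11
| stats count as totalFail by date_day
| append [search index="second search"
    | eval date_hour=tonumber(strftime(_time, "%H")), date_day=strftime(_time, "%d")
    | where date_hour>=10 AND date_hour<11
    | stats count as totalProcess by date_day]
| stats values(totalFail) as totalFail, values(totalProcess) as totalProcess by date_day
| eval failureRate=round(totalFail/totalProcess*100, 0)."%"
| table date_day, totalFail, totalProcess, failureRate

The second stats collapses the appended rows back to one row per day, which is what lets the eval see both counts on the same row.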
The functionality worked before the upgrade. After the upgrade, logon shows "Tab-completion of "splunk <verb> <object>" is available". Like this:

$ source ./setSplunkEnv
Tab-completion of "splunk <verb> <object>" is available.

But when used, it returns:

-bash: completion: function 'fSplunkComplete' not found

Both files, "/opt/splunk/bin/setSplunkEnv" and "/opt/splunk/share/splunk/cli-command-completion.sh", are in place at those locations, and the referenced files are available.
Hello, I wonder if anyone has gotten this app working for RSS feeds: https://splunkbase.splunk.com/app/2646/#/details

Broad feed support: the input supports all of the major feed types (RSS, ATOM, RDF), will automatically determine the type of the feed, and imports it automatically.

I was only able to ingest the BBC News and Cisco Webex status feeds. The ones I am interested in fail with an error. These feeds fail to be ingested, and the error is the same for all of the feeds tested:

https://www.csoonline.com/in/index.rss
https://feeds.feedburner.com/securityweek
http://krebsonsecurity.com/feed/
https://threatpost.com/feed/
https://www.darkreading.com/rss/all.xml
https://feeds.feedburner.com/TheHackersNews
https://www.theregister.com/security/headlines.atom
https://nvd.nist.gov/feeds/xml/cve/misc/nvd-rss.xml
https://www.bleepingcomputer.com/feed/
https://www.infosecurity-magazine.com/rss/news

It does not look like a DNS error, as it works for the BBC and Webex URLs. I get the same error from a test machine fully open to the internet. Supported Splunk versions: 7.2, 7.3, 8.0, 8.1, 8.2.

HTTP trace:
Request URL: https://www.csoonline.com/in/index.rss
Request Method: GET
Status Code: 200 OK
Remote Address: 172.22.59.131:80

Error trace:

2021-11-16 19:25:53,176 ERROR Unable to get the feed, url=https://www.infosecurity-magazine.com/rss/news
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/syndication/bin/syndication.py", line 350, in run
    results, last_entry_date_retrieved = self.get_feed(feed_url.geturl(), return_latest_date=True, include_later_than=last_entry_date, logger=self.logger, username=username, password=password, clean_html=clean_html)
  File "/opt/splunk/etc/apps/syndication/bin/syndication.py", line 167, in get_feed
    d = feedparser.parse(feed_url)
  File "/opt/splunk/etc/apps/syndication/bin/syndication_app/feedparser/api.py", line 241, in parse
    data = _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers, result)
  File "/opt/splunk/etc/apps/syndication/bin/syndication_app/feedparser/api.py", line 141, in _open_resource
    return http.get(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers, result)
  File "/opt/splunk/etc/apps/syndication/bin/syndication_app/feedparser/http.py", line 200, in get
    f = opener.open(request)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 1241, in https_open
    context=self._context)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 1198, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno -2] Name or service not known>

Troubleshooting (https://lukemurphey.net/projects/splunk-syndication-input/wiki/Troubleshooting):

If you experience problems with the input, run the following search to see both the output from the input and the modular input logs together, in order to see whether the logs indicate what is wrong:

(index=main sourcetype=syndication) OR (index=_internal sourcetype="syndication_modular_input")

If you have debug logging enabled, you can see details with the following:

index=_internal sourcetype="syndication_modular_input"
| rex field=_raw "(?<action>((Skipping)|(Including)))"
| search count>0 OR action=Including
| table date latest_date title action count
I have Splunk results in the following format:

2021-11-13 01:02:50.127 ERROR 23 --- [ taskExecutor-2] c.c.p.r.service.RedisService : The Redis Cache had no record for key: null Returning empty list.
2021-10-22 21:11:51.996 ERROR 22 --- [ taskExecutor-1] c.c.p.r.service.SftpService : Could not delete file: /-/XYZ.FILE - 4: Failure
2021-10-22 02:05:14.426 ERROR 22 --- [ taskExecutor-1] c.c.p.r.service.SftpService : Could not delete file: /-/XYZ.FILE - 4: Failure

I want to create a visualization in the format shown in the attached screenshot. The count variable is based on the error message only. Since "Could not delete file: /-/XYZ.FILE - 4: Failure" appeared twice, the count is set to 2. As the logs grow and occurrences of this message increase, the count should increase too. I tried using erex and substring in Splunk but failed miserably! Any help on how to form the Splunk query for this visualization would be appreciated. Thanks
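A minimal sketch, assuming the raw events look exactly like the samples above (the index and sourcetype are placeholders): pull the trailing text after the logger name into its own field, then count by it.

index=<your_index> sourcetype=<your_sourcetype> "ERROR"
| rex "ERROR\s+\d+\s+---\s+\[[^\]]+\]\s+(?<logger>\S+)\s+:\s+(?<error_message>.+)$"
| stats count by error_message
| sort - count

The stats output can be rendered as a bar chart or table; a timechart count by error_message would show the trend over time instead.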
I have faced a very interesting situation and have no clue what is going wrong. I have data forwarded from a particular host, and if I use a search like this I get all the results:

index=win host=MYHOST

If I use this search, it gives no results:

index=win host=MYHOST sourcetype=mysourcetype

BUT a real-time search does give me the results! The inputs setup on the host looks like this for mysourcetype:

[mylogsource]
disabled = 0
index = win
sourcetype = mysourcetype
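A minimal sanity check, assuming the index and host names above: list the sourcetype values Splunk actually assigned to the indexed events, to confirm they match what inputs.conf requests (a props.conf sourcetype override or a typo would show up here).

index=win host=MYHOST
| stats count by sourcetype, source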
I need to find the data retention settings for Splunk Enterprise & ES in my environment. Would you share an SPL query for this if there is one, please? Thanks a million.
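A minimal sketch, assuming the rest command can reach your indexers from the search head: frozenTimePeriodInSecs is the per-index retention setting, so converting it to days gives the effective retention per index.

| rest /services/data/indexes
| eval retention_days=round(frozenTimePeriodInSecs/86400, 1)
| table splunk_server, title, frozenTimePeriodInSecs, retention_days, maxTotalDataSizeMB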
What's the best way to set a SEDCMD in props.conf to remove spaces and add an underscore in just the CSV header line? For example, I have a CSV header like this:

field 1, field 2, filed 3

and I want the fields to be field_1, field_2, filed_3, but only for the header, not for the data in the fields. I saw this in an old post, but it will change everything with a space, even the data:

inputs.conf

[monitor://c:\temp\sed-csv.log]
disabled = false
index = sedcsv
sourcetype = sedcsv

props.conf

[sedcsv]
SEDCMD-replacespace = s/ /_/g

https://community.splunk.com/t5/Splunk-Search/CSV-Field-Extraction-with-spaces-in-field-name/m-p/245974
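A minimal sketch, assuming the header text is fixed and known in advance: anchor the sed expression to the literal header line so only that line is rewritten and the data rows are left alone.

props.conf

[sedcsv]
SEDCMD-fixheader = s/^field 1, field 2, filed 3$/field_1,field_2,filed_3/

If the sourcetype uses INDEXED_EXTRACTIONS=csv, another option worth testing is leaving the file untouched and supplying FIELD_NAMES = field_1,field_2,filed_3 in props.conf, which renames the fields without editing the raw data.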
Hello. I've noticed that in many solutions, when a value from the previous row is needed, streamstats with window=1 is used. For example: https://community.splunk.com/t5/Splunk-Search/Unable-to-subtract-one-days-hours-from-previous-days-to-create/m-p/575093/highlight/true#M200392

In similar cases I have tended to use autoregress, which behaves more or less the same. The question is: what are the pros and cons of each of these commands? Do they have any non-obvious limitations? Is either "better" than the other?
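For reference, a minimal sketch of the two roughly equivalent forms (the field name value and the ascending time sort are assumptions):

| sort 0 _time
| streamstats current=f window=1 last(value) as prev_value

versus

| sort 0 _time
| autoregress value as prev_value p=1

Both copy the previous row's value onto the current row; streamstats additionally supports other aggregation functions, time_window, and by clauses, while autoregress is limited to copying prior values forward.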
Hello,

I am trying to figure out how many good IP addresses vs. bad IP addresses there are based on Tenable Security Center results (severity=low, medium, high, critical). A good scan should show multiple severity-level results, whereas a bad scan would not show as many severity-level results.

I would like to get as many fields filled as possible from the SPL query. More importantly, I would like to get the good vs. bad (credentialed) scan results from Tenable Security Center (ACAS). What I mean by this is that when a scan has been initiated, you can tell a good scan from a bad one: a good scan pulls multiple vulnerabilities across the severity levels, whereas a bad scan does not pull as many vulnerabilities and the severity levels are very low or close to nothing at all.

I created an SPL query that provides the 26 standard data fields: IP, repository.dataFormat, netbiosName, dnsName, AWS, hostname, macAddress, OS_Type, OS_Version, operatingSystem, SystemManufacture, SystemSerialNumber, SystemModel, AWSAccountNumber, AWSInstanceID, AWSENI, passFail, plugin_id, pluginName, repository.name, cpe, low, medium, high, critical, total, Country, lat, lon.

SPL query:

earliest=-7d@d index=acas sourcetype="tenable:sc:vuln"
| rex field=operatingSystem "^(?P<OS_Type>\w+)\.(?P<OS_Version>.*)$"
| rex field=dnsName "^(?P<hostname>\w+)\.(?P<domain>.*)$"
| rex field=system "^(?P<manufacture>\w+)\.(?P<serialnumber>.*)$"
| rex field=pluginText "\<cm\:compliance-result\>(?<status>\w+)\<\/cm\:compliance-result\>"
| eval AWS=if(like(dnsName,"clou%"),"TRUE","FALSE")
| iplocation ip
| eventstats count(eval(severity="informational")) as informational, count(eval(severity="low")) as low, count(eval(severity="medium")) as medium, count(eval(severity="high")) as high, count(eval(severity="critical")) as critical by ip
| dedup ip
| eval total = low+medium+high+critical
| table ip, repository.dataFormat, netbiosName, dnsName, AWS, hostname, macAddress, OS_Type, OS_Version, operatingSystem, SystemManufacture, SystemSerialNumber, SystemModel, AWSAccountNumber, AWSInstanceID, AWSENI, passFail, plugin_id, pluginName, repository.name, cpe, low, medium, high, critical, total, Country, lat, lon
Hi all, I need to create a table that counts, for every product, how many events are accepted or rejected. In addition to these fields, the latest event date for each action should be shown alongside the count. The table should look like this:

Product | Latest Accepted | Total Accepted | Latest Rejected | Total Rejected
Bike    | 10/11/2021      | 35             | 12/11/2021      | 14
Skate   | 11/11/2021      | 99             | 13/11/2021      | 5

The first part of the query is pretty easy:

...| stats count(eval(action="accepted")) AS "Total Accepted" count(eval(action="rejected")) AS "Total Rejected" by product
| rename product AS Product

I'm not able to retrieve the latest date for each kind of action; I tried latest(_time) without success. Many thanks.
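A minimal sketch, assuming action and product are already extracted fields (the strftime date format is an assumption): compute a conditional max of _time per action inside the same stats.

... | stats count(eval(action="accepted")) as "Total Accepted",
          count(eval(action="rejected")) as "Total Rejected",
          max(eval(if(action="accepted", _time, null()))) as latest_accepted,
          max(eval(if(action="rejected", _time, null()))) as latest_rejected
          by product
| eval "Latest Accepted"=strftime(latest_accepted, "%d/%m/%Y"), "Latest Rejected"=strftime(latest_rejected, "%d/%m/%Y")
| rename product as Product
| table Product, "Latest Accepted", "Total Accepted", "Latest Rejected", "Total Rejected"

latest(_time) on its own returns the latest time across all events for the product; wrapping the if() inside max() restricts it to one action at a time.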
I'm upgrading Splunk_TA_windows to the newest version in our environment. We are coming from an old 5.x version. Now that the Windows TA, Active Directory TA, and DNS TA have all been consolidated into one TA, I've got some questions about how best to deploy it.

I've looked at the local inputs.conf files for all three of the legacy TAs and consolidated them into a local inputs.conf file for the new TA. I've deployed it to one machine using the deployment server and have immediately discovered an issue. I figured the AD and DNS logs would not be present on a workstation PC, so those pieces would not run; however, that's not the case. Some of the AD PowerShell inputs are running on my laptop, which is not what I want.

So I'm figuring I need to find a way to split out the local inputs.conf file by machine type (workstation/server/domain controller/DNS server). I'm thinking maybe I need to deploy Splunk_TA_windows to all our Windows machines as-is, with no local inputs.conf, and then create small apps to turn on certain features of the TA per machine type. Is that the right way to do this? Would that even work? I'm thinking there might be issues with the scripted inputs, as the script files would live in another app. Anyway, I'm just not sure what the best way to handle this is. Any help would be much appreciated.
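For reference, a minimal sketch of that per-role override pattern, under the assumption that the base Splunk_TA_windows deployed everywhere leaves the role-specific inputs disabled, and a small app (the name my_TA_windows_dc_inputs is hypothetical) is deployed only to a domain-controller server class to switch them on:

my_TA_windows_dc_inputs/local/inputs.conf

[admon://default]
disabled = 0

[WinEventLog://Security]
disabled = 0

Event log and admon stanzas layer cleanly this way because Splunk merges stanzas of the same name across apps; the scripted and PowerShell inputs are the part to test carefully, since their stanza names embed script paths that are resolved relative to an app directory.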
I am trying to create a timechart that lists the TotalHours for each day and then subtracts the previous day's TotalHours, to show the difference in hours from day to day. This needs to span 14 days. Basically, I just need the TotalHours difference from one day to the next, spanned across a timechart. This is the data and query I have so far (not much):

-------Search-----
| where TotalHours != "0" AND _time>relative_time(now(), "-14d@d")
| dedup PROJECT_NUMBER _time
| table PROJECT_NUMBER TotalHours _time
| sort PROJECT_NUMBER
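A minimal sketch, assuming TotalHours is a per-day total and a single overall series is wanted (the base search is a placeholder; swap max() for whichever aggregation fits the data):

<base search> earliest=-14d@d TotalHours!=0
| bin _time span=1d
| stats max(TotalHours) as TotalHours by _time
| delta TotalHours as HoursDiff
| table _time, TotalHours, HoursDiff

delta subtracts the previous row's TotalHours from the current one; for a separate series per PROJECT_NUMBER, replace delta with streamstats current=f window=1 last(TotalHours) as prev_hours by PROJECT_NUMBER and compute the difference with eval.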
Hello Everyone,

I'm trying to extract usernames from the logs of proftpd. An event looks like this:

2021-11-16 16:17:43,866 HOST proftpd[28071] 10.10.10.10 (11.11.11.11[22.22.22.22]): USER ASD-ASDASD: Login successful.

Simple usernames (ASDFG) work fine, as do usernames with an underscore, like ASD_ASD. But as soon as the username contains a '-' character, only the first part of ASD-ASDASD is extracted. How do I circumvent this? How can I extract strings that contain '-'?
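A minimal sketch using rex at search time (the field name user is an assumption): allow word characters, dots, and hyphens in the capture group, and stop at the colon that follows the username.

... | rex "\):\s+USER\s+(?<user>[\w.-]+):"

The same pattern can be used in a props.conf EXTRACT- setting; the usual reason a hyphenated name gets cut short is a \w+ capture, which does not include '-'.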
I am changing the dashboard XML to perform an in-line field validation, but I cannot seem to get the regex right. Here is the sample XML code that I already have for another field (in that example I am validating whether the field entry is numeric or not). My regex requirement is: it should check whether the entered input has any space at the beginning or end, or whether it contains any double-quote character. The regex should run inside the dashboard source XML.

=======
<eval token="validationResult">if(match(value, "([^a-zA-Z0-9]\s?[^a-zA-Z0-9])\s$"), "trailing Punctuation or Space. [Remove any padded whitespaces or double quotes from 'Value']", "No padded Punctuation or Space or doublequotes [No Action Required]")</eval>
========
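A minimal sketch of a match() condition for the stated requirement (leading or trailing whitespace, or any double quote), keeping the same token name as the sample:

<eval token="validationResult">if(match(value, "^\s|\s$|\""), "Leading/trailing space or double quote found [remove padded whitespace or double quotes from 'Value']", "No padded space or double quotes [No Action Required]")</eval>

^\s and \s$ catch padding at either end, and \" catches a double quote anywhere in the value.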
Hi, MS AD FS writes into WinEventLog:Security, and Splunk_TA_windows watches that log and enriches it with a lookup. The issue here is that AD FS uses the same event IDs as other Windows sources. Example: EventCode=516 should be "The following user account has been locked out due to too many bad password attempts. Additional Data Activity ID: %1 User: %2 Client IP: %3 Bad Password Count: %4 Last Bad Password Attempt: %5", while Splunk_TA_windows enriches it as "Internal resources allocated for the queuing of audit messages have been exhausted, leading to the loss of some audits".

How can I override the default Splunk_TA_windows lookup so that if Provider Name='AD FS Auditing' the event is enriched from an AD FS table, and otherwise the default lookup is used?
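One possible search-time sketch, using a hypothetical adfs_signatures lookup keyed on EventCode (the lookup name, output field, and the Provider_Name field name are assumptions, not the actual Splunk_TA_windows definitions): apply the AD FS lookup into a separate field and only overwrite the default enrichment when the provider matches.

... | lookup adfs_signatures EventCode OUTPUT signature as adfs_signature
| eval signature=if('Provider_Name'="AD FS Auditing" AND isnotnull(adfs_signature), adfs_signature, signature)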
Hi, I am trying to create an alert that triggers when more than 5 files are deleted in less than 3 minutes from the app we monitor. For some reason, the alert only works for a single file deletion but does not work when I set it to a number of events. Any idea why? I would love to get some help.
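A minimal sketch, assuming the deletion events are already searchable (the base search is a placeholder): group the deletions into 3-minute buckets, keep only buckets with more than 5, and let the alert trigger on "number of results > 0".

<your file-deletion search>
| bin _time span=3m
| stats count as deletions by _time
| where deletions > 5

If the alert is instead configured with a trigger condition of "number of events > 5" over a scheduled window, check that the schedule and time range actually cover 3 minutes; a mismatch there is one possible reason the multi-event condition never fires.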
Hello everyone,

I'm trying to apply ontological indexing as it was described in the session "Bridging the Data Divide To Solve Social and Environmental Challenges" at .conf21. If you have any ideas, please tell me. I attached a photo to describe my objective visually. Thomas.
Hi all, I have a question regarding data model use. In the Splunk Fundamentals 2 course, I learned what data models are and how to use them with Pivot. My question now is the following: is it possible to use a data model and its fields in a custom search, for example in the Search & Reporting app? And if yes, how?

Suppose I have to perform a simple search like this one on network traffic:

index=<some index> sourcetype=<some_sourcetype> | stats count by src, dest | rename src as source, dest as destination

Suppose now I want to use the Network Traffic data model and its All_Traffic dataset to perform this search, to avoid the use of index and sourcetype; is this possible? And if yes, how do I perform this search?
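A minimal sketch of the two common forms, assuming the CIM Network_Traffic data model is installed (field names follow that model and are prefixed with the dataset name):

| datamodel Network_Traffic All_Traffic search
| stats count by All_Traffic.src, All_Traffic.dest

or, against an accelerated data model, the much faster tstats form:

| tstats count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest

Both avoid naming an index or sourcetype directly; the data model's constraint search decides which events belong to All_Traffic.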
Hi team, I found that the main flow will not run after adding a branch flow. Is this a known limitation? Thanks.