All Posts


Hi, in a Splunk query I need to convert a date format as below. Current format: 07/09/23. Required format: 2023-09-07.
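A minimal sketch of one way to do this, assuming the current value lives in a field called date_field (hypothetical name) and that 07/09/23 is day/month/year; strptime parses the string into epoch time and strftime re-renders it in the required layout:

| eval converted_date = strftime(strptime(date_field, "%d/%m/%y"), "%Y-%m-%d")

If the source format is actually month/day/year, swap %d and %m in the strptime format string.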
Many thanks, I will get to it!
Hi @Bastiaan, as I said, follow the Splunk Search Tutorial and you'll quickly learn how to search in Splunk. Anyway, if you only need to search for some strings, you can put them in the main search using boolean operators to correlate them, remembering that AND is the default operator. So if you want to find all events containing the strings you defined, you could try running:

index=your_index host=your_host ("CONFIG, commit* but not Succeeded" OR "snmpd.log")
| table _time host TS_Agent

Then you can add the time conditions but, as I said, follow the Search Tutorial. Also, don't use the "-" character in field names because Splunk uses it as the subtraction operator; use the underscore "_" instead. Ciao. Giuseppe
I see I have a lot to learn. The essence is: I want to get three things from the log of host "hostname". First, "CONFIG, commit* but not Succeeded"; I also want "snmpd.log" messages, and I want to get "TS-Agent" from the logging. But for the last one I'm not interested in what happens between 01:00 and 05:00, since those servers produce errors during that time frame that I don't care about. For the other two filters/searches I want to get messages 24/7.
I have indexes created and I have 2 CSVs. The first is ipv6.csv and it has a column called ip; the second CSV is cmd.csv and it contains a critical_command column. Example:
ipv6.csv: ip = 11.11.11.11, 2.2.2.2
cmd.csv: critical_command = restart, shutdown
Now I want to search for ip 11.11.11.11 and critical_command restart, or ip 2.2.2.2 and restart, in a certain index. How will I write the search?
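A minimal sketch of one way to do this, assuming the two files are uploaded as lookup files named ipv6.csv and cmd.csv, and that the indexed events carry the IP and the command in fields called src_ip and command (hypothetical names; rename to match your data):

index=your_index
    [ | inputlookup ipv6.csv | rename ip AS src_ip | fields src_ip ]
    [ | inputlookup cmd.csv | rename critical_command AS command | fields command ]

Each subsearch expands into an OR of its values, and the two subsearches are ANDed together, so this matches events whose IP is in ipv6.csv and whose command is in cmd.csv.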
Hi, in Splunk version 9.1.x and above, we are noticing that moment.js is missing from the following location: /opt/splunk/share/splunk/search_mrsparkle/exposed/js/contrib/moment.js. Because of this our custom app functionality is not working and we are getting the error shown in the attachment. Please let us know if this is a known issue and whether there is any resolution. In spite of placing moment.js in our app folder, we still notice that the app is trying to use the default moment.js from "/opt/splunk/share/splunk/search_mrsparkle/exposed/js/contrib/moment.js". We have also tried another solution from the community, placing var moment = require('moment'); but it still returns an error. Can you please provide any possible solution to resolve this issue?
Hi, we are seeing a log parsing issue with Juniper SRX logs for the following events: RT_FLOW_SESSION_CREATE and RT_FLOW_SESSION_CLOSE. They are not being parsed at all. As far as I can see from the release notes, the Add-on has a known issue with Juniper SRX log parsing for RT_FLOW_SESSION_CLOSE_LS, but not with the ones I mentioned above (RT_FLOW_SESSION_CREATE or RT_FLOW_SESSION_CLOSE). Can you please help? Is this related?
Date filed: 2022-12-29 | Issue number: ADDON-59372 | Description: Junper SRX Logs Parsing for RT_FLOW_SESSION_CLOSE_LS
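One quick check is to confirm the events are reaching Splunk under the sourcetype the add-on's extractions are attached to; a rough sketch, where your_index is an assumption to be replaced with the index that receives the SRX syslog:

index=your_index ("RT_FLOW_SESSION_CREATE" OR "RT_FLOW_SESSION_CLOSE")
| stats count by sourcetype, source

If the sourcetype listed here is not the one the add-on's props.conf stanzas apply to, the field extractions will never run on these events.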
Hi @nz_021, the usual cause is the local firewall. If you have already disabled it and Splunk still doesn't run, even with root, open a case with Splunk Support. Ciao. Giuseppe
The search head is a standalone server, and it's still the same error even when I run it with root.
Hi, many thanks for the update. This is helpful. I will consider this as a solution.
Hi @nz_021, are you using a standalone server or a clustered one? Which user are you using for the installation? Did you try with root? Ciao. Giuseppe
I tried disabling the OS firewall, but it had no impact; the error is still the same. Thanks.
Hi @Bastiaan, first of all, always indicate the index you're using in the main search. Second, you used the wrong syntax: you cannot use the case function in the main search, only in eval or stats. Also, I don't understand the conditions you're trying to set; could you describe them in more detail? Finally, in the first part of the search you didn't close the parentheses: it's not possible to close a parenthesis after a pipe as you did. I suggest following the Splunk Search Tutorial to understand how to create a search in Splunk and its rules: http://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/WelcometotheSearchTutorial Ciao. Giuseppe
Hi @nz_021, first of all, did you disable the local firewall on the Splunk server? Also, there are some errors that I'm not sure are related to the firewall. Let me know. Ciao. Giuseppe
Hello all, I'm quite new to the wonderful world of Splunk, but not new to monitoring or IT in general. We are optimizing our operations processes and I'd like to get a state of the last 24h of our environment, specifically our firewall status. It sends all its logging to Splunk and I've created the following filter to find all the errors, but it's not working:
host="hostname" AND ( CASE(CONFIG) CASE(commit*) NOT Succeeded ) OR "snmpd.log due to log overflow" OR ( ("TS-Agent" AND "connect-agent-failure") | where NOT (date_hour >= 1 AND date_hour < 5) )
It gives me back: "Error in 'search' command: Unable to parse the search: unbalanced parentheses." The last part of the filter (TS-Agent and so on) has to be filtered because I want to exclude a time frame from the results (the reboot schedule of those servers); however, the other searches need to cover the whole time range (e.g. the last 24h or whatever I set). I think I'm doing something wrong or things just don't work like I expect. I hope you folks can help me out or point me in the right direction. I'd like to get all the errors on one tile so I can see in the morning whether I can take my coffee slowly or fast. Many thanks in advance!
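A rough sketch of one way this could be expressed, assuming the index is called your_index (hypothetical) and that CASE() is only there to make CONFIG and commit* case-sensitive; the time exclusion is applied after the base search so it only affects the TS-Agent events:

index=your_index host="hostname"
    ( (CASE(CONFIG) CASE(commit*) NOT Succeeded)
      OR "snmpd.log due to log overflow"
      OR ("TS-Agent" "connect-agent-failure") )
| where NOT (searchmatch("TS-Agent connect-agent-failure") AND date_hour >= 1 AND date_hour < 5)

A pipe and a where clause cannot sit inside parentheses in the base search, which is what produces the unbalanced-parentheses error in the original filter.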
Hello, I have a problem when starting Splunk. There was no problem before, but when I try to restart Splunk, it just shows a warning and the web interface cannot be accessed. When I check the log for ERROR, it just shows this, and here is the picture from when I check the Splunk service status. Can anyone help?
Hi @pck1983, here you can find some useful descriptions of how Splunk manages timezones: https://docs.splunk.com/Documentation/SCS/current/Search/Timezones https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Applytimezoneoffsetstotimestamps In a few words, yes: if Splunk isn't able to understand the timestamp, it uses the previous event's timestamp or _indextime as _time. Splunk automatically manages different timezones, so by setting the timezone in your user preferences you will see every timestamp converted to your own timezone. Ciao. Giuseppe
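A small sketch that can help verify this against your own data (your_index is a placeholder); it renders both _time and _indextime as readable strings, both displayed in the timezone of the searching user, so you can spot where the two diverge:

index=your_index
| eval event_time = strftime(_time, "%Y-%m-%d %H:%M:%S %z"),
       index_time = strftime(_indextime, "%Y-%m-%d %H:%M:%S %z")
| table _time event_time index_time _raw

Events whose event_time matches their index_time, or long runs of events sharing an identical event_time, are often the ones whose timestamps could not be parsed from the raw data.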
Hi Giuseppe, so that was a parsing error - makes sense, because a handful of older entries had a different formatting. The majority of the entries from that older logfile were indexed correctly! Just so I understand it: Splunk parses the event and extracts a time from it. That parsed time is stored in _time. The index time is stored in _indextime. In case there is no time entry in the event, the index time is also used for _time. Correct so far? But what if I get events from machines in different timezones? Is _time converted to my local timezone? What does it mean when I search for events from today 6:00am till 10:00am? Does that mean 6:00am - 10:00am in my timezone? Or in the timezones of the machines?
then show the table _time field _raw
Note that expecting _raw in such an alert is very unreasonable and can be quite expensive. In a simpler form following @bowesmana's recipe, you may get away with something like

index=... earliest=-3d@d
| bin _time span=1d@d
``` Calculates the count for a field by day ```
| stats count values(_raw) as _raw by field _time
``` Now calculate today's value and the total ```
| stats values(_raw) as _raw sum(eval(if(_time=relative_time(now(), "@d"), count, 0))) as today sum(count) as total by field
``` And set a field to be TRUE or FALSE to alert ```
| where today > 0 AND total - today == 0

In this form, _raw is not an ordered list, but a lexicographic one. If you really, really need _raw in its raw form, you can consider using a subsearch to limit the values of fields to only those in alerts. Then you must consider the cost of the subsearch.
Hello, I am trying to drill down from a dashboard to a URL that checks malicious IPs and domains. The issue I am having is that the URL for the IP search and the domain search is different. All IOCs are in the same field, called "threat_match_value", but there is another field in the log called "threat_key" which specifies whether it is an IP or a domain. Is it possible to add a condition like: if threat_key=Domain, drill down to the domain URL, but with click.value being the "threat_match_value"? I don't really want to separate this into 2 panels. Thanks,
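One way to avoid splitting the panel is to build the target URL as a field in the search itself and point the drilldown at that field rather than at a fixed URL; a rough sketch, where the two checker URLs are placeholders to be replaced with your real IP and domain lookup services:

| eval drilldown_url = case(
      threat_key == "Domain", "https://domain-checker.example.com/search?q=" . threat_match_value,
      threat_key == "IP",     "https://ip-checker.example.com/search?q=" . threat_match_value)

In the panel's drilldown you can then link to the token $row.drilldown_url|n$ (Simple XML), so each clicked row opens the URL that matches its own threat_key.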