All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


  Hi, we are seeing a log parsing issue with Juniper SRX logs for the following log types: RT_FLOW_SESSION_CREATE and RT_FLOW_SESSION_CLOSE. They are not being parsed at all. As far as I can see from the release notes, the Add-on has a known issue with Juniper SRX log parsing for RT_FLOW_SESSION_CLOSE_LS, but not for the ones I mentioned above (RT_FLOW_SESSION_CREATE or RT_FLOW_SESSION_CLOSE). Can you please help? Is this related? Date filed: 2022-12-29 / Issue number: ADDON-59372 / Description: Juniper SRX Logs Parsing for RT_FLOW_SESSION_CLOSE_LS
Hi @nz_021, the usual cause is the local firewall; if you have already disabled it and Splunk still doesn't run, even as root, open a case with Splunk Support. Ciao. Giuseppe
The search head is a standalone server, and I still get the same error even when I run it as root.
Hi, many thanks for the update. This is helpful. I will consider this as a solution.
Hi @nz_021, are you using a standalone server or a clustered one? Which user are you using for the installation? Did you try with root? Ciao. Giuseppe
I tried disabling the OS firewall, but it had no impact; the error is still the same. Thanks
Hi @Bastiaan, at first, always specify the index you're using in the main search. Then, you used a wrong syntax: you cannot use the case function in the main search, only in eval or stats. I also don't understand the conditions you're trying to set; could you describe them in more detail? Finally, in the first part of the search you didn't close the parentheses: it's not possible to close a parenthesis after a pipe as you did. I suggest following the Splunk Search Tutorial to understand how to create a search in Splunk and its rules: http://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/WelcometotheSearchTutorial Ciao. Giuseppe
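As a rough sketch of what the rewritten filter could look like (the index name and sourcetype here are assumptions; adjust to your data), the conditions can be moved into an eval case and filtered afterwards, so that only the TS-Agent condition carries the time-window exclusion:

index=your_index host="hostname" sourcetype=your_firewall_sourcetype
| eval problem=case(
    searchmatch("CONFIG") AND searchmatch("commit") AND NOT searchmatch("Succeeded"), "failed_commit",
    searchmatch("snmpd.log due to log overflow"), "snmp_overflow",
    searchmatch("TS-Agent") AND searchmatch("connect-agent-failure") AND NOT (date_hour >= 1 AND date_hour < 5), "agent_failure")
| where isnotnull(problem)

This is only a sketch of the structure; the field names "problem" and the category labels are invented for illustration.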
Hi @nz_021, at first, did you disable the local firewall on the Splunk server? Then there are some errors that I'm not sure are related to the firewall. Let me know. Ciao. Giuseppe
Hello all, I'm quite new to the wonderful world of Splunk, but not new to monitoring or IT in general. We are optimizing our operations processes and I'd like to get a state of the last 24h of our environment, specifically our firewall status. It sends all its logging to Splunk, and I've created the following filter to find all the errors, but it's not working: host="hostname" AND ( CASE(CONFIG) CASE(commit*) NOT Succeeded ) OR "snmpd.log due to log overflow" OR ( ("TS-Agent" AND "connect-agent-failure") | where NOT (date_hour >= 1 AND date_hour < 5) ) It gives me back: "Error in 'search' command: Unable to parse the search: unbalanced parentheses." The last part of the filter (TS-Agent and so on) has to be filtered because I wish to exclude a timeframe from the results (the reboot schedule of said servers); however, the other searches need to cover the whole time range (e.g. the last 24h or whatever I set). I think I'm doing something wrong, or things just don't work like I expect. I hope you folks can help me out or point me in the right direction. I'd like to get all the errors on one tile so I can see whether I can take my morning coffee slowly or fast. Many thanks in advance!
Hello, I have a problem starting Splunk. There was no problem before, but when I try to restart Splunk it just shows a warning and the web interface cannot be accessed. When I check the log for ERROR, it just shows this, and here is the picture from when I check the Splunk service status. Can anyone help?
Hi @pck1983, here you can find some useful descriptions of how Splunk manages timezones: https://docs.splunk.com/Documentation/SCS/current/Search/Timezones https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Applytimezoneoffsetstotimestamps In a few words: yes, if Splunk isn't able to understand the timestamp, it uses the previous event's timestamp or _indextime as _time. Splunk automatically manages different timezones, so by setting the timezone in your user preferences you can read the timestamps converted to your own timezone. Ciao. Giuseppe
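For machines in different timezones that don't mark the timezone in their timestamps, a common fix is to force a timezone per host or per sourcetype in props.conf on the parsing tier; a sketch (the stanza names and zones below are examples only):

[host::berlin-*]
TZ = Europe/Berlin

[your_sourcetype]
TZ = UTC

Internally Splunk stores _time as UTC epoch time, and each user sees it rendered in their own preferred timezone.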
Hi Giuseppe, so that was a parsing error; that makes sense, because a handful of older entries had another format. The majority of the entries from that older logfile were indexed correctly! Just so I understand it: Splunk parses the event and extracts a time from the event. That parsed time is stored in _time. The index time is stored in _indextime. If there is no time entry in the file, the index time is also used for _time. Correct so far? But what if I get events from machines in different timezones? Is _time converted to my local timezone? And what does it mean when I search for events from today 6:00am till 10:00am? Does that mean 6:00am - 10:00am in my timezone, or in the timezones of the machines?
then show the table _time    field    _raw

Note that expecting _raw in such an alert is unreasonable and can be quite expensive. In a simpler form following @bowesmana's recipe, you may get away with something like

index=... earliest=-3d@d
| bin _time span=1d@d
``` Calculate the count for a field by day ```
| stats count values(_raw) as _raw by field _time
``` Now calculate today's value and the total ```
| stats values(_raw) as _raw sum(eval(if(_time=relative_time(now(), "@d"),count, 0))) as today sum(count) as total by field
``` And set a field to be TRUE or FALSE to alert ```
| where today > 0 AND total - today == 0

In this form, _raw is not an ordered list, but a lexicographic one. If you really, really need _raw in its raw form, you can consider using a subsearch to limit the values of fields to only those in alerts. Then you must consider the cost of the subsearch.
Hello, I am trying to drill down in a dashboard to a URL that checks malicious IPs and domains. The issue I am having is that the URLs for the IP search and the domain search are different. All IOCs are in the same field, called "threat_match_value", but there is another field in the log called "threat_key" which specifies whether it is an IP or a domain. Is it possible to add a condition like: if threat_key=Domain, drill down to the domain URL, but with the click.value being the "threat_match_value"? I don't really want to separate this into 2 panels. Thanks,
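In Simple XML, a table drilldown can branch on a row field with conditional link elements; a sketch (the two checker URLs are placeholders, and the exact match-expression syntax may need adjusting to your Splunk version):

<drilldown>
  <condition match="$row.threat_key$==&quot;Domain&quot;">
    <link target="_blank">https://domain-checker.example.com/?q=$row.threat_match_value|u$</link>
  </condition>
  <condition>
    <link target="_blank">https://ip-checker.example.com/?q=$row.threat_match_value|u$</link>
  </condition>
</drilldown>

The final bare <condition> acts as the "else" branch, so IPs (and anything else) fall through to the second URL, keeping everything in one panel.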
Hi @pck1983, the timestamp format is defined for each sourcetype in props.conf (for more info see https://docs.splunk.com/Documentation/ITSI/4.17.0/Configure/props.conf), to deploy to the Forwarders that ingest the log and to the Search Head. The timestamp format definitions are described at https://docs.splunk.com/Documentation/SCS/current/Search/Timevariables In your case, you have to set: [your_sourcetype] TIME_PREFIX = ^ TIME_FORMAT = %b %d %H:%M:%S Ciao. Giuseppe
Hello, I have a few questions about time in Splunk. This is an entry from an older logfile, and here the _time field and the timestamp in the log do not match! 4/30/23 1:32:16.000 PM Mai 08 13:32:16 xxxxxx sshd[3312558]: Failed password for yyyyyyyy from 192.168.1.141 port 58744 ssh2   How could that happen? How does Splunk come up with the time fields? And how does it handle files which contain no timestamps? Is the index time used then? There are a few things which I do not fully understand; maybe there is some article in the documentation which explains this in detail, but I have not found it with a quick search. Could someone please clarify how Splunk handles this, or link to an article? Thanks!
@ITWhisperer Thanks for the reply. Given that I use $product_brand$ in the conditional panel now, I still need to set the condition for displaying the panel. At the <condition> tag, how can I set it to accept multiple values? The above method only accepts a single value at a time; I want it to be: if $product_brand$ is IN ANY of the product brands ["A", "B", "C"], set the display panel to true, and if it is not in those 3, just don't display it. Any nudge in the right direction? Many thanks.
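One way to do this is to set a token in the input's change handler and make the panel depend on that token; a sketch (the brand values A/B/C are from the question above; the exact match-attribute syntax may need adjusting to your Splunk version):

<input type="dropdown" token="product_brand">
  ...
  <change>
    <condition match="match($product_brand$, &quot;^(A|B|C)$&quot;)">
      <set token="show_panel">true</set>
    </condition>
    <condition>
      <unset token="show_panel"></unset>
    </condition>
  </change>
</input>

<panel depends="$show_panel$">
  ...
</panel>

With this pattern the panel only renders while $show_panel$ is set, i.e. while the selected brand is one of the three.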
Ah, the original design did not consider the possibility of mixed increment and no-increment. Now, to deal with this, you will need to tell us whether you want to catch any duplicate regardless of interleave, or whether you want to catch only "consecutive" events that duplicate event_id, because the two use cases are very different.

If only consecutive duplicate event_id should trigger the alert, you can do

| delta event_id as delta
| stats list(_time) as _time values(delta) as delta by event_id event_name task_id
| where delta == "0"
| fieldformat _time = strftime(_time, "%F %H:%M:%S.%3Q")

To test this use case, I constructed the following extended test dataset based on your illustration.

Time                  _time                 event_id  event_name        task_id
9/4/22 10:03:39 PM    2022-09-04 22:03:39   1274851   pending-transfer  3
9/4/22 10:02:39 PM    2022-09-04 22:02:39   1274856   pending-transfer  3
9/4/22 09:57:39 PM    2022-09-04 21:57:39   1274856   pending-transfer  3
9/4/22 09:52:39 PM    2022-09-04 21:52:39   1274856   pending-transfer  3
9/4/22 09:47:39 PM    2022-09-04 21:47:39   1274851   pending-transfer  3
9/4/22 09:37:39 PM    2022-09-04 21:37:39   1274849   pending-transfer  3

And the result is a single row

event_id  event_name        task_id  _time                                                                     delta
1274856   pending-transfer  3        2022-09-04 22:02:39.000,2022-09-04 21:57:39.000,2022-09-04 21:52:39.000   0

If, on the other hand, the alert should be triggered no matter which other event_id's are in between, you should do

| stats list(_time) as _time by event_id event_name task_id
| where mvcount(_time) > 1
| fieldformat _time = strftime(_time, "%F %H:%M:%S.%3Q")

Using the same test dataset as illustrated above, you should see two outputs

event_id  event_name        task_id  _time
1274851   pending-transfer  3        2022-09-04 22:03:39.000,2022-09-04 21:47:39.000
1274856   pending-transfer  3        2022-09-04 22:02:39.000,2022-09-04 21:57:39.000,2022-09-04 21:52:39.000

Here is data emulation that you can play with and compare with real data

| makeresults
| eval _raw = "Time event_name task_id event_id
9/4/22 10:03:39 PM pending-transfer 3 1274851
9/4/22 10:02:39 PM pending-transfer 3 1274856
9/4/22 09:57:39 PM pending-transfer 3 1274856
9/4/22 09:52:39 PM pending-transfer 3 1274856
9/4/22 09:47:39 PM pending-transfer 3 1274851
9/4/22 09:37:39 PM pending-transfer 3 1274849"
| multikv
| eval _time = strptime(Time, "%m/%d/%y %I:%M:%S %p")
| fields - linecount _raw
``` data emulation above ```
Hi, HTTP 503 Service Unavailable -- {"messages":[{"type":"ERROR","text":"This node is not the captain of the search head cluster, and we could not determine the current captain. The cluster is either in the process of electing a new captain, or this member hasn't joined the pool"}]} We received this error on one of the search head cluster members. Is there any way to troubleshoot this? Please assist. Thank you.
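Some first checks that usually narrow this down (standard Splunk CLI and REST; run on the cluster members, credentials are placeholders):

$SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:yourpassword

or, from a member's search bar:

| rest /services/shcluster/status splunk_server=local

If no member reports itself as captain, check splunkd.log on each member for captain-election messages, and verify the members can all reach each other on the management port (8089 by default).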
While integrating the Speakatoo API into my project, I'm encountering a "cookies error." I'm seeking assistance and guidance on how to resolve this issue.