All Posts

Well, I did find another line which has the date and time, but it's over 15 lines into the log file. We need to start with the first line, which is the beginning of the stanza, but get the timestamp from the line that appears 15 lines after the opening line, shown below:

C:\Program Files\Universal\UAGSrv\xxxl_p01.nam>set StartDate=Tue 07/23/2024

This is the actual timestamp line, which I think would work since it has both date and time (hoping that the _80514 is the time?):

Files\Universal\UAGSrv\xxx_p01.nam>set timestamp=20240723_80514
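For reference, here is a minimal props.conf sketch of the direction I'm thinking of trying; the sourcetype name and the event-breaking regex are just my guesses, and I'm assuming 20240723_80514 means 2024-07-23 08:05:14:

# props.conf (sketch, untested; stanza name and regex are placeholders)
[uag_log]
SHOULD_LINEMERGE = true
# guess: each stanza begins with a command-prompt line like the ones quoted above
BREAK_ONLY_BEFORE = ^C:\\Program\sFiles\\Universal\\UAGSrv\\
# anchor timestamp recognition on the "set timestamp=" text deep inside the event
TIME_PREFIX = set\s+timestamp=
TIME_FORMAT = %Y%m%d_%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 20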
Thanks for the help, @gcusello! I fixed my rex and I am seeing results now.
Hi @kc_prane, good for you, see next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Thanks @KendalW for the help!
Hello, @gcusello. Thank you for your response. I had an issue with Rex. I corrected that now, and your earlier query works for me.
Hello, Could you tell me which takes priority: capabilities explicitly enabled/disabled, or those coming from inherited roles? I had to manually edit etc/system/local/authorize.conf (clustered environment) to set edit_correlationsearches = enabled (it was disabled), even though the role inherited ess_admin, ess_analyst and power. Thanks for your help.
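For context, this is roughly the kind of stanza I ended up with in authorize.conf (the role name here is just an example, not our real one):

[role_my_soc_role]
importRoles = ess_admin;ess_analyst;power
# explicitly set, since the inherited value was not what I expected
edit_correlationsearches = enabled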
Thank you @PickleRick for your answer. Eventually I worked around the problem like this:

| makeresults
| eval amount = 10.6
| eval integer = floor(amount)
| eval fraction = round(amount - floor(amount), 2)
| eval compare = if(fraction = 0.6, "T", "F")

I simply rounded the floating-point number to a few decimal places. I also tested your example and it solves the problem (which, as you suggested, is not actually a problem). Thank you!
I have a KV Store with replication turned on, a lookup definition with WILDCARD(match_field), and an automatic lookup configured to output a numeric lookup_field. When I run a search on the relevant source type, I see the lookup_field. However, when I search with the lookup_field (e.g., "lookup_field=1"), the search finishes quickly and doesn't return anything.

This is an example of the lookup:

mac,exception
00ABCD*,1
11EEFF*,1

This is an example of the lookup definition:

WILDCARD(mac)

This is an example of the automatic lookup:

lookup mac_addresses mac OUTPUT exception

Here is an example of a search that does not return the expected results:

index=mac_index exception=1

Here's what's really strange: it works for some events, but not others. When I run this, I get five events:

earliest=7/29/2024:00:00:00 latest=7/30/2024:00:00:00 index=logs exception=1

When I run this (adding the manual lookup), I get 109 (which is accurate):

earliest=7/29/2024:00:00:00 latest=7/30/2024:00:00:00 index=logs
| lookup exception_lookup mac OUTPUTNEW exception
| search exception=1

Any ideas of what could cause this? Any ideas on how to troubleshoot it?
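For reference, here is roughly how the lookup is wired up in the configs (the collection, sourcetype, and stanza names below are placeholders, not my real ones):

# transforms.conf
[exception_lookup]
external_type = kvstore
collection = mac_addresses
fields_list = mac, exception
match_type = WILDCARD(mac)

# props.conf
[mac_sourcetype]
LOOKUP-exception = exception_lookup mac OUTPUTNEW exception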
I don't think so. The post-process search is a parameter for the POST request and needs to be a valid SPL search. If you wanted the post-process search to reference the base search itself, you'd have to use loadjob with that particular search's ID. EDIT: OK, you can do that using the same saved search (but for this you need a scheduled saved search).
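As a sketch (the owner, app, and saved search name here are placeholders), loading the most recent scheduled run and post-processing it would look something like:

| loadjob savedsearch="admin:search:my_scheduled_base_search"
| stats count by host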
I suppose it's not "for Splunk" but rather it's simply a floating point arithmetics which is not as straightforward as we are used to. You could simply manipulate numbers being 1 or 2 orders of magn... See more...
I suppose it's not "for Splunk" but rather it's simply a floating point arithmetics which is not as straightforward as we are used to. You could simply manipulate numbers being 1 or 2 orders of magnitude bigger than your "real" values so that you operate on integers. This is a common problem with floating-point arithmetics - numbers are not what they seem (or seems they should be).
@Siddharthnegi - Two things I would say to check:

1. Restart Splunk and check.
2. Use "My Reports" instead of "Reports" and check.

(Do restart Splunk if you are updating the XML from the backend. And ensure you are updating on the right server.)

I hope this helps!!!!
My first thought is that the blocks downstream from the ansible block don't require it to complete, while the blocks downstream from the splunk block do. To check on this:

- Click on all downstream blocks
- For each, open the advanced dropdown in the left panel
- See if the Join Settings require the ansible/splunk blocks
- If you don't want the block to be required, uncheck the box here

To directly answer your title question, you can build your own error handling by placing a decision block after the splunk block to check whether splunk_block:action_results:status returns success or failed. If you take this approach and have the different branches reconnect at any point, you'll have to check the join settings, because they will automatically require the splunk block to have completed even if your playbook previously followed the "failed" path.
@kwiki - You are on the right track with streamstats. But I would just run two searches and compare the results; it would be much easier to write the query. Here it is:

index=myindex sourcetype=trans response_code!=00 earliest=-3d@d latest=-2d@d
| stats count as error_count_3_days_ago
| append [| search index=myindex sourcetype=trans response_code!=00 earliest=-2d@d latest=-1d@d
    | stats count as error_count_2_days_ago]
| stats first(*) as *
| eval perc_increase = round(((error_count_2_days_ago - error_count_3_days_ago) / error_count_3_days_ago) * 100, 2)
| where perc_increase>3
| table perc_increase

(I have not tested the query, but the logic is to append the data together and compare.)

I hope this helps!!!!
Please confirm the "bin" field is present in the index.  It is not created by the bin command. If the 'bin' field is null or not present then the stats command will return no results and so the stre... See more...
Please confirm the "bin" field is present in the index.  It is not created by the bin command. If the 'bin' field is null or not present then the stats command will return no results and so the streamstats command will have nothing to evaluate.
You can use the savedsearch command to run a saved search in your query.  If you use the time picker to specify a time range other than All Time then the saved search will use your selected time range; otherwise, the time range in the saved search will be used.
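For example (the saved search name is a placeholder):

| savedsearch "My Saved Search"
| stats count by host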
@nivets - Your question contains a paradox. Do you want to change the time range, or keep it unchanged?
Sorry I could not be of more assistance. If file permissions are not the culprit, I still think there might be some issue with the Python version handling. I just can't figure out what that would be though, sorry.
I'm running RHEL 8 on the latest version. We've been down the long road with Splunk support and have confirmed exhaustively that systemd is hanging on processes that aren't there. And until systemd times out (360 seconds by default), it won't actually return control to you. And when Splunk does return as "stopped", it didn't actually stop; the command just timed out (journalctl -f --unit <Splunk service file>). We're working with our Linux teams and likely Red Hat Support to figure out why.
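For anyone hitting the same thing, the 360-second figure presumably comes from the unit's stop timeout; a sketch of a drop-in override to shorten it while troubleshooting (the unit name on your system may differ, ours is Splunkd.service):

# systemctl edit Splunkd.service
[Service]
TimeoutStopSec=120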
We will continue the restore for now, as our security team is pushing me. I will check your information when I try another upgrade. I really appreciate your help! Thank you!
Well, my question concerned the general idea of using timechart when joining indexes; I'm not ready to prepare a ready-to-analyze example. Anyway, your hint was valuable as well. In particular, using the bin command and buckets could be very useful in my queries. I am going to read more about it and I guess I will ask more questions about bin soon. Thank you @ITWhisperer
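For example, a minimal sketch of binning events from two indexes into common time buckets (index names are placeholders):

(index=index_a OR index=index_b)
| bin _time span=1h
| stats count by _time index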