All Posts


@marnall @yuanliu @gcusello First of all, thank you for your kind responses, and apologies if my query has confused you. I have added a (masked) screenshot of the raw events from Splunk (All Time). If you refer to the screenshot, it shows the events in one complete block. I simply want to show the latest available log output through the Splunk dashboard. The queries shared above only return the head 1 line of each event block, which seems to be incorrect output. Please refer to the attached screenshot and suggest accordingly. Thanks in advance
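If the goal is simply to display the single most recent event in a dashboard panel, a minimal SPL sketch might look like the following (the index and sourcetype names are placeholders, not taken from the original post):

```
index=your_index sourcetype=your_sourcetype
| sort - _time
| head 1
| table _time _raw
```

sort - _time orders events newest first, so head 1 keeps only the latest one; in a dashboard, the panel would rerun this search on each refresh.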
@zksvc Have you tried in another browser?
Not sure if you were able to resolve this; I'm running into the same issue. Did you find what caused it?
This issue was resolved after making a few changes to props.conf, where the field extraction is set.
My requirement is to pass tokens via drilldown from a parent dashboard to a drilldown dashboard (which is created with JavaScript). From the parent dashboard, I tried to pass the token via the drilldown URL to the JavaScript dashboard, but that did not work. Can anyone please help me pass tokens via drilldown to a target dashboard created with JavaScript?
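As a hedged sketch (the app path, dashboard name, token name, and field name here are all hypothetical, not from the original post): in a Simple XML parent dashboard, a drilldown can pass a value as a URL token that a Simple XML target reads via form.&lt;token_name&gt;:

```
<drilldown>
  <link target="_blank">/app/search/my_js_dashboard?form.selected_host=$row.host$</link>
</drilldown>
```

Whether a JavaScript-backed target dashboard picks the token up depends on how its token model is wired; a custom JS dashboard may need to read the query string explicitly rather than relying on form.* tokens.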
Hi everyone, I want to ask why I cannot download data from a dashboard. If I set the range to 4 hours it works, and if I download another date, e.g. 1 Jan 2025, it is fine; it only happens when I try to download data for 4 Jan 2025. Since I cannot access any website/media to take a screenshot from the client machine, I can only take a photo. Thank you
Back again with another question. I'm still playing with my search, and while this is an issue I've managed to work around, the fact that I need to work around it without knowing the why behind it eats at me.

I have a search that pulls data from two different sourcetypes, and each of those sourcetypes has a src_mac field (the data in these fields is identical except for letter case). To rectify the issues this causes when attempting to call the field in a search, I use eval to create two new fields suffixed with the sourcetype of each event so that the field names are unique (in addition to fixing the letter-case mismatch). Specifically, this creates two fields named "src_mac-known_devices" and "src_mac-ise:syslog":

| eval src_mac-{sourcetype}=src_mac, src_mac=upper(src_mac)
| where upper("src_mac-*") = upper("src_mac-*")

However, in the where command, I'm only able to call these two new fields when I use a wildcard. I can't actually put in:

| where upper("src_mac-bro_known_devices") = upper("src_mac-ise:syslog")

The command just doesn't work for some reason, and I get zero hits despite *knowing* I should get plenty of hits. In other words, it works fine when I use the wildcard and not at all when I use anything else. Even attempting something like

| where upper("src_mac-b*") = upper("src_mac-c*")

doesn't work.

I have read through the wiki articles on proper naming practices for fields, so I know my two fields contain illegal characters. I also know the : is used when trying to search indexed fields, but I thought I could use single or double quotation marks to work around that limitation, or maybe use the / to escape the special characters, but none of that has worked. At this point, I just want to understand *why* it isn't working. Thank you for any help anyone can provide.
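One possible explanation, offered as an assumption rather than a confirmed diagnosis: in SPL's eval and where, double quotes create string literals, while single quotes reference field values. So upper("src_mac-*") = upper("src_mac-*") compares two identical literal strings (the wildcard is never expanded as a field name) and is therefore always true, while the non-wildcard literals are two different strings and always compare false. Referencing the fields themselves with single quotes, using the field names as given in the post, would look like:

```
| eval sm_known=upper('src_mac-known_devices'), sm_ise=upper('src_mac-ise:syslog')
| where sm_known=sm_ise
```

Copying the special-character field names into plainly named intermediate fields first also sidesteps the - and : characters entirely in the final where clause.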
It works! Thank you very much!
"search the whole globally defined time window for idx1 instead of only looking at the short periods of time we are interested in. I am not sure if that will actually have a huge effect on load it causes."

The subject of mapping search into specific intervals recently came up, but not in combination with the savedsearch command. (To be clear, if someone invokes your saved search from the "Reports" menu, there is no problem with map.) Whether searching the entire time window will affect efficiency depends on the window itself. It also depends on how much compute is performed after data retrieval. If the window spans multiple data buckets for idx1, efficiency will be affected; otherwise, no. If your window is large or if idx1 is extraordinarily voluminous, you can review the job inspector to compare.
Regarding the %Busy picture above, how are Min, Max, Sum, and Count calculated? What do they mean in AppDynamics?
Hey,

If you’re referring to the correlation search detailed at Splunk Research, here are some suggestions to help reduce false positives (though these depend on your current user activity patterns):

- Identify the list of legitimate users or admins who are authorized to perform such PowerShell activities by running:

| stats count BY process_name user process_path

- Initially, you can try excluding processes running from trusted directories like C:\\Windows\\System32\\* or C:\\Program Files\\*. However, note that some ransomware has been observed executing from the System32 directory as a parent process. So, consider excluding these paths only after analyzing and reducing the alert volume:

| where NOT (process_path IN ("C:\\Windows\\System32\\*", "C:\\Program Files\\*") AND user IN ("admin_user"))
| stats count BY process_name user process_path

- Pay close attention to processes that frequently appear. Cross-reference them with known benign activities to further refine your filtering logic.

- If the alerts are not time-sensitive, consider reducing the correlation search frequency (e.g., to every 6 hours) to mitigate alert fatigue.
Hello everyone, I am facing an issue with the alerts triggered by the "Set Default PowerShell Execution Policy To Unrestricted or Bypass" (Correlation Search) rule in Splunk, as many alerts are being generated unexpectedly. After reviewing the details, I added the command `| stats count BY process_name` to analyze the data more precisely. After executing this, the result was 389 processes within 24 hours. However, it seems there might be false positives and I’m unable to determine if this alert is normal or if there’s a misconfiguration. I would appreciate any help in identifying whether these alerts are expected or if there is an issue with the configuration or the rule itself. Any assistance or advice would be greatly appreciated. Thank you in advance.  
Hi,

What I was trying to say about savedsearch details was referring to the other linked post. In that post, I don't think the details about the saved search are disclosed.

About your reply: I couldn't make it work. I restructured my search, and as you might have guessed already, the real search is more complicated, but the point is the structure. Since you asked, all the data is so far in raw events. I could use

| map search="search earliest=$starttime$ latest=$endtime$ ...."

to achieve the same result. The essential part of the structure is being able to search idx1 with a narrow time window based on the timestamps of the events matched from idx2, which are obtained from a much wider span. The reason is simply that the link between the two indexes, eventID, is weak, and there is at present a lot more data in idx1. eventID is actually not guaranteed to be unique for any period of time, but with reasonable reliability it is unique for a short period of time. That is why I have used localize, and so far I have not been able to make localize work with anything but map. Moving the map to a separate subsearch ruined the search and it returned nothing.

I started thinking about what you suggested and created a construction like this:

search index=ix1
    [search index=ix2 eventStatus="Successful" | return 1000 eventID ]
    [search index=ix2 eventStatus="Successful" | eval Start=_time-60, End=_time, search="_time>".Start." AND _time<".End | return 500 $search]
| stats values(client) values(port) values(target) by eventID

It seems to return what I want. My understanding is that the search will, however, search the whole globally defined time window for idx1 instead of only looking at the short periods of time we are interested in. I am not sure if that will actually have a huge effect on the load it causes.

At the end of your reply you describe the root cause of the problem, namely the way SPL treats $xxx$ expansions. As you say, it is not a bug.
It is a property or limitation of SPL.
Hi @Naa_Win,

the dashboards depend on what you need: if you need to see the hosts that sent logs in the last 30 days but not in the last hour, you can run:

| tstats count WHERE index=_internal earliest=-30d latest=now BY _time host
| stats latest(_time) AS _time BY host
| where _time<now()-3600

Then you can display the blocked queues and the status of queues using the searches that I shared at https://community.splunk.com/t5/Getting-Data-In/How-do-we-know-whether-typing-queues-are-blocked-or-not/m-p/586347 and so on.

As I said, they depend on what you need to display.

Ciao.
Giuseppe
Hi @shashankk,

no, you cannot manage this in inputs.conf.

Modify my search using a correct time frame depending on the frequency of your data: if your file is read every 5 minutes, use:

| eval earliest=_time-60, latest=_time+60

if every minute, use:

| eval earliest=_time-30, latest=_time+30

In this way, you are sure to read only the latest file.

Ciao.
Giuseppe
Hi @Devansh9401,

it isn't possible: Pearson Vue doesn't show the result score, only whether an exam is passed or not.

Ciao.
Giuseppe
@solg  I believe the link below will assist you with your question. https://community.splunk.com/t5/Security/TCP-Data-Input-and-SSL/m-p/483077 
@Devansh9401 There is no option to view the score, you can only see whether you passed or failed. I hope this helps, if any reply helps you, you could add your upvote/karma points to that reply, thanks.
Where can we see the actual score of any Splunk exam? From the Splunk website we can only get the certification, and from Pearson Vue we can only see a report which says "congratulations, you passed" and doesn't mention any actual score.