All Posts


Hi, what I was trying to say about savedsearch details was referring to the other post I linked: as far as I can tell, the details of the saved search are not disclosed there. About your reply, I couldn't make it work. I restructured my search and, as you might have guessed already, the real search is more complicated, but the point is the structure. Since you asked, all the data is so far in raw events. I could use

| map search="search earliest=$starttime$ latest=$endtime$ ...."

to achieve the same result. The essential part of the structure is being able to search idx1 with a narrow time window based on the timestamps of the events matched in idx2 over a much wider span. The reason is simply that the link between the two indexes, eventID, is weak, and there is currently a lot more data in idx1. eventID is not actually guaranteed to be unique over any period of time, but with reasonable reliability it is unique for a short period. That is why I have used localize, and so far I have not been able to make localize work with anything but map. Moving the map to a separate subsearch ruined the search and it returned nothing. I started thinking about what you suggested and created a construction like this:

search index=ix1
    [search index=ix2 eventStatus="Successful" | return 1000 eventID ]
    [search index=ix2 eventStatus="Successful" | eval Start=_time-60, End=_time, search="_time>".Start." AND _time<".End | return 500 $search]
| stats values(client) values(port) values(target) by eventID

It seems to return what I want. My understanding, however, is that this search will scan the whole globally defined time window for idx1 instead of only the short periods of time we are interested in. I am not sure whether that will actually have a large effect on the load it causes. At the end of your reply you describe the root cause of the problem, namely the way SPL treats $xxx$ expansions. As you say, it is not a bug; it is a property, or limitation, of SPL.
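For context, the localize/map pairing mentioned above roughly follows the documented pattern below (a sketch only: the index names match this thread, and everything inside the mapped search is a placeholder):

index=ix2 eventStatus="Successful"
| localize timeafter=0m timebefore=1m
| map maxsearches=1000 search="search index=ix1 starttimeu=$starttime$ endtimeu=$endtime$ | fields eventID client port target"

localize emits a starttime/endtime pair for each detected burst of events, and map runs one search per pair, which is why the $...$ tokens resolve here but not in a saved search invoked without replacement arguments.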
Hi @Naa_Win, the dashboards depend on what you need: if you need to see the hosts that sent logs in the last 30 days but not in the last hour, you can run:

| tstats count WHERE index=_internal earliest=-30d latest=now BY _time host
| where _time<now()-3600
| stats latest(_time) AS _time BY host

Then you can display the blocked queues and the status of the queues using the searches I shared at https://community.splunk.com/t5/Getting-Data-In/How-do-we-know-whether-typing-queues-are-blocked-or-not/m-p/586347 and so on. As I said, the dashboards depend on what you need to display. Ciao. Giuseppe
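A variant of the same idea keeps only the hosts whose most recent event is older than one hour (a sketch, not tested in your environment; the one-hour threshold is an assumption you can adjust):

| tstats latest(_time) AS last_seen WHERE index=_internal earliest=-30d latest=now BY host
| where last_seen < now() - 3600
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")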
Hi @shashankk, no, you cannot manage this in inputs.conf. Modify my search using a time frame that matches the frequency of your data: if your file is read every 5 minutes, use:

| eval earliest=_time-60, latest=_time+60

If it is read every minute, use:

| eval earliest=_time-30, latest=_time+30

In this way, you are sure to read only the latest file. Ciao. Giuseppe
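As a rough sketch of how that eval can drive the time window of an outer search (index and source names here are placeholders, and this assumes the latest indexed event marks the latest file read):

index=your_index source=your_file
    [ search index=your_index source=your_file
      | head 1
      | eval earliest=_time-60, latest=_time+60
      | return earliest latest ]
| table _time _raw

The subsearch picks the most recently indexed event and returns earliest/latest as time modifiers, so the outer search only reads events within a minute of that timestamp.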
Hi @Devansh9401, it isn't possible: Pearson VUE doesn't show the result score, only whether an exam is passed or not. Ciao. Giuseppe
@solg  I believe the link below will assist you with your question. https://community.splunk.com/t5/Security/TCP-Data-Input-and-SSL/m-p/483077 
@Devansh9401 There is no option to view the score; you can only see whether you passed or failed. I hope this helps. If any reply helps you, you could add your upvote/karma points to that reply, thanks.
Where can we see the actual score of any Splunk exam? From the Splunk website we can only get the certification, and from Pearson VUE we can only see a report that says congratulations, you passed, without mentioning any actual score.
It is best not to use rex to extract information from structured data.  As @ITWhisperer says, the OP should post complete sample logs instead of a fragment of JSON.  If the raw event is itself compliant JSON, there should be no need to extract using search commands because Splunk automatically does this at search time.
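If automatic extraction does not kick in - for example when the JSON sits inside a larger event - spath can be applied explicitly. A minimal sketch with hypothetical index and field names:

index=your_index sourcetype=your_json_sourcetype
| spath
| table status.code status.message items{}.name

With no arguments, spath parses _raw as JSON (or XML) and creates a field for every path it finds.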
Several things need to be clarified. First, as @marnall says, there is no such thing as using inputs.conf to make Splunk handle only part of an event, unless you can predetermine which row in that multi-row data is the "latest". Secondly, when you say "latest", people generally understand that to mean the latest event in the indexer. If you only want to SHOW the latest row based on ExpiryDate, that is easily achieved at search time. Thirdly, your "simple" requirement statement omitted an important qualifier: do you want the single largest ExpiryDate in the entire log, or the largest ExpiryDate per group according to some criterion, e.g., per FilePath?

If it's the former, you can simply do

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| multikv forceheader=1
| sort ExpiryDate
| tail 1
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

If, on the other hand, you want the largest ExpiryDate by FilePath - which seems more practical to me - you could do

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| multikv
| sort - FilePath ExpiryDate
| stats latest(*) as * by FilePath
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

Output from this search using your sample data is

Severity  Hostname      CertIssuer  FilePath                                     Status         ExpiryDate
INFO      appu2.de.com  rootca13    /applications/hs_cert/cert/live/h_core.jks   Valid          2026-10-18
WARNING   appu2.de.com  key         /applications/hs_cert/cert/live/h_hcm.jks    Expiring Soon  2025-06-14
ALERT     appu2.de.com  key         /applications/hs_cert/cert/live/h_mq.p12     Expired        2025-01-03

This method relies on a side effect of the latest function's assumptions about event order. There are more resilient methods, too. Here is an emulation of your sample data you can play with and compare against real data:

| makeresults format=csv data="_raw
ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_hcm.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_hcm.jks|Expiring Soon|2025-06-14
INFO|appu2.de.com|rootca13|/applications/hs_cert/cert/live/h_core.jks|Valid|2026-10-18
ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_core.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_core.jks|Expiring Soon|2025-03-22
ALERT|appu2.de.com|key|/applications/hs_cert/cert/live/h_mq.p12|Expired|2025-01-03"
``` the above emulates index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log ```
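One of those more resilient variants ranks rows by the parsed expiry date instead of relying on event order. A sketch against the same emulation (it assumes ExpiryDate keeps the YYYY-MM-DD format shown in the sample; swap the makeresults block for the real index=test_event search):

| makeresults format=csv data="_raw
ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_hcm.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_hcm.jks|Expiring Soon|2025-06-14
INFO|appu2.de.com|rootca13|/applications/hs_cert/cert/live/h_core.jks|Valid|2026-10-18
ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_core.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_core.jks|Expiring Soon|2025-03-22
ALERT|appu2.de.com|key|/applications/hs_cert/cert/live/h_mq.p12|Expired|2025-01-03"
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| eval expiry_epoch=strptime(ExpiryDate, "%Y-%m-%d")
| eventstats max(expiry_epoch) AS max_expiry BY FilePath
| where expiry_epoch == max_expiry
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

Because the comparison is on the parsed date rather than on index order, the result does not change if events arrive out of order.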
I don't have Splunk running on a Windows machine, so I can't comment on whether those files are necessary. However, if you find that your Splunk installation works well without them and you would simply like to disable the warning, you can remove the related lines from the manifest file in your Splunk directory to disable the integrity check for those files.
The error is exactly what it says: the interpreter cannot determine the values of the placeholder tokens $starttime$ and $endtime$ without replacement arguments. There absolutely are details about the savedsearch command. From the opening paragraph of the savedsearch documentation:

If the search contains replacement placeholder terms, such as $replace_me$, the search processor replaces the placeholders with the strings you specify. For example:

| savedsearch mysearch replace_me="value"

Before showing a possible fix, I want to warn against using the map command for time-based lookups. It is unclear what starttimeu and endtimeu are. Are they in raw events in index=ix1? Fundamentally, map is often not the best solution to a given problem. Try these two searches, one using map and one without it.

To map:

| makeresults format=csv data="field1v, field2v
aaa, bbb"
| map field1v field2v search="| makeresults format=csv data=\"field1, field2, field3
abc, def, 1
aaa, bbb, 2
xxx, yyy, 3\"
| search field1 = $field1v$ field2 = $field2v$"

Not to map:

| makeresults format=csv data="field1, field2, field3
abc, def, 1
aaa, bbb, 2
xxx, yyy, 3"
| search [| makeresults format=csv data="field1v, field2v
aaa, bbb"
  | rename field1v as field1, field2v as field2]

The output is exactly the same, but the second is easier to understand and easier to construct. Applied to your problem, if the fields I questioned above all exist in raw events, your search would be better constructed as

search index=ix1
    [search index=ix2 eventStatus="Successful" | return 1000 eventID ]
    [search index=ix2 eventStatus="Successful" | localize timeafter=0m timebefore=1m | fields starttime endtime | rename starttime as starttimeu, endtime as endtimeu]
| stats values(client) values(port) values(target) by eventID

Not only does this search need no replacement tokens, it is also easier to maintain. In short: to map, or not to map? That is the question.

$xxx$ in a saved search is interpreted as a replacement token whose value must be specified in the invocation command. This conflicts with the map command's use of $xxx$. You can argue that not differentiating value tokens used in different contexts is a design weakness in SPL, but that is what SPL has today. It is not a bug per se.
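For completeness, if you do keep the saved search with placeholders, the invocation has to supply both tokens along these lines (the saved search name and token values here are only illustrative):

| savedsearch mysearch starttime="-15m" endtime="now"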
Hi @greentemplar, How did you determine the conflict (or figure out which ones were conflicting)? Any idea what the root cause is/was? Thanks!
The Deployment Server tracks forwarders by GUID rather than by name and/or address.  Each time Splunk is installed it generates a new GUID, which is why you see the same host name multiple times. To retain forwarder info across rebuilds, save and restore the $SPLUNK_HOME/etc/instance.cfg file on each forwarder.
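For reference, instance.cfg is a tiny file that essentially just carries the GUID, along the lines of the following (the GUID value is illustrative):

# $SPLUNK_HOME/etc/instance.cfg
[general]
guid = 4F1BAEB0-93A1-4BD2-99A3-5C6D8E7F0A12

Restoring this file before the rebuilt forwarder first phones home keeps the Deployment Server from creating a new client record.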
According to the readme.md file, to configure it: "On your Splunk instance navigate to `/app/KeycloakAPI_nxtp` to perform the configuration." I would assume this takes place after you install the app on your instance. Then you should be able to go to https://yoursplunk:8000/<locale>/app/KeycloakAPI_nxtp and there may be a setup page.
I would like to view it in the format below      
Splunk will store the indexed data until the end of the retention period in the index. You cannot tell Splunk to store just the latest copy from inputs.conf. You can, however, use searches to return only the latest indexed event. By default, events are returned in reverse chronological order, so if your list of certificates is in a single event, you may be able to filter to only the latest one by using "head 1":

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| head 1
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| multikv forceheader=1
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

If this is not the case, then perhaps you could post a sanitized screenshot of your events to give us a better idea of how they appear in your search interface.
We have a 5-node Splunk forwarder cluster to handle the throughput of multiple servers in our datacenter. Currently our upgrade method keeps the Deployment Server mutable: we just run config changes via Chef and update it. The 5 forwarder nodes, however, are treated as fully replaceable with Terraform and Chef. Everything is working, but I notice the Deployment Server holds onto forwarders after Terraform destroys the old one, and the new one phones home from a new IP (currently on DHCP) but with the same hostname as the destroyed forwarder. Would replacing the forwarders with the same static IP and hostname resolve that, or would there still be duplicate entries?

Deployment server: Oracle Linux 8.10, Splunk Enterprise 8.2.9
Forwarders: Oracle Linux 8.10, splunkforwarder 8.2.9
You would get better help if you follow these golden rules that I call the four commandments:

1. Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Maybe something like this:

index=analise Task.TaskStatus="Concluído" Task.DbrfMaterial{}.SolutionCode="410 TROCA DO MOD/PLACA/PECA" State IN ("*") CustomerName IN ("*") ItemCode IN ("*")
| spath path=Task.DbrfMaterial{} output=DbrfMaterial
| mvexpand DbrfMaterial
| table TaskNo DbrfMaterial
| spath input=DbrfMaterial
| table TaskNo EngineeringCode ItemDescription ItemQty SolutionCode

How exactly would you like your table to look?