All Posts

It is best not to use rex to extract information from structured data.  As @ITWhisperer says, the OP should post complete sample logs instead of a fragment of JSON.  If the raw event is itself compliant JSON, there should be no need to extract using search commands because Splunk automatically does this at search time.
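For example (a minimal sketch with made-up field names), a compliant JSON event needs no rex at all; spath, the same extraction that KV_MODE=json performs automatically at search time, surfaces the fields directly:

| makeresults
| eval _raw="{\"severity\": \"WARN\", \"status\": \"Expiring Soon\"}"
| spath
| table severity status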
Several things need to be clarified.  First, as @marnall says, there is no way to use inputs.conf to make Splunk handle only part of an event unless you can predetermine which row in that multi-row data is "latest".  Secondly, when you say "latest", people generally understand it to mean the latest event in the index.  If you only want to SHOW the latest row based on ExpiryDate, that is easily achieved in search.  Thirdly, your "simple" requirement statement omits an important qualifier: do you want the single largest ExpiryDate in the entire log, or the largest ExpiryDate per group by some criterion, e.g., by FilePath?

If it's the former, you can simply do

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| multikv forceheader=1
| sort ExpiryDate
| tail 1
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

If, on the other hand, you want the largest ExpiryDate by FilePath (which seems more practical to me), you could do

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| multikv
| sort - FilePath ExpiryDate
| stats latest(*) as * by FilePath
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

Output from this search using your sample data is

Severity  Hostname      CertIssuer  FilePath                                    Status         ExpiryDate
INFO      appu2.de.com  rootca13    /applications/hs_cert/cert/live/h_core.jks  Valid          2026-10-18
WARNING   appu2.de.com  key         /applications/hs_cert/cert/live/h_hcm.jks   Expiring Soon  2025-06-14
ALERT     appu2.de.com  key         /applications/hs_cert/cert/live/h_mq.p12    Expired        2025-01-03

This method relies on a side effect of the latest function's assumptions about event order.  There are more resilient methods, too.

Here is an emulation of your sample data that you can play with and compare with real data:

| makeresults format=csv data="_raw
ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_hcm.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_hcm.jks|Expiring Soon|2025-06-14
INFO|appu2.de.com|rootca13|/applications/hs_cert/cert/live/h_core.jks|Valid|2026-10-18
ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_core.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_core.jks|Expiring Soon|2025-03-22
ALERT|appu2.de.com|key|/applications/hs_cert/cert/live/h_mq.p12|Expired|2025-01-03"
``` the above emulates index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log ```
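One such more resilient variant (a sketch only: it sorts explicitly on ExpiryDate and keeps the first row per FilePath with dedup, so the result no longer depends on event order) could be

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| sort 0 - ExpiryDate
| dedup FilePath
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

ISO dates sort correctly as strings, so the descending sort puts the largest ExpiryDate first within each FilePath and dedup keeps exactly that row.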
I don't have Splunk running on a Windows machine, so I can't comment on whether those files are necessary. But if you find that your Splunk installation works well without them and you just want to disable the warning, you can remove the related lines from the manifest file in your Splunk directory to disable integrity checking for those files.
The error is exactly what it says: the interpreter cannot determine the values of the placeholder tokens $starttime$ and $endtime$ without replacement arguments.  The details are in the savedsearch documentation; from its opening paragraph:

If the search contains replacement placeholder terms, such as $replace_me$, the search processor replaces the placeholders with the strings you specify. For example: | savedsearch mysearch replace_me="value"

Before showing a possible fix, I want to warn against using the map command to handle time.  It is unclear what starttimeu and endtimeu are.  Are these in raw events in index=ix1?  Fundamentally, map is often not the best solution to a given problem.  Try these two searches, one with map, one without:

To map

| makeresults format=csv data="field1v, field2v
aaa, bbb"
| map search="| makeresults format=csv data=\"field1, field2, field3
abc, def, 1
aaa, bbb, 2
xxx, yyy, 3\"
| search field1 = $field1v$ field2 = $field2v$"

Not to map

| makeresults format=csv data="field1, field2, field3
abc, def, 1
aaa, bbb, 2
xxx, yyy, 3"
| search
    [| makeresults format=csv data="field1v, field2v
aaa, bbb"
    | rename field1v as field1, field2v as field2]

The output is exactly the same, but the second one is easier to understand and easier to construct.

Applied to your problem, if the fields I questioned above all exist in raw events, your search would be better constructed as

search index=ix1
    [search index=ix2 eventStatus="Successful"
    | return 1000 eventID]
    [search index=ix2 eventStatus="Successful"
    | localize timeafter=0m timebefore=1m
    | fields starttime endtime
    | rename starttime as starttimeu, endtime as endtimeu]
| stats values(client) values(port) values(target) by eventID

Not only does this search need no replacement tokens, it is also easier to maintain.  In short: to map, or not to map?  That is the question.

$xxx$ in a saved search is interpreted as a replacement token whose value must be specified in the invocation command.  This conflicts with the map command's use of $xxx$.  You could call SPL's failure to differentiate value tokens used in different contexts a design weakness, but that is what SPL has today.  This is not a bug per se.
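That said, if you do keep the saved search, the direct fix for the error itself is to supply the tokens in the invocation, per the documentation quoted above. A minimal sketch, assuming the saved search is named mysearch and that $starttime$ and $endtime$ expect time strings:

| savedsearch mysearch starttime="-24h@h" endtime="@h"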
Hi @greentemplar, How did you determine the conflict (or figure out which ones were conflicting)? Any idea what the root cause is/was? Thanks!
The Deployment Server tracks forwarders by GUID rather than by name and/or address.  Each time Splunk is installed it generates a new GUID, which is why you see the same host name multiple times. To retain forwarder info across rebuilds, save and restore the $SPLUNK_HOME/etc/instance.cfg file on each forwarder.
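For reference, instance.cfg is a small file that holds little more than the GUID; it looks roughly like this (illustrative GUID):

[general]
guid = 01234567-89AB-CDEF-0123-456789ABCDEF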
According to the readme.md file, to configure it: On your Splunk instance navigate to `/app/KeycloakAPI_nxtp` to perform the configuration. I would assume this takes place after you install the app on your instance. Then you should be able to go to https://yoursplunk:8000/<locale>/app/KeycloakAPI_nxtp and there may be a setup page.
I would like to view it in the format below      
Splunk will store the indexed data until the end of the retention period in the index. You cannot tell Splunk to just store the latest copy from inputs.conf. You can, however, use searches to return only the latest indexed event. By default, events will be returned in reverse chronological order. So if your list of certificates is in a single event, then you may be able to filter to only the latest one by using "head 1":

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| head 1
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| multikv forceheader=1
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

If this is not the case, then perhaps you could post a sanitized screenshot of your events to give us a better idea of how they appear in your search interface.
We have a 5-node Splunk forwarder cluster to handle the throughput of multiple servers in our datacenter.  Currently our upgrade method keeps the Deployment Server mutable: we just run config changes via Chef and update it.  But the 5 forwarder nodes are treated as fully replaceable with Terraform and Chef. Everything is working, but I notice the Deployment Server holds onto forwarders after Terraform destroys the old one, and the new one phones home from a new IP (currently on DHCP) but with the same hostname as the destroyed forwarder.  Would replacing the forwarders with the same static IP and hostname resolve that, or would there still be duplicate entries?

Deployment server: Oracle Linux 8.10, splunk-enterprise 8.2.9
Forwarders: Oracle Linux 8.10, splunkforwarder 8.2.9
You would get better help if you follow these golden rules that I call the four commandments:
1. Illustrate data input (in raw text, anonymized as needed), whether it is raw events or output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic connecting the illustrated data and the desired output, without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Talvez algo assim: index=analise Task.TaskStatus="Concluído" Task.DbrfMaterial{}. SolutionCode="410 TROCA DO MOD/PLACA/PECA" State IN ("*") CustomerName IN ("*") ItemCode("*") | spath path=Task.Dbrf... See more...
Maybe something like this:

index=analise Task.TaskStatus="Concluído" Task.DbrfMaterial{}.SolutionCode="410 TROCA DO MOD/PLACA/PECA" State IN ("*") CustomerName IN ("*") ItemCode IN ("*")
| spath path=Task.DbrfMaterial{} output=DbrfMaterial
| mvexpand DbrfMaterial
| table TaskNo DbrfMaterial
| spath input=DbrfMaterial
| table TaskNo EngineeringCode ItemDescription ItemQty SolutionCode

How exactly would you like your table to look?
Good morning! In the scenario presented below, I cannot associate the items inside the DbrfMaterial field in one table: EngineeringCode, ItemDescription, ItemQty, SolutionCode.

This is the search I used:

index=analise Task.TaskStatus="Concluído" Task.DbrfMaterial{}.SolutionCode="410 TROCA DO MOD/PLACA/PECA" State IN ("*") CustomerName IN ("*") ItemCode IN ("*")
| mvexpand Task.DbrfMaterial{}.EngineeringCode
| search Task.DbrfMaterial{}.EngineeringCode="*"
| stats count by Task.DbrfMaterial{}.EngineeringCode
| rename count as Quantidade
| head 20
| table Task.DbrfMaterial{}.EngineeringCode Quantidade
| sort - Quantidade
| appendcols
    [ search index=brazilcalldata Task.TaskStatus="Concluído" Task.DbrfMaterial.SolutionCode="410 TROCA DO MOD/PLACA/PECA" CustomerName IN ("*") State IN ("*") Task.DbrfMaterial.EngineeringCode="*" ItemCode="*"
    | stats count, sum(Task.DbrfMaterial.ItemQty) as TotalItemQty by Task.DbrfMaterial.EngineeringCode Task.DbrfMaterial.ItemDescription
    | rename Task.DbrfMaterial.EngineeringCode as Item, Task.DbrfMaterial.ItemDescription as Descricao, TotalItemQty as "Qtde Itens"
    | table Item Descricao "Qtde Itens" count
    | sort - "Qtde Itens" ]
| eval TotalQuantity = Quantidade + 'Qtde Itens'
| search Task.DbrfMaterial{}.EngineeringCode!=""
| table Task.DbrfMaterial{}.EngineeringCode Quantidade "Qtde Itens" TotalQuantity
You can achieve this by using the sendemail command. Rather than setting email as an action, you can incorporate the sendemail command directly into your search query, configuring it with the necessary parameters. Example:

<yoursearch>
| sendemail to=example@splunk.com server=mail.example.com subject="Here is an email from Splunk" message="This is an example message" sendresults=true inline=true format=raw sendpdf=true

------ If you find this solution helpful, please consider accepting it and awarding karma points !!
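If you would rather attach the results as a CSV file instead of inlining them as raw text, sendemail also supports the sendcsv option (a sketch using the same assumed addresses as above):

<yoursearch>
| sendemail to=example@splunk.com server=mail.example.com subject="Here is an email from Splunk" sendresults=true sendcsv=true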
(index="routerswitch" action_type IN(Failed_Attempts, Passed_Attempts) src_mac=* SwitchName=switch1 Port_Id=GigabitEthernet1/0/21 earliest=-30d) OR (index=connections source="/var/devices.log" src_ip... See more...
(index="routerswitch" action_type IN(Failed_Attempts, Passed_Attempts) src_mac=* SwitchName=switch1 Port_Id=GigabitEthernet1/0/21 earliest=-30d) OR (index=connections source="/var/devices.log" src_ip=172.* earliest=-30d src_mac=*) | fields src_mac dhcp_host_name src_ip IP_Address SwitchName Port_Id | eval src_mac=upper(src_mac) | stats values(dhcp_host_name) as hostname values(src_ip) as IP values(IP_Address) as net_IP values(SwitchName) as switch values(Port_Id) as portID by src_mac | where isnotnull(hostname) AND isnotnull(IP) AND isnotnull(net_IP) AND isnotnull(switch) AND isnotnull(portID)
There are absolutely no differences in the src_mac. The search *does* find the correct results where the src_mac in each sourcetype matches and the full device data is shown. It's just that the stats command doesn't appear to *require* a matching src_mac in each sourcetype before it pulls the required fields from each. The end result is a table that may contain a device's src_mac and hostname but is missing the switch port and name, or the opposite, where I'm missing the hostname but have the rest of the info.

If needed, I'll fabricate some results.
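One way to require a match in both indexes (a sketch against the search above, not a confirmed fix) is to count distinct index values per src_mac with dc(index) and keep only MACs that appear in both, which replaces the isnotnull checks:

(index="routerswitch" action_type IN(Failed_Attempts, Passed_Attempts) src_mac=* SwitchName=switch1 Port_Id=GigabitEthernet1/0/21 earliest=-30d) OR (index=connections source="/var/devices.log" src_ip=172.* earliest=-30d src_mac=*)
| eval src_mac=upper(src_mac)
| stats dc(index) as index_count values(dhcp_host_name) as hostname values(src_ip) as IP values(IP_Address) as net_IP values(SwitchName) as switch values(Port_Id) as portID by src_mac
| where index_count=2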
@karn  I'm not entirely sure about this, but I can provide some documentation about the license for your reference. Feel free to take a look. https://docs.splunk.com/Documentation/UBA/5.4.1/Install/License  https://docs.splunk.com/Documentation/SOAR/current/Admin/License  If this reply helps you, Karma would be appreciated.
@danielbb Go through this link for more information: https://www.splunk.com/en_us/blog/tips-and-tricks/whats-your-ulimit.html  I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply, thanks.
@danielbb The default ulimit value for open files on most Linux systems is 1024. Minimal values might work for basic setups, but modern applications often require higher limits.
@danielbb You can set the ulimit value to 65535.