All Posts



Splunk will store the indexed data until the end of the retention period in the index. You cannot tell Splunk to store just the latest copy from inputs.conf. You can, however, use searches to return only the latest indexed event. By default, events are returned in reverse chronological order. So if your list of certificates is in a single event, then you may be able to filter to only the latest one by using head 1:

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| head 1
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| multikv forceheader=1
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

If this is not the case, then perhaps you could post a sanitized screenshot of your events to give us a better idea of how they appear in your search interface.
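If instead each certificate is logged as its own event, a sketch of an alternative (assuming the same pipe-delimited extraction applies) keeps only the newest event per certificate file with dedup:

```
index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| dedup FilePath
| table Severity Hostname CertIssuer FilePath Status ExpiryDate
```

Because events arrive in reverse chronological order, dedup FilePath keeps only the most recent event for each certificate file.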
We have a 5 node Splunk forwarder cluster to handle the throughput of multiple servers in our datacenter. Currently our upgrade method keeps the Deployment Server mutable: we just run config changes via Chef and update it. But the 5 forwarder nodes are treated as fully replaceable with Terraform and Chef. Everything is working, but I notice the Deployment Server holds onto forwarders after Terraform destroys the old one, and the new one phones home on a new IP (currently on DHCP) but with the same hostname as the destroyed forwarder. Would replacing the forwarders with the same static IP and hostname resolve that, or would there still be duplicate entries? Deployment server: Oracle Linux 8.10, Splunk Enterprise 8.2.9. Forwarders: Oracle Linux 8.10, splunkforwarder 8.2.9.
You would get better help if you follow these golden rules that I call the four commandments:
1. Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output, without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Maybe something like this:

index=analise Task.TaskStatus="Concluído" Task.DbrfMaterial{}.SolutionCode="410 TROCA DO MOD/PLACA/PECA" State IN ("*") CustomerName IN ("*") ItemCode IN ("*")
| spath path=Task.DbrfMaterial{} output=DbrfMaterial
| mvexpand DbrfMaterial
| table TaskNo DbrfMaterial
| spath input=DbrfMaterial
| table TaskNo EngineeringCode ItemDescription ItemQty SolutionCode

How exactly would you like your table to look?
Good morning! In the scenario presented below, I cannot associate the items inside the DbrfMaterial field into one table: EngineeringCode, ItemDescription, ItemQty, SolutionCode. I used the search below!

index=analise Task.TaskStatus="Concluído" Task.DbrfMaterial{}.SolutionCode="410 TROCA DO MOD/PLACA/PECA" State IN ("*") CustomerName IN ("*") ItemCode IN ("*")
| mvexpand Task.DbrfMaterial{}.EngineeringCode
| search Task.DbrfMaterial{}.EngineeringCode="*"
| stats count by Task.DbrfMaterial{}.EngineeringCode
| rename count as Quantity
| head 20
| table Task.DbrfMaterial{}.EngineeringCode Quantity
| sort -Quantity
| appendcols [ search index=brazilcalldata Task.TaskStatus="Concluído" Task.DbrfMaterial.SolutionCode="410 TROCA DO MOD/PLACA/PECA" CustomerName IN ("*") State IN ("*") Task.DbrfMaterial.EngineeringCode="*" ItemCode="*"
    | stats count, sum(Task.DbrfMaterial.ItemQty) as TotalItemQty by Task.DbrfMaterial.EngineeringCode Task.DbrfMaterial.ItemDescription
    | rename Task.DbrfMaterial.EngineeringCode as Item, Task.DbrfMaterial.ItemDescription as Descricao, TotalItemQty as "Qtde Itens"
    | table Item Descricao "Qtde Itens" count
    | sort - "Qtde Itens" ]
| eval TotalQuantity = Quantity + 'Qtde Itens'
| search Task.DbrfMaterial{}.EngineeringCode!=""
| table Task.DbrfMaterial{}.EngineeringCode Quantity "Qtde Itens" TotalQuantity
You can achieve this by using the sendemail command. Rather than setting email as an action, you can incorporate the sendemail command directly into your search query, configuring it with the necessary parameters. Example <yoursearch> | sendemail to=example@splunk.com server=mail.example.com subject="Here is an email from Splunk" message="This is an example message" sendresults=true inline=true format=raw sendpdf=true   ------ If you find this solution helpful, please consider accepting it and awarding karma points !!
(index="routerswitch" action_type IN(Failed_Attempts, Passed_Attempts) src_mac=* SwitchName=switch1 Port_Id=GigabitEthernet1/0/21 earliest=-30d) OR (index=connections source="/var/devices.log" src_ip=172.* earliest=-30d src_mac=*) | fields src_mac dhcp_host_name src_ip IP_Address SwitchName Port_Id | eval src_mac=upper(src_mac) | stats values(dhcp_host_name) as hostname values(src_ip) as IP values(IP_Address) as net_IP values(SwitchName) as switch values(Port_Id) as portID by src_mac | where isnotnull(hostname) AND isnotnull(IP) AND isnotnull(net_IP) AND isnotnull(switch) AND isnotnull(portID)
There are absolutely no differences in the src_mac. The search *does* find the correct results where the src_mac in each sourcetype matches and the full device data is shown. It's just that the stats command doesn't appear to *require* that there be a matching src_mac in each sourcetype so it can pull all the required fields from each. The end result is a table that may contain a device's src_mac and hostname but is missing the switch port and name, or the opposite, where I'm missing the hostname but have the rest of the info. If needed, I'll fabricate some results.
@karn  I'm not entirely sure about this, but I can provide some documentation about the license for your reference. Feel free to take a look. https://docs.splunk.com/Documentation/UBA/5.4.1/Install/License  https://docs.splunk.com/Documentation/SOAR/current/Admin/License  If this reply helps you, Karma would be appreciated.
@danielbb Go through this link for more information : https://www.splunk.com/en_us/blog/tips-and-tricks/whats-your-ulimit.html  I hope this helps, if any reply helps you, you could add your upvote/karma points to that reply, thanks.
@danielbb The default ulimit value is 1024. Minimal values might work for basic setups, but modern applications often require higher limits.
@danielbb You can set the ulimit value to 65535.
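A sketch of where that limit is typically raised on Linux; the splunk user name, paths, and the systemd drop-in location are illustrative assumptions, so adjust them for your environment:

```
# /etc/security/limits.conf -- applied via pam_limits at login
splunk  soft  nofile  65535
splunk  hard  nofile  65535

# For a systemd-managed splunkd, limits.conf is bypassed; set the limit
# in a unit override instead, e.g. /etc/systemd/system/Splunkd.service.d/override.conf
[Service]
LimitNOFILE=65535
```

After changing a systemd override, run systemctl daemon-reload and restart the service; verify the effective value with ulimit -n in the service's context.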
@jmunsterman  As @richgalloway mentioned, you can use the stats command to retrieve all the results. Please find the attached screenshot for reference. I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply, thanks.
Thanks @kiran_panchavat, About the ulimits, what are the minimal ulimits requirements?
One way to see all 100+ values of the field is by using the stats command. ... | stats count by dnis  Of course, the table command also will list all values of the field (with duplicates, if any). ... | table dnis
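If you want every distinct value in a single row rather than one row per value, a hedged variant (the dnis field name is taken from the question) is stats values:

```
... | stats values(dnis) as dnis
```

values() returns each distinct dnis once, as a multivalue field in one result row.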
@danielbb  Please have a look https://docs.splunk.com/Documentation/Splunk/9.4.0/Installation/SystemRequirements#Considerations_regarding_system-wide_resource_limits_on_.2Anix_systems    I hope this helps, if any reply helps you, you could add your upvote/karma points to that reply, thanks.
@danielbb  This allows different buckets to be stored on different storage types, which in turn is very useful to improve efficiency and reduce storage costs. Below are the recommended configurations for each bucket/storage type and example indexes.conf parameters that can be utilized. I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply, thanks.
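For example, a hedged indexes.conf sketch: the index name, volume names, and paths below are illustrative, while the parameter names themselves are standard Splunk settings:

```
# indexes.conf -- hot/warm buckets on fast storage, cold on cheaper storage
[volume:fast]
path = /fast_storage/splunk
maxVolumeDataSizeMB = 500000

[volume:slow]
path = /bulk_storage/splunk
maxVolumeDataSizeMB = 2000000

[example_index]
homePath   = volume:fast/example_index/db        # hot + warm buckets
coldPath   = volume:slow/example_index/colddb    # cold buckets
thawedPath = $SPLUNK_DB/example_index/thaweddb   # restored frozen data
frozenTimePeriodInSecs = 7776000                 # ~90 days, then roll to frozen
coldToFrozenDir = /archive/example_index         # archive instead of deleting
maxDataSize = auto_high_volume                   # hot bucket sizing policy
```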
@danielbb  General considerations for all Splunk servers:
1. Set ulimits and disable Transparent Huge Pages.
2. Turn off SELinux.
3. Check firewalld. If company policy requires an OS-level firewall, make sure you open the required ports for Splunk on the OS.
4. Don't run Splunk as root: create a splunk user and group, and give the splunk user sudo privileges.
5. Storage considerations for indexers: Splunk indexed data goes through various stages during its lifecycle, as shown below:
Hot Bucket > Warm Bucket > Cold Bucket > Frozen/Archived > Thawed (manual process)
This allows different buckets to be stored on different storage types, which in turn is very useful to improve efficiency and reduce storage costs.
I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply, thanks.
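For point 1, a minimal sketch of checking the Transparent Huge Pages mode on Linux; the sysfs path is standard on RHEL/Oracle Linux kernels, and the script guards for systems or containers where it is absent:

```shell
# Report the current Transparent Huge Pages mode (Linux).
THP_FILE=/sys/kernel/mm/transparent_hugepage/enabled
if [ -r "$THP_FILE" ]; then
    # Output looks like "always madvise [never]"; brackets mark the active mode.
    thp_state=$(cat "$THP_FILE")
else
    thp_state="not available"
fi
echo "THP: $thp_state"
# To disable THP until the next reboot (run as root):
#   echo never > /sys/kernel/mm/transparent_hugepage/enabled
```

To make the change persistent across reboots, most distributions recommend a tuned profile or a systemd unit that writes the setting at boot.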
Distinct results in Splunk, and how to show all data in selected fields vs. the 100+ results
We are creating an installation with one indexer, one search head, and one universal forwarder with syslog, and I wonder what the minimal OS requirements are, such as disabling transparent huge pages on the indexer, file descriptors, etc. We are talking about a bare-minimum installation.