All Posts



Hi @Anthony.Dahanne, I'm glad you were able to figure it out. Thanks for coming back and sharing the solution too!
Another thing that comes to mind: local file permissions? (the splunk process being unable to alter the passwd file)
Hi @dtburrows3, Thanks so much, your suggestions helped me a lot. For now I will go with the eventstats solution. For the foreach command I need to go deeper, since it is more complex. @PickleRick I will try xyseries, same as I did before, to get the expected single values for the productcat# fields. I need to push this report to Production ASAP.
Removing users is a standard Splunk admin task, so this is odd! If you look at your config, what does it state? Try running the btool command to check your authentication config:

/opt/splunk/bin/splunk cmd btool authentication list --debug
I'm trying to use rex to get the contents of the field letterIdAndDeliveryIndicatorMap. For example, given the logged string letterIdAndDeliveryIndicatorMap=[abc=P, efg=P, HijKlmno=E], I want to extract the contents between the [ ], which is abc=P, efg=P, HijKlmno=E, and then run stats on them. I was trying something like rex field=_raw "letterIdAndDeliveryIndicatorMap=\[(?<letterIdAry>[^\] ]+)" but it's not working as expected. Thanks in advance!
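An editorial note, not from the thread: the likely culprit is the character class `[^\] ]+`, which excludes both `]` and the space, so the capture stops at the first space after `abc=P,`. Dropping the space from the class captures everything up to the closing bracket. A quick Python check of the two patterns (Python spells named groups `?P<name>` where SPL's rex uses `?<name>`):

```python
import re

raw = "letterIdAndDeliveryIndicatorMap=[abc=P, efg=P, HijKlmno=E]"

# Original class [^\] ]+ excludes ']' AND space, so matching stops
# at the first space after "abc=P,".
orig = re.search(r"letterIdAndDeliveryIndicatorMap=\[(?P<letterIdAry>[^\] ]+)", raw)
print(orig.group("letterIdAry"))   # -> abc=P,

# Dropping the space from the class keeps matching until ']'.
fixed = re.search(r"letterIdAndDeliveryIndicatorMap=\[(?P<letterIdAry>[^\]]+)", raw)
print(fixed.group("letterIdAry"))  # -> abc=P, efg=P, HijKlmno=E
```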
An additional idea along these lines: establish a baseline of normal network probing activity, then use that information to drive a risk-based alert. Just a thought...
Splunk can't find something that's not there.  You'll need to use makeresults or a lookup to populate what you expect and then replace that with actual indexed data.
This could be a number of things; that said, TCP output problems are normally related to the network or setup. A few things to check: What does the inputs.conf look like on your indexer? On the indexer, check the port with netstat -tupln — it should show your configured port 9997. Is a firewall blocking this port? Can your UF communicate with the indexer?
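For reference (an editorial sketch, not from the thread): the indexer side needs a splunktcp input listening on the forwarding port, or the forwarder's output will pause exactly as described. A minimal inputs.conf stanza on the indexer, assuming the default receiving port 9997:

```
# inputs.conf on the indexer: accept forwarder traffic on 9997
[splunktcp://9997]
disabled = 0
```

The same effect can be achieved in the UI under Settings > Forwarding and receiving > Configure receiving.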
Thanks for the quick response! Actually I was looking for output like the one below. Files were missed between 6:00-7:30 AM and 9:00-10:05 PM:

File    Date
TI7L    03-06-2024 06:52    file missing
TI8L    03-06-2024 11:51
TI8L    03-06-2024 11:50
TI9L    03-06-2024 19:06
TI9L    03-06-2024 19:10
TI5L    03-06-2024 22:16    File missing
What's the best way to start Splunk? Is it with the root user or with the splunk user?
Hello Team, I have configured a Splunk forwarder and I am getting the error below:

WARN TcpOutputProc [8204 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=WALVAU-VIDI-1 inside output group default-autolb-group from host_src=WALVAU-MCP-APP- has been blocked for blocked_seconds=400. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

Task: I want to send data from the Splunk forwarder to the Splunk Enterprise server (indexer).
1. I opened outbound port 9997 on the UF.
2. I opened inbound port 9997 on the indexer.

outputs.conf on the UF:
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = WALVAU-VIDI-1:9997
[tcpout-server://WALVAU-VIDI-1:9997]

inputs.conf on the UF:
[monitor://D:\BEXT\Walmart_VAU_ACP\Log\BPI*.log]
disabled = false
index = walmart_vau_acp
sourcetype = Walmart_VAU_ACP

Please help me fix the issue so that the forwarder will send data to the indexer.
Hi @Somesh, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated
Thanks for the update @gcusello. It worked by running the command "splunk enable boot-start -systemd-managed 1". Also, what is the best practice for starting Splunk? Does it need to be started by the "root" user or the "splunk" user?
Hi @orendado, Usually different types of logs are categorized using sourcetype, and the parsing rules and field extractions are tied to the sourcetype. Are you using different sourcetypes? If you want to add other data sources, you can create your own sourcetypes, possibly starting from an existing one. The Add Data function is very useful for finding the correct sourcetype to associate with your data sources. Ciao. Giuseppe
Hi, Let's say I'm ingesting different types of log files of different formats (some are txt, csv, json, xml, ...) into the same index. How can I add additional data to each data source/log? I would like to add some extra fields in JSON format, for example: customer name, system name...
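An editorial note, not from the thread: one way to attach extra per-source fields is the `_meta` setting in inputs.conf, which adds indexed fields to every event from that input. The stanza path, field names, and values below are hypothetical:

```
# inputs.conf: attach extra indexed fields to everything from this input
[monitor:///var/log/app/app.log]
index = main
sourcetype = app_json
_meta = customer_name::acme system_name::billing
```

Each `field::value` pair in `_meta` becomes a searchable indexed field, which avoids touching the raw event; note this adds fields rather than rewriting the event as JSON.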
Thanks again for the info and clarification   ewholz
Hi @iam_ironman , I usually use this configuration. Ciao. Giuseppe
Hi @Somesh, please try some of these (starting from the first URL): https://www.aidanwardman.com/enabling-boot-start-in-splunk-on-rhel-9-rocky-9/ https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/ConfigureSplunktostartatboottime https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/RunSplunkassystemdservice Ciao. Giuseppe
As @ITWhisperer shows, the key is mvexpand. Meanwhile, I believe that you already have serial_number, type, the outer result, and so on, so you don't need to extract them. In addition, you have a whole bunch of logs{}.* fields (including logs{}.result) that you no longer need. You can simplify the search to:

| fields - logs{}.*
| spath logs{} output=logs
| mvexpand logs
| spath input=logs

One more thing: is the outer result field important? If it is, you want to rename the outer result first.
From what *I* have seen, the machineTypesFilter seems to be at the root of this bug. This is the absolute *WORST* update that I've seen in the 6 years that I've been working with Splunk. I did read something indicating that (white|black)list.X can also take OS strings, but the docs call it "platform dependent", so I am putting it off until we can actually SEE what's being deployed again. I have noticed yet *ANOTHER* bug... After a Deployment Server has been running for a bit, ANY CALL that queries DS client information will TIME OUT. I have multiple scripts that read data from /services/deployment/server/clients, and even after bumping the timeouts to 30 seconds, they still time out. It used to take < 2s to pull data from THOUSANDS of clients.