All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


@samuel-devops  Sometimes Splunk UF fails to start due to permission issues. Ensure that the container has the right permissions:

docker exec -it uf bash
chown -R splunk:splunk /opt/splunkforwarder
chmod -R 755 /opt/splunkforwarder

Restart the container:

docker restart uf

Manually check the Splunk UF API. The error suggests that the Ansible task is failing to check for restarts via the Splunk API. Run this manually inside the container:

curl -k -u admin:test12345 "https://localhost:8089/services/messages/restart_required?output_mode=json"

If the API is unreachable, Splunk UF might not be fully initialized.
@samuel-devops  Check if Splunk UF is actually running:

docker ps -a | grep uf

If it's not running, check the logs:

docker logs uf

Look for messages indicating that splunkd started and is listening on port 8089. You should see something like:

Splunk> Be an IT superhero.
Splunk Universal Forwarder has started.

Confirm the ports are mapped and accessible:

docker ps

Ensure the container uf is running and that ports 0.0.0.0:9997->9997/tcp, 0.0.0.0:8080->8080/tcp, and 0.0.0.0:8089->8089/tcp are listed.
Thanks for your answer. The events are part of an index, and they aren't available as JSON; it is a shared notable index. My idea is to define in a lookup which field names I will extract. For example:

| eval sum=case(
    id=1, "dest_ip:".dest_ip.",src_ip:".src_ip,
    id=2, "user:".user.",domain:".domain,
    id=3, "country:".country,
    id=4, "company:".company.",product:".product)
| table id, sum

But this scales very poorly, because the number of conditions could grow to 1000. I don't think that is manageable in one use case.

Thanks for your help
Best regards
Tino
It's good to know that. Then run this on the UF:

splunk list inputstatus

It shows you which inputs your UF sees and what it has read.
Hi @alesyo

How about this? You would just need to add this to your existing query, I think:

| foreach * [| eval summary=mvappend(summary,IF(<<FIELD>>!="" and "<<FIELD>>"!="id", "<<FIELD>>=".<<FIELD>>,null()))]
| eval summary_output="Id:".id." - ".mvjoin(summary," ")
| fields summary_output

However, I've included a full working example below:

| makeresults
| eval data="[{\"id\":1,\"dest_ip\":\"1.1.1.1\",\"src_ip\":\"2.2.2.2\"},{\"id\":2,\"user\":\"bob\",\"domain\":\"microsoft\"},{\"id\":3,\"county\":\"usa\",\"city\":\"seattle\"},{\"id\":4,\"company\":\"cisco\",\"product\":\"splunk\"}]"
| eval rawdata=json_array_to_mv(data)
| mvexpand rawdata
| eval _raw=json_extract(rawdata,"")
| fields - data rawdata
| spath
| stats values(*) AS * by id
| foreach * [| eval summary=mvappend(summary,IF(<<FIELD>>!="" and "<<FIELD>>"!="id", "<<FIELD>>=".<<FIELD>>,null()))]
| eval summary_output="Id:".id." - ".mvjoin(summary," ")
| fields summary_output

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
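For readers less familiar with SPL, here is a rough Python sketch of what the foreach/mvappend pattern above produces (this is an illustration of the logic, not Splunk code; the sample event is made up):

```python
# For every non-empty field except "id", collect "name=value",
# then join the parts into one summary string per event -
# the same idea as the SPL foreach/mvappend/mvjoin pipeline above.
def summarize(event: dict) -> str:
    parts = [f"{k}={v}" for k, v in event.items()
             if k != "id" and v not in ("", None)]
    return f"Id:{event['id']} - " + " ".join(parts)

print(summarize({"id": 2, "user": "bob", "domain": "microsoft"}))
# Id:2 - user=bob domain=microsoft
```

The Python version iterates dict keys where the SPL version uses foreach *; both skip the id field and empty values.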
| foreach * [| eval summary1=if("<<FIELD>>"==Field1,<<FIELD>>,summary1)
    | eval summary2=if("<<FIELD>>"==Field2,<<FIELD>>,summary2)]
| eval summary=Field1."=".summary1.if(isnotnull(Field2)," ".Field2."=".summary2,null())
Good day,

I got it to work after adding a ":" after "usage", as shown below:

index="main" source="C:\\Admin\StorageLogs\storage_usage.log"
| rex field=_raw "usage: (?<diskUsage>[0-9\.]+)% used"
| where diskUsage>75

Thank you for your assistance.
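The rex pattern above can be checked outside Splunk with any regex engine. Here is the equivalent Python regex applied to a sample line (the sample log text is made up; only the pattern comes from the post):

```python
import re

# Same pattern as the SPL rex: capture the number after "usage: ".
pattern = re.compile(r"usage: (?P<diskUsage>[0-9.]+)% used")

line = "2025-03-05 12:00:01 disk usage: 82.5% used on /dev/sda1"  # assumed line format
m = pattern.search(line)
if m and float(m.group("diskUsage")) > 75:
    print("over threshold:", m.group("diskUsage"))
# over threshold: 82.5
```

Note the literal colon and space before the capture group; that missing ":" was exactly what broke the original extraction.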
Hi @rikinet

Would the following achieve what you're looking for?

| makeresults count=5
| streamstats count as a
| eval _time = _time + (60*a)
| eval json1="{\"id\":1,\"attrib_A\":\"A1\"}#{\"id\":2,\"attrib_A\":\"A2\"}#{\"id\":3,\"attrib_A\":\"A3\"}#{\"id\":4,\"attrib_A\":\"A4\"}#{\"id\":5,\"attrib_A\":\"A5\"}", json2="{\"id\":2,\"attrib_B\":\"B2\"}#{\"id\":3,\"attrib_B\":\"B3\"}#{\"id\":4,\"attrib_B\":\"B4\"}#{\"id\":6,\"attrib_B\":\"B6\"}"
| makemv delim="#" json1
| makemv delim="#" json2
``` end data prep ```
| eval data=mvappend(json1,json2)
| mvexpand data
| spath input=data path=id output=id
| spath input=data path=attrib_A output=attrib_A
| spath input=data path=attrib_B output=attrib_B
| stats values(attrib_A) as attrib_A values(attrib_B) as attrib_B by id
| table id, attrib_A, attrib_B

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
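The core of the query above is the merge-by-id step (stats values(...) by id). As a minimal sketch of that logic outside Splunk, here is the same merge in Python using a trimmed version of the sample data from the post:

```python
import json
from collections import defaultdict

# Two feeds of JSON records; merge attributes from both on "id",
# mirroring the "stats values(*) ... by id" step in the SPL above.
json1 = ['{"id":1,"attrib_A":"A1"}', '{"id":2,"attrib_A":"A2"}']
json2 = ['{"id":2,"attrib_B":"B2"}', '{"id":6,"attrib_B":"B6"}']

merged = defaultdict(dict)
for raw in json1 + json2:
    rec = json.loads(raw)
    rid = rec.pop("id")
    merged[rid].update(rec)  # later feeds add their attributes to the same id

for rid in sorted(merged):
    print(rid, merged[rid])
```

Ids present in only one feed (1 and 6 here) come through with just that feed's attributes, exactly as they would from the stats merge.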
Thanks for getting back to me. I have tried syslog, but ingesting the raw data has always been unsuccessful. As you suggested, I will try SC4S.
Hi Community,

I have the following challenge. I have different events, and for each event I want to generate a summary with different values. These values are defined in a lookup table. For example:

E1: id=1, dest_ip=1.1.1.1, src_ip=2.2.2.2, ...
E2: id=2, user=bob, domain=microsoft
E3: id=3, country=usa, city=seattle
E4: id=4, company=cisco, product=splunk

Lookup table (potentially more field names):

ID | Field1  | Field2
1  | dest_ip | src_ip
2  | user    | domain
3  | country |
4  | company | product

Expected output:

id1: Summary dest_ip=1.1.1.1 src_ip=2.2.2.2
id2: Summary user=bob domain=microsoft
id3: Summary country=usa
id4: Summary company=cisco product=splunk

The solution could use a case function, but that doesn't scale well because I would need to add a new line for each case. Potentially, the number of cases could grow to 1000. I tried to solve it with foreach, but I am unable to retrieve the values from the event. Here's the query I tried:

index=events
| lookup cases.csv id OUTPUT field1, field2
| foreach field* [ eval summary = summary + "<<field>>" + ":" <<ITEM>> ]
| table id, summary

Thanks for your help!
Alesyo
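The lookup-driven approach described above can be sketched in plain Python to make the intent concrete (this is an illustration of the idea, not Splunk code; the lookup dict stands in for cases.csv):

```python
# Which fields to include per id lives in a lookup table, so adding
# case number 1001 means adding a lookup row, not a new line of logic.
lookup = {
    1: ["dest_ip", "src_ip"],
    2: ["user", "domain"],
    3: ["country"],
    4: ["company", "product"],
}

def summarize(event: dict) -> str:
    fields = lookup.get(event["id"], [])
    parts = [f"{f}={event[f]}" for f in fields if f in event]
    return "Summary " + " ".join(parts)

print(summarize({"id": 1, "dest_ip": "1.1.1.1", "src_ip": "2.2.2.2"}))
# Summary dest_ip=1.1.1.1 src_ip=2.2.2.2
```

The key property is that the code never enumerates the cases; it only reads the field list for the event's id, which is what the foreach-based SPL answers in this thread achieve.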
Thanks for the suggestion. I have no idea how to create the search; I am very much a novice when it comes to Splunk. Is the search you're suggesting to be applied to the top-level block or to the lower-level dashboard? I'm not sure where I need to add it. For example, if I add the search at the top level, how does it know to go to the underlying dashboard to retrieve the isBad value? Or is the isBad value stored on the lower-level dashboard, with the top level searching for the isBad value on that dashboard?
I checked by using this command but no luck; kindly find my logs:

root@hf2:/opt# ps aux | grep /opt/log/
root 3152 0.0 0.0 9276 2304 pts/2 S+ 13:17 0:00 grep --color=auto /opt/log/
root@hf2:/opt# ls -l /opt/log/
total 204
-rw-r-xr--+ 1 root root 207575 Feb 19 11:12 cisco_ironport_web.log
root@hf2:/opt#

splunkd logs for your reference:

03-04-2025 22:23:55.770 +0530 INFO TailingProcessor [32908 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:29:34.873 +0530 INFO TailingProcessor [33197 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:39:22.449 +0530 INFO TailingProcessor [33712 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:44:59.341 +0530 INFO TailingProcessor [33979 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-04-2025 22:44:59.341 +0530 INFO TailingProcessor [33979 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-04-2025 22:54:52.366 +0530 INFO TailingProcessor [34246 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-04-2025 22:54:52.366 +0530 INFO TailingProcessor [34246 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-05-2025 12:35:53.768 +0530 INFO TailingProcessor [2117 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-05-2025 12:35:53.768 +0530 INFO TailingProcessor [2117 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-05-2025 13:07:00.440 +0530 INFO TailingProcessor [2920 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-05-2025 13:16:28.483 +0530 INFO TailingProcessor [3132 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-05-2025 13:18:26.876 +0530 INFO TailingProcessor [3339 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
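When reading splunkd.log excerpts like the one above, the useful distinction is between "Parsing configuration stanza" (the monitor is configured) and "Adding watch on path" (the file is actually being watched). A small Python sketch can pull the watched paths out of a log excerpt (the two sample lines are shortened copies of the ones above):

```python
import re

# Only "Adding watch on path:" lines mean a file is actually watched;
# "Parsing configuration stanza" alone just means the monitor is configured.
log = """\
03-04-2025 22:23:55.770 +0530 INFO TailingProcessor - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:44:59.341 +0530 INFO TailingProcessor - Adding watch on path: /opt/log/cisco_ironport_web.log.
"""

watched = re.findall(r"Adding watch on path: (\S+?)\.$", log, re.MULTILINE)
print(watched)  # ['/opt/log/cisco_ironport_web.log']
```

Running a check like this over the real log quickly shows whether Splunk ever added a watch for the file in question.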
@livehybrid  I saw the file /var/log/bash_history.log being created successfully, but events from this file came from fewer than 10% of the hosts. I did not see any errors related to permissions or inability to read the file.
@PickleRick Thank you so much, I understand my mistakes. What methods would you recommend for collecting user-entered commands in real-time?
@livehybrid The KV store issue was resolved once I installed Java. I am now stuck on how to assign the newly created index to all Akamai logs.
You can refer to this link: https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/ConfigureSSOOkta#Configure_the_Splunk_platform_to_remove_users_on_Okta
Actually, it looks like cmath should be a system library. Are you adjusting the Python lib path in your code? If so, what is it set to?
Hi @Namdev

How did you get on with looking into the below?

@livehybrid wrote:

Hi @Namdev

Please could you confirm which user the Splunk Forwarder is running as? Is it splunkfwd, splunk or something else?

Please could you show a screenshot of the permissions on your /opt/log files in question.

Did you run anything like this against the directory to give splunk access?

setfacl -R -m u:splunkfwd:r-x /opt/log

Are there any logs in splunkd.log relating to these files?

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
Hi @sufs2000

I see, sorry I misread the first question. In that case, I think you would need a search that determines whether any of the lower-level dashboard entities are not "OK". Are you comfortable creating this search? I've used the following for the example:

| makeresults
| eval statusesStr="[{\"hostname\": \"host-23\", \"status\": \"OK\"}, {\"hostname\": \"host-87\", \"status\": \"NotOK\"}, {\"hostname\": \"host-45\", \"status\": \"OK\"}]"
| eval statuses=json_array_to_mv(statusesStr)
| mvexpand statuses
| eval _raw=statuses
| fields _raw
| spath
``` end of data setup ```
| eval isBad=IF(status!="OK",1,0)
| stats sum(isBad) as isBad

Ultimately, it uses an IF to flag anything bad (setting the value to 1), then sums up the isBad field to get a single value indicating whether there is an issue (>=1):

| eval isBad=IF(status!="OK",1,0)
| stats sum(isBad) as isBad

Once that is done, you can apply the same type of logic to the top-level dashboard.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
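The flag-and-sum roll-up in that answer is a general pattern; here is the same logic in a short Python sketch using the sample host data from the post:

```python
# Same roll-up as the SPL above: flag anything not "OK" as 1, then sum;
# a total >= 1 means the top-level view should show a problem.
statuses = [
    {"hostname": "host-23", "status": "OK"},
    {"hostname": "host-87", "status": "NotOK"},
    {"hostname": "host-45", "status": "OK"},
]

is_bad = sum(1 for s in statuses if s["status"] != "OK")
print(is_bad)  # 1
```

Collapsing many per-entity statuses into one number is what lets the top-level dashboard render a single red/green indicator.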
Yes, I do understand it would require some kind of regex, but my issue is how to write the regex to match the date. Do I need to configure a dat.xml file to read the current date?

server.log.20250303.1
server.log.20250303.10
server.log.20250303.11
server.log.20250303.12
server.log.20250303.13
server.log.20250303.14
server.log.20250303.15
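As a sketch of the date-matching part only (not a Splunk inputs.conf answer), here is a Python regex that selects the rotated files for a given date from names like the ones listed above; in practice you would build the date string from today's date rather than hard-coding it:

```python
import re
from datetime import date

# Match rotated files for one day, e.g. server.log.20250303.1
today = date(2025, 3, 3).strftime("%Y%m%d")  # in real use: date.today().strftime("%Y%m%d")
pattern = re.compile(rf"^server\.log\.{today}\.\d+$")

files = ["server.log.20250303.1", "server.log.20250303.15", "server.log.20250302.9"]
matches = [f for f in files if pattern.match(f)]
print(matches)  # ['server.log.20250303.1', 'server.log.20250303.15']
```

The `\d+` at the end covers the rotation counter (.1 through .15 and beyond), and anchoring with `^...$` avoids accidentally matching longer names.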