All Posts


Hi @alesyo 

I think the JSON in my example shouldn't affect the outcome, as it was purely a way for me to provide a working example. You could use "fields" to list the fields you are interested in before running the foreach command:

index=notable ...etc...
| fields id interestingField1 interestingField2 ...etc...
| foreach *
    [| eval summary=mvappend(summary,IF(<<FIELD>>!="" and "<<FIELD>>"!="summary" and "<<FIELD>>"!="id", "<<FIELD>>=".<<FIELD>>,null()))]
| eval summary_output="Id:".id." - ".mvjoin(summary," ")
| fields summary_output

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @rjastrze 

As this is a Splunk Works developed app, I think the best approach would be to open a support ticket and request that it be installed on your stack, or ask them to look into having it cloud vetted.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
Cooked:tcp : tcp
Raw:tcp : tcp
TailingProcessor:FileStatus :
  $SPLUNK_HOME/etc/apps/sample_app/logs  type = missing
  $SPLUNK_HOME/etc/splunk.version  file position = 70  file size = 70  percent = 100.00  type = finished reading
  $SPLUNK_HOME/var/log/splunk  type = directory
  $SPLUNK_HOME/var/log/splunk/configuration_change.log  type = directory
  $SPLUNK_HOME/var/log/splunk/license_usage_summary.log  type = directory
  $SPLUNK_HOME/var/log/splunk/metrics.log  type = directory
  $SPLUNK_HOME/var/log/splunk/splunk_instrumentation_cloud.log*  type = directory
  $SPLUNK_HOME/var/log/splunk/splunkd.log  type = directory
  $SPLUNK_HOME/var/log/watchdog/watchdog.log*  type = directory
  $SPLUNK_HOME/var/run/splunk/search_telemetry/*search_telemetry.json  type = directory
  $SPLUNK_HOME/var/spool/splunk/tracker.log*  type = directory
  /opt/log/  type = directory
  /opt/log/cisco_ironport_web.log  file position = 207575  file size = 207575  parent = /opt/log/  percent = 100.00  type = finished reading
  /opt/splunkforwarder/var/log/splunk/audit.log  file position = 159471  file size = 159471  parent = $SPLUNK_HOME/var/log/splunk  percent = 100.00  type = open file
  /opt/splunkforwarder/var/log/splunk/btool.log  file position = 192268  file size = 192268  parent = $SPLUNK_HOME/var/log/splunk  percent = 100.00  type = finished reading
  /opt/splunkforwarder/var/log/splunk/conf.log  file position = 9044  file size = 9044  parent = $SPLUNK_HOME/var/log/splunk  percent = 100.00  type = finished reading
  /opt/splunkforwarder/var/log/splunk/configuration_change.log  file position = 3353479  file size = 3353479  parent = $SPLUNK_HOME/var/log/splunk/configuration_change.log  percent = 100.00  type = finished reading
  /opt/splunkforwarder/var/log/splunk/first_install.log  file position = 70  file size = 70  parent = $SPLUNK_HOME/var/log/splunk  percent = 100.00  type = finished reading
  /opt/splunkforwarder/var/log/splunk/health.log  file position = 785728  file size = 785728  parent = $SPLUNK_HOME/var/log/splunk  percent = 100.00  type = finished reading
  /opt/splunkforwarder/var/log/splunk/license_usage.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/splunk/license_usage_summary.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk/license_usage_summary.log  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/splunk/mergebuckets.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/splunk/metrics.log  file position = 21630761  file size = 21630761  parent = $SPLUNK_HOME/var/log/splunk/metrics.log  percent = 100.00  type = finished reading
  /opt/splunkforwarder/var/log/splunk/metrics.log.1  file position = 25000026  file size = 25000026  parent = $SPLUNK_HOME/var/log/splunk  percent = 100.00  type = finished reading
  /opt/splunkforwarder/var/log/splunk/metrics.log.2  file position = 25000081  file size = 25000081  parent = $SPLUNK_HOME/var/log/splunk  percent = 100.00  type = finished reading
  /opt/splunkforwarder/var/log/splunk/mongod.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/splunk/remote_searches.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/splunk/scheduler.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/splunk/search_messages.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/splunk/searchhistory.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/splunk/splunk_instrumentation_cloud.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk/splunk_instrumentation_cloud.log*  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/splunk/splunkd-utility.log  file position = 69012  file size = 69012  parent = $SPLUNK_HOME/var/log/splunk  percent = 100.00  type = finished reading
  /opt/splunkforwarder/var/log/splunk/splunkd.log  file position = 12378562  file size = 12378562  parent = $SPLUNK_HOME/var/log/splunk/splunkd.log  percent = 100.00  type = open file
  /opt/splunkforwarder/var/log/splunk/splunkd_access.log  file position = 44571  file size = 44571  parent = $SPLUNK_HOME/var/log/splunk  percent = 100.00  type = open file
  /opt/splunkforwarder/var/log/splunk/splunkd_stderr.log  file position = 200  file size = 200  parent = $SPLUNK_HOME/var/log/splunk  percent = 100.00  type = finished reading
  /opt/splunkforwarder/var/log/splunk/splunkd_stdout.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/splunk/wlm_monitor.log  file position = 0  file size = 0  parent = $SPLUNK_HOME/var/log/splunk  percent = 100  type = finished reading
  /opt/splunkforwarder/var/log/watchdog/watchdog.log  file position = 12202  file size = 12202  parent = $SPLUNK_HOME/var/log/watchdog/watchdog.log*  percent = 100.00  type = finished reading
tcp_cooked:listenerports : 8089
The current version is not available for the cloud. According to conversations with Splunk Support, the update addresses a skipped-jobs issue that occurs when the Salesforce REST API status is idle. Please update version 1.0.6 for cloud compatibility, and ensure future updates are cloud compatible.
Thank you @livehybrid, I ended up creating a ticket with Splunk Support.
@samuel-devops  Make sure nothing else is using the same ports. Check if the container is binding properly:

netstat -tulnp | grep 8089

or inside the container:

docker exec -it uf netstat -tulnp
@samuel-devops  Sometimes, Splunk UF fails to start due to permission issues. Ensure that the container has the right permissions:

docker exec -it uf bash
chown -R splunk:splunk /opt/splunkforwarder
chmod -R 755 /opt/splunkforwarder

Restart the container:

docker restart uf

Manually check the Splunk UF API. The error suggests that the Ansible task is failing to check for restarts via the Splunk API. Run this manually inside the container:

curl -k -u admin:test12345 https://localhost:8089/services/messages/restart_required?output_mode=json

If the API is unreachable, Splunk UF might not be fully initialized.
@samuel-devops  Check if Splunk UF is actually running:

docker ps -a | grep uf

If it's not running, check the logs:

docker logs uf

Look for messages indicating that splunkd started and is listening on port 8089. You should see something like:

Splunk> Be an IT superhero.
Splunk Universal Forwarder has started.

Confirm the ports are mapped and accessible:

docker ps

Ensure the container uf is running and ports 0.0.0.0:9997->9997/tcp, 0.0.0.0:8080->8080/tcp, and 0.0.0.0:8089->8089/tcp are listed.
Thanks for your answer,

The events are part of an index which isn't available as JSON; it is a shared notable index. My idea is to define in a lookup which field names I will extract. For example:

| eval sum=case(
    id=1, "dest_ip:".dest_ip.",src_ip:".src_ip,
    id=2, "user:".user.",domain:".domain,
    id=3, "country:".country,
    id=4, "company:".company.",product:".product)
| table id, sum

But this scales very poorly, because the number of conditions could grow to 1000. I don't think that is manageable in one use case.

Thanks for your help
Best regards
Tino
It's good to know that. Then this (on the UF):

splunk list inputstatus

shows you what inputs your UF sees and what it has read.
Hi @alesyo 

How about this? You would just need to use this on your existing query, I think:

| foreach *
    [| eval summary=mvappend(summary,IF(<<FIELD>>!="" and "<<FIELD>>"!="id", "<<FIELD>>=".<<FIELD>>,null()))]
| eval summary_output="Id:".id." - ".mvjoin(summary," ")
| fields summary_output

However, I've included a full working example below:

| makeresults
| eval data="[{\"id\":1,\"dest_ip\":\"1.1.1.1\",\"src_ip\":\"2.2.2.2\"},{\"id\":2,\"user\":\"bob\",\"domain\":\"microsoft\"},{\"id\":3,\"county\":\"usa\",\"city\":\"seattle\"},{\"id\":4,\"company\":\"cisco\",\"product\":\"splunk\"}]"
| eval rawdata=json_array_to_mv(data)
| mvexpand rawdata
| eval _raw=json_extract(rawdata,"")
| fields - data rawdata
| spath
| stats values(*) AS * by id
| foreach *
    [| eval summary=mvappend(summary,IF(<<FIELD>>!="" and "<<FIELD>>"!="id", "<<FIELD>>=".<<FIELD>>,null()))]
| eval summary_output="Id:".id." - ".mvjoin(summary," ")
| fields summary_output

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
| foreach *
    [| eval summary1=if("<<FIELD>>"==Field1,<<FIELD>>,summary1)
     | eval summary2=if("<<FIELD>>"==Field2,<<FIELD>>,summary2)]
| eval summary=Field1."=".summary1.if(isnotnull(Field2)," ".Field2."=".summary2,"")

Note that the else branch of the final if() uses "" rather than null(); concatenating null would make the whole summary null whenever Field2 is absent.
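Not stated in the thread, but here is a hedged, self-contained sketch of how this foreach trick might combine with the cases.csv lookup from the original question. The makeresults/eval lines only simulate one event and the lookup output; Field1/Field2 follow the lookup-table column names:

| makeresults
``` simulate one event; in practice this is your base search ```
| eval id=2, user="bob", domain="microsoft"
``` simulate the lookup output; in practice: | lookup cases.csv id OUTPUT Field1 Field2 ```
| eval Field1="user", Field2="domain"
| foreach *
    [| eval summary1=if("<<FIELD>>"==Field1,<<FIELD>>,summary1)
     | eval summary2=if("<<FIELD>>"==Field2,<<FIELD>>,summary2)]
| eval summary=Field1."=".summary1.if(isnotnull(Field2)," ".Field2."=".summary2,"")
| table id summary

For this simulated event the output should be summary="user=bob domain=microsoft", and since the field names come from the lookup rather than from hard-coded case() branches, it should scale to many ids without new SPL lines.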
Good day,

I got it to work after adding a ":" after "usage", as shown below:

index="main" source="C:\\Admin\StorageLogs\storage_usage.log"
| rex field=_raw "usage: (?<diskUsage>[0-9\.]+)% used"
| where diskUsage>75

Thank you for your assistance.
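Not from the thread, but as a possible next step, a minimal sketch: once diskUsage extracts cleanly, the same rex can feed a trend view or an alert. The index, source path and the 75% threshold below are taken from the search above; the hourly span is an assumption:

index="main" source="C:\\Admin\StorageLogs\storage_usage.log"
| rex field=_raw "usage: (?<diskUsage>[0-9\.]+)% used"
``` hourly peak, then keep only hours above the 75% threshold ```
| timechart span=1h max(diskUsage) AS peak_usage
| where peak_usage>75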
Hi @rikinet 

Would the following achieve what you're looking for?

| makeresults count=5
| streamstats count as a
| eval _time = _time + (60*a)
| eval json1="{\"id\":1,\"attrib_A\":\"A1\"}#{\"id\":2,\"attrib_A\":\"A2\"}#{\"id\":3,\"attrib_A\":\"A3\"}#{\"id\":4,\"attrib_A\":\"A4\"}#{\"id\":5,\"attrib_A\":\"A5\"}", json2="{\"id\":2,\"attrib_B\":\"B2\"}#{\"id\":3,\"attrib_B\":\"B3\"}#{\"id\":4,\"attrib_B\":\"B4\"}#{\"id\":6,\"attrib_B\":\"B6\"}"
| makemv delim="#" json1
| makemv delim="#" json2
``` end data prep ```
| eval data=mvappend(json1,json2)
| mvexpand data
| spath input=data path=id output=id
| spath input=data path=attrib_A output=attrib_A
| spath input=data path=attrib_B output=attrib_B
| stats values(attrib_A) as attrib_A values(attrib_B) as attrib_B by id
| table id, attrib_A, attrib_B

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
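A possible simplification, untested: when the JSON keys map directly to the field names you want, spath with no path argument extracts every top-level key, so the three explicit spath calls above could likely collapse into one (everything after the data-prep section stays the same):

| eval data=mvappend(json1,json2)
| mvexpand data
``` with no path=, spath auto-extracts all top-level JSON keys (id, attrib_A, attrib_B) ```
| spath input=data
| stats values(attrib_A) as attrib_A values(attrib_B) as attrib_B by id
| table id, attrib_A, attrib_B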
Thanks for getting back to me. Syslog has been tried, but sending the raw data has always been unsuccessful. As you suggested, I will try SC4S.
Hi Community,

I have the following challenge. I have different events, and for each event I want to generate a summary with different values. These values are defined in a lookup table.

For example:

E1: id=1, dest_ip=1.1.1.1, src_ip=2.2.2.2, ...
E2: id=2, user=bob, domain=microsoft
E3: id=3, county=usa, city=seattle
E4: id=4, company=cisco, product=splunk

Lookup table (potentially more field names):

ID  Field1   Field2
1   dest_ip  src_ip
2   user     domain
3   country
4   company  product

Expected output:

id1: Summary dest_ip=1.1.1.1 src_ip=2.2.2.2
id2: Summary user=bob domain=microsoft
id3: Summary country=usa
id4: Summary company=cisco product=splunk

The solution could be a case function, but it doesn't scale well because I would need to add a new line for each case, and the number of cases could potentially grow to 1000. I tried to solve it with foreach, but I am unable to retrieve the values from the event. Here's the query I tried:

index=events
| lookup cases.csv id OUTPUT field1, field2
| foreach field* [ eval summary = summary + "<<field>>" + ":" <<ITEM>> ]
| table id, summary

Thanks for your help!
Alesyo
Thanks for the suggestion. I have no idea how to create the search; I am very much a novice when it comes to Splunk. Is the search you're suggesting meant to be applied to the top-level block or to the lower-level dashboard? I'm not sure where I need to add it. For example, if I add the search at the top level, how does it know to go to the underlying dashboard to retrieve the isBad value? Or is the isBad value stored on the lower-level dashboard, with the top level searching for the isBad value on that dashboard?
I checked by using this command but no luck. Kindly find my logs:

root@hf2:/opt# ps aux | grep /opt/log/
root 3152 0.0 0.0 9276 2304 pts/2 S+ 13:17 0:00 grep --color=auto /opt/log/
root@hf2:/opt# ls -l /opt/log/
total 204
-rw-r-xr--+ 1 root root 207575 Feb 19 11:12 cisco_ironport_web.log
root@hf2:/opt#

splunkd logs for your reference:

03-04-2025 22:23:55.770 +0530 INFO TailingProcessor [32908 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:29:34.873 +0530 INFO TailingProcessor [33197 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:39:22.449 +0530 INFO TailingProcessor [33712 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:44:59.341 +0530 INFO TailingProcessor [33979 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-04-2025 22:44:59.341 +0530 INFO TailingProcessor [33979 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-04-2025 22:54:52.366 +0530 INFO TailingProcessor [34246 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-04-2025 22:54:52.366 +0530 INFO TailingProcessor [34246 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-05-2025 12:35:53.768 +0530 INFO TailingProcessor [2117 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-05-2025 12:35:53.768 +0530 INFO TailingProcessor [2117 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-05-2025 13:07:00.440 +0530 INFO TailingProcessor [2920 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-05-2025 13:16:28.483 +0530 INFO TailingProcessor [3132 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-05-2025 13:18:26.876 +0530 INFO TailingProcessor [3339 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
root@hf2:/opt#
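Not part of the original reply, but one way to cross-check from the search side whether anything from that monitor actually reached an index, and whether splunkd reported problems with that path, is a pair of searches like the sketch below, run separately (the destination index is unknown here, hence index=*):

``` did any events from the monitored file reach an index? ```
index=* source="/opt/log/cisco_ironport_web.log"
| stats count latest(_time) AS last_event_time

``` any warnings or errors from splunkd mentioning that file ```
index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) "cisco_ironport_web.log"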
@livehybrid  I saw the file /var/log/bash_history.log being created successfully, but events from this file came in from fewer than 10% of the hosts. I did not see any errors related to permissions or inability to view the file.
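Not from the original post, but a hedged way to quantify that gap is to list which hosts are actually sending the file; the source path comes from the post, while the index and time range are assumptions:

``` which hosts are actually sending the file (destination index unknown, hence index=*) ```
index=* source="/var/log/bash_history.log" earliest=-24h
| stats dc(host) AS hosts_sending values(host) AS sending_hosts

Diffing sending_hosts against the full forwarder population (for example, dc(host) over index=_internal, run separately) should identify which hosts never picked up the input.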
@PickleRick Thank you so much, I understand my mistakes. What methods would you recommend for collecting user-entered commands in real-time?