Hi @rikinet

Would the following achieve what you're looking for?

| makeresults count=5
| streamstats count as a
| eval _time = _time + (60*a)
| eval json1="{\"id\":1,\"attrib_A\":\"A1\"}#{\"id\":2,\"attrib_A\":\"A2\"}#{\"id\":3,\"attrib_A\":\"A3\"}#{\"id\":4,\"attrib_A\":\"A4\"}#{\"id\":5,\"attrib_A\":\"A5\"}",
       json2="{\"id\":2,\"attrib_B\":\"B2\"}#{\"id\":3,\"attrib_B\":\"B3\"}#{\"id\":4,\"attrib_B\":\"B4\"}#{\"id\":6,\"attrib_B\":\"B6\"}"
| makemv delim="#" json1
| makemv delim="#" json2
``` end data prep ```
| eval data=mvappend(json1,json2)
| mvexpand data
| spath input=data path=id output=id
| spath input=data path=attrib_A output=attrib_A
| spath input=data path=attrib_B output=attrib_B
| stats values(attrib_A) as attrib_A values(attrib_B) as attrib_B by id
| table id, attrib_A, attrib_B

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Thanks for getting back to me. Syslog has been tried, but ingesting the raw data has always been unsuccessful. As you suggested, I will try SC4S.
Hi Community,

I have the following challenge. I have different events, and for each event I want to generate a summary with different values. These values are defined in a lookup table. For example:

E1: id=1, dest_ip=1.1.1.1, src_ip=2.2.2.2, ...
E2: id=2, user=bob, domain=microsoft
E3: id=3, country=usa, city=seattle
E4: id=4, company=cisco, product=splunk

Lookup table (potentially more field names):

ID | Field1  | Field2
1  | dest_ip | src_ip
2  | user    | domain
3  | country |
4  | company | product

Expected output:

id1: Summary dest_ip=1.1.1.1 src_ip=2.2.2.2
id2: Summary user=bob domain=microsoft
id3: Summary country=usa
id4: Summary company=cisco product=splunk

The solution could use a case function, but that doesn't scale well because I would need to add a new line for each case. Potentially, the number of cases could grow to 1000. I tried to solve it with foreach, but I am unable to retrieve the values from the event. Here's the query I tried:

index=events
| lookup cases.csv id OUTPUT field1, field2
| foreach field* [ eval summary = summary + "<<field>>" + ":" + <<ITEM>> ]
| table id, summary

Thanks for your help!
Alesyo
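One possible direction, as a sketch only: foreach's <<FIELD>> token expands to the literal names field1/field2, not to the fields they point at, which is why the values never resolve. One way to dereference a field name stored in the lookup result is to serialize the event with tojson and read the target field back with json_extract (both require a reasonably recent Splunk version). cases.csv and field1/field2 are taken from the post above; event_json is a hypothetical working field name, and this has not been tested against real data:

index=events
| lookup cases.csv id OUTPUT field1, field2
| tojson output_field=event_json ``` serialize the whole event so field names can be resolved dynamically ```
| eval summary=""
| foreach field* [
    eval summary=summary.if(isnotnull('<<FIELD>>') AND isnotnull(json_extract(event_json, '<<FIELD>>')),
        '<<FIELD>>'."=".json_extract(event_json, '<<FIELD>>')." ", "")
  ]
| table id, summary

Here '<<FIELD>>' (quoted) yields the value of field1/field2 (e.g. "dest_ip"), and json_extract then pulls the value of that named field out of the serialized event, so new lookup rows need no new SPL.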
Thanks for the suggestion. I have no idea how to create the search; I am very much a novice when it comes to Splunk. Is the search you're suggesting to be applied to the top-level block or to the lower-level dashboard? I'm not sure where I need to add it. For example, if I add the search at the top level, how does it know to go to the underlying dashboard to retrieve the isBad value? Or is the isBad value stored on the lower-level dashboard, with the top level searching for the isBad value on that dashboard?
I checked by using this command but no luck; kindly find my logs:

root@hf2:/opt# ps aux | grep /opt/log/
root 3152 0.0 0.0 9276 2304 pts/2 S+ 13:17 0:00 grep --color=auto /opt/log/
root@hf2:/opt# ls -l /opt/log/
total 204
-rw-r-xr--+ 1 root root 207575 Feb 19 11:12 cisco_ironport_web.log
root@hf2:/opt#

splunkd logs for your reference:

03-04-2025 22:23:55.770 +0530 INFO TailingProcessor [32908 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:29:34.873 +0530 INFO TailingProcessor [33197 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:39:22.449 +0530 INFO TailingProcessor [33712 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-04-2025 22:44:59.341 +0530 INFO TailingProcessor [33979 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-04-2025 22:44:59.341 +0530 INFO TailingProcessor [33979 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-04-2025 22:54:52.366 +0530 INFO TailingProcessor [34246 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-04-2025 22:54:52.366 +0530 INFO TailingProcessor [34246 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-05-2025 12:35:53.768 +0530 INFO TailingProcessor [2117 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/cisco_ironport_web.log.
03-05-2025 12:35:53.768 +0530 INFO TailingProcessor [2117 MainTailingThread] - Adding watch on path: /opt/log/cisco_ironport_web.log.
03-05-2025 13:07:00.440 +0530 INFO TailingProcessor [2920 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-05-2025 13:16:28.483 +0530 INFO TailingProcessor [3132 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
03-05-2025 13:18:26.876 +0530 INFO TailingProcessor [3339 MainTailingThread] - Parsing configuration stanza: monitor:///opt/log/.
root@hf2:/opt#
@livehybrid I saw the file /var/log/bash_history.log being created successfully, but events from this file came from fewer than 10% of the hosts. I did not see any errors related to permissions or an inability to read the file.
@PickleRick Thank you so much, I understand my mistakes. What methods would you recommend for collecting user-entered commands in real time?
@livehybrid The KV store issue was resolved once I installed Java. I'm now stuck on how to assign the newly created index to all Akamai logs.
You can refer to this link: https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/ConfigureSSOOkta#Configure_the_Splunk_platform_to_remove_users_on_Okta
Actually, it looks like cmath should be a system library. Are you adjusting the Python lib path in your code? If so, what is it set to?
Hi @Namdev

How did you get on with looking into the below?

@livehybrid wrote:

Hi @Namdev

Please could you confirm which user the Splunk Forwarder is running as? Is it splunkfwd, splunk or something else?

Please could you show a screenshot of the permissions on your /opt/log files in question. Did you run anything like this against the directory to give splunk access?

setfacl -R -m u:splunkfwd:r-x /opt/log

Are there any logs in splunkd.log relating to these files?

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @sufs2000

I see, sorry, I misread the first question. In that case I think you would need to use a search which determines if either of the lower-level dashboard entities is not "OK". Are you comfortable creating this search? I've used the following for the example shown below:

| makeresults
| eval statusesStr="[{\"hostname\": \"host-23\", \"status\": \"OK\"}, {\"hostname\": \"host-87\", \"status\": \"NotOK\"}, {\"hostname\": \"host-45\", \"status\": \"OK\"}]"
| eval statuses=json_array_to_mv(statusesStr)
| mvexpand statuses
| eval _raw=statuses
| fields _raw
| spath
``` end of data setup ```
| eval isBad=IF(status!="OK",1,0)
| stats sum(isBad) as isBad

Ultimately this uses an IF to determine if a status is bad (setting the value to 1), then sums up the isBad field to get a single value indicating whether there is an issue (>=1):

| eval isBad=IF(status!="OK",1,0)
| stats sum(isBad) as isBad

Once that is done you can apply the same type of logic (see below).

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
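To make the "(see below)" step concrete (the example there appears to have been an image that did not survive here), a minimal Simple XML sketch of how the isBad result could drive a token on the top-level dashboard. The query, index name and token handling are assumptions, not taken from the original thread:

<form>
  <!-- hypothetical base search; replace the query with the isBad search built above -->
  <search id="lower_health">
    <query>index=my_status_index | eval isBad=if(status!="OK",1,0) | stats sum(isBad) as isBad</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
    <done>
      <!-- set a token depending on whether any lower-level entity is not OK -->
      <condition match="'result.isBad' &gt; 0">
        <set token="block_status">NotOK</set>
      </condition>
      <condition>
        <set token="block_status">OK</set>
      </condition>
    </done>
  </search>
</form>

The $block_status$ token can then be referenced by the top-level panel, for example to switch a colour or a drilldown target.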
Yes, I do understand it would require some kind of regex, but my issue is how to write the regex to match the date. Do I need to configure a dat.xml file to read the current date? The files look like this:

server.log.20250303.1
server.log.20250303.10
server.log.20250303.11
server.log.20250303.12
server.log.20250303.13
server.log.20250303.14
server.log.20250303.15
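For what it's worth, a sketch of how a monitor stanza could match those rotated files with a whitelist regex. The path /opt/logs is a placeholder, and \d{8} matches any eight-digit date stamp rather than "the current date" specifically:

[monitor:///opt/logs]
whitelist = server\.log\.\d{8}\.\d+$

If only recent files should be picked up, that is usually handled with ignoreOlderThan on the stanza rather than by encoding today's date into the regex, since the tailing processor already skips files whose content has not changed.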
Dear Splunk community,

I have the following sample input data, containing JSON snippets in MV fields:

| makeresults count=5
| streamstats count as a
| eval _time = _time + (60*a)
| eval json1="{\"id\":1,\"attrib_A\":\"A1\"}#{\"id\":2,\"attrib_A\":\"A2\"}#{\"id\":3,\"attrib_A\":\"A3\"}#{\"id\":4,\"attrib_A\":\"A4\"}#{\"id\":5,\"attrib_A\":\"A5\"}",
       json2="{\"id\":2,\"attrib_B\":\"B2\"}#{\"id\":3,\"attrib_B\":\"B3\"}#{\"id\":4,\"attrib_B\":\"B4\"}#{\"id\":6,\"attrib_B\":\"B6\"}"
| makemv delim="#" json1
| makemv delim="#" json2
| table _time, json1, json2

The lists of ids in json1 and json2 may be disjoint, identical or overlapping. For example, in the above data, id=1 and id=5 only exist in json1, id=6 only exists in json2, and the other ids exist in both. Attributes can be null values, but may then be treated as if the id didn't exist.

For each event, I would like to merge the data from json1 and json2 into a single table with columns id, attrib_A and attrib_B. The expected output for the sample data would be:

_time | id  | attrib_A | attrib_B
t     | 1   | A1       | null
t     | 2   | A2       | B2
t     | 3   | A3       | B3
t     | 4   | A4       | B4
t     | 5   | A5       | null
t     | 6   | null     | B6
...   | ... | ...      | ...
t+5   | 1   | A1       | null
t+5   | 2   | A2       | B2
t+5   | 3   | A3       | B3
t+5   | 4   | A4       | B4
t+5   | 5   | A5       | null
t+5   | 6   | null     | B6

How can I achieve this in a straightforward way? The following works for the sample data, but it seems overly complicated and I am not sure it works in all cases:

```insert after above sample data generation:```
```extract and expand JSONs```
| mvexpand json2
| spath input=json2
| rename id as json2_id
| mvexpand json1
| spath input=json1
| rename id as json1_id
| table _time, json1_id, attrib_A, json2_id, attrib_B
```create mv fields containing the subsets of IDs from json1 and json2```
| eventstats values(json1_id) as json1, values(json2_id) as json2 by _time
| eval only_json1=mvmap(json1, if(isnull(mvfind(json2, json1)), json1, null()))
| eval only_json2=mvmap(json2, if(isnull(mvfind(json1, json2)), json2, null()))
| eval both=mvmap(json1, if(isnotnull(mvfind(json2, json1)), json1, null()))
| table _time, json1_id, attrib_A, json2_id, attrib_B, json1, json2, only_json1, only_json2, both
```keep json2 record if a) json2_id equals json1_id or b) json2_id does not appear in json1```
| eval attrib_B=if(json2_id==json1_id or isnull(mvfind(json1, json2_id)), attrib_B, null())
| eval json2_id=if(json2_id==json1_id or isnull(mvfind(json1, json2_id)), json2_id, null())
```keep json1 record if a) json1_id equals json2_id or b) json1_id does not appear in json2```
| eval attrib_A=if(json1_id==json2_id or isnull(mvfind(json2, json1_id)), attrib_A, null())
| eval json1_id=if(json1_id==json2_id or isnull(mvfind(json2, json1_id)), json1_id, null())
```remove records where json1 and json2 are both null```
| where isnotnull(json1_id) or isnotnull(json2_id)
| table _time, json1_id, attrib_A, json2_id, attrib_B
| dedup _time, json1_id, attrib_A

Thank you!
I did it in the HF UI by configuring the data input, but nowhere was I asked about an index. Where do I configure the index now? I have already created the new index on the CM and pushed it to the indexers. How do I map these logs to the new index?
Also, were you able to fix your KVStore issue, or do you still need help with this? Please refer to my previous response re checking the mongo / splunkd.log logs to look into this issue too. Thanks Will
Hi @Karthikeya

How have you configured the data collection? Have you done this in the UI on the HF, or did you deploy the inputs.conf from your Deployment Server? If you are pushing an inputs.conf, then you can specify index=<yourIndex> in the stanza for your input.

Feel free to share some examples of your configuration so we can give a more relevant response!

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
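As an illustration only — the monitor path, index name and sourcetype below are hypothetical placeholders, not taken from this thread:

[monitor:///opt/akamai/logs]
index = akamai_logs
sourcetype = akamai:log

Whatever the input type (monitor, HEC, modular input), the index = <yourIndex> line goes in that input's stanza in inputs.conf, and the index itself must already exist on the indexers.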
Just one line:

blacklist1 = EventCode="46[23]4" Message="Logon Type:\s+3"
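For context, a sketch of where that line would sit — the Security channel stanza is an assumption; adjust it to whichever event log input you actually use:

[WinEventLog://Security]
disabled = 0
blacklist1 = EventCode="46[23]4" Message="Logon Type:\s+3"

The EventCode regex 46[23]4 matches both 4624 and 4634, so logon and logoff events with Logon Type 3 are filtered by the single setting.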
I am using a Splunk trial license. I have checked permissions, and it is not a permission issue.
@gcusello Yes, Python was upgraded to 3.9 while upgrading to Splunk 9.3.1, and it was throwing an error asking to upgrade numpy, so I upgraded numpy to 1.26.0 to make it compatible with the Python version.