All Posts

@PickleRick Thank you so much, I understand my mistakes. What methods would you recommend for collecting user-entered commands in real time?
@livehybrid The KV store issue was resolved once I installed Java. Now I'm stuck on how to assign the newly created index to all Akamai logs?
You can refer to this link: https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/ConfigureSSOOkta#Configure_the_Splunk_platform_to_remove_users_on_Okta
Actually, it looks like cmath is part of the Python standard library (a built-in module), so it shouldn't need a separate install - are you adjusting the Python lib path in your code? If so, what is it set to?
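For a quick check (the /opt/splunk path below is an assumption - adjust to your install), cmath should import without any pip install:

/opt/splunk/bin/splunk cmd python3 -c "import cmath; print(cmath.sqrt(-1))"

If that prints 1j, the module is present and the error lies elsewhere.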
Hi @Namdev How did you get on with looking into the below?

@livehybrid wrote: Hi @Namdev Please could you confirm which user the Splunk Forwarder is running as? Is it splunkfwd, splunk or something else? Please could you show a screenshot of the permissions on your /opt/log files in question. Did you run anything like this against the directory to give splunk access?

setfacl -R -m u:splunkfwd:r-x /opt/log

Are there any logs in splunkd.log relating to these files?

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
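For reference, a few shell checks along these lines might help answer those questions (a minimal sketch, assuming the directory in question is /opt/log):

# which user is splunkd running as?
ps -ef | grep [s]plunkd
# current ownership, permissions and ACLs on the monitored directory
ls -ld /opt/log
getfacl /opt/log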
Hi @sufs2000 I see, sorry I misread the first question. In that case I think you would need to use a search which will determine if either of the lower-level dashboard entities are not "OK" - are you comfortable creating this search? I've used the following for the example shown below:

| makeresults
| eval statusesStr="[{\"hostname\": \"host-23\", \"status\": \"OK\"}, {\"hostname\": \"host-87\", \"status\": \"NotOK\"}, {\"hostname\": \"host-45\", \"status\": \"OK\"}]"
| eval statuses=json_array_to_mv(statusesStr)
| mvexpand statuses
| eval _raw=statuses
| fields _raw
| spath
``` end of data setup ```
| eval isBad=IF(status!="OK",1,0)
| stats sum(isBad) as isBad

Ultimately this uses an IF to determine if each entity is bad (and sets the value to 1), then sums up the isBad field to get a single value indicating whether there is an issue (>=1):

| eval isBad=IF(status!="OK",1,0)
| stats sum(isBad) as isBad

Once that is done you can apply the same type of logic to the higher-level dashboard (see below). Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
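For example, the final mapping back to a single status value might look like this (a minimal sketch - the field name topStatus is just illustrative):

| eval topStatus=if(isBad>=1, "NotOK", "OK")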
Yes, I do understand it would require some kind of regex, but my issue is how do I write the regex to match the date - do I need to configure a dat.xml file to read the current date?

server.log.20250303.1
server.log.20250303.10
server.log.20250303.11
server.log.20250303.12
server.log.20250303.13
server.log.20250303.14
server.log.20250303.15
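For reference, a minimal inputs.conf sketch matching this filename pattern might be (the monitor path is a placeholder; whitelist is matched against the full path, and Splunk picks up new files matching the pattern as they appear, so a date-specific regex or dat.xml should not be needed):

[monitor:///path/to/logs]
whitelist = server\.log\.\d{8}\.\d+$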
Dear Splunk community, I have the following sample input data, containing JSON snippets in MV fields:

| makeresults count=5
| streamstats count as a
| eval _time = _time + (60*a)
| eval json1="{\"id\":1,\"attrib_A\":\"A1\"}#{\"id\":2,\"attrib_A\":\"A2\"}#{\"id\":3,\"attrib_A\":\"A3\"}#{\"id\":4,\"attrib_A\":\"A4\"}#{\"id\":5,\"attrib_A\":\"A5\"}", json2="{\"id\":2,\"attrib_B\":\"B2\"}#{\"id\":3,\"attrib_B\":\"B3\"}#{\"id\":4,\"attrib_B\":\"B4\"}#{\"id\":6,\"attrib_B\":\"B6\"}"
| makemv delim="#" json1
| makemv delim="#" json2
| table _time, json1, json2

The lists of ids in json1 and json2 may be disjoint, identical or overlapping. For example, in the above data, id=1 and id=5 only exist in json1, id=6 only exists in json2, and the other ids exist in both. Attributes can be null values, but may then be treated as if the id didn't exist. For each event, I would like to merge the data from json1 and json2 into a single table with columns id, attrib_A and attrib_B. The expected output for the sample data would be:

_time  id   attrib_A  attrib_B
t      1    A1        null
t      2    A2        B2
t      3    A3        B3
t      4    A4        B4
t      5    A5        null
t      6    null      B6
...    ...  ...       ...
t+5    1    A1        null
t+5    2    A2        B2
t+5    3    A3        B3
t+5    4    A4        B4
t+5    5    A5        null
t+5    6    null      B6

How can I achieve this in a straightforward way? The following works for the sample data, but it seems overly complicated and I am not sure if it works in all cases:

```insert after above sample data generation:```
```extract and expand JSONs```
| mvexpand json2
| spath input=json2
| rename id as json2_id
| mvexpand json1
| spath input=json1
| rename id as json1_id
| table _time, json1_id, attrib_A, json2_id, attrib_B
```create mv fields containing the subsets of IDs from json1 and json2```
| eventstats values(json1_id) as json1, values(json2_id) as json2 by _time
| eval only_json1=mvmap(json1, if(isnull(mvfind(json2, json1)), json1, null()))
| eval only_json2=mvmap(json2, if(isnull(mvfind(json1, json2)), json2, null()))
| eval both=mvmap(json1, if(isnotnull(mvfind(json2, json1)), json1, null()))
| table _time, json1_id, attrib_A, json2_id, attrib_B, json1, json2, only_json1, only_json2, both
```keep json2 record if a) json2_id equals json1_id or b) json2_id does not appear in json1```
| eval attrib_B=if(json2_id==json1_id or isnull(mvfind(json1, json2_id)), attrib_B, null())
| eval json2_id=if(json2_id==json1_id or isnull(mvfind(json1, json2_id)), json2_id, null())
```keep json1 record if a) json1_id equals json2_id or b) json1_id does not appear in json2```
| eval attrib_A=if(json1_id==json2_id or isnull(mvfind(json2, json1_id)), attrib_A, null())
| eval json1_id=if(json1_id==json2_id or isnull(mvfind(json2, json1_id)), json1_id, null())
```remove records where json1 and json2 are both null```
| where isnotnull(json1_id) or isnotnull(json2_id)
| table _time, json1_id, attrib_A, json2_id, attrib_B
| dedup _time, json1_id, attrib_A

Thank you!
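For comparison, a much shorter sketch (untested beyond the sample data above) that combines both MV fields and aggregates by id:

```insert after above sample data generation:```
| eval json=mvappend(json1, json2)
| fields _time, json
| mvexpand json
| spath input=json
| stats values(attrib_A) as attrib_A, values(attrib_B) as attrib_B by _time, id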
I did it in the HF UI by configuring the data input, but it never asked about an index - where do I configure the index now? I have already created the new index on the CM and pushed it to the indexers. How do I map these logs to the new index?
Also, were you able to fix your KV store issue or do you still need help with this? Please refer to the previous response re checking the mongod.log / splunkd.log files to look into this issue too. Thanks Will
Hi @Karthikeya How have you configured the data collection? Have you done this in the UI on the HF, or did you deploy the inputs.conf from your Deployment Server? If you are pushing an inputs.conf then you can specify index=<yourIndex> in the stanza for your input in your inputs.conf, as sketched below. Feel free to share some examples of your configuration so we can create a more relevant response! Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
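For example, a hypothetical monitor stanza (the path, index and sourcetype are placeholders - adjust them to your actual Akamai input type):

[monitor:///opt/akamai/logs]
index = akamai_index
sourcetype = akamai:logs
disabled = 0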
Just one line: blacklist1 = EventCode="46[23]4" Message="Logon Type:\s+3"
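For context, that line goes inside the relevant Windows event log stanza in inputs.conf, e.g. a minimal sketch:

[WinEventLog://Security]
disabled = 0
# drop 4624/4634 events where the logon type is 3 (network logon)
blacklist1 = EventCode="46[23]4" Message="Logon Type:\s+3"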
I am using a Splunk trial license. I have checked permissions and it is not a permission issue.
@gcusello Yes, Python was upgraded to 3.9 while upgrading to Splunk 9.3.1, and it was throwing an error asking to upgrade numpy, so I upgraded numpy to 1.26.0 to make it compatible with the Python version.
@livehybrid Yes, this is an internally developed app. I tried installing cmath:

sudo -H ./splunk cmd python3 -m pip install cmath -t /opt/splunk/etc/apps/stormwatch/bin/site-packages

But I get this error:

WARNING: The directory '/root/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
ERROR: Could not find a version that satisfies the requirement cmath (from versions: none)
ERROR: No matching distribution found for cmath
Hi @Keith_NZ I don't have an Ingest Processor instance available at the moment to test, but would a custom function work for you here? Something like this?

function my_rex($source, $field, $rexStr: string="(?<all>.*)") {
  return | rex field=$field $rexStr
}

FROM main | my_rex host "(?<hostname>.*)\.mydomain\.com"

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
No logs on the search head.
Hi, I'm expecting that you have a Splunk trial, not a free license? The free license doesn't contain most of those features which you are trying to use! The easiest way to check why those files are not accessible is just to sudo/su to your Splunk UF user and check if it can access them or not. If not, then add permissions as @livehybrid already said. If it can access them, then start to debug with the logs and e.g. with splunk list inputstatus etc. You can find quite many posts here where this issue has already been discussed and solved. r. Ismo
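For instance (the UF user, install path and file name below are assumptions - substitute your own):

# read test as the forwarder user
sudo -u splunkfwd head -1 /opt/log/yourfile.log
# ask the UF for the status of its monitored files
/opt/splunkforwarder/bin/splunk list inputstatus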
This depends on those apps. You must first check which app versions work with which Splunk versions. It's quite probable that you need to update them step by step as well, since it's quite possible that the same version doesn't work on both 7.x and 9.3. It's also possible that some apps don't work anymore in 9.3, and some may need OS-level updates like OS version, Java or Python updates etc. Depending on your data and integrations, you should even consider and plan whether it's possible to set up a totally new node with a fresh install and the newest apps. That could be a much easier way to do the version update? Of course, it probably requires that you leave the old node up and running until its data has expired. You must also transfer the license to the new server and add the old one as a license client of it.
As already said, please define what you mean by the word integrate! Here is one .conf presentation about Splunk and Power BI: https://conf.splunk.com/files/2022/slides/PLA1122B.pdf