All Posts


Have you checked that this file exists and that your splunk user has read access to it?
Hi Dear Community, I am encountering the following error across all servers: SSLCommon - Can't read key file from /opt/splunk/etc/auth/CERT.pem
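A quick way to verify the earlier suggestion from the command line (a minimal sketch; it assumes the Splunk service account is named splunk and uses the exact path from the error message):

# Does the file exist, and who can read it?
ls -l /opt/splunk/etc/auth/CERT.pem

# Can the Splunk service account actually open it?
sudo -u splunk head -c 1 /opt/splunk/etc/auth/CERT.pem && echo "readable"

If the second command fails, fix ownership or permissions on the file (and its parent directories) for the account that runs splunkd.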
Hi, if you want to collect introspection data from a UF, there are instructions for it: https://docs.splunk.com/Documentation/Splunk/9.4.0/Troubleshooting/ConfigurePIF and https://community.splunk.com/t5/Getting-Data-In/How-to-collect-introspection-logs-from-forwarders-in-a/m-p/129601 Depending on what you really need, it may be better to use e.g. the Windows or Unix/Linux TA to collect that information. And remember that you cannot collect all of the same data from UFs that Splunk collects from a full Enterprise instance. There is at least one app for this task: https://splunkbase.splunk.com/app/3805 r. Ismo
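If you follow the approach from the community post above, a minimal sketch of an inputs.conf stanza deployed to the UF could look like this (the monitor path and index are assumptions based on the default introspection log location; verify them against the linked instructions):

# inputs.conf on the UF, deployed via an add-on
[monitor://$SPLUNK_HOME/var/log/introspection]
index = _introspection
disabled = false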
Hi, here is an excellent presentation given at the Helsinki UG: https://data-findings.com/wp-content/uploads/2024/09/HSUG-20240903-Tiia-Ojares.pdf It shows how to debug your tokens etc. Probably you should use $form.token$ style references instead of just $token$? r. Ismo
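For illustration, here is a minimal Simple XML sketch of that difference (the token name group_field, the input choices, and the search are made up for the example; only the $form.group_field$ reference is the point):

<form version="1.1">
  <label>Token debugging sketch</label>
  <fieldset submitButton="true">
    <input type="dropdown" token="group_field">
      <label>Group by</label>
      <choice value="host">host</choice>
      <choice value="sourcetype">sourcetype</choice>
      <default>host</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <!-- $form.group_field$ holds the value submitted by the form input -->
          <query>index=_internal | stats count by $form.group_field$</query>
          <earliest>-60m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>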
Here is an excellent .conf presentation on how to find the reason for this lag: https://conf.splunk.com/files/2019/slides/FN1570.pdf
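As a starting point before working through the slides, a simple sketch for measuring indexing lag per host and sourcetype (substitute your own index and time range):

index=your_index earliest=-60m
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) perc95(lag_seconds) max(lag_seconds) by host sourcetype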
Just guessing, but this sounds like an issue with your authentication setup. At least in earlier versions Splunk has used the user nobody as a local user which doesn't exist, or at least doesn't have any roles. There is at least one old post which explains the user nobody: https://community.splunk.com/t5/All-Apps-and-Add-ons/Disambiguation-of-the-meaning-of-quot-nobody-quot-as-an-owner-of/m-p/400573 Here is another post which explains how to find those scheduled searches: https://community.splunk.com/t5/Splunk-Search/How-to-identify-a-skipped-scheduled-accelerated-report/m-p/700427 Were there any issues with your upgrade? If I understand correctly, you have updated from 9.1.x to 9.2.4? On which platform, and is this a distributed environment? What is behind your LDAP authentication and authorization directory? Do you know whether a user nobody is or has been defined? r. Ismo
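To follow up on the second link, a minimal sketch for finding skipped scheduled searches in the scheduler logs (standard fields in _internal; adjust the time range as needed):

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, app, reason
| sort - count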
Here is an interesting .conf presentation which may help you understand the difference between metrics and event indexes: https://conf.splunk.com/files/2022/recordings/OBS1157B_1080.mp4 I think it's still quite accurate, but if I recall right, there have been some changes in how ingestion volume is calculated for metrics? Basically this should be better for end users. Here is also basic information about metrics: https://docs.splunk.com/Documentation/Splunk/latest/Metrics/GetStarted r. Ismo
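For a quick feel of the query-side difference, here is a sketch comparing an mstats search against a metrics index with a roughly equivalent search against an event index (the index names, metric name, and field name are made up for the example):

Metrics index:
| mstats avg(cpu.user_pct) WHERE index=my_metrics span=5m BY host

Event index:
index=my_events sourcetype=my:cpu
| timechart span=5m avg(cpu_user_pct) by host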
Can you paste your inputs.conf here inside the editor's code block (</>)? When you say "scheduled runs are sometimes missed, or scripts execute at unexpected times", is the only fix for this to restart the splunkd service on the UF? Are those UFs in a domain, or are they individual nodes which manage time sync with ntpd instead of the Windows domain service?
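For reference when you paste yours, a typical scripted-input stanza looks something like this (the script path, interval, index, and sourcetype are hypothetical; interval accepts either seconds or a cron expression):

# inputs.conf on the UF
[script://$SPLUNK_HOME/etc/apps/my_app/bin/collect_stats.bat]
interval = */5 * * * *
index = my_index
sourcetype = my:script:output
disabled = false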
Others have already given you some hints to use and check for this issue. If you have a lot of logs (and you probably do), then one option is to use SC4S. There is more about it e.g. here:
- https://splunkbase.splunk.com/app/4740
- https://lantern.splunk.com/Data_Descriptors/Syslog/Installing_Splunk_Connect_For_Syslog_(SC4S)_on_a_Windows_network
- https://www.splunk.com/en_us/blog/tips-and-tricks/splunk-connect-for-syslog-turnkey-and-scalable-syslog-gdi.html?locale=en_us (several parts)
If I recall right, there is also a .conf presentation (2019-21 or so) and some UG presentations too.
- https://conf.splunk.com/files/2020/slides/PLA1454C.pdf
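If you try SC4S, its core configuration is a small environment file pointing at your HEC endpoint. A minimal sketch follows; the URL and token are placeholders, and the variable names should be verified against the SC4S documentation:

# /opt/sc4s/env_file
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your-splunk-host:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no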
Hi, here is an old discussion about getting Zabbix audit logs into Splunk: https://community.splunk.com/t5/Getting-Data-In/How-do-I-integrate-Zabbix-with-Splunk/m-p/432733 Maybe this helps you? r. Ismo
This has changed in 9.2, see https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers If you have a distributed environment where the DS is not your only indexer, you must follow the above instructions. Have you looked in the internal logs (_internal and those _ds*) to see if there are any hints why those are not shown on the DS's screens?
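A sketch of a starting point for that log check (the component names are the usual deployment-related ones in splunkd.log; adjust to your environment):

index=_internal sourcetype=splunkd (component=DeploymentClient OR component=HttpPubSubConnection OR component=DeployedApplication) (log_level=WARN OR log_level=ERROR)
| stats count by host, component, log_level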
Is this now fixed? (I am not sure what @PickleRick's test commands (head 1000 etc.) are doing here though!)
Hi @yin_guan,
first of all, you don't need to locally index anything on the DS, so you can have:

[indexAndForward]
index = false

Then, did you check whether the firewall route between the UF and the DS is open for the management port 8089 used by the DS? You can check it from the UF using telnet:

telnet 192.168.90.237 8089

Then, on the UF, I suppose that you configured outputs.conf in $SPLUNK_HOME/etc/system/local, is that true? It's a best practice not to configure outputs.conf in $SPLUNK_HOME/etc/system/local, but in a dedicated add-on deployed using the DS.

Finally, two or three minutes are required for the connection to the DS.

Ciao.
Giuseppe
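For completeness, a sketch of the matching deploymentclient.conf on the UF side, ideally also shipped in a dedicated add-on (the DS address is taken from the telnet example above; replace it with your own):

# deploymentclient.conf on the UF
[deployment-client]

[target-broker:deploymentServer]
targetUri = 192.168.90.237:8089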
Is there any way we can export the logs from Zabbix to Splunk via a script or by setting up an HEC collector data input? I'm trying to display all the logs from my Zabbix server in Splunk. Please help me, I am stuck on this.
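If you go the HEC route, a script on the Zabbix side only needs to POST events to the HEC endpoint. A minimal sketch (the host, token, index, and sourcetype are placeholders you would replace with your own):

# send one test event to a Splunk HEC endpoint
curl -k "https://your-splunk-host:8088/services/collector/event" \
  -H "Authorization: Splunk YOUR-HEC-TOKEN" \
  -d '{"event": "zabbix test event", "sourcetype": "zabbix:log", "index": "zabbix"}'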
Hello everyone, I encountered an issue, as shown in the image. I can see two machines in the MC's forwarder dashboard, but I don't see any machines in my forwarder management. I have added the following configuration to the DS, but it still doesn't work after restarting:

[indexAndForward]
index = true
selectiveIndexing = true

The deployment server and UF are both version 9.3. What aspects should I check?
I want to add an endpoint to the webhook allow list. I checked the documentation for that. However, I cannot find "Webhook allow list" under Settings > Server settings. I'm using a trial instance with version 9.2.2406.109. 
Use the rex statement, for example this regex (this is an example you can run in a search window):

| makeresults
| eval field = " service error rate 50x 8.976851851851853"
| rex field=field "service error rate\s+\w+\s+(?<value>\d+\.\d)"

Change your regex statement to match what you expect in the data.
Without knowing your data, I would suggest you start with

index=edwapp sourcetype=ygttest is_cont_sens_acct="是"
| bin _time span=7d
| stats dc(_time) as days_present earliest(_time) as earliest_time latest(_time) as latest_time count as "fwcishu" by _time day oprt_user_name blng_dept_name oprt_user_acct
| eventstats min(*_time) as all_*_time

which will give you a breakdown of all the data you need, with the earliest and latest for each grouping. It will count the number of days per 7-day period (days_present) and group by week and all your other grouping parameters. You can calculate the overall earliest/latest dates with eventstats. Then I would expect you can manipulate your data from that result set to get what you want. Using map is not the right solution. If you share some of the data and mock up an example of what you're trying to end up with, it would help.
" service error rate 50x 8.976851851851853" field = " service error rate 50x 8.976851851851853" need to extract 8.9 value from above string.
Thank you for your reply, but the statement you provided cannot achieve the result of looping over each start time and end time.