All Posts

Sorry, I didn’t read the error message correctly. It says that Splunk cannot read the key file from your PEM file. Are you sure that the file contains all the needed parts?
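For example, here is a minimal sketch of a sanity check on which blocks a combined PEM file contains. This is only marker matching on the text, not Splunk's own validation logic, and the sample content is a stand-in for your real /opt/splunk/etc/auth/CERT.pem:

```python
def pem_parts(pem_text: str) -> dict:
    """Report which PEM blocks are present in a combined .pem file.
    splunkd needs both the certificate and its private key in the same
    file (plus the key password, if the key is encrypted)."""
    markers = {
        "certificate": "-----BEGIN CERTIFICATE-----",
        # matches RSA / EC / PKCS#8 / ENCRYPTED variants of the key header
        "private_key": "PRIVATE KEY-----",
    }
    return {name: marker in pem_text for name, marker in markers.items()}

# Dummy content standing in for the real CERT.pem file:
sample = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
print(pem_parts(sample))  # certificate present, private_key missing
```

If the private-key block is missing, concatenating the key PEM onto the certificate PEM (key first or cert first both work for splunkd) is usually the fix.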
This script seems to be a scripted input for a MySQL backend. It’s probably enough to just define it in the UF’s inputs.conf. I haven’t tried it, though, so I can’t say more.
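For illustration only, a scripted input stanza on the UF might look like this (the app name, script name, interval, sourcetype, and index are all hypothetical placeholders, not taken from the original script):

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/inputs.conf -- illustrative sketch
[script://./bin/mysql_status.sh]
interval = 300
sourcetype = mysql:script
index = main
disabled = 0
```

The script itself would go under the same app's bin directory so the relative `script://./bin/...` path resolves.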
Hi! I was wondering if anybody had any css/xml code that could be used to hide the "Populating..." text under this drilldown in my dashboard?   
I have a DBConnect query that runs to populate a panel of a dashboard every week. We upgraded both the database which houses the data AND Splunk a couple of weeks ago. The new database is Postgres 14 and Splunk is now at 9.2.3. I have run this query directly on the Postgres box, so it appears that Postgres doesn't suddenly have an issue with it. Other panels/queries in this dashboard use the same DBConnect connection, so the path, structure, and data all appear to be good. The issue seems to lie with the "math" in the time range, but I cannot put my finger on why. Basically, we are trying to pull a trend of data going back 8 weeks, starting from last week.

query="SELECT datekey, policydisposition, count(guid) as events
FROM event_cdr
WHERE datekey >= CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) as int) - (7*8)
AND datekey < CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) as int)
AND direction_flag = 1
AND policydisposition = 1
GROUP BY datekey, policydisposition
ORDER BY datekey, policydisposition"

When I try to execute this query, I consistently get the following error:

"Error in 'dbxquery' command: External search command exited unexpectedly. The search job has failed due to an error. You may be able view the job in the Job Inspector"

Some of the search.log file is here:

12-30-2024 16:00:27.142 INFO PreviewExecutor [3835565 StatusEnforcerThread] - Preview Enforcing initialization done
12-30-2024 16:00:28.144 INFO ReducePhaseExecutor [3835565 StatusEnforcerThread] - ReducePhaseExecutor=1 action=PREVIEW
12-30-2024 16:02:27.196 ERROR ChunkedExternProcessor [3835572 phase_1] - EOF while attempting to read transport header read_size=0
12-30-2024 16:02:27.197 ERROR ChunkedExternProcessor [3835572 phase_1] - Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 WARN ReducePhaseExecutor [3835572 phase_1] - Not downloading remote search.log and telemetry files. Reason: No remote_event_providers.csv file.
12-30-2024 16:02:27.197 INFO ReducePhaseExecutor [3835572 phase_1] - Ending phase_1
12-30-2024 16:02:27.197 INFO UserManager [3835572 phase_1] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.197 ERROR SearchOrchestrator [3835544 searchOrchestrator] - Phase_1 failed due to : Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 INFO ReducePhaseExecutor [3835565 StatusEnforcerThread] - ReducePhaseExecutor=1 action=QUIT
12-30-2024 16:02:27.197 INFO DispatchExecutor [3835565 StatusEnforcerThread] - Search applied action=QUIT while status=GROUND
12-30-2024 16:02:27.197 INFO SearchStatusEnforcer [3835565 StatusEnforcerThread] - sid=1735574426.75524, newState=FAILED, message=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 ERROR SearchStatusEnforcer [3835565 StatusEnforcerThread] - SearchMessage orig_component=SearchStatusEnforcer sid=1735574426.75524 message_key= message=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 INFO SearchStatusEnforcer [3835565 StatusEnforcerThread] - State changed to FAILED: Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.201 INFO UserManager [3835565 StatusEnforcerThread] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.202 INFO DispatchManager [3835544 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1735574426.75524', username='User338')
12-30-2024 16:02:27.202 INFO UserManager [3835544 searchOrchestrator] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.202 INFO SearchOrchestrator [3835541 RunDispatch] - SearchOrchestrator is destructed.
sid=1735574426.75524, eval_only=0
12-30-2024 16:02:27.203 INFO SearchStatusEnforcer [3835541 RunDispatch] - SearchStatusEnforcer is already terminated
12-30-2024 16:02:27.203 INFO UserManager [3835541 RunDispatch] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.203 INFO LookupDataProvider [3835541 RunDispatch] - Clearing out lookup shared provider map
12-30-2024 16:02:27.206 ERROR dispatchRunner [600422 MainThread] - RunDispatch has failed: sid=1735574426.75524, exit=-1, error=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.213 INFO UserManagerPro [600422 MainThread] - Load authentication: forcing roles="db_connect_admin, db_connect_user, slc_user, user"
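As a side note, the date math in the WHERE clause can be sanity-checked outside SQL. Here is a minimal Python sketch of the window it is trying to compute, assuming `datekey` is a date and Postgres's DOW convention of Sunday = 0 (the function name and the two-argument shape are my own, not from the query):

```python
from datetime import date, timedelta

def week_window(today: date, weeks_back: int = 8):
    """Mirror the query's math: CURRENT_DATE - EXTRACT(DOW ...) is the most
    recent Sunday, and subtracting 7*weeks_back days gives the window start."""
    dow = (today.weekday() + 1) % 7      # Python Monday=0 -> Postgres Sunday=0
    last_sunday = today - timedelta(days=dow)
    start = last_sunday - timedelta(days=7 * weeks_back)
    return start, last_sunday            # half-open range [start, end)

start, end = week_window(date(2024, 12, 30))  # a Monday -> (Nov 3, Dec 29 2024)
```

If both boundaries come out where you expect, the remaining suspects are type coercion (date minus integer) inside dbxquery rather than the arithmetic itself.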
Actually, I can see a GitHub link there, but there is no proper documentation on how to install it and get it working. Or did you mean the add-on plugin?
Splunkers, I'm trying to detect when a user fails more than 5 times within a one-hour window, over the last 24 hours. I have the SPL below, but I would like an opinion from the community: is there another way to express the same logic in SPL?

SPL used:

index=VPN_Something
| bin _time span=24h
| stats list(status) as Attempts, count(eval(match(status,"failure"))) as Failed, count(eval(match(status,"success"))) as Success by _time user
| eval "Time Range" = strftime(_time,"%Y-%m-%d %H:%M")
| eval "Time Range" = 'Time Range'.strftime(_time+3600,"- %H:%M")
| where Failed > 5
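One thing worth noting: `bin _time span=24h` buckets by day, not by hour, so the "per one hour" requirement is not actually enforced by the search above. An untested sketch of an alternative, assuming the same index and `status` field, would bin on one-hour windows directly:

```spl
index=VPN_Something earliest=-24h
| bin _time span=1h
| stats count(eval(match(status,"failure"))) as Failed by _time user
| where Failed > 5
```

For failures within any sliding one-hour window (rather than fixed clock-hour buckets), `streamstats` with `time_window=1h` is the usual approach.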
Yes, I have verified it, and everything is correct.
Have you checked that this file exists and that your splunk user has read access to it?
Hi Dear Community, I am encountering the following error across all servers: SSLCommon - Can't read key file from /opt/splunk/etc/auth/CERT.pem
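For context, that PEM path is typically referenced by the serverCert setting in server.conf's [sslConfig] stanza, which is where to look when deciding whether the path or the file content is wrong. A minimal illustrative fragment (values are examples, not from the affected servers):

```
# $SPLUNK_HOME/etc/system/local/server.conf -- illustrative
[sslConfig]
serverCert = /opt/splunk/etc/auth/CERT.pem
# sslPassword is required only if the private key inside the PEM is encrypted
sslPassword = <your_key_password>
```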
Hi. If you want to collect introspection data from a UF, there are instructions for it: https://docs.splunk.com/Documentation/Splunk/9.4.0/Troubleshooting/ConfigurePIF https://community.splunk.com/t5/Getting-Data-In/How-to-collect-introspection-logs-from-forwarders-in-a/m-p/129601 Depending on what you really need, it may be better to use e.g. the Windows or Unix/Linux TA to collect that information. And remember that you cannot collect all the same data from UFs as Splunk collects from a full Enterprise instance. There is at least one app for this task: https://splunkbase.splunk.com/app/3805 r. Ismo
Hi, here is an excellent presentation given at the Helsinki UG: https://data-findings.com/wp-content/uploads/2024/09/HSUG-20240903-Tiia-Ojares.pdf It shows how to debug your tokens, among other things. You should probably use $form.<token>$ tokens instead of just plain $<token>$ tokens. r. Ismo
Here is an excellent .conf presentation on how to find the reason for this lag: https://conf.splunk.com/files/2019/slides/FN1570.pdf
Just guessing, but this sounds like an issue with your authentication setup. At least in earlier versions, Splunk used nobody as a local user, which either does not exist or at least has no roles. There is at least one old post which explains the user nobody: https://community.splunk.com/t5/All-Apps-and-Add-ons/Disambiguation-of-the-meaning-of-quot-nobody-quot-as-an-owner-of/m-p/400573 Here is another post which explains how to find those scheduled searches: https://community.splunk.com/t5/Splunk-Search/How-to-identify-a-skipped-scheduled-accelerated-report/m-p/700427 Were there any issues with your upgrade? If I understand correctly, you updated from 9.1.x to 9.2.4? On which platform, and is this a distributed environment? What is behind your LDAP authentication and authorization directory? Do you know whether a user nobody is, or has been, defined? r. Ismo
Here is an interesting .conf presentation which may help you understand the difference between metrics and event indexes: https://conf.splunk.com/files/2022/recordings/OBS1157B_1080.mp4 I think it's still quite accurate, although if I recall right there have been some changes in how ingestion volume is calculated for metrics; basically this should be better for end users. Here is also basic information about metrics: https://docs.splunk.com/Documentation/Splunk/latest/Metrics/GetStarted r. Ismo
Can you paste your inputs.conf here inside the editor's code block (</>)? When you say "scheduled runs are sometimes missed, or scripts execute at unexpected times", is the only fix for this to restart the splunkd service on the UF? Are those UFs in a domain, or are they individual nodes that manage time sync with ntpd instead of the Windows domain service?
Others have already given you some hints to use and check for this issue. If you have a lot of logs (and you probably do), then one option is to use SC4S. There is more about it, e.g.: - https://splunkbase.splunk.com/app/4740 - https://lantern.splunk.com/Data_Descriptors/Syslog/Installing_Splunk_Connect_For_Syslog_(SC4S)_on_a_Windows_network - https://www.splunk.com/en_us/blog/tips-and-tricks/splunk-connect-for-syslog-turnkey-and-scalable-syslog-gdi.html?locale=en_us (several parts) If I recall right, there is also a .conf presentation (2019-21 or so) and some UG presentations too: - https://conf.splunk.com/files/2020/slides/PLA1454C.pdf
Hi, here is an old discussion about getting Zabbix audit logs into Splunk: https://community.splunk.com/t5/Getting-Data-In/How-do-I-integrate-Zabbix-with-Splunk/m-p/432733 Maybe this helps you? r. Ismo
This has changed in 9.2, see https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers If you have a distributed environment where the DS is not your only indexer, you must follow the above instructions. Have you looked in the internal logs (_internal and those _ds*) for any hints about why those are not shown on the DS's screens?
Is this now fixed? (I am not sure what @PickleRick's test commands (head 1000 etc.) are doing here, though!)
Hi @yin_guan,

First, you don't need to locally index anything on the DS, so you can have:

[indexAndForward]
index = false

Then, did you check whether the firewall route between the UF and the DS is open for the management port 8089 used by the DS? You can check it from the UF using telnet:

telnet 192.168.90.237 8089

Then, on the UF, I suppose that you configured outputs.conf in $SPLUNK_HOME/etc/system/local, is that right? It's a best practice not to configure outputs.conf in $SPLUNK_HOME/etc/system/local, but in a dedicated add-on deployed using the DS.

Finally, allow at least two or three minutes for the connection to the DS to come up.

Ciao. Giuseppe
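As a sketch of that best practice (the app name, group name, and indexer hostnames below are hypothetical placeholders), the deployed add-on could carry just an outputs.conf like this:

```
# $SPLUNK_HOME/etc/apps/org_all_forwarder_outputs/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

Keeping this in its own add-on lets the DS push output changes to every UF without touching etc/system/local on each host.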