All Posts


Hello, I’m working on creating a Splunk troubleshooting dashboard for our internal team, who are new to Splunk, to troubleshoot forwarder issues, specifically cases where no data is being received. I’d like to know the possible ways to troubleshoot forwarders when data is missing, or for other related issues. Are there any existing dashboards I could use as a reference? Also, what key metrics, _internal index searches, and REST calls should I focus on to cover all aspects of forwarder troubleshooting? #forwarder #troubleshoot #dashboard
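As a starting point, a commonly used health check is to look at the forwarder connections that indexers record in the _internal index; a minimal sketch (the 15-minute threshold is an arbitrary assumption you should tune to your environment):

index=_internal sourcetype=splunkd source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by hostname
| eval minutes_since_last_seen = round((now() - last_seen) / 60, 0)
| where minutes_since_last_seen > 15
| sort - minutes_since_last_seen

The Monitoring Console's Forwarders: Instance and Forwarders: Deployment dashboards are built on similar data and are a good reference for additional panels and metrics.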
I found the problem. I needed to add the following to the UF's inputs.conf file; I don't know if this is a problem introduced by the update or if it was also needed before:

[default]
host = 192.168.90.233
This is unfortunately... not fixed.
Hi All, I am searching UiPath Orchestrator logs in Splunk as follows:

index="<indexname>" source = "user1" OR source = "user2" "<ProcessName>" "Exception occurred"
| rex field=message "(?<dynamic_text>jobId:\s*\w+)"
| search dynamic_text!=null
| stats values(dynamic_text) AS extracted_texts
| map search="index="<indexname>" source = "user1" OR source = "user2" dynamic_text=\"$extracted_texts$\""

With the search above, I need to reference the jobId matched in the first search to find the other matching records and process the transaction details. Thanks a lot in advance!
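One alternative, sketched on the assumption that the jobId values also appear in the raw text of the related events, is to drive the outer search with a subsearch instead of map, so only one search runs in total:

index="<indexname>" (source="user1" OR source="user2")
    [ search index="<indexname>" (source="user1" OR source="user2") "<ProcessName>" "Exception occurred"
      | rex field=message "jobId:\s*(?<jobId>\w+)"
      | dedup jobId
      | return 100 $jobId ]

Here return emits up to 100 jobId values ORed together as search terms; the cap of 100 is arbitrary, and the field name jobId comes from the rex in the question.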
Another approach is to (almost) use stats :-) as you originally proposed, with a little help from foreach.

| stats values(value_a) as value_a
| eval product = 1
| foreach value_a mode=multivalue [eval product = product * <<ITEM>>]

Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="value_a
0.44
0.25
0.67"
``` data emulation above ```

The above search gives this result:

value_a    product
0.25       0.074
0.44
0.67
Thank you for your reply. I apologize for the way I described the issue! Our sample data is as follows:

2024-12-12 00:30:12 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-12 08:30:14 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-12 11:30:12 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-13 15:30:55 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-13 00:30:12 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-14 19:30:30 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "
2024-12-14 22:30:12 ", 0699075634," Liu Zhiqiang "," Logistics Department "," Yes "

The field titles are: opr_time, oprt_user_acct, oprt_user_name, blng_dept_name, is_cont_sens_acct
Correct, you can't multiply down columns, but you can with rows, so depending on how many events and columns you want the product of, you could do something like this:

| makeresults format=csv data="value_a,value_b,otherValue
0.44,1,10
0.25,2,20
0.67,3,30"
| appendpipe
    [| transpose 0
     | eval product = 1
     | foreach row* [eval product = product * '<<FIELD>>']
     | eval {column} = product
     | stats values(value_a) as value_a values(value_b) as value_b]
Some additional information that was somehow omitted from my original post... If I change the date range part of the query from "(7 * 8)" to just "7", the query runs fine. If I change "(7 * 8)" to "(7 * 1)", the query runs fine. If I change "(7 * 8)" to "(7 * 2)", or any number greater than 2, the query fails with the same error as mentioned in the original post.
We are using the Salesforce Add-On 4.8.0 and we have the username lookup enabled and it seems to be working properly except for the user type.  It is setting all users to standard.  Has anyone seen this or know how to fix it?
After some more searching, I found SEC1936B from .conf23 and followed the file configuration instructions. I have TLS connections now. Thank you for your time.
N.A  
I'm trying to find a simple way to calculate the product of a single column, e.g.

value_a
0.44
0.25
0.67

Ideally, I could use something like this:

| stats product(value_a)

But this doesn't seem possible.
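One stats-only workaround, sketched under the assumption that all values are positive, is to sum the natural logarithms and then exponentiate:

| stats sum(eval(ln(value_a))) as ln_sum
| eval product = exp(ln_sum)

With the sample values above this gives roughly 0.0737, matching the foreach-based answers elsewhere in this thread; zeros or negative values would need separate handling.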
That was just for checking how many fields are returned from the sample data. Of course it's not suitable for a production search.
Hi @CyberWolf,

There's a tendency among practitioners to bin time into buckets rounded to the nearest time interval, e.g. 1 hour: 00:00, 01:00, 02:00, etc.; however, this results in counting errors. Instead, count using a rolling window in ascending _time order:

index=VPN_Something status=failure
| stats count by _time user
| streamstats time_window=1h sum(count) as failure_count by user
| where failure_count>5

Since you're only interested in a 1-hour window, your search time range only needs to span the last hour plus any allowance for ingest lag and a buffer to accommodate your scheduling interval. See https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Streamstats for information on scaling streamstats.

If you're using accelerated data models or indexed fields, or if your raw events are structured in key-value pairs separated by minor breakers, you can use tstats to greatly improve the performance of the search. If you have the capacity, you might also consider a real-time search that counts events as they're indexed, although the results may be incorrect relative to your requirements.

If you have Splunk Enterprise Security, look at the "Access - Excessive Failed Logins - Rule" correlation search. For reference, it's a real-time search scheduled every 5 minutes (*/5 * * * *), with earliest=rt-65m@m and latest=rt-5m@m:

| from datamodel:"Authentication"."Failed_Authentication"
| stats values("tag") as "tag", dc("user") as "user_count", dc("dest") as "dest_count", count by "app", "src"
| where 'count'>=6

In the Authentication data model, app would be something like vpn, and src would be a device identifier.

As @isoutamo wrote, there are many approximate solutions to this problem. The correct solution depends on your requirements and your tolerance for counting errors.
Hi

You can find quite a few examples for this with the query

site:community.splunk.com%20login%20failed%20more%20than%205%20times%20per%20hour%20solved

Just copy-paste it into Google.

In your example there is at least one misunderstanding: you have added "bin _time span=24h", but later you expect your _time to be divided into 1-hour spans. Just google those examples and then change your query, or create a new one. With Splunk there is rarely only one correct solution!

Happy splunking!
You cannot add your own CSS code to Dashboard Studio.
Sorry, I didn’t read the error message correctly. It says that Splunk cannot read the key file from your PEM file. Are you sure that it contains all the needed parts?
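For reference, a combined Splunk server certificate PEM typically needs the server certificate, its private key, and the CA certificate(s) concatenated in one file, roughly in this form (contents elided):

-----BEGIN CERTIFICATE-----
... server certificate ...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
... private key ...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... CA certificate ...
-----END CERTIFICATE-----

If the key block is missing, or the key lives in a separate file that the configuration does not reference, Splunk will report that it cannot read the key.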
This script seems to be a scripted input for a MySQL backend. It's probably enough to just define it in the UF's inputs.conf. I haven't tried it, so I can't say more.
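A minimal sketch of what such a stanza might look like on the UF, assuming a hypothetical script path, interval, and sourcetype (adjust all of these to the real script and target index):

# hypothetical scripted input stanza; path, interval, sourcetype, and index are placeholders
[script://./bin/mysql_stats.sh]
disabled = 0
interval = 300
sourcetype = mysql:stats
index = main

The script path is resolved relative to the app that the inputs.conf lives in, and the interval is in seconds.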
Hi! I was wondering if anybody had any css/xml code that could be used to hide the "Populating..." text under this drilldown in my dashboard?   
I have a DBConnect query that runs to populate the panel of a dashboard every week. We upgraded both the database which houses the data AND Splunk a couple of weeks ago. The new database is Postgres 14 and Splunk is now at 9.2.3. I have run this query directly on the Postgres box, so it appears that Postgres doesn't suddenly have an issue with it. Other panels/queries in this dashboard use the same DBConnect connection, so the path, structure, and data all appear to be good. The issue seems to lie with the "math" in the time range, but I cannot put my finger on why. Basically, we are trying to pull a trend of data going back 8 weeks, starting from last week.

query="
SELECT datekey, policydisposition, count(guid) as events
FROM event_cdr
WHERE datekey >= CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) as int) - (7*8)
AND datekey < CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) as int)
AND direction_flag = 1
AND policydisposition = 1
GROUP BY datekey, policydisposition
ORDER BY datekey, policydisposition

When I try to execute this query, I consistently get the following error:

"Error in 'dbxquery' command: External search command exited unexpectedly. The search job has failed due to an error. You may be able view the job in the Job Inspector"

Some of the search.log file is here:

12-30-2024 16:00:27.142 INFO PreviewExecutor [3835565 StatusEnforcerThread] - Preview Enforcing initialization done
12-30-2024 16:00:28.144 INFO ReducePhaseExecutor [3835565 StatusEnforcerThread] - ReducePhaseExecutor=1 action=PREVIEW
12-30-2024 16:02:27.196 ERROR ChunkedExternProcessor [3835572 phase_1] - EOF while attempting to read transport header read_size=0
12-30-2024 16:02:27.197 ERROR ChunkedExternProcessor [3835572 phase_1] - Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 WARN ReducePhaseExecutor [3835572 phase_1] - Not downloading remote search.log and telemetry files. Reason: No remote_event_providers.csv file.
12-30-2024 16:02:27.197 INFO ReducePhaseExecutor [3835572 phase_1] - Ending phase_1
12-30-2024 16:02:27.197 INFO UserManager [3835572 phase_1] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.197 ERROR SearchOrchestrator [3835544 searchOrchestrator] - Phase_1 failed due to : Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 INFO ReducePhaseExecutor [3835565 StatusEnforcerThread] - ReducePhaseExecutor=1 action=QUIT
12-30-2024 16:02:27.197 INFO DispatchExecutor [3835565 StatusEnforcerThread] - Search applied action=QUIT while status=GROUND
12-30-2024 16:02:27.197 INFO SearchStatusEnforcer [3835565 StatusEnforcerThread] - sid=1735574426.75524, newState=FAILED, message=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 ERROR SearchStatusEnforcer [3835565 StatusEnforcerThread] - SearchMessage orig_component=SearchStatusEnforcer sid=1735574426.75524 message_key= message=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 INFO SearchStatusEnforcer [3835565 StatusEnforcerThread] - State changed to FAILED: Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.201 INFO UserManager [3835565 StatusEnforcerThread] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.202 INFO DispatchManager [3835544 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1735574426.75524', username='User338')
12-30-2024 16:02:27.202 INFO UserManager [3835544 searchOrchestrator] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.202 INFO SearchOrchestrator [3835541 RunDispatch] - SearchOrchestrator is destructed. sid=1735574426.75524, eval_only=0
12-30-2024 16:02:27.203 INFO SearchStatusEnforcer [3835541 RunDispatch] - SearchStatusEnforcer is already terminated
12-30-2024 16:02:27.203 INFO UserManager [3835541 RunDispatch] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.203 INFO LookupDataProvider [3835541 RunDispatch] - Clearing out lookup shared provider map
12-30-2024 16:02:27.206 ERROR dispatchRunner [600422 MainThread] - RunDispatch has failed: sid=1735574426.75524, exit=-1, error=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.213 INFO UserManagerPro [600422 MainThread] - Load authentication: forcing roles="db_connect_admin, db_connect_user, slc_user, user"
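One way to narrow this down (a sketch; "<connection>" is a placeholder for the actual DB Connect connection name) is to run only the date-boundary arithmetic through dbxquery and confirm it evaluates, which helps separate a SQL problem from a data-volume or timeout problem:

| dbxquery connection="<connection>" query="SELECT CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) AS int) - (7*8) AS start_date, CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) AS int) AS end_date"

If the boundaries come back correctly, the failure is more likely tied to the volume of rows returned for the 8-week window (for example a dbxquery timeout) than to the arithmetic itself.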