All Topics



Hi, I have three license keys for Splunk SOAR and Splunk UBA, each valid for one year. While I am able to install the keys on both SOAR and UBA, I would like to verify all the keys I have installed, identify which key is currently active, and check their expiration dates. Thank you
Hi, I have deployed Splunk Enterprise and my logs are getting ingested into the indexer. I have now created an app for enriching the logs with additional fields from a CSV file. I deployed the app by making configuration changes in props.conf and transforms.conf, and I am able to see search-time enrichment. But my requirement is real-time enrichment, as my CSV file changes every 2 days. Can anyone provide a sample configuration for props.conf and transforms.conf for real-time enrichment of logs with fields from a CSV, based on a match with one of the fields of the logs? Regards
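A minimal sketch of a search-time automatic lookup, assuming a CSV named assets.csv, a sourcetype my_sourcetype, a shared key field src_ip, and output fields owner and location (all placeholder names, not from the original post):

# transforms.conf -- defines the CSV-backed lookup table
[asset_enrichment]
filename = assets.csv

# props.conf -- applies the lookup automatically at search time for this sourcetype
[my_sourcetype]
LOOKUP-asset_enrichment = asset_enrichment src_ip OUTPUTNEW owner location

Because automatic lookups run at search time, replacing assets.csv in the app's lookups directory every 2 days is enough; the next search uses the new contents with no restart or re-indexing needed.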
Hello everyone, I'm trying to send SPAN traffic from a single interface (ens35) to Splunk Enterprise using the Splunk Stream forwarder in independent mode. The Splunk Stream forwarder and the search head appear to be connected properly, but I'm not seeing any of the SPAN traffic in Splunk. In stmfwd.log, I see the following error:

(CaptureServer.cpp:2032) stream.CaptureServer - NetFlow receiver configuration is not set in streamfwd.conf. NetFlow data will not be captured. Please update streamfwd.conf to include correct NetFlow receiver configuration.

However, I'm not trying to capture NetFlow data; I only want to capture the raw SPAN traffic. Here is my streamfwd.conf:

[streamfwd]
httpEventCollectorToken = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
indexer.1.uri = http://splunk-indexer:8088
indexer.2.uri = http://splunk-indexer2:8088
streamfwdcapture.1.interface = ens35

Why is the SPAN traffic not being forwarded to Splunk? How can I configure Splunk Stream properly so that it captures and sends the SPAN traffic to my indexers without any NetFlow setup? Thank you!
I registered for the 14-day Free Trial of Splunk Cloud Platform. I registered my email address and verified it. I expected to receive an email entitled "Welcome to Splunk Cloud Platform" with corresponding links from which to access and use a trial version of Splunk Cloud. That email never arrived after several hours. No evidence of it exists either in my "Spam" or "Trash" folders of my inbox. Please look into this and advise. Thanks! -Rolland
Hello, I have a report scheduled every week and the results are exported to PDFs. Is there an option to NOT email if no results are found? Sometimes these PDFs have nothing in them. Thanks
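One common approach is to attach the email action to a trigger condition, so it only fires when results exist. A sketch in savedsearches.conf, assuming a report named "My Weekly Report" and a placeholder recipient:

[My Weekly Report]
# fire the action only when the search returns at least one result
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.sendpdf = 1
action.email.to = someone@example.com

The UI equivalent is to save the search as an alert with trigger condition "Number of Results is greater than 0" and add the email action there, rather than using the report's plain schedule.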
I have the Splunk Add-on for Google Cloud Platform set up on an IDM server. I am currently on version 4.4 and have inputs set up already from months ago; however, I am trying to send more data to Splunk, but for some reason the inputs page no longer loads. The connection seems to be fine, as I am still receiving expected data from my previous inputs, but now when I try to add another input I get the following error:

Failed to Load Inputs Page
This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page.
Details: AxiosError: Request failed with status code 500
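When an inputs page returns a 500, the underlying exception from the add-on's REST handler is normally written to the internal logs, so a generic first step is to reproduce the error and then look at what splunkd logged in that window. A minimal sketch, assuming nothing add-on-specific:

index=_internal sourcetype=splunkd log_level=ERROR
| stats count by component

The same stack trace usually also appears in $SPLUNK_HOME/var/log/splunk/splunkd.log if you have file access to the instance running the add-on.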
Hi All, I am using the base search and post-process searches outlined below, along with additional post-process searches, in my Splunk dashboard. The index name and fields are consistent across all the panels. I have explicitly included a fields command to specify the list of fields required for the post-process searches. However, I am observing a discrepancy: the result count in the Splunk search is higher than the result count displayed on the Splunk dashboard. Could you help me understand why this is happening?

Base search:
index=myindex TERM(keyword) fieldname1="EXIT"
| bin _time span=1d
| fields _time, httpStatusCde, statusCde, respTime, EId

Post-process search 1:
| search EId="5eb2aee9"
| stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time
| eval "FailureRate" = round((failures/Total)*100,2)
| table _time, Total, FailureRate, p95RespTime
| sort -_time

Post-process search 2:
| search EId="5eb2aee8"
| stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time
| eval "FailureRate" = round((failures/Total)*100,2)
| table _time, Total, FailureRate, p95RespTime
| sort -_time
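One likely cause worth ruling out: when the base search is a non-transforming (event) search, post-process searches only receive a capped number of events (on the order of 500,000), so busy days silently lose events. A sketch that moves the stats into the base search, using only the fields from the post, so each post-process search just filters and formats:

Base search:
index=myindex TERM(keyword) fieldname1="EXIT"
| bin _time span=1d
| stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time, EId

Post-process search:
| search EId="5eb2aee9"
| eval FailureRate = round((failures/Total)*100,2)
| table _time, Total, FailureRate, p95RespTime
| sort -_time

Because the base search is now transforming, it hands each panel a small aggregated result set instead of raw events, which avoids the truncation entirely.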
Hi, I have an ask to create an alert that must trigger if there are more than 50 '404' status codes in a 3-minute period, repeated three times in a row - e.g. 9:00-9:03, 9:03-9:06, 9:06-9:09. The count should include only requests with a 404 status code for certain URLs, and the alert must only trigger if there are three values over 50 in consecutive 3-minute windows. I have some initial SPL not using streamstats, but was wondering if streamstats would be better?

Initial SPL - run over a 9-minute time range:
index="xxxx" "httpMessage.status"=404 url="xxxx/1" OR url="xxxx/2" OR url="xxxx/3"
| timechart span=3m count(httpMessage.status) AS HTTPStatusCount
| where HTTPStatusCount>50
| table _time HTTPStatusCount

thanks.
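streamstats does fit this well: keep the timechart, then slide a 3-bucket window over it and require all three buckets to breach. A sketch built on the SPL from the post (index and URL placeholders unchanged), intended to run over at least the last 9 minutes:

index="xxxx" "httpMessage.status"=404 (url="xxxx/1" OR url="xxxx/2" OR url="xxxx/3")
| timechart span=3m count(httpMessage.status) AS HTTPStatusCount
| streamstats window=3 sum(eval(if(HTTPStatusCount>50,1,0))) as buckets_over_50
| where buckets_over_50=3

window=3 covers the current 3-minute bucket plus the two before it, so a row survives the where clause only when three consecutive windows each exceed 50; the alert can then trigger on "Number of Results > 0".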
I'm trying to estimate how much my Splunk Cloud setup would cost given my ingestion and searches. I'm currently using Splunk with a free license (Docker), and I'd like to get a number that represents either the price or some sort of credits. How can I do that? Thanks.
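For the ingest side, your existing instance already records how much it indexes per day, which is the main input to ingest-based Splunk Cloud pricing (SVC/workload pricing would need Splunk's own sizing help). A sketch against the internal license usage log, assuming your free instance writes license_usage.log as licensed ones do:

index=_internal source=*license_usage.log* type="Usage"
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) as daily_ingest_GB

Here b is the bytes-indexed field the license manager writes; the daily totals give you the GB/day figure to plug into a Splunk Cloud quote.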
Below are two queries which return different events but share a common field, thread_id, which can be extracted using the rex below. The raw message logs are different for both queries. I want a list of events with raw message logs from both queries, but only where a raw message shares this common thread_id. I have tried multiple things like join, append, map, and GitHub Copilot as well, but am not getting the desired results. Can somebody please help on how to achieve this?

rex field=_raw "\*{4}(?<thread_id>\d+)\*"

Query 1: index="*sample-app*" ("*504 Gateway Time-out*" AND "*Error code: 6039*")
Query 2: index="*sample-app*" "*ExecuteFactoryJob: Caught soap exception*"

What I tried:
index="*wfd-rpt-app*" ("*504 Gateway Time-out*" AND "*Error code: 6039*")
| rex field=_raw "\*{4}(?<thread_id>\d+)\*"
| append
    [ search index="*wfd-rpt-app*" "*ExecuteFactoryJob: Caught soap exception*"
      | rex field=_raw "\*{4}(?<thread_id>\d+)\*" ]
| stats values(_raw) as raw_messages by _time, thread_id
| table _time, thread_id, raw_messages

This returns some correct results containing raw messages from both queries, but some results contain a thread_id and only the 504 Gateway message, even though that thread_id has both types of message when I check separately. I'm new to Splunk; any help is really appreciated.
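A likely culprit is the by _time in the stats: the two message types almost never share an exact timestamp, so grouping by _time splits them into separate rows even when the thread_id matches. A sketch that drops _time from the grouping and keeps only thread_ids seen with both message types (index and strings taken from the post; searchmatch() tags which pattern each event matched):

index="*wfd-rpt-app*" (("*504 Gateway Time-out*" AND "*Error code: 6039*") OR "*ExecuteFactoryJob: Caught soap exception*")
| rex field=_raw "\*{4}(?<thread_id>\d+)\*"
| eval msg_type = if(searchmatch("504 Gateway Time-out"), "gateway_timeout", "soap_exception")
| stats min(_time) as _time, values(_raw) as raw_messages, dc(msg_type) as msg_types by thread_id
| where msg_types=2
| table _time, thread_id, raw_messages

This also avoids append and its subsearch limits, since a single OR search retrieves both event types in one pass.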
Hello Everyone, I'm trying to add index filtering for the data models in my setup. I found that for some data models, such as Vulnerabilities, there's no matching data at all. In this case, should I create an empty index for these data models, so that Splunk won't do useless searches for them? Please also let me know if there is a better solution for this case. Thanks & Regards, Iris
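If this is the Splunk Common Information Model app (as in ES), you may not need a real index at all: each data model has an indexes-constraint macro you can edit, and pointing it at a name that matches nothing makes those searches return immediately. A hedged sketch in macros.conf (macro name as used by the CIM app; the index name is a deliberate non-match):

[cim_Vulnerabilities_indexes]
definition = (index=no_such_index)

The same setting is exposed per data model in the CIM app's setup UI, which is usually safer than editing the .conf by hand.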
Hello, I'm working on creating a Splunk troubleshooting dashboard for our internal team, who are new to Splunk, to troubleshoot forwarder issues - specifically cases where no data is being received. I'd like to know the possible ways to troubleshoot forwarders when data is missing, or for other related issues. Are there any existing dashboards I could use as a reference? Also, what are the key metrics, internal indexes, and REST calls that I should focus on to cover all aspects of forwarder troubleshooting? #forwarder #troubleshoot #dashboard
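As a starting point for the "forwarder went quiet" case, the indexers' own metrics already record every forwarder connection, so a sketch like this (the 15-minute threshold is an arbitrary placeholder) lists forwarders that have stopped phoning in:

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by hostname
| eval minutes_silent = round((now() - last_seen)/60, 0)
| where minutes_silent > 15
| sort - minutes_silent

For reference dashboards, the Monitoring Console's Forwarders: Instance and Forwarders: Deployment views are built on the same tcpin_connections data and are a reasonable model to copy from.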
Hi All, I am searching UiPath Orchestrator logs in Splunk as follows:

index="<indexname>" source = "user1" OR source = "user2" "<ProcessName>" "Exception occurred"
| rex field=message "(?<dynamic_text>jobId:\s*\w+)"
| search dynamic_text!=null
| stats values(dynamic_text) AS extracted_texts
| map search="index="<indexname>" source = "user1" OR source = "user2" dynamic_text=\"$extracted_texts$\""

With the above search, I need to reference the jobId matched in the first search to get the other matching records and process transaction details. Thanks a lot in advance!
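Instead of map, a subsearch can feed the jobIds back into the outer search as plain search terms (the rename-to-search trick), which avoids map's per-result invocations and limits. A sketch keeping the placeholders from the post:

index="<indexname>" (source="user1" OR source="user2")
    [ search index="<indexname>" (source="user1" OR source="user2") "<ProcessName>" "Exception occurred"
      | rex field=message "jobId:\s*(?<jobId>\w+)"
      | dedup jobId
      | rename jobId as search
      | fields search ]

Because the subsearch's field is literally named search, its values are injected as raw terms, so the outer search finds any event whose text contains one of the extracted jobIds, whether or not a jobId field is extracted there.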
We are using the Salesforce Add-On 4.8.0 and we have the username lookup enabled. It seems to be working properly except for the user type: it is setting all users to standard. Has anyone seen this or does anyone know how to fix it?
I'm trying to find a simple way to calculate the product of a single column, e.g.:

value_a
0.44
0.25
0.67

Ideally, I could use something like this:
| stats product(value_a)

But this doesn't seem possible.
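Right, stats has no product() function, but for positive values the usual workaround is to sum natural logs and exponentiate. A minimal sketch:

| stats sum(eval(ln(value_a))) as ln_sum
| eval product = exp(ln_sum)

For the example column this yields 0.44 * 0.25 * 0.67 = 0.0737. Note the trick only holds when every value is positive; zeros or negative values would need to be handled separately (e.g. counted first with eval conditions).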
Hi! I was wondering if anybody had any css/xml code that could be used to hide the "Populating..." text under this drilldown in my dashboard?   
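A common Simple XML pattern is a hidden panel that carries inline CSS; the panel never renders because its depends token is never set. The exact selector for the "Populating..." element varies by Splunk version, so treat the one below as a placeholder to confirm with the browser's dev tools:

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* placeholder selector -- inspect the "Populating..." element and adjust */
        .input .waiting-text { display: none !important; }
      </style>
    </html>
  </panel>
</row>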
I have a DBConnect query that runs to populate a panel of a dashboard every week. We upgraded both the database which houses the data AND Splunk a couple of weeks ago. The new database is Postgres 14 and Splunk is now at 9.2.3. I have run this query directly on the Postgres box, so it appears that Postgres doesn't suddenly have an issue with it. Other panels/queries in this dashboard use the same DBConnect connection, so the path, structure, and data all appear to be good. The issue seems to lie with the "math" in the time range, but I cannot put my finger on why. Basically, we are trying to pull a trend of data going back 8 weeks, starting from last week.

query="
SELECT datekey, policydisposition, count(guid) as events
FROM event_cdr
WHERE datekey >= CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) as int) - (7*8)
AND datekey < CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) as int)
AND direction_flag = 1
AND policydisposition = 1
GROUP BY datekey, policydisposition
ORDER BY datekey, policydisposition

When I try to execute this query, I consistently get the following error:
"Error in 'dbxquery' command: External search command exited unexpectedly. The search job has failed due to an error. You may be able view the job in the Job Inspector"

Some of the search.log file is here:
12-30-2024 16:00:27.142 INFO PreviewExecutor [3835565 StatusEnforcerThread] - Preview Enforcing initialization done
12-30-2024 16:00:28.144 INFO ReducePhaseExecutor [3835565 StatusEnforcerThread] - ReducePhaseExecutor=1 action=PREVIEW
12-30-2024 16:02:27.196 ERROR ChunkedExternProcessor [3835572 phase_1] - EOF while attempting to read transport header read_size=0
12-30-2024 16:02:27.197 ERROR ChunkedExternProcessor [3835572 phase_1] - Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 WARN ReducePhaseExecutor [3835572 phase_1] - Not downloading remote search.log and telemetry files. Reason: No remote_event_providers.csv file.
12-30-2024 16:02:27.197 INFO ReducePhaseExecutor [3835572 phase_1] - Ending phase_1
12-30-2024 16:02:27.197 INFO UserManager [3835572 phase_1] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.197 ERROR SearchOrchestrator [3835544 searchOrchestrator] - Phase_1 failed due to : Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 INFO ReducePhaseExecutor [3835565 StatusEnforcerThread] - ReducePhaseExecutor=1 action=QUIT
12-30-2024 16:02:27.197 INFO DispatchExecutor [3835565 StatusEnforcerThread] - Search applied action=QUIT while status=GROUND
12-30-2024 16:02:27.197 INFO SearchStatusEnforcer [3835565 StatusEnforcerThread] - sid=1735574426.75524, newState=FAILED, message=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 ERROR SearchStatusEnforcer [3835565 StatusEnforcerThread] - SearchMessage orig_component=SearchStatusEnforcer sid=1735574426.75524 message_key= message=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 INFO SearchStatusEnforcer [3835565 StatusEnforcerThread] - State changed to FAILED: Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.201 INFO UserManager [3835565 StatusEnforcerThread] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.202 INFO DispatchManager [3835544 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1735574426.75524', username='User338')
12-30-2024 16:02:27.202 INFO UserManager [3835544 searchOrchestrator] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.202 INFO SearchOrchestrator [3835541 RunDispatch] - SearchOrchestrator is destructed. sid=1735574426.75524, eval_only=0
12-30-2024 16:02:27.203 INFO SearchStatusEnforcer [3835541 RunDispatch] - SearchStatusEnforcer is already terminated
12-30-2024 16:02:27.203 INFO UserManager [3835541 RunDispatch] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.203 INFO LookupDataProvider [3835541 RunDispatch] - Clearing out lookup shared provider map
12-30-2024 16:02:27.206 ERROR dispatchRunner [600422 MainThread] - RunDispatch has failed: sid=1735574426.75524, exit=-1, error=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.213 INFO UserManagerPro [600422 MainThread] - Load authentication: forcing roles="db_connect_admin, db_connect_user, slc_user, user"
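To isolate whether the date math is really the problem, the boundary expressions can be computed on their own; this is plain Postgres and runs without the table:

SELECT CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) AS int) - (7*8) AS range_start,
       CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) AS int) AS range_end;

If that returns sensible dates, note that the EOF arrives almost exactly two minutes after the search starts, which looks more like a dbxquery/Java process timeout on the Splunk side than a SQL error; the query timeout on the DB Connect connection (and datekey's column type, if it changed in the Postgres upgrade) would be worth checking.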
Splunkers, I'm trying to detect when a user fails more than 5 times within a one-hour window over the last 24h. I have the SPL below, but I would like an opinion from the community: is there another way to express the same logic in SPL?

SPL used:
index=VPN_Something
| bin _time span=24h
| stats list(status) as Attempts, count(eval(match(status,"failure"))) as Failed, count(eval(match(status,"success"))) as Success by _time user
| eval "Time Range" = strftime(_time,"%Y-%m-%d %H:%M")
| eval "Time Range" = 'Time Range'.strftime(_time+3600,"- %H:%M")
| where Failed > 5
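One adjustment worth considering: bin _time span=24h puts the whole day in a single bucket, so Failed is counted per 24 hours, not per hour. If the intent is "more than 5 failures within any one-hour window of the last 24h", a sketch with hourly buckets (same fields as the post):

index=VPN_Something earliest=-24h
| bin _time span=1h
| stats count(eval(match(status,"failure"))) as Failed, count(eval(match(status,"success"))) as Success by _time, user
| where Failed > 5
| eval "Time Range" = strftime(_time,"%Y-%m-%d %H:%M") . strftime(_time+3600," - %H:%M")

Fixed hourly buckets can still miss a burst that straddles two buckets; a true sliding window would need streamstats with time_window=1h instead.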
Is there any way that we can export the logs from Zabbix to Splunk via a script or by setting up an HEC (HTTP Event Collector) data input? I'm trying to display all the logs from my Zabbix server in Splunk. Please help, I am stuck on this.
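HEC is probably the simplest route: enable an HTTP Event Collector token in Splunk, then have a Zabbix action or script POST events to it. A minimal sketch with placeholder host, token, and index:

curl -k "https://your-splunk-host:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "test message from zabbix", "sourcetype": "zabbix:event", "index": "zabbix"}'

If that test event shows up in a search, the same POST can be issued from a Zabbix alert script or any scheduled job that reads the Zabbix logs.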
Hello everyone, I encountered an issue, as shown in the image. I can see two machines in the MC's forwarder dashboard, but I don't see any machines in my forwarder management. I have added the following configuration to the DS, but it still doesn't work after restarting:

[indexAndForward]
index = true
selectiveIndexing = true

The deployment server and UF are both version 9.3. What aspects should I check?
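[indexAndForward] controls indexing on the receiver and has no effect on Forwarder Management; clients only appear there after they successfully phone home to the deployment server. A minimal deploymentclient.conf for each UF, with a placeholder hostname and the default management port:

[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089

After restarting the UF, it should appear in Forwarder Management within a phone-home interval (about a minute by default); if not, splunk btool deploymentclient list --debug on the UF shows which configuration is actually in effect.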