
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi Splunkers, this problem is occurring on the Splunk_TA_paloalto app panels. Does anyone know how to handle it? I understand it has no effect on any search, but it's still annoying. Thanks in advance.
Hi, I need to write a query to find the time remaining to consume events. I have three separate searches:

    index=x message.message="Response sent" message.feedId="v1"
    | stats count as Produced

    index=y
    | spath RenderedMessage
    | search RenderedMessage="*/v1/xyz*StatusCode*2*"
    | stats count as Processed

    index=z message.feedId="v1"
    | stats avg("message.durationMs") as AverageResponseTime

So I basically want to compute: Average Time Left = (Produced - Processed) / AverageResponseTime. How can I go about doing this? Thank you so much.
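A minimal sketch of one way to combine these, assuming each of the three searches returns a single row (appendcols stitches single-row results side by side) and assuming the parenthesization of the formula above is what you intend:

    index=x message.message="Response sent" message.feedId="v1"
    | stats count as Produced
    | appendcols
        [ search index=y
          | spath RenderedMessage
          | search RenderedMessage="*/v1/xyz*StatusCode*2*"
          | stats count as Processed ]
    | appendcols
        [ search index=z message.feedId="v1"
          | stats avg("message.durationMs") as AverageResponseTime ]
    | eval AverageTimeLeft = (Produced - Processed) / AverageResponseTime

Keep in mind that appendcols subsearches carry the usual subsearch limits, so this works best over bounded time ranges.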
I have a single search head and configured props.conf with DATETIME_CONFIG = CURRENT, as I want the data to be indexed at the time Splunk receives the report. I restarted Splunk after every change. Previously I had it set to a field in the report. When I upload a CSV and use the correct sourcetype, it assigns the current time to the report. When I upload a report via curl through the HEC endpoint, it indexes it with the expected time, and the same happens when I run it through a simple script. But when the test pipeline runs, it indexes the data with the timestamp that is in the report, even though it is using the same sourcetype as my other tests. Is it possible to add a time field that overrides the sourcetype config? Is there a way to see the actual API request in the Splunk internal logs?
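For reference, DATETIME_CONFIG = CURRENT only governs timestamp extraction during parsing; an event sent to the HEC event endpoint can carry an explicit time field in the JSON payload, and that value wins regardless of props.conf. If the test pipeline sets that field, it would explain what you are seeing. A hypothetical pair of requests illustrating the difference (host and token are placeholders):

    # Indexed with the supplied epoch time, regardless of props.conf:
    curl -k https://splunk.example.com:8088/services/collector/event \
      -H "Authorization: Splunk YOUR-HEC-TOKEN" \
      -d '{"time": 1700000000, "sourcetype": "my_sourcetype", "event": "report line"}'

    # No "time" field: timestamping falls back to the sourcetype rules (CURRENT here):
    curl -k https://splunk.example.com:8088/services/collector/event \
      -H "Authorization: Splunk YOUR-HEC-TOKEN" \
      -d '{"sourcetype": "my_sourcetype", "event": "report line"}'

For the second question, HEC activity is summarized in index=_internal sourcetype=http_event_collector_metrics, though it does not record full request bodies.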
I have a drilldown into another dashboard with parameters earliest=$earliest$ and latest=$latest$, and this works. But when I go into the drilldown dashboard directly, it defaults the data to "all time". Is there a way to have multiple defaults, or some other constraint that avoids this? Here's what I've been working on, but it's not working. Any feedback would be helpful:

    <input type="time" token="t_time">
      <default>
        <earliest>if(isnull($url.earliest$), "-15m@m", $url.earliest$)</earliest>
        <latest>if(isnull($url.latest$), "now", $url.latest$)</latest>
      </default>
    </input>
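The <default> element takes literal values, not eval expressions, which is likely why this isn't working. One common SimpleXML pattern, sketched here under the assumption that you control the source dashboard's drilldown link: give the input plain defaults, and have the drilldown set form.t_time.earliest and form.t_time.latest in the URL so the passed values override those defaults (the app and dashboard names below are placeholders):

    <input type="time" token="t_time">
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>

    <!-- In the source dashboard's drilldown: -->
    <drilldown>
      <link target="_blank">/app/my_app/target_dashboard?form.t_time.earliest=$earliest$&amp;form.t_time.latest=$latest$</link>
    </drilldown>

Opened directly, the target dashboard falls back to -15m@m/now; opened via the drilldown, the URL form tokens win.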
Good day. I am somewhat new to Splunk. I am trying to cross-reference some malicious IPs I have in a .csv file against the src_ip field, and output the result wherever there is a match. I understand that I have to create a lookup, but I can't get any further.
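A minimal sketch, assuming the file has been uploaded as a lookup named malicious_ips.csv with a column called ip (rename to match your actual header, and swap in your real index/sourcetype):

    index=your_index sourcetype=your_sourcetype
    | lookup malicious_ips.csv ip AS src_ip OUTPUT ip AS matched_ip
    | where isnotnull(matched_ip)
    | table _time src_ip matched_ip

You can verify the lookup file itself with | inputlookup malicious_ips.csv before wiring it into the search.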
Hello, I have a dashboard with a few panels. All the panels have drilldown tokens that update a table. I also have a few filters that are supposed to update the table, but I see the message "Search is waiting for input..." when selecting options from the filters. The table updates only when clicking on the other panels. How can I update the table from the filters?

Thanks
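"Search is waiting for input..." usually means the table's search references a token that has no value yet. A minimal SimpleXML sketch, assuming classic dashboards (token name and choices are placeholders): give each filter input a default so every token referenced by the table is set on load, and make sure the table's search actually references the filter token alongside the drilldown tokens.

    <input type="dropdown" token="filter_tok" searchWhenChanged="true">
      <label>My Filter</label>
      <choice value="*">All</choice>
      <default>*</default>
    </input>

With a default of * the table can render immediately, and selecting a filter value re-runs the search without waiting for a panel click.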
We are planning a migration of hot drives to faster disk for all indexers. The current plan is below, and I wonder if it makes sense, because the plan applies maintenance mode per indexer, whereas I am used to keeping maintenance mode on for the entire operation. Any thoughts? The current plan:

Event monitoring
- Suspend monitoring of alerts on the indexers

(Repeat the following for each indexer, one at a time)

Splunk Ops:
- Put the Splunk cluster in maintenance mode
- Stop the Splunk service on one indexer

VM Ops:
- vMotion the existing 2.5 TB disk to any Unity datastore
- Provision a new 2.5 TB VM disk from the VSAN datastore

Linux Ops:
- Rename the existing hot-data logical volume "/opt/splunk/hot-data" to "/opt/splunk/hot-data-old"
- Create a new volume group and mount the new 2.5 TB disk as "/opt/splunk/hot-data"

Splunk Ops:
- Restart the Splunk service on the indexer
- Take the indexer cluster out of maintenance mode
- Review the cluster master to confirm the indexer is processing and rebalancing has started as expected
- Wait a few minutes to allow Splunk to rebalance across all indexers

(Return to top and repeat the steps for the next indexer)

Splunk Ops:
- Validate service and perform test searches
- Check the CM panel -> Resources/usage/machine (bottom panel, IOWait Times) and monitor changes in IOWait

Event monitoring
- Enable monitoring of alerts on the indexers

In addition, Splunk PS suggested using:

    splunk offline --enforce-counts

I'm not sure that is the right way, since it might need to migrate the ~40 TB of cold data and would slow the entire operation.
I have been testing SmartStore in a test environment. I cannot find the setting that controls how quickly data ingested into Splunk is replicated to my S3 bucket. What I want is for any ingested data to be replicated to the S3 bucket as quickly as possible; I am looking for the closest to 0 minutes of data loss. Data only seems to replicate when the Splunk server is restarted. I tested this by setting up another Splunk server with the same S3 bucket as my original, and it only picked up older data when searching.

- max_cache_size only controls the size of the local cache, which is not what I'm after.
- hotlist_recency_secs controls how long before hot data can be evicted from the cache, not how long before it is replicated to S3.
- frozenTimePeriodInSecs, maxGlobalDataSizeMB, and maxGlobalRawDataSizeMB control freezing behavior, which is not what I'm looking for.

What setting do I need to configure? Am I missing something in the Splunk conf files, or a permission to set in AWS for S3? Thank you for the help in advance!
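For context: with SmartStore, a bucket is uploaded to the remote store when it rolls from hot to warm, so upload latency is really a function of how quickly hot buckets roll; a restart rolls hot buckets, which is consistent with what you observed. A hedged indexes.conf sketch that forces more frequent rolling (values are illustrative, and smaller buckets mean more S3 objects and more cache churn):

    # indexes.conf (illustrative values; remotePath/repFactor shown for context)
    [my_index]
    remotePath = volume:remote_store/$_index_name
    repFactor = auto
    # Roll a hot bucket after 10 minutes of inactivity:
    maxHotIdleSecs = 600
    # Cap the time span a single hot bucket can cover:
    maxHotSpanSecs = 3600

There is no setting that uploads data the instant it is indexed; tuning hot-bucket rolling is the lever, traded off against bucket count.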
Hi, I am new to Splunk, and I am doing some testing with Blue Prism Data Gateway and Splunk. How can I get the Splunk URL and API token?
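Assuming Blue Prism Data Gateway sends to Splunk over the HTTP Event Collector (worth confirming in the Blue Prism docs), the URL is your Splunk host on the HEC port (8088 by default) and the token is created in Splunk Web under Settings > Data Inputs > HTTP Event Collector. A hypothetical smoke test once you have both (host and token are placeholders):

    curl -k https://your-splunk-host:8088/services/collector/event \
      -H "Authorization: Splunk YOUR-HEC-TOKEN" \
      -d '{"event": "hello from blue prism test"}'

A {"text":"Success","code":0} response means the URL and token are good.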
Hello, I'm looking for assistance with a webmail-only report. I ran a query and only got ActiveSync output; my customer is only interested in OWA, not ActiveSync, as a report for their users. Here is the query that produced only ActiveSync:

    index="iis_logs_exchxxx" sourcetype="iis" s_port="443" c_ip!="10.*" c_ip!="127.0.0.1" c_ip!="::1" cs_method!="HEAD" cs_username="*@domain.com"
    | iplocation c_ip
    | eval alert_time=_time
    | convert ctime(alert_time) timeformat="%m/%d/%Y %H:%M:%S %Z"
    | table alert_time, cs_username, cs_User_Agent, c_ip, City, Region, Country
    | stats values(c_ip) by alert_time, cs_username, cs_User_Agent, City, Region, Country
    | rename cs_username AS "Username", values(c_ip) AS "IP addresses", cs_User_Agent AS "Device Type", alert_time AS "Date/Time"
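OWA and ActiveSync traffic are distinguished by the requested URI path in IIS logs (OWA under /owa/, ActiveSync under /Microsoft-Server-ActiveSync), so a path filter should do it. A minimal change, assuming the path is extracted as the standard IIS field cs_uri_stem: add the filter to the base search and keep the rest of the pipeline unchanged.

    index="iis_logs_exchxxx" sourcetype="iis" s_port="443" c_ip!="10.*" c_ip!="127.0.0.1" c_ip!="::1" cs_method!="HEAD" cs_username="*@domain.com" cs_uri_stem="/owa/*"

To merely exclude ActiveSync instead of restricting to OWA, use cs_uri_stem!="/Microsoft-Server-ActiveSync*".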
Hi, I am running Splunk Stream to collect DNS data from domain controllers. On some of the busy DCs, Splunk_TA_stream is generating lots of the following errors:

    ERROR [9412] (SplunkSenderModularInput.cpp:435) stream.SplunkSenderModularInput - Event queue overflow; dropping 10001 events

Looking at the Splunk Stream Admin - Network Metrics dashboard, these seem to occur at the same time the Active Network Flows appear to hit a limit. I would like to increase the number of network flows allowed, in an attempt to stop the event queue overflows. Looking at the documentation, I can see two configurations that seem relevant:

    maxTcpSessionCount = <integer>
    * Defines maximum number of concurrent TCP/UDP flows per processing thread.

    processingThreads = <integer>
    * Defines number of threads to use for processing network traffic.

Questions:
1) What are the defaults for maxTcpSessionCount and processingThreads?
2) Which parameter would be better to increase?

Also, are these the correct parameters to tune for the errors I am getting? If not, what should I look at?
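I can't vouch for the shipped defaults, so treat the numbers below as placeholders and check them against the Stream docs for your version. Since maxTcpSessionCount is per processing thread, raising processingThreads increases total flow capacity and spreads CPU load at the same time, which is usually the first lever on a busy DC. A sketch of where the settings go (file and stanza placement per the Stream documentation):

    # local/streamfwd.conf on the busy DCs (illustrative values)
    [streamfwd]
    processingThreads = 4
    maxTcpSessionCount = 200000

After changing these, watch the Network Metrics dashboard to confirm the Active Network Flows ceiling moves and the overflow errors stop.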
Hello, I am managing Splunk roles. I want to assign capabilities to roles, but unfortunately, for a few of them I could not find out what exactly they do. Searching did not give me results, or the results were not satisfying. If you have some reference with all capabilities and their descriptions, please advise me what exactly the following capabilities do (screenshot attached).
Can anyone please share scenario-based interview questions for a Splunk admin role?
I am working on adding some drop-downs to an existing Dashboard Studio dashboard. I have the queries working with no issues by referencing the drop-downs' token names wrapped in $...$. What I am working on now: I would like to update a widget's title with the token's label, as that is the human-readable text, not the value that drives the queries. Showing the value of the token's selection works with $tok_aToken$, but how do I show the token's label? I have tried $tok_aToken_label$ and $tok_aToken.label$, and have been searching for hours without finding a solution.
Hi, I have 3 values and I want to display them in a single value panel like the image below, which is from Tableau; I want to replicate the same in Splunk. Can it be done? If not, can we represent 2 values (GPA and website) in a single value panel and Grade in a legend? Otherwise, please suggest another representation that displays 3 values.
I used the query index="botsv2" Amber and found capture_hostname: matar.

Which e-mail address seems to be linked to "matar"? And to whom does that person send the attachment from the "feed" email?

This is from https://github.com/splunk/botsv2
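Not an answer to the CTF question itself, but a sketch of how to pivot, assuming the BOTSv2 dataset carries email traffic in stream:smtp (field names vary, so inspect them rather than guessing): first see which sourcetypes mention the hostname, then drill into the SMTP events.

    index="botsv2" matar
    | stats count by sourcetype

    index="botsv2" sourcetype="stream:smtp" matar
    | fieldsummary
    | table field count

The fieldsummary output shows which sender/recipient/attachment fields are extracted, which is usually enough to answer both questions.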
I would like to compare total throughput for two dates 60 days apart (say, current and -60d). The query in the CMC that generates the throughput is:

    index=_internal (host=`sim_indexer_url` OR host=`sim_si_url`) sourcetype=splunkd group=per_Index_thruput series!=_*
    | timechart minspan=30s per_second(kb) as kb by series

I need the series information, but it could be binned into one whole day.
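One hedged way to do this in a single search: OR together two inline earliest/latest windows (inline time terms override the time picker), label each window, and aggregate by window and series.

    index=_internal (host=`sim_indexer_url` OR host=`sim_si_url`) sourcetype=splunkd group=per_Index_thruput series!=_*
        ((earliest=-1d@d latest=@d) OR (earliest=-61d@d latest=-60d@d))
    | eval window=if(_time >= relative_time(now(), "-1d@d"), "current", "minus_60d")
    | stats sum(kb) as total_kb by window, series

Note this sums raw kb per day rather than the per_second rate; if you need the rate, divide total_kb by 86400 in a final eval.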
I'm having trouble using any action with an IPv6 value; any action in any app where I try to use an IPv6 address returns this error:

    Nov 29, 09:25:17 : 'add_element_1' on asset 'akamai original': 1 action failed. (1) For Parameter: {"context":{"artifact_id":0,"guid":"857e066c-de68-4109-a58b-ee1e515b01dd","parent_action_run":[]},"elements":"2804:1b3:ac03:a6dd:d941:1714:85bb:8b4","networklistid":"7168_ORIGINALBLACKLIST"} Message: "Parameter 'elements' failed validation"

    Nov 29, 09:25:17 : 'add_element_1' on asset 'akamai original' completed with status: 'failed'. Action Info: Size : 336 bytes : [{"app_name":"Akamai WAF","asset_name":"akamai original","param":{"context": {"guid": "857e066c-de68-4109-a58b-ee1e515b01dd", "artifact_id": 0, "parent_action_run": []}, "elements": "2804:1b3:ac03:a6dd:d941:1714:85bb:8b4", "networklistid": "7168_ORIGINALBLACKLIST"},"status":"failed","message":"Parameter 'elements' failed validation"}]

I always receive the message "Parameter 'elements' failed validation"; in this case it is an app that adds an IP to an Akamai network list. If anyone has managed to use IPv6, I would be glad if you could share how.

Thanks.
Hi Team, I came across an issue where I have the below sample logs in a file:

    15:30:31.396|Info|Response ErrorMessage: ||
    15:30:36.610|Info|Logging Rest Client Request...||
    15:30:36.610|Info|Request Uri: https://abc-domain/api/xy/Identify||
    15:30:36.694|Info|Logging Rest Client Response...||
    15:30:36.694|Info|Response Status Code: 401||
    15:30:36.710|Info|Response Status Description: Unauthorized||
    15:30:36.741|Info|Response Content: ||
    15:30:36.741|Info|Response ErrorMessage: ||
    15:30:36.762|Info|Logging Rest Client Request...||

I am using forwarder version splunkforwarder-8.2.4-87e2dda940d1-x64-release with the below props.conf settings:

    [xyz:mnl]
    LB_CHUNK_BREAKER = ([\r\n]+)

In the Splunk portal I am not getting one line as one event; instead I am getting multiple lines as a single event, like below.
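For what it's worth, the usual event-breaking knobs on a UF are EVENT_BREAKER_ENABLE/EVENT_BREAKER in props.conf, while the authoritative line breaking happens at the parsing tier (indexer or heavy forwarder) via LINE_BREAKER and SHOULD_LINEMERGE. A sketch, assuming every event starts with an HH:MM:SS.mmm timestamp followed by a pipe:

    # props.conf at the parsing tier (indexer / heavy forwarder)
    [xyz:mnl]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=\d{2}:\d{2}:\d{2}\.\d{3}\|)
    TIME_PREFIX = ^
    TIME_FORMAT = %H:%M:%S.%3N

    # props.conf on the universal forwarder
    [xyz:mnl]
    EVENT_BREAKER_ENABLE = true
    EVENT_BREAKER = ([\r\n]+)(?=\d{2}:\d{2}:\d{2}\.\d{3}\|)

Since the lines carry no date, Splunk will fall back to the current date for each event; if that matters, the source format would need a date component.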
We have a situation where the application sends its logs in syslog format, but we don't have a syslog server to receive them. Instead, can we make the UF (installed on the same app server) receive those syslog events and forward them to Splunk Cloud?

Note: We don't have a physical location of the logs on the app server to monitor using the UF.
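A UF can listen for syslog directly over TCP or UDP via inputs.conf, so no file on disk is needed; a dedicated syslog receiver is still generally more robust, since events arriving while the UF is restarting are lost. A minimal sketch, assuming the application can point its syslog output at localhost on a high port (ports below 1024 require root) and that an index named app_syslog exists in Splunk Cloud:

    # inputs.conf on the UF
    [udp://5514]
    sourcetype = syslog
    index = app_syslog
    connection_host = ip

Use [tcp://5514] instead if the application can emit syslog over TCP, which at least gives you transport-level delivery guarantees.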