All Topics


Hi All, I'm trying to make my dashboard dynamic. For example, if the search query returns 5 values, I want 5 rows and panels to be created dynamically in the dashboard. Likewise, is it possible to have the panels created based on the query output? Please assist me with this; I can add more details if needed.
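One common way to approximate this in Simple XML, assuming a trellis-capable visualization, is the trellis layout, which renders one sub-panel per value the search returns; the index, search, and split-by field below are placeholders:

<dashboard>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=my_index | stats count by host</query>
        </search>
        <option name="trellis.enabled">1</option>
        <option name="trellis.splitBy">host</option>
      </chart>
    </panel>
  </row>
</dashboard>

Fully dynamic rows (as opposed to trellis sub-panels) generally require regenerating the dashboard XML, e.g. from a scheduled script, rather than plain Simple XML.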
Hybrid and multi-cloud deployments are the new reality for many organizations today that want to get the most out of their on-premises and cloud investments. With the pace of hybrid and multi-cloud deployments on the rise, how can one ensure your cloud is fully optimized and not fraught with security and reliability concerns? Are there any best practices and recommendations for migrating self-managed Splunk Enterprise deployments to Splunk Cloud Platform (Splunk platform capabilities delivered as a service) efficiently and smoothly?
I feel like I'm dancing circles around the solution to this problem. I created a field named "Duration" with rex that holds system recovery time in the 1d 1h 1m format, but it doesn't always have all the parts; it can also be 1d 1m, 1h 1m, 1m, 1h, or 1d (with values other than 1). I want to show the average downtime over 60 days by system.

index= ....... earliest=-60d latest=0h
| rex field=issue ".*\((?P<Duration>\d[^\)]+"
| rex field=Duration "((?P<Days>\d{0,2})d\s*)?((?P<Hours>\d{0,2})h\s*)?((?P<Mins>\d{0,2})m)?"
| where isnotnull(Duration)
| eval D=tonumber(Days)*1440
| eval H=tonumber(Hours)*60
| eval M=tonumber(Mins)
| stats sum(D) as DT sum(H) as HT sum(M) as MT count(event_id) as Events by System
| addtotals fieldname=TTotal
| eval HTime=TTotal/60

This gets me the numbers I need, but I'm having trouble displaying the average time by System. It still needs to be divided by event count per system, and I need this to be an ongoing report, so I can't do it manually. "| stats avg(HTime) by System" only gives me the HTime value per system, not the average per event per system. Suggestions?
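For what it's worth, a minimal sketch of an alternative: compute each event's duration in minutes before aggregating, so stats can average it per system directly (coalesce guards the missing parts; everything else reuses the field names above):

index= ....... earliest=-60d latest=0h
| rex field=issue ".*\((?P<Duration>\d[^\)]+"
| rex field=Duration "((?P<Days>\d+)d\s*)?((?P<Hours>\d+)h\s*)?((?P<Mins>\d+)m)?"
| where isnotnull(Duration)
| eval minutes = coalesce(tonumber(Days),0)*1440 + coalesce(tonumber(Hours),0)*60 + coalesce(tonumber(Mins),0)
| stats avg(minutes) as AvgMins count as Events by System
| eval AvgHours = round(AvgMins/60, 2)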
Dear experts, I am searching my bot index, which contains conversation-id while the rest of the fields are stored as payload. Using spath I am able to extract the required fields from payload into a table. Now, for trend analysis, I want to use the timechart command to see the number of users per month; however, it's not working. Below is the query for your reference; I need help with it:

index=idx_chatbot logpoint=response-in AND service="journeyService" OR service="watsonPostMessage"
| spath input=payload output=displayname path=context.displayName
| spath input=payload output=Country path=context.countryCode
| spath input=payload output=Intent path=intents{}.intent
| spath input=payload output=ticketResponse path=response.createTicketResponse.Message
| table conversation-id timestamp service duration logpoint userFeedback displayname text Country Intent category ticketResponse payload
| dedup conversation-id
| timechart span=1mon count(displayName)
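A few hedged observations: the spath writes displayname but timechart counts displayName (field names are case-sensitive), the table command drops _time, which timechart requires, and the OR in the base search likely needs parentheses so both services are constrained to logpoint=response-in. A minimal sketch avoiding all three, assuming one event per conversation after dedup:

index=idx_chatbot logpoint=response-in (service="journeyService" OR service="watsonPostMessage")
| spath input=payload output=displayname path=context.displayName
| dedup conversation-id
| timechart span=1mon dc(displayname) as users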
Our teams have noticed an issue since we upgraded to Splunk 9.0.3 (from 8.1.x) with the chart legend interactions.  When the legend is a long list of series, attempting to scroll the legend list by clicking on the "scroll button" is instead mistakenly interpreted as a click on one of the legend's series, causing a drilldown search.  This is seen, at least, in the Search Assistant. Has anyone else seen this?  Is it a known (to Splunk) problem?  Any idea which versions/situations do/don't exhibit the bug?
Hello everyone,

I have the same issue as this guy: https://community.appdynamics.com/t5/Licensing-including-Trial/I-haven-t-received-controller-info-after-trial-set-up/td-p/49483

My email: [Redacted]
My coworker's email (he also has this issue): [Redacted]

Could you guys send us the necessary information to activate the Controller?

Best regards,
Marcelo Contin

^ Post edited to remove the email addresses. Please don't share your or others' emails on community posts, for security and privacy reasons.
I am wondering if anyone has this issue or use case. We are trying to see if we can have a system that would alert us when a host has stopped sending logs, based on the specific index it belongs to. For example: we would like to know if a firewall has stopped sending logs within 30 minutes, and likewise if a host on another, less continuous feed has stopped; for example, host A of index=trickle_feed has not sent in 4 hours, etc. We are good with the logic on those searches. What I am really looking for is direction on how you create those alerts and assign them to someone for follow-up. What other tools might you be using for the triaging and tracking of the alert/incident/ticket/work while the feed for the quiet host is being restored?
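For the detection half, a minimal sketch using tstats (the index name and threshold are placeholders; one search per feed, or a lookup of expected hosts, both work):

| tstats latest(_time) as last_seen where index=trickle_feed by host
| eval minutes_quiet = round((now() - last_seen) / 60)
| where minutes_quiet > 240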
Hi folks,

Our on-premises 5.3.1 SOAR's ingest daemon is behaving oddly in terms of memory management, and I was wondering if someone can give me any pointers on where to look for what is going wrong. In essence, ingestd keeps using more and more virtual memory until it maxes out at 256 GB and then stops ingesting data. Restarting the service does resolve the issue. I am thinking the root cause might be hiding in 3 places:

- Poorly written playbooks - I am thinking something might be wrong with the playbooks we have. We have playbooks running as often as every 5 minutes, so I suppose they can cause resource starvation. Not sure how to dive deeper for potential memory leaks here, though.
- Something going wrong with the ingestion of containers / clean-up of closed containers - is it possible that just closing containers, without deleting them after X amount of time, can cause this?
- Some weird bug that we've hit - not sure how likely this is, but I saw that version 5.3.4 fixed a bug regarding memory usage (PSAAS-9663), so it is on my list if nothing else turns up.

One relevant point to make is that this started occurring after the migration from 4.9.x to our current version, so I have no idea if this is linked to the migration to Python 3 playbooks or to the particular product version. Any pointers on where/how to start looking for the root cause are appreciated. Cheers.
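On the container clean-up angle, a heavily hedged sketch of pruning old closed containers via the SOAR REST API (the base URL and token are placeholders, and the filter syntax should be verified against your version's REST docs before running anything like this):

import requests

BASE = "https://soar.example.com"            # placeholder
HEADERS = {"ph-auth-token": "YOUR_TOKEN"}    # placeholder automation token

# list closed containers; add an age filter matching your retention policy
resp = requests.get(BASE + "/rest/container",
                    params={"_filter_status": '"closed"', "page_size": 100},
                    headers=HEADERS, verify=False)
for c in resp.json().get("data", []):
    # deleting a container is destructive -- test on a staging instance first
    requests.delete(BASE + "/rest/container/{}".format(c["id"]),
                    headers=HEADERS, verify=False)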
A new Splunk user here. I am trying to install the Splunk UF on Ubuntu. I get this error while trying to run the package for the first time:

Could not open log file "/opt/splunkforwarder/var/log/splunk/first_install.log" for writing (2).

I saw some articles online, but the suggestions did not resolve the issue for me. If I can get a step-by-step guide on resolving this, I will be grateful. Thank you.
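In case it helps: the (2) is errno 2, "No such file or directory", which in practice usually comes down to the user running splunk not owning /opt/splunkforwarder, so the log directory can't be created. A sketch of the usual fix, assuming you run the forwarder as a dedicated splunk user (adjust user/group to your setup):

# give the service account ownership of the install directory
sudo chown -R splunk:splunk /opt/splunkforwarder
# then start as that user and accept the license
sudo -u splunk /opt/splunkforwarder/bin/splunk start --accept-license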
Hi all,

Here's an interesting use case; wondering if SOAR can handle it:

You send a user an email from SOAR after running a playbook.
In the email you ask them a question with a Yes / No response.
The user can click "Yes" or "No" hyperlinks in the email; both are URLs linking back to SOAR.
SOAR records when the URL is accessed and notes it down in the related event (e.g. user clicked "No").

Any possible way of doing something like that?
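There's no built-in click-tracking endpoint I know of, so one hedged sketch is to generate the links in the playbook and point them at a small web handler you host yourself, which records the click and writes it back to the container through SOAR's REST API. Everything below (the listener URL and parameter scheme) is hypothetical:

# inside a playbook function: build Yes/No links that embed the container id
base_url = "https://responder.example.com/decision"   # hypothetical listener you implement
container_id = container["id"]
yes_link = "{}?container={}&answer=yes".format(base_url, container_id)
no_link = "{}?container={}&answer=no".format(base_url, container_id)
body = "Please confirm the alert.\nYes: {}\nNo: {}".format(yes_link, no_link)
# pass body to your send-email action; the listener would then record the
# click, e.g. by adding a note to the container via the SOAR REST API

An alternative worth checking is the playbook prompt block, which handles approvals natively, though inside SOAR rather than over email.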
I want to disable the Save As feature: users should be able to search, but shouldn't be able to save the search as a dashboard, report, or any other knowledge object.
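A hedged sketch of one common approach: deny the role write access to the relevant knowledge-object types in the app's metadata, so saves fail for everyone outside the listed roles (the app path and role lists are placeholders):

# $SPLUNK_HOME/etc/apps/search/metadata/local.meta
[savedsearches]
access = read : [ * ], write : [ admin ]

[views]
access = read : [ * ], write : [ admin ]

Depending on version, the Save As menu entries may still appear but the save will be denied; hiding the menu itself would take CSS/JS customization.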
I have a horizontal bar chart using the following post-processing search:

| stats count by urgency
| eval urgency = if(urgency=="-", "unknown", 'urgency')

The values of the urgency field are: "1 - High", "2 - Medium", "3 - Low", "unknown". I would like the horizontal bar color to change for each value:

"1 - High" would be Red
"2 - Medium" would be Orange
"3 - Low" would be Yellow
"unknown" would remain blue

I have seen code for working with value ranges, but I am looking for code that works only with the value. Any suggestions are greatly appreciated.
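A hedged sketch: turn each urgency value into its own series (e.g. with transpose), then pin series colors by name with charting.fieldColors in the panel's Simple XML; the hex values below are placeholders:

| stats count by urgency
| eval urgency = if(urgency=="-", "unknown", 'urgency')
| transpose header_field=urgency

<option name="charting.fieldColors">{"1 - High": 0xD93F3C, "2 - Medium": 0xF58F39, "3 - Low": 0xF7BC38, "unknown": 0x1E93C6}</option>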
I am trying to determine the average time for a set of issues to get resolved. I already created a field named "Duration" that extracts the time periods; the issue is that they're labeled in different time formats, with some combination of Day Hour Minute (e.g. 4d 7h 20m, 1d 13m, 7h 43m, 5h, 25m). Duration is a rex-created field which pulls the info from a string that looks something like this:

issue="D830 System Down - 1930E 13 Jan - 2240 14 Jan (1d 3h 10m) - MU3892"

Here is part of the search:

index=main ...
| rex field=issue ".*\((?P<Duration>\d[^\)]+"
| rex field=Duration "((?P<Days>\d{0,2})d\s)?((?P<Hours>\d{0,2})h\s)?(?P<Mins>\d{0,2})m"
| eval Days=tonumber(Days)
| eval Hours=tonumber(Hours)
| eval Mins=tonumber(Mins)
| eval MTTR=((Days*1440)+(Hours*60)+(Mins))/60
| table Duration Days Hours Mins MTTR

Two combinations work successfully: 1d 12m and 43m. Anything that includes the Hours field breaks the rex: 1d 10h 20m and 20h 10m only pull Mins, and 5h doesn't work at all. I ran it in regex101 and it should work for all. What is wrong with my "rex field=Duration" line?
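A sketch of a more forgiving pattern: make every segment, including minutes, optional and let \s* absorb the separators; the MTTR eval also needs null guards, since arithmetic on a missing group returns null:

| rex field=Duration "((?P<Days>\d+)d)?\s*((?P<Hours>\d+)h)?\s*((?P<Mins>\d+)m)?"
| eval MTTR = (coalesce(tonumber(Days),0)*1440 + coalesce(tonumber(Hours),0)*60 + coalesce(tonumber(Mins),0))/60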
Hello, I need a search query to detect HTTP outbound redirect traffic. Thank you.
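A hedged starting point, assuming proxy or web data with a numeric HTTP status field (the index, sourcetype, and field names below are placeholders; swap in whatever your data actually uses):

index=proxy sourcetype=my_proxy status>=300 status<400
| stats count by src_ip, dest_host, status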
Hello,

I have 2 lookups. The first one will be getting inputs from a dashboard and getting saved to the lookup (for example, a column called <username>). The second lookup has the same data as the first lookup with additional information (for example, columns called <username>, <usercity>, <userstate>, <usercountry>). I'm trying to take the inputs from the first lookup, use information from the second lookup, and map it out using a cluster map. Can someone help me with the SPL?
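A sketch, with user_inputs.csv and user_details.csv standing in for the two lookup names, and assuming the second lookup (or an extra geocoding step) can supply latitude/longitude, since a cluster map needs coordinates:

| inputlookup user_inputs.csv
| lookup user_details.csv username OUTPUT usercity userstate usercountry lat lon
| geostats latfield=lat longfield=lon count by usercountry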
I'm doing a search for server names and will eventually extract them to a csv. However, each result comes out as one of the following:

servername.domain: servername.domain
servername: servername.domain
servername: servername

How can I change the results in that particular field to be just servername? I feel like this is where regular expressions may come into play.
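A sketch, assuming the field is called result (a placeholder name): capture everything before the first dot, colon, or space, which reduces all three shapes above to the bare servername:

| rex field=result "^(?<servername>[^.:\s]+)"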
Hey,

Is there a way to retrieve the raw object of an app action in phantom.collect? I have an app which returns the following values: data, message, status, parameter. Normally that works fine; I can call each of these in turn like this:

data_result = phantom.collect(container=container, datapath=["my_app_action:action_result.data"])
message_result = phantom.collect(container=container, datapath=["my_app_action:action_result.message"])

etc. But how do I retrieve the full object? E.g. something like this:

all_result = phantom.collect(container=container, datapath=["my_app_action:action_result.*"])
all_result = phantom.collect(container=container, datapath=["my_app_action:*"])

Hope that makes sense.
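Two hedged things to try, since datapath behavior varies by SOAR version: collect the bare action_result datapath, which in some versions yields the whole result dict per result, or fall back to the playbook API's get_action_results, which returns full action-result objects:

# 1) bare datapath (verify on your version; may return full result dicts)
all_results = phantom.collect(container=container, datapath=["my_app_action:action_result"])

# 2) playbook API fallback returning complete result objects
results = phantom.get_action_results()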
I have a simple question for documentation purposes. What are the default ports, and the services using them, for the Splunk heavy forwarder and Splunk ES?
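For reference, the usual out-of-the-box defaults (every one of these is configurable, so verify against your deployment):

8089  splunkd management/REST (the HF and the ES search head)
9997  Splunk-to-Splunk forwarding (the conventional receiving port the HF sends to on indexers)
8000  Splunk Web (the ES search head UI)
8088  HTTP Event Collector, if enabled
514   syslog input, if configured on the HF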
Hi, I have a csv that is imported into Splunk, and one of its fields uses a space as the thousands separator and ends with ",00"; I need it to be an integer with only numbers. I can solve this with 2 lines:

| eval test=replace(field1,",00","")
| eval test=replace(test," ","")

But I want to create a new field with calculated fields. How can I do that in one line of code?
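Since eval's replace() takes a regular expression, both substitutions fit in one expression, and that expression drops straight into a calculated field (the sourcetype stanza name is a placeholder):

| eval test = tonumber(replace(field1, "\s|,00$", ""))

# props.conf
[your_sourcetype]
EVAL-test = tonumber(replace(field1, "\s|,00$", ""))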
Hi All,

When using stats to display values() of fields, how can we have the values align between the field names? For example, my data set:

Severity  Status      Count
P1        New         1
P1        Open        2
P1        Unassigned  3
P1        Closed      5

When using

| stats values(status) as status, values(Count) as Count by severity

this is what I get. Notice the Count values are not as per the dataset:

Severity  Status                      Count
P1        New Open Unassigned Closed  1 5 3 2

I'd like the Count results to align with their Status field. Expected result:

Severity  Status                      Count
P1        New Open Unassigned Closed  1 2 3 5
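A hedged explanation and sketch: values() de-duplicates and sorts each field independently, so the two multivalue columns lose their row pairing; list() preserves input order per field, so sorting first keeps Status and Count aligned (field-name casing follows the post):

| sort 0 severity status
| stats list(status) as status, list(Count) as Count by severity

If the pairing must be unbreakable, concatenating first is safer, e.g. | eval status_count = status . ": " . Count | stats values(status_count) by severity.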