All Topics

Hi Team, kindly let us know which one is better for performance tuning; the dashboard is taking a very long time to load:
index=abc "DONE"
or
index=abc myfield="DONE"
When I open the dashboard, all of the searches present in it start running in the Job Manager. I expect a job to run only when I click its dropdown. How can I resolve this?
Dumb question I cannot find a simple answer to. If I run a simple timechart search for 7 days, 30 days, or 90 days, how can I overlay the 7-day, 30-day, or 90-day average line over the timechart? For example:
index=blah sourcetype=blah filter_term=blah | timechart span=1d count as daily_count
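One possible approach, a minimal sketch assuming the search above: compute the overall average after the timechart with eventstats, which adds a constant avg_daily_count column that can be set as a chart overlay in the visualization format options.
index=blah sourcetype=blah filter_term=blah
| timechart span=1d count as daily_count
| eventstats avg(daily_count) as avg_daily_count
Running it over 7, 30, or 90 days then overlays that period's average automatically.
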
Hi, I want to use the HashiCorp Vault app to get usernames and passwords. But since my HashiCorp Vault service has a call limit (i.e. the service blocks an IP that calls it too frequently), I want to cache the results. I could not find a way in SOAR to cache secrets. The cached secrets should be usable by all playbooks and custom functions without calling HashiCorp Vault again and again.
Hi, good morning. We have a search head cluster and an indexer cluster. We have received a complaint from a SOC analyst that some notable events that previously existed (for example last month or a week ago) are now missing or no longer visible on the Incident Review tab. However, when we rerun the SPL for that day, we do get results. When we try `notable` | search event_id="the event id of the notable", no results are found. NOTE: the storage is large, and the complaint is that notable events present last week or last month are no longer visible or searchable now, yet rerunning the SPL for that day still returns results. Can someone guide me on what to check to pinpoint the cause of this? I am new to Splunk.
We have two indexer servers, and one of the folders holding frozen buckets is consuming disk space. We need to clean those buckets permanently and reclaim the disk space. We have tried all the solutions posted, but none of them worked. Is there any other solution available? We need to clean data that is more than 2 years old.
I have a dataset with a multiline field called Logs. The field typically has values like the below:
"mId": "Null", "deviceID": "a398Z389j", "cSession": "443", "cWeb": "443", "uWeb": "Mixed", "s": "Steak", "Ing": [ "1-555-5555555", "1-888-8888888" ], "Sem": [ "Warehouse@Forest.box" ]
I'd like to be able to identify the values within "Ing" and easily search for a specific value in "Ing" across other events. I was able to break it out, split on the comma, and look at index number 6, but this only returns the 1st item, whereas in most events there are multiple (upwards of 10) items:
| eval a = mvindex(split(Logs,","), 6)
which returns: "Ing": [ "1-555-5555555"
Thoughts on how to get a complete list of the items in Ing?
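A minimal sketch, assuming the "Ing" block always appears in Logs as a bracketed list: capture the whole block with rex, strip the quotes and whitespace, and split it into a multivalue field that can be searched directly.
... | rex field=Logs "\"Ing\":\s*\[(?<ing_block>[^\]]*)\]"
| eval Ing=split(replace(ing_block, "[\s\"]+", ""), ",")
| search Ing="1-555-5555555"
Each item in the array becomes its own entry in the multivalue field Ing, however many items the event contains.
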
I have a search head cluster, and I will have scheduled reports that send data to a summary index. I don't want other users' searches to cause this scheduled report to queue or not run. Is it possible to create a dedicated search head to run this scheduled report while the search head cluster still has access to the summary index?
Hi guys, I'm trying to create a table with the count of emails sent and emails received for a given set of email addresses:
Email address                 Emails received          Emails sent
bob1@splunk.com            <Number>                 <Number>
bob2@splunk.com            <Number>                 <Number>
I tried this with the append command, but the results are shown under one another. My search is:
index=email_index Recipients IN(bob1@splunk.com, bob2@splunk.com, bob3@splunk.com)
| stats count as "Emails received" by Recipients
| append [search index=email_index Sender IN(bob1@splunk.com, bob2@splunk.com, bob3@splunk.com)
    | stats count as "Emails sent" by Sender]
| table "Emails received" "Emails sent" Recipients Sender
Can anyone help me, please?
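One way to get one row per address, a sketch reusing the same index and field names: keep the append, but normalize both result sets to a common address field and merge them with a second stats.
index=email_index Recipients IN (bob1@splunk.com, bob2@splunk.com, bob3@splunk.com)
| stats count as emails_received by Recipients
| rename Recipients as email_address
| append
    [ search index=email_index Sender IN (bob1@splunk.com, bob2@splunk.com, bob3@splunk.com)
      | stats count as emails_sent by Sender
      | rename Sender as email_address ]
| stats sum(emails_received) as "Emails received" sum(emails_sent) as "Emails sent" by email_address
| rename email_address as "Email address"
The final stats collapses the appended rows so each address appears once with both counts side by side.
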
Hi Splunkers! I want a visualization on a report that shows the comparison with an arrow (increase or decrease). I'm attaching an image of what I want to achieve. Please let me know if there's a way to achieve it, or if there's any similar trend visualization I could use instead. TIA
Hello Community, I've been looking at the installation process for Splunk CIM and got stuck on a step. After installation there seems to be a need to whitelist indexes for data models (or vice versa). I realize this can be done fairly easily through the GUI, though normally configuration is handled centrally. Having come up empty looking through the contents of the app/package: is it possible to specify index whitelists for particular data models in a conf file that I may have missed? Best regards
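In case it helps, a sketch of the conf-file route, on the assumption (verify against your CIM version) that the app exposes each data model's index constraint as a macro that can be overridden centrally in a local macros.conf; the index names below are placeholders.
# $SPLUNK_HOME/etc/apps/Splunk_SA_CIM/local/macros.conf  (placeholder index names)
[cim_Authentication_indexes]
definition = (index=wineventlog OR index=linux_secure)

[cim_Network_Traffic_indexes]
definition = (index=firewall)
Each macro restricts the corresponding data model's searches to the listed indexes.
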
I have a KV store based lookup for Port Address Translation. Given the first 3 octets of a public-facing IP and a port, I need to look up the first 3 octets of the private address. The lookup contains the first 3 octets of the public IP, the first 3 octets of the private IP, and the maximum and minimum ports for that private subnet range. Starting with a public_address of 123.45.67.8, port 1042, something like this works:
| inputlookup PAT_translation_table where public_address="123.45.67" lower_port<="1042" upper_port>="1042"
It returns the field private_address with a value like 10.1.2, and then I append the .8 to get the internal IP. I need to be able to do this with multiple results from other searches, however. Something like this:
<initial search results that include src_ip and src_port>
| rex field=src_ip "(?<first3octets>\d{1,3}\.\d{1,3}\.\d{1,3})(?<lastoctet>\.\d{1,3})"
| inputlookup PAT_translation_table append=true where 'public_address'=first3octets 'lower_port'<=src_port 'upper_port'>=src_port
In this example, inputlookup returns nothing. If I just use the lookup command, I can't use greater-than or less-than, so it returns all the values as a multivalue field for private_address, a multivalue field for upper_port, and a separate multivalue field for lower_port. How would I query that?! Do any of you have suggestions on how I can do this?
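A sketch of one workaround, assuming the lookup and field names above: let lookup return every row that matches the first three octets (the range fields come back multivalue), expand the candidates, and then filter by port range with where.
<initial search results that include src_ip and src_port>
| rex field=src_ip "(?<first3octets>\d{1,3}\.\d{1,3}\.\d{1,3})\.(?<lastoctet>\d{1,3})"
| lookup PAT_translation_table public_address AS first3octets OUTPUT private_address lower_port upper_port
| eval candidate=mvzip(mvzip(private_address, lower_port, "|"), upper_port, "|")
| mvexpand candidate
| eval private_address=mvindex(split(candidate,"|"),0), lower_port=tonumber(mvindex(split(candidate,"|"),1)), upper_port=tonumber(mvindex(split(candidate,"|"),2))
| where tonumber(src_port)>=lower_port AND tonumber(src_port)<=upper_port
| eval internal_ip=private_address.".".lastoctet
mvzip keeps the three lookup fields aligned per row, so the port-range test is applied to the correct private_address after the expand.
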
Looking for an example query to find outliers or anomalies in my CSV data using stddev in Splunk Enterprise. Fields from the CSV: user, action, src, dest, host, _time. Any help would be appreciated. Thanks in advance!
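A minimal sketch, assuming the CSV has been uploaded as a lookup (mydata.csv is a placeholder name) and that an outlier means an unusually high event count for a user in an hour; the 2-sigma threshold is arbitrary and can be tuned.
| inputlookup mydata.csv
| bin _time span=1h
| stats count by _time user
| eventstats avg(count) as avg_count stdev(count) as stdev_count by user
| where count > avg_count + 2 * stdev_count
The same pattern works against an index by replacing the inputlookup with a normal search over the indexed data.
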
Hey all, looking for some assistance with this Splunk search. I've looked at other examples, but for some reason I'm unable to replicate them with our data set. I currently have:
index=DB DNS="*aws.amazon.com*"
| dedup DNS
| stats count by DNS
| lookup dataFile hostname AS DNS OUTPUT hostname as matched
| eval matched=if(isnull(matched), "No Match", "Matched")
| stats sum(count) BY matched
This matches the index against the lookup file dataFile by DNS name and gives me the count of what matches and the count of what has no match in dataFile. However, I need this essentially flipped: the lookup table dataFile should be the base set of data, compared against the index DB, so that it displays the count of assets not matched in the index. I've tried something like this:
index=DB DNS="*aws.amazon.com*" [| inputlookup dataFile | rename hostname as host | fields host]
| lookup dataFile hostname as DNS output hostname
| stats values(hostname) as host
but it just keeps parsing, so something is wrong here. Not sure what the best approach is.
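One way to flip it, a sketch reusing the same names: start from the lookup with inputlookup and exclude every host that does appear in the index via a NOT subsearch (standard subsearch result limits apply).
| inputlookup dataFile
| rename hostname as DNS
| search NOT
    [ search index=DB DNS="*aws.amazon.com*"
      | dedup DNS
      | fields DNS ]
| stats count as unmatched_assets
Dropping the final stats instead lists the individual lookup entries that never appear in the index.
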
I am getting the error below:
command.mvexpand: output will be truncated at 3200 results due to excessive memory usage. Memory threshold of 500MB as configured in limits.conf / [mvexpand] / max_mem_usage_mb has been reached.
This is the query I am running:
index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| mvexpand user_actions
| spath output=pp_user_action_name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| where pp_user_action_name like "%newintakeprocess.aspx%"
| eval pp_user_action_name=substr(pp_user_action_name,0,40)
| stats count(pp_user_action_response) as "Total_Calls" avg(pp_user_action_response) as "User_Action_Response" by pp_user_action_name
| eval User_Action_Response=round(User_Action_Response,0)
| sort -Total_Calls
How can I optimize this search and resolve the mvexpand limit issue?
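A sketch of two common mitigations, assuming the same data: push the page-name filter into the base search so far fewer sessions reach mvexpand, and keep only the userActions array (fields drops _raw) so each event being expanded is much smaller.
index="dynatrace" sourcetype="dynatrace:usersession" "newintakeprocess.aspx"
| spath output=user_actions path="userActions{}"
| fields user_actions
| mvexpand user_actions
| spath output=pp_user_action_name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| where pp_user_action_name like "%newintakeprocess.aspx%"
| eval pp_user_action_name=substr(pp_user_action_name,0,40)
| stats count(pp_user_action_response) as Total_Calls avg(pp_user_action_response) as User_Action_Response by pp_user_action_name
| eval User_Action_Response=round(User_Action_Response,0)
| sort -Total_Calls
If the limit is still hit, raising max_mem_usage_mb under [mvexpand] in limits.conf is the last resort.
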
Hello, upon upload of the SSE application to a Splunk Cloud search head, it returns 3 failures, preventing it from installing. I have attached a screenshot of the failure summary in case someone is able to offer any suggestions. The cloud version is 8.2.2202.1. SSE version: 3.6.0.
Hello, when I try to create a world map using geostats in Dashboard Studio, I get an error. However, when I use the same query on a Classic dashboard, it works. How can I resolve this? Please advise.
Query: index="qradar_offenses" | spath | iplocation src | geostats count by src
Splunk version: 8.2.8
Thanks, Siddarth
Hello, is there a way to convert this query to run with tstats? It is very slow when running over two weeks of data...
index=index_name host=IP_name
| eval lag_sec = (_indextime - _time)
| eval lag_min = lag_sec/60
| timechart span=1h avg(lag_min)
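A sketch of an approximation with tstats, assuming _indextime is queryable via tstats in your environment: bucket events finely, take the latest _indextime per bucket, and treat the bucket time as the event time (accurate to within the 1-minute span).
| tstats max(_indextime) as indextime where index=index_name host=IP_name by _time span=1m
| eval lag_min=(indextime - _time) / 60
| timechart span=1h avg(lag_min)
This trades per-event precision for speed, which is usually acceptable for a two-week lag trend.
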
Hi, I need to extract several fields from my JSON logs. For example, for a login event I need to create a field "action" when category=SignInLogs: when succeeded (the last field) equals true or false, the field should become action=success or action=failure, to be CIM compliant. This value is already extracted under the field "properties.authenticationDetails{}.succeeded". Is it possible to do this with a field transformation in the Splunk UI? Thanks in advance!!
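A sketch of the eval logic, assuming the extracted field name above (if succeeded comes back multivalue from the array, it may need mvindex or mvdedup first); the same expression could be saved as a calculated field so it is applied automatically at search time.
... | eval action=case(category=="SignInLogs" AND 'properties.authenticationDetails{}.succeeded'=="true", "success",
                       category=="SignInLogs" AND 'properties.authenticationDetails{}.succeeded'=="false", "failure")
The single quotes around the field name are needed because it contains dots and braces.
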
Hello, I am currently using the | append method for some queries, but I am curious whether there is a better way to write these. We are trying to create a single alert that could be triggered by various conditions, such as the total number of failures or the total number of unique customer failures. The following is a simplified example of what I am currently doing and would like to improve, if anyone knows how:
"base query stuff"
| stats count as TOTAL
    count(eval(SEVERITY="INFO")) as SUCCESS
    count(eval(SEVERITY="SOAP-FAULT")) as FAULT
    count(eval(SEVERITY!="INFO" AND SEVERITY!="SOAP-FAULT")) as ERROR
| append [search "base query stuff" SEVERITY="SOAP-FAULT" | stats dc(userId) as UNIQUE_FAULT]
| where UNIQUE_FAULT > 10 OR FAULT > 20 OR ERROR > 30
I would also love to be able to create a table with all of this data (hence the SUCCESS field), containing the totals and the unique customer impacts of each!
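A sketch of a single-pass version using the same fields: dc() accepts an eval, so the unique-customer count can live in the same stats and the append disappears; the result is one row carrying all five values, ready for both the alert condition and a table.
"base query stuff"
| stats count as TOTAL
        count(eval(SEVERITY="INFO")) as SUCCESS
        count(eval(SEVERITY="SOAP-FAULT")) as FAULT
        count(eval(SEVERITY!="INFO" AND SEVERITY!="SOAP-FAULT")) as ERROR
        dc(eval(if(SEVERITY="SOAP-FAULT", userId, null()))) as UNIQUE_FAULT
| where UNIQUE_FAULT > 10 OR FAULT > 20 OR ERROR > 30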