Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi Splunkers! I want to achieve a visualization on a report that shows a comparison with an arrow (increase or decrease). I'm attaching an image of what I want to achieve. Please help me to know if there's a way to achieve it, or if there's any other trend visualization I can use that is similar. TIA
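One possible approach (a minimal sketch, not from the post; the index and time windows are placeholders) compares two periods with stats and renders the arrow with eval. Note also that the built-in Single Value visualization draws a trend arrow automatically when fed a timechart:

index=your_index earliest=-2d@d latest=@d
| eval period=if(_time>=relative_time(now(),"-1d@d"),"current","previous")
| stats count(eval(period="current")) as current, count(eval(period="previous")) as previous
| eval change=current-previous
| eval trend=case(change>0,"▲ ".change, change<0,"▼ ".abs(change), true(),"no change")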
Hello Community. I've been looking at the installation process of Splunk CIM and got stuck on a step. After installation there seems to be a need to whitelist indexes for datamodels (or vice versa). I realize this can be done pretty easily through the GUI, though normally the configuration is handled centrally. Having come up empty looking through the contents of the app/package: is it possible to specify index whitelists for particular datamodels in a conf file that I may have missed? Best regards
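For reference, the CIM setup UI stores these settings as macro definitions, so they can be managed centrally in Splunk_SA_CIM/local/macros.conf. A sketch (stanza names follow the cim_<DataModel>_indexes pattern; the index names here are examples):

# Splunk_SA_CIM/local/macros.conf
[cim_Authentication_indexes]
definition = (index=wineventlog OR index=linux_secure)

[cim_Network_Traffic_indexes]
definition = (index=firewall)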
I have a KV store based lookup for Port Address Translation. Given the first 3 octets of a public-facing IP and a port, I need to look up the first 3 octets of the private address. The lookup contains the first 3 octets of the public IP, the first 3 octets of the private IP, and the maximum and minimum ports for that private subnet range. Starting with a public_address of 123.45.67.8, port 1042, something like this works:

| inputlookup PAT_translation_table where public_address="123.45.67" lower_port<="1042" upper_port>="1042"

It returns the field private_address with a value like 10.1.2, and then I append the .8 to get the internal IP. I need to be able to do this with multiple results from other searches, however. Something like this:

<initial search results that include src_ip and src_port>
| rex field=src_ip "(?<first3octets>\d{1,3}\.\d{1,3}\.\d{1,3})(?<lastoctet>\.\d{1,3})"
| inputlookup PAT_translation_table append=true where 'public_address'=first3octets 'lower_port'<=src_port 'upper_port'>=src_port

In this example, inputlookup returns nothing. If I just use the lookup command, I can't use greater-than or less-than comparisons, so it returns all the values as an mvfield for private_address, an mvfield for upper_port, and a separate mvfield for lower_port. How would I query that?! Do any of you have any suggestions how I can do this?
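One workaround sketch (untested; field names taken from the post): let the lookup return all candidate rows as multivalue fields, zip them together, expand once, and filter on the port range. The lookup definition's max_matches must be high enough for all candidate rows to come back:

<initial search results that include src_ip and src_port>
| rex field=src_ip "(?<first3octets>\d{1,3}\.\d{1,3}\.\d{1,3})(?<lastoctet>\.\d{1,3})"
| lookup PAT_translation_table public_address as first3octets OUTPUT private_address lower_port upper_port
| eval candidate=mvzip(mvzip(private_address, lower_port, ","), upper_port, ",")
| mvexpand candidate
| eval private_address=mvindex(split(candidate,","),0), lower_port=tonumber(mvindex(split(candidate,","),1)), upper_port=tonumber(mvindex(split(candidate,","),2))
| where src_port>=lower_port AND src_port<=upper_port
| eval internal_ip=private_address.lastoctet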
Looking for the exact query to find outliers or anomalies in my CSV data using stddev in Splunk Enterprise. Fields from the CSV: user, action, src, dest, host, _time. Any help would be appreciated. Thanks in advance!
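A typical stddev-based outlier pattern (a sketch; the aggregation by user and the 2-sigma threshold are placeholder choices):

| inputlookup yourdata.csv
| stats count by user
| eventstats avg(count) as avg_count, stdev(count) as stdev_count
| eval isOutlier=if(count > avg_count + 2*stdev_count, 1, 0)
| where isOutlier=1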
Hey all, looking for some assistance with this Splunk search. I've looked at other examples, but for some reason I'm unable to replicate them with our data set. Currently I have:

index=DB DNS="*aws.amazon.com*"
| dedup DNS
| stats count by DNS
| lookup dataFile hostname AS DNS OUTPUT hostname as matched
| eval matched=if(isnull(matched), "No Match", "Matched")
| stats sum(count) BY matched

What this is doing is matching the index and the lookup file dataFile by the DNS name, and it gives me the count of what matches and the count of what doesn't have a match in dataFile. However, I'm looking for this but essentially flipped: I need the results of the lookup table dataFile to be the base set of data, compared against the index named DB, so that it displays the count of assets not matched in the index. I've tried something like this:

index=DB DNS="*aws.amazon.com*" [| inputlookup dataFile | rename hostname as host | fields host]
| lookup dataFile hostname as DNS output hostname
| stats values(hostname) as host

but it just keeps parsing, so something is wrong here. Not sure what the best approach is.
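One way to flip it (a sketch using the field names from the post): start from the lookup and left-join the per-DNS counts from the index, so lookup rows with no events survive with a null count:

| inputlookup dataFile
| fields hostname
| join type=left hostname
    [ search index=DB DNS="*aws.amazon.com*"
      | stats count by DNS
      | rename DNS as hostname ]
| eval matched=if(isnull(count), "No Match", "Matched")
| stats count by matched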
Getting the error below:

command.mvexpand: output will be truncated at 3200 results due to excessive memory usage. Memory threshold of 500MB as configured in limits.conf / [mvexpand] / max_mem_usage_mb has been reached.

This is the query I am running:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| mvexpand user_actions
| spath output=pp_user_action_name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| where pp_user_action_name like "%newintakeprocess.aspx%"
| eval pp_user_action_name=substr(pp_user_action_name,0,40)
| stats count(pp_user_action_response) AS "Total_Calls", avg(pp_user_action_response) AS "User_Action_Response" by pp_user_action_name
| eval User_Action_Response=round(User_Action_Response,0)
| sort -Total_Calls

How can I optimize this search and resolve the mvexpand limit issue?
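Two sketch-level optimizations (untested): push the string filter into the base search so only sessions containing that page get expanded, and drop everything but the needed field before mvexpand so each expanded row carries less data. If that is not enough, max_mem_usage_mb under [mvexpand] in limits.conf can also be raised:

index="dynatrace" sourcetype="dynatrace:usersession" "*newintakeprocess.aspx*"
| fields _raw
| spath output=user_actions path="userActions{}"
| fields user_actions
| mvexpand user_actions
| spath output=pp_user_action_name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| where pp_user_action_name like "%newintakeprocess.aspx%"
| eval pp_user_action_name=substr(pp_user_action_name,0,40)
| stats count(pp_user_action_response) AS "Total_Calls", avg(pp_user_action_response) AS "User_Action_Response" by pp_user_action_name
| eval User_Action_Response=round(User_Action_Response,0)
| sort -Total_Calls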
Hello, upon upload of the SSE application to a Splunk Cloud search head, it returns 3 failures, preventing it from installing. I have attached a screenshot of the failure summary if someone would be able to offer any suggestions. Cloud version: 8.2.2202.1. SSE version: 3.6.0.
Hello, when I try to create a world map using geostats in Dashboard Studio, I get the error below. However, when I use the same query on a Classic dashboard, it's functional. How can I resolve this? Please advise.

Query: index="qradar_offenses" | spath | iplocation src | geostats count by src

Splunk Version: 8.2.8

Thanks, Siddarth
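One thing that may be worth trying (a sketch, not verified against 8.2.8): Dashboard Studio's map layers generally want explicit coordinate fields, so emitting latitude/longitude columns instead of relying on the geostats output sometimes helps:

index="qradar_offenses"
| spath
| iplocation src
| stats count by lat, lon
| rename lat as latitude, lon as longitude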
Hello, is there a way to convert this query to run with tstats? It is _slow_ when running it for two weeks of data...

index=index_name host=IP_name
| eval lag_sec = (_indextime - _time)
| eval lag_min = lag_sec/60
| timechart span=1h avg(lag_min)
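tstats cannot subtract _time from _indextime per event, but a common approximation (a sketch) groups at fine granularity, diffs the averaged index time against the bucket time, and then re-buckets:

| tstats avg(_indextime) as avg_indextime where index=index_name host=IP_name by _time span=1s
| eval lag_min=(avg_indextime - _time)/60
| timechart span=1h avg(lag_min)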
Hi, I need to extract several fields from my JSON logs. For example, I have a login event like this (image attached). I need to create a field "action" when category=SignInLogs: when succeeded (the last field) is equal to true or false, it should generate action=success or action=failure, to be CIM compliant. This value is already extracted under the field "properties.authenticationDetails{}.succeeded". Is it possible to do that with a field transformation in the Splunk UI? Thanks in advance!!
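If the UI transformation does not allow the conditional, a calculated field can (a sketch; the sourcetype name is a placeholder, and the same expression can be entered in the UI under Settings > Fields > Calculated fields):

# props.conf
[your:azure:sourcetype]
EVAL-action = case(category=="SignInLogs" AND 'properties.authenticationDetails{}.succeeded'=="true", "success", category=="SignInLogs" AND 'properties.authenticationDetails{}.succeeded'=="false", "failure")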
Hello, I am currently using the | append method for some queries, but was curious if there is a better way to write these. We are trying to create a single alert that could be triggered by various conditions, such as the total number of failures or the total number of unique customer failures. The following is a simplified example of what I am currently doing and would like to improve if anyone knows how:

"base query stuff"
| stats count as TOTAL count(eval(SEVERITY="INFO")) as SUCCESS count(eval(SEVERITY="SOAP-FAULT")) as FAULT count(eval(SEVERITY!="INFO" AND SEVERITY!="SOAP-FAULT")) as ERROR
| append [search "base query stuff" SEVERITY="SOAP-FAULT" | stats dc(userId) as UNIQUE_FAULT]
| where UNIQUE_FAULT > 10 OR FAULT > 20 OR ERROR > 30

I would also love to be able to create a table with all of this data (hence the SUCCESS variable), containing the totals of each and the unique customer impacts of each!
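The append can likely be folded into the first stats with a conditional distinct count (a sketch based on the query above), which also leaves every total on a single row for a table:

"base query stuff"
| stats count as TOTAL,
        count(eval(SEVERITY="INFO")) as SUCCESS,
        count(eval(SEVERITY="SOAP-FAULT")) as FAULT,
        count(eval(SEVERITY!="INFO" AND SEVERITY!="SOAP-FAULT")) as ERROR,
        dc(eval(if(SEVERITY="SOAP-FAULT", userId, null()))) as UNIQUE_FAULT
| where UNIQUE_FAULT > 10 OR FAULT > 20 OR ERROR > 30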
Hello All. I'm trying to install the Splunk connector inside our AKS cluster to collect app logs and send them to our Splunk Cloud instance. I'm following this tutorial. I downloaded the chart and set the values.yaml with our Splunk Cloud instance and our token. The daemonsets were deployed and the pods are running, but when I check the logs I'm seeing a timeout error. So I got into the container and ran a curl command to our Splunk instance and was able to connect, so there is no firewall or anything in the middle. What could I possibly be missing here? Is a URL like the one in the example the correct way to configure this? I mean like: https://mysplunkinstance.splunkcloud.com:8088/services/collector The token is correct according to our config (otherwise I believe it would give a 401/403 or something in the error logs). In this case it's trying to connect but failing with that timeout error. Anything else I should be checking? Thank you.
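For comparison, a values.yaml sketch for Splunk Connect for Kubernetes (the global.splunk.hec block; the hostname is an example). The chart takes a bare host plus separate port/protocol rather than a full URL, and on Splunk Cloud the HEC endpoint is typically http-inputs-<stack>.splunkcloud.com on port 443 rather than the instance hostname on 8088, which alone could explain a connection timeout:

global:
  splunk:
    hec:
      host: http-inputs-mysplunkinstance.splunkcloud.com
      port: 443
      protocol: https
      insecureSSL: false
      token: 00000000-0000-0000-0000-000000000000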
I am trying to create a timechart overlay of blocked traffic compared to total traffic with the following search:

| tstats count AS "Total Traffic" from datamodel=Network_Traffic where (nodename=All_Traffic) OR (nodename=Blocked_Traffic) All_Traffic.src_zone=INTERNET-O groupby _time span=1d, All_Traffic.src_zone, All_Traffic.action, All_Traffic.Traffic_By_Action.Blocked_Traffic prestats=true
| `drop_dm_object_name("All_Traffic")`
| timechart span=1d count by action
| eval "Block Avg" = round('blocked'*100/('allowed'+'blocked'),2)

This search has two issues:
1. The timechart shows bars by action, and I'd like to see just the total count of network sessions.
2. The average is basically flatlined at roughly 40%, whereas my totals by action are roughly 1.5B.
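A sketch for the first issue (untested): group the tstats by action only, then rebuild the totals with eval-style timechart aggregations so the chart carries a single total series and the percentage is computed from the same rows:

| tstats count from datamodel=Network_Traffic where nodename=All_Traffic All_Traffic.src_zone=INTERNET-O groupby _time span=1d All_Traffic.action
| `drop_dm_object_name("All_Traffic")`
| timechart span=1d sum(count) as "Total Traffic", sum(eval(if(action="blocked",count,0))) as blocked, sum(eval(if(action="allowed",count,0))) as allowed
| eval "Block Avg" = round(blocked*100/(allowed+blocked),2)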
On Splunk Enterprise on-prem 9.0.1, after using Smart Forecasting (MLTK 5.3.1) and publishing the recently created model, the apply command returns the error "Error in 'apply' command: list assignment index out of range". But the same model created using SPL with the fit command works fine. Here is the traceback (stderr lines logged by ChunkedExternProcessor at 11-07-2022 15:17:42; the repeated log prefixes are trimmed for readability):

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/processors/ApplyProcessor.py", line 158, in apply
    prediction_df = algo.apply(df, process_options)
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/algos/StateSpaceForecast.py", line 712, in apply
    df = self.add_output_metadata(df)
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/algos/StateSpaceForecast.py", line 353, in add_output_metadata
    metadata[i] = 'f'
IndexError: list assignment index out of range
WARNING Error while applying model "modelo4": list assignment index out of range
list assignment index out of range

(the identical traceback is then logged a second time, followed by:)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/cexc/__init__.py", line 174, in run
    while self._handle_chunk():
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/cexc/__init__.py", line 236, in _handle_chunk
    ret = self.handler(metadata, body)
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/apply.py", line 136, in handler
    self.controller.execute()
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/chunked_controller.py", line 220, in execute
    self.processor.process()
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/processors/ApplyProcessor.py", line 177, in process
    self.df = self.apply(self.df, self.algo, self.process_options)
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/processors/ApplyProcessor.py", line 166, in apply
    raise RuntimeError(e)
RuntimeError: list assignment index out of range

Has anyone managed to get the publish option in MLTK working?
index=dev log-severity=INFO app name=abcd
| rex "tv counts for indicator S = (?<Count>\d+)"
| stats count by _time, Count
| table _time, Count

I have two log patterns I extract separately:
1) tv counts for indicator S = (?<Count>\d+)
2) Dishtv counts for indicator S = (?<Count>\d+)

Both counts get combined because the messages share the same wording ("tv counts for indicator S =" also matches inside "Dishtv counts for indicator S ="). The Spark DataFrames that generate messages 1 and 2 are different, and they have different output counts, but in the graphs they overlap because of the identical logger wording. How can I get separate counts for each of them? Please suggest.
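A sketch that separates the two with one anchored rex (the span is a placeholder): the \b word boundary stops the plain "tv" alternative from matching inside "Dishtv", since "h" and "t" are both word characters:

index=dev log-severity=INFO app name=abcd
| rex "(?<indicator>Dishtv|\btv) counts for indicator S = (?<Count>\d+)"
| timechart span=1h sum(Count) by indicator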
Hi all, I have an established query which is working fine, but when I try to add an inputlookup to the query, it's not working. I am using a federated search. My need is to configure a maintenance table as a CSV lookup and refer to it in the query. When I try to access the CSV file via inputlookup, I get an error. Can you please suggest whether there is a way to configure maintenance for a particular backend via a lookup table and refer to it in the query? I want to exclude the backend host for a particular date and time. Query below:

index="federated:XXX" ("HTTP response code" OR "url-open" OR "Host connection failed") NOT "HTTP response code 2**"
| rex field=_raw "https://(?<backend>.*)\:"
| rex field=_raw "gtid\(\w{1,24}\): (?<error>.*)"
| rex field=_raw "^<\d+>(?P<date>\d+\-\d+\-\d+\w+:\d+:\d+\.\d+)[^ \n]* (?P<host>\w+)\s+\[(?P<domain>[^\]]+)"
| eval thresholdValue = case(backend=="******" AND domain=="*****", 500, backend=="abcd.com" AND domain!="abcd-ALERTS", 350, backend=="ertyu.com" AND domain=="ertyu", 1000, backend!="qwerty.com", 100)
| stats count by domain, backend, error, source, thresholdValue
| sort -count
| where count>thresholdValue
| eval Priority=if(count>200,"3","4")
| eval createINCTicket="0"
| table domain, backend, error, source, thresholdValue, Priority, count, createINCTicket
| lookup incsearch DOMAIN AS domain URL AS backend OUTPUT APPCODE AS BackendAppcode CREATETICKET AS CT INCIDENT AS incident

Maintenance CSV lookup:

maint_backend  maint_domain  date_hour_start  date_hour_end  date_mday_start  date_mday_end
abcd.com       abcd-abcd     1                3              6                7
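A sketch of the maintenance exclusion (the lookup definition name maintenance_lookup is a placeholder for a definition over the CSV above): output the window per backend/domain, derive hour and day-of-month from _time, and drop events inside the window. Note the filter has to sit before the stats, while _time is still available:

... base search and rex extractions ...
| lookup maintenance_lookup maint_backend as backend maint_domain as domain OUTPUT date_hour_start date_hour_end date_mday_start date_mday_end
| eval hr=tonumber(strftime(_time,"%H")), mday=tonumber(strftime(_time,"%d"))
| where isnull(date_hour_start) OR NOT (hr>=date_hour_start AND hr<=date_hour_end AND mday>=date_mday_start AND mday<=date_mday_end)
| ... stats, thresholds, and incsearch lookup as before ...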
I have 3 date columns. I have already calculated the difference from the current day, so the values in the 3 columns are that difference in days.

Col1  Col2  Col3
12          7
2     34    45
15    25
250   56    120
21

Required filter:
- I have to filter to only days <=40 across all 3 columns.
- If a column is null and the other 2 columns have values <=40, the row needs to be shown.
- If one or two columns are null and the remaining columns have values <=40, the row needs to be displayed.
- If a column is null and another column's value is greater than 40, the row needs to be removed from scope.

Please let me know the search.
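A sketch of the null-aware filter those rules describe (keep a row only when every non-null column is <=40 and at least one column has a value):

| where (isnull(Col1) OR Col1<=40)
    AND (isnull(Col2) OR Col2<=40)
    AND (isnull(Col3) OR Col3<=40)
    AND (isnotnull(Col1) OR isnotnull(Col2) OR isnotnull(Col3))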
I am trying to only include dest_ip in my search if action is not "blocked". These are the input panels:

<input type="dropdown" token="my_action" searchWhenChanged="true">
  <label>Action</label>
  <choice value="*">any</choice>
  <choice value="allowed">allowed</choice>
  <choice value="blocked">blocked</choice>
  <prefix>action=</prefix>
  <change>
    <condition label="blocked">
      <unset token="is_not_blocked"></unset>
    </condition>
    <condition label="allowed">
      <set token="is_not_blocked">true</set>
    </condition>
    <condition label="*">
      <set token="is_not_blocked">true</set>
    </condition>
  </change>
  <default>*</default>
</input>
<input type="text" token="my_dest_ip" searchWhenChanged="true" depends="$is_not_blocked$">
  <label>Destination IP address (CIDR okay)</label>
  <default>*</default>
  <prefix>dest_ip=</prefix>
  <initialValue>*</initialValue>
</input>

This is the search:

<panel>
  <title>Network Connections by Source</title>
  <table>
    <title>Count of network connections by source - click on a line for a list of sessions from that source</title>
    <search>
      <query>index=proxy $my_host$ $my_src_ip$ $my_dest_ip$ $my_url$ $my_action$
| lookup dnslookup clientip as src_ip OUTPUT clienthost as Host
| stats count by src_ip Host action
| table src_ip, Host action count
| sort -count
| rename src_ip as "Source_IP" action as Action count as "Count"</query>
      <earliest>$time_range.earliest$</earliest>
      <latest>$time_range.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="count">20</option>
    <option name="dataOverlayMode">none</option>
    <option name="percentagesRow">false</option>
    <option name="refresh.display">progressbar</option>
    <option name="rowNumbers">false</option>
    <option name="totalsRow">false</option>
    <option name="wrap">true</option>
    <drilldown>
      <set token="drill_client_ip">$row.Source_IP$</set>
      <set token="drill_url">*</set>
      <set token="drill_dest_ip">*</set>
      <set token="drill_action">$row.Action$</set>
    </drilldown>
  </table>
</panel>

The input panel for my_dest_ip disappears when I select "blocked" in the action panel, but the search still includes dest_ip=*. What am I not understanding?
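One common explanation is that hiding an input via depends does not clear its token, so $my_dest_ip$ keeps its last value. A sketch of a fix (untested): explicitly blank the token when "blocked" is chosen and rebuild it from the form value otherwise, in the dropdown's <change> block (the prefix is reapplied manually because <set> bypasses the input's <prefix>):

<change>
  <condition label="blocked">
    <unset token="is_not_blocked"></unset>
    <set token="my_dest_ip"></set>
  </condition>
  <condition label="allowed">
    <set token="is_not_blocked">true</set>
    <set token="my_dest_ip">dest_ip=$form.my_dest_ip$</set>
  </condition>
  <condition label="*">
    <set token="is_not_blocked">true</set>
    <set token="my_dest_ip">dest_ip=$form.my_dest_ip$</set>
  </condition>
</change>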
Hello, currently our client receives daily emails with the data from a CSV file embedded in the email. Is there a Splunk-only process for encrypting the email after embedding, or encrypting the attached CSV file? We know how to encrypt at the OS level, but want to know if it can be done via Splunk only.

Example CSV file:

Name, Address, State, ...
John Smith, 123 B'way, NY
Betty Boop, 456 Main Street, NJ
etc...

Thanks in advance and God bless, Genesius
Hi, I have generated a search which returns a list of hosts and the count of events for each host. Sometimes the host value is returned as an IP address and other times as a host name. I have a lookup table which contains a list of all IP addresses and host names, in addition to other metadata. So the result of the search is something like:

Host1        100
192.168.0.2  110
Host3        120

and the lookup table is something like:

Host1  192.168.0.1  App1  Owner1
Host2  192.168.0.2  App2  Owner2
Host3  192.168.0.3  App3  Owner3

I need to look up the host value (IP or server name) returned in the search result and return all the metadata associated with that value.
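A sketch (assuming the lookup is defined as asset_lookup with columns host, ip, app, owner — all placeholder names): run the lookup twice, keyed on host name first and IP second, letting OUTPUTNEW fill in whatever the first pass missed:

<search returning a host field that is sometimes a name, sometimes an IP>
| lookup asset_lookup host as host OUTPUT ip, app, owner
| lookup asset_lookup ip as host OUTPUTNEW host as host_name, app, owner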