All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi Team, can someone help me with creating a report in Excel form? The report should be a daily summary table. I want output like the one below:

Day | Volume | Value | Avg Amount
5-May | 123,456 | $123 | $456
MTD-May | 1,234,567 | $234 | $567
Prior Month | 4,567,898 | $2,345 | $567

Receiver | Receiving Participant | Account issue | Total
2 | 3 | 4 | 9
3 | 4 | 5 | 12
1 | 2 | 3 | 6

Can someone help me build this query? I appreciate your help.
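A summary table like this is typically assembled from three time windows with stats and appendcols, then exported as CSV (which Excel opens; Splunk does not export native .xlsx). A minimal sketch, where the index name and the amount field are assumptions:

```
index=transactions earliest=@d latest=now
| stats count AS Volume, sum(amount) AS Value, avg(amount) AS "Avg Amount"
| appendcols
    [ search index=transactions earliest=@mon latest=now
      | stats count AS MTD_Volume, sum(amount) AS MTD_Value ]
| appendcols
    [ search index=transactions earliest=-1mon@mon latest=@mon
      | stats count AS PriorMonth_Volume, sum(amount) AS PriorMonth_Value ]
```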
Hi all, I have the following Correlation Search set up to detect accounts that have been excessively locked out during a short period of time. However, we get many repeat alerts even if the last lockout time is long past (probably because of how it is configured). Is there a way to stop the alerting once the lockouts stop? For example, an account has been locked out an excessive number of times (e.g. 10 times); the first occurrence is on 05/01/2021 at 09:34:03 and the last is on 05/01/2021 at 18:22:08, yet we would still get alerted days after the fact (e.g. on 05/05/2021).

Correlation Search:
Mode: Manual
Search: index=wineventlog EventCode=4740 | stats count min(_time) as firstTime max(_time) as lastTime values(dest_nt_domain) as machines by user, signature | `ctime(firstTime)` | `ctime(lastTime)` | search count > 5
Time Range: Earliest Time: -7d, Latest Time: -10m@m
Cron Schedule: */15 * * * *
Scheduling: Continuous
Schedule Window: Auto
Schedule Priority: Default
Trigger Conditions: Trigger alert when Number of Results is greater than 0
Throttling: Window duration: 1 day(s); Fields to group by: user
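One way to silence the alert once lockouts stop (a sketch, not an ES-specific feature) is to require the most recent lockout to fall within the last scheduled run, so stale clusters of lockouts no longer match:

```
index=wineventlog EventCode=4740
| stats count min(_time) as firstTime max(_time) as lastTime values(dest_nt_domain) as machines by user, signature
| where count > 5 AND lastTime >= relative_time(now(), "-15m@m")
| `ctime(firstTime)` | `ctime(lastTime)`
```

With the search still running every 15 minutes over -7d, an account only alerts while lockouts are actually occurring.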
Hello guys, I need some help with setting a token for time in a dashboard so that it populates the date when the user selects a date range or a date & time range under the search dropdown. Similarly, the same token should populate the date when the user selects a time range of previous week, previous month, two days ago, and so on. So far this is what I have, but it's not populating any dates.

Token created:
<fieldset submitButton="false"> <input type="time" token="time_selected" searchWhenChanged="true"> <label>Select Time </label> <default> <earliest>-7d@h</earliest> <latest>now</latest> </default> <change> </change> </input>

This is where I used the token:
| eval StartTime=strpTime("tokEarliest", "%Y-%m-%d %H:%M:%S"), EndTime= strpTime(TimeOfOrigin, "%Y-%m-%d %H:%M:%S.%q")
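One pattern for this (a sketch; the derived token name is arbitrary) is to compute a date token inside the input's change handler with an eval. tonumber handles the epoch values produced by explicit date pickers, and relative_time handles relative specifiers such as -7d@h and "previous week"-style presets:

```
<input type="time" token="time_selected" searchWhenChanged="true">
  <label>Select Time</label>
  <default>
    <earliest>-7d@h</earliest>
    <latest>now</latest>
  </default>
  <change>
    <eval token="start_date">strftime(coalesce(tonumber('time_selected.earliest'),
        relative_time(now(), 'time_selected.earliest')), "%Y-%m-%d %H:%M:%S")</eval>
  </change>
</input>
```

The derived $start_date$ token can then be used directly in the panel search, instead of strptime-ing a literal string.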
I have a generic search that I am using to display data for a handful of applications, which looks something like this: index=$index$ application=$app_name$ fieldA=$searchA$ fieldB=$searchB$ However, one of my applications does not have a `fieldA`. Therefore, if I were to perform a search using my dashboard, it would resolve to: index=myIndex application=application1 fieldA=* fieldB=* Because `fieldA` does not exist in `application1`, the search fails and I get nothing back. Is there a way to resolve search criteria for fields that do not exist?
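One workaround (sketched on the question's own field names) is to let events that lack fieldA through explicitly, since fieldA=* only matches events where the field exists:

```
index=$index$ application=$app_name$ (fieldA=$searchA$ OR NOT fieldA=*) fieldB=$searchB$
```

For the app without fieldA, the NOT fieldA=* clause keeps its events; whether that is also wanted when $searchA$ is a specific value depends on the dashboard, so an alternative is a per-app token default.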
2021-05-05 12:20:20.032 +0000 [QuartzScheduler_Worker-16] ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch java.net.SocketTimeoutException: Read timed out
2021-05-05 12:20:20.032 +0000 [QuartzScheduler_Worker-16] ERROR org.easybatch.core.job.BatchJob - Unable to write records java.net.SocketTimeoutException: Read timed out
2021-05-05 12:20:20.032 +0000 [QuartzScheduler_Worker-16] INFO org.easybatch.core.job.BatchJob - Job 'IPOD_UNBRICK_LOG' finished with status: FAILED
I am having issues ingesting PCAP files from the GUI. I found similar Answers and bug "STREAM-4235", but it appears to be resolved in Stream v7.3, which I am currently using. I have tried Splunk Enterprise 8.0.5 and 8.1.2. I tried following this documentation: https://docs.splunk.com/Documentation/StreamApp/7.3.0/DeployStreamApp/UseStreamtoparsePCAPfiles Per its instructions, I downloaded these apps: https://splunkbase.splunk.com/app/1809/ https://splunkbase.splunk.com/app/5238/ (Splunk_TA_stream seems to be "Splunk Add-on for Stream Forwarders"). From Splunk 8.0.5 I get an attribute error; I am assuming there is a compatibility issue. No errors from Splunk 8.1.2, but the files were nowhere to be found, with no good indication in the logs of what happened. I tested the server's ability to collect and index data into the target index via the collect command with no issues. On one of my test servers, I ran through the inputs.conf and "set_permissions.sh" steps found here: https://docs.splunk.com/Documentation/StreamApp/7.3.0/DeployStreamApp/InstallStreamForwarder It did more than what I wanted and didn't help.
How do I check whether the Splunk Security Essentials Datasets add-on is installed? I have Security Essentials installed on ES, but I keep getting "Data sets not available". Thanks.
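One way to check from the search bar (the label pattern here is an assumption) is the REST endpoint that lists locally installed apps:

```
| rest /services/apps/local splunk_server=local
| search label="*Security Essentials*"
| table title label version disabled
```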
Hi, I am trying to compare event type count statistics for two days using the following search: earliest=-48h latest=-24h | stats count as count1 by eventtype | table eventtype, count1 | join eventtype [search earliest=-24h latest=now() | stats count as count2 by eventtype | table eventtype, count2] | eval diff=(count2-count1) | table eventtype, diff | sort diff Is there any option to do this without using a subsearch and join? Thanks.
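A single pass over the full 48-hour range can replace the join: bucket each event into its day with eval, then pivot with chart, so no subsearch limits apply. A sketch:

```
earliest=-48h latest=now
| eval period=if(_time < relative_time(now(), "-24h"), "count1", "count2")
| chart count over eventtype by period
| eval diff=count2-count1
| table eventtype, diff
| sort diff
```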
Hi all, I performed an initial search and added a second search to it with the map command, which, based on the values of the LA_OLD field, runs a search on the same index.

index=summary | search PRATICA="TRAS" AND LA_OLD!=null | dedup LA | table CODICE_, CANALE, ADDRESS, PRATICA, LA, LA_OLD, PACCHETTO, DATA | map [search index=summary LA="$LA_OLD$" | rename LA as LAC_OLD, ADDRESS as ADDRESS_OLD, PACCHETTO as PT_OLD | eval CODE="$CODICE$", LA_NEW="$LA$", CANALE="$CANALE$", PRATICA_G="$PRATICA$", PT_NEW="$PACCHETTO$", ADDRESS_NEW="$ADDRESS$", DATA_MIG="$DATA$"] maxsearches=9999 | dedup LA_NEW | table CODE, CANALE, PRATICA_G, LA_NEW, LAC_OLD, PT_NEW, PT_OLD, ADDRESS_NEW, ADDRESS_OLD, DATA_MIG

The first query finds 1400 events; the second only finds 250 and returns only those 250 to me. I would like it to return all 1400 events, with the changes filled in for the 250 that have OLD values. Tks, BR
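map only emits rows that its subsearch returns, so the ~1150 events with no LA_OLD match are dropped. A left join on LA_OLD (a sketch reusing the question's field names) keeps all 1400 rows and fills the OLD columns only where a match exists:

```
index=summary PRATICA="TRAS" LA_OLD!=null
| dedup LA
| join type=left LA_OLD
    [ search index=summary
      | rename LA AS LA_OLD, ADDRESS AS ADDRESS_OLD, PACCHETTO AS PT_OLD
      | table LA_OLD, ADDRESS_OLD, PT_OLD ]
| table CODICE_, CANALE, PRATICA, LA, LA_OLD, PT_OLD, ADDRESS_OLD, DATA
```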
Hi all, After installing the add-on for eStreamer (https://splunkbase.splunk.com/app/3662/#/details) there is a configuration step where you click "Set up" inside the apps manager view. What we found is that this option is not visible. Is there any way to solve this issue? Thanks in advance. Best regards.
Hi all, We have a few custom CSV lookups that have been added to ES for threat intel. For the existing data, we can look up the artifacts and confirm that they are present in ES, but when we add new data to those lookups and reduce the "interval" option in Threat Intel Management, the new entries still do not get added to ES. The current setting for the data sources is 43200 seconds (12 hrs), but even after reducing it to a few minutes the new entries never make it to ES. In the Threat Intel Audit I do see the intel download time change, but that doesn't seem to make any difference. Is there a way to manually force ES to re-read the lookups and add the updated entries? Thanks, ~ Abhi
Hi All, I am wondering how people are working with metrics data in an IoT application without the IAI app, now that it has been deprecated. The main issue I am facing is enriching the metrics with asset-hierarchy-type information. Automatic lookups are not available for metrics, so the | m... searches end up getting complex, and the metrics workspaces always use the metric_name for grouping; as far as I can see they cannot be configured to use externally added hierarchies/groupings like in the IAI app. Any tips or thoughts on this would be appreciated. Kind regards, Simon
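Without IAI, enrichment is usually applied after mstats rather than via automatic lookups: pull the metric with a host (or device) dimension, then lookup into the hierarchy and re-aggregate. A sketch, where the index, metric name, and asset_hierarchy.csv lookup are all assumptions:

```
| mstats avg(_value) AS avg_temp WHERE index=iot_metrics metric_name="sensor.temperature" span=1h BY host
| lookup asset_hierarchy.csv host OUTPUT site line
| stats avg(avg_temp) AS avg_temp BY _time, site, line
```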
Can anyone suggest any solutions for this?
We have an IBM AIX 6 instance from which we want to fetch data into Splunk. It is no longer supported by IBM. Has anyone been sending data from AIX 6 to Splunk? We need guidance.
Hi guys, Wondering if you can help me out with the following. Within a single event I have two fields: 1) expiry_date 2) delivery_date I would like to create a new field giving me the difference in days between the delivery date and the expiry date for each order. For example, expiry_date=2021-05-25T00:00:00Z delivery_date=2021-04-27T19:00:44Z should give a result of 28 days. Hope you can help! Thanks in advance.
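A sketch of one way to do this: parse both timestamps with strptime, snap them to day boundaries with relative_time (so 2021-04-27T19:00:44Z to 2021-05-25T00:00:00Z counts as 28 calendar days rather than 27.2 elapsed days), and divide the difference by 86400 seconds:

```
| eval days_diff = (relative_time(strptime(expiry_date, "%Y-%m-%dT%H:%M:%SZ"), "@d")
                  - relative_time(strptime(delivery_date, "%Y-%m-%dT%H:%M:%SZ"), "@d")) / 86400
```

Dropping the relative_time snapping and using floor() instead would give elapsed whole days (27 in this example).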
Hi, I would like to know if there is the possibility to automatically trigger a playbook when there is a change in the status of a container (e.g. when it becomes "Closed")? Thank you in advance!
Hello, I have a lookup file with the data below; it auto-populates every day from a scheduled query.

Feature
Feature1
Feature 2
Feature 3
.
.
.

I need to send my colleagues a PDF from a dashboard with a line chart per feature. I tried trellis layout, but Splunk 7.x does not support PDF delivery for it.

| inputcsv 85PercUsage.csv | stats values(feature) AS extendfeat | mvexpand extendfeat | append [| makeresults | eval extendfeat=""] | map search="search index=usage sourcetype=usedfeature=\"$extendfeat$\" earliest=-1d@d+11h+30m latest=@d+11h+30m | bin _time span=5m | stats sum(quantity) as MaxUSed by feature,_time | timechart span=5m max(MaxUSed) as LicUsed by feature" maxsearches=100
This is my architecture:
1. A deployment server (DS) with apps in the deployment-apps folder
2. Two search heads (SH)
3. Two clustered indexers (CI)
4. A number of production servers with the Splunk forwarder installed (SF)

I feed both SH and SF with apps from the DS. SF writes to CI and SH reads from CI. I need to develop a custom app to read application logs. I know that I need inputs.conf on SF and props.conf (with e.g. the EXTRACT parameter set) on SH. What is the best and proper way to do this? Create two separate apps (one for SH and one for SF)? Or use one app but disable part of its config files somehow (e.g. based on server class)? I don't want inputs.conf to be used on SH or props.conf on SF. And what about Splunkbase apps (e.g. for Apache): how should they be split and how should they be maintained?
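A common pattern is two small apps mapped to different server classes on the DS, so inputs.conf never reaches the search heads and the search-time props never reach the forwarders. A serverclass.conf sketch, where the class, app, and host names are hypothetical:

```
# $SPLUNK_HOME/etc/system/local/serverclass.conf on the deployment server
[serverClass:myapp_forwarders]
whitelist.0 = prod-app-*

[serverClass:myapp_forwarders:app:myapp_inputs]
restartSplunkd = true

[serverClass:myapp_searchheads]
whitelist.0 = sh*

[serverClass:myapp_searchheads:app:myapp_props]
restartSplunkd = true
```

Splunkbase add-ons that mix inputs and search-time config can be handled the same way: either deploy the full TA everywhere (the unused parts are typically harmless) or split it per tier.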
Hi, I successfully created an SPL query that does what I need for a single host, but I cannot get it to work for all hosts. This works:

index=<my_index> host=<specific_host> sourcetype=<my_sourcetype> instance=_Total counter="% Processor Time"
| sort host, -_time
| dedup 2 host
| lookup <my_lookup> resource_name as host output businessprocess_name
| search businessprocess_name = "<my_business_process_name>"
| eval Value = round(Value,2)
| delta Value AS ValueDelta
| eval lowerThreshold = -25
| eval upperThreshold = 25
| eval CreateEvent = if((ValueDelta > upperThreshold OR ValueDelta < lowerThreshold),"Yes","No")
| search CreateEvent = "Yes"
| eval metric_type = "CPU Usage Anomaly"
| eval description = if(ValueDelta < 0,"CPU Usage is now: " + Value + "%. A decrease of " + ValueDelta,"CPU Usage is now: " + Value + "%. An increase of " + ValueDelta)
| table _time, host, businessprocess_name, metric_type, description

The output of that SPL (with the lower and upper thresholds changed to trigger a result):

_time | host | businessprocess_name | metric_type | description
2021-05-05 12:35:57 | <specific_host> | <my_business_process_name> | CPU Usage Anomaly | CPU Usage is now: 57.52951309736281%. A decrease of -3.69007662538445

I know the sort on host does not make sense in this SPL, but it nicely takes the last values, compares them, and based on the difference the result is what it needs to be. When I remove host=<specific_host> and run it on all the hosts in the system, the output is wrong. It seems to compare the value of row 1 (server A) with the value of row 2 (server A), then the value of row 2 (again server A) with the value of row 3 (server B), and so on. I guess that makes sense, but it is not what I am looking for. What would be needed to run the delta calculation only over the two records that belong to the same host?
Hi, I successfully created an SPL that does what I need for a single host but I cannot get it to work for all hosts.  This works   index=<my_index> host=<specific_host> sourcetype=<my_sourcetype> instance=_Total counter="% Processor Time" | sort host, -_time | dedup 2 host | lookup <my_lookup> resource_name as host output businessprocess_name | search businessprocess_name = "<my_business_process_name>" | eval Value = round(Value,2) | delta Value AS ValueDelta | eval lowerThreshold = -25 | eval upperThreshold = 25 | eval CreateEvent = if((ValueDelta > upperThreshold OR ValueDelta < lowerThreshold),"Yes","No") | search CreateEvent = "Yes" | eval metric_type = "CPU Usage Anomaly" | eval description = if(ValueDelta < 0,"CPU Usage is now: " + Value + "%. A decrease of " + ValueDelta,"CPU Usage is now: " + Value + "%. An increase of " + ValueDelta) | table _time, host, businessprocess_name, metric_type, description   The output of that SPL is (changed the lower and upper threshold to trigger a result) _time host businessprocess_name metric_type description 2021-05-05 12:35:57 <specific_host> <my_business_process_name> CPU Usage Anomaly CPU Usage is now: 57.52951309736281%. A decrease of -3.69007662538445 I know the sort on host does not make sense in this SPL but it nicely takes the last values, compares it and based on the difference the result is what it needs to be.  When I remove the host=<specific_host> and run it on all the hosts in the system the output is wrong.  It seems that it is comparing value of row 1 (server A) with the value of row 2 (server A), then value of row 2 (again server A) with the value on row 3 (server B), etc etc. I guess that makes sense but not what I am looking for.  What would be needed to run the calculation of the delta for only the two records that belong to the same host? 
What I am aiming to do is to create an event when the difference in CPU usage between the last two values is more than the configured threshold, whether it drops or increases. Maybe I am going about it the wrong way with the delta command?
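delta has no by clause, but streamstats does, so the per-host pairing can happen in one pass. A sketch built on the question's own search (thresholds kept at ±25; field names unchanged):

```
index=<my_index> sourcetype=<my_sourcetype> instance=_Total counter="% Processor Time"
| sort 0 host, -_time
| dedup 2 host
| streamstats window=2 global=f count AS pairSize first(Value) AS newestValue last(Value) AS oldestValue by host
| where pairSize=2
| eval ValueDelta = round(newestValue - oldestValue, 2)
| where ValueDelta > 25 OR ValueDelta < -25
```

Because the sort puts each host's latest sample first, first() is the newest value within the two-event window; pairSize=2 drops hosts that only have one sample.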
Hi All, I have created a bar chart visualization of the count of Act_Status based on the respective statuses (screenshot attached). In this visualization I am trying to set colors for the different statuses, such as:
Green for Running
Yellow for Standby
Red for Stopped
Is there any modification I can make to get the desired output? Your kind suggestions will be highly appreciated. Thank you.