All Topics


Just noticed this in our data, but after we updated the TA-Akamai_SIEM version back in March of this year, our Akamai logs are no longer being parsed out into their respective fields. Any ideas as to what might be wrong?
I'm trying to understand how to update the severity of a notable event when a new event arrives with a normal severity. I'm feeding external alerts into ITSI, and a correlation search turns them into notable events. I'm using a specific ID for the "Notable Event Identifier Fields". These alerts correctly turn into notable events and are placed into an episode. When the same alert comes into ITSI, but with a "Normal" severity, I expect it to change the severity of the prior notable event in the episode. Instead, it is treated as a new notable event and put into the same episode. I thought ITSI uses the Notable Event Identifier Fields to determine whether two events are the same or not. I checked that both the original event and the "clearing" event have the exact same event_identifier_hash, so why does ITSI treat it as an additional alert/event in the episode? Instead of having one normal/clear event in the episode, I now have one critical and one normal. How are you supposed to update the status of an alert/notable event in an episode when a clearing event is received?
Stuck again and not sure what I'm missing... I have the first two steps, but cannot figure out the syntax to use timechart to count all events under a specific label. Any help is greatly appreciated. The task: use timechart to calculate the sum of price as "DailySales" and count all events as "UnitsSold". What I have so far: index=web sourcetype=access_combined status=200 productId=* | timechart sum(price) as DailySales
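A minimal sketch of one way to finish this, using the fields already in the query above; span=1d is an assumption to make the buckets daily, and timechart accepts several aggregations at once:

index=web sourcetype=access_combined status=200 productId=*
| timechart span=1d sum(price) as DailySales count as UnitsSold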
How do I show the average and status on Flow Map viz connections? index=gc source="log" QUE_NAM="S*" | stats sum(eval(FINAL="MQ SUCCESS")) as good sum(eval(FINAL="CONN FAILED")) as errors sum(eval(FINAL="MEND FAIL")) as warn avg(QUE_DEP) as queueAvgDept by QUE_NAM | eval to=QUE_NAM, from="internal" | append [search index=es sourcetype=queue_monitor queue_name IN ("*Q","*R") | bucket _time span=10m | stats max(current_depth) as max_Depth avg(current_depth) as avg_Depth by _time queue_name queue_manager | eval to=queue_name, from="external"] For this query I get the visualization below, and I need to connect the internal and external nodes (highlighted in red) and show the average count through the flow between them. Please help me out on this. Thanks in advance!
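As a hedged aside, sum(eval(FINAL="MQ SUCCESS")) does not count matching events; the usual idiom is count(eval(...)) with a == comparison. A sketch of the first stats call rewritten that way, keeping the field and value names from the query above:

index=gc source="log" QUE_NAM="S*"
| stats count(eval(FINAL=="MQ SUCCESS")) as good
        count(eval(FINAL=="CONN FAILED")) as errors
        count(eval(FINAL=="MEND FAIL")) as warn
        avg(QUE_DEP) as queueAvgDept
  by QUE_NAM
| eval to=QUE_NAM, from="internal"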
I'm trying to create a search where I take a small list of IPs from sourcetype A and compare them against a larger set of IPs in sourcetype B. I will then build a table using fields from sourcetype B that do not exist in sourcetype A, to create a more detailed view of the events involving the IP. Is there a way to do this without using a lookup table? index=paloalto (sourcetype=sourcetype_B OR sourcetype=sourcetype_A) | eval small_tmp=case(log_type="CORRELATION", src_ip) | eval large_tmp=case(log_type!="CORRELATION", src_ip) | where match(small_tmp, large_tmp) | table field A, field B, field C
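One lookup-free approach is a subsearch: pull the small correlation IP list from sourcetype A and let it filter sourcetype B. A minimal sketch, assuming src_ip is the shared field and field_A/field_B/field_C are placeholders for the detailed fields; subsearches are capped (10,000 results by default), which is fine for a small IP list:

index=paloalto sourcetype=sourcetype_B
    [ search index=paloalto sourcetype=sourcetype_A log_type="CORRELATION"
      | dedup src_ip
      | fields src_ip ]
| table src_ip, field_A, field_B, field_C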
I have the following setup with Indexer Discovery + Indexer Cluster + Search Head Cluster:
- Deployment Server
- 3 x Indexer + Cluster Manager (Indexer Cluster)
- Search Head Deployer + Search Head (set up as part of a SHC for possible future scaling up)
For forwarding logs from the Cluster Manager, I referred to: https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Forwardmanagerdata
For forwarding logs from Search Head Cluster nodes, I referred to: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Forwardsearchheaddata
I believe forwarding logs from the Deployment Server should be similar to the above.
For indexers belonging to an indexer cluster, I have considered the following:
1. Install a UF on each indexer to monitor and forward logs to the indexer cluster (via indexer discovery).
2. Just monitor logs locally and allow each indexer to index its own local logs (without going through the indexer cluster).
3. Configure the indexer to forward the locally monitored logs, without indexing them, to the indexer cluster. I am not sure if it is necessary to ensure that it does not index the same data twice, and I am unsure how this would play out.
Option 2 seems to be the easiest to achieve, but ideally I would like all logs to go through the indexer cluster for indexing. What is the best practice for forwarding logs from indexers that are part of the indexer cluster?
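For the non-indexer instances (cluster manager, search heads, deployment server), the two docs linked above boil down to an outputs.conf along these lines; the server list is a placeholder and would be replaced by indexer-discovery settings if you use that instead. This is a sketch of the manager/search-head side only, not of what the cluster peers themselves should do:

# outputs.conf on a non-indexer instance (sketch, per the docs above)
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997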
Field1=Start, Field2=Finish. Field1 and Field2 have multiple events with values Start and Finish for a given uid, respectively. I want to pick the earliest event for Field1 and the latest event for Field2 and find the duration. Field3=uid, which is the common field. ... | transaction uid startswith="Start" endswith="Finish" | stats avg(duration) It's not giving the expected result.
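A transaction-free sketch that computes the same thing with stats, assuming Field1/Field2 carry the literal values Start/Finish and uid is the common key; the <base search> placeholder stands for whatever selects these events:

<base search>
| stats min(eval(if(Field1=="Start", _time, null()))) as start_time
        max(eval(if(Field2=="Finish", _time, null()))) as finish_time
  by uid
| eval duration = finish_time - start_time
| stats avg(duration) as avg_duration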
Hi, I have a use case that involves copying historical data (6 months old) from a 3-indexer cluster to another machine. I have considered two potential solutions:
1. Stop one of the indexers and copy the index from the source to the destination. The only drawback is that the data might be incomplete, but this is not a concern as the data is for testing purposes. Given the volume of data, this process could take a significant amount of time.
2. Create a new index using the collect command with the required set of data, then copy that index to the other machine. I believe this would be the best way to implement this use case. However, the downside is that the data volume is quite large, and the SPL might take a considerable amount of time to execute on the search head, potentially affecting its performance. I am also unsure whether the collect command can handle such a large search and populate an index. Is there a limitation on the size of the data when using the collect command?
Please advise on how to best handle this scenario. Regards, Pravin
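For reference, option 2 in its simplest form is just a search piped into collect against a pre-created index; the index names and time range below are placeholders. As far as I'm aware there is no documented size cap on collect itself; the practical limits are search runtime and quotas, which is why large copies are usually split into time-bounded chunks rather than one giant run:

index=source_index earliest=-6mon@mon latest=now
| collect index=archive_copy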
How can I prevent truncation of the legend on a classic Splunk dashboard? The output has an ellipsis in the middle of my legend, but I want to show the full text in the legend. See my query below: index=$ss_name$_$nyName_tok$_ sourcetype=plt (Instrument="ZT2" OR Instrument="XY2" OR Instrument="P4") | rex field=Instrument "(Calculated)\.(?<tag>.+)$$" | timechart span=$span$ max(ValueEng) by tag Thanks
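One knob worth trying first, assuming this is a classic (SimpleXML) chart panel: the legend label overflow mode controls where the ellipsis goes, and ellipsisNone disables it.

<option name="charting.legend.labelStyle.overflowMode">ellipsisNone</option>

Very long labels can still crowd the plot area, in which case shortening the tag in the rex is the fallback.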
Hi Splunkers, I am working on creating custom alerts using JavaScript in Splunk. I have created the SPL for the alert and configured the definition in macros.conf as follows:
index=XXXX sourcetype=YYYY earliest=-30d latest=now | eval ds_file_path=file_path."\\".file_name | stats avg(ms_per_block) as avg_processing_time by machine ds_file_path | eval avg_processing_time = round(avg_processing_time, 1) ." ms"
From the JS part, I set the alert condition:
| search avg_processing_time >= 2
When I run the SPL in Splunk, it returns about 5 to 6 rows of results:
machine       ds_file_path                                                          avg_processing_time
Machine01  C:\Program Files\Splunk\etc\db\gdb.ds      5.3 ms
Machine02  C:\Program Files\Splunk\etc\db\rwo.ds      7.6 ms
Machine03  C:\Program Files\Splunk\etc\db\gdb.ds      7.5 ms
Machine04  C:\Program Files\Splunk\etc\db\rwo.ds      8.3 ms
However, when the result is sent as an alert email, it includes only one row, like this:
machine       ds_file_path                                                          avg_processing_time
Machine01  C:\Program Files\Splunk\etc\db\gdb.ds      5.3 ms
I need to send all of the matching results (avg_processing_time >= 2) in the email. How can I achieve this? I need some guidance. Here are the alert parameters I used:
{
  'name': 'machine_latency_' + index,
  'action.email.subject.alert': 'Machine latency for the ' + index + ' environment has reached an average processing time ' + ms_threshold + ' milliseconds...',
  'action.email.sendresults': 1,
  'action.email.message.alert': 'In the ' + index + ' environment, the machine $result.machine$ has reached the average processing time of $result.avg_processing_time_per_block$ .',
  'action.email.to': email,
  'action.logevent.param.event': '{"session_id": $result.machine$, "user": $result.avg_processing_time_per_block$}',
  'action.logevent.param.index': index,
  'alert.digest_mode': 0,
  'alert.suppress': 1,
  'alert.suppress.fields': 'session_id',
  'alert.suppress.period': '24h',
  'alert_comparator': 'greater than',
  'alert_threshold': 0,
  'alert_type': 'number of events',
  'cron_schedule': cron_expression,
  'dispatch.earliest_time': '-30m',
  'dispatch.latest_time': 'now',
  'description': 'XXXXYYYYYZZZZ',
  'search': '|`sw_latency_above_threshold(the_index=' + index + ')`'
}
I suspect that the email action is not configured to include the entire result set. Could someone help me with how to modify the alert settings or the SPL to ensure that all results are included in the alert email? Thanks in advance for your help!
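A hedged observation: $result.fieldname$ tokens only ever expand from the first result row, which matches the single-row email described above. To get every matching row into the message, the usual route is to switch the alert to digest mode and embed the results table inline, e.g. by adding these keys to the payload above (sketch; the key names are standard savedsearches.conf email/alert settings):

# additions to the saved-search payload (sketch)
'alert.digest_mode': 1,          # trigger once for the whole result set, not per result
'action.email.inline': 1,        # embed the results in the email body
'action.email.format': 'table',  # render the embedded results as a table

The $result.*$ tokens in the subject and message would then still refer to the first row only, so the per-machine details come from the inline table rather than the message text.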
Hello Everyone, I recently built a Splunk query (shared below) and noticed that when I apply the search condition App_Name IN (*), it actually reduces the number of events scanned. For example, if I execute the following query it returns 2515 events (not to be confused with statistics)...
index=msad_hcv NOT ("forwarded") | spath output=role_name path=auth.metadata.role_name | mvexpand role_name | rex field=role_name "(\w+-(?P<App_Name>[^\"]+))" | search Environment=* type=* request.path=* App_Name IN (*) | stats count
But if I comment out the App_Name IN (*) condition in line no. 4, it produces 4547 events.
index=msad_hcv NOT ("forwarded") | spath output=role_name path=auth.metadata.role_name | mvexpand role_name | rex field=role_name "(\w+-(?P<App_Name>[^\"]+))" | search Environment=* type=* request.path=* ```App_Name IN (*)``` | stats count
My question is: how can I keep the log events from being dropped when App_Name IN (*) is in force? Please note that the events being dropped are the ones that don't have "App_Name" in their events.
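One way to keep those events, assuming the only reason they fall out is the missing field: fill App_Name with a placeholder before the filter, so App_Name IN (*) still matches every event. The placeholder value below is arbitrary:

index=msad_hcv NOT ("forwarded")
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| fillnull value="unmatched" App_Name
| search Environment=* type=* request.path=* App_Name IN (*)
| stats count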
I would like to add a button to each row of a table. When I click the button, it should create an incident in ServiceNow. How can we achieve this using JavaScript, HTML, CSS, and XML? Thanks in advance
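Short of a full JavaScript cell renderer, one lighter-weight alternative (named as such; it is not the JS button approach asked about) is a row drilldown that opens the ServiceNow new-incident form pre-filled via URL parameters. The instance name, field names, and URL pattern below are placeholders/assumptions to adapt:

<table>
  <search>
    <query>index=main | table host, description</query>
  </search>
  <drilldown>
    <link target="_blank">https://YOUR_INSTANCE.service-now.com/incident.do?sys_id=-1&amp;sysparm_query=short_description=$row.description$</link>
  </drilldown>
</table>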
Hi, in my project we are using Synthetic Monitoring. As more and more applications move to MFA (TOTP) via MS Authenticator, we need to be able to capture these TOTP codes so that Synthetic Monitoring keeps working. Can someone help/guide us on how we can handle MFA in Synthetic Monitoring?
Hello Splunk Community, I'm encountering an issue with ingesting data from a Prometheus remote_write agent into Splunk Enterprise. This solution utilises the 'Prometheus Metrics for Splunk' app and is in a test environment.
Problem Summary: despite ensuring that the inputs.conf file matches the configuration specification defined in the inputs.conf.spec file, the Prometheus data is not being ingested, and I am receiving errors such as port: Not found in "btool" output (btool does not list the stanza [prometheusrw]) when viewing the inputs.conf file in the Config Explorer application.
Details:
Splunk Version: Splunk Enterprise 9.2 (Trial License)
Operating System: Ubuntu 22.04
Splunk Application: Prometheus Metrics for Splunk (latest version, 1.0.1)
inputs.conf.spec: /opt/splunk/etc/apps/modinput_prometheus/README/inputs.conf.spec (full spec: https://github.com/lukemonahan/splunk_modinput_prometheus/blob/master/modinput_prometheus/README/inputs.conf.spec)
The inputs.conf.spec file states there are port and maxClients configuration parameters. I updated /opt/splunk/etc/apps/modinput_prometheus/local/inputs.conf to include these settings in the required format, saved the file, and rebooted the Splunk server. After rebooting, I checked inputs.conf with the Config Explorer app to confirm the configuration was being accepted. The "Not found in btool output" errors were returned for some parameters (e.g. port), while other parameters such as index, sourcetype and whitelist returned 'Found in "btool" output. Exists in spec file (Stanza=[prometheusrw])' and were accepted by Splunk. For some unknown reason, Splunk is not recognising some of the configuration parameters that are listed in the inputs.conf.spec file, even when formatted accordingly.
Other information: the Prometheus remote-write exporter targets the Splunk index skyline_prometheus_metrics.
Any assistance is appreciated, thank you Splunk Community.
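I can't verify the app's spec from here, but this modinput is commonly shown with the global HTTP listener settings split from the per-input settings. If the spec in the linked repo defines port/maxClients under the global [prometheusrw] stanza and index/sourcetype/whitelist under [prometheusrw://<name>], then an inputs.conf like the sketch below would match it (all values are placeholders), and putting port inside the wrong stanza would produce exactly the btool complaint described:

# /opt/splunk/etc/apps/modinput_prometheus/local/inputs.conf (sketch only)
[prometheusrw]
port = 8098
maxClients = 10
disabled = 0

[prometheusrw://skyline]
index = skyline_prometheus_metrics
sourcetype = prometheus:metric
whitelist = *
disabled = 0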
I want to integrate Commvault with Splunk to create a dashboard using an add-on. I can't find how to proceed further, so please provide the steps.
Hello, I have a dashboard with a radio button + text input field:
<form version="1.1" theme="light">
  <label>mnj1809_radio</label>
  <init>
    <set token="tokradiotext">$tokradio$="$toktext$"</set>
  </init>
  <fieldset submitButton="false">
    <input type="radio" token="tokradio">
      <label>Field</label>
      <choice value="category">Group</choice>
      <choice value="severity">Severity</choice>
      <default>category</default>
      <change>
        <set token="tokradiotext">$value$="$toktext$"</set>
      </change>
    </input>
    <input type="text" token="toktext">
      <label>Value</label>
      <default>*</default>
      <change>
        <set token="tokradiotext">$tokradio$="$value$"</set>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <title>tokradiotext=$tokradiotext$</title>
        <search>
          <query>| makeresults</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </event>
    </panel>
  </row>
</form>
'A radio input to select field names and a text input to enter field values; a separate token is defined and updated when either token changes.' A radio button allows only one choice, but I'd like to use a checkbox input that allows multiple selections. If I tick multiple checkboxes, I'd like the search to look for results in all of the selected fields. Could you please help me? Thanks in advance!
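A hedged sketch of the checkbox variant: a checkbox input can assemble one search token out of several field filters via prefix/suffix/valueSuffix/delimiter, so selecting both choices yields something like (category="*" OR severity="*"). Folding in the free-text value as well would still need a change handler or an eval in the search; the example below hard-codes * as the value to keep it short:

<input type="checkbox" token="tokfields" searchWhenChanged="true">
  <label>Fields</label>
  <choice value="category">Group</choice>
  <choice value="severity">Severity</choice>
  <default>category</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valueSuffix>="*"</valueSuffix>
  <delimiter> OR </delimiter>
</input>

The panel search would then use the combined token directly, e.g. index=... $tokfields$.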
Hi, I am getting some events from a CSV in the format below and would like to drop such events using transforms:
null,null,0,null,null,null,null,null,null,  ---- to be dropped
null,null,0,null,null,null,null,null,null,  ---- to be dropped
null,null,0,null,null,null,null,null,null,  ---- to be dropped
null,null,0,null,null,null,null,null,null,  ---- to be dropped
null,null,0,null,null,null,null,null,null,  ---- to be dropped
52376,null,0,test,87387,2984,22,abc,99  ----- to be kept
Below is what I have done so far, and it is not working.
props.conf
[Reports5min]
TRANSFORMS-null = setnull
transforms.conf
[setnull]
REGEX = ^null,null\,0,null,null,null,null,null,null,$
DEST_KEY = queue
FORMAT = nullQueue
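For comparison, a slightly tightened sketch of the same transform is below. Beyond the regex itself, the usual suspects are the props stanza not matching the actual sourcetype of the data, and the props/transforms not living on the instance that actually parses it (indexer or heavy forwarder):

# props.conf (on the parsing tier)
[Reports5min]
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
REGEX = ^null,null,0,(?:null,){6}$
DEST_KEY = queue
FORMAT = nullQueue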
Hello, how can I find the earliest and latest data times for every index, i.e. when data first came into each index and when data most recently came into that index?
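A minimal sketch using tstats, which reads index metadata rather than raw events; run it over All time (or a wide enough window), since the time picker bounds what it can see:

| tstats earliest(_time) as first_event latest(_time) as last_event where index=* by index
| eval first_event=strftime(first_event, "%Y-%m-%d %H:%M:%S"),
       last_event=strftime(last_event, "%Y-%m-%d %H:%M:%S")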
I am practicing my attacks on the DVWA web server and I want to monitor the traffic logs from DVWA in my Splunk Enterprise instance. However, I am unsure of the steps to do so, despite following the instructions for getting data into Splunk Enterprise. So far, my Splunk only monitors logs that I do not need, and although I have added a monitor input, there are no Apache or other web-related logs in Splunk. Why does my Splunk Enterprise capture logs from /var/log syslog only?
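Assuming DVWA is served by Apache on the same host, its access and error logs usually live under /var/log/apache2/ (Debian/Ubuntu) or /var/log/httpd/ (RHEL) rather than in syslog, so they need their own monitor stanzas. Paths, index, and sourcetypes below are assumptions to adjust, and the Splunk user needs read access to the files:

# inputs.conf on the instance running DVWA/Apache (sketch)
[monitor:///var/log/apache2/access.log]
sourcetype = access_combined
index = main
disabled = 0

[monitor:///var/log/apache2/error.log]
sourcetype = apache_error
index = main
disabled = 0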
Dear all, can anyone help me with this? I have logs from syslog but cannot break the events by lines:
{"@timestamp":"2000-01-21T00:58:39.372418529Z","event":{},"@version":"1","type":"prod","filtered_message":"[ABC]|Type=ABC|logDate=2000-01-21 00:58:39|ABC1=ABC2|ABC12=ABC23|ABC34=ABC35|ABC45=ABC46"}{"@timestamp":"2000-02-21T00:58:39.372418529Z","event":{},"@version":"1","type":"prod","filtered_message":"[ABC]|Type=ABC|logDate=2000-02-21 00:58:39|ABC1=ABC5|ABC13=ABC24|ABC35=ABC36|ABC46=ABC47"}
I need to break this log via props.conf. I already tried this:
[ABC]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
but it does not work. Please tell me how to break this log into separate events.
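A hedged sketch for back-to-back JSON objects like these: break on the boundary between a closing brace and the next {"@timestamp", and anchor the timestamp extraction. This assumes the data really does land under the ABC sourcetype and that this props.conf sits on the parsing tier (indexer or heavy forwarder):

[ABC]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*)\{"@timestamp"
TIME_PREFIX = \{"@timestamp":"
MAX_TIMESTAMP_LOOKAHEAD = 40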