All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, I have some events with a field called RUNTIME for each job. How can I get the average value of RUNTIME for each job, with the result in a new field?
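A minimal sketch of one way to do this with stats, assuming each event also carries a job-name field (index and JOB are placeholders for whatever your data actually uses):

```
index=your_index
| stats avg(RUNTIME) as avg_runtime by JOB
```

stats avg() computes the mean per group and writes it to the new avg_runtime field.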
Dear community! I'm monitoring a folder of CSV files, and each file contains two tables in the same sheet. Is there any way to split the sections into two sourcetypes or sources? Thank you for your help.
I get an error while trying to animate an embedded SVG in one of my XML dashboards. I noticed that an <animate> tag included inside a <line> or <rect> element is systematically removed from my XML when rendered in the dashboard's HTML output. This causes errors when I try to process the <animate> tags with custom JavaScript, as they are missing from the DOM.

    <line id="id_1" stroke="rgb(0,220,228)" stroke-width="3" visibility="visible" x1="404.0" x2="833.0" y1="1255.0" y2="1255.0">
      <title>id_1</title>
      <desc>description of id_1</desc>
      <metadata>
        <route/>
      </metadata>
      <animate attributeName="stroke" attributeType="CSS" dur="1s" from="rgb(0,220,228)" id="id_1" repeatCount="indefinite" to="white"/>
    </line>

Above is a sample of the SVG embedded in my dashboard. When rendered as HTML, it looks like this:

    <line id="id_1" stroke="rgb(0,220,228)" stroke-width="3" visibility="visible" x1="404.0" x2="833.0" y1="1255.0" y2="1255.0">
      <title>id_1</title>
      <desc>description of id_1</desc>
      <metadata>
      </metadata>
    </line>

This used to work on Splunk 7, but somehow it no longer works on Splunk 8 and 9. What might cause this error? Also, can someone point me to docs on how Splunk XML dashboards are exported to HTML, so I can understand which component might explain this problem? Are there SVG/XML standards that evolved from Splunk 7 to Splunk 8/9?
Hi Team, I am working on a web application firewall use case: I want to find the top targeted subdomains of my domain. I have been trying to work with index=netwaf. Example: my domain is example.com, and there are a bunch of subdomains under it. I just want to find the top targeted domains by traffic size (and flagging malicious traffic would be great). Please, I need help quickly.
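A starting-point sketch, assuming the WAF events carry a requested-host field and a byte count (http_host and bytes here are placeholders for whatever fields your sourcetype actually extracts):

```
index=netwaf http_host="*.example.com"
| stats count sum(bytes) as total_bytes by http_host
| sort - total_bytes
| head 10
```

This ranks the ten most-hit subdomains by total traffic volume; an additional filter on an action or threat field could narrow it to malicious requests if your WAF labels them.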
Hello, before upgrading to Splunk 9.x we have to move the current Splunk instances to new VMs with a new OS version and additional resources (CPU, RAM, and disk space [indexers]):

SH cluster: 3 nodes
Indexer cluster: 16 peers (2 sites)

Migration of nodes like the SHs, CM, and deployment server is pretty clear; I have some doubts about the peer nodes. We will probably migrate one peer at a time. Is it better to take a peer offline or to stop it? If taking them offline is preferable, is it possible to extend the restart period without a time limit, for example to 9 hours or more? This is needed for syncing the file system where the indexes sit to the new VM. It is also not clear whether the offline method rolls buckets from hot to warm, or whether this must be done manually. Thanks,
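For reference, the graceful route is usually the CLI offline command on the peer, with the manager's restart window widened first. A sketch of the commands involved (verify the exact setting names against your version's documentation before relying on them):

```
# On the cluster manager: widen the window during which a peer may be
# down before the manager starts fixup (value in seconds; 32400 = 9h)
splunk edit cluster-config -restart_timeout 32400

# On the peer being migrated: graceful shutdown that reassigns
# primaries before the peer goes down
splunk offline --enforce-counts
```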
Hello, I found some strange behavior of the join command. Here are two queries:

1) search status=pass | chart dc(id) as pass by name
2) search status=failed | chart dc(id) as failed by name

Then I try to join these queries to get one table:

3) search status=pass | chart dc(id) as pass by name | table name pass | join name [ search status=failed | chart dc(id) as failed by name | table name failed ] | table name pass failed

For some reason the "failed" column values from query #2 are different from the values produced by the join in query #3, but the "pass" column values are the same:

query 1 table        query 2 table
name   pass          name   failed
Tom    12            Tom    4
Jerry  13            Jerry  5

query 3 table
name   pass  failed
Tom    12    7
Jerry  13    9
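Not an explanation of the discrepancy, but a common way to sidestep join (and its subsearch result limits) is a single chart split by status, assuming the same status/id/name fields as above:

```
search status=pass OR status=failed
| chart dc(id) over name by status
```

This yields one row per name with one column per status value, computed in a single pass.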
Hi all, in the graph below I need to add the "%" symbol to the y2 axis and to translate the y1 and y2 axis labels. Can someone help me, please?
Hi Team, currently I have several fields, and with the option below I was able to give all of them the same marker size: <option name="charting.chart.markerSize">1</option> But I would like to increase the size of the orange field and decrease the size of the blue and green fields. How can this be done in Splunk? Any leads are welcome.
Hi, we have a project to implement Splunk so that it talks out to Splunk Cloud via a proxy server. To do this, I believe we need to configure the heavy forwarder to connect to the proxy using SOCKS5 on port 1080, as per this article: https://docs.splunk.com/Documentation/Splunk/9.0.2/Forwarding/ConfigureaforwardertouseaSOCKSproxy I believe I've done this correctly, and we also think the proxy is configured correctly, yet we aren't seeing the data flow into Splunk. Am I missing something in the forwarder config, or is it really just as the article presents it? Looking at the deployment server, I can see the test endpoints we've added, so all of that seems to be working; it's now getting the data out and into the cloud that we need. Splunk is all new to me, so I'm trying to work it out on the fly; any pointers in the right direction would be appreciated.
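For comparison, a minimal outputs.conf sketch on the heavy forwarder following the SOCKS settings described in the article linked above (the hostnames, ports, and group name are placeholders; double-check the setting names against your version's spec file):

```
[tcpout:splunkcloud]
server = inputs.example.splunkcloud.com:9997
socksServer = proxy.example.com:1080
socksResolveDNS = true
# socksUsername / socksPassword only if the proxy requires authentication
```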
Can someone point me to the capability that needs to be granted for ES users to be able to view the Adaptive Responses section on the ES Incident Review page? A few of my ES users from the security team have admin access to ES but can't see this section; admin users can. Non-admin users get an error message saying: "Adaptive Responses: Error: No adaptive response actions found." Admin users see this:
I have a correlation search in Splunk ES that does some statistics and returns a table with the fields "src_ip", "dest_ip", "count", "latest", "action", "app", "src", "dest", and "dest_port". The search looks something like the following:

| tstats count latest(_time) as latest
    values(All_Traffic.action) as action
    values(All_Traffic.app) as app
    values(All_Traffic.src) as src
    values(All_Traffic.dest) as dest
    values(All_Traffic.dest_port) as dest_port
    from datamodel=Network_Traffic
    where All_Traffic.dest_ip=1.2.3.4
    by All_Traffic.src_ip All_Traffic.dest_ip
| rename All_Traffic.* as *

When this correlation search triggers, it writes an event to the notable index, and that notable event contains the fields output by the search, except src_ip and dest_ip. Note that I'm talking about the notable index here, not the incidents shown in Incident Review. I've looked in the documentation for an explanation of this behaviour but can't find anything. Can someone explain how Splunk picks which fields are written to the notable index events, and, if possible, how to force Splunk to write all fields from the search to the notable index?
Hello Team, this is the first time I am posting a question, and I hope I have explained it thoroughly. I am trying to create a regex for a log file which contains multiple values throughout the log that require the same field name, but Splunk does not allow using the same field name again. Here is the sample log:

/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9

Note: text values are 4 characters and numbers contain 10 digits. How can I move forward to achieve a field extraction and a format like this?

/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9

Thank you in advance.
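One possible approach (a sketch; the field name record is my own, and the pattern assumes 2 four-character text segments followed by 9 ten-digit numbers, as described in the question): capture every repetition into a multivalue field with max_match=0, then expand the multivalue field into separate results:

```
| rex max_match=0 "(?<record>(?:/\w{4}){2}(?:/\d{10}){9})"
| mvexpand record
| table record
```

With max_match=0, rex keeps matching past the first hit and stores each match as one value of the multivalue field, which mvexpand then turns into one row per record.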
I want to extract the two characters 78 from barvalue and have them in a separate column in my table:

deltavalue = 890(11%) sigmavalue=334(56%) barvalue=445(78%)
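A sketch with rex, assuming the raw event looks like the sample above (bar_pct is my own field name for the extracted percentage):

```
| rex "barvalue=\d+\((?<bar_pct>\d+)%\)"
| table deltavalue sigmavalue barvalue bar_pct
```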
Hi there, I created multiple field extractions, extracting values from different sourcetypes into the same field:

sourcetype0: "field0":"(?<geolocation_code>.{7})
sourcetype1: "field1":"(?<geolocation_code>.{7})
sourcetype2: "field2":"(?<geolocation_code>.{7})

They are populated as expected, all looking like ABCDEF0, CDEFGH3, or ZDEGFH9. But when searching with geolocation_code=ABCDEF0 I get zero hits, even though the preview in the fields pane on the left shows me plenty of those values. geolocation_code!=ABCDEF0, on the other hand, works exactly as intended. Also, geolocation_code=ABCDEF0* gives the result I expected from geolocation_code=ABCDEF0, even though the field contains exactly the value I'm looking for. I don't really understand what is happening here, and why it happens only with this extraction and not with others.
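Not a definitive diagnosis, but the pattern "equality fails while the wildcard works" often points at an invisible extra character in the extracted value. A debugging sketch to inspect the actual value lengths (code_len is my own field name):

```
... geolocation_code=ABCDEF0*
| eval code_len=len(geolocation_code)
| stats count by geolocation_code code_len
```

If code_len ever differs from the expected 7, the extraction is capturing something beyond the visible characters (e.g. a quote or whitespace).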
100 * sum([x]) / sum([y] - [z])  
How can I loop over array values after splitting on a delimiter?

| eval json="{\"key1\":\"key1value\",\"key2\":\"key2value\",\"key3\":\"key3value\",\"key4\":\"key4Value\"}"
| eval keyNames="key1,key2,key3,key4"
| eval keys=split(keyNames, ",")

(Key names can be added or removed based on the requirement.) How can I loop over these keys and perform some operation on each one? I have tried some of the MV commands, but with no luck. Example:

| eval count = mvcount(keys)
| streamstats count as counter
| eval jsonKey = mvindex(keys, count)
| eval keyValue = json_extract(json, jsonKey)

I am not sure how to achieve this use case; can someone please help me with it?
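SPL has no explicit loop construct, but mvexpand produces one result row per multivalue entry, which usually covers this pattern. A sketch assuming the json and keyNames fields from the question (note the split delimiter must match the actual separator in keyNames):

```
| eval keys=split(keyNames, ",")
| mvexpand keys
| eval keyValue=json_extract(json, keys)
```

After mvexpand, each row carries a single key in keys, so json_extract can be applied per row without any counter bookkeeping.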
Hi, I have to find the IP addresses hitting the firewall, and for that I need to implement a whois lookup for those IPs, but with no luck so far: I tried the Whois app and it's not working. Is there any other way? Thanks.
How do I make each column of a column chart a drilldown to a new dashboard in Dashboard Studio?
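In Dashboard Studio this is configured per visualization with an event handler in the dashboard's JSON definition. A sketch (the visualization id, app name, target dashboard, and field/token names are all placeholders):

```
"viz_myColumnChart": {
  "type": "splunk.column",
  "dataSources": { "primary": "ds_search1" },
  "eventHandlers": [
    {
      "type": "drilldown.customUrl",
      "options": {
        "url": "/app/my_app/target_dashboard?form.category=$row.category.value$",
        "newTab": false
      }
    }
  ]
}
```

Clicking a column substitutes the clicked row's field value into the token in the URL, so the target dashboard can pick it up as an input.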
I want to align the <input> fields vertically in the same row and same panel. Is there any way to do that?

    <row depends="$field_show$">
      <panel>
        <input type="dropdown">
          <label>----</label>
          <choice value="true">true</choice>
          <choice value="false">false</choice>
          <default>false</default>
          <initialValue>false</initialValue>
        </input>
        <input type="time" searchWhenChanged="true">
          <label>Select time range</label>
          <default>
            <earliest>-0m</earliest>
            <latest>+1d@h</latest>
          </default>
        </input>
        <input type="dropdown">
          <label>----</label>
          <choice value="true">true</choice>
          <choice value="false">false</choice>
          <default>false</default>
          <initialValue>false</initialValue>
        </input>
        <input type="time" searchWhenChanged="true">
          <label>Select time range</label>
          <default>
            <earliest>-0m</earliest>
            <latest>+1d@h</latest>
          </default>
        </input>
      </panel>
    </row>
Hi, I am a beginner with Splunk. I am trying to search multiple lines in the log and generate an alert if certain values match. For example, in the log below, I want to compare the IP address (10.0.46.173) when the error code is "ERROR_CREDENTIAL". If the IP addresses are the same, then an alert should be created.

2022-12-13 19:05:48.247 ERROR:ip-10-0-46-173.ap-northeast-1.compute.internal,10.0.46.173,19:05:48,ERROR_CREDENTIAL,sfsdf
2022-12-13 19:06:00.580 ERROR:ip-10-0-46-173.ap-northeast-1.compute.internal,10.0.46.173,19:06:00,ERROR_CREDENTIAL,kjsadasd
2022-12-13 19:06:17.537 ERROR:ip-10-0-46-173.ap-northeast-1.compute.internal,10.0.46.173,19:06:17,ERROR_CREDENTIAL,opuio

I wonder if anyone could help and point me to the right documentation.

Thanks, Junster
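A sketch of one way to build the alert search over the comma-separated format above (the index name and repeat threshold are placeholders, and the rex capture names are my own):

```
index=your_index "ERROR_CREDENTIAL"
| rex "ERROR:(?<err_host>[^,]+),(?<src_ip>[^,]+),(?<err_time>[^,]+),(?<err_code>[^,]+)"
| where err_code="ERROR_CREDENTIAL"
| stats count by src_ip
| where count > 1
```

Saved as an alert, this triggers whenever the same source IP produces more than one ERROR_CREDENTIAL event within the search's time window.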