All Topics

I used a custom function that parses email addresses out of an alert, and the phantom.add_artifact function to add the artifact to the container. I am then using a filter to check for the artifact ("artifact:*.label", "==", "notiresponse"). It evaluates as false each time, even though the artifact is there when I check the container. What can I do to ensure that the filter sees this artifact? When I check the debug log, I can see the loop checking against all of the artifacts in the container except the one I am creating via the custom function. We have multiple playbooks that do this, but this one in particular is giving me trouble.
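For reference, a minimal sketch of the add_artifact call, with hypothetical CEF data and variable names; the label passed here must match the filter condition ("artifact:*.label", "==", "notiresponse") exactly, including case, and the artifact must be committed before the filter block runs (parameter names from memory, verify against your Phantom version):

success, message, artifact_id = phantom.add_artifact(
    container=container,                     # the current container
    raw_data={},
    cef_data={'fromEmail': parsed_email},    # hypothetical field and variable
    label='notiresponse',                    # must match the filter exactly
    name='Parsed Email Artifact',
    severity='medium',
    artifact_type='email')
phantom.debug('add_artifact: {} {} {}'.format(success, message, artifact_id))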
I have two fields below that show up in our log files. I used the Splunk field-extraction tool to create the regex, and at first I thought it worked, until we had fields with different values that didn't extract. Is there a simple regex I can use to extract the ObjectType and Domain Controller fields in the example below? Values should never have spaces, so we can end each value at the first space.

ObjectType User Domain Controller TSTETCDRS001
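For reference, a minimal rex sketch under the stated assumption that values never contain spaces:

... | rex field=_raw "ObjectType\s+(?<ObjectType>\S+)"
    | rex field=_raw "Domain Controller\s+(?<Domain_Controller>\S+)"

(Field names cannot contain spaces, hence Domain_Controller for the second extraction.)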
I am trying to assign a numeric value back to the $ps$ token, which I changed to ProcessingStepName1, ProcessingStepName2, ProcessingStepName3, ProcessingStepName4 via eval. After I click a bar in the bar chart, the $ps$ token gets one of the ProcessingStepName values, but I need to change the name back to the number I had mapped with eval. How should I do that? I tried eval, but it is not working. Any suggestions, please?

<dashboard>
  <label>Processing_Step_Clone_2</label>
  <row>
    <panel>
      <chart>
        <title>$form.Source$ between $form.earliest_date$ $form.second_dash.earliest$ - $form.second_dash.latest$</title>
        <search>
          <query>index=Idx1 sourcetype=sourcetype# Datatype=$form.Datatype$
| spath Source
| search Source=$form.Source$
| eval type = if(ProcessStatus=0,"Success","Failure")
| eval ProcessingStep=if(ProcessingStep="6","ProcessingStepName1",ProcessingStep)
| eval ProcessingStep=if(ProcessingStep="21","ProcessingStepName2",ProcessingStep)
| eval ProcessingStep=if(ProcessingStep="1","ProcessingStepName3",ProcessingStep)
| eval ProcessingStep=if(ProcessingStep="2","ProcessingStepName4",ProcessingStep)
| chart count over ProcessingStep</query>
          <earliest>$form.second_dash.earliest$</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        . . .
        <option name="trellis.size">medium</option>
        <drilldown>
          <set token="ps">$click.value$</set>
        </drilldown>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>Success/Failure visualization for $ps$</title>
        <search>
          <query>index=Idx1 sourcetype=sourcetype# Datatype=$form.Datatype$
| spath Source
| search Source=$form.Source$
| eval type = if(ProcessStatus=0,"Success","Failure")
| search ProcessingStep=$ps$
| timechart count by type</query>
          <earliest>$form.second_dash.earliest$</earliest>
          <latest>now</latest>
        </search>
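For reference, a sketch of one way to map the clicked name back to its number inside the drilldown itself, using an <eval> token element (the case() pairs mirror the eval mapping in the first search above):

<drilldown>
  <set token="ps">$click.value$</set>
  <eval token="ps_num">case("$click.value$"="ProcessingStepName1","6", "$click.value$"="ProcessingStepName2","21", "$click.value$"="ProcessingStepName3","1", "$click.value$"="ProcessingStepName4","2")</eval>
</drilldown>

The second panel could then filter on the numeric form with ProcessingStep=$ps_num$ while keeping $ps$ for the title.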
My current search returns a series of events like:

{'field1' : {'field2' : [obj1, obj2, obj3]}}
{'field1' : {'field2' : [obj4, obj5]}}
{'field1' : {'field2' : [obj6]}}

I want to return the total sum of the lengths of the field1.field2 lists - in this case, 3 + 2 + 1 = 6. Can anyone help me with an easy way to do this?
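For reference, a minimal sketch using spath and mvcount, assuming field1.field2 is a JSON array in the raw event:

... | spath path=field1.field2{} output=items
    | eval list_len=mvcount(items)
    | stats sum(list_len) as total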
I just installed Splunk on Windows 10 Pro, and when I start it I get an error. I tried modifying my firewall, but that didn't solve the issue. I was thinking it might be a port forwarding issue, but if so, what addresses and ports do I need to forward?
Hi, we have a large amount of data in the /opt/app/axtract_fe1/var/log/apache2/main_collector_access-*.log files, and we do not want HTTP 200, 204, or 401 events. How do I filter these out from being indexed?

// SAMPLE LOG
70.166.76.65 - - [27/Oct/2021:12:42:56 -0400] "POST / HTTP/1.1" 200 2949 "-" "-" R:1 Conn:- PID:12954 RD:45125 CSt:+ FT:forwarded CPE_IP:70.166.77.73, 70.166.76.65 RespTime:0/45125
70.166.76.65 - - [27/Oct/2021:12:42:56 -0400] "POST / HTTP/1.1" 204 248 "-" "-" R:1 Conn:close PID:12954 RD:40522 CSt:- FT:forwarded CPE_IP:70.166.77.73, 70.166.76.65 RespTime:0/40522
70.166.76.65 - - [27/Oct/2021:12:43:03 -0400] "POST / HTTP/1.1" 200 800 "-" "-" R:0 Conn:- PID:12945 RD:34579 CSt:+ FT:forwarded CPE_IP:70.166.77.73, 70.166.76.65 RespTime:0/34579
70.166.76.65 - - [27/Oct/2021:12:43:03 -0400] "POST / HTTP/1.1" 200 2949 "-" "-" R:1 Conn:- PID:12945 RD:43790 CSt:+ FT:forwarded CPE_IP:70.166.77.73, 70.166.76.65 RespTime:0/43790
70.166.76.65 - - [27/Oct/2021:12:43:03 -0400] "POST / HTTP/1.1" 204 248 "-" "-" R:1 Conn:close PID:12945 RD:40819 CSt:- FT:forwarded CPE_IP:70.166.77.73, 70.166.76.65 RespTime:0/40819

// props.conf
[source::/path/to/your/access.log*]
TRANSFORMS-null = setnull
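For reference, a minimal transforms.conf sketch to pair with that props.conf stanza, assuming the status code is always the first number after the quoted request string, as in the sample lines:

# transforms.conf
[setnull]
REGEX = HTTP/1\.\d"\s+(200|204|401)\s
DEST_KEY = queue
FORMAT = nullQueue

Both files need to live on the first full Splunk instance that parses the data (heavy forwarder or indexer), not on a universal forwarder.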
Hello, I need to calculate a percentage value from two different stats. First I tried something like this:

index=toto sourcetype=:request web_domain="*" web_status=*
| stats dc(web_domain) as nbdomain, count(web_status) as nbdomainko
| eval KO=round(nbdomain/nbdomainko*100,1)
| table KO

It returns a result, but it is wrong, because I need to count web_status by web_domain in order to calculate my percentage:

| stats dc(web_domain) as nbdomain, count(web_status) as nbdomainko by web_domain

So I tried to separate the two searches with an append command, but it returns nothing:

index=toto sourcetype=request web_domain="*" web_status=*
| stats dc(web_domain) as nbdomain
| append [ search index=toto sourcetype=:request web_domain="*" web_status=* | stats count(web_status) as nbstatus by web_domain]
| eval prcerreur = round(nbdomain/nbstatus*100,1). " %"
| table prcerreur

What is the best way to solve my use case, please?
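For reference, a sketch of one way to avoid the append: compute the per-domain counts and the overall total in a single pipeline with eventstats (hedged; adjust the ratio to whatever denominator you actually need):

index=toto sourcetype=request web_domain="*" web_status=*
| stats count(web_status) as nbstatus by web_domain
| eventstats sum(nbstatus) as total
| eval prcerreur = round(nbstatus/total*100,1)." %"
| table web_domain prcerreur

The append version returns nothing from the eval because nbdomain and nbstatus land on different rows; eventstats puts both numbers on every row.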
Created an app from front end but the app directory is not showing under $SPLUNK_HOME/etc/apps directory
Hi Team, I have created an app from the front end, but the app directory is not showing under the $SPLUNK_HOME/etc/apps directory. Any suggestions as to what I am missing? Thank you.
We are using the Export to Excel app on Splunk 7.2.4.2, where it works fine. After we upgraded to Splunk 8.1.2, the app stopped working and gives the attached error when the export button is clicked. The app uses Python 2.7, but Splunk 8.1.2 ships with Python 3. We tried placing python.version=2 in the config files, but that didn't help. https://community.splunk.com/t5/Splunk-Enterprise-Security/Splunk-8-python-2-7-for-an-app/m-p/487637
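For reference, the accepted values for that setting are python, python2, and python3 rather than a bare number; a sketch of where it would go, with a hypothetical stanza and script name:

# commands.conf inside the app (stanza and filename hypothetical)
[exportexcel]
filename = exportexcel.py
python.version = python2

Note, though, that Splunk Enterprise 8.1 no longer bundles a Python 2 runtime, so python.version=python2 cannot work there; the app's scripts themselves need to be ported to Python 3.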
Hello all, on my ad hoc search head, I used to be able to see all of the installed apps and which ones needed to be updated. I do not see that anymore, and I am not sure why. Any ideas? Thanks, Ed
We have been using WEF as our collection point for a while. We started out small but have expanded the range of events over time. We have ~5,000 hosts forwarding to a single collector. The collector is busy but seems healthy based on conventional Windows indicators. However, we have some data loss between the centralized event log and Splunk (Cloud): events show up in the WEF collection log but never make it to the index. First, are there any performance tuning suggestions you can offer for a UF on a WEF collector? Second, can you think of any way to trace the processing of a single event once it enters the UF and heads to the indexer?
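For reference, two UF settings commonly checked on a high-volume collector (a sketch; defaults from memory, verify against your version):

# limits.conf on the UF: the default 256 KBps throughput cap is a
# frequent bottleneck on busy collectors; 0 removes the cap
[thruput]
maxKBps = 0

# server.conf on the UF: a second ingestion pipeline can help on a
# multi-core collector
[general]
parallelIngestionPipelines = 2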
Is there any way we can add some filter to a savedsearch subsearch so that we don't skip any data/records, as it is limiting the events? E.g. I am using savedsearch under the join command, but it is limiting the data. Thanks in advance.
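For reference, join truncates its subsearch at the subsearch result limit (50,000 rows by default), so one common workaround is a stats-based merge; a sketch with hypothetical names (my_saved_search, id):

index=main sourcetype=mydata
| append [ | savedsearch my_saved_search ]
| stats values(*) as * by id

append is still subject to subsearch limits, so for very large saved searches consider raising maxout in limits.conf or reading the saved search's results with loadjob.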
the "where" command checks only one condition  doesn't work like that my search: . . . .  | where NOT (id_old = id OR user = username)   but there is a separate input, then everything works cor... See more...
the "where" command checks only one condition  doesn't work like that my search: . . . .  | where NOT (id_old = id OR user = username)   but there is a separate input, then everything works correctly. help plz
The client's F5 load balancer is writing data to our Splunk syslog heavy forwarder, but when searching on the Splunk search head the data is incomplete/missing. We did a packet capture (tcpdump) on the syslog server from the F5 load balancer and copied the syslog-ng output for the F5 host. Our assumption is that the syslog server is receiving all the syslog messages sent from the F5 host, but syslog-ng is not writing all of them to file. In the packet capture, the syslog server received 800+ syslog messages, but only 68 were written to file. Any suggestion as to why this is happening, or how to troubleshoot this issue?
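For reference, dropped UDP syslog under burst load is a common culprit; a minimal syslog-ng sketch of the knobs usually checked (option names per syslog-ng 3.x; values illustrative):

options { log_fifo_size(200000); };

source s_f5 {
    network(
        transport("udp")
        port(514)
        so_rcvbuf(8388608)   # enlarge the kernel receive buffer for bursts
    );
};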
Hello, I am working with a large form that essentially takes inputs and creates a record of a scan. The part I am focusing on is a set of two inputs that I pass as tokens into a search that runs a collect command at the end. The problem I am running into is that after I enter a value in the first input and tab into the next input, the search seems to run. This causes the second input's value not to be collected unless I quickly enter it and tab out of the input box. If I can do it quickly, the value gets passed; it will not pass a value if the second input is entered but still focused. Is there any way to halt the collect until both inputs are entered?
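For reference, a minimal Simple XML sketch: a submit button plus searchWhenChanged="false" keeps the search (and its collect) from firing until both tokens are submitted together (token names hypothetical):

<fieldset submitButton="true" autoRun="false">
  <input type="text" token="first_value" searchWhenChanged="false">
    <label>First value</label>
  </input>
  <input type="text" token="second_value" searchWhenChanged="false">
    <label>Second value</label>
  </input>
</fieldset>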
What is the difference between using the spool and oneshot CLI commands? Unfortunately I'm unable to install UFs or directly poll the logs, and I need to index tar.gz archives. Is there a performance benefit to either? Does using spool allow the Splunk indexer to index the data in the background?
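For reference, a sketch of the two approaches (paths and names illustrative):

# one-time indexing of a single file via the CLI
splunk add oneshot /data/archive/extracted.log -index main -sourcetype my_sourcetype

# spool: drop files into the batch directory; Splunk indexes whatever
# appears there in the background and deletes each file afterwards
mv /data/archive/extracted.log $SPLUNK_HOME/var/spool/splunk/

The spool directory is a sinkhole batch input, so the main practical differences are that spooled files are consumed (deleted) and processed asynchronously, while oneshot leaves the source file in place.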
Hi there, I am planning to move our frozen bucket location from a local drive to a share on another server, and I have a few questions. Is it as simple as editing the indexes? Will a UNC path work if the permissions are set, or must it be a mapped local drive? Thanks in advance!
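For reference, the setting involved is coldToFrozenDir in indexes.conf; a sketch with a hypothetical index name and share:

# indexes.conf (index name and UNC path hypothetical)
[my_index]
coldToFrozenDir = \\archive-server\splunk_frozen\my_index

One caveat worth hedging on: the Splunk service account needs write access to the share, and on Windows mapped drive letters are per-user-session, so services generally cannot see them; a UNC path is the safer choice.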
Hi, I was just curious whether the Splunk Universal Forwarder has any dependency on the JRE/JDK, as I am planning to upgrade the JRE/JDK on my Windows machines. If there is a dependency, how do I go about performing the upgrade? Would I have to stop the Splunk Universal Forwarder service first before upgrading, or can I just upgrade as per usual?