All Topics



Hello all, how can we search against two columns of a CSV lookup file and, if the value of the field I am searching on matches either of the two columns, exclude those results? Kind of a whitelist. Let's say I have a CSV lookup with two columns, URLs and UA. I am searching against my firewall logs: if the url field in the events matches the URLs column of the table, OR the user_agent field from the events matches the UA column of the table, then exclude those events. This is what I have come up with, but it's not working:

index=firewall NOT [ | inputlookup lookup_file.csv | rename url as URLs | fields url ] OR NOT [ | inputlookup lookup_file.csv | rename user_agent as UA | fields user_agent ]
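A sketch of the exclusion I am aiming for (untested; it assumes the lookup columns are literally named URLs and UA, and renames them to match the event field names before the NOT):

```spl
index=firewall
    NOT [ | inputlookup lookup_file.csv | rename URLs AS url | fields url ]
    NOT [ | inputlookup lookup_file.csv | rename UA AS user_agent | fields user_agent ]
```

Each subsearch expands into a list of (url=... OR url=...) terms, so the two NOT clauses ANDed together drop any event that matches either column, which is the same as NOT (url-match OR user_agent-match).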
We use Exchange 2013, and relay permission is given to certain machines (IPs). These machines can send email as any existing or non-existent user under our domain, but they are only allowed to send email from a particular email address. So far I have created an alert that fires if a machine sends an email from another email address which is not allowed or approved, but this only works for a search like:

index="myindex" OriginalClientIp="10.x.x.x" NOT Sender="non-existent_user@domain.com" | table Sender Recipients Timestamp OriginalClientIp

I have a list of email addresses and IPs. There will be a maximum of two email addresses for each IP. Is there any way to look up a table and list out the non-matching "from" email addresses?
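A sketch of the lookup approach I have in mind (untested; the file name allowed_senders.csv and the columns OriginalClientIp and AllowedSender are hypothetical, with each IP's one or two allowed addresses stored as separate rows):

```spl
index="myindex"
| lookup allowed_senders.csv OriginalClientIp OUTPUT AllowedSender
| where isnull(mvfind(AllowedSender, "(?i)^" . Sender . "$"))
| table Sender Recipients Timestamp OriginalClientIp
```

With multiple rows per IP, the lookup should return AllowedSender as a multivalue field, and mvfind returns null when the actual Sender matches none of the allowed values, leaving only the non-matching senders in the results.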
Hi, I am trying to overlay two timecharts with different date ranges. The search for the first timechart, for the date range March 1993 to June 30th 2019, is:

index="midas_temp" MET_DOMAIN_NAME=DLY3208
| eval trange=MAX_AIR_TEMP - MIN_AIR_TEMP
| fields trange
| bucket span=1d _time
| stats avg(trange) AS avgdailytrange by _time
| eval "Month-Day"=strftime(_time,"%m-%d")
| chart avg(avgdailytrange) AS "average trange" by "Month-Day"

And the second timechart is for the date range March to June 2020:

index="midas_temp" MET_DOMAIN_NAME=DLY3208
| eval trange=MAX_AIR_TEMP - MIN_AIR_TEMP
| timechart avg(trange) span=1day

Is there a way I can put these two lines on the same timechart? Thanks.
Hi, I use the search below:

<row>
  <panel>
    <table>
      <search>
        <query>index=toto sourcetype=tutu | timechart span=5m perc90(citr) as cit</query>
        <earliest>-4h@m</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
      <format type="color" field="citr">
        <colorPalette type="list">[#DC4E41,#F1813F,#53A051]</colorPalette>
        <scale type="threshold">0,22000,24000,27000</scale>
      </format>
    </table>
  </panel>
</row>

As you can see, the results are colored green, orange, or red according to the result value. I would like to display the results as a heatmap: that is, I would like the _time field on the x axis and just the colors displayed, without the values, something like this. Is it possible to do that with just a table viz? Regards
Hi, is it possible to make use of multiple indexes in one query? Below is the use case I am trying to implement. If a connection from an IP address has a threat signature match in IPS, then look for the same address in the WAF: if the WAF action is alerted, trigger the alert; if the WAF action is blocked, the alert can be suppressed. Is it possible to implement this use case? I am just trying to fine-tune our detection capabilities as much as possible.
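A sketch of the cross-index correlation I am after (untested; the index names ips and waf and the field names src_ip, threat_signature, and action are all assumptions):

```spl
index=waf action=alerted
    [ search index=ips threat_signature=* | dedup src_ip | fields src_ip ]
| table _time src_ip action
```

The subsearch returns the IPs that matched an IPS signature, and the outer search keeps only WAF events for those IPs whose action was alerted, so connections the WAF blocked never fire the alert.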
A sample event:

A002 : A] [A004 : 2] [A005 : 2000] [A006 : 0110] [A007 : 85] [A008 : VISA Credit] [A008.ID : 9] [A010 : 1644757200000] [A019 : ANZ 407220] [A021 : 20] [A022 : A] [RESPONDER : 5] [A028 : 85]

SELECT A028, responder, count(*) AS total FROM table WHERE A028 <> '00' GROUP BY auth_resp_cde, auth_responder

The above is a SQL query; I want a similar query in Splunk. Please assist.
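Assuming the event fields extract as A028 and RESPONDER, an SPL equivalent of that SQL would look something like this (the index name is a placeholder):

```spl
index=<your_index> A028!="00"
| stats count AS total BY A028, RESPONDER
```

In SPL, stats ... BY plays the role of both the SELECT aggregate and the GROUP BY, and the A028!="00" term in the base search corresponds to the WHERE clause.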
When Splunk is running on a production server, what complications can happen?
Hi, how can I enable the export functionality on my panels? These panels use a base search, and I am struggling with using a saved search with the multiple tokens the panel uses.
Hi, I have a lookup file as below:

Fileid earliest latest
abc 01 03
bcd 02 05

Now, the alert (which runs every hour) that I am going to set up should look at this lookup file; if the current time falls within the earliest/latest window of any row in the lookup file, the corresponding Fileid should be output, and the alert should also search for that Fileid using the time range mentioned in the lookup file. Please advise me how to achieve this.
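A sketch of the row-matching part (untested; it assumes the earliest and latest columns hold two-digit hours of the day and that the file is named file_windows.csv, both of which are guesses):

```spl
| inputlookup file_windows.csv
| eval now_hour=strftime(now(), "%H")
| where now_hour >= earliest AND now_hour <= latest
| fields Fileid
```

Used as a subsearch in front of the main alert search, this would emit the Fileid values for whichever rows cover the current hour.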
I want to search all the email logs for a mail transaction. However, we have multiple indexes for our mail logs. When I run the search below, it gets the qid, which is the expected behavior:

sourcetype=INDEX_B index=INDEX_B [ search sourcetype=INDEX_A to=*<email address>* | fields msgid | rename msgid as hdr_mid ] | table qid

where:

msgid/hdr_mid = the unique email id in INDEX_A. I have to rename msgid to hdr_mid as that's the name of the field in INDEX_B.
qid = another unique id in INDEX_B that corresponds to INDEX_A.

What I want to accomplish is for the resulting qid to immediately search for all matches in INDEX_B, but it's not generating any search results. Below is the modified version I made:

sourcetype=INDEX_B index=INDEX_B [ search sourcetype=INDEX_B index=INDEX_B | search [ search sourcetype=INDEX_A to=*<email address>* | fields msgid | rename msgid as hdr_mid ] | rename qid as search ] | table qid
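A sketch of the nested form I believe I need (untested; same placeholder names as above). The inner subsearch finds the msgid, the middle search resolves it to a qid, and renaming qid to search makes the outer search match that value as a raw search term:

```spl
sourcetype=INDEX_B index=INDEX_B
    [ search sourcetype=INDEX_B index=INDEX_B
        [ search sourcetype=INDEX_A to=*<email address>* | fields msgid | rename msgid AS hdr_mid ]
      | fields qid
      | rename qid AS search ]
| table qid
```

The explicit "| fields qid" before the rename is what I suspect was missing from my modified version, since without it the subsearch hands every field back to the outer search.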
I've got a role with a concurrency limit of more than 6, and here is what I did:

Step 1. I submitted 6 concurrent jobs using the API: POST https://<host>:<mPort>/services/search/jobs
Step 2. I waited for all 6 jobs' statuses to be "DONE" using the API: GET https://<host>:<mPort>/services/search/jobs/{search_id}
Step 3. I set all 6 jobs' ttl to 3600, to leave me enough time to get all the results, using the API: POST https://<host>:<mPort>/services/search/jobs/{search_id}/control
Step 4. I got all 6 jobs' results using the API: GET https://<host>:<mPort>/services/search/jobs/{search_id}/results
Step 5. I deleted all 6 jobs using the API: DELETE https://<host>:<mPort>/services/search/jobs/{search_id}
Step 6. I submitted another 6 concurrent jobs and found that most of the jobs' statuses got stuck in "QUEUED" forever.

I don't know why the first 6 concurrent jobs worked well but the second 6 got "QUEUED". Does the DELETE API not actually work?
Explain Splunk authentication.
According to the SentinelOne Upgrade Documentation for v3.6, they suggest the following for a distributed deployment (8.x):

Heavy Forwarder: IA-sentinelone_app_for_splunk
Search Head: (prerequisite) Splunk CIM Add-on; SentinelOne App (sentinelone_app_for_splunk)
Indexer: TA-sentinelone_app_for_splunk

Question: Does the IA-sentinelone_app need to be installed on the HF? Can the TA-sentinelone be installed on the HF instead? Note: the customer does not want index-time extractions.
There are two apps, a custom app and the search app, which are inaccessible to users despite the users having read permission to view the apps. These apps exist on a Search Head Cluster. Can someone please advise how to fix this?
I am trying to link 2 events together because information in the first event does not show up in the second, and that information is needed to filter the results. I have been trying to use transaction, but in doing so I am losing information needed to filter the end results:

eventA OR (eventB (amount>25 AND amount!=250 AND amount!="NONE"))
| transaction blue
| lookup C fieldD OUTPUT eggs
| search eggs>21
| table fieldD amount eggs blue

That is the basics of the search. The problem is that fieldD is only in eventA and amount is only in eventB. After using transaction to link them, amount disappears and can't be used to filter. Is there any other way to link the 2 events without losing data within the events?
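A sketch of a stats-based alternative I'm considering (untested), which keeps both fields by aggregating on the shared blue field instead of using transaction:

```spl
eventA OR (eventB (amount>25 AND amount!=250 AND amount!="NONE"))
| stats values(fieldD) AS fieldD, values(amount) AS amount BY blue
| lookup C fieldD OUTPUT eggs
| search eggs>21
| table fieldD amount eggs blue
```

Since stats carries forward whichever values exist in either event sharing the same blue value, fieldD from eventA and amount from eventB should both survive into the filtering steps.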
We have set up the Monitoring of Java Virtual Machines with JMX app on our application server. We are getting the following error:

2022-03-03 13:27:22 INFO Logger=ModularInput Initialising Modular Input
2022-03-03 13:27:22 INFO Logger=ModularInput Getting scheme
2022-03-03 13:27:25 INFO Logger=ModularInput Initialising Modular Input
2022-03-03 13:27:25 INFO Logger=ModularInput Running connection poller
2022-03-03 13:27:26 INFO Logger=ModularInput Running state checker
2022-03-03 13:27:26 INFO Logger=ModularInput Activation key check passed
2022-03-03 13:27:26 INFO Logger=org.exolab.castor.mapping.Mapping Loading mapping descriptors from jar:file:/splunk/splunkforwarder/etc/apps/SPLUNK4JMX/bin/lib/jmxmodinput.jar!/mapping.xml
2022-03-03 13:29:33 ERROR Logger=jmx://fakevalue host=xxx.xx.xxx.xxx, jmxServiceURL=, jmxport=6969, jvmDescription=fakevalue, processID=0, stanza=jmx://xxx, systemErrorMessage="Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: xxx.xx.xxx.xxx; nested exception is: java.net.ConnectException: Connection timed out (Connection timed out)]"

Below is a portion of our Java settings (masked); we handle some parameters in Catalina property files:

JVM_OPTS="-Dhae.cf.srv.util.security.hmacKey=fakevalue -XX:ReservedCodeCacheSize=500m -Xmx16G -Xms16G -Xss1024K -Dorg.quartz.threadPool.threadCount=1 -Dspring.security.strategy=fakevalue -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Duser.timezone=UTC -Dhae.cf.srv.sec.timeout=15 -Dhpf.session.expiry=30 -Dng.log.dir=xxx -Djavax.net.ssl.trustStore=xxx/cacerts.jks -Djavax.net.ssl.trustStorePassword=xxxxx -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Xloggc:

I suspect the issue is with the SSL configuration; we have the same deployment in a non-SSL Java environment and there things work as expected. How can we add SSL-related configuration into config.xml?
Current connection: <jmxserver host="xxx.xx.xxx.xxx" jvmDescription="fakevalue" jmxport="6969" jmxuser="admin" jmxpass="xxxx"> @Damien_Dallimor 
EDIT: Solved. Used regex to target the printable portion first, then converted to ASCII.

For a couple of dashboards, I'm using the following to display the plain text of hex data:

[search] | eval ascii=(ltrim(replace(data,"([A-F0-9]{2})","%\1"),"0x")) | table ascii

This works great for almost everything. However, when using it on Snort's ET POLICY ZIP file download events, it gives me nothing. Any ideas on why this is failing for specifically these alerts?

Things I'm aware of: ZIP files are not plaintext. The filenames within them, however, are. The plan is to use regex to locate and extract the filenames afterwards.

Things I've confirmed: The relevant field is labeled "data" in both working and non-working examples. The data field contains ONLY hex data. No lowercase, spaces, dashes, etc. are used in the data field. The data fields do contain the strings I'm trying to extract.
I have an XML event:

_raw="2022-03-02 21:22:39.417 [MESSAGE] [default-threads - 8] [re_messages] - <?xml version="1.0" encoding="UTF-8"?><al:EnvEventDatagram xmlns:mex="http://xxxx" xmlns:bdm="http://xxxx" xmlns:al="http://xxxx" xmlns:xsi="http://www.w3.org/xxxx" xsi:schemaLocation="http://xxxx.xsd"><mex:ManagedApp><mex:IssuerId>com1</mex:IssuerId><mex:Code>abc</mex:Code><mex:DeployedUnitId>123</mex:DeployedUnitId><mex:DxmVersion>1.10</mex:DxmVersion></mex:ManagedApp><mex:ID>456</mex:ID><mex:AID>1</mex:AID><al:SvcEventDatagram><mex:MessageID>aaa</mex:MessageID><al:Alert><al:DA><al:ASQ><al:IssuerId>bbb</al:IssuerId><al:Value>ccc</al:Value></al:ASQ><al:CU><bdm:B><bdm:IssuerId>888</bdm:IssuerId><bdm:Value>ddd</bdm:Value></bdm:B></al:CU><al:YYY><al:LLL>89</al:LLL><al:BNum>28</al:BNum><al:NUM>6</al:NUM></al:YYY><al:FAUTQ><al:Value>vvv</al:Value></al:FAUTQ><al:BA><bdm:TypeQcd><bdm:IssuerId>kkk</bdm:IssuerId><bdm:Value>ABC</bdm:Value></bdm:TypeQcd><bdm:Ccyamt><bdm:MM>88</bdm:MM></bdm:Ccyamt></al:BA><al:BA><bdm:TypeQcd><bdm:IssuerId>abc</bdm:IssuerId><bdm:Value>NNN</bdm:Value></bdm:TypeQcd><bdm:Ccyamt><bdm:MM>22</bdm:MM></bdm:Ccyamt><al:ReasonQcd><al:IssuerId>vvv</al:IssuerId><al:Value>FF</al:Value></al:ReasonQcd></al:BA><al:DATypeQcd><al:Value>mmm</al:Value></al:DATypeQcd><al:OverLimitInd>ii</al:OverLimitInd><al:Qcd><al:Value>N/A</al:Value></al:Qcd></al:DA><al:QQQ><bdm:DescriptionTxt><bdm:Text>HH</bdm:Text></bdm:DescriptionTxt><bdm:StartDttm>2022-03-02</bdm:StartDttm><bdm:ATQ><bdm:IssuerId>77</bdm:IssuerId><bdm:Value>TTT</bdm:Value></bdm:ATQ><bdm:Status><bdm:TypeQcd><bdm:IssuerId>55</bdm:IssuerId><bdm:Value>PPP</bdm:Value></bdm:TypeQcd></bdm:Status><bdm:Ccyamt><bdm:MM>12</bdm:MM></bdm:Ccyamt><bdm:DebitCreditQcd><bdm:IssuerId>AAA</bdm:IssuerId><bdm:Value>GGG</bdm:Value></bdm:DebitCreditQcd><al:TED>2022-03-02</al:TED><al:ProcessDt>2022-03-02</al:ProcessDt></al:QQQ></al:Alert></al:SvcEventDatagram></al:EnvEventDatagram>"

Is there any way to get all of the <bdm:Value> values (ddd, ABC, etc.), for example by regex?
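A sketch of a rex-based extraction (untested against this exact event):

```spl
| rex field=_raw max_match=0 "<bdm:Value>(?<bdm_value>[^<]+)</bdm:Value>"
| table bdm_value
```

With max_match=0, rex keeps matching past the first hit and returns bdm_value as a multivalue field containing every <bdm:Value> in the event.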
I am trying to create my search head captain using bootstrap, but I end up with the error:

(URI=https:myinstance/services/shcluster/member/consensus/pseudoid/last_known_state?output_mode=json, socket_error=Resource temporarily unavailable)

I have tried multiple times to reconfigure from a clean start and still get the same error. Please help.
I'm receiving an error whenever I try to view any CSV lookup tables I have uploaded into my search head cluster (v8.1.6). Uploading the same CSV files to my local sandbox works without issue.

With the query

| inputlookup <filename>.csv

I receive the error

The lookup table '<filename>.csv' requires a .csv or KV store lookup definition.

The .csv files appear on the local file system and propagate across the cluster properly. The splunkd.log also doesn't give any information beyond what the UI already outputs.