All Topics

I want to search all the email logs for a mail transaction. However, we have multiple indexes for our mail logs. When I run the search below, it gets the qid, which is the expected behavior.

sourcetype=INDEX_B index=INDEX_B [search sourcetype=INDEX_A to=*<email address>* | fields msgid | rename msgid as hdr_mid] | table qid

where:

msgid/hdr_mid = the unique email id in INDEX_A. I have to rename msgid to hdr_mid as that's the name of the field in INDEX_A.
qid = another unique id in INDEX_B that corresponds to INDEX_A.

What I want to accomplish is for the resulting qids to immediately be used to search for all matches in INDEX_B, but it's not generating any search results. Below is the modified version I made.

sourcetype=INDEX_B index=INDEX_B [search sourcetype=INDEX_B index=INDEX_B | search [search sourcetype=INDEX_A to=*<email address>* | fields msgid | rename msgid as hdr_mid | rename qid as search]] | table qid
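One common way to feed the qids back into a second pass over INDEX_B is to nest the subsearches the other way around: the inner subsearch resolves hdr_mid, the middle subsearch collects the matching qids, and `fields qid | format` turns them into a `(qid=... OR qid=...)` filter for the outer search. A sketch, reusing the index/sourcetype names exactly as they appear in the post:

```
index=INDEX_B sourcetype=INDEX_B
    [ search index=INDEX_B sourcetype=INDEX_B
        [ search sourcetype=INDEX_A to=*<email address>*
          | fields msgid
          | rename msgid AS hdr_mid ]
      | fields qid
      | format ]
```

A subsearch returns its field names as search terms, so keeping only `qid` (rather than renaming it to `search`) lets the outer search match events by their own qid field. Keep in mind subsearches are capped (by default roughly 10,000 results and 60 seconds of runtime), which matters if the qid list is large.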
I've got a role with a concurrency limit higher than 6, and here is what I did:

Step 1. I submitted 6 concurrent jobs using the API POST https://<host>:<mPort>/services/search/jobs
Step 2. I waited for all 6 jobs' statuses to become "DONE" using the API GET https://<host>:<mPort>/services/search/jobs/{search_id}
Step 3. I set all 6 jobs' ttl to 3600, to leave me enough time to fetch all the results, using the API POST https://<host>:<mPort>/services/search/jobs/{search_id}/control
Step 4. I retrieved all 6 jobs' results using the API GET https://<host>:<mPort>/services/search/jobs/{search_id}/results
Step 5. I deleted all 6 jobs using the API DELETE https://<host>:<mPort>/services/search/jobs/{search_id}
Step 6. I submitted another 6 concurrent jobs and found that most of them got stuck in "QUEUED" forever.

I don't know why the first 6 concurrent jobs worked well but the second 6 got "QUEUED". Does the DELETE API actually not work?
Explain Splunk authentication.
According to the SentinelOne Upgrade Documentation for v3.6, they suggest the following for a distributed deployment (8.x):

Heavy Forwarder: IA-sentinelone_app_for_splunk
Search Head: (prerequisite) Splunk CIM Add-on; SentinelOne App (sentinelone_app_for_splunk)
Indexer: TA-sentinelone_app_for_splunk

Question: Does the IA-sentinelone_app need to be installed on the HF? Can the TA-sentinelone be installed on the HF instead? Note: the customer does not want index-time extractions.
There are two apps, a custom app and the search app, which are inaccessible to users even though they have read permission to view the apps. These apps exist on a Search Head Cluster. Can someone please advise how to fix this?
I am trying to link 2 events together because information in the first event does not appear in the second, and that information is needed to filter the results. I have been trying to use transaction, but in doing so I am losing information needed to filter the end results:

eventA OR (eventB (amount>25 AND amount!=250 AND amount!="NONE")) | transaction blue | lookup C fieldD OUTPUT eggs | search eggs>21 | table fieldD amount eggs blue

That is the basic search. The problem is that fieldD is only in eventA and amount is only in eventB. After using transaction to link them, amount disappears and can't be used to filter. Is there any other way to link the 2 events without losing data within the events?
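An alternative that avoids `transaction` dropping fields is to correlate with `stats`, which keeps whatever you explicitly aggregate. A sketch, assuming `blue` is the key shared by both events:

```
eventA OR (eventB (amount>25 AND amount!=250 AND amount!="NONE"))
| stats values(fieldD) AS fieldD, values(amount) AS amount BY blue
| search fieldD=* amount=*
| lookup C fieldD OUTPUT eggs
| search eggs>21
| table fieldD amount eggs blue
```

`values()` carries both fields through the grouping, so `amount` stays available for filtering after the events are linked; the `fieldD=* amount=*` step keeps only keys where an eventA actually paired with an eventB. If fieldD can take multiple values per key, the lookup sees a multivalue field, so check that behaves as you expect.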
We have set up the Monitoring of Java Virtual Machines with JMX app on our application server. We are getting the following error.

2022-03-03 13:27:22 INFO Logger=ModularInput Initialising Modular Input
2022-03-03 13:27:22 INFO Logger=ModularInput Getting scheme
2022-03-03 13:27:25 INFO Logger=ModularInput Initialising Modular Input
2022-03-03 13:27:25 INFO Logger=ModularInput Running connection poller
2022-03-03 13:27:26 INFO Logger=ModularInput Running state checker
2022-03-03 13:27:26 INFO Logger=ModularInput Activation key check passed
2022-03-03 13:27:26 INFO Logger=org.exolab.castor.mapping.Mapping Loading mapping descriptors from jar:file:/splunk/splunkforwarder/etc/apps/SPLUNK4JMX/bin/lib/jmxmodinput.jar!/mapping.xml
2022-03-03 13:29:33 ERROR Logger=jmx://fakevalue host=xxx.xx.xxx.xxx, jmxServiceURL=, jmxport=6969, jvmDescription=fakevalue, processID=0,stanza=jmx://xxx,systemErrorMessage="Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: xxx.xx.xxx.xxx; nested exception is: java.net.ConnectException: Connection timed out (Connection timed out)]"

Below is a portion of our Java settings (masked). We handle some parameters in Catalina property files.

JVM_OPTS="-Dhae.cf.srv.util.security.hmacKey=fakevalue -XX:ReservedCodeCacheSize=500m -Xmx16G -Xms16G -Xss1024K -Dorg.quartz.threadPool.threadCount=1 -Dspring.security.strategy=fakevalue -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true -Duser.timezone=UTC -Dhae.cf.srv.sec.timeout=15 -Dhpf.session.expiry=30 -Dng.log.dir=xxx -Djavax.net.ssl.trustStore=xxx/cacerts.jks -Djavax.net.ssl.trustStorePassword=xxxxx -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Xloggc:

I suspect the issue is with the SSL configuration; we have the same deployment in a non-SSL Java environment and things work as expected there. How can we add SSL-related configuration into config.xml?
Current connection: <jmxserver host="xxx.xx.xxx.xxx" jvmDescription="fakevalue" jmxport="6969" jmxuser="admin" jmxpass="xxxx"> @Damien_Dallimor 
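Worth noting: the "Connection refused / Connection timed out" in the stack trace is a network-level RMI failure rather than an SSL handshake error, so the first check is whether the JMX port is actually reachable from the forwarder host. Plain JMX-over-RMI also uses two ports: the registry port you configure, plus a second (random, unless pinned) port for the RMIServer stub mentioned in the error. If the target JVM genuinely requires SSL, the standard server-side JVM flags look like the following sketch; paths and password are placeholders:

```
# Server-side JVM options for JMX over SSL (illustrative values)
-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=6969
# Pin the RMIServer stub port so only one port needs to be open
-Dcom.sun.management.jmxremote.rmi.port=6969
-Dcom.sun.management.jmxremote.ssl=true
-Dcom.sun.management.jmxremote.registry.ssl=true
-Djavax.net.ssl.keyStore=/path/to/keystore.jks
-Djavax.net.ssl.keyStorePassword=changeit
```

On the client side, the JVM running the modular input would need a truststore containing the server certificate (e.g. via -Djavax.net.ssl.trustStore=...). Whether SSL options can be expressed directly in the app's config.xml depends on the app version, so treat the above as the underlying JVM mechanics rather than the app's own syntax.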
EDIT: Solved. I used regex to target the printable portion first, then converted it to ASCII.

For a couple of dashboards, I'm using the following to display the plain text of hex data:

[search] | eval ascii=(ltrim(replace(data,"([A-F0-9]{2})","%\1"),"0x")) | table ascii

This works great for almost everything. However, when using it on Snort's ET POLICY ZIP file download events, it gives me nothing. Any ideas on why this is failing for specifically these alerts?

Things I'm aware of: ZIP files are not plain text. The filenames within them, however, are. The plan is to use regex to locate and extract filenames afterwards.

Things I've confirmed: The relevant field is labeled "data" in both working and non-working examples. The data field contains ONLY hex data. No lowercase, spaces, dashes, etc. are used in the data field. The data fields do contain the strings I'm trying to extract.
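For anyone hitting the same wall: the likely culprit is non-printable bytes in the ZIP payload, since the hex-to-ASCII trick only yields readable text for byte values 0x20–0x7E. A sketch of the "extract the printable runs first" approach; the regex, the minimum run length, and the use of `mvmap` (Splunk 8.0+) are illustrative, and note the pattern assumes matches start on a byte boundary:

```
[search]
| rex field=data max_match=0 "(?<printable>(?:[2-6][0-9A-F]|7[0-9A-E]){4,})"
| eval ascii=mvmap(printable, urldecode(replace(printable, "([0-9A-F]{2})", "%\1")))
| table ascii
```

Each matched run of hex pairs in the printable ASCII range is rewritten as %XX escapes and decoded with `urldecode`, leaving only the human-readable fragments (such as the embedded filenames) for a follow-up regex to pick apart.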
I have an XML event:

_raw="2022-03-02 21:22:39.417 [MESSAGE] [default-threads - 8] [re_messages] - <?xml version="1.0" encoding="UTF-8"?><al:EnvEventDatagram xmlns:mex="http://xxxx" xmlns:bdm="http://xxxx" xmlns:al="http://xxxx" xmlns:xsi="http://www.w3.org/xxxx" xsi:schemaLocation="http://xxxx.xsd"><mex:ManagedApp><mex:IssuerId>com1</mex:IssuerId><mex:Code>abc</mex:Code><mex:DeployedUnitId>123</mex:DeployedUnitId><mex:DxmVersion>1.10</mex:DxmVersion></mex:ManagedApp><mex:ID>456</mex:ID><mex:AID>1</mex:AID><al:SvcEventDatagram><mex:MessageID>aaa</mex:MessageID><al:Alert><al:DA><al:ASQ><al:IssuerId>bbb</al:IssuerId><al:Value>ccc</al:Value></al:ASQ><al:CU><bdm:B><bdm:IssuerId>888</bdm:IssuerId><bdm:Value>ddd</bdm:Value></bdm:B></al:CU><al:YYY><al:LLL>89</al:LLL><al:BNum>28</al:BNum><al:NUM>6</al:NUM></al:YYY><al:FAUTQ><al:Value>vvv</al:Value></al:FAUTQ><al:BA><bdm:TypeQcd><bdm:IssuerId>kkk</bdm:IssuerId><bdm:Value>ABC</bdm:Value></bdm:TypeQcd><bdm:Ccyamt><bdm:MM>88</bdm:MM></bdm:Ccyamt></al:BA><al:BA><bdm:TypeQcd><bdm:IssuerId>abc</bdm:IssuerId><bdm:Value>NNN</bdm:Value></bdm:TypeQcd><bdm:Ccyamt><bdm:MM>22</bdm:MM></bdm:Ccyamt><al:ReasonQcd><al:IssuerId>vvv</al:IssuerId><al:Value>FF</al:Value></al:ReasonQcd></al:BA><al:DATypeQcd><al:Value>mmm</al:Value></al:DATypeQcd><al:OverLimitInd>ii</al:OverLimitInd><al:Qcd><al:Value>N/A</al:Value></al:Qcd></al:DA><al:QQQ><bdm:DescriptionTxt><bdm:Text>HH</bdm:Text></bdm:DescriptionTxt><bdm:StartDttm>2022-03-02</bdm:StartDttm><bdm:ATQ><bdm:IssuerId>77</bdm:IssuerId><bdm:Value>TTT</bdm:Value></bdm:ATQ><bdm:Status><bdm:TypeQcd><bdm:IssuerId>55</bdm:IssuerId><bdm:Value>PPP</bdm:Value></bdm:TypeQcd></bdm:Status><bdm:Ccyamt><bdm:MM>12</bdm:MM></bdm:Ccyamt><bdm:DebitCreditQcd><bdm:IssuerId>AAA</bdm:IssuerId><bdm:Value>GGG</bdm:Value></bdm:DebitCreditQcd><al:TED>2022-03-02</al:TED><al:ProcessDt>2022-03-02</al:ProcessDt></al:QQQ></al:Alert></al:SvcEventDatagram></al:EnvEventDatagram>"

Is there any way to get all the <bdm:Value> values (ddd, ABC, etc.), e.g. by regex?
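Yes — since every occurrence is wanted regardless of its position in the XML, a `rex` with `max_match=0` will pull them all into one multivalue field:

```
| rex field=_raw max_match=0 "<bdm:Value>(?<bdm_value>[^<]+)</bdm:Value>"
| table bdm_value
```

`max_match=0` removes the single-match limit, so `bdm_value` ends up containing ddd, ABC, NNN, and so on, in document order. If the event were reduced to clean XML (e.g. by trimming the log prefix ahead of `<?xml`), `spath` would be the more structured alternative.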
I am trying to create my search head captain using bootstrap, but I end up with the error (URI=https:myinstance/services/shcluster/member/consensus/pseudoid/last_known_state?output_mode=json, socket_error=Resource temporarily unavailable). I have tried multiple times to reconfigure from a clean start and still get the same error. Please help.
I'm receiving an error whenever I try to view any CSV lookup tables I have uploaded into my search head cluster (v8.1.6). Uploading the same CSV files onto my local sandbox works without issue.

With the query

| inputlookup <filename>.csv

I receive the error

The lookup table '<filename>.csv' requires a .csv or KV store lookup definition.

The .csv files appear on the local file system and propagate across the cluster properly. splunkd.log also doesn't give any information beyond what the UI already outputs.
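One thing worth checking: `| inputlookup <filename>.csv` has to resolve the file in an app's lookups directory that the searching user's role and app context can read, and this error also shows up when only a lookup definition would match. A workaround that often sidesteps it is to create an explicit lookup definition and reference that instead; the stanza name below is a placeholder:

```
# transforms.conf in the app that holds the file
[my_lookup_definition]
filename = <filename>.csv
```

Then `| inputlookup my_lookup_definition`. The same definition can be created in the UI under Settings > Lookups > Lookup definitions; remember to share it at app or global level so it replicates with the SHC configuration and is visible to the users who need it.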
Hello, I have CSV source files (with epoch time) that include header info; a few sample events are given below. I wrote a props configuration file (see below). I tested this props with a few events and it works as expected. Do you have any recommendations on this props configuration, or am I good to go with this props.conf? Also, is there any way I can change the field names (i.e., id as ID, created as TIMESTAMP, and so on)? Your feedback and help will be highly appreciated. Thank you so much.

Sample csv with epoch time:

props.conf that I wrote:

[csv]
SHOULD_LINEMERGE=false
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
category=Structured
HEADER_FIELD_LINE_NUMBER=1
TIMESTAMP_FIELDS=created
TIME_FORMAT=%s%9N
MAX_TIMESTAMP_LOOKAHEAD=14
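On the renaming question: for structured (INDEXED_EXTRACTIONS) sourcetypes, props.conf has a `FIELD_NAMES` setting that supplies your own header names in column order; my understanding is that it takes precedence over the names read from the header line, but verify on a test index first. One more caution: `%s%9N` describes a 19-digit value, so if `MAX_TIMESTAMP_LOOKAHEAD` applies here, 14 may truncate it; consider raising it to at least 19. An illustrative sketch, assuming the first two columns are id and created and the remaining names are placeholders:

```
[csv]
SHOULD_LINEMERGE=false
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
category=Structured
HEADER_FIELD_LINE_NUMBER=1
# Replace the header names, in column order (illustrative names)
FIELD_NAMES=ID,TIMESTAMP,FIELD3,FIELD4
# TIMESTAMP_FIELDS must then reference the new name
TIMESTAMP_FIELDS=TIMESTAMP
TIME_FORMAT=%s%9N
MAX_TIMESTAMP_LOOKAHEAD=19
```

Note that because these are index-time extractions, renamed fields only apply to newly indexed data; already-indexed events keep their old field names.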
I'm not sure if I'm missing something simple or not, but I've got event logs from my Salesforce instance fed in, as well as the User object, and for some reason I can aggregate on some fields of User but not others ... even though the fields exist in Splunk.   index=sfdc sourcetype=LightningPageViewCSV |join USER_ID [ search sourcetype=sfdc:user | eval USER_ID=substr(Id,1,len(Id)-3) ] |stats avg(EFFECTIVE_PAGE_TIME) by Name   // this works to aggregate by the user's name. Not really useful but it was a test to make sure something came through. The substring is b/c one object uses the 18-char Salesforce Id, the other uses the shortened 15-char Id.    index=sfdc sourcetype=LightningPageViewCSV |join USER_ID [ search sourcetype=sfdc:user | eval USER_ID=substr(Id,1,len(Id)-3) ] |stats avg(EFFECTIVE_PAGE_TIME) by State__c,Loc__c   //no results from this for some reason ... State__c and Loc__c are custom fields on User.   index=sfdc sourcetype=sfdc:user index=sfdc sourcetype=sfdc:user Name="[one of the names from the first query]"   //I run these just to see what I've got in my user object and I can see several people with non-null State__c and Loc__c This is a new dev org I just spun up so I'm not sure if I missed a step in adding these sources or not. The LightningPageViewCSV is an imported static CSV file of the EventLogFile for testing. The sfdc:user was a one time read in of the User object. Both of these are tied to the sfdc index.
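A first thing to try: make the subsearch hand back the custom fields explicitly. `join` only sees the fields that survive the subsearch, and depending on field discovery the custom columns may not be among them; an explicit `fields` makes it deterministic. A sketch using the field names from the post:

```
index=sfdc sourcetype=LightningPageViewCSV
| join USER_ID
    [ search index=sfdc sourcetype=sfdc:user
      | eval USER_ID=substr(Id,1,len(Id)-3)
      | fields USER_ID, Name, State__c, Loc__c ]
| stats avg(EFFECTIVE_PAGE_TIME) BY State__c, Loc__c
```

If that still returns nothing, run the subsearch on its own in Verbose mode and confirm State__c and Loc__c appear as extracted fields there; also note that `stats ... by` silently drops rows where a by-field is null, so adding `fillnull value=NONE State__c Loc__c` before the stats can reveal whether the join matched but the fields came through empty.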
Hello, thank you for taking the time to consider my question/situation. I am working on removing static deploymentclient.conf configurations (located on endpoints under $SPLUNK_HOME/etc/system/local) in my organization, in favor of app-based configurations sent from the existing deployment server.

Initially I had no issues removing the existing deploymentclienttest.conf file within /etc/system/local on the deployment client, using a Windows batch file (.bat) stored under /etc/deployment-apps/<appName>/bin/<nameOfRemovalscript>.bat. The contents of the bat file are shown below:

del "C:\Program Files\SplunkUniversalForwarder\etc\system\local\deploymentclienttest.conf"

The inputs.conf stored in the same custom app under the local/ directory is shown below:

[script://C:\Program Files\SplunkUniversalForwarder\etc\apps\<nameofApp>\bin\<replaceDeploymentClient>.bat]
interval = -1
source = replaceDeploymentClient
sourcetype = scriptedInput
index = _internal
disabled = 0

However, since I did this, my workstation no longer runs any scripts at all (I've tested .bat and .cmd scripts; no Python or ps1). I've tried referring to the script using both absolute (shown above) and relative file paths, as well as storing the .bat file within <appname>/bin/scripts/ in case that was needed, but it wasn't configured that way when I got it to work the first time.

My question is essentially this: what would cause a UF to just stop being able to run scripts deployed by the DS? If I go into the app and manually run the script, it removes the files and does whatever other commands I entered just fine, so what gives? I'm beginning to think this is a bug, but I still have hope that this is just the result of a bad config in one place or another. Please advise on any further troubleshooting I can do.
I should note that within Splunkd.log on the UF it says that the script has been scheduled to run whenever I deploy it with "restart splunkd" enabled for the app, but even that doesn't seem to do the trick.  Any help is appreciated, and thanks in advance!
Hello, I hope to get some guidance on configuring a Splunk web interface to be public facing while keeping the management side on a private interface. Some of the information I have read from our esteemed experts is a bit confusing to me. I understand that I can make changes to web.conf to alter the default IP/interface, but there is a caution that I should also change the management side along with it.

For security reasons and separation of duties, I am hoping to set it up so that only people with physical access to the private network can make managerial changes, while analysts outside the immediate area can still access the web interface to use the SIEM. Is this even possible, or am I seeking to set something up that is largely moot?

v/r Matt
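At the config level, Splunk Web's bind address is controlled separately from splunkd, so the UI can be pinned to the public-facing interface. A sketch; the IP is a placeholder:

```
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
# Bind Splunk Web (UI) to the public-facing interface only
server.socket_host = 203.0.113.10
```

The splunkd management port (8089 by default) listens on all interfaces out of the box. SPLUNK_BINDIP in splunk-launch.conf can pin splunkd to one IP, but it binds all splunkd ports to that address, so it cannot by itself split web and management across different interfaces. In practice the separation you describe is usually enforced with host or network firewall rules that restrict 8089 to the private network while leaving the web port reachable.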
In Splunk Cloud, when I go to change the time picker it brings up relative options. It used to bring up presets. How do I get presets to be the default? PS: nothing I do in Settings > Server Settings > Search Preferences seems to take. It's set to 7 days, but Search and Reporting still shows "last 30 minutes". So I have to change the time picker, then go to presets, then select whatever preset I want.
Hi Splunkers, I have a requirement to show a single value panel with the total number of connections to a server, and to change the panel color to RED when the connection is down (which is not shown on the panel). I've tried using classField and range, but it seems those are deprecated. I tried searching this forum but couldn't find any relevant options. Is there any other alternative to get this done? Please help.

Data:
session - name of the session (can be many)
server - server name, can be many (used trellis for this purpose)
STATUS - status of the connection, can be either UP or DOWN.

I've used rangeValues in the simple XML below, which isn't working as expected.

<form>
  <label>Color My Text</label>
  <fieldset submitButton="false">
    <input type="time" token="time">
      <label>time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <single>
        <search>
          <query>index=* | stats count by session,server,STATUS | foreach server [eval range=if('STATUS'="DOWN","severe", "low")] | chart count by server</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="classField">range</option>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="field">count</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051","0xdc4e41"]</option>
        <option name="rangeValues">[1]</option>
        <option name="refresh.display">progressbar</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">small</option>
        <option name="trellis.splitBy">server</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitserver">after</option>
<option name="useColors">1</option> <option name="useThousandSeparators">0</option> </single> </panel> </row> </form>      
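Since classField is deprecated, one workable pattern for a Simple XML single value is to encode the status into the displayed number's sign and color by value with rangeValues/rangeColors. A sketch, not a drop-in fix: the panel shows -1 (red) when any session on the server is DOWN, and the connection count (green) otherwise:

```
<single>
  <search>
    <query>index=* | stats count(eval(STATUS="UP")) as up count(eval(STATUS="DOWN")) as down by server | eval connections=if(down>0, -1, up) | fields server connections</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <option name="colorBy">value</option>
  <option name="colorMode">block</option>
  <!-- one boundary at 0: values below it use the first color (red) -->
  <option name="rangeValues">[0]</option>
  <option name="rangeColors">["0xdc4e41","0x53a051"]</option>
  <option name="trellis.enabled">1</option>
  <option name="trellis.splitBy">server</option>
</single>
```

The tradeoff is that the panel shows -1 instead of the real count while down; to my knowledge, coloring a Simple XML single value by a field other than the displayed one requires custom JS, whereas Dashboard Studio supports that kind of conditional coloring more directly.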
Hi there, I have two application log messages that I receive in Splunk:

1. Service stopped
2. Service Started

I need to create an alert if the "Service Started" log message does not show up within 10 minutes of the "Service stopped" log message. So the alert needs to trigger an email only if it has been more than 10 minutes since the service stopped and a new log message stating Service Started has not shown up in the logs. I have found some solutions here, but I need one that will compare the log messages. I am new to Splunk, so please do share the syntax, as I would not know how to work it out without it.

index=* | search app=xxx log="xxx" message="*service stopped/started*"
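One way to express "stopped without a started within 10 minutes" is a `transaction` with `startswith`/`endswith` and `keepevicted=true`, then alerting on transactions that never closed. A sketch; the index, app filter, and message text are placeholders taken from the post:

```
index=* app=xxx ("Service stopped" OR "Service Started")
| transaction startswith="Service stopped" endswith="Service Started" maxspan=10m keepevicted=true
| where closed_txn=0
```

Schedule this as an alert over a window such as the last 15 minutes with "trigger when number of results > 0". `keepevicted=true` retains transactions that never saw the endswith event, and `closed_txn=0` selects exactly those. One caveat: a stop event near the very end of the window may not have had its full 10 minutes yet, so running the search with a small delay (e.g. over `-25m@m` to `-10m@m`) avoids false alarms.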
I am trying to separate multivalue rows into their own rows. I have been trying to separate them by adding a comma after the end of each row and then splitting on the comma, but I am only able to split the first repetition of the pattern. Can anyone help?

Example: I have rows like this:

Domain Name: Domain1.com
Instance name: instance1.com, instance2.com, instance3.com, instance4.com
Last Phone home: 2022-02-28
Search execution time: 2022-03-3

And I would like to transform them into this:

Domain Name | Instance name | Last Phone home | Search execution time
Domain1.com | instance1.com | 2022-02-28 | 2022-03-02
Domain1.com | instance2.com | 2022-02-28 | 2022-03-02
Domain1.com | instance3.com | 2022-02-28 | 2022-03-02
Domain1.com | instance4.com | 2022-02-28 | 2022-03-02
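If the instance column is already a multivalue field, `mvexpand` does exactly this: it emits one row per value, copying the single-valued columns into each new row. If it arrives as one delimited string, split it first with `makemv`. A sketch, with the column names from the table written as Splunk field names (underscores are assumptions):

```
<your search>
| makemv delim="," Instance_name
| mvexpand Instance_name
| table Domain_Name Instance_name Last_Phone_home Search_execution_time
```

The `makemv` line is only needed when the values are one comma-delimited string; for a true multivalue field, `mvexpand` alone is enough. Note that `mvexpand` works on one field at a time, so if several columns are multivalued in parallel they need to be zipped together (e.g. with `mvzip`) before expanding.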
Is it possible to apply a color based on the first value of a multivalue field? Below is sample data: if the first value of server_status is Online, the field color should turn green; otherwise the color should be red.

hostname: server101
server_status: Online, Offline (31 days ago)
(first value Online, so green)

hostname: server101
server_status: Offline (31 days ago), Online
(first value Offline, so red)
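The first element of a multivalue field can be pulled out with `mvindex(field, 0)`, and that value can drive a range field for coloring. A sketch:

```
<your search>
| eval first_status=mvindex(server_status, 0)
| eval range=if(like(first_status, "Online%"), "low", "severe")
```

How the color is then applied depends on the visualization: a Simple XML table needs a color format configuration (or a JS cell renderer) keyed off `range`, while Dashboard Studio supports coloring by an eval'd field more directly. The `low`/`severe` names follow the conventional rangemap classes, but any labels your formatting config expects would work.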