All Topics


My DB Connect app is hosted on a Splunk heavy forwarder and I need to create a connection to a SQL Server instance. I have the server address, and the port number is 1433. I need to request firewall access from my HF to the DB server over port 1433. Do I need to provide a source port number from my HF so the firewall can be opened, or do I just need to give the firewall team my HF IP address so they can open a connection to the DB server IP on port 1433?
Here is my XML. I know I am missing something, but I can't figure it out.

<dashboard>
  <label>gcj_printerStatusv2</label>
  <row>
    <panel>
      <single>
        <search>
          <query>index=oit_printer_monitoring AND type=Print*
| eval statusLevel = case(status="normal",1,status="offline",2)
| eval printerLoc = printer.location
| eval timeConv=strftime(_time,"%H:%M:%S %m/%d")
| eval statusTime = status.timeConv
| rangemap field=statusLevel low=1-1 severe=2-2 default=low
| replace "1" with "UP" in statusLevel
| replace "2" with "DOWN" in statusLevel
| where printer="oix12"
| stats latest(statusTime) BY printerLoc</query>
          <earliest>-4h@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x2af33e","0xff2727"]</option>
        <option name="rangeValues">[0]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">large</option>
        <option name="trellis.splitBy">printerLoc</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">0</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
  </row>
</dashboard>

Does rangemap work with trellis? Thanks and God bless, Genesius
I'm trying to do a simple | stats count over a virtual index and receiving errors. Thoughts on where to look for this one? Splunk 7.3.3 / Splunk 8.x to an EMR cluster with a master and two slave nodes. It still produces a count, but I assume it's much slower than if it were doing a map-reduce on it.

Exception - com.splunk.mr.JobStartException: Failed to start MapReduce job. Please consult search.log for more information. Message: [ Failed to start MapReduce job, name=SPLK_searchhead1.abc.corp.com_1580844860.138_0 ] and [ null ]

Edit: Other testing performed: I upgraded the JDK from 1.7 to 1.8; no change to what works/doesn't work. After adding vix.mapreduce.framework.name=yarn to indexes.conf and mapreduce.framework.name=yarn to yarn-site.xml, I get: Exception - failed 2 times due to AM Container for appattempt_... I've tested outside of Splunk and still receive the AM Container error:

yarn jar hadoop-streaming.jar streamjob -files wordSplitter.py -mapper wordSplitter.py -input input.txt -output wordCountOut -reducer aggregate
Does anyone have any SPL that looks at ALL connected network devices? For example, John Doe decides he wants to connect his own personal laptop to the network, and/or he tries to connect to a VDI session using that same laptop. I'd like to see that type of activity via a query in Splunk. Any help with this is greatly appreciated.
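One common pattern for this is comparing observed devices against an asset inventory lookup. A sketch, assuming a DHCP data source and a lookup file — the index, sourcetype, field names, and known_assets.csv are all placeholders you would replace with your own:

```
index=dhcp sourcetype=<your_dhcp_sourcetype> signature=DHCPACK
| stats latest(_time) AS last_seen BY src_mac src_ip
| lookup known_assets.csv mac AS src_mac OUTPUT owner
| where isnull(owner)
| convert ctime(last_seen)
```

Anything whose MAC is missing from the asset lookup surfaces as a potential unmanaged device; the same join can be applied to VPN, VDI broker, or 802.1X authentication logs if you have them.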
Cisco eStreamer eNcore Add-on for Splunk v3.6.8 has two EXTRACTs with errors in them. EXTRACT-extract_src and EXTRACT-extract_dest both have an extraneous equal sign (=) before the start of the regex which means that src_ip and dest_ip don't get extracted. Both are under the [cisco:firepower:syslog] stanza.
When I click on the Missile Map app, it redirects me to another Splunk (Windows) app. I disabled the Windows app and relaunched Missile Map, and now I get a blank page. Could you please help with this?
Hi Splunk community, I am trying to make a query that returns all transactions for a starting event and ending event that last a certain duration, as well as any starting events that don't have an ending event, for a specific time range. I attempted to do this by putting keepevicted=true in my transaction, but this appears to include some unwanted data as well. I believe the example below will show what I mean. The data list is as follows:

1. Connection: misc.
2. Connection: misc.
3. unneeded data
4. Connection: lost
5. Connection: finding
6. Connection: found
7. unneeded data
8. Connection: lost
9. Connection: finding
10. Connection: still finding

My query is as follows:

"Connection" | transaction startswith="lost " endswith="found" keepevicted=true

This returns 3 transactions: events 1-2, events 4-6, and events 8-10. The last two are the ones I want, but the first transaction is unneeded and shows up anyway as a result of keepevicted, since those events are considered close enough. If I remove keepevicted, I only receive events 4-6, since 8-10 doesn't have the end event. Is there a way I can modify the query so I receive the last two transactions but not the first one? Is it possible that transactions aren't necessary and other Splunk commands can get me what I want?
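One transaction-free pattern for this shape of problem is to number sessions with streamstats: every "lost" event starts a new group, and events before the first "lost" get group 0 and can be discarded. A sketch, untested, assuming the events sort ascending by _time and that the marker text appears in _raw:

```
"Connection"
| sort 0 _time
| streamstats sum(eval(if(like(_raw,"%lost%"),1,0))) AS session_id
| where session_id > 0
| stats min(_time) AS start max(_time) AS end list(_raw) AS events BY session_id
| eval duration=end-start
```

With the sample data this would drop events 1-2 (session_id=0) and keep the two groups starting at events 4 and 8, whether or not a matching "found" ever arrives.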
We've tried installing several apps on a distributed search head cluster via a deployer. Two examples:

Demisto:
https://splunkbase.splunk.com/app/3447/
https://splunkbase.splunk.com/app/3448/

Sophos:
https://splunkbase.splunk.com/app/3612/
https://splunkbase.splunk.com/app/1854/

I was initially able to load the setup and take screenshots of all of the requirements, but going back in, none of the setup pages are showing. I tried reinstalling Demisto from a fresh tgz file, but the setup page is still not showing up. I've checked app.conf and the necessary apps are visible; I checked that they're set as not yet configured, and they all have setup.xml files in their root directory. Something to note: this is an ES search head cluster.
Do we need to set up anything specific to see an indexer's internal logs? We just added the indexer and I can see logs coming from it for different inputs, but not its own internal logs. I can see its own logs if I log into the indexer directly. How do I see them on the search head?
I have a message that consists of key-value pairs:

"status=BLOCKED, identifier=123422dsd13, userId=12344, name=John"

I am using | extract pairdelim=", " kvdelim="=" to extract these key-value pairs. As output I would like to get a two-column table with rows that contain the key in column 1 and the value in column 2:

| Key        | Value       |
| status     | BLOCKED     |
| identifier | 123422dsd13 |
| userId     | 12344       |
| name       | John        |
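Once extract has turned the pairs into fields, transpose can flip that single row into the Key/Value layout. A runnable sketch against sample data (transpose names the transposed column "row 1" by default, hence the rename):

```
| makeresults
| eval _raw="status=BLOCKED, identifier=123422dsd13, userId=12344, name=John"
| extract pairdelim=", " kvdelim="="
| fields - _time _raw
| transpose column_name=Key
| rename "row 1" AS Value
```

For real events, replace the first three lines with your base search plus the extract command, keeping the fields/transpose/rename tail.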
We use the Splunk_TA_symantec-ep version 2.3.0 and it is not compatible with our upgraded Symantec Endpoint Protection Manager version 14 (14.2 RUS). Which TA should we use?
Hello, we have the Splunk Light platform for only a few systems. Is there a way to send alerts from Splunk Light and ingest them into Splunk Enterprise?
Hello - I come with a warning about an issue I recently had and resolved. Hopefully this will get some visibility and possibly be fixed in a later release of the add-on. This was tested with the latest version of this add-on (ver 3.4.0) and in reference to the log entry:

%ASA-5-746012: user-identity: Add IP-User mapping IP Address - domain_name \user_name result - reason

If you are using this add-on, please make a note to check whether you have anyone in your org with the string "Deny" (capital D) as part of their name or e-mail address. This would likely be someone named Denys or similar, or the leading characters of a last name. For example:

Cisco_ASA_user = LOCAL\Denys.Somelastname@yourdomain.com
Cisco_ASA_user = LOCAL\Bob.Denyan@yourdomain.com

I have found that, due to two weak regex entries in the add-on's transforms.conf file, it will look for a "Deny" string (capital D) in the log entry from your ASAs to populate the [cisco_asa_vendor_action] and [vendor_action] fields. This results in a deny vendor action even when the actual result reason shows succeeded, per the logs.
[cisco_asa_vendor_action]
REGEX=([Aa]uthentication [Ss]ucceeded|[Aa]uthorization [Pp]ermitted|authentication Successful|passed authentication|Login permitted|Authentication failed|Authorization denied|Can't find authorization|Authentication Failed|authentication Rejected|credentials rejected|Authentication:Dropping|login warning|login failed|failed authentication|[Cc]onnection denied|Deny inbound|Deny|Terminating|action locally|Unable to Pre-allocate|denied\s[tcp|udp|icmp]+|access denied|access requested|access permitted|limit exceeded|Dropped|Dropping|[B|b]uilt|[pP]ermitted|whitelisted|Pre-allocated|Rebuilt|redirected|discarded)
FORMAT=vendor_action::$1

[cisco_asa_vendor_action_for_performance]
REGEX=([Aa]uthentication [Ss]ucceeded|[Aa]uthorization [Pp]ermitted|authentication Successful|passed authentication|Login permitted|Authentication failed|Authorization denied|Can't find authorization|Authentication Failed|authentication Rejected|credentials rejected|Authentication:Dropping|login warning|login failed|failed authentication|[Cc]onnection denied|Deny inbound|Deny|Terminating|action locally|Unable to Pre-allocate|denied\s[tcp|udp|icmp]+|access denied|access requested|access permitted|limit exceeded|Dropped|Dropping|[B|b]uilt|[pP]ermitted|whitelisted|Pre-allocated|Rebuilt|redirected|discarded)
FORMAT=Cisco_ASA_vendor_action::$1

You can see in the regex that it is looking through OR (|) alternatives to find Deny. To resolve this, I have found that adding a \s after the Deny in the two stanzas listed above will process the fields correctly for those people that have the string as part of their name or email. So |Deny\s| will fix it. The corrected regex is below.
Be sure not to make the change under the default folder.

REGEX=([Aa]uthentication [Ss]ucceeded|[Aa]uthorization [Pp]ermitted|authentication Successful|passed authentication|Login permitted|Authentication failed|Authorization denied|Can't find authorization|Authentication Failed|authentication Rejected|credentials rejected|Authentication:Dropping|login warning|login failed|failed authentication|[Cc]onnection denied|Deny inbound|Deny\s|Terminating|action locally|Unable to Pre-allocate|denied\s[tcp|udp|icmp]+|access denied|access requested|access permitted|limit exceeded|Dropped|Dropping|[B|b]uilt|[pP]ermitted|whitelisted|Pre-allocated|Rebuilt|redirected|discarded)

Hope this helps! -Chris
I am trying to roll frozen data from three indexers to an NFS mount directory which contains three sub-directories named after the indexers. Whenever I create a new index and define the frozen path, I need to manually define coldToFrozenDir on each indexer, as each must point to its own sub-directory on the mount instead of the main directory. Is there an option within indexes.conf to get this automated so we can deploy using our DS?

IDX1: /abc/frozen/idx1
IDX2: /abc/frozen/idx2
IDX3: /abc/frozen/idx3
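One approach worth testing: keep a single indexes.conf on the DS and push the host-specific part into a variable set once per indexer. This is a sketch, not verified on your version — FROZEN_BASE is my own variable name, and you should confirm that splunkd expands custom variables from splunk-launch.conf in indexes.conf path values before relying on it ($_index_name, by contrast, is a documented substitution for coldToFrozenDir):

```
# splunk-launch.conf on IDX1 (set the matching path on IDX2/IDX3)
FROZEN_BASE=/abc/frozen/idx1

# indexes.conf, deployed identically to all three indexers
[my_index]
coldToFrozenDir = $FROZEN_BASE/$_index_name
```

If the variable expansion does not work in your environment, the fallback is three small DS apps (one per indexer serverclass), each carrying only the coldToFrozenDir override.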
{
  @timestamp: 2020-02-04T13:46:41.274+00:00
  domain: test
  environment: dev
  level: INFO
  logger_name: com.test.practice.evthub.sse.impl.EventEncrypter
  message: {"data":"6757", "key":"value"}
  thread_name: main
}

For the above log, how do I get the JSON inside the message field as a JSON object using spath? The output must be available to be reused for calculating stats. Finally, I need to get the value available under the key. To get this done, I first need the JSON object to be created. I tried using "spath input=message output=key" but it didn't work for me.
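A self-contained sketch of the intended spath usage against a mock of this message field (replace the makeresults/eval lines with your base search):

```
| makeresults
| eval message="{\"data\":\"6757\", \"key\":\"value\"}"
| spath input=message
| stats count BY key
```

If this works on the mock but not on your real events, the message field likely contains escaped quotes (\") after the outer JSON is parsed; stripping them first with something like | eval message=replace(message, "\\\\\"", "\"") before the spath is a common fix (untested against your exact data).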
I am running the CLI command "splunk remove excess buckets" to remove excess buckets from the cluster master. It asks for an ID and password. I am looking to automate it, so I don't want it to prompt for the ID and password. Is it possible to configure it that way, or what needs to be done so that the ID/password is not asked for?
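The Splunk CLI accepts credentials inline via the global -auth flag, which is the usual way to script commands non-interactively; verify on your version that this particular subcommand honors it. A sketch, reading the password from a root-only file so it doesn't sit in the script or shell history:

```
splunk remove excess buckets -auth "admin:$(cat /opt/splunk/.cluster_admin_pass)"
```

The file path and account name here are placeholders; lock the password file down to the user that runs the cron job.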
I'm looking to capture any failures of a kvstore backup that is kicked off from a script.
I have a search head cluster (3 search heads) and an indexer cluster (3 indexers). More than 10% of splunkd.log (on my search heads) is produced by "TcpOutputProc". Is this an unusual amount?

$ grep TcpOutputProc splunk/splunkd.log | wc -l
6780
$ wc -l splunk/splunkd.log
59586 splunk/splunkd.log
$ grep TcpOutputProc splunk/splunkd.log | head -5
02-04-2020 10:36:49.831 +0100 INFO TcpOutputProc - Closing stream for idx=xxx.142:9997
02-04-2020 10:36:49.832 +0100 INFO TcpOutputProc - Connected to idx=xxx.143:9997, pset=0, reuse=0. using ACK.
02-04-2020 10:37:08.569 +0100 INFO TcpOutputProc - Closing stream for idx=xxx.143:9997
02-04-2020 10:37:08.570 +0100 INFO TcpOutputProc - Connected to idx=xxx.141:9997, pset=0, reuse=0. using ACK.
02-04-2020 10:37:08.574 +0100 INFO TcpOutputProc - Closing stream for idx=xxx.141:9997
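The same breakdown can be computed in Splunk itself, which makes it easy to compare search heads side by side. A sketch over the _internal index (substitute your search head hostname):

```
index=_internal sourcetype=splunkd host=<your_search_head>
| stats count BY component
| eventstats sum(count) AS total
| eval pct=round(100*count/total,1)
| sort - pct
```

Splitting further by log_level or by the idx= field in the TcpOutputProc lines would show whether these are routine connection-cycling messages or signs of unstable indexer connections.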
We are on 7.2.5.1. My outputs.conf is sending incoming Windows logs out to 2 F5 VIPs via a syslog stanza. The data is going out but only ever hits the first VIP in the server= line of the stanza:

[syslog:test_group]
priority = NO_PRI
server = 10.X.X.1:514,10.X.X.2:514
type = udp

The .1 is receiving all of the data on the F5, and the HF never seems to switch over to the .2 IP. Any help would be greatly appreciated.
Hello, in the below data I have a lot of processes and their parent processes. I would like to keep only the rows related to process "Process4", meaning the first 3 rows.

| makeresults
| eval mydata="Process1,Process2 Process2,Process3 Process3,Process4 Process5,Process6 Process6,Process7 Process8,Process9 Process7,Process10"
| makemv mydata
| mvexpand mydata
| makemv delim="," mydata
| eval ParentProcess=mvindex(mydata,0)
| eval Process=mvindex(mydata,1)
| table ParentProcess Process

Many thanks in advance.
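SPL has no recursive join, but when the maximum chain depth is known you can walk upward from "Process4" with repeated joins, one per ancestor level. A depth-3 sketch over the sample data (each subsearch repeats the same base rows with the columns renamed one level up; the p1/p2/p3 field names are my own):

```
| makeresults
| eval mydata="Process1,Process2 Process2,Process3 Process3,Process4 Process5,Process6 Process6,Process7 Process8,Process9 Process7,Process10"
| makemv mydata
| mvexpand mydata
| makemv delim="," mydata
| eval ParentProcess=mvindex(mydata,0), Process=mvindex(mydata,1)
| table ParentProcess Process
| where Process="Process4"
| rename ParentProcess AS p1
| join type=left p1
    [| makeresults
     | eval mydata="Process1,Process2 Process2,Process3 Process3,Process4 Process5,Process6 Process6,Process7 Process8,Process9 Process7,Process10"
     | makemv mydata
     | mvexpand mydata
     | makemv delim="," mydata
     | eval p2=mvindex(mydata,0), p1=mvindex(mydata,1)
     | table p1 p2]
| join type=left p2
    [| makeresults
     | eval mydata="Process1,Process2 Process2,Process3 Process3,Process4 Process5,Process6 Process6,Process7 Process8,Process9 Process7,Process10"
     | makemv mydata
     | mvexpand mydata
     | makemv delim="," mydata
     | eval p3=mvindex(mydata,0), p2=mvindex(mydata,1)
     | table p2 p3]
| table Process p1 p2 p3
```

This should yield one row (Process4, Process3, Process2, Process1); for deeper or unknown-depth chains this join-per-level approach doesn't scale, and precomputing the lineage outside Splunk (or in a lookup) is usually cleaner.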