All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, in the alert for the Website Monitoring app there is a check, tag!="exclude_from_alerts", which seems to control excluding a specific site from alerts, but I have no idea how to set this up. Setting tag=exclude_from_alerts or exclude_from_alerts=true both just result in errors in the log. Thanks, afx
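For reference, Splunk tags are normally attached to a field=value pair (via Settings › Tags, or a tags.conf stanza) rather than set as a field inside the event, which would explain the errors. A minimal sketch, assuming the monitored site is identified by a field named title (check the app's documentation for the actual field):

```
# tags.conf (sketch; field name and value are assumptions for illustration)
[title=https://example.com/]
exclude_from_alerts = enabled
```

With a stanza like this in place, events for that site carry the tag, and the alert's tag!="exclude_from_alerts" filter would skip them.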
I have read several question-and-answer pages, the Kaspersky documentation, and plenty of other material, but unfortunately found no clear, professional-level explanation of how to do this. Even Splunk itself does not provide any documentation about it (last time I checked).

What I have: Kaspersky Security Center 11 (full license); the Kaspersky App for Splunk, downloaded and installed from Splunkbase; and Splunk Enterprise.

What I have done: 1) On the Kaspersky Security Center side, I configured the Event Manager to send CEF events to Splunk with the correct IP and port, and selected what to send inside the policies. 2) I deployed the Kaspersky App into Splunk from the tar archive (the usual installation method), with all required software (the add-on) also installed.

How do I configure the Splunk side? I know I have to create a data input and so on, but whatever I try, the Kaspersky App does not show anything, and I do not see any configuration pages for the app in the web interface. Are there any? Am I missing something? It looks like yes. Or maybe this is an OS or network issue? Thanks for the support.
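Since KSC is pushing CEF over the network, the Splunk side needs a matching TCP (or UDP) data input on the same port. A minimal sketch, assuming TCP port 5514 and a sourcetype name that you should replace with whatever the Kaspersky add-on expects:

```
# inputs.conf on the Splunk instance that KSC sends to (sketch)
# Port, sourcetype, and index here are placeholders, not verified values.
[tcp://5514]
sourcetype = kaspersky:cef
index = kaspersky
```

A quick way to check whether anything arrives at all is a broad search such as index=kaspersky earliest=-15m; if that is empty, the problem is upstream (firewall, wrong port, KSC export) rather than the app's dashboards.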
Hello. I'm trying to integrate Splunk with my local project developed in Java. I have a main project called send-data-service and, on the other hand, a project named utilities, which includes the logging utility that can be reused by my other projects. Every time send-data-service logs information, it calls the utility service. All of this works fine and logs information and errors using a logback.xml, which then generates a .log file.

Now, I've read that I should use a Splunk Logback appender in my current logback.xml so that I can send data to my Splunk server (which is hosted via VirtualBox) every time I make a request inside send-data-service, although I can't actually make it work. I've already set up an HEC inside Splunk, but it still does not receive any data. My goal is to send the whole .log file to my Splunk server. This is the content of my logback.xml:

<appender name="http" class="com.splunk.logging.HttpEventCollectorLogbackAppender"> <url>http://localhost:8089</url> <token>$token-generated$</token> <source>send-data-service</source> <sourcetype>logback</sourcetype> <messageFormat>text</messageFormat> <middleware>HttpEventCollectorUnitTestMiddleware</middleware> <layout class="ch.qos.logback.classic.PatternLayout"> <pattern>%logger: %msg%n</pattern> </layout> </appender> <logger name="splunk.logger" additivity="false" level="INFO"> <appender-ref ref="http" /> </logger>

I would also like to ask whether I'm using a proper approach. I'm new to Splunk and quite lost. Hope anyone can help me with this. Thank you!
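Two details in configurations like this are worth checking (a sketch, not a verified fix for this exact setup): port 8089 is Splunk's management port, while HEC listens on port 8088 by default, and the <middleware> entry names a unit-test helper from the logging library that is normally omitted in production. A minimal appender under those assumptions:

```xml
<!-- Sketch: assumes HEC is enabled on the default port 8088 with SSL disabled -->
<appender name="http" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
    <url>http://localhost:8088</url> <!-- 8089 is the management port, not HEC -->
    <token>$token-generated$</token>
    <source>send-data-service</source>
    <sourcetype>logback</sourcetype>
    <messageFormat>text</messageFormat>
    <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>%logger: %msg%n</pattern>
    </layout>
</appender>
```

Also note that a logger named splunk.logger only forwards events logged through that specific logger; attaching the appender to the root logger instead would capture everything that currently ends up in the .log file.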
I am getting the error below when the page first loads; after that, when I manually select "Last 1 Week" in the dropdown, the timechart displays. This is the error, please help me resolve it:

Invalid value "$week$" for time term 'earliest'

I think that when the page loads, the token $week$ is not resolved to its "-7d" value. Once I select a choice, the query is written into the token and the search then runs using that token. Below is my code:

<panel> <title>Bandwidth Utilization - Trend</title> <input type="dropdown" token="week" searchWhenChanged="true"> <label>Select Week</label> <choice value="-7d">Last 1 Week</choice> <choice value="-14d">Last 2 Weeks</choice> <choice value="-21d">Last 3 Weeks</choice> <choice value="-1mon">Last 1 Month</choice> <selectFirstChoice>true</selectFirstChoice> <default>-7d</default> <initialValue>-7d</initialValue> <change> <condition value="-7d"> <set token="comparestring">index=snmp sourcetype=snmp_ta_vpn earliest=$week$ latest=now | my search ..... </condition> <condition value="-14d"> <set token="comparestring">index=snmp sourcetype=snmp_ta_vpn earliest=$week$ latest=now | my search ..... <condition value="-21d"> <set token="comparestring">index=snmp sourcetype=snmp_ta_vpn earliest=$week$ latest=now | my search ..... <condition value="-1mon"> <set token="comparestring">index=snmp sourcetype=snmp_ta_vpn earliest=$week$@mon latest=now | my search ..... 
</condition> </change> <search> <query>index=snmp | dedup host | stats count</query> <earliest>-5m@m</earliest> <latest>now</latest> </search> <fieldForLabel>count1</fieldForLabel> <fieldForValue>count1</fieldForValue> </input> <chart> <search> <query>$comparestring$</query> <earliest>0</earliest> <latest></latest> <sampleRatio>1</sampleRatio> <refresh>2m</refresh> <refreshType>delay</refreshType> </search> <!--option name="trellis.enabled">0</option> <option name="trellis.scales.shared">1</option> <option name="trellis.size">large</option--> <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option> <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option> <option name="charting.axisTitleX.text">Time</option> <option name="charting.axisTitleX.visibility">visible</option> <option name="charting.axisTitleY.visibility">visible</option> <option name="charting.axisTitleY2.visibility">visible</option> <option name="charting.axisX.abbreviation">none</option> <option name="charting.axisX.scale">linear</option> <option name="charting.axisY.abbreviation">none</option> <option name="charting.axisY.scale">linear</option> <option name="charting.axisY2.abbreviation">none</option> <option name="charting.axisY2.enabled">0</option> <option name="charting.axisY2.scale">inherit</option> <option name="charting.chart">area</option> <option name="charting.chart.bubbleMaximumSize">50</option> <option name="charting.chart.bubbleMinimumSize">10</option> <option name="charting.chart.bubbleSizeBy">area</option> <option name="charting.chart.nullValueMode">connect</option> <option name="charting.chart.showDataLabels">minmax</option> <option name="charting.chart.sliceCollapsingThreshold">0.01</option> <option name="charting.chart.stackMode">default</option> <option name="charting.chart.style">shiny</option> <option name="charting.drilldown">none</option> <option name="charting.layout.splitSeries">1</option> <option 
name="charting.layout.splitSeries.allowIndependentYRanges">0</option> <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option> <option name="charting.legend.mode">standard</option> <option name="charting.legend.placement">right</option> <option name="charting.lineWidth">2</option> <option name="height">396</option> <option name="refresh.display">progressbar</option> </chart> </panel>
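A common fix for the Invalid value "$week$" error at load time is that $comparestring$ is only populated by the dropdown's <change> handler, which has not fired yet when the dashboard first renders. Seeding both tokens explicitly with an <init> block at the top of the form avoids running the chart search with an unresolved token. A minimal sketch (the "..." stands for the rest of the search, as in the original):

```
<form>
  <init>
    <set token="week">-7d</set>
    <set token="comparestring">index=snmp sourcetype=snmp_ta_vpn earliest=-7d latest=now | my search ...</set>
  </init>
  ...
</form>
```

With the tokens initialized, the first render uses the "Last 1 Week" values, and selecting another choice overrides them via the existing <change> conditions.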
Hello, I have data from two different sourcetypes. In that data there are two specific columns, and I have to check whether there are common values in both fields. If there are common values in both fields, I have to show them on the same row in their respective fields, with the uncommon values in the rows after the common ones. For the common values the status should be Yes, otherwise No. The data is like below:

Field1 Field2
A      B
C      D
Z      L
L      A
B      K
S      C
D      M

Expected output:

Field1 Field2 Status
A      A      Yes
C      C      Yes
L      L      Yes
L      Z      No
B      K      No
S      S      Yes
D      M      No

Please help me... I have used join, but it gives blank values in the middle of the table.
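One join-free pattern for this kind of overlap check is to collapse both fields into a single value column and count which sourcetypes each value appears in. A sketch, assuming the two sourcetypes are named st1 and st2 (placeholders) and that Field1 comes from one and Field2 from the other:

```
(sourcetype=st1 OR sourcetype=st2)
| eval value=coalesce(Field1, Field2)
| stats values(sourcetype) as sourcetypes by value
| eval Status=if(mvcount(sourcetypes)==2, "Yes", "No")
```

Values present in both sourcetypes get Status=Yes; values present in only one get Status=No. Rebuilding the exact two-column row layout from your expected output would then be a presentation step on top of this.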
Hi All, I need to create a query to find where a user accesses the same destination from 5 or more sources. The opposite should also be achievable in that query, i.e. 5 or more destinations from 1 source. Is it possible?
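Both directions are a distinct-count over the opposite field. A sketch, assuming CIM-style field names user, src, and dest (adjust to your data):

```
index=your_index
| stats dc(src) as source_count values(src) as sources by user, dest
| where source_count >= 5
```

The opposite case swaps the fields:

```
index=your_index
| stats dc(dest) as dest_count values(dest) as destinations by user, src
| where dest_count >= 5
```

To get both in one result set, run each as a separate search and append them, or compute both dc() values in a single stats split by user and filter with an OR in the where clause.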
We have problems collecting data from AWS CloudWatch Logs with the Splunk Add-on for AWS, so we are looking for alternatives. Searching this forum and the internet at large, most advice points to Amazon Kinesis Firehose, and only a little to HEC and AWS Lambda. But the second seems much easier; it's just that all the information I found is either quite old or incomplete. Does anyone have experience with HEC and AWS Lambda? Or can anyone provide some pros and cons for the alternatives? Thanks for any help.
Hi all, what I want to achieve is to identify users who are possibly leaking or auto-forwarding emails to their personal email addresses (e.g. Gmail), based on Exchange logs. Company email ID: 123@123.com. Private email IDs: *@gmail.com and *@yahoo.com.

1 - Detect a possible auto-forwarding rule, based on timestamps. Can I have a Splunk query to help me identify users that are auto-forwarding?

2 - Detect possible email leakage. I want to capture a user sending 10+ emails to a specific recipient on a free email service (e.g. Gmail) within a duration of 3 minutes.

Sample query: index=mail-1 sourcetype="MSExchange:*" sender=123@123.com | search recipient IN("*@gmail","*@yahoo.com")

Thanks in advance. Regards,
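For the second case, the "10+ emails to one recipient within 3 minutes" condition can be sketched with a time bucket plus a threshold. This assumes the events carry sender and recipient fields as in your sample query (and uses *@gmail.com rather than *@gmail, matching the private email IDs you listed):

```
index=mail-1 sourcetype="MSExchange:*" sender="*@123.com"
| search recipient IN("*@gmail.com", "*@yahoo.com")
| bin _time span=3m
| stats count by _time, sender, recipient
| where count >= 10
```

For the auto-forwarding case, one heuristic is an inbound message and an outbound message to a free-mail recipient with the same subject within seconds of each other; transaction or streamstats on the subject field can approximate that, though the exact query depends on which Exchange log fields you have indexed.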
Hello, we have the xMatters Splunk app installed on our Splunk Cloud search head. It is not sending alerts to xMatters, and the xMatters log shows the error below. I've tested the app on my own instance, and xMatters has tested it on their end; both of those Splunk instances send alerts to xMatters fine. Has anyone come across this issue? I think the search head might be having trouble resolving the xMatters URL, but how do I get Splunk Cloud Support to look at this if the app is not supported? 2020-04-05 23:45:37,954 ERROR [xmatters.alert_action.main] [xmatters] [<module>] [14006] <urlopen error [Errno -2] Name or service not known> Traceback (most recent call last): File "/opt/splunk/etc/apps/xmatters_alert_action/bin/xmatters.py", line 143, in <module> REQUEST_ID = XM_ALERT.execute() File "/opt/splunk/etc/apps/xmatters_alert_action/bin/xmatters.py", line 119, in execute request_id = xm_client.send_event(self.endpoint_url, xm_event) File "/opt/splunk/etc/apps/xmatters_alert_action/lib/xmatters_sdk/xm_client.py", line 61, in send_event force_https=True File "/opt/splunk/etc/apps/xmatters_alert_action/lib/common_utils/rest.py", line 160, in post return self._send_request(req, headers) File "/opt/splunk/etc/apps/xmatters_alert_action/lib/common_utils/rest.py", line 100, in _send_request res = urllib2.urlopen(req) File "/opt/splunk/lib/python2.7/urllib2.py", line 154, in urlopen return opener.open(url, data, timeout) File "/opt/splunk/lib/python2.7/urllib2.py", line 429, in open response = self._open(req, data) File "/opt/splunk/lib/python2.7/urllib2.py", line 447, in _open '_open', req) File "/opt/splunk/lib/python2.7/urllib2.py", line 407, in _call_chain result = func(*args) File "/opt/splunk/lib/python2.7/urllib2.py", line 1241, in https_open context=self._context) File "/opt/splunk/lib/python2.7/urllib2.py", line 1198, in do_open raise URLError(err) URLError: <urlopen error [Errno -2] Name or service not known>
I have installed Splunk Enterprise and want to configure receiving and forwarding. For receiving, I know the default port is 9997, but I want to know what to put in the "Host" field under Forwarding. Would it be my IPv4 address plus :9997, or should it be a specific IP related to the search head? Please help.
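As a general rule, forwarding points at the instance where receiving is enabled (an indexer), not at the search head. So the Host field takes the receiving indexer's IP or hostname plus the receiving port. The equivalent outputs.conf sketch on the forwarder, using a placeholder IP:

```
# outputs.conf on the forwarding instance (sketch; 192.0.2.10 is a placeholder)
[tcpout:primary_indexers]
server = 192.0.2.10:9997
```

If forwarder and indexer are the same machine in a single-instance lab, forwarding to yourself is normally unnecessary; receiving on 9997 alone is enough for other forwarders to send data in.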
Hi Fellow Splunkers, I am looking to forward all indexed data from an indexer cluster to a third-party system. I have read many posts that cover configuring a single indexer instance to forward logs; cool, no problem, just follow the "Forward data to third-party systems" guide. However, forwarding logs from an indexer cluster is a different ball game, right? Different data sits on different indexers in a cluster. So, assuming I have 3 peers configured with a search factor of 2 and a replication factor of 2: which indexer do I choose to forward the logs from, and what's the best practice? Do I need to add a heavy forwarder? Many thanks!
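The usual approach is not to pick one peer: the same outputs configuration goes to every peer (typically distributed via the cluster master's master-apps), and each peer forwards the data it receives while still indexing it locally. A sketch of such an outputs.conf, with a placeholder destination:

```
# outputs.conf pushed to all cluster peers (sketch; host/port are placeholders)
[indexAndForward]
index = true            # keep indexing locally in addition to forwarding

[tcpout]
defaultGroup = third_party
indexAndForward = true

[tcpout:third_party]
server = thirdparty.example.com:514
sendCookedData = false  # raw stream for a non-Splunk receiver
```

Because each peer forwards what it ingests (before replication), the third-party system receives one copy of each event rather than replication-factor copies. Note this forwards data as it arrives; it does not replay data that was already indexed.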
Our system generates log files named stdout.{pid}.log, where 'pid' is the process ID of the current login session. A log file is reused when the system reuses the same PID, and it overwrites the file rather than appending at the tail. We have found that the entries in Splunk are sometimes incomplete after a file is overwritten. For example, the system generated stdout.27797.log on 25 Mar and the whole content could be found in Splunk, but when the system overwrote this log file on 26 Mar, we only found the first 2 lines of the file in Splunk. It seems the file is indexed but Splunk just can't collect all of its content. The configuration in inputs.conf:

[monitor:///testing/stdout.*.log]
index = efx_fxomurex_raw
sourcetype = efx_fxomurex_stdout_live
ignoreOlderThan = 7d
crcSalt = <SOURCE>

OS: Solaris 10.
Hi there, this is my sample data. I want to use the heat map option and highlight the max and min in each column, so I would have two values highlighted in each column, the max and the min. Can this be done in Splunk 7.3.1?

| makeresults | eval data = " 1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1; " | makemv delim=";" data | mvexpand data | rex field=data "(?<Date>\d+)\s+(?<Y>\d+)" | fields + Date Y | fields - _time |search Y = * | chart count(Y) by Y | sort + Y

This is what I get just using the default settings. A similar question was asked here before: https://answers.splunk.com/answers/116018/splunk-6-simple-xml-dataoverlaymode-on-table-can-we-specify-which-column-to-show-heatmap-on.html
I am getting the following messages on my forwarder running on Windows 10:

04-06-2020 18:05:52.171 -0700 INFO TcpOutputProc - Found currently active indexer. Connected to idx=192.168.218.6:9997, reuse=1.
04-06-2020 18:06:22.093 -0700 INFO TcpOutputProc - Found currently active indexer. Connected to idx=192.168.218.6:9997, reuse=1.
04-06-2020 18:06:51.934 -0700 INFO TcpOutputProc - Found currently active indexer. Connected to idx=192.168.218.6:9997, reuse=1.
04-06-2020 18:07:21.808 -0700 INFO TcpOutputProc - Found currently active indexer. Connected to idx=192.168.218.6:9997, reuse=1.
04-06-2020 18:07:51.660 -0700 INFO TcpOutputProc - Found currently active indexer. Connected to idx=192.168.218.6:9997, reuse=1.

I just updated the forwarder to 8.0.3 and these messages keep coming. There are no disconnect messages, just these "found" messages. Help!
I have a rex like this: | rex field=host "(?<sydney>10-92-3[2-4])" | rex field=host "(?<melbourne>10-92-11[0-2])" which returns two fields, sydney and melbourne. Now I want the result returned in a single field, city. How can I do this?
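One way to collapse the two extracted fields into a single city field is a case() on which of them matched; a sketch building on the same rex expressions:

```
| rex field=host "(?<sydney>10-92-3[2-4])"
| rex field=host "(?<melbourne>10-92-11[0-2])"
| eval city=case(isnotnull(sydney), "sydney", isnotnull(melbourne), "melbourne")
```

If you only need the matched substring rather than the city name, | eval city=coalesce(sydney, melbourne) does the same job in one function.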
Hello, I am working on upgrading from an older version of the Palo Alto Networks App for Splunk. I have installed the TA on all indexers and the app/TA on the search head. Most of the dashboards are being populated with data, but the GlobalProtect dashboard has nothing. I am seeing info in the pan_logs index for GlobalProtect, but I don't see any reference to GP in the Pan Firewall data model. The dashboard panels reference: datamodel="pan_firewall" WHERE nodename="log.system.globalprotect". I've looked through the entire data model and don't see any reference to globalprotect.

Splunk version: 7.2.0, build 8c86330ac18. Palo Alto Networks Add-on 6.2.0. Palo Alto Networks App for Splunk 6.2.0. Data is being sent from the firewalls to Splunk via a UDP input.

Thanks, Garry
Hi, I am building a lab environment loaded with the Boss of the SOC pre-indexed data. I installed all the apps, and everything was working. I needed to restore my VM from a previous snapshot, though, and afterwards Splunk Security Essentials stopped loading. I found a community recommendation to run a _bump in the Splunk Community article "why-does-the-splunk-security-essentials-app-has-mi" (not enough karma to post the link): "Number One: Most Likely. The most common culprit for this is a core bug with refreshing static assets. To get around this, run a _bump by browsing to http://yoursplunk:8000/en-US/_bump and click the button." I also tried removing the app from /opt/splunk/etc/apps/ and reinstalling it. Is there something else I can try to restore the app?
Is there a way to dynamically pass a comparison operator as a variable, without a macro? I am looking to achieve something similar to what is shown below.

| eval number=8 | eval operator=">=" | eval comparison=7 | eval validate=if(number.operator.comparison,"yep","nope")
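eval cannot interpolate an operator stored in a field value, since the expression is parsed before field values exist, but case() can branch on the operator field and apply the matching comparison explicitly. A sketch covering the common operators:

```
| eval number=8, operator=">=", comparison=7
| eval validate=case(
    operator="<",  if(number <  comparison, "yep", "nope"),
    operator="<=", if(number <= comparison, "yep", "nope"),
    operator=">",  if(number >  comparison, "yep", "nope"),
    operator=">=", if(number >= comparison, "yep", "nope"),
    operator="=",  if(number == comparison, "yep", "nope"))
```

This trades generality for explicitness: every operator you want to support needs its own branch, but no macro is required.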
I'm hardcoding some data, like names, where I will pass in a token in the future, to create a simple example of what I'm trying to achieve. I want to loop through all values, which have objects containing the data. For each field I loop through, I want an if statement to check whether it matches what I'm expecting; if so, increment the counter, else leave it the same. Here are my data and what I have so far. The weird part is that neither match nor == works for me here: the sum should be at least 2, but nameTotal is always 0, meaning my if statement was never true, so '<<FIELD>>' isn't right? I'm not sure what variable I should be referencing. (If I remove the stats, it loses my nameTotal field.)

index=myIndex latest=+5h "extraFields{}.aimId"="innersource" "extraFields{}.prData.prResponse.values{}.author.user.name"="f401950" | eval nameTotal= 0 | foreach "extraFields{}.prData.prResponse.values{}.author.user.name" [eval names=if(match('<<FIELD>>', "f401950"), nameTotal+1, nameTotal)] | stats sum(names) as totalPrs | table totalPrs, nameTotal

Results: totalPrs = 1, nameTotal blank. Sample of my data:
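One thing worth noting: a field like extraFields{}.prData.prResponse.values{}.author.user.name extracted from JSON arrays is usually a single multivalue field per event, so foreach iterates once over one field name rather than over the values inside it. Counting matches inside a multivalue field is typically done with mvfilter and mvcount instead; a sketch under that assumption:

```
index=myIndex latest=+5h "extraFields{}.aimId"="innersource"
| eval nameTotal=mvcount(mvfilter(match('extraFields{}.prData.prResponse.values{}.author.user.name', "f401950")))
| fillnull value=0 nameTotal
| stats sum(nameTotal) as totalPrs
```

mvfilter keeps only the array entries matching the author name, mvcount counts them per event, and the stats sums across events, which should give the per-author PR total you're after.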
I would like to pull the proxy configuration from server.conf when validating my modular input, so I can validate the input's connectivity through a proxy. I am using the Java SDK to access server.conf during validation, but unfortunately it seems that while a modular input is being validated, the /configs/conf-server endpoint becomes inaccessible, and any network request to that endpoint hangs. As soon as the validation times out, I am able to access that endpoint again. Why does this happen? Is there another way to access server.conf during validation? Thanks in advance!