All Topics

I am trying to integrate a few servers into Splunk. The servers send syslog data only. Earlier I had two servers (log sources), so I brought the input traffic in on ports 514 and 515. I used two ports to get two host names in the logs. But now the server count is about 5, and I don't want to dedicate 5 separate ports to these 5 servers just to get different host names. I want to use a single port, say 514, as the input to my HF for n servers, and still get the n distinct HOSTs. Can anyone suggest how I can achieve this in Splunk?
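One common approach is a sketch like the following (assuming the events begin with a standard RFC 3164 syslog header that carries the sending host name, and that the input is udp:514 on the heavy forwarder): keep a single port and override the host field per event with an index-time transform. The stanza and regex below are illustrative and must be adapted to the devices' actual header format.

```ini
# props.conf on the heavy forwarder
[source::udp:514]
TRANSFORMS-sethost = syslog_host_override

# transforms.conf
[syslog_host_override]
# Matches "Mon DD HH:MM:SS hostname ..." and captures the hostname
REGEX = ^\w{3}\s+\d+\s+[\d:]+\s+(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1
```

A simpler alternative, if reverse DNS is reliable in your environment, is `connection_host = dns` on the network input itself, which sets host from the sender's resolved name without any transform.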
Hi, I'm creating a new dashboard and I need to populate a dropdown based on a text input. Can someone help me? Thanks in advance.
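In Simple XML, a dropdown can be populated from a search that references the text input's token, so the choices re-populate whenever the text changes. A minimal sketch (index, field, and token names here are placeholders, not from the original post):

```xml
<input type="text" token="filter" searchWhenChanged="true">
  <label>Host filter</label>
  <default>*</default>
</input>
<input type="dropdown" token="selected_host">
  <label>Host</label>
  <search>
    <query>index=_internal host="$filter$*" | stats count by host</query>
  </search>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
</input>
```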
Hi Team, Palo Alto logs have been successfully sent to our syslog server. Our syslog server acts as a heavy forwarder, so we have installed the "Palo Alto Networks Add-on for Splunk" (https://splunkbase.splunk.com/app/2757) on it. As per the documentation provided with the add-on, we changed the sourcetype to pan:log, and when we search in Splunk the data splits into three sourcetypes: pan:traffic, pan:threat & pan:system. Now the issue seems to be with field extractions: they are not happening as expected. The PAN-OS version is 9.0.5, and the fields are not being extracted per the props.conf and transforms.conf present in the installed add-on. Kindly let me know how to fix this.
I have something like: index=os_solaris sourcetype=cpu | stats count by host | join type=left host [|search index=os_solaris sourcetype=vmstat | stats count by host ] I actually want to subtract the output of index=os_solaris sourcetype=vmstat | stats count by host from the bigger set of index=os_solaris sourcetype=cpu | stats count by host. How can I do that?
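One way to avoid the join entirely is to count both sourcetypes in a single search and subtract per host; a sketch (chart names its columns after the sourcetype values, so the cpu and vmstat fields below assume those two exact sourcetype names):

```
index=os_solaris (sourcetype=cpu OR sourcetype=vmstat)
| chart count over host by sourcetype
| fillnull value=0 cpu vmstat
| eval diff = cpu - vmstat
| table host cpu vmstat diff
```

fillnull covers hosts that appear in only one of the two sourcetypes, which is exactly the case the left join was handling.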
Hello all, I am trying to build a report of any Windows machines not rebooted in the last 45 days and just need some help working through it. So far I know that searching: index=mywindowseventlogindex sourcetype=wineventlog:system EventCode=6006 | dedup host | stats count by host returns all machines that have rebooted. I am just struggling with how to take this and compare it against all other hosts, then display only the ones that did not have this event in the last 45 days. Any help is much appreciated.
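One sketch of the "all hosts minus rebooted hosts" comparison, assuming every machine of interest has sent at least one event to the index so that | metadata can serve as the host inventory:

```
| metadata type=hosts index=mywindowseventlogindex
| fields host
| search NOT [ search index=mywindowseventlogindex sourcetype=wineventlog:system EventCode=6006 earliest=-45d@d
    | stats count by host
    | fields host ]
```

The subsearch returns the hosts that did log a shutdown event in the window, and the NOT keeps only the remainder. If the host inventory lives elsewhere (a CSV or asset lookup), swap the | metadata line for that source.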
I currently have a drilldown dashboard which sets tokens depending on $click.value$.

<drilldown>
  <condition match="$click.value$ != &quot;(master)&quot;">
    <set token="non_master">$click.value$</set>
    <unset token="master"></unset>
  </condition>
  <condition match="$click.value$ = &quot;(master)&quot;">
    <set token="master">$click.value$</set>
    <unset token="non_master"></unset>
  </condition>
</drilldown>

I would like to add another condition that sets a token if $click.value$ contains a specific string, let's say "bla". I have tried using like and match, but without success. Any suggestions on how I can achieve this?
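In recent Splunk versions the match attribute of <condition> accepts a full eval expression, so match() (regex) or like() (SQL-style wildcards) can test for a substring. A sketch (the token name bla_token and the literal "bla" are placeholders):

```xml
<condition match="match($click.value$, &quot;bla&quot;)">
  <set token="bla_token">$click.value$</set>
  <unset token="master"></unset>
  <unset token="non_master"></unset>
</condition>
```

Conditions are evaluated in order and the first match wins, so this one should be placed before the more general `!= "(master)"` condition, or it will never be reached.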
I want to export the configuration and setup for a Business Journey from one controller to another. So far I have not found any method for this, either inside AppDynamics or in its API. Does anybody know how, or do I have to recreate each journey manually across my different controllers?
I'm trying to monitor the log file from HWiNFO64 (hwinfo.com), but the CSV file has some quirks that Splunk doesn't like. The two main problems are percent signs in the column names and missing leading zeros in the timestamps. Although it should be valid to omit leading zeros for the minutes and seconds, Splunk cannot parse this out of the box. Every hour the first ten minutes are missing data, but as soon as the minutes hit 10, parsing works until 59:59. This is example data:

Date,Time,"CPU (Tctl/Tdie) [°C]","CPU Die (average) [°C]","CPU CCD1 (Tdie) [°C]","CPU CCD2 (Tdie) [°C]","CPU PPT Limit [%]","CPU TDC Limit [%]","CPU EDC Limit [%]","Chipset [°C]","System1 [°C]","CPU [°C]","PCIEX16_1 [°C]","VRM MOS [°C]","Chipset [°C]","CPU [RPM]","System 1 [RPM]","System 2 [RPM]","System 3/PCH [RPM]","PCIEX16_2 [°C]","System2 [°C]","System 5 Pump [RPM]","System 6 Pump [RPM]","System 4 [RPM]","VR Loop1 [°C]","GPU Temperature [°C]","GPU Fan1 [RPM]","GPU Fan2 [RPM]","GPU Core Load [%]","GPU Memory Controller Load [%]","GPU Video Engine Load [%]","GPU Bus Load [%]","GPU Memory Usage [%]","GPU D3D Usage [%]","GPU Video Decode 0 Usage [%]","GPU Video Encode 0 Usage [%]","GPU Computing (Compute_0) Usage [%]","GPU Computing (Compute_1) Usage [%]","GPU VR Usage [%]",
31.1.2020,13:5:58.138,38.1,36.3,36.0,33.3,25.8,9.6,19.7,65.4,35,38,46,42,52,1099,636,600,1757,42,40,2896,535,559,43.0,37,0,0,7.0,7.0,0.0,0.0,18.7,5.6,0.0,0.0,0.0,0.0,0.0,
31.1.2020,13:6:0.213,36.6,36.6,36.5,33.0,26.3,10.3,21.7,65.4,35,36,46,42,52,1101,639,600,1753,42,40,2922,535,558,43.0,37,0,0,10.0,8.0,0.0,1.0,18.7,9.2,0.0,0.0,0.0,0.0,0.0,
31.1.2020,13:6:2.286,36.8,36.9,36.3,33.0,26.2,10.1,23.3,65.4,35,36,46,42,52,1101,635,600,1753,42,40,2836,535,558,43.0,37,0,0,7.0,7.0,0.0,0.0,18.7,5.9,0.0,0.0,0.0,0.0,0.0,

Is it possible to add some magic to props.conf to solve these issues? Regards, Andreas
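An explicit TIME_FORMAT often helps here: strptime-style %H, %M, and %S permit one or two digits, so it is usually the automatic timestamp recognition, not strptime itself, that stumbles on the unpadded minutes. A props.conf sketch for this file (the stanza name is a placeholder; whether the Date and Time fields are joined with a comma or a space when matched against TIME_FORMAT should be verified in the Add Data preview, and the percent signs in the column names may still need renaming via FIELD_NAMES):

```ini
[hwinfo_csv]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = Date,Time
TIME_FORMAT = %d.%m.%Y,%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 32
```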
We have several Universal Forwarders installed on different Linux machines. Due to the virtualization technology, each of the Linux servers has several IP addresses. By default the Universal Forwarder uses the first one (eth0 in this example). I assume this happens because the UF just asks the OS to open the connection without specifying the interface to be used. Linux ifconfig:

eth0 Link encap:Ethernet HWaddr XX:XX:XX:XX
  inet addr:XX:XX:XX:XX Bcast:XX:XX:XX:XX Mask:255.255.254.0
  inet6 addr: XX:XX:XX:XX/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:10680683134 errors:0 dropped:11120 overruns:0 frame:0
  TX packets:8692419851 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:3381271414547 (3224631.7 Mb) TX bytes:3873093410263 (3693669.7 Mb)

eth0:0 Link encap:Ethernet HWaddr XX:XX:XX:XX
  inet addr:YY:YY:YY:YY Bcast:XX:XX:XX:XX Mask:255.255.254.0
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Due to firewall restrictions we need to use a different (secondary/virtual) IP address for the outgoing connections (eth0:0, YY.YY.YY.YY in the example). We didn't find any clue in the documentation about how to achieve this behavior. Any ideas? Many thanks in advance! Regards,
We need to restrict a role to one app; no other roles should be able to view it.
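App visibility is governed by the app's metadata ACLs; a sketch of granting read to a single role (the role name restricted_role is a placeholder) in $SPLUNK_HOME/etc/apps/<yourapp>/metadata/local.meta:

```ini
[]
access = read : [ restricted_role ], write : [ admin ]
export = none
```

Any role not listed under read (and not inheriting from one that is) will not see the app. The same setting is reachable in the UI under Apps > Manage Apps > Permissions.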
Hello, I need to create a whitelist alongside the blacklists. I mean... I have three blacklists in the Windows Security input:

[WinEventLog://Security]
disabled=0
index = wineventlog
source = XmlWinEventLog:Security
sourcetype = XmlWinEventLog
...
blacklist = 4624,4625,2222
blacklist1 = EventCode="4688" $XmlRegex="<Data Name='NewProcessName'>(C:\\Program Files\\SplunkUniversalForwarder\\bin\\splunkd.exe)|(C:\\Program Files\\SplunkUniversalForwarder\\bin\\btool.exe)|(C:\\Program Files\\SplunkUniversalForwarder\\bin\\splunk.exe)|(C:\\Program Files\\SplunkUniversalForwarder\\bin\\splunk-MonitorNoHandle.exe)|(C:\\Program Files\\SplunkUniversalForwarder\\bin\\splunk-admon.exe)|(C:\\Program Files\\SplunkUniversalForwarder\\bin\\splunk-netmon.exe)</Data>"
blacklist2 = EventCode="1111" $XmlRegex="<Data Name='CallerProcessName'>C:\\ProgramData\\random\\andom2\\dasdfa.exe</Data>"

I need to add another blacklist like this:

blacklist3 = EventCode="4663" $XmlRegex="<Data Name='ProcessName'>(C:\\Windows\\System32\\Taskmgr.exe)</Data>"

This blacklist removes all 4663 events with the ProcessName Taskmgr.exe (works). But actually I want to remove all 4663 events except the 4663 events with the process name Taskmgr.exe. I tried expressions like these, but they don't work:

blacklist3 = EventCode="4663" $XmlRegex="<Data Name='ProcessName'>(?!C:\\Windows\\System32\\Taskmgr.exe)</Data>"
blacklist3 = EventCode="4663" $XmlRegex="<Data Name='ProcessName'>?!(C:\\Windows\\System32\\Taskmgr.exe)</Data>"
blacklist3 = EventCode="4663" $XmlRegex="<Data Name='ProcessName'>^((?!C:\\Windows\\System32\\Taskmgr.exe)[\s\S])*$</Data>"

Is there a solution? I can't use a whitelist because I have blacklists. Thanks a lot!
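The regex mechanics can be tested outside Splunk first. The first failing attempt asserts "not the Taskmgr path" and then immediately requires </Data>, so the characters between the tags are never consumed and nothing matches; adding .*? after the lookahead fixes the pattern itself. This Python sketch demonstrates that (whether the input's $XmlRegex accepts this exact construct on your UF version is worth verifying separately):

```python
import re

# Hypothetical XML fragments as they would appear in XmlWinEventLog events
keep    = "<Data Name='ProcessName'>C:\\Windows\\System32\\Taskmgr.exe</Data>"
discard = "<Data Name='ProcessName'>C:\\Windows\\System32\\notepad.exe</Data>"

# Negative lookahead: match any ProcessName EXCEPT Taskmgr.exe.
# The .*? after the lookahead is essential -- without it the pattern
# demands </Data> immediately after the opening tag and never matches.
pattern = re.compile(
    r"<Data Name='ProcessName'>(?!C:\\Windows\\System32\\Taskmgr\.exe).*?</Data>"
)

print(bool(pattern.search(keep)))     # Taskmgr event: no match -> kept
print(bool(pattern.search(discard)))  # any other process: match -> blacklisted
```

Note the dot in Taskmgr\.exe is escaped so that a file like Taskmgr2exe cannot slip through the exception.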
Hello, I have a relatively easy issue I am struggling with. I would like to calculate the time difference in seconds between the form.to and form.from tokens set from the time picker. The beginning of the dashboard looks as follows:

<form>
  <label>System KPI Dashboard Clone</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="dropdown" token="sysid" searchWhenChanged="true">
      <label>System</label>
      <fieldForLabel>SYSYSID</fieldForLabel>
      <fieldForValue>SYSYSID</fieldForValue>
      <search>
        <query>| dbxquery query="select distinct sysysid from sapiop.zkpic_sysreltask where hana = 'X' order by sysysid" connection="HANA_MLBSO"</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <default>ISP</default>
    </input>
    <input type="time" token="field1" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
      <change>
        <eval token="form.from">strftime(relative_time(now(),'earliest'), "%F %T")</eval>
        <eval token="form.to">strftime(relative_time(now(),'latest'), "%F %T")</eval>
        <eval token="timediff">$form.to$ - "$form.from$</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel depends="$hidden$">

However, the timediff token does not get set. I tried different combinations with relative_time, etc., but it seems not to get set. How would I do it? Kind regards, Kamil
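A likely culprit is that form.from and form.to hold strftime-formatted strings ("%F %T"), so the timediff eval is subtracting two strings (and the snippet also has a stray quote before $form.from$). Computing the difference from the epoch values, before any formatting, avoids both problems; a sketch against the same time picker:

```xml
<change>
  <eval token="form.from">strftime(relative_time(now(),'earliest'), "%F %T")</eval>
  <eval token="form.to">strftime(relative_time(now(),'latest'), "%F %T")</eval>
  <eval token="timediff">relative_time(now(),'latest') - relative_time(now(),'earliest')</eval>
</change>
```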
I tried passing a token value into a multiselect from another search's results, but it was not picking up the token value. Is there any workaround for this?

<choice value="$Token$">All</choice>

This is where I have to pass the token value:

<input type="multiselect" token="env" searchWhenChanged="true">
  <label>Environment</label>
  <search>
    <query>dedup environment</query>
  </search>
  <choice value="$Token$">All</choice>
  <fieldForLabel>environment</fieldForLabel>
  <fieldForValue>environment</fieldForValue>
</input>
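Static <choice> entries are not token-substituted, which is why $Token$ arrives literally, but tokens inside a <query> are substituted. A common workaround is therefore to inject the extra entry from the populating search itself; a sketch (the base search index=your_index is a placeholder, and the input will stay empty until $Token$ is set, since the populating search cannot run with an undefined token):

```xml
<input type="multiselect" token="env" searchWhenChanged="true">
  <label>Environment</label>
  <search>
    <query>index=your_index
| dedup environment
| table environment
| append [ | makeresults | eval environment="$Token$" | table environment ]</query>
  </search>
  <fieldForLabel>environment</fieldForLabel>
  <fieldForValue>environment</fieldForValue>
</input>
```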
Hi All, I was writing a shell script to pull data through a REST API. Below is the content of the script:

#!/bin/bash
# remove old files from directory
rm /opt/splunk/etc/apps/HUM_QA_FIREEYE_INPUTS/bin/fireeye_alert.json
date_time=$(date +'%Y-%d-%m''T''%H'':00:00.000')
# retrieve data for fireeye_alert
curl -s -X POST -k -H 'Content-Type: application/json' -H "x-fireeye-api-key:some_unique_value" https://abc.com -d '{"fromLastModifiedOn":"$date_time","size": 10}' >> /opt/splunk/etc/apps/HUM_QA_FIREEYE_INPUTS/bin/fireeye_aler

I have scheduled this script to run every hour, and on each run it creates a new date_time value which is passed to the curl command as an argument. The problem is that I am not getting any data when I use this script; an error about fromLastModifiedOn comes back:

ERROR : { "message": "Incorrect request params: fromLastModifiedOn. Enter the valid request params and without any space" }

Thanks in advance.
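Shell variables do not expand inside single quotes, so the single-quoted -d '...' payload sends the API the literal text $date_time instead of the timestamp, which would explain the "Incorrect request params" error. A sketch of the fix, building the JSON with printf so the variable expands (timestamp format copied from the original script; the curl endpoint and API key remain as in the original):

```shell
#!/bin/bash
# Same timestamp format as the original script
date_time=$(date +'%Y-%d-%m''T''%H'':00:00.000')

# printf with %s substitutes the variable; the original single-quoted
# -d '...' literal never expanded $date_time at all.
payload=$(printf '{"fromLastModifiedOn":"%s","size": 10}' "$date_time")
echo "$payload"

# The curl call then becomes:
# curl -s -X POST -k -H 'Content-Type: application/json' \
#      -H "x-fireeye-api-key:some_unique_value" https://abc.com -d "$payload"
```

If the error persists after this, the next thing to check is whether the API expects a different timestamp layout than %Y-%d-%m.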
How can I query the KPI thresholds of various services? Is there a REST call or a lookup for this in Splunk ITSI?
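ITSI stores service definitions, including each KPI's threshold configuration, behind its own REST interface, which can be queried from the search bar; a sketch (the itoa_interface endpoint path and the shape of the returned kpis field should be verified against your ITSI version's REST API reference):

```
| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/service
| table title kpis
```

The kpis field comes back as JSON, so the threshold values typically need spath or json_extract to pull out per-KPI.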
Hi, I don't know why my eval command doesn't return any results:

`index`
| lookup tutu.csv HOSTNAME as host output SITE
| stats values(SITE) as Site, values(index) AS index BY host
| eval check=if(index="ai-toto*" ,"toto index only","All indexes")
| search check="toto index only"
| table host

If I delete the eval command I get results. I have done a workaround like this:

| stats values(SITE) as Site, dc(index) AS index BY host
| where NOT index==4

It works, but I would like to know why my original search doesn't work.
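The = comparison inside eval is a literal string test: the * is not treated as a wildcard there, so check never becomes "toto index only" and the final search filters every row out. Wildcard or pattern matching in eval needs like() (SQL-style %) or match() (regex); a sketch of the same search with like():

```
`index`
| lookup tutu.csv HOSTNAME as host output SITE
| stats values(SITE) as Site, values(index) AS index BY host
| eval check=if(like(index, "ai-toto%"), "toto index only", "All indexes")
| search check="toto index only"
| table host
```

Note also that values(index) can be multivalued when a host appears in several indexes, which is why the dc(index) workaround behaves differently; an exact "only this index" test may additionally need mvcount or mvfilter.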
Hello, I'm creating a sample for a PoC, so I added data manually via the "add data" option. When reviewing the time format in the "add data" preview, everything extracts perfectly, but when searching in Splunk, the time in _time is the time at which I added the data. For example:

02/02/2020 11:19:20.000 44.204.160.84 - - [02/Feb/2020:23:55:40 +0200] "POST /posts/posts/explore HTTP/1.0"

You can see that the date is correct but the time is not the same as in the event.

Update: I noticed that it fails only from some point in the log. For example I have this event:

02/02/2020 13:41:28.000 138.47.33.59 - - [02/Feb/2020:13:41:28 +0200] "PUT /explore HTTP/1.0"

Date and time are correct. Right after that I have this event:

02/02/2020 13:41:28.000 217.135.8.245 - - [02/Feb/2020:13:45:27 +0200] "GET /explore HTTP/1.0"

The date is correct, but the time is not: it keeps the time of the previous event, and this is the time for all the rest of the events. How can I fix it? Thanks.
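When timestamp extraction fails, Splunk falls back to the previous event's time, which matches the "keeps the time of the previous event" symptom; pinning the intended timestamp down with an explicit prefix and format usually stabilizes it. A props.conf sketch for the sourcetype, assuming the bracketed [02/Feb/2020:...] portion is the authoritative time (the stanza name is a placeholder):

```ini
[my_access_log]
TIME_PREFIX = \[
TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
MAX_TIMESTAMP_LOOKAHEAD = 30
```

TIME_PREFIX makes Splunk skip past the leading date/IP text to the opening bracket before applying TIME_FORMAT, so the two timestamps in the event can no longer compete.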
Hello, I have this part of an event: "POST /posts/posts/explore HTTP/1.0". I need to extract the part between "POST" and "HTTP", where "POST" can be anything, for example "GET", "PUSH", etc. Also, the string between "POST" and "HTTP" can contain only one '/', for example "POST /search HTTP...". What is the right regular expression for that? Thanks!
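A sketch in Python to validate the patterns before wiring them into rex (the same regexes work in Splunk as, e.g., | rex "\"(?<method>[A-Z]+)\s+(?<path>\S+)\s+HTTP"); variable names here are illustrative:

```python
import re

event = '"POST /posts/posts/explore HTTP/1.0"'

# method: one or more uppercase letters (POST, GET, PUT, ...)
# path:   everything up to the " HTTP" that follows
rx = re.compile(r'"(?P<method>[A-Z]+)\s+(?P<path>\S+)\s+HTTP')

m = rx.search(event)
print(m.group("method"), m.group("path"))  # POST /posts/posts/explore

# Stricter variant for the "only one /" case: a single leading slash
# followed by characters that are neither '/' nor whitespace.
single = '"POST /search HTTP/1.0"'
rx_one_slash = re.compile(r'"(?P<method>[A-Z]+)\s+(?P<path>/[^/\s]+)\s+HTTP')
print(rx_one_slash.search(single).group("path"))  # /search
```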
Hi, I am setting up an indexes.conf file where I am going to fix the homePath and coldPath sizes. For example:

[myindex]
homePath = FASTDISK:\splunk\myindex\db
coldPath = SLOWDISK:\splunk\myindex\colddb
thawedPath = SLOWDISK:\splunk\myindex\thaweddb
coldToFrozenDir = ARCHIVEDISK:\splunk\myindex\frozen
maxTotalDataSizeMB = 1000000
homePath.maxDataSizeMB = 600000
coldPath.maxDataSizeMB = 400000
frozenTimePeriodInSecs = 31536000

Here I have the questions below:
1. If I am setting the homePath and coldPath sizes separately, why is it required to mention the whole index size (maxTotalDataSizeMB) as well, or is it not mandatory?
2. I read somewhere that thawedPath does not take into consideration any environment variable/volume path referenced. Is that correct? If so, I have to set thawedPath manually.
Thanks,
Hi! Is anyone experienced with the new Add-On Builder (3.0.1)? I would like to implement some logic before saving an "add-on setup parameter" or a "data input parameter". To be more precise, I would like to manipulate string values, concatenating them and encoding them with Base64. I spent some time reverse-engineering the Python code, but it looks like parameters are saved according to some generic logic and it is not possible to override it for specific parameters. What is the right way to do this? Where should I place my code so that it survives future changes made using the Add-On Builder?