All Topics

Hi all, I have very long events (more than 10,000 characters) that I have to send via syslog (UDP) to a third-party system. I'm working on a Heavy Forwarder with Splunk 8.0.3 running on Red Hat Linux. Events are truncated at 1,024 characters. I know about the maxEventSize parameter in outputs.conf, but it doesn't work in my situation (I set maxEventSize to a much larger number). Has anyone had the same problem? Thanks in advance. Ciao. Giuseppe
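For reference, a minimal sketch of the outputs.conf stanza in question (the target host, port, and size below are placeholders):

```
[syslog:thirdparty]
server = 203.0.113.10:514
type = udp
maxEventSize = 20480
```

Note that maxEventSize must sit in the [syslog:...] stanza that the routing actually uses, and UDP itself caps a single datagram payload at roughly 65,507 bytes, so very large events can still be cut in transit regardless of this setting.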
I have done some really basic testing because I want to prove that this is not working correctly. I added 3 scripts to a clean inputs.conf:

[powershell://Powershell01]
script = 'Filetest01' | out-file "c:\temp\filetest01 $(get-date -f "yyyy-MM-dd HH.mm").txt"
schedule = */5 * * * *
Index = Sandbox
Source = Powershell-test01
Sourcetype = Powershellscript

[powershell://Powershell02]
script = 'Filetest02' | out-file "c:\temp\filetest02 $(get-date -f "yyyy-MM-dd HH.mm").txt"
schedule = */5 * * * *
Index = Sandbox
Source = Powershell-test02
Sourcetype = Powershellscript

[powershell://Powershell03]
script = 'Filetest03' | out-file "c:\temp\filetest03 $(get-date -f "yyyy-MM-dd HH.mm").txt"
schedule = */5 * * * *
Index = Sandbox
Source = Powershell-test03
Sourcetype = Powershellscript

The log reports this when starting the service:

06-26-2020 10:06:07.1442893+2 INFO Exiting powershell host script.
06-26-2020 10:06:22.8073774+2 INFO start splunk-powershell.ps1
06-26-2020 10:06:24.2149136+2 INFO launched disposer

This is the result:
1. Filetest01 starts to run as the first job when the service is restarted, but none of the others do.
2. Files should not start 1 second behind schedule.
3. Some scripts execute 2 times.
4. Filetest02 was skipped once.
It jumps from :10 to :20 where it was expected to run every 5 minutes:

Name                             LastWriteTime
----                             -------------
filetest01 2020-06-26 10.06.txt  26-06-2020 10:06:24
filetest01 2020-06-26 10.14.txt  26-06-2020 10:14:59
filetest01 2020-06-26 10.19.txt  26-06-2020 10:19:59
filetest01 2020-06-26 10.20.txt  26-06-2020 10:20:00
filetest01 2020-06-26 10.29.txt  26-06-2020 10:29:59
filetest01 2020-06-26 10.34.txt  26-06-2020 10:34:59
filetest01 2020-06-26 10.35.txt  26-06-2020 10:35:00
filetest01 2020-06-26 10.44.txt  26-06-2020 10:44:59
filetest01 2020-06-26 10.49.txt  26-06-2020 10:49:59
filetest01 2020-06-26 10.50.txt  26-06-2020 10:50:00
filetest02 2020-06-26 10.09.txt  26-06-2020 10:09:59
filetest02 2020-06-26 10.10.txt  26-06-2020 10:10:00
filetest02 2020-06-26 10.19.txt  26-06-2020 10:19:59
filetest02 2020-06-26 10.24.txt  26-06-2020 10:24:59
filetest02 2020-06-26 10.25.txt  26-06-2020 10:25:00
filetest02 2020-06-26 10.34.txt  26-06-2020 10:34:59
filetest02 2020-06-26 10.39.txt  26-06-2020 10:39:59
filetest02 2020-06-26 10.40.txt  26-06-2020 10:40:00
filetest02 2020-06-26 10.49.txt  26-06-2020 10:49:59
filetest03 2020-06-26 10.09.txt  26-06-2020 10:09:59
filetest03 2020-06-26 10.14.txt  26-06-2020 10:14:59
filetest03 2020-06-26 10.15.txt  26-06-2020 10:15:00
filetest03 2020-06-26 10.24.txt  26-06-2020 10:24:59
filetest03 2020-06-26 10.29.txt  26-06-2020 10:29:59
filetest03 2020-06-26 10.30.txt  26-06-2020 10:30:00
filetest03 2020-06-26 10.39.txt  26-06-2020 10:39:59
filetest03 2020-06-26 10.44.txt  26-06-2020 10:44:59
filetest03 2020-06-26 10.45.txt  26-06-2020 10:45:00

UniversalForwarder version 7.0.2.0, upgraded to version 8.0.4.0. After the upgrade it looks good:

Name                             LastWriteTime
----                             -------------
filetest01 2020-06-26 10.57.txt  26-06-2020 10:57:12
filetest01 2020-06-26 11.00.txt  26-06-2020 11:00:00
filetest01 2020-06-26 11.05.txt  26-06-2020 11:05:00
filetest02 2020-06-26 10.57.txt  26-06-2020 10:57:12
filetest02 2020-06-26 11.00.txt  26-06-2020 11:00:00
filetest02 2020-06-26 11.05.txt  26-06-2020 11:05:00
filetest03 2020-06-26 10.57.txt  26-06-2020 10:57:12
filetest03 2020-06-26 11.00.txt  26-06-2020 11:00:00
filetest03 2020-06-26 11.05.txt  26-06-2020 11:05:00

In the end I found out this was a problem in that Splunk version, fixed in a later release. Is it possible to not have Splunk run all scripts when starting the service, as I don't think it supports cron 0/5 * * * *?
Hi all, how would I capture NetFlow from different switches in different network zones? I have deployed an Independent Stream Forwarder in the DMZ zone. How would I specify that it must capture flows from Switch 1, Switch 2, Router 1, and Router 2? Hope my question is clear.
Hi everyone, I'd be eternally grateful if someone could help point me in the right direction here. I'm trying to output a table with merged results from two different indexes.  There are two events: Index A:   time="12th April, 19:07:32", name="Bob", ip="192.168.0.45", searched="how do I tie my laces", mac="00:1B:44:11:3A:B7"   Index B:   the_time="12-04-20-190702", username="Bob", ipaddress="192.168.0.45", location="Home", macaddress="00:1B:44:11:3A:B7"   As you can see in both indexes there are common fields - IP address and MAC address, however the timestamps differ slightly.  I would like to output a table that contained all of the fields e.g. Time, Name, IP address, Searched, Location, MAC Address however I'm not entirely sure how I would construct the search.  Is there a way that I can make the table work even though the timestamp is out slightly? Any help would be amazing, thank you! Best wishes, D    
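One way to sketch this (index names `indexA`/`indexB` are placeholders; the field names come from the events quoted above): normalize the differing field names with coalesce, then group on the shared IP/MAC keys instead of the timestamp, so the slight clock skew stops mattering:

```
index=indexA OR index=indexB
| eval ip=coalesce(ip, ipaddress), mac=coalesce(mac, macaddress), name=coalesce(name, username)
| stats earliest(_time) as Time values(name) as Name values(searched) as Searched values(location) as Location by ip, mac
| convert ctime(Time)
```

This takes the earlier of the two timestamps as the row's Time; if the two events can be minutes apart, a transaction with maxspan, or binning _time before the stats, may be safer.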
I have an ID, among other things, that is extracted by Splunk DB Connect from a MySQL database. What's special about the ID is that it ends with 3 equals signs: XXXXXXXX=== I'm required to put this value into a summary index in order to make it available to a search head outside the cluster where it is indexed, and when it is written to the summary index everything appears good: the value is written as-is with the 3 equals signs. However, when I search for it, _raw displays the value as written, but when I list the field in a table or with a transforming command, the equals signs have been removed. I need this value to be exact, as I later need to compare it in order to join data. As the value always appears with 3 equals signs, I have temporarily used rtrim() on the source I am comparing it to, but it really bugs me that the characters get removed. PS: Extracting it from one Splunk server to another via the API is sadly not an option due to network limitations.
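A possible workaround, sketched under the assumption that the ID is written as id=<value> in the summary event and that the trailing = signs are being eaten by automatic key-value extraction: re-extract the value from _raw with an explicit pattern that keeps the padding (the field name `id` and the base64-style character class are assumptions):

```
index=summary source=my_summary
| rex field=_raw "id=(?<id_exact>[A-Za-z0-9+/]+={0,3})"
| table id_exact
```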
Hi guys. Does anyone know of detailed documentation for the jirafill SPL command from TA-jira-service-desk-simple-addon? I checked the online documentation for this add-on (below); however, I could not find details of the jirafill command and its options.

Online doc of this add-on: https://ta-jira-service-desk-simple-addon.readthedocs.io/en/latest/userguide.html#using-the-jira-service-desk-alert-action

For example, I cannot see the difference between jirafill opt=1, 2, and 3, or the other options.

Thanks in advance for your support.
I have a requirement: we need to give certain users access only to the Windows app and the Search app. My question: what is the easiest way to meet this requirement?
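One way to sketch this, assuming role-based app permissions are acceptable: create a role for those users, then restrict each app's read access in its metadata so only the intended roles see it (the role name below is a placeholder, and the stanza goes in the app's metadata/local.meta):

```
# $SPLUNK_HOME/etc/apps/<app>/metadata/local.meta
[]
access = read : [ admin, limited_users ], write : [ admin ]
```

The same effect can be reached in Splunk Web via each app's Permissions page; note that the users' role must also be absent from the read list of every other app they should not see.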
I am trying to create a Splunk dashboard where I want to set a token value based on two dropdown values (the service dropdown and the environment dropdown):

<input type="dropdown" token="service" searchWhenChanged="true">
  <label>service</label>
  <choice value="capi">capi</choice>
  <choice value="crapi">crapi</choice>
  <choice value="oapi">oapi</choice>
  <default>capi</default>
  <initialValue>capi</initialValue>
</input>
<input type="dropdown" token="environment" searchWhenChanged="true">
  <label>Environment</label>
  <choice value="prod">prod</choice>
  <choice value="ppe">ppe</choice>
  <choice value="pte">pte</choice>
  <choice value="dev">dev</choice>
  <default>prod</default>
  <initialValue>prod</initialValue>
</input>

Above are the 2 dropdowns. Now I want to set a value for the token "endpoint" based on the values selected in the service and environment dropdowns. I tried using condition match, but I am not getting it right:

<condition match="$service$==capi AND $environment$==ppe">
  <set token = endpoint>"/capi/ppe"</set>
</condition>
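A hedged sketch of one way this could look: in Simple XML, <condition> must sit inside a <change> handler on an input, string literals in the match expression need quoting, and <set> takes the token name as an attribute. Since a <change> handler only fires when its own input changes, the same block would be repeated on both dropdowns; shown here on the environment input:

```
<input type="dropdown" token="environment" searchWhenChanged="true">
  <!-- label/choice/default elements as above -->
  <change>
    <condition match="$service$ == &quot;capi&quot; AND $environment$ == &quot;ppe&quot;">
      <set token="endpoint">/capi/ppe</set>
    </condition>
    <condition>
      <set token="endpoint">/$service$/$environment$</set>
    </condition>
  </change>
</input>
```

If the endpoint is always just /service/environment, the fallback condition alone may be enough, with no match expressions at all.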
I have logs that contain multiple timestamps. I want to select the timestamp from one specific field, shift its time, and store the result in _time, but I can't get it to work. For example, a log looks like this:

580 <158>1 2020-06-26T13:03:36+09:00 logforwarder x - - test 1,2020/06/26 04:03:30,no-serial,TRAFFIC,end,2304,2020/06/26 04:03:11,,,,,

In Splunk, the value 2020-06-26T13:03:36+09:00 ends up in _time. However, that is not the value I want in _time; I want _time to be the log's 2020/06/26 04:03:30 plus 9 hours. The value 2020/06/26 04:03:30 is stored in a key named generated_time. I defined the following in props.conf, but it did not work:

EVAL-temptime = strptime(generated_time,"%Y/%m/%d %H:%M:%S")
EVAL-temptime2 = strftime(temptime+(32400),"%Y/%m/%d %H:%M:%S")
FIELDALIAS-time = temptime2 AS _time

Is this kind of EVAL processing not possible in props.conf? I have confirmed that it works as a search; the following commands correctly store generated_time plus 9 hours in _time:

|eval temptime = strptime(generated_time,"%Y/%m/%d %H:%M:%S") | eval _time=strftime(temptime+(32400),"%Y/%m/%d %H:%M:%S") | stats count by _time
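EVAL- and FIELDALIAS- in props.conf are search-time operations and cannot overwrite _time, which is assigned at index time. One index-time sketch, assuming the sourcetype name is a placeholder and that generated_time is effectively UTC (9 hours behind the header timestamp): point the timestamp processor past the header timestamp with TIME_PREFIX and declare the timezone with TZ, so Splunk itself applies the +9-hour offset. The TIME_PREFIX regex below is an assumption and must be adapted to the real event layout:

```
# props.conf (index time, on the parsing tier)
[my_sourcetype]
TIME_PREFIX = test\s+\d+,
TIME_FORMAT = %Y/%m/%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TZ = UTC
```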
I have data like this:

Status    EndTime      StartTime
Pending   25-06-2020   24-06-2020
Pending   24-06-2020   23-06-2020
New       23-06-2020   22-06-2020
Pending   22-06-2020   21-06-2020
Pending   21-06-2020   20-06-2020
OLD       20-06-2020   19-06-2020
OLD       19-06-2020   18-06-2020
NEW       18-6-2020    17-06-2020

I need to detect each change of Status and capture the start and end time of each run of identical consecutive Status values. So the output should be:

Pending   25-06-2020   23-06-2020
New       23-06-2020   22-06-2020
Pending   22-06-2020   20-06-2020
OLD       20-06-2020   18-06-2020
NEW       18-6-2020    17-06-2020

Can somebody please help?
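A sketch of one approach, assuming the rows arrive in the displayed order (newest first): use streamstats to number each run of identical consecutive Status values, then collapse each run:

```
| streamstats current=f last(Status) as prev_status
| eval run_start=if(isnull(prev_status) OR Status!=prev_status, 1, 0)
| streamstats sum(run_start) as run_id
| stats first(EndTime) as EndTime last(StartTime) as StartTime by run_id, Status
| fields Status EndTime StartTime
```

first()/last() lean on event order, which avoids comparing DD-MM-YYYY strings directly; if the order isn't guaranteed, convert the dates with strptime() and use max()/min() on the epoch values instead.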
Splunk Installation - I would like to know how to add disk space to splunk for indexes and logs.  Where would this be in the fine manual?
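The relevant settings live in indexes.conf, covered in the Managing Indexers and Clusters of Indexers manual. A sketch, with the index name, paths, and size cap as placeholders, of pointing an index's buckets at a new volume and raising its size limit:

```
# indexes.conf
[my_index]
homePath   = /new_disk/splunk/my_index/db
coldPath   = /new_disk/splunk/my_index/colddb
thawedPath = /new_disk/splunk/my_index/thaweddb
maxTotalDataSizeMB = 500000
```

Changing these paths on an existing index requires stopping Splunk and moving the bucket directories first, so plan the change rather than editing in place.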
Hello,

I have a timechart with multiple fields. I want to append to the existing query, or add a new query, to display one field as text on the graph.

Example: in the graph above, I want to display text (a field value) from the search query at the two purple circles.

Thanks,
Fields in sourcetype1: A, B, C, txid (txid always has a value). Fields in sourcetype2: D, E, F, txid (txid may or may not have a value, depending on the source). Clearly I have a common field (txid) in two different sourcetypes, sourcetype1 and sourcetype2. Requirement: I need to print A and B from sourcetype1 in the cases where the txid is not found in sourcetype2. Please help.
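One sketch of this (the index name is a placeholder): search both sourcetypes at once, group by txid, and keep only the txids that never appear in sourcetype2:

```
index=my_index (sourcetype=sourcetype1 OR sourcetype=sourcetype2)
| stats values(A) as A values(B) as B values(sourcetype) as seen_in by txid
| where mvcount(seen_in)=1 AND seen_in="sourcetype1"
| table txid A B
```

Grouping with stats avoids a subsearch, so it sidesteps the usual subsearch result limits when the event volume is large.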
| inputlookup file.csv | search NOT [search index=sph | dedup DMC | table DMC ] | dedup number

In my scenario I have two files: a static CSV file with a constant list of data, and a .txt file which contains either the entire contents of the CSV file or only part of it. I want to compare them: if an element is in the CSV file but not in the .txt file, I display it and then raise an alert. The problem now is with the monitoring of the .txt file: sometimes, when only a single line changes, all the elements already present in the .txt file are reported as missing, because they are not re-indexed. Can we monitor a file without indexing it?
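This may be the expected behaviour of a monitor input: only new lines are indexed, so lines indexed earlier fall outside a short search window and look "missing". One hedged workaround is to widen the subsearch's own time range so previously indexed lines still count (the 30-day window below is an arbitrary placeholder):

```
| inputlookup file.csv
| search NOT [ search index=sph earliest=-30d | dedup DMC | table DMC ]
| dedup number
```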
I have a list of GPS points in a lookup file which describes a race track, generated using https://www.gpsvisualizer.com. This lookup file has maybe 100 points which describe the path of the track. The lookup headings are latitiude,longitude,Segment name.

I then have some GPS traces of my car driving round this race track, where a GPS point is recorded roughly every second. For each recorded car location point, I want to look in this lookup file and return the Segment name which is physically closest. It's not quite a substring/pattern match (i.e. match_type WILDCARD); I would like to return the segment name for the closest point I have in the lookup file.

I'm aware of the haversine app: https://splunkbase.splunk.com/app/936/

And this is someone bundling the algorithm into a macro: https://community.splunk.com/t5/Splunk-Search/Distance-between-two-Geocoordinates/td-p/422514

But I'm not sure how I could use that macro or the haversine function in a lookup. So far I'm just doing literal matches of the lat/long values in the car traces and I'm getting no matches at all (very disappointing). Any suggestions/ideas?
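A sketch without either app, assuming the car trace carries fields lat/lon (placeholder names) and the lookup file is track_points.csv (also a placeholder; the latitiude spelling matches the headings quoted above): cross-join every car point with every track point via a dummy key, compute the haversine distance in plain eval, and keep the nearest row per event. With ~100 track points the cross product stays small.

```
<car gps search>
| eval joiner=1
| join type=inner joiner max=0
    [| inputlookup track_points.csv
     | rename latitiude as t_lat, longitude as t_lon, "Segment name" as segment
     | eval joiner=1]
| eval rlat1=pi()*lat/180, rlat2=pi()*t_lat/180
| eval dlat=pi()*(t_lat-lat)/180, dlon=pi()*(t_lon-lon)/180
| eval a=sin(dlat/2)*sin(dlat/2)+cos(rlat1)*cos(rlat2)*sin(dlon/2)*sin(dlon/2)
| eval dist_km=2*6371*atan2(sqrt(a), sqrt(1-a))
| eventstats min(dist_km) as min_dist by _time
| where dist_km=min_dist
| fields - joiner, min_dist
```

join with max=0 returns all matching rows, which is what turns the dummy joiner key into a cross product.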
I have created an add-on using Splunk Add-on Builder. I don't want to use the global settings; instead I want to use setup parameters for Access Key, Secret Key, and Environment. I want to retrieve passwords based on Environment. I am also passing the Environment parameter from the alert action for conditional matching.

Thanks in advance
A data model acceleration is populating summary with "friendly" values from an automatic lookup (replacing a guid-like value) that I built for "privileged" users ... a lookup that I wish to retain (for dashboard purposes). But I can't figure out how to have acceleration summary not include that "friendly" lookup value. This appears to be because 'nobody' is always the one running acceleration searches. I'd like to be able to specify the user running the acceleration query for this data model to prevent the automatic lookup, but I can't see how to specify user of acceleration searches. All other acceleration config parameters appear to be in datamodels.conf. But I'm not seeing any option to specify alternative  "owner" / "run_as" account. Is it not possible to specify owner on the acceleration searches? Or is there another path to get acceleration searches to run as a particular user?
Using Splunk Enterprise (currently 7.3.x here); I'm not an admin, so cannot see/change "savedsearches.conf". I have over a dozen alerts (with more to come) that I need to copy from our pre-production Splunk search head to production (where I would edit the details). I would expect a simple "export" menu choice somewhere in the Alerts page or on a particular Alert, but there's nothing remotely similar. The Splunk "apps" already exist on both, and differ, so I couldn't ask some admin to just copy it. Given that the search terms, cron schedule, time span, trigger condition, and actions are all separate, it is a major amount of work to copy & paste (with multiple additional clicks).   
I am using the following query to get the average duration for certain Jobs. I want to have a visualization on a daily basis. However, for some reason I am unable to get a visualization. Below is the part of the query.

| eval AVGDURATION = (CALCEND - START)
| stats avg(AVGDURATION) as AVGDURATION by JOBNAME
| eval AVGDURATION = round(AVGDURATION, 2)
| eval HHMMSS=tostring(AVGDURATION, "duration")
| stats values(HHMMSS) as DURATION by JOBNAME
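One likely reason no time-based chart can be drawn is that stats ... by JOBNAME discards _time. A sketch of a per-day version using timechart instead, keeping the duration numeric in seconds, since the string produced by tostring(..., "duration") does not chart:

```
| eval AVGDURATION = CALCEND - START
| timechart span=1d avg(AVGDURATION) as AVGDURATION by JOBNAME
```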
Please, does anyone know a website where I can get video tutorials on Splunk's Search Processing Language (SPL)?