All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


All, I need help indexing only the success responses in Splunk; the failure responses (error / authentication failed / host down) currently go to splunkd.log. Please help me modify the Python script so the failed messages are written to a log file. Below is the script:

import requests
import json
import sys

host = str(sys.argv[1])
headers = {
    "accept": "application/json",
    "content-type": "application/json"
}
Input_URL = ['https://{host}.com/apigee/files/data'.format(host=host)]

def return_json(url):
    try:
        response = requests.get(url, headers=headers)
        if not response.status_code // 100 == 2:
            return "ERROR: Unexpected response {}".format(response)
        json_obj = response.json()
        return json.dumps(json_obj)
    except requests.exceptions.RequestException as e:
        return "ERROR: {}".format(e)

for url in Input_URL:
    print(return_json(url))
@seunomosowon Need help with this: I am using Splunk Enterprise version 8.0.4 and TA-mailclient 1.3.0.

message from ""C:\Program Files\Splunk\bin\Python2.exe" "C:\Program Files\Splunk\etc\apps\TA-mailclient\bin\mail.py"" ERRORinclude_headers
New scheduled exec process: "C:\Program Files\Splunk\bin\Python2.exe" "C:\Program Files\Splunk\etc\apps\TA-mailclient\bin\mail.py"
Found external scheme definition for stanza="mail://" from spec file="C:\Program Files\Splunk\etc\apps\TA-mailclient\README\inputs.conf.spec" with parameters="protocol, mailserver, password, is_secure, mailbox_cleanup, include_headers"
message from ""C:\Program Files\Splunk\bin\Python2.exe" "C:\Program Files\Splunk\etc\apps\TA-mailclient\bin\mail.py"" ERRORinclude_headers
message from ""C:\Program Files\Splunk\bin\Python2.exe" "C:\Program Files\Splunk\etc\apps\TA-mailclient\bin\mail.py"" ERRORinclude_headers
message from ""C:\Program Files\Splunk\bin\Python2.exe" "C:\Program Files\Splunk\etc\apps\TA-mailclient\bin\mail.py"" ERRORinclude_headers
New scheduled exec process: "C:\Program Files\Splunk\bin\Python2.exe" "C:\Program Files\Splunk\etc\apps\TA-mailclient\bin\mail.py"
Found external scheme definition for stanza="mail://" from spec file="C:\Program Files\Splunk\etc\apps\TA-mailclient\README\inputs.conf.spec" with parameters="protocol, mailserver, password, is_secure, mailbox_cleanup, include_headers"
message from ""C:\Program Files\Splunk\bin\Python2.exe" "C:\Program Files\Splunk\etc\apps\TA-mailclient\bin\mail.py"" ERRORinclude_headers
New scheduled exec process: "C:\Program Files\Splunk\bin\Python2.exe" "C:\Program Files\Splunk\etc\apps\TA-mailclient\bin\mail.py"
Found external scheme definition for stanza="mail://" from spec file="C:\Program Files\Splunk\etc\apps\TA-mailclient\README\inputs.conf.spec" with parameters="protocol, mailserver, password, is_secure, mailbox_cleanup, include_headers"
message from ""C:\Program Files\Splunk\bin\Python2.exe" "C:\Program Files\Splunk\etc\apps\TA-mailclient\bin\mail.py"" ERRORinclude_headers
New scheduled exec process: "C:\Program Files\Splunk\bin\Python2.exe" "C:\Program Files\Splunk\etc\apps\TA-mailclient\bin\mail.py"
Found external scheme definition for stanza="mail://" from spec file="C:\Program Files\Splunk\etc\apps\TA-mailclient\README\inputs.conf.spec" with parameters="protocol, mailserver, password, is_secure, mailbox_cleanup, include_headers"
group=conf, action=app_update, total_seconds=6.672, total_count=1, slowest_app=TA-mailclient, slowest_app_seconds=6.672, modinputs_seconds=6.000, modinputs_count=1, rest_endpoints_seconds=0.500, rest_endpoints_count=1, authorize_seconds=0.172, authorize_count=1, transforms_seconds=0.000, transforms_count=1, props_seconds=0.000, props_count=1
I have a base search that produces a lookup containing a million rows. When I run inputlookup, it displays the number of events correctly. Sample data below.

| inputlookup windows.csv

_time                          time       host      event  user
2020-05-20T00:00:01.000+0000   5/20/2020  qatads23  4624   kristin
2020-05-22T00:00:12.000+0000   5/22/2020  sgpads30  4624   john
2020-05-24T00:00:12.000+0000   5/24/2020  sgpads30  4624   alice
2020-05-24T00:00:28.000+0000   5/24/2020  sgpads30  4624   mike
2020-05-24T00:00:33.000+0000   5/24/2020  sgpads30  4624   susan
2020-05-24T00:00:33.000+0000   5/24/2020  sgpads30  4624   lisa

However, when the search is put into a dashboard, fewer results are returned. I suspect this is a truncation issue. I've noticed that the bigger the lookup grows, the fewer results are returned, even though the date range selected in the time picker is the same. For example, when I select the date range May 20th-25th, 180,000 events are returned; as the lookup file grows, the same date range returns fewer events, maybe around 150,000. Below is the search used to format the time field so the time picker works when searching against a lookup file.

Reference: https://www.splunk.com/en_us/blog/tips-and-tricks/i-cant-make-my-time-range-picker-pick.html

| inputlookup windows.csv
| eval time=strftime(_time,"%Y-%m-%d")
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| eval Start_Time=strftime(info_min_time,"%m/%d/%y")
| eval Stop_Time=strftime(info_max_time,"%m/%d/%y")
| fields _time host event user
| search $field1$ $field2$
| sort 0 _time

_time                          time       host      event  user
2020-05-20T00:00:01.000+0000   5/20/2020  qatads23  4624   kristin
2020-05-22T00:00:12.000+0000   5/22/2020  sgpads30  4624   john
2020-05-24T00:00:12.000+0000   5/24/2020  sgpads30  4624   alice
2020-05-24T00:00:33.000+0000   5/24/2020  sgpads30  4624   susan

Notice that 2 entries (mike and lisa) are missing compared to the original lookup. One thing to note: when searching via the index there's no issue with the number of results returned, except that it's very slow. Hence I appended the results into a lookup file and search that file instead, which should in theory be faster. Below is the same search querying the index instead of the lookup file.

index=windows
| fields _time host event user
| search $field1$ $field2$
| sort 0 _time

Any help would be much appreciated. Thank you.
I have a date like 2020-06-08 06:39:49.0 and I need to extract the work week from it. Thanks in advance.
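For reference, the work-week logic can be sketched outside Splunk in plain Python (in SPL the equivalent would typically use strftime's %V format code; this is an illustration of the concept, not the poster's code):

```python
from datetime import datetime

# Parse the sample timestamp, including the fractional second.
dt = datetime.strptime("2020-06-08 06:39:49.0", "%Y-%m-%d %H:%M:%S.%f")

# ISO work-week number (1-53); 2020-06-08 falls in week 24.
week = dt.isocalendar()[1]
print(week)  # 24
```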
I have a column chart that works great, but I want to add a single value to each column. The columns represent the sum of run times for a series of daily sub-jobs. Jobs are variable, but let's say for example there are 5 jobs that run, each with maybe 5 sub-jobs. If I run my stats and chart using:

stats sum(subJobRunTime) as totalRunTime by Job Date | xyseries Date, Job, totalRunTime

I get a nice column chart. The table looks like this:

Date      Job 1        Job 2        Job 3        Job 4        Job 5
6/1/2020  78.7         11.11666667  3.366666667  48.01666667  2.283333333
6/2/2020  93.55        13.71666667  39.88333333  53           3.233333333
6/3/2020  99.01666667  11.86666667  5.7          35.91666667  2.666666667
6/4/2020  77.36666667  34.98333333  86.31666667  50.78333333  2.183333333
6/5/2020  117.2166667  12.25        58.65                     2.133333333

I would like to simply add a row at the bottom that is the average plus one standard deviation for each column, which I would then add as an overlay on the chart: a "limit line" the user can read as "above this, the job is taking too long."

I have tried append, appendpipe, and even appendcols, as well as eventstats to add the average as an additional field to each event, but these either add no value or, worse, make the chart unnecessarily complicated (converting my columns into runTimeJob1, avgRunTimeJob1, runTimeJob2, avgRunTimeJob2, etc.), so I don't have a single "average" series to overlay on the column chart. I would like something like:

Date        Job 1        Job 2        Job 3        Job 4        Job 5
6/1/2020    78.7         11.11666667  3.366666667  48.01666667  2.283333333
6/2/2020    93.55        13.71666667  39.88333333  53           3.233333333
6/3/2020    99.01666667  11.86666667  5.7          35.91666667  2.666666667
6/4/2020    77.36666667  34.98333333  86.31666667  50.78333333  2.183333333
6/5/2020    117.2166667  12.25        58.65                     2.133333333
Average+SD  100          20           60           55           3.5

(And yes, the values in the last row are totally made up, but I hope you see my point.) Then I'd have the ability to chart job runtime by date with an overlay of "Average+SD". Any suggestions?
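The bottom-row arithmetic itself is straightforward; a sketch in plain Python with the statistics module, using the Job 1 column from the table above (whether to use population or sample standard deviation is an assumption here):

```python
from statistics import mean, pstdev

# Job 1 run times from the table above.
job1 = [78.7, 93.55, 99.01666667, 77.36666667, 117.2166667]

# The desired "limit line" value for this column:
# average plus one (population) standard deviation.
limit = mean(job1) + pstdev(job1)
print(round(limit, 1))
```

The same value per Job column is what the Average+SD row would hold.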
Hi All, I'm in the process of upgrading this app to be 8.x compatible: https://splunkbase.splunk.com/app/1922/ This app uses the Python splunklib SDK, so I also downloaded the latest from https://github.com/splunk/splunk-sdk-python and replaced it. I've got it working and it passes AppInspect. The only problem is that it doesn't pass Splunk's 8.x Upgrade Readiness Checker app: https://splunkbase.splunk.com/app/4698/ The culprit seems to be the latest splunklib Python SDK. I'm sure it will be fine; I just thought I would ping the devs to either fix the warnings or get the Upgrade Readiness devs to exclude the warnings for splunklib. An example of one of the warnings. I'd attach the full JSON output, but this forum doesn't seem to allow attachments. Edit: Full repo of the updated app, including the results from both AppInspect and the Upgrade Readiness Checker: https://bitbucket.org/rivium/ta-base64/src/master/
I need to filter for events that do not contain the word "error". For example, I have events containing:

"POST /operation/requiredword"
"POST /operation/requiredword | error"

I want to count only "POST /operation/requiredword" and exclude "POST /operation/requiredword | error". Here is the query I am using right now; it is giving me both the events with and without "error":

index="depat-test-app"
| rex "DN: (?<ConsumingApp>.*?)[}\s]"
| rex field=_raw "(?<Passed>POST \/operation\/requiredword)"
| stats count(Passed) by ConsumingApp

What I want is something like this:

index="depat-test-app"
| rex "DN: (?<ConsumingApp>.*?)[}\s]"
| rex field=_raw "(?<Passed>POST \/operation\/requiredword NOT error)"
| stats count(Passed) by ConsumingApp
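In plain Python, the exclusion the poster wants amounts to a regex negative lookahead; a sketch of the matching logic only (in SPL one would more commonly add NOT error to the base search rather than embed it in the rex):

```python
import re

events = [
    "POST /operation/requiredword",
    "POST /operation/requiredword | error",
]

# Match the required phrase only when "error" does not appear later in the event.
pattern = re.compile(r"POST /operation/requiredword(?!.*error)")

count = sum(1 for e in events if pattern.search(e))
print(count)  # 1
```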
Hello everyone, I am getting incorrect values while using appendcols to fetch different results, like currentweek, 1weekago, and 2weeksago.

| bin _time span=1h
| stats count(trid) as transaction_count_1WAgo avg(duration) as AverageRT_1WAgo by _time ABC
| fields *
| appendcols [search earliest=** latest=**
    | bin _time span=1h
    | stats count(trid) as transaction_count_2WAgo avg(duration) as AverageRT_2WAgo by _time ABC
    | fields *]
| appendcols [search earliest=** latest=**
    | bin _time span=1h
    | stats count(trid) as transaction_count_currentweek avg(duration) as AverageRT_currentweek by _time ABC
    | fields *]

If I run each of these queries individually, they show results; but when I combine them into a single search using appendcols, they show different values. Any suggestions? Note: I have used all the latest/earliest time frames correctly, I just didn't show them here.
I have the below table from this query:

sourcetype=abc source=*restart.log
| rex field=_raw "server (?<JVM>\w+.*APP)......."   (etc.; to extract the JVM and Status I used the rex command)
| rex field=source "/applications(?<Request>\d+)/\w+.*"
| table host,Request,JVM,Status
| dedup host,Request,JVM,Status

host   Request  JVM     Status
host1  46742    A1_APP  started
host1  46742    A2_APP  started
host2  46742    B1_APP  started
host2  46742    B2_APP  failed
host1  27598    C1_APP  started
host2  27598    D1_APP  started
host1  27598    C2_APP  started

From the above table, I want my query to check whether all the JVMs in each Request got started or not. The number of JVMs will vary. If all JVMs were restarted, the final output should be Success; otherwise it should be Failure, even if only one JVM was not started. The output should look like:

Request  Result
46742    Failure
27598    Success
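The per-Request roll-up logic (in SPL this would typically be a stats aggregation over a status flag) can be sketched in plain Python using the rows from the table above:

```python
rows = [
    ("host1", "46742", "A1_APP", "started"),
    ("host1", "46742", "A2_APP", "started"),
    ("host2", "46742", "B1_APP", "started"),
    ("host2", "46742", "B2_APP", "failed"),
    ("host1", "27598", "C1_APP", "started"),
    ("host2", "27598", "D1_APP", "started"),
    ("host1", "27598", "C2_APP", "started"),
]

# A request succeeds only if every one of its JVMs reports "started".
all_started = {}
for host, request, jvm, status in rows:
    all_started[request] = all_started.get(request, True) and status == "started"

result = {req: "Success" if ok else "Failure" for req, ok in all_started.items()}
print(result)  # {'46742': 'Failure', '27598': 'Success'}
```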
Hi All, I have the query below, which joins 3 sources (1, 2, 3) on an id field. This works when the id values match exactly across the 3 sources. I have one additional condition: the ids could also match by substring, for example:

id1=F80C05F3-19AF-40D3-AC73-19544E928D21
id2=XOP-F80C05F3-19AF-40D3-AC73-19544E928D21
id3=ABC-F80C05F3-19AF-40D3-AC73-19544E928D21

The query below needs to be modified for substring matching, based on id1 existing in id2 or id3, and it needs to return the results. How can it be modified?

sourcetype=source1 id1="*" OR sourcetype=source2 id2="*" OR sourcetype=source3 id3="*"
| eval Id=coalesce(id1,id2,id3)
| stats count by Id sourcetype
| xyseries Id sourcetype count
| fillnull source1 source2 source3 value="Not exists"
| table source1 source2 source3
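One way to frame the normalization the poster needs: extract the common GUID from each id so prefixed ids join with bare ones. A plain-Python sketch (in SPL this could be a rex or replace before the stats; the assumption here is that every id embeds a standard 8-4-4-4-12 hex GUID):

```python
import re

# Matches a standard 8-4-4-4-12 hex GUID anywhere in the string.
GUID = re.compile(
    r"[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}"
)

def base_id(value):
    """Return the embedded GUID so prefixed ids normalize to the same key."""
    m = GUID.search(value)
    return m.group(0) if m else value

ids = [
    "F80C05F3-19AF-40D3-AC73-19544E928D21",
    "XOP-F80C05F3-19AF-40D3-AC73-19544E928D21",
    "ABC-F80C05F3-19AF-40D3-AC73-19544E928D21",
]
assert len({base_id(i) for i in ids}) == 1  # all three normalize to the same key
```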
I have a search that uses the transaction command:

| transaction startswith=<...> endswith=<...>

to group certain events I want to see. How would I search this further to get the time difference between each event in the transaction, and then graph those time differences as a line/bar chart with the events/hosts on the x-axis and time on the y-axis? There are no specific fields in each event that I want to use to calculate the time difference; I only want to show the time difference between each and every raw log in the transaction.
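The underlying computation is just the delta between consecutive event timestamps within the transaction; sketched in plain Python (the epoch-second timestamps are a made-up sample, and in SPL streamstats with range(_time) would play a similar role):

```python
# Event times within one transaction, in epoch seconds (hypothetical sample).
times = [100, 101, 103, 106]

# Difference between each event and the one before it.
diffs = [later - earlier for earlier, later in zip(times, times[1:])]
print(diffs)  # [1, 2, 3]
```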
I have 100+ log files. Inside each log file there is no timestamp, and each line is a single event in Splunk. My requirement is to calculate the start time, end time, and time taken for each log file, based on the time the data was uploaded into Splunk. Please suggest how to achieve this. My output should look like:

FileName     StartTime       EndTime         Time Taken
deploy1.log  5/6/2020 07:30  5/6/2020 07:35  5 mins
deploy2.log  5/6/2020 08:23  5/6/2020 08:35  12 mins
deploy3.log  5/6/2020 09:12  5/6/2020 09:27  15 mins
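The aggregation is a min/max of index time grouped by source file (in SPL this would typically be stats on _indextime by source). The same logic, sketched in plain Python with hypothetical event tuples:

```python
from collections import defaultdict

# (source file, index time in epoch seconds) -- hypothetical sample events.
events = [
    ("deploy1.log", 1000), ("deploy1.log", 1300),
    ("deploy2.log", 2000), ("deploy2.log", 2720),
]

by_file = defaultdict(list)
for source, t in events:
    by_file[source].append(t)

# Start time, end time, and duration in minutes for each file.
summary = {src: (min(ts), max(ts), (max(ts) - min(ts)) / 60)
           for src, ts in by_file.items()}
print(summary)  # {'deploy1.log': (1000, 1300, 5.0), 'deploy2.log': (2000, 2720, 12.0)}
```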
I have used the below query to create a table:

index=abc sourcetype=xyz source=*.txt host=host1 OR host=host2
| rex field=source "/abcdef(?<EAR_Name>\w+.*ear)/versionInfo.txt"
| rex field=source "/xyzpqrs(?<EAR_Name>\w+.*ear)/versionInfo*.txt"
| rex field=_raw "deployTag=\d+\@\w+.*@(?<Label>\w+.*)"
| chart latest(Label) by EAR_Name,host

My current table is:

EAR_Name      host1         host2
mobile.ear    sg.mobile-12  sg.mobile-10
google.ear    hk.google-45  hk.google-45
facebook.ear  th.fb-37      th.fb-37

Here sg.mobile-12, hk.google-45, etc. are values of Label. My requirement is to compare (row-wise) each value of the host1 column with the host2 column and produce output like "Matching" / "Not Matching", as below:

EAR_Name      host1         host2         Result
mobile.ear    sg.mobile-12  sg.mobile-10  Not Matching
google.ear    hk.google-45  hk.google-45  Matching
facebook.ear  th.fb-37      th.fb-37      Matching
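Row by row, the comparison is a simple equality check (in SPL this would usually be an eval with an if over the two host fields); a plain-Python sketch using the rows above:

```python
rows = [
    ("mobile.ear", "sg.mobile-12", "sg.mobile-10"),
    ("google.ear", "hk.google-45", "hk.google-45"),
    ("facebook.ear", "th.fb-37", "th.fb-37"),
]

# Append "Matching" when the two host columns agree, otherwise "Not Matching".
table = [(ear, a, b, "Matching" if a == b else "Not Matching") for ear, a, b in rows]
print(table[0][3])  # Not Matching
```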
I'm trying to create a visualization of web service activity. I thought the timeline would be a good representation of when services are up and down. Unfortunately, the logs I have to work with ping services every 15 minutes, so every 15 minutes there's a log that states:

<serviceName> is up: true OR <serviceName> is up: false

I was able to implement this query and get something back (last 60 minutes):

index=application_activity up
| transaction startswith="*true" endswith="*false" by service
| table _time service duration

But of course, because of the 15-minute interval, it has huge gaps. How can I have these gaps filled in for as long as the service is up, and only show gaps when it is false? I'm also open to another recommendation for a visualization.
Greetings, I need to search for requests from the same username that occur within a certain time interval, say, less than 100 ms, and output various request attributes. How can the query be constructed to extract such requests? Thanks in advance.
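The pairing logic amounts to ordering requests by user and time, then flagging consecutive requests from the same user whose gap is under the threshold (in SPL, streamstats with a by clause would play this role); a plain-Python sketch with hypothetical data:

```python
# (username, request time in seconds) -- hypothetical sample requests.
requests = [("alice", 10.000), ("alice", 10.050), ("bob", 10.060), ("alice", 12.000)]

THRESHOLD = 0.100  # 100 ms

# Sort by user then time; keep consecutive same-user pairs closer than the threshold.
ordered = sorted(requests)
close_pairs = [
    (a, b)
    for a, b in zip(ordered, ordered[1:])
    if a[0] == b[0] and b[1] - a[1] < THRESHOLD
]
print(close_pairs)  # [(('alice', 10.0), ('alice', 10.05))]
```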
Hello, I need your help please. I have a log like:

F5:WAF "2020-04-08 15:36:21","146632585856347577","",192.121.195.41,443,190.92.12.16,....

and I want to extract the data after the text "F5:WAF":
1st field: the date, "2020-04-08 15:36:21"
2nd field: the source IP, "192.121.195.41"
3rd field: the port, "443"
and the 4th field: the destination IP, "190.92.12.16"
Thank you!
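The extraction can be prototyped with the same kind of regex one would hand to rex; a plain-Python sketch against the sample line (the field names are assumptions, and the pattern assumes the field order shown in the sample):

```python
import re

line = 'F5:WAF "2020-04-08 15:36:21","146632585856347577","",192.121.195.41,443,190.92.12.16,....'

# Capture the quoted timestamp, skip the next two quoted fields,
# then capture source IP, port, and destination IP.
m = re.search(r'F5:WAF "([^"]+)","[^"]*","[^"]*",([\d.]+),(\d+),([\d.]+)', line)
date, src_ip, port, dest_ip = m.groups()
print(date, src_ip, port, dest_ip)  # 2020-04-08 15:36:21 192.121.195.41 443 190.92.12.16
```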
Hello all! I've inherited a large Splunk deployment and I've been given some leniency with setting up, or rather revamping, the monitoring. Environment:
- 3 primary locations
- 30-40 indexers per location
- 10-12 search heads per location
- 1 DMC per location
** The numbers above don't account for BCP or lower environments.
Right now each DMC is responsible for its own location; however, there is a push to have the entire deployment's health available in a "single pane of glass". Without regard to cost, what is the ideal way to accomplish this? I've toyed with standing up a single DMC in one of the regions and plugging all indexers, search heads, etc. into it, purely for the health perspective. The same scenario in Splunk Cloud is also possible. Using one of the existing DMCs as the master for all regions is on the table as well. I'd love to hear what is currently out there, or what architecture makes the most sense in this scenario. Cheers!
Hi, is it possible to have two or more app agents on one server? ^ Post edited by @Ryan.Paredez to improve the title.
Hi All, I have an app with multiple dashboards, and I have created a new settings dashboard containing 3 buttons for 3 different colors. When I click these buttons, the colors change according to what I set in the CSS file, but only on the settings dashboard where the buttons are present. How can I make the color selected on the settings page apply to all the other dashboards as well? I don't want to create separate buttons in each dashboard; with the buttons on the settings page, I want to control the color of every dashboard. I have searched the forum, and all I have found is dashboard-level color changes, but I need app-level. Please give your suggestions.
I am trying to replicate an interface where records are stored in a table. There is a checkbox in every row of the table; if I, as the user, want a row to be acknowledged, I click the checkbox of that row and press a submit button. This indicates that I have acknowledged the error at that particular time. I want to replicate the same in Splunk. Currently I have an editable Splunk table in a dashboard, but we have to enter the time and username and then save the lookup. Instead, I want a checkbox: when it is clicked, the username and time for that particular row should be saved in the last column, which will be empty for unacknowledged rows. For example, below is the dashboard I am trying to replicate in Splunk; the first row has been acknowledged, so it has a value in the last column, "Ack By", showing who acknowledged it and when. I want to do the same thing in Splunk with this table in a dashboard: whoever clicks a row should have their email/name and the click time recorded in the new "Ack By" column, and the checkbox should be hidden for acknowledged rows. Could anyone please help with this?