Hi, I can't see where I am wrong in my configuration files.

inputs.conf (/opt/splunk/etc/apps/my_app_poller/local):

[script://./bin/my_python_script.py]
interval = 27 7 * * *
index = my_index
sourcetype = script:python
source = script://./bin/my_python_script.py
disabled = 0

[batch:///opt/splunk/etc/apps/my_app_poller/bin/*my_python_script.json]
move_policy = sinkhole
index = my_index
sourcetype = script:python
crcSalt = <SOURCE>
disabled = 0

props.conf (/opt/splunk/etc/apps/my_app_2/default):

[script:python]
INDEXED_EXTRACTIONS = json
DATETIME_CONFIG = CURRENT
TRUNCATE = 999999
JSON_TRIM_BRACES_IN_ARRAY_NAMES = true
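It's hard to say without the error message, but one useful first step is to ask Splunk how it actually merges these stanzas across apps (the props stanza lives in my_app_2 while the inputs live in my_app_poller, so precedence matters). A hedged sketch of the usual btool checks, run as the splunk user on the instance doing the ingesting:

```
$SPLUNK_HOME/bin/splunk btool inputs list --debug script
$SPLUNK_HOME/bin/splunk btool props list script:python --debug
```

The --debug flag prints which file each effective setting came from. Also note that INDEXED_EXTRACTIONS is applied on the instance that first parses the file, so if a universal forwarder runs the batch input, the props stanza needs to be on the forwarder.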
Hello all! I'm looking to set up a daily alert based on comparing lookup data with a summary report.

- The lookup lists all hosts that are expected to report.
- The summary report lists the hosts that actually reported each day; it runs every midnight.

Example: the lookup has 2000 hosts and the summary report has 1000 hosts. I need to report on that delta of 1000 hosts, which would be set up as an alert. How can I achieve this? I'm trying with set and an outer join but couldn't get the result.

My search:

index=summary source=Daily.report reporting=yes earliest=-2d latest=-1d | table host, ip

Lookup: hosts.csv

Please help me in getting the solution. Thanks in advance!
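One common pattern for this kind of "expected minus actual" comparison is to start from the lookup and subtract the reporting hosts with a NOT subsearch. A sketch, assuming both the lookup and the report share a field named host:

```
| inputlookup hosts.csv
| fields host
| search NOT
    [ search index=summary source=Daily.report reporting=yes earliest=-2d latest=-1d
      | fields host ]
```

What remains is the hosts in the lookup that did not report. Saved as an alert, it can trigger whenever the result count is greater than 0.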
Hi, I have a base search with an appname field that lists all apps on my Splunk instance. I would like to output a table or statistic showing whether apps a, b, c, and d are present. If an app is present, show what version it is and which indexers it is installed on. If the base search returns no results, I would still like to see all of the a-d apps listed with an absent status. I already have a base search:

index=.. host=... AND appname IN (a,b,c,d)
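One way to guarantee a row for every app even when the search finds nothing is to append a skeleton row per app and merge. A sketch; the version field name and the use of host as the indexer name are assumptions from your description:

```
index=your_index host=your_host appname IN (a,b,c,d)
| stats values(version) AS version, values(host) AS indexers BY appname
| append
    [| makeresults
     | eval appname=split("a,b,c,d", ",")
     | mvexpand appname
     | table appname]
| stats values(version) AS version, values(indexers) AS indexers BY appname
| eval status=if(isnull(version), "absent", "present")
| table appname, status, version, indexers
```

The skeleton rows have no version, so after the second stats, any appname with a null version must be absent.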
Hi everyone, based on the Splunk Mobile documentation, users can respond to alerts by configuring an action name and a corresponding URL for the action, as highlighted here: https://docs.splunk.com/Documentation/Alerts/2.5.0/Alerts/ViewandRespondtoAlerts#Respond_to_an_alert

Does anyone have experience enabling that feature with the Splunk Cloud Gateway app installed on a search head on a corporate internal network? I've not been able to link the action back to the search head because the traffic goes through the internet. I assume that for it to work I need to open an additional port (e.g. for VPN-originated traffic) on my search head and a relevant tunneling profile, even though neither was needed for the main setup as described here: https://docs.splunk.com/Documentation/Gateway/1.12.2/Installation/Installation

Splunk Mobile is managed by MDM (Workspace ONE UEM), and I'm trying to use per-app VPN to configure it, but I'm not having success so far. I would greatly appreciate any findings.
So I'm referencing this solved answer: https://community.splunk.com/t5/Getting-Data-In/Extract-JSON-data-within-the-logs-JSON-mixed-with-unstructured/td-p/195292/page/2

But my configuration isn't working. I have this mess of a field:

Message={"ProviderGuid":"eb79061a-a566-4698-9119-3ed2807060e7","YaraMatch":[],"ProviderName":"Microsoft-Windows-DNSServer","EventName":"LOOK_UP","Opcode":0,"OpcodeName":"Info","TimeStamp":"2020-08-25T20:10:50.2211944-07:00","ThreadID":4168,"ProcessID":2632,"ProcessName":"dns","PointerSize":8,"EventDataLength":352,"XmlEventData":{"FormattedMessage":"RESPONSE_SUCCESS: TCP=0; InterfaceIP=192.168.1.5; Destination=192.168.1.50; AA=0; AD=0; QNAME=86.130.9.52.in-addr.arpa.; QTYPE=12; XID=17,307; DNSSEC=0; RCODE=0; Port=63,227; Flags=33,152; Scope=Default; Zone=..Cache; PolicyName=NULL; PacketData=439B8180000100010000000002383603...; AdditionalInfo= VirtualizationInstance:.; GUID={EC86881D-308D-4A91-94FE-5DCDDFCADFE3} ","RCODE":"0","TCP":"0","Scope":"Default","GUID":"{EC86881D-308D-4A91-94FE-5DCDDFCADFE3}","Port":"63,227","AD":"0","QNAME":"86.130.9.52.in-addr.arpa.","PolicyName":"NULL","MSec":"3243143.0166","XID":"17,307","AA":"0","Destination":"192.168.1.50","QTYPE":"12","Zone":"..Cache","PID":"2632","AdditionalInfo":"VirtualizationInstance:.","PacketData":"439B8180000100010000000002383603...","TID":"4168","ProviderName":"Microsoft-Windows-DNSServer","PName":"","DNSSEC":"0","InterfaceIP":"192.168.1.5","EventName":"LOOK_UP","Flags":"33,152"}}

and I'm trying to parse out the KV portion in the middle.
Here are my props.conf and transforms.conf files.

props.conf:

[windns]
REPORT-jsonkv = report-json,report-kv

transforms.conf:

[report-json]
REGEX = XmlEventData":{(?<kvdata>.+?),"

[report-kv]
REGEX = \s(\S+)=(\S+)
FORMAT = $1::$2
MV_ADD = true

If I understand the sequence correctly, that blob above should parse into kvdata as the following:

"FormattedMessage":"RESPONSE_SUCCESS: TCP=0; InterfaceIP=192.168.1.5; Destination=192.168.1.50; AA=0; AD=0; QNAME=86.130.9.52.in-addr.arpa.; QTYPE=12; XID=17,307; DNSSEC=0; RCODE=0; Port=63,227; Flags=33,152; Scope=Default; Zone=..Cache; PolicyName=NULL; PacketData=439B8180000100010000000002383603...; AdditionalInfo= VirtualizationInstance:.; GUID={EC86881D-308D-4A91-94FE-5DCDDFCADFE3} "

and then that should become KV pairs:

TCP=0
InterfaceIP=192.168.1.5

and so on... (except "AdditionalInfo" will NOT parse out due to the REGEX, but the rest should, and that's OK).

I have a single server, basic config. Suggestions appreciated.
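A quick way to sanity-check the two REGEX stages outside Splunk is to replay them in Python on a shortened copy of the event. This sketch mirrors your [report-json] and [report-kv] patterns exactly; note that `\s(\S+)=(\S+)` greedily swallows the trailing `;` (and a trailing `"`) into each value, which may be why the extractions look wrong even when the stanzas fire:

```python
import re

# Shortened sample of the event; the real Message field is much longer.
event = ('Message={"ProviderGuid":"eb79061a","XmlEventData":{"FormattedMessage":'
         '"RESPONSE_SUCCESS: TCP=0; InterfaceIP=192.168.1.5; '
         'QNAME=86.130.9.52.in-addr.arpa.; RCODE=0","RCODE":"0","TCP":"0"}}')

# Stage 1 -- mirrors [report-json]: grab everything between XmlEventData":{
# and the first ," (non-greedy), i.e. the FormattedMessage portion.
kvdata = re.search(r'XmlEventData":{(.+?),"', event).group(1)

# Stage 2 -- mirrors [report-kv]: \s(\S+)=(\S+). Because \S+ runs to the next
# whitespace, values come out as '0;' rather than '0'.
pairs = dict(re.findall(r'\s(\S+)=(\S+)', kvdata))

# A tighter value pattern that stops at ';' or '"' yields clean values.
clean = dict(re.findall(r'(\w+)=([^;"]+)', kvdata))
```

If the cleaner behavior is what you want, the transforms.conf equivalent would be REGEX = (\w+)=([^;"]+) with the same FORMAT.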
Hi, I am new to using the HTTP Event Collector (HEC). I have already received the HEC token. I need to send data to Splunk Cloud using the provided token. Any help would be appreciated. Thanks!
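A minimal stdlib-only Python sketch of an HEC send. The URL, sourcetype, and index here are placeholders you must replace; Splunk Cloud HEC hosts typically look like https://http-inputs-<your-stack>.splunkcloud.com:443, but the exact hostname and port are stack-specific, so treat that as an assumption and confirm with your Splunk Cloud details:

```python
import json
import urllib.request

def build_hec_request(base_url, token, event,
                      sourcetype="my_sourcetype", index="main"):
    """Build a POST request for the HEC /services/collector/event endpoint."""
    payload = json.dumps({
        "event": event,
        "sourcetype": sourcetype,
        "index": index,
    }).encode("utf-8")
    return urllib.request.Request(
        base_url + "/services/collector/event",
        data=payload,
        headers={"Authorization": "Splunk " + token,
                 "Content-Type": "application/json"},
        method="POST",
    )

# To actually send (requires network access to your stack):
# req = build_hec_request("https://http-inputs-<stack>.splunkcloud.com:443",
#                         "<your-hec-token>", {"msg": "hello"})
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())
```

A successful send returns a JSON body containing "Success"; a 403 usually means the token is wrong or disabled.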
Hi, I want to be able to visualise the top 5-10 login times based on a time range. So if I select a time range of 24 hours, I'd like to see the most frequent times users have logged in, bucketed by hour. I'm not too fussed about how many entries the top listing shows, but essentially I'd like it to display a top count like this:

Top most frequent logins per hour (24hr range)
1. 25/08/2020 10:00am - 11:00am = 200

In theory, it would list only the highest login counts per hour and then descend in count value from whatever top count I'd want.
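A sketch of one way to do this; the index, sourcetype, and action=login filter are assumptions standing in for whatever your login events look like:

```
index=auth sourcetype=your_sourcetype action=login
| bin _time span=1h
| stats count AS logins BY _time
| sort - logins
| head 5
| eval period=strftime(_time, "%d/%m/%Y %I:%M%p") . " - " . strftime(_time + 3600, "%I:%M%p")
| table period, logins
```

The dashboard's time range picker then controls whether this covers 24 hours or any other window.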
I am trying to parse JSON data in Splunk. This is the example data:

{
  "certificates": [
    {
      "NotAfter": "2020-09-06T15:34:22-07:00",
      "NotBefore": "2019-09-07T15:34:22-07:00",
      "allowedOperations": [
        "certificate_show",
        "certificate_der_download"
      ]
    },
    {
      "NotAfter": "2020-10-07T10:51:40-07:00",
      "NotBefore": "2019-10-08T10:51:40-07:00",
      "allowedOperations": [
        "certificates_show"
      ]
    }

I want only the data in each object starting at "NotAfter" to go into separate events, and the top part has to be ignored. I have tried regex101 to identify the line-breaking pattern; it works there but not in Splunk. Can you please guide me?

Thanks, nawaz
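If the whole document lands as one event, a search-time sketch can split it per certificate (field names taken from your sample; index/sourcetype are placeholders):

```
index=your_index sourcetype=your_sourcetype
| spath path=certificates{} output=cert
| mvexpand cert
| spath input=cert
| table NotAfter, NotBefore, allowedOperations{}
```

For true index-time splitting you would instead set LINE_BREAKER in props.conf. One common reason a regex that works on regex101 fails in Splunk: LINE_BREAKER requires a capture group, and everything matched by that first group is discarded as the event boundary, so the pattern has to be written around that behavior.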
Hi all, I'm creating a custom visualisation for my org, so I'm developing it into an app, following this tutorial: https://docs.splunk.com/Documentation/Splunk/8.0.5/AdvancedDev/CustomVizTutorial

Everything's great so far and working perfectly on my local machine, where I'm referencing a background image in my CSS using:

background-image: url( '/static/app/********/images/hot-water-icon.gif' );

(obviously ******** is the name of my app on my local machine). This is great, until I sent an upgrade to test on another machine, where it got installed into an app directory called "******** - Copy" (I assume because that was the name of the folder I copied to zip and send through, because I didn't want to send some other things in the /static/app/********/bin/ folder I was still working on at the time...).

So... how do I access the URI for the named app install folder? Can it be done? Or do I just need to presume it'll always be as I set it?

Thanks in advance, Phil
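One option that may sidestep the folder-name problem entirely: URLs in CSS resolve relative to the CSS file's own location, so a relative path never needs to mention the app directory. A sketch, assuming the tutorial's layout (visualization.css under appserver/static/visualizations/<viz_name>/ and the image under appserver/static/images/ — adjust the ../ depth to your actual tree):

```css
/* Resolves against this CSS file's URL, so it works whatever the
   install folder is called ("myapp" or "myapp - Copy"). */
.my-viz {
    background-image: url('../../images/hot-water-icon.gif');
}
```

If you need the app name in JavaScript instead, splunkjs/mvc/utils exposes a getCurrentApp() helper, though whether it is available inside a packaged custom viz is worth verifying.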
Hi, I have some documents that look like this:

{
  "document_id": "some-id",
  "status": "some-status",
  "fields": "some values",
  "stages": [
    {
      "duration": 0.031,
      "name": "my_name",
      "more_fields": "more_values",
      "array_field": [...],
    },
    ...
  ]
}

The length of the stages field can be quite large. I would like to calculate the avg or median duration for each type of stage, but not for all stage types. Here is what I have initially:

data_source
| fields status, stages{}.name as sname, stages{}.duration
| eval stage_fields=mvzip('stages{}.name', 'stages{}.duration')
| where job_result in ("some-status")
| mvexpand stage_fields
| fields stage_fields
| rex field=stage_fields "(?<stage_name>.+),(?<stage_duration>.+)"
| where stage_name in ("my_name", "other_name")
| timechart span=1h median(stage_duration) as "Median Stage Duration" by stage_name
| rename stage_name as "Stage Name"

This obviously starts truncating results because mvexpand starts expanding into a huge number of fields and complains about memory limits. I tried to put an mvfilter before it so that it only expands those stages that I am interested in, but obviously I didn't know how to use it, so that ended up as a no-op. So the question is: how can I make this query more efficient? Thanks!
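One sketch of the mvfilter idea: since mvzip puts the stage name first in each zipped value, an anchored match can drop the uninteresting stages before mvexpand ever runs (mvfilter's expression may reference only the one field being filtered, which is likely why earlier attempts ended up as no-ops):

```
data_source
| where status IN ("some-status")
| eval stage_fields=mvzip('stages{}.name', 'stages{}.duration')
| eval stage_fields=mvfilter(match(stage_fields, "^(my_name|other_name),"))
| mvexpand stage_fields
| rex field=stage_fields "(?<stage_name>.+),(?<stage_duration>.+)"
| timechart span=1h median(stage_duration) AS "Median Stage Duration" BY stage_name
```

Filtering on status before the eval also keeps non-matching documents from being expanded at all.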
Hi everyone, I have a requirement like this. This is my search query:

index=xyz sourcetype=yui source="user.log" process (Type ="*") (Name_Id ="*") (Request_URL ="*")
| convert timeformat="%Y-%m-%d" ctime(_time) AS Date
| rex field=Request_URL "(?<id>[A_Za-z0-9]{8}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{12})"
| fillnull value="" id
| eval ClickHere= "https://cvb/api/?processGroupId=".id
| stats count by Date Name_Id Type Request_URL id ClickHere

So I am getting data for Date, Name_Id, Type, Request_URL, id, and ClickHere, where the ClickHere column is a hyperlink.

My dashboard XML:

<dashboard theme="dark">
  <label>Process</label>
  <row>
    <panel>
      <table>
        <search>
          <query>index=xyz sourcetype=yui source="user.log" process (Type ="*") (Name_Id ="*") (Request_URL ="*")| convert timeformat="%Y-%m-%d" ctime(_time) AS Date| rex field=Request_URL "(?<id>[A_Za-z0-9]{8}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{12})"|fillnull value="" id| eval ClickHere= "https://cvb/api/?processGroupId=".id|stats count by Date Name_Id Type Request_URL id ClickHere</query>
          <earliest>-1d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <drilldown>
          <condition field="ClickHere">
            <link target="_blank">$row.ClickHere|n$</link>
          </condition>
        </drilldown>
      </table>
    </panel>
  </row>
</dashboard>

Sample Request_URLs (there are multiple URLs):

https://cgy/api/flow/groups/ef451556-016d-1000-0000-00005025535d
https://jkl/api/groups/1b6877ea-0174-1000-0000-00003d8351cd/variable-registry

Sample ClickHere column hyperlinks:

https://abc/api/?processGroupId=ef451556-016d-1000-0000-00005025535d
https://abc/api/?processGroupId=1b6877ea-0174-1000-0000-00003d8351cd

I want the following: when I click on Request_URL https://cgy/api/flow/groups/ef451556-016d-1000-0000-00005025535d, it should open the corresponding ClickHere hyperlink (https://abc/api/?processGroupId=ef451556-016d-1000-0000-00005025535d). When I click on Request_URL https://jkl/api/groups/1b6877ea-0174-1000-0000-00003d8351cd/variable-registry, it should open https://abc/api/?processGroupId=1b6877ea-0174-1000-0000-00003d8351cd.

In short, I want to remove the ClickHere column, and when I click on Request_URL it should take me to the link that the ClickHere column pointed to. Can someone guide me on how to do this in Splunk? Thanks in advance.
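A sketch of the drilldown change: switch the condition to fire on Request_URL while still linking to $row.ClickHere$, and hide the ClickHere column with the table's <fields> element instead of removing it from the search, so the row token stays available. The exact <fields> list syntax may vary by Splunk version, so treat this as an outline:

```
<fields>Date, Name_Id, Type, Request_URL, id</fields>
<drilldown>
  <condition field="Request_URL">
    <link target="_blank">$row.ClickHere|n$</link>
  </condition>
</drilldown>
```

$row.*$ tokens are populated from the full result row, so a field excluded from display via <fields> can still drive the link.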
Need help with this integration. @richgalloway @woodcock 
Greetings, is there a way to tell Splunk to look for deployment apps in a directory other than $SPLUNK_HOME/etc/deployment-apps/? If so, where would this setting be found?

Why would I want to do something like this? Because I'm trying to use git-sync to pull my apps from a git repo, but they are mounted at deployment-apps/deploymentapps.git/, and I don't yet know how to change it so the actual apps go to the parent directory.

Thank you for your help!
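If this is a deployment server, the location is configurable in serverclass.conf via the repositoryLocation setting (default $SPLUNK_HOME/etc/deployment-apps). A sketch, pointing it at the git-sync mount from your description:

```
# serverclass.conf on the deployment server
[global]
repositoryLocation = $SPLUNK_HOME/etc/deployment-apps/deploymentapps.git
```

It can also be set per server class rather than globally; a restart (or reload deploy-server) is needed for the change to take effect.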
Anyone setup a custom action or action script to automatically re-start an IIS App Pool on a CLR Crash event? Severity: WARN Policy: CLR Health Event Type: CLR_CRASH
Hello, I have the below logs in the last 60 minutes:

log1: ABC=1,DEF=2,GHI=3
log2: ABC=0,DEF=0,GHI=3

While executing my query for the last 60 minutes, I am getting the below result:

ABC=1,DEF=2,GHI=3
ABC=0,DEF=0,GHI=0

But I want only the latest log result, like below:

ABC=1,DEF=2,GHI=3
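Since Splunk returns events newest first, keeping only the latest matching event is a one-liner. A sketch with placeholder index/sourcetype names, assuming the ABC/DEF/GHI fields are auto-extracted:

```
index=your_index sourcetype=your_sourcetype ABC=*
| head 1
| table ABC, DEF, GHI
```

An alternative that survives reordering is | stats latest(ABC) AS ABC, latest(DEF) AS DEF, latest(GHI) AS GHI.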
Hello, my first post!!! I have a bunch of results that show up when searched. One example is:

Aug 5 19:08:12 ServerName Aug 5, 2020 19:08:12 GMT|50000|APP|UNKNOWN|XXX.XXX.XXX.XXX|XXX.XXX.XXX.XXX|XXX.XXX.XXX.XXX|443|XXXXX|-|/someprocess.php|-|A message posted successfully|500
Aug 5 19:08:10 ServerName Aug 5, 2020 19:08:10 GMT|50000|APP|UNKNOWN|XXX.XXX.XXX.XXX|XXX.XXX.XXX.XXX|XXX.XXX.XXX.XXX|443|XXXXX|-|/newprocess.php|-|A message posted successfully|200

I want to do a stats count by the .php processes. So, how do I add these or eval/stats these .php processes/scripts?
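A sketch that extracts the script path from the pipe-delimited message and counts by it; the regex is written against the two sample lines above, and the index/sourcetype are placeholders:

```
index=your_index sourcetype=your_sourcetype
| rex "\|(?<uri_path>/[^|]+\.php)\|"
| stats count BY uri_path
```

The pattern captures a pipe-delimited field that starts with / and ends in .php, so both /someprocess.php and /newprocess.php would be counted.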
We have four indexers in a cluster, single site, with RF=3 and SF=2. We have a maintenance window that will require powering down two indexers (EC2 instances), and the maintenance will last about two hours. What is the proper way or sequence for powering down those two indexer servers? Should I run splunk offline on one indexer first, power it down, wait for a while, and then proceed to the other indexer? Or should I run splunk offline on both servers and power them down simultaneously?
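A commonly used sequence for a planned window like this (a sketch; verify against the clustering docs for your Splunk version): enable maintenance mode on the cluster master first, so it doesn't kick off two hours of bucket fix-up while the peers are away, then offline the peers one at a time:

```
# on the cluster master
splunk enable maintenance-mode

# on indexer A; wait for it to shut down cleanly, then repeat on indexer B
splunk offline

# after maintenance, power the peers back on, then on the cluster master
splunk disable maintenance-mode
```

splunk offline shuts the peer down gracefully so in-flight searches and hot buckets are handled; doing the peers sequentially keeps more copies searchable during the transition.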
Hello Splunkers, I have an IIS log that I am testing against, and I need to test for a specified range. The _time field in the log is formatted like this:

2020-08-23T21:25:33.437-0400

I want to query everything between 21:25:33 and 21:25:43:

2020-08-23T21:25:33.437-0400
2020-08-23T21:25:34.133-0400
2020-08-23T21:25:35.267-0400
2020-08-23T21.25:36:42.683-0400
2020-08-23T21:25:37.270-0400
2020-08-23T21:25:38.013-0400
2020-08-23T21:25:39.320-0400
2020-08-23T21:25:40.753-0400
2020-08-23T21:25:41.597-0400
2020-08-23T21:25:42.013-0400
2020-08-23T21:25:43.353-0400

So my search would look something like this; what is the best way to do this?

| where _time < blah _time >= blah
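Two sketches, assuming the timestamps were parsed into _time at index time (index/sourcetype names are placeholders). The usual way is to put the bounds in the search itself with earliest/latest, which uses the %m/%d/%Y:%H:%M:%S format:

```
index=your_index sourcetype=iis earliest="08/23/2020:21:25:33" latest="08/23/2020:21:25:44"
```

Or, to filter after the fact with where, convert the bounds with strptime:

```
index=your_index sourcetype=iis
| where _time >= strptime("2020-08-23T21:25:33-0400", "%Y-%m-%dT%H:%M:%S%z")
    AND _time <= strptime("2020-08-23T21:25:43.999-0400", "%Y-%m-%dT%H:%M:%S.%3N%z")
```

The earliest/latest form is cheaper because it restricts the events read, rather than discarding them afterwards.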
The following search works well enough, but I would like the color of the "bubbles" to be based on sc_status="200" versus sc_status!="200". I still want to show a bubble for all of the cs_uri_stem values. In theory, if every cs_uri_stem has at least one event with status 200 and at least one event with something else, this could double the number of rows in the output table.

...base search...
| stats avg(eval(time_taken)) AS avg_tt, avg(eval(sc_bytes)) AS avg_bytes, count(eval(source)) AS NumTransactions BY cs_uri_stem
| table cs_uri_stem, avg_tt, avg_bytes, NumTransactions
| rename avg_bytes AS "Average Bytes Returned", avg_tt AS "Average Time in Milliseconds", NumTransactions AS "# of Transactions"

Can this be accomplished in the dashboard's XML? Can this also be accomplished with an eval statement in the search itself?
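A search-side sketch: compute the flag with eval before stats and add it to the BY clause. For a bubble chart, the first column in the final table is the series name and drives the bubble color, so putting the flag first colors by status (at the cost of the per-URI series labels; keep cs_uri_stem first instead if labels matter more than color):

```
...base search...
| eval FlagSC=if(sc_status=="200", "OK", "Other")
| stats avg(time_taken) AS avg_tt, avg(sc_bytes) AS avg_bytes, count AS NumTransactions BY cs_uri_stem, FlagSC
| table FlagSC, avg_tt, avg_bytes, NumTransactions
```

Each row becomes one bubble, and rows sharing a FlagSC value share a color.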
I would like to create a new field, FlagSC, based on the value of sc_status. The new field should have a value of "OK" when the status is 200, or a value of "Other" for all other statuses. I intend to use this in a bubble chart with colors based on FlagSC. In theory, if every cs_uri_stem has at least one event with status 200 and at least one event with something else, this could double the number of rows in the output table.

I have tried variations of the code below:

...base search...
| stats values(eval(if(sc_status==200,"OK","Other"))) AS FlagSC, avg(eval(time_taken)) AS avg_tt, avg(eval(sc_bytes)) AS avg_bytes, count(eval(source)) AS NumTransactions BY cs_uri_stem
| table FlagSC, avg_tt, avg_bytes, NumTransactions
| rename avg_bytes AS "Average Bytes Returned", avg_tt AS "Average Time in Milliseconds", NumTransactions AS "# of Transactions"

Ultimately, the goal is to have something that resembles the following and does NOT include any rows where FlagSC is "OKOther":

cs_uri_stem   FlagSC   avg_tt   avg_bytes   NumTransactions
foo/          OK       ...      ...         ...
foo/          Other    ...      ...         ...
bar/          OK       ...      ...         ...
bar/          Other    ...      ...         ...
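A sketch of the eval-before-stats approach: computing the flag with values(eval(...)) inside stats collapses both statuses into one multivalue per cs_uri_stem, which is where "OKOther" comes from. Computing FlagSC first and adding it to the BY clause instead yields one row per cs_uri_stem/status pair, matching the table above:

```
...base search...
| eval FlagSC=if(sc_status==200, "OK", "Other")
| stats avg(time_taken) AS avg_tt, avg(sc_bytes) AS avg_bytes, count AS NumTransactions BY cs_uri_stem, FlagSC
| rename avg_bytes AS "Average Bytes Returned", avg_tt AS "Average Time in Milliseconds", NumTransactions AS "# of Transactions"
```

URIs that only ever return one kind of status simply produce a single row, so no "OKOther" rows can appear.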