All Topics

I would like to create a dashboard that shows, for each of my services, the percentage of requests meeting a certain performance requirement. Each request access log entry has the fields serviceName, txTime, and so on, and I would like to generate a table showing the percentage of requests meeting my SLA requirement of, say, 1000 ms. The desired output would look something like:

ServiceName    Percentage
Service1       98.9%
Service2       99%

Thank you.
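A minimal SPL sketch of one way to get there, assuming the access logs live in an index called web_access and that txTime is in milliseconds (both are assumptions):

index=web_access serviceName=* txTime=*
| eval within_sla = if(txTime <= 1000, 1, 0)
| stats count AS total, sum(within_sla) AS within BY serviceName
| eval Percentage = round(within / total * 100, 1) . "%"
| rename serviceName AS ServiceName
| table ServiceName Percentage

Using stats avg(within_sla) would give the same ratio in one step; the count/sum form just keeps the raw numbers around in case you also want them as columns.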
Hi guys, my question is: can priority (the regular P1/P2/P3 column) and the job alias from the pw_map lookup be added to this alert as additional columns? I've recently started seeing some ingest issues with a few queues, and these columns would help with escalation and with determining downstream impacts.
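In general, yes. Assuming pw_map has a key column that matches a field already present in the alert's results (queue is used here as a stand-in, and priority and job_alias are guessed column names), appending a lookup at the end of the alert's search is usually enough:

<existing alert search>
| lookup pw_map queue OUTPUT priority job_alias
| table _time queue priority job_alias

The added fields then show up as extra columns in the alert results and in any email or ticket action that includes the result table.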
Hi, we are looking to add a custom field to the alerts we send to BigPanda. Is there a way to add fields natively, or a workaround that other Splunk users have implemented? Thanks, Kay
Hello Splunk Community, I have two search heads. One search head is able to send out email alerts and the other one can't. I am using Amazon SES as the mail host, and each search head has its own unique access key and secret key configured. I can't figure out why one of the search heads cannot send email while the other can. I ran the sendemail command on the server with the issue and this is the error message I am getting:

command="sendemail", (535, b'Authentication Credentials Invalid') while sending mail to: <myEmailAddress>

Thoughts?
Hi folks, I just started using Splunk recently and I'm stuck on an alert that I want to create. I've been asked to add priority (the P1/P2/P3 column) and the job alias from the pw_job_mopping lookup to this existing alert as additional columns. Any help would be appreciated.
As we work on the migration to the cloud, we have the following case: we are sending syslog data to a heavy forwarder in the cloud (and to the on-prem indexers), on its port 9997. When the data reaches this HF, we would like to fork just the firewall data to a subset of the indexers. Is it possible to set up such routing? We would like to have something like:

[<transforms_stanza_name>]
SOURCE=index
REGEX=^firewall
DEST_KEY=_TCP_ROUTING
FORMAT=<subset of cloud indexers>
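A hedged sketch of what that could look like on the HF, under the assumption that the firewall events are identified by their target index; the group, class, and host names below are placeholders. The main differences from the stanza above are that transforms match on SOURCE_KEY (not SOURCE), and that the index is exposed to transforms as the internal key _MetaData:Index:

# props.conf (on the HF)
[default]
TRANSFORMS-route_fw = route_firewall_subset

# transforms.conf
[route_firewall_subset]
SOURCE_KEY = _MetaData:Index
REGEX = ^firewall
DEST_KEY = _TCP_ROUTING
FORMAT = fw_indexers

# outputs.conf
[tcpout:fw_indexers]
server = cloud-idx1.example.com:9997, cloud-idx2.example.com:9997

Two caveats: the non-firewall traffic still needs a defaultGroup in [tcpout] so it keeps flowing to the rest of the indexers, and this kind of routing only applies to data that is actually parsed on this HF (already-cooked data from another heavy forwarder passes through untouched).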
@aplura_llc_supp  Hello, please see the error below. I am running the getwatchlist sample query from the About page:

|getwatchlist http://www.spamhaus.org/drop/drop.lasso delimiter=; relevantFieldName=’sourceRange’ relevantFieldCol=1 referenceCol=2 ignoreFirstLine=True comment=;

Errors:

'update_default_profile' referenced before assignment
fix_values() missing 1 required positional argument: 'result'

08-02-2022 18:52:32.466 ERROR ScriptRunner [28863 phase_1] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/getwatchlist/bin/getwatchlist.py __EXECUTE__ http://www.spamhaus.org/drop/drop.lasso delimiter=; relevantFieldName=’sourceRange’ relevantFieldCol=1 referenceCol=2 ignoreFirstLine=True comment=;': error_message="local variable 'update_default_profile' referenced before assignment" error_type="<class 'UnboundLocalError'>" error_arguments="local variable 'update_default_profile' referenced before assignment" error_filename="getwatchlist.py" error_line_number="76"

08-02-2022 18:52:32.466 ERROR ScriptRunner [28863 phase_1] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/getwatchlist/bin/getwatchlist.py __EXECUTE__ http://www.spamhaus.org/drop/drop.lasso delimiter=; relevantFieldName=’sourceRange’ relevantFieldCol=1 referenceCol=2 ignoreFirstLine=True comment=;': error_message="fix_values() missing 1 required positional argument: 'result'" error_type="<class 'TypeError'>" error_arguments="fix_values() missing 1 required positional argument: 'result'" error_filename="getwatchlist.py" error_line_number="398"
Hello, I would like to export the page Settings -> AppDynamics Agents. I looked in the API documentation but I can't find anything. My goal is to get a list of all servers (not per application) as well as the agent version installed on each. I know there is an API call to get the list per application; what I am looking for is an export of the whole Settings -> AppDynamics Agents page. Thanks, Nabil
Hi community, I am stuck on a problem where I have to calculate a percentage and a percent difference.

I have 3 columns, for example:

Name | Errorcode | Result
abc  | 324       | 5
abc  | 999       | 1
abc  | Total     | 6

I want the output to look like this:

Name | Errorcode | Result | Percent of Total | Percent Difference (week over week)
abc  | 324       | 5      | 83.33            | 25
abc  | 999       | 1      | 16.67            | 100
abc  | Total     | 6      | 100              | 100

Percent Difference (week over week) should look at the errors for that Name from the prior week and work out the percent difference to this week. For example, if there were 3 errorcode 1027 errors last week and 6 errorcode 1027 errors this week, the percent difference would be 100%.
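A rough SPL sketch, assuming the raw events carry the Name and Errorcode fields, that the index name below is a placeholder, and that "this week" and "last week" mean the two most recent complete weeks:

index=app_errors earliest=-2w@w latest=@w
| eval week = if(_time >= relative_time(now(), "-1w@w"), "this_week", "last_week")
| stats count(eval(week="this_week")) AS Result, count(eval(week="last_week")) AS last_week BY Name, Errorcode
| eventstats sum(Result) AS total BY Name
| eval "Percent of Total" = round(Result / total * 100, 2)
| eval "Percent Difference" = round((Result - last_week) / last_week * 100, 2)
| table Name Errorcode Result "Percent of Total" "Percent Difference"

The Errorcode="Total" row can then be appended with something like appendpipe [ stats sum(Result) AS Result BY Name | eval Errorcode="Total", "Percent of Total"=100 ].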
Hi, I have two search queries whose results are tables like the following:

| search query1 | table type1 platform1 target1

type1 | platform1 | target1
X     | WIN       | path/cpp
X     | None      | path/c
X     | LINUX     | path/py

| search query2 | table type2 platform2 target2

type2 | platform2 | target2
Z     | WIN       | path/cpp
Z     | LINUX     | path/cpp

(Targets are unique based on their full path.)

How can I compare the two tables, with a left join where the first query's table is the lead, such that:
-> a target from the first query with platform = WIN counts as a match only if it exists in the second table with platform = WIN
-> a target from the first query with platform = LINUX counts as a match only if it exists in the second table with platform = LINUX
-> a target from the first query with platform = None counts as a match only if it exists in the second table with both platform = WIN and platform = LINUX; otherwise it is not a match

Then I want to list the results in a table with the total matching targets, total missing targets, total targets for type X, and total targets for type Z. How can I achieve this? Thanks
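One hedged way to sketch this in SPL is to keep query1 as the lead and collapse query2 into a per-target list of platforms before a left join (field names are taken from the tables above; treat this as a starting point rather than a finished search):

search query1
| table type1 platform1 target1
| rename target1 AS target
| join type=left target
    [ search query2
      | rename target2 AS target
      | stats values(platform2) AS platforms_b BY target ]
| eval match = case(
    platform1=="WIN" AND isnotnull(mvfind(platforms_b, "^WIN$")), 1,
    platform1=="LINUX" AND isnotnull(mvfind(platforms_b, "^LINUX$")), 1,
    platform1=="None" AND isnotnull(mvfind(platforms_b, "^WIN$")) AND isnotnull(mvfind(platforms_b, "^LINUX$")), 1,
    true(), 0)
| stats sum(match) AS matching, count(eval(match=0)) AS missing, count AS total_targets BY type1

The totals for type Z would come from a separate stats over query2 on its own, and join has subsearch size limits, so for large result sets an append + stats values() pattern is the safer variant of the same idea.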
Hi All, I'm working on a use case called "explicit logins" that collects EventID 4648. I'm wondering whether this event ID tracks clear-text passwords or direct logins? I can't tell from the event information, as it doesn't say anything related to logon type. Could anyone shed some light on this? Thanks in advance.
Hi All, we are checking whether there is any way to monitor/find out which account is used for SQL startup on our servers.
I have installed the Splunk universal forwarder on a Linux box and want to forward a log file. This version (9) will not forward to my Splunk indexer; version 8.2.4 does forward to my indexer. Does TLS have to be configured for version 9 to work? How can I disable the TLS requirement, or what do I have to do to get this to work? Thanks, eholz1
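As far as I know, a plain (non-TLS) tcpout is still the default in 9.x, so it may be worth comparing the working 8.x outputs.conf with the 9.x one. A minimal sketch (the indexer host name is a placeholder):

# $SPLUNK_HOME/etc/system/local/outputs.conf on the forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = my-indexer.example.com:9997

Whatever the cause, the forwarder logs it in $SPLUNK_HOME/var/log/splunk/splunkd.log (look for TcpOutputProc entries), which usually narrows it down to a TLS mismatch, a blocked port, or a config typo.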
Hello, _metrics has been written on our clustered indexers since the latest Splunk versions, however it is not shown on our DMC Index Detail page or on the manager node's Indexer Clustering page. I think repFactor=0 by default, so does that mean it is not replicated? Is it really used by Splunk? By the DMC? Internal usage only? Should we declare it on our clustered indexers like any other user index? Thanks for your help. Splunk Enterprise 8.2.2
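With repFactor at its default of 0 the index exists on each peer but its buckets are not replicated, which is likely why it does not appear in the clustering views. If you decide to treat it like your other indexes, the usual pattern for internal indexes is to push an override from the manager node; a minimal sketch (the app location is just the conventional one):

# indexes.conf pushed from the manager node (e.g. master-apps/_cluster/local on 8.2)
[_metrics]
repFactor = auto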
Hello, I want to perform a set-difference operation. I have a first search (A), and I want to remove from it the elements (in this case, values of a field called id) that also appear in a second search (B). What is the cleanest way of implementing such a search?
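A common pattern for A-minus-B is to exclude A's events whose id appears in B using NOT with a subsearch (index and sourcetype names below are placeholders):

index=main sourcetype=search_a
    NOT [ search index=main sourcetype=search_b | dedup id | fields id ]

Because subsearches have result-count and runtime limits, a large B is better handled by appending both searches, tagging which side each event came from, and keeping only the ids seen exclusively on the A side (stats values(side) BY id followed by a where).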
I'm very new to Splunk. What I'm trying to do is find the next log entry after the entry I search for. For example, I have this log entry from a search:

search: index=dhcp DHCPREQUEST

result:

8/1/22 10:00:00.000 AM   Aug 1 10:00:00 b826c80c7n dhcpd[23809]: DHCPREQUEST for 10.23.1.131 from 00:50:56:9e:82:3e via eth0
host = nss.wright.edu monitor_fast
index = dhcp
linecount = 1
source = /var/log/clients/130.108.128.199/dhcpd
sourcetype = isc:dhcp

What I'm trying to find is the next log entry after this one. Any suggestions would help.
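One hedged way to sketch this is to take the matching event and run a follow-up search starting at its timestamp with map ($host$, $source$ and $_time$ are substituted from that first result; if several events share the exact same timestamp this can be off by one):

index=dhcp DHCPREQUEST "10.23.1.131"
| head 1
| map maxsearches=1 search="search index=dhcp host=$host$ source=$source$ earliest=$_time$ | reverse | head 2 | tail 1"

The reverse | head 2 | tail 1 combination skips the original event (earliest is inclusive) and keeps the one immediately after it; the simpler manual route is to re-run the search on that host/source with the time picker starting at the event's timestamp and the results sorted oldest-first.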
Good day. I am trying to create maps for various regions, as lookups. The original format of the map is GeoJSON; however, when converting to KML using an online converter, the polygon is being simplified into straight lines (creating a block). Is there a way to create a map directly from the GeoJSON file, instead of converting to KML? Kind regards
Hello everyone. I have a Dropdown token being used as the <valuePrefix> in a Multiselect input. The Multiselect seems to only set the <valuePrefix> tag's value during dashboard initialization. I can get a change in the dropdown to take effect on the Multiselect when refreshing the page in the web browser, but not when changing the Dropdown value or when refreshing the panel. The same behaviour applies to the Checkbox input.

Here is my code for the panel in question:

<panel depends="$Global_Tok$">
  <title>CPU</title>
  <input type="multiselect" token="CPU_Field_Global_Multi">
    <label>Fields</label>
    <choice value="(pctUser) AS User">User</choice>
    <choice value="(pctIdle) AS Idle">Idle</choice>
    <choice value="(Load) AS &quot;Load (-Idle)&quot;">Load</choice>
    <choice value="(pctIowait) AS IO_Wait">IO_Wait</choice>
    <choice value="(pctNice) AS Nice">Nice</choice>
    <choice value="(pctSystem) AS System">System</choice>
    <default>"(Load) AS ""Load (-Idle)""",(pctUser) AS User,(pctIdle) AS Idle,(pctIowait) AS IO_Wait,(pctNice) AS Nice,(pctSystem) AS System</default>
    <initialValue>(Load) AS "Load (-Idle)",(pctUser) AS User,(pctIdle) AS Idle,(pctIowait) AS IO_Wait,(pctNice) AS Nice,(pctSystem) AS System</initialValue>
    <valuePrefix>$Global_Function$</valuePrefix>
    <delimiter> </delimiter>
  </input>
  <input type="checkbox" token="CPU_Fields_Cbx">
    <label>Fields</label>
    <choice value="(pctUser) AS User">User</choice>
    <choice value="(pctIdle) AS Idle">Idle</choice>
    <choice value="(Load) AS &quot;Load (-Idle)&quot;">Load</choice>
    <choice value="(pctIowait) AS IO_Wait">IO_Wait</choice>
    <choice value="(pctNice) AS Nice">Nice</choice>
    <choice value="(pctSystem) AS System">System</choice>
    <valuePrefix>$Global_Function$</valuePrefix>
  </input>
  <chart>
    <search>
      <query>host=$HOST_SELECTION$ source=cpu | multikv | search CPU=$CPU_Core_Global_Tok$ | eval Load=(pctIdle-100)*-1 | timechart span=$Global_Span$ $CPU_Fields_Cbx$</query>
      <earliest>$Global_Time.earliest$</earliest>
      <latest>$Global_Time.latest$</latest>
      <refresh>1m</refresh>
      <refreshType>delay</refreshType>
    </search>
    <option name="charting.axisTitleX.text">Time</option>
    <option name="charting.axisTitleY.text">CPU Util (%)</option>
    <option name="charting.axisY.maximumNumber">100</option>
    <option name="charting.chart">line</option>
    <option name="charting.drilldown">none</option>
    <option name="refresh.display">progressbar</option>
  </chart>
</panel>

In this example the panel only uses the Checkbox input token for the search, but the same issue applies when I use the Multiselect as well. The token I am updating with the Dropdown input is "$Global_Function$". The code for the Global Function dropdown input is as follows:

<input type="dropdown" token="Global_Function" depends="$Global_Tok$">
  <label>Function</label>
  <choice value="avg">Average</choice>
  <choice value="max">Maximum</choice>
  <default>avg</default>
  <initialValue>avg</initialValue>
</input>

I have not been able to get the Multiselect or Checkbox inputs to change their initial <valuePrefix> value when the Function Dropdown changes. If anyone has suggestions on how to accomplish this, I am all ears.
Hello, I have 16 AWS rules and would like to make a dashboard/report of how frequently they fire per week/month/year. Is this possible in an efficient manner? Thank you
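If those 16 rules are Splunk alerts or correlation searches, one commonly used starting point is the scheduler log, which records each run and its alert actions; a sketch, assuming the rule names contain "AWS" (adjust the filter to match your naming):

index=_internal sourcetype=scheduler status=success alert_actions=* savedsearch_name="*AWS*"
| timechart span=1w count BY savedsearch_name

Change span=1w to 1mon (or aggregate with stats over a yearly time range) for the monthly and yearly views.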