All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a dropdown in my dashboard that looks up a CSV file containing client information. One of the columns, named domain, has values of the form "<number>.<client_name>.<network>", e.g. s122.clientA.cmbs.com. My dropdown is configured as below:

<input type="dropdown" token="token" searchWhenChanged="true">
  <label>Client</label>
  <choice value="*">All</choice>
  <initialValue>*</initialValue>
  <fieldForLabel>display</fieldForLabel>
  <fieldForValue>domain</fieldForValue>
  <search>
    <query>| inputlookup domains.csv</query>
    <earliest>0</earliest>
    <latest></latest>
  </search>
  <default>*</default>
</input>

Then in my search I am appending "... AND client=$domain$", but in this case the value will be client=s122.clientA.cmbs.com. How do I extract only the client name from the dropdown selection and use that in the search instead? E.g. client=*clientA* should be appended to the search after extraction, instead of the whole string (client=*s122.clientA.cmbs.com*).
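One common approach, sketched below with hedges: Simple XML inputs support a <change> block with an <eval token>, so a second token can be derived from the selected value. Note the post's input declares token="token" while the search references $domain$; this sketch assumes the token is meant to be "domain", and the derived token name client_tok is made up for illustration.

```xml
<input type="dropdown" token="domain" searchWhenChanged="true">
  <label>Client</label>
  <choice value="*">All</choice>
  <initialValue>*</initialValue>
  <fieldForLabel>display</fieldForLabel>
  <fieldForValue>domain</fieldForValue>
  <search>
    <query>| inputlookup domains.csv</query>
    <earliest>0</earliest>
    <latest></latest>
  </search>
  <default>*</default>
  <change>
    <!-- derive the middle segment of number.client.network;
         keep "*" when the "All" choice is selected -->
    <eval token="client_tok">if("$value$"=="*", "*", mvindex(split("$value$", "."), 1))</eval>
  </change>
</input>
```

The search would then append "... AND client=*$client_tok$*" instead of $domain$. Worth verifying the quoting of $value$ inside <eval> against your Splunk version's Simple XML behavior.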
I have a bunch of web servers that are currently streaming their logs (in real time) into an S3 bucket. I have the Splunk AWS add-on installed to collect those logs, but I have found that it duplicates the data, and given the amount of data coming in, this would kill our license very quickly. The owners of these servers will not allow us to install the UF on them, and setting up a HF is also a no-go. Is there another way to ingest those logs without duplication? I have seen there might be a way with a custom props.conf file, but I am not sure that is the way. Any ideas would be welcome. Thanks
How to calculate how much data each search head cluster is searching
I have a statistics table that returns values based on timechart span=1h count by status. There are two statuses.  I would like to color the first status if it is greater than the second status. Thanks for the help. David
Hi, I need to filter out some events from a syslog source. The events are like this:

Apr 28 14:15:09 10.130.4.203 Apr 28 14:15:09 hostname: User ****  : Sign Off, ID: **, InstID: 4731, IPAddress: *****, FolderID: 0, Username: ******, AgentBrand: -, AgentVersion: -, XFerSize: 0, Error: 0
Apr 28 14:15:09 10.130.4.203 Apr 28 14:15:09 hostname: User ****  : Upload, ID: **, InstID: 4731, IPAddress: *****, FolderID: 1234, Username: ******, AgentBrand: -, AgentVersion: -, XFerSize: 0, Error: 0
Apr 28 14:15:09 10.130.4.203 Apr 28 14:15:09 hostname: User ****  : Sign Off, ID: **, InstID: 2819, IPAddress: *****, FolderID: 0, Username: ******, AgentBrand: -, AgentVersion: -, XFerSize: 0, Error: 0

I have two different InstID values (4731 and 2819) and many FolderID values. I need to keep all the events with InstID: 2819, plus the events with InstID: 4731 and FolderID: 0. So my goal is to discard, via props.conf and transforms.conf, all the events that have InstID: 4731 and a FolderID different from 0. Any help? Thanks in advance
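A common pattern for this, sketched with hedges (the sourcetype and transform names below are placeholders, not from the post): route the unwanted events to nullQueue with a props/transforms pair on the indexer or heavy forwarder. Splunk transforms use PCRE, so a negative lookahead can express "FolderID other than 0":

```ini
# props.conf (sourcetype name assumed)
[your_syslog_sourcetype]
TRANSFORMS-drop4731 = drop_inst4731_folder_nonzero

# transforms.conf
[drop_inst4731_folder_nonzero]
REGEX = InstID:\s*4731,.*FolderID:\s*(?!0,)\d+
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching REGEX are discarded before indexing; everything else (all InstID: 2819 events, and InstID: 4731 with FolderID: 0) is kept. The regex assumes FolderID is always followed by a comma as in the samples, so it is worth testing against real events first.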
Hello Splunk Community, I'm setting up a new dashboard view for one of our apps, and I'm having issues with the drilldown option. I cloned another view that works perfectly, which is for similar logs from our firewall (Forward Traffic). When you click on a specific item in the initial output, it is supposed to drill down and give you the raw syslog from that specific event. However, it's not working for this new view. Below is an example from the XML:

<title>Failed Logins</title>
<search>
  <query>index=netfw logdesc="SSL VPN login fail" | table _raw _time devname user remip msg reason</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
  <sampleRatio>1</sampleRatio>
</search>
<option name="count">10</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">row</option>
<option name="percentagesRow">false</option>
<option name="rowNumbers">true</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
<fields>["_time","devname","user","remip","msg","reason"]</fields>
<drilldown>
  <set token="rawlog">$row._raw$</set>
</drilldown>
</table>
</panel>
</row>
<row>
<panel>
<table>
<title>RAW View</title>
<search>
  <query>index=netfw "$rawlog$"</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
</search>
<option name="drilldown">none</option>
</table>

While the drilldown does not "work", when I check the results by clicking on the magnifying glass, it shows the proper output (index=netfw "raw log output from the clicked event"). However, Search & Reporting shows 0 events. What's odd is that if I copy and paste the raw log output inside quotes into Search & Reporting, it also shows 0 events. But if I take a raw log from the other view that works and paste it into Search & Reporting inside quotes, it pulls up that single event fine. Another oddity: the problem logs that I can't cut and paste into search have the word "in" at the tail end of the log string, inside the quotes.
That "in" (for "SSL user failed to logged in") is orange in the string. Any help is much appreciated. I think that "in" could be the cause, but I'm new to Splunk, SPL, etc. Below is the code from the working view; it's slightly different.

<row>
<panel>
<table>
<title>Forward Traffic Logs</title>
<search>
  <query>index=netfw | table _raw _time devname srcip dstip action service policyid</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
  <sampleRatio>1</sampleRatio>
</search>
<option name="count">10</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">row</option>
<option name="percentagesRow">false</option>
<option name="rowNumbers">true</option>
<option name="totalsRow">false</option>
<option name="wrap">true</option>
<fields>["_time","devname","srcip","dstip","action","service","policyid"]</fields>
<drilldown>
  <set token="clientTok">$row._raw$</set>
  <set token="forms.clientTok">$row._raw$</set>
  <set token="resultrow">$row.srcip$</set>
  <set token="forms.resultrow">$row.srcip$</set>
</drilldown>
</table>
</panel>
</row>
<row>
<panel>
<event>
<title>Complete Log Details</title>
<search>
  <query>index=netfw "$clientTok$"</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
</search>
<option name="list.drilldown">none</option>
</event>
</panel>
</row>
<row>
<panel>
<table>
<title>RAW View</title>
<search>
  <query>index=netfw "$clientTok$"</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
</search>
<option name="drilldown">none</option>
</table>
</panel>
</row>
Hi Team, I'm trying to get the user location based on the IP address in Splunk, but the iplocation command is failing to retrieve the city for a few of the records. Below is the query I'm using. For some records Splunk does not pull up City/Region. Can someone please help? Thanks.

index=vpn host="*sin-bon-vpn*" Cisco_ASA_message_id=722051 OR Cisco_ASA_message_id=113019 NOT "AnyConnect-Parent"
| transaction user endswith="Duration:" keepevicted=true
| mvexpand src
| rename host as vpn
| iplocation src
| table lat lon user vpn City Region
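Worth noting: iplocation looks addresses up in the GeoIP database bundled with Splunk, which has no entries for private (RFC 1918) addresses and may know some public IPs only at country level, so empty City/Region for some records is expected behavior rather than a query error. A hedged variant of the search above that labels such rows (parentheses added around the OR clause, which is usually the intent; the ip_scope field name is made up for illustration):

```spl
index=vpn host="*sin-bon-vpn*" (Cisco_ASA_message_id=722051 OR Cisco_ASA_message_id=113019) NOT "AnyConnect-Parent"
| transaction user endswith="Duration:" keepevicted=true
| mvexpand src
| rename host as vpn
| iplocation src
| eval ip_scope=if(match(src, "^(10\.|172\.(1[6-9]|2\d|3[01])\.|192\.168\.)"), "private", "public")
| eval City=coalesce(City, "unknown"), Region=coalesce(Region, "unknown")
| table lat lon user vpn City Region ip_scope
```

Rows marked "private" will never resolve to a city; for public IPs with no city, updating the GeoIP database may help.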
According to the documentation (https://docs.splunk.com/Documentation/AddOns/released/CiscoASA/Distributeddeployment), under "Distributed deployment feature compatibility", the package should contain eventgen.conf and samples. I have just downloaded the package and untarred the .tgz file. It does not contain eventgen.conf or samples. Does anyone know how to generate data for this app on a UF?
<earliest>$field1.earliest$</earliest>
<latest>$field1.latest$</latest>

Suppose the above code is fetching results for the last 60 minutes (12 am to 1 am) today (28/4/21). Is there any way to subtract the time so as to get results for the same window (12 am to 1 am) yesterday (27/4/21)?
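For this particular window (a whole clock hour starting at midnight), snapped relative-time modifiers can express "the same hour yesterday" directly. A sketch, assuming the 12 am to 1 am reading of the question:

```xml
<!-- 12 am to 1 am yesterday: snap to the start of yesterday, then add one hour -->
<earliest>-1d@d</earliest>
<latest>-1d@d+1h</latest>
```

For an arbitrary time-picker window, the general approach is to shift both boundaries back one day, e.g. by computing shifted epoch values with eval and relative_time(X, "-1d") from the picker tokens; the exact mechanics depend on whether the picker returns epochs or relative-time strings.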
Good day, we are currently using Splunk Enterprise version 8.1.3, and it seems there may be an issue with the urllib library in Python. Currently we are not able to use the Jenkins Trigger action as it is no longer supported, so we are using a webhook method to trigger a Jenkins build. The method we are using is:

url=http://<APIUSER>:<API token>@i<FQDN hostname>:8080/project/<Project>/<Folder>/<Job</buildWithParameters?token=<job token>?<Param>=<Value>

The error we get is: <urlopen error [Errno -2] Name or service not known>. We attempted to use the IP address instead of the FQDN, as well as the name of the server as it appears in the hosts file. Not sure if this is an issue with our Python installation on the Splunk SE side? Any suggestions?
Hi, currently Splunk sends alerts to Zabbix and BMC. I got a new requirement to send a resolved alert state (like "Resolved") to Zabbix and BMC as well. Example: whenever an alert triggers, it goes to Zabbix/BMC; if the alert condition goes away a few minutes later, an alert-resolved message should also go from Splunk to Zabbix/BMC at that time. Please suggest how to achieve this requirement.
Hello, I push into Splunk a tar.gz file named file.tar.gz. This tar.gz contains several files:

file.tar.gz
   |
   | - filea
   | - fileb
   | - filec

When Splunk consumes the tar.gz I lose the file names (I can see only file.tar.gz in the source field). The contents of filea, fileb, and filec are in the index, but not the file names. I would like the source field to carry the file name from inside the tar.gz, as follows:

source:filea instead of file.tar.gz
source:fileb instead of file.tar.gz
source:filec instead of file.tar.gz

Could you please help me? Many thanks.
Hello all, I would like to use the table command without changing the order of events. To give an example: when searching for "index=_* earliest=-15m latest=now", the first displayed event has the current time and the last displayed event is 15 minutes in the past. But when searching for "index=_* earliest=-15m latest=now | table _time,host,index", the events are re-sorted; _time is no longer descending (or ascending). I tried "index=_* earliest=-15m latest=now | table _time,host,index | sort 0 -_time", but that does not work 100% because some events have the same timestamp. So my question is: can I use the table command (or some other command that forms a table from a given set of columns) without changing the sort order?
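One workaround is to break timestamp ties with a secondary sort key. The hidden _indextime field is accessible through eval, and under the assumption that later-indexed events arrived later, it can serve as a tiebreaker; a sketch:

```spl
index=_* earliest=-15m latest=now
| eval idxtime=_indextime
| sort 0 -_time -idxtime
| table _time host index
```

Alternatively, the streaming fields command selects columns without re-sorting, so "| fields _time host index" preserves the original event order, though the output then renders in the events view rather than as a statistics table.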
I want to get the string between two hyphens and show it in a table. Input: "some text - 512ad85e-e968-45cc-8783-30b696217j5a - some text"; the result must be 512ad85e-e968-45cc-8783-30b696217j5a. There may be many records, and I just want to list all distinct values. What I tried is the regex \w{8}-?\w{4}-?\w{4}-?\w{4}-?\w{12}.
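The pattern itself looks close to right; in SPL's rex command the backslashes need no doubling, and distinct values can come from stats or dedup. A self-contained sketch using makeresults to test against the sample string (the field names raw and uuid are made up for illustration):

```spl
| makeresults
| eval raw="some text - 512ad85e-e968-45cc-8783-30b696217j5a - some text"
| rex field=raw "(?<uuid>\w{8}-\w{4}-\w{4}-\w{4}-\w{12})"
| stats count by uuid
```

Against real data, replace the first two lines with the actual search and field name; "stats count by uuid" (or "| dedup uuid | table uuid") yields the distinct values.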
I am running my Splunk application on version 8.1.1. I have several observations about the results when running the same SPL in different search modes. I have a tstats command that retrieves data from a specific index and further processes it with stats, lookup, eventstats, and streamstats commands. When the number of events is greater than 1M, the following issues are observed across search modes: the sum(count) differs; the total number of rows in the statistics tab differs; some column values are displayed under another column (e.g. a value belonging to field_13 is shown under column field_2). Below is my masked SPL for reference: | tstats prestats=t count as count where (`index_macro`) AND ("field_1"="I" OR "field_1"="T" OR "field_1"="O") AND field_3="*express*" AND field_4="*" AND ("field_5"="*") by "field_10",field_3, "field_5", "field_6", "field_7 1", "field_7 2", "field_7 3", "field_2" | stats count by "field_10", field_3, "field_5", "field_6", "field_7 1", "field_7 2", "field_7 3", "field_2" | eval field_7m = 'field_7 1'." ".'field_7 2'." 
".'field_7 3' | search field_7m="*" | lookup watchlist_for_latest_field_3 "field_8" as field_3 OUTPUT "field_8", "field_9 (English)","field_9 (Chinese)","field_11" | search "field_8" = "*" | eval ts=strptime('field_10',"%Y-%m-%d %H:%M:%S") | stats sum(count) as count, max(ts) as latest_event_time_by_field_12 by field_3, "field_2", "field_5", "field_6" | eventstats sum(count) as field_12_cnt by field_3, "field_2", "field_5", "field_6" | eventstats sum(count) as field_13_cnt by field_3, "field_2", "field_5" | eventstats sum(count) as field_3_cnt by field_3, "field_2" | eventstats dc("field_5") as total_no_field_13 by field_3, "field_2" | eval field_3_for_sort = lower(field_3), field_3_addr_for_sort = lower('field_2'), field_13_for_sort = lower('field_5'), field_12_description_for_sort = lower('field_6') | sort 0 - field_3_cnt, +field_3_for_sort, +field_3_addr_for_sort, field_13_cnt, +field_13_for_sort, field_12_cnt, latest_event_time_by_field_12, +field_12_description_for_sort | streamstats dc("field_5") as rank_field_3 by field_3, "field_2" | streamstats count as rank_by_field_3_cntry by field_3, "field_2", "field_5" | where rank_field_3 <= 3 and rank_by_field_3_cntry <= 3 | eval "field_6" = "<".rank_by_field_3_cntry.">: ".'field_6' | stats list("field_6") as "field_12 description (Top 3)", values(field_3_cnt) as "Total Number of field_3_cnt", values(total_no_field_13) as "Total Number of field_13" by field_3, "field_2", "field_5", field_13_cnt | eval field_3_for_sort = lower(field_3), field_3_addr_for_sort = lower('field_2'), field_13_for_sort = lower('field_5') | sort 0 - "Total Number of field_3_cnt", +field_3_for_sort, +field_3_addr_for_sort, field_13_cnt, +field_13_for_sort | streamstats count as rank_by_field_3_after_group by field_3, "field_2" | eval "field_5" = "<".rank_by_field_3_after_group.">: ".'field_5' | lookup watchlist_for_latest_field_3 "field_8" as field_3 OUTPUT "field_8", "field_9 (English)","field_9 (Chinese)","field_11" | rename "field_5" as 
"field_13 (Top 3)", "field_8" as "field_8 from Watchlist", field_3 as "field_3 (CAPTION)", "field_2" as "field_2 (CAPTION)" | table "field_3 (CAPTION)", "field_2 (CAPTION)", "field_8 from Watchlist", "field_9 (Chinese)", "field_9 (English)", "field_11", "Total Number of field_3_cnt", "Total Number of field_13", "field_13 (Top 3)", "field_12 description (Top 3)" P.S. I also referred to some posts describing a similar problem (e.g. https://community.splunk.com/t5/Splunk-Search/Why-does-search-in-fast-mode-return-different-results-than/m-p/393596#M114453), but the solution there does not seem to resolve my problem. limits.conf: [search_optimization::projection_elimination] cmds_black_list = lookup I intend to run this SPL in a scheduled report, and there is no way to force a scheduled report to run in verbose mode. May I know if there are any fixes or workarounds?
Hello AppDynamics experts, there is an issue with our AppDynamics DB Agent. It was working fine two weeks ago. I can confirm the network is okay for both the AppD controller and the AWS database, and the DB credentials are good too, but agent.log keeps repeating the log below:
———————
[AD Thread-Metric Reporter1] 28 Apr 2021 07:36:05,475 INFO SystemAgentTransientEventChannel - Full certificate chain validation performed using default certificate file
[AD Thread-Metric Reporter1] 28 Apr 2021 07:36:05,498 ERROR MetricService - HTTP Request failed: HTTP/1.1 500 Internal Server Error
[AD Thread-Metric Reporter1] 28 Apr 2021 07:36:05,499 WARN MetricService - Error sending metric data to controller:null
[AD Thread-Metric Reporter1] 28 Apr 2021 07:36:05,499 ERROR MetricService - Error sending metrics - will requeue for later transmission com.singularity.ee.agent.commonservices.metricgeneration.metrics.MetricSendException: null at com.singularity.ee.agent.commonservices.metricgeneration.AMetricSubscriber.publish(AMetricSubscriber.java:350) ~[agent-shared-20.3.0.1.jar:?]
———————
I see a lot of people with the same issue when I google it, but no resolution yet. Do you have any idea?
Hi guys, I have an installation of Splunk 8.1.2 where we have XmlWinEventLog data ingested. When we run this search:

index=wineventlog sourcetype=XmlWinEventLog earliest=-1h latest=now | stats count by host

it takes an extremely long time to complete. Comparing the search.log to that of other non-XmlWinEventLog searches, I can see that the above search has the following in its search.log:

04-27-2021 14:43:35.526 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:43:40.090 INFO PreviewExecutor - Finished preview generation in 0.000353521 seconds.
04-27-2021 14:43:41.126 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:43:46.526 INFO PreviewExecutor - Finished preview generation in 0.001657283 seconds.
04-27-2021 14:43:47.625 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:43:50.866 INFO PreviewExecutor - Finished preview generation in 0.001701492 seconds.
04-27-2021 14:43:51.926 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:43:53.786 INFO PreviewExecutor - Finished preview generation in 0.001716926 seconds.
04-27-2021 14:43:54.825 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:43:58.417 INFO PreviewExecutor - Finished preview generation in 0.00166631 seconds.
04-27-2021 14:43:59.426 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:44:01.082 INFO PreviewExecutor - Finished preview generation in 0.002049016 seconds.
04-27-2021 14:44:02.125 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:44:07.007 INFO PreviewExecutor - Finished preview generation in 0.001083249 seconds.
04-27-2021 14:44:08.025 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:44:08.141 INFO PreviewExecutor - Finished preview generation in 0.002117643 seconds.
04-27-2021 14:44:09.225 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:44:12.264 INFO PreviewExecutor - Finished preview generation in 0.003432417 seconds.
04-27-2021 14:44:13.525 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:44:19.424 INFO PreviewExecutor - Finished preview generation in 0.002008858 seconds.
04-27-2021 14:44:20.825 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:44:29.207 INFO PreviewExecutor - Finished preview generation in 0.001904259 seconds.
04-27-2021 14:44:30.926 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
04-27-2021 14:44:34.602 INFO PreviewExecutor - Finished preview generation in 0.001884954 seconds.

That is a full minute spent in this state. The above search over one hour takes about 63 seconds, and 60 of those are spent in this state, which tells me that something is odd. The ReducePhaseExecutor and PreviewExecutor messages are not seen in search.log if I run the same search on the _internal index. I have another installation running 8.1.1 that has exactly the same behavior. If I run the same search without the "by host", it takes 4.1 seconds to complete. It is really strange and unexpected that just adding "by host" increases the search time by about one minute. Could anyone tell me what the above is? I guess it has something to do with "preview", but it does not make sense for it to take that amount of time; a link to some documentation on what it is, and why it shows up only in searches over XML logs, would be preferable. I have tested other types of data (cisco, _json, _internal) and none of those have the issue of running these executors and spending a lot of time on them. Thank you, André
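Not an explanation of the PREVIEW churn, but as a possible workaround while diagnosing: host is an indexed field, so the same count-by-host can be computed with tstats directly from the index metadata, without decompressing the raw XML events at all, which is typically far faster:

```spl
| tstats count where index=wineventlog sourcetype=XmlWinEventLog earliest=-1h latest=now by host
```

This only works because the split-by field here (host) is indexed; a breakdown by a search-time-extracted XML field would still need the raw events.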
Hello, I am not able to log in to Splunk Cloud; the credentials do not match. Please help.
Hi, currently we have Splunk Enterprise version 7.2.2, and I am planning to upgrade to Splunk version 8. Which Splunk Enterprise version (that Splunk supports) should we install/upgrade to now: version 8.0 or version 8.1.3? Thanks, Khanh
My search:

product_name="orange_wallet"
| fields product_name,productID
| rex field=tag_description "(?i)orange_wallet(?<description>\w+)(?<size>\w+)"
| table product_name,productID,description,size

My question is: can we include a field's value in the regex? The aim is to replace orange_wallet with the value of product_name, and it needs to be case insensitive. Thanks.
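rex does not substitute field values into its pattern, but eval's replace() accepts a pattern string built with concatenation, so the product name can be stripped dynamically and the remainder parsed with a literal rex. A sketch under assumptions from the post (the rest field name is made up, and the description/size layout after the product name is guessed):

```spl
product_name="orange_wallet"
| fields product_name, productID, tag_description
| eval rest=replace(tag_description, "(?i).*".product_name, "")
| rex field=rest "^\s*(?<description>\w+)\s+(?<size>\w+)"
| table product_name, productID, description, size
```

The "(?i)" inline flag keeps the dynamically built pattern case insensitive; worth checking how backslash escaping behaves in your eval context before relying on more complex patterns.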