All Topics

Hi all, I have a dashboard where the drilldown tokens of one panel drive the search of another panel.

First panel:

<title>Juniper Mnemonics</title>
<table>
  <search>
    <query>index=nw_syslog | search hostname="*DCN*" | stats count by cisco_mnemonic, hostname | sort - count</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <option name="drilldown">row</option>
  <option name="refresh.display">progressbar</option>
  <drilldown>
    <condition field="cisco_mnemonic">
      <set token="message_token">$click.value$</set>
    </condition>
    <condition field="hostname">
      <set token="hostname_token">$click.value$</set>
    </condition>
    <condition field="count"></condition>
  </drilldown>
</table>

Two values from this panel feed the second panel's search.

Second panel:

index=nw_syslog | search hostname="*DCN*" | search cisco_mnemonic="$message_token$" | search hostname="$hostname_token$" | stats count by message | sort - count

Issue: whenever I click a row in the first panel's table (click selection is set to "row"), the tokens are not populated correctly — the cisco_mnemonic value is used for both cisco_mnemonic and hostname. Please guide me on how to set both tokens with a single click.
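One approach worth trying (a sketch, not tested against this exact dashboard): with row drilldown, the predefined $row.<fieldname>$ tokens expose every column of the clicked row, so both tokens can be set from one click without per-field <condition> blocks:

```xml
<drilldown>
  <!-- $row.<field>$ holds that column's value for the clicked row -->
  <set token="message_token">$row.cisco_mnemonic$</set>
  <set token="hostname_token">$row.hostname$</set>
</drilldown>
```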
I have 2 sourcetypes, vpn and winevents. How do you write a single query to get the winevents of the top 5 busiest machines for IP X (one IP is used by many users)? The vpn sourcetype contains both hostname and IP, while winevents only contains the hostname. I'm assuming I'd use the append command and a subsearch:

sourcetype=winevents | append [search sourcetype=vpn] | top limit=5

Any help is appreciated, thanks.
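A subsearch sketch, assuming the vpn sourcetype's fields are named src_ip and hostname (adjust to the real field names): the inner search finds the top 5 hosts for the IP, and the outer search retrieves only those hosts' winevents:

```
sourcetype=winevents
    [ search sourcetype=vpn src_ip="X" | top limit=5 hostname | fields hostname ]
```

The subsearch returns hostname=... constraints, so no append is needed on the outer search.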
We are getting the below error while executing the query:

com.microsoft.sqlserver.jdbc.SQLServerException: The value is not set for the parameter number 1.

Kindly advise on the resolution.
Hi, I need help evaluating the CSV files under the "<Splunk directory>\etc\apps\search\lookups" folder. We have multiple CSV files in this folder, and I need to check which CSV files are not in use (or which search uses each file) so that the unused ones can be deleted.
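One way to see which lookups searches actually reference is to mine the audit index; a sketch (the regex and time range are assumptions to adapt):

```
index=_audit action=search search=*lookup*
| rex field=search "(?i)(?:input)?lookup\s+(?<lookup_name>[\w\.]+)"
| stats count latest(_time) AS last_used by lookup_name
```

Lookup files that never appear in these results over a long time range are candidates for deletion.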
Hello team, we have been using the Splunk App for Jenkins in our environment, and within the app there is a separate JobInsight dashboard. The JobInsight dashboard has a "Latest Build" panel that displays the build value. The expectation is that when the value is clicked in the panel, it should display the build results, taking the click value as a variable. However, the value is not passed when clicking on the panel.

Code on the respective page ($click.value$ is not being passed):

<link>\n<![CDATA[build?type=build&host=$host$&job=$jobName$&build=$click.value$]]>\n</link>

In the dashboard URL:

splunk_app_jenkins/build?type=build&host=JENKINS-F1DEVOPS-PRDINTRANET-IE&job=F1_ALL_DEV_SML/f1-all-dev-sml/feature%2FGDS-641_lint_tools&build=$click.value$
Hello, we have to integrate one of our SQL Server instances with Splunk; the current version is SQL Server 2014, and we are using the Splunk DB Connect app to configure it. If the SQL team upgrades to SQL Server 2017, is that compatible with Splunk DB Connect, or do they need to upgrade to SQL Server 2019? Please provide any relevant solutions/documents on this.
Hi Community, I'm currently facing a concern: the Health Rule Violations API returns less information in the "description" field since my company updated the controller from version 4.5.16.2272 to 20.7.2-2909. Here is the API result comparison between both versions for the deepLinkUrl output:

4.5.16.2272:
"description": "AppDynamics has detected a problem with Application <b>APP-1</b>.<br><b>Service Availability</b> continues to violate with <b>critical</b>.<br>All of the following conditions were found to be violating<br>For Application <b>APP-1</b>:<br>1) API-PORTAL<br><b>Availability's</b> value <b>2.00</b> was <b>less than</b> the threshold <b>3.00</b> for the last <b>30</b> minutes<br>"

20.7.2-2909:
"description": "AppDynamics has detected a problem.<br><b>Business Transaction Health</b> is violating."

I'm wondering whether anything related to the health alert was misconfigured, or whether there is a way to fine-tune the alert to show the detailed version. Thank you!
index name = my_index
source name = my_source
sourcetype = my_sourcetpye
host = 192.168.0.10
-----------------------------
I want to map the field action as follows: action=allow -> my_allow, action=deny -> my_deny, anything else -> my_myontype. How can I do this? Please help.
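A sketch using eval/case with the values from the question (the output field name action_label is hypothetical — rename as needed):

```
index=my_index source=my_source sourcetype=my_sourcetpye host=192.168.0.10
| eval action_label=case(action=="allow", "my_allow",
                         action=="deny",  "my_deny",
                         true(),          "my_myontype")
```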
Is it possible to send alert logs from a FireEye CM (Central Management) to the FireEye App for Splunk?
Hi, from the three events below I need to extract a field called "Event_Name" whose value is "BeyondTrust_PBUL_ACCEPT_Event". Desired output: Event_Name (field name) = BeyondTrust_PBUL_ACCEPT_Event (field value).

Example event 1:
<86>Dec 22 ddppvc0729 pbmerd2.1.0-12: BeyondTrust_PBUL_ACCEPT_Event: Time_Zone='IST'; Request_Date='2021/1/27'; Request_Time='2:2:51'; Request_End_Date='2021/1/27'; Request_End_Time='22:1:51'; Submit_User='spnt'; Submit_Host='wcpl.com';

Example event 2:
<83>Dec 22 ddpc0729 pbmerd21.1.0-12: [2658] 5105.1 failed to get ACK packet during a CMD_SWAPTTY_ONE_LINE sequence - read failure in receive acknowledgement

Example event 3:
<38>Dec 22 ddppvc0729 root[25132]: [ID 7011 auth.info] CEF:0|BeyondTrust|PowerBroker|1.1.0-12|7011|PBEvent=Accept|4|act=Accept end=Dec 1 2021 1:11:40 shost=dc8 dvchost=dc8 suser=t8adsfk duser=root filePath=/opt/ cs1Label=Ticket cs1=Not_Applicable deviceExternalId=0a2adfersds9 fname=./SSB_Refresh_Pbrun_Local_Policy_Files.sh

What I tried for the regex extraction:
Input: (?<Event_Name>\w{10}[a-zA-Z]+_[a-zA-Z]+_[a-zA-Z]+_[a-zA-Z]+)
Output: it matches in two places across the three events above, instead of only the intended one.
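Since only the accepted events carry that exact marker, anchoring the regex on the literal string avoids matching the other formats; a sketch:

```
| rex field=_raw "(?<Event_Name>BeyondTrust_PBUL_ACCEPT_Event)"
| search Event_Name=*
```

A slightly looser pattern such as (?<Event_Name>\w+_PBUL_ACCEPT_Event) would also work if the vendor prefix varies.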
Hi everyone, I'm running Splunk Enterprise 8.2.2.1 on macOS (Big Sur), and it runs quite well, except that there is no search history available for a user ID with the admin role. But from the CLI, in etc/users/bd/search/history there is actually a file called <hostname>.idx.csv which holds all my history.

1. Can anyone please explain what's going on here?

PS: I have 5 instances running on my Mac (a combined SH/IDX, DPL, HFWD, and 2 UFs), and they all work nicely together. The difference is that I log in with an internally created user on the SH (the one with no history above), but on e.g. the HFWD I use the user "splunk" (this user also runs all the instances at the OS level), and there history works just fine.

2. There has got to be a missing link, but which?

Cheers, Bjarne
Hi everyone, I have 5 instances of Splunk running on my Mac (Big Sur v11.6):

- SH+IDX
- DPL
- HFWD
- UF (sending to HFWD)
- UF (sending to IDX)

All working pretty well, but there are a few hiccups running on macOS (Big Sur, 11.6), and the new major one I've run into is that NO introspection (Resource Usage) data is collected. The "resource_usage.log" is completely empty, and running:

/opt/splunk_dpl/bin/splunkd instrument-resource-usage -p 8087 --with-kvstore --debug

writes:

I-data gathering (Resource Usage) not supported on this platform.
DEBUG RU_main - I-data gathering (IOWait Statistics) not supported on this OS
WARN WatchdogActions - Initialization failed for action=pstacks. Deleting.
DEBUG InstrumentThread - Entering 0th iter (thread KVStoreOperationStatsInstrumentThread)
DEBUG InstrumentThread - Entering 0th iter (thread KVStoreCollectionStatsInstrumentThread)
DEBUG InstrumentThread - Entering 0th iter (thread KVStoreServerStatusInstrumentThread)
DEBUG InstrumentThread - Entering 0th iter (thread KVStoreProfilingDataInstrumentThread)
DEBUG InstrumentThread - Entering 0th iter (thread KVStoreReplicaSetStatsInstrumentThread)

1. Does this really mean there is no support for resource usage on the Mac, or am I getting something wrong here? To me there is not really that much difference between a Mac and a Linux box (while knowing there are some differences), and most commands that run on Linux run the exact same way on a Mac.
2. If this does not come out of the box, how can it be enabled?
3. Which processes run on Linux to produce the "Resource Usage" and IOWait stats, so that one could try to port them to the Mac?
4. Does anyone know exactly how and where the scripts/processes that facilitate this are configured in Splunk?

Any core details would be most appreciated.

Cheers, Bjarne
Hi Splunkers, I have a dashboard with multiple panels, which all use a shared time picker bound to token field2. When I use the following drilldown link to send the token Gucid_token, the target dashboard uses its own default time range:

<drilldown>
  <link target="_blank">/app/appname/guciduuidsid_search_applied_rules_with_ors_log_kvp?form.Gucid_token=$click.value2</link>
</drilldown>

But when I click the drilldown link, I would prefer a different, hard-coded time range, like "Last 7 days", instead of the target dashboard's default. So I added form.field2=Last 7 days to my drilldown link after the first token form.Gucid_token=$click.value2$, as below, but unfortunately it doesn't work:

<drilldown>
  <link target="_blank">/app/appname/guciduuidsid_search_applied_rules_with_ors_log_kvp?form.Gucid_token=$click.value2$&amp;form.field2=Last%207%20days</link>
</drilldown>

Does anyone know how to pass a hard-coded time range through this drilldown link? Thanks in advance.

Kevin
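Time-picker tokens are usually set via their .earliest/.latest components with relative time modifiers, rather than a display label like "Last 7 days"; a sketch (assuming field2 is the shared time token on the target dashboard):

```xml
<drilldown>
  <link target="_blank">/app/appname/guciduuidsid_search_applied_rules_with_ors_log_kvp?form.Gucid_token=$click.value2$&amp;form.field2.earliest=-7d@d&amp;form.field2.latest=now</link>
</drilldown>
```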
This is my current WMI setup:

[WMI:WinLogSysTst]
disabled = 0
event_log_file = System
index = winlogsystst
interval = 5
server = localhost
current_only = 0

How can I tell it to collect data older than when I created the input? I only get recent data, not the older events. Thank you.
I am taking events from three source types (same index; two common fields present across all three) and creating a table with the results. The events are indexed using a "timestamps" field that is present in the raw data (the result of an API call to a monitoring tool and a subsequent JSON payload retrieval of synthetic test metrics; the value is in epoch time and pushed into _time using a transform aligned with the source types). Here's the query I'm using:

index=smoketest_* sourcetype=smoketest_json_dyn_result OR sourcetype=smoketest_json_dyn_duration OR sourcetype=smoketest_json_dyn_statuscode
| rename dt.entity.synthetic_location AS synLoc, dt.entity.http_check AS httpCheck
| stats values(*) AS * by httpCheck, synLoc, _time
| rename "responseTime{}" AS "Response Time (ms)"
| table _time, synLoc, httpCheck, status, "Response Time (ms)", "Status code"

The common fields found in all three source types are "synLoc" and "httpCheck". 95% of the time, I get the desired result: the requested fields from all three source types align as a single row of the table. In that case, the table shows the results of two unique tests (executing every five minutes, over a 15-minute period). Since the events grabbed from the three source types all have the same _time value, this works as expected.

If, however, one or two of the source types have events with a _time value that does not match the others, the rows split. Again with two unique tests represented, one row reflects a value from one source type at 10:01 while the two values from the other two source types land on a separate row at 10:02. Ideally, all three values should be on the same row (much like the 10:06 and 10:11 entries). How can I alter my search query to account for this behavior?
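One common approach is to snap _time to the test cadence before the stats, so events from the same five-minute cycle share a bucket even when their timestamps differ by a minute; a sketch of the changed portion of the query:

```
index=smoketest_* sourcetype=smoketest_json_dyn_result OR sourcetype=smoketest_json_dyn_duration OR sourcetype=smoketest_json_dyn_statuscode
| rename dt.entity.synthetic_location AS synLoc, dt.entity.http_check AS httpCheck
| bin _time span=5m
| stats values(*) AS * by httpCheck, synLoc, _time
```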
Hi all, we are receiving the timestamp warnings below:

0000 WARN DateParserVerbose [104706 merging_0] - Accepted time format has changed ((?i)(?<![\d\.])(20\d\d)([-/])([01]?\d)\2([012]?\d|3[01])\s+([012]?\d):([0-6]?\d):([0-6]?\d)\s*(?i)((?:(?:UT|UTC|GMT(?![+-])|CET|CEST|CETDST|MET|MEST|METDST|MEZ|MESZ|EET|EEST|EETDST|WET|WEST|WETDST|MSK|MSD|IST|JST|KST|HKT|AST|ADT|EST|EDT|CST|CDT|MST|MDT|PST|PDT|CAST|CADT|EAST|EADT|WAST|WADT|Z)|(?:GMT)?[+-]\d\d?:?(?:\d\d)?)(?!\w))?), possibly indicating a problem in extracting timestamp

12-27-2021 14:33:04.972 +0000 WARN DateParserVerbose [104095 merging_0] - Accepted time format has changed ((?i)(?<![\w\.])(?i)(?i)(0?[1-9]|[12]\d|3[01])(?:st|nd|rd|th|[,\.;])?([\- /]) {0,2}(?i)(?:(?i)(?<![\d\w])(jan|\x{3127}\x{6708}|feb|\x{4E8C}\x{6708}|mar|\x{4E09}\x{6708}|apr|\x{56DB}\x{6708}|may|\x{4E94}\x{6708}|jun|\x{516D}\x{6708}|jul|\x{4E03}\x{6708}|aug|\x{516B}\x{6708}|sep|\x{4E5D}\x{6708}|oct|\x{5341}\x{6708}|nov|\x{5341}\x{3127}\x{6708}|dec|\x{5341}\x{4E8C}\x{6708})[a-z,\.;]*|(?i)(0?[1-9]|1[012])(?!:))\2 {0,2}(?i)(20\d\d|19\d\d|[9012]\d(?!\d))(?![\w\.])), possibly indicating a problem in extracting timestamps

All the Linux servers that are sending logs to Splunk are in the EST timezone, and we expect the events to be indexed in the same timezone. But we are still seeing issues and are not able to identify which of the servers are causing the timezone problems. Are there any other checks we should perform to resolve the above errors?

Thanks, Sharada
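If specific hosts send timestamps without an explicit timezone, one standard remedy is to pin TZ per host (or per sourcetype) in props.conf on the indexers or heavy forwarders; a sketch with a hypothetical host pattern:

```ini
[host::linuxserver*]
TZ = America/New_York
```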
Hi, I need to find the error codes and then, per ID, count the number of IPS values.

2021-12-26 22:38:59,248 INFO CUS.AbCD-Server-2-0000000 [LoginService] load idss[IPS=987654*1234-1,productCode=000]
2021-12-26 22:38:59,280 ERROR CUS.AbCD-Server-2-0000000 [LoginService] authorize: [AB_100] This is huge value. ConfigApp[DAILY_STATIC_SECOND_PIN]
2021-12-26 22:38:59,248 INFO CUS.AbCD-Server-2-0000000 [LoginService] load idss[IPS=987654*1234-1,productCode=000]
2021-12-26 22:38:59,280 ERROR CUS.AbCD-Server-2-0000000 [LoginService] authorize: [AB_100] This is huge value. ConfigApp[DAILY_STATIC_SECOND_PIN]
2021-12-26 22:38:59,248 INFO CUS.AbCD-Server-3-9999999 [LoginService] load idss[IPS=123456*4321-1,productCode=000]
2021-12-26 22:38:59,280 ERROR CUS.AbCD-Server-3-9999999 [LoginService] authorize: [AB_500] This is huge value. ConfigApp[DAILY_STATIC_SECOND_PIN]

Expected output:

ID                          IPS             count
CUS.AbCD-Server-2-0000000   987654*1234-1   2
CUS.AbCD-Server-2-9999999   123456*4321-1   1

Any ideas? Thanks.
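A sketch (the index/sourcetype and exact regexes are assumptions to adapt): extract the ID from every line, the IPS from the "load idss" lines, and the error code from the "authorize" lines, then aggregate per ID:

```
index=... sourcetype=...
| rex "(?:INFO|ERROR)\s+(?<ID>CUS\.[\w-]+)"
| rex "IPS=(?<IPS>[^,]+)"
| rex "authorize: \[(?<error_code>[A-Z]+_\d+)\]"
| stats values(error_code) AS error_code values(IPS) AS IPS count(IPS) AS count by ID
```

count(IPS) only counts events where IPS was extracted, so the count reflects the number of IPS lines per ID.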
Hi, the Event Hub capacity is limited, so we would like to ask whether we can also use a storage account to ingest the data via this add-on. The add-on's details describe using an Event Hub: Microsoft Defender Advanced Hunting Add-on for Splunk | Splunkbase

Kind regards
Hi, I am stuck implementing the use case below; please help me with this.

I have a lookup, url_requested.csv:

http_url               host
*002redir023.dns04*    test
*yahoo*                test

And another CSV file, malicious.csv:

url                            Description
xyzsaas.com                    C&C
http://002redir023.dns04.com   malicious

I have to check the URL values in url_requested.csv against those in malicious.csv and return only the url and Description values that have a match in malicious.csv. The http_url column in url_requested.csv holds patterns with a wildcard prefixed and suffixed. I have added the wildcard configuration in transforms.conf following this: https://community.splunk.com/t5/Splunk-Search/Can-we-use-wildcard-characters-in-a-lookup-table/m-p/94513

My query:

| inputlookup malicious.csv | table url description | lookup url_requested.csv http_url as url outputnew host | search host=* | fields - host

I get no results running this query. Please let me know where I am going wrong and help me with the solution. The result I am looking for:

url                            Description
http://002redir023.dns04.com   malicious
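A frequent cause of exactly this symptom (a sketch, assuming your transforms.conf stanza is named url_requested): match_type = WILDCARD(...) only takes effect when the search invokes the lookup *definition*, not the .csv file directly, so | lookup url_requested.csv ... falls back to exact matching. With a definition such as:

```ini
[url_requested]
filename = url_requested.csv
match_type = WILDCARD(http_url)
```

the search would reference the stanza name instead of the file, e.g. | inputlookup malicious.csv | lookup url_requested http_url AS url OUTPUTNEW host | where isnotnull(host) | table url Description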