All Topics

Hello, I am trying to create a dependency map without externally creating the tokens that are fed to the append searches. Here is the motive: I have a list of Sources and Targets, where the Source of one relation is the Target of many others, and so on. This is recursive, but I would stop at 4 iterations for now. The resulting table must contain only the pairs of Source and Target services as the basis for the visualization.

The first search looks something like this:

index=poc_analyze_something_rather Target_Service=$my_initial_token_from_dashboard$
| table Source_Service Target_Service

The initial token is fed via drilldown from the dashboard. So far, no issue at all. The first search creates the list of Source_Services connected to the Target_Service (token).

Now I actually have two issues, sorry. The first is that I cannot create the table of pairs and create a token at the same time. The creation of the token would look something like this:

index=poc_analyze_something_rather Target_Service=$my_initial_token_from_dashboard$
| stats values(Source_Service) as results
| eval list_of_Source_Services_Search_one = mvjoin(results, ",")

So the first issue is how to team them up in one search, if possible.

The second issue starts once I have the token. The second search would look something like this:

| append
    [ search index=poc_analyze_something_rather Target_Service IN($list_of_Source_Services_Search_one$)
    | table Source_Service Target_Service ]

However, the first search does not seem to pass the token along into the append search. There is no issue at all if I create the token with a search in the dashboard (no visualization) like this:

<search>
  <query>
    index=poc_analyze_something_rather Target_Service=$my_initial_token_from_dashboard$
    | stats values(Source_Service) as results
    | eval source_list = mvjoin(results, ",")
  </query>
  <earliest>-15m</earliest>
  <latest>now</latest>
  <done>
    <set token="list_of_Source_Services_Search_one">$result.source_list$</set>
  </done>
</search>

The append search has no issues at all with this token. However, there must be a way to create the list of Sources and Targets without resorting to a dashboard with XML-coded searches. Any ideas?

Thanks
Mike

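One possible direction, offered as an untested sketch that assumes the index and field names from the post: replace the token hand-off with a subsearch, so the first level's results feed the second level inside a single search and no dashboard-managed token is needed:

index=poc_analyze_something_rather Target_Service=$my_initial_token_from_dashboard$
| table Source_Service Target_Service
| append
    [ search index=poc_analyze_something_rather
        [ search index=poc_analyze_something_rather Target_Service=$my_initial_token_from_dashboard$
        | dedup Source_Service
        | rename Source_Service as Target_Service
        | fields Target_Service ]
    | table Source_Service Target_Service ]

The inner subsearch returns its Target_Service values as an implicit (Target_Service="a") OR (Target_Service="b") clause, which plays the role of IN($list_of_Source_Services_Search_one$). Each further iteration nests one more subsearch, which gets verbose by level 4, and the usual subsearch result and runtime limits apply if the service lists are large.
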
Hi Experts, I am running two searches combined with appendcols, but the final result contains only the fields common to both searches. I want the entire result set of both the main search and the subsearch. I am using stats as well. Please advise. Thank you.

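Without the actual searches it is hard to be specific, but here is a sketch of the usual pattern, with a hypothetical join key (host) and hypothetical indexes: use append instead of appendcols, then merge rows that share the key with a final stats, which keeps every field from both sides:

index=index_a ...
| stats count as count_a by host
| append
    [ search index=index_b ...
    | stats count as count_b by host ]
| stats values(*) as * by host

appendcols pastes columns together row by row with no key, so it only lines up when both searches return the same rows in the same order; append plus stats by a shared field is usually the safer way to keep everything.
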
Hi, my requirement is to chart only each Monday's data over a month in a trend chart. This needs to be shown for the status field, which has the values pass, fail, error, and deleted. The individual count of each status should be shown for each Monday of the month. Please let me know how to do this.

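A minimal sketch, assuming status is an extracted field (the index name my_index is a placeholder, since the post does not name one): filter events to Mondays with strftime, then chart the status counts per Monday date:

index=my_index earliest=-30d@d
| where strftime(_time, "%A")="Monday"
| eval monday=strftime(_time, "%Y-%m-%d")
| chart count over monday by status

This yields one row per Monday with a column per status value, which most chart visualizations can render directly as a trend.
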
How do I install Splunk services on CentOS?

Lookup not working. On a local server on my computer, I get the expected result, but when I performed exactly the same actions on the production server, one to one, there is no result. Examples are in the photo: one user at a time locally, but no result for anyone in production. Where could I have gone wrong? Thanks! Splunk version 7; both files have global permissions.

Hello Team, what could be the best possible solution for integrating the Imperva cloud solution with Splunk? Is there an app/add-on available for Imperva to onboard the logs into Splunk through its API?

Hi, our systems have multiple order records as XML transactions, and each order can have multiple events on different dates. I want to search for orders that have had specific event codes and display a table showing the dates when each event code happened for each individual order.

The index name is "xmlogs". Each XML record has an order ref, an event date, and an event code. My search needs to be limited to event_code 1001, 1002, or 1003. The input XML looks like:

<order_ref>123456</order_ref>
<event_date>2021-10-01T10:31:13</event_date>
<event_code>1001</event_code>

Similar XML exists for event 1002, etc. I want the output to look like:

order_ref   1001         1002         1003
123456      01/10/2021                04/10/2021
789123      05/10/2021   08/10/2021   13/10/2021

Any help will be much appreciated.

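A minimal sketch, assuming the fields extract as order_ref, event_date, and event_code: chart the event dates over the order, split by event code:

index=xmlogs event_code IN (1001, 1002, 1003)
| eval event_day=strftime(strptime(event_date, "%Y-%m-%dT%H:%M:%S"), "%d/%m/%Y")
| chart values(event_day) over order_ref by event_code

strptime/strftime reformat the ISO timestamp into the DD/MM/YYYY display shown above; if an order can hit the same event code more than once, values() will list every matching date in that cell.
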
Hi, I am working on a system that scans for DFS (Dynamic Frequency Selection) channels. The scan starts with a CAC_STARTED event. If the system finds a DFS channel, the scan ends with CAC_COMPLETED and it stays on the DFS channel. If radar is detected, the scan ends with CAC_STOPPED and the system switches to a non-DFS channel. Later, the system searches for a DFS channel again with CAC_STARTED, and the pattern repeats.

I am trying to calculate the time spent on DFS and non-DFS channels in a day. Can someone please help me with the queries? I tried the following:

index=* mac="0cf9c0fef6fe" ("ACI_CAC_COMPLETED")
| sort _time
| stats max(_time) as maxtime min(_time) as mintime
| eval maxt=strftime(maxtime, "%d:%H:%M:%S")
| eval mint=strftime(mintime, "%d:%H:%M:%S")

This gives me the total time spent on the DFS channel, but only if the system has never switched to a non-DFS channel, i.e., the ACI_CAC_STOPPED event never appeared in the whole span. How do I check whether there was an ACI_CAC_STOPPED event in between, and calculate the DFS and non-DFS durations? Please advise.

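One possible approach, sketched under the assumption that the transitions are exactly as described (ACI_CAC_COMPLETED means the system is now on a DFS channel, ACI_CAC_STOPPED means it is now on a non-DFS channel): attribute the time between consecutive transition events to whichever state the previous event put the system in, then sum per state:

index=* mac="0cf9c0fef6fe" ("ACI_CAC_COMPLETED" OR "ACI_CAC_STOPPED")
| eval event_type=if(searchmatch("ACI_CAC_COMPLETED"), "ACI_CAC_COMPLETED", "ACI_CAC_STOPPED")
| sort 0 _time
| streamstats current=f last(_time) as prev_time last(event_type) as prev_event
| eval duration=_time - prev_time
| eval channel_state=case(prev_event="ACI_CAC_COMPLETED", "DFS", prev_event="ACI_CAC_STOPPED", "non-DFS")
| stats sum(duration) as seconds by channel_state

If the event name is already an extracted field, use it instead of searchmatch(). The first event has no predecessor and is dropped by case(), and the interval from the last event to "now" is uncounted, so you may want to append a synthetic end-of-day event if those edges matter.
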
Admission rules are cool, but it would be great to know which ones people are using. It would also be great if the platform had a set of known-bad SPL rules that could be toggled on. Here are a couple of obvious ones we've started with:

Prevent users from searching all indexes:

search_type=adhoc AND index=*

Prevent users from running all-time searches:

search_type=adhoc AND search_time_range=alltime

How do I add SSO to the Splunk Helm chart through a ConfigMap? #helm #kubernetes #splunkhelm

Hello, I would like to change the table cell background color of the top 3 values in each column of my search results. For example, the top 3 values of column No.1 (50, 29, 25) need to be colored in column No.1. How can I change those cell background colors?

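Rank-based coloring is not built into the Simple XML table format options, whose expression palette only sees the cell's own value. A sketch of the common workaround, assuming a numeric column literally named "No.1": hardcode the column's current third-highest value as a threshold in an expression-based color palette:

<format type="color" field="No.1">
  <colorPalette type="expression">if (value &gt;= 25, "#53A051", "#FFFFFF")</colorPalette>
</format>

Here 25 stands in for the third-highest value of that column and would need manual updating; truly dynamic top-3 highlighting per column generally requires a custom table cell renderer in JavaScript.
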
I've just seen this very old Answer concerning the XML view template: https://community.splunk.com/t5/Dashboards-Visualizations/Edit-the-default-dashboard-template/m-p/17091

It would seem to imply that onunloadCancelJobs is the default behaviour (generated into the XML preamble), but the docs indicate there is no default setting. I don't know enough about how Splunk does all this to figure it out, and I cannot find any other references except these two apparently contradictory ones.

Would anybody on the inside care to clarify the actual situation? I'd like to know whether to add this rather useful property to all my views, since I've only just found out about it. Also, if it's not the default behaviour, why not?

Thank you, Charles

Hi,

I have a problem: I am not able to change the email address on my account. My company is doing an email domain renewal, and I need to switch to a new email address. Could you help me?

Thanks and regards,
Dano

Hi All, I'm trying to tie data together into one matrix from Jira (API-fed) that spans two source types (shown below). I need each issue (key) to have the following attributes represented as columns:

Column Name       sourcetype
key               jira:issues:json
team_name         jira:issues:json
created_weekNo    jira:issues:json
created_yearNo    jira:issues:json
creationDate      jira:issues:json
slaName           jira:sla:json

Problem: the key is the unique identifier that can marry the data sets, but I'm having trouble getting the slaName from jira:sla:json to combine as a column with jira:issues:json. Side note: the key will have multiple entries that need to be reflected as separate rows, but the column information from jira:issues:json will be static over the lifetime of the ticket (as this is just the created date).

Ask: if anyone has any best practices that could help me out, it would be greatly appreciated. Using subsearch and appendcols is getting confusing, as I'm not looking for any stats functions right now; just getting the table together is the main goal, to eventually turn this into a visualization. Thanks for your help!

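A sketch of one way to do this without appendcols, assuming both sourcetypes live in the same index (hypothetically called jira here): search both sourcetypes at once, copy the static issue attributes onto every row sharing a key with eventstats, then keep only the SLA rows:

index=jira sourcetype IN ("jira:issues:json", "jira:sla:json")
| eventstats values(team_name) as team_name, values(created_weekNo) as created_weekNo,
    values(created_yearNo) as created_yearNo, values(creationDate) as creationDate by key
| where sourcetype="jira:sla:json"
| table key, team_name, created_weekNo, created_yearNo, creationDate, slaName

Because the issue attributes are static per key, values() just fills them in, and each jira:sla:json event stays its own row, so multiple slaName entries per key come out as separate rows as required.
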
Hi Community, I'm trying to extend the Levenshtein distance query in this tutorial: https://www.splunk.com/en_us/blog/tips-and-tricks/you-can-t-hyde-from-dr-levenshtein-when-you-use-url-toolbox.html. Specifically, I'm trying to evaluate the Levenshtein distance of an email domain against multiple comparison domains in one line, with the resulting values going into a multivalue field. I tried doing this with mvmap:

| eval lev=mvmap(inspect_domains, `ut_levenshtein(ut_domain, inspect_domains)`)

where inspect_domains is the multivalue field containing the comparison domains, and ut_levenshtein is a search macro in the URL Toolbox app. This returns an error: "Error in 'eval' command: The expression is malformed. Expected )."

To my eye, the parentheses appear to be balanced. I nevertheless tried adding and removing parentheses to make Splunk happy, but no combination of parentheses seems to work. Any ideas?

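A possible explanation, offered as an assumption rather than a confirmed diagnosis: macros expand before the search runs, and if ut_levenshtein expands to a whole search command (or pipeline) rather than a bare eval expression, the expansion lands inside mvmap(), which only accepts an expression as its second argument, hence the malformed-expression error. A common workaround is to expand the multivalue field into separate rows, run the macro once per row, then re-aggregate:

... | mvexpand inspect_domains
| `ut_levenshtein(ut_domain, inspect_domains)`
| stats values(ut_levenshtein) as lev by ut_domain

The output field name ut_levenshtein and the grouping key here are assumptions; check the macro's definition in the URL Toolbox app for the field it actually writes.
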
Need help writing a query.

file1.csv:

username   src_ip
John       192.168.16.35
Smith      172.167.3.43
Aram       132.56.23.3

file2.csv:

IP address        ASN    Other
192.168.16.0/24   1234   RU
172.167.3.0/24    4321   AG
132.56.23.0/24    6789   BR

Desired output:

username   src_ip          asn    other
John       192.168.16.35   1234   RU
Smith      172.167.3.43    4321   AG
Aram       132.56.23.3     6789   BR

Thanks guys!

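A sketch, assuming file2.csv is configured as a lookup definition (hypothetically named asn_lookup) whose transforms.conf entry sets match_type = CIDR(...) on the network column, so the /24 networks match single addresses; renaming the "IP address" column to something like ip_range avoids quoting issues:

| inputlookup file1.csv
| lookup asn_lookup ip_range as src_ip OUTPUT ASN as asn, Other as other
| table username, src_ip, asn, other

Without the CIDR match_type, the lookup does exact string matching, and 192.168.16.35 will never match 192.168.16.0/24, so the lookup-definition step is the part that matters.
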
Hi there, how can I make REQUEST_ID clickable in the query below? I want to pass the REQUEST_ID from query 1 to query 2 when the user clicks a REQUEST_ID in the table from query 1.

Query 1:

index=<<index_name>>
| dedup REQUEST_ID
| table USER_ID, ENTITY_TYPE, ENTITY_ID, REQUEST_ID, STATUS
| where USER_ID="123123123"

Query 2:

index=<<index_name>> "error"
| where $REQUEST_ID$

Thank you

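A minimal Simple XML sketch for the query 1 table panel (the token name request_id_tok is arbitrary): set a token from the clicked cell, then reference it in the query 2 panel:

<table>
  <search><query>... query 1 ...</query></search>
  <drilldown>
    <condition field="REQUEST_ID">
      <set token="request_id_tok">$click.value2$</set>
    </condition>
  </drilldown>
</table>

Query 2 would then filter on the token directly rather than through a bare where clause:

index=<<index_name>> "error" REQUEST_ID=$request_id_tok$

$click.value2$ holds the value of the clicked cell, and the <condition field="REQUEST_ID"> wrapper makes only that column act as the click target.
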
Hello, I have some SQL .trc binary files that need to be ingested into Splunk from a SQL Server host where a universal forwarder (UF) has already been installed. I am considering the MS SQL TA, which can convert the .trc binary files to text files and send them to the Splunk indexer/search head. My question is where I should deploy the SQL TA: on the UF, an HF, or the SH? Thank you so much; I appreciate your support with this.

Please help me fix this SPL to produce the license usage listed above. Thanks a million. This is not working for me:

index="_internal"
| stats sum(GB) as A by Date, idx
| eventstats max(A) as B by idx
| where A=B
| dedup A idx
| sort idx
| table Date, A, idx

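A sketch of a likely fix, under the assumption that the goal is each index's peak daily license usage: the search above never restricts _internal to the license usage log and never creates the GB or Date fields, so the stats has nothing to sum. Something along these lines should get closer:

index=_internal source=*license_usage.log type=Usage
| eval GB=b/1024/1024/1024
| eval Date=strftime(_time, "%Y-%m-%d")
| stats sum(GB) as A by Date, idx
| eventstats max(A) as B by idx
| where A=B
| dedup A idx
| table Date, A, idx
| sort idx

type=Usage events in license_usage.log carry the indexed bytes in the b field and the index name in idx, which is where GB and idx have to come from before the rest of the pipeline can work.
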
I'm using tstats on an accelerated data model that is built off a summary index. Everything works as expected when querying both the summary index and the data model, except in one exceptionally large environment that produces 10-100x more results when running dc().

This works fine in that environment and produces ~17,000,000:

| tstats summariesonly=true count(assets.hostname) from datamodel="Summary_Host_Data" where (earliest=-1d latest=now)

This produces 0 results, when it should be around 400,000:

| tstats summariesonly=true dc(assets.hostname) from datamodel="Summary_Host_Data" where (earliest=-1d latest=now)

even though the summary index itself works fine and produces ~400,000:

index=summary_host_data earliest=-1d | stats dc(hostname)

Finally, if I search over 6 hours instead of 1d, I do get results from the tstats with dc(). Is there some type of limit I'm running into with dc()? Or is there something else going on?

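One thing worth trying, offered as a diagnostic sketch rather than a confirmed fix: exact dc() has to track every distinct value and can hit memory ceilings at this scale, while estdc() uses an approximate-cardinality estimate with a small, fixed memory footprint:

| tstats summariesonly=true estdc(assets.hostname) from datamodel="Summary_Host_Data" where (earliest=-1d latest=now)

If estdc() returns roughly the expected 400,000 where dc() returns 0, that points at a memory/limits ceiling (for example the stats-related settings in limits.conf) rather than a problem with the data model summaries themselves.
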