All Topics

Hi there! I want to add columns to this table that I copied from the docs about timewrap. I want to add columns that have the averages for each field (accessories, sports, strategy, etc.) across the timewrapped columns. Basically, a column for the average of ACCESSORIES_S1, ACCESSORIES_S0, etc., and then a column for the average of SPORTS_S1, SPORTS_S0, etc., and a column for the average of STRATEGY_S1, STRATEGY_S0, etc. Additionally, I eventually want to use these averages as a trigger for an alert when the counts on these (i.e., accessories, sports, strategy, etc.) surpass the average. Long story short, I have an arbitrary number of fields, with a count on those fields, and I want to alert when the count on those fields exceeds the average, without having to set up multiple alerts for each field, because I don't know what the fields are going to be ahead of time and the field names can change. @mattymo, your multipart article on timewrap and Cyclical Statistical Forecasts and Anomalies has helped me so much; can you please help me with this application of timewrap? Thank you!
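A minimal sketch of one way to get a per-category average and an alert trigger without naming the fields, assuming the counts can be produced with timechart instead of timewrap (the index, span, and category field below are placeholders):

index=web_sales earliest=-30d@d
| timechart span=1d count BY category
| untable _time category daily_count
| eventstats avg(daily_count) AS avg_count BY category
| where daily_count > avg_count AND _time >= relative_time(now(), "-1d@d")

Saved as an alert, this fires one row per category whose latest daily count exceeds that category's own average, so new or renamed categories are picked up automatically.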
I'm using the Webtools Add-on to do a GET request for each row on a keyword field, but I want to combine the curl results with the initial data. I.e., the initial data returns _time, keyword, hostname, etc., the curl request returns curl_message, curl_status, etc., and I want my final table to be _time, keyword, hostname, curl_message, curl_status. Right now I'm able to get the curl response using map search="uri=.../$keyword$", but that only returns the curl output. I do see that you can use datafield without a map command to do multiple curl requests (|curl uri=... datafield=keyword) and it would have my desired output, but that made the uri look like uri=.../?keyword. I need the uri to look like uri=.../keyword instead. Any help/tips would be appreciated.
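A hedged sketch of one workaround, assuming the Webtools curl command behaves as described in the post: inside map, the outer row's values are available as $tokens$, so they can be re-attached with eval (the index, sourcetype, URI, and maxsearches value are placeholders):

index=web_logs sourcetype=access
| table _time keyword hostname
| map maxsearches=100 search="| curl uri=\"https://api.example.com/lookup/$keyword$\" | eval _time=\"$_time$\", keyword=\"$keyword$\", hostname=\"$hostname$\" | table _time keyword hostname curl_message curl_status"

Whether $_time$ substitutes cleanly inside map may need testing; if not, format it with strftime before the map and re-parse it afterwards.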
Hey, in a dashboard I need a panel that gives the user an option to download EVERY field of a specific index. Now, this index has over 100 fields. Can I use the Events panel on the dashboard to show all the fields (admittedly a small view due to the volume of fields), and then the user can export the respective results from the given panel? Many thanks, Patrick
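Not a definitive answer, but a sketch of one option: instead of the Events viewer (whose export is the raw events), use a table panel with table * so the panel's Export button emits every extracted field. The index name and time token below are placeholders, and the panel goes inside an existing <row>:

<panel>
  <table>
    <title>Full field export</title>
    <search>
      <query>index=my_index | table *</query>
      <earliest>$time_tok.earliest$</earliest>
      <latest>$time_tok.latest$</latest>
    </search>
    <option name="count">20</option>
  </table>
</panel>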
Good afternoon - We're working with a customer that would like to ingest data from AINS FOIAXpress (https://www.ains.com/foiaxpress/). I couldn't locate a TA for this on Splunkbase, but I was wondering if anyone here knows of customers that have built this TA in the past. Any insight or direction would be greatly appreciated!
We have an outside scanning agency that is constantly doing nmap-like scans of our perimeter. It is generating a lot of log data on the perimeter Cisco firewalls. We know the IPs that the scanning is coming from; is there a way to tell the forwarders NOT to forward the log data from the firewalls for those IPs? For example, if any tcp/ip log data is seen from 1.2.3.4, don't forward it, but if it's from any other IP address, treat it normally and forward it. Thanks for any insights on this. Our Splunk SMEs are looking at Cribl to do this, but reading this thread makes me believe there are configuration settings that might address this? V/R Bob M.
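A sketch of the standard nullQueue approach, assuming parsing happens on an indexer or heavy forwarder (universal forwarders don't parse, so the props/transforms have to live where the data is cooked). The sourcetype name and the second IP are placeholders:

# props.conf
[cisco:asa]
TRANSFORMS-drop_scanners = drop_scanner_ips

# transforms.conf
[drop_scanner_ips]
REGEX = 1\.2\.3\.4|5\.6\.7\.8
DEST_KEY = queue
FORMAT = nullQueue

The REGEX is applied to _raw by default, so anchoring it to wherever the source IP appears in the Cisco events will avoid dropping unrelated traffic that merely mentions those addresses.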
I have 2 logs. The first statement gets logged when a pod dies. The second gets logged when my app gets notified. Sometimes the pod dies and my app doesn't get notified. I want to write an alert for when the pod dies but my application doesn't get notified.

Log1 (when a pod dies):

index=log1 "Forced deletion of orphaned Pod" | rex "podnamespace/(?<machineName>(.*?))\s"

Log2 (when my app gets notified):

index=conversation "*Clearing DMC pod" sourcetype="cui-orchestration-log" podname=<podNameWhichDied>

I tried several options, but I am unable to refer to the field 'machineName' created by rex in Log1 inside Log2, even though machineName has the right pod name:

index=log1 "Forced deletion of orphaned Pod" | rex "podnamespace/(?<machineName>(.*?))\s" | stats count as podsCrashedCount by machineName | appendcols [search index=log2 "App is deleting pod" podname=$machineName | stats dc(podname) as deletedInApp] | where podsCrashedCount!=deletedInApp
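A hedged sketch of one way to avoid appendcols (which pairs rows by position rather than by pod): search both indexes in one pass and compare per pod. The index names, strings, and field names are taken from the post and may need adjusting:

(index=log1 "Forced deletion of orphaned Pod") OR (index=conversation sourcetype="cui-orchestration-log" "Clearing DMC pod")
| rex "podnamespace/(?<machineName>.*?)\s"
| eval pod=coalesce(machineName, podname)
| stats count(eval(searchmatch("Forced deletion of orphaned Pod"))) AS crashed, count(eval(searchmatch("Clearing DMC pod"))) AS notified BY pod
| where crashed > 0 AND notified = 0

Saved as an alert over a suitable time window, this returns one row per pod that crashed without a matching notification.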
I would like an alert to be triggered and sent to email if a particular panel in the dashboard has count=0. How should we achieve that? Please help.
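A minimal sketch, assuming the panel's search is saved separately as an alert: append a stats count so the search always returns a row, and keep only the zero case (the base search is a placeholder):

<panel's base search>
| stats count
| where count=0

Set the alert to trigger when the number of results is greater than 0 and attach the "Send email" action; alternatively, drop the where clause and use the custom trigger condition "search count=0".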
Hello guys, I made a dashboard where the users see their tasks to do. I would like to add a button or checkbox on each event that the user can check when the task is done, and have the result stay saved. How can I do that? Regards,
Hello All, I have JSON data and sometimes it is nested and sometimes it is not; whenever it is a nested array I have a {} in the field name, and when it's not there is no {}. I'm trying to make a field alias to a common field name, but I want to write a single alias that maps the field name to a new name whether or not {} is present. Any leads on how I can do it? (Either remove {} before the fields are extracted at search time, or alias it in props.conf to a new name.) E.g.: items{}.description one time and items.description the other time --> rename to items.description during search time without using the rename command, OR remove {} before fields are extracted on the search head. P.S.: I don't want to do index-time field extraction. #fieldaliasing #json
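A hedged sketch of one search-time option in props.conf on the search head: alias the array form onto the plain name, so whichever variant the event produces ends up as items.description (the sourcetype is a placeholder; field names containing braces usually need quoting):

[my_json_sourcetype]
FIELDALIAS-items_description = "items{}.description" ASNEW "items.description"

ASNEW leaves items.description alone when it already exists (the non-nested case) and only fills it from items{}.description when the array form is present.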
Hi guys, does anyone know whether it is possible to have Splunk show the actual value of an episode's field variable instead of showing the variable itself? I am essentially trying to prefill a custom send-email action with data that already comes inside each episode (these are referred to as common fields by Splunk). I have tried various ways, including passing the variable to alert_actions.conf and editing the HTML, but clearly the data from alert_actions.conf is passed as a pure string to some other script (I'm assuming it's Splunk's JavaScript, which then processes the data further). Also, I know that the variable that is displayed is processed by a Python script upon pressing the "Done" button, and it indeed takes the correct data; however, my problem is to have the variable's value already prefilled inside the input boxes prior to clicking the Done button. I am also attaching a screenshot for a better understanding of my situation. Note: %email_address% and %message% would be examples of fields that are already contained within each episode.
Hi All, can we retrieve the exception count without any predefined field and without creating any field? Basically, I just want each exception count in a table where the row is the exception name and the count is the column. Consider exceptions such as NullPointer, IllegalArgument, etc. Please comment with a query that would be helpful.
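There has to be some extraction at search time, but it can be a throwaway rex rather than a predefined field. A sketch, assuming the exception class names appear in _raw and end in "Exception" (the index and regex are assumptions):

index=app_logs "Exception"
| rex "(?<exception_name>\b[A-Za-z][A-Za-z0-9_]*Exception)\b"
| where isnotnull(exception_name)
| stats count BY exception_name
| sort - count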
I'm looking for some clarity about the recommended process for installing UFs in a VDI environment (e.g. Azure Virtual Desktop, VMware Horizon, etc.). I'm familiar with the host image install and clone process that is outlined in the Splunk docs link below. Is this process recommended for deploying VDIs, i.e. install on the parent VDI image and clone down to the child VDI sessions? Please advise if there are any special considerations for VDI vs. traditional VM creation/deployment. Integrate a universal forwarder onto a system image - Splunk Documentation
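The documented flow, roughly, is to configure the UF on the golden image and then clear its instance identity before sealing it, so every clone registers as a new host. A sketch for a Windows parent image (the install path is an assumption):

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" stop
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" clone-prep-clear-config
rem snapshot/seal the image here; each child VDI session then starts the UF and gets a fresh GUID

For non-persistent VDI pools it is also worth pointing the image at a deployment server, so the children pick up app and config changes without rebuilding the golden image.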
Hello, I am trying to add multiple lines in Lookup Editor:
1. I open a lookup file in Lookup Editor.
2. I go to the end of the file, right-click and select "Insert a row after".
3. I copy a list from a text file and paste it into the first cell of the new row:

864BF3124938CFF63218BA1D5E7CB8B7
870399C2A81CF13085F99A75AD4C650B
358C46524D43B77AE7A7726481EB8FC6
f92edeb8298c55211bc4b6cc0dad1571

Result: the newly added hashes are put into one line/cell.
Expected behaviour: three more rows are added and the first cells are filled with the inserted hashes; the other cells in the rows remain empty.
Several colleagues with the same Splunk installation, the same list of hashes, the same lookup file and the same browser could replicate the expected behaviour, so I assume that there might be some aspect we have not looked at yet. Thanks for any ideas on this... Mathias
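Not an explanation of the paste behaviour, but a possible workaround while it is investigated: append the rows with a search instead of pasting. The lookup file name and column name are placeholders and must match the real lookup:

| makeresults count=4
| streamstats count AS n
| eval hash=case(n=1,"864BF3124938CFF63218BA1D5E7CB8B7", n=2,"870399C2A81CF13085F99A75AD4C650B", n=3,"358C46524D43B77AE7A7726481EB8FC6", n=4,"f92edeb8298c55211bc4b6cc0dad1571")
| table hash
| outputlookup append=true my_hashes.csv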
Hello. I am using the Jamf Pro Add-on for Splunk (version 2.10.4) to import Jamf data. https://splunkbase.splunk.com/app/4729/ Here, the following error may occur:

<Error><error>The XML was too long</error></Error>

Is there any way to resolve this error?

The following is a detailed description. The inputs are set up as follows:
API Call Name: custom
Search Name: /JSSResource/mobiledevices
The number of records is about 60,000, and about 200 of them produce the above error. According to the information on the following site, records with more than 10,000 characters seem to cause the above error: https://community.jamf.com/t5/jamf-pro/splunk-jamfpro-api-getting-started/m-p/169054 There is also information that Splunk does not capture data longer than 10,000 characters by default, but we have not changed that setting in Splunk.
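If the truncation is happening on the Splunk side, the 10,000-character default that the Jamf post refers to matches Splunk's TRUNCATE setting, which could be raised for the add-on's sourcetype in props.conf wherever parsing happens. This is only a guess at the cause - if the error string is returned by the Jamf API itself, it won't help - and the sourcetype name below is an assumption that should be checked against what the add-on actually writes:

[jamfpro:mobiledevices]
# 0 = no limit; a large explicit value such as 100000 is safer
TRUNCATE = 0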
Hello all, after upgrading Splunk to 8.1.0 we have observed some issues with LDAP authentication. The users are not able to log in for some time, and after 10-15 mins the credentials work. Once we disabled LDAP authentication on a couple of servers, it is now working. We don't see any errors/warnings related to LDAP in Splunk. Can someone please help and let me know if there is any known issue with this? Thanks
Hi all, from the available documentation I am not getting how to practically update a TA via the Deployment Server (i.e. distribute a newer version to the UFs via the DS). If it matters, it is about the Add-on for Linux and Unix. I would imagine that it looks like this:
1) Get the TA onto the Deployment Server via the GUI - go to "install app from file" -> upload the downloaded .tgz file from Splunkbase -> restart Splunk
2) Back up the used TA (older version)
3) Copy the TA (newer version) from the apps folder into the deployment-apps folder (via cp -R)
4) Reload the Deployment Server via splunk reload deploy-server
5) Check if data is still being onboarded properly
Am I missing anything? Is this approach valid?
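That outline matches the usual flow. A small sketch of steps 2-4 on the deployment server, assuming a default install path and the Splunk_TA_nix folder name (the .tgz file name and version are placeholders); extracting the downloaded .tgz straight into deployment-apps also works and avoids the GUI install and restart in step 1:

# 2) back up the currently deployed version somewhere outside deployment-apps
cp -R $SPLUNK_HOME/etc/deployment-apps/Splunk_TA_nix /tmp/Splunk_TA_nix.bak
# 3) stage the new version
tar -xzf /tmp/splunk-add-on-for-unix-and-linux_<version>.tgz -C $SPLUNK_HOME/etc/deployment-apps/
# 4) push to the deployment clients
$SPLUNK_HOME/bin/splunk reload deploy-server

Whether the clients restart after the push depends on the restartSplunkd setting in the relevant serverclass.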
Hi there, I have created a dozen statistics dashboards with search/filtering and drilldown for customers using a production voice platform. Each of those dashboards includes multiple panels, each delivering 10+ metrics which are consistent in naming across the different dashboards. Just for example, there can be "Offered calls", "Picked up calls", "Lost calls", "Waiting time", etc. Those reports are today in French, with rename commands in the query so that the data reads nicely for the French users, as the source is in English. I'd like to have those dashboards available in other languages. There is the basic option to clone all those reports and suffix their names with a language code, but that would be tedious and would not offer a dynamic option for users to switch language. I would like some dynamic option using a language input form, or detecting the language of the Splunk user, so that all the fields/metrics used in those dashboards are translated. Basically, have a translation table I could maintain while new fields/metrics are added and a new language is required. Is that possible? Frankly, I couldn't find even the beginning of an idea of how to achieve this.
Hi Team, I have a dashboard where, when I click on a row, it drills down to another URL for a different dashboard. Now I want to capture the URL that it's using in the drilldown and store it in a token. How do I do it?

My sample code:

<row>
  <panel depends="$drilldown_display$">
    <table>
      <title>Top RequestIds for $eptok$</title>
      <search>
        <query>$instance_select$ organizationId=CASE($organizationId$) earliest=$earliest$ latest=$latest$ sourcetype=CASE(applog*:axapx) [search $instance_select$ organizationId=CASE($organizationId$) earliest=$earliest$ latest=$latest$ sourcetype=CASE(applog*:gslog) "CPU_ENFORCED"| stats count by requestId | fields requestId | format ] entryPoint="$eptok$" | table requestId runTime | sort -runTime</query>
      </search>
      <option name="count">10</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">row</option>
      <option name="rowNumbers">false</option>
      <option name="wrap">true</option>
      <drilldown>
        <set token="drilldown_display1">block</set>
        <set token="reqtok">$row.requestId$</set>
        <link target="_blank">https://splunk-web.monitoring.com/en-US/app/publicSharing/Dashboard_second?form.instance=$instance$&amp;form.orgId=$organizationId$&amp;form.requestId=$reqtok$</link>
      </drilldown>
    </table>
  </panel>
</row>
<row>

If you look at the above code, I am drilling down to Dashboard_second based on the row I click. I want to capture this URL for Dashboard_second in a token and display it in my HTML panel. How do we achieve this?

My HTML panel code:

<panel>
  <html>
    <h1 class="SectionHeader">Panel to display drilldown URL </h1>
    <div style="float:left; width:calc(95% - 60px);" class="pageInfo">
      <pre>
===========INTERNAL==============
<b>DrillDown URL : </b> $MyUrl$
===========INTERNAL==============
      </pre>
    </div>
  </html>
</panel>
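A minimal sketch, assuming the same URL can simply be written into a token at drilldown time with an extra <set> (the existing <link> stays as it is); $row.requestId$ is used in the new token instead of $reqtok$ because both tokens are set in the same drilldown:

<drilldown>
  <set token="drilldown_display1">block</set>
  <set token="reqtok">$row.requestId$</set>
  <set token="MyUrl">https://splunk-web.monitoring.com/en-US/app/publicSharing/Dashboard_second?form.instance=$instance$&amp;form.orgId=$organizationId$&amp;form.requestId=$row.requestId$</set>
  <link target="_blank">https://splunk-web.monitoring.com/en-US/app/publicSharing/Dashboard_second?form.instance=$instance$&amp;form.orgId=$organizationId$&amp;form.requestId=$reqtok$</link>
</drilldown>

Once the token is set, $MyUrl$ renders in the HTML panel; adding depends="$MyUrl$" to that panel keeps it hidden until a row has been clicked.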
Hello Splunkers, from time to time we observe a somewhat weird state of our indexer cluster and want to understand its reason. There are 3 indexers in the cluster (let's say z1el1, z1el2, z1el3), one of which seems to be overloaded for some time (see the screenshot). Internal logs do not show anything wrong or critical. The indexing rate goes up on one indexer and comes back to the normal state after a few hours. The load balancer was checked some time ago by the responsible team and it seems to be OK. Can someone point us in the right direction on what else to check/do?
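Not a diagnosis, but two sketch searches that may help narrow it down: compare the ingest volume per indexer over time, and count how many forwarders are connected to each indexer (a skewed forwarder distribution, or a few very chatty forwarders pinned to one peer, is a common cause):

| tstats count WHERE index=* earliest=-4h BY splunk_server _time span=10m
| timechart span=10m sum(count) BY splunk_server

index=_internal source=*metrics.log* group=tcpin_connections earliest=-1h
| stats dc(sourceIp) AS connected_forwarders BY host

If the forwarder counts look balanced, lowering autoLBFrequency (or enabling autoLBVolume) in the forwarders' outputs.conf can help spread a small number of heavy senders more evenly.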
I have a lookup named tc with a field indicator. I wanted to search that indicator field in my firewall sourcetype with wildcards, as below:

[|inputlookup tc|dedup indicator|eval indicator1="*".indicator."*"|table indicator1|format] |where sourcetype="firewall"

But this search is not efficient and is time-consuming. Also, I was not able to use union or join, as I have to look for a field with a wildcard. Kindly suggest any alternatives.
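A hedged sketch of one alternative: define the lookup with wildcard matching in transforms.conf and let the lookup command do the matching after an efficient base search, instead of expanding every indicator into the base search. Field names are placeholders, and the indicator values in the CSV would need to carry the wildcards themselves (e.g. *evil.example.com*):

# transforms.conf
[tc]
filename = tc.csv
match_type = WILDCARD(indicator)
max_matches = 1

# search
sourcetype=firewall
| lookup tc indicator AS url OUTPUT indicator AS matched_indicator
| where isnotnull(matched_indicator)

This keeps the base search cheap, but the lookup still runs per event; if the indicators are plain IPs or domains that should match a specific field exactly, a normal (non-wildcard) lookup or a simple subsearch on that field will be faster still.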