All Topics


Hi guys, does anyone know whether it is possible to have Splunk show the actual value of an episode's field variable instead of showing the variable itself? I am essentially trying to prefill a custom send-email action with data that already comes inside each episode (Splunk refers to these as common fields). I have tried various approaches, including passing the variable to alert_actions.conf and editing the HTML, but clearly the data from alert_actions.conf is passed as a pure string to some other script (I assume Splunk's JavaScript, which then processes the data further). I also know that the displayed variable is processed by a Python script upon pressing the "Done" button, and it does pick up the correct data; my problem is getting the variable's value prefilled inside the input boxes before the Done button is clicked. I am also attaching a screenshot for a better understanding of my situation. Note: %email_address% and %message% would be examples of fields that are already contained within each episode.
Hi all, can we retrieve exception counts without any predefined field and without creating a field extraction? Basically, I just want each exception's count in a table, where the rows are the exception names and count is the column. Consider exceptions such as NullPointer, IllegalArgument, etc. Please suggest a query that would help.
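A minimal SPL sketch of one way to do this, assuming the exception names appear in the raw event text as words ending in "Exception" (the index name here is a placeholder):

index=your_app_index "Exception"
| rex max_match=0 "(?<Exception>\b\w+Exception\b)"
| stats count by Exception
| sort - count

The rex command extracts every token ending in "Exception" at search time, so no predefined field or field extraction is needed, and stats then produces one row per exception name with its count.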
I'm looking for some clarity on the recommended process for installing UFs in a VDI environment (e.g. Azure Virtual Desktop, VMware Horizon, etc.). I'm familiar with the host-image install-and-clone process outlined in the Splunk docs link below. Is this the recommended process for deploying to VDIs: install on the parent VDI image and clone down to the child VDI sessions? Please advise if there are any special considerations for VDI vs. traditional VM creation/deployment. Integrate a universal forwarder onto a system image - Splunk Documentation
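For reference, the docs-described flow on the parent image looks roughly like this on Linux (paths assume a default install; use the equivalent commands on Windows); the key step is clearing instance-specific state so each clone registers as a unique forwarder:

/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk clone-prep-clear-config

After that, shut the parent down and take the image; each child session should generate its own GUID and server name on first start. As far as I know the same process applies to VDI parent images, but it would be good to confirm any VDI-specific caveats, e.g. non-persistent sessions re-registering with the deployment server on every boot.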
Hello, I am trying to add multiple lines in Lookup Editor:
1. I open a lookup file in Lookup Editor.
2. I go to the end of the file, right-click, and select "Insert a row after".
3. I copy a list from a text file and paste it into the first cell of the new row:

864BF3124938CFF63218BA1D5E7CB8B7
870399C2A81CF13085F99A75AD4C650B
358C46524D43B77AE7A7726481EB8FC6
f92edeb8298c55211bc4b6cc0dad1571

Result: the newly added hashes are all put into one line/cell. Expected behaviour: three more rows are added and their first cells are filled with the pasted hashes, while the other cells in those rows remain empty. Several colleagues with the same Splunk installation, the same list of hashes, the same lookup file, and the same browser could replicate the expected behaviour, so I assume there is some aspect we have not looked at yet. Thanks for any ideas on this... Mathias
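As a workaround while this is investigated, the rows can be appended with SPL instead of pasting; a sketch, assuming the lookup file is named my_hashes.csv and its first column is called hash (adjust both names to your file):

| makeresults count=4
| streamstats count as n
| eval hash=case(n==1, "864BF3124938CFF63218BA1D5E7CB8B7", n==2, "870399C2A81CF13085F99A75AD4C650B", n==3, "358C46524D43B77AE7A7726481EB8FC6", n==4, "f92edeb8298c55211bc4b6cc0dad1571")
| fields hash
| outputlookup append=true my_hashes.csv

With append=true, outputlookup adds the new rows while leaving the existing ones in place; any other columns in the appended rows stay empty.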
Hello. I am using the Jamf Pro Add-on for Splunk (version 2.10.4) to import Jamf data. https://splunkbase.splunk.com/app/4729/ The following error may occur: <Error><error>The XML was too long</error></Error> Is there any way to resolve this error? Here is a detailed description. The input is set up as follows: API Call Name: custom; Search Name: /JSSResource/mobiledevices. There are about 60,000 records, and about 200 of them hit the above error. According to the information on the following site, records with more than 10,000 characters seem to cause this error: https://community.jamf.com/t5/jamf-pro/splunk-jamfpro-api-getting-started/m-p/169054 There is also information that Splunk does not capture data longer than 10,000 characters by default, but we have not configured any such setting in Splunk.
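If the Splunk-side limit is the factor, that 10,000-character default comes from the TRUNCATE setting in props.conf, which applies even if you never set it explicitly; a sketch of an override, using a hypothetical sourcetype name (check what sourcetype your input actually writes, and apply this on the instance that parses the data, i.e. the heavy forwarder or indexer):

# props.conf
[jamf:pro:mobiledevice]
# Default is 10000; raise it, or set 0 to disable truncation entirely
TRUNCATE = 100000

Note that if the "The XML was too long" message is produced by the Jamf API itself rather than by Splunk, this setting will not help, and the fix would have to come from the add-on or API side.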
Hello all, after upgrading Splunk to 8.1.0 we have observed some issues with LDAP authentication. Users are not able to log in for some time, and after 10-15 minutes the credentials work. Once we disabled LDAP authentication on a couple of servers, it now works. We don't see any errors or warnings related to LDAP in Splunk. Can someone please help and let me know whether there is any known issue here? Thanks
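Splunk's own LDAP errors usually surface in splunkd.log rather than the web UI; a diagnostic search that may help, run around the time of a failed login:

index=_internal sourcetype=splunkd (component=AuthenticationManagerLDAP OR component=ScopedLDAPConnection)
| table _time host log_level component _raw
| sort - _time

Timeouts while splunkd walks a slow or unreachable LDAP server (for example after an authentication strategy change in the upgrade) would show up here even when the login page itself gives no error.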
Hi all, from the available documentation I am not getting how to practically update a TA via the deployment server (i.e. distribute a newer version to the UFs via the DS). If it matters, it is the Add-on for Linux and Unix. I would imagine it looks like this: 1) Get the TA onto the deployment server via the GUI: go to "Install app from file" -> upload the .tgz file downloaded from Splunkbase -> restart Splunk. 2) Back up the currently used (older) version of the TA. 3) Copy the newer version of the TA from the apps folder into the deployment-apps folder (via cp -R). 4) Reload the deployment server via splunk reload deploy-server. 5) Check whether data is still being onboarded properly. Am I missing anything? Is this approach valid?
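That sequence looks plausible; one note is that step 1 is optional, since you can skip the GUI and extract the Splunkbase .tgz straight into deployment-apps, which avoids keeping a second copy under etc/apps on the DS. A sketch (the .tgz filename is a placeholder; the archive unpacks into the app's own folder name, e.g. Splunk_TA_nix):

# 2) Back up the currently deployed version
cp -R $SPLUNK_HOME/etc/deployment-apps/Splunk_TA_nix /tmp/Splunk_TA_nix.bak
# 3) Unpack the new version straight into deployment-apps (overwrites in place)
tar -xzf splunk-add-on-for-unix-and-linux_*.tgz -C $SPLUNK_HOME/etc/deployment-apps/
# 4) Tell the DS to rescan its apps and push changes to matching clients
$SPLUNK_HOME/bin/splunk reload deploy-server

Keep any customizations in the TA's local/ directory rather than default/ so an upgrade never overwrites them, and confirm the serverclass has "restart splunkd" enabled if the new inputs need it.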
Hi there, I have created a dozen statistics dashboards with search/filtering and drilldown for customers using a production voice platform. Each dashboard includes multiple panels, each delivering 10+ metrics whose names are consistent across the different dashboards, for example "Offered calls", "Picked up calls", "Lost calls", "Waiting time", etc. Today these reports are in French, with rename commands in the queries so the data reads nicely for French users while the source is in English. I'd like these dashboards to be available in other languages. There is the basic option of cloning all the reports and suffixing their names with a language code, but that would be tedious and would not give users a dynamic way to switch language. I would like a dynamic option, using a language input form or detecting the Splunk user's language, so that all the fields/metrics used in these dashboards are translated. Basically, I want a translation table I can maintain as new fields/metrics are added and new languages are required. Is that possible? Frankly, I couldn't find even the beginning of an idea of how to achieve this.
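One pattern that may work: keep the source metric names in English, add a language dropdown input that sets a token (say $lang$), and translate the displayed values through a lookup you maintain. A sketch, in which everything is a placeholder except the technique, assuming a lookup metric_i18n.csv with columns metric_en, lang, label, and a search that already yields one row per metric:

index=voice_platform sourcetype=calls
| stats count as value by metric
| eval lang="$lang$"
| lookup metric_i18n metric_en as metric, lang OUTPUT label
| eval metric=coalesce(label, metric)
| fields - lang label

Rows whose metric has a translation for the selected language get relabeled, and anything missing from the table falls back to English, so the lookup can grow incrementally as metrics and languages are added. Translating column headers rather than row values is harder, but token-driven rename can cover it, e.g. | rename value as "$value_label$" with the label token populated by a hidden input backed by the same lookup.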
Hi Team, I have a dashboard where, when I click on a row, it drills down to the URL of a different dashboard. Now I want to capture the URL it uses in the drilldown and store it in a token. How do I do that? My sample code:

<row>
  <panel depends="$drilldown_display$">
    <table>
      <title>Top RequestIds for $eptok$</title>
      <search>
        <query>$instance_select$ organizationId=CASE($organizationId$) earliest=$earliest$ latest=$latest$ sourcetype=CASE(applog*:axapx) [search $instance_select$ organizationId=CASE($organizationId$) earliest=$earliest$ latest=$latest$ sourcetype=CASE(applog*:gslog) "CPU_ENFORCED" | stats count by requestId | fields requestId | format ] entryPoint="$eptok$" | table requestId runTime | sort -runTime</query>
      </search>
      <option name="count">10</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">row</option>
      <option name="rowNumbers">false</option>
      <option name="wrap">true</option>
      <drilldown>
        <set token="drilldown_display1">block</set>
        <set token="reqtok">$row.requestId$</set>
        <link target="_blank">https://splunk-web.monitoring.com/en-US/app/publicSharing/Dashboard_second?form.instance=$instance$&amp;form.orgId=$organizationId$&amp;form.requestId=$reqtok$</link>
      </drilldown>
    </table>
  </panel>
</row>

As the code above shows, I am drilling down to Dashboard_second based on the row I click. I want to capture the URL for Dashboard_second in a token and display it in my HTML panel. How do we achieve this? My HTML panel code:

<panel>
  <html>
    <h1 class="SectionHeader">Panel to display drilldown URL</h1>
    <div style="float:left; width:calc(95% - 60px);" class="pageInfo">
      <pre>
===========INTERNAL==============
<b>DrillDown URL : </b> $MyUrl$
===========INTERNAL==============
      </pre>
    </div>
  </html>
</panel>
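One approach that may work: set the URL into a token inside the same <drilldown>, building it from $row.requestId$ directly (a token like $reqtok$ that is set within the same drilldown may not be resolvable inside it). A sketch of the modified drilldown:

<drilldown>
  <set token="drilldown_display1">block</set>
  <set token="reqtok">$row.requestId$</set>
  <set token="MyUrl">https://splunk-web.monitoring.com/en-US/app/publicSharing/Dashboard_second?form.instance=$instance$&amp;form.orgId=$organizationId$&amp;form.requestId=$row.requestId$</set>
  <link target="_blank">https://splunk-web.monitoring.com/en-US/app/publicSharing/Dashboard_second?form.instance=$instance$&amp;form.orgId=$organizationId$&amp;form.requestId=$row.requestId$</link>
</drilldown>

After the first row click, $MyUrl$ is defined and the HTML panel will render it; you could also make the HTML panel depend on $MyUrl$ so it only appears once a row has been clicked.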
Hello Splunkers, from time to time we observe a somewhat weird state in our indexer cluster and want to understand its cause. There are 3 indexers in the cluster (let's say z1el1, z1el2, z1el3), one of which appears to be overloaded for some time (see the screenshot). The internal logs do not show anything wrong or critical. The indexing rate goes up on one indexer and comes back to the normal state after a few hours. The load balancer was checked some time ago by the responsible team and seems to be OK. Can someone point us in the right direction on what else to check or do?
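Two searches over the internal metrics may help narrow this down, comparing per-indexer indexing throughput and how many forwarder connections each indexer holds over the same window:

index=_internal source=*metrics.log* group=thruput name=index_thruput
| timechart span=5m avg(instantaneous_kbps) by host

index=_internal source=*metrics.log* group=tcpin_connections
| timechart span=5m dc(sourceIp) by host

If the busy indexer also shows a disproportionate number of forwarder connections, the usual suspects are forwarder-side load balancing (autoLBFrequency, or a few very high-volume forwarders with sticky, long-lived streams) rather than the external load balancer.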
I have a lookup named tc with a field indicator. I wanted to search for that indicator field in my firewall sourcetype with wildcards, as below:

[| inputlookup tc | dedup indicator | eval indicator1="*".indicator."*" | table indicator1 | format]
| where sourcetype="firewall"

But this search is not efficient and is time-consuming. I was also not able to use union or join, as I have to look for a field with wildcards. Kindly suggest any alternatives.
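One alternative that may scale better: a wildcard lookup applied with the lookup command, instead of expanding a giant OR into the base search. The lookup and field names below come from your post; the rest is an assumption sketch. First materialize a wildcarded copy of the list once:

| inputlookup tc | dedup indicator | eval indicator="*".indicator."*" | outputlookup tc_wild.csv

Then create a lookup definition for tc_wild.csv (Settings -> Lookups -> Lookup definitions) with the advanced option match_type set to WILDCARD(indicator), which corresponds to this transforms.conf stanza:

[tc_wild]
filename = tc_wild.csv
match_type = WILDCARD(indicator)

Finally, match it against the raw event text:

sourcetype="firewall"
| lookup tc_wild indicator as _raw OUTPUT indicator as threat_match
| where isnotnull(threat_match)

This keeps the wildcard matching inside the lookup engine rather than the search parser; how much it helps depends on data volume and list size, so it is worth benchmarking against the subsearch approach.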
Hi all, we are using the Splunk Add-on for Box to collect Box logs. With this add-on, it appears that only the Events, Users, Folders, and Groups Box API endpoints are available. Furthermore, it seems that only "standard" columns can be retrieved from each API (API Reference - Box Developer Documentation). 1. Is there any way to get logs from other endpoints, such as COLLABORATIONS or DOWNLOADS? 2. Is it possible to retrieve all available columns, for example FILES (FULL)? Best regards,
Hi guys, we have 1 indexer and 1 search head in each of 2 different datacenter locations (let's say DC-A and DC-B). Since DC-A is being decommissioned, we have been directed to copy the indexed data from the indexer in DC-A to the indexer in DC-B. The indexer in DC-B has enough SAN storage to hold the indexed data from both datacenters, but we want to move/store the data in such a way that the SH in DC-B cannot search the data from DC-A. So basically, I am looking at how to store data on the indexer but make it non-searchable. Any ideas how best to proceed with this? Appreciate the help!! Thanks, Neerav Mathur
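One way to get "stored but not searchable", sketched under the assumption that you can copy at the bucket level: keep the DC-A buckets in a holding area outside the DC-B indexer's index paths ($SPLUNK_DB), so splunkd never scans them, and only thaw what you later need (host names and paths below are placeholders):

# Copy warm/cold buckets from the DC-A indexer into a holding area on DC-B
rsync -av dca-idx:/opt/splunk/var/lib/splunk/defaultdb/colddb/ /data/dc_a_archive/defaultdb/

# Later, to make a bucket searchable again, move it into the matching
# index's thaweddb and rebuild it:
# cp -R /data/dc_a_archive/defaultdb/db_... $SPLUNK_DB/defaultdb/thaweddb/
# /opt/splunk/bin/splunk rebuild $SPLUNK_DB/defaultdb/thaweddb/db_...

If instead the data must stay live on the indexer, a softer alternative is to land it in dedicated indexes and simply not grant the DC-B roles access to them (srchIndexesAllowed in authorize.conf), accepting that admins could still search it.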
I would like to find the details of the built-in custom threats as a file (via the CLI, not via the GUI). Can I do that?
I need your help. Could you please help me generate a single query from these 3 separate queries? The index is the same in queries 1 and 2; the source types of all 3 are different. Thank you.

1. index="abc_oracle" source=audit_19c sourcetype="audit"
| eval "Database Modifications:" = "Modification on " + host, "Date and Time" = TIMESTAMP, "Type" = SQL_TEXT, "User" = DB_USER, "Source" = sourcetype
| search "Database Modifications:"="Modification on *" NOT select
| rex field=_raw "SQL_TEXT=\S(?P<Type>\W?......)\s"
| rex field=_raw "DB_USER=(?P<UserName>..........)"
| table "Date and Time", "Database Modifications:", "Type", "User", "Source"

2. index="abc_oracle" source=audit_row_19c sourcetype="audit"
| eval "Database Modifications:" = "Modification on " + host, "Date and Time" = TIMESTAMP, "Type" = SQL_TEXT, "User" = DB_USER, "Source" = sourcetype
| search "Database Modifications:"="Modification on *" NOT select
| rex field=_raw "SQL_TEXT=\S(?P<Type>\W?......)\s"
| rex field=_raw "DB_USER=(?P<UserName>..........)"
| table "Date and Time", "Database Modifications:", "Type", "User", "Source"

3. index="abc_11g" source=oracle_11g sourcetype="audit"
| eval "Database Modifications:" = "Modification on " + host, "Date and Time" = TIMESTAMP_qab, "Type" = SQL_TEXT, "User" = DB_USER, "Source" = sourcetype
| search "Database Modifications:"="Modification on *" NOT select
| rex field=_raw "SQL_TEXT=\S(?P<Type>\W?......)\s"
| rex field=_raw "DB_USER=(?P<UserName>..........)"
| table "Date and Time", "Database Modifications:", "Type", "User", "Source"

Thank you
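A sketch of one combined version, assuming the field names behave the same across the three sourcetypes; the only real differences between the three queries are the index/source pairs and TIMESTAMP vs. TIMESTAMP_qab, which coalesce can absorb:

((index="abc_oracle" (source=audit_19c OR source=audit_row_19c)) OR (index="abc_11g" source=oracle_11g)) sourcetype="audit"
| eval "Database Modifications:" = "Modification on " + host, "Date and Time" = coalesce(TIMESTAMP, TIMESTAMP_qab), "Type" = SQL_TEXT, "User" = DB_USER, "Source" = sourcetype
| search "Database Modifications:"="Modification on *" NOT select
| rex field=_raw "SQL_TEXT=\S(?P<Type>\W?......)\s"
| rex field=_raw "DB_USER=(?P<UserName>..........)"
| table "Date and Time", "Database Modifications:", "Type", "User", "Source"

The OR across the index/source pairs pulls all three data sets in one pass, and coalesce takes TIMESTAMP where it exists and falls back to TIMESTAMP_qab for the 11g source.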
Hello, I'd like to ask for help writing a query for an alert "when a user is added to a specific group and then removed from the group within 1 hour." I'm new to Splunk; any help is appreciated.
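A sketch using transaction, assuming Windows Security logs where EventCode 4728 is "member added to a security-enabled global group" and 4729 is "member removed" (the index, group name, and member field name are placeholders; check what your add-on actually extracts):

index=wineventlog EventCode IN (4728, 4729) Group_Name="YOUR_GROUP"
| transaction Member_Name startswith="EventCode=4728" endswith="EventCode=4729" maxspan=1h
| where eventcount==2
| table _time Member_Name Group_Name duration

transaction pairs an add event with the matching remove event for the same member within 1 hour, and duration shows how long the user stayed in the group; save this as an alert that triggers when the number of results is greater than zero.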
I am using Splunk DB Connect v3.7.0 and there seems to be a major security hole. I want to give some users access to some of the connections/identities. I set the permissions for what they can see, and that works. But if a user explicitly asks for a connection that they cannot see, they are still allowed to access it?! This cannot be correct?
I am trying to set up a test environment so I can practice the new SPL that I am learning. I am trying to work with botsv1. I have downloaded and installed Splunk Enterprise along with the Splunk App for Stream, TA-Suricata, and the botsv1_data_set.tgz. At this point I should be able to run "index=botsv1", which does run successfully, but it returns zero events. That makes me think I have the app installed but not the data. When I click the link on GitHub to download the botsv1.json.gz file, it opens a new Chrome browser tab rather than downloading the file; the same happens with all the individual JSON files. I know I am just doing it wrong (newbie), but how do I pull the data into Splunk so I can start searching it?
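Two things commonly bite here (a sketch, assuming the default $SPLUNK_HOME): the botsv1_data_set.tgz is a pre-indexed dataset packaged like an app, so it should be extracted under etc/apps rather than uploaded as data, and the events are from 2016, so the default "Last 24 hours" time range will show zero results.

tar -xzf botsv1_data_set.tgz -C $SPLUNK_HOME/etc/apps/
$SPLUNK_HOME/bin/splunk restart

Then search with an open time range:

index=botsv1 earliest=0

If you still want the raw botsv1.json.gz files from GitHub instead, fetch them with curl/wget or right-click the link and choose "Save link as...", since left-clicking lets Chrome open the content in a tab.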
I need help on the development side. I have a requirement to capture the logs from the file paths "care\outbound\prod" and "care\outbound\Test". The file names are the same; one set goes to the Test folder and the other to the Prod folder. Per the initial requirement, I want to capture the test data that arrives under the "care\outbound\Test" path. I need help with the coding part. Code:

index=*** doc_name= ***** "*care*"

I chose "care" as the key term: whatever files pass through the "care" folder get captured. But I need to capture only the files coming into "care\outbound\Test". Please let me know if you need more clarification.
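If these are Windows paths recorded in each event's source field, you can filter on the path instead of the bare folder name; a sketch (the index name is a placeholder since yours is masked, and backslashes in the path must be doubled inside a quoted search string):

index=your_index source="*\\care\\outbound\\Test\\*"

If the path lives in a field other than source in your data, apply the same wildcard pattern to that field. Alternatively, if only the Test data is ever needed in Splunk, it is cheaper to restrict the input itself with a monitor stanza pointed at care\outbound\Test than to filter at search time.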
It looks like the Splunk Universal Forwarder service on Linux enables CPU accounting and CPU shares. If this is enabled, another program cannot manually assign scheduling. Does the Splunk service need CPU accounting, and can it be disabled when Splunk starts? We want to determine whether the CPUShares= setting is absolutely necessary for the service, or whether there are workarounds for setting CPU scheduling for the service in the legacy style.
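When the forwarder is enabled for boot-start in systemd-managed mode, the generated unit file can set CPU resource-control options, and you can override them with a drop-in rather than editing Splunk's unit directly; a sketch (setting names depend on your systemd version, where newer releases use CPUWeight in place of the deprecated CPUShares):

# sudo systemctl edit SplunkForwarder.service   (creates a drop-in override)
[Service]
# Disable CPU accounting for this unit
CPUAccounting=no
# An empty assignment resets the value inherited from the packaged unit
CPUShares=

# Then apply it:
# sudo systemctl daemon-reload && sudo systemctl restart SplunkForwarder

Whether splunkd itself needs these settings, I can't say authoritatively; they appear to exist to bound the forwarder's CPU usage, not because the service requires accounting to function. The legacy alternative is to enable boot-start in init.d mode instead, which sets no cgroup options at all.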