All Topics

We're moving to Splunk Cloud, but we have some legacy hosts for which I need a forwarder upgrade. Is there any compatible UF version 7 or newer that runs on 32-bit Windows Server 2008 SP2 (not R2!)? I've searched the available older versions and I'm coming up empty. I'm grasping at straws here. Thank you in advance...
I have a panel in a dashboard which shows the results of a query and picks its time range from a TimePicker. Goal: if the user selects a range greater than 30 days in the TimePicker, the search for this specific panel should not search more than 30 days; it should cap the time range at 30 days only when the selection exceeds 30 days. For any selection of less than 30 days, this panel should display results for that selected range. This is what the current query for this panel looks like: eventtype=$app_name$ | timechart span=1h count by _time
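A rough sketch of one SPL-only way to cap the effective range, assuming the panel inherits the TimePicker's range: a subsearch uses addinfo to read the selected range and returns an earliest term that is never more than 30 days before the selected latest time (the 30-day offset is illustrative):

eventtype=$app_name$
    [| makeresults
     | addinfo
     | eval earliest=max(info_min_time, relative_time(info_max_time, "-30d"))
     | return earliest]
| timechart span=1h count

The returned earliest=<epoch> term overrides the picker's earliest for this panel only; an alternative is to compute a capped earliest token in the dashboard XML with a change/eval handler on the time input.
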
Hello, I am working with the timechart command in the following query and I am running into some problems. I am trying to compute: timechart span=15m sum(ofAField) as sumOfField, avg(sumOfField) as avgOfField by task. My problem is that when I run it, I get the correct output for the first task, but the output for the rest of the tasks is wrong. I am assuming that for the rest of the tasks only the sum portion of the timechart is being calculated and not the avg. For background context, there are about 11 different tasks this timechart is being grouped by. TIA
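One likely issue is that avg(sumOfField) refers to an output alias of the same timechart rather than to a field that exists in the incoming events. If the goal is the per-15-minute sum plus the average of those sums per task, a rough two-pass sketch (field names taken from the question, final result shaping left out) could look like:

... | bin _time span=15m
| stats sum(ofAField) as sumOfField by _time, task
| eventstats avg(sumOfField) as avgOfField by task

The stats pass produces one sum per 15-minute bucket per task, and eventstats then attaches each task's average of those sums to every row; pipe the result into xyseries or timechart as needed for charting.
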
Hi everyone, I am new to Splunk and need some help. I am attempting to create a dashboard that separates each asset's vulnerabilities by department. Right now we get the asset with the vulnerability, and I was wondering if there is a way to group them by the naming convention. For instance, sec-9564 would be the Security department, so I'd be saying: if the PC name starts with sec*, then group it into the Security Dept column. In the end I need to show a dashboard with each department's vulnerabilities. Any help with this would be appreciated!
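A minimal sketch of one way to derive a department field from the naming convention, assuming the asset name lives in a field called dest and that signature holds the vulnerability name; both field names and the non-security prefixes are placeholders:

... | eval department=case(
        match(dest, "(?i)^sec-"), "Security",
        match(dest, "(?i)^hr-"), "HR",
        true(), "Other")
| stats count by department, signature

The eval/case/match pattern maps each hostname prefix to a department label, after which any stats or chart command can group by department for the dashboard panels.
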
Can anyone assist me with the SPL to subtract the EBVS% and PFAVS% fields to allow the successful plays field to improve? I've attached a screenshot below.
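A small sketch of subtracting one field from another when the field names contain special characters such as %; on the right-hand side of eval, field names like these need single quotes (the result field name is just an example):

... | eval plays_delta = 'EBVS%' - 'PFAVS%'
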
Do we have a query for an alert if the memory usage increases by 20% from the previous 20 minutes? So if the server was running with an average RAM usage of 50% and then suddenly increases to 75% RAM usage, we would trigger an alert.
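A rough sketch of one way to compare the most recent 20 minutes against the 20 minutes before that, assuming event data with a host field and a memory-percentage field; the index, sourcetype, and field names are placeholders, and the alert would be configured to trigger when any results are returned:

index=os sourcetype=vmstat earliest=-40m latest=now
| eval window=if(_time >= relative_time(now(), "-20m"), "current", "previous")
| stats avg(mem_used_pct) as avg_mem by host, window
| xyseries host window avg_mem
| where current >= previous * 1.2

The final where clause treats the 20% as a relative increase; use where current - previous >= 20 instead if the intent is an increase of 20 percentage points.
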
With the Splunk (splunk-library-javalogging) library updated to version 1.11.4, _time does not show milliseconds. With multiple sources, the logs show up messed up. Attaching a screenshot below. I am using a log4j2.xml config with the JSON logger. How can I see milliseconds as part of _time?
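A minimal props.conf sketch of one way to preserve sub-second precision at index time, assuming the events are JSON with a textual timestamp field; the sourcetype name, TIME_PREFIX, and TIME_FORMAT below are placeholders to adjust to the actual payload:

[my_json_sourcetype]
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 40

%3N tells Splunk's timestamp parser to read three digits of sub-second precision, which helps events from multiple sources interleave correctly when sorted by _time.
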
Hello! I'm trying to make the Splunk forwarder part of my gold image template for Windows servers. Right now, I have a script that installs the forwarder using VMware customization once the VM is cloned from its template. I would instead like to install the forwarder on the gold image itself and get rid of the script. I am using directions from a Splunk document, but the third section, titled "Clone and restore the image", does not make sense to me; it seems to be saying to clone the same image two or three times. The documented steps are:

1. Restart the machine and clone it with your favorite imaging utility.
2. After cloning the image, use the imaging utility to restore it into another physical or virtual machine.
3. Run the cloned image. Splunk services start automatically.
4. Use the CLI to restart Splunk Enterprise to remove the cloneprep information.

Step 1 - OK, so I have cloned my gold image into another machine (which I would think can be used for prod).
Step 2 - Why do I need to restore the cloned VM to another machine?
Step 3 - OK.
Step 4 - This is pretty inconvenient; it makes me either do something manually or script it, which I'm trying not to do.

This may be terminology confusion on my part, but is there not a way to completely configure the forwarder on the gold image so that when I clone it using VMware, it just comes up and works? https://docs.splunk.com/Documentation/Splunk/8.2.4/Admin/Integrateauniversalforwarderontoasystemimage
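For reference, the universal forwarder CLI includes a clone-prep command intended for this image workflow; a rough sketch of running it on the gold image before shutting it down and templating it (the path assumes a default Windows install location):

cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk stop
splunk clone-prep-clear-config

Per the quoted step 4, once a clone boots and the forwarder service is restarted, the remaining clone-prep information is removed and the instance comes up with its own identity.
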
I have a table with a few columns in it. One of the columns is 'url'. I would like to construct a relative search URL that opens in a new tab.

{ "type": "drilldown.customUrl", "options": { "url": "/en-US/app/search/search?q=search%20index%3D\"$index$\"%20url%3D\"$click.value$\"", "newTab": true } }

Token resolution works: $index$ is replaced as it should be (this is an existing token on my dashboard), but I can't seem to access anything in the table, and $click.value$ is clearly wrong. I've tried a bunch of other styles from the docs too: $row.url.value$, $row.url$, $row.value.2$, $url.value$. The problem, I think, is that I'm looking at documentation for older Splunk versions and/or documentation for the classic XML dashboards. What's the correct way to do this? Also, is there up-to-date documentation available for this?
Hi,

How can I reduce the storage size of an index, and what are the different methods/options? Also, will removing logs using the delete command reduce the storage size of an index?

Kind Regards,
Aftab
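A minimal indexes.conf sketch of the settings typically used to constrain how much disk an index consumes; the index name and values below are placeholders, and note that the delete command only masks events from search results rather than freeing disk space:

[my_index]
# cap the total size of the index (hot + warm + cold buckets) in MB
maxTotalDataSizeMB = 102400
# freeze (remove or archive) buckets whose newest event is older than 90 days
frozenTimePeriodInSecs = 7776000
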
Hi, I am seeking assistance to execute a Python script located under a custom app. The script works fine at the cmd prompt, but from the Splunk console it throws error code 1. I am trying to invoke the script through the custom search command below, defined in the commands.conf file:

| script <python script>

I have also installed the required modules like pandas and its dependencies, but I am still seeing the errors below. Please assist me in fixing them.

Traceback (most recent call last):
stderr import pandas as pd
stderr from 'C:\Program Files\Splunk\bin\Python3.exe \__init__.py", line 17, in <module>
stderr "Unable to import required dependencies:\n" + "\n".join(missing_dependencies)
ImportError: Unable to import required dependencies: numpy
Importing the numpy C-extensions failed. This error can happen for
The Python version is: Python3.7 from "C:\Program Files\Splunk\bin\Python3.exe"
The NumPy version is: "1.22.0" and make sure that they are the versions you expect.
Original error was: No module named 'numpy.core._multiarray_umath'
External search command returned error code 1.
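One commonly suggested workaround, sketched below, is to keep the Splunk-invoked script as a thin wrapper that hands the work off to a system Python interpreter where pandas and numpy are installed, since compiled C extensions installed for another interpreter generally cannot be imported by Splunk's bundled Python. The stanza name, interpreter path, and script names are assumptions for illustration.

commands.conf:

[mycommand]
filename = mycommand_wrapper.py

mycommand_wrapper.py:

# Thin wrapper that delegates to a system Python where pandas/numpy work.
# The interpreter and script paths below are hypothetical; adjust to your host.
import subprocess
import sys

SYSTEM_PYTHON = r"C:\Python37\python.exe"
REAL_SCRIPT = r"C:\Program Files\Splunk\etc\apps\my_app\bin\real_work.py"

# Pass the search results Splunk writes to stdin through to the real script
# and stream its stdout (the results it returns) back to Splunk.
proc = subprocess.run([SYSTEM_PYTHON, REAL_SCRIPT],
                      stdin=sys.stdin, stdout=sys.stdout, stderr=sys.stderr)
sys.exit(proc.returncode)
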
I want to import/export all the information on the "Show Transaction Thresholds" page under Configuration: Slow Transaction Threshold, Very Slow Transaction Threshold, Stall, etc. We want to make sure we're consistent across applications. We have many applications and several environments per application (Lab, QA, Production, etc.).
Hi team,

I have a user who left the company and now their dashboard searches are alerting as "orphaned objects". I reassigned all of their objects to me and cloned their dashboards (scream test), but when I go to Settings > User interface > Views to delete them, I see no delete option except for the clones I made.

I changed the permissions on the dashboards to read/write for the sc_admin role only.
I (Admin) own all the objects now.
These dashboards were user-made and not part of a 3rd-party app.

What am I missing? I have a few screenshots below to show better what I am explaining.
Screenshot of 'views': the clones are private, the originals are not.
Screenshot of the permissions on the original object I want to delete.
sc_admin has all the capabilities it can have assigned to it.
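If the delete link never appears in the UI, one hedged fallback is removing the view over REST; the data/ui/views endpoint is standard, while the owner, app, dashboard name, and credentials below are placeholders:

curl -k -u admin:changeme --request DELETE \
  https://localhost:8089/servicesNS/nobody/search/data/ui/views/old_dashboard_name

Whether the delete is permitted still depends on the role having write access to the object and to the app it lives in.
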
Hi, is there a recommendation or guideline available from Splunk on naming conventions for indexes? I have a new Splunk Enterprise environment at my company, with roughly 70 data sources planned to be onboarded one after the other. For example, the Windows TA has ... I would be happy for any input I can get from you. Thank you in advance, Jay
Hello, has anyone configured the Proofpoint ET or VirusTotal adaptive response action in ES? Basically, I want to look up the destination IP from events against these websites. Can someone please advise how to configure this? For Proofpoint Check ET, it asks for "Object". What is "Object" here?
Is it possible to append more than 10k records between 2 indexes? How can I overcome this without modifying a conf file, just by adding a parameter through SPL?
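One commonly suggested SPL-only alternative, sketched below, is multisearch, which runs both searches as peers rather than as a subsearch and so is not subject to the subsearch result cap; note that multisearch only accepts streaming searches (no transforming commands inside each leg), and the index and sourcetype names are placeholders:

| multisearch
    [ search index=index_a sourcetype=foo ]
    [ search index=index_b sourcetype=bar ]
| stats count by index
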
I am trying to set a token ($TimeFrame$) to contain the same text as displayed by the Time Frame filter after selecting any particular time picker range – in this case “Last 13 Days” selected from the Relative section of the Time Picker – but any time picker range or preset text displayed in the Time Frame filter must work (see diagram below). I would like to extract exactly the same text that Splunk Enterprise puts in the filter display box and assign it to my token $TimeFrame$. I can only find solutions that work in a limited number of cases, because they involve trying to convert the formatted earliest and latest tokens back into text. For example, the code below works some of the time, but not for “Last 13 Days”, and is very messy, having to deal with special cases individually (for example “All Time”):

<eval token="picktime">"From ".strftime($field1.earliest$,"%H:%M %e-%b-%Y")." to ".strftime($field1.latest$,"%H:%M %e-%b-%Y")</eval>
<eval token="TimeFrame">if($picktime$ == "From 01:00 1-Jan-1970 to 01:00 1-Jan-1970" OR $picktime$ == "From 00:00 1-Jan-1970 to 00:00 1-Jan-1970","All time",$picktime$)</eval>

Does anyone know of a better way of doing this? Mike
I have two columns: one is the datacenter location and the second is the number of servers. I want to show this on a map. How can I show it without latitude and longitude details? Do I need to upload a CSV with latitude and longitude for all these locations and then use the geostats command? The datacenter locations are New York, Texas, Amsterdam, Mumbai.
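A rough sketch of the CSV-plus-geostats approach the question describes, assuming a lookup file (here called datacenter_geo.csv) uploaded with location, lat, lon columns and a server_count field in the base results; all names are placeholders:

... | lookup datacenter_geo.csv location OUTPUT lat lon
| geostats latfield=lat longfield=lon sum(server_count)

geostats then buckets the rows geographically so they render directly on the cluster map visualization.
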
So it's great that we can now import icons and images into Dashboard Studio pages. It looks like they get stored in a KV store somewhere, but how can we manage these? For instance, if I want a copy of the dashboard on another Splunk instance, how can I make sure that the icons and images are also on the second instance? I don't see any way to manage these without having to manually update the dashboard on the second instance... Help appreciated.
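As a hedged sketch only: the images are reported to live in KV store collections owned by the Dashboard Studio app, so one way to move them is the generic KV store REST interface. The /storage/collections/data path is standard, but the app and collection names (splunk-dashboard-studio, splunk-dashboard-images) and the credentials below are assumptions to verify on your own instance:

# list image records on the source instance
curl -k -u admin:changeme \
  https://source-host:8089/servicesNS/nobody/splunk-dashboard-studio/storage/collections/data/splunk-dashboard-images

# re-create one record on the target instance from the JSON returned above
curl -k -u admin:changeme \
  https://target-host:8089/servicesNS/nobody/splunk-dashboard-studio/storage/collections/data/splunk-dashboard-images \
  -H "Content-Type: application/json" -d @image_record.json
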
Hello, I've been trying to get data into SSE, but somehow I can't. The setup is the following: installed Splunk Enterprise, Universal Forwarder, the Forcepoint app, syslog-ng (for receiving the logs, which I monitor with the UF), and Splunk Security Essentials. I've tried different things with the demo data, but when I try to do anything with the live data I hit a wall.

I've tried to follow the instructions at https://docs.splunk.com/Documentation/SSE/3.4.0/Install/ConfigureSSE, but they seem unclear and somewhat inaccurate. For example, in the chapter on getting data in ("Configure the products you have in your environment with the Data Inventory dashboard"), when I browse the web interface there is no option to "2.b. Click Manually Configure to manually enter your data."

The first thing I noticed was that an error for the ES integration was thrown, for which I didn't find any information. When I open any use case, for example "Basic Scanning", the sourcetype and index for Forcepoint (index="forcepoint", sourcetype="next-generation-firewall") are missing by default. Is there any way to add them automatically for all the use cases?

I already have logs monitored by the indexer, forwarded from the Forcepoint, which are displayed in Splunk Search and Reporting and the Forcepoint app. Even if I change the index and sourcetype in the search field I still get these results. Can you give me any info on the tags, like what they are and what they are used for?

Any guides or tips will be highly appreciated, thanks!