All Topics
Hi, I'm using a dropdown input that has a static option ALL with value = *. The dropdown selects the environment (ALL, DEV, QA, PROD), and I have a table where hosts are populated based on the environment. The issue is with the panel title: when I select ALL, the title shows "* hosts per environment". For DEV, QA, and PROD it works fine. How can I replace the * with ALL in the panel title?
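One common pattern (a sketch; the token names env and env_label are assumptions) is to set a second, display-only token from the dropdown's $label$ in a change handler, and use that token in the panel title instead of the value:

```
<input type="dropdown" token="env">
  <label>Environment</label>
  <choice value="*">ALL</choice>
  <choice value="DEV">DEV</choice>
  <choice value="QA">QA</choice>
  <choice value="PROD">PROD</choice>
  <change>
    <!-- $label$ is the display text (e.g. "ALL"); $value$ is "*" -->
    <set token="env_label">$label$</set>
  </change>
</input>
```

The panel title then references "$env_label$ hosts per environment", so ALL is displayed while searches still use $env$ (= *).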
There is a requirement to create a chart that looks like the one below. Is there an app that can create columns like these? Does anyone know how to generate 3D columns in Splunk? I'm not familiar with JS, so I'm finding it difficult to create a custom visualization for this. I'd appreciate any help. Thanks!
Hello everyone. I'm pretty new to Splunk, so I'll try to be brief.

I know that a specific event happened at an exact time, and I want to know what happened on that machine at that time and in the 5 minutes before it, i.e. what the machine was doing in the 5 minutes prior to triggering the alert.

I have this query (where xxxxxxxxx is the index, sourcetype, and name of the machine I want to look at):

xxxxxxxxx [| gentimes start=-1 | addinfo | eval earliest=relative_time(info_min_time,"-5m") | table earliest latest | format "" "" "" "" "" "" ]

This works well if I manually select anything in the timepicker. For example, if I click "Last 15 minutes", the query actually runs over the last 20 minutes (since I also want the 5 minutes before the earliest time).

The problem comes when I try to type an exact time in the query itself. I'm trying, for example, earliest=10/19/2018:00:00:00, but it does not work. I've even tried earliest=-1d and the query just ignores it; it follows whatever is chosen in the timepicker. Maybe I'm taking the wrong approach with my initial query. Has anybody been in this situation, or can anyone shed some light here? Thank you very much in advance.
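One hedged workaround, since the earliest/latest returned by the subsearch can win over time terms typed into the outer search: drop the subsearch and widen the inline window by hand (a sketch; the index/sourcetype/host names are placeholders):

```
index=my_index sourcetype=my_sourcetype host=my_host
    earliest="10/18/2018:23:55:00" latest="10/19/2018:00:15:00"
```

Here earliest is simply moved 5 minutes before the time of interest, so no subsearch is needed and the timepicker no longer matters.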
I have an XML file and am using spath on it. My XML has a structure like this:

<messages>
  <name>test1</name>
  <message-a>
    <cust-id>cust-1</cust-id>
    <part-a>name-1</part-a>
    <part-b>name-2</part-b>
  </message-a>
  <message-b>
    <cust-id>cust-2</cust-id>
    <part-a>name-1</part-a>
    <part-b>name-2</part-b>
  </message-b>
</messages>

I want to use an if condition in both the input and path fields of spath.
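spath does not accept an eval expression in its path argument, so a common workaround (a sketch; field and path names are assumed from the XML above) is to extract both candidate paths and then choose between them with eval:

```
... | spath input=_raw path=messages.message-a.cust-id output=cust_a
    | spath input=_raw path=messages.message-b.cust-id output=cust_b
    | eval cust_id=if(isnotnull(cust_a), cust_a, cust_b)
```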
Dear team,

I have the below Splunk log and am trying to get a stats count based on consumer_application. I tried the regular expression below, but no results were returned.

Splunk query:
"uri":* (PaymentVerticle) | rex field=_raw "consumer_application\"\:\"(?P<Consumer>.*?)\"" | stats count by Consumer

Splunk log:
2021-04-22T11:31:25.115912284Z app_name=java message={"name":"PaymentVerticle", "timestamp":"2021-04-22T11:31:25.115Z","level":"info","schemaVersion":"0.1","application":{"name":"PaymentVerticle","version":"1.1.1"},"request":{"address":{"uri":"PaymentVerticle"},"metadata":{"correlation_id":"042320210010GMT"}},"message":"Received request with body {\"payment_request\":{\"consumer_application\":\"BLUEPRISM\"}}"}
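The quotes around consumer_application in the raw event are backslash-escaped (\"consumer_application\":\"BLUEPRISM\"), so a pattern written for plain quotes never matches. One hedged sketch that sidesteps the escaping entirely is to match the punctuation run with \W+:

```
"uri"* PaymentVerticle
| rex field=_raw "consumer_application\W+(?<Consumer>\w+)"
| stats count by Consumer
```

\W+ swallows the literal \":\" sequence regardless of how many backslashes it contains; this assumes the consumer name itself is a single \w+ token such as BLUEPRISM.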
I am trialing the Splunk Cloud software and, having read through all the information on how to set up universal forwarders, I've reached an impasse. I believe I have set up the forwarder correctly:

- installed the forwarder
- incorporated the .spl certificate
- added logs to monitor
- added the forward-server details
- restarted Splunk

I have opened ports 8089 and 9997 inbound/outbound to ensure no firewall is blocking traffic. The documentation then seems to indicate that in the Splunk Cloud UI I should see a Forwarding & Receiving option under Settings, or a Forwarder under Data Inputs. I don't see either, and as such can't set up a data source. Could anyone advise whether I have missed a step somewhere in the client-side universal forwarder setup, or whether it is something within Splunk Cloud I have failed to do?
It seems my tables in the beta app all get more or less the same width, so fields containing short text or numerical values take up too much space compared with those containing longer text strings, for example. They don't seem to adjust well automatically, and I'm not finding a way to control it manually. Any ideas?
I have data in this format:

00H
01H
02H 1234567889 key_word1
03H 1234567889 key_word2
1234567889 key_word3

Is there a way that I can turn it into this?

hour | Number     | Keywords
00H  | 0          | 0
01H  | 0          | 0
02H  | 1234567889 | key_word1
03H  | 1234567889 | key_word2
03H  | 1234567889 | key_word3
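A hedged sketch (assuming each record is one line of _raw as shown above): extract the fields with rex, carry the last seen hour forward for lines that have no hour prefix, and zero-fill the empty hours:

```
... | rex field=_raw "^((?<hour>\d{2}H)\s*)?((?<Number>\d+)\s+(?<Keywords>\S+))?"
    | filldown hour
    | fillnull value=0 Number Keywords
    | table hour Number Keywords
```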
Dear team,

Can you please help me with the below query: we have a textbox in the dashboard, and whatever value is given to the textbox needs to be stored in an index. Is this possible in Splunk?
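One hedged approach (a sketch; the token name textbox_value and the index name my_summary are assumptions): run a search from the dashboard that writes the token's value to an index with collect:

```
| makeresults
| eval user_input="$textbox_value$"
| collect index=my_summary
```

collect requires that the target index already exists and that the user has permission to write to it.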
Hi, I have a table with "_time" as row values and hosts as column values, like below.

_time | host-1-1 | host-1-2 | host-2-1 | host-2-2
12:00 | 10       | 20       | 5        | 5
12:15 | 20       | 10       | 5        | 15
12:30 | 10       | 15       | 5        | 10

How can I group them based on their host name (host-1 & host-2) and add the values of the two sub-hosts (e.g. "host-1" = "host-1-1" + "host-1-2")? The result should look like below.

_time | host-1 | host-2
12:00 | 30     | 10
12:15 | 30     | 20
12:30 | 25     | 15
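A hedged sketch: unpivot the table with untable, strip the sub-host suffix with rex, and re-pivot with chart (this assumes host names always follow the host-N-M pattern):

```
... | untable _time host value
    | rex field=host "^(?<host_group>host-\d+)-\d+$"
    | chart sum(value) over _time by host_group
```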
I need a query that ignores Down and Up events that occur within a 2-minute interval. Is it possible to throttle/trigger on multiple fields/values? For example, we have 3 fields: Time, Device Name, and AlertGroup.

1st alert: Time: 08:00:00, Device Name is ABC and AlertGroup is Down
2nd alert: Time: 08:00:55, Device Name is XYZ and AlertGroup is Down
3rd alert: Time: 08:01:00, Device Name is ABC and AlertGroup is Up
4th alert: Time: 08:07:00, Device Name is XYZ and AlertGroup is Up

In this situation I don't want to trigger an alert/notification if we receive a Down and an Up for the same device name within a 2-minute window. Comparing with the example above, the 1st and 3rd alerts should be ignored because they correlate within 2 minutes. Please help me build this search query.
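One hedged sketch using transaction (it assumes the device field is named DeviceName without a space; keepevicted keeps Down events that never paired with an Up inside the 2-minute window, and closed_txn=0 marks them):

```
... | transaction DeviceName maxspan=2m
        startswith=eval(AlertGroup=="Down") endswith=eval(AlertGroup=="Up")
        keepevicted=true
    | where closed_txn=0
```

Only transactions that did not close within 2 minutes survive, so ABC's quickly resolved Down/Up pair is dropped while XYZ (which took 7 minutes) still alerts.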
We are deploying a SHC into AWS via pipeline code and attempting to configure SAML integration as part of the build, but we're having some issues. When I check using btool, all the SAML details are correct, but if I check the details via the GUI, nothing is populated, and when I attempt to use SAML to log in, I get errors about blank content being passed. If I update the GUI as well, then things seem to work, but this is not something we want to do every time we do a new build. Just wondering if anyone else out there has come across this and worked something out?
In some cases, the Splunk Phantom virtual appliance can lose its time synchronization with the system time. For example, some virtual machine management functions can revert the Splunk Phantom virtual appliance to an older snapshot that is still running, thus pausing the virtual appliance and losing synchronization with the system time.

You can use any of the strategies on this page to work around this issue.

Install VMWare Tools on the virtual appliance

You can install the VMWare Tools configuration utility on the virtual appliance and synchronize it with the ESX host. In this scenario, the time is automatically synchronized whenever the host is resumed or reverted.

Manually update the time on the host

You can use the ntpdate command to force the date to be updated. Access the command line of the virtual appliance as root, then run the ntpdate command. For example:

ntpdate -v -u 0.centos.pool.ntp.org

Replace the NTP host or pool as desired.

Install the VMWare Tools on your Splunk Phantom virtual machine

In VMWare environments, you can install the VMWare Tools configuration utility on your Splunk Phantom virtual machine. This causes the virtual machine to automatically synchronize its time with the physical host, assuming the physical host has NTP configured. Perform the following steps:

1. Make sure NTP is properly configured on the physical host.
2. In the VMWare management environment, install the VMWare Tools configuration utility on the virtual machine. This "inserts" a CD containing VMWare Tools into the virtual CD-ROM drive.
3. Access the command line of the virtual machine as the root user.
4. Mount the CD: mount /dev/cdrom /mnt
5. Untar the file from the /mnt directory into the root user's home directory:
   [root@localhost]# cd ~
   [root@localhost]# tar -xvf /mnt/VMWareTools-9.4.5-1598834.tar.gz
   [root@localhost]# cd vmware-tools-distrib/
6. Run the following command to start the installer, and follow the prompts to complete the installation:
   [root@localhost]# ./vmware-install.pl
Are there any automated scripts to back up the KV store on each Splunk server as part of a basic backup? How often should I back up the KV stores? Also, what do I need to back up in Splunk Enterprise Security, please?
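As a hedged sketch, the splunk backup kvstore CLI command can be wrapped in a scheduled script; the archive name and schedule here are assumptions:

```
# Run from cron on each search head; by default the archive is written
# under $SPLUNK_HOME/var/lib/splunk/kvstorebackup/
$SPLUNK_HOME/bin/splunk backup kvstore -archiveName "kvstore-$(date +%Y%m%d)"
```

How often to run it depends on how frequently your KV store collections change; backing up on the same cadence as the rest of your configuration backups is a reasonable starting point.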
I have a lookup table in Splunk like this:

earliest_time | latest_time | S_NO  | SRC_IP
3/1/2021      | 4/1/2021    | E1002 | 10.10.10.10

I want to exclude the SRC_IP within the time range (earliest_time to latest_time) from the search. How could I write the Splunk SPL to implement this?
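A hedged sketch (it assumes the event field is also named SRC_IP, the lookup file is my_exclusions.csv, and the dates are in %m/%d/%Y format): look up the exclusion window for each source IP and drop events that fall inside it:

```
index=my_index
| lookup my_exclusions.csv SRC_IP OUTPUT earliest_time latest_time
| eval et=strptime(earliest_time, "%m/%d/%Y"), lt=strptime(latest_time, "%m/%d/%Y")
| where isnull(et) OR NOT (_time >= et AND _time <= lt)
```

The isnull(et) clause keeps events whose SRC_IP is not in the lookup at all.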
Hi, I have the following data:

And I am trying to create SPL which gets me the following result:

I tried the eventstats and stats commands but I'm not getting where I want to be.
We have Splunk Enterprise 8.1.2 and are preparing our app to migrate to a Splunk Cloud environment. After running the Splunk packaging tool to package our app for certification (Splunk package app), we noticed that the dashboards, reports, and indexes for our app can no longer be deleted from the Splunk Web UI or CLI. The delete button is missing for the dashboards and reports. The button is greyed out for the indexes. Running the CLI remove command returns a successful message but does not delete the object. Does anyone know how to re-enable the delete function so we can delete old knowledge objects we no longer need?
Hello all,

I have a Deployment Server in my environment and am wondering about the impact of the Universal Forwarders' communication with the Deployment Server on network bandwidth when they phone home. The total size of the deployed apps (10 apps) is <1200 KB, and I have 500 Universal Forwarders installed (mainly on Windows devices). Is there any recommendation for calculating the volume of this type of traffic? Thanks in advance. A.
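As a hedged back-of-the-envelope sketch: a phone-home is a small HTTPS exchange of checksums, and apps are only downloaded when they change. Assuming roughly 5 KB per poll (an assumption, not a measured figure) and the default phone-home interval of 60 seconds:

```
steady state: 500 clients x ~5 KB per poll / 60 s  ≈  42 KB/s
worst case:   500 clients x 1200 KB  ≈  600 MB one-off, if all 10 apps change at once
```

The polling interval is controlled by phoneHomeIntervalInSecs in deploymentclient.conf; raising it reduces the steady-state traffic proportionally.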
Hello guys, I heard about an excessive LDAP queries issue in Splunk 8.1; could you confirm? Does it impact Microsoft Active Directory? It should be fixed in 8.1.4, however there is no date yet for the release. Thanks for your help. Best regards.
My organization is running 9 physical indexers that are connected to NFS mounts where the frozen buckets are rolled after aging out. There is a project in planning to move/migrate the physical cable connections on the indexers that feed the NFS mount, and I'm looking for advice on the best strategy to keep the indexer cluster ecosystem peaceful during the migration.

The physical connection from indexer to NFS will need to be disconnected for 3 days to 1 week to allow for migration of the connections/cables to their future switch homes. The directory that holds the cold buckets has more than adequate storage to hold more/larger buckets during this migration window.

Is there a recommended method/process to temporarily extend/expand the volume for cold buckets in the cluster during this maintenance window, which could then be restored to the original configuration once the NFS mounts are reconnected? I was thinking that increasing the cold storage max volume (coldPath.maxDataSizeMB) in indexes.conf prior to the disconnect might provide this type of cover. However, I would love a second opinion, given this is the first time I've encountered this type of request. Any insight/advice would be greatly appreciated.
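A hedged indexes.conf sketch of the temporary change described above (the index name and sizes are placeholders; the real values depend on your volume layout and retention):

```
# indexes.conf (sketch) - temporarily raise the cold ceiling so buckets
# accumulate in colddb instead of rolling toward the unreachable frozen NFS
[my_index]
coldPath.maxDataSizeMB = 1500000    # raised from e.g. 1000000 for the window
```

One caveat worth checking in a test index first: if coldToFrozenDir points at the NFS mount and the mount is absent, freeze attempts can fail, so confirming how your retention settings behave during the outage would be prudent before the window starts.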