All Topics



Hi guys, I want to run a custom Splunk command via a button I can put on a dashboard. I want to do it via a visualization, not by editing the XML: it should work like a panel that can be added via the "Add Panel" option on the dashboard. I know it should be done via the visualization.js file in a custom visualization, but I would like some help on how to write the visualization_source.js file for it. So the logic is: button -> visualization.js -> custom command. I'm assuming the command will have to be run via REST. Any help is appreciated.
I am trying to build a Splunk query to get the error summary from a log. I want to capture all the events where there is an ERROR, Exception, or Failure. Below is the sample data:

ERROR org.mule.component.ComponentException: Failed to invoke ScriptComponent{bapmFlow.component.797791858}. Component that caused exception is: ScriptComponent{bapmFlow.component.797791858}.
host = host1   source = /odt/mule_/logs/bapm.log   sourcetype = gdt_index

2/7/21 12:00:04.000 AM
2021-02-07 00:00:04,422 [[Java2python].bapmFlow.stage1.03] ERROR org.mule.exception.CatchMessagingExceptionStrategy - Failed to dispatch message to error queue after it failed to process. This may cause message loss. Message identification summary here: id=54972f10-6901-11eb-ad2a-0050568f5886 correlationId=<not set>, correlationGroup=-1, correlationSeq=-1
host = host1   source = /odt/mule_/logs/bapm.log   sourcetype = gdt_index

2021-02-07 00:00:04,407 [[Java2python].bapmFlow.stage1.03] ERROR org.mule.exception.CatchMessagingExceptionStrategy - ********************************************************************************
Message : org.mule.module.db.internal.domain.connection.ConnectionCreationException: Cannot get connection for URL jdbc:sqlserver://VLTROUXRPT.us.global.crux.com\PRS:1713;databaseName=DFT;domain=US;integratedSecurity=false;authenticationScheme=JavaKerberos;userName=Jack;password=<<credentials>>;trustServerCertificate=true;encrypt=true; : Login failed for user 'Jack'. ClientConnectionId:34edad77-7de1-4d0f-bc13-0fb7f090f722 (java.sql.SQLException)

2021-02-07 00:00:02,936 [[Java2python].bapmFlow.stage1.03] ERROR org.mule.exception.CatchMessagingExceptionStrategy - ... 89 lines omitted ...

2021-02-07 00:00:02,951 [[Java2python].bapmFlow.stage1.03] ERROR org.mule.exception.CatchMessagingExceptionStrategy - Failed to dispatch message to error queue after it failed to process. This may cause message loss.
Message identification summary here: id=54970800-6901-11eb-a3d3-0050568f5165 correlationId=<not set>, correlationGroup=-1, correlationSeq=-1

I have noticed the following: the ERROR keyword appears before the failures, along with the exception name. So I built the basic query below, but it's not giving the desired results:

index=hdt sourcetype=gdt_index ("ERROR" AND "Exception") OR "FAILED" | rex ".*?(?<Exception>(\w+\.)+\w*Exception).*" | rex "(?<ErrorMessage>\"Message\":(.*\",))" | stats values(ErrorMessage) as ErrorMessage by Exception
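One hedged way to approach the query above, assuming the index/sourcetype shown and that the exception class plus the text after the "ERROR <class> - " delimiter are what should be captured (the regexes are illustrative sketches, not tested against the full data):

```spl
index=hdt sourcetype=gdt_index ("ERROR" OR "Exception" OR "Failed")
| rex "(?<Exception>(?:\w+\.)+\w+Exception)"
| rex "ERROR\s+\S+\s+-\s+(?<ErrorMessage>.+)"
| stats values(ErrorMessage) as ErrorMessage count by Exception
```

Note that `("ERROR" AND "Exception")` in the original requires both terms in the same event, which drops events that only say ERROR; switching to OR widens the match.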
Hi, Is there a way to list the sizes of the files that are indexed via the local host and the universal forwarders? From the above screenshot, I have 2 forwarders, and fifteenforty is the search head. Is it possible to create a table of the largest files, smallest files, and total files by each host, or something close to that? Any degree of help will be appreciated. Regards, Hisham
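One hedged starting point, assuming access to the _internal index: metrics.log records per-source throughput, so the total KB ingested per monitored file can be tabled by host:

```spl
index=_internal source=*metrics.log group=per_source_thruput
| stats sum(kb) as total_kb by host, series
| sort - total_kb
```

Here `series` is the source (file path); from this result, `max(total_kb)`, `min(total_kb)`, and `dc(series)` by host would give largest, smallest, and total file counts. Note this measures indexed volume, not the files' on-disk sizes.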
Hello, I am trying to set up a trial account on Splunk Cloud. I created an account, but when I log in I get the following message every time: "An internal error was detected when creating the stack. We're sorry, an internal error was detected when creating the stack. Please try again later." I have tried this for about three days now, but nothing improves. Can you please help me with this? Kind regards, Dirk Jan van der Pol
Hi, Hope you are fine. Please note that I am currently using Splunk for self-learning. The Enterprise trial license expired and now I am using the free trial. I thought using the free trial would allow me to schedule searches, as mentioned here, but in fact I am not able to schedule searches. I really need to use this feature for my training before starting my new job. I am not sure what the next steps are to get this feature. Please assist. Best Regards, Noura Ali
Hi, I am pretty new to Splunk and need help with a timechart. I have a timechart that shows the count of package losses >50 per day. Now I want to add an average line to the chart that matches the chosen time range.

index= ... |eval Amount=lost_packages |where 2500 > Amount and Amount > 50 |timechart span=24h count(Amount) aligntime=@d

Can somebody tell me how I can calculate the average of Amount over the chosen time range, and how I can add that average to the timechart?
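One way to overlay an average line on the search above (a sketch; `Losses` is an assumed name for the daily count) is to compute the mean of the daily counts with eventstats, which adds it as a second series:

```spl
index=...
| eval Amount=lost_packages
| where Amount > 50 AND Amount < 2500
| timechart span=24h count as Losses aligntime=@d
| eventstats avg(Losses) as Average
```

Because eventstats runs over whatever timechart produced, the average automatically follows the time range chosen in the picker; in the chart formatting options, `Average` can then be shown as a line overlay.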
I have two search conditions that I need to trigger alerts from. I have a hundred hosts in an HA cluster. Sometimes hosts happen to leave the HA cluster and come back online, due to network issues or during production changes by engineers. When a host leaves the HA cluster, I get a single message in Splunk that reads "serverX has gone out-of-sync". When the host rejoins the HA cluster, I get a single message in Splunk that reads "serverX has gone in-sync". This means I have two search results to play with.  My Goal: When a host leaves the HA cluster and comes back within an hour, do not send any alerts. But if a host leaves the HA cluster and does not come back online after an hour, trigger an alert.    Here is what I have done so far (search period = 1hr): index=test sync_status="out-of-sync" [search index=test sync_status="in-sync" | dedup server | table server] I get undesired results. I expect to see only the hosts that went offline but did not rejoin the cluster (of which I can see results when I do simple searches).   Am I in the right direction, from a search and logic perspective? Are there better search methods for doing this?
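Rather than a subsearch, one common pattern (a sketch, assuming each message carries a `server` field with the sync_status values shown) is to keep only each server's latest status and alert on servers whose last known state is still out-of-sync and older than an hour. Run it over a window longer than an hour, e.g. the last 24 hours:

```spl
index=test sync_status IN ("out-of-sync", "in-sync")
| stats latest(sync_status) as last_status latest(_time) as last_seen by server
| where last_status == "out-of-sync" AND last_seen < relative_time(now(), "-1h")
```

A server that rejoined within the hour never matches, because its latest event is the in-sync message.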
Hi, I want to make a report (or alert) each month that counts the total successful transactions in 1 month and compares it to the 3 months before it, flagging it if it exceeds 200%. For example: count the total successful transactions of January compared to last year's Oct-Dec. My code for the specific example above is

index=index1 earliest=10/1/2020:00:00:00 latest=1/31/2021:24:00:00 |search RESPONSE_CODE="0" |stats count AS Total count(eval(date_month="october" OR date_month="november" OR date_month="december")) AS Total_3MONTHS count(eval(date_month="january"))AS MONTH1 BY MERCHANT_CODE |eval 3MONTHS_AVG = round(Total_3MONTHS/3,2) |eval RATE = round((MONTH1/3MONTHS_AVG)*100,2) |search RATE>=200 |table MERCHANT_CODE, MONTH1, RATE

1. I want it to automatically send me an email at the start of the month about the last month's results, without me manually changing the time range and search terms. 2. Sometimes 3MONTHS_AVG=0 because there weren't any transactions in those 3 months, which means RATE doesn't show either because of "divide by 0". If anyone has a solution to these problems, I would really appreciate it. Thank you
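A hedged sketch of both fixes: relative time modifiers remove the hard-coded dates, so a saved alert scheduled for the first of each month always covers the previous four whole months, and an if() guard avoids the divide-by-zero. Field names follow the original; `AVG_3MONTHS` is renamed because eval field names starting with a digit can need quoting:

```spl
index=index1 RESPONSE_CODE="0" earliest=-4mon@mon latest=@mon
| stats count(eval(_time <  relative_time(now(), "-1mon@mon"))) as Total_3MONTHS
        count(eval(_time >= relative_time(now(), "-1mon@mon"))) as MONTH1
        by MERCHANT_CODE
| eval AVG_3MONTHS = round(Total_3MONTHS / 3, 2)
| eval RATE = if(AVG_3MONTHS == 0, null(), round((MONTH1 / AVG_3MONTHS) * 100, 2))
| where RATE >= 200
| table MERCHANT_CODE, MONTH1, RATE
```

Saved as an alert with a cron schedule such as `0 6 1 * *` and an email action, this would mail the previous month's comparison automatically; merchants with no transactions in the baseline get a null RATE and are simply dropped by the where clause.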
Hello, could someone explain the purpose of this setting? Does it serve tcp-ssl stanzas in inputs.conf, for instance, and/or secure the internal Splunk 8089 port in server.conf?

https://sc1.checkpoint.com/documents/R81/WebAdminGuides/EN/CP_R81_LoggingAndMonitoring_AdminGuide/Topics-LMG/SIEM-specific-instruction.htm sslRootCAPath = /etc/ssl/my-certs/RootCA.pem

On the other side: https://docs.splunk.com/Documentation/Splunk/7.3.4/Security/Securingyourdeploymentserverandclients sslRootCAPath = <full path to the operating system root CA certificate>

Thanks.
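For reference, sslRootCAPath lives in server.conf under [sslConfig]; a minimal fragment (path hypothetical) looks like:

```ini
# server.conf - [sslConfig] holds splunkd-wide SSL settings
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/RootCA.pem
```

One reading of the docs is that this is the CA bundle splunkd uses to validate certificates on its own SSL connections (including the 8089 management port), while [SSL]/tcp-ssl stanzas in inputs.conf can be configured with their own certificates; verifying against the server.conf and inputs.conf specs for your version is advisable.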
I'm trying to get the full report that is generated by the scheduler. A webhook triggers on schedule to my Python API endpoint and returns search_name, owner, result (dictionary), results_link, sid, and app. The result dict only returns the first row of data from the report, whereas I need the full report. Is there an endpoint to grab this from? Thanks
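The first-row limit is a property of the webhook payload, not of the job: the full result set is available from the search jobs REST endpoint, reachable via the sid from the payload. A sketch in Python with a hypothetical host and token; count=0 asks for all rows rather than the default page size:

```python
def results_url(base_url: str, sid: str, count: int = 0) -> str:
    """Build the REST URL for a finished job's full result set.

    count=0 means "return all rows" instead of the default page size.
    """
    return (f"{base_url}/services/search/jobs/{sid}/results"
            f"?output_mode=json&count={count}")


# Hypothetical usage (needs a real host and auth token):
# import requests
# url = results_url("https://splunk.example.com:8089", sid)
# rows = requests.get(url, headers={"Authorization": "Bearer <token>"},
#                     verify=False).json()["results"]
```

The results_link in the payload points at the UI; swapping in this REST path against the management port is the usual way to pull every row programmatically.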
I've got Splunk 8.1.2 running on Windows Server 2019. I've installed the latest version of DB Connect, and after configuring the General page it always says "Cannot communicate with task server, please check your settings". There are no errors in the DB Connect logs (index=_internal sourcetype=dbx*), but the output of one line might be interesting:

[main] INFO org.quartz.core.QuartzScheduler - Scheduler meta-data: Quartz Scheduler (v2.3.2) 'QuartzScheduler' with instanceId 'NON_CLUSTERED' Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally. NOT STARTED. Currently in standby mode. Number of jobs executed: 0 Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 32 threads. Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.

I am not sure if this is normal or not. The query server has no problem running on port 9999, and there is nothing else listening on port 9998. I have tried all of the things I could find in posts on this forum and in the documentation, and I can't find anywhere to go with this.
Hello, I have a CSV file that has the following format:

ProductName,StringID,GUID,ServicePlans
A,AID,8f0c5670-4e56-4892-b06d-91c085d7004f,APlanA
,,,APlanB
,,,APLanC
B,BID,113feb6c-3fe4-4440-bddc-54d774bf0318,BPlanA
,,,BPlanB

I want to import the CSV in such a way that rows with an empty ProductName field are merged into the preceding non-empty row on the ServicePlans field, so the result will be two events:

A,AID,8f0c5670-4e56-4892-b06d-91c085d7004f,APlanA APlanB APlanC
B,BID,113feb6c-3fe4-4440-bddc-54d774bf0318,BPlanA BPlanB

Can you provide help with props.conf?
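A hedged props.conf sketch (sourcetype name hypothetical): since every continuation row starts with a comma, events can be broken only before lines that do not:

```ini
[product_plans_csv]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^[^,]
# the rows carry no timestamp, so index time is probably wanted:
DATETIME_CONFIG = CURRENT
# single-pass alternative to the two lines above:
# SHOULD_LINEMERGE = false
# LINE_BREAKER = ([\r\n]+)(?=[^,])
```

This merges each product row with its ,,,PlanX continuation lines into one event; pulling the plan values into a multivalue ServicePlans field would then be a search-time step (e.g. rex with max_match=0), and the header line still arrives as its own event unless filtered.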
Security said the MongoDB bundled with Splunk is vulnerable and needs to be updated from version 3.6.17 to version 3.6.20. I have already upgraded Splunk Enterprise to the latest version, but Security said this did not upgrade the MongoDB version. How do I upgrade MongoDB?
We have a TCP port open for a particular source. I added a persistent queue for this port because users were reporting random data loss. Is there a way to check persistent queue utilization in real time, along with its historic values? Even after adding the queue, users are still reporting some data loss, and I have not been able to confirm whether this change had any positive impact, due to the retention period. I would like to see if: 1) the persistent queue is being utilized, and 2) whether the queue is filling up as well. I need to check this via queries, preferably. Appreciate any help that could be provided!
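Queue metrics are sampled into metrics.log, so both current utilization and history can be charted from _internal; the exact `name` value for a persistent queue depends on the input, so it is worth listing the names first. A sketch:

```spl
index=_internal source=*metrics.log group=queue
| timechart span=5m max(current_size_kb) as current_kb max(max_size_kb) as max_kb by name
```

A first pass of `index=_internal source=*metrics.log group=queue | stats values(name)` shows which queue names exist on the instance; a current_kb that repeatedly touches max_kb would indicate the queue is filling and could explain the continued loss.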
Can someone please let us know what this error means: "Application component node id:13222 is not associated to machine id:161132". These are appearing in the agent logs.
Hi everyone, We are getting the following errors at our search head cluster after upgrade from version 7.2 to 8.0.7.   02-19-2021 09:16:47.624 +1300 ERROR SHCMasterArtifactHandler - failed on report target request aid=<user>__<user>__<app>__search7_1613679386.427489_48308674-xxxx-47E3-9082-xxxxxxx err='event=SHPMaster::addTarget aid=<user>__<user>__<app>__search7_1613679386.427489_48308674-xxxx-47E3-9082-xxxxxxxx not found' 02-19-2021 09:16:32.170 +1300 ERROR SHCMasterArtifactHandler - failed on report target request aid=<user>__<user>__<app>__RMD5e967e08868cc3c79_1613679388.427490_48308674-xxxx-47E3-9082-xxxxxxx err='event=SHPMaster::addTarget <user>__<user>__<app>__RMD5e967e08868cc3c79_1613679388.427490_48308674-xxxx-47E3-9082-xxxxxxx not found' 02-19-2021 09:16:32.171 +1300 ERROR SHCRepJob - failed job=SHPRepJob peer="xxx", guid="xxxxx" aid=<user>__<user>_TlpfUFJN__AlertsNow_1614294144.514899_E91XXXX-2563-4D2A-903BX-XXXXXXXX, tgtPeer="XXXXX", tgtGuid="XXXXXX", tgtRP=8091, useSSL=false tgt_hp=10.xx.xx.xx:8089 tgt_guid=XXXXX-E82E-47E3-9082-2AFC9B0XXXX err=uri=https://10.xx.xx.xx:8089/services/shcluster/member/artifacts/<user>__<user>_TlpfUFJN__AlertsNow_1614294144.514899_XXXX4-2563-4D2A-903B-DAF7XXXXX/replicate?output_mode=json, error=500 - Failed to trigger replication (artifact='<user>__<user>_TlpfUFJN__AlertsNow_1614294144.514899_XXXXX-2563-4D2A-903B-DAF743AXXXXX') (err='Replication match: aid=<user>__<user>_TlpfUFJN__AlertsNow_1614294144.514899_XXXX-2563-4D2A-903B-DAF7XXXX src=XXXX-2563-4D2A-903B-DAF743AXXXX target=XXXX8674-E82E-47E3-9082-2AFC9BXXXXX already exists!') 02-19-2021 09:16:32.271 +1300 INFO SHCMaster - event=SHPMaster::handleReplicationSuccess aid=<user>__<user>_TlpfUFJN__AlertsNow_1614294144.514899_XXXX3D4-2563-4D2A-903B-DAF7XXX src=XXXX3D4-2563-4D2A-903B-DAF743A448FC tgt=XXXX674-E82E-47E3-9082-2AFC9XXXXX msg='target hasn't added this artifact yet, will ignore'   The errors seem to be benign, we don't find failed or skipped searches.  
The count of artifacts is also showing 100% completed. We could not find any similar issue in the known bugs of this version either. Any idea what could be wrong here? Cheers, S
Hey all, I have a relatively dumb question. I'm trying to familiarize myself with Splunk's props.conf and transforms.conf files. Within one of my props.conf files there's a stanza defined for a custom sourcetype of mine, but I can't figure out how the stanza is formatted, as dumb as that sounds.   The stanza in props.conf is defined as   [(::){0}json:myCustomSourceType:*]     Could anyone help me understand what the "(::){0}" portion of that stanza is defining? According to the documentation for props.conf, the accepted stanza formats are     [<spec>] * This stanza enables properties for a given <spec>. * A props.conf file can contain multiple stanzas for any number of different <spec>. * Follow this stanza name with any number of the following setting/value pairs, as appropriate for what you want to do. * If you do not set a setting for a given <spec>, the default is used. <spec> can be: 1. <sourcetype>, the source type of an event. 2. host::<host>, where <host> is the host, or host-matching pattern, for an event. 3. source::<source>, where <source> is the source, or source-matching pattern, for an event. 4. rule::<rulename>, where <rulename> is a unique name of a source type classification rule. 5. delayedrule::<rulename>, where <rulename> is a unique name of a delayed source type classification rule. These are only considered as a last resort before generating a new source type based on the source seen.
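One commonly cited reading (worth verifying against the props.conf spec for your version) is that stanza names containing `::` are treated as host/source-style matching patterns, where wildcards work, while plain sourcetype stanzas are matched literally. `(::){0}` matches zero occurrences of `::`, so it changes nothing about what the name matches, but it flips the stanza into pattern mode, letting the trailing `*` act as a wildcard across sourcetypes. A hypothetical sketch of the same trick:

```ini
# matches sourcetypes json:myCustomSourceType:foo, json:myCustomSourceType:bar, ...
[(::){0}json:myCustomSourceType:*]
KV_MODE = json
```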
I am trying to configure two panels in a single row where one panel is 85% and the other is 15%. I already have the style set up to accomplish this. I would like to hide the right panel and have it appear when I click a bar: the bar graph would start across the entire row, then slide to the right to reveal the right panel at 15%.   I have tried multiple configurations but can't seem to get it to work properly. When I click on the bar graph it seems to split the row at 50%; however, it does not show the table at 15%.   The code I am working with is below. I have tried different combinations of the id tags but still cannot get it to work.  Style <style> #Table1{ width:15% !important; } #Panel1{ width:85% !important; } </style>   Table and Code to Hide the Panel on the Right <panel id="Table1" depends="$tkacct$"> <input id="Table2" type="checkbox" token="tokacct1" searchWhenChanged="true"> <label></label> <change> <unset token="tkacct"></unset> <unset token="form.tokacct1"></unset> </change> <choice value="hide">Hide Details</choice> <delimiter> </delimiter> </input> <table> <title>Table 1 Title</title> <search> <query>SPLUNK QUERY</query> ... ...   Thanks
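For comparison, a sketch of the commonly used arrangement (panel ids follow the snippet above; `alwaysHideCSS` is a token that is never set, the standard trick for injecting CSS from a hidden html panel, and the query placeholders stand in for real searches):

```xml
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        #Panel1{ width:85% !important; }
        #Table1{ width:15% !important; }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel id="Panel1">
    <chart>
      <search>
        <query>SPLUNK QUERY</query>
      </search>
      <drilldown>
        <set token="tkacct">$click.value$</set>
      </drilldown>
    </chart>
  </panel>
  <panel id="Table1" depends="$tkacct$">
    <table>
      <title>Table 1 Title</title>
      <search>
        <query>SPLUNK QUERY</query>
      </search>
    </table>
  </panel>
</row>
```

If the widths still snap to 50/50 when the hidden panel appears, a variant worth trying is targeting the enclosing dashboard cell elements in the CSS instead of the panel ids, since the layout engine recalculates cell widths when panel visibility changes.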
I'm looking to create a bandwidth chart showing the bandwidth through our firewall over a time period, converting the data from bytes to GB. Currently this is the search I'm running: index=firewall host="HQ-5020-1.firstagain.local" | stats sum(bytes_in) as Received,sum(bytes_out) as Sent by dest_interface | rename dest_interface as Interface | eval Bandwidth=round(bytes_in/1024/1024/1024,2) | eval Bandwidth=Received + Sent However, the conversion is not working, and I cannot figure out how to get the time period to work. It shows the interface, but when I try a visualization I only see one data point, where I would like to see an "over time" type of graph.
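Two things stand out in the search above: after `stats`, only Received, Sent, and dest_interface survive, so the first eval's reference to bytes_in is null, and the second eval then overwrites Bandwidth with a raw byte sum anyway; also, `stats ... by dest_interface` collapses time entirely, which is why there is a single data point. A hedged rewrite using timechart (span and field names assumed from the original):

```spl
index=firewall host="HQ-5020-1.firstagain.local"
| eval gb = (bytes_in + bytes_out) / 1024 / 1024 / 1024
| timechart span=1h sum(gb) as BandwidthGB by dest_interface
```

With a line or area visualization this gives GB per interface over whatever time range is picked; round() can be applied afterwards if two decimal places are desired.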