Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Our firewall logs show up twice in Splunk. I configured the rsyslog server with TCP. When I configure the log server with UDP, everything is okay, but TCP is the problem: when I configure the log server on TCP port 10514, every event is duplicated.
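A minimal sketch of one common cause to rule out, assuming a stock rsyslog setup (the hostname below is illustrative): duplicates over TCP often mean two forwarding rules fire for the same messages, e.g. a legacy selector line left in place next to a newer action() block.

# /etc/rsyslog.conf or /etc/rsyslog.d/*.conf -- hypothetical example
# If BOTH of these rules are present, every message is forwarded twice:
*.* @@splunk-hf.example.com:10514
action(type="omfwd" target="splunk-hf.example.com" port="10514" protocol="tcp")
# Keep only one forwarding rule per destination; also check that Splunk
# is not listening for the same feed on both a UDP and a TCP input.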
Hello, we have noticed the following errors coming from our Search Heads, from Splunk_TA_jmx:

ERROR ExecProcessor [57556 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py" File "/opt/splunk/etc/apps/Splunk_TA_jmx/lib/solnlib/conf_manager.py", line 459, in get_conf
ERROR ExecProcessor [57556 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py" WARNING:root:Run function: get_conf failed: Traceback (most recent call last):
ERROR ExecProcessor [57556 ExecProcessor] - Ignoring: "'/.\bin\scripted_inputs\ftr_lookups.py'"
ERROR ExecProcessor [57556 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py" return super(Collection, self).get(name, owner, app, sharing, **query)
ERROR ExecProcessor [57556 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py" splunklib.binding.HTTPError: HTTP 404 Not Found -- jmx_tasks does not exist

The input is configured on a Heavy Forwarder and works fine, but as per https://docs.splunk.com/Documentation/AddOns/released/JMX/Hardwareandsoftwarerequirements we have also installed the add-on on the Search Heads, and we're not sure what to adjust/change. Does anyone have any idea how to get rid of these errors? Greetings, Justyna
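A hedged sketch of one common workaround, on the assumption that the 404 comes from the TA's input script also running on the Search Heads, where no jmx_tasks.conf has ever been created. The stanza name below is an assumption taken from the script path in the error; verify it against default/inputs.conf in your copy of the TA:

# $SPLUNK_HOME/etc/apps/Splunk_TA_jmx/local/inputs.conf on each Search Head
# (hypothetical -- check the real stanza name in the TA's default/inputs.conf)
[script://./bin/jmx.py]
disabled = 1

With the input disabled on the Search Heads, only the Heavy Forwarder polls JMX, while the Search Heads keep the knowledge objects (field extractions, etc.) the docs want installed there.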
Hi Team, is it possible to suppress an HTTP error code, e.g. 404, for a specific URL, instead of suppressing all 404 error codes for the whole application?
In ITSI, when triggering the email alert action via a NEAP, Splunk ITSI always adds a footer text to the mail body. We remove the footer text in the email alert action config GUI and press save, but when we open the config again, Splunk has added the footer back. No footer is set in the general mail settings in Splunk.
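A minimal sketch of one place to look, assuming the footer is being re-applied from a configuration layer that outranks what the GUI saves (the file path below is illustrative; footer.text is the documented setting for the email alert action):

# $SPLUNK_HOME/etc/system/local/alert_actions.conf (path illustrative --
# btool shows which file currently wins: splunk btool alert_actions list email --debug)
[email]
footer.text =

An empty footer.text should suppress the default "splunk > ..." footer; if the NEAP keeps restoring it, comparing the btool output before and after a save shows which app is writing it back.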
Hello everyone. I now have cluster maps and choropleth maps generating, but a few issues with them.

Q1: When I add the same command from the Search app to the panel in the dashboard, I lose all the state/region names too. It works with the zoom function; is that OK?

Q2: Why do I have multiple tiles of the same regions running through? How can I create a view where I see only the regions where events have occurred? Screenshot attached. I know the legend doesn't match the map as values show 0, but they change and seem to be OK after 10-15 minutes; I don't know why.

I am trying to search for failed/successful application logins by region/city/country. My query:

index=a sourcetype=ab | iplocation ip | search status=failure AND connectionname=" ABwebsite" | stats count by Country | geom geo_countries allFeatures=True featureIdField=Country

If I don't add ip, no values populate on the map; there's just color. Thank you for looking into the query.
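A minimal sketch of one way to show only regions with events, using the same index/sourcetype as above: geom's allFeatures=True deliberately draws every shape (with a 0 value) even when no events matched, so dropping it, and filtering out zero counts, limits the map to regions that actually had activity.

index=a sourcetype=ab status=failure connectionname=" ABwebsite"
| iplocation ip
| stats count by Country
| where count > 0
| geom geo_countries featureIdField=Country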
Hello, would it be possible to add a second email to the customer support account, in order to create new tickets from a secondary email address on another domain? Thanks and regards,
Splunk search was disabled because we exceeded the quota for 45 days, so we bought another license to add 10 GB to our capacity. The license applied fine, yet I still can't search due to the violation. I restarted the license manager server and the indexers, but nothing changed. I'm under the license limit now; what do I have to do to enable searching again?
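A hedged sketch for confirming daily usage against the new quota, run from the license manager (license_usage.log and its type=Usage events are standard internal logging; the 30-day window is illustrative):

index=_internal source=*license_usage.log* type=Usage earliest=-30d
| timechart span=1d sum(b) as bytes
| eval GB = round(bytes/1024/1024/1024, 2)
| fields _time GB

If usage is genuinely under the quota, any remaining search block usually clears once the rolling window of violations ages out; the current warning and violation messages are visible on the license manager under Settings > Licensing.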
I need a visualisation with a box plot graph. Is this feature available in Dashboard Studio?
I have multiple dashboards for the same category. I need to create one main page with tabs for all the dashboards. Thanks in advance.
Hi, I am looking for a hand with turning 8 product charts into one table with sparklines, if possible, for trend tracking. I am currently using a trellis split on my dashboard to populate these 8 line charts, showing the number of hits per month over the course of 12 months for each product. My data is stored in a lookup table (.csv). My date field is stored as 04/02/2022 0:00 (4th Feb). ProductType has things like Candles, Teaset, Books. I would instead prefer to show the products in one table with a trendline/sparkline for each product, tracking the last 12 months. To get the trellis working I currently use the below, which seems to work well, as needed, with the expected results.

| inputlookup XXX.csv | search ProductType="*" | search ProductDate="*2022*" | eval Date=strftime(strptime(ProductDate,"%d/%m/%Y"),"%b-%y") | chart count(ProductType) by Date, ProductType limit=0 | fields - OTHER, "-" | eval rank=case(ProductDate like "Jan-%",1,ProductDate like "Feb-%",2,ProductDate like "Mar-%",3,ProductDate like "Apr-%",4,ProductDate like "May-%",5,ProductDate like "Jun-%",6,ProductDate like "Jul-%",7,ProductDate like "Aug-%",8,ProductDate like "Sep-%",9,ProductDate like "Oct-%",10,ProductDate like "Nov-%",11,ProductDate like "Dec-%",12,1=1,13) | rex field=ProductDate "-(?<rank_year>\d+)" | sort 0 rank_year, rank | fields - rank rank_year

However, when trying to get the sparklines/trendlines working using the two attempts below, I do not get the results required. All sparklines show a value of 0, yet there are results for these fields being purchased on all these different dates. I have changed the search times, tried to add buckets and spans, even eval'd _time over Date, and not had much luck.

| inputlookup XXX.csv | search ProductType="*" | search ProductDate="*2022*" | eval Date=strftime(strptime(ProductDate,"%d/%m/%Y"),"%b-%y") | chart sparkline count(Date) by ProductType, ProductDate limit=0 | fields - OTHER, "-" | eval rank=case(ProductDate like "Jan-%",1,ProductDate like "Feb-%",2,ProductDate like "Mar-%",3,ProductDate like "Apr-%",4,ProductDate like "May-%",5,ProductDate like "Jun-%",6,ProductDate like "Jul-%",7,ProductDate like "Aug-%",8,ProductDate like "Sep-%",9,ProductDate like "Oct-%",10,ProductDate like "Nov-%",11,ProductDate like "Dec-%",12,1=1,13) | sort 0 rank_year, rank | fields - rank rank_year

And

| inputlookup XXX.csv | search ProductType="*" | search ProductDate="*2022*" | eval Date=strftime(strptime(ProductDate,"%d/%m/%Y"),"%d/%m/%Y") | chart sparkline count(ProductDate) by AppType limit=0

I believe I am going wrong with the date eval, but have tried a fair few combos now, nearly all with the same result: sparklines always showing 0. I have about a year's worth of data I want to track in the one visual table, very similar to how Splunk does its own EQ example (too many products to show nicely on a line graph). Thanks
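A minimal sketch of one way this is often done, assuming ProductDate parses with "%d/%m/%Y %H:%M" (column names are from the post; the month span is illustrative). sparkline() is driven by the _time field, so the key step is to set _time from the parsed date before aggregating, then do a single stats by ProductType:

| inputlookup XXX.csv
| eval _time = strptime(ProductDate, "%d/%m/%Y %H:%M")
| where _time >= relative_time(now(), "-12mon@mon")
| stats sparkline(count, 1mon) as Trend, count as Total by ProductType

Without the _time assignment, every row falls outside the sparkline's time buckets, which would match the all-zero sparklines described above.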
I am performing two searches in an attempt to calculate a duration, but am having some issues. Here is what I have working so far. I'm getting results, but they are in two different rows; I was expecting them to be in one row so they could be used to calculate the duration. What am I missing?

index=anIndex sourcetype=aSourceType (aString1 AND "START of script") | eval startTimeRaw=_time | append [search index=anIndex sourcetype=aSourceType (aString1 AND "COMPLETED OK") | eval endTimeRaw=_time ] | table startTimeRaw, endTimeRaw
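A minimal sketch of one common pattern, assuming a single start/end pair in the time range (index, sourcetype, and strings taken from the post): fetch both events in one search, tag each with an eval, and collapse them onto one row with stats so the two timestamps can be subtracted.

index=anIndex sourcetype=aSourceType aString1 ("START of script" OR "COMPLETED OK")
| eval startTimeRaw = if(searchmatch("START of script"), _time, null())
| eval endTimeRaw = if(searchmatch("COMPLETED OK"), _time, null())
| stats min(startTimeRaw) as startTimeRaw, max(endTimeRaw) as endTimeRaw
| eval duration = tostring(endTimeRaw - startTimeRaw, "duration")

If there are multiple runs to pair up, adding a shared identifier to the stats by clause keeps each run on its own row.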
I am using the search below to get the time difference every time I see an event with a boot timestamp in it, and then to get the average of the differences by host. I get the correct result if I do one host per search (e.g. host=abc), but if I use a wildcard for all hosts (host=*) the results are different. I am assuming other hosts having events at the same time is causing the issue. How do I get correct results for all hosts at once? I get the time value 11:50:58.59 if I use only host=abc, but when I list all hosts (host=*), for host abc I see the value 00:18:18.67.

index=abc "Boot timestamp" host=abc | eval _time=strptime(Boot_Time,"%Y-%m-%d %H:%M:%S") | reverse | delta _time as difference_secs | table _time difference_secs host | stats avg(difference_secs) as average by host | eval average=round(average,2) | eval time=tostring(average, "duration")

Is it possible to get the average for all hosts, or only individually? Thanks in advance.
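A minimal sketch of one fix, using the same field names as the post: delta only looks at the previous event overall, so events from other hosts interleave and corrupt the differences. streamstats with a by host clause computes the previous boot time per host instead.

index=abc "Boot timestamp" host=*
| eval btime = strptime(Boot_Time, "%Y-%m-%d %H:%M:%S")
| sort 0 host btime
| streamstats current=f last(btime) as prev_btime by host
| eval difference_secs = btime - prev_btime
| stats avg(difference_secs) as average by host
| eval average = round(average, 2)
| eval time = tostring(average, "duration")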
Hello all, I'm trying to utilize Cluster Agent auto-instrumentation for a Java application in a k8s cluster. The agent seems to be copied and started on the app pods but doesn't report any metrics. I've checked the agent's logs folder: there is no folder named for the node name, and no files there either. It seems like the issue could be permissions, but then why would the Java agent report that it started successfully? Can you please advise how to troubleshoot the issue? Here are logs from the app pod and from the cluster agent pod:

Cluster Agent pod:

[DEBUG]: 2022-09-21 22:10:46 - podhandler.go:230 - Handling pod update test/api-server-6c998f544b-6czbx
[ERROR]: 2022-09-21 22:10:47 - executor.go:73 - Command basename `find /opt/appdynamics-java/ver21.7.0.32930/logs/ -maxdepth 1 -type d -name '*api-server*'` returned an error when exec on pod api-server-6c998f544b-rz8vr. command terminated with exit code 1
[WARNING]: 2022-09-21 22:10:47 - executor.go:75 - Exit status of command 'basename `find /opt/appdynamics-java/ver21.7.0.32930/logs/ -maxdepth 1 -type d -name '*api-server*'`' in container api-server in pod test/api-server-6c998f544b-rz8vr is 1
[DEBUG]: 2022-09-21 22:10:47 - javaappmetadatahelper.go:53 - Node folder name in container: api-server, pod: test/api-server-6c998f544b-rz8vr:
[ERROR]: 2022-09-21 22:10:47 - javaappmetadatahelper.go:55 - Failed to find node name command terminated with exit code 1
[WARNING]: 2022-09-21 22:10:47 - podhandler.go:149 - Unable to find node name in pod test/api-server-6c998f544b-rz8vr, container api-server
[DEBUG]: 2022-09-21 22:10:47 - podhandler.go:75 - Pod test/api-server-6c998f544b-rz8vr is in Pending state with annotations to be updated map[]
[DEBUG]: 2022-09-21 22:10:47 - podhandler.go:87 - No annotations to update for pod test/api-server-6c998f544b-rz8vr

App pod:

Picked up JAVA_TOOL_OPTIONS: -Dappdynamics.agent.accountAccessKey= -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.socket.collection.bci.enable=true -javaagent:/opt/appdynamics-java/javaagent.jar
Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [PreProd-EU]
Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [api-server]
Full Agent Registration Info Resolver using selfService [false]
Full Agent Registration Info Resolver using selfService [false]
Full Agent Registration Info Resolver using ephemeral node setting [false]
Full Agent Registration Info Resolver using application name [PreProd-EU]
Read property [reuse node name] from system property [appdynamics.agent.reuse.nodeName]
Full Agent Registration Info Resolver using tier name [api-server]
Full Agent Registration Info Resolver using node name [null]
Install Directory resolved to[/opt/appdynamics-java]
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[INFO]: JavaAgent - Low Entropy Mode: Attempting to swap to non-blocking PRNG algorithm
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[INFO]: JavaAgent - UUIDPool size is 10
Agent conf directory set to [/opt/appdynamics-java/ver21.7.0.32930/conf]
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[INFO]: JavaAgent - Agent conf directory set to [/opt/appdynamics-java/ver21.7.0.32930/conf]
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver is running
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [PreProd-EU]
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [api-server]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [false]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [false]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using ephemeral node setting [false]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using application name [PreProd-EU]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Read property [reuse node name] from system property [appdynamics.agent.reuse.nodeName]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using tier name [api-server]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using node name [null]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver finished running
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Agent runtime directory set to [/opt/appdynamics-java/ver21.7.0.32930]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Agent node directory set to [api-server-6c998f544b-mvx7s]
Agent runtime conf directory set to /opt/appdynamics-java/ver21.7.0.32930/conf
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Agent runtime conf directory set to /opt/appdynamics-java/ver21.7.0.32930/conf
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - JDK Compatibility: 1.8+
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Using Java Agent Version [Server Agent #21.7.0.32930 v21.7.0 GA compatible with 4.4.1.0 rc6a2713daa53e64a5abed3707e82fd87c36b5e49 release/21.7.0]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Running IBM Java Agent [No]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Java Agent Directory [/opt/appdynamics-java/ver21.7.0.32930]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Java Agent AppAgent directory [/opt/appdynamics-java/ver21.7.0.32930]
Agent logging directory set to [/opt/appdynamics-java/ver21.7.0.32930/logs]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Agent logging directory set to [/opt/appdynamics-java/ver21.7.0.32930/logs]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Logging set up for log4j2
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - ####################################################################################
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Java Agent Directory [/opt/appdynamics-java/ver21.7.0.32930]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Java Agent AppAgent directory [/opt/appdynamics-java/ver21.7.0.32930]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Using Java Agent Version [Server Agent #21.7.0.32930 v21.7.0 GA compatible with 4.4.1.0 rc6a2713daa53e64a5abed3707e82fd87c36b5e49 release/21.7.0]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - All agent classes have been pre-loaded
Agent will mark node historical at normal shutdown of JVM
Started AppDynamics Java Agent Successfully.
[AD Agent init] Tue Sep 20 22:50:14 UTC 2022[INFO]: JavaAgent - Started AppDynamics Java Agent Successfully.
2022-09-20 22:50:16,300 ~main ERROR Recursive call to appender Buffer
2022-09-20 22:50:16,302 ~main ERROR Recursive call to appender Buffer
WARN [2022-09-20 22:50:27,572] com.netflix.config.sources.URLConfigurationSource: No URLs will be polled as dynamic configuration sources.
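A hedged troubleshooting sketch (the namespace, pod, container, and path are taken from the logs above; the commands are plain kubectl/shell): the cluster agent's find is looking for a per-node folder under the agent's logs directory, so it's worth checking what actually exists there and whether the JVM's user can write to it.

# What did the agent manage to create under its logs directory?
kubectl exec -n test api-server-6c998f544b-rz8vr -c api-server -- ls -la /opt/appdynamics-java/ver21.7.0.32930/logs/

# Which user is the JVM running as, and can it write there?
kubectl exec -n test api-server-6c998f544b-rz8vr -c api-server -- id
kubectl exec -n test api-server-6c998f544b-rz8vr -c api-server -- touch /opt/appdynamics-java/ver21.7.0.32930/logs/write-test

If that directory is read-only for the app user, the agent could still print a successful start to the console while never creating the node folder the cluster agent looks for, which would match both symptoms above.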
I want to extract fields as below using the universal forwarder's props.conf. Whatever I have before the colon should be the field name, and whatever comes after it is the value, e.g. for the class field the value is Catalyst 9500.

"class": "Catalyst 9500", "var_actionname": "Logstash - Chain", "var_alertid": "4000", "var_app_sys_id": "", "var_assetfloor": "0", "var_assetlocation": "", "var_assetmake": "mycompany Systems", "var_assetmodel": "Catalyst 9500", "var_assetpanel": "", "var_assetplate": "", "var_assetpunch": "", "var_assetrack": "", "var_assetroom": "", "var_assetserial": "", "var_assetshelf": "", "var_assettag": "", "var_assetzone": "", "var_autopolicyname": "Chain Active Events", "var_autopolicynote": "", "var_categoryid": "8", "var_categoryname": "Network.Switches", "var_classid": "6659", "var_classname": "Catalyst 9500", "var_classtype": "mycompany Systems", "var_clearuser": "", "var_collector": "csit2apacdca06", "var_composite_criticality": 3, "var_composite_id": "0", "var_device_back_link": "https://123.121.12.13//index.?exec=registry&act=registry_device_management#devmgt_search.did=4526", "var_deviceid": "4526", "var_duty_pager": "", "var_esp_class_name": "", "var_event_back_link": "https://123.121.12.13//index.?exec=device_events&did=4526&etype=12708", "var_event_guid": "EEBC704A15AFBB55FA19EF7D50A93993", "var_eventcategory": "", "var_eventcounter": "1", "var_evententityid": "4526", "var_evententityname": "ccntrx4-cn-bb-gw2.mycompany.com", "var_evententitytype": "1", "var_eventfirstoccurtime": "2022-09-22 22:32:05", "var_eventid": "10784243", "var_eventindexid": ".1199", "var_eventlastoccurtime": "2022-09-22 22:32:05", "var_eventmessage": "mycompany: Temperature problem. Currently, Temperature (TenGigabitEthernet1/0/40 Module Temperature Sensor) status: unavailable", "var_eventpolicy": "mycompany: Temperature Unavailable", "var_eventpolicycause": "<strong><!--StartFragment-->Description</strong><br>mycompany network device is reporting an &quot;unavailable&quot; status on temperature. Meaning that the agent presently can not report the temperature&apos;s sensor value.<br><br><strong>Probable Cause</strong><br><ul class=\"fr-tag\"><li class=\"fr-tag\">The sensor could have a hard failure (disconnected wire).</li><li class=\"fr-tag\">The sensor could have a soft failure such as out-of-range, jitter, or wildly fluctuating readings.</li></ul><br><strong>Resolution</strong><br>Manually check functioning of fan and replace if necessary.<!--EndFragment-->", "var_eventpolicyexternalid": "", "var_eventpolicyid": "12708", "var_eventseverity_deprecated": "2", "var_eventseveritylevel": "3", "var_eventseveritytext": "MAJOR", "var_eventsourceid": "4", "var_eventsourcename": "Dynamic", "var_eventstate": "Active", "var_eventstateful": "1", "var_eventsubentityid": "0", "var_eventsubentityname": ".1199", "var_eventsubentitytype": "0", "var_eventticketid": "", "var_eventtimeactive": "2022-09-22 22:32:05", "var_eventtimedeleted": "None", "var_eventurllink": "https://123.121.12.13//index.?exec=events&q_type=aid&q_arg=10784243&q_sev=1&q_sort=0&q_oper=0", "var_eventusercleared": "", "var_eventusernote": "", "var_ipaddress": "10.79.194.32", "var_orgbillingid": "", "var_orgcrmid": "ff7ac89f1b5f8d94d73aec22b24bcbe9", "var_orgid": "2", "var_orgimpacted": "", "var_orgname": "mycompany IT", "var_parentid": "", "var_parentname": "", "var_priority": "", "var_resultvalue": "unavailable", "var_rootid": "", "var_rootname": "", "var_slsystemname": "", "var_super_organization": "unknown", "var_support_group": "", "var_sysid": "fd19769ddb00c3ccdaeaf9551d961908", "var_threshold": "", "var_ticketemailsubject": "2", "var_ticketid": "0", "var_username": "", "external_id": "ScienceLogic_", "manager": "SCIENCELOGIC__ASSURED", "signature": "ccntrx4-cn-bb-gw2.mycompany.com::Catalyst 9500::.1199", "source": "ccntrx4-cn-bb-gw2.mycompany.com", "source_id": "1234"

I will attach an example of the log file that needs to be pushed with extracted fields in the comment section.
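A minimal sketch of a search-time extraction, assuming the events keep this quoted "key": "value" shape (the sourcetype name is illustrative). One caveat: this kind of field extraction is applied at search time on the indexers/search head, not on the universal forwarder, so these files belong in the TA deployed there. If the full event is valid JSON, KV_MODE = json alone may be enough.

# props.conf
[my:device:events]
REPORT-kvpairs = extract_quoted_kv

# transforms.conf
[extract_quoted_kv]
REGEX = "([^"]+)"\s*:\s*"([^"]*)"
FORMAT = $1::$2
MV_ADD = true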
I used AoB (Add-on Builder) to make a TA which pulls data using a Python script. Whether I Export or Download Package, I get some additional files which prevent me from installing the TA on another instance. I get a .json and a .aob_meta file, and these are the problematic files referenced in the error when I attempt to install on a different Splunk Enterprise server. The other unexpected behavior is that the props are not merging into the default directory, as the docs led me to believe should happen, but I saw a reference to that being a known bug. I'm going to attempt a manual packaging of my app, but has anyone encountered these extra files?
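A hedged sketch of manual packaging that leaves the AoB build artifacts out (the app name and artifact file names are illustrative; check what actually sits in your app's root directory):

# Run on the box where the TA lives; my_ta is a placeholder app name
cd $SPLUNK_HOME/etc/apps
tar -czf /tmp/my_ta.spl --exclude='my_ta/*.aob_meta' --exclude='my_ta/my_ta.json' my_ta

The .spl is just a gzipped tarball, so excluding the two offending files at packaging time sidesteps the install-time complaint without touching the working copy.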
I want to extract the nth hyphen-delimited word from strings like the following, specifically the 3rd and 6th words as highlighted below.

KXTWRKTG-wsmp-t4-lambda-nodejs-PROD-100
KXTWRKTG-phl-resolvers-lambda-nodejs-QAINT2-302
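A minimal sketch using split() and mvindex() (the source field name here, name, is an assumption; mvindex is zero-based, so the 3rd and 6th words are indexes 2 and 5):

| makeresults
| eval name="KXTWRKTG-wsmp-t4-lambda-nodejs-PROD-100"
| eval parts = split(name, "-")
| eval third = mvindex(parts, 2), sixth = mvindex(parts, 5)
| table name third sixth

For the first sample string this yields third=t4 and sixth=PROD.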
I would like to extract the status value (i.e. 201), highlighted below, using the regex in the following link; however, it didn't work. Please advise. https://regex101.com/r/QIB8EG/1

Rex: \"status\":(?<status>[^\}])\",\"requestId\":

{"level":"debug","message":"handler result : {\"body\":\"{\\\"numberOfProcessedRecords\\\":1}\",\"status\":201}","requestId":"ecd06f97-975b-5faf-81a0-3431fb4d1070"}
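A minimal sketch of one approach, assuming the raw event is valid JSON as shown. Two things break the original rex: [^\}] matches exactly one character, and the literal \",\" it expects after the status does not occur there (the value is followed directly by }). Letting spath unescape the inner message string first avoids fighting the escaped quotes:

| spath path=message output=msg
| rex field=msg "\"status\":(?<status>\d+)"
| table status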
The pan logs ingested decreased significantly, and nothing should have changed from the syslog point of view. Is there a way to investigate why the data ingest has changed so dramatically? Attached is a screenshot from before and one from after the drop in pan log ingestion. I ran this command:

| tstats prestats=t count WHERE index=pan by host _time span=1h | timechart partial=f span=1h count by host limit=0

(Screenshot: prior to the drop in pan log ingestion)
(Screenshot: after the drop in pan log ingestion)

Can anyone help with a solution to this sudden drop in pan log ingestion? It still hasn't come back up to the previous level. Any solution will be appreciated.
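A hedged sketch for narrowing down where the volume went, using Splunk's own ingestion metrics (the series=pan* filter assumes the sourcetypes start with "pan"; adjust to match):

index=_internal source=*metrics.log* group=per_sourcetype_thruput series=pan*
| timechart span=1h sum(kb) as KB_indexed by series

Comparing this against group=per_host_thruput (volume by sending host) helps tell a sender-side problem, such as the firewall or syslog relay no longer emitting, apart from a Splunk-side one such as a changed input, a blocked queue, or a new filtering rule.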
Hello all, I need help generating the P95, P99, P75, mean, and median response times for the data below using the tstats command, and need help with the Splunk query. I am dealing with a large data set and also building a visual dashboard for my management, so I am trying to use tstats, as those searches are faster. I am stuck being unable to produce these calculations on the value of Total_TT in my tstats command. Can someone help me with the query?

Sample data: 2022-09-11 22:00:59,998 INFO -(Success:true)-(Validation:true)-(GUID:68D74EBE-CE3B-7508-6028-CBE1DFA90F8A)-(REQ_RCVD:2022-09-11T22:00:59.051)-(RES_SENT:2022-09-11T22:00:59.989)-(SIZE:2 KB)-(RespSent_TT:0ms)-(Actual_TT:938ms)-(DB_TT:9ms)-(Total_TT:947ms)-(AppServer_TT:937ms)
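A hedged sketch, assuming Total_TT is extracted as a numeric field (with the trailing "ms" stripped) in an accelerated data model; tstats can only aggregate indexed fields or data model fields, so the data model name Perf below is an assumption:

| tstats avg(Perf.Total_TT) as mean median(Perf.Total_TT) as median p75(Perf.Total_TT) as p75 p95(Perf.Total_TT) as p95 p99(Perf.Total_TT) as p99 from datamodel=Perf by _time span=1h

If building an accelerated data model isn't an option, the same aggregates run against the raw events with plain stats (slower, but no acceleration needed), e.g. ... | stats avg(Total_TT) median(Total_TT) p75(Total_TT) p95(Total_TT) p99(Total_TT).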
I have a customer that would like to use Splunk to search for a set of devices by their respective barcodes. The devices (barcodes) will come from an external list that will be placed in a separate index; for this scenario, that index will be referred to as "index 2". Additionally, the barcodes from the external list (which reside in index 2) will need to be matched to their respective organizations. These organizations reside in a separate index, which for this scenario will be referred to as "index 1". In a nutshell, the customer would like to compare the list of barcodes in index 2 against index 1 and see if they match any organizations. Finally, if a barcode (index 2) matches an organization (index 1), the customer would like to list all information associated with the barcode (i.e. hostname, serial number, organization name, etc.) that matched the organization. Thank you in advance for your help!
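A minimal sketch of the usual correlation pattern (all field names here are assumptions; substitute the real barcode and organization field names from each index): search both indexes at once, normalize the barcode field, and keep only barcodes seen in both.

(index=index1) OR (index=index2)
| eval barcode = coalesce(barcode, device_barcode)
| stats values(organization) as organization values(hostname) as hostname values(serial_number) as serial_number dc(index) as index_count by barcode
| where index_count = 2

The dc(index) filter keeps only barcodes present in both indexes; dropping the where clause instead also shows the unmatched barcodes.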