All Topics

Hello all, I'm trying to use the Cluster Agent's auto-instrumentation for a Java application in a Kubernetes cluster. The agent appears to be copied to the app pods and started, but it doesn't report any metrics. I've checked the agent's logs folder and there is no folder named after the node, and no files in it either. It looks like it could be a permissions issue, but then why does the Java agent report that it started successfully? Can you please advise how to troubleshoot this? Here are logs from the app pod and from the cluster agent pod:

Cluster Agent pod:

[DEBUG]: 2022-09-21 22:10:46 - podhandler.go:230 - Handling pod update test/api-server-6c998f544b-6czbx
[ERROR]: 2022-09-21 22:10:47 - executor.go:73 - Command basename `find /opt/appdynamics-java/ver21.7.0.32930/logs/ -maxdepth 1 -type d -name '*api-server*'` returned an error when exec on pod api-server-6c998f544b-rz8vr. command terminated with exit code 1
[WARNING]: 2022-09-21 22:10:47 - executor.go:75 - Exit status of command 'basename `find /opt/appdynamics-java/ver21.7.0.32930/logs/ -maxdepth 1 -type d -name '*api-server*'`' in container api-server in pod test/api-server-6c998f544b-rz8vr is 1
[DEBUG]: 2022-09-21 22:10:47 - javaappmetadatahelper.go:53 - Node folder name in container: api-server, pod: test/api-server-6c998f544b-rz8vr:
[ERROR]: 2022-09-21 22:10:47 - javaappmetadatahelper.go:55 - Failed to find node name command terminated with exit code 1
[WARNING]: 2022-09-21 22:10:47 - podhandler.go:149 - Unable to find node name in pod test/api-server-6c998f544b-rz8vr, container api-server
[DEBUG]: 2022-09-21 22:10:47 - podhandler.go:75 - Pod test/api-server-6c998f544b-rz8vr is in Pending state with annotations to be updated map[]
[DEBUG]: 2022-09-21 22:10:47 - podhandler.go:87 - No annotations to update for pod test/api-server-6c998f544b-rz8vr

App pod:

Picked up JAVA_TOOL_OPTIONS: -Dappdynamics.agent.accountAccessKey= -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.socket.collection.bci.enable=true -javaagent:/opt/appdynamics-java/javaagent.jar
Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [PreProd-EU]
Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [api-server]
Full Agent Registration Info Resolver using selfService [false]
Full Agent Registration Info Resolver using selfService [false]
Full Agent Registration Info Resolver using ephemeral node setting [false]
Full Agent Registration Info Resolver using application name [PreProd-EU]
Read property [reuse node name] from system property [appdynamics.agent.reuse.nodeName]
Full Agent Registration Info Resolver using tier name [api-server]
Full Agent Registration Info Resolver using node name [null]
Install Directory resolved to[/opt/appdynamics-java]
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[INFO]: JavaAgent - Low Entropy Mode: Attempting to swap to non-blocking PRNG algorithm
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[INFO]: JavaAgent - UUIDPool size is 10
Agent conf directory set to [/opt/appdynamics-java/ver21.7.0.32930/conf]
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[INFO]: JavaAgent - Agent conf directory set to [/opt/appdynamics-java/ver21.7.0.32930/conf]
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver is running
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [PreProd-EU]
[AD Agent init] Tue Sep 20 22:49:40 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [api-server]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [false]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [false]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using ephemeral node setting [false]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using application name [PreProd-EU]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Read property [reuse node name] from system property [appdynamics.agent.reuse.nodeName]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using tier name [api-server]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using node name [null]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver finished running
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Agent runtime directory set to [/opt/appdynamics-java/ver21.7.0.32930]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Agent node directory set to [api-server-6c998f544b-mvx7s]
Agent runtime conf directory set to /opt/appdynamics-java/ver21.7.0.32930/conf
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: AgentInstallManager - Agent runtime conf directory set to /opt/appdynamics-java/ver21.7.0.32930/conf
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - JDK Compatibility: 1.8+
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Using Java Agent Version [Server Agent #21.7.0.32930 v21.7.0 GA compatible with 4.4.1.0 rc6a2713daa53e64a5abed3707e82fd87c36b5e49 release/21.7.0]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Running IBM Java Agent [No]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Java Agent Directory [/opt/appdynamics-java/ver21.7.0.32930]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Java Agent AppAgent directory [/opt/appdynamics-java/ver21.7.0.32930]
Agent logging directory set to [/opt/appdynamics-java/ver21.7.0.32930/logs]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Agent logging directory set to [/opt/appdynamics-java/ver21.7.0.32930/logs]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Logging set up for log4j2
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - ####################################################################################
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Java Agent Directory [/opt/appdynamics-java/ver21.7.0.32930]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Java Agent AppAgent directory [/opt/appdynamics-java/ver21.7.0.32930]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - Using Java Agent Version [Server Agent #21.7.0.32930 v21.7.0 GA compatible with 4.4.1.0 rc6a2713daa53e64a5abed3707e82fd87c36b5e49 release/21.7.0]
[AD Agent init] Tue Sep 20 22:49:41 UTC 2022[INFO]: JavaAgent - All agent classes have been pre-loaded
Agent will mark node historical at normal shutdown of JVM
Started AppDynamics Java Agent Successfully.
[AD Agent init] Tue Sep 20 22:50:14 UTC 2022[INFO]: JavaAgent - Started AppDynamics Java Agent Successfully.
2022-09-20 22:50:16,300 ~main ERROR Recursive call to appender Buffer
2022-09-20 22:50:16,302 ~main ERROR Recursive call to appender Buffer
WARN [2022-09-20 22:50:27,572] com.netflix.config.sources.URLConfigurationSource: No URLs will be polled as dynamic configuration sources.
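In case it helps anyone reproduce this, here is roughly how I've been checking the agent logs directory from outside the pod (a sketch only; the pod and namespace names are taken from the cluster agent logs above, so substitute your own):

# Does the logs dir exist, and who owns it? If it isn't writable by the app's
# runtime user, the agent can start but fail to create its node-named log
# folder, which would match the "Failed to find node name" errors above.
kubectl exec -n test api-server-6c998f544b-rz8vr -c api-server -- \
  ls -ld /opt/appdynamics-java/ver21.7.0.32930/logs/
# What user does the container actually run as?
kubectl exec -n test api-server-6c998f544b-rz8vr -c api-server -- id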
I want to extract fields as below using universal forwarder props.conf. Whatever appears before the colon should be the field name, and whatever appears after it should be the value; e.g. for the "class" field the value is Catalyst 9500.

"class": "Catalyst 9500", "var_actionname": "Logstash - Chain", "var_alertid": "4000", "var_app_sys_id": "", "var_assetfloor": "0", "var_assetlocation": "", "var_assetmake": "mycompany Systems", "var_assetmodel": "Catalyst 9500", "var_assetpanel": "", "var_assetplate": "", "var_assetpunch": "", "var_assetrack": "", "var_assetroom": "", "var_assetserial": "", "var_assetshelf": "", "var_assettag": "", "var_assetzone": "", "var_autopolicyname": "Chain Active Events", "var_autopolicynote": "", "var_categoryid": "8", "var_categoryname": "Network.Switches", "var_classid": "6659", "var_classname": "Catalyst 9500", "var_classtype": "mycompany Systems", "var_clearuser": "", "var_collector": "csit2apacdca06", "var_composite_criticality": 3, "var_composite_id": "0", "var_device_back_link": "https://123.121.12.13//index.?exec=registry&act=registry_device_management#devmgt_search.did=4526", "var_deviceid": "4526", "var_duty_pager": "", "var_esp_class_name": "", "var_event_back_link": "https://123.121.12.13//index.?exec=device_events&did=4526&etype=12708", "var_event_guid": "EEBC704A15AFBB55FA19EF7D50A93993", "var_eventcategory": "", "var_eventcounter": "1", "var_evententityid": "4526", "var_evententityname": "ccntrx4-cn-bb-gw2.mycompany.com", "var_evententitytype": "1", "var_eventfirstoccurtime": "2022-09-22 22:32:05", "var_eventid": "10784243", "var_eventindexid": ".1199", "var_eventlastoccurtime": "2022-09-22 22:32:05", "var_eventmessage": "mycompany: Temperature problem. Currently, Temperature (TenGigabitEthernet1/0/40 Module Temperature Sensor) status: unavailable", "var_eventpolicy": "mycompany: Temperature Unavailable", "var_eventpolicycause": "<strong><!--StartFragment-->Description</strong><br>mycompany network device is reporting an &quot;unavailable&quot; status on temperature. Meaning that the agent presently can not report the temperature&apos;s sensor value.<br><br><strong>Probable Cause</strong><br><ul class=\"fr-tag\"><li class=\"fr-tag\">The sensor could have a hard failure (disconnected wire).</li><li class=\"fr-tag\">The sensor could have a soft failure such as out-of-range, jitter, or wildly fluctuating readings.</li></ul><br><strong>Resolution</strong><br>Manually check functioning of fan and replace if necessary.<!--EndFragment-->", "var_eventpolicyexternalid": "", "var_eventpolicyid": "12708", "var_eventseverity_deprecated": "2", "var_eventseveritylevel": "3", "var_eventseveritytext": "MAJOR", "var_eventsourceid": "4", "var_eventsourcename": "Dynamic", "var_eventstate": "Active", "var_eventstateful": "1", "var_eventsubentityid": "0", "var_eventsubentityname": ".1199", "var_eventsubentitytype": "0", "var_eventticketid": "", "var_eventtimeactive": "2022-09-22 22:32:05", "var_eventtimedeleted": "None", "var_eventurllink": "https://123.121.12.13//index.?exec=events&q_type=aid&q_arg=10784243&q_sev=1&q_sort=0&q_oper=0", "var_eventusercleared": "", "var_eventusernote": "", "var_ipaddress": "10.79.194.32", "var_orgbillingid": "", "var_orgcrmid": "ff7ac89f1b5f8d94d73aec22b24bcbe9", "var_orgid": "2", "var_orgimpacted": "", "var_orgname": "mycompany IT", "var_parentid": "", "var_parentname": "", "var_priority": "", "var_resultvalue": "unavailable", "var_rootid": "", "var_rootname": "", "var_slsystemname": "", "var_super_organization": "unknown", "var_support_group": "", "var_sysid": "fd19769ddb00c3ccdaeaf9551d961908", "var_threshold": "", "var_ticketemailsubject": "2", "var_ticketid": "0", "var_username": "", "external_id": "ScienceLogic_", "manager": "SCIENCELOGIC__ASSURED", "signature": "ccntrx4-cn-bb-gw2.mycompany.com::Catalyst 9500::.1199", "source": "ccntrx4-cn-bb-gw2.mycompany.com", "source_id": "1234"

I will attach an example of the log file that needs to be pushed with the extracted fields in the comment section.
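For what it's worth, this is roughly the kind of configuration I have in mind, based on the search-time key/value extraction pattern (the sourcetype and stanza names are placeholders, not my real ones). Since field extraction like this happens at search time, I assume these files belong on the indexer/search head side rather than on the UF itself:

props.conf:

[my_json_sourcetype]
REPORT-kvpairs = extract_quoted_kv

transforms.conf:

# Extract every "key": "value" pair dynamically; $1 becomes the field name,
# $2 the value. With FORMAT = $1::$2 the regex is applied repeatedly across
# the event. Unquoted numeric values would need a second, similar transform.
[extract_quoted_kv]
REGEX = "([^"]+)"\s*:\s*"([^"]*)"
FORMAT = $1::$2
MV_ADD = true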
I used AoB to make a TA which pulls data using a Python script. Whether I Export or Download Package, I get some additional files which prevent me from installing the TA on another instance. I get a .json and a .aob_meta file, and these are the problematic files referenced in the error when I attempt to install on a different Splunk Enterprise server. The other unexpected behavior is that the props are not merging into the default directory, as the docs led me to believe should happen, but I saw a reference to that being a known bug. I'm going to attempt a manual packaging of my app, but has anyone else encountered these extra files?
I want to extract the nth hyphen-delimited word from strings like the following; specifically the 3rd and 6th words.

KXTWRKTG-wsmp-t4-lambda-nodejs-PROD-100
KXTWRKTG-phl-resolvers-lambda-nodejs-QAINT2-302
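For reference, here's a rough sketch of the split()/mvindex() approach I've been considering (mvindex is zero-based, so the 3rd and 6th tokens are indexes 2 and 5; field names are just for illustration):

| makeresults
| eval raw="KXTWRKTG-wsmp-t4-lambda-nodejs-PROD-100"
| eval parts=split(raw, "-")
| eval word3=mvindex(parts, 2), word6=mvindex(parts, 5)
| table raw word3 word6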
I would like to extract the status value (i.e. 201), highlighted below, using the regex in the following link. However, it didn't work. Please advise. https://regex101.com/r/QIB8EG/1

Rex: \"status\":(?<status>[^\}])\",\"requestId\":

{"level":"debug","message":"handler result : {\"body\":\"{\\\"numberOfProcessedRecords\\\":1}\",\"status\":201}","requestId":"ecd06f97-975b-5faf-81a0-3431fb4d1070"}
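If it helps anyone spot the problem: as far as I can tell, [^\}] matches only a single character that isn't }, and in the raw event the text after 201 is }"," rather than \",\" so the pattern can never match. Since the event is JSON, this is the direction I've been experimenting with (untested sketch; it lets spath unescape the message field first, then runs rex on that field):

| spath
| rex field=message "\"status\":(?<status>\d+)"
| table status requestId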
The PAN logs ingested decreased significantly, and nothing should have changed from the syslog point of view. Is there a way to investigate why the data ingest has changed so dramatically? Attached are screenshots from before and after the drop in PAN log ingestion.

I ran this command:

| tstats prestats=t count WHERE index=pan by host _time span=1h
| timechart partial=f span=1h count by host limit=0

[Screenshot: prior to the drop in PAN log ingestion]
[Screenshot: after the drop in PAN log ingestion]

Can anyone help with a solution to this sudden drop in PAN log ingestion? It hasn't come back up to the previous level. Any solution will be appreciated.
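One thing I've started looking at alongside the tstats count (a sketch; adjust the series filter to your actual PAN sourcetypes) is the indexers' own throughput metrics, to see whether the drop is on the Splunk side or upstream in the syslog path:

index=_internal source=*metrics.log* group=per_sourcetype_thruput series=pan*
| timechart span=1h sum(kb) as kb by series

If the kb volume drops here at the same moment, the data stopped arriving at Splunk, which would point at the syslog server or the firewall's log forwarding rather than at indexing.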
Hello All, I need help generating the P95, P99, P75, mean, and median response times for the data below using the tstats command. I am dealing with a large volume of data and building a visual dashboard for my management, so I'm trying to use tstats since those searches are faster. I'm stuck on how to compute these statistics over the value of Total_TT in my tstats command. Can someone help me with the query?

Sample data:

2022-09-11 22:00:59,998 INFO -(Success:true)-(Validation:true)-(GUID:68D74EBE-CE3B-7508-6028-CBE1DFA90F8A)-(REQ_RCVD:2022-09-11T22:00:59.051)-(RES_SENT:2022-09-11T22:00:59.989)-(SIZE:2 KB)-(RespSent_TT:0ms)-(Actual_TT:938ms)-(DB_TT:9ms)-(Total_TT:947ms)-(AppServer_TT:937ms)
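In case the shape of the query helps, this is roughly what I've been sketching. It assumes Total_TT is available as a numeric field (with the "ms" suffix stripped, e.g. via an EVAL in the data model) in an accelerated data model, since tstats can only see data model fields or indexed fields; the data model name AppPerf is a placeholder:

| tstats avg(AppPerf.Total_TT) as mean median(AppPerf.Total_TT) as median
    perc75(AppPerf.Total_TT) as p75 perc95(AppPerf.Total_TT) as p95 perc99(AppPerf.Total_TT) as p99
    from datamodel=AppPerf by _time span=1h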
I have a customer that would like to use Splunk to search for a set of devices by their respective barcodes. The barcodes come from an external list that will be placed in a separate index, referred to here as "index2". The barcodes in index2 then need to be matched to their respective organizations, which reside in another index, referred to here as "index1". In a nutshell, the customer would like to compare the list of barcodes in index2 against index1 and see if they match any organizations. Finally, if a barcode (index2) matches an organization (index1), the customer would like to list all information associated with the barcode (i.e. hostname, serial number, organization name, etc.). Thank you in advance for your help!
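The rough shape I've been picturing is a stats-based correlation across both indexes; all field names below (barcode, organization, hostname, serial_number) are assumptions about the data, not confirmed:

(index=index1) OR (index=index2)
| stats values(organization) as organization values(hostname) as hostname
    values(serial_number) as serial_number dc(index) as index_count by barcode
| where index_count = 2

The dc(index) = 2 filter keeps only barcodes that appear in both indexes, i.e. barcodes from the external list that matched an organization record.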
Hello Team, in our environment we have created use cases in Content Management in Splunk ES. We want to know the query to search the logs for whether anyone with admin access made changes to the use cases by mistake. To explain in detail: someone with admin access made a change to a use case, and to find who changed it I was trying _internal with this query:

index="_internal" sourcetype=*content_management*

But I am not getting any useful data with this query. Please help me understand where the logs for Content Management (use cases) are stored in Enterprise Security and how to search them; if anyone has an idea of the query, please help me with it. We have to check the internal logs for the changes made in Content Management. Thanks in advance. Bye Bye!
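One avenue I've been probing, on the assumption that ES correlation searches are stored as saved searches under the hood, is the splunkd REST access log, which records who POSTed changes to saved search objects (a sketch; verify the field names against your splunkd_access extractions):

index=_internal sourcetype=splunkd_access method=POST uri=*/saved/searches/*
| table _time user uri status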
I see an interesting Simple XML idiom below:

<input type="multiselect" token="multiselect_lines" searchWhenChanged="true">
  <label>Lines</label>
  <choice value="ACEKLMRSWY">All lines</choice>
  <choice value="A">A Line</choice>
  <choice value="C">C Line</choice>
  <choice value="E">E Line</choice>
  <choice value="K">K Line</choice>
  <choice value="L">L Line</choice>
  <choice value="M">M Line</choice>
  <choice value="R">R Line</choice>
  <choice value="S">S Line</choice>
  <choice value="W">W Line</choice>
  <choice value="Y">Y Line</choice>
  <default>ACEKLMRSWY</default>
  <prefix>regex Location="^[</prefix>
  <suffix>]"</suffix>
  <change>
    <eval token="form.multiselect_lines">case(
      mvcount('form.multiselect_lines') == 2 AND mvindex('form.multiselect_lines', 0) == "ACEKLMRSWY", mvindex('form.multiselect_lines', 1),
      mvfind('form.multiselect_lines', "ACEKLMRSWY") == mvcount('form.multiselect_lines') - 1, "ACEKLMRSWY",
      true(), 'form.multiselect_lines')</eval>
  </change>
</input>

It seems to update the appearance of the multiselect input "multiselect_lines": whenever the selections in the multiselect change, "form.multiselect_lines" is updated accordingly. I guess it is supposed to address a deficiency of multiselect in Splunk, where the "All" option does not disappear automatically when a subset is selected, and "All" as the default does not come back automatically when no subset remains selected. The above is my attempt to understand how the functionality is achieved. It works as hypothesized in a dashboard that I'm studying, but when I copied the mechanism to my own dashboard, it had no effect on the behavior. So I wonder what the token with the pattern form.<multiselect_input_token> means, and what it takes to make the above mechanism work for automatically removing and adding "All" in the displayed selection. I know there is a JavaScript solution that modifies the list of multiselect options on the fly, but I don't have the admin privilege to add JavaScript for my dashboard, so a solution without admin privileges would be handy.
Hi, I am trying to monitor data from about 200 servers with different sources. What is the best way to do this easily and efficiently? I am on a time crunch, so any help would be fantastic. I understand that putting a universal forwarder on a server will send its data to the indexer, but I can't do that by hand for over 200 servers. HELP. Thanks
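In case it shapes the answers: what I'm picturing is pushing the UF out with our existing software-distribution tooling and then managing inputs centrally with a deployment server, so none of the 200 machines is configured individually. A sketch of the client-side piece (the hostname is a placeholder):

# deploymentclient.conf on each universal forwarder
[target-broker:deploymentServer]
targetUri = deploy.example.com:8089

The deployment server can then push apps containing inputs.conf/outputs.conf to server classes, grouping the 200 hosts however makes sense.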
Hello folks, I am new to Splunk. We need to change the number of rotated log files from 5 to 3 on Splunk Enterprise for Windows. Is it safe to change all the "maxBackupIndex" keys directly in /etc/log.cfg, or is there a better way to achieve that goal? Thank you
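From what I've read (please correct me if this is wrong), edits to log.cfg can be overwritten on upgrade, and the safer pattern is to put only the overridden lines in a log-local.cfg next to it, something like:

# %SPLUNK_HOME%\etc\log-local.cfg -- overrides the matching keys in log.cfg
# (the appender name must match the one used in your log.cfg)
appender.A1.maxBackupIndex=3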
Hello folks, we have some Linux machines with the UF installed that connect to our search head. We don't have access to those machines. Is there an SPL query we can use to know when the UF version on a machine has changed? Thank you.
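For what it's worth, the direction I've been exploring (a sketch, not yet verified on our setup) is the forwarder metadata the indexers record in their tcpin_connections metrics, which include a version field per connecting host:

index=_internal sourcetype=splunkd group=tcpin_connections fwdType=uf
| stats dc(version) as version_count values(version) as versions latest(version) as current_version by hostname
| where version_count > 1

Hosts showing more than one distinct version within the search window would be the ones whose UF changed during that period.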
Here is my experience troubleshooting Splunk data ingestion issues.

1. Search for the top 3 issues in your environment:

index=_internal host=<indexer_host or HF_host> source="/opt/splunk/var/log/splunk/splunkd.log" log_level=WARN
| top 3 component

2. Address the top issue and review. Let's walk through an example component: in my case DateParserVerbose was the top issue. When you use the drilldown to see the logs, you notice that the fields needed to narrow down the issue are not parsed by the TA or by Splunk. Here is the SPL to extract them using rex:

index=_internal host=<indexer_host or HF_host> source="/opt/splunk/var/log/splunk/splunkd.log" log_level=WARN component=DateParserVerbose
| rex "\] - (?P<source_message>.+)source\S(?P<data_source>.+)\|host\S(?P<data_host>\w{5}\d{4}\S$m$\S$msk$\S$mask$\S$msk$)"
```Note: $m$, $msk$, $mask$ are masked values. You need to put your domain here.```

3. Then you can see which source is causing which issue. You can even break the message down further using rex to find the common cause, and repeat step 1:

| stats values(data_source) as data_source by data_host source_message
| top source_message

Happy Splunkin'! We wish we had this in the beginning, but most Splunkers are tasked with so much that there isn't enough time left to troubleshoot ingestion issues. How do you handle other kinds of troubleshooting, like filtering events and sending things to the null queue, or blacklisting? Do you see any of that in the logs?
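On the null-queue point, for anyone following along, the standard props/transforms pattern looks roughly like this (the stanza name and regex are illustrative placeholders; this runs at parsing time, so it belongs on indexers or heavy forwarders, not universal forwarders):

props.conf:

[my_noisy_sourcetype]
TRANSFORMS-drop_noise = drop_noise

transforms.conf:

# Route events matching the regex to nullQueue so they are never indexed
[drop_noise]
REGEX = DEBUG|heartbeat
DEST_KEY = queue
FORMAT = nullQueue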
Hi all, Can anyone recommend a way of allowing 'investigative' information to be added to an alert, such that it's stored in our Splunk Cloud instance? We use a 3rd party supplier who carries out triage on our Splunk alerts, and they have their own ticketing system as part of their SOAR infrastructure. That works well but it's not our system and if we decide to stop using that supplier we potentially lose all the information and comments added to alert tickets. We would therefore like to add our own comments within Splunk when an alert is triggered, so that the information is stored for future reference, e.g. to help an analyst investigating an alert if it triggers again in future, or if someone else in the organisation wants to modify that alert. I'm aware an alert is simply an action that takes place if the output of a query meets a certain condition, so I'm not necessarily asking how to add information to the alert object itself. I'm open to any suggestions, e.g. simple add-ons or Apps that act like a basic SOAR/incident management system. Thanks.
Hi, I'm trying to identify which users updated which lookup file and what information they updated. I was planning to track this information using dashboards. Thank you for your assistance in advance. The query I tested is below, but it only displays a limited set of data, such as user, app, and lookup file name:

index=_internal "Lookup edited successfully"
| table _time user namespace lookup_file
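As far as I can tell, Splunk doesn't log the content diff of a lookup edit by default, so the "what information they updated" part may not be recoverable from _internal at all. The other angle I've found worth checking (a sketch; confirm the extractions in your version) is REST calls against the lookup-table-files endpoint in the splunkd access log:

index=_internal sourcetype=splunkd_access uri=*/data/lookup-table-files/* (method=POST OR method=DELETE)
| table _time user method uri status

Capturing actual before/after contents would likely need something extra, e.g. a scheduled search that periodically snapshots lookup contents (inputlookup piped through collect into a summary index) so you can diff versions yourself.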
We currently get email alerts from Splunk whenever they are scheduling any maintenance on our instances; however, we would like to add an additional email address and we are not sure how this could be done. If anyone could point us to any documentation or give steps on how this can be accomplished, that would be great. Thanks
Hi, I want to convert the epoch time appearing in a field in my events, but I want to convert it at index time, so that when I search for events, instead of

{"@timestamp":1663854197000,"event":{"id":"101........................

I see

{"@timestamp":human readable format,"event":{"id":"101........................

I know that Splunk reads the epoch time and converts it to human-readable format for the _time field, but I want to transform the raw events themselves to have a human-readable format. I am assuming I would need to do this in props.conf at index time; maybe SEDCMD could do it, I am not sure, I just can't get the syntax right. I would really appreciate it if anyone can help with this. Thank you in advance!
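After more digging, my understanding (unverified, so treat this as a sketch) is that SEDCMD only does textual substitution and can't compute an epoch-to-date conversion, whereas INGEST_EVAL in transforms.conf can run eval functions at index time. Something like the following is the shape I have in mind; the sourcetype and stanza names are placeholders, and the regex assumes @timestamp is always a 13-digit epoch in milliseconds:

props.conf:

[my_sourcetype]
TRANSFORMS-epoch2human = epoch_to_human

transforms.conf:

# Pull out the epoch millis, format it, then rewrite _raw with the readable value
[epoch_to_human]
INGEST_EVAL = epoch_ms=tonumber(replace(_raw, "^.*\"@timestamp\":(\d{13}).*$", "\1")), _raw=replace(_raw, "\"@timestamp\":\d{13}", "\"@timestamp\":\"" . strftime(epoch_ms/1000, "%Y-%m-%dT%H:%M:%S%z") . "\"")

Note that rewriting _raw at index time is permanent and only affects newly indexed events; as far as I can tell timestamp extraction runs earlier in the pipeline than these transforms, but I'd test on a staging sourcetype first.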
Hi team, I am from the admin team. I want to find out how many of our indexes are empty and no longer receiving data, so that I can remove those indexes and clean up my indexes.conf. Is there a query or another way to find this?
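The direction I've been considering is the REST indexes endpoint (a sketch; totalEventCount and currentDBSizeMB are standard fields it returns, but validate against your version):

| rest /services/data/indexes
| stats sum(totalEventCount) as total_events max(currentDBSizeMB) as size_mb by title
| where total_events = 0
| table title total_events size_mb

For "no longer receiving data" rather than strictly "empty", comparing the latest event time per index seems closer, e.g. | tstats max(_time) as last_event where index=* by index, then filtering on how old last_event is.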