All Topics



Let's say I have created two custom metrics, a Counter and a Timer, using Micrometer in Spring Boot 3. These metrics are exposed via the /actuator/metrics and /actuator/prometheus endpoints. I have AppDynamics enabled for my application, but nowhere can I see the custom metrics I created. There is a section called Metrics Browser, which does not show the custom metrics. How can I view these metric details in AppDynamics for my Spring Boot 3 app?
Hi, I want to create an alert based on files received. Every day, at random times, we receive files, e.g. file name: file_20240613_1222_100.xml. Here I can extract Date: 20240613 and CompanyId: 1222. I need to create an alert that runs every 30 minutes to check whether any file has arrived. If a file is detected, it should compare both 'Date' and 'CompanyId' with the files received in the last 30 days. If there is any filename in the last 30 days with the same 'Date' and 'CompanyId', it should trigger an email alert. Base search:

index=wealth | search transform-file | search ace_message
| rex field=_raw "inputFileName: (?<inputFileName>.*?),"
| rex field=_raw "outputFileName: (?<outputFileName>.*?),"
| rex field=inputFileName "file\_\d+\_(?<CompanyId>\d+)\_"
| rex field=inputFileName "file\_(?<Date>\d+)\_"
| table inputFileName, outputFileName, CompanyId, Date

This searches the last 30 minutes to see if any new file arrived, but I am not sure how to check the same fields against the last 30 days of filenames. Can someone help?
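Not part of the original post, but one way to sketch the 30-day comparison (the time bounds and the simplified extraction are assumptions): run the base search over the last 30 minutes, and keep only events whose Date/CompanyId pair also appears in a subsearch over the preceding 30 days.

```spl
index=wealth "transform-file" "ace_message" earliest=-30m
| rex field=_raw "inputFileName: (?<inputFileName>.*?),"
| rex field=inputFileName "file_(?<Date>\d+)_(?<CompanyId>\d+)_"
| search
    [ search index=wealth "transform-file" "ace_message" earliest=-30d latest=-30m
      | rex field=_raw "inputFileName: (?<inputFileName>.*?),"
      | rex field=inputFileName "file_(?<Date>\d+)_(?<CompanyId>\d+)_"
      | dedup Date CompanyId
      | fields Date CompanyId ]
| table inputFileName, Date, CompanyId
```

If this returns any rows, the alert condition ("number of results > 0") can trigger the email action.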
Hello everyone, due to the high number of identical devices, I would like to be able to select via a dropdown which devices are displayed. Has anyone done this before who could share the procedure and the code? Many thanks and best regards, Alex
Hi all, we have a cluster on site1 where all peers run "CentOS Linux release 8.2.2004 (Core)" on kernel "4.18.0-348.7.1.el8_5.x86_64", and we would like to deploy the site2 cluster with RHEL 8 on all the cluster members. Would that cause any problems with Splunk's functionality?
Hi all, I have a raw message which contains an action name like this: CommBank.Api.PricingExtractor.Controllers.EventPublishController.PublishEventsToKafkaTopics (CommBank.Api.PricingExtractor), which I have extracted using the regular expression below:

rex field=message "ActionName\\\":\\\"(?<ActionName>[^\\\"]+)"

Is there a way to extract only the last part, after the final "." and before "(", i.e. just "PublishEventsToKafkaTopics"? I tried a few ways but kept getting errors. Any help will be appreciated. Thanks in advance.
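As a sketch (assuming ActionName has already been extracted as in the question), a second rex can anchor on the last dot before the opening parenthesis, since the character class cannot cross another ".":

```spl
| rex field=ActionName "\.(?<Action>[^.\s]+)\s*\("
```

For the sample value above, Action would come out as "PublishEventsToKafkaTopics". An eval alternative using split/mvindex on the part before the space would also work.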
Hi Splunkers, I am currently working on creating custom alerts with JavaScript and am having trouble finding proper documentation. I reviewed some posts in the Splunk community that included documentation links, but many of those links are expired or removed. Could someone provide me with the current documentation links for creating custom alerts with JavaScript? Specifically, I am working with parameters like:

'action.email.sendresults': 1,
'action.email.message.alert': 'xxxxx',
'action.email.to': email,
'action.logevent.param.event': '{"session_id": $result.session_id$, "user": $result.user_name$}',
'action.logevent.param.index': index,
'alert.digest_mode': 0,
'alert.suppress': 1,

Thanks, Sanjai
Is it possible to perform multiple operations in a single if condition, as can be done in other languages? For example, in other scripting languages this can be done:

if(field==1){
  group=group+1;
  groups=groups+","+group;
} else {
  //this is a comment, do nothing
}

How can this be done in Splunk?
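SPL's eval if() returns a single value rather than running a statement block, so the usual workaround is one if() per assignment (field names here mirror the pseudocode in the question):

```spl
| eval group=if(field==1, group+1, group)
| eval groups=if(field==1, groups . "," . group, groups)
```

Note that these evals operate per event; carrying a running value across events, as the pseudocode implies, would need streamstats or accum instead.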
Say I create a query that outputs (as a CSV) the last 14 days of hosts and the dest_ports each host has communicated on. Then I would use inputlookup on that CSV to compare against the last 7 days of the same type of data. What would be the simplest SPL to detect anomalies?
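A minimal sketch of one such comparison (the index and the lookup file name are assumptions): summarize the last 7 days by host/dest_port, then keep only pairs that are absent from the 14-day baseline CSV.

```spl
index=network earliest=-7d
| stats count by host, dest_port
| search NOT
    [ | inputlookup baseline_14d.csv
      | fields host dest_port ]
```

Anything this returns is a host/port combination not seen in the baseline, which is one simple definition of "anomaly" here.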
I have a new indexer set up for dev, and I need to move its default SPLUNK_DB path to the mount points we have set up for its cold/hot data.

Currently, we have storage allocated on drives for the cold and hot data at /export/opt/splunk/data/<cold|hot>. I have ingested some test data with eventgen, and it ended up in /export/opt/splunk/var/lib/splunk/.

I would just copy everything over, update splunk-launch.conf, and edit SPLUNK_DB to be /export/opt/splunk/data, but there are a lot of files under /export/opt/splunk/var/lib/splunk/. I really only have one index with data in it, the testindex index. What would be the best way to migrate all of the data from /export/opt/splunk/var/lib/splunk/ while making sure that future events get sent to the correct hot/cold databases? The files under /export/opt/splunk/var/lib/splunk/ don't specify hot or cold until I get into the specific directories. At this point, all of the data could be considered hot as it's new, but I'd like to confirm that any future events get sent to the correct index.

When I run echo $SPLUNK_DB, I do not get any output. When I run printenv, I do not see $SPLUNK_HOME or $SPLUNK_DB and their values. Within splunk-launch.conf, $SPLUNK_DB is commented out, and there isn't one set in local to specify it. So why does it default to /export/opt/splunk/var/lib/splunk/?

I saw this Splunk doc: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Moveanindex. But I already have a directory I want; would I have to move each folder under the current DB directory individually to ensure they land in the right place? I'd just like some guidance on best practice for this indexer. I just have one SH, one indexer, and one forwarder.

Thanks for any help.
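For reference, here is a hedged indexes.conf sketch (the exact subdirectory layout is an assumption based on the paths in the post) that points testindex's hot/warm and cold buckets at the desired mounts per index, instead of relocating SPLUNK_DB wholesale:

```
# indexes.conf on the indexer -- sketch only, paths are assumptions
[testindex]
homePath   = /export/opt/splunk/data/hot/testindex/db
coldPath   = /export/opt/splunk/data/cold/testindex/colddb
thawedPath = /export/opt/splunk/data/cold/testindex/thaweddb
```

Splunk requires all three path settings for an index; changing them takes a restart, and existing buckets must be moved to the new paths while splunkd is stopped.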
How do I clone a dashboard and lookup tables from one app to another in Splunk?
I am looking to place a heavy forwarder in Azure and have it forward events/data to the main indexer using an HTTP token. The heavy forwarder will only be used to forward data, not to index or search. Will the HF need its own license, and how will it relate to the license server?
My goal is to track when a user launches Wireshark and to see which user launched it. I also want to see what the user is doing within the application, such as which packets were captured, etc. Is this possible within Splunk Enterprise? Are there any additional apps I will need to make the activity easily readable?
I am planning to build a Splunk dashboard for monitoring connection issues from various sources. Specifically, I need to identify when a connection fails or when an application stops sending data to Splunk, and display these issues on the dashboard. The data sources include:

- Application server universal forwarder to our Splunk heavy forwarder
- HEC (HTTP Event Collector)
- Various add-ons (e.g., Azure add-on, AWS add-on, DB Connect add-on)

I am aware that many logs can be found under index=_internal, but I need assistance in identifying the logs that pertain to real-time errors or connection failures. Could you please help me with this?
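As a starting point for the forwarder case, a common pattern is to use the indexer's own tcpin_connections metrics to spot hosts that have gone quiet (the 15-minute threshold is an assumption to adjust per environment):

```spl
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen BY hostname
| eval minutes_silent = round((now() - last_seen) / 60, 1)
| where minutes_silent > 15
```

HEC errors surface in index=_internal sourcetype=splunkd with component names containing "HttpInputData", and add-on errors typically land in the same index under each add-on's own log source, so similar panels can be built per source.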
Hi community, my forwarder was putting logs in index A before 2024/06/01, and in index B after that date. To avoid missing any data when searching, I have a query which searches both indexes:

(index="A" "reports" "arts") OR (index="B" "reports" "arts")

In this case, I believe that if I select "last 24 hours" in the time selector, the query will still search index A, which is unnecessary. I guessed it would be more efficient to add a time limit to the first part, to restrict the range of events:

(earliest=-6mon latest="06/01/2024:00:00:00" index="A" "reports" "arts") OR (earliest="06/01/2024:00:00:00" index="B" "reports" "arts")

I expected Splunk to take the intersection of the two time ranges, but it doesn't. I noticed that adding these surprisingly slows down the query: the "earliest" and "latest" I added override the time selector, so even though I selected "last 24 hours", it returns events from the past 6 months of index A.

Again, my first query gives the correct result, but I'm still wondering if there's a way to improve efficiency using the 06/01 cutoff date. Any suggestions are appreciated!
Hi all, can you please help me with a Splunk query to list the Windows process names and CPU utilizations for a particular hostname? I have made the query as follows:

index=tuuk_perfmon source="Perfmon:Process" counter="% Processor Time" host=*hostname* (instance!="_Total" AND instance!="Idle" AND instance!="System")
| eval CPU=round(process_cpu_used_percent,2)
| timechart latest(CPU) by process_name

With this query I can get CPU utilization results for the listed Windows process names, but when analyzing the results, for a particular time frame there are multiple 100% CPU utilization readings for multiple Windows process names. Could someone please suggest or validate whether I am getting valid results, and also the reason for the multiple 100% CPU utilizations?
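One likely explanation worth checking: Perfmon's per-process "% Processor Time" is measured relative to a single core, so on a multi-core host several processes can legitimately report 100% at the same instant. A sketch that normalizes by core count (the hard-coded cores value is an assumption; substitute the host's actual logical processor count or a lookup):

```spl
index=tuuk_perfmon source="Perfmon:Process" counter="% Processor Time" host=*hostname* (instance!="_Total" AND instance!="Idle" AND instance!="System")
| eval cores=8
| eval CPU=round(process_cpu_used_percent / cores, 2)
| timechart latest(CPU) by process_name
```

After normalization, the per-process values should sum to at most roughly 100% across processes at any point in time.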
In the indexer, a search for data returns a timeline and details. The timeline is always green. This is fine for queries returning pleasant results; however, when the query returns unpleasant results, I would like to use red.
v9.2.0.1. The Monitoring Console in Splunk manager is not displaying volume information. All panels say "Search is waiting for input...". When I open the search for a given panel, it opens an "All time" query with "undefined" in the query box. I'm trying to monitor volume (and index) size/space using the Monitoring Console, Indexing, Volume Details (../app/splunk_monitoring_console/volume_detail_deployment...). The dashboards/panels populate fine when looking at index data, but are empty when trying to view volume data. The index panels have a proper REST query, while the volume panels are all "undefined". Any thoughts on a fix? Cheers,
I set the alert to High and Security Domain = Network, but in the Incident Review interface it appears as Low with Security Domain = Threat, and every event is classified like this, as shown in the attached images.
Hi team, I am uploading a CSV file into Splunk and need the rows from the 47th row to the 7th row combined into a single event. I have written the configuration, but the rows are not being merged into a single event, as shown in the attached screenshots. Please help with the configuration.
Hi, following the official instructions at https://apps.splunk.com/apps/id/Splunk_TA_microsoft_sysmon (Splunk Add-on for Sysmon 4.0.0), I deployed the add-on for Sysmon on my indexer, search head, and deployment servers, and started to collect Sysmon logs. I am running Sysmon 15.14 on the endpoints. The logs started to flow into Splunk, but when I search the index I constantly receive the following error:

[indexer.mydomain.es, mysearchhead.mydomain.es] Could not load lookup=LOOKUP-eventcode

I read the information at https://docs.splunk.com/Documentation/AddOns/released/MSSysmon/Lookups but couldn't find the root cause. The CSVs are in the path indicated in the documentation. Any suggestions? Many thanks.