All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a field whose values are sometimes numbers only and sometimes a combination of numbers and special characters. I would like to filter for the field values that contain both numbers and special characters. Example: Log 1 -> field1="238_345$345" Log 2 -> field1="+739-8883" Log 3 -> field1="542.789#298" I have already tried writing a regex query, but I could not find an expression that matches the combination of digits and special characters (no expression to match all the special characters). How can I filter and display the field values that contain both numbers and special characters? Could anyone help me with this?
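One possible approach, as a minimal sketch assuming the field is already extracted as field1: two lookaheads require at least one digit and at least one non-alphanumeric character.

  ... your base search ...
  | regex field1="^(?=.*\d)(?=.*[^A-Za-z0-9]).*$"

The regex command uses PCRE, so lookaheads are supported; widen or narrow the [^A-Za-z0-9] class if characters such as underscores or spaces should not count as special.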
Trying to expand multivalue fields with a one-to-one mapping, as shown in the image. mvexpand instead creates multiple rows pairing every value of one column with every value of the others. Actual data with multivalue fields (columns): child child_Name dv_class n_name direction name parent 55555
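The usual pattern for one-to-one expansion is to zip the parallel multivalue fields together before expanding, then split them back apart. A sketch using the field names from the post (extend the mvzip nesting for each additional column):

  ... your base search ...
  | eval zipped=mvzip(mvzip(child, child_Name, "|"), dv_class, "|")
  | mvexpand zipped
  | eval child=mvindex(split(zipped, "|"), 0), child_Name=mvindex(split(zipped, "|"), 1), dv_class=mvindex(split(zipped, "|"), 2)
  | fields - zipped

This keeps the Nth value of every column together instead of producing a cross-product.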
I'm trying to find a reference or documentation that shows which fields in Sysmon logs should be mapped to which fields in the Endpoint data model. For example, Image and ParentImage: since the data model has multiple fields for processes and parent processes, it is confusing to know where they should go.
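For illustration only, following common CIM conventions (verify against the Splunk Add-on for Sysmon, which ships these mappings): Image and ParentImage hold full executable paths, so they typically align with process_path and parent_process_path in Endpoint.Processes, while CommandLine and ParentCommandLine align with process and parent_process. A props.conf sketch with an assumed sourcetype:

  [XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
  FIELDALIAS-sysmon_endpoint = Image AS process_path ParentImage AS parent_process_path
  # backslash escaping in .conf files may need adjusting
  EVAL-process_name = mvindex(split(Image, "\\"), -1)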
Hi, we have a client looking to ingest application logs (CareMonitor) from an S3 bucket using a web method. We have never fetched logs this way before. Could you please let me know the process and best practices for this? We are currently using Splunk Cloud Platform, and CareMonitor looks to be a cloud-based application.
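For Splunk Cloud, S3 data is usually pulled with the Splunk Add-on for AWS rather than a custom web method, and the SQS-based S3 input is the generally recommended pattern. A rough sketch; every name below is illustrative and exact parameter names vary by add-on version, so in practice configure the input through the add-on UI:

  # inputs.conf sketch for the Splunk Add-on for AWS
  [aws_sqs_based_s3://caremonitor_s3]
  aws_account = caremonitor_aws_account
  sqs_queue_url = https://sqs.us-east-1.amazonaws.com/123456789012/caremonitor-queue
  sourcetype = caremonitor:logs
  index = caremonitor
  interval = 300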
How do I start the Elasticsearch process? It isn't mentioned in the AppDynamics documentation's Events Service installation section.
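For what it's worth, the Events Service embeds its own Elasticsearch, so it normally starts as part of the Events Service rather than separately. A sketch from memory of older AppDynamics docs (verify the script path and properties file for your version):

  # run from the events-service installation directory; paths are assumptions
  bin/events-service.sh start -p conf/events-service-api-store.properties

If the platform was installed with the Enterprise Console, start the Events Service from the console instead, since it manages the service lifecycle.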
I have a simple question: how can I check which apps a particular index has been used in?
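Two sketches, assuming the index is named myindex (substitute your own). The first mines the audit index for ad-hoc and scheduled searches that reference it; the second scans saved searches across apps. Field availability in _audit varies by version:

  index=_audit action=search info=granted search="*index=myindex*"
  | stats count by app, user

  | rest /servicesNS/-/-/saved/searches
  | search search="*index=myindex*"
  | table eai:acl.app, title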
How do I index the following file so that each row's timestamp is derived from the trigger time plus the elapsed time in the first column?

"File name","AUTO_231126_012051_0329.CSV","V2.10"
"Title comment","T1"
"Trigger Time","'23-11-26 01:20:51.500"
"CH","U1-2","Event"
"Mode","Voltage"
"Range","200mV"
"UnitID",""
"Comment",""
"Scaling","ON"
"Ratio","+1.00000E+02"
"Offset","+0.00000E+00"
"Time","U1-2[]","Event"
+0.000000000E+00,+2.90500E+00,0
+1.000000000E-01,+1.45180E+01,0
+2.000000000E-01,+7.93600E+00,0
+3.000000000E-01,+3.60100E+00,0
+4.000000000E-01,+3.19100E+00,0
+5.000000000E-01,+3.17300E+00,0
+6.000000000E-01,+3.17300E+00,0
+7.000000000E-01,+3.18400E+00,0
+8.000000000E-01,+3.19400E+00,0
+9.000000000E-01,+3.16500E+00,0
+1.000000000E+00,+3.16000E+00,0
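A search-time sketch of the timestamp arithmetic, assuming the header's trigger time and each row's elapsed seconds have already been extracted into fields named trigger_time and elapsed (both names are assumptions):

  ... your base search ...
  | eval trigger_epoch=strptime(trigger_time, "%y-%m-%d %H:%M:%S.%3N")
  | eval _time=trigger_epoch + tonumber(elapsed)

Because the trigger time lives in the file header rather than on each data row, carrying it onto every event generally requires an ingest-time transform or a preprocessing script; plain timestamp extraction in props.conf cannot add the elapsed offset.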
I want to combine these two events. Can anyone help me? I have tried using the join and append commands but haven't been successful. I want to analyze data in the 'Endpoint' data model to capture information related to 'mmc', and then use the 'ID' to retrieve the corresponding IP and port information from the 'Network' model.
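A sketch of the stats-merge pattern, which usually works better than join for combining events on a shared key. The object names, field names, and the shared ID are assumptions (the CIM network model is typically Network_Traffic):

  | from datamodel:"Endpoint.Processes"
  | search process_name="mmc.exe"
  | fields ID, process_name, dest
  | append [| from datamodel:"Network_Traffic.All_Traffic" | fields ID, src_ip, src_port, dest_ip, dest_port]
  | stats values(*) as * by ID

Rows that share the same ID collapse into a single combined result; events without a counterpart remain as partial rows you can filter afterwards.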
Hi, I have 2 indexers running in a cluster, and on the cluster master I am getting the error "Missing suitable candidates to create replicated copy in order to meet replication policy." I tried roll and resync, but the same error keeps coming back. If the buckets can't be rolled and resynced, my only choice is to delete them, but how do I mass delete instead of clicking one by one?
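Before deleting anything, note that with only 2 indexers this error is exactly what appears when replication_factor in the manager's server.conf is set higher than the peer count, so that is worth checking first. For enumerating the affected buckets in bulk rather than paging through the UI, a read-only sketch against the manager's REST endpoint (confirm the actual removal procedure with the docs or Splunk support):

  | rest splunk_server=local /services/cluster/master/buckets
  | table title, index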
Hi everyone, I am using the Splunk forwarder and have the below requirement. We have log files under the path /opt/airflow/logs/*/*/*/*.log, for example /opt/airflow/logs/getServerInfo/some_run_id/get_uptime/1.log or /opt/airflow/logs/build_upgrade/some_run_id/ami_snapshot_task/5.log. I want to extract the field some_run_id from the log file path and prepend it to each log line while sending the logs to Splunk. Below is my normal log format:

[2024-01-17, 03:17:02 UTC] {subprocess.py:89} INFO - PLAY [Gather host information]
[2024-01-17, 03:17:01 UTC] {taskinstance.py:1262} INFO - Executing <Task(BashOperator): get_os_info> on 2024-01-17 03:16:37+00:00
[2024-01-17, 03:17:01 UTC] {standard_task_runner.py:52} INFO - Started process 1081826 to run task

I want the format below in Splunk (in Splunk only, not in the actual log files):

some_run_id [2024-01-17, 03:17:02 UTC] {subprocess.py:89} INFO - PLAY [Gather host information]
some_run_id [2024-01-17, 03:17:01 UTC] {taskinstance.py:1262} INFO - Executing <Task(BashOperator): get_os_info> on 2024-01-17 03:16:37+00:00
some_run_id [2024-01-17, 03:17:01 UTC] {standard_task_runner.py:52} INFO - Started process 1081826 to run task

Any help is much appreciated!
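A sketch using INGEST_EVAL, which can derive the run id from the source path and rewrite _raw at parse time. This must run on a heavy forwarder or indexer (a universal forwarder does not parse events), and the sourcetype name is an assumption:

  # props.conf
  [airflow:task]
  TRANSFORMS-prepend_run_id = airflow_prepend_run_id

  # transforms.conf
  [airflow_prepend_run_id]
  INGEST_EVAL = run_id=replace(source, "^/opt/airflow/logs/[^/]+/([^/]+)/.*$", "\1"), _raw=run_id." "._raw

As a side effect, run_id also becomes an indexed field, which makes filtering by run id cheap.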
Hello - I'd like to start by thanking the community for reviewing and helping! Problem statement: I have appointment data from multiple clinical locations in Splunk, with different types of statuses. I am trying to create a dashboard that shows trends in appointment requests, to see whether we're gaining or losing patients, which days are the busiest, and which days are the slowest. Query:

index="index" cluster_id="*" dump_info:98
| spath output=log path=log
| rex field=log ".*\{\'name\'\:\s\'(?<name>.*)\'\,\s\'service_type\'\:\s\'(?<service_type>.*)\'\,\s\'status\'\:\s\'(?<status>.*)\'\,\s\'start\'\:\s\'(?<start>.*)\'\,\s\'lastUpdated\'\:\s\'(?<lastUpdated>.*)\'\,\s\'date\'\:\s\'(?<date>.*)\'\}"
| search name="*" AND status="*" AND start="*"
| dedup name service_type status start lastUpdated date
| eval startdate=strftime(strptime(start,"%Y-%m-%dT%H:%M:%SZ"),"%Y-%m-%d"), today=strftime(now(),"%Y-%m-%d")
| where startdate=today
| table name, status
| stats count(status) as status_count, values(*) as * by name, status
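For the trend views, a sketch built on the same extractions: swap the where/table tail for a timechart keyed on the appointment start time (field names as extracted in the post):

  ... same base search and rex as above ...
  | eval _time=strptime(start, "%Y-%m-%dT%H:%M:%SZ")
  | timechart span=1d count by status

A companion panel with | eval day=strftime(_time, "%A") | stats count by day surfaces the busiest and slowest weekdays.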
Hi, I was recently given a task to create an app for a specific department in my org that will have access to only 2-3 selected indexes. Please guide me through the high-level steps.
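Index restrictions are enforced per role rather than per app, so the usual pattern is an app for the department's dashboards plus a role that limits searchable indexes. A sketch with illustrative names:

  # authorize.conf
  [role_dept_user]
  importRoles = user
  srchIndexesAllowed = dept_idx1;dept_idx2
  srchIndexesDefault = dept_idx1

Then create the app (Manage Apps > Create app), restrict its read permission to that role, and assign the department's users to the role.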
Sorry, I am a noob at regex, and Splunk regex especially. I need a regex to extract everything between two single quotes; there will never be a single quote inside the name. E.g., extract the client code after the word client, and the same for transaction:

2024-01-16 15:04:22.7117 [135] INFO [javalang] Starting Report for client '0SD45' user 'user1' for transaction '123456'
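A sketch using rex; the capture-group names are arbitrary:

  ... your base search ...
  | rex "client\s+'(?<client_code>[^']+)'"
  | rex "transaction\s+'(?<transaction_id>[^']+)'"

The [^']+ class means "one or more characters that are not a single quote," which is safe here because the values never contain quotes.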
How do I create a trending graph of historical Tenable vulnerability data? I would like it to show a daily timeline.
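A minimal sketch, assuming data from a Tenable add-on; the index, sourcetype, and severity field are all assumptions to replace with your own:

  index=tenable sourcetype="tenable:sc:vuln"
  | timechart span=1d count by severity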
Hello there, I have installed Splunk DB Connect and all its requirements on version 9.0. This works pretty well for queries, either in Search or in SQL Explorer. I want to use a stored procedure to return data using DB Connect. The procedure works fine in SQL Explorer and from the MySQL command line, but if I try to call it from Splunk Search, I get no results. I am using this format:

dbxquery connection=myconn procedure="{call my_nice_proc(@val);}"

This should return a text string, or a table of results if the query/procedure returns 1 or more rows. Any ideas on what I am missing? Thanks, eholz1
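One thing worth trying, as a sketch: MySQL session variables such as @val often come back empty through JDBC, so issue the CALL as a plain query with a literal argument and see whether the result set appears (the connection name is from the post; the argument is assumed):

  | dbxquery connection=myconn query="CALL my_nice_proc('some_literal')"

If that returns rows, the issue is the @val session variable rather than DB Connect itself.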
Hi experts, I want to combine the location sites "HU1", "IA2", and "IB0" into a new site called AM. I tried the query below; it works, but it shows only the new site. How do I see all the original sites along with the new site in the location field?

| search location IN ("HU1","IA2","IB0")
| eval row=if(location IN ("HU1","IA2","IB0"),"AM",location)
| stats c by row

Any ideas how to solve this?
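The leading search command strips every other location before the eval runs, so only AM survives. A sketch that keeps all sites; only the filter is removed:

  ... your base search, without the location filter ...
  | eval location_group=if(location IN ("HU1","IA2","IB0"), "AM", location)
  | stats count by location, location_group

Grouping by both fields shows each original site next to its group; drop location from the by-clause for only the rolled-up view.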
We are in the process of generating events in ServiceNow using the Splunk Add-on for ServiceNow. We pass event information in the description field to tell the end user what actions need to be addressed. As part of the output, we want to include a table that summarizes the events detected. We are able to aggregate and group the information as necessary; we are just having a hard time establishing a pattern that lets us consistently control the output, and we are seeking guidance on how to exert greater control over the format. We would like to include a brief sentence with instructions on how to move forward, plus a table identifying all impacted events. I used an mvappend statement to add the header to a column, but could not concatenate the information so that it displays as a table:

| eval instructions = "..." . " "
| eval cheader = "Host Account Action "
| eval tabledata = host . " " . Account . " " . Action
| eval instructions = instructions . cheader . tabledata

The message we are seeking would look like the content below; the header and the rows need to be aligned and easy to read for the end user:

"The account is a controlled account and you will need to provide justification for accessing the account outside of security controls. Please review the table of events and provide insight into why the control was violated."

Table of Events:
Host        Account   Action
LC200506    admin     Success
LC200507    admin     Failure
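A sketch of one way to build an aligned multi-line description: printf pads each column to a fixed width, and urldecode("%0A") supplies literal newlines. Field names follow the post; the alignment only holds where ServiceNow renders the field in a fixed-width font, and if your Splunk version insists on a literal mvjoin delimiter, join on a placeholder and replace() it with the newline afterwards:

  | eval row=printf("%-12s %-10s %-8s", host, Account, Action)
  | stats list(row) as rows
  | eval header=printf("%-12s %-10s %-8s", "Host", "Account", "Action")
  | eval description="Please review the table of events below." . urldecode("%0A") . mvjoin(mvappend(header, rows), urldecode("%0A"))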
I have a panel in a dashboard that plots a trend line for the last 24 hours. Now I want to create a new alert query that follows the trendline of the panel. If the output of the alert query doesn't match the pattern of the panel query (not exactly, but to an extent), it should trigger an alert.
I've recently been advised that our organization intends to do away with the production domain where our current Splunk cluster resides and move everything over to the other domain in use. This implementation currently has nodes in two different domains, and the domain going away happens to house both our cluster manager and four indexers in a two-site configuration running Splunk Enterprise 9.1.1. I don't yet have all the details (i.e., whether the IP/hostname is changing), but in an effort to do some pre-emptive housecleaning, I changed the 'serverName' on one of the indexers in advance, from FQDN to just the hostname. The CM then complained that the peer couldn't rejoin the cluster because the GUID belonged to another indexer:

01-16-2024 13:43:03.307 +0000 ERROR ClusterMasterPeerHandler [25028 TcpChannelThread] - Cannot add peer=X.X.X.X mgmtport=8089 (reason: Peer with guid=<GUID> is already registered and UP).

This error feels a little like a chicken-and-egg situation. Essentially, I put the CM into maintenance mode, stopped the peer, updated serverName in server.conf, and started it back up. Perhaps I should have used 'splunk offline' instead of 'splunk stop' here? This has me thinking the operation we're about to undertake is a fairly complex one. I haven't been able to find any relatively recent posts about doing something similar, aside from a 2016 blog post that makes no mention of GUIDs and presumably refers to standalone indexers rather than clustered ones. Changing the GUID is presumably a non-starter, since the existing buckets all reference it in their names. Long story short, I'm looking for an order of operations and some dos/don'ts for an undertaking like this.
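A sketch of one commonly suggested order of operations for renaming a clustered peer; verify each step against the current indexer clustering docs before relying on it:

  # on the cluster manager
  splunk enable maintenance-mode

  # on the peer -- offline deregisters it cleanly, unlike a plain stop
  splunk offline

  # edit serverName in $SPLUNK_HOME/etc/system/local/server.conf on the peer

  # on the manager, if a stale registration for the old name lingers:
  splunk remove cluster-peers -peers <GUID>

  # restart the peer, then on the manager:
  splunk disable maintenance-mode

The GUID itself stays the same through a serverName change, which is why the buckets keep working; the stale "already registered and UP" record is what has to age out or be removed.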
I need to trim the ITSI KV store collection size. I created a local itsi_notable_event_retention.conf file in $SPLUNK_HOME/etc/apps/SA-ITOA/local/ and overrode the default retentionTimeInSec value to 3 months. However, the number of objects in the collection is still growing, and hence so is the collection size. How do I trim the collection size? I followed this document: Modify notable event KV store collections in ITSI - Splunk Documentation. Please assist.
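For reference, a sketch of what the local override usually looks like. The stanza name must match the collection's stanza in the default file (the one below is illustrative), and the retention job only deletes objects older than the window on its own schedule, so the collection shrinks gradually rather than immediately:

  # $SPLUNK_HOME/etc/apps/SA-ITOA/local/itsi_notable_event_retention.conf
  # copy the exact stanza name from ../default/itsi_notable_event_retention.conf
  [itsi_notable_group_system]
  retentionTimeInSec = 7776000
  disabled = 0

7776000 seconds is 90 days; also confirm disabled = 0, since a disabled retention stanza never trims.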