All Topics


Hello Experts, I am facing difficulty with index-time field extraction. My sample log file format:

```
Time stamp: Fri Mar 18 00:00:49 2022
File: File_name_1
Renamed to: Rename_1
Time stamp: Fri Mar 18 00:00:50 2022
File: File_name_1
Renamed to: Rename_1
Time stamp: Fri Mar 18 00:00:51 2022
File: File_name_1
Renamed to: Rename_1
Time stamp: Fri Mar 18 00:00:52 2022
File: File_name_1
Renamed to: Rename_1
Time stamp: Fri Mar 18 00:00:53 2022
File: File_name_1
Renamed to: Rename_1
```

props.conf

```
[ demo ]
CHARSET = AUTO
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 24
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
TIME_FORMAT = %a %b %d %H:%M:%S %Y
TIME_PREFIX = ^Time stamp:\s+
TRANSFORMS-extractfield = extract_demo_field
TRUNCATE = 100000
```

transforms.conf

```
[extract_demo_field]
REGEX = ^Time stamp:\s*(?<timeStamp>.*)$\s*^File:\s*(?<file>.*)$\s*^Renamed to:\s+(?<renameFile>.*)$
FORMAT = time_stamp::$1 file::$2 renamed_to::$3
WRITE_META = true
```
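In case it helps future readers, a minimal sketch of one way this is often set up, not a verified fix: break events only before "Time stamp:" so line merging can be disabled, and let the transform regex cross the embedded newlines instead of relying on ^/$ anchors (which, in an index-time transform, apply to the event as a whole). Adjust to your data before using.

```
# props.conf — sketch
[demo]
LINE_BREAKER = ([\r\n]+)(?=Time stamp:)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^Time stamp:\s+
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 24
TRANSFORMS-extractfield = extract_demo_field

# transforms.conf — sketch; the three capture groups match one three-line event
[extract_demo_field]
REGEX = Time stamp:\s+([^\r\n]+)[\r\n]+File:\s+([^\r\n]+)[\r\n]+Renamed to:\s+([^\r\n]+)
FORMAT = time_stamp::$1 file::$2 renamed_to::$3
WRITE_META = true
```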
How to resolve a memory spike?
Hi all, good morning. This is my first question on the community, as I have just started learning, so please be gentle if what I am asking is something obvious. I have configured an alert that executes two actions: send an email, and run a customized app. I'd like to know how to search the logs generated by the execution of the actions configured on this alert, specifically the logs generated by the app. Could anyone please let me know where to see the logs generated by the app (a Python script)? Thanks in advance. Kind regards, Juan
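For reference, a sketch of where such logs usually surface, assuming the app is packaged as a custom alert action: splunkd records alert-action output in the _internal index under the sendmodalert component. The action name below is a hypothetical placeholder.

```
index=_internal sourcetype=splunkd component=sendmodalert action="my_custom_app"
```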
Hi everyone, I have a line chart of some tasks and their durations. I am trying to add the average duration across all tasks as a threshold.

```
| stats values(duration) as "Duration(Hr)" by Task | sort Task | stats avg(duration) as threshold
```

I am not getting any results from the query. I have previously added the threshold as a fixed value. Is there a different method to add a threshold value which is calculated rather than fixed?
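For what it's worth, one likely issue is that duration no longer exists after the first stats renames it, and a second stats would also discard the per-task rows. A sketch using eventstats to attach the average as an extra column instead (renaming only at the end):

```
| stats values(duration) as task_duration by Task
| eventstats avg(task_duration) as threshold
| rename task_duration as "Duration(Hr)"
| sort Task
```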
Hi,

I set up a universal forwarder on a Docker container: 172.17.0.3. I configured it to forward data to 172.17.0.10:9997 (my VirtualBox VM's IP). I first enabled port 9997 to listen on my Splunk Enterprise instance via the web UI.

The connection between the VM and Docker is a bridge on the docker0 interface. I can ping the VM from my container and vice versa.

I checked the connection between the UF and the VM using ss -ant | grep "9997" and I got:

LISTEN 0 128 0.0.0.0:9997 0.0.0.0:*

As I'm new to networking, I'm clueless about how to make the connection work.

Thank you all for your help
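In case it's useful, a minimal outputs.conf sketch for the forwarder side, assuming default UF paths and that nothing else overrides the output group:

```
# $SPLUNK_HOME/etc/system/local/outputs.conf on the UF — sketch
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = 172.17.0.10:9997
```

The forwarder's splunkd.log (look for "Connected to idx" lines) usually tells you whether the TCP session to the indexer is actually being established.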
The predefined table names in the add-on don't include a service-ticket-related table, hence I wanted to know the service ticket table name so that I can create a custom input to pull service ticket data from ServiceNow using the ServiceNow add-on. I'm not able to find this information on the ServiceNow website or in the Splunk ServiceNow add-on documentation. Can someone please let me know?
Hi, I need to extract host values from one index (index=1) and see if similar matches exist in other indexes (index=2 and index=3). Below are the details:

Index=1: hosta=* hostb=* hostc=*
index=2: hostx=*
index=3: hostx=*

Can someone please help me with an SPL to find this?
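A sketch of one common pattern, normalizing the differently named host fields into one and counting how many indexes each value appears in (field names are taken from the question; adjust to your data):

```
(index=1 OR index=2 OR index=3)
| eval host_value=coalesce(hosta, hostb, hostc, hostx)
| stats values(index) as indexes dc(index) as index_count by host_value
| where index_count > 1
```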
Hello All, can someone help with this?

1. We have Splunk DB Connect installed on a local heavy forwarder to test a proof of concept, with Java installed locally. It works fine.
2. Now the Splunk DB Connect app is installed on Splunk Cloud, which requires settings related to the Java environment and Task Server on the Splunk DB Connect -> Configuration -> Settings page.
3. I can't find any documentation for these settings on Splunk Cloud. I already raised a support ticket, but they are very slow.

Regards, Sree.
Hello, I have built an app with which I collect logs from a REST API. For this I use the checkpoint manager with store type "file". I get the following error:

```
2022-03-28 07:57:58,877 +0000 log_level=INFO, pid=7000, tid=MainThread, file=ta_config.py, func_name=set_logging, code_line_no=94 | Set log_level=DEBUG
2022-03-28 07:57:58,878 +0000 log_level=INFO, pid=7000, tid=MainThread, file=ta_config.py, func_name=set_logging, code_line_no=95 | Start mdm_api task
2022-03-28 07:57:58,878 +0000 log_level=DEBUG, pid=7000, tid=MainThread, file=ta_config.py, func_name=_get_checkpoint_storage_type, code_line_no=102 | Checkpoint storage type=auto
2022-03-28 07:57:58,878 +0000 log_level=DEBUG, pid=7000, tid=MainThread, file=ta_checkpoint_manager.py, func_name=_create_state_store, code_line_no=44 | Got checkpoint storage type=auto
2022-03-28 07:57:58,878 +0000 log_level=INFO, pid=7000, tid=MainThread, file=ta_checkpoint_manager.py, func_name=_use_cache_file, code_line_no=93 | Stanza=mdm_api using cached file store to create checkpoint
2022-03-28 07:57:58,878 +0000 log_level=DEBUG, pid=7000, tid=MainThread, file=ta_checkpoint_manager.py, func_name=_create_state_store, code_line_no=64 | Creating file state store, use_cache_file=True, max_cache_seconds=5
2022-03-28 07:57:58,878 +0000 log_level=ERROR, pid=7000, tid=MainThread, file=ta_mod_input.py, func_name=main, code_line_no=287 | api_mdm_v5 task encounter exception
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_mod_input.py", line 283, in main
    cc_json_file=cc_json_file
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_mod_input.py", line 209, in run
    for task_config in task_configs
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_mod_input.py", line 209, in <listcomp>
    for task_config in task_configs
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_data_client.py", line 68, in create_data_collector
    dataloader)
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_data_collector.py", line 61, in __init__
    task_config)
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_checkpoint_manager.py", line 40, in __init__
    task_config[c.appname]
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_checkpoint_manager.py", line 71, in _create_state_store
    max_cache_seconds=max_cache_seconds
TypeError: get_state_store() got an unexpected keyword argument 'max_cache_seconds'
```
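Purely as a reading of the TypeError: cloudconnectlib passes max_cache_seconds to splunktalib's get_state_store(), but the splunktalib bundled in aob_py3 apparently predates that keyword, so the two bundled libraries are out of sync (typically addressed by regenerating the add-on with a current Add-on Builder or aligning the bundled library versions). A hypothetical Python shim illustrating the mismatch, not a recommended fix:

```python
# Hypothetical compatibility shim — names assume the AoB-generated layout;
# the real fix is aligning cloudconnectlib/splunktalib versions.
from splunktalib.state_store import get_state_store

def create_state_store(meta_configs, appname, use_cache_file=True, max_cache_seconds=5):
    try:
        # newer splunktalib versions accept max_cache_seconds
        return get_state_store(meta_configs, appname,
                               use_cache_file=use_cache_file,
                               max_cache_seconds=max_cache_seconds)
    except TypeError:
        # older splunktalib: retry without the unsupported keyword
        return get_state_store(meta_configs, appname,
                               use_cache_file=use_cache_file)
```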
Hi, I have the following JSON string logs. I would like to extract unique JSON field values: the query should go over all the message fields, extract a specific field's values from a JSON array ("name"), and dedupe them. Could someone help with a Splunk query?

Raw log:

```
{
  "@timestamp": "2022-03-28T07:38:45.123+00:00",
  "message": "request - {\"metrics\":[{\"name\":\"m1\",\"downsample\":\"sum\"},{\"name\":\"m2\",\"downsample\":\"sum\"},{\"name\":\"m1\",\"downsample\":\"sum\"}]}"
}
```

Embedded JSON:

```
{
  "metrics": [
    { "name": "m1", "aggregator": "sum" },
    { "name": "m2", "downsample": "sum" },
    { "name": "m1", "downsample": "sum" }
  ]
}
```

Expected output: m1, m2, ...
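A sketch of one way to do this, assuming the JSON payload always follows the literal prefix "request - " inside message (the index name is a placeholder):

```
index=your_index "request - "
| rex field=message "request - (?<payload>\{.*\})"
| spath input=payload path=metrics{}.name output=metric_name
| stats values(metric_name) as unique_metric_names
```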
Hi there,

One of my colleagues with admin access created a dashboard for audit purposes, to see who logged into Splunk and how many times each user logged in over the last 7 days. One of the users left the organization in January; we deleted the account using an admin login and transferred all of his knowledge objects to another user. However, his name still appears in the dashboard, and alerts are still triggering under his name. We have re-checked the user list and his name is not there, but we still see it in the alerts and dashboard. Can anyone help me with it?

The search query used for the dashboard is:

```
index=_internal sourcetype=splunkd_access
| timechart span=6h count by user
```

The raw event displayed when running the query is:

127.0.0.1 - name of the user* [28/Mar/2022:05:28:17.505 +0000] "POST /servicesNS/nobody/search/saved/searches/Single%20User%20Failed%20Attempt/notify?trigger.condition_state=1 HTTP/1.1" 200 1933 "-" "Splunk/8.1.0 (Linux 4.15.0-1023-azure; arch=x86_64)" - 2ms

Please help me to resolve it, and thanks in advance.
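One hedged observation: the raw event is a saved-search notify call, which splunkd attributes to the search's owner, so the name can keep appearing via scheduled artifacts even after the account is deleted. A sketch for listing saved searches still owned by the departed user (the username is a placeholder):

```
| rest /servicesNS/-/-/saved/searches
| search eai:acl.owner="departed_username"
| table title eai:acl.app eai:acl.owner
```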
Hi Team,

I have indexed a file with the current timestamp as _time, but I would like to run queries using the timestamp from the filename as _time instead. Is that possible now that the data is indexed, and if yes, how do we do it?
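A sketch of a search-time workaround (the indexed _time itself cannot be changed without re-indexing). This assumes the filename carries a date like 2022-03-28; adjust the rex and format string to your naming scheme:

```
index=your_index
| rex field=source "(?<file_date>\d{4}-\d{2}-\d{2})"
| eval _time=strptime(file_date, "%Y-%m-%d")
```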
Hi Experts,

When using the following eval, I would like to declare the variables in a macro, as in create_var(3):

```
| eval var_1 = if(isnull(var_1),"", var_1), var_2 = if(isnull(var_2),"", var_2), var_3 = if(isnull(var_3),"", var_3)
```

In some cases we need to define more than 30 variables, so we want to use a macro. I am thinking I could use foreach or map in the macro, but I am not sure how to do it. Any advice you could give me would be greatly appreciated!
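One hedged idea: a plain macro cannot loop over a count, but it can substitute a field list as text, and fillnull with an explicit field list creates each listed field as "" where it is missing, which is equivalent to the eval above. A macros.conf sketch (the macro name and argument are hypothetical):

```
[create_vars(1)]
args = fields
definition = fillnull value="" $fields$
```

Usage would then be: | `create_vars(var_1 var_2 var_3)`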
Hi, folks. We are trying to ingest data from OpenShift using Splunk Connect for Kubernetes. So far we have succeeded in ingesting from the whole infrastructure, so we are wondering if we can limit which specific namespaces we ingest data from. As far as I understood from this link (https://github.com/splunk/splunk-connect-for-kubernetes#managing-sck-log-ingestion-by-using-annotations), it explains how to route data per namespace to specific indexes while still collecting from all namespaces. Our goal is to collect only from certain namespaces, e.g. just 2 namespaces instead of all of them. Is that possible with the add-on? Did I miss some detailed explanation? Please advise.
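A hedged sketch of one approach: container log file names on the node embed the namespace (as <pod>_<namespace>_<container>-<id>.log), so the logging chart's fluentd tail path can be narrowed with globs. The keys below assume the SCK logging chart's values.yaml layout, and the namespace names are placeholders; verify against your chart version:

```yaml
# values.yaml sketch for the splunk-kubernetes-logging chart
fluentd:
  path: /var/log/containers/*_namespace-a_*.log,/var/log/containers/*_namespace-b_*.log
```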
Hi, I am looking for various types of sample log dumps, similar to tutorialsdata.zip, for exploring Splunk search options. I appreciate your help.

Best Regards, Anna
Hi, can anyone help me with how to query the counts of kafka_datatype for those stream_type values? I want to set an alert that fires if there is no increase within 3 hours, but only during the period 7am-8pm.

I have this query:

```
index="pcg_p4_datataservices_prod" sourcetype="be:monitoring-services"
| setfields a=a
| rex "^[^\|\n]*\|\s+(?P<kafka_datatype>\w+)\s+\-\s+(?P<kafka_count>.+)"
| search kafka_datatype IN (PRODUCED, CONSUMED)
| search stream_type IN (Datascore_Compress, Datascore_Decompress, Eservices_Eload, Eservices_Ebills)
| eval service_details=stream_type
| timechart span=3h limit=0 sum(kafka_count) by service_details
```

I tried adding earliest/latest to the query (-180m, I guess, for 3 hours), but it is not showing what I am aiming for, which is to detect when there is no increase in the count of kafka_datatype for those stream_type values over 3 hours, so I can alert on it:

```
index="pcg_p4_datataservices_prod" sourcetype="be:monitoring-services" earliest=-180m latest=now
| setfields a=a
| rex "^[^\|\n]*\|\s+(?P<kafka_datatype>\w+)\s+\-\s+(?P<kafka_count>.+)"
| search kafka_datatype IN (PRODUCED, CONSUMED)
| search stream_type IN (Datascore_Compress, Datascore_Decompress, Eservices_Eload, Eservices_Ebills)
| eval service_details=stream_type
| timechart span=3h limit=0 sum(kafka_count) by service_details
```
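A hedged sketch of an alert search, assuming kafka_count is a cumulative counter: compare the earliest and latest values in the 3-hour window per stream_type, and gate the result to the 07:00-20:00 period (running the alert on a schedule inside that window would also work):

```
index="pcg_p4_datataservices_prod" sourcetype="be:monitoring-services" earliest=-3h@m latest=now
| rex "^[^\|\n]*\|\s+(?P<kafka_datatype>\w+)\s+\-\s+(?P<kafka_count>.+)"
| search kafka_datatype IN (PRODUCED, CONSUMED) stream_type IN (Datascore_Compress, Datascore_Decompress, Eservices_Eload, Eservices_Ebills)
| stats earliest(kafka_count) as first_count latest(kafka_count) as last_count by stream_type
| where tonumber(last_count) <= tonumber(first_count)
| where tonumber(strftime(now(), "%H")) >= 7 AND tonumber(strftime(now(), "%H")) < 20
```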
I've been trying to set up Splunk Security Essentials but keep running into JavaScript errors and other odd behaviour. When I run the automated data introspection in "Step One: CIM Searches", I always get 42 completed searches; the remaining 22 searches complete but are never marked as completed (clicking the link to the search brings up the results I'd expect to see). In the browser Dev Tools console it shows an error that window.updateOrMergeProducts is not a function, and this seems to match up with the searches that are never marked as completed.

I've also noticed that it's getting a 400 Bad Request when doing a POST request to __raw/servicesNS/<username>/Splunk_Security_Essentials/search/jobs. Checking these, I can see that they are all for searches like

```
| from datamodel:Identity_Management.All_Assets | head 300000 | stats count
```

and the error that comes back from searching is that the data model doesn't exist. I'm unsure if this is because something went wrong with the installation or because the inventorying hasn't been completed yet.

Unfortunately, I also get errors when trying to configure the Data Inventory manually, where I can't attach a product to 2 categories (e.g. successful authentications & failed authentications). I've tried resetting several times without any progress.

I'm running Splunk Enterprise 8.2.5 and Splunk Security Essentials 3.5.0. Has anyone come across this behaviour before?
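A hedged way to check whether the missing data model is the root cause: the CIM Identity_Management model ships with the Splunk Common Information Model add-on (Splunk_SA_CIM), which the failing introspection searches appear to rely on. A quick sketch:

```
| rest /servicesNS/-/-/datamodel/model
| table title eai:acl.app
| search title="Identity_Management"
```

If that returns no rows, installing or sharing Splunk_SA_CIM may be the first thing to try.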
Hi, I have a parent panel which contains the table below:

```
Function Name | Success | Failure | SLA
greet         | 34      | 5       | 13.5
NGA           | 43      | 0       | 67.5
Customer      | 54      | 1       | 45
```

This has two drilldown panels: the 1st drilldown panel should appear if we click a value in the Failure column, and the 2nd drilldown panel should appear if we click a value in the SLA column.

Thanks in advance,
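A sketch of the usual pattern, assuming Simple XML dashboards: a conditional drilldown sets a different token depending on the clicked column, and each child panel carries depends="$failure_clicked$" or depends="$sla_clicked$". The token names are hypothetical.

```
<drilldown>
  <condition field="Failure">
    <set token="failure_clicked">$click.value2$</set>
    <unset token="sla_clicked"></unset>
  </condition>
  <condition field="SLA">
    <set token="sla_clicked">$click.value2$</set>
    <unset token="failure_clicked"></unset>
  </condition>
</drilldown>
```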
Hello,

I use an input text token in my search like this: town=$town$. By default, town = *.

The problem is that sometimes the field town doesn't exist in my events. When I choose *, I would like to be able to retrieve this kind of event too. Is it possible?

Thanks
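A sketch of one common workaround: give the missing field a placeholder value before filtering, so that * also matches events that originally had no town field (the index and placeholder value are hypothetical):

```
index=your_index
| fillnull value="unknown" town
| search town=$town$
```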
I want an if/else condition in which I need to choose an address (path). For example:

if (condition == something) { go to this path (ravi/go/bin.log) }
else if (condition == something) { go to this path (ravi/python/bin.log) }

Please help me with this. How can we do that?
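If this is meant in SPL, a sketch using case() to pick the path from a field value (the field name and the matched values are placeholders taken from the question):

```
| eval path=case(condition=="something", "ravi/go/bin.log",
                 condition=="something_else", "ravi/python/bin.log",
                 true(), "unknown")
```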