All Topics

Hello, I wanted to know if there is a definitive rule on how to structure a props.conf. I read the docs and they do not say anything about a preferred place to call each operation. I understand the search-time operation order to be Extract -> Report -> FieldAlias -> Eval -> Lookup. My question is: within a stanza, do all the EXTRACT settings have to appear at the top, then the REPORTs, then the EVALs? For example:

FIELDALIAS-src_ip = srcip ASNEW src_ip
FIELDALIAS-dest_ip = dstip ASNEW dest_ip
FIELDALIAS-src_port = sport ASNEW src_port
FIELDALIAS-dest_port = dport ASNEW dest_port
FIELDALIAS-authentication_protocol = protocol ASNEW authentication_protocol
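For illustration, a minimal sketch of how such a stanza can be laid out (the sourcetype, transform, and lookup names here are hypothetical). As I read the docs, Splunk applies settings by operation class in the documented sequence, not by their position in the file, so grouping them is purely a readability convention:

    [my:sourcetype]
    # EXTRACT (inline extractions) runs first at search time
    EXTRACT-ports = sport=(?<sport>\d+)\s+dport=(?<dport>\d+)
    # REPORT (transforms-based extractions) runs next
    REPORT-fields = my_transform
    # FIELDALIAS runs after the extractions
    FIELDALIAS-src_ip = srcip ASNEW src_ip
    # EVAL (calculated fields) runs after aliasing
    EVAL-protocol = lower(protocol)
    # LOOKUP runs last among these
    LOOKUP-action = my_lookup action OUTPUT action_desc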
Hi, can someone help with how to track Splunk code in GitLab? Description: we deploy Splunk configurations using GitLab, and we want to be able to track and monitor every new change or new piece of code deployed through GitLab. I'm not sure whether the Splunk GitHub add-on will do this work. Thanks
Hi, I've run into an issue while working with the Splunk REST API, specifically when trying to leverage extracted fields. Within the Splunk app my data lives in, I have the following regular expression as a field extraction for the sendmail QID:

^[^\]\n]*\]:\s+(?P<QID>[^:]+)

This works as expected in the GUI for myself and users of the application. However, when attempting to leverage the "QID" field in a REST API call with the following parameters (x-www-form-urlencoded; I'm showing this as a dict since I use Python for my calls), there is no QID field available to me.

POST to services/search/jobs
{
  "rf" : "QID",
  "adhoc_search_level" : "verbose",
  "search" : "search index=sec_email sourcetype=<mysourcetype> earliest=@d | fields QID, msgid | search msgid=\"<my_message_id>\""
}

I've confirmed that I receive results here, but the QID field is not available. My question is: is there a parameter I am missing to leverage pre-existing field extractions from the Splunk app, or am I going to need to use rex to re-extract? (This is what I am doing now, but it's less than ideal.) Thank you!
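For context, a runnable sketch of how I make the call from Python with the requests library (the host, credentials, and simplified search string are placeholders, not my real values):

    import requests

    BASE = "https://splunk.example.com:8089"  # placeholder management endpoint
    AUTH = ("admin", "changeme")              # placeholder credentials

    # Create the search job. adhoc_search_level=verbose asks Splunk to apply
    # search-time field extractions; rf marks QID as a required field.
    resp = requests.post(
        f"{BASE}/services/search/jobs",
        auth=AUTH,
        verify=False,  # lab instance with a self-signed cert
        data={
            "adhoc_search_level": "verbose",
            "rf": "QID",
            "output_mode": "json",
            "search": 'search index=sec_email earliest=@d | fields QID, msgid',
        },
    )
    sid = resp.json()["sid"]

    # After polling the job until its dispatchState is DONE, fetch the results:
    results = requests.get(
        f"{BASE}/services/search/jobs/{sid}/results",
        auth=AUTH,
        verify=False,
        params={"output_mode": "json"},
    )
    print(results.json())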
Guys, can you help me? I need to know the elapsed time between these two fields:
CREATED_TS: 20220816182818.215
CURRENT_TIMESTAMP: 20220816185516
Do you have a tip on how I can do this? Thank you. Clecimar
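A minimal sketch using strptime, with the timestamp formats inferred from the sample values (the .%3N suffix assumes the trailing digits are milliseconds):

    | eval created = strptime(CREATED_TS, "%Y%m%d%H%M%S.%3N")
    | eval current = strptime(CURRENT_TIMESTAMP, "%Y%m%d%H%M%S")
    | eval elapsed_sec = current - created
    | eval elapsed = tostring(elapsed_sec, "duration")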
Hi, I'm wondering if it's possible to get an export of all triggered alerts, including the alert name, alert trigger condition(s)/alert query, and alert severity, as a table (CSV or JSON preferably)? I can access the triggered alerts from Activity > Triggered Alerts and all configured alerts from Search & Reporting > Alerts, but I have not found a straightforward way to export everything. For the alert trigger condition(s)/query, I'm looking specifically for which index(es), field(s), and field value(s) the alert is monitoring. Thanks in advance!
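One possible starting point, sketched with the rest command (this assumes permission to query these endpoints, and field names can vary by Splunk version). For the currently triggered alert instances:

    | rest /services/alerts/fired_alerts/-
    | table savedsearch_name trigger_time severity sid

And for the alert definitions, including the underlying search:

    | rest /services/saved/searches
    | search alert_type!=always
    | table title search alert_type alert_comparator alert_threshold alert.severity

Either result can be exported from the UI or piped through outputcsv.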
Hi all, we have multiple forwarders installed, nearly 1000 and above. We want to know if any UF stops sending data to Splunk due to the Splunk service not running. How can I create a dashboard to check whether a UF is not sending data or a client is not connected? Thanks
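A minimal sketch of a missing-forwarder check based on each host's internal logs (the 240-minute threshold is an assumption; tune it to how often your forwarders check in):

    | metadata type=hosts index=_internal
    | eval minutes_since = round((now() - recentTime) / 60)
    | where minutes_since > 240
    | convert ctime(recentTime) AS last_seen
    | table host last_seen minutes_since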
Hello Everyone, I currently have Splunk IT Service Manager installed, and I need to monitor the temperature of the CPU and the temperature of the power supply of the server. Can anyone help me enable those options through the app? Thank you very much. Diego.
Hi, I need some insights on useful alerts to create for monitoring logs and indexing in general. We have huge volumes of logs indexed daily. What kinds of alerts can be created to monitor them? I need some use cases. Thanks, Mala S
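As one illustrative use case, a sketch of an alert search that fires when an index stops receiving data (the index=* scope and the 60-minute threshold are assumptions to adapt):

    | tstats latest(_time) AS latest WHERE index=* BY index
    | eval minutes_stale = round((now() - latest) / 60)
    | where minutes_stale > 60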
We have set up the Splunk Mobile app to be deployed via MDM (Intune). Once installed, I check the instance name we are using and then select the SSO option. Our login page comes up, but the screen is grayed out and I can't enter anything. Anyone have any idea what we are doing wrong? Splunk Enterprise 8.2.2.1 / Secure Gateway
I am developing a query that shows stats for events with the same orderId. There is a flaw, though. When I run the query, I get results with only one event for an orderId, but when I take an orderId associated with only one event and put it in the original query, the result comes up with 2 events. Here are my queries and results:

(index=k8s_main LogType="KafkaMessageProcessedSuccess" message="OrderLineDestinationChangeRequested" Environment="PROD") OR (index=k8s_main container_name=fraud-single-proxy-listener message="Sending a message to kafka topic=order-events-avro*OrderLineDestinationChangeRequested*")
| rename contextMap.orderId AS nefiOrderId OrderNumber AS omsOrderId
| rename contextMap.requestId AS nefiRequestId NordRequestId AS omsRequestId
| rename OrderLineId as omsOrderLineId
| rex field=message "\"orderLineId\": \"(?<nefiOrderLineId>.*?)\", "
| eval orderLineId = coalesce(nefiOrderLineId, omsOrderLineId)
| eval requestId = mvappend(nefiRequestId, omsRequestId)
| eval orderId = coalesce(nefiOrderId, omsOrderId)
| stats dc(_time) AS eventCount values(_time) AS eventTime values(orderLineId) AS orderLineId values(requestId) AS requestId BY orderId
| where eventCount = 1

Second query with the orderId in the initial search:

(index=k8s_main LogType="KafkaMessageProcessedSuccess" message="OrderLineDestinationChangeRequested" Environment="PROD" 381263531) OR (index=k8s_main container_name=fraud-single-proxy-listener message="Sending a message to kafka topic=order-events-avro*OrderLineDestinationChangeRequested*" 381263531)
| rename contextMap.orderId AS nefiOrderId OrderNumber AS omsOrderId
| rename contextMap.requestId AS nefiRequestId NordRequestId AS omsRequestId
| rename OrderLineId as omsOrderLineId
| rex field=message "\"orderLineId\": \"(?<nefiOrderLineId>.*?)\", "
| eval orderLineId = coalesce(nefiOrderLineId, omsOrderLineId)
| eval requestId = mvappend(nefiRequestId, omsRequestId)
| eval orderId = coalesce(nefiOrderId, omsOrderId)
| stats dc(_time) AS eventCount values(_time) AS eventTime values(orderLineId) AS orderLineId values(requestId) AS requestId BY orderId
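One possible explanation, offered as a hedged guess rather than a confirmed diagnosis: dc(_time) counts distinct timestamp values, so two events that share the same _time collapse into an eventCount of 1. If the intent is to count events, stats count would distinguish them:

    ... | stats count AS eventCount values(_time) AS eventTime values(orderLineId) AS orderLineId values(requestId) AS requestId BY orderId
    | where eventCount = 1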
I have a Classic Dashboard that automatically changes the colors of a column by value. Like values are color coded with the same color. This does not use a range; every value simply gets its own color, automatically. When I converted my Classic Dashboard to Dashboard Studio, this functionality went away. How can I get it back? When I try to add column formatting, I am only given the option to color by ranges, not by values. Thanks!
Hi Folks, looking for someone to suggest how to extract the data from the below JSON API return in the following format? Queries are sent in the API call; structured data is returned, but without the keys.

Desired format: servername:"id.server", type:"TXT", error:"NOERROR"

{
    "result": {
        "rows": 2001,
        "data": [
            {
                "dimensions": [
                    "id.server",
                    "TXT",
                    "NOERROR"
                ]
            }
        ]
    }
}

Thanks!
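A minimal sketch with spath, assuming each event holds one result object and the dimension order is stable (spath array indexes are 1-based; adjust the paths if data contains multiple rows):

    | spath path=result.data{1}.dimensions{1} output=servername
    | spath path=result.data{1}.dimensions{2} output=type
    | spath path=result.data{1}.dimensions{3} output=error
    | table servername type error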
Hi. I need to upgrade my Splunk cluster. My current version is 7.3.2 and I need to upgrade to 8.0.10, but we have the Enterprise Security app version 6.0.0 installed. The Compatibility Matrix says ES 6.0.0 is compatible with Splunk Enterprise 8.0.10, but the 8.0.10 installers aren't on the website; the same matrix says that ES 6.2.0 is compatible with Splunk version 8.1.0. My questions are: could ES 6.0.0 be compatible with Splunk Enterprise 8.1.0? And where can I obtain the 8.0.10 version of Splunk? Thanks so much!
New to Splunk.  Have been tasked with finding a query to audit access to specific files.  Any ideas?
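One hedged starting point, assuming Windows file-access auditing (Security Event ID 4663) is enabled and being collected; the index, file path, and field names are placeholders that depend on your inputs:

    index=wineventlog EventCode=4663 Object_Name="C:\\Finance\\report.xlsx"
    | table _time host Account_Name Object_Name Accesses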
Has anyone been able to get the AWS Secrets Manager to work with DB Connect?  We would like to use AWS Secrets Manager to handle password rotations on a postgresql database that is also monitored by DB Connect.  I'm aware that Splunk allows for unsupported drivers (customer managed only) so we thought this might work... Here is Splunk's custom JDBC driver support documentation: Install database drivers - Splunk Documentation   Then the driver we are trying to use for Splunk hopefully is : GitHub - aws/aws-secretsmanager-jdbc: The AWS Secrets Manager JDBC Library enables Java developers to easily connect to SQL databases using secrets stored in AWS Secrets Manager.   The only gotcha with Secrets Manager's JDBC driver vs other JDBC drivers is it is not self-contained.
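For what it's worth, a sketch of what the custom connection type might look like if the library loads (the driver class and jdbc-secretsmanager: URL prefix come from the aws-secretsmanager-jdbc README; the stanza name and display name are made up):

    # db_connection_types.conf
    [postgres_secretsmanager]
    displayName = PostgreSQL via AWS Secrets Manager
    serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
    jdbcDriverClass = com.amazonaws.secretsmanager.sql.AWSSecretsManagerPostgreSQLDriver
    jdbcUrlFormat = jdbc-secretsmanager:postgresql://<host>:<port>/<database>
    port = 5432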
There is a scenario where one of our Trend Micro DDA appliances is not reporting to our syslog server, and we want to understand why. Previously we used port 514 and now we are using port 6514, but traffic on 6514 is not reaching syslog. We want both listening ports, 514 and 6514. My questions:
1. Can we have both ports open on our syslog server, i.e. 514 and 6514?
2. How do we enable listening on port 6514 on our syslog server?
Thank you
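On question 2, a minimal rsyslog sketch assuming plain TCP/UDP listeners (note that 6514 is conventionally syslog over TLS, which needs additional certificate directives not shown here; the file path is a placeholder):

    # /etc/rsyslog.d/listeners.conf
    module(load="imudp")
    input(type="imudp" port="514")
    module(load="imtcp")
    input(type="imtcp" port="514")
    input(type="imtcp" port="6514")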
I am trying to get the Splunk server data, such as system logs and audit logs, into the same index as my other Linux servers using the Splunk Linux app. How do I get this data ingested into my Linux index? So far the forums and discussion groups only refer to Splunk's own software data when I'm trying to get the server data. I have the app installed on each of my Splunk servers in the /app folder.
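A minimal inputs.conf sketch of what this could look like on each Splunk server (the index name linux and the sourcetypes are assumptions; the Splunk Add-on for Unix and Linux ships similar stanzas you can enable instead):

    [monitor:///var/log/messages]
    index = linux
    sourcetype = syslog

    [monitor:///var/log/audit/audit.log]
    index = linux
    sourcetype = linux_audit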
When doing a partial backup (GUI)/restore (CLI), it fails with the message:

process:17394 thread:MainThread ERROR [itsi.migration] [__init__:1413] [exception] Object names must be unique for object type: kpi_threshold_template. List of duplicate names: [omitted list of duplicate objects].
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-ITOA/lib/migration/migration.py", line 200, in migration_bulk_save_to_kvstore
    handler.migration_save_single_object_to_kvstore(object_type=object_type, validation=validation, dupname_tag=dupname_tag, skip_local_failure=skip_local_failure)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/migration/object_interface/itoa_migration_interface.py", line 130, in migration_save_single_object_to_kvstore
    skip_local_failure=skip_local_failure)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_common.py", line 1026, in save_batch
    skip_local_failure=skip_local_failure)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_object.py", line 431, in save_batch
    transaction_id=transaction_id, skip_local_failure=skip_local_failure)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_object.py", line 169, in do_object_validation
    raise e
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_object.py", line 164, in do_object_validation
    self.validate_identifying_name(owner, objects, dupname_tag, transaction_id)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_object.py", line 232, in validate_identifying_name
    409
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_common.py", line 949, in raise_error_bad_validation
    raise ItoaValidationError(message, logger, self.log_prefix, status_code=status_code)
ITOA.itoa_exceptions.ItoaValidationError: Object names must be unique for object type: kpi_threshold_template. List of duplicate names: [omitted list of duplicate objects].

I tried using the -e switch as documented (I tried it even though it only renames services/entities): https://docs.splunk.com/Documentation/ITSI/4.4.5/Configure/kvstorejson. When I remove the JSON file that holds the KPI threshold templates, the script successfully creates/updates all other objects. To be complete, this is the CLI call:

/opt/splunk/bin/splunk cmd python /opt/splunk/etc/apps/SA-ITOA/bin/kvstore_to_json.py -i -d -n -f /home/<myuser>/depot/itsi/itsi_configurations/ -u admin -p <cut> -v -e dup_202208161350

Any pointers?
Hi All, AppDynamics is able to discover one business transaction endpoint, but this URL has multiple methods/operations and I need metrics for each individual operation. For example:
URL: myserver/service
Operations: query, discovery, activate
Each operation has its own payload but the same URL. What I want is to see myserver/service-query, myserver/service-discovery, and myserver/service-activate.
We've upgraded this add-on to version 2.2.0 and are using Modern Authentication (OAuth). When it is configured on a HF, the internal log shows a 404 error as below:

127.0.0.1 - splunk-system-user [14/Aug/2022:20:08:03.558 -0700] "GET /servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/data/TA_MS_O365_Reporting_checkpointer/MDSLAB_obj_checkpoint_oauth HTTP/1.1" 404 140 "-" "curl" - 1ms

Does anybody know the cause of this error? Any solutions? Thanks.