All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a Classic Dashboard that automatically changes the colors of a column based on its values. The values are color coded so that like values share the same color. This does not use a range; every distinct value simply gets its own color, automatically. When I converted my Classic Dashboard to Dashboard Studio, this functionality went away. How can I get it back? When I try to add column formatting, I am only given the option to color by ranges, not by values. Thanks!
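A possible workaround, sketched under the assumption that only range-based coloring is available: pre-compute a numeric bucket per distinct value in SPL, then point Dashboard Studio's range coloring at that bucket. The field names (status, color_bucket) are placeholders.

... your base search ...
| eventstats values(status) as all_values
| eval color_bucket = mvfind(all_values, "^".status."$")
| fields - all_values

mvfind assigns each distinct value a stable index (values() returns them sorted), so like values land in the same range bucket; values containing regex metacharacters would need escaping first.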
Hi folks, looking for suggestions on how to extract the data from the JSON API return below into the following format. Queries are sent in the API call; structured data is returned, but without the keys.

Desired format: servername:"id.server", type:"TXT", error:"NOERROR"

{
    "result": {
        "rows": 2001,
        "data": [
            {
                "dimensions": [
                    "id.server",
                    "TXT",
                    "NOERROR"
                ]
            }
        ]
    }
}

Thanks!
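Assuming the structure shown above, a minimal spath sketch (the names servername/type/error come from the desired output; with multiple rows under data, the dimensions would first need to be expanded in groups of three):

| spath input=_raw path=result.data{}.dimensions{} output=dimensions
| eval servername = mvindex(dimensions, 0), type = mvindex(dimensions, 1), error = mvindex(dimensions, 2)
| table servername, type, error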
Hi. I need to upgrade my Splunk cluster. My current version is 7.3.2 and I need to upgrade to 8.0.10, but we have the Enterprise Security app version 6.0.0 installed. The Compatibility Matrix says ES 6.0.0 is compatible with Splunk Enterprise 8.0.10, but the 8.0.10 installers aren't on the website. The same matrix says ES 6.2.0 is compatible with Splunk 8.1.0. My questions are: could ES 6.0.0 be compatible with Splunk Enterprise 8.1.0? And where can I obtain the 8.0.10 version of Splunk? Thanks so much!
New to Splunk. I have been tasked with finding a query to audit access to specific files. Any ideas?
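If these are Windows file servers, one common starting point is object-access auditing (Event ID 4663), assuming it is enabled and the events are forwarded; the index, sourcetype, field names, and file path below follow Splunk Add-on for Windows conventions and are assumptions to verify against your data:

index=wineventlog sourcetype=XmlWinEventLog EventCode=4663 Object_Name="*\\path\\to\\file.txt"
| stats count earliest(_time) as first_seen latest(_time) as last_seen by user, Object_Name, Accesses
| convert ctime(first_seen) ctime(last_seen)

For Linux hosts, auditd watch rules plus the linux_audit sourcetype would be the equivalent starting point.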
Has anyone been able to get AWS Secrets Manager to work with DB Connect? We would like to use AWS Secrets Manager to handle password rotations on a PostgreSQL database that is also monitored by DB Connect. I'm aware that Splunk allows unsupported drivers (customer-managed only), so we thought this might work. Here is Splunk's custom JDBC driver support documentation: Install database drivers - Splunk Documentation. The driver we are hoping to use is: GitHub - aws/aws-secretsmanager-jdbc: The AWS Secrets Manager JDBC Library enables Java developers to easily connect to SQL databases using secrets stored in AWS Secrets Manager. The only gotcha with Secrets Manager's JDBC driver versus other JDBC drivers is that it is not self-contained.
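For reference, a hedged sketch of the connection-type registration this would need in db_connection_types.conf: the driver class and the jdbc-secretsmanager URL prefix come from the aws-secretsmanager-jdbc README, while the stanza name and display name are placeholders.

[postgres_secretsmanager]
displayName = PostgreSQL via AWS Secrets Manager
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = com.amazonaws.secretsmanager.sql.AWSSecretsManagerPostgreSQLDriver
jdbcUrlFormat = jdbc-secretsmanager:postgresql://<host>:<port>/<database>
port = 5432

Because the driver is not self-contained, its dependency JARs would also have to be placed alongside it in the DB Connect drivers directory.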
There is a scenario where one of our Trend Micro DDA appliances is not reporting to our syslog server, and we need to understand why. Previously we used port 514; now we are using port 6514, but nothing on 6514 is reaching syslog, and we want both ports, 514 and 6514, listening. My questions: 1. Can we have both ports open on our syslog server, i.e. 514 and 6514? 2. How do we enable listening on port 6514 on our syslog server? Thank you.
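On question 1: yes, a syslog daemon can listen on both ports at once. A minimal sketch assuming rsyslog (6514 is conventionally syslog over TLS, whose certificate setup is omitted here):

# /etc/rsyslog.d/listeners.conf -- hypothetical example
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")
input(type="imtcp" port="6514")

If the DDA is sending TLS-encrypted syslog to 6514, the TLS driver must also be configured and the firewall must allow the port; otherwise the listener will be up but receive nothing.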
I am trying to get the Splunk servers' own data, such as system logs and audit logs, into the same index as my other Linux servers using the Splunk Linux app. How do I get this data ingested into my Linux index? So far the forums and discussion groups only cover Splunk's own software data, whereas I'm trying to get the servers' OS data. I have the app installed on each of my Splunk servers in the /app folder.
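A sketch of the inputs.conf this usually boils down to on each Splunk server ('linux' stands in for whatever your existing Linux index is called; paths and sourcetypes follow common Splunk Add-on for Unix and Linux conventions):

[monitor:///var/log/messages]
index = linux
sourcetype = syslog

[monitor:///var/log/audit/audit.log]
index = linux
sourcetype = linux_audit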
When doing a partial backup (GUI)/restore (CLI), it fails with the message:

process:17394 thread:MainThread ERROR [itsi.migration] [__init__:1413] [exception] Object names must be unique for object type: kpi_threshold_template. List of duplicate names: [omitted list of duplicate objects].
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-ITOA/lib/migration/migration.py", line 200, in migration_bulk_save_to_kvstore
    handler.migration_save_single_object_to_kvstore(object_type=object_type, validation=validation, dupname_tag=dupname_tag, skip_local_failure=skip_local_failure)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/migration/object_interface/itoa_migration_interface.py", line 130, in migration_save_single_object_to_kvstore
    skip_local_failure=skip_local_failure)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_common.py", line 1026, in save_batch
    skip_local_failure=skip_local_failure)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_object.py", line 431, in save_batch
    transaction_id=transaction_id, skip_local_failure=skip_local_failure)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_object.py", line 169, in do_object_validation
    raise e
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_object.py", line 164, in do_object_validation
    self.validate_identifying_name(owner, objects, dupname_tag, transaction_id)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_object.py", line 232, in validate_identifying_name
    409
  File "/opt/splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_common.py", line 949, in raise_error_bad_validation
    raise ItoaValidationError(message, logger, self.log_prefix, status_code=status_code)
ITOA.itoa_exceptions.ItoaValidationError: Object names must be unique for object type: kpi_threshold_template. List of duplicate names: [omitted list of duplicate objects].

I tried using the -e switch as documented, even though it only renames services/entities: https://docs.splunk.com/Documentation/ITSI/4.4.5/Configure/kvstorejson. When I remove the JSON file that holds the KPI threshold templates, the script successfully creates/updates all other objects. To be complete, this is the CLI call:

/opt/splunk/bin/splunk cmd python /opt/splunk/etc/apps/SA-ITOA/bin/kvstore_to_json.py -i -d -n -f /home/<myuser>/depot/itsi/itsi_configurations/ -u admin -p <cut> -v -e dup_202208161350

Any pointers?
Hi all, AppDynamics is able to discover one business transaction endpoint, but this URL has multiple methods/operations and I need metrics for each individual operation. For example: URL: myserver/service. Operations: query, discovery, activate. Each operation has its own payload but shares the same URL. What I want is to see myserver/service-query, myserver/service-discovery, and myserver/service-activate as separate transactions.
We've upgraded this add-on to version 2.2.0 and are using Modern Authentication (OAuth). When configured on a HF, the internal log shows a 404 error as below:

127.0.0.1 - splunk-system-user [14/Aug/2022:20:08:03.558 -0700] "GET /servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/data/TA_MS_O365_Reporting_checkpointer/MDSLAB_obj_checkpoint_oauth HTTP/1.1" 404 140 "-" "curl" - 1ms

Would anybody know the cause of this error? Any solutions? Thanks.
So I'm trying to install an app on Splunk Cloud; it went through the checks and failed with the following error:

Detected an outdated version of the Splunk SDK for Python (1.6.15). Please upgrade to version 1.6.16 or later. File: bin/splunklib/binding.py

Does anyone know how to upgrade the Splunk SDK for Python on Splunk Cloud?
We've started looking into Risk-Based Alerting (RBA) in Splunk ES and noticed that the logic for the risk notables is in fact case sensitive for risk objects (mostly users and systems). This is a bit counterintuitive, as the Asset & Identity (A&I) settings clearly say matching is not case sensitive, but we figured out that RBA doesn't use A&I at all; instead it uses the field value for the user/system directly, without any logic to merge users/systems under different aliases. I've made a small change to the RBA alert "Risk Threshold Exceeded For Object Over 24 Hour Period" to at least make it case insensitive, in case anyone else needs a fix for this problem as well. Just change the first two lines of the search from this:

| tstats `summariesonly` sum(All_Risk.calculated_risk_score) as risk_score, count(All_Risk.calculated_risk_score) as risk_event_count, values(All_Risk.annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id, dc(All_Risk.annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count, values(All_Risk.annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id, dc(All_Risk.annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count, values(All_Risk.tag) as tag, values(source) as source, dc(source) as source_count from datamodel=Risk.All_Risk by All_Risk.risk_object, All_Risk.risk_object_type
| `drop_dm_object_name("All_Risk")`

To this:

| tstats `summariesonly` sum(All_Risk.calculated_risk_score) as risk_score, count(All_Risk.calculated_risk_score) as risk_event_count, values(All_Risk.annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id, values(All_Risk.annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id, values(All_Risk.tag) as tag, values(source) as source from datamodel=Risk.All_Risk by All_Risk.risk_object, All_Risk.risk_object_type
| `drop_dm_object_name("All_Risk")`
| eval risk_object=lower(risk_object)
| stats sum(risk_score) as risk_score, sum(risk_event_count) as risk_event_count, values(annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id, dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count, values(annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id, dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count, values(tag) as tag, values(source) as source, dc(source) as source_count by risk_object, risk_object_type
Section for calculation_window_telemetry in /apps/SA-ITOA/default/savedsearches.conf:

search = | inputlookup calculation_window_telemetry_lookup
| eval zipped = mvzip('kpis.title', 'kpis.search_alert_earliest', ",")
| fields - kpis.title, kpis.search_alert_earliest
| mvexpand zipped
| eval x = split(zipped, ",")
| eval kpi_title = mvindex(x, 0)
| eval search_alert_earliest = mvindex(x, 1)
| fields - x, zipped
| eval calculation_window_{search_alert_earliest}_min = 1
| where kpi_title!="ServiceHealthScore"
| fields calc*
| stats sum(*) as *

Search query:

| savedsearch calculation_window_telemetry
| fields calculation_window_1_min calculation_window_5_min calculation_window_15_min calculation_window_1440_min
| addtotals
| rename Total as data.calculationWindowUsage.predefinedWindow.totalCount
| rename calculation_window_1_min as data.calculationWindowUsage.predefinedWindow.calculationWindowValueCount.calculation_window_1_min
| rename calculation_window_5_min as data.calculationWindowUsage.predefinedWindow.calculationWindowValueCount.calculation_window_5_min
| rename calculation_window_15_min as data.calculationWindowUsage.predefinedWindow.calculationWindowValueCount.calculation_window_15_min
| rename calculation_window_1440_min as data.calculationWindowUsage.predefinedWindow.calculationWindowValueCount.calculation_window_1440_min
| append [
    | savedsearch calculation_window_telemetry
    | fields - calculation_window_1_min calculation_window_5_min calculation_window_15_min calculation_window_1440_min
    | addtotals
    | rename Total as data.calculationWindowUsage.customWindow.totalCount
    | rename "calculation*" as data.calculationWindowUsage.customWindow.calculationWindowValueCount.calculation* ]
| stats first(*) as *
| fillnull
| makejson version(string), data.* output=event
| table event

Current output:

{
  "data": {
    "calculationWindowUsage": {
      "customWindow": {
        "calculationWindowValueCount": {
          "calculation_window_1260_min": 1,
          "calculation_window_111_min": 1
        },
        "totalCount": 2
      },
      "predefinedWindow": {
        "calculationWindowValueCount": {
          "calculation_window_1440_min": 1,
          "calculation_window_15_min": 1,
          "calculation_window_1_min": 1,
          "calculation_window_5_min": 1
        },
        "totalCount": 4
      }
    }
  }
}

Expected output:

{
  "data": {
    "calculationWindowUsage": {
      "customWindow": {
        "calculationWindowValueCount": [
          { "calculation_window_value": 1260, "count": 1 },
          { "calculation_window_value": 111, "count": 1 }
        ],
        "total_count": 2
      },
      "predefinedWindow": {
        "calculationWindowValueCount": [
          { "calculation_window_value": 1, "count": 1 },
          { "calculation_window_value": 5, "count": 1 },
          { "calculation_window_value": 15, "count": 1 },
          { "calculation_window_value": 1440, "count": 1 }
        ],
        "total_count": 4
      }
    }
  }
}

I need the calculationWindowValueCount output as a list of dictionaries; can anyone help me with this? Thank you.
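Since makejson emits one key per field, one hedged approach is to build the array string manually instead; a sketch for one of the two branches, with transpose turning the calculation_window_*_min columns into rows:

| savedsearch calculation_window_telemetry
| fields calculation_window_*
| transpose
| rename column as field, "row 1" as count
| rex field=field "calculation_window_(?<calculation_window_value>\d+)_min"
| eval obj = "{\"calculation_window_value\": " . calculation_window_value . ", \"count\": " . count . "}"
| stats list(obj) as objs, sum(count) as total_count
| eval calculationWindowValueCount = "[" . mvjoin(objs, ", ") . "]"

The predefined/custom split would mean running this twice with the four predefined fields included or excluded, mirroring the existing append.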
The values I need are located in the field "msg", and each msg contains 3 records. I run this query and get the result below:

index=summary
| search msg="*blablabla*"
| rex max_match=3 "Type=(?<Type>.+?)\,"
| rex max_match=3 "Restaurant=(?<Restaurant>.+?)\,"
| rex max_match=3 "Date=(?<Date>.+?)\,"
| rex max_match=3 "status=(?<status>.+?)\,"
| table Date, Restaurant, Type, status

Each result row holds three stacked values per cell:

Date: 2021-03-10, 2022-01-04, 2021-05-01 | Restaurant: Domino, SOUTHERN RESTAURANTS TRUST, MCDONALD'S | Type: A, B, A | Status: NEW, USED, USED
Date: 2021-03-11, 2021-03-12, 2022-02-05 | Restaurant: KFC, Domino, MCDONALD'S | Type: C, B, A | Status: NEW, NEW, USED
Date: 2021-03-11, 2021-12-20, 2021-05-09 | Restaurant: Rooster, CYREN BAR, MCDONALD'S | Type: A, A, B | Status: NEW, USED, USED
Date: 2021-03-12, 2021-12-18, 2021-06-22 | Restaurant: Helo, KFC, MCDONALD'S | Type: A, A, B | Status: NEW, USED, USED
Date: 2021-03-12, 2022-01-05, 2022-01-14 | Restaurant: KFC, MCDONALD'S, MCDONALD'S | Type: A, A, B

The question is: how can I separate the records so each is its own row? I would like to use "where Restaurant=KFC" to look for a specific restaurant.
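The usual pattern for this is mvzip + mvexpand, which keeps the Nth value of each field together as one record; a sketch using "|" as the zip delimiter, since restaurant names could themselves contain commas:

index=summary msg="*blablabla*"
| rex max_match=3 "Type=(?<Type>.+?)\,"
| rex max_match=3 "Restaurant=(?<Restaurant>.+?)\,"
| rex max_match=3 "Date=(?<Date>.+?)\,"
| rex max_match=3 "status=(?<status>.+?)\,"
| eval zipped = mvzip(mvzip(mvzip(Date, Restaurant, "|"), Type, "|"), status, "|")
| mvexpand zipped
| eval parts = split(zipped, "|")
| eval Date = mvindex(parts, 0), Restaurant = mvindex(parts, 1), Type = mvindex(parts, 2), status = mvindex(parts, 3)
| fields - zipped, parts
| where Restaurant="KFC"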
I have Splunk logs stored in this format (2 example events below):

{"org":"myorg","environment":"prod","proxyName":"myproxy","uriPath":"/getdata","verb":"POST","request":"\n \"city\":\"irving\",\n\"state\":\"TX\",\n\"isPresent\":\"Y\"","uid":"1234"}
{"org":"myorg","environment":"prod","proxyName":"myproxy","uriPath":"/getdata","verb":"POST","request":"\n\"city\":\"san diego\",\n\"state\":\"CA\",\n\"isPresent\":\"N\"","uid":"1234"}

I'm trying to find all records where isPresent is "Y". request is a string containing a JSON string representation, so I'm using a query like this:

\\"isPresent\\":\\"Y\\" uid=1234 AND request!=null

But this query brings up both isPresent=Y and isPresent=N records, effectively meaning the filter is not working at all. Any idea how I can check whether a string field contains a specific substring?
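Because request holds JSON fragments as a plain string rather than parseable JSON, one sketch is to pull the flag out with rex and filter on it afterwards (the index name is a placeholder):

index=main uid=1234 request=*
| rex field=request "\"isPresent\":\"(?<isPresent>[^\"]+)\""
| where isPresent="Y"

A where like(request, "%...%") substring test would also work, but rex leaves the extracted value reusable in later pipeline stages.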
Hello everyone, asking for your help with my subsearch query. I need to find events in index="1", take the Logon_ID from them, and run a search against another index (index="2"). My current search:

index="2" EventCode=4662 AND (Condition="1" OR Condition="2")
    [ search index="1" EventCode=4624 Logon_Type=3
      | eval Logon_ID=lower(Logon_ID)
      | eval Logon_ID=mvindex(Logon_ID,-1)
      | fields Logon_ID ]

It doesn't work; as I understand it, the main search runs with only one Logon_ID, though index 1 contains many Logon_ID values. What could be the reason? Thank you.
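A hedged variant worth trying: return every distinct Logon_ID once (stats by expands multivalue fields, so the mvindex step that kept only the last value per event can be dropped), and keep in mind that subsearches silently truncate at their default limits (roughly 10,000 results or 60 seconds), which caps how many IDs reach the main search:

index="2" EventCode=4662 AND (Condition="1" OR Condition="2")
    [ search index="1" EventCode=4624 Logon_Type=3
      | eval Logon_ID=lower(Logon_ID)
      | stats count by Logon_ID
      | fields Logon_ID ]

The field name the subsearch returns must also match the field name in index 2 exactly; a rename inside the subsearch fixes any mismatch.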
I have the following two entries:

Time: 8/16/22 1:46:22.592 PM
Event: 2022/08/16 13:46:22.592154:P_GUI_SERV06 :pbaho3 : 98(cli) : Exit Allocate Order on portfolio list [ABC_DPM_MM_BALANCED] with all instruments (Thread:00000001197f4730)
host = PBIPSG07, source = /app/PBISG/aaa/current/msg/server.log, sourcetype = prd-pbisg-server-log

Time: 8/16/22 1:45:51.201 PM
Event: 2022/08/16 13:45:51.201360:P_GUI_SERV06 :pbaho3 : 98(cli) : Start Allocate Order on portfolio list [ABC_DPM_MM_BALANCED] with all instruments (Thread:00000001197f4730)
host = PBIPSG07

A unit of work starts with a 'Start Allocate Order' entry and ends with an 'Exit Allocate Order' entry. How do I build a Splunk search to calculate the duration between those two events?

Based on the above, I would also like to build a more complex search: notice the ':pbaho3 :' token, so there will be multiple users (in this case the user is 'pbaho3'). How do I group the entries by user?
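A sketch using the transaction command, which pairs each Start with the following Exit and emits a duration field in seconds; the rex patterns for user and thread are guesses based on the two sample events and should be validated against more data:

sourcetype=prd-pbisg-server-log ("Start Allocate Order" OR "Exit Allocate Order")
| rex ":(?<user>\w+)\s*:\s*\d+\(cli\)"
| rex "\(Thread:(?<thread>\w+)\)"
| transaction user, thread startswith="Start Allocate Order" endswith="Exit Allocate Order"
| table _time, user, thread, duration

Grouping by user then comes for free, since user is part of the transaction key; a stats-based version (range(_time) by user, thread) would scale better on large volumes.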
Hi, we have a CSV file with master data where all the constants are stored; it has four columns. In our Splunk query we get one of those columns as a result, and we need to replace it in the output with the value from another column of the CSV file.

Sample: the query result contains an id like "58vv1578eff-985sfv294-asfd", and this needs to be shown as 2897 in the final result.

TIA.

Regards, SM.
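Assuming the CSV is configured as a lookup, a minimal sketch; the lookup name master_data.csv and the column names id and display_id are placeholders for your actual file and headers:

... your base search ...
| lookup master_data.csv id OUTPUT display_id
| eval id = coalesce(display_id, id)
| fields - display_id

The coalesce keeps the original id whenever the CSV has no matching row.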
Hello, I am importing Cisco IronPort data into Splunk, and the field "subject" contains UTF-8 encoded data with jumbled characters. Is there a way to automatically decode the subject into a readable string? For now I am decoding manually with PowerShell and the CyberChef application. Kindly let me know how to solve this issue. Regards, RK
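If the jumbled subjects are RFC 2047 encoded-words (the usual =?UTF-8?B?...?= form in mail logs), a rex can at least split out the charset, encoding, and payload; actually decoding the Base64/quoted-printable payload inside Splunk would need an add-on search command or a scripted/external lookup, since core eval has no Base64 decoder. A sketch:

... | rex field=subject "=\?(?<charset>[^?]+)\?(?<encoding>[BQbq])\?(?<payload>[^?]+)\?="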
I have two rex strings that work independently:

^\S+\s(?<microService>\S+).* [supplied by a previous Splunk Answers post]

and:

"(?i)^(?:[^\+]*\+){2}\d+\]\s+\"(?P<missingFileDetails>[^\"]+)" [generated via erex]

How can these two rex commands be merged?
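If both captures come from the same events, the simplest merge may be no merge at all: run the two rex commands back to back, since each rex only adds its named fields and leaves the event untouched:

... | rex "^\S+\s(?<microService>\S+)"
    | rex "(?i)^(?:[^\+]*\+){2}\d+\]\s+\"(?<missingFileDetails>[^\"]+)"

A single combined regex is only needed if one pattern must constrain the other.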