All, leveraging the following article (https://community.splunk.com/t5/Other-Usage/How-to-export-reports-using-the-REST-API/m-p/640406/highlight/false#M475) I was able to successfully modify the script to:

1. Run using an API token (instead of credentials).
2. Run a search I am interested in returning data from.

However, I am running into an error with my search (shown below):

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Unparsable URI-encoded request data</msg>
  </messages>
</response>

The script itself now looks like this (I have removed the token and obscured the Splunk endpoint for obvious reasons):

#!/bin/bash
# A simple bash script example of how to get notable events details from REST API
# EXECUTE search and retrieve SID
SID=$(curl -H "Authorization: Bearer <token ID here>" -k https://host.domain.com:8089/services/search/jobs -d search=" search index=index sourcetype="sourcetype" source="source" [ search index="index" sourcetype="sourcetype" source="source" deleted_at="null" | rename uuid AS host_uuid | stats count by host_uuid | fields host_uuid ] | rename data.id AS Data_ID host_uuid AS Host_ID port AS Network_Port | mvexpand data.xrefs{}.type | strcat Host_ID : Data_ID : Network_Port Custom_ID_1 | strcat Host_ID : Data_ID Custom_ID_2 | stats latest(*) as * by Custom_ID_1 | search state!="fixed" | search category!="informational" | eval unixtime=strptime(first_found,"%Y-%m-%dT%H:%M:%S")" <removed some of the search for brevity> \
  | grep "sid" | awk -F\> '{print $2}' | awk -F\< '{print $1}')
echo "SID=${SID}"

(Omitted the remaining portion of the script for brevity.)

It is at the point shown above, | eval unixtime=strptime(first_found,"%Y-%m-%dT%H:%M:%S"), that I am getting the error in question. The search returns fine up to the point where I am converting time. I tried escaping with "\", but that did not seem to help. I am sure I am missing something simple and am looking for some help.
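One likely cause (an assumption based on the error text, not confirmed against the full script): curl's -d sends the payload raw, so the quotes, pipes, and the % characters in the strptime() format string reach splunkd without URL encoding, which is exactly what "Unparsable URI-encoded request data" complains about. A minimal Python sketch of what a properly encoded search payload looks like:

```python
from urllib.parse import urlencode

# A cut-down search containing the characters that break a raw payload:
# double quotes, a pipe, and the % signs in the strptime() format string.
search = 'search index=main | eval unixtime=strptime(first_found,"%Y-%m-%dT%H:%M:%S")'

# urlencode() percent-escapes the whole value, as splunkd expects.
payload = urlencode({"search": search})
print(payload)
```

With curl itself, the equivalent fix should be replacing -d search="..." with --data-urlencode search="..." (and escaping the inner double quotes for the shell), which tells curl to URL-encode the value before sending it.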
Hello Community, I'm seeking some guidance with optimizing a Splunk search query that involves multiple table searches and joins. The primary issue I'm encountering is the limitation imposed on subsearches, which restricts the total records to 50,000. Here's the current query structure I'm working with:

index="sample" "message.process"="*app-name1" "message.flowName"="*| *"
| rex field=message.correlationId "(?<UUID>^[0-9a-z-]{0,36})"
| rename "message.flowName" as sapi-outbound-call
| stats count by sapi-outbound-call UUID
| join type=inner UUID
    [search index="sample" "message.process"="*app-name2" "message.flowName"="*| *"
    | rex field=message.correlationId "(?<UUID>^[0-9a-z-]{0,36})"
    | rename "message.flowName" as exp-inbound-call]
| stats count by exp-inbound-call sapi-outbound-call
| join left=L right=R where L.exp-inbound-call = R.exp-inbound-call
    [search index="sample" "message.process"="*app-name2" "message.flowName"="*| *"
    | rename "message.flowName" as exp-inbound-call
    | stats count by exp-inbound-call]
| stats list(*) AS * by R.exp-inbound-call R.count
| table R.exp-inbound-call R.count L.sapi-outbound-call L.count

The intention behind this query is to generate statistics from two searches while filtering the data on a common UUID. However, the multiple joins with subsearches run into the 50,000-record cap. I'm looking for alternative approaches or optimizations that achieve the same result without relying on joins within subsearches. Any insights, suggestions, or examples would be incredibly valuable. Thank you in advance for your help and expertise! Regards
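One common join-free pattern (a sketch of the general idea, not a drop-in replacement for the query above) is to pull both datasets in a single base search and aggregate by the shared key, which sidesteps the subsearch row cap entirely. The shape of that approach, illustrated in Python with toy events:

```python
from collections import defaultdict

# Toy events from both app searches, tagged with which search they came from.
events = [
    {"uuid": "a1", "source": "app-name1", "flow": "sapi-call"},
    {"uuid": "a1", "source": "app-name2", "flow": "exp-call"},
    {"uuid": "b2", "source": "app-name1", "flow": "sapi-call"},
]

# One pass: group by UUID, collecting flows per source
# (analogous to stats values(...) by UUID in SPL).
by_uuid = defaultdict(lambda: defaultdict(list))
for e in events:
    by_uuid[e["uuid"]][e["source"]].append(e["flow"])

# Keep only UUIDs seen in both sources -- the effect of the inner join.
both = {u: flows for u, flows in by_uuid.items()
        if "app-name1" in flows and "app-name2" in flows}
print(sorted(both))
```

In SPL this typically becomes something like searching both message.process values in one base search, then stats values(sapi-outbound-call) values(exp-inbound-call) by UUID and keeping only rows where both fields are present, instead of join.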
index=netlogs
    [| inputlookup baddomains.csv | eval url = "*.domain."*" | fields url]
    NOT [| inputlookup good_domains.csv | fields domain]

I don't think my search is doing what I want it to do. I would like to take the bad domains from the first lookup table and search the netlogs index to see if there are any hits; however, I would like to exclude the good domains in the second lookup table from the search. Does anyone know if there is a better way to do this?
Hello All, I am setting up a multisite indexer cluster with cluster manager redundancy, using two cluster managers (site1 and site2). Below is the config, e.g.:

[clustering]
mode = manager
manager_switchover_mode = auto
manager_uri = clustermanager:cm1,clustermanager:cm2
pass4SymmKey = changeme

[clustermanager:cm1]
manager_uri = https://10.16.88.3:8089

[clustermanager:cm2]
manager_uri = https://10.16.88.4:8089

My question is: I have two indexers on each site. Should the manager_uri on the site1 peers (indexers) point to cm1 and the manager_uri on the site2 peers point to cm2, or should they all point to the same cluster manager?

indexer 1 / indexer 2 - manager_uri = https://10.16.88.3:8089
indexer 3 / indexer 4 - manager_uri = https://10.16.88.4:8089

Also, what should I define for manager_uri on the search heads? Please advise.

Thanks, Dhana
Hi, we have enabled all the default JMX metric collection in the configuration (Kafka, Tomcat, WebLogic, PMI, Cassandra, etc.), but only very limited metrics are available under the Metric Browser. Only JVM --> classes, garbage collection, memory, and threads are visible; none of the others appear. Why is that? We are most interested in the Tomcat-related JMX metrics. Your inputs are much appreciated. Thanks, Viji
I have an index that provides a date and a row count to populate a line chart on a dashboard using DB Connect. The data looks like this:

Date        Submissions
2023-11-13  7
2023-11-14  35
2023-11-15  19

When the line chart displays the data, the dates show up like this: 2023-11-12T19:00:00-05:00, 2023-11-13T19:00:00-05:00, 2023-11-14T19:00:00-05:00. Is there some setting/configuration that needs to be updated?
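The offsets hint at a time-zone issue rather than a chart setting (an assumption from the -05:00 suffix): each date appears to be parsed as midnight UTC and then rendered in local time (UTC-5), which pushes it back to 7 PM on the previous day. A quick Python check of that hypothesis:

```python
from datetime import datetime, timezone, timedelta

# "2023-11-13" parsed as midnight UTC, then rendered in UTC-5:
parsed = datetime(2023, 11, 13, tzinfo=timezone.utc)
local = parsed.astimezone(timezone(timedelta(hours=-5)))
print(local.isoformat())  # -> 2023-11-12T19:00:00-05:00
```

If that is what is happening, setting the correct TZ on the sourcetype in props.conf, or converting the Date column in the DB Connect input so _time carries the intended zone, should fix the labels.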
Trying to get our CrowdStrike FDR set up with the Splunk TA. Tried resetting the CrowdStrike FDR API twice, with the same error each time:

error response recieved from server: unexpected error <class splunklib.reset_handler.error.resterror> from python handler: rest error [400]: bad request -- an error occured (accessdenied) when calling the listbuckets operation: access denied. see splunkd.log/python.log for more details.

Any thoughts?
Hi, I need to add a filter to the error query inside my total-transaction query, so that I can get the filtered error counts as well as the total transactions in two columns, by service name.

This is the query I am using to get total transactions and total errors:

index="iss" Environment=PROD
| where Appid IN ("APP-61", "APP-85", "APP-69", "APP-41", "APP-57", "APP-71", "APP-50", "APP-87")
| rex field=_raw " (?<service_name>\w+)-prod"
| eval err_flag = if(level="ERROR", 1,0)
| eval success_flag = if(level!="ERROR", 1,0)
| stats sum(err_flag) as Total_Errors, sum(success_flag) as Total_Successes by service_name
| eval Total_Transaction = (Total_Successes+Total_Errors)
| fields service_name, Total_Transaction, Total_Errors, Total_Successes

I need to add a search filter to the errors so that it counts only the filtered errors, not all errors, and merge the query below into the err_flag line of the one above:

index="iss" Environment=PROD "Invalid JS format" OR ":[down and unable to retrieve response" OR "[Unexpected error occurred" OR ": [An unknown error has occurred" OR "exception" OR OR IN THE SERVICE" OR "emplateErrorHandler : handleError :" OR "j.SocketException: Connection reset]" OR "Power Error Code" OR "[Couldn't kickstart handshaking]" OR "[Remote host terminated the handshake]" OR "Caused by:[JNObject" OR "processor during S call" OR javx OR "Error while calling" OR level="ERROR" NOT "NOT MATCH THE CTRACT" NOT "prea_too_large" NOT g-500 NOT G-400 NOT "re-submit the request" NOT "yuu is null" NOT "igests data" NOT "characters" NOT "Asset type" NOT "Inputs U" NOT "[null" NOT "Invalid gii"

Please help; it would be wonderful. Thank you.
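One way to fold the filter into the err_flag line is to treat it as "matches at least one include term AND matches no exclude term". That logic, sketched in Python with a small illustrative subset of the terms above:

```python
# Illustrative subsets of the include/exclude terms from the second query.
include = ["Invalid JS format", "Error while calling", "exception"]
exclude = ["re-submit the request", "characters"]

def is_filtered_error(message: str) -> bool:
    """True if the message hits any include term and no exclude term."""
    hit = any(term.lower() in message.lower() for term in include)
    blocked = any(term.lower() in message.lower() for term in exclude)
    return hit and not blocked

print(is_filtered_error("Error while calling downstream"))  # -> True
print(is_filtered_error("characters exceeded"))             # -> False
```

In SPL the same shape can go directly into the eval, e.g. err_flag=if(searchmatch("...") , 1, 0) or match(_raw, ...) with the include terms ORed and the exclude terms negated, so only the filtered errors are counted while Total_Transaction stays based on all events.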
Can you please suggest a query to pull the ingestion lag/delay for all indexes and sourcetypes for the last 30 days?
Hi. I am a new Splunk user with a question: when Splunk is ingesting data, we get a monitoring-system warning about 10% filesystem availability. Then the filesystem space returns to a value > 10% availability. Is there a file/location where temporary data is written while ingestion is happening? Thanks
I have admin rights, and when I click on any tag permission (Settings --> Tags), I get the following error:

The requested URL was rejected. Please consult with your administrator.

Any idea why this is happening?
Hi, after installing the Splunk OTel Collector, I see the instance name of my VM appearing in the format below:

subscription_id/resource_group_name/resource_provider_namespace/resource_name

I was looking for an option to change the name to only "resource_name" (which is the server name). Please advise where and how I can do this, so the VM is easy to identify.
Hello All, I have a lookup file with multiple columns: fieldA, fieldB, fieldC. I need to publish a timechart for each value under fieldA, based on search conditions on fieldB and fieldC. I would appreciate your guidance on how to build multiple timecharts from the same field by reading the required field values from the lookup file. Any inputs and information would be very helpful. Thank you, Taruchit
The CSV file below is generated and ingested into Splunk. It holds file counts by created date for different folders. My rex command does not pick up the date, file path, and count. Please help with how to extract these fields from the raw CSV data below.

"Date","Folder","FileCount"
"11-07-2023","E:\Intra\I\IE\Processed\Error","381"
"11-08-2023","E:\Intra\I\IE\Processed\Error","263"
"11-09-2023","E:\Intra\I\IE\Processed\Error","223"
"11-10-2023","E:\Intra\I\IE\Processed\Error","133"
"11-11-2023","E:\Intra\I\IE\Processed\Error","3"
"11-12-2023","E:\Intra\I\IE\Processed\Success","4"
"11-13-2023","E:\Intra\I\IE\Processed\Success","4"","218"
"11-14-2023","E:\Intra\I\IE\Processed\Success","4"","200"
"11-15-2023","E:\Intra\I\IE\Processed\Error","284"
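For a well-formed row, one possible pattern (field names are illustrative, and this is a sketch rather than the only way) pulls out all three fields; a quick check in Python:

```python
import re

line = r'"11-07-2023","E:\Intra\I\IE\Processed\Error","381"'
pattern = r'^"(?P<Date>[^"]+)","(?P<Folder>[^"]+)","(?P<FileCount>\d+)"'
m = re.match(pattern, line)
print(m.group("Date"), m.group("FileCount"))  # -> 11-07-2023 381
```

The Splunk rex equivalent needs the quotes escaped, roughly | rex "^\"(?<Date>[^\"]+)\",\"(?<Folder>[^\"]+)\",\"(?<FileCount>\d+)\"". Note that two of the Success rows above look malformed ("4"","218"), so they would need cleanup or separate handling; and since the source is CSV, INDEXED_EXTRACTIONS = csv in props.conf may avoid rex entirely.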
Hi, I am looking for a solution to remove the ANSI escape codes from our logs. I have a regular expression that works in search, but I would like to find an automated solution for Splunk Cloud:

| rex mode=sed "s/\x1B\[[0-9;]*[mK]//g"

Sample log line:

2023-11-15 11:47:21,605 backend_2023.2.8: INFO  [-dispatcher-7] vip.service.northbound.MrpServiceakkaAddress=akka://backend, akkaUid=2193530468036521242 MRP Service is alive and active.

Any ideas? Thanks for the help.
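The same substitution can be verified outside Splunk; a Python sketch using the identical pattern on a hypothetical colorized line:

```python
import re

# "\x1b[32m" / "\x1b[0m" are ANSI color-on/color-off escape sequences.
line = "\x1b[32mINFO\x1b[0m  MRP Service is alive and active."
clean = re.sub(r"\x1B\[[0-9;]*[mK]", "", line)
print(clean)  # -> INFO  MRP Service is alive and active.
```

For an automated, index-time option, the same sed expression can run as a SEDCMD on the sourcetype in props.conf (e.g. SEDCMD-strip_ansi = s/\x1B\[[0-9;]*[mK]//g), deployed on whichever ingest tier your Splunk Cloud setup allows, such as an app on a heavy forwarder.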
Hi, I am trying to configure OpenTelemetry (OTel) to send metrics, including our custom metrics, to our SaaS controller, but I get a lot of "Forbidden" errors:

Exporting failed. The error is not retryable. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "otlphttp", "error": "Permanent error: error exporting items, request to https://pdx-sls-agent-api.saas.appdynamics.com/v1/metrics responded with HTTP Status Code 403", "dropped_items": 35}

I double-checked the endpoint and the API key, and I carefully checked the configuration. Does anyone have an idea, please?

Please note: our account is a new trial account (I got it after discussing it with the account manager and explaining our needs). Thanks, Diab
For my dashboard, I am using the SPL below. Currently the most recent date is displayed at the bottom of the dashboard and the oldest date at the top, but I require the date format to be mm-dd-yy only, with the most recent date shown at the top. Please give me your finest recommendations.

| eval date=strftime(_time, "%m-%d-%y")
| stats count by date
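The likely snag (an assumption about the symptom above) is that once strftime() turns _time into an mm-dd-yy string, any sort on it is lexicographic, which misorders dates across a year boundary; sorting on the parsed value keeps the newest date on top. An illustration in Python:

```python
from datetime import datetime

dates = ["11-14-23", "01-02-24", "11-13-23"]

# A plain descending string sort would put "11-14-23" first even though
# "01-02-24" is newer; parsing each string first sorts by actual date.
newest_first = sorted(dates,
                      key=lambda d: datetime.strptime(d, "%m-%d-%y"),
                      reverse=True)
print(newest_first)  # -> ['01-02-24', '11-14-23', '11-13-23']
```

In SPL, one sketch along these lines: | stats count by date | eval t=strptime(date, "%m-%d-%y") | sort 0 - t | fields - t, or simply sort on _time before formatting it.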
Hello all, I use the Splunk API to export an SPL search. All queries work well on my local dev environment, and most work on the production server. However, all queries that include or read from a certain lookup (let's call it "SessionEntities") return empty. For instance, the query "| inputlookup SessionEntities" returns empty. The same query works locally and, even stranger, works on the Splunk search page on the same server, while the same query with a different lookup returns results. That lookup is no different from the others (no bigger content size). Does anyone have an idea of why this could be happening?
I'm trying to create a new app in Splunk Add-on Builder. The following error is thrown whenever I load the app's inputs or configuration page:

Internal configuration file error. Something wrong within the package or installation step. Contact your administrator for support. Detail: Error: duplicate l keys is not allowed at appendError.
Hi all, I am new to Splunk and would appreciate some community wisdom. We are trying to get data from an external AWS S3 bucket (hosted and managed by a 3rd-party supplier) into our internal Splunk Enterprise instance. We do not have any AWS accounts. We have considered whitelisting, but it is not secure enough, and the supplier does not use AWS Firehose. Any ideas?