All Topics

I am trying to provide developer support for a Splunk SOAR app. How can we do that?
Hi All, I would like to visualize all the search values I mentioned in the command: index=* | search (Apps=value1 OR Apps=value2 OR Apps=value3) | stats count by Apps

Desired output:

    Apps    count
    value1  5
    value2  0
    value3  0

So I want to see all the values I mentioned in the search, even if they were not found (showing, for example, a 0 count). Is it possible? Thank you in advance. Matteo
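A minimal sketch of one common approach, reusing the poster's own base search: append a zero-count row for every expected value in a subsearch, then take the maximum count per value so real counts win over the seeded zeros.

    index=* | search (Apps=value1 OR Apps=value2 OR Apps=value3)
    | stats count by Apps
    | append
        [| makeresults
         | eval Apps=split("value1,value2,value3", ",")
         | mvexpand Apps
         | eval count=0
         | table Apps count]
    | stats max(count) as count by Apps

Values with no matching events then still appear with count 0, because the appended subsearch always contributes one row per expected value.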
Is there a way to set customised colours for sunburst visualisation on the basis of a string value?
File monitoring inputs for Splunk Add-on for Unix and Linux. Query 1: I have installed the above-mentioned add-on to use its file monitoring inputs. When I enable the default file monitoring inputs I get the source and sourcetype shown in the attached data, but I do not see many interesting fields for that source and sourcetype. Please assist me with the exact source and sourcetype, along with the list of interesting fields the add-on will extract via field extraction. Query 2: When I updated inputs.conf with new file monitoring inputs, I am not getting data for the new inputs. Please let me know why, and how we can get the data from the new input files.
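For Query 2, a sketch of what a custom monitor stanza in the add-on's local/inputs.conf might look like; the path, sourcetype, and index below are placeholders, not values shipped with the add-on:

    # local/inputs.conf on the forwarder running the add-on
    [monitor:///var/log/myapp/app.log]
    sourcetype = myapp:log
    index = os
    disabled = 0

Two common reasons a new monitor input yields no data: Splunk was not restarted (or the app not redeployed) after editing inputs.conf, and the account Splunk runs as lacks read permission on the target file.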
I am setting up a number of Kubernetes clusters in my organisation. We are using Splunk for monitoring. I have been told that I will need to ask the network team to reserve 2 CIDR ranges for pods and services on each cluster because Splunk requires it. Can anyone clarify whether Splunk does require every Kubernetes cluster on the network to have a unique CIDR range for pods and services?
Hello, I am trying to fetch Azure Virtual Machine metrics data using the add-on 'Splunk_TA_microsoft-cloudservices'. I have created/added the Azure storage account and inputs as stated in the add-on's documentation, but I don't see any logs indexed in Splunk for them. When I check the internal index I see the below error. What does it mean and how do I fix it?
Is there a way to monitor the creation of new Splunk users/admins?  I want to be notified if someone creates a new Splunk admin. 
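A sketch of one way to watch for this, assuming the internal _audit index is searchable in your environment; user management actions are recorded there with action=edit_user, so a scheduled alert can be built on something like:

    index=_audit action=edit_user
    | table _time user action info

Field contents vary a little by version, so it is worth running the search once while creating a test user to confirm what the events look like before saving it as an alert.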
Hello there. I hope you guys are doing great. I am having a problem while trying to implement the mint.jar (com.splunk.mint:mint:5.0.0). It shows an error like:

    * What went wrong:
    Execution failed for task ':app:compileProdReleaseKotlin'.
    > Could not resolve all artifacts for configuration ':app:prodReleaseCompileClasspath'.
    > Could not download mint.jar (com.splunk.mint:mint:5.0.0)
    > Could not get resource 'https://mint.splunk.com/gradle/com/splunk/mint/mint/5.0.0/mint-5.0.0.jar'.
    > Could not GET 'https://mint.splunk.com/gradle/com/splunk/mint/mint/5.0.0/mint-5.0.0.jar'.
    > Remote host terminated the handshake

Is there an archive file or any other way to solve the issue, guys? Thank you.
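Splunk MINT has reached end of life, so the mint.splunk.com repository may simply no longer serve the artifact; the handshake error is consistent with that. If you can locate an archived copy of the jar, one hedged workaround is to reference it as a local file instead of a remote dependency (the libs/ path is an assumption about your project layout):

    // app/build.gradle - fall back to a locally archived copy of the MINT jar
    dependencies {
        // place mint-5.0.0.jar under app/libs/ first
        implementation files('libs/mint-5.0.0.jar')
    }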
Hi All, I would like to know whether AppD provides monitoring for Oracle Cloud. If it does, could you please point me to the reference documents?
I have 4 different kinds of logs coming from one source (sample logs are below). I would like to configure them as different sourcetypes so that the timestamps Splunk extracts are correct. My problem is that they have different timestamp field names, and the character position of the time field differs between formats.

A. Its timestamp comes from "time".

{ "count": 1, "total": 1, "minimum": 1, "maximum": 1, "average": 1, "resourceId": "KSJDIOU-43782JH3K28-28378KMK", "time": "2022-11-24T06:05:00.0000000Z", "metricName": "TotalBillable", "timeGrain": "MPT1DRIVE"}

B. Its timestamp comes from "EventTimestamp".

{ "Environment": "PROD", "Region": "SouthEast Asia", "ScaleUnit": "PRD-041", "TaskName": "ApplicationMetricsLog", "ActivityId": "89S7D-DS98-SDSDS", "SubscriptionId": "CKJD989897DS", "NamespaceName": "tm-uidso-prem-prd", "ActivityName": "ActiveConnections", "ResourceId": "KSJDIOU-43782JHFSDS3K28-28378KMK", "Outcome": "Success", "Protocol": "AMQP", "AuthType": "EntitySAS", "AuthId": "JKSDDI-55643", "NetworkType": "Public", "ClientIp": "1000.3425.0.2", "Count": 1, "Properties": "{\"EventTimestamp\":\"24/11/2022 06:10:05:7602\"}", "category": "MetricsLogs"}

C. Its timestamp comes from "time", but the time field is at a different character position than in A.

{ "Deployment": "ksdjksdos1loio2klkl3", "time": "2022-11-24T06:04:00Z", "timeGrain": "GFT2KOIO", "resourceId": "KLSDASKOSO-3434-545-XCDS", "metricName": "GoStarted", "dimensions": "{\"Deployment\":\"767sd898ds8d9sdd9s\",\"Role\":\"maria.Home.upon\",\"RoleInstance\":\"maria.Home.upon_OUT_69\"}", "average": 1, "minimum": 1, "maximum": 1, "total": 1, "count": 1}

D. Its timestamp comes from "time", but the time field is at a different character position than in A and C.

{ "time": "2022-11-24T06:11:52.6825908Z", "resourceId": "dksjdks-sdsds-dsds-23232-3232s", "category": "FunctionLogs", "operationName": "Microsoft.Web/sites/functions/log", "level": "Informational", "location": "South America", "properties": {"appName":"func-dttysdvmj-eventstop-prd","roleInstance":"rollinginthedeep","message":"Response [sadlsad-d4343-dfsdf45-545dsd-sdsd] 200 OK (00.0s)\r\nETag:\"0xJYWEDFF6788DFSDF\"\r\nServer:Windows-Azure-Blob/1.0,Microsoft-HTTPAPI/2.0\r\nx-ms-request-id:dsds-8000000\r\nx-ms-client-request-id:sdsdsd0-dsdsdgfr1-454346fd76767gf\r\nx-ms-version:2020-08-04\r\nx-ms-lease-id:b51368e2-2d24-6c77-acab-78ced4658e79\r\nDate:Thu, 24 Nov 2022 06:11:52 GMT\r\nContent-Length:0\r\nLast-Modified:Mon, 17 Oct 2022 09:59:09 GMT\r\n","category":"Azure.Core.1","hostVersion":"467888.134263.2.1990097","hostInstanceId":"d57fdu6-kkew36-0000-dsf3-rgtty887gd","level":"Information","levelId":2,"processId":5976,"eventId":5,"eventName":"Response"}}

Thanks in advance.
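One detail that may simplify this: TIME_PREFIX in props.conf is a regular expression, so the varying character position of the time field does not matter as long as the text immediately before the timestamp is distinctive. A sketch under that assumption (the sourcetype names are placeholders):

    # props.conf - formats A, C and D all carry ISO-8601 time in a "time" field,
    # so they can share one timestamp rule regardless of where the field sits
    [azure:metrics:time]
    TIME_PREFIX = "time"\s*:\s*"
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 40

    # format B carries an escaped "EventTimestamp" inside the Properties string;
    # the trailing :7602 fractional part is ignored by this format
    [azure:metrics:eventtimestamp]
    TIME_PREFIX = \\"EventTimestamp\\":\\"
    TIME_FORMAT = %d/%m/%Y %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 30

If all four formats really arrive under a single input, format B can be routed to its own sourcetype at parse time with a transforms.conf rule that matches on the EventTimestamp field name and rewrites MetaData:Sourcetype.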
Hi, the Splunk environment I am currently using has all of a sudden increased its log size consumption, which has led to my license crossing the threshold; I only have two more warnings left. I have identified a way to filter out some of the Azure logs using a regex on the logs, but for some reason the regex is not working: I tested it on the regex101 site and the content matches, yet the logs are still not getting filtered out even when they match the criteria. Can someone please guide me on what the reason could be?
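The most common reason a regex that validates on regex101 still filters nothing is that the props/transforms pair was deployed to an instance that does not parse the data: nullQueue routing only happens on the first full parsing tier (indexer or heavy forwarder), never on a universal forwarder. A sketch of the usual wiring, with a placeholder sourcetype and an illustrative pattern standing in for your regex:

    # props.conf (on the indexer or heavy forwarder)
    [azure:activity]
    TRANSFORMS-dropnoise = drop_azure_noise

    # transforms.conf
    [drop_azure_noise]
    # hypothetical pattern - substitute the regex you validated
    REGEX = "level":\s*"Informational"
    DEST_KEY = queue
    FORMAT = nullQueue

Also note that the transforms REGEX runs against the raw event text, not against extracted fields, and a restart is needed after deploying the change.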
I am using the set label API in my playbook to move containers to a different label after playbook execution. It works on most containers, but sometimes it will not move the container to the new label. In the debug logs I can see "set Label failed, Validation Error, failed to set Label". Can somebody suggest what the issue might be?
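For comparison, a minimal sketch of the call, assuming the standard SOAR playbook automation API; a validation error on set label often indicates the target label has not been created under the system's label settings, or that the automation user lacks permission on it:

    # inside a playbook function; "triaged" is a hypothetical label name
    import phantom.rules as phantom

    def move_container(container):
        success = phantom.set_label(container=container, label="triaged")
        if not success:
            # surfaces the failing container id in the playbook debug log
            phantom.debug("set_label failed for container id {}".format(container["id"]))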
I have generated a table as follows. I want 3 fields stacked into the 1st column and the other 3 fields into a 2nd column. Then I need to group these 2 columns under a single value (Test) as shown. I have shared the sample output as well. Please let me know how to generate it using a search query or any other method.
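The screenshot with the sample output did not come through, but if the goal is to stack three fields into one multivalue column and three more into a second, mvappend is the usual building block; all field and column names below are hypothetical:

    | eval Column1=mvappend(field1, field2, field3)
    | eval Column2=mvappend(field4, field5, field6)
    | eval Test="Test"
    | table Test Column1 Column2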
Logs from a Windows server are being indexed in Splunk with a delay. I noticed the below error in splunkd.log on the Windows server:

    ERROR [MethodServerMonitor] wt.manager.ServerTable - Dead MethodServer reported; reported exception

Any thoughts?
Hi all, I would like to know how to write SPL to pick scenarios according to the following 3 rules.

(1) Pick a Scenario_IDx whose time tag is later than its previous Scenario_IDy (where x is bigger than y). Any Scenario_IDx whose time tag is earlier than its previous scenario can be ignored. E.g., Scenario_ID1's time tag should be bigger than Scenario_Start's (in Example 1: Scenario_ID1: 103 > Scenario_Start: 101), and Scenario_ID2's time tag should be bigger than both Scenario_ID1's and Scenario_Start's (in Example 1: Scenario_ID2: 104 > Scenario_Start: 101 and Scenario_ID2: 104 > Scenario_ID1: 103).

(2) If there are multiple entries of the same scenario later than the previous scenario's time tag, pick the one with the earliest time tag. E.g., in Example 2, for Scenario_ID3 pick Scenario_ID3: 204 only, out of:
Scenario_Start: 201
Scenario_ID1: 202
Scenario_ID2: 203
Scenario_ID3: 204
Scenario_ID3: 205

(3) If, for a Scenario_IDy, there is no Scenario_IDx later than Scenario_IDy's time tag, then nothing needs to be listed for Scenario_IDx (x > y). E.g., in Example 3, every time tag of Scenario_ID5 is earlier than Scenario_ID1's, so the "Expected sequence" does not list Scenario_ID5.

Below are the sample original scenario sequences, the corresponding information sequences, and the expected scenario and information sequences. All of them are multi-value fields. Does anyone have a suggestion for SPL to compose the "Expected sequence" and "Expected information sequence" output?

Example 1
Original sequence (in time tag): Scenario_Start: 101, Scenario_ID1: 103, Scenario_ID1: 105, Scenario_ID2: 102, Scenario_ID2: 104
Original information sequence (in time tag): Scenario_Start_info:AAA, Scenario_ID1_info:BBB, Scenario_ID1_info:CCC, Scenario_ID2_info:DDD, Scenario_ID2_info:EEE
Expected sequence (in time tag): Scenario_Start: 101, Scenario_ID1: 103, Scenario_ID2: 104
Expected information sequence (in time tag): Scenario_Start_info:AAA, Scenario_ID1_info:BBB, Scenario_ID2_info:EEE

Example 2
Original sequence: Scenario_Start: 201, Scenario_ID1: 202, Scenario_ID2: 203, Scenario_ID3: 204, Scenario_ID3: 205
Original information sequence: Scenario_Start_info:AAA, Scenario_ID1_info:BBB, Scenario_ID2_info:CCC, Scenario_ID3_info:DDD, Scenario_ID3_info:EEE
Expected sequence: Scenario_Start: 201, Scenario_ID1: 202, Scenario_ID2: 203, Scenario_ID3: 204
Expected information sequence: Scenario_Start_info:AAA, Scenario_ID1_info:BBB, Scenario_ID2_info:CCC, Scenario_ID3_info:DDD

Example 3
Original sequence: Scenario_Start: 301, Scenario_ID1: 305, Scenario_ID5: 302, Scenario_ID5: 303, Scenario_ID5: 304
Original information sequence: Scenario_Start_info:AAA, Scenario_ID1_info:BBB, Scenario_ID5_info:CCC, Scenario_ID5_info:DDD, Scenario_ID5_info:EEE
Expected sequence: Scenario_Start: 301, Scenario_ID1: 305
Expected information sequence: Scenario_Start_info:AAA, Scenario_ID1_info:BBB

Thank you so much.
Hello Splunkers! Does anyone know about the async_saved_search_fetch setting? The Splunk documentation says not to change it, but I want to know what it is.

    async_saved_search_fetch = <boolean>
    Enables a separate thread that will fetch scheduled or auto-summarized saved
    searches asynchronously.
    Do not change this setting unless instructed to do so by Splunk support.
    Default: true

Can someone explain what this setting actually controls? Thank you in advance.
I've created a new index in Splunk Cloud and am trying to ingest log files from one of our application servers. This application server is set up as a deployment client (with a Universal Forwarder). I've completed the following steps:
* Created the new index on Splunk Cloud
* Created a new server class on the deployment server which points to the application server; the application server is 'phoning home' to the deployment server

I've got to the point where I need to create a deployment app. I believe at this stage with Splunk Enterprise you would create the data inputs: select 'Add data -> Forward', choose the existing server class created previously so that the application server is shown in the 'List of Forwarders' box, then specify the log data file path in the Files and Directories settings, the sourcetype, and finally the name of the destination index. But this is where I got stuck, because my new index isn't in the list; presumably the deployment server can't talk to Splunk Cloud to pull down a list of indexes? So I naturally went onto Splunk Cloud to add a data input, but I can only choose from 'Local inputs' as 'Forwarded inputs' is empty.

I'm aware the usual approach to creating a deployment app is to create an app folder within $SPLUNK_HOME/etc/deployment-apps and create an inputs.conf file with the monitor stanza referencing the source data and destination index. But how do I reference an index that lives in Splunk Cloud? I can't simply type in 'my-server.splunkcloud.com/en-GB/indexes/my-index'.

Please can someone point me to the official documentation that explains how to configure the deployment client to send log data to a Splunk Cloud index?
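For what it's worth, the index in inputs.conf is referenced by name only; the deployment app never needs a URL or connection details for Splunk Cloud, because the Universal Forwarder already knows where to send data from the outputs.conf delivered in the Splunk Cloud Universal Forwarder credentials package. A sketch with placeholder values:

    # $SPLUNK_HOME/etc/deployment-apps/myapp_inputs/local/inputs.conf
    [monitor:///var/log/myapp/application.log]
    index = my_new_index
    sourcetype = myapp:log
    disabled = 0

If the index name does not exist in Splunk Cloud the events are typically discarded, so double-check the exact spelling against the Cloud indexes page.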
Hello, we are attempting to link to another dashboard within the same app in Dashboard Studio; however, the app does not show in the dropdown. I checked all the settings within the app but was unable to find an option to share, show, or otherwise expose it. Any help would be appreciated.
Hi, I have logs with different types of data in the same sourcetype:

    "<134>Nov 23 21:23:17 NSX-edge-7-0 loadbalancer[2196]: [default]: 154545"

    "<4>Nov 23 21:06:47 NSX-edge-7-0 firewall[]: [default]: ACCEPT"

How can I extract the value after "[default]: " without extracting null values? For example, if in the first event I create a field called FIELDA=154545, I don't want the second event's value, ACCEPT, to land in it; I need to create a second field called FIELDB=ACCEPT. I hope I have made myself understood. Regards,
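A sketch of one way to keep the two extractions separate: anchor each rex on the program name so it only fires on the matching event type.

    ... | rex "loadbalancer\[\d+\]:\s+\[default\]:\s+(?<FIELDA>\d+)"
        | rex "firewall\[\d*\]:\s+\[default\]:\s+(?<FIELDB>\S+)"

rex leaves a field null on events it does not match, so FIELDA is populated only for loadbalancer events and FIELDB only for firewall events.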
We are receiving syslog data via UDP and we noticed that some data is missing. When running:

    tcpdump -i eth0 port <udp port>

I see lines such as:

    UDP, bad length 5158 > 1472

and the data is not being ingested. https://networkengineering.stackexchange.com/questions/74563/tcpdump-output-with-bad-length-indicator-present says: "The 1472 is the maximum payload length for the UDP datagram." Any ideas how to deal with it?
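"bad length 5158 > 1472" usually means the sender emitted a single UDP datagram larger than the interface MTU allows, so it was IP-fragmented; a port-based capture filter only shows the first fragment, because subsequent fragments carry no UDP header. A sketch for checking whether the remaining fragments actually arrive (the port placeholder is as in your command):

    # capture the named port plus any non-first IP fragments
    tcpdump -i eth0 'udp port <udp port> or (ip[6:2] & 0x1fff != 0)'

If the fragments never arrive (a firewall in the path dropping them is common), the datagram cannot be reassembled and the message is lost; typical fixes are lowering the maximum message size on the syslog sender or switching it to TCP.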