All Topics


I have 4 different kinds of logs coming from one source (sample logs are below). I would like to configure them as different sourcetypes so that the timestamps Splunk extracts are correct. My problem is that they have different timestamp field names, and the time field sits at a different character position in each log type.

A. The timestamp comes from "time".

{ "count": 1, "total": 1, "minimum": 1, "maximum": 1, "average": 1, "resourceId": "KSJDIOU-43782JH3K28-28378KMK", "time": "2022-11-24T06:05:00.0000000Z", "metricName": "TotalBillable", "timeGrain": "MPT1DRIVE"}

B. The timestamp comes from "EventTimestamp".

{ "Environment": "PROD", "Region": "SouthEast Asia", "ScaleUnit": "PRD-041", "TaskName": "ApplicationMetricsLog", "ActivityId": "89S7D-DS98-SDSDS", "SubscriptionId": "CKJD989897DS", "NamespaceName": "tm-uidso-prem-prd", "ActivityName": "ActiveConnections", "ResourceId": "KSJDIOU-43782JHFSDS3K28-28378KMK", "Outcome": "Success", "Protocol": "AMQP", "AuthType": "EntitySAS", "AuthId": "JKSDDI-55643", "NetworkType": "Public", "ClientIp": "1000.3425.0.2", "Count": 1, "Properties": "{\"EventTimestamp\":\"24/11/2022 06:10:05:7602\"}", "category": "MetricsLogs"}

C. The timestamp comes from "time", but the time field is at a different character position than in A.

{ "Deployment": "ksdjksdos1loio2klkl3", "time": "2022-11-24T06:04:00Z", "timeGrain": "GFT2KOIO", "resourceId": "KLSDASKOSO-3434-545-XCDS", "metricName": "GoStarted", "dimensions": "{\"Deployment\":\"767sd898ds8d9sdd9s\",\"Role\":\"maria.Home.upon\",\"RoleInstance\":\"maria.Home.upon_OUT_69\"}", "average": 1, "minimum": 1, "maximum": 1, "total": 1, "count": 1}

D. The timestamp comes from "time", but the time field is at a different character position than in A and C.

{ "time": "2022-11-24T06:11:52.6825908Z", "resourceId": "dksjdks-sdsds-dsds-23232-3232s", "category": "FunctionLogs", "operationName": "Microsoft.Web/sites/functions/log", "level": "Informational", "location": "South America", "properties": {"appName":"func-dttysdvmj-eventstop-prd","roleInstance":"rollinginthedeep","message":"Response [sadlsad-d4343-dfsdf45-545dsd-sdsd] 200 OK (00.0s)\r\nETag:\"0xJYWEDFF6788DFSDF\"\r\nServer:Windows-Azure-Blob/1.0,Microsoft-HTTPAPI/2.0\r\nx-ms-request-id:dsds-8000000\r\nx-ms-client-request-id:sdsdsd0-dsdsdgfr1-454346fd76767gf\r\nx-ms-version:2020-08-04\r\nx-ms-lease-id:b51368e2-2d24-6c77-acab-78ced4658e79\r\nDate:Thu, 24 Nov 2022 06:11:52 GMT\r\nContent-Length:0\r\nLast-Modified:Mon, 17 Oct 2022 09:59:09 GMT\r\n","category":"Azure.Core.1","hostVersion":"467888.134263.2.1990097","hostInstanceId":"d57fdu6-kkew36-0000-dsf3-rgtty887gd","level":"Information","levelId":2,"processId":5976,"eventId":5,"eventName":"Response"}}

Thanks in advance.

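For reference, this is the kind of per-sourcetype props.conf being asked about; a minimal sketch only, with placeholder sourcetype names (azure:metric:a, azure:metric:b) and on the assumption that the four log types can already be separated into their own sourcetypes. Anchoring TIME_PREFIX on the JSON key name instead of a character offset sidesteps the position differences between A, C and D:

# props.conf (sketch, placeholder sourcetype names)
[azure:metric:a]
KV_MODE = json
TIME_PREFIX = "time"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 40
TZ = UTC

[azure:metric:b]
KV_MODE = json
# the timestamp sits inside the escaped Properties JSON
TIME_PREFIX = \\"EventTimestamp\\":\\"
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC

Types C and D could reuse the first stanza, since the same "time" key anchor works regardless of where the key falls in the event; the sub-second digits after the seconds are simply ignored by this simplified TIME_FORMAT.
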
Hi, my Splunk deployment has suddenly started consuming much more log volume, which has pushed my license over the threshold; I only have two warnings left. I have identified a way to filter out some of the Azure logs using a regex, but for some reason the filtering is not working. When I test the regex on regex101 it matches the content as expected, yet the matching events are still being indexed and not filtered out. Can someone please help me understand what the reason might be?

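For reference, regex-based event filtering is normally done with a props.conf/transforms.conf pair that routes matching events to nullQueue, and the pair has to sit on the first full Splunk instance that parses the data (heavy forwarder or indexer), not on a universal forwarder, with the REGEX tested against the raw event text rather than an extracted field. A minimal sketch, with placeholder stanza names:

# props.conf (sourcetype name is a placeholder)
[azure:activity]
TRANSFORMS-discard_noise = discard_azure_noise

# transforms.conf
[discard_azure_noise]
REGEX = <the regex validated on regex101, applied to the raw event>
DEST_KEY = queue
FORMAT = nullQueue
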
I am using the set label API in my playbook to move containers to a different label after playbook execution. It works on most containers, but sometimes it does not move the container to the new label. In the debug logs I can see "set Label failed, Validation Error, failed to set Label". Can somebody suggest what the issue might be?

I have generated a table as follows. I want 3 fields stacked into the 1st column and another 3 fields stacked into a 2nd column, and then I need to group these 2 columns under a single value (Test) as shown. I have shared the sample output as well. Please let me know how to generate this with a search query or any other method.

Logs from a Windows server are being indexed in Splunk with a delay. I noticed the error below in splunkd.log on the Windows server:

ERROR [MethodServerMonitor] wt.manager.ServerTable - Dead MethodServer reported; reported exception

Any thoughts?

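A quick way to quantify the delay is to compare each event's index time with its event time; a minimal sketch, where the index and host filter are placeholders for your own values:

index=<your_index> host=<your_windows_host>
| eval lag_seconds=_indextime - _time
| stats avg(lag_seconds) AS avg_lag_seconds max(lag_seconds) AS max_lag_seconds BY host, sourcetype
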
Hi all, I would like to know how to write SPL that picks scenarios according to the following three rules, illustrated by the examples below.

(1) Pick a Scenario_IDx whose time tag is later than that of the previous Scenario_IDy (where x is bigger than y). Any Scenario_IDx whose time tag is earlier than its previous scenario can be ignored. For example, the Scenario_ID1 time tag should be bigger than Scenario_Start (in Ex. 1: Scenario_ID1: 103 > Scenario_Start: 101), and the Scenario_ID2 time tag should be bigger than both Scenario_ID1 and Scenario_Start (in Ex. 1: Scenario_ID2: 104 > Scenario_Start: 101 and Scenario_ID2: 104 > Scenario_ID1: 103).

(2) If the same scenario appears multiple times with a time tag later than the previous scenario's, pick the one with the earliest time tag. For example, in Ex. 2 below, for Scenario_ID3 pick only Scenario_ID3: 204 out of:
Scenario_Start: 201, Scenario_ID1: 202, Scenario_ID2: 203, Scenario_ID3: 204, Scenario_ID3: 205

(3) If, for a Scenario_IDy, there is no Scenario_IDx later than the Scenario_IDy time tag (x > y), then nothing needs to be listed for Scenario_IDx. For example, in Ex. 3 below, every time tag of Scenario_ID5 is earlier than that of Scenario_ID1, so Scenario_ID5 does not appear in the "Expected sequence".

Below are the sample original scenario sequences, the corresponding information sequences, and the expected scenario and information sequences. Both sequences are multi-value fields. Does anyone have a suggestion for SPL to compose the "Expected sequence" and "Expected information sequence" output?

Example 1
Original sequence (in time tag): Scenario_Start: 101, Scenario_ID1: 103, Scenario_ID1: 105, Scenario_ID2: 102, Scenario_ID2: 104
Original information sequence: Scenario_Start_info:AAA, Scenario_ID1_info:BBB, Scenario_ID1_info:CCC, Scenario_ID2_info:DDD, Scenario_ID2_info:EEE
Expected sequence: Scenario_Start: 101, Scenario_ID1: 103, Scenario_ID2: 104
Expected information sequence: Scenario_Start_info:AAA, Scenario_ID1_info:BBB, Scenario_ID2_info:EEE

Example 2
Original sequence (in time tag): Scenario_Start: 201, Scenario_ID1: 202, Scenario_ID2: 203, Scenario_ID3: 204, Scenario_ID3: 205
Original information sequence: Scenario_Start_info:AAA, Scenario_ID1_info:BBB, Scenario_ID2_info:CCC, Scenario_ID3_info:DDD, Scenario_ID3_info:EEE
Expected sequence: Scenario_Start: 201, Scenario_ID1: 202, Scenario_ID2: 203, Scenario_ID3: 204
Expected information sequence: Scenario_Start_info:AAA, Scenario_ID1_info:BBB, Scenario_ID2_info:CCC, Scenario_ID3_info:DDD

Example 3
Original sequence (in time tag): Scenario_Start: 301, Scenario_ID1: 305, Scenario_ID5: 302, Scenario_ID5: 303, Scenario_ID5: 304
Original information sequence: Scenario_Start_info:AAA, Scenario_ID1_info:BBB, Scenario_ID5_info:CCC, Scenario_ID5_info:DDD, Scenario_ID5_info:EEE
Expected sequence: Scenario_Start: 301, Scenario_ID1: 305
Expected information sequence: Scenario_Start_info:AAA, Scenario_ID1_info:BBB

Thank you so much.

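As a starting point (not the full selection logic), the two multi-value fields can be paired up and expanded into one row per scenario entry so the time tags become comparable; a sketch, assuming the fields are named original_sequence and original_info (rename to match the actual data):

| eval pair=mvzip(original_sequence, original_info, "|")
| mvexpand pair
| rex field=pair "^(?<scenario>[^:]+):\s*(?<time_tag>\d+)\|(?<info>.+)$"
| eval time_tag=tonumber(time_tag)

The remaining step, keeping only the earliest occurrence of each Scenario_ID that is later than the previously kept one, still has to be built on top of this, for example with sort and streamstats.
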
Hello Splunkers! Does anyone know about the async_saved_search_fetch setting? The Splunk documentation says not to change it, but I would like to know what it does.

async_saved_search_fetch = <boolean>
Enables a separate thread that will fetch scheduled or auto-summarized saved searches asynchronously.
Do not change this setting unless instructed to do so by Splunk support.
Default: true

How should I fix this issue?

Thank you in advance.

I've created a new index in Splunk Cloud and am trying to ingest log files from one of our application servers. This application server is set up as a deployment client (with a Universal Forwarder). I've completed the following steps:

* Created the new index on Splunk Cloud
* Created a new server class on the deployment server which points to the application server. The application server is 'phoning home' to the deployment server.

I've got to the point where I need to create a deployment app. I believe at this stage with Splunk Enterprise you would create the data inputs: select 'Add data -> Forward -> Select Server Class', choose the existing server class created previously so that the application server is shown in the 'List of Forwarders' box, then specify the log file path in the Files and Directories settings, the sourcetype, and finally the name of the destination index. But this is where I got stuck, because my new index isn't in the list; presumably the deployment server can't talk to Splunk Cloud to pull down a list of indexes? So I naturally went onto Splunk Cloud to add a data input, but I can only choose from 'Local inputs' as 'Forwarded inputs' is empty.

I'm aware the usual approach to creating a deployment app is to create an app folder within $SPLUNK_HOME/etc/deployment-apps and create an inputs.conf file with the monitor stanza referencing the source data and destination index. But how do I reference an index that lives in Splunk Cloud? I can't simply type in 'my-server.splunkcloud.com/en-GB/indexes/my-index'.

Please can someone point me to the official documentation that explains how to configure the deployment client to send log data to a Splunk Cloud index?

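For context, in a deployment app the index is referenced purely by name in inputs.conf; the forwarder sends the data to whatever indexers its outputs.conf points at (for Splunk Cloud, typically the ones configured by the universal forwarder credentials app downloaded from Splunk Cloud), and the named index simply has to exist on the Cloud side. A minimal sketch, with the app name, monitored path, sourcetype and index name as placeholders:

# $SPLUNK_HOME/etc/deployment-apps/<my_app_inputs>/local/inputs.conf
[monitor:///var/log/myapp/app.log]
index = <my_index>
sourcetype = <my_sourcetype>
disabled = false
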
Hello, we are attempting to link to another dashboard within the same app in Dashboard Studio; however, the app does not show in the dropdown. I checked all the settings within the app but was unable to find an option to share, show, or otherwise expose it. Any help would be appreciated.

Hi, I have logs like these, with different types of data in the same sourcetype:

"<134>Nov 23 21:23:17 NSX-edge-7-0 loadbalancer[2196]: [default]: 154545"

"<4>Nov 23 21:06:47 NSX-edge-7-0 firewall[]: [default]: ACCEPT"

How can I extract the value after "[default]: " without extracting null values? For example, if in the first event I create a field called FIELDA=154545, I don't want the value from the second event to land in that same field as "ACCEPT"; instead I need to create a second field called FIELDB=ACCEPT. I hope I have made myself clear.

Regards,

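One way to keep the two values apart is to anchor each extraction on the process name that precedes "[default]: ", so each regex only fires on its own event type; a minimal search-time sketch (FIELDA/FIELDB as in the question, the rest assumed from the two sample events):

| rex "loadbalancer\[\d*\]: \[default\]: (?<FIELDA>\S+)"
| rex "firewall\[\]: \[default\]: (?<FIELDB>\S+)"

The same patterns could also be made permanent as EXTRACT- entries in props.conf.
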
We are receiving syslog data via UDP and we noticed that some data is missing. When running

tcpdump -i eth0 port <udp port>

I see lines such as

UDP, bad length 5158 > 1472

and that data is not being ingested.

https://networkengineering.stackexchange.com/questions/74563/tcpdump-output-with-bad-length-indicator-present says: "The 1472 is the maximum payload length for the UDP datagram."

Any ideas how to deal with it?

Is there a way to achieve this? I have a lookup table with 2 columns, alert_type and short_description:

alert_type | short_description
cpu        | "The Host".host."cpu utilization is high".cpu_perc."%"
mem        | "The memory in the host ".host."is high with a percentage of ".mem_perc."%"

When the alert type matches, the lookup should return short_description, and the field references inside short_description should be replaced with the actual field values (host, cpu_perc and mem_perc).

Example: The Host abcd.com cpu utilization is high 90 % instead of the literal string "The Host".host."cpu utilization is high".cpu_perc."%"

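A lookup cannot evaluate that concatenation syntax on its own, but one common workaround is to store plain placeholder tokens in the lookup (for example $host$, $cpu_perc$ and $mem_perc$ inside short_description) and substitute them after the lookup with replace(); a sketch, with alert_descriptions as a placeholder lookup name:

| lookup alert_descriptions alert_type OUTPUT short_description
| eval short_description=replace(short_description, "[$]host[$]", coalesce(host, ""))
| eval short_description=replace(short_description, "[$]cpu_perc[$]", coalesce(tostring(cpu_perc), ""))
| eval short_description=replace(short_description, "[$]mem_perc[$]", coalesce(tostring(mem_perc), ""))

The coalesce() calls keep the description from turning null when one of the fields (for example mem_perc on a cpu alert) is missing from the event.
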
Hello, I have questions about my Fire Brigade installation, but I noticed the last questions on Fire Brigade are from 2016, and it shows as not supported on Splunkbase. Is Fire Brigade dead? If so, what replaced it? --jason

Below is the value of a field. What I would like to do is apply a regex that outputs node# + temperature, for example:

Node0_temperature=26 degrees C / 78 degrees F
Node1_temperature=29 degrees C / 84 degrees F

Thanks.

node0:
--------------------------------------------------------------------------
Routing Engine status:
Slot 0:
Current state Master
Election priority Master (default)
Temperature 26 degrees C / 78 degrees F
CPU temperature 41 degrees C / 105 degrees F
DRAM 98254 MB (98304 MB installed)
Memory utilization 4 percent
5 sec CPU utilization:
User 0 percent
Background 0 percent
Kernel 4 percent
Interrupt 1 percent
Idle 95 percent

node1:
--------------------------------------------------------------------------
Routing Engine status:
Slot 0:
Current state Master
Election priority Master (default)
Temperature 29 degrees C / 84 degrees F
CPU temperature 41 degrees C / 105 degrees F
DRAM 98254 MB (98304 MB installed)
Memory utilization 4 percent
5 sec CPU utilization:
User 0 percent
Background 0 percent
Kernel 2 percent
Interrupt 0 percent
Idle 98 percent

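One possible search-time approach (a sketch, assuming the multi-node text lives in a field called status_output; adjust the field name to match): rex with max_match=0 captures every node and its chassis temperature as multi-value fields, and mvzip stitches them into the requested node#_temperature form:

| rex field=status_output max_match=0 "(?<node>node\d+):[\s\S]*?Temperature\s+(?<temperature>\d+ degrees C / \d+ degrees F)"
| eval node_temperature=mvzip(node, temperature, "_temperature=")
| mvexpand node_temperature

This relies on the routing-engine "Temperature" line (capital T) appearing before the "CPU temperature" line inside each node block, so the lazy match pairs each node with its own temperature.
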
Is it possible to create a TableView without a search manager, passing the data in via a JavaScript object instead? Object example:

let x = [{'col1':'val1'},{'col1':'val2'},{'col1':'val3'},{'col1':'val4'}]

The table should then be able to render itself from that object.

Hi, I have a local minikube Kubernetes cluster set up, and I want to set up the Splunk App for Data Science and Deep Learning so that it can interact with this local cluster. On the setup page, I provide the information in the input fields as shown in the screenshot below. For the Cluster CA, Cluster Certificate and Client Key I am using the contents of the files in ~/.minikube/certs. When I click the "Test & Save" button, I receive the following error message:

Exception: Could not connect to Kubernetes. HTTPConnectionPool(host='10.96.143.124', port=80): Max retries exceeded with url: //version/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f391087acd0>: Failed to establish a new connection: [Errno 110] Connection timed out'))

I know this means Splunk is having trouble connecting to my local cluster, but I feel I have reached a dead end and I am not sure how to fix the issue. Any help would be greatly appreciated!

PS: Here is a screenshot of the error message:

Hello, we have installed the add-on "Monitoring of Java Virtual Machines with JMX" at the forwarder level. It was forwarding data correctly when the forwarder was on version 8.1.3; however, after upgrading the forwarder to 9.0.1 it stopped working. We updated the /etc/hosts file with "127.0.0.1 <hostname>" and restarted Splunk, and the add-on was sending data after that. Once the app server was restarted, it stopped sending data again and we got the error below:

systemErrorMessage="Failed to retrieve RMIServer stub: javax.naming.NameNotFoundException: jmxrmi"

If I comment out the line added in /etc/hosts, I get the following errors instead:

2022-11-15 21:55:57 ERROR Logger=ModularInput Probing socket connection to SplunkD failed.Either SplunkD has exited ,or if not, check that your DNS configuration is resolving your system's hostname (<hostname>) correctly : Connection refused (Connection refused)
2022-11-15 21:55:57 ERROR Logger=ModularInput Determined that Splunk has probably exited, HARI KARI.

server.xml has the Listener below:

<Listener accessFile="${catalina.base}/conf/jmxremote.access" address="${base.jmx.bind}" authenticate="true" className="com.springsource.tcserver.serviceability.rmi.JmxSocketListener" passwordFile="${catalina.base}/conf/jmxremote.password" port="${base.jmx.port}" useSSL="false"/>

The bind has the parameters below in catalina.properties:

base.jmx.port=<port>
base.jmx.bind=<IP>

config.xml:

<jmxserver host="<IP>" jvmDescription="<hostname>" jmxport="<port>" jmxuser="admin" jmxpass="<jmx password>">

@Damien_Dallimor @PickleRick @gcusello @isoutamo

Hi, is there any way to dynamically fill in the part in red? Assuming the alert is running from the Searched. The idea is that if you re-install on a new Splunk install, you don't want to have to find and replace all the

We need to collect VMware Carbon Black Cloud events into Splunk (Cloud). We use this app https://splunkbase.splunk.com/app/5332 on a heavy forwarder to configure the inputs. If we have a distributed environment, is this app (5332) also needed on the indexers? The release notes mention this app https://splunkbase.splunk.com/app/5334 for the indexers, but its own details point back to the 5332 app. So, could someone please tell me which one is needed where? Thank you,

Hi all, we have noticed on our EDR some noise coming from the script "C:\Program Files\Splunk\bin\runScript.py", which seems to be starting a number of btool processes. Could someone tell me what this script is used for and why this is happening? I have tried googling for more information but had no luck. Appreciate it!
